When a web application is deployed on Kubernetes, some setup is needed before external users can reach resources inside the cluster. In this post, we will walk through different ways to access an application from outside.
Before exploring those options, let's set up a simple nginx web application that just serves the nginx welcome page.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16.1
        ports:
        - containerPort: 80
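To create the deployment, the manifest above can be applied with kubectl. A minimal sketch (assumes the file is saved as deployment.yaml and kubectl is pointed at a running cluster):

```shell
# Create the deployment and confirm both replicas come up
kubectl apply -f deployment.yaml
kubectl get pods -l app=nginx
```

The label selector in the second command matches the app: nginx label set in the pod template.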
Port forward
If you are doing local testing and have deployed an nginx web application on a Kubernetes cluster, you can use port forwarding to check whether the application is up and running. Port forwarding forwards requests from a local port to a specific port on a pod.
Let's assume the following pods were created for the nginx web application.
NAME READY STATUS RESTARTS AGE
nginx-deployment-559d658b74-rt7l5 1/1 Running 1 20h
nginx-deployment-559d658b74-wcrfp 1/1 Running 1 20h
Now, to forward requests to pod nginx-deployment-559d658b74-rt7l5 on port 80 (the port the application listens on), run:
k8s@k8s-VirtualBox:~$ kubectl port-forward nginx-deployment-559d658b74-rt7l5 8088:80
Forwarding from 127.0.0.1:8088 -> 80
Forwarding from [::1]:8088 -> 80
This command forwards all requests received at port 8088 on localhost to port 80 of the specified Kubernetes pod. If you now browse to http://localhost:8088, you will see the welcome message.
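Instead of a browser, you can also verify the forwarded port from the command line. A quick check, assuming the port-forward command above is still running in another terminal and curl is installed:

```shell
# The default nginx welcome page contains the phrase "Welcome to nginx"
curl -s http://localhost:8088 | grep -i "welcome to nginx"
```

If the pod is healthy, grep prints the matching line; an empty result means the forward or the pod is not working.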
Service
Normally a deployment contains multiple pods running the same application; in our demo there are two pods in the same nginx deployment. When you just want to reach the application and, for load-balancing purposes, don't care which pod actually handles the request, you can create a Service, which accepts requests and forwards them to a matching pod. A Service can have one of several types: ClusterIP, NodePort, and LoadBalancer.
ClusterIP
If you create a service of type ClusterIP, you can access it via the corresponding cluster IP address and port.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8089
    targetPort: 80
If no type is specified, the default is ClusterIP. From the YAML above, you can see that the service nginx-service listens on port 8089 and forwards requests to the pods on port 80.
k8s@k8s-VirtualBox:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service ClusterIP 10.109.53.35 <none> 8089/TCP 19h
k8s@k8s-VirtualBox:~$ kubectl describe svc nginx-service
Name: nginx-service
Namespace: basic
Labels: <none>
Annotations: <none>
Selector: app=nginx
Type: ClusterIP
IP Families: <none>
IP: 10.109.53.35
IPs: 10.109.53.35
Port: <unset> 8089/TCP
TargetPort: 80/TCP
Endpoints: 172.17.0.3:80,172.17.0.4:80
Session Affinity: None
Events: <none>
Now if you try to access the application at http://10.109.53.35:8089, you will find it does not work. The cluster IP is an internal address that is not reachable from outside the cluster. To access the application from the host, we can again use port forwarding, this time against the service.
k8s@k8s-VirtualBox:~$ kubectl port-forward service/nginx-service 8089:8089
Forwarding from 127.0.0.1:8089 -> 80
Forwarding from [::1]:8089 -> 80
The application is accessible with http://localhost:8089 now.
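Since the ClusterIP is reachable from inside the cluster, another way to check the service without port forwarding is to curl it from a throwaway pod. A sketch, assuming the curlimages/curl image is pullable and the command runs in the same namespace as the service (Kubernetes DNS then resolves the bare service name):

```shell
# Run a one-off pod, curl the service by name, and clean it up afterwards
kubectl run tmp-curl --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://nginx-service:8089
```

This is often handy for debugging whether a problem lies in the service itself or in the path from outside the cluster.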
NodePort
The second service type is NodePort, which exposes the application on a port of each node so it can be accessed via a node's IP address.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30001
The node port here is 30001. Note that the nodePort must fall within the cluster's node port range (30000-32767 by default); otherwise an error is thrown when creating the service.
k8s@k8s-VirtualBox:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service NodePort 10.101.154.94 <none> 80:30001/TCP 99s
To find the node IP, run the command below:
k8s@k8s-VirtualBox:~$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane,master 23h v1.20.2 192.168.49.2 <none> Ubuntu 20.04.2 LTS 5.8.0-53-generic docker://20.10.6
The INTERNAL-IP field is the node IP. Now you can access the application at http://192.168.49.2:30001.
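On minikube specifically, there is also a shortcut that looks up the node IP and node port for you. A sketch, assuming the minikube CLI is installed and the service exists:

```shell
# Prints a reachable URL for the NodePort service, e.g. http://192.168.49.2:30001
minikube service nginx-service --url
```

This is especially useful with drivers where the node IP is not directly routable from the host.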
LoadBalancer
The LoadBalancer type requires a cloud provider that can provision an external load balancer, such as AWS (with ELB) or Azure. We will not demonstrate it here, but the general idea is similar: the cloud provider assigns an external IP that routes traffic to the service.
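For reference, the manifest differs from the NodePort one only in its type. A sketch of what it might look like (on a real cloud provider, the controller would provision the load balancer and populate the service's EXTERNAL-IP field):

```yaml
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```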
Ingress
An Ingress is a Kubernetes resource that, together with an ingress controller, acts as a reverse proxy: traffic hitting the ingress controller is forwarded to internal services and applications, much like an nginx reverse proxy.
If you don't have an ingress controller installed and are using minikube, you can follow the simple guideline here to play around with it. If you are using another Kubernetes setup, follow the introduction from the Kubernetes documentation.
Once an ingress controller is installed, you can create an ingress resource and apply it.
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nginx.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
With this, the application can be accessed using the host nginx.info.
k8s@k8s-VirtualBox:~$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress <none> nginx.info 192.168.49.2 80 20h
This host needs to be added to /etc/hosts with the entry below:
192.168.49.2 nginx.info
Now you can access the application at http://nginx.info.
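If you prefer not to edit /etc/hosts, you can also exercise the host-based routing rule by sending the Host header yourself. A sketch, assuming the ingress address from the kubectl get ingress output above:

```shell
# The ingress controller routes on the Host header, so this hits the
# nginx.info rule without any /etc/hosts change
curl -s -H "Host: nginx.info" http://192.168.49.2/
```

This is a convenient way to verify the ingress rule before touching name resolution on your machine.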