Getting started with Kubernetes on Minikube
How to set up and run applications locally on a Minikube cluster?
Kubernetes is a container orchestrator that helps manage and scale applications running in containers. Setting up a production-grade Kubernetes cluster is a hassle because you have to provision the different kinds of nodes that manage a Kubernetes workload. In a Kubernetes cluster, there are two kinds of nodes:
- Master Nodes (control plane): These nodes manage and control how applications are "orchestrated". Put simply, master nodes keep track of the current application state, check whether the current state of a deployment matches the desired state, and so on. To sum it up, this is the brain of the Kubernetes cluster.
- Worker Nodes: These are the nodes where containers are provisioned and run. They are the nodes that actually handle the workload.
Note: There are other components in a Kubernetes cluster, like the Controller Manager, etcd, etc., but those are beyond the scope of this post.
For starters, the above information is just enough to get some hands-on experience with Kubernetes. While we could set up our own VMs as control-plane and worker nodes, it is too much hassle to install all the components required to run a Kubernetes cluster.
Therefore, we will use Minikube, which comes with all the bells and whistles required to run a Kubernetes cluster with just a few commands. Let us get started with some hands-on work now. Fire up the terminal of your choice and follow along.
Assumption: You must have Docker Desktop installed on your machine to follow along. Check if Docker is installed on your system by running the following command in your terminal:
docker -v
If you see something like this, then you are good to go 🙂
~ ❯ docker -v
Docker version 20.10.12, build e91ed57
Installing Minikube
For macOS,
brew install minikube
For Windows,
choco install minikube
For Linux,
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
If you do not have brew installed on macOS or chocolatey installed on Windows, you can alternatively run:
For macOS,
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
For Windows, follow the instructions on Minikube for Windows.
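Whichever route you took, you can verify the installation by asking Minikube for its version (the exact version string will depend on the release you installed):

minikube version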
Start Minikube cluster
In your terminal, run the following command:
minikube start --vm-driver=docker
You should now see Minikube spinning up a Kubernetes cluster for you. If you get output like this, you are good to go.
~ ❯ minikube start --vm-driver=docker
minikube v1.25.2 on Darwin 12.3.1
Kubernetes 1.23.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.3
Using the docker driver based on existing profile
Starting control plane node minikube in cluster minikube
Pulling base image ...
Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
  ▪ kubelet.housekeeping-interval=5m
Verifying Kubernetes components...
  ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
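Before moving on, you can also sanity-check the cluster with minikube status; on a healthy cluster it should report the host, kubelet and apiserver as Running and the kubeconfig as Configured.

minikube status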
You can now view the nodes created by Minikube by running the kubectl get nodes command.
~ ❯ kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   23d   v1.23.1
As you can see, you now have a single-node Minikube cluster running on your local machine. There is something interesting in the output: there is only one node, and it is the master (control-plane) node.
Now, you might be wondering where the worker node I mentioned at the beginning of this post is. Well, Minikube creates a single-node cluster by default, meaning that a single node acts as both master and worker node.
This is fine since we are using the cluster for local development. An actual production-grade setup has multiple master and multiple worker nodes for high availability and fault tolerance.
Note: Minikube also supports a multi-node setup, but that is a discussion for another post. We will work with a single-node cluster for this tutorial.
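If you want to poke at the node a bit more, the usual kubectl inspection commands work against Minikube's single node as well:

# Show extra detail such as the node's internal IP, OS image and container runtime
kubectl get nodes -o wide

# Dump everything Kubernetes knows about the node (capacity, conditions, running pods, ...)
kubectl describe node minikube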
Now that we have the setup ready, we can run our applications on it.
For the purpose of this tutorial, we will use an echo-server image by Google Cloud which, when sent an HTTP request, echoes back details about the HTTP request and the client that sent it.
Key Concepts
Before getting into the deployment, let us understand three basic concepts in Kubernetes.
- Pods: The smallest compute unit in a Kubernetes setup. Pods are where containers run (where the application code runs). Although a pod can run multiple containers, we usually see pods running one or two containers. For horizontal scaling, Kubernetes spawns multiple pods so that the underlying application can scale.
- Deployment: This is how we define a pod configuration. A pod configuration consists of what image to run, the number of pod replicas to keep running, resource configuration, etc. In short, a deployment is a way to run pods (and the containers in them) at scale.
- Service: A service exposes the ports of the containers that we want to send network requests to. Services are tied to pods, making them the means to communicate with our application: no matter how many pods are running in the background, we connect to the service, and the service in turn routes our request to one of the pods. A Kubernetes service therefore also acts as a load balancer for the pods.
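If you want to explore these three object types from the command line before deploying anything, kubectl explain prints the API documentation for each resource:

kubectl explain pods
kubectl explain deployments
kubectl explain services

# You can also drill into individual fields, for example a pod's container spec
kubectl explain pods.spec.containers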
Now that we have enough background, let us deploy our first pod in the Kubernetes cluster on Minikube.
Your First Deployment
Run the following command in your terminal
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
We have now created a deployment with a bare-minimum specification: a name for the deployment and the container image to run. Here hello-minikube is the name of the deployment, and the image is specified using the --image argument.
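You can confirm that the deployment object itself was created; you should see hello-minikube listed with 1/1 ready once the image has been pulled.

kubectl get deployments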
Now, let us verify that the pod is actually running. To see the running pods, run the following command: kubectl get pods
~ ❯ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
hello-minikube-7bc9d7884c-cb2fs   1/1     Running   0          78m
The pod is in Running status. The 1/1 under READY means that 1 container in the pod is ready out of the 1 it should run (the deployment itself is running 1 replica, the default).
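If you are curious about what the pod is doing, you can describe it or tail its logs. The pod name carries a random suffix, so it is easier to select it by the app=hello-minikube label that kubectl create deployment added for us:

# Show the pod's events, image and resource details
kubectl describe pods -l app=hello-minikube

# Stream the container logs
kubectl logs -l app=hello-minikube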
Exposing through a Service
Now that our echo server is running, how can we access it? If you try to access localhost:8080 now, you will see an "Unable to connect" error.
As I already mentioned, to access and communicate with pods, we need to expose them with a service. Let us create a service and expose port 8080 of the hello-minikube deployment. To create the service, run the following command:
kubectl expose deployment hello-minikube --type=NodePort --port=8080
This command tells Kubernetes to expose port 8080 of the pods of the hello-minikube deployment through a service of type NodePort. Kubernetes has different types of services, namely NodePort, ClusterIP, LoadBalancer, etc.
We will dive deeper into the types of services in another post; for now, just know that a NodePort service lets us direct external traffic (an HTTP request from the host machine) into the Kubernetes cluster.
Now let us run the kubectl get service command to see the details of the service that was created.
~ ❯ kubectl get service
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort    10.103.34.250   <none>        8080:32262/TCP   78m
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP          21d
From the output, we see that a NodePort service named hello-minikube has been created; the 8080:32262/TCP entry means service port 8080 is mapped to port 32262 on the node. So now let us hit localhost:8080. Uh oh! We still cannot reach the echo-server.
Port Forwarding
There is still one missing piece of the puzzle before we can access the echo-server. So let us understand why this is not working even after exposing the deployment through the hello-minikube service.
The Minikube setup basically runs a VM inside which the application pods run. We are trying to reach, from our host machine, a port that is exposed on the Minikube VM. That is why the communication is not happening: the network does not know how to route traffic from the host machine to the Minikube VM.
To enable routing of traffic from the host machine, run kubectl port-forward service/hello-minikube 8080:8080
~ ❯ kubectl port-forward service/hello-minikube 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Now, let us try to access our echo-server at localhost:8080. Voila!! We can reach the echo-server, and it gives us the following output, showing the details of the request we sent to it.
CLIENT VALUES:
client_address=127.0.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://localhost:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
accept-encoding=gzip, deflate, br
accept-language=en-US,en;q=0.5
connection=keep-alive
host=localhost:8080
sec-fetch-dest=document
sec-fetch-mode=navigate
sec-fetch-site=none
sec-fetch-user=?1
upgrade-insecure-requests=1
user-agent=Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
BODY:
-no body in request-
The kubectl port-forward service/hello-minikube 8080:8080 command forwards requests from port 8080 on our host machine to port 8080 of the service, and from there to the pod.
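You do not have to use a browser either; a plain curl against the forwarded port works just as well. As an aside, Minikube also ships a small helper for NodePort services: minikube service can print (and, on drivers like docker, tunnel) a URL that reaches the service without a manual port-forward.

# Hit the echo-server from the terminal
curl localhost:8080

# Alternative to port-forward: let Minikube expose the NodePort service
# (with the docker driver this keeps a tunnel open while it runs)
minikube service hello-minikube --url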
Conclusion
So to wrap up, we have successfully:
- Set up a Kubernetes cluster
- Run pods and a deployment
- Exposed the deployment using a Kubernetes service
- Port-forwarded traffic from the host machine to the Minikube VM
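If you want to clean up after yourself, the following should remove everything this tutorial created (assuming you used the same names):

# Remove the service and the deployment
kubectl delete service hello-minikube
kubectl delete deployment hello-minikube

# Stop the local cluster; use `minikube delete` to remove it entirely
minikube stop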
I hope you are now a little more comfortable starting your own Kubernetes journey. Feel free to share your journey with me!!
If you liked what you read, please share and leave a like!