May 18, 2021
Kubernetes is an open-source container orchestration tool that helps manage containerized application services made up of multiple containers: it schedules those containers across a cluster, scales them, and manages their performance over time.
Consider microservices, which typically each run in their own containers: a containerized application can grow to hundreds or thousands of containers when building and operating a large-scale system. Managing these containers manually would involve significant complexity, and this is where container orchestration comes in handy, making that operational complexity manageable for development and operations teams. Container orchestration technologies like Kubernetes automatically and continuously monitor the cluster of containers and make adjustments as required, ensuring that there is no downtime in a production environment. For instance, if a container goes down, another container automatically takes its place without the end user ever noticing.
Docker refers to a specific platform for building containerized applications, while Kubernetes is a container orchestration tool that helps manage the lifecycle of those containers. Docker Swarm is Docker’s own container orchestration tool, used to manage multiple containers deployed across multiple host machines.
Like other distributed computing platforms, a Kubernetes cluster has at least one master node and multiple compute (worker) nodes. The master node exposes the Application Programming Interface (API), schedules deployments, and manages the overall cluster. Each worker node runs a container runtime, such as Docker, along with an agent that communicates with the master. The nodes can be either virtual machines (VMs) running in a cloud or physical servers running within a data center.
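As a quick illustration, once a cluster is up and kubectl is configured (which we will do below with Minikube), the master and worker nodes can be listed with the command below; the optional -o wide flag adds details such as each node’s internal IP and container runtime version:
kubectl get nodes -o wide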
The master node receives input from a CLI or UI via the API. This input consists of the commands a developer provides to Kubernetes: for example, which container images to run, which ports to expose, and other parameters of the desired state of the applications running in the cluster.
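As a rough illustration of such input, the following imperative kubectl commands describe a desired state for a hypothetical deployment named web running the public nginx image (both names are placeholders, not part of the walkthrough below): create the deployment, scale it to three replicas, and expose port 80.
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl expose deployment web --port=80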
Worker nodes are responsible for executing the work assigned by the master node and reporting the results back to it.
We can run Kubernetes on our local system instead of a cloud service. For this, we need to install two programs: Minikube and kubectl. Minikube allows us to run a single-node Kubernetes cluster on our local computer, and kubectl is the command-line tool used to interact with it.
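Installation steps vary by platform; as one example, on macOS with Homebrew already installed, both tools can be installed with the commands below (other platforms have their own installers, as described in the official documentation):
brew install minikube
brew install kubectl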
Minikube uses a hypervisor or container driver that varies by operating system. This page lists the drivers that can be used for setting up Minikube: https://minikube.sigs.k8s.io/docs/drivers/
We will be using Docker as the driver in this illustration. The driver acts as an abstraction layer that separates the virtual machine (or container) running the cluster from the system hardware.
To start the cluster, we can run the following command from the terminal with administrative access:
minikube start
If Minikube fails to start, you can choose another appropriate driver backed by a compatible container or virtual machine manager.
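For example, assuming VirtualBox is installed on the machine, Minikube can be pointed at it explicitly with the --driver flag:
minikube start --driver=virtualbox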
As shown in the output of minikube start, Minikube used the Docker driver in our case.
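We can also check the state of the cluster at any point with:
minikube status
This reports whether the host, the kubelet, and the API server are running.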
We can interact with the created Kubernetes cluster using kubectl by running the following command:
kubectl get po -A
This displays all the components of the Kubernetes cluster that have been created, such as etcd-minikube, kube-apiserver-minikube, kube-controller-manager-minikube, kube-scheduler-minikube, etc. These, as their names indicate, are the control-plane components that we discussed in the previous sections.
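To confirm that the control plane is reachable from kubectl, we can also run:
kubectl cluster-info
This prints the address of the Kubernetes control plane running inside the Minikube cluster.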
We can use the following command to create a deployment that manages a pod. The pod runs a container based on the Docker image provided:
kubectl create deployment hello-world --image=k8s.gcr.io/echoserver:1.4
We can view the deployment using the command below:
kubectl get deployments
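If the deployment is not ready yet, we can wait for it to finish rolling out, for example with:
kubectl rollout status deployment/hello-world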
To view the pods, we can run this command:
kubectl get pods
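To look at a pod in more detail, for instance its events or its container’s logs, commands such as the following can be used (the deployment name matches the one created above):
kubectl describe pods
kubectl logs deployment/hello-world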
We can expose the deployment on a port:
kubectl expose deployment hello-world --type=NodePort --port=8080
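To see which NodePort was assigned to the service, we can run:
kubectl get services hello-world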
To access the deployment, we can run the following command:
kubectl port-forward service/hello-world 7080:8080
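With the port-forward running, the application should respond on the local port; for example:
curl http://localhost:7080
Alternatively, since the service is of type NodePort, Minikube can print a directly reachable URL for it with minikube service hello-world --url.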
Orchestration is the automated configuration, management, and coordination of computer systems, applications, and services.