Kubernetes (also written k8s) is a powerful container orchestration platform that works with Docker. This first video provides a high-level explanation of how kubernetes differs from traditional application deployment and infrastructure management.
A kubernetes cluster is made up of masters and nodes. The masters are responsible for orchestration and the nodes host the orchestrated containers. In addition to orchestrating containers, it is helpful to have a gateway to route traffic through the cluster and a persistent storage mechanism. While these last two components aren’t strictly part of kubernetes, I consider them essential to a cluster. They will also be somewhat proprietary, depending on where you deploy your cluster (e.g. AWS, GKE, etc.).
Most interactions with a kubernetes cluster will be through the CLI, kubectl. This client can be installed locally on some systems and is provided through predefined consoles on others. I prefer to use kubectl in a container, which is what I demonstrate in this video.
Assume a kubernetes cluster exposes its API Server on http://192.168.13.180:8080.
The following command would create a new container running kubectl and execute the get nodes command against that cluster.
docker run --rm lachlanevenson/k8s-kubectl --server=http://192.168.13.180:8080 get nodes
It is often convenient to cache the configuration data in order to more easily access one or more clusters. This is accomplished using a configuration file as shown below, written to ~/.kube/config (the default location where kubectl looks for its configuration).
current-context: k8sdev
apiVersion: v1
clusters:
- cluster:
    api-version: v1
    server: http://192.168.13.180:8080
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    namespace: default
  name: k8sdev
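One simple way to put this file in place on the docker host is a shell heredoc. This is a sketch that assumes ~/.kube/config as the destination, which matches the volume mount used in the docker command below; adjust the server address and names for your own cluster.

```shell
# Write the example kubectl config to the default location (~/.kube/config).
# The server address and context names match the example above.
mkdir -p ~/.kube
cat > ~/.kube/config <<'EOF'
current-context: k8sdev
apiVersion: v1
clusters:
- cluster:
    api-version: v1
    server: http://192.168.13.180:8080
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    namespace: default
  name: k8sdev
EOF
```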
With the above file written somewhere on the host where you run docker, the command below will create an interactive container that can be used to run many commands.
docker run -ti --rm --entrypoint /bin/sh -v ~/.kube/:/root/.kube -v $(pwd):/kubeyaml lachlanevenson/k8s-kubectl
-ti makes the container interactive so you can operate against a shell
--rm removes the container when it exits
--entrypoint /bin/sh changes the predefined entry point to run a shell rather than kubectl and exit
-v ~/.kube/:/root/.kube makes the config file available to the kubectl client. You should replace ~/.kube/ with the path on your computer where you have written the config file.
-v $(pwd):/kubeyaml mounts the current working directory to /kubeyaml in the container and presumes you have YAML files in the current directory
kubectl run and expose
Kubernetes provides some shortcuts that make it fast and easy to deploy and expose applications. This video demonstrates kubectl run and kubectl expose, which create a new Deployment and Service respectively.
The following two commands will create a new Deployment, which in turn will create a ReplicaSet, Pod(s) and then a Service. The value you choose for YOURPORT will depend on the gateway technology you use and what security rules you have applied to the gateway. The following commands assume you are at a prompt with the kubectl client and it is configured for the target cluster as shown above.
kubectl run daniel-nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment daniel-nginx --port=YOURPORT --target-port=80 --type=LoadBalancer
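For comparison, the two commands above correspond roughly to YAML definitions like the following. This is a sketch using the apps/v1 API: the names, image and replica count match the commands, while the Service port is shown as 80 where you would substitute YOURPORT.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: daniel-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: daniel-nginx
  template:
    metadata:
      labels:
        app: daniel-nginx
    spec:
      containers:
      - name: daniel-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: daniel-nginx
spec:
  type: LoadBalancer
  selector:
    app: daniel-nginx
  ports:
  - port: 80        # substitute YOURPORT here
    targetPort: 80
```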
After creating the Deployment and Service using the above commands, the following commands illustrate how to interact with that new deployment, its related pods and the service:
kubectl describe svc daniel-nginx
kubectl get pod
kubectl logs daniel-nginx-714585941-krw4z
The last command above illustrates how easy it is to view logs on running pods. This video shows the steps to identify the desired pod and view its logs. The command kubectl logs functions like Linux tail and even supports a follow mode with the -f flag.
Execute commands inside a running container
Sometimes it is necessary to run commands or access a running container in a kubernetes pod. Two examples where this can be helpful are troubleshooting and designing service discovery. This video demonstrates how to enter a running container to execute arbitrary commands.
kubectl exec daniel-nginx-714585941-krw4z -- ls
Scale a Deployment
Deployments are easy to scale using the kubectl client. This video demonstrates how to scale an existing deployment up and down.
kubectl scale --replicas=6 deployment/daniel-nginx
Understanding YAML descriptions
Every kubernetes resource has an internal definition. These can be represented as either JSON or YAML. In the video below I explain the YAML definitions for some common resources and demonstrate how to view the YAML description for existing resources. I also show how to create new resources from a YAML definition.
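As a minimal illustration (a sketch, not taken from the video), a Pod definition carries the same apiVersion/kind/metadata/spec structure you will see for every resource; the name and labels here are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-nginx      # hypothetical name for illustration
  labels:
    app: example-nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

You can view the same style of description for an existing resource with kubectl get pod <name> -o yaml, or create a new resource from a file like this with kubectl create -f <file>.yaml.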
Conclusion and next steps
In this brief bootcamp you have learned how kubernetes works and begun to interact with some of the most basic parts of kubernetes through the kubectl client.
You may want to try running some of the commands above in a kubernetes playground: https://www.katacoda.com/courses/kubernetes/playground
All the videos above are available in a YouTube playlist here: https://www.youtube.com/watch?v=cPAGKITejGk&list=PLeeFeZgciaDqFLx3jH_T6Wu9N5vKe2ekA