Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Posts tagged kubernetes


kubernetes overview

Kubernetes is getting a lot of attention recently, and there is good reason for that. Docker containers alone are little more than a developer convenience. Orchestration is what moves containers from the laptop into the datacenter, and Kubernetes does that in a way that simplifies both development and operations. Unfortunately, I struggled to find easy-to-understand, high-level descriptions of how kubernetes works, so I made the diagram below.

Operations

While I don’t show the operator specifically (usually someone in IT, or a managed offering like GKE), everything in the yellow box would be managed by the operator. The nodes host pods. The master nodes facilitate orchestration. The gateway facilitates incoming traffic. A bastion host is often used to manage members of the cluster, but isn’t shown above.

Persistent storage, in the form of block storage, databases, etc., is also not directly part of the cluster. There are kubernetes resources, like Persistent Volumes, that can be used to associate external persistent storage with pods, but the management of the storage solution is outside the scope of managing kubernetes.
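As a hedged sketch of how that association works (the NFS server address and export path below are assumptions), a Persistent Volume pointing at externally managed storage can be claimed by a pod through a Persistent Volume Claim:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: app-data
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.50        # externally managed storage; hypothetical address
        path: /exports/app-data
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data-claim
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi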

Development

The Developer performs two primary activities: create docker images and tell kubernetes to deploy them. Kubernetes is not opinionated when it comes to container image design. This means a developer can choose to manage ports, volumes, libraries, runtimes, and so on in any way that suits them. Kubernetes also has no opinion on the container registry, as long as it is compliant with the docker registry v2 spec.

There are a few ways to deploy a container image to kubernetes. The diagram above shows two: one based on an application definition (usually in YAML format) and the other using shortcut commands. Both are typically triggered using the kubectl CLI. In either case, the developer gives kubernetes a desired state that includes details such as which container image to run, how many replicas to maintain and which ports to expose. Kubernetes then assumes the job of ensuring that the desired state is the actual state. When nodes in a cluster change, or containers fail, kubernetes acts in realtime to do what is necessary to get back to the desired state.
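As a minimal sketch of both paths (the image, names and API version are illustrative; older clusters used extensions/v1beta1 for Deployments), the desired state below asks for three replicas of an nginx image listening on port 80:

    # declarative path:  kubectl apply -f nginx-deployment.yaml
    # shortcut path (older kubectl):  kubectl run nginx --image=nginx:1.11 --replicas=3 --port=80
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.11
              ports:
                - containerPort: 80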

Consumer

The consumer doesn’t need to know anything about the kubernetes cluster, its members or even that kubernetes is being used. The gateway shown might be an nginx reverse proxy or HAProxy. The important point is that the gateway needs to be able to route to the pods, which are generally managed on a flannel or calico type network. It is possible to create redundant gateways and place them behind a load balancer.

Services are used to expose pods (and deployments). Usually a service of type LoadBalancer will trigger an automatic reconfiguration of the gateway to route traffic. Since the gateway is a single host, each port can be used only once. To get around this limitation, it is possible to use an ingress controller to provide name-based routing.
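A hedged sketch of both mechanisms follows; the hostname and service names are hypothetical, and the Ingress API group shown is the early extensions/v1beta1 version:

    # a Service of type LoadBalancer routes external traffic to matching pods
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: LoadBalancer
      selector:
        app: nginx
      ports:
        - port: 80
          targetPort: 80
    ---
    # name-based routing through an ingress controller frees up gateway ports
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx
    spec:
      rules:
        - host: app.example.com    # hypothetical hostname
          http:
            paths:
              - backend:
                  serviceName: nginx
                  servicePort: 80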

Conclusion

Kubernetes definitely has its share of complexity, but depending on your role it can be very approachable. Cluster installation is by far the most difficult part; after that, the learning curve is quite small.


Infrastructure as Code

One of the most significant enablers of IT and software automation has been the shift away from fixed infrastructure to flexible infrastructure. Virtualization, process isolation, resource sharing and other forms of flexible infrastructure have been in use for many decades in IT systems. It can be seen in early Unix systems, Java application servers and even in common tools such as Apache and IIS in the form of virtual hosts. If flexible infrastructure has been a part of technology practice for so long, why is it getting so much buzz now?

Infrastructure as Code

In the last decade, virtualization has become more accessible and transparent, in part due to text based abstractions that describe infrastructure systems. There are many such abstractions that span IaaS, PaaS, CaaS (containers) and other platforms, but I see four major categories of tools that have emerged.

  • Infrastructure Definition. This is closest to defining actual server, network and storage resources.
  • Runtime or system configuration. This operates on compute resources to overlay system libraries, policies, access control, etc.
  • Image definition. This produces an image or template of a system or application that can then be instantiated.
  • Application description. This is often a composite representation of infrastructure resources and relationships that together deliver a functional system.

Right tool for the right job

I have observed a trend among these toolsets to expand their scope beyond one of these categories to encompass all of them. For example, rather than use a chain of tools such as Packer to define an image, HEAT to define the infrastructure and Ansible to configure the resources and deploy the application, someone will try to use Ansible to do all three. Why is that bad?

A tool like HEAT is directly tied to the OpenStack charter. It endeavors to adhere to the native APIs as they evolve. The tool is accessible, reportable and integrated into the OpenStack environment where the managed resources are also visible. This can simplify troubleshooting and decrease development time. In my experience, a tool like Ansible generally lags behind in features and API support, and lacks the native interface integration. Some argue that using a tool like Ansible makes the automation more portable between cloud providers. Given the different interfaces and underlying APIs, I haven’t seen this actually work. There is always a frustrating translation when changing providers, and in many cases there is additional frustration due to idiosyncrasies of the tool, which could have been avoided by using more native interfaces.
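To make the contrast concrete, here is a minimal HEAT template sketch (the image and flavor names are assumptions about the target cloud); it defines only infrastructure, leaving system configuration to a tool like Ansible:

    heat_template_version: 2016-04-08
    description: A single application server, defined natively in OpenStack
    resources:
      app_server:
        type: OS::Nova::Server
        properties:
          image: ubuntu-16.04    # assumed to exist in the target cloud
          flavor: m1.small
          networks:
            - network: private   # assumed pre-existing network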

The point I’m driving at is that when a native, supported and integrated tool exists for a given stage of automation, it’s worth exploring, even if it represents another skill set for those who develop the automation. The insight gained can often lead to a more robust and appropriate implementation. In the end, a tool can call a combination of HEAT and Ansible as easily as just Ansible.

Containers vs. Platforms

Another lively discussion over the past few years revolves around where automation efforts should focus. AWS made popular the idea that automation at the IaaS layer was the way to go. A lot of companies have benefitted from that, but many more have found the learning curve too steep and the cost of fixed resources too high. Along came Heroku and promised to abstract away all the complexity of IaaS but still deliver all the benefits. The cost of that benefit came in either reduced flexibility or a steep learning curve to create new deployment contexts (called buildpacks). When Docker came along and provided a very easy way to produce a single function image that could be quickly instantiated, this spawned discussion related to how the container lifecycle should be orchestrated.

Containers moved the concept of image creation away from general purpose compute, which had been the focus of IaaS, and toward specialized compute, such as a single application executable. Start time and resource efficiency made containers more appealing than virtual servers, but questions about how to handle networking and storage remained. The docker best practice of single function containers drove up the number of instances when compared to more complex virtual servers that filled multiple roles and had longer life cycles. Orchestration became the key to reliable container based deployments.

The descriptive approaches that evolved to accommodate containers, such as kubernetes, provide more ease and speed than IaaS, while providing more transparency and control than PaaS. They make it possible to define an entire application deployment scenario, including images, networking, storage, configuration, routing, etc., in plain text and trust the Container as a Service (CaaS) platform to orchestrate it all.

Evolution

Up to this point, infrastructure as code has evolved from shell and bash scripts, to infrastructure definitions for IaaS tools, to configuration and image creation tools that describe what those environments should look like, to full application deployment descriptions. What remains to mature are configuration and secret management, along with regional distribution of compute locality for performance and edge data processing.


Kubernetes vs. Docker Datacenter

I found this article on serverwatch today: http://www.serverwatch.com/server-trends/why-kubernetes-is-all-conquering.html

It’s not technically deep, but it does highlight the groundswell of interest in and adoption of kubernetes. It’s also worth noting that Google Cloud and Azure will now both have a native, fully managed kubernetes offering. I haven’t found a fully managed Docker Datacenter offering, but I’m sure there is one. It would be interesting to compare the two from a public cloud offering perspective.

I’ve worked a lot with OpenStack for on-premises clouds. This naturally leads to the idea of using OpenStack as a platform for container orchestration platforms (yes, I just layered platforms). As of today, the process of standing up Docker Datacenter or kubernetes still needs to mature. Last month eBay mentioned that it created its own kubernetes deployment tool on top of OpenStack: http://www.zdnet.com/article/ebay-builds-its-own-tool-to-integrate-kubernetes-and-openstack/. While it does plan to open source the new tool, it’s not available today.

One OpenStack vendor, Mirantis, provides support for kubernetes through Murano as their preferred container solution: https://www.mirantis.com/solutions/container-technologies/. I’m not sure how reliable Murano is for long-term management of kubernetes. For organizations that have an OpenStack vendor, support like this could streamline the evaluation and adoption of containers in the enterprise.

I did find a number of demo, PoC, kick-the-tires examples of Docker Datacenter on OpenStack, but not much automation or production support. I still love the idea of using the Docker trusted registry. I know that kubernetes provides a private registry component (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/registry), but it’s not as sophisticated as Docker Trusted Registry in terms of signing, scanning, etc. However, this functionality is quickly making its way into kubernetes, with some functionality already available in alpha: https://github.com/kubernetes/kubernetes/issues/22888

On the whole, I’m more drawn to kubernetes from a holistic point of view, but Docker is effectively keying into some real enterprise concerns. Given the open source community and vendor investment in kubernetes, I expect the enterprise gap (like a trusted registry for kubernetes) will close this year.


High level view of Container Orchestration

Container orchestration is at the heart of a successful container architecture. Orchestration takes as input a definition of how a deployed application should look. This usually includes how many containers for a certain image are needed, volumes for persistent data, networking for communication between containers and awareness of various discovery mechanisms. Discovery may include such things as identifying other containers which are also participating with the application or how to access services required by the running containers. Here’s a high level view.

[Figure: container architecture]

Infrastructure

Containers need infrastructure to run. Both virtual and physical infrastructure can be used to host containers. Some argue that it’s better to run containers directly on physical servers to get the maximum performance. While there are performance benefits, there is also more operational overhead in standing up and maintaining physical servers. Automation available in virtual environments often makes it easier to provision, monitor and remediate servers. Using virtual infrastructure also makes it possible to share capacity between different types of workloads, where some may not be optimized for containers. Tools like Docker cloud (formerly Tutum) and Rancher can streamline operations for virtual environments.

If all workloads will be containerized and top performance is critical, favor a physical deployment. If some applications will still require IaaS and capacity will be shared between various types of workloads, choose virtual.

Orchestration

Orchestration is the process by which containers are managed to ensure that a predefined application configuration is maintained. Orchestration tools often require a plain text definition (usually YAML) of which container images are wanted, the networking between those containers, mounted volumes, etc. The orchestration tool is then given this definition, which it uses to pull the necessary images and create containers, set up networking and mount storage.

Kubernetes

Kubernetes (http://kubernetes.io/) was originally contributed to the open source community by Google and is based on Borg, their decade-old container technology. It aims to be a comprehensive container management platform providing everything from orchestration to monitoring to service discovery and more. It abstracts the container technology in what it calls a pod, making it possible to use Docker or rkt or any other technology that comes around in the future. For many people the appeal of this platform is that it has no direct tie back to a commercial vendor, so investment is more likely to be driven by the community.

Swarm

Docker Swarm is Docker’s orchestration layer. It is designed to integrate seamlessly with other Docker tools, including the Docker daemon and registry tools. Some of the appeal of Swarm has to do with simplicity. Swarm is more narrowly focused than kubernetes, which may suggest better focus and more flexibility in choosing the right solutions for each container management need, although it works best when sticking with Docker solutions.
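As a sketch of that simplicity (the image name is hypothetical), a compose-style stack definition is all Swarm needs to schedule a replicated service:

    # deploy with: docker stack deploy -c stack.yml app
    version: "3"
    services:
      web:
        image: registry.example.com/team/web:1.0   # hypothetical image
        ports:
          - "8080:80"
        deploy:
          replicas: 3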

PaaS

As containers continue to grow in prominence, some PaaS solutions, such as cloudfoundry, are reworking their narrative to position themselves as container management systems. It is true that the current version of cloudfoundry supports direct deployment of Docker container images and provides platform components, like routing, health management and scaling. Some drawbacks to using a PaaS for container orchestration are that deployments become more prescriptive and that the platform provides less granular control over container deployment and interactions.

Image management

Container images can be created in several ways, including a mechanism like Dockerfile or other automation tools. Container images should never contain credentials or other sensitive data (see Discovery below). In some cases it may be appropriate to host an internal container registry. External registry options that provide private images may provide sufficient protection for some applications.
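In kubernetes, for example, a pod can pull from an internal registry while the credentials stay in a separately managed secret rather than in the image; the registry host and secret name below are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      imagePullSecrets:
        - name: internal-registry-creds    # created separately as a docker-registry secret
      containers:
        - name: app
          image: registry.internal.example.com/team/app:1.2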

Another aspect of image security has to do with vulnerabilities. Some registry solutions provide image scanning tools that can detect vulnerabilities or out of date packages. When external images are used as a base for internal application images, these should be carefully curated and confirmed to be safe before using them to derive application images.

Automation

One motivation behind containerization is that it better accommodates Continuous Integration (CI) and Continuous Delivery (CD). When building CI/CD pipelines, it’s important that the orchestration layer make it easy to automate the lifecycle of containers for unit tests, functional tests, load tests and other automatic verification of the current state of an application. The CI/CD pipeline may be responsible for both triggering container creation as well as creating the container image.
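A sketch of such a pipeline, written in the style of a GitLab CI definition (the stage names, registry host, test script and deployment name are all assumptions):

    stages:
      - build
      - test
      - deploy

    build_image:
      stage: build
      script:
        - docker build -t registry.internal.example.com/team/app:$CI_COMMIT_SHA .
        - docker push registry.internal.example.com/team/app:$CI_COMMIT_SHA

    unit_tests:
      stage: test
      script:
        # run the image's own test entry point; the script name is hypothetical
        - docker run --rm registry.internal.example.com/team/app:$CI_COMMIT_SHA ./run_tests.sh

    deploy:
      stage: deploy
      script:
        # ask kubernetes to roll the new image out to an existing deployment
        - kubectl set image deployment/app app=registry.internal.example.com/team/app:$CI_COMMIT_SHA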

Two-way communication with CI/CD tooling is important so that the end result of testing and validation can be reported and possibly acted on by the CI/CD tool, affecting later stages.

Discovery

Discovery is the process by which a container identifies the other containers and services it needs in order to function, or registers itself to be found by the containers with which it participates. Discovery may include scenarios such as finding a database or static file storage with data necessary to run, or identifying other containers across which requests are distributed in order to accommodate synchronization.

Two common solutions for Discovery include a distributed key/value store and DNS. A distributed key/value store, such as etcd, ensures that each physical node hosting containers has a synchronized set of key/value data. In this scenario, the orchestration tool can add details about newly created containers to the key/value store so that existing containers are aware of them. New containers can query the key/value store to identify related containers and services.
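Illustrative only (etcd stores flat key paths; the keys and payloads here are entirely hypothetical), the synchronized discovery data might look like:

    # entries an orchestration tool might write as containers come and go
    /services/web/instances/web-1: '{"address": "10.1.0.4", "port": 8080}'
    /services/web/instances/web-2: '{"address": "10.1.0.5", "port": 8080}'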

DNS based discovery (a popular tool is Consul) is very similar, except that DNS is used to manage resolution of services and containers based on well-known names. In this way, new containers can simply call the predetermined name and trust that the request will be routed to the appropriate container or resource. As containers change, DNS is updated in realtime so that no changes are required on individual containers.
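As one concrete instance of DNS based discovery, here is a hedged kubernetes-flavored sketch (the service name and selector are hypothetical): the cluster DNS addon lets any container reach the database at the stable name db, no matter which pods currently back it:

    apiVersion: v1
    kind: Service
    metadata:
      name: db       # resolvable in-cluster as "db" or db.<namespace>.svc.cluster.local
    spec:
      selector:
        app: postgres
      ports:
        - port: 5432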