Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Posts tagged heat

Software Engineering

Infrastructure as Code

One of the most significant enablers of IT and software automation has been the shift away from fixed infrastructure to flexible infrastructure. Virtualization, process isolation, resource sharing and other forms of flexible infrastructure have been in use for many decades in IT systems. It can be seen in early Unix systems, Java application servers and even in common tools such as Apache and IIS in the form of virtual hosts. If flexible infrastructure has been a part of technology practice for so long, why is it getting so much buzz now?

Infrastructure as Code

In the last decade, virtualization has become more accessible and transparent, in part due to text-based abstractions that describe infrastructure systems. There are many such abstractions spanning IaaS, PaaS, CaaS (containers) and other platforms, but I see four major categories of tools that have emerged.

  • Infrastructure Definition. This is closest to defining actual server, network and storage resources.
  • Runtime or system configuration. This operates on compute resources to overlay system libraries, policies, access control, etc.
  • Image definition. This produces an image or template of a system or application that can then be instantiated.
  • Application description. This is often a composite representation of infrastructure resources and relationships that together deliver a functional system.

Right tool for the right job

I have observed a trend among these toolsets to expand their scope beyond one of these categories to encompass all of them. For example, rather than use a chain of tools such as Packer to define an image, HEAT to define the infrastructure and Ansible to configure the resources and deploy the application, someone will try to use Ansible to do all three. Why is that bad?

A tool like HEAT is directly tied to the OpenStack charter. It endeavors to adhere to the native APIs as they evolve. The tool is accessible, reportable and integrated into the OpenStack environment where the managed resources are also visible. This can simplify troubleshooting and decrease development time. In my experience, a tool like Ansible generally lags behind in features and API support, and lacks the native interface integration. Some argue that using a tool like Ansible makes the automation more portable between cloud providers. Given the different interfaces and underlying APIs, I haven’t seen this actually work. There is always a frustrating translation when changing providers, and in many cases there is additional frustration due to idiosyncrasies of the tool, which could have been avoided by using more native interfaces.

The point I’m driving at is that when a native, supported and integrated tool exists for a given stage of automation, it’s worth exploring, even if it represents another skill set for those who develop the automation. The insight gained can often lead to a more robust and appropriate implementation. In the end, a tool can call a combination of HEAT and Ansible as easily as just Ansible.

Containers vs. Platforms

Another lively discussion over the past few years revolves around where automation efforts should focus. AWS made popular the idea that automation at the IaaS layer was the way to go. A lot of companies have benefitted from that, but many more have found the learning curve too steep and the cost of fixed resources too high. Along came Heroku and promised to abstract away all the complexity of IaaS but still deliver all the benefits. The cost of that benefit came in either reduced flexibility or a steep learning curve to create new deployment contexts (called buildpacks). When Docker came along with a very easy way to produce a single-function image that could be quickly instantiated, it spawned discussion about how the container lifecycle should be orchestrated.

Containers moved the concept of image creation away from general purpose compute, which had been the focus of IaaS, and toward specialized compute, such as a single application executable. Start time and resource efficiency made containers more appealing than virtual servers, but questions about how to handle networking and storage remained. The Docker best practice of single-function containers drove up the number of instances when compared to more complex virtual servers that filled multiple roles and had longer life cycles. Orchestration became the key to reliable container-based deployments.

The descriptive approaches that evolved to accommodate containers, such as Kubernetes, provide more ease and speed than IaaS, while providing more transparency and control than PaaS. Containers make it possible to define an entire application deployment scenario, including images, networking, storage, configuration, routing, etc., in plain text and trust the Container as a Service (CaaS) to orchestrate it all.
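As a hypothetical sketch (the names and image are made up), a minimal Kubernetes manifest declares an application deployment in plain text and leaves the orchestration to the platform:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # the CaaS keeps three instances running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # single-function container image
          ports:
            - containerPort: 8080
```

The platform continuously reconciles the actual state of the cluster against this description, restarting or rescheduling containers as needed.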

Evolution

Up to this point, infrastructure as code has evolved from shell and bash scripts, to infrastructure definitions for IaaS tools, to configuration and image creation tools that define what those environments look like, to full application deployment descriptions. What remains to mature are configuration management, secret management and the regional distribution of compute locality for performance and edge data processing.


Do Humans or Machines consume IaaS (REST API)

Infrastructure as a Service offerings, like OpenStack and AWS, have made it possible to consume infrastructure on demand. It’s important to understand the ways in which both humans and machines interact with IaaS offerings in order to design optimal systems that leverage all possible automation opportunities. I drew the diagram below to help illustrate.

[diagram: terraform-stackato-automation]

Everything is an API

At the heart of IaaS are REST APIs that provide granular access to every resource type, such as compute, storage and network. These APIs provide clarity about which resources are being managed and accommodate the independent evolution of each resource offering, which keeps responsibilities focused. APIs include Nova for compute on the OpenStack side and EC2 for compute on the AWS side. Other IaaS providers have similar APIs with similar delineations between resource types.

Consume the API RAW

Since REST today is typically done over HTTP and makes use of the HTTP methods and response codes, it is relatively straightforward to use a tool such as curl, Fiddler or a REST client to craft individual calls to APIs. This works fine for humans who want to understand the APIs better, but it is slow and doesn’t scale well. Machines (by way of software, obviously) can make those same calls using HTTP request libraries (e.g. python requests, Go http package, Java URLConnection, etc.).
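A raw call like this can be built with nothing but the standard library. This sketch uses Python's urllib; the endpoint URL and token are made-up placeholders, since a real call requires a Keystone-issued token and your cloud's actual Nova endpoint:

```python
import urllib.request

# Hypothetical placeholder values; substitute your cloud's Nova endpoint
# and a real token obtained from Keystone.
NOVA_ENDPOINT = "http://openstack.example.com:8774/v2.1"
AUTH_TOKEN = "gAAAAAB-example-token"

def build_list_servers_request(endpoint, token):
    """Craft (but do not send) a raw GET against Nova's /servers resource."""
    return urllib.request.Request(
        url=endpoint + "/servers",
        headers={
            "X-Auth-Token": token,         # identity token from Keystone
            "Accept": "application/json",  # REST over HTTP, JSON responses
        },
        method="GET",
    )

req = build_list_servers_request(NOVA_ENDPOINT, AUTH_TOKEN)
print(req.method, req.full_url)
# Sending it is one more line: urllib.request.urlopen(req)
```

Crafting a handful of these by hand is a good way to learn an API, but as the text notes, it doesn't scale for day-to-day operations.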

Abstractions

After a couple of hours sending raw requests against the IaaS REST APIs, you’ll be thrilled to know that there are well designed abstractions that provide a more friendly interface to IaaS. The OpenStack Command Line Interface (CLI) allows you to operate against OpenStack APIs by typing concise commands at a shell prompt. The Horizon web console provides a web based user experience to manage those same resources. In both cases, the CLI and Horizon console are simply translating your commands or clicks into sequences of REST API calls. This means the ultimate interaction with IaaS is still based on the REST API, but you, the human, have a more human-like interface.

Since machines don’t like to type on a keyboard or click on web pages, it makes sense to have abstractions for them too. The machines I’m talking about are the ones we ask to automatically test our code, deploy our software, monitor and scale and heal our applications. For them, we have other abstractions, such as HEAT and Terraform. These allow us to provide the machine with a plain text description (often in YAML or similar markup) of what we want our infrastructure to look like. These orchestration tools then analyze the current state of our infrastructure and decide whether any actions are necessary. They then translate those actions into REST API calls, just like the CLI and Horizon panel did for the human. So again, all interaction with IaaS happens by way of the REST API, but the machine too gets an abstraction that humans can craft and read.

Automation is the end goal

Whichever method you pursue, the end goal is to automate as many of the IaaS interactions as possible. The more the machine can do without human interaction, the more time humans have to design the software running on top of IaaS and the more resilient the end system will be to failures and fluctuations. If you find yourself using the human abstractions more than telling machines what to do by way of infrastructure definitions, you have a great opportunity to change and gain some efficiency.


HEAT or Ansible in OpenStack? Both!

Someone asked me today whether he should use HEAT or Ansible to automate his OpenStack deployment. My answer is that he should use both! It’s helpful to understand the original design decisions for each tool in order to use each effectively. OpenStack HEAT and Ansible were designed to do different things, although in the open source tradition, they have been extended to accommodate some overlapping functionalities.

Cloud Native

In my post on What is Cloud Native, I show the five elements of application life cycle that can be automated in the cloud (image shown below). The two life cycle elements in blue, provision and configure, correspond to the most effective use of HEAT and Ansible.

[image: application-lifecycle-elements]

OpenStack HEAT for Provisioning

HEAT is designed to capture details related to infrastructure and accommodate provisioning of that infrastructure on OpenStack. CloudFormation does the same thing in AWS and Terraform is an abstraction that has providers for both OpenStack and AWS (and many others).

HEAT provides vocabulary to define compute, storage, network and other infrastructure related resources. This includes the interrelationships between infrastructure resources, such as associating floating IPs with compute resources or binding a compute resource to a specific network. This also includes some bookkeeping items, like assigning key pairs for authentication and naming resources.

The end result of executing a HEAT template is a collection of one or more infrastructure resources based on existing images (VM or volume).
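A minimal HEAT template covering the vocabulary above might look like the following hypothetical sketch (resource names, network names and image/flavor values are made up):

```yaml
heat_template_version: 2015-04-30

parameters:
  key_name:
    type: string

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      name: app-server-01                 # bookkeeping: naming the resource
      image: ubuntu-14.04
      flavor: m1.small
      key_name: { get_param: key_name }   # key pair for authentication
      networks:
        - port: { get_resource: app_port }

  app_port:
    type: OS::Neutron::Port
    properties:
      network: private-net                # bind compute to a specific network

  app_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: public-net
      port_id: { get_resource: app_port } # associate floating IP with compute
```

The template captures both the resources and the interrelationships between them, and HEAT resolves the dependency order at stack creation time.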

Ansible for Configuration

Ansible, on the other hand, is designed to configure infrastructure after it has been provisioned. This includes activities like installing libraries and setting up a specific runtime environment. System details like firewalls and log management, as well as the application stack, databases, etc., are easily managed with Ansible.

Ansible can also easily accommodate application deployment. Activities such as moving application artifacts into specific places, managing users/groups and file permissions, tweaking configuration files, etc. are all easily done in Ansible.

The end result of executing an Ansible playbook is ready-to-use infrastructure.
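A hypothetical playbook fragment covering this kind of configuration and deployment might look like the following (the host group, package names and paths are illustrative, not taken from any real project):

```yaml
# Illustrative only; hosts, packages and paths would match your own system
- hosts: webservers
  become: yes
  tasks:
    - name: Install runtime libraries
      apt:
        name: [openjdk-7-jdk, rsync]
        state: present

    - name: Open the application port in the firewall
      ufw:
        rule: allow
        port: "8080"

    - name: Deploy the application artifact
      copy:
        src: app.war
        dest: /opt/app/app.war
        owner: appuser
        group: appuser
        mode: "0644"
```

Each task is idempotent, so the playbook can be re-run safely as the configuration evolves.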

Where is the Overlap?

Ansible can provision resources in OpenStack. HEAT can send a cloud-init script to a new server to perform configuration of the server. In the case of Ansible for provisioning, it is not nearly as expressive or granular as HEAT for the purpose of defining infrastructure. In the case of HEAT configuring infrastructure through cloud-init, you still need to find some way to dynamically manage the cloud-init scripts to configure each compute resource to fit into your larger system. I do use cloud-init with HEAT, but I generally find more value in leaving the bulk of configuration to Ansible.
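For reference, the HEAT side of that overlap looks like the following hypothetical fragment, where a minimal cloud-init script rides along with the server definition (values are made up):

```yaml
# Hypothetical sketch of HEAT passing a cloud-init script to a new server
resources:
  hadoop_master:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: m1.medium
      key_name: hadoop-keypair
      user_data_format: RAW
      user_data: |
        #!/bin/bash
        # minimal bootstrap only; heavier configuration is left to Ansible
        apt-get update
        apt-get install -y python
```

Keeping the user_data this small is the pattern described above: cloud-init bootstraps just enough for Ansible to take over.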

Ansible inventory from HEAT

When using HEAT and Ansible together, it is necessary to generate the Ansible inventory file from HEAT output. To accomplish this, you want to make sure HEAT outputs the necessary information, like IP addresses. You can use your favorite scripting language to query HEAT and write the inventory file.
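A minimal sketch of that glue in Python might look like this. It assumes the template declares outputs named "master_ip" and "slave_ips"; match these to whatever outputs your own HEAT template actually defines:

```python
# Sketch of generating an Ansible inventory from HEAT stack outputs.
def build_inventory(stack_outputs):
    """Turn a list of HEAT output records into an INI-style inventory."""
    outputs = {o["output_key"]: o["output_value"] for o in stack_outputs}
    lines = ["[master]", outputs["master_ip"], "", "[slaves]"]
    lines.extend(outputs["slave_ips"])
    return "\n".join(lines) + "\n"

# In practice these records would come from a call such as:
#   openstack stack output show -f json --all <stack-name>
sample_outputs = [
    {"output_key": "master_ip", "output_value": "10.0.0.5"},
    {"output_key": "slave_ips", "output_value": ["10.0.0.6", "10.0.0.7"]},
]

print(build_inventory(sample_outputs), end="")
```

Re-running this after every stack update keeps the inventory in step with what HEAT actually provisioned.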

Example using both HEAT and Ansible

A while ago I published two articles that showed how I develop the Ansible configuration, and then extend that to work with HEAT for deploying complex, multi-server environments.

Install and configure a Multi-node Hadoop cluster using Ansible

Build a multi-server Hadoop cluster in OpenStack in minutes

The first article lays the foundation for deploying a complex system with Ansible. The second article builds on this by introducing HEAT to provision the infrastructure. The Ansible inventory file is dynamically generated using a python script and the OpenStack CLI.

Conclusion

While there is some ambiguity around the term provision in cloud parlance, I consider provision to be the process of creating infrastructure resources that are not generally configured. I refer to configuration as the process of operating against those provisioned resources to prepare them for a specific use case, such as running an application or a database. HEAT is a powerful tool for provisioning resources in OpenStack and Ansible is a great fit for configuring existing infrastructure resources.


Build a multi-server Hadoop cluster in OpenStack in minutes

In a previous post I demonstrated a method to deploy a multi-node Hadoop cluster using Vagrant and Ansible. This post builds on that and shows how to deploy a Hadoop cluster with an arbitrary number of slave nodes in minutes on OpenStack. This process makes use of the OpenStack orchestration layer, HEAT, to provision the resources, after which Ansible is used to configure those resources.

All the scripts to do this yourself are available on GitHub to clone and fork:
https://github.com/dwatrous/hadoop-multi-server-ansible

I have recorded a video demonstrating the entire process, including scaling the cluster after initial deployment:

Scope

The scope of this article is to create a Hadoop cluster with an arbitrary number of slave nodes, which can be automatically scaled up or down to accommodate changes in capacity as workloads change. The following diagram illustrates this:
[diagram: hadoop-design-openstack]

Build the servers

For convenience, this process still uses Vagrant to create a server that will function as the HEAT and Ansible controller. It’s also possible to create a server in OpenStack to fill this role, in which case you could simply use the bootstrap-master.sh script to configure that server. The steps to create the servers in OpenStack using HEAT are:

  1. Install openstack clients (we do this in a python virtual environment)
  2. Download and source the openrc file from your OpenStack environment
  3. Use the openstack clients to get details about keypairs, images, networks, etc.
  4. Update the heat template for your environment
  5. Use heat to build your servers

Install and Run Hadoop

Once the servers are provisioned, it’s time to install Hadoop. This is done using Ansible and can be run from the same host where HEAT was used (the Vagrant-created server in this case). Ansible requires an inventory file to run. Since HEAT is aware of the server resources it created, I added a python script to request information about provisioned servers from HEAT and write an inventory file. Any time you update your stack using HEAT, be sure to run the heat-inventory.py script so Ansible is working against the current state. Keep in mind that if you have a proxied environment, you may need to update group_vars/all.

When the Ansible scripts complete, remember to connect to the master node in OpenStack. From the Hadoop master node, the same process as before can be followed to start Hadoop and run a job.

Security and Configuration

In this example, a floating IP is attached to every server so the local Vagrant server can connect via SSH and configure them. If a server was manually prepared in the same OpenStack environment, the SSH connectivity could leverage IP addresses on the private network. This would eliminate all but one floating IP address, which is still required for the master node.

Future work might include additional automation to tie together the steps I’ve demonstrated. These can also be executed as part of a CI/CD tool chain for fully automated deployments.