Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

IT General Controls: Infrastructure vs Routing

IT general controls are important for various reasons, such as business continuity and regulatory compliance. Traditionally, controls have focused on the infrastructure itself. In the context of long-running servers in fixed locations, this was often an effective approach. As virtualization and container technologies become more prevalent, especially in public cloud, infrastructure-focused IT controls can start to get in the way of realizing the following benefits:

  • Just in time provisioning
  • Workload migration
  • Network isolation
  • Tight capacity management
  • DevOps
  • Automated deployments
  • Automated remediation

One way to maintain strong IT controls and still get the above benefits is to shift the focus of those controls away from the infrastructure and instead focus on routing (traffic management).

With controls focused on routing, IT retains authority over where production traffic, and therefore production data, is routed. Engineering teams are free to deploy as needed, and automation can be used freely. Since infrastructure is replaced with each deployment, rather than updated, there is no need to maintain rigid controls around any specific server, VM or container.

A gateway is one way to facilitate this routing. Other mechanisms, like segregated container image repositories and deployment environments, may also be appropriate.
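
To make this concrete, below is a minimal sketch of what a routing-focused control point might look like. The schema, names and values are hypothetical and not tied to any particular gateway product:

```yaml
# Hypothetical gateway route definition. IT controls changes to this file
# (for example through a protected repository), while engineering teams
# deploy new backends freely. Production traffic only shifts when this
# route definition changes.
routes:
  - name: orders-api-production        # illustrative route name
    match:
      host: orders.example.com
      path: /api/v1/
    upstream:
      # Engineering can deploy "green" alongside "blue" at any time;
      # changing this value is the controlled action that moves traffic.
      service: orders-api-blue
      port: 8443
```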

Infrastructure as Code

One of the most significant enablers of IT and software automation has been the shift away from fixed infrastructure to flexible infrastructure. Virtualization, process isolation, resource sharing and other forms of flexible infrastructure have been in use for many decades in IT systems. It can be seen in early Unix systems, Java application servers and even in common tools such as Apache and IIS in the form of virtual hosts. If flexible infrastructure has been a part of technology practice for so long, why is it getting so much buzz now?

Infrastructure as Code

In the last decade, virtualization has become more accessible and transparent, in part due to text-based abstractions that describe infrastructure systems. There are many such abstractions that span IaaS, PaaS, CaaS (containers) and other platforms, but I see four major categories of tools that have emerged.

  • Infrastructure definition. This is closest to defining the actual server, network and storage resources.
  • Runtime or system configuration. This operates on compute resources to overlay system libraries, policies, access control, etc.
  • Image definition. This produces an image or template of a system or application that can then be instantiated.
  • Application description. This is often a composite representation of infrastructure resources and relationships that together deliver a functional system (a short example follows this list).

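As an illustration of the last category, an application description in the style of a docker-compose file captures services, images and the relationships between them in one place. The service and image names here are illustrative:

```yaml
# Sketch of an application description: one file declares the services,
# their images and the relationships between them.
version: "3"
services:
  web:
    image: example/web-app:1.0         # illustrative image
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:9.6
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```
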
Right tool for the right job

I have observed a trend among these toolsets to expand their scope beyond one of these categories to encompass all of them. For example, rather than use a chain of tools such as Packer to define an image, HEAT to define the infrastructure and Ansible to configure the resources and deploy the application, someone will try to use Ansible to do all three. Why is that bad?

A tool like HEAT is directly tied to the OpenStack charter. It endeavors to adhere to the native APIs as they evolve. The tool is accessible, reportable and integrated into the OpenStack environment where the managed resources are also visible. This can simplify troubleshooting and decrease development time. In my experience, a tool like Ansible generally lags behind in features and API support, and lacks the native interface integration. Some argue that using a tool like Ansible makes the automation more portable between cloud providers. Given the different interfaces and underlying APIs, I haven’t seen this actually work. There is always a frustrating translation when changing providers, and in many cases there is additional frustration due to idiosyncrasies of the tool, which could have been avoided by using more native interfaces.

The point I’m driving at is that when a native, supported and integrated tool exists for a given stage of automation, it’s worth exploring, even if it represents another skill set for those who develop the automation. The insight gained can often lead to a more robust and appropriate implementation. In the end, a tool can call a combination of HEAT and Ansible as easily as just Ansible.
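
As a sketch of that kind of chaining, an Ansible playbook can hand provisioning to a native HEAT template and keep configuration for itself. The os_stack module shown here is Ansible's OpenStack stack module; the template path and inventory group are illustrative:

```yaml
# Sketch: Ansible drives the workflow, but provisioning stays in HEAT.
- name: Provision infrastructure with a native HEAT template
  hosts: localhost
  tasks:
    - name: Create or update the stack
      os_stack:
        name: web-tier
        state: present
        template: templates/web_tier.yaml   # illustrative path

- name: Configure the servers that HEAT provisioned
  hosts: web_servers                        # illustrative inventory group
  become: true
  tasks:
    - name: Install the web server package
      package:
        name: nginx
        state: present
```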

Containers vs. Platforms

Another lively discussion over the past few years revolves around where automation efforts should focus. AWS made popular the idea that automation at the IaaS layer was the way to go. A lot of companies have benefitted from that, but many more have found the learning curve too steep and the cost of fixed resources too high. Along came Heroku and promised to abstract away all the complexity of IaaS but still deliver all the benefits. The cost of that benefit came in either reduced flexibility or a steep learning curve to create new deployment contexts (called buildpacks). When Docker came along and provided a very easy way to produce a single function image that could be quickly instantiated, this spawned discussion related to how the container lifecycle should be orchestrated.

Containers moved the concept of image creation away from general purpose compute, which had been the focus of IaaS, and toward specialized compute, such as a single application executable. Start time and resource efficiency made containers more appealing than virtual servers, but questions about how to handle networking and storage remained. The Docker best practice of single function containers drove up the number of instances when compared to more complex virtual servers that filled multiple roles and had longer life cycles. Orchestration became the key to reliable container-based deployments.

The descriptive approaches that evolved to accommodate containers, such as Kubernetes, provide more ease and speed than IaaS, while providing more transparency and control than PaaS. Containers make it possible to define an entire application deployment scenario, including images, networking, storage, configuration, routing, etc., in plain text and trust the Container as a Service (CaaS) platform to orchestrate it all.
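
For instance, a minimal Kubernetes manifest declares images, replica counts and routing entirely in plain text. The names and image below are illustrative:

```yaml
# A Deployment describes the desired container instances.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          image: registry.example.com/orders-api:1.4.2   # illustrative image
          ports:
            - containerPort: 8080
---
# A Service describes how traffic is routed to those containers.
apiVersion: v1
kind: Service
metadata:
  name: orders-api
spec:
  selector:
    app: orders-api
  ports:
    - port: 80
      targetPort: 8080
```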

Evolution

Up to this point, infrastructure as code has evolved from shell and bash scripts, to infrastructure definitions for IaaS tools, to configuration and image creation tools that define what those environments look like, and on to full application deployment descriptions. What remains to mature are configuration management, secret management and the regional distribution of compute locality for performance and edge data processing.

HEAT or Ansible in OpenStack? Both!

Someone asked me today whether he should use HEAT or Ansible to automate his OpenStack deployment. My answer is that he should use both! It’s helpful to understand the original design decisions for each tool in order to use each effectively. OpenStack HEAT and Ansible were designed to do different things, although in the open-source tradition, they have been extended to accommodate some overlapping functionality.

Cloud Native

In my post on What is Cloud Native, I show the five elements of application life cycle that can be automated in the cloud (image shown below). The two life cycle elements in blue, provision and configure, correspond to the most effective use of HEAT and Ansible.

[Image: application life cycle elements]

OpenStack HEAT for Provisioning

HEAT is designed to capture details related to infrastructure and accommodate provisioning of that infrastructure on OpenStack. CloudFormation does the same thing in AWS and Terraform is an abstraction that has providers for both OpenStack and AWS (and many others).

HEAT provides vocabulary to define compute, storage, network and other infrastructure related resources. This includes the interrelationships between infrastructure resources, such as associating floating IPs with compute resources or binding a compute resource to a specific network. This also includes some bookkeeping items, like assigning key pairs for authentication and naming resources.

The end result of executing a HEAT template is a collection of one or more infrastructure resources based on existing images (VM or volume).
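
A minimal HEAT template illustrates that vocabulary. The image, flavor, key pair and network names below are placeholders for values that would exist in a real OpenStack project:

```yaml
heat_template_version: 2016-04-08

description: Minimal example of a compute resource bound to a network

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      name: app-server-01
      image: ubuntu-16.04            # placeholder image name
      flavor: m1.small               # placeholder flavor
      key_name: deploy-key           # key pair used for authentication
      networks:
        - network: private-net       # bind the server to a specific network

outputs:
  app_server_ip:
    description: IP address of the provisioned server
    value: { get_attr: [app_server, first_address] }
```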

Ansible for Configuration

Ansible, on the other hand, is designed to configure infrastructure after it has been provisioned. This includes activities like installing libraries and setting up a specific runtime environment. System details like firewalls and log management, as well as the application stack, databases, etc., are easily managed from Ansible.

Ansible can also easily accommodate application deployment. Activities such as moving application artifacts into specific places, managing users/groups and file permissions, tweaking configuration files, etc. are all easily done in Ansible.

The end result of executing an Ansible playbook is ready-to-use infrastructure.
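
A short playbook sketches the kinds of configuration activities described above. The package names, user and file paths are illustrative:

```yaml
# Sketch of post-provisioning configuration with Ansible.
- hosts: app_servers
  become: true
  tasks:
    - name: Install runtime libraries and services
      package:
        name:
          - nginx
          - python3
        state: present

    - name: Create an application user
      user:
        name: appuser
        state: present

    - name: Place the application configuration file
      template:
        src: app.conf.j2             # illustrative template
        dest: /etc/app/app.conf
        owner: appuser
        mode: "0640"

    - name: Make sure the service is running
      service:
        name: nginx
        state: started
        enabled: true
```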

Where is the Overlap?

Ansible can provision resources in OpenStack. HEAT can send a cloud-init script to a new server to perform configuration of the server. In the case of Ansible for provisioning, it is not nearly as articulate or granular as HEAT for the purpose of defining infrastructure. In the case of HEAT configuring infrastructure through cloud-init, you still need to find some way to dynamically manage the cloud-init scripts to configure each compute resource to fit into your larger system. I do use cloud-init with HEAT, but I generally find more value in leaving the bulk of configuration to Ansible.
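
For reference, this is roughly how HEAT hands a cloud-init script to a new server, through the user_data property of OS::Nova::Server. The property names are HEAT vocabulary; the image, flavor and script contents are illustrative:

```yaml
resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-16.04            # placeholder
      flavor: m1.small               # placeholder
      user_data_format: RAW
      user_data: |
        #cloud-config
        packages:
          - ntp
        runcmd:
          - echo "configured by cloud-init" >> /var/log/bootstrap.log
```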

Ansible inventory from HEAT

When using HEAT and Ansible together, it is necessary to generate the Ansible inventory file from HEAT output. To accomplish this, you want to make sure HEAT outputs the necessary information, like IP addresses. You can use your favorite scripting language to query HEAT and write the inventory file.
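
The outputs section of the HEAT template is where that information gets exposed; a script can then read each value (for example with the openstack stack output show command) and render the inventory file. The resource names here are illustrative:

```yaml
# Expose addresses from the stack so a script can build the inventory.
outputs:
  web_server_ip:
    description: Address of the web server, for the Ansible inventory
    value: { get_attr: [web_server, first_address] }
  db_server_ip:
    description: Address of the database server, for the Ansible inventory
    value: { get_attr: [db_server, first_address] }
```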

Example using both HEAT and Ansible

A while ago I published two articles that showed how I develop the Ansible configuration, and then extend that to work with HEAT for deploying complex, multi-server environments.

Install and configure a Multi-node Hadoop cluster using Ansible

Build a multi-server Hadoop cluster in OpenStack in minutes

The first article lays the foundation for deploying a complex system with Ansible. The second article builds on this by introducing HEAT to provision the infrastructure. The Ansible inventory file is dynamically generated using a Python script and the OpenStack CLI.

Conclusion

While there is some ambiguity around the term provision in cloud parlance, I consider provision to be the process of creating infrastructure resources that are not generally configured. I refer to configuration as the process of operating against those provisioned resources to prepare them for a specific use case, such as running an application or a database. HEAT is a powerful tool for provisioning resources in OpenStack and Ansible is a great fit for configuring existing infrastructure resources.

The Road to PaaS

I have observed that discussions about CloudFoundry often lack accurate context. Some questions I get that indicate context is missing include:

  • What Java version does CloudFoundry support?
  • What database products/versions are available?
  • How can I access the server directly?

There are a few reasons that the questions above are not relevant for CloudFoundry (or any modern PaaS environment). To understand why, it’s important to understand how we got to PaaS and where we came from.

[Image: CloudFoundry compared with a traditional stack]

Landscape

When computers were first becoming a common requirement for the enterprise, most applications were monolithic. All application components would run on the same general purpose server. This included the interface, application technology (e.g. Java, .NET or PHP) and data and file storage. Over time, these functions were distributed across different servers. The servers also began to take on characteristic differences that would accommodate the technology being run.

Today, compute has been commoditized and virtualized. Rather than thinking of compute as a physical server, built to suit a specific purpose, compute is instead viewed in discrete chunks that can be scaled horizontally. PaaS today marries an application with those chunks of compute capacity as needed and abstracts application access to services, which may or may not run on the same PaaS platform.

Contributor and Organization Dynamic

The roles of contributors and organizations have changed throughout the evolution of the landscape. Early monolithic systems required technology experts who were familiar with a broad range of technologies, including system administration, programming, networking, etc. As the functions were distributed, the roles became more defined by their specializations. Webmasters, DBAs and programmers became siloed. This more distributed architecture introduced some unintended conflicts, due in part to the fact that efficiencies in one silo did not always align with the best interests of other silos.

DevOps

As the evolution pushed toward compute as a commodity, the newfound flexibility drove many frustrated technologists to reach beyond their respective silos to accomplish their design and delivery objectives. Programmers began to look at how different operating system environments and database technologies could enable them to produce results faster and more reliably. System administrators began to rethink system management in ways that abstracted hardware dependencies and decreased the complexity involved in augmenting the compute capacity available to individual functions. Datastore, network, storage and other experts began a similar process of abstracting their offerings. This blending of roles and new dynamic of collaboration and contribution has come to be known as DevOps.

Interoperability

Interoperability between systems and applications in the days of monolithic application development made use of many protocols. This was due in part to the fact that each monolithic system exposed its services in different ways. As the above progression took place, the field of available protocols normalized. RESTful interfaces over HTTP have emerged as an accepted standard, and the serialization structures most common to REST are XML and JSON. This makes integration straightforward and provides for a high degree of reuse of existing services. This also makes services available to a greater diversity of devices.

Security and Isolation

One key development that made this evolution from compute as hardware to compute as a utility possible was effective isolation of compute resources on shared hardware. The first big step in this direction came in the form of virtualization. Virtualized hardware made it possible to run many distinct operating systems simultaneously on the same hardware. It also significantly reduced the time to provision new server resources, since the underlying hardware was already wired and ready.

Compute as a ________

The next step in the evolution came in the form of containers. Unlike virtualization, containers made it possible to provide an isolated, configurable compute instance in much less time, one that consumed fewer system resources to create and manage (i.e. lightweight). This progression from compute as hardware, to compute as virtual, and finally to compute as a container made it realistic to literally view compute as discrete chunks that could be created and destroyed in seconds as capacity requirements changed.

Infrastructure as Code

Another important observation regarding the evolution of compute is that as the compute environment became easier to create (time to provision decreased), the process to provision changed. When a physical server required ordering, shipping, mounting, wiring, etc., it was reasonable to take a day or two to install and configure the operating system, network and related components. When that hardware was virtualized and could be provisioned in hours (or less), system administrators began to pursue more automation to accommodate the setup of these systems (e.g. Ansible, Puppet, Chef and even Vagrant). This made it possible to think of systems as more transient. With the advent of Linux containers, the idea of infrastructure as code became even more prevalent. Time to provision is approaching zero.

A related byproduct of infrastructure defined by scripts or code was reproducibility. Whereas it was historically difficult to ensure that two systems were configured identically, the method for provisioning containers made it trivial to ensure that compute resources were identically configured. This in turn improved debugging and collaboration, and accommodated versioning of operating environments.

Contextual Answers

Given that the landscape has changed so drastically, let’s look at some possible answers to the questions from the beginning of this post.

  • Q. What Java (or any language) version does CloudFoundry support?
    A. It supports any language that is defined in the scripts used to provision the container that will run the application. While it is true that some such scripts may be available by default, this doesn’t imply that the PaaS provides only that. If it’s a fit, use it. If not, create new provisioning scripts.
  • Q. What database products/versions are available?
    A. Any database product or version can be used. If the datastore services available that are associated with the PaaS by default are not sufficient, bring your own or create another application component to accommodate your needs.
  • Q. How can I access the server directly?
    A. There is no “the server.” If you want to know more about the server environment, look at the script/code that is responsible for provisioning it. Even better, create a new container and play around with it. Once you get things just right, update your code so that every new container incorporates the desired changes. Every “the server” will look exactly how you define it.