Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Posts tagged buildpack

Software Engineering

nginx buildpack – installed

It’s possible to add a custom buildpack to Stackato or Helion Development Platform so that it’s available to all applications. When using an installed buildpack it is not necessary to include a manifest or identify the buildpack; instead, it will be selected by the buildpack's detect script. All files are on the cloud controller node, which eliminates download time and bandwidth.

Package the buildpack

Prepare your buildpack for installation by adding all files to a zip file (any name will do). The bin folder should be in the root of the zip file, with related files in their relative locations. Normally this happens as expected if the buildpack is cloned and zip is run from within the buildpack directory, as shown.

vagrant@vagrant-ubuntu-trusty-64:~$ git clone -b precompiled https://github.com/dwatrous/buildpack-nginx.git
Cloning into 'buildpack-nginx'...
remote: Counting objects: 156, done.
remote: Compressing objects: 100% (99/99), done.
remote: Total 156 (delta 44), reused 142 (delta 32)
Receiving objects: 100% (156/156), 1.80 MiB | 2.64 MiB/s, done.
Resolving deltas: 100% (44/44), done.
Checking connectivity... done.
vagrant@vagrant-ubuntu-trusty-64:~$ cd buildpack-nginx/
vagrant@vagrant-ubuntu-trusty-64:~/buildpack-nginx$ zip -r buildpack-nginx.zip *
  adding: bin/ (stored 0%)
  adding: bin/detect (deflated 19%)
  adding: bin/compile (deflated 57%)
  adding: bin/release (deflated 2%)
  adding: bin/launch.sh (deflated 54%)
  adding: README.md (deflated 44%)
  adding: vendor/ (stored 0%)
  adding: vendor/nginx.tar.gz (deflated 4%)
vagrant@vagrant-ubuntu-trusty-64:~/buildpack-nginx$ cp buildpack-nginx.zip /vagrant/

It may be obvious that the commands above were run in a Vagrant-built Ubuntu box. Buildpack scripts must have the executable bit set, which doesn’t carry over when the zip file is created on Windows. To accommodate this I created a Vagrant box, installed git, then cloned and zipped the buildpack. I then copied the zip file into the vagrant folder for easy access.

The README.md isn’t necessary for the buildpack, but it doesn’t interfere either. Any unrelated files in the buildpack can be removed before creating the zip file, though they have no impact if left in place.

Install the buildpack

Adding the buildpack to Stackato or Helion is straightforward. Use the stackato or helion client to call create-buildpack and provide a name and the zip file, as shown.

C:\Users\watrous\Documents\buildpack>stackato create-buildpack daniel-static-nginx buildpack-nginx.zip
Creating new buildpack daniel-static-nginx ... OK
Uploading buildpack bits ...  OK

All installed buildpacks can be listed using the stackato client with the buildpacks command.

C:\Users\watrous\Documents\buildpack>stackato buildpacks
| # | Name                | Filename            | Enabled | Locked |
| 1 | java                | java.zip            | yes     | no     |
| 2 | ruby                | ruby.zip            | yes     | no     |
| 3 | nodejs              | nodejs.zip          | yes     | no     |
| 4 | python              | python.zip          | yes     | no     |
| 5 | go                  | go.zip              | yes     | no     |
| 6 | clojure             | clojure.zip         | yes     | no     |
| 7 | scala               | scala.zip           | yes     | no     |
| 8 | perl                | perl.zip            | yes     | no     |
| 9 | daniel-static-nginx | buildpack-nginx.zip | yes     | no     |

Use the buildpack

In the case of this buildpack, the detect script looks for an index file with an extension of either html or htm. To deploy using this buildpack, just make sure that a file matching that description is in the root of your application files. There is no need to include a manifest.

Software Engineering

Custom buildpacks in CloudFoundry

In the past, PaaS has been rigid and invasive for application developers. CloudFoundry aims to change that perception of PaaS with the use of buildpacks. A buildpack allows an application developer to define the deployment environment in plain text. Some refer to this as infrastructure as code, since the aspects of a deployment environment that were previously handled by system administrators on dedicated servers now exist in plain text alongside the application files.

What’s available out-of-the-box?

Before diving into custom buildpacks, a lot of people will ask “What is available out-of-the-box with CloudFoundry?”. The answer is everything and nothing. It’s true that some buildpacks come pre-configured within the CloudFoundry, Stackato or Helion environments. However, it’s not entirely accurate or useful to think of these as “a version available in CloudFoundry”. The article below provides crucial context for deciding whether an application should use an out-of-the-box buildpack or a customized one.

Custom Buildpacks

The process to create custom buildpacks includes the following steps.

  • Develop the process to build out the runtime environment
  • Understand the staging process
  • Create a buildpack that stages the application for deployment

Understand the buildpack staging process

The buildpack staging process comprises all the steps to download, compile, configure and prepare a runtime environment. The result of staging is a droplet (a gzipped tar file) which contains all application and runtime dependencies. When a new docker application container is created, the droplet is used to deploy the application.

Design the runtime environment

Designing the runtime environment should be done using the same docker container that will be used to stage and deploy the application. Helion and Stackato use an Ubuntu based Docker image prepared specifically for the CloudFoundry environment.

Create a buildpack

With steps to produce a runtime environment and an understanding of the staging process, it’s now possible to create a custom buildpack. This includes creating detect, compile and release scripts, in addition to any related files.
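As a concrete starting point, the sketch below creates that three-script layout. The directory name and script bodies are illustrative placeholders, not a finished buildpack.

```shell
# create the minimal buildpack layout: three executable scripts under bin/
# (directory name and script contents are placeholders)
mkdir -p /tmp/my-buildpack/bin
cat > /tmp/my-buildpack/bin/detect <<'EOF'
#!/usr/bin/env bash
# examine $1 (the application directory) and exit 0 only on a match
exit 1
EOF
cat > /tmp/my-buildpack/bin/compile <<'EOF'
#!/usr/bin/env bash
# build out the runtime environment using $1 (build dir) and $2 (cache dir)
EOF
cat > /tmp/my-buildpack/bin/release <<'EOF'
#!/usr/bin/env bash
# emit YAML describing how to start the application
EOF
chmod +x /tmp/my-buildpack/bin/detect /tmp/my-buildpack/bin/compile /tmp/my-buildpack/bin/release
ls /tmp/my-buildpack/bin
```

The chmod matters: CloudFoundry invokes all three scripts directly, so they must carry the executable bit.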

Software Engineering

nginx buildpack – pre-compiled

Previously I detailed the process to create a buildpack for Stackato or Helion, including reconfiguring the buildpack to be self-contained. In both previous examples, the compile script performed a configure and make on source to build the binaries for the application. Since the configure->make process is often slow, this can be done once and the binaries added to the git repository for the buildpack.

Pre-compile nginx

The first step is to build nginx to run in the docker container for Stackato or Helion. These steps were previously in the compile script, but now they are run independently. After building, package up the necessary files and add that package to the git repository in place of the source archive. Here are the commands I ran to accomplish this.

# create a docker container for the build process
docker run -i -t stackato/stack-alsek:latest /bin/bash
# download, configure and compile
wget -e use_proxy=yes http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
cd ..
# rearrange and package up the files
mkdir -p nginx/bin
mkdir -p nginx/conf
mkdir -p nginx/logs
cp nginx-1.6.2/objs/nginx nginx/bin/
cp nginx-1.6.2/conf/nginx.conf  nginx/conf/
cp nginx-1.6.2/conf/mime.types  nginx/conf/
tar czvf nginx.tar.gz nginx
# copy packaged file where I can get it to add to git repository
scp nginx.tar.gz user@myhost.domain.com:~/

After the packaged file is in place, vendor/nginx.tar.gz takes the place of the nginx source archive in the buildpack repository.


Updated compile script

The compile script is quite different. Here’s what I ended up with.

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
shopt -s dotglob    # enables commands like 'mv *' to see hidden files
set -e              # exit immediately if any command fails (non-zero status)
# create local variables pointing to key paths
app_files_dir=$1
cache_dir=$2
buildpack_dir=$(cd $(dirname $0) && cd .. && pwd)
# unpackage nginx
mkdir -p $cache_dir
cp $buildpack_dir/vendor/nginx.tar.gz $cache_dir
tar xzf $cache_dir/nginx.tar.gz -C $cache_dir
# move application files into public directory
mkdir -p $cache_dir/public
mv $app_files_dir/* $cache_dir/public/
# ensure manifest not in public directory
if [ -f $cache_dir/public/manifest.yml ]; then rm $cache_dir/public/manifest.yml; fi
if [ -f $cache_dir/public/stackato.yml ]; then rm $cache_dir/public/stackato.yml; fi
# put everything in place for droplet creation
mv $buildpack_dir/bin/launch.sh $app_files_dir/
mv $cache_dir/public $app_files_dir/
mv $cache_dir/nginx $app_files_dir/

Now, rather than building nginx, it is simply unpacked into place.

Staging Performance

Pre-compiling can save a lot of time when staging. For example, in this case the following times were observed when staging under each scenario.

Time to stage build from source

[stackato[dea_ng]] 2014-10-22T15:42:34.000Z: Completed uploading droplet
[staging]          2014-10-22T15:41:16.000Z:

That’s a total of 1:18 to build from source as part of the staging process.

Time to stage pre-compiled

[stackato[dea_ng]] 2014-10-22T16:33:38.000Z: Completed uploading droplet
[staging]          2014-10-22T16:33:32.000Z:

The total time to stage is 0:06 when the binary resources are precompiled. That’s 92.3% faster staging. That can be a significant advantage when staging applications which employ a very heavy compile process to establish the runtime environment.
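That 92.3% figure follows directly from the two timing samples above (1:18, or 78 seconds, building from source versus 6 seconds pre-compiled); the arithmetic can be checked with a one-liner:

```shell
# staging took 78s from source and 6s pre-compiled (from the log timestamps above)
awk 'BEGIN { printf "%.1f\n", (78 - 6) / 78 * 100 }'   # prints 92.3 (percent reduction)
```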


It could be argued that a pre-compiled binary is more likely to have a conflict at deploy time. In many deployment scenarios I would be tempted to agree. However, since the deployment environment is a Docker container and will presumably be identical to the container used to compile the binary, this risk is relatively low.

One question is whether a git repository is an appropriate place to store binary files. If this is a concern, the binary files can be stored on a file server, CDN or any location which can be accessed by the staging Docker container. This will keep the git repository clean with text only and can also provide a file system collection of binary artifacts that are available independent of git.
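A compile script that pulls the binary from such an external location might follow the sketch below. The store directory here is a local stand-in for a file server or CDN; in a real script the cp would typically be a curl or wget against that host.

```shell
# simulate an external artifact store with a local directory (stand-in for a CDN/file server)
store=/tmp/artifact-store
mkdir -p $store
echo "binary artifact placeholder" > $store/nginx.tar.gz
# inside compile: fetch the pre-built artifact into the cache instead of keeping it in git
cache_dir=/tmp/demo-cache
mkdir -p $cache_dir
cp $store/nginx.tar.gz $cache_dir/   # real script: curl -sSf -o $cache_dir/nginx.tar.gz http://<store-host>/nginx.tar.gz
ls $cache_dir
```

This keeps the git history small while the rest of the compile script (unpack and move into place) stays unchanged.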

Software Engineering

nginx buildpack – offline

I previously documented the process to create a buildpack for nginx to use with Stackato or Helion Dev Platform. In that buildpack example, the compile script would download the nginx source using wget. In some cases, the time, bandwidth or access required to download external resources may be undesirable. In those cases the buildpack can be adjusted to work offline by adding the external resources to the git repository, under the buildpack's vendor directory.


Updated compile script

The only buildpack-related file that needs to change to accommodate this is the compile script. The affected section is "download and build nginx".

# download and build nginx
mkdir -p $cache_dir
cp $buildpack_dir/vendor/nginx-1.6.2.tar.gz $cache_dir
tar xzf $cache_dir/nginx-1.6.2.tar.gz -C $cache_dir
cd $cache_dir/nginx-1.6.2

Notice that the source is now copied from the vendor directory in the buildpack instead of being downloaded with wget.

Using buildpack branches

In this case I created a branch of my existing buildpack to make these changes. By appending #branchname to the repository URL, it is possible to identify that branch in the manifest.yml. In this case my branch name was offline.

- name: static-test
  buildpack: https://github.com/dwatrous/buildpack-nginx.git#offline
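
Creating such a branch is ordinary git; the scratch repository path and identity below are placeholders, not the actual buildpack checkout.

```shell
# create an 'offline' branch in a scratch repository (path and identity are placeholders)
git init -q /tmp/buildpack-scratch
git -C /tmp/buildpack-scratch -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
git -C /tmp/buildpack-scratch checkout -q -b offline
git -C /tmp/buildpack-scratch rev-parse --abbrev-ref HEAD   # prints: offline
```

After pushing the branch, the #offline suffix in manifest.yml tells CloudFoundry which branch to clone.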
Software Engineering

nginx buildpack – realtime

CloudFoundry accommodates buildpacks, which define a deployment environment. A buildpack is distinct from an application and provides everything the application needs to run, including web server, language runtime, libraries, etc. The most basic structure for a buildpack requires three files, detect, compile and release, inside a directory named bin.


The buildpack files discussed in this post can be cloned or forked at https://github.com/dwatrous/buildpack-nginx

Some quick points about these buildpack files

  • All three files must be executable via bash
  • Can be shell scripts or any language that can be invoked using bash
  • Explicit use of paths is recommended


The detect script is intended to examine application files to determine whether it can accommodate the application. If it finds evidence that it can provide an environment to run an application, it should echo a message and exit with a status of zero. Otherwise it should exit with a non-zero status. In this example, the detect script looks for an index file with the extension html or htm.

#!/usr/bin/env bash
if [[ ( -f $1/index.html || -f $1/index.htm ) ]]
then
  echo "Static" && exit 0
else
  exit 1
fi


The compile script is responsible for gathering, building and positioning everything exactly as it needs to be in order to create the droplet that will be used to deploy the application. The second line of the script below shows that CloudFoundry calls the compile script and passes in the paths to build-dir and cache-dir.

Develop and test the compile script

All of the commands in this file will run on a docker instance created specifically to perform the staging operation. It’s possible to develop the compile script inside the same docker container that will be used to stage your application.

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
shopt -s dotglob    # enables commands like 'mv *' to see hidden files
set -e              # exit immediately if any command fails (non-zero status)
# create local variables pointing to key paths
app_files_dir=$1
cache_dir=$2
buildpack_dir=$(cd $(dirname $0) && cd .. && pwd)
# download and build nginx
mkdir -p $cache_dir
cd $cache_dir
wget http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
# create hierarchy with only needed files
mkdir -p $cache_dir/nginx/bin
mkdir -p $cache_dir/nginx/conf
mkdir -p $cache_dir/nginx/logs
cp $cache_dir/nginx-1.6.2/objs/nginx $cache_dir/nginx/bin/nginx
cp $cache_dir/nginx-1.6.2/conf/nginx.conf $cache_dir/nginx/conf/nginx.conf
cp $cache_dir/nginx-1.6.2/conf/mime.types $cache_dir/nginx/conf/mime.types
# move application files into public directory
mkdir -p $cache_dir/public
mv $app_files_dir/* $cache_dir/public/
# copy nginx error template
cp $cache_dir/nginx-1.6.2/html/50x.html $cache_dir/public/50x.html
# ensure manifest not in public directory
if [ -f $cache_dir/public/manifest.yml ]; then rm $cache_dir/public/manifest.yml; fi
if [ -f $cache_dir/public/stackato.yml ]; then rm $cache_dir/public/stackato.yml; fi
# put everything in place for droplet creation
mv $buildpack_dir/bin/launch.sh $app_files_dir/
mv $cache_dir/public $app_files_dir/
mv $cache_dir/nginx $app_files_dir/

Notice that after nginx has been compiled, the desired files, such as the nginx binary and configuration file, must be explicitly copied into place so they are included when the droplet is packaged up. In this case I put the application files in a sub-directory called public and the nginx binary, conf and logs in a sub-directory called nginx. A shell script, launch.sh, is also copied into the root of the application directory; it is explained below.


The output of the release script is a YAML file, but it’s important to understand that the release file itself must be an executable script. The key detail in the release file is the line that indicates how to start the web server process. In some cases that may require several commands, which would require them to be encapsulated in another script, such as the launch.sh script shown below.

#!/usr/bin/env bash
cat <<YAML
---
default_process_types:
  web: sh launch.sh
YAML


The launch.sh script creates a configuration file for nginx that includes the PORT and HOME directory for this specific docker instance. It then starts nginx as the local user.

#!/usr/bin/env bash
# create nginx conf file with PORT and HOME directory from cloudfoundry environment variables
mv $HOME/nginx/conf/nginx.conf $HOME/nginx/conf/nginx.conf.original
sed "s|\(^\s*listen\s*\)80|\1$PORT|" $HOME/nginx/conf/nginx.conf.original > $HOME/nginx/conf/nginx.conf
sed -i "s|\(^\s*root\s*\)html|\1$HOME/public|" $HOME/nginx/conf/nginx.conf
# start nginx web server
$HOME/nginx/bin/nginx -c $HOME/nginx/conf/nginx.conf -p $HOME/nginx
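
The listen-port substitution in launch.sh can be checked in isolation; the PORT value and one-line conf fragment below are made up for the demonstration.

```shell
# demonstrate the listen-port rewrite from launch.sh on a one-line conf fragment
PORT=45123                                   # stand-in for the CloudFoundry-assigned port
printf '    listen       80;\n' > /tmp/nginx.conf.original
sed "s|\(^\s*listen\s*\)80|\1$PORT|" /tmp/nginx.conf.original
# prints:     listen       45123;
```

The captured group keeps the original indentation and spacing, so only the port number changes.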

Using a buildpack

There are two ways to use a buildpack: either install it on the cloud controller node and let the detect script select it, or store it in a Git repository and provide the Git URL in manifest.yml. The Git repository must be publicly readable and accessible from the docker instance where staging will occur (proxy issues may interfere with this).

To use this buildpack, create two files in an empty directory.

manifest.yml

- name: static-test
  memory: 40M
  disk: 200M
  instances: 1
  buildpack: https://github.com/dwatrous/buildpack-nginx.git

index.html

<h1>Hello World!</h1>

From within that directory, target, login and push your app (can be done locally or in the cloud).
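Assembling the two files from the shell looks like this; the push itself requires the stackato client and a configured target, so those commands are shown as comments.

```shell
# assemble the two-file application described above
mkdir -p static-test-app
cat > static-test-app/manifest.yml <<'EOF'
- name: static-test
  memory: 40M
  disk: 200M
  instances: 1
  buildpack: https://github.com/dwatrous/buildpack-nginx.git
EOF
echo '<h1>Hello World!</h1>' > static-test-app/index.html
# then, from within static-test-app and with the stackato client installed:
#   stackato target <api-endpoint>
#   stackato login
#   stackato push -n
```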




Software Engineering

Buildpack staging process in Stackato and Helion

One of the major strengths of CloudFoundry is its adoption of buildpacks. A buildpack represents a blueprint which defines all runtime requirements for an application. These may include web server, application, language, library and any other requirements to run an application. There are many buildpacks available for common environments, such as Java, PHP, Python, Go, etc. It is also easy to fork an existing buildpack or create a new buildpack.

When an application is pushed to CloudFoundry, there is a staging step and a deploy step, as shown below. The buildpack comes in to play when an application is staged.


Staging, or droplet creation

A droplet is a compressed tar file which contains all application files and runtime dependencies to run the application. Depending on the technology, this may include source code for interpreted languages, like PHP, or bytecode or even compiled objects, as with Python or Java. It will also include binary executables needed to run the application files. For example, in the case of Java, this would be a Java Runtime Environment (JRE).
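Since a droplet is just a compressed tar file, its layout can be inspected with tar. The tiny droplet built below is a mock-up for illustration; the file names echo the nginx buildpack examples in this article.

```shell
# build a mock droplet to illustrate its contents (file names are illustrative)
mkdir -p /tmp/droplet-src/app/public
echo '<h1>Hello</h1>' > /tmp/droplet-src/app/public/index.html
touch /tmp/droplet-src/app/launch.sh
tar czf /tmp/droplet.tgz -C /tmp/droplet-src app
tar tzf /tmp/droplet.tgz    # list everything packaged into the droplet
```

A real droplet produced by staging would also contain the runtime binaries, such as the nginx directory from the compile script.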

The staging process, which produces a droplet that is ready to deploy, is shown in the following activity diagram.


Staging environment

The staging process happens on a docker instance created using the same docker image used to deploy the app. Once created, all application files are copied to the new docker staging instance. Among these application files may be a deployment definition in the form of a manifest.yml or stackato.yml. The YAML files can include information that will prepare or alter the staging environment, including installation of system libraries. The current docker image is based on Ubuntu.

Next we get to the buildpack. Most installations of CloudFoundry include a number of buildpacks for common technologies, such as PHP, Java, Python, Go, Node and Ruby. If no manifest is included, or if there is a manifest which doesn’t explicitly identify a buildpack, then CloudFoundry will loop through all available buildpacks and run the detect script to find a match.

If the manifest explicitly identifies a buildpack, which must be hosted in a GIT repository, then that buildpack will be cloned onto the staging instance and the compile process will proceed immediately.

Environment variables

CloudFoundry makes certain environment variables available to the docker instance used for staging and deploying apps. These should be treated as dynamic, since every instance (stage or deploy) will have different variables. For example, the PORT will be different for every host.

Here is a sample of the environment available during the staging step:

SSH_CLIENT= 41465 22


The detect script analyzes the application files looking for specific artifacts. For example, the Ruby buildpack looks for a Gemfile. The PHP buildpack looks for index.php or composer.json. The Java buildpack looks for a pom.xml. And so on for each technology. If a buildpack detects the required artifacts, it prints a message and exits with a system value of 0, indicating success. At this point, the buildpack files are also copied to the staging instance.


The compile script is responsible for doing the rest of the work to prepare the runtime environment. This may include downloading and compiling source files. It may include downloading or copying pre-compiled binaries. The compile script is responsible for arranging the application files and runtime artifacts in a way that accommodates configuration.

The compile script, as well as all runtime components, runs as a local user without sudo privileges. This often means that configuration files, log files, and other typical file system paths will need to be adapted to run as an unprivileged user.

The following image shows the directory structure on the docker staging instance, which is useful when creating a compile script.


Hooks are available for pre-staging and post-staging. These may be useful if access to external resources is needed during staging, such as authentication or other preliminary setup.

Create droplet

Once the compile script completes, everything in the user’s directory is packaged up as a compressed tar file. This is the droplet. The droplet is copied to the cloud controller and the staging instance is destroyed.

Software Engineering

Nginx in Docker for Stackato buildpacks

I’m about to write a few articles about creating buildpacks for Stackato, which is a derivative of CloudFoundry and the technology behind Helion Development Platform. The approach for deploying nginx in docker as part of a buildpack differs from the approach I published previously. There are a few reasons for this:

  • Stackato discourages root access to the docker image
  • All services will run as the stackato user
  • The PORT to which services must bind is assigned and in the free range (no root required)

Get a host setup to run docker

The easiest way to follow along with this tutorial is to deploy stackato somewhere like hpcloud.com. The resulting host will have the docker image you need to follow along below.

Manually prepare a VM to run docker containers

You can also use Vagrant to spin up a virtual host where you can run these commands.

vagrant init ubuntu/trusty64

Modify the Vagrantfile to contain this line

  config.vm.provision :shell, path: "bootstrap.sh"

Then create the bootstrap.sh file based on the details below.


#!/usr/bin/env bash
# set proxy variables
#export http_proxy=http://proxy.example.com:8080
#export https_proxy=https://proxy.example.com:8080
# install docker from the official package repository
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-get update
apt-get -y install lxc-docker
# need to add proxy specifically to docker config since it doesn't pick them up from the environment
#sed -i '$a export http_proxy=http://proxy.example.com:8080' /etc/default/docker
#sed -i '$a export https_proxy=https://proxy.example.com:8080' /etc/default/docker
# enable non-root use by vagrant user
groupadd docker
gpasswd -a vagrant docker
# restart to enable proxy
service docker restart
# give docker a few seconds to start up
sleep 2s
# load in the stackato/stack-alsek image (can take a while)
docker load < /vagrant/stack-alsek.tar

Get the stackato docker image

Before the last line in the above bootstrap.sh script will work, it’s necessary to place the docker image for Stackato in the same directory as the Vagrantfile. Unfortunately the Stackato docker image is not published independently, which makes it more complicated to get. One way is to deploy Stackato locally and grab a copy of the image with this command.

docker save stackato/stack-alsek > stack-alsek.tar

You might want to save a copy of stack-alsek.tar to save time in the future. I’m not sure if it can be published (legally), but you’ll want to update this image with each release of Stackato anyway.

Launch a new docker instance using the Stackato image

You should now have three files in the directory where you ran ‘vagrant init’.

-rw-rw-rw-   1 user     group          4998 Oct  9 14:01 Vagrantfile
-rw-rw-rw-   1 user     group           971 Oct  9 14:23 bootstrap.sh
-rw-rw-rw-   1 user     group    1757431808 Oct  9 14:02 stack-alsek.tar

At this point you should be ready to create a new VM and spin up a docker instance. First tell Vagrant to build the virtual server.

vagrant up

Next, log in to your server and create the docker container.

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -i -t stackato/stack-alsek:latest /bin/bash

Build and configure your services

Once you have a system set up and can create docker containers based on the Stackato image, you’re ready to craft your buildpack compile script. One of the first things I do is install the w3m browser so I can test my setup. In this example, I’m just going to build and test nginx. The same process could be used to build any number of steps into a compile script. It may be necessary to manage http_proxy, https_proxy and no_proxy environment variables for both the root and stackato users while completing the steps below.

apt-get -y install w3m
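
If a proxy is needed, the variables might be collected in a small profile script that both users can source; the proxy host below is a placeholder for your environment.

```shell
# write proxy settings to a sourceable file (proxy.example.com is a placeholder)
cat > /tmp/proxy.sh <<'EOF'
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1
EOF
. /tmp/proxy.sh
echo "$http_proxy"
```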

Since everything in a Stackato DEA is expected to run as the stackato user, we’ll switch to that user and move into the home directory.

root@33ad737d42cf:/# su stackato
stackato@33ad737d42cf:/$ cd
stackato@33ad737d42cf:~$ pwd

Next I’m going to grab the source for nginx and configure and make.

wget -e use_proxy=yes http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make

By this point nginx has been built successfully and we’re in the nginx source directory. Next I want to update the configuration file to use a non-privileged port. For now I’ll use 8080, but Stackato will assign the actual PORT when deployed.

mv conf/nginx.conf conf/nginx.conf.original
sed 's/\(^\s*listen\s*\)80/\1 8080/' conf/nginx.conf.original > conf/nginx.conf

I also need to make sure that there is a logs directory available for nginx error logs on startup.

mkdir logs

It’s time to test the nginx build, which we can do with the command below. A message should be displayed saying the test was successful.

stackato@33ad737d42cf:~/nginx-1.6.2$ ./objs/nginx -t -c conf/nginx.conf -p /home/stackato/nginx-1.6.2
nginx: the configuration file /home/stackato/nginx-1.6.2/conf/nginx.conf syntax is ok
nginx: configuration file /home/stackato/nginx-1.6.2/conf/nginx.conf test is successful

With the setup and configuration validated, it’s time to start nginx and verify that it works.

./objs/nginx -c conf/nginx.conf -p /home/stackato/nginx-1.6.2

At this point it should be possible to load the nginx welcome page using the command below.

w3m http://localhost:8080


Next steps

If an application requires other resources, this same process can be followed to build, integrate and test them within a running docker container based on the stackato image.