Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Posts tagged helion

Software Engineering

Managed Services in CloudFoundry

CloudFoundry defines a Service Broker API which can be implemented and added to a CloudFoundry installation to provide managed services for apps. In order to better understand the way managed services are created and integrated with CloudFoundry (and derivative technologies like Stackato and HP Helion Development Platform), I created an example service and implemented the Service Broker API to manage it. The code for both implementations is on GitHub.

Deploy the services

For this exercise I deployed bosh-lite on my Windows 7 laptop. I then followed the documented procedure to push cf-echo-service and cf-service-broker-python to CloudFoundry. The desired end state is two running apps in CloudFoundry that can be used for service broker management and testing.

NOTE: Deploy the echo-service first, then update the service-broker script so that it uses the cf-hosted echo-service. The relevant line in the service-broker.py script is shown.

service_base = "echo-service."


In this setup, it is necessary for the service-broker app to communicate with the echo-service app within CloudFoundry. This requires adding a security group and applying it to running services. The first step is to create a JSON file with the definition of the new security group.

    "protocol": "tcp",
    "destination": "",
    "ports": "80"

The security group can then be created using create-security-group. This command expects a security group name and a path to the JSON file created above. After creating the security group, it must be bound to running instances (optionally to staging instances too using bind-staging-security-group). If the service-broker was deployed prior to enabling this security group, the app will also need to be restarted.

vagrant@vagrant-ubuntu-trusty-64:~$ cf create-security-group port80 security.json
Creating security group port80 as admin
vagrant@vagrant-ubuntu-trusty-64:~$ cf security-groups
Getting security groups as admin
     Name              Organization   Space
#0   public_networks
#1   dns
#2   port80
vagrant@vagrant-ubuntu-trusty-64:~$ cf bind-running-security-group port80
Binding security group port80 to defaults for running as admin
TIP: Changes will not apply to existing running applications until they are restarted.
vagrant@vagrant-ubuntu-trusty-64:~$ cf restart service-broker
Stopping app service-broker in org myorg / space mydept as admin...
Starting app service-broker in org myorg / space mydept as admin...
0 of 1 instances running, 1 starting
1 of 1 instances running
App started
Showing health and status for app service-broker in org myorg / space mydept as admin...
requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: service-broker.
last uploaded: Mon Nov 24 15:59:07 UTC 2014
     state     since                    cpu    memory        disk
#0   running   2014-11-24 08:13:40 PM   0.0%   56M of 256M   0 of 1G

NOTE: I deploy them to CloudFoundry for convenience and reliability, but this is not required. The service could be any service and the service broker API implementation can be deployed anywhere.

vagrant@vagrant-ubuntu-trusty-64:~$ cf apps
Getting apps in org myorg / space mydept as admin...
name             requested state   instances   memory   disk   urls
echo-service     started           1/1         256M     1G     echo-service.
service-broker   started           1/1         256M     1G     service-broker.

Optional validation

It is possible to validate that the deployed apps work as expected. The curl commands below validate that the echo service is working properly.

vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://echo-service. -w "\n"
{"instance_id": "51897770-560b-11e4-b75a-9ad2017223a9", "state": "provision_success", "dashboard_url": "http://localhost:8090/dashboard/51897770-560b-11e4-b75a-9ad2017223a9"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://echo-service. -w "\n"
{"state": "bind_success", "id": "51897770-560b-11e4-b75a-9ad2017223a9", "app": "myapp"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service. -H "Content-Type: application/json" -d '{"message": "Hello World"}' -w "\n"
{"response": "Hello World"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service. -H "Content-Type: application/json" -d '{"message": "Hello World 2"}' -w "\n"
{"response": "Hello World 2"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service. -H "Content-Type: application/json" -d '{"message": "Hello Worldz!"}' -w "\n"
{"response": "Hello Worldz!"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X GET http://echo-service.
<table class="pure-table">
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://echo-service. -w "\n"
{"state": "unbind_success", "id": "51897770-560b-11e4-b75a-9ad2017223a9", "app": "myapp"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://echo-service. -w "\n"
{"state": "deprovision_success", "bindings": 0, "id": "51897770-560b-11e4-b75a-9ad2017223a9", "messages": 3}

The service broker API implementation can also be verified using curl as shown below.

vagrant@vagrant-ubuntu-trusty-64:~$ curl -X GET http://service-broker. -H "X-Broker-Api-Version: 2.3" -H "Authorization: Basic dXNlcjpwYXNz" -w "\n"
{"services": [{"name": "Echo Service", "dashboard_client": {"id": "client-id-1", "redirect_uri": "http://echo-service.", "secret": "secret-1"}, "description": "Echo back the value received", "id": "echo_service", "plans": [{"free": false, "description": "A large dedicated service with a big storage quota, lots of RAM, and many connections", "id": "big_0001", "name": "large"}], "bindable": true}, {"name": "Invert Service", "dashboard_client": null, "description": "Invert the value received", "id": "invert_service", "plans": [{"description": "A small shared service with a small storage quota and few connections", "id": "small_0001", "name": "small"}], "bindable": true}]}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://service-broker. -H "Content-type: application/json" -H "Authorization: Basic dXNlcjpwYXNz" -d '{"service_id": "echo_service", "plan_id": "small_0001", "organization_guid": "HP", "space_guid": "IT"}' -w "\n"
{"dashboard_url": "http://echo-service."}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://service-broker.  -H "Content-type: application/json" -H "Authorization: Basic dXNlcjpwYXNz" -d '{"service_id": "echo_service", "plan_id": "small_0001", "app_guid": "otherappid"}' -w "\n"
{"credentials": {"uri": "http://echo-service."}}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://service-broker.  -H "Authorization: Basic dXNlcjpwYXNz" -w "\n"
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://service-broker. -H "Authorization: Basic dXNlcjpwYXNz" -w "\n"
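As a hedged sketch (this is not the actual cf-service-broker-python code), the catalog document returned by the first GET above might be assembled like this; the field names mirror the JSON shown in the transcript:

```python
import json

# Illustrative sketch of a /v2/catalog response a Service Broker API
# implementation might return; values are taken from the output above,
# but the function itself is hypothetical.
def catalog():
    return {
        "services": [{
            "id": "echo_service",
            "name": "Echo Service",
            "description": "Echo back the value received",
            "bindable": True,
            "plans": [{
                "id": "big_0001",
                "name": "large",
                "description": "A large dedicated service with a big storage quota, lots of RAM, and many connections",
                "free": False,
            }],
        }]
    }

catalog_json = json.dumps(catalog())
```

A broker only needs to serve this document (plus the provision, bind, unbind and deprovision endpoints) to be registered with the cloud controller.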

Manage the Service Broker

In CloudFoundry, the service broker can be added using the create-service-broker command with the cf cli.

vagrant@vagrant-ubuntu-trusty-64:~$ cf create-service-broker echo-broker user pass http://service-broker.
Creating service broker echo-broker as admin...
vagrant@vagrant-ubuntu-trusty-64:~$ cf service-brokers
Getting service brokers as admin...
name          url
echo-broker   http://service-broker.

The service broker is added, but the service plans in the catalog do not allow public access by default, as can be seen with the call to service-access below. This also means that calls to marketplace show no available services.

vagrant@vagrant-ubuntu-trusty-64:~$ cf service-access
Getting service access as admin...
broker: echo-broker
   service          plan    access   orgs
   Echo Service     large   none
   Invert Service   small   none
vagrant@vagrant-ubuntu-trusty-64:~$ cf marketplace
Getting services from marketplace in org myorg / space mydept as admin...
No service offerings found

The service plan must be enabled to allow public consumption. This is done using the enable-service-access call and providing the name of the service; in this case, the “Echo Service” is desired. After enabling the service plan, the service-access listing shows it as available, and a call to marketplace shows it available to apps.

vagrant@vagrant-ubuntu-trusty-64:~$ cf enable-service-access "Echo Service"
Enabling access to all plans of service Echo Service for all orgs as admin...
vagrant@vagrant-ubuntu-trusty-64:~$ cf service-access
Getting service access as admin...
broker: echo-broker
   service          plan    access   orgs
   Echo Service     large   all
   Invert Service   small   none
vagrant@vagrant-ubuntu-trusty-64:~$ cf marketplace
Getting services from marketplace in org myorg / space mydept as admin...
service        plans   description
Echo Service   large   Echo back the value received

Manage Services

With the new service in the marketplace, it’s now possible to provision, bind and use instances of the new service by way of the cf cli. This happens in a few steps in CloudFoundry. First an instance of the service is provisioned. An app is then pushed to CloudFoundry. Finally the service and the app are associated through a binding. The call to create-service expects three arguments which can be found in the marketplace output above.

  • service name
  • service plan
  • a name for the new service instance
vagrant@vagrant-ubuntu-trusty-64:~$ cf create-service "Echo Service" large test-echo-service
Creating service test-echo-service in org myorg / space mydept as admin...
vagrant@vagrant-ubuntu-trusty-64:~$ cf services
Getting services in org myorg / space mydept as admin...
name                service        plan    bound apps
test-echo-service   Echo Service   large

Push an app

Next step is to create a simple app to which the service can be bound. It’s important to understand how the service details will be injected into the app environment. Service details will be contained in a JSON document under the key VCAP_SERVICES. The structure of the service details is shown below.

  "Echo Service": [
      "name": "test-php-echo",
      "label": "Echo Service",
      "tags": [
      "plan": "large",
      "credentials": {
        "uri": ""

For this example, the PHP script below extracts, decodes and isolates the service URI from the VCAP_SERVICES environment variable. The cURL library is used to send a call to that URI containing a basic message JSON document. The response is then printed using var_dump.

// extract injected service details from environment
$services_json = getenv("VCAP_SERVICES");
$services = json_decode($services_json);
$service_uri = $services->{'Echo Service'}[0]->{'credentials'}->{'uri'};
// create a message to send
$message = '{"message": "Hello CloudFoundry!"}';
// setup and execute the cURL call to the echo service
$curl = curl_init($service_uri);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_HTTPHEADER, array("Content-type: application/json"));
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $message);
$response = curl_exec($curl);
// handle non-200 response
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
if ( $status != 200 ) {
    die("Error: call to URL $service_uri failed with status $status, response $response, curl_error " . curl_error($curl) . ", curl_errno " . curl_errno($curl));
}
// clean up cURL connection
curl_close($curl);
// decode response and output
var_dump($service_uri);
$response_as_json = json_decode($response);
var_dump($response_as_json);
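For comparison, the same extraction can be sketched in Python (assuming the standard VCAP_SERVICES structure shown above; the function name and default label are illustrative):

```python
import json
import os

# Illustrative sketch: read the VCAP_SERVICES document that CloudFoundry
# injects into the app environment and pull out the bound service URI,
# just as the PHP snippet does.
def get_service_uri(service_label="Echo Service"):
    services = json.loads(os.getenv("VCAP_SERVICES", "{}"))
    bindings = services.get(service_label, [])
    if not bindings:
        return None  # service not bound to this app
    return bindings[0]["credentials"]["uri"]
```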

A simple manifest file is provided for the deployment.

- name: echo-php
  framework: php

The app can then be pushed to CloudFoundry. The resulting list of apps is shown.

vagrant@vagrant-ubuntu-trusty-64:~/echo-php$ cf apps
Getting apps in org myorg / space mydept as admin...
name             requested state   instances   memory   disk   urls
echo-service     started           1/1         256M     1G     echo-service.,
service-broker   started           1/1         256M     1G     service-broker.
echo-php         started           1/1         256M     1G     echo-php.

Bind the service to the app

With an existing service and an existing app, it’s possible to bind the service to the app. This is done using bind-service, as shown below. Note that it is necessary to restage the app after binding the service so that the environment variables will be properly injected.

vagrant@vagrant-ubuntu-trusty-64:~$ cf bind-service echo-php test-echo-service
Binding service test-echo-service to app echo-php in org myorg / space mydept as admin...
TIP: Use 'cf restage' to ensure your env variable changes take effect
vagrant@vagrant-ubuntu-trusty-64:~$ cf restage echo-php
Restaging app echo-php in org myorg / space mydept as admin...
-----> Downloaded app package (4.0K)
-----> Downloaded app buildpack cache (4.0K)
-------> Buildpack version 1.0.2
Use locally cached dependencies where possible
 !     WARNING:        No composer.json found.
       Using index.php to declare PHP applications is considered legacy
       functionality and may lead to unexpected behavior.
       See https://devcenter.heroku.com/categories/php
-----> Setting up runtime environment...
       - PHP 5.5.12
       - Apache 2.4.9
       - Nginx 1.4.6
-----> Installing PHP extensions:
       - opcache (automatic; bundled, using 'ext-opcache.ini')
-----> Installing dependencies...
       Composer version ac497feabaa0d247c441178b7b4aaa4c61b07399 2014-06-10 14:13:12
       Warning: This development build of composer is over 30 days old. It is recommended to update it by running "/app/.heroku/php/bin/composer self-update" to get the latest version.
       Loading composer repositories with package information
       Installing dependencies
       Nothing to install or update
       Generating optimized autoload files
-----> Building runtime environment...
       NOTICE: No Procfile, defaulting to 'web: vendor/bin/heroku-php-apache2'
-----> Uploading droplet (64M)
0 of 1 instances running, 1 starting
1 of 1 instances running
App started
Showing health and status for app echo-php in org myorg / space mydept as admin...
requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: echo-php.
last uploaded: Mon Nov 24 21:34:29 UTC 2014
     state     since                    cpu    memory          disk
#0   running   2014-11-24 09:39:27 PM   0.0%   88.4M of 256M   0 of 1G

The echo-php app can be called using cURL from the command line to verify proper connectivity between the app and the new service.

vagrant@vagrant-ubuntu-trusty-64:$ curl http://echo-php.
string(117) "http://echo-service."
object(stdClass)#4 (1) {
  ["response"]=>
  string(19) "Hello CloudFoundry!"
}

The existing service can be bound to additional app instances. It is also trivial to create new service instances. If the service is no longer needed by the application, it is straightforward to unbind and deprovision.

Software Engineering

External Services in CloudFoundry

CloudFoundry, Stackato and Helion Development Platform accommodate (and encourage) external services for persistent application needs. The types of services include relational databases, like MySQL or PostgreSQL, NoSQL datastores, like MongoDB, messaging services like RabbitMQ and even cache technologies like Redis and Memcached. In each case, connection details, such as a URL, PORT and credentials, are maintained by the cloud controller and injected into the environment of new application instances.



It’s important to understand that regardless of how the cloud controller receives details about the service, the process of getting those details to application instances is the same. Much like applications use dependency injection, the cloud controller injects environment variables into each application instance. The application is written to use these environment variables to establish a connection to the external resource. From an application perspective, this looks the same whether a warden or docker container is used.

Connecting to the Service

Connecting to the service from the application instance is the responsibility of the application. There is nothing in CloudFoundry or its derivatives that facilitates this connection beyond injecting the connection parameters into the application instance container. The fact that there is no intermediary between the application instance and the service means that there is no additional latency or potential disconnect. However, the fact that CloudFoundry can scale to an indefinite number of application instances does mean the external service must be able to accommodate all the connections that will result.

Connection pooling is a popular method to reduce the overhead of creating new connections. Since CloudFoundry scales out to many instances, managing connection pooling in the application may be less valuable: each instance's pool increases its memory usage while consuming connections that must be shared among all instances.
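To make the arithmetic concrete (the numbers here are hypothetical):

```python
# Illustrative only: with per-instance pooling, the connection count seen
# by the external service grows linearly with the number of instances.
def total_connections(instances, pool_size):
    return instances * pool_size

# e.g. 50 scaled-out application instances each holding a pool of 10
# connections require the external service to accept 500 connections.
```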

Managed vs. Unmanaged

The Service Broker API may be implemented to facilitate provisioning, binding, unbinding and deprovisioning of resources. This is referred to as a managed service, since the life-cycle of the resource is managed by the PaaS. In the case of managed services, the user interacts with the service only by way of the CloudFoundry command line client.

In an unmanaged scenario, the service resource is provisioned outside of the PaaS. The user then provides connection details to the PaaS in one of two ways.

  • The first is to register it as a service that can then be bound to application instances.
  • The second is to add connection details manually as individual environment variable key/name pairs.

The three methods of incorporating services discussed in this post range from high to low touch and make it possible to incorporate any type of service, even existing services.

Use caution when designing services to prevent them from getting overwhelmed. The scalable character of CloudFoundry means that the number of instances making connections to a service can grow very quickly to an indeterminate number.

Software Engineering

nginx buildpack – installed

It’s possible to add a custom buildpack to Stackato or Helion Development Platform so that it’s available to all applications. When using an installed buildpack it is not necessary to include a manifest or identify the buildpack. Instead it will be selected by the detect script in the buildpack. All files are on the cloud controller node which eliminates download time and bandwidth.

Package the buildpack

Prepare your buildpack for installation by adding all files to a zip file (of any name). The bin folder should be in the root of the zip file, with any related files in their relative locations. This structure results naturally if the buildpack is cloned and zip is run from within the buildpack directory, as shown.

vagrant@vagrant-ubuntu-trusty-64:~$ git clone -b precompiled https://github.com/dwatrous/buildpack-nginx.git
Cloning into 'buildpack-nginx'...
remote: Counting objects: 156, done.
remote: Compressing objects: 100% (99/99), done.
remote: Total 156 (delta 44), reused 142 (delta 32)
Receiving objects: 100% (156/156), 1.80 MiB | 2.64 MiB/s, done.
Resolving deltas: 100% (44/44), done.
Checking connectivity... done.
vagrant@vagrant-ubuntu-trusty-64:~$ cd buildpack-nginx/
vagrant@vagrant-ubuntu-trusty-64:~/buildpack-nginx$ zip -r buildpack-nginx.zip *
  adding: bin/ (stored 0%)
  adding: bin/detect (deflated 19%)
  adding: bin/compile (deflated 57%)
  adding: bin/release (deflated 2%)
  adding: bin/launch.sh (deflated 54%)
  adding: README.md (deflated 44%)
  adding: vendor/ (stored 0%)
  adding: vendor/nginx.tar.gz (deflated 4%)
vagrant@vagrant-ubuntu-trusty-64:~/buildpack-nginx$ cp buildpack-nginx.zip /vagrant/

It may be obvious that the commands above were run in a Vagrant-built Ubuntu box. Buildpack scripts must have the executable bit set, which doesn't carry over when the zip file is created on Windows. To accommodate this, I created a Vagrant box, installed git, then cloned and zipped. I then copied the zip file into the vagrant folder for easy access.
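If a Vagrant box isn't available, one workaround when zipping on Windows (a sketch using Python's standard zipfile module; the file names are illustrative) is to record the Unix permission bits in the archive entries explicitly:

```python
import stat
import zipfile

# Store a file in the zip with the executable bit (0o755) recorded in the
# entry's external attributes, so it survives extraction on Unix.
def add_executable(zf, path, arcname):
    info = zipfile.ZipInfo(arcname)
    info.external_attr = (stat.S_IFREG | 0o755) << 16
    with open(path, "rb") as f:
        zf.writestr(info, f.read())
```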

The README.md isn’t necessary for the buildpack, obviously, but it doesn’t interfere either. Any unrelated files that are in the buildpack can be removed before creating the buildpack zip file, but they won’t have any impact if they are left.

Install the buildpack

Adding the buildpack to Stackato or Helion is very straightforward. Use the stackato or helion client to call create-buildpack, providing a name and the zip file, as shown.

C:\Users\watrous\Documents\buildpack>stackato create-buildpack daniel-static-nginx buildpack-nginx.zip
Creating new buildpack daniel-static-nginx ... OK
Uploading buildpack bits ...  OK

All installed buildpacks can be listed using the stackato client with the buildpacks command.

C:\Users\watrous\Documents\buildpack>stackato buildpacks
| # | Name                | Filename            | Enabled | Locked |
| 1 | java                | java.zip            | yes     | no     |
| 2 | ruby                | ruby.zip            | yes     | no     |
| 3 | nodejs              | nodejs.zip          | yes     | no     |
| 4 | python              | python.zip          | yes     | no     |
| 5 | go                  | go.zip              | yes     | no     |
| 6 | clojure             | clojure.zip         | yes     | no     |
| 7 | scala               | scala.zip           | yes     | no     |
| 8 | perl                | perl.zip            | yes     | no     |
| 9 | daniel-static-nginx | buildpack-nginx.zip | yes     | no     |

Use the buildpack

In the case of this buildpack, the detect script looks for an index file with an extension of either html or htm. To deploy using this buildpack, just make sure that a file matching that description is in the root of your application files. There is no need to include a manifest.

Software Engineering

Custom buildpacks in CloudFoundry

In the past PaaS has been rigid and invasive for application developers. CloudFoundry aims to change that perception of PaaS with the use of Buildpacks. A Buildpack allows an application developer to define his deployment environment in plain text. Some refer to this as infrastructure as code since the aspects of a deployment environment that were previously handled by system administrators on dedicated servers now exist in plain text alongside the application files.

What’s available out-of-the-box?

Before diving into custom buildpacks, many people will ask “What is available out-of-the-box with CloudFoundry?”. The answer is everything and nothing. It's true that some buildpacks come pre-configured within the CloudFoundry, Stackato or Helion environments. However, it's not entirely accurate or useful to consider these “a version available in CloudFoundry”. The article below provides crucial context for determining whether an application should leverage an out-of-the-box buildpack or customize one.

Custom Buildpacks

The process to create custom buildpacks includes the following steps.

  • Develop the process to build out the runtime environment
  • Understand the staging process
  • Create a buildpack that stages the application for deployment

Understand the buildpack staging process

The buildpack staging process comprises all the steps to download, compile, configure and prepare a runtime environment. The result of staging is a droplet (a gzipped tar file) which contains all application and runtime dependencies. When a new docker application container is created, the droplet is used to deploy the application.

Design the runtime environment

Designing the runtime environment should be done using the same docker container that will be used to stage and deploy the application. Helion and Stackato use an Ubuntu based Docker image prepared specifically for the CloudFoundry environment.

Create a buildpack

With steps to produce a runtime environment and an understanding of the staging process, it’s now possible to create a custom buildpack. This includes creating detect, compile and release scripts, in addition to any related files.

Software Engineering

nginx buildpack – pre-compiled

Previously I detailed the process to create a buildpack for Stackato or Helion, including reconfiguring the buildpack to be self-contained. In both previous examples, the compile script performed a configure and make on source to build the binaries for the application. Since the configure->make process is often slow, this can be done once and the binaries added to the git repository for the buildpack.

Pre-compile nginx

The first step is to build nginx to run in the Docker container for Stackato or Helion. These steps were previously in the compile script, but now they need to be run independently. After building, package up the necessary files and include that file in the git repository in place of the source file. Here are the commands I ran to accomplish this.

# create a docker image for the build process
docker run -i -t stackato/stack-alsek:latest /bin/bash
# download, configure and compile
wget -e use_proxy=yes http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
cd ..
# rearrange and package up the files
mkdir -p nginx/bin
mkdir -p nginx/conf
mkdir -p nginx/logs
cp nginx-1.6.2/objs/nginx nginx/bin/
cp nginx-1.6.2/conf/nginx.conf nginx/conf/
cp nginx-1.6.2/conf/mime.types nginx/conf/
tar czvf nginx.tar.gz nginx
# copy packaged file where I can get it to add to git repository
scp nginx.tar.gz user@myhost.domain.com:~/

After you get this new packaged file in place, the structure would look like this.


Updated compile script

The compile script is quite different. Here’s what I ended up with.

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
shopt -s dotglob    # enables commands like 'mv *' to see hidden files
set -e              # exit immediately if any command fails (non-zero status)
# create local variables pointing to key paths
app_files_dir=$1
cache_dir=$2
buildpack_dir=$(cd $(dirname $0) && cd .. && pwd)
# unpackage nginx
mkdir -p $cache_dir
cp $buildpack_dir/vendor/nginx.tar.gz $cache_dir
tar xzf $cache_dir/nginx.tar.gz -C $cache_dir
# move application files into public directory
mkdir -p $cache_dir/public
mv $app_files_dir/* $cache_dir/public/
# ensure manifest not in public directory
if [ -f $cache_dir/public/manifest.yml ]; then rm $cache_dir/public/manifest.yml; fi
if [ -f $cache_dir/public/stackato.yml ]; then rm $cache_dir/public/stackato.yml; fi
# put everything in place for droplet creation
mv $buildpack_dir/bin/launch.sh $app_files_dir/
mv $cache_dir/public $app_files_dir/
mv $cache_dir/nginx $app_files_dir/

Now, rather than building nginx, it is simply unpackaged into place.

Staging Performance

Pre-compiling can save a lot of time when staging. For example, in this case the following times were observed when staging under each scenario.

Time to stage build from source

[stackato[dea_ng]] 2014-10-22T15:42:34.000Z: Completed uploading droplet
[staging]          2014-10-22T15:41:16.000Z:

That’s a total of 1:18 to build from source as part of the staging process.

Time to stage pre-compiled

[stackato[dea_ng]] 2014-10-22T16:33:38.000Z: Completed uploading droplet
[staging]          2014-10-22T16:33:32.000Z:

The total time to stage is 0:06 when the binary resources are precompiled. That’s 92.3% faster staging. That can be a significant advantage when staging applications which employ a very heavy compile process to establish the runtime environment.
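The comparison can be rechecked from the dea_ng log timestamps above:

```python
from datetime import datetime

# Recompute the staging durations from the log timestamps shown above.
def staging_seconds(start, end):
    fmt = "%Y-%m-%dT%H:%M:%S"
    return int((datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds())

from_source = staging_seconds("2014-10-22T15:41:16", "2014-10-22T15:42:34")  # 78 s (1:18)
precompiled = staging_seconds("2014-10-22T16:33:32", "2014-10-22T16:33:38")  # 6 s
reduction = round((from_source - precompiled) / from_source * 100, 1)        # 92.3
```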


It could be argued that a pre-compiled binary is more likely to have a conflict at deploy time. In many deployment scenarios I would be tempted to agree. However, since the deployment environment is a Docker container and will presumably be identical to the container used to compile the binary, this risk is relatively low.

One question is whether a git repository is an appropriate place to store binary files. If this is a concern, the binary files can be stored on a file server, CDN or any location accessible to the staging Docker container. This keeps the git repository text-only and also provides a file system collection of binary artifacts that are available independent of git.

Software Engineering

nginx buildpack – offline

I previously documented the process to create a buildpack for nginx to use with Stackato or Helion Dev Platform. In that buildpack example, the compile script would download the nginx source using wget. In some cases, the time, bandwidth or access required to download external resources may be undesirable. In those cases the buildpack can be adjusted to work offline by adding the external resources to the git repository. The new structure would look like this.


Updated compile script

The only buildpack-related file that needs to change to accommodate this is the compile script. The affected section is “download and build nginx”.

# download and build nginx
mkdir -p $cache_dir
cp $buildpack_dir/vendor/nginx-1.6.2.tar.gz $cache_dir
tar xzf $cache_dir/nginx-1.6.2.tar.gz -C $cache_dir
cd $cache_dir/nginx-1.6.2

Notice that the source is now copied from the vendor directory in the buildpack instead of being downloaded using wget.

Using buildpack branches

In this case I created a branch of my existing buildpack to make these changes. By appending #branchname to the repository URL, it was possible to reference that branch in the manifest.yml. In this case my branch name was offline.

- name: static-test
  buildpack: https://github.com/dwatrous/buildpack-nginx.git#offline
Software Engineering

nginx buildpack – realtime

CloudFoundry accommodates buildpacks which define a deployment environment. A buildpack is distinct from an application and provides everything the application needs to run, including web server, language runtime, libraries, etc. The most basic structure for a buildpack requires three files (detect, compile and release) inside a directory named bin.


The buildpack files discussed in this post can be cloned or forked at https://github.com/dwatrous/buildpack-nginx

Some quick points about these buildpack files

  • All three files must be executable via bash
  • Can be shell scripts or any language that can be invoked using bash
  • Explicit use of paths recommended


The detect script is intended to examine application files to determine whether it can accommodate the application. If it finds evidence that it can provide an environment to run an application, it should echo a message and exit with a status of zero. Otherwise it should exit with a non-zero status. In this example, the detect script looks for an index file with the extension html or htm.

#!/usr/bin/env bash
if [[ ( -f $1/index.html || -f $1/index.htm ) ]]
then
  echo "Static" && exit 0
else
  exit 1
fi


The compile script is responsible for gathering, building and positioning everything exactly as it needs to be in order to create the droplet that will be used to deploy the application. The second line of the script below shows that CloudFoundry calls the compile script and passes in paths to build-dir and cache-dir.

Develop and test the compile script

All of the commands in this file will run on a Docker instance created specifically to perform the staging operation. It's possible to develop the compile script inside the same Docker container that will be used to stage your application.

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
shopt -s dotglob    # enables commands like 'mv *' to see hidden files
set -e              # exit immediately if any command fails (non-zero status)
# create local variables pointing to key paths
app_files_dir=$1
cache_dir=$2
buildpack_dir=$(cd $(dirname $0) && cd .. && pwd)
# download and build nginx
mkdir -p $cache_dir
cd $cache_dir
wget http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
# create hierarchy with only needed files
mkdir -p $cache_dir/nginx/bin
mkdir -p $cache_dir/nginx/conf
mkdir -p $cache_dir/nginx/logs
cp $cache_dir/nginx-1.6.2/objs/nginx $cache_dir/nginx/bin/nginx
cp $cache_dir/nginx-1.6.2/conf/nginx.conf $cache_dir/nginx/conf/nginx.conf
cp $cache_dir/nginx-1.6.2/conf/mime.types $cache_dir/nginx/conf/mime.types
# move application files into public directory
mkdir -p $cache_dir/public
mv $app_files_dir/* $cache_dir/public/
# copy nginx error template
cp $cache_dir/nginx-1.6.2/html/50x.html $cache_dir/public/50x.html
# ensure manifest not in public directory
if [ -f $cache_dir/public/manifest.yml ]; then rm $cache_dir/public/manifest.yml; fi
if [ -f $cache_dir/public/stackato.yml ]; then rm $cache_dir/public/stackato.yml; fi
# put everything in place for droplet creation
mv $buildpack_dir/bin/launch.sh $app_files_dir/
mv $cache_dir/public $app_files_dir/
mv $cache_dir/nginx $app_files_dir/

Notice that after nginx has been compiled, the desired files, such as the nginx binary and configuration file, must be explicitly copied into place where they will be included when the droplet is packaged up. In this case I put the application files in a sub-directory called public and the nginx binary, conf and logs in a sub-directory called nginx. A shell script, launch.sh, is also copied into the root of the application directory, which will be explained in a minute.


The release script writes YAML to stdout, but it’s important to understand that the release file itself must be an executable script. The key detail in the output is the line that indicates how to start the web server process. In some cases that may require several commands, which would mean encapsulating them in another script, such as the launch.sh script shown below.

#!/usr/bin/env bash
cat <<YAML
---
default_process_types:
  web: sh launch.sh
YAML


The launch.sh script creates a configuration file for nginx that includes the PORT and HOME directory for this specific docker instance. It then starts nginx as the local user.

#!/usr/bin/env bash
# create nginx conf file with PORT and HOME directory from cloudfoundry environment variables
mv $HOME/nginx/conf/nginx.conf $HOME/nginx/conf/nginx.conf.original
sed "s|\(^\s*listen\s*\)80|\1$PORT|" $HOME/nginx/conf/nginx.conf.original > $HOME/nginx/conf/nginx.conf
sed -i "s|\(^\s*root\s*\)html|\1$HOME/public|" $HOME/nginx/conf/nginx.conf
# start nginx web server
$HOME/nginx/bin/nginx -c $HOME/nginx/conf/nginx.conf -p $HOME/nginx
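The sed substitution in launch.sh can be verified in isolation against a sample conf fragment. In this sketch the port value and file contents are illustrative stand-ins for what CloudFoundry would provide:

```shell
#!/usr/bin/env bash
# Reproduce the PORT substitution from launch.sh against a sample conf fragment.
PORT=62345   # stand-in for the port CloudFoundry would assign
conf=$(mktemp)
printf '    listen       80;\n    root   html;\n' > "$conf"
sed -i "s|\(^\s*listen\s*\)80|\1$PORT|" "$conf"
result=$(cat "$conf")
echo "$result"
rm -f "$conf"
```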

Using a buildpack

There are two ways to use a buildpack: either install it on the cloud controller node and let the detect script select it, or store it in a Git repository and provide the Git URL in manifest.yml. The Git repository must be publicly readable and accessible from the docker instance where staging will occur (proxy issues may interfere with this).

To use this buildpack, create two files in an empty directory.

manifest.yml

applications:
- name: static-test
  memory: 40M
  disk: 200M
  instances: 1
  buildpack: https://github.com/dwatrous/buildpack-nginx.git

index.html

<h1>Hello World!</h1>

From within that directory, target, login and push your app (can be done locally or in the cloud).




Software Engineering

Buildpack staging process in Stackato and Helion

One of the major strengths of CloudFoundry is its adoption of buildpacks. A buildpack represents a blueprint which defines all runtime requirements for an application: web server, application server, language, libraries and anything else needed to run the application. There are many buildpacks available for common environments, such as Java, PHP, Python, Go, etc. It is also easy to fork an existing buildpack or create a new one.

When an application is pushed to CloudFoundry, there is a staging step and a deploy step, as shown below. The buildpack comes in to play when an application is staged.


Staging, or droplet creation

A droplet is a compressed tar file which contains all application files and runtime dependencies to run the application. Depending on the technology, this may include source code for interpreted languages, like PHP, or bytecode or even compiled objects, as with Python or Java. It will also include binary executables needed to run the application files. For example, in the case of Java, this would be a Java Runtime Environment (JRE).
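The droplet format itself is nothing exotic. This sketch assembles a gzipped tar with the same layout the nginx compile script above produces (all paths and file names here are illustrative):

```shell
#!/usr/bin/env bash
# Build a droplet-like archive with the layout the compile script produces.
staged=$(mktemp -d)
mkdir -p "$staged/public" "$staged/nginx/bin" "$staged/nginx/conf"
touch "$staged/public/index.html" "$staged/launch.sh"
tar czf "$staged.tgz" -C "$staged" .    # the droplet: a gzipped tar of the staged dir
contents=$(tar tzf "$staged.tgz")
echo "$contents"
rm -rf "$staged" "$staged.tgz"
```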

The staging process, which produces a droplet that is ready to deploy, is shown in the following activity diagram.


Staging environment

The staging process happens on a docker instance created using the same docker image used to deploy the app. Once created, all application files are copied to the new docker staging instance. Among these application files may be a deployment definition in the form of a manifest.yml or stackato.yml. The YAML files can include information that will prepare or alter the staging environment, including installation of system libraries. The current docker image is based on Ubuntu.

Next we get to the buildpack. Most installations of CloudFoundry include a number of buildpacks for common technologies, such as PHP, Java, Python, Go, Node and Ruby. If no manifest is included, or if there is a manifest which doesn’t explicitly identify a buildpack, then CloudFoundry will loop through all available buildpacks and run the detect script to find a match.

If the manifest explicitly identifies a buildpack, which must be hosted in a Git repository, then that buildpack will be cloned onto the staging instance and the compile process will proceed immediately.

Environment variables

CloudFoundry makes certain environment variables available to the docker instance used for staging and deploying apps. These should be treated as dynamic, since every instance (stage or deploy) will have different variables. For example, the PORT will be different for every host.

Here is a sample of the environment available during the staging step:

SSH_CLIENT= 41465 22
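Because these values change per instance, a start script should read them at runtime rather than hard-coding them. A minimal sketch (the fallback port is an assumption for local testing only):

```shell
#!/usr/bin/env bash
# PORT and HOME differ on every staging/deploy instance; read them at runtime.
PORT="${PORT:-8080}"   # fall back to 8080 for local testing only
echo "binding to port $PORT under $HOME"
```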


The detect script analyzes the application files looking for specific artifacts. For example, the Ruby buildpack looks for a Gemfile, the PHP buildpack looks for index.php or composer.json, and the Java buildpack looks for a pom.xml; and so on for each technology. If a buildpack detects the required artifacts, it prints a message and exits with a status of 0, indicating success. At this point, the buildpack files are also copied to the staging instance.


The compile script is responsible for doing the rest of the work to prepare the runtime environment. This may include downloading and compiling source files. It may include downloading or copying pre-compiled binaries. The compile script is responsible for arranging the application files and runtime artifacts in a way that accommodates configuration.

The compile script, as well as all runtime components, runs as a local user without sudo privileges. This often means that configuration files, log files, and other typical file system paths will need to be adapted to run as an unprivileged user.
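Concretely, all writable locations must live under the user's home directory. The sketch below uses a temp directory as a stand-in for $HOME and mirrors the nginx layout used in this series (paths are illustrative):

```shell
#!/usr/bin/env bash
# A temp dir stands in for $HOME on the staging instance; all writable
# paths (conf, logs) are kept under it, since root-owned paths are off limits.
approot=$(mktemp -d)
mkdir -p "$approot/nginx/logs" "$approot/nginx/conf"
touch "$approot/nginx/logs/error.log"   # user-owned path: succeeds
# touch /var/log/nginx/error.log        # root-owned path: would fail as unprivileged user
ls "$approot/nginx/logs"
```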

The following image shows the directory structure on the docker staging instance, which will be useful when creating a compile script.


Hooks are available for pre-staging and post-staging. These may be useful if access to external resources is needed during staging and requires authentication or other preliminary setup.

Create droplet

Once the compile script completes, everything in the user’s directory is packaged up as a compressed tar file. This is the droplet. The droplet is copied to the cloud controller and the staging instance is destroyed.

Software Engineering

Overview of CloudFoundry

CloudFoundry is an open source Platform as a Service (PaaS) technology originally introduced and commercially supported by Pivotal. The software makes it possible to easily stage, deploy and scale applications, thanks in part to its adoption of buildpacks, which were originally introduced by Heroku.

Some software design principles are required to achieve scale with CloudFoundry. The most notable design choice is a complete abstraction of persistence, including the filesystem, datastore and even in-memory cache. This is because instances of an application are transient and stateless. Since this is generally good design anyway, many applications may find it easy to migrate to CloudFoundry.

CloudFoundry Internals

CloudFoundry can be viewed from two perspectives: CloudFoundry internals and Application Developers who want to deploy on CloudFoundry. The image directly below is from the CloudFoundry architecture overview.


This shows the internal components that make up CloudFoundry. The router is the internet facing component that matches requests up with application instances. It performs load balancing among instances of an application. Unlike a load balancer, there is no concept of sticky sessions, since application instances are assumed to be stateless.

The DEA (Droplet Execution Agent) is a compute resource (usually a virtual server, but it can be any compute resource, including bare metal). CloudFoundry uses a technology called Warden for containerization. Other distributions use alternative technologies, like docker.

Services, such as database, cache, filesystem, etc. must implement a Service Broker API. Through this API, CloudFoundry is able to discover, provision and facilitate communication of credentials to each instance of an application.

Application Development

Application Developers interact with CloudFoundry in a few different ways. Two common methods include the command line client and Eclipse plugin. Using these tools, developers may login to a CloudFoundry installation and deploy apps under organizations and spaces. The following diagram illustrates this.


When a developer is ready to deploy, he pushes his app. CloudFoundry then identifies a suitable buildpack and stages the application resulting in a droplet. A droplet is a compressed tar file which contains all the application files and runtime dependencies. After staging, app instances are created and the droplet is extracted on the new instance at which point the application is started. If services are required, these are provisioned when the application is pushed.


There are many contributors to open source CloudFoundry. This has resulted in various distributions of CloudFoundry aside from Pivotal’s commercial offering.

ActiveState Stackato

ActiveState distributes CloudFoundry under the brand name Stackato. Some notable differences include the use of docker for instances and a web interface that includes an app store for deploying common applications.

HP Helion Development Platform

Hewlett-Packard offers an enterprise-focused distribution of Stackato as the HP Helion Development Platform. The enterprise focus includes an emphasis on the ability to use private cloud, public cloud and traditional IT to cost-effectively, securely and reliably deploy and scale mission-critical applications.

Getting started with CloudFoundry

It’s easy to get started with CloudFoundry. Here are a couple of tutorials that will get you ready to quickly deploy apps.

CloudFoundry on HPCloud.com
Install CloudFoundry in VirtualBox

Software Engineering

Nginx in Docker for Stackato buildpacks

I’m about to write a few articles about creating buildpacks for Stackato, which is a derivative of CloudFoundry and the technology behind Helion Development Platform. The approach for deploying nginx in docker as part of a buildpack differs from the approach I published previously. There are a few reasons for this:

  • Stackato discourages root access to the docker image
  • All services will run as the stackato user
  • The PORT to which services must bind is assigned and in the free range (no root required)

Get a host setup to run docker

The easiest way to follow along with this tutorial is to deploy stackato somewhere like hpcloud.com. The resulting host will have the docker image you need to follow along below.

Manually prepare a VM to run docker containers

You can also use Vagrant to spin up a virtual host where you can run these commands.

vagrant init ubuntu/trusty64

Modify the Vagrantfile to contain this line

  config.vm.provision :shell, path: "bootstrap.sh"

Then create the bootstrap.sh file based on the details below.


#!/usr/bin/env bash
# set proxy variables
#export http_proxy=http://proxy.example.com:8080
#export https_proxy=https://proxy.example.com:8080
# install docker from the official docker apt repository
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-get update
apt-get -y install lxc-docker
# need to add proxy specifically to docker config since it doesn't pick them up from the environment
#sed -i '$a export http_proxy=http://proxy.example.com:8080' /etc/default/docker
#sed -i '$a export https_proxy=https://proxy.example.com:8080' /etc/default/docker
# enable non-root use by vagrant user
groupadd docker
gpasswd -a vagrant docker
# restart to enable proxy
service docker restart
# give docker a few seconds to start up
sleep 2s
# load in the stackato/stack-alsek image (can take a while)
docker load < /vagrant/stack-alsek.tar

Get the stackato docker image

Before the last line in the above bootstrap.sh script will work, it’s necessary to place the docker image for Stackato in the same directory as the Vagrantfile. Unfortunately the Stackato docker image is not published independently, which makes it more complicated to get. One way to do this is to deploy stackato locally and grab a copy of the image with this command.

docker save stackato/stack-alsek > stack-alsek.tar

You might want to save a copy of stack-alsek.tar to save time in the future. I’m not sure if it can be published (legally), but you’ll want to update this image with each release of Stackato anyway.

Launch a new docker instance using the Stackato image

You should now have three files in the directory where you did ‘vagrant init’.

-rw-rw-rw-   1 user     group          4998 Oct  9 14:01 Vagrantfile
-rw-rw-rw-   1 user     group           971 Oct  9 14:23 bootstrap.sh
-rw-rw-rw-   1 user     group    1757431808 Oct  9 14:02 stack-alsek.tar

At this point you should be ready to create a new VM and spin up a docker instance. First tell Vagrant to build the virtual server.

vagrant up

Next, log in to your server and create the docker container.

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -i -t stackato/stack-alsek:latest /bin/bash

Build and configure your services

Once you have a system set up and can create docker instances based on the Stackato image, you’re ready to craft your buildpack compile script. One of the first things I do is install the w3m browser so I can test my setup. In this example, I’m just going to build and test nginx. The same process could be used to build any number of steps into a compile script. It may be necessary to manage http_proxy, https_proxy and no_proxy environment variables for both root and stackato users while completing the steps below.
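For example, the proxy variables might be exported like this before running any build steps (the proxy host and port are assumptions; substitute your own):

```shell
#!/usr/bin/env bash
# Illustrative proxy configuration; adjust host and port for your network.
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1
echo "$http_proxy"
```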

apt-get -y install w3m

Since everything in a stackato DEA is expected to run as the stackato user, we’ll switch to that user and move into the home directory.

root@33ad737d42cf:/# su stackato
stackato@33ad737d42cf:/$ cd
stackato@33ad737d42cf:~$ pwd
/home/stackato

Next I’m going to grab the source for nginx and configure and make.

wget -e use_proxy=yes http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make

By this point nginx has been built successfully and we’re in the nginx source directory. Next I want to update the configuration file to use a non-privileged port. For now I’ll use 8080, but Stackato will assign the actual PORT when deployed.

mv conf/nginx.conf conf/nginx.conf.original
sed 's/\(^\s*listen\s*\)80/\1 8080/' conf/nginx.conf.original > conf/nginx.conf

I also need to make sure that there is a logs directory available for nginx error logs on startup.

mkdir logs

It’s time to test the nginx build, which we can do with the command below. A message should be displayed saying the test was successful.

stackato@33ad737d42cf:~/nginx-1.6.2$ ./objs/nginx -t -c conf/nginx.conf -p /home/stackato/nginx-1.6.2
nginx: the configuration file /home/stackato/nginx-1.6.2/conf/nginx.conf syntax is ok
nginx: configuration file /home/stackato/nginx-1.6.2/conf/nginx.conf test is successful

With the setup and configuration validated, it’s time to start nginx and verify that it works.

./objs/nginx -c conf/nginx.conf -p /home/stackato/nginx-1.6.2

At this point it should be possible to load the nginx welcome page using the command below.

w3m http://localhost:8080


Next steps

If an application requires other resources, this same process can be followed to build, integrate and test them within a running docker container based on the stackato image.