Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Posts tagged cloudfoundry


High level view of Container Orchestration

Container orchestration is at the heart of a successful container architecture. Orchestration takes as input a definition of how a deployed application should look. This usually includes how many containers for a certain image are needed, volumes for persistent data, networking for communication between containers and awareness of various discovery mechanisms. Discovery may include such things as identifying other containers which are also participating with the application or how to access services required by the running containers. Here’s a high level view.

[Figure: container architecture]

Infrastructure

Containers need infrastructure to run. Both virtual and physical infrastructure can be used to host containers. Some argue that it’s better to run containers directly on physical servers to get the maximum performance. While there are performance benefits, there is also more operational overhead in standing up and maintaining physical servers. Automation available in virtual environments often makes it easier to provision, monitor and remediate servers. Using virtual infrastructure also makes it possible to share capacity between different types of workloads, where some may not be optimized for containers. Tools like Docker Cloud (formerly Tutum) and Rancher can streamline operations for virtual environments.

If all workloads will be containerized and top performance is critical, favor a physical deployment. If some applications will still require IaaS and capacity will be shared between various types of workloads, choose virtual.

Orchestration

Orchestration is the process by which containers are managed to ensure that a predefined application configuration is maintained. Orchestration tools typically require a plain text definition (usually YAML) of which container images are wanted, the networking between those containers, mounted volumes, etc. The orchestration tool is given this definition, which it uses to pull the necessary images, create containers, set up networking and mount storage.
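
As an illustrative sketch (not part of any specific tool’s documentation), a Docker Compose style definition might look like the following; the image names and volume path are hypothetical.

web:
  image: myorg/web:1.2
  ports:
    - "80"
  links:
    - db
db:
  image: postgres:9.4
  volumes:
    - /data/pg:/var/lib/postgresql/data

Handing a file like this to the tool (e.g. docker-compose up -d) triggers the pull, create, network and mount sequence described above.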

Kubernetes

Kubernetes (http://kubernetes.io/) was originally contributed to the open source community by Google and is based on Borg, the container technology Google has run internally for roughly a decade. It aims to be a comprehensive container management platform providing everything from orchestration to monitoring to service discovery and more. It abstracts the container technology in what it calls a pod, making it possible to use Docker or rkt or any other container technology that comes along in the future. For many people the appeal of this platform is that it has no direct tie back to a commercial vendor, so investment is more likely to be driven by the community.
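
As a sketch, a minimal pod definition might look like this; the pod name and image are hypothetical.

apiVersion: v1
kind: Pod
metadata:
  name: echo-pod
spec:
  containers:
  - name: echo
    image: myorg/echo-service:1.0
    ports:
    - containerPort: 8090

The pod could then be created with kubectl create -f echo-pod.yaml.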

Swarm

Docker Swarm is Docker’s orchestration layer. It is designed to integrate seamlessly with other Docker tools, including the Docker daemon and registry tools. Some of the appeal of Swarm has to do with simplicity. Swarm is more narrowly focused than Kubernetes, which may mean better focus and more flexibility in choosing the right solution for each container management need, although it works best when staying with Docker solutions.
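
As a sketch, assuming Swarm mode as introduced in Docker 1.12 (the service name and image are hypothetical):

# initialize a manager node
docker swarm init
# run three replicas of a container image as a managed service
docker service create --name echo --replicas 3 -p 8090:8090 myorg/echo-service:1.0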

PaaS

As containers continue to grow in prominence, some PaaS solutions, such as CloudFoundry, are reworking their narrative to position themselves as container management systems. It is true that the current version of CloudFoundry supports direct deployment of Docker container images and provides platform components, like routing, health management and scaling. A drawback to using a PaaS for container orchestration is that deployments become more prescriptive and provide less granular control over container deployment and interactions.
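
For example, on a Diego-backed CloudFoundry deployment with a recent cf cli, a Docker image can be pushed directly; this is a sketch with a hypothetical app and image name.

cf push echo-app -o myorg/echo-service:1.0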

Image management

Container images can be created in several ways, including a mechanism like a Dockerfile or other automation tools. Container images should never contain credentials or other sensitive data (see Discovery below). In some cases it may be appropriate to host an internal container registry. External registry options that provide private images may offer sufficient protection for some applications.
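
A minimal sketch of building an image and publishing it to an internal registry follows; the registry host, paths and tags are hypothetical, and note that no credentials are baked into the image.

# derive an application image from a curated internal base image
cat > Dockerfile <<'EOF'
FROM registry.internal:5000/base/ubuntu:14.04
COPY app /opt/app
CMD ["/opt/app/start.sh"]
EOF
docker build -t registry.internal:5000/myteam/myapp:1.0 .
docker push registry.internal:5000/myteam/myapp:1.0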

Another aspect of image security has to do with vulnerabilities. Some registry solutions provide image scanning tools that can detect vulnerabilities or out of date packages. When external images are used as a base for internal application images, these should be carefully curated and confirmed to be safe before using them to derive application images.

Automation

One motivation behind containerization is that it better accommodates Continuous Integration (CI) and Continuous Delivery (CD). When building CI/CD pipelines, it’s important that the orchestration layer make it easy to automate the lifecycle of containers for unit tests, functional tests, load tests and other automated verification of the current state of an application. The CI/CD pipeline may be responsible both for triggering container creation and for creating the container image, as sketched below.
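
A hedged sketch of what one such pipeline stage might run; the registry, tags and test script are hypothetical, and a CI tool would stop the pipeline on any failed step.

# build a candidate image tagged with the CI build number
docker build -t registry.internal:5000/myapp:${BUILD_NUMBER} .
# run unit tests in a throwaway container; a non-zero exit fails this stage
docker run --rm registry.internal:5000/myapp:${BUILD_NUMBER} ./run_tests.sh
# publish the image only after the tests pass
docker push registry.internal:5000/myapp:${BUILD_NUMBER}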

Two-way communication with CI/CD tooling is important so that the results of testing and validation can be reported and possibly acted on by the CI/CD tool in later pipeline stages.

Discovery

Discovery is the process by which a container identifies the other containers and services it needs in order to function, or registers itself so that it can be found by the containers it works with. Discovery may include scenarios such as finding a database or static file storage holding data the application needs, or identifying peer containers across which requests are distributed in order to accommodate synchronization.

Two common solutions for Discovery include a distributed key/value store and DNS. A distributed key/value store, such as etcd, ensures that each physical node hosting containers has a synchronized set of key/value data. In this scenario, the orchestration tool can add details about newly created containers to the key/value store so that existing containers are aware of them. New containers can query the key/value store to identify related containers and services.
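
A sketch of that flow using etcd’s command line client (the keys and values are hypothetical):

# the orchestration tool registers a newly created container
etcdctl set /services/echo/instance-1 '{"host": "10.0.1.12", "port": 8090}'
# a new container lists related containers and reads their details
etcdctl ls /services/echo
etcdctl get /services/echo/instance-1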

DNS based discovery (a popular tool is Consul) is very similar, except that DNS is used to manage resolution of services and containers based on URLs. In this way, new containers can simply call the predetermined URL and trust that the request will be routed to the appropriate container or resource. As containers change, DNS is updated in real time so that no changes are required on individual containers.
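
For example, a container could resolve a service registered in Consul through its DNS interface (the service name is hypothetical; port 8600 is Consul’s default DNS port):

# look up the address and port records for a registered service
dig @127.0.0.1 -p 8600 echo-service.service.consul SRV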


User provided services in CloudFoundry

This post builds on the discussion of Managed Services in CloudFoundry and covers the first of the two methods for using unmanaged services in CloudFoundry. It makes use of the Python echo service and Python service broker API implementation used previously.

Manually provision the service

This method assumes that an existing service has been provisioned outside of CloudFoundry. An instance of the echo service can be manually provisioned with cURL using the following commands. In this example, a new instance is provisioned with the id user-service-instance and bound to an app with id someapp. The new instance and binding are then confirmed to work by echoing a message.

vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://echo-service.10.244.0.34.xip.io/echo/user-service-instance -w "\n"
{"instance_id": "user-service-instance", "state": "provision_success", "dashboard_url": "http://localhost:8090/dashboard/user-service-instance"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://echo-service.10.244.0.34.xip.io/echo/user-service-instance/someapp -w "\n"
{"state": "bind_success", "id": "user-service-instance", "app": "someapp"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service.10.244.0.34.xip.io/echo/user-service-instance/someapp -H "Content-Type: application/json" -d '{"message": "Hello Test"}' -w "\n"
{"response": "Hello Test"}

The URI that should be provided to an application is

http://echo-service.10.244.0.34.xip.io/echo/user-service-instance/someapp

Add the service to CloudFoundry

The service can now be manually added to CloudFoundry as a user provided service. The CloudFoundry documentation suggests a predefined structure for the service parameters, but freeform JSON can be provided.

vagrant@vagrant-ubuntu-trusty-64:~$ cf cups user-echo-service -p '{"uri": "http://echo-service.10.244.0.34.xip.io/echo/user-service-instance/someapp"}'
Creating user provided service user-echo-service in org myorg / space mydept as admin...
OK
vagrant@vagrant-ubuntu-trusty-64:~$ cf services
Getting services in org myorg / space mydept as admin...
OK
 
name                service         plan    bound apps
test-echo-service   Echo Service    large   echo-php
user-echo-service   user-provided

The service is now available and can be bound to any existing app. Binding the user-provided service to an app will cause CloudFoundry to inject service details into each app instance. The next command assumes the existence of an app echo-php. The app must be restaged to enable environment injection.

vagrant@vagrant-ubuntu-trusty-64:~$ cf bind-service echo-php user-echo-service
Binding service user-echo-service to app echo-php in org myorg / space mydept as admin...
OK
TIP: Use 'cf restage' to ensure your env variable changes take effect
vagrant@vagrant-ubuntu-trusty-64:~$ cf restage echo-php
Restaging app echo-php in org myorg / space mydept as admin...
-----> Downloaded app package (4.0K)
-----> Downloaded app buildpack cache (4.0K)
-------> Buildpack version 1.0.2
Use locally cached dependencies where possible
 !     WARNING:        No composer.json found.
       Using index.php to declare PHP applications is considered legacy
       functionality and may lead to unexpected behavior.
       See https://devcenter.heroku.com/categories/php
-----> Setting up runtime environment...
       - PHP 5.5.12
       - Apache 2.4.9
       - Nginx 1.4.6
-----> Installing PHP extensions:
       - opcache (automatic; bundled, using 'ext-opcache.ini')
-----> Installing dependencies...
       Composer version ac497feabaa0d247c441178b7b4aaa4c61b07399 2014-06-10 14:13:12
       Warning: This development build of composer is over 30 days old. It is recommended to update it by running "/app/.heroku/php/bin/composer self-update" to get the latest version.
       Loading composer repositories with package information
       Installing dependencies
       Nothing to install or update
       Generating optimized autoload files
-----> Building runtime environment...
       NOTICE: No Procfile, defaulting to 'web: vendor/bin/heroku-php-apache2'
 
-----> Uploading droplet (64M)
 
0 of 1 instances running, 1 starting
1 of 1 instances running
 
App started
 
 
OK
Showing health and status for app echo-php in org myorg / space mydept as admin...
OK
 
requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: echo-php.10.244.0.34.xip.io
last uploaded: Mon Nov 24 21:44:40 UTC 2014
 
     state     since                    cpu    memory          disk
#0   running   2014-11-25 05:31:22 PM   0.0%   87.3M of 256M   0 of 1G

The env command can then be used to confirm that the user-provided service is available in the environment for the app.

vagrant@vagrant-ubuntu-trusty-64:~$ cf env echo-php
Getting env variables for app echo-php in org myorg / space mydept as admin...
OK
 
System-Provided:
{
 "VCAP_SERVICES": {
  "Echo Service": [
   {
    "credentials": {
     "uri": "http://echo-service.10.244.0.34.xip.io/echo/8dea3b8e-4230-4509-8879-5f94b4812985/d2944b66-2d9e-46fe-8e3d-d9b102cb5bca"
    },
    "label": "Echo Service",
    "name": "test-echo-service",
    "plan": "large",
    "tags": []
   }
  ],
  "user-provided": [
   {
    "credentials": {
     "uri": "http://echo-service.10.244.0.34.xip.io/echo/user-service-instance/someapp"
    },
    "label": "user-provided",
    "name": "user-echo-service",
    "syslog_drain_url": "",
    "tags": []
   }
  ]
 }
}
 
{
 "VCAP_APPLICATION": {
  "application_name": "echo-php",
  "application_uris": [
   "echo-php.10.244.0.34.xip.io"
  ],
  "application_version": "921b2927-cedd-4623-9db5-b0f48985de39",
  "limits": {
   "disk": 1024,
   "fds": 16384,
   "mem": 256
  },
  "name": "echo-php",
  "space_id": "275cf0c6-b10e-4ef1-afbc-966880feb09f",
  "space_name": "mydept",
  "uris": [
   "echo-php.10.244.0.34.xip.io"
  ],
  "users": null,
  "version": "921b2927-cedd-4623-9db5-b0f48985de39"
 }
}
 
No user-defined env variables have been set
 
No running env variables have been set
 
No staging env variables have been set

The environment output shows that this app has two service bindings. The first binding corresponds to a managed instance of the echo service. The second is the service instance that was manually provisioned in this post. It’s up to the app to extract and use the appropriate service, which means the app must specifically look for the service connection values that are injected.
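
As a sketch of that extraction from a shell, assuming the jq utility is available in the instance:

# pull the URI for the user-provided service out of VCAP_SERVICES
echo "$VCAP_SERVICES" | jq -r '.["user-provided"][] | select(.name == "user-echo-service") | .credentials.uri'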


Managed Services in CloudFoundry

CloudFoundry defines a Service Broker API which can be implemented and added to a CloudFoundry installation to provide managed services for apps. In order to better understand the way managed services are created and integrated with CloudFoundry (and derivative technologies like Stackato and HP Helion Development Platform), I created an example service and implemented the Service Broker API to manage it. The code for both implementations is on GitHub.

Deploy the services

For this exercise I deployed bosh-lite on my Windows 7 laptop. I then followed the documented procedure to push cf-echo-service and cf-service-broker-python to CloudFoundry. The desired end state is two running apps in CloudFoundry that can be used for service broker management and testing.

NOTE: Deploy the echo-service first. Then update the service-broker script so that it will use the cf hosted echo-service. The relevant lines in the service-broker.py script are shown.

# UPDATE THIS FOR YOUR ECHO SERVICE DEPLOYMENT
service_base = "echo-service.10.244.0.34.xip.io"

Security

In this setup, it is necessary for the service-broker app to communicate with the echo-service app within CloudFoundry. This requires adding a security group and applying it to running services. The first step is to create a JSON file with the definition of the new security group.

[
  {
    "protocol": "tcp",
    "destination": "10.244.0.34",
    "ports": "80"
  }
]

The security group can then be created using create-security-group. This command expects a security group name and a path to the JSON file created above. After creating the security group, it must be bound to running instances (optionally to staging instances too using bind-staging-security-group). If the service-broker was deployed prior to enabling this security group, the app will also need to be restarted.

vagrant@vagrant-ubuntu-trusty-64:~$ cf create-security-group port80 security.json
Creating security group port80 as admin
OK
vagrant@vagrant-ubuntu-trusty-64:~$ cf security-groups
Getting security groups as admin
OK
 
     Name              Organization   Space
#0   public_networks
#1   dns
#2   port80
vagrant@vagrant-ubuntu-trusty-64:~$ cf bind-running-security-group port80
Binding security group port80 to defaults for running as admin
OK
 
TIP: Changes will not apply to existing running applications until they are restarted.
vagrant@vagrant-ubuntu-trusty-64:~$ cf restart service-broker
Stopping app service-broker in org myorg / space mydept as admin...
OK
 
Starting app service-broker in org myorg / space mydept as admin...
 
0 of 1 instances running, 1 starting
1 of 1 instances running
 
App started
 
OK
Showing health and status for app service-broker in org myorg / space mydept as admin...
OK
 
requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: service-broker.10.244.0.34.xip.io
last uploaded: Mon Nov 24 15:59:07 UTC 2014
 
     state     since                    cpu    memory        disk
#0   running   2014-11-24 08:13:40 PM   0.0%   56M of 256M   0 of 1G

NOTE: I deploy them to CloudFoundry for convenience and reliability, but this is not required. The service could be any service and the service broker API implementation can be deployed anywhere.

vagrant@vagrant-ubuntu-trusty-64:~$ cf apps
Getting apps in org myorg / space mydept as admin...
OK
 
name             requested state   instances   memory   disk   urls
echo-service     started           1/1         256M     1G     echo-service.10.244.0.34.xip.io
service-broker   started           1/1         256M     1G     service-broker.10.244.0.34.xip.io

Optional validation

It is possible to validate that the deployed apps work as expected. The cURL commands below validate that the echo service is working properly.

vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9 -w "\n"
{"instance_id": "51897770-560b-11e4-b75a-9ad2017223a9", "state": "provision_success", "dashboard_url": "http://localhost:8090/dashboard/51897770-560b-11e4-b75a-9ad2017223a9"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9/myapp -w "\n"
{"state": "bind_success", "id": "51897770-560b-11e4-b75a-9ad2017223a9", "app": "myapp"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9/myapp -H "Content-Type: application/json" -d '{"message": "Hello World"}' -w "\n"
{"response": "Hello World"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9/myapp -H "Content-Type: application/json" -d '{"message": "Hello World 2"}' -w "\n"
{"response": "Hello World 2"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X POST http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9/myapp -H "Content-Type: application/json" -d '{"message": "Hello Worldz!"}' -w "\n"
{"response": "Hello Worldz!"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X GET http://echo-service.10.244.0.34.xip.io/echo/dashboard/51897770-560b-11e4-b75a-9ad2017223a9
 
<table class="pure-table">
    <thead>
        <tr>
            <th>Instance</th>
            <th>Bindings</th>
            <th>Messages</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>51897770-560b-11e4-b75a-9ad2017223a9</td>
            <td>1</td>
            <td>3</td>
        </tr>
    </tbody>
</table>
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9/myapp -w "\n"
{"state": "unbind_success", "id": "51897770-560b-11e4-b75a-9ad2017223a9", "app": "myapp"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://echo-service.10.244.0.34.xip.io/echo/51897770-560b-11e4-b75a-9ad2017223a9 -w "\n"
{"state": "deprovision_success", "bindings": 0, "id": "51897770-560b-11e4-b75a-9ad2017223a9", "messages": 3}

The service broker API implementation can also be verified using curl as shown below.

vagrant@vagrant-ubuntu-trusty-64:~$ curl -X GET http://service-broker.10.244.0.34.xip.io/v2/catalog -H "X-Broker-Api-Version: 2.3" -H "Authorization: Basic dXNlcjpwYXNz" -w "\n"
{"services": [{"name": "Echo Service", "dashboard_client": {"id": "client-id-1", "redirect_uri": "http://echo-service.10.244.0.34.xip.io/echo/dashboard", "secret": "secret-1"}, "description": "Echo back the value received", "id": "echo_service", "plans": [{"free": false, "description": "A large dedicated service with a big storage quota, lots of RAM, and many connections", "id": "big_0001", "name": "large"}], "bindable": true}, {"name": "Invert Service", "dashboard_client": null, "description": "Invert the value received", "id": "invert_service", "plans": [{"description": "A small shared service with a small storage quota and few connections", "id": "small_0001", "name": "small"}], "bindable": true}]}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://service-broker.10.244.0.34.xip.io/v2/service_instances/mynewinstance -H "Content-type: application/json" -H "Authorization: Basic dXNlcjpwYXNz" -d '{"service_id": "echo_service", "plan_id": "small_0001", "organization_guid": "HP", "space_guid": "IT"}' -w "\n"
{"dashboard_url": "http://echo-service.10.244.0.34.xip.io/echo/dashboard/mynewinstance"}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X PUT http://service-broker.10.244.0.34.xip.io/v2/service_instances/mynewinstance/service_bindings/myappid  -H "Content-type: application/json" -H "Authorization: Basic dXNlcjpwYXNz" -d '{"service_id": "echo_service", "plan_id": "small_0001", "app_guid": "otherappid"}' -w "\n"
{"credentials": {"uri": "http://echo-service.10.244.0.34.xip.io/echo/mynewinstance/myappid"}}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://service-broker.10.244.0.34.xip.io/v2/service_instances/mynewinstance/service_bindings/myappid  -H "Authorization: Basic dXNlcjpwYXNz" -w "\n"
{}
vagrant@vagrant-ubuntu-trusty-64:~$ curl -X DELETE http://service-broker.10.244.0.34.xip.io/v2/service_instances/mynewinstance -H "Authorization: Basic dXNlcjpwYXNz" -w "\n"
{}

Manage the Service Broker

In CloudFoundry, the service broker can be added using the create-service-broker command with the cf cli.

vagrant@vagrant-ubuntu-trusty-64:~$ cf create-service-broker echo-broker user pass http://service-broker.10.244.0.34.xip.io
Creating service broker echo-broker as admin...
OK
vagrant@vagrant-ubuntu-trusty-64:~$ cf service-brokers
Getting service brokers as admin...
 
name          url
echo-broker   http://service-broker.10.244.0.34.xip.io

The service broker is added, but the service plans in the catalog do not allow public access by default, as can be seen with the call to service-access below. This also means that calls to marketplace show no available services.

vagrant@vagrant-ubuntu-trusty-64:~$ cf service-access
Getting service access as admin...
broker: echo-broker
   service          plan    access   orgs
   Echo Service     large   none
   Invert Service   small   none
vagrant@vagrant-ubuntu-trusty-64:~$ cf marketplace
Getting services from marketplace in org myorg / space mydept as admin...
OK
 
No service offerings found

The service plan must be enabled to accommodate public consumption. This is done using the enable-service-access call and providing the name of the service; in this case the “Echo Service” is desired. After enabling the service plan, the service-access listing shows it as available, and a call to marketplace shows it available to apps.

vagrant@vagrant-ubuntu-trusty-64:~$ cf enable-service-access "Echo Service"
Enabling access to all plans of service Echo Service for all orgs as admin...
OK
vagrant@vagrant-ubuntu-trusty-64:~$ cf service-access
Getting service access as admin...
broker: echo-broker
   service          plan    access   orgs
   Echo Service     large   all
   Invert Service   small   none
 
vagrant@vagrant-ubuntu-trusty-64:~$ cf marketplace
Getting services from marketplace in org myorg / space mydept as admin...
OK
 
service        plans   description
Echo Service   large   Echo back the value received

Manage Services

With the new service in the marketplace, it’s now possible to provision, bind and use instances of the new service by way of the cf cli. This happens in a few steps in CloudFoundry. First an instance of the service is provisioned. An app is then pushed to CloudFoundry. Finally the service and the app are associated through a binding. The call to create-service expects three arguments which can be found in the marketplace output above.

  • service name
  • service plan
  • a name for the new service instance
vagrant@vagrant-ubuntu-trusty-64:~$ cf create-service "Echo Service" large test-echo-service
Creating service test-echo-service in org myorg / space mydept as admin...
OK
vagrant@vagrant-ubuntu-trusty-64:~$ cf services
Getting services in org myorg / space mydept as admin...
OK
 
name                service        plan    bound apps
test-echo-service   Echo Service   large

Push an app

The next step is to create a simple app to which the service can be bound. It’s important to understand how the service details will be injected into the app environment. Service details will be contained in a JSON document under the key VCAP_SERVICES. The structure of the service details is shown below.

{
  "Echo Service": [
    {
      "name": "test-php-echo",
      "label": "Echo Service",
      "tags": [
 
      ],
      "plan": "large",
      "credentials": {
        "uri": "http://16.98.49.183:8090/echo/f2c43d9c-e912-4e56-b624-14e8522be912/4a421367-0e1a-4b56-a07f-9a6b404119a5"
      }
    }
  ]
}

For this example, the PHP script below extracts, decodes and isolates the service URI from the VCAP_SERVICES environment variable. The cURL library is used to send a call to that URI containing a basic message JSON document. The response is then printed using var_dump.

<?php
// extract injected service details from environment
$services_json = getenv("VCAP_SERVICES");
$services = json_decode($services_json);
$service_uri = $services->{'Echo Service'}[0]->{'credentials'}->{'uri'};
var_dump($service_uri);
 
// create a message to send
$message = '{"message": "Hello CloudFoundry!"}';
 
// setup and execute the cURL call to the echo service
$curl = curl_init($service_uri);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_HTTPHEADER, array("Content-type: application/json"));
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $message);
$response = curl_exec($curl);
 
// handle non-200 response
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
if ( $status != 200 ) {
    die("Error: call to URL $url failed with status $status, response $response, curl_error " . curl_error($curl) . ", curl_errno " . curl_errno($curl));
}
 
// clean up cURL connection
curl_close($curl);
 
// decode response and output
$response_as_json = json_decode($response);
var_dump($response_as_json);
?>

A simple manifest file is provided for the deployment.

applications:
- name: echo-php
  framework: php

The app can then be pushed to CloudFoundry. The resulting list of apps is shown.

vagrant@vagrant-ubuntu-trusty-64:~/echo-php$ cf apps
Getting apps in org myorg / space mydept as admin...
OK
 
name             requested state   instances   memory   disk   urls
echo-service     started           1/1         256M     1G     echo-service.10.244.0.34.xip.io, 16.85.146.179.xip.io
service-broker   started           1/1         256M     1G     service-broker.10.244.0.34.xip.io
echo-php         started           1/1         256M     1G     echo-php.10.244.0.34.xip.io

Bind the service to the app

With an existing service and an existing app, it’s possible to bind the service to the app. This is done using bind-service, as shown below. Note that it is necessary to restage the app after binding the service so that the environment variables will be properly injected.

vagrant@vagrant-ubuntu-trusty-64:~$ cf bind-service echo-php test-echo-service
Binding service test-echo-service to app echo-php in org myorg / space mydept as admin...
OK
TIP: Use 'cf restage' to ensure your env variable changes take effect
vagrant@vagrant-ubuntu-trusty-64:~$ cf restage echo-php
Restaging app echo-php in org myorg / space mydept as admin...
-----> Downloaded app package (4.0K)
-----> Downloaded app buildpack cache (4.0K)
-------> Buildpack version 1.0.2
Use locally cached dependencies where possible
 !     WARNING:        No composer.json found.
       Using index.php to declare PHP applications is considered legacy
       functionality and may lead to unexpected behavior.
       See https://devcenter.heroku.com/categories/php
-----> Setting up runtime environment...
       - PHP 5.5.12
       - Apache 2.4.9
       - Nginx 1.4.6
-----> Installing PHP extensions:
       - opcache (automatic; bundled, using 'ext-opcache.ini')
-----> Installing dependencies...
       Composer version ac497feabaa0d247c441178b7b4aaa4c61b07399 2014-06-10 14:13:12
       Warning: This development build of composer is over 30 days old. It is recommended to update it by running "/app/.heroku/php/bin/composer self-update" to get the latest version.
       Loading composer repositories with package information
       Installing dependencies
       Nothing to install or update
       Generating optimized autoload files
-----> Building runtime environment...
       NOTICE: No Procfile, defaulting to 'web: vendor/bin/heroku-php-apache2'
 
-----> Uploading droplet (64M)
 
0 of 1 instances running, 1 starting
1 of 1 instances running
 
App started
 
 
OK
Showing health and status for app echo-php in org myorg / space mydept as admin...
OK
 
requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: echo-php.10.244.0.34.xip.io
last uploaded: Mon Nov 24 21:34:29 UTC 2014
 
     state     since                    cpu    memory          disk
#0   running   2014-11-24 09:39:27 PM   0.0%   88.4M of 256M   0 of 1G

The echo-php app can be called using cURL from the command line to verify proper connectivity between the app and the new service.

vagrant@vagrant-ubuntu-trusty-64:$ curl http://echo-php.10.244.0.34.xip.io
string(117) "http://echo-service.10.244.0.34.xip.io/echo/8dea3b8e-4230-4509-8879-5f94b4812985/d2944b66-2d9e-46fe-8e3d-d9b102cb5bca"
object(stdClass)#4 (1) {
  ["response"]=>
  string(19) "Hello CloudFoundry!"
}

The existing service can be bound to additional app instances. It is also trivial to create new service instances. If the service is no longer needed by the application, it is straightforward to unbind and deprovision.
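
A sketch of that cleanup with the cf cli (output omitted):

cf unbind-service echo-php test-echo-service
cf delete-service test-echo-service -f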


External Services in CloudFoundry

CloudFoundry, Stackato and Helion Development Platform accommodate (and encourage) external services for persistent application needs. The types of services include relational databases, like MySQL or PostgreSQL, NoSQL datastores, like MongoDB, messaging services like RabbitMQ and even cache technologies like Redis and Memcached. In each case, connection details, such as a URL, PORT and credentials, are maintained by the cloud controller and injected into the environment of new application instances.

[Figure: CloudFoundry service injection]

Injection

It’s important to understand that regardless of how the cloud controller receives details about the service, the process of getting those details to application instances is the same. Much like applications use dependency injection, the cloud controller injects environment variables into each application instance. The application is written to use these environment variables to establish a connection to the external resource. From an application perspective, this looks the same whether a Warden or Docker container is used.

Connecting to the Service

Connecting to the service from the application instance is the responsibility of the application. There is nothing in CloudFoundry or its derivatives that facilitates this connection beyond injecting the connection parameters into the application instance container. The fact that there is no intermediary between the application instance and the service means that there is no additional latency or potential disconnect. However, the fact that CloudFoundry can scale to an indefinite number of application instances does mean the external service must be able to accommodate all the connections that will result.

Connection pooling is a popular method to reduce the overhead of creating new connections. Since CloudFoundry scales out to many instances, managing connection pooling in the application may be counterproductive: each pool increases memory usage on its application instance while holding open connections that must be shared among all instances.

Managed vs. Unmanaged

The Service Broker API may be implemented to facilitate provisioning, binding, unbinding and deprovisioning of resources. This is referred to as a managed service, since the life-cycle of the resource is managed by the PaaS. In the case of managed services, the user interacts with the service only by way of the CloudFoundry command line client.

In an unmanaged scenario, the service resource is provisioned outside of the PaaS. The user then provides connection details to the PaaS in one of two ways, both sketched below.

  • The first is to register it as a service that can then be bound to application instances.
  • The second is to add connection details manually as individual environment variable key/value pairs.
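
A sketch of both options with the cf cli; the service name and connection values are hypothetical.

# option 1: register the external resource as a user-provided service and bind it
cf cups my-external-db -p '{"uri": "postgres://user:pass@dbhost:5432/mydb"}'
cf bind-service myapp my-external-db
# option 2: add connection details directly as environment variables
cf set-env myapp DB_URI "postgres://user:pass@dbhost:5432/mydb"
cf restage myapp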

The three methods of incorporating services discussed in this post range from high to low touch and make it possible to incorporate any type of service, even existing services.

Use caution when designing services to prevent them from getting overwhelmed. The scalable character of CloudFoundry means that the number of instances making connections to a service can grow very quickly to an indeterminate number.


The Road to PaaS

I have observed that discussions about CloudFoundry often lack accurate context. Some questions I get that indicate context is missing include:

  • What Java version does CloudFoundry support?
  • What database products/versions are available?
  • How can I access the server directly?

There are a few reasons that the questions above are not relevant for CloudFoundry (or any modern PaaS environment). To understand why, it’s important to understand how we got to PaaS and where we came from.

[Figure: CloudFoundry compared with traditional]

Landscape

When computers were first becoming a common requirement for the enterprise, most applications were monolithic. All application components would run on the same general purpose server, including the interface, application technology (e.g. Java, .NET or PHP) and data and file storage. Over time, these functions were distributed across different servers. The servers also began to take on characteristic differences to accommodate the technology being run.

Today, compute has been commoditized and virtualized. Rather than thinking of compute as a physical server built to suit a specific purpose, compute is instead viewed in discrete chunks that can be scaled horizontally. PaaS today marries an application with those chunks of compute capacity as needed and abstracts application access to services, which may or may not run on the same PaaS platform.

Contributor and Organization Dynamic

The roles of contributors and organizations have changed throughout the evolution of the landscape. Early monolithic systems required technology experts who were familiar with a broad range of technologies, including system administration, programming, networking, etc. As functions were distributed, roles became defined more by their specializations: webmasters, DBAs and programmers became siloed. Unintended conflicts complicated this more distributed architecture, due in part to the fact that efficiencies in one silo did not always align with the best interests of the others.

DevOps

As the evolution pushed toward compute as a commodity, the newfound flexibility drove many frustrated technologists to reach beyond their respective silos to accomplish their design and delivery objectives. Programmers began to look at how different operating system environments and database technologies could enable them to produce results faster and more reliably. System administrators began to rethink system management in ways that abstracted hardware dependencies and decreased the complexity involved in augmenting the compute capacity available to individual functions. Datastore, network, storage and other experts began a similar process of abstracting their offerings. This blending of roles and new dynamic of collaboration and contribution has come to be known as DevOps.

Interoperability

Interoperability between systems and applications in the days of monolithic application development made use of many protocols, due in part to the fact that each monolithic system exposed its services in different ways. As the progression above took place, the field of available protocols normalized. RESTful interfaces over HTTP have emerged as an accepted standard, and the serialization structures most common to REST are XML and JSON. This makes integration straightforward and provides for a high degree of reuse of existing services. It also makes services available to a greater diversity of devices.

Security and Isolation

One key development that made this evolution from compute as hardware to compute as a utility possible was effective isolation of compute resources on shared hardware. The first big step in this direction came in the form of virtualization. Virtualized hardware made it possible to run many distinct operating systems simultaneously on the same hardware. It also significantly reduced the time to provision new server resources, since the underlying hardware was already wired and ready.

Compute as a ________

The next step in the evolution came in the form of containers. Unlike virtualization, containers made it possible to provide an isolated, configurable compute instance in much less time, while consuming fewer system resources to create and manage (i.e. lightweight). This progression from compute as hardware, to compute as virtual, and finally to compute as a container made it realistic to view compute as discrete chunks that can be created and destroyed in seconds as capacity requirements change.

Infrastructure as Code

Another important observation regarding the evolution of compute is that as the compute environment became easier to create (time to provision decreased), the process to provision changed. When a physical server required ordering, shipping, mounting, wiring, etc., it was reasonable to take a day or two to install and configure the operating system, network and related components. When that hardware was virtualized and could be provisioned in hours (or less), system administrators began to pursue more automation to accommodate the setup of these systems (e.g. Ansible, Puppet, Chef and even Vagrant). This made it possible to think of systems as more transient. With the advent of Linux containers, the idea of infrastructure as code became even more prevalent. Time to provision is approaching zero.

A related byproduct of infrastructure defined by scripts or code was reproducibility. Whereas it was historically difficult to ensure that two systems were configured identically, the method for provisioning containers made it trivial to ensure that compute resources were identically configured. This in turn improved debugging and collaboration and accommodated versioning of operating environments.

Contextual Answers

Given that the landscape has changed so drastically, let’s look at some possible answers to the questions from the beginning of this post.

  • Q. What Java (or any language) version does CloudFoundry support?
    A. It supports any language that is defined in the scripts used to provision the container that will run the application. While it is true that some such scripts may be available by default, this doesn’t imply that the PaaS provides only that. If it’s a fit, use it. If not, create new provisioning scripts.
  • Q. What database products/versions are available?
    A. Any database product or version can be used. If the datastore services available that are associated with the PaaS by default are not sufficient, bring your own or create another application component to accommodate your needs.
  • Q. How can I access the server directly?
    A. There is no “the server.” If you want to know more about the server environment, look at the script/code that is responsible for provisioning it. Even better, create a new container and play around with it. Once you get things just right, update your code so that every new container incorporates the desired changes. Every “the server” will look exactly how you define it.

nginx buildpack – installed

It’s possible to add a custom buildpack to Stackato or Helion Development Platform so that it’s available to all applications. When using an installed buildpack it is not necessary to include a manifest or identify the buildpack; instead, it will be selected by the detect script in the buildpack. All files are on the cloud controller node, which eliminates download time and bandwidth use.

Package the buildpack

Prepare your buildpack for installation by adding all files to a zip file (of any name). The bin folder should be in the root of the zip file with any related files in their relative locations. Normally this structure results if the buildpack is cloned and zip is run from within the buildpack directory, as shown.

vagrant@vagrant-ubuntu-trusty-64:~$ git clone -b precompiled https://github.com/dwatrous/buildpack-nginx.git
Cloning into 'buildpack-nginx'...
remote: Counting objects: 156, done.
remote: Compressing objects: 100% (99/99), done.
remote: Total 156 (delta 44), reused 142 (delta 32)
Receiving objects: 100% (156/156), 1.80 MiB | 2.64 MiB/s, done.
Resolving deltas: 100% (44/44), done.
Checking connectivity... done.
vagrant@vagrant-ubuntu-trusty-64:~$ cd buildpack-nginx/
vagrant@vagrant-ubuntu-trusty-64:~/buildpack-nginx$ zip -r buildpack-nginx.zip *
  adding: bin/ (stored 0%)
  adding: bin/detect (deflated 19%)
  adding: bin/compile (deflated 57%)
  adding: bin/release (deflated 2%)
  adding: bin/launch.sh (deflated 54%)
  adding: README.md (deflated 44%)
  adding: vendor/ (stored 0%)
  adding: vendor/nginx.tar.gz (deflated 4%)
vagrant@vagrant-ubuntu-trusty-64:~/buildpack-nginx$ cp buildpack-nginx.zip /vagrant/

It may be obvious that the commands above were run in a Vagrant built Ubuntu box. Buildpack scripts must have the executable bit set, which doesn’t carry over when the zip file is created on Windows. To accommodate this I created a Vagrant box, installed git, then cloned and zipped there. I then copied the zip file into the vagrant folder for easy access.

The README.md isn’t necessary for the buildpack, obviously, but it doesn’t interfere either. Any unrelated files that are in the buildpack can be removed before creating the buildpack zip file, but they won’t have any impact if they are left.

Install the buildpack

Adding the buildpack to Stackato or Helion is very straightforward. Use the stackato or helion client to call create-buildpack and provide the name and zip file, as shown.

C:\Users\watrous\Documents\buildpack>stackato create-buildpack daniel-static-nginx buildpack-nginx.zip
Creating new buildpack daniel-static-nginx ... OK
Uploading buildpack bits ...  OK
OK

All installed buildpacks can be listed using the stackato client with the buildpacks command.

C:\Users\watrous\Documents\buildpack>stackato buildpacks
+---+---------------------+---------------------+---------+--------+
| # | Name                | Filename            | Enabled | Locked |
+---+---------------------+---------------------+---------+--------+
| 1 | java                | java.zip            | yes     | no     |
| 2 | ruby                | ruby.zip            | yes     | no     |
| 3 | nodejs              | nodejs.zip          | yes     | no     |
| 4 | python              | python.zip          | yes     | no     |
| 5 | go                  | go.zip              | yes     | no     |
| 6 | clojure             | clojure.zip         | yes     | no     |
| 7 | scala               | scala.zip           | yes     | no     |
| 8 | perl                | perl.zip            | yes     | no     |
| 9 | daniel-static-nginx | buildpack-nginx.zip | yes     | no     |
+---+---------------------+---------------------+---------+--------+

Use the buildpack

In the case of this buildpack, the detect script looks for an index file with an extension of either html or htm. To deploy using this buildpack, just make sure that a file matching that description is in the root of your application files. There is no need to include a manifest.
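
For example, a deployment could be as simple as the following sketch (app name hypothetical; client flags may vary by version):

mkdir static-site && cd static-site
echo '<h1>Hello World!</h1>' > index.html
stackato push -n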


Custom buildpacks in CloudFoundry

In the past, PaaS has been rigid and invasive for application developers. CloudFoundry aims to change that perception of PaaS with the use of buildpacks. A buildpack allows an application developer to define the deployment environment in plain text. Some refer to this as infrastructure as code, since the aspects of a deployment environment that were previously handled by system administrators on dedicated servers now exist in plain text alongside the application files.

What’s available out-of-the-box?

Before diving into custom buildpacks, a lot of people will ask “What is available out-of-the-box with CloudFoundry?”. The answer is everything and nothing. It’s true that some buildpacks come pre-configured within the CloudFoundry, Stackato or Helion environments. However, it’s not entirely accurate or useful to consider these as “a version available in CloudFoundry”. The sections below provide crucial context needed to determine whether an application should leverage an out-of-the-box buildpack or customize one.

Custom Buildpacks

The process to create custom buildpacks includes the following steps.

  • Develop the process to build out the runtime environment
  • Understand the staging process
  • Create a buildpack that stages the application for deployment

Understand the buildpack staging process

The buildpack staging process comprises all the steps to download, compile, configure and prepare a runtime environment. The result of staging is a droplet (a gzipped tar file) which contains all application and runtime dependencies. When a new Docker application container is created, the droplet is used to deploy the application.

Design the runtime environment

Designing the runtime environment should be done using the same Docker container that will be used to stage and deploy the application. Helion and Stackato use an Ubuntu based Docker image prepared specifically for the CloudFoundry environment.

Create a buildpack

With steps to produce a runtime environment and an understanding of the staging process, it’s now possible to create a custom buildpack. This includes creating detect, compile and release scripts, in addition to any related files.


nginx buildpack – pre-compiled

Previously I detailed the process to create a buildpack for Stackato or Helion, including reconfiguring the buildpack to be self-contained. In both previous examples, the compile script performed a configure and make on source to build the binaries for the application. Since the configure and make process is often slow, this can be done once and the resulting binaries added to the git repository for the buildpack.

Pre-compile nginx

The first step is to build nginx to run in the Docker container for Stackato or Helion. These steps were previously in the compile script, but now they need to be run independently. After building, package up the necessary files and include that file in the git repository in place of the source file. Here are the commands I ran to accomplish this.

# create a docker image for the build process
docker run -i -t stackato/stack-alsek:latest /bin/bash
# download, configure and compile
wget -e use_proxy=yes http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
# rearrange and package up the files
cd
mkdir -p nginx/bin
mkdir -p nginx/conf
mkdir -p nginx/logs
cp nginx-1.6.2/objs/nginx nginx/bin/
cp nginx-1.6.2/conf/nginx.conf  nginx/conf/
cp nginx-1.6.2/conf/mime.types  nginx/conf/
tar czvf nginx.tar.gz nginx
# copy packaged file where I can get it to add to git repository
scp nginx.tar.gz user@myhost.domain.com:~/

After you get this new packaged file in place, the structure would look like this.

[Figure: buildpack files, pre-compiled]

Updated compile script

The compile script is quite different. Here’s what I ended up with.

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
 
shopt -s dotglob    # enables commands like 'mv *' to see hidden files
set -e              # exit immediately if any command fails (non-zero status)
 
# create local variables pointing to key paths
app_files_dir=$1
cache_dir=$2
buildpack_dir=$(cd $(dirname $0) && cd .. && pwd)
 
# unpackage nginx
mkdir -p $cache_dir
cp $buildpack_dir/vendor/nginx.tar.gz $cache_dir
tar xzf $cache_dir/nginx.tar.gz -C $cache_dir
 
# move application files into public directory
mkdir -p $cache_dir/public
mv $app_files_dir/* $cache_dir/public/
 
# put everything in place for droplet creation
mv $buildpack_dir/bin/launch.sh $app_files_dir/
mv $cache_dir/public $app_files_dir/
mv $cache_dir/nginx $app_files_dir/
 
# ensure manifest not in public directory
if [ -f $cache_dir/public/manifest.yml ]; then rm $cache_dir/public/manifest.yml; fi
if [ -f $cache_dir/public/stackato.yml ]; then rm $cache_dir/public/stackato.yml; fi

Now, rather than building nginx, it is simply unpacked into place.

Staging Performance

Pre-compiling can save a lot of time when staging. For example, in this case the following times were observed when staging under each scenario.

Time to stage build from source

[stackato[dea_ng]] 2014-10-22T15:42:34.000Z: Completed uploading droplet
[staging]          2014-10-22T15:41:16.000Z:

That’s a total of 1:18 to build from source as part of the staging process.

Time to stage pre-compiled

[stackato[dea_ng]] 2014-10-22T16:33:38.000Z: Completed uploading droplet
[staging]          2014-10-22T16:33:32.000Z:

The total time to stage is 0:06 when the binary resources are precompiled. That’s 92.3% faster staging. That can be a significant advantage when staging applications which employ a very heavy compile process to establish the runtime environment.

Considerations

It could be argued that a pre-compiled binary is more likely to have a conflict at deploy time. In many deployment scenarios I would be tempted to agree. However, since the deployment environment is a Docker container and will presumably be identical to the container used to compile the binary, this risk is relatively low.

One question is whether a git repository is an appropriate place to store binary files. If this is a concern, the binary files can be stored on a file server, CDN or any location which can be accessed by the staging Docker container. This will keep the git repository clean with text only and can also provide a file system collection of binary artifacts that are available independent of git.
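
In that arrangement, the compile script would fetch the binary instead of copying it out of the buildpack; a sketch with a hypothetical artifact server URL:

# inside bin/compile: fetch the pre-built package rather than bundling it in git
wget -O $cache_dir/nginx.tar.gz http://artifacts.example.com/nginx/nginx-1.6.2-precompiled.tar.gz
tar xzf $cache_dir/nginx.tar.gz -C $cache_dir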


nginx buildpack – offline

I previously documented the process to create a buildpack for nginx to use with Stackato or Helion Dev Platform. In that buildpack example, the compile script would download the nginx source using wget. In some cases, the time, bandwidth or access required to download external resources may be undesirable. In those cases the buildpack can be adjusted to work offline by adding the external resources to the git repository. The new structure would look like this.

[Figure: buildpack files, offline]

Updated compile script

The only buildpack related file that needs to change to accommodate this is the compile script. The affected section is download and build nginx.

# download and build nginx
mkdir -p $cache_dir
cp $buildpack_dir/vendor/nginx-1.6.2.tar.gz $cache_dir
tar xzf $cache_dir/nginx-1.6.2.tar.gz -C $cache_dir
cd $cache_dir/nginx-1.6.2
./configure
make

Notice that now the source is copied from the vendor directory in the buildpack instead of being downloaded using wget.

Using buildpack branches

In this case I created a branch of my existing buildpack to make these changes. By appending #branchname to the buildpack URL, it was possible to identify that branch in the manifest.yml. In this case my branch name was offline.

---
applications:
- name: static-test
  buildpack: https://github.com/dwatrous/buildpack-nginx.git#offline

nginx buildpack – realtime

CloudFoundry accommodates buildpacks which define a deployment environment. A buildpack is distinct from an application and provides everything the application needs to run, including web server, language runtime, libraries, etc. The most basic structure for a buildpack requires three files inside a directory named bin.

[Figure: buildpack files]

The buildpack files discussed in this post can be cloned or forked at https://github.com/dwatrous/buildpack-nginx

Some quick points about these buildpack files

  • All three files must be executable via bash
  • Can be shell scripts or any language that can be invoked using bash
  • Explicit use of paths recommended

detect

The detect script is intended to examine application files to determine whether it can accommodate the application. If it finds evidence that it can provide an environment to run an application, it should echo a message and exit with a status of zero. Otherwise it should exit with a non-zero status. In this example, the detect script looks for an index file with the extension html or htm.

#!/usr/bin/env bash
 
if [[ ( -f $1/index.html || -f $1/index.htm ) ]]
then
  echo "Static" && exit 0
else
  exit 1
fi

compile

The compile script is responsible for gathering, building and positioning everything exactly as it needs to be in order to create the droplet that will be used to deploy the application. The second line of the script below shows that CloudFoundry calls the compile script and passes in paths to build-dir and cache-dir.

Develop and test the compile script

All of the commands in this file will run in a Docker instance created specifically to perform the staging operation. It’s possible to develop the compile script inside the same Docker container that will be used to stage your application.
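
A sketch of that workflow, using the stackato/stack-alsek image shown earlier on this page; the scratch directories are hypothetical.

docker run -i -t stackato/stack-alsek:latest /bin/bash
# inside the container: create scratch build and cache directories
mkdir -p /tmp/app /tmp/cache
# copy in the buildpack and a sample app, then invoke the compile script by hand
./bin/compile /tmp/app /tmp/cache

The full compile script follows.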

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
 
shopt -s dotglob    # enables commands like 'mv *' to see hidden files
set -e              # exit immediately if any command fails (non-zero status)
 
# create local variables pointing to key paths
app_files_dir=$1
cache_dir=$2
buildpack_dir=$(cd $(dirname $0) && cd .. && pwd)
 
# download and build nginx
mkdir -p $cache_dir
cd $cache_dir
wget http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
 
# create hierarchy with only needed files
mkdir -p $cache_dir/nginx/bin
mkdir -p $cache_dir/nginx/conf
mkdir -p $cache_dir/nginx/logs
cp $cache_dir/nginx-1.6.2/objs/nginx $cache_dir/nginx/bin/nginx
cp $cache_dir/nginx-1.6.2/conf/nginx.conf $cache_dir/nginx/conf/nginx.conf
cp $cache_dir/nginx-1.6.2/conf/mime.types $cache_dir/nginx/conf/mime.types
 
# move application files into public directory
mkdir -p $cache_dir/public
mv $app_files_dir/* $cache_dir/public/
# copy nginx error template
cp $cache_dir/nginx-1.6.2/html/50x.html $cache_dir/public/50x.html
 
# put everything in place for droplet creation
mv $buildpack_dir/bin/launch.sh $app_files_dir/
mv $cache_dir/public $app_files_dir/
mv $cache_dir/nginx $app_files_dir/
 
# ensure manifest not in public directory
if [ -f $cache_dir/public/manifest.yml ]; then rm $cache_dir/public/manifest.yml; fi
if [ -f $cache_dir/public/stackato.yml ]; then rm $cache_dir/public/stackato.yml; fi

Notice that after nginx has been compiled, the desired files, such as the nginx binary and configuration file, must be explicitly copied into place where they will be included when the droplet is packaged up. In this case I put the application files in a sub-directory called public and the nginx binary, conf and logs in a sub-directory called nginx. A shell script, launch.sh, is also copied into the root of the application directory, which will be explained in a minute.

release

The output of the release script is a YAML document, but it’s important to understand that the release file itself must be an executable script. The key detail in the release file is the line that indicates how to start the web server process. In some cases that may require several commands, which would require them to be encapsulated in another script, such as the launch.sh script shown below.

#!/usr/bin/env bash
 
cat <<YAML
---
default_process_types:
  web: sh launch.sh
YAML

launch.sh

The launch.sh script creates a configuration file for nginx that includes the PORT and HOME directory for this specific docker instance. It then starts nginx as the local user.

#!/usr/bin/env bash
 
# create nginx conf file with PORT and HOME directory from cloudfoundry environment variables
mv $HOME/nginx/conf/nginx.conf $HOME/nginx/conf/nginx.conf.original
sed "s|\(^\s*listen\s*\)80|\1$PORT|" $HOME/nginx/conf/nginx.conf.original > $HOME/nginx/conf/nginx.conf
sed -i "s|\(^\s*root\s*\)html|\1$HOME/public|" $HOME/nginx/conf/nginx.conf
 
# start nginx web server
$HOME/nginx/bin/nginx -c $HOME/nginx/conf/nginx.conf -p $HOME/nginx

Using a buildpack

There are two ways to use a buildpack: either install it on the cloud controller node and let the detect script select it, or store it in a git repository and provide the git URL in manifest.yml. The git repository must be publicly readable and accessible from the Docker instance where staging will occur (proxy issues may interfere with this).

To use this buildpack, in an empty directory, create two files:

manifest.yml

---
applications:
- name: static-test
  memory: 40M
  disk: 200M
  instances: 1
  buildpack: https://github.com/dwatrous/buildpack-nginx.git

index.html

<h1>Hello World!</h1>

From within that directory, target, login and push your app (can be done locally or in the cloud).

[Figure: Helion static nginx app deployed]

Resources

https://developer.ibm.com/answers/questions/15623/buildpack-compilation-step-failed/
http://blog.lesc.se/2011/11/how-to-change-file-premissions-in-git.html