Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Posts tagged cloudfoundry


Buildpack staging process in Stackato and Helion

One of the major strengths of CloudFoundry is its adoption of buildpacks. A buildpack is a blueprint that defines all of the runtime requirements for an application: the web server, application server, language runtime, libraries and anything else needed to run it. There are many buildpacks available for common environments, such as Java, PHP, Python, Go, etc. It is also easy to fork an existing buildpack or create a new buildpack.

When an application is pushed to CloudFoundry, there is a staging step and a deploy step, as shown below. The buildpack comes into play when an application is staged.

cloudfoundry-stage-deploy

Staging, or droplet creation

A droplet is a compressed tar file that contains all application files and runtime dependencies needed to run the application. Depending on the technology, this may include source code for interpreted languages like PHP, bytecode or even compiled objects, as with Python or Java. It will also include the binary executables needed to run the application files. For example, in the case of Java, this would be a Java Runtime Environment (JRE).

The staging process, which produces a droplet that is ready to deploy, is shown in the following activity diagram.

helion-app-staging

Staging environment

The staging process happens on a docker instance created using the same docker image used to deploy the app. Once created, all application files are copied to the new docker staging instance. Among these application files may be a deployment definition in the form of a manifest.yml or stackato.yml. The YAML files can include information that will prepare or alter the staging environment, including installation of system libraries. The current docker image is based on Ubuntu.

Next we get to the buildpack. Most installations of CloudFoundry include a number of buildpacks for common technologies, such as PHP, Java, Python, Go, Node and Ruby. If no manifest is included, or if there is a manifest which doesn’t explicitly identify a buildpack, then CloudFoundry will loop through all available buildpacks and run the detect script to find a match.

If the manifest explicitly identifies a buildpack, which must be hosted in a Git repository, then that buildpack will be cloned onto the staging instance and the compile process will proceed immediately.

environment variables

CloudFoundry makes certain environment variables available to the Docker instance used for staging and deploying apps. These should be treated as dynamic, since every instance (staging or deployed) will have different values. For example, the PORT will be different for every instance.

Here is a sample of the environment available during the staging step:

USER=stackato
HOME=/home/stackato
PWD=/tmp/staged
SHELL=/bin/bash
SSH_CLIENT=172.17.42.1 41465 22
STACKATO_LOG_FILES=stdout=/home/stackato/logs/stdout.log:stderr=/home/stackato/logs/stderr.log
STACKATO_APP_NAME=static-test
STACKATO_APP_NAME_UPCASE=STATIC_TEST
STACKATO_DOCKER=True
LC_ALL=en_US.UTF-8
http_proxy=http://10.0.0.13:8123
https_proxy=http://10.0.0.13:8123
no_proxy=
VCAP_APPLICATION={"limits":{"mem":40,"disk":200,"fds":16384,"allow_sudo":false},"application_version":"79eb8896-37e4-4a22-8d0f-f14bd3fbe7e5","application_name":"static-test","application_uris":["static-test.stackato.danielwatrous.com"],"version":"79eb8896-37e4-4a22-8d0f-f14bd3fbe7e5","name":"static-test","space_name":"EA","space_id":"fb8ea5f4-36f1-4484-86ea-b12655669d5f","uris":["static-test.stackato.danielwatrous.com"],"users":null,"sso_enabled":false}
VCAP_SERVICES={}
STAGING_TIMEOUT=900.0
TCL_TEMPLOAD_NO_UNLINK=1
MAIL=/var/mail/stackato
PATH=/opt/ActivePython-2.7/bin:/opt/ActivePython-3.3/bin:/opt/ActivePerl-5.16/bin:/opt/rubies/current/bin:/opt/node-v0.10/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
STACKATO_SERVICES={}
LANG=en_US.UTF-8
SHLVL=3
LANGUAGE=en_US.UTF-8
LOGNAME=stackato
SSH_CONNECTION=172.17.42.1 41465 172.17.0.120 22
GOVERSION=1.2.1
BUILDPACK_CACHE=/var/vcap/packages/buildpack_cache
MEMORY_LIMIT=40m
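
As a rough illustration of how these variables might be consumed (this is only a sketch; my-app is a placeholder for a real start command, and PORT is assigned to running instances at deploy time rather than appearing in the staging listing above), a start script could do something like the following, using the Python interpreter that is already on the PATH:

#!/usr/bin/env bash
# sketch: consume the environment provided by CloudFoundry at runtime
# PORT is assigned per instance when the app is deployed; never hard-code it
APP_PORT="${PORT:-8080}"

# parse the application name out of VCAP_APPLICATION with the Python already on the PATH
APP_NAME=$(echo "$VCAP_APPLICATION" | python -c 'import json,sys; print(json.load(sys.stdin)["application_name"])')

echo "starting $APP_NAME on port $APP_PORT" >> "$HOME/logs/stdout.log"
exec ./my-app --port "$APP_PORT"  # my-app is a placeholder for the real start command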

detect

The detect script analyzes the application files looking for specific artifacts. For example, the Ruby buildpack looks for a Gemfile. The PHP buildpack looks for index.php or composer.json. The Java buildpack looks for a pom.xml. And so on for each technology. If a buildpack detects the required artifacts, it prints a message and exits with a system value of 0, indicating success. At this point, the buildpack files are also copied to the staging instance.
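
As described above, the detect contract is small: the script receives the application (build) directory as its first argument, prints a name and exits with 0 on a match. The following minimal sketch, for a hypothetical buildpack that detects static HTML apps, shows the shape:

#!/usr/bin/env bash
# bin/detect <build-dir>
# hypothetical example: claim the app only if it contains an index.html
if [ -f "$1/index.html" ]; then
  echo "Static HTML"   # name reported for the detected framework
  exit 0
fi
exit 1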

compile

The compile script is responsible for doing the rest of the work to prepare the runtime environment. This may include downloading and compiling source files. It may include downloading or copying pre-compiled binaries. The compile script is responsible for arranging the application files and runtime artifacts in a way that accommodates configuration.
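
By convention (following the common buildpack layout rather than anything specific to Stackato), the compile script receives the application build directory and a cache directory as arguments. A bare skeleton might look like the following; the download URL and vendor path are placeholders only:

#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir>
set -e
BUILD_DIR=$1
CACHE_DIR=$2

# reuse artifacts from previous stagings where possible
mkdir -p "$CACHE_DIR"

# fetch and unpack a pre-built runtime into the application directory
# (the URL and vendor directory below are placeholders, not a real runtime)
# curl -sL -o "$CACHE_DIR/runtime.tar.gz" "https://example.com/runtime.tar.gz"
# mkdir -p "$BUILD_DIR/vendor"
# tar -xzf "$CACHE_DIR/runtime.tar.gz" -C "$BUILD_DIR/vendor"

# arrange application files and write any runtime configuration under $BUILD_DIR
echo "-----> compile finished for $BUILD_DIR"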

The compile script, as well as all runtime components, runs as a local user without sudo privileges. This often means that configuration files, log files and other typical filesystem paths need to be adapted to run as an unprivileged user.

The following image shows the directory structure on the Docker staging instance, which will be useful when creating a compile script.

stackato-staging-directory-structure

Hooks are available for pre-staging and post-staging. These can be useful when staging requires access to external resources that need authentication or other preliminary setup.

Create droplet

Once the compile script completes, everything in the user’s directory is packaged up as a compressed tar file. This is the droplet. The droplet is copied to the cloud controller and the staging instance is destroyed.


Overview of CloudFoundry

CloudFoundry is an open source Platform as a Service (PaaS) technology originally introduced and commercially supported by Pivotal. The software makes it possible to easily stage, deploy and scale applications, thanks in part to its adoption of buildpacks, which were originally introduced by Heroku.

Some software design principles are required to achieve scale with CloudFoundry. The most notable design choice is a complete abstraction of persistence, including the filesystem, datastore and even in-memory cache, because instances of an application are transient and stateless. Since this is generally good design anyway, many applications may find it easy to migrate to CloudFoundry.

CloudFoundry Internals

CloudFoundry can be viewed from two perspectives: its internal components and the application developer who wants to deploy on it. The image directly below is from the CloudFoundry architecture overview.

cf_architecture_block

This shows the internal components that make up CloudFoundry. The router is the internet-facing component that matches requests with application instances and performs load balancing among the instances of an application. Unlike a traditional load balancer, there is no concept of sticky sessions, since application instances are assumed to be stateless.

The DEA is a compute resource (usually a virtual server, but it can be any compute resource, including bare metal). CloudFoundry uses a technology called Warden for containerization. Other distributions use alternative technologies, like Docker.

Services, such as databases, caches, filesystems, etc., must implement the Service Broker API. Through this API, CloudFoundry is able to discover and provision services and to communicate credentials to each instance of an application.
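
As a rough illustration of that contract (the broker URL, credentials and API version header below are placeholders, not values from a real installation), the cloud controller discovers a broker's offerings with a simple REST call such as:

# sketch: fetch a service broker's catalog the way the cloud controller would
curl -u broker-user:broker-pass \
     -H "X-Broker-Api-Version: 2.4" \
     http://broker.example.com/v2/catalog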

Application Development

Application Developers interact with CloudFoundry in a few different ways. Two common methods include the command line client and Eclipse plugin. Using these tools, developers may login to a CloudFoundry installation and deploy apps under organizations and spaces. The following diagram illustrates this.

cloudfoundry-diagram

When a developer is ready to deploy, he pushes his app. CloudFoundry then identifies a suitable buildpack and stages the application resulting in a droplet. A droplet is a compressed tar file which contains all the application files and runtime dependencies. After staging, app instances are created and the droplet is extracted on the new instance at which point the application is started. If services are required, these are provisioned when the application is pushed.

Distributions

There are many contributors to open source CloudFoundry. This has resulted in various distributions of CloudFoundry aside from Pivotal’s commercial offering.

ActiveState Stackato

ActiveState distributes CloudFoundry under the brand name Stackato. Some notable differences include the use of Docker for instances and a web interface that includes an app store for deploying common applications.

HP Helion Development Platform

Hewlett-Packard offers an enterprise-focused distribution of Stackato as the HP Helion Development Platform. The enterprise focus includes an emphasis on the ability to use private cloud, public cloud and traditional IT to deploy and scale mission-critical applications cost effectively, securely and reliably.

Getting started with CloudFoundry

It’s easy to get started with CloudFoundry. Here are a couple of tutorials that will get you ready to quickly deploy apps.

CloudFoundry on HPCloud.com
Install CloudFoundry in VirtualBox


Nginx in Docker for Stackato buildpacks

I’m about to write a few articles about creating buildpacks for Stackato, which is a derivative of CloudFoundry and the technology behind Helion Development Platform. The approach for deploying nginx in docker as part of a buildpack differs from the approach I published previously. There are a few reasons for this:

  • Stackato discourages root access to the docker image
  • All services will run as the stackato user
  • The PORT to which services must bind is assigned dynamically and falls in the unprivileged range (no root required)

Get a host setup to run docker

The easiest way to follow along with this tutorial is to deploy Stackato somewhere like hpcloud.com. The resulting host will have the Docker image you need to follow along below.

Manually prepare a VM to run docker containers

You can also use Vagrant to spin up a virtual host where you can run these commands.

vagrant init ubuntu/trusty64

Modify the Vagrantfile to contain this line

  config.vm.provision :shell, path: "bootstrap.sh"

Then create the bootstrap.sh file based on the details below.

bootstrap.sh

#!/usr/bin/env bash
 
# set proxy variables
#export http_proxy=http://proxy.example.com:8080
#export https_proxy=https://proxy.example.com:8080
 
# install docker from the official apt repository
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-get update
apt-get -y install lxc-docker
 
# need to add proxy specifically to docker config since it doesn't pick them up from the environment
#sed -i '$a export http_proxy=http://proxy.example.com:8080' /etc/default/docker
#sed -i '$a export https_proxy=https://proxy.example.com:8080' /etc/default/docker
 
# enable non-root use by vagrant user
groupadd docker
gpasswd -a vagrant docker
 
# restart to enable proxy
service docker restart
# give docker a few seconds to start up
sleep 2s
 
# load in the stackato/stack-alsek image (can take a while)
docker load < /vagrant/stack-alsek.tar

Get the stackato docker image

Before the last line in the above bootstrap.sh script will work, it's necessary to place the Docker image for Stackato in the same directory as the Vagrantfile. Unfortunately the Stackato Docker image is not published independently, which makes it more complicated to get. One way to get it is to deploy Stackato locally and grab a copy of the image with this command.

docker save stackato/stack-alsek > stack-alsek.tar

You might want to save a copy of stack-alsek.tar to save time in the future. I’m not sure if it can be published (legally), but you’ll want to update this image with each release of Stackato anyway.

Launch a new docker instance using the Stackato image

You should now have three files in the directory where you did ‘vagrant init’.

-rw-rw-rw-   1 user     group          4998 Oct  9 14:01 Vagrantfile
-rw-rw-rw-   1 user     group           971 Oct  9 14:23 bootstrap.sh
-rw-rw-rw-   1 user     group    1757431808 Oct  9 14:02 stack-alsek.tar

At this point you should be ready to create a new VM and spin up a docker instance. First tell Vagrant to build the virtual server.

vagrant up

Next, log in to your server and create the docker container.

vagrant@vagrant-ubuntu-trusty-64:~$ docker run -i -t stackato/stack-alsek:latest /bin/bash
root@33ad737d42cf:/#

Build and configure your services

Once you have a system set up and can create Docker instances based on the Stackato image, you're ready to craft your buildpack compile script. One of the first things I do is install the w3m browser so I can test my setup. In this example, I'm just going to build and test nginx; the same process could be used to build any number of steps into a compile script. It may be necessary to manage the http_proxy, https_proxy and no_proxy environment variables for both the root and stackato users while completing the steps below.

apt-get -y install w3m

Since everything in a Stackato DEA is expected to run as the stackato user, we'll switch to that user and move into the home directory.

root@33ad737d42cf:/# su stackato
stackato@33ad737d42cf:/$ cd
stackato@33ad737d42cf:~$ pwd
/home/stackato/

Next I’m going to grab the source for nginx and configure and make.

wget -e use_proxy=yes http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make

By this point nginx has been built successfully and we’re in the nginx source directory. Next I want to update the configuration file to use a non-privileged port. For now I’ll use 8080, but Stackato will assign the actual PORT when deployed.

mv conf/nginx.conf conf/nginx.conf.original
sed 's/\(^\s*listen\s*\)80/\1 8080/' conf/nginx.conf.original > conf/nginx.conf

I also need to make sure that there is a logs directory available for nginx error logs on startup.

mkdir logs

It’s time to test the nginx build, which we can do with the command below. A message should be displayed saying the test was successful.

stackato@33ad737d42cf:~/nginx-1.6.2$ ./objs/nginx -t -c conf/nginx.conf -p /home/stackato/nginx-1.6.2
nginx: the configuration file /home/stackato/nginx-1.6.2/conf/nginx.conf syntax is ok
nginx: configuration file /home/stackato/nginx-1.6.2/conf/nginx.conf test is successful

With the setup and configuration validated, it’s time to start nginx and verify that it works.

./objs/nginx -c conf/nginx.conf -p /home/stackato/nginx-1.6.2

At this point it should be possible to load the nginx welcome page using the command below.

w3m http://localhost:8080

nginx-docker

Next steps

If an application requires other resources, this same process can be followed to build, integrate and test them within a running docker container based on the stackato image.
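
As a rough sketch of that idea (not a complete buildpack; BUILD_DIR stands in for the directory the compile script receives, and the PORT substitution assumes Stackato exports PORT to the running instance), the manual nginx steps above might translate into buildpack scripts along these lines:

# compile-time steps (mirroring the walkthrough above)
cd "$BUILD_DIR"
wget -q http://nginx.org/download/nginx-1.6.2.tar.gz
tar xzf nginx-1.6.2.tar.gz
cd nginx-1.6.2
./configure
make
mkdir -p logs
mv conf/nginx.conf conf/nginx.conf.original

# start-time step: substitute the assigned PORT and launch nginx
sed "s/\(^\s*listen\s*\)80/\1 ${PORT}/" conf/nginx.conf.original > conf/nginx.conf
./objs/nginx -c conf/nginx.conf -p "$(pwd)"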


Explore CloudFoundry using bosh-lite on Windows

It seems like most of the development around CloudFoundry and bosh happens on Linux or Mac. Getting things up and running on Windows was a real challenge. Below is how I worked things out.

Note: Make sure you have a modern processor that supports all virtualization technologies, such as VT-x and extended paging.

Aside from the deviations mentioned below, I’m following the steps documented at https://github.com/cloudfoundry/bosh-lite

Changes to Vagrantfile

I’m using VirtualBox on Windows 7. To begin with, I modified the Vagrantfile to create two VMs rather than a single VM. The first is the VM that will run CloudFoundry. The second is to run bosh for the deployment of CloudFoundry. I use a second Linux VM to execute the bosh deployment since all the commands and files were developed in a *nix environment.

I am also more explicit in my network setup. I want the two hosts to have free communication on a local private network. I leave the default IP address assignment for the CloudFoundry host. For the bosh host I change the last octet of the IP address to 14.

  config.vm.define "cf" do |cf|
    cf.vm.provider :virtualbox do |v, override|
      override.vm.box = 'cloudfoundry/bosh-lite'
      override.vm.box_version = '388'
 
      # To use a different IP address for the bosh-lite director, uncomment this line:
      override.vm.network :private_network, ip: '192.168.50.4', id: :local
      override.vm.network :public_network
    end
  end
 
  config.vm.define "boshlite" do |boshlite|
    boshlite.vm.provider :virtualbox do |v, override|
      override.vm.box = 'ubuntu/trusty64'
 
      # To use a different IP address for the bosh-lite director, uncomment this line:
      override.vm.network :private_network, ip: '192.168.50.14', id: :local
      override.vm.network :public_network
      v.memory = 6144
      v.cpus = 2
    end
  end

At this point you can spin up the two hosts.

vagrant up --provider=virtualbox

The remaining steps need to happen on your bosh deployment host (192.168.50.14 based on the Vagrantfile above). In case you need it, here is a refresher on setting up Vagrant SSH connectivity using PuTTY on Windows.

Prepare for provision_cf

If you are in a proxied environment, you’ll need to set the environment variables, including no_proxy for the CloudFoundry host. I include xip.io for ease of access in future steps.

export http_proxy=http://proxy.domain.com:8080
export https_proxy=https://proxy.domain.com:8080
export no_proxy=192.168.50.4,xip.io

Next we need to get prerequisites going and then install the bosh CLI. You may have some of these already, and you may need some additional libraries. This is based on a clean Ubuntu trusty 64 box.

sudo -E add-apt-repository multiverse
sudo -E apt-get update
sudo -E apt-get -y install build-essential linux-headers-`uname -r`
sudo -E apt-get -y install ruby ruby-dev git zip

Now bosh_cli can be installed. I’ve added flags to skip ‘ri’ and ‘rdoc’ since they take a long time. If you really want those, you can drop those arguments.

sudo -E gem install bosh_cli --no-ri --no-rdoc

We also need spiff on this system. Here I grab and unzip the latest spiff, then move the binary into /usr/local/bin.

wget https://github.com/cloudfoundry-incubator/spiff/releases/download/v1.0.3/spiff_linux_amd64.zip
unzip spiff_linux_amd64.zip
sudo mv spiff /usr/local/bin/

Next we need to clone both bosh-lite and cf-release. Even though the contents of bosh-lite are available in “/vagrant”, we need these two directories side by side, so it’s easiest to just clone them both into the home directory of the bosh deployment host. We then change into the bosh-lite directory.

git clone https://github.com/cloudfoundry/bosh-lite.git
git clone https://github.com/cloudfoundry/cf-release
cd bosh-lite/

The script ./bin/provision_cf needs to be edited so that get_ip_from_vagrant_ssh_config simply outputs the private network IP address that was assigned in the Vagrantfile. The default functionality assumes that the provision script is run from the host running Vagrant and VirtualBox. However, these commands are running on the bosh deployment host, which doesn't know anything about Vagrant or VirtualBox. Here's what the function should look like.

get_ip_from_vagrant_ssh_config() {
  echo 192.168.50.4
}

Target bosh and provision

Everything is set to target the bosh host, set the route and provision CloudFoundry. When you first target the bosh director, it will ask for credentials to log in.

vagrant@vagrant-ubuntu-trusty-64:~/bosh-lite$ bosh target 192.168.50.4 lite
Target set to `Bosh Lite Director'
Your username: admin
Enter password: *****
Logged in as `admin'

Next we can add the route to the bosh deployment host.

vagrant@vagrant-ubuntu-trusty-64:~/bosh-lite$ ./bin/add-route
Adding the following route entry to your local route table to enable direct warden container access. Your sudo password may be required.
  - net 10.244.0.0/19 via 192.168.50.4

Provision CloudFoundry

The only thing left to do is provision CloudFoundry.

./bin/provision_cf
...
Started         2014-09-29 18:54:39 UTC
Finished        2014-09-29 19:36:11 UTC
Duration        00:41:32
 
Deployed `cf-manifest.yml' to `Bosh Lite Director'

This takes quite a while (possibly hours depending on your hardware). If you have an older processor that doesn’t support all the modern virtualization technologies, this could take much longer.

Verify your new CloudFoundry deployment

In order to use CloudFoundry we need the ‘cf’ client. The cf client is available as a binary download from the main GitHub page for CloudFoundry. The following commands will prepare the cf CLI for use.

wget http://go-cli.s3-website-us-east-1.amazonaws.com/releases/v6.6.1/cf-linux-amd64.tgz
tar xzvf cf-linux-amd64.tgz
sudo mv cf /usr/local/bin/

With the cf CLI installed, it is now possible to connect to the API and set up org and space details.

cf api --skip-ssl-validation https://api.10.244.0.34.xip.io
cf auth admin admin
cf create-org myorg
cf target -o myorg
cf create-space mydept
cf target -o myorg -s mydept

You should now have an environment that matches the below.

API endpoint:   https://api.10.244.0.34.xip.io (API version: 2.14.0)
User:           admin
Org:            myorg
Space:          mydept

Deploy an app

You can now deploy an application. To verify, create a directory and add a file:

index.php

<?php phpinfo(); ?>

Now push that app as follows:

vagrant@vagrant-ubuntu-trusty-64:~/test-php$ cf push test-php
Creating app test-php in org myorg / space mydept as admin...
OK
 
Creating route test-php.10.244.0.34.xip.io...
OK
 
Binding test-php.10.244.0.34.xip.io to test-php...
OK
 
Uploading test-php...
Uploading app files from: /home/vagrant/test-php
Uploading 152, 1 files
OK
 
Starting app test-php in org myorg / space mydept as admin...
OK
-----> Downloaded app package (4.0K)
Use locally cached dependencies where possible
 !     WARNING:        No composer.json found.
       Using index.php to declare PHP applications is considered legacy
       functionality and may lead to unexpected behavior.
       See https://devcenter.heroku.com/categories/php
-----> Setting up runtime environment...
       - PHP 5.5.12
       - Apache 2.4.9
       - Nginx 1.4.6
-----> Installing PHP extensions:
       - opcache (automatic; bundled, using 'ext-opcache.ini')
-----> Installing dependencies...
       Composer version ac497feabaa0d247c441178b7b4aaa4c61b07399 2014-06-10 14:13:12
       Warning: This development build of composer is over 30 days old. It is recommended to update it by running "/app/.heroku/php/bin/composer self-update" to get the latest version.
       Loading composer repositories with package information
       Installing dependencies
       Nothing to install or update
       Generating optimized autoload files
-----> Building runtime environment...
       NOTICE: No Procfile, defaulting to 'web: vendor/bin/heroku-php-apache2'
-----> Uploading droplet (64M)
 
0 of 1 instances running, 1 starting
1 of 1 instances running
 
App started
 
Showing health and status for app test-php in org myorg / space mydept as admin...
OK
 
requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: test-php.10.244.0.34.xip.io
 
     state     since                    cpu    memory          disk
#0   running   2014-09-29 07:52:38 PM   0.0%   84.9M of 256M   0 of 1G

It’s now possible to view the app using a browser. From the command line you can access it using this command:

w3m http://test-php.10.244.0.34.xip.io

Observations

In my tests, the xip.io resolution was flaky. I saw intermittent failures with the response:

dial tcp: lookup api.10.244.0.34.xip.io: no such host

In some cases I would have to run the same command a few times before it could resolve the host.

The VMs I set up obtained IP addresses on my network. However, when I tried to access apps or the API over that IP address, the connection was refused. Despite adding the domain (e.g. dhcpip.xip.io) to CloudFoundry and creating routes to my application, all attempts to use the API or load apps over the external IP failed.


Install Stackato (CloudFoundry) on HPCloud

I recently published an article to get CloudFoundry running by way of Stackato on a local machine using VirtualBox. As soon as you have something ready to share with the world (testers, executives, investors, etc.), you’ll want something more public. Fortunately, it’s easy to run Stackato on HPCloud.com. I’m following the steps outlined here: https://docs.stackato.com/admin/server/hpcs.html.

Configuration of HPCloud.com

For the security groups, I created two separate groups, one for SSH and another for web. I do this to allow for separation of access and web service functions in the future (using a bastion host, for example).

stackato-hpcloud-security-groups

Launch an Instance

My settings for the instance are straightforward and shown below. I'm using a medium-size instance and I check both the web and remote security groups that I created above. If you haven't already added a key pair for secure remote access, do that before creating the instance. If you forget to do this, the default password for the stackato user is 'stackato', at least until you complete the setup of the first admin user.

stackato-hpcloud-instance-01

And the security group assignments.

stackato-hpcloud-instance-02

I follow the instructions as provided to associate a floating IP address with the new instance and create a DNS entry for easy access to my new stackato installation. In my case I didn’t move DNS management to hpcloud.com. Instead I just added A and CNAME records where it was already being managed.

Role and name management

When I installed Stackato on VirtualBox previously, I had to rename the node to use wildcard DNS. In this case I need to rename it to use the domain name for which I created the A and CNAME records above.

For my scenario I ran the following two commands:

kato role remove mdns
kato node rename stackato.danielwatrous.com

You may see some errors about it being unable to resolve the default node name. You can safely ignore these. Restarting all the affected roles can take a few minutes.

Stackato uses HTTPS by default. However, if you are using the default certificate, you may have to bypass some security warnings in your browser.

Pushing code

You should be all set to push an app to the newly deployed Stackato instance. Let's push a simple PHP app. Two files are required to push this using the stackato command line client. First is the PHP file, index.php. Next is the manifest.yml file. The manifest is required so that CloudFoundry can identify the correct buildpack to prepare the instance.

index.php

<?php
phpinfo();
?>

And a very basic manifest.yml file.

applications:
- name: test-php
  framework: php

The console session to deploy this simple app is shown below:

F:\cf-php-test>stackato target stackato.danielwatrous.com
Host redirects to: 'https://api.stackato.danielwatrous.com'
Successfully targeted to [https://api.stackato.danielwatrous.com]
Target:       https://api.stackato.danielwatrous.com
Organization: <none>
Space:        <none>
 
F:\cf-php-test>stackato login watrous
Attempting login to [https://api.stackato.danielwatrous.com]
Password: ********
Successfully logged into [https://api.stackato.danielwatrous.com]
Choosing the one available organization: "DanielWatrous.com"
Choosing the one available space: "Explore"
Target:       https://api.stackato.danielwatrous.com
Organization: DanielWatrous.com
Space:        Explore
No license installed.
Using 4G of 4G.
 
F:\cf-php-test>stackato push
Would you like to deploy from the current directory ?  [Yn]:
Using manifest file "manifest.yml"
Application Deployed URL [test-php.stackato.danielwatrous.com]:
Application Url:   https://test-php.stackato.danielwatrous.com
Enter Memory Reservation [256]: 20
Enter Disk Reservation [2048]: 1024
Creating Application [test-php] as [https://api.stackato.danielwatrous.com -> DanielWatrous.com -> Explore -> test-php] ... OK
  Map https://test-php.stackato.danielwatrous.com ... OK
Create services to bind to 'test-php' ?  [yN]:
Uploading Application [test-php] ...
  Checking for bad links ...  OK
  Copying to temp space ...  OK
  Checking for available resources ...  OK
  Processing resources ... OK
  Packing application ... OK
  Uploading (231) ...  OK
Push Status: OK
Starting Application [test-php] ...
stackato[dea_ng]: Staging application
stackato[fence]: Created Docker container
stackato[fence]: Prepared Docker container
stackato[cloud_controller_ng]: Updated app 'test-php' -- {"console"=>true, "state"=>"STARTED"}
staging: -----> Downloaded app package (4.0M)
staging: ****************************************************************************
staging: * Using the legacy buildpack to stage a 'php' framework application.
staging: *
staging: * Note that the legacy buildpack is a migration tool to provide backwards
staging: * compatibility while moving from Stackato 2.x to Stackato 3.0.  It is not
staging: * updated with new features beyond what Stackato 2.10.6 supplied.
staging: *
staging: * Please use a non-legacy buildpack for any new code developed for Stackato!
staging: ****************************************************************************
staging:
staging: end of staging
staging: -----> Uploading droplet (4.0M)
stackato[dea_ng]: Uploading droplet
stackato[dea_ng]: Completed uploading droplet
stackato[fence]: Destroyed Docker container
stackato[fence.0]: Created Docker container
stackato[fence.0]: Prepared Docker container
stackato[dea_ng.0]: Launching web process: /home/stackato/startup
app[stderr.0]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.39. Set the 'ServerName' directive globally to suppress this message
stackato[dea_ng.0]: Instance is ready
OK
http://test-php.stackato.danielwatrous.com/ deployed
 
F:\cf-php-test>

At which point I can load my PHP test app, which is just phpinfo().

stackato-hpcloud-php-test-app


Explore CloudFoundry using Stackato and VirtualBox

Stackato, which is released by ActiveState, extends out-of-the-box CloudFoundry. It adds a web interface and a command line client ('stackato'), although the existing 'cf' command line client still works (as long as versions match up). Stackato includes some autoscale features and a very well done set of documentation.

ActiveState publishes various VM images that can be used to quickly spin up a development environment. These include images for VMWare, KVM and VirtualBox, among others. In this post I’ll walk through getting a Stackato environment running on Windows using VirtualBox.

Install and Configure Stackato

The obvious first step is to download and install VirtualBox. The steps shown below should work on any system that runs VirtualBox.

I follow these steps to install and configure the Stackato VM.
http://docs.stackato.com/admin/setup/microcloud.html

After downloading the VirtualBox image for Stackato, in VirtualBox I click "File->Import Appliance" and navigate to the unzipped contents of the VM download. I select the OVF file and click Open.

import-stackato-vm-virtualbox

This process can take several minutes depending on your system.

import-stackato-vm-virtualbox-progress

After clicking next, you can configure various settings. Click the checkbox to Reinitialize the MAC address on all network cards.

import-stackato-vm-virtualbox-reinitialize-mac

Once the import completes, right click on the VM in VirtualBox and choose settings.

stackato-vm-virtualbox-settings

Navigate to the network settings and make sure it's set to Bridged. The default is NAT. Bridged will allow the VM to obtain its own IP address on the network, which will facilitate access later on.

stackato-vm-virtualbox-settings-network

Depending on your system resources, you may also want to go into the System settings and increase the Base Memory and number of Processors. You can also ensure that all virtualization accelerators are enabled.

Launch the Stackato VM

After you click OK, the VM is ready to launch. When you launch the VM, there is an initial message asking if you want to boot into recovery mode. Choose regular boot and you will then see a message about the initial setup.

launch-stackato-virtualbox-initial-setup

Eventually the screen will show system status. You’ll notice that it bounces around until it finally settles on a green READY status.

launch-stackato-virtualbox-ready

At this point you have a running instance of Stackato.

Configure Stackato

You can see in the above screenshot that Stackato displays a URL for the management console and the IP address of the system. In order to complete the configuration of the system (and before you can SSH into the server), you need to access the web console using that URL. This may require that you edit your local hosts file (/etc/hosts on Linux or C:\Windows\System32\drivers\etc\hosts on Windows). The entries in your hosts file should look something like this:

16.85.146.131	stackato-7kgb.local
16.85.146.131	api.stackato-7kgb.local

You can now access the console using https://api.stackato-7kgb.local. Don't forget to put "https://" in front so the browser knows to request that URL rather than search for it. The server is also using a self-signed certificate, so you can expect your browser to complain. It's OK to tell your browser to load the page despite the self-signed certificate.

On the page that loads, you need to provide a username, email address, and some other details to configure this Stackato installation. Provide the requested details and click to set up the initial admin user.

stackato-cloudfoundry-configuration

A couple of things just happened. First off, a user was created with the username you provided. The password you chose will also become the password for the system user 'stackato'. This is important because it allows you to SSH into your instance.

Wildcard DNS

Wildcard DNS, using a service like xip.io, will make it easier to access Stackato and any applications that you deploy there. First we log in to the VM over SSH and use the node rename command to enable wildcard DNS.

kato node rename 16.85.146.131.xip.io

Stopping and starting related roles takes a few minutes after running the command above. Once the server returns to READY state, the console is available using the xip.io address, https://api.16.85.146.131.xip.io. This also applies to any applications that are deployed.

The entries in the local hosts file are no longer needed and can be removed.

Proxy Configuration

Many enterprise environments route internet access through a proxy. If this is the case for you, it’s possible to identify the upstream proxy for all Stackato related services. Run the following commands on the Stackato VM to enable proxy access.

kato op upstream_proxy set proxy.domain.com:8080
sudo /etc/init.d/polipo restart

It may also be necessary to set the http_proxy and https_proxy environment variables by way of the .bashrc for the stackato system user.
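
For example (the proxy host and port below are placeholders), the exports could be appended to that .bashrc as follows:

# run as the stackato user; proxy host and port are placeholders
echo 'export http_proxy=http://proxy.domain.com:8080' >> ~/.bashrc
echo 'export https_proxy=https://proxy.domain.com:8080' >> ~/.bashrc
echo 'export no_proxy=localhost,127.0.0.1' >> ~/.bashrc
source ~/.bashrc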

At this point you should be able to easily deploy a new app from the app store using the Stackato console. Let’s turn our attention now to using a client to deploy a simple app.

Use the stackato CLI to Deploy an App

The same virtual host that is running Stackato also includes the command line client. That means it can be used to deploy a simple application and verify that Stackato is working properly. To do this, first connect to the VM using SSH. Once connected, the following steps will prepare the command line client to deploy a simple application.

  1. set the target
  2. login/authenticate
  3. push the app

To set the target and log in, we use these commands:

stackato target api.16.85.146.131.xip.io
stackato login watrous

The output of the commands can be seen in the image below:

stackato-cli-target-login

Example Python App

There is a simple Python Bottle application we can use to confirm our Stackato deployment. To deploy it, we first clone the repository, then use the stackato client to push, as shown below.

stackato-cli-deploy-push-python

Here are those commands in plain text:

git clone https://github.com/Stackato-Apps/bottle-py3.git
cd bottle-py3/
stackato push -n

Using the URL provided in the output from ‘stackato push’ we can view the new app.

stackato-python-bottle-app

You can now scale up instances and manage other aspects of the app using the web console or the stackato client.


OpenStack Development using DevStack

I’ve been sneaking up on CloudFoundry for a few weeks now. Each time I try to get a CloudFoundry test environment going I find a couple more technologies that serve either as foundation or support to CloudFoundry. For example, Vagrant, Ansible and Docker all feed into CloudFoundry. Today I come to OpenStack, by way of DevStack (see resources below for more links).

Objectives

My objectives for this post are to get OpenStack up and running by way of DevStack so that I can begin to explore the OpenStack API. CloudFoundry uses the OpenStack API to provision new VMs, so understanding the API is essential to troubleshooting CloudFoundry issues.

Some deviations from out-of-the-box DevStack include:

  • Enable higher overcommit allocation ratios
  • Add Ubuntu image for new VMs
  • Remove quotas

Environment

For these experiments I am using a physical server with 8 physical cores, 16GB RAM and 1TB disk. The host operating system is Ubuntu 14.04 LTS. I’m following the DevStack instructions for a single machine. It may be possible to duplicate these experiments in a Vagrant environment, but the nested networking and resource constraints for spinning up multiple guests would muddle the effort.

Configuration

DevStack looks for a local.conf in the same directory as the stack.sh script. This makes it possible to override default settings for any of the OpenStack components that will be started as part of the DevStack environment.

Passwords for Quick Resets

I found myself frequently running unstack.sh, clean.sh and stack.sh. To speed up this process I added the following lines to my local.conf to set passwords with default values. This prevented the prompts for those values so the script would run without any input.

[[local|localrc]]
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD

Networking

I also included some networking defaults. NOTE: Even though I show [[local|localrc]] again in the snippet below, there is only one such section, so the values can be added to the above if you decide to use them.

[[local|localrc]]
FLOATING_RANGE=192.168.1.224/27
FIXED_RANGE=10.11.12.0/24
FIXED_NETWORK_SIZE=256
FLAT_INTERFACE=eth0
GIT_BASE=${GIT_BASE:-https://git.openstack.org}
VOLUME_BACKING_FILE_SIZE=200G

As always, you may have to handle proxy environment variables for your network. Some features of OpenStack may require local non-proxy access, so remember to set the no_proxy parameter for local traffic.

The line about GIT_BASE accommodates environments where the git:// protocol is blocked. It allows Git to operate over HTTPS, which can leverage proxy settings.

The last line, VOLUME_BACKING_FILE_SIZE, allocates more space to Cinder for creating volumes (we'll need this to work with CloudFoundry in the future).

Add Ubuntu Image

The IMAGE_URLS parameter accepts a comma separated list of images to pre-load into OpenStack. Ubuntu publishes many cloud images for ease of inclusion into OpenStack.

[[local|localrc]]
IMAGE_URLS="https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img,http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img,http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-uec.tar.gz"

Logging in DevStack

The Linux screen command allows multiple shell commands to be running at the same time and to survive a terminal exit. Adding the command below collects all log output into a single directory. This can be extremely helpful when troubleshooting.

[[local|localrc]]
SCREEN_LOGDIR=$DEST/logs/screen

The default location for $DEST is /opt/stack.

Overcommitting

OpenStack makes it easy to overcommit physical resources. This can be a valuable tool to increase hardware utilization, especially when loads are unevenly distributed. Setting the allocation ratios for CPU, RAM and DISK affects the nova scheduler. CPU is the most safely overcommitted, with RAM coming up next and DISK last. A post-config on the nova.conf file can be accomplished by adding the snippet below to local.conf.

[[post-config|$NOVA_CONF]]
[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 4.0
disk_allocation_ratio = 1.0

Running DevStack

With a local.conf all set, OpenStack can be started and reset using the following DevStack commands from within the devstack directory.

./stack.sh
./unstack.sh
./clean.sh

Access OpenStack

At this point you should have a running OpenStack server. Depending on the IP address or DNS settings, you should be able to access it with a URL similar to the following:

http://openstack.yourdomain.com/auth/login/

Which should look something like this:

openstack-login

Remember that the credentials are whatever you set in your local.conf file (see above).

Troubleshooting

On a couple of occasions, unstack.sh and clean.sh were unable to properly reset the system and a subsequent stack.sh failed. In those cases it was necessary to manually remove some services. I accomplished this by manually calling apt-get remove package followed by apt-get autoremove. On one occasion, I finally tried a reboot, which corrected the issue (a first for me on Linux).

It’s very possible that I could have cycled through the running screen sessions or spent more time in the logs to discover the root cause.

Create and Use VMs

With a running OpenStack environment, we can now create some VMs and start testing the API. Security groups and key pairs can make it easy to work with newly created instances. You can manage both by navigating to Project -> Compute -> Access & Security -> Security Groups.

Create a Security Group

The first thing to do is add a security group. I called mine “standard”. After adding the security group, there is a short delay and then a new line item appears. Click “Manage Rules” for that line item.

I typically open up ICMP (for pings), SSH(22), HTTP(80) and HTTPS(443). This is what the rules look like.

openstack-security-group-rules

Key Pairs

Key pairs can be managed by navigating to Project -> Compute -> Access & Security -> Key Pairs. All you have to do is click “Create Key Pair”. Choose a name that represents where you’ll use that key to gain access (such as ‘laptop’ or ‘workstation’). It will then trigger a download of the resulting pem file.

If you have a range of IP addresses that will allow off-box access to these hosts, then you can use puttygen to get a version of the key that you can import into PuTTY, similar to what I did here.

Otherwise, the typical use case will require uploading the pem file to the host where you are running OpenStack. I store the file in ~/.ssh/onserver.pem. Set the permissions to 400 on the pem file.

chmod 400 ~/.ssh/onserver.pem

Remove Quotas (optional)

You can optionally disable quotas so that an unlimited number of VMs of any size can be created (or at least up to the allocation ratio limits). This is done from the command line using the nova client. First you set the environment so you can run the nova client as admin. Then you identify the tenant and finally call quota-update to remove the quotas. From the devstack folder, you can use these commands.

stack@watrous:~/devstack$ source openrc admin admin
stack@watrous:~/devstack$ keystone tenant-list
+----------------------------------+--------------------+---------+
|                id                |        name        | enabled |
+----------------------------------+--------------------+---------+
| 14f42a9bc2e3479a91fb163807767dbc |       admin        |   True  |
| 88d670a4dc714e8b982b3ee8f7a95554 |      alt_demo      |   True  |
| 6c4be67d187c4dcab0fba70d6a004021 |        demo        |   True  |
| 5d1f69d120514caab75bcf27a202d358 | invisible_to_admin |   True  |
| 422e13e213e44f08a79f64440b56ee5c |      service       |   True  |
+----------------------------------+--------------------+---------+
stack@watrous:~/devstack$ nova quota-update --instances -1 --cores -1 --ram -1 --fixed-ips -1 --floating-ips -1 6c4be67d187c4dcab0fba70d6a004021

NOTE: Quotas can also be managed through the web interface as part of a project. When logged in as admin, navigate to “Admin -> Identity -> Projects”. Then for the project you want to update, click “More -> Modify Quotas”. Quotas will have to be modified for each project individually.

Launch a New Instance

Finally we can launch a new instance by navigating to Project -> Compute -> Instances and clicking the “Launch Instance” button. There are four tabs on the Launch Instance panel. The “Details” tab provides options related to the size of the VM and the image used to create it. The Access & Security tab makes it possible to choose a key pair to be automatically installed as well as any security groups that should apply.

openstack-launch-instance-details

openstack-launch-instance-access-security

You can see any running instances from the Instances view, including an assigned IP address to access the server. This should be on a private network and is based on the networking parameters you provided in local.conf above. If you have public IP addresses, you can assign those and perform other maintenance on your instance using the “More” menu for that instance line item, as shown here.

openstack-launch-instance-more

Connect to the Instance

Connect to the new instance from the host system using the "-i" option to ssh, which allows connecting without a password.

ssh -i ~/.ssh/onserver.pem ubuntu@10.11.12.2

openstack-connect-instance

Connect to the Internet from Guests

Networking is complicated in OpenStack. The process I have outlined in this post sets up DevStack to use nova-network. In order to route traffic from guests in OpenStack and the internet, iptables needs to be updated to handle masquerading. You can typically do this by running the command below on the OpenStack host (not the guest).

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Depending on your setup, you may need to set this on eth1 instead of eth0 as shown.

NOTE: This iptables command is transient and will be required on each boot. It is possible to make iptables changes persistent on Ubuntu.
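
One way to do that (assuming the iptables-persistent package is acceptable in your environment) is to install it and save the rules that are currently loaded:

# persist the masquerade rule across reboots
sudo apt-get -y install iptables-persistent
# write the currently loaded rules where they will be restored at boot
sudo sh -c "iptables-save > /etc/iptables/rules.v4"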

Comments

Networking remains the most complicated aspect of OpenStack to both manage and simulate for a development environment. It can feel awkward to have to connect to all instances by way of the host machine, but it greatly simplifies the setup. Everything is set now to explore the API.

Resources and related:

http://www.stackgeek.com/blog/kordless/post/taking-openstack-for-a-spin
http://docs.openstack.org/grizzly/openstack-image/content/ch_obtaining_images.html
http://docs.openstack.org/user-guide/content/dashboard_create_images.html
http://askubuntu.com/questions/471154/how-to-import-and-install-a-uec-image-on-openstack