Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions



Increase Munin Resolution to sub-minute

I previously explained how to get one-minute resolution in Munin. Getting sub-minute resolution in Munin is trickier, mainly because cron only runs once per minute: data must be generated and cached between cron runs, then collected when cron next runs.

[Image: munin-plugin-resolution]

In the case where a single datapoint is collected each time cron runs, the time at which cron runs is sufficient to store the data in rrd. With multiple datapoints being collected on a single cron run, it’s necessary to embed a timestamp with each datapoint so the datapoints can be properly stored in rrd.

For example, the load plugin, which produces this for a collection interval of one minute or greater:

load.value 0.00

would need to produce output like this for a five (5) second collection interval:

load.value 1426889886:0.00
load.value 1426889891:0.00
load.value 1426889896:0.00
load.value 1426889901:0.00
load.value 1426889906:0.00
load.value 1426889911:0.00
load.value 1426889916:0.00
load.value 1426889921:0.00
load.value 1426889926:0.00
load.value 1426889931:0.00
load.value 1426889936:0.00
load.value 1426889941:0.00

Caching mechanism

In one example implementation of a one second collection rate, a datafile is either appended to or replaced using standard Linux file management mechanisms.

This looks a lot like a message queue problem. Each datapoint needs to be published to the queue. The call to fetch is like a subscriber that pulls all available messages, emptying the queue of everything cached since the last run.

Acquire must be long running

In the case of one minute resolution, the data can be generated at the moment it is collected. This means the process run by cron is sufficient to generate the desired data and can die after the data is output. For sub-minute resolution, a separate long running process is required to generate and cache the data. There are a couple of ways to accomplish this.

  1. Start a process that will only run until the next cron run. This would be started each time cron fetches the data.
  2. Create a daemon process that will produce a stream of data.

A possible pitfall with #2 above is that it would continue producing data even if the collection cron was failing. Option #1 results in more total processes being started.

Example using Redis

Redis is a very fast key/value datastore that runs natively on Linux. The following example shows how to use a bash script based plugin with Redis as the cache between cron runs. I can install Redis on Ubuntu using apt-get.

sudo apt-get install -y redis-server
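
Before wiring the plugin in, it's easy to sanity check Redis and see the queue pattern from the shell. This is just an illustration using standard redis-cli commands; the key name demo.cache is arbitrary.

redis-cli ping                                            # expect PONG
redis-cli RPUSH demo.cache "load.value 1426889886:0.00"   # publish a datapoint
redis-cli LRANGE demo.cache 0 -1                          # peek without consuming
redis-cli LPOP demo.cache                                 # consume, as fetch will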

And here is the plugin.

#!/bin/bash
# (c) 2015  - Daniel Watrous
 
update_rate=5                   # sampling interval in seconds
cron_cycle=1                    # time between cron runs in minutes
pluginfull="$0"                 # full name of plugin
plugin="${0##*/}"               # name of plugin
redis_cache="$plugin.cache"
graph="$plugin"
section="system:load"
style="LINE"
 
run_acquire() {
   while [ "true" ]
   do
      sleep $update_rate
      datapoint="$(cat /proc/loadavg | awk '{print "load.value " systime() ":" $1}')"
      redis-cli RPUSH $redis_cache "$datapoint"
   done
}
 
# --------------------------------------------------------------------------
 
run_autoconf() {
   echo "yes"
}
 
run_config() {
cat << EOF
graph_title $graph
graph_category $section
graph_vlabel System Load
graph_scale no
update_rate $update_rate
graph_data_size custom 1d, 10s for 1w, 1m for 1t, 5m for 1y
load.label load
load.draw $style
EOF
}
 
run_fetch()  {
   # (re)start the acquire loop in the background; timeout keeps it alive only
   # until just after the next cron run, so data keeps accumulating between runs
   timeout_calc=$(expr $cron_cycle \* 60 + 5)
   timeout $timeout_calc $pluginfull acquire >/dev/null &
   # drain every datapoint cached in Redis since the last fetch
   while [ "true" ]
   do
     datapoint="$(redis-cli LPOP $redis_cache)"
     if [ "$datapoint" = "" ]; then
       break
     fi
     echo $datapoint
   done
}
 
 
run_${1:-fetch}
exit 0

Restart munin-node to find plugin

Before the new plugin will be found and executed, it's necessary to restart munin-node. If autoconf returns yes, data collection will start automatically.

ubuntu@munin-dup:~$ sudo service munin-node restart
munin-node stop/waiting
munin-node start/running, process 4684

It’s possible to view the cached values using the LRANGE command without disturbing their collection. Recall that calling fetch will remove them from the queue, so you should leave it to Munin to call fetch.

ubuntu@munin-dup:~$ redis-cli LRANGE load_dynamic.cache 0 -1
1) "load.value 1426910287:0.13"
2) "load.value 1426910292:0.12"
3) "load.value 1426910297:0.11"
4) "load.value 1426910302:0.10"

That’s it. Now you have a Munin plugin with resolution down to the second.


Increase Munin Resolution to One Minute

I’ve recently been conducting some performance testing of a PaaS solution. In an effort to capture specific details resulting from these performance tests and how the test systems hold up under load, I looked into a few monitoring tools. Munin caught my eye as being easy to set up and having a large set of data available out of the box. The plugin architecture also made it attractive.

One major drawback to Munin was the five minute resolution. During performance tests, it’s necessary to capture data much more frequently. With the latest Munin, it’s possible to easily get down to one minute resolution with only a couple of minor changes.

One minute sampling

Since Munin runs on a cron job, the first step to increase the sampling resolution is to run the cron job more frequently. There are two cron jobs to consider.

/etc/cron.d/munin-node

The /etc/cron.d/munin-node cron runs the plugins that actually generate datapoints and store them in RRD files. In order to increase the amount of data collected, modify this file to sample more frequently (up to once a minute).
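
As a hedged illustration (the user and command columns vary by distribution, so keep whatever is already in your file and change only the schedule field), moving from a five minute schedule to a one minute schedule looks like this:

# before: run every five minutes
#   */5 * * * *   munin   <existing munin-node command>
# after: run every minute
#   *   * * * *   munin   <existing munin-node command>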

/etc/cron.d/munin

The /etc/cron.d/munin cron runs the Munin core process which builds the HTML and generates the graphs. Running this cron more frequently will not increase the number of data samples collected. However, if you do modify the munin-node cron to collect more samples, you may want to update this cron so that data is visible as it is collected.

Change plugin config to capture more data

The plugin config must be modified to include (or change) two arguments.

  • update_rate: this modifies the step for the rrd file
  • graph_data_size: this sets the rrd file size to accommodate the desired amount of data

To sample at a rate of once per minute, the config would need to return the following along with other values.

graph_data_size custom 1d, 1m for 1w, 5m for 1t, 15m for 1y
update_rate 60

Historical Data

After changing the rrd step using update_rate and redefining how much data to capture using graph_data_size, the existing rrd file will no longer accommodate the new data. If the historical data is not needed, the rrd files can be deleted from /var/lib/munin/<domain>. New files with the desired step and amount of data will be created when update is run next.
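
For example, something like the following removes the old files for one node so they are recreated with the new step (a hedged example; substitute your own group/domain directory, and narrow the glob if you only want to reset specific graphs):

sudo rm /var/lib/munin/<domain>/*.rrd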

If the historical data is needed, it may be possible to reconfigure the existing data files, but I haven’t personally tried this.


Understanding Munin Plugins

Munin is a monitoring tool which captures and graphs system data, such as CPU utilization, load and I/O. Munin is designed so that all data is collected by plugins. This means that every built-in graph is a plugin that was included with the Munin distribution. Each plugin adheres to the interface (not a literal OO interface) shown below.

[Image: munin-plugin-interface]

Munin uses Round Robin Database files (.rrd) to store captured data. The default time configuration in Munin collects data in five minute increments.

Some important details:

  • Plugins can be written in any language, including shell scripts, interpreted languages and even compiled languages like C.
  • Plugin output prints to stdout.
  • When the plugin is called with no arguments or with fetch, the output should be the actual data from the monitor.
  • The config function defines the characteristics of the graph that is produced.

config

Each Munin plugin is expected to respond to a config call. The output of config is a list of name/value pairs, one per line, a space separating name and value. Config values are further separated into two subsets of configuration data. One defines global properties of the graph while the other defines the details for each datapoint defined for the graph. A reference describing these items is available here: http://munin.readthedocs.org/en/latest/reference/plugin.html.

Here’s the config output from the load plugin. Several graph properties are defined followed by a definition of the single datapoint, “load”.

ubuntu@munin:~$ sudo munin-run load config
graph_title Load average
graph_args --base 1000 -l 0
graph_vlabel load
graph_scale no
graph_category system
graph_info The load average of the machine describes how many processes are in the run-queue (scheduled to run "immediately").
load.info 5 minute load average
load.label load

One graph parameter which is not included on the plugin reference above is graph_data_size. This parameter makes it possible to change the size and structure of the RRD file to capture and retain data at different levels of granularity (default is normal, which captures data at a five minute interval).

autoconf

The autoconf call should return “yes” or “no”, indicating whether the plugin should be activated by default. This part of the script might check to ensure that a required library or specific service is installed on the system to determine whether it should be enabled.

fetch or no argument

This should return the actual values, either in real-time or as cached since the last fetch. If a plugin is called without any arguments, the plugin should return the same as if fetch were called.

Install plugin

Plugins are typically kept in /usr/share/munin/plugins. A symlink is then created from /etc/munin/plugins pointing to the plugin.
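
For example, installing a plugin named myplugin (a hypothetical name; adjust paths for your distribution) typically looks like this:

sudo cp myplugin /usr/share/munin/plugins/myplugin
sudo chmod +x /usr/share/munin/plugins/myplugin
sudo ln -s /usr/share/munin/plugins/myplugin /etc/munin/plugins/myplugin
sudo service munin-node restart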

Running plugins

It’s possible to run a plugin manually using munin-run.

ubuntu@munin:~$ sudo munin-run load
load.value 0.01

Example

The Munin wiki contains an example, so for now I’ll link to that rather than create my own simple example.

http://munin-monitoring.org/wiki/HowToWritePlugins

Reference

http://guide.munin-monitoring.org/en/latest/plugin/writing.html


Use Docker to Build a LEMP Stack (Buildfile)

I’ve been reviewing Docker recently. As part of that review, I decided to build a LEMP stack in Docker. I use Vagrant to create an environment in which to run Docker. For this experiment I chose to create Buildfiles (Dockerfiles) to build the Docker container images. I’ll be discussing the following files in this post.

Vagrantfile
bootstrap.sh
mysql/Dockerfile
mysql/mysqlpwdseed
nginx/Dockerfile
nginx/default
nginx/wall.php

Download the Docker LEMP files as a zip (docker-lemp.zip).

Spin up the Host System

I start with Vagrant to spin up a host system for my Docker containers. To do this I use the following files.

Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
 
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "public_network"
  config.vm.provider "virtualbox" do |v|
    v.name = "docker LEMP"
    v.cpus = 1
    v.memory = 512
  end
  config.vm.provision :shell, path: "bootstrap.sh"
 
end

I start with Ubuntu and keep the size small. I create a public network to make it easier to test my LEMP setup later on.

bootstrap.sh

#!/usr/bin/env bash
 
# set proxy variables
#export http_proxy=http://proxy.example.com:8080
#export https_proxy=https://proxy.example.com:8080
 
# bootstrap ansible for convenience on the control box
apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sh -c "echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
apt-get update
apt-get -y install lxc-docker
 
# need to add proxy specifically to docker config since it doesn't pick them up from the environment
#sed -i '$a export http_proxy=http://proxy.example.com:8080' /etc/default/docker
#sed -i '$a export https_proxy=https://proxy.example.com:8080' /etc/default/docker
 
# enable non-root use by vagrant user
groupadd docker
gpasswd -a vagrant docker
 
# restart to enable proxy
service docker restart

I’m working in a proxied environment, so I need to provide proxy details to the Vagrant host for subsequent Docker installation steps. Unfortunately, Docker doesn’t pick up the typical proxy environment variables, so I have to define them explicitly in the Docker configuration. This second proxy configuration allows Docker to download images from Docker Hub. Finally, I create a docker group and add the vagrant user to it so I don’t need sudo for docker commands.

At this point, it’s super easy to play around with Docker and you might enjoy going through the Docker User Guide. We’re ready to move on to create Docker container images for our LEMP stack.

Build the LEMP Stack

This LEMP stack will be split across two containers, one for MySQL and the other for Nginx+PHP. I build on Ubuntu as a base, which may increase the total size of the image in exchange for the ease of using Ubuntu.

MySQL Container

We’ll start with MySQL. Here’s the Dockerfile.

Dockerfile

# LEMP stack as a docker container
FROM ubuntu:14.04
MAINTAINER Daniel Watrous <email>
#ENV http_proxy http://proxy.example.com:8080
#ENV https_proxy https://proxy.example.com:8080
 
RUN apt-get update
RUN apt-get -y upgrade
# seed database password
COPY mysqlpwdseed /root/mysqlpwdseed
RUN debconf-set-selections /root/mysqlpwdseed
 
RUN apt-get -y install mysql-server
 
RUN sed -i -e"s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
 
RUN /usr/sbin/mysqld & \
    sleep 10s &&\
    echo "GRANT ALL ON *.* TO admin@'%' IDENTIFIED BY 'secret' WITH GRANT OPTION; FLUSH PRIVILEGES" | mysql -u root --password=secret &&\
    echo "create database test" | mysql -u root --password=secret
 
# persistence: http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/ 
 
EXPOSE 3306
 
CMD ["/usr/bin/mysqld_safe"]

Notice that I declare my proxy servers for the third time here. This one is so that the container can access package downloads. I then seed the root database passwords and proceed to install and configure MySQL. Before running CMD, I expose port 3306. Remember that this port will be exposed on the private network between Docker containers and is only accessible to linked containers when you start it the way I show below. Here’s the mysqlpwdseed file.

mysqlpwdseed

mysql-server mysql-server/root_password password secret
mysql-server mysql-server/root_password_again password secret

If you downloaded the zip file above and ran vagrant in the resulting directory, you should have the files above available in /vagrant/mysql. The following commands will build and start the MySQL container.

cd /vagrant/mysql
docker build -t "local/mysql:v1" .
docker images
docker run -d --name mysql local/mysql:v1
docker ps

At this point you should show a local image for MySQL and a running mysql container, as shown below.

[Image: docker-mysql-container]

Nginx Container

Now we’ll build the Nginx container with PHP baked in.

Dockerfile

# LEMP stack as a docker container
FROM ubuntu:14.04
MAINTAINER Daniel Watrous <email>
ENV http_proxy http://proxy.example.com:8080
ENV https_proxy https://proxy.example.com:8080
 
# install nginx
RUN apt-get update
RUN apt-get -y upgrade
RUN apt-get -y install nginx
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN mv /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak
COPY default /etc/nginx/sites-available/default
 
# install PHP
RUN apt-get -y install php5-fpm php5-mysql
RUN sed -i s/\;cgi\.fix_pathinfo\s*\=\s*1/cgi.fix_pathinfo\=0/ /etc/php5/fpm/php.ini
 
# prepare php test scripts
RUN echo "<?php phpinfo(); ?>" > /usr/share/nginx/html/info.php
ADD wall.php /usr/share/nginx/html/wall.php
 
# add volumes for debug and file manipulation
VOLUME ["/var/log/", "/usr/share/nginx/html/"]
 
EXPOSE 80
 
CMD service php5-fpm start && nginx

After setting the proxy, I install Nginx and update the configuration file. I then install PHP5 and update the php.ini file. I then use two different methods to create php files. If you’re deploying an actual application to a Docker container, you may not end up using either of these, but instead install a script that will grab your application from git or subversion.

Next I define two volumes. You’ll see shortly that this makes it straightforward to view logs and manage code for debug. Finally I expose port 80 for web traffic and the CMD references two commands using && to make sure both PHP and Nginx are running in the container.

default

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;
 
        root /usr/share/nginx/html;
        index index.php index.html index.htm;
 
        server_name localhost;
 
        location / {
                try_files $uri $uri/ =404;
        }
 
        error_page 404 /404.html;
 
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
                root /usr/share/nginx/html;
        }
 
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
        }
}

This is my default Nginx configuration file.

wall.php

<?php
 
// database credentials (defined in group_vars/all)
$dbname = "test";
$dbuser = "admin";
$dbpass = "secret";
$dbhost = "mysql";
 
// query templates
$create_table = "CREATE TABLE IF NOT EXISTS `wall` (
   `id` int(11) unsigned NOT NULL auto_increment,
   `title` varchar(255) NOT NULL default '',
   `content` text NOT NULL default '',
   PRIMARY KEY  (`id`)
   ) ENGINE=MyISAM  DEFAULT CHARSET=utf8";
$select_wall = 'SELECT * FROM wall';
 
// Connect to and select database
$link = mysql_connect($dbhost, $dbuser, $dbpass)
    or die('Could not connect: ' . mysql_error());
echo "Connected successfully\n<br />\n";
mysql_select_db($dbname) or die('Could not select database');
 
// create table
$result = mysql_query($create_table) or die('Create Table failed: ' . mysql_error());
 
// handle new wall posts
if (isset($_POST["title"])) {
    $result = mysql_query("insert into wall (title, content) values ('".$_POST["title"]."', '".$_POST["content"]."')") or die('Create Table failed: ' . mysql_error());
}
 
// Performing SQL query
$result = mysql_query($select_wall) or die('Query failed: ' . mysql_error());
 
// Printing results in HTML
echo "<table>\n";
while ($line = mysql_fetch_array($result, MYSQL_ASSOC)) {
    echo "\t<tr>\n";
    foreach ($line as $col_value) {
        echo "\t\t<td>$col_value</td>\n";
    }
    echo "\t</tr>\n";
}
echo "</table>\n";
 
// Free resultset
mysql_free_result($result);
 
// Closing connection
mysql_close($link);
?>
 
<form method="post">
Title: <input type="text" name="title"><br />
Message: <textarea name="content"></textarea><br />
<input type="submit" value="Post to wall">
</form>

Wall is meant to be a simple example that shows both PHP and MySQL are working. The commands below will build this new image and start it as a linked container to the already running MySQL container.

cd /vagrant/nginx
docker build -t "local/nginx:v1" .
docker images
docker run -d -p 80:80 --link mysql:mysql --name nginx local/nginx:v1
docker ps

Your Docker environment now looks like the image below, with an image for MySQL, an image for Nginx, and two running containers that are linked together.

[Image: docker-nginx-container]

As shown in the image above, we have mapped port 80 on the container to port 80 on the host system. This means we can discover the IP address of our host (remember the public network in the Vagrantfile) and load the web page.

[Image: docker-php-wall-browser]

Working with Running Containers

True to the single responsibility container design of Docker, these running containers are only running their respective service(s). That means they aren’t running SSH (and they shouldn’t be). So how do we interact with them, such as viewing the log files on the Nginx server or connecting to MySQL? We use linked containers that leverage the shared volumes or exposed ports.

Connect to MySQL

To connect to MySQL, create a new container from the same MySQL image and link it to the already running MySQL. Start that container with /bin/bash. You can then use the mysql binaries to connect. Notice that I identify the host as ‘mysql’. That’s because when I linked the containers, Docker added an alias in the /etc/hosts file of the new container that mapped the private Docker network IP to ‘mysql’.

docker run -i -t --link mysql:mysql local/mysql:v1 /bin/bash
mysql -u admin -h mysql test --password=secret
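
If you want to see the alias Docker created, print the hosts file from inside that container before connecting (the IP address shown will vary):

cat /etc/hosts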

[Image: docker-mysql-linked]

View Nginx Logs

It’s just as easy to view the Nginx logs. In this case I start up a plain Ubuntu container using --volumes-from to reference the running nginx container. From bash I can easily navigate to the Nginx logs or the html directory.

docker run -i -t --volumes-from nginx ubuntu /bin/bash
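
Once inside that container, the shared volumes are mounted at their usual locations, so for example (paths assume the default Ubuntu nginx package):

ls /var/log/nginx
tail -n 20 /var/log/nginx/access.log /var/log/nginx/error.log
ls /usr/share/nginx/html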

[Image: docker-nginx-view-logs]

References

https://docs.docker.com/userguide/dockerlinks/


Build a Multi-server LEMP stack using Ansible

My objective in this post is to explore the use of Ansible to configure a multi-server LEMP stack. This builds on the preliminary work I did demonstrating how to use Vagrant to create an environment to run Ansible. You can follow this entire example on any Windows (or Linux) host.

Ansible only runs on Linux hosts, not Windows. As a result, I needed to provision one Linux host to act as Ansible controller. One aspect of Ansible that I wanted to explore is the ability to manage multiple hosts with different configurations. For this experiment, I provision two more Linux hosts, one to act as a database host and the other to function as an Nginx/PHP server for a complete LEMP stack. I created the diagram below to illustrate my setup.

[Image: vagrant-ansible-lemp]

There are two primary artifact categories for this experiment:

  • Vagrantfile to provision each host
  • Ansible playbook related files

Since there were more than a few Ansible playbook files, I chose to create a github repository rather than provide all the code here. You can clone/fork the files to run this experiment here:

https://github.com/dwatrous/vagrant-ansible-lemp

Explanation

Here is a list of the files you’ll find in that repository.

  • Vagrantfile
  • control.sh
  • lemp/group_vars/all
  • lemp/hosts
  • lemp/roles/common/handlers/main.yml
  • lemp/roles/common/tasks/main.yml
  • lemp/roles/database/handlers/main.yml
  • lemp/roles/database/tasks/main.yml
  • lemp/roles/web/handlers/main.yml
  • lemp/roles/web/tasks/main.yml
  • lemp/roles/web/templates/default
  • lemp/roles/web/templates/wall.php
  • lemp/site.yml

I do use a bootstrap shell script, control.sh, with Vagrant for the Ansible control server. It is necessary to install Ansible on the control server, but since Ansible doesn’t require an agent, there’s no need to bootstrap the other servers.

Playbook files

For each Ansible-defined role there are three artifact categories.

  • handlers
  • tasks
  • templates

Handlers are named tasks that can be called or notified when Ansible detects other events. These are commonly used to trigger service restarts when configuration files change, as an example.

Tasks are the meat of the playbook. This lists out the steps to put a system into a desired state, including installing software, copying templates, registering and calling handlers, etc.
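
For example, a task that installs a package and notifies a handler to restart the service might look like this (a minimal sketch; the contents are illustrative rather than copied from the repository):

# roles/web/tasks/main.yml (excerpt)
- name: install nginx
  apt: name=nginx state=present update_cache=yes
  notify: restart nginx

# roles/web/handlers/main.yml (excerpt)
- name: restart nginx
  service: name=nginx state=restarted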

Configuration files, such as the nginx ‘default’ configuration in this case, can be stored in the templates folder and copied to the host using a task. Templates are helpful when a desired configuration differs significantly from a system default; this can be easier than updating individual lines in a file one at a time using lineinfile. The Ansible playbook files are in the following directory.

/vagrant/lemp

The site.yml file ties it all together by associating host groups with roles. You run the playbook like this.

ansible-playbook -i hosts site.yml
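
For reference, a site.yml that associates host groups with roles has roughly this shape (an illustrative sketch; the group names here are assumptions, so see the repository for the actual file):

- hosts: web
  roles:
    - common
    - web

- hosts: database
  roles:
    - common
    - database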

The example wall.php script should be accessible locally using the port 80->8080 mapping as http://127.0.0.1:8080/wall.php or over port 80 on the external IP assigned to the web host. Here’s what you can expect to see.

[Image: ansible-wall-example]

Resources

I used the ansible examples repository on Github while putting this together. You may find it useful. For the specifics of installing LEMP on Ubuntu, I followed my Vagrant tutorial.


Using Vagrant to build a LEMP stack

I may have just fallen in love with the tool Vagrant. Vagrant makes it possible to quickly create a virtual environment for development. It is different than cloning or snapshots in that it uses minimal base OSes and provides a provisioning mechanism to setup and configure the environment exactly the way you want for development. I love this for a few reasons:

  • All developers work in the exact same environment
  • Developers can get a new environment up in minutes
  • Developers don’t need to be experts at setting up the environment.
  • System details can be versioned and stored alongside code

This short tutorial below demonstrates how easy it is to build a LEMP stack using Vagrant.

Install VirtualBox

Vagrant is not a virtualization tool. Instead vagrant will leverage an existing provider of virtual compute resources, either local or remote. For example, Vagrant can be used to create a virtual environment on Amazon Web Services or locally using a tool like VirtualBox. For this tutorial, we’ll use VirtualBox. You can download and install VirtualBox from the official website.

https://www.virtualbox.org/

Install Vagrant

Next, we install Vagrant. Downloads are freely available on their website.

http://www.vagrantup.com/

For the remainder of this tutorial, I’m going to assume that you’ve been through the getting started training and are somewhat familiar with Vagrant.

Accommodate SSH Keys

UPDATE 6/26/2015: Vagrant introduced the unfortunate feature of producing a random key for each new VM as the default behavior. It’s possible to restore the original functionality (described below) and use the insecure key with the config.ssh.insert_key = false setting in a Vagrantfile.

Until (if ever) Vagrant defaults to using the insecure key, a system-wide workaround is to add a Vagrantfile to the local .vagrant.d folder, which will apply this setting to all VMs (see Load Order and Merging), unless otherwise overridden. The Vagrantfile can be as simple as this:

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
VAGRANTFILE_API_VERSION = "2"
 
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.ssh.insert_key = false
 
end

Vagrant creates an SSH key which it installs on guest hosts by default. This can be a huge time saver since it prevents the need for passwords. Since I use PuTTY on Windows, I needed to convert the SSH key and save a PuTTY session to accommodate connections. Use PuTTYgen to do this.

  1. Open PuTTYgen
  2. Click “Load”
  3. Navigate to the file C:\Users\watrous\.vagrant.d\insecure_private_key

PuTTYgen shows a dialog saying that the import was successful and displays the details of the key, as shown here:

[Image: import-vagrant-ssh-key-puttygen]

Click “Save private key”. You will be prompted about saving the key without a passphrase, which in this case is fine, since it’s just for local development. If you end up using Vagrant to create public instances, such as using Amazon Web Services, you should use a more secure connection method. Give the key a unique name, like C:\Users\watrous\.vagrant.d\insecure_private_key-putty.ppk and save.

Finally, create a saved PuTTY session to connect to new Vagrant instances. Here are some of my PuTTY settings:

[Image: putty-session-vagrant-settings-1]

[Image: putty-session-vagrant-settings-auth]

The username may change if you choose a different base OS image from the vagrant cloud, but the settings shown above should work fine for this tutorial.

Get Ready to ‘vagrant up’

Create a directory where you can store the files Vagrant needs to spin up your environment. I’ll refer to this directory as VAGRANT_ENV.

To build a LEMP stack we need a few things. First is a Vagrantfile, where we identify the base OS (or box), forwarded ports, etc. This is a text file that follows Ruby language conventions. Create the file VAGRANT_ENV/Vagrantfile with the following contents:

# -*- mode: ruby -*-
# vi: set ft=ruby :
 
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
 
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.
 
  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, path: "bootstrap.sh"
  config.vm.network :forwarded_port, host: 4567, guest: 80
  config.ssh.shell = "bash -c 'BASH_ENV=/etc/profile exec bash'"
end

This file chooses a 64-bit Trusty version of Ubuntu, forwards port 4567 on the host machine to port 80 on the guest machine, and identifies a bootstrap shell script, which I show next.

Create VAGRANT_ENV/bootstrap.sh with the following contents:

#!/usr/bin/env bash
 
#accommodate proxy environments
#export http_proxy=http://proxy.company.com:8080
#export https_proxy=https://proxy.company.com:8080
apt-get -y update
apt-get -y install nginx
debconf-set-selections <<< 'mysql-server mysql-server/root_password password secret'
debconf-set-selections <<< 'mysql-server mysql-server/root_password_again password secret'
apt-get -y install mysql-server
#mysql_install_db
#mysql_secure_installation
apt-get -y install php5-fpm php5-mysql
sed -i s/\;cgi\.fix_pathinfo\s*\=\s*1/cgi.fix_pathinfo\=0/ /etc/php5/fpm/php.ini
service php5-fpm restart
mv /etc/nginx/sites-available/default /etc/nginx/sites-available/default.bak
cp /vagrant/default /etc/nginx/sites-available/default
service nginx restart
echo "<?php phpinfo(); ?>" > /usr/share/nginx/html/info.php

This script executes a sequence of commands from the shell as root after provisioning the new server. This script must run without requiring user input. It also should accommodate any configuration changes and restarts necessary to get your environment ready to use.

More sophisticated tools like Ansible, Chef and Puppet can also be used.

You may have noticed that the above script expects a modified version of nginx’s default configuration. Create the file VAGRANT_ENV/default with the following contents:

server {
	listen 80 default_server;
	listen [::]:80 default_server ipv6only=on;
 
	root /usr/share/nginx/html;
	index index.php index.html index.htm;
 
	server_name localhost;
 
	location / {
		try_files $uri $uri/ =404;
	}
 
	error_page 404 /404.html;
 
	error_page 500 502 503 504 /50x.html;
	location = /50x.html {
		root /usr/share/nginx/html;
	}
 
	location ~ \.php$ {
		fastcgi_split_path_info ^(.+\.php)(/.+)$;
		fastcgi_pass unix:/var/run/php5-fpm.sock;
		fastcgi_index index.php;
		include fastcgi_params;
	}
}

vagrant up

Now it’s time to run ‘vagrant up‘. To do this, open a console window and navigate to your VAGRANT_ENV directory, then run ‘vagrant up’.

[Image: vagrant-up-console]

If this is the first time you have run ‘vagrant up’, it may take a few minutes to download the ‘box’. Once it’s done, you should be ready to visit your PHP page rendered by nginx on a local virtual machine created and configured by Vagrant:

http://127.0.0.1:4567/info.php


Load Testing with Locust.io

I’ve recently done some load testing using Locust.io. The setup was more complicated than with other tools, and I didn’t feel it was well documented on their site. Here’s how I got Locust.io running on two different Linux platforms.

Locust.io on RedHat Enterprise Linux (RHEL) or CentOS

Naturally, these instructions will work on CentOS too.

sudo yum -y install python-setuptools python-devel
sudo yum -y install libevent libevent-devel

One requirement of Locust.io is ZeroMQ. I found instructions to install that on their site http://zeromq.org/distro:centos

sudo yum -y -c http://download.opensuse.org/repositories/home:/fengshuo:/zeromq/CentOS_CentOS-6/home:fengshuo:zeromq.repo install zeromq zeromq-devel
sudo easy_install locustio
sudo easy_install pyzmq

Locust.io on Debian or Ubuntu

sudo apt-get update
sudo apt-get -y install python-setuptools
sudo apt-get -y install python-dev
sudo apt-get -y install libevent-dev
sudo apt-get -y install libzmq3-dev
sudo apt-get -y install python-pip
sudo pip install locustio
sudo easy_install pyzmq

Running Locust.io Test

Here’s a simple python test script for Locust.io. Save this to a file named locust-example.py.

from locust import HttpLocust, TaskSet, task
 
class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")
 
class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 3000

At this point you should be ready to run a test. You can run this from any user directory.

locust -H http://15.125.92.195 -f locust-example.py --master&
locust -f locust-example.py --slave&
locust -f locust-example.py --slave&
locust -f locust-example.py --slave&

The first line above specifies the host against which all requests will be executed, an IP address 15.125.92.195 in this case. I start these as jobs that run in the background so I can start up some slaves too. In this case I start three. All the output from these will still go to stdout. You can verify that the processes are running using the jobs command.

debian@locust:~$ jobs
[1]   Running                 locust -H http://15.125.92.195 -f locust-testnginx.py --master &
[2]   Running                 locust -f locust-testnginx.py --slave &
[3]-  Running                 locust -f locust-testnginx.py --slave &
[4]+  Running                 locust -f locust-testnginx.py --slave &

You can now load the Locust.io user interface in a web browser. Just point to the hostname/IP on port 8089.

[Image: locust-io-ui-start]

After starting up, Locust will spin up clients, distributed among the slaves, until it has reached the desired number. After it reaches the full number of clients it automatically resets the stats and keeps running until it is manually stopped.

[Image: locust-io-ui-stats]

That’s it. For better utilization of multiple cores, spin up additional slaves. I keep a locust.io image on hand so I can quickly spin it up when I have load testing to do. Since the test scripts are python, I keep them in the same repositories with the applications under test. Follow this link for full documentation for Locust.io.


Install subversion on Linux without root access

Today I needed to install subversion on a Linux host on which I don’t have root access. With root access the install would have been very simple. However, I couldn’t find a good tutorial showing how to go about installing the software just for the local user. This post goes through how I did that.

Requirements

Subversion relies on several third party libraries. Some of these may already be available on your server. Others may not be, so you’ll need more than I show here. What I provide should give you a roadmap to find and install any remaining requirements.

Where to install it

Without root access you’ll need to install subversion and other related software where you have file permissions. My home directory is

/home/watrous

So I chose to install all my software in

/home/watrous/programs

This will ensure that I have full permissions to write the necessary files.

Steps

Download requirements

Start by downloading all the requirements that are needed. I did this using wget (I had to set the proxy in /etc/wgetrc or by exporting a value for http_proxy).

wget http://www.bizdirusa.com/mirrors/apache/subversion/subversion-1.7.5.tar.bz2
wget http://mirrors.gigenet.com/apache//apr/apr-1.4.6.tar.bz2
wget http://mirrors.gigenet.com/apache//apr/apr-util-1.4.1.tar.bz2
wget http://www.sqlite.org/sqlite-autoconf-3071201.tar.gz
wget http://zlib.net/zlib-1.2.7.tar.bz2
wget http://mirrors.gigenet.com/apache//apr/apr-iconv-1.2.1.tar.bz2
wget http://www.openssl.org/source/openssl-1.0.1b.tar.gz
wget ftp://xmlsoft.org/libxml2/libxml2-2.8.0.tar.gz
wget http://www.webdav.org/neon/neon-0.29.6.tar.gz

If you’re concerned about security (and I suppose we all should be), it’s wise to confirm the published hash for the files that you download to make sure they haven’t been modified before being posted on a mirror site. The most common ways to do this include md5sum, sha1sum and pgp. Here’s an example showing how to verify the subversion download.

sha1sum subversion-1.7.5.tar.bz2

Extract Sources

This next step will leave you with a folder for each application. If you don’t want to clutter your home directory, you can create a working directory where you’ll extract all of these.

Next extract the source code for each. This is done using the tar command, but may be slightly different depending on the compression method used.

tar xjvf subversion-1.7.5.tar.bz2
tar xjvf apr-1.4.6.tar.bz2
tar xjvf apr-util-1.4.1.tar.bz2
tar xzvf sqlite-autoconf-3071201.tar.gz
tar xjvf zlib-1.2.7.tar.bz2
tar xjvf apr-iconv-1.2.1.tar.bz2
tar xzvf openssl-1.0.1b.tar.gz
tar xzvf libxml2-2.8.0.tar.gz
tar xzvf neon-0.29.6.tar.gz

Now you’re ready to configure, compile and install each one. The order of installing these matters since there are dependencies between them. Here’s the order I followed.

  1. apr
  2. sqlite
  3. zlib
  4. openssl
  5. libxml2
  6. neon
  7. subversion

Clean first

If you’ve already performed some of the steps listed below, don’t forget that it may be necessary to run

make clean

to clear out existing binaries and ensure that your new build leverages other recompiled files.

Apache Portable Runtime (APR)

The Apache Portable Runtime has three components that need to be installed. The apr-util and apr-iconv components depend on apr, so start with that first. From the directory where you extracted the sources in the step above, run the following commands to configure, make and install apr. Notice that I use the --prefix option. That’s the crucial element in this whole process, since it’s what installs everything where you have permissions.

cd apr-1.4.6
./configure --prefix=/home/watrous/programs
make
make install

Notice here that I have to tell the configure script where apr is. This satisfies the dependency.

cd apr-util-1.4.1
./configure --with-apr=/home/watrous/apr-1.4.6 --prefix=/home/watrous/programs
make
make install

And finally apr-iconv.

cd apr-iconv-1.2.1
./configure --with-apr=/home/watrous/apr-1.4.6 --prefix=/home/watrous/programs
make
make install

SQLite

SQLite doesn’t have any dependencies in this case, so it’s really simple to make and install. This is one of the libraries most likely to already be on your system, so you might want to check. SQLite is a fantastic, lightweight database that doesn’t require any configuration or server overhead.

cd sqlite-autoconf-3071201
./configure --prefix=/home/watrous/programs
make
make install

ZLib

ZLib is also very easy to configure, build and install.

cd zlib-1.2.7
./configure --prefix=/home/watrous/programs
make
make install

OpenSSL

OpenSSL is used indirectly by neon to accommodate SSL connections (repository URLs over HTTPS). If you plan to access subversion repositories only over HTTP (without encryption), you can skip this step.

cd openssl-1.0.1b
./config --prefix=/home/watrous/programs/
make
make install

Libxml 2

Libxml 2 is also used by neon to accommodate the WebDAV format.

cd libxml2-2.8.0
./configure --prefix=/home/watrous/programs/
make
make install

neon

The neon libraries enable the WebDAV functionality on which subversion is built. Note that I use the --with-libs option to specify where to find the openssl libraries. If you have the openssl-devel packages installed, this may not be necessary.

cd neon-0.29.6
./configure --with-ssl --with-libs=/home/watrous/programs/ --prefix=/home/watrous/programs/
make
make install

Subversion

Now we tie it all together with our call to configure. We need to tell the configure script where to find all the dependencies that we’ve just put in place and to install the finished program into our programs folder. Based on the steps above, here are the commands I used to configure, build and install subversion.

cd subversion-1.7.5
./configure --without-berkeley-db --without-apxs --without-swig --with-zlib=/home/watrous/programs --with-sqlite=/home/watrous/programs --with-neon=/home/watrous/programs --with-ssl --without-pic --disable-shared --with-apr=/home/watrous/apr-1.4.6 --with-apr-util=/home/watrous/apr-util-1.4.1 --with-ssl --prefix=/home/watrous/programs
make
make install

Note that the --without-pic and --disable-shared options are there to prevent collisions between the libraries we’ve installed above and similar libraries that may exist elsewhere in the system. In particular this may help resolve a conflict with the OpenSSL libraries.

At this point the subversion binary ‘svn‘ should be available in ~/programs/bin/svn.

~/programs/bin/svn help

Path

It would be a bit of a pain to always have to type ‘~/programs/bin/svn’ to run subversion. So the next thing I did was to add my new programs/bin folder to the path.

export PATH=$PATH:/home/watrous/programs/bin

I can make this more permanent by adding PATH=$PATH:$HOME/programs/bin to my .bash_profile.
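
For example (append and reload; adjust the file name if your shell reads .profile or .bashrc instead):

echo 'export PATH=$PATH:$HOME/programs/bin' >> ~/.bash_profile
source ~/.bash_profile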

Conclusion

That’s it. Now you have the subversion client installed for your local use on Linux without root access. Naturally this same process would work to install any software you need under your own account on a *nix host.

It’s worth noting that you still need permissions wherever your new svn binary intends to operate.

Sharing

If you want to share your new subversion with other users on the same system, all you need to do is set the execute bit on your home, programs and bin directories and each binary that you want others to access. They can then add your programs/bin directory to their path and make use of any programs that you installed.
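
A hedged example of what that might look like for the layout used above (only the svn binary is shared here; repeat for any other binaries):

chmod o+x /home/watrous /home/watrous/programs /home/watrous/programs/bin
chmod o+rx /home/watrous/programs/bin/svn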


Hands on MongoDB introduction and installation

MongoDB is a database. However, unlike conventional relational databases, which are based on a well-defined schema and use SQL as the primary interface to manage the data, MongoDB uses document-based storage.

The storage uses a format known as BSON, a binary encoding of JSON-like documents. This makes the stored documents very flexible and lightweight. It also makes it easy to adjust what is contained in any document without any significant impact to the other documents in a collection (a collection in MongoDB is like a table in a relational database).
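
As a quick illustration of that flexibility (a minimal sketch in the mongo shell, assuming a local mongod is running; the people collection and its fields are made up for this example):

> db.people.insert({name: "Ada", languages: ["python", "c"]})
> db.people.insert({name: "Alan", title: "Dr."})
> db.people.find({name: "Ada"})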

In this short video I show you how to install and begin using MongoDB. The video is HD, so be sure to watch it full screen to get all the details.

 

When you’re done with this, go have a look at how to improve MongoDB’s reliability using replica sets.