Daniel Watrous on Software Engineering

A Collection of Software Problems and Solutions

Software Engineering

Load Testing with Locust.io

I’ve recently done some load testing with Locust.io. The setup was more complicated than with other tools, and I didn’t feel it was well documented on their site. Here’s how I got Locust.io running on two different Linux platforms.

Locust.io on RedHat Enterprise Linux (RHEL) or CentOS

Naturally, these instructions will work on CentOS too.

sudo yum -y install python-setuptools python-devel
sudo yum -y install libevent libevent-devel

One requirement of Locust.io is ZeroMQ. I found installation instructions on the ZeroMQ site: http://zeromq.org/distro:centos

sudo yum -y -c http://download.opensuse.org/repositories/home:/fengshuo:/zeromq/CentOS_CentOS-6/home:fengshuo:zeromq.repo install zeromq zeromq-devel
sudo easy_install locustio
sudo easy_install pyzmq

Locust.io on Debian or Ubuntu

sudo apt-get update
sudo apt-get -y install python-setuptools
sudo apt-get -y install python-dev
sudo apt-get -y install libevent-dev
sudo apt-get -y install libzmq3-dev
sudo apt-get -y install python-pip
sudo pip install locustio
sudo easy_install pyzmq

Running Locust.io Test

Here’s a simple Python test script for Locust.io. Save it to a file named locust-example.py.

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        # each simulated user repeatedly requests the root path
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # pause 1000-3000 ms between tasks
    max_wait = 3000
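The min_wait and max_wait values are milliseconds of simulated think time: between tasks, each simulated user pauses for a random duration in that range. A quick sketch of that behavior in plain Python (the uniform draw is my assumption of the documented behavior, not Locust’s actual code):

```python
import random

# Locust pauses each simulated user between tasks for a random
# number of milliseconds between min_wait and max_wait.
min_wait, max_wait = 1000, 3000

def next_wait_ms():
    # uniform integer draw across the configured range
    return random.randint(min_wait, max_wait)

waits = [next_wait_ms() for _ in range(1000)]
assert all(min_wait <= w <= max_wait for w in waits)
```

Larger ranges make the simulated traffic burstier; setting min_wait equal to max_wait gives a fixed pacing.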

At this point you should be ready to run a test. You can run this from any user directory.

locust -H http://15.125.92.195 -f locust-example.py --master&
locust -f locust-example.py --slave&
locust -f locust-example.py --slave&
locust -f locust-example.py --slave&

The first line above specifies the host against which all requests will be executed, in this case the IP address 15.125.92.195. I start these as background jobs so that I can also start some slaves, three in this case. All the output from these processes still goes to stdout. You can verify that they are running using the jobs command.

debian@locust:~$ jobs
[1]   Running                 locust -H http://15.125.92.195 -f locust-example.py --master &
[2]   Running                 locust -f locust-example.py --slave &
[3]-  Running                 locust -f locust-example.py --slave &
[4]+  Running                 locust -f locust-example.py --slave &

You can now load the Locust.io user interface in a web browser. Just point to the hostname/IP on port 8089.

[Screenshot: Locust.io web UI, new test start screen]

When a test starts, Locust spins up clients, distributed among the slaves, until it reaches the desired number. Once all clients are running, it automatically resets the stats and keeps running until it is manually stopped.

[Screenshot: Locust.io web UI statistics view]

That’s it. For better utilization of multiple cores, spin up additional slaves. I keep a Locust.io image on hand so I can quickly spin it up when I have load testing to do. Since the test scripts are Python, I keep them in the same repositories as the applications under test. Full documentation is available on the Locust.io site.

16 Comments on “Load Testing with Locust.io”

  1. santosh

    Hello, I am running Locust in master and slave mode with 8 slaves. I set the number of users to 500 and a hatch rate of 200.
    My Locust master process gets killed after some time with an out-of-memory error.
    I am running 64-bit Ubuntu with 4 GB of RAM.
    Locust consumes almost 3 GB of memory. Can you please help me fix this issue?

    1. Daniel Watrous

      Santosh, this doesn’t sound like a Locust.io problem, but rather a problem with the test script you wrote in Python. If you’re attempting to store requests or do post-processing, it’s possible your test script is holding on to that memory. Without seeing your test script, it’s difficult to say exactly. You can reply with a gist or pastebin URL and I’ll look at it.
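For illustration, here is a hypothetical pattern that holds on to memory in the way described: a test script that collects every response body in a module-level list retains all of it for the life of the run.

```python
# Hypothetical anti-pattern: a test script that records every response.
bodies = []

def record(response_body):
    bodies.append(response_body)  # retained for the life of the test run

# Simulating 10,000 requests of ~1 KB each keeps ~10 MB resident;
# a long-running load test at high request rates only keeps growing.
for _ in range(10000):
    record(b"x" * 1024)

print(len(bodies))  # prints 10000
```

Aggregating in place (counters, running averages) instead of storing raw responses keeps memory flat.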

  2. Alex

    Have you had any luck deploying Locust? I.e. running it behind Apache so it lives past the SSH session, or some similar method.

    1. Daniel Watrous

      I haven’t run Locust behind Apache. I treat Locust more as a one-off element in my testing, but I suppose it could be run by way of (u)WSGI behind Apache or nginx.

      I have an image on my cloud hosting provider so I can quickly spin up a machine to use for testing and then I destroy it when I’m done testing.

  3. Yogesh

    How do I analyse the results of Locust.io? I have a CSV file containing “Method Name, # requests, # failures, Median response time, Average response time, Min response time, Max response time, Average Content Size, Requests/s”, but how do I analyse this?

    1. Daniel Watrous

      Yogesh, I’m not sure what you’re asking. Your analysis will depend on what’s important to you. It might be that you’re looking at the pages that have the highest outliers in initial response time. You may want to focus on the average or median. The metrics that matter will be different for each site.
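As a starting point, here is a rough sketch of slicing an exported stats file with the standard library. The header names and sample rows are assumptions based on the columns Yogesh lists and may not match your export exactly.

```python
import csv
import io

# Sample data standing in for a Locust stats export; the header names
# are assumptions based on the columns quoted above.
sample = """\
Name,# requests,# failures,Median response time,Average response time
/,1000,5,120,140
/login,200,20,300,450
"""

def rank_by_median(csv_text):
    # parse the export and sort endpoints by median response time, worst first
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["failure rate"] = int(row["# failures"]) / int(row["# requests"])
    return sorted(rows, key=lambda r: int(r["Median response time"]), reverse=True)

ranked = rank_by_median(sample)
print(ranked[0]["Name"], ranked[0]["failure rate"])  # prints: /login 0.1
```

From there, whichever metric matters to your site (median, average, failure rate) is just another sort key.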

      1. Yogesh

        Hi Daniel,
        Let me explain what I have done with Apache Benchmark: I created an output.dat file using Apache Benchmark and plotted a graph from it, as shown in this image: http://s5.postimg.org/j9seasd7r/test100k_Linear.png. This is what I got from the Locust test, but I don’t know how to analyse the output. Here are the links: https://docs.google.com/spreadsheets/d/16dt3oLam_6nYBrQuPl3DN2WXp9muTpkt87P0PWlmvRI/pubhtml?gid=427970069&single=true

        https://docs.google.com/spreadsheets/d/1JXJNFEtBqHXL081McUPa-xsGxmKT4y8UbyVY1OFhYZg/pubhtml?gid=765618842&single=true

        1. Daniel Watrous

          Yogesh, Locust by default rolls all values up into a summary. One of the reasons it can produce an extremely high load with very little output is because it is not burdened with recording details about every request. I have been using munin (http://software.danielwatrous.com/tag/munin/) to monitor the systems under test. It’s conceivable that you could write a munin plugin to derive something useful from the application log files and achieve a sort of real-time view similar to what you show.

  4. Suhas

    Hi,
    The REST APIs that we use for our project require an authentication token and an authorization token to perform any GET or POST requests.
    How can we handle these tokens in Locust?

  5. Bruce

    Hi!
    Do you know if it’s possible to start a CLI Locust load test using a master/slave configuration? It seems to only support slaves when started from the web UI…

    1. Daniel Watrous

      I’m only aware of being able to start tests from the web UI. That would be a great automation opportunity. I wonder if you could examine the calls the web UI makes and automate some HTTP calls to set up and start a test…
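Sketching that idea: watching the web UI suggests it starts a test by POSTing the client count and hatch rate to a /swarm endpoint on the master. The endpoint path and field names below are assumptions from observing the UI and should be verified against your Locust version.

```python
import urllib.parse
import urllib.request

def build_swarm_request(master, locust_count, hatch_rate):
    # Build the same POST the web UI sends when you click "Start swarming".
    # The /swarm path and the field names are assumptions; verify them
    # against the requests your Locust web UI actually makes.
    data = urllib.parse.urlencode({
        "locust_count": locust_count,
        "hatch_rate": hatch_rate,
    }).encode()
    return urllib.request.Request(master + "/swarm", data=data)

def start_swarm(master, locust_count, hatch_rate):
    # e.g. start_swarm("http://localhost:8089", 100, 10)
    return urllib.request.urlopen(build_swarm_request(master, locust_count, hatch_rate))
```

Separating the request construction from the send makes the payload easy to inspect before pointing it at a live master.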

  6. Gorthi B

    Hi Daniel,

    How do we get the response time per request in Locust.io?

    I am running a thread that issues 250 different requests in a loop, and I need the response time for each request.

    The global_stats object gives avg_response_time, but I need individual request response times.

    Do you know how to do this?

    Thanks
    -Gorthi

  7. Livne

    Hi Daniel,
    I’ve used your explanation to run a parallel test using a master and slaves on separate URLs (the URLs are provided in the py scripts):
    locust -f file1.py --logfile=logfile1.log --master&
    locust -f file2.py --logfile=logfile2.log --slave&
    locust -f file3.py --logfile=logfile3.log --slave&
    I’m having trouble getting separate stats for each URL, as the logfile only retains debug info and not the load statistics.
    I get the separate stats when I kill the jobs, but it is messy and I’m not sure which stats to attribute to which job.
    Any idea how to do this better?
    It would be nice if there were a way to tell Locust to shut down a job at some point without having to kill it.

    1. Daniel Watrous

      I have used a combination of Locust and Munin to capture server metrics alongside performance metrics. I published a few articles on my site (search ‘munin’) that explain how to get munin resolution to be more granular during load tests. Hopefully this answers your question.

