Load Testing with Locust.io

I’ve recently done some load testing using Locust.io. The setup was more complicated than other tools and I didn’t feel like it was well documented on their site. Here’s how I got Locust.io running on two different Linux platforms.

Locust.io on RedHat Enterprise Linux (RHEL) or CentOS

Naturally, these instructions will work on CentOS too.

sudo yum -y install python-setuptools python-devel
sudo yum -y install libevent libevent-devel

One requirement of Locust.io is ZeroMQ. I found instructions for installing it on the ZeroMQ site: http://zeromq.org/distro:centos

sudo yum -y -c http://download.opensuse.org/repositories/home:/fengshuo:/zeromq/CentOS_CentOS-6/home:fengshuo:zeromq.repo install zeromq zeromq-devel
sudo easy_install locustio
sudo easy_install pyzmq

Locust.io on Debian or Ubuntu

sudo apt-get update
sudo apt-get -y install python-setuptools
sudo apt-get -y install python-dev
sudo apt-get -y install libevent-dev
sudo apt-get -y install libzmq3-dev
sudo apt-get -y install python-pip
sudo pip install locustio
sudo easy_install pyzmq

Running Locust.io Test

Here’s a simple Python test script for Locust.io. Save it to a file named locust-example.py.

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 3000

At this point you should be ready to run a test. You can run this from any user directory.

locust -H http://<target-host> -f locust-example.py --master &
locust -f locust-example.py --slave &
locust -f locust-example.py --slave &
locust -f locust-example.py --slave &

The first line above uses -H to specify the host against which all requests will be executed (an IP address or hostname). I start these as jobs that run in the background so I can start up some slaves too; in this case I start three. All the output from these will still go to stdout. You can verify that the processes are running using the jobs command.

debian@locust:~$ jobs
[1]   Running                 locust -H -f locust-testnginx.py --master &
[2]   Running                 locust -f locust-testnginx.py --slave &
[3]-  Running                 locust -f locust-testnginx.py --slave &
[4]+  Running                 locust -f locust-testnginx.py --slave &

You can now load the Locust.io user interface in a web browser. Just point to the hostname/IP on port 8089.


After starting up, Locust will spin up clients, distributed among the slaves, until it has reached the desired number. After it reaches the full number of clients it automatically resets the stats and keeps running until it is manually stopped.


That’s it. For better utilization of multiple cores, spin up additional slaves. I keep a Locust.io image on hand so I can quickly spin it up when I have load testing to do. Since the test scripts are Python, I keep them in the same repositories as the applications under test. See the official Locust.io documentation for full details.


About Daniel Watrous

I'm a Software & Electrical Engineer and online entrepreneur.

16 Responses to “Load Testing with Locust.io”

  1. Hello, I am running Locust in master and slave mode with 8 slaves. I set the number of users to 500 and a hatch rate of 200.
    I see my Locust master process gets killed after some time with an out-of-memory error.
    I am running on 64-bit Ubuntu with 4 GB RAM.
    I see Locust consumes almost 3 GB of memory. Can you please help me fix this issue?

    • Santosh, this doesn’t sound like a Locust.io problem, but instead a problem with the test script you wrote in Python. If you’re attempting to store requests or do post-processing, it’s possible your test script is holding on to that memory. Without seeing your test script, it’s difficult to say exactly. You can reply with a gist or pastebin URL and I’ll look at it.

  2. Have you had any luck deploying Locust? I.e., running it behind Apache so it lives past the SSH session, or some similar method.

    • I haven’t run locust behind apache. I treat locust more as a one-off element in my testing, but I suppose it could be run by way of (u)WSGI behind apache or nginx.

      I have an image on my cloud hosting provider so I can quickly spin up a machine to use for testing and then I destroy it when I’m done testing.

  3. How do I analyse the results from Locust.io? I have a CSV file containing the columns "Method, Name, # requests, # failures, Median response time, Average response time, Min response time, Max response time, Average Content Size, Requests/s", but how do I analyse this?
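One way to slice a results CSV like that is with Python's standard library. This is just a sketch: the sample rows below are invented for illustration, and only the column names come from the comment above.

```python
import csv
import io

# Rows shaped like Locust's requests CSV; the numbers here are made up.
SAMPLE = """\
Method,Name,# requests,# failures,Median response time,Average response time,Min response time,Max response time,Average Content Size,Requests/s
GET,/,1200,3,45,52,12,410,2048,40.0
GET,/about,300,0,60,71,15,380,4096,10.0
"""

def worst_endpoints(csv_text, top=1):
    # Rank endpoints by average response time, slowest first.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: float(r["Average response time"]), reverse=True)
    return [(r["Name"], float(r["Average response time"])) for r in rows[:top]]

print(worst_endpoints(SAMPLE))  # → [('/about', 71.0)]
```

The same pattern works for failure rates or requests/s; point the reader at whichever column matters for the test at hand.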

  4. Hi,
    The REST APIs we use for our project require an authentication token and an authorization token to perform any GET or POST request.
    How can we handle these tokens in Locust?
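A common pattern (an assumption here, not something the article covers) is to fetch the token once per simulated user in TaskSet.on_start and attach it as a default header, so every later self.client call carries it. The endpoint name /auth/token and the "access_token" field below are hypothetical stand-ins for your API's login flow.

```python
def bearer_headers(token):
    # Header dict suitable for self.client.headers.update(...) in a locustfile.
    return {"Authorization": "Bearer " + token}

# In the locustfile itself (old locustio API; shown as comments so this
# snippet stands alone without locust installed):
#
# class UserBehavior(TaskSet):
#     def on_start(self):
#         resp = self.client.post("/auth/token",
#                                 {"username": "user", "password": "secret"})
#         self.client.headers.update(bearer_headers(resp.json()["access_token"]))
```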

  5. Hi!
    Do you know if it’s possible to start a CLI Locust load test using a master / slave configuration? It seems to only support the slaves when started from the Web UI…

    • I’m only aware of being able to start tests from the web UI. That would be a great automation opportunity. I wonder if you could examine the calls the web UI makes and automate some HTTP calls to set up and start a test…
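Sketching that idea: as far as I know, the Locust web UI starts a test by POSTing a form to /swarm on the master (port 8089), and /stop halts it, though that is worth verifying against your Locust version. The helper below only builds that request; the host name is a placeholder.

```python
from urllib.parse import urlencode

def swarm_request(host, locust_count, hatch_rate):
    # URL and form body the web UI submits when you click "Start swarming".
    url = "http://%s:8089/swarm" % host
    body = urlencode({"locust_count": locust_count, "hatch_rate": hatch_rate})
    return url, body

url, body = swarm_request("localhost", 500, 200)
# Something like `curl -d "$body" "$url"` would then kick off the test
# without touching the browser.
```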

  6. Hi Daniel,

    How do we get response_time per request in locustio?

    I am running a thread to get values for 250 different requests in a loop and I need to get response_time for each request.

    The global_stats is giving avg_response_time, but I need individual request response time.

    Please let me know if you know how to do this.
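One approach, assuming the old locustio event-hook API: events.request_success fires once per completed request and hands the listener that request's individual response time in milliseconds. The locust-specific registration is shown only in comments so the snippet stands alone.

```python
response_times = []

def on_request_success(request_type, name, response_time, response_length, **kwargs):
    # One entry per request, not an aggregate like global_stats gives you.
    response_times.append((name, response_time))

# In the locustfile you would register it with:
#   from locust import events
#   events.request_success += on_request_success

# Simulating two fired events to show what gets captured:
on_request_success("GET", "/", 42, 1024)
on_request_success("GET", "/", 55, 1024)
print(response_times)  # → [('/', 42), ('/', 55)]
```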


  7. Hi Daniel,
    I’ve used your explanation to run a parallel test using a master and slaves on separate URLs (the URLs are provided in the py scripts):
    locust -f file1.py --logfile=logfile1.log --master&
    locust -f file2.py --logfile=logfile2.log --slave&
    locust -f file3.py --logfile=logfile3.log --slave&
    I’m having trouble getting separate stats for each URL, as the logfile only retains debug info and not the load statistics.
    I get the separate stats when I kill the jobs, but it is messy and I’m not sure which stats to attribute to which job.
    Any idea how to do this better?
    It would have been nice if I had a way to tell Locust to shut down a job at some point without having to kill it.

    • I have used a combination of Locust and Munin to capture server metrics alongside performance metrics. I published a few articles on my site (search ‘munin’) that explain how to get munin resolution to be more granular during load tests. Hopefully this answers your question.
