Load Testing with Locust.io
I’ve recently done some load testing using Locust.io. The setup was more complicated than with other tools, and I didn’t feel it was well documented on their site. Here’s how I got Locust.io running on two different Linux platforms.
Locust.io on RedHat Enterprise Linux (RHEL) or CentOS
Naturally, these instructions will work on CentOS too.
```shell
sudo yum -y install python-setuptools python-devel
sudo yum -y install libevent libevent-devel
```
One requirement of Locust.io is ZeroMQ. I found instructions to install it on their site: http://zeromq.org/distro:centos
```shell
sudo yum -y -c http://download.opensuse.org/repositories/home:/fengshuo:/zeromq/CentOS_CentOS-6/home:fengshuo:zeromq.repo install zeromq zeromq-devel
sudo easy_install locustio
sudo easy_install pyzmq
```
Locust.io on Debian or Ubuntu
```shell
sudo apt-get update
sudo apt-get -y install python-setuptools
sudo apt-get -y install python-dev
sudo apt-get -y install libevent-dev
sudo apt-get -y install libzmq3-dev
sudo apt-get -y install python-pip
sudo pip install locustio
sudo easy_install pyzmq
```
Running Locust.io Test
Here’s a simple Python test script for Locust.io. Save this to a file named locust-example.py.
```python
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 3000
```
At this point you should be ready to run a test. You can run this from any user directory.
```shell
locust -H http://15.125.92.195 -f locust-example.py --master&
locust -f locust-example.py --slave&
locust -f locust-example.py --slave&
locust -f locust-example.py --slave&
```
The first line above specifies the host against which all requests will be executed, an IP address 15.125.92.195 in this case. I start these as jobs that run in the background so I can start up some slaves too. In this case I start three. All the output from these will still go to stdout. You can verify that the processes are running using the jobs command.
```shell
debian@locust:~$ jobs
[1]   Running   locust -H http://15.125.92.195 -f locust-testnginx.py --master &
[2]   Running   locust -f locust-testnginx.py --slave &
[3]-  Running   locust -f locust-testnginx.py --slave &
[4]+  Running   locust -f locust-testnginx.py --slave &
```
You can now load the Locust.io user interface in a web browser. Just point to the hostname/IP on port 8089.
After starting up, Locust will spin up clients, distributed among the slaves, until it has reached the desired number. After it reaches the full number of clients it automatically resets the stats and keeps running until it is manually stopped.
That’s it. For better utilization of multiple cores, spin up additional slaves. I keep a locust.io image on hand so I can quickly spin it up when I have load testing to do. Since the test scripts are python, I keep them in the same repositories with the applications under test. Follow this link for full documentation for Locust.io.
Hello, I am running Locust in master/slave mode with 8 slaves. I set the number of users to 500 and the hatch rate to 200.
I see my Locust master process gets killed after some time saying it is out of memory.
I am running on 64-bit Ubuntu with 4 GB RAM.
I see Locust consumes almost 3 GB of memory. Can you please help me fix this issue?
Santosh, this doesn’t sound like a Locust.io problem, but instead a problem with the test script you wrote in Python. If you’re attempting to store requests or do post-processing, it’s possible your test script is holding on to that memory. Without seeing your test script, it’s difficult to say exactly. You can reply with a gist or pastebin URL and I’ll look at it.
Have you had any luck deploying Locust, i.e. running it behind Apache so it lives past the SSH session, or some similar method?
I haven’t run locust behind apache. I treat locust more as a one-off element in my testing, but I suppose it could be run by way of (u)WSGI behind apache or nginx.
I have an image on my cloud hosting provider so I can quickly spin up a machine to use for testing and then I destroy it when I’m done testing.
How do I analyse the results of locust.io? I have a CSV file containing “Method, Name, # requests, # failures, Median response time, Average response time, Min response time, Max response time, Average Content Size, Requests/s”, but how do I analyse this?
Yogesh, I’m not sure what you’re asking. Your analysis will depend on what’s important to you. It might be that you’re looking at the pages that have the highest outliers in initial response time. You may want to focus on the average or median. The metrics that matter will be different for each site.
Hi Daniel,
Let me explain what I have done with Apache Benchmark: I created an output.dat file using Apache Benchmark and used it to plot a graph, as shown in this image http://s5.postimg.org/j9seasd7r/test100k_Linear.png. This is what I got from the Locust test, but I don’t know how to analyse this output. Here is the link: https://docs.google.com/spreadsheets/d/16dt3oLam_6nYBrQuPl3DN2WXp9muTpkt87P0PWlmvRI/pubhtml?gid=427970069&single=true
https://docs.google.com/spreadsheets/d/1JXJNFEtBqHXL081McUPa-xsGxmKT4y8UbyVY1OFhYZg/pubhtml?gid=765618842&single=true
Yogesh, Locust by default rolls all values up into a summary. One of the reasons it can produce an extremely high load with very little output is because it is not burdened with recording details about every request. I have been using munin (http://software.danielwatrous.com/tag/munin/) to monitor the systems under test. It’s conceivable that you could write a munin plugin to derive something useful from the application log files and achieve a sort of real-time view similar to what you show.
Hi,
The REST APIs that we use for our project require an authentication token and an authorization token to perform any GET or POST requests.
How can we handle the accessing tokens in case of locust?
Is this in the HTTP header? If so, any GET, POST, PUT, etc. is an extension of request: http://docs.locust.io/en/latest/api.html#locust.clients.HttpSession.request, which accepts a headers argument.
Hello
I’m new to Locust and have the same problem. The API I’m testing uses an authorization token. Can you post a code snippet showing how to connect to the API?
We use Bearer Token for authorization.
You can generate the token ahead of time and include it as a header in your request.
Hi!
Do you know if it’s possible to start a Locust load test from the CLI using a master/slave configuration? It seems to only support the slaves when started from the web UI…
I’m only aware of being able to start tests from the web UI. That would be a great automation opportunity. I wonder if you would examine the calls the web UI makes and automate some HTTP calls to setup and start a test…
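As a sketch of that automation idea: in the Locust versions current at the time of writing, the web UI starts a test by posting to a /swarm endpoint and stops it via /stop, so something like the following could drive a master from a script (verify the endpoint and field names against your Locust version):

```python
import requests

def start_swarm(master_url, locust_count, hatch_rate):
    # The /swarm endpoint and field names mirror what the web UI form
    # submits; check them against your Locust version.
    return requests.post(master_url.rstrip("/") + "/swarm",
                         data={"locust_count": locust_count,
                               "hatch_rate": hatch_rate})

def stop_swarm(master_url):
    # The web UI's stop button issues a GET to /stop.
    return requests.get(master_url.rstrip("/") + "/stop")

# Example (assumes a master is already running on port 8089):
# start_swarm("http://localhost:8089", 500, 50)
```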
Hi Daniel,
How do we get the response time per request in locustio?
I am running a thread to get values for 250 different requests in a loop, and I need the response time for each request.
The global_stats gives avg_response_time, but I need the individual request response times.
Please let me know if you know how to do this.
Thanks
-Gorthi
Gorthi, if I understand you right, you can create a separate task for each request type. You will then see the results for each request (URL) on its own line. It may be that you want to know what is happening on the server, in which case, something like Munin (http://software.danielwatrous.com/tag/munin/) might be helpful.
Hi Daniel,
I’ve used your explanation to run a parallel test using a master and slaves on separate URLs (the URLs are provided in the py scripts):
```shell
locust -f file1.py --logfile=logfile1.log --master&
locust -f file2.py --logfile=logfile2.log --slave&
locust -f file3.py --logfile=logfile3.log --slave&
```
I’m having trouble getting separate stats for each URL, as the logfile only retains debug info and not the load statistics.
I get the separate stats when I kill the jobs, but it is messy and I’m not sure which stats to attribute to which job.
Any idea how to do this better?
It would have been nice to have a way to tell Locust to shut down a job at some point without having to kill it.
I have used a combination of Locust and Munin to capture server metrics alongside performance metrics. I published a few articles on my site (search ‘munin’) that explain how to get munin resolution to be more granular during load tests. Hopefully this answers your question.
What about if I want to use it on a Moodle web page? I want to log in, make some clicks and then take a quiz. I have the Python for that, but I want to use it with Locust.io.
I hope you can help me. Thanks.
You should be able to include credentials in a login request. You would then capture a cookie (presumably) and use that for subsequent authenticated requests.
Hi there, Love your blog and website. So much interesting and useful information.
I want to ask about Locust testing: have you ever tested HTTPS requests?
Thanks!
I have used it with HTTPS. It should work without any changes for valid, signed certificates. If you self-sign or use an obscure certificate authority, you may need to include a CA cert with each request.
Hi,
Can we use Locust to perform tasks such as downloading files from the server onto the client using various protocols such as HTTP, HTTPS, DNS and SMTP? Is it possible to get the throughput for each of these protocols?
On a scale of 1 to 10, how difficult would it be?
I think requests that are not HTTP(S) might require some additional work to set up.
hello, Great blog! Very helpful.
You mentioned that ‘After it reaches the full number of clients it automatically resets the stats and keeps running until it is manually stopped.’ Why does it keep running after resetting the stats? When I first set it up, I think I saw it reset the stats and then stop running automatically. However, after some fiddling with resetting/not resetting stats, I can now see that it no longer stops until I stop it manually (as you said in your post). Is there a way to make it stop running after resetting the stats? Thanks a lot!
I was confused that the results reset at all, but I think the reasoning behind that choice was to give a clean view of how the application performed after reaching the target load. In that case, I’m not sure it makes sense to stop the test after reaching the target load, because then you don’t get any data about how the application performs under heavy load.
Python Girl,
You can run the test in --no-web mode and also specify how long you want the test to run, so that you don’t need to stop it manually.
```shell
locust -f script_path --csv=csvFileName --no-web -c 50 -r 1 -t60m
```
When you run in no-web mode and provide the CSV file name, the results are automatically stored in that file. Here -c 50 means 50 users and -t60m means 60 minutes, so the test automatically stops after running for 60 minutes.
Hi Daniel,
I am new to Locust. I have a situation where I have to run a load test with 50 users, split as 20 users on script1, 20 users on script2, and 10 users on script3. I know I can split the request load by giving weights to the tasks, but how do I split the users?
Can Locust be used to cover all the API calls of a heavy application?
For the application we currently performance test, there are 100s of API calls in a workflow,
and the expectation is to simulate 100 users.
Do you think JMeter would be better than Locust?
I think either will work. JMeter may feel easier, but Locust will scale better to many users and high load.