Add example of a bottlenecked server and use that test to make a new graph for the docs #2805

Merged
merged 1 commit on Jul 17, 2024
Binary file added docs/images/bottlenecked_server.png
Binary file removed docs/images/number_of_users.png
Binary file removed docs/images/response_times.png
Binary file removed docs/images/total_requests_per_second.png
8 changes: 2 additions & 6 deletions docs/quickstart.rst
@@ -42,17 +42,13 @@ The following screenshots show what it might look like when running this test us

Under the *Charts* tab you'll find things like requests per second (RPS), response times and number of running users:

-.. image:: images/total_requests_per_second.png
-
-.. image:: images/response_times.png
-
-.. image:: images/number_of_users.png
+.. image:: images/bottlenecked_server.png

.. note::

Interpreting performance test results is quite complex (and mostly out of scope for this manual), but if your graphs start looking like this, the target service/system cannot handle the load and you have found a bottleneck.

-When we get to around 9 users, response times start increasing so fast that even though Locust is still spawning more users, the number of requests per second is no longer increasing. The target service is "overloaded" or "saturated".
+When we get to around 20 users, response times start increasing so fast that even though Locust is still spawning more users, the number of requests per second is no longer increasing. The target service is "overloaded" or "saturated".

If your response times are *not* increasing then add even more users until you find the service's breaking point, or celebrate that your service is already performant enough for your expected load.
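The saturation behaviour described in the note can be sketched with a back-of-the-envelope model (an illustrative approximation, not part of the PR; `predicted_load` is a hypothetical helper name). Assuming a server with a fixed number of concurrent slots (10 in the example this PR adds) and a fixed 1 second service time, and users that send requests back-to-back with no think time, throughput grows linearly until every slot is busy, then caps at slots / service_time while response times grow linearly with the number of users:

```python
def predicted_load(users, slots=10, service_time=1.0):
    """Rough closed-system model of a bottlenecked server.

    Returns (requests_per_second, response_time_seconds). This is an
    idealized approximation: no think time, perfectly fair queueing.
    """
    if users <= slots:
        # Every request gets a slot immediately: RPS grows with users,
        # response time stays at the bare service time.
        return users / service_time, service_time
    # All slots are busy: RPS is capped by the slots, and queueing
    # inflates response time linearly with the number of users.
    return slots / service_time, users * service_time / slots
```

Under this toy model, 5 users give 5 RPS at 1 s responses, while 20 users still give only 10 RPS but 2 s responses, matching the flat RPS curve and climbing response-time curve the screenshot illustrates.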

38 changes: 38 additions & 0 deletions examples/bottlenecked_server.py
@@ -0,0 +1,38 @@
"""
This example uses extensions in Locust's own WebUI to simulate a bottlenecked server and runs a test against itself.

The purpose of this is mainly to generate nice graphs in the UI to teach new users how to interpret load test results.

See https://docs.locust.io/en/stable/quickstart.html#locust-s-web-interface
"""

from locust import HttpUser, events, run_single_user, task

import time
from threading import Semaphore

# Only allow up to 10 concurrent requests. Similar to how a server with 10 threads might behave.
sema = Semaphore(10)


class WebsiteUser(HttpUser):
host = "http://127.0.0.1:8089"

@task
    def index(self):
        self.client.get("/slow")


@events.init.add_listener
def locust_init(environment, **kwargs):
assert environment.web_ui, "you can't run this headless"

@environment.web_ui.app.route("/slow")
def my_added_page():
with sema: # only 10 requests can hold this lock at the same time
time.sleep(1) # pretend each request takes 1 second to execute
return "Another page"


if __name__ == "__main__":
run_single_user(WebsiteUser)
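The semaphore pattern used by the `/slow` route can be demonstrated in isolation (a scaled-down, standalone sketch using only the standard library; the slot count and service time here are illustrative, not the example's actual 10 slots and 1 second):

```python
import time
from threading import Semaphore, Thread

SLOTS = 4           # scaled-down stand-in for the example's 10 slots
SERVICE_TIME = 0.1  # scaled-down stand-in for the example's 1 second sleep

sema = Semaphore(SLOTS)


def handle_request():
    # Same pattern as the /slow route: hold a slot for the service time.
    with sema:
        time.sleep(SERVICE_TIME)


# Fire twice as many concurrent "requests" as there are slots.
threads = [Thread(target=handle_request) for _ in range(2 * SLOTS)]
start = time.monotonic()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start
# 8 requests over 4 slots serialize into two batches, so the total wall
# time is roughly 2 * SERVICE_TIME rather than SERVICE_TIME: the excess
# requests queue on the semaphore, which is exactly what drives the
# response-time growth in the graph.
```

Once the offered load exceeds the slot count, every additional wave of requests adds another full `SERVICE_TIME` of queueing, which is why throughput flattens while response times keep climbing.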