Merge branch 'feature/benchmarks' into 'master'

Server Performance Tests

See merge request cara/cara!306
This commit is contained in:
Andre Henriques 2021-12-15 11:35:56 +01:00
commit cf827977d5
2 changed files with 60 additions and 0 deletions

@@ -0,0 +1,18 @@
# locust
A simple open-source load-testing tool that allows you to define user behavior.
To set it up for the first time, we followed the documentation at https://locust.io/. In particular, we:
* Defined a class for the users that we will be simulating.
* Defined a ``wait_time`` variable that makes each simulated user wait between the specified number of seconds after each executed task.
* Decorated our method with ``@task``, which creates a micro-thread that calls this method.
* Used the ``self.client`` attribute, which makes it possible to issue HTTP calls that will be logged by Locust.
To use it, uncomment the desired method in the ``locust.py`` file, open a terminal in this folder and run the following command:
``locust -f locust.py --host https://cara.web.cern.ch``
Then, open a browser and point it to http://localhost:8089.
By default, the test points to our own web server.
Clicking ``Start swarming`` will trigger the simulation.
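
The bullet points above map onto a locustfile like the following. This is a minimal sketch assuming the ``locust`` package is installed; the class and method names are illustrative, not taken from the repository:

```python
# Minimal locustfile sketch (requires the third-party "locust" package).
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):  # illustrative name, not from the repo
    # Each simulated user waits 0.05-0.1 s between executed tasks.
    wait_time = between(0.05, 0.1)

    @task
    def get_baseline_result(self):
        # HTTP call recorded in Locust's statistics.
        self.client.get("/calculator-open/baseline-model/result")
```

Running ``locust -f locust.py --host https://cara.web.cern.ch`` picks up this class automatically and simulates as many such users as configured in the web UI.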

@@ -0,0 +1,42 @@
from locust import HttpUser, task, between
from gevent.pool import Group
import time
'''
Method no. 1 - Simulation with one single user
running x requests in parallel.
Specify the desired number of parallel requests in
the "num_of_parallel_requests" variable (35 by default).
This method was used in simulations with a single
user performing 35 requests in parallel.
'''
# num_of_parallel_requests = 35
# class User(HttpUser):
# wait_time = between(0.05, 0.1)
# @task(1)
# def test_api(self):
# group = Group()
# for i in range(0, num_of_parallel_requests):
# group.spawn(lambda:self.client.get("/calculator-open/baseline-model/result"))
# group.join()
# while(1):
# time.sleep(1)
'''
Method no. 2 - Simulation with different users,
each running one request concurrently.
With this method, each user is intended to
perform one single request.
This method was used in simulations with different
numbers of users requesting once at the same time.
'''
# class User(HttpUser):
# @task(1)
# def test_api(self):
# self.client.get("/calculator-open/baseline-model/result")
# while(1):
# time.sleep(1)
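
The fan-out pattern that Method no. 1 relies on (spawn N concurrent calls, then wait for all of them, as ``Group.spawn()``/``Group.join()`` do with gevent) can be sketched with only the standard library. Here a ``ThreadPoolExecutor`` replaces the gevent group and a stub function stands in for the HTTP call, so the shape of the pattern can be checked without a server; all names below are illustrative:

```python
# Standard-library sketch of the "spawn N, then join" pattern used in Method 1.
from concurrent.futures import ThreadPoolExecutor

NUM_PARALLEL_REQUESTS = 35  # mirrors num_of_parallel_requests in locust.py


def fake_request(i: int) -> int:
    # Stand-in for self.client.get("/calculator-open/baseline-model/result").
    return i


def run_parallel_batch() -> list:
    # Spawn all requests at once and wait for every one to finish,
    # analogous to group.spawn(...) in a loop followed by group.join().
    with ThreadPoolExecutor(max_workers=NUM_PARALLEL_REQUESTS) as pool:
        return list(pool.map(fake_request, range(NUM_PARALLEL_REQUESTS)))


if __name__ == "__main__":
    print(len(run_parallel_batch()))  # prints 35
```

With a real client, the only change would be replacing ``fake_request`` with the actual HTTP call; the join semantics stay the same.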