add performance chapter
direvius committed Apr 4, 2018
1 parent d304baf commit 95b180c
Showing 11 changed files with 126 additions and 21 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -35,3 +35,4 @@ out/
vendor/

docs/_*
.DS_Store
6 changes: 4 additions & 2 deletions docs/architecture.rst
@@ -37,6 +37,9 @@ It can control Instances startup times, RPS amount (requests per second) or othe
By combining two types of Schedulers, RPS Scheduler and Instance Startup Scheduler, you can simulate different types of load.
Instance Startup Scheduler controls the level of parallelism and RPS Scheduler controls throughput.

RPS Scheduler can limit the throughput of the whole instance pool (e.g. 10 RPS on 10 instances means 10 RPS overall), or
limit the throughput of each instance in the pool individually (e.g. 10 RPS on each of 10 instances means 100 RPS overall).

If you set RPS Scheduler to 'unlimited' and then gradually raise the number of Instances in your system by using Instance
Startup Scheduler, you'll be able to study the `scalability <http://www.perfdynamics.com/Manifesto/USLscalability.html>`_
of your service.
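
For instance, a sketch of the schedule part of such a pool config (the schedule types are the ones shown in the tutorial plus the 'unlimited' type mentioned above; the ``duration`` parameter of ``unlimited`` is an assumption, check it against the configuration reference):

.. code-block:: yaml

    rps:               # no throughput limit: instances shoot as fast as they can
      type: unlimited
      duration: 600s   # assumed parameter, check your version's reference
    startup:           # ...while the level of parallelism grows step by step
      type: periodic   # start instances periodically
      period: 5s       # every 5 seconds
      batch: 10        # 10 new instances at a time
      max: 100         # up to 100 instances in total
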
@@ -45,8 +48,7 @@ If you set Instances count to a big, unchanged value (you can estimate the neede
`Little's Law <https://en.wikipedia.org/wiki/Little%27s_law>`_) and then gradually raise the RPS by using RPS Scheduler,
you'll be able to simulate Internet and push your service to its limits.
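
For example, to push a service that answers in about 200 ms towards 2000 RPS, Little's Law (concurrency = throughput × latency) gives 2000 × 0.2 s = 400 requests in flight, so the pool needs at least 400 Instances. A sketch of the corresponding schedules, using only schedule types from the tutorial (the numbers are illustrative):

.. code-block:: yaml

    startup:
      type: once     # a fixed pool, sized with Little's Law: 2000 RPS * 0.2 s = 400
      times: 400
    rps:
      type: line     # raise throughput linearly towards the target
      from: 1
      to: 2000
      duration: 300s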

-You can also combine two methods mentioned above. And, one more thing, RPS Scheduler can control a whole Instances Pool or
-each Instance individually.
+You can also combine two methods mentioned above.

Instances and Guns
++++++++++++++++++
104 changes: 104 additions & 0 deletions docs/performance.rst
@@ -0,0 +1,104 @@
Pandora's performance
=====================

We ran some performance tests of the gun itself. Here are the results.

* Server: nginx, 32 cores, 64 GB RAM.
* Tank: 32 cores, 128 GB RAM.
* Network: 1 Gbit/s.

HTTP requests to nginx
----------------------

Static pages of different sizes. Server-side delays are implemented in a Lua script; the delay time
is set with the ``sleep`` query parameter (in milliseconds):

.. code-block:: lua

    server {
        listen 12999 default;
        listen [::]:12999 default ipv6only=on;
        server_name pandora.test.yandex.net;
        location ~* / {
            rewrite_by_lua_block {
                -- delay the response by ?sleep=<milliseconds> before serving the page
                local args = ngx.req.get_uri_args()
                if args['sleep'] then
                    ngx.sleep(args['sleep']/1000)
                end;
            }
            root /etc/nginx/pandora;
            error_page 404 = 404;
        }
        access_log off;
        error_log off;
    }

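With this config, a request to ``/index.html?sleep=50`` (any static page under the root works) is held for about 50 ms before nginx serves it, so server-side latency can be dialed in per request.
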
* **Connection: Close** 23k RPS

.. image:: screenshot/http_connection_close_td.png
:align: center
:alt: Connection:Close, response times distribution

* **Connection: Keep-Alive** 95k RPS

.. image:: screenshot/http_keep_alive_td.png
:align: center
:alt: Keep-Alive, response times distribution

* **Response size 10 kB** maxed out network interface. OK.
* **Response size 100 kB** maxed out network interface. OK.
* **POST requests 10 kB** maxed out network interface. OK.
* **POST requests 100 kB** maxed out network interface. OK.
* **POST requests 1 MB** maxed out network interface. OK.
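
This is expected: a 1 Gbit/s link is roughly 125 MB/s, so 100 kB responses alone cap out at about 1.2k RPS; in these runs the network, not the gun, was the bottleneck.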

.. image:: screenshot/http_100kb_net.png
:align: center
:alt: 100 kb responses, network load


* **50ms server delay** 30k RPS. OK.
* **500ms server delay** 30k RPS, 30k instances. OK.
* **1s server delay** 50k RPS, 50k instances. OK.
* **10s server delay** 5k RPS, 5k instances. OK.

**All good.**

.. image:: screenshot/http_delay_10s_td.png
:align: center
:alt: 10s server delay, response times distribution

.. image:: screenshot/http_delay_10s_instances.png
:align: center
:alt: 10s server delay, instances count


* **Server failure during the test** OK.

.. image:: screenshot/http_srv_fail_q.png
:align: center
:alt: server fail emulation, response times quantiles


Custom scenarios
----------------

The performance of custom scenarios depends very much on their implementation. In some of our
tests we saw latency spikes caused by GC; they can be avoided by reducing the size of allocations.
It is a good idea to optimize your scenarios.
Go has `a lot <https://github.com/golang/go/wiki/Performance>`_ of tools to help you
with this.
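For a start, ``go test -bench . -benchmem`` reports how much a scenario allocates per iteration, ``go tool pprof`` points at the allocation sites, and running with ``GODEBUG=gctrace=1`` prints a line for every garbage collection, which makes it easy to correlate spikes with GC pauses.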

.. note:: We used JSON-formatted ammo to specify parameters for each scenario run.

* **Small requests** 35k RPS. OK.
* **Some scenario steps with big JSON bodies** 35k RPS. OK.

.. image:: screenshot/scn_cases.png
:align: center
:alt: scenario steps
Binary file added docs/screenshot/http_100kb_net.png
Binary file added docs/screenshot/http_connection_close_td.png
Binary file added docs/screenshot/http_delay_10s_instances.png
Binary file added docs/screenshot/http_delay_10s_td.png
Binary file added docs/screenshot/http_keep_alive_td.png
Binary file added docs/screenshot/http_srv_fail_q.png
Binary file added docs/screenshot/scn_cases.png
36 changes: 17 additions & 19 deletions docs/tutorial.rst
@@ -12,28 +12,26 @@ Pandora supports config files in `YAML`_ format. Create a new file named ``load.
.. code-block:: yaml

    pools:
-     - id: HTTP pool                # Pool name
+     - id: HTTP pool                # pool name (for your choice)
        gun:
-         type: http                 # Gun type
-         target: example.com:80     # Gun target
+         type: http                 # gun type
+         target: example.com:80     # gun target
        ammo:
-         type: uri                  # Ammo format
-         file: ./ammo.uri           # Ammo File
+         type: uri                  # ammo format
+         file: ./ammo.uri           # ammo File
        result:
-         type: phout                # Report format (phout is for Yandex.Tank)
-         destination: ./phout.log   # Report file name
-       rps:                         # RPS Schedule
-         type: periodic             # shoot periodically
-         period: 0.1s               # ten batches each second
-         max: 30                    # thirty batches total
-         batch: 2                   # in batches of two shoots
-       startup:                     # Startup Schedule
-         type: periodic             # start Instances periodically
-         period: 0.5s               # every 0.5 seconds
-         batch: 1                   # one Instance at a time
-         max: 5                     # five Instances total
+         type: phout                # report format (phout is compatible with Yandex.Tank)
+         destination: ./phout.log   # report file name
+       rps:                         # shooting schedule
+         type: line                 # linear growth
+         from: 1                    # from 1 response per second
+         to: 5                      # to 5 responses per second
+         duration: 60s              # for 60 seconds
+       startup:                     # instances startup schedule
+         type: once                 # start 10 instances
+         times: 10

`ammo.uri`:

