Update to use poetry package manager #165

Merged · 4 commits · Aug 7, 2024
27 changes: 15 additions & 12 deletions .github/workflows/pull_request.yml
@@ -15,17 +15,17 @@ jobs:
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
-     - name: Install Pipenv
-       run: pip install pipenv==2022.9.24
-     - name: Cache virtualenv
-       id: cache-virtualenv
-       uses: actions/cache@v4
+     - name: Install Poetry
+       uses: snok/install-poetry@v1
        with:
-         path: ~/.local/share/virtualenvs/
-         key: ${{ runner.os }}-${{ env.PYTHON_VERSION }}-virtualenvs-${{ hashFiles('Pipfile.lock') }}
+         version: 1.8.3
+         virtualenvs-create: true
+     - uses: actions/setup-python@v5
+       with:
+         python-version: ${{ env.PYTHON_VERSION }}
+         cache: 'poetry'
      - name: Install virtual environment
-       if: steps.cache-virtualenv.outputs.cache-hit != 'true'
-       run: pipenv install --dev
+       run: poetry install
      - name: Lint scripts
        run: make lint
  test:
@@ -37,10 +37,13 @@ jobs:
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}
-     - name: Install pipenv
-       run: pip install pipenv==2023.8.22
+     - name: Install Poetry
+       uses: snok/install-poetry@v1
+       with:
+         version: 1.8.3
+         virtualenvs-create: true
      - name: Install virtual environment
-       run: pipenv install --dev
+       run: poetry install
      - name: Running unit tests
        run: make test
  docker-push:
7 changes: 5 additions & 2 deletions Dockerfile
@@ -7,8 +7,11 @@ COPY . /benchmark
RUN apt-get update && apt-get install -y gcc python3-dev

# Install the required dependencies via pip
-RUN pip install pipenv==2022.9.24
-RUN pipenv install --deploy --system
+COPY pyproject.toml pyproject.toml
+COPY poetry.lock poetry.lock
+RUN pip install "poetry==1.8.3"
+RUN poetry config virtualenvs.create false
+RUN poetry install

# Start Locust using LOCUS_OPTS environment variable
ENTRYPOINT ["bash", "./docker_entrypoint.sh"]
12 changes: 6 additions & 6 deletions Makefile
@@ -1,15 +1,15 @@
lint: flake8
-	pipenv run black --check .
+	poetry run black --check .

flake8:
-	pipenv run flake8 --max-complexity 10 --count
+	poetry run flake8 --max-complexity 10 --count

format:
-	pipenv run isort .
-	pipenv run black .
+	poetry run isort .
+	poetry run black .

run:
-	pipenv run ./run.sh requests/test_checkbox.json
+	poetry run ./run.sh requests/test_checkbox.json

test:
-	pipenv run ./scripts/run_tests.sh
+	poetry run ./scripts/run_tests.sh
30 changes: 0 additions & 30 deletions Pipfile

This file was deleted.

2,290 changes: 0 additions & 2,290 deletions Pipfile.lock

This file was deleted.

20 changes: 10 additions & 10 deletions README.md
@@ -21,19 +21,19 @@ The benchmark consumes a requests JSON file that contains a list of HTTP request
To run a benchmark, use:

```bash
-pipenv run ./run.sh <REQUESTS_JSON> <INCLUDE_SCHEMA_URL_IN_TOKEN: Optional> <HOST: Optional>
+poetry run ./run.sh <REQUESTS_JSON> <INCLUDE_SCHEMA_URL_IN_TOKEN: Optional> <HOST: Optional>
```
e.g.
```bash
-pipenv run ./run.sh requests/test_checkbox.json
+poetry run ./run.sh requests/test_checkbox.json
```

This will run 1 minute of locust requests with 1 user and no wait time between requests. The output files are `output_stats.csv`, `output_stats_history.csv` and `output_failures.csv`.
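If you want to sanity-check that output without the helper scripts, the stats file can be read directly. A minimal sketch, assuming the standard Locust CSV layout (exact column headers vary between Locust versions):

```python
# Sketch: print per-endpoint averages from the benchmark output.
# "Name" and "Average Response Time" are typical Locust headers;
# adjust them to whatever your Locust version actually writes.
import csv

with open("output_stats.csv", newline="") as stats_file:
    for row in csv.DictReader(stats_file):
        print(row.get("Name"), row.get("Average Response Time"))
```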

For the web interface:

```bash
-REQUESTS_JSON=requests/test_checkbox.json HOST=http://localhost:5000 pipenv run locust
+REQUESTS_JSON=requests/test_checkbox.json HOST=http://localhost:5000 poetry run locust
```

## Configuration
@@ -57,11 +57,11 @@ Open the network inspector in Chrome or Firefox and ensure 'preserve log' is tic
After the test is complete, right-click on one of the requests in the network inspector and save the log as a HAR file. To generate a requests file from the HAR file run:

```bash
-pipenv run python generate_requests.py <HAR_FILEPATH> <REQUESTS_FILEPATH> <SCHEMA_NAME>
+poetry run python generate_requests.py <HAR_FILEPATH> <REQUESTS_FILEPATH> <SCHEMA_NAME>
```
e.g.
```bash
-pipenv run python generate_requests.py requests.har requests/test_checkbox.json test_checkbox
+poetry run python generate_requests.py requests.har requests/test_checkbox.json test_checkbox
```

## Dealing with repeating sections
@@ -202,21 +202,21 @@ export GOOGLE_APPLICATION_CREDENTIALS="<path_to_json_credentials_file>"

To run the script and download results:
```bash
OUTPUT_BUCKET="<bucket_name>" pipenv run python -m scripts.get_benchmark_results
OUTPUT_BUCKET="<bucket_name>" poetry run python -m scripts.get_benchmark_results
```

This script also accepts optional `NUMBER_OF_DAYS` and `OUTPUT_DIR` environment variables, which allow the user to download a subset of results and set a specific output directory, e.g.
```bash
OUTPUT_BUCKET="<bucket_name>" NUMBER_OF_DAYS=<number_of_days> OUTPUT_DIR="<output_directory>" pipenv run python -m scripts.get_benchmark_results
OUTPUT_BUCKET="<bucket_name>" NUMBER_OF_DAYS=<number_of_days> OUTPUT_DIR="<output_directory>" poetry run python -m scripts.get_benchmark_results
```

### Summarise the Daily Benchmark results
You can get a breakdown of the average response times for a result set by doing:
```bash
OUTPUT_DIR="outputs/daily-test" \
OUTPUT_DATE="2020-01-01" \
-pipenv run python -m scripts.get_summary
+poetry run python -m scripts.get_summary
```

This will output something like:
@@ -245,7 +245,7 @@ If `OUTPUT_DATE` is not provided, then it will output a summary for all results
To get a breakdown of results for a stress test, use the `get_aggregated_summary` script. This accepts a folder containing results as a parameter and provides aggregate totals at the folder level:
```bash
OUTPUT_DIR="outputs/stress-test" pipenv run python -m scripts.get_aggregated_summary
OUTPUT_DIR="outputs/stress-test" poetry run python -m scripts.get_aggregated_summary
```

This will output something like:
@@ -275,7 +275,7 @@ For example, to visualise results for the last 7 days:
```bash
OUTPUT_DIR="outputs/daily-test" \
NUMBER_OF_DAYS="7" \
-pipenv run python -m scripts.visualise_results
+poetry run python -m scripts.visualise_results
```

A line chart will be generated and saved as `performance_graph.png`.
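The same data can also be plotted by hand from downloaded results. A rough sketch only, assuming pandas and matplotlib are available; the input path and column names here are hypothetical, and the real `scripts/visualise_results` may differ:

```python
# Sketch: plot average response time over time from downloaded results.
# "outputs/daily-test/summary.csv", "date" and "average_response_time"
# are illustrative names, not the script's own.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("outputs/daily-test/summary.csv")
df.plot(x="date", y="average_response_time", kind="line", legend=False)
plt.ylabel("Average response time (ms)")
plt.savefig("performance_graph.png")
```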
8 changes: 4 additions & 4 deletions ci/output-results-to-github.yaml
@@ -26,21 +26,21 @@ run:

cd eq-survey-runner-benchmark

-pip3 install pipenv
-pipenv install --deploy
+pip3 install poetry
+poetry install

# Get benchmark outputs
OUTPUT_BUCKET="$OUTPUT_BUCKET" \
OUTPUT_DIR="outputs" \
NUMBER_OF_DAYS="$NUMBER_OF_DAYS" \
-pipenv run python -m scripts.get_benchmark_results
+poetry run python -m scripts.get_benchmark_results

# Date to get summary for
RUNTIME_DATE_STRING="$(date +'%Y-%m-%d')"
SUMMARY=$(OUTPUT_DIR="outputs/${OUTPUT_DIR}" \
OUTPUT_DATE="$RUNTIME_DATE_STRING" \
OUTPUT_TO_GITHUB="True" \
-pipenv run python -m scripts.get_summary)
+poetry run python -m scripts.get_summary)

# Post summary to Github
cd ../
14 changes: 7 additions & 7 deletions ci/output-results-to-slack.yaml
@@ -26,33 +26,33 @@ run:

cd eq-survey-runner-benchmark

-pip3 install pipenv
-pipenv install --deploy
+pip3 install poetry
+poetry install

# Get benchmark outputs
OUTPUT_BUCKET="$OUTPUT_BUCKET" \
OUTPUT_DIR="outputs" \
NUMBER_OF_DAYS="$NUMBER_OF_DAYS" \
-pipenv run python -m scripts.get_benchmark_results
+poetry run python -m scripts.get_benchmark_results

# Create performance graph
OUTPUT_DIR="outputs/${OUTPUT_DIR}" \
NUMBER_OF_DAYS="$NUMBER_OF_DAYS" \
-pipenv run python -m scripts.visualise_results
+poetry run python -m scripts.visualise_results

# Date to get summary for
RUNTIME_DATE_STRING="$(date +'%Y-%m-%d')"
SUMMARY=$(OUTPUT_DIR="outputs/${OUTPUT_DIR}" \
OUTPUT_DATE="$RUNTIME_DATE_STRING" \
-pipenv run python -m scripts.get_summary)
+poetry run python -m scripts.get_summary)

# Post summary to Slack
TITLE="Results For Latest Benchmark" \
INITIAL_COMMENT="Latest Daily Performance Metrics" \
CONTENT="$SUMMARY" \
-pipenv run python -m scripts.slack_notification
+poetry run python -m scripts.slack_notification

# Post performance graph to Slack
TITLE="Performance Graph" \
ATTACHMENT_FILENAME="performance_graph.png" \
-pipenv run python -m scripts.slack_notification
+poetry run python -m scripts.slack_notification
4 changes: 2 additions & 2 deletions doc/performance-investigations/README.md
@@ -51,12 +51,12 @@ outputs/

To get a summary for each dated folder, run the following:
```
-OUTPUT_DIR=outputs/baseline pipenv run python -m scripts.get_summary
+OUTPUT_DIR=outputs/baseline poetry run python -m scripts.get_summary
```

To get an aggregated summary for all dated folders, run the following:
```
-OUTPUT_DIR=outputs/baseline pipenv run python -m scripts.get_aggregated_summary
+OUTPUT_DIR=outputs/baseline poetry run python -m scripts.get_aggregated_summary
```


40 changes: 20 additions & 20 deletions generate_requests.py
@@ -17,49 +17,49 @@ def parse_har_file(har_file):
    requests = []

    for page in har_parser.pages:
-        entries = page.filter_entries(content_type=r'(text/html|application/pdf)')
+        entries = page.filter_entries(content_type=r"(text/html|application/pdf)")
        for entry in entries:
-            entry_request = entry['request']
+            entry_request = entry["request"]

            request_base_url = "{0.scheme}://{0.netloc}".format(
-                urlsplit(entry_request['url'])
+                urlsplit(entry_request["url"])
            )

            request = {
-                'method': entry_request['method'],
-                'url': entry_request['url'].replace(request_base_url, ""),
-                'datetime': dateutil.parser.parse(entry['startedDateTime']),
+                "method": entry_request["method"],
+                "url": entry_request["url"].replace(request_base_url, ""),
+                "datetime": dateutil.parser.parse(entry["startedDateTime"]),
            }

-            if entry_request['method'] == 'POST':
-                request['data'] = {
-                    unquote_plus(item['name']): unquote_plus(item['value'])
-                    for item in entry_request['postData']['params']
+            if entry_request["method"] == "POST":
+                request["data"] = {
+                    unquote_plus(item["name"]): unquote_plus(item["value"])
+                    for item in entry_request["postData"]["params"]
                }
-                request['data'].pop('csrf_token', None)
+                request["data"].pop("csrf_token", None)

            requests.append(request)

-    requests.sort(key=itemgetter('datetime'))
+    requests.sort(key=itemgetter("datetime"))

    for request in requests:
-        request.pop('datetime', None)
+        request.pop("datetime", None)

-    return {'requests': requests}
+    return {"requests": requests}


@click.command()
-@click.argument('har_file', type=click.File('r'))
-@click.argument('requests_file', type=click.File('w'))
-@click.argument('schema_name')
+@click.argument("har_file", type=click.File("r"))
+@click.argument("requests_file", type=click.File("w"))
+@click.argument("schema_name")
def generate_requests(har_file, requests_file, schema_name):
    requests = parse_har_file(har_file)
-    requests['schema_name'] = schema_name
-    requests['schema_url'] = (
+    requests["schema_name"] = schema_name
+    requests["schema_url"] = (
        f"https://storage.googleapis.com/eq-questionnaire-schemas/{schema_name}.json"
    )
    json.dump(requests, requests_file, indent=4)


-if __name__ == '__main__':
+if __name__ == "__main__":
    generate_requests()
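As a usage note, this Click command can be exercised in-process with Click's test runner rather than from the shell. A minimal sketch, assuming a `requests.har` capture exists as described in the README:

```python
# Sketch: invoke the Click command without shelling out.
from click.testing import CliRunner

from generate_requests import generate_requests

runner = CliRunner()
result = runner.invoke(
    generate_requests,
    ["requests.har", "requests/test_checkbox.json", "test_checkbox"],
)
assert result.exit_code == 0, result.output
```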