
Add session/download-pdf endpoints and add graph to represent values #164

Merged
merged 55 commits into main from add-additional-endpoints
Oct 4, 2024

Conversation

VirajP1002
Contributor

@VirajP1002 VirajP1002 commented Jul 29, 2024

What is the context of this PR?

This PR updates the benchmark outputs so that we can monitor and highlight the performance of the more intensive endpoints by including session and download-pdf.

benchmark_stats now reports the 99th percentiles for the session and download-pdf endpoints. To achieve this, the happy path has been updated to include download-pdf, so that the CSV file created when running the benchmark contains the percentiles for this endpoint. The output string of the benchmark stats has been updated to include the new metrics in both the GitHub and non-GitHub outputs.
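A minimal sketch of the percentile extraction described above, using pandas. The column names (`endpoint`, `response_time_ms`) and the sample data are assumptions for illustration, not the repo's actual CSV schema:

```python
import pandas as pd

def get_p99(df: pd.DataFrame, endpoint: str) -> float:
    """Return the 99th-percentile response time (ms) for one endpoint."""
    times = df.loc[df["endpoint"] == endpoint, "response_time_ms"]
    return times.quantile(0.99)

# Hypothetical benchmark rows; the real CSV is produced by the happy path.
df = pd.DataFrame({
    "endpoint": ["session"] * 3 + ["download-pdf"] * 3,
    "response_time_ms": [120, 150, 900, 2000, 2500, 4000],
})
print(f"session p99: {get_p99(df, 'session'):.0f}ms")
print(f"download-pdf p99: {get_p99(df, 'download-pdf'):.0f}ms")
```

`Series.quantile` uses linear interpolation by default, so the result is generally a float rather than an observed sample value.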

visualise_results has been updated to support the new metrics. Because the 99th percentiles for session and download-pdf are significantly larger than the original figures, outputting two graphs was judged the most appropriate solution. Two dataframes are therefore created: one for the original percentiles and one for the additional metrics. The resulting image uses subplots to place the two graphs side by side.
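The side-by-side layout could look roughly like this. All dataframe contents, column names, and the output filename here are made up for illustration; only the two-dataframes-into-two-subplots structure reflects the PR description:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, safe for CI
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical data: original percentiles vs. the much larger new metrics.
original = pd.DataFrame({"run": [1, 2, 3],
                         "p99_ms": [110, 120, 115]})
additional = pd.DataFrame({"run": [1, 2, 3],
                           "session_p99_ms": [900, 950, 870],
                           "download_pdf_p99_ms": [3900, 4100, 3800]})

# One figure, two axes: keeping the large new metrics on their own axis
# stops them from flattening the original percentiles into an unreadable line.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
original.plot(x="run", y="p99_ms", ax=ax1, title="Original percentiles")
additional.plot(x="run", ax=ax2, title="session / download-pdf p99")
fig.tight_layout()
fig.savefig("benchmark_graphs.png")
```

Splitting the axes (rather than sharing one y-scale) is the design choice the PR settled on, given the order-of-magnitude gap between the old and new percentiles.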

Various tests have been updated and parametrised. In addition, conftest has been updated with more fixtures that support the new endpoints.

How to review

  • Check if the changes make sense
  • Suggestions for refactoring
  • Ensure all tests pass
  • Use the README to test the updated visualise_results script
  • Spin up a benchmark pipeline and test the output to Slack, and spin up a Dependabot staging pipeline to test the output to GitHub

@VirajP1002 VirajP1002 marked this pull request as ready for review July 30, 2024 15:03
@petechd
Contributor

petechd commented Aug 20, 2024

Do we want to fix the other places where we cast floats to integers, and round up/down instead (as part of this PR, since we are already here)? Are there other places that still do that? Do other devs have any thoughts?

@VirajP1002
Contributor Author

Do we want to fix the other places where we cast floats to integers, and round up/down instead (as part of this PR, since we are already here)? Are there other places that still do that? Do other devs have any thoughts?

I'm happy to add those in as it removes the rounding issue we talked about. These changes are minor and I guess it would be potentially tweaking the graph values? So I think it's somewhat relevant to this PR, but yep, let's see what the other devs think
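For context on the rounding issue being discussed: casting a float with `int()` truncates toward zero, while `round()` goes to the nearest integer (ties to even), so the two can disagree by a whole unit. A plain-Python illustration, with made-up values rather than the project's actual metrics:

```python
values = [99.6, 120.4, 150.5]

truncated = [int(v) for v in values]   # int() drops the fractional part
rounded = [round(v) for v in values]   # round() goes to nearest, ties to even

print(truncated)  # [99, 120, 150]
print(rounded)    # [100, 120, 150]  (note 150.5 -> 150, banker's rounding)
```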

@berroar
Contributor

berroar commented Sep 23, 2024

We should bump the test coverage as part of this PR as I think we are up to 85% now 🎉

@VirajP1002 VirajP1002 merged commit d5b3d85 into main Oct 4, 2024
3 checks passed
@VirajP1002 VirajP1002 deleted the add-additional-endpoints branch October 4, 2024 14:38