Excluding non-zero exit codes from Relative Time comparison. #591
Comments
I was looking for an issue about this, so I'm happy I found yours! Another case is when you use another command to populate a parameter list, for example; if its output is a little bit off, it can be frustrating to have run all of the benchmarks for nothing because of an error like this, but using

However, it seems that including non-zero exit codes might still be relevant in some cases, so this should be another flag, right? (I'm not really familiar with hyperfine beyond using it; this is just a thought I had.)
|
Thank you for your feedback.
I think so. There are valid use cases where we want to benchmark a program that always returns with a non-zero exit status, so we want it to be included in the comparison. For example, I was once benchmarking

What we do have is the possibility to retrieve exit codes in post-processing (e.g. through the Python scripts in

```json
{
  "results": [
    {
      "command": "fd",
      "mean": 0.010896327180000002,
      "stddev": 0.00019708197362518857,
      "median": 0.010896327180000002,
      "user": 0.0025372600000000004,
      "system": 0.00880242,
      "min": 0.010756969180000003,
      "max": 0.011035685180000001,
      "times": [
        0.010756969180000003,
        0.011035685180000001
      ],
      "exit_codes": [
        0,
        0
      ]
    },
    {
      "command": "find",
      "mean": 0.0018015931800000004,
      "stddev": 0.00024363505567138774,
      "median": 0.0018015931800000004,
      "user": 0.00223126,
      "system": 0.0,
      "min": 0.00162931718,
      "max": 0.0019738691800000006,
      "times": [
        0.0019738691800000006,
        0.00162931718
      ],
      "exit_codes": [
        1,
        1
      ]
    }
  ]
}
```

But I understand you would like to see this feature included in hyperfine itself. If so, I would appreciate any help in designing this feature.
|
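For reference, that kind of post-processing takes only a few lines of Python. This is a sketch against the JSON export shown above (trimmed to the fields needed here), not one of the scripts shipped with hyperfine:

```python
import json

# Example export as produced by `hyperfine --export-json`, trimmed to the
# fields used here; values taken from the output above.
export = json.loads("""
{
  "results": [
    {"command": "fd",   "mean": 0.010896327180000002, "exit_codes": [0, 0]},
    {"command": "find", "mean": 0.0018015931800000004, "exit_codes": [1, 1]}
  ]
}
""")

def drop_failed(results):
    """Keep only benchmark results whose runs all exited with status 0."""
    return [r for r in results if all(code == 0 for code in r["exit_codes"])]

ok = drop_failed(export["results"])
print([r["command"] for r in ok])  # only "fd" remains; both `find` runs failed
```

With the example data above, the `find` entry is dropped despite having the lower mean, since both of its runs exited with status 1.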
For use cases, I don't know if I can think of anything else outside of what @zamu-flowerpot already mentioned (and maybe messing up the parameter lists, like I said). I can't really think of a way to reconcile "ignore non-zero exit codes" and "exclude runs with non-zero exit codes from the result" in a single option (but again, I'm not that familiar with the project). Let me know what you think :)
|
I needed exactly this. Thanks @sharkdp, I was able to get the exact flakiness of a test I was debugging by doing:
|
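As an aside, a flakiness figure like the one mentioned above can be derived from the `exit_codes` array in hyperfine's JSON export. A sketch, with made-up exit codes for illustration (the actual command used above is not shown):

```python
# Hypothetical exit codes, as found in the "exit_codes" field of a
# hyperfine JSON export.
exit_codes = [0, 1, 0, 0, 1, 0, 0, 0, 0, 0]

# Flakiness = fraction of runs that exited with a non-zero status.
flakiness = sum(1 for code in exit_codes if code != 0) / len(exit_codes)
print(f"flakiness: {flakiness:.0%}")  # flakiness: 20%
```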
I usually run a few sets of parameter lists, which quickly explodes the number of instances (which is working as intended!). However, all the runs are shown in the summary at the end, including those that exited with a non-zero exit code.

Looking at the code base, adding a filter in `src/benchmark/relative_speed::compute` to remove results with non-zero exit codes from the output should fix it. I'm not really sure if there are more dependencies on the `compute` function elsewhere, though, or I would have just pushed a PR.

PS. Thanks for all the great software! I use hyperfine, fd, and bat all the time!
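The suggested filter could be sketched as follows, in Python for illustration rather than hyperfine's actual Rust code; `compute_relative_speed` is a hypothetical stand-in for `relative_speed::compute`, and the input format is simplified:

```python
def compute_relative_speed(results):
    """Relative-speed comparison over successful runs only.

    A sketch of the proposed change: results containing any non-zero exit
    code are excluded before the fastest command is determined. `results`
    is a list of dicts with "command", "mean", and "exit_codes" keys.
    """
    ok = [r for r in results if all(code == 0 for code in r["exit_codes"])]
    if not ok:
        return []
    fastest = min(ok, key=lambda r: r["mean"])
    return [
        {"command": r["command"], "relative": r["mean"] / fastest["mean"]}
        for r in ok
    ]

results = [
    {"command": "fd",   "mean": 0.0109, "exit_codes": [0, 0]},
    {"command": "find", "mean": 0.0018, "exit_codes": [1, 1]},
]
print(compute_relative_speed(results))  # `find` is dropped despite being faster
```

One open design question this sketch sidesteps: whether excluded commands should still be listed in the summary (e.g. marked as failed) or silently omitted.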