Benchmarks that do not return output / updated arrays are not validated #26

Open
hardik01shah opened this issue Dec 14, 2024 · 1 comment


hardik01shah commented Dec 14, 2024

Several benchmarks, such as lu, cavity_flow, and scattering_self_energies (and many more), do not return the output arrays they compute, nor the input arrays that are updated in place during the computation. As a result, these benchmarks are not validated at all!

In the validation function in utilities.py, the zip call that pairs the output arguments of the reference (NumPy) implementation with those of the framework implementation constrains validation to the shorter of the two argument lists, which is an empty list when the reference implementation returns None. Consequently, returning the output arrays from the framework implementation alone still validates nothing, yet the terminal reports <Framework> - <impl> - validation: SUCCESS.
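
To make the failure mode concrete, here is a minimal sketch of the pattern described above (the function and variable names are hypothetical, not the actual code in utilities.py):

```python
import numpy as np

def validate(ref_out, frmwrk_out):
    # zip() stops at the shorter sequence, so if ref_out is empty
    # the loop body never runs and validation trivially "succeeds".
    valid = True
    for ref, val in zip(ref_out, frmwrk_out):
        if not np.allclose(ref, val):
            valid = False
    return valid

ref_out = []                          # reference implementation returned None
frmwrk_out = [np.zeros((4, 4))]       # framework returned an output array
print(validate(ref_out, frmwrk_out))  # True -> reported as validation: SUCCESS
```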

To fix this, I suggest:

  • Raise an error, or at least a warning, if the numbers of output arrays returned by the reference and framework implementations do not match (sketched after this list).
  • If the length of the output-array list is zero, i.e. the implementation returns None, likewise raise an error.
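
A minimal sketch of how both checks could look (again with hypothetical names, not the actual utilities.py code):

```python
import numpy as np

def validate(ref_out, frmwrk_out):
    # Guard against silent truncation before zipping the two lists.
    if not ref_out:
        raise ValueError("Reference implementation returned no output "
                         "arrays; the benchmark cannot be validated.")
    if len(ref_out) != len(frmwrk_out):
        raise ValueError(f"Output count mismatch: reference returned "
                         f"{len(ref_out)} arrays, framework returned "
                         f"{len(frmwrk_out)}.")
    return all(np.allclose(ref, val) for ref, val in zip(ref_out, frmwrk_out))
```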

Happy to put in a PR addressing this.

@alexnick83 (Contributor)
Thanks for noting this. This is a known issue, and there is an unfinished PR that addresses it (#20). Thank you for reminding me; I will try to get it done soon(ish).
