Methodology for evaluation #19

Open · 2 tasks
GolanTrev opened this issue Aug 28, 2024 · 0 comments

@GolanTrev (Collaborator)

We need to distinguish between evaluations on a small set of special cases and evaluations on realistic datasets.

  • Three experiments covering missing stops, merging, and splitting, each of the form "for users with properties x, y, z, this problem occurs in parameter regime a, b, c". These are proof-of-concept only; we do not need every combination of settings.

  • Design one experiment that is more general. For example, look at the histogram of q (completeness) for an actual dataset and at preferential-return patterns, and evaluate the whole population rather than focusing on special cases (see the sketch after this list).
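
A minimal sketch of the population-level completeness check, assuming a ping-level DataFrame with `user_id` and `timestamp` columns and taking q as the fraction of hourly bins in the study period that contain at least one ping; the column names, the hourly binning, and the function name are illustrative assumptions, not project conventions:

```python
# Sketch only: completeness q per user, assuming "user_id" / "timestamp"
# columns and hourly bins. These choices are assumptions for illustration.
import numpy as np
import pandas as pd


def completeness_per_user(pings: pd.DataFrame, freq: str = "h") -> pd.Series:
    """Return q = fraction of non-empty time bins for every user."""
    binned = pings["timestamp"].dt.floor(freq)
    start, end = binned.min(), binned.max()
    n_bins = len(pd.date_range(start, end, freq=freq))
    observed = binned.groupby(pings["user_id"]).nunique()
    return (observed / n_bins).rename("q")


# Population-level view: histogram of q over all users, no special cases.
# q = completeness_per_user(pings)
# counts, edges = np.histogram(q, bins=20, range=(0.0, 1.0))
```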
