
Issue/4/evaluation rearrangement #61

Merged
merged 133 commits into issue/4/evaluation-baseline from issue/4/evaluation-rearrange on Jul 7, 2021

Conversation

@aimalz (Collaborator) commented May 25, 2021

We took a stab at rearranging the evaluation subpackage for the Golden Spike v1 release. The initial goal of this work was to isolate the functionality that we hope to ultimately call through qp, hopefully making that future transition a little easier. Most of the changes, however, ended up enforcing some of the API conventions established in the estimation subpackage, specifically those affecting contributors adding new metrics and those affecting how we as experimenters will use the subpackage for the baseline tests of the Golden Spike. We should converge on these conventions before getting too far into unit testing.

We got as far as the PIT (probability integral transform) and the metrics derived from it (modulo a snag with unexpected behavior of qp), and we are working on a demo notebook as well as a similar rearrangement for the CDE Loss. Changes will indeed have to be propagated to the plotting functionality as well, but that can be done back in the primary evaluation branch, along with the unit tests, once this patch is merged in.
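For readers unfamiliar with the metric: the PIT of each object is its estimated CDF evaluated at the true redshift, and a well-calibrated ensemble of photo-z PDFs yields PIT values uniform on [0, 1]. Below is a minimal sketch of the idea using toy Gaussian PDFs via scipy, rather than RAIL's or qp's actual API; all variable names are illustrative.

```python
import numpy as np
from scipy import stats

# Toy setup: N objects with Gaussian photo-z PDFs N(mean_i, sigma_i).
rng = np.random.default_rng(42)
n_obj = 1000
z_true = rng.uniform(0.0, 2.0, size=n_obj)
means = z_true + rng.normal(0.0, 0.05, size=n_obj)  # noisy point estimates
sigmas = np.full(n_obj, 0.05)

# PIT_i = CDF_i(z_true_i); uniform on [0, 1] if the PDFs are well calibrated.
pit = stats.norm.cdf(z_true, loc=means, scale=sigmas)

# One PIT-based metric: Kolmogorov-Smirnov distance from U(0, 1).
ks_stat, p_value = stats.kstest(pit, "uniform")
```

In the rearranged subpackage the CDFs would presumably come from qp distribution objects rather than scipy frozen distributions, but the uniformity check is the same.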

aimalz and others added 30 commits January 14, 2020 20:08
…magnitude normalizing flow. Demo for the creation class.
… creation demo, and made a plot of some of the PDFs. Removed redundancies in if statements in some of the scripts. Removed some old plots.
aimalz added a commit that referenced this pull request Jun 14, 2021
Let's merge this ASAP because it will make #61 simpler, and getting the notebooks to render nicely for the docs seems like a follow-up issue to the minimal version of #39.
Julia Gschwend added 21 commits June 21, 2021 16:51
…rdingly to ensure consistency in the I/O from rail estimation to rail evaluation.
…py, in examples directory where there is no name conflict.
…to indicate that it is based on a discrete distribution, as required by the scipy.stats.entropy function.
Return output as a standardized named tuple, as all other metrics do (see the sketch below).
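The named-tuple convention and the binning required by scipy.stats.entropy might look roughly like the following; the tuple's name, its fields, and the function name are illustrative guesses, not RAIL's actual definitions.

```python
from collections import namedtuple

import numpy as np
from scipy import stats

# Illustrative return type; the real name and fields may differ in RAIL.
stat_and_pval = namedtuple("stat_and_pval", ["statistic", "p_value"])

def kld_metric(pit_values, n_bins=100):
    """KL divergence of the PIT histogram from a uniform reference.

    scipy.stats.entropy operates on discrete distributions, so the
    PIT sample is binned first and compared against flat bin counts.
    """
    pit_hist, _ = np.histogram(pit_values, bins=n_bins, range=(0.0, 1.0))
    uniform_hist = np.full(n_bins, len(pit_values) / n_bins)
    kld = stats.entropy(pit_hist, uniform_hist)  # normalizes both internally
    return stat_and_pval(statistic=kld, p_value=None)  # KLD has no p-value
```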
@aimalz aimalz marked this pull request as ready for review July 2, 2021 16:25
@gschwend (Member) commented Jul 2, 2021

@aimalz this branch is ready to merge with the evaluation baseline. I already moved main.py to the examples directory and tested it there.

@gschwend (Member) commented Jul 2, 2021

I'm not sure whether I'm the one who's supposed to press the merge button. Just in case, I'll wait for others to take a look at the code changes I added this week.

@aimalz aimalz merged commit 58e470a into issue/4/evaluation-baseline Jul 7, 2021
@aimalz aimalz deleted the issue/4/evaluation-rearrange branch July 20, 2021 19:41