
Documentation/tutorial #112

Draft
wants to merge 3 commits into
base: main
Conversation

@kaiserls kaiserls commented Dec 6, 2023

Description

Fixes #107
Fixes #101

Type of change

  • This change requires a documentation update

How Has This Been Tested?

Please note/describe at least one test that you ran to verify your changes.

  • Built the documentation and ran the code examples locally

Checklist For Contributor

  • My code follows the style guidelines of this project (I installed pre-commit)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I wrote my changes in the changelog in the [unreleased] section
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • New and existing unit tests pass locally with my changes

Checklist For Maintainers

  • Continuous Integration (CI) is successfully running
  • Do we want to release/tag a new version? [✔️ / ❌]
    • If yes, add a release to the changelog and set the new version in pyproject.toml. After the merge, tag the release!


@kaiserls kaiserls left a comment


This looks great! I think you hit a sweet spot between brevity and comprehensiveness!
Looking forward to the final version :)

"close the cycle between the data and parameters, we can again sample\n",
"from this distribution and use the forward model to get a discrete\n",
"distribution of the parameters.\n",
"That's enough maths for now - let's take a look at an example! We start with something simple: the average temperatures in different locations, i.e. a model for the dependence of the temperature $y$ on the latitude $q$. In the real world, problems with a known continuous data distribution are rare. Instead, we often rely on discrete measurements. Hence, EPI starts with discrete data points as input and derives a continuous distribution using Kernel Density Estimation (KDE) techniques. From this data distribution the EPI algorithm derives the parameter distribution. To close the cycle between the data and parameters, we can again sample from this distribution and use the forward model to get a finite sample of the parameters.\n",
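The KDE step described in this cell can be sketched independently of eulerpi. A minimal illustration with `scipy.stats.gaussian_kde` (the temperature values below are made up for demonstration):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical discrete temperature measurements (made-up values)
rng = np.random.default_rng(0)
temperatures = rng.normal(loc=15.0, scale=5.0, size=200)

# Derive a continuous density from the discrete data points via KDE
kde = gaussian_kde(temperatures)

# Evaluate the estimated density on a grid
grid = np.linspace(temperatures.min(), temperatures.max(), 100)
density = kde(grid)

print(density.shape)  # (100,)
```

This is only the "discrete data points to continuous distribution" part of the pipeline; the inference of the parameter distribution itself is what eulerpi adds on top.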


to get a finite sample of the parameters
to get a finite sample of the data?

@@ -119,12 +115,12 @@
"\n",


"Additionally, the attributes \n",
"- `param_dim`\n",
"- `data_dim`\n",
"- `PARAM_LIMITS`\n",
"- `CENTRAL_PARAM`\n",

See a few lines above. Maybe say that you need to pass central_param and param_limits to the constructor? PARAM_LIMITS and CENTRAL_PARAM are just for convenience in our example models. But that's just a small detail that I noticed while looking over the tutorial.
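The pattern the comment suggests can be sketched as follows. This is a purely illustrative class, not the actual eulerpi `Model` API, and the forward model `60 * cos(q) - 27` is a toy temperature-vs-latitude relation:

```python
import numpy as np

# Illustrative sketch only -- not the actual eulerpi Model API.
# It shows the suggested pattern: central_param and param_limits are
# passed to the constructor instead of being class-level constants.
class TemperatureModel:
    param_dim = 1
    data_dim = 1

    def __init__(self, central_param, param_limits):
        self.central_param = np.asarray(central_param)
        self.param_limits = np.asarray(param_limits)

    def forward(self, q):
        # Toy forward model: temperature as a function of latitude q
        return 60.0 * np.cos(q) - 27.0


model = TemperatureModel(central_param=[0.5], param_limits=[[0.0, np.pi / 2]])
print(model.forward(np.array([0.5])))
```

Class constants like `PARAM_LIMITS` and `CENTRAL_PARAM` can then remain as convenience defaults for the example models while the constructor stays the canonical entry point.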

"\n",
"Now we can now use EPI to infer the parameter distribution from the data.By default, the `inference` method uses Markov chain Monte Carlo sampling (this can be changed using the inference_type argument). `inference` returns a tuple containing samples from the parameter Markov chain $y_i$, the corresponding data points $q_i = s(y_i)$, the estimated densities $\\Phi_Q (q_i)$ scaled by a constant $c$, and a `ResultManager` object that can be used to load and manipulate the results of EPI."
"Now we can use `eulerpi` to infer the parameter distribution from the data. By default, the `inference` method uses Markov chain Monte Carlo (MCMC) sampling (this can be changed using the `inference_type` argument). `inference` returns a tuple containing three dicts with the samples from the parameter Markov chain $q_i$, the corresponding data points $y_i = s(q_i)$, the estimated densities $\\Phi_Q (q_i)$ scaled by a constant $c$, as well as a `ResultManager` object that can be used to load and manipulate the results of EPI."


Is $q_i$ a single sample? If yes, would it be better to write it after the samples?

"cell_type": "markdown",
"metadata": {},
"source": [
"The two plots correspond to the inferred parameter distribution $\\hat{\\Phi}_{\\mathcal{Q}}$ and a KDE of the pushforward of the inferred sample, i.e. $s(\\hat{q_i})$. We will come back to the plotting function when we have defined a more interesting model:\n",


Maybe already give a hint/reminder of what the plot of the KDE of $s(\hat q_i)$ shows?
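The pushforward check that this plot performs can be sketched in plain numpy/scipy. All numbers and the toy forward model below are made up; the point is only the pattern: apply the forward model $s$ to the inferred parameter sample and compare the KDE of the result against the KDE of the original data:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative sketch (made-up numbers): the pushforward plot compares
# the KDE of s(q_i) -- the forward model applied to the inferred
# parameter sample -- against the KDE of the original data.
rng = np.random.default_rng(1)

def s(q):
    # Toy forward model: temperature as a function of latitude q
    return 60.0 * np.cos(q) - 27.0

q_samples = rng.uniform(0.0, np.pi / 2, size=500)  # stand-in for inferred q_i
pushforward = s(q_samples)                         # s(q_i)

# Stand-in for the original measurement data
data = s(rng.uniform(0.0, np.pi / 2, size=500))

grid = np.linspace(-30.0, 35.0, 200)
kde_push = gaussian_kde(pushforward)(grid)
kde_data = gaussian_kde(data)(grid)
# If the inference worked, the two density curves should lie close together.
```

If the inferred parameter distribution is good, the pushforward KDE reproduces the data distribution, which is exactly what the second plot lets the reader verify by eye.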
