Breaking the multimodalities #1
A neat solution for this would be to reparametrize the velocity centroids. This is actually quite a neat trick - we can set minimal priors on the velocity separations! Ongoing work in the ordered-vlsr branch.
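A minimal sketch of the idea, with hypothetical names (the actual implementation lives in the ordered-vlsr branch): sample the first centroid together with nonnegative separations, and recover the centroids by a cumulative sum, so they are ordered by construction and component labels can never swap.

```python
import numpy as np

def separations_to_centroids(v0, dv):
    """Hypothetical helper: map (first centroid, nonnegative separations)
    to velocity centroids that are ordered by construction."""
    return v0 + np.concatenate(([0.0], np.cumsum(dv)))

# e.g. three components: first at 44 km/s, then separations of 0.2 and 0.3 km/s
print(separations_to_centroids(44.0, [0.2, 0.3]))  # [44.  44.2  44.5]
```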
The time for the coordinate transformation should be negligible given that the transformation matrix was precomputed:

```python
In [42]: T = get_xoff_transform(5)

In [43]: Tinv = np.linalg.inv(T)

In [44]: %timeit np.dot(Tinv, [44, 0.2, 0.3, 1, 0.1])
3.08 µs ± 77.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```
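For context, here is one way such a transform could be constructed - an assumption on my part, not necessarily how `get_xoff_transform` is actually implemented: a lower-bidiagonal matrix mapping the centroids to (first centroid, consecutive offsets), whose inverse recovers the centroids via a cumulative sum.

```python
import numpy as np

def get_xoff_transform(n):
    """Sketch: matrix T mapping centroids x to
    (x[0], x[1]-x[0], ..., x[n-1]-x[n-2]).
    T is lower-bidiagonal, so inverting it once up front is cheap."""
    T = np.eye(n)
    T[np.arange(1, n), np.arange(n - 1)] = -1.0
    return T

T = get_xoff_transform(5)
Tinv = np.linalg.inv(T)
# Tinv applied to (v0, dv1, dv2, ...) recovers the ordered centroids:
print(np.dot(Tinv, [44, 0.2, 0.3, 1, 0.1]))  # [44.  44.2  44.5  45.5  45.6]
```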
Bayesian evidence for a two-component model doesn't differ much between the two parametrizations, although more testing wouldn't hurt.
Marginalized posterior distributions for both cases: [figure]
An important note - when we use the reparametrized …
Despite the feature branch working better than I expected, a few things still need to be done:
For now the priors on centroid separations can be input by hand; time to merge the branch.
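As an illustration only (the bounds, names, and the unit-cube convention here are my assumptions, in the style of MultiNest-like samplers), hand-set priors on the separations could look like:

```python
import numpy as np

def prior_transform(cube):
    """Map unit-cube samples to physical parameters (illustrative bounds).

    cube[0]  -> velocity of the first component, uniform in [40, 50] km/s;
    cube[1:] -> nonnegative separations, uniform in [0, 3] km/s each.
    """
    v0 = 40.0 + 10.0 * cube[0]
    dv = 3.0 * np.asarray(cube[1:])
    return np.concatenate(([v0], dv))
```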
One issue that often pops up in the sampled spectra is that the posterior is symmetrically multimodal, indicating that the velocity components are "mirrored" onto each other.
This is expected, and here's why it happens. Since we have no prior information on where exactly the split in velocity occurs, the easiest way to set a velocity prior is to pass the same prior for the centroid positions of all the components. The live points are then generated randomly within the constrained prior volume, with no knowledge of which spectral component comes first or last. So when the posterior is sampled, we often end up with a mess where different components are mixed together in the posterior samples.
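To see that the mirrored modes are exact copies of each other, note that the model (and hence the likelihood) is invariant under swapping component labels - a quick check with a toy two-Gaussian model (hypothetical, for illustration only):

```python
import numpy as np

def two_gaussians(v, amp, vlsr, sigma):
    """Sum of two Gaussian components evaluated on velocity axis v."""
    return sum(a * np.exp(-0.5 * ((v - c) / s) ** 2)
               for a, c, s in zip(amp, vlsr, sigma))

v = np.linspace(40, 50, 200)
model_a = two_gaussians(v, amp=[1.0, 0.5], vlsr=[44.0, 45.5], sigma=[0.3, 0.4])
model_b = two_gaussians(v, amp=[0.5, 1.0], vlsr=[45.5, 44.0], sigma=[0.4, 0.3])
# Swapped labels, identical spectrum - so both labelings sit at the
# same likelihood, and the posterior grows a mirrored mode for each.
print(np.allclose(model_a, model_b))  # True
```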
Why is this not such a big concern? The "best-fit" parameters don't care whether or not we broke the multimodalities - a maximum-likelihood point estimate would not change. On top of that, the computed evidences integrate over the whole parameter space anyway, and should also be ignorant of the component separation. This is reflected in the smoothness of the evidence and Bayes factor maps.
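One way to make that explicit (a standard argument, stated here for completeness): with an exchangeable prior, both the likelihood and the prior are invariant under the $n!$ relabelings of the components, so restricting the integral to a single ordered wedge and reweighting leaves the evidence unchanged:

$$
Z = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta
  = n! \int_{v_1 < v_2 < \dots < v_n} \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta .
$$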
Why is it crucial to fix anyway? Uncertainty analyses become one hell of a problem when your posterior is multimodal. Also, MLE component maps would look much more coherent afterwards.