SMCABC BoundsError in Parallel #39
I think the error occurs when maxiterations is reached but there are fewer than 100 (nparticles) non-undefined particles. I noticed that the error occurred on the 2nd tolerance; if I increase maxiterations, the error only occurs on the 3rd tolerance. I see that the ABC Rejection code has a check for this situation which prevents the error, so I will try to incorporate that into the SMC code.
I've made a PR with a possible fix, but I am not sure the approach is correct. I essentially copied the check from the ABC Rejection parallel method, where the particles are ranked by distance and the top particles are chosen even if they are above the epsilon.
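The fallback described above might look roughly like this in Julia. This is only a sketch; `top_particles`, `particles`, `distances`, and `nparticles` are illustrative names, not the package's actual internals:

```julia
# Sketch of the ABC-rejection-style fallback: when maxiterations is hit
# before nparticles particles pass the epsilon threshold, rank whatever
# was sampled by distance and keep the best nparticles of them.
function top_particles(particles::Vector, distances::Vector{Float64}, nparticles::Int)
    n = min(nparticles, length(particles))   # may be fewer than requested
    order = sortperm(distances)              # smallest distance first
    return particles[order[1:n]]
end
```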
Thanks! It looks fine at first glance; I will check it in more detail later in the week. FYI, there is also another package that performs ABC: https://github.com/tanhevg/GpABC.jl. I haven't used it, but it looks to be better documented and has more features than this package.
Thanks for the info. It looks nice, but it doesn't seem to be parallelized, which is the main reason I want to use this package.

I've been doing more digging and there is more to the problem. EDIT: I think I've figured it out! Line 156 doesn't take operator precedence (BODMAS) into account, so the minimum is divided by 2 and then that result is subtracted from the maximum, instead of the difference being halved.
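The precedence slip can be illustrated with hypothetical values (the actual expression on line 156 is not quoted here, so the variable names below are made up):

```julia
# In Julia, `/` binds tighter than `-`, so `max - min / 2` is parsed
# as `max - (min / 2)`, not as half the range.
maxkernel, minkernel = 10.0, 2.0         # hypothetical kernel bounds
buggy   = maxkernel - minkernel / 2      # 10.0 - 1.0 == 9.0
correct = (maxkernel - minkernel) / 2    # 8.0 / 2   == 4.0, half the range
```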
Thanks, that's an embarrassing mistake! Thanks for spotting it. Does this fix the issue? I've merged the fix into master.
The issue can still appear, but not as frequently. I'm currently using my fork, as I've made a number of changes for my specific use case, so you can close this issue for now. My fork is a bit messy, but once I clean it up I'll make a PR and see if you like it.

Essentially, I've changed how the parallel loop works: I've nested the parallel for loop inside a while loop. The while loop ends when the target number of particles has been reached or the iteration limit has been reached, and the parallel for loop runs from 1 to a parallel_batch parameter. I've also made it so that perturbation cannot produce particles with zero prior probability: I'm using the uniform perturbation kernel and making sure its upper and lower bounds stay within the uniform prior's bounds. I was a bit worried about whether this change messes with the theory, but it should behave the same as before, just more efficiently, since it doesn't hit the continue statement and try again.
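A rough sketch of the restructured loop and the bounded kernel described above. All names here (`sample_batched`, `draw_particle`, `perturb_bounded`) are illustrative, not the fork's actual code:

```julia
# Outer while loop: stop once enough particles are accepted or the
# iteration budget is spent. Inner loop: a fixed-size parallel batch.
function sample_batched(draw_particle, nparticles, maxiterations, parallel_batch)
    accepted = Any[]
    iterations = 0
    while length(accepted) < nparticles && iterations < maxiterations
        batch = Vector{Any}(undef, parallel_batch)
        Threads.@threads for i in 1:parallel_batch
            batch[i] = draw_particle()   # returns a particle, or nothing on rejection
        end
        append!(accepted, filter(!isnothing, batch))
        iterations += parallel_batch
    end
    return accepted[1:min(end, nparticles)]
end

# Bounded uniform perturbation: clamp the kernel support to the prior's
# [prior_lo, prior_hi], so a perturbed particle can never land at a point
# of zero prior probability (avoiding the reject-and-continue retry).
function perturb_bounded(θ, δ, prior_lo, prior_hi)
    lo = max(prior_lo, θ - δ)
    hi = min(prior_hi, θ + δ)
    return lo + rand() * (hi - lo)
end
```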
OK, I'll close for now then. If there's any way to make a simple example we could include in the tests, that would be valuable. |
This issue is still relevant; it occurred for me today. I think it is best to keep this issue open, since it can still occur.
OK, thanks for this. I'll try to take another look in the next few weeks.
Hello, any update on this? I think this error is related to the one spotted in this issue.
I am running the SMCABC method with `parallel=true` to find the parameter posterior distribution for my custom model, but I end up with a BoundsError like the one below. The "43-element" part is a different number each time. Any idea why this is occurring? I don't get the error when I run the parallel example from the README.