
SMCABC BoundsError in Parallel #39

Open
GantZA opened this issue Jan 7, 2020 · 10 comments
@GantZA
Contributor

GantZA commented Jan 7, 2020

I am running the SMCABC method with parallel=true to find the posterior distribution of the parameters of my custom model, but I end up with a BoundsError like the one below. The "43-element" part is a different number each time.

BoundsError: attempt to access 43-element Array{ApproxBayes.ParticleSMC,1} at index [1:100]
throw_boundserror(::Array{ApproxBayes.ParticleSMC,1}, ::Tuple{UnitRange{Int64}}) at abstractarray.jl:484
checkbounds at abstractarray.jl:449 [inlined]
getindex(::Array{ApproxBayes.ParticleSMC,1}, ::UnitRange{Int64}) at array.jl:737
#runabc#88(::Bool, ::Bool, ::Bool, ::Function, ::ABCSMC, ::Array{Float64,1}) at ABCalgorithm.jl:225
(::getfield(ApproxBayes, Symbol("#kw##runabc")))(::NamedTuple{(:verbose, :progress, :parallel),Tuple{Bool,Bool,Bool}}, ::typeof(runabc), ::ABCSMC, ::Array{Float64,1}) at none:0
top-level scope at none:0

Any idea why this is occurring? I don't get the error when I run the parallel example from the README.

@GantZA
Contributor Author

GantZA commented Jan 7, 2020

I think the error occurs when maxiterations is reached but there are fewer than 100 (nparticles) non-undefined particles. I noticed that the error occurred on the 2nd tolerance; if I increase maxiterations, the error only occurs on the 3rd tolerance.

I see that the ABC Rejection method has a check for this situation which prevents the error. I will try to incorporate that into the SMC code.

@GantZA
Contributor Author

GantZA commented Jan 7, 2020

I've made a PR for a possible fix, but I am not sure the approach is correct. I essentially copied the check done in the parallel ABC Rejection method, where the particles are ranked by distance and the top particles are chosen even if they are above the epsilon.
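For anyone following along, the check described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual ApproxBayes internals; the `Particle` struct and `take_best` function are names I've made up here:

```julia
# Hypothetical sketch of the Rejection-style guard: if maxiterations is hit
# before nparticles are accepted, rank whatever particles we have by distance
# and keep the best ones, even if some exceed the current epsilon. This avoids
# indexing `particles[1:nparticles]` into an array that is too short.
struct Particle
    params::Vector{Float64}
    distance::Float64
end

function take_best(particles::Vector{Particle}, nparticles::Int)
    sorted = sort(particles, by = p -> p.distance)
    return sorted[1:min(nparticles, length(sorted))]
end
```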

@marcjwilliams1
Owner

Thanks! It looks fine at first glance. I will check in more detail later in the week.

FYI there is also this package that performs ABC - https://github.com/tanhevg/GpABC.jl

I haven't used it but it looks to be better documented and has more features than this package.

@GantZA
Contributor Author

GantZA commented Jan 7, 2020

I've been doing more digging and there is more to the problem. For some reason the perturbparticle function is returning newparticle with a 0.0 prior probability about 95% of the time. I'll investigate further and get back to you.

Thanks for the info. It looks nice, but it doesn't seem to be parallelized, which is the main reason I want to use this package.

EDIT: I think I've figured it out! Line 156 here

ABCsetup.kernel.kernel_parameters = (maximum(ABCrejresults.parameters, dims = 1) - minimum(ABCrejresults.parameters, dims = 1) ./2)[:]

doesn't take into account operator precedence (BODMAS): `./ 2` binds tighter than `-`, so the minimum is divided by 2 and then subtracted from the maximum. It should be

ABCsetup.kernel.kernel_parameters = ((maximum(ABCrejresults.parameters, dims = 1) - minimum(ABCrejresults.parameters, dims = 1)) ./2)[:]
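A small self-contained demonstration of the precedence difference, using a toy matrix in place of `ABCrejresults.parameters`:

```julia
# Toy stand-in for ABCrejresults.parameters: two particles, two parameters.
A = [1.0 10.0;
     5.0  2.0]

# Buggy: `./` binds tighter than `-`, so this computes max .- (min ./ 2).
buggy = (maximum(A, dims = 1) - minimum(A, dims = 1) ./ 2)[:]

# Fixed: parentheses force (max - min) ./ 2, i.e. half the parameter range.
fixed = ((maximum(A, dims = 1) - minimum(A, dims = 1)) ./ 2)[:]

# buggy == [4.5, 9.0]; fixed == [2.0, 4.0]
```

With the buggy version the kernel widths are far too large whenever the minima are small relative to the maxima, which plausibly explains the frequent zero-prior perturbed particles.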

@marcjwilliams1
Owner

Thanks. That's an embarrassing mistake! Thanks for spotting.

Does this fix the issue? I've merged into master.

@GantZA
Contributor Author

GantZA commented Jan 8, 2020

The issue can still appear, but not as frequently. I'm currently using my fork, as I've made a number of changes for my specific use case, so you can close this issue for now. My fork is a bit messy, but when I clean it up I'll make a PR and see if you like it.

Essentially I've changed how the parallel loop works: I have nested the parallel for loop inside a while loop. The while loop ends when the required number of particles has been reached or the number of iterations has been reached. The parallel for loop runs from 1 to a parallel_batch parameter.
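A rough sketch of that loop structure. The names (`sample_particles`, `parallel_batch`, the NaN sentinel) are my own assumptions for illustration, not the actual fork's code:

```julia
# Hypothetical sketch: keep sampling in parallel batches until either enough
# particles are accepted or the iteration budget is exhausted. This avoids
# ever slicing more particles than exist.
function sample_particles(simulate, accept, nparticles, maxiterations, parallel_batch)
    accepted = Float64[]
    its = 0
    while length(accepted) < nparticles && its < maxiterations
        # NaN marks a rejected slot (assumes simulate never returns NaN itself).
        batch = fill(NaN, parallel_batch)
        Threads.@threads for i in 1:parallel_batch
            d = simulate()
            batch[i] = accept(d) ? d : NaN
        end
        append!(accepted, batch[.!isnan.(batch)])
        its += parallel_batch
    end
    return accepted, its
end
```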

I've also made it so that perturbation can never produce a particle with zero prior probability. I'm just using the uniform perturbation kernel and making sure that the upper and lower bounds stay within the prior's uniform bounds. I was a bit worried about whether this change messes with the theory, but I believe it should be the same as before, just more efficient, as it doesn't hit the continue statement and try again.
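The clamped uniform perturbation described above might look something like this (again a hypothetical sketch with made-up names, shown for one scalar parameter):

```julia
# Perturb θ with a uniform kernel of the given width, but clamp the kernel's
# support to [lower, upper] (the uniform prior's bounds) so the perturbed
# particle can never fall outside the prior and get zero prior probability.
function perturb_within_prior(θ, width, lower, upper)
    lo = max(θ - width / 2, lower)
    hi = min(θ + width / 2, upper)
    return lo + rand() * (hi - lo)   # uniform draw on the clamped interval
end
```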

@marcjwilliams1
Owner

OK, I'll close for now then. If there's any way to make a simple example we could include in the tests, that would be valuable.

@francescoalemanno

francescoalemanno commented May 27, 2020

This issue is still relevant; it just occurred to me today. I think it is best to keep this issue open, since it can still occur.

@marcjwilliams1
Owner

OK, thanks for this. I'll try and take another look in the next few weeks.

@ClaudMor

Hello,

Any update on this? I think this error is related to the one spotted in this issue.
