fyi some other Stan benchmarking code #1
I remember having some trouble getting Adept, one of the other libraries, working with gbench, but that was for an earlier version, so who knows if things have changed. I manually wrote the timing just to be safe, but I might clean this up and try porting it to gbench (though I don't think the times will be much different)!
Do you remember what the issue was? I can try running some of this either this week or next. Yeah, if your stuff works then ya know that's fine; I just like gbench because it's pretty standard. Also, side note: FastAD looks very neat! I like how you compound expressions together, and a lot of the other design choices are very cool.
If I recall correctly, the gradient was actually not correct and there were memory errors (sometimes leading to a segfault). But there's also a good chance I didn't write proper code for Adept back then. Actually, for my own curiosity, I'm going to try to use gbench instead; what the hell, why not :D I agree gbench is super nice. Also, thanks for checking out FastAD! That means a lot coming from a Stan dev!
It may be useful to separate out each program into its own executable. My guess is that would affect what the compiler looks at. There are also some oddities to watch out for. For instance, in Stan we have a global memory arena that we need to keep track of. After each gradient evaluation the program needs to zero out the adjoints to run the same gradient again, and for a new gradient calculation you want to run recover_memory().
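A minimal sketch of that distinction, assuming the `stan::math::set_zero_all_adjoints()` and `stan::math::recover_memory()` free functions from recent Stan Math releases (illustrative only, not code from either repo):

```cpp
#include <stan/math.hpp>

void sketch() {
  using stan::math::var;
  var x = 2.0, y = 3.0;
  var z = x * y;                        // builds a tape on the global arena

  z.grad();                             // first reverse pass
  stan::math::set_zero_all_adjoints();  // zero adjoints to re-run the same tape
  z.grad();                             // same gradient again

  stan::math::recover_memory();         // free the arena before a new calculation
}
```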
I see, so in my benchmark run_test.hpp I'm currently doing this:

```cpp
sw.start();  // start stopwatch
for (int i = 0; i < stan_pack.n_iter; ++i) {  // iterate like 10000 times
  stan::math::gradient(f, x, fx, grad_fx);
}
sw.stop();  // stop stopwatch
```

and you're recommending this instead:

```cpp
sw.start();
for (int i = 0; i < stan_pack.n_iter; ++i) {
  stan::math::gradient(f, x, fx, grad_fx);
  stan::math::recover_memory();
}
sw.stop();
```
@bbbales2 it's the second one, right? Since gradient creates a vector of …
Oh yes, good call! Forgot the nested stack recovers memory after leaving scope.
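A rough sketch of the nested-stack behavior being referenced, assuming the RAII `stan::math::nested_rev_autodiff` guard from recent Stan Math (older releases expose the same idea via `start_nested()` / `recover_memory_nested()`); `nested_sketch` is just an illustrative name:

```cpp
#include <stan/math.hpp>

void nested_sketch() {
  for (int i = 0; i < 10000; ++i) {
    stan::math::nested_rev_autodiff nested;  // push a nested stack
    stan::math::var x = 1.5;
    stan::math::var y = stan::math::sin(x) * x;
    y.grad();                                // reverse pass over the nested tape
  }  // `nested` leaves scope here and the nested memory is recovered
}
```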
@SteveBronder update: I just ported everything to gbench - turns out I was just coding Adept wrong before! Stan-related code is in benchmark/stan if you're interested. |
Nice!! I'll try to take a look this week or next |
@bbbales2 has been working on benchmarks using google bench. Thought you might find some of them useful:
https://github.com/bbbales2/perf-math/pull/1/files
Also, is there a reason you are not using googlebench? I find it pretty easy to set things up with, and it supports manual timing so you can catch the forward and/or reverse pass times.
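For reference, a rough sketch of what manual timing around a Stan gradient call can look like with google benchmark; the functor and benchmark names here are made up for illustration and are not taken from the linked PR:

```cpp
#include <benchmark/benchmark.h>
#include <stan/math.hpp>
#include <chrono>

// Illustrative functor only; the real test functions live in the linked repos.
struct dot_self_functor {
  template <typename T>
  T operator()(const Eigen::Matrix<T, Eigen::Dynamic, 1>& x) const {
    return stan::math::dot_self(x);
  }
};

static void bm_stan_gradient(benchmark::State& state) {
  Eigen::VectorXd x = Eigen::VectorXd::Random(state.range(0));
  double fx;
  Eigen::VectorXd grad_fx;
  for (auto _ : state) {
    auto start = std::chrono::high_resolution_clock::now();
    stan::math::gradient(dot_self_functor{}, x, fx, grad_fx);  // forward + reverse pass
    auto end = std::chrono::high_resolution_clock::now();
    state.SetIterationTime(std::chrono::duration<double>(end - start).count());
    stan::math::recover_memory();  // not timed; resets the arena (may be redundant if gradient nests internally)
  }
}
BENCHMARK(bm_stan_gradient)->Arg(128)->UseManualTime();

BENCHMARK_MAIN();
```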