We should make sure to handle the buffering / IO / memory usage of reboost. For example, in our current (legacy) L-200 simulation stack some processes use > 10 GB of memory, which means only a few processes can run in parallel, since some simulations (2vbb) are very large.
Currently, for the hit tier (for HPGe) I read the data in chunks and this works fine.
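For reference, a minimal sketch of what I mean by chunked reading, assuming legend-pydataobj's `lgdo.lh5.LH5Iterator`; the file and table names and the `process` step are hypothetical, and the exact iteration protocol (yielding the buffer table alone vs. a tuple with entry bookkeeping) depends on the package version:

```python
# minimal sketch: chunked iteration over one HPGe hit table.
# assumes legend-pydataobj's LH5Iterator; names below are made up.
from lgdo import lh5

it = lh5.LH5Iterator(
    "hit/output.lh5",    # hypothetical hit-tier file
    "hit/det001",        # hypothetical HPGe detector table
    buffer_len=100_000,  # rows held in memory per chunk
)

for chunk in it:
    # `chunk` is an LGDO Table of at most `buffer_len` rows, so peak
    # memory stays bounded regardless of the total file size
    process(chunk)  # hypothetical downstream processing
```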
However, for events / TCM this is not possible and the full hit-tier file is read into memory (similar to the evt tier in data).
We could consider adding a limit on the maximum hit file size, or an option to generate multiple output files?
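For the multi-file option, a rough sketch of what I have in mind; the size threshold, file naming scheme, `hit_chunks()` generator and the exact write call are all just assumptions to illustrate the idea:

```python
# sketch: roll the hit-tier output over to a new file once a size
# threshold is exceeded. Threshold, names and helpers are hypothetical.
import os
from lgdo import lh5

MAX_FILE_SIZE = 4 * 1024**3  # e.g. cap each output file at 4 GB
file_idx = 0

def current_file() -> str:
    return f"hit/output_{file_idx:03d}.lh5"

for chunk in hit_chunks():  # hypothetical chunk generator
    # switch to the next file when the current one gets too big
    if (os.path.exists(current_file())
            and os.path.getsize(current_file()) > MAX_FILE_SIZE):
        file_idx += 1
    lh5.write(chunk, "hit/det001", current_file(), wo_mode="append")
```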
Maybe we should review the overall strategy, @gipert / @ManuelHu?
On the optical part: the actual files containing stp data, or my optmap "evt" files (just summed hit counts), are iterated and should not take too much memory.
Optical maps, on the other hand, can get huge (we are talking about > 50 GB for all individual channels together), so this is certainly a problem.
We could store the index into the elm table (i.e. an additional evtid column) along with the hits in the hit tier. Then we would already have the information needed to read, from each detector's hit table, the chunks that correspond to a given number of events, which would constitute the buffer at each iteration. Note that this also works if hits are dropped at the build-hit stage (for example if not a single scintillation photon reaches the SiPMs: there are steps but no hits).
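A sketch of how the stored evtid column could then drive the chunking, assuming it is sorted within each detector table; all file/table names and the `build_events` step are hypothetical:

```python
# sketch: use a per-detector (sorted) evtid column to find the row
# ranges covering a window of events, then read only those rows from
# each detector hit table. Names are hypothetical.
import numpy as np
from lgdo import lh5

EVENTS_PER_BUFFER = 10_000

# read only the evtid columns up front (small compared to full tables)
evtids = {
    det: lh5.read(f"hit/{det}/evtid", "hit/output.lh5").nda
    for det in ("det001", "det002")
}

first_evt = min(e[0] for e in evtids.values())
last_evt = max(e[-1] for e in evtids.values())

for start in range(first_evt, last_evt + 1, EVENTS_PER_BUFFER):
    stop = start + EVENTS_PER_BUFFER
    for det, e in evtids.items():
        # row range covering events [start, stop); handles detectors
        # with no hits for some events (the range is simply empty)
        i0 = np.searchsorted(e, start, side="left")
        i1 = np.searchsorted(e, stop, side="left")
        if i1 > i0:
            tbl = lh5.read(f"hit/{det}", "hit/output.lh5",
                           start_row=i0, n_rows=i1 - i0)
            build_events(det, tbl)  # hypothetical downstream step
```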