Example of how to write data files as a series of submodels. #18
Open
nrnhines wants to merge 2 commits into master from test_submodel
test_submodel.py (new file):
# Example of how to construct a model as a series of submodels. Each
# submodel is built, CoreNEURON data is written, and the submodel
# is destroyed. This file can be run with
# python test_submodel.py -rparm
# which writes data to the coredat folder. That folder can be compared to
# the corenrn_data folder written by
# mpiexec -n 4 nrniv -python -mpi -rparm -coreneuron -filemode ringtest.py
# diff -r coredat corenrn_data
# Note: -rparm is used for a more substantive test in that all the cells are
# distinct with different parameters.

# The overall strategy for a ringtest model relies on the fact that it
# already does parallel setup, so that there are pc.nhost() submodels,
# one built on each rank, pc.id(). This setup did not use pc.nhost() or
# pc.id() directly but stored those values in the global variables nhost and
# rank respectively. So it is an easy matter to subvert that slightly, run
# with a single process, iterate over range(nsubmodel), and for each iteration
# merely set rank and nhost to the proper values. The one exception to this
# in the ringtest setup was the call to pc.set_gid2node(gid, rank), which was
# changed to pc.set_gid2node(gid, pc.id()) since that function requires the
# true rank of this process to function correctly. The other ringtest
# transformation that was required was to factor out the ring build and
# randomization into functions that are callable from here as well as in
# the original ringtest.py.

from neuron import h
pc = h.ParallelContext()
cvode = h.CVode()

import ringtest

def test_submodel(nsubmodel):
    coredat = "./coredat"
    cvode.cache_efficient(1)
    gidgroups = [h.Vector() for _ in range(nsubmodel)]  # used to write files.dat at end
    for isubmodel in range(nsubmodel):
        submodel = build_submodel(isubmodel, nsubmodel)  # just like a single rank on an nhost cluster
        pc.nrnbbcore_write(coredat, gidgroups[isubmodel])
        teardown()
        submodel = None

        # verify no netcons or sections. Ready to go on to the next submodel
        assert h.List("NetCon").count() == 0
        assert len([s for s in h.allsec()]) == 0

    write_files_dat(coredat, gidgroups)

def build_submodel(isubmodel, nsubmodel):
    # fake nhost and rank
    ringtest.settings.nhost = nsubmodel
    ringtest.settings.rank = isubmodel

    # split into two parts to avoid timeit problems.
    rings = ringtest.network()
    ringtest.randomize(rings)

    # same initialization as ringtest
    pc.set_maxstep(10)
    h.stdinit()

    return rings

def teardown():
    pc.gid_clear()
    # delete your NetCons list
    # delete your Cells list
    # unfortunately, cannot delete submodel here as there is a reference to it
    # in test_submodel(nsubmodel)

# write out the files.dat file
def write_files_dat(coredat, gidgroups):
    f = open(coredat + "/files.dat", "w")
    f.write("1.4\n")  # CoreNEURON data version

    ng = sum(len(g) for g in gidgroups)
    f.write(str(ng) + '\n')  # number of groups

    for gidgroup in gidgroups:
        for x in gidgroup:
            f.write(str(int(x)) + '\n')  # group id

    f.close()

if __name__ == "__main__":
    test_submodel(4)
@nrnhines: can we get the data version as a property or via some method? We have the same problem of hard-coding it in neurodamus, and I wonder whether it could be done in a better way.
I was thinking the same thing. The first thing that comes to mind is yet another ParallelContext method such as pc.nrncore_data_version(), or perhaps something more generic such as pc.nrncore_property('property_name'), since I would also like to know whether coreneuron is available, whether the gpu is available, etc. Also, the documentation for pc.nrnbbcore_write(...) is missing the first-line data version requirement for files.dat.
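For illustration, a minimal sketch of how such an accessor could be used, assuming one of the proposed (not yet existing) method names were added; today it can only fall back to the value hard-coded in write_files_dat():

from neuron import h

pc = h.ParallelContext()

def coreneuron_data_version():
    # pc.nrncore_data_version() is only a proposed name and does not exist
    # yet; fall back to the value currently hard-coded in test_submodel.py.
    try:
        return pc.nrncore_data_version()
    except AttributeError:
        return 1.4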
Or, as an end user, I don't want to manage combining these models like this. What I want to say is: take these subdir_1 subdir_2 subdir_3 and simulate them. Internally we can do the necessary symlinks and group the files.dat. Another possibility: coreneuron could also be updated to accept multiple data directories! (I am now thinking more about this possibility!)
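As a rough illustration of the grouping idea, a hypothetical helper (merge_coredat and the subdir names are invented here) that combines several per-submodel data directories, assuming each holds a files.dat in the format written by write_files_dat() above (version line, group count, one group id per line) plus the per-group data files those ids refer to:

import os
import shutil

def merge_coredat(subdirs, outdir, version="1.4"):
    # Merge several per-submodel CoreNEURON data folders into one.
    os.makedirs(outdir, exist_ok=True)
    gids = []
    for d in subdirs:
        with open(os.path.join(d, "files.dat")) as f:
            tokens = f.read().split()
        # tokens[0] is the data version, tokens[1] the group count
        gids.extend(tokens[2:])
        # copy (or symlink) every per-group data file into the merged folder
        for name in os.listdir(d):
            if name != "files.dat":
                shutil.copy(os.path.join(d, name), os.path.join(outdir, name))
    with open(os.path.join(outdir, "files.dat"), "w") as f:
        f.write(version + "\n")
        f.write("%d\n" % len(gids))
        for gid in gids:
            f.write(gid + "\n")

# e.g. merge_coredat(["subdir_1", "subdir_2", "subdir_3"], "coredat_merged")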
We should discuss by zoom. I'm not seeing the value of multiple data directories for this problem, since I can't envision the filenames not being unique regardless of the number of submodels. You are right that files.dat can easily be handled by nrnbbcore_write, but we need to work out some api details. Maybe the user does not have to be aware of the gidgroups array of Vector either.
I propose extending nrnbbcore_write with a new signature that indicates how many submodels are involved and at which point in the sequence this call is embedded (when i_submodel == (nsubmodel - 1), the files.dat file is written at the end). Another possibility is to count down from a beginning n_submodel to 0 and, when 0 is reached, emit the files.dat file. In any event we would like this to work on an mpi cluster and/or with threads.

It would also be nice to support the submodel strategy for the case of a separate launch of NEURON for groups of submodels. That would require the accumulating gidgroup information to persist across launches, which would mean writing some kind of intermediate file, or else a files.dat to which nrnbbcore_write appends further gidgroup information while also updating the second line of the file (ngroup). If this latter is implemented, then the signature could be simplified further, meaning that files.dat is updated at the end of each call and so is always valid when the call exits.

Lastly, I could imagine that the optional bool append could be eliminated, with the default being to always append, and it would be up to the user to clear out the "outdat" folder or start a new one if wanting to start from the beginning.
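The proposed signatures appeared as code in the original comment; a purely illustrative rendering, with argument names invented here and neither overload existing in NEURON at the time of this PR, might look like:

isubmodel, nsubmodel = 0, 4  # position in the submodel sequence (example values)
append = True                # whether to add to an existing coredat folder

# Variant 1 (hypothetical): tell nrnbbcore_write where this submodel sits in
# the sequence, so it can emit files.dat itself after the last submodel.
pc.nrnbbcore_write("./coredat", isubmodel, nsubmodel)

# Variant 2 (hypothetical): files.dat is updated in place on every call, so
# only an append flag is needed (or none at all, if append is the default).
pc.nrnbbcore_write("./coredat", append)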
And is this a good time to allow the synonym pc.nrncore_write([path], [bool append])? Note, in test_submodel.py this would change the nrnbbcore_write statement accordingly and would eliminate write_files_dat(...).
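The exact replacement statement was shown in the original comment; as a rough sketch, assuming the proposed pc.nrncore_write(path, append) synonym existed and maintained coredat/files.dat itself, the loop in test_submodel() above could shrink to something like:

# Hypothetical rewrite: pc.nrncore_write() is only the synonym proposed above.
# If it kept files.dat up to date itself, the gidgroups bookkeeping and
# write_files_dat() would no longer be needed.
def test_submodel(nsubmodel):
    coredat = "./coredat"
    cvode.cache_efficient(1)
    for isubmodel in range(nsubmodel):
        submodel = build_submodel(isubmodel, nsubmodel)
        pc.nrncore_write(coredat, isubmodel > 0)  # append after the first submodel
        teardown()
        submodel = None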