Example of how to write data files as series of submodels. #18

Open · wants to merge 2 commits into master (changes shown from 1 commit)
2 changes: 1 addition & 1 deletion ring.py
@@ -47,7 +47,7 @@ def mkcells(self, ncell, nbranch, ncompart, types):
         secpar, segvec = celltypeinfo(type, nbranch, ncompart)
         cell = h.B_BallStick(secpar, segvec)
         self.cells.append(cell)
-        settings.pc.set_gid2node(gid, settings.rank)
+        settings.pc.set_gid2node(gid, settings.pc.id())
         nc = cell.connect2target(None)
         settings.pc.cell(gid, nc)
22 changes: 14 additions & 8 deletions ringtest.py
@@ -118,29 +118,35 @@ def prun(tstop):
     return runtime, load_balance, avg_comp_time, spk_time, gap_time


-if __name__ == '__main__':
-
-    ## Create all rings ##
-
-    timeit(None, settings.rank)
-
+def network():
     # create network / ring of cells
     ran = h.Random()
     ran.Random123(0, 1)
     types = shuffle([i % ntype for i in range(ncell * nring)], ran)
     rings = [Ring(ncell, nbranch, ncompart, i * ncell, types) for i in range(nring)]
+    return rings

-    timeit("created rings", settings.rank)
-
+def randomize(rings):
     # randomize parameters if asked
     if randomize_parameters:
         from ranparm import cellran
         for ring in rings:
             for gid in ring.gids:
                 if pc.gid_exists(gid):
                     cellran(gid, ring.nclist)
+
+if __name__ == '__main__':
+
+    ## Create all rings ##
+
+    timeit(None, settings.rank)
+
+    rings = network()
+    timeit("created rings", settings.rank)
+    if randomize_parameters:
+        randomize(rings)
     timeit("randomized parameters", settings.rank)

     ## CoreNEURON setting ##
85 changes: 85 additions & 0 deletions test_submodel.py
@@ -0,0 +1,85 @@
# Example of how to construct a model as a series of submodels. Each
# submodel is built, CoreNEURON data is written, and the submodel
# is destroyed. This file can be run with
#   python test_submodel.py -rparm
# which writes data to the coredat folder. That folder can be compared to
# the corenrn_data folder written by
#   mpiexec -n 4 nrniv -python -mpi -rparm -coreneuron -filemode ringtest.py
#   diff -r coredat corenrn_data
# Note: -rparm gives a more substantive test in that all the cells are
# distinct, with different parameters.

# The overall strategy relies on the fact that the ringtest model already
# does parallel setup, so that there are pc.nhost() submodels, one built on
# each rank, pc.id(). That setup did not use pc.nhost() or pc.id() directly
# but stored those values in the global variables nhost and rank
# respectively. So it is an easy matter to subvert that slightly: run with
# a single process, iterate over range(nsubmodel), and for each submodel
# merely set rank and nhost to the proper values. The one exception in the
# ringtest setup was the call to pc.set_gid2node(gid, rank), which was
# changed to pc.set_gid2node(gid, pc.id()) since that function requires the
# true rank of this process to function correctly. The other ringtest
# transformation required was to factor out the ring build and
# randomization into functions that are callable from here as well as from
# the original ringtest.py.

from neuron import h
pc = h.ParallelContext()
cvode = h.CVode()

import ringtest

def test_submodel(nsubmodel):
    coredat = "./coredat"
    cvode.cache_efficient(1)
    gidgroups = [h.Vector() for _ in range(nsubmodel)]  # used to write files.dat at the end
    for isubmodel in range(nsubmodel):
        submodel = build_submodel(isubmodel, nsubmodel)  # just like a single rank on an nhost cluster
        pc.nrnbbcore_write(coredat, gidgroups[isubmodel])
        teardown()
        submodel = None

        # verify no NetCons or Sections remain; ready to go on to the next submodel
        assert h.List("NetCon").count() == 0
        assert len([s for s in h.allsec()]) == 0

    write_files_dat(coredat, gidgroups)

def build_submodel(isubmodel, nsubmodel):
    # fake nhost and rank
    ringtest.settings.nhost = nsubmodel
    ringtest.settings.rank = isubmodel

    # broken into two parts to avoid timeit problems
    rings = ringtest.network()
    ringtest.randomize(rings)

    # same initialization as ringtest
    pc.set_maxstep(10)
    h.stdinit()

    return rings

def teardown():
    pc.gid_clear()
    # delete your NetCons list
    # delete your Cells list
    # unfortunately, the submodel cannot be deleted here as there is a
    # reference to it in test_submodel(nsubmodel)

# write out the files.dat file
def write_files_dat(coredat, gidgroups):
    f = open(coredat + "/files.dat", "w")
    f.write("1.4\n")  # CoreNEURON data version
Member:

@nrnhines: can we get the version as a property or via some method? We have the same problem of hard-coding it in neurodamus, and I wonder if it could be done in a better way.

Member Author:

I was thinking the same thing. The first thing that comes to mind is yet another ParallelContext method such as pc.nrncore_data_version(), or perhaps something more generic such as pc.nrncore_property('property_name'), since I would also like to know whether coreneuron is available, whether the gpu is available, etc. Also, the documentation for pc.nrnbbcore_write(...) is missing the first-line data version requirement for files.dat.
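For concreteness, write_files_dat above could then avoid the hard-coded version string with something like this (both method names are hypothetical proposals from this thread, not existing ParallelContext API):

    # hypothetical methods proposed above; neither exists yet
    f.write(str(pc.nrncore_data_version()) + "\n")
    # or, via a generic property query
    f.write(str(pc.nrncore_property('data_version')) + "\n")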

Member:

Or, as an end user, I don't want to manage combining these models like this. What I want to say is: take these subdir_1, subdir_2, subdir_3 and simulate them. Internally we can make the necessary symlinks and group the files.dat.

Another possibility: coreneuron could also be updated to accept multiple data directories! (I am now thinking more about this possibility!)

Member Author:

We should discuss by zoom. I'm not seeing the value of multiple data directories for this problem, since I can't envision the filenames not being unique regardless of the number of submodels. You are right that files.dat could easily be handled by nrnbbcore_write, but we need to work out some API details. Maybe the user would not have to be aware of the gidgroups array of Vectors either.

Member Author:

I propose extending nrnbbcore_write with a new signature

pc.nrnbbcore_write([path, [i_submodel, n_submodel]])

which indicates how many submodels are involved and at which point in the sequence this call is embedded. (When i_submodel == n_submodel - 1, the files.dat file is written at the end.) Another possibility is to count down from a beginning n_submodel to 0 and, when 0 is reached, emit the files.dat file. In any event, we would like this to work on an MPI cluster and/or with threads. It would also be nice to support the submodel strategy for the case of separate launches of NEURON for groups of submodels. That would require the accumulating gidgroup information to persist across launches, which would mean writing some kind of intermediate file, or else a files.dat to which nrnbbcore_write appends further gidgroup information while also updating the second line of the file (ngroup). If the latter were implemented, the signature could be further simplified to

pc.nrnbbcore_write([path, [bool append]])

which would mean that files.dat is updated at the end of this call and so is always valid when the call exits. Lastly, I could imagine that the optional bool append could be eliminated, with append always the default; it would then be up to the user to clear out the "outdat" folder or start a new one when wanting to start from the beginning.
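For concreteness, under the first signature the loop in test_submodel above would become something like this (hypothetical, since none of this is implemented yet):

    for isubmodel in range(nsubmodel):
        build_submodel(isubmodel, nsubmodel)
        # files.dat would be emitted automatically on the last call
        pc.nrnbbcore_write("./coredat", isubmodel, nsubmodel)
        teardown()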

Member Author:

And is this a good time to allow the synonym pc.nrncore_write([path, [bool append]])? Note that in test_submodel.py this would change the statement to

pc.nrncore_write('./coredat', isubmodel != 0)

and would eliminate write_files_dat(...).


    ng = sum(len(g) for g in gidgroups)
    f.write(str(ng) + '\n')  # number of groups

    for gidgroup in gidgroups:
        for x in gidgroup:
            f.write(str(int(x)) + '\n')  # group id

    f.close()
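
# For reference, the resulting files.dat has this shape (the group id values
# below are illustrative; the real ones are the gids accumulated per submodel):
#   1.4      CoreNEURON data version
#   3        total number of groups across all submodels
#   0        one group id per line
#   1
#   2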

if __name__ == "__main__":
    test_submodel(4)