The current module `picongpu` provides (or rather, should provide) a unified interface to read

- HDF5 and
- ADIOS files

with the in-file markup used by PIConGPU/libSplash.
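To make the idea of a unified interface concrete, here is a minimal, hypothetical sketch of a backend dispatch; the function name `open_field` and the extension-based dispatch are illustrative only and not part of the existing module:

```python
# Hypothetical sketch of a unified reader dispatching on the file format.
# `open_field` is an illustrative name, not an existing pyDive API.
import os
import h5py  # HDF5 backend


def open_field(path, dataset):
    """Return the data of `dataset`, independent of the on-disk file format."""
    ext = os.path.splitext(path)[1].lower()
    if ext in (".h5", ".hdf5"):
        with h5py.File(path, "r") as f:
            return f[dataset][()]
    elif ext == ".bp":
        # The ADIOS backend would go here (e.g. via the adios Python bindings);
        # omitted because its exact API is not assumed in this sketch.
        raise NotImplementedError("ADIOS backend not sketched here")
    raise ValueError("unknown file format: %s" % path)
```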
We are currently changing/updating our output's markup to fulfill the openPMD standard (with its ED-PIC extension). The openPMD standard will additionally get a common, general `pip` module for serial read/write access (not PIC specific), contributed by our great colleagues (@RemiLehe, @MKirchen and more? :) ).
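For reference, a serial read of openPMD-marked-up HDF5 output is already possible with plain h5py; the concrete names below (file name, iteration `100`, field component `E/x`) are assumptions for illustration and must be checked against an actual file:

```python
# Minimal serial sketch of reading openPMD-marked-up HDF5 output with h5py.
# File name, iteration number and field name are assumed for illustration.
import h5py

with h5py.File("simData_100.h5", "r") as f:
    # openPMD root attributes describe where meshes (fields) live.
    base = "/data/100/"                      # basePath with %T replaced by the iteration
    meshes_path = f.attrs["meshesPath"]      # e.g. b"fields/" or b"meshes/"
    if isinstance(meshes_path, bytes):
        meshes_path = meshes_path.decode()

    record = f[base + meshes_path + "E/x"]           # one field component
    data_si = record[()] * record.attrs["unitSI"]    # convert to SI units
    print(data_si.shape)
```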
Updates for openPMD
The reason for this issue is that pyDive would be an excellent choice for a parallel, mesh & particle, object-oriented read/write API. I would therefore suggest renaming the module from `picongpu` to `openPMD` (exact spelling) and updating it in a timely manner for the few changes that come with the now-finished standard.
Since programming pyDive scripts is very similar to using the well-known Python multiprocessing pools (but way more helpful when RAM is sparse), it would be great if we could go for simple feature-completeness in that manner for our two most common data formats (parallel read and write support).
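To illustrate the multiprocessing-pool analogy (the analogy only; this does not use pyDive's actual API), a per-chunk map/reduce with the standard library looks like this:

```python
# Pool-style processing for comparison: each worker handles one data chunk
# independently; pyDive scripts follow a similar map/reduce pattern, but over
# distributed (out-of-core) arrays instead of in-memory chunks.
from multiprocessing import Pool


def chunk_sum_of_squares(chunk):
    # Placeholder per-chunk reduction; any element-wise analysis fits here.
    return sum(x * x for x in chunk)


if __name__ == "__main__":
    chunks = [range(0, 1000), range(1000, 2000), range(2000, 3000)]
    with Pool(processes=3) as pool:
        partial = pool.map(chunk_sum_of_squares, chunks)
    print(sum(partial))
```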
PIC-Specific Parts in pyDive
In case we want to be very PIC-specific, we could also add an `openPMD_EDPIC` module that also understands the further markup and attributes from the ED-PIC extension and provides some parallel routines similar to postpic, but this is highly optional and can also be built by users themselves (I actually think it would be out of scope for this project).
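As a rough illustration of what such a module would look at: the ED-PIC extension attaches algorithm metadata as attributes. The species group name `e` and the attribute names below are assumptions from my reading of the extension and would need to be checked against the standard text:

```python
# Sketch: inspecting ED-PIC extension attributes of one particle species with h5py.
# The species group name ("e") and the attribute names are assumed for illustration.
import h5py

with h5py.File("simData_100.h5", "r") as f:
    species = f["/data/100/particles/e"]
    for key in ("particleShape", "currentDeposition",
                "particlePush", "particleInterpolation"):
        if key in species.attrs:
            value = species.attrs[key]
            if isinstance(value, bytes):
                value = value.decode()
            print(key, "=", value)
```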
But since this last part belongs to a different discussion (#11), it might be a good idea to focus on finishing the implementations and to keep pyDive as a minimal platform for distributed data processing (#6) with elementary functionality (#16) and feature-completeness (#3, #4) in the implementation we currently have (#14).