WIP: Add OpenPMD support #1050

Open
wants to merge 94 commits into base: develop

94 commits
49027c7
Add OpenPMD as external lib
pgrete Jan 8, 2024
4525e86
Add OpenPMD skeleto
pgrete Jan 11, 2024
4de250c
WIP more Open PMD
pgrete Jan 11, 2024
b906af7
WIP OpenPMD use file id
pgrete Jan 12, 2024
108fc7a
Merge branch 'develop' into pgrete/pmd-output
pgrete Feb 29, 2024
79660a2
Write blocks
pgrete Feb 29, 2024
372b585
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Mar 8, 2024
8d40c91
Centralize getting var info for output
pgrete Mar 8, 2024
4f20c26
WIP openpmd, chunks don't work yet plus check dimensionality
pgrete Mar 8, 2024
4dde705
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Mar 14, 2024
f29e8d1
Fix chunk extents
pgrete Mar 14, 2024
a501a5d
Write Attributes
pgrete Mar 15, 2024
c0be75c
Rename restart to restart_hdf5
pgrete Mar 15, 2024
7f03528
WIP abstract RestartReader
pgrete Mar 18, 2024
56795f2
WIP separating RestartReader
pgrete Mar 18, 2024
8587303
Make RestartReader abstract
pgrete Mar 19, 2024
8bf955b
Merge branch 'pgrete/refactor-restart' into pgrete/pmd-output
pgrete Mar 20, 2024
e3ea8d7
Add OpenPMD restart skeleton
pgrete Mar 20, 2024
33b6261
WIP updating loc logic
pgrete Mar 20, 2024
788118c
Merge branch 'develop' into pgrete/pmd-output
pgrete Apr 11, 2024
62e54da
Fix interface from recent changes in develop
pgrete Apr 11, 2024
6309663
Read and Write loc
pgrete Apr 11, 2024
7224f42
Houston, we have a build
pgrete Apr 11, 2024
ae1f241
Added OpenPMD restart ReadBlocks
pgrete Apr 12, 2024
19863d7
Fix loc level
pgrete Apr 12, 2024
e2b2bd1
Make Series persistent and fix rootlevel typo
pgrete Apr 15, 2024
f9373e8
WIP Read/Write Params
pgrete Apr 15, 2024
96a3f4c
Make ReadParams private member
pgrete Apr 15, 2024
61306a0
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 18, 2024
56976a8
Fix root level in output
pgrete Apr 18, 2024
6199843
Move to mesh per record standard for writing
pgrete Apr 22, 2024
9fb9f68
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 23, 2024
684b7ab
Allow for 2D and 3D output. Fix single dataset reset.
pgrete Apr 23, 2024
251c6ea
Fix logical loc
pgrete Apr 23, 2024
03f80c7
Rename opmd files
pgrete Apr 24, 2024
890fffe
Separate common calls to chunks and names
pgrete Apr 24, 2024
04359d3
Reuse shared chunk and name for restarts
pgrete Apr 24, 2024
2b89659
Add regression test
pgrete Apr 24, 2024
804e60d
Somewhat make restarts working
pgrete Apr 24, 2024
a436f55
Fix order of arguments for correct flush
pgrete Apr 26, 2024
6a3a80d
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 26, 2024
28d725d
Fix handling of output variable names
pgrete Apr 26, 2024
a039ea1
Fix reading chunks for sparsely populated output files
pgrete Apr 26, 2024
ae8519f
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Apr 29, 2024
7476641
Dont tell anyone I spent days on this...
pgrete May 3, 2024
39d4b99
Temp disable dumping Views from device
pgrete May 3, 2024
d2ba882
Merge branch 'develop' into pgrete/pmd-output
pgrete May 22, 2024
626303d
Merge branch 'pgrete/fix-hasghost-restart' into pgrete/pmd-output
pgrete Jun 12, 2024
4241198
Dump deref cnt in opmd restart
pgrete Jun 12, 2024
101ebf2
Merge branch 'develop' into pgrete/pmd-output
BenWibking Jun 18, 2024
60a38b2
install openpmd in macOS CI
BenWibking Jun 18, 2024
461eeaa
Merge branch 'develop' into pgrete/pmd-output
BenWibking Jun 21, 2024
cd007d3
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Jun 25, 2024
28020db
Remove extraneous popRegion
pgrete Jun 25, 2024
5324e00
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Jul 11, 2024
af4b966
Fix formatting
pgrete Jul 11, 2024
0e015d1
Make format clang16 compatible
pgrete Jul 11, 2024
5135aea
Fix default backend_config parsing
pgrete Jul 12, 2024
b344291
another attempt
pgrete Jul 12, 2024
7486915
Bump OpenPMD version
pgrete Jul 12, 2024
0e3f758
pmd: Write scalar particle data
pgrete Jul 25, 2024
e39ef61
Code dedup
pgrete Jul 25, 2024
b938511
Allow writing non-scalar particles
pgrete Jul 25, 2024
08d6b41
Make positions standard compliant
pgrete Jul 25, 2024
bf74c7e
Allow for particles restarts (serial works)
pgrete Jul 25, 2024
3b37c26
Support particle restarts in parallel
pgrete Jul 26, 2024
0ca5b6f
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Jul 26, 2024
b30788a
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Aug 6, 2024
5f95466
Add now prefix to pmd outputs
pgrete Aug 6, 2024
6811594
Make linter happy
pgrete Aug 7, 2024
c0d7f11
Merge remote-tracking branch 'origin/develop' into pgrete/pmd-output
pgrete Sep 2, 2024
b2d7525
Bump OPMD version and add delim
pgrete Sep 4, 2024
f229274
Merge branch 'develop' into pgrete/pmd-output
pgrete Sep 26, 2024
3283635
Change delim. Is this stupid?
pgrete Sep 26, 2024
d59d573
Merge branch 'develop' into pgrete/pmd-output
pgrete Oct 7, 2024
3f95fc4
Make params IO test more flexible
pgrete Oct 7, 2024
1141ff3
Add test case for opmd params IO
pgrete Oct 7, 2024
e6e4d0b
Reading/writing non-ParArray Params works
pgrete Oct 7, 2024
8230f94
Allow writing ParArray and Views to Params
pgrete Oct 8, 2024
e64ae7e
Read ParArray/View from opmd raw
pgrete Oct 8, 2024
b789711
Make basic parsing of Params work
pgrete Oct 8, 2024
466ecd2
Restore view from opmd params
pgrete Oct 8, 2024
3aa4c78
Fix manual pararray reading
pgrete Oct 8, 2024
4c1c70f
Make linter happy?
pgrete Oct 8, 2024
8a1850b
Make restoring HostViews possible
pgrete Oct 9, 2024
1f02ffa
Make reading host arrays work using the direct interface in the unit …
pgrete Oct 9, 2024
d6565c5
Expands types for read/write to all vec
pgrete Oct 9, 2024
6065213
Remove debug info
pgrete Oct 9, 2024
4bf6a92
Merge branch 'develop' into pgrete/pmd-output
BenWibking Oct 16, 2024
e67f589
Merge branch 'develop' into pgrete/pmd-output
pgrete Nov 9, 2024
503a5b6
Fix bc interface from PR 1177
pgrete Nov 9, 2024
d064f03
Merge branch 'develop' into pgrete/pmd-output
BenWibking Nov 22, 2024
dfbcb27
Merge branch 'develop' into pgrete/pmd-output
pgrete Nov 29, 2024
4769a63
Fix output numbering for triggered opmd outputs
pgrete Nov 29, 2024
6 changes: 5 additions & 1 deletion .github/workflows/ci-macos.yml
@@ -28,10 +28,14 @@ jobs:
cache-dependency-path: '**/requirements.txt'
- run: pip install -r requirements.txt

- name: Install dependencies
- name: Install dependencies (Homebrew)
run: |
brew install openmpi hdf5-mpi adios2 || true

- name: Install OpenPMD
run: |
openPMD_USE_MPI=ON python3 -m pip install openpmd-api --no-binary openpmd-api

- name: Configure
run: cmake -B build -DCMAKE_BUILD_TYPE=Release

21 changes: 21 additions & 0 deletions CMakeLists.txt
@@ -36,6 +36,7 @@ option(PARTHENON_DISABLE_MPI "MPI is enabled by default if found, set this to Tr
option(PARTHENON_ENABLE_HOST_COMM_BUFFERS "CUDA/HIP Only: Allocate communication buffers on host (may be slower)" OFF)
option(PARTHENON_DISABLE_HDF5 "HDF5 is enabled by default if found, set this to True to disable HDF5" OFF)
option(PARTHENON_DISABLE_HDF5_COMPRESSION "HDF5 compression is enabled by default, set this to True to disable compression in HDF5 output/restart files" OFF)
option(PARTHENON_ENABLE_OPENPMD "OpenPMD output is enabled by default, set this to False to disable OpenPMD" ON)
option(PARTHENON_DISABLE_SPARSE "Sparse capability is enabled by default, set this to True to compile-time disable all sparse capability" OFF)
option(PARTHENON_ENABLE_ASCENT "Enable Ascent for in situ visualization and analysis" OFF)
option(PARTHENON_LINT_DEFAULT "Linting is turned off by default, use the \"lint\" target or set \
@@ -200,6 +201,26 @@ if (NOT PARTHENON_DISABLE_HDF5)
install(TARGETS HDF5_C EXPORT parthenonTargets)
endif()

if (PARTHENON_ENABLE_OPENPMD)
#TODO(pgrete) add logic for serial/parallel
#TODO(pgrete) add logic for internal/external build
include(FetchContent)
set(CMAKE_POLICY_DEFAULT_CMP0077 NEW)
set(openPMD_BUILD_CLI_TOOLS OFF)
set(openPMD_BUILD_EXAMPLES OFF)
set(openPMD_BUILD_TESTING OFF)
set(openPMD_BUILD_SHARED_LIBS OFF) # precedence over BUILD_SHARED_LIBS if needed
set(openPMD_INSTALL OFF) # or instead use:
# set(openPMD_INSTALL ${BUILD_SHARED_LIBS}) # only install if used as a shared library
set(openPMD_USE_PYTHON OFF)
FetchContent_Declare(openPMD
GIT_REPOSITORY "https://github.com/openPMD/openPMD-api.git"
# we need a version newer than the latest 0.15.2 release to support writing attributes from a subset of ranks
GIT_TAG "1c7d7ff") # develop as of 2024-09-02
FetchContent_MakeAvailable(openPMD)
install(TARGETS openPMD EXPORT parthenonTargets)
endif()

# Kokkos recommendatation resulting in not using default GNU extensions
set(CMAKE_CXX_EXTENSIONS OFF)

Expand Down
9 changes: 9 additions & 0 deletions src/CMakeLists.txt
@@ -191,6 +191,7 @@ add_library(parthenon
outputs/history.cpp
outputs/io_wrapper.cpp
outputs/io_wrapper.hpp
outputs/output_attr.hpp
outputs/output_utils.cpp
outputs/output_utils.hpp
outputs/outputs.cpp
@@ -203,10 +204,14 @@ add_library(parthenon
outputs/parthenon_hdf5_types.hpp
outputs/parthenon_xdmf.cpp
outputs/parthenon_hdf5.hpp
outputs/parthenon_opmd.cpp
outputs/parthenon_opmd.hpp
outputs/parthenon_xdmf.hpp
outputs/restart.hpp
outputs/restart_hdf5.cpp
outputs/restart_hdf5.hpp
outputs/restart_opmd.cpp
outputs/restart_opmd.hpp
outputs/vtk.cpp

parthenon/driver.hpp
@@ -326,6 +331,10 @@ if (ENABLE_HDF5)
target_link_libraries(parthenon PUBLIC HDF5_C)
endif()

if (PARTHENON_ENABLE_OPENPMD)
target_link_libraries(parthenon PUBLIC openPMD::openPMD)
endif()

# For Cuda with NVCC (<11.2) and C++17 Kokkos currently does not work/compile with
# relaxed-constexpr, see https://github.com/kokkos/kokkos/issues/3496
# However, Parthenon heavily relies on it and there is no harm in compiling Kokkos
2 changes: 2 additions & 0 deletions src/config.hpp.in
@@ -45,6 +45,8 @@
// define ENABLE_HDF5 or not at all
#cmakedefine ENABLE_HDF5

#cmakedefine PARTHENON_ENABLE_OPENPMD

// define PARTHENON_DISABLE_HDF5_COMPRESSION or not at all
#cmakedefine PARTHENON_DISABLE_HDF5_COMPRESSION

6 changes: 6 additions & 0 deletions src/interface/params.hpp
@@ -118,6 +118,12 @@ class Params {
return it->second;
}

const Mutability &GetMutability(const std::string &key) const {
Review comment (Collaborator): 👍

auto const it = myMutable_.find(key);
PARTHENON_REQUIRE_THROWS(it != myMutable_.end(), "Key " + key + " doesn't exist");
return it->second;
}

std::vector<std::string> GetKeys() const {
std::vector<std::string> keys;
for (auto &x : myParams_) {
47 changes: 47 additions & 0 deletions src/outputs/output_attr.hpp
@@ -0,0 +1,47 @@
//========================================================================================
// Parthenon performance portable AMR framework
// Copyright(C) 2023-2024 The Parthenon collaboration
// Licensed under the 3-clause BSD License, see LICENSE file for details
//========================================================================================
// (C) (or copyright) 2020-2024. Triad National Security, LLC. All rights reserved.
//
// This program was produced under U.S. Government contract 89233218CNA000001 for Los
// Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC
// for the U.S. Department of Energy/National Nuclear Security Administration. All rights
// in the program are reserved by Triad National Security, LLC, and the U.S. Department
// of Energy/National Nuclear Security Administration. The Government is granted for
// itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide
// license in this material to reproduce, prepare derivative works, distribute copies to
// the public, perform publicly and display publicly, and to permit others to do so.
//========================================================================================

#ifndef OUTPUTS_OUTPUT_ATTR_HPP_
#define OUTPUTS_OUTPUT_ATTR_HPP_

#include <vector>

// JMM: This could probably be done with template magic but I think
// using a macro is honestly the simplest and cleanest solution here.
// The template solution would be to define a variadic class to contain the
// list of types and then a hierarchy of structs/functions to turn
// that into function calls. The preprocessor seems easier, given we're
// not manipulating this list in any way.
// The following types are the ones we allow to be stored as attributes in outputs
// (specifically within Params).
#define PARTHENON_ATTR_VALID_VEC_TYPES(T) \
T, std::vector<T>, ParArray1D<T>, ParArray2D<T>, ParArray3D<T>, HostArray1D<T>, \
HostArray2D<T>, HostArray3D<T>, Kokkos::View<T *>, Kokkos::View<T **>, \
ParArrayND<T>, ParArrayHost<T>
// JMM: This is the list of template specializations we
// "pre-instantiate". We only pre-instantiate device memory, not host
// memory. The reason is that when building with the Kokkos serial
// backend, DevMemSpace and HostMemSpace are the same and so this
// resolves to the same type in the macro, which causes problems.
#define PARTHENON_ATTR_FOREACH_VECTOR_TYPE(T) \
PARTHENON_ATTR_APPLY(T); \
PARTHENON_ATTR_APPLY(Kokkos::View<T *, LayoutWrapper, DevMemSpace>); \
PARTHENON_ATTR_APPLY(Kokkos::View<T **, LayoutWrapper, DevMemSpace>); \
PARTHENON_ATTR_APPLY(Kokkos::View<T ***, LayoutWrapper, DevMemSpace>); \
PARTHENON_ATTR_APPLY(device_view_t<T>)

#endif // OUTPUTS_OUTPUT_ATTR_HPP_
72 changes: 72 additions & 0 deletions src/outputs/output_utils.cpp
@@ -18,6 +18,7 @@
#include <algorithm>
#include <functional>
#include <map>
#include <memory>
#include <set>
#include <string>
#include <type_traits>
@@ -33,6 +34,7 @@
#include "mesh/meshblock.hpp"
#include "outputs/output_utils.hpp"
#include "parameter_input.hpp"
#include "utils/mpi_types.hpp"

namespace parthenon {
namespace OutputUtils {
@@ -253,6 +255,47 @@ std::vector<int> ComputeDerefinementCount(Mesh *pm) {
});
}

template <typename T>
std::vector<T> FlattendedLocalToGlobal(Mesh *pm, const std::vector<T> &data_local) {
Review comment (Collaborator): I don't understand what this function does. Is it actually doing an MPI all-to-all to build up a global data vector? Is this something we ever want to do?

const int n_blocks_global = pm->nbtotal;
const int n_blocks_local = static_cast<int>(pm->block_list.size());

const int n_elem = data_local.size() / n_blocks_local;
PARTHENON_REQUIRE_THROWS(data_local.size() % n_blocks_local == 0,
"Results from flattened input vector does not evenly divide "
"into number of local blocks.");
std::vector<T> data_global(n_elem * n_blocks_global);

std::vector<int> counts(Globals::nranks);
std::vector<int> offsets(Globals::nranks);

const auto &nblist = pm->GetNbList();
counts[0] = n_elem * nblist[0];
offsets[0] = 0;
for (int r = 1; r < Globals::nranks; r++) {
counts[r] = n_elem * nblist[r];
offsets[r] = offsets[r - 1] + counts[r - 1];
}

#ifdef MPI_PARALLEL
PARTHENON_MPI_CHECK(MPI_Allgatherv(data_local.data(), counts[Globals::my_rank],
MPITypeMap<T>::type(), data_global.data(),
counts.data(), offsets.data(), MPITypeMap<T>::type(),
MPI_COMM_WORLD));
#else
return data_local;
#endif
return data_global;
}

// explicit template instantiation
template std::vector<std::size_t>
FlattendedLocalToGlobal(Mesh *pm, const std::vector<std::size_t> &data_local);
template std::vector<int64_t>
FlattendedLocalToGlobal(Mesh *pm, const std::vector<int64_t> &data_local);
template std::vector<int> FlattendedLocalToGlobal(Mesh *pm,
const std::vector<int> &data_local);

// TODO(JMM): I could make this use the other loop
// functionality/high-order functions. but it was more code than this
// for, I think, little benefit.
@@ -329,6 +372,35 @@ std::size_t MPISum(std::size_t val) {
return val;
}

VariableVector<Real> GetVarsToWrite(const std::shared_ptr<MeshBlock> pmb,
const bool restart,
const std::vector<std::string> &variables) {
const auto &var_vec = pmb->meshblock_data.Get()->GetVariableVector();
auto vars_to_write = GetAnyVariables(var_vec, variables);
if (restart) {
// get all vars with flag Independent OR restart
auto restart_vars = GetAnyVariables(
var_vec, {parthenon::Metadata::Independent, parthenon::Metadata::Restart});
for (auto restart_var : restart_vars) {
vars_to_write.emplace_back(restart_var);
}
}
return vars_to_write;
}

std::vector<VarInfo> GetAllVarsInfo(const VariableVector<Real> &vars,
const IndexShape &cellbounds) {
std::vector<VarInfo> all_vars_info;
for (auto &v : vars) {
all_vars_info.emplace_back(v, cellbounds);
}

// sort alphabetically
std::sort(all_vars_info.begin(), all_vars_info.end(),
[](const VarInfo &a, const VarInfo &b) { return a.label < b.label; });
return all_vars_info;
}

void CheckParameterInputConsistent(ParameterInput *pin) {
#ifdef MPI_PARALLEL
CheckMPISizeT();
23 changes: 20 additions & 3 deletions src/outputs/output_utils.hpp
@@ -213,8 +213,8 @@ struct SwarmInfo {
std::size_t count_on_rank = 0; // per-meshblock
std::size_t global_offset; // global
std::size_t global_count; // global
std::vector<std::size_t> counts; // per-meshblock
std::vector<std::size_t> offsets; // global
std::vector<std::size_t> counts; // on local meshblocks
std::vector<std::size_t> offsets; // global offset for local meshblocks
// std::vector<ParArray1D<bool>> masks; // used for reading swarms without defrag
std::vector<std::size_t> max_indices; // JMM: If we defrag, unneeded?
void AddOffsets(const SP_Swarm &swarm); // sets above metadata
@@ -236,7 +236,7 @@ struct SwarmInfo {
// Copies swarmvar to host in prep for output
template <typename T>
std::vector<T> FillHostBuffer(const std::string vname,
ParticleVariableVector<T> &swmvarvec) {
const ParticleVariableVector<T> &swmvarvec) const {
const auto &vinfo = var_info.at(vname);
std::vector<T> host_data(count_on_rank * vinfo.nvar);
std::size_t ivec = 0;
@@ -245,6 +245,7 @@
for (int n4 = 0; n4 < vinfo.GetN(4); ++n4) {
for (int n3 = 0; n3 < vinfo.GetN(3); ++n3) {
for (int n2 = 0; n2 < vinfo.GetN(2); ++n2) {
// TODO(pgrete) understand what's going on with the blocks here...
std::size_t block_idx = 0;
for (auto &swmvar : swmvarvec) {
// Copied extra times. JMM: If we defrag, unneeded?
@@ -344,13 +345,29 @@ std::vector<int64_t> ComputeLocs(Mesh *pm);
std::vector<int> ComputeIDsAndFlags(Mesh *pm);
std::vector<int> ComputeDerefinementCount(Mesh *pm);

// Takes a vector containing the flattened data of all rank-local blocks and returns
// the flattened data over all blocks globally.
template <typename T>
std::vector<T> FlattendedLocalToGlobal(Mesh *pm, const std::vector<T> &data_local);

// TODO(JMM): Potentially unsafe if MPI_UNSIGNED_LONG_LONG isn't a size_t
// however I think it's probably safe to assume we'll be on systems
// where this is the case?
// TODO(JMM): If we ever need non-int need to generalize
std::size_t MPIPrefixSum(std::size_t local, std::size_t &tot_count);
std::size_t MPISum(std::size_t local);

// Return all variables to write, i.e., for restarts all independent variables and ones
// with an explicit Restart flag, plus any variables explicitly requested for output in
// the input file.
VariableVector<Real> GetVarsToWrite(const std::shared_ptr<MeshBlock> pmb,
const bool restart,
const std::vector<std::string> &variables);

// Returns a sorted vector of VarInfo associated with vars
std::vector<VarInfo> GetAllVarsInfo(const VariableVector<Real> &vars,
const IndexShape &cellbounds);
Comment on lines +360 to +369
Review comment (Collaborator): Can these two functions be unified with the HDF5 machinery? I actually thought I already wrote GetAllVarsInfo...
Reply (Collaborator Author): Yes, I think we wrote them in parallel.


void CheckParameterInputConsistent(ParameterInput *pin);
} // namespace OutputUtils
} // namespace parthenon
23 changes: 21 additions & 2 deletions src/outputs/outputs.cpp
@@ -214,8 +214,13 @@ Outputs::Outputs(Mesh *pm, ParameterInput *pin, SimTime *tm) {
// set output variable and optional data format string used in formatted writes
if ((op.file_type != "hst") && (op.file_type != "rst") &&
(op.file_type != "ascent") && (op.file_type != "histogram")) {
op.variables = pin->GetOrAddVector<std::string>(pib->block_name, "variables",
std::vector<std::string>());
// Check here whether the parameter exists rather than unconditionally adding an empty
// parameter to the input file (which might interfere with restarts).
if (pin->DoesParameterExist(pib->block_name, "variables")) {
op.variables = pin->GetVector<std::string>(pib->block_name, "variables");
} else {
op.variables = std::vector<std::string>();
}
// JMM: If the requested var isn't present for a given swarm,
// it is simply not output.
op.swarms.clear(); // Not sure this is needed
@@ -263,6 +268,20 @@ Outputs::Outputs(Mesh *pm, ParameterInput *pin, SimTime *tm) {
pnew_type = new VTKOutput(op);
} else if (op.file_type == "ascent") {
pnew_type = new AscentOutput(op);
} else if (op.file_type == "openpmd") {
#ifdef PARTHENON_ENABLE_OPENPMD
const auto backend_config =
pin->GetOrAddString(op.block_name, "backend_config", "default");

pnew_type = new OpenPMDOutput(op, backend_config);
#else
msg << "### FATAL ERROR in Outputs constructor" << std::endl
<< "Executable not configured for OpenPMD outputs, but OpenPMD file format "
<< "is requested in output/restart block '" << op.block_name << "'. "
<< "You can disable this block without deleting it by setting a dt < 0."
<< std::endl;
PARTHENON_FAIL(msg);
#endif // ifdef PARTHENON_ENABLE_OPENPMD
} else if (op.file_type == "histogram") {
#ifdef ENABLE_HDF5
pnew_type = new HistogramOutput(op, pin);
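With this dispatch in place, an OpenPMD output can be requested from the input file via `file_type = openpmd`. A hypothetical input-block sketch (the block name, `dt` value, and variable names are illustrative; `file_type`, `variables`, and `backend_config` are the parameters read in the code above, with `backend_config` defaulting to "default"):

```
<parthenon/output2>
file_type = openpmd
dt = 0.05                        # set dt < 0 to disable the block without deleting it
variables = density, velocity    # illustrative variable names
backend_config = default         # path to a backend config file, or "default"
```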
17 changes: 17 additions & 0 deletions src/outputs/outputs.hpp
@@ -23,6 +23,7 @@
#include <memory>
#include <set>
#include <string>
#include <utility>
#include <vector>

#include "Kokkos_ScatterView.hpp"
@@ -171,6 +172,22 @@ class AscentOutput : public OutputType {
ParArray1D<Real> ghost_mask_;
};

//----------------------------------------------------------------------------------------
//! \class OpenPMDOutput
// \brief derived OutputType class for OpenPMD-based output

class OpenPMDOutput : public OutputType {
public:
explicit OpenPMDOutput(const OutputParameters &oparams, std::string backend_config)
: OutputType(oparams), backend_config_(std::move(backend_config)) {}
void WriteOutputFile(Mesh *pm, ParameterInput *pin, SimTime *tm,
const SignalHandler::OutputSignal signal) override;

private:
// path to file containing config passed to backend
std::string backend_config_;
};

#ifdef ENABLE_HDF5
//----------------------------------------------------------------------------------------
//! \class PHDF5Output