WIP: Add OpenPMD support #1050
base: develop
Changes from all commits
@@ -0,0 +1,47 @@
//========================================================================================
// Parthenon performance portable AMR framework
// Copyright(C) 2023-2024 The Parthenon collaboration
// Licensed under the 3-clause BSD License, see LICENSE file for details
//========================================================================================
// (C) (or copyright) 2020-2024. Triad National Security, LLC. All rights reserved.
//
// This program was produced under U.S. Government contract 89233218CNA000001 for Los
// Alamos National Laboratory (LANL), which is operated by Triad National Security, LLC
// for the U.S. Department of Energy/National Nuclear Security Administration. All rights
// in the program are reserved by Triad National Security, LLC, and the U.S. Department
// of Energy/National Nuclear Security Administration. The Government is granted for
// itself and others acting on its behalf a nonexclusive, paid-up, irrevocable worldwide
// license in this material to reproduce, prepare derivative works, distribute copies to
// the public, perform publicly and display publicly, and to permit others to do so.
//========================================================================================

#ifndef OUTPUTS_OUTPUT_ATTR_HPP_
#define OUTPUTS_OUTPUT_ATTR_HPP_

#include <vector>

// JMM: This could probably be done with template magic, but I think
// using a macro is honestly the simplest and cleanest solution here.
// The template solution would be to define a variadic class to contain the
// list of types and then a hierarchy of structs/functions to turn
// that into function calls. The preprocessor seems easier, given we're
// not manipulating this list in any way.
// The following types are the ones we allow to be stored as attributes in outputs
// (specifically within Params).
#define PARTHENON_ATTR_VALID_VEC_TYPES(T)                                                \
  T, std::vector<T>, ParArray1D<T>, ParArray2D<T>, ParArray3D<T>, HostArray1D<T>,       \
      HostArray2D<T>, HostArray3D<T>, Kokkos::View<T *>, Kokkos::View<T **>,            \
      ParArrayND<T>, ParArrayHost<T>
// JMM: This is the list of template specializations we
// "pre-instantiate". We only pre-instantiate device memory, not host
// memory. The reason is that when building with the Kokkos serial
// backend, DevMemSpace and HostMemSpace are the same, so both
// resolve to the same type in the macro, which causes problems.
#define PARTHENON_ATTR_FOREACH_VECTOR_TYPE(T)                                            \
  PARTHENON_ATTR_APPLY(T);                                                               \
  PARTHENON_ATTR_APPLY(Kokkos::View<T *, LayoutWrapper, DevMemSpace>);                   \
  PARTHENON_ATTR_APPLY(Kokkos::View<T **, LayoutWrapper, DevMemSpace>);                  \
  PARTHENON_ATTR_APPLY(Kokkos::View<T ***, LayoutWrapper, DevMemSpace>);                 \
  PARTHENON_ATTR_APPLY(device_view_t<T>)

#endif  // OUTPUTS_OUTPUT_ATTR_HPP_
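Note on the pattern above: PARTHENON_ATTR_APPLY is intentionally left undefined in this header, so presumably each consuming translation unit defines it (e.g. as an explicit instantiation request) and then expands PARTHENON_ATTR_FOREACH_VECTOR_TYPE once per element type. Below is a minimal, self-contained sketch of that X-macro idiom; WriteAttr, ATTR_APPLY, and FOREACH_ATTR_TYPE are hypothetical names, not Parthenon APIs.

// Hypothetical, standalone illustration of the X-macro idiom used above.
#include <cstdint>
#include <vector>

template <typename T>
void WriteAttr(const T &) { /* write one attribute value */ }

// Stand-in for PARTHENON_ATTR_FOREACH_VECTOR_TYPE: list every supported type once.
#define FOREACH_ATTR_TYPE(T)                                                             \
  ATTR_APPLY(T);                                                                         \
  ATTR_APPLY(std::vector<T>)

// The consumer supplies ATTR_APPLY; here it requests an explicit instantiation.
#define ATTR_APPLY(...) template void WriteAttr<__VA_ARGS__>(const __VA_ARGS__ &)

FOREACH_ATTR_TYPE(int);
FOREACH_ATTR_TYPE(double);
FOREACH_ATTR_TYPE(std::int64_t);

#undef ATTR_APPLY
#undef FOREACH_ATTR_TYPE

int main() { WriteAttr(42); }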
@@ -18,6 +18,7 @@
#include <algorithm>
#include <functional>
#include <map>
#include <memory>
#include <set>
#include <string>
#include <type_traits>

@@ -33,6 +34,7 @@
#include "mesh/meshblock.hpp"
#include "outputs/output_utils.hpp"
#include "parameter_input.hpp"
#include "utils/mpi_types.hpp"

namespace parthenon {
namespace OutputUtils {

@@ -253,6 +255,47 @@ std::vector<int> ComputeDerefinementCount(Mesh *pm) {
  });
}

template <typename T>
std::vector<T> FlattendedLocalToGlobal(Mesh *pm, const std::vector<T> &data_local) {

Review comment: I don't understand what this function does. Is it actually doing an MPI all-to-all to build up a global data vector? Is this something we ever want to do?

  const int n_blocks_global = pm->nbtotal;
  const int n_blocks_local = static_cast<int>(pm->block_list.size());

  const int n_elem = data_local.size() / n_blocks_local;
  PARTHENON_REQUIRE_THROWS(data_local.size() % n_blocks_local == 0,
                           "Flattened input vector does not evenly divide into the "
                           "number of local blocks.");
  std::vector<T> data_global(n_elem * n_blocks_global);

  std::vector<int> counts(Globals::nranks);
  std::vector<int> offsets(Globals::nranks);

  const auto &nblist = pm->GetNbList();
  counts[0] = n_elem * nblist[0];
  offsets[0] = 0;
  for (int r = 1; r < Globals::nranks; r++) {
    counts[r] = n_elem * nblist[r];
    offsets[r] = offsets[r - 1] + counts[r - 1];
  }

#ifdef MPI_PARALLEL
  PARTHENON_MPI_CHECK(MPI_Allgatherv(data_local.data(), counts[Globals::my_rank],
                                     MPITypeMap<T>::type(), data_global.data(),
                                     counts.data(), offsets.data(), MPITypeMap<T>::type(),
                                     MPI_COMM_WORLD));
#else
  return data_local;
#endif
  return data_global;
}

// explicit template instantiation
template std::vector<std::size_t>
FlattendedLocalToGlobal(Mesh *pm, const std::vector<std::size_t> &data_local);
template std::vector<int64_t>
FlattendedLocalToGlobal(Mesh *pm, const std::vector<int64_t> &data_local);
template std::vector<int> FlattendedLocalToGlobal(Mesh *pm,
                                                  const std::vector<int> &data_local);
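To make the gather above concrete, here is a hypothetical, standalone MPI program (not part of this PR) that mimics what FlattendedLocalToGlobal does with plain int data: each rank contributes a different number of per-block values, and after MPI_Allgatherv every rank holds the concatenation over all blocks, ordered by rank. The nblist values are made up purely for illustration.

// Hypothetical, standalone illustration of the MPI_Allgatherv pattern above.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, nranks;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  // Pretend rank r owns (r + 1) blocks with one value per block.
  std::vector<int> nblist(nranks);
  for (int r = 0; r < nranks; r++) nblist[r] = r + 1;

  std::vector<int> local(nblist[rank], rank);  // this rank's flattened block data
  std::vector<int> counts(nranks), offsets(nranks, 0);
  for (int r = 0; r < nranks; r++) {
    counts[r] = nblist[r];
    if (r > 0) offsets[r] = offsets[r - 1] + counts[r - 1];
  }
  std::vector<int> global(offsets[nranks - 1] + counts[nranks - 1]);

  MPI_Allgatherv(local.data(), counts[rank], MPI_INT, global.data(), counts.data(),
                 offsets.data(), MPI_INT, MPI_COMM_WORLD);

  if (rank == 0) {
    for (int v : global) std::printf("%d ", v);  // e.g. "0 1 1 2 2 2" on 3 ranks
    std::printf("\n");
  }
  MPI_Finalize();
  return 0;
}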

// TODO(JMM): I could make this use the other loop
// functionality/high-order functions, but it was more code than this
// for, I think, little benefit.

@@ -329,6 +372,35 @@ std::size_t MPISum(std::size_t val) {
  return val;
}

VariableVector<Real> GetVarsToWrite(const std::shared_ptr<MeshBlock> pmb,
                                    const bool restart,
                                    const std::vector<std::string> &variables) {
  const auto &var_vec = pmb->meshblock_data.Get()->GetVariableVector();
  auto vars_to_write = GetAnyVariables(var_vec, variables);
  if (restart) {
    // get all vars with flag Independent OR restart
    auto restart_vars = GetAnyVariables(
        var_vec, {parthenon::Metadata::Independent, parthenon::Metadata::Restart});
    for (auto restart_var : restart_vars) {
      vars_to_write.emplace_back(restart_var);
    }
  }
  return vars_to_write;
}

std::vector<VarInfo> GetAllVarsInfo(const VariableVector<Real> &vars,
                                    const IndexShape &cellbounds) {
  std::vector<VarInfo> all_vars_info;
  for (auto &v : vars) {
    all_vars_info.emplace_back(v, cellbounds);
  }

  // sort alphabetically
  std::sort(all_vars_info.begin(), all_vars_info.end(),
            [](const VarInfo &a, const VarInfo &b) { return a.label < b.label; });
  return all_vars_info;
}
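A hypothetical sketch (not in this PR) of how an output backend might chain the two helpers above; the WriteAllVars wrapper and the requested argument are illustrative names only, and pmb->cellbounds is assumed to be the block's IndexShape.

// Illustrative only: gather the variables to dump, build sorted metadata, and
// hand each entry to whatever backend does the actual writing.
void WriteAllVars(const std::shared_ptr<MeshBlock> &pmb, const bool restart,
                  const std::vector<std::string> &requested) {
  auto vars = GetVarsToWrite(pmb, restart, requested);
  auto infos = GetAllVarsInfo(vars, pmb->cellbounds);
  for (const auto &vinfo : infos) {
    // vinfo.label is the sorted variable name; data access depends on the backend
  }
}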

void CheckParameterInputConsistent(ParameterInput *pin) {
#ifdef MPI_PARALLEL
  CheckMPISizeT();

@@ -213,8 +213,8 @@ struct SwarmInfo {
  std::size_t count_on_rank = 0; // per-meshblock
  std::size_t global_offset;     // global
  std::size_t global_count;      // global
  std::vector<std::size_t> counts;  // per-meshblock
  std::vector<std::size_t> offsets; // global
  std::vector<std::size_t> counts;  // on local meshblocks
  std::vector<std::size_t> offsets; // global offset for local meshblocks
  // std::vector<ParArray1D<bool>> masks; // used for reading swarms without defrag
  std::vector<std::size_t> max_indices; // JMM: If we defrag, unneeded?
  void AddOffsets(const SP_Swarm &swarm); // sets above metadata

@@ -236,7 +236,7 @@ struct SwarmInfo {
  // Copies swarmvar to host in prep for output
  template <typename T>
  std::vector<T> FillHostBuffer(const std::string vname,
                                ParticleVariableVector<T> &swmvarvec) {
                                const ParticleVariableVector<T> &swmvarvec) const {
    const auto &vinfo = var_info.at(vname);
    std::vector<T> host_data(count_on_rank * vinfo.nvar);
    std::size_t ivec = 0;

@@ -245,6 +245,7 @@ struct SwarmInfo {
    for (int n4 = 0; n4 < vinfo.GetN(4); ++n4) {
      for (int n3 = 0; n3 < vinfo.GetN(3); ++n3) {
        for (int n2 = 0; n2 < vinfo.GetN(2); ++n2) {
          // TODO(pgrete) understand what's going on with the blocks here...
          std::size_t block_idx = 0;
          for (auto &swmvar : swmvarvec) {
            // Copied extra times. JMM: If we defrag, unneeded?

@@ -344,13 +345,29 @@ std::vector<int64_t> ComputeLocs(Mesh *pm);
std::vector<int> ComputeIDsAndFlags(Mesh *pm);
std::vector<int> ComputeDerefinementCount(Mesh *pm);

// Takes a vector containing the flattened data of all rank-local blocks and returns the
// flattened data over all blocks.
template <typename T>
std::vector<T> FlattendedLocalToGlobal(Mesh *pm, const std::vector<T> &data_local);

// TODO(JMM): Potentially unsafe if MPI_UNSIGNED_LONG_LONG isn't a size_t;
// however, I think it's probably safe to assume we'll be on systems
// where this is the case.
// TODO(JMM): If we ever need non-int, we need to generalize.
std::size_t MPIPrefixSum(std::size_t local, std::size_t &tot_count);
std::size_t MPISum(std::size_t local);

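One way to turn the size_t concern in the TODO above into a hard failure at build time is a static_assert; this is a hypothetical guard, not something this PR adds.

// Hypothetical compile-time check that MPI_UNSIGNED_LONG_LONG (i.e. C's
// unsigned long long) is a faithful stand-in for std::size_t on this platform.
#include <cstddef>
#include <type_traits>

static_assert(sizeof(std::size_t) == sizeof(unsigned long long) &&
                  std::is_unsigned<std::size_t>::value,
              "MPI_UNSIGNED_LONG_LONG does not match std::size_t on this platform");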

// Return all variables to write, i.e., for restarts all independent variables and ones
// with an explicit Restart flag, but also variables explicitly requested for output in
// the input file.
VariableVector<Real> GetVarsToWrite(const std::shared_ptr<MeshBlock> pmb,
                                    const bool restart,
                                    const std::vector<std::string> &variables);

// Returns a sorted vector of VarInfo associated with vars
std::vector<VarInfo> GetAllVarsInfo(const VariableVector<Real> &vars,
                                    const IndexShape &cellbounds);

Review comment (on lines +360 to +369): Can these two functions be unified with the HDF5 machinery? I actually thought I already wrote …
Reply: Yes, I think we wrote them in parallel.

void CheckParameterInputConsistent(ParameterInput *pin);
} // namespace OutputUtils
} // namespace parthenon
Review comment: 👍