Hermes Wiki update
JaimeCernuda committed Oct 29, 2023
1 parent 6020b4c commit b9a7c5b
Showing 21 changed files with 17 additions and 17 deletions.
14 changes: 7 additions & 7 deletions docs/03-Hermes/Home.md → docs/03-Hermes/01-Hermes.md
@@ -3,7 +3,7 @@
[[/images/Hermes_hierachy.jpg|Deep Distributed Storage Hierarchy (DDSH)]]

Consider an HPC cluster equipped with a deep, distributed [storage
hierarchy](Storage-Hierarchy) (DDSH), the bottom layer of
hierarchy](06-Hermes-components/10-Storage-Hierarchy.md) (DDSH), the bottom layer of
which is typically a parallel file system (PFS). DDSH was introduced to
boost, or at least improve, the I/O (POSIX, MPI-IO, HDF5, ...)
performance of applications that would otherwise perform poorly. Unfortunately,
@@ -44,24 +44,24 @@ with the following characteristics:
where in DDSH a given data item is <b>best/well/optimally-</b>placed at
that point in time.
- To that end, the system consists of the following major components:
- [Strategies and algorithms](./Data-Placement-Strategies) that
- [Strategies and algorithms](06-Hermes-components/04-Data-Placement-Strategies.md) that
implement policies and facilitate
data placement decisions. Speculative data
placement for read operations is also known as
[prefetching](./Prefetcher).
[prefetching](06-Hermes-components/09-Prefetcher.md).
- These strategies work with (dynamic) sets of [buffering
target](Buffering-Target)s and are applicable more
target](06-Hermes-components/03-Buffering-Target.md)s and are applicable more
broadly.
- The physical buffering resources are managed in a distributed
[buffer pool](Buffer-Pool) (see also [Batching
[buffer pool](06-Hermes-components/02-Buffer-Pool.md) (see also [Batching
System](Batching-System)).
- [Buffer Organizer](./Buffer-Organizer)
- [Buffer Organizer](06-Hermes-components/01-Buffer-Organizer.md)
- [Profiler](./Profiler)
- To separate concerns and for portability, system buffers are
**not** directly exposed to applications. There is a set of
intermediate [primitives](Primitives) targeted by
[adapters](./Adapters) for different I/O libraries. A
generic [metadata manager](./Metadata-Manager) (MDM),
generic [metadata manager](06-Hermes-components/08-Metadata-Manager.md) (MDM),
supports the bookkeeping needs of the various components.
- The whole system is deployed in a server-less fashion.
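
To make the component layering above concrete, here is a minimal sketch of how an application might hand data to Hermes through the native Put/Get path referenced in the performance results below. It is illustrative only: the header path and the `InitHermes`, `GetBucket`, `Put`, `Get`, and `Finalize` names are assumptions for exposition, not the library's confirmed public API.

```cpp
// Illustrative sketch only; every Hermes identifier below is an assumed
// placeholder name, not the verified public API.
#include <hermes/hermes.h>  // assumed header

#include <string>
#include <vector>

int main() {
  // Attach to the Hermes runtime described by HERMES_CONF_PATH.
  auto hermes = hermes::InitHermes();              // assumed entry point

  // A bucket groups related blobs; which DDSH tier a blob lands in is
  // decided by the configured data placement strategy, not by the caller.
  auto bucket = hermes->GetBucket("checkpoints");  // assumed call

  std::vector<char> payload(1 << 20, 'x');         // 1 MiB transfer
  bucket.Put("step_0001", payload);                // buffered into some tier

  std::vector<char> readback;
  bucket.Get("step_0001", readback);               // may be served by the prefetcher

  hermes->Finalize();                              // assumed shutdown call
  return 0;
}
```

Because the buffers themselves are never exposed, the same application code runs unchanged whether a blob ends up in RAM, NVMe, or the PFS.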

@@ -261,7 +261,7 @@ daemon, `LD_PRELOAD` a Hermes adapter and set some environment variables.
We spawn a daemon on each node, then run our app with the appropriate
environment variables, similar to the process described
[above](./1.-Getting-Started#hermes-services-running-in-separate-process-as-a-daemon).
[above](01-Getting-Started.md#hermes-services-running-in-separate-process-as-a-daemon).

```bash
HERMES_CONF_PATH=/absolute/path/to/hermes.yaml
# ...
```
@@ -30,7 +30,7 @@ First we perform a strong scaling study.
2. Processes vary between 1 and 48
3. Transfer size of 1MB

| ![run workloads](images/performance/ssd-scale.svg) |
| ![run workloads](../images/performance/ssd-scale.svg) |
|:--:|
|SSD strong scaling|

@@ -45,7 +45,7 @@ we want to measure the impact of garbage collection and OS caching.
2. Processes fixed at 4
3. Transfer size of 1MB

| ![run workloads](images/performance/ssd-dset.svg) |
| ![run workloads](../images/performance/ssd-dset.svg) |
|:--:|
|SSD dataset size scaling|

@@ -62,7 +62,7 @@ First we perform a strong scaling study.
2. Processes vary between 1 and 48
3. Transfer size of 1MB

| ![run workloads](images/performance/nvme-scale.svg) |
| ![run workloads](../images/performance/nvme-scale.svg) |
|:--:|
|NVMe strong scaling|

@@ -77,7 +77,7 @@ we want to measure the impact of garbage collection and OS caching.
2. Processes fixed at 4
3. Transfer size of 1MB

| ![run workloads](images/performance/nvme-dset.svg) |
| ![run workloads](../images/performance/nvme-dset.svg) |
|:--:|
|NVMe dataset size scaling|

@@ -99,7 +99,7 @@ processes to be between 1 and 48. Each case performs a total of 100GB of
I/O with transfer sizes of 1MB using the Hermes native Put/Get API.
This evaluation was conducted only over a single node.

| ![multi-core scaling (NVMe)](images/performance/multicore-nvme-scale.svg) |
| ![multi-core scaling (NVMe)](../images/performance/multicore-nvme-scale.svg) |
|:--:|
|Multi-core scaling (NVMe)|
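
As a rough illustration of how a workload of this shape is structured (this is not the benchmark code behind the plot), the sketch below splits the fixed 100GB of I/O evenly across the participating processes and issues 1MB operations; the `Put` stub marks where the native Hermes call would go.

```cpp
// Workload-shape sketch only; not the benchmark used for these measurements.
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

// Stand-in for the native Hermes Put; a real run would hand the blob to Hermes.
static void Put(const std::string &key, const std::vector<char> &blob) {
  (void)key;
  (void)blob;
}

int main(int argc, char **argv) {
  const std::size_t kTotalBytes = 100ull << 30;  // ~100GB shared by all processes
  const std::size_t kXferBytes = 1ull << 20;     // 1MB transfer size
  int nprocs = (argc > 1) ? std::atoi(argv[1]) : 1;  // 1..48 in the study
  if (nprocs < 1) nprocs = 1;
  const int rank = (argc > 2) ? std::atoi(argv[2]) : 0;

  // Each process issues an equal share of the fixed total I/O volume.
  const std::size_t ops_per_proc = kTotalBytes / kXferBytes / nprocs;

  std::vector<char> blob(kXferBytes, 'x');
  for (std::size_t i = 0; i < ops_per_proc; ++i) {
    Put("rank" + std::to_string(rank) + "_blob" + std::to_string(i), blob);
  }
  std::printf("rank %d issued %zu Puts of %zu bytes\n", rank, ops_per_proc,
              kXferBytes);
  return 0;
}
```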

@@ -128,7 +128,7 @@ updated based on its current value.
In addition, each workload contains a "Load" phase, which performs insert-only
workloads. Unlike an update, an insert replaces the entire record.

| ![load workloads](images/performance/ycsb-load.svg) |
| ![load workloads](../images/performance/ycsb-load.svg) |
|:--:|
|Performance of KVS for the LOAD phase of YCSB|

@@ -139,7 +139,7 @@ KVS adapter, since insert operations replace data. In our KVS, a record
maps directly to a single Put operation in the Hermes KVS. This demonstrates
that Hermes can perform comparably to well-established in-memory KVS.
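
The record-to-Put mapping described above can be pictured with the following sketch. It is not the actual Hermes KVS adapter code; `FakeBucket` is an in-memory stand-in for a Hermes bucket handle, and the only point is that one inserted record becomes exactly one Put.

```cpp
// Mapping sketch only; FakeBucket is a stand-in, not the real KVS adapter.
#include <map>
#include <string>
#include <vector>

struct FakeBucket {
  // In the real adapter this call would be a single Hermes Put of the blob.
  void Put(const std::string &key, const std::vector<char> &blob) {
    store[key] = blob;
  }
  std::map<std::string, std::vector<char>> store;
};

// A YCSB-style record: field name -> field value.
using Record = std::map<std::string, std::string>;

// An insert replaces the whole record, so the adapter serializes every field
// into one blob and issues exactly one Put keyed by the record id.
void InsertRecord(FakeBucket &bucket, const std::string &record_id,
                  const Record &record) {
  std::vector<char> blob;
  for (const auto &[field, value] : record) {
    blob.insert(blob.end(), field.begin(), field.end());
    blob.push_back('=');
    blob.insert(blob.end(), value.begin(), value.end());
    blob.push_back('\n');
  }
  bucket.Put(record_id, blob);  // one record == one Put
}

int main() {
  FakeBucket bucket;
  InsertRecord(bucket, "user:42", {{"name", "ada"}, {"city", "chicago"}});
  return 0;
}
```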

| ![run workloads](images/performance/ycsb-run.svg) |
| ![run workloads](../images/performance/ycsb-run.svg) |
|:--:|
|Performance of KVS for the RUN phase of YCSB|

@@ -158,7 +158,7 @@ In this evaluation, we run a multi-tiered experiment using Hermes native
API. The workload sequentially PUTs 10GB per node. We ran this experiment
with 16 nodes and 16 processes per node. The overall dataset size is 160GB.

| ![run workloads](images/performance/tiering.svg) |
| ![run workloads](../images/performance/tiering.svg) |
|:--:|
|Performance of Hermes for varying Tiers|

@@ -183,7 +183,7 @@ synthetic workload. Each rank produces a total of 1GB of data; there are 16
ranks per node, and a total of 4 nodes. The total dataset size produced is
160GB. We use a hierarchical setup with RAM, NVMe, and SATA SSD.

| ![run workloads](images/performance/dpe.svg) |
| ![run workloads](../images/performance/dpe.svg) |
|:--:|
|Performance of Hermes for varying DPEs|
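
For intuition about what a data placement engine (DPE) actually decides, here is a deliberately simplified, capacity-aware placement sketch. It is not one of the Hermes DPEs compared above, and the capacities are made up; a real engine would also weigh device bandwidth, latency, and contention, which is precisely where the evaluated DPEs differ.

```cpp
// Simplified placement sketch; not an actual Hermes DPE.
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct Tier {
  std::string name;
  std::size_t free_bytes;  // remaining capacity in this tier
};

// Greedy policy: place the blob in the fastest tier (lowest index) that still
// has room, spilling downward otherwise; returns the chosen tier index or -1.
int PlaceBlob(std::vector<Tier> &tiers, std::size_t blob_bytes) {
  for (std::size_t i = 0; i < tiers.size(); ++i) {
    if (tiers[i].free_bytes >= blob_bytes) {
      tiers[i].free_bytes -= blob_bytes;
      return static_cast<int>(i);
    }
  }
  return -1;  // no capacity left anywhere in the hierarchy
}

int main() {
  // Illustrative capacities for the RAM / NVMe / SATA SSD hierarchy above.
  std::vector<Tier> tiers = {{"RAM", 4ull << 30},
                             {"NVMe", 64ull << 30},
                             {"SATA SSD", 256ull << 30}};
  const int tier = PlaceBlob(tiers, 1ull << 20);  // one 1MB blob
  std::printf("placed in tier %d (%s)\n", tier,
              tier >= 0 ? tiers[tier].name.c_str() : "none");
  return 0;
}
```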

18 files renamed without changes.
