Add file system benchmarks to the benchmark doc.
ignatz committed Feb 1, 2025
1 parent b120321 commit 0f4899b
Showing 2 changed files with 216 additions and 3 deletions.
164 changes: 164 additions & 0 deletions docs/src/content/docs/reference/_benchmarks/benchmarks.tsx
@@ -634,3 +634,167 @@ export function FibonacciPocketBaseAndTrailBaseUsageChart() {
/>
);
}

const fsColors = {
ext4: "#008b6dff",
xfs: "#29c299ff",
bcachefs: "#47a1cdff",
btrfsNoComp: "#ba36c8ff",
btrfsZstd1: "#c865d5ff",
btrfsLzo: "#db9be3ff",
zfs: "#e6bb1eff",
};

export function TrailBaseFileSystemReadLatency() {
// i100k i10k ip50 ip75 ip90 ip95 rp50 rp75 rp90 rp95
// ext4 2.415 0.23942 355 425 476 499 169 196 226 249
// zfs 3.532 0.35463 535 581 655 730 170 197 229 253
// xfs 2.3789 0.24695 372 441 481 503 168 195 226 248
// btrfs no compr 3.2142 0.32212 475 533 646 689 168 195 226 249
// btrfs zstd:1 3.1774 0.31789 475 523 607 659 167 194 225 249
// btrfs lzo 3.2673 0.34607 513 609 687 726 167 194 224 247
// bcachefs 2.6001 0.27165 398 489 547 572 169 195 226 249
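  // Column legend (inferred from how the values are used below): i100k/i10k
  // appear to be total times in seconds to insert 100k/10k records, ip* are
  // insertion-latency percentiles in µs, and rp* are read-latency percentiles
  // in µs. This chart plots the read (rp*) columns.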

// 2025-02-01
const readLatenciesMicroSec = {
ext4: [169, 196, 226, 249],
zfs: [170, 197, 229, 253],
xfs: [168, 195, 226, 248],
btrfsNoComp: [168, 195, 226, 249],
btrfsZstd1: [167, 194, 225, 249],
btrfsLzo: [167, 194, 224, 247],
bcachefs: [169, 195, 226, 249],
};

const data: ChartData<"bar"> = {
labels: ["p50", "p75", "p90", "p95"],
datasets: [
{
label: "ext4",
data: readLatenciesMicroSec.ext4,
backgroundColor: fsColors.ext4,
},
{
label: "xfs",
data: readLatenciesMicroSec.xfs,
backgroundColor: fsColors.xfs,
},
{
label: "bcachefs",
data: readLatenciesMicroSec.bcachefs,
backgroundColor: fsColors.bcachefs,
},
{
label: "btrfs w/o compression",
data: readLatenciesMicroSec.btrfsNoComp,
backgroundColor: fsColors.btrfsNoComp,
},
{
label: "btrfs zstd:1",
data: readLatenciesMicroSec.btrfsZstd1,
backgroundColor: fsColors.btrfsZstd1,
},
{
label: "btrfs lzo",
data: readLatenciesMicroSec.btrfsLzo,
backgroundColor: fsColors.btrfsLzo,
},
{
label: "zfs",
data: readLatenciesMicroSec.zfs,
backgroundColor: fsColors.zfs,
},
],
};

return (
<BarChart
data={data}
scales={{
y: {
title: {
display: true,
text: "Read Latency [µs]",
},
},
}}
/>
);
}

export function TrailBaseFileSystemWriteLatency() {
// i100k i10k ip50 ip75 ip90 ip95 rp50 rp75 rp90 rp95
// ext4 2.415 0.23942 355 425 476 499 169 196 226 249
// zfs 3.532 0.35463 535 581 655 730 170 197 229 253
// xfs 2.3789 0.24695 372 441 481 503 168 195 226 248
// btrfs no compr 3.2142 0.32212 475 533 646 689 168 195 226 249
// btrfs zstd:1 3.1774 0.31789 475 523 607 659 167 194 225 249
// btrfs lzo 3.2673 0.34607 513 609 687 726 167 194 224 247
// bcachefs 2.6001 0.27165 398 489 547 572 169 195 226 249
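  // Same raw measurements as above; this chart plots the insertion/write
  // (ip*) columns.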

// 2025-02-01
const writeLatenciesMicroSec = {
ext4: [355, 425, 476, 499],
zfs: [535, 581, 655, 730],
xfs: [372, 441, 481, 503],
btrfsNoComp: [475, 533, 646, 689],
btrfsZstd1: [475, 523, 607, 659],
btrfsLzo: [513, 609, 687, 726],
bcachefs: [398, 489, 547, 572],
};

const data: ChartData<"bar"> = {
labels: ["p50", "p75", "p90", "p95"],
datasets: [
{
label: "ext4",
data: writeLatenciesMicroSec.ext4,
backgroundColor: fsColors.ext4,
},
{
label: "xfs",
data: writeLatenciesMicroSec.xfs,
backgroundColor: fsColors.xfs,
},
{
label: "bcachefs",
data: writeLatenciesMicroSec.bcachefs,
backgroundColor: fsColors.bcachefs,
},
{
label: "btrfs w/o compression",
data: writeLatenciesMicroSec.btrfsNoComp,
backgroundColor: fsColors.btrfsNoComp,
},
{
label: "btrfs zstd:1",
data: writeLatenciesMicroSec.btrfsZstd1,
backgroundColor: fsColors.btrfsZstd1,
},
{
label: "btrfs lzo",
data: writeLatenciesMicroSec.btrfsLzo,
backgroundColor: fsColors.btrfsLzo,
},
{
label: "zfs",
data: writeLatenciesMicroSec.zfs,
backgroundColor: fsColors.zfs,
},
],
};

return (
<BarChart
data={data}
scales={{
y: {
title: {
display: true,
text: "Read Latency [µs]",
},
},
}}
/>
);
}
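
// The two chart components above are identical except for their data and
// y-axis title. A possible shared helper to deduplicate them (a sketch only,
// not part of this commit; display labels such as "btrfs w/o compression"
// would still need a key-to-label mapping):
function fsLatencyChartData(
  latenciesMicroSec: Record<keyof typeof fsColors, number[]>,
): ChartData<"bar"> {
  const keys = Object.keys(latenciesMicroSec) as (keyof typeof fsColors)[];
  return {
    labels: ["p50", "p75", "p90", "p95"],
    datasets: keys.map((fs) => ({
      label: fs,
      data: latenciesMicroSec[fs],
      backgroundColor: fsColors[fs],
    })),
  };
}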
55 changes: 52 additions & 3 deletions docs/src/content/docs/reference/benchmarks.mdx
@@ -12,6 +12,8 @@ import {
SupaBaseMemoryUsageChart,
SupaBaseCpuUsageChart,
PocketBaseAndTrailBaseUsageChart,
TrailBaseFileSystemReadLatency,
TrailBaseFileSystemWriteLatency,
FibonacciPocketBaseAndTrailBaseUsageChart,
} from "./_benchmarks/benchmarks.tsx";

@@ -178,6 +180,49 @@ being roughly 5 times slower than p50.
Slower insertions can take north of 100ms. This may be related to GC pauses,
scheduling, or more generally the same CPU variability we observed earlier.

## File System Performance

File systems play an important role in the performance of storage systems,
so we'll take a quick look at their impact on SQLite/TrailBase's performance.

Note that the numbers aren't directly comparable to the ones above, since they
were taken on a different machine with more storage options:
an AMD 8700G with a 2TB WD SN850x NVMe SSD running a 6.12 kernel.
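
The charts report latency percentiles (p50 through p95). As a refresher, these
are plain order statistics; a minimal nearest-rank sketch of how such
percentiles can be computed from raw latency samples (illustrative only, not
how the benchmark itself computes them):

```ts
// Nearest-rank percentile: the smallest sample such that at least p% of all
// samples are less than or equal to it.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// E.g., p50 and p95 of raw read latencies in µs:
const samples = [169, 171, 180, 196, 230, 249, 251];
console.log(percentile(samples, 50), percentile(samples, 95)); // 196 251
```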

<div class="flex justify-center h-[340px] w-[90%]">
<div class="w-[50%]">
<TrailBaseFileSystemReadLatency client:only="solid-js" />
</div>

<div class="w-[50%]">
<TrailBaseFileSystemWriteLatency client:only="solid-js" />
</div>
</div>

Interestingly, the read latencies appear identical across all file systems,
suggesting that caching is at play without much actual file system interaction.
In the future we should rerun the benchmark with a larger data set to observe
how performance degrades once not everything fits into memory anymore.
That said, having most queries served from cache is not an uncommon reality,
and it's reassuring to see that the caches work independently of the file
system 😅.

The write latencies are more interesting. Unsurprisingly, the modern
copy-on-write (CoW) file systems carry a bit more overhead. The relative
ranking is roughly in line with
[Phoronix's](https://www.phoronix.com/review/linux-611-filesystems/2)
results[^7], with the added OpenZFS (2.2.7) falling in line with its peers.
Note that the differences are attenuated by the constant overhead TrailBase
adds on top of vanilla SQLite.

We won't discuss the specific trade-offs and baggage that come with each file
system, but hope that the numbers can help guide the optimization of your own
production setup.
In the end it's a trade-off between performance, maturity, reliability,
physical disk space, and feature set. For example, CoW snapshots may or may
not be important to you.


## JavaScript Performance

The benchmark sets up a custom HTTP endpoint `/fibonacci?n=<N>` using the same
@@ -224,9 +269,8 @@ threads will be an effective remedy (`--js-runtime-threads`).
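
For reference, a sketch of what such a CPU-bound endpoint could look like in
TrailBase's JS/TS runtime (illustrative only: the `addRoute` registration shown
here is an assumption, not necessarily TrailBase's actual API):

```ts
// Hypothetical registration API, declared here only so the sketch type-checks.
declare function addRoute(
  method: string,
  path: string,
  handler: (req: Request) => Response,
): void;

// A naive recursive fibonacci keeps the endpoint deliberately CPU-bound.
function fibonacci(n: number): number {
  return n <= 1 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

addRoute("GET", "/fibonacci", (req) => {
  const n = parseInt(new URL(req.url).searchParams.get("n") ?? "40", 10);
  return new Response(String(fibonacci(n)));
});
```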

We're very happy to confirm that TrailBase's APIs and JS/ES6/TS runtime are
quick.
The significant performance gap we observed for API access is a consequence of
how fast SQLite itself is, making even small overheads matter that much more.

With the numbers fresh off the press, prudence is of the essence and ultimately
nothing beats benchmarking your own setup and workloads.
@@ -282,3 +326,8 @@ The benchmarks are available on [GitHub](https://github.com/trailbaseio/trailbas
Runtime effects, such as garbage collection, may play a role; however, we
would have expected these to show on shorter time scales.
This could also indicate a contention or thrashing issue 🤷.

[^7]:
    Despite being close, XFS's and ext4's relative ranking is swapped, which
    could be due to a variety of reasons like mount options, kernel version,
    hardware, ...
