From 3b7279b376da062d4b2fef9d3d6d47320c704451 Mon Sep 17 00:00:00 2001
From: Vitaliy Filippov
Date: Sun, 1 Nov 2020 14:13:35 +0300
Subject: [PATCH] Add Ceph EC 2+1 test results

---
 README.md | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 66d979c0..9931f1b4 100644
--- a/README.md
+++ b/README.md
@@ -153,9 +153,9 @@ I use the following 6 commands with small variations to benchmark any storage:
   `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=write -runtime=60 -filename=/dev/sdX`
 - Linear read:
   `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=read -runtime=60 -filename=/dev/sdX`
-- Random write latency (this hurts storages the most):
+- Random write latency (T1Q1, this hurts storages the most):
   `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -fsync=1 -rw=randwrite -runtime=60 -filename=/dev/sdX`
-- Random read latency:
+- Random read latency (T1Q1):
   `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=1 -rw=randread -runtime=60 -filename=/dev/sdX`
 - Parallel write iops (use numjobs if a single CPU core is insufficient to saturate the load):
   `fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4k -iodepth=128 [-numjobs=4 -group_reporting] -rw=randwrite -runtime=60 -filename=/dev/sdX`
@@ -206,7 +206,7 @@ Hardware configuration: 4 nodes, each with:
 CPU powersaving was disabled.
 Both Vitastor and Ceph were configured with 2 OSDs per 1 SSD.
 
-All of the results below apply to 4 KB blocks.
+All of the results below apply to 4 KB blocks and random access (unless indicated otherwise).
 
 Raw drive performance:
 - T1Q1 write ~27000 iops (~0.037ms latency)
@@ -258,6 +258,14 @@ Vitastor:
 - Linear write (4M T1Q32): 3200 MB/s
 - Linear read (4M T1Q32): 1800 MB/s
 
+Ceph:
+- T1Q1 write: 730 iops (~1.37ms latency)
+- T1Q1 read: 1500 iops with cold cache (~0.66ms latency), 2300 iops after 2 minute metadata cache warmup (~0.435ms latency)
+- T4Q128 write (4 RBD images): 45300 iops, total CPU usage by OSDs about 30 virtual cores on each node
+- T8Q64 read (4 RBD images): 278600 iops, total CPU usage by OSDs about 40 virtual cores on each node
+- Linear write (4M T1Q32): 1950 MB/s before preallocation, 2500 MB/s after preallocation
+- Linear read (4M T1Q32): 2400 MB/s
+
 ### NBD
 
 NBD is currently required to mount Vitastor via kernel, but it imposes additional overhead
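A note for reviewers: the patch itself does not say how the Ceph numbers were collected, but the README's own T1Q1 latency tests translate naturally to RBD via fio's `rbd` ioengine. A sketch of such an invocation, assuming fio is built with RBD support and assuming a pool `rbd`, an image `testimg`, and client `admin` (all three names are hypothetical placeholders, not taken from the patch):

```shell
# T1Q1 random-write latency test against a Ceph RBD image, mirroring the
# README's "-bs=4k -iodepth=1 -fsync=1 -rw=randwrite" block-device test.
# Pool, image, and client names below are placeholders for illustration.
fio -ioengine=rbd -name=test -bs=4k -iodepth=1 -fsync=1 \
    -rw=randwrite -runtime=60 \
    -pool=rbd -rbdname=testimg -clientname=admin
```

The `rbd` engine talks to the cluster through librbd directly, bypassing the kernel block layer, which is why no `-filename=/dev/sdX` appears; this is a command fragment for dedicated benchmark hardware, not something runnable without a live Ceph cluster.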