From 900171586b78e9b0d201928b805475f8f37de292 Mon Sep 17 00:00:00 2001
From: Vitaliy Filippov
Date: Sat, 17 Oct 2020 14:58:08 +0300
Subject: [PATCH] XOR 2+1 test results

---
 README.md | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 613823b3..e5b16a0e 100644
--- a/README.md
+++ b/README.md
@@ -233,6 +233,8 @@ Vitastor:
 - T1Q1 read: 6838 iops (0.145ms latency)
 - T2Q64 write: 162000 iops, total CPU usage by OSDs about 3 virtual cores on each node
 - T8Q64 read: 895000 iops, total CPU usage by OSDs about 4 virtual cores on each node
+- Linear write (4M T1Q32): 2800 MB/s
+- Linear read (4M T1Q32): 1500 MB/s
 
 T8Q64 read test was conducted over 1 larger inode (3.2T) from all hosts (every host was running 2 instances of fio).
 Vitastor has no performance penalties related to running multiple clients over a single inode.
@@ -246,6 +248,16 @@ Vitastor was configured with: `--disable_data_fsync true --immediate_commit all
 --journal_no_same_sector_overwrites true --journal_sector_buffer_count 1024
 --journal_size 16777216`.
 
+### EC/XOR 2+1
+
+Vitastor:
+- T1Q1 write: 2808 iops (~0.355ms latency)
+- T1Q1 read: 6190 iops (~0.16ms latency)
+- T2Q64 write: 85500 iops, total CPU usage by OSDs about 3.4 virtual cores on each node
+- T8Q64 read: 812000 iops, total CPU usage by OSDs about 4.7 virtual cores on each node
+- Linear write (4M T1Q32): 3200 MB/s
+- Linear read (4M T1Q32): 1800 MB/s
+
 ### NBD
 
 NBD is currently required to mount Vitastor via kernel, but it imposes additional overhead
@@ -257,8 +269,8 @@ Vitastor with single-thread NBD on the same hardware:
 - T1Q1 read: 5518 iops (0.18ms latency)
 - T1Q128 write: 94400 iops
 - T1Q128 read: 103000 iops
-- Linear write (4M T1Q128): 1266 MB/s (compared to 2600 MB/s via fio)
-- Linear read (4M T1Q128): 975 MB/s (compared to 1400 MB/s via fio)
+- Linear write (4M T1Q128): 1266 MB/s (compared to 2800 MB/s via fio)
+- Linear read (4M T1Q128): 975 MB/s (compared to 1500 MB/s via fio)
 
 ## Building
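
Note on the TxQy labels used in the results above: they follow the usual fio convention, where T is presumably the number of parallel fio jobs and Q is the queue depth per job. The following is only a minimal sketch of that kind of invocation against a block device; the device path /dev/nbd0, the runtime and the ioengine are placeholder assumptions, not the exact commands behind the numbers in this patch.

 # Sketch: 4k random write, 1 job, queue depth 1 ("T1Q1"); /dev/nbd0 is a placeholder block device
 fio -thread -ioengine=libaio -name=test -bs=4k -direct=1 -numjobs=1 -iodepth=1 \
     -rw=randwrite -runtime=60 -filename=/dev/nbd0

 # Sketch: 4M linear read, 1 job, queue depth 32 ("4M T1Q32")
 fio -thread -ioengine=libaio -name=test -bs=4M -direct=1 -numjobs=1 -iodepth=32 \
     -rw=read -runtime=60 -filename=/dev/nbd0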