Ceph performance

[[File:Ceph-funnel-en.svg|500px|right]] [[ru:Производительность Ceph]]
Ceph is a Software-Defined Storage system. It’s very feature-rich: it provides object storage, VM disk storage, a shared cluster filesystem and a lot of additional features. In some ways, it’s even unique.
 
It could be an excellent solution which you could take for free, immediately solve all your problems, become a cloud provider and earn piles of money. However, there is a subtle problem: PERFORMANCE. Rational people rarely want to lower the performance by 95 % in production. It seems cloud providers like AWS, GCP and Yandex don’t care — all of them run their clouds on top of their own crafted SDS-es (not even Ceph) and all these SDS-es are just as slow. :-) We don’t judge them, of course; that’s their own business.
 
This article describes which performance numbers you can achieve with Ceph and how. But I warn you: you won’t catch up with local SSDs. Local SSDs (especially NVMe) are REALLY fast right now; their latency is about 0.05 ms. It’s very hard for an SDS to achieve the same result, and beating it is almost impossible. The network alone eats those 0.05 ms...
 
== General benchmarking principles ==
=== Test your disks ===
 
[https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc SSD Bench Google Docs]
Run <tt>fio</tt> on your drives before deploying Ceph (example commands are given below, after the notes):
* Or use https://github.com/vitalif/ceph-bench. The original idea comes from «Mark’s bench» from the Russian Ceph chat ([https://github.com/socketpair/ceph-bench the original, now outdated, tool was here]). Both use a non-replicated Ceph pool (size=1), create several 4 MB objects (16 by default) in each separate OSD and do random single-threaded 4 KB writes to randomly selected objects within one OSD. This mimics random writes to RBD and lets you identify problematic OSDs by benchmarking them separately.
*: To create the non-replicated benchmark pool use {{Cmd|ceph osd pool create bench 128 replicated; ceph osd pool set bench size 1; ceph osd pool set bench min_size 1}}. Just note that 128 (the PG count) should be enough for all OSDs to get at least one PG each.
* S3 (rgw):
** [https://github.com/intel-cloud/cosbench cosbench]
** [https://github.com/markhpc/hsbench hsbench]
** [https://github.com/minio/warp minio warp]
Notes:
* Don’t use <tt>rados bench</tt>. It creates a small number of objects (1-2 per thread), so they all end up sitting in cache and the results are inflated far beyond what they should be.
* You can use <tt>rbd bench</tt>, but fio is still better.
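
The exact fio job depends on what you want to measure; a minimal sketch of raw-drive tests could look like this (the device name <tt>/dev/sdX</tt> is a placeholder, and the test writes to the raw device, so run it only on a drive that holds no data you need):

<pre>
# Linear write throughput, big blocks:
fio -ioengine=libaio -direct=1 -name=test -bs=4M -iodepth=32 -rw=write -runtime=60 -filename=/dev/sdX
# Single-threaded random write latency with a sync after each write (similar to journal/metadata writes in Ceph):
fio -ioengine=libaio -direct=1 -fsync=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -filename=/dev/sdX
# Parallel random write iops:
fio -ioengine=libaio -direct=1 -name=test -bs=4k -iodepth=128 -rw=randwrite -runtime=60 -filename=/dev/sdX
</pre>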
 
=== Test your network ===
 
<tt>ping -f</tt> (flood ping).
 
sockperf. On the first node, run <tt>sockperf sr -i IP --tcp</tt>. On the second, run <tt>sockperf pp -i SERVER_IP --tcp -m 4096</tt>. A decent average number is around 0.05-0.07 ms.
 
<s>qperf. On the first node, just run <tt>qperf</tt>. On the second, <tt>qperf -vvs SERVER_IP tcp_lat -m 4096</tt>.</s>
 
Don’t use qperf. It is super-stupid: it doesn’t disable Nagle (no TCP_NODELAY) and it doesn’t honor the <tt>-m 4096</tt> parameter — message size is always set to 1 BYTE in latency tests.
 
[[File:Warning icon.svg|32px|link=]] Warning: Ubuntu has AppArmor enabled by default and it affects network latency adversely. Disable it if you want good performance. The effect of AppArmor is like the following (Intel X520-DA2):
 
* centos 3.10: rtt min/avg/max/mdev = 0.039/0.053/0.132/0.012 ms
* ubuntu 4.x + apparmor: rtt min/avg/max/mdev = 0.068/0.163/0.230/0.029 ms
* ubuntu 4.x: rtt min/avg/max/mdev = 0.037/0.071/0.157/0.018 ms
== Why is it so slow ==
The latency doesn’t scale with the number of servers, with several OSDs per SSD, or with two RBDs in RAID0. When you benchmark your cluster with iodepth=1 you’re benchmarking only ONE placement group at a time (a PG is a triplet or a pair of OSDs), so the result is determined only by how fast a single OSD processes a single request. In fact, with iodepth=1, IOPS = 1/latency (in seconds). There is Nick Fisk’s presentation titled «Low-latency Ceph»; by «low-latency» he means 0.7 ms, which is only ~1500 iops.
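
For example, a sketch of such a single-request latency test against an RBD image with fio’s rbd engine (the pool «rbd», the image «testimg» and the client name are placeholders; the image must already exist):

<pre>
# QD=1 random write: the resulting iops ≈ 1000 / average latency in ms
fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=testimg -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60
# The same with QD=128 shows the parallel iops ceiling instead of the latency
fio -ioengine=rbd -clientname=admin -pool=rbd -rbdname=testimg -name=test -bs=4k -iodepth=128 -rw=randwrite -runtime=60
</pre>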
 
=== Expected performance ===
 
Estimating the cluster performance from raw disk performance alone is absolutely wrong.

The real expected performance of Bluestore is roughly the following (iops figures are for random 4 KB reads/writes):
 
1 HDD (usual SATA, 7200 rpm, non-SMR, without SSD cache):
* ~100-120 iops with QD=128
* ~66 iops with QD=1
* ~40 MB/s with linear read/write
* The numbers will be worse if you're short on available RAM, because you'll get a lot of metadata cache misses
 
1 fast SSD or NVMe SSD with capacitors (see below) and write iops >= 25000:
* ~1000 write iops with QD=1. May vary between 300 and, in the best possible case, ~2500 iops depending on CPU frequency and settings.
* Up to ~10000-20000 write iops with QD=128 per 1 OSD.
* Read iops are around 2-2.5 times better: QD=1 ~2000 iops (up to ~4000), QD=128 ~20000 (up to ~50000 depending on the CPU).
* Of course, the QD=128 iops number is limited by the performance of the disk itself :). However, as good SSDs usually perform great in parallel mode, they're usually not a bottleneck.
* By running multiple OSDs on a single drive, you can multiply your parallel (QD=128) iops number by the number of OSDs, as long as the drive allows it. Of course, you get the same increase in CPU load. HUGE increase.
* Linear reads and writes are almost as fast as raw disk reads and writes.
* Difference between SATA SSDs and NVMes in terms of random I/O in Ceph is negligible as long as they both have capacitors. Of course, server NVMes are still the best and you should try to get them instead of SATA and SAS, but it's hard to notice the difference with Ceph and random I/O.
* Modern SSDs often have slower QD=1 random reads than writes, just because they write into a fast capacitor-protected cache, but they can't serve all random reads from it. The difference is usually like 8000 QD=1 read iops compared to 40000 QD=1 write iops.
 
Aggregate performance:
* Linear read from the cluster = OSD number * MB/s of one OSD
* Linear write to a replicated pool = OSD number / Replica number * MB/s of one OSD
* Linear write to an EC pool = OSD number / (K+M) * K * MB/s of one OSD
* Random QD=1 performance is the average for all OSDs (treat it like latency); iops with QD=128 is the sum
* Random IOPS are limited by the client, too. 1 RBD client can squeeze out up to ~30000 read iops and up to ~15000 write iops
* Linear I/O is of course limited by the network bandwidth, too
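
A back-of-the-envelope illustration of these formulas with made-up numbers (12 OSDs at ~100 MB/s linear write each, a size=3 replicated pool and a 4+2 EC pool):

<pre>
# purely illustrative numbers, not a measurement
OSDS=12; OSD_MBPS=100; REPLICAS=3; K=4; M=2
echo "linear read:             $(( OSDS * OSD_MBPS )) MB/s"                  # 12*100         = 1200 MB/s
echo "replicated linear write: $(( OSDS * OSD_MBPS / REPLICAS )) MB/s"       # 12/3*100       = 400 MB/s
echo "EC $K+$M linear write:     $(( OSDS * K * OSD_MBPS / (K + M) )) MB/s"  # 12/(4+2)*4*100 = 800 MB/s
</pre>
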
=== Micron setup example ===
TODO: This section lacks random read performance comparisons.
Bluestore is the «new» storage layer of Ceph. All presentations and documents say it’s better in all ways, which indeed seems reasonable for something «new».
Bluestore is really 2x faster than Filestore for linear write workloads, because it has no double-writes — big blocks are written only once, not twice as in Filestore. Filestore journals everything, so all writes first go to the journal and then get copied to the main device.
Official documents say that you should allocate 4 % of the slow device space for block.db (Bluestore’s metadata partition). This is a lot; Bluestore rarely needs that much space.
But the main problem is that Bluestore uses RocksDB, and RocksDB puts a file on the fast device only if it thinks that the whole level will fit there (RocksDB is organized in files). So the default RocksDB settings in Ceph are:
* 1 GB WAL = 4x256 MB
* …But trying to tune them is pointless: the default configuration (1x5 for HDDs and 2x8 for SSDs) is optimal. The problem is that all worker threads still serialize writes into a single kv_sync_thread, and the whole scheme only scales up to ~6 worker threads.
* There is one thing that decreases latency 2-3 times at once: disabling all power-save functions of the CPU:
** <tt>cpupower idle-set -D 10</tt> — this disables C-States (or you can pass <tt>processor.max_cstate=1 intel_idle.max_cstate=0</tt> to the kernel command-line)
** <tt>cpupower frequency-set -g performance</tt> or (for older versions) <tt>for i in $(seq 0 $((`nproc`-1))); do cpufreq-set -c $i -g performance; done</tt> — this disables frequency scaling.
* When power-save is disabled the CPU heats up like a GTX, but you get 2-3 times more iops.
* High CPU requirement is one of the arguments to NOT use Ceph in a «hyperconverged setup», the setup in which storage and compute nodes are combined.
* Drive cache in QEMU is controlled by the <tt>cache</tt> option (surprise). It can be missing (not specified), writethrough, writeback, none, unsafe or directsync. With RBD this option also affects rbd cache, which is the cache on the Ceph client library (librbd) side; see the option sketch after this list.
* But cache=unsafe doesn’t work with RBD, it still waits for write confirmations. And writethrough, missing and directsync are basically equivalent.
* RBD cache helps a lot on HDD clusters, but it slows everything down in all-flash clusters. Something is implemented with locks, something is single-threaded; somebody is trying to optimize it all, but the work isn’t done yet.
* There are the following drive emulation options: lsi (slowest), virtio-scsi (fast), virtio (fastest, but can’t do TRIM until QEMU 4.0). virtio-scsi can use multiple queues and thus should be the fastest with fast underlying storage (with a local NVMe?) — but it seems it doesn’t matter with Ceph.
* The filesystem also slows things down! Specifically, it updates the inode mtime on each small write if you don’t have lazytime enabled. mtime is part of the metadata, so this change is journaled, which makes the <tt>fio -sync=1 -iodepth=1 -direct=1</tt> test result 3-4 times worse when you run it over a file in a filesystem.
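
To illustrate the cache and emulation options mentioned above, here is a sketch of the relevant QEMU options only (not a complete command line; the pool/image «rbd/vm1» and the client id are placeholders, and with libvirt the same is set via the disk element’s cache= and bus= attributes):

<pre>
# virtio disk backed by RBD; cache=none also disables the librbd (rbd cache) writeback cache
-drive format=raw,file=rbd:rbd/vm1:id=admin,cache=none,if=virtio
# the same disk behind a multi-queue virtio-scsi controller
-device virtio-scsi-pci,num_queues=4
-drive format=raw,file=rbd:rbd/vm1:id=admin,cache=none,if=none,id=drive0
-device scsi-hd,drive=drive0
</pre>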
In addition to flushes and fsync, SATA/SAS drives also have a cache disable command. When you disable the cache Linux stops sending flushes at all. It may seem that this should also result in the same performance as fsync/O_SYNC, but that’s not the case either! SSDs with supercaps give '''much''' better performance with disabled cache. For example, Seagate Nytro 1351 gives you 288 iops with cache enabled and 18000 iops with it disabled (!).
Why? It seems that’s because FLUSH CACHE is interpreted by the drive as a «please flush all caches, including non-volatile cache» command, and «disable cache» is interpreted as «please disable the volatile cache, but you may leave the non-volatile one on if you want to». This makes writes with a flush after every write slower than writes with the cache disabled.
What about NVMe? NVMe has slightly less variability: there is no «disable cache» command in the NVMe spec at all, but just as in the SATA spec there are the FLUSH CACHE command and the FUA bit. But again, based on personal experience, it seems that FUA is often ignored with NVMe either by Linux or by the drive itself, thus '''fio -sync=1''' gives the same results as '''fio -direct=1''' without any sync flags. '''-fsync=1''' performs correctly and brings the performance down to where it belongs (1000—2000 iops for desktop NVMes).
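
You can observe this behaviour on your own drive by comparing a QD=1 random write test with and without fsync, and then repeating it with the volatile write cache disabled (again, <tt>/dev/sdX</tt> is a placeholder and the test overwrites the drive):

<pre>
# QD=1 random write with an fsync after every write:
fio -ioengine=libaio -direct=1 -fsync=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -filename=/dev/sdX
# the same without fsync, for comparison:
fio -ioengine=libaio -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -filename=/dev/sdX
# disable the volatile write cache on a SATA/SAS drive, then repeat the first test:
hdparm -W 0 /dev/sdX
</pre>
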
== Quick insight into SSD and flash memory organization ==
The distinctive feature of NAND flash memory is that you can write it in small blocks (usually 512 to 4096 bytes), but erase only big block groups at once, and you must erase a block before overwriting it. The write unit is called a «page», the erase unit is called a «block». Actual NAND chips have 16 KB pages and 16-24 MB blocks (1024 pages for Micron MLC and 1536 pages for Micron TLC). This is probably because erasing is slow compared to reading and writing (common sense suggests that erasing could be ~1000 times slower than writing), so it is done for a lot of blocks at once, which takes almost the same time as erasing a single block would. Another distinctive feature is that the total number of erase/program cycles is physically limited — after several thousand cycles (a usual number for MLC memory) the block becomes faulty and stops accepting new writes or even loses the data previously written to it. Denser and cheaper memory (MLC/TLC/QLC, 2/3/4 bits per cell) has smaller erase limits, while sparser and more expensive memory (SLC, 1 bit per cell) has bigger limits (up to 100000 rewrites). However, all limits are still finite, so stupidly overwriting the same block over and over would be very slow and would break the SSD very rapidly.
But that’s not the case with modern SSDs — even cheap models are very fast and usually very durable. But why? The credit goes to SSD controllers: SSDs contain very smart and powerful controllers, usually with at least 4 cores and 1-2 GHz clock frequency, which means they’re as powerful as mobile phones’ processors. All that power is required to make the FTL firmware run smoothly. FTL stands for «Flash Translation Layer» and it is the firmware responsible for translating addresses of small blocks into physical addresses on flash memory chips. Every write request is always put into space freed in advance, and the FTL just remembers the new physical location of the data. This makes writes very fast. The FTL also defragments free space and moves blocks around to achieve uniform wear across all memory cells. This feature is called Wear Leveling. SSDs also usually have some extra physical space reserved to add even more endurance and to make wear leveling easier; this is called overprovisioning. Pricier server SSDs have a lot of space overprovisioned, for example, Micron 5100 Max has 37.5 % of physical memory reserved (an extra 60 % is added to the user-visible capacity).
When I tried to lecture someone on the mailing list about «all SSDs doing fsyncs correctly» I got this as the reply: https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf. Long story short, it says that in 2013 a common scenario was SSDs not syncing metadata on fsync calls at all, which led to all kinds of funny things on power loss, up to (!!!) the total failure of some SSDs.
There also exist some very old SSDs without capacitors (OCZ Vector/Vertex) which are capable of very large sync iops numbers. How do they work? Nobody knows, but I suspect that they just don’t do safe writes :). The core principle of flash memory overwrites hasn’t changed in recent years: 5 years ago SSDs were also FTL-based, just as now.
So it seems there are two kinds of «power loss protection»: simple PLP means «we do fsyncs and don’t die or lose your data when a power loss occurs», and advanced PLP means that fsync’ed writes are just as fast as non-fsynced ones. It also seems that nowadays (2018—2019) simple PLP is already a standard and most SSDs don’t lose data on power failure.
This limit on the number of simultaneously «opened» erase blocks is always sufficient to copy big files to a flash drive formatted with any of the common filesystems: one opened block receives the metadata and another receives the data, and then it just moves on. But if you start doing random writes you stop hitting the opened blocks, and this is where the lags come in.
 
== Bonus: Micron vSAN reference architecture ==
 
[https://media-www.micron.com/-/media/client/global/documents/products/other-documents/micron_vsan_6,-d-,7_on_x86_smc_reference_architecture.pdf Micron Accelerated All-Flash SATA vSAN 6.7 Solution]
 
Node configuration:
 
* 384 GB RAM 2667 MHz
* 2x Micron 5100 MAX 960 GB (randread: 93k iops, randwrite: 74k iops)
* 8x Micron 5200 ECO 3.84 TB (randread: 95k iops, randwrite: 17k iops)
* 2x Xeon Gold 6142 (16c 2.6GHz)
* Mellanox ConnectX-4 Lx
* Connected to 2x Mellanox SN2410 25GbE switches
 
«Aligns with VMWare AF-6, aims up to 50K read iops per node»
 
* 2 replicas (like Ceph size=2)
* 4 nodes
* 4 VMs on each node
* 8 vmdk per VM
* 4 threads per vmdk
Total I/O parallelism: 512
 
100%/70%/50%/30%/0% write
* «Baseline» (fits in cache): 121k/178k/249k/314k/486k iops
* «Capacity» (doesn’t): 51k/66k/90k/134k/363k
* Latency is 1000*512/IOPS ms in all tests (latency in ms = 1000 × parallelism / iops)
* '''No latency tests with low parallelism'''
* '''No linear read/write tests'''
 
Conclusion:
* ~3800 write iops per drive
* ~11343 read iops per drive
* ~1600 write iops per drive when not in cache
* Parallel workload doesn’t look better than Ceph. vSAN is hyperconverged, though.
== Good SSD models ==
* Micron 5100/5200, 9300. Maybe 5300 and 7300, too
* Seagate Nytro 1351/1551
* HGST SN260
* At least until Nautilus: <tt>[global] debug objecter = 0/0</tt> (there is a big client-side slowdown)
* Try to disable rbd cache in the userspace driver (QEMU options cache=none)
* <s>For HDD-only or bad-SSD-only setups: remove the «handbrake» https://github.com/ceph/ceph/pull/26909 (it is already backported).</s>
