Ceph performance

5 bytes removed, 19:45, 24 July 2019
=== Micron setup example ===
Here’s an example setup from Micron. They used 2x replication, very costly CPUs (2x Xeon Gold per server), a very fast network (100G) and 10 of the best NVMes they had in each of the 4 nodes: https://www.micron.com/resource-details/30c00464-e089-479c-8469-5ecb02cfe06f
They only got 350000 peak write iops with high parallelism at 100 % CPU load. It may seem a lot, but if you divide it by the number of NVMes — 350000/40 — it’s only 8750 iops per NVMe. If we account for 2 replicas and WAL, we get 8750*2*2 = 35000 iops per drive. So… Ceph only squeezed 35000 iops out of an NVMe '''that can deliver 260000 iops alone'''. That’s what Ceph overhead is.
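The arithmetic above can be written out as a short sketch. The replication and WAL amplification factors are the ones assumed in the text, not measured values:

```python
# Back-of-the-envelope Ceph write-iops accounting for the Micron setup above.
cluster_write_iops = 350_000   # peak cluster-wide 4K random write iops from the report
nodes = 4
nvmes_per_node = 10
replication = 2                # 2x replication: every write lands on 2 OSDs
wal_factor = 2                 # assumed: BlueStore WAL roughly doubles device writes
raw_drive_iops = 260_000       # what one such NVMe delivers on its own

total_nvmes = nodes * nvmes_per_node                       # 40 drives
client_iops_per_nvme = cluster_write_iops / total_nvmes    # 8750 client iops/drive
device_iops_per_nvme = client_iops_per_nvme * replication * wal_factor  # 35000

utilization = device_iops_per_nvme / raw_drive_iops
print(client_iops_per_nvme, device_iops_per_nvme, f"{utilization:.1%}")
```

Even after crediting Ceph for replication and WAL writes, each drive runs at roughly 13–14 % of its standalone capability; the rest is consumed by Ceph’s software overhead.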
Also, there are no single-thread latency tests in that PDF, which would have been very interesting to see.