Ceph performance

23 bytes added, 15:49, 24 July 2019
== Network, DPDK and SPDK ==
* Fast network mostly matters for linear read/write and rebalancing. Yes, you need 10G or more, but usual Ethernet latencies of 0.05–0.1 ms are totally enough for Ceph. Improving them further won’t improve your random read/write performance. Jumbo frames (mtu=9000) also only matter for linear read/write.
* DPDK = Data Plane Development Kit, a fast Intel library for working with network and RDMA (Infiniband) devices in userspace, without kernel context switches
* SPDK = Storage Performance Development Kit, an additional Intel library for working with NVMe SSDs in userspace, also very fast. There is also libnvme, a fork of SPDK with the DPDK dependency removed.
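A quick back-of-the-envelope sketch of why shaving network latency below typical Ethernet values barely helps random I/O: at queue depth 1, IOPS is bounded by the whole round trip, and Ceph's own software overhead plus the drive usually dominate the network term. The overhead figures below (0.4 ms for the Ceph stack, 0.1 ms for an NVMe drive) are illustrative assumptions, not measurements from this article.

```python
def qd1_iops(network_rtt_ms, stack_ms, drive_ms):
    """IOPS for one synchronous client: one request per full round trip."""
    total_ms = network_rtt_ms + stack_ms + drive_ms
    return 1000.0 / total_ms

# Typical Ethernet RTT (~0.05 ms) vs a 10x faster network (~0.005 ms),
# with assumed ~0.4 ms Ceph software overhead and ~0.1 ms NVMe latency:
base = qd1_iops(0.05, 0.4, 0.1)
fast = qd1_iops(0.005, 0.4, 0.1)
print(round(base), round(fast))  # the gain stays under 10 %
```

With these assumptions the 10x network improvement moves single-threaded IOPS from roughly 1800 to roughly 2000, which is why the latency budget is better spent on the software stack and the drives.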