Ceph performance

* SPDK build scripts are OK, and Ceph is even built with SPDK support by default. There are even some reports that it works; however, my OSDs simply hung when I tried to start them with SPDK.
* Both are pointless to use, because Ceph itself isn't that fast. It doesn't matter whether your network latency is 0.05 ms or 0.005 ms when the Ceph software stack adds 0.5-1 ms on top of it. There was an experiment report in the mailing list: one contributor isolated AsyncMessenger from all other Ceph code and benchmarked it alone (https://www.spinics.net/lists/ceph-devel/msg43555.html), and even then it only reached ~80000 iops.
* SPDK is unneeded in the long term even for NVMe drives, because Linux 5.1 finally has a proper asynchronous I/O implementation called '''io_uring''': https://lore.kernel.org/linux-block/20190116175003.17880-1-axboe@kernel.dk/ - it gives you almost the same latency as SPDK with far less complexity. And it finally works with buffered I/O, too.
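The latency argument above is easy to verify with back-of-the-envelope arithmetic. A minimal sketch, using the illustrative figures from the text (0.5-1 ms of Ceph software overhead, 0.05 ms vs 0.005 ms network latency; none of these are fresh measurements), shows why kernel-bypass networking barely moves the queue-depth-1 IOPS ceiling:

```python
# Why shaving network latency barely helps when Ceph's own software
# stack dominates the per-request latency budget. All numbers are the
# illustrative figures from the text above, not benchmark results.

def qd1_iops(total_latency_ms: float) -> float:
    """IOPS achievable at queue depth 1, where each request must
    complete before the next one is issued."""
    return 1000.0 / total_latency_ms

ceph_software_ms = 0.7           # roughly the middle of the 0.5-1 ms cited above

for net_ms in (0.05, 0.005):     # ordinary TCP stack vs. kernel-bypass networking
    total = ceph_software_ms + net_ms
    print(f"net={net_ms} ms  total={total:.3f} ms  -> {qd1_iops(total):.0f} iops")
```

Dropping network latency by a factor of 10 improves single-request throughput by only about 6% here, because the Ceph software time is the dominant term in the sum.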
== Drive cache is slowing you down ==