Ceph performance

* it seems they haven’t tried changing prefer_deferred_size and min_alloc_size (a sample ceph.conf sketch for these options follows the list).
* new NVMes definitely didn’t change anything: 260000 iops is far more than Ceph can make use of anyway.
* in the new PDF there is a 70/30 R/W test with QD=1. It was run with 100 RBD clients, but for their cluster that was a «low-load» condition (19.38% CPU load on the hosts). They report 0.37ms/0.72ms random read/write latencies. In fact they report them reversed :) but let’s assume that 0.37ms is actually for reads, because reads are always faster in Ceph. This again corresponds to only 2700/1388 single-thread read/write iops (see the conversion sketch after this list).
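
A minimal sketch of what trying those two options could look like, assuming BlueStore on SSD/NVMe and therefore the _ssd variants of the options. The values are purely illustrative, not recommendations, and min_alloc_size only takes effect when the OSD is (re)created:

 [osd]
 # illustrative values only
 bluestore_prefer_deferred_size_ssd = 32768
 bluestore_min_alloc_size_ssd = 4096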
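
The iops figures follow from the latencies: at QD=1 a client has only one request in flight, so its iops ceiling is simply 1/latency. A trivial check (plain Python, nothing Ceph-specific; qd1_iops is just an illustrative helper name):

 def qd1_iops(latency_ms):
     """Single-thread (QD=1) iops implied by an average latency in milliseconds."""
     return 1000.0 / latency_ms
 print(round(qd1_iops(0.37)))  # 2703, i.e. the ~2700 read iops above
 print(round(qd1_iops(0.72)))  # 1389, i.e. the ~1388 write iops above
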
== CAPACITORS! ==
