Ceph performance

'''However''', the naive expectation is that if you replace your HDDs with SSDs and use a fast network, Ceph should become almost as fast. Everyone is used to the idea that I/O is slow and software is fast. This is generally NOT true with Ceph.
Ceph is a Software-Defined Storage system, and its "software" is a significant overhead. The general rule currently is: with Ceph it's hard to achieve random read latencies below 0.5 ms and random write latencies below 1 ms, '''no matter what drives or network you use'''. With one thread, this translates to only about 2000 random read iops and 1000 random write iops, and even if you manage to achieve this result you're already in good shape. With best-in-slot hardware and some tuning you may be able to improve it further, but only by a factor of two or so.
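The iops figures above follow directly from the latency floor: with a single outstanding request, throughput is just the reciprocal of per-request latency. A quick sketch of the arithmetic:

```python
# With one outstanding request (iodepth=1), iops is bounded by latency:
# iops = 1 second / per-request latency.
def iops_at_latency(latency_ms: float) -> float:
    """Single-threaded iops ceiling for a given per-request latency."""
    return 1000.0 / latency_ms

print(iops_at_latency(0.5))  # 0.5 ms random read latency -> 2000 iops
print(iops_at_latency(1.0))  # 1 ms random write latency -> 1000 iops
```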
But does latency matter? Yes, it does, when it comes to single-threaded (synchronous) random reads or writes. Basically, all software that wants its data to be durable issues fsync() calls, which serialize writes. For example, all DBMSs do. So to understand the performance limit of these applications you should benchmark your cluster with iodepth=1.
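Such a benchmark can be run with fio and its rbd engine. A minimal job file might look like the sketch below; the pool name, image name, and client name are placeholders you would replace with values from your own cluster:

```ini
; Single-threaded (iodepth=1) random-write latency test against an RBD image.
; "rbd", "testimg" and "admin" are placeholders, not values from this article.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimg
direct=1
bs=4k
runtime=60
time_based=1

[randwrite-qd1]
rw=randwrite
iodepth=1
```

The average completion latency reported by fio for this job is the number to compare against the ~1 ms floor discussed above.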
