Ceph performance

25 bytes added, 09:56, 24 July 2019
To be clear, this means that Ceph '''does not use any drive write buffers'''. It does quite the opposite: it flushes the drive's cache after each write. This doesn't mean there's no write buffering at all; there is some on the client side (RBD cache, Linux page cache inside VMs). But the disks' internal write buffers aren't used.
This makes typical desktop SSDs perform absolutely terribly as a Ceph journal in terms of write IOPS. The numbers you can expect are something between 100 and 1000 (or 500–2000) iops, while you'd probably like to see at least 10000 (even a noname Chinese SSD can do 10000 iops without fsync).
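Why the numbers drop so sharply follows from simple arithmetic: at queue depth 1 each write must wait for the previous flush to complete, so sustained IOPS is just the reciprocal of the per-write flush latency. A minimal sketch (the latency figures below are illustrative assumptions, not measurements):

```python
# At -iodepth=1 -fsync=1 each write completes before the next is issued,
# so sustained IOPS equals 1 / (per-write flush latency).
def iops_from_latency(latency_s: float) -> float:
    return 1.0 / latency_s

# Illustrative flush latencies (assumptions, not measured values):
examples = [
    ("desktop SSD, ~1 ms full cache flush per write", 1e-3),
    ("SSD with power-loss protection, ~0.05 ms ack", 5e-5),
]
for name, lat in examples:
    print(f"{name}: {iops_from_latency(lat):.0f} iops")
```

This is why a drive that can ignore flushes (capacitor-backed power-loss protection) lands in the tens of thousands of iops, while a desktop SSD that must actually empty its cache on every fsync lands in the hundreds.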
So your disks should also be benchmarked with '''-iodepth=1 -fsync=1''' (or '''-sync=1''', see [[#O_SYNC vs fsync vs hdparm -W 0]]).
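The effect of the '''-fsync=1''' flag can be reproduced with a small Python sketch that times synchronous writes, a rough stand-in for fio's randwrite test with fsync after each write (file path and iteration count here are arbitrary choices for illustration):

```python
import os
import tempfile
import time

def fsync_iops(path: str, block_size: int = 4096, iters: int = 100) -> float:
    """Time writes where each one is followed by fsync (like fio -fsync=1)."""
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.monotonic()
        for _ in range(iters):
            os.write(fd, buf)
            os.fsync(fd)  # force the drive to persist this write before the next
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
    return iters / elapsed

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
print(f"{fsync_iops(path):.0f} fsync'd write IOPS")
os.unlink(path)
```

Note this writes sequentially to a file rather than randomly to a raw device, so it understates the difference somewhat; for real benchmarking use fio against the actual disk.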
