Ceph performance

* Drive cache in qemu is controlled by the `cache` option (surprise). It can be left unset, or set to writethrough, writeback, none, unsafe or directsync. With RBD this option also affects rbd cache, the cache on the side of Ceph's client library (librbd); see the libvirt sketch after this list for where the option goes.
* But cache=unsafe doesn't work with RBD: it still waits for write confirmations. And writethrough, unset and directsync are basically equivalent.
* RBD cache helps a lot on HDDs, but on all-flash clusters it slows everything down. Parts of it are implemented with locks, parts are single-threaded; people are trying to optimize it, but the work isn't finished yet. The relevant client-side options are sketched in the ceph.conf example below.
* There are the following drive emulation options: lsi (slowest), virtio-scsi (fast), virtio (fastest, but can't do TRIM until QEMU 4.0). virtio-scsi can use multiple queues and thus should be the fastest with fast underlying storage (with a local NVMe?), but with Ceph it doesn't seem to matter. The sketch below also shows how a multi-queue virtio-scsi controller is declared.
* The filesystem also slows things down! Specifically, it updates the inode mtime on each small write if you don't have lazytime enabled. mtime is part of the metadata, so this change is journaled, which makes the <tt>fio -sync=1 -iodepth=1 -direct=1</tt> test result 3-4 times worse when you run it over a file in a filesystem; see the fio sketch below.
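
A rough libvirt illustration of the two knobs above, the cache mode and the emulation type. The pool name, image name, monitor address and queue count are made up for the example, and cephx authentication is omitted:

<pre>
<!-- virtio-scsi controller with multiple queues (the queue count here is arbitrary) -->
<controller type='scsi' model='virtio-scsi'>
  <driver queues='4'/>
</controller>

<!-- RBD-backed disk; cache= is the qemu cache option discussed above -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source protocol='rbd' name='rbd/testimage'>
    <host name='203.0.113.10' port='6789'/>
  </source>
  <target dev='sda' bus='scsi'/>
</disk>
</pre>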
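For reference, the librbd cache mentioned above can also be controlled from the client side in ceph.conf (a minimal sketch; note that qemu's cache= setting may override these):

<pre>
[client]
# the librbd cache discussed above; consider disabling it on all-flash clusters
rbd cache = true
rbd cache writethrough until flush = true
</pre>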
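And a sketch of the filesystem overhead test from the last item. The device and mount point are placeholders, and both commands destroy data on them, so only point this at scratch devices:

<pre>
# baseline: sync/direct random writes at iodepth=1 against the raw device
fio -name=raw -ioengine=libaio -rw=randwrite -bs=4k -sync=1 -iodepth=1 -direct=1 -runtime=60 -filename=/dev/rbd0

# the same test over a file in a filesystem on that device
mkfs.ext4 /dev/rbd0
mkdir -p /mnt/test
mount -o lazytime /dev/rbd0 /mnt/test   # drop lazytime to see the mtime journaling penalty
fio -name=fs -ioengine=libaio -rw=randwrite -bs=4k -sync=1 -iodepth=1 -direct=1 -runtime=60 -size=8G -filename=/mnt/test/testfile
</pre>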
