Ceph performance

* The first recommended tool is again `fio`, this time with `-ioengine=rbd -pool=<your pool> -rbdname=<your image>` (see the example after this list). All of the above tests that are valid for raw drives can be repeated for RBD and they mean the same things; only the sync, direct and invalidate flags don't matter, because RBD has no concept of "sync" - all operations are always "sync" - and there is no page cache involved either, so "direct" also has no effect.
* The second recommended tool, especially useful for hunting down performance problems, comes in several improved varieties of "Mark's bench" from the Russian Ceph chat: https://github.com/rumanzo/ceph-gobench or https://github.com/vitalif/ceph-bench. Both use a non-replicated Ceph pool (size=1), create several 4 MB objects (16 by default) on each separate OSD and do random single-threaded 4 KB writes to them. This allows you to identify problematic OSDs by benchmarking them separately.
*: To create the non-replicated benchmark pool, use {{Cmd|ceph osd pool create bench 128 replicated; ceph osd pool set bench size 1; ceph osd pool set bench min_size 1}}. Just note that 128 (the PG count) should be enough for all OSDs to get at least one PG each (see the note after this list for a way to verify this).
* Do not use `rados bench`. It creates a small number of objects (1-2 per thread), so all of them always reside in cache and inflate the results far beyond what they should be.
* You can also use the simple `fio -ioengine=libaio` with a kernel-mounted RBD. However, that requires disabling some features of the RBD image, because the kernel client still lacks support for them (see the sketch after this list). Note that despite the overhead of moving data in and out of the kernel, the kernel client is actually faster.
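For reference, a minimal latency-oriented run against an RBD image might look like the sketch below; the pool name `rbd` and image name `testimg` are placeholders, and the block size, queue depth and rw pattern should be chosen according to what you want to measure (see the raw drive tests above):

<pre>
fio -ioengine=rbd -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -time_based -pool=rbd -rbdname=testimg
</pre>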
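If in doubt whether 128 PGs were enough for the benchmark pool, one way to check is to list its PGs, for example with {{Cmd|ceph pg ls-by-pool bench}}, and look at the ACTING column: every OSD should appear there at least once, otherwise increase the PG count.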
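For the kernel client variant, a rough sketch might look like the following, assuming an image `testimg` in pool `rbd` that gets mapped to /dev/rbd0; the exact set of features that has to be disabled depends on your kernel version, and `rbd map` itself prints a hint when it refuses to map an image:

<pre>
# disable features unsupported by the kernel client (the exact set varies by kernel)
rbd feature disable rbd/testimg object-map fast-diff deep-flatten
# map the image as a block device; the command prints the device name, e.g. /dev/rbd0
rbd map rbd/testimg
# the kernel path does go through the page cache, so use -direct=1 here
fio -ioengine=libaio -direct=1 -name=test -bs=4k -iodepth=1 -rw=randwrite -runtime=60 -time_based -filename=/dev/rbd0
</pre>

When done, unmap the device with {{Cmd|rbd unmap /dev/rbd0}}.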
