Recommended benchmarking tools:
* The first recommended tool is again `fio`, with `-ioengine=rbd -pool=<your pool> -rbdname=<your image>`. All of the above tests valid for raw drives can be repeated for RBD and they mean the same things. Sync, direct and invalidate flags can be omitted, because RBD has no concept of «sync» — all operations are always «sync». And there's no page cache involved either, so «direct» also doesn't mean anything. An example command line is shown after this list.
* The second recommended tool, especially useful for hunting down performance problems, comes in several improved varieties of «Mark's bench» from the Russian Ceph chat: https://github.com/rumanzo/ceph-gobench or https://github.com/vitalif/ceph-bench. Both use a non-replicated Ceph pool (size=1), create several 4MB objects (16 by default) in each separate OSD and do random single-thread 4kb writes in randomly selected objects within one OSD. This mimics random writes to RBD and allows you to determine the problematic OSDs by benchmarking them separately. The original Mark's bench (outdated) was here: https://github.com/socketpair/ceph-bench
*: To create the non-replicated benchmark pool use {{Cmd|ceph osd pool create bench 128 replicated; ceph osd pool set bench size 1; ceph osd pool set bench min_size 1}}. Just note that the PG count (128 here) should be large enough for all OSDs to get at least one PG each.
* Do not use `rados bench`. It creates a small number of objects (1-2 per thread), so all of them always reside in cache and improve the results far beyond what they should be.
* You can also use the simple `fio -ioengine=libaio` with a kernel-mounted RBD. However, that requires disabling some features of the RBD image, because the kernel client still lacks support for them. Note that despite the overhead of moving data in and out of the kernel, the kernel client is actually faster. A sketch of this variant is shown after this list.
* And you can also run the same `fio` from inside your VMs; the results are usually similar to the above. Just note that the result also depends on the storage driver being used: virtio is the fastest, virtio-scsi is slightly slower and everything else (like LSI emulation) is terribly slow. Results are also considerably affected by whether the RBD cache is enabled or not (with QEMU, cache=writeback enables the RBD cache and cache=none disables it). For random reads or writes, disabling the RBD cache is faster. See the disk definition sketch after this list.
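As an illustration of the first item above, here is a minimal single-threaded random-write test through librbd. The pool and image names (`rbd`/`testimg`) and the `admin` client are placeholders, assuming the image already exists:

{{Cmd|fio -ioengine=rbd -name=test -clientname=admin -pool=rbd -rbdname=testimg -rw=randwrite -bs=4k -iodepth=1 -runtime=60 -time_based}}

Change `-rw` to write, `-bs` to 4M and raise `-iodepth` to get a linear throughput figure instead of single-thread write iops.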
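For the kernel-client variant, a rough sketch under the same placeholder names (the exact set of features you have to disable depends on your kernel version, and `rbd map` prints the actual device name to pass to `-filename`):

{{Cmd|rbd feature disable rbd/testimg object-map fast-diff deep-flatten; rbd map rbd/testimg; fio -ioengine=libaio -direct=1 -name=test -rw=randwrite -bs=4k -iodepth=1 -runtime=60 -time_based -filename=/dev/rbd0}}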
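And for the in-VM case, a hedged example of a libvirt disk definition that selects the fast virtio driver and the writeback cache mode (monitor host, pool and image names are placeholders; the <auth> section for cephx is omitted):

<pre>
<disk type='network' device='disk'>
  <!-- cache='writeback' enables the RBD cache; switch to cache='none' to benchmark with it disabled -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/testimg'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
</pre>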
== Bluestore vs Filestore ==