Ceph performance

Test your Ceph cluster
* The first recommended tool is again `fio`, with `-ioengine=rbd -pool=<your pool> -rbdname=<your image>`. All of the above tests that are valid for raw drives can be repeated for RBD and they mean the same things. The sync, direct and invalidate flags can be omitted, because RBD has no concept of «sync»: all operations are always «sync». There is also no page cache involved, so «direct» doesn’t mean anything either. An example command line is given after this list.
* The second recommended tool, especially useful for hunting down performance problems, comes in several improved varieties of «Mark’s bench» from the Russian-speaking Ceph chat: https://github.com/rumanzo/ceph-gobench or https://github.com/vitalif/ceph-bench. Both use a non-replicated Ceph pool (size=1), create several 4 MB objects (16 by default) on each separate OSD and do random single-threaded 4 KB writes into randomly selected objects within one OSD. This mimics random writes to RBD and makes it possible to identify problematic OSDs by benchmarking them separately. The original Mark’s bench (now outdated) was here: https://github.com/socketpair/ceph-bench
*: To create the non-replicated benchmark pool, use {{Cmd|ceph osd pool create bench 128 replicated; ceph osd pool set bench size 1; ceph osd pool set bench min_size 1}}. Just note that 128 (the PG count) should be enough for all OSDs to get at least one PG each (see the check after this list).
* Do not use `rados bench`. It creates a small number of objects (1-2 per thread), so all of them always reside in cache and the results come out inflated far beyond what they should be.
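For example, a single-threaded random-write latency test against an RBD image could look like the following. This is only a sketch: the pool name «rbd» and the image name «testimg» are placeholders, and the image has to exist before the test. Block size, iodepth and rw mode can be varied exactly like in the raw drive tests above.

{{Cmd|rbd create --size 10G rbd/testimg}}

{{Cmd|1=fio -ioengine=rbd -pool=rbd -rbdname=testimg -name=test -rw=randwrite -bs=4k -iodepth=1 -runtime=60 -time_based}}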
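To quickly check that the benchmark pool really covers all OSDs, you can list its PGs and look at which OSD each of them maps to (with size=1 every PG lives on exactly one OSD). The pool name «bench» here matches the creation command above; the ACTING column of the output shows the OSD of each PG:

{{Cmd|ceph pg ls-by-pool bench}}

If some OSD does not appear in the output, increase the PG count of the pool.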