Ceph performance

== Controllers ==
* SATA is OK, you don't need SAS at all. SATA is simple and definitely faster than old RAID controllers.
* Don't use RAID unless you're absolutely sure you need it. All drives should be connected in pass-through (HBA) mode.
* If your RAID controller can't do pass-through mode, reflash it. If you can't reflash it, throw it away and buy an HBA ("RAID without RAID"), for example, LSI 9300-8i.
* If you still can't throw it away, disable all caches and pray :) The problem is that RAID controllers sometimes ignore fsync requests, so Ceph can become corrupted on a sudden power loss. Even some HBAs may do that (namely, some Adaptecs).
* In theory, you may try to leverage the battery- (or supercap-) backed controller cache in RAID0 mode to improve write latency. However, that can easily turn into shooting yourself in the foot. At least run some power-unplug tests if you do that.
* IOPS difference between RAID and HBA/SATA may be very noticeable. A bad or old RAID controller can easily become a bottleneck.
* HBAs also have IOPS limits. For example, it's ~280000 IOPS for the whole LSI 9211-8i controller.
* Always turn on blk-mq for SAS and NVMe, or just use a recent kernel version: blk-mq is on by default since 4.18 or so. However, blk-mq does almost nothing for SATA.
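As a quick sanity check of the last point, you can inspect which I/O scheduler stack each block device is using via sysfs (paths are standard on Linux; the device names shown are whatever the machine happens to have):

```shell
# Print the active I/O scheduler for every block device.
# Under blk-mq (the default since ~4.18) you see multi-queue schedulers,
# e.g. "[mq-deadline] kyber bfq none"; the legacy single-queue stack
# shows entries like "noop deadline [cfq]" instead.
for q in /sys/block/*/queue/scheduler; do
    [ -e "$q" ] || continue          # skip if the glob matched nothing
    dev=${q#/sys/block/}
    printf '%s: %s\n' "${dev%%/*}" "$(cat "$q")"
done
```

To check whether a controller honors fsync at all, a sync-write latency test with fio (e.g. `fio --name=test --filename=/dev/sdX --direct=1 --sync=1 --rw=randwrite --bs=4k --iodepth=1 --runtime=30 --time_based`, destructive to data on that device) followed by an actual power-unplug is far more telling than any datasheet.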
== CPUs ==