Ceph performance

1351 bytes added, 08:31, 24 July 2019
So just stick with 30 GB for all Bluestore OSDs :)
 
== Controllers ==
 
* SATA is OK, you don't need SAS at all. SATA is simple and definitely faster than old RAID controllers.
* Don't use RAID unless you're absolutely sure you need it. All drives should be connected using the pass-through (HBA) mode.
* If your RAID controller can't do passthrough mode, reflash it. If you can't reflash it, throw it away and buy an HBA ("RAID without RAID"), for example, LSI 9300-8i.
* If you still can't throw it away - disable all caches and pray :) The problem is that RAID controllers sometimes ignore fsync requests, so Ceph can become corrupted on a sudden power loss. Even some HBAs may do that (namely some Adaptecs).
* In theory, you may try to leverage the battery- (or supercap-) backed controller cache in RAID0 mode to improve write latency. However, that can easily turn into shooting yourself in the foot. At the very least, run some power-unplug tests if you do that.
* IOPS difference between RAID and HBA/SATA may be very noticeable. A bad or old RAID controller can easily become a bottleneck.
* HBAs also have IOPS limits. For example, the LSI 9211-8i caps out at roughly 280000 IOPS for the whole controller.
* Always turn on blk-mq for SAS and NVMe - or just use a recent kernel, blk-mq is on by default since 4.18 or so. However, blk-mq does almost nothing for SATA.
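One crude way to run the power-safety sanity check mentioned above is to time synchronous writes: a spinning disk whose write path honestly persists every flush is bounded by rotational latency (very roughly 30-200 fsyncs per second), so thousands of fsync'd writes per second on an HDD strongly suggest some cache in the chain is acknowledging flushes early. A minimal sketch (the probe file path, block size and iteration count are arbitrary choices, not anything Ceph-specific):

```python
import os
import tempfile
import time


def fsync_iops(path: str, iterations: int = 100) -> float:
    """Measure write+fsync round trips per second for 4 KiB blocks.

    Run it against a file on the device under test. Suspiciously high
    numbers on an HDD mean fsync is being absorbed by a volatile cache.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    block = b"\0" * 4096
    try:
        start = time.monotonic()
        for _ in range(iterations):
            os.write(fd, block)
            os.fsync(fd)  # ask the whole storage stack to persist the data
        elapsed = time.monotonic() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return iterations / elapsed


if __name__ == "__main__":
    probe = os.path.join(tempfile.gettempdir(), "fsync-probe.bin")
    print(f"{fsync_iops(probe):.0f} fsync/s")
```

This is the same idea as fio's `fsync=1` sync-write test, just small enough to read in one glance; for the actual power-unplug test you still have to pull the plug and check the data afterwards.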
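To see whether your disks are actually on blk-mq, you can look at each device's `scheduler` file in sysfs: the active scheduler is printed in brackets, and multi-queue schedulers (`none`, `mq-deadline`, `kyber`, `bfq`) only exist on the blk-mq path, while legacy ones like `cfq` or plain `deadline` mean the old single-queue path. A small sketch that walks `/sys/block` (output format is my own, nothing standard):

```python
import glob

# Schedulers that only exist on the multi-queue (blk-mq) path.
MQ_SCHEDULERS = {"none", "mq-deadline", "kyber", "bfq"}


def active_scheduler(contents: str) -> str:
    """Extract the active scheduler from a sysfs 'scheduler' file.

    The kernel wraps the current choice in brackets, e.g.
    '[mq-deadline] kyber none' -> 'mq-deadline'.
    """
    for token in contents.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return contents.strip()


def report() -> None:
    for path in sorted(glob.glob("/sys/block/*/queue/scheduler")):
        dev = path.split("/")[3]
        with open(path) as f:
            sched = active_scheduler(f.read())
        mode = "blk-mq" if sched in MQ_SCHEDULERS else "legacy"
        print(f"{dev}: {sched} ({mode})")


if __name__ == "__main__":
    report()
```

On kernels since ~4.18 everything should report blk-mq; on older kernels you can force it with the `scsi_mod.use_blk_mq=1` boot parameter.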
 
== CPUs ==
 
== Drive cache is slowing you down ==
== RAID WRITE HOLE ==