
Ceph performance

36 bytes added, 10:46, 13 August 2019
https://www.micron.com/-/media/client/global/documents/products/other-documents/micron_9300_and_red_hat_ceph_reference_architecture.pdf
New NVMes are Micron 9300 drives of the maximum available capacity, 12.8 TB. Each of them delivers even more write iops than the 9200: 310k instead of 260k. Everything else remains the same.
The new write performance result for 100 RBD clients is 477029 iops (36 % more than in the previous test). Remember that it’s still only 4770 iops per client, though. For 10 RBD clients the result is better: 294000 iops, which stands for 29400 iops per client.
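The per-client numbers are just the aggregate result divided by the client count; a trivial check using the figures quoted above:

```python
# Per-client write iops = aggregate iops / number of RBD clients
# (aggregate numbers are the ones from the Micron test report)
results = {100: 477029, 10: 294000}  # clients -> total write iops
for clients, total in sorted(results.items()):
    print(f"{clients} clients: {total / clients:.0f} iops per client")
```

Note how scaling the client count down 10x raises the per-client result more than 6x: the cluster is nowhere near saturated by a single client.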
What helped the performance? I guess the configuration did. In comparison to the previous test they changed the following:
* disabled messenger checksums (ms_crc_data=false) and bluestore checksums (bluestore_csum_type=none)
* tuned rocksdb: <tt>bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,<br />write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,<br />level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB</tt>. This divides into:
** The main part is probably the 64x32x4 MB memtable setting (number x merge x size) instead of the default 4x1x256 MB. The effect of this change isn’t really clear to me. It may slightly reduce CPU load because sorting a big memtable is slower than sorting a small one. However, 32x4 MB compactions probably aren’t much faster than 1x256 MB ones.
** max_bytes_for_level_base is changed dramatically — it’s raised to 6 GB from 256 MB!
** added compaction threads
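Put together, the changes from this list amount to a ceph.conf fragment like the following (a sketch: the option names and values are the ones quoted above from the Micron document, but the section placement is my assumption):

```ini
[global]
# skip CRC checksumming of messenger traffic
ms_crc_data = false

[osd]
# disable BlueStore data checksums entirely
bluestore_csum_type = none
# 64x32x4 MB memtables, 6 GB max_bytes_for_level_base, 32 compaction threads
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=64,min_write_buffer_number_to_merge=32,recycle_log_file_num=64,compaction_style=kCompactionStyleLevel,write_buffer_size=4MB,target_file_size_base=4MB,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB
```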
Other remarks:
* cephx was already disabled in the previous version of the test. This time they also disabled signatures. However, it seems pointless: with cephx disabled, nothing gets signed anyway.
* they already had debug objecter = 0/0 and the rest of the debug levels set to zero.
* it seems they haven’t tried changing prefer_deferred_size and min_alloc_size.
* the new NVMes definitely didn’t change anything by themselves. 260000 write iops per drive is already over the top for Ceph anyway.
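For reference, the settings mentioned in these remarks look like this in ceph.conf (a sketch assuming current Ceph option names; cephx itself was off in both tests, which is what makes the signature options redundant):

```ini
[global]
# cephx fully disabled, as in both versions of the test
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
# signature options disabled on top of that -- redundant,
# because with cephx off nothing is signed in the first place
cephx_require_signatures = false
cephx_sign_messages = false
# debug logging already at zero in the previous test
debug_objecter = 0/0
debug_ms = 0/0
debug_osd = 0/0
```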