Ceph performance



WHY SO SLOW

Bad news: all writes in Ceph are transactional, even those that aren’t explicitly requested to be. A write operation does not complete until it is written into all OSD journals and fsync()'ed to the disks. This is done to prevent RAID WRITE HOLE-like situations (see #RAID Write Hole).
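
To get a feeling for what this costs, you can time single-block writes with and without fsync(). Below is a minimal Python sketch; the file name, block size and iteration count are arbitrary choices, and a real benchmark would use a dedicated tool like fio against the actual device:

```python
# Time 4 KiB writes with and without an fsync() after each one.
import os, time

PATH  = "testfile"        # hypothetical file on the drive you want to check
BLOCK = b"\0" * 4096      # one 4 KiB block, similar in size to a journal entry
ITERS = 100

def bench(do_fsync):
    fd = os.open(PATH, os.O_CREAT | os.O_WRONLY, 0o644)
    start = time.monotonic()
    for i in range(ITERS):
        os.pwrite(fd, BLOCK, i * 4096)
        if do_fsync:
            os.fsync(fd)  # force the data (and the drive cache) to stable storage
    elapsed = time.monotonic() - start
    os.close(fd)
    return ITERS / elapsed

print("buffered writes: %.0f iops" % bench(False))
print("fsync'ed writes: %.0f iops" % bench(True))
os.unlink(PATH)
```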

RAID Write Hole

Even plain RAID, which is much simpler than distributed software-defined storage like Ceph, is still a distributed storage system: every system with multiple physical drives is distributed, because each drive behaves and commits (or fails to commit) data independently of the others.

Write Hole is the name for several situations in RAID arrays where drives go out of sync. Suppose you have a simple RAID 1 array of two disks. You write a sector: the write command is sent to both drives, and then a power failure occurs before the commands finish. Now, after the system boots again, you don’t know whether your replicas contain the same data, because you don’t know which drive succeeded in writing it and which didn’t.

You say: OK, I don’t care. I’ll just read from both drives, and if I encounter different data I’ll pick one of the copies; I’ll get either the old data or the new.

But then imagine that you have RAID 5. Now you have three drives: two for data and one for parity. Suppose you overwrite a sector again. Before the write, your disks contain (A1), (B1) and (A1 XOR B1). To overwrite (B1) with (B2), you write (B2) to the second disk and (A1 XOR B2) to the third. A power failure occurs again… And then, on the next boot, you also find out that disk 1 (the one you didn’t write anything to) is dead. You might think you can still reconstruct your data, because you have RAID 5 and 2 out of 3 disks are still alive.

But imagine that disk 2 succeeded in writing the new data while disk 3 failed. Now you have: (lost disk), (B2) and (A1 XOR B1). If you try to reconstruct A from these copies you’ll get (A1 XOR B1 XOR B2), which is obviously not equal to A1. Bang! Your RAID 5 has corrupted data that wasn’t even being overwritten.
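
The arithmetic is easy to check. Here is the same scenario as a tiny Python sketch, with one-byte «sectors» and arbitrarily chosen values:

```python
# RAID 5 write hole in miniature: disk1 = A1, disk2 = B1, disk3 = parity.
A1, B1, B2 = 0x41, 0x42, 0x62

parity_old = A1 ^ B1        # what disk 3 held before the interrupted update
disk2 = B2                  # disk 2 managed to commit the new data...
disk3 = parity_old          # ...but disk 3 lost power before updating parity

# Disk 1 then dies, so A has to be rebuilt from the two survivors:
A_rebuilt = disk2 ^ disk3   # = B2 XOR A1 XOR B1, not A1

print(hex(A1), hex(A_rebuilt))   # 0x41 vs 0x61: the untouched data is corrupted
assert A_rebuilt != A1
```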

Because of this problem, Linux `mdadm` refuses to start an incomplete array at all after an unclean shutdown. There’s no solution to this problem other than full data journaling at the level of each drive. And that is… exactly what Ceph does! So Ceph is actually safer than RAID. :)

Quick insight into SSD and flash memory organization

Although flash memory allows fast random writes in small blocks (usually 512 to 4096 bytes), its distinctive feature is that every block must be erased before being written to. Erasing is slow compared to reading and writing, so manufacturers design memory chips so that they always erase a large group of blocks at once, which takes roughly the same time as erasing a single block would. This group of blocks, called an «erase unit», is typically 2–4 megabytes in size. Another distinctive feature is that the total number of erase/program cycles is physically limited: after several thousand cycles (a usual number for MLC memory) the block becomes faulty and stops accepting new writes, or even loses the data previously written to it. Denser and cheaper memory chips (MLC/TLC/QLC, 2/3/4 bits per cell) have smaller erase limits, while sparser and more expensive ones (SLC, 1 bit per cell) have bigger limits (up to 100000 rewrites). However, all limits are still finite, so stupidly overwriting the same block in place would be very slow and would wear the SSD out very rapidly.
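
To see just how bad naive in-place overwrites would be, here is a back-of-the-envelope sketch in Python; the 2 MB erase unit and the 3000-cycle limit are simply the typical figures mentioned above:

```python
# Naive in-place overwrite: each 4 KiB update forces a read-modify-erase-write
# of the whole erase unit the block lives in.
ERASE_UNIT = 2 * 1024 * 1024   # typical erase unit size
BLOCK      = 4096              # one small logical write
CYCLES     = 3000              # typical MLC program/erase limit

write_amplification = ERASE_UNIT // BLOCK
print("write amplification: %dx" % write_amplification)        # 512x

# Useful data one erase unit could absorb before wearing out, if every
# 4 KiB update burned a whole program/erase cycle:
useful = CYCLES * BLOCK
print("useful 4K writes per erase unit: ~%.0f MiB" % (useful / 2**20))  # ~12 MiB
```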

But that’s not the case with modern SSDs: they are very fast and usually last very long, and even cheap models are usually pretty solid. Why? The credit goes to SSD controllers: SSDs contain very smart and powerful controllers, usually with at least 4 cores and 1–2 GHz clock frequency, which makes them as powerful as mobile phone processors. All that power is required to make the FTL firmware run smoothly. FTL stands for «Flash Translation Layer»; it is the firmware responsible for translating addresses of small logical blocks into physical addresses on the flash memory chips. Every write request is always put into space freed in advance, and the FTL just remembers the new physical location of the data. This makes writes very fast. The FTL also defragments free space and moves blocks around to achieve uniform wear across all memory cells; this feature is called Wear Leveling. SSDs also usually have some extra physical space reserved to add even more endurance and to make wear leveling easier; this is called overprovisioning. Pricier server SSDs have a lot of space overprovisioned; for example, the Micron 5100 Max has 37.5 % of its physical memory reserved (an extra 60 % on top of the user-visible capacity).
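
The translation idea itself is simple. Below is a toy Python sketch of just the address-mapping part (no garbage collection, no wear leveling, all names invented for illustration); a real FTL is of course vastly more complicated:

```python
# Toy FTL: every logical write goes into pre-freed physical space, and the
# mapping table is updated to point at the new location.
class ToyFTL:
    def __init__(self, physical_blocks):
        self.mapping = {}                     # logical block -> physical block
        self.free = list(range(physical_blocks))
        self.flash = {}                       # physical block -> data

    def write(self, logical, data):
        phys = self.free.pop(0)               # always write into a pre-freed block
        self.flash[phys] = data
        old = self.mapping.get(logical)
        self.mapping[logical] = phys          # remember the new physical location
        if old is not None:
            self.free.append(old)             # old copy becomes garbage, erased later

    def read(self, logical):
        return self.flash[self.mapping[logical]]

ftl = ToyFTL(physical_blocks=8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")                           # the "overwrite" lands in a new block
print(ftl.read(0), ftl.mapping)               # b'v2' {0: 1}
```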

And it is also the FTL that makes power loss protection a problem. The mapping tables are metadata which must also be forced into non-volatile memory when you flush the cache, and this is what makes desktop SSDs slow with fsync… In fact, while writing this I thought that they could use RocksDB or a similar LSM-tree based system to store the mapping tables, which could make fsyncs fast even without capacitors. It would waste some journal space and add some extra write amplification (as every journal block would only contain one write), but it would still make writes fast. So… either they don’t know about LSM trees, or the FTL metadata is not the only problem for fsync.

When I tried to lecture someone on the mailing list about «all SSDs doing fsyncs correctly», I got this as the reply: https://www.usenix.org/system/files/conference/fast13/fast13-final80.pdf. Long story short, it says that in 2013 a common scenario was SSDs not syncing metadata on fsync calls at all, which led to all kinds of funny things on a power loss, up to (!!!) total failures of some SSDs.

There also exist some very old SSDs without capacitors (OCZ Vector/Vertex) which are capable of very high sync iops numbers. How do they work? Nobody knows, but I suspect they just don’t do safe writes :). The core principles of flash memory overwrites haven’t changed over the years, and those SSDs were based on FTLs just as modern ones are.

So it seems there are two kinds of «power loss protection»: simple PLP means «we do fsyncs and don’t die or lose your data when a power loss occurs», and advanced PLP means that fsync’ed writes are just as fast as non-fsync’ed ones. It also seems that these days (2018–2019) simple PLP is already the standard and most SSDs don’t lose data on power failure.

A bonus: USB thumb drives

Why are USB flash drives so slow, then? In terms of small random writes they usually deliver only 2–3 operations per second, while being built around similar flash memory chips: maybe slightly cheaper and worse ones, but obviously not a thousand times worse.

The answer also lies in the FTL. Thumb drives have an FTL too, and they even do some Wear Leveling, but it’s very small and dumb compared to SSD FTLs. The controller has a slow CPU and very little memory, so it doesn’t have room to store a full mapping table for small blocks; instead it translates the positions of big blocks (1–2 megabytes or even bigger). Writes are buffered and then flushed one block at a time, and there is a small limit on the number of blocks that can be kept «open» at once, usually only between 3 and 6.

This limit is always sufficient for copying big files to a flash drive formatted with any common filesystem: one open block receives the metadata, another receives the data, and the drive just moves on. But when you start doing random writes you stop hitting the open blocks, and this is where the lags come in.
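
A toy simulation makes the difference obvious. Here the drive is assumed to keep 4 erase units open at a time (a made-up but plausible number within the 3–6 range above), and every eviction stands in for a slow full-block flush:

```python
# Count how often a thumb-drive-like FTL has to flush a whole erase unit,
# comparing a sequential file copy with scattered 4 KiB random writes.
import random
from collections import OrderedDict

ERASE_UNIT = 2 * 1024 * 1024
OPEN_LIMIT = 4                              # assumed number of open blocks

def count_flushes(offsets):
    open_blocks = OrderedDict()             # LRU of currently open erase units
    flushes = 0
    for off in offsets:
        unit = off // ERASE_UNIT
        if unit in open_blocks:
            open_blocks.move_to_end(unit)   # cheap: the unit is already open
        else:
            if len(open_blocks) >= OPEN_LIMIT:
                open_blocks.popitem(last=False)   # evict = rewrite a whole 2 MB block
                flushes += 1
            open_blocks[unit] = True
    return flushes

sequential = [i * 4096 for i in range(4096)]                             # 16 MiB copy
scattered  = [random.randrange(8 * 2**30) & ~4095 for _ in range(4096)]  # random 4K writes
print("sequential copy:", count_flushes(sequential), "flushes")   # just a few
print("random writes:  ", count_flushes(scattered), "flushes")    # nearly one per write
```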