
Ext4 vs XFS

860 bytes added, 14:51, 20 December 2013
sysbench random read/write 16K in 8M files
* XFS supports fully dynamic inode allocation, i.e. you’ll never run out of inodes, and at the same time you don’t need to waste disk space by reserving it for inodes
* Ext4 does NOT support changing inode count without reformatting the filesystem, even with resize2fs; by default, 1/64 of disk space is reserved for inodes (!!!)
*: It's not hard to change inode count in theory: (1) move data blocks out of the way if we need to reserve them for inodes (2) change inode numbers in all directory entries (3) overwrite/move inode bitmaps and tables. But it's not implemented :-(
* XFS does NOT support shrinking of a filesystem at all (you can only grow it)
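The 1/64 figure follows directly from ext4's defaults: a 256-byte inode is reserved for every 16384 bytes of disk space (the default bytes-per-inode ratio in mke2fs.conf), and 256/16384 = 1/64. A quick sketch of the arithmetic (the 1 TB disk size is a made-up example):

```shell
# ext4 defaults (see /etc/mke2fs.conf): 256-byte inodes, one inode
# reserved per 16384 bytes of disk space, i.e. 256/16384 = 1/64.
inode_size=256
bytes_per_inode=16384

disk_bytes=$((1000 * 1000 * 1000 * 1000))   # hypothetical 1 TB disk
inodes=$((disk_bytes / bytes_per_inode))
reserved_bytes=$((inodes * inode_size))

echo "inodes created at mkfs time: $inodes"
echo "space reserved for inode tables: $((reserved_bytes / 1000000000)) GB"
```

So on a 1 TB ext4 filesystem, roughly 15 GB is spent on inode tables up front, whether you ever use those inodes or not; `mkfs.ext4 -i` / `-N` can change the ratio, but only at format time.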
= Benchmarks =
== Copying the kernel 3.10 source tree (from SSD, with warm cache) ==
* HDD: WD Scorpio Black 2.5" 750GB 7200rpm
* Kernel: 3.12.3 (Debian 3.12.3-1~exp1)

Copy kernel source from SSD to the tested FS and then sync, with warm page cache (i.e. not read-bound):
* xfs, 1 parallel copy: 12.348s
* xfs, 4 parallel copies: 65.883s
* ext4, 1 parallel copy: 7.662s
* ext4, 4 parallel copies: 33.876s

tar 3 kernel source copies from the tested FS to /dev/null (basically just read and discard) after 'echo 3 > /proc/sys/vm/drop_caches':
* xfs: real 26.815s, user 0.936s, sys 1.556s
* ext4: real 5.509s, user 0.584s, sys 0.872s (almost 5 times faster!)
rm 3 kernel source copies and sync after 'echo 3 > /proc/sys/vm/drop_caches':
* xfs: real 7.244s, user 0.148s, sys 2.748s
* ext4: real 8.993s, user 0.108s, sys 2.664s (oh, xfs is in fact faster in this test!)
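The methodology above can be reproduced with roughly the following commands (a sketch: the mount point /mnt/test and the source path /ssd/linux-3.10 are placeholders, not the paths used in the original runs):

```shell
# /mnt/test is the filesystem under test; source path is a placeholder.
# Copy + sync with warm page cache (the source tree sits on an SSD and
# has been read once already, so the test is not read-bound):
time sh -c 'cp -a /ssd/linux-3.10 /mnt/test/copy1 && sync'

# Read-back test: drop caches, then stream the copies to /dev/null via tar:
echo 3 > /proc/sys/vm/drop_caches
time tar cf /dev/null -C /mnt/test copy1 copy2 copy3

# Metadata (unlink) test: drop caches, remove the trees, sync:
echo 3 > /proc/sys/vm/drop_caches
time sh -c 'rm -rf /mnt/test/copy1 /mnt/test/copy2 /mnt/test/copy3 && sync'
```

All of this needs root (for drop_caches) and a freshly formatted filesystem mounted at the test path; the parallel-copy numbers come from running several such `cp` pipelines at once.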
== FS-Mark 3.3, creating 1M files ==
* Kernel: 3.12.3
* sysbench was run with O_DIRECT, so the page cache should also have no impact.
* It’s not a filesystem benchmark at all! It tests raw disk performance, because it keeps ALL prepared files open during the test. It only shows us that neither ext4 nor XFS slows down direct access to the underlying device (which is also good, of course)…
* Probably because of the above note, the filesystems don’t differ: the results are exactly the same for 1x 1GB file and 128x 8MB files… and very similar for 3072x 16KB files (next test below).
* HDD: WD VelociRaptor WD6000HLHX, 10000rpm
* Kernel: 3.10.11 (Debian 3.10-3-amd64)
* Same test, but run with the default 20% dirty_ratio setting. It's clearly seen that the system was using the page cache extensively: ext4 consistently produced unrealistically good results in the single-threaded test...
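A sysbench fileio invocation matching the parameters described above might look like the following (sysbench 1.x syntax; the 16K block size, 128x 8MB file layout, and O_DIRECT flag come from the text, while the run time is an assumed value), together with the dirty_ratio knob mentioned in the last point:

```shell
# Random 16K read/write over 128 x 8MB files with O_DIRECT,
# as described in the text; --time=60 is an assumed value.
sysbench fileio --file-total-size=1G --file-num=128 \
    --file-block-size=16K --file-test-mode=rndrw \
    --file-extra-flags=direct prepare
sysbench fileio --file-total-size=1G --file-num=128 \
    --file-block-size=16K --file-test-mode=rndrw \
    --file-extra-flags=direct --time=60 run
sysbench fileio --file-num=128 cleanup

# dirty_ratio: percent of RAM filled with dirty pages at which writers
# are forced into synchronous writeback; 20 is the kernel default.
sysctl vm.dirty_ratio
sysctl -w vm.dirty_ratio=20    # needs root
```

With O_DIRECT the page cache is bypassed for the data files themselves, which is why dirty_ratio only matters for the buffered (non-direct) runs.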
{|