Ext4 vs XFS
From YourcmcWiki
(Plot belonging to the "filebench fileserver, dirty_ratio=20%" section below: throughput in MB/s vs number of threads.)

<plot>
set xrange [1:64]
set logscale x
set xtics (1, 2, 4, 8, 16, 32, 50, 64)
set yrange [0:400]
set xlabel 'threads'
set ylabel 'MB/s (more is better)'
set xzeroaxis
set grid ytics
set style fill solid 1.0 noborder
set boxwidth 0.7 relative
plot 'xfs.dat' using 1:2 title 'XFS' with linespoints, 'ext4.dat' using 1:2 title 'ext4' with linespoints
DATASET xfs
1.0 182
2.0 88.7
4.0 78.5
8.0 78.3
16.0 83.8
32.0 85.6
50.0 62.0
64.0 39.2
ENDDATASET
DATASET ext4
1.0 382.9
2.0 96.8
4.0 97.5
8.0 94.9
16.0 83.8
32.0 73.3
50.0 61.5
64.0 64.2
ENDDATASET
</plot>
Revision as of 19:13, 19 December 2013
Copy kernel source (from SSD with warm cache)
HDD: WD Scorpio Black 2.5" 750GB 7200rpm
- xfs 1 thread: 12.348s
- xfs 4 threads: 65.883s
- ext4 1 thread: 7.662s
- ext4 4 threads: 33.876s
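The article does not show the harness used for this test; a minimal sketch of how such a parallel-copy timing could be done (the helper name and all paths are assumptions, and the demo uses a tiny tree instead of the real kernel source):

```shell
# Hypothetical reconstruction of the copy test: copy the same source tree
# N times in parallel and time the whole batch.
parallel_copy() {
    src=$1; dst=$2; nthreads=$3
    for i in $(seq 1 "$nthreads"); do
        cp -a "$src" "$dst/copy$i" &   # each copy runs in its own process
    done
    wait                               # done when all background copies finish
}

# Tiny self-contained demo (the real test copied the kernel source tree):
mkdir -p /tmp/pcopy-src /tmp/pcopy-dst
echo data > /tmp/pcopy-src/file
start=$(date +%s)
parallel_copy /tmp/pcopy-src /tmp/pcopy-dst 4
echo "elapsed: $(( $(date +%s) - start ))s"
```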
FS-Mark 3.3, creating 1M files
- HDD: WD Scorpio Black 2.5" 750GB 7200rpm
- fs_mark is a write-only test and it does fsync(), so there should be no skew caused by page cache
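The same no-skew property is easy to check by hand: a write followed by fsync() cannot be satisfied from the page cache alone. A quick sketch with dd (file name and size are arbitrary):

```shell
# conv=fsync makes dd call fsync() before exiting, so the reported time
# includes flushing the data to the device, not just copying it into
# the page cache.
dd if=/dev/zero of=testfile bs=1M count=16 conv=fsync
```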
sysbench random read/write 16K in 8M files
- HDD: WD Scorpio Black 2.5" 750GB 7200rpm
- sysbench was run with O_DIRECT, so the page cache should have no impact here either.
- It's not a filesystem benchmark at all! It tests raw disk performance, because sysbench holds ALL prepared files open during the test. It only shows that XFS doesn't slow down direct access to the underlying device (which is also good, of course)…
- Probably because of the above, the filesystems don't differ: the results are exactly the same for 1x 1GB file and 128x 8MB files… and very similar for 3072x 16KB files (next test below).
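The exact sysbench command line is not given in the article; an invocation along these lines (sysbench 0.4-era fileio syntax, with the file count and sizes taken from the test description and everything else an assumption) would produce such a 16K random read/write run with O_DIRECT:

```shell
# Guarded so the sketch exits cleanly where sysbench is not installed.
command -v sysbench >/dev/null || { echo "sysbench not installed"; exit 0; }

# Prepare 128 files of 8 MB each, then run 16K random reads/writes
# with O_DIRECT (--file-extra-flags=direct), then clean up.
sysbench --test=fileio --file-num=128 --file-total-size=1G \
         --file-block-size=16K --file-test-mode=rndrw \
         --file-extra-flags=direct prepare
sysbench --test=fileio --file-num=128 --file-total-size=1G \
         --file-block-size=16K --file-test-mode=rndrw \
         --file-extra-flags=direct --num-threads=4 run
sysbench --test=fileio --file-num=128 --file-total-size=1G cleanup
```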
sysbench random read/write 16K in 16K files
HDD: WD Scorpio Black 2.5" 750GB 7200rpm
filebench fileserver, dirty_ratio=1%
- HDD: WD VelociRaptor WD6000HLHX, 10000rpm
- The fileserver test is a "read whole file + append + write whole file" workload, run on 10000 files in X threads.
- filebench fails to run the fileserver test with O_DIRECT, so I tried to "disable" the page cache by setting dirty_ratio=1% and ran the tests like this:
echo 1 > /proc/sys/vm/dirty_ratio
echo 0 > /proc/sys/vm/dirty_bytes
echo 0 > /proc/sys/kernel/randomize_va_space
for i in 1 2 4 8 16 32 50 64; do
    echo
    echo "== $i threads =="
    echo
    echo 1 > /proc/sys/vm/drop_caches
    sync
    filebench <<EOF
load fileserver
set \$dir=/media/sdd
set \$nthreads=$i
run 30
EOF
done
echo 20 > /proc/sys/vm/dirty_ratio
filebench fileserver, dirty_ratio=20%
- HDD: WD VelociRaptor WD6000HLHX, 10000rpm
- Same test, but run with the default dirty_ratio=20% setting. It is clearly seen that the system was using the page cache extensively: ext4 consistently posted an unrealistically high result in the single-thread test…
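A simple way to see this caching effect live is to watch the dirty-page counters while the benchmark runs (Linux-specific; a side note, not part of the original methodology):

```shell
# Dirty     = written data sitting in the page cache, not yet on disk;
# Writeback = data currently being flushed. Values are in kB.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```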