Ext4 vs XFS: difference between revisions

The following section was added in this revision:

== [http://sourceforge.net/projects/filebench/ filebench] fileserver, dirty_ratio=20% ==

* HDD: WD VelociRaptor WD6000HLHX, 10000rpm
* Same fileserver test as in the dirty_ratio=1% section, but run with the default 20% dirty_ratio setting. It is clearly seen that the system was using the page cache extensively.

<plot>
set xrange [1:64]
set logscale x
set xtics (1, 2, 4, 8, 16, 32, 50, 64)
set yrange [0:3200]
set xlabel 'threads'
set ylabel 'ops/s (more is better)'
set xzeroaxis
set grid ytics
set style fill solid 1.0 noborder
set boxwidth 0.7 relative
plot 'xfs.dat' using 1:2 title 'XFS' with linespoints, 'ext4.dat' using 1:2 title 'ext4' with linespoints
DATASET xfs
1.0  7755
2.0  3796
4.0  3333
8.0  3320
16.0  3559
32.0  3653
50.0  2650
64.0  1671
ENDDATASET
DATASET ext4
1.0  16415
2.0  4136
4.0  4108
8.0  4026
16.0  3570
32.0  3147
50.0  2632
64.0  2778
ENDDATASET
</plot>

Revision as of 19:02, 19 December 2013

Copy kernel source (from SSD with warm cache)

HDD: WD Scorpio Black 2.5" 750GB 7200rpm

  • xfs 1 thread: 12.348s
  • xfs 4 threads: 65.883s
  • ext4 1 thread: 7.662s
  • ext4 4 threads: 33.876s
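
The exact commands are not shown here; a minimal sketch of this kind of copy test, assuming a kernel tree at /ssd/linux-3.12 and the filesystem under test mounted at /mnt/test (both paths are assumptions):

# warm-up copy so the source tree is fully in the page cache (not timed)
cp -a /ssd/linux-3.12 /mnt/test/warmup && rm -rf /mnt/test/warmup
# 1 thread
time cp -a /ssd/linux-3.12 /mnt/test/copy0
# 4 threads: four concurrent copies into separate target directories
time sh -c 'for n in 1 2 3 4; do cp -a /ssd/linux-3.12 /mnt/test/copy$n & done; wait'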

FS-Mark 3.3, creating 1M files

HDD: WD Scorpio Black 2.5" 750GB 7200rpm
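
The fs_mark command line is not given; a plausible invocation (the target directory, the 4 KB file size and the single thread are assumptions):

# create 1,000,000 files of 4 KB each in one thread
fs_mark -d /mnt/test -n 1000000 -s 4096 -t 1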

sysbench random read/write 16K in 8M files

HDD: WD Scorpio Black 2.5" 750GB 7200rpm
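
"16K in 8M files" is read here as random 16 KB requests inside files of 8 MB each. A sketch using the sysbench 0.4/0.5 fileio syntax of that time (the file count, total size and duration are assumptions):

# run from a directory on the filesystem under test: 128 files x 8 MB = 1 GB working set
sysbench --test=fileio --file-num=128 --file-total-size=1G prepare
sysbench --test=fileio --file-num=128 --file-total-size=1G \
    --file-test-mode=rndrw --file-block-size=16384 \
    --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-num=128 --file-total-size=1G cleanup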

sysbench random read/write 16K in 16K files

HDD: WD Scorpio Black 2.5" 750GB 7200rpm
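
Likewise, "16K in 16K files" is read here as 16 KB requests against files of 16 KB each, so every request covers a whole small file (again an assumption):

# 16384 files x 16 KB = 256 MB working set; raise 'ulimit -n' above the file count first
sysbench --test=fileio --file-num=16384 --file-total-size=256M prepare
sysbench --test=fileio --file-num=16384 --file-total-size=256M \
    --file-test-mode=rndrw --file-block-size=16384 \
    --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-num=16384 --file-total-size=256M cleanup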

filebench fileserver, dirty_ratio=1%

  • HDD: WD VelociRaptor WD6000HLHX, 10000rpm
  • the fileserver test is a read whole file + append + write whole file workload, run on 10000 files in X threads
  • filebench fails to run the fileserver test with O_DIRECT, so I tried to "disable" the page cache using dirty_ratio=1% and ran the tests like this:
echo 1 > /proc/sys/vm/dirty_ratio   # allow only 1% of memory in dirty pages, so writeback starts almost immediately
echo 0 > /proc/sys/vm/dirty_bytes   # 0 disables the bytes-based limit, so dirty_ratio is the limit in effect
echo 0 > /proc/sys/kernel/randomize_va_space   # disable address space layout randomization
for i in 1 2 4 8 16 32 50 64; do
    echo
    echo "== $i threads =="
    echo
    echo 1 > /proc/sys/vm/drop_caches
    sync
    filebench <<EOF
load fileserver
set \$dir=/media/sdd
set \$nthreads=$i
run 30
EOF
done
echo 20 > /proc/sys/vm/dirty_ratio   # restore the default dirty_ratio when done

filebench fileserver, dirty_ratio=20%

  • HDD: WD VelociRaptor WD6000HLHX, 10000rpm
  • Same test, but run with the default 20% dirty_ratio setting. It is clearly seen that the system was using the page cache extensively.