My colleague extended the SAN slice to 5TB for more realistic testing.
I formatted the disk with btrfs and mounted it like this:
mount /dev/sdd /mnt_test/
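The formatting step itself isn't shown above; it was roughly the following (I didn't keep the exact invocation, so take the flags as a sketch):

mkfs.btrfs -f /dev/sdd   # -f overwrites any existing file system signature; flags assumed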
Then I ran the test with a block size of 512k:
root@berlin ~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt_test/test --bs=512k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=64
fio-3.6
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=802MiB/s,w=274MiB/s][r=1603,w=547 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=16949: Mon Dec 20 22:18:28 2021
read: IOPS=1590, BW=795MiB/s (834MB/s)(3055MiB/3842msec)
bw ( KiB/s): min=747520, max=857088, per=99.83%, avg=812763.43, stdev=44213.07, samples=7
iops : min= 1460, max= 1674, avg=1587.43, stdev=86.35, samples=7
write: IOPS=542, BW=271MiB/s (284MB/s)(1042MiB/3842msec)
bw ( KiB/s): min=262144, max=297984, per=100.00%, avg=278820.57, stdev=15115.88, samples=7
iops : min= 512, max= 582, avg=544.57, stdev=29.52, samples=7
cpu : usr=1.98%, sys=96.28%, ctx=1096, majf=0, minf=6
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=6109,2083,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=795MiB/s (834MB/s), 795MiB/s-795MiB/s (834MB/s-834MB/s), io=3055MiB (3203MB), run=3842-3842msec
WRITE: bw=271MiB/s (284MB/s), 271MiB/s-271MiB/s (284MB/s-284MB/s), io=1042MiB (1092MB), run=3842-3842msec
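As a sanity check on the mix: the issued counts above are 6109 reads and 2083 writes, i.e. 8192 I/Os of 512KiB, which is exactly the 4GiB file, and 6109/8192 ≈ 75%, matching --rwmixread=75.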
Because this is fun, I reran it with the same arguments:
root@berlin ~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt_test/test --bs=512k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=64
fio-3.6
Starting 1 process
Jobs: 1 (f=0): [f(1)][-.-%][r=756MiB/s,w=260MiB/s][r=1511,w=519 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=17488: Mon Dec 20 22:18:56 2021
read: IOPS=1647, BW=824MiB/s (864MB/s)(3055MiB/3708msec)
bw ( KiB/s): min=738304, max=929792, per=99.28%, avg=837485.71, stdev=73710.05, samples=7
iops : min= 1442, max= 1816, avg=1635.71, stdev=143.96, samples=7
write: IOPS=561, BW=281MiB/s (295MB/s)(1042MiB/3708msec)
bw ( KiB/s): min=234496, max=320512, per=99.79%, avg=287012.57, stdev=29009.60, samples=7
iops : min= 458, max= 626, avg=560.57, stdev=56.66, samples=7
cpu : usr=1.38%, sys=96.47%, ctx=1394, majf=0, minf=16420
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=6109,2083,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=824MiB/s (864MB/s), 824MiB/s-824MiB/s (864MB/s-864MB/s), io=3055MiB (3203MB), run=3708-3708msec
WRITE: bw=281MiB/s (295MB/s), 281MiB/s-281MiB/s (295MB/s-295MB/s), io=1042MiB (1092MB), run=3708-3708msec
Then I remounted with forced zstd compression and the v2 free-space cache:
mount /dev/sdd -o compress-force=zstd,space_cache=v2 /mnt_test/
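To verify that these options are actually in effect (a sanity check, not part of the benchmark), findmnt can show the effective mount options:

findmnt -o TARGET,OPTIONS /mnt_test   # should list compress-force=zstd and space_cache=v2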
The numbers don’t differ much at all, presumably because fio fills its I/O buffers with random, and hence barely compressible, data by default.
Run status group 0 (all jobs):
READ: bw=882MiB/s (925MB/s), 882MiB/s-882MiB/s (925MB/s-925MB/s), io=3055MiB (3203MB), run=3464-3464msec
WRITE: bw=301MiB/s (315MB/s), 301MiB/s-301MiB/s (315MB/s-315MB/s), io=1042MiB (1092MB), run=3464-3464msec
I then erased the file system and put a big ext4 on it again.
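Roughly like this (I didn't record the exact commands, so treat the mkfs flags as a guess):

umount /mnt_test
mkfs.ext4 /dev/sdd
mount /dev/sdd /mnt_test/

Rerunning the same fio command: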
root@berlin ~# fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/mnt_test/test --bs=512k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=64
fio-3.6
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][-.-%][r=1539MiB/s,w=526MiB/s][r=3078,w=1052 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=20672: Mon Dec 20 22:23:29 2021
read: IOPS=3077, BW=1539MiB/s (1614MB/s)(3055MiB/1985msec)
bw ( MiB/s): min= 1530, max= 1548, per=100.00%, avg=1539.33, stdev= 9.02, samples=3
iops : min= 3060, max= 3096, avg=3078.67, stdev=18.04, samples=3
write: IOPS=1049, BW=525MiB/s (550MB/s)(1042MiB/1985msec)
bw ( KiB/s): min=533504, max=557056, per=100.00%, avg=546133.33, stdev=11868.39, samples=3
iops : min= 1042, max= 1088, avg=1066.67, stdev=23.18, samples=3
cpu : usr=2.17%, sys=11.24%, ctx=4787, majf=0, minf=8
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=6109,2083,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
READ: bw=1539MiB/s (1614MB/s), 1539MiB/s-1539MiB/s (1614MB/s-1614MB/s), io=3055MiB (3203MB), run=1985-1985msec
WRITE: bw=525MiB/s (550MB/s), 525MiB/s-525MiB/s (550MB/s-550MB/s), io=1042MiB (1092MB), run=1985-1985msec
Disk stats (read/write):
sdd: ios=5926/2087, merge=1/0, ticks=119183/3276, in_queue=122460, util=94.87%
No idea why btrfs performs so much worse in comparison. The one striking difference in the fio output is CPU usage: the btrfs runs spend almost all their time in the kernel (sys=96.28% and 96.47%), whereas the ext4 run sits at sys=11.24%.
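A follow-up experiment (which I haven’t run) would be to rule out copy-on-write overhead by placing the test file in a nodatacow directory, since newly created files inherit that attribute:

mkdir /mnt_test/nocow
chattr +C /mnt_test/nocow   # new files created in here get nodatacow
fio ... --filename=/mnt_test/nocow/test ...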
I’ll copy over /gnu/store/trash next.
--
Ricardo