[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend
bugzilla at redhat.com
Mon Nov 13 16:40:21 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1467614
--- Comment #48 from Manoj Pillai <mpillai at redhat.com> ---
(In reply to Manoj Pillai from comment #47)
for the mutrace runs:
fio output for run with 24 concurrent jobs:
read: IOPS=2868, BW=11.2Mi (11.7M)(6144MiB/548349msec)
clat (usec): min=1471, max=23086, avg=8363.15, stdev=4030.90
lat (usec): min=1471, max=23087, avg=8363.44, stdev=4030.89
fio output for run with 48 concurrent jobs:
read: IOPS=2833, BW=11.1Mi (11.6M)(6144MiB/555024msec)
clat (usec): min=2975, max=44776, avg=16932.81, stdev=7990.35
lat (usec): min=2975, max=44776, avg=16933.09, stdev=7990.35
When you run without mutrace, IOPS on these randread runs is ~23k. With mutrace
it drops to less than 3k. For the normal run with 48 jobs, top shows the fuse
thread at <70% CPU utilization; for the mutrace run, top shows ~97% CPU
utilization. That's a concern, because it could mean that the bottleneck is the
fuse thread, and hence the lock contention results are not useful.
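For reference, a fio job file consistent with the numbers above might look like the sketch below. The block size is inferred (11.2 MiB/s at 2868 IOPS is roughly 4 KiB per I/O), and the total transfer of 6144 MiB across 24 jobs suggests 256 MiB per job; the ioengine, direct I/O setting, and mount path are assumptions, not stated in the comment:

```
[global]
ioengine=libaio          ; assumed; not stated in the comment
direct=1                 ; assumed
rw=randread
bs=4k                    ; inferred: 11.2 MiB/s / 2868 IOPS ~ 4 KiB
size=256m                ; 24 jobs x 256 MiB = 6144 MiB total
directory=/mnt/glusterfs ; hypothetical FUSE mount point
group_reporting

[randread-job]
numjobs=24               ; 48 for the second run
```

With group_reporting enabled, fio aggregates the per-job results into the single read/clat/lat summary lines quoted above.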