[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend

bugzilla at redhat.com bugzilla at redhat.com
Mon Jan 29 07:46:02 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1467614



--- Comment #61 from Krutika Dhananjay <kdhananj at redhat.com> ---
I loaded io-stats just below fuse-bridge and at a couple of other places in
the gluster stack to profile the fops at different levels (randrd workload).
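
For anyone trying to reproduce this: loading io-stats at an arbitrary point
means hand-editing the client volfile so a debug/io-stats stanza wraps the
translator you want to measure; making it the top-most xlator places it just
below fuse-bridge. A minimal sketch (the stanza name and subvolume are
placeholders of mine; the type and option names are the stock debug/io-stats
ones):

volume profile-below-fuse
    type debug/io-stats
    # record per-fop latencies and call counts
    option latency-measurement on
    option count-fop-hits on
    # placeholder: whichever xlator previously sat at the top of the client graph
    subvolumes top-xlator
end-volume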

Here is an excerpt from the profile taken just below gluster's fuse-bridge
translator; note the FSTAT count, even with attribute-timeout set to 600s:

<excerpt>
...
...
Fop           Call Count    Avg-Latency    Min-Latency    Max-Latency
---           ----------    -----------    -----------    -----------
READ             3145728     1145.73 us       77.66 us    42739.02 us
FLUSH                 96        2.23 us        1.21 us       23.29 us
FSTAT            3145632        2.66 us        0.67 us     3435.86 us

...
...
</excerpt>
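
For context, attribute-timeout here is the fuse attribute-cache timeout
passed at mount time. A mount invocation along these lines is assumed, with
the server and volume names as placeholders:

mount -t glusterfs -o attribute-timeout=600 <server>:/<volname> /mnt/glusterfs

With a 600s cache one would expect fuse to satisfy attribute lookups locally
instead of sending one FSTAT per READ down the stack.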

I also captured strace output of the application driving the glusterfs mount
(fio, in this case) and grepped it for the stat-family and read syscalls it
invoked. Here are the counts of syscalls executed by fio:

[root at rhs-srv-07 tmp]# grep -iarT 'fstat' fio-strace.140* | wc -l
95
[root at rhs-srv-07 tmp]# grep -iarT 'lstat' fio-strace.140* | wc -l
3269
[root at rhs-srv-07 tmp]# grep -iarT '^stat' fio-strace.140* | wc -l
1132
[root at rhs-srv-07 tmp]# grep -iarT 'read' fio-strace.140* | wc -l
3146696
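
For completeness: per-process trace files named fio-strace.<pid>, as grepped
above, are what strace produces when -ff is combined with -o. The capture
would have looked roughly like this (the job file is a placeholder):

strace -f -ff -o /tmp/fio-strace fio <jobfile>

Note that the grep patterns are loose: 'read' also matches the pread/readv
variants, which is presumably intended here since all of them are reads.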

Based on this data we can conclude that neither the application nor the
glusterfs stack is winding anywhere near as many fstats as reads: fio itself
issued only 95 fstat(2) calls, yet io-stats just below fuse-bridge saw ~3.1
million FSTATs - roughly one per READ. Those FSTATs must be getting injected
between the application's syscalls and fuse-bridge, so the only suspect left
is the kernel (fuse).

-Krutika
