[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend

bugzilla at redhat.com bugzilla at redhat.com
Thu May 24 10:38:06 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1467614

Krutika Dhananjay <kdhananj at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
              Flags|                            |needinfo?(mpillai at redhat.com)



--- Comment #65 from Krutika Dhananjay <kdhananj at redhat.com> ---
So I wrote a quick and dirty patch in inode.c to minimize contention on
itable->lock, particularly in the inode_{ref,unref,lookup} codepaths.
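
To illustrate the general idea (a simplified sketch, not the actual patch;
the real inode table in libglusterfs also maintains active/lru/purge lists
under the same lock, which this ignores): make the refcount atomic so that
ref/unref stay off the table lock on the fast path, and shard lookups across
per-bucket locks. All names here (itable_t, inode_t, gfid_hash, BUCKETS) are
made-up stand-ins, not the real structures:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define BUCKETS 1024

    typedef struct inode {
        uint64_t      gfid_hash;   /* simplified stand-in for the gfid */
        atomic_uint   ref;         /* refcount taken out of the table lock */
        struct inode *next;        /* hash-chain link */
    } inode_t;

    typedef struct itable {
        /* one lock per bucket instead of a single itable->lock */
        pthread_mutex_t bucket_lock[BUCKETS];
        inode_t        *bucket[BUCKETS];
    } itable_t;

    /* Fast path: a plain atomic increment; no table-wide lock taken. */
    static inline void
    inode_ref(inode_t *inode)
    {
        atomic_fetch_add_explicit(&inode->ref, 1, memory_order_relaxed);
    }

    /* Only the last unref takes a lock, and only its own bucket's. */
    static void
    inode_unref(itable_t *table, inode_t *inode)
    {
        if (atomic_fetch_sub_explicit(&inode->ref, 1,
                                      memory_order_acq_rel) != 1)
            return; /* not the last reference */

        size_t b = inode->gfid_hash % BUCKETS;
        pthread_mutex_lock(&table->bucket_lock[b]);
        /* re-check under the lock: a concurrent lookup may have taken
         * a new ref between our decrement and this point */
        if (atomic_load_explicit(&inode->ref, memory_order_acquire) == 0) {
            inode_t **pp = &table->bucket[b];
            while (*pp && *pp != inode)
                pp = &(*pp)->next;
            if (*pp)
                *pp = inode->next;
            pthread_mutex_unlock(&table->bucket_lock[b]);
            free(inode);
            return;
        }
        pthread_mutex_unlock(&table->bucket_lock[b]);
    }

    /* Lookup contends only with threads hashing to the same bucket. */
    static inode_t *
    inode_lookup(itable_t *table, uint64_t gfid_hash)
    {
        size_t b = gfid_hash % BUCKETS;
        inode_t *i;

        pthread_mutex_lock(&table->bucket_lock[b]);
        for (i = table->bucket[b]; i; i = i->next)
            if (i->gfid_hash == gfid_hash)
                break;
        if (i)
            inode_ref(i); /* ref while still under the bucket lock */
        pthread_mutex_unlock(&table->bucket_lock[b]);
        return i;
    }

The point is that ref/unref, which fire on nearly every fop, no longer
serialize against every lookup on the table.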

With 8 fuse threads, I was getting ~60K iops in the fio randread test earlier
(with all of the FOSDEM improvements put together).
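(For anyone who wants to reproduce something similar, a fio randread job of
this general shape works; every parameter below is an illustrative
placeholder, not the exact job file used in these runs:

    [global]
    ioengine=libaio
    direct=1
    rw=randread
    bs=4k
    iodepth=32
    numjobs=8
    runtime=60
    time_based
    group_reporting

    [nvme-randread]
    directory=/mnt/glusterfs
    size=4g
)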
Here's some mutrace data showing contention time on itable->lock:

With 1 fuse reader thread, I was seeing ~2K ms of contention time.
With 8 fuse threads, this increased to ~33K ms.

With the itable->lock contention fix, iops increased slightly to ~63K.
As for mutrace, I see that the contention time has dropped to ~8K ms (with
8 reader threads).

So a gain of ~3K iops doesn't seem like much of an improvement?

Manoj,
Do you think it would make sense for me to repeat the
io-stats-loaded-at-multiple-points-on-the-stack experiment once more on top of
all these patches (well, for want of other ideas more than anything else)?
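
(For context, that experiment means hand-editing the client volfile to splice
debug/io-stats translators in at several points of the xlator stack, then
comparing the latencies each one reports. A made-up fragment just to show the
shape, with hypothetical volume names:

    volume stats-above-client
        type debug/io-stats
        option latency-measurement on
        subvolumes testvol-client-0
    end-volume

    # the next xlator up is then re-pointed at the io-stats instance:
    volume testvol-write-behind
        type performance/write-behind
        subvolumes stats-above-client
    end-volume
)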

-Krutika
