[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend
bugzilla at redhat.com
Wed Nov 1 08:01:05 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1467614
--- Comment #40 from Manoj Pillai <mpillai at redhat.com> ---
Back to the client-side analysis. Single-client, single-brick runs. Client
system separate from the server, 10GbE interconnect. io-thread-count=4.
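For reference, a minimal sketch of how that thread count would be set,
assuming the io-threads translator option and a placeholder volume name:

    # "perfvol" is a placeholder volume name; this sets the thread count
    # of the io-threads translator.
    gluster volume set perfvol performance.io-thread-count 4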
Trying an approach where I do runs with an increasing number of concurrent
jobs -- 24, 48 and 96 -- and run the glusterfs client under mutrace in each
case (a sketch of the invocation appears after the stack listings below).
Comparing mutrace output across the runs, I look for locks that rise in the
ranking of contended locks as concurrency increases. Based on this, I'm
looking at the following as prime suspects:
Mutex #28293810 (0x0x7fdc240ca940) first referenced by:
    /usr/local/lib/libmutrace.so(pthread_mutex_init+0x1ae) [0x7fdc3744d95a]
    /usr/lib64/glusterfs/3.12.1/rpc-transport/socket.so(+0xb4c8) [0x7fdc2b66c4c8]

Mutex #27743163 (0x0x7fdc240c9890) first referenced by:
    /usr/local/lib/libmutrace.so(pthread_mutex_init+0x1ae) [0x7fdc3744d95a]
    /lib64/libgfrpc.so.0(rpc_clnt_new+0xf0) [0x7fdc36f3c840]

Maybe to a lesser extent, but also this one:

Mutex #21500558 (0x0x5608175976c8) first referenced by:
    /usr/local/lib/libmutrace.so(pthread_mutex_init+0x1ae) [0x7fdc3744d95a]
    /lib64/libglusterfs.so.0(cb_buffer_new+0x7c) [0x7fdc371ccafc]
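For context, mutrace instruments a process by preloading libmutrace.so
rather than attaching to an already-running one, so the client is started
under it. A minimal sketch of the kind of invocation used for these runs;
server and volume names are placeholders, and -N keeps the client in the
foreground so mutrace can print its contention ranking when the process
exits:

    # "server1" and "perfvol" are placeholders. Unmounting /mnt/glusterfs
    # ends the run and triggers the mutrace summary on stderr.
    LD_PRELOAD=/usr/local/lib/libmutrace.so glusterfs -N \
        --volfile-server=server1 --volfile-id=perfvol /mnt/glusterfs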
Might be a good idea to have a discussion on these.