[Bugs] [Bug 1467614] Gluster read/write performance improvements on NVMe backend
bugzilla at redhat.com
Thu Nov 2 06:37:10 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1467614
--- Comment #41 from Krutika Dhananjay <kdhananj at redhat.com> ---
(In reply to Manoj Pillai from comment #40)
> Back to the client-side analysis. Single-client, single brick runs. Client
> system separate from server, 10GbE interconnect. io-thread-count=4.
>
> Trying an approach where I'm doing runs with increasing number of concurrent
> jobs -- 24, 48 and 96 -- and attaching mutrace to the glusterfs client in
> each case. Comparing mutrace output for the runs and looking for locks that
> rise up in the ranking of contended locks when concurrency increases. Based
> on this, I'm looking at the following as prime suspects:
>
> Mutex #28293810 (0x7fdc240ca940) first referenced by:
> /usr/local/lib/libmutrace.so(pthread_mutex_init+0x1ae)
> [0x7fdc3744d95a]
> /usr/lib64/glusterfs/3.12.1/rpc-transport/socket.so(+0xb4c8)
> [0x7fdc2b66c4c8]
>
> Mutex #27743163 (0x7fdc240c9890) first referenced by:
> /usr/local/lib/libmutrace.so(pthread_mutex_init+0x1ae)
> [0x7fdc3744d95a]
> /lib64/libgfrpc.so.0(rpc_clnt_new+0xf0) [0x7fdc36f3c840]
>
>
> Maybe to a lesser extent, but also this one:
> Mutex #21500558 (0x5608175976c8) first referenced by:
> /usr/local/lib/libmutrace.so(pthread_mutex_init+0x1ae)
> [0x7fdc3744d95a]
> /lib64/libglusterfs.so.0(cb_buffer_new+0x7c) [0x7fdc371ccafc]
>
> Might be a good idea to have a discussion on these.
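The comparison described above -- ranking locks by contention at each concurrency level and flagging the ones that climb as job count grows -- can be sketched roughly as below. The lock ids are hypothetical stand-ins for values parsed from mutrace's summary output; the parsing itself is not shown.

```python
# Sketch: flag locks whose contention rank improves (moves toward the top)
# as concurrency increases. The rankings below are hand-written stand-in
# data, not real mutrace output; in practice each list would come from
# parsing mutrace's per-run summary, ordered most-contended first.

def rising_locks(rankings):
    """rankings: one list of lock ids per run, most contended first.
    Returns ids whose rank never worsens run-over-run and strictly
    improves overall -- the "prime suspect" pattern."""
    common = set(rankings[0])
    for run in rankings[1:]:
        common &= set(run)
    positions = [{lock: i for i, lock in enumerate(run)} for run in rankings]
    out = []
    for lock in common:
        ranks = [p[lock] for p in positions]
        if ranks[0] > ranks[-1] and all(a >= b for a, b in zip(ranks, ranks[1:])):
            out.append(lock)
    return sorted(out)

# Hypothetical rankings from runs at 24, 48 and 96 concurrent jobs.
run24 = ["#101", "#28293810", "#7", "#27743163", "#21500558"]
run48 = ["#28293810", "#101", "#27743163", "#7", "#21500558"]
run96 = ["#28293810", "#27743163", "#21500558", "#101", "#7"]

print(rising_locks([run24, run48, run96]))
```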
So the last mutex (created in cb_buffer_new) is associated with event-history.
I would suggest trying 3.12.2, which has event-history disabled, and seeing
what the new results are.
--
You are receiving this mail because:
You are on the CC list for the bug.