[Bugs] [Bug 1467614] Performance improvements in socket layer
bugzilla at redhat.com
Mon Aug 28 05:00:27 UTC 2017
https://bugzilla.redhat.com/show_bug.cgi?id=1467614
--- Comment #9 from Manoj Pillai <mpillai at redhat.com> ---
Profiling a run for a simplified test: single-brick gluster volume mounted on
same server; brick on nvme device; 4K random read fio test with direct=1.
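For reference, a fio job matching the description above would look roughly like this (a sketch only; the filename, size, and runtime are assumptions, not taken from the actual run):

```ini
[global]
ioengine=libaio
direct=1          ; O_DIRECT, bypassing the page cache as in the test above
rw=randread
bs=4k

[randread-4k]
filename=/mnt/glustervol/testfile   ; hypothetical FUSE mount path
size=4g
runtime=60
time_based
```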
From: perf record -F 99 -p <glusterfs pid>
<quote>
Overhead Command Shared Object Symbol
11.34% glusterfs libpthread-2.17.so [.] pthread_mutex_lock
6.10% glusterfs libpthread-2.17.so [.] pthread_mutex_unlock
5.66% glusterfs libc-2.17.so [.] vfprintf
3.12% glusterfs [kernel.kallsyms] [k] try_to_wake_up
2.47% glusterfs libglusterfs.so.0.0.1 [.] _gf_msg
2.07% glusterfs libc-2.17.so [.] __memset_sse2
1.88% glusterfs libc-2.17.so [.] _IO_default_xsputn
1.82% glusterfs libc-2.17.so [.] _int_free
1.59% glusterfs [kernel.kallsyms] [k] native_queued_spin_lock_slowpath
1.56% glusterfs libc-2.17.so [.] __libc_calloc
1.54% glusterfs libc-2.17.so [.] _int_malloc
1.41% glusterfs [kernel.kallsyms] [k] _raw_qspin_lock
1.35% glusterfs [kernel.kallsyms] [k] copy_user_enhanced_fast_string
1.14% glusterfs [kernel.kallsyms] [k] _raw_spin_lock_irqsave
1.04% glusterfs [kernel.kallsyms] [k] futex_wait_setup
1.01% glusterfs libglusterfs.so.0.0.1 [.] mem_get_from_pool
0.97% glusterfs fuse.so [.] fuse_attr_cbk
0.91% glusterfs [kernel.kallsyms] [k] futex_wake
0.90% glusterfs libpthread-2.17.so [.] pthread_spin_lock
0.78% glusterfs libglusterfs.so.0.0.1 [.] mem_put
0.77% glusterfs libglusterfs.so.0.0.1 [.] mem_get
0.73% glusterfs libc-2.17.so [.] _itoa_word
...
</quote>
From: perf record -F 99 -p <glusterfsd pid>
<quote>
Overhead Command Shared Object Symbol
6.25% glusterfsd libpthread-2.17.so [.] pthread_mutex_lock
4.48% glusterfsd libglusterfs.so.0.0.1 [.] _gf_msg
3.60% glusterfsd libpthread-2.17.so [.] pthread_mutex_unlock
3.10% glusterfsd libc-2.17.so [.] vfprintf
1.37% glusterfsd [kernel.kallsyms] [k] __bufio_new
1.37% glusterfsd [kernel.kallsyms] [k] copy_user_enhanced_fast_string
1.36% glusterfsd [kernel.kallsyms] [k] _raw_spin_lock_irqsave
1.31% glusterfsd libc-2.17.so [.] _IO_default_xsputn
1.21% glusterfsd [kernel.kallsyms] [k] __d_lookup_rcu
1.12% glusterfsd [kernel.kallsyms] [k] blk_throtl_bio
1.10% glusterfsd [kernel.kallsyms] [k] _raw_qspin_lock
1.10% glusterfsd libc-2.17.so [.] __libc_calloc
0.92% glusterfsd [kernel.kallsyms] [k] avc_has_perm_noaudit
0.86% glusterfsd libpthread-2.17.so [.] pthread_getspecific
0.84% glusterfsd libc-2.17.so [.] __memset_sse2
0.82% glusterfsd [kernel.kallsyms] [k] try_to_wake_up
0.81% glusterfsd [kernel.kallsyms] [k] kmem_cache_alloc
0.79% glusterfsd libc-2.17.so [.] _int_malloc
0.75% glusterfsd libc-2.17.so [.] __strchrnul
...
</quote>
A lot of cycles are being spent in pthread mutex lock/unlock. However, AFAIK this
profile doesn't tell us whether there is actual lock contention, or which locks
are contended. We'll need to use something like mutrace for that.