[Bugs] [Bug 1349953] thread CPU saturation limiting throughput on write workloads

bugzilla at redhat.com bugzilla at redhat.com
Wed Jun 29 13:33:28 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1349953



--- Comment #11 from Manoj Pillai <mpillai at redhat.com> ---
Back to single client runs for simplicity.

iozone command:
./iozone -i 0 -w -+n -c -C -e -s 10g -r 64k -t 8 -F /mnt/glustervol/f{1..8}.ioz

gluster vol info output:
[...]
Options Reconfigured:
cluster.lookup-optimize: on
server.event-threads: 4
client.event-threads: 4
performance.client-io-threads: on
transport.address-family: inet
performance.readdir-ahead: on

top output:
  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
12160 root      20   0 1301004 249764   3868 R 90.4  0.4   1:57.51 glusterfs
12162 root      20   0 1301004 249764   3868 R 90.4  0.4   1:58.38 glusterfs
12158 root      20   0 1301004 249764   3868 R 90.2  0.4   1:58.66 glusterfs
12161 root      20   0 1301004 249764   3868 R 90.2  0.4   1:58.13 glusterfs
12165 root      20   0 1301004 249764   3868 S 72.9  0.4   1:36.63 glusterfs
12178 root      20   0 1301004 249764   3868 S 15.3  0.4   0:19.57 glusterfs
12159 root      20   0 1301004 249764   3868 S 14.3  0.4   0:19.14 glusterfs
12177 root      20   0 1301004 249764   3868 S 14.3  0.4   0:19.39 glusterfs
12175 root      20   0 1301004 249764   3868 S 14.1  0.4   0:19.01 glusterfs
12179 root      20   0 1301004 249764   3868 S 14.1  0.4   0:19.43 glusterfs
12246 root      20   0   47432  18796    168 S  6.8  0.0   0:01.55 iozone
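The top output above is the per-thread view (top -H), showing four glusterfs threads pinned near 90% CPU. The same per-thread accounting can be pulled straight from /proc, which is handy for logging it over a run. This is a generic sketch, not a gluster tool; field offsets follow the proc(5) stat format, and the PID is whatever glusterfs client process you are measuring:

```python
import os

def thread_cpu_ticks(pid):
    """Return {tid: utime+stime clock ticks} for every thread of pid.

    Reads /proc/<pid>/task/<tid>/stat; utime and stime are fields 14
    and 15 (1-indexed). We split after the ')' that closes the comm
    field, so the remaining fields start at field 3 (state), putting
    utime/stime at offsets 11 and 12.
    """
    ticks = {}
    for tid in os.listdir("/proc/%d/task" % pid):
        with open("/proc/%d/task/%s/stat" % (pid, tid)) as f:
            fields = f.read().rsplit(")", 1)[1].split()
        ticks[int(tid)] = int(fields[11]) + int(fields[12])
    return ticks

if __name__ == "__main__":
    # Sample twice and diff to see which threads are burning CPU;
    # here we just inspect our own process as a demonstration.
    for tid, t in sorted(thread_cpu_ticks(os.getpid()).items()):
        print(tid, t)
```

Sampling this dict at intervals and diffing identifies the hot TIDs (here 12158-12162, the event threads) without an interactive top session.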

pstack 12160:
Thread 1 (process 12160):
#0  0x00007f675ac9ed89 in gf8_muladd_04 () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#1  0x00007f675acb4d77 in ec_method_encode () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#2  0x00007f675ac9954d in ec_wind_writev () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#3  0x00007f675ac7dc78 in ec_dispatch_mask () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#4  0x00007f675ac9bbde in ec_manager_writev () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#5  0x00007f675ac7d50b in __ec_manager () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#6  0x00007f675ac7d6e8 in ec_resume () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#7  0x00007f675ac7f6df in ec_lock_resume_shared () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#8  0x00007f675ac80b83 in ec_lock_reuse () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#9  0x00007f675ac9bd08 in ec_manager_writev () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#10 0x00007f675ac7d50b in __ec_manager () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#11 0x00007f675ac7d6e8 in ec_resume () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#12 0x00007f675ac7d80f in ec_complete () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#13 0x00007f675ac9917c in ec_inode_write_cbk () from
/usr/lib64/glusterfs/3.8.0/xlator/cluster/disperse.so
#14 0x00007f675aeef442 in client3_3_writev_cbk () from
/usr/lib64/glusterfs/3.8.0/xlator/protocol/client.so
#15 0x00007f6768b26b30 in rpc_clnt_handle_reply () from /lib64/libgfrpc.so.0
#16 0x00007f6768b26def in rpc_clnt_notify () from /lib64/libgfrpc.so.0
#17 0x00007f6768b22923 in rpc_transport_notify () from /lib64/libgfrpc.so.0
#18 0x00007f675d3ec9d4 in socket_event_poll_in () from
/usr/lib64/glusterfs/3.8.0/rpc-transport/socket.so
#19 0x00007f675d3ef614 in socket_event_handler () from
/usr/lib64/glusterfs/3.8.0/rpc-transport/socket.so
#20 0x00007f6768db4590 in event_dispatch_epoll_worker () from
/lib64/libglusterfs.so.0
#21 0x00007f6767bbddf5 in start_thread () from /lib64/libpthread.so.0
#22 0x00007f67675041ad in clone () from /lib64/libc.so.6
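The backtrace shows the epoll worker thread itself doing the erasure-coding arithmetic (gf8_muladd_04 via ec_method_encode), which is why the event threads, not the io-threads, are the ones saturating. The cost comes from Galois-field multiply-accumulate over every byte of the stripe. As a rough illustration of the kind of work per byte (this is a naive bitwise GF(2^8) multiply with the common Reed-Solomon polynomial 0x11d, not glusterfs's actual table-driven/bit-sliced gf8_muladd implementation):

```python
def gf256_mul(a, b, poly=0x11d):
    """Multiply a and b in GF(2^8): carry-less multiply with
    modular reduction by the field polynomial on each overflow."""
    r = 0
    while b:
        if b & 1:          # add (XOR) current multiple of a
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:      # reduce modulo the field polynomial
            a ^= poly
    return r

def encode_fragment(data, coeff):
    """Toy encode step: scale one data fragment by one matrix
    coefficient, GF-multiplying every byte -- the per-byte loop
    is what makes EC encoding CPU-bound at high write rates."""
    return bytes(gf256_mul(byte, coeff) for byte in data)
```

Even with optimized SIMD/table variants, this per-byte arithmetic on every written stripe is substantial, which is consistent with write throughput here being limited by CPU in the EC client rather than by disk or network.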
