[Bugs] [Bug 1392646] New: Client crash

bugzilla at redhat.com
Tue Nov 8 00:29:05 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1392646

            Bug ID: 1392646
           Summary: Client crash
           Product: GlusterFS
           Version: 3.6.8
         Component: rpc
          Assignee: bugs at gluster.org
          Reporter: joe at julianfamily.org
                CC: bugs at gluster.org



Description of problem:
pending frames:
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(1) op(READ)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash: 
2016-11-07 16:39:44
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.6.8
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xaf)[0x7fa329e4ecef]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(gf_print_trace+0x341)[0x7fa329e683c1]
/lib/x86_64-linux-gnu/libc.so.6(+0x36150)[0x7fa329459150]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7fa3294590d5]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7fa32945c83b]
/lib/x86_64-linux-gnu/libc.so.6(+0x7404e)[0x7fa32949704e]
/lib/x86_64-linux-gnu/libc.so.6(+0x7e846)[0x7fa3294a1846]
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/protocol/client.so(client_local_wipe+0x39)[0x7fa3244c8199]
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/xlator/protocol/client.so(client3_3_readv_cbk+0x4a3)[0x7fa3244d8a73]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x7fa329c25685]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_notify+0xc8)[0x7fa329c25ed8]
/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_transport_notify+0x27)[0x7fa329c22257]
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/rpc-transport/socket.so(+0xa99b)[0x7fa324f8e99b]
/usr/lib/x86_64-linux-gnu/glusterfs/3.6.8/rpc-transport/socket.so(+0xb0dc)[0x7fa324f8f0dc]
/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x7493b)[0x7fa329ea593b]
/usr/sbin/glusterfs(main+0x4f1)[0x7fa32a2f6f71]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7fa32944476d]
/usr/sbin/glusterfs(+0x630d)[0x7fa32a2f730d]
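
Reading the trace: the two unresolved libc frames between abort() and
client_local_wipe() (+0x7404e and +0x7e846) are consistent with glibc's
malloc error-reporting path, i.e. free() detected heap corruption (such as
a double free or use-after-free of the client local) and called abort(),
which is why the process died on signal 6. That is an interpretation of
the raw offsets, not something confirmed by symbols in this report. A
minimal sketch of how that failure mode produces exactly this
signal-6/abort backtrace shape, using a plain double free (hypothetical
demo code, not GlusterFS source):

/* double_free_demo.c
 *
 * glibc's free() validates the heap; on detecting a double free it
 * prints an error and calls abort(), so the crash surfaces inside
 * libc frames above the caller that issued the bad free() -- the same
 * shape as the trace above, where the bad free would have come from
 * client_local_wipe().
 */
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(64);  /* allocate a block */
    free(buf);               /* first free: fine */
    free(buf);               /* second free: glibc detects the corruption
                                and calls abort() -> SIGABRT (signal 6) */
    return 0;
}

Resolving the unsymbolized offsets with addr2line against the matching
3.6.8 debug symbols would confirm which free() inside client_local_wipe()
is involved.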


Version-Release number of selected component (if applicable):
3.6.8

How reproducible:
Inconsistently.

I'm seeing this crash about once every 3 weeks on 1 out of 20 clients:
OpenStack compute nodes running FUSE-mounted qemu images.

Nothing has appeared in the logs for 5 days.

Volume Name: gv-cinder
Type: Distributed-Replicate
Volume ID: 509632e5-5c37-4034-99b7-94598bf33826
Status: Started
Number of Bricks: 7 x 2 = 14
Transport-type: tcp
Bricks:
Brick1: storage03-stor:/gluster/brick01/cinder-std-01
Brick2: storage08-stor:/gluster/brick01/cinder-std-01
Brick3: storage03-stor:/gluster/brick03/cinder-std-01
Brick4: storage07-stor:/gluster/brick03/cinder-std-01
Brick5: storage03-stor:/gluster/brick04/cinder-std-01
Brick6: storage08-stor:/gluster/brick03/cinder-std-01
Brick7: storage04-stor:/gluster/brick03/cinder-std-01
Brick8: storage07-stor:/gluster/brick04/cinder-std-01
Brick9: storage04-stor:/gluster/brick04/cinder-std-01
Brick10: storage08-stor:/gluster/brick04/cinder-std-01
Brick11: storage04-stor:/gluster/brick01/cinder-std-01
Brick12: storage07-stor:/gluster/brick02/cinder-std-01
Brick13: storage07-stor:/gluster/brick01/cinder-std-01
Brick14: storage08-stor:/gluster/brick02/cinder-std-01
Options Reconfigured:
performance.open-behind: off
cluster.data-self-heal-algorithm: diff
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.data-self-heal: off
server.outstanding-rpc-limit: 0
cluster.ensure-durability: off
storage.owner-gid: 510
storage.owner-uid: 510
network.remote-dio: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.eager-lock: enable
diagnostics.brick-log-level: INFO
server.allow-insecure: on
