[Gluster-users] glusterfs segmentation fault in rdma mode

自由人 21291285 at qq.com
Sat Nov 4 09:27:50 UTC 2017


Hi, All,




I use Infiniband to connect all GlusterFS nodes and the clients. Previously I ran IP over IB and everything was fine. Now I have switched to the rdma transport mode instead. Then I ran the traffic again, and after a while the glusterfs process exited with a segmentation fault.
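For reference, I changed the transport roughly along these lines (the volume name "testvol", the server name and the mount point below are placeholders, not my real names):

    gluster volume stop testvol
    gluster volume set testvol config.transport rdma
    gluster volume start testvol
    mount -t glusterfs -o transport=rdma server1:/testvol /mnt/gluster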




Here are the log messages from the segmentation fault:

pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(1) op(WRITE)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
2017-11-01 11:11:23
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.11.0
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7f95bc557834]
/lib64/libc.so.6(+0x32510)[0x7f95bace2510]
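If a fuller stack trace would help, I can pull one from the core dump with gdb, roughly like this (assuming core dumps are enabled; the core file path below is a placeholder):

    gdb /usr/sbin/glusterfs /path/to/core
    (gdb) thread apply all bt full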


The client OS was CentOS 7.3 and the server OS was CentOS 6.5. The GlusterFS version was 3.11.0 on both clients and servers. The Infiniband cards were Mellanox, and the Mellanox IB driver version was v4.1-1.0.2 (27 Jun 2017) on both clients and servers.




Is the RDMA transport in GlusterFS considered stable? Do I need to upgrade the IB driver or apply a patch?


Thanks!