[Gluster-users] Re: glusterfs segmentation fault in rdma mode

acfreeman 21291285 at qq.com
Sun Nov 5 04:23:56 UTC 2017


Hi, if there was only one client, there were no problems even when the traffic was very heavy. But if I used several clients to write to the same volume, I could see the segmentation fault. I used gdb to debug, but then the performance was much lower than in the previous test results and we couldn't reproduce the errors. We think the problem only occurs when multiple clients write to the same volume at very high throughput (e.g., more than 1 GiB/s per client).
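
Rather than running under gdb (which changes the timing too much for the crash to reproduce), it may be easier to let the kernel write a core dump at full speed and inspect it afterwards. A minimal sketch, run as root; the dump directory and pattern are only examples:

    # allow core dumps for processes started from this shell
    ulimit -c unlimited
    # name cores by executable, pid and timestamp (path is illustrative)
    mkdir -p /var/crash
    echo '/var/crash/core.%e.%p.%t' > /proc/sys/kernel/core_pattern
    # restart / remount the gluster client from this shell so it
    # inherits the unlimited core size limit
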
------------------ Original Message ------------------
From: "Ben Turner" <bturner at redhat.com>
Sent: Sunday, November 5, 2017, 3:00 AM
To: "自由人" <21291285 at qq.com>
Cc: "gluster-users" <gluster-users at gluster.org>
Subject: Re: [Gluster-users] glusterfs segmentation fault in rdma mode


This looks like there could be some problem requesting / leaking / whatever memory, but without looking at the core it's tough to tell for sure.  Note:

/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]

Can you open up a bugzilla and get us the core file to review?
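
Even a full backtrace pulled from the core would help in the meantime. Something along these lines should do it (paths are illustrative, and you'll want the matching glusterfs debuginfo package installed so the symbols resolve):

    # dump a full backtrace of every thread from the core (example paths)
    gdb --batch /usr/sbin/glusterfs /path/to/core \
        -ex 'thread apply all bt full' > backtrace.txt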

-b

----- Original Message -----
> From: "自由人" <21291285 at qq.com>
> To: "gluster-users" <gluster-users at gluster.org>
> Sent: Saturday, November 4, 2017 5:27:50 AM
> Subject: [Gluster-users] glusterfs segmentation fault in rdma mode
> 
> 
> 
> Hi, All,
> 
> 
> 
> 
> I used Infiniband to connect all GlusterFS nodes and the clients. Previously
> I ran IP over IB and everything was OK. Then I switched to the rdma transport
> mode and ran the traffic again. After a while, the glusterfs process exited
> because of a segmentation fault.
> 
> 
> 
> 
> Here are the messages from when the segmentation fault occurred:
> 
> pending frames:
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(1) op(WRITE)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> 
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2017-11-01 11:11:23
> 
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.11.0
> 
> /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]
> /usr/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7f95bc557834]
> /lib64/libc.so.6(+0x32510)[0x7f95bace2510]
> 
> The client OS was CentOS 7.3. The server OS was CentOS 6.5. The GlusterFS
> version was 3.11.0 on both clients and servers. The Infiniband cards were
> Mellanox, and the Mellanox IB driver version was v4.1-1.0.2 (27 Jun 2017) on
> both clients and servers.
> 
> 
> Is the rdma transport code stable in GlusterFS? Do I need to upgrade the IB
> driver or apply a patch?
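> 
> In the meantime, since IP over IB worked for us before, I assume we could
> move the volume back to the tcp transport as a workaround. A rough sketch
> (the volume name "testvol" is illustrative; the volume must be stopped
> before its transport can be changed):
> 
>     # switch an existing volume from rdma back to tcp (example volume name)
>     gluster volume stop testvol
>     gluster volume set testvol config.transport tcp
>     gluster volume start testvol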
> 
> Thanks!
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users