[Gluster-users] glusterfs mount crashes with "Transport endpoint is not connected"

Shreyansh Shah shreyansh.shah at alpha-grep.com
Thu Aug 29 09:24:29 UTC 2019


On Thu, Aug 29, 2019 at 2:50 PM Shreyansh Shah <shreyansh.shah at alpha-grep.com> wrote:

> Hi,
> We are running on a cloud CentOS 7.5 VM. The same machine has a gluster
> volume mounted (read/write) at two mount points, say A and B. The Gluster
> version is 5.3 on the server and 3.12.2 on the client.
> B is used very rarely and only for light reads. The mount at A failed
> while our processes were running, but B remained mounted, so we could
> still access data through B but not through A.
>
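> For context, both mount points were created in the usual way, along these
> lines (the server and volume names below are placeholders, not our real
> ones):
>
>   mount -t glusterfs server1:/myvol /mnt/A
>   mount -t glusterfs server1:/myvol /mnt/B
>
> The versions above were taken from "glusterfs --version" on the client
> and "gluster --version" on the server.
>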
> Here is the trace from /var/log/glusterfs:
> The message "E [MSGID: 101191]
> [event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
> handler" repeated 968 times between [2019-08-28 20:40:59.654898] and
> [2019-08-28 20:41:36.417335]
> pending frames:
> frame : type(1) op(FSTAT)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(1) op(READ)
> frame : type(0) op(0)
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2019-08-28 20:41:37
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 5.3
> /lib64/libglusterfs.so.0(+0x26610)[0x7fea89c32610]
> /lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7fea89c3cb84]
> /lib64/libc.so.6(+0x36340)[0x7fea88295340]
> /lib64/libpthread.so.0(pthread_mutex_lock+0x0)[0x7fea88a97c30]
> /lib64/libglusterfs.so.0(__gf_free+0x12c)[0x7fea89c5dc3c]
> /lib64/libglusterfs.so.0(rbthash_remove+0xd5)[0x7fea89c69d35]
> /usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xcace)[0x7fea7771dace]
> /usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xcdd7)[0x7fea7771ddd7]
> /usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xcfc5)[0x7fea7771dfc5]
> /usr/lib64/glusterfs/5.3/xlator/performance/io-cache.so(+0xf0ca)[0x7fea777200ca]
> /usr/lib64/glusterfs/5.3/xlator/performance/read-ahead.so(+0xa6a1)[0x7fea77b426a1]
> /usr/lib64/glusterfs/5.3/xlator/performance/read-ahead.so(+0xaa6f)[0x7fea77b42a6f]
> /usr/lib64/glusterfs/5.3/xlator/performance/read-ahead.so(+0xb0ce)[0x7fea77b430ce]
> /lib64/libglusterfs.so.0(default_readv_cbk+0x180)[0x7fea89cbb8e0]
> /usr/lib64/glusterfs/5.3/xlator/cluster/distribute.so(+0x81c1a)[0x7fea77dc9c1a]
> /usr/lib64/glusterfs/5.3/xlator/protocol/client.so(+0x6d636)[0x7fea7c307636]
> /lib64/libgfrpc.so.0(+0xec70)[0x7fea899fec70]
> /lib64/libgfrpc.so.0(+0xf043)[0x7fea899ff043]
> /lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fea899faf23]
> /usr/lib64/glusterfs/5.3/rpc-transport/socket.so(+0xa37b)[0x7fea7e5e637b]
> /lib64/libglusterfs.so.0(+0x8aa49)[0x7fea89c96a49]
> /lib64/libpthread.so.0(+0x7dd5)[0x7fea88a95dd5]
> /lib64/libc.so.6(clone+0x6d)[0x7fea8835d02d]
>
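> Since the trace points into io-cache and read-ahead, a workaround we are
> considering (not yet tested on our side; "myvol" is a placeholder for the
> real volume name) is to disable those performance xlators on the volume:
>
>   gluster volume set myvol performance.io-cache off
>   gluster volume set myvol performance.read-ahead off
>
> Both can be switched back to "on" the same way if they make no difference.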
>
> --
> Regards,
> Shreyansh Shah
>


-- 
Regards,
Shreyansh Shah
