[Bugs] [Bug 1626085] "glusterfs --process-name fuse" crashes and leads to " Transport endpoint is not connected"

bugzilla at redhat.com bugzilla at redhat.com
Mon Oct 8 10:30:20 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1626085

Christophe Combelles <ccomb at free.fr> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |ccomb at free.fr



--- Comment #3 from Christophe Combelles <ccomb at free.fr> ---
Hi,
we got a similar crash of the glusterfs process, which also leaves the mount
point unusable.

Could you point me to how to obtain the core dump? Is it created
automatically, or do I need to enable something to get it?
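(For reference, on most Linux hosts core dumps are suppressed until the core
size limit is raised; a rough sketch of the usual knobs follows -- the paths
and patterns below are examples, not values taken from this report:)

```shell
# Cores are suppressed while the core size limit is 0; raise it in the
# shell (or service unit) that starts the glusterfs process:
ulimit -c unlimited

# Where the kernel writes cores is governed by kernel.core_pattern;
# as root it can be pointed at a writable directory, for example:
#   sysctl -w kernel.core_pattern=/var/tmp/core.%e.%p.%t
# The current pattern can be inspected without root:
cat /proc/sys/kernel/core_pattern

# On systemd-based hosts such as CoreOS, cores are often captured by
# systemd-coredump instead; they can be listed and extracted with:
#   coredumpctl list glusterfs
#   coredumpctl dump glusterfs -o /tmp/glusterfs.core
```

Note that for a process running inside a container the limit has to be set
for the container itself (e.g. docker's --ulimit core=-1), while
kernel.core_pattern is not namespaced and is shared with the host.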


pending frames:
frame : type(1) op(LOOKUP)
frame : type(0) op(0)
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash: 
2018-10-08 09:14:20
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 4.1.2
/lib64/libglusterfs.so.0(+0x25920)[0x7f6f226e3920]
/lib64/libglusterfs.so.0(gf_print_trace+0x334)[0x7f6f226ed874]
/lib64/libc.so.6(+0x362f0)[0x7f6f20d482f0]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/nufa.so(+0x44db8)[0x7f6f1ad72db8]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/nufa.so(+0x22050)[0x7f6f1ad50050]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/nufa.so(+0x23332)[0x7f6f1ad51332]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/nufa.so(+0x42a9c)[0x7f6f1ad70a9c]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/replicate.so(+0x6e8d8)[0x7f6f1b0528d8]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/replicate.so(+0x6f08a)[0x7f6f1b05308a]
/usr/lib64/glusterfs/4.1.2/xlator/cluster/replicate.so(+0x6fba9)[0x7f6f1b053ba9]
/usr/lib64/glusterfs/4.1.2/xlator/protocol/client.so(+0x61f02)[0x7f6f1b2daf02]
/lib64/libgfrpc.so.0(+0xec20)[0x7f6f224b0c20]
/lib64/libgfrpc.so.0(+0xefb3)[0x7f6f224b0fb3]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f6f224ace93]
/usr/lib64/glusterfs/4.1.2/rpc-transport/socket.so(+0x7626)[0x7f6f1d5c0626]
/usr/lib64/glusterfs/4.1.2/rpc-transport/socket.so(+0xa0f7)[0x7f6f1d5c30f7]
/lib64/libglusterfs.so.0(+0x89094)[0x7f6f22747094]
/lib64/libpthread.so.0(+0x7e25)[0x7f6f21547e25]
/lib64/libc.so.6(clone+0x6d)[0x7f6f20e10bad]
---------

volume info :

Volume Name: vol_nextcloud
Type: Distributed-Replicate
Volume ID: f50d2270-abf8-47d5-97a0-af8eba2f2f0e
Status: Started
Snapshot Count: 31
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: sd-135021:/data/glusterfs/vol_nextcloud/brick1/brick
Brick2: sd-135024:/data/glusterfs/vol_nextcloud/brick1/brick
Brick3: sd-135609:/data/glusterfs/vol_nextcloud/brick1/brick
Brick4: sd-135608:/data/glusterfs/vol_nextcloud/brick1/brick
Options Reconfigured:
features.scrub: Active
features.bitrot: on
features.quota-deem-statfs: off
features.inode-quota: off
features.quota: off
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
cluster.nufa: enable
auth.allow: 10.88.80.*
features.barrier: disable

------------------
the volume is mounted (from the CoreOs host) with:

docker exec -ti glusterfs-server mount -t glusterfs -o acl \
    sd-135608:/vol_nextcloud /mnt/gluster_nextcloud

------------------

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.