[Bugs] [Bug 1655901] New: glusterfsd 5.1 crashes in socket.so
bugzilla at redhat.com
Tue Dec 4 08:58:22 UTC 2018
https://bugzilla.redhat.com/show_bug.cgi?id=1655901
Bug ID: 1655901
Summary: glusterfsd 5.1 crashes in socket.so
Product: GlusterFS
Version: 5
Component: glusterd
Severity: urgent
Assignee: bugs at gluster.org
Reporter: rob.dewit at coosto.com
CC: bugs at gluster.org
Description of problem: glusterfsd crashes in socket.so
Version-Release number of selected component (if applicable): 5.1
How reproducible: run the volume and wait for a crash on one of the nodes
Actual results:
Without a clear cause, the transport endpoint disappears. A core file is
written. glusterd is still running, but "gluster volume status" shows no
running daemon on the node. The volume remains usable.
Expected results:
No crashes and no need to manually restart glusterfsd after a crash.
Additional info:
This is a data set on a two-node cluster that is in the process of being
transferred to glusterfs. We started with a single node and added the new one
recently. A third will be added once we can declare this gluster cluster
stable.
gdb core file analysis:
Core was generated by `/usr/sbin/glusterfsd -s 10.10.0.177 --volfile-id
jf-vol0.10.10.0.177.local.mnt-'.
Program terminated with signal 11, Segmentation fault.
#0 0x00007f31692ce62b in ?? () from
/usr/lib64/glusterfs/5.1/rpc-transport/socket.so
(gdb) bt
#0 0x00007f31692ce62b in ?? () from
/usr/lib64/glusterfs/5.1/rpc-transport/socket.so
#1 0x00007f316e21aaeb in ?? () from /usr/lib64/libglusterfs.so.0
#2 0x00007f316d00b504 in start_thread () from /lib64/libpthread.so.0
#3 0x00007f316c8f319f in clone () from /lib64/libc.so.6
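The "??" frames above mean gdb has no debug symbols for socket.so and
libglusterfs. A resolved backtrace would make this report far more actionable.
A possible way to produce one, assuming the matching debuginfo package for
glusterfs 5.1 is installed (the package name is distro-dependent and a guess
here), is to re-read the core in batch mode:

```shell
#!/bin/sh
# Sketch: build the gdb invocation that re-reads the core with full
# backtraces for every thread. Requires the glusterfs debuginfo package
# matching 5.1 to be installed first (name varies by distribution).
gdb_cmd() {
  core=$1
  echo gdb /usr/sbin/glusterfsd "$core" -batch -ex "thread apply all bt full"
}

# Example (core path is hypothetical):
# eval "$(gdb_cmd /var/tmp/core.glusterfsd.1234)" > bt-full.txt
```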
Actual command line options were: -s 10.10.0.177 --volfile-id
jf-vol0.10.10.0.177.local.mnt-glfs-brick -p
/var/run/gluster/vols/jf-vol0/10.10.0.177-local.mnt-glfs-brick.pid -S
/var/run/gluster/ccdac309d72f1df7.socket --brick-name /local.mnt/glfs/brick -l
/var/log/glusterfs/bricks/local.mnt-glfs-brick.log --xlator-option
*-posix.glusterd-uuid=ab5f12ae-c203-4299-b5eb-9a7df6abfc1b --process-name brick
--brick-port 49152 --xlator-option jf-vol0-server.listen-port=49152
glusterd.log:
[2018-11-28 23:40:01.859118] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2018-11-28 23:40:01.859219] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2018-11-28 23:50:01.593857] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2018-11-28 23:50:01.593949] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2018-11-29 00:00:01.159538] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
[2018-11-29 00:00:09.723224] I [MSGID: 106143]
[glusterd-pmap.c:389:pmap_registry_remove] 0-pmap: removing brick (null) on
port 49152
[2018-11-29 00:00:09.748419] I [MSGID: 106005]
[glusterd-handler.c:6194:__glusterd_brick_rpc_notify] 0-management: Brick
10.10.0.177:/local.mnt/glfs/brick has disconnected from glusterd.
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 36 times between [2018-11-29
00:00:01.159538] and [2018-11-29 00:00:28.759673]
[2018-11-29 00:00:29.281398] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 339 times between [2018-11-29
00:00:29.281398] and [2018-11-29 00:02:28.804429]
[2018-11-29 00:02:29.293664] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 339 times between [2018-11-29
00:02:29.293664] and [2018-11-29 00:04:28.849724]
[2018-11-29 00:04:29.306508] E [MSGID: 101191]
[event-epoll.c:671:event_dispatch_epoll_worker] 0-epoll: Failed to dispatch
handler
The message "E [MSGID: 101191] [event-epoll.c:671:event_dispatch_epoll_worker]
0-epoll: Failed to dispatch handler" repeated 339 times between [2018-11-29
00:04:29.306508] and [2018-11-29 00:06:28.893840]
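The dispatch errors above cluster in bursts leading up to the brick
disconnect. To see when the epoll workers started failing, the occurrences
can be bucketed per hour straight from the glusterd log (a one-off sketch;
it only assumes the timestamp format shown in the excerpt above):

```shell
#!/bin/sh
# Sketch: count "Failed to dispatch handler" errors per hour in a
# glusterd log whose lines start with "[YYYY-MM-DD HH:MM:SS.usec]".
count_dispatch_errors() {
  grep 'Failed to dispatch' "$1" |
    awk '{ hour = substr($1, 2) " " substr($2, 1, 2); n[hour]++ }
         END { for (h in n) print h, n[h] }' |
    sort
}

# Usage: count_dispatch_errors /var/log/glusterfs/glusterd.log
# prints lines like "2018-11-29 00 <count>"
```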
volume info:
Volume Name: jf-vol0
Type: Replicate
Volume ID: d6c72c52-24c5-4302-81ed-257507c27c1a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.10.0.177:/local.mnt/glfs/brick
Brick2: 10.10.0.208:/local.mnt/glfs/brick
Options Reconfigured:
client.event-threads: 3
server.event-threads: 3
cluster.self-heal-daemon: enable
diagnostics.client-log-level: WARNING
diagnostics.brick-log-level: CRITICAL
diagnostics.brick-sys-log-level: CRITICAL
disperse.shd-wait-qlength: 2048
cluster.shd-max-threads: 4
performance.cache-size: 4GB
performance.cache-max-file-size: 4MB
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
features.cache-invalidation: on
features.cache-invalidation-timeout: 60
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 60
network.inode-lru-limit: 50000
cluster.lookup-optimize: on
cluster.readdir-optimize: on
cluster.force-migration: off
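Until the crash itself is fixed, the manual restart mentioned under "Expected
results" can be scripted: check whether the process behind the brick's pid
file is still alive, and if not, restart the volume's bricks with the
standard "gluster volume start ... force". A minimal sketch (pid-file path
taken from the command line in this report):

```shell
#!/bin/sh
# Sketch: report whether the brick daemon behind a pid file is alive.
# Returns 0 if running, 1 otherwise, so it can gate a forced restart.
check_brick() {
  pidfile=$1
  if [ -r "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "brick running (pid $(cat "$pidfile"))"
    return 0
  fi
  echo "brick not running"
  return 1
}

# Example (path from this report); restarts dead bricks of jf-vol0:
# check_brick /var/run/gluster/vols/jf-vol0/10.10.0.177-local.mnt-glfs-brick.pid \
#   || gluster volume start jf-vol0 force
```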