[Bugs] [Bug 1662557] New: glusterfs process crashes, causing "Transport endpoint not connected".

bugzilla at redhat.com
Sat Dec 29 21:57:16 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1662557

            Bug ID: 1662557
           Summary: glusterfs process crashes, causing "Transport endpoint
                    not connected".
           Product: GlusterFS
           Version: 3.12
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: fuse
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: rob.dewit at coosto.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Description of problem:
Sometimes even after only a couple of minutes, the glusterfs client process crashes.
This leaves all processes that were using that volume hanging on the dead mount, so to
remount the volume I have to kill all of those processes :-(
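
For completeness, recovery after a crash currently looks roughly like this on my
side (the mount point path below is just an example, not my real one):

# any access to the dead FUSE mount fails with "Transport endpoint is not connected"
MNT=/mnt/jf-vol0

# list, then kill, the processes that still hold the stale mount open
fuser -vm $MNT
fuser -km $MNT

# lazy-unmount the stale mount and mount the volume again
umount -l $MNT
mount -t glusterfs 10.10.0.177:/jf-vol0 $MNT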

Version-Release number of selected component (if applicable): 
5.2

How reproducible: 
I have not found out what triggers the crash, but it seems to be related to the
kind of load the client gets. Two people working interactively on the volume got
numerous crashes in a day. Another client host with fairly heavy automated usage
crashes only once in a while, although that seems to have increased over the
last few days.

Actual results:
glusterfs crashes

Expected results:
the mount point stays mounted until I 'umount' it.

Additional info:

* 1 volume with millions of small files
* 3-way cluster, all running 5.2
* all three nodes also mount the volume
* another host also mounts the volume
* some of the workload of the interactive users is running 'find' over large
numbers of files and directories (a rough sketch of that kind of load is below).
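
I have no reliable reproducer yet, but the interactive load is roughly of this
shape (mount point path again just an example):

# hypothetical approximation of the 'find'-heavy interactive load
MNT=/mnt/jf-vol0
while true; do
    find $MNT -type f > /dev/null
    find $MNT -type d | wc -l
    sleep 10
done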


Volume info:

Volume Name: jf-vol0
Type: Replicate
Volume ID: d6c72c52-24c5-4302-81ed-257507c27c1a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.10.0.177:/local.mnt/glfs/brick
Brick2: 10.10.0.208:/local.mnt/glfs/brick
Brick3: 10.10.0.25:/local.mnt/glfs/brick
Options Reconfigured:
performance.readdir-ahead: off
cluster.force-migration: off
cluster.readdir-optimize: on
cluster.lookup-optimize: on
network.inode-lru-limit: 50000
performance.md-cache-timeout: 60
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 60
features.cache-invalidation: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
performance.cache-max-file-size: 4MB
performance.cache-size: 4GB
cluster.shd-max-threads: 4
disperse.shd-wait-qlength: 2048
diagnostics.brick-sys-log-level: CRITICAL
diagnostics.brick-log-level: CRITICAL
diagnostics.client-log-level: WARNING
cluster.self-heal-daemon: enable
server.event-threads: 3
client.event-threads: 3
server.statedump-path: /local.mnt/glfs/
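
For reference, the options above were reconfigured with the normal gluster CLI,
roughly like this (same volume name as above):

gluster volume set jf-vol0 performance.cache-size 4GB
gluster volume set jf-vol0 features.cache-invalidation on
gluster volume set jf-vol0 diagnostics.client-log-level WARNING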

I set the log levels to CRITICAL and WARNING because that seemed to dramatically
improve performance; however, I temporarily set the client log level to DEBUG
until it crashed again. Tail of that log (full log in attachment):

[2018-12-29 15:08:44.476156] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.476542] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-0:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.476814] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-1:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.477011] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-2:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.478211] D [write-behind.c:750:__wb_fulfill_request] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x77e9)[0x7fe0a03407e9]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
))))) 0-jf-vol0-write-behind: (unique=15185, fop=WRITE,
gfid=6c5088a1-d6d7-4d2f-b445-9999ab7a7a8c, gen=0): request fulfilled. removing
the request from liability queue? = yes
[2018-12-29 15:08:44.478367] D [write-behind.c:419:__wb_request_unref] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5042)[0x7fe0a033e042]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7838)[0x7fe0a0340838]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
))))) 0-jf-vol0-write-behind: (unique = 15185, fop=WRITE,
gfid=6c5088a1-d6d7-4d2f-b445-9999ab7a7a8c, gen=0): destroying request, removing
from all queues
[2018-12-29 15:08:44.478517] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xea07)[0x7fe0a0347a07]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x2721e)[0x7fe0a082d21e]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.478634] D [write-behind.c:750:__wb_fulfill_request] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x77e9)[0x7fe0a03407e9]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
))))) 0-jf-vol0-write-behind: (unique=15187, fop=WRITE,
gfid=6c5088a1-d6d7-4d2f-b445-9999ab7a7a8c, gen=1): request fulfilled. removing
the request from liability queue? = yes
[2018-12-29 15:08:44.478784] D [write-behind.c:419:__wb_request_unref] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5042)[0x7fe0a033e042]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7838)[0x7fe0a0340838]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
))))) 0-jf-vol0-write-behind: (unique = 15187, fop=WRITE,
gfid=6c5088a1-d6d7-4d2f-b445-9999ab7a7a8c, gen=1): destroying request, removing
from all queues
[2018-12-29 15:08:44.478826] D [MSGID: 0] [write-behind.c:1710:__wb_pick_winds]
0-jf-vol0-write-behind: (unique=15188, fop=FLUSH,
gfid=6c5088a1-d6d7-4d2f-b445-9999ab7a7a8c, gen=2): picking the request for
winding
[2018-12-29 15:08:44.478892] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2018-12-29 15:08:44.478916] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-2' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.478931] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-1' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.478946] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.478986] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xea07)[0x7fe0a0347a07]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x2721e)[0x7fe0a082d21e]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.479030] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2018-12-29 15:08:44.479066] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-2' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.479096] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-1' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.479122] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.479189] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2018-12-29 15:08:44.479219] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-2' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.479244] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-1' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.479269] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.479531] D [write-behind.c:419:__wb_request_unref] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5042)[0x7fe0a033e042]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5347)[0x7fe0a033e347]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x99a4)[0x7fe0a03429a4]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9a9c)[0x7fe0a0342a9c]
))))) 0-jf-vol0-write-behind: (unique = 15188, fop=FLUSH,
gfid=6c5088a1-d6d7-4d2f-b445-9999ab7a7a8c, gen=2): destroying request, removing
from all queues
[2018-12-29 15:08:44.479636] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-0:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.479912] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-1:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.480019] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-2:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.481063] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe537)[0x7fe0a0347537]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/read-ahead.so(+0x7126)[0x7fe0a0130126]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/io-cache.so(+0x7b98)[0x7fe09bdf0b98]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.481097] D [MSGID: 0] [write-behind.c:1710:__wb_pick_winds]
0-jf-vol0-write-behind: (unique=15191, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=0): picking the request for
winding
[2018-12-29 15:08:44.481116] D [MSGID: 0]
[write-behind.c:1299:__wb_pick_unwinds] 0-jf-vol0-write-behind: (unique=15191,
fop=WRITE, gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=0): added req to
liability queue. inode-generation-number=1
[2018-12-29 15:08:44.481170] D [MSGID: 0] [stack.h:499:copy_frame] 0-stack:
groups is null (ngrps: 0) [Invalid argument]
[2018-12-29 15:08:44.481639] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe537)[0x7fe0a0347537]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/read-ahead.so(+0x7126)[0x7fe0a0130126]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/io-cache.so(+0x7b98)[0x7fe09bdf0b98]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.481674] D [MSGID: 0] [write-behind.c:1666:__wb_pick_winds]
0-jf-vol0-write-behind: (unique=15193, fop=WRITE, gen=1,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c): ordering.go is not set, hence not
winding
[2018-12-29 15:08:44.481692] D [MSGID: 0]
[write-behind.c:1299:__wb_pick_unwinds] 0-jf-vol0-write-behind: (unique=15193,
fop=WRITE, gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=1): added req to
liability queue. inode-generation-number=2
[2018-12-29 15:08:44.482006] D [write-behind.c:750:__wb_fulfill_request] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x77e9)[0x7fe0a03407e9]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
))))) 0-jf-vol0-write-behind: (unique=15191, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=0): request fulfilled. removing
the request from liability queue? = yes
[2018-12-29 15:08:44.482184] D [write-behind.c:419:__wb_request_unref] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5042)[0x7fe0a033e042]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7838)[0x7fe0a0340838]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
))))) 0-jf-vol0-write-behind: (unique = 15191, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=0): destroying request, removing
from all queues
[2018-12-29 15:08:44.482434] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe537)[0x7fe0a0347537]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/read-ahead.so(+0x7126)[0x7fe0a0130126]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/io-cache.so(+0x7b98)[0x7fe09bdf0b98]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.482450] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe537)[0x7fe0a0347537]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/read-ahead.so(+0x7126)[0x7fe0a0130126]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/io-cache.so(+0x7b98)[0x7fe09bdf0b98]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.482614] D [write-behind.c:750:__wb_fulfill_request] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x77e9)[0x7fe0a03407e9]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9118)[0x7fe0a0342118]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9a5d)[0x7fe0a0342a5d]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xea07)[0x7fe0a0347a07]
))))) 0-jf-vol0-write-behind: (unique=15195, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=2): request fulfilled. removing
the request from liability queue? = no
[2018-12-29 15:08:44.482649] D [MSGID: 0] [write-behind.c:1710:__wb_pick_winds]
0-jf-vol0-write-behind: (unique=15193, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=1): picking the request for
winding
[2018-12-29 15:08:44.482868] D [write-behind.c:419:__wb_request_unref] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5042)[0x7fe0a033e042]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5347)[0x7fe0a033e347]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x871e)[0x7fe0a034171e]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9a91)[0x7fe0a0342a91]
))))) 0-jf-vol0-write-behind: (unique = 15195, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=2): destroying request, removing
from all queues
[2018-12-29 15:08:44.482937] D [MSGID: 0] [stack.h:499:copy_frame] 0-stack:
groups is null (ngrps: 0) [Invalid argument]
[2018-12-29 15:08:44.483764] D [write-behind.c:750:__wb_fulfill_request] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x77e9)[0x7fe0a03407e9]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
))))) 0-jf-vol0-write-behind: (unique=15193, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=1): request fulfilled. removing
the request from liability queue? = yes
[2018-12-29 15:08:44.483950] D [write-behind.c:419:__wb_request_unref] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x5042)[0x7fe0a033e042]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7838)[0x7fe0a0340838]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x7b90)[0x7fe0a0340b90]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xe9ff)[0x7fe0a03479ff]
))))) 0-jf-vol0-write-behind: (unique = 15193, fop=WRITE,
gfid=f3e395b1-c592-4e57-b97e-2800d06e757c, gen=1): destroying request, removing
from all queues
[2018-12-29 15:08:44.484123] D [write-behind.c:1764:wb_process_queue] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13f)[0x7fe0a6ae412f] (-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0x9adc)[0x7fe0a0342adc]
(-->
/usr/lib64/glusterfs/5.2/xlator/performance/write-behind.so(+0xea07)[0x7fe0a0347a07]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/distribute.so(+0x73e86)[0x7fe0a05c3e86]
(-->
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x2721e)[0x7fe0a082d21e]
))))) 0-jf-vol0-write-behind: processing queues
[2018-12-29 15:08:44.985883] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2018-12-29 15:08:44.985944] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-2' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.985963] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-1' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.985977] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986055] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2018-12-29 15:08:44.986075] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-2' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986090] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-1' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986103] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986168] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.dirty' would not be sent on wire in the future
[Invalid argument]
[2018-12-29 15:08:44.986189] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-2' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986204] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-1' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986218] D [MSGID: 101016] [glusterfs3.h:746:dict_to_xdr]
0-dict: key 'trusted.afr.jf-vol0-client-0' would not be sent on wire in the
future [Invalid argument]
[2018-12-29 15:08:44.986844] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-0:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.986919] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-1:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.986934] D [MSGID: 0]
[client-rpc-fops_v2.c:1619:client4_0_fxattrop_cbk] 0-jf-vol0-client-2:
resetting op_ret to 0 from 0
[2018-12-29 15:08:44.987647] D [logging.c:1805:gf_log_flush_extra_msgs]
0-logging-infra: Log buffer size reduced. About to flush 5 extra log messages
[2018-12-29 15:08:44.987683] D [logging.c:1808:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git:/FILE/glusterfs.git
signal received: 11
time of crash: 
2018-12-29 15:08:44
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.2
/usr/lib64/libglusterfs.so.0(+0x26537)[0x7fe0a6ae1537]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x2fe)[0x7fe0a6aeb24e]
/lib64/libc.so.6(+0x35d10)[0x7fe0a5145d10]
/lib64/libpthread.so.0(pthread_mutex_lock+0x0)[0x7fe0a5936e30]
/usr/lib64/libglusterfs.so.0(__gf_free+0x145)[0x7fe0a6b0c795]
/usr/lib64/libglusterfs.so.0(+0x1a1ee)[0x7fe0a6ad51ee]
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x60124)[0x7fe0a0866124]
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x39ee1)[0x7fe0a083fee1]
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x3d7f1)[0x7fe0a08437f1]
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so(+0x3e562)[0x7fe0a0844562]
/usr/lib64/glusterfs/5.2/xlator/protocol/client.so(+0x731d0)[0x7fe0a0b101d0]
/usr/lib64/libgfrpc.so.0(+0xe534)[0x7fe0a68ae534]
/usr/lib64/libgfrpc.so.0(+0xee77)[0x7fe0a68aee77]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7fe0a68aaf13]
/usr/lib64/glusterfs/5.2/rpc-transport/socket.so(+0xaa23)[0x7fe0a19c2a23]
/usr/lib64/libglusterfs.so.0(+0x88aeb)[0x7fe0a6b43aeb]
/lib64/libpthread.so.0(+0x7504)[0x7fe0a5934504]
/lib64/libc.so.6(clone+0x3f)[0x7fe0a521c19f]
---------


gdb output:

Core was generated by `/usr/sbin/glusterfs --use-readdirp=off
--attribute-timeout=600 --entry-timeout='.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fe0a5936e30 in pthread_mutex_lock () from /lib64/libpthread.so.0
(gdb) bt
#0  0x00007fe0a5936e30 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1  0x00007fe0a6b0c795 in __gf_free () from /usr/lib64/libglusterfs.so.0
#2  0x00007fe0a6ad51ee in ?? () from /usr/lib64/libglusterfs.so.0
#3  0x00007fe0a0866124 in ?? () from
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so
#4  0x00007fe0a083fee1 in ?? () from
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so
#5  0x00007fe0a08437f1 in ?? () from
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so
#6  0x00007fe0a0844562 in ?? () from
/usr/lib64/glusterfs/5.2/xlator/cluster/replicate.so
#7  0x00007fe0a0b101d0 in ?? () from
/usr/lib64/glusterfs/5.2/xlator/protocol/client.so
#8  0x00007fe0a68ae534 in ?? () from /usr/lib64/libgfrpc.so.0
#9  0x00007fe0a68aee77 in ?? () from /usr/lib64/libgfrpc.so.0
#10 0x00007fe0a68aaf13 in rpc_transport_notify () from /usr/lib64/libgfrpc.so.0
#11 0x00007fe0a19c2a23 in ?? () from
/usr/lib64/glusterfs/5.2/rpc-transport/socket.so
#12 0x00007fe0a6b43aeb in ?? () from /usr/lib64/libglusterfs.so.0
#13 0x00007fe0a5934504 in start_thread () from /lib64/libpthread.so.0
#14 0x00007fe0a521c19f in clone () from /lib64/libc.so.6
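
The trace above is without debug symbols; if it helps, I can pull a fuller one
from the core along these lines (debuginfo package names may differ per distro):

# e.g. on RHEL/CentOS: debuginfo-install glusterfs glusterfs-fuse
gdb --batch -ex 'set pagination off' -ex 'thread apply all bt full' \
    /usr/sbin/glusterfs /path/to/core > gdb-bt-full.txt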
