[Bugs] [Bug 1751085] Gluster fuse mount crashed during truncate

bugzilla at redhat.com bugzilla at redhat.com
Thu Sep 12 06:43:23 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1751085

bipin <bshetty at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |bshetty at redhat.com



--- Comment #1 from bipin <bshetty at redhat.com> ---
Seeing a similar crash with the following steps:

1. Complete the RHHI-V deployment
2. Create 30 VMs using a pool
3. After an hour or so, the FUSE mount crashes


(gdb) bt
#0  0x00007ff30b541a1a in shard_common_resolve_shards (frame=frame@entry=0x7ff2d400eba8, this=this@entry=0x7ff304013470, post_res_handler=0x7ff30b54b2d0 <shard_post_resolve_truncate_handler>) at shard.c:1030
#1  0x00007ff30b5425e5 in shard_refresh_internal_dir (frame=frame@entry=0x7ff2d400eba8, this=this@entry=0x7ff304013470, type=type@entry=SHARD_INTERNAL_DIR_DOT_SHARD) at shard.c:1321
#2  0x00007ff30b54b46e in shard_truncate_begin (frame=frame@entry=0x7ff2d400eba8, this=this@entry=0x7ff304013470) at shard.c:2573
#3  0x00007ff30b551cc8 in shard_post_lookup_truncate_handler (frame=0x7ff2d400eba8, this=0x7ff304013470) at shard.c:2637
#4  0x00007ff30b5409f2 in shard_lookup_base_file_cbk (frame=0x7ff2d400eba8, cookie=<optimized out>, this=0x7ff304013470, op_ret=<optimized out>, op_errno=<optimized out>, inode=<optimized out>, buf=0x7ff2d402f700, xdata=0x7ff2fc02fd48, postparent=0x7ff2d402f9a0) at shard.c:1705
#5  0x00007ff30b7a8381 in dht_discover_complete (this=this@entry=0x7ff304011cc0, discover_frame=discover_frame@entry=0x7ff2d400c028) at dht-common.c:754
#6  0x00007ff30b7a92d4 in dht_discover_cbk (frame=0x7ff2d400c028, cookie=0x7ff30400f330, this=0x7ff304011cc0, op_ret=<optimized out>, op_errno=117, inode=0x7ff2fc00c578, stbuf=0x7ff2d400aba0, xattr=0x7ff2fc02fd48, postparent=0x7ff2d400ac10) at dht-common.c:1133
#7  0x00007ff30ba61315 in afr_discover_done (frame=0x7ff2d4068fe8, this=<optimized out>) at afr-common.c:3027
#8  0x00007ff30ba6c175 in afr_lookup_metadata_heal_check (frame=frame@entry=0x7ff2d4068fe8, this=this@entry=0x7ff30400f330) at afr-common.c:2769
#9  0x00007ff30ba6d089 in afr_discover_cbk (frame=frame@entry=0x7ff2d4068fe8, cookie=<optimized out>, this=<optimized out>, op_ret=<optimized out>, op_errno=<optimized out>, inode=inode@entry=0x7ff2fc00c578, buf=buf@entry=0x7ff30c6e3900, xdata=0x7ff2fc0166e8, postparent=postparent@entry=0x7ff30c6e3970) at afr-common.c:3077
#10 0x00007ff30bcacf3d in client3_3_lookup_cbk (req=<optimized out>, iov=<optimized out>, count=<optimized out>, myframe=0x7ff2d40180a8) at client-rpc-fops.c:2872
#11 0x00007ff313a90ac0 in rpc_clnt_handle_reply (clnt=clnt@entry=0x7ff304049ac0, pollin=pollin@entry=0x7ff2fc00b880) at rpc-clnt.c:778
#12 0x00007ff313a90e2b in rpc_clnt_notify (trans=<optimized out>, mydata=0x7ff304049af0, event=<optimized out>, data=0x7ff2fc00b880) at rpc-clnt.c:971
#13 0x00007ff313a8cba3 in rpc_transport_notify (this=this@entry=0x7ff304049e80, event=event@entry=RPC_TRANSPORT_MSG_RECEIVED, data=data@entry=0x7ff2fc00b880) at rpc-transport.c:557
#14 0x00007ff30eba55e6 in socket_event_poll_in (this=this@entry=0x7ff304049e80, notify_handled=<optimized out>) at socket.c:2322
#15 0x00007ff30eba7c2a in socket_event_handler (fd=11, idx=2, gen=4, data=0x7ff304049e80, poll_in=<optimized out>, poll_out=<optimized out>, poll_err=0, event_thread_died=0 '\000') at socket.c:2482
#16 0x00007ff313d498b0 in event_dispatch_epoll_handler (event=0x7ff30c6e3e70, event_pool=0x555e43424750) at event-epoll.c:643
#17 event_dispatch_epoll_worker (data=0x555e4347fca0) at event-epoll.c:759
#18 0x00007ff312b26ea5 in start_thread (arg=0x7ff30c6e4700) at pthread_create.c:307
#19 0x00007ff3123ec8cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) f 0
#0  0x00007ff30b541a1a in shard_common_resolve_shards (frame=frame@entry=0x7ff2d400eba8, this=this@entry=0x7ff304013470, post_res_handler=0x7ff30b54b2d0 <shard_post_resolve_truncate_handler>) at shard.c:1030
1030                            local->inode_list[i] = inode_ref (res_inode);
(gdb) p local->num_blocks
$1 = 0

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
