[Bugs] [Bug 1577574] brick crash seen while creating and deleting two volumes in loop

bugzilla@redhat.com
Sun May 13 06:27:49 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1577574



--- Comment #1 from Mohit Agrawal <moagrawa@redhat.com> ---
RCA: The brick process was crashing because no ctx was available on the inode:
the inode had already been freed by xlator_mem_cleanup, which is invoked while
the transport is destroyed in free_state before fd_unref is called.
     To resolve this, move the code in free_state that destroys the transport
so that it runs after all other resources have been freed.
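
The sketch below illustrates the intended ordering described above; it is a
simplified, hypothetical version of free_state (field names and surrounding
cleanup are assumed), not the actual patch. The point is that the fd (and,
through it, the inode) must be released before the transport reference is
dropped, because destroying the transport can trigger xlator_mem_cleanup and
free the inode table that fd_unref still needs.

/* Sketch only -- simplified, not the actual GlusterFS patch. */
static void
free_state (server_state_t *state)
{
        /* Release the fd (and, through it, the inode) while the
         * inode table is still valid. */
        if (state->fd) {
                fd_unref (state->fd);
                state->fd = NULL;
        }

        /* ... free the remaining per-request resources here ... */

        /* Drop the transport reference last: destroying the transport
         * can trigger xlator_mem_cleanup, which frees the inode table
         * used by fd_unref above. */
        if (state->xprt) {
                rpc_transport_unref (state->xprt);
                state->xprt = NULL;
        }

        GF_FREE (state);
}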

>>>>>>>>>>>>>>>>>>>>

#0  0x00007fa5a3c08dc7 in __inode_get_xl_index (xlator=0x7fa590029ff0, inode=0x7fa53c002260) at inode.c:455
#1  __inode_unref (inode=inode@entry=0x7fa53c002260) at inode.c:489
#2  0x00007fa5a3c09621 in inode_unref (inode=0x7fa53c002260) at inode.c:559
#3  0x00007fa5a3c1f502 in fd_destroy (bound=_gf_true, fd=0x7fa538004d20) at fd.c:532
#4  fd_unref (fd=0x7fa538004d20) at fd.c:569
#5  0x00007fa58ed03169 in free_state (state=0x7fa5380013b0) at server-helpers.c:185
#6  0x00007fa58ecfe64a in server_submit_reply (frame=frame@entry=0x7fa538002910, req=0x7fa50c29ade0, arg=arg@entry=0x7fa58e8ec910, payload=payload@entry=0x0, payloadcount=payloadcount@entry=0, iobref=0x7fa538004e50, iobref@entry=0x0, xdrproc=0x7fa5a37a36b0 <xdr_gfs3_opendir_rsp>) at server.c:212
#7  0x00007fa58ed12de4 in server_opendir_cbk (frame=frame@entry=0x7fa538002910, cookie=<optimized out>, this=0x7fa590029ff0, op_ret=op_ret@entry=0, op_errno=op_errno@entry=0, fd=fd@entry=0x7fa538004d20, xdata=xdata@entry=0x0) at server-rpc-fops.c:710
#8  0x00007fa58f173111 in io_stats_opendir_cbk (frame=0x7fa538006f10, cookie=<optimized out>, this=<optimized out>, op_ret=0, op_errno=0, fd=0x7fa538004d20, xdata=0x0) at io-stats.c:2315
#9  0x00007fa58f5b419d in index_opendir (frame=frame@entry=0x7fa538002480, this=this@entry=0x7fa5640138a0, loc=loc@entry=0x7fa5380013c8, fd=fd@entry=0x7fa538004d20, xdata=xdata@entry=0x0) at index.c:2113
#10 0x00007fa5a3c7a27b in default_opendir (frame=0x7fa538002480, this=<optimized out>, loc=0x7fa5380013c8, fd=0x7fa538004d20, xdata=0x0) at defaults.c:2956
#11 0x00007fa58f1621bb in io_stats_opendir (frame=frame@entry=0x7fa538006f10, this=this@entry=0x7fa564016110, loc=loc@entry=0x7fa5380013c8, fd=fd@entry=0x7fa538004d20, xdata=xdata@entry=0x0) at io-stats.c:3311
#12 0x00007fa5a3c7a27b in default_opendir (frame=0x7fa538006f10, this=<optimized out>, loc=0x7fa5380013c8, fd=0x7fa538004d20, xdata=0x0) at defaults.c:2956
#13 0x00007fa58ed1b082 in server_opendir_resume (frame=0x7fa538002910, bound_xl=0x7fa564017720) at server-rpc-fops.c:2672
#14 0x00007fa58ed01d29 in server_resolve_done (frame=0x7fa538002910) at server-resolve.c:587
#15 0x00007fa58ed01dcd in server_resolve_all (frame=frame@entry=0x7fa538002910) at server-resolve.c:622
#16 0x00007fa58ed027e5 in server_resolve (frame=0x7fa538002910) at server-resolve.c:571
#17 0x00007fa58ed01e0e in server_resolve_all (frame=frame@entry=0x7fa538002910) at server-resolve.c:618
#18 0x00007fa58ed0257b in server_resolve_inode (frame=frame@entry=0x7fa538002910) at server-resolve.c:425
#19 0x00007fa58ed02810 in server_resolve (frame=0x7fa538002910) at server-resolve.c:559
#20 0x00007fa58ed01dee in server_resolve_all (frame=frame@entry=0x7fa538002910) at server-resolve.c:611
#21 0x00007fa58ed028a4 in resolve_and_resume (frame=frame@entry=0x7fa538002910, fn=fn@entry=0x7fa58ed1ae90 <server_opendir_resume>) at server-resolve.c:642
#22 0x00007fa58ed1c851 in server3_3_opendir (req=<optimized out>) at server-rpc-fops.c:4938
#23 0x00007fa5a39ba66e in rpcsvc_request_handler (arg=0x7fa59003f9b0) at rpcsvc.c:1915
#24 0x00007fa5a2a57dd5 in start_thread () from /lib64/libpthread.so.0
#25 0x00007fa5a2320b3d in clone () from /lib64/libc.so.6


Regards
Mohit Agrawal
