<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
        {font-family:宋体;
        panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
        {font-family:"Cambria Math";
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:DengXian;
        panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
        {font-family:Calibri;
        panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
        {font-family:"\@宋体";
        panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
        {font-family:Cambria;
        panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
        {font-family:"\@等线";
        panose-1:2 1 6 0 3 1 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
        {margin:0cm;
        margin-bottom:.0001pt;
        text-align:justify;
        text-justify:inter-ideograph;
        font-size:10.5pt;
        font-family:DengXian;
        color:black;}
a:link, span.MsoHyperlink
        {mso-style-priority:99;
        color:#0563C1;
        text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
        {mso-style-priority:99;
        color:#954F72;
        text-decoration:underline;}
p.msonormal0, li.msonormal0, div.msonormal0
        {mso-style-name:msonormal;
        mso-margin-top-alt:auto;
        margin-right:0cm;
        mso-margin-bottom-alt:auto;
        margin-left:0cm;
        font-size:11.0pt;
        font-family:"Calibri",sans-serif;
        color:black;}
span.EmailStyle18
        {mso-style-type:personal;
        font-family:DengXian;
        color:windowtext;}
span.EmailStyle19
        {mso-style-type:personal;
        font-family:DengXian;
        color:windowtext;}
span.EmailStyle20
        {mso-style-type:personal;
        font-family:DengXian;
        color:windowtext;}
span.EmailStyle21
        {mso-style-type:personal;
        font-family:DengXian;
        color:windowtext;}
span.EmailStyle22
        {mso-style-type:personal;
        font-family:DengXian;
        color:windowtext;}
span.EmailStyle23
        {mso-style-type:personal;
        font-family:DengXian;
        color:windowtext;}
span.EmailStyle26
        {mso-style-type:personal-reply;
        font-family:DengXian;
        color:windowtext;}
.MsoChpDefault
        {mso-style-type:export-only;
        font-size:10.0pt;}
@page WordSection1
        {size:612.0pt 792.0pt;
        margin:72.0pt 90.0pt 72.0pt 90.0pt;}
div.WordSection1
        {page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body bgcolor="white" lang="ZH-CN" link="#0563C1" vlink="#954F72">
<div class="WordSection1">
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">I did some further study of this issue. I think it can happen if frame
</span><b><span lang="EN-US" style="color:red">0x7f84740116e0 </span></b><span lang="EN-US" style="color:windowtext">is freed while still kept on the rpc_clnt saved-frames list: when a frame is destroyed it is put on the mem-pool hot list and is
likely to be reused for the next request, but by the time it is reused its ret address has changed, so when the earlier request&#8217;s response arrives, the reply path can still retrieve this recycled frame, which is wrong!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">I find that FRAME_DESTROY does not touch the rpc_clnt saved-frames list at all (a frame being freed should never still be on that list). Could we add a sanity check that walks every element
of the saved-frames list to make sure the frame about to be destroyed is not still present?<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Looking forward to your reply!</span><span lang="EN-US" style="color:windowtext"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal" align="left" style="text-align:left"><b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">From:</span></b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">
Zhou, Cynthia (NSB - CN/Hangzhou) <br>
<b>Sent:</b> Friday, October 19, 2018 9:59 AM<br>
<b>To:</b> 'Ravishankar N' <ravishankar@redhat.com><br>
<b>Cc:</b> 'gluster-users' <gluster-users@gluster.org><br>
<b>Subject:</b> RE: glustershd coredump generated while reboot all 3 sn nodes<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal" align="left" style="text-align:left"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Hi,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">From a recent coredump I got two interesting thread backtraces, which suggest glustershd has two threads polling in messages from the same client simultaneously:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Thread 17 (Thread 0x7f8485247700 (LWP 6063)):<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#0 0x00007f8489787c80 in pthread_mutex_lock () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#1 0x00007f848a9c177e in dict_ref (this=0x18004f0f0) at dict.c:660<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#2 0x00007f84845920e4 in afr_selfheal_discover_cbk (frame=</span><b><span lang="EN-US" style="color:#4472C4">0x7f847400bf00</span></b><span lang="EN-US" style="color:windowtext">,
</span><b><span lang="EN-US" style="color:#70AD47">cookie=0x2, this=0x7f84800390b0, op_ret=0, op_errno=0, inode=0x0,
<o:p></o:p></span></b></p>
<p class="MsoNormal"><b><span lang="EN-US" style="color:#70AD47"> buf=0x7f84740116e0, xdata=0x18004f0f0, parbuf=0x7f8474019e80</span></b><span lang="EN-US" style="color:windowtext">) at afr-self-heal-common.c:1723<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#3 0x00007f848480b96d in client3_3_entrylk_cbk (req=0x7f8474019e40, iov=0x7f8474019e80, count=1, myframe=</span><b><span lang="EN-US" style="color:red">0x7f84740116e0</span></b><span lang="EN-US" style="color:windowtext">)
at client-rpc-fops.c:1611<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#4 0x00007f848a78ed47 in rpc_clnt_handle_reply (clnt=</span><b><span lang="EN-US" style="color:#C55A11">0x7f848004f0c0</span></b><span lang="EN-US" style="color:windowtext">, pollin=0x7f84800bd6d0)
at rpc-clnt.c:778<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#5 0x00007f848a78f2e5 in rpc_clnt_notify (trans=0x7f848004f2f0, mydata=0x7f848004f0f0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f84800bd6d0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> at rpc-clnt.c:971<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#6 0x00007f848a78b319 in rpc_transport_notify (this=0x7f848004f2f0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f84800bd6d0) at rpc-transport.c:538<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#7 0x00007f84856d234d in socket_event_poll_in (this=0x7f848004f2f0, notify_handled=_gf_true) at socket.c:2315<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#8 0x00007f84856d2992 in socket_event_handler (fd=15, idx=8, gen=1, data=0x7f848004f2f0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2471<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#9 0x00007f848aa395ac in event_dispatch_epoll_handler (event_pool=0x1d40b00, event=0x7f8485246e84) at event-epoll.c:583<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#10 0x00007f848aa39883 in event_dispatch_epoll_worker (data=0x1d883d0) at event-epoll.c:659<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#11 0x00007f84897855da in start_thread () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#12 0x00007f848905bcbf in clone () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">And <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Thread 1 (Thread 0x7f847f6ff700 (LWP 6083)):<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#0 0x00007f8484812d24 in client3_3_lookup_cbk (req=0x7f8474002300, iov=0x7f8474002340, count=1, myframe=</span><span lang="EN-US" style="color:red">0x7f84740116e0</span><span lang="EN-US" style="color:windowtext">)
at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#1 0x00007f848a78ed47 in rpc_clnt_handle_reply (clnt=</span><b><span lang="EN-US" style="color:#C55A11">0x7f848004f0c0</span></b><span lang="EN-US" style="color:windowtext">, pollin=0x7f847800d9f0)
at rpc-clnt.c:778<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#2 0x00007f848a78f2e5 in rpc_clnt_notify (trans=0x7f848004f2f0, mydata=0x7f848004f0f0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f847800d9f0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> at rpc-clnt.c:971<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#3 0x00007f848a78b319 in rpc_transport_notify (this=0x7f848004f2f0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f847800d9f0) at rpc-transport.c:538<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#4 0x00007f84856d234d in socket_event_poll_in (this=0x7f848004f2f0, notify_handled=_gf_true) at socket.c:2315<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#5 0x00007f84856d2992 in socket_event_handler (fd=15, idx=8, gen=1, data=0x7f848004f2f0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2471<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#6 0x00007f848aa395ac in event_dispatch_epoll_handler (event_pool=0x1d40b00, event=0x7f847f6fee84) at event-epoll.c:583<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#7 0x00007f848aa39883 in event_dispatch_epoll_worker (data=0x7f848004ef30) at event-epoll.c:659<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#8 0x00007f84897855da in start_thread () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">#9 0x00007f848905bcbf in clone () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">The coredump is generated because in thread 1 myframe->local is 0 (NULL):<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">(gdb) print *(struct _call_frame*)myframe<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">$12 = {root = 0x7f8474008180, parent = 0x7f847400bf00, frames = {next = 0x7f8474009230, prev = 0x7f8474008878},
</span><b><span lang="EN-US" style="color:#C00000">local = 0x0</span></b><span lang="EN-US" style="color:windowtext">,
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> this = 0x7f8480036e20, ret = 0x7f8484591e93 <afr_selfheal_discover_cbk>, ref_count = 0, lock = {spinlock = 0, mutex = {__data = {<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> __lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0,
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}}, cookie = 0x2, complete = _gf_true, op = GF_FOP_NULL,
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> begin = {tv_sec = 0, tv_usec = 0}, end = {tv_sec = 0, tv_usec = 0},
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> wind_from = 0x7f84845cba60 <__FUNCTION__.18726> "afr_selfheal_unlocked_discover_on",
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> wind_to = 0x7f84845cb090 "__priv->children[__i]->fops->lookup",
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"> unwind_from = 0x7f848483a350 <__FUNCTION__.18496> "client3_3_entrylk_cbk", unwind_to = 0x7f84845cb0b4 "afr_selfheal_discover_cbk"}<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">[Analysis]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">It seems thread 17 is receiving a message reply: it got call frame
</span><b><span lang="EN-US" style="color:red">0x7f84740116e0</span></b><span lang="EN-US" style="color:windowtext"> and called client3_3_entrylk_cbk.
<b>But</b> from the source code, when client3_3_entrylk_cbk does the unwind: <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><img width="635" height="81" style="width:6.6166in;height:.8416in" id="_x0000_i1025" src="cid:image002.jpg@01D46AE9.65090080"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Even if it could find the correct ret address, the parameters passed to it should not be the ones highlighted in green!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Another weird thing is that when rpc_clnt_handle_reply finds frame
</span><b><span lang="EN-US" style="color:red">0x7f84740116e0, </span></b><span lang="EN-US">it should be removed from the saved-frames list, so how could thread 1 retrieve this frame again??<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">I checked that when client3_3_lookup_cbk does the unwind, the parameters passed to the parent frame are the ones highlighted in green.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal" align="left" style="text-align:left"><b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">From:</span></b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">
Zhou, Cynthia (NSB - CN/Hangzhou) <br>
<b>Sent:</b> Tuesday, October 16, 2018 5:24 PM<br>
<b>To:</b> Ravishankar N <<a href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>><br>
<b>Cc:</b> gluster-users <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
<b>Subject:</b> RE: glustershd coredump generated while reboot all 3 sn nodes<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal" align="left" style="text-align:left"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">By the way, no private patches are applied.
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1639632">https://bugzilla.redhat.com/show_bug.cgi?id=1639632</a> has been created to track this issue, and I attached one coredump to the bug.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">There is not much useful info in the glustershd log; because the process dumped core suddenly, the log only shows prints from several seconds before the crash.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">cynthia<o:p></o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal" align="left" style="text-align:left"><b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">From:</span></b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">
Zhou, Cynthia (NSB - CN/Hangzhou) <br>
<b>Sent:</b> Tuesday, October 16, 2018 2:15 PM<br>
<b>To:</b> Ravishankar N <<a href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>><br>
<b>Cc:</b> gluster-users <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
<b>Subject:</b> RE: glustershd coredump generated while reboot all 3 sn nodes<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal" align="left" style="text-align:left"><span lang="EN-US"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Hi,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">Yes, it is glusterfs 3.12.3.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">I will create a BZ and attach the related coredump and glusterfs logs.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext">cynthia<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US" style="color:windowtext"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal" align="left" style="text-align:left"><b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">From:</span></b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:windowtext">
Ravishankar N <<a href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>>
<br>
<b>Sent:</b> Tuesday, October 16, 2018 12:23 PM<br>
<b>To:</b> Zhou, Cynthia (NSB - CN/Hangzhou) <<a href="mailto:cynthia.zhou@nokia-sbell.com">cynthia.zhou@nokia-sbell.com</a>><br>
<b>Cc:</b> gluster-users <<a href="mailto:gluster-users@gluster.org">gluster-users@gluster.org</a>><br>
<b>Subject:</b> Re: glustershd coredump generated while reboot all 3 sn nodes<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal" align="left" style="text-align:left"><span lang="EN-US"><o:p> </o:p></span></p>
<p><span lang="EN-US">Hi,<o:p></o:p></span></p>
<p><span lang="EN-US">- Is this stock glusterfs-3.12.3? Or do you have any patches applied on top of it?<o:p></o:p></span></p>
<p><span lang="EN-US">- If it is stock, could you create a BZ and attach the core file and the /var/log/glusterfs/ logs from 3 nodes at the time of crash?<o:p></o:p></span></p>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span lang="EN-US">Thanks,<br>
Ravi<o:p></o:p></span></p>
<div>
<p class="MsoNormal"><span lang="EN-US">On 10/16/2018 08:45 AM, Zhou, Cynthia (NSB - CN/Hangzhou) wrote:<o:p></o:p></span></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><span lang="EN-US">Hi, <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">This issue happened twice recently: when glustershd does heal, it occasionally generates a coredump.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">I did some debugging and found that sometimes afr_selfheal_unlocked_discover_on does a lookup and the frame is saved in rpc_clnt_submit; when the reply comes, a saved frame is found, but its address is different from the address that was
saved. I think this is wrong, but I cannot find a clue as to how this happened.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[root@mn-0:/home/robot]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[Thread debugging using libthread_db enabled]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Using host libthread_db library "/lib64/libthread_db.so.1".<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Core was generated by `/usr/sbin/glusterfs -s sn-0.local --volfile-id gluster/glustershd -p /var/run/g'.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Program terminated with signal SIGSEGV, Segmentation fault.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#0 0x00007fb1a6fd9d24 in client3_3_lookup_cbk (req=0x7fb188010fb0, iov=0x7fb188010ff0, count=1, myframe=</span><b><span lang="EN-US" style="color:#C00000">0x7fb188215740</span></b><span lang="EN-US">) at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">2802 client-rpc-fops.c: No such file or directory.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[Current thread is 1 (Thread 0x7fb1a7a0e700 (LWP 8151))]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Missing separate debuginfos, use: dnf debuginfo-install rcp-pack-glusterfs-1.2.0-RCP2.wf29.x86_64<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) bt<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#0 0x00007fb1a6fd9d24 in client3_3_lookup_cbk (req=0x7fb188010fb0, iov=0x7fb188010ff0, count=1, myframe=0x7fb188215740) at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#1 0x00007fb1acf55d47 in rpc_clnt_handle_reply (clnt=0x7fb1a008fff0, pollin=0x7fb1a0843910) at rpc-clnt.c:778<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#2 0x00007fb1acf562e5 in rpc_clnt_notify (trans=0x7fb1a00901c0, mydata=0x7fb1a0090020, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fb1a0843910) at rpc-clnt.c:971<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#3 0x00007fb1acf52319 in rpc_transport_notify (this=0x7fb1a00901c0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7fb1a0843910) at rpc-transport.c:538<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#4 0x00007fb1a7e9934d in socket_event_poll_in (this=0x7fb1a00901c0, notify_handled=_gf_true) at socket.c:2315<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#5 0x00007fb1a7e99992 in socket_event_handler (fd=20, idx=14, gen=103, data=0x7fb1a00901c0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2471<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#6 0x00007fb1ad2005ac in event_dispatch_epoll_handler (event_pool=0x175fb00, event=0x7fb1a7a0de84) at event-epoll.c:583<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#7 0x00007fb1ad200883 in event_dispatch_epoll_worker (data=0x17a73d0) at event-epoll.c:659<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#8 0x00007fb1abf4c5da in start_thread () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#9 0x00007fb1ab822cbf in clone () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) info thread<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Id Target Id Frame <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">* 1 Thread 0x7fb1a7a0e700 (LWP 8151) 0x00007fb1a6fd9d24 in client3_3_lookup_cbk (req=0x7fb188010fb0, iov=0x7fb188010ff0, count=1, myframe=0x7fb188215740) at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 2 Thread 0x7fb1aa0af700 (LWP 8147) 0x00007fb1ab761cbc in sigtimedwait () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 3 Thread 0x7fb1a98ae700 (LWP 8148) 0x00007fb1ab7f04b0 in nanosleep () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 4 Thread 0x7fb1957fa700 (LWP 8266) 0x00007fb1abf528ca in
<a href="mailto:pthread_cond_timedwait@@GLIBC_2.3.2">pthread_cond_timedwait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 5 Thread 0x7fb1a88ac700 (LWP 8150) 0x00007fb1abf528ca in
<a href="mailto:pthread_cond_timedwait@@GLIBC_2.3.2">pthread_cond_timedwait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 6 Thread 0x7fb17f7fe700 (LWP 8269) 0x00007fb1abf5250c in
<a href="mailto:pthread_cond_wait@@GLIBC_2.3.2">pthread_cond_wait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 7 Thread 0x7fb1aa8b0700 (LWP 8146) 0x00007fb1abf56300 in nanosleep () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 8 Thread 0x7fb1ad685780 (LWP 8145) 0x00007fb1abf4da3d in __pthread_timedjoin_ex () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 9 Thread 0x7fb1a542d700 (LWP 8251) 0x00007fb1ab7f04b0 in nanosleep () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 10 Thread 0x7fb1a4c2c700 (LWP 8260) 0x00007fb1abf528ca in
<a href="mailto:pthread_cond_timedwait@@GLIBC_2.3.2">pthread_cond_timedwait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 11 Thread 0x7fb196ffd700 (LWP 8263) 0x00007fb1abf528ca in
<a href="mailto:pthread_cond_timedwait@@GLIBC_2.3.2">pthread_cond_timedwait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 12 Thread 0x7fb1a60d7700 (LWP 8247) 0x00007fb1ab822fe7 in epoll_wait () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 13 Thread 0x7fb1a90ad700 (LWP 8149) 0x00007fb1abf528ca in
<a href="mailto:pthread_cond_timedwait@@GLIBC_2.3.2">pthread_cond_timedwait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) print (call_frame_t*)myframe<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">$1 = (call_frame_t *) 0x7fb188215740<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) print *(call_frame_t*)myframe<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">$2 = {root = 0x7fb1a0085090, parent = 0xcd4642c4a3efd678, frames = {next = 0x151e2a92a5ae1bb, prev = 0x0},
</span><b><span lang="EN-US" style="color:#C00000">local = 0x0, this = 0x0, ret = 0x0</span></b><span lang="EN-US">, ref_count = 0, lock = {spinlock = 0, mutex = {__data = {<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> __lock = 0, __count = 0, __owner = 0, __nusers = 4, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x7fb188215798, __next = 0x7fb188215798}},
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> __size = '\000' <repeats 12 times>, "\004", '\000' <repeats 11 times>, "\230W!\210\261\177\000\000\230W!\210\261\177\000", __align = 0}}, cookie = 0x7fb1882157a8, complete = (unknown: 2283886504),
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> op = 32689, begin = {tv_sec = 140400469825464, tv_usec = 140400469825464}, end = {tv_sec = 140400878737576, tv_usec = 140400132101048}, wind_from = 0x7fb18801cdc0 "", wind_to = 0x0, unwind_from = 0x0,
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> unwind_to = 0x0}<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) thread 6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[Switching to thread 6 (Thread 0x7fb17f7fe700 (LWP 8269))]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#0 0x00007fb1abf5250c in <a href="mailto:pthread_cond_wait@@GLIBC_2.3.2">
pthread_cond_wait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) bt<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#0 0x00007fb1abf5250c in <a href="mailto:pthread_cond_wait@@GLIBC_2.3.2">
pthread_cond_wait@@GLIBC_2.3.2</a> () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#1 0x00007fb1ad1dc993 in __syncbarrier_wait (barrier=0x7fb188014790, waitfor=3) at syncop.c:1138<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#2 0x00007fb1ad1dc9e4 in syncbarrier_wait (barrier=0x7fb188014790, waitfor=3) at syncop.c:1155<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#3 0x00007fb1a6d59cde in afr_selfheal_unlocked_discover_on (</span><b><span lang="EN-US" style="color:#C00000">frame=0x7fb1882162d0</span></b><span lang="EN-US">, inode=0x7fb188215740, gfid=0x7fb17f7fdb00 "x\326\357\243\304BF</span><span lang="EN-US" style="font-family:"Cambria",serif">ͻ</span><span lang="EN-US">\341Z*\251\342Q\001\060\333\177\177\261\177",
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> replies=0x7fb17f7fcf40, discover_on=0x7fb1a0084cb0 "\001\001\001", <incomplete sequence \360\255\272>) at afr-self-heal-common.c:1809<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#4 0x00007fb1a6d59d80 in afr_selfheal_unlocked_discover (</span><b><span lang="EN-US" style="color:#C00000">frame=0x7fb1882162d0</span></b><span lang="EN-US">, inode=0x7fb188215740, gfid=0x7fb17f7fdb00 "x\326\357\243\304BF</span><span lang="EN-US" style="font-family:"Cambria",serif">ͻ</span><span lang="EN-US">\341Z*\251\342Q\001\060\333\177\177\261\177",
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> replies=0x7fb17f7fcf40) at afr-self-heal-common.c:1828<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#5 0x00007fb1a6d5e51f in afr_selfheal_unlocked_inspect (frame=0x7fb1882162d0, this=0x7fb1a001db40, gfid=0x7fb17f7fdb00 "x\326\357\243\304BF</span><span lang="EN-US" style="font-family:"Cambria",serif">ͻ</span><span lang="EN-US">\341Z*\251\342Q\001\060\333\177\177\261\177",
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> link_inode=0x7fb17f7fd9c8, data_selfheal=0x7fb17f7fd9c4, metadata_selfheal=0x7fb17f7fd9c0, entry_selfheal=0x7fb17f7fd9bc) at afr-self-heal-common.c:2241<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#6 0x00007fb1a6d5f19b in afr_selfheal_do (frame=0x7fb1882162d0, this=0x7fb1a001db40, gfid=0x7fb17f7fdb00 "x\326\357\243\304BF</span><span lang="EN-US" style="font-family:"Cambria",serif">ͻ</span><span lang="EN-US">\341Z*\251\342Q\001\060\333\177\177\261\177")
at afr-self-heal-common.c:2483<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#7 0x00007fb1a6d5f346 in afr_selfheal (this=0x7fb1a001db40, gfid=0x7fb17f7fdb00 "x\326\357\243\304BF</span><span lang="EN-US" style="font-family:"Cambria",serif">ͻ</span><span lang="EN-US">\341Z*\251\342Q\001\060\333\177\177\261\177")
at afr-self-heal-common.c:2543<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#8 0x00007fb1a6d6ac5c in afr_shd_selfheal (healer=0x7fb1a0085640, child=0, gfid=0x7fb17f7fdb00 "x\326\357\243\304BF</span><span lang="EN-US" style="font-family:"Cambria",serif">ͻ</span><span lang="EN-US">\341Z*\251\342Q\001\060\333\177\177\261\177")
at afr-self-heald.c:343<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#9 0x00007fb1a6d6b00b in afr_shd_index_heal (subvol=0x7fb1a00171e0, entry=0x7fb1a0714180, parent=0x7fb17f7fddc0, data=0x7fb1a0085640) at afr-self-heald.c:440<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#10 0x00007fb1ad201ed3 in syncop_mt_dir_scan (frame=0x7fb1a07a0e90, subvol=0x7fb1a00171e0, loc=0x7fb17f7fddc0, pid=-6, data=0x7fb1a0085640, fn=0x7fb1a6d6aebc <afr_shd_index_heal>, xdata=0x7fb1a07b4ed0,
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> max_jobs=1, max_qlen=1024) at syncop-utils.c:407<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#11 0x00007fb1a6d6b2b5 in afr_shd_index_sweep (healer=0x7fb1a0085640, vgfid=0x7fb1a6d93610 "glusterfs.xattrop_index_gfid") at afr-self-heald.c:494<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#12 0x00007fb1a6d6b394 in afr_shd_index_sweep_all (healer=0x7fb1a0085640) at afr-self-heald.c:517<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#13 0x00007fb1a6d6b697 in afr_shd_index_healer (data=0x7fb1a0085640) at afr-self-heald.c:597<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#14 0x00007fb1abf4c5da in start_thread () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#15 0x00007fb1ab822cbf in clone () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal" align="left" style="text-align:left"><b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif">From:</span></b><span lang="EN-US" style="font-size:11.0pt;font-family:"Calibri",sans-serif"> Zhou, Cynthia (NSB - CN/Hangzhou)
<br>
<b>Sent:</b> Thursday, October 11, 2018 3:36 PM<br>
<b>To:</b> Ravishankar N <a href="mailto:ravishankar@redhat.com"><ravishankar@redhat.com></a><br>
<b>Cc:</b> gluster-users <a href="mailto:gluster-users@gluster.org"><gluster-users@gluster.org></a><br>
<b>Subject:</b> glustershd coredump generated while reboot all 3 sn nodes</span><span lang="EN-US"><o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal" align="left" style="text-align:left"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Hi,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">I find that sometimes, when an sn node is restarted, glustershd exits and generates a coredump. This has happened twice in my environment; I would like to hear your opinion on this issue, thanks!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">The glusterfs version I am using is 3.12.3.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[root@sn-1:/root]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"># gluster v info log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Volume Name: log<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Type: Replicate<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Volume ID: 87bcbaf8-5fa4-4060-9149-23f832befe92<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Status: Started<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Snapshot Count: 0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Number of Bricks: 1 x 3 = 3<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Transport-type: tcp<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Bricks:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Brick1: sn-0.local:/mnt/bricks/log/brick<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Brick2: sn-1.local:/mnt/bricks/log/brick<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Brick3: sn-2.local:/mnt/bricks/log/brick<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Options Reconfigured:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">server.allow-insecure: on<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">cluster.quorum-type: auto<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">network.ping-timeout: 42<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">cluster.consistent-metadata: on<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">cluster.favorite-child-policy: mtime<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">cluster.quorum-reads: no<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">cluster.server-quorum-type: none<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">transport.address-family: inet<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">nfs.disable: on<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">performance.client-io-threads: off<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">cluster.server-quorum-ratio: 51%<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[root@sn-1:/root]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">///////////////////////////////////////////////glustershd coredump////////////////////////////////////////////////////////////////<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"># lz4 -d core.glusterfs.0.c5f0c5547fbd4e5aa8f350b748e5675e.1812.1537967075000000.lz4<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Decoding file core.glusterfs.0.c5f0c5547fbd4e5aa8f350b748e5675e.1812.1537967075000000
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">core.glusterfs.0.c5f : decoded 263188480 bytes
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[root@sn-0:/mnt/export]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"># gdb /usr/sbin/glusterfs core.glusterfs.0.c5f0c5547fbd4e5aa8f350b748e5675e.1812.1537967075000000<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">GNU gdb (GDB) Fedora 8.1-14.wf29<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Copyright (C) 2018 Free Software Foundation, Inc.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">License GPLv3+: GNU GPL version 3 or later <<a href="http://gnu.org/licenses/gpl.html">http://gnu.org/licenses/gpl.html</a>><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">This is free software: you are free to change and redistribute it.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">There is NO WARRANTY, to the extent permitted by law. Type "show copying"<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">and "show warranty" for details.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">This GDB was configured as "x86_64-redhat-linux-gnu".<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Type "show configuration" for configuration details.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">For bug reporting instructions, please see:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><<a href="http://www.gnu.org/software/gdb/bugs/">http://www.gnu.org/software/gdb/bugs/</a>>.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Find the GDB manual and other documentation resources online at:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><<a href="http://www.gnu.org/software/gdb/documentation/">http://www.gnu.org/software/gdb/documentation/</a>>.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">For help, type "help".<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Type "apropos word" to search for commands related to "word"...<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Reading symbols from /usr/sbin/glusterfs...(no debugging symbols found)...done.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">warning: core file may not match specified executable file.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1818]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1812]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1813]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1817]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1966]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1968]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1970]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1974]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1976]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1814]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1815]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1816]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[New LWP 1828]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[Thread debugging using libthread_db enabled]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Using host libthread_db library "/lib64/libthread_db.so.1".<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Core was generated by `/usr/sbin/glusterfs -s sn-0.local --volfile-id gluster/glustershd -p /var/run/g'.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Program terminated with signal SIGSEGV, Segmentation fault.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#0 0x00007f1b5e5d7d24 in client3_3_lookup_cbk (req=0x7f1b44002300, iov=0x7f1b44002340, count=1, myframe=0x7f1b4401c850) at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">2802 client-rpc-fops.c: No such file or directory.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">[Current thread is 1 (Thread 0x7f1b5f00c700 (LWP 1818))]<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Missing separate debuginfos, use: dnf debuginfo-install rcp-pack-glusterfs-1.2.0_1_g54e6196-RCP2.wf29.x86_64<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) bt<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#0 0x00007f1b5e5d7d24 in client3_3_lookup_cbk (req=0x7f1b44002300, iov=0x7f1b44002340, count=1, myframe=0x7f1b4401c850) at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#1 0x00007f1b64553d47 in rpc_clnt_handle_reply (clnt=0x7f1b5808bbb0, pollin=0x7f1b580c6620) at rpc-clnt.c:778<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#2 0x00007f1b645542e5 in rpc_clnt_notify (trans=0x7f1b5808bde0, mydata=0x7f1b5808bbe0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f1b580c6620) at rpc-clnt.c:971<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#3 0x00007f1b64550319 in rpc_transport_notify (this=0x7f1b5808bde0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f1b580c6620) at rpc-transport.c:538<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#4 0x00007f1b5f49734d in socket_event_poll_in (this=0x7f1b5808bde0, notify_handled=_gf_true) at socket.c:2315<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#5 0x00007f1b5f497992 in socket_event_handler (fd=25, idx=15, gen=7, data=0x7f1b5808bde0, poll_in=1, poll_out=0, poll_err=0) at socket.c:2471<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#6 0x00007f1b647fe5ac in event_dispatch_epoll_handler (event_pool=0x230cb00, event=0x7f1b5f00be84) at event-epoll.c:583<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#7 0x00007f1b647fe883 in event_dispatch_epoll_worker (data=0x23543d0) at event-epoll.c:659<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#8 0x00007f1b6354a5da in start_thread () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">#9 0x00007f1b62e20cbf in clone () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><b><span lang="EN-US" style="color:red">(gdb) print *(call_frame_t*)myframe</span></b><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoNormal"><b><span lang="EN-US" style="color:red">$1 = {root = 0x100000000, parent = 0x100000005, frames = {next = 0x7f1b4401c8a8, prev = 0x7f1b44010190},
</span></b><b><span lang="EN-US" style="color:#1F4E79">local = 0x0</span></b><b><span lang="EN-US" style="color:red">, this = 0x0, ret = 0x0, ref_count = 0, lock = {spinlock = 0, mutex = {__data = {</span></b><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoNormal"><b><span lang="EN-US" style="color:red"> __lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x7f1b44010190, __next = 0x0}},
</span></b><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoNormal"><b><span lang="EN-US" style="color:red"> __size = '\000' <repeats 24 times>, "\220\001\001D\033\177\000\000\000\000\000\000\000\000\000", __align = 0}}, cookie = 0x7f1b4401ccf0, complete = _gf_false, op = GF_FOP_NULL, begin = {</span></b><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoNormal"><b><span lang="EN-US" style="color:red"> tv_sec = 139755081730912, tv_usec = 139755081785872}, end = {tv_sec = 448811404, tv_usec = 21474836481}, wind_from = 0x0, wind_to = 0x0, unwind_from = 0x0, unwind_to = 0x0}</span></b><span lang="EN-US"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) info thread<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Id Target Id Frame <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">* 1 Thread 0x7f1b5f00c700 (LWP 1818) 0x00007f1b5e5d7d24 in client3_3_lookup_cbk (req=0x7f1b44002300, iov=0x7f1b44002340, count=1, myframe=0x7f1b4401c850) at client-rpc-fops.c:2802<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 2 Thread 0x7f1b64c83780 (LWP 1812) 0x00007f1b6354ba3d in __pthread_timedjoin_ex () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 3 Thread 0x7f1b61eae700 (LWP 1813) 0x00007f1b63554300 in nanosleep () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 4 Thread 0x7f1b5feaa700 (LWP 1817) 0x00007f1b635508ca in
pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 5 Thread 0x7f1b5ca2b700 (LWP 1966) 0x00007f1b62dee4b0 in nanosleep () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 6 Thread 0x7f1b4f7fe700 (LWP 1968) 0x00007f1b6355050c in
pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 7 Thread 0x7f1b4e7fc700 (LWP 1970) 0x00007f1b6355050c in
pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 8 Thread 0x7f1b4d7fa700 (LWP 1974) 0x00007f1b62dee4b0 in nanosleep () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 9 Thread 0x7f1b33fff700 (LWP 1976) 0x00007f1b62dee4b0 in nanosleep () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 10 Thread 0x7f1b616ad700 (LWP 1814) 0x00007f1b62d5fcbc in sigtimedwait () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 11 Thread 0x7f1b60eac700 (LWP 1815) 0x00007f1b62dee4b0 in nanosleep () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 12 Thread 0x7f1b606ab700 (LWP 1816) 0x00007f1b635508ca in
pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> 13 Thread 0x7f1b5d6d5700 (LWP 1828) 0x00007f1b62e20fe7 in epoll_wait () from /lib64/libc.so.6<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">(gdb) quit<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">The source code is shown below; from gdb, it dumped core because frame->local is
</span><b><span lang="EN-US" style="color:red">NULL</span></b><span lang="EN-US">!<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"><img border="0" width="997" height="370" style="width:10.3833in;height:3.85in" id="Picture_x0020_1" src="cid:image004.jpg@01D46AE9.65090080"><o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">From sn-0 journal log,<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">Sep 26 16:04:40.034577 sn-0 systemd-coredump[2612]: Process 1812 (glusterfs) of user 0 dumped core.<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1818:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b5e5d7d24 client3_3_lookup_cbk (client.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b64553d47 rpc_clnt_handle_reply (libgfrpc.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b645542e5 rpc_clnt_notify (libgfrpc.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b64550319 rpc_transport_notify (libgfrpc.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b5f49734d socket_event_poll_in (socket.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #5 0x00007f1b5f497992 socket_event_handler (socket.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #6 0x00007f1b647fe5ac event_dispatch_epoll_handler (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #7 0x00007f1b647fe883 event_dispatch_epoll_worker (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #8 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #9 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1812:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b6354ba3d __GI___pthread_timedjoin_ex (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647feae1 event_dispatch_epoll (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b647c2703 event_dispatch (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x000000000040ab95 main (glusterfsd)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62d4baf7 __libc_start_main (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #5 0x000000000040543a _start (glusterfsd)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1813:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b63554300 __nanosleep (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647a04e5 gf_timer_proc (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1817:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b635508ca
pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647d98e3 syncenv_task (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b647d9b7e syncenv_processor (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1966:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b62dee4b0 __nanosleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b62dee38a sleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b5e36970c afr_shd_index_healer (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1968:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b6355050c
pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647da993 __syncbarrier_wait (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b647da9e4 syncbarrier_wait (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b5e357cde afr_selfheal_unlocked_discover_on (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b5e357d80 afr_selfheal_unlocked_discover (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #5 0x00007f1b5e363bf8 __afr_selfheal_entry_prepare (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #6 0x00007f1b5e3641c0 afr_selfheal_entry_dirent (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #7 0x00007f1b5e36488a afr_selfheal_entry_do_subvol (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #8 0x00007f1b5e365077 afr_selfheal_entry_do (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #9 0x00007f1b5e3656b6 __afr_selfheal_entry (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #10 0x00007f1b5e365bba afr_selfheal_entry (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #11 0x00007f1b5e35d250 afr_selfheal_do (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #12 0x00007f1b5e35d346 afr_selfheal (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #13 0x00007f1b5e368c5c afr_shd_selfheal (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #14 0x00007f1b5e36900b afr_shd_index_heal (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #15 0x00007f1b647ffed3 syncop_mt_dir_scan (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #16 0x00007f1b5e3692b5 afr_shd_index_sweep (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #17 0x00007f1b5e369394 afr_shd_index_sweep_all (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #18 0x00007f1b5e369697 afr_shd_index_healer (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #19 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #20 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1970:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b6355050c
pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647da993 __syncbarrier_wait (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b647da9e4 syncbarrier_wait (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b5e357742 afr_selfheal_unlocked_lookup_on (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b5e364204 afr_selfheal_entry_dirent (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #5 0x00007f1b5e36488a afr_selfheal_entry_do_subvol (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #6 0x00007f1b5e365077 afr_selfheal_entry_do (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #7 0x00007f1b5e3656b6 __afr_selfheal_entry (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #8 0x00007f1b5e365bba afr_selfheal_entry (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #9 0x00007f1b5e35d250 afr_selfheal_do (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #10 0x00007f1b5e35d346 afr_selfheal (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #11 0x00007f1b5e368c5c afr_shd_selfheal (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #12 0x00007f1b5e36900b afr_shd_index_heal (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #13 0x00007f1b647ffed3 syncop_mt_dir_scan (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #14 0x00007f1b5e3692b5 afr_shd_index_sweep (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #15 0x00007f1b5e369394 afr_shd_index_sweep_all (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #16 0x00007f1b5e369697 afr_shd_index_healer (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #17 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #18 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1974:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b62dee4b0 __nanosleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b62dee38a sleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b5e36970c afr_shd_index_healer (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1976:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b62dee4b0 __nanosleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b62dee38a sleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b5e36970c afr_shd_index_healer (replicate.so)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1814:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b62d5fcbc __sigtimedwait (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b63554afc sigwait (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x0000000000409ed7 glusterfs_sigwaiter (glusterfsd)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1815:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b62dee4b0 __nanosleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b62dee38a sleep (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b647c3f5c pool_sweeper (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1816:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b635508ca
pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647d98e3 syncenv_task (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b647d9b7e syncenv_processor (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #4 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US">
<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> Stack trace of thread 1828:<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #0 0x00007f1b62e20fe7 epoll_wait (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #1 0x00007f1b647fe855 event_dispatch_epoll_worker (libglusterfs.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #2 0x00007f1b6354a5da start_thread (libpthread.so.0)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> #3 0x00007f1b62e20cbf __clone (libc.so.6)<o:p></o:p></span></p>
<p class="MsoNormal"><span lang="EN-US"> <o:p></o:p></span></p>
</blockquote>
<p class="MsoNormal" align="left" style="text-align:left"><span lang="EN-US" style="font-size:12.0pt;font-family:宋体"><o:p> </o:p></span></p>
</div>
</body>
</html>