<div dir="ltr"><div>(gdb) bt<br>#0  0x00007f2e0dc3da5f in __gf_free (free_ptr=0x7f2d00000000)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/<wbr>libglusterfs/src/mem-pool.c:<wbr>306<br>#1  0x00007f2e0312c1f6 in gd_mgmt_v3_brick_op_cbk_fn (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, <br>    myframe=0x7f2df002a02c)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/xlators/<wbr>mgmt/glusterd/src/glusterd-<wbr>mgmt.c:1180<br>#2  0x00007f2e03080f9e in glusterd_big_locked_cbk (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, <br>    myframe=0x7f2df002a02c, fn=0x7f2e0312bf8b &lt;gd_mgmt_v3_brick_op_cbk_fn&gt;)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/xlators/<wbr>mgmt/glusterd/src/glusterd-<wbr>rpc-ops.c:222<br>#3  0x00007f2e0312c271 in gd_mgmt_v3_brick_op_cbk (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, <br>    myframe=0x7f2df002a02c)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/xlators/<wbr>mgmt/glusterd/src/glusterd-<wbr>mgmt.c:1194<br>#4  0x00007f2e0d9ce55a in rpc_clnt_handle_reply (clnt=0x7f2dfc002e50, pollin=0x7f2df4006cd0)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/rpc/rpc-<wbr>lib/src/rpc-clnt.c:790<br>#5  0x00007f2e0d9ceae2 in rpc_clnt_notify (trans=0x7f2dfc003330, mydata=0x7f2dfc002ea8, <br>    event=RPC_TRANSPORT_MSG_<wbr>RECEIVED, data=0x7f2df4006cd0)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/rpc/rpc-<wbr>lib/src/rpc-clnt.c:970<br>#6  0x00007f2e0d9cabee in rpc_transport_notify (this=0x7f2dfc003330, event=RPC_TRANSPORT_MSG_<wbr>RECEIVED, <br>    data=0x7f2df4006cd0) at /home/jenkins/root/workspace/<wbr>centos6-regression/rpc/rpc-<wbr>lib/src/rpc-transport.c:538<br>#7  0x00007f2e016ab22c in socket_event_poll_in (this=0x7f2dfc003330)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/rpc/rpc-<wbr>transport/socket/src/socket.c:<wbr>2265<br>#8  0x00007f2e016ab7a8 in socket_event_handler (fd=15, idx=5, data=0x7f2dfc003330, poll_in=1, poll_out=0, poll_err=0)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/rpc/rpc-<wbr>transport/socket/src/socket.c:<wbr>2395<br>#9  0x00007f2e0dc78954 in event_dispatch_epoll_handler (event_pool=0x25daef0, event=0x7f2dfb5fde70)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/<wbr>libglusterfs/src/event-epoll.<wbr>c:571<br>#10 0x00007f2e0dc78d80 in event_dispatch_epoll_worker (data=0x25f6f30)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/<wbr>libglusterfs/src/event-epoll.<wbr>c:674<br>#11 0x00007f2e0cee1aa1 in start_thread () from ./lib64/libpthread.so.0<br>#12 0x00007f2e0c84aaad in clone () from ./lib64/libc.so.6<br>(gdb) f 1<br>#1  0x00007f2e0312c1f6 in gd_mgmt_v3_brick_op_cbk_fn (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, <br>    myframe=0x7f2df002a02c)<br>    at /home/jenkins/root/workspace/<wbr>centos6-regression/xlators/<wbr>mgmt/glusterd/src/glusterd-<wbr>mgmt.c:1180<br>1180    /home/jenkins/root/workspace/<wbr>centos6-regression/xlators/<wbr>mgmt/glusterd/src/glusterd-<wbr>mgmt.c: No such file or directory.<br>(gdb) p peerid<br>$1 = (uuid_t *) 0x7f2d00000000<br><b>(gdb) p *peerid<br>Cannot access memory at address 0x7f2d00000000<br></b><br><br>GlusterD crashed while freeing peerid (invalid memory) which was populated from frame-&gt;cookie in gd_mgmt_v3_brick_op_cbk_fn (). 
Any thoughts?

On Mon, Jan 30, 2017 at 9:11 PM, Shyam <srangana@redhat.com> wrote:
> Some more context here,
>
> - This run failed against release-3.10
> - Glusterd core dumped, so eyes on this would be needed
>
> Snip from the failure:
> Core stack from the regression run is in the link,
> https://build.gluster.org/job/centos6-regression/2963/consoleFull
>
> On 01/30/2017 06:28 AM, Milind Changire wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
job: <a href="https://build.gluster.org/job/centos6-regression/2963/console" rel="noreferrer" target="_blank">https://build.gluster.org/job/<wbr>centos6-regression/2963/consol<wbr>e</a><br>
<br>
</blockquote><div class="HOEnZb"><div class="h5">
--
~ Atin (atinm)