[Gluster-devel] regression test case tests/basic/volume-snapshot.t generated core

Atin Mukherjee amukherj at redhat.com
Tue Jan 31 06:24:45 UTC 2017


(gdb) bt
#0  0x00007f2e0dc3da5f in __gf_free (free_ptr=0x7f2d00000000)
    at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/mem-pool.c:306
#1  0x00007f2e0312c1f6 in gd_mgmt_v3_brick_op_cbk_fn (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, myframe=0x7f2df002a02c)
    at /home/jenkins/root/workspace/centos6-regression/xlators/mgmt/glusterd/src/glusterd-mgmt.c:1180
#2  0x00007f2e03080f9e in glusterd_big_locked_cbk (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, myframe=0x7f2df002a02c, fn=0x7f2e0312bf8b <gd_mgmt_v3_brick_op_cbk_fn>)
    at /home/jenkins/root/workspace/centos6-regression/xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:222
#3  0x00007f2e0312c271 in gd_mgmt_v3_brick_op_cbk (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, myframe=0x7f2df002a02c)
    at /home/jenkins/root/workspace/centos6-regression/xlators/mgmt/glusterd/src/glusterd-mgmt.c:1194
#4  0x00007f2e0d9ce55a in rpc_clnt_handle_reply (clnt=0x7f2dfc002e50, pollin=0x7f2df4006cd0)
    at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:790
#5  0x00007f2e0d9ceae2 in rpc_clnt_notify (trans=0x7f2dfc003330, mydata=0x7f2dfc002ea8, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f2df4006cd0)
    at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-clnt.c:970
#6  0x00007f2e0d9cabee in rpc_transport_notify (this=0x7f2dfc003330, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f2df4006cd0)
    at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-lib/src/rpc-transport.c:538
#7  0x00007f2e016ab22c in socket_event_poll_in (this=0x7f2dfc003330)
    at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2265
#8  0x00007f2e016ab7a8 in socket_event_handler (fd=15, idx=5, data=0x7f2dfc003330, poll_in=1, poll_out=0, poll_err=0)
    at /home/jenkins/root/workspace/centos6-regression/rpc/rpc-transport/socket/src/socket.c:2395
#9  0x00007f2e0dc78954 in event_dispatch_epoll_handler (event_pool=0x25daef0, event=0x7f2dfb5fde70)
    at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/event-epoll.c:571
#10 0x00007f2e0dc78d80 in event_dispatch_epoll_worker (data=0x25f6f30)
    at /home/jenkins/root/workspace/centos6-regression/libglusterfs/src/event-epoll.c:674
#11 0x00007f2e0cee1aa1 in start_thread () from ./lib64/libpthread.so.0
#12 0x00007f2e0c84aaad in clone () from ./lib64/libc.so.6
(gdb) f 1
#1  0x00007f2e0312c1f6 in gd_mgmt_v3_brick_op_cbk_fn (req=0x7f2df000acac, iov=0x7f2df000acec, count=1, myframe=0x7f2df002a02c)
    at /home/jenkins/root/workspace/centos6-regression/xlators/mgmt/glusterd/src/glusterd-mgmt.c:1180
1180    /home/jenkins/root/workspace/centos6-regression/xlators/mgmt/glusterd/src/glusterd-mgmt.c: No such file or directory.
(gdb) p peerid
$1 = (uuid_t *) 0x7f2d00000000


(gdb) p *peerid
Cannot access memory at address 0x7f2d00000000

GlusterD crashed while freeing peerid (invalid memory), which was populated
from frame->cookie in gd_mgmt_v3_brick_op_cbk_fn ().
gd_mgmt_v3_brick_op_req () callocs a fresh peerid via GD_ALLOC_COPY_UUID,
copies peerinfo->uuid into it, and passes it as the cookie to
gd_syncop_submit_request (), so we are not dealing with a single pointer
carrying multiple references, which was my initial suspect. What surprises me
is that a freshly allocated uuid is getting corrupted somewhere between the
request being sent over the wire and the callback firing. Any thoughts?




On Mon, Jan 30, 2017 at 9:11 PM, Shyam <srangana at redhat.com> wrote:

> Some more context here,
>
> - This run failed against release-3.10
> - Glusterd core dumped, so eyes on this would be needed
>
> Snip from the failure:
> Core stack from the regression run is in the link,
> https://build.gluster.org/job/centos6-regression/2963/consoleFull
>
> On 01/30/2017 06:28 AM, Milind Changire wrote:
>
>> job: https://build.gluster.org/job/centos6-regression/2963/console
>>
>> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel
>



-- 

~ Atin (atinm)