[Gluster-devel] tests/bugs/snapshot/bug-1109889.t - snapd crash
Raghavendra G
raghavendra at gluster.com
Mon Jul 6 05:49:56 UTC 2015
The issue can be tracked at:
https://bugzilla.redhat.com/show_bug.cgi?id=1240161
On Mon, Jul 6, 2015 at 10:25 AM, Raghavendra G <raghavendra at gluster.com>
wrote:
> <server_setvolume>
>
>         if (op_ret && !xl) {
>                 /* We would have set the xl_private of the transport
>                  * to the @conn. But if we have put the connection,
>                  * i.e., we are shutting down the connection, then we
>                  * should set xl_private to NULL, as it would be
>                  * pointing to freed memory and would segfault when
>                  * accessed upon getting DISCONNECT.
>                  */
>                 gf_client_put (client, NULL);
>                 req->trans->xl_private = NULL;
>         }
>
> </server_setvolume>
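>
> The comment in that block guards against a use-after-free on the
> transport. For illustration, a minimal stand-alone sketch of the same
> pattern, using hypothetical stand-in types rather than the real
> rpc_transport_t / connection structures:
>
> #include <stdio.h>
> #include <stdlib.h>
>
> typedef struct conn {
>         int id;                 /* hypothetical placeholder field */
> } conn_t;
>
> typedef struct transport {
>         void *xl_private;       /* mirrors req->trans->xl_private */
> } transport_t;
>
> int
> main (void)
> {
>         transport_t  trans = { NULL };
>         conn_t      *conn  = calloc (1, sizeof (*conn));
>
>         /* setvolume stored the connection on the transport ... */
>         trans.xl_private = conn;
>
>         /* ... but the handshake failed, so the connection goes away */
>         free (conn);
>         trans.xl_private = NULL; /* the line the comment insists on:
>                                   * without it, a later DISCONNECT
>                                   * would dereference freed memory */
>
>         if (trans.xl_private == NULL)
>                 printf ("no stale pointer left on the transport\n");
>         return 0;
> }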
>
> The crash is in gf_client_put. The code in gf_client_put dereferences
> client without a NULL check. I suspect this crash was uncovered (or
> caused) by [1], which fails any setvolume request that arrives before
> server graph initialization; in that case client is NULL. Will send
> out a patch.
>
> [1] http://review.gluster.org/11490
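>
> To make the failure mode concrete, here is a minimal stand-alone
> sketch of the kind of NULL guard such a patch could add. The types and
> names here are hypothetical simplifications, not the committed fix:
>
> #include <stddef.h>
> #include <stdio.h>
>
> /* Hypothetical simplified client_t; the real one lives in
>  * libglusterfs/src/client_t.h. */
> typedef struct client {
>         int bind_refs;
> } client_t;
>
> /* Frame #0 of the backtrace below shows gf_client_put being called
>  * with client=0x0; the early return is the guard it was missing. */
> static void
> sketch_client_put (client_t *client, int *detached)
> {
>         if (detached)
>                 *detached = 0;
>
>         if (!client)  /* without this, the dereference below would
>                        * segfault whenever setvolume fails before a
>                        * client_t has been created */
>                 return;
>
>         client->bind_refs--;
>         printf ("bind refs now %d\n", client->bind_refs);
> }
>
> int
> main (void)
> {
>         /* As with [1]: setvolume rejected before server graph init,
>          * so no client object exists yet. */
>         sketch_client_put (NULL, NULL);
>         return 0;
> }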
>
> On Fri, Jul 3, 2015 at 6:02 PM, Raghavendra Bhat <rabhat at redhat.com>
> wrote:
>
>> On 07/03/2015 03:37 PM, Atin Mukherjee wrote:
>>
>>>
>>> http://build.gluster.org/job/rackspace-regression-2GB-triggered/11898/consoleFull
>>> has caused a crash in snapd with the following bt:
>>>
>>
>> This seems to have crashed in server_setvolume, i.e., before the graph
>> could be properly made available for I/O (the snapview-server xlator is
>> yet to come into the picture). Still, I will try to reproduce it on my
>> local setup and see what might be causing it.
>>
>>
>> Regards,
>> Raghavendra Bhat
>>
>>
>>
>>> #0  0x00007f11e2ed3ded in gf_client_put (client=0x0, detached=0x0)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/client_t.c:294
>>> #1  0x00007f11d4eeac96 in server_setvolume (req=0x7f11c000195c)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/protocol/server/src/server-handshake.c:710
>>> #2  0x00007f11e2c1e05c in rpcsvc_handle_rpc_call (svc=0x7f11d001b160, trans=0x7f11c0000ac0, msg=0x7f11c0001810)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpcsvc.c:698
>>> #3  0x00007f11e2c1e3cf in rpcsvc_notify (trans=0x7f11c0000ac0, mydata=0x7f11d001b160, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f11c0001810)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpcsvc.c:792
>>> #4  0x00007f11e2c23ad7 in rpc_transport_notify (this=0x7f11c0000ac0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f11c0001810)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-lib/src/rpc-transport.c:538
>>> #5  0x00007f11d841787b in socket_event_poll_in (this=0x7f11c0000ac0)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2285
>>> #6  0x00007f11d8417dd1 in socket_event_handler (fd=13, idx=3, data=0x7f11c0000ac0, poll_in=1, poll_out=0, poll_err=0)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/rpc/rpc-transport/socket/src/socket.c:2398
>>> #7  0x00007f11e2ed79ec in event_dispatch_epoll_handler (event_pool=0x13bb040, event=0x7f11d4eb9e70)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:570
>>> #8  0x00007f11e2ed7dda in event_dispatch_epoll_worker (data=0x7f11d000dc10)
>>>     at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/libglusterfs/src/event-epoll.c:673
>>> #9  0x00007f11e213e9d1 in start_thread () from /lib64/libpthread.so.0
>>> #10 0x00007f11e1aa88fd in clone () from /lib64/libc.so.6
>>>
>>>
>
>
>
> --
> Raghavendra G
>
--
Raghavendra G