[Gluster-users] gluster 3.4.5, gluster client process core dumped

Dang Zhiqiang dzq008 at 163.com
Mon May 25 10:28:31 UTC 2015


Thank you very much.


A relevant log entry:
data1.log:20694:[2015-05-21 07:24:32.652102] E [quota.c:318:quota_check_limit] (-->/usr/lib64/glusterfs/3.4.5/xlator/cluster/replicate.so(afr_getxattr_cbk+0xf8) [0x7f81fccc5168] (-->/usr/lib64/glusterfs/3.4.5/xlator/cluster/distribute.so(dht_getxattr_cbk+0x17d) [0x7f81fca8736d] (-->/usr/lib64/glusterfs/3.4.5/xlator/features/quota.so(quota_validate_cbk+0x1cd) [0x7f81fc8578fd]))) 0-dfs-quota: invalid argument: local->stub


That is, local->stub == NULL at the time quota_check_limit validates it.
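
For context, GF_VALIDATE_OR_GOTO is the libglusterfs argument-check macro that produced the log line above. The sketch below is paraphrased from memory of the 3.4-era common-utils.h and may differ in detail from the exact 3.4.5 source; the point is that a NULL argument logs "invalid argument: <name>" (via gf_log_callingfn, which also prints the caller chain seen in parentheses in the log) and jumps to the given label instead of crashing on the spot:

/* Paraphrased sketch; not the verbatim 3.4.5 definition. */
#define GF_VALIDATE_OR_GOTO(name, arg, label)                          \
        do {                                                           \
                if (!arg) {                                            \
                        errno = EINVAL;                                \
                        gf_log_callingfn (name, GF_LOG_ERROR,          \
                                          "invalid argument: " #arg);  \
                        goto label;                                    \
                }                                                      \
        } while (0)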



(gdb) l *0x7f81fc8578fd
0x7f81fc8578fd is in quota_validate_cbk (quota.c:243).
238                  gettimeofday (&ctx->tv, NULL);
239          }
240          UNLOCK (&ctx->lock);
241
242          quota_check_limit (frame, local->validate_loc.inode, this, NULL, NULL);
243          return 0;
244
245  unwind:
246          LOCK (&local->lock);
247          {

quota_check_limit (quota.c:318):
318         GF_VALIDATE_OR_GOTO (this->name, local->stub, out);


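Putting the two traces together: in the quota path the macro fails its check, logs, and bails out harmlessly, but in the DHT crash quoted below the callback is entered with frame->local already NULL, and the out: label it jumps to still dereferences local (local->op_ret, local->xattr) in DHT_STACK_UNWIND, hence the SIGSEGV at dht-common.c:2043. A minimal, hypothetical C sketch of that control flow (simplified stand-ins, not the actual gluster source):

#include <stdio.h>

struct local_state {
        int   op_ret;
        void *xattr;
};

/* Stand-in for DHT_STACK_UNWIND; only here to make the sketch complete. */
static int
unwind (int op_ret, int op_errno, void *xattr)
{
        printf ("unwind: op_ret=%d op_errno=%d xattr=%p\n",
                op_ret, op_errno, xattr);
        return 0;
}

static int
getxattr_cbk (struct local_state *local /* frame->local */, int op_errno)
{
        if (local == NULL)      /* VALIDATE_OR_GOTO (frame->local, out) */
                goto out;
        /* ... per-subvolume reply aggregation elided ... */
out:
        /* The unwind path dereferences local unconditionally; with
         * local == NULL this reproduces the crash at dht-common.c:2043. */
        return unwind (local->op_ret, op_errno, local->xattr);
}

int
main (void)
{
        return getxattr_cbk (NULL, 0);  /* segfaults, like the client did */
}

So the underlying question is why the callback runs after frame->local has already been torn down; Susant's reply below points to a fix for a similar crash, with the root-cause analysis in its commit message.
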
At 2015-05-25 18:17:52, "Susant Palai" <spalai at redhat.com> wrote:
>We found a similar crash and the fix for the same is here http://review.gluster.org/#/c/10389/. You can find the RCA in the commit message.
>
>Regards,
>Susant
>
>----- Original Message -----
>> From: "Dang Zhiqiang" <dzq008 at 163.com>
>> To: gluster-users at gluster.org
>> Sent: Monday, 25 May, 2015 3:30:16 PM
>> Subject: [Gluster-users] gluster 3.4.5, gluster client process core dumped
>> 
>> Hi,
>> 
>> Why does this happen, and how can it be fixed?
>> Thanks.
>> 
>> client log:
>> data1.log:20695:[2015-05-25 03:12:31.084149] W
>> [dht-common.c:2016:dht_getxattr_cbk]
>> (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5) [0x346d80d6f5]
>> (-->/usr/lib64/glusterfs/3.4.5/xlator/protocol/client.so(client3_3_getxattr_cbk+0x178)
>> [0x7f81fcf33ad8]
>> (-->/usr/lib64/glusterfs/3.4.5/xlator/cluster/replicate.so(afr_getxattr_cbk+0xf8)
>> [0x7f81fccc5168]))) 0-dfs-dht: invalid argument: frame->local
>> 
>> core dump info:
>> Core was generated by `/usr/sbin/glusterfs --volfile-id=dfs
>> --volfile-server=node1 /dat'.
>> Program terminated with signal 11, Segmentation fault.
>> #0 0x00007f81fca87354 in dht_getxattr_cbk (frame=0x7f82009efe34,
>> cookie=<value optimized out>, this=<value optimized out>, op_ret=<value
>> optimized out>, op_errno=0,
>> xattr=<value optimized out>, xdata=0x0) at dht-common.c:2043
>> 2043 DHT_STACK_UNWIND (getxattr, frame, local->op_ret, op_errno,
>> Missing separate debuginfos, use: debuginfo-install
>> glibc-2.12-1.132.el6_5.4.x86_64 keyutils-libs-1.4-4.el6.x86_64
>> krb5-libs-1.10.3-15.el6_5.1.x86_64 libcom_err-1.41.12-18.el6_5.1.x86_64
>> libgcc-4.4.7-4.el6.x86_64 libselinux-2.0.94-5.3.el6_4.1.x86_64
>> openssl-1.0.1e-16.el6_5.7.x86_64 zlib-1.2.3-29.el6.x86_64
>> (gdb) bt
>> #0 0x00007f81fca87354 in dht_getxattr_cbk (frame=0x7f82009efe34,
>> cookie=<value optimized out>, this=<value optimized out>, op_ret=<value
>> optimized out>, op_errno=0,
>> xattr=<value optimized out>, xdata=0x0) at dht-common.c:2043
>> #1 0x00007f81fccc5168 in afr_getxattr_cbk (frame=0x7f8200a0d32c,
>> cookie=<value optimized out>, this=<value optimized out>, op_ret=0,
>> op_errno=0, dict=0x7f82003a768c,
>> xdata=0x0) at afr-inode-read.c:618
>> #2 0x00007f81fcf33ad8 in client3_3_getxattr_cbk (req=<value optimized out>,
>> iov=<value optimized out>, count=<value optimized out>,
>> myframe=0x7f82009a58fc)
>> at client-rpc-fops.c:1115
>> #3 0x000000346d80d6f5 in rpc_clnt_handle_reply (clnt=0x232cb40,
>> pollin=0x1173ac10) at rpc-clnt.c:771
>> #4 0x000000346d80ec6f in rpc_clnt_notify (trans=<value optimized out>,
>> mydata=0x232cb70, event=<value optimized out>, data=<value optimized out>)
>> at rpc-clnt.c:891
>> #5 0x000000346d80a4e8 in rpc_transport_notify (this=<value optimized out>,
>> event=<value optimized out>, data=<value optimized out>) at
>> rpc-transport.c:497
>> #6 0x00007f81fdf7f216 in socket_event_poll_in (this=0x233c5a0) at
>> socket.c:2118
>> #7 0x00007f81fdf80c3d in socket_event_handler (fd=<value optimized out>,
>> idx=<value optimized out>, data=0x233c5a0, poll_in=1, poll_out=0,
>> poll_err=0) at socket.c:2230
>> #8 0x000000346d45e907 in event_dispatch_epoll_handler (event_pool=0x228be90)
>> at event-epoll.c:384
>> #9 event_dispatch_epoll (event_pool=0x228be90) at event-epoll.c:445
>> #10 0x0000000000406818 in main (argc=4, argv=0x7fff9e2e4898) at
>> glusterfsd.c:1934
>> (gdb) print ((call_frame_t *)0x7f82009efe34)->local
>> $2 = (void *) 0x0
>> (gdb) l *0x00007f81fca87354
>> 0x7f81fca87354 is in dht_getxattr_cbk (dht-common.c:2043).
>> 2038 dht_aggregate_xattr (xattr, local->xattr);
>> 2039 local->xattr = dict_copy (xattr, local->xattr);
>> 2040 }
>> 2041 out:
>> 2042 if (is_last_call (this_call_cnt)) {
>> 2043 DHT_STACK_UNWIND (getxattr, frame, local->op_ret, op_errno,
>> 2044 local->xattr, NULL);
>> 2045 }
>> 2046 return 0;
>> 2047 }
>> 
>> The check that triggers the jump to out:
>> 2016 VALIDATE_OR_GOTO (frame->local, out);
>> 
>> 
>> volume info:
>> # gluster v info
>> Volume Name: dfs
>> Type: Distributed-Replicate
>> Volume ID: 1848afb0-44ef-418c-a58f-8d7159ec5d1e
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: node1:/data/vol/dfs
>> Brick2: node2:/data/vol/dfs
>> Brick3: node3:/data/vol/dfs
>> Brick4: node4:/data/vol/dfs
>> Options Reconfigured:
>> diagnostics.client-log-level: WARNING
>> diagnostics.brick-log-level: WARNING
>> nfs.disable: on
>> features.quota: on
>> features.limit-usage:
>> /video/CLOUD:200TB,/video/YINGSHIKU:200TB,/video/LIVENEW:200TB,/video/SOCIAL:200TB,/video/mini:200TB,/video/2013:200TB,/video:200TB
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users