[Bugs] [Bug 1221025] New: Glusterd crashes after enabling quota limit on a distrep volume.

bugzilla@redhat.com
Wed May 13 07:26:01 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1221025

            Bug ID: 1221025
           Summary: Glusterd crashes after enabling quota limit on a
                    distrep volume.
           Product: GlusterFS
           Version: mainline
         Component: quota
          Severity: urgent
          Assignee: bugs@gluster.org
          Reporter: trao@redhat.com
                CC: bugs@gluster.org, gluster-bugs@redhat.com



Description of problem:

Glusterd crashes when a quota limit-usage is set on a distributed-replicate
(distrep) volume after quota has been enabled.

Version-Release number of selected component (if applicable):

[root@rhsqa14-vm3 ~]# glusterfs --version
glusterfs 3.7.0beta2 built on May 11 2015 01:27:45
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
You have new mail in /var/spool/mail/root
[root@rhsqa14-vm3 ~]#

[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-libs-3.7.0beta2-0.0.el6.x86_64
glusterfs-fuse-3.7.0beta2-0.0.el6.x86_64
glusterfs-rdma-3.7.0beta2-0.0.el6.x86_64
glusterfs-3.7.0beta2-0.0.el6.x86_64
glusterfs-api-3.7.0beta2-0.0.el6.x86_64
glusterfs-cli-3.7.0beta2-0.0.el6.x86_64
glusterfs-geo-replication-3.7.0beta2-0.0.el6.x86_64
glusterfs-extra-xlators-3.7.0beta2-0.0.el6.x86_64
glusterfs-client-xlators-3.7.0beta2-0.0.el6.x86_64
glusterfs-server-3.7.0beta2-0.0.el6.x86_64
[root@rhsqa14-vm3 ~]#



How reproducible:
Easily; the crash reproduces consistently with the steps below.

Steps to Reproduce:
1. Create a normal distrep (2x2 distributed-replicate) volume.
2. Enable quota on the volume.
3. Set a quota limit-usage on the volume (a condensed command sketch follows
   below).
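For convenience, a condensed repro sketch; the volume name, brick paths and
the 20GB limit are taken from this setup, so substitute your own:

# create and start a 2x2 distributed-replicate volume
gluster volume create V1 replica 2 \
    10.70.46.243:/rhs/brick1/t2 10.70.46.240:/rhs/brick1/t2 \
    10.70.46.243:/rhs/brick2/t2 10.70.46.240:/rhs/brick2/t2 force
gluster volume start V1

# enable quota, then set a usage limit on the volume root;
# glusterd aborts on this last command
gluster volume quota V1 enable
gluster volume quota V1 limit-usage / 20GB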

Actual results:
glusterd crashes (aborts with SIGABRT; see the backtrace below)

Expected results:
Setting the quota limit should succeed and glusterd should keep running.

Additional info:

[root@rhsqa14-vm4 ~]# gluster v create V1 replica 2 10.70.46.243:/rhs/brick1/t2 \
10.70.46.240:/rhs/brick1/t2 10.70.46.243:/rhs/brick2/t2 \
10.70.46.240:/rhs/brick2/t2 force
volume create: V1: success: please start the volume to access data
[root@rhsqa14-vm4 ~]#
[root@rhsqa14-vm4 ~]#
[root@rhsqa14-vm4 ~]# gluster v start V1
volume start: V1: success
[root@rhsqa14-vm4 ~]# gluster v info V1

Volume Name: V1
Type: Distributed-Replicate
Volume ID: 99f99d6d-b24d-4cc8-96e0-25444dbf10fd
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.243:/rhs/brick1/t2
Brick2: 10.70.46.240:/rhs/brick1/t2
Brick3: 10.70.46.243:/rhs/brick2/t2
Brick4: 10.70.46.240:/rhs/brick2/t2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm4 ~]#
[root@rhsqa14-vm4 ~]#

[root@rhsqa14-vm1 ~]# cat options.sh
gluster v set $1 cluster.min-free-disk 10
gluster volume quota $1 enable
gluster v set $1 quota-deem-statfs on
gluster v quota $1 limit-usage / 20GB
gluster v set $1 features.uss enable

[root@rhsqa14-vm1 ~]#

[root@rhsqa14-vm4 ~]# ./options.sh V1
volume set: success
volume quota : success
volume set: success
Connection failed. Please check if gluster daemon is operational.
You have new mail in /var/spool/mail/root
[root@rhsqa14-vm4 ~]# service glusterd status
glusterd dead but pid file exists
[root@rhsqa14-vm4 ~]#
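Note the script output above: the three successes correspond to the
min-free-disk set, the quota enable and the quota-deem-statfs set, and the
"Connection failed" message appears exactly when limit-usage runs, i.e.
glusterd dies while committing the limit-usage operation.

To collect the crash details after glusterd dies, something like the
following should work (the log path is the default for this build; the core
file name and location depend on the system's core_pattern, so treat that
part as an assumption):

# glusterd's own log ends with the "pending frames" dump quoted below
tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# if core dumps are enabled, pull a backtrace from the core file
gdb -batch -ex 'bt' /usr/sbin/glusterd /path/to/core.<pid>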


Log messages:

[2015-05-13 07:14:24.088781] W [socket.c:3059:socket_connect] 0-snapd: Ignore
failed connection attempt on
/var/run/gluster/e97ed36149cb00fbc0a75840c9ad6cf6.socket, (No such file or
directory)
[2015-05-13 07:14:24.090158] W [socket.c:642:__socket_rwv] 0-snapd: readv on
/var/run/gluster/e97ed36149cb00fbc0a75840c9ad6cf6.socket failed (Invalid
argument)
[2015-05-13 07:14:26.029793] I [run.c:190:runner_log] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f71b207cfb0] (-->
/usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7f71b20cdd05] (-->
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a0)[0x7f71a7e03070]
(-->
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(+0xd3302)[0x7f71a7e03302]
(--> /lib64/libpthread.so.0[0x395a0079d1] ))))) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=V1 -o
quota-deem-statfs=on --gd-workdir=/var/lib/glusterd
[2015-05-13 07:14:26.050454] I [run.c:190:runner_log] (-->
/usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f71b207cfb0] (-->
/usr/lib64/libglusterfs.so.0(runner_log+0x105)[0x7f71b20cdd05] (-->
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a0)[0x7f71a7e03070]
(-->
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(+0xd3302)[0x7f71a7e03302]
(--> /lib64/libpthread.so.0[0x395a0079d1] ))))) 0-management: Ran script:
/var/lib/glusterd/hooks/1/set/post/S31ganesha-set.sh --volname=V1 -o
quota-deem-statfs=on --gd-workdir=/var/lib/glusterd
[2015-05-13 07:14:27.096916] W [socket.c:3059:socket_connect] 0-snapd: Ignore
failed connection attempt on
/var/run/gluster/e97ed36149cb00fbc0a75840c9ad6cf6.socket, (No such file or
directory)
The message "I [MSGID: 106006]
[glusterd-snapd-svc.c:368:glusterd_snapdsvc_rpc_notify] 0-management: snapd has
disconnected from glusterd." repeated 21 times between [2015-05-13
07:13:19.275136] and [2015-05-13 07:14:24.090209]
The message "I [MSGID: 106004]
[glusterd-handler.c:4809:__glusterd_peer_rpc_notify] 0-management: Peer
<10.70.46.233> (<eb626ff0-c985-45b7-b088-76f7230dcfc7>), in state <Sent and
Received peer request>, has disconnected from glusterd." repeated 21 times
between [2015-05-13 07:13:19.279502] and [2015-05-13 07:14:24.096822]
pending frames:
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 6
time of crash:
2015-05-13 07:14:27
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7.0beta2
/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xb6)[0x7f71b207cb96]
/usr/lib64/libglusterfs.so.0(gf_print_trace+0x33f)[0x7f71b209b5af]
/lib64/libc.so.6[0x3959c326a0]
/lib64/libc.so.6(gsignal+0x35)[0x3959c32625]
/lib64/libc.so.6(abort+0x175)[0x3959c33e05]
/lib64/libc.so.6[0x3959c70537]
/lib64/libc.so.6(__fortify_fail+0x37)[0x3959d02697]
/lib64/libc.so.6[0x3959d00580]
/lib64/libc.so.6(__read_chk+0x22)[0x3959d00a52]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_store_quota_config+0x23e)[0x7f71a7dd79de]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_quota_limit_usage+0x338)[0x7f71a7dd89f8]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_op_quota+0x42f)[0x7f71a7dd9bdf]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_op_commit_perform+0x233)[0x7f71a7d8ec83]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(gd_commit_op_phase+0xd8)[0x7f71a7dfeaa8]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(gd_sync_task_begin+0x61d)[0x7f71a7e01f2d]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7f71a7e0211b]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(__glusterd_handle_quota+0x302)[0x7f71a7dd7682]
/usr/lib64/glusterfs/3.7.0beta2/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7f71a7d66c7f]
/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7f71b20bd5c2]
/lib64/libc.so.6[0x3959c438f0]
---------
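The trace shows the abort comes from glibc's _FORTIFY_SOURCE checking:
__read_chk detected a read() whose requested length exceeds the size of the
destination buffer and called __fortify_fail, which raises SIGABRT (the
"signal received: 6" above). The faulting caller is
glusterd_store_quota_config, i.e. the code that rewrites the on-disk quota
configuration while committing limit-usage. To see what glusterd was reading
when it died, dumping the volume's quota config file may help (quota.conf
under /var/lib/glusterd/vols/<volname>/ is the usual glusterd location; the
exact on-disk layout is version-dependent):

# dump the on-disk quota configuration for volume V1
od -c /var/lib/glusterd/vols/V1/quota.conf | head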

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

