[Gluster-users] Quota issue on GlusterFS 3.5.2-1

Krutika Dhananjay kdhananj at redhat.com
Tue Sep 16 10:38:46 UTC 2014


Hi, 

Could you upload the quotad.log file from any one of the nodes in the cluster? The file is located under /var/run/glusterfs/. 
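For example, something along these lines run on one node should gather what is needed (a rough sketch; depending on the packaging, the quotad log may also end up under /var/log/glusterfs/):

# locate the quotad log (either location may exist):
ls -l /var/run/glusterfs/quotad.log /var/log/glusterfs/quotad.log
# confirm whether the quota daemon is running at all on that node:
ps aux | grep '[q]uotad'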

-Krutika 

----- Original Message -----

> From: "Geoffrey Letessier" <geoffrey.letessier at cnrs.fr>
> To: gluster-users at gluster.org
> Sent: Tuesday, September 16, 2014 3:24:48 PM
> Subject: [Gluster-users] Quota issue on GlusterFS 3.5.2-1

> Dear All,

> We have been experiencing an issue with our storage infrastructure since I
> enabled the quota service on our main storage volume.

> Indeed, since the quota service was activated, any attempt to write a new file
> under a path with a defined quota fails with the message « Transport endpoint is
> not connected », which disappears if we disable the quota service on the
> volume (or remove the quota on the targeted subdirectory).
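> (For reference, the commands I toggle are more or less the standard quota CLI
> ones; /some_dir below is only a placeholder for the quota'd subdirectory:)
> gluster volume quota vol_home disable            # the write error disappears
> gluster volume quota vol_home enable             # the write error comes back
> gluster volume quota vol_home remove /some_dir   # removing only the subdirectory limit also clears it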

> I also note that no quota daemon seems to be running on any of the bricks.
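> (e.g. a quick check of this kind on each storage node, shown only as an
> illustration:)
> pgrep -fl quotad                                  # would list the quota daemon process if it were running
> gluster volume status vol_home | grep -i quota    # the Quota Daemon rows show N in the Online column (full output below)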

> Here is some information about my storage volume (I have highlighted the
> information that surprises me):
> [root@hades ~]# gluster volume status vol_home
> Status of volume: vol_home
> Gluster process                               Port   Online  Pid
> ------------------------------------------------------------------------------
> Brick ib-storage1:/export/brick_home/brick1   49164  Y       7373
> Brick ib-storage2:/export/brick_home/brick1   49160  Y       6809
> Brick ib-storage3:/export/brick_home/brick1   49152  Y       3436
> Brick ib-storage4:/export/brick_home/brick1   49152  Y       3315
> Brick ib-storage1:/export/brick_home/brick2   49166  Y       7380
> Brick ib-storage2:/export/brick_home/brick2   49162  Y       6815
> Brick ib-storage3:/export/brick_home/brick2   49154  Y       3440
> Brick ib-storage4:/export/brick_home/brick2   49154  Y       3319
> Self-heal Daemon on localhost                 N/A    Y       22095
> Quota Daemon on localhost                     N/A    N       N/A
> Self-heal Daemon on ib-storage3               N/A    Y       16370
> Quota Daemon on ib-storage3                   N/A    N       N/A
> Self-heal Daemon on 10.0.4.1                  N/A    Y       14686
> Quota Daemon on 10.0.4.1                      N/A    N       N/A
> Self-heal Daemon on ib-storage4               N/A    Y       16172
> Quota Daemon on ib-storage4                   N/A    N       N/A

> Task Status of Volume vol_home
> ------------------------------------------------------------------------------
> There are no active volume tasks

> [root@hades ~]# gluster volume status vol_home detail
> Status of volume: vol_home
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage1:/export/brick_home/brick1
> Port : 49164
> Online : Y
> Pid : 7373
> File System : xfs
> Device : /dev/mapper/storage1--block1-st1--blk1--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 6.9TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845133649
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage2:/export/brick_home/brick1
> Port : 49160
> Online : Y
> Pid : 6809
> File System : xfs
> Device : /dev/mapper/storage2--block1-st2--blk1--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 6.9TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845133649
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage3:/export/brick_home/brick1
> Port : 49152
> Online : Y
> Pid : 3436
> File System : xfs
> Device : /dev/mapper/storage3--block1-st3--blk1--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 7.4TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845131362
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage4:/export/brick_home/brick1
> Port : 49152
> Online : Y
> Pid : 3315
> File System : xfs
> Device : /dev/mapper/storage4--block1-st4--blk1--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 7.4TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845131363
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage1:/export/brick_home/brick2
> Port : 49166
> Online : Y
> Pid : 7380
> File System : xfs
> Device : /dev/mapper/storage1--block2-st1--blk2--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 6.8TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845128559
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage2:/export/brick_home/brick2
> Port : 49162
> Online : Y
> Pid : 6815
> File System : xfs
> Device : /dev/mapper/storage2--block2-st2--blk2--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 6.8TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845128559
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage3:/export/brick_home/brick2
> Port : 49154
> Online : Y
> Pid : 3440
> File System : xfs
> Device : /dev/mapper/storage3--block2-st3--blk2--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 7.0TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845124761
> ------------------------------------------------------------------------------
> Brick : Brick ib-storage4:/export/brick_home/brick2
> Port : 49154
> Online : Y
> Pid : 3319
> File System : xfs
> Device : /dev/mapper/storage4--block2-st4--blk2--home
> Mount Options : rw,noatime,nodiratime,attr2,quota
> Inode Size : 256
> Disk Space Free : 7.0TB
> Total Disk Space : 17.9TB
> Inode Count : 3853515968
> Free Inodes : 3845124761

> [root@hades ~]# gluster volume info vol_home

> Volume Name: vol_home
> Type: Distributed-Replicate
> Volume ID: f6ebcfc1-b735-4a0e-b1d7-47ed2d2e7af6
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: tcp,rdma
> Bricks:
> Brick1: ib-storage1:/export/brick_home/brick1
> Brick2: ib-storage2:/export/brick_home/brick1
> Brick3: ib-storage3:/export/brick_home/brick1
> Brick4: ib-storage4:/export/brick_home/brick1
> Brick5: ib-storage1:/export/brick_home/brick2
> Brick6: ib-storage2:/export/brick_home/brick2
> Brick7: ib-storage3:/export/brick_home/brick2
> Brick8: ib-storage4:/export/brick_home/brick2
> Options Reconfigured:
> diagnostics.brick-log-level: CRITICAL
> auth.allow: localhost,127.0.0.1,10.*
> nfs.disable: on
> performance.cache-size: 64MB
> performance.write-behind-window-size: 1MB
> performance.quick-read: on
> performance.io-cache: on
> performance.io-thread-count: 64
> features.quota: on

> As you can see below, the CLI does not display the quota list (even after
> waiting a couple of hours), but I can get quota information by specifying the
> quota path explicitly.
> [root@hades ~]# gluster volume quota vol_home list
> Path                      Hard-limit  Soft-limit  Used    Available
> --------------------------------------------------------------------------------
> ^C
> [root@hades ~]# gluster volume quota vol_home list /admin_team
> Path                      Hard-limit  Soft-limit  Used    Available
> --------------------------------------------------------------------------------
> /admin_team               1.0TB       80%         3.6GB   1020.4GB
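> (In the meantime I can at least query the limits path by path with a small
> loop; apart from /admin_team the directory names below are placeholders:)
> for d in /admin_team /other_team_dir; do
>     gluster volume quota vol_home list "$d"
> done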

> Additionally, I note that the quota-crawl.log file keeps growing…
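> (Re-running something like the following every few minutes shows the size
> increasing; the path below assumes the default log directory:)
> ls -lh /var/log/glusterfs/quota-crawl.log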

> For information:
> - all storage nodes are running CentOS 6.5
> - the same servers were previously running GlusterFS 3.3 but, after removing
> the old GlusterFS packages and rebuilding all the bricks physically
> (RAID60 -> RAID6, doubling my storage node count and re-importing all my data
> into the new volume), I installed GlusterFS 3.5.2. Of course, all storage nodes
> have been restarted several times since the upgrade.
> -> Still, I notice a troubling thing in the gluster log file:
> [root@hades ~]# gluster --version
> glusterfs 3.5.2 built on Jul 31 2014 18:47:54
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU General
> Public License.
> [root@hades ~]# cat /var/log/glusterfs/home.log|grep "version numbers are not
> same"|tail -1l
> [2014-09-15 10:35:35.516925] I [client-handshake.c:1474:client_setvolume_cbk]
> 3-vol_home-client-7: Server and Client lk-version numbers are not same ,
> reopening the fds
> [root@hades ~]# cat /var/log/glusterfs/home.log|grep "GlusterFS 3.3"|tail -1l
> [2014-09-15 10:35:35.516082] I
> [client-handshake.c:1677:select_server_supported_programs]
> 3-vol_home-client-7: Using Program GlusterFS 3.3 , Num (1298437), Version
> (330)
> [root@hades ~]# rpm -qa gluster*
> glusterfs-fuse-3.5.2-1.el6.x86_64
> glusterfs-rdma-3.5.2-1.el6.x86_64
> glusterfs-3.5.2-1.el6.x86_64
> glusterfs-server-3.5.2-1.el6.x86_64
> glusterfs-libs-3.5.2-1.el6.x86_64
> glusterfs-cli-3.5.2-1.el6.x86_64
> glusterfs-api-3.5.2-1.el6.x86_64

> Can someone help me to fix the problem?

> Thanks in advance and have a nice day,
> Geoffrey

> PS: Please don't hesitate to tell me if you see anything wrong (or anything that
> could be done better) in my volume settings.
> ------------------------------------------------------
> Geoffrey Letessier
> Responsable informatique
> UPR 9080 - CNRS - Laboratoire de Biochimie Théorique
> Institut de Biologie Physico-Chimique
> 13, rue Pierre et Marie Curie - 75005 Paris

> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users