<div dir="ltr">Hello, <br>
<br>
I need some advice on fixing a quota issue on my Gluster volume.
It's a 7-node distributed volume running version 3.7. <br>
<br>
<tt># gluster --version</tt><tt><br>
</tt><tt>glusterfs 3.7.13 built on Jul 8 2016 15:26:18</tt><tt><br>
</tt><tt>Repository revision: git://<a href="http://git.gluster.com/glusterfs.git">git.gluster.com/glusterfs.git</a></tt><tt><br>
</tt><tt>Copyright (c) 2006-2011 Gluster Inc. <<a href="http://www.gluster.com">http://www.gluster.com</a>></tt><tt><br>
</tt><tt>GlusterFS comes with ABSOLUTELY NO WARRANTY.</tt><tt><br>
</tt><tt>You may redistribute copies of GlusterFS under the terms of the GNU General Public License.<br>
<br>
# gluster volume info storage<br>
<br>
Volume Name: storage<br>
Type: Distribute<br>
Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2<br>
Status: Started<br>
Number of Bricks: 7<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: 10.0.231.50:/mnt/raid6-storage/storage<br>
Brick2: 10.0.231.51:/mnt/raid6-storage/storage<br>
Brick3: 10.0.231.52:/mnt/raid6-storage/storage<br>
Brick4: 10.0.231.53:/mnt/raid6-storage/storage<br>
Brick5: 10.0.231.54:/mnt/raid6-storage/storage<br>
Brick6: 10.0.231.55:/mnt/raid6-storage/storage<br>
Brick7: 10.0.231.56:/mnt/raid6-storage/storage<br>
Options Reconfigured:<br>
changelog.changelog: on<br>
geo-replication.ignore-pid-check: on<br>
geo-replication.indexing: on<br>
nfs.disable: no<br>
performance.readdir-ahead: on<br>
features.quota: on<br>
features.inode-quota: on<br>
features.quota-deem-statfs: on<br>
features.read-only: off<br>
<br>
# df -h /storage/<br>
Filesystem Size Used Avail Use% Mounted on<br>
10.0.231.50:/storage 255T 172T 83T 68% /storage<br>
<br>
</tt><br>
I am planning to upgrade to 3.10 (or 3.12 when it's available), but I
have a number of quotas configured, and one of them (below) has a wildly
wrong "Used" value: <br>
<br>
<tt># gluster volume quota storage list | egrep "MEOPAR "</tt><tt><br>
</tt><tt>/data/projects/MEOPAR 8.5TB 80%(6.8TB) <b>16384.0PB</b> 17.4TB No No</tt><br>
<br>
<br>
I have confirmed that the bad value appears in one brick's current
xattrs, and it looks like the issue has been encountered previously on
bricks 04, 03, and 06. (gluster07 does not have a trusted.glusterfs.quota.size.1 xattr, as it was recently added to the volume.)<br>
<br>
<pre>$ ansible -i hosts gluster-servers[0:6] -u &lt;user&gt; --ask-pass -m shell -b --become-method=sudo --ask-become-pass -a "getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR | egrep '^trusted.glusterfs.quota.size'"<br>SSH password:<br>SUDO password[defaults to SSH password]:<br><br>gluster02 | SUCCESS | rc=0 >><br>trusted.glusterfs.quota.size=0x0000011ecfa56c00000000000005cd6d000000000006d478<br>trusted.glusterfs.quota.size.1=0x0000010ad4a452000000000000012a0300000000000150fa<br><br>gluster05 | SUCCESS | rc=0 >><br>trusted.glusterfs.quota.size=0x00000033b8e92200000000000004cde8000000000006b1a4<br>trusted.glusterfs.quota.size.1=0x0000010dca277c00000000000001297d0000000000015005<br><br>gluster01 | SUCCESS | rc=0 >><br>trusted.glusterfs.quota.size=0x0000003d4d4348000000000000057616000000000006afd2<br>trusted.glusterfs.quota.size.1=0x00000133fe211e00000000000005d161000000000006cfd4<br><br>gluster04 | SUCCESS | rc=0 >><br>trusted.glusterfs.quota.size=0xffffff396f3e9400000000000004d7ea0000000000068c62<br>trusted.glusterfs.quota.size.1=0x00000106e6724800000000000001138f0000000000012fb2<br><br>gluster03 | SUCCESS | rc=0 >><br>trusted.glusterfs.quota.size=0xfffffd02acabf000000000000003599000000000000643e2<br>trusted.glusterfs.quota.size.1=0x00000114e20f5e0000000000000113b30000000000012fb2<br><br>gluster06 | SUCCESS | rc=0 >><br>trusted.glusterfs.quota.size=0xffffff0c98de440000000000000536e40000000000068cf2<br>trusted.glusterfs.quota.size.1=0x0000013532664e00000000000005e73f000000000006cfd4<br><br>gluster07 | SUCCESS | rc=0 >><br><b>trusted.glusterfs.quota.size=0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f</b><br><br></pre>
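If I am reading the xattr layout correctly (three big-endian 64-bit fields: size in bytes, file count, directory count), the bold value above is a size that has gone negative, and the 16384.0PB in the quota list is that same number reinterpreted as unsigned, i.e. just under 2^64 bytes = 16384 PiB. A quick sketch of the decoding (decode_quota_size is just my own helper name):

```python
import struct

# Decode trusted.glusterfs.quota.size: three big-endian 64-bit fields
# (size in bytes, file count, dir count). ">q" reads the size as signed
# so a wrapped-around value shows up as negative.
def decode_quota_size(hex_value):
    raw = bytes.fromhex(hex_value[2:] if hex_value.startswith("0x") else hex_value)
    return struct.unpack(">qQQ", raw)

# The bad value from gluster07 above:
size, files, dirs = decode_quota_size(
    "0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f")
print(size)                          # negative size in bytes
print((size + (1 << 64)) / 2**50)   # reinterpreted as unsigned, in PiB: ~16384
```

The same signed interpretation would explain the 0xffffff... values in the older size xattrs on bricks 03, 04, and 06.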
Reviewing the subdirectories of that folder on the impacted server,
you can see that none of the direct children have such incorrect values:
<br>
<br>
<tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR/*</tt><tt><br>
</tt><tt># file: /mnt/raid6-storage/storage/data/projects/MEOPAR/&lt;dir1&gt;<br>
</tt><tt>...</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000000fb6841820000000000000074730000000000000dae</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.dirty=0x3000</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.size=0x000000fb6841820000000000000074730000000000000dae</tt><tt><br>
</tt><tt><br>
</tt><tt># file: /mnt/raid6-storage/storage/data/projects/MEOPAR/&lt;dir2&gt;<br>
</tt><tt>...</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x0000000416d5f4000000000000000baa0000000000000441</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.dirty=0x3000</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.limit-set=0x0000010000000000ffffffffffffffff</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.size=0x0000000416d5f4000000000000000baa0000000000000441</tt><tt><br>
</tt><tt><br>
</tt><tt># file: /mnt/raid6-storage/storage/data/projects/MEOPAR/&lt;dir3&gt;</tt><tt><br>
</tt><tt>...</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000000110f2c4e00000000000002a76a000000000006ad7d</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.dirty=0x3000</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.limit-set=0x0000020000000000ffffffffffffffff</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.size=0x000000110f2c4e00000000000002a76a000000000006ad7d</tt><br>
<br>
<br>
Can I fix this on the current version of gluster (3.7) on just the one
brick before I upgrade? Or would I be better off upgrading to 3.10 and
trying to fix it then? <br>
<br>
I have reviewed information here: <br>
<br>
<a href="http://lists.gluster.org/pipermail/gluster-devel/2016-February/048282.html">http://lists.gluster.org/pipermail/gluster-devel/2016-February/048282.html</a><br>
<a href="http://lists.gluster.org/pipermail/gluster-users.old/2016-September/028365.html">http://lists.gluster.org/pipermail/gluster-users.old/2016-September/028365.html</a><br>
<br>
It seems that, since I am on Gluster 3.7, I can disable and re-enable
quotas; everything will then be recalculated and the index on the
quota.size xattr incremented. But with the size of the volume that will
take a very long time. <br>
<br>
Could I simply mark the impacted directory as dirty on gluster07? Or
update the xattr directly to the sum of the sizes of dir1, dir2, and dir3? <br>
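For the second option, this is roughly the value I would compute for gluster07, assuming dir1-3 are the only children contributing and ignoring the directory's own metadata (a sketch of my reasoning, not something I have applied):

```python
# Sketch: rebuild the parent's quota.size on gluster07 from the children's
# contri xattrs shown above. Each value packs three big-endian 64-bit
# fields: size in bytes, file count, dir count.
# Assumes dir1-3 are the only contributors.
contri = [
    "0x000000fb6841820000000000000074730000000000000dae",  # dir1
    "0x0000000416d5f4000000000000000baa0000000000000441",  # dir2
    "0x000000110f2c4e00000000000002a76a000000000006ad7d",  # dir3
]

def fields(hex_value):
    v = hex_value[2:]
    return [int(v[i:i + 16], 16) for i in (0, 16, 32)]

# Sum the three fields column-wise across the children.
size, files, dirs = (sum(col) for col in zip(*(fields(c) for c in contri)))
new_value = "0x%016x%016x%016x" % (size, files, dirs)
print(new_value)
```

I would then presumably write that back with setfattr on the brick, but I am not sure whether the quota accounting would accept it or simply overwrite it again.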
<br>
Thanks,<br>
-Matthew<br>
</div>