<div dir="ltr"><div><div><div><tt>Hi Matthew,<br><br>If you are sure that &quot;/mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/&quot;<br></tt></div><tt>is the only directory with wrong accounting, and its immediate subdirectories have correct xattr values, then setting the dirty xattr and doing a stat afterwards should resolve the issue:<br></tt><pre>1) setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 <tt>/mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/</tt></pre><tt>2) stat </tt><tt><tt>/mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/<br><br></tt> </tt></div><div><tt></tt></div><tt>Could you also share what kind of operations happen on this directory? That would help us root-cause (RCA) the issue.<br></tt></div><div><tt><br>If you suspect this may be true elsewhere in the filesystem as well, </tt><tt>use the following scripts to identify the same:<br><br>1) <a href="https://github.com/gluster/glusterfs/blob/master/extras/quota/xattr_analysis.py">https://github.com/gluster/glusterfs/blob/master/extras/quota/xattr_analysis.py</a><br>2) <a href="https://github.com/gluster/glusterfs/blob/master/extras/quota/log_accounting.sh">https://github.com/gluster/glusterfs/blob/master/extras/quota/log_accounting.sh</a><br></tt></div><div><tt><br></tt></div><div><tt>Regards,<br></tt></div><div><tt>Sanoj<br></tt></div><div><tt><br></tt><div><tt><br></tt></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 28, 2017 at 12:39 PM, Raghavendra Gowdappa <span dir="ltr">&lt;<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">+sanoj<br>
<div><div class="h5"><br>
----- Original Message -----<br>
&gt; From: &quot;Matthew B&quot; &lt;<a href="mailto:matthew.has.questions@gmail.com">matthew.has.questions@gmail.<wbr>com</a>&gt;<br>
&gt; To: <a href="mailto:gluster-devel@gluster.org">gluster-devel@gluster.org</a><br>
&gt; Sent: Saturday, August 26, 2017 12:45:19 AM<br>
&gt; Subject: [Gluster-devel] Quota Used Value Incorrect - Fix now or after upgrade<br>
&gt;<br>
&gt; Hello,<br>
&gt;<br>
&gt; I need some advice on fixing an issue with quota on my gluster volume. It&#39;s<br>
&gt; running version 3.7, distributed volume, with 7 nodes.<br>
&gt;<br>
&gt; # gluster --version<br>
&gt; glusterfs 3.7.13 built on Jul 8 2016 15:26:18<br>
&gt; Repository revision: git://<a href="http://git.gluster.com/glusterfs.git" rel="noreferrer" target="_blank">git.gluster.com/glusterfs.git</a><br>
&gt; Copyright (c) 2006-2011 Gluster Inc. &lt; <a href="http://www.gluster.com" rel="noreferrer" target="_blank">http://www.gluster.com</a> &gt;<br>
&gt; GlusterFS comes with ABSOLUTELY NO WARRANTY.<br>
&gt; You may redistribute copies of GlusterFS under the terms of the GNU General<br>
&gt; Public License.<br>
&gt;<br>
&gt; # gluster volume info storage<br>
&gt;<br>
&gt; Volume Name: storage<br>
&gt; Type: Distribute<br>
&gt; Volume ID: 6f95525a-94d7-4174-bac4-<wbr>e1a18fe010a2<br>
&gt; Status: Started<br>
&gt; Number of Bricks: 7<br>
&gt; Transport-type: tcp<br>
&gt; Bricks:<br>
&gt; Brick1: 10.0.231.50:/mnt/raid6-<wbr>storage/storage<br>
&gt; Brick2: 10.0.231.51:/mnt/raid6-<wbr>storage/storage<br>
&gt; Brick3: 10.0.231.52:/mnt/raid6-<wbr>storage/storage<br>
&gt; Brick4: 10.0.231.53:/mnt/raid6-<wbr>storage/storage<br>
&gt; Brick5: 10.0.231.54:/mnt/raid6-<wbr>storage/storage<br>
&gt; Brick6: 10.0.231.55:/mnt/raid6-<wbr>storage/storage<br>
&gt; Brick7: 10.0.231.56:/mnt/raid6-<wbr>storage/storage<br>
&gt; Options Reconfigured:<br>
&gt; changelog.changelog: on<br>
&gt; geo-replication.ignore-pid-<wbr>check: on<br>
&gt; geo-replication.indexing: on<br>
&gt; nfs.disable: no<br>
&gt; performance.readdir-ahead: on<br>
&gt; features.quota: on<br>
&gt; features.inode-quota: on<br>
&gt; features.quota-deem-statfs: on<br>
&gt; features.read-only: off<br>
&gt;<br>
&gt; # df -h /storage/<br>
&gt; Filesystem Size Used Avail Use% Mounted on<br>
&gt; 10.0.231.50:/storage 255T 172T 83T 68% /storage<br>
&gt;<br>
&gt;<br>
&gt; I am planning to upgrade to 3.10 (or 3.12 when it&#39;s available) but I have a<br>
&gt; number of quotas configured, and one of them (below) has a very wrong &quot;Used&quot;<br>
&gt; value:<br>
&gt;<br>
&gt; # gluster volume quota storage list | egrep &quot;MEOPAR &quot;<br>
&gt; /data/projects/MEOPAR 8.5TB 80%(6.8TB) 16384.0PB 17.4TB No No<br>
&gt;<br>
&gt;<br>
&gt; I have confirmed the bad value appears in one of the bricks&#39; current xattrs,<br>
&gt; and it looks like the issue has been encountered previously on bricks 04,<br>
&gt; 03, and 06: (gluster07 does not have a trusted.glusterfs.quota.size.1 as it<br>
&gt; was recently added)<br>
&gt;<br>
&gt; $ ansible -i hosts gluster-servers[0:6] -u &lt;user&gt; --ask-pass -m shell -b<br>
&gt; --become-method=sudo --ask-become-pass -a &quot;getfattr --absolute-names -m . -d<br>
&gt; -e hex /mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR | egrep<br>
&gt; &#39;^trusted.glusterfs.quota.<wbr>size&#39;&quot;<br>
&gt; SSH password:<br>
&gt; SUDO password[defaults to SSH password]:<br>
&gt;<br>
&gt; gluster02 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0x0000011ecfa56c00000000000005<wbr>cd6d000000000006d478<br>
&gt; trusted.glusterfs.quota.size.<wbr>1=<wbr>0x0000010ad4a45200000000000001<wbr>2a0300000000000150fa<br>
&gt;<br>
&gt; gluster05 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0x00000033b8e92200000000000004<wbr>cde8000000000006b1a4<br>
&gt; trusted.glusterfs.quota.size.<wbr>1=<wbr>0x0000010dca277c00000000000001<wbr>297d0000000000015005<br>
&gt;<br>
&gt; gluster01 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0x0000003d4d434800000000000005<wbr>7616000000000006afd2<br>
&gt; trusted.glusterfs.quota.size.<wbr>1=<wbr>0x00000133fe211e00000000000005<wbr>d161000000000006cfd4<br>
&gt;<br>
&gt; gluster04 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0xffffff396f3e9400000000000004<wbr>d7ea0000000000068c62<br>
&gt; trusted.glusterfs.quota.size.<wbr>1=<wbr>0x00000106e6724800000000000001<wbr>138f0000000000012fb2<br>
&gt;<br>
&gt; gluster03 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0xfffffd02acabf000000000000003<wbr>599000000000000643e2<br>
&gt; trusted.glusterfs.quota.size.<wbr>1=<wbr>0x00000114e20f5e00000000000001<wbr>13b30000000000012fb2<br>
&gt;<br>
&gt; gluster06 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0xffffff0c98de4400000000000005<wbr>36e40000000000068cf2<br>
&gt; trusted.glusterfs.quota.size.<wbr>1=<wbr>0x0000013532664e00000000000005<wbr>e73f000000000006cfd4<br>
&gt;<br>
&gt; gluster07 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; trusted.glusterfs.quota.size=<wbr>0xfffffa3d7c1ba60000000000000a<wbr>9ccb000000000005fd2f<br>
&gt;<br>
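[Editor's note: each trusted.glusterfs.quota.size value above appears to pack three big-endian 64-bit fields: used bytes, file count, and directory count. Assuming that 24-byte layout (it matches the quota marker metadata of this GlusterFS era, but treat it as an assumption), a short sketch shows that the values beginning 0xff.../0xfa... on gluster03, 04, 06, and 07 decode to *negative* used-byte counts, which is what makes the aggregated "Used" wrap to an absurd figure:]

```python
import struct

def decode_quota_size(xattr_hex):
    # Assumed layout of trusted.glusterfs.quota.size: 24 bytes,
    # big-endian -- signed 64-bit used bytes, then unsigned 64-bit
    # file count and directory count.
    raw = bytes.fromhex(xattr_hex[2:] if xattr_hex.startswith("0x") else xattr_hex)
    used, files, dirs = struct.unpack(">qQQ", raw)
    return used, files, dirs

# gluster07's value from the output above:
used, files, dirs = decode_quota_size(
    "0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f")
print(used)         # negative -- roughly -6.3 TB "used" on this brick
print(files, dirs)  # the count fields, by contrast, look sane
```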
&gt; And reviewing the subdirectories of that folder on the impacted server, you<br>
&gt; can see that none of the direct children has such an incorrect value:<br>
&gt;<br>
&gt; [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex<br>
&gt; /mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/*<br>
&gt; # file: /mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/&lt;dir1 &gt;<br>
&gt; ...<br>
&gt; trusted.glusterfs.quota.<wbr>7209b677-f4b9-4d82-a382-<wbr>0733620e6929.contri=<wbr>0x000000fb68418200000000000000<wbr>74730000000000000dae<br>
&gt; trusted.glusterfs.quota.dirty=<wbr>0x3000<br>
&gt; trusted.glusterfs.quota.size=<wbr>0x000000fb68418200000000000000<wbr>74730000000000000dae<br>
&gt;<br>
&gt; # file: /mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/&lt;dir2 &gt;<br>
&gt; ...<br>
&gt; trusted.glusterfs.quota.<wbr>7209b677-f4b9-4d82-a382-<wbr>0733620e6929.contri=<wbr>0x0000000416d5f400000000000000<wbr>0baa0000000000000441<br>
&gt; trusted.glusterfs.quota.dirty=<wbr>0x3000<br>
&gt; trusted.glusterfs.quota.limit-<wbr>set=<wbr>0x0000010000000000ffffffffffff<wbr>ffff<br>
&gt; trusted.glusterfs.quota.size=<wbr>0x0000000416d5f400000000000000<wbr>0baa0000000000000441<br>
&gt;<br>
&gt; # file: /mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR/&lt;dir3&gt;<br>
&gt; ...<br>
&gt; trusted.glusterfs.quota.<wbr>7209b677-f4b9-4d82-a382-<wbr>0733620e6929.contri=<wbr>0x000000110f2c4e00000000000002<wbr>a76a000000000006ad7d<br>
&gt; trusted.glusterfs.quota.dirty=<wbr>0x3000<br>
&gt; trusted.glusterfs.quota.limit-<wbr>set=<wbr>0x0000020000000000ffffffffffff<wbr>ffff<br>
&gt; trusted.glusterfs.quota.size=<wbr>0x000000110f2c4e00000000000002<wbr>a76a000000000006ad7d<br>
&gt;<br>
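[Editor's note: a rough cross-check is possible from the listing above. Summing the used-bytes field (assumed to be the first 8 bytes, big-endian, signed) of the three subdirectories' quota.size values gives the rough magnitude the parent's size should have. It is only a lower bound, since the marker also accounts the directory's own blocks and any files directly inside MEOPAR; the `<dir1>`..`<dir3>` names are elided as in the original:]

```python
# quota.size values of the three immediate children, copied from the
# gluster07 getfattr output above (<wbr> soft breaks removed).
children = [
    "0x000000fb6841820000000000000074730000000000000dae",  # <dir1>
    "0x0000000416d5f4000000000000000baa0000000000000441",  # <dir2>
    "0x000000110f2c4e00000000000002a76a000000000006ad7d",  # <dir3>
]

def used_bytes(xattr_hex):
    # First 8 bytes of the value, big-endian, signed (assumed layout).
    raw = bytes.fromhex(xattr_hex[2:] if xattr_hex.startswith("0x") else xattr_hex)
    return int.from_bytes(raw[:8], "big", signed=True)

expected = sum(used_bytes(v) for v in children)
print(expected)  # about 1.17e12 bytes (~1.1 TB) -- positive, unlike the
                 # negative value stored on the MEOPAR directory itself
```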
&gt;<br>
&gt; Can I fix this on the current version of gluster (3.7) on just the one brick<br>
&gt; before I upgrade? Or would I be better off upgrading to 3.10 and trying to<br>
&gt; fix it then?<br>
&gt;<br>
&gt; I have reviewed information here:<br>
&gt;<br>
&gt; <a href="http://lists.gluster.org/pipermail/gluster-devel/2016-February/048282.html" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>pipermail/gluster-devel/2016-<wbr>February/048282.html</a><br>
&gt; <a href="http://lists.gluster.org/pipermail/gluster-users.old/2016-September/028365.html" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>pipermail/gluster-users.old/<wbr>2016-September/028365.html</a><br>
&gt;<br>
&gt; It seems that, since I am on Gluster 3.7, I could disable and re-enable<br>
&gt; quotas; everything would then be recalculated, incrementing the index on the<br>
&gt; quota.size xattr. But with the size of the volume that would take a very<br>
&gt; long time.<br>
&gt;<br>
&gt; Could I simply mark the impacted directory as dirty on gluster07? Or update<br>
&gt; the xattr directly as the sum of the size of dir1, 2, and 3?<br>
&gt;<br>
&gt; Thanks,<br>
&gt; -Matthew<br>
&gt;<br>
</div></div>&gt; ______________________________<wbr>_________________<br>
&gt; Gluster-devel mailing list<br>
&gt; <a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
&gt; <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-devel</a><br>
</blockquote></div><br></div>