<div dir="ltr"><div><div><div><div><div><div>HI Mathew,<br><br></div>In order to do listing we use an auxiliary mount, It could be that this is returning cached values..<br></div>So please try the following.<br><br></div>1) unmount the auxiliary mount for the volume (would have &quot;client pid -5&quot; is its command line)<br>.......... /var/log/glusterfs/quota-mount-xyz.log -p /var/run/gluster/xyz.pid --client-pid -5 ....<br></div>2) do a quota list again<br></div><br>Regards,<br></div>Sanoj<br><div><div><div><br></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Sep 2, 2017 at 3:55 AM, Matthew B <span dir="ltr">&lt;<a href="mailto:matthew.has.questions@gmail.com" target="_blank">matthew.has.questions@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Apologies - I copied and pasted the wrong ansible output: <br><br>matthew@laptop:~/playbooks$ ansible -i hosts gluster-servers[0:6] -u matthewb --ask-pass -m shell -b --become-method=sudo --ask-become-pass -a &quot;getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/<wbr>data/projects/MEOPAR | egrep &#39;^trusted.glusterfs.quota.<wbr>size&#39;&quot;<span class=""><br>SSH password: <br>SUDO password[defaults to SSH password]: <br>gluster02 | SUCCESS | rc=0 &gt;&gt;<br></span><span class="">trusted.glusterfs.quota.size=<wbr>0x0000011ecfa56c00000000000005<wbr>cd6d000000000006d478<br>trusted.glusterfs.quota.size.<wbr>1=<wbr>0x0000010ad4a45200000000000001<wbr>2a0300000000000150fa<br><br>gluster05 | SUCCESS | rc=0 &gt;&gt;<br></span>trusted.glusterfs.quota.size=<wbr>0x00000033b8e93800000000000004<wbr>cde9000000000006b1a4<br>trusted.glusterfs.quota.size.<wbr>1=<wbr>0x0000010dca277c00000000000001<wbr>297d0000000000015005<br><br>gluster04 | SUCCESS | rc=0 &gt;&gt;<br>trusted.glusterfs.quota.size=<wbr>0xffffff396f3ec000000000000004<wbr>d7eb0000000000068c62<br>trusted.glusterfs.quota.size.<wbr>1=<wbr>0x00000106e6724800000000000001<wbr>138f0000000000012fb2<span class=""><br><br>gluster01 | SUCCESS | rc=0 &gt;&gt;<br>trusted.glusterfs.quota.size=<wbr>0x0000003d4d434800000000000005<wbr>7616000000000006afd2<br>trusted.glusterfs.quota.size.<wbr>1=<wbr>0x00000133fe211e00000000000005<wbr>d161000000000006cfd4<br><br></span><span class="">gluster03 | SUCCESS | rc=0 &gt;&gt;<br>trusted.glusterfs.quota.size=<wbr>0xfffffd02acabf000000000000003<wbr>599000000000000643e2<br>trusted.glusterfs.quota.size.<wbr>1=<wbr>0x00000114e20f5e00000000000001<wbr>13b30000000000012fb2<br><br>gluster06 | SUCCESS | rc=0 &gt;&gt;<br>trusted.glusterfs.quota.size=<wbr>0xffffff0c98de4400000000000005<wbr>36e40000000000068cf2<br>trusted.glusterfs.quota.size.<wbr>1=<wbr>0x0000013532664e00000000000005<wbr>e73f000000000006cfd4<br><br>gluster07 | SUCCESS | rc=0 &gt;&gt;<br></span>trusted.glusterfs.quota.size=<wbr>0x000001108e511400000000000003<wbr>27c6000000000006bf6d<br><br></div>Thanks,<br></div> -Matthew<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 3:22 PM, Matthew B <span dir="ltr">&lt;<a href="mailto:matthew.has.questions@gmail.com" target="_blank">matthew.has.questions@gmail.<wbr>com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div>Thanks Sanoj, <br><br></div><div>Now the brick is showing the correct xattrs: <br></div><div>
</div><div><pre><span>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR
# file: /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR
security.selinux=0x73797374656<wbr>d5f753a6f626a6563745f723a756e6<wbr>c6162656c65645f743a733000
trusted.gfid=0x7209b677f4b94d8<wbr>2a3820733620e6929
trusted.glusterfs.6f95525a-94d<wbr>7-4174-bac4-e1a18fe010a2.xtime<wbr>=0x599f228800088654
trusted.glusterfs.dht=0x000000<wbr>0100000000b6db6d41db6db6ee
</span><b>trusted.glusterfs.quota.d5a5ec<wbr>da-7511-4bbb-9b4c-4fcc84e3e1da<wbr>.contri=0x000001108e5114000000<wbr>0000000327c6000000000006bf6d</b>
trusted.glusterfs.quota.dirty=<wbr>0x3000
trusted.glusterfs.quota.limit-<wbr>set=0x0000088000000000ffffffff<wbr>ffffffff
<b>trusted.glusterfs.quota.size=0<wbr>x000001108e5114000000000000032<wbr>7c6000000000006bf6d</b></pre></div><div><br></div><div>However, the quota listing still shows the old (incorrect) value: <br></div><div></div><div><pre><br>[root@gluster07 ~]# gluster volume quota storage list | egrep &quot;MEOPAR &quot; 
/data/projects/MEOPAR                      8.5TB     80%(6.8TB) <b>16384.0PB</b>  10.6TB              No                   No</pre></div><div><br></div><div>I&#39;ve checked on each of the bricks and they look fine now - is there any way to reflect the new value in the quota itself? <br></div><div><br></div><div><pre>matthew@laptop:~/playbooks$ ansible -i hosts gluster-servers[0:6] -u matthewb --ask-pass -m shell -b --become-method=sudo --ask-become-pass -a &quot;getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/dat<wbr>a/projects/comp_support | egrep &#39;^trusted.glusterfs.quota.size<wbr>\=&#39; | sed &#39;s/trusted.glusterfs.quota.siz<wbr>e\=//&#39; | cut -c 1-18 | xargs printf &#39;%d\n&#39;&quot; 
SSH password:
SUDO password[defaults to SSH password]:
gluster05 | SUCCESS | rc=0 &gt;&gt;
567293059584

gluster04 | SUCCESS | rc=0 &gt;&gt;
510784812032

gluster03 | SUCCESS | rc=0 &gt;&gt;
939742334464

gluster01 | SUCCESS | rc=0 &gt;&gt;
98688324096

gluster02 | SUCCESS | rc=0 &gt;&gt;
61449348096

gluster07 | SUCCESS | rc=0 &gt;&gt;
29252869632

gluster06 | SUCCESS | rc=0 &gt;&gt;
31899410944</pre></div><div><br></div>Thanks,<br></div> -Matthew<br></div><div class="m_-5194400459960243334HOEnZb"><div class="m_-5194400459960243334h5"><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Sep 1, 2017 at 4:33 AM, Sanoj Unnikrishnan <span dir="ltr">&lt;<a href="mailto:sunnikri@redhat.com" target="_blank">sunnikri@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div>Hi Matthew,<br></div><br></div>The other option is to explicitly remove the size and contri xattrs at the brick path and then do a stat from the mount point.<br><br>
 #setfattr -x trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1 &lt;brick path /dir&gt;<br>
 #setfattr -x trusted.glusterfs.quota.size.1  &lt;brick path /dir&gt;<br>
 #stat &lt;mount path /dir&gt;<br><br></div>Stat would heal the size and contri xattrs; the dirty xattr would heal only on the next operation on the directory.<br><br></div><div>After this you could set the dirty bit and do a stat again.<br><pre>setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 &lt;brick path /dir&gt;<br></pre><pre>stat &lt;mount path /dir&gt;</pre></div><div><br></div><div><br></div><div><div>Regards,<br></div><div>Sanoj<br></div></div></div><div class="m_-5194400459960243334m_-5496848955793637859HOEnZb"><div class="m_-5194400459960243334m_-5496848955793637859h5"><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Aug 31, 2017 at 9:12 PM, Matthew B <span dir="ltr">&lt;<a href="mailto:matthew.has.questions@gmail.com" target="_blank">matthew.has.questions@gmail.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi Raghavendra, <br><br></div>I didn&#39;t get a chance to implement your suggestions; however, it looks like the dirty bit is no longer set, so presumably the quota should have been updated. However, the quota.size attribute is still incorrect, though slightly different than before. Any other suggestions? <br><span><br><tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR</tt><tt><br>
</tt><tt># file: /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR</tt><tt><br>
</tt><tt>security.selinux=0x73797374656<wbr>d5f753a6f626a6563745f723a756e6<wbr>c6162656c65645f743a733000</tt><tt><br>
</tt><tt>trusted.gfid=0x7209b677f4b94d8<wbr>2a3820733620e6929</tt><tt><br>
</tt><tt>trusted.glusterfs.6f95525a-94d<wbr>7-4174-bac4-e1a18fe010a2.xtime<wbr>=0x599f228800088654</tt><tt><br>
</tt><tt>trusted.glusterfs.dht=0x000000<wbr>0100000000b6db6d41db6db6ee</tt><tt><br>
</tt></span><tt>trusted.glusterfs.quota.d5a5ec<wbr>da-7511-4bbb-9b4c-4fcc84e3e1da<wbr>.contri=0xfffffa3d7c28f6000000<wbr>0000000a9d0a000000000005fd2f</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.dirty=<wbr>0x3000</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.limit-<wbr>set=0x0000088000000000ffffffff<wbr>ffffffff</tt><tt><br>
</tt><tt>trusted.glusterfs.quota.size=0<wbr>xfffffa3d7c28f60000000000000a9<wbr>d0a000000000005fd2f</tt><br><br></div>Thanks,<br></div>-Matthew</div><div class="m_-5194400459960243334m_-5496848955793637859m_-7645030894149328762HOEnZb"><div class="m_-5194400459960243334m_-5496848955793637859m_-7645030894149328762h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 28, 2017 at 8:05 PM, Raghavendra Gowdappa <span dir="ltr">&lt;<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span><br>
<br>
----- Original Message -----<br>
&gt; From: &quot;Matthew B&quot; &lt;<a href="mailto:matthew.has.questions@gmail.com" target="_blank">matthew.has.questions@gmail.c<wbr>om</a>&gt;<br>
</span><span>&gt; To: &quot;Sanoj Unnikrishnan&quot; &lt;<a href="mailto:sunnikri@redhat.com" target="_blank">sunnikri@redhat.com</a>&gt;<br>
&gt; Cc: &quot;Raghavendra Gowdappa&quot; &lt;<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>&gt;, &quot;Gluster Devel&quot; &lt;<a href="mailto:gluster-devel@gluster.org" target="_blank">gluster-devel@gluster.org</a>&gt;<br>
&gt; Sent: Monday, August 28, 2017 9:33:25 PM<br>
&gt; Subject: Re: [Gluster-devel] Quota Used Value Incorrect - Fix now or after upgrade<br>
&gt;<br>
&gt; Hi Sanoj,<br>
&gt;<br>
&gt; Thank you for the information - I have applied the changes you specified<br>
&gt; above - but I haven&#39;t seen any changes in the xattrs on the directory after<br>
&gt; about 15 minutes:<br>
<br>
</span>I think the stat is served from cache - either gluster&#39;s md-cache or the kernel attribute cache. For healing to happen we need to force a lookup (which we had hoped would be issued as part of the stat cmd), and this lookup has to reach the marker xlator loaded on the bricks. To make sure a lookup on the directory reaches marker, we need to:<br>
<br>
1. Turn off the kernel attribute and entry caches (using --entry-timeout=0 and --attribute-timeout=0 as options to glusterfs while mounting)<br>
2. Turn off md-cache using the gluster CLI (gluster volume set &lt;volname&gt; performance.md-cache off)<br>
3. Turn off readdirplus in the entire stack [1]<br>
<br>
Once the above steps are done, doing a stat should result in a lookup on the directory that is witnessed by marker. Once the issue is fixed, you can undo the above three steps so that performance is not affected in your setup.<br>
<br>
[1] <a href="http://nongnu.13855.n7.nabble.com/Turning-off-readdirp-in-the-entire-stack-on-fuse-mount-td220297.html" rel="noreferrer" target="_blank">http://nongnu.13855.n7.nabble.<wbr>com/Turning-off-readdirp-in-th<wbr>e-entire-stack-on-fuse-mount-t<wbr>d220297.html</a><br>
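For reference, a rough sketch of the three steps in shell form (the volume name &quot;storage&quot; and the server IP are from this thread; the scratch mount point and the readdirp option names are assumptions - see [1] for the latter - so please verify before applying):<br>
<pre># 1) Scratch mount with kernel entry/attribute caches disabled:
glusterfs --volfile-server=10.0.231.50 --volfile-id=storage \
    --entry-timeout=0 --attribute-timeout=0 /mnt/storage-nocache

# 2) Turn off md-cache for the volume (step 2 above names the md-cache
#    xlator; performance.stat-prefetch is the usual set key for it):
gluster volume set storage performance.stat-prefetch off

# 3) Turn off readdirplus across the stack (option names per [1]):
gluster volume set storage performance.force-readdirp off
gluster volume set storage dht.force-readdirp off
# ...plus use-readdirp=no as a fuse mount option on the client.

# Force the lookup the marker xlator needs, then undo the three changes:
stat /mnt/storage-nocache/data/projects/MEOPAR</pre><br>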
<div class="m_-5194400459960243334m_-5496848955793637859m_-7645030894149328762m_4318700441570490995HOEnZb"><div class="m_-5194400459960243334m_-5496848955793637859m_-7645030894149328762m_4318700441570490995h5"><br>
&gt;<br>
&gt; [root@gluster07 ~]# setfattr -n trusted.glusterfs.quota.dirty -v 0x3100<br>
&gt; /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/<br>
&gt;<br>
&gt; [root@gluster07 ~]# stat /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR<br>
&gt;<br>
&gt; [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex<br>
&gt; /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR<br>
&gt; # file: /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR<br>
&gt; security.selinux=0x73797374656<wbr>d5f753a6f626a6563745f723a756e6<wbr>c6162656c65645f743a733000<br>
&gt; trusted.gfid=0x7209b677f4b94d8<wbr>2a3820733620e6929<br>
&gt; trusted.glusterfs.6f95525a-94d<wbr>7-4174-bac4-e1a18fe010a2.xtime<wbr>=0x599f228800088654<br>
&gt; trusted.glusterfs.dht=0x000000<wbr>0100000000b6db6d41db6db6ee<br>
&gt; trusted.glusterfs.quota.d5a5ec<wbr>da-7511-4bbb-9b4c-4fcc84e3e1da<wbr>.contri=0xfffffa3d7c1ba6000000<wbr>0000000a9ccb000000000005fd2f<br>
&gt; trusted.glusterfs.quota.dirty=<wbr>0x3100<br>
&gt; trusted.glusterfs.quota.limit-<wbr>set=0x0000088000000000ffffffff<wbr>ffffffff<br>
&gt; trusted.glusterfs.quota.size=0<wbr>xfffffa3d7c1ba60000000000000a9<wbr>ccb000000000005fd2f<br>
&gt;<br>
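(A hedged aside on the stat above: it targets the brick path directly, so it never traverses the gluster client stack and by itself cannot trigger the marker heal; note that Sanoj&#39;s follow-up step in this thread reads &quot;stat &lt;mount path /dir&gt;&quot;. A sketch using the /storage fuse mount that appears further down in the thread:)<br>
<pre># dirty bit is set on the brick path, as above ...
setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 \
    /mnt/raid6-storage/storage/data/projects/MEOPAR/
# ... but the healing lookup has to come through a client mount:
stat /storage/data/projects/MEOPAR</pre><br>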
&gt; [root@gluster07 ~]# gluster volume status storage<br>
&gt; Status of volume: storage<br>
&gt; Gluster process                             TCP Port  RDMA Port  Online  Pid<br>
&gt; ------------------------------<wbr>------------------------------<wbr>------------------<br>
&gt; Brick 10.0.231.50:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49159     0          Y<br>
&gt; 2160<br>
&gt; Brick 10.0.231.51:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49153     0          Y<br>
&gt; 16037<br>
&gt; Brick 10.0.231.52:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49159     0          Y<br>
&gt; 2298<br>
&gt; Brick 10.0.231.53:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49154     0          Y<br>
&gt; 9038<br>
&gt; Brick 10.0.231.54:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49153     0          Y<br>
&gt; 32284<br>
&gt; Brick 10.0.231.55:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49153     0          Y<br>
&gt; 14840<br>
&gt; Brick 10.0.231.56:/mnt/raid6-storage<wbr>/storag<br>
&gt; e                                           49152     0          Y<br>
&gt; 29389<br>
&gt; NFS Server on localhost                     2049      0          Y<br>
&gt; 29421<br>
&gt; Quota Daemon on localhost                   N/A       N/A        Y<br>
&gt; 29438<br>
&gt; NFS Server on 10.0.231.51                   2049      0          Y<br>
&gt; 18249<br>
&gt; Quota Daemon on 10.0.231.51                 N/A       N/A        Y<br>
&gt; 18260<br>
&gt; NFS Server on 10.0.231.55                   2049      0          Y<br>
&gt; 24128<br>
&gt; Quota Daemon on 10.0.231.55                 N/A       N/A        Y<br>
&gt; 24147<br>
&gt; NFS Server on 10.0.231.54                   2049      0          Y<br>
&gt; 9397<br>
&gt; Quota Daemon on 10.0.231.54                 N/A       N/A        Y<br>
&gt; 9406<br>
&gt; NFS Server on 10.0.231.53                   2049      0          Y<br>
&gt; 18387<br>
&gt; Quota Daemon on 10.0.231.53                 N/A       N/A        Y<br>
&gt; 18397<br>
&gt; NFS Server on 10.0.231.52                   2049      0          Y<br>
&gt; 2230<br>
&gt; Quota Daemon on 10.0.231.52                 N/A       N/A        Y<br>
&gt; 2262<br>
&gt; NFS Server on 10.0.231.50                   2049      0          Y<br>
&gt; 2113<br>
&gt; Quota Daemon on 10.0.231.50                 N/A       N/A        Y<br>
&gt; 2154<br>
&gt;<br>
&gt; Task Status of Volume storage<br>
&gt; ------------------------------<wbr>------------------------------<wbr>------------------<br>
&gt; There are no active volume tasks<br>
&gt;<br>
&gt; [root@gluster07 ~]# gluster volume quota storage list | egrep &quot;MEOPAR &quot;<br>
&gt; /data/projects/MEOPAR                      8.5TB     80%(6.8TB) 16384.0PB<br>
&gt; 17.4TB              No                   No<br>
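(An aside on where 16384.0PB comes from: the quota.size value shown above is negative in two&#39;s complement, and rendered unsigned it is close to 2^64 bytes, i.e. 16384 PB. The 24-byte value appears to pack three signed 64-bit fields - size, file count, dir count - which can be decoded with a small sketch like this:)<br>
<pre># decode a trusted.glusterfs.quota.size / .contri value (assumed 3 x int64):
v=fffffa3d7c1ba60000000000000a9ccb000000000005fd2f   # value above, without 0x
for i in 0 1 2; do
    h=${v:$((i*16)):16}
    printf 'field %d: %d\n' "$i" $(( 16#$h ))   # bash wraps into signed 64-bit
done
# field 0 (the size) comes out negative here; quota list shows it unsigned.</pre><br>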
&gt;<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt; Looking at the quota daemon on gluster07:<br>
&gt;<br>
&gt; [root@gluster07 ~]# ps -f -p 29438<br>
&gt; UID        PID  PPID  C STIME TTY          TIME CMD<br>
&gt; root     29438     1  0 Jun19 ?        04:43:31 /usr/sbin/glusterfs -s<br>
&gt; localhost --volfile-id gluster/quotad -p<br>
&gt; /var/lib/glusterd/quotad/run/q<wbr>uotad.pid -l /var/log/glusterfs/quotad.log<br>
&gt;<br>
&gt; I can see some errors in the log - not sure if those are related:<br>
&gt;<br>
&gt; [root@gluster07 ~]# tail /var/log/glusterfs/quotad.log<br>
&gt; [2017-08-28 15:36:17.990909] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:36:17.991389] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:36:17.992656] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:36:17.993235] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:45:51.024756] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:45:51.027871] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:45:51.030843] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:45:51.031324] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:45:51.032791] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt; [2017-08-28 15:45:51.033295] W [dict.c:592:dict_unref]<br>
&gt; (--&gt;/usr/lib64/glusterfs/3.7.1<wbr>3/xlator/features/quotad.so(qd<wbr>_lookup_cbk+0x35e)<br>
&gt; [0x7f79fb09253e]<br>
&gt; --&gt;/usr/lib64/glusterfs/3.7.13<wbr>/xlator/features/quotad.so(quo<wbr>tad_aggregator_getlimit_cbk+0x<wbr>b3)<br>
&gt; [0x7f79fb093333] --&gt;/lib64/libglusterfs.so.0(di<wbr>ct_unref+0x99)<br>
&gt; [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
&gt;<br>
&gt; How should I proceed?<br>
&gt;<br>
&gt; Thanks,<br>
&gt; -Matthew<br>
&gt;<br>
&gt; On Mon, Aug 28, 2017 at 3:13 AM, Sanoj Unnikrishnan &lt;<a href="mailto:sunnikri@redhat.com" target="_blank">sunnikri@redhat.com</a>&gt;<br>
&gt; wrote:<br>
&gt;<br>
&gt; &gt; Hi Mathew,<br>
&gt; &gt;<br>
&gt; &gt; If you are sure that &quot;/mnt/raid6-storage/storage/da<wbr>ta/projects/MEOPAR/&quot;<br>
&gt; &gt; is the only directory with wrong accounting and its immediate<br>
&gt; &gt; subdirectories have correct xattr values, setting the dirty xattr and doing a<br>
&gt; &gt; stat after that should resolve the issue.<br>
&gt; &gt;<br>
&gt; &gt; 1) setfattr -n trusted.glusterfs.quota.dirty -v 0x3100<br>
&gt; &gt; /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/<br>
&gt; &gt;<br>
&gt; &gt; 2) stat /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/<br>
&gt; &gt;<br>
&gt; &gt; Could you please share what kind of operations happen on this<br>
&gt; &gt; directory, to help RCA the issue.<br>
&gt; &gt;<br>
&gt; &gt; If you think this can be true elsewhere in the filesystem as well, use the<br>
&gt; &gt; following scripts to identify the same:<br>
&gt; &gt;<br>
&gt; &gt; 1) <a href="https://github.com/gluster/glusterfs/blob/master/extras/" rel="noreferrer" target="_blank">https://github.com/gluster/glu<wbr>sterfs/blob/master/extras/</a><br>
&gt; &gt; quota/xattr_analysis.py<br>
&gt; &gt; 2) <a href="https://github.com/gluster/glusterfs/blob/master/extras/" rel="noreferrer" target="_blank">https://github.com/gluster/glu<wbr>sterfs/blob/master/extras/</a><br>
&gt; &gt; quota/log_accounting.sh<br>
&gt; &gt;<br>
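(Alongside those scripts, a quick inline cross-check is to sum the size field of each immediate child&#39;s contri xattr on the suspect brick and compare the total with the parent&#39;s trusted.glusterfs.quota.size. A sketch - the gfid in the key is the parent directory&#39;s trusted.gfid as shown elsewhere in this thread, and plain files directly under the directory would need to be added too:)<br>
<pre>dir=/mnt/raid6-storage/storage/data/projects/MEOPAR    # brick path
key=trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri
total=0
for d in "$dir"/*/; do
    h=$(getfattr --absolute-names --only-values -e hex -n "$key" "$d" \
        2&gt;/dev/null | cut -c 3-18)     # first 8 bytes = size field
    [ -n "$h" ] &amp;&amp; total=$(( total + 16#$h ))
done
echo "sum of child contri sizes: $total bytes"</pre><br>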
&gt; &gt; Regards,<br>
&gt; &gt; Sanoj<br>
&gt; &gt;<br>
&gt; &gt;<br>
&gt; &gt;<br>
&gt; &gt;<br>
&gt; &gt; On Mon, Aug 28, 2017 at 12:39 PM, Raghavendra Gowdappa &lt;<br>
&gt; &gt; <a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>&gt; wrote:<br>
&gt; &gt;<br>
&gt; &gt;&gt; +sanoj<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; ----- Original Message -----<br>
&gt; &gt;&gt; &gt; From: &quot;Matthew B&quot; &lt;<a href="mailto:matthew.has.questions@gmail.com" target="_blank">matthew.has.questions@gmail.c<wbr>om</a>&gt;<br>
&gt; &gt;&gt; &gt; To: <a href="mailto:gluster-devel@gluster.org" target="_blank">gluster-devel@gluster.org</a><br>
&gt; &gt;&gt; &gt; Sent: Saturday, August 26, 2017 12:45:19 AM<br>
&gt; &gt;&gt; &gt; Subject: [Gluster-devel] Quota Used Value Incorrect - Fix now or after<br>
&gt; &gt;&gt;       upgrade<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; Hello,<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; I need some advice on fixing an issue with quota on my gluster volume.<br>
&gt; &gt;&gt; It&#39;s<br>
&gt; &gt;&gt; &gt; running version 3.7, distributed volume, with 7 nodes.<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; # gluster --version<br>
&gt; &gt;&gt; &gt; glusterfs 3.7.13 built on Jul 8 2016 15:26:18<br>
&gt; &gt;&gt; &gt; Repository revision: git:// <a href="http://git.gluster.com/glusterfs.git" rel="noreferrer" target="_blank">git.gluster.com/glusterfs.git</a><br>
&gt; &gt;&gt; &gt; Copyright (c) 2006-2011 Gluster Inc. &lt; <a href="http://www.gluster.com" rel="noreferrer" target="_blank">http://www.gluster.com</a> &gt;<br>
&gt; &gt;&gt; &gt; GlusterFS comes with ABSOLUTELY NO WARRANTY.<br>
&gt; &gt;&gt; &gt; You may redistribute copies of GlusterFS under the terms of the GNU<br>
&gt; &gt;&gt; General<br>
&gt; &gt;&gt; &gt; Public License.<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; # gluster volume info storage<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; Volume Name: storage<br>
&gt; &gt;&gt; &gt; Type: Distribute<br>
&gt; &gt;&gt; &gt; Volume ID: 6f95525a-94d7-4174-bac4-e1a18f<wbr>e010a2<br>
&gt; &gt;&gt; &gt; Status: Started<br>
&gt; &gt;&gt; &gt; Number of Bricks: 7<br>
&gt; &gt;&gt; &gt; Transport-type: tcp<br>
&gt; &gt;&gt; &gt; Bricks:<br>
&gt; &gt;&gt; &gt; Brick1: 10.0.231.50:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Brick2: 10.0.231.51:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Brick3: 10.0.231.52:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Brick4: 10.0.231.53:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Brick5: 10.0.231.54:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Brick6: 10.0.231.55:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Brick7: 10.0.231.56:/mnt/raid6-storage<wbr>/storage<br>
&gt; &gt;&gt; &gt; Options Reconfigured:<br>
&gt; &gt;&gt; &gt; changelog.changelog: on<br>
&gt; &gt;&gt; &gt; geo-replication.ignore-pid-che<wbr>ck: on<br>
&gt; &gt;&gt; &gt; geo-replication.indexing: on<br>
&gt; &gt;&gt; &gt; nfs.disable: no<br>
&gt; &gt;&gt; &gt; performance.readdir-ahead: on<br>
&gt; &gt;&gt; &gt; features.quota: on<br>
&gt; &gt;&gt; &gt; features.inode-quota: on<br>
&gt; &gt;&gt; &gt; features.quota-deem-statfs: on<br>
&gt; &gt;&gt; &gt; features.read-only: off<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; # df -h /storage/<br>
&gt; &gt;&gt; &gt; Filesystem Size Used Avail Use% Mounted on<br>
&gt; &gt;&gt; &gt; 10.0.231.50:/storage 255T 172T 83T 68% /storage<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; I am planning to upgrade to 3.10 (or 3.12 when it&#39;s available) but I<br>
&gt; &gt;&gt; have a<br>
&gt; &gt;&gt; &gt; number of quotas configured, and one of them (below) has a very wrong<br>
&gt; &gt;&gt; &quot;Used&quot;<br>
&gt; &gt;&gt; &gt; value:<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; # gluster volume quota storage list | egrep &quot;MEOPAR &quot;<br>
&gt; &gt;&gt; &gt; /data/projects/MEOPAR 8.5TB 80%(6.8TB) 16384.0PB 17.4TB No No<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; I have confirmed the bad value appears in one of the bricks&#39; current<br>
&gt; &gt;&gt; xattrs,<br>
&gt; &gt;&gt; &gt; and it looks like the issue has been encountered previously on bricks<br>
&gt; &gt;&gt; 04,<br>
&gt; &gt;&gt; &gt; 03, and 06: (gluster07 does not have a trusted.glusterfs.quota.size.1<br>
&gt; &gt;&gt; as it<br>
&gt; &gt;&gt; &gt; was recently added)<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; $ ansible -i hosts gluster-servers[0:6] -u &lt;user&gt; --ask-pass -m shell -b<br>
&gt; &gt;&gt; &gt; --become-method=sudo --ask-become-pass -a &quot;getfattr --absolute-names -m<br>
&gt; &gt;&gt; . -d<br>
&gt; &gt;&gt; &gt; -e hex /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR | egrep<br>
&gt; &gt;&gt; &gt; &#39;^trusted.glusterfs.quota.size<wbr>&#39;&quot;<br>
&gt; &gt;&gt; &gt; SSH password:<br>
&gt; &gt;&gt; &gt; SUDO password[defaults to SSH password]:<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster02 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>x0000011ecfa56c00000000000005c<br>
&gt; &gt;&gt; d6d000000000006d478<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size.1<wbr>=0x0000010ad4a4520000000000000<br>
&gt; &gt;&gt; 12a0300000000000150fa<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster05 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>x00000033b8e92200000000000004c<br>
&gt; &gt;&gt; de8000000000006b1a4<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size.1<wbr>=0x0000010dca277c0000000000000<br>
&gt; &gt;&gt; 1297d0000000000015005<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster01 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>x0000003d4d4348000000000000057<br>
&gt; &gt;&gt; 616000000000006afd2<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size.1<wbr>=0x00000133fe211e0000000000000<br>
&gt; &gt;&gt; 5d161000000000006cfd4<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster04 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>xffffff396f3e9400000000000004d<br>
&gt; &gt;&gt; 7ea0000000000068c62<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size.1<wbr>=0x00000106e672480000000000000<br>
&gt; &gt;&gt; 1138f0000000000012fb2<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster03 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>xfffffd02acabf0000000000000035<br>
&gt; &gt;&gt; 99000000000000643e2<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size.1<wbr>=0x00000114e20f5e0000000000000<br>
&gt; &gt;&gt; 113b30000000000012fb2<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster06 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>xffffff0c98de44000000000000053<br>
&gt; &gt;&gt; 6e40000000000068cf2<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size.1<wbr>=0x0000013532664e0000000000000<br>
&gt; &gt;&gt; 5e73f000000000006cfd4<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; gluster07 | SUCCESS | rc=0 &gt;&gt;<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>xfffffa3d7c1ba60000000000000a9<br>
&gt; &gt;&gt; ccb000000000005fd2f<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; And reviewing the subdirectories of that folder on the impacted server<br>
&gt; &gt;&gt; you<br>
&gt; &gt;&gt; &gt; can see that none of the direct children have such incorrect values:<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex<br>
&gt; &gt;&gt; &gt; /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/*<br>
&gt; &gt;&gt; &gt; # file: /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/&lt;dir1 &gt;<br>
&gt; &gt;&gt; &gt; ...<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.7209b6<wbr>77-f4b9-4d82-a382-0733620e6929<br>
&gt; &gt;&gt; .contri=0x000000fb684182000000<wbr>0000000074730000000000000dae<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.dirty=<wbr>0x3000<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>x000000fb684182000000000000007<br>
&gt; &gt;&gt; 4730000000000000dae<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; # file: /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/&lt;dir2 &gt;<br>
&gt; &gt;&gt; &gt; ...<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.7209b6<wbr>77-f4b9-4d82-a382-0733620e6929<br>
&gt; &gt;&gt; .contri=0x0000000416d5f4000000<wbr>000000000baa0000000000000441<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.dirty=<wbr>0x3000<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.limit-<wbr>set=0x0000010000000000ffffffff<wbr>ffffffff<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>x0000000416d5f4000000000000000<br>
&gt; &gt;&gt; baa0000000000000441<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; # file: /mnt/raid6-storage/storage/dat<wbr>a/projects/MEOPAR/&lt;dir3&gt;<br>
&gt; &gt;&gt; &gt; ...<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.7209b6<wbr>77-f4b9-4d82-a382-0733620e6929<br>
&gt; &gt;&gt; .contri=0x000000110f2c4e000000<wbr>00000002a76a000000000006ad7d<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.dirty=<wbr>0x3000<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.limit-<wbr>set=0x0000020000000000ffffffff<wbr>ffffffff<br>
&gt; &gt;&gt; &gt; trusted.glusterfs.quota.size=0<wbr>x000000110f2c4e00000000000002a<br>
&gt; &gt;&gt; 76a000000000006ad7d<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; Can I fix this on the current version of gluster (3.7) on just the one<br>
&gt; &gt;&gt; brick<br>
&gt; &gt;&gt; &gt; before I upgrade? Or would I be better off upgrading to 3.10 and trying<br>
&gt; &gt;&gt; to<br>
&gt; &gt;&gt; &gt; fix it then?<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; I have reviewed information here:<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; <a href="http://lists.gluster.org/pipermail/gluster-devel/2016-Februa" rel="noreferrer" target="_blank">http://lists.gluster.org/piper<wbr>mail/gluster-devel/2016-Februa</a><br>
&gt; &gt;&gt; ry/048282.html<br>
&gt; &gt;&gt; &gt; <a href="http://lists.gluster.org/pipermail/gluster-users.old/2016-" rel="noreferrer" target="_blank">http://lists.gluster.org/piper<wbr>mail/gluster-users.old/2016-</a><br>
&gt; &gt;&gt; September/028365.html<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; It seems like since I am on Gluster 3.7 I can disable quotas and<br>
&gt; &gt;&gt; re-enable them,<br>
&gt; &gt;&gt; &gt; and everything will get recalculated, incrementing the index on the<br>
&gt; &gt;&gt; &gt; quota.size xattr. But with the size of the volume that will take a very<br>
&gt; &gt;&gt; long<br>
&gt; &gt;&gt; &gt; time.<br>
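(For completeness, the disable/re-enable cycle being weighed here looks like the sketch below; as far as I know, disabling quota also drops the configured limits, so each limit would need to be re-applied afterwards:)<br>
<pre>gluster volume quota storage disable
gluster volume quota storage enable
# re-apply limits afterwards, e.g.:
gluster volume quota storage limit-usage /data/projects/MEOPAR 8.5TB</pre><br>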
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; Could I simply mark the impacted directory as dirty on gluster07? Or<br>
&gt; &gt;&gt; update<br>
&gt; &gt;&gt; &gt; the xattr directly as the sum of the size of dir1, 2, and 3?<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; Thanks,<br>
&gt; &gt;&gt; &gt; -Matthew<br>
&gt; &gt;&gt; &gt;<br>
&gt; &gt;&gt; &gt; ______________________________<wbr>_________________<br>
&gt; &gt;&gt; &gt; Gluster-devel mailing list<br>
&gt; &gt;&gt; &gt; <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
&gt; &gt;&gt; &gt; <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-devel</a><br>
&gt; &gt;&gt;<br>
&gt; &gt;<br>
&gt; &gt;<br>
&gt;<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
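<div><br></div><div>P.S. A rough sketch of step 1 above, for concreteness (the volume name &quot;storage&quot; is from this thread; the aux mount point path is an assumption, so confirm it from /proc/mounts and the process command line before unmounting):<br>
<pre># the auxiliary quota mount runs with "--client-pid -5"; find it via the
# pid file named in its command line (see the example line above):
pid=$(cat /var/run/gluster/storage.pid)
ps -fp "$pid"                      # should show ... --client-pid -5 ...
grep glusterfs /proc/mounts        # locate the matching aux mount point
umount /var/run/gluster/storage/   # assumed location - verify first
gluster volume quota storage list  # step 2: the listing recreates the mount</pre></div>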