<div dir="ltr"><div><div>Hi Sanoj, <br></div><div><br></div><div>Thank you for the information - I have applied the changes you specified above - but I haven't seen any changes in the xattrs on the directory after about 15 minutes: <br></div><div><br></div><div><tt>[root@gluster07 ~]# setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /mnt/raid6-storage/storage/data/projects/MEOPAR/<br></tt></div><div><tt><br></tt></div><div><tt>[root@gluster07 ~]# stat /mnt/raid6-storage/storage/data/projects/MEOPAR<br></tt></div><div><tt><br></tt></div><div><tt>[root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR<br># file: /mnt/raid6-storage/storage/data/projects/MEOPAR<br>security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000<br>trusted.gfid=0x7209b677f4b94d82a3820733620e6929<br>trusted.glusterfs.6f95525a-94d7-4174-bac4-e1a18fe010a2.xtime=0x599f228800088654<br>trusted.glusterfs.dht=0x0000000100000000b6db6d41db6db6ee<br>trusted.glusterfs.quota.d5a5ecda-7511-4bbb-9b4c-4fcc84e3e1da.contri=0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f<br>trusted.glusterfs.quota.dirty=0x3100<br>trusted.glusterfs.quota.limit-set=0x0000088000000000ffffffffffffffff<br>trusted.glusterfs.quota.size=0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f<br></tt></div><div><tt><br></tt></div><div><tt>[root@gluster07 ~]# gluster volume status storage</tt><tt><br>
</tt><tt>Status of volume: storage</tt><tt><br>
</tt><tt>Gluster process                                TCP Port  RDMA Port  Online  Pid</tt><tt><br>
</tt><tt>------------------------------------------------------------------------------</tt><tt><br>
</tt><tt>Brick 10.0.231.50:/mnt/raid6-storage/storage   49159     0          Y       2160</tt><tt><br>
</tt><tt>Brick 10.0.231.51:/mnt/raid6-storage/storage   49153     0          Y       16037</tt><tt><br>
</tt><tt>Brick 10.0.231.52:/mnt/raid6-storage/storage   49159     0          Y       2298</tt><tt><br>
</tt><tt>Brick 10.0.231.53:/mnt/raid6-storage/storage   49154     0          Y       9038</tt><tt><br>
</tt><tt>Brick 10.0.231.54:/mnt/raid6-storage/storage   49153     0          Y       32284</tt><tt><br>
</tt><tt>Brick 10.0.231.55:/mnt/raid6-storage/storage   49153     0          Y       14840</tt><tt><br>
</tt><tt>Brick 10.0.231.56:/mnt/raid6-storage/storage   49152     0          Y       29389</tt><tt><br>
</tt><tt>NFS Server on localhost                        2049      0          Y       29421</tt><tt><br>
</tt><tt>Quota Daemon on localhost                      N/A       N/A        Y       29438</tt><tt><br>
</tt><tt>NFS Server on 10.0.231.51                      2049      0          Y       18249</tt><tt><br>
</tt><tt>Quota Daemon on 10.0.231.51                    N/A       N/A        Y       18260</tt><tt><br>
</tt><tt>NFS Server on 10.0.231.55                      2049      0          Y       24128</tt><tt><br>
</tt><tt>Quota Daemon on 10.0.231.55                    N/A       N/A        Y       24147</tt><tt><br>
</tt><tt>NFS Server on 10.0.231.54                      2049      0          Y       9397</tt><tt><br>
</tt><tt>Quota Daemon on 10.0.231.54                    N/A       N/A        Y       9406</tt><tt><br>
</tt><tt>NFS Server on 10.0.231.53                      2049      0          Y       18387</tt><tt><br>
</tt><tt>Quota Daemon on 10.0.231.53                    N/A       N/A        Y       18397</tt><tt><br>
</tt><tt>NFS Server on 10.0.231.52                      2049      0          Y       2230</tt><tt><br>
</tt><tt>Quota Daemon on 10.0.231.52                    N/A       N/A        Y       2262</tt><tt><br>
</tt><tt>NFS Server on 10.0.231.50                      2049      0          Y       2113</tt><tt><br>
</tt><tt>Quota Daemon on 10.0.231.50                    N/A       N/A        Y       2154</tt><tt><br>
</tt><tt><br>
</tt><tt>Task Status of Volume storage</tt><tt><br>
</tt><tt>------------------------------------------------------------------------------</tt><tt><br>
</tt><tt>There are no active volume tasks</tt></div><div><tt><br></tt></div><div><tt>[root@gluster07 ~]# gluster volume quota storage list | egrep "MEOPAR "<br>/data/projects/MEOPAR  8.5TB  80%(6.8TB)  16384.0PB  17.4TB  No  No<br></tt></div>
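<div><br></div><div>As a sanity check, I decoded that quota.size hex by hand and the byte counter on gluster07 appears to have gone negative. This is just my own sketch - I'm assuming the 24-byte value is three big-endian signed 64-bit fields (byte count, file count, dir count) and leaning on bash's signed 64-bit arithmetic: <br></div><div><pre># trusted.glusterfs.quota.size payload from gluster07, without the "0x"
x=fffffa3d7c1ba60000000000000a9ccb000000000005fd2f

echo "bytes: $((16#${x:0:16}))"   # first field: prints a negative number
echo "files: $((16#${x:16:16}))"  # second field: file count
echo "dirs:  $((16#${x:32:16}))"  # third field: directory count</pre></div><div>If I'm reading it right, a negative aggregate would also explain the nonsensical 16384.0PB in the quota list above.<br></div>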
<div><br></div><div>Looking at the quota daemon on gluster07: <br></div><div><br></div><div><tt>[root@gluster07 ~]# ps -f -p 29438<br>UID        PID  PPID  C STIME TTY          TIME CMD<br>root     29438     1  0 Jun19 ?        04:43:31 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log<br></tt></div><div><br></div><div>I can see some errors in the log - not sure if they are related: <br></div><div><br></div><div><tt>[root@gluster07 ~]# tail /var/log/glusterfs/quotad.log<br>
[2017-08-28 15:36:17.990909] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:36:17.991389] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:36:17.992656] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:36:17.993235] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:45:51.024756] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:45:51.027871] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:45:51.030843] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:45:51.031324] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:45:51.032791] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]<br>
[2017-08-28 15:45:51.033295] W [dict.c:592:dict_unref] (-->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(qd_lookup_cbk+0x35e) [0x7f79fb09253e] -->/usr/lib64/glusterfs/3.7.13/xlator/features/quotad.so(quotad_aggregator_getlimit_cbk+0xb3) [0x7f79fb093333] -->/lib64/libglusterfs.so.0(dict_unref+0x99) [0x7f7a090299e9] ) 0-dict: dict is NULL [Invalid argument]</tt></div>
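<div><br></div><div>For reference, this is the loop I've been using to watch it (my own sketch - the 60-second interval is arbitrary, and I'm assuming the dirty flag should flip back from 0x3100 to 0x3000 once the recalculation finishes, since 0x3000 is what the clean children show): <br></div><div><pre># re-trigger the lookup, then poll the quota xattrs on the gluster07 brick
stat /mnt/raid6-storage/storage/data/projects/MEOPAR > /dev/null
while true; do
    getfattr --absolute-names -m . -d -e hex \
        /mnt/raid6-storage/storage/data/projects/MEOPAR \
        | egrep 'quota\.(dirty|size)'
    sleep 60
done</pre></div>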
<div><br></div><div>How should I proceed? </div><div><br></div><div>Thanks, <br></div><div>-Matthew<br></div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 28, 2017 at 3:13 AM, Sanoj Unnikrishnan <span dir="ltr"><<a href="mailto:sunnikri@redhat.com" target="_blank">sunnikri@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><tt>Hi Matthew,<br><br>If you are sure that "/mnt/raid6-storage/storage/data/projects/MEOPAR/"<br></tt></div><tt>is the only directory with wrong accounting, and its immediate subdirectories have correct xattr values, then setting the dirty xattr and doing a stat after that should resolve the issue:<br></tt><pre>1) setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 <tt>/mnt/raid6-storage/storage/data/projects/MEOPAR/</tt></pre><tt>2) stat </tt><tt>/mnt/raid6-storage/storage/data/projects/MEOPAR/<br><br></tt></div><div><tt></tt></div><tt>Could you please share what kind of operations happen on this directory, to help RCA the issue.<br></tt></div><div><tt><br>If you think this could be true elsewhere in the filesystem as well, use the following scripts to identify the same:<br><br>1) <a href="https://github.com/gluster/glusterfs/blob/master/extras/quota/xattr_analysis.py" target="_blank">https://github.com/gluster/glusterfs/blob/master/extras/quota/xattr_analysis.py</a><br>2) <a href="https://github.com/gluster/glusterfs/blob/master/extras/quota/log_accounting.sh" target="_blank">https://github.com/gluster/glusterfs/blob/master/extras/quota/log_accounting.sh</a><br></tt></div><div><tt><br></tt></div><div><tt>Regards,<br></tt></div><div><tt>Sanoj<br></tt></div><div><tt><br></tt></div></div><div class="gmail-HOEnZb"><div class="gmail-h5"><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 28, 2017 at 12:39 PM, Raghavendra Gowdappa <span dir="ltr"><<a href="mailto:rgowdapp@redhat.com" target="_blank">rgowdapp@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">+sanoj<br>
<div><div class="gmail-m_8620201421561964004h5"><br>
----- Original Message -----<br>
> From: "Matthew B" <<a href="mailto:matthew.has.questions@gmail.com" target="_blank">matthew.has.questions@gmail.c<wbr>om</a>><br>
> To: <a href="mailto:gluster-devel@gluster.org" target="_blank">gluster-devel@gluster.org</a><br>
> Sent: Saturday, August 26, 2017 12:45:19 AM<br>
> Subject: [Gluster-devel] Quota Used Value Incorrect - Fix now or after upgrade<br>
><br>
> Hello,<br>
><br>
> I need some advice on fixing an issue with quota on my gluster volume. It's<br>
> running version 3.7, distributed volume, with 7 nodes.<br>
><br>
> # gluster --version<br>
> glusterfs 3.7.13 built on Jul 8 2016 15:26:18<br>
> Repository revision: git://<a href="http://git.gluster.com/glusterfs.git" rel="noreferrer" target="_blank">git.gluster.com/glusterfs.git</a><br>
> Copyright (c) 2006-2011 Gluster Inc. < <a href="http://www.gluster.com" rel="noreferrer" target="_blank">http://www.gluster.com</a> ><br>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.<br>
> You may redistribute copies of GlusterFS under the terms of the GNU General<br>
> Public License.<br>
><br>
> # gluster volume info storage<br>
><br>
> Volume Name: storage<br>
> Type: Distribute<br>
> Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2<br>
> Status: Started<br>
> Number of Bricks: 7<br>
> Transport-type: tcp<br>
> Bricks:<br>
> Brick1: 10.0.231.50:/mnt/raid6-storage/storage<br>
> Brick2: 10.0.231.51:/mnt/raid6-storage/storage<br>
> Brick3: 10.0.231.52:/mnt/raid6-storage/storage<br>
> Brick4: 10.0.231.53:/mnt/raid6-storage/storage<br>
> Brick5: 10.0.231.54:/mnt/raid6-storage/storage<br>
> Brick6: 10.0.231.55:/mnt/raid6-storage/storage<br>
> Brick7: 10.0.231.56:/mnt/raid6-storage/storage<br>
> Options Reconfigured:<br>
> changelog.changelog: on<br>
> geo-replication.ignore-pid-check: on<br>
> geo-replication.indexing: on<br>
> nfs.disable: no<br>
> performance.readdir-ahead: on<br>
> features.quota: on<br>
> features.inode-quota: on<br>
> features.quota-deem-statfs: on<br>
> features.read-only: off<br>
><br>
> # df -h /storage/<br>
> Filesystem Size Used Avail Use% Mounted on<br>
> 10.0.231.50:/storage 255T 172T 83T 68% /storage<br>
><br>
><br>
> I am planning to upgrade to 3.10 (or 3.12 when it's available) but I have a<br>
> number of quotas configured, and one of them (below) has a very wrong "Used"<br>
> value:<br>
><br>
> # gluster volume quota storage list | egrep "MEOPAR "<br>
> /data/projects/MEOPAR 8.5TB 80%(6.8TB) 16384.0PB 17.4TB No No<br>
><br>
><br>
> I have confirmed the bad value appears in one of the brick's current xattrs,<br>
> and it looks like the issue has been encountered previously on bricks 04,<br>
> 03, and 06: (gluster07 does not have a trusted.glusterfs.quota.size.1 as it<br>
> was recently added)<br>
><br>
> $ ansible -i hosts gluster-servers[0:6] -u <user> --ask-pass -m shell -b<br>
> --become-method=sudo --ask-become-pass -a "getfattr --absolute-names -m . -d<br>
> -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR | egrep<br>
> '^trusted.glusterfs.quota.size'"<br>
> SSH password:<br>
> SUDO password[defaults to SSH password]:<br>
><br>
> gluster02 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0x0000011ecfa56c00000000000005cd6d000000000006d478<br>
> trusted.glusterfs.quota.size.1=0x0000010ad4a452000000000000012a0300000000000150fa<br>
><br>
> gluster05 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0x00000033b8e92200000000000004cde8000000000006b1a4<br>
> trusted.glusterfs.quota.size.1=0x0000010dca277c00000000000001297d0000000000015005<br>
><br>
> gluster01 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0x0000003d4d4348000000000000057616000000000006afd2<br>
> trusted.glusterfs.quota.size.1=0x00000133fe211e00000000000005d161000000000006cfd4<br>
><br>
> gluster04 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0xffffff396f3e9400000000000004d7ea0000000000068c62<br>
> trusted.glusterfs.quota.size.1=0x00000106e6724800000000000001138f0000000000012fb2<br>
><br>
> gluster03 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0xfffffd02acabf000000000000003599000000000000643e2<br>
> trusted.glusterfs.quota.size.1=0x00000114e20f5e0000000000000113b30000000000012fb2<br>
><br>
> gluster06 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0xffffff0c98de440000000000000536e40000000000068cf2<br>
> trusted.glusterfs.quota.size.1=0x0000013532664e00000000000005e73f000000000006cfd4<br>
><br>
> gluster07 | SUCCESS | rc=0 >><br>
> trusted.glusterfs.quota.size=0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f<br>
><br>
> And reviewing the subdirectories of that folder on the impacted server you<br>
> can see that none of the direct children have such incorrect values:<br>
><br>
> [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex<br>
> /mnt/raid6-storage/storage/data/projects/MEOPAR/*<br>
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/<dir1 ><br>
> ...<br>
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000000fb6841820000000000000074730000000000000dae<br>
> trusted.glusterfs.quota.dirty=0x3000<br>
> trusted.glusterfs.quota.size=0x000000fb6841820000000000000074730000000000000dae<br>
><br>
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/<dir2 ><br>
> ...<br>
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x0000000416d5f4000000000000000baa0000000000000441<br>
> trusted.glusterfs.quota.dirty=0x3000<br>
> trusted.glusterfs.quota.limit-set=0x0000010000000000ffffffffffffffff<br>
> trusted.glusterfs.quota.size=0x0000000416d5f4000000000000000baa0000000000000441<br>
><br>
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/<dir3><br>
> ...<br>
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000000110f2c4e00000000000002a76a000000000006ad7d<br>
> trusted.glusterfs.quota.dirty=0x3000<br>
> trusted.glusterfs.quota.limit-set=0x0000020000000000ffffffffffffffff<br>
> trusted.glusterfs.quota.size=0x000000110f2c4e00000000000002a76a000000000006ad7d<br>
><br>
><br>
> Can I fix this on the current version of gluster (3.7) on just the one brick<br>
> before I upgrade? Or would I be better off upgrading to 3.10 and trying to<br>
> fix it then?<br>
><br>
> I have reviewed information here:<br>
><br>
> <a href="http://lists.gluster.org/pipermail/gluster-devel/2016-February/048282.html" rel="noreferrer" target="_blank">http://lists.gluster.org/piper<wbr>mail/gluster-devel/2016-Februa<wbr>ry/048282.html</a><br>
> <a href="http://lists.gluster.org/pipermail/gluster-users.old/2016-September/028365.html" rel="noreferrer" target="_blank">http://lists.gluster.org/piper<wbr>mail/gluster-users.old/2016-<wbr>September/028365.html</a><br>
><br>
> It seems like, since I am on Gluster 3.7, I can disable quotas and re-enable<br>
> them, and everything will get recalculated, incrementing the index on the<br>
> quota.size xattr. But with the size of the volume that will take a very long<br>
> time.<br>
><br>
> Could I simply mark the impacted directory as dirty on gluster07? Or update<br>
> the xattr directly as the sum of the size of dir1, 2, and 3?<br>
><br>
> Thanks,<br>
> -Matthew<br>
><br>
</div></div>> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
> <a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/mailm<wbr>an/listinfo/gluster-devel</a><br>
</blockquote></div><br></div>
</div></div></blockquote></div><br></div></div></div></div></div>