<div dir="ltr">Hi <span id="m_4737867913707088433:og.11">Srijan</span>,<div><br></div><div>Is there a way of getting the status of the crawl process?</div><div>We are going to expand this cluster, adding 12 new bricks (around 500TB) and we rely heavily on the quota feature to control the space usage for each project. It's been running since Saturday (<span id="m_4737867913707088433:og.12">nothing</span> <span id="m_4737867913707088433:og.13">changed</span>) and unsure if it's going to finish tomorrow or in weeks.</div><div><br></div><div>Thank you!</div><div><div><div dir="ltr" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div style="font-size:small"><b><span id="m_4737867913707088433:og.14">João</span> <span id="m_4737867913707088433:og.15">Baúto</span></b></div><div><font size="1">---------------</font></div><div><div><div dir="ltr"><b><font size="1">Scientific <span style="border-collapse:collapse">Computing and Software Platform<br></span></font></b><div><span style="border-collapse:collapse"><font size="1"><font color="#666666"><span id="m_4737867913707088433:og.16">Champalimaud</span> Research<br><span id="m_4737867913707088433:og.17">Champalimaud</span> Center for the Unknown<br>Av. <span id="m_4737867913707088433:og.18">Brasília</span>, <span id="m_4737867913707088433:og.19">Doca</span> <span id="m_4737867913707088433:og.20">de</span> <span id="m_4737867913707088433:og.21">Pedrouços</span><br>1400-038 Lisbon, Portugal</font><br><a href="https://www.fchampalimaud.org/" style="color:rgb(17,85,204)" target="_blank"><span id="m_4737867913707088433:og.22">fchampalimaud</span>.org</a></font></span></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Srijan Sivakumar <<a href="mailto:ssivakum@redhat.com" target="_blank">ssivakum@redhat.com</a>> escreveu no dia domingo, 16/08/2020 à(s) 06:11:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div>Hi João,<div dir="auto"><br></div><div dir="auto">Yes it'll take some time given the file system size as it has to change the xattrs in each level and then crawl upwards.</div><div dir="auto"><br></div><div dir="auto">stat is done by the script itself so the crawl is initiated.</div><br>Regards,</div><div dir="auto">Srijan Sivakumar</div><div dir="auto"><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Sun 16 Aug, 2020, 04:58 João Baúto, <<a href="mailto:joao.bauto@neuro.fchampalimaud.org" rel="noreferrer noreferrer noreferrer" target="_blank">joao.bauto@neuro.fchampalimaud.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi Srijan & Strahil,<div><br></div><div>I ran the quota_fsck script mentioned in Hari's blog post in all bricks and it detected a lot of size mismatch. 

On Sun, 16 Aug 2020, 04:58, João Baúto <joao.bauto@neuro.fchampalimaud.org> wrote:

Hi Srijan & Strahil,

I ran the quota_fsck script mentioned in Hari's blog post on all bricks and it detected a lot of size mismatches.

The script was executed (on all nodes and bricks) as:

python quota_fsck.py --sub-dir projectB --fix-issues /mnt/tank /tank/volume2/brick

Here is a snippet of the script's output:

Size Mismatch /tank/volume2/brick/projectB {'parents': {'00000000-0000-0000-0000-000000000001': {'contri_file_count': 18446744073035296610L, 'contri_size': 18446645297413872640L, 'contri_dir_count': 18446744073709527653L}}, 'version': '1', 'file_count': 18446744073035296610L, 'dirty': False, 'dir_count': 18446744073709527653L, 'size': 18446645297413872640L} 15204281691754
MARKING DIRTY: /tank/volume2/brick/projectB
stat on /mnt/tank/projectB
Files verified : 683223
Directories verified : 46823
Objects Fixed : 705230

Checking the xattrs on the bricks, I can see the directory in question marked as dirty:

# getfattr -d -m. -e hex /tank/volume2/brick/projectB
getfattr: Removing leading '/' from absolute path names
# file: tank/volume2/brick/projectB
trusted.gfid=0x3ca2bce0455945efa6662813ce20fc0c
trusted.glusterfs.9582685f-07fa-41fd-b9fc-ebab3a6989cf.xtime=0x5f372478000a7705
trusted.glusterfs.dht=0xe1a4060c000000003ffffffe5ffffffc
trusted.glusterfs.mdata=0x010000000000000000000000005f3724750000000013ddf679000000005ce2aff90000000007fdacb0000000005ce2aff90000000007fdacb0
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x00000ca6ccf7a80000000000000790a1000000000000b6ea
trusted.glusterfs.quota.dirty=0x3100
trusted.glusterfs.quota.limit-set.1=0x0000640000000000ffffffffffffffff
trusted.glusterfs.quota.size.1=0x00000ca6ccf7a80000000000000790a1000000000000b6ea

Now, my question is: how do I trigger Gluster to recalculate the quota for this directory? Is it automatic and just takes a while? I ask because the quota list did change, but not to a good "result":

 Path       Hard-limit  Soft-limit   Used       Available  Soft-limit exceeded?  Hard-limit exceeded?
/projectB   100.0TB     80%(80.0TB)  16383.9PB  190.1TB    No                    No

I would like to avoid a quota disable/enable on the volume, as it removes the configs.

Thank you for all the help!

João Baúto
---------------
Scientific Computing and Software Platform
Champalimaud Research
Champalimaud Center for the Unknown
Av. Brasília, Doca de Pedrouços
1400-038 Lisbon, Portugal
fchampalimaud.org
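
The impossibly large numbers above look like negative counters printed as unsigned 64-bit values. Assuming the quota size/contri xattrs pack three big-endian signed 64-bit fields (bytes used, file count, dir count), which is what the contri_size / contri_file_count / contri_dir_count fields in the script output suggest, they can be decoded with a short sketch like this (not an official tool):

# decode_quota_size.py - interpret trusted.glusterfs.quota.size.N under
# the assumed layout: three big-endian signed 64-bit fields
# (bytes used, file count, dir count).
import struct

def decode(hexval: str):
    raw = bytes.fromhex(hexval[2:] if hexval.startswith("0x") else hexval)
    size, files, dirs = struct.unpack(">qqq", raw)
    return {"size": size, "file_count": files, "dir_count": dirs}

# healthy-looking value from the brick above, after the fix-issues run:
print(decode("0x00000ca6ccf7a80000000000000790a1000000000000b6ea"))
# -> {'size': 13910542886912, 'file_count': 495777, 'dir_count': 46826}
#    i.e. ~13.9TB, and a dir count close to the 46823 directories verified

# the "impossible" numbers in the script output tell the same story:
print(18446744073709527653 - 2**64)  # dir_count -> -23963
print(18446645297413872640 - 2**64)  # size      -> about -98.8TB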

Srijan Sivakumar <ssivakum@redhat.com> wrote on Saturday, 15/08/2020 at 11:57:

Hi João,

The quota accounting error is what we're looking at here. I think you've already looked into the blog post by Hari and are using the script to fix the accounting issue. That should help you out in fixing it.

Let me know if you face any issues while using it.

Regards,
Srijan Sivakumar

On Fri, 14 Aug 2020, 17:10, João Baúto <joao.bauto@neuro.fchampalimaud.org> wrote:

Hi Strahil,

I have tried removing the quota for that specific directory and setting it again, but it didn't work (maybe it has to be a quota disable and enable in the volume options).
I am currently testing a solution by Hari with the quota_fsck.py script (https://medium.com/@harigowtham/glusterfs-quota-fix-accounting-840df33fcd3a) and it's detecting a lot of size mismatches in files.

Thank you,

João Baúto
---------------
Scientific Computing and Software Platform
Champalimaud Research
Champalimaud Center for the Unknown
Av. Brasília, Doca de Pedrouços
1400-038 Lisbon, Portugal
fchampalimaud.org
<span id="gmail-m_4737867913707088433gmail-m_5769927033199667239m_6562063502133821313m_4718795186510048319m_6362153345222832448m_3314024497766372196gmail-m_-866345378693325730m_-1177670637017511633m_6305863047400359188:1gm.11">Brasília</span>, <span id="gmail-m_4737867913707088433gmail-m_5769927033199667239m_6562063502133821313m_4718795186510048319m_6362153345222832448m_3314024497766372196gmail-m_-866345378693325730m_-1177670637017511633m_6305863047400359188:1gm.12">Doca</span> <span id="gmail-m_4737867913707088433gmail-m_5769927033199667239m_6562063502133821313m_4718795186510048319m_6362153345222832448m_3314024497766372196gmail-m_-866345378693325730m_-1177670637017511633m_6305863047400359188:1gm.13">de</span> <span id="gmail-m_4737867913707088433gmail-m_5769927033199667239m_6562063502133821313m_4718795186510048319m_6362153345222832448m_3314024497766372196gmail-m_-866345378693325730m_-1177670637017511633m_6305863047400359188:1gm.14">Pedrouços</span><br>1400-038 Lisbon, Portugal</font><br><a href="https://www.fchampalimaud.org/" style="color:rgb(17,85,204)" rel="noreferrer noreferrer noreferrer noreferrer noreferrer" target="_blank"><span id="gmail-m_4737867913707088433gmail-m_5769927033199667239m_6562063502133821313m_4718795186510048319m_6362153345222832448m_3314024497766372196gmail-m_-866345378693325730m_-1177670637017511633m_6305863047400359188:1gm.15">fchampalimaud</span>.org</a></font></span></div></div></div></div></div></div></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com" rel="noreferrer noreferrer noreferrer noreferrer noreferrer" target="_blank">hunter86_bg@yahoo.com</a>> escreveu no dia sexta, 14/08/2020 à(s) 10:16:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi João,<br>

Based on your output it seems that the quota size is different on the two bricks.

Have you tried removing the quota and then recreating it? Maybe that will be the easiest way to fix it.

Best Regards,
Strahil Nikolov
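
For reference, the remove-and-recreate that Strahil suggests maps onto the quota CLI roughly as below ("tank" is a stand-in volume name, and the 100TB limit is the one from this thread); note that, as reported further up the thread, this alone did not fix the accounting here:

# requote.py - sketch of the remove-and-recreate sequence via the
# gluster CLI, run from one node. "tank" is a hypothetical volume name.
import subprocess

VOLUME = "tank"
for args in (
    ["gluster", "volume", "quota", VOLUME, "remove", "/projectB"],
    ["gluster", "volume", "quota", VOLUME, "limit-usage", "/projectB", "100TB"],
):
    subprocess.run(args, check=True)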

On 14 August 2020, 4:35:14 GMT+03:00, "João Baúto" <joao.bauto@neuro.fchampalimaud.org> wrote:

>Hi all,
>
>We have a 4-node distributed cluster with 2 bricks per node running
>Gluster 7.7 + ZFS. We use directory quotas to limit the space used by our
>members on each project. Two days ago we noticed inconsistent space usage
>reported by Gluster in the quota list.
>
>A small snippet of gluster volume quota vol list:
>
> Path       Hard-limit  Soft-limit   Used       Available  Soft-limit exceeded?  Hard-limit exceeded?
>/projectA   5.0TB       80%(4.0TB)   3.1TB      1.9TB      No                    No
>/projectB   100.0TB     80%(80.0TB)  16383.4PB  740.9TB    No                    No
>/projectC   70.0TB      80%(56.0TB)  50.0TB     20.0TB     No                    No
>
>The total space available in the cluster is 360TB, the quota for projectB
>is 100TB and, as you can see, it's reporting 16383.4PB used and 740TB
>available (already decreased from 750TB).
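
That 16383.4PB figure is no coincidence: it sits just below 2^64 bytes (16384 PiB), which is what a small negative signed byte counter looks like when displayed as an unsigned total. A quick check:

# 2**64 bytes expressed in PiB: the ceiling that a wrapped-around
# (negative) quota counter prints as.
print(2**64 / 2**50)    # 16384.0
print(16383.4 / 16384)  # ~0.99996, i.e. just under the 64-bit wrap point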
>
>There was an issue in Gluster 3.x related to wrong directory quota values
>(https://lists.gluster.org/pipermail/gluster-users/2016-February/025305.html
>and
>https://lists.gluster.org/pipermail/gluster-users/2018-November/035374.html)
>but it's marked as solved (not sure if the solution still applies).
>
>*On projectB*
># getfattr -d -m . -e hex projectB
># file: projectB
>trusted.gfid=0x3ca2bce0455945efa6662813ce20fc0c
>trusted.glusterfs.9582685f-07fa-41fd-b9fc-ebab3a6989cf.xtime=0x5f35e69800098ed9
>trusted.glusterfs.dht=0xe1a4060c000000003ffffffe5ffffffc
>trusted.glusterfs.mdata=0x010000000000000000000000005f355c59000000000939079f000000005ce2aff90000000007fdacb0000000005ce2aff90000000007fdacb0
>trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x0000ab0f227a860000000000478e33acffffffffffffc112
>trusted.glusterfs.quota.dirty=0x3000
>trusted.glusterfs.quota.limit-set.1=0x0000640000000000ffffffffffffffff
>trusted.glusterfs.quota.size.1=0x0000ab0f227a860000000000478e33acffffffffffffc112
>
>*On projectA*
># getfattr -d -m . -e hex projectA
># file: projectA
>trusted.gfid=0x05b09ded19354c0eb544d22d4659582e
>trusted.glusterfs.9582685f-07fa-41fd-b9fc-ebab3a6989cf.xtime=0x5f1aeb9f00044c64
>trusted.glusterfs.dht=0xe1a4060c000000001fffffff3ffffffd
>trusted.glusterfs.mdata=0x010000000000000000000000005f1ac6a10000000018f30a4e000000005c338fab0000000017a3135a000000005b0694fb000000001584a21b
>trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x0000067de3bbe20000000000000128610000000000033498
>trusted.glusterfs.quota.dirty=0x3000
>trusted.glusterfs.quota.limit-set.1=0x0000460000000000ffffffffffffffff
>trusted.glusterfs.quota.size.1=0x0000067de3bbe20000000000000128610000000000033498
>
>Any idea on what's happening and how to fix it?
>
>Thanks!
>*João Baúto*
>---------------
>
>*Scientific Computing and Software Platform*
>Champalimaud Research
>Champalimaud Center for the Unknown
>Av. Brasília, Doca de Pedrouços
>1400-038 Lisbon, Portugal
>fchampalimaud.org <https://www.fchampalimaud.org/>
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users