Hi Hari, Hi Sanoj,

thank you very much for your patience and your support!
The problem has been solved following your instructions :-)

N.B.: in order to reduce the running time, I executed the "du" command as follows:

for i in {1..12}
do
  du /gluster/mnt$i/brick/CSP/ans004/ftp
done

and not on each brick at the "/gluster/mnt$i/brick" tree level.

I hope it was a correct idea :-)

Thank you again for helping me to solve this issue.
Have a good day.
Mauro


On 11 Jul 2018, at 09:16, Hari Gowtham <hgowtham@redhat.com> wrote:

Hi,

There was an accounting issue in your setup.
The directories ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist and ans004/ftp/CMCC-CM2-VHR4 had wrong size values on them.

To fix it, you will have to set the dirty xattr (an internal gluster xattr) on these directories, which marks them so that their values are calculated again, and then do a du on the mount after setting the xattrs. This will do a stat that will recalculate and update the right values.

To set the dirty xattr:

setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 <path to the directory>

This has to be done for both directories, one after the other, on each brick.
Once done for all the bricks, issue the du command.

Thanks to Sanoj for the guidance.
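[For reference, a minimal sketch of the fix described above, to be run on each gluster server node. The per-brick directory paths under CSP/ and the client mount at /tier2 are assumptions taken from this thread, not verified here.]

#!/bin/bash
# Mark the two affected directories dirty on every brick of this node,
# so that quota recalculates their sizes.
for i in {1..12}
do
  for d in "CSP/ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist" "CSP/ans004/ftp/CMCC-CM2-VHR4"
  do
    setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 "/gluster/mnt$i/brick/$d"
  done
done

# Once every brick on every node has been marked, trigger the stat from a client:
# du /tier2/CSP/ans004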
On Tue, Jul 10, 2018 at 6:37 PM Mauro Tridici <mauro.tridici@cmcc.it> wrote:

Hi Hari,

sorry for the late reply.
Yes, the gluster volume is a single volume that is spread across all 3 nodes and has 36 bricks.

In the attachment you can find a tar.gz file containing:

- the "gluster volume status" command output;
- the "gluster volume info" command output;
- the output of the following script execution (it generated 3 files per server: s01.log, s02.log, s03.log).

This is the "check.sh" script that has been executed on each server (the servers are s01, s02 and s03):

#!/bin/bash

#set -xv

host=$(hostname)

for i in {1..12}
do
  ./quota_fsck_new-6.py --full-logs --sub-dir CSP/ans004 /gluster/mnt$i/brick >> $host.log
done

Many thanks,
Mauro


On 10 Jul 2018, at 12:12, Hari Gowtham <hgowtham@redhat.com> wrote:

Hi Mauro,

Can you send the "gluster v status" command output?

Is it a single volume that is spread across all 3 nodes and has 36 bricks?
If yes, you will have to run it on all the bricks.

In the command, use the sub-dir option if you are running it only for the directory where the limit is set; if you are running it on the brick mount path you can remove it.

The full logs will consume a lot of space, as the script records the xattrs for each entry inside the path we are running it on. This data is needed to cross-check and verify quota's marker functionality.

To reduce resource consumption you can run it on one replica set alone (if it is a replicate volume), but it is better to run it on all the bricks if possible and if the space consumed is fine with you.

Make sure you run it with the script link provided above by Sanoj (patch set 6).
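[For clarity, the two invocation forms distinguished above, using the script name and brick path already shown in this thread; this is only a sketch and has not been checked against the script's full option list.]

# Only for the directory where the quota limit is set (path relative to the brick root):
./quota_fsck_new-6.py --full-logs --sub-dir CSP/ans004 /gluster/mnt1/brick

# For the whole brick, drop --sub-dir:
./quota_fsck_new-6.py --full-logs /gluster/mnt1/brick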
On Tue, Jul 10, 2018 at 2:54 PM Mauro Tridici <mauro.tridici@cmcc.it> wrote:

Hi Hari,

thank you very much for your answer.
I will try to use the script mentioned above, pointing it to each backend brick.

So, if I understand correctly, since I have a gluster cluster composed of 3 nodes (with 12 bricks on each node), I have to execute the script 36 times. Right?

You can find below the "df" command output executed on a cluster node:

/dev/mapper/cl_s01-gluster            100G   33M  100G   1% /gluster
/dev/mapper/gluster_vgd-gluster_lvd   9,0T  5,6T  3,5T  62% /gluster/mnt2
/dev/mapper/gluster_vge-gluster_lve   9,0T  5,7T  3,4T  63% /gluster/mnt3
/dev/mapper/gluster_vgj-gluster_lvj   9,0T  5,7T  3,4T  63% /gluster/mnt8
/dev/mapper/gluster_vgc-gluster_lvc   9,0T  5,6T  3,5T  62% /gluster/mnt1
/dev/mapper/gluster_vgl-gluster_lvl   9,0T  5,8T  3,3T  65% /gluster/mnt10
/dev/mapper/gluster_vgh-gluster_lvh   9,0T  5,7T  3,4T  64% /gluster/mnt6
/dev/mapper/gluster_vgf-gluster_lvf   9,0T  5,7T  3,4T  63% /gluster/mnt4
/dev/mapper/gluster_vgm-gluster_lvm   9,0T  5,4T  3,7T  60% /gluster/mnt11
/dev/mapper/gluster_vgn-gluster_lvn   9,0T  5,4T  3,7T  60% /gluster/mnt12
/dev/mapper/gluster_vgg-gluster_lvg   9,0T  5,7T  3,4T  64% /gluster/mnt5
/dev/mapper/gluster_vgi-gluster_lvi   9,0T  5,7T  3,4T  63% /gluster/mnt7
/dev/mapper/gluster_vgk-gluster_lvk   9,0T  5,8T  3,3T  65% /gluster/mnt9

I will execute the following command and I will put the output here:

./quota_fsck_new.py --full-logs --sub-dir /gluster/mnt{1..12}

Thank you again for your support.
Regards,
Mauro


On 10 Jul 2018, at 11:02, Hari Gowtham <hgowtham@redhat.com> wrote:

Hi,

There is no explicit command to back up all the quota limits, as per my understanding; I need to look further into this. But you can do the following to back them up and set them again:

"gluster volume quota <volname> list" will print all the quota limits on that particular volume. You will have to make a note of the directories with their respective limits. Once noted down, you can disable quota on the volume and then enable it again. Once enabled, you will have to set each limit explicitly on the volume.

Before doing this, we suggest you try running the script mentioned above with the backend brick path instead of the mount path. You need to run it on the machines where the backend bricks are located, not on the mount.
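[A minimal sketch of that backup-and-restore cycle, assuming the volume name tier2 used in this thread; the backup file path is a placeholder, and the saved list is only a human-readable record, so each limit still has to be re-applied by hand afterwards.]

# Keep a record of the current limits.
gluster volume quota tier2 list > /root/quota_limits_backup.txt

# Reset quota accounting (the CLI may ask for confirmation here).
gluster volume quota tier2 disable
gluster volume quota tier2 enable

# Re-apply each limit noted in the backup, e.g. the one shown later in this thread:
gluster volume quota tier2 limit-usage /CSP/ans004 1TB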
On Mon, Jul 9, 2018 at 9:01 PM Mauro Tridici <mauro.tridici@cmcc.it> wrote:

Hi Sanoj,

could you provide me with the command that I need in order to back up all the quota limits?
If there is no solution for this kind of problem, I would like to try to follow your "backup" suggestion.

Do you think that I should contact the gluster developers too?

Thank you very much.
Regards,
Mauro


On 5 Jul 2018, at 09:56, Mauro Tridici <mauro.tridici@cmcc.it> wrote:

Hi Sanoj,

unfortunately the output of the command execution was not helpful.

[root@s01 ~]# find /tier2/CSP/ans004 | xargs getfattr -d -m. -e hex
[root@s01 ~]#

Do you have some other idea that could help detect the cause of the issue?

Thank you again,
Mauro


On 5 Jul 2018, at 09:08, Sanoj Unnikrishnan <sunnikri@redhat.com> wrote:

Hi Mauro,

A script issue meant that not all the necessary xattrs were captured.
Could you provide the xattrs with:

find /tier2/CSP/ans004 | xargs getfattr -d -m. -e hex

Meanwhile, if you are being impacted, you could do the following:

- back up the quota limits
- disable quota
- enable quota
- freshly set the limits

Please capture the xattr values first, so that we can get to know what went wrong.

Regards,
Sanoj
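[As Mauro's reply above shows, this command returned nothing when run against the client mount, which is consistent with the advice elsewhere in the thread to point at the backend brick paths, where the internal quota xattrs are kept. A minimal sketch of capturing them per brick, assuming the same /gluster/mnt{1..12}/brick layout; the log file name is a placeholder.]

# Run on each gluster server node; dumps the xattrs of the affected subtree
# on every brick of this node into one log per node.
host=$(hostname)
for i in {1..12}
do
  find /gluster/mnt$i/brick/CSP/ans004 | xargs getfattr -d -m. -e hex >> xattr_$host.log 2>&1
done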
On Tue, Jul 3, 2018 at 4:09 PM, Mauro Tridici <mauro.tridici@cmcc.it> wrote:

Dear Sanoj,

thank you very much for your support.
I just downloaded and executed the script you suggested.

This is the full command I executed:

./quota_fsck_new.py --full-logs --sub-dir /tier2/CSP/ans004/ /gluster

In the attachment, you can find the logs generated by the script.
What can I do now?

Thank you very much for your patience.
Mauro


On 3 Jul 2018, at 11:34, Sanoj Unnikrishnan <sunnikri@redhat.com> wrote:

Hi Mauro,

This may be an issue with the update of the backend xattrs.
To RCA further and provide a resolution, could you provide me with the logs generated by running the following fsck script:

https://review.gluster.org/#/c/19179/6/extras/quota/quota_fsck.py

Try running the script and reply with the logs generated.

Thanks,
Sanoj


On Mon, Jul 2, 2018 at 2:21 PM, Mauro Tridici <mauro.tridici@cmcc.it> wrote:

Dear Users,

I just noticed that, after some data deletions executed inside the "/tier2/CSP/ans004" folder, the amount of used disk space reported by the quota command doesn't reflect the value indicated by the du command.
Searching the web, it seems that this was a bug in previous versions of GlusterFS and that it was already fixed.
In my case, the problem unfortunately still seems to be here.

How can I solve this issue? Is it possible to do it without starting a downtime period?

Thank you very much in advance,
Mauro

[root@s01 ~]# glusterfs -V
glusterfs 3.10.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

[root@s01 ~]# gluster volume quota tier2 list /CSP/ans004
                  Path                   Hard-limit   Soft-limit      Used   Available  Soft-limit exceeded?  Hard-limit exceeded?
-------------------------------------------------------------------------------------------------------------------------------
/CSP/ans004                                  1.0TB  99%(1013.8GB)    3.9TB      0Bytes           Yes                   Yes

[root@s01 ~]# du -hs /tier2/CSP/ans004/
295G    /tier2/CSP/ans004/


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


--
Regards,
Hari Gowtham.
<span class="Apple-style-span" style="border-collapse: separate; font-variant-ligatures: normal; font-variant-position: normal; font-variant-numeric: normal; font-variant-alternates: normal; font-variant-east-asian: normal; line-height: normal; border-spacing: 0px;"><span class="Apple-style-span" style="border-collapse: separate; color: rgb(0, 0, 0); font-family: Helvetica; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: 2; text-align: -webkit-auto; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-text-decorations-in-effect: none; -webkit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; "><div class=""><br class="Apple-interchange-newline">-------------------------</div><div class="">Mauro Tridici</div><div class=""><br class=""></div><div class="">Fondazione CMCC</div><div class="">CMCC Supercomputing Center</div><div class="">presso Complesso Ecotekne - Università del Salento -</div><div class="">Strada Prov.le Lecce - Monteroni sn</div><div class="">73100 Lecce IT</div><div class=""><a href="http://www.cmcc.it" class="">http://www.cmcc.it</a></div><div class=""><br class=""></div><div class="">mobile: (+39) 327 5630841</div><div class="">email: <a href="mailto:mauro.tridici@cmcc.it" class="">mauro.tridici@cmcc.it</a></div></span></span>