[Gluster-users] "du" and "df -hT" commands output mismatch

Mauro Tridici mauro.tridici at cmcc.it
Mon Jul 22 11:45:52 UTC 2019


Hello Hari,

thank you very much for the explanation.

Regards,
Mauro



> On 22 Jul 2019, at 10:28, Hari Gowtham <hgowtham at redhat.com> wrote:
> 
> As of now we don't have a way to solve it once and for all.
> There are a number of ways an accounting mismatch can happen.
> To address each of them, we need to identify how it happened (the IOs that
> went through, their order and their timing); with that information we can
> work out what change is necessary and implement it.
> This has to be done every time we come across an issue that can cause
> an accounting mismatch.
> Most of these changes might affect performance, which is a downside.
> And we don't currently have a way to collect the necessary information above.
> 
> We don't have enough bandwidth to take on this work ourselves.
> If anyone from the community is interested, they can contribute to it.
> We are here to help them out.
> 
> On Mon, Jul 22, 2019 at 1:12 PM Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>> 
>> Hi Hari,
>> 
>> I hope that the crawl will run for a couple of days at most.
>> Do you know if there is a way to solve the issue definitively?
>> 
>> GlusterFS version is 3.12.14.
>> You can find below some additional info.
>> 
>> Volume Name: tier2
>> Type: Distributed-Disperse
>> Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 12 x (4 + 2) = 72
>> Transport-type: tcp
>> 
>> Many thanks,
>> Mauro
>> 
>> On 22 Jul 2019, at 09:16, Hari Gowtham <hgowtham at redhat.com> wrote:
>> 
>> Hi,
>> Yes, the above-mentioned steps are right.
>> The way to find if the crawl is still happening is to grep for
>> quota_crawl in the processes that are still running.
>> # ps aux | grep quota_crawl
>> As long as this process is alive, the crawl is happening.
>> 
>> Note: the crawl does take a lot of time as well, and it happens twice.
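>> 
>> For example, a rough way to wait for the crawl to finish from the shell
>> would be something like this (just a sketch, not an official tool; adjust
>> the sleep interval to taste):
>> 
>> # while ps aux | grep -q '[q]uota_crawl'; do sleep 60; done; echo "crawl finished"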
>> 
>> On Fri, Jul 19, 2019 at 5:42 PM Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>> 
>> 
>> Hi Hari,
>> 
>> thank you very much for the fast answer.
>> I think that we will try to solve the issue by disabling and re-enabling quota.
>> So, if I understand correctly, I have to do the following actions (sketched as commands below):
>> 
>> - save the current quota limits in my notes;
>> - disable quota using the "gluster volume quota tier2 disable" command;
>> - wait a while for the crawl (question: how can I tell that the crawl has terminated? how long should I wait?);
>> - enable quota using "gluster volume quota tier2 enable";
>> - set the previous quota limits again.
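>> 
>> In terms of commands, I imagine the sequence would be roughly the following
>> (just my own sketch; the /CSP/sp1 path and the 10TB limit are placeholders,
>> the real limits are the ones saved in my notes):
>> 
>> # gluster volume quota tier2 list > /root/quota_limits_backup.txt
>> # gluster volume quota tier2 disable
>> # ... wait here until the disable crawl has finished ...
>> # gluster volume quota tier2 enable
>> # gluster volume quota tier2 limit-usage /CSP/sp1 10TB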
>> 
>> Is this correct?
>> 
>> Many thanks for your support,
>> Mauro
>> 
>> 
>> 
>> On 19 Jul 2019, at 12:48, Hari Gowtham <hgowtham at redhat.com> wrote:
>> 
>> Hi Mauro,
>> 
>> The fsck script is the fastest way to resolve the issue.
>> The other way would be to disable quota and, once the crawl for the disable
>> is done, enable it again and set the limits once more.
>> In this way the crawl happens twice, and hence it is slower.
>> 
>> On Fri, Jul 19, 2019 at 3:27 PM Mauro Tridici <mauro.tridici at cmcc.it> wrote:
>> 
>> 
>> Dear All,
>> 
>> I’m once again experiencing a problem with the Gluster file system quota.
>> The output of the “df -hT /tier2/CSP/sp1” command differs from that of the “du -ms” command executed against the same folder.
>> 
>> [root at s01 manual]# df -hT /tier2/CSP/sp1
>> Filesystem     Type            Size  Used Avail Use% Mounted on
>> s01-stg:tier2  fuse.glusterfs   25T   22T  3.5T  87% /tier2
>> 
>> [root at s01 sp1]# du -ms /tier2/CSP/sp1
>> 14TB /tier2/CSP/sp1
>> 
>> In the past, I successfully used the quota_fsck_new-6.py script to detect the SIZE_MISMATCH occurrences and fix them.
>> Unfortunately, the number of sub-directories and files saved in /tier2/CSP/sp1 has grown so much that the list of SIZE_MISMATCH entries is now very long.
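>> 
>> For context, the quota accounting lives in extended attributes on the
>> bricks (e.g. trusted.glusterfs.quota.size); a command like the following
>> (the brick path below is just an example, not my real layout) dumps them
>> for a given directory:
>> 
>> # getfattr -d -m . -e hex /gluster/brick1/tier2/CSP/sp1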
>> 
>> Is there a faster way to correct the mismatching outputs?
>> Could you please help me to solve, if it is possible, this issue?
>> 
>> Thank you in advance,
>> Mauro
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>> 
>> 
>> 
>> 
>> --
>> Regards,
>> Hari Gowtham.
>> 
>> 
>> 
>> 
>> 
>> --
>> Regards,
>> Hari Gowtham.
>> 
>> 
>> 
> 
> 
> -- 
> Regards,
> Hari Gowtham.



