[Gluster-users] Completely filling up a Disperse volume results in unreadable/unhealable files that must be deleted.
ranaraya at redhat.com
Wed May 12 07:29:11 UTC 2021
On Wed, May 12, 2021 at 2:14 AM Jeff Byers <jbyers.sfly at gmail.com> wrote:
> Does anyone have any ideas how to prevent, or perhaps
> fix the issue described here:
> Completely filling up a Disperse volume results in
> unreadable/unhealable files that must be deleted.
> Cleaning up from this was so terrible when it happened the
> first time, that the thought of it happening again is causing
> me to lose sleep. :-(
> This was a while ago, but from what I recall from my lab
> testing, reserving space with the GlusterFS option, and using
> the GlusterFS quota feature only helped some, and didn't
> prevent the problem from happening.
You could perhaps try to reserve some space on the bricks with the
storage.reserve volume option so that you are alerted earlier. As far as I
understand, in disperse
volumes, for a file to be healed successfully, all the xattrs and stat
information (file size, permissions, uid, gid etc) must be identical on
majority of the bricks. If that isn't the case, heal logic cannot proceed
further. For file.7 in the github issue, I see that
trusted.glusterfs.mdata and the file size (from the `ls -l` output) are
different on all 3 bricks, so heals won't happen even if there is free
space on the bricks.
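The "identical on a majority of bricks" precondition described above can be sketched roughly as follows. This is a hypothetical simplification for illustration only, not Gluster's actual EC heal code; the metadata dicts and field names are made up:

```python
# Sketch of the disperse-heal precondition: heal can pick a "good"
# version only if the same metadata (size, uid, gid, xattrs such as
# trusted.glusterfs.mdata, ...) appears on a strict majority of bricks.
# Hypothetical illustration, not Gluster's real implementation.
from collections import Counter

def healable_version(brick_metadata):
    """brick_metadata: list of per-brick metadata dicts.
    Returns the majority metadata if one exists, else None."""
    total = len(brick_metadata)
    # Make each dict hashable so identical versions can be counted.
    counts = Counter(tuple(sorted(md.items())) for md in brick_metadata)
    version, seen = counts.most_common(1)[0]
    if seen > total // 2:   # strict majority of bricks agree
        return dict(version)
    return None             # no quorum -> heal cannot proceed

# The file.7 case: 3 bricks, all with different sizes and mdata,
# so no majority exists and the file cannot be healed.
bricks = [
    {"size": 100, "mdata": "a"},
    {"size": 120, "mdata": "b"},
    {"size": 90,  "mdata": "c"},
]
print(healable_version(bricks))  # None: no majority, unhealable
```

With two of three bricks agreeing, the same function returns the agreed metadata, which is the case where heal can proceed.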
CC'ing Xavi to correct me if I am wrong. I'm also not sure whether it is
possible to partially recover the data from the append writes that
succeeded before the ENOSPC was hit.
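For reference, reserving headroom and inspecting the per-brick metadata discussed above would look roughly like this. This is a sketch: "myvol" and the /bricks/brick{1,2,3} paths are placeholders for your own volume name and brick paths:

```shell
# Reserve 5% of each brick so writes fail with ENOSPC before the
# bricks are truly full (storage.reserve takes a percentage).
gluster volume set myvol storage.reserve 5

# Compare the metadata that heal needs to agree on across bricks:
# the xattrs (including trusted.glusterfs.mdata) and the stat info.
getfattr -d -m . -e hex /bricks/brick{1,2,3}/file.7
ls -l /bricks/brick{1,2,3}/file.7
```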
> ~ Jeff Byers ~
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
> Gluster-users mailing list
> Gluster-users at gluster.org