[Gluster-users] GlusterFS 9.3 - Replicate Volume (2 Bricks / 1 Arbiter) - Self-healing does not always work

Thorsten Walk darkiop at gmail.com
Sat Oct 30 13:13:38 UTC 2021

Hi Ravi & Strahil, thanks a lot for your answer!

The file in the path .glusterfs/26/c5/.. exists only on node1 (=pve01). On
node2 (pve02) and the arbiter (freya), the file does not exist:

┬[14:35:48] [ssh:root@pve01( ~ (700)]
╰─># getfattr -d -m. -e hex /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768

┬[14:36:49] [ssh:root@pve02( /data/glusterfs/.glusterfs/26/c5
╰─># ll
drwx------ root root   6B 3 days ago   ./
drwx------ root root 8.0K 6 hours ago  ../

┬[14:36:58] [ssh:root@freya( /data/glusterfs/.glusterfs/26/c5
╰─># ll
drwx------ root root   6B 3 days ago   ./
drwx------ root root 8.0K 3 hours ago  ../

After this, I disabled the option you mentioned:

gluster volume set glusterfs-1-volume cluster.eager-lock off

After that I triggered another heal manually, unfortunately without
success.
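For reference, a sketch of the standard CLI commands for such a manual heal (using the volume name from the eager-lock command above; whether the plain or `full` variant was used here is an assumption):

```shell
# Trigger a heal of entries marked as pending
gluster volume heal glusterfs-1-volume

# Or crawl all files and directories, not only pending ones
gluster volume heal glusterfs-1-volume full

# Check which entries still need healing afterwards
gluster volume heal glusterfs-1-volume info
```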

@Strahil: For your idea with
https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/ I need
more time; maybe I can try it tomorrow. I'll be in touch.
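As a note for that gfid-to-path step, a minimal sketch using the gfid and brick root from the output above (the `find` call assumes a regular file, since for files the `.glusterfs` entry is a hard link to the real path):

```shell
# Brick root and gfid taken from the getfattr output above (assumptions)
BRICK=/data/glusterfs
GFID=26c5396c-86ff-408d-9cda-106acd2b0768

# GlusterFS stores each file's backing hard link under
# .glusterfs/<first two gfid chars>/<next two chars>/<full gfid>
BACKING="$BRICK/.glusterfs/$(printf '%s' "$GFID" | cut -c1-2)/$(printf '%s' "$GFID" | cut -c3-4)/$GFID"
echo "$BACKING"

# On the brick that still has the file (pve01 here), the real path
# shares an inode with the backing file, so it can be resolved with:
# find "$BRICK" -samefile "$BACKING" -not -path '*/.glusterfs/*'
```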

Thanks again and best regards,
