You can mount the volume via:

# mount -t glusterfs -o aux-gfid-mount vm1:test /mnt/testvol

And then obtain the path:

# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/<GFID>
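For example, for the first GFID in the heal info output below, the whole sequence would look roughly like this (a sketch only: the server/volume in the mount spec and the mount point are assumed from your setup, not tested here):

# mount -t glusterfs -o aux-gfid-mount 192.168.1.51:glusterfs-1-volume /mnt/testvol
# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/testvol/.gfid/ade6f31c-b80b-457e-a054-6ca1548d9cd3

The pathinfo xattr prints the backend brick location(s) of the file, so you can read the original file name from the returned path.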
Source: https://docs.gluster.org/en/latest/Troubleshooting/gfid-to-path/

Best Regards,
Strahil Nikolov


On Fri, Nov 5, 2021 at 19:29, Thorsten Walk <darkiop@gmail.com> wrote:

Hi Guys,

I pushed some VMs to the GlusterFS storage this week and ran them there. For a maintenance task, I moved these VMs to Proxmox-Node-2 and took Node-1 offline for a short time. After moving them back to Node-1, some orphaned file entries were left behind (see attachment). In the logs I can't find anything about the GFIDs :)

┬[15:36:51] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># gvi

Cluster:
         Status: Healthy                 GlusterFS: 9.3
         Nodes: 3/3                      Volumes: 1/1

Volumes:

glusterfs-1-volume
                Replicate          Started (UP) - 3/3 Bricks Up  - (Arbiter Volume)
                                   Capacity: (17.89% used) 83.00 GiB/466.00 GiB (used/total)
                                   Self-Heal:
                                      192.168.1.51:/data/glusterfs (4 File(s) to heal).
                                   Bricks:
                                      Distribute Group 1:
                                         192.168.1.50:/data/glusterfs   (Online)
                                         192.168.1.51:/data/glusterfs   (Online)
                                         192.168.1.40:/data/glusterfs   (Online)

Brick 192.168.1.50:/data/glusterfs
Status: Connected
Number of entries: 0

Brick 192.168.1.51:/data/glusterfs
<gfid:ade6f31c-b80b-457e-a054-6ca1548d9cd3>
<gfid:39365c96-296b-4270-9cdb-1b751e40ad86>
<gfid:54774d44-26a7-4954-a657-6e4fa79f2b97>
<gfid:d5a8ae04-7301-4876-8d32-37fcd6093977>
Status: Connected
Number of entries: 4

Brick 192.168.1.40:/data/glusterfs
Status: Connected
Number of entries: 0


┬[15:37:03] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># cat /data/glusterfs/.glusterfs/ad/e6/ade6f31c-b80b-457e-a054-6ca1548d9cd3
22962


┬[15:37:13] [ssh:root@pve02(192.168.1.51): /home/darkiop (755)]
╰─># grep -ir 'ade6f31c-b80b-457e-a054-6ca1548d9cd3' /var/log/glusterfs/*.log
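To check all four reported GFIDs against the mount, brick and self-heal daemon logs in one pass, a loop along these lines should work (a sketch; it assumes the default /var/log/glusterfs log directory used above):

# for g in ade6f31c-b80b-457e-a054-6ca1548d9cd3 \
           39365c96-296b-4270-9cdb-1b751e40ad86 \
           54774d44-26a7-4954-a657-6e4fa79f2b97 \
           d5a8ae04-7301-4876-8d32-37fcd6093977; do
      grep -lri "$g" /var/log/glusterfs/    # -l prints only the names of log files that mention the GFID
  done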
On Mon, Nov 1, 2021 at 07:51, Thorsten Walk <darkiop@gmail.com> wrote:

After deleting the file, the heal info output is clean.

> Not sure why you ended up in this situation (maybe unlink partially failed on this brick?)

Neither am I; this was a completely fresh setup with 1-2 VMs and 1-2 Proxmox LXC templates. I let it run for a few days and at some point it ended up in the state I described. I'll continue to monitor it and start filling the bricks with data.
Thanks for your help!


On Mon, Nov 1, 2021 at 02:54, Ravishankar N <ravishankar.n@pavilion.io> wrote:

On Mon, Nov 1, 2021 at 12:02 AM, Thorsten Walk <darkiop@gmail.com> wrote:

Hi Ravi, the file only exists on pve01, and only once:

┬[19:22:10] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># stat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
  File: /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
  Size: 6               Blocks: 8          IO Block: 4096   regular file
Device: fd12h/64786d    Inode: 528         Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-10-30 14:34:50.385893588 +0200
Modify: 2021-10-27 00:26:43.988756557 +0200
Change: 2021-10-27 00:26:43.988756557 +0200
 Birth: -

┬[19:24:41] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># ls -l /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
.rw-r--r-- root root 6B 4 days ago  /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768

┬[19:24:54] [ssh:root@pve01(192.168.1.50): ~ (700)]
╰─># cat /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768
28084

Hi Thorsten, you can delete the file. From the file size and contents, it looks like it belongs to ovirt sanlock. Not sure why you ended up in this situation (maybe unlink partially failed on this brick?). You can check the mount, brick and self-heal daemon logs for this gfid to see if you find related error/warning messages.

-Ravi
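Before deleting such an entry, it may be worth confirming that it really is orphaned, i.e. that the GFID file under .glusterfs has no remaining hard link to a named file on the brick (a sketch using the brick path from above; both commands are read-only):

# stat -c '%h' /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768    # a link count of 1 means no named file references this GFID any more
# find /data/glusterfs -samefile /data/glusterfs/.glusterfs/26/c5/26c5396c-86ff-408d-9cda-106acd2b0768    # should list only the .glusterfs entry itself

If both checks agree, removing the .glusterfs entry on that brick (as suggested above) and re-running heal info should leave the output clean.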