[Gluster-users] gluster volume not healing - remote operation failed

Eli V eliventer at gmail.com
Wed Nov 2 11:39:14 UTC 2022


On Wed, Sep 14, 2022 at 7:08 AM <dpgluster at posteo.de> wrote:
>
> Hi folks,
>
> my gluster volume isn't fully healing. We had an outage a couple of
> days ago, and all other files were healed successfully. Now, days
> later, I can see there are still two gfids per node remaining in the
> heal list.
>
> root at storage-001~# for i in `gluster volume list`; do gluster volume
> heal $i info; done
> Brick storage-003.mydomain.com:/mnt/bricks/g-volume-myvolume
> <gfid:612ebae7-3df2-467f-aa02-47d9e3bafc1a>
> <gfid:876597cd-702a-49ec-a9ed-46d21f90f754>
> Status: Connected
> Number of entries: 2
>
> Brick storage-002.mydomain.com:/mnt/bricks/g-volume-myvolume
> <gfid:a4babc5a-bd5a-4429-b65e-758651d5727c>
> <gfid:48791313-e5e7-44df-bf99-3ebc8d4cf5d5>
> Status: Connected
> Number of entries: 2
>
> Brick storage-001.mydomain.com:/mnt/bricks/g-volume-myvolume
> <gfid:a4babc5a-bd5a-4429-b65e-758651d5727c>
> <gfid:48791313-e5e7-44df-bf99-3ebc8d4cf5d5>
> Status: Connected
> Number of entries: 2
>
> In the log I can see that the glustershd process is invoked to heal
> the remaining files, but it fails with "remote operation failed".
> [2022-09-14 10:56:50.007978 +0000] I [MSGID: 108026]
> [afr-self-heal-entry.c:1053:afr_selfheal_entry_do]
> 0-g-volume-myvolume-replicate-0: performing entry selfheal on
> 48791313-e5e7-44df-bf99-3ebc8d4cf5d5
> [2022-09-14 10:56:50.008428 +0000] I [MSGID: 108026]
> [afr-self-heal-entry.c:1053:afr_selfheal_entry_do]
> 0-g-volume-myvolume-replicate-0: performing entry selfheal on
> a4babc5a-bd5a-4429-b65e-758651d5727c
> [2022-09-14 10:56:50.015005 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-2: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:50.015007 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-3: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:50.015138 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-4: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:50.614082 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-2: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:50.614108 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-3: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:50.614099 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-4: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:51.619623 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-2: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:51.619630 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-3: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
> [2022-09-14 10:56:51.619632 +0000] E [MSGID: 114031]
> [client-rpc-fops_v2.c:214:client4_0_mkdir_cbk]
> 0-g-volume-myvolume-client-4: remote operation failed. [{path=(null)},
> {errno=22}, {error=Invalid argument}]
>
> The cluster is running Gluster at op-version 90000 on CentOS. There
> are no entries in split-brain.
>
> How can I finally get these files healed?
>
> Thanks in advance.

I've seen this too. The only way I've found to fix it is to run a find
under each of my bricks, run getfattr -n trusted.gfid -e hex on all the
files, save the output to a text file, and then grep it for the
problematic gfids to identify which files they are.
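
As a minimal sketch (assuming the brick path from the heal output
above, with one of the reported gfids plugged in), that mapping step
looks roughly like:

  #!/bin/bash
  # Sketch: map problem gfids back to file paths on one brick.
  # Run on each brick host; BRICK is taken from the heal output above.
  BRICK=/mnt/bricks/g-volume-myvolume
  OUT=/tmp/gfid-map.txt
  # Record the trusted.gfid xattr of everything under the brick,
  # skipping gluster's internal .glusterfs directory.
  find "$BRICK" -path "$BRICK/.glusterfs" -prune -o -print0 |
    xargs -0 getfattr -h -n trusted.gfid -e hex --absolute-names \
      > "$OUT" 2>/dev/null
  # getfattr prints the gfid as hex with no dashes, so strip them:
  grep -B1 "0x$(tr -d - <<< 48791313-e5e7-44df-bf99-3ebc8d4cf5d5)" "$OUT"

The -B1 pulls in the preceding "# file: ..." line, which is the full
path of the matching file on the brick.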
Accessing the files through the gluster fuse mount can sometimes heal
them, but I've had symlinks I just had to rm and recreate, and other
files that were simply failed removals, existing on one brick and no
others, that had to be removed by hand.
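
For the fuse-mount route, once you have the path relative to the brick
root, a plain lookup on the mounted volume is often enough to kick off
the heal; /mnt/myvolume below is an assumed mount point, and the path
is whatever the grep above turned up:

  stat /mnt/myvolume/path/to/affected/file
  gluster volume heal g-volume-myvolume info   # re-check the queue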
This happens often enough that I wrote a script that traverses all the
files under a brick and removes each stale file together with its gfid
link under .glusterfs. I can dig it up if you're still interested; I
don't have it handy at the moment.
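
Per stale entry it boils down to something like the following. This is
a rough, hypothetical sketch, not the original script, and it bypasses
gluster entirely, so double-check every entry before deleting anything:

  #!/bin/bash
  # Sketch: remove one stale regular file from a brick along with
  # its gfid hard link under .glusterfs.
  BRICK=/mnt/bricks/g-volume-myvolume
  FILE="$1"   # absolute path of the stale file inside the brick
  # Pull the gfid xattr as bare hex (drop the leading "0x").
  HEX=$(getfattr -h -n trusted.gfid -e hex --absolute-names "$FILE" |
        awk -F= '/^trusted.gfid=/ {print substr($2, 3)}')
  # Regular files are hard-linked at .glusterfs/<aa>/<bb>/<full-uuid>.
  UUID="${HEX:0:8}-${HEX:8:4}-${HEX:12:4}-${HEX:16:4}-${HEX:20:12}"
  rm -v "$BRICK/.glusterfs/${UUID:0:2}/${UUID:2:2}/$UUID" "$FILE"

Directories are the exception: their entry under .glusterfs is a
symlink rather than a hard link, so they need slightly different
handling.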

