[Gluster-users] File\Directory not healing

Strahil Nikolov hunter86_bg at yahoo.com
Thu Feb 23 11:01:08 UTC 2023

Move away the file located on the arbiter brick, as it has a different gfid, and then touch it (only if the software that consumes it is NOT sensitive to atime modification).
Best Regards,
Strahil Nikolov
On Wed, Feb 22, 2023 at 13:09, David Dolan <daithidolan at gmail.com> wrote:

Hi Strahil,
The output in my previous email showed the directory the file is located in with a different GFID on the Arbiter node compared with the bricks on the other nodes.
Based on that, do you know what my next step should be?

On Wed, 15 Feb 2023 at 09:21, David Dolan <daithidolan at gmail.com> wrote:

Sorry, I didn't receive the previous email. I've run the command on all 3 nodes (bricks). See below. The directory only has one file. On the arbiter, the file doesn't exist, and the directory the file should be in has a different GFID than the bricks on the other nodes.
Node 1 Brick
getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2/file
trusted.gfid=0x7b1aa40dd1e64b7b8aac7fc6bcbc9e9b
getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2
getfattr -d -m . -e hex /path_on_brick/subdir1

Node 2 Brick
getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2/file
getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2
trusted.gfid=0xdc99ac0db85d4b1c8a6af57a71bbe22c
getfattr -d -m . -e hex /path_on_brick/subdir1

Node 3 Brick (Arbiter)
Path to file doesn't exist
getfattr -d -m . -e hex /path_on_brick/subdir1/subdir2
getfattr -d -m . -e hex /path_on_brick/subdir1
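Once you have the trusted.gfid values from each brick, spotting the odd one out is mechanical. A small illustrative helper (not a Gluster tool; the node names and hex values below are hypothetical stand-ins for real getfattr output):

```python
from collections import Counter

def gfid_mismatches(gfids_by_node):
    """Return the nodes whose gfid differs from the majority value."""
    counts = Counter(gfids_by_node.values())
    majority, _ = counts.most_common(1)[0]
    return {node for node, g in gfids_by_node.items() if g != majority}

# Hypothetical values for one directory as seen on each brick:
gfids = {
    "node1":   "0x00000000000000000000000000000001",
    "node2":   "0x00000000000000000000000000000001",
    "arbiter": "0x00000000000000000000000000000002",  # differs -> candidate to move aside
}
print(gfid_mismatches(gfids))  # -> {'arbiter'}
```

The majority vote mirrors how a replica-3/arbiter volume decides which copies are good; the minority entry is the one to move off its brick.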

On Tue, 14 Feb 2023 at 20:38, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

 I guess you didn't receive my last e-mail.
Use getfattr and check whether the gfids mismatch. If they do, move the mismatched one away.
For a directory to heal, you have to fix all the files inside it first.

Best Regards,
Strahil Nikolov

On Tuesday, 14 February 2023 at 14:04:31 GMT+2, David Dolan <daithidolan at gmail.com> wrote:
I've touched the directory one level above the directory with the I/O issue, as the one above is the one showing as dirty. It hasn't healed. Should the self-heal daemon automatically kick in here?
Is there anything else I can do?
On Tue, 14 Feb 2023 at 07:03, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

You can always mount it locally on any of the gluster nodes.
Best Regards,
Strahil Nikolov
On Mon, Feb 13, 2023 at 18:13, David Dolan <daithidolan at gmail.com> wrote:

Hi Strahil,
Thanks for that. It's the first time I've been in this position, so I'm learning as I go along.
Unfortunately I can't go into the directory on the client side, as I get an input/output error:
d????????? ? ?      ?        ?            ? 01


On Sun, 12 Feb 2023 at 20:29, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:

Setting blame on client-1 and client-2 will make a bigger mess. Can't you touch the affected file from the FUSE mount point?
Best Regards,
Strahil Nikolov
On Tue, Feb 7, 2023 at 14:42, David Dolan <daithidolan at gmail.com> wrote:

Hi All,
Hoping you can help me with a healing problem. I have one file which didn't self-heal.
It looks to be a problem with a directory in the path, as one node says it's dirty. I have a replica volume with arbiter.
This is what the 3 nodes say (one brick on each):
Node1
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.volume-client-2=0x000000000000000000000001
trusted.afr.dirty=0x000000000000000000000000

Node2
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.volume-client-2=0x000000000000000000000001
trusted.afr.dirty=0x000000000000000000000000

Node3 (Arbiter)
getfattr -d -m . -e hex /path/to/dir | grep afr
getfattr: Removing leading '/' from absolute path names
trusted.afr.dirty=0x000000000000000000000001

Since Node3 (the arbiter) sees it as dirty, and it looks like Node 1 and Node 2 have good copies, I was thinking of running the following on Node1, which I believe would tell Node 2 and Node 3 to sync from Node 1.
I'd then kick off a heal on the volume
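For anyone reading these hex values: a trusted.afr.* attribute packs three big-endian 32-bit pending counters, in the order data, metadata, entry. A small illustrative decoder (treat the layout as an assumption drawn from the AFR changelog format, not an official Gluster utility):

```python
def decode_afr(hexval):
    """Split a trusted.afr.* hex value (as printed by getfattr -e hex)
    into its three big-endian 32-bit pending counters."""
    raw = bytes.fromhex(hexval.removeprefix("0x"))
    return {
        "data":     int.from_bytes(raw[0:4],  "big"),
        "metadata": int.from_bytes(raw[4:8],  "big"),
        "entry":    int.from_bytes(raw[8:12], "big"),
    }

# trusted.afr.volume-client-2 as reported by Node1/Node2 above:
print(decode_afr("0x000000000000000000000001"))
# -> {'data': 0, 'metadata': 0, 'entry': 1}
```

So the value on Node1 and Node2 records one pending entry operation blamed on client-2 (the arbiter), which fits a directory whose contents failed to heal there.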
setfattr -n trusted.afr.volume-client-1 -v 0x000000010000000000000000 /path/to/dir
setfattr -n trusted.afr.volume-client-2 -v 0x000000010000000000000000 /path/to/dir

client-0 is node 1, client-1 is node 2, and client-2 is node 3. I've verified the hard links with gfid are in the xattrop directory.
Is this the correct way to heal and resolve the issue? 
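The 24-hex-digit value handed to setfattr is just the same three counters packed the other way. A sketch encoder, under the same layout assumption as above (data, metadata, entry as big-endian 32-bit fields; illustrative only):

```python
def encode_afr(data, metadata, entry):
    """Build the hex string used with `setfattr -n trusted.afr.<client> -v ...`
    from the three pending counters (data, metadata, entry)."""
    raw = (data.to_bytes(4, "big")
           + metadata.to_bytes(4, "big")
           + entry.to_bytes(4, "big"))
    return "0x" + raw.hex()

# The value proposed above marks one pending *data* operation:
print(encode_afr(1, 0, 0))  # -> 0x000000010000000000000000
```

Note that for a directory it is the entry counter, not the data counter, that tracks missing or mismatched contents, which may be worth double-checking before setting blame by hand.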

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users at gluster.org



