[Gluster-users] No active sinks for performing self-heal on file

Pranith Kumar Karampuri pkarampu at redhat.com
Mon Aug 5 10:06:47 UTC 2013


hey,
    A correction to step-2: the value should be all zeros.
    Step-2: On one of the bricks, execute: setfattr -n trusted.afr.488_1152-client-0 -v 0x000000000000000000000000 <file-path>
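For context on why the value matters: as I understand the AFR changelog format, the 24-hex-digit value packs three big-endian 32-bit counters of pending data, metadata, and entry operations. A minimal sketch decoding the values seen in this thread (illustration only, not part of any Gluster tooling):

```python
# Decode a trusted.afr changelog xattr value into its three
# big-endian 32-bit counters: (data, metadata, entry) pending ops.
import struct

def decode_afr(hex_value):
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    return struct.unpack(">III", raw)  # (data, metadata, entry)

# Value reported on both bricks below: one pending data operation,
# each brick blaming the other, so there is no clean source (sink) to heal from.
print(decode_afr("0x000000010000000000000000"))  # (1, 0, 0)

# The all-zeros value from the corrected step-2: nothing pending.
print(decode_afr("0x000000000000000000000000"))  # (0, 0, 0)
```

Setting the value to all zeros simply tells AFR that neither brick has pending operations against the other, which is safe here only because the md5sums already match.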

Pranith.

----- Original Message -----
> From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> To: "Nux!" <nux at li.nux.ro>
> Cc: "Gluster Users" <gluster-users at gluster.org>
> Sent: Monday, August 5, 2013 3:32:21 PM
> Subject: Re: [Gluster-users] No active sinks for performing self-heal on file
> 
> Follow these steps:
> step-1: Take a backup of the file.
> step-2: On one of the bricks execute, setfattr -n
> trusted.afr.488_1152-client-0 -v 0x000000010000000000000000 <file-path>
> After the steps above are done for both files:
> step-3: gluster volume heal <volname> - This will trigger the heal and things
> should be good now.
> step-4: Check that the md5sums of the files match the backups we took in
> step-1. (This step is not really required but I am paranoid)
> step-5: Give pranith logs :-)
> 
> Pranith.
> ----- Original Message -----
> > From: "Nux!" <nux at li.nux.ro>
> > To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> > Cc: "Gluster Users" <gluster-users at gluster.org>
> > Sent: Monday, August 5, 2013 3:19:41 PM
> > Subject: Re: [Gluster-users] No active sinks for performing self-heal on
> > file
> > 
> > On 05.08.2013 10:38, Pranith Kumar Karampuri wrote:
> > > With only one server reboot it should not lead to this issue. Would it
> > > be possible to send the logs so that I can root-cause the issue? What
> > > is the version of glusterfs?
> > > Are you sure the file is not in active use at the time of the getfattr
> > > command execution?
> > > 
> > > Check the md5sums of the files on both the bricks when no operations are
> > > in progress on the file. If they are the same, we can erase these changelog
> > > xattrs and things should be back to normal.
> > > If the md5sums are different, then one needs to figure out which file is
> > > more recent and edit the xattrs so that it is healed in the right
> > > direction. I can help you with this on IRC; my nick on IRC is
> > > pranithk.
> > > I will be online in 30 minutes.
> > > 
> > > But it is still not clear how the files ended up in this situation.
> > > How many files are in this state? Logs would be helpful.
> > > 
> > 
> > Hello Pranith,
> > 
> > Only 2 files appear in the logs over and over again:
> > 
> > gfid:4ab40ffe-29d8-4e90-9f03-e7a61e92ce4c
> > gfid:c525c671-9855-4d3b-b1ab-dbc9fc7022cf
> > 
> > getfattr -d -m . -e hex
> > /bricks/488_1152/.glusterfs/c5/25/c525c671-9855-4d3b-b1ab-dbc9fc7022cf
> > getfattr: Removing leading '/' from absolute path names
> > # file:
> > bricks/488_1152/.glusterfs/c5/25/c525c671-9855-4d3b-b1ab-dbc9fc7022cf
> > trusted.afr.488_1152-client-0=0x000000010000000000000000
> > trusted.afr.488_1152-client-1=0x000000010000000000000000
> > trusted.gfid=0xc525c67198554d3bb1abdbc9fc7022cf
> > trusted.glusterfs.quota.2c056f93-fe6d-4927-8423-57d6ae80b9fb.contri=0x0000000000109000
> > 
> > getfattr -d -m . -e hex
> > /bricks/488_1152/.glusterfs/4a/b4/4ab40ffe-29d8-4e90-9f03-e7a61e92ce4c
> > getfattr: Removing leading '/' from absolute path names
> > # file:
> > bricks/488_1152/.glusterfs/4a/b4/4ab40ffe-29d8-4e90-9f03-e7a61e92ce4c
> > trusted.afr.488_1152-client-0=0x000000010000000000000000
> > trusted.afr.488_1152-client-1=0x000000010000000000000000
> > trusted.gfid=0x4ab40ffe29d84e909f03e7a61e92ce4c
> > trusted.glusterfs.quota.2c056f93-fe6d-4927-8423-57d6ae80b9fb.contri=0x00000000000f3600
> > 
> > 
> > I have checked both their md5sums and they are identical on all bricks,
> > though I can't be 100% sure the files are not in use by the customer.
> > How do I go about erasing those changelog xattrs?
> > 
> > I can provide full logs, but this has been happening for a while and I
> > imagine the logs are quite huge by now; let me know if you still need
> > them.
> > 
> > --
> > Sent from the Delta quadrant using Borg technology!
> > 
> > Nux!
> > www.nux.ro
> > 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
