[Gluster-users] Questions related to heal-failed

shwetha spandura at redhat.com
Thu Nov 28 04:30:34 UTC 2013


1) Usage of the "gluster volume heal" command (a combined example follows 
after this list):

To see the list of files that require self-heal: "gluster volume heal 
<volume_name> info"

To see files that were self-healed: "gluster volume heal <volume_name> 
info healed"

To see files that failed to self-heal: "gluster volume heal 
<volume_name> info heal-failed"

To see whether files are in a split-brain state: "gluster volume heal 
<volume_name> info split-brain"
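
For example, with a volume named "myvolume" (just a placeholder for your 
own volume name), the four variants can be run one after another:

  # files that still need self-heal
  gluster volume heal myvolume info
  # files that were successfully self-healed
  gluster volume heal myvolume info healed
  # files for which self-heal failed
  gluster volume heal myvolume info heal-failed
  # files in split-brain
  gluster volume heal myvolume info split-brain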

2 and 3) Even after executing "gluster volume heal <volume_name>" or 
"gluster volume heal <volume_name> full", both of which trigger 
self-heal, "gluster volume heal <volume_name> info heal-failed" can 
still show stale entries left over from a previous run. To clear them 
(a small script sketch for these steps follows below):

a) Restart glusterd on all storage nodes: "service glusterd restart"

b) Trigger self-heal: "gluster volume heal <volume_name>" or "gluster 
volume heal <volume_name> full"

c) Execute "gluster volume heal <volume_name> info heal-failed" to 
check whether self-heal has failed on certain files.
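
If you have several storage nodes, steps a) to c) can be put into a 
rough shell sketch like the one below. The hostnames node1/node2 and 
the volume name myvolume are placeholders; replace them with your own 
nodes and volume:

  #!/bin/bash
  # a) restart glusterd on every storage node (placeholder hostnames)
  for node in node1 node2; do
      ssh root@"$node" "service glusterd restart"
  done

  # b) trigger self-heal; use "full" to crawl the entire volume
  gluster volume heal myvolume full

  # c) check whether self-heal still fails on certain files
  gluster volume heal myvolume info heal-failed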

4) Under "/var/log/glusterfs", check the glustershd.log file for any 
self-heal related log messages (a grep example follows below).
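
For example, a plain grep (nothing gluster-specific; adjust the pattern 
as needed) can pull the failure messages out of that log:

  grep -iE "error|failed|split-brain" /var/log/glusterfs/glustershd.log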

-Shwetha

On 11/28/2013 08:15 AM, glusted at netcourrier.com wrote:
>
>   Hi Couilles-de-Loups!
> After a few unsuccessful attempts to get answers on the gluster chat, 
> I am turning to email.
>
> I have Glusterfs version 3.4.0.
>
> 1) What is the correct usage of the command "gluster volume heal 
> myvolume info heal-failed"?
>
> When I type this command, I get a list of files:
> Ex:
>
> 2013-11-14 03:07:52 <gfid:fd1d018e-38ae-444c-a069-91528b9871dd>/10.jpg
> 2013-11-14 03:07:51 <gfid:fd1d018e-38ae-444c-a069-91528b9871dd>/1.jpg
>
> In fact, I get this:
>
> [bob at server]# gluster volume heal myvolume info heal-failed | grep -i 
> number
> Number of entries: 6
> Number of entries: 68
>
> So on my 2 bricks, I have a total of 74 "heal-failed" files.
>
> 2) When I run "gluster volume heal myvolume" and/or "gluster volume heal 
> myvolume full" and then type "gluster volume heal myvolume info 
> heal-failed" again, I get the same number...
> In fact it says that the command was successful /(Launching Heal 
> operation on volume myvolume has been successful Use heal info 
> commands to check status).../
>
>
> 3) How do I remove those files so they don't appear in "heal-failed"? 
> Do I even want to remove them? My understanding is that this command 
> should only show files that have not been healed, not relics of the past.
>
> 4) About logging: which log should I check to find out why I have 
> "heal-failed" entries?
> I found the log directory, but there are plenty of logs, including the 
> brick1 and brick2 logs.
> I have looked at them but have not found the root cause yet.
>
> 5) I cannot find the files marked as "heal-failed"; can someone give 
> me a hint or an explanation? (For example, what is this: 
> gfid:fd1d018e-38ae-444c-a069-91528b9871dd?)
>
>
> Thanks,
>
>
>
>

