[Gluster-users] Replica 3 cluster, file being healed on all 3 nodes

Pranith Kumar Karampuri pkarampu at redhat.com
Wed Oct 14 05:31:36 UTC 2015



On 10/14/2015 09:39 AM, Lindsay Mathieson wrote:
>
> On 13 October 2015 at 22:33, Krutika Dhananjay <kdhananj at redhat.com 
> <mailto:kdhananj at redhat.com>> wrote:
>
>         However I managed to create a state where a file was being
>         healed on all three nodes (probably by live migrating a VM
>         while it was being healed). I didn't think that was possible
>         without creating a split brain problem, but it eventually got
>         all the way to being healed.
>
>     I don't think it is possible for the heal of this image to be
>     happening on all three nodes.
>
>
>
> I should have recorded the info output, but it did show the same file 
> being "possibly healed" on all three nodes.
There seems to be some confusion about how to interpret the output of 
"gluster volume heal <volname> info". We will address it by either 
improving the output or adding documentation on how to read it.
For now, an entry in that output only means that one of the self-heal 
daemons, or the mount, is healing that file. The confusing part is that 
the same entry can appear under multiple bricks. Computing the 
intersection/union of the results across bricks would cost a LOT more 
iops, so we chose to print the same entry multiple times (once per 
brick that knows about it), exactly as each brick perceives it. It does 
not mean that each of those bricks is healing the file; AFR takes the 
necessary locks to make sure parallel heals do not happen on the same file.
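
To illustrate (this is a made-up example: the volume name, host names, 
brick paths and file name are hypothetical, and the exact wording of 
the output differs between releases), heal info can report the same 
file under every brick of the replica:

    # gluster volume heal datastore1 info
    Brick node1:/bricks/datastore1
    /images/vm-100-disk-1.qcow2 - Possibly undergoing heal
    Number of entries: 1

    Brick node2:/bricks/datastore1
    /images/vm-100-disk-1.qcow2 - Possibly undergoing heal
    Number of entries: 1

    Brick node3:/bricks/datastore1
    /images/vm-100-disk-1.qcow2 - Possibly undergoing heal
    Number of entries: 1

Here all three bricks show the same entry, but only one self-heal 
daemon (or the mount) actually performs the heal at any given time; the 
others are simply reporting the pending-heal state as they see it.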

Pranith
>
> Gluster 3.6.6
>
> -- 
> Lindsay
>
