[Gluster-users] Self-Heal Daemon not Running

Mohit Anchlia mohitanchlia at gmail.com
Wed Sep 25 00:39:30 UTC 2013


What's the output of

gluster volume heal $VOLUME info split-brain
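
For example, against the STORAGE volume from your earlier mail (the second
command lists everything still pending heal, not just the split-brain
entries):

    gluster volume heal STORAGE info split-brain
    gluster volume heal STORAGE info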


On Tue, Sep 24, 2013 at 5:33 PM, Andrew Lau <andrew at andrewklau.com> wrote:

> Found the BZ https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so I
> restarted one of the volumes and that seems to have restarted all the
> daemons again.
>
> Self-heal started again, but I seem to have split-brain issues everywhere.
> There are over 100 entries on each node - what's the best way to recover
> from this now, short of manually going through and deleting 200+ files? It
> looks like a full split-brain, as the file sizes on the two nodes are out
> of balance by about 100GB.
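>
> I'm guessing the manual fix is roughly: for each entry listed in
> split-brain, pick the copy to keep, then on the brick holding the bad copy
> remove both the file and its gfid hard link under .glusterfs before
> re-triggering a heal - something like the below (the brick path is just a
> placeholder):
>
>     getfattr -n trusted.gfid -e hex /export/brick1/path/to/file
>     rm /export/brick1/path/to/file
>     rm /export/brick1/.glusterfs/aa/bb/<full-gfid>
>     # then stat the file from a client mount, or run: gluster volume heal STORAGE full
>
> but I'd rather confirm that before doing it across 200+ files.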
>
> Any suggestions would be much appreciated!
>
> Cheers.
>
> On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau <andrew at andrewklau.com> wrote:
>
>> Hi,
>>
>> Right now, I have a 2x1 replica. Ever since I had to reinstall one of the
>> gluster servers, there have been issues with split-brain. The self-heal
>> daemon doesn't seem to be running on either node.
>>
>> To reinstall the gluster server (the original brick data was intact, but
>> the OS had to be reinstalled), I did the following:
>> - Reinstalled gluster
>> - Copied over the old uuid from backup
>> - gluster peer probe
>> - gluster volume sync $othernode all
>> - mount -t glusterfs localhost:STORAGE /mnt
>> - find /mnt -noleaf -print0 | xargs --null stat >/dev/null 2>/var/log/glusterfs/mnt-selfheal.log
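>>
>> (I believe the same full resync could also have been triggered with
>> "gluster volume heal STORAGE full", but I used the find/stat walk above.)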
>>
>> I let it resync and it was working fine, or at least so I thought. I came
>> back a few days later to find a mismatch between the brick volumes - one
>> is about 50GB ahead of the other.
>>
>> # gluster volume heal STORAGE info
>> Status: self-heal-daemon is not running on
>> 966456a1-b8a6-4ca8-9da7-d0eb96997cbe
>>
>> /var/log/glusterfs/glustershd.log doesn't seem to have any recent entries,
>> only those from when the two original gluster servers were running.
>>
>> # gluster volume status
>>
>> Self-heal Daemon on localhost N/A N N/A
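>>
>> Is the right fix here to restart glusterd (service glusterd restart), or
>> to run "gluster volume start STORAGE force" to respawn the self-heal
>> daemon, or is there something else I should check first?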
>>
>> Any suggestions would be much appreciated!
>>
>> Cheers
>> Andrew.
>>
>
>