[Gluster-users] No healing on peer disconnect - is it correct?

Ravishankar N ravishankar at redhat.com
Tue Jun 11 04:50:10 UTC 2019

There will be pending heals only when the brick process goes down or 
there is a disconnect between the client and that brick. When you say " 
gluster process is down but bricks running", I'm guessing you killed 
only glusterd and not the glusterfsd brick process. That won't cause any 
pending heals. If there is something to be healed, `gluster volume heal 
$volname info` will display the list of files.
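To make that check easier to automate, here is a minimal sketch that parses the output of `gluster volume heal $volname info` and reports the pending-heal count per brick. The exact output layout (a "Brick host:/path" header followed by entries and a "Number of entries:" line) is an assumption based on typical gluster releases of that era; the sample text below is illustrative, not captured from a real cluster.

```python
# Sketch: summarize pending heals per brick from `gluster volume heal info`.
# Assumed output format: "Brick <host>:<path>" header, optional entry lines,
# then "Number of entries: N" for each brick section.
import re

def pending_heals(heal_info_text):
    """Return {brick: entry_count} parsed from heal info output."""
    counts = {}
    brick = None
    for line in heal_info_text.splitlines():
        if line.startswith("Brick "):
            brick = line[len("Brick "):].strip()
        m = re.match(r"Number of entries:\s*(\d+)", line)
        if m and brick is not None:
            counts[brick] = int(m.group(1))
    return counts

# Illustrative sample: one brick with a file awaiting heal, one clean.
sample = """\
Brick node1:/data/brick1
/vm1.img
Status: Connected
Number of entries: 1

Brick node2:/data/brick1
Status: Connected
Number of entries: 0
"""

print(pending_heals(sample))
```

In practice you would feed it the live command output, e.g. `pending_heals(subprocess.run(["gluster", "volume", "heal", volname, "info"], capture_output=True, text=True).stdout)`, and alert when any count is non-zero.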

Hope that helps,
On 10/06/19 7:53 PM, Martin wrote:
> My VMs use Gluster as storage through libgfapi support in Qemu, but 
> I don't see any healing on the reconnected brick.
> Thanks Karthik / Ravishankar in advance!
>> On 10 Jun 2019, at 16:07, Hari Gowtham <hgowtham at redhat.com 
>> <mailto:hgowtham at redhat.com>> wrote:
>> On Mon, Jun 10, 2019 at 7:21 PM snowmailer <snowmailer at gmail.com 
>> <mailto:snowmailer at gmail.com>> wrote:
>>> Can someone advice on this, please?
>>> BR!
>>> On 3 Jun 2019, at 18:58, Martin <snowmailer at gmail.com 
>>> <mailto:snowmailer at gmail.com>> wrote:
>>>> Hi all,
>>>> I need someone to explain whether my gluster behaviour is correct. I am 
>>>> not sure my gluster works as it should. I have a simple Replica 3 
>>>> volume - Number of Bricks: 1 x 3 = 3.
>>>> When one of my hypervisors is disconnected as a peer, i.e. the gluster 
>>>> process is down but the bricks are running, the other two healthy nodes 
>>>> start signalling that they lost one peer. This is correct.
>>>> Next, I restart the gluster process on the node where it failed. I 
>>>> thought this should trigger healing of files on the failed node, but 
>>>> nothing happens.
>>>> I run VM disks on this gluster volume. No healing is triggered after 
>>>> the gluster restart; the remaining two nodes see the peer again after 
>>>> the restart, and everything keeps running without downtime.
>>>> Even VMs running on the “failed” node where the gluster process 
>>>> was down (bricks were up) keep running without downtime.
>> I assume your VMs use gluster as the storage. In that case, the
>> gluster volume might be mounted on all the hypervisors.
>> The mount/ client is smart enough to give the correct data from the
>> other two machines which were always up.
>> This is the reason things are working fine.
>> Gluster should heal the brick.
>> Adding people who can help you better with the heal part.
>> @Karthik Subrahmanya @Ravishankar N, do take a look and answer this part.
>>>> Is this behaviour correct? I mean, no healing is triggered after the 
>>>> peer is reconnected, yet the VMs keep running.
>>>> Thanks for explanation.
>>>> BR!
>>>> Martin
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org <mailto:Gluster-users at gluster.org>
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>> --
>> Regards,
>> Hari Gowtham.