[Gluster-users] Setup with replica 3 for imagestorage

Ravishankar N ravishankar at redhat.com
Wed Jul 15 13:29:52 UTC 2015



On 07/15/2015 06:41 PM, Gregor Burck wrote:
> Hi Ravi,
>
>> You can create a normal replica 3 volume and then use it for VM 
>> images, instead of doing an add brick (thus avoiding the need to heal 
>> the vm image file  to the newly added brick).
>
> That is not the problem; I have already done the initial heal after the add.
> But what about taking a node down, for example for maintenance, a
> power supply failure, and so on?
> After the node comes back, the VM goes read-only.

That should not be the case; can you provide the client (mount) logs and 
the brick logs from when this happens? The replicate translator usually 
returns EROFS only when quorum is not met.
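
For reference (assuming the default log directory), the fuse mount log on 
the client is usually under /var/log/glusterfs/, named after the mount 
point, and the brick logs are under /var/log/glusterfs/bricks/ on each 
server. To confirm the quorum options in effect, something like the 
following should work, with <volname> as a placeholder:

    # reconfigured options (quorum settings show up here if changed)
    gluster volume info <volname>

    # enable client-side quorum on the replica 3 volume
    gluster volume set <volname> cluster.quorum-type auto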

>
> This is what I get after one brick (edgf006) was restarted:
> Brick edgf004:/export/vbstore/
> /gftest/gftest.vdi - Possibly undergoing heal
>
> Number of entries: 1
>
> Brick edgf005:/export/vbstore/
> /gftest/gftest.vdi - Possibly undergoing heal
>
> Number of entries: 1
>
> Brick edgf006:/export/vbstore/
> Number of entries: 0
>
> Why do the two nodes which are still alive show 'Possibly undergoing heal'? 
> That is what I don't understand.
>

The healthy bricks always record the list of files that need to be 
healed to the other node, which is why the entry shows up on both bricks. 
'Possibly undergoing heal' means that, out of the list of files that need 
heal, this one is currently being healed by the self-heal daemon.
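
If it helps, the pending-heal list can be watched while the self-heal 
daemon works through it; something like this, with <volname> as a 
placeholder:

    # files still pending heal, listed per brick
    gluster volume heal <volname> info

    # should stay empty; only lists files in actual split-brain
    gluster volume heal <volname> info split-brain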

-Ravi
>
>
>> client quorum (cluster.quorum-type) should be set to `auto`.
> I've done this with no change.
>
>> glusterfs 3.7 onwards has support for a special type of replica 3 
>> configuration called arbiter volumes, where the disk space consumed is 
>> less than that of a conventional replica 3 volume. It would be great if 
>> you could try that out for your VM images and provide some feedback! 
>> Details on arbiter volumes can be found here: 
>> https://github.com/gluster/glusterfs/blob/master/doc/features/afr-arbiter-volumes.md
>
> I'll have a look at that and test it,
>
> Bye,
>
> Gregor
>
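
Regarding the arbiter volumes mentioned above: the volume is created with 
an explicit arbiter count, roughly like this (hostnames and brick paths 
are only placeholders):

    gluster volume create <volname> replica 3 arbiter 1 \
        host1:/export/vbstore host2:/export/vbstore host3:/export/vbstore
    gluster volume start <volname>

The third brick then holds only file names and metadata, no file data, 
which is where the space saving over a conventional replica 3 comes from.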


