[Gluster-users] Replica 3: Client access via FUSE failed if two bricks are down

Ravishankar N ravishankar at redhat.com
Fri Apr 12 15:53:17 UTC 2019


On 12/04/19 8:34 PM, Felix Kölzow wrote:
>
> Dear Gluster-Community,
>
>
> I created a test-environment to test a gluster volume with replica 3.
>
> Afterwards, I am able to manually mount the gluster volume using FUSE.
>
>
> mount command:
>
> mount -t glusterfs  -o backup-volfile-servers=gluster01:gluster02 
> gluster00:/ifwFuse /mnt/glusterfs/ifwFuse
>
>
> Just for testing purposes, I shut down *two* (arbitrary) bricks while 
> one brick remained online
>
> and accessible via ssh. As soon as I power off the two machines, I 
> immediately get the following error message:
>
> ls: cannot open directory .: Transport endpoint is not connected
>
>
> From my understanding of replica 3, even if two bricks are broken the 
> client should still be able to
>
> access the data.
>
In replica 3, 2 out of 3 bricks must be up to allow access. In other 
words, client-quorum must be met. Otherwise you get ENOTCONN, like you 
observed.
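As a side note, you can inspect the quorum configuration of the volume 
with `gluster volume get` (a sketch using the volume name ifwFuse from 
your mount command; for replica 3 the default quorum-type is "auto", 
which requires a majority of the bricks to be up):

```shell
# Show the client-quorum options in effect for the volume
# (defaults apply if they were never set explicitly).
gluster volume get ifwFuse cluster.quorum-type
gluster volume get ifwFuse cluster.quorum-count
```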
>
> Actually, I don't know how to solve that issue. Any idea is welcome!
>
You could disable client-quorum, but that is strongly advised against, 
because it opens the volume up to split-brains 
(https://docs.gluster.org/en/v3/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/).
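For completeness, disabling client-quorum is a single volume-set option 
(again using the volume name ifwFuse from your mount command; this is a 
sketch of what I mean, not a recommendation):

```shell
# Disable client-quorum so the volume stays accessible with only one
# brick up. WARNING: this makes split-brain possible.
gluster volume set ifwFuse cluster.quorum-type none

# To restore the safe default for replica 3 later:
gluster volume set ifwFuse cluster.quorum-type auto
```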

Hope that helps.

-Ravi

>
> If you need any log-file as further information, just give me a hint!
>
>
> Thanks in advance.
>
> Felix
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users