[Gluster-users] Geo-replication status Faulty
gilberto.nunes32 at gmail.com
Tue Oct 27 12:39:49 UTC 2020
>> IIUC you're begging for split-brain ...
Not at all!
I have used this configuration and there hasn't been any split-brain at all!
But if I do not use it, then I do get split-brain.
Regarding quorum-count 2, I will look into it!
Gilberto Nunes Ferreira
On Tue, Oct 27, 2020 at 09:37, Diego Zuccato <diego.zuccato at unibo.it> wrote:
> On 27/10/20 13:15, Gilberto Nunes wrote:
> > I have applied these parameters to the 2-node gluster:
> > gluster vol set VMS cluster.heal-timeout 10
> > gluster volume heal VMS enable
> > gluster vol set VMS cluster.quorum-reads false
> > gluster vol set VMS cluster.quorum-count 1
> IIUC you're begging for split-brain ...
> I think you should leave quorum-count=2 for safe writes. If a node is
> down, obviously the volume becomes readonly. But if you planned the
> downtime you can reduce quorum-count just before shutting it down.
> You'll have to bring it back to 2 before re-enabling the downed server,
> then wait for heal to complete before being able to down the second server.
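Diego's planned-downtime procedure could be sketched roughly as follows (volume name VMS taken from the thread; the exact ordering and the heal check are my reading of his advice, not a tested runbook):

```shell
# Just before the planned shutdown of one node, relax the write quorum:
gluster volume set VMS cluster.quorum-count 1

# ... take the node down, do the maintenance, bring it back up ...

# Restore the safe quorum before relying on the returned node:
gluster volume set VMS cluster.quorum-count 2

# Wait for self-heal to finish before downing the second node;
# re-run until no entries are listed as pending:
gluster volume heal VMS info
```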
> > Then I mount the gluster volume putting this line in the fstab file:
> > In gluster01
> > gluster01:VMS /vms glusterfs
> > defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
> > In gluster02
> > gluster02:VMS /vms glusterfs
> > defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0
> Isn't it preferable to use the 'hostlist' syntax?
> gluster01,gluster02:VMS /vms glusterfs defaults,_netdev 0 0
> A / at the beginning is optional, but can be useful if you're trying to
> use the diamond freespace collector (w/o the initial slash, it ignores
> glusterfs mountpoints).
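For what it's worth, either fstab form can be verified without a reboot (hostnames, volume, and mount point as used in this thread):

```shell
# Mount using the fstab entry and confirm it came up as a gluster mount:
mount /vms
mount | grep ' /vms '    # type should show fuse.glusterfs
```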
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786