>> IIUC you're begging for split-brain ...

Not at all!
I have used this configuration and there isn't any split-brain at all!
But if I do not use it, then I do get a split-brain.
Regarding quorum-count 2, I will look into it!
Thanks

---
Gilberto Nunes Ferreira

On Tue, Oct 27, 2020 at 09:37, Diego Zuccato <diego.zuccato@unibo.it> wrote:

On 27/10/20 13:15, Gilberto Nunes wrote:
> I have applied these parameters to the 2-node Gluster volume:
> gluster vol set VMS cluster.heal-timeout 10
> gluster volume heal VMS enable
> gluster vol set VMS cluster.quorum-reads false
> gluster vol set VMS cluster.quorum-count 1
Urgh!
IIUC you're begging for split-brain ...
I think you should keep quorum-count=2 for safe writes. If a node is
down, the volume obviously becomes read-only. But if the downtime is
planned, you can reduce quorum-count just before shutting the node down.
You'll have to bring it back to 2 before re-enabling the downed server,
then wait for the heal to complete before you can take the second
server down.

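Roughly, the planned-downtime sequence would look like this (just a
sketch, assuming gluster02 is the node going down; double-check the
options against your Gluster version):

# on gluster01, just before taking gluster02 down for maintenance:
gluster volume set VMS cluster.quorum-count 1
# ... shut down gluster02, do the maintenance ...
# before bringing gluster02 back, restore safe writes:
gluster volume set VMS cluster.quorum-count 2
# after gluster02 rejoins, wait until no pending entries remain:
gluster volume heal VMS info
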
> Then I mount the Gluster volume by putting this line in the fstab file:
> On gluster01:
> gluster01:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
> On gluster02:
> gluster02:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0
Isn't it preferable to use the 'hostlist' syntax?
gluster01,gluster02:VMS /vms glusterfs defaults,_netdev 0 0
A / at the beginning of the volume name is optional, but it can be
useful if you're trying to use the diamond freespace collector (without
the initial slash, it ignores glusterfs mountpoints).
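That is, with the initial slash the line would read:

gluster01,gluster02:/VMS /vms glusterfs defaults,_netdev 0 0
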
-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786