[Gluster-users] Geo-replication status Faulty

Strahil Nikolov hunter86_bg at yahoo.com
Tue Oct 27 19:04:11 UTC 2020

It could be a "simple" bug - software has bugs and regressions.

I would recommend you to ping the debian mailing list - at least it won't hurt.

Best Regards,
Strahil Nikolov

On Tuesday, 27 October 2020 at 20:10:39 GMT+2, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote: 


Well... It seems that pure Debian Linux 10 has a problem with XFS, which is the filesystem I used:
it does not accept the attr2 mount option.

Interestingly enough, now that I have switched to Proxmox 6.x, which is Debian-based, I am able to use the attr2 mount option.
With that, the Faulty status of geo-replication has gone away.
Perhaps the Proxmox staff compiled XFS themselves... I don't know...
But now I am happy, because the main reason I want geo-replication is to use it with Proxmox...

cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root   /       xfs     defaults        0 1
/dev/pve/swap   none    swap    sw              0 0
/dev/sdb1       /DATA   xfs     attr2           0 0
gluster01:VMS   /vms    glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
proc            /proc   proc    defaults        0 0
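A quick way to confirm the option actually took effect (a sketch, assuming the /DATA XFS mount from the fstab above):

grep ' /DATA ' /proc/mounts   # shows the options the kernel actually applied
xfs_info /DATA                # "attr=2" in the meta-data line means v2 extended attributes are in use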

Gilberto Nunes Ferreira

On Tue, 27 Oct 2020 at 09:39, Gilberto Nunes <gilberto.nunes32 at gmail.com> wrote:
>>> IIUC you're begging for split-brain ...
> Not at all!
> I have used this configuration and there hasn't been any split-brain at all!
> But if I don't use it, then I do get split-brain.
> Regarding quorum-count 2, I will look into it!
> Thanks
> ---
> Gilberto Nunes Ferreira
> On Tue, 27 Oct 2020 at 09:37, Diego Zuccato <diego.zuccato at unibo.it> wrote:
>> On 27/10/20 13:15, Gilberto Nunes wrote:
>>> I have applied these parameters to the 2-node gluster:
>>> gluster vol set VMS cluster.heal-timeout 10
>>> gluster volume heal VMS enable
>>> gluster vol set VMS cluster.quorum-reads false
>>> gluster vol set VMS cluster.quorum-count 1
>> Urgh!
>> IIUC you're begging for split-brain ...
>> I think you should leave quorum-count=2 for safe writes. If a node is
>> down, obviously the volume becomes readonly. But if you planned the
>> downtime you can reduce quorum-count just before shutting it down.
>> You'll have to bring it back to 2 before re-enabling the downed server,
>> then wait for heal to complete before being able to down the second server.
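>> A sketch of that sequence (volume name VMS from this thread; run from the
>> surviving node; the comments are mine):
>> gluster volume set VMS cluster.quorum-count 1   # relax quorum just before the planned shutdown
>> (take the node down, do the maintenance, bring it back up)
>> gluster volume set VMS cluster.quorum-count 2   # restore safe writes
>> gluster volume heal VMS info                    # wait until no unhealed entries remain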
>>> Then I mount the gluster volume putting this line in the fstab file:
>>> In gluster01
>>> gluster01:VMS /vms glusterfs
>>> defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
>>> In gluster02
>>> gluster02:VMS /vms glusterfs
>>> defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0
>> Isn't it preferable to use the 'hostlist' syntax?
>> gluster01,gluster02:VMS /vms glusterfs defaults,_netdev 0 0
>> A / at the beginning is optional, but can be useful if you're trying to
>> use the diamond freespace collector (w/o the initial slash, it ignores
>> glusterfs mountpoints).
>> -- 
>> Diego Zuccato
>> DIFA - Dip. di Fisica e Astronomia
>> Servizi Informatici
>> Alma Mater Studiorum - Università di Bologna
>> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>> tel.: +39 051 20 95786

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users at gluster.org
