[Gluster-users] Geo-replication status Faulty

Gilberto Nunes gilberto.nunes32 at gmail.com
Tue Oct 27 19:24:20 UTC 2020


Not so fast with my solution!
After shutting down the other node, I get the FAULTY status again...
The only failure I can see is this one, regarding an xattr value:

[2020-10-27 19:20:07.718897] E [syncdutils(worker
/DATA/vms):110:gf_mount_ready] <top>: failed to get the xattr value

I don't know if I am looking at the right log:
/var/log/glusterfs/geo-replication/VMS_gluster03_VMS-SLAVE/gsyncd.log

[2020-10-27 19:20:03.867749] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
Change [{status=Initializing...}]
[2020-10-27 19:20:03.868206] I [monitor(monitor):160:monitor] Monitor:
starting gsyncd worker [{brick=/DATA/vms}, {slave_node=gluster03}]
[2020-10-27 19:20:04.397444] I [resource(worker
/DATA/vms):1387:connect_remote] SSH: Initializing SSH connection between
master and slave...
[2020-10-27 19:20:06.337282] I [resource(worker
/DATA/vms):1436:connect_remote] SSH: SSH connection between master and
slave established. [{duration=1.9385}]
[2020-10-27 19:20:06.337854] I [resource(worker /DATA/vms):1116:connect]
GLUSTER: Mounting gluster volume locally...
[2020-10-27 19:20:07.718897] E [syncdutils(worker
/DATA/vms):110:gf_mount_ready] <top>: failed to get the xattr value
[2020-10-27 19:20:07.720089] I [resource(worker /DATA/vms):1139:connect]
GLUSTER: Mounted gluster volume [{duration=1.3815}]
[2020-10-27 19:20:07.720644] I [subcmds(worker /DATA/vms):84:subcmd_worker]
<top>: Worker spawn successful. Acknowledging back to monitor
[2020-10-27 19:20:09.757677] I [master(worker /DATA/vms):1645:register]
_GMaster: Working dir
[{path=/var/lib/misc/gluster/gsyncd/VMS_gluster03_VMS-SLAVE/DATA-vms}]
[2020-10-27 19:20:09.758440] I [resource(worker
/DATA/vms):1292:service_loop] GLUSTER: Register time [{time=1603826409}]
[2020-10-27 19:20:09.925364] I [gsyncdstatus(worker
/DATA/vms):281:set_active] GeorepStatus: Worker Status Change
[{status=Active}]
[2020-10-27 19:20:10.407319] I [gsyncdstatus(worker
/DATA/vms):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change
[{status=History Crawl}]
[2020-10-27 19:20:10.420385] I [master(worker /DATA/vms):1559:crawl]
_GMaster: starting history crawl [{turns=1}, {stime=(1603821702, 0)},
{etime=1603826410}, {entry_stime=(1603822857, 0)}]
[2020-10-27 19:20:10.424286] E [resource(worker
/DATA/vms):1312:service_loop] GLUSTER: Changelog History Crawl failed
[{error=[Errno 0] Success}]
[2020-10-27 19:20:10.731317] I [monitor(monitor):228:monitor] Monitor:
worker died in startup phase [{brick=/DATA/vms}]
[2020-10-27 19:20:10.740046] I
[gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status
Change [{status=Faulty}]
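
In case it is useful, the session state and the xattrs on the brick root can
be cross-checked like this (volume and slave names taken from the log path
VMS_gluster03_VMS-SLAVE above; getfattr comes from the attr package):

# session state as reported by gsyncd
gluster volume geo-replication VMS gluster03::VMS-SLAVE status detail
# dump all extended attributes on the brick root (run as root)
getfattr -d -m . -e hex /DATA/vms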


---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram

On Tue, 27 Oct 2020 at 16:06, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:

> It could be a "simple" bug - software has bugs and regressions.
>
> I would recommend pinging the Debian mailing list - at least it won't
> hurt.
>
> Best Regards,
> Strahil Nikolov
>
> On Tuesday, 27 October 2020, 20:10:39 GMT+2, Gilberto Nunes <
> gilberto.nunes32 at gmail.com> wrote:
>
> [SOLVED]
>
> Well... It seems to me that pure Debian Linux 10 has some problem with
> XFS, which is the FS that I used: it does not accept the attr2 mount
> option.
> Interestingly enough, now that I am using Proxmox 6.x, which is Debian
> based, I am able to use the attr2 mount option.
> With that, the Faulty status of the geo-replication is gone.
> Perhaps the Proxmox staff compiled XFS from scratch... I don't know...
> But now I am happy, because the main reason for me to use geo-rep is to
> use it on top of Proxmox...
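>
> (To check whether a given kernel accepts attr2, using the same device and
> mountpoint as in the fstab below, something like this should do:)
>
> # mount the brick filesystem with attr2 explicitly, then check the kernel log
> mount -t xfs -o attr2 /dev/sdb1 /DATA
> dmesg | tail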
>
> cat /etc/fstab
> # <file system> <mount point> <type> <options> <dump> <pass>
> /dev/pve/root / xfs defaults 0 1
> /dev/pve/swap none swap sw 0 0
> /dev/sdb1 /DATA xfs attr2 0 0
> gluster01:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
> proc /proc proc defaults 0 0
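>
> After editing fstab, something like the following should confirm that the
> entries mount cleanly:
>
> # remount everything from fstab and show the resulting gluster mount
> mount -a
> findmnt /vms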
>
>
> ---
> Gilberto Nunes Ferreira
>
> On Tue, 27 Oct 2020 at 09:39, Gilberto Nunes <
> gilberto.nunes32 at gmail.com> wrote:
> >>> IIUC you're begging for split-brain ...
> > Not at all!
> > I have used this configuration and there hasn't been any split-brain at all!
> > But if I do not use it, then I do get a split-brain.
> > Regarding quorum-count 2, I will look into it!
> > Thanks
> >
> > ---
> > Gilberto Nunes Ferreira
> >
> > On Tue, 27 Oct 2020 at 09:37, Diego Zuccato <
> diego.zuccato at unibo.it> wrote:
> >> On 27/10/20 13:15, Gilberto Nunes wrote:
> >>> I have applied this parameters to the 2-node gluster:
> >>> gluster vol set VMS cluster.heal-timeout 10
> >>> gluster volume heal VMS enable
> >>> gluster vol set VMS cluster.quorum-reads false
> >>> gluster vol set VMS cluster.quorum-count 1
> >> Urgh!
> >> IIUC you're begging for split-brain ...
> >> I think you should leave quorum-count=2 for safe writes. If a node is
> >> down, the volume obviously becomes read-only. But if you planned the
> >> downtime, you can reduce quorum-count just before shutting it down.
> >> You'll have to bring it back to 2 before re-enabling the downed server,
> >> then wait for the heal to complete before you can take down the second
> >> server.
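> >>
> >> In commands, the sequence I mean would be roughly:
> >>
> >> # just before the planned shutdown of one node
> >> gluster volume set VMS cluster.quorum-count 1
> >> # ... maintenance on the downed node ...
> >> # before bringing the node back
> >> gluster volume set VMS cluster.quorum-count 2
> >> # wait until this shows no pending entries before downing the second node
> >> gluster volume heal VMS info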
> >>
> >>> Then I mount the gluster volume putting this line in the fstab file:
> >>> In gluster01
> >>> gluster01:VMS /vms glusterfs
> >>> defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0
> >>> In gluster02
> >>> gluster02:VMS /vms glusterfs
> >>> defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0
> >> Isn't it preferable to use the 'hostlist' syntax?
> >> gluster01,gluster02:VMS /vms glusterfs defaults,_netdev 0 0
> >> A / at the beginning of the volume name is optional, but can be useful
> >> if you're trying to use the diamond freespace collector (without the
> >> initial slash, it ignores glusterfs mountpoints).
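> >> E.g., with the optional leading slash:
> >> gluster01,gluster02:/VMS /vms glusterfs defaults,_netdev 0 0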
> >>
> >> --
> >> Diego Zuccato
> >> DIFA - Dip. di Fisica e Astronomia
> >> Servizi Informatici
> >> Alma Mater Studiorum - Università di Bologna
> >> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> >> tel.: +39 051 20 95786
> >>
> >