<div dir="ltr">Dear Felix,<div><br></div><div>I have applied these parameters to the 2-node gluster volume:</div><div><br></div><div>gluster vol set VMS cluster.heal-timeout 10<br>gluster volume heal VMS enable<br>gluster vol set VMS cluster.quorum-reads false<br>gluster vol set VMS cluster.quorum-count 1<br>gluster vol set VMS network.ping-timeout 2<br>gluster volume set VMS cluster.favorite-child-policy mtime<br>gluster volume heal VMS granular-entry-heal enable<br>gluster volume set VMS cluster.data-self-heal-algorithm full<br></div><div><br></div><div>As you can see, I use this for virtualization purposes.</div><div>Then I mounted the gluster volume by putting this line in the fstab file:</div><div><br></div><div>On gluster01:</div><div><br></div><div>gluster01:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster02 0 0<br></div><div><br></div><div>On gluster02:</div><div><br></div><div>gluster02:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster01 0 0<br></div><div><br></div><div>Then, after shutting down gluster01, gluster02 can still access the mounted gluster volume...</div><div><br></div><div>Only the geo-rep has a failure.</div><div><br></div><div>I couldn't see why yet, but I'll investigate further.</div><div><br></div><div>Thanks</div><div><br></div><div><br></div><div><br></div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><br></div><div><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 27 Oct 2020 at 04:57, Felix Kölzow <<a href="mailto:felix.koelzow@gmx.de">felix.koelzow@gmx.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p>Dear Gilberto,</p>
<p><br>
</p>
<p>If I am right, you ran into server quorum when you started a 2-node
replica and shut down one host.</p>
<p>From my perspective, it's fine.</p>
<p><br>
</p>
<p>Please correct me if I am wrong here.</p>
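<p>For reference, the client-side quorum rule discussed in this thread behaves like the minimal sketch below (it assumes cluster.quorum-type is fixed so that the quorum-count 1 setting from the VMS volume applies; the brick counts are illustrative, not output from this cluster). Server quorum is a separate, glusterd-level check that can take bricks down entirely.</p>

```shell
# Client-quorum decision for a replica-2 volume with a fixed quorum count.
# With "cluster.quorum-count 1", the client keeps the volume writable as
# long as at least one brick is reachable.
bricks_up=1      # e.g. one of the two nodes shut down
quorum_count=1   # from: gluster vol set VMS cluster.quorum-count 1

if [ "$bricks_up" -ge "$quorum_count" ]; then
  verdict="writes allowed"
else
  verdict="writes blocked (mount goes read-only)"
fi
echo "$verdict"
```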
<p><br>
</p>
<p>Regards,</p>
<p>Felix<br>
</p>
<div>On 27/10/2020 01:46, Gilberto Nunes
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Well, I did not reboot the host; I shut it down.
Then, after 15 min, I gave up.
<div>I don't know why that happened.</div>
<div>I will try it later.</div>
<div><br>
</div>
<div>
<div>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>---</div>
<div>
<div>
<div>Gilberto Nunes Ferreira</div>
</div>
<div><br>
</div>
<div> </div>
<div>
<p style="font-size:12.8px;margin:0px"><br>
</p>
<p style="font-size:12.8px;margin:0px"><br>
</p>
</div>
</div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Mon, 26 Oct 2020 at 21:31, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Usually
there is only one "master", but when you power off one of the
2 nodes, geo-rep should handle that and the second node should
take over the job.<br>
<br>
How long did you wait after gluster01 was rebooted?<br>
<br>
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
<br>
<br>
<br>
<br>
<br>
On Monday, 26 October 2020, 22:46:21 GMT+2,
Gilberto Nunes <<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>>
wrote: <br>
<br>
<br>
<br>
<br>
<br>
I was able to solve the issue by restarting all servers.<br>
<br>
Now I have another issue!<br>
<br>
I just powered off the gluster01 server and then the geo-replication entered faulty status.<br>
I tried to stop and start the geo-replication like this:<br>
<br>
gluster volume geo-replication DATA root@gluster03::DATA-SLAVE resume<br>
Peer gluster01.home.local, which is a part of DATA volume, is down. Please bring up the peer and retry.<br>
geo-replication command failed<br>
How can I have geo-replication with 2 masters and 1 slave?<br>
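<br>
[Editor's note: a sketch of one recovery path, assuming the gluster 8.x CLI used in this thread; the volume names DATA and DATA-SLAVE come from the setup below, and the force variant of resume should be verified against your version's geo-replication docs before relying on it.]<br>
<br>

```shell
# When a participating master node is down, a plain "resume" refuses to
# run, as seen above. If gluster01 cannot be brought back first, forcing
# the operation from the surviving master node is the usual escape hatch:
gluster volume geo-replication DATA gluster03::DATA-SLAVE resume force

# Then check which master node now runs the Active worker:
gluster volume geo-replication DATA gluster03::DATA-SLAVE status
```

Note that there is still only one geo-rep session per volume: both replica nodes are masters of DATA, and whichever node is up should carry the Active worker; there is no separate "2 masters" session to configure.<br>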
<br>
Thanks<br>
<br>
<br>
---<br>
Gilberto Nunes Ferreira<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
On Mon, 26 Oct 2020 at 17:23, Gilberto Nunes <<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>>
wrote:<br>
> Hi there...<br>
> <br>
> I created a 2-node gluster volume, plus one more gluster server
acting as a backup server, using geo-replication.<br>
> So on gluster01 I issued these commands:<br>
> <br>
> gluster peer probe gluster02;gluster peer probe gluster03<br>
> gluster vol create DATA replica 2
gluster01:/DATA/master01-data gluster02:/DATA/master01-data/<br>
> <br>
> Then in gluster03 server:<br>
> <br>
> gluster vol create DATA-SLAVE gluster03:/DATA/slave-data/<br>
> <br>
> I set up passwordless SSH sessions between these 3 servers.<br>
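> <br>
> [Editor's note: the passwordless-SSH step usually looks like the sketch below; hostnames are taken from this thread, and since geo-rep needs root SSH from the master nodes to the slave, it would run at least from gluster01 and gluster02 toward gluster03.]<br>
> <br>

```shell
# Generate a key once per master node (no passphrase), then push it out.
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
ssh-copy-id root@gluster03

# Verify the login is truly passwordless before running the setup script:
ssh root@gluster03 true && echo ok
```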
> <br>
> Then I used this script:<br>
> <br>
> <a href="https://github.com/gilbertoferreira/georepsetup" rel="noreferrer" target="_blank">https://github.com/gilbertoferreira/georepsetup</a><br>
> <br>
> like this<br>
> <br>
> georepsetup<br>
> /usr/local/lib/python2.7/dist-packages/paramiko-2.7.2-py2.7.egg/paramiko/transport.py:33: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.<br>
>   from cryptography.hazmat.backends import default_backend<br>
> usage: georepsetup [-h] [--force] [--no-color] MASTERVOL SLAVE SLAVEVOL<br>
> georepsetup: error: too few arguments<br>
> gluster01:~# georepsetup DATA gluster03 DATA-SLAVE<br>
> /usr/local/lib/python2.7/dist-packages/paramiko-2.7.2-py2.7.egg/paramiko/transport.py:33: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.<br>
>   from cryptography.hazmat.backends import default_backend<br>
> Geo-replication session will be established between DATA and gluster03::DATA-SLAVE<br>
> Root password of gluster03 is required to complete the setup. NOTE: Password will not be stored.<br>
> root@gluster03's password:<br>
> [    OK] gluster03 is Reachable(Port 22)<br>
> [    OK] SSH Connection established root@gluster03<br>
> [    OK] Master Volume and Slave Volume are compatible (Version: 8.2)<br>
> [    OK] Common secret pub file present at /var/lib/glusterd/geo-replication/common_secret.pem.pub<br>
> [    OK] common_secret.pem.pub file copied to gluster03<br>
> [    OK] Master SSH Keys copied to all Up Slave nodes<br>
> [    OK] Updated Master SSH Keys to all Up Slave nodes authorized_keys file<br>
> [    OK] Geo-replication Session Established<br>
> Then I rebooted the 3 servers...<br>
> After a while everything worked OK, but after a few
minutes, I got Faulty status on gluster01...<br>
> <br>
> Here's the log:<br>
> <br>
> <br>
> [2020-10-26 20:16:41.362584] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]<br>
> [2020-10-26 20:16:41.362937] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/DATA/master01-data}, {slave_node=gluster03}]<br>
> [2020-10-26 20:16:41.508884] I [resource(worker /DATA/master01-data):1387:connect_remote] SSH: Initializing SSH connection between master and slave...<br>
> [2020-10-26 20:16:42.996678] I [resource(worker /DATA/master01-data):1436:connect_remote] SSH: SSH connection between master and slave established. [{duration=1.4873}]<br>
> [2020-10-26 20:16:42.997121] I [resource(worker /DATA/master01-data):1116:connect] GLUSTER: Mounting gluster volume locally...<br>
> [2020-10-26 20:16:44.170661] E [syncdutils(worker /DATA/master01-data):110:gf_mount_ready] <top>: failed to get the xattr value<br>
> [2020-10-26 20:16:44.171281] I [resource(worker /DATA/master01-data):1139:connect] GLUSTER: Mounted gluster volume [{duration=1.1739}]<br>
> [2020-10-26 20:16:44.171772] I [subcmds(worker /DATA/master01-data):84:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor<br>
> [2020-10-26 20:16:46.200603] I [master(worker /DATA/master01-data):1645:register] _GMaster: Working dir [{path=/var/lib/misc/gluster/gsyncd/DATA_gluster03_DATA-SLAVE/DATA-master01-data}]<br>
> [2020-10-26 20:16:46.201798] I [resource(worker /DATA/master01-data):1292:service_loop] GLUSTER: Register time [{time=1603743406}]<br>
> [2020-10-26 20:16:46.226415] I [gsyncdstatus(worker /DATA/master01-data):281:set_active] GeorepStatus: Worker Status Change [{status=Active}]<br>
> [2020-10-26 20:16:46.395112] I [gsyncdstatus(worker /DATA/master01-data):253:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]<br>
> [2020-10-26 20:16:46.396491] I [master(worker /DATA/master01-data):1559:crawl] _GMaster: starting history crawl [{turns=1}, {stime=(1603742506, 0)}, {etime=1603743406}, {entry_stime=(1603743226, 0)}]<br>
> [2020-10-26 20:16:46.399292] E [resource(worker /DATA/master01-data):1312:service_loop] GLUSTER: Changelog History Crawl failed [{error=[Errno 0] Sucesso}]<br>
> [2020-10-26 20:16:47.177205] I [monitor(monitor):228:monitor] Monitor: worker died in startup phase [{brick=/DATA/master01-data}]<br>
> [2020-10-26 20:16:47.184525] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]<br>
> <br>
> Any advice will be welcome.<br>
> <br>
> Thanks<br>
> <br>
> ---<br>
> Gilberto Nunes Ferreira<br>
> <br>
> <br>
> <br>
> <br>
> <br>
> <br>
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</blockquote>
</div>
<br>
</blockquote>
</div>
</blockquote></div>