Hi @Stefan Kania,
<div class="elementToProof" style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof" style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Please try to enable the geo-replication debug logs using the following command on the primary server, and recheck or resend the logs.</div>
<div class="elementToProof" style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div class="elementToProof"><span style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);"><code>gluster volume geo-replication privol01 geobenutzer@s01.gluster::secvol01 config
log-level DEBUG</code></span></div>
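
With DEBUG enabled, the worker log should show more context around the rsync failure. Geo-replication worker logs normally sit under /var/log/glusterfs/geo-replication/ in a per-session directory; judging by the working dir visible in your logs, that should be something like:

tail -f /var/log/glusterfs/geo-replication/privol01_s01.gluster_secvol01/gsyncd.log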
<div id="appendonsend"></div>
<div style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Thanks,</div>
<div style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Anant</div>
<div style="font-family: Aptos, Aptos_EmbeddedFont, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
________________________________
<div dir="ltr" id="divRplyFwdMsg"><span style="font-family: Calibri, sans-serif; font-size: 11pt; color: rgb(0, 0, 0);"><b>From:</b> Gluster-users <gluster-users-bounces@gluster.org> on behalf of Stefan Kania <stefan@kania-online.de><br>
<b>Sent:</b> 13 February 2024 7:11 PM<br>
<b>To:</b> gluster-users@gluster.org <gluster-users@gluster.org><br>
<b>Subject:</b> [Gluster-users] geo-replication {error=12} on one primary node</span>
<div> </div>
</div>
Hi to all,

Yes, I saw that there is a thread about geo-replication with nearly the same problem. I read it, but I think my problem is a bit different.

I created two volumes: the primary volume "privol01" and the secondary volume "secvol01". All hosts have the same packages installed; all hosts are Debian 12 with Gluster version 10.05, so even rsync is the same on all of the hosts. (I installed one host as a VM and cloned it.)
I have:

Volume Name: privol01
Type: Replicate
Volume ID: 93ace064-2862-41fe-9606-af5a4af9f5ab
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: p01:/gluster/brick
Brick2: p02:/gluster/brick
Brick3: p03:/gluster/brick

and:

Volume Name: secvol01
Type: Replicate
Volume ID: 4ebb7768-51da-446c-a301-dc3ea49a9ba2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: s01:/gluster/brick
Brick2: s02:/gluster/brick
Brick3: s03:/gluster/brick

Resolving the names of the hosts works in both directions.

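A quick way to double-check that on each node is getent:

getent hosts s01.gluster
getent hosts p01.gluster
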
That's what I did:

on all secondary hosts:

groupadd geogruppe
useradd -G geogruppe -m geobenutzer
passwd geobenutzer
ln -s /usr/sbin/gluster /usr/bin
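
A sanity check on each secondary (group membership and the gluster symlink):

id geobenutzer
ls -l /usr/bin/gluster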

on one of the secondary hosts:

gluster-mountbroker setup /var/mountbroker geogruppe

gluster-mountbroker add secvol01 geobenutzer
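
To confirm the broker setup, gluster-mountbroker also has a status subcommand, which should now list geobenutzer for secvol01 on all secondary nodes:

gluster-mountbroker status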

on one of the primary hosts:

ssh-keygen

ssh-copy-id geobenutzer@s01.gluster

gluster-georep-sshkey generate

gluster v geo-replication privol01 geobenutzer@s01.gluster::secvol01 create push-pem
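
Given that the worker later fails inside rsync-over-ssh, it is worth verifying from p01 that the login is truly non-interactive and that rsync answers on the far end, e.g.:

ssh -o BatchMode=yes geobenutzer@s01.gluster rsync --version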

on one of the secondary hosts:

/usr/libexec/glusterfs/set_geo_rep_pem_keys.sh

All the commands exited without an error message.

Restarted glusterd on all nodes.

then on the primary host:

gluster volume geo-replication privol01 geobenutzer@s01.gluster::secvol01 start
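
The session status can be queried with the matching status subcommand:

gluster volume geo-replication privol01 geobenutzer@s01.gluster::secvol01 status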

The status is showing:

PRIMARY NODE    PRIMARY VOL    PRIMARY BRICK     SECONDARY USER    SECONDARY                            SECONDARY NODE    STATUS     CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------------------------------
p03             privol01       /gluster/brick    geobenutzer       geobenutzer@s01.gluster::secvol01                      Passive    N/A             N/A
p02             privol01       /gluster/brick    geobenutzer       geobenutzer@s01.gluster::secvol01                      Passive    N/A             N/A
p01             privol01       /gluster/brick    geobenutzer       geobenutzer@s01.gluster::secvol01    N/A               Faulty     N/A             N/A

For p01 the status keeps cycling from "Initializing..." to "Active" with crawl status "History Crawl", then to "Faulty", and then back to "Initializing...".

But only for the primary host p01.

Here is the log from p01:
--------------------------------
[2024-02-13 18:30:06.64585] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-02-13 18:30:06.65004] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker [{brick=/gluster/brick}, {secondary_node=s01}]
[2024-02-13 18:30:06.147194] I [resource(worker /gluster/brick):1387:connect_remote] SSH: Initializing SSH connection between primary and secondary...
[2024-02-13 18:30:07.777785] I [resource(worker /gluster/brick):1435:connect_remote] SSH: SSH connection between primary and secondary established. [{duration=1.6304}]
[2024-02-13 18:30:07.777971] I [resource(worker /gluster/brick):1116:connect] GLUSTER: Mounting gluster volume locally...
[2024-02-13 18:30:08.822077] I [resource(worker /gluster/brick):1138:connect] GLUSTER: Mounted gluster volume [{duration=1.0438}]
[2024-02-13 18:30:08.823039] I [subcmds(worker /gluster/brick):84:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2024-02-13 18:30:10.861742] I [primary(worker /gluster/brick):1661:register] _GPrimary: Working dir [{path=/var/lib/misc/gluster/gsyncd/privol01_s01.gluster_secvol01/gluster-brick}]
[2024-02-13 18:30:10.864432] I [resource(worker /gluster/brick):1291:service_loop] GLUSTER: Register time [{time=1707849010}]
[2024-02-13 18:30:10.906805] I [gsyncdstatus(worker /gluster/brick):280:set_active] GeorepStatus: Worker Status Change [{status=Active}]
[2024-02-13 18:30:11.7656] I [gsyncdstatus(worker /gluster/brick):252:set_worker_crawl_status] GeorepStatus: Crawl Status Change [{status=History Crawl}]
[2024-02-13 18:30:11.7984] I [primary(worker /gluster/brick):1572:crawl] _GPrimary: starting history crawl [{turns=1}, {stime=(1707848760, 0)}, {etime=1707849011}, {entry_stime=None}]
[2024-02-13 18:30:12.9234] I [primary(worker /gluster/brick):1604:crawl] _GPrimary: secondary's time [{stime=(1707848760, 0)}]
[2024-02-13 18:30:12.388528] I [primary(worker /gluster/brick):2009:syncjob] Syncer: Sync Time Taken [{job=2}, {num_files=2}, {return_code=12}, {duration=0.0520}]
[2024-02-13 18:30:12.388745] E [syncdutils(worker /gluster/brick):845:errlog] Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-1_kow1tp/c343d8e67535166a0d66b71865f3f3c4.sock geobenutzer@s01:/proc/2675/cwd}, {error=12}]
[2024-02-13 18:30:12.826546] I [monitor(monitor):227:monitor] Monitor: worker died in startup phase [{brick=/gluster/brick}]
[2024-02-13 18:30:12.845687] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}]
---------------------

So the host p01 is trying to connect to s01 and fails; rsync's exit code 12 means "error in rsync protocol data stream" according to the rsync man page.

A look at host p02 of the primary volume is showing:
-------------------
[2024-02-13 18:25:55.179385] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-02-13 18:25:55.179572] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker [{brick=/gluster/brick}, {secondary_node=s01}]
[2024-02-13 18:25:55.258658] I [resource(worker /gluster/brick):1387:connect_remote] SSH: Initializing SSH connection between primary and secondary...
[2024-02-13 18:25:57.78159] I [resource(worker /gluster/brick):1435:connect_remote] SSH: SSH connection between primary and secondary established. [{duration=1.8194}]
[2024-02-13 18:25:57.78254] I [resource(worker /gluster/brick):1116:connect] GLUSTER: Mounting gluster volume locally...
[2024-02-13 18:25:58.123291] I [resource(worker /gluster/brick):1138:connect] GLUSTER: Mounted gluster volume [{duration=1.0450}]
[2024-02-13 18:25:58.123410] I [subcmds(worker /gluster/brick):84:subcmd_worker] <top>: Worker spawn successful. Acknowledging back to monitor
[2024-02-13 18:26:00.135934] I [primary(worker /gluster/brick):1661:register] _GPrimary: Working dir [{path=/var/lib/misc/gluster/gsyncd/privol01_s01.gluster_secvol01/gluster-brick}]
[2024-02-13 18:26:00.136287] I [resource(worker /gluster/brick):1291:service_loop] GLUSTER: Register time [{time=1707848760}]
[2024-02-13 18:26:00.179157] I [gsyncdstatus(worker /gluster/brick):286:set_passive] GeorepStatus: Worker Status Change [{status=Passive}]
------------------
This primary node is also connecting to s01, and it works.

It must have something to do with the primary host, because if I stop the replication and restart it, the primary host tries to connect to a different secondary host and fails with the same error:

----------------
Popen: command returned error [{cmd=rsync -aR0 --inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs --existing --xattrs --acls --ignore-missing-args . -e ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-1_kow1tp/c343d8e67535166a0d66b71865f3f3c4.sock geobenutzer@s01:/proc/2675/cwd}, {error=12}]
----------------

So the problem must be the primary host p01. That is the host on which I configured the passwordless SSH session.

This is a test setup; I also tried it before with two other volumes of 6 nodes each, and there I had 2 faulty nodes in the primary volume.

I can start and stop the replication session from any of the primary nodes, but p01 is always the faulty one.

Any help?

Stefan
<p style="FONT-SIZE: 10pt; FONT-FAMILY: ARIAL"><span style="FONT-FAMILY: Calibri Light"></p>
<p style="FONT-SIZE: 10pt; FONT-FAMILY: ARIAL">DISCLAIMER: This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error, please
notify the sender. This message contains confidential information and is intended only for the individual named. If you are not the named addressee, you should not disseminate, distribute or copy this email. Please notify the sender immediately by email if
you have received this email by mistake and delete this email from your system. <br>
<br>
If you are not the intended recipient, you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited. Thanks for your cooperation.
</span></p>
</body>
</html>