<div dir="ltr" data-setdir="false">If push-pem worked, stop the session (despite it's not working) and change in the config the port:<br><br>Example:<br><div><span>gluster volume geo-replication sourcevol geoaccount@glusterdest::destvol config ssh_port 2244</span></div></div><div dir="ltr" data-setdir="false"><br><div>Also, you can restart glusterd just to be on the safe side and then start the session.<br><br>Best Regards,<br>Strahil Nikolov<br></div></div>
</div><div id="yahoo_quoted_5920042373" class="yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>
On Friday, August 30, 2024 at 02:02:47 GMT+3, Karl Kleinpaste <karl@kleinpaste.org> wrote:

<div class="yiv3134030779moz-cite-prefix">On 8/28/24 18:20, Strahil Nikolov
wrote:<br clear="none">
</div>
<blockquote type="cite">It seems
the problem is not in you but in a deprecated python package.</blockquote>
<br clear="none">
<font face="FreeSerif">I appear to be very close, but I can't quite
get to the finish line.<br clear="none">
<br clear="none">
I updated /usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py
on both systems, to replace readfp with read_file; you also
mentioned /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py, but
that does not contain any instances of readfp.<br clear="none">
<br clear="none">
</font><font face="monospace">diff -U0 gsyncdconfig.py.~1~
gsyncdconfig.py<br clear="none">
--- gsyncdconfig.py.~1~ 2023-11-05 19:00:00.000000000 -0500<br clear="none">
+++ gsyncdconfig.py 2024-08-29 16:28:07.685753403 -0400<br clear="none">
@@ -99 +99 @@<br clear="none">
- cnf.readfp(f)<br clear="none">
+ cnf.read_file(f)<br clear="none">
@@ -143 +143 @@<br clear="none">
- cnf.readfp(f)<br clear="none">
+ cnf.read_file(f)<br clear="none">
@@ -184 +184 @@<br clear="none">
- conf.readfp(f)<br clear="none">
+ conf.read_file(f)<br clear="none">
@@ -189 +189 @@<br clear="none">
- conf.readfp(f)<br clear="none">
+ conf.read_file(f)</font><font face="FreeSerif"><br clear="none">
<br clear="none">
With that change, and tailing *.log under /var/log/glusterfs, I issued the create command and configured the port permanently:

gluster volume geo-replication j geoacct@pms::n create ssh-port 6427 push-pem
gluster volume geo-replication j geoacct@pms::n config ssh-port 6427

These were successful, and a status query then shows Created. Thereafter, I issued the start command, at which point ... nothing. I can run status queries forever, and I can re-run start, which continues to exit with SUCCESS, but georep remains in Created state, never moving to Active. I tried "start force" but that didn't help, either.

I've looked for status files under /var/lib/glusterd/geo-replication; the file monitor.status says "Created." Unsurprisingly, the "status detail" command shows several additional "N/A" entries. /var/lib/glusterd/geo-replication/j_pms_n/gsyncd.conf contains only a [vars] section with the configured ssh port.

In status output, "secondary node" shows N/A. Should it?<br clear="none">
<br clear="none">
What is left, that feeds the battle but starves the victory?<br clear="none">
</font><br clear="none">
--karl<br clear="none">
------------------------------------------------<br clear="none">
<font face="monospace"><b>gluster volume geo-replication j
geoacct@pms::n start</b><br clear="none">
[2024-08-29 22:26:22.712156 +0000] I [cli.c:788:main] 0-cli:
Started running gluster with version 11.1<br clear="none">
[2024-08-29 22:26:22.771551 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=0}] <br clear="none">
[2024-08-29 22:26:22.771579 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=1}] <br clear="none">
<br clear="none">
==> ./glusterd.log <==<br clear="none">
[2024-08-29 22:26:22.825048 +0000] I [MSGID: 106327]
[glusterd-geo-rep.c:2644:glusterd_get_statefile_name]
0-management: Using passed config
template(/var/lib/glusterd/geo-replication/j_pms_n/gsyncd.conf). <br clear="none">
<br clear="none">
==> ./cmd_history.log <==<br clear="none">
[2024-08-29 22:26:23.464111 +0000] : volume geo-replication j
geoacct@pms::n start : SUCCESS<br clear="none">
<b>Starting geo-replication session between j & geoacct@pms::n
has been successful</b><br clear="none">
<br clear="none">
==> ./cli.log <==<br clear="none">
[2024-08-29 22:26:23.464347 +0000] I [input.c:31:cli_batch] 0-:
Exiting with: 0<br clear="none">
<br clear="none">
[2024-08-29 22:26:23.467828 +0000] I [cli.c:788:main] 0-cli:
Started running /usr/sbin/gluster with version 11.1<br clear="none">
[2024-08-29 22:26:23.467861 +0000] I [cli.c:664:cli_rpc_init]
0-cli: Connecting to remote glusterd at localhost<br clear="none">
[2024-08-29 22:26:23.522725 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=0}] <br clear="none">
[2024-08-29 22:26:23.522767 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=1}] <br clear="none">
[2024-08-29 22:26:23.523087 +0000] I
[cli-rpc-ops.c:808:gf_cli_get_volume_cbk] 0-cli: Received resp to
get vol: 0<br clear="none">
[2024-08-29 22:26:23.523285 +0000] I [input.c:31:cli_batch] 0-:
Exiting with: 0<br clear="none">
<br clear="none">
<b>gluster volume geo-replication j geoacct@pms::n status</b><br clear="none">
[2024-08-29 22:26:30.861404 +0000] I [cli.c:788:main] 0-cli:
Started running gluster with version 11.1<br clear="none">
[2024-08-29 22:26:30.914925 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=0}] <br clear="none">
[2024-08-29 22:26:30.915017 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=1}] <br clear="none">
<br clear="none">
==> ./cmd_history.log <==<br clear="none">
[2024-08-29 22:26:31.407365 +0000] : volume geo-replication j
geoacct@pms::n status : SUCCESS<div id="yiv3134030779yqtfd20696" class="yiv3134030779yqt1215728569"><br clear="none">
<br clear="none">
PRIMARY NODE PRIMARY VOL PRIMARY BRICK SECONDARY USER
SECONDARY SECONDARY NODE STATUS CRAWL STATUS
LAST_SYNCED </div><br clear="none">
---------------------------------------------------------------------------------------------------------------------------------------------<br clear="none">
major j /xx/brick/j geoacct
geoacct@pms::n N/A Created N/A
N/A <br clear="none">
<br clear="none">
==> ./cli.log <==<br clear="none">
[2024-08-29 22:26:31.408209 +0000] I [input.c:31:cli_batch] 0-:
Exiting with: 0</font><div id="yiv3134030779yqtfd94972" class="yiv3134030779yqt1215728569"><br clear="none">