<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 8/28/24 18:20, Strahil Nikolov
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:1290417612.624712.1724883605101@mail.yahoo.com">It seems
the problem is not in you but in a deprecated python package.</blockquote>
<br>
<font face="FreeSerif">I appear to be very close, but I can't quite
get to the finish line.<br>
<br>
I updated /usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py
on both systems to replace readfp with read_file; you also
mentioned /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py, but
that file does not contain any instances of readfp.<br>
<br>
</font><font face="monospace">diff -U0 gsyncdconfig.py.~1~
gsyncdconfig.py<br>
--- gsyncdconfig.py.~1~ 2023-11-05 19:00:00.000000000 -0500<br>
+++ gsyncdconfig.py 2024-08-29 16:28:07.685753403 -0400<br>
@@ -99 +99 @@<br>
- cnf.readfp(f)<br>
+ cnf.read_file(f)<br>
@@ -143 +143 @@<br>
- cnf.readfp(f)<br>
+ cnf.read_file(f)<br>
@@ -184 +184 @@<br>
- conf.readfp(f)<br>
+ conf.read_file(f)<br>
@@ -189 +189 @@<br>
- conf.readfp(f)<br>
+ conf.read_file(f)</font><font face="FreeSerif"><br>
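<br>
For anyone who hits the same problem: readfp() was deprecated in
Python 3.2 and removed in Python 3.12, so on newer Python the old
call fails with AttributeError, and read_file() is the drop-in
replacement. A minimal sketch of the change in isolation (the
[vars]/ssh-port content below is only illustrative, mirroring what
ends up in gsyncd.conf):<br>
</font><font face="monospace">from configparser import ConfigParser<br>
from io import StringIO<br>
<br>
cnf = ConfigParser()<br>
# read_file() accepts the same open file or file-like object that readfp() did<br>
cnf.read_file(StringIO("[vars]\nssh-port = 6427\n"))<br>
print(cnf.get("vars", "ssh-port"))&nbsp;&nbsp;# prints 6427</font><font face="FreeSerif"><br>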
<br>
With that change in place, and while tailing *.log under
/var/log/glusterfs, I issued the create command and then configured
the port permanently:<br>
</font><font face="monospace">gluster volume geo-replication j
geoacct@pms::n create ssh-port 6427 push-pem<br>
gluster volume geo-replication j geoacct@pms::n config ssh-port
6427</font><font face="FreeSerif"><br>
<br>
Both were successful, and a status query then showed Created.
Thereafter, I issued the start command, at which point ...
nothing. I can run status queries forever, and I can re-run start,
which continues to exit with SUCCESS, but geo-replication remains in
the Created state, never moving to Active. I tried "start force",
but that didn't help, either.<br>
<br>
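Is the per-session gsyncd worker log the right place to look next?
The path I would guess, going by the session directory name under
/var/lib/glusterd, is:<br>
</font><font face="monospace">tail -f /var/log/glusterfs/geo-replication/j_pms_n/gsyncd.log</font><font face="FreeSerif"><br>
<br>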
I've looked for status files under
/var/lib/glusterfs/geo-replication; the file monitor.status says
"Created". Unsurprisingly, the "status detail" command shows
several additional "N/A" entries, and
/var/lib/glusterd/geo-replication/j_pms_n/gsyncd.conf contains
only a [vars] section with the configured ssh port.<br>
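(If I understand the CLI correctly, the full session configuration,
not just that [vars] file, can also be dumped by running
</font><font face="monospace">gluster volume geo-replication j
geoacct@pms::n config</font><font face="FreeSerif"> with no option
name, in case that listing would help with diagnosis.)<br>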
<br>
In status output, "secondary node" shows N/A. Should it?<br>
<br>
What is left, that feeds the battle but starves the victory?<br>
</font><br>
--karl<br>
------------------------------------------------<br>
<font face="monospace"><b>gluster volume geo-replication j
geoacct@pms::n start</b><br>
[2024-08-29 22:26:22.712156 +0000] I [cli.c:788:main] 0-cli:
Started running gluster with version 11.1<br>
[2024-08-29 22:26:22.771551 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=0}] <br>
[2024-08-29 22:26:22.771579 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=1}] <br>
<br>
==> ./glusterd.log <==<br>
[2024-08-29 22:26:22.825048 +0000] I [MSGID: 106327]
[glusterd-geo-rep.c:2644:glusterd_get_statefile_name]
0-management: Using passed config
template(/var/lib/glusterd/geo-replication/j_pms_n/gsyncd.conf). <br>
<br>
==> ./cmd_history.log <==<br>
[2024-08-29 22:26:23.464111 +0000] : volume geo-replication j
geoacct@pms::n start : SUCCESS<br>
<b>Starting geo-replication session between j & geoacct@pms::n
has been successful</b><br>
<br>
==> ./cli.log <==<br>
[2024-08-29 22:26:23.464347 +0000] I [input.c:31:cli_batch] 0-:
Exiting with: 0<br>
<br>
[2024-08-29 22:26:23.467828 +0000] I [cli.c:788:main] 0-cli:
Started running /usr/sbin/gluster with version 11.1<br>
[2024-08-29 22:26:23.467861 +0000] I [cli.c:664:cli_rpc_init]
0-cli: Connecting to remote glusterd at localhost<br>
[2024-08-29 22:26:23.522725 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=0}] <br>
[2024-08-29 22:26:23.522767 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=1}] <br>
[2024-08-29 22:26:23.523087 +0000] I
[cli-rpc-ops.c:808:gf_cli_get_volume_cbk] 0-cli: Received resp to
get vol: 0<br>
[2024-08-29 22:26:23.523285 +0000] I [input.c:31:cli_batch] 0-:
Exiting with: 0<br>
<br>
<b>gluster volume geo-replication j geoacct@pms::n status</b><br>
[2024-08-29 22:26:30.861404 +0000] I [cli.c:788:main] 0-cli:
Started running gluster with version 11.1<br>
[2024-08-29 22:26:30.914925 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=0}] <br>
[2024-08-29 22:26:30.915017 +0000] I [MSGID: 101188]
[event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started
thread with index [{index=1}] <br>
<br>
==> ./cmd_history.log <==<br>
[2024-08-29 22:26:31.407365 +0000] : volume geo-replication j
geoacct@pms::n status : SUCCESS<br>
<br>
PRIMARY NODE    PRIMARY VOL    PRIMARY BRICK    SECONDARY USER    SECONDARY         SECONDARY NODE    STATUS     CRAWL STATUS    LAST_SYNCED<br>
---------------------------------------------------------------------------------------------------------------------------------------------<br>
major           j              /xx/brick/j      geoacct           geoacct@pms::n    N/A               Created    N/A             N/A<br>
<br>
==> ./cli.log <==<br>
[2024-08-29 22:26:31.408209 +0000] I [input.c:31:cli_batch] 0-:
Exiting with: 0</font><br>
</body>
</html>