[Gluster-users] geo-rep will not initialize

Strahil Nikolov hunter86_bg at yahoo.com
Fri Aug 30 08:10:22 UTC 2024


 If push-pem worked, stop the session (even though it isn't actually running) and change the SSH port in the config:

Example:
gluster volume geo-replication sourcevol geoaccount@glusterdest::destvol config ssh-port 2244
Also, you can restart glusterd just to be on the safe side and then start the session.
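
For example, the full sequence would look roughly like this (names taken from the example above; "stop force" may be needed if a plain stop refuses because the session never ran):

gluster volume geo-replication sourcevol geoaccount@glusterdest::destvol stop
gluster volume geo-replication sourcevol geoaccount@glusterdest::destvol config ssh-port 2244
systemctl restart glusterd
gluster volume geo-replication sourcevol geoaccount@glusterdest::destvol start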

Best Regards,
Strahil Nikolov
    On Friday, August 30, 2024 at 02:02:47 GMT+3, Karl Kleinpaste <karl at kleinpaste.org> wrote:  
 
  On 8/28/24 18:20, Strahil Nikolov wrote:
  
It seems the problem is not on your end but in a deprecated Python API.
 
 I appear to be very close, but I can't quite get to the finish line.
 
 I updated /usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py on both systems, to replace readfp with read_file; you also mentioned /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py, but that does not contain any instances of readfp.
 
 diff -U0 gsyncdconfig.py.~1~ gsyncdconfig.py
 --- gsyncdconfig.py.~1~    2023-11-05 19:00:00.000000000 -0500
 +++ gsyncdconfig.py    2024-08-29 16:28:07.685753403 -0400
 @@ -99 +99 @@
 -            cnf.readfp(f)
 +            cnf.read_file(f)
 @@ -143 +143 @@
 -            cnf.readfp(f)
 +            cnf.read_file(f)
 @@ -184 +184 @@
 -            conf.readfp(f)
 +            conf.read_file(f)
 @@ -189 +189 @@
 -                conf.readfp(f)
 +                conf.read_file(f)
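
 For reference, readfp was deprecated in Python 3.2 and removed outright in Python 3.12, which is presumably why the stock scripts now fail. A quick grep along these lines should confirm that nothing else under syncdaemon still calls it:

 grep -rn readfp /usr/libexec/glusterfs/python/syncdaemon/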
 
 With that change in place, and tailing *.log under /var/log/glusterfs, I issued the create command and configured the port permanently:
 gluster volume geo-replication j geoacct@pms::n create ssh-port 6427 push-pem
 gluster volume geo-replication j geoacct@pms::n config ssh-port 6427
 
 These were successful, and a status query then showed Created. Thereafter, I issued the start command, at which point ... nothing. I can run status queries forever, and I can re-run start, which continues to exit with SUCCESS, but geo-rep remains in Created state, never moving to Active. I tried "start force" but that didn't help, either.
 
 I've looked for status files under /var/lib/glusterd/geo-replication; the file monitor.status says "Created." Unsurprisingly, the "status detail" command shows several additional "N/A" entries. /var/lib/glusterd/geo-replication/j_pms_n/gsyncd.conf contains only a [vars] section with the configured ssh port.
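
 If it helps, the per-session worker logs should live under /var/log/glusterfs/geo-replication/ on the primary (presumably in a directory mirroring the state directory name), which my tail of *.log directly under /var/log/glusterfs would not have caught:

 tail -F /var/log/glusterfs/geo-replication/j_pms_n/gsyncd.log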
 
 In status output, "secondary node" shows N/A. Should it?
 
 What is left that feeds the battle but starves the victory?
 
 --karl
 ------------------------------------------------
 gluster volume geo-replication j geoacct@pms::n start
 [2024-08-29 22:26:22.712156 +0000] I [cli.c:788:main] 0-cli: Started running gluster with version 11.1
 [2024-08-29 22:26:22.771551 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] 
 [2024-08-29 22:26:22.771579 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] 
 
 ==> ./glusterd.log <==
 [2024-08-29 22:26:22.825048 +0000] I [MSGID: 106327] [glusterd-geo-rep.c:2644:glusterd_get_statefile_name] 0-management: Using passed config template(/var/lib/glusterd/geo-replication/j_pms_n/gsyncd.conf). 
 
 ==> ./cmd_history.log <==
 [2024-08-29 22:26:23.464111 +0000]  : volume geo-replication j geoacct@pms::n start : SUCCESS
 Starting geo-replication session between j & geoacct@pms::n has been successful
 
 ==> ./cli.log <==
 [2024-08-29 22:26:23.464347 +0000] I [input.c:31:cli_batch] 0-: Exiting with: 0
 
 [2024-08-29 22:26:23.467828 +0000] I [cli.c:788:main] 0-cli: Started running /usr/sbin/gluster with version 11.1
 [2024-08-29 22:26:23.467861 +0000] I [cli.c:664:cli_rpc_init] 0-cli: Connecting to remote glusterd at localhost
 [2024-08-29 22:26:23.522725 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] 
 [2024-08-29 22:26:23.522767 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] 
 [2024-08-29 22:26:23.523087 +0000] I [cli-rpc-ops.c:808:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
 [2024-08-29 22:26:23.523285 +0000] I [input.c:31:cli_batch] 0-: Exiting with: 0
 
 gluster volume geo-replication j geoacct@pms::n status
 [2024-08-29 22:26:30.861404 +0000] I [cli.c:788:main] 0-cli: Started running gluster with version 11.1
 [2024-08-29 22:26:30.914925 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] 
 [2024-08-29 22:26:30.915017 +0000] I [MSGID: 101188] [event-epoll.c:643:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] 
 
 ==> ./cmd_history.log <==
 [2024-08-29 22:26:31.407365 +0000]  : volume geo-replication j geoacct@pms::n status : SUCCESS
  
 PRIMARY NODE    PRIMARY VOL    PRIMARY BRICK    SECONDARY USER    SECONDARY         SECONDARY NODE    STATUS     CRAWL STATUS    LAST_SYNCED
 --------------------------------------------------------------------------------------------------------------------------------------------
 major           j              /xx/brick/j      geoacct           geoacct@pms::n    N/A               Created    N/A             N/A
 
 ==> ./cli.log <==
 [2024-08-29 22:26:31.408209 +0000] I [input.c:31:cli_batch] 0-: Exiting with: 0
   

