[Gluster-users] Geo-replication status always on 'Created'
Aravinda
avishwan at redhat.com
Mon Mar 25 05:21:57 UTC 2019
The print statement below looks wrong; the latest Glusterfs code doesn't have
this print statement. Please let us know which version of Glusterfs you
are using.
```
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
    print "debug varible " %vix
```
As a workaround, edit that file, comment out the print line, and then
re-test the geo-rep config command.
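
For context, the TypeError in that traceback comes from the format string containing no conversion specifier (no %s), so the `%` operator has nothing to convert its argument into. Here is a quick sketch to reproduce it outside gsyncd and jump to the offending line (assuming python2 and vi are available on the node, since the offending line uses Python 2 print syntax):

```
# Reproduce the formatting error from the traceback (plain Python 2, unrelated to gsyncd):
python2 -c 'vix = 1; print "debug varible " %vix'
# -> TypeError: not all arguments converted during string formatting

# Workaround described above: open syncdutils.py at the line reported in the
# traceback, comment the print out, then re-run the geo-rep config command.
vi +860 /usr/libexec/glusterfs/python/syncdaemon/syncdutils.py
```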
On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
> hi Aravinda,
> I had created the session using `create ssh-port 2222 push-pem` and
> also ran:
>
> gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port
> 2222
>
> but I am hitting this message:
> geo-replication config-set failed for
> vol_75a5fd373d88ba687f591f3353fa05cf
> 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> geo-replication command failed
>
> Below is a snapshot of the status:
>
> [root at k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>
> MASTER NODE:    172.16.189.4
> MASTER VOL:     vol_75a5fd373d88ba687f591f3353fa05cf
> MASTER BRICK:   /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick
> SLAVE USER:     root
> SLAVE:          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> SLAVE NODE:     N/A
> STATUS:         Created
> CRAWL STATUS:   N/A
> LAST_SYNCED:    N/A
>
> MASTER NODE:    172.16.189.35
> MASTER VOL:     vol_75a5fd373d88ba687f591f3353fa05cf
> MASTER BRICK:   /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick
> SLAVE USER:     root
> SLAVE:          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> SLAVE NODE:     N/A
> STATUS:         Created
> CRAWL STATUS:   N/A
> LAST_SYNCED:    N/A
>
> MASTER NODE:    172.16.189.66
> MASTER VOL:     vol_75a5fd373d88ba687f591f3353fa05cf
> MASTER BRICK:   /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick
> SLAVE USER:     root
> SLAVE:          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
> SLAVE NODE:     N/A
> STATUS:         Created
> CRAWL STATUS:   N/A
> LAST_SYNCED:    N/A
>
> Any ideas? Where can I find the logs for the failed commands?
> Checking gsyncd.log, the trace is as below:
>
> [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> Traceback (most recent call last):
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
>     func(args)
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
>     return monitor.monitor(local, remote)
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
>     return Monitor().multiplex(*distribute(local, remote))
>   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in distribute
>     mvol = Volinfo(master.volume, master.host)
>   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
>     print "debug varible " %vix
> TypeError: not all arguments converted during string formatting
> [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> [2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>
> regards,
> Maurya
>
> On Mon, Mar 25, 2019 at 9:08 AM Aravinda <avishwan at redhat.com> wrote:
> > Use `ssh-port <port>` while creating the Geo-rep session
> >
> > Ref:
> > https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session
> >
> > And set the ssh-port config option before starting the session.
> >
> > ```
> > gluster volume geo-replication <master_volume> \
> >     [<slave_user>@]<slave_host>::<slave_volume> \
> >     config ssh-port 2222
> > ```
> >
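
To summarize the ssh-port steps quoted above, the overall sequence looks roughly like this (a sketch reusing the volume names from this thread; the exact option order for `create` can differ between Glusterfs versions):

```
# Create the session against the non-standard SSH port and push the pem keys
gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf \
    172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f create ssh-port 2222 push-pem

# Set the ssh-port config option for the session
gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf \
    172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 2222

# Start the session only after the ssh-port option is set
gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf \
    172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f start
```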
--
regards
Aravinda