[Gluster-users] Geo-replication status always on 'Created'

Maurya M mauryam at gmail.com
Mon Mar 25 12:46:00 UTC 2019


Some additional logs from gverify-mastermnt.log and gverify-slavemnt.log:

[2019-03-25 12:13:23.819665] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-vol_75a5fd373d88ba687f591f3353fa05cf-client-2: error returned while attempting to connect to host:(null), port:0
[2019-03-25 12:13:23.819814] W [dict.c:923:str_to_data] (-->/usr/lib64/glusterfs/4.1.7/xlator/protocol/client.so(+0x40c0a) [0x7f3eb4d86c0a] -->/lib64/libglusterfs.so.0(dict_set_str+0x16) [0x7f3ebc334266] -->/lib64/libglusterfs.so.0(str_to_data+0x91) [0x7f3ebc330ea1] ) 0-dict: *value is NULL [Invalid argument]*
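These warnings line up with the geo-rep failure further down: the slave's volume info is coming back empty or unusable. In case it helps, here is a rough sketch of checking what the slave actually replies with (plain Python, not gluster code; the ssh command line is the one from this thread, so adjust host, port, and paths to your setup):

```python
import subprocess
import xml.etree.ElementTree as ET

def fetch_volinfo(cmd):
    # Run a volume-info command and try to parse its XML reply, roughly
    # what gsyncd's Volinfo helper does with `gluster volume info --xml`.
    out = subprocess.check_output(cmd, shell=True).decode()
    try:
        return ET.fromstring(out)
    except ET.ParseError as err:
        # Surface the raw reply so you can see what the slave actually sent.
        raise RuntimeError("not XML (%s); raw reply began: %r" % (err, out[:200]))

# Against the real slave it would be something like:
# fetch_volinfo("ssh -p 2222 -i /var/lib/glusterd/geo-replication/secret.pem "
#               "root@<slave node> gluster volume info --xml")
```

If the raw reply starts with anything other than `<` (a banner, an error message, a prompt), that text is what breaks the XML parse.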


Any idea how to fix this? If there is a patch file I can try, please share.

thanks,
Maurya


On Mon, Mar 25, 2019 at 3:37 PM Maurya M <mauryam at gmail.com> wrote:

> Ran this command: ssh -p 2222 -i
> /var/lib/glusterd/geo-replication/secret.pem root@<slave node> gluster
> volume info --xml
>
> attaching the output.
>
>
>
> On Mon, Mar 25, 2019 at 2:13 PM Aravinda <avishwan at redhat.com> wrote:
>
>> Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem
>> root@<slavenode> gluster volume info --xml` and parsing its output.
>> Please try to run the command from the same node and let us know the
>> output.
>>
>>
>> On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:
>> > Now the error is on the same line 860, as highlighted below:
>> >
>> > [2019-03-25 06:11:52.376238] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
>> > Traceback (most recent call last):
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
>> >     func(args)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
>> >     return monitor.monitor(local, remote)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
>> >     return Monitor().multiplex(*distribute(local, remote))
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 386, in distribute
>> >     svol = Volinfo(slave.volume, "localhost", prelude)
>> >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
>> >     vi = XET.fromstring(vix)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in XML
>> >     parser.feed(text)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
>> >     self._raiseerror(v)
>> >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
>> >     raise err
>> > ParseError: syntax error: line 1, column 0
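One note on that ParseError: ElementTree reports "syntax error: line 1, column 0" when the very first character of its input is not XML at all, which usually means the reply was an error or banner line rather than `gluster volume info --xml` output. A tiny illustration (plain Python, independent of gluster; the sample banner text is made up):

```python
import xml.etree.ElementTree as ET

samples = {
    "valid reply": "<cliOutput><opRet>0</opRet></cliOutput>",
    "non-XML banner": "Warning: Permanently added '<slave node>' to known hosts.",
}
for name, text in samples.items():
    try:
        ET.fromstring(text)
        print(name, "-> parsed OK")
    except ET.ParseError as err:
        print(name, "->", err)  # the banner gives: syntax error: line 1, column 0
```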
>> >
>> >
>> > On Mon, Mar 25, 2019 at 11:29 AM Maurya M <mauryam at gmail.com> wrote:
>> > > Sorry, my bad: I had put in the print line to debug. I am using
>> > > gluster 4.1.7 and will remove the print line.
>> > >
>> > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda <avishwan at redhat.com>
>> > > wrote:
>> > > > The print statement below looks wrong; the latest Glusterfs code
>> > > > doesn't have it. Please let us know which version of glusterfs you
>> > > > are using.
>> > > >
>> > > >
>> > > > ```
>> > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
>> > > >     print "debug varible " %vix
>> > > > ```
>> > > >
>> > > > As a workaround, edit that file, comment out the print line, and
>> > > > re-test the geo-rep config command.
>> > > >
>> > > >
>> > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:
>> > > > > Hi Aravinda,
>> > > > > I had created the session using: create ssh-port 2222 push-pem,
>> > > > > and also ran:
>> > > > >
>> > > > > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 2222
>> > > > >
>> > > > > and I am hitting this message:
>> > > > >
>> > > > > geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > > > > geo-replication command failed
>> > > > >
>> > > > > Below is a snapshot of the status:
>> > > > >
>> > > > > [root at k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
>> > > > >
>> > > > > MASTER NODE: 172.16.189.4
>> > > > > MASTER VOL: vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > MASTER BRICK: /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick
>> > > > > SLAVE USER: root
>> > > > > SLAVE: 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > > > > SLAVE NODE: N/A
>> > > > > STATUS: Created
>> > > > > CRAWL STATUS: N/A
>> > > > > LAST_SYNCED: N/A
>> > > > >
>> > > > > MASTER NODE: 172.16.189.35
>> > > > > MASTER VOL: vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > MASTER BRICK: /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick
>> > > > > SLAVE USER: root
>> > > > > SLAVE: 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > > > > SLAVE NODE: N/A
>> > > > > STATUS: Created
>> > > > > CRAWL STATUS: N/A
>> > > > > LAST_SYNCED: N/A
>> > > > >
>> > > > > MASTER NODE: 172.16.189.66
>> > > > > MASTER VOL: vol_75a5fd373d88ba687f591f3353fa05cf
>> > > > > MASTER BRICK: /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick
>> > > > > SLAVE USER: root
>> > > > > SLAVE: 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f
>> > > > > SLAVE NODE: N/A
>> > > > > STATUS: Created
>> > > > > CRAWL STATUS: N/A
>> > > > > LAST_SYNCED: N/A
>> > > > >
>> > > > > Any ideas? Where can I find logs for the failed command? Checking
>> > > > > gsyncd.log, the trace is as below:
>> > > > >
>> > > > > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] <top>: Using session config file      path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
>> > > > > Traceback (most recent call last):
>> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
>> > > > >     func(args)
>> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
>> > > > >     return monitor.monitor(local, remote)
>> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
>> > > > >     return Monitor().multiplex(*distribute(local, remote))
>> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in distribute
>> > > > >     mvol = Volinfo(master.volume, master.host)
>> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
>> > > > >     print "debug varible " %vix
>> > > > > TypeError: not all arguments converted during string formatting
>> > > > > [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] <top>: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] <top>: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] <top>: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] <top>: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] <top>: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] <top>: Using session config file        path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] <top>: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] <top>: Using session config file    path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > > [2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] <top>: Using session config file    path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
>> > > > >
>> > > > > regards,
>> > > > > Maurya
>> > > > >
>> > > > > On Mon, Mar 25, 2019 at 9:08 AM Aravinda <avishwan at redhat.com>
>> > > > wrote:
>> > > > > > Use `ssh-port <port>` while creating the Geo-rep session
>> > > > > >
>> > > > > > Ref:
>> > > > > >
>> > > > > > https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session
>> > > > > >
>> > > > > > And set the ssh-port option before start.
>> > > > > >
>> > > > > > ```
>> > > > > > gluster volume geo-replication <master_volume> \
>> > > > > >     [<slave_user>@]<slave_host>::<slave_volume> config \
>> > > > > >     ssh-port 2222
>> > > > > > ```
>> > > > > >
>> --
>> regards
>> Aravinda
>>
>>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: gverify-slavemnt.log
Type: application/octet-stream
Size: 3556 bytes
Desc: not available
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20190325/16408480/attachment-0001.obj>

