<div dir="ltr"><div dir="ltr">Hi,<div> In my glusterd.log i am seeing this error message , is this related to the patch i applied? or do i need to open a new thread?</div><div><br></div><div><div> I [MSGID: 106327] [glusterd-geo-rep.c:4483:glusterd_read_status_file] 0-management: Using passed config template(/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf).</div><div>[2019-03-28 10:39:29.493554] E [MSGID: 106293] [glusterd-geo-rep.c:679:glusterd_query_extutil_generic] 0-management: reading data from child failed</div><div>[2019-03-28 10:39:29.493589] E [MSGID: 106305] [glusterd-geo-rep.c:4377:glusterd_fetch_values_from_config] 0-management: Unable to get configuration data for vol_75a5fd373d88ba687f591f3353fa05cf(master), 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f(slave)</div><div>[2019-03-28 10:39:29.493617] E [MSGID: 106328] [glusterd-geo-rep.c:4517:glusterd_read_status_file] 0-management: Unable to fetch config values for vol_75a5fd373d88ba687f591f3353fa05cf(master), 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f(slave). Trying default config template</div><div>[2019-03-28 10:39:29.553846] E [MSGID: 106328] [glusterd-geo-rep.c:4525:glusterd_read_status_file] 0-management: Unable to fetch config values for vol_75a5fd373d88ba687f591f3353fa05cf(master), 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f(slave)</div><div>[2019-03-28 10:39:29.553836] E [MSGID: 106293] [glusterd-geo-rep.c:679:glusterd_query_extutil_generic] 0-management: reading data from child failed</div><div>[2019-03-28 10:39:29.553844] E [MSGID: 106305] [glusterd-geo-rep.c:4377:glusterd_fetch_values_from_config] 0-management: Unable to get configuration data for vol_75a5fd373d88ba687f591f3353fa05cf(master), 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f(slave)</div></div><div><br></div><div>also while do a status call, i am not seeing one of the nodes which was reporting 'Passive' before ( did not change any configuration ) , any ideas how to troubleshoot this?</div><div><br></div><div>thanks for your help.</div><div><br></div><div>Maurya</div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 26, 2019 at 8:34 PM Aravinda <<a href="mailto:avishwan@redhat.com">avishwan@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Please check error message in gsyncd.log file in<br>
/var/log/glusterfs/geo-replication/<session-dir><br>
<br>
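For the config-fetch errors above, one quick cross-check is to look at what the session's gsyncd.conf actually contains on each master node. Below is a minimal sketch, assuming Python 3 is available on the node; the path is the session directory named in the glusterd.log lines above, so adjust it to your setup:

```python
#!/usr/bin/env python3
# Sketch: print the [vars] section of a geo-rep session config file.
# The path is the session directory named in the glusterd.log lines above.
import configparser

CONF = ("/var/lib/glusterd/geo-replication/"
        "vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_"
        "vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf")

cp = configparser.ConfigParser()
if not cp.read(CONF):
    print("could not read", CONF)
elif "vars" not in cp:
    print("no [vars] section in", CONF)
else:
    # ssh-port should show up here if the config-set (or manual edit)
    # reached this node
    for key, value in cp["vars"].items():
        print(key, "=", value)
```

If the section differs between the master nodes (for example, ssh-port present on one and missing on another), that mismatch is worth fixing before digging further.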
On Tue, 2019-03-26 at 19:44 +0530, Maurya M wrote:<br>
> Hi Aravinda,<br>
> Have patched my setup with your fix and re-run the setup, but this time I am getting a different error where it failed to commit the ssh-port on my other 2 nodes on the master cluster, so I manually copied the:<br>
> [vars]<br>
> ssh-port = 2222<br>
> <br>
> into gsyncd.conf<br>
> <br>
> and status reported back is as shown below. Any ideas how to troubleshoot this?<br>
> <br>
> MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                               SLAVE USER    SLAVE                                                  SLAVE NODE      STATUS             CRAWL STATUS    LAST_SYNCED
> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> 172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    172.16.201.4    Passive            N/A             N/A
> 172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A             Faulty             N/A             N/A
> 172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A             Initializing...    N/A             N/A
> <br>
> <br>
> <br>
> <br>
> > On Tue, Mar 26, 2019 at 1:40 PM Aravinda <avishwan@redhat.com> wrote:<br>
> > I got a chance to investigate this issue further, identified an issue<br>
> > with Geo-replication config set, and sent a patch to fix the same.<br>
> > <br>
> > BUG: https://bugzilla.redhat.com/show_bug.cgi?id=1692666<br>
> > Patch: https://review.gluster.org/22418<br>
> > <br>
> > On Mon, 2019-03-25 at 15:37 +0530, Maurya M wrote:<br>
> > > ran this command: ssh -p 2222 -i /var/lib/glusterd/geo-replication/secret.pem root@<slave node> gluster volume info --xml<br>
> > > <br>
> > > attaching the output.<br>
> > > <br>
> > > <br>
> > > <br>
> > > On Mon, Mar 25, 2019 at 2:13 PM Aravinda <avishwan@redhat.com> wrote:<br>
> > > > Geo-rep is running `ssh -i /var/lib/glusterd/geo-replication/secret.pem root@<slavenode> gluster volume info --xml` and parsing its output.<br>
> > > > Please try to run the command from the same node and let us know the output.<br>
> > > > <br>
> > > > <br>
> > > > On Mon, 2019-03-25 at 11:43 +0530, Maurya M wrote:<br>
> > > > > Now the error is on the same line 860 : as highlighted below:<br>
> > > > > <br>
> > > > > [2019-03-25 06:11:52.376238] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> > > > > Traceback (most recent call last):
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > > > >     func(args)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
> > > > >     return monitor.monitor(local, remote)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
> > > > >     return Monitor().multiplex(*distribute(local, remote))
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 386, in distribute
> > > > >     svol = Volinfo(slave.volume, "localhost", prelude)
> > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > > >     vi = XET.fromstring(vix)
> > > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1300, in XML
> > > > >     parser.feed(text)
> > > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1642, in feed
> > > > >     self._raiseerror(v)
> > > > >   File "/usr/lib64/python2.7/xml/etree/ElementTree.py", line 1506, in _raiseerror
> > > > >     raise err
> > > > > ParseError: syntax error: line 1, column 0
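A ParseError at line 1, column 0 generally means the text handed to ElementTree did not start with XML at all (empty output, or an error/banner string from the remote side). Below is a minimal sketch to run the same kind of command and show the raw output when parsing fails; it is Python 3, with the slave address, port and key path taken from this thread, so adjust them to your setup:

```python
#!/usr/bin/env python3
# Sketch: run `gluster volume info --xml` on the slave over ssh (as the
# geo-rep monitor does) and show the raw output if it is not valid XML.
# Host, port and key path are the ones mentioned in this thread; adjust.
import subprocess
import xml.etree.ElementTree as ET

cmd = [
    "ssh", "-p", "2222",
    "-i", "/var/lib/glusterd/geo-replication/secret.pem",
    "root@172.16.201.35",
    "gluster", "volume", "info", "--xml",
]

result = subprocess.run(cmd, capture_output=True, text=True)
try:
    root = ET.fromstring(result.stdout)
    print("parsed OK, root element:", root.tag)
except ET.ParseError as err:
    print("not valid XML:", err)
    print("first 200 chars of stdout:", repr(result.stdout[:200]))
    print("first 200 chars of stderr:", repr(result.stderr[:200]))
```

If stdout turns out to be empty or to start with an error message, that is what needs fixing before the monitor can build its Volinfo.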
> > > > > <br>
> > > > > <br>
> > > > > On Mon, Mar 25, 2019 at 11:29 AM Maurya M <mauryam@gmail.com> wrote:<br>
> > > > > > Sorry my bad, had put the print line to debug, i am using<br>
> > > > gluster<br>
> > > > > > 4.1.7, will remove the print line.<br>
> > > > > > <br>
> > > > > > On Mon, Mar 25, 2019 at 10:52 AM Aravinda <avishwan@redhat.com> wrote:<br>
> > > > > > > Below print statement looks wrong. Latest Glusterfs code<br>
> > > > doesn't<br>
> > > > > > > have<br>
> > > > > > > this print statement. Please let us know which version of<br>
> > > > > > > glusterfs you<br>
> > > > > > > are using.<br>
> > > > > > > <br>
> > > > > > > <br>
> > > > > > > ```
> > > > > > > File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > > > > >     print "debug varible " %vix
> > > > > > > ```
> > > > > > > <br>
> > > > > > > As a workaround, edit that file and comment the print<br>
> > line<br>
> > > > and<br>
> > > > > > > test the<br>
> > > > > > > geo-rep config command.<br>
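For reference, the TypeError itself comes from that stray debug line: `%` is applied to a format string that has no `%s` placeholder, so Python cannot consume the argument. Commenting the line out, as suggested above, is enough; if the debug output is still wanted, a corrected form would look like the sketch below (Python 2 syntax, to match syncdutils.py; `vix` here is just a stand-in value):

```python
# Python 2.7 sketch of the failing debug line and a corrected form.
vix = "<volume info xml would be here>"

# Broken: no %s placeholder, so the argument is never converted and
# Python raises "TypeError: not all arguments converted during string
# formatting".
# print "debug varible " % vix

# Working alternative:
print "debug variable %s" % vix
```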
> > > > > > > <br>
> > > > > > > <br>
> > > > > > > On Mon, 2019-03-25 at 09:46 +0530, Maurya M wrote:<br>
> > > > > > > > hi Aravinda,<br>
> > > > > > > > had the session created using : create ssh-port 2222 push-pem and also the :
> > > > > > > > 
> > > > > > > > gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 2222
> > > > > > > > <br>
> > > > > > > > hitting this message:<br>
> > > > > > > > geo-replication config-set failed for<br>
> > > > > > > > vol_75a5fd373d88ba687f591f3353fa05cf<br>
> > > > > > > > 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f<br>
> > > > > > > > geo-replication command failed<br>
> > > > > > > > <br>
> > > > > > > > Below is snap of status:<br>
> > > > > > > > <br>
> > > > > > > > [root@k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status
> > > > > > > > <br>
> > > > > > > > MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                               SLAVE USER    SLAVE                                                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED
> > > > > > > > --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
> > > > > > > > 172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > > > > 172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > > > > 172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A
> > > > > > > > <br>
> > > > > > > > any ideas ? where can find logs for the failed commands<br>
> > > > check<br>
> > > > > > > in<br>
> > > > > > > > gysncd.log , the trace is as below:<br>
> > > > > > > > <br>
> > > > > > > > [2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] <top>: FAIL:
> > > > > > > > Traceback (most recent call last):
> > > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in main
> > > > > > > >     func(args)
> > > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 50, in subcmd_monitor
> > > > > > > >     return monitor.monitor(local, remote)
> > > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 427, in monitor
> > > > > > > >     return Monitor().multiplex(*distribute(local, remote))
> > > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 370, in distribute
> > > > > > > >     mvol = Volinfo(master.volume, master.host)
> > > > > > > >   File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 860, in __init__
> > > > > > > >     print "debug varible " %vix
> > > > > > > > TypeError: not all arguments converted during string formatting
> > > > > > > > [2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > [2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] <top>: Using session config file path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf
> > > > > > > > <br>
> > > > > > > > regards,<br>
> > > > > > > > Maurya<br>
> > > > > > > > <br>
> > > > > > > > On Mon, Mar 25, 2019 at 9:08 AM Aravinda <avishwan@redhat.com> wrote:<br>
> > > > > > > > > Use `ssh-port <port>` while creating the Geo-rep session<br>
> > > > > > > > > <br>
> > > > > > > > > Ref: https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session<br>
> > > > > > > > > <br>
> > > > > > > > > And set the ssh-port option before start.<br>
> > > > > > > > > <br>
> > > > > > > > > ```
> > > > > > > > > gluster volume geo-replication <master_volume> \
> > > > > > > > >     [<slave_user>@]<slave_host>::<slave_volume> config ssh-port 2222
> > > > > > > > > ```
> > > > > > > > > <br>
-- <br>
regards<br>
Aravinda<br>
<br>