<div dir="ltr"><div dir="ltr">Tried even this; it did not work:<div><br></div><div><div>[root@k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f<b> config ssh-command &#39;ssh -p 2222&#39;</b></div><div>geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f</div><div>geo-replication command failed</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 25, 2019 at 9:46 AM Maurya M &lt;<a href="mailto:mauryam@gmail.com">mauryam@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">hi Aravinda,<div> I created the session using `create ssh-port 2222 push-pem` and also ran:</div><div><br></div><div>gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f config ssh-port 2222<br></div><div><br></div><div>but am hitting this message:</div><div><div>geo-replication config-set failed for vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f</div><div>geo-replication command failed</div></div><div><br></div><div>Below is a snapshot of the status:</div><div><br></div><div><div>[root@k8s-agentpool1-24779565-1 vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f]# gluster volume geo-replication vol_75a5fd373d88ba687f591f3353fa05cf 172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f status</div><div><br></div><div>MASTER NODE      MASTER VOL                              MASTER BRICK                                                                                               SLAVE 
USER    SLAVE                                                  SLAVE NODE    STATUS     CRAWL STATUS    LAST_SYNCED</div><div>--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------</div><div>172.16.189.4     vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_116fb9427fb26f752d9ba8e45e183cb1/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A</div><div>172.16.189.35    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_05708751110fe60b3e7da15bdcf6d4d4/brick_266bb08f0d466d346f8c0b19569736fb/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A</div><div>172.16.189.66    vol_75a5fd373d88ba687f591f3353fa05cf    /var/lib/heketi/mounts/vg_4b92a2b687e59b7311055d3809b77c06/brick_dfa44c9380cdedac708e27e2c2a443a0/brick    root          172.16.201.35::vol_e783a730578e45ed9d51b9a80df6c33f    N/A           Created    N/A             N/A</div></div><div><br></div><div>any ideas ? 
Where can I find the logs for the failed commands? Checking in gsyncd.log, the trace is as below:</div><div><br></div><div><div>[2019-03-25 04:04:42.295043] I [gsyncd(monitor):297:main] &lt;top&gt;: Using session config file      path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:04:42.387192] E [syncdutils(monitor):332:log_raise_exception] &lt;top&gt;: FAIL:</div><div>Traceback (most recent call last):</div><div>  File &quot;/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py&quot;, line 311, in main</div><div>    func(args)</div><div>  File &quot;/usr/libexec/glusterfs/python/syncdaemon/subcmds.py&quot;, line 50, in subcmd_monitor</div><div>    return monitor.monitor(local, remote)</div><div>  File &quot;/usr/libexec/glusterfs/python/syncdaemon/monitor.py&quot;, line 427, in monitor</div><div>    return Monitor().multiplex(*distribute(local, remote))</div><div>  File &quot;/usr/libexec/glusterfs/python/syncdaemon/monitor.py&quot;, line 370, in distribute</div><div>    mvol = Volinfo(master.volume, master.host)</div><div>  File &quot;/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py&quot;, line 860, in __init__</div><div>    print &quot;debug varible &quot; %vix</div><div>TypeError: not all arguments converted during string formatting</div><div>[2019-03-25 04:04:48.997519] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:04:49.93528] I [gsyncd(status):297:main] &lt;top&gt;: Using session config file        path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:08:07.194348] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file   
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:08:07.262588] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:08:07.550080] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:08:18.933028] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:08:19.25285] I [gsyncd(status):297:main] &lt;top&gt;: Using session config file        path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:09:15.766882] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file   path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:09:16.30267] I [gsyncd(config-get):297:main] &lt;top&gt;: Using session config file    path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div><div>[2019-03-25 04:09:16.89006] I [gsyncd(config-set):297:main] &lt;top&gt;: Using session config file    
path=/var/lib/glusterd/geo-replication/vol_75a5fd373d88ba687f591f3353fa05cf_172.16.201.35_vol_e783a730578e45ed9d51b9a80df6c33f/gsyncd.conf</div></div><div><br></div><div>regards,</div><div>Maurya</div></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 25, 2019 at 9:08 AM Aravinda &lt;<a href="mailto:avishwan@redhat.com" target="_blank">avishwan@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Use `ssh-port &lt;port&gt;` while creating the Geo-rep session<br>
<br>
Ref: <br>
<a href="https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session" rel="noreferrer" target="_blank">https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/#creating-the-session</a><br>
<br>
And set the ssh-port option before start.<br>
<br>
```<br>
gluster volume geo-replication &lt;master_volume&gt; \<br>
    [&lt;slave_user&gt;@]&lt;slave_host&gt;::&lt;slave_volume&gt; \<br>
    config ssh-port 2222<br>
```<br>
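For example, end to end (a sketch; replace the placeholders with your<br>
volume and host names, and note that `ssh-port` in the create command<br>
needs a recent enough Gluster release):<br>
<br>
```<br>
# Create the session against sshd listening on 2222; push-pem<br>
# distributes the pem keys to the slave nodes:<br>
gluster volume geo-replication &lt;master_volume&gt; \<br>
    [&lt;slave_user&gt;@]&lt;slave_host&gt;::&lt;slave_volume&gt; \<br>
    create ssh-port 2222 push-pem<br>
<br>
# Persist the same port in the session config, then start:<br>
gluster volume geo-replication &lt;master_volume&gt; \<br>
    [&lt;slave_user&gt;@]&lt;slave_host&gt;::&lt;slave_volume&gt; \<br>
    config ssh-port 2222<br>
gluster volume geo-replication &lt;master_volume&gt; \<br>
    [&lt;slave_user&gt;@]&lt;slave_host&gt;::&lt;slave_volume&gt; start<br>
```<br>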
<br>
-- <br>
regards<br>
Aravinda<br>
<a href="http://aravindavk.in" rel="noreferrer" target="_blank">http://aravindavk.in</a><br>
<br>
<br>
On Sun, 2019-03-24 at 17:13 +0530, Maurya M wrote:<br>
&gt; I did all the suggestions mentioned in the log trace and have another<br>
&gt; setup using the root user, but there I have an issue with the ssh<br>
&gt; command, as I am unable to change the ssh port to use the default 22.<br>
&gt; My servers (Azure AKS engine) are configured to use 2222, where I am<br>
&gt; unable to change the ports; restarting the ssh service gives me an error!<br>
&gt; <br>
&gt; Is this the correct syntax to configure the ssh-command:<br>
&gt; gluster volume geo-replication vol_041afbc53746053368a1840607636e97<br>
&gt; xxx.xx.xxx.xx::vol_a5aee81a873c043c99a938adcb5b5781 config ssh-<br>
&gt; command &#39;/usr/sbin/sshd -D  -p 2222&#39;<br>
&gt; <br>
&gt; On Sun, Mar 24, 2019 at 4:38 PM Maurya M &lt;<a href="mailto:mauryam@gmail.com" target="_blank">mauryam@gmail.com</a>&gt; wrote:<br>
&gt; &gt; Did give the permissions on both &quot;/var/log/glusterfs/&quot; &amp;<br>
&gt; &gt; &quot;/var/lib/glusterd/&quot; too, but it seems the directory I mounted<br>
&gt; &gt; using heketi is having issues:<br>
&gt; &gt; <br>
&gt; &gt; [2019-03-22 09:48:21.546308] E [syncdutils(worker<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick):305:log_raise_exception]<br>
&gt; &gt; &lt;top&gt;: connection to peer is broken<br>
&gt; &gt; [2019-03-22 09:48:21.546662] E [syncdutils(worker<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick):309:log_raise_exception]<br>
&gt; &gt; &lt;top&gt;: getting &quot;No such file or directory&quot;errors is most likely due<br>
&gt; &gt; to MISCONFIGURATION, please remove all the public keys added by<br>
&gt; &gt; geo-replication from authorized_keys file in slave nodes and run<br>
&gt; &gt; Geo-replication create command again.<br>
&gt; &gt; [2019-03-22 09:48:21.546736] E [syncdutils(worker<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick):316:log_raise_exception]<br>
&gt; &gt; &lt;top&gt;: If `gsec_create container` was used, then run `gluster<br>
&gt; &gt; volume geo-replication &lt;MASTERVOL&gt;<br>
&gt; &gt; [&lt;SLAVEUSER&gt;@]&lt;SLAVEHOST&gt;::&lt;SLAVEVOL&gt; config remote-gsyncd<br>
&gt; &gt; &lt;GSYNCD_PATH&gt; (Example GSYNCD_PATH:<br>
&gt; &gt; `/usr/libexec/glusterfs/gsyncd`)<br>
&gt; &gt; [2019-03-22 09:48:21.546858] E [syncdutils(worker<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick):801:errlog] Popen: command<br>
&gt; &gt; returned error    cmd=ssh -oPasswordAuthentication=no<br>
&gt; &gt; -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-<br>
&gt; &gt; replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-<br>
&gt; &gt; aux-ssh-OaPGc3/c784230c9648efa4d529975bd779c551.sock <br>
&gt; &gt; <a href="mailto:azureuser@172.16.201.35" target="_blank">azureuser@172.16.201.35</a> /nonexistent/gsyncd slave<br>
&gt; &gt; vol_041afbc53746053368a1840607636e97 azureuser@172.16.201.35::vol_a<br>
&gt; &gt; 5aee81a873c043c99a938adcb5b5781 --master-node 172.16.189.4 --<br>
&gt; &gt; master-node-id dd4efc35-4b86-4901-9c00-483032614c35 --master-brick<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick --local-node 172.16.201.35 --<br>
&gt; &gt; local-node-id 7eb0a2b6-c4d6-41b1-a346-0638dbf8d779 --slave-timeout<br>
&gt; &gt; 120 --slave-log-level INFO --slave-gluster-log-level INFO --slave-<br>
&gt; &gt; gluster-command-dir /usr/sbin      error=127<br>
&gt; &gt; [2019-03-22 09:48:21.546977] E [syncdutils(worker<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick):805:logerr] Popen: ssh&gt; bash:<br>
&gt; &gt; /nonexistent/gsyncd: No such file or directory<br>
&gt; &gt; [2019-03-22 09:48:21.565583] I [repce(agent<br>
&gt; &gt; /var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/brick_b3<br>
&gt; &gt; eab2394433f02f5617012d4ae3c28f/brick):80:service_loop] RepceServer:<br>
&gt; &gt; terminating on reaching EOF.<br>
&gt; &gt; [2019-03-22 09:48:21.565745] I [monitor(monitor):266:monitor]<br>
&gt; &gt; Monitor: worker died before establishing connection      <br>
&gt; &gt; brick=/var/lib/heketi/mounts/vg_aee3df7b0bb2451bc00a73358c5196a2/br<br>
&gt; &gt; ick_b3eab2394433f02f5617012d4ae3c28f/brick<br>
&gt; &gt; [2019-03-22 09:48:21.579195] I<br>
&gt; &gt; [gsyncdstatus(monitor):245:set_worker_status] GeorepStatus: Worker<br>
&gt; &gt; Status Change status=Faulty<br>
&gt; &gt; <br>
&gt; &gt; On Fri, Mar 22, 2019 at 10:23 PM Sunny Kumar &lt;<a href="mailto:sunkumar@redhat.com" target="_blank">sunkumar@redhat.com</a>&gt;<br>
&gt; &gt; wrote:<br>
&gt; &gt; &gt; Hi Maurya,<br>
&gt; &gt; &gt; <br>
&gt; &gt; &gt; Looks like the hook script failed to set permissions for azureuser<br>
&gt; &gt; &gt; on &quot;/var/log/glusterfs&quot;.<br>
&gt; &gt; &gt; You can assign the permissions manually for the directory, and<br>
&gt; &gt; &gt; then it will work.<br>
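&gt; &gt; &gt; For example, a rough sketch (assuming azureuser is the user your<br>
&gt; &gt; &gt; geo-rep session runs as; adjust the owner and mode to your setup):<br>
&gt; &gt; &gt; <br>
&gt; &gt; &gt; ```<br>
&gt; &gt; &gt; # let azureuser write its cli.log under /var/log/glusterfs<br>
&gt; &gt; &gt; chown -R azureuser /var/log/glusterfs<br>
&gt; &gt; &gt; chmod -R u+rwX /var/log/glusterfs<br>
&gt; &gt; &gt; ```<br>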
&gt; &gt; &gt; <br>
&gt; &gt; &gt; -Sunny<br>
&gt; &gt; &gt; <br>
&gt; &gt; &gt; On Fri, Mar 22, 2019 at 2:07 PM Maurya M &lt;<a href="mailto:mauryam@gmail.com" target="_blank">mauryam@gmail.com</a>&gt;<br>
&gt; &gt; &gt; wrote:<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; hi Sunny,<br>
&gt; &gt; &gt; &gt;  Passwordless ssh to :<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i<br>
&gt; &gt; &gt; /var/lib/glusterd/geo-replication/secret.pem -p 22 <br>
&gt; &gt; &gt; <a href="mailto:azureuser@172.16.201.35" target="_blank">azureuser@172.16.201.35</a><br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; logs in fine, but when the whole command is run I get permission<br>
&gt; &gt; &gt; issues again:<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i<br>
&gt; &gt; &gt; /var/lib/glusterd/geo-replication/secret.pem -p 22 <br>
&gt; &gt; &gt; <a href="mailto:azureuser@172.16.201.35" target="_blank">azureuser@172.16.201.35</a> gluster --xml --remote-host=localhost<br>
&gt; &gt; &gt; volume info vol_a5aee81a873c043c99a938adcb5b5781 -v<br>
&gt; &gt; &gt; &gt; ERROR: failed to create logfile &quot;/var/log/glusterfs/cli.log&quot;<br>
&gt; &gt; &gt; (Permission denied)<br>
&gt; &gt; &gt; &gt; ERROR: failed to open logfile /var/log/glusterfs/cli.log<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; any idea here ?<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; thanks,<br>
&gt; &gt; &gt; &gt; Maurya<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; On Thu, Mar 21, 2019 at 2:43 PM Maurya M &lt;<a href="mailto:mauryam@gmail.com" target="_blank">mauryam@gmail.com</a>&gt;<br>
&gt; &gt; &gt; wrote:<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt; hi Sunny,<br>
&gt; &gt; &gt; &gt;&gt;  I did use the [1] link for the setup, when I encountered this<br>
&gt; &gt; &gt; error during ssh-copy-id (so I set up passwordless ssh by<br>
&gt; &gt; &gt; manually copying the private/public keys to all the nodes, both<br>
&gt; &gt; &gt; master &amp; slave):<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt; [root@k8s-agentpool1-24779565-1 ~]# ssh-copy-id <br>
&gt; &gt; &gt; geouser@xxx.xx.xxx.x<br>
&gt; &gt; &gt; &gt;&gt; /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:<br>
&gt; &gt; &gt; &quot;/root/.ssh/id_rsa.pub&quot;<br>
&gt; &gt; &gt; &gt;&gt; The authenticity of host &#39; xxx.xx.xxx.x   ( xxx.xx.xxx.x  )&#39;<br>
&gt; &gt; &gt; can&#39;t be established.<br>
&gt; &gt; &gt; &gt;&gt; ECDSA key fingerprint is<br>
&gt; &gt; &gt; SHA256:B2rNaocIcPjRga13oTnopbJ5KjI/7l5fMANXc+KhA9s.<br>
&gt; &gt; &gt; &gt;&gt; ECDSA key fingerprint is<br>
&gt; &gt; &gt; MD5:1b:70:f9:7a:bf:35:33:47:0c:f2:c1:cd:21:e2:d3:75.<br>
&gt; &gt; &gt; &gt;&gt; Are you sure you want to continue connecting (yes/no)? yes<br>
&gt; &gt; &gt; &gt;&gt; /usr/bin/ssh-copy-id: INFO: attempting to log in with the new<br>
&gt; &gt; &gt; key(s), to filter out any that are already installed<br>
&gt; &gt; &gt; &gt;&gt; /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed --<br>
&gt; &gt; &gt; if you are prompted now it is to install the new keys<br>
&gt; &gt; &gt; &gt;&gt; Permission denied (publickey).<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt; To start afresh, what all needs to be torn down / deleted? Do<br>
&gt; &gt; &gt; we have any script for it? Where all do I need to delete the pem<br>
&gt; &gt; &gt; keys?<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt; thanks,<br>
&gt; &gt; &gt; &gt;&gt; Maurya<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt; On Thu, Mar 21, 2019 at 2:12 PM Sunny Kumar &lt;<br>
&gt; &gt; &gt; <a href="mailto:sunkumar@redhat.com" target="_blank">sunkumar@redhat.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; Hey, you can start afresh; I think you are not following the<br>
&gt; &gt; &gt; proper setup steps.<br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; Please follow these steps [1] to create the geo-rep session;<br>
&gt; &gt; &gt; &gt;&gt;&gt; you can delete the old one and do a fresh start. Alternatively,<br>
&gt; &gt; &gt; &gt;&gt;&gt; you can use this tool [2] to set up geo-rep.<br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; [1]. <br>
&gt; &gt; &gt; <a href="https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/" rel="noreferrer" target="_blank">https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/</a><br>
&gt; &gt; &gt; &gt;&gt;&gt; [2]. <a href="http://aravindavk.in/blog/gluster-georep-tools/" rel="noreferrer" target="_blank">http://aravindavk.in/blog/gluster-georep-tools/</a><br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; /Sunny<br>
&gt; &gt; &gt; &gt;&gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; On Thu, Mar 21, 2019 at 11:28 AM Maurya M &lt;<a href="mailto:mauryam@gmail.com" target="_blank">mauryam@gmail.com</a>&gt;<br>
&gt; &gt; &gt; wrote:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; Hi Sunil,<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;  I did run this on the slave node:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser<br>
&gt; &gt; &gt; vol_041afbc53746053368a1840607636e97<br>
&gt; &gt; &gt; vol_a5aee81a873c043c99a938adcb5b5781<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; and got this message: &quot;/home/azureuser/common_secret.pem.pub<br>
&gt; &gt; &gt; not present. Please run geo-replication command on master with<br>
&gt; &gt; &gt; push-pem option to generate the file&quot;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; So I went back and created the session again; no change. So<br>
&gt; &gt; &gt; I manually copied common_secret.pem.pub to /home/azureuser/, but<br>
&gt; &gt; &gt; set_geo_rep_pem_keys.sh is still looking for the pem file under a<br>
&gt; &gt; &gt; different name:<br>
&gt; &gt; &gt; COMMON_SECRET_PEM_PUB=${master_vol}_${slave_vol}_common_secret.pem.pub<br>
&gt; &gt; &gt; I changed the name of the pem and ran the command again:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;  /usr/libexec/glusterfs/set_geo_rep_pem_keys.sh azureuser<br>
&gt; &gt; &gt; vol_041afbc53746053368a1840607636e97<br>
&gt; &gt; &gt; vol_a5aee81a873c043c99a938adcb5b5781<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; Successfully copied file.<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; Command executed successfully.<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; - Went back, created the session, and started the<br>
&gt; &gt; &gt; geo-replication; still seeing the same error in the logs. Any ideas?<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; thanks,<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; Maurya<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt; On Wed, Mar 20, 2019 at 11:07 PM Sunny Kumar &lt;<br>
&gt; &gt; &gt; <a href="mailto:sunkumar@redhat.com" target="_blank">sunkumar@redhat.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; Hi Maurya,<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; I guess you missed the last trick to distribute keys on the<br>
&gt; &gt; &gt; slave node. I see<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; this is a non-root geo-rep setup, so please try this:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; Run the following command as root on any one of the Slave<br>
&gt; &gt; &gt; nodes.<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; /usr/local/libexec/glusterfs/set_geo_rep_pem_keys.sh <br>
&gt; &gt; &gt; &lt;slave_user&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &lt;master_volume&gt; &lt;slave_volume&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; - Sunny<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; On Wed, Mar 20, 2019 at 10:47 PM Maurya M &lt;<br>
&gt; &gt; &gt; <a href="mailto:mauryam@gmail.com" target="_blank">mauryam@gmail.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; Hi all,<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;  Have set up 3 master nodes - 3 slave nodes (gluster<br>
&gt; &gt; &gt; 4.1) for geo-replication, but once the geo-replication is<br>
&gt; &gt; &gt; configured the status is always &quot;Created&quot;,<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; even after a force start of the session.<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; On close inspection of the logs on the master node, I am<br>
&gt; &gt; &gt; seeing this error:<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; &quot;E [syncdutils(monitor):801:errlog] Popen: command<br>
&gt; &gt; &gt; returned error   cmd=ssh -oPasswordAuthentication=no<br>
&gt; &gt; &gt; -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-<br>
&gt; &gt; &gt; replication/secret.pem -p 22 azureuser@xxxxx.xxxx..xxx. gluster<br>
&gt; &gt; &gt; --xml --remote-host=localhost volume info<br>
&gt; &gt; &gt; vol_a5ae34341a873c043c99a938adcb5b5781      error=255&quot;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; Any ideas what the issue is?<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; thanks,<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; Maurya<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; _______________________________________________<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; Gluster-users mailing list<br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
&gt; &gt; &gt; &gt;&gt;&gt; &gt;&gt; &gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
&gt; <br>
&gt; _______________________________________________<br>
&gt; Gluster-users mailing list<br>
&gt; <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
&gt; <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
<br>
</blockquote></div>
</blockquote></div>