<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Oct 17, 2019 at 12:54 PM Alexander Iliev <<a href="mailto:ailiev%2Bgluster@mamul.org">ailiev+gluster@mamul.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Thanks, Aravinda.<br>
>
> Does this mean that my scenario is currently unsupported?

Please try providing the external IP while creating the Geo-rep session. If that doesn't work, we will work on the enhancement.
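
For clarity, a minimal sketch of that suggestion; the volume names store1/store2 come from later in this thread, and the externally reachable slave address slave-ext.example.com is a hypothetical placeholder:

```
# On one master node. "slave-ext.example.com" stands for an externally
# reachable address of a slave node -- substitute your own.
gluster volume geo-replication store1 slave-ext.example.com::store2 \
    create push-pem

# If creation succeeds, start the session and check its status.
gluster volume geo-replication store1 slave-ext.example.com::store2 start
gluster volume geo-replication store1 slave-ext.example.com::store2 status
```

Note that, as explained further down in the thread, even if the create step is pointed at an external address, the running session still connects to the IPs/hostnames recorded in the slave volume info, so those also need to be reachable from the master nodes.
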
> It seems that I need to make sure that the nodes in the two clusters can
> see each other (some kind of VPN would work, I guess).
>
> Is this documented somewhere? I think I've read the geo-replication
> documentation several times now, but somehow it wasn't obvious to me
> that you need access to the slave nodes from the master ones (apart from
> the SSH access).
>
> Thanks!
>
> Best regards,
> --
> alexander iliev
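
One quick way to check the connectivity discussed above is to probe the Gluster ports from a master node. A minimal sketch, assuming the standard glusterd port 24007, a brick port taken from `gluster volume status` on the slave, and the same hypothetical external address as above:

```
# On a slave node: list the brick processes and their ports.
gluster volume status store2

# On a master node: verify that glusterd (24007) and the reported brick
# port (49152 here is only an example) are reachable from this network.
nc -zv slave-ext.example.com 24007
nc -zv slave-ext.example.com 49152
```

If these checks fail, the FUSE mount performed by gverify.sh during create push-pem will fail with the same "Transport endpoint is not connected" errors that appear in the log further down in this thread.
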
>
> On 10/17/19 5:25 AM, Aravinda Vishwanathapura Krishna Murthy wrote:
> > Got it.
> >
> > Geo-replication uses the slave nodes' IPs in the following cases:
> >
> > - Verification during session creation - it tries to mount the slave
> > volume using the hostname/IP provided in the Geo-rep create command.
> > Try Geo-rep create by specifying the external IP which is accessible
> > from the master node.
> > - Once Geo-replication is started, it gets the list of slave node
> > IPs/hostnames from the slave volume info and connects to those IPs.
> > But in this case, those are internal IP addresses that are not
> > accessible from the master nodes. We need to enhance Geo-replication
> > to accept a map of external to internal IPs so that it can use the
> > external IPs for all connections.
> >
> > On Wed, Oct 16, 2019 at 10:29 PM Alexander Iliev
> > <ailiev+gluster@mamul.org> wrote:
> >
> > Hi Aravinda,
> >
> > All volume bricks on the slave volume are up and the volume seems
> > functional.
> >
> > Your suggestion about trying to mount the slave volume on a master node
> > brings up my question about network connectivity again - the GlusterFS
> > documentation[1] says:
> >
> > > The server specified in the mount command is only used to fetch the
> > > gluster configuration volfile describing the volume name.
> > > Subsequently, the client will communicate directly with the servers
> > > mentioned in the volfile (which might not even include the one used
> > > for mount).
> >
> > To me this means that the masternode from your example is expected to
> > have connectivity to the network where the slave volume runs, i.e. to
> > have network access to the slave nodes. In my geo-replication scenario
> > this is definitely not the case. The two clusters are running in two
> > completely different networks that are not interconnected.
> >
> > So my question is - how is the slave volume mount expected to happen if
> > the client host cannot access the GlusterFS nodes? Or is the
> > connectivity a requirement even for geo-replication?
> >
> > I'm not sure if I'm missing something, but any help will be highly
> > appreciated!
> >
> > Thanks!
> >
> > Links:
> > [1] https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
> > --
> > alexander iliev
> >
> > On 10/16/19 6:03 AM, Aravinda Vishwanathapura Krishna Murthy wrote:
> > > Hi Alexander,
> > >
> > > Please check the status of the volume. It looks like the slave volume
> > > mount is failing because bricks are down or not reachable.
> > > If Volume status shows all bricks are up, then try mounting the
> > > slave volume using the mount command:
> > >
> > > ```
> > > masternode$ mkdir /mnt/vol
> > > masternode$ mount -t glusterfs <slavehost>:<slavevol> /mnt/vol
> > > ```
> > >
> > > On Fri, Oct 11, 2019 at 4:03 AM Alexander Iliev
> > > <ailiev+gluster@mamul.org> wrote:
> > >
> > > Hi all,
> > >
> > > I ended up reinstalling the nodes with CentOS 7.5 and GlusterFS 6.5
> > > (installed from the SIG.)
> > >
> > > Now when I try to create a replication session I get the following:
> > >
> > > > # gluster volume geo-replication store1 <slave-host>::store2 create push-pem
> > > > Unable to mount and fetch slave volume details. Please check the log:
> > > > /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> > > > geo-replication command failed
> > >
> > > You can find the contents of gverify-slavemnt.log below, but the
> > > initial error seems to be:
> > >
> > > > [2019-10-10 22:07:51.578519] E [fuse-bridge.c:5211:fuse_first_lookup]
> > > > 0-fuse: first lookup on root failed (Transport endpoint is not connected)
> > >
> > > I only found [this](https://bugzilla.redhat.com/show_bug.cgi?id=1659824)
> > > bug report, which doesn't seem to help.
> > > The reported issue is failure to mount a volume on a GlusterFS
> > > client, but in my case I need geo-replication which implies the
> > > client (geo-replication master) being on a different network.
> > >
> > > Any help will be appreciated.
> > >
> > > Thanks!
> > >
> > > gverify-slavemnt.log:
> > >
> > > > [2019-10-10 22:07:40.571256] I [MSGID: 100030] [glusterfsd.c:2847:main]
> > > > 0-glusterfs: Started running glusterfs version 6.5 (args: glusterfs
> > > > --xlator-option=*dht.lookup-unhashed=off --volfile-server <slave-host>
> > > > --volfile-id store2 -l /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> > > > /tmp/gverify.sh.5nFlRh)
> > > > [2019-10-10 22:07:40.575438] I [glusterfsd.c:2556:daemonize]
> > > > 0-glusterfs: Pid of current running process is 6021
> > > > [2019-10-10 22:07:40.584282] I [MSGID: 101190]
> > > > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread
> > > > with index 0
> > > > [2019-10-10 22:07:40.584299] I [MSGID: 101190]
> > > > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started thread
> > > > with index 1
> > > > [2019-10-10 22:07:40.928094] I [MSGID: 114020] [client.c:2393:notify]
> > > > 0-store2-client-0: parent translators are ready, attempting connect on
> > > > transport
> > > > [2019-10-10 22:07:40.931121] I [MSGID: 114020] [client.c:2393:notify]
> > > > 0-store2-client-1: parent translators are ready, attempting connect on
> > > > transport
> > > > [2019-10-10 22:07:40.933976] I [MSGID: 114020] [client.c:2393:notify]
> > > > 0-store2-client-2: parent translators are ready, attempting connect on
> > > > transport
> > > > Final graph:
> > > > +------------------------------------------------------------------------------+
> > > >   1: volume store2-client-0
> > > >   2:     type protocol/client
> > > >   3:     option ping-timeout 42
> > > >   4:     option remote-host 172.31.36.11
> > > >   5:     option remote-subvolume /data/gfs/store1/1/brick-store2
> > > >   6:     option transport-type socket
> > > >   7:     option transport.address-family inet
> > > >   8:     option transport.socket.ssl-enabled off
> > > >   9:     option transport.tcp-user-timeout 0
> > > >  10:     option transport.socket.keepalive-time 20
> > > >  11:     option transport.socket.keepalive-interval 2
> > > >  12:     option transport.socket.keepalive-count 9
> > > >  13:     option send-gids true
> > > >  14: end-volume
> > > >  15:
> > > >  16: volume store2-client-1
> > > >  17:     type protocol/client
> > > >  18:     option ping-timeout 42
> > > >  19:     option remote-host 172.31.36.12
> > > >  20:     option remote-subvolume /data/gfs/store1/1/brick-store2
> > > >  21:     option transport-type socket
> > > >  22:     option transport.address-family inet
> > > >  23:     option transport.socket.ssl-enabled off
> > > >  24:     option transport.tcp-user-timeout 0
> > > >  25:     option transport.socket.keepalive-time 20
> > > >  26:     option transport.socket.keepalive-interval 2
> > > >  27:     option transport.socket.keepalive-count 9
> > > >  28:     option send-gids true
> > > >  29: end-volume
> > > >  30:
> > > >  31: volume store2-client-2
> > > >  32:     type protocol/client
> > > >  33:     option ping-timeout 42
> > > >  34:     option remote-host 172.31.36.13
> > > >  35:     option remote-subvolume /data/gfs/store1/1/brick-store2
> > > >  36:     option transport-type socket
> > > >  37:     option transport.address-family inet
> > > >  38:     option transport.socket.ssl-enabled off
> > > >  39:     option transport.tcp-user-timeout 0
> > > >  40:     option transport.socket.keepalive-time 20
> > > >  41:     option transport.socket.keepalive-interval 2
> > > >  42:     option transport.socket.keepalive-count 9
> > > >  43:     option send-gids true
> > > >  44: end-volume
> > > >  45:
> > > >  46: volume store2-replicate-0
> > > >  47:     type cluster/replicate
> > > >  48:     option afr-pending-xattr store2-client-0,store2-client-1,store2-client-2
> > > >  49:     option use-compound-fops off
> > > >  50:     subvolumes store2-client-0 store2-client-1 store2-client-2
> > > >  51: end-volume
> > > >  52:
> > > >  53: volume store2-dht
> > > >  54:     type cluster/distribute
> > > >  55:     option lookup-unhashed off
> > > >  56:     option lock-migration off
> > > >  57:     option force-migration off
> > > >  58:     subvolumes store2-replicate-0
> > > >  59: end-volume
> > > >  60:
> > > >  61: volume store2-write-behind
> > > >  62:     type performance/write-behind
> > > >  63:     subvolumes store2-dht
> > > >  64: end-volume
> > > >  65:
> > > >  66: volume store2-read-ahead
> > > >  67:     type performance/read-ahead
> > > >  68:     subvolumes store2-write-behind
> > > >  69: end-volume
> > > >  70:
> > > >  71: volume store2-readdir-ahead
> > > >  72:     type performance/readdir-ahead
> > > >  73:     option parallel-readdir off
> > > >  74:     option rda-request-size 131072
> > > >  75:     option rda-cache-limit 10MB
> > > >  76:     subvolumes store2-read-ahead
> > > >  77: end-volume
> > > >  78:
> > > >  79: volume store2-io-cache
> > > >  80:     type performance/io-cache
> > > >  81:     subvolumes store2-readdir-ahead
> > > >  82: end-volume
> > > >  83:
> > > >  84: volume store2-open-behind
> > > >  85:     type performance/open-behind
> > > >  86:     subvolumes store2-io-cache
> > > >  87: end-volume
> > > >  88:
> > > >  89: volume store2-quick-read
> > > >  90:     type performance/quick-read
> > > >  91:     subvolumes store2-open-behind
> > > >  92: end-volume
> > > >  93:
> > > >  94: volume store2-md-cache
> > > >  95:     type performance/md-cache
> > > >  96:     subvolumes store2-quick-read
> > > >  97: end-volume
> > > >  98:
> > > >  99: volume store2
> > > > 100:     type debug/io-stats
> > > > 101:     option log-level INFO
> > > > 102:     option latency-measurement off
> > > > 103:     option count-fop-hits off
> > > > 104:     subvolumes store2-md-cache
> > > > 105: end-volume
> > > > 106:
> > > > 107: volume meta-autoload
> > > > 108:     type meta
> > > > 109:     subvolumes store2
> > > > 110: end-volume
> > > > 111:
> > > > +------------------------------------------------------------------------------+
> > > > [2019-10-10 22:07:51.578287] I [fuse-bridge.c:5142:fuse_init]
> > > > 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24
> > > > kernel 7.22
> > > > [2019-10-10 22:07:51.578356] I [fuse-bridge.c:5753:fuse_graph_sync]
> > > > 0-fuse: switched to graph 0
> > > > [2019-10-10 22:07:51.578467] I [MSGID: 108006]
> > > > [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no subvolumes up
> > > > [2019-10-10 22:07:51.578519] E [fuse-bridge.c:5211:fuse_first_lookup]
> > > > 0-fuse: first lookup on root failed (Transport endpoint is not connected)
> > > > [2019-10-10 22:07:51.578709] W [fuse-bridge.c:1266:fuse_attr_cbk]
> > > > 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
> > > > [2019-10-10 22:07:51.578687] I [MSGID: 108006]
> > > > [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no subvolumes up
> > > > [2019-10-10 22:09:48.222459] E [MSGID: 108006]
> > > > [afr-common.c:5318:__afr_handle_child_down_event] 0-store2-replicate-0:
> > > > All subvolumes are down. Going offline until at least one of them comes
> > > > back up.
> > > > The message "E [MSGID: 108006]
> > > > [afr-common.c:5318:__afr_handle_child_down_event] 0-store2-replicate-0:
> > > > All subvolumes are down. Going offline until at least one of them comes
> > > > back up." repeated 2 times between [2019-10-10 22:09:48.222459] and
> > > > [2019-10-10 22:09:48.222891]
> > >
> > > alexander iliev
> > >
> > > On 9/8/19 4:50 PM, Alexander Iliev wrote:
> > > > Hi all,
> > > >
> > > > Sunny, thank you for the update.
> > > >
> > > > I have applied the patch locally on my slave system and now the
> > > > mountbroker setup is successful.
> > > >
> > > > I am facing another issue though - when I try to create a replication
> > > > session between the two sites I am getting:
> > > >
> > > > # gluster volume geo-replication store1
> > > > glustergeorep@<slave-host>::store1 create push-pem
> > > > Error : Request timed out
> > > > geo-replication command failed
> > > >
> > > > It is still unclear to me if my setup is expected to work at all.
> > > >
> > > > Reading the geo-replication documentation at [1] I see this paragraph:
> > > >
> > > > > A password-less SSH connection is also required for gsyncd between
> > > > > every node in the master to every node in the slave. The gluster
> > > > > system:: execute gsec_create command creates secret-pem files on all
> > > > > the nodes in the master, and is used to implement the password-less
> > > > > SSH connection. The push-pem option in the geo-replication create
> > > > > command pushes these keys to all the nodes in the slave.
> > > >
> > > > It is not clear to me whether connectivity from each master node to
> > > > each slave node is a requirement in terms of networking.
> > > > In my setup the slave nodes form the Gluster pool over a private
> > > > network which is not reachable from the master site.
> > > >
> > > > Any ideas how to proceed from here will be greatly appreciated.
> > > >
> > > > Thanks!
> > > >
> > > > Links:
> > > > [1] https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication
> > > >
> > > > Best regards,
> > > > --
> > > > alexander iliev
> > > >
> > > > On 9/3/19 2:50 PM, Sunny Kumar wrote:
> > > >> Thank you for the explanation, Kaleb.
> > > >>
> > > >> Alexander,
> > > >>
> > > >> This fix will be available with the next release for all supported
> > > >> versions.
> > > >>
> > > >> /sunny
> > > >>
> > > >> On Mon, Sep 2, 2019 at 6:47 PM Kaleb Keithley <kkeithle@redhat.com>
> > > >> wrote:
> > > >>>
> > > >>> Fixes on master (before or after the release-7 branch was taken)
> > > >>> almost certainly warrant a backport IMO to at least release-6, and
> > > >>> probably release-5 as well.
> > > >>>
> > > >>> We used to have a "tracker" BZ for each minor release (e.g. 6.6) to
> > > >>> keep track of backports by cloning the original BZ and changing the
> > > >>> Version, and adding that BZ to the tracker. I'm not sure what
> > > >>> happened to that practice. The last ones I can find are for 6.3 and
> > > >>> 5.7: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3 and
> > > >>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7
> > > >>>
> > > >>> It isn't enough to just backport recent fixes on master to
> > > >>> release-7. We are supposedly continuing to maintain release-6 and
> > > >>> release-5 after release-7 GAs. If that has changed, I haven't seen
> > > >>> an announcement to that effect.
> > > >>> I don't know why our developers don't automatically backport to
> > > >>> all the actively maintained releases.
> > > >>>
> > > >>> Even if there isn't a tracker BZ, you can always create a backport
> > > >>> BZ by cloning the original BZ and changing the release to 6. That'd
> > > >>> be a good place to start.
> > > >>>
> > > >>> On Sun, Sep 1, 2019 at 8:45 AM Alexander Iliev
> > > >>> <ailiev+gluster@mamul.org> wrote:
> > > >>>>
> > > >>>> Hi Strahil,
> > > >>>>
> > > >>>> Yes, this might be right, but I would still expect fixes like this
> > > >>>> to be released for all supported major versions (which should
> > > >>>> include 6.) At least that's how I understand
> > > >>>> https://www.gluster.org/release-schedule/.
> > > >>>>
> > > >>>> Anyway, let's wait for Sunny to clarify.
> > > >>>>
> > > >>>> Best regards,
> > > >>>> alexander iliev
> > > >>>>
> > > >>>> On 9/1/19 2:07 PM, Strahil Nikolov wrote:
> > > >>>>> Hi Alex,
> > > >>>>>
> > > >>>>> I'm not very deep into bugzilla stuff, but for me NEXTRELEASE
> > > >>>>> means v7.
> > > >>>>>
> > > >>>>> Sunny,
> > > >>>>> Am I understanding it correctly?
> > > >>>>>
> > > >>>>> Best Regards,
> > > >>>>> Strahil Nikolov
> > > >>>>>
> > > >>>>> On Sunday, September 1, 2019, 14:27:32 GMT+3, Alexander Iliev
> > > >>>>> <ailiev+gluster@mamul.org> wrote:
> > > >>>>>
> > > >>>>> Hi Sunny,
> > > >>>>>
> > > >>>>> Thank you for the quick response.
> > > >>>>>
> > > >>>>> It's not clear to me, however, whether the fix has already been
> > > >>>>> released or not.
> > > >>>>>
> > > >>>>> The bug status is CLOSED NEXTRELEASE and according to [1] the
> > > >>>>> NEXTRELEASE resolution means that the fix will be included in the
> > > >>>>> next supported release.
> > > >>>>> The bug is logged against the mainline version, though, so I'm
> > > >>>>> not sure what this means exactly.
> > > >>>>>
> > > >>>>> From the 6.4[2] and 6.5[3] release notes it seems it hasn't been
> > > >>>>> released yet.
> > > >>>>>
> > > >>>>> Ideally I would not like to patch my systems locally, so if you
> > > >>>>> have an ETA on when this will be out officially I would really
> > > >>>>> appreciate it.
> > > >>>>>
> > > >>>>> Links:
> > > >>>>> [1] https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status
> > > >>>>> [2] https://docs.gluster.org/en/latest/release-notes/6.4/
> > > >>>>> [3] https://docs.gluster.org/en/latest/release-notes/6.5/
> > > >>>>>
> > > >>>>> Thank you!
> > > >>>>>
> > > >>>>> Best regards,
> > > >>>>>
> > > >>>>> alexander iliev
> > > >>>>>
> > > >>>>> On 8/30/19 9:22 AM, Sunny Kumar wrote:
> > > >>>>> > Hi Alexander,
> > > >>>>> >
> > > >>>>> > Thanks for pointing that out!
> > > >>>>> >
> > > >>>>> > But this issue is fixed now; see the links below for the BZ and
> > > >>>>> > the patch.
> > > >>>>> >
> > > >>>>> > BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1709248
> > > >>>>> >
> > > >>>>> > Patch - https://review.gluster.org/#/c/glusterfs/+/22716/
> > > >>>>> >
> > > >>>>> > Hope this helps.
> > > >>>>> >
> > > >>>>> > /sunny
> > > >>>>> >
> > > >>>>> > On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev
> > > >>>>> > <ailiev+gluster@mamul.org> wrote:
> > > >>>>> >>
> > > >>>>> >> Hello dear GlusterFS users list,
> > > >>>>> >>
> > > >>>>> >> I have been trying to set up geo-replication between two
> > > >>>>> >> clusters for some time now.
> > > >>>>> >> The desired state is (Cluster #1) being replicated to
> > > >>>>> >> (Cluster #2).
> > > >>>>> >>
> > > >>>>> >> Here are some details about the setup:
> > > >>>>> >>
> > > >>>>> >> Cluster #1: three nodes connected via a local network
> > > >>>>> >> (172.31.35.0/24), one replicated (3 replica) volume.
> > > >>>>> >>
> > > >>>>> >> Cluster #2: three nodes connected via a local network
> > > >>>>> >> (172.31.36.0/24), one replicated (3 replica) volume.
> > > >>>>> >>
> > > >>>>> >> The two clusters are connected to the Internet via separate
> > > >>>>> >> network adapters.
> > > >>>>> >>
> > > >>>>> >> Only SSH (port 22) is open on cluster #2 nodes' adapters
> > > >>>>> >> connected to the Internet.
> > > >>>>> >>
> > > >>>>> >> All nodes are running Ubuntu 18.04 and GlusterFS 6.3 installed
> > > >>>>> >> from [1].
> > > >>>>> >>
> > > >>>>> >> The first time I followed the guide[2] everything went fine up
> > > >>>>> >> until I reached the "Create the session" step. That was about
> > > >>>>> >> a month ago; then I had to temporarily stop working on this
> > > >>>>> >> and now I am coming back to it.
> > > >>>>> >>
> > > >>>>> >> Currently, if I try to see the mountbroker status I get the
> > > >>>>> >> following:
> > > >>>>> >>
> > > >>>>> >>> # gluster-mountbroker status
> > > >>>>> >>> Traceback (most recent call last):
> > > >>>>> >>>   File "/usr/sbin/gluster-mountbroker", line 396, in <module>
> > > >>>>> >>>     runcli()
> > > >>>>> >>>   File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 225, in runcli
> > > >>>>> >>>     cls.run(args)
> > > >>>>> >>>   File "/usr/sbin/gluster-mountbroker", line 275, in run
> > > >>>>> >>>     out = execute_in_peers("node-status")
> > > >>>>> >>>   File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 127, in execute_in_peers
> > > >>>>> >>>     raise GlusterCmdException((rc, out, err, " ".join(cmd)))
> > > >>>>> >>> gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to
> > > >>>>> >>> end. Error : Success\n', 'gluster system:: execute mountbroker.py
> > > >>>>> >>> node-status')
> > > >>>>> >>
> > > >>>>> >> And in /var/log/gluster/glusterd.log I have:
> > > >>>>> >>
> > > >>>>> >>> [2019-08-10 15:24:21.418834] E [MSGID: 106336]
> > > >>>>> >>> [glusterd-geo-rep.c:5413:glusterd_op_sys_exec] 0-management:
> > > >>>>> >>> Unable to end. Error : Success
> > > >>>>> >>> [2019-08-10 15:24:21.418908] E [MSGID: 106122]
> > > >>>>> >>> [glusterd-syncop.c:1445:gd_commit_op_phase] 0-management:
> > > >>>>> >>> Commit of operation 'Volume Execute system commands' failed on
> > > >>>>> >>> localhost : Unable to end. Error : Success
> > > >>>>> >>
> > > >>>>> >> So, I have two questions right now:
> > > >>>>> >>
> > > >>>>> >> 1) Is there anything wrong with my setup (networking, open
> > > >>>>> >> ports, etc.)? Is it expected to work with this setup or should
> > > >>>>> >> I redo it in a different way?
> > > >>>>> >> 2) How can I troubleshoot the current status of my setup? Can
> > > >>>>> >> I find out what's missing/wrong and continue from there, or
> > > >>>>> >> should I just start from scratch?
> > > >>>>> >>
> > > >>>>> >> Links:
> > > >>>>> >> [1] http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu
> > > >>>>> >> [2] https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> > > >>>>> >>
> > > >>>>> >> Thank you!
> > > >>>>> >>
> > > >>>>> >> Best regards,
> > > >>>>> >> --
> > > >>>>> >> alexander iliev
> > > >>>>> >> _______________________________________________
> > > >>>>> >> Gluster-users mailing list
> > > >>>>> >> Gluster-users@gluster.org
> > > >>>>> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> > > >>>>> _______________________________________________
> > > >>>>> Gluster-users mailing list
> > > >>>>> Gluster-users@gluster.org
> > > >>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
> > > >>>> _______________________________________________
> > > >>>> Gluster-users mailing list
href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>>><br>> > >>>> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>> > > _______________________________________________<br>> > > Gluster-users mailing list<br>> > > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>>><br>> > > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>> > ________<br>> ><br>> > Community Meeting Calendar:<br>> ><br>> > APAC Schedule -<br>> > Every 2nd and 4th Tuesday at 11:30 AM IST<br>> > Bridge: <a href="https://bluejeans.com/118564314" rel="noreferrer" target="_blank">https://bluejeans.com/118564314</a><br>> ><br>> > NA/EMEA Schedule -<br>> > Every 1st and 3rd Tuesday at 01:00 PM EDT<br>> > Bridge: <a href="https://bluejeans.com/118564314" rel="noreferrer" target="_blank">https://bluejeans.com/118564314</a><br>> ><br>> > Gluster-users mailing list<br>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>><br>> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a> <mailto:<a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a>>><br>> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>> ><br>> ><br>> ><br>> > --<br>> > regards<br>> > Aravinda VK<br>> <br>> <br>> <br>> -- <br>> regards<br>> Aravinda VK<br>

-- 
regards
Aravinda VK