<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
Unsubscribe</div>
<div style="font-family: Calibri, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0);">
<br>
</div>
<div id="Signature">
<p>Sent from <a href="http://aka.ms/weboutlook">Outlook</a><br>
</p>
<div>
<div id="appendonsend"></div>
<div style="font-family:Calibri,Helvetica,sans-serif; font-size:12pt; color:rgb(0,0,0)">
<br>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> gluster-users-bounces@gluster.org <gluster-users-bounces@gluster.org> on behalf of gluster-users-request@gluster.org <gluster-users-request@gluster.org><br>
<b>Sent:</b> October 18, 2019 5:00 AM<br>
<b>To:</b> gluster-users@gluster.org <gluster-users@gluster.org><br>
<b>Subject:</b> Gluster-users Digest, Vol 138, Issue 14</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt">
<div class="PlainText">Send Gluster-users mailing list submissions to<br>
gluster-users@gluster.org<br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
or, via email, send a message with subject or body 'help' to<br>
gluster-users-request@gluster.org<br>
<br>
You can reach the person managing the list at<br>
gluster-users-owner@gluster.org<br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of Gluster-users digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Mirror <a href="https://download.gluster.org/">https://download.gluster.org/</a> is not working<br>
(Alberto Bengoa)<br>
2. Re: Issues with Geo-replication (GlusterFS 6.3 on Ubuntu<br>
18.04) (Aravinda Vishwanathapura Krishna Murthy)<br>
3. Re: Single Point of failure in geo Replication<br>
(Aravinda Vishwanathapura Krishna Murthy)<br>
4. Re: On a glusterfsd service (Amar Tumballi)<br>
5. Re: Mirror <a href="https://download.gluster.org/">https://download.gluster.org/</a> is not working<br>
(Kaleb Keithley)<br>
6. Re: Issues with Geo-replication (GlusterFS 6.3 on Ubuntu<br>
18.04) (Alexander Iliev)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Thu, 17 Oct 2019 15:55:25 +0100<br>
From: Alberto Bengoa <bengoa@gmail.com><br>
To: gluster-users <gluster-users@gluster.org><br>
Subject: [Gluster-users] Mirror <a href="https://download.gluster.org/">https://download.gluster.org/</a> is not<br>
working<br>
Message-ID:<br>
<CA+vk31b5qomQVXQ10ofD5jL+kkhMqaamALUg7XKEcK-X7Ju1yw@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Guys,<br>
<br>
Does anybody from the Gluster team have an update on the mirror status? It has<br>
been failing since (maybe?) yesterday.<br>
<br>
root@nas-bkp /tmp $ yum install glusterfs-client<br>
GlusterFS is a clustered file-system capable of scaling to several petabyte<br>
2.1 kB/s | 2.9 kB 00:01<br>
Dependencies resolved.<br>
============================================================================================================<br>
Package Arch Version<br>
Repository Size<br>
============================================================================================================<br>
Installing:<br>
glusterfs-fuse x86_64 6.5-2.el8<br>
glusterfs-rhel8 167 k<br>
Installing dependencies:<br>
glusterfs x86_64 6.5-2.el8<br>
glusterfs-rhel8 681 k<br>
glusterfs-client-xlators x86_64 6.5-2.el8<br>
glusterfs-rhel8 893 k<br>
glusterfs-libs x86_64 6.5-2.el8<br>
glusterfs-rhel8 440 k<br>
<br>
Transaction Summary<br>
============================================================================================================<br>
Install 4 Packages<br>
<br>
Total download size: 2.1 M<br>
Installed size: 9.1 M<br>
Is this ok [y/N]: y<br>
Downloading Packages:<br>
[MIRROR] glusterfs-6.5-2.el8.x86_64.rpm: Curl error (18): Transferred a<br>
partial file for<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
[transfer closed with 648927 bytes remaining to read]<br>
[FAILED] glusterfs-6.5-2.el8.x86_64.rpm: No more mirrors to try - All<br>
mirrors were already tried without success<br>
(2-3/4): glusterfs-client-xlators- 34% [===========- ]<br>
562 kB/s | 745 kB 00:02 ETA<br>
The downloaded packages were saved in cache until the next successful<br>
transaction.<br>
You can remove cached packages by executing 'dnf clean packages'.<br>
Error: Error downloading packages:<br>
Cannot download glusterfs-6.5-2.el8.x86_64.rpm: All mirrors were tried<br>
<br>
If you try to download using wget it fails as well:<br>
<br>
root@nas-bkp /tmp $ wget<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
--2019-10-17 15:53:41--<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
Resolving download.gluster.org (download.gluster.org)... 8.43.85.185<br>
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
connected.<br>
HTTP request sent, awaiting response... 200 OK<br>
Length: 697688 (681K) [application/x-rpm]<br>
Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.1'<br>
<br>
glusterfs-6.5-2.el8.x86_64 6%[=> ]<br>
47.62K --.-KB/s in 0.09s<br>
<br>
2019-10-17 15:53:42 (559 KB/s) - Read error at byte 48761/697688 (Error<br>
decoding the received TLS packet.). Retrying.<br>
<br>
--2019-10-17 15:53:43-- (try: 2)<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
connected.<br>
HTTP request sent, awaiting response... ^C<br>
root@nas-bkp /tmp $ wget<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
--2019-10-17 15:53:45--<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
Resolving download.gluster.org (download.gluster.org)... 8.43.85.185<br>
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
connected.<br>
HTTP request sent, awaiting response... 200 OK<br>
Length: 697688 (681K) [application/x-rpm]<br>
Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.2'<br>
<br>
glusterfs-6.5-2.el8.x86_64 6%[=> ]<br>
47.62K --.-KB/s in 0.08s<br>
<br>
2019-10-17 15:53:46 (564 KB/s) - Read error at byte 48761/697688 (Error<br>
decoding the received TLS packet.). Retrying.<br>
<br>
--2019-10-17 15:53:47-- (try: 2)<br>
<a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
connected.<br>
HTTP request sent, awaiting response... 206 Partial Content<br>
Length: 697688 (681K), 648927 (634K) remaining [application/x-rpm]<br>
Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.2'<br>
<br>
glusterfs-6.5-2.el8.x86_64 13%[++==> ]<br>
95.18K --.-KB/s in 0.08s<br>
<br>
2019-10-17 15:53:47 (563 KB/s) - Read error at byte 97467/697688 (Error<br>
decoding the received TLS packet.). Retrying.<br>
<br>
<br>
Thank you!<br>
<br>
Alberto Bengoa<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/146a4068/attachment-0001.html">http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/146a4068/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 2<br>
Date: Thu, 17 Oct 2019 21:02:42 +0530<br>
From: Aravinda Vishwanathapura Krishna Murthy <avishwan@redhat.com><br>
To: Alexander Iliev <ailiev+gluster@mamul.org><br>
Cc: gluster-users <gluster-users@gluster.org><br>
Subject: Re: [Gluster-users] Issues with Geo-replication (GlusterFS<br>
6.3 on Ubuntu 18.04)<br>
Message-ID:<br>
<CA+8EeuNwuYJs0Yxk8zqKYc2VxdGM0xU6ivGpLE3oo28oxzbqLA@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On Thu, Oct 17, 2019 at 12:54 PM Alexander Iliev <ailiev+gluster@mamul.org><br>
wrote:<br>
<br>
> Thanks, Aravinda.<br>
><br>
> Does this mean that my scenario is currently unsupported?<br>
><br>
<br>
Please try providing the external IP while creating the Geo-rep session. We will<br>
work on the enhancement if it doesn't work.<br>
<br>
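A minimal sketch of that suggestion, reusing the volume names that appear later<br>
in this thread (store1 as master, store2 as slave) and a placeholder for whichever<br>
slave address is reachable from the master nodes:<br>
<br>
```<br>
# <slave-external-ip> is a placeholder, not an address taken from this thread<br>
masternode$ gluster system:: execute gsec_create<br>
masternode$ gluster volume geo-replication store1 <slave-external-ip>::store2 create push-pem<br>
masternode$ gluster volume geo-replication store1 <slave-external-ip>::store2 start<br>
```<br>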
<br>
> It seems that I need to make sure that the nodes in the two clusters can<br>
> see each other (some kind of VPN would work, I guess).<br>
><br>
> Is this documented somewhere? I think I've read the geo-replication<br>
> documentation several times now, but somehow it wasn't obvious to me<br>
> that you need access to the slave nodes from the master ones (apart from<br>
> the SSH access).<br>
><br>
> Thanks!<br>
><br>
> Best regards,<br>
> --<br>
> alexander iliev<br>
><br>
> On 10/17/19 5:25 AM, Aravinda Vishwanathapura Krishna Murthy wrote:<br>
> > Got it.<br>
> ><br>
> > Geo-replication uses slave nodes IP in the following cases,<br>
> ><br>
> > - Verification during Session creation - It tries to mount the Slave<br>
> > volume using the hostname/IP provided in Geo-rep create command. Try<br>
> > Geo-rep create by specifying the external IP which is accessible from<br>
> > the master node.<br>
> > - Once Geo-replication is started, it gets the list of Slave nodes<br>
> > IP/hostname from Slave volume info and connects to those IPs. But in<br>
> > this case, those are internal IP addresses that are not accessible from<br>
> > Master nodes. - We need to enhance Geo-replication to accept external IP<br>
> > and internal IP map details so that for all connections it can use<br>
> > external IP.<br>
> ><br>
> > On Wed, Oct 16, 2019 at 10:29 PM Alexander Iliev<br>
> > <ailiev+gluster@mamul.org <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>>> wrote:<br>
> ><br>
> > Hi Aravinda,<br>
> ><br>
> > All volume brick on the slave volume are up and the volume seems<br>
> > functional.<br>
> ><br>
> > Your suggestion about trying to mount the slave volume on a master<br>
> node<br>
> > brings up my question about network connectivity again - the<br>
> GlusterFS<br>
> > documentation[1] says:<br>
> ><br>
> > > The server specified in the mount command is only used to fetch<br>
> the<br>
> > gluster configuration volfile describing the volume name.<br>
> Subsequently,<br>
> > the client will communicate directly with the servers mentioned in<br>
> the<br>
> > volfile (which might not even include the one used for mount).<br>
> ><br>
> > To me this means that the masternode from your example is expected to<br>
> > have connectivity to the network where the slave volume runs, i.e. to<br>
> > have network access to the slave nodes. In my geo-replication<br>
> scenario<br>
> > this is definitely not the case. The two cluster are running in two<br>
> > completely different networks that are not interconnected.<br>
> ><br>
> > So my question is - how is the slave volume mount expected to happen<br>
> if<br>
> > the client host cannot access the GlusterFS nodes? Or is the<br>
> > connectivity a requirement even for geo-replication?<br>
> ><br>
> > I'm not sure if I'm missing something, but any help will be highly<br>
> > appreciated!<br>
> ><br>
> > Thanks!<br>
> ><br>
> > Links:<br>
> > [1]<br>
> ><br>
> <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/">
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/</a><br>
> > --<br>
> > alexander iliev<br>
> ><br>
> > On 10/16/19 6:03 AM, Aravinda Vishwanathapura Krishna Murthy wrote:<br>
> > > Hi Alexander,<br>
> > ><br>
> > > Please check the status of Volume. Looks like the Slave volume<br>
> > mount is<br>
> > > failing because bricks are down or not reachable. If Volume<br>
> > status shows<br>
> > > all bricks are up then try mounting the slave volume using mount<br>
> > command.<br>
> > ><br>
> > > ```<br>
> > > masternode$ mkdir /mnt/vol<br>
> > > masternode$ mount -t glusterfs <slavehost>:<slavevol> /mnt/vol<br>
> > > ```<br>
> > ><br>
> > > On Fri, Oct 11, 2019 at 4:03 AM Alexander Iliev<br>
> > > <ailiev+gluster@mamul.org <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> > <mailto:ailiev%2Bgluster@mamul.org<br>
> > <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>>> wrote:<br>
> > ><br>
> > > Hi all,<br>
> > ><br>
> > > I ended up reinstalling the nodes with CentOS 7.5 and<br>
> > GlusterFS 6.5<br>
> > > (installed from the SIG.)<br>
> > ><br>
> > > Now when I try to create a replication session I get the<br>
> > following:<br>
> > ><br>
> > > > # gluster volume geo-replication store1<br>
> > <slave-host>::store2 create<br>
> > > push-pem<br>
> > > > Unable to mount and fetch slave volume details. Please<br>
> > check the<br>
> > > log:<br>
> > > /var/log/glusterfs/geo-replication/gverify-slavemnt.log<br>
> > > > geo-replication command failed<br>
> > ><br>
> > > You can find the contents of gverify-slavemnt.log below, but<br>
> the<br>
> > > initial<br>
> > > error seems to be:<br>
> > ><br>
> > > > [2019-10-10 22:07:51.578519] E<br>
> > > [fuse-bridge.c:5211:fuse_first_lookup]<br>
> > > 0-fuse: first lookup on root failed (Transport endpoint is not<br>
> > > connected)<br>
> > ><br>
> > > I only found<br>
> > > [this](https://bugzilla.redhat.com/show_bug.cgi?id=1659824)<br>
> > > bug report which doesn't seem to help. The reported issue is<br>
> > failure to<br>
> > > mount a volume on a GlusterFS client, but in my case I need<br>
> > > geo-replication which implies the client (geo-replication<br>
> > master) being<br>
> > > on a different network.<br>
> > ><br>
> > > Any help will be appreciated.<br>
> > ><br>
> > > Thanks!<br>
> > ><br>
> > > gverify-slavemnt.log:<br>
> > ><br>
> > > > [2019-10-10 22:07:40.571256] I [MSGID: 100030]<br>
> > > [glusterfsd.c:2847:main] 0-glusterfs: Started running<br>
> > glusterfs version<br>
> > > 6.5 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off<br>
> > > --volfile-server <slave-host> --volfile-id store2 -l<br>
> > > /var/log/glusterfs/geo-replication/gverify-slavemnt.log<br>
> > > /tmp/gverify.sh.5nFlRh)<br>
> > > > [2019-10-10 22:07:40.575438] I<br>
> [glusterfsd.c:2556:daemonize]<br>
> > > 0-glusterfs: Pid of current running process is 6021<br>
> > > > [2019-10-10 22:07:40.584282] I [MSGID: 101190]<br>
> > > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:<br>
> > Started thread<br>
> > > with index 0<br>
> > > > [2019-10-10 22:07:40.584299] I [MSGID: 101190]<br>
> > > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:<br>
> > Started thread<br>
> > > with index 1<br>
> > > > [2019-10-10 22:07:40.928094] I [MSGID: 114020]<br>
> > > [client.c:2393:notify]<br>
> > > 0-store2-client-0: parent translators are ready, attempting<br>
> > connect on<br>
> > > transport<br>
> > > > [2019-10-10 22:07:40.931121] I [MSGID: 114020]<br>
> > > [client.c:2393:notify]<br>
> > > 0-store2-client-1: parent translators are ready, attempting<br>
> > connect on<br>
> > > transport<br>
> > > > [2019-10-10 22:07:40.933976] I [MSGID: 114020]<br>
> > > [client.c:2393:notify]<br>
> > > 0-store2-client-2: parent translators are ready, attempting<br>
> > connect on<br>
> > > transport<br>
> > > > Final graph:<br>
> > > ><br>
> > ><br>
> ><br>
> +------------------------------------------------------------------------------+<br>
> > > > 1: volume store2-client-0<br>
> > > > 2: type protocol/client<br>
> > > > 3: option ping-timeout 42<br>
> > > > 4: option remote-host 172.31.36.11<br>
> > > > 5: option remote-subvolume<br>
> > /data/gfs/store1/1/brick-store2<br>
> > > > 6: option transport-type socket<br>
> > > > 7: option transport.address-family inet<br>
> > > > 8: option transport.socket.ssl-enabled off<br>
> > > > 9: option transport.tcp-user-timeout 0<br>
> > > > 10: option transport.socket.keepalive-time 20<br>
> > > > 11: option transport.socket.keepalive-interval 2<br>
> > > > 12: option transport.socket.keepalive-count 9<br>
> > > > 13: option send-gids true<br>
> > > > 14: end-volume<br>
> > > > 15:<br>
> > > > 16: volume store2-client-1<br>
> > > > 17: type protocol/client<br>
> > > > 18: option ping-timeout 42<br>
> > > > 19: option remote-host 172.31.36.12<br>
> > > > 20: option remote-subvolume<br>
> > /data/gfs/store1/1/brick-store2<br>
> > > > 21: option transport-type socket<br>
> > > > 22: option transport.address-family inet<br>
> > > > 23: option transport.socket.ssl-enabled off<br>
> > > > 24: option transport.tcp-user-timeout 0<br>
> > > > 25: option transport.socket.keepalive-time 20<br>
> > > > 26: option transport.socket.keepalive-interval 2<br>
> > > > 27: option transport.socket.keepalive-count 9<br>
> > > > 28: option send-gids true<br>
> > > > 29: end-volume<br>
> > > > 30:<br>
> > > > 31: volume store2-client-2<br>
> > > > 32: type protocol/client<br>
> > > > 33: option ping-timeout 42<br>
> > > > 34: option remote-host 172.31.36.13<br>
> > > > 35: option remote-subvolume<br>
> > /data/gfs/store1/1/brick-store2<br>
> > > > 36: option transport-type socket<br>
> > > > 37: option transport.address-family inet<br>
> > > > 38: option transport.socket.ssl-enabled off<br>
> > > > 39: option transport.tcp-user-timeout 0<br>
> > > > 40: option transport.socket.keepalive-time 20<br>
> > > > 41: option transport.socket.keepalive-interval 2<br>
> > > > 42: option transport.socket.keepalive-count 9<br>
> > > > 43: option send-gids true<br>
> > > > 44: end-volume<br>
> > > > 45:<br>
> > > > 46: volume store2-replicate-0<br>
> > > > 47: type cluster/replicate<br>
> > > > 48: option afr-pending-xattr<br>
> > > store2-client-0,store2-client-1,store2-client-2<br>
> > > > 49: option use-compound-fops off<br>
> > > > 50: subvolumes store2-client-0 store2-client-1<br>
> > store2-client-2<br>
> > > > 51: end-volume<br>
> > > > 52:<br>
> > > > 53: volume store2-dht<br>
> > > > 54: type cluster/distribute<br>
> > > > 55: option lookup-unhashed off<br>
> > > > 56: option lock-migration off<br>
> > > > 57: option force-migration off<br>
> > > > 58: subvolumes store2-replicate-0<br>
> > > > 59: end-volume<br>
> > > > 60:<br>
> > > > 61: volume store2-write-behind<br>
> > > > 62: type performance/write-behind<br>
> > > > 63: subvolumes store2-dht<br>
> > > > 64: end-volume<br>
> > > > 65:<br>
> > > > 66: volume store2-read-ahead<br>
> > > > 67: type performance/read-ahead<br>
> > > > 68: subvolumes store2-write-behind<br>
> > > > 69: end-volume<br>
> > > > 70:<br>
> > > > 71: volume store2-readdir-ahead<br>
> > > > 72: type performance/readdir-ahead<br>
> > > > 73: option parallel-readdir off<br>
> > > > 74: option rda-request-size 131072<br>
> > > > 75: option rda-cache-limit 10MB<br>
> > > > 76: subvolumes store2-read-ahead<br>
> > > > 77: end-volume<br>
> > > > 78:<br>
> > > > 79: volume store2-io-cache<br>
> > > > 80: type performance/io-cache<br>
> > > > 81: subvolumes store2-readdir-ahead<br>
> > > > 82: end-volume<br>
> > > > 83:<br>
> > > > 84: volume store2-open-behind<br>
> > > > 85: type performance/open-behind<br>
> > > > 86: subvolumes store2-io-cache<br>
> > > > 87: end-volume<br>
> > > > 88:<br>
> > > > 89: volume store2-quick-read<br>
> > > > 90: type performance/quick-read<br>
> > > > 91: subvolumes store2-open-behind<br>
> > > > 92: end-volume<br>
> > > > 93:<br>
> > > > 94: volume store2-md-cache<br>
> > > > 95: type performance/md-cache<br>
> > > > 96: subvolumes store2-quick-read<br>
> > > > 97: end-volume<br>
> > > > 98:<br>
> > > > 99: volume store2<br>
> > > > 100: type debug/io-stats<br>
> > > > 101: option log-level INFO<br>
> > > > 102: option latency-measurement off<br>
> > > > 103: option count-fop-hits off<br>
> > > > 104: subvolumes store2-md-cache<br>
> > > > 105: end-volume<br>
> > > > 106:<br>
> > > > 107: volume meta-autoload<br>
> > > > 108: type meta<br>
> > > > 109: subvolumes store2<br>
> > > > 110: end-volume<br>
> > > > 111:<br>
> > > ><br>
> > ><br>
> ><br>
> +------------------------------------------------------------------------------+<br>
> > > > [2019-10-10 22:07:51.578287] I<br>
> [fuse-bridge.c:5142:fuse_init]<br>
> > > 0-glusterfs-fuse: FUSE inited with protocol versions:<br>
> > glusterfs 7.24<br>
> > > kernel 7.22<br>
> > > > [2019-10-10 22:07:51.578356] I<br>
> > [fuse-bridge.c:5753:fuse_graph_sync]<br>
> > > 0-fuse: switched to graph 0<br>
> > > > [2019-10-10 22:07:51.578467] I [MSGID: 108006]<br>
> > > [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no<br>
> > > subvolumes up<br>
> > > > [2019-10-10 22:07:51.578519] E<br>
> > > [fuse-bridge.c:5211:fuse_first_lookup]<br>
> > > 0-fuse: first lookup on root failed (Transport endpoint is not<br>
> > > connected)<br>
> > > > [2019-10-10 22:07:51.578709] W<br>
> > [fuse-bridge.c:1266:fuse_attr_cbk]<br>
> > > 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is<br>
> not<br>
> > > connected)<br>
> > > > [2019-10-10 22:07:51.578687] I [MSGID: 108006]<br>
> > > [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no<br>
> > > subvolumes up<br>
> > > > [2019-10-10 22:09:48.222459] E [MSGID: 108006]<br>
> > > [afr-common.c:5318:__afr_handle_child_down_event]<br>
> > 0-store2-replicate-0:<br>
> > > All subvolumes are down. Going offline until at least one of<br>
> > them comes<br>
> > > back up.<br>
> > > > The message "E [MSGID: 108006]<br>
> > > [afr-common.c:5318:__afr_handle_child_down_event]<br>
> > 0-store2-replicate-0:<br>
> > > All subvolumes are down. Going offline until at least one of<br>
> > them comes<br>
> > > back up." repeated 2 times between [2019-10-10<br>
> > 22:09:48.222459] and<br>
> > > [2019-10-10 22:09:48.222891]<br>
> > > ><br>
> > ><br>
> > > alexander iliev<br>
> > ><br>
> > > On 9/8/19 4:50 PM, Alexander Iliev wrote:<br>
> > > > Hi all,<br>
> > > ><br>
> > > > Sunny, thank you for the update.<br>
> > > ><br>
> > > > I have applied the patch locally on my slave system and<br>
> > now the<br>
> > > > mountbroker setup is successful.<br>
> > > ><br>
> > > > I am facing another issue though - when I try to create a<br>
> > > replication<br>
> > > > session between the two sites I am getting:<br>
> > > ><br>
> > > > # gluster volume geo-replication store1<br>
> > > > glustergeorep@<slave-host>::store1 create push-pem<br>
> > > > Error : Request timed out<br>
> > > > geo-replication command failed<br>
> > > ><br>
> > > > It is still unclear to me if my setup is expected to work<br>
> > at all.<br>
> > > ><br>
> > > > Reading the geo-replication documentation at [1] I see this<br>
> > > paragraph:<br>
> > > ><br>
> > > > > A password-less SSH connection is also required for<br>
> gsyncd<br>
> > > between<br>
> > > > every node in the master to every node in the slave. The<br>
> > gluster<br>
> > > > system:: execute gsec_create command creates secret-pem<br>
> > files on<br>
> > > all the<br>
> > > > nodes in the master, and is used to implement the<br>
> > password-less SSH<br>
> > > > connection. The push-pem option in the geo-replication<br>
> create<br>
> > > command<br>
> > > > pushes these keys to all the nodes in the slave.<br>
> > > ><br>
> > > > It is not clear to me whether connectivity from each<br>
> > master node<br>
> > > to each<br>
> > > > slave node is a requirement in terms of networking. In my<br>
> > setup the<br>
> > > > slave nodes form the Gluster pool over a private network<br>
> > which is<br>
> > > not<br>
> > > > reachable from the master site.<br>
> > > ><br>
> > > > Any ideas how to proceed from here will be greatly<br>
> > appreciated.<br>
> > > ><br>
> > > > Thanks!<br>
> > > ><br>
> > > > Links:<br>
> > > > [1]<br>
> > > ><br>
> > ><br>
> ><br>
> <a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication">
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication</a><br>
> > ><br>
> > > ><br>
> > > ><br>
> > > > Best regards,<br>
> > > > --<br>
> > > > alexander iliev<br>
> > > ><br>
> > > > On 9/3/19 2:50 PM, Sunny Kumar wrote:<br>
> > > >> Thank you for the explanation Kaleb.<br>
> > > >><br>
> > > >> Alexander,<br>
> > > >><br>
> > > >> This fix will be available with next release for all<br>
> > supported<br>
> > > versions.<br>
> > > >><br>
> > > >> /sunny<br>
> > > >><br>
> > > >> On Mon, Sep 2, 2019 at 6:47 PM Kaleb Keithley<br>
> > > <kkeithle@redhat.com <<a href="mailto:kkeithle@redhat.com">mailto:kkeithle@redhat.com</a>><br>
> > <mailto:kkeithle@redhat.com <<a href="mailto:kkeithle@redhat.com">mailto:kkeithle@redhat.com</a>>>><br>
> > > >> wrote:<br>
> > > >>><br>
> > > >>> Fixes on master (before or after the release-7 branch<br>
> > was taken)<br>
> > > >>> almost certainly warrant a backport IMO to at least<br>
> > release-6, and<br>
> > > >>> probably release-5 as well.<br>
> > > >>><br>
> > > >>> We used to have a "tracker" BZ for each minor release<br>
> (e.g.<br>
> > > 6.6) to<br>
> > > >>> keep track of backports by cloning the original BZ and<br>
> > changing<br>
> > > the<br>
> > > >>> Version, and adding that BZ to the tracker. I'm not sure<br>
> > what<br>
> > > >>> happened to that practice. The last ones I can find are<br>
> > for 6.3<br>
> > > and<br>
> > > >>> 5.7;<br>
> > <a href="https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3">https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3</a> and<br>
> > > >>><br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7">https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7</a><br>
> > > >>><br>
> > > >>> It isn't enough to just backport recent fixes on master<br>
> to<br>
> > > release-7.<br>
> > > >>> We are supposedly continuing to maintain release-6 and<br>
> > release-5<br>
> > > >>> after release-7 GAs. If that has changed, I haven't seen<br>
> an<br>
> > > >>> announcement to that effect. I don't know why our<br>
> > developers don't<br>
> > > >>> automatically backport to all the actively maintained<br>
> > releases.<br>
> > > >>><br>
> > > >>> Even if there isn't a tracker BZ, you can always create a<br>
> > > backport BZ<br>
> > > >>> by cloning the original BZ and change the release to 6.<br>
> > That'd<br>
> > > be a<br>
> > > >>> good place to start.<br>
> > > >>><br>
> > > >>> On Sun, Sep 1, 2019 at 8:45 AM Alexander Iliev<br>
> > > >>> <ailiev+gluster@mamul.org<br>
> > <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> > <mailto:ailiev%2Bgluster@mamul.org<br>
> > <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>>><br>
> > > wrote:<br>
> > > >>>><br>
> > > >>>> Hi Strahil,<br>
> > > >>>><br>
> > > >>>> Yes, this might be right, but I would still expect<br>
> > fixes like<br>
> > > this<br>
> > > >>>> to be<br>
> > > >>>> released for all supported major versions (which should<br>
> > > include 6.) At<br>
> > > >>>> least that's how I understand<br>
> > > >>>> <a href="https://www.gluster.org/release-schedule/">https://www.gluster.org/release-schedule/</a>.<br>
> > > >>>><br>
> > > >>>> Anyway, let's wait for Sunny to clarify.<br>
> > > >>>><br>
> > > >>>> Best regards,<br>
> > > >>>> alexander iliev<br>
> > > >>>><br>
> > > >>>> On 9/1/19 2:07 PM, Strahil Nikolov wrote:<br>
> > > >>>>> Hi Alex,<br>
> > > >>>>><br>
> > > >>>>> I'm not very deep into bugzilla stuff, but for me<br>
> > NEXTRELEASE<br>
> > > means<br>
> > > >>>>> v7.<br>
> > > >>>>><br>
> > > >>>>> Sunny,<br>
> > > >>>>> Am I understanding it correctly ?<br>
> > > >>>>><br>
> > > >>>>> Best Regards,<br>
> > > >>>>> Strahil Nikolov<br>
> > > >>>>><br>
> > > >>>>> On Sunday, 1 September 2019, 14:27:32 GMT+3,<br>
> > > Alexander Iliev<br>
> > > >>>>> <ailiev+gluster@mamul.org<br>
> > <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> > > <mailto:ailiev%2Bgluster@mamul.org<br>
> > <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>>> wrote:<br>
> > > >>>>><br>
> > > >>>>><br>
> > > >>>>> Hi Sunny,<br>
> > > >>>>><br>
> > > >>>>> Thank you for the quick response.<br>
> > > >>>>><br>
> > > >>>>> It's not clear to me however if the fix has been<br>
> already<br>
> > > released<br>
> > > >>>>> or not.<br>
> > > >>>>><br>
> > > >>>>> The bug status is CLOSED NEXTRELEASE and according to<br>
> > [1] the<br>
> > > >>>>> NEXTRELEASE resolution means that the fix will be<br>
> > included in<br>
> > > the next<br>
> > > >>>>> supported release. The bug is logged against the<br>
> > mainline version<br>
> > > >>>>> though, so I'm not sure what this means exactly.<br>
> > > >>>>><br>
> > > >>>>> From the 6.4[2] and 6.5[3] release notes it seems it<br>
> > hasn't<br>
> > > been<br>
> > > >>>>> released yet.<br>
> > > >>>>><br>
> > > >>>>> Ideally I would not like to patch my systems locally,<br>
> > so if you<br>
> > > >>>>> have an<br>
> > > >>>>> ETA on when this will be out officially I would really<br>
> > > appreciate it.<br>
> > > >>>>><br>
> > > >>>>> Links:<br>
> > > >>>>> [1]<br>
> > > <a href="https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status">
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status</a><br>
> > > >>>>> [2]<br>
> <a href="https://docs.gluster.org/en/latest/release-notes/6.4/">https://docs.gluster.org/en/latest/release-notes/6.4/</a><br>
> > > >>>>> [3]<br>
> <a href="https://docs.gluster.org/en/latest/release-notes/6.5/">https://docs.gluster.org/en/latest/release-notes/6.5/</a><br>
> > > >>>>><br>
> > > >>>>> Thank you!<br>
> > > >>>>><br>
> > > >>>>> Best regards,<br>
> > > >>>>><br>
> > > >>>>> alexander iliev<br>
> > > >>>>><br>
> > > >>>>> On 8/30/19 9:22 AM, Sunny Kumar wrote:<br>
> > > >>>>> > Hi Alexander,<br>
> > > >>>>> ><br>
> > > >>>>> > Thanks for pointing that out!<br>
> > > >>>>> ><br>
> > > >>>>> > But this issue is fixed now you can see below link<br>
> for<br>
> > > bz-link<br>
> > > >>>>> and patch.<br>
> > > >>>>> ><br>
> > > >>>>> > BZ -<br>
> > <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1709248">https://bugzilla.redhat.com/show_bug.cgi?id=1709248</a><br>
> > > >>>>> ><br>
> > > >>>>> > Patch -<br>
> > <a href="https://review.gluster.org/#/c/glusterfs/+/22716/">https://review.gluster.org/#/c/glusterfs/+/22716/</a><br>
> > > >>>>> ><br>
> > > >>>>> > Hope this helps.<br>
> > > >>>>> ><br>
> > > >>>>> > /sunny<br>
> > > >>>>> ><br>
> > > >>>>> > On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev<br>
> > > >>>>> > <ailiev+gluster@mamul.org<br>
> > <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> > > <<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> > <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>> <mailto:gluster@mamul.org<br>
> > <<a href="mailto:gluster@mamul.org">mailto:gluster@mamul.org</a>><br>
> > > <mailto:gluster@mamul.org <<a href="mailto:gluster@mamul.org">mailto:gluster@mamul.org</a>>>>><br>
> wrote:<br>
> > > >>>>> >><br>
> > > >>>>> >> Hello dear GlusterFS users list,<br>
> > > >>>>> >><br>
> > > >>>>> >> I have been trying to set up geo-replication<br>
> > between two<br>
> > > >>>>> clusters for<br>
> > > >>>>> >> some time now. The desired state is (Cluster #1)<br>
> > being<br>
> > > >>>>> replicated to<br>
> > > >>>>> >> (Cluster #2).<br>
> > > >>>>> >><br>
> > > >>>>> >> Here are some details about the setup:<br>
> > > >>>>> >><br>
> > > >>>>> >> Cluster #1: three nodes connected via a local<br>
> network<br>
> > > >>>>> (172.31.35.0/24 <<a href="http://172.31.35.0/24">http://172.31.35.0/24</a>><br>
> > <<a href="http://172.31.35.0/24">http://172.31.35.0/24</a>>),<br>
> > > >>>>> >> one replicated (3 replica) volume.<br>
> > > >>>>> >><br>
> > > >>>>> >> Cluster #2: three nodes connected via a local<br>
> network<br>
> > > >>>>> (172.31.36.0/24 <<a href="http://172.31.36.0/24">http://172.31.36.0/24</a>><br>
> > <<a href="http://172.31.36.0/24">http://172.31.36.0/24</a>>),<br>
> > > >>>>> >> one replicated (3 replica) volume.<br>
> > > >>>>> >><br>
> > > >>>>> >> The two clusters are connected to the Internet<br>
> > via separate<br>
> > > >>>>> network<br>
> > > >>>>> >> adapters.<br>
> > > >>>>> >><br>
> > > >>>>> >> Only SSH (port 22) is open on cluster #2 nodes'<br>
> > adapters<br>
> > > >>>>> connected to<br>
> > > >>>>> >> the Internet.<br>
> > > >>>>> >><br>
> > > >>>>> >> All nodes are running Ubuntu 18.04 and GlusterFS<br>
> 6.3<br>
> > > installed<br>
> > > >>>>> from [1].<br>
> > > >>>>> >><br>
> > > >>>>> >> The first time I followed the guide[2] everything<br>
> > went<br>
> > > fine up<br>
> > > >>>>> until I<br>
> > > >>>>> >> reached the "Create the session" step. That was<br>
> > like a<br>
> > > month<br>
> > > >>>>> ago, then I<br>
> > > >>>>> >> had to temporarily stop working in this and now I<br>
> > am coming<br>
> > > >>>>> back to it.<br>
> > > >>>>> >><br>
> > > >>>>> >> Currently, if I try to see the mountbroker status<br>
> > I get the<br>
> > > >>>>> following:<br>
> > > >>>>> >><br>
> > > >>>>> >>> # gluster-mountbroker status<br>
> > > >>>>> >>> Traceback (most recent call last):<br>
> > > >>>>> >>> File "/usr/sbin/gluster-mountbroker", line<br>
> > 396, in<br>
> > > <module><br>
> > > >>>>> >>> runcli()<br>
> > > >>>>> >>> File<br>
> > > >>>>><br>
> > ><br>
> > "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line<br>
> > > >>>>> 225,<br>
> > > >>>>> in runcli<br>
> > > >>>>> >>> cls.run(args)<br>
> > > >>>>> >>> File "/usr/sbin/gluster-mountbroker", line<br>
> > 275, in run<br>
> > > >>>>> >>> out = execute_in_peers("node-status")<br>
> > > >>>>> >>> File<br>
> > > >>>>><br>
> > "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py",<br>
> > > >>>>> >> line 127, in execute_in_peers<br>
> > > >>>>> >>> raise GlusterCmdException((rc, out, err, "<br>
> > > ".join(cmd)))<br>
> > > >>>>> >>> gluster.cliutils.cliutils.GlusterCmdException:<br>
> > (1, '',<br>
> > > >>>>> 'Unable to<br>
> > > >>>>> >> end. Error : Success\n', 'gluster system:: execute<br>
> > > mountbroker.py<br>
> > > >>>>> >> node-status')<br>
> > > >>>>> >><br>
> > > >>>>> >> And in /var/log/gluster/glusterd.log I have:<br>
> > > >>>>> >><br>
> > > >>>>> >>> [2019-08-10 15:24:21.418834] E [MSGID: 106336]<br>
> > > >>>>> >> [glusterd-geo-rep.c:5413:glusterd_op_sys_exec]<br>
> > > 0-management:<br>
> > > >>>>> Unable to<br>
> > > >>>>> >> end. Error : Success<br>
> > > >>>>> >>> [2019-08-10 15:24:21.418908] E [MSGID: 106122]<br>
> > > >>>>> >> [glusterd-syncop.c:1445:gd_commit_op_phase]<br>
> > 0-management:<br>
> > > >>>>> Commit of<br>
> > > >>>>> >> operation 'Volume Execute system commands' failed<br>
> on<br>
> > > localhost<br>
> > > >>>>> : Unable<br>
> > > >>>>> >> to end. Error : Success<br>
> > > >>>>> >><br>
> > > >>>>> >> So, I have two questions right now:<br>
> > > >>>>> >><br>
> > > >>>>> >> 1) Is there anything wrong with my setup<br>
> > (networking, open<br>
> > > >>>>> ports, etc.)?<br>
> > > >>>>> >> Is it expected to work with this setup or should<br>
> > I redo<br>
> > > it in a<br>
> > > >>>>> >> different way?<br>
> > > >>>>> >> 2) How can I troubleshoot the current status of my<br>
> > > setup? Can<br>
> > > >>>>> I find out<br>
> > > >>>>> >> what's missing/wrong and continue from there or<br>
> > should I<br>
> > > just<br>
> > > >>>>> start from<br>
> > > >>>>> >> scratch?<br>
> > > >>>>> >><br>
> > > >>>>> >> Links:<br>
> > > >>>>> >> [1]<br>
> > <a href="http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu">http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu</a><br>
> > > >>>>> >> [2]<br>
> > > >>>>> >><br>
> > > >>>>><br>
> > ><br>
> ><br>
> <a href="https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/">
https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/</a><br>
> > ><br>
> > > >>>>><br>
> > > >>>>> >><br>
> > > >>>>> >> Thank you!<br>
> > > >>>>> >><br>
> > > >>>>> >> Best regards,<br>
> > > >>>>> >> --<br>
> > > >>>>> >> alexander iliev<br>
> > > >>>>> >> _______________________________________________<br>
> > > >>>>> >> Gluster-users mailing list<br>
> > > >>>>> >> Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> > > <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> > <mailto:Gluster-users@gluster.org <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> > > <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> > > >>>>> >><br>
> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> > > >>>>> _______________________________________________<br>
> > > >>>>> Gluster-users mailing list<br>
> > > >>>>> Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> > > <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> > > >>>>><br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> > > >>>> _______________________________________________<br>
> > > >>>> Gluster-users mailing list<br>
> > > >>>> Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> > > >>>><br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> > > > _______________________________________________<br>
> > > > Gluster-users mailing list<br>
> > > > Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <mailto:Gluster-users@gluster.org<br>
> > <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> > > > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">
https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> > > ________<br>
> > ><br>
> > > Community Meeting Calendar:<br>
> > ><br>
> > > APAC Schedule -<br>
> > > Every 2nd and 4th Tuesday at 11:30 AM IST<br>
> > > Bridge: <a href="https://bluejeans.com/118564314">https://bluejeans.com/118564314</a><br>
> > ><br>
> > > NA/EMEA Schedule -<br>
> > > Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
> > > Bridge: <a href="https://bluejeans.com/118564314">https://bluejeans.com/118564314</a><br>
> > ><br>
> > > Gluster-users mailing list<br>
> > > Gluster-users@gluster.org <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> > <mailto:Gluster-users@gluster.org <mailto:Gluster-users@gluster.org<br>
> >><br>
> > > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> > ><br>
> > ><br>
> > ><br>
> > > --<br>
> > > regards<br>
> > > Aravinda VK<br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > regards<br>
> > Aravinda VK<br>
><br>
<br>
<br>
-- <br>
regards<br>
Aravinda VK<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/9bac07d3/attachment-0001.html">http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/9bac07d3/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 3<br>
Date: Thu, 17 Oct 2019 21:03:43 +0530<br>
From: Aravinda Vishwanathapura Krishna Murthy <avishwan@redhat.com><br>
To: deepu srinivasan <sdeepugd@gmail.com><br>
Cc: gluster-users <gluster-users@gluster.org><br>
Subject: Re: [Gluster-users] Single Point of failure in geo<br>
Replication<br>
Message-ID:<br>
<CA+8EeuPu_t3ucUwkvS1x7Y91qyP=sCD7k0Ln=t0Fd_Dp_+7oTA@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On Thu, Oct 17, 2019 at 11:44 AM deepu srinivasan <sdeepugd@gmail.com><br>
wrote:<br>
<br>
> Thank you for your response.<br>
> We have tried the above use case you mentioned.<br>
><br>
> Case 1: The primary node is permanently down (hardware failure).<br>
> In this case, the Geo-replication session cannot be stopped and returns a<br>
> failure like "start the primary node and then stop" (or a similar message).<br>
> Now I cannot delete the session because I cannot stop it.<br>
><br>
<br>
Please try "stop force", and let us know if that works.<br>
<br>
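A sketch of the sequence being suggested here, with placeholder volume and host<br>
names (substitute your own):<br>
<br>
```<br>
# force-stop the session even though the old primary slave node is unreachable<br>
gluster volume geo-replication <mastervol> <slave-host>::<slavevol> stop force<br>
# then delete it and re-create it against a reachable slave node<br>
gluster volume geo-replication <mastervol> <slave-host>::<slavevol> delete<br>
gluster volume geo-replication <mastervol> <new-slave-host>::<slavevol> create push-pem<br>
```<br>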
<br>
> On Thu, Oct 17, 2019 at 8:32 AM Aravinda Vishwanathapura Krishna Murthy <<br>
> avishwan@redhat.com> wrote:<br>
><br>
>><br>
>> On Wed, Oct 16, 2019 at 11:08 PM deepu srinivasan <sdeepugd@gmail.com><br>
>> wrote:<br>
>><br>
>>> Hi Users<br>
>>> Is there a single point of failure in GeoReplication for gluster?<br>
>>> My Case:<br>
>>> I Use 3 nodes in both master and slave volume.<br>
>>> Master volume : Node1,Node2,Node3<br>
>>> Slave Volume : Node4,Node5,Node6<br>
>>> I tried to recreate the scenario to test a single point of failure.<br>
>>><br>
>>> Geo-Replication Status:<br>
>>><br>
>>> *Master Node Slave Node Status *<br>
>>> Node1 Node4 Active<br>
>>> Node2 Node4 Passive<br>
>>> Node3 Node4 Passive<br>
>>><br>
>>> Step 1: Stoped the glusterd daemon in Node4.<br>
>>> Result: There were only two-node statuses like the one below.<br>
>>><br>
>>> *Master Node Slave Node Status *<br>
>>> Node2 Node4 Passive<br>
>>> Node3 Node4 Passive<br>
>>><br>
>>><br>
>>> Will the Geo-replication session go down if the primary slave is down?<br>
>>><br>
>><br>
>><br>
>> Hi Deepu,<br>
>><br>
>> Geo-replication depends on a primary slave node to get the information<br>
>> about other nodes which are part of Slave Volume.<br>
>><br>
>> Once the workers are started, they no longer depend on the primary slave<br>
>> node and will not fail if the primary goes down. But if any other node goes down,<br>
>> the worker will try to connect to some other node; to do that, it tries<br>
>> to run the Volume status command on the slave node using the following command.<br>
>><br>
>> ```<br>
>> ssh -i <georep-pem> <primary-node> gluster volume status <slavevol><br>
>> ```<br>
>><br>
>> The above command will fail, and the worker will not get the list of Slave<br>
>> nodes to which it can connect.<br>
>><br>
>> This is only a temporary failure until the primary node comes back<br>
>> online. If the primary node is permanently down then run Geo-rep delete and<br>
>> Geo-rep create command again with the new primary node. (Note: Geo-rep<br>
>> Delete and Create will remember the last sync time and resume once it<br>
>> starts)<br>
>><br>
>> I will evaluate the possibility of caching the list of Slave nodes so that<br>
>> one can be used as a backup primary node in case of failures. I will open a<br>
>> GitHub issue for the same.<br>
>><br>
>> Thanks for reporting the issue.<br>
>><br>
>> --<br>
>> regards<br>
>> Aravinda VK<br>
>><br>
><br>
<br>
-- <br>
regards<br>
Aravinda VK<br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/fe2a180f/attachment-0001.html">http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/fe2a180f/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 4<br>
Date: Thu, 17 Oct 2019 22:24:30 +0530<br>
From: Amar Tumballi <amarts@gmail.com><br>
To: "Kay K." <kkay.jp@gmail.com><br>
Cc: gluster-users <gluster-users@gluster.org><br>
Subject: Re: [Gluster-users] On a glusterfsd service<br>
Message-ID:<br>
<CA+OzEQvhgfdBeAhoVHZsk14CPYGU32YRmXNUcWC-scTQ00aHaw@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
On Thu, Oct 17, 2019 at 5:21 PM Kay K. <kkay.jp@gmail.com> wrote:<br>
<br>
> Hello All,<br>
><br>
> I have been using 20 GlusterFS servers on CentOS 6.9 for about 5 years. They<br>
> are working well.<br>
<br>
<br>
That is a sweet thing to read first thing in the email :-)<br>
<br>
<br>
> However, I recently noticed that these settings<br>
> differ on some of the hosts.<br>
><br>
> Those 20 servers are running at runlevel 3.<br>
> On 10 of the servers, looking at the directory /etc/rc.d/rc3.d, I found the<br>
> glusterfsd service set to K80, as below.<br>
><br>
> $ ls -l /etc/rc.d/rc3.d/*gluster*<br>
> lrwxrwxrwx 1 root root 20 Mar 9 2016 /etc/rc.d/rc3.d/K80glusterfsd<br>
> -> ../init.d/glusterfsd<br>
> lrwxrwxrwx 1 root root 18 Mar 9 2016 /etc/rc.d/rc3.d/S20glusterd -><br>
> ../init.d/glusterd<br>
><br>
> However, when I checked the other 10 servers, I found glusterfsd set to<br>
> S20, as below.<br>
><br>
> $ ls -l /etc/rc.d/rc3.d/*gluster*<br>
> lrwxrwxrwx 1 root root 18 Oct 9 2015 /etc/rc.d/rc3.d/S20glusterd -><br>
> ../init.d/glusterd<br>
> lrwxrwxrwx 1 root root 20 Oct 9 2015 /etc/rc.d/rc3.d/S20glusterfsd<br>
> -> ../init.d/glusterfsd<br>
><br>
> I remember that half of the servers were built several years later.<br>
> I expect the difference was probably introduced at that time.<br>
><br>
><br>
Most probably. The dates show a difference of ~18 months between them.<br>
Surely some improvements would have gone into the code in that time (~1000<br>
patches in a year).<br>
<br>
I tried checking the git log of glusterfs' spec file and was not able to find<br>
anything. Looks like the difference is mostly in the CentOS spec.<br>
<br>
<br>
> Furthermore, if I check the status of glusterfsd, it shows as running,<br>
> as below.<br>
><br>
> $ /etc/init.d/glusterd status<br>
> glusterd (pid 1989) is running...<br>
> $ /etc/init.d/glusterfsd status<br>
> glusterfsd (pid 2216 2206 2201 2198 2193 2187 2181 2168 2163 2148 2147<br>
> 2146 2139 2130 2123 2113 2111 2100 2088) is running...<br>
><br>
><br>
:-)<br>
<br>
<br>
<br>
> Actually, my GlusterFS server is working well.<br>
><br>
><br>
IMO that is good news. I don't think it would become an issue all of a sudden<br>
after 4-5 years.<br>
<br>
<br>
> I don't know which setting is correct. Would you know about it?<br>
><br>
><br>
In later versions we only need to start the 'glusterd' service, so if it is<br>
working, it should be fine. For reference/correctness related questions, I<br>
would leave it to the experts on specs and init.d scripts to respond.<br>
<br>
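If it helps to compare or align the two groups of hosts, here is a rough sketch<br>
of how to inspect (and, if desired, change) those SysV runlevel links on CentOS 6<br>
with chkconfig, which manages the same rc3.d symlinks shown above:<br>
<br>
```<br>
# show the runlevels each service is enabled for<br>
chkconfig --list glusterd<br>
chkconfig --list glusterfsd<br>
# confirm the resulting symlinks, as in the listings above<br>
ls -l /etc/rc.d/rc3.d/*gluster*<br>
# to match the K80 hosts (glusterfsd not started at boot), one could run:<br>
# chkconfig glusterfsd off<br>
```<br>
<br>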
For most emails, we end up recommending a move to the latest supported<br>
version, but considering you are not facing an issue with the filesystem, I<br>
wouldn't recommend that yet :-)<br>
<br>
Regards,<br>
Amar<br>
<br>
<br>
> Thanks,<br>
> Kondo<br>
> ________<br>
><br>
><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/1eb1a0ed/attachment-0001.html">http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/1eb1a0ed/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 5<br>
Date: Thu, 17 Oct 2019 13:48:15 -0400<br>
From: Kaleb Keithley <kkeithle@redhat.com><br>
To: Alberto Bengoa <bengoa@gmail.com><br>
Cc: gluster-users <gluster-users@gluster.org><br>
Subject: Re: [Gluster-users] Mirror <a href="https://download.gluster.org/">https://download.gluster.org/</a> is<br>
not working<br>
Message-ID:<br>
<CAC+Jd5DEE5b7kW5+Ax9fg3Ha3cM2FGvXCvSDJFhH2Vo+PmqXsA@mail.gmail.com><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
File owners and perms were fixed. It should work now.<br>
<br>
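A quick way to verify, re-running the same download from the original report<br>
(the URL below is the one that was failing):<br>
<br>
```<br>
# re-try the exact download that was failing; it should now complete<br>
wget -O /dev/null https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm<br>
# then 'dnf clean packages' and re-run the install from the report<br>
```<br>
<br>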
On Thu, Oct 17, 2019 at 10:57 AM Alberto Bengoa <bengoa@gmail.com> wrote:<br>
<br>
> Guys,<br>
><br>
> Does anybody from the Gluster team have an update on the mirror status? It<br>
> has been failing since (maybe?) yesterday.<br>
><br>
> root@nas-bkp /tmp $ yum install glusterfs-client<br>
> GlusterFS is a clustered file-system capable of scaling to several<br>
> petabyte 2.1 kB/s | 2.9 kB 00:01<br>
> Dependencies resolved.<br>
><br>
> ============================================================================================================<br>
> Package Arch Version<br>
> Repository Size<br>
><br>
> ============================================================================================================<br>
> Installing:<br>
> glusterfs-fuse x86_64 6.5-2.el8<br>
> glusterfs-rhel8 167 k<br>
> Installing dependencies:<br>
> glusterfs x86_64 6.5-2.el8<br>
> glusterfs-rhel8 681 k<br>
> glusterfs-client-xlators x86_64 6.5-2.el8<br>
> glusterfs-rhel8 893 k<br>
> glusterfs-libs x86_64 6.5-2.el8<br>
> glusterfs-rhel8 440 k<br>
><br>
> Transaction Summary<br>
><br>
> ============================================================================================================<br>
> Install 4 Packages<br>
><br>
> Total download size: 2.1 M<br>
> Installed size: 9.1 M<br>
> Is this ok [y/N]: y<br>
> Downloading Packages:<br>
> [MIRROR] glusterfs-6.5-2.el8.x86_64.rpm: Curl error (18): Transferred a<br>
> partial file for<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> [transfer closed with 648927 bytes remaining to read]<br>
> [FAILED] glusterfs-6.5-2.el8.x86_64.rpm: No more mirrors to try - All<br>
> mirrors were already tried without success<br>
> (2-3/4): glusterfs-client-xlators- 34% [===========-<br>
> ] 562 kB/s | 745 kB 00:02 ETA<br>
> The downloaded packages were saved in cache until the next successful<br>
> transaction.<br>
> You can remove cached packages by executing 'dnf clean packages'.<br>
> Error: Error downloading packages:<br>
> Cannot download glusterfs-6.5-2.el8.x86_64.rpm: All mirrors were tried<br>
><br>
> If you try to download using wget it fails as well:<br>
><br>
> root@nas-bkp /tmp $ wget<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> --2019-10-17 15:53:41--<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> Resolving download.gluster.org (download.gluster.org)... 8.43.85.185<br>
> Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
> connected.<br>
> HTTP request sent, awaiting response... 200 OK<br>
> Length: 697688 (681K) [application/x-rpm]<br>
> Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.1'<br>
><br>
> glusterfs-6.5-2.el8.x86_64 6%[=> ]<br>
> 47.62K --.-KB/s in 0.09s<br>
><br>
> 2019-10-17 15:53:42 (559 KB/s) - Read error at byte 48761/697688 (Error<br>
> decoding the received TLS packet.). Retrying.<br>
><br>
> --2019-10-17 15:53:43-- (try: 2)<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
> connected.<br>
> HTTP request sent, awaiting response... ^C<br>
> root@nas-bkp /tmp $ wget<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> --2019-10-17 15:53:45--<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> Resolving download.gluster.org (download.gluster.org)... 8.43.85.185<br>
> Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
> connected.<br>
> HTTP request sent, awaiting response... 200 OK<br>
> Length: 697688 (681K) [application/x-rpm]<br>
> Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.2'<br>
><br>
> glusterfs-6.5-2.el8.x86_64 6%[=> ]<br>
> 47.62K --.-KB/s in 0.08s<br>
><br>
> 2019-10-17 15:53:46 (564 KB/s) - Read error at byte 48761/697688 (Error<br>
> decoding the received TLS packet.). Retrying.<br>
><br>
> --2019-10-17 15:53:47-- (try: 2)<br>
> <a href="https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm">
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm</a><br>
> Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...<br>
> connected.<br>
> HTTP request sent, awaiting response... 206 Partial Content<br>
> Length: 697688 (681K), 648927 (634K) remaining [application/x-rpm]<br>
> Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.2'<br>
><br>
> glusterfs-6.5-2.el8.x86_64 13%[++==> ]<br>
> 95.18K --.-KB/s in 0.08s<br>
><br>
> 2019-10-17 15:53:47 (563 KB/s) - Read error at byte 97467/697688 (Error<br>
> decoding the received TLS packet.). Retrying.<br>
><br>
><br>
> Thank you!<br>
><br>
> Alberto Bengoa<br>
> ________<br>
><br>
> Community Meeting Calendar:<br>
><br>
> APAC Schedule -<br>
> Every 2nd and 4th Tuesday at 11:30 AM IST<br>
> Bridge: <a href="https://bluejeans.com/118564314">https://bluejeans.com/118564314</a><br>
><br>
> NA/EMEA Schedule -<br>
> Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
> Bridge: <a href="https://bluejeans.com/118564314">https://bluejeans.com/118564314</a><br>
><br>
> Gluster-users mailing list<br>
> Gluster-users@gluster.org<br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
><br>
-------------- next part --------------<br>
An HTML attachment was scrubbed...<br>
URL: <<a href="http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/0a41ccd1/attachment-0001.html">http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/0a41ccd1/attachment-0001.html</a>><br>
<br>
------------------------------<br>
<br>
Message: 6<br>
Date: Thu, 17 Oct 2019 20:40:37 +0200<br>
From: Alexander Iliev <ailiev+gluster@mamul.org><br>
To: Aravinda Vishwanathapura Krishna Murthy <avishwan@redhat.com><br>
Cc: gluster-users <gluster-users@gluster.org><br>
Subject: Re: [Gluster-users] Issues with Geo-replication (GlusterFS<br>
6.3 on Ubuntu 18.04)<br>
Message-ID: <4214e52d-b69f-d5b2-c3fc-2c69e9abb217@mamul.org><br>
Content-Type: text/plain; charset=utf-8; format=flowed<br>
<br>
On 10/17/19 5:32 PM, Aravinda Vishwanathapura Krishna Murthy wrote:<br>
> <br>
> <br>
> On Thu, Oct 17, 2019 at 12:54 PM Alexander Iliev <br>
> <ailiev+gluster@mamul.org <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>>> wrote:<br>
> <br>
> Thanks, Aravinda.<br>
> <br>
> Does this mean that my scenario is currently unsupported?<br>
> <br>
> <br>
> Please try by providing external IP while creating Geo-rep session. We <br>
> will work on the enhancement if it doesn't work.<br>
<br>
This is what I've been doing all along. It didn't work for me.<br>
<br>
> <br>
> <br>
> It seems that I need to make sure that the nodes in the two clusters<br>
> can<br>
> see each-other (some kind of VPN would work I guess).<br>
> <br>
> Is this documented somewhere? I think I've read the geo-replication<br>
> documentation several times now, but somehow it wasn't obvious to me<br>
> that you need access to the slave nodes from the master ones (apart<br>
> from<br>
> the SSH access).<br>
> <br>
> Thanks!<br>
> <br>
> Best regards,<br>
> --<br>
> alexander iliev<br>
> <br>
> On 10/17/19 5:25 AM, Aravinda Vishwanathapura Krishna Murthy wrote:<br>
> > Got it.<br>
> ><br>
> > Geo-replication uses the slave nodes' IPs in the following cases,<br>
> ><br>
> > - Verification during Session creation - It tries to mount the Slave<br>
> > volume using the hostname/IP provided in Geo-rep create command. Try<br>
> > Geo-rep create by specifying the external IP which is accessible<br>
> from<br>
> > the master node.<br>
> > - Once Geo-replication is started, it gets the list of Slave nodes<br>
> > IP/hostname from Slave volume info and connects to those IPs. But in<br>
> > this case, those are internal IP addresses that are not<br>
> accessible from<br>
> > Master nodes. - We need to enhance Geo-replication to accept<br>
> external IP<br>
> > and internal IP map details so that for all connections it can use<br>
> > external IP.<br>
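> ><br>
> > As a concrete example, the session would then be created against an address the<br>
> > master can actually reach (a sketch; <slave-external-ip> is a placeholder for<br>
> > the slave's externally reachable address):<br>
> ><br>
> > ```<br>
> > masternode$ gluster volume geo-replication store1 <slave-external-ip>::store2 create push-pem<br>
> > masternode$ gluster volume geo-replication store1 <slave-external-ip>::store2 start<br>
> > ```<br>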
> ><br>
> > On Wed, Oct 16, 2019 at 10:29 PM Alexander Iliev<br>
> > <ailiev+gluster@mamul.org <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> <<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>>> wrote:<br>
> ><br>
> >? ? ?Hi Aravinda,<br>
> ><br>
> >? ? ?All bricks on the slave volume are up and the volume seems<br>
> >? ? ?functional.<br>
> ><br>
> >? ? ?Your suggestion about trying to mount the slave volume on a<br>
> master node<br>
> >? ? ?brings up my question about network connectivity again - the<br>
> GlusterFS<br>
> >? ? ?documentation[1] says:<br>
> ><br>
> >? ? ? ?> The server specified in the mount command is only used to<br>
> fetch the<br>
> >? ? ?gluster configuration volfile describing the volume name.<br>
> Subsequently,<br>
> >? ? ?the client will communicate directly with the servers<br>
> mentioned in the<br>
> >? ? ?volfile (which might not even include the one used for mount).<br>
> ><br>
> >? ? ?To me this means that the masternode from your example is<br>
> expected to<br>
> >? ? ?have connectivity to the network where the slave volume runs,<br>
> i.e. to<br>
> >? ? ?have network access to the slave nodes. In my geo-replication<br>
> scenario<br>
> >? ? ?this is definitely not the case. The two clusters are running<br>
> in two<br>
> >? ? ?completely different networks that are not interconnected.<br>
> ><br>
> >? ? ?So my question is - how is the slave volume mount expected to<br>
> happen if<br>
> >? ? ?the client host cannot access the GlusterFS nodes? Or is the<br>
> >? ? ?connectivity a requirement even for geo-replication?<br>
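> ><br>
> > One quick sanity check is whether a master node can reach the GlusterFS ports<br>
> > on the slave nodes at all; a rough sketch, assuming the default glusterd port<br>
> > and a typical brick port (brick ports vary per brick):<br>
> ><br>
> > ```<br>
> > masternode$ nc -zv <slave-node-ip> 24007   # glusterd / volfile server<br>
> > masternode$ nc -zv <slave-node-ip> 49152   # one of the brick ports<br>
> > ```<br>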
> ><br>
> >? ? ?I'm not sure if I'm missing something, but any help will be<br>
> highly<br>
> >? ? ?appreciated!<br>
> ><br>
> >? ? ?Thanks!<br>
> ><br>
> >? ? ?Links:<br>
> >? ? ?[1]<br>
> ><br>
> <a href="https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/">
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/</a><br>
> >? ? ?--<br>
> >? ? ?alexander iliev<br>
> ><br>
> >? ? ?On 10/16/19 6:03 AM, Aravinda Vishwanathapura Krishna Murthy<br>
> wrote:<br>
> >? ? ? > Hi Alexander,<br>
> >? ? ? ><br>
> >? ? ? > Please check the status of Volume. Looks like the Slave volume<br>
> >? ? ?mount is<br>
> >? ? ? > failing because bricks are down or not reachable. If Volume<br>
> >? ? ?status shows<br>
> >? ? ? > all bricks are up then try mounting the slave volume using<br>
> mount<br>
> >? ? ?command.<br>
> >? ? ? ><br>
> >? ? ? > ```<br>
> >? ? ? > masternode$ mkdir /mnt/vol<br>
> >? ? ? > masternode$ mount -t glusterfs <slavehost>:<slavevol> /mnt/vol<br>
> >? ? ? > ```<br>
> >? ? ? ><br>
> >? ? ? > On Fri, Oct 11, 2019 at 4:03 AM Alexander Iliev<br>
> >? ? ? > <ailiev+gluster@mamul.org<br>
> <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> <<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%252Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%25252Bgluster@mamul.org">mailto:ailiev%25252Bgluster@mamul.org</a>>>>> wrote:<br>
> >? ? ? ><br>
> >? ? ? >? ? ?Hi all,<br>
> >? ? ? ><br>
> >? ? ? >? ? ?I ended up reinstalling the nodes with CentOS 7.5 and<br>
> >? ? ?GlusterFS 6.5<br>
> >? ? ? >? ? ?(installed from the SIG.)<br>
> >? ? ? ><br>
> >? ? ? >? ? ?Now when I try to create a replication session I get the<br>
> >? ? ?following:<br>
> >? ? ? ><br>
> >? ? ? >? ? ? ?> # gluster volume geo-replication store1<br>
> >? ? ?<slave-host>::store2 create<br>
> >? ? ? >? ? ?push-pem<br>
> >? ? ? >? ? ? ?> Unable to mount and fetch slave volume details. Please<br>
> >? ? ?check the<br>
> >? ? ? >? ? ?log:<br>
> >? ? ? >? ? ?/var/log/glusterfs/geo-replication/gverify-slavemnt.log<br>
> >? ? ? >? ? ? ?> geo-replication command failed<br>
> >? ? ? ><br>
> >? ? ? >? ? ?You can find the contents of gverify-slavemnt.log<br>
> below, but the<br>
> >? ? ? >? ? ?initial<br>
> >? ? ? >? ? ?error seems to be:<br>
> >? ? ? ><br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578519] E<br>
> >? ? ? >? ? ?[fuse-bridge.c:5211:fuse_first_lookup]<br>
> >? ? ? >? ? ?0-fuse: first lookup on root failed (Transport<br>
> endpoint is not<br>
> >? ? ? >? ? ?connected)<br>
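> ><br>
> > The same verification mount that gverify.sh performs can be reproduced by hand<br>
> > from a master node to watch this failure directly (a sketch mirroring the<br>
> > arguments in the log below; the mount point and log path are illustrative):<br>
> ><br>
> > ```<br>
> > masternode$ mkdir -p /mnt/slavetest<br>
> > masternode$ glusterfs --volfile-server <slave-host> --volfile-id store2 \<br>
> >     -l /tmp/slavemnt-test.log /mnt/slavetest<br>
> > ```<br>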
> >? ? ? ><br>
> >? ? ? >? ? ?I only found<br>
> >? ? ? > <br>
> ?[this](https://bugzilla.redhat.com/show_bug.cgi?id=1659824)<br>
> >? ? ? >? ? ?bug report which doesn't seem to help. The reported<br>
> issue is<br>
> >? ? ?failure to<br>
> >? ? ? >? ? ?mount a volume on a GlusterFS client, but in my case I<br>
> need<br>
> >? ? ? >? ? ?geo-replication which implies the client (geo-replication<br>
> >? ? ?master) being<br>
> >? ? ? >? ? ?on a different network.<br>
> >? ? ? ><br>
> >? ? ? >? ? ?Any help will be appreciated.<br>
> >? ? ? ><br>
> >? ? ? >? ? ?Thanks!<br>
> >? ? ? ><br>
> >? ? ? >? ? ?gverify-slavemnt.log:<br>
> >? ? ? ><br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.571256] I [MSGID: 100030]<br>
> >? ? ? >? ? ?[glusterfsd.c:2847:main] 0-glusterfs: Started running<br>
> >? ? ?glusterfs version<br>
> >? ? ? >? ? ?6.5 (args: glusterfs<br>
> --xlator-option=*dht.lookup-unhashed=off<br>
> >? ? ? >? ? ?--volfile-server <slave-host> --volfile-id store2 -l<br>
> >? ? ? >? ? ?/var/log/glusterfs/geo-replication/gverify-slavemnt.log<br>
> >? ? ? >? ? ?/tmp/gverify.sh.5nFlRh)<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.575438] I<br>
> [glusterfsd.c:2556:daemonize]<br>
> >? ? ? >? ? ?0-glusterfs: Pid of current running process is 6021<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.584282] I [MSGID: 101190]<br>
> >? ? ? >? ? ?[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:<br>
> >? ? ?Started thread<br>
> >? ? ? >? ? ?with index 0<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.584299] I [MSGID: 101190]<br>
> >? ? ? >? ? ?[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:<br>
> >? ? ?Started thread<br>
> >? ? ? >? ? ?with index 1<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.928094] I [MSGID: 114020]<br>
> >? ? ? >? ? ?[client.c:2393:notify]<br>
> >? ? ? >? ? ?0-store2-client-0: parent translators are ready,<br>
> attempting<br>
> >? ? ?connect on<br>
> >? ? ? >? ? ?transport<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.931121] I [MSGID: 114020]<br>
> >? ? ? >? ? ?[client.c:2393:notify]<br>
> >? ? ? >? ? ?0-store2-client-1: parent translators are ready,<br>
> attempting<br>
> >? ? ?connect on<br>
> >? ? ? >? ? ?transport<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.933976] I [MSGID: 114020]<br>
> >? ? ? >? ? ?[client.c:2393:notify]<br>
> >? ? ? >? ? ?0-store2-client-2: parent translators are ready,<br>
> attempting<br>
> >? ? ?connect on<br>
> >? ? ? >? ? ?transport<br>
> >? ? ? >? ? ? ?> Final graph:<br>
> >? ? ? >? ? ? ?><br>
> >? ? ? ><br>
> > <br>
> ?+------------------------------------------------------------------------------+<br>
> >? ? ? >? ? ? ?>? ?1: volume store2-client-0<br>
> >? ? ? >? ? ? ?>? ?2:? ? ?type protocol/client<br>
> >? ? ? >? ? ? ?>? ?3:? ? ?option ping-timeout 42<br>
> >? ? ? >? ? ? ?>? ?4:? ? ?option remote-host 172.31.36.11<br>
> >? ? ? >? ? ? ?>? ?5:? ? ?option remote-subvolume<br>
> >? ? ?/data/gfs/store1/1/brick-store2<br>
> >? ? ? >? ? ? ?>? ?6:? ? ?option transport-type socket<br>
> >? ? ? >? ? ? ?>? ?7:? ? ?option transport.address-family inet<br>
> >? ? ? >? ? ? ?>? ?8:? ? ?option transport.socket.ssl-enabled off<br>
> >? ? ? >? ? ? ?>? ?9:? ? ?option transport.tcp-user-timeout 0<br>
> >? ? ? >? ? ? ?>? 10:? ? ?option transport.socket.keepalive-time 20<br>
> >? ? ? >? ? ? ?>? 11:? ? ?option transport.socket.keepalive-interval 2<br>
> >? ? ? >? ? ? ?>? 12:? ? ?option transport.socket.keepalive-count 9<br>
> >? ? ? >? ? ? ?>? 13:? ? ?option send-gids true<br>
> >? ? ? >? ? ? ?>? 14: end-volume<br>
> >? ? ? >? ? ? ?>? 15:<br>
> >? ? ? >? ? ? ?>? 16: volume store2-client-1<br>
> >? ? ? >? ? ? ?>? 17:? ? ?type protocol/client<br>
> >? ? ? >? ? ? ?>? 18:? ? ?option ping-timeout 42<br>
> >? ? ? >? ? ? ?>? 19:? ? ?option remote-host 172.31.36.12<br>
> >? ? ? >? ? ? ?>? 20:? ? ?option remote-subvolume<br>
> >? ? ?/data/gfs/store1/1/brick-store2<br>
> >? ? ? >? ? ? ?>? 21:? ? ?option transport-type socket<br>
> >? ? ? >? ? ? ?>? 22:? ? ?option transport.address-family inet<br>
> >? ? ? >? ? ? ?>? 23:? ? ?option transport.socket.ssl-enabled off<br>
> >? ? ? >? ? ? ?>? 24:? ? ?option transport.tcp-user-timeout 0<br>
> >? ? ? >? ? ? ?>? 25:? ? ?option transport.socket.keepalive-time 20<br>
> >? ? ? >? ? ? ?>? 26:? ? ?option transport.socket.keepalive-interval 2<br>
> >? ? ? >? ? ? ?>? 27:? ? ?option transport.socket.keepalive-count 9<br>
> >? ? ? >? ? ? ?>? 28:? ? ?option send-gids true<br>
> >? ? ? >? ? ? ?>? 29: end-volume<br>
> >? ? ? >? ? ? ?>? 30:<br>
> >? ? ? >? ? ? ?>? 31: volume store2-client-2<br>
> >? ? ? >? ? ? ?>? 32:? ? ?type protocol/client<br>
> >? ? ? >? ? ? ?>? 33:? ? ?option ping-timeout 42<br>
> >? ? ? >? ? ? ?>? 34:? ? ?option remote-host 172.31.36.13<br>
> >? ? ? >? ? ? ?>? 35:? ? ?option remote-subvolume<br>
> >? ? ?/data/gfs/store1/1/brick-store2<br>
> >? ? ? >? ? ? ?>? 36:? ? ?option transport-type socket<br>
> >? ? ? >? ? ? ?>? 37:? ? ?option transport.address-family inet<br>
> >? ? ? >? ? ? ?>? 38:? ? ?option transport.socket.ssl-enabled off<br>
> >? ? ? >? ? ? ?>? 39:? ? ?option transport.tcp-user-timeout 0<br>
> >? ? ? >? ? ? ?>? 40:? ? ?option transport.socket.keepalive-time 20<br>
> >? ? ? >? ? ? ?>? 41:? ? ?option transport.socket.keepalive-interval 2<br>
> >? ? ? >? ? ? ?>? 42:? ? ?option transport.socket.keepalive-count 9<br>
> >? ? ? >? ? ? ?>? 43:? ? ?option send-gids true<br>
> >? ? ? >? ? ? ?>? 44: end-volume<br>
> >? ? ? >? ? ? ?>? 45:<br>
> >? ? ? >? ? ? ?>? 46: volume store2-replicate-0<br>
> >? ? ? >? ? ? ?>? 47:? ? ?type cluster/replicate<br>
> >? ? ? >? ? ? ?>? 48:? ? ?option afr-pending-xattr<br>
> >? ? ? >? ? ?store2-client-0,store2-client-1,store2-client-2<br>
> >? ? ? >? ? ? ?>? 49:? ? ?option use-compound-fops off<br>
> >? ? ? >? ? ? ?>? 50:? ? ?subvolumes store2-client-0 store2-client-1<br>
> >? ? ?store2-client-2<br>
> >? ? ? >? ? ? ?>? 51: end-volume<br>
> >? ? ? >? ? ? ?>? 52:<br>
> >? ? ? >? ? ? ?>? 53: volume store2-dht<br>
> >? ? ? >? ? ? ?>? 54:? ? ?type cluster/distribute<br>
> >? ? ? >? ? ? ?>? 55:? ? ?option lookup-unhashed off<br>
> >? ? ? >? ? ? ?>? 56:? ? ?option lock-migration off<br>
> >? ? ? >? ? ? ?>? 57:? ? ?option force-migration off<br>
> >? ? ? >? ? ? ?>? 58:? ? ?subvolumes store2-replicate-0<br>
> >? ? ? >? ? ? ?>? 59: end-volume<br>
> >? ? ? >? ? ? ?>? 60:<br>
> >? ? ? >? ? ? ?>? 61: volume store2-write-behind<br>
> >? ? ? >? ? ? ?>? 62:? ? ?type performance/write-behind<br>
> >? ? ? >? ? ? ?>? 63:? ? ?subvolumes store2-dht<br>
> >? ? ? >? ? ? ?>? 64: end-volume<br>
> >? ? ? >? ? ? ?>? 65:<br>
> >? ? ? >? ? ? ?>? 66: volume store2-read-ahead<br>
> >? ? ? >? ? ? ?>? 67:? ? ?type performance/read-ahead<br>
> >? ? ? >? ? ? ?>? 68:? ? ?subvolumes store2-write-behind<br>
> >? ? ? >? ? ? ?>? 69: end-volume<br>
> >? ? ? >? ? ? ?>? 70:<br>
> >? ? ? >? ? ? ?>? 71: volume store2-readdir-ahead<br>
> >? ? ? >? ? ? ?>? 72:? ? ?type performance/readdir-ahead<br>
> >? ? ? >? ? ? ?>? 73:? ? ?option parallel-readdir off<br>
> >? ? ? >? ? ? ?>? 74:? ? ?option rda-request-size 131072<br>
> >? ? ? >? ? ? ?>? 75:? ? ?option rda-cache-limit 10MB<br>
> >? ? ? >? ? ? ?>? 76:? ? ?subvolumes store2-read-ahead<br>
> >? ? ? >? ? ? ?>? 77: end-volume<br>
> >? ? ? >? ? ? ?>? 78:<br>
> >? ? ? >? ? ? ?>? 79: volume store2-io-cache<br>
> >? ? ? >? ? ? ?>? 80:? ? ?type performance/io-cache<br>
> >? ? ? >? ? ? ?>? 81:? ? ?subvolumes store2-readdir-ahead<br>
> >? ? ? >? ? ? ?>? 82: end-volume<br>
> >? ? ? >? ? ? ?>? 83:<br>
> >? ? ? >? ? ? ?>? 84: volume store2-open-behind<br>
> >? ? ? >? ? ? ?>? 85:? ? ?type performance/open-behind<br>
> >? ? ? >? ? ? ?>? 86:? ? ?subvolumes store2-io-cache<br>
> >? ? ? >? ? ? ?>? 87: end-volume<br>
> >? ? ? >? ? ? ?>? 88:<br>
> >? ? ? >? ? ? ?>? 89: volume store2-quick-read<br>
> >? ? ? >? ? ? ?>? 90:? ? ?type performance/quick-read<br>
> >? ? ? >? ? ? ?>? 91:? ? ?subvolumes store2-open-behind<br>
> >? ? ? >? ? ? ?>? 92: end-volume<br>
> >? ? ? >? ? ? ?>? 93:<br>
> >? ? ? >? ? ? ?>? 94: volume store2-md-cache<br>
> >? ? ? >? ? ? ?>? 95:? ? ?type performance/md-cache<br>
> >? ? ? >? ? ? ?>? 96:? ? ?subvolumes store2-quick-read<br>
> >? ? ? >? ? ? ?>? 97: end-volume<br>
> >? ? ? >? ? ? ?>? 98:<br>
> >? ? ? >? ? ? ?>? 99: volume store2<br>
> >? ? ? >? ? ? ?> 100:? ? ?type debug/io-stats<br>
> >? ? ? >? ? ? ?> 101:? ? ?option log-level INFO<br>
> >? ? ? >? ? ? ?> 102:? ? ?option latency-measurement off<br>
> >? ? ? >? ? ? ?> 103:? ? ?option count-fop-hits off<br>
> >? ? ? >? ? ? ?> 104:? ? ?subvolumes store2-md-cache<br>
> >? ? ? >? ? ? ?> 105: end-volume<br>
> >? ? ? >? ? ? ?> 106:<br>
> >? ? ? >? ? ? ?> 107: volume meta-autoload<br>
> >? ? ? >? ? ? ?> 108:? ? ?type meta<br>
> >? ? ? >? ? ? ?> 109:? ? ?subvolumes store2<br>
> >? ? ? >? ? ? ?> 110: end-volume<br>
> >? ? ? >? ? ? ?> 111:<br>
> >? ? ? >? ? ? ?><br>
> >? ? ? ><br>
> > <br>
> ?+------------------------------------------------------------------------------+<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578287] I<br>
> [fuse-bridge.c:5142:fuse_init]<br>
> >? ? ? >? ? ?0-glusterfs-fuse: FUSE inited with protocol versions:<br>
> >? ? ?glusterfs 7.24<br>
> >? ? ? >? ? ?kernel 7.22<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578356] I<br>
> >? ? ?[fuse-bridge.c:5753:fuse_graph_sync]<br>
> >? ? ? >? ? ?0-fuse: switched to graph 0<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578467] I [MSGID: 108006]<br>
> >? ? ? >? ? ?[afr-common.c:5666:afr_local_init]<br>
> 0-store2-replicate-0: no<br>
> >? ? ? >? ? ?subvolumes up<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578519] E<br>
> >? ? ? >? ? ?[fuse-bridge.c:5211:fuse_first_lookup]<br>
> >? ? ? >? ? ?0-fuse: first lookup on root failed (Transport<br>
> endpoint is not<br>
> >? ? ? >? ? ?connected)<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578709] W<br>
> >? ? ?[fuse-bridge.c:1266:fuse_attr_cbk]<br>
> >? ? ? >? ? ?0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport<br>
> endpoint is not<br>
> >? ? ? >? ? ?connected)<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578687] I [MSGID: 108006]<br>
> >? ? ? >? ? ?[afr-common.c:5666:afr_local_init]<br>
> 0-store2-replicate-0: no<br>
> >? ? ? >? ? ?subvolumes up<br>
> >? ? ? >? ? ? ?> [2019-10-10 22:09:48.222459] E [MSGID: 108006]<br>
> >? ? ? >? ? ?[afr-common.c:5318:__afr_handle_child_down_event]<br>
> >? ? ?0-store2-replicate-0:<br>
> >? ? ? >? ? ?All subvolumes are down. Going offline until at least<br>
> one of<br>
> >? ? ?them comes<br>
> >? ? ? >? ? ?back up.<br>
> >? ? ? >? ? ? ?> The message "E [MSGID: 108006]<br>
> >? ? ? >? ? ?[afr-common.c:5318:__afr_handle_child_down_event]<br>
> >? ? ?0-store2-replicate-0:<br>
> >? ? ? >? ? ?All subvolumes are down. Going offline until at least<br>
> one of<br>
> >? ? ?them comes<br>
> >? ? ? >? ? ?back up." repeated 2 times between [2019-10-10<br>
> >? ? ?22:09:48.222459] and<br>
> >? ? ? >? ? ?[2019-10-10 22:09:48.222891]<br>
> >? ? ? >? ? ? ?><br>
> >? ? ? ><br>
> >? ? ? >? ? ?alexander iliev<br>
> >? ? ? ><br>
> >? ? ? >? ? ?On 9/8/19 4:50 PM, Alexander Iliev wrote:<br>
> >? ? ? >? ? ? > Hi all,<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > Sunny, thank you for the update.<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > I have applied the patch locally on my slave system and<br>
> >? ? ?now the<br>
> >? ? ? >? ? ? > mountbroker setup is successful.<br>
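> > ><br>
> > > For reference, the slave-side mountbroker setup boils down to something like<br>
> > > the following (a sketch; the mount root and group name are placeholders, the<br>
> > > volume and user are the ones used below):<br>
> > ><br>
> > > ```<br>
> > > slavenode# gluster-mountbroker setup /var/mountbroker-root geogroup<br>
> > > slavenode# gluster-mountbroker add store1 glustergeorep<br>
> > > ```<br>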
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > I am facing another issue though - when I try to<br>
> create a<br>
> >? ? ? >? ? ?replication<br>
> >? ? ? >? ? ? > session between the two sites I am getting:<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? >? ??????? # gluster volume geo-replication store1<br>
> >? ? ? >? ? ? > glustergeorep@<slave-host>::store1 create push-pem<br>
> >? ? ? >? ? ? >? ??????? Error : Request timed out<br>
> >? ? ? >? ? ? >? ??????? geo-replication command failed<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > It is still unclear to me if my setup is expected<br>
> to work<br>
> >? ? ?at all.<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > Reading the geo-replication documentation at [1] I<br>
> see this<br>
> >? ? ? >? ? ?paragraph:<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? >? > A password-less SSH connection is also required<br>
> for gsyncd<br>
> >? ? ? >? ? ?between<br>
> >? ? ? >? ? ? > every node in the master to every node in the<br>
> slave. The<br>
> >? ? ?gluster<br>
> >? ? ? >? ? ? > system:: execute gsec_create command creates secret-pem<br>
> >? ? ?files on<br>
> >? ? ? >? ? ?all the<br>
> >? ? ? >? ? ? > nodes in the master, and is used to implement the<br>
> >? ? ?password-less SSH<br>
> >? ? ? >? ? ? > connection. The push-pem option in the<br>
> geo-replication create<br>
> >? ? ? >? ? ?command<br>
> >? ? ? >? ? ? > pushes these keys to all the nodes in the slave.<br>
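> > ><br>
> > > In command form that sequence is roughly (a sketch; the host, user and volume<br>
> > > names are the ones used elsewhere in this thread):<br>
> > ><br>
> > > ```<br>
> > > masternode$ gluster system:: execute gsec_create<br>
> > > masternode$ gluster volume geo-replication store1 glustergeorep@<slave-host>::store1 create push-pem<br>
> > > ```<br>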
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > It is not clear to me whether connectivity from each<br>
> >? ? ?master node<br>
> >? ? ? >? ? ?to each<br>
> >? ? ? >? ? ? > slave node is a requirement in terms of networking.<br>
> In my<br>
> >? ? ?setup the<br>
> >? ? ? >? ? ? > slave nodes form the Gluster pool over a private<br>
> network<br>
> >? ? ?which is<br>
> >? ? ? >? ? ?not<br>
> >? ? ? >? ? ? > reachable from the master site.<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > Any ideas how to proceed from here will be greatly<br>
> >? ? ?appreciated.<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > Thanks!<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > Links:<br>
> >? ? ? >? ? ? > [1]<br>
> >? ? ? >? ? ? ><br>
> >? ? ? ><br>
> ><br>
> <a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication">
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication</a><br>
> >? ? ? ><br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > Best regards,<br>
> >? ? ? >? ? ? > --<br>
> >? ? ? >? ? ? > alexander iliev<br>
> >? ? ? >? ? ? ><br>
> >? ? ? >? ? ? > On 9/3/19 2:50 PM, Sunny Kumar wrote:<br>
> >? ? ? >? ? ? >> Thank you for the explanation Kaleb.<br>
> >? ? ? >? ? ? >><br>
> >? ? ? >? ? ? >> Alexander,<br>
> >? ? ? >? ? ? >><br>
> >? ? ? >? ? ? >> This fix will be available with next release for all<br>
> >? ? ?supported<br>
> >? ? ? >? ? ?versions.<br>
> >? ? ? >? ? ? >><br>
> >? ? ? >? ? ? >> /sunny<br>
> >? ? ? >? ? ? >><br>
> >? ? ? >? ? ? >> On Mon, Sep 2, 2019 at 6:47 PM Kaleb Keithley<br>
> >? ? ? >? ? ?<kkeithle@redhat.com <<a href="mailto:kkeithle@redhat.com">mailto:kkeithle@redhat.com</a>><br>
> <<a href=""></a>mailto:kkeithle@redhat.com <<a href="mailto:kkeithle@redhat.com">mailto:kkeithle@redhat.com</a>>><br>
> >? ? ?<<a href=""></a>mailto:kkeithle@redhat.com <<a href="mailto:kkeithle@redhat.com">mailto:kkeithle@redhat.com</a>><br>
> <<a href=""></a>mailto:kkeithle@redhat.com <<a href="mailto:kkeithle@redhat.com">mailto:kkeithle@redhat.com</a>>>>><br>
> >? ? ? >? ? ? >> wrote:<br>
> >? ? ? >? ? ? >>><br>
> >? ? ? >? ? ? >>> Fixes on master (before or after the release-7 branch<br>
> >? ? ?was taken)<br>
> >? ? ? >? ? ? >>> almost certainly warrant a backport IMO to at least<br>
> >? ? ?release-6, and<br>
> >? ? ? >? ? ? >>> probably release-5 as well.<br>
> >? ? ? >? ? ? >>><br>
> >? ? ? >? ? ? >>> We used to have a "tracker" BZ for each minor<br>
> release (e.g.<br>
> >? ? ? >? ? ?6.6) to<br>
> >? ? ? >? ? ? >>> keep track of backports by cloning the original<br>
> BZ and<br>
> >? ? ?changing<br>
> >? ? ? >? ? ?the<br>
> >? ? ? >? ? ? >>> Version, and adding that BZ to the tracker. I'm<br>
> not sure<br>
> >? ? ?what<br>
> >? ? ? >? ? ? >>> happened to that practice. The last ones I can<br>
> find are<br>
> >? ? ?for 6.3<br>
> >? ? ? >? ? ?and<br>
> >? ? ? >? ? ? >>> 5.7;<br>
> > <a href="https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3">https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3</a> and<br>
> >? ? ? >? ? ? >>><br>
> <a href="https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7">https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7</a><br>
> >? ? ? >? ? ? >>><br>
> >? ? ? >? ? ? >>> It isn't enough to just backport recent fixes on<br>
> master to<br>
> >? ? ? >? ? ?release-7.<br>
> >? ? ? >? ? ? >>> We are supposedly continuing to maintain<br>
> release-6 and<br>
> >? ? ?release-5<br>
> >? ? ? >? ? ? >>> after release-7 GAs. If that has changed, I<br>
> haven't seen an<br>
> >? ? ? >? ? ? >>> announcement to that effect. I don't know why our<br>
> >? ? ?developers don't<br>
> >? ? ? >? ? ? >>> automatically backport to all the actively maintained<br>
> >? ? ?releases.<br>
> >? ? ? >? ? ? >>><br>
> >? ? ? >? ? ? >>> Even if there isn't a tracker BZ, you can always<br>
> create a<br>
> >? ? ? >? ? ?backport BZ<br>
>? ? ? >? ? ? >>> by cloning the original BZ and changing the release<br>
> to 6.<br>
> >? ? ?That'd<br>
> >? ? ? >? ? ?be a<br>
> >? ? ? >? ? ? >>> good place to start.<br>
> >? ? ? >? ? ? >>><br>
> >? ? ? >? ? ? >>> On Sun, Sep 1, 2019 at 8:45 AM Alexander Iliev<br>
> >? ? ? >? ? ? >>> <ailiev+gluster@mamul.org<br>
> <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%252Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%25252Bgluster@mamul.org">mailto:ailiev%25252Bgluster@mamul.org</a>>>>><br>
> >? ? ? >? ? ?wrote:<br>
> >? ? ? >? ? ? >>>><br>
> >? ? ? >? ? ? >>>> Hi Strahil,<br>
> >? ? ? >? ? ? >>>><br>
> >? ? ? >? ? ? >>>> Yes, this might be right, but I would still expect<br>
> >? ? ?fixes like<br>
> >? ? ? >? ? ?this<br>
> >? ? ? >? ? ? >>>> to be<br>
> >? ? ? >? ? ? >>>> released for all supported major versions (which<br>
> should<br>
> >? ? ? >? ? ?include 6.) At<br>
> >? ? ? >? ? ? >>>> least that's how I understand<br>
> >? ? ? >? ? ? >>>> <a href="https://www.gluster.org/release-schedule/">https://www.gluster.org/release-schedule/</a>.<br>
> >? ? ? >? ? ? >>>><br>
> >? ? ? >? ? ? >>>> Anyway, let's wait for Sunny to clarify.<br>
> >? ? ? >? ? ? >>>><br>
> >? ? ? >? ? ? >>>> Best regards,<br>
> >? ? ? >? ? ? >>>> alexander iliev<br>
> >? ? ? >? ? ? >>>><br>
> >? ? ? >? ? ? >>>> On 9/1/19 2:07 PM, Strahil Nikolov wrote:<br>
> >? ? ? >? ? ? >>>>> Hi Alex,<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> I'm not very deep into bugzilla stuff, but for me<br>
> >? ? ?NEXTRELEASE<br>
> >? ? ? >? ? ?means<br>
> >? ? ? >? ? ? >>>>> v7.<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Sunny,<br>
> >? ? ? >? ? ? >>>>> Am I understanding it correctly ?<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Best Regards,<br>
> >? ? ? >? ? ? >>>>> Strahil Nikolov<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> ? ??????, 1 ????????? 2019 ?., 14:27:32 ?.<br>
> ???????+3,<br>
> >? ? ? >? ? ?Alexander Iliev<br>
> >? ? ? >? ? ? >>>>> <ailiev+gluster@mamul.org<br>
> <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>><br>
> >? ? ? >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%252Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%25252Bgluster@mamul.org">mailto:ailiev%25252Bgluster@mamul.org</a>>>>> ??????:<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Hi Sunny,<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Thank you for the quick response.<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> It's not clear to me however if the fix has<br>
> already been<br>
> >? ? ? >? ? ?released<br>
> >? ? ? >? ? ? >>>>> or not.<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> The bug status is CLOSED NEXTRELEASE and<br>
> according to<br>
> >? ? ?[1] the<br>
> >? ? ? >? ? ? >>>>> NEXTRELEASE resolution means that the fix will be<br>
> >? ? ?included in<br>
> >? ? ? >? ? ?the next<br>
> >? ? ? >? ? ? >>>>> supported release. The bug is logged against the<br>
> >? ? ?mainline version<br>
> >? ? ? >? ? ? >>>>> though, so I'm not sure what this means exactly.<br>
> >? ? ? >? ? ? >>>>><br>
>? ? ? >? ? ? >>>>> From the 6.4[2] and 6.5[3] release notes it<br>
> seems it<br>
> >? ? ?hasn't<br>
> >? ? ? >? ? ?been<br>
> >? ? ? >? ? ? >>>>> released yet.<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Ideally I would not like to patch my systems<br>
> locally,<br>
> >? ? ?so if you<br>
> >? ? ? >? ? ? >>>>> have an<br>
> >? ? ? >? ? ? >>>>> ETA on when this will be out officially I would<br>
> really<br>
> >? ? ? >? ? ?appreciate it.<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Links:<br>
> >? ? ? >? ? ? >>>>> [1]<br>
> >? ? ? > <a href="https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status">
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status</a><br>
> >? ? ? >? ? ? >>>>> [2]<br>
> <a href="https://docs.gluster.org/en/latest/release-notes/6.4/">https://docs.gluster.org/en/latest/release-notes/6.4/</a><br>
> >? ? ? >? ? ? >>>>> [3]<br>
> <a href="https://docs.gluster.org/en/latest/release-notes/6.5/">https://docs.gluster.org/en/latest/release-notes/6.5/</a><br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Thank you!<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> Best regards,<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> alexander iliev<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> On 8/30/19 9:22 AM, Sunny Kumar wrote:<br>
> >? ? ? >? ? ? >>>>> ? > Hi Alexander,<br>
> >? ? ? >? ? ? >>>>> ? ><br>
> >? ? ? >? ? ? >>>>> ? > Thanks for pointing that out!<br>
> >? ? ? >? ? ? >>>>> ? ><br>
>? ? ? >? ? ? >>>>> ? > But this issue is fixed now; you can see<br>
> the links below for the<br>
>? ? ? >? ? ?BZ<br>
>? ? ? >? ? ? >>>>> and patch.<br>
> >? ? ? >? ? ? >>>>> ? ><br>
> >? ? ? >? ? ? >>>>> ? > BZ -<br>
> > <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1709248">https://bugzilla.redhat.com/show_bug.cgi?id=1709248</a><br>
> >? ? ? >? ? ? >>>>> ? ><br>
> >? ? ? >? ? ? >>>>> ? > Patch -<br>
> > <a href="https://review.gluster.org/#/c/glusterfs/+/22716/">https://review.gluster.org/#/c/glusterfs/+/22716/</a><br>
> >? ? ? >? ? ? >>>>> ? ><br>
> >? ? ? >? ? ? >>>>> ? > Hope this helps.<br>
> >? ? ? >? ? ? >>>>> ? ><br>
> >? ? ? >? ? ? >>>>> ? > /sunny<br>
> >? ? ? >? ? ? >>>>> ? ><br>
> >? ? ? >? ? ? >>>>> ? > On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev<br>
> >? ? ? >? ? ? >>>>> ? > <ailiev+gluster@mamul.org<br>
> <<a href="mailto:ailiev%2Bgluster@mamul.org">mailto:ailiev%2Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>>><br>
> >? ? ? >? ? ?<<a href=""></a>mailto:ailiev%2Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%252Bgluster@mamul.org">mailto:ailiev%252Bgluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:ailiev%252Bgluster@mamul.org<br>
> <<a href="mailto:ailiev%25252Bgluster@mamul.org">mailto:ailiev%25252Bgluster@mamul.org</a>>>> <<a href=""></a>mailto:gluster@mamul.org<br>
> <<a href="mailto:gluster@mamul.org">mailto:gluster@mamul.org</a>><br>
> >? ? ?<<a href=""></a>mailto:gluster@mamul.org <<a href="mailto:gluster@mamul.org">mailto:gluster@mamul.org</a>>><br>
> >? ? ? >? ? ?<<a href=""></a>mailto:gluster@mamul.org <<a href="mailto:gluster@mamul.org">mailto:gluster@mamul.org</a>><br>
> <<a href=""></a>mailto:gluster@mamul.org <<a href="mailto:gluster@mamul.org">mailto:gluster@mamul.org</a>>>>>> wrote:<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Hello dear GlusterFS users list,<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> I have been trying to set up geo-replication<br>
> >? ? ?between two<br>
> >? ? ? >? ? ? >>>>> clusters for<br>
> >? ? ? >? ? ? >>>>> ? >> some time now. The desired state is<br>
> (Cluster #1)<br>
> >? ? ?being<br>
> >? ? ? >? ? ? >>>>> replicated to<br>
> >? ? ? >? ? ? >>>>> ? >> (Cluster #2).<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Here are some details about the setup:<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Cluster #1: three nodes connected via a<br>
> local network<br>
> >? ? ? >? ? ? >>>>> (172.31.35.0/24 <<a href="http://172.31.35.0/24">http://172.31.35.0/24</a>><br>
> <<a href="http://172.31.35.0/24">http://172.31.35.0/24</a>><br>
> >? ? ?<<a href="http://172.31.35.0/24">http://172.31.35.0/24</a>>),<br>
> >? ? ? >? ? ? >>>>> ? >> one replicated (3 replica) volume.<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Cluster #2: three nodes connected via a<br>
> local network<br>
> >? ? ? >? ? ? >>>>> (172.31.36.0/24 <<a href="http://172.31.36.0/24">http://172.31.36.0/24</a>><br>
> <<a href="http://172.31.36.0/24">http://172.31.36.0/24</a>><br>
> >? ? ?<<a href="http://172.31.36.0/24">http://172.31.36.0/24</a>>),<br>
> >? ? ? >? ? ? >>>>> ? >> one replicated (3 replica) volume.<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> The two clusters are connected to the Internet<br>
> >? ? ?via separate<br>
> >? ? ? >? ? ? >>>>> network<br>
> >? ? ? >? ? ? >>>>> ? >> adapters.<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Only SSH (port 22) is open on cluster #2<br>
> nodes'<br>
> >? ? ?adapters<br>
> >? ? ? >? ? ? >>>>> connected to<br>
> >? ? ? >? ? ? >>>>> ? >> the Internet.<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> All nodes are running Ubuntu 18.04 and<br>
> GlusterFS 6.3<br>
> >? ? ? >? ? ?installed<br>
> >? ? ? >? ? ? >>>>> from [1].<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> The first time I followed the guide[2]<br>
> everything<br>
> >? ? ?went<br>
> >? ? ? >? ? ?fine up<br>
> >? ? ? >? ? ? >>>>> until I<br>
> >? ? ? >? ? ? >>>>> ? >> reached the "Create the session" step.<br>
> That was<br>
> >? ? ?like a<br>
> >? ? ? >? ? ?month<br>
> >? ? ? >? ? ? >>>>> ago, then I<br>
> >? ? ? >? ? ? >>>>> ? >> had to temporarily stop working in this<br>
> and now I<br>
> >? ? ?am coming<br>
> >? ? ? >? ? ? >>>>> back to it.<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Currently, if I try to see the mountbroker<br>
> status<br>
> >? ? ?I get the<br>
> >? ? ? >? ? ? >>>>> following:<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >>> # gluster-mountbroker status<br>
> >? ? ? >? ? ? >>>>> ? >>> Traceback (most recent call last):<br>
> >? ? ? >? ? ? >>>>> ? >>>??? File "/usr/sbin/gluster-mountbroker", line<br>
> >? ? ?396, in<br>
> >? ? ? >? ? ?<module><br>
> >? ? ? >? ? ? >>>>> ? >>>????? runcli()<br>
> >? ? ? >? ? ? >>>>> ? >>>??? File<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? ><br>
> > <br>
> ?"/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line<br>
> >? ? ? >? ? ? >>>>> 225,<br>
> >? ? ? >? ? ? >>>>> in runcli<br>
> >? ? ? >? ? ? >>>>> ? >>>????? cls.run(args)<br>
> >? ? ? >? ? ? >>>>> ? >>>??? File "/usr/sbin/gluster-mountbroker", line<br>
> >? ? ?275, in run<br>
> >? ? ? >? ? ? >>>>> ? >>>????? out = execute_in_peers("node-status")<br>
> >? ? ? >? ? ? >>>>> ? >>>??? File<br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ?"/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py",<br>
> >? ? ? >? ? ? >>>>> ? >> line 127, in execute_in_peers<br>
> >? ? ? >? ? ? >>>>> ? >>>????? raise GlusterCmdException((rc, out,<br>
> err, "<br>
> >? ? ? >? ? ?".join(cmd)))<br>
> >? ? ? >? ? ? >>>>> ? >>><br>
> gluster.cliutils.cliutils.GlusterCmdException:<br>
> >? ? ?(1, '',<br>
> >? ? ? >? ? ? >>>>> 'Unable to<br>
> >? ? ? >? ? ? >>>>> ? >> end. Error : Success\n', 'gluster system::<br>
> execute<br>
> >? ? ? >? ? ?mountbroker.py<br>
> >? ? ? >? ? ? >>>>> ? >> node-status')<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> And in /var/log/gluster/glusterd.log I have:<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >>> [2019-08-10 15:24:21.418834] E [MSGID:<br>
> 106336]<br>
> >? ? ? >? ? ? >>>>> ? >> [glusterd-geo-rep.c:5413:glusterd_op_sys_exec]<br>
> >? ? ? >? ? ?0-management:<br>
> >? ? ? >? ? ? >>>>> Unable to<br>
> >? ? ? >? ? ? >>>>> ? >> end. Error : Success<br>
> >? ? ? >? ? ? >>>>> ? >>> [2019-08-10 15:24:21.418908] E [MSGID:<br>
> 106122]<br>
> >? ? ? >? ? ?? >>>>> ? >> [glusterd-syncop.c:1445:gd_commit_op_phase]<br>
> >? ? ?0-management:<br>
> >? ? ? >? ? ? >>>>> Commit of<br>
> >? ? ? >? ? ? >>>>> ? >> operation 'Volume Execute system commands'<br>
> failed on<br>
> >? ? ? >? ? ?localhost<br>
> >? ? ? >? ? ? >>>>> : Unable<br>
> >? ? ? >? ? ? >>>>> ? >> to end. Error : Success<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> So, I have two questions right now:<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> 1) Is there anything wrong with my setup<br>
> >? ? ?(networking, open<br>
> >? ? ? >? ? ? >>>>> ports, etc.)?<br>
> >? ? ? >? ? ? >>>>> ? >> Is it expected to work with this setup or<br>
> should<br>
> >? ? ?I redo<br>
> >? ? ? >? ? ?it in a<br>
> >? ? ? >? ? ? >>>>> ? >> different way?<br>
> >? ? ? >? ? ? >>>>> ? >> 2) How can I troubleshoot the current<br>
> status of my<br>
> >? ? ? >? ? ?setup? Can<br>
> >? ? ? >? ? ? >>>>> I find out<br>
> >? ? ? >? ? ? >>>>> ? >> what's missing/wrong and continue from<br>
> there or<br>
> >? ? ?should I<br>
> >? ? ? >? ? ?just<br>
> >? ? ? >? ? ? >>>>> start from<br>
> >? ? ? >? ? ? >>>>> ? >> scratch?<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Links:<br>
> >? ? ? >? ? ? >>>>> ? >> [1]<br>
> > <a href="http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu">http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu</a><br>
> >? ? ? >? ? ? >>>>> ? >> [2]<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? ><br>
> ><br>
> <a href="https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/">
https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/</a><br>
> >? ? ? ><br>
> >? ? ? >? ? ? >>>>><br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Thank you!<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> >? ? ? >? ? ? >>>>> ? >> Best regards,<br>
> >? ? ? >? ? ? >>>>> ? >> --<br>
> >? ? ? >? ? ? >>>>> ? >> alexander iliev<br>
> >? ? ? >? ? ? >>>>> ? >><br>
> _______________________________________________<br>
> >? ? ? >? ? ? >>>>> ? >> Gluster-users mailing list<br>
> >? ? ? >? ? ? >>>>> ? >> Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> >? ? ? >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> >? ? ? >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>>><br>
> >? ? ? >? ? ? >>>>> ? >><br>
> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> >? ? ? >? ? ? >>>>> _______________________________________________<br>
> >? ? ? >? ? ? >>>>> Gluster-users mailing list<br>
> >? ? ? >? ? ? >>>>> Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> <<a href=""></a>mailto:Gluster-users@gluster.org <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> >? ? ? >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> <<a href=""></a>mailto:Gluster-users@gluster.org <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>>><br>
> >? ? ? >? ? ? >>>>><br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> >? ? ? >? ? ? >>>> _______________________________________________<br>
> >? ? ? >? ? ? >>>> Gluster-users mailing list<br>
> >? ? ? >? ? ? >>>> Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> <<a href=""></a>mailto:Gluster-users@gluster.org <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> >? ? ? >? ? ? >>>><br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> >? ? ? >? ? ? > _______________________________________________<br>
> >? ? ? >? ? ? > Gluster-users mailing list<br>
> >? ? ? >? ? ? > Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> <<a href=""></a>mailto:Gluster-users@gluster.org <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> >? ? ? >? ? ? ><br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> >? ? ? >? ? ?________<br>
> >? ? ? ><br>
> >? ? ? >? ? ?Community Meeting Calendar:<br>
> >? ? ? ><br>
> >? ? ? >? ? ?APAC Schedule -<br>
> >? ? ? >? ? ?Every 2nd and 4th Tuesday at 11:30 AM IST<br>
> >? ? ? >? ? ?Bridge: <a href="https://bluejeans.com/118564314">https://bluejeans.com/118564314</a><br>
> >? ? ? ><br>
> >? ? ? >? ? ?NA/EMEA Schedule -<br>
> >? ? ? >? ? ?Every 1st and 3rd Tuesday at 01:00 PM EDT<br>
> >? ? ? >? ? ?Bridge: <a href="https://bluejeans.com/118564314">https://bluejeans.com/118564314</a><br>
> >? ? ? ><br>
> >? ? ? >? ? ?Gluster-users mailing list<br>
> >? ? ? > Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>><br>
> >? ? ?<<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>> <<a href=""></a>mailto:Gluster-users@gluster.org<br>
> <<a href="mailto:Gluster-users@gluster.org">mailto:Gluster-users@gluster.org</a>>>><br>
> >? ? ? > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users">
https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
> >? ? ? ><br>
> >? ? ? ><br>
> >? ? ? ><br>
> >? ? ? > --<br>
> >? ? ? > regards<br>
> >? ? ? > Aravinda VK<br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > regards<br>
> > Aravinda VK<br>
> <br>
> <br>
> <br>
> -- <br>
> regards<br>
> Aravinda VK<br>
<br>
Best regards,<br>
--<br>
alexander iliev<br>
<br>
<br>
------------------------------<br>
<br>
_______________________________________________<br>
Gluster-users mailing list<br>
Gluster-users@gluster.org<br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
<br>
End of Gluster-users Digest, Vol 138, Issue 14<br>
**********************************************<br>
</div>
</span></font></div>
</div>
</div>
</body>
</html>