[Gluster-users] Unsubscribe
Oskar Pienkos
oskarp10 at hotmail.com
Fri Oct 18 15:00:10 UTC 2019
Unsubscribe
Sent from Outlook<http://aka.ms/weboutlook>
________________________________
From: gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org> on behalf of gluster-users-request at gluster.org <gluster-users-request at gluster.org>
Sent: October 18, 2019 5:00 AM
To: gluster-users at gluster.org <gluster-users at gluster.org>
Subject: Gluster-users Digest, Vol 138, Issue 14
Send Gluster-users mailing list submissions to
gluster-users at gluster.org
To subscribe or unsubscribe via the World Wide Web, visit
https://lists.gluster.org/mailman/listinfo/gluster-users
or, via email, send a message with subject or body 'help' to
gluster-users-request at gluster.org
You can reach the person managing the list at
gluster-users-owner at gluster.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Gluster-users digest..."
Today's Topics:
1. Mirror https://download.gluster.org/ is not working
(Alberto Bengoa)
2. Re: Issues with Geo-replication (GlusterFS 6.3 on Ubuntu
18.04) (Aravinda Vishwanathapura Krishna Murthy)
3. Re: Single Point of failure in geo Replication
(Aravinda Vishwanathapura Krishna Murthy)
4. Re: On a glusterfsd service (Amar Tumballi)
5. Re: Mirror https://download.gluster.org/ is not working
(Kaleb Keithley)
6. Re: Issues with Geo-replication (GlusterFS 6.3 on Ubuntu
18.04) (Alexander Iliev)
----------------------------------------------------------------------
Message: 1
Date: Thu, 17 Oct 2019 15:55:25 +0100
From: Alberto Bengoa <bengoa at gmail.com>
To: gluster-users <gluster-users at gluster.org>
Subject: [Gluster-users] Mirror https://download.gluster.org/ is not
working
Message-ID:
<CA+vk31b5qomQVXQ10ofD5jL+kkhMqaamALUg7XKEcK-X7Ju1yw at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Guys,
Does anybody from the Gluster team have any word on the mirror status? It has
been failing since (maybe?) yesterday.
root at nas-bkp /tmp $ yum install glusterfs-client
GlusterFS is a clustered file-system capable of scaling to several petabyte
2.1 kB/s | 2.9 kB 00:01
Dependencies resolved.
============================================================================================================
 Package                   Arch     Version     Repository       Size
============================================================================================================
Installing:
 glusterfs-fuse            x86_64   6.5-2.el8   glusterfs-rhel8  167 k
Installing dependencies:
 glusterfs                 x86_64   6.5-2.el8   glusterfs-rhel8  681 k
 glusterfs-client-xlators  x86_64   6.5-2.el8   glusterfs-rhel8  893 k
 glusterfs-libs            x86_64   6.5-2.el8   glusterfs-rhel8  440 k

Transaction Summary
============================================================================================================
Install  4 Packages
Total download size: 2.1 M
Installed size: 9.1 M
Is this ok [y/N]: y
Downloading Packages:
[MIRROR] glusterfs-6.5-2.el8.x86_64.rpm: Curl error (18): Transferred a
partial file for
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
[transfer closed with 648927 bytes remaining to read]
[FAILED] glusterfs-6.5-2.el8.x86_64.rpm: No more mirrors to try - All
mirrors were already tried without success
(2-3/4): glusterfs-client-xlators- 34% [===========- ]
562 kB/s | 745 kB 00:02 ETA
The downloaded packages were saved in cache until the next successful
transaction.
You can remove cached packages by executing 'dnf clean packages'.
Error: Error downloading packages:
Cannot download glusterfs-6.5-2.el8.x86_64.rpm: All mirrors were tried
If you try to download using wget it fails as well:
root at nas-bkp /tmp $ wget
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
--2019-10-17 15:53:41--
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
Resolving download.gluster.org (download.gluster.org)... 8.43.85.185
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...
connected.
HTTP request sent, awaiting response... 200 OK
Length: 697688 (681K) [application/x-rpm]
Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.1'
glusterfs-6.5-2.el8.x86_64 6%[=> ]
47.62K --.-KB/s in 0.09s
2019-10-17 15:53:42 (559 KB/s) - Read error at byte 48761/697688 (Error
decoding the received TLS packet.). Retrying.
--2019-10-17 15:53:43-- (try: 2)
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...
connected.
HTTP request sent, awaiting response... ^C
root at nas-bkp /tmp $ wget
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
--2019-10-17 15:53:45--
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
Resolving download.gluster.org (download.gluster.org)... 8.43.85.185
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...
connected.
HTTP request sent, awaiting response... 200 OK
Length: 697688 (681K) [application/x-rpm]
Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.2'
glusterfs-6.5-2.el8.x86_64 6%[=> ]
47.62K --.-KB/s in 0.08s
2019-10-17 15:53:46 (564 KB/s) - Read error at byte 48761/697688 (Error
decoding the received TLS packet.). Retrying.
--2019-10-17 15:53:47-- (try: 2)
https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
Connecting to download.gluster.org (download.gluster.org)|8.43.85.185|:443...
connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 697688 (681K), 648927 (634K) remaining [application/x-rpm]
Saving to: 'glusterfs-6.5-2.el8.x86_64.rpm.2'
glusterfs-6.5-2.el8.x86_64 13%[++==> ]
95.18K --.-KB/s in 0.08s
2019-10-17 15:53:47 (563 KB/s) - Read error at byte 97467/697688 (Error
decoding the received TLS packet.). Retrying.
Thank you!
Alberto Bengoa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/146a4068/attachment-0001.html>
------------------------------
Message: 2
Date: Thu, 17 Oct 2019 21:02:42 +0530
From: Aravinda Vishwanathapura Krishna Murthy <avishwan at redhat.com>
To: Alexander Iliev <ailiev+gluster at mamul.org>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Issues with Geo-replication (GlusterFS
6.3 on Ubuntu 18.04)
Message-ID:
<CA+8EeuNwuYJs0Yxk8zqKYc2VxdGM0xU6ivGpLE3oo28oxzbqLA at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
On Thu, Oct 17, 2019 at 12:54 PM Alexander Iliev <ailiev+gluster at mamul.org>
wrote:
> Thanks, Aravinda.
>
> Does this mean that my scenario is currently unsupported?
>
Please try providing the external IP while creating the Geo-rep session. We
will work on the enhancement if that doesn't work.
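A minimal sketch of what that could look like (store1/store2 are the volume
names from this thread; <slave-ext> is a placeholder for whichever external
IP/hostname of a slave node is reachable from the masters):

```
# Run on a master node; <slave-ext> must be reachable from all master nodes.
gluster system:: execute gsec_create
gluster volume geo-replication store1 <slave-ext>::store2 create push-pem
gluster volume geo-replication store1 <slave-ext>::store2 start
```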
> It seems that I need to make sure that the nodes in the two clusters can
> see each other (some kind of VPN would work, I guess).
>
> Is this documented somewhere? I think I've read the geo-replication
> documentation several times now, but somehow it wasn't obvious to me
> that you need access to the slave nodes from the master ones (apart from
> the SSH access).
>
> Thanks!
>
> Best regards,
> --
> alexander iliev
>
> On 10/17/19 5:25 AM, Aravinda Vishwanathapura Krishna Murthy wrote:
> > Got it.
> >
> > Geo-replication uses the slave nodes' IPs in the following cases:
> >
> > - Verification during Session creation - It tries to mount the Slave
> > volume using the hostname/IP provided in Geo-rep create command. Try
> > Geo-rep create by specifying the external IP which is accessible from
> > the master node.
> > - Once Geo-replication is started, it gets the list of Slave nodes
> > IP/hostname from Slave volume info and connects to those IPs. But in
> > this case, those are internal IP addresses that are not accessible from
> > Master nodes.
> > - We need to enhance Geo-replication to accept an external IP and
> > internal IP map so that it can use the external IP for all connections.
> >
> > On Wed, Oct 16, 2019 at 10:29 PM Alexander Iliev
> > <ailiev+gluster at mamul.org> wrote:
> >
> > Hi Aravinda,
> >
> > All volume brick on the slave volume are up and the volume seems
> > functional.
> >
> > Your suggestion about trying to mount the slave volume on a master node
> > brings up my question about network connectivity again - the GlusterFS
> > documentation[1] says:
> >
> > > The server specified in the mount command is only used to fetch the
> > gluster configuration volfile describing the volume name. Subsequently,
> > the client will communicate directly with the servers mentioned in the
> > volfile (which might not even include the one used for mount).
> >
> > To me this means that the masternode from your example is expected to
> > have connectivity to the network where the slave volume runs, i.e. to
> > have network access to the slave nodes. In my geo-replication scenario
> > this is definitely not the case. The two clusters are running in two
> > completely different networks that are not interconnected.
> >
> > So my question is - how is the slave volume mount expected to happen if
> > the client host cannot access the GlusterFS nodes? Or is the
> > connectivity a requirement even for geo-replication?
> >
> > I'm not sure if I'm missing something, but any help will be highly
> > appreciated!
> >
> > Thanks!
> >
> > Links:
> > [1]
> >
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
> > --
> > alexander iliev
> >
> > On 10/16/19 6:03 AM, Aravinda Vishwanathapura Krishna Murthy wrote:
> > > Hi Alexander,
> > >
> > > Please check the status of the Volume. Looks like the Slave volume
> > > mount is failing because bricks are down or not reachable. If Volume
> > > status shows all bricks are up, then try mounting the slave volume
> > > using the mount command.
> > >
> > > ```
> > > masternode$ mkdir /mnt/vol
> > > masternode$ mount -t glusterfs <slavehost>:<slavevol> /mnt/vol
> > > ```
> > >
> > > On Fri, Oct 11, 2019 at 4:03 AM Alexander Iliev
> > > <ailiev+gluster at mamul.org> wrote:
> > >
> > > Hi all,
> > >
> > > I ended up reinstalling the nodes with CentOS 7.5 and GlusterFS 6.5
> > > (installed from the SIG.)
> > >
> > > Now when I try to create a replication session I get the following:
> > >
> > > > # gluster volume geo-replication store1 <slave-host>::store2 create
> > > > push-pem
> > > > Unable to mount and fetch slave volume details. Please check the
> > > > log: /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> > > > geo-replication command failed
> > >
> > > You can find the contents of gverify-slavemnt.log below, but the
> > > initial error seems to be:
> > >
> > > > [2019-10-10 22:07:51.578519] E [fuse-bridge.c:5211:fuse_first_lookup]
> > > > 0-fuse: first lookup on root failed (Transport endpoint is not
> > > > connected)
> > >
> > > I only found
> > > [this](https://bugzilla.redhat.com/show_bug.cgi?id=1659824)
> > > bug report which doesn't seem to help. The reported issue is failure to
> > > mount a volume on a GlusterFS client, but in my case I need
> > > geo-replication which implies the client (geo-replication master) being
> > > on a different network.
> > >
> > > Any help will be appreciated.
> > >
> > > Thanks!
> > >
> > > gverify-slavemnt.log:
> > >
> > > > [2019-10-10 22:07:40.571256] I [MSGID: 100030]
> > > [glusterfsd.c:2847:main] 0-glusterfs: Started running
> > glusterfs version
> > > 6.5 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off
> > > --volfile-server <slave-host> --volfile-id store2 -l
> > > /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> > > /tmp/gverify.sh.5nFlRh)
> > > > [2019-10-10 22:07:40.575438] I
> [glusterfsd.c:2556:daemonize]
> > > 0-glusterfs: Pid of current running process is 6021
> > > > [2019-10-10 22:07:40.584282] I [MSGID: 101190]
> > > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:
> > Started thread
> > > with index 0
> > > > [2019-10-10 22:07:40.584299] I [MSGID: 101190]
> > > [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:
> > Started thread
> > > with index 1
> > > > [2019-10-10 22:07:40.928094] I [MSGID: 114020]
> > > [client.c:2393:notify]
> > > 0-store2-client-0: parent translators are ready, attempting
> > connect on
> > > transport
> > > > [2019-10-10 22:07:40.931121] I [MSGID: 114020]
> > > [client.c:2393:notify]
> > > 0-store2-client-1: parent translators are ready, attempting
> > connect on
> > > transport
> > > > [2019-10-10 22:07:40.933976] I [MSGID: 114020]
> > > [client.c:2393:notify]
> > > 0-store2-client-2: parent translators are ready, attempting
> > connect on
> > > transport
> > > > Final graph:
> > > >
> > > > +------------------------------------------------------------------------------+
> > > > 1: volume store2-client-0
> > > > 2: type protocol/client
> > > > 3: option ping-timeout 42
> > > > 4: option remote-host 172.31.36.11
> > > > 5: option remote-subvolume
> > /data/gfs/store1/1/brick-store2
> > > > 6: option transport-type socket
> > > > 7: option transport.address-family inet
> > > > 8: option transport.socket.ssl-enabled off
> > > > 9: option transport.tcp-user-timeout 0
> > > > 10: option transport.socket.keepalive-time 20
> > > > 11: option transport.socket.keepalive-interval 2
> > > > 12: option transport.socket.keepalive-count 9
> > > > 13: option send-gids true
> > > > 14: end-volume
> > > > 15:
> > > > 16: volume store2-client-1
> > > > 17: type protocol/client
> > > > 18: option ping-timeout 42
> > > > 19: option remote-host 172.31.36.12
> > > > 20: option remote-subvolume
> > /data/gfs/store1/1/brick-store2
> > > > 21: option transport-type socket
> > > > 22: option transport.address-family inet
> > > > 23: option transport.socket.ssl-enabled off
> > > > 24: option transport.tcp-user-timeout 0
> > > > 25: option transport.socket.keepalive-time 20
> > > > 26: option transport.socket.keepalive-interval 2
> > > > 27: option transport.socket.keepalive-count 9
> > > > 28: option send-gids true
> > > > 29: end-volume
> > > > 30:
> > > > 31: volume store2-client-2
> > > > 32: type protocol/client
> > > > 33: option ping-timeout 42
> > > > 34: option remote-host 172.31.36.13
> > > > 35: option remote-subvolume
> > /data/gfs/store1/1/brick-store2
> > > > 36: option transport-type socket
> > > > 37: option transport.address-family inet
> > > > 38: option transport.socket.ssl-enabled off
> > > > 39: option transport.tcp-user-timeout 0
> > > > 40: option transport.socket.keepalive-time 20
> > > > 41: option transport.socket.keepalive-interval 2
> > > > 42: option transport.socket.keepalive-count 9
> > > > 43: option send-gids true
> > > > 44: end-volume
> > > > 45:
> > > > 46: volume store2-replicate-0
> > > > 47: type cluster/replicate
> > > > 48: option afr-pending-xattr
> > > store2-client-0,store2-client-1,store2-client-2
> > > > 49: option use-compound-fops off
> > > > 50: subvolumes store2-client-0 store2-client-1
> > store2-client-2
> > > > 51: end-volume
> > > > 52:
> > > > 53: volume store2-dht
> > > > 54: type cluster/distribute
> > > > 55: option lookup-unhashed off
> > > > 56: option lock-migration off
> > > > 57: option force-migration off
> > > > 58: subvolumes store2-replicate-0
> > > > 59: end-volume
> > > > 60:
> > > > 61: volume store2-write-behind
> > > > 62: type performance/write-behind
> > > > 63: subvolumes store2-dht
> > > > 64: end-volume
> > > > 65:
> > > > 66: volume store2-read-ahead
> > > > 67: type performance/read-ahead
> > > > 68: subvolumes store2-write-behind
> > > > 69: end-volume
> > > > 70:
> > > > 71: volume store2-readdir-ahead
> > > > 72: type performance/readdir-ahead
> > > > 73: option parallel-readdir off
> > > > 74: option rda-request-size 131072
> > > > 75: option rda-cache-limit 10MB
> > > > 76: subvolumes store2-read-ahead
> > > > 77: end-volume
> > > > 78:
> > > > 79: volume store2-io-cache
> > > > 80: type performance/io-cache
> > > > 81: subvolumes store2-readdir-ahead
> > > > 82: end-volume
> > > > 83:
> > > > 84: volume store2-open-behind
> > > > 85: type performance/open-behind
> > > > 86: subvolumes store2-io-cache
> > > > 87: end-volume
> > > > 88:
> > > > 89: volume store2-quick-read
> > > > 90: type performance/quick-read
> > > > 91: subvolumes store2-open-behind
> > > > 92: end-volume
> > > > 93:
> > > > 94: volume store2-md-cache
> > > > 95: type performance/md-cache
> > > > 96: subvolumes store2-quick-read
> > > > 97: end-volume
> > > > 98:
> > > > 99: volume store2
> > > > 100: type debug/io-stats
> > > > 101: option log-level INFO
> > > > 102: option latency-measurement off
> > > > 103: option count-fop-hits off
> > > > 104: subvolumes store2-md-cache
> > > > 105: end-volume
> > > > 106:
> > > > 107: volume meta-autoload
> > > > 108: type meta
> > > > 109: subvolumes store2
> > > > 110: end-volume
> > > > 111:
> > > >
> > > > +------------------------------------------------------------------------------+
> > > > [2019-10-10 22:07:51.578287] I [fuse-bridge.c:5142:fuse_init]
> > > 0-glusterfs-fuse: FUSE inited with protocol versions:
> > glusterfs 7.24
> > > kernel 7.22
> > > > [2019-10-10 22:07:51.578356] I
> > [fuse-bridge.c:5753:fuse_graph_sync]
> > > 0-fuse: switched to graph 0
> > > > [2019-10-10 22:07:51.578467] I [MSGID: 108006]
> > > [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no
> > > subvolumes up
> > > > [2019-10-10 22:07:51.578519] E
> > > [fuse-bridge.c:5211:fuse_first_lookup]
> > > 0-fuse: first lookup on root failed (Transport endpoint is not
> > > connected)
> > > > [2019-10-10 22:07:51.578709] W [fuse-bridge.c:1266:fuse_attr_cbk]
> > > > 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not
> > > > connected)
> > > > [2019-10-10 22:07:51.578687] I [MSGID: 108006]
> > > [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no
> > > subvolumes up
> > > > [2019-10-10 22:09:48.222459] E [MSGID: 108006]
> > > [afr-common.c:5318:__afr_handle_child_down_event]
> > 0-store2-replicate-0:
> > > All subvolumes are down. Going offline until at least one of
> > them comes
> > > back up.
> > > > The message "E [MSGID: 108006]
> > > [afr-common.c:5318:__afr_handle_child_down_event]
> > 0-store2-replicate-0:
> > > All subvolumes are down. Going offline until at least one of
> > them comes
> > > back up." repeated 2 times between [2019-10-10
> > 22:09:48.222459] and
> > > [2019-10-10 22:09:48.222891]
> > > >
> > >
> > > alexander iliev
> > >
> > > On 9/8/19 4:50 PM, Alexander Iliev wrote:
> > > > Hi all,
> > > >
> > > > Sunny, thank you for the update.
> > > >
> > > > I have applied the patch locally on my slave system and
> > now the
> > > > mountbroker setup is successful.
> > > >
> > > > I am facing another issue though - when I try to create a
> > > replication
> > > > session between the two sites I am getting:
> > > >
> > > > # gluster volume geo-replication store1
> > > > glustergeorep@<slave-host>::store1 create push-pem
> > > > Error : Request timed out
> > > > geo-replication command failed
> > > >
> > > > It is still unclear to me if my setup is expected to work
> > at all.
> > > >
> > > > Reading the geo-replication documentation at [1] I see this
> > > paragraph:
> > > >
> > > > > A password-less SSH connection is also required for gsyncd between
> > > > every node in the master to every node in the slave. The gluster
> > > > system:: execute gsec_create command creates secret-pem files on all the
> > > > nodes in the master, and is used to implement the password-less SSH
> > > > connection. The push-pem option in the geo-replication create command
> > > > pushes these keys to all the nodes in the slave.
> > > >
> > > > It is not clear to me whether connectivity from each master node
> > > > to each slave node is a requirement in terms of networking. In my
> > > > setup the slave nodes form the Gluster pool over a private network
> > > > which is not reachable from the master site.
> > > >
> > > > Any ideas how to proceed from here will be greatly
> > appreciated.
> > > >
> > > > Thanks!
> > > >
> > > > Links:
> > > > [1] https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication
> > > >
> > > > Best regards,
> > > > --
> > > > alexander iliev
> > > >
> > > > On 9/3/19 2:50 PM, Sunny Kumar wrote:
> > > >> Thank you for the explanation Kaleb.
> > > >>
> > > >> Alexander,
> > > >>
> > > >> This fix will be available with the next release for all supported
> > > >> versions.
> > > >>
> > > >> /sunny
> > > >>
> > > >> On Mon, Sep 2, 2019 at 6:47 PM Kaleb Keithley
> > > <kkeithle at redhat.com <mailto:kkeithle at redhat.com>
> > <mailto:kkeithle at redhat.com <mailto:kkeithle at redhat.com>>>
> > > >> wrote:
> > > >>>
> > > >>> Fixes on master (before or after the release-7 branch was taken)
> > > >>> almost certainly warrant a backport IMO to at least release-6, and
> > > >>> probably release-5 as well.
> > > >>>
> > > >>> We used to have a "tracker" BZ for each minor release (e.g. 6.6) to
> > > >>> keep track of backports by cloning the original BZ and changing the
> > > >>> Version, and adding that BZ to the tracker. I'm not sure what
> > > >>> happened to that practice. The last ones I can find are for 6.3 and
> > > >>> 5.7: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3 and
> > > >>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7
> > > >>>
> > > >>> It isn't enough to just backport recent fixes on master to release-7.
> > > >>> We are supposedly continuing to maintain release-6 and release-5
> > > >>> after release-7 GAs. If that has changed, I haven't seen an
> > > >>> announcement to that effect. I don't know why our developers don't
> > > >>> automatically backport to all the actively maintained releases.
> > > >>>
> > > >>> Even if there isn't a tracker BZ, you can always create a backport BZ
> > > >>> by cloning the original BZ and change the release to 6. That'd be a
> > > >>> good place to start.
> > > >>>
> > > >>> On Sun, Sep 1, 2019 at 8:45 AM Alexander Iliev
> > > >>> <ailiev+gluster at mamul.org> wrote:
> > > >>>>
> > > >>>> Hi Strahil,
> > > >>>>
> > > >>>> Yes, this might be right, but I would still expect fixes like this
> > > >>>> to be released for all supported major versions (which should
> > > >>>> include 6.) At least that's how I understand
> > > >>>> https://www.gluster.org/release-schedule/.
> > > >>>>
> > > >>>> Anyway, let's wait for Sunny to clarify.
> > > >>>>
> > > >>>> Best regards,
> > > >>>> alexander iliev
> > > >>>>
> > > >>>> On 9/1/19 2:07 PM, Strahil Nikolov wrote:
> > > >>>>> Hi Alex,
> > > >>>>>
> > > >>>>> I'm not very deep into bugzilla stuff, but for me NEXTRELEASE
> > > >>>>> means v7.
> > > >>>>>
> > > >>>>> Sunny,
> > > >>>>> Am I understanding it correctly ?
> > > >>>>>
> > > >>>>> Best Regards,
> > > >>>>> Strahil Nikolov
> > > >>>>>
> > > >>>>> On Sunday, September 1, 2019 at 14:27:32 GMT+3, Alexander Iliev
> > > >>>>> <ailiev+gluster at mamul.org> wrote:
> > > >>>>>
> > > >>>>>
> > > >>>>> Hi Sunny,
> > > >>>>>
> > > >>>>> Thank you for the quick response.
> > > >>>>>
> > > >>>>> It's not clear to me however if the fix has already been
> > > >>>>> released or not.
> > > >>>>>
> > > >>>>> The bug status is CLOSED NEXTRELEASE and according to [1] the
> > > >>>>> NEXTRELEASE resolution means that the fix will be included in
> > > >>>>> the next supported release. The bug is logged against the
> > > >>>>> mainline version though, so I'm not sure what this means exactly.
> > > >>>>>
> > > >>>>> From the 6.4[2] and 6.5[3] release notes it seems it hasn't been
> > > >>>>> released yet.
> > > >>>>>
> > > >>>>> Ideally I would not like to patch my systems locally, so if you
> > > >>>>> have an ETA on when this will be out officially I would really
> > > >>>>> appreciate it.
> > > >>>>>
> > > >>>>> Links:
> > > >>>>> [1] https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status
> > > >>>>> [2] https://docs.gluster.org/en/latest/release-notes/6.4/
> > > >>>>> [3] https://docs.gluster.org/en/latest/release-notes/6.5/
> > > >>>>>
> > > >>>>> Thank you!
> > > >>>>>
> > > >>>>> Best regards,
> > > >>>>>
> > > >>>>> alexander iliev
> > > >>>>>
> > > >>>>> On 8/30/19 9:22 AM, Sunny Kumar wrote:
> > > >>>>> > Hi Alexander,
> > > >>>>> >
> > > >>>>> > Thanks for pointing that out!
> > > >>>>> >
> > > >>>>> > But this issue is fixed now; you can see the links below for
> > > >>>>> > the BZ and patch.
> > > >>>>> >
> > > >>>>> > BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1709248
> > > >>>>> >
> > > >>>>> > Patch - https://review.gluster.org/#/c/glusterfs/+/22716/
> > > >>>>> >
> > > >>>>> > Hope this helps.
> > > >>>>> >
> > > >>>>> > /sunny
> > > >>>>> >
> > > >>>>> > On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev
> > > >>>>> > <ailiev+gluster at mamul.org> wrote:
> > > >>>>> >>
> > > >>>>> >> Hello dear GlusterFS users list,
> > > >>>>> >>
> > > >>>>> >> I have been trying to set up geo-replication between two
> > > >>>>> >> clusters for some time now. The desired state is (Cluster #1)
> > > >>>>> >> being replicated to (Cluster #2).
> > > >>>>> >>
> > > >>>>> >> Here are some details about the setup:
> > > >>>>> >>
> > > >>>>> >> Cluster #1: three nodes connected via a local network
> > > >>>>> >> (172.31.35.0/24), one replicated (3 replica) volume.
> > > >>>>> >>
> > > >>>>> >> Cluster #2: three nodes connected via a local network
> > > >>>>> >> (172.31.36.0/24), one replicated (3 replica) volume.
> > > >>>>> >>
> > > >>>>> >> The two clusters are connected to the Internet via separate
> > > >>>>> >> network adapters.
> > > >>>>> >>
> > > >>>>> >> Only SSH (port 22) is open on cluster #2 nodes' adapters
> > > >>>>> >> connected to the Internet.
> > > >>>>> >>
> > > >>>>> >> All nodes are running Ubuntu 18.04 and GlusterFS 6.3 installed
> > > >>>>> >> from [1].
> > > >>>>> >>
> > > >>>>> >> The first time I followed the guide[2] everything went fine up
> > > >>>>> >> until I reached the "Create the session" step. That was like a
> > > >>>>> >> month ago, then I had to temporarily stop working on this and
> > > >>>>> >> now I am coming back to it.
> > > >>>>> >>
> > > >>>>> >> Currently, if I try to see the mountbroker status I get the
> > > >>>>> >> following:
> > > >>>>> >>
> > > >>>>> >>> # gluster-mountbroker status
> > > >>>>> >>> Traceback (most recent call last):
> > > >>>>> >>>   File "/usr/sbin/gluster-mountbroker", line 396, in <module>
> > > >>>>> >>>     runcli()
> > > >>>>> >>>   File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 225, in runcli
> > > >>>>> >>>     cls.run(args)
> > > >>>>> >>>   File "/usr/sbin/gluster-mountbroker", line 275, in run
> > > >>>>> >>>     out = execute_in_peers("node-status")
> > > >>>>> >>>   File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 127, in execute_in_peers
> > > >>>>> >>>     raise GlusterCmdException((rc, out, err, " ".join(cmd)))
> > > >>>>> >>> gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable to
> > > >>>>> >>> end. Error : Success\n', 'gluster system:: execute mountbroker.py
> > > >>>>> >>> node-status')
> > > >>>>> >>
> > > >>>>> >> And in /var/log/gluster/glusterd.log I have:
> > > >>>>> >>
> > > >>>>> >>> [2019-08-10 15:24:21.418834] E [MSGID: 106336]
> > > >>>>> >>> [glusterd-geo-rep.c:5413:glusterd_op_sys_exec] 0-management:
> > > >>>>> >>> Unable to end. Error : Success
> > > >>>>> >>> [2019-08-10 15:24:21.418908] E [MSGID: 106122]
> > > >>>>> >>> [glusterd-syncop.c:1445:gd_commit_op_phase] 0-management:
> > > >>>>> >>> Commit of operation 'Volume Execute system commands' failed on
> > > >>>>> >>> localhost : Unable to end. Error : Success
> > > >>>>> >>
> > > >>>>> >> So, I have two questions right now:
> > > >>>>> >>
> > > >>>>> >> 1) Is there anything wrong with my setup (networking, open
> > > >>>>> >> ports, etc.)? Is it expected to work with this setup or should
> > > >>>>> >> I redo it in a different way?
> > > >>>>> >> 2) How can I troubleshoot the current status of my setup? Can
> > > >>>>> >> I find out what's missing/wrong and continue from there or
> > > >>>>> >> should I just start from scratch?
> > > >>>>> >>
> > > >>>>> >> Links:
> > > >>>>> >> [1] http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu
> > > >>>>> >> [2] https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
> > > >>>>> >>
> > > >>>>> >> Thank you!
> > > >>>>> >>
> > > >>>>> >> Best regards,
> > > >>>>> >> --
> > > >>>>> >> alexander iliev
> > > >>>>> >> _______________________________________________
> > > >>>>> >> Gluster-users mailing list
> > > >>>>> >> Gluster-users at gluster.org
> > > >>>>> >> https://lists.gluster.org/mailman/listinfo/gluster-users
> > > >>>>> _______________________________________________
> > > >>>>> Gluster-users mailing list
> > > >>>>> Gluster-users at gluster.org
> > > >>>>> https://lists.gluster.org/mailman/listinfo/gluster-users
> > > >>>> _______________________________________________
> > > >>>> Gluster-users mailing list
> > > >>>> Gluster-users at gluster.org
> > > >>>> https://lists.gluster.org/mailman/listinfo/gluster-users
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users at gluster.org
> > > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > > ________
> > >
> > > Community Meeting Calendar:
> > >
> > > APAC Schedule -
> > > Every 2nd and 4th Tuesday at 11:30 AM IST
> > > Bridge: https://bluejeans.com/118564314
> > >
> > > NA/EMEA Schedule -
> > > Every 1st and 3rd Tuesday at 01:00 PM EDT
> > > Bridge: https://bluejeans.com/118564314
> > >
> > > Gluster-users mailing list
> > > Gluster-users at gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users
> > >
> > >
> > >
> > > --
> > > regards
> > > Aravinda VK
> >
> >
> >
> > --
> > regards
> > Aravinda VK
>
--
regards
Aravinda VK
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/9bac07d3/attachment-0001.html>
------------------------------
Message: 3
Date: Thu, 17 Oct 2019 21:03:43 +0530
From: Aravinda Vishwanathapura Krishna Murthy <avishwan at redhat.com>
To: deepu srinivasan <sdeepugd at gmail.com>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Single Point of failure in geo
Replication
Message-ID:
<CA+8EeuPu_t3ucUwkvS1x7Y91qyP=sCD7k0Ln=t0Fd_Dp_+7oTA at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
On Thu, Oct 17, 2019 at 11:44 AM deepu srinivasan <sdeepugd at gmail.com>
wrote:
> Thank you for your response.
> We have tried the above use case you mentioned.
>
> Case 1: Primary node is permanently down (hardware failure)
> In this case, the Geo-replication session cannot be stopped and returns a
> failure like "start the primary node and then stop" (or a similar message).
> Now I cannot delete the session because I cannot stop it.
>
Please try "stop force", Let us know if that works.
> On Thu, Oct 17, 2019 at 8:32 AM Aravinda Vishwanathapura Krishna Murthy <
> avishwan at redhat.com> wrote:
>
>>
>> On Wed, Oct 16, 2019 at 11:08 PM deepu srinivasan <sdeepugd at gmail.com>
>> wrote:
>>
>>> Hi Users
>>> Is there a single point of failure in GeoReplication for gluster?
>>> My Case:
>>> I use 3 nodes in both the master and slave volumes.
>>> Master volume : Node1,Node2,Node3
>>> Slave Volume : Node4,Node5,Node6
>>> I tried to recreate the scenario to test a single point of failure.
>>>
>>> Geo-Replication Status:
>>>
>>> Master Node    Slave Node    Status
>>> Node1          Node4         Active
>>> Node2          Node4         Passive
>>> Node3          Node4         Passive
>>>
>>> Step 1: Stopped the glusterd daemon on Node4.
>>> Result: Only two node statuses were reported, like the ones below.
>>>
>>> Master Node    Slave Node    Status
>>> Node2          Node4         Passive
>>> Node3          Node4         Passive
>>>
>>>
>>> Will the Geo-replication session go down if the primary slave is down?
>>>
>>
>>
>> Hi Deepu,
>>
>> Geo-replication depends on a primary slave node to get the information
>> about the other nodes which are part of the Slave volume.
>>
>> Once the workers are started, they no longer depend on the primary slave
>> node and will not fail if the primary goes down. But if any other node goes
>> down, then the worker will try to connect to some other node, and to do
>> that it runs the volume status command on the slave node as follows.
>>
>> ```
>> ssh -i <georep-pem> <primary-node> gluster volume status <slavevol>
>> ```
>>
>> The above command will fail and the worker will not get the list of Slave
>> nodes to which it can connect.
>>
>> This is only a temporary failure until the primary node comes back
>> online. If the primary node is permanently down, then run the Geo-rep
>> delete and Geo-rep create commands again with the new primary node. (Note:
>> Geo-rep delete and create will remember the last sync time and resume once
>> it starts.)
>>
>> I will evaluate the possibility of caching the list of Slave nodes so that
>> one can be used as a backup primary node in case of failures. I will open a
>> GitHub issue for the same.
>>
>> Thanks for reporting the issue.
>>
>> --
>> regards
>> Aravinda VK
>>
>
--
regards
Aravinda VK
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/fe2a180f/attachment-0001.html>
------------------------------
Message: 4
Date: Thu, 17 Oct 2019 22:24:30 +0530
From: Amar Tumballi <amarts at gmail.com>
To: "Kay K." <kkay.jp at gmail.com>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] On a glusterfsd service
Message-ID:
<CA+OzEQvhgfdBeAhoVHZsk14CPYGU32YRmXNUcWC-scTQ00aHaw at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
On Thu, Oct 17, 2019 at 5:21 PM Kay K. <kkay.jp at gmail.com> wrote:
> Hello All,
>
> I have been using 20 GlusterFS servers on CentOS 6.9 for about 5 years.
> They are working well.
That is a sweet thing to read first thing in the email :-)
> Recently, however, I noticed that these settings are different on some of
> the hosts.
>
> Those 20 servers are running at runlevel 3.
> On 10 of the servers, looking at the directory /etc/rc.d/rc3.d, I found the
> glusterfsd service set to K80, as below.
>
> $ ls -l /etc/rc.d/rc3.d/*gluster*
> lrwxrwxrwx 1 root root 20 Mar 9 2016 /etc/rc.d/rc3.d/K80glusterfsd
> -> ../init.d/glusterfsd
> lrwxrwxrwx 1 root root 18 Mar 9 2016 /etc/rc.d/rc3.d/S20glusterd ->
> ../init.d/glusterd
>
> However, when I checked the other 10 servers, I found glusterfsd set to
> S20, as below.
>
> $ ls -l /etc/rc.d/rc3.d/*gluster*
> lrwxrwxrwx 1 root root 18 Oct 9 2015 /etc/rc.d/rc3.d/S20glusterd ->
> ../init.d/glusterd
> lrwxrwxrwx 1 root root 20 Oct 9 2015 /etc/rc.d/rc3.d/S20glusterfsd
> -> ../init.d/glusterfsd
>
> I remember that half of the servers were built several years later.
> I expect that the difference was introduced at that time.
>
>
Most probably. The dates point to a difference of ~18 months between them.
Surely some improvements would have gone into the code in that time (~1000
patches in a year).
I tried checking the git log of glusterfs' spec file and was not able to find
anything. Looks like the difference is mostly in the CentOS spec.
> Furthermore, if I check the status of glusterfsd, it shows as running, as
> below.
>
> $ /etc/init.d/glusterd status
> glusterd (pid 1989) is running...
> $ /etc/init.d/glusterfsd status
> glusterfsd (pid 2216 2206 2201 2198 2193 2187 2181 2168 2163 2148 2147
> 2146 2139 2130 2123 2113 2111 2100 2088) is running...
>
>
:-)
> Actually, my GlusterFS servers are working well.
>
>
IMO that is good news. I don't think it would become an issue all of a sudden
after 4-5 years.
> I don't know which setting is correct. Would you know about it?
>
>
In later versions we only need to start the 'glusterd' service, so if it is
working, it should be fine. For reference/correctness-related questions, I
would leave it to the experts on specs and init.d scripts to respond.
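As a rough sketch (not verified against the CentOS 6 packaging), chkconfig is
what manages those rc3.d symlinks, so the two sets of hosts could be made
consistent with something like:

```
# Show the current runlevel settings on each host:
chkconfig --list glusterd
chkconfig --list glusterfsd
# Toggling a service flips its rc3.d link between the S and K forms,
# e.g. S20glusterfsd <-> K80glusterfsd:
chkconfig glusterfsd on    # or "off" to match the K80 hosts
```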
For most such emails we end up recommending a move to the latest supported
version, but considering you are not facing an issue on top of the
filesystem, I wouldn't recommend that yet :-)
Regards,
Amar
> Thanks,
> Kondo
> ________
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/1eb1a0ed/attachment-0001.html>
------------------------------
Message: 5
Date: Thu, 17 Oct 2019 13:48:15 -0400
From: Kaleb Keithley <kkeithle at redhat.com>
To: Alberto Bengoa <bengoa at gmail.com>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Mirror https://download.gluster.org/ is
not working
Message-ID:
<CAC+Jd5DEE5b7kW5+Ax9fg3Ha3cM2FGvXCvSDJFhH2Vo+PmqXsA at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
File owners and perms were fixed. It should work now.
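A quick way to re-check from a client, using the URL and size (697688 bytes)
from the original report:

```
# This should now complete without the TLS/partial-transfer errors:
wget https://download.gluster.org/pub/gluster/glusterfs/6/LATEST/RHEL/el-8/x86_64/glusterfs-6.5-2.el8.x86_64.rpm
ls -l glusterfs-6.5-2.el8.x86_64.rpm   # expect 697688 bytes
```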
On Thu, Oct 17, 2019 at 10:57 AM Alberto Bengoa <bengoa at gmail.com> wrote:
> Guys,
>
> Does anybody from the Gluster team have any word on the mirror status? It
> has been failing since (maybe?) yesterday.
>
> [full yum/wget transcript trimmed; identical to Message 1 above]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.gluster.org/pipermail/gluster-users/attachments/20191017/0a41ccd1/attachment-0001.html>
------------------------------
Message: 6
Date: Thu, 17 Oct 2019 20:40:37 +0200
From: Alexander Iliev <ailiev+gluster at mamul.org>
To: Aravinda Vishwanathapura Krishna Murthy <avishwan at redhat.com>
Cc: gluster-users <gluster-users at gluster.org>
Subject: Re: [Gluster-users] Issues with Geo-replication (GlusterFS
6.3 on Ubuntu 18.04)
Message-ID: <4214e52d-b69f-d5b2-c3fc-2c69e9abb217 at mamul.org>
Content-Type: text/plain; charset=utf-8; format=flowed
On 10/17/19 5:32 PM, Aravinda Vishwanathapura Krishna Murthy wrote:
>
>
> On Thu, Oct 17, 2019 at 12:54 PM Alexander Iliev
> <ailiev+gluster at mamul.org> wrote:
>
> Thanks, Aravinda.
>
> Does this mean that my scenario is currently unsupported?
>
>
> Please try providing the external IP while creating the Geo-rep session. We
> will work on the enhancement if that doesn't work.
This is what I've been doing all along. It didn't work for me.
>
>
> It seems that I need to make sure that the nodes in the two clusters can
> see each other (some kind of VPN would work, I guess).
>
> Is this documented somewhere? I think I've read the geo-replication
> documentation several times now, but somehow it wasn't obvious to me
> that you need access to the slave nodes from the master ones (apart from
> the SSH access).
>
> Thanks!
>
> Best regards,
> --
> alexander iliev
>
> On 10/17/19 5:25 AM, Aravinda Vishwanathapura Krishna Murthy wrote:
> > Got it.
> >
> > Geo-replication uses slave nodes IP in the following cases,
> >
> > - Verification during Session creation - It tries to mount the Slave
> > volume using the hostname/IP provided in Geo-rep create command. Try
> > Geo-rep create by specifying the external IP which is accessible
> from
> > the master node.
> > - Once Geo-replication is started, it gets the list of Slave nodes
> > IP/hostname from Slave volume info and connects to those IPs. But in
> > this case, those are internal IP addresses that are not
> accessible from
> > Master nodes. - We need to enhance Geo-replication to accept
> external IP
> > and internal IP map details so that for all connections it can use
> > external IP.
> >
> > On Wed, Oct 16, 2019 at 10:29 PM Alexander Iliev
> > <ailiev+gluster at mamul.org <mailto:ailiev%2Bgluster at mamul.org>
> <mailto:ailiev%2Bgluster at mamul.org
> <mailto:ailiev%252Bgluster at mamul.org>>> wrote:
> >
> >? ? ?Hi Aravinda,
> >
> >? ? ?All volume brick on the slave volume are up and the volume seems
> >? ? ?functional.
> >
> >? ? ?Your suggestion about trying to mount the slave volume on a
> master node
> >? ? ?brings up my question about network connectivity again - the
> GlusterFS
> >? ? ?documentation[1] says:
> >
> >? ? ? ?> The server specified in the mount command is only used to
> fetch the
> >? ? ?gluster configuration volfile describing the volume name.
> Subsequently,
> >? ? ?the client will communicate directly with the servers
> mentioned in the
> >? ? ?volfile (which might not even include the one used for mount).
> >
> >? ? ?To me this means that the masternode from your example is
> expected to
> >? ? ?have connectivity to the network where the slave volume runs,
> i.e. to
> >? ? ?have network access to the slave nodes. In my geo-replication
> scenario
> >? ? ?this is definitely not the case. The two cluster are running
> in two
> >? ? ?completely different networks that are not interconnected.
> >
> >? ? ?So my question is - how is the slave volume mount expected to
> happen if
> >? ? ?the client host cannot access the GlusterFS nodes? Or is the
> >? ? ?connectivity a requirement even for geo-replication?
> >
> >? ? ?I'm not sure if I'm missing something, but any help will be
> highly
> >? ? ?appreciated!
> >
> >? ? ?Thanks!
> >
> >? ? ?Links:
> >? ? ?[1]
> >
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
> >? ? ?--
> >? ? ?alexander iliev
> >
> >? ? ?On 10/16/19 6:03 AM, Aravinda Vishwanathapura Krishna Murthy
> wrote:
> >? ? ? > Hi Alexander,
> >? ? ? >
> >? ? ? > Please check the status of Volume. Looks like the Slave volume
> >? ? ?mount is
> >? ? ? > failing because bricks are down or not reachable. If Volume
> >? ? ?status shows
> >? ? ? > all bricks are up then try mounting the slave volume using
> mount
> >? ? ?command.
> >? ? ? >
> >? ? ? > ```
> >? ? ? > masternode$ mkdir /mnt/vol
> >? ? ? > masternode$ mount -t glusterfs <slavehost>:<slavevol> /mnt/vol
> >? ? ? > ```
> >? ? ? >
> >? ? ? > On Fri, Oct 11, 2019 at 4:03 AM Alexander Iliev
> >? ? ? > <ailiev+gluster at mamul.org
> <mailto:ailiev%2Bgluster at mamul.org>
> <mailto:ailiev%2Bgluster at mamul.org
> <mailto:ailiev%252Bgluster at mamul.org>>
> >? ? ?<mailto:ailiev%2Bgluster at mamul.org
> <mailto:ailiev%252Bgluster at mamul.org>
> >? ? ?<mailto:ailiev%252Bgluster at mamul.org
> <mailto:ailiev%25252Bgluster at mamul.org>>>> wrote:
> >? ? ? >
> >? ? ? >? ? ?Hi all,
> >? ? ? >
> >? ? ? >? ? ?I ended up reinstalling the nodes with CentOS 7.5 and
> >? ? ?GlusterFS 6.5
> >? ? ? >? ? ?(installed from the SIG.)
> >? ? ? >
> >? ? ? >? ? ?Now when I try to create a replication session I get the
> >? ? ?following:
> >? ? ? >
> >? ? ? >? ? ? ?> # gluster volume geo-replication store1
> >? ? ?<slave-host>::store2 create
> >? ? ? >? ? ?push-pem
> >? ? ? >? ? ? ?> Unable to mount and fetch slave volume details. Please
> >? ? ?check the
> >? ? ? >? ? ?log:
> >? ? ? >? ? ?/var/log/glusterfs/geo-replication/gverify-slavemnt.log
> >? ? ? >? ? ? ?> geo-replication command failed
> >? ? ? >
> >? ? ? >? ? ?You can find the contents of gverify-slavemnt.log
> below, but the
> >? ? ? >? ? ?initial
> >? ? ? >? ? ?error seems to be:
> >? ? ? >
> >? ? ? >? ? ? ?> [2019-10-10 22:07:51.578519] E
> >? ? ? >? ? ?[fuse-bridge.c:5211:fuse_first_lookup]
> >? ? ? >? ? ?0-fuse: first lookup on root failed (Transport
> endpoint is not
> >? ? ? >? ? ?connected)
> >? ? ? >
> >? ? ? >? ? ?I only found
> >? ? ? >
> ?[this](https://bugzilla.redhat.com/show_bug.cgi?id=1659824)
> >? ? ? >? ? ?bug report which doesn't seem to help. The reported
> issue is
> >? ? ?failure to
> >? ? ? >? ? ?mount a volume on a GlusterFS client, but in my case I
> need
> >? ? ? >? ? ?geo-replication which implies the client (geo-replication
> >? ? ?master) being
> >? ? ? >? ? ?on a different network.
> >? ? ? >
> >? ? ? >? ? ?Any help will be appreciated.
> >? ? ? >
> >? ? ? >? ? ?Thanks!
> >? ? ? >
> >? ? ? >? ? ?gverify-slavemnt.log:
> >? ? ? >
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.571256] I [MSGID: 100030]
> >? ? ? >? ? ?[glusterfsd.c:2847:main] 0-glusterfs: Started running
> >? ? ?glusterfs version
> >? ? ? >? ? ?6.5 (args: glusterfs
> --xlator-option=*dht.lookup-unhashed=off
> >? ? ? >? ? ?--volfile-server <slave-host> --volfile-id store2 -l
> >? ? ? >? ? ?/var/log/glusterfs/geo-replication/gverify-slavemnt.log
> >? ? ? >? ? ?/tmp/gverify.sh.5nFlRh)
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.575438] I
> [glusterfsd.c:2556:daemonize]
> >? ? ? >? ? ?0-glusterfs: Pid of current running process is 6021
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.584282] I [MSGID: 101190]
> >? ? ? >? ? ?[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:
> >? ? ?Started thread
> >? ? ? >? ? ?with index 0
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.584299] I [MSGID: 101190]
> >? ? ? >? ? ?[event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll:
> >? ? ?Started thread
> >? ? ? >? ? ?with index 1
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.928094] I [MSGID: 114020]
> >? ? ? >? ? ?[client.c:2393:notify]
> >? ? ? >? ? ?0-store2-client-0: parent translators are ready,
> attempting
> >? ? ?connect on
> >? ? ? >? ? ?transport
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.931121] I [MSGID: 114020]
> >? ? ? >? ? ?[client.c:2393:notify]
> >? ? ? >? ? ?0-store2-client-1: parent translators are ready,
> attempting
> >? ? ?connect on
> >? ? ? >? ? ?transport
> >? ? ? >? ? ? ?> [2019-10-10 22:07:40.933976] I [MSGID: 114020]
> >? ? ? >? ? ?[client.c:2393:notify]
> >? ? ? >? ? ?0-store2-client-2: parent translators are ready,
> attempting
> >? ? ?connect on
> >? ? ? >? ? ?transport
> >? ? ? >? ? ? ?> Final graph:
> >? ? ? >? ? ? ?>
> >? ? ? >
> >
> ?+------------------------------------------------------------------------------+
> >? ? ? >? ? ? ?>? ?1: volume store2-client-0
> >? ? ? >? ? ? ?>? ?2:? ? ?type protocol/client
> >? ? ? >? ? ? ?>? ?3:? ? ?option ping-timeout 42
> >? ? ? >? ? ? ?>? ?4:? ? ?option remote-host 172.31.36.11
> >? ? ? >? ? ? ?>? ?5:? ? ?option remote-subvolume
> >? ? ?/data/gfs/store1/1/brick-store2
> >? ? ? >? ? ? ?>? ?6:? ? ?option transport-type socket
> >? ? ? >? ? ? ?>? ?7:? ? ?option transport.address-family inet
> >? ? ? >? ? ? ?>? ?8:? ? ?option transport.socket.ssl-enabled off
>>>>   9:     option transport.tcp-user-timeout 0
>>>>  10:     option transport.socket.keepalive-time 20
>>>>  11:     option transport.socket.keepalive-interval 2
>>>>  12:     option transport.socket.keepalive-count 9
>>>>  13:     option send-gids true
>>>>  14: end-volume
>>>>  15:
>>>>  16: volume store2-client-1
>>>>  17:     type protocol/client
>>>>  18:     option ping-timeout 42
>>>>  19:     option remote-host 172.31.36.12
>>>>  20:     option remote-subvolume /data/gfs/store1/1/brick-store2
>>>>  21:     option transport-type socket
>>>>  22:     option transport.address-family inet
>>>>  23:     option transport.socket.ssl-enabled off
>>>>  24:     option transport.tcp-user-timeout 0
>>>>  25:     option transport.socket.keepalive-time 20
>>>>  26:     option transport.socket.keepalive-interval 2
>>>>  27:     option transport.socket.keepalive-count 9
>>>>  28:     option send-gids true
>>>>  29: end-volume
>>>>  30:
>>>>  31: volume store2-client-2
>>>>  32:     type protocol/client
>>>>  33:     option ping-timeout 42
>>>>  34:     option remote-host 172.31.36.13
>>>>  35:     option remote-subvolume /data/gfs/store1/1/brick-store2
>>>>  36:     option transport-type socket
>>>>  37:     option transport.address-family inet
>>>>  38:     option transport.socket.ssl-enabled off
>>>>  39:     option transport.tcp-user-timeout 0
>>>>  40:     option transport.socket.keepalive-time 20
>>>>  41:     option transport.socket.keepalive-interval 2
>>>>  42:     option transport.socket.keepalive-count 9
>>>>  43:     option send-gids true
>>>>  44: end-volume
>>>>  45:
>>>>  46: volume store2-replicate-0
>>>>  47:     type cluster/replicate
>>>>  48:     option afr-pending-xattr store2-client-0,store2-client-1,store2-client-2
>>>>  49:     option use-compound-fops off
>>>>  50:     subvolumes store2-client-0 store2-client-1 store2-client-2
>>>>  51: end-volume
>>>>  52:
>>>>  53: volume store2-dht
>>>>  54:     type cluster/distribute
>>>>  55:     option lookup-unhashed off
>>>>  56:     option lock-migration off
>>>>  57:     option force-migration off
>>>>  58:     subvolumes store2-replicate-0
>>>>  59: end-volume
>>>>  60:
>>>>  61: volume store2-write-behind
>>>>  62:     type performance/write-behind
>>>>  63:     subvolumes store2-dht
>>>>  64: end-volume
>>>>  65:
>>>>  66: volume store2-read-ahead
>>>>  67:     type performance/read-ahead
>>>>  68:     subvolumes store2-write-behind
>>>>  69: end-volume
>>>>  70:
>>>>  71: volume store2-readdir-ahead
>>>>  72:     type performance/readdir-ahead
>>>>  73:     option parallel-readdir off
>>>>  74:     option rda-request-size 131072
>>>>  75:     option rda-cache-limit 10MB
>>>>  76:     subvolumes store2-read-ahead
>>>>  77: end-volume
>>>>  78:
>>>>  79: volume store2-io-cache
>>>>  80:     type performance/io-cache
>>>>  81:     subvolumes store2-readdir-ahead
>>>>  82: end-volume
>>>>  83:
>>>>  84: volume store2-open-behind
>>>>  85:     type performance/open-behind
>>>>  86:     subvolumes store2-io-cache
>>>>  87: end-volume
>>>>  88:
>>>>  89: volume store2-quick-read
>>>>  90:     type performance/quick-read
>>>>  91:     subvolumes store2-open-behind
>>>>  92: end-volume
>>>>  93:
>>>>  94: volume store2-md-cache
>>>>  95:     type performance/md-cache
>>>>  96:     subvolumes store2-quick-read
>>>>  97: end-volume
>>>>  98:
>>>>  99: volume store2
>>>> 100:     type debug/io-stats
>>>> 101:     option log-level INFO
>>>> 102:     option latency-measurement off
>>>> 103:     option count-fop-hits off
>>>> 104:     subvolumes store2-md-cache
>>>> 105: end-volume
>>>> 106:
>>>> 107: volume meta-autoload
>>>> 108:     type meta
>>>> 109:     subvolumes store2
>>>> 110: end-volume
>>>> 111:
>>>>
>>>> +------------------------------------------------------------------------------+
>>>> [2019-10-10 22:07:51.578287] I [fuse-bridge.c:5142:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
>>>> [2019-10-10 22:07:51.578356] I [fuse-bridge.c:5753:fuse_graph_sync] 0-fuse: switched to graph 0
>>>> [2019-10-10 22:07:51.578467] I [MSGID: 108006] [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no subvolumes up
>>>> [2019-10-10 22:07:51.578519] E [fuse-bridge.c:5211:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
>>>> [2019-10-10 22:07:51.578709] W [fuse-bridge.c:1266:fuse_attr_cbk] 0-glusterfs-fuse: 2: LOOKUP() / => -1 (Transport endpoint is not connected)
>>>> [2019-10-10 22:07:51.578687] I [MSGID: 108006] [afr-common.c:5666:afr_local_init] 0-store2-replicate-0: no subvolumes up
>>>> [2019-10-10 22:09:48.222459] E [MSGID: 108006] [afr-common.c:5318:__afr_handle_child_down_event] 0-store2-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
>>>> The message "E [MSGID: 108006] [afr-common.c:5318:__afr_handle_child_down_event] 0-store2-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up." repeated 2 times between [2019-10-10 22:09:48.222459] and [2019-10-10 22:09:48.222891]
>>>
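>>> (When the client reports "no subvolumes up" like above, two quick
>>> checks help narrow it down - a sketch, assuming the gluster CLI on a
>>> server node; 172.31.36.12 is one of the brick hosts from the volfile
>>> above and 24007 is the standard glusterd port:)
>>>
>>>     # are the bricks online from the cluster's point of view?
>>>     gluster volume status store2
>>>     # can this client reach glusterd on the brick host?
>>>     nc -zv 172.31.36.12 24007
>>>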
>>> alexander iliev
>>>
>>> On 9/8/19 4:50 PM, Alexander Iliev wrote:
>>>> Hi all,
>>>>
>>>> Sunny, thank you for the update.
>>>>
>>>> I have applied the patch locally on my slave system and now the
>>>> mountbroker setup is successful.
>>>>
>>>> I am facing another issue though - when I try to create a
>>>> replication session between the two sites I am getting:
>>>>
>>>>     # gluster volume geo-replication store1 glustergeorep@<slave-host>::store1 create push-pem
>>>>     Error : Request timed out
>>>>     geo-replication command failed
>>>>
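>>>> (For context, the overall sequence I am following is roughly the
>>>> one below - a minimal sketch of the documented non-root flow, with
>>>> my volume and user names; <slave-host> is a placeholder:)
>>>>
>>>>     # on one master node: generate the common secret pem keys
>>>>     gluster system:: execute gsec_create
>>>>     # create the session and push the keys to the slave nodes
>>>>     gluster volume geo-replication store1 glustergeorep@<slave-host>::store1 create push-pem
>>>>     # after a successful create: start and check the session
>>>>     gluster volume geo-replication store1 glustergeorep@<slave-host>::store1 start
>>>>     gluster volume geo-replication store1 glustergeorep@<slave-host>::store1 status
>>>>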
>>>> It is still unclear to me if my setup is expected to work at all.
>>>>
>>>> Reading the geo-replication documentation at [1] I see this paragraph:
>>>>
>>>> > A password-less SSH connection is also required for gsyncd between
>>>> > every node in the master to every node in the slave. The gluster
>>>> > system:: execute gsec_create command creates secret-pem files on
>>>> > all the nodes in the master, and is used to implement the
>>>> > password-less SSH connection. The push-pem option in the
>>>> > geo-replication create command pushes these keys to all the nodes
>>>> > in the slave.
>>>>
>>>> It is not clear to me whether connectivity from each master node to
>>>> each slave node is a requirement in terms of networking. In my setup
>>>> the slave nodes form the Gluster pool over a private network which
>>>> is not reachable from the master site.
>>>>
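>>>> (A quick way to test exactly that requirement - a sketch with
>>>> hypothetical slave host placeholders, run from every master node:)
>>>>
>>>>     for slave in <slave-host-1> <slave-host-2> <slave-host-3>; do
>>>>         ssh -o BatchMode=yes -p 22 glustergeorep@"$slave" true \
>>>>             && echo "$slave: reachable" || echo "$slave: NOT reachable"
>>>>     done
>>>>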
>>>> Any ideas how to proceed from here will be greatly appreciated.
>>>>
>>>> Thanks!
>>>>
>>>> Links:
>>>> [1] https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3/html/administration_guide/sect-preparing_to_deploy_geo-replication
>>>>
>>>> Best regards,
>>>> --
>>>> alexander iliev
>>>>
>>>> On 9/3/19 2:50 PM, Sunny Kumar wrote:
>>>>> Thank you for the explanation Kaleb.
>>>>>
>>>>> Alexander,
>>>>>
>>>>> This fix will be available with the next release for all supported
>>>>> versions.
>>>>>
>>>>> /sunny
>>>>>
>>>>> On Mon, Sep 2, 2019 at 6:47 PM Kaleb Keithley
>>>>> <kkeithle at redhat.com> wrote:
>>>>>>
>>>>>> Fixes on master (before or after the release-7 branch was taken)
>>>>>> almost certainly warrant a backport IMO to at least release-6, and
>>>>>> probably release-5 as well.
>>>>>>
>>>>>> We used to have a "tracker" BZ for each minor release (e.g. 6.6) to
>>>>>> keep track of backports by cloning the original BZ and changing the
>>>>>> Version, and adding that BZ to the tracker. I'm not sure what
>>>>>> happened to that practice. The last ones I can find are for 6.3 and
>>>>>> 5.7: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-6.3 and
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.7
>>>>>>
>>>>>> It isn't enough to just backport recent fixes on master to
>>>>>> release-7. We are supposedly continuing to maintain release-6 and
>>>>>> release-5 after release-7 GAs. If that has changed, I haven't seen
>>>>>> an announcement to that effect. I don't know why our developers
>>>>>> don't automatically backport to all the actively maintained
>>>>>> releases.
>>>>>>
>>>>>> Even if there isn't a tracker BZ, you can always create a backport
>>>>>> BZ by cloning the original BZ and changing the release to 6. That'd
>>>>>> be a good place to start.
>>>>>>
>>>>>> On Sun, Sep 1, 2019 at 8:45 AM Alexander Iliev
>>>>>> <ailiev+gluster at mamul.org> wrote:
>>>>>>>
>>>>>>> Hi Strahil,
>>>>>>>
>>>>>>> Yes, this might be right, but I would still expect fixes like this
>>>>>>> to be released for all supported major versions (which should
>>>>>>> include 6.) At least that's how I understand
>>>>>>> https://www.gluster.org/release-schedule/.
>>>>>>>
>>>>>>> Anyway, let's wait for Sunny to clarify.
>>>>>>>
>>>>>>> Best regards,
>>>>>>> alexander iliev
>>>>>>>
>>>>>>> On 9/1/19 2:07 PM, Strahil Nikolov wrote:
>>>>>>>> Hi Alex,
>>>>>>>>
>>>>>>>> I'm not very deep into bugzilla stuff, but for me NEXTRELEASE
>>>>>>>> means v7.
>>>>>>>>
>>>>>>>> Sunny,
>>>>>>>> Am I understanding it correctly?
>>>>>>>>
>>>>>>>> Best Regards,
>>>>>>>> Strahil Nikolov
>>>>>>>>
>>>>>>>> On Sunday, September 1, 2019, 14:27:32 GMT+3, Alexander Iliev
>>>>>>>> <ailiev+gluster at mamul.org> wrote:
>>>>>>>>
>>>>>>>> Hi Sunny,
>>>>>>>>
>>>>>>>> Thank you for the quick response.
>>>>>>>>
>>>>>>>> It's not clear to me however if the fix has already been released
>>>>>>>> or not.
>>>>>>>>
>>>>>>>> The bug status is CLOSED NEXTRELEASE and according to [1] the
>>>>>>>> NEXTRELEASE resolution means that the fix will be included in the
>>>>>>>> next supported release. The bug is logged against the mainline
>>>>>>>> version though, so I'm not sure what this means exactly.
>>>>>>>>
>>>>>>>> From the 6.4 [2] and 6.5 [3] release notes it seems it hasn't
>>>>>>>> been released yet.
>>>>>>>>
>>>>>>>> Ideally I would not like to patch my systems locally, so if you
>>>>>>>> have an ETA on when this will be out officially I would really
>>>>>>>> appreciate it.
>>>>>>>>
>>>>>>>> Links:
>>>>>>>> [1] https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_status
>>>>>>>> [2] https://docs.gluster.org/en/latest/release-notes/6.4/
>>>>>>>> [3] https://docs.gluster.org/en/latest/release-notes/6.5/
>>>>>>>>
>>>>>>>> Thank you!
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> alexander iliev
>>>>>>>>
>>>>>>>> On 8/30/19 9:22 AM, Sunny Kumar wrote:
>>>>>>>>> Hi Alexander,
>>>>>>>>>
>>>>>>>>> Thanks for pointing that out!
>>>>>>>>>
>>>>>>>>> But this issue is fixed now; you can see the links below for the
>>>>>>>>> BZ and the patch.
>>>>>>>>>
>>>>>>>>> BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1709248
>>>>>>>>>
>>>>>>>>> Patch - https://review.gluster.org/#/c/glusterfs/+/22716/
>>>>>>>>>
>>>>>>>>> Hope this helps.
>>>>>>>>>
>>>>>>>>> /sunny
>>>>>>>>>
>>>>>>>>> On Fri, Aug 30, 2019 at 2:30 AM Alexander Iliev
>>>>>>>>> <ailiev+gluster at mamul.org> wrote:
>>>>>>>>>>
>>>>>>>>>> Hello dear GlusterFS users list,
>>>>>>>>>>
>>>>>>>>>> I have been trying to set up geo-replication between two
>>>>>>>>>> clusters for some time now. The desired state is (Cluster #1)
>>>>>>>>>> being replicated to (Cluster #2).
>>>>>>>>>>
>>>>>>>>>> Here are some details about the setup:
>>>>>>>>>>
>>>>>>>>>> Cluster #1: three nodes connected via a local network
>>>>>>>>>> (172.31.35.0/24), one replicated (3 replica) volume.
>>>>>>>>>>
>>>>>>>>>> Cluster #2: three nodes connected via a local network
>>>>>>>>>> (172.31.36.0/24), one replicated (3 replica) volume.
>>>>>>>>>>
>>>>>>>>>> The two clusters are connected to the Internet via separate
>>>>>>>>>> network adapters.
>>>>>>>>>>
>>>>>>>>>> Only SSH (port 22) is open on cluster #2 nodes' adapters
>>>>>>>>>> connected to the Internet.
>>>>>>>>>>
>>>>>>>>>> All nodes are running Ubuntu 18.04 and GlusterFS 6.3 installed
>>>>>>>>>> from [1].
>>>>>>>>>>
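>>>>>>>>>> (For completeness, the install on each node was along these
>>>>>>>>>> lines - a sketch assuming the PPA from [1]:)
>>>>>>>>>>
>>>>>>>>>>     add-apt-repository ppa:gluster/glusterfs-6
>>>>>>>>>>     apt update
>>>>>>>>>>     apt install glusterfs-server
>>>>>>>>>>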
>>>>>>>>>> The first time I followed the guide [2] everything went fine up
>>>>>>>>>> until I reached the "Create the session" step. That was like a
>>>>>>>>>> month ago, then I had to temporarily stop working on this and
>>>>>>>>>> now I am coming back to it.
>>>>>>>>>>
>>>>>>>>>> Currently, if I try to see the mountbroker status I get the
>>>>>>>>>> following:
>>>>>>>>>>
>>>>>>>>>>> # gluster-mountbroker status
>>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>>   File "/usr/sbin/gluster-mountbroker", line 396, in <module>
>>>>>>>>>>>     runcli()
>>>>>>>>>>>   File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 225, in runcli
>>>>>>>>>>>     cls.run(args)
>>>>>>>>>>>   File "/usr/sbin/gluster-mountbroker", line 275, in run
>>>>>>>>>>>     out = execute_in_peers("node-status")
>>>>>>>>>>>   File "/usr/lib/python3/dist-packages/gluster/cliutils/cliutils.py", line 127, in execute_in_peers
>>>>>>>>>>>     raise GlusterCmdException((rc, out, err, " ".join(cmd)))
>>>>>>>>>>> gluster.cliutils.cliutils.GlusterCmdException: (1, '', 'Unable
>>>>>>>>>>> to end. Error : Success\n', 'gluster system:: execute
>>>>>>>>>>> mountbroker.py node-status')
>>>>>>>>>>
>>>>>>>>>> And in /var/log/gluster/glusterd.log I have:
>>>>>>>>>>
>>>>>>>>>>> [2019-08-10 15:24:21.418834] E [MSGID: 106336]
>>>>>>>>>>> [glusterd-geo-rep.c:5413:glusterd_op_sys_exec] 0-management:
>>>>>>>>>>> Unable to end. Error : Success
>>>>>>>>>>> [2019-08-10 15:24:21.418908] E [MSGID: 106122]
>>>>>>>>>>> [glusterd-syncop.c:1445:gd_commit_op_phase] 0-management:
>>>>>>>>>>> Commit of operation 'Volume Execute system commands' failed on
>>>>>>>>>>> localhost : Unable to end. Error : Success
>>>>>>>>>>
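>>>>>>>>>> (For reference, the mountbroker on the slave side was prepared
>>>>>>>>>> roughly as in the guide [2] - a sketch; the mount root, group
>>>>>>>>>> and user names are the ones I chose, not defaults:)
>>>>>>>>>>
>>>>>>>>>>     groupadd geogroup
>>>>>>>>>>     useradd -G geogroup glustergeorep
>>>>>>>>>>     gluster-mountbroker setup /var/mountbroker-root geogroup
>>>>>>>>>>     gluster-mountbroker add store1 glustergeorep
>>>>>>>>>>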
>>>>>>>>>> So, I have two questions right now:
>>>>>>>>>>
>>>>>>>>>> 1) Is there anything wrong with my setup (networking, open
>>>>>>>>>> ports, etc.)? Is it expected to work with this setup or should
>>>>>>>>>> I redo it in a different way?
>>>>>>>>>> 2) How can I troubleshoot the current status of my setup? Can I
>>>>>>>>>> find out what's missing/wrong and continue from there or should
>>>>>>>>>> I just start from scratch?
>>>>>>>>>>
>>>>>>>>>> Links:
>>>>>>>>>> [1] http://ppa.launchpad.net/gluster/glusterfs-6/ubuntu
>>>>>>>>>> [2] https://docs.gluster.org/en/latest/Administrator%20Guide/Geo%20Replication/
>>>>>>>>>>
>>>>>>>>>> Thank you!
>>>>>>>>>>
>>>>>>>>>> Best regards,
>>>>>>>>>> --
>>>>>>>>>> alexander iliev
>>> ________
>>>
>>> Community Meeting Calendar:
>>>
>>> APAC Schedule -
>>> Every 2nd and 4th Tuesday at 11:30 AM IST
>>> Bridge: https://bluejeans.com/118564314
>>>
>>> NA/EMEA Schedule -
>>> Every 1st and 3rd Tuesday at 01:00 PM EDT
>>> Bridge: https://bluejeans.com/118564314
>>>
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>>
>>> --
>>> regards
>>> Aravinda VK
>>
>> --
>> regards
>> Aravinda VK
>
> --
> regards
> Aravinda VK
Best regards,
--
alexander iliev
------------------------------
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
End of Gluster-users Digest, Vol 138, Issue 14
**********************************************