[Gluster-users] Issues with Geo-replication (GlusterFS 6.3 on Ubuntu 18.04)

Aravinda Vishwanathapura Krishna Murthy avishwan at redhat.com
Thu Oct 17 03:27:42 UTC 2019


On Wed, Oct 16, 2019 at 11:36 PM Strahil <hunter86_bg at yahoo.com> wrote:

> By the  way,
>
> I have been left with the impression that data is transferred via
> 'rsync' and not via FUSE.
> Am I wrong?
>

Rsync syncs data from the Master FUSE mount to the Slave/Remote FUSE mount.
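
Roughly, a sketch of what gsyncd does under the hood (the exact rsync
options, file lists, and aux mount paths are internal to gsyncd; everything
below is illustrative):

```
# The Master volume is FUSE-mounted locally as an aux mount; the file list
# comes from the changelogs; data goes over ssh into the slave node's own
# FUSE aux mount of the Slave volume. Paths and hosts are placeholders.
rsync -a --inplace --files-from=<changelog-file-list> \
    -e "ssh -i /var/lib/glusterd/geo-replication/secret.pem" \
    /<master-aux-mount>/ <user>@<slave-host>:/<slave-aux-mount>/
```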


>
> Best Regards,
> Strahil Nikolov
>
> On Oct 16, 2019 19:59, Alexander Iliev <ailiev+gluster at mamul.org>
> wrote:
> >
> > Hi Aravinda,
> >
> > All bricks of the slave volume are up and the volume seems
> > functional.
> >
> > Your suggestion about trying to mount the slave volume on a master node
> > brings up my question about network connectivity again - the GlusterFS
> > documentation[1] says:
> >
> > > The server specified in the mount command is only used to fetch the
> > > gluster configuration volfile describing the volume name. Subsequently,
> > > the client will communicate directly with the servers mentioned in the
> > > volfile (which might not even include the one used for mount).
> >
> > To me this means that the masternode from your example is expected to
> > have connectivity to the network where the slave volume runs, i.e. to
> > have network access to the slave nodes. In my geo-replication scenario
> > this is definitely not the case. The two clusters are running in two
> > completely different networks that are not interconnected.
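> >
> > To illustrate (hypothetical node names): even though the mount command
> > names a single server, the client afterwards connects directly to every
> > brick host listed in the fetched volfile (glusterd on 24007/tcp, bricks
> > typically on ports 49152 and up):
> >
> > ```
> > client$ mount -t glusterfs slave-node1:store2 /mnt/vol
> > client$ ss -tnp | grep glusterfs    # one connection per brick host
> > ```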
> >
> > So my question is - how is the slave volume mount expected to happen if
> > the client host cannot access the GlusterFS nodes? Or is such
> > connectivity a requirement even for geo-replication?
> >
> > I'm not sure if I'm missing something, but any help will be highly
> > appreciated!
> >
> > Thanks!
> >
> > Links:
> > [1]
> >
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Clients/
> > --
> > alexander iliev
> >
> > On 10/16/19 6:03 AM, Aravinda Vishwanathapura Krishna Murthy wrote:
> > > Hi Alexander,
> > >
> > > Please check the status of the Volume. It looks like the Slave volume
> > > mount is failing because bricks are down or not reachable. If Volume
> > > status shows all bricks are up, then try mounting the slave volume
> > > using the mount command.
> > >
> > > ```
> > > masternode$ mkdir /mnt/vol
> > > masternode$ mount -t glusterfs <slavehost>:<slavevol> /mnt/vol
> > > ```
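> > >
> > > If the mount succeeds, a quick sanity check before unmounting
> > > (illustrative):
> > >
> > > ```
> > > masternode$ df -hT /mnt/vol    # type should be fuse.glusterfs
> > > masternode$ umount /mnt/vol
> > > ```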
> > >
> > > On Fri, Oct 11, 2019 at 4:03 AM Alexander Iliev
> > > <ailiev+gluster at mamul.org> wrote:
> > >
> > >     Hi all,
> > >
> > >     I ended up reinstalling the nodes with CentOS 7.5 and GlusterFS 6.5
> > >     (installed from the SIG).
> > >
> > >     Now when I try to create a replication session I get the following:
> > >
> > >       > # gluster volume geo-replication store1 <slave-host>::store2 create push-pem
> > >       > Unable to mount and fetch slave volume details. Please check the log:
> > >       > /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> > >       > geo-replication command failed
> > >
> > >     You can find the contents of gverify-slavemnt.log below, but the
> > >     initial error seems to be:
> > >
> > >       > [2019-10-10 22:07:51.578519] E [fuse-bridge.c:5211:fuse_first_lookup]
> > >       > 0-fuse: first lookup on root failed (Transport endpoint is not connected)
> > >
> > >     I only found
> > >     [this](https://bugzilla.redhat.com/show_bug.cgi?id=1659824)
> > >     bug report, which doesn't seem to help. The reported issue is a
> > >     failure to mount a volume on a GlusterFS client, but in my case I
> > >     need geo-replication, which implies the client (the geo-replication
> > >     master) being on a different network.
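> > >
> > >     For reference, one quick reachability check from a master node
> > >     (the host is a placeholder; 24007 is the glusterd management port):
> > >
> > >       nc -zv <slave-host> 24007
> > >
> > >     Given that the networks are not interconnected, this cannot
> > >     succeed in my setup.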
> > >
> > >     Any help will be appreciated.
> > >
> > >     Thanks!
> > >
> > >     gverify-slavemnt.log:
> > >
> > >       > [2019-10-10 22:07:40.571256] I [MSGID: 100030]
> > >     [glusterfsd.c:2847:main] 0-glusterfs: Started running glusterfs
> > >     version 6.5 (args: glusterfs --xlator-option=*dht.lookup-unhashed=off
> > >     --volfile-server <slave-host> --volfile-id store2 -l
> > >     /var/log/glusterfs/geo-replication/gverify-slavemnt.log
> > >     /tmp/gverify.sh.5nFlRh)
> > >       > [2019-10-10 22:07:40.575438] I [glusterfsd.c:2556:daemonize]
> > >     0-glusterfs: Pid of current running process is 6021
> > >       > [2019-10-10 22:07:40.584282] I [MSGID: 101190]
> > >     [event-epoll.c:680:event_dispatch_epoll_worker] 0-epoll: Started
> > >     thread with index 0
> > >       > [2019-10-10 22:07:40.584299] I [MSGID: 101190]
> > >



-- 
regards
Aravinda VK