[Gluster-users] GlusterFS across two datacenters

Niels de Vos ndevos at redhat.com
Mon Sep 28 15:04:08 UTC 2015


On Mon, Sep 21, 2015 at 08:38:26PM +0530, Amardeep Singh wrote:
> Hi There,
> 
> We are planning to implement GlusterFS on CentOS-6 across two datacenters.
> The architecture we are planning is:
> 
> *SAN Storage 1 - SiteA* > exported via iSCSI > *GlusterFS Server 1 - SiteA*
> *SAN Storage 2 - SiteB* > exported via iSCSI > *GlusterFS Server 2 - SiteB*
> 
> Once the GlusterFS configuration is done, each site will have its own
> virtual machines:
> 
> *SiteA - VM1* > GlusterFS client installed > mounted from *GlusterFS Server
> 1 - SiteA*
> *SiteB - VM2* > GlusterFS client installed > mounted from *GlusterFS Server
> 2 - SiteB*
> 
> We want each site to read/write on its own GlusterFS node, instead of
> mounting the volume from *SiteA* on the VMs in both SiteA and SiteB and
> using the *SiteB GlusterFS* node only as a backup.
> 
> The POC seems to be working fine with an XFS filesystem, but I need to
> check a couple of things here:
> 
> 1. Is the setup above correct, and can we use GlusterFS mounted on each
> site with a replicated Gluster volume?

A replicated volume writes to the two copies synchronously. For a
write, the slowest brick will be the bottleneck for response times.

Reading will be done from the brick that replies first to the LOOKUP
request. In general, this will be the brick on the local site.
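
A rough sketch of that layout, with placeholder hostnames and brick
paths (adjust these to your environment):

  # on GlusterFS Server 1 (SiteA), after both servers are installed
  gluster peer probe gluster2.siteb.example.com

  # one replicated volume with a brick on each site
  gluster volume create vmdata replica 2 \
      gluster1.sitea.example.com:/bricks/vmdata/brick \
      gluster2.siteb.example.com:/bricks/vmdata/brick
  gluster volume start vmdata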

It is possible to deploy Gluster with replicated volumes over two
datacenters, but the connection and latency between the sites will be
important. It is regularly done on large campuses of organisations that
have two or more datacenters with dedicated high-speed, low-latency
links.
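
On the client side, each VM can name its local server as the volfile
server and the remote one as a backup. Note that this only affects
where the volume layout is fetched from; the native client still talks
to both bricks directly for I/O. Hostnames below are placeholders:

  # on VM1 in SiteA
  mount -t glusterfs -o backupvolfile-server=gluster2.siteb.example.com \
      gluster1.sitea.example.com:/vmdata /mnt/vmdata

  # on VM2 in SiteB
  mount -t glusterfs -o backupvolfile-server=gluster1.sitea.example.com \
      gluster2.siteb.example.com:/vmdata /mnt/vmdata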

> 2. We want to add a third datacenter for DR using geo-replication; will
> that work with the above setup? We have not done a POC for
> geo-replication yet.

geo-replication is async. The writes are done on the master, and the
slave contains another copy of the data. geo-replication is targeted
at environments where storage servers are in different datacenters.
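
Setting it up looks roughly like this (the slave hostname and volume
name are placeholders; the slave volume must already exist in the third
datacenter):

  # on one of the master servers
  gluster system:: execute gsec_create
  gluster volume geo-replication vmdata dr1.sitec.example.com::vmdata-dr \
      create push-pem
  gluster volume geo-replication vmdata dr1.sitec.example.com::vmdata-dr start
  gluster volume geo-replication vmdata dr1.sitec.example.com::vmdata-dr status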

HTH,
Niels

