[Gluster-users] Replication issue. Synchronous / Async..
Bobby Jacob
bobby.jacob at alshaya.com
Tue May 21 13:51:31 UTC 2013
Hi,
So in my case:
I have to set up a GlusterFS volume in Kuwait and a separate one in Dubai?
Is it possible to have 2-way asynchronous replication between the 2
volumes? I would want the files from both volumes to be replicated to each
other, not 1-way.
Thanks & Regards,
Bobby Jacob
SAVE TREES. Please don't print this e-mail unless you really need to.
From: Venky Shankar [mailto:yknev.shankar at gmail.com]
Sent: Tuesday, May 21, 2013 4:48 PM
To: Bobby Jacob
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Replication issue. Synchronous / Async..
Hi,
Asynchronous replication in GlusterFS is Geo-replication, which
replicates data between two GlusterFS clusters. You cannot create an
asynchronous replicated volume directly.
Basically, you create two volumes in different geographies and then set up
"asynchronous replication" b/w the two. The volumes themselves could be pure
distribute, pure replicate, or distributed-replicate.
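As a sketch of what that two-volume setup could look like on the command line: the hostnames and brick paths are taken from the volume info quoted below in this thread, but the volume names (cloudgfs-kwt, cloudgfs-dxb) are illustrative, and the exact geo-replication syntax varies between GlusterFS releases, so check the admin guide for your version.

```shell
# On the Kuwait cluster: a local replicated volume from the Kuwait bricks.
gluster volume create cloudgfs-kwt replica 2 \
    KWTTESTGSNODE001:/mnt/cloudbrick KWTTESTGSNODE002:/mnt/cloudbrick
gluster volume start cloudgfs-kwt

# On the Dubai cluster: the slave volume, built the same way.
gluster volume create cloudgfs-dxb replica 2 \
    DXBTESTGSNODE001:/mnt/cloudbrick DXBTESTGSNODE002:/mnt/cloudbrick
gluster volume start cloudgfs-dxb

# Back on the Kuwait side: create and start a geo-replication session
# from the local (master) volume to the remote (slave) volume.
gluster volume geo-replication cloudgfs-kwt DXBTESTGSNODE001::cloudgfs-dxb create push-pem
gluster volume geo-replication cloudgfs-kwt DXBTESTGSNODE001::cloudgfs-dxb start
gluster volume geo-replication cloudgfs-kwt DXBTESTGSNODE001::cloudgfs-dxb status
```

Note that a geo-replication session is one-way, master to slave, which is why it does not by itself give the 2-way replication asked about above.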
Thanks,
-venky
On Tue, May 21, 2013 at 6:30 PM, Bobby Jacob <bobby.jacob at alshaya.com>
wrote:
Hi All,
I'm currently testing GFS on a geographically distributed environment.
Our scenario:
We have 2 datacenters: Kuwait/Dubai. I have built test
servers in both locations. My GlusterFS volume looks like this:
Volume Name: cloudgfs
Type: Replicate
Volume ID: 3e002989-6c9f-4f83-9bd5-c8a3442d8721
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: KWTTESTGSNODE001:/mnt/cloudbrick
Brick2: KWTTESTGSNODE002:/mnt/cloudbrick
Brick3: DXBTESTGSNODE001:/mnt/cloudbrick
Brick4: DXBTESTGSNODE002:/mnt/cloudbrick
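For reference, a volume with the layout shown above would have been created roughly like this (a sketch, not the poster's actual command; `replica 4` matches the "1 x 4 = 4" brick count reported by `gluster volume info`):

```shell
# Four-way synchronous replica spanning both datacenters.
gluster volume create cloudgfs replica 4 transport tcp \
    KWTTESTGSNODE001:/mnt/cloudbrick \
    KWTTESTGSNODE002:/mnt/cloudbrick \
    DXBTESTGSNODE001:/mnt/cloudbrick \
    DXBTESTGSNODE002:/mnt/cloudbrick
gluster volume start cloudgfs
```

Because all four bricks are in one replica set, every write is committed synchronously across the WAN link, which is the behaviour being asked about.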
As you see, it's a basic replicated volume. This volume is mounted on 2
application servers: 1 in Kuwait / 1 in Dubai. This volume will act as a
cloud-based file share for users.
The replication is synchronous. Any idea how we can make the replication
asynchronous? Any feedback on how we can make the setup more robust?
Thanks & Regards,
Bobby Jacob
SAVE TREES. Please don't print this e-mail unless you really need to.
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users