[Gluster-users] WAN Challenge

Count Zero countz at gmail.com
Fri Apr 9 16:55:39 UTC 2010

Hi Guys, I've sent this once already, but it never came back to me from the mailing list, so I'm not sure it was received. My apologies if this is a duplicate.

I have an interesting situation, and I'm wondering if there's a solution for it in the glusterfs realm or if I will have to resort to other solutions that complement glusterfs (such as rsync or unison).

I have 9 servers in 3 locations on the internet (3 servers per location). Unfortunately, the network distance between them is such that setting up a Distribute or NUFA cluster between them all is difficult (I'm not saying impossible, because it may be possible and I just don't know how to pull it off).

There are 3 servers in each data center, and they are all clustered via NUFA:

DC-A
-+ NUFA-Cluster
---+ SRV-A1
---+ SRV-A2
---+ SRV-A3

DC-B ( >> rsync from A)
-+ NUFA-Cluster
---+ SRV-B1
---+ SRV-B2
---+ SRV-B3

DC-C ( >> rsync from B)
-+ NUFA-Cluster
---+ SRV-C1
---+ SRV-C2
---+ SRV-C3

My reasons for this setup, so far:

1) I needed file reads to be fast on each local node, so I have the "option local-volume-name `hostname`" trick in my glusterfs.vol file (like in the cookbook).

2) Bandwidth between DC-A, DC-B and DC-C is quite low, and since glusterfs waits for the slowest server to respond, this severely slows down the entire cluster for every operation, even just listing the files in a directory.

Is there a better way to implement this? All the examples I find are about 4 node replication, etc.

What about inter-continent replication of data between NUFA Clusters?
Any advice would be greatly appreciated :-)

At the moment, for lack of a better option, I plan to sync between the 3 NUFA clusters with "inosync".

Count Zero

P.S. Below is my configuration file, from /etc/glusterfs/glusterfs.vol:


volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow *
  subvolumes brick
end-volume

volume srv-a1
  type protocol/client
  option transport-type tcp
  option remote-host srv-a1
  option remote-subvolume brick
end-volume

volume srv-a2
  type protocol/client
  option transport-type tcp
  option remote-host srv-a2
  option remote-subvolume brick
end-volume

volume srv-a3
  type protocol/client
  option transport-type tcp
  option remote-host srv-a3
  option remote-subvolume brick
end-volume

volume nufa
  type cluster/nufa
  option local-volume-name `hostname`
  subvolumes srv-a1 srv-a2 srv-a3
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes nufa
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

