[Gluster-users] Gluster considerations - replicated volumes in different sites

Mathieu Chateau mathieu.chateau at lotp.fr
Mon Nov 23 12:29:21 UTC 2015


Hello,

Except for NFS mounts, the client writes synchronously to every replica, all the time. It also fetches metadata from all of them.

Because writes are synchronous, the volume is only as fast as the slowest replica. Having two nodes in the primary site won't help here: the slowest (remote) replica is the one that determines write performance.
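On the read side, AFR can be told to prefer a local brick. As a minimal sketch (the volume name gv0 is hypothetical, and option availability should be verified against your Gluster version):

```shell
# Prefer the brick that is local to the client for reads,
# instead of spreading reads across all replicas.
gluster volume set gv0 cluster.choose-local on

# Alternatively, control which replica serves reads via the
# read-hash-mode policy (0 = first responding/readable child).
gluster volume set gv0 cluster.read-hash-mode 0
```

This only steers reads; writes still go synchronously to every replica, so the inter-site latency applies to each write regardless.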

Cordialement,
Mathieu CHATEAU
http://www.lotp.fr

2015-11-23 9:26 GMT+01:00 Tom Farrar <tom.farrar86 at gmail.com>:

> Good Morning All,
>
> I'm looking at deploying 3 Gluster nodes, two in one location and the
> third in another. The link between these two locations is fast and fairly
> low latency, around ~4ms. The volumes are all low write/high read with the
> largest being a few TBs (with lots of small files). While the primary
> location will have two nodes, the secondary location (with one node)
> will see local reads and writes.
>
> I'm a little concerned about running the replication in separate locations
> given that from what I've read Gluster doesn't like latency. Is this a
> valid concern? I've seen a few options for what I believe is keeping reads
> local so they don't go to the distant node, but I'm struggling to find a
> definitive answer (cluster.read-subvolume perhaps).
>
> Also, in terms of configuration for low write/high read and a large volume
> of small files, the following seems to be recommended from what I've been
> able to cobble together - does this seem ok?
>
> Options Reconfigured:
> performance.readdir-ahead: on
> cluster.ensure-durability: off
> server.event-threads: 4
> cluster.lookup-optimize: on
> performance.quick-read: on
> cluster.readdir-optimize: on
>
> Many thanks,
>
> Tom
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>