[Gluster-users] How many clients can mount a single volume?

Robert Hajime Lanning lanning at lanning.cc
Mon Mar 6 22:15:57 UTC 2017


There is a lot of data missing...

Using FUSE clients:

For every write from a client, three writes go out on the network (one per
brick in your x3 replica config).  So, beyond bandwidth requirements, you
are bound by TCP connection limits, which in turn come with file
descriptor limits.
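You can inspect those limits directly on a brick server; a quick sketch
using standard Linux tools (the brick port range of 49152+ is an
assumption based on recent Gluster releases; adjust for your version):

```shell
# Per-process fd limit (applies to each glusterfsd brick process):
ulimit -n
# System-wide file-descriptor ceiling:
cat /proc/sys/fs/file-max
# Count established TCP connections on the assumed brick port range,
# if ss(8) is available:
command -v ss >/dev/null && ss -t state established '( sport >= :49152 )' | wc -l || true
```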

If all three bricks are on the same server, that server's connection limit
is effectively divided by three.  If each brick is on its own server, you
avoid that one-third penalty.
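As a back-of-the-envelope sketch (the numbers below are illustrative
assumptions, not Gluster defaults):

```shell
FD_BUDGET=65536     # assumed per-server file-descriptor budget
BRICKS_PER_HOST=3   # all three replicas co-located on one server
PER_CLIENT=1        # assumed fds consumed per client, per brick
# With the bricks co-located, the budget is split three ways:
echo "max clients ~ $((FD_BUDGET / (BRICKS_PER_HOST * PER_CLIENT)))"
```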

The idea of GlusterFS is horizontal scaling: when you approach that limit,
you add more bricks on more hosts.
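Scaling out looks roughly like the following (volume and host names are
hypothetical; this assumes you want to grow a 1x3 replicated volume into
a 2x3 distributed-replicated one):

```shell
# Add one new replica set of three bricks on three new hosts:
gluster volume add-brick myvol replica 3 \
    host4:/data/brick1 host5:/data/brick1 host6:/data/brick1
# Spread existing data onto the new bricks:
gluster volume rebalance myvol start
```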


Using NFS:

You have a single connection from the client to the NFS server, which then
fans out to the bricks.
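On the client side this is a plain NFSv3 mount (Gluster's built-in NFS
server speaks v3); server and volume names here are hypothetical:

```shell
# One TCP connection from this client to the NFS server; the server
# handles the fan-out to the bricks.
mount -t nfs -o vers=3,tcp gluster-server:/myvol /mnt/myvol
```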

On 03/02/17 21:14, Tamal Saha wrote:
> Hi,
> Does anyone have any comments about this issue? Thanks again.
>
> -Tamal
>
> On Mon, Feb 27, 2017 at 8:34 PM, Tamal Saha <tamal at appscode.com 
> <mailto:tamal at appscode.com>> wrote:
>
>     Hi,
>     I am running a GlusterFS cluster in Kubernetes. This has a single
>     1x3 volume. But this volume is mounted by around 30 other docker
>     containers. Basically each docker container represents a separate
>     "user" in our multi-tenant application. As a result there are no
>     conflicting writes among the "user"s. Each user writes to their
>     own folder in the volume.
>
>     My question is how many clients can mount a GlusterFS volume
>     before it becomes a performance issue?
>
>     Thanks,
>     -Tamal
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

-- 
Mr. Flibble
King of the Potato People
http://www.linkedin.com/in/RobertLanning

