[Gluster-users] Gluster Scale Limitations
Atin Mukherjee
amukherj at redhat.com
Thu Nov 2 03:52:45 UTC 2017
On Tue, 31 Oct 2017 at 03:32, Mayur Dewaikar <mdewaikar at commvault.com>
wrote:
> Hi all,
>
> Are there any scale limitations in terms of how many nodes can be in a
> single Gluster Cluster or how much storage capacity can be managed in a
> single cluster? What are some of the large deployments out there that you
> know of?
>
>
The current design of GlusterD does not cope well with a very large number
of nodes in a cluster, especially when nodes are restarted or rebooted. We
have heard of deployments with ~100-150 nodes where things are stable, but
in node-reboot scenarios some special tuning of parameters such as
network.listen-backlog is required so that the TCP listen queue does not
overflow and cause the connections between the bricks and glusterd to fail.
The GlusterD2 project will address this aspect of the problem.
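
For illustration only (a hedged sketch: the volume name "testvol" and the
value 1024 are placeholders, not recommendations), the kind of tuning meant
here might look like this:

    # Raise the listen backlog on the volume so brick/glusterd reconnects
    # after a reboot are not dropped from a full accept queue.
    gluster volume set testvol network.listen-backlog 1024

    # The kernel-level accept-queue cap should be at least as large,
    # otherwise the larger backlog has no effect.
    sysctl -w net.core.somaxconn=1024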
Also, since the directory layout is replicated on all the bricks of a
volume, mkdir, unlink, and other directory operations are expensive, and
with a larger number of bricks this hurts latency. We're also working on a
project called RIO to address this issue.
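
As a rough way to observe the directory-operation cost from a client
(another hedged sketch: /mnt/glusterfs is a placeholder FUSE mount point),
one can time a batch of mkdirs and compare volumes with different brick
counts:

    # Time 1000 directory creations on the Gluster FUSE mount; latency
    # grows with the brick count because each mkdir touches every brick.
    time bash -c 'for i in $(seq 1 1000); do mkdir /mnt/glusterfs/dir_$i; done'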
>
> Thanks,
>
> Mayur
--
- Atin (atinm)