[Gluster-users] GlusterFS absolute max. storage size in total?

Amar Tumballi atumball at redhat.com
Thu Apr 20 15:35:46 UTC 2017


On Thu, Apr 20, 2017 at 12:32 PM, Peter B. <pb at das-werkstatt.com> wrote:

> Thanks Amar and Mohamed!
>
> My question was mainly aimed at things like programmatic limitations.
> We're already running 2 Gluster-Clusters with 4 nodes each.
> 3 bricks per node, 100 TB/node = 400 TB total.
>
> So with Gluster 3.x it's 8 PB, possibly more with Gluster 4.x.
>

Considering 100 TB per node, we should hit well past 10 PB with 3.x
(128 nodes x 100 TB = 12.8 PB).

With Gluster 4.x we are planning to increase the number of hosts/nodes in
storage pools.
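
To make the arithmetic concrete, here is a quick back-of-the-envelope
sketch in plain Python, using only the node counts and per-node sizes
discussed in this thread (the helper name is just for illustration):

# Raw GlusterFS pool capacity: nodes x per-node storage.
# Raw capacity only -- replication, disperse redundancy, and
# filesystem overhead all reduce the usable number.

TB = 1           # work in terabytes
PB = 1000 * TB   # decimal units, as used in this thread

def raw_capacity(nodes, tb_per_node):
    return nodes * tb_per_node

# 3.x comfort limit quoted below: 128 nodes at 64 TB each.
print(raw_capacity(128, 64) / PB)   # 8.192 -> the "8 PB" figure

# The same 128 nodes at 100 TB each, as in Peter's clusters.
print(raw_capacity(128, 100) / PB)  # 12.8 -> well past 10 PB

# Peter's current setup: 4 nodes x 100 TB.
print(raw_capacity(4, 100))         # 400 TB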

-Amar


>
> Right?
>
>
> Thank you very much again!
> Peter B.
>
>
>
> On 04/20/2017 06:31 AM, Mohamed Pakkeer wrote:
> > Hi Amar,
> >
> > Currently, we are running a 40-node cluster, and each node has
> > 36 x 6 TB drives (210 TB). As of now we don't face any issues with read
> > and write performance, except folder listing and disk-failure healing.
> > We are planning to start another cluster with 36 x 10 TB (360 TB) per
> > node. What kind of problems will we face if we go with 360 TB per node?
> > The planned 360 TB per node is far beyond the recommended 64 TB. We are
> > using GlusterFS with disperse volumes for video archival.
> >
> > Regards
> > K. Mohamed Pakkeer
> >
> > On 20-Apr-2017 9:00 AM, "Amar Tumballi" <atumball at redhat.com> wrote:
> >
> >> On Wed, 19 Apr 2017 at 11:07 PM, Peter B. <pb at das-werkstatt.com> wrote:
> >>
> >>> Hello,
> >>>
> >>> Could you tell me what the technical maximum storage capacity of a
> >>> GlusterFS deployment is?
> >>> I cannot find any official statement on this...
> >>>
> >>> Somewhere it says "several Petabytes", but that's a bit vague.
> >>
> >> It is mostly vague because the total storage depends on two things: how
> >> much per-node storage you can get, and how many nodes you have in the
> >> cluster.
> >>
> >> It scales comfortably to 128 nodes, and per node you can have 64 TB of
> >> drives, making it 8 PB.
> >>
> >>
> >>
> >>> I'd need it to argue for GlusterFS in comparison to other systems.
> >>
> >> Hope the above info helps you. By the way, we are targeting more
> >> scalability-related improvements in Gluster 4.0, so it should be a much
> >> larger number from next year.
> >>
> >> -Amar
> >>
> >>
> >>
> >>> Thank you very much in advance,
> >>> Peter B.
> >>>
> >>>
> >> --
> >> Amar Tumballi (amarts)
> >>
> >>
>
>
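
Regarding the disperse-volume sizing question quoted above, here is a
rough sketch of how raw capacity maps to usable space. Note that the
8+2 layout (8 data + 2 redundancy bricks per set) is only an assumed
example; the actual redundancy level is not stated in this thread:

# Usable capacity of a dispersed (erasure-coded) Gluster volume.
# ASSUMPTION: an 8+2 configuration; adjust to the real layout.

def disperse_usable(raw_tb, data_bricks=8, redundancy_bricks=2):
    # usable = raw x data / (data + redundancy)
    return raw_tb * data_bricks / (data_bricks + redundancy_bricks)

# Current cluster: 40 nodes x 36 drives x 6 TB = 8640 TB raw.
print(disperse_usable(40 * 36 * 6))   # 6912.0 TB usable under 8+2

# Planned nodes: 36 x 10 TB = 360 TB raw per node, far beyond the
# 64 TB per node mentioned earlier in the thread.
print(disperse_usable(40 * 36 * 10))  # 11520.0 TB usable under 8+2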



-- 
Amar Tumballi (amarts)