[Gluster-users] Exorbitant cost to achieve redundancy??

Nathan Stratton nathan at robotics.net
Tue Feb 14 03:15:19 UTC 2012


On Mon, 13 Feb 2012, Jeff Wiegley wrote:

> I'm trying to justify a GlusterFS storage system for my technology
> development group, and I want to get some clarification on
> something that I can't seem to figure out architecture-wise...
>
> My storage system will be rather large: a significant fraction of a
> petabyte, and it will need to keep scaling for at least a decade.
>
> From what I understand, GlusterFS achieves redundancy through
> replication. And from the documentation (Section 5.5, Creating
> Distributed Replicated Volumes), the note says "The number of bricks
> should be a multiple of the replica count for a distributed replicated
> volume."
>
> Is this telling me that if I want to be able to suffer 2 bricks failing,
> I have to deploy three bricks at a time and the amount of space
> I wind up with available is essentially equal to only that provided
> by a single brick?
>
> In other words... GlusterFS TRIPLES all my storage costs to provide
> 2 brick fault tolerance?
>
> How do I get redundancy in GlusterFS at a reasonable storage cost,
> without wasting 50% or more of my investment on copies just to
> obtain redundancy?
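
Pretty much, yes. With replica 3 you add bricks three at a time, and each
replica set only gives you the usable capacity of one brick. For example
(host and brick names here are just placeholders), a six-brick replica 3
volume only exposes the capacity of two bricks:

    # 6 bricks at replica 3 = 2 distribute subvolumes, each 3-way mirrored
    gluster volume create bigvol replica 3 \
        server1:/export/brick1 server2:/export/brick1 server3:/export/brick1 \
        server4:/export/brick1 server5:/export/brick1 server6:/export/brick1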

It is a real problem that I hope gets addressed soon. Until then, about 
the only thing you can do is rely on underlying hardware redundancy. We 
have just over 100 TB on Gluster, with every brick using a hardware RAID 6 
card with a hot standby. I know it's not the solution you're looking for, 
but for now it's all we have with Gluster.
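
If you go the RAID route, the layout looks something like this (hostnames
and paths are just examples): each brick sits on its own RAID 6 array and
the volume is plain distribute, so the redundancy comes from the disks
rather than from extra copies:

    # every brick path is a filesystem on a hardware RAID 6 array
    gluster volume create bigvol \
        server1:/raid6/brick server2:/raid6/brick \
        server3:/raid6/brick server4:/raid6/brick

The tradeoff is that this protects you against disk failures, but not
against losing a whole server.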

><>
Nathan Stratton                                CTO, BlinkMind, Inc.
nathan at robotics.net                         nathan at blinkmind.com
http://www.robotics.net                        http://www.blinkmind.com


