[Gluster-users] Exorbitant cost to achieve redundancy??

Whit Blauvelt whit.gluster at transpect.com
Tue Feb 14 00:33:43 UTC 2012


You don't have to leave all your redundancy to Gluster. You can put Gluster
on two (or more) systems that are each running RAID5, for instance. Then at
least 4 drives would have to fail (2 on each array) before Gluster would
lose any data. Each system needs N+1 drives for N drives' worth of usable
space, so the total is double your data drives plus two. (There are reasons
to consider RAID levels other than 5, but I'll leave that discussion out for
now.)
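
As a rough sketch (the hostnames and brick paths below are made up), a
two-way replicated volume across two RAID5-backed servers would be created
with something like:

    # one brick per server, each brick sitting on that server's RAID5 array
    gluster volume create myvol replica 2 \
        server1:/export/raid5/brick server2:/export/raid5/brick

With two bricks and replica 2 you also satisfy the "number of bricks should
be a multiple of the replica count" rule, and the usable capacity is half
the raw brick space, plus the one parity drive per server that RAID5 costs.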

As for "reasonable storage costs," have you priced the alternatives? 

Best,
Whit

On Mon, Feb 13, 2012 at 04:15:16PM -0800, Jeff Wiegley wrote:
> I'm trying to justify a GlusterFS storage system for my technology
> development group, and I want some clarification on something I can't
> seem to figure out architecture-wise...
> 
> My storage system will be rather large: a significant fraction of a
> petabyte, and it will need to keep scaling for at least a decade.
> 
> From what I understand, GlusterFS achieves redundancy through
> replication. The documentation (Section 5.5, Creating Distributed
> Replicated Volumes) notes: "The number of bricks should be a multiple
> of the replica count for a distributed replicated volume."
> 
> Is this telling me that if I want to be able to survive 2 bricks
> failing, I have to deploy bricks three at a time, and the usable space
> I end up with is essentially only that of a single brick?
> 
> In other words... GlusterFS TRIPLES all my storage costs to provide
> 2-brick fault tolerance?
> 
> How do I get redundancy in GlusterFS at a reasonable storage cost,
> without spending 50% or more of my investment on extra copies just to
> get that redundancy?
> 
> Thank you.
> 
