[Gluster-users] Exorbitant cost to achieve redundancy??

Whit Blauvelt whit.gluster at transpect.com
Tue Feb 14 04:45:10 UTC 2012


On Mon, Feb 13, 2012 at 09:18:34PM -0600, Nathan Stratton wrote:
> 
> On Mon, 13 Feb 2012, Whit Blauvelt wrote:
> 
> >You don't have to leave all your redundancy to Gluster. You can put Gluster
> >on two (or more) systems which are each running RAID5, for instance. Then it
> >would take a minimum of 4 drives failing (2 on each array) before Gluster
> >should lose any data. Each system would require N+1 drives, so double your
> >drives plus two. (There are reasons to consider RAID other than 5, but I'll
> >leave that discussion out for now.)
> 
> That's great with a few nodes, but the problem is with Gluster and
> many nodes. We run all our nodes with RAID6, but the more nodes you
> have the more likely you will have a node failure. This is what I
> think Jeff was worried about.
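
To put rough numbers on the RAID5-plus-replica picture I sketched above, here's a minimal Python sketch. The two-node, replica-2, RAID5 (one parity drive per node) layout comes from the quoted paragraph; the specific drive counts are purely illustrative.

def drives_needed(data_drives_per_node, nodes=2, raid_parity=1):
    # Each node holds a full copy of the data plus RAID parity,
    # so total physical drives = nodes * (data + parity).
    return nodes * (data_drives_per_node + raid_parity)

def min_failures_to_lose_data(nodes=2, raid_parity=1):
    # Data is only lost once every replica's RAID set has lost more
    # drives than it has parity: (parity + 1) drives on each node.
    return nodes * (raid_parity + 1)

print(drives_needed(4))             # 10 drives for 4 drives' worth of data
print(min_failures_to_lose_data())  # 4 drive failures before any data is lost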

Sure, nodes can fail. But Jeff's subject was "Exorbitant cost to achieve
redundancy??" That stands in contrast to those who've found Gluster far
cheaper than the solutions they'd used before. Your systems can also be hit
by natural or man-made disaster at any location. There seems to be more of
the former lately, and plenty of threat of the latter. Remote DR should go
without saying if the data's valuable. If Gluster gets geo-replication
working right, it'll be the low-cost solution there too.

If someone has a design that gets redundancy more cost-effectively, that'd be
great. But as far as I can see - and I'm only in the trenches here, not on
the mountain top - as long as the data lives on physical drives, we're going
to need a RAIDed set locally, synchronously mirrored to another RAIDed set by
whatever tech you like (e.g. Gluster), and a third set of possibly slower,
cheaper drives far away, with the data asynchronously mirrored to it for DR.
No equivalent solution is going to be cheaper than that collection of
physical drives. I'm sure my thinking's too conventional. Please suggest better.
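
To make that cost floor concrete, here's a minimal sketch of the physical drive count for the three-set layout above. The per-site parity overhead is an assumption (1 drive for RAID5, 2 for RAID6), and none of the numbers come from a real deployment.

def site_drives(data_drives, parity):
    # Physical drives at one site: the data plus that site's RAID parity.
    return data_drives + parity

def total_drives(data_drives, local_parity=2, mirror_parity=2, dr_parity=2):
    local  = site_drives(data_drives, local_parity)   # primary RAIDed set
    mirror = site_drives(data_drives, mirror_parity)  # synchronous mirror (e.g. a Gluster replica)
    dr     = site_drives(data_drives, dr_parity)      # remote async copy for DR
    return local + mirror + dr

# e.g. 8 drives' worth of data ends up needing about 30 physical drives,
# which is the floor no equivalent design can get under.
print(total_drives(8))  # 30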

Best,
Whit


