[Gluster-users] Design/HW for cost-efficient NL archive >= 0.5PB?

Justin Dossey jbd at podomatic.com
Tue Dec 31 16:33:45 UTC 2013


Yes, I'd recommend sticking with RAID in addition to GlusterFS.  The
cluster I'm mid-build on (it's a live migration) is 18x RAID-5 bricks on 9
servers.  Each RAID-5 brick is eight 2T drives, so about 13T usable.  It's
better to let the RAID absorb a failed disk than to have to pull and
replace a whole brick, and I believe Red Hat's official recommendation is
still to minimize the number of bricks per server (which makes me a rebel
for having two, I suppose).  Nine (slow-ish, SATA-RAID) servers easily
saturate 1Gbit on a busy day.
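
In case it helps, one of those bricks looks roughly like this when built
with Linux software RAID (device names and mount points are placeholders;
with a hardware controller the RAID step happens in the controller's own
tooling instead, but the XFS-on-top part is the same):

    # 8x 2T drives in one RAID-5 set, XFS on top, mounted as a brick
    mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /bricks/r5a
    mount /dev/md0 /bricks/r5a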

The following is opinion only, so make up your own mind:

If I had a big pile of RAID-5 or RAID-6 bricks, I would not want to spend
extra money on replica-3.  Instead, I would go replica-2 and use the
leftover money to build additional redundancy into the hardware (e.g.
redundant power, redundant 10GigE).  If money were no object, of course
there's no harm in going replica-3 or more.  But every build I've ever done
has had a budget that seems slightly small for the desired outcome.
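
To make that concrete, a replica-2 distributed volume over bricks like
mine is created along these lines (hostnames and brick paths are made up,
and this only shows four of the bricks; the important part is ordering
the bricks so that each replica pair lands on two different servers):

    gluster volume create archive replica 2 \
        server1:/bricks/r5a server2:/bricks/r5a \
        server2:/bricks/r5b server1:/bricks/r5b
    gluster volume start archive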




On Mon, Dec 30, 2013 at 5:54 AM, bernhard glomm
<bernhard.glomm at ecologic.eu> wrote:

> Some years ago I had a similar task.
> What I did:
> - We had disk arrays with 24 slots, plus optionally 4 JBODs (24 slots
> each) stacked on top, and dual 4Gbit fibre-channel controllers (costs ;-)
> - I created RAID-6 arrays of no more than 7 disks each
> - As far as I remember, I had one hot spare for every 4 RAIDs
> - I connected as many of these RAID bricks together with striped GlusterFS
> as needed
> - As for replication, I was planning an offsite duplicate of this
> architecture and,
> because losing data was REALLY not an option, also writing everything off
> to LTFS tapes at a second offsite location.
> As the original LTFS library edition was far too expensive for us,
> I found an alternative solution that does the same thing
> at a much more reasonable price. LTFS is still a big thing in digital
> archiving.
> Drop me a note if you would like more details on that.
>
> - This way I could fsck all the (not too big) RAIDs in parallel, which
> sped things up
> - Proper robustness against disk failure
> - Space that could grow without limit (add more and bigger disks) and
> keep up in access speed (add more servers) at a fairly predictable price
> - LTFS in the vault was the finishing touch: data stays accessible
> even if two out of three sites are down, at a reasonable price (for
> instance, no cooling problem at the tape location)
> Nowadays I would take the same approach, but with ZFS raidz3 bricks
> instead of (small) hardware-RAID bricks (at least run a thorough test of
> that first); see the rough sketch below.
> For simplicity and robustness I would not want to end up with several
> hundred GlusterFS bricks, each on an individual disk,
> but would rather leave disk-failure protection to hardware RAID or ZFS
> and use Gluster to join those bricks into the
> filesystem size I need (and to mirror the whole thing to a second site if
> needed).
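>
> A rough sketch of what one such ZFS brick could look like (disk, pool and
> volume names are placeholders, and I have not tested these exact
> commands):
>
>     # one 10-disk raidz3 vdev per server, exported as a Gluster brick
>     zpool create tank raidz3 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
>     zfs create tank/brick1
>     # then join the bricks from all servers into one (replicated) volume
>     gluster volume create archive replica 2 serverA:/tank/brick1 serverB:/tank/brick1
>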
> hth
> Bernhard
>
>
>
> Bernhard Glomm
> IT Administration, Ecologic Institute
> Phone: +49 (30) 86880 134 | Fax: +49 (30) 86880 100 | Skype: bernhard.glomm.ecologic
> Web: http://ecologic.eu
> Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
> GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
> Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
> ------------------------------
>
> On Dec 25, 2013, at 8:47 PM, Fredrik Häll <hall.fredrik at gmail.com> wrote:
>
> I am new to Gluster, but so far it seems very attractive for my needs. I
> am trying to assess its suitability for a cost-efficient storage problem I
> am tackling. Hopefully someone can help me figure out how best to solve
> it.
>
> Capacity:
> Start with around 0.5PB usable
>
> Redundancy:
> 2 replicas on non-RAID bricks is not sufficient. Either 3 replicas on
> non-RAID bricks, or some combination of 2 replicas and RAID?
>
> File types:
> Large files, around 400-1500MB each.
>
> Usage pattern:
> Archive (not sure whether this counts as nearline or not) with files being
> added at around 200-300GB/day (300-400 files/day). Very few reads, on the
> order of 10 file accesses per day. Concurrent reads are highly unlikely.
>
> The two main factors for me are cost and redundancy. Since this is an
> archive solution, losing data is not an option. Cost per usable TB is the
> other key factor, as we expect growth of 100-500TB/year.
>
> Looking just at $/TB, a RAID-based approach sounds more efficient to me.
> But RAID rebuild times with large arrays of high-capacity drives sound
> really scary. Perhaps something smart can be done, since we will still
> have a replica left during the rebuild?
>
> So, any suggestions for possible and cost-efficient solutions?
>
> - Any experience with dense servers? What is advisable: 24/36/50/60 slots?
> - SAS expanders/storage pods?
> - RAID vs non-RAID?
> - Number of replicas etc?
>
> Best,
>
> Fredrik
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
Justin Dossey
CTO, PodOmatic