[Gluster-devel] Recommended hardware for a unify cluster?
Jake Maul
jakemaul at gmail.com
Fri Nov 28 12:05:13 UTC 2008
What he said :).
The obvious advantage here is that if you do go with unify and the
performance isn't good enough, you can:
1) add more servers
2) switch to SAS drives in the existing servers
They only hold 4 drives though... you'd be looking at 8 spindles
total. If that bothers you, you could do the same thing with the
PE2950 2U's... they hold 6 3.5" drives or 8 2.5" drives (which is
fairly common for fast SAS gear). Honestly though, I think you might
be pleasantly surprised by the performance of modern SAS equipment...
Two 1950s, like Ananth said, might be just the ticket. A quick
run-through on Dell's site puts me at right around $3,000 each for a
4-drive SAS RAID10 with the bells and whistles I like.
Of course, without AFR, adding servers doesn't help your redundancy or
availability... you'll still have missing data when a node goes down.
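One common way around that is to layer unify on top of AFR pairs, so
every file lives on two servers and unify just aggregates the mirrors.
Very roughly, a client volfile for four bricks on four servers might
look like the sketch below. The host names, brick names and option
spellings are placeholders, and the exact syntax varies between
GlusterFS releases, so treat it as an outline rather than something to
paste in:

# four remote bricks (server1..server4 and "posix-brick" are placeholders
# for whatever the server volfiles actually export)
volume brick1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume posix-brick
end-volume

volume brick2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2
  option remote-subvolume posix-brick
end-volume

volume brick3
  type protocol/client
  option transport-type tcp/client
  option remote-host server3
  option remote-subvolume posix-brick
end-volume

volume brick4
  type protocol/client
  option transport-type tcp/client
  option remote-host server4
  option remote-subvolume posix-brick
end-volume

# namespace brick for unify (in practice you'd want this mirrored too)
volume brick-ns
  type protocol/client
  option transport-type tcp/client
  option remote-host server1
  option remote-subvolume posix-ns
end-volume

# two mirror pairs
volume mirror1
  type cluster/afr
  subvolumes brick1 brick2
end-volume

volume mirror2
  type cluster/afr
  subvolumes brick3 brick4
end-volume

# unify across the mirrors; any single server can die and every file is
# still reachable through its partner
volume unify0
  type cluster/unify
  option namespace brick-ns
  option scheduler rr
  subvolumes mirror1 mirror2
end-volume

That keeps the "add more servers" upgrade path (add another mirrored
pair and list it under unify) without losing access to files when a
single box goes down.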
Jake
On Fri, Nov 28, 2008 at 12:12 AM, Ananth <ananth at zresearch.com> wrote:
> I agree with Jake; the ideal option would be two 1U servers, and in my
> experience the Dell 1950s perform admirably for your requirements. They
> come with a factory-installed PERC card as an option, so you could look
> into that. You'd have the option of at least RAID 0, 1, 5 and 10 on
> those, and they can handle both SATA and SAS drives. I guess the
> optimal solution would be two 1U rack servers with 4 drives each. The
> advantage here is that if you want to save on costs, you could start
> with SATA drives and upgrade to SAS later to improve performance. They
> fit into the same backplane; the SATA drives just need an interposer
> board. (My personal recommendation is to go for SAS right away.) Also,
> if you need more space on these servers at a later stage, you can
> always add storage with a JBOD/RBOD (something like the MD1000).
> Regards,
> Ananth
>
> -----Original Message-----
> From: Jake Maul <jakemaul at gmail.com>
> To: Gluster Devel <gluster-devel at nongnu.org>
> Subject: Re: [Gluster-devel] Recommended hardware for a unify cluster?
> Date: Thu, 27 Nov 2008 20:03:25 -0700
>
> Given that you say you're only after ~215GB of disk space, I'm curious
> as to why you're looking to have so many drives. 80GB and 120GB are
> "tiny" by today's standards. I understand the idea of "more spindles
> == more performance", but is there some reason that 4/6/8 larger 10k
> or 15k rpm SAS drives won't do the job? Those would surely outperform
> 7200rpm SATA drives, or even WD Raptors. Even if you decide to stick
> with 7200rpm SATA drives, I'd do some research first: if you buy 80GB
> drives, you're undoubtedly buying older models (nobody
> designs/produces new drives that small)... newer drives are much
> faster, even at the same spindle speed. http://storagereview.com/ is a
> good reference here.
>
> Presuming that you've looked into 10k/15k SAS drives and have decided
> the price isn't worth it...
>
> I can't speak for anyone else of course, but the idea of a Unify-only
> setup kinda makes me itchy. I much prefer Unify+AFR. Others have no
> problem with it though, so to each his own... I don't use RAID-0
> either :).
>
> In your case, since you're not looking at a large amount of storage,
> why not just go for a 2 machine AFR solution? Load is automatically
> balanced between the storage nodes, so you'd still get the performance
> advantage of multiple servers, plus high availability
> (http://www.gluster.org/docs/index.php/Understanding_AFR_Translator).
> My stance on Unify has always been "for when you can't get enough
> space in 1 machine cheaply". Everything short of Solid State is 250GB+
> now though, so this isn't a scenario I'd use Unify in. Again, this is
> a personal viewpoint here- others may feel differently.
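>
> To give an idea of how little config that takes, here's a rough sketch
> of the two volfiles involved. The host names, the export directory and
> the auth option spelling are placeholders (and the exact syntax shifts
> a bit between GlusterFS releases), so check the docs for the version
> you end up running rather than pasting this in:
>
> # server volfile, identical on both storage nodes:
> # export a local directory as "posix-brick"
> volume posix-brick
>   type storage/posix
>   option directory /data/export
> end-volume
>
> volume server
>   type protocol/server
>   option transport-type tcp/server
>   option auth.ip.posix-brick.allow 192.168.1.*
>   subvolumes posix-brick
> end-volume
>
> # client volfile on the mail server: mirror the two exports with AFR
> volume store1
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.11
>   option remote-subvolume posix-brick
> end-volume
>
> volume store2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 192.168.1.12
>   option remote-subvolume posix-brick
> end-volume
>
> volume mirror
>   type cluster/afr
>   subvolumes store1 store2
> end-volume
>
> Writes go to both nodes, reads get spread across them, and losing
> either box just means running on the survivor until it's fixed.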
>
> I think I'd go with a single server solution or maybe a 2-system AFR
> setup... more than that seems like overkill to me. I'd also want to
> watch the output of "iostat -dkx 30" for a while on your current
> production server, and make sure that the storage system really is a
> problem (looking for a high %util or long "await" times), but
> presumably you've already done something like this. For a single
> server I'd be seriously looking at 10k/15k rpm SAS drives (RAID10 or
> maybe RAID5, depending on writes). For a pair of servers, I'd still
> want that of course but would also consider recent-vintage 7.2k/10k
> SATA equipment with a good 3ware RAID card, or better yet, something
> that can work with SATA or SAS drives, like (IIRC) a Dell PERC5
> controller.
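>
> For reference, the exact invocation I mean is just:
>
> # per-device extended stats in kB, sampled every 30 seconds
> iostat -dkx 30
>
> The first report it prints is an average since boot, so skip that one.
> In the later samples, "await" is roughly how long (in ms) a request
> spends queued plus being serviced, and %util is how busy the device was
> during the interval; if %util sits near 100% or await climbs into the
> tens of milliseconds during your mail peaks, the disks probably are the
> bottleneck.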
>
> I'm partial to Dell equipment, and most of my experience is
> rack-mount... I'd be looking at something like a PowerEdge 1950 or
> similar. Those hold 4 drives... enough for a 2-server AFR w/ RAID-10
> in each. For more space, you could go with an external MD1000 storage
> unit (15 bays, IIRC), or a 2U server like a Dell 2950 with 6 bays, I
> believe.
>
> Offtopic: For a maildir-style mail store I would definitely recommend
> looking at filesystems other than ext2/3 ... personally, I'd probably
> go with XFS. If you're not already doing this, you'll find that
> performance with many files per directory (10,000+) is much better
> than with ext3.
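>
> If you do make the switch, the change itself is small. A rough sketch,
> with the device and mount point as placeholders (and obviously tested
> on scratch hardware before it goes anywhere near the real mail store):
>
> # make the filesystem on the RAID volume
> mkfs.xfs /dev/sdb1
>
> # mount with noatime so every maildir read doesn't also trigger an
> # access-time write
> mount -o noatime /dev/sdb1 /var/mail
>
> # matching /etc/fstab line:
> # /dev/sdb1  /var/mail  xfs  noatime  0 2
>
> noatime is worth having on a maildir store no matter which filesystem
> you end up on.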
>
> Good luck,
> Jake
>
> On Thu, Nov 27, 2008 at 11:17 AM, Brandon Lamb <brandonlamb at gmail.com>
> wrote:
>> Hello,
>>
>> I'm hoping someone can recommend commodity hardware to use in a 2- or
>> 3-server unify setup.
>>
>> I am talking with our other admins about migrating from a monolithic
>> 16-drive SCSI (160 drives) NFS server to a 2- or 3-server GlusterFS
>> setup. Given the two options below, what would you use for 2 and for
>> 3 machines?
>>
>> Should I use something like:
>>
>> (2 servers)
>> Quad-core 9850
>> 8GB RAM
>> 8 x 120GB SATA2 drives, RAID10
>>
>> or
>>
>> (3 servers)
>> Dual-core 5600
>> 4-6GB RAM
>> 8 x 80GB SATA2 drives, RAID10
>>
>> If you would recommend a 3-server setup, I would probably want the
>> cheaper hardware where possible. I'm just not sure (from lack of
>> experience) whether there would be a benefit to having 3 machines
>> with less horsepower each, rather than 2 machines.
>>
>> This will be used as a 215GB maildir-format mail store. Our current
>> SCSI setup uses 73GB drives on a P4 3GHz with 4GB of RAM.
>>
>> One of the questions another admin asked was whether two 8-drive
>> RAIDs would perform as well as or better than a monolithic 16-drive
>> RAID when used with GlusterFS. My thought was yes, since we would
>> have two machines with lots of RAM instead of a single CPU, and also
>> double the network bandwidth. Also, the cost of buying two 8-drive
>> machines is less than that of a single 16-drive machine.
>>
>> Anyone out there with previous experience with something like this
>> that can point me in a direction?
>>
>>
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
>
More information about the Gluster-devel mailing list