[Gluster-devel] Recommended hardware for a unify cluster?
Jake Maul
jakemaul at gmail.com
Fri Nov 28 03:03:25 UTC 2008
Given that you say you're only after ~215GB of disk space, I'm curious
as to why you're looking to have so many drives. 80GB and 120GB are
"tiny" by today's standards. I understand the idea of "more spindles
== more performance", but is there some reason that 4/6/8 larger 10k
or 15k rpm SAS drives won't do the job? Those would surely outperform
7200rpm SATA drives, or even WD Raptors. Even if you decide to stick
with 7200rpm SATA drives, I'd do some research first: if you buy 80GB
drives, you're undoubtedly buying older models (nobody
designs/produces new drives that small)... newer drives are much
faster, even at the same spindle speed. http://storagereview.com/ is a
good reference here.
Presuming that you've looked into 10k/15k SAS drives and have decided
the price isn't worth it...
I can't speak for anyone else of course, but the idea of a Unify-only
setup kinda makes me itchy. I much prefer Unify+AFR. Others have no
problem with it though, so to each his own... I don't use RAID-0
either :).
In your case, since you're not looking at a large amount of storage,
why not just go for a 2 machine AFR solution? Load is automatically
balanced between the storage nodes, so you'd still get the performance
advantage of multiple servers, plus high availability
(http://www.gluster.org/docs/index.php/Understanding_AFR_Translator).
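For reference, the client-side volume spec for a 2-server AFR setup
looks roughly like this (this is from memory of the 1.3.x-era syntax,
so double-check it against the wiki page above; "server1"/"server2"
and the volume names are just placeholders, and each server would
export a matching "brick" volume):

  # one protocol/client volume per storage server
  volume remote1
    type protocol/client
    option transport-type tcp/client
    option remote-host server1
    option remote-subvolume brick
  end-volume

  volume remote2
    type protocol/client
    option transport-type tcp/client
    option remote-host server2
    option remote-subvolume brick
  end-volume

  # mirror across both servers: every file lives on both, writes go to
  # both, and the mount stays up if one server dies
  volume mirror
    type cluster/afr
    subvolumes remote1 remote2
  end-volume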
My stance on Unify has always been "for when you can't get enough
space in 1 machine cheaply". Everything short of Solid State is 250GB+
now though, so this isn't a scenario I'd use Unify in. Again, this is
a personal viewpoint; others may feel differently.
I think I'd go with a single-server solution or maybe a 2-system AFR
setup... more than that seems like overkill to me. I'd also want to
watch the output of "iostat -dkx 30" for a while on your current
production server and make sure the storage system really is the
bottleneck (look for high "%util" or long "await" times; there's an
example below), but presumably you've already done something like
this. For a single server I'd be seriously looking at 10k/15k rpm SAS
drives (RAID10 or maybe RAID5, depending on how write-heavy the load
is). For a pair of servers I'd still want that of course, but would
also consider recent-vintage 7.2k/10k SATA drives behind a good 3ware
RAID card, or better yet something that can handle both SATA and SAS
drives, like (IIRC) a Dell PERC5 controller.
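To be concrete about that iostat check, something along these lines on
the current server while it's under normal load (the interpretation
notes are rough rules of thumb, not hard numbers):

  # sample extended per-device stats every 30 seconds, in kB
  iostat -dkx 30
  # watch "await" (average ms an I/O spends queued plus being serviced)
  # and "%util" (how busy the device is); sustained %util near 100, or
  # await climbing well above svctm, usually means the disks themselves
  # are the bottleneck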
I'm partial to Dell equipment, and most of my experience is with
rack-mount gear... I'd be looking at something like a PowerEdge 1950
or similar. Those hold 4 drives... enough for a 2-server AFR with
RAID-10 in each. For more space, you could go with an external MD1000
storage unit (15-16 bays, IIRC), or a 2U server like a Dell 2950
(6 bays, I believe).
Offtopic: for a maildir-style mail store I would definitely recommend
looking at filesystems other than ext2/3... personally, I'd probably
go with XFS. If you're not already doing this, you'll find that
performance with many files per directory (10,000+) is much better
than with ext3.
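For what it's worth, moving a store onto XFS is as simple as something
like this (the device name and mount point are made up for the
example; noatime and the bigger log buffers are common maildir-ish
tweaks, not requirements):

  # assuming the mail store lives on /dev/sdb1, mounted at /var/mail
  mkfs.xfs /dev/sdb1
  mount -t xfs -o noatime,nodiratime,logbufs=8 /dev/sdb1 /var/mail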
Good luck,
Jake
On Thu, Nov 27, 2008 at 11:17 AM, Brandon Lamb <brandonlamb at gmail.com> wrote:
> Hello,
>
> I'm hoping someone(s) can recommend commodity hardware to use in a
> 2- and/or 3-server Unify setup.
>
> I'm talking with our other admins about migrating from a monolithic
> 16-drive SCSI (160 drives) NFS server to a 2- or 3-server GlusterFS
> setup. Given the two options, what would you use for 2 and 3
> machines?
>
> Should I use something like
>
> (2)
> Quad core 9850
> 8g ram
> 8 sata2 120g drives, raid10
>
> or
>
> (3)
> Dual core 5600
> 4/6g ram
> 8 sata2 80g drives, raid10
>
> If you would recommend a 3 server setup I would probably want the
> cheaper hardware where possible. I'm just not sure (from lack of
> experience) whether there would be a benefit to having 3 machines
> with less horsepower each, rather than 2 machines.
>
> This will be used for a 215 gig maildir-format mail store. Our
> current SCSI setup is using 73 gig drives on a P4 3GHz with 4GB of RAM.
>
> One of the questions asked by one of the admins was whether two
> 8-drive RAIDs would perform as well as or better than a monolithic
> 16-drive RAID when used through GlusterFS. My thought was yes, since
> we would have two machines with lots of RAM instead of a single CPU,
> and also double the network bandwidth. The cost of buying two 8-drive
> machines is also less than that of a single 16-drive machine.
>
> Anyone out there with previous experience with something like this
> that can point me in a direction?
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>