[Gluster-users] New GlusterFS Config with 6 x Dell R720xd's and 12x3TB storage

Brian Candler B.Candler at pobox.com
Sun Dec 2 19:02:49 UTC 2012


On Fri, Nov 30, 2012 at 07:21:54PM +0000, Mike Hanby wrote:
>    We have the following hardware that we are going to use for a GlusterFS
>    cluster.
> 
>    6 x Dell R720xd's (16 cores, 96G)

Heavily over-specified, especially the RAM. Having such large amounts of RAM
can even cause problems if you're not careful.  You probably want to use
sysctl and /etc/sysctl.conf to set

    vm.dirty_background_ratio=1
    vm.dirty_ratio=5   (or less)

so that dirty disk blocks are written to disk sooner; otherwise you may find
the system locks up for several minutes at a time while it flushes the
enormous disk cache.
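
For example, to apply both settings immediately and have them persist
across reboots (a minimal sketch, assuming the standard Linux locations):

    sysctl -w vm.dirty_background_ratio=1
    sysctl -w vm.dirty_ratio=5

    # then add the same two lines to /etc/sysctl.conf so they
    # survive a reboot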

I use 4 cores + 8GB RAM for brick servers with 24 disks, and they are never
CPU-bound.

>    I now need to decide how to configure the 12 x 3TB disks in each
>    server, followed by partitioning / formatting them in the OS.
> 
>    The PERC H710 supports RAID 0,1,5,6,10,50,60. Ideally we'd like to get
>    good performance, maximize storage capacity and still have parity :-)

For performance: RAID10
For maximum storage capacity: RAID5 or RAID6
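
For reference, with 12 x 3TB disks per server the raw usable capacity works
out roughly as follows (my arithmetic, assuming one array per server and
ignoring filesystem overhead):

    RAID10: (12/2) x 3TB = 18TB
    RAID6:  (12-2) x 3TB = 30TB
    RAID5:  (12-1) x 3TB = 33TB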

>    * Stripe Element Size: 64, 128, 256, 512KB, 1MB

Depends on workload. With RAID10 and lots of concurrent clients, I'd tend to
use a 1MB stripe element size. Then reads and writes by one client are likely
to land on different disks from reads and writes by another client, and
although throughput to any individual client will be similar to a single
disk, the total throughput is maximised.

If your accesses are mostly from a single client, you may not get enough
readahead to saturate the disks with such a large stripe size; and with
RAID5/6, writes may be slow unless the controller can write a full stripe at
a time (which a battery-backed write cache may make possible).  For these
scenarios something like 256KB may work better.  But you do need to test it.
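
To see why the element size matters for RAID5/6, here is the back-of-envelope
arithmetic (my numbers, assuming all 12 disks in a single RAID6 set, i.e. 10
data + 2 parity):

    full-stripe write with 256KB elements: 10 x 256KB = 2.5MB
    full-stripe write with 1MB elements:   10 x 1MB   = 10MB

Anything smaller than a full stripe forces a read-modify-write cycle, so the
larger the element size, the harder it is to avoid that penalty.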

Finally: you don't mention your network setup.

With 12 SATA disks, you can expect to get 25-100MB/sec *per disk* depending
on how sequential and how large the transfers are.  So your total disk
throughput is potentially 12 times that, i.e. 300-1200MB/sec.  The bottom
end of this range is easily achievable, and is already about 2.5 times the
capacity of a 1G link.  At the top end you could saturate a 10G link.
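
In rough figures (taking 1GbE as ~125MB/sec and 10GbE as ~1250MB/sec of
usable payload):

    12 disks x  25MB/sec =  300MB/sec  (~2.4 x one 1GbE link)
    12 disks x 100MB/sec = 1200MB/sec  (~ one saturated 10GbE link)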

So if you have only 1G networking, it's very likely going to be the
bottleneck.

Regards,

Brian.


