[Gluster-users] volume sizes

Liam Slusser lslusser at gmail.com
Wed Dec 30 11:46:03 UTC 2009


We have a very similar setup.  We have a 6 x 24-bay Gluster cluster
with 36TB of raw storage per node.  We use 3ware RAID cards with RAID 6
across all 24 drives, giving roughly 32TB usable per node.  We have our
Gluster cluster set up like RAID 10: three nodes striped together and
then mirrored to the other three nodes.  Performance is very good, and
so is the reliability, which was more important to us than performance.
I thought about breaking it into smaller pieces, but that gets
complicated very quickly, so I went with the simpler-is-better setup.
We also grow by about 1TB of data a week, so I have to add one or two
nodes a year, which is a huge pain in the butt since Gluster doesn't
make it very easy to do (i.e. building the directory structure on each
new node).  Doing an ls -agl on the root of our cluster takes well over
a week - we have around 50+ million files in there.
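
In case it helps to visualize that layout: below is a rough sketch of
what a "RAID 10"-style client volfile looks like in the old translator
syntax.  The host names (node1-node6) and brick names are placeholders,
all the performance translators are left out, and depending on how you
want the top layer you could use cluster/distribute (shown here) or
cluster/stripe.

  volume node1
    type protocol/client
    option transport-type tcp
    option remote-host node1
    option remote-subvolume brick
  end-volume

  # ... node2 through node6 are defined the same way ...

  volume mirror-0
    type cluster/replicate
    subvolumes node1 node4
  end-volume

  volume mirror-1
    type cluster/replicate
    subvolumes node2 node5
  end-volume

  volume mirror-2
    type cluster/replicate
    subvolumes node3 node6
  end-volume

  volume dist
    type cluster/distribute
    subvolumes mirror-0 mirror-1 mirror-2
  end-volume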

The only downside is the rebuild time whenever we lose a drive.  The
3ware controller with such a large array takes about a week to rebuild
from any one drive failure.  Of course, with RAID 6 we can lose two
drives without any data loss.  Luckily we've never lost two or more
drives within the same week.  However, if we did for whatever reason
lose the whole array, we can always pull the data off the other mirror
node.  I very closely watch the SMART output of each drive and
proactively replace any drive that starts to show signs of failing or
read/write errors.
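
On the SMART monitoring - with drives behind a 3ware card you can't
just point smartctl at /dev/sdX; you have to go through the controller.
A rough sketch (this assumes a 9xxx-series card that shows up as
/dev/twa0 and 24 ports; older 3ware cards appear as /dev/twe0, so
adjust to match your hardware):

  for port in $(seq 0 23); do
    echo "== port $port =="
    smartctl -a -d 3ware,$port /dev/twa0 | \
      egrep 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'
  done

Non-zero or steadily rising values in those counters are the kind of
early warning I act on before the controller drops the drive itself.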

I also have a smaller cluster of 4 x 24-bay nodes with 36TB per node.
This array pushes well over 500Mbit/s of traffic almost 24/7 with
almost zero issues.  I've been very happy with how well it performs.  I
do notice that during an array rebuild after a failed drive the I/O
wait time on the server is a bit higher, but overall it does very well.
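
If you want to see what a rebuild does to a box yourself, plain iostat
from the sysstat package is enough; something like

  # extended per-device stats plus CPU %iowait, refreshed every 5 seconds
  iostat -x 5

will show the per-device utilization and iowait climbing while the
controller rebuilds (the exact columns vary a bit between sysstat
versions).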

If you would like more information on my setup or the hardware/software
I run, please feel free to contact me privately.

thanks,
liam


On Tue, Dec 29, 2009 at 1:54 PM, Anthony Goddard <agoddard at mbl.edu> wrote:
> First post!
> We're looking at setting up 6x 24-bay storage servers (36TB of JBOD storage per node) and running GlusterFS over this cluster.
> We have RAID cards in these boxes and are trying to decide what the best size for each volume should be. For example, if we present the OSes (and Gluster) with six 36TB volumes, I imagine rebuilding one node would take a long time, and there may be other performance implications. On the other hand, if we present Gluster / the OSes with 6x 6TB volumes on each node, we might have more trouble managing a larger number of volumes.
>
> My gut tells me a lot of small (if you can call 6TB small) volumes will be lower risk and offer faster rebuilds from a failure, though I don't know what the pros and cons of these two approaches might be.
>
> Any advice would be much appreciated!
>
>
> Cheers,
> Anthony
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
