[Gluster-users] Gluster = RAID 10 over the network?

Ryan Nix ryan.nix at gmail.com
Sun Sep 21 12:24:13 UTC 2014


Hi All,

So my boss and I decided to make a good-sized investment in a Gluster
cluster.  I'm super excited, and I will be taking a Red Hat Storage class
soon.

However, we're debating the hardware configuration we intend to purchase.
We agree that configuring each of the four bricks/nodes we're buying as
RAID 10 will help us sleep at night, but to me it seems like such an
unfortunate waste of disk space.  Our graduate and PhD students work with
lots of video, and they filled up our proof-of-concept 4 TB ownCloud/Gluster
setup in under two months.

I stumbled upon Howtoforge's Gluster setup guide from two years ago, and I'm
wondering if it is correct and/or still relevant:

http://bit.ly/1qkLoVe

*This tutorial shows how to combine four single storage servers (running
Ubuntu 12.10) into one distributed replicated storage volume with GlusterFS
<http://www.gluster.org/>. Nodes 1 and 2 (replication1) as well as 3 and 4
(replication2) will mirror each other, and replication1 and replication2
will be combined into one larger storage server (distribution). Basically,
this is RAID 10 over the network. If you lose one server from replication1
and one from replication2, the distributed volume continues to work. The
client system (Ubuntu 12.10 as well) will be able to access the storage as
if it were a local filesystem.*
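
For concreteness, here's my rough sketch of what I understand that setup
boils down to (the hostnames server1-server4, the brick path /data/brick1,
the volume name gv0, and the mount point are placeholders I made up, not
our real layout) -- please correct me if I've misread the guide:

    # From server1, probe the other peers to form the trusted pool
    gluster peer probe server2
    gluster peer probe server3
    gluster peer probe server4

    # replica 2 with four bricks: bricks are paired in the order listed,
    # so server1+server2 mirror each other (replica set 1) and
    # server3+server4 mirror each other (replica set 2), with files
    # distributed across the two pairs -- "RAID 10 over the network"
    gluster volume create gv0 replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1
    gluster volume start gv0

    # On the client, mount the volume like a local filesystem
    mount -t glusterfs server1:/gv0 /mnt/gluster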

The vendor we have chosen, System 76, offers either RAID 5 or RAID 10 in
each server.  Does anyone have insights or opinions on this?  It seems to
me that RAID 5, plus some kind of drive monitoring (opinions also welcome,
please), would be sufficient given the inherent redundancy of Gluster's
distributed replicated setup.  RAID 5 at System 76 allows us to max out at
42 TB of usable space; RAID 10 makes it 24 TB usable.
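
If those capacity figures are per server (my assumption; apologies if I've
got that wrong), the back-of-the-envelope math for a replica-2 volume
across four nodes works out to:

    usable = (number of nodes / replica count) x per-node capacity
    RAID 5:  (4 / 2) x 42 TB = 84 TB usable
    RAID 10: (4 / 2) x 24 TB = 48 TB usable

so the RAID 5 build would give us nearly twice the cluster-wide capacity,
at the cost of weaker protection against disk failure within each node.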

Again, I'd love to hear any thoughts.  To me, RAID 5 with Gluster in a
distributed replicated setup should be sufficient and still help us sleep
well each night.  :)

Thanks in advance!

Ryan