[Gluster-users] RAID vs bare drive for bricks

Ben Turner bturner at redhat.com
Wed Sep 23 16:45:23 UTC 2015


----- Original Message -----
> From: "Gluster Admin" <gluster at hepcat.org>
> To: gluster-users at gluster.org
> Sent: Wednesday, September 23, 2015 11:48:46 AM
> Subject: [Gluster-users] RAID vs bare drive for bricks
> 
> 
> So most of the documentation I have read from both Red Hat and gluster.org
> seems to recommend using RAID on the servers for the bricks. This is a nice
> failsafe but obviously has capacity repercussions as you continue to scale
> out with more nodes and bricks.
> 
> With Gluster 3.7+, is it still recommended to use hardware RAID for the
> underlying disks of the bricks, or, especially in the case of replica 3,
> would it be better to have individual drives as bricks?

See the replica 3 section here:

https://videos.cdn.redhat.com/summit2015/presentations/13767_red-hat-gluster-storage-performance.pdf
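
For reference, a replica 3 volume built from individual drives as bricks would look something like this (hostnames, mount points and the volume name below are just placeholders, not anything from your setup); brick order matters, since each consecutive group of 3 bricks forms one replica set:

    # each drive is formatted (e.g. XFS) and mounted separately, one brick per drive
    gluster volume create testvol replica 3 \
        server1:/bricks/disk1/brick \
        server2:/bricks/disk1/brick \
        server3:/bricks/disk1/brick \
        server1:/bricks/disk2/brick \
        server2:/bricks/disk2/brick \
        server3:/bricks/disk2/brick
    gluster volume start testvol

With more drives per server you just keep appending bricks in groups of 3, spreading each group across servers (and ideally racks) so no replica set lives in a single failure domain.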


> 
> In the scenario of many servers and bricks, let's say 12 servers with 12
> drives each, three servers would hold a copy of the data on each brick, and
> assuming a multi-rack layout that would lead to a fairly well-distributed
> fault domain.
> 
> Am I missing something here? I can see hardware RAID for smaller
> implementations of a few servers, but it seems counterproductive for larger
> distributed-replicated setups.

The problem with JBOD vs RAID is that, without some kind of striping / sharding, a single client working on a single file will only see the performance of a single disk.  RAID aggregates the disks on the back end, so single-file performance can take advantage of all the disks instead of just 1.  JBOD is definitely the way of the future, and once sharding is production ready you should be able to take more advantage of your disks for single-file throughput.  Have a look at that slide deck I linked for the trade-offs and some perf data.
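
The idea behind sharding is that a large file gets split into fixed-size shards that DHT then spreads across replica sets, so a single file's I/O is no longer pinned to one disk.  On 3.7 it is enabled per volume with something like the following (the volume name and block size are just example values, tune for your workload):

    # enable sharding on an existing volume (3.7+, still experimental at this point)
    gluster volume set testvol features.shard on
    gluster volume set testvol features.shard-block-size 64MB

Keep in mind that in 3.7 sharding is mainly targeted at VM image / large-file workloads and isn't considered production ready yet, which is why RAID is still the conservative recommendation for single-file throughput today.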

-b 



> 
> thanks
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

