[Gluster-users] 90 Brick/Server suggestions?

Alvin Starr alvin at netvel.net
Fri Feb 17 19:05:00 UTC 2017


On 02/17/2017 10:13 AM, Gambit15 wrote:
>
>     RAID is not an option, JBOD with EC will be used.
>
>
> Any particular reason for this, other than maximising space by 
> avoiding two layers of RAID/redundancy?
> Local RAID would be far simpler & quicker for replacing failed drives, 
> and it would greatly reduce the number of bricks & load on Gluster.
>
> We use RAID volumes for our bricks, and the benefits of simplified 
> management far outweigh the costs of a little lost capacity.
>
> D

This is as much of a question as a comment.

My impression is that distributed filesystems like Gluster shine where 
the number of bricks is close to the number of servers and both of those 
numbers are as large as possible.
So the ideal solution would be 90 disks as 90 bricks on 90 servers.

This would be hard to do in practice, but the point of Gluster is to 
spread the load and the potential failures over as large a surface as 
possible.
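
As a rough sketch of what that looks like on the command line (the 
hostnames, brick paths and the 4+2 erasure-coding numbers below are 
made up purely for illustration), an EC volume built from one JBOD disk 
per server would be created along these lines:

    # one brick per server, dispersed 4+2 (any 2 bricks/servers can fail)
    gluster volume create ecvol disperse 6 redundancy 2 \
        server{1..6}:/bricks/disk1/brick
    gluster volume start ecvol

With 90 bricks you would end up with several of those disperse sets 
grouped into one distributed-disperse volume.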

Putting all the disks into a big RAID array and then just duplicating 
that for redundancy is not much better than using something like DRBD, 
which would likely perform faster but be less scalable.
In the end, with big RAID arrays and fewer servers you have a smaller 
surface to absorb failures.
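
For contrast, the RAID-backed layout boils down to a couple of huge 
bricks (again, the names and paths here are invented):

    # one big RAID6-backed brick per server, mirrored across two servers
    gluster volume create raidvol replica 2 \
        serverA:/bricks/raid6/brick serverB:/bricks/raid6/brick

which gives a failure only two places to land (and, as far as I know, 
replica 2 setups are usually steered toward replica 3 or an arbiter to 
avoid split-brain anyway).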

Over the years I have seen RAID systems fail because users put them in, 
forgot about them, and then hit system failures because they never 
monitored the RAID arrays.
I would be willing to bet that 80%+ of all the RAID arrays out there are 
not monitored.
Gluster is more in-your-face about failures and arguably should be more 
reliable in practice because you will know about a failure quickly.
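
That visibility is only a couple of CLI calls away (the volume name 
here is just a placeholder):

    gluster volume status myvol       # shows bricks that are offline
    gluster volume heal myvol info    # lists entries still waiting to heal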

  Feel free to correct my misconceptions.

-- 
Alvin Starr                   ||   voice: (905)513-7688
Netvel Inc.                   ||   Cell:  (416)806-0133
alvin at netvel.net              ||
