[Gluster-users] Bricks suggestions

John Jolet jjolet at drillinginfo.com
Sun Apr 29 21:29:59 UTC 2012


I'm doing software RAID-0 with a Gluster volume at replica 2 across 2 nodes (essentially getting RAID 10, I hope).  The OS will monitor the software RAID and email root when it becomes degraded.  Then I'll take the whole NODE out of the volume, fix the software RAID, and bring it back in.  That's the plan.
I haven't tested it yet.
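
Roughly what I have in mind (untested; hostnames, device names and paths below
are just examples):

    # on each node: software RAID-0 across the local disks, one XFS brick on top
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]
    mkfs.xfs /dev/md0
    mount /dev/md0 /export/brick0

    # replica 2 across the two nodes; RAID-0 locally plus replica 2 between
    # the nodes is where the "RAID 10" effect comes from
    gluster volume create myvol replica 2 node1:/export/brick0 node2:/export/brick0
    gluster volume start myvol

    # have mdadm mail root when an array goes degraded
    mdadm --monitor --scan --mail=root --daemonise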

On Apr 29, 2012, at 4:18 PM, Brian Candler wrote:

> On Sat, Apr 28, 2012 at 11:25:30PM +0200, Gandalf Corvotempesta wrote:
>>   I'm also considering no raid at all.
>> 
>>   For example, with 2 servers and 8 SATA disks each, I can create a single
>>   XFS filesystem on every disk and then create a replicated brick for
>>   each.
>> 
>>   For example:
>> 
>>   server1:brick1 => server2:brick1
>> 
>>   server1:brick2 => server2:brick2
>> 
>>   and so on.
>> 
>>   After that, I can use these bricks to create a distributed volume.
>> 
>>   In case of a disk failure, I would have to heal only one disk at a time
>>   and not the whole volume, right?
> 
> Yes. I considered that too. What you have to weigh it up against is the
> management overhead:
> 
> - recognising a failed disk
> - replacing a failed disk (which involves creating a new XFS filesystem
>  and mounting it at the right place)
> - forcing a self-heal
> 
> Whereas detecting a failed RAID disk is straightforward, and so is swapping
> it out.
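
For reference, the per-disk layout described above would look something like
this (hostnames and brick paths are just placeholders, and I haven't tried it):

    # on each server: one XFS filesystem per disk, each mounted as its own brick
    mkfs.xfs /dev/sdb && mount /dev/sdb /export/brick1
    mkfs.xfs /dev/sdc && mount /dev/sdc /export/brick2
    # ...and so on for the remaining disks

    # brick order matters: each adjacent server1/server2 pair becomes one
    # replica set, and gluster then distributes files across the pairs;
    # showing the first four of the eight disks here
    gluster volume create myvol replica 2 \
        server1:/export/brick1 server2:/export/brick1 \
        server1:/export/brick2 server2:/export/brick2 \
        server1:/export/brick3 server2:/export/brick3 \
        server1:/export/brick4 server2:/export/brick4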
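
And the disk-replacement steps Brian lists would be roughly this (the exact
heal command depends on the gluster version, so treat it as a sketch):

    # after physically swapping the failed disk on, say, server1:
    mkfs.xfs /dev/sdb
    mount /dev/sdb /export/brick1

    # then force a self-heal so the empty brick is repopulated from server2;
    # on newer releases:
    gluster volume heal myvol full
    # on older ones, a recursive stat from a client mount triggers the same:
    find /mnt/glusterfs -noleaf -print0 | xargs --null stat > /dev/null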



