[Gluster-devel] Is it possible to setup a RAID 6 using GlusterFS?

David Coulson david at davidcoulson.net
Sat Apr 7 13:35:16 UTC 2012


If you are using distributed-replicate, the maximum file size is the 
capacity available on an individual brick (16 GB in this case). If you 
use stripe-replicate, the file is split across the bricks. I believe 
stripe-replicate is only available in the 3.3 beta/RC at this time.
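If you do want to try stripe-replicate on 3.3, the create command takes 
both a stripe count and a replica count. As a rough, untested sketch 
against your 12-brick layout (the volume name and the stripe count of 2 
are only examples; brick ordering decides how the stripe and replica 
sets are formed, so verify with 'gluster volume info' afterwards):

# gluster volume create striped-volume stripe 2 replica 3 transport tcp \
node1:/data/exp1 node2:/data/exp1 node3:/data/exp1 \
node1:/data/exp2 node2:/data/exp2 node4:/data/exp1 \
node1:/data/exp3 node3:/data/exp2 node4:/data/exp2 \
node2:/data/exp3 node3:/data/exp3 node4:/data/exp3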

Your volume size will always be 64 GB, unless you had 3 nodes fail at 
the same time (at which point some data would be unavailable). Your 
total usable capacity is one third of the combined capacity of all 
your bricks because of the replica-3 configuration.
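To make the 64 GB figure concrete (assuming, as I read your numbers, 
that each of the 12 bricks is ~16 GB):

  12 bricks x 16 GB       = ~192 GB raw
  192 GB / 3 (replica 3)  = ~64 GB usable, which is what df reports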

On 4/7/12 9:27 AM, Pascal wrote:
> Hello David,
>
> thank you for explaining it.
>
> To sum it up: Your suggestion works like a charm ;)
>
> I still have a follow-up question. I expected the available storage
> capacity to be half of the total storage capacity provided by all brick
> partitions/volumes.
>
> In my test environment each of the four bricks provides ~16 GB, and if
> two nodes could fail at the same time, only ~32 GB would be left.
>
> So I used the df command on a client ...
> # df -h /mnt/gluster-volume
> ... and it shows me that ~64 GB are available.
>
> I am not able to store files totalling 64 GB on that volume, am I?
> Can someone explain the result of df to me?
>
>
> ----------
>
>
> I am not sure if it is the usual way to write down "solutions" on the
> mailing list, but I think it could be helpful for other people.
>
> node1
> - /data
>    - /exp1
>    - /exp2
>    - /exp3
>
> node2
> - /data
>    - /exp1
>    - /exp2
>    - /exp3
>
> node3
> - /data
>    - /exp1
>    - /exp2
>    - /exp3
>
> node4
> - /data
>    - /exp1
>    - /exp2
>    - /exp3
>
> # gluster volume create gluster-volume replica 3 transport tcp \
> node1:/data/exp1 node2:/data/exp1 node3:/data/exp1 \
> node1:/data/exp2 node2:/data/exp2 node4:/data/exp1 \
> node1:/data/exp3 node3:/data/exp2 node4:/data/exp2 \
> node2:/data/exp3 node3:/data/exp3 node4:/data/exp3
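>
> As far as I understand, consecutive groups of three bricks in the
> order above form the replica sets, so each set spans three different
> nodes. The grouping can be double-checked after creation with:
>
> # gluster volume info gluster-volume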
>
>
> On Fri, 06 Apr 2012 21:04:16 -0400,
> David Coulson <david at davidcoulson.net> wrote:
>
>> You need 12 bricks across 4 nodes, in 'replica 3' groups. This
>> would allow you to lose two nodes and still have access to all your
>> data, as each replica group spans 3 of your 4 nodes.
>>
>> You will need to be deliberate about which 3-way groups end up on
>> each node so you have appropriate redundancy (e.g. group one uses
>> nodes 1,2,3; group two uses 1,3,4; group three uses 2,3,4; group
>> four uses 1,2,4).
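>>
>> One layout that realizes those groups (the brick paths are only an
>> example; what matters is that each three-brick set spans three
>> different nodes):
>>
>>   set 1: node1:/data/exp1  node2:/data/exp1  node3:/data/exp1  (nodes 1,2,3)
>>   set 2: node1:/data/exp2  node3:/data/exp2  node4:/data/exp1  (nodes 1,3,4)
>>   set 3: node2:/data/exp2  node3:/data/exp3  node4:/data/exp2  (nodes 2,3,4)
>>   set 4: node1:/data/exp3  node2:/data/exp3  node4:/data/exp3  (nodes 1,2,4)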
>>
>> On 4/6/12 8:06 PM, Pascal wrote:
>>> Hello David,
>>>
>>> I hope that you will read this, even though your post was written
>>> some days ago.
>>>
>>> I was trying to configure your suggestion "with a replica count of
>>> 3" and I wasn't able to do it.
>>>
>>>
>>> My original setup with four nodes: node1, node2, node3, node4.
>>>
>>> # gluster volume create gluster-storage replica 2 transport tcp
>>> ip-node1:/data ip-node2:/data ip-node3:/data ip-node4:/data
>>>
>>> The result:
>>> Node1 and node2 replicated the files between each other, and node3
>>> and node4 did the same. The files were then distributed between the
>>> replication group of node1 and node2 (group1) and the replication
>>> group of node3 and node4 (group2).
>>>
>>> The problem:
>>> Two hard drives could fail at the same time, but only one hard
>>> drive from each replication group. My aim is to achieve a setup
>>> where any two hard drives could fail.
>>>
>>>
>>> Trying to setup a replica count of 3 with my four nodes:
>>>
>>> # gluster volume create gluster-storage replica 3 transport tcp
>>> ip-node1:/data ip-node2:/data ip-node3:/data ip-node4:/data
>>>> number of bricks is not a multiple of replica count
>>> This means to me that I would need six nodes/bricks, which would
>>> lead me to the same situation as before: node1, node2 and node3
>>> would form one replication group, node4, node5 and node6 would form
>>> the other, and both groups together would store all the data.
>>> I would still have the problem that two hard drives from one
>>> replication group must not fail at the same time.
>>>
>>>
>>> Did I misunderstand your idea of a "replica count of 3"? Would you
>>> be so kind as to explain it to me?
>>>
>>> Thanks in advance!
>>>
>>> Pascal
>>>
>>>
>>> On Thu, 29 Mar 2012 10:47:38 -0400,
>>> David Coulson <david at davidcoulson.net> wrote:
>>>
>>>> Try doing a distributed replica with a replica count of 3. It is
>>>> not really comparable to RAID 6, but you can have two nodes fail
>>>> without an outage.
>>>>
>>>> http://download.gluster.com/pub/gluster/glusterfs/3.2/Documentation/AG/html/sect-Administration_Guide--Setting_Volumes-Distributed_Replicated.html
>>>>
>>>> On 3/29/12 10:39 AM, Pascal wrote:
>>>>> Hello everyone,
>>>>>
>>>>> I would like to know whether it is possible to set up a GlusterFS
>>>>> installation that is comparable to a RAID 6. I did some research
>>>>> in the community and several mailing lists, and all I could find
>>>>> was a similar request from 2009
>>>>> (http://gluster.org/pipermail/gluster-users/2009-May/002208.html,
>>>>> http://www.gluster.org/community/documentation/index.ph/Talk:GlusterFS_Roadmap_Suggestions).
>>>>>
>>>>> I would simply like a scenario in which two GlusterFS
>>>>> nodes/servers, or rather their hard drives, could fail at the
>>>>> same time.
>>>>>
>>>>> Thanks in advance!
>>>>> Pascal
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at nongnu.org
> https://lists.nongnu.org/mailman/listinfo/gluster-devel



