[Gluster-users] RAID on GLUSTER node
Pranith Kumar Karampuri
pkarampu at redhat.com
Wed Jan 13 04:35:43 UTC 2016
+gluster-users
On 01/13/2016 09:44 AM, Pawan Devaiah wrote:
> We would be looking for redundancy, so replicated volumes, I guess.
If replication is going to be there, why add RAID10 as well? You can
do just RAID6; it saves space, and replication in GlusterFS will give
redundancy anyway.
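
For example, a minimal sketch (assuming two nodes, server1 and
server2, with each node's RAID6 array mounted at /data/brick1; all
names are placeholders):

    # From server1: add the second node to the pool, then create a
    # replica-2 volume with one brick per node, each backed by RAID6.
    gluster peer probe server2
    gluster volume create vmstore replica 2 \
        server1:/data/brick1/vmstore \
        server2:/data/brick1/vmstore
    gluster volume start vmstore

    # Clients can then mount it over Gluster's built-in NFSv3 server:
    mount -t nfs -o vers=3 server1:/vmstore /mnt/vmstore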
Pranith
>
> Thanks,
> Dev
>
> On Wed, Jan 13, 2016 at 5:07 PM, Pranith Kumar Karampuri
> <pkarampu at redhat.com> wrote:
>
>
>
> On 01/13/2016 02:21 AM, Pawan Devaiah wrote:
>> Thanks for the response, Pranith.
>>
>> If we take EC out of the equation and say I go with RAID on the
>> physical disks, do you think GlusterFS is good for the two
>> workloads that I mentioned before?
>>
>> Basically it is going to be NFS storage for VMs and data, but
>> on different RAID levels: RAID10 for VMs and RAID6 for data.
> What kind of volume will you be using with these disks?
>
> Pranith
>
>> Thanks
>> Dev
>>
>> On Tue, Jan 12, 2016 at 9:46 PM, Pranith Kumar Karampuri
>> <pkarampu at redhat.com> wrote:
>>
>>
>>
>> On 01/12/2016 01:26 PM, Pawan Devaiah wrote:
>>> Thanks for your response, Pranith and Mathieu,
>>>
>>> Pranith: To answer your question, I am planning to use this
>>> storage for two main workloads.
>>>
>>> 1. As shared storage for VMs.
>> EC as it is today is not good for this.
>>> 2. As NFS storage for files.
>> If the above is for storing archive data, EC is nice here.
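>>
>> As a rough sketch, a dispersed (EC) volume for the archive side
>> could look like this (the six hosts and the 4+2 layout below are
>> only an example, not a recommendation):
>>
>>     # 4+2 dispersed volume: usable capacity of 4 bricks,
>>     # survives the loss of any 2 bricks.
>>     gluster volume create archive disperse 6 redundancy 2 \
>>         host{1..6}:/data/brick1/archive
>>     gluster volume start archive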
>>
>> Pranith
>>
>>>
>>> We are an online backup company, so we store a few hundred
>>> terabytes of data.
>>>
>>>
>>> Mathieu: I appreciate your concern; however, as system
>>> admins we sometimes get paranoid and try to control
>>> everything under the sun.
>>> I know I can only control what I can.
>>>
>>> Having said that: no, I have a pair of servers to start with,
>>> so at the moment I am just evaluating and preparing for a
>>> proof of concept, after which I will propose it to my
>>> management; if they are happy, then we will proceed further.
>>>
>>> Regards,
>>> Dev
>>>
>>> On Tue, Jan 12, 2016 at 7:30 PM, Mathieu Chateau
>>> <mathieu.chateau at lotp.fr> wrote:
>>>
>>> Hello,
>>>
>>> For any system, 36 disks raise the probability of disk failure.
>>> Do you plan to run GlusterFS with only one server?
>>>
>>> You should think about failure at each level and be
>>> prepared for it:
>>>
>>> * Motherboard failure (full server down)
>>> * Disk failures
>>> * Network cable failure
>>> * File system corruption (time needed for fsck)
>>> * File/folder removed by mistake (backup)
>>>
>>> Whether or not to use RAID depends on your answers
>>> to these questions and on the performance you need.
>>> It also depends on how good the RAID controller in your
>>> server is, e.g. whether it has a battery and 1 GB of cache.
>>>
>>> When many disks are bought at the same time (one order,
>>> serial numbers close to each other), they may fail close
>>> together in time (if something bad happened at the factory).
>>> I have already seen three disks fail within a few days.
>>>
>>> just my 2 cents,
>>>
>>>
>>>
>>> Regards,
>>> Mathieu CHATEAU
>>> http://www.lotp.fr
>>>
>>> 2016-01-12 4:36 GMT+01:00 Pranith Kumar Karampuri
>>> <pkarampu at redhat.com>:
>>>
>>>
>>>
>>> On 01/12/2016 04:34 AM, Pawan Devaiah wrote:
>>>> Hi All,
>>>>
>>>> We have a fairly powerful server sitting at the office
>>>> with 128 GB of RAM and 36 x 4 TB drives. I am
>>>> planning to use this server as backend
>>>> storage with GlusterFS on it.
>>>> I have been doing a lot of reading on GlusterFS, but
>>>> I do not see any definite recommendation on having
>>>> RAID on Gluster nodes.
>>>> Is it recommended to have RAID on Gluster nodes,
>>>> especially for the bricks?
>>>> If yes, is that not contrary to the erasure coding
>>>> recently implemented in Gluster, or is that still not
>>>> ready for production environments?
>>>> I am happy to implement RAID, but my two main
>>>> concerns are:
>>>> 1. I want to make most of the disk space available.
>>>> 2. I am also concerned about the rebuild time after a
>>>> disk failure on the RAID.
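>>>> (Back of the envelope, assuming a sustained rebuild rate of
>>>> about 100 MB/s: 4 TB / 100 MB/s = ~40,000 s, i.e. roughly 11
>>>> hours per failed drive at best, and usually much longer on an
>>>> array that is serving live I/O at the same time.)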
>>> What is the workload you have?
>>>
>>> We found in our testing that random read/write
>>> workloads on erasure coded volumes do not perform
>>> as well as they do with replication. There are
>>> enhancements in progress at the moment to address
>>> this, which we have yet to merge and re-test.
>>>
>>> Pranith
>>>>
>>>> Thanks
>>>> Dev