[Gluster-users] RAID on GLUSTER node
Mathieu Chateau
mathieu.chateau at lotp.fr
Mon Jan 18 06:56:54 UTC 2016
Hello,
You should also check how the 96TB volume performs when it is degraded
(remove one disk) and while it is rebuilding (after putting the disk back).
You can reduce the performance impact by lowering the rebuild rate, but the
rebuild will then take longer to complete (during which another disk may fail).
Also reboot one node while copying files, to make sure self-healing can
handle your file count.
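
As a rough illustration (assuming Linux md software RAID and a replicated
volume called "datavol"; adjust the names to your setup, and note that a
hardware RAID controller would use its vendor tool instead of mdadm/sysctl):

    # watch rebuild progress after re-adding the disk
    cat /proc/mdstat

    # trade rebuild speed against client I/O impact (values in KB/s, just examples)
    sysctl -w dev.raid.speed_limit_min=10000
    sysctl -w dev.raid.speed_limit_max=50000

    # after rebooting one node during a copy, check the self-heal backlog
    gluster volume heal datavol info
    gluster volume heal datavol statistics heal-count
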
Best regards,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-18 4:21 GMT+01:00 Pawan Devaiah <pawan.devaiah at gmail.com>:
> Hi Guys,
>
> Sorry, I was busy setting up those 2 machines for GlusterFS.
> So my machines now have:
> 32 GB memory
> 7 TB RAID 10 storage
> 94 TB RAID 6 storage
>
> I have set up the initial bricks, and the cluster is formed and working well,
> although I haven't checked from the client side yet.
>
> I just wanted to ask what the ideal size of a brick should be. For now I
> have made the entire 7 TB volume one brick; is that OK?
> Is there a best practice for brick size?
>
> @Pranith: Yes, for now I want to test this system.
>
> @Mathieu: I also think it will perform better with RAID 10 for read/write
> intensive workloads.
>
> Cheers,
> Dev
>
> On Wed, Jan 13, 2016 at 9:05 PM, Mathieu Chateau <mathieu.chateau at lotp.fr>
> wrote:
>
>> RAID 10 provides the best performance (much better than RAID 6).
>>
>> Best regards,
>> Mathieu CHATEAU
>> http://www.lotp.fr
>>
>> 2016-01-13 5:35 GMT+01:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
>>
>>> +gluster-users
>>>
>>> On 01/13/2016 09:44 AM, Pawan Devaiah wrote:
>>>
>>> We would be looking for redundancy, so replicated volumes, I guess.
>>>
>>> If replication is going to be there, why additional RAID 10? You can do
>>> just RAID 6; it saves space, and replication in GlusterFS will give
>>> redundancy anyway.
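>>>
>>> As a rough sketch of that layout (hypothetical host and brick names, and
>>> assuming the RAID 6 array is already mounted at /data/raid6 on both nodes),
>>> a two-node replicated volume would look something like:
>>>
>>>     gluster peer probe server2
>>>     gluster volume create datavol replica 2 \
>>>         server1:/data/raid6/brick1 server2:/data/raid6/brick1
>>>     gluster volume start datavol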
>>>
>>> Pranith
>>>
>>>
>>> Thanks,
>>> Dev
>>>
>>> On Wed, Jan 13, 2016 at 5:07 PM, Pranith Kumar Karampuri <
>>> pkarampu at redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 01/13/2016 02:21 AM, Pawan Devaiah wrote:
>>>>
>>>> Thanks for the response Pranith
>>>>
>>>> If we take EC out of the equation and say I go with RAID on the
>>>> physical disks, do you think GlusterFS is a good fit for the two workloads
>>>> that I mentioned before?
>>>>
>>>> Basically it is going to be NFS storage for VMs and data, but with
>>>> different RAID levels: 10 for the VMs and 6 for the data.
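>>>>
>>>> A sketch of how that could look from the client side (hypothetical volume
>>>> and host names; Gluster's built-in NFS server speaks NFSv3):
>>>>
>>>>     # one volume per RAID set, e.g. "vmvol" on RAID 10 and "datavol" on RAID 6
>>>>     mount -t nfs -o vers=3 server1:/vmvol /mnt/vm
>>>>     mount -t nfs -o vers=3 server1:/datavol /mnt/data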
>>>>
>>>> What kind of volume will you be using with these disks?
>>>>
>>>> Pranith
>>>>
>>>> Thanks
>>>> Dev
>>>>
>>>> On Tue, Jan 12, 2016 at 9:46 PM, Pranith Kumar Karampuri <
>>>> pkarampu at redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On 01/12/2016 01:26 PM, Pawan Devaiah wrote:
>>>>>
>>>>> Thanks for your response Pranith and Mathieu,
>>>>>
>>>>> Pranith: To answer your question, I am planning to use this storage
>>>>> for two main workloads.
>>>>>
>>>>> 1. As shared storage for VMs.
>>>>>
>>>>> EC as it is today is not good for this.
>>>>>
>>>>> 2. As NFS storage for files.
>>>>>
>>>>> If the above is for storing archive data, EC is nice here.
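>>>>>
>>>>> If the archive data does go on EC, a dispersed volume is created roughly
>>>>> like this (just a sketch with hypothetical names; a 4+2 layout needs six
>>>>> bricks, ideally on six separate nodes so that losing one node stays
>>>>> within the redundancy):
>>>>>
>>>>>     gluster volume create archive disperse 6 redundancy 2 \
>>>>>         node{1..6}:/bricks/archive
>>>>>     gluster volume start archive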
>>>>>
>>>>> Pranith
>>>>>
>>>>>
>>>>> We are an online backup company, so we store a few hundred terabytes of
>>>>> data.
>>>>>
>>>>>
>>>>> Mathieu: I appreciate your concern; however, as system admins we
>>>>> sometimes get paranoid and try to control everything under the sun.
>>>>> I know I can only control what I can.
>>>>>
>>>>> Having said that: no, I have a pair of servers to start with, so at the
>>>>> moment I am just evaluating and preparing a proof of concept, after which
>>>>> I will propose it to my management; if they are happy, then we will
>>>>> proceed further.
>>>>>
>>>>> Regards,
>>>>> Dev
>>>>>
>>>>> On Tue, Jan 12, 2016 at 7:30 PM, Mathieu Chateau <
>>>>> mathieu.chateau at lotp.fr> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> For any system, 36 disks raise the probability of a disk failure. Do you
>>>>>> plan to run GlusterFS with only one server?
>>>>>>
>>>>>> You should think about failure at each level and be prepared for it:
>>>>>>
>>>>>> - Motherboard failure (full server down)
>>>>>> - Disks failure
>>>>>> - Network cable failure
>>>>>> - File system corruption (time needed for fsck)
>>>>>> - File/folder removed by mistake (backup)
>>>>>>
>>>>>> Whether or not to use RAID depends on your answers to these questions and
>>>>>> on the performance you need.
>>>>>> It also depends on how "good" the RAID controller in your server is, e.g.
>>>>>> whether it has a battery and 1 GB of cache.
>>>>>>
>>>>>> When many disks are bought at the same time (one order, serial numbers
>>>>>> close to each other), they may fail close together in time (if something
>>>>>> bad happened at the factory).
>>>>>> I have already seen 3 disks fail within a few days.
>>>>>>
>>>>>> just my 2 cents,
>>>>>>
>>>>>>
>>>>>>
>>>>>> Best regards,
>>>>>> Mathieu CHATEAU
>>>>>> http://www.lotp.fr
>>>>>>
>>>>>> 2016-01-12 4:36 GMT+01:00 Pranith Kumar Karampuri <
>>>>>> pkarampu at redhat.com>:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 01/12/2016 04:34 AM, Pawan Devaiah wrote:
>>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> We have a fairly powerful server sitting at the office with 128 GB of RAM
>>>>>>> and 36 x 4 TB drives. I am planning to use this server as backend
>>>>>>> storage with GlusterFS on it.
>>>>>>> I have been doing a lot of reading on GlusterFS, but I do not see any
>>>>>>> definite recommendation on having RAID on Gluster nodes.
>>>>>>> Is it recommended to have RAID on Gluster nodes, especially for the
>>>>>>> bricks?
>>>>>>> If yes, is that not contrary to the erasure coding recently implemented
>>>>>>> in Gluster, or is that still not ready for production environments?
>>>>>>> I am happy to implement RAID, but my two main concerns are:
>>>>>>> 1. I want to make most of the disk space available.
>>>>>>> 2. I am also concerned about the rebuild time after a disk failure on
>>>>>>> the RAID.
>>>>>>>
>>>>>>> What is the workload you have?
>>>>>>>
>>>>>>> We found in our testing that random read/write workloads on erasure-coded
>>>>>>> volumes are not as good as what we get with replication. There are
>>>>>>> enhancements in progress at the moment to address this, which we have yet
>>>>>>> to merge and re-test.
>>>>>>>
>>>>>>> Pranith
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>> Dev
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>