[Gluster-users] RAID on GLUSTER node
Mathieu Chateau
mathieu.chateau at lotp.fr
Tue Jan 12 06:30:54 UTC 2016
Hello,
For any system, 36 disks raise the probability of a disk failure. Do you plan
to run GlusterFS with only one server?
You should think about failure at each level and be prepared for it (a small
example follows the list):
- Motherboard failure (full server down)
- Disk failure
- Network cable failure
- File system corruption (time needed for fsck)
- File/folder removed by mistake (need for backups)
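For the "full server down" case, GlusterFS itself can cover you if the bricks
are spread over several servers. A rough sketch, assuming three servers (srv1,
srv2, srv3 and the brick paths are placeholders for your own setup):

    gluster peer probe srv2
    gluster peer probe srv3
    # one brick per server; every file is stored on all three
    gluster volume create myvol replica 3 \
        srv1:/data/brick1 srv2:/data/brick1 srv3:/data/brick1
    gluster volume start myvol

With a single server there is no such protection: if the motherboard dies, the
whole volume is down.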
Whether or not to use RAID depends on your answers to these questions and on
the performance you need.
It also depends on how good the RAID controller in your server is, for example
whether it has a battery backup and 1GB of cache.
When many disks are bought at the same time (one order, serial numbers close to
each other), they may fail close together in time (if something went wrong at
the factory).
I have already seen something like 3 disks failing within a few days.
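If you decide against RAID, the alternative discussed in this thread is to use
each disk as a separate brick (JBOD) and let a dispersed (erasure-coded) volume
provide the redundancy. A rough sketch, assuming six servers and placeholder
brick paths; with disperse 6 redundancy 2, any 2 bricks out of each set of 6
can be lost without losing data:

    # erasure coding: data + parity spread over 6 bricks, 2 bricks of redundancy
    gluster volume create myvol disperse 6 redundancy 2 \
        srv1:/data/brick1 srv2:/data/brick1 srv3:/data/brick1 \
        srv4:/data/brick1 srv5:/data/brick1 srv6:/data/brick1
    gluster volume start myvol

Rebuild after a disk failure is then a Gluster self-heal of that one brick
instead of a RAID array rebuild, but note Pranith's comment below about random
read/write performance of erasure-coded volumes.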
just my 2 cents,
Cordialement,
Mathieu CHATEAU
http://www.lotp.fr
2016-01-12 4:36 GMT+01:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
>
>
> On 01/12/2016 04:34 AM, Pawan Devaiah wrote:
>
> Hi All,
>
> We have a fairly powerful server sitting at office with 128 Gig RAM and 36
> X 4 TB drives. I am planning to utilize this server as a backend storage
> with GlusterFS on it.
> I have been doing a lot of reading on GlusterFS, but I do not see any
> definite recommendation on having RAID on GLUSTER nodes.
> Is it recommended to have RAID on GLUSTER nodes, especially for the bricks?
> If yes, is it not contrary to the erasure coding recently implemented in
> Gluster, or is that still not ready for production environments?
> I am happy to implement RAID, but my two main concerns are:
> 1. I want to make the most of the disk space available.
> 2. I am concerned about the rebuild time after a disk failure on the
> RAID.
>
> What is the workload you have?
>
> We found in our testing that random read/write performance with erasure-coded
> volumes is not as good as what we get with replication. There are enhancements
> in progress at the moment to address this, which we have yet to merge
> and re-test.
>
> Pranith
>
>
> Thanks
> Dev
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>