[Gluster-users] Three nodes cluster with 2 replicas

Justin Dossey jbd at podomatic.com
Thu Jan 2 17:17:15 UTC 2014


Since you don't have HW RAID, it might be wise to use software RAID
(md-RAID or LVM) to aggregate drives and then allocate two bricks per
server (a rough sketch follows below).  This gives you a nice even
number of bricks for use with your distributed-replica-2 setup.  Note
that if you were to lose one node, you would lose two bricks in this
setup!  It would be wise to allocate your storage in such a way that
minimizes the impact of losing an entire RAID volume.  You'll have to
weigh the risks against your need for storage space.  Also, run a few
failure simulations (example drills below) so that if you do lose a
node or brick down the road, you will know what to do.  If your bricks
are on the same RAID volume(s) as your OS, you'll also have to be able
to replace the OS and configuration quickly (from backup or using
automatic install and configuration) in order to recover from a major
failure.  Some people run their OSes on a separate RAID-1 array
(sometimes even off of USB drives!) in order to isolate brick failures
from the OS.
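
For illustration, a rough sketch of what that could look like (the
hostnames node1/node2/node3, device names, and mount points are
invented for the example; adjust them to your hardware, and this
assumes the three nodes are already peer-probed into the cluster).
Six data drives go into two 3-drive RAID-5 md arrays per server, one
array per brick, leaving the seventh drive for the OS or a hot spare:

    # On each server: two 3-drive RAID-5 arrays, formatted and mounted
    # as bricks (device names are examples only)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
    mkfs.xfs /dev/md0 && mkfs.xfs /dev/md1
    mkdir -p /export/brick1 /export/brick2
    mount /dev/md0 /export/brick1
    mount /dev/md1 /export/brick2

    # From any one node: a 3x2 distributed-replicate volume.  Brick
    # order matters: consecutive bricks form the replica pairs, so this
    # "chained" order keeps each pair on two different nodes.
    gluster volume create gv0 replica 2 \
        node1:/export/brick1 node2:/export/brick1 \
        node3:/export/brick1 node1:/export/brick2 \
        node2:/export/brick2 node3:/export/brick2
    gluster volume start gv0

With that chained layout, losing node1 takes out two bricks, but each
of them has its replica partner on a surviving node, so the volume
stays available while you rebuild.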
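
For the failure drills, the usual moves look something like this
(volume and brick names carried over from the sketch above; the exact
replace-brick syntax has changed between GlusterFS releases, so check
"gluster volume help" on your version first):

    # Simulate a brick failure: find the brick PID, kill it, restart
    # it, and watch self-heal catch up
    gluster volume status gv0
    kill <brick-pid>
    gluster volume start gv0 force
    gluster volume heal gv0 info

    # Swap a dead brick for a fresh one (the new path is hypothetical)
    gluster volume replace-brick gv0 \
        node1:/export/brick1 node1:/export/brick1-new commit force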

My opinion: 7 drives isn't very many for a storage node, and it will be
difficult to obtain an optimal configuration with two bricks per server.
You would be wise to purchase another server, or additional storage
chassis for your existing servers, in order to obtain a more optimal
configuration.


On Wed, Jan 1, 2014 at 1:06 PM, shacky <shacky83 at gmail.com> wrote:

> Hi.
>
> I have three servers with 7 hard drives each (without a HW RAID
> controller) that I wish to use to create a Gluster cluster.
>
> I am looking for a way to have 2 replicas with 3 nodes, because I need
> a lot of storage space and 2 nodes are not enough, but I wish to have
> the same security I'd have using RAID5 on a node.
> 
> So I wish my data to be protected if one (or two) of the 7 hard drives
> fail on the same node, and if one of the three nodes fails entirely.
>
> Is it possible?
> Could you help me to find out the correct way?
>
> Thank you very much!
> Bye.
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
Justin Dossey
CTO, PodOmatic