[Gluster-users] single or multiple bricks per server
Anand Avati
anand.avati at gmail.com
Fri Jul 1 19:20:19 UTC 2011
>
>
> > I have 3 servers with 7 disks each. Is it better to use each
> > individual disk as a brick (several bricks per server)? Or to have
> > them as a RAID volume that presents as a single brick per server?
> >
> > I'm setting them up as a replicated volume across all three servers.
>
> In our environment (several hundred volumes spread over several servers,
> each with several bricks) we were told that a lot of our issues stemmed
> from the number of bricks per volume. We are now going to a max of 2
> bricks local to each node, using LVM. We are using 2 instead of 1 because
> we are separating them for replica pairing.
>
> -greg
>
> I'm hoping that someone at Gluster will step forward on this one because
> this just doesn't sound correct. I have two servers with 8x2TB drives
> each, and each drive is configured as a brick. I have set up replicated
> and distributed so that server 0/drive 0 is mirrored to server 1/drive 0 ...
> That's 8 bricks per server and I'm having no problems. That way if a
> drive goes bad, I replace and re-mirror from/to a single drive. If I
> use in-server RAID, I either lose space to RAID parity (RAID5/RAID6) or
> waste a lot of drive space (RAID-10) before I even get to replicate
> across the servers. If I used in-server RAID and lost a RAIDset, I
> would have to replicate the entire 16TB volume, which would take too
> long and be a performance hog. I decided to use replicate for
> mirroring and then use distribute to act as a striping mechanism to
> eliminate these two issues. Up for 18 months without any problems, so
> I'm pretty happy.
>
> I've also seen people post on this mailing list who had hundreds of
> bricks in a single volume, so I'm pretty sure that works. Can someone
> clarify that for us?
>
>
Having tens of bricks on a server should work fine. In Greg's case, the total
number of bricks was running into multiple thousands, causing processes to run
out of privileged ports (< 1024), since each brick connection binds to one of
those ports by default.
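
For reference, replica pairs are formed from consecutive bricks in the
order they are passed to 'gluster volume create'. A minimal sketch of the
two-server, eight-drive layout described above (volume name, hostnames
and brick paths are made up for illustration):

  # the Nth drive on server0 is mirrored to the Nth drive on server1
  gluster volume create datavol replica 2 transport tcp \
      server0:/export/d0 server1:/export/d0 \
      server0:/export/d1 server1:/export/d1 \
      server0:/export/d2 server1:/export/d2 \
      server0:/export/d3 server1:/export/d3 \
      server0:/export/d4 server1:/export/d4 \
      server0:/export/d5 server1:/export/d5 \
      server0:/export/d6 server1:/export/d6 \
      server0:/export/d7 server1:/export/d7

This gives a distribute-over-replicate volume in which each drive pair can
be replaced and re-mirrored on its own, as described above.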
Avati