[Gluster-users] single or multiple bricks per server

Greg_Swift at aotx.uscourts.gov
Fri Jul 1 19:32:46 UTC 2011



gluster-users-bounces at gluster.org wrote on 07/01/2011 02:13:07 PM:

>
> > I have 3 servers with 7 disks each. Is it better to use each
> > individual disk as a brick (several bricks per server)?  Or to have
> > them as a RAID volume that presents as a single brick per server?
> >
> > I'm setting them up as a replicated volume across all three servers.
>
> In our environment (several hundred volumes over several servers that had
> several bricks) we were told that a lot of our issues stemmed from the
> number of bricks per volume.  We are now going to a max of 2 bricks local
> to each node, using lvm.  We are using 2 instead of 1 because we are
> separating for replica pairings.
>
> -greg
>
> I'm hoping that someone at Gluster will step forward on this one because
> this just doesn't sound correct.  I have two servers with 8x2TB drives
> each and each one is configured as a brick.  I have set up replicated
> and distributed so server 0/drive 0 is mirrored to server 1/drive 0 ...
> That's 8 bricks per server and I'm having no problems.  That way if a
> drive goes bad, I replace and re-mirror from/to a single drive.  If I
> use in-server RAID, I either lose space to RAID parity (RAID5/RAID6) or
> waste a lot of drive space (RAID-10) before I even get to replicate
> across the servers.  If I used in-server RAID and lost a RAIDset, I
> would have to replicate the entire 16TB volume which would take too
> long and be a performance hog.  I decided to use replicate for
> mirroring and then use distribute to act as a striping mechanism to
> eliminate these two issues.  Up for 18 months without any problems, so
> I'm pretty happy.
>
> I've also seen people post on this mailing list that had hundreds of
> bricks in a single volume, so I'm pretty sure that works.  Can someone
> clarify that for us?
>

I'm just passing along my experience... which came with support's guidance.
I don't think the number of servers is as much of a problem as volumes x
bricks per server.  And if you noticed, I did say several hundred volumes.

Let's do some math... (feel free to correct):

200 volumes x 8 bricks x 5 servers (the math I was working with)

That is 1600 processes per server.  Each brick process also listens on its
own port, which means a client using the glusterfs client and mounting every
volume has 8000 ports to reach in total (40 per volume).

1 volume x 8 bricks x 5 servers, that is only 8 processes per server.  Only
40 ports for a client to access. Much more manageable.
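
To make the arithmetic explicit, here is a rough Python sketch.  The helper
name and the assumption that a client mounts every volume are mine, purely
for illustration, not anything GlusterFS-specific:

def brick_math(volumes, bricks_per_server, servers):
    """Back-of-the-envelope process/port math (illustrative helper)."""
    # one brick daemon process per brick hosted on each server
    processes_per_server = volumes * bricks_per_server
    # each brick process listens on its own port, and a client mounting
    # every volume ends up talking to every brick on every server
    ports_per_client = volumes * bricks_per_server * servers
    return processes_per_server, ports_per_client

print(brick_math(200, 8, 5))   # -> (1600, 8000)  the layout we were running
print(brick_math(1, 8, 5))     # -> (8, 40)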

Or where we have moved to (1 brick per server via LVM+XFS):

200 volumes x 1 brick x 5 servers, that is 200 processes per server.  Up to
1000 ports per client.  Very doable, and it seems to be working great for us.
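
Plugging those numbers into the same sketch as above:

print(brick_math(200, 1, 5))   # -> (200, 1000)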

Or where we are moving to (2 bricks per server, in a distribute->replicate
chain of server1/drive1 -> server2/drive2, wrapping around so
server5/drive1 -> server1/drive2):

200 volumes x 2 bricks x 5 servers, that is up to 400 processes per server.
That's 2000 ports per client.  Still pretty doable on our systems, I think
(although possibly getting heavy... hopefully it works well enough that we
can have HA).  Maybe the client side isn't so great.
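
If it helps, the chained pairing described above can be sketched roughly like
this (the server names and brick paths are made up for illustration):

servers = ["server1", "server2", "server3", "server4", "server5"]

def chained_replica_pairs(servers):
    """Pair each server's drive1 with the next server's drive2, wrapping around."""
    count = len(servers)
    for i, srv in enumerate(servers):
        partner = servers[(i + 1) % count]   # server5 wraps around to server1
        yield (srv + ":/bricks/drive1", partner + ":/bricks/drive2")

for pair in chained_replica_pairs(servers):
    print(pair)
# ('server1:/bricks/drive1', 'server2:/bricks/drive2')
# ...
# ('server5:/bricks/drive1', 'server1:/bricks/drive2')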

And then your other example, 'hundreds of bricks in a single volume'.  By
hundreds of bricks, I'm assuming you mean across lots of servers... but
we'll take 8 bricks as a reasonable attached ent/mdl SAS system with 2TB
filesystems...

1 volume x 8 bricks x 100 servers, well, that is only 8 processes per
server.  Not bad.  The client side is 800 ports, which should not be too
bad.
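
Same sketch again, just to sanity-check that last one:

print(brick_math(1, 8, 100))   # -> (8, 800)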


So... it's all a matter of volume.  Lots of bricks on a node is fine, until
you have lots of volumes using all of those bricks.



