[Gluster-users] New GlusterFS Config with 6 x Dell R720xd's and 12x3TB storage

Brian Candler B.Candler at pobox.com
Tue Dec 4 13:50:06 UTC 2012

On Mon, Dec 03, 2012 at 11:29:51PM +0000, Mike Hanby wrote:
> Each of the 6 servers now have 10 3TB LUNs that physically exist on a RAID 6.

You mean you combined the 12 3TB drives into a 30TB RAID6 array, and then
partitioned that 30TB array into 10 x 3TB partitions / logical volumes?

> I've created a single large distributed volume as a first test. Is this a
> typical configuration (break large storage on a single server into smaller
> bricks), or is it more common to take the smaller LUNs and use LVM to
> create a single large logical volume that becomes the Brick?

I don't think breaking a single large volume into smaller bricks gains you
much, and it makes your system harder to administer and more sensitive to
files being distributed unevenly between bricks.

With xfs you shouldn't ever have a long fsck.  I'm not sure about ext4, but
as long as the journal replays properly it should be fine too.

> That said, another thing we are looking at doing is offering both
> distributed and distributed replica storage, depending on the users
> requirements.  Best I can tell, in order to do this in GlusterFS, I need
> two volumes, each with its own bricks?

Not necessarily. You can make a single filesystem, say /data, and then use
subdirectories as the bricks (e.g.  server1:/data/brick1,
server1:/data/brick2).  These bricks can then be parts of different gluster
volumes.
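As a rough sketch of that layout (volume names, server names and replica
count here are made up, not from your setup), the two volumes could be
built from different subdirectories of the same /data filesystem:

```shell
# Sketch only: assumes glusterd is running and all peers are probed,
# and that /data is a mounted filesystem on each server.
# Volume and brick names are hypothetical.

# Distributed volume from one set of subdirectory bricks:
gluster volume create dist-vol \
    server1:/data/brick1 server2:/data/brick1 server3:/data/brick1

# Distributed-replicated volume from a second set of subdirectories:
gluster volume create repl-vol replica 2 \
    server1:/data/brick2 server2:/data/brick2 \
    server3:/data/brick2 server4:/data/brick2

gluster volume start dist-vol
gluster volume start repl-vol
```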

Using subdirectories has the useful benefit that if /data doesn't mount for
any reason, the bricks will fail to start, rather than writing directly to
/data on the root filesystem, which could be catastrophic if the
self-healing daemon decides to copy 30TB from the other brick into this
space :-)

The downside of having multiple bricks within one filesystem is that "df"
will show the free space on all the volumes as being the free space on
/data; if you write 1TB to either volume, then both "volumes" will show 1TB
less free space available.
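You can see the same effect on any machine: two directories inside one
filesystem draw on a single pool of free space, so df reports the same
figures for both (the paths below are illustrative, not your brick paths):

```shell
# Two "brick" directories inside the same filesystem share one pool of
# free space, so df shows the same device and Avail for both.
mkdir -p /tmp/data/brick1 /tmp/data/brick2

# -P gives POSIX single-line output; field 4 is the available space.
df -P /tmp/data/brick1 | awk 'NR==2 {print $1, $4}'
df -P /tmp/data/brick2 | awk 'NR==2 {print $1, $4}'
```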


