[Gluster-users] New GlusterFS Config with 6 x Dell R720xd's and 12x3TB storage
mhanby at uab.edu
Tue Dec 4 16:33:30 UTC 2012
Thanks Brian. I'm going to test the multiple volumes suggestion today.
Regarding the underlying storage, the 12 x 3TB disks are configured as a single RAID6 array that the Dell PERC controller presents to the OS as 10 x 3TB virtual disks. So the OS sees /dev/sdb, /dev/sdc, /dev/sdd, ...
I did this because I was under the impression that all bricks have to be the same size. So for future expansion I didn't want to get into a situation where additional servers had to come configured with exactly 30TB of usable storage.
From: Brian Candler [B.Candler at pobox.com]
Sent: Tuesday, December 04, 2012 7:50 AM
To: Mike Hanby
Cc: Gluster-users at gluster.org
Subject: Re: [Gluster-users] New GlusterFS Config with 6 x Dell R720xd's and 12x3TB storage
On Mon, Dec 03, 2012 at 11:29:51PM +0000, Mike Hanby wrote:
> Each of the 6 servers now have 10 3TB LUNs that physically exist on a RAID 6.
You mean you combined the 12 3TB drives into a 30TB RAID6 array, and then
partitioned that 30TB array into 10 x 3TB partitions / logical volumes?
> I've created a single large distributed volume as a first test. Is this a
> typical configuration (break large storage on a single server into smaller
> bricks), or is it more common to take the smaller LUNs and use LVM to
> create a single large logical volume that becomes the Brick?
I don't think breaking a single large volume into smaller bricks gains you
very much, and makes your system more difficult to administer and more
sensitive to unbalanced files between bricks.
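For reference, the LVM route mentioned in the question could be sketched roughly like this (the device names, volume group name, and mount point are assumptions for illustration, not taken from Mike's actual setup):

```shell
# Sketch only: combine the ten 3TB LUNs into one large logical volume
# that becomes a single brick. Device names (/dev/sdb../dev/sdk),
# VG/LV names, and the mount point are illustrative assumptions.
pvcreate /dev/sd{b..k}
vgcreate gluster_vg /dev/sd{b..k}

# Use all free extents for a single ~30TB logical volume
lvcreate -l 100%FREE -n brick1 gluster_vg

# XFS with 512-byte inodes leaves room for gluster's extended attributes
mkfs.xfs -i size=512 /dev/gluster_vg/brick1
mkdir -p /data
mount /dev/gluster_vg/brick1 /data
```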
With xfs you shouldn't ever have a long fsck. Not sure about ext4, but as long as the journal replays properly it should be quick as well.
> That said, another thing we are looking at doing is offering both
> distributed and distributed replica storage, depending on the users
> requirements. Best I can tell, in order to do this in GlusterFS, I need
> two volumes, each with its own bricks?
Not necessarily. You can make a single filesystem, say /data, and then use
subdirectories as the bricks (e.g. server1:/data/brick1,
server1:/data/brick2). These bricks can then be parts of different gluster volumes.
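A minimal sketch of that layout, assuming just two servers and hypothetical volume and directory names (server1/server2, distvol/replvol, and the brick paths are all illustrative):

```shell
# One filesystem per server mounted at /data; subdirectories act as bricks.
# Run mkdir on each server; gluster commands on any one server.
mkdir -p /data/dist-brick /data/repl-brick

# Distributed volume across both servers (capacity, no redundancy)
gluster volume create distvol \
    server1:/data/dist-brick server2:/data/dist-brick

# Replicated volume using a second set of subdirectories on the
# same filesystems
gluster volume create replvol replica 2 \
    server1:/data/repl-brick server2:/data/repl-brick

gluster volume start distvol
gluster volume start replvol
```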
Using subdirectories has the useful benefit that if /data doesn't mount for
any reason, the bricks will fail to start (rather than writing directly to
/data on the root filesystem, which could be catastrophic if the
self-healing daemon decides to copy 30TB from the other brick into this filesystem).
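In /etc/fstab terms, the safety property comes from the brick directories living only inside the mounted filesystem (the device path and mount point below are illustrative):

```shell
# /etc/fstab entry (illustrative). If this mount fails, /data is just an
# empty directory on the root filesystem, the brick subdirectories are
# absent, and gluster refuses to start those bricks instead of silently
# filling the root disk.
/dev/gluster_vg/brick1  /data  xfs  defaults,inode64  0 0
```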
The downside of having multiple bricks within one filesystem is that "df" will show the free space of all the volumes as the free space on /data; if you write 1TB to either volume, then both "volumes" will show 1TB less free space available.
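The shared-free-space effect is easy to demonstrate on any Linux machine (the paths below are just a local stand-in for the brick directories):

```shell
# Two "brick" subdirectories inside one filesystem report identical
# size and free space, because df describes the underlying filesystem,
# not the directory. Paths are illustrative.
mkdir -p /tmp/brickdemo/brick1 /tmp/brickdemo/brick2
df -h /tmp/brickdemo/brick1 /tmp/brickdemo/brick2
```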