[Gluster-users] Gluster on EC2 - how to replace failed EBS volume?
jaw171 at pitt.edu
Thu Oct 6 15:33:16 UTC 2011
"So I would not use any RAID on the machine, just have 8 independent
disks and mount the 8 disks at eight locations:"
Then your max file size is limited to the space available in each
disk/brick, unless you stripe the data with Gluster. I don't think
Gluster is a replacement for RAID 0/1/5/10; it should layer on top of it
to provide more redundancy or speed.
In distribute/replicate, the best case is that your max file size
equals your brick size: 250GB bricks = 250GB max file size. What if a
brick already has 230GB used? Now your max file size is 20GB, and
writing a file that large leaves you with a full brick. Create bigger
bricks and you lower both the chance of a file being too big and the
risk of filling a brick.
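The arithmetic above can be sketched in shell, using the hypothetical numbers from the example:

```shell
# In a distributed volume each file lives wholly on one brick, so the
# largest new file is bounded by that brick's remaining free space.
brick_total_gb=250   # brick capacity from the example above
brick_used_gb=230    # space already consumed on that brick
max_file_gb=$((brick_total_gb - brick_used_gb))
echo "largest new file: ${max_file_gb}GB"   # prints: largest new file: 20GB
```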
I'm fairly new to Gluster, so feel free to correct me if I'm wrong.
The only bad thing about big bricks is that it makes it harder to add
more bricks later, since you want to add bricks of the same size.
Linux/Unix Systems Engineer
University of Pittsburgh - CSSD
Jaw171 at pitt.edu
On 10/05/2011 10:45 PM, Olivier Nicole wrote:
> Hi Don,
>> Thanks for your reply. Can you explain what you mean by:
>>> Instead of configuring your 8 disks in RAID 0, I would use JOBD and
>>> let Gluster do the concatenation. That way, when you replace a disk,
>>> you just have 125 GB to self-heal.
> If I am not mistaken, RAID 0 provides no redundancy, it just
> concatenates the 8 125GB disks together so they appear as one big
> 1TB disk.
> So I would not use any RAID on the machine, just have 8 independent
> disks and mount the 8 disks at eight locations:
> mount /dev/sda1 /
> mount /dev/sdb1 /datab
> mount /dev/sdc1 /datac
> Then in gluster I would have the bricks
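A per-disk brick layout like the one described might be created along these lines; the hostname `server1` and volume name `test-volume` are made up for illustration, and the commands need a running glusterd:

```shell
# Hypothetical sketch: one distributed volume spanning the per-disk
# mounts above, one brick per disk (no RAID underneath).
gluster volume create test-volume \
    server1:/datab server1:/datac server1:/datad
gluster volume start test-volume
```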
> If any disk (except the system disk) fails, you can simply fit in a
> new disk and let gluster self-heal.
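On Gluster of this era, self-heal is triggered by touching files through a client mount after the replacement disk is mounted at the old brick path; a sketch, with the mount point `/mnt/gluster` made up:

```shell
# Walking the client mount forces Gluster to stat (and thus self-heal)
# every file it touches after the replacement brick comes back.
find /mnt/gluster -noleaf -print0 | xargs -0 stat >/dev/null
```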
> Even if RAID 0 increases the disk throughput because it does striping
> (writes different blocks to different disks), gluster does more or
> less the same: each new file will end up on a different disk. So the
> throughput should be close.
> The only disadvantage is that gluster will have some space overhead,
> as it will create a replica of the directory tree on each disk.
> I think that you should only use RAID with gluster when RAID provides
> local redundancy (RAID 1 or above): in that case, when a disk fails,
> gluster will not notice the problem; you swap in a new disk and let
> RAID rebuild the information.
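With local RAID 1, the disk swap is handled entirely below Gluster; a sketch with `mdadm`, where the array and device names are hypothetical and the commands require real block devices:

```shell
# Hypothetical: md0 is a RAID 1 array whose member /dev/sdb1 failed.
mdadm --manage /dev/md0 --remove /dev/sdb1   # drop the failed member
mdadm --manage /dev/md0 --add /dev/sdc1     # add the replacement; rebuild starts
cat /proc/mdstat                            # watch rebuild progress
```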