[Gluster-users] Gluster on EC2 - how to replace failed EBS volume?
Olivier Nicole
Olivier.Nicole at cs.ait.ac.th
Thu Oct 6 02:45:13 UTC 2011
Hi Don,
> Thanks for your reply. Can you explain what you mean by:
>
> > Instead of configuring your 8 disks in RAID 0, I would use JBOD and
> > let Gluster do the concatenation. That way, when you replace a disk,
> > you just have 125 GB to self-heal.
If I am not mistaken, RAID 0 provides no redundancy: it just stripes
the 8 125 GB disks together so they appear as one big 1 TB disk.
So I would not use any RAID on the machine, just have 8 independent
disks and mount the 8 disks at eight locations:
mount /dev/sda1 /
mount /dev/sdb1 /datab
mount /dev/sdc1 /datac
etc.
Then in gluster I would have the bricks
server:/data
server:/datab
server:/datac
etc.
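For instance, a rough sketch of the volume creation (the volume name
"ec2vol", the client mount point /mnt/gluster and the full brick list
are just examples; adapt them to your own layout and gluster version):

  # on the server, once all the bricks are mounted
  gluster volume create ec2vol \
      server:/data server:/datab server:/datac server:/datad
  gluster volume start ec2vol

  # on the clients, mount the whole volume
  mount -t glusterfs server:/ec2vol /mnt/gluster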
If any disk (except the system disk) fails, you can simply swap in a
new disk and let gluster self-heal.
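The replacement itself could look roughly like this (device names and
mount points are assumptions, and the final walk only rebuilds the
data if the volume is replicated so a copy still exists on another
brick):

  # attach a fresh EBS volume in place of the failed one, partition it
  # as before (step omitted here), create a filesystem and remount it
  # at the old brick path
  mkfs.ext3 /dev/sdc1
  mount /dev/sdc1 /datac

  # from a client mount, walk the tree so gluster recreates the
  # missing copies (the usual pre-3.3 self-heal trigger)
  find /mnt/gluster -noleaf -print0 | xargs --null stat > /dev/null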
Even if RAID 0 increases the disk throughput because it does striping
(writing different blocks to different disks), gluster does more or
less the same: each new file ends up on a different disk, so the
throughput should be close.
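You can see that distribution by creating a few files on a client
mount and then looking at the brick directories on the server (the
mount point and file names are just examples):

  # on a client
  for i in 1 2 3 4; do echo test > /mnt/gluster/file$i; done

  # on the server: each brick holds only a subset of the files
  ls /data /datab /datac /datad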
The only disadvantage is that gluster has some space overhead, as it
creates a copy of the directory tree on each brick.
I think that you should only use RAID with gluster when RAID provides
local redundancy (RAID 1 or above): in that case, when a disk fails,
gluster does not notice the problem; you swap in a new disk and let
RAID rebuild the data.
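With Linux software RAID 1, replacing a failed member could look
roughly like this (the md device and member names are made up for
illustration; the gluster brick on top stays mounted the whole time):

  # mark the failed member and pull it out of the mirror
  mdadm --manage /dev/md0 --fail /dev/sdb1
  mdadm --manage /dev/md0 --remove /dev/sdb1

  # attach the replacement EBS volume, then add it back to the mirror
  mdadm --manage /dev/md0 --add /dev/sdf1

  # watch the rebuild progress
  cat /proc/mdstat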
Best,
Olivier