[Gluster-users] Gluster on EC2 - how to replace failed EBS volume?

Olivier Nicole Olivier.Nicole at cs.ait.ac.th
Fri Oct 7 03:04:42 UTC 2011


Don,

> That is a brilliant idea.  I implemented it in a test environment
> today and am doing some benchmarks.  Great idea to eliminate RAID0.
> I was only using it to get better I/O throughput on EC2 EBS.  I
> didn't know that Gluster would handle the striping like it does.

I did some testing yesterday evening: one virtual server on a VMware
ESXi host, with 5 disks, one of which holds the system.

I tested three setups (rough command sketches follow the list):

1) 4 disks, formatted ext4, mounted as /mntb, /mntc, etc. and used as
   4 bricks in a single gluster volume. Mounted that gluster volume
   locally as glusterfs.

2) 4 disks, in mdadm RAID 0, /dev/md0 formatted ext4, mounted and used
   as one brick in a gluster volume. Mounted that gluster volume
   locally as glusterfs.

3) 4 disks, in mdadm RAID 0, formatted ext4, mounted locally.
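
Roughly, the commands behind setups 1 and 2 looked like this (device
names, the volume name "testvol" and the mount points are illustrative
placeholders, not copied from my session):

   # setup 1: each disk is its own brick
   mkfs.ext4 /dev/sdb1
   mkdir -p /mntb && mount /dev/sdb1 /mntb      # likewise sdc..sde
   gluster volume create testvol server:/mntb server:/mntc \
       server:/mntd server:/mnte
   gluster volume start testvol
   mkdir -p /mnt/gluster
   mount -t glusterfs server:/testvol /mnt/gluster

   # setup 2: one RAID 0 array as the single brick
   mdadm --create /dev/md0 --level=0 --raid-devices=4 \
       /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
   mkfs.ext4 /dev/md0
   mkdir -p /mnt0 && mount /dev/md0 /mnt0
   gluster volume create testvol2 server:/mnt0
   gluster volume start testvol2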

1 and 2 had close throughput in writing; maybe 2 was 5% faster.

So for ease of administration, I would go with solution 1 every
time. Now, if the RAID level is higher than 0, that is another
discussion.

Of course 3 was faster, but it has no gluster :)

Bests,

Olivier

> Thank you very much!
> Don
> 
> ----- Original Message -----
> From: "Olivier Nicole" <Olivier.Nicole at cs.ait.ac.th>
> To: dspidell at nxtbookmedia.com
> Cc: gluster-users at gluster.org
> Sent: Wednesday, October 5, 2011 10:45:13 PM
> Subject: Re: [Gluster-users] Gluster on EC2 - how to replace failed EBS volume?
> 
> Hi Don,
> 
> > Thanks for your reply.  Can you explain what you mean by:
> > 
> > > Instead of configuring your 8 disks in RAID 0, I would use JBOD and
> > > let Gluster do the concatenation. That way, when you replace a disk,
> > > you just have 125 GB to self-heal.
> 
> If I am not mistaken, RAID 0 provides no redundancy; it just stripes
> the 8 125 GB disks together so they appear as one big 1 TB disk.
> 
> So I would not use any RAID on the machine: just have 8 independent
> disks and mount them at eight locations:
> 
> mount /dev/sda1 /
> mount /dev/sdb1 /datab
> mount /dev/sdc1 /datac
> etc.
> 
> Then in gluster I would have the bricks (a command sketch follows the
> list):
> 
> server:/data
> server:/datab
> server:/datac
> etc.
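> 
> Roughly, with a hypothetical volume name "gvol" (my placeholder, pick
> your own):
> 
> gluster volume create gvol server:/data server:/datab \
>     server:/datac               # ...and so on for the remaining disks
> gluster volume start gvol
> mkdir -p /mnt/gluster
> mount -t glusterfs server:/gvol /mnt/gluster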
> 
> If any disk (except the system disk) fails, you can simply fit in a
> new disk and let gluster self-heal.
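> 
> A minimal sketch of the replacement (assuming a replicated volume and
> the placeholder names above; "gluster volume heal" exists from 3.3 on,
> while on older releases a recursive stat from a client mount triggers
> the same self-heal):
> 
> mkfs.ext4 /dev/sdc1               # the replacement disk
> mount /dev/sdc1 /datac
> gluster volume heal gvol full
> # pre-3.3: find /mnt/gluster -print0 | xargs -0 stat >/dev/null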
> 
> Even if RAID 0 increases disk throughput because it does striping
> (writing different blocks to different disks), gluster does more or
> less the same: each new file will end up on a different disk. So the
> throughput should be close.
> 
> The only disadvantage is that gluster will have some space overhead,
> as it will create a replica of the directory tree on each disk.
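> 
> Both points are easy to see from the bricks themselves (placeholder
> paths and volume name as above):
> 
> mkdir /mnt/gluster/dir1
> for i in 1 2 3 4; do echo x > /mnt/gluster/dir1/file$i; done
> ls /datab/dir1 /datac/dir1   # dir1 exists on every brick,
>                              # but each file lives on only one brick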
> 
> I think that you should only use RAID with gluster when RAID provides
> local redundancy (RAID 1 or above): in that case, when a disk fails,
> gluster will not notice the problem; you swap in a new disk and let
> RAID rebuild the information.
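> 
> For instance with mdadm (RAID 1 over two disks; device names are
> illustrative only):
> 
> mdadm --create /dev/md0 --level=1 --raid-devices=2 \
>     /dev/sdb1 /dev/sdc1
> # after a failure, swap the disk and let md rebuild in the background:
> mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
> mdadm /dev/md0 --add /dev/sdd1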
> 
> Bests,
> 
> Olivier
> 


