[Gluster-users] Gluster on EC2 - how to replace failed EBS volume?
Olivier Nicole
Olivier.Nicole at cs.ait.ac.th
Wed Oct 5 02:37:16 UTC 2011
Hi Don,
> 1. Remove the brick from the Gluster volume, stop the array, detach the 8 vols, make new vols from last good snapshot, attach new vols, restart array, re-add brick to volume, perform self-heal.
>
> or
>
> 2. Remove the brick from the Gluster volume, stop the array, detach the 8 vols, make brand new empty volumes, attach new vols, restart array, re-add brick to volume, perform self-heal. Seems like this one would take forever and kill performance.
I am very new to Gluster, but I would think that solution 2 is the
safest: you don't mix up the rebuild from two different sources; only
Gluster is involved in rebuilding.
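As a rough sketch of what option 2 looks like on the command line
(volume name "myvol" and the brick paths are made up here; on the
3.x releases of that era, "replace-brick" is the documented way to
swap a brick, and the final stat walk from a client mount is what
triggers self-heal -- adjust everything to your own layout):

```shell
#!/bin/sh
# Hypothetical names -- substitute your real volume and brick paths.
VOL=myvol
OLD_BRICK=server1:/export/brick1
NEW_BRICK=server1:/export/brick1-new   # freshly created, empty filesystem

# Swap the failed brick for the empty one.
gluster volume replace-brick $VOL $OLD_BRICK $NEW_BRICK start

# From a client mount, stat every file so Gluster self-heals it
# onto the new brick.
find /mnt/gluster -noleaf -print0 | xargs -0 stat > /dev/null
```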
Though I have read that you can self-heal with a time parameter, to
limit the find to the files that were modified since your brick went
offline. So I believe that could be extended to the time since your
snapshot.
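Something along these lines, assuming your last good snapshot was
taken within the last 600 minutes (the mount point and the window
are placeholders -- widen the window to safely cover the gap between
snapshot and failure):

```shell
# Stat only files modified in the last 600 minutes, so self-heal
# touches just what changed since the snapshot, not the whole brick.
find /mnt/gluster -mmin -600 -print0 | xargs -0 stat > /dev/null
```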
Instead of configuring your 8 disks in RAID 0, I would use JBOD and
let Gluster do the concatenation. That way, when you replace a disk,
you only have 125 GB to self-heal.
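That is, one brick per EBS volume rather than one brick on top of the
array. A sketch, with invented mount points /export/ebs1 through
/export/ebs8 (the {1..8} brace expansion is done by the shell before
gluster sees it):

```shell
# One distribute brick per 125 GB EBS volume; replacing one EBS
# volume then means re-healing only that one brick.
gluster volume create myvol server1:/export/ebs{1..8}
gluster volume start myvol
```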
Best regards,
Olivier