[Gluster-users] Questions about healing

Kevin Lemonnier lemonnierk at ulrar.net
Wed May 18 11:55:01 UTC 2016

On Wed, May 18, 2016 at 01:39:58PM +0200, Gandalf Corvotempesta wrote:
> Hi,
> I'm planning a new infrastructure. I have some questions about
> healing, to better optimize performance in case of brick failure.
> Let's assume this environment:
> 3 supermicro servers, replica 3, with 12 SATA disks each.
> each server has 2 bricks in RAID-6 (software or
> hardware, I don't know) made of 6 disks each.
> 1) in case of a single disk failure, healing would not
> happen, as RAID is recovering on its own


> 2) in case of total brick failure (3 broken disks in a RAID-6),
> healing would happen, right ? During the healing, the whole brick
> is locked for write? Even if the other 2 servers are working properly?

Yeah, but that's transparent. You don't access bricks directly; you access
the volume. Even if you think you are mounting a specific brick, you aren't.

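For reference, heal progress can be watched from any node with the standard CLI (the volume name below is just an example):

```shell
# List files still pending heal, per brick (example volume: vmstore)
gluster volume heal vmstore info

# Per-brick summary counters of entries left to heal
gluster volume heal vmstore statistics heal-count
```
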
> 3) in case of total server failure, would healing happen like in point 2?
> I'm asking this because I don't know whether to use Gluster to store
> virtual machine disk images or to mount a filesystem inside each
> virtual machine.
> In the first case, when healing happens, the whole VM is locked, right?
> If the same brick hosts multiple VM images, all of those VMs would be locked.

Yes, that's why you need to use sharding. With sharding, the heal is much quicker,
and the whole VM isn't frozen during the heal, only the shard being healed.
I'm testing that right now myself, and it's almost invisible to the VM
on 3.7.11. Use the latest version though, it really really wasn't transparent
in 3.7.6 :).
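In case it helps, sharding is enabled per volume with the usual `volume set` options. The volume name and block size below are examples; note that sharding only applies to files created after it is turned on:

```shell
# Enable sharding on an existing volume (example name: vmstore)
gluster volume set vmstore features.shard on

# Split large files into fixed-size shards; 64MB is a common choice for
# VM images (the 3.7.x default is 4MB)
gluster volume set vmstore features.shard-block-size 64MB
```

With that set, a heal only has to copy the shards that changed while a brick was down, instead of re-syncing whole multi-GB disk images.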

> In the second case, only the healed file is locked. As we host mainly
> webservers with tons of small files, healing would be almost transparent
> (healing a 20 KB file would take a second, not hours).

Yes, but Gluster isn't great for small files. We do have a few websites
on GlusterFS, but to make the performance acceptable you'll have to enable
APCu with stat = 0. Using Gluster for the VM disks instead of for the
application files would avoid that.
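If I read "APCu with stat = 0" as the classic `apc.stat` php.ini setting (or its OPcache equivalent on PHP 5.5+), the fragment would look something like this; treat the exact directive names as an assumption about your PHP stack:

```ini
; Stop PHP from stat()ing every script on each request -- the stat()
; round-trips are what hurt small-file workloads on GlusterFS.
; Classic APC:
apc.stat=0
; OPcache equivalent on PHP 5.5+:
opcache.validate_timestamps=0
```

The trade-off is that edited PHP files are no longer picked up automatically; you have to clear the cache (or reload PHP) after deploys.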

You should test both solutions though, and see what fits best!

Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111