[Gluster-users] Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

Gionatan Danti g.danti at assyoma.it
Thu Sep 10 20:53:07 UTC 2020


On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote:
> I'm setting up GlusterFS on 2 hosts w/ the same hardware
> configuration and 8 HDDs. This deployment will grow later on.

Hi, I really suggest avoiding a replica 2 cluster unless it is for 
testing only. At the very least, be sure to add an arbiter (i.e. a 
replica 2 arbiter 1 cluster).
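
For example, a minimal sketch (hostnames and brick paths below are
placeholders; note that recent Gluster releases count the arbiter in
the replica count, so the volume is created as "replica 3 arbiter 1"):

  gluster volume create gvol replica 3 arbiter 1 \
      host1:/bricks/b1/data host2:/bricks/b1/data arb:/bricks/a1/data
  gluster volume start gvol

The arbiter brick stores only file names and metadata, so it can live
on a small third machine while still protecting you from split-brain.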

> I'm undecided between these different configurations and am seeking
> comments or advice from more experienced users of GlusterFS.
> 
> Here is the summary of 3 options:
> 1. 1 brick per host, Gluster "distributed" volumes, internal
> redundancy at brick level

I strongly suggest against it: with no replication between the hosts, 
any server reboot will make part of the volume unavailable to mounted 
clients. You will end up with *lower* uptime than a single server.

> 2. 1 brick per drive, Gluster "distributed replicated" volumes, no
> internal redundancy

This would increase Gluster performance via multiple bricks; however, 
a single failed disk will take the affected brick out of service. 
Moreover, a Gluster heal is a much slower process than a simple 
RAID1/ZFS mirror resilver.
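
For reference, replacing a failed-drive brick and then watching the
heal would look roughly like this (volume name and brick paths are
invented):

  gluster volume replace-brick gvol \
      host1:/bricks/sdb/data host1:/bricks/sdb-new/data commit force
  gluster volume heal gvol info

Every file that lived on the dead disk must be re-replicated over the
network from the other node, which is why this is so much slower than
a local mirror resilver.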

> 3. 1 brick per host, Gluster "distributed replicated" volumes, no
> internal redundancy

Again, I suggest against it: a single failed disk will put the entire 
node out of service *and* will cause a massive heal, as all data needs 
to be copied from the surviving node, which is a long and stressful 
event for the other node (and for the sysadmin).

In short, I would not use Gluster without *both* internal and 
brick-level redundancy. For a simple setup, I suggest option #1, but 
with a replicated volume (rather than a distributed one). You can 
increase the number of bricks (mountpoints) via multiple ZFS datasets, 
if needed.
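
As a rough sketch of that layout (pool, dataset, and host names are
examples only), each host carries one internally-redundant pool and
exposes a few datasets as bricks:

  # internal redundancy: one RAIDZ2 pool over the 8 disks (per host)
  zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
      /dev/sdf /dev/sdg /dev/sdh /dev/sdi
  zfs create tank/brick1
  zfs create tank/brick2

  # brick-level redundancy: replica sets spanning both hosts plus an
  # arbiter; bricks are grouped into sets of three in the order given
  gluster volume create gvol replica 3 arbiter 1 \
      host1:/tank/brick1/data host2:/tank/brick1/data arb:/bricks/a1/data \
      host1:/tank/brick2/data host2:/tank/brick2/data arb:/bricks/a2/data
  gluster volume start gvol

This way a failed disk is handled entirely by ZFS (a fast local
resilver, no Gluster heal at all), while the replica layer covers the
loss of a whole host.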

Regards.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8

