[Gluster-users] Fwd: New GlusterFS deployment, doubts on 1 brick per host vs 1 brick per drive.

Miguel Mascarenhas Filipe miguel.filipe at gmail.com
Thu Sep 10 21:13:31 UTC 2020


On Thu, 10 Sep 2020 at 21:53, Gionatan Danti <g.danti at assyoma.it> wrote:

> On 2020-09-09 15:30, Miguel Mascarenhas Filipe wrote:
> > I'm setting up GlusterFS on 2 hosts with the same hardware
> > configuration, 8 HDDs each. This deployment will grow later on.
>
> Hi, I really suggest avoiding a replica 2 cluster unless it is for
> testing only. At the very least, add an arbiter (i.e., a replica 2
> arbiter 1 cluster).
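>
> A minimal sketch of such a volume (hostnames and brick paths below are
> placeholders, not from your setup; recent Gluster releases spell this
> as "replica 3 arbiter 1", where the third brick stores metadata only):
>
>   gluster volume create gv0 replica 3 arbiter 1 \
>     server1:/data/brick1/gv0 server2:/data/brick1/gv0 \
>     arbiter1:/data/arb/gv0
>   gluster volume start gv0
>
> The arbiter needs very little space, so it can live on a small third
> machine or VM.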
>
> > I'm undecided between these different configurations and am seeking
> > comments or advice from more experienced users of GlusterFS.
> >
> > Here is the summary of 3 options:
> > 1. 1 brick per host, Gluster "distributed" volumes, internal
> > redundancy at brick level
>
> I strongly suggest against it: any server reboot will cause trouble for
> mounted clients. You will end up with *lower* uptime than a single
> server.
>
> > 2. 1 brick per drive, Gluster "distributed replicated" volumes, no
> > internal redundancy
>
> This would increase Gluster performance via multiple bricks; however, a
> single failed disk will put the entire node out-of-service. Moreover,
> Gluster heals are much slower than a simple RAID1/ZFS mirror resync.


Can you explain better how a single failed disk would bring a whole node
out of service?

From your comments this option sounds the best, but having node outages
from single disk failures doesn’t sound acceptable.

>
> > 3. 1 brick per host, Gluster "distributed replicated" volumes, no
> > internal redundancy
>
> Again, I suggest against it: a single failed disk will put the entire
> node out-of-service *and* will cause a massive heal, as all data needs
> to be copied from the surviving node, which is a long and stressful
> event for the other node (and for the sysadmin).
>
> In short, I would not use Gluster without *both* internal (RAID/ZFS)
> redundancy and Gluster-level replication. For a simple setup, I suggest
> option #1, but as a replica volume (rather than distributed). You can
> increase the number of bricks (mountpoints) via multiple zfs datasets,
> if needed.
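>
> For example (just a sketch; the hostnames server1/server2 and the zpool
> name "tank" are illustrative, not from your setup):
>
>   # on each host, create one dataset per desired brick
>   zfs create tank/brick1
>   zfs create tank/brick2
>
>   # 2x2 distributed-replicated volume: consecutive bricks form a
>   # replica pair, so each pair spans both hosts
>   gluster volume create gv0 replica 2 \
>     server1:/tank/brick1/data server2:/tank/brick1/data \
>     server1:/tank/brick2/data server2:/tank/brick2/data
>
> Each brick still sits on a redundant zpool underneath, and as noted
> above an arbiter on top of this is still advisable.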

>
> Regards.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
>
-- 
Miguel Mascarenhas Filipe