[Gluster-users] Gluster setup for virtualization cluster
Gionatan Danti
g.danti at assyoma.it
Mon Feb 17 19:53:09 UTC 2020
On 2020-02-17 03:59, Markus Kern wrote:
> Greetings!
>
> I am currently evaluating our options to replace our old mixture of
> IBM SAN storage boxes. This will be a strategic decision for the next
> years.
> One of the solutions I am reviewing is a GlusterFS installation.
>
> Planned usage:
> - Central NFS server for around 25 systems providing around 400 docker
> containers
> - Central storage for a small VMWare vCenter cluster and a RedHat
> virtualization cluster. In total maybe around 15 machines
>
> The following requirements ensue from this:
> - Fast storage
> - High availability
>
>
> After reading all kind of tutorials and documentation, I came to the
> conclusion that for the expected traffic a "Distributed Replicate
> Volume" is the proper setup.
>
> Nothing has been purchased but I think about following small setup for
> the beginning (call it PoC):
>
> 4 x server, each with 8 x 1.8TB 10k SAS disks in a RAID60
> Two 10 GBit interfaces per server: one for communication between the
> 4 systems only (separate VLAN), the other one for regular traffic
> between clients and servers.
>
>
> Does this all make sense?
> Generally speaking: Is such a setup capable of providing fast enough
> storage for a virtualization cluster?
> Do you have any hints?
>
> Thanks
>
> Markus
I evaluated such a setup, but I decided against it when using a small
number of nodes/bricks.
The key reason was bad sync performance even when using ramdisks *and*
two local bricks (ie: minimal network overhead). You can read more here:
https://lists.gluster.org/pipermail/gluster-users/2020-January/037601.html
The interesting thing is that when increasing the number of bricks,
performance scaled well. So it seems gluster *can* be good at
virtualization, but it needs a large number of bricks (eg: an entire
server rack, or a one-brick-per-physical-disk approach). This matches
the experiences shared by other sysadmins.
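To give an idea, a many-brick distributed-replicate layout means passing
the bricks to "gluster volume create" in replica-set order. A minimal
sketch (the volume name, hostnames and brick paths below are made-up
examples, not from my actual tests):

```shell
# Hypothetical 3-node layout with 3 bricks per node: consecutive
# groups of 3 bricks form one replica set, and Gluster distributes
# files across the resulting 3 sets.
gluster volume create vmstore replica 3 \
  srv1:/bricks/b1 srv2:/bricks/b1 srv3:/bricks/b1 \
  srv1:/bricks/b2 srv2:/bricks/b2 srv3:/bricks/b2 \
  srv1:/bricks/b3 srv2:/bricks/b3 srv3:/bricks/b3
gluster volume start vmstore
```

With more nodes (or one brick per physical disk), the same pattern just
grows the list of replica sets, which is where the scaling I observed
came from.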
Moreover, in order to have efficient resync/healing after a node reboot,
you need to enable sharding (ie: the virtual disks will be divided into
many small chunks). I was somewhat uncomfortable doing that, as any
problem leaving gluster unable to mount the share would lead to quite a
tricky "file reconstruction puzzle".
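For reference, sharding is a per-volume option (the volume name below is
a made-up example; check the shard block size against your distribution's
recommendations, as the value here is just illustrative):

```shell
# Hypothetical: enable sharding on a new, still-empty volume.
# Note: enabling it later does NOT reshard already-written files,
# so it must be set before any VM images land on the volume.
gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 64MB
```

The downside I alluded to: with sharding on, a VM image exists on the
bricks only as a pile of shard files under the hidden .shard directory,
so recovering it without a working gluster mount is much harder than
copying a single backing file off a brick.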
So I ended up with local storage (and a hot-standby server) rather than
Gluster. If anyone has different stories to share, I really am all
ears.
Regards.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it [1]
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8