[Gluster-users] Gluster setup for virtualization cluster

Darrell Budic budic at onholyground.com
Mon Feb 17 17:48:11 UTC 2020


Markus-

Strahil is on the right path with questions about your IOPS needs. Expect to lose a good chunk of them to Gluster keeping its replicas in sync, so you may need faster drives or SSD/NVMe caching. I would not recommend using an arbiter with a distributed replicated setup, but do think about those RAIDs. You’re doubling up on redundancy by going RAID 60 or 50. Maybe consider RAID 10 for more speed if you stick with HDDs, but remember you’re already getting additional redundancy from your replica setup, so plan according to your needs.
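
For reference, the distributed replicate layout across your 4 planned servers would be a 2x2 volume, created along these lines (hostnames, brick paths, and the volume name are placeholders, and the replica-2 caveats Strahil raises below still apply):

    # one brick per server, on top of whatever RAID you settle on;
    # bricks are paired into replica sets in the order they are listed
    gluster volume create vmstore replica 2 \
        gl01:/bricks/brick1/vmstore gl02:/bricks/brick1/vmstore \
        gl03:/bricks/brick1/vmstore gl04:/bricks/brick1/vmstore
    # optional tuning profile for VM image workloads, if it ships with your gluster packages
    gluster volume set vmstore group virt
    gluster volume start vmstore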

I’d bond the 2x 10G; Gluster will communicate both between the servers and directly with the clients, especially if you can use libgfapi-aware clients. Bonding them in an LACP or mode-6 config will give the best throughput.
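
If you go the LACP route on an EL-family host, a NetworkManager setup would look roughly like this (interface names and addressing are placeholders, and the switch ports need a matching LACP/port-channel config):

    # 802.3ad bond of the two 10G ports
    nmcli con add type bond ifname bond0 con-name bond0 \
        bond.options "mode=802.3ad,miimon=100,lacp_rate=fast"
    nmcli con add type bond-slave ifname ens1f0 master bond0
    nmcli con add type bond-slave ifname ens1f1 master bond0
    nmcli con mod bond0 ipv4.method manual ipv4.addresses 10.10.10.11/24
    nmcli con up bond0

Mode-6 (balance-alb) is the same idea with "mode=balance-alb" and no switch-side configuration required.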

And read up on Ganesha; it will add some complexity to your setup.
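
As a taste of that complexity, a single FSAL_GLUSTER export in ganesha.conf looks roughly like this (volume name and hostname are placeholders; the HA/failover layer, e.g. pacemaker or ctdb, sits on top of this and is its own project):

    EXPORT {
        Export_Id = 1;
        Path = "/vmstore";
        Pseudo = "/vmstore";
        Access_Type = RW;
        Squash = No_root_squash;
        SecType = "sys";
        FSAL {
            Name = GLUSTER;
            Hostname = "gl01";   # any node in the trusted pool
            Volume = "vmstore";
        }
    }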

  -Darrell

> On Feb 16, 2020, at 11:48 PM, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
> 
> On February 17, 2020 4:59:17 AM GMT+02:00, Markus Kern <gluster at military.de> wrote:
>> Greetings!
>> 
>> I am currently evaluating our options to replace our old mixture of
>> IBM SAN storage boxes. This will be a strategic decision for the next
>> years.
>> One of the solutions I am reviewing is a GlusterFS installation.
>> 
>> Planned usage:
>> - Central NFS server for around 25 systems providing around 400 docker 
>> containers
>> - Central storage for a small VMware vCenter cluster and a Red Hat 
>> virtualization cluster, around 15 machines in total
>> 
>> This leads to the following requirements:
>> - Fast storage
>> - High availability
>> 
>> 
>> After reading all kind of tutorials and documentation, I came to the 
>> conclusion that for the expected traffic a "Distributed Replicate 
>> Volume" is the proper setup.
>> 
>> Nothing has been purchased but I think about following small setup for 
>> the beginning (call it PoC):
>> 
>> 4 x servers, each with 8 x 1.8TB 10k SAS disks in a RAID 60
>> Two 10 GBit interfaces per server: one for communication between the 4
>> systems only (separate VLAN), the other for regular traffic between
>> clients and servers.
>> 
>> 
>> Does this all make sense?
>> Generally speaking: Is such a setup capable of providing fast enough 
>> storage for a virtualization cluster?
>> Do you have any hints?
>> 
>> Thanks
>> 
>> Markus
>> 
>> 
>> 
> 
> Hi Markus,
> 
> It all depends on the IOPS requirements and the capabilities of the disks. Are these spinning disks or SSD/NVMe?
> 
> For me a RAID 60 is overkill, as you need 'replica 2 arbiter 1' or pure 'replica 3' when you care about your data.
> 
> I'd recommend adding a separate node (either physical or virtual) as an arbiter.
> 
> Most probably a RAID 5/50 will be enough, but you can adapt during the PoC stage.
> 
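
For anyone following along, the arbiter layout Strahil describes would be created roughly like this (hypothetical hostnames and brick paths; the arbiter brick holds only metadata, so it can live on a much smaller node):

    gluster volume create vmstore replica 3 arbiter 1 \
        gl01:/bricks/brick1/vmstore gl02:/bricks/brick1/vmstore arb01:/bricks/arb1/vmstore

A pure replica 3 is the same command without "arbiter 1" and with a full-sized third brick.
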
> If you plan to use NFS, consider NFS Ganesha in an HA cluster, as the built-in Gluster NFS server is deprecated (and in some distributions you need to rebuild it from source). Also consider it as a 'gateway', which means the NFS Ganesha nodes should have more NICs.
> 
> If NFS Ganesha is used, you won't need the second group of NICs on the Gluster nodes. Ganesha speaks directly with all nodes in the storage pool.
> 
> So you can start with 3 Gluster nodes plus a node for NFS Ganesha, and scale out later.
> 
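
Scaling out later is straightforward, for what it's worth: with a replica-3 volume you add bricks in sets of three and rebalance, roughly (hypothetical hosts and paths again):

    gluster peer probe gl04
    gluster peer probe gl05
    gluster peer probe gl06
    gluster volume add-brick vmstore \
        gl04:/bricks/brick1/vmstore gl05:/bricks/brick1/vmstore gl06:/bricks/brick1/vmstore
    gluster volume rebalance vmstore start
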
> 
> Best Regards,
> Strahil Nikolov


