[Gluster-users] Gluster usage scenarios in HPC cluster management

Yaniv Kaul ykaul at redhat.com
Tue Mar 23 08:20:14 UTC 2021


On Tue, Mar 23, 2021 at 10:02 AM Diego Zuccato <diego.zuccato at unibo.it>
wrote:

> On 22/03/21 16:54, Erik Jacobson wrote:
>
> > So if you had 24 leaders like HLRS, there would be 8 replica-3 subvolumes
> > at the bottom layer, distributed across (a distributed-replicated
> > volume).
> I still have to grasp the "leader node" concept.
> Weren't gluster nodes "peers"? Or by "leader" do you mean that it's
> mentioned in the fstab entry, like
> /l1,l2,l3:gv0 /mnt/gv0 glusterfs defaults 0 0
> while the peer list includes l1, l2, l3 and a bunch of other nodes?
>
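For illustration, here is roughly what that layering looks like at create
time (hostnames and brick paths are made up, and I'm using 6 bricks instead
of 24 to keep it short):

    # every brick host must already be in the trusted pool
    gluster peer probe leader2        # repeat for each leader
    # bricks are grouped 3 at a time into replica sets, so 6 bricks give
    # 2 replica-3 subvolumes with files distributed across them;
    # 24 bricks would give the 8 subvolumes mentioned above
    gluster volume create gv0 replica 3 \
        leader1:/data/brick1/gv0 leader2:/data/brick1/gv0 leader3:/data/brick1/gv0 \
        leader4:/data/brick1/gv0 leader5:/data/brick1/gv0 leader6:/data/brick1/gv0
    gluster volume start gv0
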
> > So we would have 24 leader nodes; each leader would have a disk serving
> > 4 bricks (one of which is simply a lock FS for CTDB, one is sharded,
> > one is for logs, and one is heavily optimized for non-object expanded
> > tree NFS). The term "disk" is loose.
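
For reference, the sharded one is just a normal volume with the shard
feature switched on, something like this (the volume name and block size
here are placeholders, not Erik's actual settings):

    gluster volume set images features.shard on
    gluster volume set images features.shard-block-size 64MB
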
> That's a way bigger system than ours (3 nodes, replica 3 arbiter 1, up to
> 36 bricks per node).
>
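Side note for anyone following along: "replica 3 arbiter 1" is the variant
where the third brick of each set stores only file names and metadata, not
file data. A minimal sketch, with placeholder hostnames and paths:

    gluster volume create gv1 replica 3 arbiter 1 \
        node1:/data/brick1/gv1 node2:/data/brick1/gv1 node3:/data/arbiter1/gv1
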
> > Specs of a leader node at a customer site:
> >  * 256G RAM
> Glip! 256G for 4 bricks... No wonder I've had trouble running 26
> bricks in 64 GB of RAM... :)
>

If you can recompile Gluster, you may want to experiment with disabling
memory pools - this should save you some memory.
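Roughly along these lines from a source checkout - though the exact
compile-time switch can vary between releases, so check
libglusterfs/src/mem-pool.c in your tree first (GF_DISABLE_MEMPOOL is the
define I have in mind):

    # build from a glusterfs source tree with memory pools compiled out
    ./autogen.sh
    ./configure CFLAGS="-g -O2 -DGF_DISABLE_MEMPOOL"
    make -j$(nproc)
    sudo make install
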
Y.

>
> --
> Diego Zuccato
> DIFA - Dip. di Fisica e Astronomia
> Servizi Informatici
> Alma Mater Studiorum - Università di Bologna
> V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
> tel.: +39 051 20 95786