[Gluster-users] How to configure?

Strahil Nikolov hunter86_bg at yahoo.com
Wed Mar 15 07:30:04 UTC 2023


Do you use brick multiplexing?
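If not, it could help a lot with your RAM problem. A rough sketch of how
to check and enable it (it's a cluster-wide option, and bricks only pick
it up when they are restarted):

    # show the current setting (off by default)
    gluster volume get all cluster.brick-multiplex
    # enable it: compatible bricks then share one glusterfsd process,
    # which greatly reduces the per-brick memory overhead
    gluster volume set all cluster.brick-multiplex on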
Best Regards,
Strahil Nikolov
 
 
On Tue, Mar 14, 2023 at 16:44, Diego Zuccato <diego.zuccato at unibo.it> wrote:

Hello all.

Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual 
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), 
configured in replica 3 arbiter 1. We use Debian packages from the
latest Gluster 9.x repository.

It seems 192GB of RAM is not enough to handle 30 data bricks + 15
arbiters, and I often have to restart glusterfsd because glusterfs
processes get killed by the OOM killer.
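This is roughly how I check it (just a quick sketch; log wording may
differ on your system):

    # resident memory of the brick processes, biggest first
    ps -eo rss,pid,cmd --sort=-rss | grep '[g]lusterfsd' | head
    # confirm the kills really come from the kernel OOM killer
    journalctl -k | grep -i 'killed process'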
On top of that, performance has been quite bad, especially since we
reached about 20M files. Moreover, one of the servers had motherboard
issues that caused memory errors, corrupting the filesystems of some
bricks (XFS; they required "xfs_repair -L" to fix).
Now I'm getting lots of "stale file handle" errors and other anomalies
(like directories that appear empty from the client but still contain
files on some bricks), and auto-healing seems unable to complete.
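For reference, these are the commands I use to check heal progress
(VOLNAME is a placeholder for the actual volume name):

    # entries still pending heal, listed per brick
    gluster volume heal VOLNAME info
    # just the counters - less noisy with millions of files
    gluster volume heal VOLNAME statistics heal-count
    # entries in split-brain that self-heal can't resolve on its own
    gluster volume heal VOLNAME info split-brain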

Since I can't keep up with manually fixing all these issues, I'm
considering a backup + destroy + recreate strategy.

I think that if I reduce the number of bricks per server to just 5
(each a RAID1 of 6x12TB disks) I might solve the RAM issues - at the
cost of longer heal times when a disk fails. Am I right, or is it
useless? Any other recommendations?
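For the recreated volume I'm thinking of something like this (only a
sketch - volume name, hosts and paths are made up; one RAID-backed
brick per mount point, with the arbiters rotated across the servers):

    gluster volume create bigvol replica 3 arbiter 1 \
        srv1:/bricks/b0/data srv2:/bricks/b0/data srv3:/arb/b0/data \
        srv2:/bricks/b1/data srv3:/bricks/b1/data srv1:/arb/b1/data
    gluster volume start bigvol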
The servers have space for another 6 disks each. Maybe those could be
used for some SSDs to speed up access?
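If I added SSDs, I'd probably try them as an LVM cache in front of each
brick's logical volume, roughly like this (hypothetical device and
VG/LV names - not tested on this cluster):

    # add the SSD to the brick's volume group and attach it as cache
    pvcreate /dev/sdX
    vgextend vg_brick0 /dev/sdX
    lvcreate --type cache-pool -L 400G -n brick0_cache vg_brick0 /dev/sdX
    lvconvert --type cache --cachepool vg_brick0/brick0_cache vg_brick0/brick0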

TIA.

-- 
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786