[Gluster-users] Memory leak and very slow speed

Strahil Nikolov hunter86_bg at yahoo.com
Tue Oct 13 15:56:51 UTC 2020


Thanks for sharing.

Best Regards,
Strahil Nikolov

On Tuesday, 13 October 2020, 18:17:23 GMT+3, Benjamin Knoth <bknoth at gwdg.de> wrote:

Dear all,

I added the community repository to update Gluster to 8.1.

This fixed my memory leak, but now my logfile shows many errors every second:

Oct 11 11:50:29 vm01 gluster[908]: [2020-10-11 09:50:29.642031] C [mem-pool.c:873:mem_put] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_close+0x6a) [0x7f92d691960a] -->/usr/lib/x86_64-linux-gnu/glusterfs/8.1/xlator/performance/open-behind.so(+0x748a) [0x7f92d0b8f48a] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(mem_put+0xf0) [0x7f92d691c7f0] ) 0-mem-pool: invalid argument hdr->pool_list NULL [Das Argument ist ungültig]

I found this fix:

https://github.com/gluster/glusterfs/issues/1473
# gluster volume set <volname> open-behind off

After disabling open-behind, there are no more error messages in the log.
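For reference, the change can be confirmed afterwards with volume get (available since Gluster 3.8), which should now report the option as off:

# gluster volume get <volname> performance.open-behind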

Best regards
Benjamin

On 09.10.20 at 08:28, Knoth, Benjamin wrote:

All 3 servers have the same configuration with Debian Buster.

I used the backports repository for GlusterFS, but I can also try changing the source to the Gluster.org repositories and installing the latest version from there.
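For reference, switching a Buster node to the Gluster.org packages would look roughly like this; treat the URLs as a sketch and verify the exact repository layout on download.gluster.org first:

# wget -O - https://download.gluster.org/pub/gluster/glusterfs/8/rsa.pub | apt-key add -
# echo deb https://download.gluster.org/pub/gluster/glusterfs/8/LATEST/Debian/buster/amd64/apt buster main > /etc/apt/sources.list.d/gluster.list
# apt update && apt install glusterfs-server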

Best regards

Benjamin

________________________________ 
From: Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: Thursday, 8 October 2020 17:42:01
To: Gluster Users; Knoth, Benjamin
Subject: Re: [Gluster-users] Memory leak and very slow speed
 
Do you have the option to update your cluster to 8.1?

Are your clients in an HCI setup (server & client on the same system)?
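For reference, the running version can be checked on each node with:

# gluster --version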

Best Regards,
Strahil Nikolov

On Thursday, 8 October 2020, 17:07:31 GMT+3, Knoth, Benjamin <bknoth at gwdg.de> wrote:

Dear community,

Currently, I'm running a 3-node GlusterFS. Simple WordPress pages need 4-10 seconds to load. For about a month we have also had problems with memory leaks. All 3 nodes have 24 GB RAM (previously 12 GB), but GlusterFS uses all of it. Once all the RAM is used, the virtual machine loses its mountpoint. After a remount everything starts over, and that happens 2-3 times daily.

# Gluster Version: 8.0

# Affected process: this is a snapshot from top; the process starts with low memory usage and keeps growing as long as RAM is available.

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                                                     
869835 root      20   0   20,9g  20,3g   4340 S   2,3  86,5 152:10.62 /usr/sbin/glusterfs --process-name fuse --volfile-server=vm01 --volfile-server=vm02 --volfile-id=/gluster /var/www 
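As a side note, a statedump of this fuse client can show which allocation pools keep growing; assuming default settings, sending SIGUSR1 to the PID above writes the dump under /var/run/gluster:

# kill -USR1 869835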

# gluster volume info

Volume Name: gluster
Type: Replicate
Volume ID: c6d3beb1-b841-45e8-aa64-bb2be1e36e39
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vm01:/srv/glusterfs
Brick2: vm02:/srv/glusterfs
Brick3: vm03:/srv/glusterfs
Options Reconfigured:
performance.io-cache: on
performance.write-behind: on
performance.flush-behind: on
auth.allow: 10.10.10.*
performance.readdir-ahead: on
performance.quick-read: off
performance.cache-size: 1GB
performance.cache-refresh-timeout: 10
performance.read-ahead: off
performance.write-behind-window-size: 4MB
network.ping-timeout: 2
performance.io-thread-count: 32
performance.cache-max-file-size: 2MB
performance.md-cache-timeout: 60
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
network.inode-lru-limit: 90000

# Logs

I can't find any critical messages in any of the Gluster logs, but in syslog I found the oom-kill. After that, the mountpoint is gone.

 oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/srv-web.mount,task=glusterfs,pid=961,uid=0
[68263.478730] Out of memory: Killed process 961 (glusterfs) total-vm:21832212kB, anon-rss:21271576kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:41792kB oom_score_adj:0
[68264.243608] oom_reaper: reaped process 961 (glusterfs), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

And after the remount, it starts using more and more memory again.
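One simple way to document the growth over time (a sketch with standard tools; the log path is arbitrary) is to record the client's resident set size once a minute:

# while true; do echo "$(date) $(ps -o rss= -p $(pidof -s glusterfs))" >> /root/gluster-rss.log; sleep 60; done

Note that pidof -s picks only one glusterfs process; with several running, use the fuse client's PID directly.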

Alternatively, I could enable swap, but this slows the load time down extremely once GlusterFS starts using swap after all RAM is exhausted.
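For completeness, the usual way to add a swapfile on Debian, assuming 4 GB is a sensible size here:

# fallocate -l 4G /swapfile
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile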

If you need more information, let me know and I will send it too.

Best regards

Benjamin

-- 
Benjamin Knoth
Max Planck Digital Library (MPDL)
Systemadministration
Amalienstrasse 33
80799 Munich, Germany
http://www.mpdl.mpg.de

Mail: knoth at mpdl.mpg.de
Phone:  +49 89 909311 211
Fax:    +49-89-38602-280 
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users