[Gluster-users] Memory leak and very slow speed

Knoth, Benjamin bknoth at gwdg.de
Fri Oct 9 06:28:58 UTC 2020


All 3 servers have the same configuration, running Debian Buster.


I used the backports repository for GlusterFS, but I can also try switching to the Gluster.org repositories and installing the latest version from there.
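For reference, roughly how I would switch to the Gluster.org repository on Buster; the repository path and key URL below are my assumption of the usual layout and should be checked against download.gluster.org first:

# Assumed repository layout for GlusterFS 8 on Debian Buster; verify the
# exact paths on https://download.gluster.org before running this.
curl -fsSL https://download.gluster.org/pub/gluster/glusterfs/8/rsa.pub | apt-key add -
echo "deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/8/LATEST/Debian/buster/amd64/apt buster main" > /etc/apt/sources.list.d/gluster.list
apt-get update
apt-get install --only-upgrade glusterfs-server glusterfs-client glusterfs-common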


Best regards

Benjamin

________________________________
From: Strahil Nikolov <hunter86_bg at yahoo.com>
Sent: Thursday, 8 October 2020 17:42:01
To: Gluster Users; Knoth, Benjamin
Subject: Re: [Gluster-users] Memory leak and very slow speed

Do you have the option to update your cluster to 8.1?

Are your clients in an HCI setup (server and client on the same system)?


Best Regards,
Strahil Nikolov






On Thursday, 8 October 2020 at 17:07:31 GMT+3, Knoth, Benjamin <bknoth at gwdg.de> wrote:

Dear community,




Currently I'm running a 3-node GlusterFS cluster. Simple WordPress pages need 4-10 seconds to load. For about a month we have also had problems with memory leaks. All 3 nodes have 24 GB RAM (previously 12 GB), but GlusterFS uses all of it. Once all the RAM is consumed, the virtual machine loses its mountpoint. After a remount everything starts over, and that happens 2-3 times a day.




# Gluster Version: 8.0




# Affected process: This is a snapshot from top; the process starts with low memory usage and keeps growing as long as RAM is available.







   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
869835 root      20   0   20,9g  20,3g   4340 S   2,3  86,5 152:10.62 /usr/sbin/glusterfs --process-name fuse --volfile-server=vm01 --volfile-server=vm02 --volfile-id=/gluster /var/www
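To show how fast the RSS grows between remounts, I can log it periodically with a small loop like this (the pgrep pattern and log path are just my sketch):

# Log the FUSE client's RSS (in kB) every 5 minutes to track the growth.
# Assumes a single glusterfs FUSE process on the node.
while true; do
    pid=$(pgrep -f 'glusterfs --process-name fuse' | head -n1)
    echo "$(date -Is) $(ps -o rss= -p "$pid") kB" >> /var/log/glusterfs-rss.log
    sleep 300
done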







# gluster volume info



Volume Name: gluster
Type: Replicate
Volume ID: c6d3beb1-b841-45e8-aa64-bb2be1e36e39
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vm01:/srv/glusterfs
Brick2: vm02:/srv/glusterfs
Brick3: vm03:/srv/glusterfs
Options Reconfigured:
performance.io-cache: on
performance.write-behind: on
performance.flush-behind: on
auth.allow: 10.10.10.*
performance.readdir-ahead: on
performance.quick-read: off
performance.cache-size: 1GB
performance.cache-refresh-timeout: 10
performance.read-ahead: off
performance.write-behind-window-size: 4MB
network.ping-timeout: 2
performance.io-thread-count: 32
performance.cache-max-file-size: 2MB
performance.md-cache-timeout: 60
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
network.inode-lru-limit: 90000
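In case it matters for the discussion, these are the kinds of client-side changes I could try to limit cache growth; the values below are only examples, not something I have tested yet:

# Examples only: shrink client-side caches that hold memory.
gluster volume set gluster performance.cache-size 256MB
gluster volume set gluster network.inode-lru-limit 50000

# The FUSE client also accepts an lru-limit mount option that caps the
# number of cached inodes (65536 here is just an example value).
mount -t glusterfs -o lru-limit=65536,backup-volfile-servers=vm02:vm03 vm01:/gluster /var/www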






# Logs

I can't find any critical messages in any of the Gluster logs, but in syslog I found the oom-kill. After that, the mountpoint is gone.






 oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/srv-web.mount,task=glusterfs,pid=961,uid=0
[68263.478730] Out of memory: Killed process 961 (glusterfs) total-vm:21832212kB, anon-rss:21271576kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:41792kB oom_score_adj:0
[68264.243608] oom_reaper: reaped process 961 (glusterfs), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
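Before the OOM killer fires I can also take a statedump of the FUSE client to see which allocation pools keep growing; sending SIGUSR1 to the client writes the dump under /var/run/gluster/ by default (exact filename may differ):

# Trigger a statedump of the FUSE client; the file appears roughly as
# /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
kill -USR1 $(pgrep -f 'glusterfs --process-name fuse')
ls -l /var/run/gluster/glusterdump.*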


And after the remount it starts using more and more memory again.





Alternatively I could enable swap, but that slows down page load times drastically once GlusterFS starts swapping after all RAM is used.
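As a stopgap, instead of letting the OOM killer take the client down, a cron job could remount once the process passes a threshold; this is only a sketch (the threshold, paths and mount command are assumptions for my setup):

#!/bin/sh
# Remount /var/www once the FUSE client exceeds ~16 GB RSS (example threshold).
LIMIT_KB=16000000
pid=$(pgrep -f 'glusterfs --process-name fuse' | head -n1)
rss=$(ps -o rss= -p "$pid" 2>/dev/null | tr -d ' ')
if [ -n "$rss" ] && [ "$rss" -gt "$LIMIT_KB" ]; then
    umount -l /var/www
    mount -t glusterfs vm01:/gluster /var/www
fi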




If you need more information, let me know and I will send it as well.





Best regards

Benjamin

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users