<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Dear all,</p>
<p><br>
</p>
<p>I added the community repository to update Gluster to 8.1.</p>
<p><br>
</p>
<p>This fixed my memory leak, but now my log file shows many
errors every second:<br>
</p>
<p><br>
</p>
<p><span>Oct 11 11:50:29 vm01 gluster[908]: [2020-10-11
09:50:29.642031] C [mem-pool.c:873:mem_put]
(-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_close+0x6a)
[0x7f92d691960a]
-->/usr/lib/x86_64-linux-gnu/glusterfs/8.1/xlator/performance/open-behind.so(+0x748a)
[0x7f92d0b8f48a]
-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(mem_put+0xf0)
[0x7f92d691c7f0] ) 0-mem-pool: invalid argument
hdr->pool_list NULL [Das Argument ist ungültig]</span></p>
<p><br>
</p>
<p>I found this fix:<br>
</p>
<p><a href="https://github.com/gluster/glusterfs/issues/1473"
class="OWAAutoLink">https://github.com/gluster/glusterfs/issues/1473</a></p>
<pre><code># gluster volume set &lt;volname&gt; open-behind off
</code></pre>
<p>After disabling open-behind, there are no more error messages in the
log.</p>
<p>Best regards</p>
<p>Benjamin<br>
</p>
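<p>For reference, here is the same workaround applied to the volume
described further down in this thread (volume name "gluster", per the
<code>gluster volume info</code> output). This is a sketch of standard
gluster CLI usage, checking the current value before and after; it was
not retested here:</p>
<pre><code># check the current value of the option
gluster volume get gluster open-behind
# disable the open-behind translator on the volume
gluster volume set gluster open-behind off
# confirm the change took effect
gluster volume get gluster open-behind
</code></pre>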
<p><br>
</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 09.10.20 at 08:28, Knoth,
Benjamin wrote:<br>
</div>
<blockquote type="cite"
cite="mid:d4632a0d8fe9409f8b08a25138ddf787@gwdg.de">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text -->
<style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
<meta content="text/html; charset=UTF-8">
<style type="text/css" style="">
<!--
p
        {margin-top:0;
        margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr"
style="font-size:12pt; color:#000000;
font-family:Calibri,Helvetica,sans-serif">
<p>All 3 servers have the same configuration with Debian
Buster.</p>
<p><br>
</p>
<p>I used the backports repository for GlusterFS, but I can
also switch the source to the Gluster.org repositories
and install the latest version from there.<br>
</p>
<p><br>
</p>
<p>Best regards</p>
<p>Benjamin<br>
</p>
</div>
<hr tabindex="-1" style="display:inline-block; width:98%">
<div id="x_divRplyFwdMsg" dir="ltr"><font style="font-size:11pt"
face="Calibri, sans-serif" color="#000000"><b>From:</b>
Strahil Nikolov <a class="moz-txt-link-rfc2396E" href="mailto:hunter86_bg@yahoo.com"><hunter86_bg@yahoo.com></a><br>
<b>Sent:</b> Thursday, 8 October 2020 17:42:01<br>
<b>To:</b> Gluster Users; Knoth, Benjamin<br>
<b>Subject:</b> Re: [Gluster-users] Memory leak und very
slow speed</font>
<div> </div>
</div>
</div>
<font size="2"><span style="font-size:10pt;">
<div class="PlainText">Do you have the option to update your
cluster to 8.1?<br>
<br>
Are your clients in an HCI setup (server &amp; client are the same
system)?<br>
<br>
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
<br>
On Thursday, 8 October 2020, 17:07:31 GMT+3, Knoth,
Benjamin <a class="moz-txt-link-rfc2396E" href="mailto:bknoth@gwdg.de"><bknoth@gwdg.de></a> wrote:
<br>
<br>
Dear community,<br>
<br>
Currently, I'm running a 3-node GlusterFS cluster. Simple WordPress
pages need 4-10 seconds to load. For about a month we have also had
problems with memory leaks. All 3 nodes have 24 GB RAM (previously
12 GB), but GlusterFS uses all of it. Once all the RAM is used, the
virtual machine loses its mountpoint. After a remount everything
starts again, and that happens 2-3 times daily.<br>
<br>
# Gluster Version: 8.0<br>
<br>
# Affected process: a snapshot from top, where the process starts
with low memory usage and grows as long as RAM is available.<br>
<br>
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND<br>
869835 root 20 0 20,9g 20,3g 4340 S 2,3 86,5 152:10.62
/usr/sbin/glusterfs --process-name fuse --volfile-server=vm01
--volfile-server=vm02 --volfile-id=/gluster /var/www<br>
<br>
# gluster volume info<br>
<br>
Volume Name: gluster<br>
Type: Replicate<br>
Volume ID: c6d3beb1-b841-45e8-aa64-bb2be1e36e39<br>
Status: Started<br>
Snapshot Count: 0<br>
Number of Bricks: 1 x 3 = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: vm01:/srv/glusterfs<br>
Brick2: vm02:/srv/glusterfs<br>
Brick3: vm03:/srv/glusterfs<br>
Options Reconfigured:<br>
performance.io-cache: on<br>
performance.write-behind: on<br>
performance.flush-behind: on<br>
auth.allow: 10.10.10.*<br>
performance.readdir-ahead: on<br>
performance.quick-read: off<br>
performance.cache-size: 1GB<br>
performance.cache-refresh-timeout: 10<br>
performance.read-ahead: off<br>
performance.write-behind-window-size: 4MB<br>
network.ping-timeout: 2<br>
performance.io-thread-count: 32<br>
performance.cache-max-file-size: 2MB<br>
performance.md-cache-timeout: 60<br>
features.cache-invalidation: on<br>
features.cache-invalidation-timeout: 600<br>
performance.stat-prefetch: on<br>
network.inode-lru-limit: 90000<br>
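<br>
To narrow down which allocation pool is leaking, a statedump of the
affected process is usually the first step. A minimal sketch, assuming
the volume name and the fuse-client PID shown in the top output above;
by default the dump files land in /var/run/gluster:<br>
<pre><code># server side: dump the state of the volume's brick processes
gluster volume statedump gluster
# client side: send SIGUSR1 to the fuse mount process to make it dump its state
kill -USR1 869835
# look at the memory-pool sections of the resulting dumps
grep -A3 "pool-name" /var/run/gluster/glusterdump.*
</code></pre>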
<br>
# Logs<br>
<br>
I can't find any critical messages in any of the Gluster logs, but in
syslog I found the oom-kill. After that, the mountpoint is gone.<br>
<br>
oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/srv-web.mount,task=glusterfs,pid=961,uid=0<br>
[68263.478730] Out of memory: Killed process 961 (glusterfs)
total-vm:21832212kB, anon-rss:21271576kB, file-rss:0kB,
shmem-rss:0kB, UID:0 pgtables:41792kB oom_score_adj:0<br>
[68264.243608] oom_reaper: reaped process 961 (glusterfs),
now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB<br>
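<br>
To see how fast the client grows before the OOM killer fires, a small
sampling script can log its resident set size over time. This is a
generic sketch (not from the original thread); pass it the glusterfs
PID from top, with optional sample count and interval:<br>
<pre><code>#!/bin/sh
# Sample the resident set size (RSS, in kB) of a process at intervals.

rss_kb() {
    # the VmRSS line in /proc/PID/status holds the resident size in kB
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

pid="${1:-$$}"       # default: this shell itself, as a quick self-test
samples="${2:-3}"    # how many samples to take
interval="${3:-1}"   # seconds between samples

i=0
while [ "$i" -lt "$samples" ] && kill -0 "$pid" 2>/dev/null; do
    printf '%s pid=%s rss=%s kB\n' "$(date '+%F %T')" "$pid" "$(rss_kb "$pid")"
    i=$((i + 1))
    sleep "$interval"
done
</code></pre>
<br>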
<br>
After the remount, it starts again to use more and more memory.<br>
<br>
Alternatively, I could enable swap, but that slows page loads down
extremely once GlusterFS starts using swap after all the RAM is
used.<br>
<br>
If you need more information, let me know and I will send it, too.<br>
<br>
Best regards<br>
<br>
Benjamin<br>
<br>
________<br>
<br>
<br>
<br>
Community Meeting Calendar:<br>
<br>
Schedule -<br>
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
Bridge: <a href="https://bluejeans.com/441850968"
moz-do-not-send="true">https://bluejeans.com/441850968</a><br>
<br>
Gluster-users mailing list<br>
<a class="moz-txt-link-abbreviated" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a><br>
<a
href="https://lists.gluster.org/mailman/listinfo/gluster-users"
moz-do-not-send="true">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
</div>
</span></font>
</blockquote>
<pre class="moz-signature" cols="72">--
Benjamin Knoth
Max Planck Digital Library (MPDL)
Systemadministration
Amalienstrasse 33
80799 Munich, Germany
<a class="moz-txt-link-freetext" href="http://www.mpdl.mpg.de">http://www.mpdl.mpg.de</a>
Mail: <a class="moz-txt-link-abbreviated" href="mailto:knoth@mpdl.mpg.de">knoth@mpdl.mpg.de</a>
Phone: +49 89 909311 211
Fax: +49-89-38602-280</pre>
</body>
</html>