<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 30 Jul 2019 at 16:37, Diego Remolina <<a href="mailto:dijuremo@gmail.com" target="_blank">dijuremo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">This option is enabled. In which version has this been patched? This is a file server and disabling readdir-ahead will have a hard impact on performance.</div></div></blockquote><div><br></div><div>This was fixed in 5.3 (<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1659676" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1659676</a>). </div><div>This bug is only relevant if the gluster fuse client is the one that is using up memory.</div><div><br></div><div>The first thing to do would be to determine which process is using up the memory and to get a statedump. </div><div><br></div><div><font face="courier new, monospace" size="1">ps <pid> </font>should give you the details of the gluster process .</div><div><br></div><div>Regards,</div><div>Nithya</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div><br></div><div><span style="font-family:monospace"><span style="color:rgb(0,0,0)">[root@ysmha01 ~]# gluster v get export readdir-ahead </span><br>Option Value <br>------ ----- <br>performance.readdir-ahead on </span></div><div><span style="font-family:monospace"><br>The guide recommends enabling the setting:<br><br></span><a href="https://docs.gluster.org/en/latest/Administrator%20Guide/Accessing%20Gluster%20from%20Windows/" target="_blank">https://docs.gluster.org/en/latest/Administrator%20Guide/Accessing%20Gluster%20from%20Windows/</a><span style="font-family:monospace"><br><br></span></div><div><span 
style="font-family:monospace">Diego</span><br></div><div><span style="font-family:monospace"><br></span></div><div><span style="font-family:monospace"> <br></span></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 29, 2019 at 11:52 PM Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><font face="courier new, monospace" size="1">Hi Diego,</font><div><font face="courier new, monospace" size="1"><br></font></div><div><font face="courier new, monospace" size="1">Please do the following:</font></div><div><font face="courier new, monospace" size="1"><br></font></div><div><font face="courier new, monospace" size="1">gluster v get <volname> readdir-ahead</font></div><div><font face="courier new, monospace" size="1"><br></font></div><div><font face="courier new, monospace" size="1">If this is enabled, please disable it and see if it helps. There was a leak in the opendir codepath that was fixed in later releases.</font></div><div><font face="courier new, monospace" size="1"><br></font><div><div><div><font face="courier new, monospace" size="1">Regards,</font></div><div><font face="courier new, monospace" size="1">Nithya</font></div><div><br></div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 30 Jul 2019 at 09:04, Diego Remolina <<a href="mailto:dijuremo@gmail.com" target="_blank">dijuremo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Will this kill the actual process or simply trigger the dump? Which process should I kill? 
The brick process in the system or the fuse mount?<div dir="auto"><br></div><div dir="auto">Diego</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jul 29, 2019, 23:27 Nithya Balachandran <<a href="mailto:nbalacha@redhat.com" target="_blank">nbalacha@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, 30 Jul 2019 at 05:44, Diego Remolina <<a href="mailto:dijuremo@gmail.com" rel="noreferrer" target="_blank">dijuremo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Unfortunately statedump crashes on both machines, even freshly rebooted.</div></blockquote><div><br></div><div>Do you see any statedump files in /var/run/gluster? This looks more like the gluster cli crashed. </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><br></div><div>[root@ysmha01 ~]# gluster --print-statedumpdir<br>/var/run/gluster<br>[root@ysmha01 ~]# gluster v statedump export<br>Segmentation fault (core dumped)<br></div><div><br></div><div>[root@ysmha02 ~]# uptime<br> 20:12:20 up 6 min, 1 user, load average: 0.72, 0.52, 0.24<br>[root@ysmha02 ~]# gluster --print-statedumpdir<br>/var/run/gluster<br>[root@ysmha02 ~]# gluster v statedump export<br>Segmentation fault (core dumped)<br></div><div><br></div><div>I rebooted today after 40 days. 
Gluster was eating up shy of 40GB of RAM out of 64.</div><div><br></div><div>What would you recommend to be the next step?</div><div><br></div><div>Diego</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 4, 2019 at 5:07 AM Poornima Gurusiddaiah <<a href="mailto:pgurusid@redhat.com" rel="noreferrer" target="_blank">pgurusid@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Could you also provide the statedump of the gluster process consuming 44G of RAM [1]? Please make sure the statedump is taken when the memory consumption is very high, like tens of GBs; otherwise we may not be able to identify the issue. Also, I see that the cache size is 10GB. Is that something you arrived at after doing some tests? It is higher than normal.<br></div><div><br></div><div>[1] <a href="https://docs.gluster.org/en/v3/Troubleshooting/statedump/#generate-a-statedump" rel="noreferrer" target="_blank">https://docs.gluster.org/en/v3/Troubleshooting/statedump/#generate-a-statedump</a><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Mar 4, 2019 at 12:23 AM Diego Remolina <<a href="mailto:dijuremo@gmail.com" rel="noreferrer" target="_blank">dijuremo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Hi,<div><br></div><div>I will not be able to test gluster-6rc because this is a production environment and it takes several days for memory to grow a lot.</div><div><br></div><div>The Samba server is hosting all types of files, small and large, from small roaming profile files to bigger files like the Adobe suite and Autodesk Revit (file sizes in the hundreds of megabytes).</div><div><br></div><div>As I stated before, this same issue was present back 
with 3.8.x which I was running before.</div><div><br></div><div>The information you requested:</div><div><br></div><div><div>[root@ysmha02 ~]# gluster v info export</div><div><br></div><div>Volume Name: export</div><div>Type: Replicate</div><div>Volume ID: b4353b3f-6ef6-4813-819a-8e85e5a95cff</div><div>Status: Started</div><div>Snapshot Count: 0</div><div>Number of Bricks: 1 x 2 = 2</div><div>Transport-type: tcp</div><div>Bricks:</div><div>Brick1: 10.0.1.7:/bricks/hdds/brick</div><div>Brick2: 10.0.1.6:/bricks/hdds/brick</div><div>Options Reconfigured:</div><div>performance.stat-prefetch: on</div><div>performance.cache-min-file-size: 0</div><div>network.inode-lru-limit: 65536</div><div>performance.cache-invalidation: on</div><div>features.cache-invalidation: on</div><div>performance.md-cache-timeout: 600</div><div>features.cache-invalidation-timeout: 600</div><div>performance.cache-samba-metadata: on</div><div>transport.address-family: inet</div><div>server.allow-insecure: on</div><div>performance.cache-size: 10GB</div><div>cluster.server-quorum-type: server</div><div>nfs.disable: on</div><div>performance.io-thread-count: 64</div><div>performance.io-cache: on</div><div>cluster.lookup-optimize: on</div><div>cluster.readdir-optimize: on</div><div>server.event-threads: 5</div><div>client.event-threads: 5</div><div>performance.cache-max-file-size: 256MB</div><div>diagnostics.client-log-level: INFO</div><div>diagnostics.brick-log-level: INFO</div><div>cluster.server-quorum-ratio: 51%</div></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div></div><div id="m_119331930431500301gmail-m_-8568737347882478303gmail-m_-4833401328225509760m_8374238785685214358gmail-m_-3340449949414300599m_-7001269052163580460gmail-m_-4519402017059013283gmail-m_-1483290904248086332gmail-m_-4429654867678350131DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"><br>
</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 1, 2019 at 11:07 PM Poornima Gurusiddaiah <<a href="mailto:pgurusid@redhat.com" rel="noreferrer" target="_blank">pgurusid@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto"><div>This high memory consumption is not normal. It looks like a memory leak. Is it possible to try it on a test setup with gluster-6rc? What kind of workload goes into the fuse mount? Large files or small files? We need the following information to debug further: </div><div dir="auto">- Gluster volume info output</div><div dir="auto">- Statedump of the Gluster fuse mount process consuming 44G of RAM.</div><div dir="auto"><br></div><div dir="auto">Regards,</div><div dir="auto">Poornima</div><div dir="auto"><br><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">On Sat, Mar 2, 2019, 3:40 AM Diego Remolina <<a href="mailto:dijuremo@gmail.com" rel="noreferrer" target="_blank">dijuremo@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div>I am using glusterfs with two servers as a file server, sharing files via samba and ctdb. I cannot use the samba vfs gluster plugin due to a bug in the current CentOS version of samba. 
So I am mounting via fuse and exporting the volume to samba from the mount point.</div><div><br></div><div>Upon initial boot, the server where samba is exporting files climbs to ~10GB of RAM within a couple of hours of use. From then on, it is a constant slow memory increase. In the past, with gluster 3.8.x, we had to reboot the servers at around 30 days. With gluster 4.1.6 we are getting up to 48 days, but RAM use is at 48GB out of 64GB. Is this normal?</div><div><br></div><div>The particular versions are below:</div><div><br></div><div><div>[root@ysmha01 home]# uptime</div><div>16:59:39 up 48 days, 9:56, 1 user, load average: 3.75, 3.17, 3.00</div></div><div>[root@ysmha01 home]# rpm -qa | grep gluster<br></div><div><div>centos-release-gluster41-1.0-3.el7.centos.noarch</div><div>glusterfs-server-4.1.6-1.el7.x86_64</div><div>glusterfs-api-4.1.6-1.el7.x86_64</div><div>centos-release-gluster-legacy-4.0-2.el7.centos.noarch</div><div>glusterfs-4.1.6-1.el7.x86_64</div><div>glusterfs-client-xlators-4.1.6-1.el7.x86_64</div><div>libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.8.x86_64</div><div>glusterfs-fuse-4.1.6-1.el7.x86_64</div><div>glusterfs-libs-4.1.6-1.el7.x86_64</div><div>glusterfs-rdma-4.1.6-1.el7.x86_64</div><div>glusterfs-cli-4.1.6-1.el7.x86_64</div><div>samba-vfs-glusterfs-4.8.3-4.el7.x86_64</div><div>[root@ysmha01 home]# rpm -qa | grep samba</div><div>samba-common-tools-4.8.3-4.el7.x86_64</div><div>samba-client-libs-4.8.3-4.el7.x86_64</div><div>samba-libs-4.8.3-4.el7.x86_64</div><div>samba-4.8.3-4.el7.x86_64</div><div>samba-common-libs-4.8.3-4.el7.x86_64</div><div>samba-common-4.8.3-4.el7.noarch</div><div>samba-vfs-glusterfs-4.8.3-4.el7.x86_64</div><div>[root@ysmha01 home]# cat /etc/redhat-release</div><div>CentOS Linux release 7.6.1810 (Core)</div></div><div><br></div><div>RAM view using top<br></div><div>Tasks: 398 total, 1 running, 397 sleeping, 0 stopped, 0 zombie</div><div>%Cpu(s): 7.0 us, 9.3 sy, 1.7 ni, 71.6 id, 9.7 wa, 0.0 hi, 0.8 si, 0.0 
st</div><div>KiB Mem : 65772000 total, 1851344 free, 60487404 used, 3433252 buff/cache</div><div>KiB Swap: 0 total, 0 free, 0 used. 3134316 avail Mem</div><div><br></div><div> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND</div><div> 9953 root 20 0 3727912 946496 3196 S 150.2 1.4 38626:27 glusterfsd</div><div> 9634 root 20 0 48.1g 47.2g 3184 S 96.3 75.3 29513:55 glusterfs</div><div>14485 root 20 0 3404140 63780 2052 S 80.7 0.1 1590:13 glusterfs</div></div><div><br></div><div><div>[root@ysmha01 ~]# gluster v status export</div><div>Status of volume: export</div><div>Gluster process TCP Port RDMA Port Online Pid</div><div>------------------------------------------------------------------------------</div><div>Brick 10.0.1.7:/bricks/hdds/brick 49157 0 Y 13986</div><div>Brick 10.0.1.6:/bricks/hdds/brick 49153 0 Y 9953</div><div>Self-heal Daemon on localhost N/A N/A Y 14485</div><div>Self-heal Daemon on 10.0.1.7 N/A N/A Y 21934</div><div>Self-heal Daemon on 10.0.1.5 N/A N/A Y 4598</div><div><br></div><div>Task Status of Volume export</div><div>------------------------------------------------------------------------------</div><div>There are no active volume tasks</div></div><div><br></div><div><br></div><div><br></div></div></div></div></div></div><div id="m_119331930431500301gmail-m_-8568737347882478303gmail-m_-4833401328225509760m_8374238785685214358gmail-m_-3340449949414300599m_-7001269052163580460gmail-m_-4519402017059013283gmail-m_-1483290904248086332gmail-m_-4429654867678350131gmail-m_1092070095161815064m_5816452762692804512DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2"><br>
</div>
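<div>Before taking a statedump, it helps to confirm which gluster process actually holds the memory (the top output above points at the glusterfs fuse client at 47.2g RES). A generic sketch of that check, using only standard procps tools and nothing gluster-specific:</div>

```shell
# Rank processes by resident set size (RSS, in KiB) and keep the header
# plus any gluster* entries. On the affected node the fuse client, brick
# process (glusterfsd), and self-heal daemon will all show up here.
ps -eo pid,rss,comm --sort=-rss | awk 'NR==1 || /gluster/' | head -n 5
```

<div>The PID from the top line is the one whose statedump is worth collecting.</div>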
_______________________________________________<br>
Gluster-users mailing list<br>
<a href="mailto:Gluster-users@gluster.org" rel="noreferrer noreferrer" target="_blank">Gluster-users@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer noreferrer noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a></blockquote></div></div></div>
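<div>On the question of whether triggering a dump kills anything: it does not. glusterfs traps SIGUSR1, writes its state to the statedump directory, and keeps running. A toy sketch of that mechanism (no gluster needed; the commands in the comments are the real-world equivalents and assume a volume named "export"):</div>

```shell
# For real gluster processes you would run:
#   gluster volume statedump export        # dumps the brick processes
#   kill -SIGUSR1 <pid-of-fuse-client>     # dumps the fuse mount client
# Toy demonstration that a SIGUSR1 handler runs and the process survives:
(
  trap 'echo "handler ran: would write statedump here"' USR1
  sleep 2
) &
worker=$!
sleep 0.5                 # give the subshell time to install the trap
kill -USR1 "$worker"      # triggers the handler; does not terminate it
wait "$worker" && echo "worker exited normally"
```

<div>The worker keeps running after the signal and exits on its own, which is exactly how the gluster daemons behave when dumped.</div>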
</blockquote></div>
</blockquote></div>
</blockquote></div>
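<div>Even when the gluster CLI itself segfaults, dump files may already have been written. A sketch for checking, assuming the default statedump directory (confirm the real path with <span style="font-family:monospace">gluster --print-statedumpdir</span>; DUMPDIR is only an illustrative override):</div>

```shell
# List the newest statedump files, if any were written.
dumpdir="${DUMPDIR:-/var/run/gluster}"
if [ -d "$dumpdir" ]; then
  ls -lt "$dumpdir" | head -n 5     # newest dump files first
else
  echo "no statedump directory at $dumpdir"
fi
```

<div>If files show up here despite the segfault, the dump itself succeeded and only the CLI crashed.</div>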
</blockquote></div></div>
</blockquote></div>
</blockquote></div></div></div></div></div></div>
</blockquote></div></div>
</blockquote></div></div>