<div dir="ltr">Hi,<div><br></div><div>For the memory increase, please capture statedumps of the process at intervals of an hour and send it across.</div><div><a href="https://docs.gluster.org/en/latest/Troubleshooting/statedump/">https://docs.gluster.org/en/latest/Troubleshooting/statedump/</a> describes how to generate a statedump for the client process.<br></div><div><br></div><div>Regards,</div><div>Nithya</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 13 Nov 2019 at 05:18, Jamie Lawrence <<a href="mailto:jlawrence@squaretrade.com">jlawrence@squaretrade.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Glusternauts,<br>
On Wed, 13 Nov 2019 at 05:18, Jamie Lawrence <jlawrence@squaretrade.com> wrote:

Glusternauts,

I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients over from a different, much older cluster; those machines are running the 5.9 client, and they spontaneously disconnect. The process was killed with signal 15, but no user sent it, and I can't imagine why another daemon would have.
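One way to catch whoever delivers that signal 15 the next time is an audit rule on the kill syscall, roughly like this (the rule key is arbitrary and -F arch=b64 assumes a 64-bit host):

    # Log every kill() that delivers SIGTERM (a1 is the signal argument), tagged for searching.
    auditctl -a always,exit -F arch=b64 -S kill -F a1=15 -k gluster_sigterm

    # After the client exits again, see which process and user sent it.
    ausearch -k gluster_sigterm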

[2019-11-12 22:52:42.790687] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /mnt/informatica/sftp/dectools
[2019-11-12 22:52:42.791414] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f141e4466ba] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55711c79994d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55711c7997b4] ) 0-: received signum (15), shutting down
[2019-11-12 22:52:42.791435] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/mnt/informatica/sftp/dectools'.
[2019-11-12 22:52:42.791444] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/mnt/informatica/sftp/dectools'.
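Given the timestamps above, something along these lines can show whether anything else on the host (an unmount, a service restart, the OOM killer) acted in that window; note that the gluster log is in UTC while journalctl uses local time, so adjust the range accordingly:

    # Search the system journal around the time of the exit (range is an example).
    journalctl --since '2019-11-12 22:40:00' --until '2019-11-12 22:55:00' \
        | grep -iE 'glusterfs|umount|killed process|oom'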

Nothing in the log for about 12 minutes previously.

Volume info:

Volume Name: sc5_informatica_prod_shared
Type: Distributed-Replicate
Volume ID: db5d2693-59e1-40e0-9c28-7a2385b2524f
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: sc5-storage-1:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick2: sc5-storage-2:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick3: sc5-storage-3:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick4: sc5-storage-4:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick5: sc5-storage-5:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick6: sc5-storage-6:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick7: sc5-storage-7:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick8: sc5-storage-8:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick9: sc5-storage-9:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Options Reconfigured:
performance.readdir-ahead: disable
performance.quick-read: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

One very disturbing thing I'm noticing is that memory use on the client seems to be growing at a rate of about 1 MB per 10 minutes of active use. One glusterfs process I'm looking at is consuming about 2.4 GB right now and still growing. Does 5.9 have a memory leak, too?
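To put numbers on that growth and line it up with the statedumps, a simple sampler such as the following is usually enough (the pgrep pattern, interval and log path are just examples):

    # Record the client's resident memory every 10 minutes.
    PID=$(pgrep -f 'glusterfs.*dectools')
    while sleep 600; do
        echo "$(date -u +%FT%TZ) $(ps -o rss= -p "$PID") kB" >> /tmp/glusterfs-rss.log
    done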

-j