[Gluster-users] Client disconnections, memory use
Nithya Balachandran
nbalacha at redhat.com
Wed Nov 13 05:59:24 UTC 2019
Hi,
For the memory increase, please capture statedumps of the client process at
intervals of an hour and send them across.
https://docs.gluster.org/en/latest/Troubleshooting/statedump/ describes how
to generate a statedump for the client process.
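If you want to automate the collection, something along these lines should
work (a rough sketch only; it assumes the mount path from your log and that
pgrep is available on the client):

import os
import signal
import subprocess
import time

MOUNT = "/mnt/informatica/sftp/dectools"  # mount point taken from your log
INTERVAL = 3600                           # one statedump per hour
COUNT = 6                                 # how many statedumps to collect

def client_pid(mount):
    # Find the glusterfs fuse client process serving this mount point.
    out = subprocess.check_output(["pgrep", "-f", "glusterfs.*" + mount], text=True)
    return int(out.split()[0])

for _ in range(COUNT):
    pid = client_pid(MOUNT)
    # SIGUSR1 asks glusterfs to write a statedump (by default under /var/run/gluster).
    os.kill(pid, signal.SIGUSR1)
    print("requested statedump from pid", pid)
    time.sleep(INTERVAL)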
Regards,
Nithya
On Wed, 13 Nov 2019 at 05:18, Jamie Lawrence <jlawrence at squaretrade.com>
wrote:
> Glusternauts,
>
> I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients
> from a different, much older, cluster. Those clients are now running the
> 5.9 client and spontaneously disconnect. The log shows signal 15, but no
> user killed the process, and I can't imagine why another daemon would have.
>
>
> [2019-11-12 22:52:42.790687] I [fuse-bridge.c:5144:fuse_thread_proc]
> 0-fuse: initating unmount of /mnt/informatica/sftp/dectools
> [2019-11-12 22:52:42.791414] W [glusterfsd.c:1500:cleanup_and_exit]
> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f141e4466ba]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55711c79994d]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55711c7997b4] ) 0-:
> received signum (15), shutting down
> [2019-11-12 22:52:42.791435] I [fuse-bridge.c:5914:fini] 0-fuse:
> Unmounting '/mnt/informatica/sftp/dectools'.
> [2019-11-12 22:52:42.791444] I [fuse-bridge.c:5919:fini] 0-fuse: Closing
> fuse connection to '/mnt/informatica/sftp/dectools'.
>
> Nothing in the log for about 12 minutes previously.
>
> Volume info:
>
> Volume Name: sc5_informatica_prod_shared
> Type: Distributed-Replicate
> Volume ID: db5d2693-59e1-40e0-9c28-7a2385b2524f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: sc5-storage-1:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick2: sc5-storage-2:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick3: sc5-storage-3:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick4: sc5-storage-4:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick5: sc5-storage-5:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick6: sc5-storage-6:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick7: sc5-storage-7:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick8: sc5-storage-8:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick9: sc5-storage-9:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Options Reconfigured:
> performance.readdir-ahead: disable
> performance.quick-read: disable
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
>
> One very disturbing thing I'm noticing is that memory use on the client
> seems to be growing at a rate of about 1 MB per 10 minutes of active use.
> One glusterfs process I'm looking at is consuming about 2.4 GB right now
> and growing. Does 5.9 have a memory leak, too?
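>
> (For anyone who wants to reproduce the measurement: a quick sketch along
> these lines is enough to sample the client's resident set over time; it
> reads VmRSS from /proc and assumes you fill in the glusterfs client's pid,
> e.g. from pgrep.)
>
> import time
>
> PID = 12345  # substitute the glusterfs client pid
>
> def rss_kb(pid):
>     # VmRSS in /proc/<pid>/status is reported in kB
>     with open("/proc/%d/status" % pid) as f:
>         for line in f:
>             if line.startswith("VmRSS:"):
>                 return int(line.split()[1])
>
> while True:
>     print(time.strftime("%Y-%m-%d %H:%M:%S"), rss_kb(PID), "kB")
>     time.sleep(600)  # sample every 10 minutes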
>
>
> -j