[Gluster-users] Client disconnections, memory use

Jamie Lawrence jlawrence at squaretrade.com
Tue Nov 12 23:48:12 UTC 2019


Glusternauts,

I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients from a different, much older cluster. Those clients are now running 5.9 as well, and they spontaneously disconnect. The most recent disconnect was a signal 15, but no user killed the process, and I can't imagine why another daemon would have.


[2019-11-12 22:52:42.790687] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /mnt/informatica/sftp/dectools
[2019-11-12 22:52:42.791414] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f141e4466ba] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55711c79994d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55711c7997b4] ) 0-: received signum (15), shutting down
[2019-11-12 22:52:42.791435] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/mnt/informatica/sftp/dectools'.
[2019-11-12 22:52:42.791444] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/mnt/informatica/sftp/dectools'.

There is nothing in the log for about 12 minutes prior to this.
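
One thing I'm considering, on the assumption that something local is sending the TERM, is an audit rule to catch whatever issues it. This is a rough sketch only; the key name is mine and I haven't tested this exact rule:

# Log every kill() that delivers SIGTERM (15); add an arch=b32 twin if 32-bit binaries matter.
auditctl -a always,exit -F arch=b64 -S kill -F a1=15 -k gluster-sigterm

# After the next disconnect, see which pid/comm sent it:
ausearch -k gluster-sigterm -i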

Volume info:

Volume Name: sc5_informatica_prod_shared
Type: Distributed-Replicate
Volume ID: db5d2693-59e1-40e0-9c28-7a2385b2524f
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: sc5-storage-1:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick2: sc5-storage-2:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick3: sc5-storage-3:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick4: sc5-storage-4:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick5: sc5-storage-5:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick6: sc5-storage-6:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick7: sc5-storage-7:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick8: sc5-storage-8:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick9: sc5-storage-9:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Options Reconfigured:
performance.readdir-ahead: disable
performance.quick-read: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


One very disturbing thing I'm noticing is that memory use on the client seems to be growing at a rate of about 1MB per 10 minutes of active use. One glusterfs process I'm looking at is consuming about 2.4G right now and still growing. Does 5.9 have a memory leak, too?
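
If it would help with the leak question, I can grab statedumps from the client. My understanding is that sending SIGUSR1 to the glusterfs client process writes a dump under /var/run/gluster; something like the following (the pgrep pattern is just how I'd single out this particular mount):

# Ask the fuse client for this mount to dump its state;
# the file should appear as /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
kill -USR1 $(pgrep -f 'glusterfs.*informatica/sftp/dectools')
ls -l /var/run/gluster/

I can take a couple of those an hour or so apart and post whichever sections are growing.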


-j