<p dir="ltr">Sadly I can't help you with your issue, but you can consider setting your mount point in autofs/systemd's automounter in order to reduce the end users' irritation.<br>
You are using the fusefs, right ? <br>
Also, try to increase logging level and check if anything comes out.<br>
What is the output of 'gluster volume get all cluster.op-version' ?</p>
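If you go the automount route, a minimal sketch of an fstab entry (assuming the volume from your 'volume info' below is the one mounted at /mnt/informatica/sftp/dectools; adjust server, volume and path to your setup):

    sc5-storage-1:/sc5_informatica_prod_shared  /mnt/informatica/sftp/dectools  glusterfs  defaults,_netdev,x-systemd.automount,backup-volfile-servers=sc5-storage-2:sc5-storage-3  0  0

For the logging, client-side verbosity can be raised per volume (DEBUG is noisy, so drop it back to INFO afterwards), and the op-version checked from any of the servers:

    gluster volume set sc5_informatica_prod_shared diagnostics.client-log-level DEBUG
    gluster volume get all cluster.op-version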
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
On Nov 13, 2019 01:48, Jamie Lawrence <jlawrence@squaretrade.com> wrote:
>
> Glusternauts,
>
> I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients from a different, much older, cluster. Those clients are running the 5.9 client, and they spontaneously disconnect. It was signal 15, but no user killed it, and I can't imagine why another daemon would have.
>
>
> [2019-11-12 22:52:42.790687] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /mnt/informatica/sftp/dectools
> [2019-11-12 22:52:42.791414] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f141e4466ba] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55711c79994d] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55711c7997b4] ) 0-: received signum (15), shutting down
> [2019-11-12 22:52:42.791435] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting '/mnt/informatica/sftp/dectools'.
> [2019-11-12 22:52:42.791444] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse connection to '/mnt/informatica/sftp/dectools'.
>
> Nothing in the log for about 12 minutes previously.
>
> Volume info:
>
> Volume Name: sc5_informatica_prod_shared
> Type: Distributed-Replicate
> Volume ID: db5d2693-59e1-40e0-9c28-7a2385b2524f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: sc5-storage-1:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick2: sc5-storage-2:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick3: sc5-storage-3:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick4: sc5-storage-4:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick5: sc5-storage-5:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick6: sc5-storage-6:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick7: sc5-storage-7:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick8: sc5-storage-8:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick9: sc5-storage-9:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Options Reconfigured:
> performance.readdir-ahead: disable
> performance.quick-read: disable
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
>
> One very disturbing thing I'm noticing is that memory use on the client seems to be growing at a rate of about 1MB per 10 minutes of active use. One glusterfs process I'm looking at is consuming about 2.4G right now and growing. Does 5.9 have a memory leak, too?
>
>
> -j
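On the memory growth mentioned above: one way to see where it is going is a statedump of the FUSE client process. A minimal sketch, assuming the default dump directory and with <pid> standing in for the glusterfs mount process id:

    mkdir -p /var/run/gluster      # statedumps are written here by default
    kill -USR1 <pid>               # typically writes /var/run/gluster/glusterdump.<pid>.dump.<timestamp>

Two dumps taken some minutes apart can then be compared to see which translator's memory keeps growing.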