<div dir="ltr">Hi everybody,<div>I have been running a 2 x (2 + 1) distributed-replicated volume with arbiter bricks (~60 TB, 70% used) since January.</div><div><br></div><div>I&#39;ve been experiencing problems with the FUSE mounts when copying data into the volume.<br><div>Every so often during a copy, a newly created directory disappears from the mount where the copy is being performed. The mounts on the other nodes are not affected, as they still show the &quot;missing&quot; directory. My current workaround is unmounting and remounting the volume.</div><div>I am using rsync; the data I&#39;m copying is mostly media with file sizes from roughly 1 MB to 20 GB, and the initial copy was about 6 TB.</div><div><br></div><div>Excerpt from the mount log:</div><div>[2020-04-21 20:53:19.128073] I [MSGID: 108031] [afr-common.c:2581:afr_local_discovery_cbk] 0-tapeless-replicate-1: selecting local read_child tapeless-client-3 <br>[2020-04-21 21:00:53.680413] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--&gt; /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fea041e08ea] (--&gt; /usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fe9fb590221] (--&gt; /usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fe9fb591998] (--&gt; /lib64/libpthread.so.0(+0x7e65)[0x7fea03021e65] (--&gt; /lib64/libc.so.6(clone+0x6d)[0x7fea028e988d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory<br>[2020-04-21 21:00:53.680880] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--&gt; /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fea041e08ea] (--&gt; /usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fe9fb590221] (--&gt; /usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fe9fb591998] (--&gt; /lib64/libpthread.so.0(+0x7e65)[0x7fea03021e65] (--&gt; /lib64/libc.so.6(clone+0x6d)[0x7fea028e988d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory<br>[2020-04-21 21:02:07.062090] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--&gt; 
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fea041e08ea] (--&gt; /usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fe9fb590221] (--&gt; /usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fe9fb591998] (--&gt; /lib64/libpthread.so.0(+0x7e65)[0x7fea03021e65] (--&gt; /lib64/libc.so.6(clone+0x6d)[0x7fea028e988d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory<br>[2020-04-21 21:02:56.695667] W [fuse-bridge.c:949:fuse_entry_cbk] 0-glusterfs-fuse: 62900477: MKDIR() /GR/graficos/OPERATIVA_no_borrar/LA MANANA EN CASA/ANIMADOS HD/2018/SP/ZOCALO_CHOCOTINA_203740_3 =&gt; -1 (File exists)<br>[2020-04-21 21:02:57.029790] W [fuse-bridge.c:949:fuse_entry_cbk] 0-glusterfs-fuse: 62900528: MKDIR() /GR/graficos/OPERATIVA_no_borrar/LA MANANA EN CASA/ANIMADOS HD/2018/SP/ZOCALO_CHOCOTINA_203740_3 =&gt; -1 (File exists)</div><div><pre>GlusterFS version: 7.3</pre></div><div>Volume Name: tapeless<br>Type: Distributed-Replicate<br>Volume ID: 53bfa86d-b390-496b-bbd7-c4bba625c956<br>Status: Started<br>Snapshot Count: 0<br>Number of Bricks: 2 x (2 + 1) = 6<br>Transport-type: tcp<br>Bricks:<br>Brick1: gluster6.glustersaeta.net:/data/glusterfs/tapeless/brick_6/brick<br>Brick2: gluster7.glustersaeta.net:/data/glusterfs/tapeless/brick_7/brick<br>Brick3: kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_1a/brick (arbiter)<br>Brick4: gluster12.glustersaeta.net:/data/glusterfs/tapeless/brick_12/brick<br>Brick5: gluster13.glustersaeta.net:/data/glusterfs/tapeless/brick_13/brick<br>Brick6: kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_2a/brick (arbiter)<br>Options Reconfigured:<br>features.quota-deem-statfs: on<br>cluster.self-heal-daemon: on<br>cluster.entry-self-heal: on<br>cluster.metadata-self-heal: on<br>cluster.data-self-heal: on<br>diagnostics.count-fop-hits: on<br>diagnostics.latency-measurement: on<br>storage.batch-fsync-delay-usec: 0<br>performance.client-io-threads: off<br>nfs.disable: 
on<br>transport.address-family: inet<br>features.quota: on<br>features.inode-quota: on<br>features.cache-invalidation: on<br>features.cache-invalidation-timeout: 600<br>performance.cache-samba-metadata: on<br>performance.stat-prefetch: on<br>performance.cache-invalidation: on<br>performance.md-cache-timeout: 600<br>network.inode-lru-limit: 200000<br>performance.nl-cache: on<br>performance.nl-cache-timeout: 600<br>performance.readdir-ahead: on<br>performance.parallel-readdir: on<br>performance.cache-size: 1GB<br>client.event-threads: 4<br>server.event-threads: 4<br>performance.normal-prio-threads: 16<br>performance.io-thread-count: 32<br>performance.write-behind-window-size: 4MB<br></div><div><br></div></div><div>Please let me know if you need any additional information.</div><div>Thanks!</div><div>Martin Lorenzo</div></div>
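P.S. In case it helps to reproduce: this is essentially the check I run by hand to confirm that the directory is missing on one mount but visible on the others — list the same volume path as seen through two different mounts and diff the results. The /tmp paths below are placeholders simulating two FUSE mount points, not my real mount paths.

```shell
#!/bin/bash
# Compare what two mounts of the same volume show for the same directory.
# The /tmp paths are placeholders standing in for the real FUSE mount
# points on two different nodes.
MOUNT_A=/tmp/demo_mount_a
MOUNT_B=/tmp/demo_mount_b

# Simulate the symptom: the directory exists via mount A but not via mount B.
mkdir -p "$MOUNT_A/SP/ZOCALO_TEST" "$MOUNT_B/SP"

# Any entries that differ between the two views of the same path; with a
# healthy volume this prints nothing.
diff <(ls -1 "$MOUNT_A/SP" | sort) <(ls -1 "$MOUNT_B/SP" | sort) || true
```

On my real setup, the diff shows the "missing" directory on the copying node's mount until I unmount and remount the volume.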