[Gluster-users] Disappearing directories on FUSE mounted volume

Martín Lorenzo mlorenzo at gmail.com
Tue Apr 21 22:24:52 UTC 2020


Hi everybody,
I have been running a 2 x (2 + 1) distributed-replicated volume (two replica
pairs plus arbiters, ~60TB, 70% used) since January.

I've been experiencing problems with the FUSE mounts when copying data into
the volume. Every so often during a copy, a newly created directory
disappears on the mount where the copy is being performed. The mounts on the
other nodes are not affected; they still show the "missing" directory. My
current workaround is to unmount and remount the volume.
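
For reference, the remount workaround looks roughly like this (the mount
point /mnt/tapeless and the server used in the mount command are assumptions;
substitute whatever your fstab actually uses):

```shell
# Hypothetical mount point; adjust to your environment.
# Lazy-unmount in case the stale mount is still busy, then remount via FUSE.
umount /mnt/tapeless || umount -l /mnt/tapeless
mount -t glusterfs gluster6.glustersaeta.net:/tapeless /mnt/tapeless
```
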
I am copying with rsync; the data is mostly media, with file sizes ranging
from roughly 1 MB to 20 GB, and the initial data set was 6 terabytes.

Excerpt from the mount log:
[2020-04-21 20:53:19.128073] I [MSGID: 108031]
[afr-common.c:2581:afr_local_discovery_cbk] 0-tapeless-replicate-1:
selecting local read_child tapeless-client-3
[2020-04-21 21:00:53.680413] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fea041e08ea] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fe9fb590221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fe9fb591998] (-->
/lib64/libpthread.so.0(+0x7e65)[0x7fea03021e65] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fea028e988d] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-04-21 21:00:53.680880] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fea041e08ea] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fe9fb590221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fe9fb591998] (-->
/lib64/libpthread.so.0(+0x7e65)[0x7fea03021e65] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fea028e988d] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-04-21 21:02:07.062090] E [fuse-bridge.c:227:check_and_dump_fuse_W]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13a)[0x7fea041e08ea] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x8221)[0x7fe9fb590221] (-->
/usr/lib64/glusterfs/7.3/xlator/mount/fuse.so(+0x9998)[0x7fe9fb591998] (-->
/lib64/libpthread.so.0(+0x7e65)[0x7fea03021e65] (-->
/lib64/libc.so.6(clone+0x6d)[0x7fea028e988d] ))))) 0-glusterfs-fuse:
writing to fuse device failed: No such file or directory
[2020-04-21 21:02:56.695667] W [fuse-bridge.c:949:fuse_entry_cbk]
0-glusterfs-fuse: 62900477: MKDIR() /GR/graficos/OPERATIVA_no_borrar/LA
MANANA EN CASA/ANIMADOS HD/2018/SP/ZOCALO_CHOCOTINA_203740_3 => -1 (File
exists)
[2020-04-21 21:02:57.029790] W [fuse-bridge.c:949:fuse_entry_cbk]
0-glusterfs-fuse: 62900528: MKDIR() /GR/graficos/OPERATIVA_no_borrar/LA
MANANA EN CASA/ANIMADOS HD/2018/SP/ZOCALO_CHOCOTINA_203740_3 => -1 (File
exists)

GlusterFS version: 7.3

Volume Name: tapeless
Type: Distributed-Replicate
Volume ID: 53bfa86d-b390-496b-bbd7-c4bba625c956
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: gluster6.glustersaeta.net:/data/glusterfs/tapeless/brick_6/brick
Brick2: gluster7.glustersaeta.net:/data/glusterfs/tapeless/brick_7/brick
Brick3: kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_1a/brick (arbiter)
Brick4: gluster12.glustersaeta.net:/data/glusterfs/tapeless/brick_12/brick
Brick5: gluster13.glustersaeta.net:/data/glusterfs/tapeless/brick_13/brick
Brick6: kitchen-store.glustersaeta.net:/data/glusterfs/tapeless/brick_2a/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs: on
cluster.self-heal-daemon: on
cluster.entry-self-heal: on
cluster.metadata-self-heal: on
cluster.data-self-heal: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
storage.batch-fsync-delay-usec: 0
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
features.quota: on
features.inode-quota: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.cache-samba-metadata: on
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 200000
performance.nl-cache: on
performance.nl-cache-timeout: 600
performance.readdir-ahead: on
performance.parallel-readdir: on
performance.cache-size: 1GB
client.event-threads: 4
server.event-threads: 4
performance.normal-prio-threads: 16
performance.io-thread-count: 32
performance.write-behind-window-size: 4MB
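
For completeness, the options above were applied with the standard
`gluster volume set` CLI (run on a node in the trusted storage pool); just as
a sketch of how these tunables are toggled, e.g.:

```shell
# Sketch: setting the negative-lookup-cache tunables listed above.
gluster volume set tapeless performance.nl-cache on
gluster volume set tapeless performance.nl-cache-timeout 600
# Any option can be reverted to its default while troubleshooting:
gluster volume reset tapeless performance.parallel-readdir
```
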

Please let me know if you need any additional information.
Thanks!
Martin Lorenzo

