[Gluster-users] glustershd: EBADFD [File descriptor in bad state]

mabi mabi at protonmail.ch
Fri Oct 9 14:54:19 UTC 2020


Just wanted to mention that 3 hours later the self-heal daemon managed to heal the files. I don't understand why it took 3 hours, but at least the two affected directories and files are now available on all nodes again.
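
For anyone who hits the same messages: the entries still pending heal can be listed with the two commands below (the volume name "myvol" is only taken from the log prefixes further down, so substitute your own); once the heal has finished, the number of pending entries drops to zero on every brick.

# list the files/directories each brick still needs to heal
gluster volume heal myvol info

# per-brick counts only, quicker to read when the heal queue is large
gluster volume heal myvol info summary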


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Friday, October 9, 2020 4:30 PM, mabi <mabi at protonmail.ch> wrote:

> Hello,
>
> I have a GlusterFS 6.9 cluster with two nodes plus one arbiter node and a replica volume, and currently there are two files and two directories stuck waiting to be self-healed.
>
> Nodes 1 and 3 (the arbiter) have the files and directories on their bricks, but node 2 does not.
>
> Node 1's glustershd log file shows the following warning message:
>
> [2020-10-09 14:18:54.006707] I [MSGID: 108026] [afr-self-heal-entry.c:898:afr_selfheal_entry_do] 0-myvol-replicate-0: performing entry selfheal on 4d520c69-2b18-4601-bad5-3c16c29188c1
> [2020-10-09 14:18:54.007064] W [MSGID: 114061] [client-common.c:2968:client_pre_readdir_v2] 0-myvol-client-1: (4d520c69-2b18-4601-bad5-3c16c29188c1) remote_fd is -1. EBADFD [File descriptor in bad state]
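>
> That warning mentions 0-myvol-client-1, which should correspond to the connection to the second brick, i.e. the node that is missing the files. A quick way to double-check that all bricks and the self-heal daemons are actually online is (volume name "myvol" taken from the log prefix, adjust as needed):
>
> gluster volume status myvol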
>
> The FUSE mount client log file shows the following error message:
>
> [2020-10-09 14:15:51.115856] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x13c)[0x7f9d0a0663bc] (--> /usr/lib/x86_64-linux-gnu/glusterfs/6.9/xlator/mount/fuse.so(+0x7bba)[0x7f9d07743bba] (--> /usr/lib/x86_64-linux-gnu/glusterfs/6.9/xlator/mount/fuse.so(+0x7d23)[0x7f9d07743d23] (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x74a4)[0x7f9d092bd4a4] (--> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f9d08b17d0f] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
>
> I have no clue how this could have happened, but since the GlusterFS self-heal daemon does not seem to be able to heal the two files and directories on its own, what can I do to fix this?
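>
> The commands I know of for kicking off a heal manually are the following (again assuming the volume name is "myvol" from the log prefixes; "full" crawls the whole volume and is the heavier option), but I am not sure whether running them is appropriate in this state:
>
> gluster volume heal myvol
> gluster volume heal myvol full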
>
> Thank you in advance for your help.
>
> Best regards,
> Mabi



