[Gluster-users] mount of 2 volumes fails at boot (/etc/fstab)
Strahil Nikolov
hunter86_bg at yahoo.com
Sat Mar 21 20:07:01 UTC 2020
On March 20, 2020 8:13:28 AM GMT+02:00, Hu Bert <revirii at googlemail.com> wrote:
>Hello,
>
>I just reinstalled a server (Debian Buster) and added two entries to
>/etc/fstab:
>
>gluster1:/persistent /data/repository/shared/private glusterfs
>defaults,_netdev,attribute-timeout=0,entry-timeout=0,backup-volfile-servers=gluster2:gluster3
>0 0
>gluster1:/workdata /data/repository/shared/public glusterfs
>defaults,_netdev,attribute-timeout=0,entry-timeout=0,backup-volfile-servers=gluster2:gluster3
>0 0
>
>The entries for gluster1, gluster2 and gluster3 are present in
>/etc/hosts, but after a reboot the mount of the two Gluster volumes
>fails. Other servers with exactly the same fstab entries mount the
>volumes without problems; only this server is affected. The log for
>one of the volumes shows this:
>
>[2020-03-20 05:32:13.089703] I [MSGID: 100030]
>[glusterfsd.c:2725:main] 0-/usr/sbin/glusterfs: Started running
>/usr/sbin/glusterfs version 5.11 (args: /usr/sbin/glusterfs
>--attribute-timeout=0 --entry-timeout=0 --process-name fuse
>--volfile-server=gluster1 --volfile-server=gluster2
>--volfile-server=gluster3 --volfile-id=/persistent
>/data/repository/shared/private)
>[2020-03-20 05:32:13.120904] I [MSGID: 101190]
>[event-epoll.c:621:event_dispatch_epoll_worker] 0-epoll: Started
>thread with index 1
>[2020-03-20 05:32:16.196568] I
>[glusterfsd-mgmt.c:2424:mgmt_rpc_notify] 0-glusterfsd-mgmt:
>disconnected from remote-host: gluster1
>[2020-03-20 05:32:16.196614] I
>[glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting
>to next volfile server gluster2
>[2020-03-20 05:32:20.164538] I
>[glusterfsd-mgmt.c:2464:mgmt_rpc_notify] 0-glusterfsd-mgmt: connecting
>to next volfile server gluster3
>[2020-03-20 05:32:26.180546] I
>[glusterfsd-mgmt.c:2444:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted
>all volfile servers
>[2020-03-20 05:32:26.181618] W [glusterfsd.c:1500:cleanup_and_exit]
>(-->/lib/x86_64-linux-gnu/libgfrpc.so.0(+0xee13) [0x7fbf38a98e13]
>-->/usr/sbin/glusterfs(+0x127d7) [0x55bac75517d7]
>-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55bac7549f54] ) 0-:
>received signum (1), shutting down
>[2020-03-20 05:32:26.181744] I [fuse-bridge.c:5914:fini] 0-fuse:
>Unmounting '/data/repository/shared/private'.
>[2020-03-20 05:32:26.200708] I [fuse-bridge.c:5919:fini] 0-fuse:
>Closing fuse connection to '/data/repository/shared/private'.
>[2020-03-20 05:32:26.200885] W [glusterfsd.c:1500:cleanup_and_exit]
>(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3) [0x7fbf38661fa3]
>-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xfd) [0x55bac754a0fd]
>-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55bac7549f54] ) 0-:
>received signum (15), shutting down
>
>The messages for the other volume are identical. If I run 'mount -a'
>manually after boot, both volumes get mounted fine.
>
>Did I miss anything?
>
>
>Regards,
>Hubert
>________
>
>
>
>Community Meeting Calendar:
>
>Schedule -
>Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
>Bridge: https://bluejeans.com/441850968
>
>Gluster-users mailing list
>Gluster-users at gluster.org
>https://lists.gluster.org/mailman/listinfo/gluster-users
Why don't you set it up as a systemd '.mount' unit? That way you can
define explicit dependencies, e.g. on the network being online.
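A minimal sketch of such a unit (the server, volume and options are taken from the fstab entries above; the network-online ordering is an assumption to adapt):

```ini
# /etc/systemd/system/data-repository-shared-private.mount
# The file name must match the systemd-escaped mount path.
[Unit]
Description=GlusterFS mount for /data/repository/shared/private
After=network-online.target
Wants=network-online.target

[Mount]
What=gluster1:/persistent
Where=/data/repository/shared/private
Type=glusterfs
Options=defaults,_netdev,attribute-timeout=0,entry-timeout=0,backup-volfile-servers=gluster2:gluster3

[Install]
WantedBy=multi-user.target
```

If you go this route, enable the unit with 'systemctl enable data-repository-shared-private.mount' and drop the corresponding fstab line, so systemd doesn't manage the same mount point twice.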
Currently, after a reboot, you can check what systemd did with the mount:
'systemctl status data-repository-shared-private.mount'
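The unit name is derived from the mount point path: the leading slash is dropped, the remaining slashes become dashes, and '.mount' is appended. A quick plain-shell sketch of that derivation:

```shell
# Derive the systemd mount unit name from a mount point path:
# drop the leading slash, turn remaining slashes into dashes,
# and append the ".mount" suffix.
mountpoint="/data/repository/shared/private"
unit="$(printf '%s' "${mountpoint#/}" | tr '/' '-').mount"
echo "$unit"   # → data-repository-shared-private.mount
```

On a machine with systemd installed, 'systemd-escape -p --suffix=mount /data/repository/shared/private' computes the same name (and also handles characters that need escaping).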
Best Regards,
Strahil Nikolov