[Gluster-users] 'nofail' fails miserably

Ted Miller tmiller at sonsetsolutions.org
Wed Aug 24 22:56:51 UTC 2016

Now that most distros are switching to systemd, there seems to be a problem 
in the mount command's handling of the 'nofail' option.

Setup: using glusterfs 3.7.13 on Centos 7 (up to date) from the Centos 
Storage SIG repo.

The man page for systemd.mount says:

     With nofail this mount will be only wanted, not required, by local-fs.target or
     remote-fs.target. This means that the boot will continue even if this mount point is not
     mounted successfully.

It works fine during bootup -- things end up mounted, and volumes that are a 
little slow no longer time out and throw the server into maintenance mode.
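
For context, the setup in question is an fstab entry along these lines (the 
server, volume, and mount point names here are hypothetical, not my actual 
configuration):

```
# /etc/fstab -- gluster client mount relying on systemd's 'nofail'
server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,nofail  0 0
```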

However, if I do a 'mount -a' from the command line and any gluster volume 
needs to be mounted, mount (I assume mount.glusterfs) fails with:

Invalid option nofail
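
The same failure can presumably be reproduced by mounting one volume by hand 
with the options fstab would pass along (hypothetical names again):

```
# mount -t glusterfs -o nofail server1:/gv0 /mnt/gv0
Invalid option nofail
```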

Somebody needs to take responsibility for either filtering out the 'nofail' 
option so that mount.glusterfs never sees it, or else mount.glusterfs needs 
to be smart enough to recognize that, even though 'nofail' appears among the 
mount options, it is safe to ignore.  If it is OK for systemd, and the only 
place you can put it is in the mount options, then mount.glusterfs needs to 
be OK with it too.
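
The filtering really could be just a couple of lines of shell in the 
mount.glusterfs script. A minimal sketch of the idea (the function name and 
the option-string format are my assumptions, not the script's actual code):

```shell
# Hypothetical helper: drop systemd-only options ('nofail' and the
# 'x-systemd.*' family) from a comma-separated mount option string
# before handing it to the glusterfs client.
filter_systemd_opts () {
    printf '%s\n' "$1" | tr ',' '\n' \
        | grep -v -E '^(nofail|x-systemd\..*)$' \
        | paste -sd, -
}

filter_systemd_opts "rw,nofail,x-systemd.device-timeout=10,log-level=WARNING"
# prints: rw,log-level=WARNING
```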

I have not tried the other mount options that systemd.mount documents (such 
as 'x-systemd.automount' and 'x-systemd.device-timeout='), so some of them 
may also cause problems.

'noauto' has been around forever, so it is probably handled OK (I think I 
used it a while back), but it seems not all of the newer options are handled 
right, or else the documentation is bad.

I hope somebody can stick a couple of lines of code in so we can mount 
'nofail' volumes after boot.

Thank you,
Ted Miller
Elkhart, Indiana, USA
