[Gluster-users] Incorrect exit code from mount.glusterfs

Deepak Shetty dpkshetty at gmail.com
Tue Apr 8 10:20:42 UTC 2014


Hi All,
   I am wondering whether I am the only one seeing this, or whether
there are good reasons why mount.glusterfs returns 0 (which means
success) as the exit code for error cases?
Because of this, cinder (the OpenStack block storage service) is
misled: it thinks that mounting a GlusterFS volume on an already
mounted mount point succeeded, and it never enters its warning/error
path!
(Not to mention that I spent more than a day debugging before reaching
that conclusion!)
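
To illustrate, here is a minimal sketch (not cinder's actual code) of
why any caller that trusts the exit status gets misled:

    # Hypothetical wrapper around the mount call; $mount_point stands in
    # for the per-volume mount directory cinder creates.
    if sudo mount -t glusterfs devstack-vm.localdomain:/gvol1 "$mount_point"; then
        echo "mount succeeded"   # reached even when mount.glusterfs only
                                 # warned "already mounted" and exited 0
    else
        echo "mount failed"      # the error handling that never runs
    fi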

I just did a quick sanity check to compare how mount.nfs and
mount.glusterfs behave in a similar error scenario, and below is what
I found.

[stack@devstack-vm cinder]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 9.9G 3.7G 6.1G 38% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 448K 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
192.168.122.252:/opt/stack/nfs/brick 9.9G 3.7G 6.1G 38% /opt/stack/data/cinder/mnt/f23011fcca5ae3a8b8ebfd7e4af2e190

[stack@devstack-vm cinder]$ sudo mount -t nfs 192.168.122.252:/opt/stack/nfs/brick /opt/stack/data/cinder/mnt/f23011fcca5ae3a8b8ebfd7e4af2e190/
mount.nfs: /opt/stack/data/cinder/mnt/f23011fcca5ae3a8b8ebfd7e4af2e190 is busy or already mounted

[stack@devstack-vm cinder]$ echo $?
32
NOTE: mount.nfs exits with a proper (non-zero) error code.
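
(For reference, mount(8) documents exit code 32 as "mount failure", so
a caller can branch on the status reliably; the snippet below is just
an illustration, not actual cinder code:)

    sudo mount -t nfs 192.168.122.252:/opt/stack/nfs/brick /some/mount/point
    rc=$?
    if [ $rc -ne 0 ]; then
        # any non-zero status (32 == "mount failure" per mount(8)) tells
        # the caller something went wrong, so it can log or bail out
        echo "mount failed with exit code $rc"
    fi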

[stack@devstack-vm ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 9.9G 3.7G 6.1G 38% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 448K 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
devstack-vm.localdomain:/gvol1 9.9G 3.7G 6.1G 38% /opt/stack/data/cinder/mnt/d45ccec4f1572f6f242b70befa3d80fe
devstack-vm.localdomain:/gvol2 9.9G 3.7G 6.1G 38% /opt/stack/data/cinder/mnt/413c1f8d14058d5b2d07f8a92814bd12

[stack@devstack-vm ~]$ sudo mount -t glusterfs devstack-vm.localdomain:/gvol1 /opt/stack/data/cinder/mnt/d45ccec4f1572f6f242b70befa3d80fe/
/sbin/mount.glusterfs: according to mtab, GlusterFS is already mounted on /opt/stack/data/cinder/mnt/d45ccec4f1572f6f242b70befa3d80fe

[stack@devstack-vm ~]$ echo $?
0
NOTE: mount.glusterfs exits with 0 (success).

******************************************************************************************

A quick look at mount.glusterfs yields the following (note the
"exit 0" on both error paths)...

    # No need to do a ! -d test, it is taken care while initializing the
    # variable mount_point
    [ -z "$mount_point" -o ! -d "$mount_point" ] && {
        echo "ERROR: Mount point does not exist."
        usage;
        exit 0;
    }

    # Simple check to avoid multiple identical mounts
    if grep -q "[[:space:]+]${mount_point}[[:space:]+]fuse" $mounttab; then
        echo -n "$0: according to mtab, GlusterFS is already mounted on "
        echo "$mount_point"
        exit 0;
    fi

******************************************************************
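
FWIW, below is a sketch of what I would have expected instead (the
exact codes are just a suggestion, e.g. mirroring mount.nfs, which
returns 32 for a failed mount):

    [ -z "$mount_point" -o ! -d "$mount_point" ] && {
        echo "ERROR: Mount point does not exist."
        usage;
        exit 1;     # any non-zero code lets callers detect the failure
    }

    # Simple check to avoid multiple identical mounts
    if grep -q "[[:space:]+]${mount_point}[[:space:]+]fuse" $mounttab; then
        echo -n "$0: according to mtab, GlusterFS is already mounted on "
        echo "$mount_point"
        exit 32;    # e.g. follow the mount(8) convention: 32 == mount failure
    fi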

Is this intended, is it a bug, or is there some history behind why
mount.glusterfs returns 0 for so many obvious error cases?

thanx,
deepak