[Bugs] [Bug 1788011] New: glusterfs client mount failed but exit code was 0

bugzilla at redhat.com bugzilla at redhat.com
Mon Jan 6 06:23:40 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1788011

            Bug ID: 1788011
           Summary: glusterfs client mount failed but exit code was 0
           Product: GlusterFS
           Version: 4.1
          Hardware: x86_64
                OS: Linux
            Status: NEW
         Component: libglusterfsclient
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: qujunorz at gmail.com
                CC: bugs at gluster.org
  Target Milestone: ---
    Classification: Community



Created attachment 1650035
  --> https://bugzilla.redhat.com/attachment.cgi?id=1650035&action=edit
mount command failed log

Description of problem:
  The gluster volume status is online, but the client sometimes fails to mount
it, and even worse, the mount command sometimes exits with 0 despite the
failure.

  We set up the gluster-server cluster with heketi's gk-deploy scripts in
Kubernetes. Everything is fine, but when the problem occurs, a pod with a
gluster PV configured is created successfully without actually mounting the
gluster volume. The pod then writes data to its local directory, which is very
dangerous and a high-severity problem.
  The weird thing is that the mount command does not always fail or succeed;
it appears to be random.
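
  As a defensive measure in the pod / provisioning path (not a fix for the
exit-code bug itself), the mount can be verified independently of the mount
exit code before any data is written, e.g. with mountpoint(1). A minimal
sketch, assuming the same volume, options and mount point as in the transcript
below:

  #!/bin/sh
  # Workaround sketch: treat the mount as failed unless the target really is
  # a mount point, regardless of what mount(8) returned.
  MNT=/home/qujun/mnt
  mount -t glusterfs \
      -o auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG \
      192.168.0.35:qujun-test "$MNT"
  rc=$?
  if ! mountpoint -q "$MNT"; then
      echo "glusterfs mount of $MNT did not take effect (mount rc=$rc)" >&2
      exit 1
  fi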


$ mount -t glusterfs -o
auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG
192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:22.682248] E
[glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set
volfile server: File exists
[root@demo.tidu: /home/qujun/mnt] 14:11:22
$ echo $?        
0
[root@demo.tidu: /home/qujun/mnt] 14:11:26
$ mount -t glusterfs -o
auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG
192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:28.542670] E
[glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set
volfile server: File exists
Mount failed. Please check the log file for more details.
[root@demo.tidu: /home/qujun/mnt] 14:11:28
$ echo $?
1
[root@demo.tidu: /home/qujun/mnt] 14:11:30
$ mount -t glusterfs -o
auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG
192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:31.958008] E
[glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set
volfile server: File exists
Mount failed. Please check the log file for more details.
[root@demo.tidu: /home/qujun/mnt] 14:11:32
$ echo $?
1
[root@demo.tidu: /home/qujun/mnt] 14:11:33
$ mount -t glusterfs -o
auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG
192.168.0.35:qujun-test  /home/qujun/mnt
[2020-01-06 06:11:38.196218] E
[glusterfsd.c:795:gf_remember_backup_volfile_server] 0-glusterfs: failed to set
volfile server: File exists
Mount failed. Please check the log file for more details.
[root@demo.tidu: /home/qujun/mnt] 14:11:38
$ echo $?
1
[root@demo.tidu: /home/qujun/mnt] 14:11:39
$ echo $?
0
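
Note that the "failed to set volfile server: File exists" error appears on
every attempt above, both when the mount reports failure and when it exits 0,
so on its own it does not explain the bad exit code; it most likely comes from
192.168.0.35 being passed both as the primary volfile server and again in
backup-volfile-servers (a duplicate entry, hence EEXIST / "File exists"). A
sketch of the same mount without the duplicate, assuming the duplicate listing
is all that triggers that particular log line:

  $ mount -t glusterfs \
      -o auto_unmount,backup-volfile-servers=192.168.0.36:192.168.0.37,log-file=/tmp/test-qujun-3.log,log-level=DEBUG \
      192.168.0.35:qujun-test /home/qujun/mnt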


Version-Release number of selected component (if applicable):
4.1.9


How reproducible:
It happens rarely, possibly after a glusterd service restart or an OS reboot.
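A minimal sketch of a loop that can be left running to catch the mismatch when
it does happen, assuming the same volume and mount point as above:

  #!/bin/sh
  # Repro/detection sketch: repeatedly mount and unmount; flag any run where
  # mount exits 0 but the target is not actually a mount point.
  MNT=/home/qujun/mnt
  while true; do
      mount -t glusterfs \
          -o auto_unmount,backup-volfile-servers=192.168.0.35:192.168.0.36:192.168.0.37 \
          192.168.0.35:qujun-test "$MNT"
      rc=$?
      if [ "$rc" -eq 0 ] && ! mountpoint -q "$MNT"; then
          echo "BUG reproduced: mount exited 0 but $MNT is not mounted" >&2
          break
      fi
      mountpoint -q "$MNT" && umount "$MNT"
      sleep 1
  done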

Actual results:
The client mount of the gluster volume fails, but the exit code is zero.

Expected results:
A failed client mount should exit with a non-zero exit code.

Additional info:

[root@sh-tidu5 glusterfs]# gluster volume status qujun-test
Status of volume: qujun-test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.35:/var/lib/heketi/mounts/v
g_388e881025bddc20831535c6fdcd44e6/brick_43
0caab224369e02d43546e6e578ddfd/brick        49153     0          Y       32287
Brick 192.168.0.36:/var/lib/heketi/mounts/v
g_7b6b6842ebb05301aa01615984ac168c/brick_6b
3f9163535e091793991aad8a0c2e3c/brick        49153     0          Y       13246
Brick 192.168.0.37:/var/lib/heketi/mounts/v
g_7bac8d0737a14ee8d834931052370c55/brick_e6
9d16bd006830124b07d2077e13d529/brick        49154     0          Y       32676
Self-heal Daemon on localhost               N/A       N/A        Y       32311
Self-heal Daemon on 192.168.0.36            N/A       N/A        Y       13269
Self-heal Daemon on 192.168.0.37            N/A       N/A        Y       32720

Task Status of Volume qujun-test
------------------------------------------------------------------------------
There are no active volume tasks
