[Bugs] [Bug 1211614] New: [NFS] Shared Storage mounted as NFS mount gives error "snap_scheduler: Another snap_scheduler command is running. Please try again after some time" while running any scheduler commands
bugzilla at redhat.com
Tue Apr 14 12:35:18 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1211614
Bug ID: 1211614
Summary: [NFS] Shared Storage mounted as NFS mount gives error
"snap_scheduler: Another snap_scheduler command is
running. Please try again after some time" while
running any scheduler commands
Product: GlusterFS
Version: mainline
Component: nfs
Severity: urgent
Assignee: bugs at gluster.org
Reporter: ashah at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
When the shared storage is mounted as an NFS mount, running any scheduler
command fails with the error "snap_scheduler: Another snap_scheduler command is
running. Please try again after some time."
The error does not occur when the shared storage is mounted as a FUSE mount.
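Since the failure is specific to NFS mounts, the concurrency check presumably
behaves differently there than on a FUSE mount. A minimal way to probe that
idea, assuming snap_scheduler serializes commands through an advisory lock on a
file under the shared storage (the lock_file path below is an illustrative
assumption, not taken from this report):

# Run on a node where shared storage is NFS-mounted, then on one where it
# is FUSE-mounted, and compare. The lock file path is assumed.
LOCK=/var/run/gluster/shared_storage/snaps/lock_file
flock -n "$LOCK" -c 'echo lock acquired' || echo "lock reported busy"

If the NFS mount reports the lock as busy (or cannot take it) while the FUSE
mount acquires it, that would match the behavior described above.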
Version-Release number of selected component (if applicable):
[root@localhost tmp]# rpm -qa | grep glusterfs
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
samba-glusterfs-3.6.509-169.4.el6rhs.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
How reproducible:
100%
Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume.
2. Create the shared storage volume and NFS-mount it on each storage node at
/var/run/gluster/shared_storage:
mount -t nfs -o vers=3,tcp 10.70.47.143:meta /var/run/gluster/shared_storage/
3. Initialize the scheduler on each storage node by running the
snap_scheduler.py init command (a consolidated sequence is sketched below).
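For reference, a consolidated reproduction sequence; the volume name "meta" and
the mount command are taken from this report, while the brick paths used to
create the meta volume are illustrative assumptions:

# On one storage node: create and start the shared storage volume
# (brick paths for the meta volume are assumed for illustration)
gluster volume create meta replica 2 10.70.47.143:/rhs/meta/m1 10.70.47.145:/rhs/meta/m2
gluster volume start meta

# On every storage node: NFS-mount the shared storage
mkdir -p /var/run/gluster/shared_storage
mount -t nfs -o vers=3,tcp 10.70.47.143:meta /var/run/gluster/shared_storage/

# On every storage node: initialize the scheduler; this is where the
# "Another snap_scheduler command is running" error appears
snap_scheduler.py init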
Actual results:
snap_scheduler: Another snap_scheduler command is running. Please try again
after some time
Expected results:
The snap_scheduler.py init command should succeed.
Additional info:
[root@localhost tmp]# gluster v info vol0
Volume Name: vol0
Type: Distributed-Replicate
Volume ID: fc0f1280-821d-4990-a05a-00ccc9474b44
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.143:/rhs/brick1/b1
Brick2: 10.70.47.145:/rhs/brick1/b2
Brick3: 10.70.47.150:/rhs/brick1/b3
Brick4: 10.70.47.151:/rhs/brick1/b4
Brick5: 10.70.47.143:/rhs/brick2/b5
Brick6: 10.70.47.145:/rhs/brick2/b6
Brick7: 10.70.47.150:/rhs/brick2/b7
Brick8: 10.70.47.151:/rhs/brick2/b8
Brick9: 10.70.47.143:/rhs/brick3/b9
Brick10: 10.70.47.145:/rhs/brick3/10
Brick11: 10.70.47.150:/rhs/brick3/b11
Brick12: 10.70.47.151:/rhs/brick3/b12