[Bugs] [Bug 1218060] New: [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message
bugzilla at redhat.com
Mon May 4 07:30:12 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1218060
Bug ID: 1218060
Summary: [SNAPSHOT]: Initializing snap_scheduler from all nodes
at the same time should give proper error message
Product: GlusterFS
Version: mainline
Component: snapshot
Assignee: bugs at gluster.org
Reporter: senaik at redhat.com
CC: bugs at gluster.org, gluster-bugs at redhat.com
Description of problem:
=======================
Initialising snap_scheduler from all nodes at the same time should fail with a
proper error message - "Another snap scheduler command is running"
Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.7.0beta1 built on May 1 2015
How reproducible:
=================
always
Steps to Reproduce:
===================
1. Create a distributed-replicate volume and mount it.
2. Create another shared storage volume and mount it under
/var/run/gluster/shared_storage.
3. Initialise the snap scheduler at the same time from all nodes:
Node1:
~~~~~
snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node
Node2, Node3, Node4 :
~~~~~~~~~~~~~~~~~~~~
snap_scheduler.py init
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 574, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 544, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists:
'/var/run/gluster/shared_storage/snaps/lock_files/'
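The traceback shows os.makedirs() racing with another node that has already created the lock directory on the shared storage. A minimal race-tolerant sketch of that step (the helper name is illustrative, not the actual snap_scheduler.py code):

```python
import errno
import os

# Path taken from the traceback above.
LOCK_FILE_DIR = '/var/run/gluster/shared_storage/snaps/lock_files/'

def ensure_lock_dir(path):
    """Create the lock-file directory, tolerating concurrent
    creation by another node instead of dying with a traceback."""
    try:
        os.makedirs(path)
    except OSError as e:
        # EEXIST means another node won the race; that is fine.
        if e.errno != errno.EEXIST:
            raise
```

With this, a second node calling ensure_lock_dir() on an already-created directory simply continues instead of crashing.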
It should instead fail with the error:
snap_scheduler: Another snap_scheduler command is running. Please try again
after some time.
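To actually emit that message, the scheduler needs mutual exclusion across concurrent invocations. One way to sketch this is a non-blocking flock() on a lock file under the shared storage (hypothetical helper and path; the real snap_scheduler locking may differ, and whether flock() semantics hold across the Gluster mount is an assumption here):

```python
import fcntl

def acquire_scheduler_lock(lock_path):
    """Try to take an exclusive, non-blocking lock on lock_path.
    Returns the open file on success (keep it open to hold the lock),
    or None if another snap_scheduler command already holds it."""
    f = open(lock_path, 'w')
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except IOError:
        f.close()
        return None

# Usage sketch:
# lock = acquire_scheduler_lock(
#     '/var/run/gluster/shared_storage/snaps/lock_files/lock')
# if lock is None:
#     print("snap_scheduler: Another snap_scheduler command is running. "
#           "Please try again after some time.")
```

The second invocation gets None instead of an unhandled exception, so it can print the expected message and exit cleanly.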
Actual results:
===============
All nodes except the first crash with the unhandled OSError traceback above.

Expected results:
=================
A proper error message, as quoted above, instead of a traceback.
Additional info: