[Bugs] [Bug 1224249] New: [SNAPSHOT]: Initializing snap_scheduler from all nodes at the same time should give proper error message

bugzilla at redhat.com
Fri May 22 11:26:35 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1224249

            Bug ID: 1224249
           Summary: [SNAPSHOT]: Initializing snap_scheduler from all nodes
                    at the same time should give proper error message
           Product: Red Hat Gluster Storage
           Version: 3.1
         Component: gluster-snapshot
          Keywords: Triaged
          Assignee: rjoseph at redhat.com
          Reporter: senaik at redhat.com
        QA Contact: storage-qa-internal at redhat.com
                CC: amukherj at redhat.com, asengupt at redhat.com,
                    bugs at gluster.org, gluster-bugs at redhat.com
        Depends On: 1218060
            Blocks: 1186580 (qe_tracker_everglades), 1223203



+++ This bug was initially created as a clone of Bug #1218060 +++

Description of problem:
=======================
Initializing snap_scheduler from all nodes at the same time should fail with a
proper error message: "Another snap scheduler command is running"

Version-Release number of selected component (if applicable):
==============================================================
glusterfs 3.7.0beta1 built on May  1 2015

How reproducible:
=================
always

Steps to Reproduce:
===================
1. Create a distributed-replicate volume and mount it.

2. Create another shared storage volume and mount it under
/var/run/gluster/shared_storage

3. Initialize the snap scheduler from all nodes at the same time:

Node1:
~~~~~
snap_scheduler.py init
snap_scheduler: Successfully inited snapshot scheduler for this node

Node2, Node3, Node4:
~~~~~~~~~~~~~~~~~~~~
snap_scheduler.py init
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 574, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 544, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists:
'/var/run/gluster/shared_storage/snaps/lock_files/'

It should instead fail with the error:
snap_scheduler: Another snap_scheduler command is running. Please try again
after some time.
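
A minimal sketch of how this path could be guarded, assuming the lock directory
from the traceback and an flock-based lock file (the LOCK_FILE name and the
acquire_scheduler_lock helper are illustrative, not taken from
snap_scheduler.py): EEXIST from makedirs is treated as benign, and the
exclusive lock is what produces the expected message.

import errno
import fcntl
import os
import sys

LOCK_FILE_DIR = "/var/run/gluster/shared_storage/snaps/lock_files/"
LOCK_FILE = os.path.join(LOCK_FILE_DIR, "lock_file")   # illustrative name

def acquire_scheduler_lock():
    # A concurrent node may already have created the directory;
    # EEXIST is expected here and is not an error.
    try:
        os.makedirs(LOCK_FILE_DIR)
    except OSError as exc:
        if exc.errno != errno.EEXIST:
            raise

    fd = os.open(LOCK_FILE, os.O_CREAT | os.O_RDWR, 0o600)
    try:
        # Non-blocking exclusive lock: fails at once if another
        # snap_scheduler command currently holds it.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        os.close(fd)
        print("snap_scheduler: Another snap_scheduler command is running. "
              "Please try again after some time.")
        sys.exit(1)
    return fd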

Actual results:
The concurrent invocations exit with an unhandled Python traceback (OSError:
[Errno 17] File exists on the lock file directory) instead of a user-facing
error message.

Expected results:
All but one invocation should exit cleanly with the message "snap_scheduler:
Another snap_scheduler command is running. Please try again after some time."

Additional info:

--- Additional comment from  on 2015-05-20 01:17:37 EDT ---

The same issue is seen when checking snap_scheduler.py status from all nodes
at the same time:

Node1:
======
snap_scheduler.py status
snap_scheduler: Snapshot scheduling status: Disabled

Node2, Node3, Node4:
===================
 snap_scheduler.py status
Traceback (most recent call last):
  File "/usr/sbin/snap_scheduler.py", line 575, in <module>
    sys.exit(main())
  File "/usr/sbin/snap_scheduler.py", line 545, in main
    os.makedirs(LOCK_FILE_DIR)
  File "/usr/lib64/python2.6/os.py", line 157, in makedirs
    mkdir(name, mode)
OSError: [Errno 17] File exists:
'/var/run/gluster/shared_storage/snaps/lock_files/'
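
Both the init and status failures come from the same unguarded os.makedirs()
call in main(). A self-contained sketch (hypothetical, using a temporary
directory in place of the shared-storage path) that reproduces the race
locally: several processes call os.makedirs() on the same path, exactly one
creates it, and the rest hit the EEXIST seen in the tracebacks above.

import errno
import multiprocessing
import os
import shutil
import tempfile

def racy_init(lock_dir, results, idx):
    # Mirrors the unguarded os.makedirs(LOCK_FILE_DIR) call in main().
    try:
        os.makedirs(lock_dir)
        results[idx] = "created"
    except OSError as exc:
        results[idx] = "EEXIST" if exc.errno == errno.EEXIST else str(exc)

if __name__ == "__main__":
    base = tempfile.mkdtemp()
    lock_dir = os.path.join(base, "snaps", "lock_files")
    results = multiprocessing.Manager().dict()
    procs = [multiprocessing.Process(target=racy_init,
                                     args=(lock_dir, results, i))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Expected output: one "created", the rest "EEXIST",
    # e.g. {0: 'created', 1: 'EEXIST', 2: 'EEXIST', 3: 'EEXIST'}
    print(dict(results))
    shutil.rmtree(base)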


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
https://bugzilla.redhat.com/show_bug.cgi?id=1218060
[Bug 1218060] [SNAPSHOT]: Initializing snap_scheduler from all nodes at the
same time should give proper error message
https://bugzilla.redhat.com/show_bug.cgi?id=1223203
[Bug 1223203] [SNAPSHOT]: Initializing snap_scheduler from all nodes at the
same time should give proper error message