[Bugs] [Bug 1216942] New: glusterd crash when snapshot create was in progress on different volumes at the same time - job edited to create snapshots at the given time

bugzilla at redhat.com bugzilla at redhat.com
Wed Apr 29 09:14:41 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1216942

            Bug ID: 1216942
           Summary: glusterd crash when snapshot create was in progress on
                    different volumes at the same time - job edited to
                    create snapshots at the given time
           Product: GlusterFS
           Version: mainline
         Component: glusterd
          Keywords: Triaged
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: anekkunt at redhat.com
                CC: amukherj at redhat.com, anekkunt at redhat.com,
                    asengupt at redhat.com, bugs at gluster.org,
                    gluster-bugs at redhat.com, sasundar at redhat.com,
                    senaik at redhat.com, vagarwal at redhat.com
        Depends On: 1211640
            Blocks: 1186580 (qe_tracker_everglades), 1199352
                    (glusterfs-3.7.0)



+++ This bug was initially created as a clone of Bug #1211640 +++

Description of problem:
=======================
Edited two scheduler jobs to create snapshots on two different volumes at the
same time; snapshot creation failed because glusterd crashed.

Version-Release number of selected component (if applicable):
=============================================================
gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27


How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create two volumes (vol0 and vol1) and start them.
FUSE- and NFS-mount the volumes and generate some I/O

2. Create a shared storage volume and FUSE-mount it on all nodes

3. Initialise the snapshot scheduler on all nodes using snap_scheduler.py init

4. Enable the snap scheduler on one of the nodes using snap_scheduler.py enable

5. Add a job that creates a snapshot of vol0 at 18:20,
   and another job that creates a snapshot of vol1 at 18:20:

snap_scheduler.py add "J1_vol0"  "20 18 * * * " "vol0"
snap_scheduler.py add "J1_vol1"  "20 18 * * * " "vol1"

Snapshots were created successfully on both volumes:

gluster snapshot list
Scheduled-J1_vol0-vol0_GMT-2015.04.14-12.50.01
Scheduled-J1_vol1-vol1_GMT-2015.04.14-12.50.01

6. Edit both jobs to create snapshots at 18:36:
snap_scheduler.py edit "J1_vol1"  "36 18 * * * " "vol1"
snap_scheduler.py edit "J1_vol0"  "36 18 * * * " "vol0"

Snapshot creation failed.

-------------Part of /var/log/glusterfs/gcron.log----------

[2015-04-14 18:36:01,350 gcron.py:67 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-J1_vol0-vol0 vol0'
[2015-04-14 18:36:01,351 gcron.py:95 doJob] DEBUG /var/run/gluster/shared_storage/snaps/lock_files/J1_vol1 last modified at Tue Apr 14 18:20:26 2015
[2015-04-14 18:36:01,351 gcron.py:97 doJob] DEBUG Processing job Scheduled-J1_vol1-vol1
[2015-04-14 18:36:01,352 gcron.py:67 takeSnap] DEBUG Running command 'gluster snapshot create Scheduled-J1_vol1-vol1 vol1'
[2015-04-14 18:36:20,009 gcron.py:74 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-J1_vol0-vol0 vol0' returned '1'
[2015-04-14 18:36:20,009 gcron.py:74 takeSnap] DEBUG Command 'gluster snapshot create Scheduled-J1_vol1-vol1 vol1' returned '1'
[2015-04-14 18:36:20,010 gcron.py:77 takeSnap] ERROR Snapshot of vol0 failed
[2015-04-14 18:36:20,014 gcron.py:78 takeSnap] ERROR Command output:
[2015-04-14 18:36:20,014 gcron.py:79 takeSnap] ERROR
[2015-04-14 18:36:20,014 gcron.py:101 doJob] ERROR Job Scheduled-J1_vol0-vol0 failed
[2015-04-14 18:36:20,010 gcron.py:77 takeSnap] ERROR Snapshot of vol1 failed
[2015-04-14 18:36:20,019 gcron.py:78 takeSnap] ERROR Command output:
[2015-04-14 18:36:20,020 gcron.py:79 takeSnap] ERROR
[2015-04-14 18:36:20,020 gcron.py:101 doJob] ERROR Job Scheduled-J1_vol1-vol1 failed
------------------------------------------------------------

Actual results:
==============
glusterd crashed

Expected results:
================
No crash should be observed.

Additional info:
================
[2015-04-14 12:38:44.527287] W [socket.c:642:__socket_rwv] 0-quotad: readv on /var/run/gluster/7089eb2213ea459a8a12ba56023bd163.socket failed (No data available)
[2015-04-14 12:38:47.918396] W [socket.c:642:__socket_rwv] 0-quotad: readv on /var/run/gluster/7089eb2213ea459a8a12ba56023bd163.socket failed (No data available)
The message "I [MSGID: 106006] [glusterd-snapd-svc.c:379:glusterd_snapdsvc_rpc_notify] 0-management: snapd has disconnected from glusterd." repeated 2 times between [2015-04-14 12:38:34.733354] and [2015-04-14 12:38:40.636734]
The message "I [MSGID: 106006] [glusterd-svc-mgmt.c:327:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd." repeated 3 times between [2015-04-14 12:37:56.254712] and [2015-04-14 12:38:40.641386]
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 
2015-04-14 13:06:18
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.7dev


(gdb) bt
#0  0x00007fe862f85d16 in rcu_read_unlock_bp () from /usr/lib64/liburcu-bp.so.1
#1  0x00007fe863283252 in glusterd_mgmt_v3_commit (op=GD_OP_SNAP,
op_ctx=0x7fe800000003, req_dict=0x7fe85c344c4c, op_errstr=0x7fe85879ace8,
txn_generation=3)
    at glusterd-mgmt.c:1232
#2  0x00007fe863287078 in glusterd_mgmt_v3_initiate_snap_phases (req=0x1345108,
op=GD_OP_SNAP, dict=0x7fe85c45d87c) at glusterd-mgmt.c:1998
#3  0x00007fe863272a10 in glusterd_handle_snapshot_create (req=0x1345108,
op=GD_OP_SNAP, dict=0x7fe85c45d87c, err_str=<value optimized out>,
len=140635893514400)
    at glusterd-snapshot.c:3763
#4  0x00007fe86327e7c1 in glusterd_handle_snapshot_fn (req=0x1345108) at
glusterd-snapshot.c:8305
#5  0x00007fe8631c9d7f in glusterd_big_locked_handler (req=0x1345108,
actor_fn=0x7fe86327dfa0 <glusterd_handle_snapshot_fn>) at glusterd-handler.c:83
#6  0x0000003abac61c72 in synctask_wrap (old_task=<value optimized out>) at
syncop.c:375
#7  0x0000003a964438f0 in ?? () from /lib64/libc.so.6
#8  0x0000000000000000 in ?? ()

--- Additional comment from  on 2015-04-14 09:37:56 EDT ---

sosreport :
=========
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/snapshots/1211640/

--- Additional comment from Avra Sengupta on 2015-04-15 02:57:10 EDT ---

The crash is in rcu_read_unlock_bp(), which is not affected by changes to
snapshot schedules. Moving this to the glusterd core team.
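
For context: urcu-bp keeps per-thread reader state, so rcu_read_lock() and
rcu_read_unlock() must pair on the same thread. A minimal sketch of that
constraint (not glusterd code; assumes liburcu-bp is installed):

/* Build: gcc rcu_demo.c -o rcu_demo -lurcu-bp */
#include <stdio.h>
#include <urcu-bp.h>            /* liburcu "bulletproof" flavour */

static int shared = 42;         /* stand-in for RCU-protected data */

int main(void)
{
        rcu_read_lock();        /* opens this thread's reader section */
        printf("read %d\n", shared);
        rcu_read_unlock();      /* OK: closed by the same thread */

        /* The crashing pattern: open the section here, let a synctask
         * yield move the task to another worker thread, and call
         * rcu_read_unlock() there -- that thread's reader state does
         * not match, and liburcu-bp crashes, as in the backtrace above. */
        return 0;
}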

--- Additional comment from Atin Mukherjee on 2015-04-15 11:12:15 EDT ---

http://review.gluster.org/10147 introduced this, and we are working on a
solution. In the worst case the mentioned patch will be reverted, which would
solve this problem.

--- Additional comment from Anand Avati on 2015-04-17 05:39:14 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock
as recursive lock to avoid dead lock.) posted (#1) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-17 05:56:41 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock
as recursive lock to avoid dead lock.) posted (#2) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-17 13:29:53 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock
as recursive lock to avoid dead lock.) posted (#3) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-18 08:10:18 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock
as recursive lock to avoid dead lock.) posted (#4) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-18 13:34:40 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock
as recursive lock to avoid dead lock.) posted (#5) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from  on 2015-04-21 02:10:07 EDT ---

Hitting this issue multiple times while testing snapshots.
Proposing this bug as a blocker.

--- Additional comment from Anand Avati on 2015-04-22 00:46:30 EDT ---

REVIEW: http://review.gluster.org/10285 (glusterd: Implementation of sync lock
as recursive lock to avoid crash.) posted (#6) for review on master by Anand
Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-22 05:18:54 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#7) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-22 07:25:22 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#8) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-22 08:27:46 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#9) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-22 13:10:57 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#10) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-23 05:44:02 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#11) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-24 06:29:18 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#12) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-27 04:44:50 EDT ---

REVIEW: http://review.gluster.org/10285 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#13) for review on master by
Anand Nekkunti (anekkunt at redhat.com)

--- Additional comment from Anand Avati on 2015-04-28 04:53:31 EDT ---

COMMIT: http://review.gluster.org/10285 committed in master by Vijay Bellur
(vbellur at redhat.com) 
------
commit ada6b3a8800867934af57a57d5312f5a5d8374f0
Author: anand <anekkunt at redhat.com>
Date:   Fri Apr 17 14:19:46 2015 +0530

    libglusterfs: Implementation of sync lock as recursive lock to avoid crash.

    Problem: In glusterd we use a big lock, implemented on top of the
    synctask framework, for thread synchronization, and RCU locks for data
    consistency. The synctask framework swaps threads if no worker pool
    thread is available; because of this, rcu_read_lock() and
    rcu_read_unlock() could run in different threads (which urcu-bp does
    not allow), resulting in a glusterd crash.

    Fix: To avoid releasing the sync lock (big lock) in the middle of an
    RCU critical section, implement the sync lock as a recursive lock.

    More details:
    link: http://www.spinics.net/lists/gluster-devel/msg14632.html

    Change-Id: I2b56c1caf3f0470f219b1adcaf62cce29cdc6b88
    BUG: 1211640
    Signed-off-by: anand <anekkunt at redhat.com>
    Reviewed-on: http://review.gluster.org/10285
    Reviewed-by: Atin Mukherjee <amukherj at redhat.com>
    Tested-by: Gluster Build System <jenkins at build.gluster.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur at redhat.com>
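
For illustration, a minimal pthread-based sketch of the recursion bookkeeping
the fix describes: track an owner and a depth counter so that re-acquisition
by the current holder nests instead of unlocking and re-locking in the middle
of an RCU critical section. The names (rec_lock_t, rec_lock, rec_unlock) are
hypothetical; the actual change is to the sync lock in libglusterfs, where
ownership is tracked per synctask rather than per pthread, since a task can
migrate across threads.

#include <pthread.h>

typedef struct {
        pthread_mutex_t mtx;    /* protects owner/depth              */
        pthread_cond_t  cond;   /* contenders wait here              */
        pthread_t       owner;  /* current holder, valid if depth>0  */
        int             depth;  /* recursion count                   */
} rec_lock_t;

void rec_lock_init(rec_lock_t *l)
{
        pthread_mutex_init(&l->mtx, NULL);
        pthread_cond_init(&l->cond, NULL);
        l->depth = 0;
}

void rec_lock(rec_lock_t *l)
{
        pthread_mutex_lock(&l->mtx);
        if (l->depth > 0 && pthread_equal(l->owner, pthread_self())) {
                l->depth++;                     /* nested re-entry by owner */
        } else {
                while (l->depth > 0)            /* wait for full release */
                        pthread_cond_wait(&l->cond, &l->mtx);
                l->owner = pthread_self();
                l->depth = 1;
        }
        pthread_mutex_unlock(&l->mtx);
}

void rec_unlock(rec_lock_t *l)
{
        pthread_mutex_lock(&l->mtx);
        if (--l->depth == 0)
                pthread_cond_broadcast(&l->cond); /* wake waiters */
        pthread_mutex_unlock(&l->mtx);
}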

--- Additional comment from Anand Avati on 2015-04-28 12:58:01 EDT ---

REVIEW: http://review.gluster.org/10432 (libglusterfs: Implementation of sync
lock as recursive lock to avoid crash.) posted (#1) for review on release-3.7
by Anand Nekkunti (anekkunt at redhat.com)


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
https://bugzilla.redhat.com/show_bug.cgi?id=1199352
[Bug 1199352] GlusterFS 3.7.0 tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1211640
[Bug 1211640] glusterd crash when snapshot create was in progress on
different volumes at the same time - job edited to create snapshots at the
given time