[Bugs] [Bug 1570476] New: Rebalance on few nodes doesn't seem to complete - stuck at FUTEX_WAIT

bugzilla at redhat.com bugzilla at redhat.com
Mon Apr 23 03:09:14 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1570476

            Bug ID: 1570476
           Summary: Rebalance on few nodes doesn't seem to complete -
                    stuck at FUTEX_WAIT
           Product: GlusterFS
           Version: 4.0
         Component: distribute
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: nbalacha at redhat.com
                CC: bugs at gluster.org, rhs-bugs at redhat.com,
                    storage-qa-internal at redhat.com, tdesala at redhat.com
        Depends On: 1565119, 1568348
            Blocks: 1570475



+++ This bug was initially created as a clone of Bug #1568348 +++

Concurrent directory renames and fix-layouts can deadlock.

Steps to reproduce this with upstream master:


1. Create a 5-brick pure distribute volume
2. Mount the volume on 2 different mount points (/mnt/1 and /mnt/2)
3. From /mnt/1 create 3 levels of directories (mkdir -p d0/d1/d2)
4. Add bricks to the volume 
5. Gdb into the mount point process for /mnt/1 and set a breakpoint at
dht_rename_dir_lock1_cbk
6. From /mnt/1 run 'mv d0/d1 d0/d1_a'
In this particular example, the name of the hashed subvol of /d0/d1 is
alphabetically greater than that of /d0/d1_a:
[2018-04-17 09:24:02.267020] I [MSGID: 109066] [dht-rename.c:1751:dht_rename]
2-dlock-dht: renaming /d0/d1 (hash=dlock-client-3/cache=dlock-client-0) =>
/d0/d1_a (hash=dlock-client-0/cache=<nul>)

7. Once the breakpoint is hit in the /mnt/1 process, run the following on the
other mount point /mnt/2

setfattr -n "distribute.fix.layout" -v "1" d0


8. Allow gdb to continue.

Both processes are now deadlocked.




[root at rhgs313-6 ~]# gluster v create dlock
server1:/bricks/brick1/deadlock-{1..5} force
volume create: dlock: success: please start the volume to access data
[root at rhgs313-6 ~]# gluster v start dlock
volume start: dlock: success
[root at rhgs313-6 ~]# mount -t glusterfs -s server1:dlock /mnt/fuse1
[root at rhgs313-6 ~]# mount -t glusterfs -s server1:dlock /mnt/fuse2
[root at rhgs313-6 ~]# cd /mnt/fuse1
[root at rhgs313-6 fuse1]# l
total 0
[root at rhgs313-6 fuse1]# 
[root at rhgs313-6 fuse1]# 
[root at rhgs313-6 fuse1]# 
[root at rhgs313-6 fuse1]# mkdir -p d0/d1/d2
[root at rhgs313-6 fuse1]# ll -lR
.:
total 4
drwxr-xr-x. 3 root root 4096 Apr 17 14:49 d0

./d0:
total 4
drwxr-xr-x. 3 root root 4096 Apr 17 14:49 d1

./d0/d1:
total 4
drwxr-xr-x. 2 root root 4096 Apr 17 14:49
[root at rhgs313-6 fuse1]# gluster v add-brick dlock
server1:/bricks/brick1/deadlock-{6..7} force
volume add-brick: success

Attach gdb to the mount process and set the breakpoint as described above.


[root at rhgs313-6 fuse1]# mv d0/d1 d0/d1_a


Once gdb breaks at the breakpoint,
[root at rhgs313-6 brick1]# cd /mnt/fuse2/
[root at rhgs313-6 fuse2]# ll
total 4
drwxr-xr-x. 3 root root 4096 Apr 17 14:49 d0
[root at rhgs313-6 fuse2]# setfattr -n "distribute.fix.layout" -v "1" d0



This will hang. Allow gdb to continue. /mnt/fuse1 will also hang.

--- Additional comment from Worker Ant on 2018-04-17 06:12:59 EDT ---

REVIEW: https://review.gluster.org/19886 (cluster/dht: Fix dht_rename lock
order) posted (#1) for review on master by N Balachandran

--- Additional comment from Worker Ant on 2018-04-22 21:43:39 EDT ---

COMMIT: https://review.gluster.org/19886 committed in master by "Raghavendra G"
<rgowdapp at redhat.com> with a commit message- cluster/dht: Fix dht_rename lock
order

Fixed dht_order_rename_lock to use the same inodelk ordering
as that of the dht selfheal locks (dictionary order of
lock subvolumes).

Change-Id: Ia3f8353b33ea2fd3bc1ba7e8e777dda6c1d33e0d
fixes: bz#1568348
Signed-off-by: N Balachandran <nbalacha at redhat.com>


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1565119
[Bug 1565119] Rebalance on few nodes doesn't seem to complete - stuck at
FUTEX_WAIT
https://bugzilla.redhat.com/show_bug.cgi?id=1568348
[Bug 1568348] Rebalance on few nodes doesn't seem to complete - stuck at
FUTEX_WAIT
https://bugzilla.redhat.com/show_bug.cgi?id=1570475
[Bug 1570475] Rebalance on few nodes doesn't seem to complete - stuck at
FUTEX_WAIT
-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.


More information about the Bugs mailing list