[Bugs] [Bug 1779089] New: glusterfsd does not release posix locks when multiple glusterfs clients run flock -xo on the same file in parallel

bugzilla at redhat.com bugzilla at redhat.com
Tue Dec 3 09:35:02 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1779089

            Bug ID: 1779089
           Summary: glusterfsd does not release posix locks when multiple
                    glusterfs clients run flock -xo on the same file in
                    parallel
           Product: GlusterFS
           Version: mainline
            Status: NEW
         Component: locks
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: spalai at redhat.com
                CC: bugs at gluster.org, shujun.huang at nokia-sbell.com,
                    spalai at redhat.com, zz.sh.cynthia at gmail.com
        Depends On: 1776152
  Target Milestone: ---
    Classification: Community



+++ This bug was initially created as a clone of Bug #1776152 +++

Description of problem:
glusterfsd does not release posix locks when multiple glusterfs clients run
flock -xo on the same file in parallel.

Version-Release number of selected component (if applicable):
glusterfs 7.0

How reproducible:


Steps to Reproduce:
1. create a volume with one brick
   gluster volume create test3  192.168.0.14:/mnt/vol3-test force
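   Note: the volume also has to be started before it can be mounted in step
   2; a minimal sketch of that extra step (assumed here, not shown in the
   original report):
       gluster volume start test3
       gluster volume status test3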
2. mount the volume on two different nodes
  node name: node2
       mkdir /mnt/test-vol3
       mount -t glusterfs 192.168.0.14:/test3 /mnt/test-vol3
  node name: test
       mkdir /mnt/test-vol3
       mount -t glusterfs 192.168.0.14:/test3 /mnt/test-vol3
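  A quick sanity check (not part of the original report) that both mounts are
  in place before running the scripts:
       df -hT /mnt/test-vol3    # should report type fuse.glusterfs on both nodes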

3. prepare the same flock script on the two nodes
  [root@node2 ~]# vi flock.sh

#!/bin/bash
file=/mnt/test-vol3/test.log
touch $file
(

         flock -xo 200
         echo "client1 do something" > $file
         sleep 1

 ) 200>$file
[root@node2 ~]# vi repeat_flock.sh

#!/bin/bash
i=1
while [ "1" = "1" ]
do
    ./flock.sh
    ((i=i+1))
    echo $i
done
A similar script on the "test" node:
[root@test ~]# vi flock.sh

#!/bin/bash
file=/mnt/test-vol3/test.log
touch $file
(
         flock -xo 200
         echo "client2 do something" > $file
         sleep 1

 ) 200>$file

[root@test ~]# vi repeat_flock.sh

#!/bin/bash
i=1
while [ "1" = "1" ]
do
    ./flock.sh
    ((i=i+1))
    echo $i
done
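
Since step 4 invokes the scripts as ./repeat_flock.sh, they presumably need to
be made executable on both nodes first:
    chmod +x flock.sh repeat_flock.sh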

4. start repeat_flock.sh on both nodes
  before long, both scripts get stuck (a quick client-side check follows the
  output below):

   [root@test ~]# ./repeat_flock.sh
2
3
4
5
6
7
   [root@node2 ~]# ./repeat_flock.sh
2
issue reproduced
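
   A quick client-side way to confirm the loops are really hung (a
   hypothetical check, not part of the original report) is to look for flock
   processes stuck waiting on the lock:
       ps -eo pid,stat,wchan:20,cmd | grep '[f]lock'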

5. take a statedump of volume test3 (a note on locating the dump file follows
   the excerpt below)
  gluster v statedump test3
[xlator.features.locks.test3-locks.inode]
path=/test.log
mandatory=0
posixlk-count=3
posixlk.posixlk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 22752,
owner=8c9cd93f8ee486a0, client=0x7f76e8082100,
connection-id=CTX_ID:7da20ab3-cc70-41bd-ab83-955481288ba2-GRAPH_ID:0-PID:22649-HOST:node2-PC_NAME:test3-client-0-RECON_NO:-0,
blocked at 2019-11-25 08:30:12, granted at 2019-11-25 08:30:12
posixlk.posixlk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 10928,
owner=b42ee151db035df9, client=0x7f76e0006390,
connection-id=CTX_ID:c4cf488c-2d8e-4f7c-87e9-a0cb1f2648cd-GRAPH_ID:0-PID:10850-HOST:test-PC_NAME:test3-client-0-RECON_NO:-0,
blocked at 2019-11-25 08:30:12
posixlk.posixlk[2](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 22757,
owner=f62dd9ff96cefaf5, client=0x7f76e8082100,
connection-id=CTX_ID:7da20ab3-cc70-41bd-ab83-955481288ba2-GRAPH_ID:0-PID:22649-HOST:node2-PC_NAME:test3-client-0-RECON_NO:-0,
blocked at 2019-11-25 08:30:13
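
On the brick node the dump file normally lands in /var/run/gluster (the
default statedump path; adjust if your build uses a different location). A
sketch for pulling out the lock section shown above:

    gluster volume statedump test3
    dump=$(ls -t /var/run/gluster/*.dump.* | head -1)
    grep -A 20 'xlator.features.locks.test3-locks.inode' "$dump"

With the bug present, the ACTIVE entry above stays in the dump even after
every flock.sh invocation has exited, so the BLOCKED requests behind it are
never granted.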


Actual results:
both repeat_flock.sh instances get stuck, and the posix lock is held forever

Expected results:
neither repeat_flock.sh instance should get stuck

Additional info:

--- Additional comment from zhou lin on 2019-11-28 11:41:39 MVT ---



--- Additional comment from zhou lin on 2019-11-29 08:08:45 MVT ---

I tried adding an un_ref in grant_blocked_locks just before the stack unwind;
it seems to work.
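
A rough way to verify a candidate fix (a sketch, assuming the reproduction
setup above): stop both repeat_flock.sh loops, take another statedump, and
check that no posix lock entries remain on /test.log once every flock.sh
invocation has exited:

    gluster volume statedump test3
    dump=$(ls -t /var/run/gluster/*.dump.* | head -1)
    grep -c 'posixlk.posixlk\[' "$dump"    # should report 0 once no client holds the lock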

--- Additional comment from zhou lin on 2019-11-29 14:17:23 MVT ---

Please review the patch for this issue.

--- Additional comment from Worker Ant on 2019-12-03 10:52:33 MVT ---

REVIEW: https://review.gluster.org/23794 (add clean local after grant lock)
posted (#1) for review on master by None


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1776152
[Bug 1776152] glusterfsd does not release posix locks when multiple glusterfs
clients run flock -xo on the same file in parallel

