[Bugs] [Bug 1775461] protocol/client: After the "Remove lock recovery logic from client and server protocol translators" patch commit, an fcntl lock can be acquired on the same file twice.

bugzilla@redhat.com
Tue Dec 10 02:08:51 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1775461



--- Comment #8 from yinkui <13965432176@163.com> ---
I reproduced this bug.
1. gluster v info test
[root@server6 ~]# gluster v info test
Volume Name: test
Type: Distributed-Replicate
Volume ID: 8c07f99a-85fe-4d62-b2e0-3deb480a371a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: server6:/bricks/brick1/brick0
Brick2: server6:/bricks/brick2/brick0
Brick3: server6:/bricks/brick3/brick0
Brick4: server6:/bricks/brick4/brick0
Brick5: server6:/bricks/brick5/brick0
Brick6: server6:/bricks/brick6/brick0
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
2. Steps to reproduce:
2.1 Lock file1 with the ./test program (a sketch of such a program follows the output below):
[root@server6 ~]# ./test 
 lock: Success

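The report does not include the source of the ./test binary. A minimal sketch of such a lock helper, assuming it takes an exclusive fcntl write lock on file1 through the FUSE mount (the path /mnt/test/file1 is an assumption, not from the report) and then sleeps to keep the lock held, could look like this:

/* Hypothetical reconstruction of the ./test lock helper; not from the report. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Assumed FUSE mount path of the test volume. */
    int fd = open("/mnt/test/file1", O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    struct flock fl = {
        .l_type   = F_WRLCK,   /* exclusive write lock */
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,         /* 0 = lock the whole file */
    };

    /* F_SETLK fails immediately with EAGAIN/EACCES if another
     * process already holds a conflicting lock. */
    if (fcntl(fd, F_SETLK, &fl) == -1) {
        perror(" lock");       /* e.g. " lock: Resource temporarily unavailable" */
        return EXIT_FAILURE;
    }
    perror(" lock");           /* errno is 0 here, so glibc prints " lock: Success" */

    pause();                   /* keep the process, fd and lock alive */
    return EXIT_SUCCESS;
}

Built with something like "gcc -o test test.c" and left running, so the lock stays held for the rest of the steps.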
2.2 Kill the brick processes of the second replica set (PIDs 4167, 4187, 4207) and force-start the volume:
[root@server6 ~]# gluster v status
Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server6:/bricks/brick1/brick0         49152     0          Y       3403 
Brick server6:/bricks/brick2/brick0         49153     0          Y       3423 
Brick server6:/bricks/brick3/brick0         49154     0          Y       3443 
Brick server6:/bricks/brick4/brick0         49155     0          Y       4167 
Brick server6:/bricks/brick5/brick0         49156     0          Y       4187 
Brick server6:/bricks/brick6/brick0         49157     0          Y       4207 
Self-heal Daemon on localhost               N/A       N/A        Y       4235 

Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks

[root@server6 ~]# ls /bricks/brick4/brick0
file1  file2  file5  file6  file8
[root@server6 ~]# ls /bricks/brick5/brick0
file1  file2  file5  file6  file8
[root@server6 ~]# ls /bricks/brick6/brick0 
file1  file2  file5  file6  file8
[root@server6 ~]# kill -9 4167 4187 4207
[root@server6 ~]# gluster v start test force
volume start: test: success
[root@server6 ~]# gluster v status
Status of volume: test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server6:/bricks/brick1/brick0         49152     0          Y       3403 
Brick server6:/bricks/brick2/brick0         49153     0          Y       3423 
Brick server6:/bricks/brick3/brick0         49154     0          Y       3443 
Brick server6:/bricks/brick4/brick0         49155     0          Y       4410 
Brick server6:/bricks/brick5/brick0         49156     0          Y       4430 
Brick server6:/bricks/brick6/brick0         49157     0          Y       4450 
Self-heal Daemon on localhost               N/A       N/A        Y       4471 

Task Status of Volume test
------------------------------------------------------------------------------
There are no active volume tasks

2.3 In another terminal, run the test executable again; it locks file1 successfully even though the first process still holds its lock.
[root@server6 ~]# ./test 
 lock: Success

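At this point two processes each believe they hold an exclusive lock on file1, which is the bug: the first client never re-acquired its lock on the restarted bricks. One way to confirm this from a third process (this checker and the path are my own additions, not part of the report) is F_GETLK, which reports the owner of a conflicting lock without taking one:

/* Hypothetical checker: reports who (if anyone) holds a conflicting lock. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    /* Assumed FUSE mount path; pass another path as argv[1] if needed. */
    const char *path = argc > 1 ? argv[1] : "/mnt/test/file1";

    int fd = open(path, O_RDWR);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,
    };

    /* F_GETLK does not acquire anything; it fills in the first conflicting lock. */
    if (fcntl(fd, F_GETLK, &fl) == -1) {
        perror("F_GETLK");
        return EXIT_FAILURE;
    }

    if (fl.l_type == F_UNLCK)
        printf("no conflicting lock reported\n");
    else
        printf("conflicting lock held by pid %d\n", (int)fl.l_pid);

    return EXIT_SUCCESS;
}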
2.4 However, in glusterfs 3.7.12, after the glusterfsd (brick) processes restart, the glusterfs client process reopens the fd and re-acquires the lock on file1, so the second test executable cannot lock file1.
