[Bugs] [Bug 1775461] protocol/client: after the "Remove lock recovery logic from client and server protocol translators" patch was committed, an fcntl lock can be taken on the same file twice.

bugzilla at redhat.com bugzilla at redhat.com
Tue Dec 10 06:09:55 UTC 2019


https://bugzilla.redhat.com/show_bug.cgi?id=1775461



--- Comment #11 from Anoop C S <anoopcs at redhat.com> ---
(In reply to yinkui from comment #8)
> I reproduced this bug.
> 1. gluster v info test
> [root at server6 ~]# gluster v info test
> Volume Name: test
> Type: Distributed-Replicate
> Volume ID: 8c07f99a-85fe-4d62-b2e0-3deb480a371a
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: server6:/bricks/brick1/brick0
> Brick2: server6:/bricks/brick2/brick0
> Brick3: server6:/bricks/brick3/brick0
> Brick4: server6:/bricks/brick4/brick0
> Brick5: server6:/bricks/brick5/brick0
> Brick6: server6:/bricks/brick6/brick0
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> 2. Steps to reproduce:
> 2.1 Lock file1:
> [root at server6 ~]# ./test 
>  lock: Success

Do you keep the fd open through the next steps?
If yes, then the fd should be reopened once the brick processes are back up
after force-starting the volume. Check the mount log for the entry "1 fds
open - Delaying child_up until they are re-opened".
If not, then there is no point in trying to lock again, as the previous lock
would have been released when the fd was closed or the application exited.
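
For reference, below is a minimal sketch of the kind of fcntl lock test
presumably behind the ./test program above. The mount path /mnt/test/file1
and the use of pause() to keep the fd open are assumptions, not details
taken from the bug report:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Request an exclusive (write) lock over the whole file. */
    struct flock fl = {
        .l_type   = F_WRLCK,
        .l_whence = SEEK_SET,
        .l_start  = 0,
        .l_len    = 0,      /* 0 = lock through end of file */
    };

    int fd = open("/mnt/test/file1", O_RDWR);  /* assumed mount path */
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    if (fcntl(fd, F_SETLK, &fl) < 0) {
        perror("lock");
        return EXIT_FAILURE;
    }
    printf(" lock: Success\n");  /* matches the output shown above */

    /* Keep the fd open so the lock stays held; closing the fd (or
     * exiting) releases the lock, which is why re-locking afterwards
     * proves nothing if the fd was not kept open. */
    pause();

    close(fd);
    return EXIT_SUCCESS;
}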


