[Bugs] [Bug 1226149] New: Different clients cannot execute "for((i=0; i<1000; i++)); do ls -al; done" in the same directory at the same time

bugzilla at redhat.com bugzilla at redhat.com
Fri May 29 05:46:15 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1226149

            Bug ID: 1226149
           Summary: Different clients cannot execute
                    "for((i=0;i<1000;i++));do ls -al;done" in the same
                    directory at the same time
           Product: Red Hat Gluster Storage
           Version: 3.1
         Component: glusterfs
     Sub Component: disperse
          Severity: high
          Assignee: rhs-bugs at redhat.com
          Reporter: nsathyan at redhat.com
        QA Contact: byarlaga at redhat.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com,
                    iesool at 163.com, lidi at perabytes.com,
                    pkarampu at redhat.com, xhernandez at datalab.es
        Depends On: 1165041, 1225279



+++ This bug was initially created as a clone of Bug #1225279 +++

+++ This bug was initially created as a clone of Bug #1165041 +++

+++ This bug was initially created as a clone of Bug #1161903 +++

Description of problem:
On a disperse volume, different clients cannot run "ls -al" in the same
directory at the same time.

In client-1's mountpoint, run "for((i=0;i<1000;i++));do ls -al;done". In the
same directory on client-2's mountpoint, "for((i=0;i<1000;i++));do ls
-al;done", "touch newfile" and "mkdir newdirectory" all block until client-1's
loop of 1000 "ls -al" runs has finished.

[root at localhost test]# gluster volume info

Volume Name: test
Type: Distributed-Disperse
Volume ID: 433248ee-24f5-44e3-b334-488743850e45
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: 10.10.101.111:/sda
Brick2: 10.10.101.111:/sdb
Brick3: 10.10.101.111:/sdc
Brick4: 10.10.101.111:/sdd
Brick5: 10.10.101.111:/sde
Brick6: 10.10.101.111:/sdf


Version-Release number of selected component (if applicable):
3.6.0

How reproducible:


Steps to Reproduce:
1. 
2.
3.

Actual results:
In the same directory on the other client, ls, touch and mkdir are blocked.

Expected results:
In the same directory on the other client, ls, touch and mkdir should succeed,
or block only briefly.

Additional info:

--- Additional comment from Niels de Vos on 2014-11-11 13:52:26 CET ---

Have you tried this also on other types of volumes? Is this only affecting a
disperse volume?

--- Additional comment from jiademing on 2014-11-12 02:36:07 CET ---

Yes, it only affects a disperse volume. I tried disabling the
gf_timer_call_after() call made for ec_unlock in ec_common.c's
ec_unlock_timer_add(); after that, "for((i=0;i<1000;i++));do ls -al;done" can
run on different clients at the same time.

In my opinion, the gf_timer_call_after() in ec_unlock is an optimization for a
single client, but it may be bad for many clients.


(In reply to Niels de Vos from comment #1)
> Have you tried this also on other types of volumes? Is this only affecting a
> disperse volume?

--- Additional comment from Xavier Hernandez on 2014-11-12 18:23:25 CET ---

Yes, this is a method to minimize lock/unlock calls. I'll try to find a good
solution that mitigates the multiple-client problem.

--- Additional comment from jiademing on 2014-11-17 02:30:07 CET ---

Yes, I will pay close attention to this problem. Thanks.

(In reply to Xavier Hernandez from comment #3)
> Yes, this is a method to minimize lock/unlock calls. I'll try to find a good
> solution to minimize the multiple client problem.

--- Additional comment from Anand Avati on 2015-05-20 09:32:15 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#1) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Xavier Hernandez on 2015-05-20 09:35:48 EDT ---

The previous patch should be half of the solution. Another patch will be sent
to add wider support in the locks xlator.

--- Additional comment from Anand Avati on 2015-05-21 10:45:16 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#2) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-21 10:45:19 EDT ---

REVIEW: http://review.gluster.org/10880 (features/locks: Handle virtual
getxattrs in more fops) posted (#1) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-21 11:50:15 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#3) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-22 03:31:51 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#4) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-22 03:34:23 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#5) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-24 15:41:17 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#6) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-24 15:41:19 EDT ---

REVIEW: http://review.gluster.org/10880 (features/locks: Handle virtual
getxattrs in more fops) posted (#2) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-25 03:57:02 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#7) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-25 04:28:33 EDT ---

REVIEW: http://review.gluster.org/10845 (cluster/ec: Forced unlock when lock
contention is detected) posted (#8) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-25 06:27:52 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#4) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-25 07:20:48 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#5) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-25 09:25:54 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#6) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-25 12:42:46 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#7) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-26 00:07:25 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#8) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-26 06:18:15 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#9) for review on master by Xavier Hernandez
(xhernandez at datalab.es)

--- Additional comment from Anand Avati on 2015-05-26 11:06:04 EDT ---

REVIEW: http://review.gluster.org/10852 (cluster/ec: Forced unlock when lock
contention is detected) posted (#10) for review on master by Pranith Kumar
Karampuri (pkarampu at redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 07:12:41 EDT ---

COMMIT: http://review.gluster.org/10925 committed in release-3.7 by Pranith
Kumar Karampuri (pkarampu at redhat.com) 
------
commit 6483ac9b7eea567c8d0d48aa2c1139eedc7a9cd9
Author: Pranith Kumar K <pkarampu at redhat.com>
Date:   Tue May 19 20:53:30 2015 +0530

    features/locks: Handle virtual getxattrs in more fops

            Backport of http://review.gluster.com/10880

    With this patch getxattr of inodelk/entrylk counts can be requested in
    readv/writev/create/unlink/opendir.

    BUG: 1225279
    Change-Id: Ifb9378ce650377e67a8601147eac95cfbdf0abf0
    Signed-off-by: Pranith Kumar K <pkarampu at redhat.com>
    Reviewed-on: http://review.gluster.org/10925
    Tested-by: Gluster Build System <jenkins at build.gluster.com>
    Tested-by: NetBSD Build System


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1165041
[Bug 1165041] Different clients cannot execute "for((i=0;i<1000;i++));do ls
-al;done" in the same directory at the same time
https://bugzilla.redhat.com/show_bug.cgi?id=1225279
[Bug 1225279] Different clients cannot execute "for((i=0;i<1000;i++));do ls
-al;done" in the same directory at the same time
-- 
You are receiving this mail because:
You are on the CC list for the bug.
Unsubscribe from this bug https://bugzilla.redhat.com/token.cgi?t=GsQ2Tkxpjv&a=cc_unsubscribe