[Bugs] [Bug 1165041] New: Different clients cannot execute "for((i=0; i<1000; i++)); do ls -al; done" in the same directory at the same time

bugzilla at redhat.com
Tue Nov 18 08:16:03 UTC 2014


https://bugzilla.redhat.com/show_bug.cgi?id=1165041

            Bug ID: 1165041
           Summary: Different clients cannot execute
                    "for((i=0;i<1000;i++));do ls -al;done" in the same
                    directory at the same time
           Product: GlusterFS
           Version: mainline
         Component: disperse
          Severity: high
          Assignee: bugs at gluster.org
          Reporter: xhernandez at datalab.es
                CC: bugs at gluster.org, gluster-bugs at redhat.com,
                    jiademing.dd at gmail.com, xhernandez at datalab.es



+++ This bug was initially created as a clone of Bug #1161903 +++

Description of problem:
On a disperse volume, different clients cannot run "ls -al" in the same
directory at the same time.

In client-1's mount point, run "for((i=0;i<1000;i++));do ls -al;done". In the
same directory under client-2's mount point, "for((i=0;i<1000;i++));do ls
-al;done", "touch newfile" or "mkdir newdirectory" blocks until client-1's
command (the loop of 1000 "ls -al" calls) finishes.

[root@localhost test]# gluster volume info

Volume Name: test
Type: Distributed-Disperse
Volume ID: 433248ee-24f5-44e3-b334-488743850e45
Status: Started
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: 10.10.101.111:/sda
Brick2: 10.10.101.111:/sdb
Brick3: 10.10.101.111:/sdc
Brick4: 10.10.101.111:/sdd
Brick5: 10.10.101.111:/sde
Brick6: 10.10.101.111:/sdf


Version-Release number of selected component (if applicable):
3.6.0

How reproducible:
Always, while the loop on the first client is running.


Steps to Reproduce:
1. Create and start a disperse volume and mount it on two clients.
2. On client-1, run "for((i=0;i<1000;i++));do ls -al;done" in a directory.
3. While that runs, on client-2 run "ls -al", "touch newfile" or
   "mkdir newdirectory" in the same directory.

Actual results:
In the same directory on the other client, ls, touch and mkdir block until
the first client's loop finishes.

Expected results:
In the same directory on the other client, ls, touch and mkdir should
succeed, or block only briefly.

Additional info:

--- Additional comment from Niels de Vos on 2014-11-11 13:52:26 CET ---

Have you tried this also on other types of volumes? Is this only affecting a
disperse volume?

--- Additional comment from jiademing on 2014-11-12 02:36:07 CET ---

Yes, it only affects a disperse volume. I tried turning off the
gf_timer_call_after() call made when ec_unlock is deferred in ec_common.c's
ec_unlock_timer_add(); after that, "for((i=0;i<1000;i++));do ls -al;done" can
run on different clients at the same time.

In my opinion, the gf_timer_call_after() in ec_unlock is an optimization for
a single client, but it may be bad for many clients.
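
To make the mechanism concrete, here is a minimal pthreads sketch of that
deferred-unlock pattern (a toy model only, not actual GlusterFS code; names
such as big_lock, IDLE_DELAY_SEC and do_operation are invented for
illustration). The lock is kept after each operation and only released from a
timer callback, so a client issuing back-to-back operations reuses it for
free, while a second client's conflicting request waits until the first
client stays idle long enough for the timer to fire:

/* Toy model of deferred unlock: NOT GlusterFS code. */
#include <pthread.h>
#include <semaphore.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define IDLE_DELAY_SEC 1          /* stand-in for the unlock timer delay */

static sem_t big_lock;            /* the contended lock; a semaphore, so the
                                     timer thread may legally release it */
static pthread_mutex_t state_mx = PTHREAD_MUTEX_INITIALIZER;
static bool holding = false;      /* do we currently own big_lock? */
static int  gen = 0;              /* bumped per operation; stales old timers */

/* Timer body: if no new operation arrived while we slept, really unlock. */
static void *unlock_after_idle(void *arg)
{
    int my_gen = (int)(intptr_t)arg;

    sleep(IDLE_DELAY_SEC);
    pthread_mutex_lock(&state_mx);
    if (holding && gen == my_gen) {
        holding = false;
        sem_post(&big_lock);
        printf("idle timer fired: lock released\n");
    }
    pthread_mutex_unlock(&state_mx);
    return NULL;
}

/* One operation: reuse the held lock if possible, else acquire it; on
   completion, arm the timer instead of unlocking immediately. */
static void do_operation(void)
{
    pthread_t t;

    pthread_mutex_lock(&state_mx);
    gen++;                        /* logically cancels any pending timer */
    if (!holding) {
        pthread_mutex_unlock(&state_mx);
        sem_wait(&big_lock);      /* a second client blocks right here */
        pthread_mutex_lock(&state_mx);
        holding = true;
    }
    /* ... the actual operation (ls -al, touch, mkdir) would run here ... */
    pthread_create(&t, NULL, unlock_after_idle, (void *)(intptr_t)gen);
    pthread_detach(t);
    pthread_mutex_unlock(&state_mx);
}

int main(void)
{
    sem_init(&big_lock, 0, 1);
    /* Like "for((i=0;i<1000;i++));do ls -al;done": every iteration arrives
       before the timer fires, so the lock is never released and a second
       client would block for the entire loop. */
    for (int i = 0; i < 5; i++)
        do_operation();
    sleep(IDLE_DELAY_SEC + 1);    /* let the final timer fire */
    return 0;
}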


(In reply to Niels de Vos from comment #1)
> Have you tried this also on other types of volumes? Is this only affecting a
> disperse volume?

--- Additional comment from Xavier Hernandez on 2014-11-12 18:23:25 CET ---

Yes, this is a method to minimize lock/unlock calls. I'll try to find a good
solution that also mitigates the multiple-client problem.
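
One conceivable direction, sketched purely as an illustration on top of the
toy model above (speculation, not necessarily the solution that will be
implemented): keep the deferred unlock for the uncontended single-client
case, but drop the lock as soon as a conflicting request from another client
is observed. The contended flag below is hypothetical; in a real deployment
the bricks would have to notify the current lock holder:

/* Hypothetical extension of the toy model above (reuses its definitions;
   still not GlusterFS code): release immediately on contention instead of
   waiting for the idle timer. */
static bool contended = false;    /* would be set when another client
                                     requests a conflicting lock */

static void operation_completed(void)
{
    pthread_t t;

    pthread_mutex_lock(&state_mx);
    gen++;                        /* invalidate any pending idle timer */
    if (contended) {
        /* Someone is waiting: unlock now, skip the timer. */
        holding   = false;
        contended = false;
        sem_post(&big_lock);
    } else {
        /* Uncontended: keep the lock and arm the idle timer as before. */
        pthread_create(&t, NULL, unlock_after_idle, (void *)(intptr_t)gen);
        pthread_detach(t);
    }
    pthread_mutex_unlock(&state_mx);
}

With something like this, a second client would block for at most one
in-flight operation instead of the whole 1000-iteration loop, while a single
client would still avoid per-operation lock round trips.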

--- Additional comment from jiademing on 2014-11-17 02:30:07 CET ---

Yes, I will pay close attention to this problem. Thanks.

(In reply to Xavier Hernandez from comment #3)
> Yes, this is a method to minimize lock/unlock calls. I'll try to find a good
> solution that also mitigates the multiple-client problem.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

