[Gluster-devel] Dynamic disabling of eager-locking based on number of fds

Pranith Kumar K pkarampu at redhat.com
Tue Feb 12 05:12:36 UTC 2013


On 02/12/2013 08:43 AM, Anand Avati wrote:
>
>
> On Mon, Feb 11, 2013 at 7:02 PM, Pranith Kumar K <pkarampu at redhat.com> wrote:
>
>     hi,
>           Problem:
>
>     When there are multiple fds writing to the same file with
>     eager-lock enabled, the fd that acquires the eager-lock waits
>     post-op-delay seconds before issuing the unlock. Because of
>     this, all other fds opened on the file see extra latency on
>     their writes. Eager-locking and post-op-delay need to be
>     disabled when there are multiple fds opened on the file.
>
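>     As a manual workaround, assuming the standard volume option
>     names, both behaviours can be turned off by hand; the proposal
>     below aims to do this automatically, and only when needed:
>
>     gluster volume set <volname> cluster.eager-lock off
>     gluster volume set <volname> cluster.post-op-delay-secs 0
>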
>     Execute the following command on the mount point:
>     for n in `seq 1 50` ; do eval "/home/pranithk/workspace/gerrit-repo/append2log.py ./ben.log 10000 0.001 &" ; done ; wait
>
>     Here is the profile info output for that run:
>
>      %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>      ---------   -----------   -----------   -----------   ------------        ----
>           0.00       0.00 us       0.00 us       0.00 us   50     RELEASE
>           0.00       0.00 us       0.00 us       0.00 us   60  RELEASEDIR
>           0.00      55.00 us      55.00 us      55.00 us    1    GETXATTR
>           0.00      31.50 us      27.00 us      36.00 us    2      STATFS
>           0.00      41.00 us      29.00 us      53.00 us    2     ENTRYLK
>           0.00     198.00 us     198.00 us     198.00 us    1      CREATE
>           0.00     124.00 us     108.00 us     140.00 us    2     READDIR
>           0.00      27.04 us      17.00 us      95.00 us   49        OPEN
>           0.00      74.89 us      13.00 us     206.00 us   47        STAT
>           0.01      87.02 us      11.00 us     391.00 us   50       FLUSH
>           0.01     102.43 us      20.00 us     268.00 us   60     OPENDIR
>           0.02     344.27 us      22.00 us     940.00 us   44       WRITE
>           0.02     228.80 us      52.00 us     345.00 us   82    FXATTROP
>           0.03     199.89 us      19.00 us     404.00 us  120    READDIRP
>           0.05      91.41 us      23.00 us     832.00 us  421      LOOKUP
>          99.86  632698.45 us      17.00 us 1999724.00 us  126    FINODELK
>
>     Observe that most of the delay is in FINODELK fop.
>
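>     For context, append2log.py itself isn't in this thread; the
>     workload is essentially 50 concurrent processes appending small
>     records to one file. A minimal C equivalent, with hypothetical
>     argument handling, would be:
>
>     /* Append <count> records to <file>, sleeping <interval> seconds
>      * between writes. Running 50 instances concurrently gives 50 fds
>      * appending to the same file. */
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <string.h>
>     #include <unistd.h>
>     #include <fcntl.h>
>
>     int main (int argc, char *argv[])
>     {
>             long   i, count;
>             double interval;
>             int    fd;
>             char   record[] = "log entry\n";
>
>             if (argc != 4) {
>                     fprintf (stderr, "usage: %s <file> <count> <interval>\n",
>                              argv[0]);
>                     return 1;
>             }
>
>             count    = strtol (argv[2], NULL, 10);
>             interval = strtod (argv[3], NULL);
>
>             fd = open (argv[1], O_WRONLY | O_CREAT | O_APPEND, 0644);
>             if (fd < 0) {
>                     perror ("open");
>                     return 1;
>             }
>
>             for (i = 0; i < count; i++) {
>                     if (write (fd, record, strlen (record)) < 0)
>                             perror ("write");
>                     usleep ((useconds_t)(interval * 1000000));
>             }
>
>             close (fd);
>             return 0;
>     }
>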
>           Possible Solution:
>           With the patch http://review.gluster.org/4468 we started
>     maintaining an open-fd count in the inode. We need to implement
>     xdata-based xattr retrieval in the write fop and fetch the
>     open-fd count there. Remember the open-fd count received in the
>     write callbacks and maintain it in afr-fd-ctx. If the open-fd
>     count is >1, post-op-delay is immediately disabled for that
>     write fop. All write fops take this count into consideration to
>     determine whether to enable eager-lock and post-op-delay for
>     that write fop. A rough sketch follows:
>
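>     /* Sketch only, assuming GLUSTERFS_OPEN_FD_COUNT is the virtual
>      * xattr key from the 4468 patch and that afr_fd_ctx_t grows an
>      * open_fd_count member; field names are illustrative, not the
>      * final patch. */
>
>     /* When winding the write, ask for the open-fd count via xdata: */
>     dict_t *xdata = dict_new ();
>     if (xdata)
>             dict_set_int32 (xdata, GLUSTERFS_OPEN_FD_COUNT, 0);
>
>     /* In the write callback, cache the count in afr's fd context: */
>     uint32_t open_fd_count = 0;
>     if (rsp_xdata &&
>         !dict_get_uint32 (rsp_xdata, GLUSTERFS_OPEN_FD_COUNT,
>                           &open_fd_count))
>             fd_ctx->open_fd_count = open_fd_count;
>
>     /* When setting up the transaction for a write fop: */
>     if (fd_ctx->open_fd_count > 1) {
>             local->transaction.eager_lock_on = _gf_false;
>             local->delayed_post_op           = _gf_false;
>     }
>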
>     Let me know if you foresee any issues with this approach.
>
>     https://bugzilla.redhat.com/show_bug.cgi?id=910217 is tracking
>     this issue.
>
>
>
> Ideally you would want the open-fd count to be retrieved in all
> fops, and only when an eager lock has been acquired. Inspection of
> any fop callback's xattr_rsp should potentially wake up the
> sleeping post-op-delay on that inode (and disable further eager
> locking temporarily).
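>
> A rough sketch of that, with illustrative helper names (a common
> routine every fop callback can run on its xattr_rsp):
>
>     void
>     afr_handle_open_fd_count (xlator_t *this, fd_t *fd,
>                               dict_t *xattr_rsp)
>     {
>             uint32_t open_fd_count = 0;
>
>             if (!xattr_rsp ||
>                 dict_get_uint32 (xattr_rsp, GLUSTERFS_OPEN_FD_COUNT,
>                                  &open_fd_count))
>                     return;
>
>             if (open_fd_count > 1) {
>                     /* stop taking eager locks for now ... */
>                     afr_fd_ctx_set_eager_lock (this, fd, _gf_false);
>
>                     /* ... and fire the pending delayed post-op now
>                      * instead of waiting out post-op-delay. */
>                     afr_delayed_post_op_wake_up (this, fd);
>             }
>     }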
>
> Avati
Avati,
     Makes sense. Since it is an in-memory virtual xattr retrieval,
performance should not be affected too much, IMO. I will have to
implement it and run a perf test to make sure of that. Other than
that, everything else is OK, right?
     I will start the implementation if no other issues are foreseen.

pranith.

