[Bugs] [Bug 1372211] write-behind: flush stuck by former failed write

bugzilla at redhat.com bugzilla at redhat.com
Wed Nov 2 04:36:39 UTC 2016


https://bugzilla.redhat.com/show_bug.cgi?id=1372211



--- Comment #3 from Worker Ant <bugzilla-bot at gluster.org> ---
COMMIT: http://review.gluster.org/15380 committed in master by Raghavendra G
(rgowdapp at redhat.com) 
------
commit 9340b3c7a6c8556d6f1d4046de0dbd1946a64963
Author: Ryan Ding <ryan.ding at open-fs.com>
Date:   Thu Sep 1 15:40:35 2016 +0800

    performance/write-behind: fix flush stuck by former failed writes

    The issue happens in this scenario:
    Assume a file is opened with fd1 and fd2.
    1. Some WRITE ops to fd1 got an error; they were added back to the
       'todo' queue because of that error.
    2. fd2 is closed, so a FLUSH op is sent to write-behind.
    3. The FLUSH cannot be unwound, because it is not a legal waiter
       for those failed WRITEs (as __wb_request_waiting_on() says), and
       the failed WRITEs cannot be retired while fd1 stays open. fd2 is
       stuck in the close syscall (a minimal reproduction sketch
       follows).
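
    A minimal reproduction sketch (not part of the patch; the mount
    path is hypothetical, and it assumes fd1's cached writes later
    fail on the brick, e.g. with errno 116 as in the dump below):

    /* repro-sketch.c: two fds on one file through a write-behind
     * mount; close(fd2) hangs once fd1's writes have failed */
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/mnt/gluster/rw-buffered-test"; /* assumed */
        char buf[131072];
        memset(buf, 'x', sizeof(buf));

        int fd1 = open(path, O_CREAT | O_WRONLY, 0644);
        int fd2 = open(path, O_WRONLY);
        if (fd1 < 0 || fd2 < 0)
            return 1;

        /* step 1: write-behind acks these early; if the brick later
         * fails them they go back on the 'todo' queue */
        write(fd1, buf, sizeof(buf));
        write(fd1, buf, sizeof(buf));

        /* step 2: close(fd2) issues a FLUSH into write-behind;
         * step 3: the FLUSH waits forever behind fd1's failed writes */
        close(fd2);

        close(fd1);
        return 0;
    }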

    To resolve this issue, change how we decide that two requests
    'conflict': a FLUSH/FSYNC does not conflict with writes that do not
    belong to it (writes issued on a different fd), so
    __wb_pick_winds() can wind the FLUSH op.
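
    A sketch of the relaxed conflict test (field names taken from the
    wb_request_t dumps below; an illustration of the rule, not the
    verbatim patch):

    /* Illustrative only: a FLUSH/FSYNC should wait only on writes
     * issued on its own fd, so fd1's failed writes no longer block
     * fd2's FLUSH */
    static int
    wb_requests_conflict_sketch(wb_request_t *existing,
                                wb_request_t *incoming)
    {
        if ((incoming->fop == GF_FOP_FLUSH ||
             incoming->fop == GF_FOP_FSYNC) &&
            existing->fop == GF_FOP_WRITE &&
            existing->fd != incoming->fd)
            return 0; /* no conflict: __wb_pick_winds() may wind it */

        return 1; /* otherwise keep the existing ordering rules */
    }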

    Below is some information from when the stuck issue happened:
    glusterdump logs:
    [xlator.performance.write-behind.wb_inode]
    path=/ltp-F9eG0ZSOME/rw-buffered-16436
    inode=0x7fdbe8039b9c
    window_conf=1048576
    window_current=249856
    transit-size=0
    dontsync=0

    [.WRITE]
    request-ptr=0x7fdbe8020200
    refcount=1
    wound=no
    generation-number=4
    req->op_ret=-1
    req->op_errno=116
    sync-attempts=3
    sync-in-progress=no
    size=131072
    offset=1220608
    lied=-1
    append=0
    fulfilled=0
    go=0

    [.WRITE]
    request-ptr=0x7fdbe8068c30
    refcount=1
    wound=no
    generation-number=5
    req->op_ret=-1
    req->op_errno=116
    sync-attempts=2
    sync-in-progress=no
    size=118784
    offset=1351680
    lied=-1
    append=0
    fulfilled=0
    go=0

    [.FLUSH]
    request-ptr=0x7fdbe8021cd0
    refcount=1
    wound=no
    generation-number=6
    req->op_ret=0
    req->op_errno=0
    sync-attempts=0

    gdb details for the 3 requests above:
    (gdb) print *((wb_request_t *)0x7fdbe8021cd0)
    $2 = {all = {next = 0x7fdbe803a608, prev = 0x7fdbe8068c30}, todo = {next
    = 0x7fdbe803a618, prev = 0x7fdbe8068c40}, lie = {next = 0x7fdbe8021cf0,
        prev = 0x7fdbe8021cf0}, winds = {next = 0x7fdbe8021d00, prev =
    0x7fdbe8021d00}, unwinds = {next = 0x7fdbe8021d10, prev =
    0x7fdbe8021d10}, wip = {
        next = 0x7fdbe8021d20, prev = 0x7fdbe8021d20}, stub =
    0x7fdbe80224dc, write_size = 0, orig_size = 0, total_size = 0, op_ret =
    0, op_errno = 0,
      refcount = 1, wb_inode = 0x7fdbe803a5f0, fop = GF_FOP_FLUSH, lk_owner
    = {len = 8, data = "W\322T\f\271\367y$", '\000' <repeats 1015 times>},
      iobref = 0x0, gen = 6, fd = 0x7fdbe800f0dc, wind_count = 0, ordering =
    {size = 0, off = 0, append = 0, tempted = 0, lied = 0, fulfilled = 0,
        go = 0}}
    (gdb) print *((wb_request_t *)0x7fdbe8020200)
    $3 = {all = {next = 0x7fdbe8068c30, prev = 0x7fdbe803a608}, todo = {next
    = 0x7fdbe8068c40, prev = 0x7fdbe803a618}, lie = {next = 0x7fdbe8068c50,
        prev = 0x7fdbe803a628}, winds = {next = 0x7fdbe8020230, prev =
    0x7fdbe8020230}, unwinds = {next = 0x7fdbe8020240, prev =
    0x7fdbe8020240}, wip = {
        next = 0x7fdbe8020250, prev = 0x7fdbe8020250}, stub =
    0x7fdbe8062c3c, write_size = 131072, orig_size = 4096, total_size = 0,
    op_ret = -1,
      op_errno = 116, refcount = 1, wb_inode = 0x7fdbe803a5f0, fop =
    GF_FOP_WRITE, lk_owner = {len = 8, data = '\000' <repeats 1023 times>},
      iobref = 0x7fdbe80311a0, gen = 4, fd = 0x7fdbe805c89c, wind_count = 3,
    ordering = {size = 131072, off = 1220608, append = 0, tempted = -1,
        lied = -1, fulfilled = 0, go = 0}}
    (gdb) print *((wb_request_t *)0x7fdbe8068c30)
    $4 = {all = {next = 0x7fdbe8021cd0, prev = 0x7fdbe8020200}, todo = {next
    = 0x7fdbe8021ce0, prev = 0x7fdbe8020210}, lie = {next = 0x7fdbe803a628,
        prev = 0x7fdbe8020220}, winds = {next = 0x7fdbe8068c60, prev =
    0x7fdbe8068c60}, unwinds = {next = 0x7fdbe8068c70, prev =
    0x7fdbe8068c70}, wip = {
        next = 0x7fdbe8068c80, prev = 0x7fdbe8068c80}, stub =
    0x7fdbe806746c, write_size = 118784, orig_size = 4096, total_size = 0,
    op_ret = -1,
      op_errno = 116, refcount = 1, wb_inode = 0x7fdbe803a5f0, fop =
    GF_FOP_WRITE, lk_owner = {len = 8, data = '\000' <repeats 1023 times>},
      iobref = 0x7fdbe8052b10, gen = 5, fd = 0x7fdbe805c89c, wind_count = 2,
    ordering = {size = 118784, off = 1351680, append = 0, tempted = -1,
        lied = -1, fulfilled = 0, go = 0}}

    You can see that all three requests sit on the 'todo' queue, and
    that the FLUSH op's fd (0x7fdbe800f0dc) is not the same as the
    WRITE ops' fd (0x7fdbe805c89c).

    Change-Id: Id687f9cd3b9f281e1a97c83f1ce981ede272b8ab
    BUG: 1372211
    Signed-off-by: Ryan Ding <ryan.ding at open-fs.com>
    Reviewed-on: http://review.gluster.org/15380
    Tested-by: Raghavendra G <rgowdapp at redhat.com>
    Reviewed-by: Raghavendra G <rgowdapp at redhat.com>
    NetBSD-regression: NetBSD Build System <jenkins at build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins at build.gluster.org>
    Smoke: Gluster Build System <jenkins at build.gluster.org>

-- 
You are receiving this mail because:
You are on the CC list for the bug.