[Bugs] [Bug 1543279] Moving multiple temporary files to the same destination concurrently causes ESTALE error

bugzilla at redhat.com bugzilla at redhat.com
Mon May 7 06:55:35 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1543279



--- Comment #24 from Worker Ant <bugzilla-bot at gluster.org> ---
COMMIT: https://review.gluster.org/19547 committed in master by "Raghavendra G"
<rgowdapp at redhat.com> with a commit message- cluster/dht: fixes to parallel
renames to same destination codepath

Test case:
 # while true; do uuid="`uuidgen`"; echo "some data" > "test$uuid"; mv "test$uuid" "test" -f || break; echo "done:$uuid"; done

 This script was run in parallel from multiple mount points.

In the course of getting the above use case working, many issues were
found:

Issue 1:
=======
Consider the case of a rename (src, dst). We can encounter a situation
where:
* dst is a file present at the time of lookup
* dst is removed by the time the rename fop reaches glusterfs

In this scenario, acquiring an inodelk on dst fails with ESTALE,
resulting in failure of the rename. However, as per POSIX, the rename
should succeed irrespective of whether dst is present. Acquiring an
entrylk provides synchronization even in races like this.

Algorithm:
1. Take inodelks on src and dst (if dst is present) on their
   respective cached subvols. These inodelks are taken to preserve
   backward compatibility with older clients, so that synchronization
   is preserved when a volume is mounted by clients of different
   versions. Once the relevant older versions (3.10, 3.12, 3.13) reach
   EOL, this code can be removed.
2. Ignore ENOENT/ESTALE errors from the inodelk on dst.
3. Protect the namespace of src and dst. To protect the namespace of a
   file, take an inodelk on the parent on the hashed subvol, then take
   an entrylk on the same subvol on the parent with the basename of
   the file. The inodelk on the parent guards against changes to the
   parent layout, so that the hashed subvol won't change during the
   rename.
4. <rest of rename continues>
5. Unlock all locks.
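
Roughly, the locking above could be sketched as in the following
schematic. This is only an illustration of the ordering, not the
actual dht code: take_inodelk(), take_entrylk(), unlock_all() and the
subvol/path strings are hypothetical stand-ins for the cluster-wide
lock calls and the rename's local state.

/* Schematic of the rename lock ordering; every helper and name below
 * is a hypothetical stand-in, not a real cluster/dht API. */
#include <errno.h>
#include <stdio.h>

/* Stub lock helpers: return 0 on success, -errno on failure. */
static int take_inodelk(const char *subvol, const char *path)
{ printf("inodelk  %-8s %s\n", subvol, path); return 0; }
static int take_entrylk(const char *subvol, const char *parent, const char *name)
{ printf("entrylk  %-8s %s/%s\n", subvol, parent, name); return 0; }
static void unlock_all(void)
{ printf("unlock all inodelks and entrylks\n"); }

static int rename_lock_sketch(void)
{
    int ret;

    /* 1. inodelks on the cached subvols of src and dst; kept only for
     *    backward compatibility with 3.10/3.12/3.13 clients. */
    ret = take_inodelk("subvol0", "/dir/src");
    if (ret < 0)
        goto out;
    ret = take_inodelk("subvol1", "/dir/dst");
    /* 2. dst may have been unlinked between lookup and rename, so
     *    ENOENT/ESTALE on dst must not fail the rename. */
    if (ret < 0 && ret != -ENOENT && ret != -ESTALE)
        goto out;

    /* 3. Protect the namespace of src and dst: the inodelk on the
     *    parent (on the hashed subvol) guards against layout changes;
     *    the entrylk on (parent, basename) serializes operations on
     *    that name. */
    if ((ret = take_inodelk("hashed0", "/dir")) < 0 ||
        (ret = take_entrylk("hashed0", "/dir", "src")) < 0 ||
        (ret = take_inodelk("hashed1", "/dir")) < 0 ||
        (ret = take_entrylk("hashed1", "/dir", "dst")) < 0)
        goto out;

    /* 4. <rest of rename continues> */
    ret = 0;
out:
    /* 5. unlock all locks */
    unlock_all();
    return ret;
}

int main(void)
{
    return rename_lock_sketch() ? 1 : 0;
}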

Issue 2:
========
Linkfile creation in the lookup codepath can race with a rename.
Imagine the following scenario:
* A lookup finds a data file with gfid gfid-dst, but without a
  corresponding linkto file on the hashed subvol. It decides to create
  a linkto file with gfid gfid-dst.
    - Note that some codepaths of dht-rename delete the linkto file of
      dst as a first step, so a lookup racing with an in-progress
      rename can easily run into this situation.
* A rename (src-path:gfid-src, dst-path:gfid-dst) renames the data
  file, and hence the gfid of the data file at dst-path changes to
  gfid-src.
* The lookup proceeds and creates the linkto file dst-path with gfid
  gfid-dst on the hashed subvol.
* The rename tries to create a linkto file dst-path with gfid gfid-src
  on the hashed subvol, but this fails with EEXIST. EEXIST is ignored
  during linkto file creation.

Now we've ended up with dst-path having different gfids: gfid-dst on
the linkto file and gfid-src on the data file. Future lookups on
dst-path will always fail with ESTALE due to the differing gfids.

The fix is to synchronize linkfile creation in the lookup path with
rename, using the same namespace-protection mechanism explained in the
solution to Issue 1. Once the locks are acquired, and before
proceeding with linkfile creation, we check whether the conditions for
linkto file creation still hold. If not, we skip linkto file creation.
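
The lookup-side check could look roughly like the sketch below, again
only as an illustration: lock_namespace(), linkto_still_missing(),
data_gfid_unchanged() and create_linkto() are hypothetical stand-ins,
not the actual dht lookup code.

/* Sketch of synchronized linkto creation in the lookup path; all
 * helpers below are hypothetical stand-ins for the real dht code. */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned char gfid_t[16];

/* Stubs: pretend locking works and the preconditions still hold. */
static bool lock_namespace(const char *parent, const char *name)
{ printf("lock   %s/%s on hashed subvol\n", parent, name); return true; }
static void unlock_namespace(const char *parent, const char *name)
{ printf("unlock %s/%s\n", parent, name); }
static bool linkto_still_missing(const char *name)
{ (void)name; return true; }
static bool data_gfid_unchanged(const char *name, const gfid_t gfid)
{ (void)name; (void)gfid; return true; }
static int create_linkto(const char *name, const gfid_t gfid)
{ (void)gfid; printf("create linkto file for %s\n", name); return 0; }

static int lookup_linkto_sketch(const char *parent, const char *name,
                                const gfid_t gfid_seen_by_lookup)
{
    int ret = 0;

    /* Same namespace protection as in Issue 1: inodelk on the parent
     * plus entrylk on (parent, name), both on the hashed subvol. */
    if (!lock_namespace(parent, name))
        return -1;

    /* Re-check under the lock: a racing rename may already have
     * created a linkto file or changed the data file's gfid. */
    if (linkto_still_missing(name) &&
        data_gfid_unchanged(name, gfid_seen_by_lookup))
        ret = create_linkto(name, gfid_seen_by_lookup);
    /* otherwise the preconditions no longer hold: skip linkto creation */

    unlock_namespace(parent, name);
    return ret;
}

int main(void)
{
    gfid_t gfid = {0};
    return lookup_linkto_sketch("/dir", "test", gfid) ? 1 : 0;
}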

Issue 3:
========
The gfid of dst-path can change by the time the locks are acquired.
This means that either another rename overwrote dst-path, or dst-path
was deleted and recreated by a different client. When this happens,
the cached subvol for dst can change. If the rename proceeds with the
old gfid and the old cached subvol, we'll end up in inconsistent
states, such as dst-path having different gfids on different subvols,
more than one data file being present, etc.

The fix is to do the lookup with a new inode after protecting the
namespace of dst. After the lookup, we have to compare gfids and
correct the local state appropriately so that it is in sync with the
backend.
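
The re-validation could be sketched roughly as below; struct
dht_state, fresh_lookup() and the subvol names are hypothetical
stand-ins for the rename's local state and for the lookup done with a
new inode.

/* Sketch of re-validating dst's gfid once its namespace is protected.
 * All types and helpers here are hypothetical stand-ins. */
#include <stdio.h>
#include <string.h>

typedef unsigned char gfid_t[16];

struct dht_state {               /* stand-in for the rename's local state */
    gfid_t      dst_gfid;
    const char *dst_cached_subvol;
};

/* Stub: look dst up again with a new inode; reports what the backend
 * holds right now. */
static int fresh_lookup(const char *path, gfid_t gfid_out, const char **cached_out)
{ (void)path; memset(gfid_out, 0x2a, sizeof(gfid_t)); *cached_out = "subvol2"; return 0; }

static int revalidate_dst_sketch(const char *dst_path, struct dht_state *local)
{
    gfid_t      fresh_gfid;
    const char *fresh_cached = NULL;

    /* The namespace of dst is already protected (Issue 1), so the
     * result of this lookup cannot change under us until we unlock. */
    if (fresh_lookup(dst_path, fresh_gfid, &fresh_cached) != 0)
        return -1;

    if (memcmp(fresh_gfid, local->dst_gfid, sizeof(gfid_t)) != 0) {
        /* dst was overwritten or recreated: drop the stale gfid and
         * cached subvol and continue the rename with the fresh ones. */
        memcpy(local->dst_gfid, fresh_gfid, sizeof(gfid_t));
        local->dst_cached_subvol = fresh_cached;
        printf("dst changed under us; state refreshed to %s\n", fresh_cached);
    }
    return 0;
}

int main(void)
{
    struct dht_state local = { {0}, "subvol1" };
    return revalidate_dst_sketch("/dir/test", &local) ? 1 : 0;
}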

Issue 4:
========
During a revalidate lookup, if following a linkto file doesn't lead to
a valid data file, local->cached_subvol was not reset to NULL. This
means we would be operating on stale state, which can lead to
inconsistency. As a fix, reset it to NULL before proceeding with the
lookup everywhere.
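
A minimal sketch of that reset, with struct dht_state and
points_to_valid_data_file() as hypothetical stand-ins for
local->cached_subvol and the revalidate check:

/* Sketch of the revalidate fix; the struct and helper are hypothetical. */
#include <stdbool.h>
#include <stddef.h>

struct dht_state {
    const char *cached_subvol;   /* where we believe the data file lives */
};

/* Stub: did following the linkto file lead to a valid data file? */
static bool points_to_valid_data_file(const char *subvol)
{ (void)subvol; return false; }

static void revalidate_sketch(struct dht_state *local)
{
    /* If following the linkto file did not lead to a valid data file,
     * forget the cached subvol instead of carrying stale state into
     * the rest of the lookup. */
    if (!points_to_valid_data_file(local->cached_subvol))
        local->cached_subvol = NULL;

    /* ... lookup continues, treating a NULL cached_subvol as "unknown" ... */
}

int main(void)
{
    struct dht_state local = { "subvol1" };
    revalidate_sketch(&local);
    return local.cached_subvol == NULL ? 0 : 1;
}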

Issue 5:
========
Stale dentries left behind in the inode table on the brick resulted in
failures of the link fop even though the file/dentry didn't exist on
the backend fs. A patch has been submitted to fix this issue; please
check the dependency tree of the current patch on Gerrit for details.

In short, we fix the problem by not blindly trusting the inode table.
Instead, we validate whether the dentry is present by doing a lookup
on the backend fs.
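
The idea can be sketched as below; inode_table_has_dentry() and the
brick path are hypothetical stand-ins for the brick's inode table, and
only the lstat() on the backend path is meant literally.

/* Sketch: before trusting an in-memory dentry on the brick, confirm
 * it against the backend fs. The dentry-cache helper is hypothetical. */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Stub: whatever the in-memory inode table currently believes. */
static bool inode_table_has_dentry(const char *backend_path)
{ (void)backend_path; return true; }

static bool dentry_really_exists(const char *backend_path)
{
    struct stat st;

    if (!inode_table_has_dentry(backend_path))
        return false;

    /* Don't trust the in-memory dentry blindly: ask the backend fs.
     * A stale dentry shows up here as ENOENT. */
    if (lstat(backend_path, &st) != 0 && errno == ENOENT)
        return false;

    return true;
}

int main(void)
{
    const char *path = "/bricks/brick1/dir/test";   /* hypothetical brick path */
    printf("%s %s\n", path,
           dentry_really_exists(path) ? "exists" : "is stale or absent");
    return 0;
}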

Change-Id: I832e5c47d232f90c4edb1fafc512bf19bebde165
updates: bz#1543279
BUG: 1543279
Signed-off-by: Raghavendra G <rgowdapp at redhat.com>
