[Bugs] [Bug 1225809] New: [DHT-REBALANCE]-DataLoss: The data appended to a file during its migration will be lost once the migration is done

bugzilla at redhat.com bugzilla at redhat.com
Thu May 28 09:30:55 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1225809

            Bug ID: 1225809
           Summary: [DHT-REBALANCE]-DataLoss: The data appended to a file
                    during its migration will be lost once the migration
                    is done
           Product: GlusterFS
           Version: 3.7.0
         Component: distribute
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: rgowdapp at redhat.com
                CC: asrivast at redhat.com, bugs at gluster.org,
                    gluster-bugs at redhat.com, nsathyan at redhat.com,
                    rhs-bugs at redhat.com, shmohan at redhat.com,
                    srangana at redhat.com, ssamanta at redhat.com,
                    storage-qa-internal at redhat.com, vagarwal at redhat.com
        Depends On: 1142423
            Blocks: 1140506



+++ This bug was initially created as a clone of Bug #1142423 +++

+++ This bug was initially created as a clone of Bug #1140506 +++

Description of problem:
While a file's migration is in progress, any data appended to the file is lost once
the migration is over.

How reproducible:
Always

Steps to Reproduce:
1. Create a 54-brick dist-rep volume.
2. Create a big file of size 3GB using urandom:
   dd if=/dev/urandom of=FILE bs=512M count=6
3. Rename the file to something else so that the subsequent rebalance migrates
the same file:
4. mv FILE abc
5. Check the file size before migration.
6. Start rebalance with the force option:
   gluster volume rebalance <vol> start force
7. While migration is in progress, append some data to the file
   (use the program attached with the bug; a minimal stand-in sketch follows these steps).
8. Check the file size during migration.
9. Check the file size after migration.
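
The appender attached to the bug ("slow" in the listings below) is not included
in this mail. As a rough stand-in, a minimal program that appends a short record
every second (matching the roughly one-write-per-second pattern in the client
log further down) could look like the following. This is a hypothetical sketch,
not the actual attachment:

/* append.c - hypothetical stand-in for the appender attached to this bug
 * (not the actual "slow" program). Appends a short record to the given
 * file once per second so that writes keep landing while rebalance
 * migrates the file. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main (int argc, char *argv[])
{
        const char *buf = "appended-during-migration\n";
        ssize_t     ret = 0;
        int         fd  = -1;

        if (argc != 2) {
                fprintf (stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }

        fd = open (argv[1], O_WRONLY | O_APPEND);
        if (fd < 0) {
                perror ("open");
                return 1;
        }

        while (1) {
                ret = write (fd, buf, strlen (buf));
                if (ret < 0) {
                        /* in the failing case the EINVAL may be absorbed by
                         * write-behind and only show up in the client log */
                        perror ("write");
                        break;
                }
                sleep (1);
        }

        close (fd);
        return 0;
}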

Actual results:



Before migration
=======
[root@localhost mnt]# ll
total 3145737
drwxr-xr-x 2 root root        162 Sep 10 15:09 2
-rw-r--r-- 1 root root         24 Sep 10 15:12 f1
-rw-r-Sr-T 1 root root 3221225523 Sep 11 03:06 FILE1
-rwxr-xr-x 1 root root       8139 Sep 10 15:11 slow

after migration
=========
[root@localhost mnt]# ll
total 3145737
drwxr-xr-x 2 root root        162 Sep 10 15:09 2
-rw-r--r-- 1 root root         24 Sep 10 15:12 f1
-rw-r--r-- 1 root root 3221225522 Sep 11 03:06 FILE1
-rwxr-xr-x 1 root root       8139 Sep 10 15:11 slow



Additional info:

[2014-09-11 07:06:26.508878] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33829: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:27.510004] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33831: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:28.510720] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33833: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:29.511543] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33835: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:30.512182] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33837: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:31.517089] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33839: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:32.517822] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33841: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:33.518535] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33843: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:34.519160] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33845: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:35.519623] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33847: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:36.520035] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33849: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:37.520552] W [fuse-bridge.c:2238:fuse_writev_cbk]
0-glusterfs-fuse: 33851: WRITE => -1 (Invalid argument)
[2014-09-11 07:06:37.521722] W [fuse-bridge.c:1237:fuse_err_cbk]
0-glusterfs-fuse: 33855: FLUSH() ERR => -1 (Invalid argument)
[2014-09-11 07:07:10.539605] W [client-rpc-fops.c:2761:client3_3_lookup_cbk]
6-dongra-client-4: remote operation failed: No such file or directory. Path:
/FILE1 (fcbe026d-b646-4391-9f03-849a381e8a84)
[2014-09-11 07:07:10.539656] W [client-rpc-fops.c:2761:client3_3_lookup_cbk]
6-dongra-client-5: remote operation failed: No such file or directory. Path:
/FILE1 (fcbe026d-b646-4391-9f03-849a381e8a84)

attaching the logs

--- Additional comment from shylesh on 2014-09-12 05:46:52 EDT ---

Just an update on the bug.

This bug is not reproducible on a fresh mount, i.e. if rebalance is run for the
first time after the mount and data is appended during it, everything works
fine.

If the same mount persists, a subsequent rebalance with a data append leads to
this bug.

2.1u2 had a different issue for the same test case, which is captured in the
following bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1059687
https://bugzilla.redhat.com/show_bug.cgi?id=1058569
https://bugzilla.redhat.com/show_bug.cgi?id=1054782

--- Additional comment from Shyamsundar on 2014-09-12 16:18:40 EDT ---

This bug has two code-related issues, split as "Issue 1: Invalid stashed value
in inode ctx1" and "Issue 2: Incorrect Phase 2 cached/hashed determination on
an open fd". I am detailing Issue 1 and Du will detail the second one.

Issue 1: Invalid stashed value in inode ctx1

Test case to reproduce this:
- Create an nx2 or even nx1 volume
- Mount on FUSE
- Create a 2 GB file (say FINAL)
1- Rename FINAL to ABCDE
2- Ensure that ABCDE hashes to a different subvolume (for the next rebalance
step to work)
3- Run a rebalance force
4- When rebalance has started on ABCDE, start an appending write to ABCDE
5- Check the file's size on the bricks
- Repeat steps 1..5 without restarting the mount or remounting

The second time the test is run, the appending write can demonstrate a couple
of behaviors:
   - dht_write and dht_write2 write to the same subvol, which is the old
cached subvol (so the new location does not receive the bytes thus written)
   - dht_write2 is called with a cached subvol on which the fd we send is
invalid (this is not caught by the application due to write-behind)

Finally, the data is either written only to the older location with no errors
surfaced to the application, or the data is not written anywhere, or the first
write appends the data to the older location and not to the newer location (i.e.
the file's hashed subvolume in this case).

(Older is the cached location and newer is the hashed location, so any
appending writes not replayed to the newer location are lost once rebalance is
done with the file.)

Code problem:
dht_migration_complete_check_task (i.e. migration phase 2 detection) never
gets called, because we finish the appending writes before the file is
completely migrated (hence the large file size). Due to this, inode_ctx_reset1
is never called, so we keep a stashed subvol that we think future writes should
go to whenever we detect a rebalance in progress during a FOP (say write; other
FOPs that check via dht_inode_ctx_get1 are affected as well), and we blindly
send the FOP to the returned subvol without opening the fd there.

So the issue is that the stashed data in ctx1 should be invalidated (post a
rebalance?) somehow, otherwise we end up in troubled waters with data loss.

The Phase 1 migration check, dht_rebalance_in_progress_check, is not called
because there is already data in inode ctx1, for optimization reasons (i.e.
each write or FOP that needs this information does not have to determine it
again).

Solution proposed:
Stash this ctx1 information on the fd instead, so that its life is the life of
the fd. In case the fd outlives the migration (i.e. it remains open even after
rebalance is complete), things still work: the brick retains the open fd
(unlink will not delete the file until the last fd is closed), and we will
detect Phase 2 of migration in progress/complete when we reuse this fd.

Other solutions are welcome; else we will go ahead with this one for Issue 1
presented here.
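
To make the proposal concrete, here is a rough sketch of how the destination
subvol could be kept in the fd context instead of inode ctx1. This is only an
illustration of the idea, not the actual patch: the helper names
dht_fd_set_dst_subvol/dht_fd_get_dst_subvol are made up for this sketch, while
fd_ctx_set/fd_ctx_get are the existing libglusterfs fd-context primitives.

/* Illustrative only: keep the migration destination per-fd, so the cached
 * destination dies with the fd instead of lingering in inode ctx1.
 * Helper names are hypothetical; fd_ctx_set/fd_ctx_get are libglusterfs calls. */
#include <stdint.h>     /* uintptr_t */
#include "xlator.h"     /* xlator_t (glusterfs headers, in-tree include style) */
#include "fd.h"         /* fd_t, fd_ctx_set, fd_ctx_get */

static int
dht_fd_set_dst_subvol (fd_t *fd, xlator_t *this, xlator_t *dst)
{
        /* store the destination subvol pointer in this xlator's fd context */
        return fd_ctx_set (fd, this, (uint64_t)(uintptr_t)dst);
}

static xlator_t *
dht_fd_get_dst_subvol (fd_t *fd, xlator_t *this)
{
        uint64_t value = 0;

        /* nothing stashed yet: caller falls back to the usual
         * rebalance-in-progress / migration-complete checks */
        if (fd_ctx_get (fd, this, &value))
                return NULL;

        return (xlator_t *)(uintptr_t)value;
}

Since the brick keeps an unlinked file alive as long as an fd is open on it, a
write replayed on such an fd still reaches a valid file and Phase 2 detection
can run when the fd is reused; a freshly opened fd starts with nothing stashed,
so it cannot inherit the stale ctx1 value described above.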

--- Additional comment from Anand Avati on 2014-10-07 16:20:47 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for
files under migration) posted (#1) for review on master by Shyamsundar
Ranganathan (srangana at redhat.com)

--- Additional comment from Anand Avati on 2014-10-09 16:35:05 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for
files under migration) posted (#2) for review on master by Shyamsundar
Ranganathan (srangana at redhat.com)

--- Additional comment from Anand Avati on 2014-10-13 17:21:23 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for
files under migration) posted (#3) for review on master by Shyamsundar
Ranganathan (srangana at redhat.com)

--- Additional comment from Anand Avati on 2014-10-14 14:03:24 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for
files under migration) posted (#4) for review on master by Shyamsundar
Ranganathan (srangana at redhat.com)

--- Additional comment from Anand Avati on 2014-10-15 14:48:46 EDT ---

REVIEW: http://review.gluster.org/8912 (cluster/dht: Fix stale subvol cache for
files under migration) posted (#5) for review on master by Shyamsundar
Ranganathan (srangana at redhat.com)

--- Additional comment from Anand Avati on 2015-05-19 14:06:10 EDT ---

REVIEW: http://review.gluster.org/10834 (cluster/dht: fix incorrect dst subvol
info in inode_ctx) posted (#1) for review on master by N Balachandran
(nbalacha at redhat.com)

--- Additional comment from Anand Avati on 2015-05-21 05:06:24 EDT ---

REVIEW: http://review.gluster.org/10834 (cluster/dht: fix incorrect dst subvol
info in inode_ctx) posted (#2) for review on master by Raghavendra G
(rgowdapp at redhat.com)

--- Additional comment from Nagaprasad Sathyanarayana on 2015-05-22 03:10:09 EDT ---

http://review.gluster.org/10805

--- Additional comment from Anand Avati on 2015-05-25 09:25:15 EDT ---

REVIEW: http://review.gluster.org/10805 (cluster/dht: Don't rely on linkto
xattr to find destination subvol during phase 2 of migration.) posted (#2) for
review on master by Raghavendra G (rgowdapp at redhat.com)

--- Additional comment from Anand Avati on 2015-05-27 08:31:14 EDT ---

REVIEW: http://review.gluster.org/10943 (cluster/dht: pass a destination subvol
to fop2 variants to avoid races.) posted (#1) for review on master by
Raghavendra G (rgowdapp at redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 00:20:34 EDT ---

REVIEW: http://review.gluster.org/10834 (cluster/dht: fix incorrect dst subvol
info in inode_ctx) posted (#3) for review on master by Raghavendra G
(rgowdapp at redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 00:23:40 EDT ---

REVIEW: http://review.gluster.org/10805 (cluster/dht: Don't rely on linkto
xattr to find destination subvol during phase 2 of migration.) posted (#3) for
review on master by Raghavendra G (rgowdapp at redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 00:23:52 EDT ---

REVIEW: http://review.gluster.org/10943 (cluster/dht: pass a destination subvol
to fop2 variants to avoid races.) posted (#2) for review on master by
Raghavendra G (rgowdapp at redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 00:50:02 EDT ---

REVIEW: http://review.gluster.org/10943 (cluster/dht: pass a destination subvol
to fop2 variants to avoid races.) posted (#3) for review on master by
Raghavendra G (rgowdapp at redhat.com)

--- Additional comment from Anand Avati on 2015-05-28 05:23:30 EDT ---

COMMIT: http://review.gluster.org/10805 committed in master by Raghavendra G
(rgowdapp at redhat.com) 
------
commit 4df3ea9ab4d8a1aff98784460983b5f0cb4a9ee9
Author: Raghavendra G <rgowdapp at redhat.com>
Date:   Wed May 13 19:56:47 2015 +0530

    cluster/dht: Don't rely on linkto xattr to find destination subvol during
phase 2 of migration.

    linkto xattr on source file cannot be relied to find where the data
    file currently resides. This can happen if there are multiple
    migrations before phase 2 detection by a client. For eg.,

    * migration (M1, node1, node2) starts.
    * application writes some data. DHT correctly stores the state in
      inode context that phase-1 of migration is in progress
    * migration M1 completes
    * migration (M2, node2, node3) is triggered and completed
    * application resumes writes to the file. DHT identifies it as phase-2
      of migration. However, linkto xattr on node1 points to node2, but
      the file is on node3. A lookup correctly identifies node3 as cached
      subvol

    TBD:
       When we identify phase-2 of a previous migration (say M1), there
       might be a migration in progress - say (M3, node3, node4). In this
       case we need to send writes to both (node3, node4) not just
       node3. Also, the inode state needs to correctly indicate that its in
       phase-1 of migration. I'll send this as a different patch.

    Change-Id: I1a861f766258170af2f6c0935468edb6be687b95
    BUG: 1142423
    Signed-off-by: Raghavendra G <rgowdapp at redhat.com>
    Reviewed-on: http://review.gluster.org/10805
    Tested-by: NetBSD Build System


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1140506
[Bug 1140506] [DHT-REBALANCE]-DataLoss: The data appended to a file during
its migration will be lost once the migration is done
https://bugzilla.redhat.com/show_bug.cgi?id=1142423
[Bug 1142423] [DHT-REBALANCE]-DataLoss: The data appended to a file during
its migration will be lost once the migration is done
-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

