Failure is tracked by bz: https://bugzilla.redhat.com/show_bug.cgi?id=1615096

<RCA>
Earlier this test did the following things on M0 and M1, two mounts of
the same volume:
    1. create file M0/testfile
    2. open an fd on M0/testfile
    3. remove the file from M1 (rm M1/testfile)
    4. echo "data" >> M0/testfile (see the sketch below)
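
    A minimal sketch of those steps in plain bash (assuming M0 and M1
    hold the two mount paths):

        touch $M0/testfile           # 1. create the file
        exec 5<> $M0/testfile        # 2. hold an open fd (fd 5) on it
        rm -f $M1/testfile           # 3. unlink it via the other mount
        echo "data" >> $M0/testfile  # 4. append by path; ">>" creates
                                     #    the file if the lookup fails
        exec 5>&-                    # release the held fd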
    
    The test expects appending data to M0/testfile to fail. However, the
    ">>" redirection operator creates the file if it doesn't exist. So
    the only reason the test ever succeeded was that the lookup succeeded
    due to a stale stat in md-cache. This hypothesis is verified by two
    experiments:
    * Add a sleep of 10 seconds before the append operation. The md-cache
      entry expires, the lookup fails, ">>" creates the file afresh, and
      hence the append succeeds on the new file.
    * Set the md-cache timeout to 600 seconds (sketched below); the test
      never fails, even with the sleep of 10 seconds before the append,
      because the stale stat in md-cache survives the sleep.
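
    The second experiment amounts to something like this sketch ($V0 is
    a placeholder volume name; 600 seconds is the maximum value of
    performance.md-cache-timeout):

        gluster volume set $V0 performance.md-cache-timeout 600
        sleep 10                     # the cached stat easily survives this
        echo "data" >> $M0/testfile  # lookup is served from the stale
                                     # stat, ">>" does not create a new
                                     # file, and the append fails as the
                                     # test expects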
    
    So the spurious nature of the failure depended on whether the lookup
    was done while the stat was still present in md-cache.
    
    The actual test should've been to write to the fd opened in step 2
    above. I've changed the test accordingly; a sketch follows below.
    Note that this patch also remounts M0 after the initial file
    creation, because open-behind stops opening files behind once it
    witnesses a setattr on the inode, and touch involves a setattr. On
    remount no create operation is done, and hence the file is
    opened behind.
</RCA>
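
A plain-bash sketch of the corrected check (the actual patch uses the
test framework's fd helpers; fd 5 here is arbitrary):

    exec 5<> $M0/testfile   # open and hold an fd after the remount,
                            # so open-behind defers the real open
    rm -f $M1/testfile      # unlink via the other mount
    echo "data" >&5         # write through the held fd; the deferred
                            # open happens only now, after the unlink,
                            # so the write fails as intended
    exec 5>&-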

Fix submitted at: https://review.gluster.org/#/c/glusterfs/+/20710/

regards,
Raghavendra

On Mon, Aug 13, 2018 at 6:12 AM, Shyam Ranganathan <srangana@redhat.com> wrote:

As a means of keeping the focus going and squashing the remaining tests
that were failing sporadically, I request each test/component owner to:

- respond to this mail, changing the subject to the test name
(testname.t) they are responding to (adding more than one test name in
case they share the same RCA)
- include the current RCA and its status

The list of tests and current owners, as per the spreadsheet we were
tracking, is:

./tests/basic/distribute/rebal-all-nodes-migrate.t              TBD
./tests/basic/tier/tier-heald.t                                 TBD
./tests/basic/afr/sparse-file-self-heal.t                       TBD
./tests/bugs/shard/bug-1251824.t                                TBD
./tests/bugs/shard/configure-lru-limit.t                        TBD
./tests/bugs/replicate/bug-1408712.t                            Ravi
./tests/basic/afr/replace-brick-self-heal.t                     TBD
./tests/00-geo-rep/00-georep-verify-setup.t                     Kotresh
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t  Karthik
./tests/basic/stats-dump.t                                      TBD
./tests/bugs/bug-1110262.t                                      TBD
./tests/basic/ec/ec-data-heal.t                                 Mohit
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t   Pranith
./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t  TBD
./tests/basic/ec/ec-5-2.t                                       Sunil
./tests/bugs/shard/bug-shard-discard.t                          TBD
./tests/bugs/glusterd/remove-brick-testcases.t                  TBD
./tests/bugs/protocol/bug-808400-repl.t                         TBD
./tests/bugs/quick-read/bug-846240.t                            Du
./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t   Mohit
./tests/00-geo-rep/georep-basic-dr-tarssh.t                     Kotresh
./tests/bugs/ec/bug-1236065.t                                   Pranith
./tests/00-geo-rep/georep-basic-dr-rsync.t                      Kotresh
./tests/basic/ec/ec-1468261.t                                   Ashish
./tests/basic/afr/add-brick-self-heal.t                         Ravi
./tests/basic/afr/granular-esh/replace-brick.t                  Pranith
./tests/bugs/core/multiplex-limit-issue-151.t                   Sanju
./tests/bugs/glusterd/validating-server-quorum.t                Atin
./tests/bugs/replicate/bug-1363721.t                            Ravi
./tests/bugs/index/bug-1559004-EMLINK-handling.t                Pranith
./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t  Karthik
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t  Atin
./tests/bugs/glusterd/rebalance-operations-in-single-node.t     TBD
./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t           TBD
./tests/bitrot/bug-1373520.t                                    Kotresh
./tests/bugs/distribute/bug-1117851.t                           Shyam/Nigel
./tests/bugs/glusterd/quorum-validation.t                       Atin
./tests/bugs/distribute/bug-1042725.t                           Shyam
./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t  Karthik
./tests/bugs/quota/bug-1293601.t                                TBD
./tests/bugs/bug-1368312.t                                      Du
./tests/bugs/distribute/bug-1122443.t                           Du
./tests/bugs/core/bug-1432542-mpx-restart-crash.t               1608568 Nithya/Shyam

Thanks,
Shyam