<div dir="ltr"><div>The initial RCA, identifying commit 7131de81f72dda0ef685ed60d0887c6e14289b8c as the cause of the issue, was done by Nithya. The conversation was as follows:<br><br></div>&lt;snip&gt;<br><br><div>With the latest master, I created a single brick volume and some files<br>    inside it.<br>    <br>    [root@rhgs313-6 ~]# umount -f /mnt/fuse1; mount -t glusterfs -s<br>    192.168.122.6:/thunder /mnt/fuse1; ls -l /mnt/fuse1/; echo &quot;Trying<br>    again&quot;; ls -l /mnt/fuse1<br>    umount: /mnt/fuse1: not mounted<br>    total 0<br>    ----------. 0 root root 0 Jan  1  1970 file-1<br>    ----------. 0 root root 0 Jan  1  1970 file-2<br>    ----------. 0 root root 0 Jan  1  1970 file-3<br>    ----------. 0 root root 0 Jan  1  1970 file-4<br>    ----------. 0 root root 0 Jan  1  1970 file-5<br>    d---------. 0 root root 0 Jan  1  1970 subdir<br>    Trying again<br>    total 3<br>    -rw-r--r--. 1 root root 33 Aug  3 14:06 file-1<br>    -rw-r--r--. 1 root root 33 Aug  3 14:06 file-2<br>    -rw-r--r--. 1 root root 33 Aug  3 14:06 file-3<br>    -rw-r--r--. 1 root root 33 Aug  3 14:06 file-4<br>    -rw-r--r--. 1 root root 33 Aug  3 14:06 file-5<br>    d---------. 0 root root  0 Jan  1  1970 subdir<br>    [root@rhgs313-6 ~]#<br>    <br>    Conversation can be followed on gluster-devel on thread with subj:<br>    tests/bugs/distribute/bug-1122443.t - spurious failure. git-bisected<br>    pointed this patch as culprit.<br></div><div>&lt;/snip&gt;</div><div><br></div><div>Commit 7131de81f72dda0ef685ed60d0887c6e14289b8c zeroed out all members of iatt except ia_gfid and ia_type in certain scenarios (one case that led to this bug was a fresh inode - not yet linked - being picked up by readdirplus). This led fuse_readdirp_cbk to wrongly conclude that it had a valid stat (because ia_gfid and ia_type were valid) and to hand the kernel zeroed-out attributes, causing the failures above. 
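The flawed validity check can be illustrated with a small sketch. This is a hypothetical Python model, not gluster's actual C code; the field names mirror struct iatt, but the helper functions and the IA_IFREG value are illustrative assumptions:

```python
from dataclasses import dataclass

# Hypothetical model of gluster's struct iatt; only the fields
# relevant to the bug are included.
@dataclass
class Iatt:
    ia_gfid: bytes = b"\x00" * 16
    ia_type: int = 0   # 0 = invalid; assume 1 stands in for IA_IFREG
    ia_mode: int = 0   # permission bits; zeroed by the bad commit
    ia_size: int = 0
    ia_mtime: int = 0

NULL_GFID = b"\x00" * 16

def stat_looks_valid_buggy(buf: Iatt) -> bool:
    # Pre-fix logic: a non-null gfid plus a valid type was taken as
    # proof of a usable stat, so an otherwise zeroed-out iatt passed
    # and zeroed attributes were handed to the kernel.
    return buf.ia_gfid != NULL_GFID and buf.ia_type != 0

def stat_looks_valid_fixed(buf: Iatt, inode_linked: bool) -> bool:
    # Post-fix idea: when the inode picked up by readdirplus is not
    # yet linked, report the attributes as not valid to the kernel
    # instead of serving zeroed fields.
    return stat_looks_valid_buggy(buf) and inode_linked

# A fresh, unlinked inode picked up by readdirplus: everything
# zeroed except gfid and type.
zeroed = Iatt(ia_gfid=b"\x01" * 16, ia_type=1)

print(stat_looks_valid_buggy(zeroed))          # True: buggy check accepts it
print(stat_looks_valid_fixed(zeroed, False))   # False: fixed check rejects it
```

The mode-0, size-0, epoch-timestamp listing in the transcript above is exactly what serving such a zeroed iatt to the kernel looks like from the mount point.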
The fix is included in <a href="https://review.gluster.org/20639">https://review.gluster.org/20639</a>; it lets the kernel know that the attributes are not valid in this scenario (and no longer zeroes out the stat even when the inode picked up by readdirplus is not yet linked).</div><div><br></div><div>regards,</div><div>Raghavendra<br></div><div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 13, 2018 at 6:12 AM, Shyam Ranganathan <span dir="ltr">&lt;<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">As a means of keeping the focus going and squashing the remaining tests<br>
that were failing sporadically, request each test/component owner to,<br>
<br>
- respond to this mail changing the subject (testname.t) to the test<br>
name that they are responding to (adding more than one in case they have<br>
the same RCA)<br>
- with the current RCA and status of the same<br>
<br>
List of tests and current owners as per the spreadsheet that we were<br>
tracking are:<br>
<br>
./tests/basic/distribute/rebal-all-nodes-migrate.t              TBD<br>
./tests/basic/tier/tier-heald.t         TBD<br>
./tests/basic/afr/sparse-file-self-heal.t               TBD<br>
./tests/bugs/shard/bug-1251824.t                TBD<br>
./tests/bugs/shard/configure-lru-limit.t                TBD<br>
./tests/bugs/replicate/bug-1408712.t    Ravi<br>
./tests/basic/afr/replace-brick-self-heal.t             TBD<br>
./tests/00-geo-rep/00-georep-verify-setup.t     Kotresh<br>
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t Karthik<br>
./tests/basic/stats-dump.t              TBD<br>
./tests/bugs/bug-1110262.t              TBD<br>
./tests/basic/ec/ec-data-heal.t         Mohit<br>
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t           Pranith<br>
./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t          TBD<br>
./tests/basic/ec/ec-5-2.t               Sunil<br>
./tests/bugs/shard/bug-shard-discard.t          TBD<br>
./tests/bugs/glusterd/remove-brick-testcases.t          TBD<br>
./tests/bugs/protocol/bug-808400-repl.t         TBD<br>
./tests/bugs/quick-read/bug-846240.t            Du<br>
./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t           Mohit<br>
./tests/00-geo-rep/georep-basic-dr-tarssh.t     Kotresh<br>
./tests/bugs/ec/bug-1236065.t           Pranith<br>
./tests/00-geo-rep/georep-basic-dr-rsync.t      Kotresh<br>
./tests/basic/ec/ec-1468261.t           Ashish<br>
./tests/basic/afr/add-brick-self-heal.t         Ravi<br>
./tests/basic/afr/granular-esh/replace-brick.t          Pranith<br>
./tests/bugs/core/multiplex-limit-issue-151.t           Sanju<br>
./tests/bugs/glusterd/validating-server-quorum.t                Atin<br>
./tests/bugs/replicate/bug-1363721.t            Ravi<br>
./tests/bugs/index/bug-1559004-EMLINK-handling.t                Pranith<br>
./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t             Karthik<br>
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t        Atin<br>
./tests/bugs/glusterd/rebalance-operations-in-single-node.t             TBD<br>
./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t           TBD<br>
./tests/bitrot/bug-1373520.t    Kotresh<br>
./tests/bugs/distribute/bug-1117851.t   Shyam/Nigel<br>
./tests/bugs/glusterd/quorum-validation.t       Atin<br>
./tests/bugs/distribute/bug-1042725.t           Shyam<br>
./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t         Karthik<br>
./tests/bugs/quota/bug-1293601.t                TBD<br>
./tests/bugs/bug-1368312.t      Du<br>
./tests/bugs/distribute/bug-1122443.t           Du<br>
./tests/bugs/core/bug-1432542-mpx-restart-crash.t       1608568 Nithya/Shyam<br>
<br>
Thanks,<br>
Shyam<br>
_______________________________________________<br>
maintainers mailing list<br>
<a href="mailto:maintainers@gluster.org">maintainers@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/maintainers" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/maintainers</a><br>
</blockquote></div><br></div></div></div>