<div dir="ltr"><div class="gmail-adn gmail-ads"><div class="gmail-gs"><div class="gmail-"><div id="gmail-:ke" class="gmail-ii gmail-gt"><div id="gmail-:if" class="gmail-a3s gmail-aXjCH gmail-m16530f69fd182e10"><div dir="ltr"><div class="gmail_extra">Failure of this test is tracked by bz <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1608158" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1608158</a>.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><description><br></div><div class="gmail_extra"><pre class="gmail-m_5756068454348599147gmail-bz_comment_text gmail-m_5756068454348599147gmail-bz_wrap_comment_text" id="gmail-m_5756068454348599147gmail-comment_text_0">I was trying to debug regression failures on [1] and observed that split-brain-resolution.t was failing consistently.
=========================
TEST 45 (line 88): 0 get_pending_heal_count patchy
./tests/basic/afr/split-brain-resolution.t .. 45/45 RESULT 45: 1
./tests/basic/afr/split-brain-resolution.t .. Failed 17/45 subtests
Test Summary Report
-------------------
./tests/basic/afr/split-brain-<wbr>resolution.t (Wstat: 0 Tests: 45 Failed: 17)
Failed tests: 24-26, 28-36, 41-45
On probing deeper, I observed a curious fact: on most of the failures, the stat was not served from md-cache but was instead wound down to afr, which failed the stat with EIO as the file was in split-brain. So, I did another test:
* disabled md-cache
* mount glusterfs with attribute-timeout 0 and entry-timeout 0
Now the test always fails. So, I think the test relied on stat requests being absorbed either by the kernel attribute cache or by md-cache. When that doesn't happen, stats reach afr and cause failures of commands like getfattr. Thoughts?
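For reference, the two steps above can be sketched as commands. This is a minimal sketch, not the exact reproduction: the volume name "patchy", the mount point, and the assumption that performance.stat-prefetch is the volume option toggling md-cache should all be verified against your setup and gluster version.

```shell
# Disable md-cache (assumption: stat-prefetch is the toggle for md-cache)
gluster volume set patchy performance.stat-prefetch off

# Remount with all kernel caching of attrs/entries turned off
umount /mnt/patchy
mount -t glusterfs -o attribute-timeout=0,entry-timeout=0 \
      server1:/patchy /mnt/patchy

# With no cache left to absorb them, every stat is wound down to afr;
# for a file in split-brain, afr fails the stat with EIO
stat /mnt/patchy/file-in-split-brain

# Re-run the test to observe the consistent failures
prove ./tests/basic/afr/split-brain-resolution.t
```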
[1] <a href="https://review.gluster.org/#/c/20549/" target="_blank">https://review.gluster.org/#/c/20549/</a>
tests/basic/afr/split-brain-resolution.t:
tests/bugs/bug-1368312.t:
tests/bugs/replicate/bug-1238398-split-brain-resolution.t:
tests/bugs/replicate/bug-1417522-block-split-brain-resolution.t
Discussion on this topic can be found on gluster-devel with subj: regression failures on afr/split-brain-resolution</pre></description></div><div class="gmail_extra"><br></div><div class="gmail_extra">regards,</div><div class="gmail_extra">Raghavendra<div class="gmail-yj6qo"></div><div class="gmail-adL"><br></div></div></div><div class="gmail-adL">
</div></div></div><div class="gmail-hi"></div></div></div><div class="gmail-ajx"></div></div><br><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Aug 13, 2018 at 6:12 AM, Shyam Ranganathan <span dir="ltr"><<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">As a means of keeping the focus going and squashing the remaining tests<br>
that were failing sporadically, request each test/component owner to,<br>
<br>
- respond to this mail changing the subject (testname.t) to the test<br>
name that they are responding to (adding more than one in case they have<br>
the same RCA)<br>
- with the current RCA and status of the same<br>
<br>
List of tests and current owners as per the spreadsheet that we were<br>
tracking are:<br>
<br>
./tests/basic/distribute/rebal-all-nodes-migrate.t TBD<br>
./tests/basic/tier/tier-heald.t TBD<br>
./tests/basic/afr/sparse-file-self-heal.t TBD<br>
./tests/bugs/shard/bug-1251824.t TBD<br>
./tests/bugs/shard/configure-lru-limit.t TBD<br>
./tests/bugs/replicate/bug-1408712.t Ravi<br>
./tests/basic/afr/replace-brick-self-heal.t TBD<br>
./tests/00-geo-rep/00-georep-verify-setup.t Kotresh<br>
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t Karthik<br>
./tests/basic/stats-dump.t TBD<br>
./tests/bugs/bug-1110262.t TBD<br>
./tests/basic/ec/ec-data-heal.t Mohit<br>
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t Pranith<br>
./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t TBD<br>
./tests/basic/ec/ec-5-2.t Sunil<br>
./tests/bugs/shard/bug-shard-discard.t TBD<br>
./tests/bugs/glusterd/remove-brick-testcases.t TBD<br>
./tests/bugs/protocol/bug-808400-repl.t TBD<br>
./tests/bugs/quick-read/bug-846240.t Du<br>
./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t Mohit<br>
./tests/00-geo-rep/georep-basic-dr-tarssh.t Kotresh<br>
./tests/bugs/ec/bug-1236065.t Pranith<br>
./tests/00-geo-rep/georep-basic-dr-rsync.t Kotresh<br>
./tests/basic/ec/ec-1468261.t Ashish<br>
./tests/basic/afr/add-brick-self-heal.t Ravi<br>
./tests/basic/afr/granular-esh/replace-brick.t Pranith<br>
./tests/bugs/core/multiplex-limit-issue-151.t Sanju<br>
./tests/bugs/glusterd/validating-server-quorum.t Atin<br>
./tests/bugs/replicate/bug-1363721.t Ravi<br>
./tests/bugs/index/bug-1559004-EMLINK-handling.t Pranith<br>
./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t Karthik<br>
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t Atin<br>
./tests/bugs/glusterd/rebalance-operations-in-single-node.t TBD<br>
./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t TBD<br>
./tests/bitrot/bug-1373520.t Kotresh<br>
./tests/bugs/distribute/bug-1117851.t Shyam/Nigel<br>
./tests/bugs/glusterd/quorum-validation.t Atin<br>
./tests/bugs/distribute/bug-1042725.t Shyam<br>
./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t Karthik<br>
./tests/bugs/quota/bug-1293601.t TBD<br>
./tests/bugs/bug-1368312.t Du<br>
./tests/bugs/distribute/bug-1122443.t Du<br>
./tests/bugs/core/bug-1432542-mpx-restart-crash.t 1608568 Nithya/Shyam<br>
<br>
Thanks,<br>
Shyam<br>
______________________________<wbr>_________________<br>
maintainers mailing list<br>
<a href="mailto:maintainers@gluster.org">maintainers@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/maintainers" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/maintainers</a><br>
</blockquote></div><br></div></div>