<div dir="ltr"><div>Pranith,</div><div><br></div><div><a href="https://review.gluster.org/c/glusterfs/+/20685">https://review.gluster.org/c/glusterfs/+/20685</a> seems to have caused multiple failure runs out of <a href="https://review.gluster.org/c/glusterfs/+/20637/8">https://review.gluster.org/c/glusterfs/+/20637/8</a> out of yesterday&#39;s report. Did you get a chance to look at it?<br></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 10, 2018 at 1:03 PM Pranith Kumar Karampuri &lt;<a href="mailto:pkarampu@redhat.com">pkarampu@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 10, 2018 at 6:34 AM Shyam Ranganathan &lt;<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Today&#39;s test results are updated in the spreadsheet in sheet named &quot;Run<br>
>> "Run patch set 8".
>>
>> I took in patch https://review.gluster.org/c/glusterfs/+/20685, which
>> caused quite a few failures, so I am not recording the new failures as
>> issues yet.
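>>
>> (If anyone wants to pull that change into a local tree to test against,
>> Gerrit publishes every patchset under a refs/changes path; a minimal
>> sketch, where the trailing patchset number "1" is a placeholder for
>> whichever patchset you pick from the Gerrit UI:)
>>
>>   # Fetch one patchset of change 20685 from Gerrit and check it out.
>>   # Ref layout: refs/changes/<last 2 digits of change>/<change>/<patchset>
>>   git fetch https://review.gluster.org/glusterfs refs/changes/85/20685/1
>>   git checkout FETCH_HEAD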
>>
>> Please look at the failures for tests that were retried and passed, as
>> the logs for the initial runs should be preserved from this run onward.
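>>
>> (To reproduce one of these intermittent failures locally, a minimal
>> sketch, assuming a built tree and the prove-based harness the regression
>> jobs use; the test path is just an example from the lists below:)
>>
>>   # Re-run a single .t test until it fails, then stop, so the logs from
>>   # the failing iteration are left in place for inspection.
>>   for i in $(seq 1 20); do
>>       prove -vf ./tests/basic/afr/add-brick-self-heal.t || break
>>   done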
>>
>> Otherwise there is nothing else to report on the run status; if you are
>> averse to spreadsheets, look at this comment in Gerrit [1].
>>
>> Shyam
>>
>> [1] Patch set 8 run status:
>> https://review.gluster.org/c/glusterfs/+/20637/8#message-54de30fa384fd02b0426d9db6d07fad4eeefcf08
>> On 08/07/2018 07:37 PM, Shyam Ranganathan wrote:
>>> This deserves a new beginning; the threads on the other mail have gone
>>> deep enough.
>>>
>>> NOTE: item (5) below needs your attention; the rest is just process and
>>> data on how to find failures.
>>>
>>> 1) We are running the tests using the patch [2].
>>>
>>> 2) Run details are extracted into a separate sheet in [3] named "Run
>>> Failures"; use a search to find a failing test and the corresponding
>>> run it failed in.
>>>
>>> 3) Patches that fix issues can be found here [1]; if you think you have
>>> a patch out there that is not in this list, shout out.
>>>
>>> 4) If you are taking ownership of a test case failure, update the
>>> spreadsheet [3] with your name against the test, and also update other
>>> details as needed (as comments, since edit rights to the sheet are
>>> restricted).
>>>
>>> 5) Current test failures
>>> We still have the following tests failing, some without any RCA or
>>> attention (if something is incorrect, write back).
>>>
>>> ./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t (needs attention)
>>> ./tests/00-geo-rep/georep-basic-dr-tarssh.t (Kotresh)
>>> ./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t (Atin)
>>> ./tests/bugs/ec/bug-1236065.t (Ashish)
>>> ./tests/00-geo-rep/georep-basic-dr-rsync.t (Kotresh)
>>> ./tests/basic/ec/ec-1468261.t (needs attention)
>>> ./tests/basic/afr/add-brick-self-heal.t (needs attention)
>>> ./tests/basic/afr/granular-esh/replace-brick.t (needs attention)
>>> ./tests/bugs/core/multiplex-limit-issue-151.t (needs attention)
>>> ./tests/bugs/glusterd/validating-server-quorum.t (Atin)
>>> ./tests/bugs/replicate/bug-1363721.t (Ravi)
>>>
>>> Here are some newer failures, mostly one-off failures except for the
>>> cores in ec-5-2.t. All of the following need attention as they are new;
>>> a sketch for taking a first look at those cores follows the list.
>>>
>>> ./tests/00-geo-rep/00-georep-verify-setup.t
>>> ./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t
>>> ./tests/basic/stats-dump.t
>>> ./tests/bugs/bug-1110262.t
>>> ./tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t
>>> ./tests/basic/ec/ec-data-heal.t
>>> ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t
>
> Sent https://review.gluster.org/c/glusterfs/+/20697 for the test above.
>
>>> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t
>>> ./tests/basic/ec/ec-5-2.t
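>>>
>>> (A minimal sketch for a first pass over one of those cores, assuming
>>> you have copied the core and the matching binary off the regression
>>> machine; both paths here are placeholders:)
>>>
>>>   # Dump a backtrace from every thread of the crashed process.
>>>   gdb -batch -ex 'thread apply all bt full' \
>>>       /usr/local/sbin/glusterfsd ./core.12345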
>>>
>>> 6) Tests that are addressed or are not occurring anymore:
>>>
>>> ./tests/bugs/glusterd/rebalance-operations-in-single-node.t
>>> ./tests/bugs/index/bug-1559004-EMLINK-handling.t
>>> ./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t
>>> ./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t
>>> ./tests/bitrot/bug-1373520.t
>>> ./tests/bugs/distribute/bug-1117851.t
>>> ./tests/bugs/glusterd/quorum-validation.t
>>> ./tests/bugs/distribute/bug-1042725.t
>>> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t
>>> ./tests/bugs/quota/bug-1293601.t
>>> ./tests/bugs/bug-1368312.t
>>> ./tests/bugs/distribute/bug-1122443.t
>>> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t
>>>
>>> Shyam (and Atin)
>>>
>>> On 08/05/2018 06:24 PM, Shyam Ranganathan wrote:
>>>> Health on master as of the last nightly run [4] is still the same.
>>>>
>>>> Potential patches that rectify the situation (as in [1]) are bunched
>>>> into a single patch [2] that Atin and I have put through several
>>>> regressions (mux, normal, and line coverage), and these have also not
>>>> passed.
>>>>
>>>> Until we rectify the situation, we are locking down commit rights on
>>>> the master branch to the following people: Amar, Atin, Shyam, and
>>>> Vijay.
>>>>
>>>> The intention is to stabilize master and not add more patches that may
>>>> destabilize it.
>>>>
>>>> Test cases that are tracked as failures and need action are listed
>>>> here [3].
>>>>
>>>> @Nigel, please apply the commit rights change when you see this mail,
>>>> and let the list know once it is done.
>>>>
>>>> Thanks,
>>>> Shyam
>>>>
>>>> [1] Patches that address regression failures:
>>>> https://review.gluster.org/#/q/starredby:srangana%2540redhat.com
>>>>
>>>> [2] Bunched-up patch against which regressions were run:
>>>> https://review.gluster.org/#/c/20637
>>>>
>>>> [3] Failing tests list:
>>>> https://docs.google.com/spreadsheets/d/1IF9GhpKah4bto19RQLr0y_Kkw26E_-crKALHSaSjZMQ/edit?usp=sharing
>>>>
>>>> [4] Nightly run dashboard: https://build.gluster.org/job/nightly-master/
>
> --
> Pranith