<div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 10, 2018 at 7:39 PM Atin Mukherjee <<a href="mailto:amukherj@redhat.com">amukherj@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Pranith,</div><div><br></div><div><a href="https://review.gluster.org/c/glusterfs/+/20685" target="_blank">https://review.gluster.org/c/glusterfs/+/20685</a> seems to have caused multiple failed runs of <a href="https://review.gluster.org/c/glusterfs/+/20637/8" target="_blank">https://review.gluster.org/c/glusterfs/+/20637/8</a> in yesterday's report. Did you get a chance to look at it?<br></div></div></blockquote><div><br></div><div>All the EC tests that failed after this patch was taken in are timeout-related issues. The CentOS run for the patch on its own, without any of the other changes, didn't hit any of these failures. So I am thinking of doing a rebase and re-running the tests at <a href="https://review.gluster.org/c/glusterfs/+/20685">https://review.gluster.org/c/glusterfs/+/20685</a>; please let me know when that can be done.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 10, 2018 at 1:03 PM Pranith Kumar Karampuri <<a href="mailto:pkarampu@redhat.com" target="_blank">pkarampu@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><br><div class="gmail_quote"><div dir="ltr">On Fri, Aug 10, 2018 at 6:34 AM Shyam Ranganathan <<a href="mailto:srangana@redhat.com" target="_blank">srangana@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Today's test results 
are updated in the spreadsheet, in the sheet named "Run<br>
patch set 8".<br>
<br>
I took in patch <a href="https://review.gluster.org/c/glusterfs/+/20685" rel="noreferrer" target="_blank">https://review.gluster.org/c/glusterfs/+/20685</a> which<br>
caused quite a few failures, so I am not filing the new failures as issues yet.<br>
<br>
Please look at the failures for tests that were retried and passed, as<br>
the logs for the initial runs should be preserved from this run onward.<br>
<br>
Otherwise there is nothing else to report on the run status; if you are averse to<br>
spreadsheets, look at this comment in Gerrit [1].<br>
<br>
Shyam<br>
<br>
[1] Patch set 8 run status:<br>
<a href="https://review.gluster.org/c/glusterfs/+/20637/8#message-54de30fa384fd02b0426d9db6d07fad4eeefcf08" rel="noreferrer" target="_blank">https://review.gluster.org/c/glusterfs/+/20637/8#message-54de30fa384fd02b0426d9db6d07fad4eeefcf08</a><br>
On 08/07/2018 07:37 PM, Shyam Ranganathan wrote:<br>
> Deserves a new beginning, threads on the other mail have gone deep enough.<br>
> <br>
> NOTE: (5) below needs your attention; the rest is just process and data on<br>
> how to find failures.<br>
> <br>
> 1) We are running the tests using the patch [2].<br>
> <br>
> 2) Run details are extracted into a separate sheet in [3] named "Run<br>
> Failures"; use a search to find a failing test and the corresponding run<br>
> that it failed in.<br>
> <br>
> 3) Patches that fix issues can be found here [1]; if you think<br>
> you have a patch out there that is not in this list, shout out.<br>
> <br>
> 4) If you are taking ownership of a test case failure, update the<br>
> spreadsheet [3] with your name against the test, and update other details<br>
> as needed (as comments, since edit rights to the sheet are restricted).<br>
> <br>
> 5) Current test failures<br>
> We still have the following tests failing, some without any RCA or<br>
> attention. (If something is incorrect, write back.)<br>
> <br>
> ./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t (needs<br>
> attention)<br>
> ./tests/00-geo-rep/georep-basic-dr-tarssh.t (Kotresh)<br>
> ./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t<br>
> (Atin)<br>
> ./tests/bugs/ec/bug-1236065.t (Ashish)<br>
> ./tests/00-geo-rep/georep-basic-dr-rsync.t (Kotresh)<br>
> ./tests/basic/ec/ec-1468261.t (needs attention)<br>
> ./tests/basic/afr/add-brick-self-heal.t (needs attention)<br>
> ./tests/basic/afr/granular-esh/replace-brick.t (needs attention)<br>
> ./tests/bugs/core/multiplex-limit-issue-151.t (needs attention)<br>
> ./tests/bugs/glusterd/validating-server-quorum.t (Atin)<br>
> ./tests/bugs/replicate/bug-1363721.t (Ravi)<br>
> <br>
> Here are some newer failures, mostly one-off failures except for the cores<br>
> in ec-5-2.t. All of the following need attention as they are new.<br>
> <br>
> ./tests/00-geo-rep/00-georep-verify-setup.t<br>
> ./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t<br>
> ./tests/basic/stats-dump.t<br>
> ./tests/bugs/bug-1110262.t<br>
> ./tests/bugs/glusterd/mgmt-handshake-and-volume-sync-post-glusterd-restart.t<br>
> ./tests/basic/ec/ec-data-heal.t<br>
> ./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t<br></blockquote><div><br></div><div>Sent <a href="https://review.gluster.org/c/glusterfs/+/20697" target="_blank">https://review.gluster.org/c/glusterfs/+/20697</a> for the test above.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> ./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t<br>
> ./tests/basic/ec/ec-5-2.t<br>
> <br>
> 6) Tests that are addressed or are no longer occurring:<br>
> <br>
> ./tests/bugs/glusterd/rebalance-operations-in-single-node.t<br>
> ./tests/bugs/index/bug-1559004-EMLINK-handling.t<br>
> ./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t<br>
> ./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t<br>
> ./tests/bitrot/bug-1373520.t<br>
> ./tests/bugs/distribute/bug-1117851.t<br>
> ./tests/bugs/glusterd/quorum-validation.t<br>
> ./tests/bugs/distribute/bug-1042725.t<br>
> ./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t<br>
> ./tests/bugs/quota/bug-1293601.t<br>
> ./tests/bugs/bug-1368312.t<br>
> ./tests/bugs/distribute/bug-1122443.t<br>
> ./tests/bugs/core/bug-1432542-mpx-restart-crash.t<br>
> <br>
> Shyam (and Atin)<br>
> <br>
> On 08/05/2018 06:24 PM, Shyam Ranganathan wrote:<br>
>> Health on master as of the last nightly run [4] is still the same.<br>
>><br>
>> Potential patches that rectify the situation (as in [1]) are bunched into<br>
>> a single patch [2] that Atin and I have put through several regression<br>
>> runs (mux, normal, and line coverage), and these have also not passed.<br>
>><br>
>> Until we rectify the situation, we are restricting master branch commit<br>
>> rights to the following people: Amar, Atin, Shyam, Vijay.<br>
>><br>
>> The intention is to stabilize master and not add more patches that may<br>
>> destabilize it.<br>
>><br>
>> Test cases that are tracked as failures and need action are present here<br>
>> [3].<br>
>><br>
>> @Nigel, please apply the commit rights change as soon as you see this<br>
>> mail, and let the list know once it is done.<br>
>><br>
>> Thanks,<br>
>> Shyam<br>
>><br>
>> [1] Patches that address regression failures:<br>
>> <a href="https://review.gluster.org/#/q/starredby:srangana%2540redhat.com" rel="noreferrer" target="_blank">https://review.gluster.org/#/q/starredby:srangana%2540redhat.com</a><br>
>><br>
>> [2] Bunched up patch against which regressions were run:<br>
>> <a href="https://review.gluster.org/#/c/20637" rel="noreferrer" target="_blank">https://review.gluster.org/#/c/20637</a><br>
>><br>
>> [3] Failing tests list:<br>
>> <a href="https://docs.google.com/spreadsheets/d/1IF9GhpKah4bto19RQLr0y_Kkw26E_-crKALHSaSjZMQ/edit?usp=sharing" rel="noreferrer" target="_blank">https://docs.google.com/spreadsheets/d/1IF9GhpKah4bto19RQLr0y_Kkw26E_-crKALHSaSjZMQ/edit?usp=sharing</a><br>
>><br>
>> [4] Nightly run dashboard: <a href="https://build.gluster.org/job/nightly-master/" rel="noreferrer" target="_blank">https://build.gluster.org/job/nightly-master/</a><br>
> _______________________________________________<br>
> Gluster-devel mailing list<br>
> <a href="mailto:Gluster-devel@gluster.org" target="_blank">Gluster-devel@gluster.org</a><br>
> <a href="https://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-devel</a><br>
> <br>
_______________________________________________<br>
maintainers mailing list<br>
<a href="mailto:maintainers@gluster.org" target="_blank">maintainers@gluster.org</a><br>
<a href="https://lists.gluster.org/mailman/listinfo/maintainers" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/maintainers</a><br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail-m_-5886687730623441266gmail-m_8107148399313084123gmail_signature"><div dir="ltr">Pranith<br></div></div></div>
</blockquote></div></div>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr">Pranith<br></div></div></div>