<html><body><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div>Correction.<br></div><div><br></div><div><div>RCA - <a href="https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html" target="_blank">https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html</a><br></div><div>Patch - Mohit is working on a server-side patch for this, which is yet to be merged.</div><div><br></div><div>We can add an extra test to make sure the bricks are connected to the self-heal daemon (shd) before the heal begins. I will send a patch for that.</div><div><br></div><div>---</div>Ashish</div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Ashish Pandey" <aspandey@redhat.com><br><b>To: </b>"Shyam Ranganathan" <srangana@redhat.com><br><b>Cc: </b>"GlusterFS Maintainers" <maintainers@gluster.org>, "Gluster Devel" <gluster-devel@gluster.org><br><b>Sent: </b>Monday, August 13, 2018 10:54:16 AM<br><b>Subject: </b>Re: [Gluster-devel] Master branch lock down: RCA for tests (ec-1468261.t)<br><div><br></div><div style="font-family: times new roman, new york, times, serif; font-size: 12pt; color: #000000"><div><br></div><div>RCA - <a href="https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html" target="_blank">https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html</a><br></div><div>Patch - <a href="https://review.gluster.org/#/c/glusterfs/+/20657/" target="_blank">https://review.gluster.org/#/c/glusterfs/+/20657/</a> should also fix this issue.<br></div><div><br></div><div>Checking if we can add an extra test to make sure the bricks are connected to the shd before the heal begins.
Will send a patch for that.<br></div><div><br></div><div>---<br></div><div>Ashish<br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>"Shyam Ranganathan" <srangana@redhat.com><br><b>To: </b>"Gluster Devel" <gluster-devel@gluster.org>, "GlusterFS Maintainers" <maintainers@gluster.org><br><b>Sent: </b>Monday, August 13, 2018 6:12:59 AM<br><b>Subject: </b>Re: [Gluster-devel] Master branch lock down: RCA for tests (testname.t)<br><div><br></div>As a means of keeping the focus going and squashing the remaining tests<br>that were failing sporadically, request each test/component owner to,<br><div><br></div>- respond to this mail changing the subject (testname.t) to the test<br>name that they are responding to (adding more than one in case they have<br>the same RCA)<br>- with the current RCA and status of the same<br><div><br></div>List of tests and current owners as per the spreadsheet that we were<br>tracking are:<br><div><br></div>./tests/basic/distribute/rebal-all-nodes-migrate.t TBD<br>./tests/basic/tier/tier-heald.t TBD<br>./tests/basic/afr/sparse-file-self-heal.t TBD<br>./tests/bugs/shard/bug-1251824.t TBD<br>./tests/bugs/shard/configure-lru-limit.t TBD<br>./tests/bugs/replicate/bug-1408712.t Ravi<br>./tests/basic/afr/replace-brick-self-heal.t TBD<br>./tests/00-geo-rep/00-georep-verify-setup.t Kotresh<br>./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t Karthik<br>./tests/basic/stats-dump.t TBD<br>./tests/bugs/bug-1110262.t TBD<br>./tests/basic/ec/ec-data-heal.t Mohit<br>./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t Pranith<br>./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t<br>TBD<br>./tests/basic/ec/ec-5-2.t Sunil<br>./tests/bugs/shard/bug-shard-discard.t TBD<br>./tests/bugs/glusterd/remove-brick-testcases.t 
TBD<br>./tests/bugs/protocol/bug-808400-repl.t TBD<br>./tests/bugs/quick-read/bug-846240.t Du<br>./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t Mohit<br>./tests/00-geo-rep/georep-basic-dr-tarssh.t Kotresh<br>./tests/bugs/ec/bug-1236065.t Pranith<br>./tests/00-geo-rep/georep-basic-dr-rsync.t Kotresh<br>./tests/basic/ec/ec-1468261.t Ashish<br>./tests/basic/afr/add-brick-self-heal.t Ravi<br>./tests/basic/afr/granular-esh/replace-brick.t Pranith<br>./tests/bugs/core/multiplex-limit-issue-151.t Sanju<br>./tests/bugs/glusterd/validating-server-quorum.t Atin<br>./tests/bugs/replicate/bug-1363721.t Ravi<br>./tests/bugs/index/bug-1559004-EMLINK-handling.t Pranith<br>./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t Karthik<br>./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t<br> Atin<br>./tests/bugs/glusterd/rebalance-operations-in-single-node.t TBD<br>./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t TBD<br>./tests/bitrot/bug-1373520.t Kotresh<br>./tests/bugs/distribute/bug-1117851.t Shyam/Nigel<br>./tests/bugs/glusterd/quorum-validation.t Atin<br>./tests/bugs/distribute/bug-1042725.t Shyam<br>./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t<br> Karthik<br>./tests/bugs/quota/bug-1293601.t TBD<br>./tests/bugs/bug-1368312.t Du<br>./tests/bugs/distribute/bug-1122443.t Du<br>./tests/bugs/core/bug-1432542-mpx-restart-crash.t 1608568 Nithya/Shyam<br><div><br></div>Thanks,<br>Shyam<br>_______________________________________________<br>Gluster-devel mailing list<br>Gluster-devel@gluster.org<br>https://lists.gluster.org/mailman/listinfo/gluster-devel<br></div><div><br></div></div><br>_______________________________________________<br>Gluster-devel mailing list<br>Gluster-devel@gluster.org<br>https://lists.gluster.org/mailman/listinfo/gluster-devel</div><div><br></div></div></body></html>