[Gluster-devel] Master branch lock down: RCA for tests (ec-1468261.t)

Ashish Pandey aspandey at redhat.com
Mon Aug 13 05:43:10 UTC 2018


Correction. 

RCA - https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html 
Patch - Mohit is working on this patch (server side), which is yet to be merged. 

We can add an extra check to make sure the bricks are connected to shd before the heal begins; a rough sketch is below. Will send a patch for that. 
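
For reference, a minimal sketch of what such a check could look like in a .t test. This is a sketch only, assuming an ec_child_up_count_shd helper (hypothetical here, modelled on the existing ec_child_up_count helper in tests/volume.rc); the actual patch may do this differently:

    #!/bin/bash
    . $(dirname $0)/../../include.rc
    . $(dirname $0)/../../volume.rc

    cleanup

    TEST glusterd
    TEST $CLI volume create $V0 disperse 6 redundancy 2 $H0:$B0/${V0}{0..5}
    TEST $CLI volume start $V0

    # Wait until shd sees all 6 bricks as up before triggering heal.
    # ec_child_up_count_shd is an assumed helper that would count up
    # children from the shd process, analogous to ec_child_up_count
    # on the client side.
    EXPECT_WITHIN $CHILD_UP_TIMEOUT "6" ec_child_up_count_shd $V0 0

    TEST $CLI volume heal $V0 full

The point of the EXPECT_WITHIN is to remove the race: today the heal can be triggered while some bricks are still disconnected from shd, which is what makes the test fail sporadically.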

--- 
Ashish 

----- Original Message -----

From: "Ashish Pandey" <aspandey at redhat.com> 
To: "Shyam Ranganathan" <srangana at redhat.com> 
Cc: "GlusterFS Maintainers" <maintainers at gluster.org>, "Gluster Devel" <gluster-devel at gluster.org> 
Sent: Monday, August 13, 2018 10:54:16 AM 
Subject: Re: [Gluster-devel] Master branch lock down: RCA for tests (ec-1468261.t) 


RCA - https://lists.gluster.org/pipermail/gluster-devel/2018-August/055167.html 
Patch - https://review.gluster.org/#/c/glusterfs/+/20657/ should also fix this issue. 

Checking if we can add an extra check to make sure the bricks are connected to shd before the heal begins. Will send a patch for that. 

--- 
Ashish 

----- Original Message -----

From: "Shyam Ranganathan" <srangana at redhat.com> 
To: "Gluster Devel" <gluster-devel at gluster.org>, "GlusterFS Maintainers" <maintainers at gluster.org> 
Sent: Monday, August 13, 2018 6:12:59 AM 
Subject: Re: [Gluster-devel] Master branch lock down: RCA for tests (testname.t) 

To keep the focus going and squash the remaining sporadically failing tests, I request each test/component owner to: 

- respond to this mail, changing the subject (testname.t) to the test 
name they are responding to (adding more than one in case they share 
the same RCA) 
- include the current RCA and its status 

The list of tests and current owners, as per the spreadsheet we were 
tracking, is: 

./tests/basic/distribute/rebal-all-nodes-migrate.t TBD 
./tests/basic/tier/tier-heald.t TBD 
./tests/basic/afr/sparse-file-self-heal.t TBD 
./tests/bugs/shard/bug-1251824.t TBD 
./tests/bugs/shard/configure-lru-limit.t TBD 
./tests/bugs/replicate/bug-1408712.t Ravi 
./tests/basic/afr/replace-brick-self-heal.t TBD 
./tests/00-geo-rep/00-georep-verify-setup.t Kotresh 
./tests/basic/afr/gfid-mismatch-resolution-with-fav-child-policy.t Karthik 
./tests/basic/stats-dump.t TBD 
./tests/bugs/bug-1110262.t TBD 
./tests/basic/ec/ec-data-heal.t Mohit 
./tests/bugs/replicate/bug-1448804-check-quorum-type-values.t Pranith 
./tests/bugs/snapshot/bug-1482023-snpashot-issue-with-other-processes-accessing-mounted-path.t TBD 
./tests/basic/ec/ec-5-2.t Sunil 
./tests/bugs/shard/bug-shard-discard.t TBD 
./tests/bugs/glusterd/remove-brick-testcases.t TBD 
./tests/bugs/protocol/bug-808400-repl.t TBD 
./tests/bugs/quick-read/bug-846240.t Du 
./tests/bugs/replicate/bug-1290965-detect-bitrotten-objects.t Mohit 
./tests/00-geo-rep/georep-basic-dr-tarssh.t Kotresh 
./tests/bugs/ec/bug-1236065.t Pranith 
./tests/00-geo-rep/georep-basic-dr-rsync.t Kotresh 
./tests/basic/ec/ec-1468261.t Ashish 
./tests/basic/afr/add-brick-self-heal.t Ravi 
./tests/basic/afr/granular-esh/replace-brick.t Pranith 
./tests/bugs/core/multiplex-limit-issue-151.t Sanju 
./tests/bugs/glusterd/validating-server-quorum.t Atin 
./tests/bugs/replicate/bug-1363721.t Ravi 
./tests/bugs/index/bug-1559004-EMLINK-handling.t Pranith 
./tests/bugs/replicate/bug-1433571-undo-pending-only-on-up-bricks.t Karthik 
./tests/bugs/glusterd/add-brick-and-validate-replicated-volume-options.t Atin 
./tests/bugs/glusterd/rebalance-operations-in-single-node.t TBD 
./tests/bugs/replicate/bug-1386188-sbrain-fav-child.t TBD 
./tests/bitrot/bug-1373520.t Kotresh 
./tests/bugs/distribute/bug-1117851.t Shyam/Nigel 
./tests/bugs/glusterd/quorum-validation.t Atin 
./tests/bugs/distribute/bug-1042725.t Shyam 
./tests/bugs/replicate/bug-1586020-mark-dirty-for-entry-txn-on-quorum-failure.t Karthik 
./tests/bugs/quota/bug-1293601.t TBD 
./tests/bugs/bug-1368312.t Du 
./tests/bugs/distribute/bug-1122443.t Du 
./tests/bugs/core/bug-1432542-mpx-restart-crash.t 1608568 Nithya/Shyam 

Thanks, 
Shyam 
_______________________________________________ 
Gluster-devel mailing list 
Gluster-devel at gluster.org 
https://lists.gluster.org/mailman/listinfo/gluster-devel 

