[Gluster-Maintainers] Flaky Regression tests again?
Amar Tumballi
amar at kadalu.io
Sat Aug 15 14:03:27 UTC 2020
If I look at the recent regression runs (
https://build.gluster.org/job/centos7-regression/), more than 50% of the
runs are failing.
At least 90% of the failures are not due to the patch itself. Considering
that regression tests are critical for our patches to get merged, and that a
run now takes almost 6-7 hours to complete, how can we make sure we pass
regression with 100% certainty?
Again, out of these, only a few tests keep failing. Should we revisit those
tests and see why they fail, or should we treat them as 'good if it passes,
but don't fail the regression run if it fails'? (A rough sketch of that
option follows the test list below.)
Some tests I have listed here from recent failures:
tests/bugs/core/multiplex-limit-issue-151.t
tests/bugs/distribute/bug-1122443.t +++
tests/bugs/distribute/bug-1117851.t
tests/bugs/glusterd/bug-857330/normal.t +
tests/basic/mount-nfs-auth.t +++++
tests/basic/changelog/changelog-snapshot.t
tests/basic/afr/split-brain-favorite-child-policy.t
tests/basic/distribute/rebal-all-nodes-migrate.t
tests/bugs/glusterd/quorum-value-check.t
tests/features/lock-migration/lkmigration-set-option.t
tests/bugs/nfs/bug-1116503.t
tests/basic/ec/ec-quorum-count-partial-failure.t
Considering these are just 12 of the 750+ tests we run, should we even
consider marking them as bad tests until they are fixed to be 100% consistent?
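To make the 'good if it passes, don't fail the run if it fails' idea concrete,
here is a rough, hypothetical sketch of a soft-fail wrapper. The KNOWN_FLAKY
list, the is_known_flaky helper and the prove invocation below are mine for
illustration only; this is not how run-tests.sh currently works.

#!/bin/bash
# Hypothetical soft-fail wrapper: run every .t test, but only fail the
# overall job for tests that are NOT on the known-flaky list; failures of
# known-flaky tests are reported but do not block the regression run.

KNOWN_FLAKY=(
    tests/basic/mount-nfs-auth.t
    tests/bugs/distribute/bug-1122443.t
)

is_known_flaky () {
    local t
    for t in "${KNOWN_FLAKY[@]}"; do
        [ "$t" = "$1" ] && return 0
    done
    return 1
}

overall=0
for t in $(find tests -name '*.t' | sort); do
    if prove -vf "$t"; then
        continue
    fi
    if is_known_flaky "$t"; then
        echo "WARNING: known-flaky test failed (not counted): $t"
    else
        echo "FAILURE: $t"
        overall=1
    fi
done
exit $overall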
Any thoughts on how we should go ahead?
Regards,
Amar
(+) indicates a count: the more '+' marks you see against a file, the more
times that test failed.