[Gluster-devel] Release 5: Master branch health report (Week of 23rd July)

Nigel Babu nigelb at redhat.com
Thu Jul 26 04:53:26 UTC 2018


Replies inline

On Thu, Jul 26, 2018 at 1:48 AM Shyam Ranganathan <srangana at redhat.com>
wrote:

> On 07/24/2018 03:28 PM, Shyam Ranganathan wrote:
> > On 07/24/2018 03:12 PM, Shyam Ranganathan wrote:
> >> 1) master branch health checks (weekly, till branching)
> >>   - Expect a status update every Monday on the various test runs
> >
> > See https://build.gluster.org/job/nightly-master/ for a report on
> > various nightly and periodic jobs on master.
> >
> > RED:
> > 1. Nightly regression
> > 2. Regression with multiplex (cores and test failures)
> > 3. line-coverage (cores and test failures)
>
> The line-coverage failures are filed as the following BZs:
> 1) Parent BZ for nightly line coverage failure:
> https://bugzilla.redhat.com/show_bug.cgi?id=1608564
>
> 2) glusterd crash in test sdfs-sanity.t:
> https://bugzilla.redhat.com/show_bug.cgi?id=1608566
>
> glusterd folks, please take a look and help correct this.
>
> 3) bug-1432542-mpx-restart-crash.t times out consistently:
> https://bugzilla.redhat.com/show_bug.cgi?id=1608568
>
> @nigel is there a way to request lcov tests on demand through Gerrit? I
> am thinking of pushing a patch that increases the timeout and checking
> whether it solves the problem for this test, as detailed in the bug.
>

You should have access to trigger the job from Jenkins. Does that work for
now?
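For anyone else who needs this, a minimal sketch of triggering a job over
the Jenkins REST API with curl; the job name and credentials below are
placeholders, and some Jenkins setups additionally require a CSRF crumb:

    # Queue a build of a Jenkins job remotely. JENKINS_USER and
    # JENKINS_TOKEN are your Jenkins username and API token; the job
    # name here is a placeholder, not necessarily the real lcov job.
    curl -X POST \
        --user "$JENKINS_USER:$JENKINS_TOKEN" \
        "https://build.gluster.org/job/line-coverage/build"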

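As for the timeout patch itself, a rough sketch of what it could look
like, assuming the regression harness honors a per-test SCRIPT_TIMEOUT
override in the .t file (path and value are illustrative, not tested):

    #!/bin/bash
    # tests/bugs/core/bug-1432542-mpx-restart-crash.t (excerpt)
    # Raise the per-test timeout above the harness default so the test
    # has room to finish under lcov instrumentation overhead.
    SCRIPT_TIMEOUT=900    # seconds; illustrative value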

>
> >
> > Calling on contributors to take a look at the various failures, and to
> > post them as bugs AND to the lists (so that duplication is avoided), to
> > get this to a GREEN status.
> >
> > GREEN:
> > 1. cpp-check
> > 2. RPM builds
> >
> > IGNORE (for now):
> > 1. clang scan (@nigel, this job requires clang warnings to be fixed to
> > go green, right?)
>

So there are two ways. Back when I first ran it, I set a limit on how many
clang warnings we allow. If we go above that number, the job turns yellow.
The current threshold is 955 and we're at 1001. What would be useful is for
us to fix a few warnings a week and keep bumping this limit down.
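For context, the gate is conceptually something like the sketch below;
the log path and counting method are illustrative, and the real job may
implement the threshold through a Jenkins plugin rather than a script:

    # Count clang warnings in the scan output and flag the build if we
    # regress past the agreed threshold.
    THRESHOLD=955
    count=$(grep -c 'warning:' clang-scan.log)
    echo "clang warnings: ${count} (threshold: ${THRESHOLD})"
    if [ "${count}" -gt "${THRESHOLD}" ]; then
        # Non-zero exit lets Jenkins mark the run as not green.
        exit 1
    fi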


> >
> > Shyam
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel at gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-devel
> >
>


-- 
nigelb