[Gluster-devel] Gerrit review, submit type and Jenkins testing

Raghavendra Gowdappa rgowdapp at redhat.com
Tue Jan 12 04:27:23 UTC 2016



----- Original Message -----
> From: "Raghavendra Talur" <rtalur at redhat.com>
> To: "Michael Adam" <obnox at samba.org>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Tuesday, January 12, 2016 8:46:05 AM
> Subject: Re: [Gluster-devel] Gerrit review, submit type and Jenkins testing
> 
> 
> 
> 
> On Jan 12, 2016 3:44 AM, "Michael Adam" <obnox at samba.org> wrote:
> > 
> > On 2016-01-08 at 12:03 +0530, Raghavendra Talur wrote:
> > > Top posting, this is a very old thread.
> > > 
> > > In view of the recent NetBSD problems and the number of bugs creeping
> > > in, I suggest we do these things right now:
> > > 
> > > a. Change the Gerrit submit type to fast forward only.
> > > As explained below in the thread, with our current setup, even if
> > > PatchA and PatchB each pass regression separately, a functional bug
> > > can creep in once both are merged.
> > > This is the only way to prevent that from happening.
> > > I will work with Kaushal to get this done (a config sketch follows
> > > below).
> > > 
> > > b. In Jenkins, remove the Gerrit trigger and make triggering a
> > > manual operation.
> > > 
> > > Too many developers use the upstream infra as a test cluster, and it
> > > is *not* one.
> > > It is a verification mechanism for maintainers to ensure that the
> > > patch does not cause a regression.
> > > 
> > > All developers are required to run full regression on their own
> > > machines before asking for reviews (a workflow sketch follows below).
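
For (a), a sketch of the Gerrit side, assuming the stock project.config
mechanism (exact steps can vary with the Gerrit version): the submit
type lives in the project's refs/meta/config branch.

    git fetch origin refs/meta/config:refs/remotes/origin/meta/config
    git checkout origin/meta/config
    # in project.config, set:
    #   [submit]
    #           action = fast forward only
    git commit -a -m "Set submit type to fast forward only"
    git push origin HEAD:refs/meta/config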
> > 
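For (b), a sketch of the resulting workflow. run-tests.sh is the
regression driver in the glusterfs tree; the Jenkins URL, job name and
parameters below are placeholders, not the real build.gluster.org setup:

    # developer, before asking for review: full regression locally
    ./run-tests.sh

    # maintainer, after review: trigger the regression job by hand
    # through Jenkins' remote build API instead of the Gerrit trigger
    curl -X POST --user maintainer:API_TOKEN \
        --data "GERRIT_REFSPEC=refs/changes/45/12345/2" \
        "https://jenkins.example.org/job/regression/buildWithParameters"
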
> > Hmm, I am not 100% sure I would subscribe to that.
> > I am coming from the Samba process, where we have exactly
> > that: a developer should have run the full selftest before
> > submitting the change for review. Then, after two Samba
> > team developers have given their review+ (counting the
> > author), it can be pushed to our automation, which keeps
> > rebasing on current upstream and running selftest until
> > either selftest succeeds and the result is pushed as a
> > fast forward, or selftest fails.
> > 
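(For those who have not seen the Samba setup: a rough shell sketch of
such a rebase-and-retest gate. Names are illustrative; this is not
Samba's actual autobuild script.)

    # keep rebasing on current upstream and re-running selftest;
    # push as a fast forward only once selftest passes
    while true; do
        git fetch upstream
        git rebase upstream/master || exit 1   # conflict: needs rework
        ./run-selftest.sh || exit 1            # selftest failed: reject
        # a plain push succeeds only if it is still a fast forward;
        # if upstream moved in the meantime, rebase and test again
        git push upstream HEAD:master && break
    done
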
> > The reality is that people are lazy and think they know
> > when they can skip selftest. But people deceive themselves
> > and overlook problems. Hence either reviewers run into
> > failures or the automatic pre-push selftest fails. The
> > problem I see with this is that it wastes the precious
> > time of the reviewers.
> > 
> > When I started contributing to Gluster, I found it to
> > be a big, big plus to have automatic regression runs
> > as a first step after submission, so that a reviewer
> > has the option to only start looking at the patch once
> > automatic tests have passed.
> > 
> > I completely agree that the fast-forward-only,
> > post-review-pre-merge-regression-run approach
> > is the way to go; only this way can the original problem
> > described by Talur be avoided.
> > 
> > But would it be possible to keep, and even require, some
> > amount of automatic pre-review testing (a build and at
> > least some amount of runtime tests)?
> > It really prevents wasting the time of reviewers/maintainers.
> > 
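(A minimal pre-review smoke check along those lines, sketched against
the glusterfs tree; the canary test chosen here is illustrative:)

    # cheap checks on every submission: does the patch build, and does
    # one short test pass? Full regression waits until after review.
    ./autogen.sh && ./configure --enable-debug && make -j4
    ./run-tests.sh tests/basic/volume.t   # single quick .t as a canary
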
> > The problem with this is of course that it can increase
> > the (real) time needed to complete a review from submission
> > until upstream merge.
> > 
> > Just a few thoughts...
> > 
> > Cheers - Michael
> > 
> 
> We have heard the same concern from many other maintainers. I guess it
> would be better if tests run both before and after review. With these
> changes we would at least have removed test runs for work-in-progress
> patches.

Yes. I think it would be better for one set of regressions to run before
a +1. It normally takes a couple of iterations to +1 a patch, so running
regressions only after review would unnecessarily serialize regression
runs and review. If regressions run in parallel with review, developers
can work on regression failures while others review the patch.

> 
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel

