[Gluster-devel] Gerrit review, submit type and Jenkins testing

Atin Mukherjee atin.mukherjee83 at gmail.com
Tue Nov 10 06:03:59 UTC 2015


-Atin
Sent from one plus one
On Nov 10, 2015 11:24 AM, "Kaushal M" <kshlmster at gmail.com> wrote:
>
> On Tue, Nov 10, 2015 at 9:24 AM, Raghavendra Gowdappa
> <rgowdapp at redhat.com> wrote:
> >
> >
> > ----- Original Message -----
> >> From: "Raghavendra Talur" <rtalur at redhat.com>
> >> To: "Gluster Devel" <gluster-devel at gluster.org>
> >> Sent: Tuesday, November 10, 2015 3:10:34 AM
> >> Subject: [Gluster-devel] Gerrit review, submit type and Jenkins testing
> >>
> >> Hi,
> >>
> >> While trying to understand how our gerrit+jenkins setup works, I
> >> realized there is a possibility of bugs getting in.
> >>
> >> Currently, our gerrit is set up to have cherry-pick as the submit
> >> type. Now consider a case where:
> >>
> >> Dev1 sends a commit B with parent commit A (A is already merged).
> >> Dev2 sends a commit C with parent commit A (A is already merged).
> >>
> >> Both the patches get +2 from Jenkins.
> >>
> >> Maintainer merges commit B from Dev1.
> >> Another maintainer merges commit C from Dev2.
> >>
> >> If the two commits B and C changed code which had no merge conflicts
> >> but conflicted in logic, then we have a master which has bugs.
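> >>
> >> A contrived sketch of how that can happen (hypothetical commits and
> >> behaviour, just to illustrate):
> >>
> >>     # A is already on master; B and C were both written on top of A.
> >>     #
> >>     #   B (Dev1): changes lookup() to return -1 instead of 0 on failure.
> >>     #   C (Dev2): adds a new caller that assumes lookup() returns 0 on
> >>     #             failure.
> >>     #
> >>     # Each passes regression individually, on top of A. On submit,
> >>     # gerrit's cherry-pick submit effectively does:
> >>     git checkout master
> >>     git cherry-pick B     # applies cleanly
> >>     git cherry-pick C     # also applies cleanly, the hunks don't overlap
> >>     # master compiles and has no merge conflict, but C's new caller
> >>     # mishandles the error convention introduced by B.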
> >>
> >> If Dev3 now sends a commit D with re-based master as parent, we have
> >> the following cases:
> >>
> >> 1. If the bug introduced above is not racy, tests will always fail for
> >> Dev3 on commit D. The failing tests would be from the components that
> >> commits B and C changed. Dev3 has no idea how to fix them and has to
> >> enlist help from Dev1 and Dev2.
> >>
> >> 2. If the bug introduced above is racy, there is a chance that Dev3
> >> escapes this trouble and someone else bears it later. Even if the racy
> >> code is hit and a test fails, Dev3 will probably just re-trigger the
> >> tests, given that they failed in a component unrelated to his/her
> >> change, and the bug stays in the code longer.
> >>
> >> The most obvious, but not practical, solution to the above problem is
> >> to change the submit type in gerrit to "fast-forward only". That would
> >> ensure that once commit B is merged, Dev2 has to re-base commit C onto
> >> commit B and re-run the tests before it could be merged. It is not
> >> practical because it would cause all patches in review to be re-based
> >> and re-triggered whenever a patch is merged.
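> >>
> >> For what it's worth, the change itself would be small. Assuming the
> >> submit type is stored as submit.action in refs/meta/config:project.config
> >> (which is my understanding), something along these lines should switch
> >> the project over:
> >>
> >>     git fetch origin refs/meta/config && git checkout FETCH_HEAD
> >>     # switch from "cherry pick" to "fast forward only"
> >>     git config -f project.config submit.action "fast forward only"
> >>     git commit -am "Change submit type to fast forward only"
> >>     git push origin HEAD:refs/meta/config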
> >>
> >> A little modification to the above solution would be to
> >>
> >>     * change the submit type to fast-forward only
> >>     * don't run any jenkins job on patches till they get +2 from
> >>       reviewers
> >>     * once a +2 is given, run the jenkins job on the patch and
> >>       automatically submit it if the tests pass (a rough sketch of
> >>       this step follows below)
> >>     * automatically rebase all patches in review onto the new master
> >>       and mark a conflict if a merge conflict arises.
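> >>
> >> Roughly, the "run on +2 and auto-submit" step could be a job along
> >> these lines (just a sketch; the host, user and vote values are my
> >> assumptions, and a real setup would use the Gerrit Trigger plugin
> >> instead of polling):
> >>
> >>     #!/bin/bash
> >>     # Sketch: poll for open glusterfs changes that already carry a
> >>     # Code-Review+2, run the full regression on each, and submit on
> >>     # success. Needs jq for the JSON output of gerrit query.
> >>     gerrit_cmd() { ssh -p 29418 jenkins@review.gluster.org gerrit "$@"; }
> >>
> >>     gerrit_cmd query --format=JSON --current-patch-set \
> >>         'project:glusterfs status:open label:Code-Review=2' |
> >>     while read -r change; do
> >>         rev=$(echo "$change" | jq -r '.currentPatchSet.revision // empty')
> >>         ref=$(echo "$change" | jq -r '.currentPatchSet.ref // empty')
> >>         [ -z "$rev" ] && continue     # skip the trailing stats record
> >>
> >>         git fetch origin "$ref" && git checkout -q FETCH_HEAD
> >>         if ./run-tests.sh; then
> >>             # With "fast forward only" the submit succeeds only if the
> >>             # change is based on the current master tip.
> >>             gerrit_cmd review --verified=+1 --submit "$rev"
> >>         else
> >>             gerrit_cmd review --verified=-1 "$rev"
> >>         fi
> >>     done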
> >
> > Seems like a good suggestion. How about a slight variation to the above
> > process? Can we run one initial set of regressions immediately after
> > submission, but before any reviews? That way reviewers can prioritize
> > patches that have passed regression over the ones that have failed. The
> > flip side is that a minimum of two regression runs are needed to merge
> > any patch. I am making this suggestion with the assumption that
> > dev/reviewer time is more precious than machine time. Of course, this
> > will have issues with patches that need to get in urgently
> > (user/customer hot fix etc.) where time is a constraint, but that can
> > be worked around on a case-by-case basis.
>
> We would still be running smoke, which would catch any very obvious
> mistakes. Isn't this enough?
>
> Regarding the initial regression run, would it include the complete
> regression suite or just a subset? If it is the complete set, then it
> would be no different from what we are doing now. If it is a subset,
> then we will need to come up with a subset of the regression suite
> that catches most of the obvious mistakes. We've had discussions several
> times (don't remember if it was on the mailing lists, but I have had
> conversations) about doing this. Every time we ended up on the
> question of how we choose the subset, which is where we stopped.
How about running tests/basic/*.t? I know that coverage-wise this isn't
good enough, but it is still better than running everything or nothing.
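Something along these lines would be enough for that subset (a rough
sketch; assumes a built tree and root, which the .t framework needs, and
that we drive it with prove the way run-tests.sh does):

    # run only the basic tests, verbose, and summarise failures
    cd /path/to/glusterfs
    prove -vf tests/basic/*.t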
>
> >
> >>
> >> As a side effect of this, a dev would now be forced to run a complete
> >> regression on their dev machine before sending a patch for review.
> >>
> >> Any thoughts on the above solutions or other suggestions?
> >>
> >> Thanks,
> >> Raghavendra Talur
> >>

