[Gluster-devel] Reducing merge conflicts

Jeff Darcy jdarcy at redhat.com
Fri Jul 8 13:57:06 UTC 2016


(combining replies to multiple people)

Pranith:
> I agree about encouraging a specific kind of review. At the same time we
> need to make reviewing, and helping users in the community, as important
> as sending patches in everyone's eyes. It is very important to know these
> statistics to move in the right direction. My main problem with this is:
> everyone knows that reviews are important, so why are they not happening?
> Is it really laziness?

"Laziness" was clearly a bad choice of words, for which I apologize.  I
should have said "lack of diligence" or something to reflect that it's an
*organizational* rather than personal problem.  We *as a group* have not
been keeping up with the review workload.  Whatever the reasons are, to
change the outcome we need to change behavior, and to change behavior we
need to change the incentives.


Raghavendra G:
> Personally I've found a genuine -1 to be more valuable than a +1. Since we
> are discussing measurement, how does one measure the issues that are
> prevented (through good design and thoughtful coding/review) rather than
> only the issues that are _fixed_?

Another excellent point.  It's easier to see the failures than the successes.
It's a bit like traffic accidents.  Everyone sees when you cause one, but not
when you avoid one.  If a regression occurs, everyone can look back to see
who the author and reviewers were.  If there's no regression ... what then?
Pranith has suggested some mechanism to give credit/karma in cases where it
can't be done automatically.  Meta-review (review of reviews) is another
possibility.  I've seen it work in other contexts, but I'm not sure how to
apply it here.

> Measuring -1s and -2s along with +1s and +2s can be a good
> place to start (though as with many measurements, they may not reflect
> the underlying value accurately).

The danger here is that we'll incentivize giving a -1 for superficial
reasons.  We don't need more patches blocked because a reviewer doesn't
like a file/variable name, or wants to play "I know a better way" games.
Unfortunately, it's hard to distinguish those from enforcing standards
that really matter, or avoiding technical debt.  I guess that brings us
back to manual overrides and/or meta-review.
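
For what it's worth, Gerrit does expose enough data to compute those vote
counts.  Here's a rough Python sketch of how one might tally Code-Review
votes per reviewer through Gerrit's REST API.  Treat it as a sketch: the
server URL and query below are placeholders, not our actual setup.

#!/usr/bin/env python
# Minimal sketch: tally Code-Review votes per reviewer via Gerrit's REST API.
# Assumptions: the server URL and query are placeholders, and anonymous read
# access is allowed; adjust both for the real review server.
import json
from collections import Counter

import requests

GERRIT = "https://review.example.org"        # hypothetical server URL
QUERY = "project:glusterfs+status:merged"    # hypothetical query

def fetch_changes(limit=100):
    url = "%s/changes/?q=%s&o=DETAILED_LABELS&n=%d" % (GERRIT, QUERY, limit)
    resp = requests.get(url)
    resp.raise_for_status()
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI; strip it.
    return json.loads(resp.text[4:])

def tally_votes(changes):
    votes = Counter()  # (reviewer name, vote value) -> count
    for change in changes:
        label = change.get("labels", {}).get("Code-Review", {})
        for approval in label.get("all", []):
            value = approval.get("value", 0)
            if value:  # skip reviewers listed without an actual vote
                votes[(approval.get("name", "?"), value)] += 1
    return votes

if __name__ == "__main__":
    for (name, value), count in tally_votes(fetch_changes()).most_common():
        print("%-30s %+d x%d" % (name, value, count))

Raw counts like these would still need the manual override or meta-review
layer discussed above before anyone treats them as a measure of value.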


Poornima:
> Below are a few things we can do to reduce our review backlog:
> - Maintainers having no time to review is not a good enough reason to let
> patches bitrot in review for months; it clearly means we need additional
> maintainers for that component.
> - Add maintainers for every component in Gluster (at least the ones
> which have incoming patches).
> - For every patch we submit, add a 'component(s)' label, evaluate whether
> gerrit can automatically add maintainers as reviewers, and have another
> label 'Maintainers ack' which needs to be present for any patch to be
> merged.

Excellent points.  Not much to add here, except that we also need a way to
deal with patches that cross many components (as many of yours and mine do).
If getting approval from one maintainer is a problem, getting approval from
several will be worse.  Maybe it's enough to say that approval by one of
those several maintainers is sufficient, and to rely on maintainers talking
to one another.
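
To make that concrete, here's a little Python sketch of the relaxed rule:
a patch needs a +1 from at least one maintainer of *any* component it
touches, rather than from all of them.  The component map and function
names are made up for illustration; they are not our actual MAINTAINERS
file or tooling.

# Sketch of the "one ack from any affected component's maintainers
# suffices" rule.  The COMPONENTS map below is a hypothetical stand-in
# for a parsed MAINTAINERS file.
import fnmatch

COMPONENTS = {
    "xlators/cluster/afr/*": ["pranith"],
    "xlators/cluster/dht/*": ["raghavendra"],
    "xlators/performance/*": ["poornima"],
}

def maintainers_for(changed_files):
    """Return the union of maintainers for every component a patch touches."""
    owners = set()
    for path in changed_files:
        for pattern, names in COMPONENTS.items():
            if fnmatch.fnmatch(path, pattern):
                owners.update(names)
    return owners

def has_maintainer_ack(changed_files, plus_ones):
    """True if at least one affected maintainer gave a +1 (the relaxed
    rule), rather than requiring an ack from every affected component."""
    owners = maintainers_for(changed_files)
    return not owners or bool(owners & set(plus_ones))

# Example: a cross-component patch acked by one of its maintainers.
print(has_maintainer_ack(
    ["xlators/cluster/afr/src/afr.c", "xlators/performance/io-cache/io.c"],
    plus_ones=["poornima"]))   # -> True

The same lookup could feed Gerrit's reviewer assignment, so the
'component(s)' label Poornima suggests gets translated into actual
reviewers automatically.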


Atin:
> How about having a "review marathon" once a week by every team? In the past
> this has worked well, and I don't see any reason why we can't spend 3-4
> hours in a meeting on a weekly basis to review incoming patches on the
> component that the team owns.

I love this idea.  If I may add to it, I suggest that such "marathons" are a
good way not only to reduce the backlog but also to teach people how to
review well.  Reviewing's a skill, learnable like any other.  In addition to
improving review quantity, getting reviews more focused on real bugs and
technical debt would be great.


Pranith (again):
> Everyone on the team started reviewing the patches and giving +1, and I
> review only after a +1.

In the past I've done this myself (as a project-level maintainer), so I
totally understand the motivation, but I'm still ambivalent about whether
it's a good idea.  On the one hand, it seems like projects bigger than
ours essentially work this way.  For example, how often does Linus review
something that hasn't already been reviewed by one of his lieutenants?
Not often, it seems.  On the other hand, reinforcing such hierarchies in
the review process is counter to our goal of breaking them down in a more
general sense.  I hope some day we can get to the point where people are
actively seeking out things to review, instead of actively filtering the
list they already have.


It's great to see such an energetic discussion.  I know it's already
the weekend for everyone I've just replied to, but I hope we can keep
the discussion going next week.
