[Gluster-devel] Reducing merge conflicts

Pranith Kumar Karampuri pkarampu at redhat.com
Thu Jul 14 17:26:19 UTC 2016


On Fri, Jul 8, 2016 at 7:27 PM, Jeff Darcy <jdarcy at redhat.com> wrote:

> (combining replies to multiple people)
>
> Pranith:
> > I agree about encouraging a specific kind of review. At the same time we
> > need to make reviewing and helping users in the community as important as
> > sending patches in the eyes of everyone. It is very important to know these
> > statistics to move in the right direction. My main problem with this is:
> > everyone knows that reviews are important, so why are they not happening?
> > Is it really laziness?
>
> "Laziness" was clearly a bad choice of words, for which I apologize.  I
> should have said "lack of diligence" or something to reflect that it's an
> *organizational* rather than personal problem.  We *as a group* have not
> been keeping up with the review workload.  Whatever the reasons are, to
> change the outcome we need to change behavior, and to change behavior we
> need to change the incentives.
>
>
> Raghavendra G:
> > Personally I've found a genuine -1 to be more valuable than a +1. Since
> > we are discussing measuring, how does one measure the issues that are
> > prevented (through good design and thoughtful coding/review), as opposed
> > to the issues that are _fixed_?
>
> Another excellent point.  It's easier to see the failures than the
> successes.  It's a bit like traffic accidents: everyone sees when you
> cause one, but not when you avoid one.  If a regression occurs, everyone
> can look back to see who the author and reviewers were.  If there's no
> regression ... what then?
> Pranith has suggested some mechanism to give credit/karma in cases where it
> can't be done automatically.  Meta-review (review of reviews) is another
> possibility.  I've seen it work in other contexts, but I'm not sure how to
> apply it here.
>
> > Measuring -1s and -2s along with +1s and +2s can be a good
> > place to start with (though as with many measurements, they may not
> > reflect the underlying value accurately).
>
> The danger here is that we'll incentivize giving a -1 for superficial
> reasons.  We don't need more patches blocked because a reviewer doesn't
> like a file/variable name, or wants to play "I know a better way" games.
> Unfortunately, it's hard to distinguish those from enforcing standards
> that really matter, or avoiding technical debt.  I guess that brings us
> back to manual overrides and/or meta-review.
>
>
> Poornima:
> > Below are a few things we can do to reduce our review backlog:
> > - Maintainers having no time to review is not a good enough reason to let
> > patches bitrot in review for months; it clearly means we need additional
> > maintainers for that component.
> > - Add maintainers for every component in Gluster (at least the ones
> > which have incoming patches).
> > - For every patch we submit, add a 'component(s)' label, evaluate whether
> > gerrit can automatically add maintainers as reviewers, and have another
> > label, 'Maintainers ack', which needs to be present for any patch to be
> > merged.
>
> Excellent points.  Not much to add here, except that we also need a way to
> deal with patches that cross many components (as many of yours and mine do).
> If getting approval from one maintainer is a problem, getting approval from
> several will be worse.  Maybe it's enough to say that approval by one of
> those several maintainers is sufficient, and to rely on maintainers talking
> to one another.
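For what it's worth, a minimal sketch of what such a blocking "Maintainers ack" label could look like in a Gerrit project.config (the label name and vote descriptions here are hypothetical, not an agreed convention):

```ini
# Hypothetical label: a patch cannot be submitted until some component
# maintainer has voted +1, and a maintainer -1 blocks submission.
[label "Maintainers-Ack"]
    function = MaxWithBlock
    defaultValue = 0
    value = -1 Maintainer has concerns
    value = 0 No score
    value = +1 Acked by a component maintainer
```

Combined with per-component reviewer auto-assignment, this would make the "one maintainer's approval is sufficient" rule mechanical rather than a matter of convention.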
>
>
> Atin:
> > How about having a "review marathon" once a week for every team? In the
> > past this has worked well, and I don't see any reason why we can't spend
> > 3-4 hours in a weekly meeting reviewing incoming patches on the component
> > that the team owns.
>
> I love this idea.  If I may add to it, I suggest that such "marathons" are
> a good way not only to reduce the backlog but also to teach people how to
> review well.  Reviewing's a skill, learnable like any other.  In addition
> to improving review quantity, getting reviews more focused on real bugs
> and technical debt would be great.
>
>
> Pranith (again):
> > Everyone in the team started reviewing the patches and giving +1 and I am
> > reviewing only after a +1.
>
> In the past I've done this myself (as a project-level maintainer), so I
> totally understand the motivation, but I'm still ambivalent about whether
> it's a good idea.  On the one hand, it seems like projects bigger than
> ours essentially work this way.  For example, how often does Linus review
> something that hasn't already been reviewed by one of his lieutenants?
> Not often, it seems.  On the other hand, reinforcing such hierarchies in
> the review process is counter to our goal of breaking them down in a more
> general sense.  I hope some day we can get to the point where people are
> actively seeking out things to review, instead of actively filtering the
> list they already have.
>

The feedback I got is, "it is not motivating to review patches that are
already merged by the maintainer." Do you suggest they should change that
behaviour in that case? We are still discussing how to change this policy,
as we missed some patches for 3.8.1 because of this rule. What will lead
to people actively seeking out things to review? Did you do anything in
the past that made this happen?
          In my personal experience, we as a community don't recognize
review contributions anywhere. Even the mails say it is the maintainers'
responsibility to review patches, and that these are the boring bits of
being a maintainer. Meanwhile, we have good metrics which show who sends
the most patches by month/year, and which rank everyone. So why shouldn't
a person aim instead to climb the chart for number of patches sent?
However many mails we send talking about this over and over, nothing will
change until reviews and talking to users get equal recognition. We
already picked a measuring stick
(http://projects.bitergia.com/redhat-glusterfs-dashboard/browser/). Now we
are wondering why reviews don't happen. We should fix the measuring sticks.

Let us give equal recognition for:
- patches sent
- patches reviewed (this one is missing)
- helping users on gluster-users
- helping users on #gluster/#gluster-dev
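As an illustration of how the missing "patches reviewed" number could be derived from data Gerrit already exposes: the records below follow the shape of Gerrit's REST API change output (the `labels` / `Code-Review` / `all` fields when queried with `o=DETAILED_LABELS`), but the sample data itself is made up.

```python
from collections import Counter

def count_reviews(changes):
    """Tally non-zero Code-Review votes per reviewer across a list of
    Gerrit change records (shaped like /changes/?o=DETAILED_LABELS output)."""
    counts = Counter()
    for change in changes:
        votes = change.get("labels", {}).get("Code-Review", {}).get("all", [])
        for vote in votes:
            if vote.get("value"):  # skip reviewers who were added but never voted
                counts[vote["name"]] += 1
    return counts

# Made-up sample records in the same shape as Gerrit's JSON output.
sample = [
    {"labels": {"Code-Review": {"all": [
        {"name": "alice", "value": 2},
        {"name": "bob", "value": 1},
        {"name": "carol", "value": 0},
    ]}}},
    {"labels": {"Code-Review": {"all": [
        {"name": "bob", "value": -1},
    ]}}},
]

print(count_reviews(sample))  # bob reviewed twice, alice once
```

Note that a -1 counts the same as a +1 here: the point is to recognize review effort, and as discussed above a genuine -1 is often the more valuable vote.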

Feel free to add anything I might have missed. Maybe new
ideas/designs/big refactors?

Let people do whichever of these they like most, and let us recognize them
for all their contributions. Let us celebrate their work in each monthly
newsletter.

The case I want to make for reviews is this: when you get really good at
reviewing, you see the program run on all the machines, with all the
threads, with all the possible race conditions, memory leaks, feature
bugs, crashes, and so on, in your head. So no, reviewing is not boring.
There may be the occasional patch that is boring to review, though. At
least that has been my personal experience.


>
> It's great to see such an energetic discussion.  I know it's already
> the weekend for everyone I've just replied to, but I hope we can keep
> the discussion going next week.
>
>


-- 
Pranith

