[Gluster-devel] Reducing merge conflicts
Pranith Kumar Karampuri
pkarampu at redhat.com
Fri Jul 8 03:58:57 UTC 2016
On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy <jdarcy at redhat.com> wrote:
> > What gets measured gets managed.
>
> Exactly. Reviewing is part of everyone's job, but reviews aren't tracked
> in any way that matters. Contrast that with the *enormous* pressure most
> of us are under to get our own patches in, and it's pretty predictable
> what will happen. We need to change that calculation.
>
> > What I have seen at least is that it is easy to find people who sent
> > patches, how many patches someone sent in a month, etc. There is no
> > easy way to get these numbers for reviews. The 'Reviewed-by' tag in the
> > commit only includes the people who did +1/+2 on the final revision of
> > the patch, which is bad.
>
> That's a very good point. I think people who comment also get
> Reviewed-by: lines, but it doesn't matter because there's still a whole
> world of things completely outside of Gerrit. Reviews done by email won't
> get counted, nor will consultations in the hallway or on IRC. I have some
> ideas about who's most active in those ways. Some (such as yourself) show
> up in
> the Reviewed-by: statistics. Others do not. In terms of making sure
> people get all the credit they deserve, those things need to be counted
> too. However, in terms of *getting the review queue unstuck* I'm not so
> sure. What matters for that is the reviews that Gerrit uses to determine
> merge eligibility, so I think encouraging that specific kind of review
> still moves us in a positive direction.
>
In my experience, at least, it was only adding 'Reviewed-by' for the
people who gave +1/+2 on the final version of the patch.
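
To make that concrete, here is a minimal sketch (in Python, run inside a
glusterfs checkout; just an illustration, not tooling we actually have) of
what the commit-trailer numbers amount to:

    #!/usr/bin/env python3
    # Count 'Reviewed-by:' trailers in the last month of git history.
    # By construction this only credits the reviewers recorded on the
    # merged commit, i.e. whoever voted on the final revision --
    # reviewers of earlier revisions are invisible here.
    import collections
    import re
    import subprocess

    log = subprocess.check_output(
        ["git", "log", "--since=1 month ago"]
    ).decode("utf-8", "replace")

    counts = collections.Counter(
        m.group(1).strip()
        for m in re.finditer(r"^\s*Reviewed-by:\s*([^<\n]+)", log,
                             re.MULTILINE)
    )

    for name, n in counts.most_common():
        print("%4d  %s" % (n, name))
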
I agree about encouraging that specific kind of review. At the same time,
we need to make reviewing, and helping users in the community, as
important as sending patches in the eyes of everyone. Knowing these
statistics is very important for moving in the right direction. My main
problem with this is: everyone knows that reviews are important, so why
are they not happening? Is it really laziness? Are we sure there aren't
people on the team who are not sharing the burden, which makes the total
load too much for one or two people to handle? All of these things become
much easier to reason about if we have this data. Then I am sure we can
easily find how best to solve this issue.
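
For example, here is a rough sketch (again Python, against the standard
Gerrit REST API that review.gluster.org exposes; the option and label
names are stock Gerrit, and this is an illustration rather than tooling
we run) that credits everyone who commented on a recently merged change,
not just the final +1/+2 voters:

    #!/usr/bin/env python3
    # Count review activity per person across recently merged changes,
    # using Gerrit's /changes/ REST endpoint.
    import collections
    import json
    import urllib.request

    URL = ("https://review.gluster.org/changes/"
           "?q=status:merged+-age:30d"  # merged, touched in last 30 days
           "&o=MESSAGES&o=DETAILED_ACCOUNTS&n=100")

    raw = urllib.request.urlopen(URL).read().decode("utf-8")
    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI;
    # drop that first line before parsing.
    changes = json.loads(raw.split("\n", 1)[1])

    activity = collections.Counter()
    for change in changes:
        # One credit per change per person, however many comments they
        # left. CI bots such as Jenkins will show up too, so filter as
        # needed.
        commenters = {msg["author"].get("name", "unknown")
                      for msg in change.get("messages", [])
                      if "author" in msg}
        activity.update(commenters)

    for name, count in activity.most_common(20):
        print("%4d  %s" % (count, name))

Something like this would at least give us real numbers to talk about.
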
The same goes for spurious failures. Others in the world have faced these
problems too. I remember watching a video where someone shared (I think
it was at Google) that they started putting giant TVs in the hallways of
all their offices, and the people who didn't attend to
spurious-build-failure problems would show up on the screen for everyone
in the world to see. Apparently the person with the biggest picture (the
one who wasn't attending to any build failures at all, I guess) came to
these folks and asked how he could get his picture removed from the
screen, and it was solved in a day or two. We don't have to go to those
lengths, but we do need data to nudge people in the right direction.
--
Pranith