[Gluster-devel] Reducing merge conflicts
Raghavendra Gowdappa
rgowdapp at redhat.com
Fri Jul 8 05:21:01 UTC 2016
----- Original Message -----
> From: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> To: "Jeff Darcy" <jdarcy at redhat.com>
> Cc: "Gluster Devel" <gluster-devel at gluster.org>
> Sent: Friday, July 8, 2016 9:28:57 AM
> Subject: Re: [Gluster-devel] Reducing merge conflicts
>
>
>
> On Fri, Jul 8, 2016 at 8:40 AM, Jeff Darcy < jdarcy at redhat.com > wrote:
>
>
> > What gets measured gets managed.
>
> Exactly. Reviewing is part of everyone's job, but reviews aren't tracked
> in any way that matters. Contrast that with the *enormous* pressure most
> of us are under to get our own patches in, and it's pretty predictable
> what will happen. We need to change that calculation.
+1.
>
>
> > What I have seen, at least, is that it is easy to find out who sent
> > patches, how many patches someone sent in a month, etc. There is no
> > easy way to get these numbers for reviews. The 'Reviewed-by' tag in a
> > commit only includes the people who did +1/+2 on the final revision of
> > the patch, which is bad.
>
> That's a very good point. I think people who comment also get
> Reviewed-by: lines, but it doesn't matter because there's still a whole
> world of things completely outside of Gerrit. Reviews done by email won't
> get counted, nor will consultations in the hallway or on IRC. I have some
> idea of who is most active in those ways. Some (such as yourself) show up in
> the Reviewed-by: statistics. Others do not. In terms of making sure
> people get all the credit they deserve, those things need to be counted
> too. However, in terms of *getting the review queue unstuck* I'm not so
> sure. What matters for that is the reviews that Gerrit uses to determine
> merge eligibility, so I think encouraging that specific kind of review
> still moves us in a positive direction.
>
> In my experience, at least, it was only adding 'Reviewed-by' for the
> people who gave +1/+2 on the final version of the patch.
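For whatever does land as a trailer in merged commits, a rough per-reviewer
count can be pulled straight from git history. A minimal sketch in python
(the one-month window is just an example, and it only sees final-revision
reviewers, which is exactly the limitation you describe):

    # Rough sketch: tally Reviewed-by trailers from git history.
    # Only counts reviewers recorded on merged commits, i.e. the
    # final-revision +1/+2 voters.
    import subprocess
    from collections import Counter

    # %b prints the commit body, where Gerrit records the trailers.
    body = subprocess.check_output(
        ["git", "log", "--since=1.month", "--format=%b"]).decode("utf-8")

    counts = Counter(line.split(":", 1)[1].strip()
                     for line in body.splitlines()
                     if line.startswith("Reviewed-by:"))

    for reviewer, n in counts.most_common():
        print("%4d  %s" % (n, reviewer))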
>
> I agree about encouraging that specific kind of review. At the same time,
> we need to make reviewing and helping users in the community as important
> as sending patches in everyone's eyes. It is very important to know these
> statistics to move in the right direction. My main problem with this is:
> everyone knows that reviews are important, so why are they not happening?
> Is it really laziness?
I think it's more about:
1. Priorities and incentives rather than laziness. Maintainers have visibility into the component and carry a default responsibility (not the only one), and an incentive (again, not the only one), to at least review the patch. As a developer, there are no such ready incentives. (One does gain more insight into the component, but that's not really a strong incentive, and expertise takes time to build up, given the other pressures we are all subjected to.)
2. Competency. The more you know about the code and design, the more issues you find and solutions you propose, and the satisfaction adds up as a feedback loop.
3. Accountability. When there is direct accountability, we tend to prioritize those things over others. Reviewing, as of now, is a _shared_ responsibility.
> Are we sure there aren't people on the team who are not sharing the
> burden, so that the total load becomes too much for one or two people to
> handle? All these things become very easy to reason about if we have this
> data.
Personally, I've found a genuine -1 to be more valuable than a +1. Since we are discussing measurement, how does one measure the issues that are prevented (through good design and thoughtful coding/review), as opposed to the issues that are _fixed_? Measuring -1s and -2s along with +1s and +2s could be a good place to start (though, as with many measurements, they may not reflect the underlying value accurately).
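If we want to count -1s and -2s too, Gerrit's REST interface already
exposes every Code-Review vote on a change, not just the ones that make it
into the merged commit. A rough sketch, assuming our Gerrit instance at
review.gluster.org (the project name and one-month window are just
examples):

    # Sketch: tally every Code-Review vote (+2/+1/-1/-2) per reviewer,
    # so -1s and -2s get counted alongside the +1s and +2s.
    import json
    from collections import Counter
    from urllib.request import urlopen  # urllib2.urlopen on python 2

    url = ("http://review.gluster.org/changes/"
           "?q=project:glusterfs+-age:1month"
           "&o=DETAILED_LABELS&o=DETAILED_ACCOUNTS")
    raw = urlopen(url).read().decode("utf-8")

    # Gerrit prefixes JSON responses with ")]}'" to defeat XSSI;
    # drop that first line before parsing.
    changes = json.loads(raw.split("\n", 1)[1])

    votes = Counter()
    for change in changes:
        review = change.get("labels", {}).get("Code-Review", {})
        for vote in review.get("all", []):
            if vote.get("value"):
                votes[(vote.get("name", "?"), vote["value"])] += 1

    for (name, value), n in votes.most_common():
        print("%4d  %+d  %s" % (n, value, name))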
> Then I am sure we can easily figure out how best to solve this issue. The
> same goes for spurious failures.
Maybe test cases are not subjected to as strong scrutiny as the accompanying code. If one is pressed for time and has to choose between the code and the test case, I think I'd prioritize the code over the test case (the same goes for sending patches).
> These are not problems unique to us either. I remember watching a video
> where someone (I think at Google) shared that they started putting giant
> TVs in the hallways of all their offices, and the people who didn't
> attend to spurious-build-failure problems would show up on the screen for
> everyone in the world to see. Apparently the person with the biggest
> picture (the one who was not attending to any build failures at all, I
> guess) came to these folks and asked how to get his picture removed from
> the screen, and it was solved in a day or two. We don't have to go to
> those lengths, but we do need data to nudge people in the right direction.
Though shaming might be an effective way to get things done, I would abstain from using it until we are left with no other options (not least out of fear that someone else might use the same strategy on me).
regards,
Raghavendra
>
>
> --
> Pranith
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel