[Gluster-Maintainers] Lock down period merge process

Atin Mukherjee amukherj at redhat.com
Thu Sep 27 14:05:13 UTC 2018

On Thu, 27 Sep 2018 at 18:27, Pranith Kumar Karampuri <pkarampu at redhat.com> wrote:

> On Thu, Sep 27, 2018 at 5:27 PM Atin Mukherjee <amukherj at redhat.com>
> wrote:
>> tests/bugs/<component Y>/xxx.t failing doesn’t necessarily mean there’s a
>> bug in component Y.
> I agree.
>> The bug could be anywhere until we root cause the problem.
> Someone needs to step in to find the root cause. I agree that for a
> component like glusterd, bugs in other components can easily lead to
> failures. How do we make sure that someone takes a look at it?
>> Now does this mean we block commit rights for component Y till we have
>> the root cause?
> It was a way of making it someone's priority. If you have another way to
> make it someone's priority that is better than this, please suggest and we
> can have a discussion around it and agree on it :-).

This is what I can think of:

1. Component peers/maintainers do a first triage of the test failure: do the
initial debugging and either (a) point to the component which needs further
debugging, or (b) seek help on the gluster-devel ML for additional insight
to identify the problem and narrow it down to a component.
2. If it’s (1 a), then we already know the component and the owner. If it’s
(1 b), then at this juncture it’s all maintainers’ responsibility to ensure
the email is well understood and, based on the available details, that
ownership is picked up by the respective maintainer. Multiple maintainers
might need to be involved, which is why I see this as a group effort rather
than an individual one.

>> That doesn’t make much sense, right? This is one of the reasons that in
>> such a case we need to work as a group, figure out the problem, and fix
>> it; until then, locking down the entire repo for further commits looks
>> like the better option (IMHO).
> Let us dig deeper into what happens when we work as a group: in general,
> one person will take the lead and get help. Is there a way to find that
> person without locking down the whole of master? If there is, we may never
> have to get to a place where we lock down master completely. We may not
> even have to lock down components. Suggestions are welcome.
>> On Thu, 27 Sep 2018 at 14:04, Nigel Babu <nigelb at redhat.com> wrote:
>>>> We know maintainers of the components which are leading to repeated
>>>> failures in that component, and we just need to do the same thing we did to
>>>> remove commit access for the maintainer of the component instead of all of
>>>> the people. So in that sense it is not good faith and can be enforced.
>>> Pranith, I believe the difference of opinion is because you're looking
>>> at this problem in terms of "who" rather than "what". We do not care about
>>> *who* broke master. Removing commit access from a component owner doesn't
>>> stop someone else from landing a patch that creates a failure in the same
>>> component or even a different component. We cannot stop patches from
>>> landing because they touch a specific component. And even if we could, our
>>> components are not entirely independent of each other. There could still be
>>> failures. This is a common scenario, and it happened the last time we had to
>>> close master. Let me further re-emphasize our goals:
>>> * When master is broken, every team member's energy needs to be focused
>>> on getting master back to green. Who broke the build isn't as much of a
>>> concern as *the build is broken*. This is not a situation in which to punish
>>> specific people.
>>> * If we allow other commits to land, we run the risk of someone else
>>> breaking master with a different patch. Now we have two failures to debug
>>> and fix.
>>> _______________________________________________
>>> maintainers mailing list
>>> maintainers at gluster.org
>>> https://lists.gluster.org/mailman/listinfo/maintainers
>> --
>> - Atin (atinm)
> --
> Pranith
- Atin (atinm)
