[GEDI] [RFC v2 0/9] error: auto propagated local_err

Vladimir Sementsov-Ogievskiy vsementsov at virtuozzo.com
Tue Sep 24 15:49:55 UTC 2019


24.09.2019 18:46, Vladimir Sementsov-Ogievskiy wrote:
> 24.09.2019 18:44, Vladimir Sementsov-Ogievskiy wrote:
>> 24.09.2019 18:28, Eric Blake wrote:
>>> On 9/24/19 9:12 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>
>>>>>> 3. What to do with huge auto-generated commit 07? Should I split it
>>>>>> per-maintainer or per-subsystem, or leave it as-is?
>>>>>
>>>>> It's big. I'd split it into multiple patches (and reduce the cc - except
>>>>> for the cover letter, the rest of the patches can be limited to the
>>>>> actual maintainer/subsystem affected rather than everyone involved
>>>>> anywhere else in the series. With the current large cc, anyone that
>>>>> replies gets several mail bounces about "too many recipients").  It may
>>>>> be easier to split along directory boundaries than by maintainer
>>>>> boundaries.  Markus has applied large tree-wide Coccinelle cleanups
>>>>> before, maybe he has some advice.
>>>>
>>>>
>>>> If split by subsystem it would be 200+ patches:
>>>> git diff --name-only | while read f; do scripts/get_maintainer.pl -f $f --subsystem --no-rolestats 2>/dev/null | grep -v @ | head -1; done | sort | uniq | wc -l
>>>> 205
>>>>
>>>>
>>>> Trying the next-larger subsystem level instead:
>>>> git diff --name-only | while read f; do scripts/get_maintainer.pl -f $f --subsystem --no-rolestats 2>/dev/null | grep -v @ | tail -2 | head -1; done | sort | uniq | wc -l
>>>> 139
>>>>
>>>> Still too many... Or is it OK?
>>>
>>> Hmm - that becomes a tradeoff in length of the series (where individual
>>> patches may be reviewed fast, but where the overall process may be
>>> bogged down by sheer length), vs. length of individual emails (where the
>>> email itself is daunting, but as the review is mechanical and done by
>>> automation, it becomes a matter of spot-checking if we trust that the
>>> automation was done correctly).  You can probably group it in fewer
>>> patches, by joining smaller patches across a couple of subsystems.  It's
>>> an art form; there are probably several ways to do it that would work, and
>>> it comes down to a judgment call on how much work you want to do to try
>>> and reduce others' work in reviewing it.  Maybe even an off-hand split
>>> of gathering files until you reach about 500 or so lines per diff.  I
>>> wish I had easier advice on how to tackle this sort of project in the
>>> way that will get the fastest response time.

git diff | wc -l
48941

so, split at ~500 lines per patch, that would be roughly 98 patches.

It seems we'll never be able to review this thing :(

Any ideas?

Maybe a separate big patch which just adds ERRP_FUNCTION_BEGIN() to all errp functions?
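
For reference, a minimal sketch of what such a macro could look like. This is illustrative only, not the series' actual definition: it assumes glib's g_auto() cleanup machinery and error_propagate() from qapi/error.h, and the helper names are made up.

#include <glib.h>
#include "qapi/error.h"

typedef struct ErrorPropagator {
    Error *local_err;
    Error **errp;
} ErrorPropagator;

static inline void error_propagator_cleanup(ErrorPropagator *prop)
{
    /* Forward whatever accumulated in local_err to the caller's errp. */
    error_propagate(prop->errp, prop->local_err);
}

G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(ErrorPropagator, error_propagator_cleanup);

/*
 * At function entry: if the caller's errp cannot safely be
 * dereferenced (NULL, or the &error_fatal sentinel), shadow it with
 * a local Error * that is automatically propagated back to the
 * original errp on every return path.
 */
#define ERRP_FUNCTION_BEGIN()                                       \
    g_auto(ErrorPropagator) _auto_errp_prop = { .errp = errp };    \
    do {                                                            \
        if (!errp || errp == &error_fatal) {                       \
            errp = &_auto_errp_prop.local_err;                     \
        }                                                           \
    } while (0)

Such a patch would be huge but trivially mechanical: one line added at the top of each errp function, reviewable by checking the Coccinelle script rather than every hunk.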

>>>
>>>
>>>>>>    vl.c                                          |  13 +-
>>>>>>    scripts/coccinelle/auto-propagated-errp.cocci |  82 +++++++
>>>>>>    319 files changed, 2729 insertions(+), 4245 deletions(-)
>>>>>>    create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci
>>>>>
>>>>> The diffstat is huge, but promising.
>>>
>>> We also learned in reviews of 7/9 that the diffstat here is misleading;
>>> the number of insertions will definitely increase once the
>>> Coccinelle script is fixed to insert the macro in more functions, but
>>> hopefully it's still a net reduction in overall lines.
>>>
>>
>> No hope for us: with the fixed script I now see
>>
>> 919 files changed, 6425 insertions(+), 4234 deletions(-)
>>
> 
> Also, I had to add #include "qapi/error.h" to files where errp is only
> passed to called functions (or to functions which are not simple stubs):
> 
> # git diff | grep '+#include' | wc -l
> 253
> 
> 
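
To illustrate for reviewers what the conversion amounts to, here is a before/after sketch with hypothetical function names. Before, a function that needs to inspect the error goes through a manual local_err:

void frob(Error **errp)
{
    Error *local_err = NULL;

    frob_helper(&local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }
}

After, the macro makes errp itself safe to pass down and to dereference:

void frob(Error **errp)
{
    ERRP_FUNCTION_BEGIN();

    frob_helper(errp);
    if (*errp) {
        return;
    }
}

Hence the numbers above: one macro line (and often one #include "qapi/error.h") is added per converted file, while the local_err/error_propagate() boilerplate is deleted.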


-- 
Best regards,
Vladimir

