[Gluster-devel] Brainstorming: stacks of patches
Jeff Darcy
jdarcy at redhat.com
Wed Apr 15 15:34:28 UTC 2015
In general, it's good to break up a large project into a series of
smaller, self-contained patches. These are easier to review, and in
particular easier to *space out* for review, than some giant
7000-line blob that has to be reviewed all at once, leaving both
authors and reviewers feeling harried.
Unfortunately, these stacks of patches can be very expensive when
they hit Gerrit/Jenkins. Every time a developer sitting on top of
such a stack pushes to Gerrit, a new Jenkins job is created for every
patch in the stack, down to where it rejoins gerrit/master. Now,
imagine that the very first patch (lowest in the stack) is
incompatible with something else that was merged recently, so they're
all going to fail. Even with the recent changes that make each job
fail more quickly, waiting for each one to fail individually is still
rather expensive.
One way to reduce this expense would be to keep track of which
patches belong to the same stack/group, and auto-abort all related
jobs if one fails; there's a sketch of how that might be scripted
just after the list below. Further optimizations might include:
* Resource limiting: allow a given stack to consume at most N
  executors at once, to keep a single developer's push from crowding
  out everyone else on the whole project.
* Staggered start: don't start all jobs at once, but wait N minutes
  between them to maximize the chance that most of them will *never
  even have to start* if the first one is going to fail.
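Here's that auto-abort sketch -- very rough and untested, written
against the plain Jenkins REST API. It assumes the Gerrit Trigger
plugin hands each build a GERRIT_TOPIC parameter we can use to group
a stack (keying off the relation chain would work just as well); the
Jenkins URL and job name below are only placeholders, and any
authentication Jenkins needs is left out entirely:

    #!/usr/bin/env python
    # Untested sketch: when one build in a patch stack fails, abort
    # every other in-flight build that came from the same Gerrit
    # topic. Assumes the Gerrit Trigger plugin injects GERRIT_TOPIC
    # as a build parameter; JENKINS_URL and JOB are placeholders.

    import os
    import requests

    JENKINS_URL = os.environ.get("JENKINS_URL",
                                 "http://jenkins.example.org")
    JOB = "regression-test"               # placeholder job name

    def build_topic(build):
        # Dig the GERRIT_TOPIC parameter out of the build's actions.
        for action in build.get("actions", []):
            for param in action.get("parameters", []):
                if param.get("name") == "GERRIT_TOPIC":
                    return param.get("value")
        return None

    def abort_siblings(failed_topic):
        # Stop every other running build whose GERRIT_TOPIC matches.
        tree = "builds[number,building,actions[parameters[name,value]]]"
        url = "%s/job/%s/api/json?tree=%s" % (JENKINS_URL, JOB, tree)
        for build in requests.get(url).json().get("builds", []):
            if str(build.get("number")) == os.environ.get("BUILD_NUMBER"):
                continue                  # skip the failed build itself
            if build.get("building") and build_topic(build) == failed_topic:
                requests.post("%s/job/%s/%d/stop"
                              % (JENKINS_URL, JOB, build["number"]))

    if __name__ == "__main__":
        # Meant to run as a post-build step of a failed job, with
        # GERRIT_TOPIC already in the environment.
        topic = os.environ.get("GERRIT_TOPIC")
        if topic:
            abort_siblings(topic)

Presumably the same topic-based grouping could also drive the
resource limit: refuse to start (or requeue) a build when N siblings
from the same stack are already occupying executors.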
I don't *exactly* know how to do any of this, but I think the basic form
is just a matter of tweaking the Jenkins scriptlets we use to run jobs.
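For instance, the staggered start might be nothing more than a small
prologue at the top of the regression scriptlet. Another rough
sketch, leaning on the same GERRIT_TOPIC assumption and placeholder
names, with the delay pulled out of thin air:

    #!/usr/bin/env python
    # Untested sketch of a staggered start, run before the real
    # regression script. If other builds for the same Gerrit topic
    # are already running, sleep a while before doing any real work,
    # so a failure low in the stack can abort us (see the earlier
    # sketch) before we've done anything expensive. All names and
    # the delay are placeholders.

    import os
    import time
    import requests

    JENKINS_URL = os.environ.get("JENKINS_URL",
                                 "http://jenkins.example.org")
    JOB = "regression-test"               # placeholder job name
    STAGGER_MINUTES = 5                   # the "N minutes" from above

    def build_topic(build):
        # Same helper as in the auto-abort sketch.
        for action in build.get("actions", []):
            for param in action.get("parameters", []):
                if param.get("name") == "GERRIT_TOPIC":
                    return param.get("value")
        return None

    def running_siblings(topic):
        # Count the other in-flight builds carrying the same topic.
        tree = "builds[number,building,actions[parameters[name,value]]]"
        url = "%s/job/%s/api/json?tree=%s" % (JENKINS_URL, JOB, tree)
        count = 0
        for build in requests.get(url).json().get("builds", []):
            if str(build.get("number")) == os.environ.get("BUILD_NUMBER"):
                continue                  # don't count ourselves
            if build.get("building") and build_topic(build) == topic:
                count += 1
        return count

    if __name__ == "__main__":
        topic = os.environ.get("GERRIT_TOPIC")
        if topic:
            time.sleep(running_siblings(topic) * STAGGER_MINUTES * 60)

The obvious wart is that this holds an executor while it sleeps;
doing the delay on the trigger/queue side would be nicer, but I don't
know offhand how to do that.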
Does anyone else have any thoughts on this?