[Gluster-Maintainers] Maintainer's Meeting Minutes: Meeting date: 11/15/2017 (Nov 15th)
Amar Tumballi
atumball at redhat.com
Fri Nov 17 05:15:41 UTC 2017
BJ Link
- Bridge: https://bluejeans.com/205933580
- Download: https://bluejeans.com/s/eFJy
Attendance
- [Sorry Note] misc, Atin (Conflicting meeting), Csaba
- Amar, Nigel, Nithya, Xavi, Ravi, Mohit Agrawal, Shyam, Deepshika, Kaushal, Niels (late, BlueJeans--)
Agenda
- Action items from last meeting:
- [nigelb] Metrics on first-time contributors?
- [nigelb] Cregit run?
- Both to be tracked as Bugzilla bugs; the request queue is full at the moment.
- Re-visiting closing old reviews [nigelb]
- Using Gerrit to do the initial closing is a bad idea.
- We have a lot of open reviews, and each abandon triggers an email to everyone involved.
- This means the Gerrit server will get greylisted by all the mail providers, as happened with the staging server recently.
- We have a Jenkins job that will close a few old reviews every day, currently thinking of 25 per day (a rough sketch of such a job follows this item). Once we catch up, we can either continue with the bot or use Gerrit to do this.
- Does the plan sound fine?
- Yes
- Review: https://review.gluster.org/#/c/18734/
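A minimal sketch of what such a cleanup job could look like, assuming the Gerrit REST API on review.gluster.org, a hypothetical bot account, and an arbitrary one-year staleness cut-off; the real Jenkins job may look quite different:

  import json
  import requests

  GERRIT = "https://review.gluster.org"
  AUTH = ("stale-review-bot", "http-password")       # hypothetical credentials
  QUERY = "project:glusterfs status:open age:1y"     # assumed staleness cut-off
  BATCH = 25                                         # per-day cap discussed above

  def gerrit_get(path, **params):
      # Gerrit prefixes JSON responses with )]}' to prevent XSSI; strip it.
      resp = requests.get(GERRIT + path, params=params, auth=AUTH)
      resp.raise_for_status()
      return json.loads(resp.text.split("\n", 1)[1])

  # Open changes older than the cut-off, limited to the daily batch size.
  for change in gerrit_get("/a/changes/", q=QUERY, n=BATCH):
      requests.post(
          GERRIT + "/a/changes/%s/abandon" % change["id"],
          json={"message": "Abandoning stale review; please restore and rebase "
                           "if you intend to keep working on it."},
          auth=AUTH,
      ).raise_for_status()

Each abandon still sends review emails, so the daily cap is what keeps the mail volume (and the greylisting risk) under control.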
- Release sanity [nigelb]
- We run regression-test-burn-in for master.
- We don’t for release branches. Seems like a no-brainer to do this.
- However, this will add load on the regression machines: one extra run per active release branch per day.
- This occupies machines; should we run such jobs only once a day so that regression machines stay free?
- Aren't we already moving towards reducing regression time, so that this is not a problem?
- Need more regression machines to pull this off
- Move the job to the Eastern TZ (mid-day or later), as that window is relatively free of regression jobs.
- More patches in one job means finding out which one caused a failure is harder.
- This can possibly be handled with git bisect and similar tools.
- Options to mitigate
- We will trigger a regression run only if there are changes since the last run (see the sketch after this item).
- Shall we move regression-test-burn-in to nightly?
- We (release-owners) need the ability to trigger this job, so that a release can be made (relatively) deterministically [Shyam]
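A minimal sketch of the "only run if there are changes" guard, assuming the job keeps the last-built SHA in a state file and reads the branch tip from the GitHub mirror; the path, branch name, and trigger helper are hypothetical:

  import subprocess
  from pathlib import Path

  BRANCH = "release-3.13"                               # hypothetical branch
  REPO = "https://github.com/gluster/glusterfs.git"
  STAMP = Path("/var/lib/burn-in") / (BRANCH + ".sha")  # assumed state file

  # Ask the remote for the current tip of the branch without cloning.
  head = subprocess.check_output(
      ["git", "ls-remote", REPO, "refs/heads/" + BRANCH], text=True).split()[0]

  last = STAMP.read_text().strip() if STAMP.exists() else ""
  if head == last:
      print("%s: no new commits since the last burn-in, skipping" % BRANCH)
  else:
      print("%s: new HEAD %s, triggering regression-test-burn-in" % (BRANCH, head))
      # trigger_jenkins_job("regression-test-burn-in", BRANCH)  # hypothetical helper
      STAMP.parent.mkdir(parents=True, exist_ok=True)
      STAMP.write_text(head + "\n")

With the same guard on every active release branch, a night with no merges costs no regression machine time.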
- Jeff's email on pressing 'Submit' if everything is OK?
- What is stopping you from doing it?
- Assumption is that the maintainer of the component merges the patch
- Not focussing on the patch backlog due to other constraints
- Xavi is taking good initiative here, need more of the same
- We need a catch-all process for when patches are not moving, rather than always relying on the maintainers.
- Master dashboard:
https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:master-dashboard
- Components on the fringe are often ignored and need attention (for example, hooks).
- Regression Suggestion:
- Should the author wait for at least one sanity review before running regression?
- The current regression suite takes a long time, so people do not run it on local machines.
- Sometimes reviewers only look at patches after a regression score is posted.
- People trigger regression before smoke finishes, which is wasteful when smoke fails!
- Should we pipeline this, i.e. run regression only if smoke passes? This may lead to some trouble with voting and needs a bit of experimentation (a sketch of such a smoke gate follows this item).
- This could be a problem in terms of people running random code on our test infrastructure.
- For release branches it is better to get regression votes before review, as release-owners may need to review and merge within the window they work on the branch/release [Shyam]
- Decision: Not yet! (wait for regression jobs to run faster)
- This will save some cycles; I have seen authors mark Verified +1 immediately and then end up with a -1.
- It makes sense if the patch gets reviewed right after smoke (or even without the smoke +1).
- [Atin] I disagree with asking the author to wait for a review before marking the patch Verified +1. To me, it is the author's responsibility to ensure the basic regression passes; that is how maintainers gain confidence in the sanity of the patch. As a GlusterD & CLI maintainer, most of the time (in 90-95% of cases) I only review patches that have already passed regression.
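A minimal sketch of the smoke gate discussed above, assuming the Gerrit label on review.gluster.org is called "Smoke" and using the public change-detail REST endpoint; the change number is just the one cited earlier in these minutes:

  import json
  import requests

  GERRIT = "https://review.gluster.org"

  def smoke_passed(change_number):
      # /changes/<id>/detail includes the current label votes; Gerrit prefixes
      # the JSON body with )]}' which has to be stripped first.
      resp = requests.get("%s/changes/%s/detail" % (GERRIT, change_number))
      resp.raise_for_status()
      detail = json.loads(resp.text.split("\n", 1)[1])
      smoke = detail.get("labels", {}).get("Smoke", {})   # assumed label name
      return "approved" in smoke                          # max positive vote set

  if smoke_passed("18734"):
      print("Smoke is green; OK to trigger regression")
  else:
      print("Smoke has not passed yet; hold the regression run")

The same check could sit in front of the regression trigger so that machines are never tied up by a patch that smoke would reject anyway.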
- 'experimental' branch rebase
- Major conflicts with posix changes :-/
- Shyam to blame :/
- AI: Shyam to sync with Amar and get this moving (Shyam)
- The other option changes that GD2 depends on have been sent to master with --author set to the original authors.
- 4.0 Schedule [Shyam]
- Slated for branching mid Dec, are we ready?
- GD2 still has a lot to be done
- Do we have burn down charts?
- AI: Do this weekly (Shyam)
- Example: http://radekstepan.com/burnchart/#!/gluster/glusterfs/3
- Round Table:
- [Ravi] Is patch https://review.gluster.org/#/c/17673/ acceptable for 3.13?
- AI: Shyam to get back on this (by Nov-16-2017)
Decisions
- Jenkins job to retire older reviews: Ack to do this, in batches
- regression-test-burn-in for release branches: Ack
- regression-test-burn-in on demand for release branches: (I think this was an Ack from the Infra folks)
- regression-test-burn-in for master moved to nightly (close to mid-day Eastern TZ): Ack
--
Amar Tumballi (amarts)