[Gluster-Maintainers] Meeting minutes: 09/20/2017 (Sept 20th)
Amar Tumballi
atumball at redhat.com
Wed Sep 20 18:16:14 UTC 2017
Meeting date: 09/20/2017 (Sept 20th)
BJ Link
- Bridge: https://bluejeans.com/205933580
- Download/Watch: https://bluejeans.com/s/mgtCo
Attendance
- [Sorry Note] mscherer, kshlm, atinm, amye
- Amar, Rafi, Nigel, Milind, Nithya, Kaleb, Shyam, Xavi, Ravi, raghug,
vbellur, Kotresh
Agenda
-
AI from previous week
- [Nigel] Changes to ‘Submit Type’ - DONE on 2017-09-20 02:20 UTC
- [Amye] Email sent out with hotel information for Gluster Summit; if you
didn't get it, ping me or ask around. – amye
-
Note: Archive old meeting notes to wiki so the hackmd is lighter.
- [Amar] Can we archive it on our website somewhere, so we know where to
search for old meeting minutes?
-
What are we doing with regression failure emails?
(netbsd/netbsd-brick-mux?)
- You should all be getting emails about failures on maintainers@
- [Atin] The brick-mux regression failure was on CentOS. The 'volume status
clients' command is broken; the root cause is available. We have reverted
the new test introduced in volume-status.t, and the regression is back to
normal.
- [Atin] The NetBSD regression has multiple test failures. Please look into
them if they fall under your components:
- tests/basic/distribute/rebal-all-nodes-migrate.t
- tests/features/delay-gen.t
- tests/bitrot/bug-1373520.t (generated core)
- Let’s have a rotating group of people who look at failures.
- [Shyam] Do they look at only release branches or master? Preferably only
release branches, because master is overwhelming.
- [Nigel] We should probably have one person look at all the branches, and
especially master; a lot of our test runs are triggered periodically
against master. This person's job would be to chase down the failures,
find the right component, and get the fix pushed as soon as possible for
centos-regression, netbsd-regression, regression with multiplex, and the
glusto tests (a sketch for reproducing a single test failure locally
follows this item).
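For reference, a minimal sketch of reproducing one of the above failures
locally (assumptions: a built glusterfs source tree and root access on a
disposable test machine; script names and configure flags are illustrative,
not prescriptive):

    # build and install the tree once
    ./autogen.sh && ./configure --enable-debug && make -j4 && make install
    # run a single failing test instead of the whole suite
    ./run-tests.sh tests/features/delay-gen.t
    # or drive one test through the TAP harness directly
    prove -vf tests/bitrot/bug-1373520.t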
-
Release roadmap
- Clarifications on 3.13 (STM), 4.0 (STM), 4.1 (LTM)
<https://www.gluster.org/release-schedule/>
- Current calendar is 2 months between 3.13 and 4.0.
- [Amar] No features proposed for 3.13 yet.
- [Shyam] 3.13 may be a sneak peek into 4.0, as features for 4.0 land
around the time of branching.
- We should plan to take GFProxy into 3.13. If we can get it in early, we
can stabilize messaging around GFProxy. Poornima's latest patch passes
regression.
- Poornima has updated the GitHub issue with the latest status.
- Amar is considering error codes for 3.13 as well, since there are 2
months. At least an early version, given nothing changes from the user's
point of view. Not committing, given the large code change and review
effort.
- Rafi: Halo can be taken in. Amar: Halo is already in. Rafi: Looking at
the patches that FB has in their branch specific to Halo replication. This
is already in and can be highlighted as a feature in 3.13 (it already
landed in 3.11).
- Kotresh: It would also be useful to have a use case defined for Halo
replication vs geo-replication. Vijay: When Halo is available, we will
need to update our documentation for the different types of replication we
provide.
- 4.0 is slated for January 2018: early Jan, but worst case late Jan.
Features are already planned. We have to discuss how to get them in early
and what support those developers will need. During the summit, we need to
do an off-hand check with maintainers about what they need. Possibly
Thursday night?
- Expectations from maintainers:
- Scope clarity: the 4.0 milestone on GitHub has 50 features listed. When
you mark an issue for the 4.0 milestone, send an email with a link to the
issue. There are 50-ish features and we're 5 months away from the release.
Can we ship them all? It would be nice for maintainers to look at their
components to see what can happen. If we can't ship them, then please
remove them from the milestone, so we're clear about what can make it.
- Status of features: Good to have a status update on big features so we
know what's going on.
- What help do you/others need: As we get nearer to the release, Shyam
picks reviews that are connected to major features and chases down reviews
for them. Please help with this process, and if you're being chased, help
with prioritizing reviews as well.
-
Improving Quality of releases
- Baseline and do better on coverage reports: As we add more code, we want
our coverage to improve. We'd like maintainers to look at their components
and improve their coverage, or at the very least not decrease it. We want
to target this for 4.0, as a prerequisite for the release (a sketch for
generating a local coverage report follows this item).
- Same as above for Coverity
<https://download.gluster.org/pub/gluster/glusterfs/static-analysis/master/glusterfs-coverity/>:
Let's set a baseline and bring down the number of issues. We have 848
Coverity failures at the moment; how do we set targets to bring that down?
We need to set baselines at release time and assign ownership for
components which need to improve. The release team will send out reminders
about this focus and provide call-outs as a release gate.
- New features need a plan for regression tests and coverage: We're adding
a bunch of new features, and tests cannot be an afterthought. When these
features land, we need healthy test coverage. We should *plan* for higher
coverage of new features as they land, and at least before branching.
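For reference, a minimal sketch of producing a local line-coverage baseline
for a component (assumptions: gcc and lcov are installed; the '--coverage'
flags are generic gcc instrumentation, not a documented glusterfs configure
option):

    ./autogen.sh
    ./configure CFLAGS="-g -O0 --coverage" LDFLAGS="--coverage"
    make -j4 && make install
    # run the regression tests that exercise the component of interest here
    lcov --capture --directory . --output-file coverage.info
    genhtml coverage.info --output-directory coverage-html  # per-file report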
-
Additional release owners for 3.13 and 4.0
- [Amar] Can help in follow ups
- Anyone interested can contact Shyam
-
How are we tracking changes in the 3.8-fb branch? Should maintainers see
what's useful? Or should we follow up with FB on when it would be sent to
'master'?
- [Ravi] What is Facebook's strategy for contributing patches to master?
- FB has completed upstreaming its patches to release-3.8-fb (about 98%).
They're keen to get these patches into master, since they're not keen to
carry them in their fork. They intend to do an accelerated forward port to
master around December. At that point, we will need maintainers to review
and accept their patches.
- Around 3.10 we called this out: there are patches in the 3.8-fb branch.
If you could monitor it and port patches into master, that would be good.
These are fixes that would be good to have for us too. Retain the
Change-Id so that we can track that the patches are ported (see the
cherry-pick sketch after this item).
- If the fix is for the same issue but we take a different approach, what
do we do? Like every project, let's make the change and invite them to
comment. For some of the Halo fixes, the patch description doesn't help in
understanding what the patch is trying to fix. Email them or add them to
the patches; if they don't respond, we'll talk to FB during the
fortnightly sync-ups.
- 4.0 branching is around early December, and we will be busy around the
same time. FB only has time around early December; we cannot change that.
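For reference, a minimal sketch of porting a patch from release-3.8-fb
while retaining its Change-Id (assumptions: the usual Gerrit workflow;
branch names and the SHA placeholder below are illustrative):

    git fetch origin release-3.8-fb
    git checkout -b port-fb-fix origin/master
    # -x records the original SHA; the original commit message, including
    # its Change-Id: trailer, is carried over so the port can be tracked
    git cherry-pick -x <sha-from-release-3.8-fb>
    # resolve conflicts if any, keep the Change-Id line intact, then:
    git push origin HEAD:refs/for/master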
-
Gluster 4.0
- GD2 -
- Need maintainers to help with options changes.
- Currently you can create a volume, start a volume, and mount a client
with GD2.
- The framework for making it generic with volumegen and volumeset isn't
complete yet. That will land later this month, and that's where they need
maintainer help. GD2 will not maintain its own options table; all
translators which provide options need changes, with new flags and default
values.
- After the volumegen patch gets in, we'll move to geo-rep and other
plugins. Aravinda has sent a patch to change the way geo-rep configs are
written. Working towards getting the snapshot and quota teams to talk to
the glusterd2 team so they can plan for these changes.
- Protocol changes.
- [Amar] From this month onwards, a few members of the team will spend at
least one day per week on Gluster 4.0 activities, in the BLR office.
- Mostly working on protocol changes next week.
- Monitoring
- initial patches sent for review
- Will be broken into multiple patches, will need help.
- GFProxy -
- updated status available at
https://github.com/gluster/glusterfs/issues/242
- Error codes -
- Initial changes are in; they need a lot of reviews
- https://github.com/gluster/glusterfs/issues/280
- Part of the hackathon at Bangalore in the coming weeks.
- RIO
- [Shyam] Update mail in progress; should hit the lists by this week
-
Round Table
- [Nithya] Upstream gluster documentation work, need help from all
- [Vijay / Shyam / Amar] Very critical, please extend help.
- [Shyam] Release retrospective for 3.12, please do talk about things
that can be improved
- [Vijay] Welcome Xavier as a full-time Gluster contributor. Glad to have
you onboard in the Red Hat Gluster team.
- [Vijay] FOSDEM and DevConf discussions are on; we will hear more about
this in the future.
--
Amar Tumballi (amarts)