[Gluster-devel] [Gluster-Maintainers] Meeting date: 07/09/2018 (July 09th, 2018), 18:30 IST, 13:00 UTC, 09:00 EDT
Amar Tumballi
atumball at redhat.com
Tue Jul 10 04:46:41 UTC 2018
Meeting date: 07/09/2018 (July 09th, 2018), 18:30 IST, 13:00 UTC, 09:00 EDT
BJ Link <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#BJ-Link>
- Bridge: https://bluejeans.com/217609845
- Download: https://bluejeans.com/s/FC2Qi
Attendance <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Attendance>
- Sorry Note: Ravi, NigelBabu, ndevos
- Nithya, Xavi, Rafi, Susant, Raghavendra Bhat, Amar, Deepshika, Csaba, Atin, ppai, Sachi, Hari.
Agenda <https://hackmd.io/yTC-un5XT6KUB9V37LG6OQ?both#Agenda>
- Python3 migration:
- Some confusion existed about Py2 support in RHEL etc. Should we bother?
   - [Amar] For upstream, focusing on just one version is good and causes less confusion. Downstream projects can decide for themselves how to handle it when they ship it.
   - [ndevos] Maybe we need to decide which distributions/versions we want to offer to our users? Many run on CentOS-7 and there is no reasonable way to include Python3 there. I assume other commonly used stable distributions (Debian, Ubuntu?) are also lacking Python3.
   - [Atin] Are we not supporting python2 at all? What are the patches intended for?
   - [ppai] The code is mostly py2- and py3-compatible. The issue is with the #! line, which has to name a specific python2 or python3; Fedora mandates that it be one or the other. (A small sketch of this pattern follows after this agenda item.)
   - [Atin] My vote would be to go for both py2 and py3 compatibility and figure out how to handle the builds.
   - [Amar] We need guidelines on how to handle existing code vs. reviewing new code coming into the repository.
   - [ppai] Many companies are moving towards making new projects python3-only, while supporting py2 for existing projects.
   - [AI] Amar to respond to Nigel's email and plan to take this to completion soon.
   - If we go with only python3, what work is pending?
- What are the automated validation tests needed? Are we good there?
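
A minimal sketch of the #! concern discussed above (illustrative only, not a script from the glusterfs tree): the body stays py2/py3-neutral, and only the shebang pins the interpreter, which is the line Fedora requires to name python2 or python3 explicitly.

    #!/usr/bin/python3
    # Illustrative example: the body below runs unchanged under python2 and
    # python3, so packaging only has to rewrite this first line per target.
    from __future__ import print_function, unicode_literals

    import sys


    def main():
        # sys.version_info lets build or packaging logic branch if it must
        print("running under python %d.%d" % sys.version_info[:2])


    if __name__ == "__main__":
        main()
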
- Infra: update on where we are.
- Distributed tests
   - [Deepshika] Jobs are running; figuring out issues as we run the tests.
- Need to increase the disk storage.
   - Coding standard as a pre-commit hook (clang); see the sketch after this section.
   - In progress; need to finalize the config file.
   - AI: everyone gets one week to finalize the config.
- shellcheck?
   - [Sac] shellcheck is good enough! It does check for unused variables, etc.
- Anything else?
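
As a reference point for the pre-commit hook item above, a rough sketch of one way such a hook could work (an assumption for illustration, not the actual hook or config under review): compare each staged C file with clang-format's output, using the repository's .clang-format via -style=file, and reject the commit on any difference.

    #!/usr/bin/python3
    # Illustrative pre-commit hook sketch (not the glusterfs hook): fail the
    # commit if clang-format would change any staged .c/.h file.
    import subprocess
    import sys


    def staged_c_files():
        out = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"])
        return [f for f in out.decode().splitlines()
                if f.endswith((".c", ".h"))]


    def main():
        bad = []
        for path in staged_c_files():
            # -style=file picks up the repository's .clang-format config
            formatted = subprocess.check_output(
                ["clang-format", "-style=file", path])
            with open(path, "rb") as src:
                if src.read() != formatted:
                    bad.append(path)
        if bad:
            print("clang-format check failed for: " + ", ".join(bad))
            print("run 'clang-format -i <file>' and re-stage the changes")
            sys.exit(1)


    if __name__ == "__main__":
        main()

Installed as .git/hooks/pre-commit, git would run such a script before every commit.
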
- Failing regression:
   - tests/bugs/core/bug-1432542-mpx-restart-crash.t
      - Consistently failing now. Needs to be addressed ASAP, even if it takes disabling it.
      - It is an important feature, but not the default in the project yet; should it be blocking all the 10-15 pending patches now?
      - Mohit's/Xavi's patches seem to solve the issue; is that enough for all the pending patches?
      - [Atin] We should try to invest some time to figure out why the cleanup is taking more time.
      - [Xavi] Noticed that selinux (semanage) is taking more time, mostly because there are many volumes.
      - [Deepshika] I saw that it takes a lot of memory; is that a concern?
      - [Atin] It creates 120 volumes, so it is expected to take more than 1 GB of memory, easily.
- [Nithya] Do we need so many volumes for regression?
- [Atin] Looks like we can reduce it a bit.
   - tests/00-geo-rep/georep-basic-dr-rsync.t
      - The test itself passes, but generates a core. Doesn't happen every time.
      - Not a geo-rep issue. The crash is in the cleanup path, in the gf_msg() path.
   - Any other tests?
- Round Table
- <Everyone gets to talk/make a point>
- [Atin] - Timing works great for me.
   - [Nithya] - Anyone triaging upstream BZs?
      - Most likely not happening.
Regards,
Amar