[Gluster-devel] IMP: Release 3.10: RC1 Pending bugs (Need fixes by 21st Feb)
Atin Mukherjee
amukherj at redhat.com
Mon Feb 20 03:54:25 UTC 2017
On Mon, Feb 20, 2017 at 8:25 AM, Shyam <srangana at redhat.com> wrote:
> Hi,
>
> RC1 tagging is *tentatively* scheduled for 21st Feb, 2017
>
> The intention is that RC1 becomes the release, hence we would like to
> chase down all blocker bugs [1] and get them fixed before RC1 is tagged.
>
> This mail requests information on the various bugs, to understand whether
> it is possible to get them fixed by the 21st.
>
> Bugs pending for RC1 tagging:
> 1) Bug 1415226 - packaging: python/python2(/python3) cleanup
> - Status: Review awaiting verification and a backport
> - master bug: https://bugzilla.redhat.com/show_bug.cgi?id=1414902
> - Review: https://review.gluster.org/#/c/16649/
>     - *Niels*, I was not able to verify this over the weekend; there is a
> *chance* I can do so tomorrow. Do you have alternate plans to get this
> verified?
>
> 2) Bug 1421590 - Bricks take up new ports upon volume restart after
> add-brick op with brick mux enabled
> - Status: *Atin/Samikshan/Jeff*, any update on this?
>     - Can we document this as a known issue? What would be the way to
> get the volume to use the older ports (a glusterd restart?)?
>
I think we can live with it for some time: with brick multiplexing on, port
consumption is very low and there is no serious harm in having a few stale
ports after a volume restart. Having said that, Samikshan is working on this
issue and we expect to fix it in master soon.
>
> 3) Bug 1421956 - Disperse: Fallback to pre-compiled code execution when
> dynamic code generation fails
> - Status: Awaiting review closure
> - *Pranith/Ashish*, request one of you to close the review on this
> one, so that Xavi can backport this to 3.10
> - master bug: https://bugzilla.redhat.com/show_bug.cgi?id=1421955
> - Review: https://review.gluster.org/16614
>
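For anyone not following the review closely, the idea named in the bug
title is to try the dynamically generated routine first and quietly fall
back to the pre-compiled one when generation fails. A minimal,
self-contained sketch of that pattern is below; every name in it
(ec_code_fn, ec_gen_dynamic, ec_precompiled, ec_select_fn) is made up for
illustration and it is not the code under review:

/* Sketch of "fall back to pre-compiled code when dynamic code generation
 * fails".  All names here are hypothetical; this shows the shape of the
 * technique, not the actual disperse patch. */
#include <stddef.h>
#include <stdio.h>

typedef void (*ec_code_fn)(const void *in, void *out, size_t size);

/* Portable, always-available implementation built at compile time. */
static void
ec_precompiled(const void *in, void *out, size_t size)
{
    (void)in;
    (void)out;
    (void)size; /* placeholder body */
}

/* Stand-in for the runtime code generator; returns NULL when generation
 * is not possible (e.g. an executable mapping cannot be created). */
static ec_code_fn
ec_gen_dynamic(void)
{
    return NULL; /* pretend generation failed */
}

/* Pick the generated routine when available, otherwise fall back instead
 * of failing the operation. */
static ec_code_fn
ec_select_fn(void)
{
    ec_code_fn fn = ec_gen_dynamic();

    if (fn == NULL)
        fn = ec_precompiled;

    return fn;
}

int
main(void)
{
    printf("using the %s implementation\n",
           ec_select_fn() == ec_precompiled ? "pre-compiled" : "generated");
    return 0;
}

The net effect is that a code-generation failure degrades to a slower but
correct path instead of failing the fop.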
> 4) Bug 1422769 - brick process crashes when glusterd is restarted
> - Status: As per comment #6, the test case that Jeff developed for
> this is not reporting a crash
>     - *Atin*, should we defer this from the blocker list for 3.10? Can you
> take a look at the test case as well?
>     - Test case: https://review.gluster.org/#/c/16651/
>
> 5) Bug 1422781 - Transport endpoint not connected error seen on client
> when glusterd is restarted
> - Status: Repro not clean across setups, still debugging the problem
> - *Atin*, we may need someone from your team to take this up and
> narrow this down to a fix or determine if this is really a blocker
>
I'd consider this one a blocker. The surprising thing here is that I was able
to hit it on the first go (on release-3.10 HEAD at that time) when the bug was
filed, but yesterday I retried it several times with no luck. I am not sure
whether a patch that went in since then fixed it, or whether I was *extremely*
lucky to hit the race on the first attempt. I'd check with Karthick (the
reporter of the BZ) to see if he can still reproduce it and then report back.
> 6) Bug 1423385 - Crash in index xlator because of race in inode_ctx_set
> and inode_ref
> - Status: Review posted for master, awaiting review closure
> - *Du/Pranith*, please close the review of the above
> - Review: https://review.gluster.org/16622
> - master bug: https://bugzilla.redhat.com/show_bug.cgi?id=1423373
>     - Related note: I was facing the same crash on the client stack as
> mentioned in bug #1423065; after cherry-picking this fix and rerunning my
> tests, the crash no longer reproduces (as Ravi and Poornima had suggested).
>
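For anyone who has not looked at the review, the general shape of this class
of bug is an unsynchronized check-then-set on per-inode context racing with a
concurrent ref. Below is a minimal pthreads sketch of the racy pattern and a
locked variant; the names (fake_inode_t, ctx_set_racy, ctx_set_locked,
inode_ref_locked) are made up and this is not the actual index-xlator change:

/* Sketch of an unsynchronized check-then-set on per-inode state racing
 * with the ref path, and a locked variant.  fake_inode_t and these
 * helpers are made up; this is not the index xlator or libglusterfs code. */
#include <pthread.h>
#include <stdint.h>

typedef struct {
    pthread_mutex_t lock;
    uint64_t        ctx;      /* per-xlator context value */
    int             refcount;
} fake_inode_t;

/* Racy: two threads can both observe ctx == 0 and both initialize it,
 * while a concurrent ref path reads a stale or half-initialized value. */
static void
ctx_set_racy(fake_inode_t *inode, uint64_t value)
{
    if (inode->ctx == 0)
        inode->ctx = value;
}

/* Safer: the check and the update are done atomically under the same
 * lock the ref path takes. */
static void
ctx_set_locked(fake_inode_t *inode, uint64_t value)
{
    pthread_mutex_lock(&inode->lock);
    if (inode->ctx == 0)
        inode->ctx = value;
    pthread_mutex_unlock(&inode->lock);
}

static void
inode_ref_locked(fake_inode_t *inode)
{
    pthread_mutex_lock(&inode->lock);
    inode->refcount++;
    pthread_mutex_unlock(&inode->lock);
}

int
main(void)
{
    fake_inode_t inode = { .lock = PTHREAD_MUTEX_INITIALIZER };

    ctx_set_racy(&inode, 7);    /* included only for contrast */
    ctx_set_locked(&inode, 42);
    inode_ref_locked(&inode);
    return 0;
}

The sketch is only meant to show why the check and the update cannot be done
safely outside a lock shared with the ref path; the real fix may well take a
different route.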
> Thanks,
> Shyam
>
> [1] 3.10 tracker bug: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.10.0
>
> [2] Dynamic tracker list: https://bugzilla.redhat.com/showdependencytree.cgi?id=glusterfs-3.10.0&maxdepth=1&hide_resolved=1
>
--
~ Atin (atinm)