[Gluster-Maintainers] [Gluster-devel] Release 5: Branched and further dates
amukherj at redhat.com
Thu Oct 4 16:01:55 UTC 2018
On Thu, Oct 4, 2018 at 9:03 PM Shyam Ranganathan <srangana at redhat.com> wrote:
> On 09/13/2018 11:10 AM, Shyam Ranganathan wrote:
> > RC1 would be around 24th of Sep. with final release tagging around 1st
> > of Oct.
> RC1 now stands to be tagged tomorrow, and patches that are being
> targeted for a back port include,
> 1) https://review.gluster.org/c/glusterfs/+/21314 (snapshot volfile in
> mux cases)
> @RaBhat working on this.
> 2) Py3 corrections in master
> @Kotresh are all changes made to master backported to release-5 (may not
> be merged, but looking at if they are backported and ready for merge)?
> 3) Release notes review and updates with GD2 content pending
> @Kaushal/GD2 team can we get the updates as required?
> 4) This bug was filed when we released 4.0.
> The issue has not bitten us in 4.0 or in 4.1 (yet!) (i.e., the options
> go missing, and hence post-upgrade clients fail to mount). This is
> possibly the last chance to fix it.
> Glusterd and protocol maintainers, can you chime in, if this bug needs
> to be and can be fixed? (thanks to @anoopcs for pointing it out to me)
This is a bad bug to live with. OTOH, I do not have an immediate solution
in mind for how to ensure that (a) these options, when reintroduced, are
made no-ops, i.e. disallowed from being tuned (without dirty option-check
hacks in the volume set staging code). If we're to tag RC1 tomorrow, I
wouldn't want to take the risk of committing this change.
Can we instead add a note to our upgrade guide documenting that, if you're
upgrading to 4.1 or a higher version, you should disable these options
before the upgrade to mitigate this?
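As a rough sketch of what such an upgrade-guide note might tell users to run (the affected option names are not spelled out in this thread, so <VOLNAME> and <option-name> below are placeholders, not the actual options in question):

```shell
# Hypothetical pre-upgrade mitigation sketch: before upgrading to 4.1
# or later, reset (disable) each affected option on every volume.
# <VOLNAME> and <option-name> are placeholders for the real names.
gluster volume reset <VOLNAME> <option-name>

# Confirm the option no longer shows up as explicitly configured:
gluster volume get <VOLNAME> all | grep <option-name>
```

This uses only the standard `gluster volume reset` and `gluster volume get` CLI commands; the exact options to reset would need to come from the bug report itself.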
> The tracker bug does not have any other blockers against it, so I am
> assuming we are not tracking/waiting on anything other than the set above.
>  Tracker: https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-5.0
>  Potential upgrade bug: