[Gluster-Maintainers] glusterfs-3.12.7 released
Atin Mukherjee
amukherj at redhat.com
Thu Mar 22 07:37:16 UTC 2018
On Thu, Mar 22, 2018 at 12:38 PM, Jiffin Tony Thottan <jthottan at redhat.com>
wrote:
>
>
> On Thursday 22 March 2018 12:29 PM, Jiffin Tony Thottan wrote:
>
>
>
> On Wednesday 21 March 2018 09:06 AM, Atin Mukherjee wrote:
>
>
>
> On Wed, Mar 21, 2018 at 12:18 AM, Shyam Ranganathan <srangana at redhat.com>
> wrote:
>
>> On 03/20/2018 01:10 PM, Jiffin Thottan wrote:
>> > Hi Shyam,
>> >
>> > Actually I planned to do the release on March 8th (I posted the release
>> > note that day), but it didn't happen.
>> > I didn't merge any patches after sending the release note (the blocker bug
>> > had a merge-conflict issue, so I skipped it, AFAIR).
>> > I performed the 3.12.7 tagging yesterday and ran the build job today.
>> >
>> > Can you please provide a suggestion here? Do I need to perform a
>> 3.12.7-1 release for the blocker bug?
>>
>> I see that the bug is marked against the tracker, but is not a
>> regression or an issue that is serious enough that it cannot wait for
>> the next minor release.
>>
>> Copied Atin to the mail, who opened that issue for his comments. If he
>> agrees, let's get this moving and get the fix into the next minor
>> release.
>>
>>
> Even though it's not a regression (it's a day-1 bug with brick multiplexing),
> the issue is severe enough that it should be fixed *asap*. In this
> scenario, if you're running a multi-node cluster with brick multiplexing
> enabled, one node is down, and some volume operations are performed while it
> is down, then when the node comes back, its brick processes fail to come up.
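
[Editor's note: a rough CLI sketch of the scenario described above, for
readers unfamiliar with brick multiplexing. The volume name `myvol` and the
specific volume option changed are placeholders; only
`cluster.brick-multiplex` is the option actually under discussion, and it is
off by default in this release line.]

```shell
# Enable brick multiplexing cluster-wide (a real cluster-scoped option,
# off by default in 3.12):
gluster volume set all cluster.brick-multiplex on

# While one node in the cluster is down, perform some volume operation,
# e.g. changing an option ('myvol' is a hypothetical volume):
gluster volume set myvol performance.cache-size 256MB

# After the down node rejoins, inspect its bricks; on affected builds
# the brick processes on that node fail to start:
gluster volume status myvol
```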
>
>
> Is the issue's impact limited to glusterd, or does any other component need
> this fix?
>
>
> Sorry, I meant brick multiplexing, not glusterd.
> --
> Jiffin
>
> If the issue was not reported by an upstream user/community member, I would
> prefer to take it in the next release.
>
>
IMO, assessment of an issue should be done based on its merit, not based on
where it originates. It might be a fair question to ask "do we have users who
have brick multiplexing enabled" and, based on that, decide whether to fix it
immediately or as part of the next update; but at the same time, you're still
exposing a known problem without flagging a warning not to use brick
multiplexing until this bug is fixed.
>
> Regards,
> Jiffin
>
>
> >
>> > --
>> > Regards,
>> > Jiffin
>> >
>> >
>> >
>> >
>> > ----- Original Message -----
>> > From: "Shyam Ranganathan" <srangana at redhat.com>
>> > To: jenkins at build.gluster.org, packaging at gluster.org,
>> maintainers at gluster.org
>> > Sent: Tuesday, March 20, 2018 9:06:57 PM
>> > Subject: Re: [Gluster-Maintainers] glusterfs-3.12.7 released
>> >
>> > On 03/20/2018 11:19 AM, jenkins at build.gluster.org wrote:
>> >> SRC: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.tar.gz
>> >> HASH: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.sha512sum
>> >>
>> >> This release is made off jenkins-release-47
>> >
>> > Jiffin, there are about 6 patches ready in the 3.12 queue that are not
>> > merged for this release. Why?
>> > https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>> >
>> > The tracker bug for 3.12.7 calls out
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1543708 as a blocker, and
>> > has a patch, which is not merged.
>> >
>> > Was this some test packaging job?
>> >
>> >
>> >
>> >
>> >>
>> >>
>> >>
>> >> _______________________________________________
>> >> maintainers mailing list
>> >> maintainers at gluster.org
>> >> http://lists.gluster.org/mailman/listinfo/maintainers
>> >>
>> >
>>
>
>
>
>
>
>
>
>
>