[Gluster-Maintainers] glusterfs-3.12.7 released
Jiffin Tony Thottan
jthottan at redhat.com
Thu Mar 22 10:13:55 UTC 2018
On Thursday 22 March 2018 01:38 PM, Jiffin Tony Thottan wrote:
>
>
>
> On Thursday 22 March 2018 01:07 PM, Atin Mukherjee wrote:
>>
>>
>> On Thu, Mar 22, 2018 at 12:38 PM, Jiffin Tony Thottan
>> <jthottan at redhat.com> wrote:
>>
>>
>>
>> On Thursday 22 March 2018 12:29 PM, Jiffin Tony Thottan wrote:
>>>
>>>
>>>
>>> On Wednesday 21 March 2018 09:06 AM, Atin Mukherjee wrote:
>>>>
>>>>
>>>> On Wed, Mar 21, 2018 at 12:18 AM, Shyam Ranganathan
>>>> <srangana at redhat.com> wrote:
>>>>
>>>> On 03/20/2018 01:10 PM, Jiffin Thottan wrote:
>>>> > Hi Shyam,
>>>> >
>>>> > Actually, I planned to do the release on March 8th (I posted
>>>> > the release note that day), but it didn't happen.
>>>> > I didn't merge any patches after sending the release note (the
>>>> > blocker bug had a merge conflict, so I skipped it, AFAIR).
>>>> > I performed the 3.12.7 tagging yesterday and ran the build job
>>>> > today.
>>>> >
>>>> > Can you please provide a suggestion here? Do I need to perform
>>>> > a 3.12.7-1 for the blocker bug?
>>>>
>>>> I see that the bug is marked against the tracker, but it is
>>>> neither a regression nor an issue serious enough that it cannot
>>>> wait for the next minor release.
>>>>
>>>> I have copied Atin, who opened that issue, to this mail for his
>>>> comments. If he agrees, let's get this moving and get the fix
>>>> into the next minor release.
>>>>
>>>>
>>>> Even though it's not a regression (it's a day-1 bug with brick
>>>> multiplexing), the issue is severe enough that it should be
>>>> fixed *asap*. The scenario: in a multi-node cluster with brick
>>>> multiplexing enabled, if one node is down while some volume
>>>> operations are performed, then when that node comes back its
>>>> brick processes fail to come up.
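>>>>
>>>> A rough reproduction sketch of the above (the volume name and the
>>>> particular volume operation are placeholders, not from an actual
>>>> test run):
>>>>
>>>>     # enable brick multiplexing cluster-wide
>>>>     gluster volume set all cluster.brick-multiplex on
>>>>     # take one node down (reboot it, or stop glusterd and kill
>>>>     # its brick processes)
>>>>     # from a node that is still up, perform some volume operation
>>>>     gluster volume set testvol performance.readdir-ahead on
>>>>     # bring the downed node back, then check its bricks; with the
>>>>     # bug, they stay offline
>>>>     gluster volume status testvol
>>>>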
>>>
>>> The issue impacts only glusterd; does any other component need
>>> this fix?
>>
>> Sorry, I meant brick multiplexing, not glusterd.
>> --
>> Jiffin
>>
>>> If the issue was not reported by an upstream user or the
>>> community, I prefer to take it in the next release.
>>
>>
>> IMO, an issue should be assessed on its merit, not on where it
>> originates from. It is fair to ask "do we have users who have brick
>> multiplexing enabled?" and, based on that, decide whether to fix it
>> immediately or as part of the next update. But at the same time,
>> you're still exposing a known problem without flagging a warning
>> not to use brick multiplexing until this bug is fixed.
>
> I have not yet sent the announcement mail for the release, nor have
> I sent the release notes to https://docs.gluster.org/en. I can
> mention it there.
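>
> Something along these lines could go into the known issues section
> (only a sketch of the wording, not the final release-note text):
>
>     Known issue: with brick multiplexing (cluster.brick-multiplex)
>     enabled, brick processes may fail to come up on a node that was
>     down while volume operations were performed. Avoid enabling
>     brick multiplexing until
>     https://bugzilla.redhat.com/show_bug.cgi?id=1543708 is fixed.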
> --
> Jiffin
>
>
Can you please tell me whether this works for you?
--
Jiffin
>>
>>>
>>> Regards,
>>> Jiffin
>>>
>>>>
>>>> >
>>>> > --
>>>> > Regards,
>>>> > Jiffin
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > ----- Original Message -----
>>>> > From: "Shyam Ranganathan" <srangana at redhat.com>
>>>> > To: jenkins at build.gluster.org, packaging at gluster.org,
>>>> > maintainers at gluster.org
>>>> > Sent: Tuesday, March 20, 2018 9:06:57 PM
>>>> > Subject: Re: [Gluster-Maintainers] glusterfs-3.12.7 released
>>>> >
>>>> > On 03/20/2018 11:19 AM, jenkins at build.gluster.org wrote:
>>>> >> SRC: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.tar.gz
>>>> >> HASH: https://build.gluster.org/job/release-new/47/artifact/glusterfs-3.12.7.sha512sum
>>>> >>
>>>> >> This release is made from jenkins-release-47
>>>> >
>>>> > Jiffin, there are about 6 patches ready in the 3.12 queue that
>>>> > are not merged for this release. Why?
>>>> >
>>>> https://review.gluster.org/#/projects/glusterfs,dashboards/dashboard:3-12-dashboard
>>>> >
>>>> > The tracker bug for 3.12.7 calls out
>>>> > https://bugzilla.redhat.com/show_bug.cgi?id=1543708 as a
>>>> > blocker, and
>>>> > has a patch, which is not merged.
>>>> >
>>>> > Was this some test packaging job?
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >>
>>>> >>
>>>> >>
>>>> >
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
>