[Gluster-Maintainers] [Gluster-devel] Proposal to change the version numbers of Gluster project
Amar Tumballi
atumball at redhat.com
Wed Mar 21 08:15:12 UTC 2018
With all this discussion, I see that there are no serious concerns about
the release numbers.
Should we go ahead and say that, starting with the release after 4.1, our
names would be Gluster 5.0.0, Gluster 6.0.0, or Gluster 2018.10, Gluster 2019.02?
Regards,
Amar
On Fri, Mar 16, 2018 at 5:58 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>
>
> On Fri, Mar 16, 2018 at 11:03 AM, Vijay Bellur <vbellur at redhat.com> wrote:
>
>>
>>
>> On Wed, Mar 14, 2018 at 9:48 PM, Atin Mukherjee <amukherj at redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Thu, Mar 15, 2018 at 9:45 AM, Vijay Bellur <vbellur at redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Mar 14, 2018 at 5:40 PM, Shyam Ranganathan <srangana at redhat.com>
>>>> wrote:
>>>>
>>>>> On 03/14/2018 07:04 PM, Joe Julian wrote:
>>>>> >
>>>>> >
>>>>> > On 03/14/2018 02:25 PM, Vijay Bellur wrote:
>>>>> >>
>>>>> >>
>>>>> >> On Tue, Mar 13, 2018 at 4:25 AM, Kaleb S. KEITHLEY
>>>>> >> <kkeithle at redhat.com> wrote:
>>>>> >>
>>>>> >> On 03/12/2018 02:32 PM, Shyam Ranganathan wrote:
>>>>> >> > On 03/12/2018 10:34 AM, Atin Mukherjee wrote:
>>>>> >> >> * After 4.1, we want to move to either continuous numbering
>>>>> >> >>   (like Fedora), or time based (like Ubuntu etc.) release
>>>>> >> >>   numbers. Which model we pick is not yet finalized. Happy to
>>>>> >> >>   hear opinions.
>>>>> >> >>
>>>>> >> >> Not sure how the time based release numbers would make more
>>>>> >> >> sense than the one which Fedora follows. But before I comment
>>>>> >> >> further on this I need to first get clarity on how the
>>>>> >> >> op-versions will be managed. I'm assuming that once we're at
>>>>> >> >> GlusterFS 4.1, the releases after that will be numbered
>>>>> >> >> GlusterFS5, GlusterFS6 ... So from that perspective, are we
>>>>> >> >> going to stick to our current op-version numbering scheme,
>>>>> >> >> where for GlusterFS5 the op-version will be 50000?
>>>>> >> >
>>>>> >> > Say, yes.
>>>>> >> >
>>>>> >> > The question is why tie the op-version to the release number?
>>>>> >> > That mental model needs to break, IMO.
>>>>> >> >
>>>>> >> > With current options like
>>>>> >> > https://docs.gluster.org/en/latest/Upgrade-Guide/op_version/ it is
>>>>> >> > easier to determine the op-version of the cluster and what it
>>>>> >> > should be, and hence this need not be tied to the gluster release
>>>>> >> > version.
>>>>> >> >
>>>>> >> > Thoughts?
>>>>> >>
>>>>> >> I'm okay with that, but...
>>>>> >>
>>>>> >> Just to play the Devil's Advocate, having an op-version that bears
>>>>> >> some resemblance to the _version_ number may make it easier to
>>>>> >> determine what the op-version ought to be.
>>>>> >>
>>>>> >> We aren't going to run out of numbers, so there's no reason to be
>>>>> >> "efficient" here. Let's try to make it easy. (Easy to not make a
>>>>> >> mistake.)
>>>>> >>
>>>>> >> My 2¢
>>>>> >>
>>>>> >>
>>>>> >> +1 to the overall release cadence change proposal and what Kaleb
>>>>> >> mentions here.
>>>>> >>
>>>>> >> Tying op-versions to release numbers seems like an easier approach
>>>>> >> than others, and one to which we are accustomed. What are the
>>>>> >> benefits of breaking this model?
>>>>> >>
>>>>> > There is a bit of confusion among the user base when a release
>>>>> > happens but the op-version doesn't have a commensurate bump. People
>>>>> > ask why they can't set the op-version to match the gluster release
>>>>> > version they have installed. If it were completely disconnected from
>>>>> > the release version, that might be a great enough mental disconnect
>>>>> > that the expectation could go away, which would actually cause less
>>>>> > confusion.
>>>>>
>>>>> The above is the reason I state it as well (breaking the mental model
>>>>> around this): why tie them together when they are not totally related?
>>>>> I also agree that the notion that they are tied together, and hence
>>>>> related, is present, but it may serve us better to break it.
>>>>>
>>>>>
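To make that mismatch concrete, here is a minimal sketch of what an operator
actually types around an upgrade. The commands are the standard op-version
ones from the op_version guide linked earlier; the numbers simply follow the
current encoding discussed above (40100 for 4.1, 50000 for GlusterFS5), and
that encoded integer, not the release string "4.1", is what the option takes:

  # check what the pool currently runs at and what it could be raised to
  gluster volume get all cluster.op-version
  gluster volume get all cluster.max-op-version

  # bump it: the value is the encoded op-version, not "4.1"
  gluster volume set all cluster.op-version 40100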
>>>>
>>>> I see your perspective. Another related reason for not introducing an
>>>> op-version bump in a new release would be that there are no incompatible
>>>> features introduced (in the new release). Hence it makes sense to preserve
>>>> the older op-version.
>>>>
>>>> To make everyone's lives simpler, would it be useful to introduce a
>>>> command that provides the max op-version to release number mapping? The
>>>> output of the command could look like:
>>>>
>>>> op-version X: 3.7.0 to 3.7.11
>>>> op-version Y: 3.7.12 to x.y.z
>>>>
>>>
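Half of that mapping is mechanical under the current scheme, since an
op-version encodes the release that introduced it; the "to x.y.z" end of
each range is the part that would need a maintained table. A rough sketch
of the mechanical half (the helper below is purely illustrative, not an
existing gluster command, and it stops making sense the moment op-versions
are decoupled from release numbers):

  # decode an op-version into the release that introduced it,
  # assuming the x*10000 + y*100 + z encoding used so far
  opversion_to_release() {
      local ov=$1
      printf '%d.%d.%d\n' $((ov / 10000)) $((ov % 10000 / 100)) $((ov % 100))
  }

  opversion_to_release 30712   # -> 3.7.12
  opversion_to_release 40100   # -> 4.1.0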
>>> We have already introduced an option called cluster.max-op-version: one
>>> can run a command like "gluster v get all cluster.max-op-version" to
>>> determine the highest op-version the cluster can be bumped up to. IMO,
>>> this saves users from having to look at the documentation to find out
>>> which op-version a given x.y.z release should be bumped up to. Isn't
>>> that sufficient for this requirement?
>>>
>>
>>
>> I think it is a more elegant solution than what I described. Do we have
>> a single interface to determine the current & max op-versions of all
>> members in the trusted storage pool? If not, it might be a useful
>> enhancement to add at some point in time.
>>
>
> We do have a way to get those details:
>
> root@a7f4b3e96fde:/home/glusterfs# gluster v get all all | grep op-version
> cluster.op-version                      40100
> cluster.max-op-version                  40100
>
>
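That single interface also makes the bump itself easy to script; a small
sketch, assuming the two-column option/value output shown above (the awk
field positions are the only assumption here):

  current=$(gluster v get all all | awk '$1 == "cluster.op-version" {print $2}')
  max=$(gluster v get all all | awk '$1 == "cluster.max-op-version" {print $2}')

  # raise the cluster op-version only when a higher one is supported
  if [ "$max" -gt "$current" ]; then
      gluster volume set all cluster.op-version "$max"
  fi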
>> If we don't hear many complaints about op-version mismatches from users,
>> I think the CLI you described could be sufficient for understanding the
>> cluster operating version.
>>
>>
>> Thanks,
>> Vijay
>>
>
>
>
>
--
Amar Tumballi (amarts)