[Gluster-devel] Creating new options for multiple gluster versions

Vijay Bellur vbellur at redhat.com
Wed Feb 1 04:24:52 UTC 2017


On Tue, Jan 31, 2017 at 2:36 AM, Xavier Hernandez <xhernandez at datalab.es>
wrote:

> Hi Atin,
>
> On 31/01/17 05:45, Atin Mukherjee wrote:
>
>>
>>
>> On Mon, Jan 30, 2017 at 9:02 PM, Xavier Hernandez
>> <xhernandez at datalab.es> wrote:
>>
>>     Hi Atin,
>>
>>     On 30/01/17 15:25, Atin Mukherjee wrote:
>>
>>
>>
>>         On Mon, Jan 30, 2017 at 7:30 PM, Xavier Hernandez
>>         <xhernandez at datalab.es> wrote:
>>
>>             Hi,
>>
>>             I'm wondering how a new option needs to be created to be
>>             available to different versions of gluster.
>>
>>             When a new option is created for 3.7, for example, it needs
>>             to have a GD_OP_VERSION referencing the next 3.7 release.
>>             This ensures that there won't be any problem with previous
>>             versions.
>>
>>             However, what happens with 3.8?
>>
>>             3.8.0 is greater than any 3.7.x, but the new option won't be
>>             available until the next 3.8 release. How does this need to
>>             be handled?
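(For context, this is roughly how an option is gated by op-version. The
macro pattern below follows libglusterfs' globals.h; the option key and
table entry are made up for illustration, modeled on the entries in
glusterd-volume-set.c:)

    /* Op-versions are encoded as major * 10000 + minor * 100 + patch,
     * so the release introducing the option would get, e.g.: */
    #define GD_OP_VERSION_3_7_21  30721   /* 3.7.21 */

    /* Sketch of an entry in glusterd's volume option table; the key
     * "cluster.frob-timeout" is hypothetical. */
    {
            .key        = "cluster.frob-timeout",
            .voltype    = "cluster/distribute",
            .op_version = GD_OP_VERSION_3_7_21,
    },
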
>>
>>
>>         I'd discourage backporting any new volume options from mainline
>>         to stable release branches like 3.7 & 3.8. This creates a lot of
>>         backward compatibility issues w.r.t. clients. Any new option is
>>         actually an RFE and is supposed to be slated only for upcoming
>>         releases.
>>
>>
>>     Even if it's needed to solve an issue in all versions?
>>
>>     For example, a hardcoded timeout has proven insufficient in some
>>     configurations, so it needs to be increased, but increasing it would
>>     be too much for many of the environments where the current timeout
>>     has worked fine. It could even be insufficient for environments not
>>     yet tried, requiring a further increase.
>>
>>     With a new option, this can be solved case by case and only when
>>     needed.
>>
>>     How can this be solved?
>>
>>
>> Hi Xavi,
>>
>> Let me try to explain this in a bit more detail. A new option with an
>> op-version of, say, 30721 (considering 3.7.21 is the next update of 3.7,
>> which is the oldest active branch) is introduced in mainline and then
>> backported to the 3.7 branch (slated for 3.7.21) and the 3.8 branch
>> (slated for 3.8.9). Now, if a user forms a cluster of three nodes with
>> gluster versions 3.7.21, 3.8.9 & 3.8.8 respectively and tries to set
>> this option, the volume set will always fail because the option is not
>> defined in 3.8.8. Any client running a 3.8 version would also see a
>> compatibility issue here. Note also that the op-version number of the
>> new option has to be the same across different release branches.
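(Schematically, the failure comes from the op-version check glusterd runs
while staging the volume set. The sketch below uses illustrative names,
not the literal glusterd code:)

    /* Inside the staging path of "volume set"; option->op_version is
     * the option's minimum op-version, cluster_op_version the cluster's
     * effective op-version. Names are illustrative. */
    if (option->op_version > cluster_op_version) {
            /* At least one node (3.8.8 above) predates the option,
             * so the whole transaction is rejected. */
            gf_log ("glusterd", GF_LOG_ERROR,
                    "Option %s requires op-version %d, but the "
                    "cluster is at %d", option->key,
                    option->op_version, cluster_op_version);
            return -1;
    }
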
>>
>
> Thanks for the explanation. This confirms what I already thought. So the
> question is: now that 3.10 has already been branched, does that mean any
> new option won't be available to LTS users until 3.12 is released? I
> think this is not acceptable, especially for changes intended to fix an
> issue rather than introduce new features.
>
>
>> With the current form of op-version management, I don't think this can
>> be solved; the only way is to ask users to upgrade to the latest release.
>>
>
> As I said, someone using 3.10 LTS won't be able to upgrade until 3.12 is
> released. What would we say to them when we add a new option to 3.11?
>
> Maybe we should add a new kind of option that causes no failure when it
> isn't recognized: it is simply ignored. Many options do not cause any
> visible functional change, so they could be defined even if some nodes of
> the cluster don't recognize them (for example, performance tuning options
> or some timeout values).
>
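(One way to sketch Xavi's idea: the set request could tag such an option
as best-effort, so a node that doesn't recognize the key skips it instead
of failing the whole transaction. Everything below is hypothetical: the
dict key and option_is_known() are made up; dict_get_int32() is the
existing libglusterfs dict API:)

    /* Hypothetical handling on each node, inside the loop over the
     * options of a "volume set" request. */
    int32_t best_effort = 0;

    dict_get_int32 (req_dict, "option-is-best-effort", &best_effort);

    if (!option_is_known (key)) {
            if (best_effort)
                    continue;   /* silently ignore on this node */
            return -1;          /* current behaviour: reject the set */
    }
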


I like this idea. This will give us some flexibility in defining options.

-Vijay