[Gluster-Maintainers] RFC for change in release model

Aravinda avishwan at redhat.com
Mon Apr 18 06:11:26 UTC 2016


regards
Aravinda

On 04/15/2016 09:19 PM, Niels de Vos wrote:
> On Fri, Apr 15, 2016 at 07:35:52PM +0530, Aravinda wrote:
>> regards
>> Aravinda
>>
>> On 04/15/2016 03:16 PM, Pranith Kumar Karampuri wrote:
>>> hi,
>>>         Yesterday Vijay, Niels and I discussed(#gluster-dev) that there
>>> has been a tension/conflict between keeping a release stable vs maturing
>>> new features where users have been giving feedback.
>>> At the moment if the feature misses a release, for us to get feedback from
>>> users it generally takes 7-8 months because it needs to get into next
>>> release. So we want to shorten it by following shorter minor release
>>> cycles (3months) with some releases termed as long term support(LTS) like
>>> we have in Ubuntu world(At least that is where I first heard it). So the
>>> proposal is to have 3.8 as long-term support release, 3.9 to be released
>>> in September which is not a long-term support release. As soon as 4.0 gets
>>> released 3.9 based releases will stop as all those features will move to
>>> 4.0. branch. This will make sure that small features won't be backported
>>> to already released branches. Also we can point enthusiastic users to try
>>> the new features out in the next release, which is not too far off.
>>>
>>>        From a user's perspective, people who are not looking for any
>>> new features should stay on releases based on a long-term-support
>>> branch. People who are interested in a new feature can start testing
>>> with the release where the feature first becomes available, whether or
>>> not it is a long-term-support release, and give us feedback; if they
>>> like the stability, they can even put that release in production. Once
>>> the feature is in an LTS branch, they should stick to LTS branches
>>> from then on.
>>>
>> Awesome. +1 for the idea. It also matches the Debian releases (Stable,
>> Testing and Unstable).
> I guess this would start to map to:
>
>    Stable - 3.8
>    Testing - 3.9
>    Unstable - master
>
>>> Please feel free to let us know what you guys think about this. The
>>> main problem we need to solve is preventing new features from landing
>>> in the middle of stable release cycles, while at the same time not
>>> making users wait longer to give us feedback.
>> I think we need a feature gate/feature flag for every feature. All code
>> changes for that feature should be behind this gate/flag. Enabling or
>> disabling a feature should then be done by toggling it in a feature list.
> Could you explain this a little more? I am not sure I understand what
> your suggestion is. Maybe you can give an example?
As per my understanding, we don't have an easy way to disable many
features (snapshot, glusterfind, etc.) at compile time. If
feature-related patches are interdependent, then backporting other
bug-fix patches is very difficult.

New patches can't be applied automatically without first applying the
feature patches that modified the common code. If we mandate a feature
gate for every code change of a feature, we need not worry about
breaking anything and can avoid manual backporting completely.

Example of a feature gate ($SRC/cli/src/cli-cmd-volume.c):

#if (SYNCDAEMON_COMPILE)
        {"volume "GEOREP" [<VOLNAME>] [<SLAVE-URL>] {create [[ssh-port n] [[no-verify]|[push-pem]]] [force]"
         "|start [force]|stop [force]|pause [force]|resume [force]|config|status [detail]|delete} [options...]",
         cli_cmd_volume_gsync_set_cbk,
         "Geo-sync operations",
         cli_cmd_check_gsync_exists_cbk},
#endif

The geo-rep feature can be enabled with ./configure
--enable-georeplication. If the feature is disabled, then no code
related to that feature should be compiled. Each release can have its
own enable flags.
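
A minimal sketch of what such a per-release feature list could look
like; the file name features.h and the macro NEW_FEATURE_COMPILE are
invented here for illustration, they are not existing glusterfs code:

/* features.h on the 3.8 (LTS) branch */
#define SYNCDAEMON_COMPILE   1  /* geo-replication: mature, stays enabled */
#define NEW_FEATURE_COMPILE  0  /* code may be present, but compiled out */

/* features.h on the 3.9 (testing) branch */
#define SYNCDAEMON_COMPILE   1
#define NEW_FEATURE_COMPILE  1  /* enabled so users can give early feedback */

The branches would then differ only in this one file, while the feature
code itself can stay identical everywhere.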

We already have these flags for many features, but we should mandate
this for every feature patch. Then we need not worry even if we
backport all the patches (including feature patches) to a release
branch, since those features will not be enabled on release branches.
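
To illustrate why the mandate helps backporting, here is a hedged
sketch (the function names are hypothetical, not real glusterfs
functions, and NEW_FEATURE_COMPILE is the invented flag from the
sketch above): a bug fix in the common part of this function applies
cleanly on any branch, because the feature-specific hunk is isolated
behind the gate and simply compiles away wherever the flag is 0.

/* hypothetical helpers, declared only to keep the sketch
 * self-contained */
int validate_volname (const char *volname);
int new_feature_hook (const char *volname);

int
cli_handle_volume_cmd (const char *volname)
{
        /* common code: bug fixes here backport cleanly */
        int ret = validate_volname (volname);
        if (ret)
                return ret;

#if (NEW_FEATURE_COMPILE)
        /* feature-only hunk: present in the source on every branch,
         * but compiled out wherever the flag is disabled */
        ret = new_feature_hook (volname);
#endif

        return ret;
}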


>
> Thanks,
> Niels


