[Gluster-devel] op-version issues in the 3.5.1 beta release

Kaushal M kshlmster at gmail.com
Tue Jun 3 11:16:17 UTC 2014


I've sent out a draft/rfc patch on the master branch
http://review.gluster.org/7963.

~kaushal

On Tue, Jun 3, 2014 at 3:20 PM, Niels de Vos <ndevos at redhat.com> wrote:
> On Tue, Jun 03, 2014 at 11:01:25AM +0530, Kaushal M wrote:
>> Niels,
>> This approach will work well when the cluster is uniform, i.e. all
>> nodes run the same (major) version. It could lead to problems in
>> mixed clusters when using volume set. Volume set checks the
>> op-version of the option being set and rejects the operation when
>> the op-versions don't match. So, if a user were to run a mixed
>> cluster of gluster-3.5.1 and gluster-3.6.0, they wouldn't be able to
>> set server.manage-gids, as its op-version would differ between the
>> two releases.
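>>
>> Roughly, the check involved looks like this (a simplified sketch in
>> C, not the literal glusterd code; the names are illustrative):
>>
>>   /* During staging of a 'volume set', each peer looks up the
>>    * op-version of the option in its own option table and compares
>>    * it with the op-version the cluster is operating at. */
>>   if (opt->op_version > conf->op_version) {
>>           /* this peer considers the option unsupported */
>>           ret = -1;
>>           goto out;
>>   }
>>
>> With 3.5.1 and 3.6.0 assigning different op-versions to the same
>> option, the peers disagree and the operation fails on one side.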
>
> Thanks Kaushal, that is exactly my expectation.
>
>> But I don't expect anyone to run such a mixed cluster permanently.
>> It would mostly happen during upgrades, during which users shouldn't
>> be doing volume operations anyway.
>
> Yes, I agree. And it should be easy enough to diagnose 'volume set'
> issues when glusterfs versions are different.
>
> Would you like me to send a patch for the master branch that makes the
> changes mentioned below, or is that something you can do soon? I'll
> wait for that change to be merged before giving 3.5.1 the updated
> op-version for server.manage-gids.
>
> Thanks,
> Niels
>
>>
>> ~kaushal
>>
>> On Mon, Jun 2, 2014 at 8:28 PM, Niels de Vos <ndevos at redhat.com> wrote:
>> > Hi,
>> >
>> > today on IRC we had a discussion about the op-version for the current
>> > 3.5.1 beta. This beta includes a backport that introduces a new volume
>> > option (server.manage-gids) and needs an increased op-version to
>> > prevent issues on systems that do not know about this new option.
>> >
>> > Currently, the op-version in 3.5.1 is (seems to be) hard-coded to '3':
>> >
>> >   libglusterfs/src/globals.h:#define GD_OP_VERSION_MAX  3
>> >
>> > Now, the new option requires a glusterd with op-version=4. Setting
>> > the option worked fine, and glusterd.info got updated too.
>> > Unfortunately, a restart of glusterd then fails, because the
>> > op-version read from the configuration is greater than
>> > GD_OP_VERSION_MAX.
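>> >
>> > Roughly what happens on restart (a simplified sketch, not the
>> > literal code):
>> >
>> >   /* after the 'volume set', glusterd.info contains a line like
>> >    *   operating-version=4
>> >    * on startup, glusterd reads that value back and validates it */
>> >   if (stored_op_version > GD_OP_VERSION_MAX) {
>> >           /* 4 > 3 on release-3.5, so glusterd refuses to start */
>> >           ret = -1;
>> >           goto out;
>> >   }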
>> >
>> > Simply increasing GD_OP_VERSION_MAX to 4 is not suitable either,
>> > because op-version=4 would make other systems assume that the 3.5.1
>> > release has all the op-version=4 features (incorrect, because the
>> > upcoming 3.6 has op-version=4).
>> >
>> > I see one way to fix this issue that allows stable branches to
>> > include backports of volume options and the like, without conflicting
>> > with the development branch or newer versions:
>> >
>> > 1. define an op-version as multi-digit value, with gaps for stable
>> >    releases
>> > 2. stable releases may only include backports of volume options that are
>> >    in the development branch and newer versions
>> > 3. stable maintainers should take extra care when new volume options
>> >    are being backported
>> >
>> > The idea is the following (a sketch of the resulting defines follows
>> > the list):
>> >
>> > - update the hard-coded op-version in libglusterfs/src/globals.h in the
>> >   master branch to 360 (based on the 3.6 release for easier matching)
>> > - update any options that have op-version >= 4 to 360 (master branch)
>> > - update the op-version in libglusterfs/src/globals.h in the release-3.5
>> >   branch to 351
>> > - update the op-version of server.manage-gids option in 3.5.1 to 351
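>> >
>> > In libglusterfs/src/globals.h terms, that would look roughly like
>> > this (illustrative only, not an actual patch):
>> >
>> >   /* master branch (future 3.6): */
>> >   #define GD_OP_VERSION_MAX  360   /* op-version of 3.6.0 */
>> >
>> >   /* release-3.5 branch: */
>> >   #define GD_OP_VERSION_MAX  351   /* op-version of 3.5.1 */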
>> >
>> >
>> > The only issue would be that the current 3.6 packages in testing have
>> > a lower op-version than the new 3.5.1 packages. I hope it is not
>> > a common practice to have systems installed with packages from the
>> > master branch in the same environment as 3.5.1 servers.
>> >
>> > Any ideas, suggestions or thoughts?
>> >
>> > If this cannot be solved in a similarly easy way, I will be forced to
>> > revert the 3.5.1 server.manage-gids option. Users are expecting it to
>> > be present, so that deployments with many (ca. 93+) secondary groups
>> > have permissions working as expected.
>> >
>> > Thanks,
>> > Niels
>> > _______________________________________________
>> > Gluster-devel mailing list
>> > Gluster-devel at gluster.org
>> > http://supercolony.gluster.org/mailman/listinfo/gluster-devel

