[Gluster-Maintainers] Upgrade issue when new mem type is added in libglusterfs
Atin Mukherjee
amukherj at redhat.com
Tue Jul 12 06:38:23 UTC 2016
On Tue, Jul 12, 2016 at 12:05 PM, Aravinda <avishwan at redhat.com> wrote:
>
> regards
> Aravinda
>
> On 07/12/2016 11:51 AM, Atin Mukherjee wrote:
>
>
>
> On Tue, Jul 12, 2016 at 11:40 AM, Aravinda <avishwan at redhat.com> wrote:
>
>> How about running the same upgrade steps again after %post
>> geo-replication? The upgrade steps would run twice (the first run
>> fails), but it solves these issues.
>>
>
> I'd rather not do that if we can solve the problem in the first upgrade
> attempt itself, which looks feasible.
>
> I think we can't safely handle this in the first call unless we skip
> checking/calling gsyncd.
>
That's what I proposed earlier: we'd need to call configure_syncdaemon ()
conditionally. Kotresh already has a patch [1] for this.
[1] http://review.gluster.org/#/c/14898
>
>
>
>>
>> regards
>> Aravinda
>>
>> On 07/11/2016 01:56 PM, Niels de Vos wrote:
>>
>> On Mon, Jul 11, 2016 at 12:56:24PM +0530, Kaushal M wrote:
>>
>> On Sat, Jul 9, 2016 at 10:02 PM, Atin Mukherjee <amukherj at redhat.com> wrote:
>>
>> ...
>>
>>
>> GlusterD uses the cluster op-version when generating volfiles to decide
>> which new features/xlators to insert into the volfile graph.
>> This is done to make sure that the homogeneity of the volfiles is
>> preserved across the cluster.
>> This behaviour makes running GlusterD in upgrade mode after a package
>> upgrade essentially a noop.
>> The cluster op-version doesn't change automatically when packages are
>> upgraded, so the volfiles regenerated in the post-upgrade section are
>> basically the same as before.
>> (If something is getting added into the volfiles after this, it is
>> incorrect, and is something I still need to check.)
>>
>> The correct time to regenerate the volfiles is after all members of
>> the cluster have been upgraded and the cluster op-version has been
>> bumped.
>> (Bumping the op-version doesn't regenerate anything; it is just an
>> indication that the cluster is now ready to use new features.)
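Once every node runs the new packages, the bump itself is a single volume
set on the special "all" volume; a minimal sketch (the number here is only
an example and must match the new release's op-version):

    gluster volume set all cluster.op-version 30800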
>>
>> We don't have a direct way to get volfiles regenerated on all members
>> with a single command yet. We can implement such a command with
>> relative ease.
>> For now, volfiles can be regenerated by making use of the `volume set`
>> command, by setting a `user.upgrade` option on a volume.
>> Options in the `user.` namespace are passed on to hook scripts and not
>> added into any volfiles, but setting such an option on a volume causes
>> GlusterD to regenerate volfiles for the volume.
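In practice that would look roughly like the following, run once per volume
after the op-version has been bumped (`user.upgrade` is just the example
key mentioned above; any option in the `user.` namespace behaves the same
way and ends up in no volfile):

    gluster volume set VOLNAME user.upgrade on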
>>
>> My suggestion would be to stop using glusterd in upgrade mode during
>> post-upgrade to regenerate volfiles, and document the above way to get
>> volfiles regenerated across the cluster correctly.
>> We could do away with upgrade mode itself, but it could be useful for
>> other things (though I can't think of any right now).
>>
>> What do the other maintainers feel about this?
>>
>> Would it make sense to have the volfiles regenerated when changing the
>> op-version? For environments where multiple volumes are used, I do not
>> like the need to regenerate them manually for all of them.
>>
>> On the other hand, a regenerate+reload/restart results in a short
>> interruption. This may not be suitable for all volumes at the same time.
>> A per-volume option might be preferred by some users. Getting feedback
>> from users would be good before deciding on an approach.
>>
>> Running GlusterD in upgrade mode while updating the installed binaries
>> is something that easily gets forgotten. I'm not even sure if this is
>> done in all packages, and I guess it is skipped a lot when people have
>> installations from source. We should probably put the exact steps in our
>> release-notes to remind everyone.
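For the release-notes, the upgrade-mode step being discussed is, to the
best of my knowledge, what the RPM scriptlets run today; source installs
would need to run it by hand after updating the binaries:

    # regenerate volfiles in upgrade mode and exit (no daemonizing)
    glusterd --xlator-option *.upgrade=on -N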
>>
>> Thanks,
>> Niels
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> maintainers mailing list
>> maintainers at gluster.org
>> http://www.gluster.org/mailman/listinfo/maintainers
>>
>>
>
>