[Gluster-devel] Request for Comments: Upgrades from 3.x to 4.0+

Kaushal M kshlmster at gmail.com
Thu Nov 2 08:56:05 UTC 2017


We're fast approaching Gluster-4.0, and we would like to set out the
expected upgrade strategy and polish it to be as user friendly as
possible.

We're getting this out now because there was quite a bit of concern
and confusion regarding upgrades from 3.x to 4.0+.

---
## Background

Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
which is backwards incompatible with the GlusterD (GD1) shipped in
GlusterFS-3.1 and later. As a hybrid cluster of GD1 and GD2 cannot be
established, rolling upgrades across that boundary are not possible.
This would mean that upgrades from 3.x to 4.0 require volume downtime
and possibly client downtime.

This was a cause of concern among many during the recently concluded
Gluster Summit 2017.

We would like to keep the pain experienced by our users to a minimum,
so we are trying to develop an upgrade strategy that avoids downtime
as much as possible.

## (Expected) Upgrade strategy from 3.x to 4.0

Gluster-4.0 will ship with both GD1 and GD2.
For fresh installations, only GD2 will be installed and available by default.
For existing installations (upgrades), GD1 will be installed and run
by default. GD2 will also be installed simultaneously, but will not
run automatically.

GD1 will allow rolling upgrades, and allow properly set up Gluster
volumes to be upgraded to the 4.0 binaries without downtime.
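
For reference, the per-node steps of such a rolling upgrade would look
roughly like the sketch below. This assumes an RPM-based distribution
with systemd and a replicated volume; the package manager invocation
and the volume name `testvol` are illustrative, and the final 4.0
upgrade guide may differ in details.

```sh
# Run on one server at a time; do not move to the next node until
# self-heal has caught up.
systemctl stop glusterd              # stop GD1 on this node
killall glusterfs glusterfsd         # stop remaining brick/auxiliary processes
yum update glusterfs-server          # install the 4.0 packages (tool varies by distro)
systemctl start glusterd             # the 4.0 GD1 restarts the bricks
gluster volume heal testvol info     # repeat until no entries are pending heal
```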

Once the full pool is upgraded, and all bricks and other daemons are
running 4.0 binaries, migration to GD2 can happen.
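
A hedged sketch of how this could be verified before attempting the
GD2 migration; the op-version value used below is an assumption and
should be taken from the 4.0 release notes instead.

```sh
gluster --version       # on every node: should report a 4.0 build
gluster peer status     # every peer should be in 'Peer in Cluster (Connected)' state
gluster volume status   # all bricks and self-heal daemons should show Online 'Y'

# Check the highest op-version the upgraded pool supports, then bump the
# cluster op-version (40000 is a placeholder, not the confirmed number):
gluster volume get all cluster.max-op-version
gluster volume set all cluster.op-version 40000
```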

To migrate to GD2, all GD1 processes in the cluster need to be killed,
and GD2 started instead.
GD2 will not automatically form a cluster. A migration script will be
provided, which will form a new GD2 cluster from the existing GD1
cluster information, and migrate volume information from GD1 into GD2.

Once migration is complete, GD2 will pick up the running brick and
other daemon processes and continue managing them. This will only be
possible if the rolling upgrade with GD1 completed successfully and
all the processes are running the 4.0 binaries.

During the whole migration process, the volume will remain online for
existing clients, which can continue to work. New client mounts will
not be possible during this time.

After migration, existing clients will connect back to GD2 for
updates. GD2 listens on the same port as GD1 and provides the required
SunRPC programs.
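
As a quick post-migration sanity check, the management port (24007 by
default for GD1) should now be owned by GD2, and new mounts should
work again; the volume and mount point below are placeholders.

```sh
ss -ltnp | grep 24007                              # should show glusterd2 listening
mount -t glusterfs server1:/testvol /mnt/testvol   # new clients work again after migration
```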

Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
versions, without volume downtime, will be possible.

### FAQ and additional info

#### Both GD1 and GD2? What?

While both GD1 and GD2 will be shipped, the GD1 shipped will
essentially be the GD1 from the last 3.x series. It will not support
any of the newer storage or management features being planned for 4.0.
All new features will only be available from GD2.

#### How long will GD1 be shipped/maintained for?

We plan to maintain GD1 in the 4.x series for at least a couple of
releases, including at least one LTM release. The current plan is to
maintain it till 4.2. Beyond 4.2, users will need to first upgrade
from 3.x to 4.2, and then upgrade to newer releases.

#### Migration script

The GD1 to GD2 migration script and the required features in GD2 are
being planned only for 4.1. This technically means that most users
will only be able to migrate from 3.x to 4.1. But users can still
upgrade from 3.x to 4.0 with GD1 and get many bug fixes and
improvements; they would only be missing the new features. Users who
live on the edge should be able to do the migration manually in 4.0.

---

Please note that the document above gives the expected upgrade
strategy, and is not final, nor complete. More details will be added
and steps will be expanded upon, as we move forward.

To move forward, we need your participation. Please reply to this
thread with any comments you have. We will try to answer and resolve
any questions or concerns. If there are good new ideas/suggestions,
they will be integrated. If you just like it as is, let us know
anyway.

Thanks.

Kaushal and Gluster Developers.

