<div dir="ltr">Ahh OK I see, thanks<br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On 6 November 2017 at 00:54, Kaushal M <span dir="ltr"><<a href="mailto:kshlmster@gmail.com" target="_blank">kshlmster@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Fri, Nov 3, 2017 at 8:50 PM, Alastair Neil <<a href="mailto:ajneil.tech@gmail.com">ajneil.tech@gmail.com</a>> wrote:<br>
> Just so I am clear, the upgrade process will be as follows:
>
> upgrade all clients to 4.0
>
> rolling upgrade all servers to 4.0 (with GD1)
>
> kill all GD1 daemons on all servers and run the upgrade script (new
> clients are unable to connect at this point)
>
> start GD2 (necessary, or does the upgrade script do this?)
>
> I assume that once the cluster has been migrated to GD2, the glusterd
> startup script will be smart enough to start the correct version?

This should be the process, mostly.

The upgrade script needs GD2 running on all nodes before it can begin
the migration. The nodes don't need to have formed a cluster, though;
the script should take care of forming the cluster.
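
For anyone who wants to picture that switchover end to end, here is a
rough driver sketch. Every name in it (the node list, the GD2 unit
name, the migration script path) is an assumption for illustration,
not a shipped artifact:

    import subprocess

    NODES = ["server1", "server2", "server3"]  # all pool members

    def ssh(node, cmd):
        """Run a command on a node over ssh, failing loudly on error."""
        subprocess.run(["ssh", node, cmd], check=True)

    # 1. Stop GD1 everywhere; bricks and connected clients keep running.
    for node in NODES:
        ssh(node, "systemctl stop glusterd")

    # 2. Start GD2 everywhere; no cluster is formed at this point.
    for node in NODES:
        ssh(node, "systemctl start glusterd2")

    # 3. The migration script (path assumed) forms the GD2 cluster and
    #    imports the GD1 volume information; run it from any one node.
    ssh(NODES[0], "/usr/local/sbin/gd1-to-gd2-migrate")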
<div class="HOEnZb"><div class="h5"><br>
<br>

> -Thanks
>
> On 3 November 2017 at 04:06, Kaushal M <kshlmster@gmail.com> wrote:
>>
>> On Thu, Nov 2, 2017 at 7:53 PM, Darrell Budic <budic@onholyground.com> wrote:
>> > Will the various client packages (centos in my case) be able to
>> > automatically handle the upgrade vs new install decision, or will we
>> > be required to do something manually to determine that?
>>
>> We should be able to do this with CentOS (and other RPM-based distros)
>> which currently have well-split glusterfs packages.
>> At this moment, I don't know exactly how much can be handled
>> automatically, but I expect the amount of manual intervention to be
>> minimal. At minimum, the manual work needed would be enabling and
>> starting GD2, and then running the migration script.
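>>
>> Purely as an illustration (the service unit name is an assumption),
>> that per-node step could be as small as:
>>
>>     import subprocess
>>
>>     # Enable GD2 so it survives reboots, then start it immediately.
>>     subprocess.run(["systemctl", "enable", "--now", "glusterd2"],
>>                    check=True)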
>>
>> > It’s a little unclear whether things will continue without
>> > interruption, because of the way you describe the change from GD1 to
>> > GD2: it sounds like it stops GD1.
>>
>> With the described upgrade strategy, we can ensure continuous volume
>> access for clients during the whole process (provided volumes have
>> been set up with replication or ec).
>>
>> During the migration from GD1 to GD2, any existing clients retain
>> access and can continue to work without interruption.
>> This is possible because Gluster keeps the management (glusterds) and
>> data (bricks and clients) parts separate, so the management parts can
>> be interrupted without interrupting data access for existing clients.
>> Clients and the server-side brick processes need GlusterD to start up,
>> but once they're running they can run without it; GlusterD is only
>> required again if something goes wrong.
>> Stopping GD1 during the migration will therefore not interrupt
>> existing clients: the brick processes continue to run, and any
>> connected clients remain connected to the bricks.
>> Any new clients that try to mount the volumes during this migration
>> will fail, as no GlusterD (either GD1 or GD2) will be available.
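>>
>> You can see this separation for yourself on a test server. As an
>> illustrative check (not an official procedure):
>>
>>     import subprocess
>>
>>     # Stop the management daemon on one server...
>>     subprocess.run(["systemctl", "stop", "glusterd"], check=True)
>>
>>     # ...then confirm the brick processes (glusterfsd) still serve data.
>>     out = subprocess.run(["pgrep", "-a", "glusterfsd"],
>>                          capture_output=True, text=True)
>>     print(out.stdout)  # bricks keep running; only new mounts would fail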
>>
>> > Early days, obviously, but if you could clarify if that’s what
>> > we’re used to as a rolling upgrade or how it works, that would be
>> > appreciated.
>>
>> A Gluster rolling upgrade allows data access to volumes throughout the
>> process, while the brick processes are upgraded as well.
>> Rolling upgrades with uninterrupted access require that volumes have
>> redundancy (replicate or ec). They involve upgrading the servers
>> belonging to a redundancy set (replica set or ec set) one at a time:
>> - A server is picked from a redundancy set.
>> - All Gluster processes on that server are killed: glusterd, bricks
>> and other daemons included.
>> - Gluster is upgraded and restarted on the server.
>> - A heal is performed to heal new data onto the bricks.
>> - Once the heal finishes, move on to the next server.
>>
>> Clients maintain uninterrupted access, because a full redundancy set
>> is never taken offline all at once.
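>>
>> As a sketch of one such per-server pass (the node names, volume name
>> and package manager are assumptions for the example):
>>
>>     import subprocess
>>
>>     def run(node, cmd):
>>         subprocess.run(["ssh", node, cmd], check=True)
>>
>>     for node in ["server1", "server2"]:  # one redundancy set at a time
>>         run(node, "systemctl stop glusterd")
>>         # Kill any remaining bricks and daemons (shd, nfs, etc.)
>>         run(node, "killall glusterfsd glusterfs || true")
>>         run(node, "yum -y update 'glusterfs*'")
>>         run(node, "systemctl start glusterd")
>>         # Trigger a heal, then poll 'gluster volume heal myvol info'
>>         # until no entries remain before moving to the next server.
>>         run(node, "gluster volume heal myvol")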
>>
>> > Also clarification that we’ll be able to upgrade from 3.x
>> > (3.1x?) to 4.0, manually or automatically?
>>
>> Rolling upgrades from 3.1x to 4.0 are a manual process, but I believe
>> gdeploy has playbooks to automate them.
>> At the end of this you will be left with a 4.0 cluster, but still be
>> running GD1.
>> Upgrading from GD1 to GD2 in 4.0 will be a manual process; a script
>> that automates this is planned only for 4.1.
>>
>> >
>> > ________________________________
>> > From: Kaushal M <kshlmster@gmail.com>
>> > Subject: [Gluster-users] Request for Comments: Upgrades from 3.x to 4.0+
>> > Date: November 2, 2017 at 3:56:05 AM CDT
>> > To: gluster-users@gluster.org; Gluster Devel
>> >
>> > We're fast approaching the time for Gluster-4.0, and we would like
>> > to set out the expected upgrade strategy and try to polish it to be
>> > as user-friendly as possible.
>> >
>> > We're getting this out now because there was quite a bit of concern
>> > and confusion regarding upgrades between 3.x and 4.0+.
>> >
>> > ---
>> > ## Background
>> >
>> > Gluster-4.0 will bring a newer management daemon, GlusterD-2.0 (GD2),
>> > which is backwards incompatible with the GlusterD (GD1) in
>> > GlusterFS-3.1+. As a hybrid cluster of GD1 and GD2 cannot be
>> > established, rolling upgrades are not possible. This meant that
>> > upgrades from 3.x to 4.0 would require volume downtime and possibly
>> > client downtime.
>> >
>> > This was a cause of concern among many during the recently concluded
>> > Gluster Summit 2017.
>> >
>> > We would like to keep the pain experienced by our users to a minimum,
>> > so we are trying to develop an upgrade strategy that avoids downtime
>> > as much as possible.
>> >
>> > ## (Expected) Upgrade strategy from 3.x to 4.0
>> >
>> > Gluster-4.0 will ship with both GD1 and GD2.
>> > For fresh installations, only GD2 will be installed and available by
>> > default.
>> > For existing installations (upgrades), GD1 will be installed and run
>> > by default. GD2 will also be installed simultaneously, but will not
>> > run automatically.
>> >
>> > GD1 will allow rolling upgrades, letting properly set up Gluster
>> > volumes be upgraded to 4.0 binaries without downtime.
>> >
>> > Once the full pool is upgraded, and all bricks and other daemons are
>> > running 4.0 binaries, migration to GD2 can happen.
>> >
>> > To migrate to GD2, all GD1 processes in the cluster need to be
>> > killed, and GD2 started instead.
>> > GD2 will not automatically form a cluster. A migration script will be
>> > provided, which will form a new GD2 cluster from the existing GD1
>> > cluster information, and migrate volume information from GD1 into
>> > GD2.
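>> >
>> > To give a feel for what such a script has to work with: GD1 keeps
>> > its peer list as flat key=value files under /var/lib/glusterd/peers.
>> > An illustrative reader follows; the GD2 join step is left as a
>> > print, since its interface is not final:
>> >
>> >     import os
>> >
>> >     PEER_DIR = "/var/lib/glusterd/peers"  # GD1's on-disk peer store
>> >
>> >     def gd1_peers():
>> >         """Yield (uuid, hostname) for every peer GD1 knows about."""
>> >         for name in os.listdir(PEER_DIR):
>> >             info = {}
>> >             with open(os.path.join(PEER_DIR, name)) as f:
>> >                 for line in f:
>> >                     key, _, value = line.strip().partition("=")
>> >                     info[key] = value
>> >             yield info.get("uuid", name), info.get("hostname1", "?")
>> >
>> >     for uuid, host in gd1_peers():
>> >         print("would join", host, "(" + uuid + ") into the GD2 cluster")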
>> >
>> > Once migration is complete, GD2 will pick up the running brick and
>> > other daemon processes and continue. This will only be possible if
>> > the rolling upgrade with GD1 happened successfully and all the
>> > processes are running 4.0 binaries.
>> >
>> > During the whole migration process, the volume will still be online
>> > for existing clients, which can continue to work. New client mounts
>> > will not be possible during this time.
>> >
>> > After migration, existing clients will connect back to GD2 for
>> > updates. GD2 listens on the same port as GD1 and provides the
>> > required SunRPC programs.
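>> >
>> > That management port is 24007 by default, so a trivial
>> > post-migration sanity check (hostname assumed) could be:
>> >
>> >     import socket
>> >
>> >     # Confirm GD2 answers on the management port GD1 used.
>> >     with socket.create_connection(("server1", 24007), timeout=5):
>> >         print("management port is up")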
>> >
>> > Once migrated to GD2, rolling upgrades to newer GD2 and Gluster
>> > versions, without volume downtime, will be possible.
>> >
>> > ### FAQ and additional info
>> >
>> > #### Both GD1 and GD2? What?
>> >
>> > While both GD1 and GD2 will be shipped, the GD1 shipped will
>> > essentially be the GD1 from the last 3.x series. It will not support
>> > any of the newer storage or management features planned for 4.0. All
>> > new features will only be available from GD2.
>> >
>> > #### How long will GD1 be shipped/maintained for?
>> >
>> > We plan to maintain GD1 in the 4.x series for at least a couple of
>> > releases, including at least 1 LTM release. The current plan is to
>> > maintain it till 4.2. Beyond 4.2, users will need to first upgrade
>> > from 3.x to 4.2, and then upgrade to newer releases.
>> >
>> > #### Migration script
>> >
>> > The GD1-to-GD2 migration script and the required features in GD2 are
>> > being planned only for 4.1. This would technically mean most users
>> > will only be able to migrate from 3.x to 4.1. But users can still
>> > migrate from 3.x to 4.0 with GD1 and get many bug fixes and
>> > improvements; they would only be missing any new features. Users who
>> > live on the edge should be able to do the migration manually in 4.0.
>> >
>> > ---
>> >
>> > Please note that the document above gives the expected upgrade
>> > strategy, and is neither final nor complete. More details will be
>> > added, and steps will be expanded upon, as we move forward.
>> >
>> > To move forward, we need your participation. Please reply to this
>> > thread with any comments you have. We will try to answer and resolve
>> > any questions or concerns. If there are good new ideas/suggestions,
>> > they will be integrated. If you just like it as is, let us know
>> > anyway.
>> >
>> > Thanks.
>> >
>> > Kaushal and Gluster Developers.
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-devel