[Gluster-users] Gluster 3.5.2 upgrade to Gluster 3.6.3 QEMU gfapi complications

Josh Boon gluster at joshboon.com
Tue May 5 13:49:02 UTC 2015


Yeah, I'll be doing a test upgrade and migration to make sure it works in the lab, but my production environment is significantly busier, so we'll see if it folds under pressure. My biggest concern is the window when I'll have one node on 3.5.2 and one node on 3.6.3 in a replica set. I don't see any major reason why they would be incompatible or cause data issues, but I thought I would check with the list first. 
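During that mixed-version window, the cluster state can be watched with standard CLI checks; something like the following sketch, where the volume name VMIMAGES is a placeholder for your actual volume: 

```shell
# Confirm both peers stay connected and all bricks remain online
# while the replica set is running mixed 3.5.2/3.6.3 versions.
gluster peer status
gluster volume status VMIMAGES

# Watch for pending heals; a steadily growing count would suggest
# the two versions are not replicating cleanly.
gluster volume heal VMIMAGES info
```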

----- Original Message -----

From: "Pranith Kumar Karampuri" <pkarampu at redhat.com> 
To: "Josh Boon" <gluster at joshboon.com>, "Gluster-users at gluster.org List" <gluster-users at gluster.org> 
Sent: Tuesday, May 5, 2015 12:55:19 AM 
Subject: Re: [Gluster-users] Gluster 3.5.2 upgrade to Gluster 3.6.3 QEMU gfapi complications 


On 05/05/2015 02:27 AM, Josh Boon wrote: 



Hey folks, 

I'll be doing an upgrade soon for my core hypervisors running qemu 2.0 built with Gluster 3.5.2 connecting to a replicated 3.5.2 volume. 
The upgrade path I'd like to do is: 
1. migrate all machines to node not being upgraded 
2. prevent client heals as documented over at http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6 
3. stop gluster server and gluster processes on node being upgraded 
4. upgrade kvm, gluster, and supporting packages to 3.6.3 
5. restart node being upgraded 
6. node rejoins the pool, with one node now running 3.6.3 and the other still on 3.5.2 
7. perform a heal to ensure data is correct 
8. migrate all machines over to the newly upgraded node 
9. repeat steps 3-5 on the other node 
10. perform a heal to ensure data is correct 
11. rebalance machines as necessary 
12. upgrade complete 
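
Steps 2, 3, 4, and 7 above might look roughly like this on each node; the volume name VMIMAGES and the Debian/Ubuntu-style package and service names are assumptions, so adjust for your setup: 

```shell
# Step 2: disable client-side self-heals so clients still on 3.5.2
# do not attempt heals against the upgraded node
# (per the Upgrade_to_3.6 guide).
gluster volume set VMIMAGES cluster.entry-self-heal off
gluster volume set VMIMAGES cluster.data-self-heal off
gluster volume set VMIMAGES cluster.metadata-self-heal off

# Step 3: stop the gluster server and any remaining gluster processes
# on the node being upgraded.
service glusterfs-server stop
pkill glusterfs
pkill glusterfsd

# Step 4: upgrade kvm, gluster, and supporting packages.
apt-get update && apt-get install glusterfs-server qemu-kvm

# Steps 7/10: after the node rejoins the pool, trigger a full heal
# and verify nothing is left pending.
gluster volume heal VMIMAGES full
gluster volume heal VMIMAGES info
```

Once both nodes are on 3.6.3, the three self-heal options can be set back to on. 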

This method has the obvious open question of whether the two nodes will behave as expected while running different major versions, with the gain of no downtime for the VMs. Is this method too risky? Has anyone tried it? I would appreciate any input. 


One way to gain confidence is to perform this on a test setup first, to learn how your workload is affected by this upgrade. 

Pranith 

Thanks, 
Josh 


_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users 



