[Gluster-users] Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]

Ravishankar N ravishankar at redhat.com
Fri Sep 22 00:36:11 UTC 2017


I had replied (reply-to-all) to this email yesterday but I don't see it
on the list. Anyway, pasting it again:


On 09/21/2017 10:03 AM, Ravishankar N wrote:
>
> On 09/20/2017 01:45 PM, Martin Toth wrote:
>> *My Questions*
>> - is heal and rebalance necessary in order to upgrade replica 2 to 
>> replica 3 ?
> No, `gluster volume add-brick volname replica 3 
> node3.san:/tank/gluster/brick1` should automatically trigger healing 
> in gluster 3.12 (actually earlier than 3.12, but I don't remember in which 
> release the automatic healing on add-brick was introduced). Ensure 
> that `gluster volume heal volname info` eventually shows zero pending entries.
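> For example, a minimal way to keep an eye on the pending heal count 
> (volname is a placeholder) would be something like:
>
> watch -n 10 'gluster volume heal volname info | grep "Number of entries"'
>
> Once every brick reports "Number of entries: 0", the new brick is fully healed.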
>> - is this upgrade procedure OK ? What more/else should I do in order 
>> to do this upgrade correctly ?
>>
> Looks okay in theory (I haven't tried a 3.7 to 3.12 upgrade myself). It 
> would be good to check that there are no pending heals before you stop the 
> old nodes for the upgrade.
> -Ravi

-Ravi

On 09/21/2017 09:50 PM, Amye Scavarda wrote:
> Just making sure this gets through.
>
>
> ---------- Forwarded message ----------
> From: Martin Toth <snowmailer at gmail.com>
> Date: Thu, Sep 21, 2017 at 9:17 AM
> Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
> To: gluster-users at gluster.org
> Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com
>
>
> Hello all fellow GlusterFriends,
>
> I would like you to comment on / correct my upgrade procedure steps for a
> replica 2 volume on gluster 3.7.x.
> Then I would like to change replica 2 to replica 3 in order to correct the
> quorum issue that the infrastructure currently has.
>
> Infrastructure setup:
> - all clients run on the same nodes as the servers (FUSE mounts)
> - underneath gluster there is a ZFS pool running as raidz2 with an SSD SLOG/ZIL cache
> - both hypervisors run as GlusterFS nodes and also as Qemu compute
> nodes (Ubuntu 16.04 LTS)
> - we run Qemu VMs that access VM disks via gfapi (OpenNebula)
> - we currently run a 1 x 2, Type: Replicate volume
>
> Current Versions :
> glusterfs-* [package] 3.7.6-1ubuntu1
> qemu-* [package] 2.5+dfsg-5ubuntu10.2glusterfs3.7.14xenial1
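> (for reference, the installed versions can be double-checked with e.g.
> dpkg -l | grep -E 'glusterfs|qemu' and glusterfs --version)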
>
> What we need : (New versions)
> - upgrade GlusterFS to the 3.12 LTM version (the Ubuntu 16.04 LTS packages
> are EOL - see https://www.gluster.org/community/release-schedule/)
> - I want to use
> https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12 as
> package repository for 3.12 (see the PPA sketch after this list)
> - upgrade Qemu (with built-in support for libgfapi) -
> https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.12
> - (sadly Ubuntu's packages are built without libgfapi support)
> - add a third node to the replica setup of the volume (this is probably the
> most dangerous operation)
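>
> A rough sketch of adding both PPAs on Ubuntu 16.04 (the ppa: names are
> derived from the Launchpad URLs above - please verify before use):
>
> add-apt-repository ppa:gluster/glusterfs-3.12
> add-apt-repository ppa:monotek/qemu-glusterfs-3.12
> apt-get update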
>
> Backup Phase
> - backup "NFS storage” - raw DATA that runs on VMs
> - stop all running VMs
> - backup all running VMs (Qcow2 images) outside of gluster
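>
> A sketch of the image backup step (the datastore path is an assumption for
> a default OpenNebula install - adjust to your layout; VMs must be stopped
> first):
>
> rsync -a --progress /var/lib/one/datastores/ /backup/one-datastores/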
>
> Upgrading Gluster Phase
> - killall glusterfs glusterfsd glusterd (on every server)
> (this should stop all gluster services - server and client, as they run
> on the same nodes)
> - install the new Gluster server and client packages from the repository
> mentioned above (on every server)
> - install Monotek's new qemu package with libgfapi support enabled
> (on every server)
> - /etc/init.d/glusterfs-server start (on every server)
> - /etc/init.d/glusterfs-server status - verify that everything runs OK (on
> every server)
> - check :
> - gluster volume info
> - gluster volume status
> - check gluster FUSE clients, if mounts working as expected
> - test if various VMs are able to boot and run as expected (i.e. whether
> libgfapi works in Qemu)
> - reboot all nodes - do system upgrade of packages
> - test and check again
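>
> One extra step worth considering once all nodes run 3.12: bump the cluster
> op-version so the new features are enabled. A sketch (31202 is an assumption
> - take the exact number from the 3.12 release notes):
>
> gluster volume get all cluster.op-version
> gluster volume set all cluster.op-version 31202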
>
> Adding third node to replica 2 setup (replica 2 => replica 3)
> (volumes will be mounted and up after the upgrade and we have tested that
> VMs can be served with libgfapi = upgrade of gluster successfully
> completed)
> (next we extend replica 2 to replica 3 while volumes are mounted but
> no data is touched = no running VMs, only glusterfs servers and
> clients on the nodes)
> - issue command : gluster volume add-brick volname replica 3
> node3.san:/tank/gluster/brick1 (on the new node - node3; volname is a
> placeholder - see the command sketch after this list)
> so we change :
> Bricks:
> Brick1: node1.san:/tank/gluster/brick1
> Brick2: node2.san:/tank/gluster/brick1
> to :
> Bricks:
> Brick1: node1.san:/tank/gluster/brick1
> Brick2: node2.san:/tank/gluster/brick1
> Brick3: node3.san:/tank/gluster/brick1
> - check gluster status
> - (is rebalance / heal required here ?)
> - start all VMs and start celebration :)
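>
> A command sketch for this phase (volname is a placeholder; node3 must be in
> the trusted pool before the add-brick, hence the peer probe):
>
> gluster peer probe node3.san        (run from node1 or node2)
> gluster peer status                 (verify node3 is connected)
> gluster volume add-brick volname replica 3 node3.san:/tank/gluster/brick1
> gluster volume heal volname info    (repeat until pending entries reach 0)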
>
> My Questions
> - is heal and rebalance necessary in order to upgrade replica 2 to replica 3 ?
> - is this upgrade procedure OK ? What more/else should I do in order
> to do this upgrade correctly ?
>
> Many thanks to all for your support. I hope my little preparation howto
> will help others solve the same situation.
>
> Best Regards,
> Martin
>
>
