<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Hi, <div class=""><br class=""></div><div class="">thanks for the suggestions. Yes, "gluster peer probe node3” will be the first command, so that the 3rd node is discovered by Gluster.</div><div class="">I am running the latest 3.7.x - 3.7.6-1ubuntu1 is installed, and the latest 3.7.x according to <a href="https://packages.ubuntu.com/xenial/glusterfs-server" class="">https://packages.ubuntu.com/xenial/glusterfs-server</a> is 3.7.6-1ubuntu1, so this should be OK.</div><div class=""><br class=""></div><div class=""><blockquote type="cite" class=""><div dir="auto" class=""><div class=""><div dir="auto" class="">If you are *not* on the latest 3.7.x, you are unlikely to be able to go</div></div></div></blockquote></div><div class="">Do you mean the latest package from the Ubuntu repository, or the latest package from the Gluster PPA (3.7.20-ubuntu1~xenial1)?</div><div class="">Currently I am using the Ubuntu repository package, but I want to use the PPA for the upgrade because Ubuntu ships old Gluster packages in its repo.</div><div class=""><br class=""></div><div class="">I do not use sharding because all bricks have the same size, so it would not speed up healing of VM images during a heal operation. The volume is 3 TB - how long would a heal take over a 2x1 Gbit (Linux bond) connection, can you approximate? 
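As a rough sanity check on that question, here is a back-of-the-envelope sketch, assuming the bond delivers close to its nominal 2 Gbit/s and the heal is purely network-bound (both optimistic assumptions):

```shell
# Back-of-the-envelope lower bound for healing 3 TB over a 2x1 Gbit bond.
# Assumes ~250 MB/s usable throughput; real heals are usually slower because
# of the self-heal crawl, fsyncs and brick-side disk I/O.
SIZE_MB=$(( 3 * 1024 * 1024 ))   # 3 TB expressed in MB
NET_MBS=250                      # ~2 Gbit/s in MB/s, best case
SECS=$(( SIZE_MB / NET_MBS ))
echo "theoretical floor: ${SECS}s (~$(( SECS / 3600 )) hours)"
```

In practice plan for a multiple of that floor; actual progress is better observed with "gluster volume heal &lt;volname&gt; statistics" once the heal is running.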
</div><div class="">I want to turn every VM off because the Gluster upgrade procedure requires it; that is why I want to add the 3rd brick (3rd replica) at this point (after the upgrade, while the VMs are offline).</div><div class=""><br class=""></div><div class="">Martin</div><div class=""><br class=""><div><blockquote type="cite" class=""><div class="">On 22 Sep 2017, at 12:20, Diego Remolina <<a href="mailto:dijuremo@gmail.com" class="">dijuremo@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="auto" class=""><div class="">Procedure looks good.<div dir="auto" class=""><br class=""></div><div dir="auto" class="">Remember to back up the Gluster config files before the update:</div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">/etc/glusterfs</div><div dir="auto" class="">/var/lib/glusterd</div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">If you are *not* on the latest 3.7.x, you are unlikely to be able to go back to it, because the PPA only keeps the latest version of each major branch, so keep that in mind. With Ubuntu, every time you update, make sure to download and keep a manual copy of the .deb files. Otherwise you will have to compile the packages yourself in the event you want to go back.</div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">Before adding the 3rd replica, you might need:</div><div dir="auto" class="">gluster peer probe node3 </div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">When you add the 3rd replica, it should start healing, and there may be an issue there if the VMs are running. Your plan not to have the VMs up is good here. Are you using sharding? If you are not sharding, I/O in running VMs may be stalled for too long while a large image is healed. 
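A quick way to check whether a volume already has sharding turned on (the volume name "myvol" below is a placeholder, not from the original thread):

```shell
# Show whether the shard translator is enabled on the volume
# ("myvol" is a placeholder for the real volume name).
gluster volume get myvol features.shard
# Note: enabling sharding later only affects newly created files;
# existing VM images stay un-sharded.
```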
If you were already using sharding, you should be able to add the 3rd replica while VMs are running without much issue.</div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">Once healing has completed, and if you are satisfied with 3.12, remember to bump the op-version of Gluster.</div><div dir="auto" class=""><br class=""></div><div dir="auto" class="">Diego</div><div dir="auto" class=""><br class=""></div><div class="gmail_extra"><br class=""><div class="gmail_quote">On Sep 20, 2017 19:32, "Martin Toth" <<a href="mailto:snowmailer@gmail.com" class="">snowmailer@gmail.com</a>> wrote:<br type="attribution" class=""><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word" class=""><div class="">Hello all fellow GlusterFriends,</div><div class=""><br class=""></div><div class="">I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on Gluster 3.7.x.</div><div class="">Then I would like to change replica 2 to replica 3 in order to correct the quorum issue that the infrastructure currently has.</div><div class=""><br class=""></div><div class=""><b class="">Infrastructure setup:</b></div><div class="">- all clients run on the same nodes as the servers (FUSE mounts)</div><div class="">- under Gluster there is a ZFS pool running as raidz2 with an SSD SLOG/ZIL cache</div><div class="">- both hypervisors run as GlusterFS nodes and also as Qemu compute nodes (Ubuntu 16.04 LTS)</div><div class="">- we are running Qemu VMs that access VM disks via gfapi (OpenNebula)</div><div class="">- we currently run: 1x2, Type: Replicate volume</div><div class=""><br class=""></div><div class=""><b class="">Current versions:</b></div><div class="">glusterfs-* [package] 3.7.6-1ubuntu1</div><div class="">qemu-*<span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                </span>[package] 2.5+dfsg-5ubuntu10.<wbr 
class="">2glusterfs3.7.14xenial1</div><div class=""><br class=""></div><div class=""><b class="">What we need (new versions):</b></div><div class="">- upgrade GlusterFS to the 3.12 LTM version (Ubuntu 16.04 LTS packages are EOL - see <a href="https://www.gluster.org/community/release-schedule/" target="_blank" class="">https://www.gluster.org/<wbr class="">community/release-schedule/</a>)</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">        </span>- I want to use <a href="https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.12" target="_blank" class="">https://launchpad.net/~<wbr class="">gluster/+archive/ubuntu/<wbr class="">glusterfs-3.12</a> as the package repository for 3.12</div><div class="">- upgrade Qemu (with built-in support for libgfapi) - <a href="https://launchpad.net/~monotek/+archive/ubuntu/qemu-glusterfs-3.12" target="_blank" class="">https://launchpad.net/~<wbr class="">monotek/+archive/ubuntu/qemu-<wbr class="">glusterfs-3.12</a></div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">        </span>- (sadly, Ubuntu's packages are built without libgfapi support)</div><div class="">- add a third node to the replica setup of the volume (this is probably the most dangerous operation)</div><div class=""><br class=""></div><div class=""><b class="">Backup Phase</b></div><div class="">- back up the "NFS storage” - the raw DATA the VMs run on</div><div class="">- stop all running VMs</div><div class="">- back up all VMs (qcow2 images) outside of Gluster</div><div class=""><br class=""></div><div class=""><b class="">Upgrading Gluster Phase</b></div><div class="">- killall glusterfs glusterfsd glusterd (on every server)</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">        </span>(this should stop all Gluster services - server and client, as they run on the same nodes)</div><div class="">- install the new Gluster Server and Client 
packages from the repository mentioned above (on every server) </div><div class="">- install Monotek's new qemu glusterfs package with gfapi support enabled (on every server) </div><div class="">- /etc/init.d/glusterfs-server start (on every server)</div><div class="">- /etc/init.d/glusterfs-server status - verify that everything runs OK (on every server)</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">        </span>- check:</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                </span>- gluster volume info</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                </span>- gluster volume status</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                </span>- check the Gluster FUSE clients, whether the mounts work as expected</div><div class="">- test whether various VMs are able to boot and run as expected (i.e. libgfapi works in Qemu)</div><div class="">- reboot all nodes - do a system upgrade of packages</div><div class="">- test and check again</div><div class=""><br class=""></div><div class=""><b class="">Adding a third node to the replica 2 setup (replica 2 => replica 3)</b></div><div class="">(volumes will be mounted and up after the upgrade, and we have tested that VMs can be served with libgfapi = upgrade of Gluster successfully completed)</div><div class="">(next we extend replica 2 to replica 3 while the volumes are mounted but no data is touched = no running VMs, only glusterfs servers and clients on the nodes)</div><div class="">- issue command: gluster volume add-brick volume replica 3 node3.san:/tank/gluster/brick1 (on the new single node - node3)</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">        </span>so we change: </div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">         
       </span>Bricks:</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                        </span>Brick1: node1.san:/tank/gluster/brick1</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                        </span>Brick2: node2.san:/tank/gluster/brick1</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">        </span>to:</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                        </span>Bricks:</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                        </span>Brick1: node1.san:/tank/gluster/brick1</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                        </span>Brick2: node2.san:/tank/gluster/brick1</div><div class=""><span class="m_-4813293742426275633Apple-tab-span" style="white-space:pre-wrap">                        </span>Brick3: node3.san:/tank/gluster/brick1</div><div class="">- check gluster status</div><div class="">- (is a rebalance / heal required here?)</div><div class="">- start all VMs and start the celebration :)</div><div class=""><br class=""></div><div class=""><b class="">My Questions</b></div><div class="">- are a heal and rebalance necessary in order to upgrade replica 2 to replica 3?</div><div class="">- is this upgrade procedure OK? What else should I do in order to do this upgrade correctly?</div><div class=""><br class=""></div><div class="">Many thanks to all for the support. I hope my little preparation howto will help others in the same situation.</div><div class=""><br class=""></div><div class="">Best Regards,</div><div class="">Martin</div></div><br class="">______________________________<wbr class="">_________________<br class="">
Gluster-users mailing list<br class="">
<a href="mailto:Gluster-users@gluster.org" class="">Gluster-users@gluster.org</a><br class="">
<a href="http://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank" class="">http://lists.gluster.org/<wbr class="">mailman/listinfo/gluster-users</a><br class=""></blockquote></div><br class=""></div></div></div>
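The op-version bump mentioned above can be sketched as follows, once every node runs the new release; the number 31202 is an assumption matching glusterfs 3.12.2 and must be adjusted to the version actually installed:

```shell
# Check the currently active cluster op-version
gluster volume get all cluster.op-version
# Raise it after ALL nodes run 3.12 (this is not reversible;
# 31202 here assumes glusterfs 3.12.2 - check your installed version)
gluster volume set all cluster.op-version 31202
```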
</div></blockquote></div><br class=""></div></body></html>