<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Hi Diego,<div class=""><br class=""></div><div class="">I’ve tried the upgrade and then extended Gluster with a 3rd node in a VirtualBox test environment, and everything went without problems.</div><div class="">Sharding will not help me at this time, so I will consider upgrading from 1G to 10G before running this procedure in production. That should lower the downtime, i.e. the healing time of the VM image files on Gluster.<br class=""><div><br class=""></div><div>I hope healing will be as short as possible on 10G.</div><div><br class=""></div><div>Additional info for Gluster/Qemu users:</div><div>- Ubuntu does not ship Qemu compiled with libgfapi support, so I’ve created a PPA for that:</div><div><span class="Apple-tab-span" style="white-space:pre">	</span><a href="https://launchpad.net/~snowmanko/+archive/ubuntu/qemu-glusterfs-3.12" class="">https://launchpad.net/~snowmanko/+archive/ubuntu/qemu-glusterfs-3.12</a> (I will try to keep this repo up to date)</div><div><span class="Apple-tab-span" style="white-space:pre">	</span>- it’s tested against glusterfs 3.12.1 (libgfapi works as expected with this repo)</div><div><br class=""></div><div>- Moreover, related to this problem, there is an MIR - <a href="https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1274247" class="">https://bugs.launchpad.net/ubuntu/+source/glusterfs/+bug/1274247</a> - which is now accepted; I am really excited to see libgfapi compiled in by default in Ubuntu Qemu packages in the near future.</div><div><br class=""></div><div>Thanks for the support.</div><div><br class=""></div><div>BR,</div><div><br class=""></div><div>Martin</div><div><br class=""><blockquote type="cite" class=""><div class="">On 22 Sep 2017, at 14:50, Diego Remolina <<a href="mailto:dijuremo@gmail.com" class="">dijuremo@gmail.com</a>> wrote:</div><br 
class="Apple-interchange-newline"><div class=""><div class="">Hi Martin,<br class=""><br class=""><blockquote type="cite" class="">Do you mean the latest package from the Ubuntu repository, or the latest package from the Gluster PPA (3.7.20-ubuntu1~xenial1)? Currently I am using the Ubuntu repository package, but I want to use the PPA for the upgrade because Ubuntu has old Gluster packages in its repo.<br class=""></blockquote><br class="">When you switch to the PPA, make sure to download and keep a copy of each set of gluster deb packages; otherwise, if you ever want to back out an upgrade to an older release, you will have to download the source deb file and build it yourself, because PPAs only keep binaries for the latest version.<br class=""><br class=""><blockquote type="cite" class=""><br class="">I do not use sharding because all bricks have the same size, so it will not speed up healing of VM images in case of a heal operation. The volume is 3TB; how long does it take to heal over a 2x1Gbit (Linux bond) connection, can you approximate?<br class=""></blockquote><br class="">Sharding is not so much about brick size. Sharding is about preventing a whole large VM file from being locked while it is being healed. It also minimizes the amount of data copied, because gluster only heals the smaller pieces rather than a whole VM image.<br class=""><br class="">Say your 100GB IMG needs to be healed: the file is locked while it gets copied from one server to the other, and the running VM may not be able to use it while the heal is going on, so your VM may in fact stop working or have I/O errors. With sharding, VMs are cut into, well, shards; the largest shard is 512MB, and then the heal process only locks the shards being healed. 
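For reference, the sharding behaviour described here is controlled through volume options. A minimal sketch, assuming a hypothetical volume name "myvol" (the option names come from the GlusterFS features.shard translator):

```shell
# Enable sharding on a volume (hypothetical volume name "myvol").
# Note: only files created after enabling are sharded, and the feature
# should never be switched off again while sharded files exist.
gluster volume set myvol features.shard on

# Optionally set the shard size; 512MB is a common choice for VM image workloads.
gluster volume set myvol features.shard-block-size 512MB
```

These commands must run on a node of an existing trusted storage pool; they are shown here only to make the discussion concrete.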
So gluster only heals the shards that changed, which are much smaller and faster to copy, and it does not need to lock the whole 100GB IMG file, which takes longer to copy - just the shard being healed. Do note that if you have never used sharding, turning it on will *not* convert your older files. Also, you should *never* turn on sharding and then turn it back off, as that will result in corrupted VM image files. Once it is on, if you want to turn it off: stop your VMs, move all VM IMG files elsewhere, turn off sharding, and then copy the files back to the volume after disabling sharding.<br class=""><br class="">As for speed, I really cannot tell you, as it depends on the disks, network, etc. For example, I have a two-node setup plus an arbiter (2 nodes with bricks, one is just the arbiter to keep quorum if one of the brick servers goes down). I recently replaced the HDDs in one machine as the drives hit the 5-year age mark. So I took the 12 drives out, added 24 drives to the machine (we had unused slots), reconfigured RAID 6, left it initializing in the background, and started the heal of 13.1TB of data. My servers are connected via 10Gbit (though I am not seeing reads/writes over 112MB/s) and this process started last Monday at 7:20PM and it is not done yet; it still has about 40GB left to heal. Now, my servers are used as a file server, which means lots of small files, which take longer to heal. 
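For tracking a long-running heal like the one described, the standard Gluster heal commands look roughly like this (a sketch; "myvol" is a placeholder volume name):

```shell
# List the entries still pending heal on each brick of the volume "myvol":
gluster volume heal myvol info

# Show just the pending-heal counts per brick, which is handy for
# watching progress over time:
gluster volume heal myvol statistics heal-count
```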
I would think your VM images will heal much faster.<br class=""><br class=""><blockquote type="cite" class="">I want to turn every VM off because that is required by the gluster upgrade procedure; that's why I want to add the 3rd brick (3rd replica) at this time (after the upgrade, while the VMs are offline).<br class=""><br class=""></blockquote><br class="">You could even attempt an online upgrade if you add the new node/brick running 3.12 to the mix before upgrading from 3.7.x on the other nodes. However, I am not sure how that is going to work; with such a difference in versions, it may not work well.<br class=""><br class="">If you can afford the downtime to upgrade, that will be the safest option.<br class=""><br class="">Diego<br class=""></div></div></blockquote></div><br class=""></div></body></html>