Is this an oVirt cluster?

Best Regards,
Strahil Nikolov

On Sun, Mar 6, 2022 at 10:06, Strahil Nikolov <hunter86_bg@yahoo.com> wrote:
> It seems that only vh1-vh4 provide bricks, so vh5, vh6, vh7 and vh8 can be
> removed.
>
> First check why vh5 is offline. Changes are propagated to all nodes, and in
> this case vh5 is down and won't receive the peer detach commands.
>
> Once you fix vh5, you can safely 'gluster peer detach' any of the nodes
> that are not in the volume.
>
> Keep in mind that it's always best practice to have an odd number of nodes
> in the TSP (3, 5, 7, 9, etc.).
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, Mar 6, 2022 at 4:06, Todd Pfaff <pfaff@rhpcs.mcmaster.ca> wrote:
>> [root@vh1 ~]# gluster volume info vol1
>>
>> Volume Name: vol1
>> Type: Replicate
>> Volume ID: dfd681bb-5b68-4831-9863-e13f9f027620
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 4 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: vh1:/pool/gluster/brick1/data
>> Brick2: vh2:/pool/gluster/brick1/data
>> Brick3: vh3:/pool/gluster/brick1/data
>> Brick4: vh4:/pool/gluster/brick1/data
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
>> [root@vh1 ~]# gluster pool list
>> UUID                                    Hostname        State
>> 75fc4258-fabd-47c9-8198-bbe6e6a906fb    vh2             Connected
>> 00697e28-96c0-4534-a314-e878070b653d    vh3             Connected
>> 2a9b891b-35d0-496c-bb06-f5dab4feb6bf    vh4             Connected
>> 8ba6fb80-3b13-4379-94cf-22662cbb48a2    vh5             Disconnected
>> 1298d334-3500-4b40-a8bd-cc781f7349d0    vh6             Connected
>> 79a533ac-3d89-44b9-b0ce-823cfec8cf75    vh7             Connected
>> 4141cd74-9c13-404c-a02c-f553fa19bc22    vh8             Connected
>>
>> On Sat, 5 Mar 2022, Strahil Nikolov wrote:
>>> Hey Todd,
>>>
>>> can you provide 'gluster volume info <VOLUME>'?
>>>
>>> Best Regards,
>>> Strahil Nikolov
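For reference, the peer cleanup suggested at the top of this thread might look
roughly like the following, run from vh1 once vh5's glusterd is reachable
again. This is only a sketch, not a verified procedure: it assumes vol1 is the
only volume, so that vh5-vh8 really hold no bricks at all (check
'gluster volume info' for every volume before detaching anything).

# confirm vh5 is back and all peers show State: Peer in Cluster (Connected)
gluster peer status

# detach the peers that provide no bricks
gluster peer detach vh5
gluster peer detach vh6
gluster peer detach vh7
gluster peer detach vh8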
>>> On Sat, Mar 5, 2022 at 18:17, Todd Pfaff <pfaff@rhpcs.mcmaster.ca> wrote:
>>>> I have a replica volume created as:
>>>>
>>>> gluster volume create vol1 replica 4 \
>>>>   host{1,2,3,4}:/mnt/gluster/brick1/data \
>>>>   force
>>>>
>>>> All hosts host{1,2,3,4} mount this volume as:
>>>>
>>>> localhost:/vol1 /mnt/gluster/vol1 glusterfs defaults
>>>>
>>>> Some other hosts are trusted peers but do not contribute bricks, and
>>>> they also mount vol1 in the same way:
>>>>
>>>> localhost:/vol1 /mnt/gluster/vol1 glusterfs defaults
>>>>
>>>> All hosts run CentOS 7.9, and all are running glusterfs 9.4 or 9.5 from
>>>> centos-release-gluster9-1.0-1.el7.noarch.
>>>>
>>>> All hosts run KVM guests whose root filesystems are qcow2 files stored
>>>> on gluster volume vol1.
>>>>
>>>> This is all working well, as long as none of host{1,2,3,4} goes offline.
>>>>
>>>> I want to take one of host{1,2,3,4} offline temporarily for maintenance.
>>>> I'll refer to this host as hostX.
>>>>
>>>> I understand that hostX will need to be healed when it comes back
>>>> online.
>>>>
>>>> I would, of course, migrate guests from hostX to another host first, in
>>>> which case hostX would then only be participating as a gluster replica
>>>> brick provider and serving gluster client requests.
>>>>
>>>> What I've experienced is that if I take one of host{1,2,3,4} offline,
>>>> this can disrupt some of the VM guests on various other hosts such that
>>>> their root filesystems go read-only.
>>>>
>>>> What I'm looking for here are suggestions on how to properly take one of
>>>> host{1,2,3,4} offline to avoid such disruption, or how to tune the
>>>> libvirt KVM hosts and guests to be sufficiently resilient to one gluster
>>>> replica node going offline.
>>>>
>>>> Thanks,
>>>> Todd
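As a rough illustration of the heal step mentioned above (a sketch only, using
the volume name vol1 from this thread), the self-heal state can be watched
from any remaining brick host before hostX goes down and again after it
returns:

# list entries still pending heal; this should drain back to zero entries
gluster volume heal vol1 info

# confirm all bricks and self-heal daemons are online again
gluster volume status vol1

# optionally trigger an index heal instead of waiting for the self-heal
# daemon's next crawl
gluster volume heal vol1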