<div dir="ltr">What do you mean "sharding"? Do you mean sharing folders between two servers to host qcow2 or raw vm images?<br>Here I am using Proxmox which uses qemu but not virsh.<div><br></div><div>Thanks<br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">(47) 3025-5907</span><br></div><div><font size="4"><b></b></font></div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px">Skype: gilberto.nunes36</p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Em qui., 6 de ago. de 2020 às 01:09, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> escreveu:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">As you mentioned qcow2 files, check the virt group (/var/lib/glusterfs/group or something like that). It has optimal setttins for VMs and is used by oVirt.<br>
<br>
WARNING: If you decide to enable the group, which will also enable sharding, NEVER EVER DISABLE SHARDING -> ONCE ENABLED, IT STAYS ENABLED!!!<br>
Sharding helps reduce locking during replica heals, as only the changed shards need to be healed instead of the whole image file.<br>
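<br>
(For reference, whether sharding is already enabled on a volume, and with which shard size, can be checked with something like:)<br>
<br>
gluster volume get VMS features.shard<br>
gluster volume get VMS features.shard-block-size<br>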
<br>
WARNING 2: Since the virt group enables sharding (files are split into fixed-size shards), you should consider setting cluster.favorite-child-policy to ctime or mtime; with equally sized shards, the "size" policy cannot reliably pick a winner.<br>
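<br>
(For example, switching the policy on the VMS volume would be something along these lines:)<br>
<br>
gluster volume set VMS cluster.favorite-child-policy mtime<br>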
<br>
Best Regards,<br>
Strahil Nikolov<br>
<br>
On 6 August 2020 at 1:56:58 GMT+03:00, Gilberto Nunes <<a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br>
>Ok...Thanks a lot Strahil<br>
><br>
>This "gluster volume set VMS cluster.favorite-child-policy size" did the<br>
>trick for me here!<br>
><br>
>Cheers<br>
>---<br>
>Gilberto Nunes Ferreira<br>
><br>
>(47) 3025-5907<br>
>(47) 99676-7530 - Whatsapp / Telegram<br>
><br>
>Skype: gilberto.nunes36<br>
><br>
><br>
><br>
><br>
><br>
>On Wed, 5 Aug 2020 at 18:15, Strahil Nikolov<br>
><<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>> wrote:<br>
><br>
>> This could happen if you have pending heals. Did you reboot that node<br>
>> recently?<br>
>> Did you set automatic unsplit-brain (cluster.favorite-child-policy)?<br>
>><br>
>> Check for pending heals and for files in split-brain.<br>
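>><br>
>> (For example, something like:)<br>
>><br>
>> gluster volume heal VMS info<br>
>> gluster volume heal VMS info split-brain<br>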
>><br>
>> If not, you can check<br>
>> <a href="https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/" rel="noreferrer" target="_blank">https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/</a><br>
>> (look at point 5).<br>
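>><br>
>> (For illustration, the CLI-based resolution in the gluster docs boils down to picking a policy per affected file, roughly like this, where <FILE> is the file's path as seen from the volume root:)<br>
>><br>
>> gluster volume heal VMS split-brain latest-mtime <FILE><br>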
>><br>
>> Best Regards,<br>
>> Strahil Nikolov<br>
>><br>
>> On 5 August 2020 at 23:41:57 GMT+03:00, Gilberto Nunes <<br>
>> <a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br>
>> >I'm in trouble here.<br>
>> >When I shut down the pve01 server, the shared folder over glusterfs is<br>
>> >EMPTY!<br>
>> >There's supposed to be a qcow2 file inside it.<br>
>> >The content shows up again right after I power pve01 back on...<br>
>> ><br>
>> >Some advice?<br>
>> ><br>
>> ><br>
>> >Thanks<br>
>> ><br>
>> >---<br>
>> >Gilberto Nunes Ferreira<br>
>> ><br>
>> >(47) 3025-5907<br>
>> >(47) 99676-7530 - Whatsapp / Telegram<br>
>> ><br>
>> >Skype: gilberto.nunes36<br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>> >On Wed, 5 Aug 2020 at 11:07, Gilberto Nunes <<br>
>> ><a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br>
>> ><br>
>> >> Well...<br>
>> >> I did the following:<br>
>> >><br>
>> >> gluster vol create VMS replica 3 arbiter 1 pve01:/DATA/brick1<br>
>> >> pve02:/DATA/brick1.5 pve01:/DATA/arbiter1.5 pve02:/DATA/brick2<br>
>> >> pve01:/DATA/brick2.5 pve02:/DATA/arbiter2.5 force<br>
>> >><br>
>> >> And now I have:<br>
>> >> gluster vol info<br>
>> >><br>
>> >> Volume Name: VMS<br>
>> >> Type: Distributed-Replicate<br>
>> >> Volume ID: 1bd712f5-ccb9-4322-8275-abe363d1ffdd<br>
>> >> Status: Started<br>
>> >> Snapshot Count: 0<br>
>> >> Number of Bricks: 2 x (2 + 1) = 6<br>
>> >> Transport-type: tcp<br>
>> >> Bricks:<br>
>> >> Brick1: pve01:/DATA/brick1<br>
>> >> Brick2: pve02:/DATA/brick1.5<br>
>> >> Brick3: pve01:/DATA/arbiter1.5 (arbiter)<br>
>> >> Brick4: pve02:/DATA/brick2<br>
>> >> Brick5: pve01:/DATA/brick2.5<br>
>> >> Brick6: pve02:/DATA/arbiter2.5 (arbiter)<br>
>> >> Options Reconfigured:<br>
>> >> cluster.quorum-count: 1<br>
>> >> cluster.quorum-reads: false<br>
>> >> cluster.self-heal-daemon: enable<br>
>> >> cluster.heal-timeout: 10<br>
>> >> storage.fips-mode-rchecksum: on<br>
>> >> transport.address-family: inet<br>
>> >> nfs.disable: on<br>
>> >> performance.client-io-threads: off<br>
>> >><br>
>> >> These values I set myself, in order to see if I could improve the<br>
>> >> time it takes for the volume to become available when pve01 goes down<br>
>> >> with ifupdown:<br>
>> >> cluster.quorum-count: 1<br>
>> >> cluster.quorum-reads: false<br>
>> >> cluster.self-heal-daemon: enable<br>
>> >> cluster.heal-timeout: 10<br>
>> >><br>
>> >> Nevertheless, it took more than 1 minute for the volume VMS to become<br>
>> >> available on the other host (pve02).<br>
>> >> Is there any trick to reduce this time?<br>
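>> >><br>
>> >> (For what it's worth, one option commonly involved in that delay is<br>
>> >> network.ping-timeout, which defaults to 42 seconds before clients give<br>
>> >> up on an unreachable brick; as an illustration only, lowering it would<br>
>> >> look like this, at the cost of more spurious disconnects on a flaky<br>
>> >> network:)<br>
>> >><br>
>> >> gluster volume set VMS network.ping-timeout 10<br>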
>> >><br>
>> >> Thanks<br>
>> >><br>
>> >> ---<br>
>> >> Gilberto Nunes Ferreira<br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >> On Wed, 5 Aug 2020 at 08:57, Gilberto Nunes <<br>
>> >> <a href="mailto:gilberto.nunes32@gmail.com" target="_blank">gilberto.nunes32@gmail.com</a>> wrote:<br>
>> >><br>
>> >>> Hmm, I see... like this:<br>
>> >>> [image: image.png]<br>
>> >>> ---<br>
>> >>> Gilberto Nunes Ferreira<br>
>> >>><br>
>> >>> (47) 3025-5907<br>
>> >>> (47) 99676-7530 - Whatsapp / Telegram<br>
>> >>><br>
>> >>> Skype: gilberto.nunes36<br>
>> >>><br>
>> >>><br>
>> >>><br>
>> >>><br>
>> >>><br>
>> >>> On Wed, 5 Aug 2020 at 02:14, Computerisms Corporation <<br>
>> >>> <a href="mailto:bob@computerisms.ca" target="_blank">bob@computerisms.ca</a>> wrote:<br>
>> >>><br>
>> >>>> check the example of the chained configuration on this page:<br>
>> >>>><br>
>> >>>><br>
>> >>>><br>
>> >>>> <a href="https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/creating_arbitrated_replicated_volumes" rel="noreferrer" target="_blank">https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/creating_arbitrated_replicated_volumes</a><br>
>> >>>><br>
>> >>>> and apply it to two servers...<br>
>> >>>><br>
>> >>>> On 2020-08-04 8:25 p.m., Gilberto Nunes wrote:<br>
>> >>>> > Hi Bob!<br>
>> >>>> ><br>
>> >>>> > Could you please send me more details about this configuration?<br>
>> >>>> > I would appreciate that!<br>
>> >>>> ><br>
>> >>>> > Thank you<br>
>> >>>> > ---<br>
>> >>>> > Gilberto Nunes Ferreira<br>
>> >>>> ><br>
>> >>>> > (47) 3025-5907<br>
>> >>>> > (47) 99676-7530 - Whatsapp / Telegram<br>
>> >>>> ><br>
>> >>>> > Skype: gilberto.nunes36<br>
>> >>>> ><br>
>> >>>> ><br>
>> >>>> ><br>
>> >>>> ><br>
>> >>>> ><br>
>> >>>> > On Tue, 4 Aug 2020 at 23:47, Computerisms Corporation<br>
>> >>>> > <<a href="mailto:bob@computerisms.ca" target="_blank">bob@computerisms.ca</a>> wrote:<br>
>> >>>> ><br>
>> >>>> > Hi Gilberto,<br>
>> >>>> ><br>
>> >>>> > My understanding is there can only be one arbiter per replicated<br>
>> >>>> > set. I don't have a lot of practice with gluster, so this could be<br>
>> >>>> > bad advice, but the way I dealt with it on my two servers was to<br>
>> >>>> > use 6 bricks as distributed-replicated (this is also relatively<br>
>> >>>> > easy to migrate to 3 servers if that happens for you in the<br>
>> >>>> > future):<br>
>> >>>> ><br>
>> >>>> > Server1        Server2<br>
>> >>>> > brick1         brick1.5<br>
>> >>>> > arbiter1.5     brick2<br>
>> >>>> > brick2.5       arbiter2.5<br>
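>> >>>> ><br>
>> >>>> > (As a sketch only, with hypothetical brick paths, that layout maps<br>
>> >>>> > to a create command roughly like:)<br>
>> >>>> ><br>
>> >>>> > gluster volume create VOL replica 3 arbiter 1 \<br>
>> >>>> >   server1:/data/brick1 server2:/data/brick1.5 server1:/data/arbiter1.5 \<br>
>> >>>> >   server2:/data/brick2 server1:/data/brick2.5 server2:/data/arbiter2.5<br>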
>> >>>> ><br>
>> >>>> > On 2020-08-04 7:00 p.m., Gilberto Nunes wrote:<br>
>> >>>> > > Hi there.<br>
>> >>>> > > I have two physical servers deployed as replica 2 and, obviously,<br>
>> >>>> > > I got a split-brain.<br>
>> >>>> > > So I am thinking of using two virtual machines, one on each<br>
>> >>>> > > physical server....<br>
>> >>>> > > Then these two VMs would act as arbiters of the gluster set....<br>
>> >>>> > ><br>
>> >>>> > > Is this doable?<br>
>> >>>> > ><br>
>> >>>> > > Thanks<br>
>> >>>> > ><br>
>> >>>> > > ________<br>
>> >>>> > ><br>
>> >>>> > ><br>
>> >>>> > ><br>
>> >>>> > > Community Meeting Calendar:<br>
>> >>>> > ><br>
>> >>>> > > Schedule -<br>
>> >>>> > > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
>> >>>> > > Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
>> >>>> > ><br>
>> >>>> > > Gluster-users mailing list<br>
>> >>>> > > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> >>>> > > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >>>> > ><br>
>> >>>> > ________<br>
>> >>>> ><br>
>> >>>> ><br>
>> >>>> ><br>
>> >>>> > Community Meeting Calendar:<br>
>> >>>> ><br>
>> >>>> > Schedule -<br>
>> >>>> > Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br>
>> >>>> > Bridge: <a href="https://bluejeans.com/441850968" rel="noreferrer" target="_blank">https://bluejeans.com/441850968</a><br>
>> >>>> ><br>
>> >>>> > Gluster-users mailing list<br>
>> >>>> > <a href="mailto:Gluster-users@gluster.org" target="_blank">Gluster-users@gluster.org</a><br>
>> >>>> > <a href="https://lists.gluster.org/mailman/listinfo/gluster-users" rel="noreferrer" target="_blank">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br>
>> >>>> ><br>
>> >>>><br>
>> >>><br>
>><br>
</blockquote></div>