<div dir="ltr">Ok...Thanks a lot Strahil<div><br></div><div>This gluster volume set VMS cluster.favorite-child-policy size do the trick to me here!</div><div><br></div><div>Cheers</div><div><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>---</div><div><div><div>Gilberto Nunes Ferreira</div></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">(47) 3025-5907</span><br></div><div><font size="4"><b></b></font></div><div><span style="font-size:12.8px">(47) 99676-7530 - Whatsapp / Telegram</span><br></div><div><p style="font-size:12.8px;margin:0px"></p><p style="font-size:12.8px;margin:0px">Skype: gilberto.nunes36</p><p style="font-size:12.8px;margin:0px"><br></p></div></div><div><br></div></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Em qua., 5 de ago. de 2020 às 18:15, Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> escreveu:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">This could happen if you have pending heals. Did you reboot that node recently ?<br>
Did you set automatic unsplit-brain ?<br>
<br>
Check for pending heals and files in splitbrain.<br>
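For example, assuming the VMS volume from this thread (standard gluster CLI):

# List files with pending heals on the volume
gluster volume heal VMS info

# List only the files that are in split-brain
gluster volume heal VMS info split-brain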

If not, you can check https://docs.gluster.org/en/latest/Troubleshooting/resolving-splitbrain/ (look at point 5).
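Point 5 there covers automatic resolution via cluster.favorite-child-policy; a minimal sketch, assuming the VMS volume from this thread ("size" favors the bigger copy; check the other policy values against your gluster version):

# Resolve future split-brains automatically by keeping the bigger file
gluster volume set VMS cluster.favorite-child-policy size

# Verify the current value
gluster volume get VMS cluster.favorite-child-policy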

Best Regards,
Strahil Nikolov

On August 5, 2020 23:41:57 GMT+03:00, Gilberto Nunes <gilberto.nunes32@gmail.com> wrote:
>I'm in trouble here.
>When I shut down the pve01 server, the shared folder over glusterfs is
>EMPTY!
>There is supposed to be a qcow2 file inside it.
>The content shows up correctly again just after I power pve01 back on...
>
>Any advice?
>
>
>Thanks
>
>---
>Gilberto Nunes Ferreira
>
>(47) 3025-5907
>(47) 99676-7530 - Whatsapp / Telegram
>
>Skype: gilberto.nunes36
>
>
>
>
>
>On Wed, Aug 5, 2020 at 11:07, Gilberto Nunes <
>gilberto.nunes32@gmail.com> wrote:
>
>> Well...
>> I did the following:
>>
>> gluster vol create VMS replica 3 arbiter 1 \
>>   pve01:/DATA/brick1 pve02:/DATA/brick1.5 pve01:/DATA/arbiter1.5 \
>>   pve02:/DATA/brick2 pve01:/DATA/brick2.5 pve02:/DATA/arbiter2.5 \
>>   force
>>
>> And now I have:
>> gluster vol info
>>
>> Volume Name: VMS
>> Type: Distributed-Replicate
>> Volume ID: 1bd712f5-ccb9-4322-8275-abe363d1ffdd
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: pve01:/DATA/brick1
>> Brick2: pve02:/DATA/brick1.5
>> Brick3: pve01:/DATA/arbiter1.5 (arbiter)
>> Brick4: pve02:/DATA/brick2
>> Brick5: pve01:/DATA/brick2.5
>> Brick6: pve02:/DATA/arbiter2.5 (arbiter)
>> Options Reconfigured:
>> cluster.quorum-count: 1
>> cluster.quorum-reads: false
>> cluster.self-heal-daemon: enable
>> cluster.heal-timeout: 10
>> storage.fips-mode-rchecksum: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
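>> (As a side note: to see which bricks are actually online once a node
>> drops, the standard status query can be used, volume name as above:)
>>
>> gluster volume status VMS
>>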
>> I set these values myself, in order to see if I could improve the
>> time for the volume to become available when pve01 goes down
>> (brought down with ifupdown):
>> cluster.quorum-count: 1
>> cluster.quorum-reads: false
>> cluster.self-heal-daemon: enable
>> cluster.heal-timeout: 10
>>
>> Nevertheless, it took more than 1 minute for the volume VMS to become
>> available on the other host (pve02).
>> Is there any trick to reduce this time?
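>> (One knob that may matter here, as a hedged aside: clients wait
>> network.ping-timeout, 42 seconds by default, before declaring an
>> unreachable brick dead, which is in line with the delay above.
>> Lowering it shortens failover, but short network blips then count as
>> failures:)
>>
>> gluster volume set VMS network.ping-timeout 10
>> gluster volume get VMS network.ping-timeout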
>>
>> Thanks
>>
>> ---
>> Gilberto Nunes Ferreira
>>
>>
>>
>>
>>
>>
>> On Wed, Aug 5, 2020 at 08:57, Gilberto Nunes <
>> gilberto.nunes32@gmail.com> wrote:
>>
>>> Hmm, I see... Like this:
>>> [image: image.png]
>>> ---
>>> Gilberto Nunes Ferreira
>>>
>>> (47) 3025-5907
>>> (47) 99676-7530 - Whatsapp / Telegram
>>>
>>> Skype: gilberto.nunes36
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Aug 5, 2020 at 02:14, Computerisms Corporation <
>>> bob@computerisms.ca> wrote:
>>>
>>>> check the example of the chained configuration on this page:
>>>>
>>>> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/creating_arbitrated_replicated_volumes
>>>>
>>>> and apply it to two servers...
>>>>
>>>> On 2020-08-04 8:25 p.m., Gilberto Nunes wrote:
>>>> > Hi Bob!
>>>> >
>>>> > Could you please send me more details about this configuration?
>>>> > I would appreciate that!
>>>> >
>>>> > Thank you
>>>> > ---
>>>> > Gilberto Nunes Ferreira
>>>> >
>>>> > (47) 3025-5907
>>>> > (47) 99676-7530 - Whatsapp / Telegram
>>>> >
>>>> > Skype: gilberto.nunes36
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > On Tue, Aug 4, 2020 at 23:47, Computerisms Corporation
>>>> > <bob@computerisms.ca> wrote:
>>>> >
>>>> > Hi Gilberto,
>>>> >
>>>> > My understanding is there can only be one arbiter per replicated
>>>> > set. I don't have a lot of practice with gluster, so this could
>>>> > be bad advice, but the way I dealt with it on my two servers was
>>>> > to use 6 bricks as distributed-replicated (this is also
>>>> > relatively easy to migrate to 3 servers if that happens for you
>>>> > in the future):
>>>> >
>>>> > Server1          Server2
>>>> > brick1           brick1.5
>>>> > arbiter1.5       brick2
>>>> > brick2.5         arbiter2.5
>>>> >
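>>>> > (A sketch of how that layout maps to a create command, with
>>>> > hypothetical host names and brick paths; each group of three
>>>> > bricks forms one replica set, the third brick being its arbiter:)
>>>> >
>>>> > gluster volume create VOL replica 3 arbiter 1 \
>>>> >   server1:/bricks/brick1 server2:/bricks/brick1.5 server1:/bricks/arbiter1.5 \
>>>> >   server2:/bricks/brick2 server1:/bricks/brick2.5 server2:/bricks/arbiter2.5
>>>> >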
>>>> > On 2020-08-04 7:00 p.m., Gilberto Nunes wrote:
>>>> > > Hi there.
>>>> > > I have two physical servers deployed as replica 2 and,
>>>> > > obviously, I got a split-brain.
>>>> > > So I am thinking of using two virtual machines, one on each
>>>> > > physical server...
>>>> > > Then these two VMs would act as arbiters of the gluster set...
>>>> > >
>>>> > > Is this doable?
>>>> > >
>>>> > > Thanks
>>>> > >
>>>>
>>>