<div dir="ltr">One of the thing with thin-arbiter is, the process is not expected to be managed by the glusterd itself, ie, it is designed to be outside the storage pool, in a cloud or a high latency backup/more available setup.<div><br></div><div>We (kadalu project) make use of it in our 'Replica 2' volume types. For testing see if using tie-breaker.kadalu.io:/mnt as the brick for thin-arbiter works for you.</div><div><br></div><div>-Amar</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Feb 16, 2022 at 6:22 PM Diego Zuccato <<a href="mailto:diego.zuccato@unibo.it">diego.zuccato@unibo.it</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Not there. It's not one of the defined services :(<br>

On Wed, Feb 16, 2022 at 6:22 PM Diego Zuccato <diego.zuccato@unibo.it> wrote:
Not there. It's not one of the defined services :(
Maybe Debian does not support it?

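For reference, a quick way to check which units the package ships (assuming
systemd) is something like:

    systemctl list-unit-files 'gluster*'

Here that lists only glusterd.service and glustereventsd.service.
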
On 16/02/2022 13:26, Strahil Nikolov wrote:
> My bad, it should be /gluster-ta-volume.service/
>
>    On Wed, Feb 16, 2022 at 7:45, Diego Zuccato
>    <diego.zuccato@unibo.it> wrote:
>    No such process is defined. Just the standard glusterd.service and
>    glustereventsd.service. Using Debian stable.
>
>    On 15/02/2022 15:41, Strahil Nikolov wrote:
>    > Any errors in gluster-ta.service on the arbiter node?
>    >
>    > Best Regards,
>    > Strahil Nikolov
>    >
>    >    On Tue, Feb 15, 2022 at 14:28, Diego Zuccato
>    >    <diego.zuccato@unibo.it> wrote:
>    >    Hello all.
>    >
>    >    I'm experimenting with thin-arbiter and getting disappointing
>    >    results.
>    >
>    >    I have 3 hosts in the trusted pool:
>    >    root@nas1:~# gluster --version
>    >    glusterfs 9.2
>    >    [...]
>    >    root@nas1:~# gluster pool list
>    >    UUID                                  Hostname    State
>    >    d4791fed-3e6d-4f8f-bdb6-4e0043610ead  nas3        Connected
>    >    bff398f0-9d1d-4bd0-8a47-0bf481d1d593  nas2        Connected
>    >    4607034c-919d-4675-b5fc-14e1cad90214  localhost   Connected
>    >
>    >    When I try to create a new volume, the first initialization succeeds:
>    >    root@nas1:~# gluster v create Bck replica 2 thin-arbiter 1
>    >    nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck
>    >    volume create: Bck: success: please start the volume to access data
>    >
>    >    But adding a second brick segfaults the daemon:
>    >    root@nas1:~# gluster v add-brick Bck nas{1,3}:/bricks/01/Bck
>    >    Connection failed. Please check if gluster daemon is operational.
>    >
>    >    After erroring out, systemctl status glusterd reports the daemon in
>    >    "restarting" state and it eventually restarts. But the new brick is
>    >    not added to the volume, even though trying to re-add it yields a
>    >    "brick is already part of a volume" error. It seems glusterd crashes
>    >    between marking the brick dir as used and recording its data in the
>    >    config.
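>    >    (Side note, untested here: to reuse a brick dir that gluster
>    >    considers "already part of a volume", clearing its volume-id xattr
>    >    and the .glusterfs subdir should be enough, along the lines of:
>    >    setfattr -x trusted.glusterfs.volume-id /bricks/01/Bck
>    >    rm -rf /bricks/01/Bck/.glusterfs
>    >    In my tests I simply wiped and recreated the directories.)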
>    >
>    >    If I try to add all the bricks during the creation, glusterd does
>    >    not die but the volume doesn't get created:
>    >    root@nas1:~# rm -rf /bricks/{00..07}/Bck && mkdir /bricks/{00..07}/Bck
>    >    root@nas1:~# gluster v create Bck replica 2 thin-arbiter 1
>    >    nas{1,3}:/bricks/00/Bck nas{1,3}:/bricks/01/Bck nas{1,3}:/bricks/02/Bck
>    >    nas{1,3}:/bricks/03/Bck nas{1,3}:/bricks/04/Bck nas{1,3}:/bricks/05/Bck
>    >    nas{1,3}:/bricks/06/Bck nas{1,3}:/bricks/07/Bck nas2:/bricks/arbiter/Bck
>    >    volume create: Bck: failed: Commit failed on localhost. Please check
>    >    the log file for more details.
>    >
>    >    Couldn't find anything useful in the logs :(
>    >
>    >    If I create a "replica 3 arbiter 1" over the same brick directories
>    >    (just adding some directories to keep arbiters separated), it succeeds:
>    >    root@nas1:~# gluster v create Bck replica 3 arbiter 1
>    >    nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck/00
>    >    volume create: Bck: success: please start the volume to access data
>    >    root@nas1:~# for T in {01..07}; do gluster v add-brick Bck
>    >    nas{1,3}:/bricks/$T/Bck nas2:/bricks/arbiter/Bck/$T ; done
>    >    volume add-brick: success
>    >    volume add-brick: success
>    >    volume add-brick: success
>    >    volume add-brick: success
>    >    volume add-brick: success
>    >    volume add-brick: success
>    >    volume add-brick: success
>    >    root@nas1:~# gluster v start Bck
>    >    volume start: Bck: success
>    >    root@nas1:~# gluster v info Bck
>    >
>    >    Volume Name: Bck
>    >    Type: Distributed-Replicate
>    >    Volume ID: 4786e747-8203-42bf-abe8-107a50b238ee
>    >    Status: Started
>    >    Snapshot Count: 0
>    >    Number of Bricks: 8 x (2 + 1) = 24
>    >    Transport-type: tcp
>    >    Bricks:
>    >    Brick1: nas1:/bricks/00/Bck
>    >    Brick2: nas3:/bricks/00/Bck
>    >    Brick3: nas2:/bricks/arbiter/Bck/00 (arbiter)
>    >    Brick4: nas1:/bricks/01/Bck
>    >    Brick5: nas3:/bricks/01/Bck
>    >    Brick6: nas2:/bricks/arbiter/Bck/01 (arbiter)
>    >    Brick7: nas1:/bricks/02/Bck
>    >    Brick8: nas3:/bricks/02/Bck
>    >    Brick9: nas2:/bricks/arbiter/Bck/02 (arbiter)
>    >    Brick10: nas1:/bricks/03/Bck
>    >    Brick11: nas3:/bricks/03/Bck
>    >    Brick12: nas2:/bricks/arbiter/Bck/03 (arbiter)
>    >    Brick13: nas1:/bricks/04/Bck
>    >    Brick14: nas3:/bricks/04/Bck
>    >    Brick15: nas2:/bricks/arbiter/Bck/04 (arbiter)
>    >    Brick16: nas1:/bricks/05/Bck
>    >    Brick17: nas3:/bricks/05/Bck
>    >    Brick18: nas2:/bricks/arbiter/Bck/05 (arbiter)
>    >    Brick19: nas1:/bricks/06/Bck
>    >    Brick20: nas3:/bricks/06/Bck
>    >    Brick21: nas2:/bricks/arbiter/Bck/06 (arbiter)
>    >    Brick22: nas1:/bricks/07/Bck
>    >    Brick23: nas3:/bricks/07/Bck
>    >    Brick24: nas2:/bricks/arbiter/Bck/07 (arbiter)
>    >    Options Reconfigured:
>    >    cluster.granular-entry-heal: on
>    >    storage.fips-mode-rchecksum: on
>    >    transport.address-family: inet
>    >    nfs.disable: on
>    >    performance.client-io-threads: off
>    >
>    >    Does thin arbiter support just one replica of bricks?
>    >
>    >    --
>    >    Diego Zuccato
>    >    DIFA - Dip. di Fisica e Astronomia
>    >    Servizi Informatici
>    >    Alma Mater Studiorum - Università di Bologna
>    >    V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>    >    tel.: +39 051 20 95786
>
>    --
>    Diego Zuccato
>    DIFA - Dip. di Fisica e Astronomia
>    Servizi Informatici
>    Alma Mater Studiorum - Università di Bologna
>    V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
>    tel.: +39 051 20 95786
>

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786
________

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

--
https://kadalu.io
Container Storage made easy!