<div id="yiv7042488709"><div id="yiv7042488709yqt58525" class="yiv7042488709yqt6112744554"><div>My bad, it should be <i style="white-space: pre-wrap;">gluster-ta-volume.service</i><br> <br clear="none"> <blockquote style="margin:0 0 20px 0;"> <div style="font-family:Roboto, sans-serif;color:#6D00F6;"> <div>On Wed, Feb 16, 2022 at 7:45, Diego Zuccato</div><div><diego.zuccato@unibo.it> wrote:</div> </div> <div style="padding:10px 0 0 20px;margin:10px 0 0 0;border-left:1px solid #6D00F6;"> No such process is defined. Just the standard glusterd.service and <br clear="none">glustereventsd.service. Using Debian stable.<br clear="none"><br clear="none">On 15/02/2022 15:41, Strahil Nikolov wrote:<br clear="none">> Any errors in gluster-ta.service on the arbiter node?<br clear="none">> <br clear="none">> Best Regards,<br clear="none">> Strahil Nikolov<br clear="none">> <br clear="none">>     On Tue, Feb 15, 2022 at 14:28, Diego Zuccato<br clear="none">>     <<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:diego.zuccato@unibo.it" target="_blank" href="mailto:diego.zuccato@unibo.it">diego.zuccato@unibo.it</a>> wrote:<br clear="none">>     Hello all.<br clear="none">> <br clear="none">>     I'm experimenting with thin-arbiter and getting disappointing results.<br clear="none">> <br clear="none">>     I have 3 hosts in the trusted pool:<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster --version<br clear="none">>     glusterfs 9.2<br clear="none">>     [...]<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" 
target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster pool list<br clear="none">>     UUID                                    Hostname        State<br clear="none">>     d4791fed-3e6d-4f8f-bdb6-4e0043610ead    nas3            Connected<br clear="none">>     bff398f0-9d1d-4bd0-8a47-0bf481d1d593    nas2            Connected<br clear="none">>     4607034c-919d-4675-b5fc-14e1cad90214    localhost      Connected<br clear="none">> <br clear="none">>     When I try to create a new volume, the first initialization succeeds:<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster v create Bck replica 2<br clear="none">>     thin-arbiter 1<br clear="none">>     nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck<br clear="none">>     volume create: Bck: success: please start the volume to access data<br clear="none">> <br clear="none">>     But adding a second pair of bricks segfaults the daemon:<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster v add-brick Bck<br clear="none">>     nas{1,3}:/bricks/01/Bck<br clear="none">>     Connection failed. Please check if gluster daemon is operational.<br clear="none">> <br clear="none">>     After erroring out, systemctl status glusterd reports the daemon in<br clear="none">>     "restarting" state and it eventually restarts. But the new brick is not<br clear="none">>     added to the volume, yet trying to re-add it yields a "brick is<br clear="none">>     already part of a volume" error. 
It seems glusterd crashes between marking the<br clear="none">>     brick dir as used and recording its data in the config.<br clear="none">> <br clear="none">>     If I try to add all the bricks at creation time, glusterd does not<br clear="none">>     die but the volume doesn't get created:<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# rm -rf /bricks/{00..07}/Bck && mkdir<br clear="none">>     /bricks/{00..07}/Bck<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster v create Bck replica 2<br clear="none">>     thin-arbiter 1<br clear="none">>     nas{1,3}:/bricks/00/Bck nas{1,3}:/bricks/01/Bck nas{1,3}:/bricks/02/Bck<br clear="none">>     nas{1,3}:/bricks/03/Bck nas{1,3}:/bricks/04/Bck nas{1,3}:/bricks/05/Bck<br clear="none">>     nas{1,3}:/bricks/06/Bck nas{1,3}:/bricks/07/Bck nas2:/bricks/arbiter/Bck<br clear="none">>     volume create: Bck: failed: Commit failed on localhost. 
Please check<br clear="none">>     the<br clear="none">>     log file for more details.<br clear="none">> <br clear="none">>     Couldn't find anything useful in the logs :(<br clear="none">> <br clear="none">>     If I create a "replica 3 arbiter 1" over the same brick directories<br clear="none">>     (just adding some directories to keep arbiters separated), it succeeds:<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster v create Bck replica 3<br clear="none">>     arbiter 1<br clear="none">>     nas{1,3}:/bricks/00/Bck nas2:/bricks/arbiter/Bck/00<br clear="none">>     volume create: Bck: success: please start the volume to access data<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# for T in {01..07}; do gluster v<br clear="none">>     add-brick Bck<br clear="none">>     nas{1,3}:/bricks/$T/Bck nas2:/bricks/arbiter/Bck/$T ; done<br clear="none">>     volume add-brick: success<br clear="none">>     volume add-brick: success<br clear="none">>     volume add-brick: success<br clear="none">>     volume add-brick: success<br clear="none">>     volume add-brick: success<br clear="none">>     volume add-brick: success<br clear="none">>     volume add-brick: success<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster v start 
Bck<br clear="none">>     volume start: Bck: success<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:root@nas1" target="_blank" href="mailto:root@nas1">root@nas1</a>>:~# gluster v info Bck<br clear="none">> <br clear="none">>     Volume Name: Bck<br clear="none">>     Type: Distributed-Replicate<br clear="none">>     Volume ID: 4786e747-8203-42bf-abe8-107a50b238ee<br clear="none">>     Status: Started<br clear="none">>     Snapshot Count: 0<br clear="none">>     Number of Bricks: 8 x (2 + 1) = 24<br clear="none">>     Transport-type: tcp<br clear="none">>     Bricks:<br clear="none">>     Brick1: nas1:/bricks/00/Bck<br clear="none">>     Brick2: nas3:/bricks/00/Bck<br clear="none">>     Brick3: nas2:/bricks/arbiter/Bck/00 (arbiter)<br clear="none">>     Brick4: nas1:/bricks/01/Bck<br clear="none">>     Brick5: nas3:/bricks/01/Bck<br clear="none">>     Brick6: nas2:/bricks/arbiter/Bck/01 (arbiter)<br clear="none">>     Brick7: nas1:/bricks/02/Bck<br clear="none">>     Brick8: nas3:/bricks/02/Bck<br clear="none">>     Brick9: nas2:/bricks/arbiter/Bck/02 (arbiter)<br clear="none">>     Brick10: nas1:/bricks/03/Bck<br clear="none">>     Brick11: nas3:/bricks/03/Bck<br clear="none">>     Brick12: nas2:/bricks/arbiter/Bck/03 (arbiter)<br clear="none">>     Brick13: nas1:/bricks/04/Bck<br clear="none">>     Brick14: nas3:/bricks/04/Bck<br clear="none">>     Brick15: nas2:/bricks/arbiter/Bck/04 (arbiter)<br clear="none">>     Brick16: nas1:/bricks/05/Bck<br clear="none">>     Brick17: nas3:/bricks/05/Bck<br clear="none">>     Brick18: nas2:/bricks/arbiter/Bck/05 (arbiter)<br clear="none">>     Brick19: nas1:/bricks/06/Bck<br clear="none">>     Brick20: nas3:/bricks/06/Bck<br clear="none">>     Brick21: nas2:/bricks/arbiter/Bck/06 (arbiter)<br clear="none">>     Brick22: nas1:/bricks/07/Bck<br 
clear="none">>     Brick23: nas3:/bricks/07/Bck<br clear="none">>     Brick24: nas2:/bricks/arbiter/Bck/07 (arbiter)<br clear="none">>     Options Reconfigured:<br clear="none">>     cluster.granular-entry-heal: on<br clear="none">>     storage.fips-mode-rchecksum: on<br clear="none">>     transport.address-family: inet<br clear="none">>     nfs.disable: on<br clear="none">>     performance.client-io-threads: off<br clear="none">> <br clear="none">>     Does thin arbiter support just one replicated set of bricks (a single subvolume)?<br clear="none">> <br clear="none">>     -- <br clear="none">>     Diego Zuccato<br clear="none">>     DIFA - Dip. di Fisica e Astronomia<br clear="none">>     Servizi Informatici<br clear="none">>     Alma Mater Studiorum - Università di Bologna<br clear="none">>     V.le Berti-Pichat 6/2 - 40127 Bologna - Italy<br clear="none">>     tel.: +39 051 20 95786<br clear="none">>     ________<br clear="none">> <br clear="none">> <br clear="none">> <br clear="none">>     Community Meeting Calendar:<br clear="none">> <br clear="none">>     Schedule -<br clear="none">>     Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC<br clear="none">>     Bridge: <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a><br clear="none">>     <<a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://meet.google.com/cpu-eiue-hvk">https://meet.google.com/cpu-eiue-hvk</a>><br clear="none">>     Gluster-users mailing list<br clear="none">>     <a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a> <mailto:<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:Gluster-users@gluster.org" target="_blank" href="mailto:Gluster-users@gluster.org">Gluster-users@gluster.org</a>><br clear="none">>     <a rel="nofollow noopener noreferrer" 
shape="rect" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a><br clear="none">>     <<a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://lists.gluster.org/mailman/listinfo/gluster-users">https://lists.gluster.org/mailman/listinfo/gluster-users</a>><div id="yiv7042488709yqtfd65694" class="yiv7042488709yqt9127731017"><br clear="none">> <br clear="none"><br clear="none">-- <br clear="none">Diego Zuccato<br clear="none">DIFA - Dip. di Fisica e Astronomia<br clear="none">Servizi Informatici<br clear="none">Alma Mater Studiorum - Università di Bologna<br clear="none">V.le Berti-Pichat 6/2 - 40127 Bologna - Italy<br clear="none">tel.: +39 051 20 95786<br clear="none"></div> </div> </blockquote></div></div></div>
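As a footnote to the thread: the replica-3 / arbiter-1 workaround's add-brick loop can be dry-run before touching a live cluster. The sketch below (assuming bash, and the same `nas1`/`nas3`/`nas2` hosts and brick paths used in the thread) only echoes each command instead of executing it, so the brick/arbiter layout can be reviewed first:

```shell
# Dry run of the add-brick loop from the thread (bricks 01..07).
# The thread's one-liner used bash brace expansion ({01..07}, nas{1,3});
# here the expansions are written out so the sketch also works under plain sh.
for T in 01 02 03 04 05 06 07; do
  echo "gluster v add-brick Bck nas1:/bricks/$T/Bck nas3:/bricks/$T/Bck nas2:/bricks/arbiter/Bck/$T"
done
```

Once the printed layout looks right, the same loop run with `gluster` in place of `echo` (as in the thread) applies it for real.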