<p dir="ltr">As everything seems OK, you can check if your arbiter is ok.<br>
Run 'gluster peer status' on all nodes.</p>
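<p dir="ltr">For reference, a healthy 3-node pool should report something like this on each node (this is the view from gfs1; the UUIDs are only illustrative):<br>
Number of Peers: 2<br>
<br>
Hostname: gfs2<br>
Uuid: 26ae19a6-xxxx-xxxx-xxxx-xxxxxxxxxxxx<br>
State: Peer in Cluster (Connected)<br>
<br>
Hostname: gfs3<br>
Uuid: 7b1f2c4d-xxxx-xxxx-xxxx-xxxxxxxxxxxx<br>
State: Peer in Cluster (Connected)</p>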
<p dir="ltr">If all peers report 2 peers connected ,you can run:<br>
gluster volume add-brick gvol0 replica 2 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0</p>
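<p dir="ltr">If the add-brick succeeds, 'gluster volume info gvol0' should then show the arbiter in the brick list - roughly like this (output trimmed):<br>
Volume Name: gvol0<br>
Type: Replicate<br>
Number of Bricks: 1 x (2 + 1) = 3<br>
Transport-type: tcp<br>
Bricks:<br>
Brick1: gfs1:/nodirectwritedata/gluster/gvol0<br>
Brick2: gfs2:/nodirectwritedata/gluster/gvol0<br>
Brick3: gfs3:/nodirectwritedata/gluster/gvol0 (arbiter)</p>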
<p dir="ltr">Bewt Regards,<br>
Strahil Nikolov</p>
<div class="quote">On May 20, 2019 02:31, David Cunningham &lt;dcunningham@voisonics.com&gt; wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hello,</div><div><br /></div><div>It does show everything as Connected and 0 for the existing bricks, gfs1 and gfs2. The new brick gfs3 isn&#39;t listed, presumably because of the failure as per my original email. Would anyone have any further suggestions on how to prevent the &#34;Transport endpoint is not connected&#34; error when adding the new brick?</div><div><br /></div><div># gluster volume heal gvol0 info summary<br />Brick gfs1:/nodirectwritedata/gluster/gvol0<br />Status: Connected<br />Total Number of entries: 0<br />Number of entries in heal pending: 0<br />Number of entries in split-brain: 0<br />Number of entries possibly healing: 0<br /><br />Brick gfs2:/nodirectwritedata/gluster/gvol0<br />Status: Connected<br />Total Number of entries: 0<br />Number of entries in heal pending: 0<br />Number of entries in split-brain: 0<br />Number of entries possibly healing: 0</div><div><br /></div><div><br /></div><div># gluster volume status all<br />Status of volume: gvol0<br />Gluster process                             TCP Port  RDMA Port  Online  Pid<br />------------------------------------------------------------------------------<br />Brick gfs1:/nodirectwritedata/gluster/gvol0 49152     0          Y       7706 <br />Brick gfs2:/nodirectwritedata/gluster/gvol0 49152     0          Y       7624 <br />Self-heal Daemon on localhost               N/A       N/A        Y       47636<br />Self-heal Daemon on gfs3                    N/A       N/A        Y       18542<br />Self-heal Daemon on gfs2                    N/A       N/A        Y       37192<br /> <br />Task Status of Volume gvol0<br />------------------------------------------------------------------------------<br />There are no active volume task</div><div><br /></div></div></div></div><br /><div class="elided-text"><div dir="ltr">On Sat, 18 May 2019 at 22:34, Strahil &lt;<a href="mailto:hunter86_bg&#64;yahoo.com">hunter86_bg&#64;yahoo.com</a>&gt; wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><p dir="ltr">Just run &#39;gluster volume heal my_volume info summary&#39;.</p>
<p dir="ltr">It will report any issues - everything should be &#39;Connected&#39; and show &#39;0&#39;.</p>
<p dir="ltr">Best Regards,<br />
Strahil Nikolov</p>
<div>On May 18, 2019 02:01, David Cunningham &lt;<a href="mailto:dcunningham&#64;voisonics.com">dcunningham&#64;voisonics.com</a>&gt; wrote:<br /><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hi Ravi,</div><div><br /></div><div>The existing two nodes aren&#39;t in split-brain, at least that I&#39;m aware of. Running &#34;gluster volume status all&#34; doesn&#39;t show any problem.</div><div><br /></div><div>I&#39;m not sure what &#34;in metadata&#34; means. Can you please explain that one?</div><div><br /></div></div></div><br /><div><div dir="ltr">On Fri, 17 May 2019 at 22:43, Ravishankar N &lt;<a href="mailto:ravishankar&#64;redhat.com">ravishankar&#64;redhat.com</a>&gt; wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex">
  
    
  
  <div>
    <div>On 17/05/19 5:59 AM, David Cunningham
      wrote:<br />
    </div>
    <blockquote>
      
      <div dir="ltr">
        <div dir="ltr">
          <div dir="ltr">
            <div dir="ltr">
              <div dir="ltr">
                <div dir="ltr">
                  <div dir="ltr">
                    <div dir="ltr">
                      <div>Hello,</div>
                      <div><br />
                      </div>
                      <div>We&#39;re adding an arbiter node to an existing
                        volume and having an issue. Can anyone help? The
                        root cause error appears to be
                        &#34;00000000-0000-0000-0000-000000000001: failed to
                        resolve (Transport endpoint is not connected)&#34;,
                        as below.</div>
                      <div><br />
                      </div>
                    </div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <p>Was your root directory of the replica 2 volume in metadata or
      entry split-brain? If yes, you need to resolve it before
      proceeding with the add-brick.<br />
    </p>
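    <p>(You can check with 'gluster volume heal gvol0 info split-brain' -
      if the volume is clean it should list zero entries for each brick.)<br />
    </p>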
    <p>-Ravi<br />
    </p>
    <blockquote>
      <div dir="ltr">
        <div dir="ltr">
          <div dir="ltr">
            <div dir="ltr">
              <div dir="ltr">
                <div dir="ltr">
                  <div dir="ltr">
                    <div dir="ltr">
                      <div>We are running glusterfs 5.6.1. Thanks in
                        advance for any assistance!<br />
                      </div>
                      <div><br />
                      </div>
                      <div>On existing node gfs1, trying to add new
                        arbiter node gfs3:</div>
                      <div><br />
                      </div>
                      <div># gluster volume add-brick gvol0 replica 3
                        arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0<br />
                        volume add-brick: failed: Commit failed on gfs3.
                        Please check log file for details.<br />
                        <br />
                      </div>
                      <div>On new node gfs3 in
                        gvol0-add-brick-mount.log:</div>
                      <div><br />
                      </div>
                      <div>[2019-05-17 01:20:22.689721] I
                        [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse:
                        FUSE inited with protocol versions: glusterfs
                        7.24 kernel 7.22<br />
                        [2019-05-17 01:20:22.689778] I
                        [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse:
                        switched to graph 0<br />
                        [2019-05-17 01:20:22.694897] E
                        [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse:
                        first lookup on root failed (Transport endpoint
                        is not connected)<br />
                        [2019-05-17 01:20:22.699770] W
                        [fuse-resolve.c:127:fuse_resolve_gfid_cbk]
                        0-fuse: 00000000-0000-0000-0000-000000000001:
                        failed to resolve (Transport endpoint is not
                        connected)<br />
                        [2019-05-17 01:20:22.699834] W
                        [fuse-bridge.c:3294:fuse_setxattr_resume]
                        0-glusterfs-fuse: 2: SETXATTR
                        00000000-0000-0000-0000-000000000001/1
                        (trusted.add-brick) resolution failed<br />
                        [2019-05-17 01:20:22.715656] I
                        [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse:
                        initating unmount of /tmp/mntQAtu3f<br />
                        [2019-05-17 01:20:22.715865] W
                        [glusterfsd.c:1500:cleanup_and_exit]
                        (--&gt;/lib64/libpthread.so.0(&#43;0x7dd5)
</div></div></div></div></div></div></div></div></div></blockquote></div></blockquote></div></blockquote></div></blockquote></div><br clear="all" /><br />-- <br /><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br /><a href="http://voisonics.com/">http://voisonics.com/</a><br />USA: &#43;1 213 221 1092<br />New Zealand: &#43;64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div>