<div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hello,</div><div><br></div><div>It does show everything as Connected and 0 for the existing bricks, gfs1 and gfs2. The new brick gfs3 isn't listed, presumably because of the failure as per my original email. Would anyone have any further suggestions on how to prevent the "Transport endpoint is not connected" error when adding the new brick?</div><div><br></div><div># gluster volume heal gvol0 info summary<br>Brick gfs1:/nodirectwritedata/gluster/gvol0<br>Status: Connected<br>Total Number of entries: 0<br>Number of entries in heal pending: 0<br>Number of entries in split-brain: 0<br>Number of entries possibly healing: 0<br><br>Brick gfs2:/nodirectwritedata/gluster/gvol0<br>Status: Connected<br>Total Number of entries: 0<br>Number of entries in heal pending: 0<br>Number of entries in split-brain: 0<br>Number of entries possibly healing: 0</div><div><br></div><div><br></div><div># gluster volume status all<br>Status of volume: gvol0<br>Gluster process TCP Port RDMA Port Online Pid<br>------------------------------------------------------------------------------<br>Brick gfs1:/nodirectwritedata/gluster/gvol0 49152 0 Y 7706 <br>Brick gfs2:/nodirectwritedata/gluster/gvol0 49152 0 Y 7624 <br>Self-heal Daemon on localhost N/A N/A Y 47636<br>Self-heal Daemon on gfs3 N/A N/A Y 18542<br>Self-heal Daemon on gfs2 N/A N/A Y 37192<br> <br>Task Status of Volume gvol0<br>------------------------------------------------------------------------------<br>There are no active volume tasks</div><div><br></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 18 May 2019 at 22:34, Strahil <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><p dir="ltr">Just run 'gluster volume heal my_volume info summary'.</p>
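<p dir="ltr">For scripted monitoring, a check like this can also be automated. The sketch below is not a gluster command: the <code>check_heal_summary</code> helper name and the awk logic are assumptions of mine, and the here-doc sample is simply the summary output quoted above. Against a live cluster you would pipe <code>gluster volume heal gvol0 info summary</code> into the same filter.</p>

```shell
#!/bin/sh
# Hypothetical helper (not part of gluster): scan the output of
# `gluster volume heal <vol> info summary` and print a line for every
# brick that is not Connected or that reports a non-zero entry count.
# Exits non-zero if anything looks unhealthy.
check_heal_summary() {
  awk '
    /^Brick /                         { brick = $2 }              # remember current brick
    /^Status:/ && $2 != "Connected"   { print brick ": status " $2; bad = 1 }
    /Number of entries/ && $NF != "0" { print brick ": " $0;       bad = 1 }
    END                               { exit bad }
  '
}

# Sample input: the heal-info summary quoted earlier in this email.
cat <<'EOF' | check_heal_summary && echo "all bricks healthy"
Brick gfs1:/nodirectwritedata/gluster/gvol0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick gfs2:/nodirectwritedata/gluster/gvol0
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0
EOF
```

<p dir="ltr">On the output above it prints "all bricks healthy"; a Disconnected brick or a non-zero split-brain count would be reported instead and the helper would exit non-zero, which makes it easy to hook into cron or a monitoring agent.</p>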
<p dir="ltr">It will report any issues - everything should be 'Connected' and show '0'.</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="gmail-m_5850751063211820068quote">On May 18, 2019 02:01, David Cunningham <<a href="mailto:dcunningham@voisonics.com" target="_blank">dcunningham@voisonics.com</a>> wrote:<br type="attribution"><blockquote class="gmail-m_5850751063211820068quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hi Ravi,</div><div><br></div><div>The existing two nodes aren't in split-brain, at least that I'm aware of. Running "gluster volume status all" doesn't show any problem.</div><div><br></div><div>I'm not sure what "in metadata" means. Can you please explain that one?</div><div><br></div></div></div><br><div class="gmail-m_5850751063211820068elided-text"><div dir="ltr">On Fri, 17 May 2019 at 22:43, Ravishankar N <<a href="mailto:ravishankar@redhat.com" target="_blank">ravishankar@redhat.com</a>> wrote:<br></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<p><br>
</p>
<div>On 17/05/19 5:59 AM, David Cunningham
wrote:<br>
</div>
<blockquote>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>Hello,</div>
<div><br>
</div>
<div>We're adding an arbiter node to an existing
volume and having an issue. Can anyone help? The
root cause error appears to be
"00000000-0000-0000-0000-000000000001: failed to
resolve (Transport endpoint is not connected)",
as below.</div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<p>Was your root directory of the replica 2 volume in metadata or
entry split-brain? If yes, you need to resolve it before
proceeding with the add-brick.<br>
</p>
<p>-Ravi<br>
</p>
<p><br>
</p>
<blockquote>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div>We are running glusterfs 5.6.1. Thanks in
advance for any assistance!<br>
</div>
<div><br>
</div>
<div>On existing node gfs1, trying to add new
arbiter node gfs3:</div>
<div><br>
</div>
<div># gluster volume add-brick gvol0 replica 3
arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0<br>
volume add-brick: failed: Commit failed on gfs3.
Please check log file for details.<br>
<br>
</div>
<div>On new node gfs3 in
gvol0-add-brick-mount.log:</div>
<div><br>
</div>
<div>[2019-05-17 01:20:22.689721] I
[fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse:
FUSE inited with protocol versions: glusterfs
7.24 kernel 7.22<br>
[2019-05-17 01:20:22.689778] I
[fuse-bridge.c:4878:fuse_graph_sync] 0-fuse:
switched to graph 0<br>
[2019-05-17 01:20:22.694897] E
[fuse-bridge.c:4336:fuse_first_lookup] 0-fuse:
first lookup on root failed (Transport endpoint
is not connected)<br>
[2019-05-17 01:20:22.699770] W
[fuse-resolve.c:127:fuse_resolve_gfid_cbk]
0-fuse: 00000000-0000-0000-0000-000000000001:
failed to resolve (Transport endpoint is not
connected)<br>
[2019-05-17 01:20:22.699834] W
[fuse-bridge.c:3294:fuse_setxattr_resume]
0-glusterfs-fuse: 2: SETXATTR
00000000-0000-0000-0000-000000000001/1
(trusted.add-brick) resolution failed<br>
[2019-05-17 01:20:22.715656] I
[fuse-bridge.c:5144:fuse_thread_proc] 0-fuse:
initating unmount of /tmp/mntQAtu3f<br>
[2019-05-17 01:20:22.715865] W
[glusterfsd.c:1500:cleanup_and_exit]
(-->/lib64/<a href="http://libpthread.so" target="_blank">libpthread.so</a>.0(+0x7dd5)
</div></div></div></div></div></div></div></div></div></blockquote></div></blockquote></div></blockquote></div></blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>David Cunningham, Voisonics Limited<br><a href="http://voisonics.com/" target="_blank">http://voisonics.com/</a><br>USA: +1 213 221 1092<br>New Zealand: +64 (0)28 2558 3782</div></div></div></div></div></div></div></div></div></div></div>