<p dir="ltr">Just run 'gluster volume heal my_volume info summary'.</p>
<p dir="ltr">It will report any issues - everything should be 'Connected' and show '0'.</p>
<p dir="ltr">Best Regards,<br>
Strahil Nikolov</p>
<div class="quote">On May 18, 2019 02:01, David Cunningham <dcunningham@voisonics.com> wrote:<br type='attribution'><blockquote class="quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Hi Ravi,</div><div><br /></div><div>The existing two nodes aren't in split-brain, at least that I'm aware of. Running "gluster volume status all" doesn't show any problem.</div><div><br /></div><div>I'm not sure what "in metadata" means. Can you please explain that one?</div><div><br /></div></div></div><br /><div class="elided-text"><div dir="ltr">On Fri, 17 May 2019 at 22:43, Ravishankar N <<a href="mailto:ravishankar@redhat.com">ravishankar@redhat.com</a>> wrote:<br /></div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb( 204 , 204 , 204 );padding-left:1ex">
>> On 17/05/19 5:59 AM, David Cunningham wrote:
>>> Hello,
>>>
>>> We're adding an arbiter node to an existing volume and having an issue. Can anyone help? The root cause error appears to be "00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)", as below.
>>
>> Was the root directory of your replica 2 volume in metadata or entry split-brain? If so, you need to resolve it before proceeding with the add-brick.
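>>
>> To check (a sketch; 'gvol0' and the brick path are taken from your add-brick command):
>>
>> # gluster volume heal gvol0 info split-brain
>>
>> and, on each existing node, inspect the AFR xattrs on the brick root:
>>
>> # getfattr -d -m . -e hex /nodirectwritedata/gluster/gvol0
>>
>> If the bricks show non-zero trusted.afr.gvol0-client-* pending counts blaming each other, the root directory is in split-brain and needs healing before the add-brick.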
>>
>> -Ravi
>>
>>> We are running glusterfs 5.6.1. Thanks in advance for any assistance!
>>>
>>> On existing node gfs1, trying to add new arbiter node gfs3:
>>>
>>> # gluster volume add-brick gvol0 replica 3 arbiter 1 gfs3:/nodirectwritedata/gluster/gvol0
>>> volume add-brick: failed: Commit failed on gfs3. Please check log file for details.
>>>
>>> On new node gfs3, in gvol0-add-brick-mount.log:
>>>
>>> [2019-05-17 01:20:22.689721] I [fuse-bridge.c:4267:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
>>> [2019-05-17 01:20:22.689778] I [fuse-bridge.c:4878:fuse_graph_sync] 0-fuse: switched to graph 0
>>> [2019-05-17 01:20:22.694897] E [fuse-bridge.c:4336:fuse_first_lookup] 0-fuse: first lookup on root failed (Transport endpoint is not connected)
>>> [2019-05-17 01:20:22.699770] W [fuse-resolve.c:127:fuse_resolve_gfid_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001: failed to resolve (Transport endpoint is not connected)
>>> [2019-05-17 01:20:22.699834] W [fuse-bridge.c:3294:fuse_setxattr_resume] 0-glusterfs-fuse: 2: SETXATTR 00000000-0000-0000-0000-000000000001/1 (trusted.add-brick) resolution failed
>>> [2019-05-17 01:20:22.715656] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: initating unmount of /tmp/mntQAtu3f
>>> [2019-05-17 01:20:22.715865] W [glusterfsd.c:1500:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dd5)