AH HA! Found the errant 3rd node. In testing corosync for NFS, a lock
volume had been created, and that was still holding a reference to the
peer. Dropped that volume and the peer detached as expected. (A rough
sketch of the check that turned it up is below the quoted thread.)

On Thu, 2018-05-31 at 14:41 +0530, Atin Mukherjee wrote:
> On Wed, May 30, 2018 at 10:55 PM, Jim Kinney <jim.kinney@gmail.com> wrote:
> > All,
> >
> > I added a third peer as an arbiter brick host to a replica 2 cluster.
> > Then I realized I can't use it, since it has no InfiniBand like the
> > other two hosts (InfiniBand and Ethernet for clients). So I removed
> > the new arbiter bricks from all of the volumes. However, I can't
> > detach the peer, as it keeps saying there are bricks it hosts.
> > Nothing in volume status or info shows that host to be involved.
> >
> > gluster peer detach innuendo force
> > peer detach: failed: Brick(s) with the peer innuendo exist in cluster
>
> How did you remove the arbiter bricks from the volumes? If all the
> brick removals were successful, then the 3rd host shouldn't be hosting
> any bricks. Could you provide the output of gluster volume info from
> all the nodes?
>
> > The self-heal daemon is still running on innuendo for each brick.
> >
> > Should I re-add the arbiter brick and wait for the arbiter heal
> > process to complete? How do I take the arbiter brick out without
> > breaking things? It was added using:
> >
> > for fac in <list of volumes>; do gluster volume add-brick ${fac}2 replica 3 arbiter 1 innuendo:/data/glusterfs/${fac}2/brick; done
> >
> > And then removed using:
> >
> > for fac in <list of volumes>; do gluster volume remove-brick ${fac}2 replica 2 innuendo:/data/glusterfs/${fac}2/brick force; done
> >
> > Adding a new 3rd full brick host soon to avoid split-brain; trying to
> > get this cleaned up before the new hardware arrives and I start the
> > sync.
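For the archives, something along the lines of the loop below is the
sort of check that turns up a leftover volume like that. The lock volume
name "nfs-lockvol" is just a placeholder; substitute whatever
"gluster volume list" actually reports, and "innuendo" is the peer here.

# Find any volume that still has a brick on the peer -- this is what
# "peer detach" complains about even when the expected volumes look clean.
for vol in $(gluster volume list); do
    gluster volume info "$vol" | grep -q 'innuendo:' && echo "$vol still has a brick on innuendo"
done

# Stop and delete the stray volume (placeholder name), then detach the peer:
gluster volume stop nfs-lockvol
gluster volume delete nfs-lockvol
gluster peer detach innuendo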
-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain
http://heretothereideas.blogspot.com/