[Gluster-users] Remove and re-add bricks/peers

Tom Cannaerts - INTRACTO tom.cannaerts at intracto.com
Mon Jul 17 09:55:03 UTC 2017


We had some issues with a volume. The volume is a 3-replica volume across 3
gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes
is operational. If we restart gluster on one of the other nodes, the
entire volume becomes unresponsive.

After a lot of trial and error, we have come to the conclusion that we do
not want to try to rejoin the other 2 nodes in their current form. We would
like to completely remove them from the config of the running node,
entirely reset the config on those nodes themselves, and then re-add them as
if they were new nodes, letting them completely sync the volume from the
working node.

What would be the correct procedure for this? I assume I can use "gluster
volume remove-brick" to force-remove the failed bricks from the volume and
decrease the replica count, and then use "gluster peer detach" to
force-remove the peers from the config, all on the currently still working
node. But what do I need to do to completely clear the config and data of
the failed peers? The gluster processes are currently not running on these
nodes, but the config and data are still present. So basically, I need to
clean them out before restarting them, so that they start in a clean state
and do not try to connect to or interfere with the still-working node.
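
For reference, this is roughly what I have in mind. The volume name
(VOLNAME), hostnames (node2, node3) and brick paths (/data/brick) below are
placeholders, not our actual values. On the still-working node:

    # drop the two failed bricks and reduce the replica count to 1
    gluster volume remove-brick VOLNAME replica 1 \
        node2:/data/brick node3:/data/brick force

    # remove the failed peers from the trusted pool
    gluster peer detach node2 force
    gluster peer detach node3 force

Then, on each of the failed nodes (glusterd is already stopped there), I
assume clearing the config means wiping /var/lib/glusterd, and clearing the
data means removing the old brick contents, including the .glusterfs
directory and the gluster extended attributes on the brick root:

    # on node2 and node3, while glusterd is not running
    # note: this also removes glusterd.info, so the node gets a new UUID
    rm -rf /var/lib/glusterd/*
    # remove the old brick data entirely
    rm -rf /data/brick

After that, the idea would be to start glusterd on those nodes again, probe
them from the working node, add the bricks back with "gluster volume
add-brick VOLNAME replica 3 node2:/data/brick node3:/data/brick" and then
trigger a full self-heal with "gluster volume heal VOLNAME full". Does that
sound like the right approach, or is there anything else that needs to be
cleared?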

Thanks,

Tom


-- 
Kind regards,
Tom Cannaerts


Service and Maintenance
Intracto - digital agency

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com



