[Gluster-users] Remove and re-add bricks/peers

Tom Cannaerts - INTRACTO tom.cannaerts at intracto.com
Tue Jul 18 07:18:36 UTC 2017


We'll definitely look into upgrading this, but it's an older, legacy system,
so we need to see what we can do without breaking it.

Returning to the re-adding question: what steps do I need to take to clear
the config of the failed peers? Do I just wipe the data directory of the
volume, or do I need to clear other config files/folders as well?
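
For reference, here is the cleanup I have in mind for each failed node,
pieced together from the docs (paths assume a default install, and
/data/brick1 stands in for our real brick path; please correct anything
that's wrong):

    # glusterd is already stopped on these nodes
    # wipe the glusterd config: peer list, volume definitions, node UUID
    rm -rf /var/lib/glusterd/*

    # wipe the brick contents, including gluster's internal .glusterfs tree
    rm -rf /data/brick1/.glusterfs /data/brick1/*

    # drop the gluster extended attributes from the brick root so the
    # directory can be reused as a brand-new brick
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1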

Tom


On Mon, Jul 17, 2017 at 16:39, Atin Mukherjee <amukherj at redhat.com> wrote:

> That's the way. However, I'd like to highlight that you're running a very
> old gluster release. The current release is 3.11, which is STM (short-term
> maintenance), and long-term support is on 3.10. You should consider
> upgrading to at least 3.10.
>
> On Mon, Jul 17, 2017 at 3:25 PM, Tom Cannaerts - INTRACTO <
> tom.cannaerts at intracto.com> wrote:
>
>> We had some issues with a volume. The volume is a 3-replica volume with 3
>> gluster 3.5.7 peers. We are now in a situation where only 1 of the 3 nodes
>> is operational. If we restart the gluster services on one of the other
>> nodes, the entire volume becomes unresponsive.
>>
>> After a lot of trial and error, we have come to the conclusion that we do
>> not want to try to rejoin the other 2 nodes in their current form. We would
>> like to completely remove them from the config of the running node,
>> entirely reset the config on the nodes themselves, and then re-add them as
>> if they were new nodes, having them completely sync the volume from the
>> working node.
>>
>> What would be the correct procedure for this? I assume I can use "gluster
>> volume remove-brick" to force-remove the failed bricks from the volume and
>> decrease the replica count, and then use "gluster peer detach" to
>> force-remove the peers from the config, all on the currently still-working
>> node; a sketch of what I mean is below. But what do I need to do to
>> completely clear the config and data of the failed peers? The gluster
>> processes are currently not running on those nodes, but the config and
>> data are still present. So basically, I need to be able to clean them out
>> before restarting them, so that they start in a clean state and don't try
>> to connect to or interfere with the still-working node.
>>
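>> Concretely, I'm imagining something like the following on the working node
>> ("myvol", node2/node3 and the brick paths below are placeholders, not our
>> real names; corrections welcome):
>>
>>     # drop the two dead bricks and reduce the replica count to 1
>>     gluster volume remove-brick myvol replica 1 \
>>         node2:/data/brick1 node3:/data/brick1 force
>>
>>     # remove the dead peers from the trusted pool
>>     gluster peer detach node2 force
>>     gluster peer detach node3 force
>>
>>     # later, once the failed nodes are wiped and restarted, re-add them
>>     gluster peer probe node2
>>     gluster volume add-brick myvol replica 2 node2:/data/brick1
>>     gluster volume heal myvol full
>>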
>> Thanks,
>>
>> Tom
>>
--
Kind regards,
Tom Cannaerts


Service and Maintenance
Intracto - digital agency

Zavelheide 15 - 2200 Herentals
Tel: +32 14 28 29 29
www.intracto.com

