[Gluster-users] Safely remove one replica
amukherj at redhat.com
Thu Jun 25 04:34:56 UTC 2015
On 06/25/2015 10:01 AM, John Gardeniers wrote:
> Hi Atin,
> On 25/06/15 14:24, Atin Mukherjee wrote:
>> On 06/25/2015 03:07 AM, John Gardeniers wrote:
>>> No takers on this one?
>>> On 22/06/15 14:37, John Gardeniers wrote:
>>>> Until last weekend we had a simple 1x2 replicated volume, consisting
>>>> of a single brick on each peer. After a drive failure screwed the
>>>> brick on one peer we decided to create a new peer and swap the bricks.
>>>> Running "gluster volume replace-brick gluster-rhev
>>>> dead_peer:/gluster_brick_1 new_peer:/gluster_brick_1 commit force".
>> Did the replace-brick succeed? Note that running replace-brick with
>> "commit force" can result in data loss unless you explicitly take
>> care of the data yourself.
> No, replace brick failed. Gluster wanted both the old and new servers
> connected and refused to proceed without them.
Why are the nodes not connected? If that's the case you should look into
that first; even remove-brick would fail in this situation. Checking the
glusterd log files might give you a clue.
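A quick way to check the two things suggested above is a short shell session; this is a sketch only, assuming the standard gluster CLI and the default glusterd log location on RPM-based systems (/var/log/glusterfs/glusterd.log):

```shell
# Check that all peers are in "Peer in Cluster (Connected)" state.
gluster peer status

# Confirm the volume sees its bricks as online.
gluster volume status gluster-rhev

# Look for recent errors in the glusterd log (path may differ on your distro).
tail -n 100 /var/log/glusterfs/glusterd.log | grep -i ' E '
```

If a peer shows as Disconnected, fixing name resolution or the glusterd service on that node should come before any brick operations.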
>>>> After trying for some time and not wishing to rely on a single peer we
>>>> added kari as an additional replica with "gluster volume add-brick
>>>> gluster-rhev replica 3 new_peer:/gluster_brick_1 force".
>>>> Can we now *safely* remove the dead brick and revert back to replica 2?
>> If the earlier replace-brick didn't happen, you can run remove-brick
>> start, followed by commit once the status shows completed. But
>> double-check the data as well.
> Thanks, I'll try it over a weekend.
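The start/status/commit sequence suggested above can be sketched as follows; this is a hedged example, reusing the volume and brick names from the thread (gluster-rhev, dead_peer:/gluster_brick_1) and assuming the "replica 2" keyword is required because the brick count per replica set is being reduced from 3 to 2:

```shell
# Begin removing the dead brick and drop the replica count back to 2.
gluster volume remove-brick gluster-rhev replica 2 \
    dead_peer:/gluster_brick_1 start

# Poll until the operation reports "completed".
gluster volume remove-brick gluster-rhev replica 2 \
    dead_peer:/gluster_brick_1 status

# Only once completed, make the removal permanent.
gluster volume remove-brick gluster-rhev replica 2 \
    dead_peer:/gluster_brick_1 commit
```

As the reply notes, verifying the data on the remaining bricks (e.g. with "gluster volume heal gluster-rhev info") before the final commit is a sensible precaution.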
>>>> Gluster-users mailing list
>>>> Gluster-users at gluster.org