[Gluster-users] How to shrink replicated volume from 3 to 2 nodes?
Alexandr Porunov
alexandr.porunov at gmail.com
Sun Nov 27 10:49:26 UTC 2016
# gluster volume status gv0
Status of volume: gv0
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.0.123:/data/brick1/gv0         N/A       N/A        N       N/A
Brick 192.168.0.125:/data/brick1/gv0         49152     0          Y       1396
Self-heal Daemon on localhost                N/A       N/A        Y       3252
Self-heal Daemon on 192.168.0.125            N/A       N/A        Y       13339

Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
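(Side note: the brick on 192.168.0.123 shows Online "N". When a brick
process has simply died, it can usually be respawned without stopping
the volume; a minimal sketch, not verified on this cluster:

# gluster volume start gv0 force

"start ... force" on an already-started volume only restarts bricks
that are down.)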
It doesn't show that 192.168.0.124 is in the volume, but the node is still
in the cluster. Here is why I think so:
When I try to add it back to the peer list, nothing happens, because
Gluster says it is already in the peer list:
# gluster peer probe 192.168.0.124
peer probe: success. Host 192.168.0.124 port 24007 already in peer list
OK. I go to machine 192.168.0.124 and check its peer list:
# gluster peer status
Number of Peers: 0
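(glusterd stores its view of the cluster as one file per peer under
/var/lib/glusterd/peers/, so the disagreement between the nodes can be
inspected directly on disk; a quick check, assuming default paths:

# ls /var/lib/glusterd/peers/

This directory is presumably empty on 192.168.0.124, while on
192.168.0.123 and 192.168.0.125 it should contain one file per peer
UUID.)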
OK. I go to machine 192.168.0.123 and check its peer status:
# gluster peer status
Number of Peers: 2
Hostname: 192.168.0.125
Uuid: a6ed1da8-3027-4400-afed-96429380fdc9
State: Peer in Cluster (Connected)
Hostname: 192.168.0.124
Uuid: b7d829f3-80d9-4a78-90b8-f018bc758df0
State: Peer Rejected (Connected)
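(For what it's worth, "Peer Rejected" usually means the node's volume
configuration checksums no longer match the rest of the cluster. The
Gluster docs describe a recovery procedure: wipe the stale state on the
rejected node but keep its UUID, then re-probe. A sketch, assuming a
systemd-based install with default paths, which I have not run here:

On 192.168.0.124:
# systemctl stop glusterd
# cd /var/lib/glusterd
# ls | grep -v glusterd.info | xargs rm -rf    # keep glusterd.info (node UUID)
# systemctl start glusterd
# gluster peer probe 192.168.0.123
# systemctl restart glusterd)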
As we can see, the machine with IP 192.168.0.123 thinks that 192.168.0.124
is in the cluster, but in the "Peer Rejected" state. OK, let's try to
remove it from the cluster:
# gluster peer detach 192.168.0.124:/data/brick1
peer detach: failed: 192.168.0.124:/data/brick1 is not part of cluster
# gluster peer detach 192.168.0.124
peer detach: failed: Brick(s) with the peer 192.168.0.124 exist in cluster
Isn't that strange? The node is in the cluster and yet not in the cluster:
I can neither add the machine with IP 192.168.0.124 back nor remove it.
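(For context, the documented way to shrink a replica 3 volume to
replica 2 is a single remove-brick call that also lowers the replica
count, after which the detach should succeed because no bricks remain
on the node; a sketch, assuming the brick path on 192.168.0.124 matches
the other nodes:

# gluster volume remove-brick gv0 replica 2 192.168.0.124:/data/brick1/gv0 force
# gluster peer detach 192.168.0.124

In my case the first step apparently half-succeeded, which is how the
cluster ended up in this inconsistent state.)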
Do you know what is wrong with it?
Sincerely,
Alexandr
On Sun, Nov 27, 2016 at 12:29 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:
> On 27/11/2016 7:28 PM, Alexandr Porunov wrote:
>
>> # Above command showed success but in reality brick is still in the
>> cluster.
>>
>
> What makes you think this? What does a "gluster v gv0" show?
>
>
> --
> Lindsay Mathieson
>
>