[Gluster-users] How to shrink replicated volume from 3 to 2 nodes?

Alexandr Porunov alexandr.porunov at gmail.com
Sun Nov 27 14:05:32 UTC 2016


It seems I found the problem. The problem was with gluster_shared_storage. All
nodes had gluster_shared_storage, and when I tried to remove a node from the
cluster it had no effect on the shared storage: there the node was still in
the peer list.
That is, this command won't work while shared storage is enabled:
gluster peer detach 192.168.0.124

So, I disabled shared storage, detached the node, and then re-enabled shared
storage:
gluster volume set all cluster.enable-shared-storage disable
gluster peer detach 192.168.0.124
gluster volume set all cluster.enable-shared-storage enable

It isn't very convenient. Also, the main problem now is adding a new node.
How can I add shared storage to that node without removing the shared
storage and creating a new one?
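
The only workaround I can think of is to extend the gluster_shared_storage
volume to the new node by hand. This is only an untested guess (192.168.0.126
stands in for the new node, and I am assuming the shared storage bricks live
under /var/lib/glusterd/ss_brick):

gluster peer probe 192.168.0.126
gluster volume add-brick gluster_shared_storage replica 4 192.168.0.126:/var/lib/glusterd/ss_brick

Is that the right way to do it, or is there a supported command for this?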

Sincerely,
Alexandr

On Sun, Nov 27, 2016 at 3:45 PM, Lindsay Mathieson <
lindsay.mathieson at gmail.com> wrote:

> Bricks are not peers and vice versa.
>
> Your peers are the nodes; bricks are the disks on the nodes. When you
> remove a brick from the cluster you don't remove the peer.
>
> # gluster peer detach 192.168.0.124:/data/brick1
>
> That syntax is incorrect; peer detach removes a peer, not a brick. It
> should be:
>
> # gluster peer detach 192.168.0.124
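>
> If what you actually want is to remove the brick itself (shrinking the
> replica count), that is a remove-brick operation rather than a peer detach.
> Presumably something along these lines, using your volume and brick names:
>
> # gluster volume remove-brick gv0 replica 2 192.168.0.124:/data/brick1/gv0 force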
>
>
> On 27/11/2016 8:49 PM, Alexandr Porunov wrote:
>
> # gluster volume status gv0
> Status of volume: gv0
> Gluster process                             TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick 192.168.0.123:/data/brick1/gv0        N/A       N/A        N       N/A
> Brick 192.168.0.125:/data/brick1/gv0        49152     0          Y       1396
> Self-heal Daemon on localhost               N/A       N/A        Y       3252
> Self-heal Daemon on 192.168.0.125           N/A       N/A        Y       13339
>
> Task Status of Volume gv0
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
> It doesn't show that 192.168.0.124 is in the volume, yet the node is still
> in the cluster. Here is why I think so:
>
> When I try to add it back to the peer list, nothing happens, because it
> says it is already in the peer list:
>
> # gluster peer probe 192.168.0.124
> peer probe: success. Host 192.168.0.124 port 24007 already in peer list
>
> OK. I go to the machine 192.168.0.124 and check its peer list:
> # gluster peer status
> Number of Peers: 0
>
> OK. I go to the machine 192.168.0.123 and check its peer status:
> # gluster peer status
> Number of Peers: 2
>
> Hostname: 192.168.0.125
> Uuid: a6ed1da8-3027-4400-afed-96429380fdc9
> State: Peer in Cluster (Connected)
>
> Hostname: 192.168.0.124
> Uuid: b7d829f3-80d9-4a78-90b8-f018bc758df0
> State: Peer Rejected (Connected)
>
> As we can see, the machine with IP 192.168.0.123 thinks that 192.168.0.124
> is in the cluster. OK, let's remove it from the cluster:
>
> # gluster peer detach 192.168.0.124:/data/brick1
> peer detach: failed: 192.168.0.124:/data/brick1 is not part of cluster
>
> # gluster peer detach 192.168.0.124
> peer detach: failed: Brick(s) with the peer 192.168.0.124 exist in cluster
>
> Isn't it strange? The node both is and isn't in the cluster. I can neither
> add the machine with IP 192.168.0.124 nor remove it.
>
> Do you know what is wrong with it?
>
> Sincerely,
> Alexandr
>
>
> On Sun, Nov 27, 2016 at 12:29 PM, Lindsay Mathieson <
> lindsay.mathieson at gmail.com> wrote:
>
>> On 27/11/2016 7:28 PM, Alexandr Porunov wrote:
>>
>>> # The above command showed success but in reality the brick is still in
>>> the cluster.
>>>
>>
>> What makes you think this? What does "gluster v status gv0" show?
>>
>>
>> --
>> Lindsay Mathieson
>>
>>
>
>
> --
> Lindsay Mathieson
>
>

