[Bugs] [Bug 1802041] Peer is already being detached from cluster.

bugzilla at redhat.com
Mon Feb 24 07:09:00 UTC 2020


https://bugzilla.redhat.com/show_bug.cgi?id=1802041

Sanju <srakonde at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
              Flags|                            |needinfo?(akshayvijapur at gmail.com)



--- Comment #1 from Sanju <srakonde at redhat.com> ---
I don't see this happening in my environment. Did you execute peer detach for
the same server from two different nodes at the same time, which could have
resulted in this?
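For reference, the race I am asking about would look like this (the same
command issued from two nodes at roughly the same time; hostnames below are
placeholders, not from your report):

    gluster peer detach server2    # issued on server1
    gluster peer detach server2    # issued on server3, concurrently

If both glusterd instances pick up the request at once, the second one can
fail with "Peer is already being detached from cluster."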

[root@server4 glusterfs]# gluster pe s
Number of Peers: 3

Hostname: server1
Uuid: 23d8606c-7d10-449a-a269-a8ab1a83d4e5
State: Peer in Cluster (Connected)

Hostname: server2
Uuid: 9af9a8b7-3aeb-49db-9343-1f6b5b741616
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: fdf6192d-0faf-418e-aa7e-569d7ad2c598
State: Peer in Cluster (Connected)
[root@server4 glusterfs]# 
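(For a quick scriptable check of the same membership, "gluster pool list"
prints one row per node, the local node included:

    gluster pool list
)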

[root@server4 glusterfs]# gluster v stop rep4
Stopping volume will make its data inaccessible. Do you want to continue? (y/n)
y
volume stop: rep4: success
[root@server4 glusterfs]# gluster v remove-brick rep4 replica 3 server3:/tmp/b1 force
Remove-brick force will not migrate files from the removed bricks, so they will
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
[root@server4 glusterfs]# gluster v remove-brick rep4 replica 2 server2:/tmp/b1 force
Remove-brick force will not migrate files from the removed bricks, so they will
no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
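(To spell out the steps above: rep4 was a replica 4 volume, and each
"remove-brick ... replica N-1 <brick> force" drops one copy, taking it from
replica 4 down to replica 2. The remaining brick count can be verified
between steps with:

    gluster v info rep4
)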
[root@server4 glusterfs]# gluster pe detach server2
All clients mounted through the peer which is getting detached need to be
remounted using one of the other active peers in the trusted storage pool to
ensure client gets notification on any changes done on the gluster
configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@server4 glusterfs]# gluster pe detach server3
All clients mounted through the peer which is getting detached need to be
remounted using one of the other active peers in the trusted storage pool to
ensure client gets notification on any changes done on the gluster
configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@server4 glusterfs]# 
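(To confirm the end state, not captured above but expected after the two
successful detaches:

    gluster pe s    # should now report: Number of Peers: 1 (only server1)
)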

Thanks,
Sanju
