[Gluster-users] How to remove dead peer, sorry urgent again :(

Lindsay Mathieson lindsay.mathieson at gmail.com
Sun Jun 11 11:44:49 UTC 2017


On 11/06/2017 9:23 PM, Atin Mukherjee wrote:
> Unless server-side quorum is enabled, that's not correct. The I/O path 
> should stay active even though the management plane is down. We could 
> still do this one node at a time, without bringing down all glusterd 
> instances at one go, but I just wanted to ensure the workaround is 
> safe and clean.

Not quite sure of your wording here, but I:

  * brought down all glusterd with "systemctl stop
    glusterfs-server.service" on each node
  * removed /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
    on each node
  * ran "systemctl start glusterfs-server.service" on each node
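The steps above can be sketched as a small script. This is a dry run (it only echoes the commands rather than executing them); the hostnames and the use of ssh are assumptions on my part, and the peer UUID is the one from the steps above.

```shell
#!/bin/sh
# Dry-run sketch of the workaround above. Hostnames (vng, vnh, vnb) are
# assumed from the peer status output; drop the leading "echo" on each
# line to actually execute the commands.
PEER_UUID="de673495-8cb2-4328-ba00-0419357c03d7"
PEER_FILE="/var/lib/glusterd/peers/${PEER_UUID}"
NODES="vng vnh vnb"

# Phase 1: stop glusterd on every node first, so no node can re-learn
# the dead peer from a still-running neighbour.
for node in $NODES; do
    echo ssh "root@${node}" "systemctl stop glusterfs-server.service"
done

# Phase 2: with glusterd down cluster-wide, remove the stale peer file.
for node in $NODES; do
    echo ssh "root@${node}" "rm -f ${PEER_FILE}"
done

# Phase 3: restart glusterd everywhere.
for node in $NODES; do
    echo ssh "root@${node}" "systemctl start glusterfs-server.service"
done
```

The ordering is the point: glusterd has to be down on all nodes before any peer file is removed, otherwise a still-running instance can hand the dead peer's entry back during the peer handshake.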


Several hundred shards needed to be healed after that, but all done now 
with no split-brain.  And:

    root@vng:~# gluster peer status
    Number of Peers: 2

    Hostname: vnh.proxmox.softlog
    Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
    State: Peer in Cluster (Connected)

    Hostname: vnb.proxmox.softlog
    Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36
    State: Peer in Cluster (Connected)


Which is good. I'm not in a position to test quorum by rebooting a node 
right now though :) but I'm going to assume it's all good; I'll probably 
test next weekend.
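For the record, the post-workaround checks could look like the following. This is another dry-run sketch (it only echoes the commands), and the volume name "datastore" is a placeholder I made up, not from this thread.

```shell
#!/bin/sh
# Dry run of the verification commands; VOLNAME is a placeholder.
# Remove the leading "echo" on each line to run them for real.
VOLNAME="datastore"

# Confirm the remaining peers are all connected again.
echo gluster peer status

# Confirm healing has finished and nothing is in split-brain.
echo gluster volume heal "$VOLNAME" info
echo gluster volume heal "$VOLNAME" info split-brain

# Check whether server-side quorum is enforced before rebooting a node.
echo gluster volume get "$VOLNAME" cluster.server-quorum-type
```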

Thanks for all the help, much appreciated.

-- 
Lindsay Mathieson

