[Gluster-users] unable to remove brick, please help

Alex K rightkicktech at gmail.com
Thu Nov 16 05:56:16 UTC 2017


If you do not need any data from the brick, you can append "force" to the
command, as the error message itself suggests.
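
For example, with the volume and brick names from your output below, that
would be something like:

    gluster volume remove-brick data replica 2 srv1:/gluster/data/brick1 force

Since the volume is a 3-way replica, force just drops that brick and reduces
the replica count to 2; the data should still exist on the remaining two
bricks, but it is worth checking heal status before you run it.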

Alex

On Nov 15, 2017 11:49, "Rudi Ahlers" <rudiahlers at gmail.com> wrote:

> Hi,
>
> I am trying to remove a brick from a server which is no longer part of
> the gluster pool, but I keep running into errors for which I cannot find
> answers on Google.
>
> [root@virt2 ~]# gluster peer status
> Number of Peers: 3
>
> Hostname: srv1
> Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb
> State: Peer in Cluster (Disconnected)
>
> Hostname: srv3
> Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825
> State: Peer in Cluster (Connected)
>
> Hostname: srv4
> Uuid: 1a6eedc6-59eb-4329-b091-2b9bc6f0834f
> State: Peer in Cluster (Connected)
> [root@virt2 ~]#
>
>
>
>
> [root@virt2 ~]# gluster volume info data
>
> Volume Name: data
> Type: Replicate
> Volume ID: d09e4534-8bc0-4b30-be89-bc1ec2b439c7
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: srv1:/gluster/data/brick1
> Brick2: srv2:/gluster/data/brick1
> Brick3: srv3:/gluster/data/brick1
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.low-prio-threads: 32
> network.remote-dio: enable
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 10000
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 512MB
>
>
>
> [root@virt2 ~]# gluster volume remove-brick data replica 2
> srv1:/gluster/data/brick1 start
> volume remove-brick start: failed: Migration of data is not needed when
> reducing replica count. Use the 'force' option
>
>
> [root@virt2 ~]# gluster volume remove-brick data replica 2
> srv1:/gluster/data/brick1 commit
> Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
> volume remove-brick commit: failed: Brick srv1:/gluster/data/brick1 is not
> decommissioned. Use start or force option
>
>
>
> The server virt1 is not part of the cluster anymore.
>
>
>
>
> --
> Kind Regards
> Rudi Ahlers
> Website: http://www.rudiahlers.co.za
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>