[Gluster-users] gluster volume delete mdsgv01: volume delete: mdsgv01: failed: Some of the peers are down

Sanju Rakonde srakonde at redhat.com
Sun Sep 29 06:44:24 UTC 2019


Hi Tom,

The volume delete operation is not permitted while some of the peers in the
cluster are down. Please check the output of 'gluster peer status' and make
sure that all nodes are up and running; then you can retry the volume delete
operation.
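
For example, if peer status shows a node as disconnected and that node is
permanently gone from the cluster, a sequence along these lines should clear
the way (a sketch only; peer2.example.com is a placeholder for whichever peer
is down, and 'force' is needed on detach when the peer cannot be reached):

[root@mdskvm-p01 ~]# gluster peer status
[root@mdskvm-p01 ~]# gluster peer detach peer2.example.com force
[root@mdskvm-p01 ~]# gluster volume delete mdsgv01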

On Sun, Sep 29, 2019 at 8:53 AM TomK <tomkcpr at mdevsys.com> wrote:

> Hello All,
>
> I'm not able to remove the last brick and, consequently, the volume. How
> do I go about deleting it?
>
> [root@mdskvm-p01 ~]# gluster volume delete mdsgv01
> Deleting volume will erase all information about the volume. Do you want
> to continue? (y/n) y
> volume delete: mdsgv01: failed: Some of the peers are down
> [root@mdskvm-p01 ~]#
>
> [root@mdskvm-p01 ~]# gluster volume info
>
> Volume Name: mdsgv01
> Type: Distribute
> Volume ID: f5b57076-dbd4-4d77-ae13-c1f3ee3adbe0
> Status: Stopped
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01
> Options Reconfigured:
> diagnostics.client-log-level: DEBUG
> diagnostics.brick-sys-log-level: INFO
> diagnostics.brick-log-level: DEBUG
> performance.readdir-ahead: on
> server.allow-insecure: on
> nfs.trusted-sync: on
> performance.cache-size: 1GB
> performance.io-thread-count: 16
> performance.write-behind-window-size: 8MB
> client.event-threads: 8
> server.event-threads: 8
> cluster.quorum-type: none
> cluster.server-quorum-type: none
> storage.owner-uid: 36
> features.shard: on
> features.shard-block-size: 512MB
> performance.low-prio-threads: 32
> cluster.data-self-heal-algorithm: full
> storage.owner-gid: 36
> [root@mdskvm-p01 ~]#
>
>
> [root@mdskvm-p01 ~]# gluster volume remove-brick mdsgv01
> mdskvm-p01.nix.mds.xyz:/mnt/p01-d01/glusterv01 force
> Remove-brick force will not migrate files from the removed bricks, so
> they will no longer be available on the volume.
> Do you want to continue? (y/n) y
> volume remove-brick commit force: failed: Deleting all the bricks of the
> volume is not allowed
> [root@mdskvm-p01 ~]#
>
>
> [root@mdskvm-p01 ~]# rpm -aq|grep -Ei gluster
> glusterfs-client-xlators-6.5-1.el7.x86_64
> glusterfs-geo-replication-6.5-1.el7.x86_64
> libvirt-daemon-driver-storage-gluster-4.5.0-23.el7_7.1.x86_64
> glusterfs-events-6.5-1.el7.x86_64
> python2-gluster-6.5-1.el7.x86_64
> glusterfs-server-6.5-1.el7.x86_64
> glusterfs-fuse-6.5-1.el7.x86_64
> glusterfs-cli-6.5-1.el7.x86_64
> glusterfs-6.5-1.el7.x86_64
> glusterfs-libs-6.5-1.el7.x86_64
> glusterfs-api-6.5-1.el7.x86_64
> glusterfs-rdma-6.5-1.el7.x86_64
> [root@mdskvm-p01 ~]#
>
>
>
>
>
> --
> Thx,
> TK.


-- 
Thanks,
Sanju