[Bugs] [Bug 1229242] data tiering:force Remove brick is detaching-tier

bugzilla at redhat.com
Thu Jun 18 10:38:30 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1229242

Triveni Rao <trao at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |VERIFIED
                 CC|                            |trao at redhat.com



--- Comment #2 from Triveni Rao <trao at redhat.com> ---



[root@rhsqa14-vm3 ~]# gluster v create test 10.70.47.159:/rhs/brick1/t0 10.70.46.2:/rhs/brick1/t0 10.70.47.159:/rhs/brick2/t0 10.70.46.2:/rhs/brick2/t0
volume create: test: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start test
volume start: test: success
[root@rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: test
Type: Distribute
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/t0
Brick2: 10.70.46.2:/rhs/brick1/t0
Brick3: 10.70.47.159:/rhs/brick2/t0
Brick4: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster v attach-tier test 10.70.47.159:/rhs/brick3/t0 10.70.46.2:/rhs/brick3/t0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: test: success: Rebalance on test has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: af7dd4b2-b4b7-4d72-9e12-847e3c231eea

[root@rhsqa14-vm3 ~]# gluster v info test

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#


[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 start
volume remove-brick start: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 10.70.47.159:/rhs/brick3/t0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]#


[root@rhsqa14-vm3 ~]# gluster v info test

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#
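For reference, the supported way to take the hot-tier bricks back out in this release is the detach-tier command rather than remove-brick. A sketch of that sequence, assuming the glusterfs 3.7 CLI and the `test` volume from the transcript above (not run as part of this verification):

```shell
# Supported path for removing hot-tier bricks in glusterfs 3.7 (sketch):
# start migrating data off the hot tier, then commit once migration finishes.
gluster volume detach-tier test start
gluster volume detach-tier test status   # repeat until migration completes
gluster volume detach-tier test commit   # removes hot-tier bricks; volume reverts to its cold-tier type
```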


Tried single-brick removal as well:

[root@rhsqa14-vm3 ~]# gluster v info test

Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 start
volume remove-brick start: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]# gluster v remove-brick test 10.70.46.2:/rhs/brick3/t0 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: failed: Removing brick from a Tier volume is not allowed
[root@rhsqa14-vm3 ~]#
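Scripts that drive remove-brick can avoid tripping over this rejection by checking the volume type first. A minimal sketch: the `volume_type` helper is hypothetical, and the `Type:` field layout is assumed from the `gluster v info` output shown above.

```shell
# Hypothetical guard: inspect the Type field of `gluster v info` output
# before attempting remove-brick, since Tier volumes reject it.
volume_type() {
  # Reads `gluster v info <vol>` output on stdin and prints the Type field.
  awk -F': ' '/^Type/ {print $2; exit}'
}

# Sample output captured from the transcript above:
info='Volume Name: test
Type: Tier
Status: Started'

if [ "$(printf '%s\n' "$info" | volume_type)" = "Tier" ]; then
  echo "Tier volume: use detach-tier, not remove-brick"
fi
```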

[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm3 ~]#


This bug is verified: with I/O running on the volume, remove-brick was attempted in start, force, and commit modes and was correctly rejected each time; no bricks were detached.
