[Bugs] [Bug 1271725] Data Tiering: Disallow attach tier on a volume where any rebalance process is in progress to avoid deadlock(like remove brick commit pending etc)
bugzilla at redhat.com
Wed Nov 25 09:41:00 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1271725
surabhi <sbhaloth at redhat.com> changed:
            What    |Removed                     |Added
----------------------------------------------------------------------------
          Status    |ON_QA                       |VERIFIED
              CC    |                            |sbhaloth at redhat.com
--- Comment #3 from surabhi <sbhaloth at redhat.com> ---
Following steps were used to verify the BZ:
1. Created a distribute volume with 3 bricks (a setup sketch follows the results below).
2. Executed remove-brick to remove one of the bricks.
3. Without committing the remove-brick, attempted to attach a tier.
Expected result:
Attach tier should not be allowed if a rebalance/remove-brick is in progress.
Actual result:
Attach tier failed while the remove-brick was uncommitted/rebalance was in progress, as expected.
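Step 1 is not captured in the transcript; a minimal sketch of the setup, assuming the same bricks that appear in the volume info below and default volume options:

[root at localhost ~]# gluster volume create distribute 10.70.46.111:/rhs/brick3/b1 10.70.46.90:/rhs/brick3/b2 10.70.46.136:/bricks/brick3/b3
[root at localhost ~]# gluster volume start distribute

The listing that follows is the output of gluster vol info for this volume.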
Volume Name: distribute
Type: Distribute
Volume ID: 8d918c68-893d-4173-8a09-1e0baef90b7f
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.46.111:/rhs/brick3/b1
Brick2: 10.70.46.90:/rhs/brick3/b2
Brick3: 10.70.46.136:/bricks/brick3/b3
Options Reconfigured:
performance.readdir-ahead: on
[root at localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 start
volume remove-brick start: success
ID: ea216473-7dba-4439-94cf-734f13891f55
[root at localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 status
        Node  Rebalanced-files    size  scanned  failures  skipped     status  run time in secs
------------  ----------------  ------  -------  --------  -------  ---------  ----------------
10.70.46.136                 0  0Bytes        0         0        0  completed              0.00
[root at localhost ~]# gluster vol attach-tier distribute 10.70.46.111:/rhs/brick4/hot1 10.70.46.90:/rhs/brick4/hot2
volume attach-tier: failed: An earlier remove-brick task exists for volume distribute. Either commit it or stop it before attaching a tier.
Tier command failed
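The error above offers two ways out. The transcript continues with the commit path; if the data on the removed brick should instead remain in the volume, the pending task can be stopped (a sketch):

[root at localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 stop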
[root at localhost ~]# gluster vol remove-brick distribute 10.70.46.136:/bricks/brick3/b3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
Check the removed bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount point before re-purposing the removed brick.
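One way to run that check on the node hosting the removed brick (a sketch; the .glusterfs directory is gluster-internal metadata and is excluded):

[root at localhost ~]# find /bricks/brick3/b3 -path '*/.glusterfs' -prune -o -type f -print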
[root at localhost ~]# gluster vol attach-tier distribute 10.70.46.111:/rhs/brick4/hot1 10.70.46.90:/rhs/brick4/hot2
volume attach-tier: success
Tiering Migration Functionality: distribute: success: Attach tier is successful on distribute. use tier status to check the status.
ID: abd37426-ea0f-49c4-b502-833568162eb5
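The "tier status" check the message mentions can then be run; the exact subcommand form varies by glusterfs build, so both forms below are assumptions for this build:

[root at localhost ~]# gluster volume tier distribute status            # newer form (assumption)
[root at localhost ~]# gluster volume rebalance distribute tier status  # older 3.7-era form (assumption)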
Marking the BZ as verified on build:
[root at localhost ~]# rpm -qa | grep glusterfs
glusterfs-3.7.5-7.el7rhgs.x86_64
glusterfs-api-3.7.5-7.el7rhgs.x86_64
glusterfs-server-3.7.5-7.el7rhgs.x86_64
glusterfs-rdma-3.7.5-7.el7rhgs.x86_64
glusterfs-fuse-3.7.5-7.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-7.el7rhgs.x86_64
samba-vfs-glusterfs-4.2.4-6.el7rhgs.x86_64
glusterfs-libs-3.7.5-7.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-7.el7rhgs.x86_64
glusterfs-cli-3.7.5-7.el7rhgs.x86_64