[Bugs] [Bug 1229257] Incorrect vol info post detach on disperse volume

bugzilla at redhat.com bugzilla at redhat.com
Thu Jun 18 10:30:57 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1229257

Triveni Rao <trao at redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |VERIFIED



--- Comment #4 from Triveni Rao <trao at redhat.com> ---
[root at rhsqa14-vm3 ~]# gluster v create ecvol disperse 6 redundancy 2
10.70.47.159:/rhs/brick1/e0 10.70.46.2:/rhs/brick1/e0
10.70.47.159:/rhs/brick2/e0 10.70.46.2:/rhs/brick2/e0
10.70.47.159:/rhs/brick3/e0 10.70.46.2:/rhs/brick3/e0 force
volume create: ecvol: success: please start the volume to access data
[root at rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Created
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm3 ~]# 


[root at rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2
10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ecvol: failed: Volume ecvol needs to be started to perform
rebalance
Failed to run tier start. Please execute tier start command explictly
Usage : gluster volume rebalance <volname> tier start
[root at rhsqa14-vm3 ~]# gluster v start ecvol
volume start: ecvol: success
[root at rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2
10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y
volume attach-tier: failed: Volume ecvol is already a tier.
[root at rhsqa14-vm3 ~]# gluster v attach-tier ecvol replica 2
10.70.47.159:/rhs/brick4/e0 10.70.46.2:/rhs/brick4/e0 force
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y
volume attach-tier: failed: Volume ecvol is already a tier.
[root at rhsqa14-vm3 ~]# 
[root at rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Tier
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: 10.70.46.2:/rhs/brick4/e0
Brick2: 10.70.47.159:/rhs/brick4/e0
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (4 + 2) = 6
Brick3: 10.70.47.159:/rhs/brick1/e0
Brick4: 10.70.46.2:/rhs/brick1/e0
Brick5: 10.70.47.159:/rhs/brick2/e0
Brick6: 10.70.46.2:/rhs/brick2/e0
Brick7: 10.70.47.159:/rhs/brick3/e0
Brick8: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm3 ~]# 


[root at rhsqa14-vm3 ~]# gluster volume rebalance ecvol tier start
volume rebalance: ecvol: success: Rebalance on ecvol has been started
successfully. Use rebalance status command to check status of the rebalance
process.
ID: d72d6a3b-b9b2-479c-b961-0bf500a98588

[root at rhsqa14-vm3 ~]# 
[root at rhsqa14-vm3 ~]# 
[root at rhsqa14-vm3 ~]# gluster volume rebalance ecvol tier status
Node                 Promoted files       Demoted files        Status           
---------            ---------            ---------            ---------        
localhost            0                    0                    in progress      
10.70.46.2           0                    0                    in progress      
volume rebalance: ecvol: success: 
[root at rhsqa14-vm3 ~]# 


[root at rhsqa14-vm3 ~]# gluster v detach-tier ecvol start
volume detach-tier start: success
ID: 1a9a6afa-f81a-4372-b3d0-ccc43c874661
[root at rhsqa14-vm3 ~]# 
[root at rhsqa14-vm3 ~]# gluster v detach-tier ecvol commit
volume detach-tier commit: success
Check the detached bricks to ensure all files are migrated.
If files with data are found on the brick path, copy them via a gluster mount
point before re-purposing the removed brick. 
[root at rhsqa14-vm3 ~]# 
[root at rhsqa14-vm3 ~]# gluster v info

Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm3 ~]# 


[root at rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root at rhsqa14-vm3 ~]# 

NOTE:

Volume info displays correctly after detach-tier, so the original issue is fixed.
However, when IO was running on the volume and detach-tier was executed, the IO
failed and exited on the mount point. Marking this bug as VERIFIED; a separate
bug will be opened for the IO failure.
