[Bugs] [Bug 1229251] Data Tiering; Need to change volume info details like type of volume and number of bricks when tier is attached to a EC(disperse) volume
bugzilla at redhat.com
Thu Jun 18 09:37:28 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1229251
Triveni Rao <trao at redhat.com> changed:
What   | Removed | Added
-------|---------|----------
Status | ON_QA   | VERIFIED
--- Comment #3 from Triveni Rao <trao at redhat.com> ---
This bug has been verified with both types of EC volumes: pure disperse and
distributed-disperse.
This is with a plain disperse volume:
[root at rhsqa14-vm3 ~]# gluster v create ec1 disperse-data 2 redundancy 1
10.70.47.159:/rhs/brick1/ec1 10.70.46.2:/rhs/brick1/ec1
10.70.47.159:/rhs/brick2/ec1 force
volume create: ec1: success: please start the volume to access data
[root at rhsqa14-vm3 ~]# gluster v start ec1
volume start: ec1: success
[root at rhsqa14-vm3 ~]# gluster v info
Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: ecvol
Type: Disperse
Volume ID: 140cf106-24e8-4c0b-8b87-75c4d361fdca
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/e0
Brick2: 10.70.46.2:/rhs/brick1/e0
Brick3: 10.70.47.159:/rhs/brick2/e0
Brick4: 10.70.46.2:/rhs/brick2/e0
Brick5: 10.70.47.159:/rhs/brick3/e0
Brick6: 10.70.46.2:/rhs/brick3/e0
Options Reconfigured:
performance.readdir-ahead: on
Volume Name: test
Type: Tier
Volume ID: 0b2070bd-eca0-4f1a-bdab-207a3f0a95f7
Status: Started
Number of Bricks: 6
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick3/t0
Brick2: 10.70.47.159:/rhs/brick3/t0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 4
Brick3: 10.70.47.159:/rhs/brick1/t0
Brick4: 10.70.46.2:/rhs/brick1/t0
Brick5: 10.70.47.159:/rhs/brick2/t0
Brick6: 10.70.46.2:/rhs/brick2/t0
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm3 ~]# gluster v info ec1
Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v ec1 status
unrecognized word: ec1 (position 1)
[root at rhsqa14-vm3 ~]# gluster v status ec1
Status of volume: ec1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.47.159:/rhs/brick1/ec1 49159 0 Y 27228
Brick 10.70.46.2:/rhs/brick1/ec1 49159 0 Y 16170
Brick 10.70.47.159:/rhs/brick2/ec1 49160 0 Y 27246
NFS Server on localhost 2049 0 Y 27265
Self-heal Daemon on localhost N/A N/A Y 27273
NFS Server on 10.70.46.2 2049 0 Y 16189
Self-heal Daemon on 10.70.46.2 N/A N/A Y 16197
Task Status of Volume ec1
------------------------------------------------------------------------------
There are no active volume tasks
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v attach-tier ec1 10.70.47.159:/rhs/brick4/ec1
10.70.46.2:/rhs/brick4/ec1
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ec1: success: Rebalance on ec1 has been started successfully.
Use rebalance status command to check status of the rebalance process.
ID: b25fb766-f9bf-4df2-a1ff-3d43c2f4faa5
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v info ec1
Volume Name: ec1
Type: Tier
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 5
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick4/ec1
Brick2: 10.70.47.159:/rhs/brick4/ec1
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (2 + 1) = 3
Brick3: 10.70.47.159:/rhs/brick1/ec1
Brick4: 10.70.46.2:/rhs/brick1/ec1
Brick5: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
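The brick counts that the corrected `gluster v info` output now reports follow directly from the EC geometry: a disperse volume has distribute-count x (data + redundancy) bricks, and after attach-tier the flat "Number of Bricks" is the hot-tier count plus the cold-tier count. A minimal sketch of that arithmetic (the helper names are illustrative, not part of Gluster):

```python
def disperse_bricks(data, redundancy, distribute=1):
    """Bricks in a (distributed-)disperse volume: distribute x (data + redundancy)."""
    return distribute * (data + redundancy)

def tiered_total(hot_bricks, cold_bricks):
    """After attach-tier, 'Number of Bricks' is the flat hot + cold sum."""
    return hot_bricks + cold_bricks

# ec1: disperse-data 2, redundancy 1 -> 1 x (2 + 1) = 3 cold bricks
cold = disperse_bricks(2, 1)
# two hot bricks were attached as a plain distribute tier
total = tiered_total(2, cold)
print(cold, total)  # 3 5, matching "Number of Bricks: 5" above
```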
[root at rhsqa14-vm3 ~]#
=========================================================
This is with a distributed-disperse volume:
[root at rhsqa14-vm3 ~]# gluster v create ec2 disperse-data 4 redundancy 2
10.70.47.159:/rhs/brick1/ec2 10.70.46.2:/rhs/brick1/ec2
10.70.47.159:/rhs/brick2/ec2 10.70.46.2:/rhs/brick2/ec2
10.70.47.159:/rhs/brick3/ec2 10.70.46.2:/rhs/brick3/ec2 force
volume create: ec2: success: please start the volume to access data
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v start ec2
volume start: ec2: success
[root at rhsqa14-vm3 ~]# gluster v info ec2
Volume Name: ec2
Type: Disperse
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec2
Brick2: 10.70.46.2:/rhs/brick1/ec2
Brick3: 10.70.47.159:/rhs/brick2/ec2
Brick4: 10.70.46.2:/rhs/brick2/ec2
Brick5: 10.70.47.159:/rhs/brick3/ec2
Brick6: 10.70.46.2:/rhs/brick3/ec2
Options Reconfigured:
performance.readdir-ahead: on
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v add-brick ec2 10.70.47.159:/rhs/brick4/ec2
10.70.46.2:/rhs/brick4/ec2 10.70.47.159:/rhs/brick5/ec2
10.70.46.2:/rhs/brick5/ec2 10.70.47.159:/rhs/brick6/ec2
10.70.46.2:/rhs/brick6/ec2
volume add-brick: success
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v info ec2
Volume Name: ec2
Type: Distributed-Disperse
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec2
Brick2: 10.70.46.2:/rhs/brick1/ec2
Brick3: 10.70.47.159:/rhs/brick2/ec2
Brick4: 10.70.46.2:/rhs/brick2/ec2
Brick5: 10.70.47.159:/rhs/brick3/ec2
Brick6: 10.70.46.2:/rhs/brick3/ec2
Brick7: 10.70.47.159:/rhs/brick4/ec2
Brick8: 10.70.46.2:/rhs/brick4/ec2
Brick9: 10.70.47.159:/rhs/brick5/ec2
Brick10: 10.70.46.2:/rhs/brick5/ec2
Brick11: 10.70.47.159:/rhs/brick6/ec2
Brick12: 10.70.46.2:/rhs/brick6/ec2
Options Reconfigured:
performance.readdir-ahead: on
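Adding a second set of six bricks turns the pure disperse volume into a distributed-disperse one; the distribute count in "2 x (4 + 2) = 12" is just the total brick count divided by the EC group size (data + redundancy). A quick sanity check (hypothetical helper name):

```python
def distribute_count(total_bricks, data, redundancy):
    """Number of disperse subvolumes in a distributed-disperse volume."""
    group = data + redundancy
    assert total_bricks % group == 0, "brick count must be a multiple of the EC group size"
    return total_bricks // group

# ec2: 12 bricks with disperse-data 4, redundancy 2 -> 2 x (4 + 2) = 12
print(distribute_count(12, 4, 2))  # 2
```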
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# gluster v attach-tier ec2 10.70.47.159:/rhs/brick6/ec2_0
10.70.46.2:/rhs/brick6/ec2_0
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y
volume attach-tier: success
volume rebalance: ec2: success: Rebalance on ec2 has been started successfully.
Use rebalance status command to check status of the rebalance process.
ID: bb666fe1-475c-45a8-8256-b2a6ff9bffc6
[root at rhsqa14-vm3 ~]# gluster v info ec2
Volume Name: ec2
Type: Tier
Volume ID: 6617f138-3f8b-4c21-99e2-bbdd25f98e70
Status: Started
Number of Bricks: 14
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick6/ec2_0
Brick2: 10.70.47.159:/rhs/brick6/ec2_0
Cold Tier:
Cold Tier Type : Distributed-Disperse
Number of Bricks: 2 x (4 + 2) = 12
Brick3: 10.70.47.159:/rhs/brick1/ec2
Brick4: 10.70.46.2:/rhs/brick1/ec2
Brick5: 10.70.47.159:/rhs/brick2/ec2
Brick6: 10.70.46.2:/rhs/brick2/ec2
Brick7: 10.70.47.159:/rhs/brick3/ec2
Brick8: 10.70.46.2:/rhs/brick3/ec2
Brick9: 10.70.47.159:/rhs/brick4/ec2
Brick10: 10.70.46.2:/rhs/brick4/ec2
Brick11: 10.70.47.159:/rhs/brick5/ec2
Brick12: 10.70.46.2:/rhs/brick5/ec2
Brick13: 10.70.47.159:/rhs/brick6/ec2
Brick14: 10.70.46.2:/rhs/brick6/ec2
Options Reconfigured:
performance.readdir-ahead: on
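A scripted regression check for this fix could parse the `gluster v info` text and assert the tier fields, with no live cluster needed. A minimal sketch against the ec2 output captured above (the `field` helper is illustrative; note the tier-type lines use " : " while plain fields use ": "):

```python
import re

# Trimmed copy of the ec2 output shown above
INFO = """\
Volume Name: ec2
Type: Tier
Number of Bricks: 14
Hot Tier Type : Distribute
Cold Tier Type : Distributed-Disperse
"""

def field(text, name):
    """Return the value of the first line starting with `name`, tolerating
    both 'Name: value' and 'Name : value' spacing."""
    m = re.search(rf"^{re.escape(name)}\s*:\s*(.+)$", text, re.MULTILINE)
    return m.group(1).strip() if m else None

assert field(INFO, "Type") == "Tier"
assert field(INFO, "Number of Bricks") == "14"
assert field(INFO, "Cold Tier Type") == "Distributed-Disperse"
print("tier info fields OK")
```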
[root at rhsqa14-vm3 ~]#
[root at rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root at rhsqa14-vm3 ~]#