[Bugs] [Bug 1229239] Data Tiering: Adding tier to Disperse (erasure code) volume is converting volume to distribute-disperse instead of tier-disperse type

bugzilla@redhat.com
Thu Jun 18 09:05:01 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1229239

Triveni Rao <trao@redhat.com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|ON_QA                       |VERIFIED



--- Comment #3 from Triveni Rao <trao@redhat.com> ---
Bug summary: Adding tier to Disperse (erasure code) volume is converting
volume to distribute-disperse instead of tier-disperse type


[root@rhsqa14-vm3 ~]# gluster v create ec1 disperse-data 2 redundancy 1
10.70.47.159:/rhs/brick1/ec1 10.70.46.2:/rhs/brick1/ec1
10.70.47.159:/rhs/brick2/ec1
volume create: ec1: failed: Multiple bricks of a replicate volume are present
on the same server. This setup is not optimal. Use 'force' at the end of the
command if you want to override this behavior.
[root@rhsqa14-vm3 ~]#
[root@rhsqa14-vm3 ~]# gluster v create ec1 disperse-data 2 redundancy 1
10.70.47.159:/rhs/brick1/ec1 10.70.46.2:/rhs/brick1/ec1
10.70.47.159:/rhs/brick2/ec1 force
volume create: ec1: success: please start the volume to access data
[root@rhsqa14-vm3 ~]# gluster v start ec1
volume start: ec1: success
[root@rhsqa14-vm3 ~]#
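
Note: the three bricks follow from "disperse-data 2 redundancy 1" (two data
fragments plus one redundancy fragment per set, so the set tolerates one
brick failure). A minimal sketch of a layout that would not need 'force',
assuming a hypothetical third server 10.70.47.160 with one brick per host:

    # One brick per server avoids the "same server" warning while keeping
    # the same 2+1 disperse geometry.
    gluster volume create ec1 disperse-data 2 redundancy 1 \
        10.70.47.159:/rhs/brick1/ec1 \
        10.70.46.2:/rhs/brick1/ec1 \
        10.70.47.160:/rhs/brick1/ec1
    gluster volume start ec1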



[root@rhsqa14-vm3 ~]# gluster v info ec1

Volume Name: ec1
Type: Disperse
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.47.159:/rhs/brick1/ec1
Brick2: 10.70.46.2:/rhs/brick1/ec1
Brick3: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#

[root@rhsqa14-vm3 ~]# gluster v status ec1
Status of volume: ec1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.159:/rhs/brick1/ec1          49159     0          Y       27228
Brick 10.70.46.2:/rhs/brick1/ec1            49159     0          Y       16170
Brick 10.70.47.159:/rhs/brick2/ec1          49160     0          Y       27246
NFS Server on localhost                     2049      0          Y       27265
Self-heal Daemon on localhost               N/A       N/A        Y       27273
NFS Server on 10.70.46.2                    2049      0          Y       16189
Self-heal Daemon on 10.70.46.2              N/A       N/A        Y       16197

Task Status of Volume ec1
------------------------------------------------------------------------------
There are no active volume tasks
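
All three brick processes report online (Y) on both nodes. A hedged one-liner
to script the same check, relying only on the status format shown above:

    # Count brick lines whose Online column reads Y; expect 3 here.
    gluster volume status ec1 | grep -c '^Brick .* Y '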


[root@rhsqa14-vm3 ~]# gluster v attach-tier ec1 10.70.47.159:/rhs/brick4/ec1
10.70.46.2:/rhs/brick4/ec1
Attach tier is recommended only for testing purposes in this release. Do you
want to continue? (y/n) y

volume attach-tier: success
volume rebalance: ec1: success: Rebalance on ec1 has been started successfully.
Use rebalance status command to check status of the rebalance process.
ID: b25fb766-f9bf-4df2-a1ff-3d43c2f4faa5

[root@rhsqa14-vm3 ~]#
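
As the attach-tier output suggests, the migration it starts can be tracked
with the rebalance status command (command only; its output was not captured
in this run):

    # Track the data migration kicked off by attach-tier.
    gluster volume rebalance ec1 status
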
[root@rhsqa14-vm3 ~]# gluster v info ec1

Volume Name: ec1
Type: Tier
Volume ID: 088e6bb8-83e4-4304-85a8-79a4e292afd8
Status: Started
Number of Bricks: 5
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.2:/rhs/brick4/ec1
Brick2: 10.70.47.159:/rhs/brick4/ec1
Cold Tier:
Cold Tier Type : Disperse
Number of Bricks: 1 x (2 + 1) = 3
Brick3: 10.70.47.159:/rhs/brick1/ec1
Brick4: 10.70.46.2:/rhs/brick1/ec1
Brick5: 10.70.47.159:/rhs/brick2/ec1
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm3 ~]#
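
The volume now reports type Tier with a Disperse cold tier, which is the
fixed behaviour (the bug was that attach-tier converted the volume to
Distribute-Disperse instead). A hedged sketch of scripting the same check
against the info output above:

    # Expect "Type: Tier" and "Cold Tier Type : Disperse" after attach-tier.
    gluster volume info ec1 | grep -E '^Type|Tier Type'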




[root@rhsqa14-vm3 ~]# rpm -qa | grep gluster
glusterfs-3.7.1-3.el6rhs.x86_64
glusterfs-cli-3.7.1-3.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-3.el6rhs.x86_64
glusterfs-libs-3.7.1-3.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-3.el6rhs.x86_64
glusterfs-fuse-3.7.1-3.el6rhs.x86_64
glusterfs-server-3.7.1-3.el6rhs.x86_64
glusterfs-rdma-3.7.1-3.el6rhs.x86_64
glusterfs-api-3.7.1-3.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-3.el6rhs.x86_64
[root@rhsqa14-vm3 ~]#


This bug is verified on glusterfs-3.7.1-3.el6rhs: after attach-tier the
volume is reported as type Tier with a Disperse cold tier, and no issues
were found.


