[Bugs] [Bug 1207204] Data Tiering:Adding tier to Disperse(erasure code) volume is converting volume to distribute-disperse instead of tier-disperse type
bugzilla at redhat.com
Tue Mar 31 06:34:40 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1207204
nchilaka <nchilaka at redhat.com> changed:
What      |Removed           |Added
----------------------------------------------------------------------------
Priority  |unspecified       |high
Blocks    |                  |1186580 (qe_tracker_everglades)
Assignee  |bugs at gluster.org  |josferna at redhat.com
Summary   |Data Tiering      |Data Tiering:Adding tier to Disperse(erasure
          |                  |code) volume is converting volume to
          |                  |distribute-disperse instead of tier-disperse
          |                  |type
--- Comment #1 from nchilaka <nchilaka at redhat.com> ---
Submitted by mistake before filling in the details; adding them now.
Description of problem:
======================
When I tried to add a tier to an EC volume, the bricks were consumed as
though they were distribute bricks.
That means the volume is getting converted to distribute-disperse.
There are no tier xattrs either.
We need to support tiering with disperse/EC volumes, as a common use case is
customers wanting a disperse-type cold tier with the hot tier backed by
SSDs.
Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.821.git0934432.autobuild//
[root at interstellar glusterfs]# gluster --version
glusterfs 3.7dev built on Mar 28 2015 01:05:28
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
Steps to Reproduce:
===================
1. Create a disperse volume and start it
[root at interstellar ~]# gluster v create ec1 disperse-data 2 redundancy 1
transformers:/pavanbrick1/ec1/b1 interstellar:/pavanbrick1/ec1/b1
transformers:/pavanbrick1/ec1/rb1 force
volume create: ec1: success: please start the volume to access data
[root at interstellar ~]# gluster v start ec1
volume start: ec1: success
[root at interstellar ~]# gluster v status ec1
Status of volume: ec1
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick transformers:/pavanbrick1/ec1/b1 49160 0 Y 44215
Brick interstellar:/pavanbrick1/ec1/b1 49160 0 Y 40225
Brick transformers:/pavanbrick1/ec1/rb1 49161 0 Y 44235
NFS Server on localhost N/A N/A N N/A
NFS Server on 10.70.34.44 N/A N/A N N/A
Task Status of Volume ec1
------------------------------------------------------------------------------
There are no active volume tasks
2. Now add a tier
[root at interstellar ~]# gluster v attach-tier ec1
transformers:/pavanbrick2/ec1/hb1 interstellar:/pavanbrick2/ec1/hb1
3. Check the vol info
[root at interstellar ~]# gluster v info ec1
Volume Name: ec1
Type: Distributed-Disperse
Volume ID: 1c47d8a3-f26f-4ef5-ad32-7da3462aaa61
Status: Started
Number of Bricks: 1 x (2 + 1) = 5
Transport-type: tcp
Bricks:
Brick1: interstellar:/pavanbrick2/ec1/hb1
Brick2: transformers:/pavanbrick2/ec1/hb1
Brick3: transformers:/pavanbrick1/ec1/b1
Brick4: interstellar:/pavanbrick1/ec1/b1
Brick5: transformers:/pavanbrick1/ec1/rb1
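Note that the vol info above is also internally inconsistent: the header
reports "1 x (2 + 1) = 5", but one distribute subvolume of a (2+1) disperse
set accounts for only 3 bricks, so the extra 2 hot-tier bricks have simply
been folded into the distribute-disperse layout. A minimal sanity check of
the arithmetic (the helper function below is illustrative, not part of
GlusterFS):

```python
def expected_disperse_bricks(distribute_count, data, redundancy):
    """Brick count a pure distributed-disperse volume should report:
    distribute subvolumes times (disperse-data + redundancy)."""
    return distribute_count * (data + redundancy)

# The original volume: 1 subvolume, disperse-data 2, redundancy 1 -> 3 bricks
assert expected_disperse_bricks(1, 2, 1) == 3

# After attach-tier, 5 bricks are listed (3 cold + 2 hot), which no longer
# matches any distributed-disperse layout with a (2+1) set size.
listed_bricks = 5
assert listed_bricks != expected_disperse_bricks(1, 2, 1)
assert listed_bricks == expected_disperse_bricks(1, 2, 1) + 2  # 2 hot bricks
```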
Expected results:
==================
The volume type should be reported as tier-disperse (not
Distributed-Disperse), with the tier xattrs set; tiering should be
supported properly with EC volumes.
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
--
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.