[Bugs] [Bug 1206592] New: Data Tiering: Allow adding bricks to the hot tier too (or let the user choose which tier to add bricks to)

bugzilla@redhat.com
Fri Mar 27 13:28:40 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1206592

            Bug ID: 1206592
           Summary: Data Tiering: Allow adding bricks to the hot tier too
                    (or let the user choose which tier to add bricks to)
           Product: GlusterFS
           Version: mainline
         Component: core
          Severity: high
          Assignee: bugs@gluster.org
          Reporter: nchilaka@redhat.com
                CC: bugs@gluster.org, gluster-bugs@redhat.com



Description of problem:
======================
Currently, when a user adds a brick to a tiered volume, the brick always
becomes part of the cold tier. However, there are many cases where the user
may want to expand the hot tier instead. Some reasons:
1) more SSDs may be at the user's disposal
2) the volume is temporarily being accessed at a very high rate, and the user
wants to expand the hot tier to accommodate more hot data for faster access

So let the user decide which tier the brick should be added to.


Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightly build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/

glusterfs 3.7dev built on Mar 26 2015 01:04:24


How reproducible:
================
Easily


Steps to Reproduce:
==================
1. Create a distribute volume.
2. Attach a tier to the volume using attach-tier.
3. Issue a volume info or volume status command.
4. Try to add a new brick. The brick gets added to the cold tier without the
user being given any choice; this can be seen from the xattrs, where the new
brick overlaps the cold tier's DHT hash ranges (see the command sketch below).
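
A minimal command sketch of the above steps (hostnames and brick paths here
are placeholders, not the ones from the test setup below):

gluster v create distvol server1:/bricks/b1 server2:/bricks/b1
gluster v start distvol
gluster v attach-tier distvol server1:/ssdbricks/hb1 server2:/ssdbricks/hb1
gluster v info distvol
gluster v add-brick distvol server3:/bricks/newbrick   # lands in the cold tier
gluster v info distvol   # the new brick is listed among the cold bricks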

NOTE:
=====
How am I confirming that the brick is added to the cold tier and not the hot
tier?
Ans: there are two ways
1) the DHT hash ranges of the cold bricks get redistributed to make room for
the new brick (see the xattr dumps and the decoding sketch at the end)
2) when we issue a detach-tier, the brick stays back with the volume and does
not get detached; hence it must be a cold brick


Expected results:
==================
Allow the user to choose whether the brick is added to the hot tier or the
cold tier.
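
For example, a hypothetical CLI extension (illustrative syntax only; no such
option exists at the time of filing) could take the target tier as an
argument:

# hypothetical syntax: explicit tier argument on add-brick
gluster v add-brick tiervol10 hot rhs-client38:/pavanbrick2/tiervol10/newbrick
gluster v add-brick tiervol10 cold rhs-client38:/pavanbrick2/tiervol10/newbrick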

Additional info (CLI):
===================
[root@rhs-client44 ~]# gluster v info tiervol10

Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1

[root@rhs-client44 ~]# gluster v add-brick tiervol10 rhs-client38:/pavanbrick2/tiervol10/newbrick
volume add-brick: success
[root@rhs-client44 ~]# gluster v info tiervol10

Volume Name: tiervol10
Type: Tier
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 6 x 1 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick2/tiervol10/hb1
Brick2: rhs-client37:/pavanbrick2/tiervol10/hb1
Brick3: rhs-client44:/pavanbrick1/tiervol10/b1
Brick4: rhs-client37:/pavanbrick1/tiervol10/b1
Brick5: rhs-client38:/pavanbrick1/tiervol10/b1
Brick6: rhs-client38:/pavanbrick2/tiervol10/newbrick
[root@rhs-client44 ~]# gluster v status tiervol10
Status of volume: tiervol10
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client44:/pavanbrick2/tiervol10/h
b1                                          49171     0          Y       2784 
Brick rhs-client37:/pavanbrick2/tiervol10/h
b1                                          49167     0          Y       17933
Brick rhs-client44:/pavanbrick1/tiervol10/b
1                                           49168     0          Y       29334
Brick rhs-client37:/pavanbrick1/tiervol10/b
1                                           49164     0          Y       1075 
Brick rhs-client38:/pavanbrick1/tiervol10/b
1                                           49161     0          Y       19137
Brick rhs-client38:/pavanbrick2/tiervol10/n
ewbrick                                     49162     0          Y       20362
NFS Server on localhost                     2049      0          Y       2956 
NFS Server on rhs-client37                  2049      0          Y       18060
NFS Server on 10.70.36.62                   2049      0          Y       20383

Task Status of Volume tiervol10
------------------------------------------------------------------------------
There are no active volume tasks

[root@rhs-client44 ~]# gluster v detach-tier tiervol10
volume remove-brick unknown: success
[root@rhs-client44 ~]# gluster v info tiervol10

Volume Name: tiervol10
Type: Distribute
Volume ID: e6223c16-50fa-4916-b8b9-a83db6e8ec6c
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client37:/pavanbrick1/tiervol10/b1
Brick2: rhs-client38:/pavanbrick1/tiervol10/b1
Brick3: rhs-client38:/pavanbrick2/tiervol10/newbrick
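
Note how detach-tier removed only the hb1 (hot) bricks: the newly added brick
stays back with the volume, confirming it was part of the cold tier (way 2
from the NOTE above).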


=================HOT BRICKS=========================================
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/hb1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000007ffe7c30
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x00000001000000009995e878ffffffff

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/hb1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000007ffe7c31ffffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x00000001000000009995e878ffffffff

=================COLD BRICKS============================================
[root@rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

[root@rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000aaa79b0effffffff
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tiervol10/b1
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000005553cd86
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
trusted.tier-gfid=0x0000000100000000000000009995e877

Newly added brick
=================
[root@rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tiervol10/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tiervol10/newbrick
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000005553cd87aaa79b0d
trusted.glusterfs.volume-id=0xbb730f770c0841a5ad3649e42f0bac1b
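
To make the overlap concrete: the trusted.glusterfs.dht value is four
big-endian 32-bit words, and the last two are the start and end of the brick's
assigned hash range (the first word is the range count; the second is a
type/commit field, zero here). A minimal bash sketch to split the hex dumps
above (the helper name decode_dht is made up for illustration):

decode_dht() {
    # print the four 32-bit words of a trusted.glusterfs.dht hex value
    hex=${1#0x}
    printf 'cnt=0x%s misc=0x%s start=0x%s stop=0x%s\n' \
        "${hex:0:8}" "${hex:8:8}" "${hex:16:8}" "${hex:24:8}"
}
decode_dht 0x00000001000000005553cd87aaa79b0d   # newly added brick
# cnt=0x00000001 misc=0x00000000 start=0x5553cd87 stop=0xaaa79b0d
decode_dht 0x0000000100000000000000005553cd86   # rhs-client38 cold brick b1
# cnt=0x00000001 misc=0x00000000 start=0x00000000 stop=0x5553cd86

The new brick's range [0x5553cd87, 0xaaa79b0d] sits between the cold bricks'
ranges, i.e. it was carved out of the cold tier's hash space; that is exactly
the "ranges get messed up" symptom from the NOTE above.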

-- 
You are receiving this mail because:
You are on the CC list for the bug.
You are the assignee for the bug.

