[Bugs] [Bug 1206517] New: Data Tiering:Distribute-replicate type Volume not getting converted to a tiered volume on attach-tier

bugzilla at redhat.com
Fri Mar 27 10:35:10 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1206517

            Bug ID: 1206517
           Summary: Data Tiering:Distribute-replicate type Volume not
                    getting converted to a tiered volume on attach-tier
           Product: GlusterFS
           Version: mainline
         Component: core
          Severity: urgent
          Assignee: bugs at gluster.org
          Reporter: nchilaka at redhat.com
                CC: bugs at gluster.org, gluster-bugs at redhat.com



Description of problem:
=======================
When attaching a tier to a distribute-replicate volume, the attach-tier
command succeeds but the volume doesn't get converted to a tiered volume.
Given that dist-rep volumes are the most widely deployed volume type,
tiering must be supported on dist-rep volumes too.

Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.803.gitf64666f.autobuild/

glusterfs 3.7dev built on Mar 26 2015 01:04:24

How reproducible:
=================
Easy to reproduce


Steps to Reproduce:
==================
1. Create a gluster volume of type distribute-replicate and start the
volume.
2. Attach a tier to the volume using attach-tier (see the condensed command
sketch after this list).
3. Check the volume type. It still shows as Distributed-Replicate instead of
a tiered volume.
4. Check the xattrs of the bricks: they don't have any tier attributes, even
after mounting.
5. Mount the volume and write some files to it.
All the files simply get distributed and replicated over the bricks and
their respective replica pairs, regardless of whether a brick is part of the
cold or the hot tier.
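
For reference, a condensed form of the steps above. Hostnames and brick
paths here are placeholders, not the ones from the actual setup; the full
transcript follows in the CLI logs:

# create and start a 2x2 distribute-replicate volume
gluster volume create distrep replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start distrep
# attach a replicated hot tier (usage as printed by the CLI:
# volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>...)
gluster volume attach-tier distrep replica 2 \
    server1:/bricks/hot1 server2:/bricks/hot1
# both checks still look like a plain dist-rep volume
gluster volume info distrep            # still Type: Distributed-Replicate
getfattr -d -e hex -m . /bricks/b1     # no tier xattrs on the brick root
# mount and write: files land on hot and cold bricks alike
mkdir -p /mnt/distrep
mount -t glusterfs server1:/distrep /mnt/distrep
for i in $(seq 1 10); do dd if=/dev/zero of=/mnt/distrep/f$i bs=1M count=1; done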


Actual results:
===============
The volume doesnt get converted to tiered volume.
Neither the volume info shows or the dht.tier gets added.
Also the files also get dispersed over just like a regular dist-rep volume
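
A quick one-line check against the state shown in the logs below (the
assumption being that a converted volume would report a tiered type rather
than Distributed-Replicate):

gluster volume info tier_distrep | grep '^Type'
# prints "Type: Distributed-Replicate" even after attach-tier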

Expected results:
================
A dist-rep volume should be convertible to a tiered volume and then behave
like one.
Currently, attach-tier merely behaves like an add-brick command.
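
For comparison, a successful conversion would be expected to present the
volume roughly along these lines. This is a sketch of the intended layout
with separate hot/cold tier sections, not output captured from this setup:

Volume Name: tier_distrep
Type: Tier
Status: Started
Number of Bricks: 10
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
...
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 3 x 2 = 6
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
...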


Additional info (CLI logs):
===============
[root at rhs-client44 ~]# gluster v create tier_distrep replica 2
rhs-client44:/pavanbrick1/tier_distrep/b1
rhs-client37:/pavanbrick1/tier_distrep/b1m
rhs-client37:/pavanbrick1/tier_distrep/b2
rhs-client38:/pavanbrick1/tier_distrep/b2m
rhs-client44:/pavanbrick1/tier_distrep/b3m
rhs-client38:/pavanbrick1/tier_distrep/b3
volume create: tier_distrep: success: please start the volume to access data
[root at rhs-client44 ~]# gluster v info tier_distrep

Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick2: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick3: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick4: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick6: rhs-client38:/pavanbrick1/tier_distrep/b3
[root at rhs-client44 ~]# gluster v attach-tier
Usage: volume attach-tier <VOLNAME> [<replica COUNT>] <NEW-BRICK>...
[root at rhs-client44 ~]# gluster v attach-tier tier_distrep
rhs-client44:/pavanbrick2/tier_distrep/hb1
rhs-client37:/pavanbrick2/tier_distrep/hb1m
rhs-client37:/pavanbrick2/tier_distrep/hb2
rhs-client38:/pavanbrick2/tier_distrep/hb2m
volume add-brick: success
[root at rhs-client44 ~]# gluster v info tier_distrep

Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Created
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
Brick2: rhs-client37:/pavanbrick2/tier_distrep/hb2
Brick3: rhs-client37:/pavanbrick2/tier_distrep/hb1m
Brick4: rhs-client44:/pavanbrick2/tier_distrep/hb1
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick6: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick7: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick8: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick9: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick10: rhs-client38:/pavanbrick1/tier_distrep/b3
[root at rhs-client44 ~]# gluster v status tier_distrep
Volume tier_distrep is not started
[root at rhs-client44 ~]# gluster v start tier_distrep
volume start: tier_distrep: success
[root at rhs-client44 ~]# gluster v info tier_distrep

Volume Name: tier_distrep
Type: Distributed-Replicate
Volume ID: ad81ef54-70ec-41f2-800c-17e5025acb26
Status: Started
Number of Bricks: 5 x 2 = 10
Transport-type: tcp
Bricks:
Brick1: rhs-client38:/pavanbrick2/tier_distrep/hb2m
Brick2: rhs-client37:/pavanbrick2/tier_distrep/hb2
Brick3: rhs-client37:/pavanbrick2/tier_distrep/hb1m
Brick4: rhs-client44:/pavanbrick2/tier_distrep/hb1
Brick5: rhs-client44:/pavanbrick1/tier_distrep/b1
Brick6: rhs-client37:/pavanbrick1/tier_distrep/b1m
Brick7: rhs-client37:/pavanbrick1/tier_distrep/b2
Brick8: rhs-client38:/pavanbrick1/tier_distrep/b2m
Brick9: rhs-client44:/pavanbrick1/tier_distrep/b3m
Brick10: rhs-client38:/pavanbrick1/tier_distrep/b3
[root at rhs-client44 ~]# gluster v status tier_distrep
Status of volume: tier_distrep
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client38:/pavanbrick2/tier_distre
p/hb2m                                      49155     0          Y       1927 
Brick rhs-client37:/pavanbrick2/tier_distre
p/hb2                                       49155     0          Y       32498
Brick rhs-client37:/pavanbrick2/tier_distre
p/hb1m                                      49156     0          Y       32518
Brick rhs-client44:/pavanbrick2/tier_distre
p/hb1                                       49161     0          Y       28127
Brick rhs-client44:/pavanbrick1/tier_distre
p/b1                                        49162     0          Y       28147
Brick rhs-client37:/pavanbrick1/tier_distre
p/b1m                                       49157     0          Y       32538
Brick rhs-client37:/pavanbrick1/tier_distre
p/b2                                        49158     0          Y       32558
Brick rhs-client38:/pavanbrick1/tier_distre
p/b2m                                       49156     0          Y       1950 
Brick rhs-client44:/pavanbrick1/tier_distre
p/b3m                                       49163     0          Y       28167
Brick rhs-client38:/pavanbrick1/tier_distre
p/b3                                        49157     0          Y       1973 
NFS Server on localhost                     2049      0          Y       28188
Self-heal Daemon on localhost               N/A       N/A        Y       28197
NFS Server on 10.70.36.62                   2049      0          Y       2001 
Self-heal Daemon on 10.70.36.62             N/A       N/A        Y       2013 
NFS Server on rhs-client37                  2049      0          Y       32580
Self-heal Daemon on rhs-client37            N/A       N/A        Y       32588

Task Status of Volume tier_distrep
------------------------------------------------------------------------------
There are no active volume tasks




#######################
Xattrs


[root at rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b1
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-0=0x000000000000000000000000
trusted.afr.tier_distrep-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003331f8286663f04f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b3m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-4=0x000000000000000000000000
trusted.afr.tier_distrep-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009995e878ccc7e09f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root at rhs-client44 ~]# 
[root at rhs-client44 ~]# 
[root at rhs-client44 ~]# 
[root at rhs-client44 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb1
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-6=0x000000000000000000000000
trusted.afr.tier_distrep-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003331f827
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root at rhs-client44 ~]# 
#################################################################################################################
[root at rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b2m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-2=0x000000000000000000000000
trusted.afr.tier_distrep-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000006663f0509995e877
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b3
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-4=0x000000000000000000000000
trusted.afr.tier_distrep-client-5=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000009995e878ccc7e09f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root at rhs-client38 ~]# 
[root at rhs-client38 ~]# 
[root at rhs-client38 ~]# 
[root at rhs-client38 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb2m
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000ccc7e0a0ffffffff
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

####################################################################################################################
[root at rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick1/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick1/tier_distrep/b1m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-0=0x000000000000000000000000
trusted.afr.tier_distrep-client-1=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000003331f8286663f04f
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick1/tier_distrep/b2
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-2=0x000000000000000000000000
trusted.afr.tier_distrep-client-3=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x00000001000000006663f0509995e877
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root at rhs-client37 ~]# 
[root at rhs-client37 ~]# 
[root at rhs-client37 ~]# 
[root at rhs-client37 ~]# getfattr -d -e hex -m . /pavanbrick2/tier_distrep/*
getfattr: Removing leading '/' from absolute path names
# file: pavanbrick2/tier_distrep/hb1m
trusted.afr.dirty=0x000000000000000000000000
trusted.afr.tier_distrep-client-6=0x000000000000000000000000
trusted.afr.tier_distrep-client-7=0x000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000000000003331f827
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

# file: pavanbrick2/tier_distrep/hb2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x0000000100000000ccc7e0a0ffffffff
trusted.glusterfs.volume-id=0xad81ef5470ec41f2800c17e5025acb26

[root at rhs-client37 ~]#
