[Bugs] [Bug 1278413] New: Data Tiering: AFR(replica) self-heal daemon details go missing on attach-tier

bugzilla at redhat.com
Thu Nov 5 12:20:35 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1278413

            Bug ID: 1278413
           Summary: Data Tiering: AFR(replica) self-heal daemon details go
                    missing on attach-tier
           Product: Red Hat Gluster Storage
           Version: 3.1
         Component: glusterfs
     Sub Component: tiering
          Keywords: Reopened, Triaged
          Severity: urgent
          Priority: urgent
          Assignee: rhs-bugs at redhat.com
          Reporter: nchilaka at redhat.com
        QA Contact: nchilaka at redhat.com
                CC: bugs at gluster.org, dlambrig at redhat.com,
                    josferna at redhat.com
        Depends On: 1212830
            Blocks: 1260923, 1186580 (qe_tracker_everglades), 1199352
                    (glusterfs-3.7.0)



+++ This bug was initially created as a clone of Bug #1212830 +++

Description of problem:
======================
When we create a dist-rep volume and check its status, the output lists the
AFR self-heal daemon as below:
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601

But after attach-tier, this daemon process no longer shows up in vol status.


Version-Release number of selected component (if applicable):
============================================================
[root@interstellar ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
[root@interstellar ~]# rpm -qa|grep gluster
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64


Steps to Reproduce:
===================
1. Create a 3x dist-rep volume.
2. Start the volume and check its status:
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
Self-heal Daemon on localhost               N/A       N/A        Y       62245
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
Self-heal Daemon on 10.70.34.56             N/A       N/A        Y       24601
NFS Server on ninja                         N/A       N/A        N       N/A  
Self-heal Daemon on ninja                   N/A       N/A        Y       14898
NFS Server on transformers                  N/A       N/A        N       N/A  
Self-heal Daemon on transformers            N/A       N/A        Y       15357

Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks


3. Now attach a tier and reissue the command; the self-heal daemons no longer
show up:
[root@interstellar ~]# gluster v attach-tier rep3 replica 3
ninja:/rhs/brick1/rep3-tier interstellar:/rhs/brick1/rep3-tier
transformers:/rhs/brick1/rep3-tier 
volume add-brick: success
[root@interstellar ~]# gluster v status rep3
Status of volume: rep3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick transformers:/rhs/brick1/rep3-tier    49175     0          Y       15496
Brick interstellar:/rhs/brick1/rep3-tier    49190     0          Y       62447
Brick ninja:/rhs/brick1/rep3-tier           49237     0          Y       15080
Brick ninja:/rhs/brick1/rep3a               49234     0          Y       14452
Brick interstellar:/rhs/brick1/rep3a        49187     0          Y       60206
Brick transformers:/rhs/brick1/rep3a        49172     0          Y       14930
Brick interstellar:/rhs/brick1/rep3b        49188     0          Y       60223
Brick ninja:/rhs/brick1/rep3b               49235     0          Y       14471
Brick transformers:/rhs/brick1/rep3b        49173     0          Y       14948
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on ninja                         N/A       N/A        N       N/A  
NFS Server on 10.70.34.56                   N/A       N/A        N       N/A  
NFS Server on transformers                  N/A       N/A        N       N/A  

Task Status of Volume rep3
------------------------------------------------------------------------------
There are no active volume tasks
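
The regression above can be spotted mechanically from saved status output. As
a sketch (this helper is my addition, not part of the original report; the
function name and file-argument convention are assumptions), one could grep a
captured `gluster v status` transcript for the missing rows:

```shell
# Hypothetical helper, not from the original report: scan a saved
# 'gluster v status' transcript and report whether any Self-heal
# Daemon rows are present. $1 is a path to the captured output.
check_shd() {
    if grep -q 'Self-heal Daemon' "$1"; then
        echo "self-heal daemon entries present"
    else
        echo "self-heal daemon entries MISSING"
    fi
}
```

Run against the pre- and post-attach-tier captures above, this would be
expected to print "present" and "MISSING" respectively.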



For more logs, refer to bz#1212822.

--- Additional comment from Dan Lambright on 2015-04-23 11:05:58 EDT ---

We do not support self-healing with tiered volumes in V1. We will support it
in the future. Marking as deferred.

--- Additional comment from nchilaka on 2015-11-05 07:19:28 EST ---

I think now we should be supporting this, right?


Referenced Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
https://bugzilla.redhat.com/show_bug.cgi?id=1199352
[Bug 1199352] GlusterFS 3.7.0 tracker
https://bugzilla.redhat.com/show_bug.cgi?id=1212830
[Bug 1212830] Data Tiering: AFR(replica) self-heal daemon details go
missing on attach-tier
https://bugzilla.redhat.com/show_bug.cgi?id=1260923
[Bug 1260923] Tracker for tiering in 3.1.2