[Bugs] [Bug 1582443] New: gluster volume status <volname> does not show glustershd status correctly

bugzilla at redhat.com
Fri May 25 08:35:09 UTC 2018


https://bugzilla.redhat.com/show_bug.cgi?id=1582443

            Bug ID: 1582443
           Summary: gluster volume status <volname> does not show
                    glustershd status correctly
           Product: GlusterFS
           Version: 3.12
         Component: glusterd
          Assignee: bugs at gluster.org
          Reporter: zz.sh.cynthia at gmail.com
                CC: bugs at gluster.org



Description of problem:

glustershd status is not shown correctly in the output of "gluster v status
<volname>".

Version-Release number of selected component (if applicable):

3.12.3

How reproducible:
Isolate the sn-0 node by dropping all packets coming in/out to/from the other
sn nodes for a while, then restore the network.

Steps to Reproduce:
1. isolate sn-0 (see the iptables sketch after these steps)
2. wait 10 seconds
3. restore network
4. execute "gluster v status <volname>"
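
One possible way to do the isolation (not taken from the original report; the
peer hostname is a placeholder and it assumes iptables is used on sn-0):

[root at sn-0:/root]
# iptables -I INPUT  -s sn-1.local -j DROP    # drop incoming packets from sn-1
# iptables -I OUTPUT -d sn-1.local -j DROP    # drop outgoing packets to sn-1
# sleep 10                                    # keep sn-0 isolated for a while
# iptables -D INPUT  -s sn-1.local -j DROP    # remove the rules again to
# iptables -D OUTPUT -d sn-1.local -j DROP    # restore the network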

Actual results:

Status of volume: export
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sn-0.local:/mnt/bricks/export/brick   49154     0          Y       15425
Brick sn-1.local:/mnt/bricks/export/brick   49154     0          Y       3218 
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
Self-heal Daemon on sn-0.local              N/A       N/A        Y       15568
Self-heal Daemon on sn-1.local              N/A       N/A        Y       13719

Task Status of Volume export
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: log
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sn-0.local:/mnt/bricks/log/brick      49155     0          Y       4067 
Brick sn-1.local:/mnt/bricks/log/brick      49155     0          Y       3509 
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
Self-heal Daemon on sn-0.local              N/A       N/A        Y       15568
Self-heal Daemon on sn-1.local              N/A       N/A        Y       13719

Task Status of Volume log
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: mstate
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sn-0.local:/mnt/bricks/mstate/brick   49153     0          Y       3500 
Brick sn-1.local:/mnt/bricks/mstate/brick   49153     0          Y       2970 
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
Self-heal Daemon on sn-0.local              N/A       N/A        Y       15568
Self-heal Daemon on sn-1.local              N/A       N/A        Y       13719

Task Status of Volume mstate
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: services
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick sn-0.local:/mnt/bricks/services/brick 49156     0          Y       15442
Brick sn-1.local:/mnt/bricks/services/brick 49152     0          Y       2618 
Self-heal Daemon on localhost               N/A       N/A        N       N/A  
Self-heal Daemon on sn-0.local              N/A       N/A        Y       15568
Self-heal Daemon on sn-1.local              N/A       N/A        Y       13719

Task Status of Volume services

[root at sn-2:/root]
# ps -ef | grep glustershd
root     11142     1  0 14:30 ?        00:00:00 /usr/sbin/glusterfs -s
sn-2.local --volfile-id gluster/glustershd -p
/var/run/gluster/glustershd/glustershd.pid -l /var/log/glusterfs/glustershd.log
-S /var/run/gluster/31d6e90b5e65aededb7ada7278c7181a.socket --xlator-option
*replicate*.node-uuid=7321b551-5b98-4583-bc0b-887ebae4ba2a
root     21017 16286  0 15:25 pts/2    00:00:00 grep --color=auto glustershd
[root at sn-2:/root]
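
As an additional cross-check (a sketch, assuming only the pid file path shown
on the glustershd command line above), the recorded pid can be compared with
the process that is actually running:

[root at sn-2:/root]
# cat /var/run/gluster/glustershd/glustershd.pid
# ps -p $(cat /var/run/gluster/glustershd/glustershd.pid) -o pid,cmd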

Expected results:

"gluster v status" should report the glustershd status correctly, i.e. show the
Self-heal Daemon on localhost as online (Y) with its pid, since the glustershd
process is actually running (see the ps output above).

Additional info:
