[Bugs] [Bug 1224195] New: Disperse volume: gluster volume status doesn't show shd status
bugzilla at redhat.com
Fri May 22 10:15:30 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1224195
Bug ID: 1224195
Summary: Disperse volume: gluster volume status doesn't show
shd status
Product: Red Hat Gluster Storage
Version: 3.1
Component: glusterfs
Sub Component: disperse
Keywords: Triaged
Assignee: rhs-bugs at redhat.com
Reporter: byarlaga at redhat.com
QA Contact: byarlaga at redhat.com
CC: amukherj at redhat.com, aspandey at redhat.com,
bugs at gluster.org, byarlaga at redhat.com,
gluster-bugs at redhat.com, pkarampu at redhat.com
Depends On: 1217311
Blocks: 1186580 (qe_tracker_everglades)
Group: redhat
+++ This bug was initially created as a clone of Bug #1217311 +++
Description of problem:
=======================
The "gluster volume status" command does not list the self-heal daemon (shd)
status for a disperse (EC) volume, even after cluster.disperse-self-heal-daemon
has been enabled.
[root at vertigo ~]# gluster v status testvol
Status of volume: testvol
Gluster process                           TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vertigo:/rhs/brick1/b1              49170     0          Y       23251
Brick ninja:/rhs/brick1/b2                49166     0          Y       30404
Brick transformers:/rhs/brick1/b3         49160     0          Y       27340
Brick interstellar:/rhs/brick1/b4         49160     0          Y       29854
Brick vertigo:/rhs/brick2/b5              49171     0          Y       23269
Brick ninja:/rhs/brick2/b6                49167     0          Y       30421
Brick transformers:/rhs/brick2/b7         49161     0          Y       27357
Brick interstellar:/rhs/brick2/b8         49161     0          Y       29871
Brick vertigo:/rhs/brick3/b9              49172     0          Y       20391
Brick ninja:/rhs/brick3/b10               49168     0          Y       30438
Brick transformers:/rhs/brick3/b11        49162     0          Y       27374
Brick interstellar:/rhs/brick3/b12        49162     0          Y       29888
Brick vertigo:/rhs/brick4/b13             49174     0          Y       21396
Brick ninja:/rhs/brick4/b14               49170     0          Y       31147
Brick transformers:/rhs/brick4/b15        49164     0          Y       28119
Brick interstellar:/rhs/brick4/b16        49164     0          Y       30528
Brick vertigo:/rhs/brick1/b17             49175     0          Y       21415
Brick ninja:/rhs/brick1/b18               49171     0          Y       31166
Brick transformers:/rhs/brick1/b19        49165     0          Y       28138
Brick interstellar:/rhs/brick1/b20        49165     0          Y       30547
Brick vertigo:/rhs/brick2/b21             49176     0          Y       21435
Brick ninja:/rhs/brick2/b22               49172     0          Y       31185
Brick transformers:/rhs/brick2/b23        49166     0          Y       28157
Brick interstellar:/rhs/brick2/b24        49166     0          Y       30566
Snapshot Daemon on localhost              49173     0          Y       20475
NFS Server on localhost                   2049      0          Y       24081
Quota Daemon on localhost                 N/A       N/A        Y       24135
Snapshot Daemon on transformers           49163     0          Y       27470
NFS Server on transformers                2049      0          Y       30403
Quota Daemon on transformers              N/A       N/A        Y       30477
Snapshot Daemon on ninja                  49169     0          Y       30522
NFS Server on ninja                       N/A       N/A        N       N/A
Quota Daemon on ninja                     N/A       N/A        Y       914
Snapshot Daemon on interstellar           49163     0          Y       29975
NFS Server on interstellar                2049      0          Y       32709
Quota Daemon on interstellar              N/A       N/A        Y       32764
Task Status of Volume testvol
------------------------------------------------------------------------------
Task : Rebalance
ID : 08f87c28-cbcc-41eb-acab-09924f6dcd63
Status : in progress
[root at vertigo ~]#
[root at vertigo ~]# gluster v info testvol
Volume Name: testvol
Type: Distributed-Disperse
Volume ID: e7979f7a-69c8-40ce-8541-2931fbf37d23
Status: Started
Number of Bricks: 2 x (8 + 4) = 24
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/b1
Brick2: ninja:/rhs/brick1/b2
Brick3: transformers:/rhs/brick1/b3
Brick4: interstellar:/rhs/brick1/b4
Brick5: vertigo:/rhs/brick2/b5
Brick6: ninja:/rhs/brick2/b6
Brick7: transformers:/rhs/brick2/b7
Brick8: interstellar:/rhs/brick2/b8
Brick9: vertigo:/rhs/brick3/b9
Brick10: ninja:/rhs/brick3/b10
Brick11: transformers:/rhs/brick3/b11
Brick12: interstellar:/rhs/brick3/b12
Brick13: vertigo:/rhs/brick4/b13
Brick14: ninja:/rhs/brick4/b14
Brick15: transformers:/rhs/brick4/b15
Brick16: interstellar:/rhs/brick4/b16
Brick17: vertigo:/rhs/brick1/b17
Brick18: ninja:/rhs/brick1/b18
Brick19: transformers:/rhs/brick1/b19
Brick20: interstellar:/rhs/brick1/b20
Brick21: vertigo:/rhs/brick2/b21
Brick22: ninja:/rhs/brick2/b22
Brick23: transformers:/rhs/brick2/b23
Brick24: interstellar:/rhs/brick2/b24
Options Reconfigured:
features.uss: on
features.quota: on
server.event-threads: 3
client.event-threads: 4
cluster.disperse-self-heal-daemon: enable
[root at vertigo ~]#
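Note that the volume options above already show cluster.disperse-self-heal-daemon
enabled. A minimal sketch of the checks involved (the per-daemon "shd" filter of
"gluster volume status" is assumed to behave for disperse volumes as it does for
replicate volumes; volume name as above):

```shell
# The option is already enabled on this volume; shown for completeness.
gluster volume set testvol cluster.disperse-self-heal-daemon enable

# Ask for the self-heal daemon rows explicitly; before the fix this
# shows no "Self-heal Daemon on <node>" entries for a disperse volume.
gluster volume status testvol shd
```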
Version-Release number of selected component (if applicable):
=============================================================
[root at vertigo ~]# gluster --version
glusterfs 3.8dev built on Apr 28 2015 14:47:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General
Public License.
[root at vertigo ~]#
How reproducible:
=================
100%
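The transcript above reproduces this on an existing 2 x (8 + 4) volume; a
smaller sketch of the steps (hypothetical hosts host1..host3 and a 1 x (2 + 1)
layout, chosen only for brevity):

```shell
# Create and start a small disperse volume (hypothetical hosts/paths).
gluster volume create testvol disperse 3 redundancy 1 \
    host1:/rhs/brick1/b1 host2:/rhs/brick1/b2 host3:/rhs/brick1/b3
gluster volume start testvol

# Enable the disperse self-heal daemon.
gluster volume set testvol cluster.disperse-self-heal-daemon enable

# Bug: the output lists bricks, NFS, quota, and snapshot daemons,
# but no "Self-heal Daemon on <node>" rows for the disperse volume.
gluster volume status testvol
```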
Actual results:
The "gluster volume status" output lists bricks, NFS, quota, and snapshot
daemons, but contains no Self-heal Daemon entry for the disperse volume.
Expected results:
The status output should include a Self-heal Daemon entry for each node,
since cluster.disperse-self-heal-daemon is enabled on the volume.
Additional info:
--- Additional comment from Anand Avati on 2015-05-14 02:06:34 EDT ---
REVIEW: http://review.gluster.org/10764 ( Added support to get status of Self
Heal Daemon for disperse volume. ("gluster volume status")) posted (#2) for
review on master by Ashish Pandey (aspandey at redhat.com)
--- Additional comment from Anand Avati on 2015-05-14 05:27:28 EDT ---
REVIEW: http://review.gluster.org/10764 ( Added support to get status of Self
Heal Daemon for disperse volume. ("gluster volume status")) posted (#3) for
review on master by Ashish Pandey (aspandey at redhat.com)
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1186580
[Bug 1186580] QE tracker bug for Everglades
https://bugzilla.redhat.com/show_bug.cgi?id=1217311
[Bug 1217311] Disperse volume: gluster volume status doesn't show shd
status