[Bugs] [Bug 1271659] New: gluster v status --xml for a replicated hot tier volume
bugzilla at redhat.com
Wed Oct 14 12:49:34 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1271659
Bug ID: 1271659
Summary: gluster v status --xml for a replicated hot tier volume
Product: Red Hat Gluster Storage
Version: 3.1
Component: glusterfs
Sub Component: tiering
Keywords: Triaged
Assignee: rhs-bugs at redhat.com
Reporter: hgowtham at redhat.com
QA Contact: nchilaka at redhat.com
CC: anekkunt at redhat.com, bugs at gluster.org
Depends On: 1268810
+++ This bug was initially created as a clone of Bug #1268810 +++
Description of problem:
The 'gluster volume status --xml' command fails for a tiered volume that has
multiple hot bricks.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. gluster volume create tiervol replica 2 \
       gfvm3:/opt/volume_test/tier_vol/b1_1 \
       gfvm3:/opt/volume_test/tier_vol/b1_2 \
       gfvm3:/opt/volume_test/tier_vol/b2_1 \
       gfvm3:/opt/volume_test/tier_vol/b2_2 \
       gfvm3:/opt/volume_test/tier_vol/b3_1 \
       gfvm3:/opt/volume_test/tier_vol/b3_2 force
2. gluster volume start tiervol
3. echo 'y' | gluster volume attach-tier tiervol replica 2 \
       gfvm3:/opt/volume_test/tier_vol/b4_1 \
       gfvm3:/opt/volume_test/tier_vol/b4_2 \
       gfvm3:/opt/volume_test/tier_vol/b5_1 \
       gfvm3:/opt/volume_test/tier_vol/b5_2 force
4. gluster v status tiervol --xml
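A quick way to check step 4 for the failure (a sketch, not part of the
original report; assumes xmllint from libxml2 is installed on the node):

    # Pipe the status output through an XML well-formedness check; on an
    # affected build this reports a failure instead of "well formed".
    gluster v status tiervol --xml | xmllint --noout - \
        && echo "status XML is well formed" \
        || echo "status XML is missing or malformed"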
Actual results:
The status --xml command fails for the tiered volume instead of returning the
XML document shown below.
Expected results:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>tiervol</volName>
        <nodeCount>11</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b5_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49164</port>
            <ports>
              <tcp>49164</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8684</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b5_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49163</port>
            <ports>
              <tcp>49163</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8687</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49162</port>
            <ports>
              <tcp>49162</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8699</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b4_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49161</port>
            <ports>
              <tcp>49161</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8708</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49155</port>
            <ports>
              <tcp>49155</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8716</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b1_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49156</port>
            <ports>
              <tcp>49156</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8724</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b2_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49157</port>
            <ports>
              <tcp>49157</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8732</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b2_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49158</port>
            <ports>
              <tcp>49158</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8740</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b3_1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49159</port>
            <ports>
              <tcp>49159</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8750</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/b3_2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49160</port>
            <ports>
              <tcp>49160</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8751</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8678</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>975bfcfa-077c-4edb-beba-409c2013f637</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
      <volume>
        <volName>v1</volName>
        <nodeCount>4</nodeCount>
        <hotBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/hbr1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49154</port>
            <ports>
              <tcp>49154</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8763</pid>
          </node>
        </hotBricks>
        <coldBricks>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/cb1</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49152</port>
            <ports>
              <tcp>49152</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8769</pid>
          </node>
          <node>
            <hostname>10.70.42.203</hostname>
            <path>/data/gluster/tier/cb2</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>49153</port>
            <ports>
              <tcp>49153</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8778</pid>
          </node>
          <node>
            <hostname>NFS Server</hostname>
            <path>localhost</path>
            <peerid>149ac603-8078-41c5-8f71-7373f2a3016f</peerid>
            <status>1</status>
            <port>2049</port>
            <ports>
              <tcp>2049</tcp>
              <rdma>N/A</rdma>
            </ports>
            <pid>8678</pid>
          </node>
        </coldBricks>
        <tasks>
          <task>
            <type>Tier migration</type>
            <id>cfdf6ebf-e4f9-45c5-b8d8-850bfbb426f3</id>
            <status>1</status>
            <statusStr>in progress</statusStr>
          </task>
        </tasks>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>
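For reference, once the fix is in place, the hot- and cold-tier entries in
output like the above can be pulled out with xmllint's XPath mode (a sketch,
not part of the original report; assumes the status output has been saved to
status.xml and that the installed xmllint supports --xpath):

    gluster v status tiervol --xml > status.xml
    # Hot-tier brick paths for tiervol (4 expected in this run); depending
    # on the xmllint version the text nodes may print without separators.
    xmllint --xpath '//volume[volName="tiervol"]/hotBricks/node/path/text()' status.xml; echo
    # Number of cold-tier entries (bricks plus the NFS server node): expect 7.
    xmllint --xpath 'count(//volume[volName="tiervol"]/coldBricks/node)' status.xml; echo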
Additional info:
--- Additional comment from Anand Nekkunti on 2015-10-08 10:48:57 EDT ---
patch: http://review.gluster.org/#/c/12302/
Referenced Bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1268810
[Bug 1268810] gluster v status --xml for a replicated hot tier volume