[Bugs] [Bug 1258338] New: Data Tiering: Tiering related information is not displayed in gluster volume info xml output
bugzilla at redhat.com
Mon Aug 31 06:36:24 UTC 2015
https://bugzilla.redhat.com/show_bug.cgi?id=1258338
Bug ID: 1258338
Summary: Data Tiering: Tiering related information is not
displayed in gluster volume info xml output
Product: GlusterFS
Version: 3.7.3
Component: tiering
Severity: low
Assignee: bugs at gluster.org
Reporter: aloganat at redhat.com
QA Contact: bugs at gluster.org
CC: bugs at gluster.org
Description of problem:
Tiering-related information is not displayed in the gluster volume info XML output.
It would be good if this information were included in the XML output for
automation purposes.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a volume.
2. Attach tier bricks.
3. Execute "gluster volume info --xml"
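The steps above correspond roughly to the following commands (a sketch only:
the brick hosts and paths are taken from the output below, and the attach-tier
syntax shown is the 3.7-era form, which may differ between releases):

```shell
# 1. Create and start a plain distribute volume
gluster volume create testvol 10.70.47.76:/bricks/brick0/testvol_brick0
gluster volume start testvol

# 2. Attach two bricks as the hot tier (3.7-era syntax; verify against
#    "gluster volume help" on the release under test)
gluster volume attach-tier testvol \
    10.70.46.51:/bricks/brick0/testvol_tier1 \
    10.70.47.76:/bricks/brick1/testvol_tier0

# 3. Inspect the XML output
gluster volume info --xml
```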
Actual results:
Tiering-related information is not displayed in the gluster volume info XML
output.
Expected results:
Tiering-related information should be displayed in the gluster volume info XML
output.
Additional info:
[root at node31 ~]# gluster volume info
Volume Name: testvol
Type: Tier
Volume ID: 496dfa0d-a370-4dd9-84b5-4048e91aef71
Status: Started
Number of Bricks: 3
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distribute
Number of Bricks: 2
Brick1: 10.70.46.51:/bricks/brick0/testvol_tier1
Brick2: 10.70.47.76:/bricks/brick1/testvol_tier0
Cold Tier:
Cold Tier Type : Distribute
Number of Bricks: 1
Brick3: 10.70.47.76:/bricks/brick0/testvol_brick0
Options Reconfigured:
performance.readdir-ahead: on
[root at node31 ~]#
[root at node31 ~]# gluster volume info --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volInfo>
    <volumes>
      <volume>
        <name>testvol</name>
        <id>496dfa0d-a370-4dd9-84b5-4048e91aef71</id>
        <status>1</status>
        <statusStr>Started</statusStr>
        <brickCount>3</brickCount>
        <distCount>1</distCount>
        <stripeCount>1</stripeCount>
        <replicaCount>1</replicaCount>
        <disperseCount>0</disperseCount>
        <redundancyCount>0</redundancyCount>
        <type>5</type>
        <typeStr>Tier</typeStr>
        <transport>0</transport>
        <xlators/>
        <bricks>
          <brick uuid="9d77138d-ce50-4fdd-9dad-6c4efbd391e7">10.70.46.51:/bricks/brick0/testvol_tier1<name>10.70.46.51:/bricks/brick0/testvol_tier1</name><hostUuid>9d77138d-ce50-4fdd-9dad-6c4efbd391e7</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick1/testvol_tier0<name>10.70.47.76:/bricks/brick1/testvol_tier0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
          <brick uuid="261b213b-a9f6-4fb6-8313-11e7eba47258">10.70.47.76:/bricks/brick0/testvol_brick0<name>10.70.47.76:/bricks/brick0/testvol_brick0</name><hostUuid>261b213b-a9f6-4fb6-8313-11e7eba47258</hostUuid></brick>
        </bricks>
        <optCount>1</optCount>
        <options>
          <option>
            <name>performance.readdir-ahead</name>
            <value>on</value>
          </option>
        </options>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>
[root at node31 ~]#
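The gap can be confirmed programmatically. A minimal sketch that parses an
abbreviated copy of the XML output above with Python's standard-library
xml.etree: the volume reports type Tier, but there is no element that tells
automation which bricks belong to which tier (the "hotBricks"/"coldBricks"
names below are hypothetical examples of what is missing, not real elements
in the 3.7.3 output):

```python
import xml.etree.ElementTree as ET

# Abbreviated "gluster volume info --xml" output from this report.
XML = """<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <volInfo>
    <volumes>
      <volume>
        <name>testvol</name>
        <type>5</type>
        <typeStr>Tier</typeStr>
        <brickCount>3</brickCount>
      </volume>
      <count>1</count>
    </volumes>
  </volInfo>
</cliOutput>"""

root = ET.fromstring(XML)
vol = root.find("./volInfo/volumes/volume")

# The volume identifies itself as tiered...
print(vol.find("typeStr").text)   # Tier

# ...but nothing separates hot-tier from cold-tier bricks.
# ("hotBricks"/"coldBricks" are hypothetical element names automation
# would need; find() returns None because they are absent.)
print(vol.find("hotBricks"))      # None
print(vol.find("coldBricks"))     # None
```

A script consuming this output today has to fall back on the plain-text
"gluster volume info" transcript to learn the tier layout, which defeats the
purpose of the --xml flag.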
--
You are receiving this mail because:
You are the QA Contact for the bug.
You are on the CC list for the bug.
You are the assignee for the bug.