[Bugs] [Bug 1268822] New: tier/cli: number of bricks remains the same in v info --xml

bugzilla at redhat.com bugzilla at redhat.com
Mon Oct 5 11:43:25 UTC 2015


https://bugzilla.redhat.com/show_bug.cgi?id=1268822

            Bug ID: 1268822
           Summary: tier/cli: number of bricks remains the same in v info
                    --xml
           Product: GlusterFS
           Version: mainline
         Component: tiering
          Assignee: bugs at gluster.org
          Reporter: hgowtham at redhat.com
        QA Contact: bugs at gluster.org
                CC: bugs at gluster.org



Description of problem:
The number of bricks reported under <coldBricks> stays at one distribute subvolume no matter how many bricks the cold tier actually contains.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a tiered Gluster volume with more than one brick in the cold tier.
2. Issue gluster v info --xml.
3. Inspect the <coldBricks> section of the output (a small verification sketch follows these steps).
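
The element names used here (coldBricks, numberOfBricks, brick) match the output pasted below; the check itself, and the use of Python for it, are only a sketch and not part of the original report:

# Sketch: compare the total in <numberOfBricks> with the number of <brick>
# elements actually listed under <coldBricks> in "gluster v info --xml".
import re
import subprocess
import xml.etree.ElementTree as ET

out = subprocess.run(["gluster", "v", "info", "--xml"],
                     capture_output=True, text=True, check=True).stdout
root = ET.fromstring(out)

for cold in root.iter("coldBricks"):
    listed = len(cold.findall("brick"))
    reported = (cold.findtext("numberOfBricks") or "").strip()
    match = re.search(r"=\s*(\d+)", reported)   # "1 x 2 = 2" -> total 2
    print("listed bricks :", listed)
    print("reported      :", reported)
    if match and int(match.group(1)) != listed:
        print("MISMATCH: <numberOfBricks> does not match the bricks listed")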

Actual results:
<coldBricks>
            <coldBrickType>Replicate</coldBrickType>
            <numberOfBricks>1 x 2 = 2</numberOfBricks>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_1<name>10.70.42.203:/data/gluster/tier/b1_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_2<name>10.70.42.203:/data/gluster/tier/b1_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_1<name>10.70.42.203:/data/gluster/tier/b2_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_2<name>10.70.42.203:/data/gluster/tier/b2_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_1<name>10.70.42.203:/data/gluster/tier/b3_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_2<name>10.70.42.203:/data/gluster/tier/b3_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
          </coldBricks>
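
Note the mismatch in the output above: six <brick> entries are listed (three replica-2 pairs), yet coldBrickType reads "Replicate" and numberOfBricks reads "1 x 2 = 2". Judging purely from the output format, the count string appears to follow "<distribute subvolumes> x <replica count> = <total bricks>", so for this layout it should read "3 x 2 = 6", as in the expected results below. A tiny illustration of that arithmetic (inferred from the output format, not from the gluster source):

# Inferred format of <numberOfBricks>: "<distribute> x <replica> = <total>".
# For the six cold bricks above arranged as three replica-2 pairs:
replica_count = 2
brick_count = 6
distribute_count = brick_count // replica_count              # 3
print(f"{distribute_count} x {replica_count} = {brick_count}")  # "3 x 2 = 6"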


Expected results:
 <coldBricks>
            <coldBrickType>Distributed-Replicate</coldBrickType>
            <numberOfBricks>3 x 2 = 6</numberOfBricks>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_1<name>10.70.42.203:/data/gluster/tier/b1_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b1_2<name>10.70.42.203:/data/gluster/tier/b1_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_1<name>10.70.42.203:/data/gluster/tier/b2_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b2_2<name>10.70.42.203:/data/gluster/tier/b2_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_1<name>10.70.42.203:/data/gluster/tier/b3_1</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
            <brick uuid="149ac603-8078-41c5-8f71-7373f2a3016f">10.70.42.203:/data/gluster/tier/b3_2<name>10.70.42.203:/data/gluster/tier/b3_2</name><hostUuid>149ac603-8078-41c5-8f71-7373f2a3016f</hostUuid></brick>
          </coldBricks>


Additional info:
