[Gluster-users] Bricks to sub-volume mapping

Aravinda avishwan at redhat.com
Tue Jan 9 07:17:17 UTC 2018


No, we don't store that information separately, but it can easily be 
derived from the Volume Info.

For example, in the Volume Info below, "Number of Bricks" is shown in 
the following format:

     Number of Subvols x (Number of Data bricks + Number of Redundancy 
bricks) = Total Bricks

In this case that is 2 x (4 + 2) = 12, i.e. two disperse sub volumes of 
6 bricks each, taken in the order the bricks are listed.

Note: Sub volumes can be derived without storing the mapping as 
separate info because we do not support mixing different sub volume 
types within a single Volume (except in the case of Tiering). In the 
future we may support sub volumes of multiple types within a Volume 
(tracked for Glusterd2 in https://github.com/gluster/glusterd2/issues/388).
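
If you want to compute the grouping yourself, below is a minimal sketch 
in Python (not a Gluster tool or API; it assumes the bricks are grouped 
in the order "gluster volume info" lists them, which is what gives the 
"first 6 / next 6" split in the reply below):

    # Minimal sketch: group bricks into disperse sub volumes using the
    # counts from "Number of Bricks: 2 x (4 + 2) = 12". Bricks are taken
    # in the order "gluster volume info" lists them.
    def subvolumes(bricks, data, redundancy):
        size = data + redundancy  # bricks per sub volume (4 + 2 = 6 here)
        return [bricks[i:i + size] for i in range(0, len(bricks), size)]

    bricks = [
        "pdchyperscale1sds:/ws/disk1/ws_brick",   # Brick1
        "pdchyperscale2sds:/ws/disk1/ws_brick",   # Brick2
        "pdchyperscale3sds:/ws/disk1/ws_brick",   # Brick3
        "pdchyperscale1sds:/ws/disk2/ws_brick",   # Brick4
        "pdchyperscale2sds:/ws/disk2/ws_brick",   # Brick5
        "pdchyperscale3sds:/ws/disk2/ws_brick",   # Brick6
        "pdchyperscale1sds:/ws/disk3/ws_brick",   # Brick7
        "pdchyperscale2sds:/ws/disk3/ws_brick",   # Brick8
        "pdchyperscale3sds:/ws/disk3/ws_brick",   # Brick9
        "pdchyperscale1sds:/ws/disk4/ws_brick",   # Brick10
        "pdchyperscale2sds:/ws/disk4/ws_brick",   # Brick11
        "pdchyperscale3sds:/ws/disk4/ws_brick",   # Brick12
    ]

    for n, sv in enumerate(subvolumes(bricks, 4, 2), start=1):
        print("Sub volume %d: %s" % (n, ", ".join(sv)))

With the 12 bricks from the volume below, this prints the disk1 and 
disk2 bricks (Brick1-Brick6) as sub volume 1 and the disk3 and disk4 
bricks (Brick7-Brick12) as sub volume 2, matching the reply further down.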


On Tuesday 09 January 2018 12:33 PM, Anand Malagi wrote:
>
> But do we store this information somewhere as part of gluster metadata 
> or something…
>
> Thanks and Regards,
>
> --Anand
>
> Extn : 6974
>
> Mobile : 91 9552527199, 91 9850160173
>
> *From:* Aravinda [mailto:avishwan at redhat.com]
> *Sent:* 09 January 2018 12:31
> *To:* Anand Malagi <amalagi at commvault.com>; gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Bricks to sub-volume mapping
>
> The first 6 bricks belong to the first sub volume and the next 6 
> bricks belong to the second.
>
> On Tuesday 09 January 2018 12:11 PM, Anand Malagi wrote:
>
>     Hi Team,
>
>     Please let me know how I can tell which bricks are part of which
>     sub-volumes in the case of a disperse volume. For example, the
>     volume below has two sub-volumes:
>
>     Type: Distributed-Disperse
>
>     Volume ID: 6dc8ced8-27aa-4481-bfe8-057133c31d0b
>
>     Status: Started
>
>     Snapshot Count: 0
>
>     Number of Bricks: 2 x (4 + 2) = 12
>
>     Transport-type: tcp
>
>     Bricks:
>
>     Brick1: pdchyperscale1sds:/ws/disk1/ws_brick
>
>     Brick2: pdchyperscale2sds:/ws/disk1/ws_brick
>
>     Brick3: pdchyperscale3sds:/ws/disk1/ws_brick
>
>     Brick4: pdchyperscale1sds:/ws/disk2/ws_brick
>
>     Brick5: pdchyperscale2sds:/ws/disk2/ws_brick
>
>     Brick6: pdchyperscale3sds:/ws/disk2/ws_brick
>
>     Brick7: pdchyperscale1sds:/ws/disk3/ws_brick
>
>     Brick8: pdchyperscale2sds:/ws/disk3/ws_brick
>
>     Brick9: pdchyperscale3sds:/ws/disk3/ws_brick
>
>     Brick10: pdchyperscale1sds:/ws/disk4/ws_brick
>
>     Brick11: pdchyperscale2sds:/ws/disk4/ws_brick
>
>     Brick12: pdchyperscale3sds:/ws/disk4/ws_brick
>
>     Please suggest how to determine which bricks are part of the
>     first and second sub-volumes.
>
>     Thanks and Regards,
>
>     --Anand
>
>     Extn : 6974
>
>     Mobile : 91 9552527199, 91 9850160173
>
> -- 
> regards
> Aravinda VK


-- 
regards
Aravinda VK
http://aravindavk.in


