[Gluster-users] How do I increase volume space on a gluster disperse volume

Leung, Alex (398C) alex.leung at jpl.nasa.gov
Mon Oct 10 18:48:02 UTC 2016


Actually, I was planning to slowly grow the individual bricks in place, to a bigger size, without any data transfer or volume downtime.
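For example, assuming a brick sits on an LVM logical volume formatted with XFS (the volume group and LV names below are placeholders for whatever the bricks actually use), the in-place growth could be done online:

    # extend the logical volume backing the brick by 20TB
    lvextend -L +20T /dev/vg_bricks/brick_lv
    # grow the XFS filesystem while it stays mounted
    xfs_growfs /data/gfs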


Alex Leung


On 10/10/16, 11:15 AM, "gluster-users-bounces at gluster.org on behalf of Joe Julian" <gluster-users-bounces at gluster.org on behalf of joe at julianfamily.org> wrote:

    
    
    On 10/10/2016 11:07 AM, Serkan Çoban wrote:
    >> Is it like
    >> gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
    > Yes, the command is like that; it is spelled out in full below.
    >
    >> Besides, can I have bricks of different sizes? Such as raid1, 2, and 3 at 20 TB and raid5, 6, and 7 at 40 TB?
    > You don't want to do that; 20TB of each 40TB brick would be wasted. Bricks should be the same size.
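    > To spell it out: a disperse volume needs no <stripe|replica> count on
    > add-brick, only a brick count that is a multiple of the disperse count
    > (6 here), so something like the following should work with your brick
    > names, followed by a rebalance to spread existing data across both
    > subvolumes:
    >
    >     gluster volume add-brick pdsclust raid1-gb:/data/gfs \
    >         raid2-gb:/data/gfs raid3-gb:/data/gfs raid5-gb:/data/gfs \
    >         raid6-gb:/data/gfs raid7-gb:/data/gfs
    >
    >     gluster volume rebalance pdsclust start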
    
    As in my response to "Rebalancing after adding larger bricks": if you
    do choose mismatched sizes, the cluster.min-free-disk setting (default
    10%) at least ensures that new files land on the larger bricks once the
    smaller bricks run low on space, preventing the smaller bricks from
    overfilling.
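    For example, to raise the threshold to 15% (the value is illustrative):

        gluster volume set pdsclust cluster.min-free-disk 15%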
    
    >
    > On Mon, Oct 10, 2016 at 8:42 PM, Leung, Alex (398C)
    > <alex.leung at jpl.nasa.gov> wrote:
    >> Thanks, but what is the exact command to add-brick?
    >>
    >> volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ...
    >>
    >> Is it like
    >>
    >> gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
    >>
    >> What is the value of [<stripe|replica> <COUNT>]?
    >>
    >> Besides, can I have bricks of different sizes? Such as raid1, 2, and 3 at 20 TB and raid5, 6, and 7 at 40 TB?
    >>
    >>
    >> Alex Leung
    >>
    >> On 10/10/16, 7:17 AM, "Vijay Bellur" <vbellur at redhat.com> wrote:
    >>
    >>      On Thu, Oct 6, 2016 at 11:34 AM, Leung, Alex (398C)
    >>      <alex.leung at jpl.nasa.gov> wrote:
    >>      > Here is my configuration:
    >>      >
    >>      > [root at raid4 ~]# gluster volume info
    >>      >
    >>      > Volume Name: pdsclust
    >>      > Type: Disperse
    >>      > Volume ID: 02629f52-cfe1-4542-8581-21d25e254d39
    >>      > Status: Started
    >>      > Number of Bricks: 1 x (4 + 2) = 6
    >>      > Transport-type: tcp
    >>      > Bricks:
    >>      > Brick1: raid4-gb:/data/gfs
    >>      > Brick2: raid8-gb:/data/gfs
    >>      > Brick3: raid10-gb:/data/gfs
    >>      > Brick4: raid12-gb:/data/gfs
    >>      > Brick5: raid14-gb:/data/gfs
    >>      > Brick6: raid16-gb:/data/gfs
    >>      > Options Reconfigured:
    >>      > performance.readdir-ahead: on
    >>      > [root at raid4 ~]#
    >>      >
    >>      > How do I add bricks to this disperse volume?
    >>      > How do I add another (4 + 2) = 6 subvolume to make it:
    >>      >
    >>      > Number of Bricks: 2 x (4 + 2) = 12
    >>
    >>
    >>      You would need to add 6 more bricks to the volume to get to this state.
    >>
    >>      Regards,
    >>      Vijay
    >>
    
    _______________________________________________
    Gluster-users mailing list
    Gluster-users at gluster.org
    http://www.gluster.org/mailman/listinfo/gluster-users
    



