[Gluster-users] How do I increase volume space on a gluster disperse volume

Pranith Kumar Karampuri pkarampu at redhat.com
Wed Oct 12 15:47:57 UTC 2016


hi,
      These messages generally show up when gluster identifies a file that needs
heal. Maybe you should find out why heal traffic is increasing?
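
A quick way to check, assuming the volume is still named pdsclust as in your
logs (a sketch, adjust to your setup):

    gluster volume status pdsclust                      # are all bricks online?
    gluster volume heal pdsclust info                   # entries pending heal
    gluster volume heal pdsclust statistics heal-count  # pending count per brick

Reading the masks in those messages as hex bitmaps over the six bricks of the
disperse set, up=3F (binary 111111) says all six bricks were up, while good=1F
and bad=20 suggest the sixth brick answered differently from the other five.
That is the brick I would look at first (if the bits follow the brick order
shown in 'gluster volume info', that would be raid16-gb).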

On Tue, Oct 11, 2016 at 10:36 PM, Leung, Alex (398C) <
alex.leung at jpl.nasa.gov> wrote:

> Got a ton of error messages in the export-gfs.log. Any idea what they are
> and how to fix them?
>
> The message "W [MSGID: 122053] [ec-common.c:116:ec_check_status]
> 0-pdsclust-disperse-0: Operation failed on some subvolumes (up=3F, mask=1F,
> remaining=0, good=1F, bad=20)" repeated 3 times between [2016-10-11
> 13:32:04.509834] and [2016-10-11 13:32:06.008820]
> [2016-10-11 13:32:06.012555] W [MSGID: 122056]
> [ec-combine.c:866:ec_combine_check] 0-pdsclust-disperse-0: Mismatching
> xdata in answers of 'LOOKUP'
> [2016-10-11 13:32:06.013320] W [MSGID: 122053] [ec-common.c:116:ec_check_status]
> 0-pdsclust-disperse-0: Operation failed on some subvolumes (up=3F, mask=3F,
> remaining=0, good=1F, bad=20)
>
> Thanks
>
> Alex Leung
>
> On 10/10/16, 11:07 AM, "Serkan Çoban" <cobanserkan at gmail.com> wrote:
>
>     >Is it like
>     >gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
>
>     Yes, the command is like that.
>
>     > Besides, can I have bricks of different sizes? Such as raid1,2,3 at
>     > 20 TB and raid5,6,7 at 40 TB?
>     You don't want to do that; 20 TB of each 40 TB brick would be wasted.
>     Bricks should all be the same size.
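>
>     With the bricks you listed, something like this should do it (an untested
>     sketch; I'm assuming the /data/gfs directories already exist on the new
>     hosts):
>
>         gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
>         gluster volume rebalance pdsclust start
>         gluster volume rebalance pdsclust status
>
>     You don't pass a disperse count to add-brick; the six new bricks become a
>     second (4 + 2) subvolume next to the existing one, and the rebalance
>     spreads existing data across both.
>
>     On the sizes: a disperse subvolume can only use as much of each brick as
>     its smallest member, so with three 20 TB and three 40 TB bricks in one
>     subvolume, 20 TB of every 40 TB brick would sit unused.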
>
>     On Mon, Oct 10, 2016 at 8:42 PM, Leung, Alex (398C)
>     <alex.leung at jpl.nasa.gov> wrote:
>     > Thanks, but what is the exact command to add-brick?
>     >
>     > volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ...
>     >
>     > Is it like
>     >
>     > gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
>     >
>     > What is the value of [<stripe|replica> <COUNT>]?
>     >
>     > Besides, can I have bricks of different sizes? Such as raid1,2,3 at
>     > 20 TB and raid5,6,7 at 40 TB?
>     >
>     >
>     > Alex Leung
>     >
>     > On 10/10/16, 7:17 AM, "Vijay Bellur" <vbellur at redhat.com> wrote:
>     >
>     >     On Thu, Oct 6, 2016 at 11:34 AM, Leung, Alex (398C)
>     >     <alex.leung at jpl.nasa.gov> wrote:
>     >     > Here is my configuration:
>     >     >
>     >     > [root at raid4 ~]# gluster volume info
>     >     >
>     >     > Volume Name: pdsclust
>     >     > Type: Disperse
>     >     > Volume ID: 02629f52-cfe1-4542-8581-21d25e254d39
>     >     > Status: Started
>     >     > Number of Bricks: 1 x (4 + 2) = 6
>     >     > Transport-type: tcp
>     >     > Bricks:
>     >     > Brick1: raid4-gb:/data/gfs
>     >     > Brick2: raid8-gb:/data/gfs
>     >     > Brick3: raid10-gb:/data/gfs
>     >     > Brick4: raid12-gb:/data/gfs
>     >     > Brick5: raid14-gb:/data/gfs
>     >     > Brick6: raid16-gb:/data/gfs
>     >     > Options Reconfigured:
>     >     > performance.readdir-ahead: on
>     >     > [root at raid4 ~]#
>     >     >
>     >     > How do I add bricks to this disperse volume?
>     >     >
>     >     > How do I add another (4 + 2) = 6 subvolume so that it becomes:
>     >     >
>     >     > Number of Bricks: 2 x (4 + 2) = 12
>     >
>     >
>     >     You would need to add 6 more bricks to the volume to get to this
> state.
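>     >
>     >     After six more bricks go in (all six in one add-brick command),
>     >     'gluster volume info' should report something like this (a sketch):
>     >
>     >         Type: Distributed-Disperse
>     >         Number of Bricks: 2 x (4 + 2) = 12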
>     >
>     >     Regards,
>     >     Vijay
>     >



-- 
Pranith