[Gluster-users] Add single brick to dispersed volume?
Ashish Pandey
aspandey at redhat.com
Mon Sep 30 06:33:58 UTC 2019
Hi,
Yes, you are right, the better question is "how do I add storage capacity to an
existing disperse volume?"
In this case, the steps I provided in my last mail are still valid, and you can follow
them to add more capacity to the existing disperse volume.
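For your test1 volume (1 x (2 + 1)), that means adding three bricks at a time, one
per existing server. A minimal sketch, assuming a hypothetical second brick
directory on each node (adjust the paths to your actual layout):

gluster volume add-brick test1 gluster1:/export/gfs/brick2 \
    gluster2:/export/gfs/brick2 gluster3:/export/gfs/brick2

This would grow test1 to 2 x (2 + 1) = 6 bricks at the same redundancy level.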
---
Ashish
----- Original Message -----
From: "William Ferrell" <willfe at gmail.com>
To: "Ashish Pandey" <aspandey at redhat.com>
Cc: gluster-users at gluster.org
Sent: Wednesday, September 25, 2019 7:12:47 PM
Subject: Re: [Gluster-users] Add single brick to dispersed volume?
Thanks for the quick reply!
So it sounds like I did misunderstand how disperse volumes work, and I
can't add bricks one at a time (I have to add bricks in groups of N,
where N is the original disperse count). Sorry for mixing up the
terminology in explaining my question.
I guess the better question is "how do I add storage capacity to an
existing disperse volume?"
Here's the output from the two commands you asked for, btw.
root at gluster1:~# gluster volume info test1
Volume Name: test1
Type: Disperse
Volume ID: 73ed8639-4ace-46c8-9807-874aba8a33c8
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/export/gfs/brick
Brick2: gluster2:/export/gfs/brick
Brick3: gluster3:/export/gfs/brick
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
root at gluster1:~# gluster volume status test1
Status of volume: test1
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster1:/export/gfs/brick             49152     0          Y       601
Brick gluster2:/export/gfs/brick             49152     0          Y       664
Brick gluster3:/export/gfs/brick             49152     0          Y       662
Self-heal Daemon on localhost                N/A       N/A        Y       6992
Self-heal Daemon on gluster4                 N/A       N/A        Y       10981
Self-heal Daemon on gluster3                 N/A       N/A        Y       6986
Self-heal Daemon on gluster2                 N/A       N/A        Y       7010
Task Status of Volume test1
------------------------------------------------------------------------------
There are no active volume tasks
On Wed, Sep 25, 2019 at 9:31 AM Ashish Pandey <aspandey at redhat.com> wrote:
>
> Hi William,
>
> If you want to increase the capacity of a disperse volume, you have to add bricks to your existing disperse volume.
> The number of bricks you add must be a multiple of the existing configuration (in your case, a multiple of 3).
>
> For example:
>
> If you have created a disperse volume like this -
>
> gluster volume create myvol disperse 3 redundancy 1 host1:brick1 host2:brick2 host3:brick3
>
> This is a 1 x (2+1) disperse volume.
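> To make the numbers concrete: in a (2+1) subvolume, two bricks' worth of space
> holds data and one brick's worth holds erasure-coded redundancy, so usable
> capacity is roughly 2/3 of raw capacity and the subvolume tolerates the loss of
> any one brick. For example, three 1 TB bricks give about 2 TB of usable space.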
> Now if you want to add bricks to this volume, you can do one of the following -
>
> gluster volume add-brick myvol host1:brick11 host2:brick22 host3:brick33
> which will add 3 bricks as one new subvolume, making the volume 2 x (2+1)
>
> or you can do this
>
> gluster volume add-brick myvol host1:brick11 host2:brick22 host3:brick33 host1:brick111 host2:brick222 host3:brick333
> which will add 6 bricks as two new subvolumes, making the volume 3 x (2+1)
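>
> Whichever you choose, after the add-brick completes it is generally a good idea
> to rebalance the volume so existing data is spread across the new bricks:
>
> gluster volume rebalance myvol start
> gluster volume rebalance myvol status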
>
> You mixed up the terms "replication" and "disperse", so I am not sure which type of volume you have created. However, for a replica volume you also have to add bricks in multiples of the replica count.
>
> For queries like this, it is better to include information about your volumes, which helps us understand your setup:
>
> gluster v <volume name> info
> gluster v <volume name> status
>
> ---
> Ashish
>
> ________________________________
> From: "William Ferrell" <willfe at gmail.com>
> To: gluster-users at gluster.org
> Sent: Wednesday, September 25, 2019 6:02:37 PM
> Subject: [Gluster-users] Add single brick to dispersed volume?
>
> Hello,
>
> I'm just getting started with GlusterFS, so please forgive what's
> probably a newbie question. I searched the mailing list archives and
> didn't see anything about this, so I figured I'd just ask.
>
> I'm running four VMs on a single machine to learn the ropes a little
> bit. Each VM is set up the same way, with an OS disk and a separate
> disk (formatted as xfs) for use as a brick.
>
> Initially I created a dispersed volume using three bricks with
> replication 1. That seems to work pretty well. Now I'm trying to add a
> fourth brick to the volume, but am receiving an error when I try:
>
> root at gluster1:~# gluster volume add-brick test1 gluster4:/export/gfs/brick
> volume add-brick: failed: Incorrect number of bricks supplied 1 with count 3
>
> Have I misunderstood how dispersed volumes work? I was under the
> impression (from the documentation at
> https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#expanding-volumes)
> that bricks could be added to dispersed volumes so long as they were
> added N at a time, where N is the replication value. This error
> message makes it sound like I need to add three bricks at once
> instead.
>
> Is it possible to expand dispersed volumes like this? Or would I be
> better off doing this with a "regular" replicated volume? I'm
> interested in using dispersed volumes because of the space savings,
> but I'm a bit confused about how expansion works for them.
>
> Thanks for your help!
--
William W. Ferrell
Software Engineer
http://willfe.com/ -- willfe at gmail.com
________
Community Meeting Calendar:
APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314
NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314
Gluster-users mailing list
Gluster-users at gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users