[Gluster-users] Re: Add brick question.

M S Vishwanath Bhat msvbhat at gmail.com
Thu Mar 19 15:47:09 UTC 2015


On 19 March 2015 at 17:48, 何亦军 <heyijun at greatwall.com.cn> wrote:

>  Thanks, MS.
>
> My problem resolved.
>
Great...

>
> Volume Name: vol01
> Type: Distributed-Replicate
> Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: gwgfs01:/data/brick1/vol01
> Brick2: gwgfs03:/data/brick2/vol01
> Brick3: gwgfs01:/data/brick2/vol01
> Brick4: gwgfs02:/data/brick2/vol01
> Brick5: gwgfs02:/data/brick1/vol01
> Brick6: gwgfs03:/data/brick1/vol01
>
>
> I have one last question: what is the correct procedure for my
> requirement? I didn't find any similar case in the documentation.
>
> Environment: every node has two bricks, Distributed-Replicate
> Requirement: add a node to the pool
>

peer probe to add the new node to the pool, followed by add-brick +
rebalance, *is* the correct procedure to expand the volume (increase the
storage space).
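
For example, for a layout like yours the whole expansion could look roughly
like this (a sketch only; the brick paths on the new node are assumptions,
adjust them to whatever you actually create):

  # 1. add the new node to the trusted storage pool
  gluster peer probe gwgfs03

  # 2. add one full replica pair (2 bricks, since the volume is replica 2)
  gluster volume add-brick vol01 gwgfs03:/data/brick1/vol01 gwgfs03:/data/brick2/vol01

  # 3. spread the existing data onto the new bricks and watch the progress
  gluster volume rebalance vol01 start
  gluster volume rebalance vol01 status

Note that the two bricks named in a single add-brick command become a
replica pair, so putting both on the same new node means that pair is not
protected against that node failing; combining add-brick with replace-brick,
as you did, is one way to keep each pair spread across two nodes.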

MS


>
>
>  ------------------------------
> *From:* M S Vishwanath Bhat [msvbhat at gmail.com]
> *Sent:* 19 March 2015 18:27
> *To:* 何亦军
> *Cc:* gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Add brick question.
>
>
>
> On 19 March 2015 at 14:59, 何亦军 <heyijun at greatwall.com.cn> wrote:
>
>>  Hi Guys,
>>
>>
>>
>>      I have two servers in my pool, and I plan to add a new server to that pool.
>>
>>      My volume info is below:
>>
>>
>>
>>     Volume Name: vol01
>>
>> Type: Distributed-Replicate
>>
>> Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c
>>
>> Status: Started
>>
>> Number of Bricks: 2 x 2 = 4
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: gwgfs01:/data/brick1/vol01
>>
>> Brick2: gwgfs02:/data/brick1/vol01
>>
>> Brick3: gwgfs01:/data/brick2/vol01
>>
>> Brick4: gwgfs02:/data/brick2/vol01
>>
>>
>>
>> I plan to end up with this combination of bricks:
>>
>>
>>
>> Brick1: gwgfs01:/data/brick1/vol01
>>
>> Brick2: gwgfs03:/data/brick2/vol01
>>
>> Brick3: gwgfs01:/data/brick2/vol01
>>
>> Brick4: gwgfs02:/data/brick2/vol01
>>
>> Brick5: gwgfs02:/data/brick1/vol01
>>
>> Brick6: gwgfs03:/data/brick1/vol01
>>
>>
>>
>> The steps I followed:
>>
>> 1.       gluster peer probe gwgfs03
>>
>> 2.       gluster volume replace-brick vol01 gwgfs02:/data/brick1/vol01
>> gwgfs03:/data/brick2/vol01 status
>>
>> 3.       After the replace completed, run: gluster volume rebalance
>> vol01 start
>>
>
>
>>
>>
>> And finally, when I tried to add a brick, I hit a problem:
>>
>> [root at gwgfs03 vol01]# gluster volume add-brick vol01
>> gwgfs02:/data/brick1/vol01  gwgfs03:/data/brick1/vol01
>>
>> volume add-brick: failed: /data/brick1/vol01 is already part of a volume
>>
>
>
> The brick directory will have some xattrs set on it while it is part of a
> volume, so you will have to remove those xattrs from the directory before
> adding the brick to a volume again. The simplest way is to delete the
> directory and re-create it.
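>
> For example, on gwgfs02 something along these lines should do it (a rough
> sketch; double-check the path before deleting anything):
>
>   # wipe the old brick directory, including the gluster metadata inside it,
>   # then re-create it empty
>   rm -rf /data/brick1/vol01
>   mkdir -p /data/brick1/vol01
>
> An alternative, if the directory itself should be kept, is to strip just
> the gluster xattrs and internal metadata from it:
>
>   setfattr -x trusted.glusterfs.volume-id /data/brick1/vol01
>   setfattr -x trusted.gfid /data/brick1/vol01
>   rm -rf /data/brick1/vol01/.glusterfs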
>
>
>>
>> [root at gwgfs03 vol01]# gluster volume remove-brick vol01
>> gwgfs02:/data/brick1/vol01 start
>>
>> volume remove-brick start: failed: Remove brick incorrect brick count of
>> 1 for replica 2
>>
>
>  Since you have a replica 2 volume, you have to add/remove bricks in
> multiples of two, i.e. one full replica pair at a time. You can try the
> following (after you delete and re-create gwgfs02:/data/brick1/vol01):
>
>  gluster volume add-brick vol01  gwgfs02:/data/brick1/vol01
> gwgfs03:/data/brick1/vol01
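>
>  And for the record, if you ever need to shrink the volume instead,
> remove-brick likewise expects a whole replica pair, roughly like this
> (just a sketch, nothing you need to run now):
>
>   # start migrating data off one replica pair
>   gluster volume remove-brick vol01 gwgfs02:/data/brick1/vol01 gwgfs03:/data/brick1/vol01 start
>   # check progress, then finalise once the migration is complete
>   gluster volume remove-brick vol01 gwgfs02:/data/brick1/vol01 gwgfs03:/data/brick1/vol01 status
>   gluster volume remove-brick vol01 gwgfs02:/data/brick1/vol01 gwgfs03:/data/brick1/vol01 commit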
>
>  HTH
>  MS
>
>
>>
>> Currently, my volume info is as below. What can I do now? Help?
>>
>>
>>
>> [root at gwgfs03 vol01]# gluster volume info
>>
>>
>>
>> Volume Name: vol01
>>
>> Type: Distributed-Replicate
>>
>> Volume ID: 0bcd8d7c-b48a-4408-b7b8-56d4b5f8a97c
>>
>> Status: Started
>>
>> Number of Bricks: 2 x 2 = 4
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: gwgfs01:/data/brick1/vol01
>>
>> Brick2: gwgfs03:/data/brick2/vol01
>>
>> Brick3: gwgfs01:/data/brick2/vol01
>>
>> Brick4: gwgfs02:/data/brick2/vol01
>>
>> Options Reconfigured:
>>
>> nfs.disable: on
>>
>> user.cifs: disable
>>
>> auth.allow: *
>>
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>