[Gluster-users] Distributed-Replicated adding/removing nodes
Arnold Krille
arnold at arnoldarts.de
Tue Feb 14 09:40:59 UTC 2012
Hi,
On Monday 13 February 2012 17:17:47 Bryan Whitehead wrote:
> I have 3 servers, but want replicate = 2. To do this I have 2 bricks on
> each server:
>
> Example output:
>
> Volume Name: images
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: rdma
> Bricks:
> Brick1: lab0:/g0
> Brick2: lab1:/g0
> Brick3: lab2:/g0
> Brick4: lab0:/g1
> Brick5: lab1:/g1
> Brick6: lab2:/g1
>
> If I want to add lab3:/g0 and lab3:/g1, it will end up looking like this:
>
> Volume Name: images
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: rdma
> Bricks:
> Brick1: lab0:/g0
> Brick2: lab1:/g0
> Brick3: lab2:/g0
> Brick4: lab0:/g1
> Brick5: lab1:/g1
> Brick6: lab2:/g1
> Brick7: lab3:/g0
> Brick8: lab3:/g1
>
> This seems like a file could potentially end up with both copies stuck on lab3. Do I
> need to do some crazy migrations to move bricks around? Is this somewhat automated?
You have to replace one of the existing bricks (let's assume lab2:/g0) with
lab3:/g0, wipe the data from the old brick lab2:/g0, and then add another pair of
bricks, lab2:/g0-lab3:/g1. Which then gives you:
> Volume Name: images
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 4 x 2 = 8
> Transport-type: rdma
> Bricks:
> Brick1: lab0:/g0
> Brick2: lab1:/g0
> Brick3: lab3:/g0
> Brick4: lab0:/g1
> Brick5: lab1:/g1
> Brick6: lab2:/g1
> Brick7: lab2:/g0
> Brick8: lab3:/g1
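With the gluster CLI that would look roughly like this (untested sketch from
memory; the exact syntax depends on your gluster version, and the volume and
brick names are simply the ones from your listing):

  # 1. migrate the contents of lab2:/g0 onto the new node
  gluster volume replace-brick images lab2:/g0 lab3:/g0 start
  gluster volume replace-brick images lab2:/g0 lab3:/g0 status   # wait until the migration is complete
  gluster volume replace-brick images lab2:/g0 lab3:/g0 commit

  # 2. on lab2: wipe the old brick so it can be reused
  #    (depending on the version you may also have to clear leftover
  #    glusterfs extended attributes on /g0 itself)
  rm -rf /g0/*

  # 3. add the new replica pair and spread existing files onto it
  gluster volume add-brick images lab2:/g0 lab3:/g1
  gluster volume rebalance images start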
But even with this layout you are still only protected against one failing node,
the same as with three nodes.
Have fun,
Arnold