[Gluster-users] Issues with adding a volume with glusterfs 3.2.1

chaica at ohmytux.com
Fri Jul 1 14:34:38 UTC 2011


On Fri, 1 Jul 2011 09:01:12 -0400, "Burnash, James"
<jburnash at knight.com> wrote:
> Hi Carl.
> 
> I was similarly confused at first when I did the exact same thing as
> you - adding two additional server/bricks to an existing
> Distributed-Replicate volume. Here's the thing - you specified
> "replica 2" when you created the volume - that means that exactly two
> replicas of each file will be kept, and only two. After adding bricks
> 3 and 4, those replicas will live on either the first pair of bricks
> (Brick1 and Brick2) or the second pair (Brick3 and Brick4). What
> happens when you add bricks to a Distributed-Replicate volume such as
> yours is that the Distribute part kicks in and "distributes" the
> files amongst the mirror (replica) pairs given above.
> 
> Clear as mud? :-)

Hi James,

Thanks a lot for your message. I understand now what's going on.
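For anyone else hitting this, here is a tiny sketch of the behaviour James describes. The hash function and brick selection below are purely illustrative (GlusterFS's real DHT uses its own elastic hashing over per-directory layout ranges, not md5): each file name maps to exactly one replica pair, and the file is mirrored to both bricks of that pair - it never appears on the other pair.

```python
import hashlib

# Illustrative model only, NOT GlusterFS's actual DHT: a file name hashes
# to one replica pair, and every copy of the file lives on that pair.
REPLICA_PAIRS = [
    ("192.168.1.30:/sharedspace", "192.168.1.31:/sharedspace"),  # Brick1+Brick2
    ("192.168.1.32:/sharedspace", "192.168.1.33:/sharedspace"),  # Brick3+Brick4
]

def bricks_for(filename):
    """Return the pair of bricks holding every copy of `filename`."""
    digest = hashlib.md5(filename.encode()).digest()
    return REPLICA_PAIRS[digest[0] % len(REPLICA_PAIRS)]

for name in ("foo", "bar", "baz"):
    print(name, "->", bricks_for(name))
```

So with "replica 2" on four bricks you always get two copies of each file, never four, no matter how many bricks you add.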

To the Gluster team: IMO it would be really useful to explain this on
the dedicated documentation page [1]. The behaviour seems *really*
weird when you don't know how volume expansion takes place, and while
struggling with this situation I could not work out from the
documentation what I had done wrong.

[1] :
http://www.gluster.com/community/documentation/index.php/Gluster_3.2:_Expanding_Volumes
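A follow-up note: if what you actually want is every file on all four bricks, the volume needs a replica count of 4, not 2. If I'm not mistaken, later GlusterFS releases (3.3 and newer, not 3.2.1) let you raise the replica count at add-brick time; a sketch of that syntax, to be checked against your version's documentation:

```shell
# Sketch only -- raising the replica count while adding bricks is
# supported in later GlusterFS releases (3.3+), not in 3.2.x:
gluster volume add-brick test-volume replica 4 \
    192.168.1.32:/sharedspace 192.168.1.33:/sharedspace
```

On 3.2.1 the alternative would be to recreate the volume with "replica 4" from the start.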

Bye,
Carl Chenet

> 
> 
> -----Original Message-----
> From: gluster-users-bounces at gluster.org
> [mailto:gluster-users-bounces at gluster.org] On Behalf Of
> chaica at ohmytux.com
> Sent: Friday, July 01, 2011 5:54 AM
> To: gluster-users at gluster.org
> Subject: [Gluster-users] Issues with adding a volume with glusterfs 3.2.1
> 
> Hi,
> 
> I can't manage to add two new replicated bricks to my volume.
> 
> I have a replicated volume on two servers. I created it with the
> following commands:
> 
> root at glusterfs1:/var/log# gluster peer probe 192.168.1.31
> Probe successful
> root at glusterfs1:~# gluster volume create test-volume replica 2 transport tcp 192.168.1.30:/sharedspace 192.168.1.31:/sharedspace
> Creation of volume test-volume has been successful. Please start the volume to access data.
> root at glusterfs1:~# gluster volume start test-volume
> root at glusterfs1:~# gluster volume info
> 
> Volume Name: test-volume
> Type: Replicate
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.30:/sharedspace
> Brick2: 192.168.1.31:/sharedspace
> 
> OK, now from a client:
> 
> client~# mount -t glusterfs 192.168.1.30:test-volume /distributed-volume
> client~# echo "hello" > /distributed-volume/foo
> 
> foo is correctly replicated across the two bricks:
> 
> root at glusterfs1:~# ll /sharedspace/
> total 28
> -rw-r--r-- 1 root root    26 Jul  1 10:29 foo
> drwx------ 2 root root 16384 Jan 11 12:18 lost+found
> 
> root at glusterfs2:~# ll /sharedspace/
> total 28
> -rw-r--r-- 1 root root    26 Jul  1 10:29 foo
> drwx------ 2 root root 16384 Jan 11 12:18 lost+found
> 
> That's perfect... until I try to add two new bricks:
> 
> root at glusterfs1:~# gluster peer probe 192.168.1.32
> Probe successful
> root at glusterfs1:~# gluster peer probe 192.168.1.33
> Probe successful
> root at glusterfs1:~# gluster volume add-brick test-volume 192.168.1.32:/sharedspace 192.168.1.33:/sharedspace
> Add Brick successful
> root at glusterfs1:~# gluster volume info
> 
> Volume Name: test-volume
> Type: Distributed-Replicate
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.30:/sharedspace
> Brick2: 192.168.1.31:/sharedspace
> Brick3: 192.168.1.32:/sharedspace
> Brick4: 192.168.1.33:/sharedspace
> root at glusterfs1:~# gluster volume rebalance test-volume start
> starting rebalance on volume test-volume has been successful
> root at glusterfs1:~# gluster volume rebalance test-volume status
> rebalance completed
> 
> OK, so if I'm correct, the data on Brick1 and Brick2 should now also
> be available on Brick3 and Brick4. But that's not the case:
> 
> root at glusterfs3:~# ll /sharedspace/
> total 20
> drwx------ 2 root root 16384 11 janv. 12:18 lost+found
> 
> root at glusterfs4:~# ll /sharedspace/
> total 20
> drwx------ 2 root root 16384 11 janv. 12:18 lost+found
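One way to see what Distribute actually did here (this is a diagnostic sketch, assuming the attr package is installed on the brick servers): each brick's root directory carries a trusted.glusterfs.dht extended attribute describing the hash range that brick owns, and new files whose names hash into the range owned by Brick3/Brick4 will land there rather than on Brick1/Brick2.

```shell
# On each brick server, as root, against the brick directory (not the
# client mount): dump the GlusterFS extended attributes, including the
# trusted.glusterfs.dht layout range for this brick.
getfattr -m . -d -e hex /sharedspace
```

Comparing the dht ranges across the four bricks shows that the new pair holds a hash range of its own, rather than a second copy of the old pair's files.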
> 
> I'm using Debian Sid with GlusterFS 3.2.1 from the official Debian
> repository. Am I doing something wrong?
> 
> Regards,
> Carl Chenet
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> 
> 



