[Gluster-users] Automation of single server addition to replica

Joe Julian joe at julianfamily.org
Wed Nov 9 18:26:19 UTC 2016

On 11/09/2016 10:21 AM, Lopez, Dan-Joe wrote:
> Thanks Joe and Gandalf!
> I’ve looked at the blog post that you wrote, Joe, but it seems to reference a 
> more complicated scenario than I am working with.
> We have a `replica n` volume, and I want to make it a `replica n+1` 
> volume. Is that possible?
> Dan-Joe Lopez

Sure, that's easy: gluster volume add-brick $myvol replica $new_n $new_brick
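A minimal sketch of that, assuming a volume named "myvol" currently at replica 2 and the new brick living on a server called server3 (all names here are placeholders, not from your setup):

```shell
# Hypothetical example: grow a replica 2 volume to replica 3 by adding
# one brick on a new server. VOL, NEW_N and NEW_BRICK are placeholders.
VOL=myvol
NEW_N=3                          # the new replica count
NEW_BRICK=server3:/data/brick1   # brick path on the newly added server

# The add-brick invocation (run on any node in the trusted pool):
CMD="gluster volume add-brick $VOL replica $NEW_N $NEW_BRICK"
echo "$CMD"

# Afterwards, a full self-heal copies existing files onto the new brick:
# gluster volume heal $VOL full
```

After the add-brick, the new brick starts empty; the self-heal is what actually populates it with the existing data.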

But... why are you adding a replica? Is it to improve redundancy and 
availability, or is it so you "have the same files on all our servers" 
(a common mistake)?

Another page that might be of use to you:

> *From:*gluster-users-bounces at gluster.org 
> [mailto:gluster-users-bounces at gluster.org] *On Behalf Of *Gandalf 
> Corvotempesta
> *Sent:* Tuesday, November 8, 2016 10:54 PM
> *To:* Joe Julian <joe at julianfamily.org>
> *Cc:* gluster-users at gluster.org
> *Subject:* Re: [Gluster-users] Automation of single server addition to 
> replica
> On Nov 9, 2016 at 1:23 AM, "Joe Julian" <joe at julianfamily.org 
> <mailto:joe at julianfamily.org>> wrote:
> >
> > Replicas are defined in the order bricks are listed in the volume 
> create command. So gluster volume create myvol replica 2 
> server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 
> server4:/data/brick1 will replicate between server1 and server2 and 
> replicate between server3 and server4.
> >
> > See also 
> https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
> >
> I really hope this can be automated in newer Gluster versions.
> It makes almost no sense to keep replicas on the same server, so Gluster
> should automatically move bricks to preserve data consistency when 
> adding servers.
> Ceph does this by moving objects around, and you don't have to add 
> servers in multiples of the replica count.
> The rebalance command could be used to rebalance newly added bricks 
> while keeping the replicas in a proper state.

