[Gluster-users] Automation of single server addition to replica

Joe Julian joe at julianfamily.org
Wed Nov 9 18:32:21 UTC 2016

On 11/08/2016 10:53 PM, Gandalf Corvotempesta wrote:
> On 09 Nov 2016 at 1:23 AM, "Joe Julian" <joe at julianfamily.org> wrote:
> >
> > Replicas are defined in the order bricks are listed in the volume 
> create command. So gluster volume create myvol replica 2 
> server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 
> server4:/data/brick1 will replicate between server1 and server2 and 
> replicate between server3 and server4.
> >
> > See also 
> https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
> >
> I really hope this could be automated in newer gluster versions.
> There is almost no sense in making a replica on the same server, so
> gluster should automatically move bricks to preserve data consistency
> when adding servers.
> Ceph does this by moving objects around, and you don't have to add
> servers in multiples of the replica count.
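The ordering rule quoted above can be illustrated with a short Python sketch. This is an illustration only, not Gluster code: bricks are simply chunked, in command-line order, into groups of `replica` size, which is why server1 pairs with server2 and server3 with server4 in the example command.

```python
# Illustration only: how Gluster assigns bricks to replica sets based
# purely on the order they appear in "gluster volume create".
def replica_sets(bricks, replica):
    """Chunk the brick list, in order, into groups of `replica` size."""
    if len(bricks) % replica != 0:
        raise ValueError("brick count must be a multiple of the replica count")
    return [bricks[i:i + replica] for i in range(0, len(bricks), replica)]

bricks = ["server1:/data/brick1", "server2:/data/brick1",
          "server3:/data/brick1", "server4:/data/brick1"]

# replica 2: server1 replicates with server2, server3 with server4
print(replica_sets(bricks, 2))
```

The `ValueError` branch mirrors why bricks normally have to be added in multiples of the replica count, and hence why inserting a single new server requires the manual brick shuffle the linked post describes.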

Yes, and ceph has a metadata server to manage this, which breaks 
horribly if you have a cascading failure where your SAS expanders start 
dropping drives when the throughput reaches the max bandwidth (not that 
I've /ever/ had that problem... <sigh>). The final straw in that failure 
scenario was that the database could never converge across all the 
monitors while the objects were moving around, and eventually all 5 
monitors ran out of database space, losing the object map and all the data.

I'm not blaming ceph for that failure, but just pointing out that 
gluster's lack of a metadata server is part of its design philosophy, 
which serves a specific engineering requirement that ceph does not 
fulfill. Luckily, we have both tools to use where they're each most 
appropriate.

> The rebalance command could be used to rebalance newly added bricks 
> while preserving replicas in a proper state.
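Until something like that is automated, expanding a replica-2 volume by a single server involves the kind of manual brick shuffle the blog post linked above covers. The following is a hedged sketch of one possible sequence, not necessarily the post's exact steps; `server5` is the hypothetical new server, and `/data/brick2` is a hypothetical fresh path, since a brick path that was already part of the volume cannot be reused as-is.

```shell
# Sketch: add a single server (server5) to a replica-2 volume "myvol"
# currently spanning server1..server4. Adapt to your actual layout.
gluster peer probe server5

# Move one existing brick onto the new server, freeing its old server...
gluster volume replace-brick myvol server4:/data/brick1 \
    server5:/data/brick1 commit force

# ...then add a new replica pair built from the freed server and the
# new server. /data/brick2 is a hypothetical fresh path (old brick
# paths carry extended attributes and cannot simply be reused).
gluster volume add-brick myvol server4:/data/brick2 server5:/data/brick2

# Spread existing data across the new replica set.
gluster volume rebalance myvol start
```

These commands must run against a live trusted storage pool, so treat the sequence as a config/CLI fragment to adapt rather than a script to paste.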

