[Gluster-users] Glusterfs Rack-Zone Awareness feature...

COCHE Sébastien SCOCHE at sigma.fr
Tue Apr 22 15:17:24 UTC 2014


Sorry if my question is not clear.

When I create a new replicated volume using only 2 nodes, I use this command line: 'gluster volume create vol_name replica 2 transport tcp server1:/export/brick1/1 server2:/export/brick1/1'

server1 and server2 are in 2 different datacenters.

Now, if I want to expand the gluster volume using 2 new servers (e.g. server3 and server4), I use the following commands (a combined form is sketched below):

'gluster volume add-brick vol_name server3:/export/brick1/1'

'gluster volume add-brick vol_name server4:/export/brick1/1'

'gluster volume rebalance vol_name fix-layout start'

'gluster volume rebalance vol_name start'
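
Since the volume is replica 2, I assume the two new bricks actually have to be added as a pair in a single add-brick call, and that the order in which they are listed decides which bricks form a replica pair. A minimal sketch, assuming server3 sits in the first datacenter and server4 in the second:

'gluster volume add-brick vol_name server3:/export/brick1/1 server4:/export/brick1/1'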

How does the rebalance command work?

How can I be sure that replicated data is not stored on servers hosted in the same datacenter?



Sébastien



-----Original Message-----
From: Jeff Darcy [mailto:jdarcy at redhat.com]
Sent: Friday, April 18, 2014 18:52
To: COCHE Sébastien
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Glusterfs Rack-Zone Awareness feature...



> I do not understand why it could be a problem to place the data's
> replica on a different node group.
> If a group of nodes becomes unavailable (due to datacenter failure,
> for example), the volume should remain online, using the second group.



I'm not sure what you're getting at here.  If you're talking about initial placement of replicas, we can place all members of each replica set in different node groups (e.g. racks).  If you're talking about adding new replica members when a previous one has failed, then the question is *when*.
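
For a replica 2 volume, initial placement follows the order of the bricks on the command line: each consecutive pair of bricks forms one replica set. A minimal sketch of datacenter-aware ordering, with hypothetical hostnames where the "a" servers sit in one datacenter and the "b" servers in the other:

# hypothetical hostnames: server1a/server2a in datacenter A, server1b/server2b in datacenter B
gluster volume create vol_name replica 2 transport tcp \
    server1a:/export/brick1/1 server1b:/export/brick1/1 \
    server2a:/export/brick1/1 server2b:/export/brick1/1

With that ordering, each replica pair has one copy in each datacenter, and adding capacity later means adding bricks in the same alternating order.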

Re-populating a new replica can be very expensive.  It's not worth starting if the previously failed replica is likely to come back before you're done.

We provide the tools (e.g. replace-brick) to deal with longer term or even permanent failures, but we don't re-replicate automatically.  Is that what you're talking about?
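
A minimal sketch of that manual path, with hypothetical names (oldserver holds the failed brick, newserver is its replacement in the same node group):

# hypothetical hostnames; run once the old node is considered permanently gone
gluster volume replace-brick vol_name oldserver:/export/brick1/1 newserver:/export/brick1/1 commit force
# trigger a full self-heal so the new brick gets populated
gluster volume heal vol_name full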

