[Gluster-users] Shared storage for KVM

Craig Carl craig at gluster.com
Wed Nov 24 00:22:27 UTC 2010

Udo -
    Starting in 3.1 we have added an RPC process that listens for 
changes in the cluster configuration. If, for example, you want to add 
your 25th storage server to your cluster, you would run `gluster peer 
probe server25`. A message informing all of the nodes participating in 
the cluster, both clients and servers, would be sent, and every 
participant would need to acknowledge that change before it was 
committed. The same is true of any other type of change. We did this 
for a couple of reasons: as organizations scale Gluster clusters out, 
it isn't reasonable to expect them to keep thousands of configuration 
files in sync, and eliminating the volume files also enabled dynamic 
volume management (DVM), i.e. adding and removing storage space without 
restarting Gluster. If this process isn't working, please make sure all 
the servers can communicate on the ports detailed here -  
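As a sketch of the flow described above (the hostname `server25` is carried over from the example; substitute your own):

```shell
# From any node already in the trusted pool, probe the new server.
# Gluster propagates this change to every participant in the cluster;
# each must acknowledge it before the new peer is committed.
gluster peer probe server25

# Confirm that all nodes now agree on the peer list.
gluster peer status
```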

     We still enable a significant amount of tuning; we have just 
changed the way tuning options are applied. Please see - 
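For instance, in 3.1 tuning options are applied through the CLI rather than by editing volume files. A minimal sketch (the volume name `myvol` and the particular option values are illustrative):

```shell
# Options are applied cluster-wide through the CLI; there are no
# volume files to edit and no restart required.
gluster volume set myvol performance.cache-size 256MB
gluster volume set myvol auth.allow 192.168.1.*

# Review the options currently applied to the volume.
gluster volume info myvol
```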

Please let me know if you have any other questions.



Craig Carl
Senior Systems Engineer; Gluster, Inc.
Cell - (408) 829-9953 (California, USA)
Office - (408) 770-1884
Gtalk - craig.carl at gmail.com
Twitter - @gluster

On 11/23/2010 10:01 AM, Udo Waechter wrote:
> Hi,
> On 23.11.2010, at 18:41, Kristofer Pettijohn wrote:
>> Would you be willing to share what your configs look like for this set up?
> There are no configs. We use Gluster 3.1 and simply did "gluster volume create brick1:/bla brick2:/bla brick3:/bla"
> Then some settings for auth.allow
> That was it.
> I do not understand how one can change config files in 3.1 such that the cluster (i.e. bricks) get notified about it. Everything should be automatic, but our experiments show that none of this is true.
> The gluster command itself does not allow for very much tuning and/or setting of parameters.
> Bye,
> udo.
