[Gluster-users] [Gluster-devel] Proposal for GlusterD-2.0

Balamurugan Arumugam bala at gluster.com
Thu Sep 11 01:46:45 UTC 2014



----- Original Message -----
> From: "Kaushal M" <kshlmster at gmail.com>
> To: "Gluster Devel" <gluster-devel at gluster.org>
> Cc: gluster-users at gluster.org
> Sent: Friday, September 5, 2014 3:51:35 PM
> Subject: [Gluster-devel] Proposal for GlusterD-2.0
> 
> GlusterD performs the following functions as the management daemon for
> GlusterFS:
> - Peer membership management
> - Maintains consistency of configuration data across nodes (distributed
> configuration store)
> - Distributed command execution (orchestration)
> - Service management (manage GlusterFS daemons)
> - Portmap service for GlusterFS daemons
> 
> 
> This proposal aims to delegate the above functions to technologies that solve
> these problems well. We aim to do this in a phased manner.
> The technology alternatives we would be looking for should have the following
> properties,
> - Open source
> - Vibrant community
> - Good documentation
> - Easy to deploy/manage
> 


I did a small PoC on this front with SaltStack[1], which already provides this kind of infrastructure.  The PoC covers the following (a rough sketch follows the list):

1. Adding peers to an existing cluster / forming a new cluster.
This comes by default with the Salt infrastructure, where the nodes are already managed as minions.  I did not have to write anything gluster-specific for peer handling.

2. Creating a gluster volume.
This is purely configuration management.  I was able to handle brick/volume configuration by defining, in a few lines of state data, what the brick/volume layout should look like for a plain distribute volume.

3. Starting a gluster volume.
This is service management.  It is done simply by defining state data that describes what should be running.
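
Roughly, the PoC boils down to something like the sketch below, driven
through Salt's Python client.  This is only an illustration -- the host
names, brick paths and state file name are placeholders, not the actual
PoC code:

    import salt.client

    # LocalClient runs on the Salt master and targets the managed minions.
    local = salt.client.LocalClient()

    # Step 1: peer management.  The nodes are already Salt minions, so we
    # only verify they respond, then probe them into the trusted pool.
    print(local.cmd('server*', 'test.ping'))
    local.cmd('server1', 'cmd.run', ['gluster peer probe server2'])

    # Steps 2 and 3: configuration and service management.  Apply a state
    # file (e.g. gluster/volume.sls) that declares the bricks, creates the
    # volume only if it does not exist yet, and keeps it started.
    print(local.cmd('server1', 'state.sls', ['gluster.volume']))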

WRT the glusterd problems listed above, I see that Salt already solves most of them at the infrastructure level.  It's worth considering.



> This would allow GlusterD's architecture to be more modular. We also aim to
> make GlusterD's architecture as transparent and observable as possible.
> Separating out these functions would allow us to do that.
> 
> The bulk of the current GlusterD code deals with keeping the configuration of the
> cluster and the volumes in it consistent and available across the nodes. The
> current algorithm is not scalable (N^2 in no. of nodes) and doesn't prevent
> split-brain of configuration. This is the problem area we are targeting for
> the first phase.
> 
> As part of the first phase, we aim to delegate the distributed configuration
> store. We are exploring consul [1] as a replacement for the existing
> distributed configuration store (sum total of /var/lib/glusterd/* across all
> nodes). Consul provides a distributed configuration store which is consistent
> and partition-tolerant. By moving all Gluster-related configuration
> information into consul, we could avoid split-brain situations.
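
To make the paragraph above concrete: with a KV store like consul, each
volume's configuration could live under an agreed-upon key and be read or
written over the HTTP API from any node.  A rough sketch only; the key
layout and the values below are made up for illustration:

    import json
    import requests

    KV = 'http://127.0.0.1:8500/v1/kv'

    # Write a volume's configuration under an agreed-upon key.
    volinfo = {'type': 'distribute',
               'bricks': ['server1:/b1', 'server2:/b1']}
    requests.put(KV + '/gluster/volumes/testvol/info',
                 data=json.dumps(volinfo))

    # Any node can read the same key back; consul keeps the copies
    # consistent, so there is no per-volume configuration split-brain
    # to reconcile by hand.
    resp = requests.get(KV + '/gluster/volumes/testvol/info')
    print(resp.json()[0]['Value'])   # value comes back base64-encoded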
> 
> All development efforts towards this proposal would happen in parallel to the
> existing GlusterD code base. The existing code base would be actively
> maintained until GlusterD-2.0 is production-ready.
> 
> This is in alignment with the GlusterFS Quattro proposals on making GlusterFS
> scalable and easy to deploy. This is the first-phase groundwork towards
> that goal.
> 
> Questions and suggestions are welcome.
> 

Regards,
Bala

[1] http://www.saltstack.com/

