[Gluster-devel] [Gluster-users] Proposal for GlusterD-2.0
Krishnan Parthasarathi
kparthas at redhat.com
Mon Sep 8 04:05:05 UTC 2014
> The bulk of the current GlusterD code deals with keeping the configuration of
> the cluster and the volumes in it consistent and available across the nodes.
> The current algorithm is not scalable (O(N^2) in the number of nodes) and does
> not prevent split-brain of the configuration. This is the problem area we are
> targeting for the first phase.
>
> As part of the first phase, we aim to delegate the distributed configuration
> store to an external component. We are exploring consul [1] as a replacement
> for the existing distributed configuration store (the sum total of
> /var/lib/glusterd/* across all nodes). Consul provides a distributed
> configuration store that is consistent and partition tolerant. By moving all
> Gluster-related configuration information into consul, we could avoid
> split-brain situations.
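To make the proposal above concrete, here is a minimal sketch of what storing
one piece of volume configuration in consul's key/value store could look like.
It assumes consul's Go API client (github.com/hashicorp/consul/api) and a
purely hypothetical "glusterd/..." key layout; it is only an illustration of
the idea, not the actual GlusterD-2.0 design.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local consul agent (127.0.0.1:8500 by default).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Write a volume's configuration under a (hypothetical) glusterd/ prefix.
	// Consul replicates the write to a majority of its servers before
	// acknowledging it.
	pair := &api.KVPair{
		Key:   "glusterd/vols/test-vol/info",
		Value: []byte("replica-count=2\ntransport-type=tcp\n"),
	}
	if _, err := kv.Put(pair, nil); err != nil {
		log.Fatal(err)
	}

	// Read the configuration back.
	got, _, err := kv.Get("glusterd/vols/test-vol/info", nil)
	if err != nil {
		log.Fatal(err)
	}
	if got == nil {
		log.Fatal("key not found")
	}
	fmt.Printf("%s:\n%s", got.Key, got.Value)
}

Because every write goes through a single replicated log, all nodes observe
the same configuration history, which is what rules out the split-brain of
configuration described above.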
> Did you get a chance to go over the following questions while making the
> decision? If yes, could you please share the info?
> What are the consistency guarantees for changing the configuration in case of
> network partitions, specifically when there are 2 nodes and 1 of them is not
> reachable? And what are the guarantees when there are more than 2 nodes?
> What are the consistency guarantees for reading the configuration in case of
> network partitions?
Consul internally uses the Raft[1] distributed consensus algorithm to maintain
consistency, and Raft is proven to be correct. I will be going through the
workings of the algorithm soon and will share my answers to the above questions
after that. Thanks for the questions; it is important for users to understand
the behaviour of a system, especially under failure.
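As a rough illustration of the knobs involved (not yet an answer to the
questions above), consul lets each read choose a consistency mode. The sketch
below, again assuming the Go API client and the same hypothetical key,
contrasts a leader-verified ("consistent") read with a "stale" read that any
server may answer, which is the trade-off that matters during a partition.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Consistent read: verified with the current Raft leader, so it fails
	// rather than return stale data when a quorum is unreachable.
	pair, _, err := kv.Get("glusterd/vols/test-vol/info",
		&api.QueryOptions{RequireConsistent: true})
	if err != nil {
		log.Fatal(err)
	}

	// Stale read: any consul server may answer, even without a leader,
	// trading consistency for availability during a partition.
	stalePair, meta, err := kv.Get("glusterd/vols/test-vol/info",
		&api.QueryOptions{AllowStale: true})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(pair, stalePair, meta.LastContact)
}

Consul's default read mode sits between the two: reads are served by the
leader, but without the extra leadership check.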
I am considering adding a FAQ section to this proposal, where questions like
the above would go once the proposal is accepted and makes it to the feature
page.
[1] - https://ramcloud.stanford.edu/wiki/download/attachments/11370504/raft.pdf
~KP