[Gluster-devel] Glusterd: A New Hope

Jeff Darcy jdarcy at redhat.com
Mon Mar 25 19:59:00 UTC 2013


On 03/25/2013 02:44 PM, Alex Attarian wrote:
> where do you get the idea that I'm against glusterd? I'm perfectly fine with
> 3.x versions, those are still maintainable. But if you want to add ZooKeeper
> now, on top of the Java requirement, where is it going to end?

I explicitly rejected ZK because of the Java requirement.  That kind of 
complexity and resource load just can't be hidden from the users.

> Yes, re-inventing is
> not the best thing, but sometimes it can be much worse to add a 3rd-party
> component with strenuous requirements than to re-invent. Right now things
> are very easy to maintain in any of the 3.x versions, right inside glusterd.

I certainly don't think so.  A simple "volume set" command might generate 
dozens of RPC messages spread across several daemons using multiple RPC 
dispatch tables, with state machines and validation stages and all sorts of 
other complexity.  Finding out where in all that a command died can be *very* 
challenging.  Debugging problems from nodes having inconsistent volfiles 
because one died in the middle of that "volume set" command can be even worse. 
I wouldn't call that maintainable.


> Why not keep that? Even all these other functionalities that others want and
> you really want to implement for scalability and flexibility, they could all be
> built with your cluster on gluster solution.
>
> I really don't want to worry about ZooKeeper or Doozer when I run gluster.

You shouldn't have to.  Even if we were to use one of those - and the whole 
point of this discussion is to explore that along with other alternatives - it 
wouldn't be exposed as a separate service.  It would be embedded within 
glusterd, started and stopped when glusterd itself is, etc.  Yes, developers 
might need to learn to navigate some new code, but they would have to anyway, and 
users/administrators shouldn't care at all.  To them it would be the same CLI 
as before, producing logs and other artifacts that are if anything more 
comprehensible than today.
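
To make the embedding idea concrete, here is a purely illustrative sketch of 
the lifecycle coupling being described: a coordination store whose start and 
stop are tied to the daemon's own, so administrators only ever manage one 
service. None of these names (EmbeddedStore, Glusterd, volume_set) are real 
GlusterFS APIs; this is just the shape of the proposal, not an implementation.

```python
class EmbeddedStore:
    """Stand-in for an embedded consensus/config store (a ZK-like
    service linked into the daemon rather than run separately).
    Hypothetical; not a real GlusterFS component."""

    def __init__(self):
        self.running = False
        self.data = {}

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def set(self, key, value):
        # A single transactional write here stands in for the fan-out
        # of RPC messages that "volume set" performs today.
        if not self.running:
            raise RuntimeError("store not running")
        self.data[key] = value


class Glusterd:
    """Hypothetical management daemon wrapping the embedded store."""

    def __init__(self):
        self.store = EmbeddedStore()

    def start(self):
        # The store's lifetime is tied to the daemon's: from the
        # administrator's point of view there is still one service.
        self.store.start()

    def stop(self):
        self.store.stop()

    def volume_set(self, volume, option, value):
        self.store.set((volume, option), value)
```

The point of the sketch is only the boundary: the CLI and the daemon look the 
same from outside, and the coordination machinery never appears as a separate 
thing to install, start, or monitor.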

I'm still not opposed to the "GlusterFS on GlusterFS" approach instead, but it 
has its own issues that need to be worked out.  Maybe someone who prefers that 
could sketch out a way to do that without having to retain all of glusterd as 
it is now to manage that special config volume (which obviously can't rely on 
the same services it provides).  There'd still be more layers, there'd still be 
more daemons to manage, and it seems like there'd be two sets of code doing 
essentially the same thing at different levels.
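
The bootstrap problem above can be sketched in a few lines. This is 
hypothetical pseudocode, not GlusterFS code: it only shows why some minimal 
built-in starter has to survive, because the config volume cannot depend on 
the management services it is meant to provide configuration for.

```python
def start_node(has_minimal_starter):
    """Return a plausible startup sequence for one node under the
    'GlusterFS on GlusterFS' approach, or raise if the config volume
    circularly depends on the services it provides. Illustrative only."""
    if not has_minimal_starter:
        # Chicken-and-egg: the management daemon would read its config
        # from the config volume, but starting any volume is itself a
        # management-daemon service.
        raise RuntimeError("config volume needs glusterd; "
                          "glusterd needs config volume")
    return [
        "start minimal brick daemons for the config volume",
        "mount the config volume",
        "start full management daemon using config from the volume",
    ]
```

Even in this toy form, the first step is effectively a retained slice of 
today's glusterd, which is the duplication concern raised above.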



