[heketi-devel] [heketi] Pre-existing GlusterFS cluster

Jose A. Rivera jarrpa at redhat.com
Wed Feb 8 17:51:32 UTC 2017


Ping :)

On Thu, Jan 26, 2017 at 8:59 AM, Jose A. Rivera <jarrpa at redhat.com> wrote:
> Sure thing!
>
> The ask is for an OpenShift use case where I want to create a
> GlusterFS volume to store the local Docker registry before any
> containers are running. I was thinking of doing this by running
> Gluster natively on the OpenShift nodes, outside of containers,
> creating a cluster of them, and then selecting a directory on each
> node to serve as a brick for the volume. The idea is that I would
> still deploy heketi in a container later on and just use these same
> nodes in the topology file (sketched below). heketi would still need
> to be given dedicated storage devices on each node.
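>
> For illustration, the topology file would just list those same nodes
> plus their dedicated devices, something like this (the hostnames,
> zones, and device paths below are made up):
>
>     {
>       "clusters": [
>         {
>           "nodes": [
>             {
>               "node": {
>                 "hostnames": {
>                   "manage": ["ose-node1.example.com"],
>                   "storage": ["192.168.10.11"]
>                 },
>                 "zone": 1
>               },
>               "devices": ["/dev/sdb"]
>             },
>             {
>               "node": {
>                 "hostnames": {
>                   "manage": ["ose-node2.example.com"],
>                   "storage": ["192.168.10.12"]
>                 },
>                 "zone": 2
>               },
>               "devices": ["/dev/sdb"]
>             }
>           ]
>         }
>       ]
>     }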
>
> As for the algorithm, I figure it should go something like this:
>
> When adding a node that already belongs to a pre-existing cluster,
> watch the return code and output from the exec of gluster peer
> probe; if it reports that the peer is already in the peer list,
> return success. If for some reason the pre-existing cluster overlaps
> with the heketi cluster on only a subset of nodes, gluster can
> handle that.
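>
> A minimal sketch in Go of what that probe handling might look like.
> The helper name, the example hostname, and the exact gluster output
> string are assumptions on my part, not anything heketi has today:
>
>     package main
>
>     import (
>         "fmt"
>         "os/exec"
>         "strings"
>     )
>
>     // peerProbe runs "gluster peer probe <host>" and treats an
>     // already-probed peer as success.
>     func peerProbe(host string) error {
>         out, err := exec.Command(
>             "gluster", "peer", "probe", host).CombinedOutput()
>         // Assumed message: gluster reports something like
>         // "Host <host> port 24007 already in peer list" when the
>         // node is already part of the cluster.
>         if strings.Contains(string(out), "already in peer list") {
>             return nil
>         }
>         if err != nil {
>             return fmt.Errorf("peer probe %s failed: %v: %s",
>                 host, err, out)
>         }
>         return nil
>     }
>
>     func main() {
>         if err := peerProbe("ose-node1.example.com"); err != nil {
>             fmt.Println(err)
>         }
>     }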
>
> Conversely, when you remove a node from heketi, watch for a message
> that the peer cannot be detached because it still has bricks, and
> remove the node from heketi anyway. heketi already does its own
> checks to see whether a volume is on a particular node, so we can't
> reach a point where a heketi-managed brick still exists on a
> heketi-managed node unless something goes really wrong (and then
> we'd have to resort to the backend command line anyway, I'd
> imagine?).
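>
> And the inverse, reusing the imports from the sketch above; again,
> the exact "exist in cluster" message is an assumption about what
> gluster prints when a peer still holds bricks:
>
>     // peerDetach runs "gluster peer detach <host>". If gluster
>     // refuses because the peer still holds bricks, leave the peer
>     // attached but report success so heketi can drop the node from
>     // its own database.
>     func peerDetach(host string) error {
>         out, err := exec.Command(
>             "gluster", "peer", "detach", host).CombinedOutput()
>         if err != nil {
>             // Assumed message: gluster reports something like
>             // "Brick(s) with the peer <host> exist in cluster"
>             // when bricks still live on the peer.
>             if strings.Contains(string(out), "exist in cluster") {
>                 return nil
>             }
>             return fmt.Errorf("peer detach %s failed: %v: %s",
>                 host, err, out)
>         }
>         return nil
>     }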
>
> Feedback, concerns, or flames welcome. :)
> --Jose
>
> On Wed, Jan 25, 2017 at 10:55 PM, Luis Pabon <lpabon at gmail.com> wrote:
>> I think that people keep asking because they want to keep doing what
>> they already know.
>>
>> I think it would work better if you provided a set of requirements
>> and preconditions, along with the algorithm that satisfies what you
>> would like to do. Post it here and let's discuss it.  No code needed.
>>
>> - Luis
>>
>> On Wed, Jan 25, 2017 at 7:29 PM, Jose A. Rivera <jarrpa at redhat.com> wrote:
>>>
>>> On Wed, Jan 25, 2017 at 2:16 PM, Luis Pabon <lpabon at gmail.com> wrote:
>>> > Hi José,
>>> >   This has been asked for many, many times.  Heketi was designed
>>> > to "rule them all".  It was never designed for systems that have
>>> > already been set up, because the sheer number of possible
>>> > configurations would make them impractical to manage.  It is like
>>> > building a Ceph RADOS system by yourself and then asking the pool
>>> > manager to figure out what you did.  If instead Ceph is viewed as
>>> > a combination of access+pool+storage, rather than as individual
>>> > parts, then it all works well and is predictable.  In the same
>>> > way, it should not be viewed as Heketi managing GlusterFS, but as
>>> > Heketi/GlusterFS.  Once this view is accepted (which is what users
>>> > want, but old-school gluster users have a hard time with), then
>>> > what Heketi currently does makes perfect sense.
>>> >
>>> > So, back to the question: no, Heketi does not and will never
>>> > manage such a model.  Any software that manages such a
>>> > configuration would be hard to productize and guarantee.  Can you
>>> > make a hack that does it?  Maybe, but reliability and simplicity
>>> > are what Heketi is after.
>>> >
>>> > Hope this answers your question.
>>>
>>> I know this has been asked more than once, and I believe it keeps
>>> being asked because the above is still an unsatisfactory answer.
>>> :) We are already seeing new users ask for maintenance features
>>> that would be perfectly possible with plain Gluster but are
>>> currently out of reach when going with heketi. I think focusing too
>>> hard on "simplicity" will quickly limit heketi's appeal. It would
>>> seem to make more sense to aim for a tailored experience, with the
>>> ability to go in deeper if desired.
>>>
>>> There doesn't seem to be anything technically complicated about the
>>> idea that heketi could tolerate a peer probe coming back already
>>> satisfied, or remove a node without detaching it from the peer
>>> list. I don't see how this would prove dangerous as long as we
>>> maintain the understanding that you are not to go in on the backend
>>> and mess with anything heketi is actively managing. This seems like
>>> something we could easily test, make reliable, and productize.
>>>
>>> --Jose
>>>
>>> > - Luis
>>> >
>>> > On Tue, Jan 24, 2017 at 12:24 PM, Jose A. Rivera <jarrpa at redhat.com>
>>> > wrote:
>>> >>
>>> >> Hey Luis, et al.,
>>> >>
>>> >> I talked to Ashiq about $SUBJECT, and he raised some concerns.
>>> >> Apparently heketi cannot load/import nodes that are already part
>>> >> of a Gluster cluster? E.g. if I have an existing cluster with all
>>> >> the nodes already peer probed, heketi will try to redo the probe
>>> >> and then fail when it comes back as already in the peer list?
>>> >> This seems odd to me, but if so, it sounds like a relatively easy
>>> >> thing to change. Thoughts?
>>> >>
>>> >> --Jose
>>> >
>>> >
>>
>>

