[heketi-devel] [heketi] Pre-existing GlusterFS cluster
Jose A. Rivera
jarrpa at redhat.com
Fri Mar 3 16:00:01 UTC 2017
He's exactly the guy that sparked this question. :)
--Jose
On Fri, Mar 3, 2017 at 4:07 AM, Mohamed Ashiq Liyazudeen
<mliyazud at redhat.com> wrote:
> Hi,
>
> We could do something like this. Not right now, but as a supported feature. Let me know if this helps:
>
> https://github.com/heketi/heketi/issues/700
>
> --
> Ashiq
>
> ----- Original Message -----
> From: "Raghavendra Talur" <rtalur at redhat.com>
> To: "Jose A. Rivera" <jarrpa at redhat.com>
> Cc: heketi-devel at gluster.org
> Sent: Thursday, March 2, 2017 2:19:11 PM
> Subject: Re: [heketi-devel] [heketi] Pre-existing GlusterFS cluster
>
> On Tue, Feb 28, 2017 at 10:09 PM, Jose A. Rivera <jarrpa at redhat.com> wrote:
>> Poking at this again. This time putting forth the idea of having two
>> heketi-controlled clusters overlapping in nodes (but not in devices,
>> of course). This facilitates, for example, a node that has both HDDs
>> and SSDs, allowing the HDDs to be grouped in one cluster and the SSDs
>> in another so that users have some ability to select the underlying
>> storage type if it matters to them.
>
> The same node being part of two clusters is not possible with the
> Gluster architecture. We might be able to achieve that only if we
> figure out how to run Gluster containers without using host networking.
> I do want to see a solution for the problem mentioned in the subject,
> and I like the previous algorithm given by Jose.
>
>
>>
>> --Jose
>>
>> On Wed, Feb 8, 2017 at 11:51 AM, Jose A. Rivera <jarrpa at redhat.com> wrote:
>>> Ping :)
>>>
>>> On Thu, Jan 26, 2017 at 8:59 AM, Jose A. Rivera <jarrpa at redhat.com> wrote:
>>>> Sure thing!
>>>>
>>>> The ask is for an OpenShift use case where I want to create a
>>>> GlusterFS volume to store the local Docker registry before any
>>>> containers are running. I was thinking of doing this by running
>>>> Gluster natively on the OpenShift nodes, outside of containers,
>>>> creating a cluster of them, then selecting a directory on each node to
>>>> serve as bricks for the volume. The idea here is that I would still
>>>> want to deploy heketi in a container later on, and just use these same
>>>> nodes in the topology file. heketi would still need to be given
>>>> dedicated storage devices on each node.
>>>>
>>>> As far as the algorithm, I figure it should be something like:
>>>>
>>>> For adding a node of a pre-existing cluster, watch the return
>>>> code/output from the exec of gluster peer probe; if it reports that
>>>> the peer is already in the peer list, return success. If for some
>>>> reason the pre-existing cluster only overlaps with the heketi cluster
>>>> on a subset of nodes, gluster can handle this.
>
> +1
>
>>>>
>>>> In the inverse, when you remove a node from heketi, watch for a
>>>> message that peer cannot be detached because it has bricks and remove
>>>> the node from heketi anyway. heketi already does its own checks to see
>>>> if a volume is on a particular node, so we can't get to the point
>>>> where a heketi-managed brick is still extant on a heketi-managed node
>>>> unless something goes really wrong (and then we have to resort to the
>>>> backend command line anyway, I'd imagine?).
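[Editor's note: the two-part tolerance logic described above could be sketched roughly as follows. This is a minimal illustration, not heketi's actual executor code; the function names are hypothetical, and the matched substrings are assumed forms of the gluster CLI's "already in peer list" and "Brick(s) with the peer ... exist in cluster" messages.]

```go
package main

import (
	"fmt"
	"strings"
)

// classifyPeerProbe interprets the result of exec'ing `gluster peer probe <host>`.
// A peer that is already in the peer list is treated as success, so nodes of a
// pre-existing cluster can be imported. (Hypothetical helper, assumed message text.)
func classifyPeerProbe(exitCode int, output string) error {
	if exitCode == 0 {
		return nil
	}
	if strings.Contains(output, "already in peer list") {
		// Pre-existing peer: tolerate and report success.
		return nil
	}
	return fmt.Errorf("peer probe failed: %s", output)
}

// classifyPeerDetach interprets the result of exec'ing `gluster peer detach <host>`.
// If detach fails only because the peer still holds bricks (from volumes heketi
// does not manage), the node is removed from heketi's database anyway.
func classifyPeerDetach(exitCode int, output string) (removeFromDB bool, err error) {
	if exitCode == 0 {
		return true, nil
	}
	if strings.Contains(output, "Brick(s) with the peer") {
		// Peer keeps its non-heketi bricks; drop it from heketi only.
		return true, nil
	}
	return false, fmt.Errorf("peer detach failed: %s", output)
}

func main() {
	err := classifyPeerProbe(1, "peer probe: host1 is already in peer list")
	fmt.Println("probe tolerated:", err == nil)

	ok, _ := classifyPeerDetach(1, "peer detach: failed: Brick(s) with the peer host1 exist in cluster")
	fmt.Println("remove from DB anyway:", ok)
}
```

[The point of the sketch is that only the classification of the CLI result changes; no new gluster operations are introduced, matching the "relatively easy thing to change" framing earlier in the thread.]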
>>>>
>>>> Feedback, concerns, or flames welcome. :)
>>>> --Jose
>>>>
>>>> On Wed, Jan 25, 2017 at 10:55 PM, Luis Pabon <lpabon at gmail.com> wrote:
>>>>> I think that people keep asking because they want to continue what they
>>>>> know.
>>>>>
>>>>> I think it would work better if you provide a set of requirements and
>>>>> preconditions and the algorithm which satisfies what you would like to do.
>>>>> Provide it here, and let's discuss it. No code needed.
>>>>>
>>>>> - Luis
>>>>>
>>>>> On Wed, Jan 25, 2017 at 7:29 PM, Jose A. Rivera <jarrpa at redhat.com> wrote:
>>>>>>
>>>>>> On Wed, Jan 25, 2017 at 2:16 PM, Luis Pabon <lpabon at gmail.com> wrote:
>>>>>> > Hi José,
>>>>>> > This has been asked for many, many times. Heketi was designed to "rule
>>>>>> > them all". Heketi was never designed for systems that have already been
>>>>>> > set up, because the permutations of possible configurations would be too
>>>>>> > extensive to manage. It is like creating a Ceph RADOS
>>>>>> > system by yourself, then asking the pool manager to figure out what you did.
>>>>>> > If instead Ceph is viewed as a collection of access+pool+storage and not
>>>>>> > as individual parts, then it all works well and is predictable. In the same
>>>>>> > way, it should not be viewed as Heketi managing GlusterFS, but as
>>>>>> > Heketi/GlusterFS. Once this view is accepted (which is what users want,
>>>>>> > but old-school Gluster users have a hard time with), then what Heketi
>>>>>> > currently does makes perfect sense.
>>>>>> >
>>>>>> > So, back to the question: no, Heketi does not and will never manage such a
>>>>>> > model. Any software that manages such a configuration would be hard to
>>>>>> > productize and guarantee. Can you make a hack that does it? Maybe, but
>>>>>> > reliability and simplicity are what Heketi is after.
>>>>>> >
>>>>>> > Hope this answers your question.
>>>>>>
>>>>>> I know this has been asked more than once and I believe this keeps
>>>>>> being asked for because the above is still an unsatisfactory answer.
>>>>>> :) Already we are seeing new users asking for maintenance features
>>>>>> that would be perfectly possible with Gluster but which are currently
>>>>>> out of reach when going with heketi. I think focusing too hard on
>>>>>> "simplicity" will quickly limit heketi's appeal. It would seem to
>>>>>> make more sense to aim for a tailored experience, with the ability
>>>>>> to go deeper if desired.
>>>>>>
>>>>>> There doesn't seem to be anything technically complicated about the
>>>>>> idea that heketi could tolerate a peer probe coming back already
>>>>>> satisfied, or that a node is removed without removing it from the peer
>>>>>> list. I don't see how this would prove to be dangerous as long as we
>>>>>> maintain the understanding that you are not to go in on the backend to
>>>>>> mess with anything heketi is actively managing. This seems like
>>>>>> something we could easily test, make reliable, and productize.
>>>>>>
>>>>>> --Jose
>>>>>>
>>>>>> > - Luis
>>>>>> >
>>>>>> > On Tue, Jan 24, 2017 at 12:24 PM, Jose A. Rivera <jarrpa at redhat.com>
>>>>>> > wrote:
>>>>>> >>
>>>>>> >> Hey Luis, et al.,
>>>>>> >>
>>>>>> >> I talked to Ashiq about $SUBJECT, and he raised some concerns.
>>>>>> >> Apparently heketi cannot load/import nodes that are already part of a
>>>>>> >> Gluster cluster? E.g. if I have an existing cluster with all the nodes
>>>>>> >> already peer probed, heketi will try to redo the probe and then fail
>>>>>> >> when it comes back "already in peer list"? This seems odd to me, but
>>>>>> >> if so it sounds like a relatively easy thing to change. Thoughts?
>>>>>> >>
>>>>>> >> --Jose
>>>>>> >
>>>>>> >
>>>>>
>>>>>
>> _______________________________________________
>> heketi-devel mailing list
>> heketi-devel at gluster.org
>> http://lists.gluster.org/mailman/listinfo/heketi-devel
>
> --
> Regards,
> Mohamed Ashiq.L
>