<div dir="ltr">I think that people keep asking because they want to continue what they know. <div><br></div><div>I think it would work better if you provide a set of requirements and preconditions and the algorithm which satisfies what you would like to do. Provide it here, and let's discuss it. No code needed.</div><div><br></div><div>- Luis</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 25, 2017 at 7:29 PM, Jose A. Rivera <span dir="ltr"><<a href="mailto:jarrpa@redhat.com" target="_blank">jarrpa@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Wed, Jan 25, 2017 at 2:16 PM, Luis Pabon <<a href="mailto:lpabon@gmail.com">lpabon@gmail.com</a>> wrote:<br>
>> Hi José,
>> This has been asked many, many times. Heketi was designed to "rule
>> them all". Heketi was never designed for systems that have already
>> been set up, because the permutations of possible configurations
>> would be too extensive to manage. It is like building a Ceph RADOS
>> system by yourself and then asking the pool manager to figure out
>> what you did. If instead Ceph is viewed as a combination of
>> access + pool + storage, and not as individual parts, then it all
>> works well and is predictable. In the same way, it should not be
>> viewed as Heketi managing GlusterFS, but as Heketi/GlusterFS. Once
>> this view is accepted (which is what users want, but old-school
>> Gluster users have a hard time with), then what Heketi currently
>> does makes perfect sense.
>>
>> So, back to the question: no, Heketi does not and will never manage
>> such a model. Any software that manages such a configuration would
>> be hard to productize and guarantee. Can you make a hack that does
>> it? Maybe, but reliability and simplicity are what Heketi is after.
>>
>> Hope this answers your question.
>>
> I know this has been asked more than once, and I believe it keeps
> being asked because the above is still an unsatisfactory answer. :)
> We are already seeing new users asking for maintenance features that
> would be perfectly possible with Gluster but are currently out of
> reach when going through heketi. I think focusing too hard on
> "simplicity" will quickly limit heketi's desirability. It would seem
> to make more sense to aim for a tailored experience, with the
> ability to go in deeper if desired.
>
> There doesn't seem to be anything technically complicated about the
> idea that heketi could tolerate a peer probe coming back already
> satisfied, or a node being removed without removing it from the peer
> list. I don't see how this would prove dangerous as long as we
> maintain the understanding that you are not to go onto the backend
> and mess with anything heketi is actively managing. This seems like
> something we could easily test, make reliable, and productize.
<span class="HOEnZb"><font color="#888888"><br>
--Jose<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
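Spelling out the first of those tolerances: when probing a node during
an import, a response saying the host is already in the peer list
could simply be treated as success instead of an error. A minimal Go
sketch of that idea follows; it is not heketi's actual executor code,
and both the host name and the exact output substring matched are
assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probePeer runs "gluster peer probe <host>" and treats a reply that
// the host is already in the peer list as success, so that importing
// an existing cluster does not abort on nodes probed beforehand.
func probePeer(host string) error {
	out, err := exec.Command("gluster", "peer", "probe", host).CombinedOutput()
	msg := strings.TrimSpace(string(out))
	if strings.Contains(msg, "already in peer list") {
		// Assumed message fragment: the node is already part of the
		// trusted pool, so accept it as-is.
		return nil
	}
	if err != nil {
		return fmt.Errorf("peer probe of %s failed: %v: %s", host, err, msg)
	}
	return nil
}

func main() {
	// Hypothetical host name, for the example only.
	if err := probePeer("gluster-node-2.example.com"); err != nil {
		fmt.Println(err)
	}
}

The same pattern would apply to node removal: skip the peer detach
when the node should stay in the trusted pool, rather than failing
the whole operation.
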
>> - Luis
>>
>> On Tue, Jan 24, 2017 at 12:24 PM, Jose A. Rivera <jarrpa@redhat.com> wrote:
>>>
>>> Hey Luis, et al.,
>>>
>>> I talked to Ashiq about $SUBJECT, and he raised some concerns.
>>> Apparently heketi cannot load/import nodes that are already part of
>>> a Gluster cluster? E.g., if I have an existing cluster with all the
>>> nodes already peer probed, heketi will try to redo the probe and
>>> then fail when the probe comes back as already in the peer list?
>>> This seems odd to me, but if so, it sounds like a relatively easy
>>> thing to change. Thoughts?
>>>
>>> --Jose
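On the import question above: the failing re-probe could also be
avoided by checking the trusted pool first and only probing hosts that
are not yet in it. Another rough Go sketch, again not heketi code; the
host name is made up and the output of "gluster pool list" is parsed
naively by substring.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// alreadyPeered reports whether host appears in the output of
// "gluster pool list", so an import step can skip probing nodes that
// are already part of the trusted pool.
func alreadyPeered(host string) (bool, error) {
	out, err := exec.Command("gluster", "pool", "list").Output()
	if err != nil {
		return false, fmt.Errorf("could not list pool members: %v", err)
	}
	return strings.Contains(string(out), host), nil
}

func main() {
	// Hypothetical host name, for the example only.
	host := "gluster-node-3.example.com"
	peered, err := alreadyPeered(host)
	if err != nil {
		fmt.Println(err)
		return
	}
	if peered {
		fmt.Printf("%s is already in the peer list; skipping probe\n", host)
	} else {
		fmt.Printf("%s would need a peer probe\n", host)
	}
}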