<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    On 05/30/2017 03:52 PM, Ric Wheeler wrote:<br>
    <blockquote
      cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
      type="cite">On 05/30/2017 06:37 PM, Joe Julian wrote:
      <br>
      <blockquote type="cite">On 05/30/2017 03:24 PM, Ric Wheeler wrote:
        <br>
        <blockquote type="cite">On 05/27/2017 03:02 AM, Joe Julian
          wrote:
          <br>
          <blockquote type="cite">On 05/26/2017 11:38 PM, Pranith Kumar
            Karampuri wrote:
            <br>
            <blockquote type="cite">
              <br>
              <br>
              On Wed, May 24, 2017 at 9:10 PM, Joe Julian
              &lt;<a class="moz-txt-link-abbreviated" href="mailto:joe@julianfamily.org">joe@julianfamily.org</a>
              <a class="moz-txt-link-rfc2396E" href="mailto:joe@julianfamily.org">&lt;mailto:joe@julianfamily.org&gt;</a>&gt; wrote:
              <br>
              <br>
                  Forwarded for posterity and follow-up.
              <br>
              <br>
              <br>
                  -------- Forwarded Message --------
              <br>
                  Subject:     Re: GlusterFS removal from Openstack Cinder
              <br>
                  Date:     Fri, 05 May 2017 21:07:27 +0000
              <br>
                  From:     Amye Scavarda <a class="moz-txt-link-rfc2396E" href="mailto:amye@redhat.com">&lt;amye@redhat.com&gt;</a>
              <br>
                  To:     Eric Harney <a class="moz-txt-link-rfc2396E" href="mailto:eharney@redhat.com">&lt;eharney@redhat.com&gt;</a>, Joe Julian <a class="moz-txt-link-rfc2396E" href="mailto:me@joejulian.name">&lt;me@joejulian.name&gt;</a>, Vijay Bellur <a class="moz-txt-link-rfc2396E" href="mailto:vbellur@redhat.com">&lt;vbellur@redhat.com&gt;</a>
              <br>
                  CC:     Amye Scavarda <a class="moz-txt-link-rfc2396E" href="mailto:amye@redhat.com">&lt;amye@redhat.com&gt;</a>
              <br>
              <br>
              <br>
              <br>
                  Eric,
              <br>
                  I'm sorry to hear this.
              <br>
                  I'm reaching out internally (within Gluster CI team
              and CentOS CI which
              <br>
                  supports Gluster) to get an idea of the level of
              effort we'll need to
              <br>
                  provide to resolve this.
              <br>
                  It'll take me a few days to get this, but this is on
              my radar. In the
              <br>
                  meantime, is there somewhere I should be looking at
              for requirements to
              <br>
                  meet this gateway?
              <br>
              <br>
                  Thanks!
              <br>
                  -- amye
              <br>
              <br>
                  On Fri, May 5, 2017 at 16:09 Joe Julian
              &lt;<a class="moz-txt-link-abbreviated" href="mailto:me@joejulian.name">me@joejulian.name</a>
              <br>
                  <a class="moz-txt-link-rfc2396E" href="mailto:me@joejulian.name">&lt;mailto:me@joejulian.name&gt;</a>&gt; wrote:
              <br>
              <br>
                      On 05/05/2017 12:54 PM, Eric Harney wrote:
              <br>
                      &gt;&gt; On 04/28/2017 12:41 PM, Joe Julian wrote:
              <br>
                      &gt;&gt;&gt; I learned, today, that GlusterFS was
              deprecated and removed from
              <br>
                      &gt;&gt;&gt; Cinder as one of our #gluster
              (freenode) users was attempting to
              <br>
                      &gt;&gt;&gt; upgrade openstack. I could find no
              rationale or discussion of that
              <br>
                      &gt;&gt;&gt; removal. Could you please educate me
              about that decision?
              <br>
                      &gt;&gt;&gt;
              <br>
                      &gt;
              <br>
                      &gt; Hi Joe,
              <br>
                      &gt;
              <br>
                      &gt; I can fill in on the rationale here.
              <br>
                      &gt;
              <br>
                      &gt; Keeping a driver in the Cinder tree requires
              running a CI platform to
              <br>
                      &gt; test that driver and report results against
              all patchsets submitted to
              <br>
                      &gt; Cinder.  This is a fairly large burden, which
              we could not meet
              <br>
                      once the
              <br>
                      &gt; Gluster Cinder driver was no longer an active
              development target at
              <br>
                      Red Hat.
              <br>
                      &gt;
              <br>
                      &gt; This was communicated via a warning issued by
              the driver for anyone
              <br>
                      &gt; running the OpenStack Newton code, and via
              the Cinder release notes for
              <br>
                      &gt; the Ocata release.  (I can see in retrospect
              that this was probably not
              <br>
                      &gt; communicated widely enough.)
              <br>
                      &gt;
              <br>
                      &gt; I apologize for not reaching out to the
              Gluster community about this.
              <br>
                      &gt;
              <br>
                      &gt; If someone from the Gluster world is
              interested in bringing this driver
              <br>
                      &gt; back, I can help coordinate there.  But it
              will require someone
              <br>
                      stepping
              <br>
                      &gt; in in a big way to maintain it.
              <br>
                      &gt;
              <br>
                      &gt; Thanks,
              <br>
                      &gt; Eric
              <br>
              <br>
                      Ah, Red Hat's statement that the acquisition of
              InkTank was not an
              <br>
                      abandonment of Gluster seems rather disingenuous
              now. I'm disappointed.
              <br>
              <br>
              <br>
              I am a Red Hat employee working on Gluster, and I am happy
              with the kind of investment the company has made in
              GlusterFS. Still am. It is a pretty good company and really
              open. I have never had any trouble saying that something
              management did was wrong when I strongly felt so, and they
              would give a decent reason for their decision.
              <br>
            </blockquote>
            <br>
            Happy to hear that. Still looks like meddling to an
            outsider. Not the Gluster team's fault, though; more
            participation by the developers in community meetings would
            probably help with that feeling of being disconnected, in my
            own personal opinion.
            <br>
          </blockquote>
          <br>
          As a community, each member needs to make sure that their
          specific use case has the resources it needs to flourish. If
          some team cares about Gluster in openstack, they should step
          forward and provide the engineering and hardware resources
          needed to make it succeed.
          <br>
          <br>
          Red Hat has poured, and continues to pour, resources into
          Gluster; Gluster is thriving. We have loads of work going on
          with Gluster in RHEV, Kubernetes, NFS-Ganesha, and Samba.
          <br>
          <br>
          What we are not doing, and this has been clear for many years
          now, is investing in Gluster in OpenStack.
          <br>
        </blockquote>
        <br>
        Again, nobody communicated with either the OpenStack or the
        Gluster communities about this, short of deprecation warnings,
        which are not the most effective way of reaching people (that
        may be a failing on the part of most users, but unfortunately
        it's a reality). Red Hat wasn't interested in investing in
        Gluster on OpenStack anymore. That's fine. It's your money. As a
        community
        leader, proponent, and champion, however, Red Hat should have at
        least invested in finding an interested party to take over the
        effort - imho.
        <br>
      </blockquote>
      <br>
      I think it is 100% disingenuous to position this as a surprise
      withdrawal by Red Hat of Gluster from OpenStack. Our position on
      what we have focused on with Gluster has been exceedingly clear
      for years.
      <br>
    </blockquote>
    <br>
    I am completely sincere. I do not posture or pose. I have absolutely
    no reason to do so. I am not financially connected to gluster in any
    way. The only place I currently use gluster is at home. My day job
    with Samsung CNCT is solely connected to kubernetes and all the
    persistent storage needs for our use is currently handled by AWS EBS
    volumes. I am simply a member of the community for the sake of the
    community, so when I make a statement about this being a surprise, I
    do so as a user and community member.<br>
    <br>
    <blockquote
      cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
      type="cite">
      <br>
      As Eric pointed out, this was a warning in the Newton code and
      was also in the release notes for prior OpenStack releases.
      <br>
    </blockquote>
    <br>
    <br>
    <a class="moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/mitaka.html">https://docs.openstack.org/releasenotes/cinder/mitaka.html</a> /gluster<br>
    <a class="moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/newton.html">https://docs.openstack.org/releasenotes/cinder/newton.html</a> /gluster<br>
    <a class="moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/ocata.html">https://docs.openstack.org/releasenotes/cinder/ocata.html</a> /gluster<br>
        * The GlusterFS volume driver, which was deprecated in the
    Newton release, has been removed.<br>
    <br>
    Sure, if there's a fault it lies with the release note author.
    Mistakes happen. I can shrug that off.<br>
    <br>
<a class="moz-txt-link-freetext" href="https://lists.gt.net/engine?list=openstack;do=search_results;search_type=AND;search_forum=forum_3;search_string=gluster&amp;sb=post_time">https://lists.gt.net/engine?list=openstack;do=search_results;search_type=AND;search_forum=forum_3;search_string=gluster&amp;sb=post_time</a><br>
    <br>
<a class="moz-txt-link-freetext" href="https://www.google.com/search?q=gluster-users+search&amp;oq=gluster-users+search&amp;q=site:lists.gluster.org+openstack+cinder+driver+newton">https://www.google.com/search?q=gluster-users+search&amp;oq=gluster-users+search&amp;q=site:lists.gluster.org+openstack+cinder+driver+newton</a><br>
    <br>
    No communication. Eric didn't think of doing so. Again, mistakes
    happen. &lt;shrug&gt;<br>
    <br>
    Now we just want to move forward. I really couldn't care less about
    the history of this except to possibly learn from it. I did not want
    this to turn into an issue of blame or one of defense. Stuff
    happens, fine; can we learn from it, fix it, and turn this into a
    positive? I think so.<br>
    <br>
    <blockquote
      cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
      type="cite">
      <blockquote type="cite">
        <br>
        <blockquote type="cite">
          <br>
          <blockquote type="cite">
            <br>
            <blockquote type="cite">
              <br>
                      Would you please start a thread on the
              gluster-users and gluster-devel
              <br>
                      mailing lists and see if there's anyone willing to
              take ownership of
              <br>
                      this. I'm certainly willing to participate as well
              but my $dayjob has
              <br>
                      gone more kubernetes than openstack so I have only
              my limited free time
              <br>
                      that I can donate.
              <br>
              <br>
              <br>
              Do we know what maintaining the Cinder driver as active
              would entail? Did Eric get back to any of you?
              <br>
            </blockquote>
            <br>
            Haven't heard anything more, no.
            <br>
          </blockquote>
          <br>
          Who in the community that is using gluster in openstack is
          willing to help with their own time and resources to meet the
          openstack requirements?
          <br>
        </blockquote>
        <br>
        Nobody knows. We have no idea what that entails. Can you help
        get that question answered? </blockquote>
      <br>
      The way open source works is that when someone gives notice in a
      release that they are not maintaining a subsystem, that is an
      invitation for someone else to step up. Sounds like an excellent
      job for the community to dig into.
      <br>
    </blockquote>
    <br>
    The way open source actually <b>works</b> is through an active and
    communicative community. I know I don't need to tell you;
    you guys have literally written the book on community.<br>
    <br>
    <blockquote
      cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
      type="cite">
      <br>
      As someone who runs the largest team of paid Gluster engineers in
      the world, my job is to deliver engineering features in Red Hat
      Gluster Storage that meet our business needs.</blockquote>
    <br>
    Agreed. No argument nor expectation otherwise.<br>
  </body>
</html>