<div dir="ltr">Joe,<div><br></div><div>Agree with you on turning this around into something more positive.</div><div><br></div><div>One aspect that would really help us decide on our next steps here is the actual number of deployments that will be affected by the removal of the gluster driver in Cinder. If you are running or aware of a deployment of OpenStack Cinder &amp; Gluster, can you please respond on this thread or to me &amp; Niels in private providing more details about your deployment? Details like OpenStack &amp; Gluster versions, number of Gluster nodes &amp; total storage capactiy would be very useful to us.</div><div><br></div><div>Thanks!</div><div>Vijay</div><div><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 30, 2017 at 7:22 PM, Joe Julian <span dir="ltr">&lt;<a href="mailto:joe@julianfamily.org" target="_blank">joe@julianfamily.org</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF"><div><div class="gmail-h5">
    On 05/30/2017 03:52 PM, Ric Wheeler wrote:<br>
    <blockquote type="cite">On 05/30/2017 06:37 PM, Joe Julian wrote:
      <br>
      <blockquote type="cite">On 05/30/2017 03:24 PM, Ric Wheeler wrote:
        <br>
        <blockquote type="cite">On 05/27/2017 03:02 AM, Joe Julian
          wrote:
          <br>
          <blockquote type="cite">On 05/26/2017 11:38 PM, Pranith Kumar
            Karampuri wrote:
            <br>
            <blockquote type="cite">
              <br>
              <br>
              On Wed, May 24, 2017 at 9:10 PM, Joe Julian
              &lt;<a class="gmail-m_-8070790892442781168moz-txt-link-abbreviated" href="mailto:joe@julianfamily.org" target="_blank">joe@julianfamily.org</a>
              <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:joe@julianfamily.org" target="_blank">&lt;mailto:joe@julianfamily.org&gt;</a>&gt; wrote:
              <br>
              <br>
                  Forwarded for posterity and follow-up.
              <br>
              <br>
              <br>
                  -------- Forwarded Message --------
              <br>
                  Subject:     Re: GlusterFS removal from Openstack
              Cinder
              <br>
                  Date:     Fri, 05 May 2017 21:07:27 +0000
              <br>
                  From:     Amye Scavarda <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:amye@redhat.com" target="_blank">&lt;amye@redhat.com&gt;</a>
              <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:amye@redhat.com" target="_blank">&lt;mailto:amye@redhat.com&gt;</a>
              <br>
                  To:     Eric Harney <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:eharney@redhat.com" target="_blank">&lt;eharney@redhat.com&gt;</a>
              <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:eharney@redhat.com" target="_blank">&lt;mailto:eharney@redhat.com&gt;</a>, Joe
              <br>
                  Julian <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:me@joejulian.name" target="_blank">&lt;me@joejulian.name&gt;</a>
              <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:me@joejulian.name" target="_blank">&lt;mailto:me@joejulian.name&gt;</a>, Vijay Bellur
              <br>
                  <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:vbellur@redhat.com" target="_blank">&lt;vbellur@redhat.com&gt;</a>
              <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:vbellur@redhat.com" target="_blank">&lt;mailto:vbellur@redhat.com&gt;</a>
              <br>
                  CC:     Amye Scavarda <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:amye@redhat.com" target="_blank">&lt;amye@redhat.com&gt;</a>
              <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:amye@redhat.com" target="_blank">&lt;mailto:amye@redhat.com&gt;</a>
              <br>
              <br>
              <br>
              <br>
                  Eric,
              <br>
                  I&#39;m sorry to hear this.
              <br>
                  I&#39;m reaching out internally (within the Gluster CI team
              and CentOS CI which
              <br>
                  supports Gluster) to get an idea of the level of
              effort we&#39;ll need to
              <br>
                  provide to resolve this.
              <br>
                  It&#39;ll take me a few days to get this, but this is on
              my radar. In the
              <br>
                      meantime, is there somewhere I should be looking
              for requirements to
              <br>
                  meet this gateway?
              <br>
              <br>
                  Thanks!
              <br>
                  -- amye
              <br>
              <br>
                  On Fri, May 5, 2017 at 16:09 Joe Julian
              &lt;<a class="gmail-m_-8070790892442781168moz-txt-link-abbreviated" href="mailto:me@joejulian.name" target="_blank">me@joejulian.name</a>
              <br>
                  <a class="gmail-m_-8070790892442781168moz-txt-link-rfc2396E" href="mailto:me@joejulian.name" target="_blank">&lt;mailto:me@joejulian.name&gt;</a>&gt; wrote:
              <br>
              <br>
                      On 05/05/2017 12:54 PM, Eric Harney wrote:
              <br>
                      &gt;&gt; On 04/28/2017 12:41 PM, Joe Julian wrote:
              <br>
                      &gt;&gt;&gt; I learned, today, that GlusterFS was
              deprecated and removed from
              <br>
                      &gt;&gt;&gt; Cinder as one of our #gluster
              (freenode) users was attempting to
              <br>
                      &gt;&gt;&gt; upgrade openstack. I could find no
              rationale nor discussion of that
              <br>
                      &gt;&gt;&gt; removal. Could you please educate me
              about that decision?
              <br>
                      &gt;&gt;&gt;
              <br>
                      &gt;
              <br>
                      &gt; Hi Joe,
              <br>
                      &gt;
              <br>
                      &gt; I can fill in on the rationale here.
              <br>
                      &gt;
              <br>
                      &gt; Keeping a driver in the Cinder tree requires
              running a CI platform to
              <br>
                      &gt; test that driver and report results against
              all patchsets submitted to
              <br>
                      &gt; Cinder.  This is a fairly large burden, which
              we could not meet
              <br>
                      once the
              <br>
                      &gt; Gluster Cinder driver was no longer an active
              development target at
              <br>
                      Red Hat.
              <br>
                      &gt;
              <br>
                      &gt; This was communicated via a warning issued by
              the driver for anyone
              <br>
                      &gt; running the OpenStack Newton code, and via
              the Cinder release notes for
              <br>
                      &gt; the Ocata release.  (I can see in retrospect
              that this was probably not
              <br>
                      &gt; communicated widely enough.)
              <br>
                      &gt;
              <br>
                      &gt; I apologize for not reaching out to the
              Gluster community about this.
              <br>
                      &gt;
              <br>
                      &gt; If someone from the Gluster world is
              interested in bringing this driver
              <br>
                      &gt; back, I can help coordinate there.  But it
              will require someone
              <br>
                      stepping
              <br>
                      &gt; in in a big way to maintain it.
              <br>
                      &gt;
              <br>
                      &gt; Thanks,
              <br>
                      &gt; Eric
              <br>
              <br>
                      Ah, Red Hat&#39;s statement that the acquisition of
              InkTank was not an
              <br>
                      abandonment of Gluster seems rather disingenuous
              now. I&#39;m disappointed.
              <br>
              <br>
              <br>
              I am a Red Hat employee working on gluster and I am happy
              with the kind of investments the company has made in GlusterFS.
              Still am. It is a pretty good company and really open. I have
              never had any trouble saying that something the management did
              was wrong when I strongly felt so, and they would give a decent
              reason for their decision.
              <br>
            </blockquote>
            <br>
            Happy to hear that. Still looks like meddling to an
            outsider. Not the Gluster team&#39;s fault though (although more
            participation of the developers in community meetings would
            probably help with that feeling of being disconnected, in my
            own personal opinion).
            <br>
          </blockquote>
          <br>
          As a community, each member needs to make sure that their
          specific use case has the resources it needs to flourish. If
          some team cares about Gluster in openstack, they should step
          forward and provide the engineering and hardware resources
          needed to make it succeed.
          <br>
          <br>
          Red Hat has poured, and continues to pour, resources into Gluster -
          Gluster is thriving. We have loads of work going on with
          gluster in RHEV, Kubernetes, NFS Ganesha and Samba.
          <br>
          <br>
          What we are not doing, and this has been clear for many years
          now, is investing in Gluster in openstack.
          <br>
        </blockquote>
        <br>
        Again, nobody communicated with either the Openstack or the
        Gluster communities about this, short of deprecation warnings
        which are not the most effective way of reaching people (that
        may be wrong on the part of most users, but unfortunately it&#39;s a
        reality). Red Hat wasn&#39;t interested in investing in Gluster on
        Openstack anymore. That&#39;s fine. It&#39;s your money. As a community
        leader, proponent, and champion, however, Red Hat should have at
        least invested in finding an interested party to take over the
        effort - imho.
        <br>
      </blockquote>
      <br>
      I think it is 100% disingenuous to position this as a surprise
      withdrawal by Red Hat of Gluster from openstack. Our position on
      what we have focused on with Gluster has been
      exceedingly clear for years.
      <br>
    </blockquote>
    <br></div></div>
    I am completely sincere. I do not posture or pose. I have absolutely
    no reason to do so. I am not financially connected to gluster in any
    way. The only place I currently use gluster is at home. My day job
    with Samsung CNCT is solely connected to kubernetes and all the
    persistent storage needs for our use are currently handled by AWS EBS
    volumes. I am simply a member of the community for the sake of the
    community, so when I make a statement about this being a surprise, I
    do so as a user and community member.<span class="gmail-"><br>
    <br>
    <blockquote type="cite">
      <br>
      As Eric pointed out, this was a warning in the Newton code and
      was also in the release notes for prior openstack releases.
      <br>
    </blockquote>
    <br>
    <br>
    </span><a class="gmail-m_-8070790892442781168moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/mitaka.html" target="_blank">https://docs.openstack.org/<wbr>releasenotes/cinder/mitaka.<wbr>html</a> /gluster<br>
    <a class="gmail-m_-8070790892442781168moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/newton.html" target="_blank">https://docs.openstack.org/<wbr>releasenotes/cinder/newton.<wbr>html</a> /gluster<br>
    <a class="gmail-m_-8070790892442781168moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/ocata.html" target="_blank">https://docs.openstack.org/<wbr>releasenotes/cinder/ocata.html</a> /gluster<br>
        * The GlusterFS volume driver, which was deprecated in the
    Newton release, has been removed.<br>
    <br>
    Sure, if there&#39;s a fault it lies with the release note author.
    Mistakes happen. I can shrug that off.<br>
    <br>
<a class="gmail-m_-8070790892442781168moz-txt-link-freetext" href="https://lists.gt.net/engine?list=openstack;do=search_results;search_type=AND;search_forum=forum_3;search_string=gluster&amp;sb=post_time" target="_blank">https://lists.gt.net/engine?<wbr>list=openstack;do=search_<wbr>results;search_type=AND;<wbr>search_forum=forum_3;search_<wbr>string=gluster&amp;sb=post_time</a><br>
    <br>
<a class="gmail-m_-8070790892442781168moz-txt-link-freetext" href="https://www.google.com/search?q=gluster-users+search&amp;oq=gluster-users+search&amp;q=site:lists.gluster.org+openstack+cinder+driver+newton" target="_blank">https://www.google.com/search?<wbr>q=gluster-users+search&amp;oq=<wbr>gluster-users+search&amp;q=site:<wbr>lists.gluster.org+openstack+<wbr>cinder+driver+newton</a><br>
    <br>
    No communication. Eric didn&#39;t think of doing so. Again, mistakes
    happen. &lt;shrug&gt;<br>
    <br>
    Now we just want to move forward. I really couldn&#39;t care less about
    the history of this except to possibly learn from it. I did not want
    this to turn into an issue of blame nor one of defense. Stuff
    happens, fine, can we learn and fix it and turn this into a
    positive? I think so.<span class="gmail-"><br>
    <br>
    <blockquote type="cite">
      <blockquote type="cite">
        <br>
        <blockquote type="cite">
          <br>
          <blockquote type="cite">
            <br>
            <blockquote type="cite">
              <br>
                      Would you please start a thread on the
              gluster-users and gluster-devel
              <br>
                      mailing lists and see if there&#39;s anyone willing to
              take ownership of
              <br>
                      this. I&#39;m certainly willing to participate as well
              but my $dayjob has
              <br>
                      gone more kubernetes than openstack so I have only
              my limited free time
              <br>
                      that I can donate.
              <br>
              <br>
              <br>
              Do we know what maintaining cinder as active would entail?
              Did Eric get back to any of you?
              <br>
            </blockquote>
            <br>
            Haven&#39;t heard anything more, no.
            <br>
          </blockquote>
          <br>
          Who in the community using gluster in openstack is
          willing to help with their own time and resources to meet the
          openstack requirements?
          <br>
        </blockquote>
        <br>
        Nobody knows. We have no idea what that entails. Can you help
        get that question answered? </blockquote>
      <br>
      The way open source works is that when someone gives notice in a
      release that they are not maintaining a subsystem, that is an
      invitation for someone else to step up. Sounds like an excellent
      job for the community to dig into.
      <br>
    </blockquote>
    <br></span>
    The way open source actually <b>works</b> is with an
    active and communicative community. I know I don&#39;t need to tell you,
    you guys have literally written the book on community.<span class="gmail-"><br>
    <br>
    <blockquote type="cite">
      <br>
      As someone who runs the largest team of paid Gluster engineers in
      the world, my job is to deliver engineering features in Red Hat
      Gluster Storage that meet our business needs.</blockquote>
    <br></span>
    Agreed. No argument nor expectation otherwise.<br>
  </div>

<br>______________________________<wbr>_________________<br>
Gluster-devel mailing list<br>
<a href="mailto:Gluster-devel@gluster.org">Gluster-devel@gluster.org</a><br>
<a href="http://lists.gluster.org/mailman/listinfo/gluster-devel" rel="noreferrer" target="_blank">http://lists.gluster.org/<wbr>mailman/listinfo/gluster-devel</a><br></blockquote></div><br></div></div></div>