<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
On 05/30/2017 03:52 PM, Ric Wheeler wrote:<br>
<blockquote
cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
type="cite">On 05/30/2017 06:37 PM, Joe Julian wrote:
<br>
<blockquote type="cite">On 05/30/2017 03:24 PM, Ric Wheeler wrote:
<br>
<blockquote type="cite">On 05/27/2017 03:02 AM, Joe Julian
wrote:
<br>
<blockquote type="cite">On 05/26/2017 11:38 PM, Pranith Kumar
Karampuri wrote:
<br>
<blockquote type="cite">
<br>
<br>
On Wed, May 24, 2017 at 9:10 PM, Joe Julian
<<a class="moz-txt-link-abbreviated" href="mailto:joe@julianfamily.org">joe@julianfamily.org</a>
<a class="moz-txt-link-rfc2396E" href="mailto:joe@julianfamily.org"><mailto:joe@julianfamily.org></a>> wrote:
<br>
<br>
Forwarded for posterity and follow-up.
<br>
<br>
<br>
-------- Forwarded Message --------
<br>
Subject: Re: GlusterFS removal from Openstack
Cinder
<br>
Date: Fri, 05 May 2017 21:07:27 +0000
<br>
From: Amye Scavarda <a class="moz-txt-link-rfc2396E" href="mailto:amye@redhat.com"><amye@redhat.com></a>
<a class="moz-txt-link-rfc2396E" href="mailto:amye@redhat.com"><mailto:amye@redhat.com></a>
<br>
To: Eric Harney <a class="moz-txt-link-rfc2396E" href="mailto:eharney@redhat.com"><eharney@redhat.com></a>
<a class="moz-txt-link-rfc2396E" href="mailto:eharney@redhat.com"><mailto:eharney@redhat.com></a>, Joe
<br>
Julian <a class="moz-txt-link-rfc2396E" href="mailto:me@joejulian.name"><me@joejulian.name></a>
<a class="moz-txt-link-rfc2396E" href="mailto:me@joejulian.name"><mailto:me@joejulian.name></a>, Vijay Bellur
<br>
<a class="moz-txt-link-rfc2396E" href="mailto:vbellur@redhat.com"><vbellur@redhat.com></a>
<a class="moz-txt-link-rfc2396E" href="mailto:vbellur@redhat.com"><mailto:vbellur@redhat.com></a>
<br>
CC: Amye Scavarda <a class="moz-txt-link-rfc2396E" href="mailto:amye@redhat.com"><amye@redhat.com></a>
<a class="moz-txt-link-rfc2396E" href="mailto:amye@redhat.com"><mailto:amye@redhat.com></a>
<br>
<br>
<br>
<br>
Eric,
<br>
I'm sorry to hear this.
<br>
I'm reaching out internally (within the Gluster CI team
and CentOS CI, which
<br>
supports Gluster) to get an idea of the level of
effort we'll need to
<br>
provide to resolve this.
<br>
It'll take me a few days to get this, but this is on
my radar. In the
<br>
meantime, is there somewhere I should be looking
for requirements to
<br>
meet this gateway?
<br>
<br>
Thanks!
<br>
-- amye
<br>
<br>
On Fri, May 5, 2017 at 16:09 Joe Julian
<<a class="moz-txt-link-abbreviated" href="mailto:me@joejulian.name">me@joejulian.name</a>
<br>
<a class="moz-txt-link-rfc2396E" href="mailto:me@joejulian.name"><mailto:me@joejulian.name></a>> wrote:
<br>
<br>
On 05/05/2017 12:54 PM, Eric Harney wrote:
<br>
>> On 04/28/2017 12:41 PM, Joe Julian wrote:
<br>
>>> I learned, today, that GlusterFS was
deprecated and removed from
<br>
>>> Cinder as one of our #gluster
(freenode) users was attempting to
<br>
>>> upgrade openstack. I could find no
rationale or discussion of that
<br>
>>> removal. Could you please educate me
about that decision?
<br>
>>>
<br>
>
<br>
> Hi Joe,
<br>
>
<br>
> I can fill in on the rationale here.
<br>
>
<br>
> Keeping a driver in the Cinder tree requires
running a CI platform to
<br>
> test that driver and report results against
all patchsets submitted to
<br>
> Cinder. This is a fairly large burden, which
we could not meet
<br>
once the
<br>
> Gluster Cinder driver was no longer an active
development target at
<br>
Red Hat.
<br>
>
<br>
> This was communicated via a warning issued by
the driver for anyone
<br>
> running the OpenStack Newton code, and via
the Cinder release notes for
<br>
> the Ocata release. (I can see in retrospect
that this was probably not
<br>
> communicated widely enough.)
<br>
>
<br>
> I apologize for not reaching out to the
Gluster community about this.
<br>
>
<br>
> If someone from the Gluster world is
interested in bringing this driver
<br>
> back, I can help coordinate there. But it
will require someone
<br>
stepping
<br>
> in in a big way to maintain it.
<br>
>
<br>
> Thanks,
<br>
> Eric
<br>
<br>
Ah, Red Hat's statement that the acquisition of
InkTank was not an
<br>
abandonment of Gluster seems rather disingenuous
now. I'm disappointed.
<br>
<br>
<br>
I am a Red Hat employee working on gluster, and I am happy
with the kind of investments the company has made in GlusterFS.
Still am. It is a pretty good company and really open. I
never had any trouble saying that something the management did
was wrong when I strongly felt so, and they would give a decent
reason for their decision.
<br>
</blockquote>
<br>
Happy to hear that. Still looks like meddling to an
outsider. Not the Gluster team's fault, though (although more
participation by the developers in community meetings would
probably help with that feeling of being disconnected, in my
own personal opinion).
<br>
</blockquote>
<br>
Each member of the community needs to make sure that their
specific use case has the resources it needs to flourish. If
some team cares about Gluster in openstack, they should step
forward and provide the engineering and hardware resources
needed to make it succeed.
<br>
<br>
Red Hat has and continues to pour resources into Gluster -
Gluster is thriving. We have loads of work going on with
gluster in RHEV, Kubernetes, NFS Ganesha and Samba.
<br>
<br>
What we are not doing, and this has been clear for many years
now, is investing in Gluster in openstack.
<br>
</blockquote>
<br>
Again, nobody communicated with either the Openstack or the
Gluster communities about this, short of deprecation warnings
which are not the most effective way of reaching people (that
may be wrong on the part of most users, but unfortunately it's a
reality). Red Hat wasn't interested in investing in Gluster on
Openstack anymore. That's fine. It's your money. As a community
leader, proponent, and champion, however, Red Hat should have at
least invested in finding an interested party to take over the
effort - imho.
<br>
</blockquote>
<br>
I think it is 100% disingenuous to position this as a surprise
withdrawal by Red Hat of Gluster from openstack. Our position on
what we have focused on with Gluster has been
exceedingly clear for years.
<br>
</blockquote>
<br>
I am completely sincere. I do not posture or pose. I have absolutely
no reason to do so. I am not financially connected to gluster in any
way. The only place I currently use gluster is at home. My day job
with Samsung CNCT is solely connected to kubernetes and all the
persistent storage needs for our use are currently handled by AWS EBS
volumes. I am simply a member of the community for the sake of the
community, so when I make a statement about this being a surprise, I
do so as a user and community member.<br>
<br>
<blockquote
cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
type="cite">
<br>
As Eric pointed out, this was a warning in the Newton code and
was also in the release notes for prior openstack releases.
<br>
</blockquote>
<br>
<br>
<a class="moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/mitaka.html">https://docs.openstack.org/releasenotes/cinder/mitaka.html</a> /gluster<br>
<a class="moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/newton.html">https://docs.openstack.org/releasenotes/cinder/newton.html</a> /gluster<br>
<a class="moz-txt-link-freetext" href="https://docs.openstack.org/releasenotes/cinder/ocata.html">https://docs.openstack.org/releasenotes/cinder/ocata.html</a> /gluster<br>
* The GlusterFS volume driver, which was deprecated in the
Newton release, has been removed.<br>
<br>
Sure, if there's a fault it lies with the release note author.
Mistakes happen. I can shrug that off.<br>
<br>
<a class="moz-txt-link-freetext" href="https://lists.gt.net/engine?list=openstack;do=search_results;search_type=AND;search_forum=forum_3;search_string=gluster&sb=post_time">https://lists.gt.net/engine?list=openstack;do=search_results;search_type=AND;search_forum=forum_3;search_string=gluster&sb=post_time</a><br>
<br>
<a class="moz-txt-link-freetext" href="https://www.google.com/search?q=gluster-users+search&oq=gluster-users+search&q=site:lists.gluster.org+openstack+cinder+driver+newton">https://www.google.com/search?q=gluster-users+search&oq=gluster-users+search&q=site:lists.gluster.org+openstack+cinder+driver+newton</a><br>
<br>
No communication. Eric didn't think of doing so. Again, mistakes
happen. <shrug><br>
<br>
Now we just want to move forward. I really couldn't care less about
the history of this except to possibly learn from it. I did not want
this to turn into an issue of blame or one of defense. Stuff
happens, fine, can we learn and fix it and turn this into a
positive? I think so.<br>
<br>
<blockquote
cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
type="cite">
<blockquote type="cite">
<br>
<blockquote type="cite">
<br>
<blockquote type="cite">
<br>
<blockquote type="cite">
<br>
Would you please start a thread on the
gluster-users and gluster-devel
<br>
mailing lists and see if there's anyone willing to
take ownership of
<br>
this? I'm certainly willing to participate as well
but my $dayjob has
<br>
gone more kubernetes than openstack so I have only
my limited free time
<br>
that I can donate.
<br>
<br>
<br>
Do we know what maintaining the cinder driver as active would entail?
Did Eric get back to any of you?
<br>
</blockquote>
<br>
Haven't heard anything more, no.
<br>
</blockquote>
<br>
Who in the community that is using gluster in openstack is
willing to help with their own time and resources to meet the
openstack requirements?
<br>
</blockquote>
<br>
Nobody knows. We have no idea what that entails. Can you help
get that question answered? </blockquote>
<br>
The way open source works is that when someone gives notice in a
release that they are not maintaining a subsystem, that is an
invitation for someone else to step up. Sounds like an excellent
job for the community to dig into.
<br>
</blockquote>
<br>
The way open source actually <b>works</b> is when there is an
active and communicative community. I know I don't need to tell you;
you guys have literally written the book on community.<br>
<br>
<blockquote
cite="mid:f834bcd3-c6af-d497-fc5b-780e5192aebd@redhat.com"
type="cite">
<br>
As someone who runs the largest team of paid Gluster engineers in
the world, my job is to deliver engineering features in Red Hat
Gluster Storage that meet our business needs.</blockquote>
<br>
Agreed. No argument nor expectation otherwise.<br>
</body>
</html>