[Gluster-infra] Rackspace and Gluster

Michael Scherer mscherer at redhat.com
Thu Oct 19 12:40:03 UTC 2017


Hi,

so Rackspace decided to stop their OSS funding program 2 days ago [1].
For people not aware of it, Rackspace was funding various OSS projects
with US$2000 worth of credit per month, which we used to run various
systems (too many to list here).

Thanks to them for the fish and the support all these years; their
help was much appreciated in growing the project.

But we now have to do something before the 31st of December; after
that, we would have to pay for the infra.

Nigel started working yesterday on a spreadsheet for budget planning
in case we do not hit the target, and I have accelerated the move of
infra that was already under way. We are still in the planning phase,
and since I was on PTO yesterday we do not yet have a document ready
to share, but we are working on it.


And while we were already slowly planning to move out of Rackspace,
precisely to be more resilient to this kind of event, 2 months is
quite short. While I personally think we can do it (hopefully without
breakage), this implies focusing on that, which likely means pushing
some of the work we wanted to do this quarter to later. It also means
that if anyone needs us, please be mindful until January about not
adding unplanned work for us.


IMHO, our challenges will be:
- move the download server in a way that does not disrupt production
(ideally, have a mirror, something we have needed for a long time). I
already have some ideas for that, and opened a bug on it:
https://bugzilla.redhat.com/show_bug.cgi?id=1503529

- deal with the NetBSD VMs in an automated and scalable way (again, a
long-standing open item).
- scale the number of builders in the cage (e.g., start to use them
for regression testing and not just source code building)

- move the lists server to the cage in a way that does not disrupt
communication. We have to take into account DNS propagation and the
like.
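The DNS concern above comes down to record TTLs: resolvers that cached
the old address just before the cutover can keep serving it for up to
the record's TTL afterwards, which is why one usually lowers the TTL
well ahead of the move. A minimal sketch of that arithmetic (the TTL
values and the timeline here are illustrative assumptions, not our
actual zone data):

```python
from datetime import datetime, timedelta

def earliest_full_propagation(change_time: datetime, ttl_seconds: int) -> datetime:
    """A resolver that cached the record just before change_time may
    serve the old address for up to ttl_seconds afterwards, so this is
    the earliest moment all (well-behaved) caches have the new one."""
    return change_time + timedelta(seconds=ttl_seconds)

# Hypothetical timeline: with a common default TTL of 3600s, a record
# changed at 12:00 UTC can still resolve to the old server until 13:00.
change = datetime(2017, 12, 1, 12, 0)
print(earliest_full_propagation(change, 3600))   # 2017-12-01 13:00:00

# Lowering the TTL to 300s beforehand shrinks that window to 5 minutes,
# at the cost of more queries hitting the authoritative servers.
print(earliest_full_propagation(change, 300))    # 2017-12-01 12:05:00
```

The catch is that the lowered TTL must itself be in place for at least
one old-TTL period before the cutover, or caches may still hold the
record under the old, longer TTL.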

The rest (moving munin, syslog, FreeIPA, cleaning up old servers) is
mostly internal detail that shouldn't impact people, and was already
under way. These tasks are IMHO also easier and more controlled, so I
will focus on them for the time being.



[1] https://twitter.com/ericholscher/status/920396452307668992
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS


