[Gluster-infra] Plan to move out of rackspace

Michael Scherer mscherer at redhat.com
Wed Oct 17 10:01:45 UTC 2018


Hi,

so, as the people who have to deal with that know, we have to get out
of Rackspace, since they are stopping sponsoring free software
projects in 2 months.

After cleaning up servers, we still have the following systems in
rackspace:

- download.gluster.org
- supercolony.gluster.org
- bugs.gluster.org
- a few builders

download
========

For download, the short term plan is to install an EL7 system in the
datacenter on a hypervisor, then move the data there, and switch DNS.
This should be relatively painless.
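
To make sure the copy went fine, here is a minimal sketch of the kind
of checksum pass that could be run on both the old and the new server
and diffed; the default root is a placeholder, not the real layout of
download.gluster.org:

#!/usr/bin/env python3
# Minimal sketch: walk a tree and print "sha256  relative/path" lines,
# so the output from the old and the new server can be diffed after
# the copy. The default root below is a placeholder, not the real
# layout of download.gluster.org.
import hashlib
import os
import sys

ROOT = sys.argv[1] if len(sys.argv) > 1 else "/srv/download"

for dirpath, dirnames, filenames in os.walk(ROOT):
    dirnames.sort()  # deterministic traversal order, so outputs diff cleanly
    for name in sorted(filenames):
        path = os.path.join(dirpath, name)
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        print(digest.hexdigest(), os.path.relpath(path, ROOT))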

Later, I want to improve the setup by having:
- an internal server (i.e., not directly exposed to the internet)

- have it behind our pair of proxies (I may do that right away, since
it would make things easier in the future)

- have a 2nd internal server (ideally synced with gluster, but the
exact logistics of this are still to be discussed), with the same HA
setup we have for the reverse proxy, the proxy and the firewalls

- have a pipeline so the server no longer requires a human to do scp +
manual work (a rough sketch of such a step follows this list)

- finish the AIDE setup to detect server compromise
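
As a strawman for the pipeline item above, a minimal sketch of a
publish step that refuses to move a tarball into the public tree
unless its .sha256 sidecar matches; all paths are made up for
illustration:

#!/usr/bin/env python3
# Strawman publish step: verify the sha256 sidecar before moving a
# tarball into the public tree, so nobody has to scp things by hand.
# All paths below are hypothetical, not the real download layout.
import hashlib
import shutil
import sys
from pathlib import Path

INCOMING = Path("/srv/incoming")        # assumed drop directory
PUBLIC = Path("/srv/download/gluster")  # assumed public tree

def publish(tarball_name):
    tarball = INCOMING / tarball_name
    sidecar = INCOMING / (tarball_name + ".sha256")
    expected = sidecar.read_text().split()[0]
    actual = hashlib.sha256(tarball.read_bytes()).hexdigest()
    if actual != expected:
        sys.exit("checksum mismatch for %s, refusing to publish" % tarball_name)
    PUBLIC.mkdir(parents=True, exist_ok=True)
    shutil.move(str(tarball), str(PUBLIC / tarball_name))
    shutil.move(str(sidecar), str(PUBLIC / sidecar.name))

if __name__ == "__main__":
    publish(sys.argv[1])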

Nigel proposed having a staging server in the meantime, so people can
access it to push data and we then sync to the backend server(s); a
proposal not without merit, but one that needs to be discussed in a
bit more detail.

I also still think we should split the server in two, one for critical
content (tarballs) and one for less critical content (logs). But that
requires sitting down and maybe simplifying the directory structure
(which I still find confusing).

supercolony
===========

This one is going to be the fun one. So supercolony is EL6, the last
one we have. Most of its services have been moved somewhere else
(planet got moved to an internal server, the blog was moved last year,
the various legacy redirections are done on the proxy now), slowly but
surely, but it still serves a rather important purpose: being the mail
server for the gluster.org domain. So it runs mailman 2, handles mail
aliases, etc.

I still find weird stuff there, so I would really want to wipe it and
upgrade to EL7, but this requires a lot more planning and testing. I
would also ideally move it to a more resilient setup, but mailman 2 is
not easy to scale, from what I have seen. So the plan is the
following:

- clean the logs (since we still have old files around, seeing what
vhosts people are using is not trivial; a small log-parsing sketch
follows this list)
- verify that we moved all the vhosts that can be moved to another
server. The only host that should remain is "lists.gluster.org".
- update the server, shut it down, get a snapshot made, restart it
- copy the snapshot to a hypervisor, do the proper setup to start the
VM and have it in ansible.
- test it
- declare a day to switch, switch, fix stuff we missed
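
For the log cleaning item, this is the kind of one-off script I have
in mind to see which vhosts still get traffic; it assumes the vhost is
the first field of each log line (a LogFormat starting with "%v"),
which may not match the actual configuration:

#!/usr/bin/env python3
# One-off sketch: count hits per vhost in the access logs, to see
# which vhosts still receive traffic before the move. Assumes the
# vhost is the first field of each line (a LogFormat starting with
# "%v"), which may not match the real configuration.
import collections
import glob
import gzip
import sys

pattern = sys.argv[1] if len(sys.argv) > 1 else "/var/log/httpd/access_log*"
hits = collections.Counter()

for logfile in sorted(glob.glob(pattern)):
    opener = gzip.open if logfile.endswith(".gz") else open
    with opener(logfile, "rt", errors="replace") as handle:
        for line in handle:
            fields = line.split()
            if fields:
                hits[fields[0]] += 1

for vhost, count in hits.most_common():
    print("%10d  %s" % (count, vhost))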

Later:
- move the service to EL7 (with saner partitioning)
- burn the older VM
- celebrate

Later again
- split mailman out of the MX part, get 2 VMs for the MX setup, and
move mailman to the LAN

bugs
====

This one is the simple one, and will likely be the first one I deal with.

So my first plan was to just finish the ansible integration, move it
to a VM on the internal VLAN (so behind our proxy), and switch.

However, looking at the work I did, I wonder if it would be simpler to
generate the content on a jenkins builder (for example, bugzilla.int),
just run the jobs there, and copy the data to the internal http server
(like ci-logs, etc).
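
In that scenario, the last step of the job would be something like the
sketch below, pushing the generated pages to the internal http server;
the host name and paths are placeholders, not the real machines:

#!/usr/bin/env python3
# Sketch of the last step of such a jenkins job: push the generated
# pages to the internal http server with rsync over ssh. The host
# name and paths are placeholders, not the real infrastructure.
import subprocess
import sys

SOURCE = "build/output/"                              # hypothetical job output
TARGET = "deploy@ci-logs.int.example.org:/srv/bugs/"  # placeholder host/path

result = subprocess.run(["rsync", "-az", "--delete", SOURCE, TARGET])
sys.exit(result.returncode)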

Nigel discussed moving that to OpenShift, which would be a good idea,
but we do not have that yet, and frankly, given that all my attempts
to install OpenShift ended with me spending lots of time fixing small
bugs, I think I will wait until the pace of break^W innovation slows
down before using it, especially with a 2 month deadline looming.

Niels, as you are the one who deals with bugs.gluster.org, do you have
an opinion on that plan?

the few builders
================

So we do have regression testing done there, as well as the freebsd
builders. The regression jobs can be switched to the cage builders
quite easily, but I will let Nigel comment on that.

The freebsd builder is a bit more interesting. We do have an internal
one, but I also want to:
1) rename it, because naming it freebsd10.3 wasn't my best idea
- we upgraded the OS, so it is now 10.4
- munin thinks there is a subdomain .3.int.rht.gluster.org

2) upgrade to freebsd 11 (so, prepare for trouble)
3) have a 2nd builder (so, make it double)

So, to protect the wor^W^W^W^W, to improve the setup, I propose to:
- switch the internal builder to be the official builder
- forget the one in the cloud
- install 2 fresh new VMs on freebsd 11.X
- set up a job for testing the build there (a rough sketch follows this list)
- switch to the 2 new VMs
- remove the older one
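
For the test job, a rough sketch of what it could look like: ssh to
each new builder and attempt a plain build of a tree already checked
out there; host names and the build command are assumptions on my
part:

#!/usr/bin/env python3
# Rough sketch of a smoke-test job: ssh to each new freebsd builder
# and try a plain autotools build of a tree already checked out
# there. Host names and the build command are assumptions.
import subprocess
import sys

BUILDERS = [
    "freebsd11-1.int.example.org",  # placeholder names
    "freebsd11-2.int.example.org",
]
BUILD = "cd glusterfs && ./autogen.sh && ./configure && gmake"

failed = False
for host in BUILDERS:
    print("=== building on %s ===" % host)
    if subprocess.run(["ssh", host, BUILD]).returncode != 0:
        print("build failed on %s" % host, file=sys.stderr)
        failed = True

sys.exit(1 if failed else 0)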


Thoughts on the plan?

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS

