[Gluster-infra] More proxy cleanup coming

Michael Scherer mscherer at redhat.com
Tue Oct 16 14:40:59 UTC 2018


On Wednesday, 5 September 2018 at 12:56 +0200, Michael Scherer wrote:
> On Tuesday, 4 September 2018 at 17:50 +0200, Michael Scherer wrote:
> > On Thursday, 23 August 2018 at 17:52 +0200, Michael Scherer wrote:
> > > On Thursday, 15 March 2018 at 15:35 +0100, Michael Scherer wrote:
> > > > Hi,
> > > > 
> > > > Now that we have a new proxy (yes, I am almost as proud of it
> > > > as of the firewall), I need to move the services from the old
> > > > proxy to the new one. This will imply some downtime, because
> > > > DNS takes time to propagate, and we need DNS in place for
> > > > Let's Encrypt before deploying. And we still have a DNS issue
> > > > on the server side that makes changes take far longer than
> > > > before.
> > > > 
> > > > While I can do some things manually, I'd rather avoid manual
> > > > fiddling when I can, so I would like people to tell me how
> > > > critical each of these domains is, so I can figure out the
> > > > best approach, e.g., can they be down for 10 to 20 minutes, do
> > > > people want some advance notice, etc.:
> > > > 
> > > > - bits.gluster.org
> > > > - ci-logs.gluster.org
> > > > - softserve.gluster.org
> > > > - fstat.gluster.org
> > > > 
> > > > I also plan to move Jenkins (so build.gluster.org) to that
> > > > proxy, along with the Jenkins staging instance, and later move
> > > > the VM to the internal network.
> > > > 
> > > > While the staging instance is not a problem, I guess we need
> > > > to schedule a time for the production one.
> > > 
> > > Since nobody answered me on these, I am going to assume "not
> > > overly critical" and switch DNS over a weekend. I will try to
> > > make it as non-disruptive as possible, but that's DNS, and most
> > > of the time it is out of my control. The biggest issue is that
> > > we need DNS to work for Let's Encrypt, and Let's Encrypt to
> > > deploy the vhost, so we can't set up the vhost in advance, or at
> > > least not too far in advance.
> > > 
> > > So it's a chicken-and-egg issue.
> > > 
> > > Not this weekend, but around the start of September.
> > > 
> > > The proxy successfully renewed its Let's Encrypt certificate 3
> > > days ago, so I consider it to be working well enough to switch
> > > to it.
> > > 
> > > Then I will decommission the old VM.
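[Editor's note: the ordering constraint described above — DNS must already resolve to the new proxy before Let's Encrypt can issue a certificate and the vhost can go live — can be sketched as a small polling loop. This is only a sketch: the IP address, retry counts, and the certbot invocation in the usage comment are assumptions, not the actual gluster.org setup.]

```shell
#!/bin/sh
# Poll DNS until NAME resolves to the new proxy's address; only then
# is it safe to request the Let's Encrypt certificate and enable the
# vhost. (Sketch: 203.0.113.10 is a placeholder address.)
wait_for_dns() {
    name=$1 expected=$2 tries=${3:-30}
    i=0
    while [ "$i" -lt "$tries" ]; do
        # `dig +short` prints only the A record(s) for the name
        if dig +short "$name" | grep -qx "$expected"; then
            return 0
        fi
        i=$((i + 1))
        sleep 60   # DNS propagation is slow; no point polling faster
    done
    return 1
}

# Hypothetical usage, run after switching the DNS record:
# wait_for_dns bits.gluster.org 203.0.113.10 && \
#     certbot certonly --webroot -w /var/www -d bits.gluster.org
```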
> > 
> > So:
> > bits.gluster.org
> > ci-logs.gluster.org
> > 
> > have been moved. Please tell me if anything is broken; I will do
> > the two others later during the week.
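[Editor's note: a quick way to check whether a moved vhost still answers after the DNS switch is to fetch its front page over HTTPS and look at the status code. A sketch only; which status codes count as healthy for each service is an assumption.]

```shell
#!/bin/sh
# Smoke test for a moved vhost: fetch the front page over HTTPS and
# print just the HTTP status code.
check_vhost() {
    # -s: silent, -o /dev/null: discard body, -w: print only the code
    curl -s -o /dev/null -w '%{http_code}' "https://$1/"
}

# Hypothetical usage, run after the move:
# for host in bits.gluster.org ci-logs.gluster.org; do
#     printf '%s -> HTTP %s\n' "$host" "$(check_vhost "$host")"
# done
```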
> 
> And the two others have been moved.
> 
> The only one missing is failurestat.gluster.org; I will check with
> Nigel about how critical that vhost is, then remove the server.

And so the server has been removed from Ansible, from disk, from
Nagios, and everything else.


> Next step: move fstat/softserve to an internal server so we can free
> a few public IPs, and likely do the same with Jenkins (which will be
> a bigger deal).

I still need to do that for fstat/softserve, though.

-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS
