[Gluster-infra] New servers and usage for them
Niels de Vos
ndevos at redhat.com
Thu Mar 12 02:52:34 UTC 2015
On Wed, Mar 11, 2015 at 12:06:16PM +0100, Michael Scherer wrote:
> > On Tue, Mar 10, 2015 at 13:49 -0400, Kaleb S. KEITHLEY wrote:
> > On 03/05/2015 07:26 AM, Michael Scherer wrote:
> > > On Thu, Mar 5, 2015 at 04:08 +0000, Justin Clift wrote:
> > >> On 4 Mar 2015, at 22:22, Michael Scherer <mscherer at redhat.com> wrote:
> > >>> Hi,
> > >>>
> > >>> So back in December, as part of the project to move some of the
> > >>> infra out of iweb, we got 2 servers to be hosted in the RH DC. For
> > >>> various reasons, they are not online yet, but I will try to get them
> > >>> up soon.
> > >>>
> > >>> Since we are starting to hit the limits of the sponsored
> > >>> Rackspace infrastructure, Justin asked me to push this along a bit (as
> > >>> this was planned to be just the webserver and we were not in a hurry, I
> > >>> didn't give much priority to getting them plugged in).
> > ...
> > >>> So it can be a bit challenging to have an external master reach a
> > >>> slave behind a firewall. There are multiple solutions to that problem
> > >>> (VPN, moving the master onto the server, having more than 1 server, etc.,
> > >>> etc.), so what are people's ideas, based on what we need and what we want?
> > >>
> > >> From memory, these are the two servers bought for us, in place of the
> > >> two which Kaleb had been trying to get racked and publicly accessible
> > >> for a few months prior. So, Kaleb's the lead guy on what these two new
> > >> servers are for then I guess. :)
> > >
> > ...
> > >
> > > The 2nd one is big enough to replace the 2 servers that Kaleb wanted to
> > > rack. I have no idea why we didn't go with the servers he had in the
> > > first place, but I suppose this was discussed before my involvement.
> > >
> > >
> > It was only one server, and believe it or not, it's ready. (Well,
> > almost, one of the drives in one of the equallogics is bad. I have a
> > replacement coming.)
> > Do we still want it? Personally I don't think we can have enough jenkins
> > slaves, along with getting off Rackspace, or at least staying under our $$$
> > allotment. Is there room for it in the rack the above equipment is
> > intended to go in?
> > If you don't remember, it's a Dell R515 w/ 32G RAM, ~3TB in the PERC,
> > and 16TB raw in a pair of Equallogic 6010s. (~11TB in RAID6.) Total 8U.
> > I wanted it to run slaves on, so it needs external/public connectivity.
> > I'm half tempted to drive it down to RDU myself and hold people's hands
> > (twist their arms) to get it, and the above, installed pronto.
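[Editor's note: the "16TB raw / ~11TB in RAID6" figures above work out under a plausible configuration. A small sketch of the arithmetic; the per-array drive count, drive size, and hot-spare policy are assumptions for illustration, not confirmed specs of these particular Equallogics:]

```python
def raid6_usable_tb(drives: int, drive_tb: float, spares: int = 0) -> float:
    """RAID6 reserves two drives' worth of capacity for parity;
    hot spares hold no data."""
    data_drives = drives - spares - 2
    return data_drives * drive_tb

def tb_to_tib(tb: float) -> float:
    """Decimal terabytes (10**12 bytes) to binary tebibytes (2**40 bytes)."""
    return tb * 10**12 / 2**40

# Assume each PS6010 is a 16 x 500 GB shelf with one hot spare:
per_array = raid6_usable_tb(16, 0.5, spares=1)  # 6.5 TB usable per array
total_tb = 2 * per_array                        # 13.0 TB decimal
# 13 TB decimal is roughly 11.8 TiB, consistent with the "~11TB" figure.
```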
> While I would also entertain the idea of making a road movie with you
> driving from Boston to RDU in the snow just to drop off the 8U
> and have both of us convince the people responsible for the DC to host it,
> we still have no space for it AFAIK. We could have squeezed in a few
> U, I think, but 8U is too much given the current space.
> My team is working to get more space there (the latest ETA is May/June, but
> it was April at first, and we are dependent on logistics issues on the
> hoster's side). I will nonetheless ask IT/Eng-ops again to see if we
> can do something for that server, or if anything has changed.
> In the meantime, and while it's annoying that we cannot find space for the
> existing server you have right now, we have a "32 G of RAM/lots of
> CPU/reasonable amount of disk" server installed in the DC and dedicated
> to CI use. I could get the exact hardware details, but I am lazy; I
> think the specs are sufficiently similar to yours for it to work as a
> short-term replacement.
> This one is waiting to be used (as soon as I have an idea of what we
> want to install on it, I will install EL6 or 7, and likely libvirt or
> something).
> I asked for only a single IP for now, and would like to keep it that
> way if we can.
> So, we want jenkins slaves, like on Rackspace; is it OK to use a VPN or
> something?
I think the slaves can be on a VPN without issue. The only concern may be
getting developers access to a slave in case there is an error that is
difficult or impossible to reproduce on other systems. As long as
developers can get access to the slaves, we should be good. Putting them
on an internal/secured network sounds like a good thing to me.
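[Editor's note: one common way to give developers access to slaves on an internal/VPN network through a single public IP is an SSH jump host. A sketch of a client-side ~/.ssh/config; all hostnames and usernames here are hypothetical, not actual Gluster infrastructure names:]

```
# Hypothetical bastion reachable on the single public IP
Host gluster-bastion
    HostName bastion.example.org
    User devuser

# Slaves on the internal network, reached through the bastion
# (ProxyCommand -W was the standard mechanism in OpenSSH of this era)
Host slave*.int.example.org
    User jenkins
    ProxyCommand ssh -W %h:%p gluster-bastion
```

With this in place, `ssh slave01.int.example.org` would transparently tunnel through the bastion, so developers never need direct network access to the internal segment.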