[Gluster-infra] Jenkins test server is down

Michael Scherer mscherer at redhat.com
Fri Aug 2 22:03:33 UTC 2019


TL;DR: an unused test server was running a cryptominer; nothing was lost.
We stopped the server, and we will burn it down and reinstall.


So on Monday, I found out that, due to neglect (aka, we didn't upgrade
the plugins), the staging instance of Jenkins had been compromised,
likely during a wide-scale attack (https://isc.sans.edu/diary/rss/24916).

Upon seeing a weird process running under the jenkins account, I
immediately suspended the server, and contacted our security team. 

After doing a bit of forensics with volatility, guestfs and radare2, I
concluded that nothing was taken but CPU time, that the server was
running a Monero miner, and that it had been compromised for more than
2 months (our logs on that server do not go back far enough in time). I
also found that no one was using the server, since it was down for a
whole month before being restarted after a Jenkins upgrade.
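To give an idea of the kind of check involved, here is a minimal sketch of one heuristic: flagging processes whose remote connections hit ports commonly used by public Monero pools. This is an illustration, not the actual forensics commands we ran; the process/connection data and the port list are assumptions.

```python
# Hypothetical detection heuristic: flag processes whose remote
# connections use ports commonly (not exclusively) seen on public
# Monero mining pools. Sample data below is made up for illustration.

SUSPECT_POOL_PORTS = {3333, 4444, 5555, 7777, 14444}

def flag_suspect_connections(connections):
    """connections: list of (process_name, remote_port) tuples.
    Returns the subset whose remote port matches a suspect pool port."""
    return [(name, port) for name, port in connections
            if port in SUSPECT_POOL_PORTS]

# Sample data standing in for the output of `ss -tpn` or a memory scan.
sample = [("sshd", 22), ("kworker", 0), ("jenkins-slave", 14444)]
print(flag_suspect_connections(sample))  # → [('jenkins-slave', 14444)]
```

Port matching alone gives false positives, of course; on the real server it was the combination of this kind of signal with the binary's signature that settled it.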

The server was just there to test packages, plugins and configuration
without touching prod. It is basically a sandbox, and after Nigel left,
it was left rotting. While we do automated upgrades of all packages,
the Jenkins plugins were not upgraded, so they were old.

One in particular was Script Security, which was lagging far behind
(version 1.29, so roughly 2 years old), and that's the plugin we use to
mitigate CVE-2018-1000861. There have been several "sandbox bypass"
problems since the end of 2018, and to this day, we still see attempts
on the production server (which are blocked, because it is kept up to
date).


For people who want more information on the type of attack, this is
explained here:




Since we saw no SELinux violations in the logs (nor anything utterly
suspicious), the process wasn't really trying to hide itself, the
malware was connected to a Monero pool with a signature matching a
Monero miner, and everything was running under the jenkins account, we
have no reason to think anything else happened.

While the malware tries to hide itself if it manages to get root access
(with sudo or anything else), it didn't get that far, and we didn't
find any suspect process in memory.

And since the server was only minimally configured (no ssh keys, old
node names from the Rackspace days, no Jenkins secrets, no users but a
few local ones), I think nothing but mining happened (and I suspect the
miner wasn't even efficient, since the only thing I see in the graph is
Jenkins having a problem for a month, and I also see 2 segfaults in
the log from a process related to the malware):


I suspect the attack was around the 8th of May:


While attacked on a regular basis, production wasn't impacted, because
Deepshika took care of keeping the plugins up to date, and we have
automation for the rest (and proper monitoring).

So, we are going to erase the server and reinstall it from scratch. This
will also be an opportunity to automate the deployment and
configuration further, based on Groovy scripts run at Jenkins startup
(which is why I had connected to the staging instance: to test how that
works, once I found out that we can do that).
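For those unfamiliar with the mechanism: Jenkins runs any Groovy script it finds in $JENKINS_HOME/init.groovy.d at startup, which is what makes this kind of automation possible. A minimal sketch of placing such a hook, where the path default and the executor setting are illustrative assumptions, not our actual configuration:

```python
# Sketch: drop a Groovy script into $JENKINS_HOME/init.groovy.d, the
# directory Jenkins scans for scripts at startup. JENKINS_HOME defaults
# to a local path here for illustration; on a real install it is
# typically something like /var/lib/jenkins.
import os

jenkins_home = os.environ.get("JENKINS_HOME",
                              os.path.join(os.getcwd(), "jenkins_home"))
hook_dir = os.path.join(jenkins_home, "init.groovy.d")
os.makedirs(hook_dir, exist_ok=True)

groovy = """\
import jenkins.model.Jenkins

// Example setting: run builds only on agents, never on the controller.
Jenkins.instance.setNumExecutors(0)
Jenkins.instance.save()
"""
with open(os.path.join(hook_dir, "basic-setup.groovy"), "w") as f:
    f.write(groovy)

print(sorted(os.listdir(hook_dir)))  # → ['basic-setup.groovy']
```

The nice property is that the configuration is reapplied on every restart, so a rebuilt server converges to the same state without manual clicking in the UI.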


We also identified a way to automate the plugin upgrades, using


And I will also place it in the internal LAN, where access to the
external network is strictly controlled (firewall, proxy, DNS logging,
etc, etc).
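A default-deny egress policy like that would have cut off the miner's pool traffic on day one. As a toy model of the idea (addresses and ports below are placeholder assumptions, not our real topology): only the internal LAN, the proxy, and internal DNS are reachable, and everything else is dropped.

```python
# Toy model of a default-deny egress policy: allow the internal LAN,
# the proxy, and internal DNS; drop everything else. The networks and
# addresses here are placeholder assumptions for illustration.
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")
ALLOWED = {("10.0.0.53", 53),    # internal DNS resolver
           ("10.0.0.10", 3128)}  # outbound proxy

def egress_allowed(dst_ip, dst_port):
    """Return True if a connection to (dst_ip, dst_port) is permitted."""
    if (dst_ip, dst_port) in ALLOWED:
        return True
    return ipaddress.ip_address(dst_ip) in INTERNAL

print(egress_allowed("10.0.1.7", 22))         # → True
print(egress_allowed("198.51.100.7", 14444))  # → False
```

In practice this would be enforced with firewall rules rather than code, but the decision logic is the same: a miner trying to reach an arbitrary pool on the internet simply has no route out.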

Also, I will be out until the 19th, for vacation, and also for Flock.
In case of emergency, do as usual, don't panic.

Michael Scherer
Sysadmin, Community Infrastructure
