VMs are bad because they make it much easier for an attacker to get a process on the same CPU as yours; nothing more, nothing less.
For VM-specific material, read e.g. http://cseweb.ucsd.edu/~hovav/dist/cloudsec.pdf, in particular section 8.4, and note that keystroke timings are usually enough to recover plaintext (passwords are more difficult, but it should still give a good guess). The cache-based covert channel is interesting as well, mostly because it suggests that other cache-based attacks are possible.
Side-channel attacks work just fine outside of academic environments, but the people performing them are testers under NDA (consider Common Criteria for smart cards) or working for various intelligence agencies; they're unlikely to run their mouth on the internet.
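The cache-based covert channel mentioned above can be sketched as a toy, deterministic simulation of the prime+probe idea: the sender signals a 1 by evicting the receiver's data from a set of cache lines, and the receiver detects that by timing its own reloads. Everything below is simulated; a real attack times actual memory loads (e.g. with rdtsc), and the latencies, set count, and tenant names are illustrative assumptions, not measurements.

```python
HIT_CYCLES, MISS_CYCLES = 4, 120   # assumed load latencies (cycles), not measured
NUM_SETS = 8                        # assumed number of monitored cache sets

class SharedCache:
    """One line per set; a set 'hits' only for the tenant that touched it last."""
    def __init__(self):
        self.owner = [None] * NUM_SETS

    def access(self, who, s):
        hit = (self.owner[s] == who)
        self.owner[s] = who
        return HIT_CYCLES if hit else MISS_CYCLES

def prime(cache):
    # Receiver fills every monitored set with its own data.
    for s in range(NUM_SETS):
        cache.access("receiver", s)

def probe(cache):
    # Receiver reloads its data and times it; slow reloads mean the sender
    # evicted the sets in between, i.e. transmitted a 1.
    total = sum(cache.access("receiver", s) for s in range(NUM_SETS))
    return 1 if total > NUM_SETS * HIT_CYCLES else 0

def send_bit(cache, bit):
    # Sender transmits 1 by touching (evicting) every set, 0 by staying idle.
    if bit:
        for s in range(NUM_SETS):
            cache.access("sender", s)

def transmit(bits):
    cache = SharedCache()
    received = []
    for b in bits:
        prime(cache)
        send_bit(cache, b)
        received.append(probe(cache))
    return received
```

This is only the signaling primitive; a real cross-VM channel is noisy and needs framing and error correction on top, which is part of what makes the paper's demonstration interesting.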
> VMs are bad because they make it much easier for an attacker to get a process on the same CPU as yours; nothing more, nothing less.
The paper cites a co-residency success rate of less than 25%, and that figure is specific to EC2. With more cores on the host this attack rapidly becomes impossible, since domUs rarely share cores. Which is why I asked for actual evidence of cryptographic compromise in the wild, not yet more papers. You suffer from academia. The fact that something has been written up in a paper and demonstrated to enable the same timing attacks that network access already enables (with reference to extracting passwords) does not mean "any cryptographic algorithm I use is compromised".
You said "assume the NSA can break your crypto". Since everybody likes to bang on EC2, I would like evidence of how that is accomplished, given that you used the words "very real". You bring up a good point: if it is being done, it is under NDA. And, without realizing it, you have admitted that you have never heard of it being done.
Which raises the question: how can you say "very real"? Have you ever observed a cryptographic compromise via a CPU side channel exacerbated by virtualization, or have you merely read about it?
At any rate, I look forward to your results in a couple of months after you side-channel a neighboring domU and compromise their crypto. Once you're ready I'll give you a couple of my own domUs to demonstrate on free of charge.
To your inevitable followup question: no, I'm not going to talk to you about it.
To your overarching point: if you are on a cloud platform that promises you will never share hardware with any other company – which virtually nobody is – you are still at greater risk simply by being on a nanosecond-timeable switched network with your attackers. But local crypto timing attacks are far more powerful.
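To make concrete why timing attacks work at all, here is a minimal sketch of the classic early-exit comparison leak. The secret, helper names, and use of a comparison counter in place of a real clock are all illustrative assumptions; a real attacker measures wall-clock time, and co-residency on shared hardware just gives them a far lower-noise clock than the network does.

```python
SECRET = b"hunter2!"  # hypothetical secret for illustration

def leaky_equals(secret, guess):
    """Early-exit compare; returns (equal?, comparisons performed).
    The comparison count stands in for elapsed time: work done is
    proportional to the length of the matching prefix."""
    ops = 0
    for a, b in zip(secret, guess):
        ops += 1
        if a != b:
            return False, ops
    return True, ops

def best_next_byte(prefix):
    """Attacker extends the recovered prefix one byte at a time by picking
    the candidate that makes the comparison do the most work."""
    scores = {}
    for c in range(256):
        guess = prefix + bytes([c]) + b"\x00" * (len(SECRET) - len(prefix) - 1)
        eq, ops = leaky_equals(SECRET, guess)
        scores[c] = ops + eq  # a full match (eq is True == 1) wins outright
    return max(scores, key=scores.get)

def recover():
    prefix = b""
    while len(prefix) < len(SECRET):
        prefix += bytes([best_next_byte(prefix)])
    return prefix
```

In real code the fix is a constant-time comparison such as `hmac.compare_digest`; the point of the sketch is only that the signal exists, and that the attacker's problem is measurement noise, which virtualized co-residency dramatically reduces.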
Nobody bothers with this stuff because application-layer attacks are so easy that there's little impetus to develop and mainstream side-channel exploitation techniques. You're naive indeed if you think that's a gauge of how practical those attacks are.
I wonder where your stridency on this topic comes from. I've read all your comments here (I mean all of them, on HN, period) and I haven't been able to discern what background you might have in software crypto security. You're saying something that contradicts virtually every other software crypto person I know, which is why I wonder.
You talked earlier about "all the papers you read being theoretical" (I'm paraphrasing). Which papers would those be? Because I'm a little familiar with this research (we pirated it gleefully for our virtualized rootkit detection talk several years ago), and, relative to the crypto literature at large, x86 side-channel research is striking in how non-theoretical it is; to wit: most crypto papers don't come with exploit-testbed how-tos.
So your NDA allows you to acknowledge that a side-channel cryptographic compromise is possible but not give any details? That's a really funny NDA. I call bullshit.
Since I have executed one with my employer, yes, I do.
For example, if you asked me directly whether such an attack was possible, I could not answer you due to my NDA, even though I have personal experience with the matter. You seem really eager to answer that it is, though.
None of the NDAs I have signed said anything like "you can't say how, but you can say that we pulled it off". In fact, most of them have been along the lines of "you don't talk about Fight Club".
Can we deduce that you are either willing to violate your NDA by writing that you have observed such an attack, or that you never executed an NDA covering that specific attack? Yes.