
"Buys you time" for what? For fixing it? Why not do it right in the first place?

The danger in this line of thought is that a security breach only has to happen once for real damage to ensue. When it happens to a software company, I expect to see a clear explanation of why it happened, and in the case of stupid architectural problems I tend to avoid that company in the future.

Of course there's a difference between software and real-world undercover operations performed by government agencies. Placing an agent undercover is both a gamble and a race against the clock. The government knows the risks involved, the agent knows the risks involved; human casualties can happen, but that's part of the contract, so to speak. Nobody willingly enters such operations without knowing the risks involved.

But if you're gambling with customers' data, be prepared to explain that to the angry customers when the shit hits the fan.



As far as I know, we've never yet created a cryptographic algorithm that's withstood more than 20 years of scrutiny. We could assume this trend continues into the future, and that any crypto algorithm we do create will be broken within 20 years. This means that encrypting a piece of data isn't some magical eternal protection--it just seals it in a time-capsule that'll "degrade" after 20 years or less. (Sometimes far less.)

But, so far, this property has also been pretty much irrelevant: almost all the things we want to do by passing along a secret are time-sensitive, and breaking the secret 20 years after the fact doesn't really buy you anything. Being able to impersonate the SSL key of Microsoft.com-as-it-was-in-1993 doesn't let you do anything to Microsoft.com-as-it-is-today.

This policy scales down, of course: in military comsec terms, you only need the encryption on operational details to last until the day after the operation is carried out. After that, your "secret" has become "plain" (something quite obviously blew up, etc.) and so the enemy breaking the encryption on the orders won't tell them anything they didn't already realize by hearing the explosion.

This is why the military keeps multiple different kinds of ciphers for different levels of secrecy, by the way: they assume that the more things they use a particular crypto algorithm for--the more signals the enemy gets to intercept that use that algorithm--the more enemy sigint folks will be put to the task of breaking that algorithm. So "top secret" encryption isn't meant to withstand any more scrutiny than "secret" encryption; it's generally just built from crypto primitives orthogonal to the ones in the merely-secret crypto, and used only rarely, for the kinds of orders that need to stay secret long after execution (e.g. covert ops on allies.) Thus, enemy nations will have comparatively little reason to have analyzed and broken it--and breaking the secret-level ciphers won't help them, because of the orthogonal implementations.


You're right, of course, but we need to make a distinction here.

First of all, there's the question of how strong an encryption algorithm is. For example, RSA is based on the problem of factoring large numbers, a problem that's generally considered hard because we know of no efficient algorithm for solving it. But we haven't proved that factoring large numbers will remain hard in the future. The NSA could very well have custom hardware for efficiently factoring 1024-bit moduli by now, and quantum computing, when it arrives, is a real threat. Even if they haven't done it yet, 1024-bit keys will become breakable in the future; 2048-bit keys are another issue entirely, and 4096-bit keys will probably stay unbreakable.
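
To make the scale concrete, here's a toy sketch in Python (not real crypto, and with made-up toy numbers) of why factoring is the whole game for RSA: recovering p and q from n = p * q is instant for tiny moduli, but the cost of the naive approach blows up with the bit length, which is why attacking 1024-bit moduli takes specialised algorithms and serious resources.

    import time

    def factor_by_trial_division(n):
        # Brute-force search for the smallest factor; the cost grows roughly
        # exponentially with the bit length of n.
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        raise ValueError("no factor found")

    p, q = 65521, 65537          # two small primes (toy example only)
    n = p * q                    # the public modulus

    start = time.perf_counter()
    print(factor_by_trial_division(n),
          "in %.4f seconds" % (time.perf_counter() - start))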

But even if breakthroughs in integer factorization are made in the future, as long as P != NP, perfect encryption is possible. In fact, we already know of encryption schemes that are provably unbreakable even with unlimited hardware at the attacker's disposal; the problem is that they are also hard to put into practice, so we ended up making tradeoffs.
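
The textbook example of such a scheme is the one-time pad, which is presumably the kind of thing meant here. A minimal sketch in Python, assuming a truly random key as long as the message and never reused:

    import secrets

    def otp(data, key):
        # XOR is its own inverse, so the same function encrypts and decrypts.
        assert len(key) == len(data), "key must be as long as the message"
        return bytes(d ^ k for d, k in zip(data, key))

    message = b"attack at dawn"
    key = secrets.token_bytes(len(message))   # truly random, used exactly once
    ciphertext = otp(message, key)
    assert otp(ciphertext, key) == message
    # Information-theoretically secure, but the key-distribution and
    # single-use requirements are exactly the tradeoff mentioned above.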

Second, it's far easier to attack a particular implementation and bypass the encryption algorithm entirely: attacks against the key-generation system, side channels, the protocol of the software system in question, and so on. Software always has bugs, including zero-day exploits that an attacker can make use of.
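
As one hedged illustration of what "attacking the implementation" looks like: a MAC check written as a plain equality test can return early on the first mismatching byte, so response times leak how much of a forged tag is correct. The key below is hypothetical, the fix is the standard-library constant-time comparison, and none of it has anything to do with the strength of the cipher.

    import hmac, hashlib

    SECRET_KEY = b"server-side key"     # hypothetical key, for illustration only

    def mac(message):
        return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

    def verify_leaky(message, tag):
        # Plain comparison may return early on the first mismatching byte,
        # leaking timing information about the correct tag.
        return mac(message) == tag

    def verify_constant_time(message, tag):
        # Standard-library fix: comparison time doesn't depend on the data.
        return hmac.compare_digest(mac(message), tag)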

For this reason, if the military is indeed using different encryption algorithms for different security levels, algorithms that aren't used in the wild, then to me that's a pretty bad idea: far more often than not it's the implementation that's broken, not the algorithm. And in the case of inside leaks, the implementation is always easier to get hold of than the key.


Military comsec basically has to assume the implementation will be immediately made available to enemy sigint, since they build crypto implementations into things like secure phones which can just be stolen off a dead soldier and pulled apart.

The implementations can be upgraded in the field when a flaw is found, as with any firmware (and frequently an implementation will be cycled out for a different one even if it is thought to be unbroken, just so any effort that's been put into breaking it goes to waste.) But enemy governments are precisely the people with enough resources, and reason, to want to break entire algorithms.

The thing is, it is a hard problem--so they only bother to break algorithms where they know they'll get big rewards for doing so ("top secret" doesn't usually mean more valuable to enemies, after all; usually it just means "fewer people should know this ever happened.") A standard secure phone will have all the Suite A and Suite B ciphers[1] built into it, but since so many more transmissions will be using Suite B ciphers, there'll be comparatively less strategic advantage in cracking the currently-used Suite A cipher before it's cycled out for the next one. So Suite B ciphers sometimes do get cracked during their "useful shelf-life" and have to be immediately switched, while Suite A ciphers are usually left alone.

---

[1] http://en.wikipedia.org/wiki/NSA_Suite_A_Cryptography, http://en.wikipedia.org/wiki/NSA_Suite_B_Cryptography


Diffie-Hellman and RSA are older than 20 years, aren't they?

Also, DES has never been "broken", only brute-forced.
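
A back-of-the-envelope sketch shows why brute force was enough for DES: the keyspace is only 2^56, so it comes down purely to hardware speed. The keys-per-second rate below is an assumption for illustration, not a benchmark; the EFF's Deep Crack machine made the same point in practice back in 1998.

    DES_KEYSPACE = 2 ** 56
    KEYS_PER_SECOND = 10 ** 12            # assumed rate for dedicated hardware

    worst_case_hours = DES_KEYSPACE / KEYS_PER_SECOND / 3600
    print("DES worst case: %.1f hours" % worst_case_hours)        # ~20 hours

    # The same approach against a 128-bit keyspace is hopeless:
    aes_years = 2 ** 128 / KEYS_PER_SECOND / (3600 * 24 * 365)
    print("128-bit keyspace at the same rate: %.2e years" % aes_years)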


Seemed to work for Skype. Took quite a while to reverse engineer the protocol, and I'm not sure there are any proper cleanroom implementations. That's a solid business win for them.


It did not work for Skype[1]. This conversation has been about security through obscurity. You're describing their competitive advantage: competitors couldn't build external interfaces to the protocol. That's not a security win, it's a business win, as you said.

[1] http://en.wikipedia.org/wiki/Skype_security#Flaws_and_potent...


Please point out security flaws in Skype's voice protocol. That list is a list of problems and flaws with Skype's software (which is, in general, shit). It doesn't seem to document any crypto failures. The largest security failing listed there is that it pulls ads over an unencrypted connection.

For all we know, the core Skype protocol may be perfectly implemented. The Wikipedia link states there's no peer-review.


So because we can't study it we assume it's perfect? That's the essence of the fallacy of security through obscurity.

As a corollary, just because you can't do a quick Google search for 0-days in iOS or Windows doesn't mean they don't exist. In fact they do, and they're bought and sold on black markets or kept secret by governments and the like.

You don't assume something is secure because you can't readily find documented flaws. You assume something is secure when it has undergone rigorous peer review, which, as you stated, hasn't happened here.

Your argument seems to be that you can't simply find a laundry list of Skype flaws floating around. This is true. But it says positively nothing about the security or lack of security regarding Skype's protocol.


You said flat-out that it did NOT work. I'm saying that it's not determinable, and so far no published security holes in Skype's protocol exist. In fact, no really solid details exist, despite plenty of people trying. Skype is probably the most popular IM/voice/video protocol in the world.

I agree that Skype's protocol may be terrible. But you cannot state that obscurity didn't help. "No one" is even able to connect to Skype, let alone break it, at this point.



