
It's getting attention because:

- It's an easy to spot bug,

- in the most critical part of the code,

- of a fundamental security library,

- and it's been there for a long time, nobody knows how many systems have already been compromised due to it.

With this bug, Apple's library isn't actually an SSL implementation. It does not perform the most essential part of an SSL implementation - verifying that the peer possesses the private key it claims to possess.
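(For anyone who hasn't seen it: this is roughly the offending stretch of SSLVerifySignedServerKeyExchange in Apple's published sslKeyExchange.c - trimmed and quoted from memory here, so treat it as illustrative rather than verbatim.)

    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;      /* duplicated line: always taken, so everything below is dead */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx, ctx->peerPubKey, dataToSign, dataToSignLen,
                       signature, signatureLen);

    fail:
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;     /* err is still 0 from the last successful update(), i.e. "verified" */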

> it seems far less crucial than remote-code-execution

> I don't understand why this is getting more attention than other (seemingly) more dangerous exploits.

Because it's the foundation for a heap of applications and services that were believed to be secure due to their use of SSL. If you consider those, this is worse than any bug in any single application. It's not just one application with a critical vulnerability - effectively, it's half (or however many) of all OS X and iOS apps with a critical vulnerability. All you need to do is go browse the net in a coffee shop, and some stranger can easily do things like:

- Pwn your box (MITM the auto update). Actually with this he can do all of the following too.

- Steal your money (MITM your bank connection).

- Steam your online accounts, including email.

It's not "just a bug". Yes, everyone makes mistakes, we're all human. But it's completely unacceptable that those mistakes get unnoticed and into production code of such a critical component, and deployed to millions of users.

> That's certainly a problem, but most people are using trustworthy ISPs

Your argument seems to be that it's not a big issue if the security is totally broken, since we don't need security in the first place.



> But it's completely unacceptable that those mistakes go unnoticed, make it into the production code of such a critical component, and get deployed to millions of users.

The handling has been abysmal as well. They dropped a 0-day on themselves by releasing the iOS update, and then delayed the fix by several days, apparently so they could release it along with the FaceTime integration.

And even then they don't mention it in the release notes![0] If you look at the release notes for this update, you'd have no idea how important this is if you didn't already know.

[0] The release notes (http://support.apple.com/kb/HT6114) link to this: http://support.apple.com/kb/HT1222 , which as of right now, lists Dec. 16th as the most recent OS X security update.


> The handling has been abysmal as well. They dropped a 0-day on themselves by releasing the iOS update, and then delayed the fix by several days, apparently so they could release it along with the FaceTime integration.

The only alternative would have been to delay the iOS release, which they didn't do because this bug was almost certainly already being exploited in the wild. All this did was make more people aware of it, and only then for a few days.

As for the OS X release, I'm sure they released it as fast as they could. It has nothing to do with releasing it along with FaceTime integration, and everything to do with the fact that 10.9.2 was already going through the GM process, and it was faster/easier to add this fix into that and continue trying to validate the GM than it was to spin up an entirely new train for a 10.9.1.1 with just this fix and try to validate that.


>The only alternative would have been to delay the iOS release

Right. This is basically Apple violating their own "responsible disclosure" policy and announcing a 0-day vulnerability in OS X.

They should have delayed the release of the iOS patch until the OS X one was ready. This is the whole point of responsible disclosure: maybe the vulnerability is being used in the wild, but by delaying disclosure until the vendor can patch it, the potential for exploitation is greatly reduced.

>All this did was make more people aware of it, and only then for a few days.

You say that as if it's not a big deal...


As I said, the iOS bug was almost certainly already being exploited. Delaying the release of a fix for that seems like the absolute last thing anyone should be suggesting they do.


As a fellow Mac user: your apologism is showing.

There is no justification for this bug. It never should have shipped. It never should have gone unnoticed for so long. It never should have been announced prior to a patch being available.

No matter how you slice it, Apple failed miserably, and "iOS was probably being exploited" is not an excuse. Apple has how much money? How much money do you think it costs to put their entire Core OS engineering staff on SHIPPING AN UPDATE FOR BOTH OPERATING SYSTEMS?

They could have afforded it. They were simply too incompetent, after a chain of incompetence, to do so.


In arguing that they should add more people in order to ship faster, the only incompetence on display is your own. That's not how software development works, which you should know if you've done it professionally.


Huh? It's a one line change. The patch has to be validated across the entire testing matrix of their entire product line. That is a trivially parallelizable problem.

Don't cargo cult 'common wisdom'; the only incompetence on display here is your axiomatic acceptance of things you don't understand.


If you'd meant QA, you would have said QA, not engineering. You don't want engineers doing QA, which you would also know if you actually worked in the industry. They're notoriously bad at it. You'd also know that a test cycle takes a certain amount of time, and for something as complex as OS X, that amount is going to be measured in days per configuration, and there's nothing you can do about that -- adding more people will, again, just slow it down. Admit you don't know what you're talking about and move on. Or just stop talking, whatever.


I worked at Apple, in that department, so yes, I'm aware of what I'm saying and why.

Stop trying to acquire internet points by being a jerk.


> I worked at Apple, in that department

Please have the bridge delivered to my home between noon and six.

(Though, really, I should just accept this absurd statement, since it amounts to you admitting your own incompetence.)

> Stop trying to acquire internet points by being a jerk.

This from the guy who decided his scintillating contribution to the thread would be redundantly accusing people of "apologism" and "incompetence". You do understand the people who actually do work at Apple are human beings, and that you are flinging insults at them, right?


> Please have the bridge delivered to my home between noon and six.

Why? Do you not already have a bridge to troll under?

> You do understand the people who actually do work at Apple are human beings, and that you are flinging insults at them, right?

Yes, and I know who they are.


The point of responsible disclosure (as opposed to telling the company and then not telling anyone) is to force the company into action, and to force them to fix it with the threat of public disclosure later.


> As for the OS X release, I'm sure they released it as fast as they could. It has nothing to do with releasing it along with FaceTime integration, and everything to do with the fact that 10.9.2 was already going through the GM process, and it was faster/easier to add this fix into that and continue trying to validate the GM than it was to spin up an entirely new train for a 10.9.1.1 with just this fix and try to validate that.

If this is true, then their process could use some adjustment. Contrast with Google Chrome, which has the regular cadence of changes moving through release channels, but also the ability to update virtually all clients within a matter of hours if a critical issue is found.

(I realize there is a lot more QA necessary for an OS update, but I'm not convinced that a fix for this specific bug would have taken a long time to QA. Certainly not anywhere near as long as we've waited for this update, or as long as a lot of people will delay installing it because it is huge.)


To hell with the GM process. There should be a way to push out simple changes like this as soon as possible, for important cases like this one.


That's a great way to let a bad build slip out, which would do significantly more harm than any bug it could possibly hope to fix.


Which is why you need a process for shipping out emergency fixes. Microsoft can do it in 24 hours, and on the desktop, the impact of a broken build for Microsoft is staggeringly large when compared to Apple.


The GM process is there precisely to stop bugs like this making it into production. Who knows how many potential bugs it has stopped. You can't know.

To play devil's advocate a bit: their process still needs some work. There isn't a good reason why they couldn't have released this patch through its own approval process, run simultaneously and at a higher priority, so staff would choose it over FaceTime.


From https://gotofail.com/faq.html: "I have been seeing Apple IP addresses hitting the site with fixed browsers identifying as OS X 10.9.2 since Saturday morning Cupertino time."


> "It's not "just a bug". Yes, everyone makes mistakes, we're all human. But it's completely unacceptable that those mistakes get unnoticed and into production code of such a critical component, and deployed to millions of users."

This is not a reasonable argument. At Pwn2Own each year, how many browsers have vulnerabilities that allow remote code execution? All of them. How many of these vulnerabilities are zero-days? A significant number of those. This happens every single year. Even the advanced protections in e.g. Chrome don't stop new vulnerabilities from being found on a regular basis. And all of these products are deployed to millions of users.

You could complain about Apple's response to this bug, and that might be a reasonable complaint to make. At least Google patches bugs quickly when they surface at Pwn2Own. But that's different from claiming the bugs shouldn't have existed (or should have been caught before making it into production). Bugs are fundamentally hard to find, and it's not really getting any easier.


While it's true that almost all software has bugs that can result in exploits, I think most of the exploits used in Pwn2Own are typically the result of complex interactions between subsystems that are hard to predict. As software gets more complex, the attack surface increases.

The Apple bug isn't really in that class of exploit. It's a simple coding/merge error, and it's actually a regression from previously working code. One of the things that worries me is that this bug would have been caught so easily with basic unit tests.

    Test 1: Make connection to server with a valid SSL certificate [PASSED]
    Test 2: Make connection to a server with an invalid SSL certificate [FAILED]
Are we meant to think that Apple's build process doesn't use unit testing? Seems unlikely. Or perhaps this component didn't warrant an extensive test suite? I hope not! Not really sure what the explanation is.

I mean, I certainly can't claim that all my code is run through extensive tests before every deploy, but then I'm not working on the security tools that underpin an entire operating system.


In this case, the very test you're describing would not have worked. For a better writeup, see agl's post[1] on the matter. The basic gist of it: On affected systems, the server may use any combination of private key and certificate. Most SSL libraries used on the server side will make sure the moduli of cert and private key match (and abort if this isn't the case). Unit testing would thus require a server with some modifications to its SSL library (OpenSSL, in most cases).
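(For reference, on the OpenSSL side that consistency check is roughly what SSL_CTX_check_private_key() does - so a test server for this bug would need that check patched out or skipped. A minimal sketch, with hypothetical filenames and no error handling:)

    /* mismatch_check.c -- sketch of the server-side sanity check described
       above, using OpenSSL. "victim.crt" and "attacker.key" are made-up
       names. Build (roughly): cc mismatch_check.c -lssl -lcrypto          */
    #include <stdio.h>
    #include <openssl/ssl.h>

    int main(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        SSL_CTX_use_certificate_file(ctx, "victim.crt", SSL_FILETYPE_PEM);
        /* With a key that doesn't match the cert, OpenSSL already complains
           when the key is loaded, and the explicit check below fails too;
           a stock server setup aborts at this point.                       */
        SSL_CTX_use_PrivateKey_file(ctx, "attacker.key", SSL_FILETYPE_PEM);
        if (!SSL_CTX_check_private_key(ctx)) {
            fprintf(stderr, "certificate and private key do not match\n");
            return 1;
        }
        return 0;
    }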

What would have caught the bug: automatic code indentation or any sort of compile-time warnings about dead code.
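To make that concrete, here's a minimal sketch with the same shape as the bug (this is not Apple's code); compiling it with clang's -Wunreachable-code, which notably is not enabled by -Wall, should flag the dead signature check:

    /* goto_fail_shape.c -- same shape as the bug, not Apple's actual code.
       Try: clang -Wunreachable-code goto_fail_shape.c                      */
    #include <stdio.h>

    static int hash_update(void)      { return 0; }   /* pretend it succeeds  */
    static int verify_signature(void) { return -1; }  /* pretend sig is bogus */

    static int check(void) {
        int err;
        if ((err = hash_update()) != 0)
            goto fail;
            goto fail;                           /* duplicated line: always taken */
        if ((err = verify_signature()) != 0)     /* dead code: never reached      */
            goto fail;
    fail:
        return err;                              /* still 0, i.e. "verified"      */
    }

    int main(void) {
        printf("check() = %d\n", check());       /* prints 0 despite the bad sig  */
        return 0;
    }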

[1] https://www.imperialviolet.org/2014/02/22/applebug.html


I see what you mean; the amount of work and foresight needed to predict the bug and write a test for it does seem unrealistic in that light. Hindsight is 20/20, etc.


Two entire operating systems.


One single library.


As I said: It's not just another security bug. It's an easy to spot bug, in the most critical part of the code, of a fundamental security library. THIS is what makes it unacceptable. It pretty much means the change has never gone through code review, or has been planted.


Having worked at AWS and seen outages posted here, complete with wildly inaccurate summaries (guesses) of what the problems were and wildly simplistic fixes that people assumed could easily be put in place, I can say: systems like this are more complicated than you think they are.


A huge +1 to this, from someone who worked at Facebook.


Agreed. Everyone enjoys playing armchair critic, and the majority of the time they have no idea how the internal systems are structured or managed.


> It pretty much means the change has never gone through code review, or has been planted.

Or Hanlon's razor: "Never attribute to malice that which is adequately explained by stupidity." Going straight to dumb-ass conspiracy theories is what is unacceptable.


Hence the "or". In this case, stupidity is equally unacceptable.


No. To err is human. In the grand scheme of things, it's a PR fuck-up; nothing more. I doubt it affects you directly enough as a Gentoo user to have such a strong reaction anyway, but if it makes you feel superior, then all power to you. Lighten up, have a look at this, and ask yourself if you have honestly never done the same thing: http://xkcd.com/292/ If you think you haven't, I can guarantee that you are deluding yourself...


The error is not the programmer's. The fault is not in the code; it's in the processes that were chosen by management.

Indeed, to err is human. This is why you're a negligent jackass if you don't plan for errors and build multiple systems to prevent and detect them, at least until computers start programming themselves for us.


I am interested to hear your opinion: at what point should a vulnerability be considered unacceptable?


I don't even know what that would mean, to make an existing vulnerability 'unacceptable'.

Vulnerabilities of all kinds exist. We need to find them, learn from them and we need to fix them.

Getting hung up on whether or not they are 'acceptable' is just kind of weird.

Bad stuff happens, incompetence happens, mistakes happen. None of that is 'acceptable', but it happens just the same.

Creating an environment where some kinds of mistakes are 'unacceptable' doesn't eliminate those kinds of mistakes, it just causes people to stop reporting them.

Complaining about their release cycle makes some sense. Complaining about the existence of the bug is basically just howling at the moon.


part of growing up is learning to become the kind of hypocrite you can live with.

if you think you operate and hold yourself to a much higher standard, then go ahead and complain all you want. maybe you do... maybe


This is so different from Pwn2Own I don't even know where to start. This failure case shows up in the most basic test case of what the library is supposed to do. The whole point of having certificate validation is that you identify invalid certificates. Try to come up with a reason why there are no regression tests for the library, or why there wasn't a regression test verifying that, in the default case, an invalid certificate is reported as such.


It's even more unacceptable that it took them FOUR DAYS to fix it, just so they could add a couple of features to FaceTime while they were at it.


Have you seen the content of the security update? It doesn't really excuse the 4 days between iOS and OS X updates, but it's a hell of a lot more than FaceTime updates.

http://support.apple.com/kb/HT6150


While this is too long IMO, similarly critical bugs in Windows have been left unfixed for years.


While that may have been true, I dare say that the burnt child that is today's Microsoft would not have mishandled this so horribly.


10.9.2 has been in beta for weeks and was evidently just about to be released. It made a lot of sense from a QA perspective to do it just the way they did.


I'm fully on board with the "bugs happen" point of view, but if they really have had this particular fix in the pipeline for weeks, then no, they really really should have done an out of band update of a much smaller scope.


FaceTime Audio on Mac! Heck yes, finally.


How many malicious exploits have occurred in the last 5 days? I'm genuinely curious if anyone has a guess.

Wouldn't you need to have control of the router Starbucks is using to set up the MITM attack?


Because iOS (and OS X) automatically connects to known WiFi hotspot names, it's possible to create a hotspot with the same name as Starbucks's WiFi, even if you're nowhere near a Starbucks, and iOS will happily connect to it if possible (unless other preferred networks are around and it picks them over yours).

Also, a lot of smaller coffee shops will just set up a wifi router, give it a password, and call it done, when many of those routers have known, exploitable flaws.

On top of all that, there are lots of Asus routers out there running firmware that can be remotely exploited and for which there is no patch[1]. Or Linksys routers[2]. Or D-Link[3].

All an attacker needs to do is change your DNS server settings and they can send you to any server they want instead of the server that you expected. They could redirect all HTTP and HTTPS requests to common services through their servers, MITM your connections to Facebook, Twitter, Apple, Bank of America, your e-mail, your health insurance website, etc.

On its own that's safe, since SSL should protect you even on a hostile network, but if someone has control of your WiFi router (which is apparently trivial), then with this bug it's entirely possible for them to snoop on a huge swath of your supposedly secure internet traffic.

The only real saving grace here is that OS code signing hasn't been compromised, so the system won't install a backdoor'ed 10.9.2 update. At least that part of the chain is secure and people can update.

    [1] http://threatpost.com/unpatched-vulnerabilities-disclosed-in-asus-home-routers/101317
    [2] http://www.pcworld.com/article/2098520/exploit-released-for-vulnerability-targeted-by-linksys-router-worm.html
    [3] http://hexus.net/tech/news/network/61245-easy-exploit-backdoor-found-several-d-link-router-models/


If you configure that hotspot name with WPA-PSK, it should not attempt to connect to hotspots with the same name that are not WPA-PSK. If the hotspot it tries to connect to does not have a matching PSK, it should fail the handshake (and no, the client isn't disclosing the PSK to anyone during the handshake). If any of this is not true, that would be another vulnerability.

Also WPA-PSK is useless for coffee shops because you have to give the PSK to everyone, and with that the ability to impersonate the AP. For that purpose, EAP/TLS is better, assuming you take care to verify the AP's certificate.


Anyone can hide his access point in a backpack and claim to be Starbucks. Or anything else - most people don't care, they'll hook themselves to just about anything.

This isn't true of home networks using PSK, where the AP and the client need an identical PSK in order to authenticate each other. Yes, each other - if you successfully connect to an AP using a PSK, you can be pretty sure that AP knows the same key, and probably isn't someone impersonating it (note however that anyone with the PSK can impersonate the access point).

Remember kids, when you connect to a network without a PSK or more elaborate authentication where you verify the identity of the AP, you generally have no idea who is operating that network.


What stops someone from doing that anyway, with their own hot spot, and just serving a self-signed certificate? Will the browser remember the old certificate, and put up the warning?


A self-signed certificate will throw an error in the browser because the certificate chain isn't trusted (even if you have the appropriate key).

In SSL, you have the certificate and the key; the key is private and secret, and the certificate is public. A public certificate which is wrong (e.g. self-signed) does you no good because browsers won't trust it (and many of them have made it frustrating to try to bypass the warning).
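To see the chain-of-trust failure concretely, here's a rough sketch using OpenSSL (not the Secure Transport code path being discussed, and "self_signed.pem" is a made-up filename). Verification typically fails with a "self signed certificate" error because the chain never reaches a trusted root:

    /* verify_demo.c -- sketch: try to verify a self-signed cert against the
       system trust store. Build (roughly): cc verify_demo.c -lcrypto        */
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509_vfy.h>

    int main(void) {
        FILE *fp = fopen("self_signed.pem", "r");      /* hypothetical file */
        if (!fp) return 1;
        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        if (!cert) return 1;

        X509_STORE *store = X509_STORE_new();
        X509_STORE_set_default_paths(store);           /* system CA bundle  */

        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        X509_STORE_CTX_init(ctx, store, cert, NULL);

        if (X509_verify_cert(ctx) != 1) {
            /* Expect something like "self signed certificate": no path to a
               trusted root, so clients refuse it no matter who holds the key. */
            printf("rejected: %s\n",
                   X509_verify_cert_error_string(X509_STORE_CTX_get_error(ctx)));
        }
        return 0;
    }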


I may be wrong, but is this part of WebKit and not just Safari? In which case this isn't solely Apple.


It's part of neither. The bug is in Secure Transport, the system SSL/TLS implementation in OS X and iOS, which Safari, Mail, and many third-party apps (almost all apps that use SSL/TLS on iOS and OS X) rely on for TLS communications.


What does it mean to "steam your online account"?



