Hacker News

Modern phones are more like general-purpose computers than game consoles. The console argument from Apple is disingenuous and gets far too little pushback from courts. The same goes for Apple's argument that developers who don't like the App Store rules should make web apps, even as it limits Safari's support for PWAs and restricts third-party browsers to an older, slower JavaScript engine.

From a different angle, corporations are not people; they do not inherently deserve the same consideration as people. Sideloading gives actual individuals more flexibility in how they use the device they purchased with their hard-earned dollars. It also preserves the freedom to keep installing apps that might be removed under government pressure. "It's their platform" holds absolutely no weight as an argument in my mind; it reflects excessive deference to corporations.

Apple should be forced to allow sideloading: the real-world use of the devices it makes is broader than it argues in court, it is a company and not a person, and its other actions restrict developers' ability to take advantage of the very alternative Apple itself promotes.

"Less bureaucracy" is supposedly the solution, yet the United States, famously lax about regulating tech, has managed to support only two truly viable mobile operating systems. Not even Microsoft wants to be in the game. This indicates that the bar is much higher than "they should just go make their own," and therefore we can expect more of the behemoths.



For those complaining about the ban on third-party JIT engines for third-party browsers: please consider the vulnerability track records of Google Chrome, Mozilla Firefox, and Safari.

I'm taking the best year for all of these (2024); there are far worse years in the past five that could have been picked.

Chrome had 107 vulnerabilities that were Overflow or Memory Corruption. That is a vulnerability every 3.4 days. [0]

Firefox had 52 vulnerabilities that were Overflow or Memory Corruption. This is a vulnerability about every 7 days. [1]

Safari had 10 vulnerabilities that were Overflow or Memory Corruption. This is a vulnerability about every 36 days. [2]

Sources:

[0] <https://www.cvedetails.com/product/15031/Google-Chrome.html?...>

[1] <https://www.cvedetails.com/product/3264/Mozilla-Firefox.html...>

[2] <https://www.cvedetails.com/product/2935/Apple-Safari.html?ve...>
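The per-day figures above are just the length of the year divided by the vulnerability count, which is easy to check:

```python
# Days between disclosures, assuming the 2024 counts quoted above
# (107 for Chrome, 52 for Firefox, 10 for Safari) and a 365-day year.
counts = {"Chrome": 107, "Firefox": 52, "Safari": 10}

for browser, n in counts.items():
    # 365 / 107 ≈ 3.4, 365 / 52 ≈ 7.0, 365 / 10 = 36.5
    print(f"{browser}: one vulnerability every {365 / n:.1f} days")
```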


How many of those vulnerabilities were related to JITs? How many were actually feasibly exploitable, and not just theoretical? How many would have resulted in something actually dangerous (code execution, privilege escalation) and not just something annoying (denial of service)?

How many people are actively doing security research on each browser? Is the number of finds per browser more a function of how many eyeballs are on it than how many issues actually exist?

I don't doubt that there are actual, real differences here, but presenting context-free stats like that is misleading.


Your criticism is valid. Adding context is very subjective, though, and getting objective metrics for some of these questions is an open issue for the software world.

I don't think it matters whether the vulnerabilities are JIT-related: a process that can JIT can create executable code, so any exploitable (controllable) overflow or memory-corruption vulnerability in it CAN be pivoted into arbitrary code execution.

The problem with CVEs is that proving exploitability is not required for one to be assigned, and proving it can take a lot of effort from one or more people. Earlier this week a researcher quoted "weeks" per bug to me; they were talking about some of the Chrome bugs, and said it was not possible to keep up with the number of bugs being found.

I believe (but cannot back it up) that security bugs follow a bathtub curve for each change set. If there is a lot of churn in your code base, you will sit near the high point of the curve for the whole project. The kind of change probably matters quite a bit, too: working for high performance seems (again, a feeling) to increase the chance of introducing a security vulnerability.
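To make the bathtub intuition concrete, here is a toy model with entirely made-up parameters (the shape, not the numbers, is the point): each change set introduces many bugs when fresh, settles to a low baseline, then rises again as the code rots. A code base under constant churn always has fresh change sets, so it stays pinned near the high end.

```python
import math

def bug_rate(age, early=5.0, decay=0.5, baseline=0.2, wear=0.2, life=20.0):
    """Toy bathtub curve for one change set; every parameter here is
    illustrative, not measured."""
    return (early * math.exp(-decay * age)        # early failures
            + baseline                            # steady-state middle
            + wear * math.exp(0.3 * (age - life)))  # late-life tail

# High when fresh, low in the middle, rising again late.
for age in (0, 10, 30):
    print(f"age {age:>2}: rate {bug_rate(age):.2f}")
```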

The level of public research is a tough metric: the reward and motivation factors are not the same across browsers. Internal research teams are another confounder; they find bugs before release, so those bugs never really "exist". Does the number of CVEs issued indicate the quality or level of internal research? What would a "good" metric for any of this be?



