This article is misinformed, if not intentionally misleading, about Apple Pay’s privacy status. It is technically true that Apple does not share your credit card number, but it does share its own unique, persistent identifier.
This tool is deeply flawed. Fingerprinting protection is sometimes done by binning, which this tool rewards, and is sometimes done by randomizing, which this tool harshly punishes. The net result is it generally guides you away from the strongest protection.
The flip side of this, with the complementary flaw of testing only persistence rather than uniqueness, is (warning: real tracking link) fingerprinting.com/demo. You can try resetting your ID there and seeing whether it changes. Since tracking requires (a degree of) uniqueness AND (a degree of) persistence, the real danger signal is failing both the EFF test and this one, not just either alone.
Requiring failure of both isn't being lax, it's what's needed to derive any meaning: a test measuring only uniqueness would fail a random number generator, and a test measuring only persistence would fail the number 4.
You make an interesting point on binning vs randomization. I'm not an expert, but to me your point is consistent with Tor having the "best protection" according to the website, because I know that Tor's strategy is binning. That said, binning is what actually makes sense for many variables. For example, font sizes come in integers: if you try to be clever by "randomizing" and report a decimal font size, you might be the only person in the world doing so and be immediately fingerprinted. So I think randomization might indeed be a bad idea in many cases.
Your link doesn't work though. I just get "file not found".
I agree on randomization, but there are other places where it doesn’t stick out like that. I’ll look up specifics if I find the time, but I think reading canvas data without permission is one place it’s utilized, including by Tor.
Also check out oklch.com; I found it useful for building an intuition. Some stumbling blocks are that OKLCH hues aren’t the same as HSL hues, and that the maximum chroma differs depending on hue and lightness. This isn’t a bug, but a reflection of human eyes and computer screens; the alternative, as in HSL, is a consistent maximum with an inconsistent meaning.
Another very cool thing about CSS’s OKLCH is that it composes like a formula: with relative color syntax you can write things like oklch(from var(--accent) calc(l + .1) c h). Do note, though, that you’ll need either some color theory or some fiddling to figure out your formulas; my programmer’s intuition told me lies like “a shadow is just a lightness change, not a hue change”.
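For anyone who hasn’t played with it yet, here’s a minimal sketch of that idea; the variable names and the exact channel offsets are just illustrative picks, not recommendations:

```css
:root {
  /* Base accent color: lightness, chroma, hue */
  --accent: oklch(0.65 0.15 250);

  /* Derive related shades by reusing the accent's channels and nudging them */
  --accent-hover: oklch(from var(--accent) calc(l + 0.1) c h); /* a bit lighter */
  --accent-muted: oklch(from var(--accent) l calc(c / 2) h);   /* half the chroma */
}

.button { background: var(--accent); }
.button:hover { background: var(--accent-hover); }
```

The nice part is that changing --accent once updates every derived shade.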
Also, OKLCH gradients aren’t objectively the best; they’re consistently colorful. Used between similar hues (not like the article’s example) they can look very nice, but they’re not physically realistic; if your goal is to mimic how light mixes, you actually want XYZ. More: https://developer.mozilla.org/en-US/docs/Web/CSS/color_value....
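If you want to compare for yourself, CSS lets you choose the interpolation color space per gradient. A quick sketch; the two stop colors are arbitrary picks, and browser support for the xyz keyword is worth double-checking:

```css
/* Same two color stops, interpolated in different color spaces */
.g-srgb  { background: linear-gradient(to right in srgb,  #0066ff, #ffcc00); }
.g-oklch { background: linear-gradient(to right in oklch, #0066ff, #ffcc00); }
.g-xyz   { background: linear-gradient(to right in xyz,   #0066ff, #ffcc00); }
```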
Also, fun fact: the “ok” is actually just the word “ok”. The implication being that LCH was not OK, it had some bugs.
restic’s rest-server append-only mode unfortunately doesn’t prevent data deletion under normal usage. More here: https://restic.readthedocs.io/en/stable/060_forget.html#secu.... Their workaround is pretty weak, in my opinion: a compromised client can still delete all your historic backups, and you’re on a tight timeline to notice and fix it before they can delete the rest of your backups, too.
That article says that a compromised client cannot delete your historic backups; however, a compromised client could create enough garbage backups that an automatic job run by a non-compromised administration account would end up deleting the historic ones due to retention policies.
I'm not sure what exactly you expect would be different?
This page seems designed to direct attention anywhere but the meat of the issue: are the moon details in Samsung photos derived from sensor data alone, or also from external photos? To me, the answer is clear: Samsung can’t produce this result unless they also ingest external, high-res moon photos. Yet Samsung frames it as “enhancing details”, not “adding details”, clearly implying the details in question were already there in some form.
Right, disappearing messages aren’t a safeguard against a malicious recipient, they’re a blast radius limiter on future device compromise on either end.
The abandoned plan was perceptual hashing, which should return the same hash for very similar photos, while the new one is a checksum, which should return the same hash only for identical photos. I don’t think that invalidates the point, but it does seem relevant. It certainly makes it much less useful for CSAM scanning or enforcing local dictator whims, since it’s now trivial to defeat if you actually try to.
The big difference is that with photos end-to-end encrypted, Apple can't (by choice or by force) have human "content reviewers" look at photos to inspect them for unlawful content, as was the intention under Apple's 2021 plan [1] after a threshold of 30 hash matches was met.
Although it was starting with CSAM material, it wasn't clear which other illegal activities Apple would assist governments in tracking. In countries in which being gay is illegal (https://www.humandignitytrust.org/lgbt-the-law/map-of-crimin...), having Apple employees aid law enforcement by pointing out photographic evidence of unlawful behaviour (for example, a man hugging his husband) would have been a recipe for grotesque human rights abuses.
With photos encrypted, Apple can't be pressured to hire human reviewers to inspect them, and thus cannot be pressured by governments that enforce absurd laws to pass on information on who might be engaging in "unlawful" activities.
>The abandoned plan was perceptual hashing, which should return the same hash for very similar photos . . .
Is there any proof they actually abandoned this? NeuralHash seems alive and well in iOS 16 [1]. Supposedly the rest of the machinery around comparing these hashes to a blind database, encrypting those matches, and sending them to Apple et al. to be reviewed has all been axed. However, that's not exactly trivial to verify since Photos is closed source.
Anything sent over the network can be decrypted and inspected with a MITM proxy (by manually adding its root certificate to the device's trust store), as long as only TLS, and no application-level encryption, is being used.
There are a multitude of ways to inspect the decrypted traffic of your own device, whether it's a jailbroken iPhone provided by Apple to the security community or a non-kosher jailbroken device. People inspect this traffic all the time.
> . . . as long as only TLS (no application-level encryption) is being used.
Therein lies the rub: the payload itself is protected by an encryption scheme in which the keys are intentionally withheld by one party or the other. In the case of Apple's proposed CSAM detection, Apple would be withholding the secret in the form of the unblinded database's derivation key. In the case of Advanced Data Protection, the user's key lives in the SEP, unknown to Apple.
By design the interior of the "safety vouchers" cannot be inspected, supposedly not even by Apple, unless you are in possession of (a) dozens of matching vouchers and (b) the unblinded database. So on the wire you're just going to see opaque encrypted containers representing a photo destined for iCloud.
I felt this way for a while, but I’d like to offer another angle: maybe it’s like saying “Every company eventually hires tall people. Some have no tall people, but it’s just a matter of time. Tall people are really an unstoppable force.” The missing piece is also observing the short people (ad-free offerings), which you’d notice are not being eliminated, are also prevalent, and also “infect” everything. I think the unstoppable force being felt here is just statistics: ads are often an option, companies make a lot of decisions, and it’s statistically inevitable that some of those decisions will be ads, unless the company has a strong aversion to them, which most don’t.