> but at some point a state based cyber attack that just wipes wikipedia off the net is deeply damaging to our modern society’s ability to agree on common facts
Haven't we hit that point already with bad faith (and potentially government-run) coordinated editing and voting campaigns, as both Wales and Sanger have been pointing out for a while now?
I'm not sure what you mean by this. Wikipedia is not a democracy.
> as both Wales and Sanger have been pointing out
{{fv}}. Neither of those essays make this point. The closest either gets is Sanger's first thesis, which misunderstands the "support / oppose" mechanism. Ironically, his ninth thesis says to introduce voting, which would create the "voting campaign" vulnerability!
These are both really bad takes, which I struggle to believe are made in good faith, and I'm glad Wikipedians are mostly ignoring them. (I have not read the third link you provided, because Substack.)
That's a relatively recent process: there have only been 3 such elections ever. They have measures in place to try to curb abuse of the process, and it cannot really be used to introduce bias (since an administrator exhibiting bias would leave a public trail of evidence attesting to that bias). That said, thanks for letting me know about it.
I feel like the section on primality testing with Fermat's test should at least give a shout-out to Carmichael numbers, and note that for some inputs the probability of getting a false positive result is 1.
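To make the Carmichael point concrete, here's a minimal sketch of a Fermat test (hypothetical helper, not from the write-up): 561 = 3 · 11 · 17 is the smallest Carmichael number, and every base coprime to 561 passes the test, so random bases almost never expose it.

```python
import math
import random

def fermat_test(n, rounds=20):
    """Fermat probable-prime test (sketch): False means definitely composite."""
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if math.gcd(a, n) == 1 and pow(a, n - 1, n) != 1:
            return False  # a witnesses that n is composite
    return True  # "probably prime" -- but see below

# 561 = 3 * 11 * 17 is the smallest Carmichael number: every base coprime
# to 561 satisfies a^560 = 1 (mod 561), so coprime bases never expose it.
assert all(pow(a, 560, 561) == 1 for a in range(2, 561) if math.gcd(a, 561) == 1)
```

This is why Miller-Rabin is preferred in practice: it has no analogue of Carmichael numbers.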
This is so tangentially related to the P vs NP problem that the title is basically pure clickbait. Remove every sentence relating to polynomial anything and the information content of the write-up doesn't change at all.
What's a concrete threat model here? If you're sending data to an ssh server, you already need to trust that it's handling your input responsibly. What's the scenario where it's fine that the client doesn't know if the server is using pastebin for backing up session dumps, but it's problematic that the server tells the client that it's not accepting a certain timing obfuscation technique?
The behavior exists to prevent a 3rd party from inferring keystrokes from active terminal sessions, which is surprisingly easy, particularly with knowledge about the user's typing speed, keyboard type, etc. The old CIA TEMPEST stuff used to make good guesses at keystrokes from the timing of AC power circuit draws for typewriters and real terminals. Someone with a laser and a nearby window can measure the vibrations in the glass from the sound of a keyboard. The problem is real and has been an OPSEC sort of consideration for a long time.
The client and server themselves obviously know the contents of the communications anyway, but the client option (and default behavior) expects this protection against someone that can capture network traffic in between. If there was some server side option they'd probably also want to include some sort of warning message that the option was requested but not honored, etc.
To clarify the point in the other reply -- imagine it sent one packet per keystroke. Now anyone sitting on the network gets a rough measurement of the delay between your keystrokes. If you are entering a password for something (perhaps not the initial auth), an observer can guess how many characters it is, and it turns out there are systematic patterns in how the delays relate to the keys pressed -- e.g. letters typed with the same finger have longer delays between them. Given the redundancy in most text, and especially in structured input, that's a serious security threat.
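A toy illustration of the leak (hypothetical timings, not OpenSSH's actual code): with one packet per keystroke the inter-packet deltas mirror the typing rhythm exactly, while fixed-interval sending makes them constant and information-free.

```python
# Toy illustration (hypothetical timings; not OpenSSH's actual code).

def observed_delays(packet_times_ms):
    """All a passive network observer needs: deltas between captured packets."""
    return [b - a for a, b in zip(packet_times_ms, packet_times_ms[1:])]

# One packet per keystroke: the deltas mirror the typing rhythm exactly.
keystroke_times_ms = [0, 120, 310, 380, 950]  # hypothetical typing pattern
leaky = observed_delays(keystroke_times_ms)   # [120, 190, 70, 570]

# Fixed-interval sending (the idea behind OpenSSH's ObscureKeystrokeTiming):
# packets leave every 20 ms whether or not a key was pressed, so the
# deltas are constant and carry no keystroke information.
obfuscated = observed_delays([i * 20 for i in range(50)])  # all 20
```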
Capping margins at a percentage also directly breeds inefficiencies. Say the cap is 10%: if you could spend $10M to fix a problem that costs you $4M/yr, you're effectively paying $10M now to lose $400k in annual profit potential.
It depends on their test dataset. If the test set was written 80% by AI and 20% by humans, a tool that labels every essay as AI-written would have a reported accuracy of 80%. That's why other metrics such as specificity and sensitivity (among many others) are commonly reported as well.
Just speaking in general here -- I don't know what specific phrasing TurnItIn uses.
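To make that concrete, a minimal sketch (hypothetical numbers matching the 80/20 example above) of why accuracy alone is misleading on an imbalanced test set:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), specificity (recall on negatives)."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else None,
        "specificity": tn / (tn + fp) if tn + fp else None,
    }

# 80% AI-written (positive class), 20% human; a "detector" labels everything AI.
y_true = [True] * 80 + [False] * 20
y_pred = [True] * 100
m = confusion_metrics(y_true, y_pred)
# accuracy looks fine at 0.80, but specificity is 0.0:
# every human-written essay gets flagged.
```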
I don’t know the “correct” answer, but here’s my answer as someone whose TOTP are split across a YubiKey and Bitwarden: I store TOTP in Bitwarden when the 2FA is required and I just want it to shut up. My Vault is already secured with a passphrase and a YubiKey, both of which are required in sequence, and to actually use a cred once the Vault is authenticated, requires a PIN code (assuming the Vault has been unlocked during this run of the browser, otherwise it requires a master password again).
At that point, frankly, I am gaining nearly nothing from external TOTP for most services. If you have access to my Vault, and were able to fill my password from it, I am already so far beyond pwned that it’s not even worth thinking about. My primary goal is now to get the website to stop moaning at me about how badly I need to configure TOTP (and maybe won’t let me use the service until I do). If it’s truly so critical I MUST have another level of auth after my Vault, it needs to be a physical security key anyway.
I was begging every site ever to let me use TOTP a decade ago, and it was still rare. Oh the irony that I now mostly want sites to stop bugging me for multiple factors again.
My Bitwarden account is protected with YubiKey as the 2FA. I then store every other TOTP in Bitwarden right next to the password.
I get amazing convenience with this setup, and it's still technically two-factor. To get into my Bitwarden account you need to know my Bitwarden password and have my YubiKey. If someone can get into my Bitwarden, then I am owned. But for most of us who are not, say, being specifically targeted by state agents, this setup provides good protection with a very good user experience.
2FA most commonly thwarts server-side compromised passwords. An API can leak credentials and an attacker still can't access the account without the 2FA app, regardless of which app that is. The threat vectors it does open you up to are a) a compromised device or b) someone with access to your master password, secret key, and email account. Those are both much harder to pull off, and you're probably screwed in either case unless you use a YubiKey or similar device.
How is it possible to have a compromised password but not a compromised second factor? I don't understand the theory of leaking not enough factors. What is stopping webmasters from using 100FA?
> How is it possible to have a compromised password but not a compromised second factor?
Server-side (assuming weak password storage or weak in-transit encryption) or phishing (more advanced phishers may get the codes too but only single instance of the code, not the base key).
> What is stopping webmasters from using 100FA?
The users would hunt them down and beat them mercilessly?
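To make the code-vs-base-key distinction above concrete: TOTP (RFC 6238) derives each short-lived code from the shared base key plus the current 30-second window, so a phished code is useful for seconds while the base key mints valid codes forever. A minimal stdlib sketch (illustrative only, not a vetted implementation):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP (HMAC-SHA1), for illustration only."""
    counter = int(time.time() if t is None else t) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Whoever holds `secret` can mint valid codes forever; a phished code is
# only good for the current `step`-second window.
```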
Mostly for the sites that insist on MFA and that I need to use daily. Using two separate stores would be too annoying, and the increase in security is minimal - I consider Bitwarden to be secure enough (password + YubiKey), and the main scenarios where somebody could get into my account are a server-side breach or phishing. For those, MFA helps somewhat, but storing the MFA code in a separate app doesn't do much.
Only finitely many values of BB can be mathematically determined. Once your Turing Machines become expressive enough to encode your (presumably consistent) proof system, they can begin encoding nonsense of the form "I will halt only after I manage to derive a proof that I won't ever halt", which means that their halting status (and the corresponding Busy Beaver value) fundamentally cannot be proven.
Yes, but as far as I know, nobody has shown that the Collatz conjecture is anything other than a really hard problem. It isn't terribly difficult to imagine that the Collatz problem space, considered generally, encodes Turing-complete computations in some mathematically meaningful way (even when we don't explicitly construct them to be "computational"), but as far as I know that is pure conjecture. I have to imagine some non-trivial mathematical time has been spent on that conjecture too, so that is itself a hard problem.
But there is also definitely a place where your axiom systems become self-referential in the Busy Beaver and that is a qualitative change on its own. Aaronson and some of his students have put an upper bound on it, but the only question is exactly how loose it is, rather than whether or not it is loose. The upper bound is in the hundreds, but at [1] in the 2nd-to-last paragraph Scott Aaronson expresses his opinion that the true boundary could be as low as 7, 8, or 9, rather than hundreds.
That's a misinterpretation of what the article says. There is no actual bound in principle to what can be computed. There is a fairly practical bound which is likely BB(10) for all intents and purposes, but in principle there is no finite value of n for which BB(n) is somehow mathematically unknowable.
ZFC is not some God given axiomatic system, it just happens to be one that mathematicians in a very niche domain have settled on because almost all problems under investigation can be captured by it. Most working mathematicians don't really concern themselves with it one way or another, almost no mathematical proofs actually reference ZFC, and with respect to busy beavers, it's not at all uncommon to extend ZFC with even more powerful axioms such as large cardinality axioms in order to investigate them.
Anyhow, just want to dispel a common misconception that comes up that somehow there is a limit in principle to what the largest BB(n) is that can be computed. There are practical limits for sure, but there is no limit in principle.
You can compute a number that is equal to BB(n), but you can't prove that it is the right number you are looking for. For any fixed set of axioms, you'll eventually hit an n large enough that the value of BB(n) is independent.
>You can compute a number that is equal to BB(n), but you can't prove that it is the right number you are looking for.
You can't categorically declare that something is unprovable. You can state that a proposition is independent of some particular formal theory, but you can't state that it is independent of all possible formal theories.
They didn't claim that. They claimed that any (sound and consistent) finitely axiomatizable theory (basically, any recursively enumerable set of theorems) can only prove finitely many theorems of the form BB(n) = N.
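For reference, the precise statement behind that claim (a standard formulation, paraphrased rather than quoted from anyone in this thread):

```latex
% For any consistent, recursively axiomatizable theory T that interprets
% enough arithmetic, there is a threshold n_T beyond which T settles no
% Busy Beaver value:
\exists\, n_T \;\; \forall n \ge n_T \;\; \forall N \in \mathbb{N} : \quad
T \nvdash \; \mathrm{BB}(n) = N
```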
Only if your goalpost of what counts as "mathematics" is endlessly shifting. To prove values of BB(50000) you're probably going to need some pretty wacky axioms in your system. With BB(any large number), it's going to be unfeasible to justify that the system isn't tailored to prove that one fact, stopping just short of adding the axiom "BB(x) = y".
I don't understand the point of this article, as it doesn't define an objective function. It just states a strategy that is only practically implementable for small board sizes (given the cited NP-completeness result) and then calls it good sans theorem or even conjecture.
I believe it is provably not the optimal algorithm for solving the problem under the minimax objective, and I have a hunch that (due to rounding issues) it is also not optimal for minimizing the expected number of guesses under even a uniform prior. So what does this actually accomplish?
I agree with you. I also agree with OP on the following sentences:
>We have now landed on our final strategy: start by figuring out the number of possible secret codes n. For each guess, calculate the number n_i' of codes that will still be viable if the Code Master gives response i in return. Do this for all possible responses.
But then I don't agree with:
>Finally, calculate the entropy of each guess; pick the one with the highest.
Why wouldn't we just pick argmin_{guess} sum_{i in possible responses} Pr[i] * n'_i? Since Pr[i] = n'_i/n, that's argmin_{guess} sum_i (n'_i/n) * n'_i = argmin_{guess} sum_i n'_i^2 (the constant factor 1/n doesn't change the argmin). This is the guess that minimizes the expected size of the resulting solution space.
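A sketch of that argmin on a deliberately tiny board (hypothetical sizes so brute force is instant; `feedback` is the standard black/white peg rule):

```python
from collections import Counter
from itertools import product

COLORS, SLOTS = 4, 3  # hypothetical tiny board so brute force is instant

def feedback(guess, secret):
    """Standard Mastermind feedback: (exact matches, right-color-wrong-slot)."""
    exact = sum(g == s for g, s in zip(guess, secret))
    common = sum(min(guess.count(c), secret.count(c)) for c in range(COLORS))
    return exact, common - exact

def expected_remaining(guess, viable):
    """sum_i Pr[i] * n'_i = (1/n) * sum_i n'_i^2 over the feedback partition."""
    parts = Counter(feedback(guess, secret) for secret in viable)
    return sum(k * k for k in parts.values()) / len(viable)

viable = list(product(range(COLORS), repeat=SLOTS))
best = min(viable, key=lambda g: expected_remaining(g, viable))
```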
See, for example,
* Sanger: https://en.wikipedia.org/wiki/User:Larry_Sanger/Nine_Theses
* Wales: https://en.wikipedia.org/wiki/Talk:Gaza_genocide/Archive_22#...
* PirateWires: https://www.piratewires.com/p/how-wikipedia-is-becoming-a-ma...