It happens as soon as the fourth sentence (maybe even the third):
> There was a silence at the table—not confusion, but recognition.
Then it keeps happening, again and again, to the point where it would be obnoxious even if a human wrote it.
It's unbelievable to me that anyone would publish an article about learning that is so obviously AI-generated, and from a professor no less. Maybe this is some sort of experiment or an ironic joke that went over my head.
These sorts of tests are invaluable for things like ensuring adherence to specifications such as OAuth2 flows. A high-level test that literally describes each step of a flow will swiftly catch odd changes in behavior such as a request firing twice in a row or a well-defined payload becoming malformed. Say a token validator starts misbehaving and causes a refresh to occur with each request (thus introducing latency and making the IdP angry). That change in behavior would be invisible to users, but a test that verified each step in an expected order would catch it right away, and should require little maintenance unless the spec itself changes.
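A test like that can be sketched as follows. This is a minimal illustration, not a real OAuth2 client: `FakeTransport` and `OAuthClient` are hypothetical names, and the canned responses stand in for a real IdP. The point is the final assertion, which pins down the exact sequence of requests, so a bug that fires the token refresh on every call would fail immediately.

```python
class FakeTransport:
    """Records (method, path) for every request and returns canned responses."""
    def __init__(self):
        self.calls = []

    def request(self, method, path):
        self.calls.append((method, path))
        return {"access_token": "tok", "refresh_token": "ref"}


class OAuthClient:
    """Toy client: fetches a token only when it doesn't have one yet.
    A misbehaving validator that always reported the token as expired
    would trigger a refresh on every single request."""
    def __init__(self, transport):
        self.transport = transport
        self.token = None

    def get(self, path):
        if self.token is None:
            self.token = self.transport.request("POST", "/oauth/token")
        return self.transport.request("GET", path)


def test_refresh_happens_once():
    t = FakeTransport()
    c = OAuthClient(t)
    c.get("/api/a")
    c.get("/api/b")
    # Spec-shaped assertion: exactly one token request, then the two GETs,
    # in that order. A refresh-per-request regression breaks this instantly.
    assert t.calls == [
        ("POST", "/oauth/token"),
        ("GET", "/api/a"),
        ("GET", "/api/b"),
    ]


test_refresh_happens_once()
```

Because the assertion mirrors the spec rather than any implementation detail, the test only needs updating if the flow itself changes.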
Georgia Tech has a great online MSc CS program (OMSCS) that's very affordable for what it is, though the amount of direct interaction with the professor varies from class to class.
The post title is misleading and the content reads more like a guerrilla advertisement for Claude. TL;DR: the author works for Anthropic and used Claude to implement an optimization for LLVM.
He’s also well respected in the Python community for maintaining the cryptography package, partially written in Rust. This is just a random blog post, not an ad.
The author has now added a note at the beginning of the post making it clear that he works for Anthropic, which may explain the fixation on Claude Code!
And the top 1000 replies for any stupid thing he says are nothing but positive reinforcement from boosted blue check bot accounts that bury any actual criticism.
If you hide the boosted replies, Twitter's thread loading stops after about 200 while it tries to fill the space you made, so you're lucky to get more than a few non-boosted replies under one of his "popular" tweets; they're all bidding for very limited space under there.
Wow, if I needed any more proof Google is a ghost ship then this is it. The $5K bounty is an insult, and the fact that they low-balled it in the first place makes them look like absolute clowns. Good on you for calling out how little of a shit Google gives about actually protecting user data.
Nobody is forced to participate in a bug bounty. If you don't like the rewards, don't do it. There's a limit to the financial viability of these programs.
Who's talking about participation? We can be appalled by their business practices as their customers (actual or potential). These are the same companies that tell us that our privacy and security are their #1 concern, and use that justification to take away our rights "for our own good", but when there's a real threat they address it with a business-casual equivalent of "fuck off".
I like how the app shows reference calls/songs for detected birds, so I can verify using my own judgement, or figure out which birds are which when there's a lot of chirping going on.
> No passwords, private keys, or funds were exposed and Coinbase Prime accounts are untouched.
I'm curious why no Coinbase Prime accounts were part of the leak (assuming that's what they mean). Is there some sort of additional layer of data protection behind the Coinbase Prime paywall? Or perhaps those accounts were intentionally avoided as they would presumably belong to more savvy users.
Coinbase Prime is its own exchange with its own support (actual humans in the USA that are available to chat with). It's for "institutional investors", so it's unavailable to most customers without the proper credentials/paperwork. They don't share the same outsourced "support" as the regular exchange, which appears to be the attack vector here.