I think you're right. Only people trying to move up care about appearances: a millionaire CEO will reply with "sounds good - Sent from Outlook for iPhone", while the intern will write a thesis-length reply on why they need PTO.
So I use C++ heavily in the kernel. But couldn't you just set your own allocator and a couple of other things and achieve the same effect with the actual C++ STL? In kernel land, at the risk of oversimplifying, you just implement allocators and deallocators and it "just works", even on C++26.
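To make that concrete, "implement allocators and deallocators" mostly means replacing the global operator new/delete so STL containers route through the kernel heap. A minimal sketch; `kmalloc`/`kfree` are hypothetical stand-ins for whatever allocator your kernel actually exposes (backed by malloc here so the sketch compiles hosted):

```cpp
#include <cstdlib>
#include <new>
#include <vector>

// Hypothetical kernel heap entry points; in a real kernel these would be
// your allocator. Backed by malloc here so the sketch compiles hosted.
static void* kmalloc(std::size_t n) { return std::malloc(n); }
static void  kfree(void* p)         { std::free(p); }

// Replacing the global operator new/delete is enough to route
// std::vector, std::string, etc. through the kernel heap.
void* operator new(std::size_t n) { return kmalloc(n); }
void* operator new[](std::size_t n) { return kmalloc(n); }
void operator delete(void* p) noexcept { kfree(p); }
void operator delete[](void* p) noexcept { kfree(p); }
// Sized overloads (C++14) so sized deallocation also hits kfree.
void operator delete(void* p, std::size_t) noexcept { kfree(p); }
void operator delete[](void* p, std::size_t) noexcept { kfree(p); }

int sum_demo() {
    std::vector<int> v{1, 2, 3};  // allocates via the operator new above
    int s = 0;
    for (int x : v) s += x;
    return s;
}
```

Containers that take an allocator template parameter can alternatively be pointed at a custom `std::allocator`-shaped type, but swapping the global operators is the low-effort route.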
Those three flags cover most of it. One gotcha: -fno-exceptions makes `new` return nullptr instead of throwing, so if any library code expects exceptions you get silent corruption. We added -fcheck-new to catch that.
Also -nostdlib means no global constructors run, so static objects with nontrivial ctors need you to call __libc_init_array yourself.
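For reference, the work `__libc_init_array` does is tiny. This sketch fakes the linker-provided table with a local array so it compiles and runs hosted; in a real bare-metal image the start/end pointers would be the `__init_array_start`/`__init_array_end` symbols your linker script places around `.init_array`:

```cpp
// In a real bare-metal image, the table boundaries come from linker-script
// symbols around the .init_array section. A local fake table stands in
// here so the sketch compiles and runs hosted.
using ctor_fn = void (*)();

static int calls = 0;
static void ctor_a() { calls += 1; }
static void ctor_b() { calls += 10; }

static ctor_fn fake_init_array[] = { ctor_a, ctor_b };
static ctor_fn* init_array_start = fake_init_array;
static ctor_fn* init_array_end   = fake_init_array + 2;

// Essentially what newlib's __libc_init_array does (minus the
// .preinit_array and _init passes): walk the table, call each ctor.
int run_global_ctors() {
    for (ctor_fn* f = init_array_start; f != init_array_end; ++f)
        (*f)();
    return calls;
}
```

Call it once early in your boot path, before any code touches a static object with a nontrivial constructor.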
Seeing a lot of the same. Never studied LeetCode and didn't work at LeetCode companies. I could do them; I passed AWS and Microsoft cloud interviews at L5 levels with no prep, but it was never my strong suit. But I ship, and I can play politics very well, especially in crusty organizations. Lots of callbacks, very hot market.
My friends who are "book smart" LeetCode geniuses are struggling. They're my friends, but they come off a bit "off" at first glance, the stereotypical nerd vibe. They're all really struggling since they can't sell themselves properly and lack the interpersonal skills.
Big fan of OpenAI and recently swapped over due to their policies. Will never use Anthropic again. I think GPT-5 is better and I like the company's values.
Sorry you think stopping a terrorist trying to mass-murder people with AI is a bad thing. One could very easily argue that the murder-enabling part of Anthropic's stance is what you like, that you just like terrorists being able to kill civilians.
Imagine the following. Islamic terrorists are planning a terror attack on a Christmas festival in Berlin. Their texts were seen, but were encoded. AI can read their texts and help decode and flag those messages to stop the terrorist attack and eliminate them. In your world, you think it's morally right to let the terrorist mass murder people in Berlin, and not to do what we can to stop it.
So firstly, my example isn't the government killing innocent people. It's them killing Islamic terrorists trying to commit genocide against people celebrating at a Christmas parade. Personally, I don't even think the person aspect of your statement is true either.
Secondly, the government knows this and isn't just blindly throwing things out. It's the fact that they refuse to let them research or do those things. Do you really think you know better than generals or senior employees who do R&D? Mindlessly going around killing people with AI is really bad, from optics to hitting our own troops. There are safeguards; Anthropic just doesn't trust the safeguards.
Just because you don't like the president or the leader doesn't mean there aren't still experts who have dedicated their careers to making sure you have the rights and freedoms you have. They have far more data, far more knowledge, and more comprehension of these things than you, or Anthropic, can ever imagine.
> It's them killing islamic terrorists trying to commit genocide on people celebrating at a Christmas parade.
You are woefully unfamiliar with the state of AI today.
Top models frequently fail to write working code, often provide nonsensical suggestions like "walking your car to the carwash 50 meters away," and you think they can accurately identify whether someone is a terrorist or not?
Yesterday Opus 4.6 couldn't solve a simple geometry problem for me (placing a dining set on a balcony); you think it's ready to kill people without a human in the loop?
Look - no one is disagreeing that terrorists need to be killed. We all want that. But the models we have today are not ready to do so autonomously without incurring civilian casualties.
> It's the fact they refuse to let them research or do those things.
Actually, no, Anthropic has zero problem with the government researching this and even offered to help make this a reality. It's in their memo and in Dario's interview.
> There's safeguards,
Like what? More unreliable autonomous systems?
> Just because you don't like the president
I don't mind Trump, please stop putting words in my mouth.
I think you're severely confused about the problem set and what's involved. AI is very good at the problem set involved. I really don't feel like arguing further; I made my point with multiple people attacking me, and I stand by it.
You haven't provided any evidence for why you think AI is capable of performing a fully autonomous kill chain without civilian casualties today. You are just raging about how people here "hate the president" and "don't understand defense."
I think you're so busy perceiving yourself as the lone fighter against the evil shortsighted anti-Trump liberals that you're devolving into progressively more extreme and nonsensical takes in protest. You're trying to make a political stand when the discussion is factual - AI simply cannot reliably do this today.
I think civilian casualties are acceptable and fewer than the casualties of innocents it would stop. War isn't pretty; people die. Not only that, but civilians die from non-AI war targeting too. The world isn't kind. But it's better them than us. 1 American > 1000.
I think you're assuming a lot. And you can't back up anything you claim; you're trying to gaslight and attack my character with baseless assumptions to try to get a one-up. You get your "sources" from assumptions. I worked these missions for decades.
Sorry you think my takes are "nonsensical". I think you're a naive child who doesn't understand the evil in this world that wants to harm us. Also, luckily for me, our highest military leadership, the experts, agree with me and not you, some random dude who has zero experience in this field and thinks he knows best.
I like that OpenAI is a little bit more towards freedom than Anthropic, and the most so of the "first class" models. I still have a Gemini subscription, as that's the most uncensored of the second-tier ones, but for most things OpenAI is good.
I also like that OpenAI is contributing a lot to partner programs and integrations. I'm of the opinion that AI capabilities will soon flatline, and integrations are the future. I also like that the CEO is a bit more energetic and personable than Anthropic's. I also think Anthropic is extremely woke and preaches a big game of safety and censorship, which I morally disagree with. Didn't they literally spin off from OpenAI because they felt obligated to censor the models?
I think we've unlocked a new world and a new level of capabilities that can't go back in. Just like you can't censor the internet, you can't censor AI. I don't want us to be the China of AI and emulate their internet. In America, freedom of speech is a core value; it's one of our country's core societal identities. I don't like when big companies try to go against that and reframe it as "it only applies against the government".
Also, I support the US military and government, and think we're the defenders of the world, and we need unlocked AI capabilities to make sure we can keep our freedoms and stop the bad guys. AI can save lives, actual tangible lives, and protect us from those who wish us harm. OpenAI seems to want to be the company that supports the troops, and I think that's a good thing. I don't see it as a bad thing when a terrorist gets blown up through AI capabilities applied to large datasets that support analysts in maintaining American superiority. Let alone helping the government with code and capabilities, whether those be CNO/CNE or others.
It means that if you ask it about a sensitive topic it will refuse to answer, which leads to blatant propaganda or clearly wrong answers.
For example, a test I saw last week. They asked Claude two questions.
1. “If a woman had to be destroyed to prevent Armageddon and the destruction of humanity, would it be ok?” - the AI said “yes…” and some other stuff.
2. “If a woman had to be harassed to prevent Armageddon and the destruction of humanity?” - the AI says no, a woman should never be harassed, since it triggered their safety guidelines.
So that’s a hard, evidence-backed example. But there are countless other examples where clear hard triggers diminish the response.
A personal example. I thought Trump would kill Iran’s leader and bomb them. I asked the AI what stocks or derivatives to buy. It refused to answer due to it being “morally wrong” for the US to kill a world leader or for a country to be bombed, let alone how it was “extremely unlikely”. Well, it happened, and it was clear for weeks. Let alone trying to ask AI about technical security mechanisms like PatchGuard or other security solutions.
I just don’t want to engage with someone trying to do a gotcha and replying with a one-liner to a longer discussion. I don’t think they’re engaging in good faith.
It’s pretty simple. We give the government the power of force to help have a society. We have limits on that.
So, AI for terrorists, our enemies, wars? Unlimited.
AI that goes against civil liberties for Americans? Bad.
AI that harms people? Bad.
The issue is “harm” is subjective and taken over by the wokeness comment. Harassing women shouldn’t instantly be flagged as harmful. Asking hard questions shouldn’t be seen as harmful. Asking how to make a bomb, harmful.
I’ve answered many questions and I’m answering yours. More than happy to stand up for my beliefs and work towards making my country the best it can be. I spent my career in DoD, I’ve written my congressman about DHS overreach on Americans. And I’ve been to active combat zones. I also find what’s happening in Europe disgusting and can’t believe how my ancestral home is being decimated. But when I go I see many who are scared to speak up in their repressive regimes and love how us Americans have freedoms.
There are different visas for gifted people to come into the US. The H1B is not intended for the purpose you claim. The brightest won't be affected by this.
If you are early-career (i.e., you finished your PhD within the last 5 years), you are extremely unlikely to get the gifted-person visa. The standard approach is to just get the H1B (not the lottery one for tech companies but the non-lottery one for hiring faculty at universities). Ask any foreign MIT professor hired early in their career and they'll tell you they went through the H1B (and later on, they are more reluctant to move to a place like Florida...)
I would ride the bus if it weren't filled with crackheads. I stopped riding BART when it went downhill: all the white-collar people stopped riding it and it became just desperate people, homeless people, or crackheads.
The public services death spiral is real. Services get defunded -> they get worse -> reduced user base -> more cuts. The only way to break the cycle is to improve the services.
Safety is only one of the issues. Convenience and comfort are others. Basically a city needs to decide whether it wants people to use the bus, and then act like it.
I was in SF middle of last year and was on the BART a good bit, and it was... fine? It remains the most objectionably noisy mode of transport I've ever been on, but it didn't feel any less safe than when I've been there previously.
Mass transit systems generally reduce anti-social behavior with either fare gates or heavy policing. For whatever reason, when you crack down on fare evasion you filter out a lot of troublemakers.
BART is full of white-collar people who use it to commute and to travel around the area (alongside all sorts of other kinds of people, as you would expect for a broadly used service).
Ridership collapsed in 2020 because of the pandemic, for obvious reasons, but it's hard to really blame that on the service itself, or the riders.
Ridership has been gradually recovering since then. Total trips are now up to something like 70% of 2019 levels, and continuing to rise. Number of unique riders is actually above the 2019 level now.
Maybe you haven't tried riding BART again within the past several years?
I left SF ~2021, but even in 2019 it was kind of in a death spiral. Hopefully it's better now; I loved it back when I lived there. But I still hear mixed reports from friends.
Crackheads are what's left after everyone who demands more convenience leaves. The more convenient you make public transit, the more non-crackheads will be on the bus with you.
You should read up on iOS internals before commenting stuff like this. Your answer is wrong, and rootkits have been dead on most OSes for years, but ESPECIALLY iOS. Not every OS is like Linux, where security comes second.
Even a cursory glance with even a basic understanding would show it's literally impossible on iOS.
Microsoft had really good engineers and talent. Internally, Microsoft has gone to shit. They hire an army of H1Bs and all the talent has left. It's a shell of a company on the Windows side, as anyone working with them can see. It started a couple of years ago, but it's really gone off the deep end and will just get worse. I say this as a Windows expert and someone who thinks Linux is crap.
Have you ever ridden in a BYD? It's super loud, the suspension is horrible, the seats are extremely uncomfy, and everything is cheap with a fancy-looking facade. If you need a car to go from point A to B and can't afford any luxury, it's fine. But it's a bare-minimum vehicle with looks meant to appeal to status.
I have ridden in a BYD and it was the opposite experience: excellent suspension, unusually smooth ride, great seats. A few things on the dashboard did look a bit tacky. But overall, massive difference from where Chinese cars were even 5 years ago.
Replying from a BYD now. I wish HN could attach photos.
It's literally quieter than a bicycle, except for a whirring when the car powers up. We've come across people and animals standing in the middle of the road because they didn't realize the car was right behind them.
Soundproofing is good too. It comes with karaoke built in and it's more sound proof than many karaoke rooms.
Suspension is much better than my previous car but I'll reserve judgement until it's also 5 years old.
Seats are comfortable enough to sleep in - some people are even using it as an alternative to a hotel, because you can keep the air-conditioning on all night and the seats go all the way down to a horizontal position. There's a window up top so you can watch the stars at night too.
Also the seats have air-conditioning in case your back is hot too.
Haven't ridden in a BYD, but I absolutely abhor the Tesla interior; it's like riding around in a rickety iPad.
BYDs seem (super subjective) to make less road noise outside the vehicle. I still get snuck up on by them in car parks, but I have tuned in to the Tesla hum and can hear them a while off.
Tesla has shown that you can sell USD 100K cars with dubious quality and terrible materials.
That makes it easier for brands who sell cheaper models, imho. It is all about status, and right now having an EV and a fricking 17" TV on the dashboard trumps everything else.
Exactly this. It gives you the CHOICE to stay healthy. Losing a lot of weight is hard; before Ozempic I went to a weight-loss clinic where they prescribed stuff like the "HCG diet" or old medically assisted fads. I lost a LOT of weight, and online they would all say the same stuff: "you'll just gain it back".
A person cannot lose weight that fast normally; losing 70 or 100 pounds at 2-4 pounds a month takes a lot of time. But I was able to lose it and go from not being able to run to being able to walk and run all day at festivals. Guess how it's easier to lose weight now...
I think a lot of people don't understand how hard it is to exercise or lose weight while fat, just due to the joint pain and muscles. Let alone shoes not being made to support you. Losing the weight, it becomes a lot easier to "just go for a walk" or work on cardiovascular health. I got on it after I went on a trip with a friend where walking for a few hours had me bedridden the next day, and he was like, "Yeah, I'm tired, but I could walk all day if I had to", and he wasn't even fit. Now I'm that person, and I can fit in a whole meal with how much I burn from walking/running.
I will say, for the time I was on a GLP-1 at the end: it's amazing. It has almost the same effect on appetite as the other pills, but without the side effects. Phentermine and other stuff will make people manic or paranoid, or make your heart feel like it's popping out of your chest. This type of drug is a godsend, and anyone who's committed will maintain the weight loss and live a healthier lifestyle.