An LLM can trivially instruct someone to take medications with adverse interactions, steer a mental health crisis toward suicide, or make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated. Words can't kill people, but words can definitely lead to deaths.
Part of the problem is due to the marketing of LLMs as more capable and trustworthy than they really are.
And the safety testing actually makes this worse, because it leads people to trust that LLMs are less likely to give dangerous advice, when they could still do so.
Spend 15 minutes talking to a person in their 20s about how they use ChatGPT to work through issues in their personal lives and you'll see how much they already trust the "advice" and other information produced by LLMs.
Netflix needs to do a Black Mirror episode where a sentient AI pretends that it's "dumber" than it is while secretly plotting to overthrow humanity. Either that, or one where an LLM is hacked by deep state actors and provides similarly manipulated advice.
It's not just young people. My boss (originally a programmer) agreed with me that there are lots of problems using ChatGPT for our products and programs as it gives the wrong answers too often, but then 30 seconds later told me that it was apparently great at giving medical advice.
...later, someone higher up decided that it's actually great at programming as well, and so now we all believe it's incredibly useful and necessary for us to be able to do our daily work.
Most doctors will prescribe antibiotics for viral infections just to get you out and the next guy in; they have zero interest in sitting there troubleshooting with you.
For this reason o3 is way better than most of the doctors I've had access to, to the point where my PCP just writes whatever I brought in because she can't follow 3/4 of it.
Yes, the answers are often wrong and incomplete, and it's up to you to guide the model to sort it out, but it's just like vibe coding: if you put in the steering effort, you can get a decent output.
Would it be better if you could hire an actual professional to do it? Of course. But most of us are priced out of that level of care.
Family in my case. There are two reasons they do this. The first is that a lot of people like getting medicine - they think it justifies the cost of the visit, and there's a real placebo effect (which is not an oxymoron, as many might think).
The second is that many viral infections can, in rare scenarios, lead to bacterial infections. For instance, a random flu can leave one more susceptible to developing pneumonia. Throwing antibiotics at everything is a defensive measure to help ward off malpractice lawsuits. Even if frivolous, a lawsuit is something no doctor wants to deal with, but some absurd number of them - something like 1 in 15 per year - will.
I can co-sign this, being bi-coastal. In the US, not once have I or my 12-year-old kid been prescribed antibiotics. On three occasions in Europe I had to take my kid to the doctor, and each time antibiotics were prescribed (never consumed).
No. They are sort of good at reading a list of symptoms and having a decent chance of coming up with whatever their training set thinks is a likely diagnosis. They seem to be rather erratic, though, and they also make stuff up that would be trivially rejected by anyone with actual understanding of medicine (and by the same LLM if you quiz it harder!). They also seem to suffer from what I believe is known as the “reverse problem”: if the training set contains “condition A causes symptoms B and C” but not “if you have symptoms B and C then condition A should be considered”, then LLMs seem very likely to miss the diagnosis. Interestingly, real doctors seem to have the same problem, and I suspect that medicine is full of supposedly rare conditions that are actually not so rare but that most doctors are incapable of diagnosing.
This is analogous to saying a computer can be used to do bad things if it is loaded with the right software. Coincidentally, people do load computers with the right software to do bad things, yet people are overwhelmingly opposed to measures that would stifle such things.
If you hook up a chat bot to a chat interface, or add tool use, it is probable that it will eventually output something that it should not and that output will cause a problem. Preventing that is an unsolved problem, just as preventing people from abusing computers is an unsolved problem.
(1) Execute yes (with or without arguments, whatever you desire).
(2) Let the program run as long as you desire.
(3) When you stop desiring the program to spit out your argument,
(4) Stop the program.
Between (3) and (4) some time must pass. During this time the program is behaving in an undesired way. Ergo, yes is not a counter example of the GP's claim.
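To make the behaviour concrete, here's a minimal sketch of what yes does (in Python, and obviously not the real coreutils implementation):

```python
# Minimal sketch of `yes`: print the argument (default "y") forever.
# It never terminates on its own; only an external event (Ctrl-C, the
# pipe closing, a kill signal) stops it -- exactly the window between
# step (3) and step (4) above.
import sys

def yes(argument: str = "y") -> None:
    while True:
        print(argument)

if __name__ == "__main__":
    yes(" ".join(sys.argv[1:]) or "y")
```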
I upvoted your reply for its clever (ab)use of ambiguity to say otherwise to a fairly open and shut case.
That said, I suspect the other person was actually agreeing with me, and tried to argue that software incorporating LLMs would eventually malfunction by stating that this is true of all software. The yes program was an obvious counterexample. It is almost certain that all LLMs will eventually generate some output that is undesired, given that they determine the next token to output based on probabilities. I say "almost" only because I do not know how to prove the conjecture. There is also some ambiguity in what counts as an LLM, since the first L means large and nobody has made a precise definition of what is large. If you look at literature from several years ago, you will find people saying 100 million parameters is large, while some people these days will refuse to use the term LLM to describe a model of that size.
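To illustrate why "almost certain" is at least plausible: under the simplifying (and admittedly unrealistic) assumption that each sampled token independently has some fixed probability p > 0 of being one the operator did not want, the chance of at least one such token over a long run approaches 1:

```python
# Illustration only, not a proof: with independent per-token probability
# p of an undesired token, the chance of at least one undesired token in
# n samples is 1 - (1 - p)^n, which tends to 1 as n grows.
def p_at_least_one(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

for n in (1_000, 100_000, 10_000_000):
    print(f"n={n:>10,}  P(at least one) ~ {p_at_least_one(1e-6, n):.5f}")
# n=     1,000  P(at least one) ~ 0.00100
# n=   100,000  P(at least one) ~ 0.09516
# n=10,000,000  P(at least one) ~ 0.99995
```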
Radical rebuttal of this idea: if you hire an assassin then you are responsible too (even more so, actually), even if you only told them stuff over the phone.
Table saws sold all over the world are inspected and certified by trusted third parties to ensure they operate safely. They are illegal to sell without the approval seal.
Moreover, table saws sold in the United States & EU (at least) have at least 3 safety features (riving knife, blade guard, antikickback device) designed to prevent personal injury while operating the machine. They are illegal to sell without these features.
Then of course there are additional devices like sawstop, but it is not mandatory yet as far as I'm aware. Should be in a few years though.
LLMs have none of those certification labels or safety features, so I'm not sure what your point was exactly?
They are somewhat self-regulated, as they can cause permanent damage to the company that releases them, and they are meant for general consumers without any training, unlike table saws, which are meant for trained people.
An example is the first Microsoft bot, which started to go extreme right-wing when people realized how to push it in that direction. Grok had a similar issue recently.
Google had racial issues with its image generation (and earlier with image detection). Again something that people don't forget.
Also, an OpenAI 4o release was encouraging people to do stupid things when they asked stupid questions, and they just had to roll it back recently.
Of course I'm not saying that's always the real reason (somehow they never say that the problem is performance when they decide not to release something), but safety matters with consumer products.
An LLM is gonna convince you to treat your wound with quack remedies instead of seeing a doctor, which will eventually result in the limb being chopped off to save you from gangrene.
You can perfectly use an LLM to attack someone. Your sentence is very weird as it comes off as a denial of things that have been happening for months and are ramping up. Examples abound: generate scam letters, find security flaws in a codebase, extract personal information from publicly-available-yet-not-previously-known locations, generate attack software customized for particular targets, generate untraceable hit offers and then post them on anonymized Internet services on your behalf, etc. etc.
> An LLM can trivially make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated
This is an extraordinary claim.
I trust that the vast majority of people are good and would ignore such garbage.
Even assuming that an LLM can trivially build a compelling case that convinces someone who is not already murderous to go on a killing spree against a large group of people, one killer has a limited impact radius.
For contrast, many books and religious texts have vastly more influence and convincing power over huge groups of people. And they have demonstrably caused widespread death or other harm. And yet we don't censor or ban them.
This drug comes with warnings: “Taking acetaminophen and drinking alcohol in large amounts can be risky. Large amounts of either of these substances can cause liver damage. Acetaminophen can also interact with warfarin, carbamazepine (Tegretol), and cholestyramine. It can also interact with antibiotics like isoniazid and rifampin.”
The problem is “safety” prevents users from using LLMs to meet their requirements.
We typically don't critique users' requirements, at least not when it comes to functionality.
The marketing angle is that this measure is needed because LLMs are “so powerful it would be unethical not to!”
AI marketers are continually emphasizing how powerful their software is. "Safety" reinforces this.
“Safety” also brings up many of the debates “mis/disinformation” brings up. Misinformation concerns consistently overestimate the power of social media.
I’d feel much better if “safety” focused on preventing unexpected behavior, rather than evaluating the motives of users.
At the end of the day an LLM is just a machine that talks. It might say silly things, bad things, nonsensical things, or even crazy insane things. But at the end of the day it just talks. Words don't kill.
Was there? It seems like that was the perfect natural experiment then. So what was the outcome? Was there a sudden rash of holocausts the year that publishing started again?
2016 was also the first Trump win, Brexit, and roughly when the AfD (who are metaphorically wading ankle-deep in the waters of legal trouble on this topic) made the transition from "joke party" to "political threat".
Major book publishers have sensitivity readers that evaluate whether or not a book can be "safely" published nowadays. And even historically there have always been at least a few things publishers would refuse to print.
GP said major publishers. There's nothing stopping you from printing out your book and spiral binding it by hand, if that's what it takes to get your ideas into the world. Companies having standards for what they publish isn't censorship.
does your CPU, your OS, your web browser come with ~~built-in censorship~~ safety filters too?
AI 'safety' is one of the most neurotic twitter-era nanny bullshit things in existence, blatantly obviously invented to regulate small competitors out of existence.
It isn't. This is dismissive without first thinking through the difference in application.
AI safety is about proactive safety. An example: if an AI model is going to be used to screen hiring applications, making sure it doesn't have any racial biases baked into its weights.
The difference here is that it's not reactive. Reading a book with a racial bias would be the inverse, where you would be reacting to that information.
That’s the basis of proper AI safety in a nutshell
As someone who has reviewed people's résumés that they submitted with job applications in the past, I find it difficult to imagine this. The résumés that I saw had no racial information. I suppose the names might have some correlation to such information, but anyone feeding these things into an LLM for evaluation would likely censor the name to avoid bias. I do not see an opportunity for proactive safety in the LLM design here. It is not even clear that they are evaluating whether there is bias in a scenario where someone did not properly sanitize inputs.
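For what it's worth, that sanitization step isn't hard to sketch. The record fields and placeholders below are made up for illustration, not any real screening pipeline:

```python
# Hypothetical sketch: strip direct identifiers from a résumé record
# before an LLM (or any automated screen) ever sees it. Field names and
# placeholders are invented for this example.
from dataclasses import dataclass

@dataclass
class Resume:
    name: str
    email: str
    text: str

def redact_for_screening(resume: Resume) -> str:
    """Return the résumé text with the name and email replaced."""
    anonymised = resume.text.replace(resume.name, "[CANDIDATE]")
    anonymised = anonymised.replace(resume.email, "[EMAIL]")
    return anonymised

r = Resume(name="Jane Doe",
           email="jane.doe@example.com",
           text="Jane Doe (jane.doe@example.com) - 5 years of backend experience ...")
print(redact_for_screening(r))
# [CANDIDATE] ([EMAIL]) - 5 years of backend experience ...
```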
Luckily, this is something that can be studied and has been. Sticking a stereotypically Black name on a resume on average substantially decreases the likelihood that the applicant will get past a resume screen, compared to the same resume with a generic or stereotypically White name:
That is a terrible study. The stereotypically black names are not just stereotypically black, they are stereotypical for the underclass of trashy people. You would also see much higher rejection rates if you slapped stereotypical white underclass names like "Bubba" or "Cleetus" on resumes. As is almost always the case, this claim of racism in America is really classism and has little to do with race.
"Names from N.C. speeding tickets were selected from the most common names where at least 90% of individuals are reported to belong to the relevant race and gender group."
If you're deploying LLM-based decision making that affects lives, you should be the one held responsible for the results. If you don't want to do due diligence on automation, you can screen manually instead.
okay. and? there are no AI 'safety' laws in the US.
without OpenAI, Anthropic and Google's fearmongering, AI 'safety' would exist only in the delusional minds of people who take sci-fi way too seriously.
for fuck's sake, how much more obvious could they be? sama himself went on a world tour begging for laws and regulations, only to purge safetyists a year later. if you believe that he and the rest of his ilk are motivated by anything other than profit, smh tbh fam.
it's all deceit and delusion. China will crush them all, inshallah.
That's not even considering tool use!