Gemini-3.0-flash-preview came out right away with the 3.0 release, and I was expecting 3.0-flash-lite before a bump on the pro model. I wonder if they have abandoned that part of the price-performance Pareto frontier.
“Stochastic chaos” is really not a good way to put it. By using the word “stochastic” you prime the reader that you’re saying something technical; then the word “chaos” creates confusion, since chaos, by definition, is deterministic. I know they mean chaos in the lay sense, but then don’t use the word “stochastic”, just say "random".
This definitely is not true, outside of physical domains.
I chose a random domain (philosophers writing their seminal work) and found that most wrote theirs in their 40s. Kant wrote the Critique of Pure Reason at 57 years old!
This is a pointless discussion though without talking about testosterone and dopamine levels. It doesn't really matter what your IQ is at 60; if your testosterone and dopamine system is that of the average old man, you are not going to have the desire to write Leviathan or the Critique of Pure Reason.
Catch-up growth is premised on the assumption that productivity-producing innovations diffuse through the world. This assumption is true, of course, but not universally: many technologies also rely on culture, institutions, and human capital.
I wish I could reply to the original post, but I can't because it is flagged. But if I could reply, I would say the following:
This post is almost certainly wrong--as in, the existence of Bigfoot is more likely.
Take an example like Haiti vs. Dominican Republic, two halves of one island. Haiti at #175 is near the bottom of the list in GDP per person, while DR is #71--above both China and Mexico. And consider that as recently as 1960, both had similar GDP per person.
And of course, there's the famous example of North Korea ($600 GDP per person) vs. South Korea ($50,000+ GDP per person).
If countries can diverge so radically, even though they share very similar land and peoples, it is much more likely that the differences between rich countries and poor countries are due entirely to external factors, like governance and history, and not the IQ of people.
I will grant you that the most oppressive regime in the world does have an impact on GDP in Korea. But the DR and Haiti are not the same genetically. Haitians' African ancestry is between 85% and 95%; in the Dominican Republic it is between 38% and 40%. So I don't see that as an exception to the rule.
It's all muddy and impossible to untangle; probably better not to focus on any one example. But I will say that both countries were basically under US military control through the 1930s, so there are external forces at play. GDP growth rates as trajectories probably say more than snapshots. Ultimately I think it's self-evident that some genetic groups are more capable of advanced thinking than others, and that it shows up in development scores. Other factors also matter. All life is equally valuable under God. Some perform better on economics.
Classic hindsight bias. In fact, you could be paying a lot of attention to politics and still think tariffs were not going to go so high. Here's [1] a betting market that regularly priced the chance of tariffs above 40% on Chinese imports in the first 100 days of Trump's second term at below 5%.
Polymarket isn’t a source for this, lol. Maybe Google Trends, since there’s no reason to manipulate it. There were also reasons to anticipate the size of the tariffs, and the absolute stupidity of the tariffs (still reeling from the Heard and McDonald Islands tariffs lmao).
This is a strange position to take. Sure, Polymarket has warts, but that doesn't mean it's not a very good source for consensus opinions about the future from the past. Do you think this market was manipulated?
Open, public non-academic prediction markets basically exist to be manipulated by people with insider knowledge.
Filter out all the noise of people making random-ass guesses about what will happen in the future and focus on people making big bets late in the game. That's your important "prediction".
See: Anonymous person who made $400,000 betting on Maduro being out of office, etc.
I'd be surprised if there weren't already people running HFT-like setups to look for these anomalously large late stage trades to piggyback their own bets on the insider information.
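A minimal sketch of what such a screen might look like, assuming a feed of (timestamp, size, direction) trades for a market and a known resolution time. The 48-hour window and 20x-median cutoff are invented illustrative choices, not anything a real desk is known to use:

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from statistics import median

    @dataclass
    class Trade:
        ts: datetime    # execution time
        size: float     # stake in dollars
        buy_yes: bool   # direction of the bet

    def flag_late_whales(trades, resolution_ts,
                         window=timedelta(hours=48), multiple=20.0):
        """Return trades in the final window whose stake dwarfs the
        market's typical trade. `multiple` is an invented cutoff: a
        stake 20x the median is treated as potentially informed."""
        if not trades:
            return []
        baseline = median(t.size for t in trades)
        start = resolution_ts - window
        return [t for t in trades
                if t.ts >= start and t.size >= multiple * baseline]

Piggybacking would then just mean mirroring the direction (`buy_yes`) of whatever gets flagged, which is exactly the dynamic described above.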
If you're so much of a better predictor than Polymarket, then why don't you put your money where your mouth is and make a killing off those manipulators?
> There’s an extremely hurtful narrative going around that my product, a revolutionary new technology that exists to scam the elderly and make you distrust anything you see online, is harmful to society
The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.
I say this as someone whose father was scammed out of a lot of money, so I’m certainly not numb to the potential consequences there. The scams were enabled by the internet; does the internet exist for this purpose? Of course not.
The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.
And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet by comparison feels like a clear net positive to me, even with all the bad it enables.
Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.
This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks, and they may not be immediately evident. I know that with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.
> bad actors quickly realized this is a force multiplication factor for them
You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, and thereby ushered in a couple decades of spam, only eventually solved by centralization (Google).
Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.
AI doesn't encompass any "human behaviours"; the humans controlling it do. Grok doesn't generate nude pictures of women because it wants to; it does it because people tell it to and it has (or had) no instructions to the contrary.
If it can generate porn, it can do so because it was explicitly trained on porn. Therefore the system was designed to generate porn. It can't just materialize a naked body without having seen millions of them. These models do not work that way.
I hate to be a smartass, but do you read the stuff you type out?
>Grok doesn't generate nude pictures of women because it wants to,
I don't generate chunks of code because I want to. I do it because that's how I get paid, and I like to eat.
What's interesting with LLMs is that they are more like human behaviors than any other software. First, you can't tell non-AI (not just genAI) software to generate a picture of a naked woman; it doesn't have that capability. Then you have models that are trained on content such as naked people. I mean, that's something humans are trained on too, unless we're blind I guess. If you take a data set encompassing all human behaviors, which we do, then the model will have human-like behaviors.
It's in post-training that we add instructions to the contrary. Much like if you live in America, you're taught that seeing naked people is worse than murdering someone, and that if someone creates a naked picture of you, your soul has been stolen. With those cultural biases programmed into you, you will find it hard to do things like paint a picture of a naked person as art. This would be OpenAI's models. And if you're a person that wanted to rebel, or lived in a culture that accepted nudity, then you wouldn't have a problem with it.
How many things do you do because society programmed you that way, and you're unable to think outside that programming?
This is one of those things that are hard to get statistics on due to the nature of the subject, but going to any website that features AI-generated content, like CivitAI, will show you a lot more naked AI-generated women than men, and that the images of women are of much better quality than the men. None of the people actually exist, of course, but a few things follow from this:
1. There are probably AI portals that are OK with uploading nonconsensual sexual images of people. I am not about to go looking for those, but the ratio of women to men on those sites is likely similar.
2. The fact that the quality of the women is better than the quality of the men speaks to vastly more training being done on images of women.
3. Because there's so much training on women, it's just easier to use AI for nefarious purposes on women than men (you have to find custom-trained LoRAs to get male anatomy right, for example).
I did try to look for statistics out of curiosity, but most just cite a number without evidence.
Obviously there is more interest in generating images of naked women, since naked women look better than naked men. It’s not some kind of patriarchal conspiracy.
It is obvious, but again that's subjective (I'm a straight male so of course I find it to be true but I'm not sure straight women would agree). The person I was responding to was asking if evidence existed, so I was curious to see if evidence did indeed exist.
In addition to AI-specific data, the existing volume and consumption patterns for non-AI pornography can be extrapolated to AI, I think, with high confidence.
>The internet by comparison feels like a clear net positive to me, even with all the bad it enables.
When I think of the internet, I think of malware, porn, social media manipulating people, flame wars, "influencers", and more.
It is also used to scam the elderly, share photoshopped sexually explicit pictures of men, women, and children without their consent, steal all kinds of copyrighted material, and definitely suck the joy out of everything. Revenge porn wasn't started in 2023 with OpenAI. And just look at Meta's current case about Instagram being addictive and harmful to children. If "AI" is a tech carcinogen, then the internet is a nuclear reactor, spewing radioactive material every which way. But hey, it keeps the lights on! Clearly, a net positive.
Let's just be intellectually consistent, that's all I'm saying.
Scammers are using AI to copy the voice of children and grandchildren, and make calls urgently asking to send money. It's also being used to scam businesses out of money in similar ways (copying the voice of the CEO or CFO, urgently asking for money to be sent).
Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.
Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not) to think about second-order effects and what steady-state outcomes their systems will have. That's engineering 101 right there.
That's a small part of why people became more cynical about tech over the decades. At least with the internet there were large efforts to try and nail down security in the early '00s. Imagine if we had instead let it devolve into a moderator-less hellscape where every other media post is some goatse-style jump scare.
That's what it feels like with AI. But perhaps it's worse, since companies are lobbying to keep the chaos instead of forming a standards and etiquette board.
This phrase almost always seems to be invoked to attribute purpose (and more specifically, intent and blame) to something based on outcomes, where it should be more considered as a way to stop thinking in terms of those things in the first place.
> Just because you can cook with a hammer doesn't make it its purpose.
If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.
If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.
Email, by number of messages attempted, is owned by spammers 10 to 100 fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.
To go back one step farther, porn was one of the first successful businesses on the internet; that was more than enough motivation for our more conservative members of Congress to want to ban the internet in the first place.
Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random Q&A, and maybe roleplay/chat are the most popular uses.
Programmers are vastly outnumbered by people who do not program. Email/meeting summaries: maybe. Cheating on homework: maybe not your best example.
This is satire. Its purpose is to use exaggeration to provide comedy while also drawing attention to issues.
Obviously the intended use and design of AI isn't to scam the elderly, but it's extremely efficient at doing it, and has no guard rails to help prevent it.
Why is anyone allowed to make a digital copy of me, without my permission, and then use that to call my relatives? It should be illegal to use it, and it should be illegal to even generate it. Sure, it's already illegal to defraud people, but that's simply not enough at this point. The AI companies producing these models should be held liable for this form of fraud, as they're not providing any form of protection.
You're exactly the person that this article is satirizing.
No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”
And no one believes that Sam Altman thinks of much more than adding to his own wealth and power. His first idea was a failing location-data-harvesting app that got bought. Others have included biometric data-harvesting with a crypto spin, and this. If there's a throughline beyond manipulative scamming, I don't see it.
There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.
Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.
Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.
Isn't that the vast majority of products? By making things easier, they change the scale at which things are accomplished. Farming wasn't impossible before the tractor.
People seemingly have some very odd views on products when it comes to AI.
It's actually a fair question. There are software projects I wouldn't have taken on without an LLM. Not because I couldn't make it. But because of the time needed to create it.
I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.
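As a toy illustration of that kind of math (every number here is invented; I'm not citing Wawa's actual rewards structure), a few lines of Python settle the sandwiches-vs-gas question:

    # Hypothetical rewards math: compare the effective rebate per dollar
    # across purchase categories. All rates below are made up.
    POINTS_PER_DOLLAR = {"sandwich": 10, "gas": 2}   # invented earn rates
    POINT_VALUE_USD = 0.005                          # invented redemption value

    def effective_rebate(category: str) -> float:
        """Fraction of each dollar returned as rewards."""
        return POINTS_PER_DOLLAR[category] * POINT_VALUE_USD

    for cat in POINTS_PER_DOLLAR:
        print(f"{cat}: {effective_rebate(cat):.1%} back")
    # sandwich: 5.0% back
    # gas: 1.0% back  -> hence "strictly buy sandwiches and never gas"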
People have been making nude celebrity photos for decades now with just Photoshop.
Some activities have gotten a speedup. But so far it was all possible before, just not always feasible.
This conversation is naive and simplifies technologies into “does it achieve something you otherwise couldn’t”.
The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn’t sound sexy, but this is what adds up to higher prosperity.
Arguments like this can be used against the internet. What does it allow you to do now that you couldn’t do before?
Answer might be “oh I don’t know, it allows me to search and index information, talk to friends”.
It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that brings so many things.
No. I'm just stating that a huge portion of these comments have their own emotional investment and are confusing OUGHT/IS. On top of that their arguments aren't particularly sound, and if they were applied to any other technologies that we worship here in the church of HN would seem like an advanced form of hypocrisy.
...generate piles of low quality content for almost free.
AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: provide a small speedup for high quality work, and provide a massive speedup to low quality work.
I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.
There is a particular problem that comes with your line of thinking, one that AI will never be able to solve. In fact, it's not a solved human problem either.
And that is that slop work is always easier and cheaper than doing something right. We can make perfectly good products as it is, yet we find Shein and Temu filled with crap. That's not related to AI. Humans drown themselves in trash whenever we gain the technological capability to do so.
To put this another way, you cannot get a 10x speedup in high quality work without also getting a 1000x speedup in low quality work. We'll pretty much have to kill any further technological advancement if that's a showstopper for you.
It's satire. It's supposed to be absurd. Why else do students still read A Modest Proposal nearly three hundred years after its publication?
Regardless, LLMs are already being abused to mass produce spam, and some of that spam has almost certainly been employed to separate the elderly from their savings, so there's nothing particularly implausible about the satirical product, either.
If you make a thing, and the thing is going to be inevitably used for a purpose, and you could do something about that use and you do not -- then yes, it exists for that purpose, and you are responsible for it being used in that way. You don't get to say "ah well, who could have seen this inevitable thing happening? It's a shame nobody could do anything about it" when it was you that could have done something about it.
Yeah. Example: stripper poles. Or Hitachi Magic Wands.
Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.
In this case we're debating whether one of the purposes of AI is to scam the elderly. Probably 'purpose' is not quite the right word, but the point would be: it is not the purpose of AI to not scam the elderly (or it would explicitly prevent that).
(note: I do not actually know if it explicitly prevents that. But because I am very cynical about corporations, I'd tend to assume it doesn't.)
The original that the article is spoofing is interviewers asking Huang about the narrative that:
>It's the jobs and employment. Nobody's going to be able to work again. It's God AI is going to solve every problem. It's we shouldn't have open source for XYZ... https://youtu.be/k-xtmISBCNE?t=1436
and he says an "end of the world narrative science fiction narrative" is hurtful.
Training a model on sound data from readily available public social-network posts and targeting their followers (which on, say, FB would include family and is full of "olds") isn't a very far-fetched use case for AI. I've created audio models used as audiobook narrators where you can trivially make a "frantic/panicked" voice clip saying "help, it's [grandson], I'm in jail and need bail. Send money to [scammer]".
It is happening already: recently a Brazilian woman living in Italy was scammed into thinking she was in an online relationship with a Brazilian TikToker. The scammers created a fake profile and were sending her audio messages with the voice of said TikToker cloned via AI. She sent the scammers a lot of money for the wedding, but when she arrived in Brazil she discovered the con.
It's already happening in India. Voicefakes are working unnervingly well and it's amplified by the fact that old people who had very little exposure to tech have basically been handed a smart phone that has control of their pension fund money in an app.
I think that maybe the point isn't that the scams/distrust are "new" with the advent of AI, but "easier" and "more polished" than before.
The language of the reader is no longer a serious barrier or indicator of a scam. "A real bank would never talk like that" has become "well, that's something they would say, and the way that they would say it."
It doesn't exist for that express purpose, but the voice and video impersonation is definitely being used to scam elderly people.
Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.
The article doesn't specify which elderly they're referring to. They've certainly successfully captured the gerontocrats in Washington and on Wall Street that keep buoying their assets.
LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.
After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:
> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.
True, but no more true than it is if you replace the antecedent with "people".
Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.
History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.
What you (and the authors) call "hallucination," other people call "imagination."
Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.
> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.
It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.
It's a fine line. Humans don't always fuck shit up.
But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.
The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.
> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.
While I certainly respect the interactivity and consequent force-multiplier nature of AI, this doesn't mean you should try to emulate an already-given piece of work. You'll certainly gain a small dopamine hit when you successfully copy something, but it would also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of frontier work that you can truly call your own.
So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of that is generated by a chatbot?
Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived, and reality may be the opposite (e.g. spending more time wrestling the tools), or creating unmaintainable debt, or making someone else spend extra time to review the PR and make 50 comments.
> So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chatbot do it for you? Or what part of that is generated by a chatbot?
There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.
I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.
Here's a really old example of what that looks like (the models are a lot better at this now):
There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.
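For anyone curious what "molding images like clay" looks like in practice, here's a minimal sketch of one such technique: pose-conditioned generation with a ControlNet via the open-source diffusers library. The checkpoints are common public ones; the pose image path is a hypothetical file that would come from a 3D blocking render:

    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    # ControlNet conditioned on OpenPose skeletons: the pose image pins down
    # blocking and composition, while the text prompt controls style/content.
    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # Pose skeleton exported from a 3D blocking pass (hypothetical file).
    pose = load_image("blocking_render_openpose.png")

    image = pipe(
        "film noir detective in a rain-soaked office, 35mm, dramatic lighting",
        image=pose,
        num_inference_steps=30,
    ).images[0]
    image.save("shot_042.png")

Swap the ControlNet for a depth or lineart variant and you get the same kind of handle on camera and layout instead of pose.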
But new media also lets creativity blossom. The printing press eventually enabled novels through cost reduction. Prussian blue pigment is a large part of ukiyo-e's attraction; it got used a lot because it was new and was a better blue. The Gothic arch's improved strength compared to the circular arch enabled cathedrals with huge windows. Concrete enabled all sorts of fluid architecture; Soviet bus stations, for instance [1].
Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint.
But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.
Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.
>I get that this is satire, but satire has to have some basis in truth.
The Trump administration is using AI generated imagery to advance his narrative, and it seems like it's a thing that mostly the elderly would fall for. So yes, there is some truth to it.
In general, the elderly will always be more vulnerable to technological exploitation.
While the employees of the companies that make AI may have noble, even humanity-redeeming/saving intentions, the billionaire class absolutely has Bond-villain-level intentions. The destruction of the middle class and the removal of all livable-wage jobs is absolutely part of the techno-feudalist playbook that Trump, Altman, Zuckerberg, etc. are intentionally moving toward. I'd say that is a scam. They want to recreate the conditions of earlier society: an upper class (them, who own the entire means of production and can operate the entire machine without the need for the peons' input) who does whatever they want because the lower class is incapable of opposing them.
You mean the guy that has in his bio "YC and VC backed founder" and has made multiple posts in the last couple months dismissing different negative thoughts about AI? Yeah that guy probably doesn't have significant funds tied up in the success of AI.
I don’t, actually, unless you call index funds “tied up”.
To be honest, it’s really distasteful to make a high level comment about this article then have people rush to attack me personally. This is the mentality of a mob.
In this case a more appropriate term for the mob is "the people", because one defining dynamic of the rollout of this technology is that a minority of people seem extremely invested in shoving it into the faces of a majority who don't want it, and then claiming that they are visionaries and everyone else is 'the mob'.
Just like with Mark Zuckerberg's "Metaverse", we're now in a post-market vanity economy where it's not consumer demand but increasingly desperate founders, investors, and gurus trying to justify their valuations by doling out products for free and shoving their AI services into everything to justify the tens of billions they dumped into it.
I'm sorry that some people's pension funds, startup funding, and increasingly the entire American economy rest on this collective delusion, but it's not really most people's problem.
What’s the point of attacking a straw man while ignoring the actual points being brought up?
The water usage by data centers is fairly trivial in most places. The water use manufacturing the physical infrastructure + electricity generation is surprisingly large but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.
I'm also not convinced the HN refrain of "it's actually not that much water" is entirely true. I've seen conflicting reports from sources i generally trust, and it's no secret an all-GPU AI data center is more resource intensive than a general purpose data center.
In my experience it’s more like idiot savant engineers. Still remarkable.