seizethecheese's comments on Hacker News

> It feels like we are now able to manage incredibly smart engineers for a month at the price of a good sushi dinner.

In my experience it’s more like idiot savant engineers. Still remarkable.


It's like getting access to an amazing engineer, but you get a new individual engineer each prompt, not one consistent mind.

I use Gemini flash lite in a side project, and it’s stuck on 2.5. It’s now well behind schedule. Any speculation as to what’s going on?

Gemini-3.0-flash-preview came out right away with the 3.0 release, and I was expecting 3.0-flash-lite before a bump on the pro model. I wonder if they have abandoned that part of the price-performance Pareto frontier.

The definition of a moat is what cannot be bought.

To modern ears it seems dim, but to contemporary ears it would be very bright compared to normal nighttime!

“Stochastic chaos” is really not a good way to put it. By using the word “stochastic” you prime the reader that you’re saying something technical, then the word “chaos” creates confusion, since chaos, by definition, is deterministic. I know they mean chaos in the lay sense, but then don’t use the word “stochastic”; just say "random".

I have a feeling OP used the phrase as a nod to "stochastic terrorism", which would make sense in this instance.

Right. It captures the destabilizing effect of stochastic terrorism, without the terroristic intent. It’s a neat phrase.

Yes, that's exactly what I was trying to get at.

That would have been a lot less confusing.

The word "stochastic" in relation to chaos is a thing though. It helps distinguish between closed and open systems.

I don't think this is correct.

This definitely is not true, outside of physical domains.

I chose a random domain (philosophers writing their seminal work) and found that most wrote them in their 40s. Kant wrote the Critique of Pure Reason at 57 years old!


Hobbes was 63 when he wrote Leviathan.

This is a pointless discussion though without talking about testosterone and dopamine levels. Doesn't really matter what your IQ is at 60, if your testosterone and dopamine system is that of the average old man you are not going to have the desire to write Leviathan or Critique of Pure Reason.


Catch-up growth is premised on the assumption that productivity-producing innovations diffuse through the world. This assumption is true, of course, but not universally: many technologies also rely on culture, institutions, and human capital.


You may be right, but what if IQ is downstream of GDP, not the other way around? No dog in this fight, just a philosophical question.


I wish I could reply to the original post, but I can't because it is flagged. But if I could reply, I would say the following:

This post is almost certainly wrong--as in, the existence of Bigfoot is more likely.

Take an example like Haiti vs. Dominican Republic, two halves of one island. Haiti at #175 is near the bottom of the list in GDP per person, while DR is #71--above both China and Mexico. And consider that as recently as 1960, both had similar GDP per person.

And of course, there's the famous example of North Korea ($600 GDP per person) vs. South Korea ($50,000+ GDP per person).

If countries can diverge so radically, even though they share very similar land and peoples, it is much more likely that the differences between rich countries and poor countries are due entirely to external factors, like governance and history, and not to the IQ of the people.


I will grant you that the most oppressive regime in the world does have an impact on GDP in Korea. But the DR and Haiti are not the same genetically. Haitians' African ancestry is between 85% and 95%; in the Dominican Republic it is between 38% and 40%. So I don't see that as an exception to the rule.


But they were economically equal in 1960 with similar ancestry, so external factors must be the cause.


It's all muddy and impossible to untangle; probably better not to focus on any one example. But I will say that both countries were basically under US military control through the 1930s, so there are external forces at play. GDP growth rates as trajectories probably say more than snapshots. Ultimately I think it's self-evident that some genetic groups are more capable of advanced thinking than others, and that it shows up in development scores. Other factors also matter. All life is equally valuable under God. Some perform better on economics.


The existing data leans heavily against "IQ is just downstream of GDP."


Classic hindsight bias. In fact, you could have been paying a lot of attention to politics and still thought tariffs were not going to go so high. Here's [1] a betting market that regularly sat below a 5% chance of tariffs above 40% on Chinese imports in the first 100 days of Trump's second term.

https://polymarket.com/event/trump-imposes-40-blanket-tariff...


Polymarket isn’t a source for this, lol. Maybe Google Trends, since there’s no reason to manipulate it. There were also reasons to anticipate the amount of the tariffs, and the absolute stupidity of the tariffs (still reeling from the Heard and McDonald Islands tariffs lmao).


This is a strange position to take. Sure, Polymarket has warts, but that doesn't mean it's not a very good source for consensus opinions about the future from the past. Do you think this market was manipulated?


Search “Polymarket manipulated” or similar and examples are legion. You can even do that on Hacker News. There’s a lot of incentive to do so.


Open, public non-academic prediction markets basically exist to be manipulated by people with insider knowledge.

Filter out all the noise of people random ass guessing what will happen in the future and focus on people making big bets late in the game. That's your important "prediction".

See: Anonymous person who made $400,000 betting on Maduro being out of office, etc.

I'd be surprised if there weren't already people running HFT-like setups to look for these anomalously large late stage trades to piggyback their own bets on the insider information.


Sure, but that’s not likely in this specific market, at least in enough size to make a difference to the main point here.


If you're so much of a better predictor than Polymarket, then why don't you put your money where your mouth is and make a killing off those manipulators?


> There’s an extremely hurtful narrative going around that my product, a revolutionary new technology that exists to scam the elderly and make you distrust anything you see online, is harmful to society

The article is certainly interesting as yet another indicator of the backlash against AI, but I must say, “exists to scam the elderly” is totally absurd. I get that this is satire, but satire has to have some basis in truth.

I say this as someone whose father was scammed out of a lot of money, so I’m certainly not numb to potential consequences there. The scams were enabled by the internet, does the internet exist for this purpose? Of course not.


The article names a lot of other things that AI is being used for besides scamming the elderly, such as making us distrust everything we see online, generating sexually explicit pictures of women without their consent, stealing all kinds of copyrighted material, driving autonomous killer drones and more generally sucking the joy out of everything.

And I think I'm inclined to agree. There are a small number of things that have gotten better due to AI (certain kinds of accessibility tech) and a huge pile of things that just suck now. The internet by comparison feels like a clear net positive to me, even with all the bad it enables.


Here's the thing with AI: especially as it becomes more AGI-like, it will encompass all human behaviors. This will lead to the bad behaviors becoming especially noticeable, since bad actors quickly realized this is a force multiplication factor for them.

This is something everyone needs to think about when discussing AI safety. Even ANI applications carry a lot of potential societal risks and they may not be immediately evident. I know with the information superhighway few expected it to turn into a dopamine drip feed for advertising dollars, yet here we are.


> bad actors quickly realized this is a force multiplication factor for them

You'd think we would have learned this lesson when we failed to implement email charges that netted to $0 for balanced send/receive patterns, thereby ushering in a couple decades of spam, only eventually solved by centralization (Google).

Driving the cost of anything valuable to zero inevitably produces an infinite torrent of volume.


AI doesn't encompass any "human behaviours"; the humans controlling it do. Grok doesn't generate nude pictures of women because it wants to, it does it because people tell it to and it has (or had) no instructions to the contrary.


If it can generate porn, it can do so because it was explicitly trained on porn. Therefore the system was designed to generate porn. It can't just materialize a naked body without having seen millions of them. They do not work that way.


Not all pictures of naked people are porn.


I hate to be a smartass, but do you read the stuff you type out?

>Grok doesn't generate nude pictures of women because it wants to,

I don't generate chunks of code because I want to. I do it because that's how I get paid and like to eat.

What's interesting with LLMs is that they are more like human behaviors than any other software. First, you can't tell non-AI software (not just genAI) to generate a picture of a naked woman; it doesn't have that capability. Then you have models that are trained on content such as naked people. That's something humans are trained on too, unless we're blind, I guess. If you take a data set encompassing all human behaviors, which we do, then the model will have human-like behaviors.

It's in post-training that we add instructions to the contrary. Much like if you live in America, you're taught that seeing naked people is worse than murdering someone and that if someone creates a naked picture of you, your soul has been stolen. With those cultural biases programmed into you, you will find it hard to do things like paint a picture of a naked person as art. This would be OpenAI's models. And if you're a person who wanted to rebel, or lived in a culture that accepted nudity, then you wouldn't have a problem with it.

How many things do you do because society programmed you that way, and you're unable to think outside that programming?


You’re way off base. It can also create sexually explicit pictures of men.


Not sure if you're being sarcastic, but women are disproportionately affected by this compared to men.


That sounds like it could be true, but do you have any actual evidence of that?


This is one of those things that are hard to get statistics on due to the nature of the subject, but going to any website that features AI-generated content, like CivitAI, will show you a lot more naked AI-generated women than men, and that the images of women are of much better quality than those of the men. None of the people actually exist, of course, but a few things follow from this:

1. There are probably AI portals that are OK with uploading nonconsensual sexual images of people. I am not about to go looking for those, but the ratio of women to men on those sites is likely similar.

2. The fact that the quality of the women is better than the quality of the men speaks to vastly more training being done on women.

3. Because there's so much training on women, it's just easier to use AI for nefarious purposes on women than on men (you have to find custom-trained LoRAs to get male anatomy right, for example).

I did try to look for statistics out of curiosity, but most just cite a number without evidence.

https://www.pbs.org/newshour/show/women-face-new-sexual-hara...

https://verfassungsblog.de/deepfakes-ncid-ai-regulation/

https://www.csis.org/analysis/left-shoulder-worries-ai


Obviously there is more interest in generating images of naked women, since naked women look better than naked men. It’s not some kind of patriarchal conspiracy.


It is obvious, but again that's subjective (I'm a straight male so of course I find it to be true but I'm not sure straight women would agree). The person I was responding to was asking if evidence existed, so I was curious to see if evidence did indeed exist.


In addition to AI-specific data, the existing volume and consumption patterns for non-AI pornography can be extrapolated to AI, I think, with high confidence.


Source: I have eyes


>The internet by comparison feels like a clear net positive to me, even with all the bad it enables.

When I think of the internet, I think of malware, porn, social media manipulating people, flame wars, "influencers", and more.

It is also used to scam the elderly, share photoshopped sexually explicit pictures of men, women, and children without their consent, steal all kinds of copyrighted material, and definitely suck the joy out of everything. Revenge porn wasn't started in 2023 with OpenAI. And just look at Meta's current case about Instagram being addicting and harmful to children. If "AI" is a tech carcinogen, then the internet is a nuclear reactor, spewing radioactive material every which way. But hey, it keeps the lights on! Clearly, a net positive.

Let's just be intellectually consistent, that's all I'm saying.


[flagged]


It's true, making these things easier and faster and more accessible really doesn't matter


That's a bonkers take.

Am I misunderstanding you or are you somehow saying anything done in the past is fine to do more of?


Poe's Law, mate.


> I get that this is satire, but satire has to have some basis in truth.

Do you think that it isn't used for this? The satire part is to expand that use case to say it exists purely for that purpose.


Scammers are using AI to copy the voice of children and grandchildren, and make calls urgently asking to send money. It's also being used to scam businesses out of money in similar ways (copying the voice of the CEO or CFO, urgently asking for money to be sent).

Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.


Not at all. I’m saying AI doesn’t exist to scam the elderly, which says nothing about whether it’s dangerous in that respect.


Perhaps you’ve heard that the purpose of a system is what it does?


Exactly this. These systems are supposed to have been built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not) to think about second-order effects and what steady-state outcomes their systems will have. That's engineering 101 right there.


That's because they were thinking about their stock options instead.


That's a small part of why people became more cynical about tech over the decades. At least with the internet there were large efforts to try to nail down security in the early '00s. Imagine if we had instead let it devolve into a moderator-less hellscape where every other media post is some goatse-style jump scare.

That's what it feels like with AI. But perhaps worse since companies are lobbying to keep the chaos instead of making a board of standards and etiquette.


This phrase almost always seems to be invoked to attribute purpose (and more specifically, intent and blame) to something based on outcomes, where it should be more considered as a way to stop thinking in terms of those things in the first place.


In broad strokes - disagree.

This is the knife-food vs knife-stab vs gun argument. Just because you can cook with a hammer doesn't make it its purpose.


> Just because you can cook with a hammer doesn't make it its purpose.

If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.

If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.


I mean, this is a pretty piss-poor example.

Email, by number of emails attempted to be sent, is owned by spammers 10- to 100-fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it shows up in your mailbox.

To go back one step farther, porn was one of the first successful businesses on the internet; that was more than enough motivation for our more conservative congress members to want to ban the internet in the first place.


>that is more than enough motivation for our more conservative congress members to ban the internet in the first place

Yes, and now porn is highly regulated. Maybe that's a hint?


Email volume is mostly robots fighting robots these days.

Today if we could survey AI contact with humans, I'm afraid the top uses by a wide margin would be scams, cheating, deepfakes, and porn.


Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random QA, and maybe roleplay/chat are the most popular uses.


The number of programmers in the world is vastly outnumbered by the people that do not program. Email / meeting summaries: maybe. Cheating on homework: maybe not your best example.


I was going to reply to the post above but you said it perfectly.


It’s possible to write malware in C. Does that mean that C is designed to write malware?


This is satire. Its purpose is to use exaggeration to provide comedy while also drawing attention to issues.

Obviously the intended use and design of AI isn't to scam the elderly, but it's extremely efficient at doing it, and has no guard rails to help prevent it.

Why is anyone allowed to make a digital copy of me, without my permission, and then use that to call my relatives? It should be illegal to use it and it should be illegal to even generate it. Sure, it's already illegal to defraud people, but that's simply not enough at this point. The AI companies producing these models should be held liable for this form of fraud, as they're not providing any form of protection.

You're exactly the person that this article is satirizing.


No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”


And no one believes that Sam Altman thinks of much more than adding to his own wealth and power. His first idea was a failing location data-harvesting app that got bought. Others have included biometric data-harvesting with a crypto spin, and this. If there's a throughline beyond manipulative scamming, I don't see it.


I can't think of many other reasons to create voice cloning AI, or deepfake AI (other than porn, of course).


There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.

Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.


Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.


Sure, phones aren't directly doing the scamming, but they're supercharging the ability to do so.

Phones are also a very popular mechanism for scamming businesses. It's tough to pull off CEO scams without text and calls.

Therefore, phones are bad?

This is of course before we talk about what criminals do with money, making money truly evil.


Without phones, we couldn’t talk to people across great distances (oversimplification but you get it).

Without Generative AI, we couldn’t…?


What's the big deal in talking to people across great distances? We can live without it.


Are you really implying that generative AI doesn't enable things that were not previously possible?


Name some then! I initially scoffed too, but I can only think of stuff LLMs make easier, not things that were impossible previously.


Isn't that the vast majority of products? By making things easier, they change the scale at which they are accomplished. Farming wasn't impossible before the tractor.

People seemingly have some very odd views on products when it comes to AI.


It's actually a fair question. There are software projects I wouldn't have taken on without an LLM. Not because I couldn't make it. But because of the time needed to create it.

I could have taken the time to do the math to figure out what the rewards structure is for my Wawa points and compare it to my car's fuel tank to discover I should strictly buy sandwiches and never gas.

People have been making nude celebrity photos for decades now with just Photoshop.

Some activities have gotten a speed up. But so far it was all possible before just possibly not feasible.


Would it be fair to say a car or plane aren’t significant then, given we could always traverse by horse or boat?


What did the internet bring?


For the most part, it hasn't. What do you consider previously impossible, and how is it good for the world?


> were not previously possible?

How obtuse. The poster is saying they don't enable anything of value.


Can you name one thing generative AI enables that wasn't previously possible?


Can you name one thing a plow enables that wasn't previously possible?

This line of thinking is ridiculous.


A plow enables you to till land you couldn't before with your bare hands.

The phone lets you talk to someone you couldn't before when shouting can't.

ChatGPT lets you...

Please complete the sentence without an analogy


This conversation is naive and simplifies technologies into “does it achieve something you otherwise couldn’t”.

The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn’t sound sexy, but this is what adds up to higher prosperity.

Arguments like this can be used against the internet. What does it allow you to do now that you couldn’t do before?

Answer might be “oh I don’t know, it allows me to search and index information, talk to friends”.

It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that brings so many things.


>A plow enables you to till land you couldn't before with your bare hands.

It does not. You could still till the land with hand tools. You just get a lot more done.

ChatGPT lets me program in languages I was not efficient in before.

Anyway, I'm done with your technology purity contest, it has about zero basis in reality.


Why are you so mad? You're the only one in these comments dismissing arguments because you don't like them. Are you invested?


No. I'm just stating that a huge portion of these comments have their own emotional investment and are confusing OUGHT/IS. On top of that their arguments aren't particularly sound, and if they were applied to any other technologies that we worship here in the church of HN would seem like an advanced form of hypocrisy.


They tilled by hand for thousands of years before inventing a plow to speed it up.

They spoke slowly, through letters, until phones sped it up.

We coded slowly, letter by letter, until agents sped it up.


...generate piles of low quality content for almost free.

AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: provide a small speedup for high quality work, and provide a massive speedup to low quality work.

I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.


There is a particular problem with your line of thinking, and AI will never be able to solve it. In fact, it's not a solved human problem either.

And that is: slop work is always easier and cheaper than doing something right. We can make perfectly good products as it is, yet we find Shein and Temu filled with crap. That's not related to AI. Humans drown themselves in trash whenever we gain the technological capability to do so.

To put this another way, you cannot get a 10x speed up in high quality work without also getting a 1000x speed up in low quality work. We'll pretty much have to kill any further technological advancement if that's a showstopper for you.


> Therefore, phones are bad?

Phones are utilities. AI companies are not.


It's satire. It's supposed to be absurd. Why else do students still read A Modest Proposal nearly three hundred years after its publication?

Regardless, LLMs are already being abused to mass produce spam, and some of that spam has almost certainly been employed to separate the elderly from their savings, so there's nothing particularly implausible about the satirical product, either.


if you make a thing and the thing is going to be inevitably used for a purpose and you could do something about that use and you do not --- then yes, it exists for that purpose, and you are responsible for it being used in that way. you don't get to say "ah well who could have seen this inevitable thing happening? it's a shame nobody could do anything about it" when it was you that could have done something about it.


Yeah. Example: stripper poles. Or hitachi magic wands.

Those poles WERE NOT invented for strippers/pole dancers. Ditto for the Hitachis. Even now, I'm pretty sure more firemen use the poles than strippers. But that doesn't stop the association from forming. That doesn't make me not feel a certain way if I see a stripper pole or a Hitachi Magic Wand in your living room.


I'm super confused what harms come from stripper poles and vibrators. I am prepared to accept that the joke might have gone right over my head.


I don't get the jump either but it was certainly lateral enough to be amusing


To be fair to the magic wands, that's why “massagers” were invented in the first place. [1] [2] [3]

[1] https://thefactbase.com/the-vibrator-was-invented-in-1869-to...

[2] https://archive.nytimes.com/www.nytimes.com/books/first/m/ma...

[3] https://en.wikipedia.org/wiki/Female_hysteria


And I'll go out on a limb and say the first person to use a pole resembling a fire pole in the fireman vs stripper debate was probably the stripper!


how many front rooms have you walked into that had a stripper pole?

(also: what city? for a friend...)


I've seen it in NYC a few times. Pole dance is a fairly common fitness hobby; there are plenty of gyms/studios for it also.


> you...could have done something about it

What is it that isn't being done here, and who isn't doing it?


In this case we're debating whether one of the purposes of AI is to scam the elderly. Probably 'purpose' is not quite the right word, but the point would be: it is not the purpose of AI to not scam the elderly (or it would explicitly prevent that).

(note: I do not actually know if it explicitly prevents that. But because I am very cynical about corporations, I'd tend to assume it doesn't.)


The original that the article is spoofing is interviewers asking Huang about the narrative that:

>It's the jobs and employment. Nobody's going to be able to work again. It's God AI is going to solve every problem. It's we shouldn't have open source for XYZ... https://youtu.be/k-xtmISBCNE?t=1436

and he says an "end of the world science fiction narrative" is hurtful.


My hypothesis: Generative AI is, in part, reaping the reaction that cryptocurrency sowed.


Could you expand on that?


Numerous critics pointed out that the only application where cryptocurrencies had any real advantages over conventional finance was crime.

The critics were right, and the cryptocurrency ecosystem, of which Silicon Valley was a not insubstantial part, was wrong.

As such, and combined with the ills of social media, Silicon Valley has blown any public trust it may have had in the past.


Training a model on sound data from readily available public social network posts and targeting their followers (which on say fb would include family and is full of "olds") isn't a very far fetched use-case for AI. I've created audio models used as audiobook narrators where you can trivially make a "frantic/panicked" voice clip saying "help it's [grandson], I'm in jail and need bail. Send money to [scammer]"

If it's not happening yet, it will...


It is happening already. Recently a Brazilian woman living in Italy was scammed into thinking she was having an online relationship with a Brazilian TikToker; the scammers created a fake profile and were sending her audio messages with the voice of said TikToker cloned via AI. She sent the scammers a lot of money for the wedding, but when she arrived in Brazil she discovered the con.


It's already happening in India. Voicefakes are working unnervingly well and it's amplified by the fact that old people who had very little exposure to tech have basically been handed a smart phone that has control of their pension fund money in an app.


I think that maybe the point isn't that the scams/distrust are "new" with the advent of AI, but "easier" and "more polished" than before.

The language of the reader is no longer a serious barrier or indicator of a scam. "A real bank would never talk like that" has become "well, that's something they would say, the way that they would say it."


It doesn't exist for that express purpose, but the voice and video impersonation is definitely being used to scam elderly people.

Instead of being used to protect us or make our lives easier, it is being used by evildoers to scam the weak and vulnerable. None of the AI believers will do anything about it because it kills their vibe.


> the voice and video impersonation is definitely being used to scam elderly people

And like with the child pornography, the AI companies are engaging in high-octane buck passing more than actually trying to tamp down the problem.


The article doesn't specify which elderly they're referring to. They've certainly successfully captured the gerontocrats in Washington and Wall Street that keep buoying their assets.


>> enabled by the internet, does the internet exist for this purpose? Of course not.

I think the point the article was trying to make is that LLMs and new genAI tools helped the scammers scale their operations.


So did the Internet


LLMs are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

After you eliminate anything that requires accountability and trustworthiness from the tasks which LLMs may be responsibly used for, the most obvious remaining use-cases are those built around lying:

- advertising

- astroturfing

- other forms of botting

- scamming old people out of their money


> [...] are fiction machines. All they can do is hallucinate, and sometimes the hallucinations are useful. That alone rules them out, categorically, from any critical control loop.

True, but no more true than it is if you replace the antecedent with "people".

Saying that the tools make mistakes is correct. Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

History is paved with people who got steamrollered by technology they didn't think would ever work. On a practical level AI seems very median in that sense. It's notable only because it's... kinda creepy, I guess.


> True, but no more true than it is if you replace the antecedent with "people".

Incorrect. People are capable of learning by observation, introspection, and reasoning. LLMs can only be trained by rote example.

Hallucinations are, in fact, an unavoidable property of the technology - something which is not true for people. [0]

[0] https://arxiv.org/abs/2401.11817


What you (and the authors) call "hallucination," other people call "imagination."

Also, you don't know very many people, including yourself, if you think that confabulation and self-deception aren't integral parts of our core psychological makeup. LLMs work so well because they inherit not just our logical thinking patterns, but our faults and fallacies.


what I call it is "buggy garbage"

it's not a person, it doesn't hallucinate or have imagination

it's simply unreliable software, riddled with bugs


(Shrug) Perhaps other sites beckon.


The suggestion that hallucinations are avoidable in humans is quite a bold claim.


> Saying that (like people) they can never be trained and deployed such that the mistakes are tolerable is an awfully tall order.

It is, though. We have numerous studies on why hallucinations are central to the architecture, and numerous case studies by companies who have tried putting them in control loops! We have about 4 years of examples of bad things happening because the trigger was given to an LLM.


> We have numerous studies on why hallucinations are central to the architecture,

And we have tens of thousands of years of shared experience of "People Were Wrong and Fucked Shit Up". What's your point?

Again, my point isn't that LLMs are infallible; it's that they only need to be better than their competition, and their competition sucks.


It's a fine line. Humans don't always fuck shit up.

But human systems that don't fuck shit up are short-lived, rare, and fragile, and they've only become a potential - not a reality - in the last century or so.

The rest of history is mostly just endless horrors, with occasional tentative moments of useful insight.


[flagged]


> I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

While I certainly respect the interactivity and consequent force-multiplier nature of AI, this doesn't mean you should try to emulate an already existing piece of work. You'll certainly get a small dopamine hit when you successfully copy something, but it would also atrophy your critical skills and paralyze you from making any sort of original art. You'll miss out on discovering the feeling of any frontier work that you can truly call your own.


So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chat bot do it for you? Or what part of that is generated by the chat bot?

Claims of productivity boosts must always be inspected very carefully, as they are often merely perceived; the reality may be the opposite (e.g. spending more time wrestling with the tools), or creating unmaintainable debt, or making someone else spend extra time reviewing the PR and leaving 50 comments.


> So instead of actually making films, the thing you as a filmmaker supposedly like to do, you have some chat bot do it for you? Or what part of that is generated by the chat bot?

There's no chatbot. You can use image-to-image, ControlNets, LoRAs, IPAdapters, inpainting, outpainting, workflows, and a lot of other techniques and tools to mold images as if they were clay.

I use a lot of 3D blocking with autoregressive editing models to essentially control for scene composition, pose, blocking, camera focal length, etc.

Here's a really old example of what that looks like (the models are a lot better at this now) :

https://www.youtube.com/watch?v=QYVgNNJP6Vc

There are lots of incredibly talented folks using Blender, Unreal Engine, Comfy, Touch Designer, and other tools to interface with models and play them like an orchestra - direct them like a film auteur.


As a rule real creativity blossoms under constraints, not under abundance.


But new media also lets creativity blossom. The printing press eventually enabled novels through cost reduction. Prussian blue pigment is a large part of ukiyo-e's attraction; it got used a lot because it was new and was a better blue. The Gothic arch's improved strength compared to the circular arch enabled cathedrals with huge windows. Concrete enabled all sorts of fluid architecture; Soviet bus stations, for instance [1].

[1] https://www.russiabeyond.com/arts/327147-10-best-soviet-bus-...


Trying to make a dent in the universe while we metabolize and oxidize our telomeres away is a constraint.

But to be more in the spirit of your comment, if you've used these systems at all, you know how many constraints you bump into on an almost minute to minute basis. These are not magical systems and they have plenty of flaws.

Real creativity is connecting these weird, novel things together into something nobody's ever seen before. Working in new ways that are unproven and completely novel.


Genuine question: does the agent work for you if you didn't build it, train it, or host it?

It's ostensibly doing things you asked it, but in terms dictated by its owner.


indeed

and it's even worse than that: you're literally training your replacement by using it when it re-transmits what you're accepting/discarding

and you're even paying them to replace you


> AI is a force multiplier for labor capital

for a 2011 account, that's a shockingly naive take

yes, AI is a labor capital multiplier. and the multiplicand is zero

hint: soon you'll be competing not with humans without AI, but with AIs using AIs


Even if it's >1, it doesn't follow that it's good news for the "labor capitalist".

"OK, so I lost my job, but even adjusting for that, I can launch so many more unfinished side-projects per hour now!"


I want to be able to leverage Hollywood grade VFX and make shows and transform my likeness for real time improv.

Do you know anything about "Hollywood grade VFX" ? Have you ever worked for any company that does it?

No more nepotism in Hollywood

Do you think "Hollywood VFX" is full of nepotism?


always good to be in the pick and shovel biz


Extremely exaggerated comment. LLMs don't hallucinate that much. That doesn't rule them out of any control loop.

I mean, I think you have not put much thought into your theory.


> The scams were enabled by the internet, does the internet exist for this purpose? Of course not.

But did it accelerate the whole process? Hell yeah.


>I get that this is satire, but satire has to have some basis in truth.

The Trump administration is using AI generated imagery to advance his narrative, and it seems like it's a thing that mostly the elderly would fall for. So yes, there is some truth to it.

In general, the elderly will always be more vulnerable to technological exploitation.


They're used for scams. Isn't that the basis in truth you're looking for in satire?

Before this we had "the internet is for porn." Same sort of exaggerated statement.


Porn was enabled by the internet, but does the internet exist for this purpose?

Yes. Yes it does. That is the satire.


> Why you think the net was born?
> Porn porn porn


I mean... explain sora.


Revolutionizing cat memes


While the employees of the companies that make AI may have noble, even humanity-redeeming/saving intentions, the billionaire class absolutely has bond-villain level intentions. The destruction of the middle class and the removal of all livable-wage jobs is absolutely part of the techno-feudalist playbook that Trump, Altman, Zuckerberg, etc are intentionally moving toward. I'd say that is a scam. They want to recreate the conditions of earlier society - an upper class (them, who own the entire means of production and can operate the entire machine without the need for peons' input) who does whatever they want because the lower class is incapable of opposing them.

If you aren't familiar, look into it.


[flagged]


The person you're replying to is probably not personally a major AI magnate.


You mean the guy that has in his bio "YC and VC backed founder" and has made multiple posts in the last couple months dismissing different negative thoughts about AI? Yeah that guy probably doesn't have significant funds tied up in the success of AI.


It becomes insulting when they think we're this foolish.


I don’t, actually, unless you call index funds “tied up”.

To be honest, it’s really distasteful to make a high level comment about this article then have people rush to attack me personally. This is the mentality of a mob.


In this case a more appropriate term for the mob is "the people", because one defining dynamic of the rollout of this technology is that a minority of people seem extremely invested in shoving it into the faces of a majority who don't want it, and then claiming that they are visionaries and everyone else is 'the mob'.

Just like with Mark Zuckerberg's "Metaverse", we're now in a post-market vanity economy where it's not consumer demand but increasingly desperate founders, investors, and gurus propping up their valuations by doling out products for free and shoving AI services into everything to justify the tens of billions they dumped into it

I'm sorry that some people's pension funds, startup funding and increasingly the entire American economy rests on this collective delusion but it's not really most people's problem


One thing this characterization is not is honest.


What part is not honest?


No, but the attitude is congruent, even if they don't have the investment money lying around to fill the shoes exactly.


> satire has to have some basis in truth

In order to be funny at least!


article forgot to mention the usual "think about the water usage"


What’s the point of attacking a straw man while ignoring the actual points being brought up?

The water usage by data centers is fairly trivial in most places. The water used in manufacturing the physical infrastructure and generating electricity is surprisingly large, but again mostly irrelevant. Yet modern ‘AI’ has all sorts of actual problems.


It mentions ecological destruction, which I must say is a better framing than water usage; AI is a power hog, after all.


If it's the "usual reply", maybe it's because....I dunno...water is kinda important?


I'm also not convinced the HN refrain of "it's actually not that much water" is entirely true. I've seen conflicting reports from sources I generally trust, and it's no secret that an all-GPU AI data center is more resource-intensive than a general-purpose data center.

