
I'm so pissed off by websites like Messenger or Google Meet, for example, that try to force you to install their app on your phone when you just want to send a message or make a call in the web app.

Strangely, it works very well in the browser, but there they can't spy on you as easily, so they don't like that.


I can't say whether Grok has a real problem or not, but the CCDH, which did the study, looks like a "scam". I don't know who funds them, but they clearly have an agenda and will "manufacture" data however they can to support it.

The title of the study and the article say that Grok "generated" these images, but in fact:

> The CCDH then extrapolated

Basically they invent numbers.

They took a sample of 20k generated images, and it is assumed (though I don't know if the source is reliable) that Grok generated 4.6 million images over the same period. So the sample is about 0.4%.
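
For scale, here is the arithmetic behind the headline number (a rough sketch; the 4.6 million total is their assumption, not a verified count):

    # Sample fraction and proportional scale-up, assuming the
    # CCDH's 4.6M total is accurate (unverified).
    sample, total = 20_000, 4_600_000
    print(f"{sample / total:.2%}")           # ~0.43% of output sampled
    flagged = 101                            # "likely children" images in the sample
    print(round(flagged / sample * total))   # ~23,230 once extrapolated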

If you look at the CCDH's webpage, their study is a joke. First:

   - Images were defined as sexualized if they contain [...] a person in underwear, swimwear or similarly revealing clothing.

   - Sexualized Images (Adults & Children): 12,995 found

   - Sexualized Images (Likely Children): 101 found

First they invent their own definition, then conveniently mix in possible "adult" pictures to give scary numbers.

What do you propose they do? Manually review every single image generated?

Even if it’s “only” 1 million, that would be a massive task. Random sampling is the best we can do (see the sketch below).
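
And the sampling error on a 20k random sample is small. A minimal sketch of a normal-approximation confidence interval for the flagged rate (assuming the sample really was random, which is the part worth questioning):

    import math

    n, k = 20_000, 101                   # sample size, flagged images
    p = k / n                            # observed rate, ~0.5%
    se = math.sqrt(p * (1 - p) / n)      # standard error of a proportion
    lo, hi = p - 1.96 * se, p + 1.96 * se
    print(f"95% CI: {lo:.2%} .. {hi:.2%}")   # roughly 0.41% .. 0.60%

So the statistical uncertainty is modest; the real dispute upthread is over the definition of "sexualized", not the sampling.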


Not the person you are asking, but I would require a better analyzer. It must be able to recognize children in sexual poses, children with exposed genitalia, children performing oral copulation, or children being penetrated. If an AI can be told to create a thing, it should be able to identify that same thing. If Grok cannot identify what it was told to create, that is potentially a bigger issue, as someone may have nerfed that ability on purpose.

There are psychological books on identifying signs of prepubescence based on facial and genital features that one can search for if they are in that line of work. Some of the former Facebook mods with PTSD know what I am referring to.

Leave everything else to manual flagging, assuming Grok has a flag or report button that is easy to find. If not, send links to these people [1] if you're in the USA.

[1] - https://www.ic3.gov/


Right... So how much CSAM is an acceptable amount of CSAM in your opinion then?

A couple of things come to mind:

1) Zero is basically never the best error rate; effort isn't infinite, and spending too much of it on one defect means spending less on other issues.

2) Look at what he's saying. This is a classic pattern for manufacturing a fake proof of evil.

a) Point to an evil. For example, CSAM.

b) Expand the definition of that evil in ways that often aren't even evil. Here, include "scantily clad" in your definition of "sexualized". Note that swimsuits qualify.

c) Point to examples of evil in your expanded pool.

d) Claim this is evidence for the original definition. Note that nothing about their claims precludes their "CSAM" being nothing more than ordinary beach or pool scenes. Their claim includes the null, and when the null is a possible answer it should be assumed.


To your point 2b, I would posit that it is also evil to sexualize adults without their consent.

I asked how much lower the error rate should be in order to be acceptable, and you've replied instead with a rebuttal to the message of the posted article.

I agree that a zero error rate is generally not possible, although I think a company like Xitter can manage better than 101 in 20k.


Who was abused here?

When you post on a public forum defending child pornography, it's maybe a good time to take a step back and evaluate your life.

The environment.

The people used as template faces and bodies.

The future victims when the imagery stops being enough.

Has this been studied? I'm not following the topic, but without any evidence one could also say that availability of fake imagery might decrease demand for real imagery and therefore decrease the amount of abuse. But I'm not implying anything, just asking.

Honestly, this looks highly suspicious to me. OK, they might need some big storage, on the order of petabytes. But how can that be proportionate to the capacity already needed for everything that is hard-drive hungry: every cloud service, every storage service, all the storage for the private photos/videos/media produced every day, and all consumer hardware like computers...

GPUs I understand, but hard drives look excessive. It's as if tomorrow there were a shortage of computer cabling because AI datacenters need some.


If you're building for future training needs and not just present ones, it makes more sense. Scaling laws say the more data you have, the smarter and more knowledgeable your AI model gets in the end. So that extra storage can be quite valuable.

If you're building a text-only model, then the storage needed is limited, but once you get to things like video, it explodes by orders of magnitude.
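
A back-of-envelope comparison (all the corpus sizes and bitrates here are illustrative assumptions, not anyone's actual numbers):

    # Text corpus: ~1 trillion tokens at ~4 bytes/token (assumed averages)
    text_bytes = 1e12 * 4
    print(f"text:  {text_bytes / 1e12:,.0f} TB")   # ~4 TB

    # Video corpus: 1 million hours at ~1 Mbps compressed (assumed bitrate)
    video_bytes = 1_000_000 * 3600 * 1e6 / 8
    print(f"video: {video_bytes / 1e12:,.0f} TB")  # ~450 TB

Two orders of magnitude more before you even scale up the hours, which is why drive orders get strained and not just GPU orders.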

No one can be surprised to see that all of these artificial "shortages" hit components produced by a monopoly or only a few actors...

That's the electronics industry in general though. The shortages are real and a normal part of growing pains for any industry that's so capital-intensive and capacity constrained.

An additional factor missing from the post, I think, is AI.

Before, projects were more often carefully human-crafted.

But nowadays we expect such projects to be "vibe coded" in a day. And so we don't have the motivation to invest mental energy in something we expect to be crap underneath, and probably a nice show-off with no future.

Even if the result is not the best in the world, I think what interests us is seeing the effort.


It's reasonably clear from the second sentence in the post that the uptick in submissions can be largely attributed to AI-assisted projects.

> The post quickly disappeared from Show HN's first page, amongst the rest of the vibecoded pulp.

The linked article[0] also talks at length about the impact of AI and vibe-coding on indie craftsmanship's longevity.

[0] - https://johan.hal.se/wrote/2026/02/03/the-sideprocalypse/


And just wait until they force their developers (who are no longer writing a single line of code) to go back onsite because "remote is bad"...

I agree that your screenshots don't make any sense in relation to the content of your text. It's like images randomly scattered through the text with no meaningful relation, so much so that I was wondering whether the article was AI-generated, at least partially.

Everyone would expect you to show examples of good and bad UX for each point you're making.

Let me see and confirm for myself, with examples, that doing X is clearly nicer and easier to understand than doing Y.


On my side, I feel disappointed on two different counts.

- Obviously, when your selling point against competitors and alternative services was that you were Open Source, and you do a rug pull once you've got enough traction, that is not great.

- But they also switched targets. The big added value of MinIO initially was that it was totally easy to run: an S3 server up in a minute, on a single instance, and so on (see the sketch below). That made it the perfect solution for rapid tests, local setups, and automated testing. Then, again once they got enough traction, they didn't just add more "scaling" options to MinIO; they twisted it completely into a complex, scalable deployment solution like any other one on the market. Without that much added value on that count, to be honest.
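
For context, this is roughly the kind of test setup that made it appealing. A minimal sketch, assuming the docker quick-start and default minioadmin credentials from MinIO's old docs (treat both as assumptions and adjust to taste):

    # After something like: docker run -p 9000:9000 minio/minio server /data
    import boto3

    # Point the standard AWS SDK at the local MinIO instance.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="minioadmin",      # MinIO's (old) default credentials
        aws_secret_access_key="minioadmin",
    )
    s3.create_bucket(Bucket="test")
    s3.put_object(Bucket="test", Key="hello.txt", Body=b"hi")

One container, zero config, and your integration tests have a real S3 API to talk to.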


Looks nice, but sadly it is just a blog post, without open-source code or a shared tool that we could use to test it.

The goal was to have computers become "intelligent" and instead what we got is world leaders becoming dumber.
