Hacker News | ngriffiths's comments

A lot of comments here say this is another case of useless bloat. I don't know; markdown is incredibly useful and widespread, and yet it is surprisingly annoying to find a good editor:

- Nothing that comes with Windows natively supported it (before now)

- None of your favorite text editors support it natively, and plugin quality varies

- You can pay for a nice markdown editor, but for some reason your more powerful everyday text editor is still free?

- You can open VSCode, which is hilarious overkill if you just want to take some notes. Obsidian is excellent but same problem.

- Maybe something I'm missing?

Basically, I think it is a great thing if I get a lightweight, markdown-friendly editor built in, because I'll probably use it all the time.

...except if it immediately leads to a CVE, I guess.


> Additionally, not all writing serves the same purpose.

I think this is a really important point, and to add to it: there is a lot of writing that is really good, but only in a way that a niche audience can appreciate. Today's AI can basically compete with the low-quality stuff that makes up most of social media; it can't really compete with higher-quality work targeted at a general audience, and it's still nowhere close to some of the more niche classics.

An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

I kind of think there is still something fundamental that would get in the way, but that it is totally achievable to overcome it some day. I don't think it's impossible for an AI to be creative in a humanlike way; current models just don't seem optimized for it, because they are completely optimized for the analytical mode of reading and writing, not the creative/immersive one.


> An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

I am sure it could, but then what is the point? Consider this: let's assume that someone did manage to use an LLM to produce a very well-written novel. Would you rather have the novel that the LLM generated (the output), or the prompts and process that led to that novel?

The moment I know how it's made, the exact prompts and process, I can have an infinite number of said great novels in 1000 different variations. To me this makes the output way, way less valuable compared to the input. If great novels are cheap to produce, they are no longer novel; they become the norm, expectations rise, and we will be looking for something new.


I'm inclined to believe that the difference that makes the upper bound of human writing (or creativity) higher than that of an LLM comes from having experiences in the real world. When someone is "inspired" by others' work or is otherwise deriving ideas from it, they inevitably and unavoidably insert their own biases and experiences into their own work, i.e. they also derive from real-world processes. An LLM, however, is derived directly and entirely from others' work, and cannot be influenced by the real world, only by a projection of it.

> Would you rather have the novel that the LLM generated (the output), or the prompts and process that led to that novel?

The "process", in many cases, is not necessarily preferable to the novel. Because an important part of the creative process is real-world experiences (as described above), and the real world is often unpleasant, hard, and complex, I'd often prefer a novel over the source material. Reading Animal Farm is much less unpleasant than being caught in the Spanish Civil War, for example.


I agree with you.

I also think it's a matter of time before we start constructing virtual worlds in which to train AI: representations of simulated world-like events, scenarios, scenery, even physics. This will begin with heavy human feedback, but will move toward both synthetic content creation and curation over time.

People will do this because it's interesting and because there's potential to capitalize on the result.

I thought of this in jest, but I now see this as an eventuality.


> People will do this because it's interesting and because there's potential to capitalize on the result.

I don't know why anyone admits to thinking this. For one, there's nothing stopping you from making movies or writing stories now. You're not suddenly going to develop creativity or interesting ideas using LLMs, either.

Also, think it through. If everyone can yell at a computer until a movie falls out, there will be millions of them and nobody will pay for anything.


I don't want AI content, but there's a market in the belief that people do.

Is there? It sounds like a bunch of uncreative people wishcasting.

> The "process", in many cases, is not necessarily preferable to the novel. Because an important part of the creative process is real-world experiences (as described above), and the real world is often unpleasant, hard, and complex, I'd often prefer a novel over the source material. Reading Animal Farm is much less unpleasant than being caught in the Spanish Civil War, for example.

I think you misunderstood what I meant by "prompts and process that lead to that novel". I am talking about the process the "author" used to generate that novel. I am more interested in the technique they used, and the moment that technique is known, I can produce billions of War and Peace.

I suppose the argument is that, the moment there's an LLM that can produce a unique and interesting novel, what stops it from generating another billion similarly interesting novels?


> Then, I can produce billions of War And Peace

You cannot and will never lol.

This so fundamentally misunderstands (1) the point of writing a novel and (2) what makes a novel interesting.

A novel isn't just a buncha words slapped together, bing bam slop boom, done.

What makes a novel interesting is the author and the author's choices, like all art. It's the closest you can get to experiencing what it's like to be someone else. You can't generate that, it's specific to a person.


The GP assumes that an LLM is able to write such a novel, so I was working from there. My thesis is that even IF LLMs are able to produce "novelty", it will become the norm and we will simply demand even more exotic novelty.

> An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.


It can have anything you like in its training set; you still can't build specific human experiences.

I haven't read War & Peace -- I don't have the patience for Russian literature -- but a much more accessible example is the Vorkosigan series by Lois Bujold. She uses a lot of Tolstoy lol.

While you can read them as fun military scifi, that's not why the series is so good and so famous. In her books, humanity invented two critical things: wormhole FTL travel and uterine replicators.

A lot of the series is exploring how people actually would use and abuse those two things. And then on another layer the books are about her thoughts on parenting, marriage, power, inheritance, and so on.

Good art isn't about accepting someone's opinion that it's good art. Good art impacts you. I think about things differently after those books.

You cannot write a good novel using the algorithmic mean of a lot of different stories.


> Today's AI can basically compete with the low quality stuff that makes up most of social media, it can't really compete with higher quality stuff

But compete in what sense? It already wins on volume alone, because LLM writing is much cheaper than human writing. If you search for an explanation of a concept in science, engineering, philosophy, or art, the first result is an AI summary, probably followed by five AI-generated pages that crowded out the source material.

If you get your news on HN, a significant proportion of stories that make it to the top are LLM-generated. If you open a newspaper... a lot of them are using LLMs too. LLM-generated books are ubiquitous on Amazon. So what kind of competition / victory are we talking about? The satisfaction of writing better for an audience of none?


Tens of millions of people, if not hundreds of millions now thanks to the popularity of the television adaptation, have been waiting 15 years for The Winds of Winter to get published. If AI is such a good writer and can replace anything, write The Winds of Winter for George. I don't really give a shit what's ubiquitous on Amazon. Nobody will remember any of it in a century the way we remember War and Peace. People will remember A Song of Ice and Fire.

I think it's fine. As said above, most reading isn't done because people are looking for thought-provoking, deeply emotional multi-decade experiences with nearly parasocial relationships to major characters. They're just looking to avoid the existential dread of being alone with their thoughts for more than a few minutes. There's room for both Twinkies and filet mignon in the world, and filet mignon alone can't feed the entire world anyway. By the same token, if we expected all journalists to write like H.L. Mencken, a lot of people wouldn't get any news, but the world still deserves to have at least a few H.L. Menckens, and I don't think they'll have an audience of "none" even if their audience is smaller than Stephenie Meyer's or whoever is popular today.

If it were me, I don't know, man, does nobody on Hacker News still care about actually being good at anything, as opposed to just making sales and having reach? Personally, I'd rather be Anthony Joshua than Jake Paul, even though Jake Paul is richer. Shit, I think Jake Paul himself would rather be Anthony Joshua.


> If you get your news on HN, a significant proportion of stories that make it to the top are LLM-generated.

You mean this anecdotally I assume.

This makes me think of the split between people who read the article and people who _only_ read the comments. I'm in the second group. I'd say we jump straight to seeking the ideas and discussion, less so to achieving "the point" of the article.

FWIW, AI infiltrates everything, I get that, but there's a difference between engagement with people around ideas and engagement with the content. It's blurry, I know, but it helps to be clear about what we're talking about.

Edit: in this way, reading something a particular human wrote is both content engagement and engagement with people around an idea. Lovely. Engaging with content only is something else, something less satisfying.


There are very few things worth reading submitted to this site. The only meaningful thing I'm glad to have read was the "I sell onions on the internet" blog post. Everything else I've forgotten, mostly VC marketing fluff or dev infighting in open source; hardly anything worth noting.

This place is up there with reddit; it's all lowish-calorie info: 90% forgettable, 10% meaningful, but you have to dig quite deep to find it.


To be fair, it has gotten harder, but when the meaningful stuff does happen, it is hard to beat. Some of the audience can have rather pointed takes. And if it is then somehow topped by 'off the beaten path' guy, it really makes it for me (in the sense that maybe not all is lost quite yet). I still sometimes reel from 'manifest bananas' guy.

> The satisfaction of writing better for an audience of none?

The satisfaction of writing for an engine. The last of what could still be recognized as a real human being writing. There’s no competition with AI, but also no resignation and no fear of being limited compared to the vast knowledge of an LLM. Even in a context of an "audience of none", somewhere there will be a scraper tool interested in my writing. And if it gets hallucinated... wow!




> most writing was already bad before LLMs.

I am not sure this is the problem. The problem, as it were, is that writing muscles will atrophy, and in a year or two we will be looking at those TikTok reels as long-lost havens of enlightenment. Personally, if anything, I write a lot more now, but then I am fascinated by LLMs and how they work, so... I test, and that requires writing. I might be bad, but there is hope I won't need an ugh-to-English LLM translator.


Because research on real humans and real diseases is exceptionally difficult. Clinical research is notoriously expensive, results are likely to differ from non-human (preclinical) models, and trials take forever to get started, gather enough data, and get a drug actually reviewed and approved. So even when everyone is excited by the preclinical data, there are so many barriers (both scientific and non-scientific) that getting to an approved drug is pretty unlikely.


We really should be able to grow human bodies without a brain for testing purposes. It’s gruesome but realistically victimless at the end of the day.


This sounds ethically questionable to me. I wouldn't rule it out entirely, but I'd want to see a well-reasoned argument, both technical and moral, that it was likely to lead to greatly reduced suffering for patients. Even then.... growing a body without a brain likely would not produce a model organism with predictive ability for human diseases.


I believe it could for a large number of tests. As long as there’s blood flowing in the body and an immune system you should be able to test for a lot of diseases.


I simply cannot see a technical path to achieve what you're describing.


Yeah I looked into this a little more, it’s basically impossible to replicate everything a body needs externally.


I don't think the biology is there, let alone consensus on the major ethical questions involved


> human bodies without a brain for testing

I think the way a drug impacts the brain is kind of important


Can you imagine the political/religious push-back were you to do that?!

Growth of single human organs or organ tissue is easier, cheaper and less fraught with political peril.


As someone whose mother died of pancreatic cancer, I couldn't care less about any of the brainwashed old farts in their churches or parliaments. None of that matters to me or to the people suffering from cancers; it's all just a selfish obstruction, attaching religion to the research material.


I hear ya. I don't care what they think either.

Unfortunately, they can vote.


Hey, you missed the easier, cheaper part. Answer rationally, otherwise you're just social network clickbait.


We have the next best thing: organoids.


A more practical option is using brain-dead humans for medical testing. This was discussed recently in the journal Science, using the term "physiologically maintained deceased". As they say, this "traverses complex ethical and moral terrain". (I've seen enough zombie movies to know how this ends up :-)

https://www.science.org/doi/10.1126/science.adt3527


The anti abortion and anti birth control contingent would never let even a little of that happen in countries with significant fundamentalist and Catholic voters. There are plenty of examples where these people force babies to be born without a brain on principle. Just recently https://www.nbcnews.com/news/us-news/louisiana-woman-carryin... One can go back to something like Terri Schiavo https://en.wikipedia.org/wiki/Terri_Schiavo_case


What do you mean by "without a brain"?

There are multiple examples in the literature of people leading perfectly ordinary lives whilst unknowingly having no more than 5% of the typical amount of brain matter (typically because of hydrocephalus). For example, https://www.science.org/doi/10.1126/science.7434023 from 1980.


They mean stuff like https://en.wikipedia.org/wiki/Anencephaly.

The brain is indeed incredibly resilient - some kids with serious epilepsy get an entire hemisphere taken out - but which 5% you're left with matters enormously.


IN MICE. (To be fair, also IN SOME OTHER BETTER MICE).

https://jamesheathers.medium.com/in-mice-explained-77b61b598...

(mostly a joke, but I'd be in favor of adding context to the HN headline if possible)


This context is very important.

"Little by little, over-inflated results and breathless breakthroughs betray trust. They throwing dimes in a wishing well which people rapidly start to expect will never pay compound interest."

"Then, when one of those people is elected to parliament, or Congress, and start to cut the budget for the National Science Foundation, or declares that All Research Should Be In The National Interest (whatever that is), I wonder how much we reap what we have sown."


This isn't quite as bad as the garden variety "in mice" studies:

> The combination therapy also led to significant regression in genetically engineered mouse tumours and in human cancer tissues grown in lab mice, known as patient-derived tumour xenografts (PDX).


PDX is a double edged sword. Human tumors are engrafted into mice with no immune system. Immune-cancer interface is incredibly important, yet completely lacking in these models. Consider that some of the greatest cancer drugs ever work specifically on the immune system (e.g. checkpoint inhibitors like Keytruda).


A drug that works even without help from the immune system seems like it might be even better. It's hard to imagine how the immune system might interfere, since it doesn't interfere with other drugs.


>"The combination therapy also led to significant regression in genetically engineered mouse tumours and in human cancer tissues grown in lab mice"

Required XKCD: https://xkcd.com/1217/


Is PDX considered to be illegitimate? Would be curious to know if prior studies that showed success with PDX methods ultimately resulted in useful therapeutics.


Vorinostat


I wonder how long until we start seeing these breakthrough cancer treatment articles for clinical trials done in dogs. Oncologists think dog research is a better fit than mice because of greater genetic similarity to humans and the fact that pet dogs live in environments similar to their owners'. I think in general people definitely wouldn't be as OK with inducing cancer in dogs as in mice, but finding volunteer owners of dogs with existing cancer is certainly easier.


That's interesting, because rodents and apes share a more recent common ancestor (75 Mya) than dogs and apes do (85 Mya).


I used to work at a biomedical institution that did cancer treatment experiments on dogs. There was basically a kennel and periodically they would take a dog and irradiate it.

That was fine in the abstract, but there were computational labs above the kennel, and periodically you'd just get this huge outpouring of dogs barking and howling, and it was really hard to get any work done.


Mice have the best drugs.


Also the worst. You win some, you lose some.


There really has never been a better time to be a critically-ill mouse. They've got something for you.


I opened the comments fully expecting the top reply to be “In mice.” Bingo.


One of the costs of saying no to meetings is that going to other people's (useless) meetings is a super low effort way to say "I value our working relationship." Not going often explicitly sends the opposite message.

Sometimes there is a whole set of rituals used to "prove" you actually care about the group, and the rituals only ever happen in meetings, and you cannot change them without bothering a lot of people.


Also sometimes who gets to work on a future project is just based on perceived interest

And not going to a meeting may be perceived as a lack of interest in that project.


Am I the only one who was really underwhelmed? I saw that it was supposedly a very tense trainwreck situation and sure, it gets sarcastic and stuff, but most of it was

Interviewer: "so I heard you were/are doing a bad job with moderation"

CEO: repeats banal PR talking point for the 10th time

Repeat.

I mean, at no point did the CEO say anything interesting about the moderation problem or what they are doing. The interviewers seem too skeptical to be genuinely interested. He explains to them that cost =/= quality and that 2016 =/= 2025 for what feels like an eternity. I was bored.


> CEO: repeats banal PR talking point for the 10th time

https://en.wikipedia.org/wiki/Cooperative_principle

So historically, when someone accepted an interview yet refused to engage with any questions or stay on topic, AND was also not interested in the smooth polish of PR-style transitions that would give an appearance of basic cooperation... it was considered unhinged and obviously crazy behavior.

If the interviewee acts clueless, drawling, or drooling, they can be pretty uncooperative and mostly get a pass, because it's not very polite to point out stupidity. But for the big bonus crazy points, the interviewee may opt into escalation, becoming unabashedly and almost childishly combative, talking over the other person, etc. Obviously all of these tactics are pretty normalized now, though.

> I was bored.

This is basically the goal. After the interviewee realizes the interviewer is hostile, they just double down on their talking points to signal to investors and ignore the intended audience of the interviewer. Honestly, a mistake on the interviewer's part to publish it at that point, IMO.


Well said; my reaction was basically that what he is saying is not really for a listener like me.

> AND also was not interested in the smooth polish of PR-style transitions that would give an appearance of basic cooperation..

I agree this is a big part of it and would add that the "unhinged" look is probably just a lack of PR skill. Both sides are hostile but the interviewers "win" here by staying within the rules of the game, and they also do a beautiful job of sort of winking at the audience like "wow this dude is crazy right?"

It's impressive but sorta annoying, I'd rather listen to actual content.


"I thought we were going to be talking about something else, but I want to talk about anything you guys want"

doesn't talk about the thing they want

It is redundant, and I don't think it is a trainwreck either.


> a private company shouldn’t have to decide what counts as “illegal” content under threat of legal action.

Immediately reminded me of patio11's amazing write-up[1] on debanking, featuring banks being deputized as law enforcement for financial crimes (which is completely non-controversial), and even being used as a convenient tool to regulate other industries that the White House didn't like (kinda controversial).

[1]: https://www.bitsaboutmoney.com/archive/debanking-and-debunki...


I strongly disagree that financial institutions being a de facto extension of law enforcement is "non-controversial".

It may be the way things are; it may be a prerequisite of making financial crime tractable; but that does not detract from the fact that every financial institution is, in essence, deputized law enforcement, nor does it negate the chilling effect this has on any business environment subject to it.


Fair enough. It was something that impressed me when I first read about it. You hear about disputes between Apple and the FBI over unlocking phones, and meanwhile banks are like "and over here are whole floors of analysts tracking suspicious stuff." I definitely agree there are downsides, and not everyone is happy about the floors of analysts, but I do think they are very far outside the Overton window.


Typically long-winded patio11 article that basically says: Banks are suspicious of crypto.


I think it makes more sense in languages where you use the "let" keyword. Then it sounds like assignment, though you still have to get comfortable with the X being on the right side too.


Not all engineers are in the target audience, and not all details of research findings need to be conveyed to the target audience to make a real impact. The point is if no findings ever make it to engineers (in the broadest sense), there is zero real world impact. I guess real impact is not the only goal but it's a valid one.


> active science communication has been sparse in the area of software research, and those who have tried often find their efforts unrewarded or unsuccessful.

The authors suggest:

> Identify your target audience to tailor your message! Use diverse communication channels beyond papers, and actively engage with practitioners to foster dialogue rather than broadcasting information!

What I would emphasize is that many researchers just don't know how to do it. It isn't as simple as just thinking up a target audience and churning out a blog post. If you are the median researcher, ~0 people will read that post!

I think people underestimate:

- How hard it is to find the right target audience

- How hard it is to understand the target audience's language

- How hard it is to persuade the target reader that this work you've done should matter even a little to their work, even when you designed it specifically for them

- How few people in the audience will ever understand your work well

- How narrow your target audience should be

I also think many researchers want to be able to do it, if not as a primary career goal then at least as a fulfilling, public-service-type activity. I'm currently testing this out a bit (more: https://griffens.net).

