Hacker News | sebastiennight's comments

Would you be kind enough to explain your gut reaction here with logical arguments as to why this is definitely not a feature that would ever be released?

Yes! It's not a marketable feature for the cost!

Thanks. That does make sense.

> Recursive self-improvement doesn't get around this problem. Where does it get the data for next iteration? From interactions with humans.

It wasn't true for AlphaGo, and I see no reason it should be true for a system based on math. It makes sense that a talented mathematician who's literally made of math could build a slightly better mathematician, and so on.


AlphaGo was able to recursively self-improve within the domain of the game of go, which has an astonishingly small set of rules.

We're asking AIs to have data that covers the real physical world, plus pretty much all of human society and culture. Doing self-improvement on that without external input is a fundamentally different proposition than doing it for go.


That is a valid argument. I do think that

> the real physical world, plus pretty much all of human society and culture

is only a tiny part of the problem (more data plus understanding more rules) and the main problem is "getting smarter".

You can get smarter without learning more about the world or human society and culture. I mean, that's allegedly how Blaise Pascal worked out a lot of mathematics in his teenage years.

My point is that the "getting smarter" part (not book-smart which is your physical world data, not street-smart which is your human culture data, but better-at-processing-and-solving-problems smart) is made of math. And using math to make that part better is the self-improvement that does not necessarily require human input.


Your math point is proven wrong with math. The argument goes like this:

1. AI is a computer program.

2. Some math is not solvable with any computer program.

3. Therefore, there are limits to what AI can do with math.
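Step 2 is typically grounded in the halting problem. As a toy illustration (my framing, not anything from the paper below), the diagonal argument can be collapsed into a case check: a self-referential program d does the opposite of whatever a hypothetical decider claims about d, so no claimed answer can be consistent.

```python
# Toy version of the diagonal argument (illustrative only, not a formal
# proof): suppose halts(d, d) could return a correct answer. The program
# d is built to do the opposite of whatever is claimed about it.
def consistent_answer_exists():
    for claimed in (True, False):
        d_halts = not claimed  # d loops if claimed True, halts if claimed False
        if claimed == d_halts:
            return True        # a consistent decider answer exists
    return False               # neither answer is consistent

print(consistent_answer_exists())  # False: no total halting decider
```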

I recommend you read this lovely paper about Busy Beaver numbers by Scott Aaronson. [1]

[1]: https://www.scottaaronson.com/papers/bb.pdf


I think you're strawmanning my math point from "if you're made of math and can make a trivial improvement in the math, you get a smarter n+1 program that can likely make another trivial improvement to n+2"... to "AI can solve all math" (which is not my point at all).

You seem to be generalizing item #3 from "there are limits to what AI can do with math", to "therefore, AI can't improve any math, and definitely not the very specific kind of math that is relevant to improving AI". That is a huge unjustified logical jump.

Has it ever happened, on the path from Enigma to Claude Opus 4.6, that the necessary next step was to figure out a new nth Busy Beaver? Is Opus 4.6 a better Busy Beaver than Sonnet 3.5?

Or is that a mostly unrelated piece of math that is mostly irrelevant to making a "smarter" AI program from where we are today?


As I shared in a comment above, the book is released under a Creative Commons license that does not authorize sharing derived works, so only the original author can distribute an alternative version.

However, I vibe-coded this tool for my personal use with our good friend Claude: https://onetake-ai.github.io/html-ebooks/

which, when pointed at a repository containing an "html book", like https://github.com/yan7109/yan7109.github.io/tree/main/ma-bo...

will give you an ePub with all the content in one place.
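For anyone curious what a tool like this has to do under the hood: an ePub is essentially a zip archive with a fixed layout. Here is a minimal stdlib-only sketch (not the actual code behind the link above; the metadata is illustrative and a real ePub needs more of it, e.g. identifiers and a table of contents):

```python
import zipfile

def write_epub(path, title, chapters):
    """Bundle (name, html_body) chapters into a minimal EPUB container."""
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        # The mimetype entry must come first and be stored uncompressed.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        z.writestr("META-INF/container.xml",
                   '<?xml version="1.0"?>'
                   '<container version="1.0" xmlns="urn:oasis:names:tc:'
                   'opendocument:xmlns:container"><rootfiles>'
                   '<rootfile full-path="OEBPS/content.opf" '
                   'media-type="application/oebps-package+xml"/>'
                   '</rootfiles></container>')
        manifest, spine = [], []
        for i, (name, html) in enumerate(chapters):
            fn = f"chap{i}.xhtml"
            z.writestr(f"OEBPS/{fn}",
                       f"<html><head><title>{name}</title></head>"
                       f"<body>{html}</body></html>")
            manifest.append(f'<item id="c{i}" href="{fn}" '
                            'media-type="application/xhtml+xml"/>')
            spine.append(f'<itemref idref="c{i}"/>')
        # The OPF package lists every file (manifest) and reading order (spine).
        z.writestr("OEBPS/content.opf",
                   '<?xml version="1.0"?>'
                   '<package version="2.0" xmlns="http://www.idpf.org/2007/opf" '
                   'unique-identifier="id"><metadata xmlns:dc='
                   '"http://purl.org/dc/elements/1.1/">'
                   f'<dc:title>{title}</dc:title></metadata>'
                   f'<manifest>{"".join(manifest)}</manifest>'
                   f'<spine>{"".join(spine)}</spine></package>')

write_epub("demo.epub", "Demo Book", [("Intro", "<p>Hello</p>")])
```

The fetching side (crawling the repository's HTML pages) is the part that varies per book; the container format above stays the same.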


So... the book is released under a Creative Commons license that does not authorize sharing derived works...

BUT...

If you use this tool I just vibe-coded with our good friend Claude: https://onetake-ai.github.io/html-ebooks/

And point it at the repository: https://github.com/yan7109/yan7109.github.io/tree/main/ma-bo...

It will give you an ePub with all the chapters, including a bit of styling etc.

The code is MIT licensed, so do with it as you wish.


Thank you! This was very helpful :)

Wow, it's always amazing to me how the law of unintended consequences (with capitalistic incentives acting as the Monkey's Paw) strikes every time some well-intentioned new law gets passed.


They're making a big promise here, one that very few tech companies have been able to keep in the past.

Maybe there's a prediction-market bet to be made on how long it will take Claude to follow suit if OpenAI starts making 9 figures in ad revenue.


Counterpoint being that Slack, for example, for all its faults, does not have ads in its chats.

If Anthropic is positioned as "thing for professionals to do professional work" then I think you just avoid this issue entirely. Fee for service. OpenAI trying to be the thing everyone is using won't work in that model, though.


Member when today's biggest advertising company used to claim no ads as their USP? Tegridy members...


Wait for the API costs or open-source models to be cheap enough and we'll get there. I mean it's a guaranteed HN frontpage (and currently also a guaranteed epic credit card bill).


I was not aware of WikiSpeedRuns; that's a fun one (and then the 2nd link you shared basically allows you to check how well you did).


I think you meant *Ballmer, but the typo is hilarious and works just as well


Haha yeah I noticed too late :P


I've been waiting for someone to implement this well! I think in the future we might even have TikTok-style influencer videos generated from Wikipedia content, who knows.

I've been swiping a lot for the last 10 minutes and I'm not sure how much it's learning. I have some feedback.

- I have never liked or clicked a biography, but it keeps suggesting vast numbers of them

- It does not seem to update the score based on clicking vs liking vs doing both. I would assume clicking is a solid form of engagement that should be taken into consideration

- It would be interesting to see some stats. I have no idea how many articles I've scrolled through, or the actual time spent on liked vs. disliked article previews. If you can add such insight, it would be interesting

- A negative feedback mechanism would be interesting as well. There is no way to signal whether I'm just neutral towards something (and swipe through) or actively negative about it (which is a form of engagement the doomscroll would actually use to show me such content once in a while)

- Since this website has already shown me multiple pages about things I'm now learning about thanks to it, it might benefit from a "share" button (another engagement signal), as HN folks are likely to want to share on HN things they've just learned

- Would you be willing to make the experiment open source?
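The click-vs-like and negative-feedback points above could be sketched as a weighted signal score. This is purely a hypothetical shape (the weights and event names are mine, not the site's actual algorithm):

```python
# Hypothetical engagement scorer: weights are illustrative only.
# A dislike is still engagement, just with a negative sign, which lets
# the feed occasionally resurface such topics instead of dropping them.
WEIGHTS = {
    "impression": 0.0,   # swiped past: neutral
    "click": 1.0,        # opened the article
    "like": 2.0,         # explicit positive signal
    "share": 3.0,        # strongest positive signal
    "dislike": -1.5,     # explicit negative signal
}

def score(events):
    """Sum weighted engagement events for one topic/category."""
    return sum(WEIGHTS[e] for e in events)

# Clicking AND liking should count for more than liking alone:
print(score(["click", "like"]))          # 3.0
print(score(["like"]))                   # 2.0
print(score(["impression", "dislike"]))  # -1.5
```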


It's open source insofar as the JavaScript is not minified or obfuscated. You can see it at https://github.com/rebane2001/xikipedia too.


I want to try reimplementing it for Wikipedia in another language, would you mind sharing how you went from the 400MB Wikipedia export to the .1x (40MB) file that is downloaded here?
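(For context, a shrink step like that usually means keeping only the fields the preview UI needs, then compressing. A rough guess at the shape of such a pipeline, with hypothetical field names, and definitely not the site's actual code:)

```python
import gzip
import json

# Hypothetical trimming step: field names and sizes are illustrative.
def trim_dump(records, extract_len=300):
    """Keep only what a preview card needs: title + short extract."""
    return [{"title": r["title"], "extract": r["extract"][:extract_len]}
            for r in records]

records = [{"title": "Ada Lovelace",
            "extract": "English mathematician and writer...",
            "body": "full article text " * 500}]  # dropped by trimming
slim = gzip.compress(json.dumps(trim_dump(records)).encode())
full = len(json.dumps(records).encode())
print(len(slim) < full)  # True: dropping bodies + gzip shrinks the dump
```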


Yeah I plan on putting the code for that on GitHub soon too.


Added now!


Perfect! Thanks for the clarification. I thought there was server-side preparation of the content, but it seems from the other posts that it's all local, and I commend you for that.

