
A Deepness in the Sky was perhaps the first "hard sci-fi" novel I ever read (this was before I knew of Greg Egan). The concept of the Spiders and the OnOff star was just awe-inspiring.

While Egan's idea-density is off the charts, I found A Deepness in the Sky to be the most complete and entertaining hard sci-fi novel I've read. It has a lot of novel science but never overwhelms the reader (Egan will have you overwhelmed within the first paragraph of the first page). Highly entertaining and interesting.

I wonder what Vinge thought of LLMs. If you've read the book, you'll remember that Vinge had literal human LMs in the novel to decode the Spider language. Maybe he just didn't anticipate that computers could do what they do today.

A huge loss indeed.



> Vinge had literal human LMs in the novel to decode the Spider language.

Could you elaborate on this? It's been a while since I read the novel. I remember the use of Focus to create obsessive problem-solvers, but I'm not sure how it relates to generative models or LLMs.

Thinking about it, I'm not sure how useful LLMs can be for translating entirely new languages. As I understand it, they rely on statistical correlations harvested from training data, which by definition would not include any existing translations.


I do not recall the exact details, but I remember that some of the Focused individuals were kept in a grid or matrix of some sort. The aim of these grids was to translate the Spider-talk and achieve some form of conversation with the Spiders on the planet. It is also mentioned that the Focused have their own invented language, which they use to communicate with other Focused individuals and which is faster and more efficient than human languages.

I may be misremembering certain details, but the similarity to neural networks and their use in machine translation was quite apparent.


The zipheads were crippled with a weaponized virus that turned them all into autistic savants. The virus was somewhat magnetic, and using MRI-like technologies, they could target specific parts of the brain to be affected to lesser or greater degrees. It's been a while since I've re-read it, but "Focused" was the propaganda label for it from the monstrous tyrannical regime that used it to turn people into zombies, no?


Not zombies, but loving slaves. People able to apply all of their creativity and problem-solving skills to any task given to them, but without much capacity for reflection or any kind of personal ambitions or desires.


Or the ability to remember to feed, clean, or toilet themselves.

It’s that hyper-focus that I suspect many of us have experienced, but without agency and made permanent. Worse than slavery.


Lack of agency is right. No ability to request medical aid even when suffering crippling pain from a burst appendix.


Yes, they could target specific portions of the brain. Have to re-read the book!


> If you've read the book, you'll remember that Vinge had literal human LMs in the novel to decode the Spider language. Maybe he just didn't anticipate that computers could do what they do today.

I mean, I don't think LLMs have been notably useful in decoding unknown languages, have they?


All currently unknown real languages that an LLM might decode are languages that are unknown because of a lack of data, due to the civilization that spoke them being dead. An LLM won't necessarily be able to overcome that.

In the book the characters had access to effectively unbounded input since it was a live civilization generating the data, plus they had reference to at least some video, and... something else that would be very useful for decoding language but would constitute probably a medium-grade spoiler if I shared, so there's another relevant difference.

Still, it should also be said that it wasn't literally LLMs; it was humans, merely "affected" in a way that made them all idiot savants on the particular topic of language acquisition.


Oh, yeah; I'm just not convinced there's any particular reason to think that LLMs would be useful for decoding languages.

(That said, it would be an interesting _experiment_, if a little hard to set up; you'd need a live language which hadn't made it into the LLM's training set at all, so you'd probably need to purpose-train an LLM...)
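The corpus-holdout part is mostly a filtering job. As a rough sketch, assuming fastText's public language-ID model (lid.176.bin is real, but the paths, held-out language, and probability threshold here are placeholders):

  # Sketch: hold one language out of a pretraining corpus so it can later
  # serve as a genuinely unseen test language. Assumes fastText's public
  # language-ID model (lid.176.bin); paths and threshold are illustrative.
  import fasttext

  HELD_OUT = "__label__is"  # e.g. Icelandic; an arbitrary choice
  model = fasttext.load_model("lid.176.bin")

  with open("corpus.txt") as src, open("corpus_filtered.txt", "w") as dst:
      for line in src:
          text = line.strip()
          if not text:
              continue
          labels, probs = model.predict(text)
          # Drop any line the classifier even weakly attributes to the
          # held-out language, to keep leakage out of the training set.
          if labels[0] == HELD_OUT and probs[0] > 0.2:
              continue
          dst.write(line)

Even a crude per-line filter like this should do, since the goal is only to guarantee the model never saw the language.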


LLMs are... not bad at finding semantic relationships in arbitrary data. Sure, if you dump an unknown language into an LLM you'll only get back semantically plausible sentences of unknown meaning, but once you start to decode the language itself, it would be much easier to find the relationships, if not outright replace the terms with translated ones.
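FWIW, the pre-LLM statistical version of this is unsupervised embedding alignment: train word vectors on each corpus separately, then find a rotation mapping one space onto the other and read candidate translations off nearest neighbors (roughly what Conneau et al.'s MUSE does). A toy sketch of the core orthogonal-Procrustes step, with random vectors standing in for embeddings trained on real corpora:

  # Toy sketch of orthogonal Procrustes alignment between two embedding
  # spaces, the classic statistical route to cross-lingual word matching.
  # All vectors here are random placeholders for real trained embeddings.
  import numpy as np

  rng = np.random.default_rng(0)
  d, n_pairs = 50, 200

  X = rng.normal(size=(n_pairs, d))                  # "known language" seed vectors
  true_W = np.linalg.qr(rng.normal(size=(d, d)))[0]  # hidden rotation between spaces
  Y = X @ true_W                                     # "unknown language" counterparts

  # Solve min_W ||X W - Y||_F over orthogonal W via SVD (the Procrustes solution).
  U, _, Vt = np.linalg.svd(X.T @ Y)
  W = U @ Vt

  # With W recovered, any vector from the known space maps into the unknown
  # space, where a nearest-neighbor lookup proposes a translation.
  print(np.allclose(W, true_W))  # True on this noiseless toy data

The catch with a truly alien language is that this assumes the two embedding spaces are roughly isometric, which holds surprisingly well across human languages but is anyone's guess for Spiders.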


No idea, though since LLMs are next-token predictors, it can't hurt to try them?



