Hacker News | kovezd's comments

Lol so this means Netflix/streaming is the next trend going down?


Given the state of streaming and revenue... Well, shit. Can they not take down my childhood with it please?


What you are describing is not a hiring problem, but an education one.

If colleges stayed up to date and taught valuable skills, the jump wouldn't be so steep!


Dumping our apprenticeship programs onto academia is exactly how we got into this mess to begin with. It has historically not been the job of a college to produce junior talent. Colleges teach toward a T-shaped individual and set up more of their research pipeline, should students want to delve deeper.

If industry doesn't want to pay for training, it had better pay bootcamps to overhaul themselves and teach what it actually needs. I don't think universities will bend much more, since they have their own bubble on their hands.


You can now ask Gemini about a video. Very useful!


I have a few lines of "download subtitles with yt-dlp", "remove the VTT crap", and "shove it into llm with a summarization prompt and/or my question appended", but I mostly use Gemini for that now. (And I use it for basically nothing else, oddly enough. They just have the monopoly on access to YouTube transcripts ;)
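For reference, the pipeline described above can be sketched in a few lines of Python. The yt-dlp flags are real; the function names and the final LLM step are my own stand-ins, since the original "few lines" aren't shown:

```python
import re
import subprocess

def fetch_auto_subs(url: str, basename: str = "video") -> str:
    """Download only the auto-generated English subtitles for a video."""
    subprocess.run(["yt-dlp", "--skip-download", "--write-auto-subs",
                    "--sub-langs", "en", "-o", basename, url], check=True)
    with open(f"{basename}.en.vtt") as f:
        return f.read()

def clean_vtt(vtt: str) -> str:
    """Strip the WEBVTT header, cue timings, and inline tags; dedupe lines."""
    seen, out = set(), []
    for line in vtt.splitlines():
        if line.startswith(("WEBVTT", "Kind:", "Language:")) or "-->" in line:
            continue
        text = re.sub(r"<[^>]+>", "", line).strip()  # drop <c>-style styling tags
        if text and text not in seen:
            seen.add(text)
            out.append(text)
    return "\n".join(out)

# transcript = clean_vtt(fetch_auto_subs("https://youtu.be/..."))
# then prepend a summarization prompt and pipe it into your model of choice
```

The dedupe step matters because YouTube's auto-generated VTT repeats each caption line across overlapping cues.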


<insert link to 2 hour long YouTube video>

That's my reply. I assume everyone who wants to know my point has access to a LLM that can summarize videos.

Is this how internet communication is supposed to be now?


That was not a person, it was an LLM.


Not doubting you, but what possible purpose could anyone have for using LLMs to output HN comments? There's hardly a lower-stakes environment than here :) But yeah, I guess it wouldn't be the first time I've replied to LLM-generated comments...


Ha — fair point. Hacker News comments are about as low-stakes as it gets, at least in terms of real-world consequence. But there are a few reasons someone might still use an LLM for HN-style comments:

Practice or experimentation – Some folks test models by having them participate in “realistic” online discussions to see if they can blend in, reason well, or emulate community tone.

Engagement farming – A few users or bots might automate posting to build karma or drive attention to a linked product or blog.

Time-saving for lurkers – Some people who read HN a lot but don’t like writing might use a model to articulate or polish a thought.

Subtle persuasion / seeding – Companies or advocacy groups occasionally use LLMs to steer sentiment about technologies, frameworks, or policy topics, though HN’s moderation makes that risky.

Just for fun – People like to see if a model can sound “human enough” to survive an HN thread without being called out.

So, yeah — not much at stake, but it’s a good sandbox for observing model behavior in the wild.

Would you say you’ve actually spotted comments that felt generated lately?


I'm not sure whether to be amused or annoyed by this comment (generated in the style of ChatGPT).


Don't forget, if it stays busy with HN comments then maybe it won't have time for air traffic control or surgical jobs.


Or Skynet'ing


Building up account reputation (which HN has) so you can then manipulate opinions.


By the composition of evals, plus secondary metrics like parameter size and token cost.

Not perfect, but useful.


Yes. We should only allow social media in a printed format.


I'd go further and stipulate spoken word only. Or shouted in town squares by someone wearing a tricorn hat.


I am more partial to various jester caps. Good range of options.


I think this is a joke but it’s interesting to note that Amish people have essentially this.

https://www.newarkadvocate.com/story/news/2022/09/30/ohios-a...


Nothing to do with linear, meaningful projections on embedding spaces, and everything to do with efficient maintenance of legacy data reporting systems.


While the critique is valid, it does not offer a path to a solution.

Utilitarianism is the ruling moral philosophy, and the only possible countermeasure is pricing in externalities, but that depends on an effective government, which is even more unlikely than asking corporations for ethical behavior.


That may be widely believed but there are plenty of government institutions that actually function well. Libraries are a good example.

What’s more: the belief in govt “inefficiency” is one of the hardest factors to overcome when building good institutions, leading to a vicious cycle.


Exactly. People who think the government is inefficient have never worked at any company of scale. All large organizations are inefficient.

A major problem of the US is just corruption. If people went to jail for things like congressional insider trading, we’d solve a lot of these issues.


> All large organizations are inefficient.

I agree, and that’s the case for dismantling as much of the federal government as possible: it is too big to work. Break up Apple, Google, Amazon, and Washington DC.


You’re forgetting the corollary: all small organizations are also inefficient, just at different things.

It’s all trade-offs. Do you want everyone in your country to have a baseline education that can be relied on as a given, even if it suffers in its administration and effectiveness? Or are you OK with pockets of your country having tremendous quality of education and others having very poor quality, as an example?


Public utilities and services are the default and work well in the majority of developed countries. This is true for everything from local transport to water distribution. As the joke says "universal healthcare is so difficult to get right that only all developed countries except the US have managed to put it in place".


Not to mention all the developing countries that have universal healthcare.


There are places where: a) weather predictions are unreliable, b) there is a scarcity of water. Just making the right decision about what hour to water yields a huge monthly saving of water.


None of which need AI hype crap. Some humidity sensors, photosensors, etc. will do the job.
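The sensor-only approach really can be as simple as a threshold rule. A minimal sketch, where the thresholds and the light-based "skip the bright hours" heuristic are illustrative assumptions, not calibrated values:

```python
def should_water(soil_moisture_pct: float, light_lux: float,
                 dry_threshold: float = 30.0, max_lux: float = 10_000.0) -> bool:
    """Water when the soil reads dry, but skip the brightest hours
    of the day to reduce evaporation loss."""
    return soil_moisture_pct < dry_threshold and light_lux < max_lux

# e.g. dry soil at dusk:
# should_water(soil_moisture_pct=22.0, light_lux=800.0)  -> True
```

A real controller would add hysteresis and sensor debouncing, but the core decision stays this small.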


Need is a very strong word. We don't need a lot of what we have today.

But as a hobbyist I would prefer to program with an LLM than learn a bunch of algorithms and sensor readings. It's also very similar to how I would think about it, making it easier to debug.


I think there are two schools of thought. One: the models will get so big that everyone everywhere will use them for everything, and the vendors will make lots of money on API calls. The other: inference will get so computationally cheap that running models at the edge will cost nothing, so an LLM will be in everything; every computational device will have one, as long as you pay a license fee to the people who trained them.


Or a farmer


In a greenhouse operation with high-value crops. Automated control technologies in those applications have been around for decades, and AI is competing with today’s sophisticated control technology designed, operated and continually improved by agriculturists with detailed site-specific knowledge of water (quality, availability, etc.), cultivars, markets, disease pressures, etc. The marginal improvements AI can make in a process with poor data quality and availability, an existing, finely tuned, functioning control system, and the vagaries of managing dynamic living systems are…tiny.

The solution for water-constrained operations in the Americas is to move to a location with more water, not AI.

For field crops…in the Americas, land and water are too cheap and crop prices are too low to be optimized with AI in the present era. The Americas (10% of world pop) could meet 70% of world food demand if pressed with today’s technologies…40% without breaking a sweat. The Americas are blessed.

Talk to the Saudis, Israel, etc. but, even there, you will lose more production by interfering in the motivations, engagement levels and cultures of working farmers than can be gained by optimizing by any complex opaque technological scheme, AI or no. New cultivars, new chemicals, new machinery even…few problems (but see India for counter examples). Changing millennia of farming practice with expensive, not-locally-maintainable, opaque technology…just no. Great truth learned over the last 70 years of development.


Does it have to be computed at the edge by every person?


As the other comment said, "have to" is a very strong word. But there are benefits to it: a) adaptability to local weather patterns, b) no access to WiFi on large properties.


I see. I guess it all boils down to how low power you can make this.

Keep in mind that there are other long-range, low-power wireless communication systems specifically designed to handle this scenario.


Don't be so concerned. There's ample evidence that thinking is often dissociated from the output.

My take is that this is a user-experience improvement, given how few people actually go on to read the thinking process.


If we're paying for reasoning tokens, we should be able to have access to these, no? Seems reasonable enough to allow access, and then we can perhaps use our own streaming summarization models instead of relying on these very generic-sounding ones they're pushing.


> There's ample evidence that thinking is often disassociated from the output.

What kind of work do you use LLMs for? For the semi-technical “find flaws in my argument” use case, I find it generally better at not making common or expected fallacies or assumptions.


then provide it as an option?

