thewhitetulip's comments | Hacker News

I wanted to do something complex in Google Sheets. We had just gotten Gemini in Sheets. I assumed they'd have wired up some fancy MCP and enabled us to do a lot of things, but all Gemini in Sheets could do was summarize.

It sounds like that's still a step up from Copilot.

I know Gemini has more advanced features in Docs, and they rolled something out for Sheets. I would bet Google Workspace keeps gaining ground in the functionality battle.


Steve Jobs used to say every product needs a killer feature

AI is a product in search of a killer feature

First, AGI was going to arrive any day. GPT-5 had apparently shown intelligence.

Then they started adult chat with paying customers.


Isn't AGI Adult Group Interaction?

Yes, I didn't think of that. See, he is right. They did achieve AGI, just not the one he wanted.

Because Apple products at least work deterministically

They don't say: here is a $1,000 iPhone, and there is a 60% chance you can successfully message or call a friend.

The other 40%? Well, AGI is right around the corner, and can the US govt please give me a trillion-dollar loan and a bailout?


When I ask Siri to play some album on Spotify, it feels like it works about 60% of the time.

Siri is not Apple's MVP though.

So far my agents have worked better than Siri, and afaik nobody has actually asked for a bailout yet.

Don't worry, it's coming

Apple's primary product is not Siri though. Siri is at best a side quest.

Or how "your next meeting will be in the Metaverse".

Hoping that LLMs go the way of the Metaverse.

there is little chance of that, especially with people running them locally

Why? What don't you like about them?

Not who you asked, but I don't like the effect they have on people. People develop dependence on them at the cost of their own skills. I have two problems with that. A lot of their outputs are factually incorrect, but confidently stated. They project an air of trustworthiness seemingly more effectively than a used car salesman. My other problem is farther-looking. Once everyone is sufficiently hooked, and the enshittification begins, whoever is pulling the strings on these models will be able to silently direct public sentiment from under cover. People are increasingly outsourcing their own decisions to these machines.

Exactly. People are blindly dumping everything into LLMs. A few years into the future, will we have Sr or Staff engineers who can fix things themselves? What happens when Claude has an outage and there is a prod issue?!

PRs these days are all AI slop.


Good example.

I learnt programming when books were actually used, back when docs pages were barebones.

My 2 cents: read the actual docs; these days docs are exceptional. Rust offers a full-fledged book as part of its docs. Back when Go was launched, its docs were inadequate, so I started writing a short GitHub-based "book" for newbies, and it did well (going by the GitHub stars).

Learn without AI, be an expert. And then use AI to write the code.

Using AI to learn is honestly delusional. You don't learn when AI writes the code for you. Also, for a new language it'll take some time for us to get used to the syntax; hence, write by hand until you become an expert.

The goal of writing software for your job is to write it within that sprint.

But for a hobby, at least, you can take time and learn.

Although I'd recommend getting into depth, without AI, with whatever tools you are going to use at your job, because who knows, maybe your next company won't allow you to use AI!


I use Cursor daily, and I have worked on agents using LangChain. Maybe we are doing something wrong, but even using SOTA models, unless we explicitly specify which MCP tool to call, it uses anything, sometimes, while other times it can do a passable job. So now our mandate is to spell everything out to the LLM so it doesn't add a non-existent column like created_at or updated_at to our queries.
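A minimal sketch of what "spelling everything out" can look like in practice: embed the exact table schema in the prompt and add a cheap post-check that flags any column the model invented. The `users` schema, prompt wording, and validator here are purely illustrative assumptions, not from any real project or library.

```python
# Hypothetical sketch: constrain the model with an explicit schema and
# catch hallucinated columns (e.g. created_at/updated_at) after the fact.
import re

ALLOWED_COLUMNS = {"id", "name", "email"}  # assumed schema of a `users` table


def build_prompt(question: str) -> str:
    """Embed the exact schema in the prompt instead of letting the model guess."""
    return (
        "Table `users` has ONLY these columns: id, name, email.\n"
        "Do not reference any other column.\n"
        f"Task: {question}"
    )


def hallucinated_columns(sql: str) -> set[str]:
    """Cheap post-check: return identifiers that are neither real columns
    nor SQL keywords, i.e. columns the model likely made up."""
    keywords = {"select", "from", "where", "users", "and", "or"}
    tokens = set(re.findall(r"[a-z_]+", sql.lower()))
    return tokens - keywords - ALLOWED_COLUMNS


bad_sql = "SELECT name, created_at FROM users"
print(hallucinated_columns(bad_sql))  # the invented created_at column is caught
```

This obviously doesn't replace a real SQL parser or schema-aware validation, but even a crude allowlist like this catches the "phantom column" class of mistakes before the query hits production.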

I've used every SOTA model for day-to-day work, and at best they save some effort. They can't do everything yet.


Precisely. I always find myself thinking that maybe I'm just too dumb to use these LLMs properly, but that would defeat the purpose of them being useful, haha.

And I keep reading people who heap praise on AI, like the staff engineer at Google who weirdly praised a competitor's LLM. They miss one important part: AI is good for end-to-end problems that are already solved. Asking it to write a load balancer will result in a perfect solution because it has access to very well-written load balancers already.

The real moat is writing something custom, and this is where it sometimes struggles.


On a contrary note: if LLMs really are that helpful, why are QA teams still needed? Wouldn't the LLM magically write the best code?

Since LLMs have been shoved into everyone's work schedule, we're seeing more frequent outages. In 2025: two Azure outages, then an AWS outage, and last week two Snowflake outages.

Either LLMs are not the panacea they're marketed to be, or something is deeply wrong in the industry.


Why not both? It's not this industry, it's everything. Fuck Jack Welch, fuck the Chicago School.

Yes, it is both. If something is forced top-down as a productivity spike, then it probably isn't one! I remember back in the day when I had to fight management to use Python for something! It gave us a productivity boost to write our tooling in Python. If LLMs were really that great, we would be fighting for them, not having them forced on us.

That's what he said. AI is building AI so that's where they are headed.

Most of my devs don't write code, they review it.


I would wager $1m that Anthropic will still have SWEs doing engineering work 3 years from now, if the company is still around.


Yeah. And afaik, having used AI models, unless you use Claude Opus 4.5 (high) you don't get good results.


If AI can do the full job of a software engineer, then it can review code too.


I use Cursor on a daily basis. It is good for certain use cases, horribly bad for some others. Read the below keeping that in mind! I am not an LLM skeptic.

It is wild that people are so confident in AI that they're not testing the code at all.

What are we doing as programmers? Reducing the typing and testing time? Because we still have to write the prompt in English and do the software design; otherwise, AI systems write a billion lines of code just to add two numbers.

This hype machine should show tangible outputs, and before anyone says they're entitled not to share their hidden talents: then they should stop publishing articles as well.

You can't have your cake and eat it too!


You'll see countless posts on LinkedIn about how great LLMs are. Nobody goes in depth these days; just superficial posts.


