I wanted to do something complex in Google Sheets. We had just gotten Gemini in Sheets. I assumed they'd have wired up some fancy MCP integration and enabled us to do a lot of things, but all Gemini in Sheets could do was summarize.
It sounds like that's still a step up from Copilot.
I know Gemini has more advanced features in Docs, and they rolled something out for Sheets. I would bet Google Workspace keeps gaining ground in the functionality battle.
Not who you asked, but I don't like the effect they have on people. People develop a dependence on them at the cost of their own skills. I have two problems with that. First, a lot of their outputs are factually incorrect, but confidently stated. They project an air of trustworthiness seemingly more effectively than a used car salesman. My other problem is farther-looking: once everyone is sufficiently hooked and the enshittification begins, whoever is pulling the strings on these models will be able to silently direct public sentiment from under cover. People are increasingly outsourcing their own decisions to these machines.
Exactly. People are blindly dumping everything into LLMs. A few years from now, will we have Sr or Staff engineers who can fix things themselves? What happens when Claude has an outage and there's a prod issue?!
I learnt programming when books were actually used, back when docs pages were barebones.
My 2 cents: read the actual docs; these days docs are exceptional. Rust offers a full-fledged book as part of its docs. Back when Go launched, its docs were inadequate, so I started writing a short GitHub-based "book" for newbies, and it did well (judging by the GitHub stars).
Learn without AI, be an expert. And then use AI to write the code.
Using AI to learn is honestly delusional. You don't learn when AI writes the code for you. Also, for a new language it takes time to get used to the syntax, hence writing by hand until you become an expert.
The goal of writing software for your job is to write it within that sprint.
But for hobby projects, at least, you can take your time and learn.
Although I'd recommend going deep, without AI, into whatever tools you'll use at your job, because who knows, maybe your next company won't allow you to use AI!
I use Cursor daily, and I have worked on agents using LangChain. Maybe we are doing something wrong, but even using SOTA models, unless we explicitly specify which MCP tool to call, it calls anything; sometimes, while other times it does a passable job. So now our mandate is to spell everything out to the LLM so it doesn't add a non-existent column like created_at or updated_at to our queries.
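One cheap guardrail we could add on top of spelling things out (a minimal sketch; the schema, column names, and helper are all made up for illustration) is to check LLM-generated SQL against an allow-list of real columns before running it:

```python
import re

# Hypothetical allow-list of the columns that actually exist in the table.
KNOWN_COLUMNS = {"id", "name", "amount", "order_date"}

def unknown_columns(sql: str, known: set[str] = KNOWN_COLUMNS) -> set[str]:
    """Return identifiers in the SELECT list that aren't in the schema.

    A crude check: grab the SELECT ... FROM clause and split on commas.
    Real code would use a SQL parser, but this catches the common case
    of a hallucinated column name.
    """
    match = re.search(r"select\s+(.*?)\s+from\b", sql,
                      re.IGNORECASE | re.DOTALL)
    if not match:
        return set()
    cols = {c.strip().split(".")[-1] for c in match.group(1).split(",")}
    return {c for c in cols if c and c != "*" and c not in known}

# An LLM-generated query that invented created_at / updated_at:
bad = "SELECT id, name, created_at, updated_at FROM orders"
print(sorted(unknown_columns(bad)))  # ['created_at', 'updated_at']
```

If the returned set is non-empty, you reject the query and re-prompt instead of letting it hit the database.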
I've used every SOTA model for day-to-day work, and at best they save some effort. They can't do everything yet.
Precisely. I always find myself thinking that maybe I'm just too dumb to use these LLMs properly, but that would defeat the purpose of them being useful, haha.
And I keep reading people who heap praise on AI, like the Staff engineer at Google who weirdly praised a competitor's LLM. They miss one important part: AI is good at end-to-end problems that are already solved. Asking it to write a load balancer will result in a near-perfect solution because it has access to very well-written load balancers already.
The real MOAT is to write something custom and this is where it struggles sometimes.
On a contrarian note: if LLMs really are that helpful, why are QA teams still needed? Wouldn't the LLM magically write the best code?
Since LLMs have been shoved into everyone's work schedule, we're seeing more frequent outages. In 2025: two Azure outages, then an AWS outage, and last week two Snowflake outages.
Either LLMs are not the panacea they're marketed to be, or something is deeply wrong in the industry.
Yes, it is both. If something is forced top-down as a productivity booster, it probably isn't one! I remember back in the day when I had to fight management to use Python for something; it gave us a real productivity boost to write our tooling in Python. If LLMs were that great from the start, we would be the ones fighting to use them.
I use Cursor on a daily basis. It is good for certain use cases, horribly bad for some others. Read the below keeping that in mind! I am not an LLM skeptic.
It is wild that people are so confident in AI that they're not testing the code at all.
What are we doing as programmers? Reducing the typing and testing time? Because we still have to write the prompt in English and do the software design ourselves; otherwise AI systems write a billion lines of code just to add two numbers.
This hype machine should show tangible outputs. And before anyone says they're entitled not to share their hidden talents: then they should stop publishing articles as well.