I wonder if anyone has done an analysis of HN user sentiment toward the various AI models over time. I'd be curious to see what that looks like. I'm seeing more and more people speak positively about Gemini and Google (and having used Gemini recently, I align with that sentiment).
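For anyone who wants to try it, here's a minimal sketch of that analysis in Python: pull matching comments from the Algolia HN Search API and bucket their sentiment scores by month. The model keywords, page counts, and the choice of VADER (via nltk) as the scorer are all just my assumptions, not anything rigorous:

    # Rough sketch: monthly average sentiment of HN comments mentioning each model.
    # Assumptions: Algolia HN Search API for the data, VADER (nltk) for scoring,
    # plain keyword matching for "mentions a model".
    # Setup: pip install requests nltk; then nltk.download("vader_lexicon") once.
    import requests
    from collections import defaultdict
    from nltk.sentiment import SentimentIntensityAnalyzer

    MODELS = ["Gemini", "Claude", "ChatGPT"]  # placeholder query terms
    analyzer = SentimentIntensityAnalyzer()

    def monthly_sentiment(query, pages=3):
        """Average VADER compound score per month for comments matching `query`."""
        buckets = defaultdict(list)
        for page in range(pages):
            resp = requests.get(
                "https://hn.algolia.com/api/v1/search_by_date",
                params={"query": query, "tags": "comment",
                        "hitsPerPage": 100, "page": page},
                timeout=30,
            )
            resp.raise_for_status()
            for hit in resp.json()["hits"]:
                text = hit.get("comment_text") or ""
                month = hit["created_at"][:7]  # ISO timestamp -> "YYYY-MM"
                buckets[month].append(analyzer.polarity_scores(text)["compound"])
        return {m: sum(s) / len(s) for m, s in sorted(buckets.items())}

    for model in MODELS:
        print(model, monthly_sentiment(model))

Keyword matching is crude ("Gemini" also matches the zodiac sign and the Gemini protocol), so for anything serious you'd want to restrict the search to AI-related stories first.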
I think Bard (lol) and Gemini got a late start, so lots of folks dismissed them, but I feel like they've fully caught up. Definitely excited to see what Gemini 3 vs GPT-5 vs Claude 4 looks like!
I'm using the Windsurf IDE, so I have all the main models available. I mainly do Python, JS, HTML, CSS, and some Go. I've found Claude 3.7 outperforms Gemini 2.5, GPT-4.1, GPT-4o, DeepSeek, etc. for my work in most cases.
I suspect I'm hitting some performance throttling with Gemini 2.5 in my Windsurf setup, because it's just not as good as anecdotal reports from others and the benchmarks would suggest.
I also seem to run up against a kind of LLM laziness sometimes, where the model seemingly can't be bothered to answer a challenging prompt ... perhaps load balancing in action.