
> I wonder if they have measured their results?

This is a notoriously difficult thing to measure in a study. More relevantly though, IMO, it's not a small effect that might be difficult to notice - it's a huge, huge speedup.

How many developers have measured whether they are faster when programming in Python vs assembly? I doubt many have. And I doubt many have chosen Python over assembly because of any study that backs it up. But it's also not exactly a subtle difference - I'm fairly sure 99% of people will say that, in practice, it's obvious that Python is faster to program in than assembly.

I talked literally yesterday to a colleague who's a great senior dev, and he made a demo in an hour and a half that he says would've taken him two weeks to do without AI. This isn't a subtle, hard to measure difference. Of course this is in an area where AI coding shines (a new codebase for demo purposes) - but can we at least agree that in some things AI is clearly an order of magnitude speedup?

It is entirely plausible that your colleague did get a significant speedup from using AI. But it may also be a one-off, overstated story (probably not, since you respect his judgement as a senior dev).

I'm not saying they didn't get a speedup, or that you can't get one. I'm just wondering how they determined that speedup, and how reliable they are as a source for the statement "using AI will improve your performance".

As the other thread was discussing, this is a difficult thing to prove, but even a debatable measurement proves more than anecdotal stories do. I'm trying to probe for sources where these kinds of statements are backed up, as I'm a programmer continually judging the tools available to me.

And on the flip side there is at least some evidence that programmers often overstate/overestimate their speedup when using AI.
