there's a huge flaw in the logic here but I can't pinpoint it
we've never had machines that can reason. today, they reason badly. tomorrow, they'll reason better. not all ai research white papers will pan out, but some will. the ai research community is acutely aware of the limitations in reasoning; it's their whole mission right now. 100% chance someone will make progress. Analogous to the chip industry or self driving, in that regard
incremental improvements in reasoning will broaden the dead zone for human developers
what difference does it make if in 20 years NASA still needs a few human developers but the rest of us are unemployed? ai agents still can't get you to the moon with a one-shot coding session in 2045? who cares?
the "still can't" we really need to be worried about is "politicians still can't consider UBI"