Whether philosophy has been a useless waste of time is a commonly recurring question. I think it's asked from a kind of historically privileged position: much of the hard philosophical groundwork has, so to speak, already been done for us.

Another perspective is that these naive thoughts have a curious tendency to pop up, and philosophy as the work of critiquing and developing such thoughts is a crucial part of intellectual culture. There's a fascinating case in the history of AI research, as described by Hubert Dreyfus:

> When I was teaching at MIT in the 1960s, students from the Artificial Intelligence Laboratory would come to my Heidegger course and say in effect: ‘‘You philosophers have been reflecting in your armchairs for over 2000 years and you still don’t understand intelligence. We in the AI Lab have taken over and are succeeding where you philosophers have failed.’’ But in 1963, when I was invited to evaluate the work of Allen Newell and Herbert Simon on physical symbol systems, I found to my surprise that, far from replacing philosophy, these pioneering researchers had learned a lot, directly and indirectly, from us philosophers: e.g., Hobbes’ claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of a ‘universal characteristic’ (a set of primitives in which all knowledge could be expressed), Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Wittgenstein’s postulation of logical atoms in his Tractatus. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.

> But I began to suspect that the insights formulated in existentialist armchairs, especially Heidegger’s and Merleau-Ponty’s, were bad news for those working in AI laboratories—that, by combining representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to reenact a failure. Using Heidegger as a guide, I began looking for signs that the whole AI research program was degenerating. I was particularly struck by the fact that, among other troubles, researchers were running up against the problem of representing significance and relevance—a problem that Heidegger saw was implicit in Descartes’ understanding of the world as a set of meaningless facts to which the mind assigned values, which John Searle now calls function predicates.
