Hacker News

In what way did old-fashioned AI lead to greater insight into thinking and reasoning? Sure, that was the goal, but it completely failed at doing so, as witnessed by the lackluster results. And usually, the more it tried to use any supposed insight or "model" of reasoning, the harder it failed to produce results.

People love to repeat the "stochastic parrots" argument, and sure, I get why. But what does it say about GOFAI when even basic "black boxes" outperform almost everything GOFAI can do, often with much less overfitting? Not just in language models, but also in vision, classification, anomaly detection, etc. There's a reason current techniques are, well, current.

I guess I just don't see how it led to any actual insights if it hasn't been able to reproduce any part of them. Now maybe real GOFAI just hasn't been tried enough, or we did it wrong, and we just need to try going back to classical techniques and theories (i.e. trying to replicate human thinking)... but people are free to do that! And I'm sure most researchers would be delighted if someone came up with a way to make it work and outperform the current "black box" driven approach. It's just that it never happens, but it "sounds" good and "feels" better for some to think it will, eventually.



> In what way did old-fashioned AI lead to greater insight into thinking and reasoning? Sure, that was the goal, but it completely failed at doing so

I was involved in an AI project funded by the EU Esprit programme in the early 90s, developing an Expert System Builder. Our goal was definitely not to gain insights into thinking and reasoning, but to help commercialise an academic technology by providing tools that allowed domain experts to build reasoning systems that could be sold.

It went about as well as you'd expect given the limitations of the expert-system approach, although two of the companies involved did manage to produce novel, vaguely useful in-house tools that were used in demos to senior customers to show that the companies were forward-looking.


Ah sorry, I guess I was generalizing from what I was taught and what my teachers (at Université de Montréal) were doing back then :). Weren't reasoning/expert systems usually based on trying to model the human thought process? At least early on? I might be totally wrong.


> Weren't reasoning/expert systems usually based on trying to model the human thought process?

Yes, in the sense of using rules / heuristics in the way that human experts were believed to. One classic architecture involved a blackboard of facts. Rules were triggered when the facts matched their preconditions and could update the blackboard with new facts, and so on. The rules looked like a mass of if-then statements, but the order in which they were fired was driven by the contents of the knowledge base and the behaviour of the inference engine.
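A minimal sketch of that pattern, for the curious. The fact names and rules here are made up for illustration, and real engines had far more sophisticated conflict resolution and pattern matching than this brute-force re-scan:

```python
# Minimal forward-chaining "blackboard" sketch: a rule fires when its
# preconditions are all on the board, and asserts its conclusions there.
def run(board, rules):
    fired = True
    while fired:
        fired = False
        for pre, post in rules:
            # Fire only if preconditions hold and conclusions are new
            if pre <= board and not post <= board:
                board |= post   # assert new facts onto the blackboard
                fired = True    # re-scan: new facts may enable other rules
    return board

# Hypothetical toy knowledge base
rules = [
    ({"has_feathers"}, {"is_bird"}),
    ({"is_bird"}, {"can_fly"}),
]
print(sorted(run({"has_feathers"}, rules)))
```

The point of the architecture was exactly what's described above: control flow emerges from the data on the board, not from any hand-written call sequence.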

In my experience, once you reached a certain number of rules / level of complexity, it became harder and harder to add new rules, and the lack of traditional programmatic approaches to structuring and control compromised the purely 'knowledge based' approach. As a traditional programmer myself (in Lisp), I increasingly encountered situations where I just wanted to call a proper function.

There were also more theoretical issues, such as non-monotonic reasoning, where you discover that a previously asserted fact was misleading / incorrect and you need to retract subsequent assertions, etc. The comedy example here is where you have knowledge that Tweety is a bird, and use rules to design an aviary for him. You then discover that Tweety is a penguin, so a completely different habitat is required. There were also comedy examples where people used a medical expert system to diagnose their car's problems and it would determine that the rust was a bad case of measles.
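The Tweety problem can be caricatured in a few lines. Real systems tackled retraction with truth-maintenance machinery that tracked justifications rather than recomputing everything; this sketch (fact and habitat names invented for illustration) just shows how one new fact overturns a default conclusion:

```python
# Non-monotonic toy: the default conclusion ("birds fly, so build a
# flight aviary") is retracted when a more specific fact arrives.
def habitat(facts):
    if "bird" in facts and "penguin" not in facts:
        return "flight aviary"    # default assumption: Tweety flies
    if "penguin" in facts:
        return "pool enclosure"   # exception overrides the default
    return "unknown"

assert habitat({"bird"}) == "flight aviary"
assert habitat({"bird", "penguin"}) == "pool enclosure"  # retracted!
```

Adding a fact shrinks the set of conclusions, which is exactly what classical monotonic logic can't do and what made these systems theoretically awkward.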

I think it did lead to improved understanding of mathematical logic-based systems, but didn't feed back into an understanding of human cognition.


Symbolic manipulation from early AI work has been used to great effect in computer algebra systems like Maple or Mathematica. It's a measure of the 'defining down' of AI that none of that stuff counts in people's minds any more.
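For a flavour of that lineage, here's a toy symbolic differentiator in the spirit of those early programs (symbolic differentiation and integration were classic early-AI demos). The nested-tuple expression encoding is my own made-up illustration, not any real system's representation:

```python
# Tiny symbolic differentiator: expressions are nested tuples
# like ('*', 'x', 'x'), manipulated by rewrite rules, not numerics.
def d(e, x):
    if e == x:
        return 1                 # d(x)/dx = 1
    if not isinstance(e, tuple):
        return 0                 # constants and other variables
    op, a, b = e
    if op == '+':                # sum rule
        return ('+', d(a, x), d(b, x))
    if op == '*':                # product rule
        return ('+', ('*', d(a, x), b), ('*', a, d(b, x)))
    raise ValueError(op)

# d/dx of x*x  ->  1*x + x*1  (unsimplified, as early programs produced)
print(d(('*', 'x', 'x'), 'x'))
```

Modern computer algebra systems are essentially this idea scaled up with enormous rule sets, canonical forms, and simplification heuristics.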


> when even basic "black boxes" outperform almost everything GOFAI can do?

Wondering what exactly you include in "everything"? Maybe you can provide a specific example?


I was referring to the stuff I mention later in my comment: Anomaly detection, image classification/segmentation, upscaling data, even interpolation, complex forecasting, text generation, text to speech, speech recognition, etc.

I'm curious about where GOFAI is still outperforming modern techniques in complex tasks. As in, genuinely curious, because I want to be wrong on this!


> I was referring to the stuff I mention later in my comment: Anomaly detection, image classification/segmentation, upscaling data, even interpolation, complex forecasting, text generation, text to speech, speech recognition, etc.

I think most/all of this was not the target of GOFAI.

> I'm curious about where gofai is still outperforming modern techniques in complex tasks. As in, genuinely curious, because I want to be wrong on this!

Theorem proving and equation solving, for example. NNs still suck at deep symbolic math and reasoning.
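Theorem proving is a good example because the core technique fits in a screenful. Here's a toy propositional resolution prover (refutation-style); the clause encoding is my own illustration, and real provers add indexing, subsumption, and first-order unification on top of this:

```python
# Minimal propositional resolution prover: to prove a goal, assume its
# negation and derive the empty clause (a contradiction).
# A clause is a frozenset of literals; negation is marked with '~'.
def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    # All resolvents of two clauses on complementary literal pairs
    return [(c1 - {lit}) | (c2 - {neg(lit)})
            for lit in c1 if neg(lit) in c2]

def proves(kb, goal):
    clauses = set(kb) | {frozenset({neg(goal)})}   # assume ~goal
    while True:
        new = set()
        cl = list(clauses)
        for i, c1 in enumerate(cl):
            for c2 in cl[i + 1:]:
                for r in resolve(c1, c2):
                    if not r:
                        return True       # empty clause: contradiction
                    new.add(frozenset(r))
        if new <= clauses:
            return False                  # saturated: goal not provable
        clauses |= new

kb = [frozenset({'~p', 'q'}),  # p -> q, in clause form
      frozenset({'p'})]
print(proves(kb, 'q'))
```

Completely deterministic, inspectable, and the derivation is a proof. That auditability is precisely what the neural approaches still struggle to offer on symbolic tasks.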


True, I didn't consider those as being AI tasks but that's just proving your point!

Though I think that a lot of those (certainly image classification, face recognition, TTS, etc.) were tasks that were very important in the field back in the 1970s/80s/90s. A lot of resources were spent on AI specifically for those purposes.


> Though I think that a lot of those (certainly image classification, face recognition, TTS, etc) were tasks that were very important in the field back in the 1970/80/90s. A lot of resources were spent on AI specifically for those purposes.

Maybe, but it is hard for me to see how you came to that conclusion. The discussion is about the specific term GOFAI. I would look at the following as a source of truth:

- the wiki definition, which explicitly states that GOFAI is symbolic AI (rule-based reasoning, like Prolog, Cog, Cyc).

- the table of contents of the 2nd edition of the Norvig book: https://aima.cs.berkeley.edu/2nd-ed/contents.html which has very little about what you described, and mostly focuses on search, discrete algorithms, and reasoning.



