
Back in 2005 I suggested that it would be interesting to build a system that would play any game without being given directions (that blog post has since disappeared from the web). This, and not image recognition, would mean intelligence, I thought at the time: you can't truly tell a donkey from a dog without some general knowledge of the world that donkeys and dogs (and your AI) live in.

The academic image-recognition machine, though, seems unstoppable, and yes, it does seem to improve over time. I honestly don't know what the limits of "dumb" image recognition are in terms of quality, but calling it AI still doesn't make sense to me.



AGI is the idea of creating general intelligence, which almost certainly requires some reinforcement learning (the way dogs and humans learn: by having a goal and an internal reward system).

Deepmind's Atari player is based on deep reinforcement learning, where increasing the score represents a reward:

https://m.youtube.com/watch?v=EfGD2qveGdQ
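
At its core the learning signal is nothing more than the score delta between frames. A minimal sketch of that idea, as hypothetical tabular Q-learning over a made-up env interface (the real system uses a deep network over raw pixels, not a table):

    import random

    # Toy Q-learning where the only reward is the change in game score.
    # "env" is a hypothetical emulator with reset() -> (state, score) and
    # step(action) -> (state, score, done); nothing else is game-specific.
    def train(env, actions, episodes=100, alpha=0.1, gamma=0.99, eps=0.1):
        q = {}  # (state, action) -> estimated return
        for _ in range(episodes):
            state, score = env.reset()
            done = False
            while not done:
                if random.random() < eps:
                    action = random.choice(actions)              # explore
                else:
                    action = max(actions, key=lambda a: q.get((state, a), 0.0))
                next_state, next_score, done = env.step(action)
                reward = next_score - score                      # score increase = reward
                best_next = max(q.get((next_state, a), 0.0) for a in actions)
                old = q.get((state, action), 0.0)
                q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                state, score = next_state, next_score
        return q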

I used to believe the moving goalpost idea, that AI is anything that isn't yet possible. I now disagree.

I saved myself a lot of energy by avoiding entirely the question "what counts as AI?" by switching to the question, "what counts as AGI?" which is a term with a clearer threshold:

https://en.m.wikipedia.org/wiki/Artificial_general_intellige...


> but calling it AI still doesn't make sense to me.

That's the core problem of AI: no matter what progress is made, it's instantly declared "not AI anymore", while the goalpost of what counts as AI keeps being pushed out to "not this". The real issue, of course, is that we don't know how intelligence actually works, so it's impossible to set a fixed goalpost for when AI is truly achieved.


> The real issue, of course, is that we don't know how intelligence actually works, so it's impossible to set a fixed goalpost for when AI is truly achieved.

No, the real issue is that people still think, intuitively, that there's a little homunculus in their head making the decisions. Each time we build something that doesn't look like a homunculus, the conclusion is that we've failed at attaining "real" intelligence...


> impossible to set a fixed goalpost for when AI is truly achieved

We have a kind of fixed goalpost in human intelligence. When computers are worse at something than humans, like chess, it's thought of as intelligence, and when they get better it's ticked off as just an algorithm. The AI researchers gradually tick off abilities: chess long ago, image recognition happening now, general reasoning at some point in the future.


Making decisions is not that complicated, nor is it interesting. Your iPhone can make a decision to kill a process that uses too many system resources while in the background. What's more interesting (and what seems to be the main function of the so-called homunculus) is being aware of your own location in space and time, as well as remembering previous locations. In other words, having some model of the world and knowing your place in it is what computers haven't achieved yet in any meaningful way.


So, map building is now what will truly define AI until it's also just another technology.


How is map building AI? It's a pretty mechanical process. Start somewhere, make some measurements, move along, repeat. At what point is there any notion of intelligence involved?
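
Spelled out, the loop really is that short; a toy sketch, assuming a hypothetical robot object with sense() and move() and ignoring all measurement noise:

    # Toy map building: start somewhere, measure, move along, repeat.
    # "robot" is a hypothetical interface: sense(pose) yields grid cells seen
    # as occupied from the current pose, move(pose) returns the next pose.
    def build_map(robot, steps=1000):
        occupied = set()      # cells believed to contain obstacles
        visited = set()       # cells the robot has actually been in
        pose = (0, 0)         # start somewhere
        for _ in range(steps):
            visited.add(pose)                   # remember previous locations
            occupied.update(robot.sense(pose))  # make some measurements
            pose = robot.move(pose)             # move along... and repeat
        return occupied, visited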


It's deeper than you describe...

https://en.wikipedia.org/wiki/Simultaneous_localization_and_...

... but I agree that there's nothing in particular that distinguishes it from other problems.

My comment was satirical in nature. SLAM is an interpretation of what the parent comment had described:

"[B]eing aware of your own location in space and time, as well as remembering previous locations. In other words, having some model of the world and knowing your place in it[.]".

There is a general pattern of statements of the form "We'll only really have AI when computers X", followed by computers being able to X, followed by everyone concluding that X is just a simple matter of engineering like everything else we've already accomplished. As my AI prof put it, ages ago, "AI is the study of things that don't work yet."


Or it could be that a system capable of reasoning its way to doing X would be intelligent, but you can also teach to the test, so to speak, and build a system that does X without being generalizable, and thus satisfy X without being intelligent.


> Or it could be that a system capable of reasoning its way to doing X would be intelligent, but you can also teach to the test, so to speak, and build a system that does X without being generalizable, and thus satisfy X without being intelligent.

Which is exactly what we do with many kids today; makes you wonder how many times we might invent AI and not know it, because we didn't raise it correctly and it appears too dumb to be considered a success.


That's sort of where a lot of people have arrived, to be sure, with distinguishing the notion of an "artificial general intelligence" from other things in the AI bucket.


Your iPhone is aware of its own location in space and time and remembers previous locations as well.


I don't think the iPhone has a model of the world with its own body in it. You could program that too, and once you also add the ability to move/act according to a certain goal, you have an intelligent agent. However, making your system more or less useful is what takes the task of AI to levels of complexity we can't reach yet. Compare ants and dogs: the former is achievable in terms of simulation but not interesting, the latter can be potentially useful but is already too complex for us to implement.


Control theory is a thing. Causal induction is a thing.


Control theory: I worked with some smart folks when it came to designing plant control systems. All of it was human labor. There was zero intelligence on the part of the tools, and all parameters had to be figured out by the person designing the control strategy.

Causal induction: sounds interesting until you dig in and realize everything is non-computable.

So what exactly is your point?


>Causal induction: sounds interesting until you dig in and realize everything is non-computable.

Wait, what?


Here you go: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc.... Follow the chain a little bit and you'll come to things like: http://www.hutter1.net/publ/aixiaxiom2.pdf. Super interesting but very impractical.
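
The non-computability is right there in the definition of the universal prior those papers build on; roughly (standard form, my notation):

    M(x) = \sum_{p : U(p) = x*} 2^{-|p|}

where the sum ranges over every program p whose output on the universal machine U begins with x. Evaluating that sum exactly means knowing which programs halt with the right output, so it can only ever be approximated.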


Causal induction is a different thing from Solomonoff Induction.

https://cocosci.berkeley.edu/tom/papers/tbci.pdf


That's basically what I'm saying in the previous sentence.


The reason behind that is probably that a large enough collection of interconnected algorithms executing simultaneously may be indistinguishable from intelligence. The questions then become 'what is large enough' and 'which algorithms', and this is where we may be off by much more than our most optimistic guesses; and that still leaves the possibility of being flat-out wrong about this.


> that a large enough collection of interconnected algorithms executing simultaneously may be indistinguishable from intelligence

Or is intelligence.


AI is like magic.

If you don't know how it works -- it looks like magic. It can tell a donkey from a horse, it can play checkers, diagnose a patient etc.

After you are told the trick -- it is just A*, or the Rete algorithm, or a multi-layer NN. That makes it less magical, and it becomes just another algorithm.
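
To make the point concrete, here's roughly how small the A* "trick" is; a generic sketch over a hypothetical neighbors()/h() pair, with nothing domain-specific in it:

    import heapq, itertools

    # Generic A*: a priority queue ordered by cost-so-far plus a heuristic
    # estimate of the remaining cost. neighbors(n) yields (next_node, step_cost),
    # h(n) estimates the cost from n to the goal.
    def a_star(start, goal, neighbors, h):
        tie = itertools.count()          # tie-breaker so the heap never compares nodes
        frontier = [(h(start), next(tie), 0, start, [start])]
        best_cost = {start: 0}
        while frontier:
            _, _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt, step in neighbors(node):
                new_cost = cost + step
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier,
                                   (new_cost + h(nxt), next(tie), new_cost, nxt, path + [nxt]))
        return None  # goal unreachable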


There isn't a lot in the world of AI that seems worthy of the I. Image recognition, though, while not a part of our conscious reasoning, is a very strong part of our brains. It continues to advance because it is immediately profitable.


That's actually starting to happen. Deepmind built an AI that can learn to beat the best human players on many Atari games after just a few hours of playing the game and learning. And of course it uses all that advanced image recognition stuff; that has always been the hard part. The actual game-playing part is just a simple reinforcement learning neural network that is put on top of it.

The reason it's AI is that it isn't specific to image recognition. Deep learning is very general: the same algorithms work just as well at speech recognition, or translation, or controlling robots, etc. Image recognition is just the most popular application.
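
Concretely, the split looks something like this; a hypothetical PyTorch-style sketch, not Deepmind's exact architecture: a generic convolutional front end (the image recognition stuff) with a small Q-value head on top (the game-playing part).

    import torch.nn as nn

    # Hypothetical DQN-style network: generic conv feature extractor plus a
    # small head that outputs one Q-value per joystick action. Only n_actions
    # is game-specific; the rest is the same for every game.
    class AtariQNet(nn.Module):
        def __init__(self, n_actions, in_frames=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Flatten(),
            )
            self.q_head = nn.Sequential(
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),   # 7x7 assumes 84x84 input frames
                nn.Linear(512, n_actions),
            )

        def forward(self, frames):   # frames: (batch, in_frames, 84, 84)
            return self.q_head(self.features(frames))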


I've read elsewhere that they trained it on whether the score went up. For it to really be a general game-playing AI, it should be able to figure out the goals of games without scoring systems, such as adventure games.


That is, unfortunately, impossible. All AIs need some kind of reward function, an incentive to do things. Without that they have no reason to do anything.


So your thesis is that AIs will never reach human intelligence, which can indeed figure out goals of video games on its own?


It might be able to figure them out, though it has no reason to care.

Certainly, a new born baby given a video game controller would not be able to figure it out.


It's important to realize that the AI fails completely at other Atari games.


Specifically the ones that require any sort of identification of state. It kicks butt at hand-eye coordination tasks, and it is awesome that it can learn those hand-eye tasks automatically, but higher-order reasoning is obviously out. AI makes progress every year, and when that progress on individual tasks is coalesced into a coherent single entity, we will have what the average person calls intelligence.


Yes, but that's because the specific ANN the Deepmind researchers used doesn't have any state. IIRC it's given the last 2 or 3 frames as input. No doubt that makes the learning easier (I should mention that reinforcement learning is used, unlike for most tasks ANNs are applied to). But recently stateful ANNs (typically based on Long Short-Term Memory (LSTM) networks, which date from 1997 already) have become more and more popular. I would like to see someone make another attempt at Atari games with such a stateful network; it has probably already been done, actually.
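
For illustration, the difference is roughly this; a hypothetical PyTorch-style sketch (it assumes a feature vector per frame rather than raw pixels), where the LSTM's hidden state is what lets the network carry information across arbitrarily many frames:

    import torch.nn as nn

    # Hypothetical recurrent policy/Q network: instead of stacking the last
    # few frames as input, an LSTM carries hidden state across the episode.
    class RecurrentPolicy(nn.Module):
        def __init__(self, feat_dim, n_actions, hidden=256):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_actions)

        def forward(self, frame_feats, state=None):
            # frame_feats: (batch, time, feat_dim); state carries over between calls
            out, state = self.lstm(frame_feats, state)
            return self.head(out), state   # one action-score vector per time step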


This is true. The reason there's a scene in the movie 2001 in which HAL plays chess is that, at the time, it was thought that playing chess well required real human intelligence.

But as soon as chess programs got good, we all took them for granted.


That's actually trivially doable and has been done for video games and board games.


One that can play ANY game, not programmed to play a specific one.


Yes, it's trivial. For board games, all AIs essentially use alpha-beta pruning plus a scoring function, and an approximation to that scoring function can be auto-generated using standard machine learning techniques.

For (classic) video games it's actually somewhat similar, but rather than a board being fed in as input, you feed in a bitmap of the display (sometimes at a lower resolution, using compression techniques to reduce input features) and optimize the moves made to maximize the score at any point, rather than only at end-game.
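
For completeness, the board-game recipe is small enough to sketch; a generic negamax-style alpha-beta over hypothetical moves()/apply_move()/score() functions, where score() (from the point of view of the side to move) is the part a learned approximation can replace:

    # Generic alpha-beta: the only game-specific pieces are moves(), apply_move()
    # and score(), and score() is the one that can be machine-learned.
    def alphabeta(pos, depth, alpha, beta, moves, apply_move, score):
        legal = moves(pos)
        if depth == 0 or not legal:
            return score(pos)                  # leaf: fall back to the scoring function
        best = float("-inf")
        for m in legal:
            # Negamax convention: the opponent's best outcome is our worst.
            val = -alphabeta(apply_move(pos, m), depth - 1, -beta, -alpha,
                             moves, apply_move, score)
            best = max(best, val)
            alpha = max(alpha, val)
            if alpha >= beta:
                break                          # prune: the opponent won't allow this line
        return best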



