AI turns the corner?

Is the latest version of AlphaZero showing signs of human-style intuition and creativity?

The current AlphaZero is a more generalised version of the program, produced by Demis Hassabis and his team, that beat top human players of Go for the first time. The new version, presented briefly in Science magazine, is able to learn a range of different games; besides Go it learned chess and shogi, and apparently reached world-class play in all of them.

The Science article, by David Silver et al., modestly says this achievement is an important step towards a fully general game-playing program, but press reports went further, claiming that in chess particularly AlphaZero showed more human traits than any previous system. It reinvented human strategies and sacrificed pieces for advantage fairly readily, the way human players do; chess commentators said that its play seemed to have new qualities of insight and intuition.

This is somewhat unexpected, because so far as I can tell the latest version is in fact not a radical change from its predecessors; in essence it uses the same clever combination of appropriate algorithms with a deep neural network, simply applying them more generally. It does appear that the approach has proved more widely powerful than we might have expected, but it is no more human in nature than the earlier versions and does not embody any new features copied from real brains. It learns its games from scratch, with only the rules to go on, playing out games against itself repeatedly in order to find what works. This is not much like the way humans learn chess; I think you would probably die of boredom after a few hundred games, and even if you survived, without some instruction and guidance you would probably never learn to be a good player, let alone a superlative one. However, running through possible games in one’s mind may be quite like what a good human player does when trying to devise new strategies.
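
To make the self-play idea concrete, here is the principle reduced to its barest form: a toy value-learner for noughts and crosses that improves purely by playing against itself, with only the rules to go on. This is emphatically not AlphaZero’s actual method (which couples a deep network with Monte Carlo tree search); it is just a sketch of the principle, and every name in it is my own.

    # Toy self-play learner for noughts and crosses: a tabular value function
    # improved purely from the outcomes of games the program plays against itself.
    import random
    from collections import defaultdict

    LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

    def winner(board):
        for a, b, c in LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    values = defaultdict(float)   # how good a position has proved for the side that just moved
    EXPLORE, LEARN = 0.1, 0.3     # exploration rate and learning rate

    def choose_move(board, player):
        moves = [i for i, s in enumerate(board) if s == ' ']
        if random.random() < EXPLORE:
            return random.choice(moves)   # occasionally try something new
        # otherwise pick the move whose resulting position has worked best so far
        return max(moves, key=lambda m: values[board[:m] + player + board[m+1:]])

    def self_play_game():
        board, player, history = ' ' * 9, 'X', []
        while winner(board) is None and ' ' in board:
            m = choose_move(board, player)
            board = board[:m] + player + board[m+1:]
            history.append((board, player))
            player = 'O' if player == 'X' else 'X'
        w = winner(board)
        for position, mover in history:   # feed the final result back into every position visited
            target = 0.0 if w is None else (1.0 if mover == w else -1.0)
            values[position] += LEARN * (target - values[position])

    for _ in range(20000):   # "playing out games against itself repeatedly"
        self_play_game()

After enough games the table steers the program towards lines of play that have won before, without anyone ever telling it what a fork or a blocking move is.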

The key point for me is that although the new program is far more general in application, it still only operates in the well-defined and simple worlds provided by rule-governed games. To be anything like human, it needs to display the ability to deal with the heterogeneous and undefinable world of real life. That is still far distant (Hassabis himself has displayed an awareness of the scale of the problem, warning against releasing self-driving cars on to real roads prematurely), though I don’t altogether rule out the possibility that we are now moving perceptibly in the right direction.

Someone who might deny that is Gary Marcus, who in a recent Nautilus piece set out his view that deep learning is simply not enough. It needs, he says, to be supplemented by other tools, and in particular it needs symbol manipulation.

To me this is confusing, because I naturally interpret ‘symbol manipulation’ as being pretty much a synonym for Turing style computation. That’s the bedrock of any conventional computer, so it seems odd to say we need to add it. I suppose Marcus is using the word ‘symbol’ in a different sense. The ones and zeroes shuffled around by a Turing machine are meaningless to the machine. We assign a meaning to the input and devise the manipulations so that the output can be given an interpretation which makes it useful or interesting to us, but the machine itself knows nothing of that. Marcus perhaps means that we need a new generation of machines that can handle symbols according to their meanings.
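
To make that point concrete, here is a trivial illustration (my own, not one of Marcus’s): the same bit pattern read three different ways, none of which the machine itself knows anything about.

    bits = 0b01000001           # just sixty-five, as far as the machine is concerned
    print(bits)                 # 65  -- read as an integer
    print(chr(bits))            # 'A' -- read as a character code
    print(bits / 10, 'degrees') # 6.5 degrees -- read, say, as a scaled temperature

The manipulations are the same in every case; the meanings are entirely ours.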

If that’s it, then few would disagree that that is one of the ultimate aims. Those who espouse deep learning techniques merely think that those methods may in due course lead to a machine that handles meanings in the true sense; at some stage the system will come up with the unknown general strategy that enables it to grasp meaning and use symbols the way a human would. Marcus presumably thinks that is hopeless optimism, on the scale of those people who think any system that is sufficiently complex will naturally attain self-awareness.

Since we don’t have much of an idea how the miracle of meaning might happen, it is indeed optimistic to think we’re on the road towards it. How can a machine bootstrap itself into true ‘symbol manipulation’ without some kind of help? But we know that the human brain must have done just that at some stage in evolution – and indeed each of our brains seems to have done it again during our development. It has got to be possible. Optimistic yes, hopeless – maybe not.

15 thoughts on “AI turns the corner?”

  1. From his twitter feed:

    Symbol-manipulation = implementing abstract operations over variables, as in algebra, and using those operations to generalize to new instances. Solving y = x + 2 for new value of x, or comparing x = y, etc.

    The tweet continues, but that passage seems to be his full definition, with an example.

    To me it seems more like a Searlean symbol, one that inherently signifies something, rather than a Turingian formal symbol.
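
    A crude way to picture Marcus’s definition in code (my own toy contrast, not his): an operation defined over a variable generalises to any new instance, whereas a mapping learned only from examples reaches no further than what it has seen.

        def add_two(x):                # 'y = x + 2' as an operation over the variable x
            return x + 2

        learned = {1: 3, 2: 4, 3: 5}   # the same relation, memorised from three cases

        print(add_two(1000))           # 1002 -- generalises to an unseen value
        print(learned.get(1000))       # None -- nothing stored for this instance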

  2. Could this AI’s insight lead to its own intention or to the intention of the AlphaZero team…

  3. Yes, Peter, I think your interpretation is correct: by ‘symbolic’ is meant information to which ‘meaning’ has been assigned, in the sense that this information can be *communicated* between minds.

    Knowledge in current DL systems doesn’t have this quality – it’s just a ‘black box’, which has no meaning (is not understandable) to humans. Symbolic knowledge is information that is meaningful (explanatory) to us, by comparison to the non-explanatory knowledge of current DL systems.

  4. I looked up ‘hope’ and chose this definition, hope is expectant trust…
    …trusting, as the means to my meaning, is a formulated question for all of my senses to be the means to my meaning…

    That hearing seeing touching smelling tasting feeling thinking perceiving minding reproducing…are meaning…
    …so is this the question we can live with, is life and our environment our means for meaning…

    And then optimistically, what is meaning for…

  5. I’d have thought people would insist at this point it’d get to being a p-zombie. Like if the device can self reflect, it might get to the point where it might discuss insights into Shakespeare’s work that you’re not aware of – but people would, even as they say it might do so, also say it would do so as a p-zombie.

    But…actually I almost deleted saying that, because that sort of insight isn’t allowed to even be attributed to p-zombies, I guess. Does conversation fall short, because we start seeing what looks like insight from the machine – and so the old p-zombie call can’t be played?

  6. I looked up p-zombie, farther further hmmmm…

    Searching if Mr. Chalmers or Mr. Dennett, in their discussions and conversations have represented…
    …’here-now’ and/or ‘what one needs is in front of one’…

    and thanks Callan S.

  7. Most AI is “big-data” processing: using big-data technology to do lots of extrapolations on data by throwing CPU at it, the way NASA used to, for a fraction of the price. You might call it using a sledgehammer to crack a nut, or “unintelligent” intelligence. It’s also nothing new, technologically or otherwise. It’s just that economics makes it worthwhile now.

    Even so, it’s still generally impossible to handle large numbers of variables without a combinatorial explosion, which requires the “intelligent” machine to have a considerable human nudge or two. There are also connections made all over the place in data that are meaningless or coincidental, a by-product of the way the data was collected perhaps, or just freaks of the number system.
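
    To make that concrete, a toy sketch (mine, purely illustrative): generate enough variables of pure noise with few samples and some pair will correlate strongly by chance alone.

        import random

        n_vars, n_samples = 200, 20
        data = [[random.gauss(0, 1) for _ in range(n_samples)] for _ in range(n_vars)]

        def corr(x, y):
            mx, my = sum(x) / len(x), sum(y) / len(y)
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            vx = sum((a - mx) ** 2 for a in x) ** 0.5
            vy = sum((b - my) ** 2 for b in y) ** 0.5
            return cov / (vx * vy)

        best = max(abs(corr(data[i], data[j]))
                   for i in range(n_vars) for j in range(i + 1, n_vars))
        print(best)   # typically well above 0.7, though every variable is pure noise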

    Either way the “intelligent” machine doesn’t know the difference, so a few human dumboes have to make the executive decision to keep the meaningful stuff only.

    Marcus is right to point out that causal models are the only way of getting rid of the latter class of problem. Causal models will give the “hints” that humans provide and reduce the unnecessary correlations. But suggesting that the answer is that some symbols have inherent meaning … well, need more be said… Only in the world of AI craziness can someone be taken seriously when suggesting that syntax and semantics can be the same.

    But having said that, we have to remember that these guys are engineers. They don’t care about consciousness or how brains actually work. They just want to make money. And if there is a way of automating the hinting process – by using more syntax-only processes (maybe based upon micro-sampling) – they’ll shamelessly call it “causal analysis” without batting an eyelid, despite the evident absence of the same.

    JBD

  8. check out, sciam.com, at tech, “Computers Determine States of Consciousness”, seems relevant…

    Shouldn’t computers/AI also be tasked to determine their own state of consciousness…
    …as a fundamental in evolution…

  9. John Davey – “But having said that, we have to remember that these guys are engineers. They don’t care about consciousness or how brains actually work. They just want to make money.”

    You really want to dump on the whole profession of engineering? In my experience engineers are most interested in not only how things work, but in making things work. They tend not to be hampered by things like a lack of a complete formal theory. It reminds me a bit of how the brain actually works, through all kinds of practical shortcuts that make best use of limited resources even if that sometimes leaves some means through which the brain can be tricked by resourceful illusionists.

    But yes, I agree that causal analysis is a necessary part of real AI.

  10. Arnold: I’d reject p-zombies on metaphysically neutral grounds, b/c if there’s no qualia wouldn’t natural selection have made them physically different from us?

  11. I agree with you Sci, but being here for me, is learning more about our natures and understanding neutrally in life…

    A non-metaphysical nature is kind of like a passive or active or a negative or positive nature, where is neutrality…
    …it is fundamentally too late to position metaphysics, other than only, as neutral or qualia in nature…

  12. Stephen


    “You really want to dump on the whole profession of engineering? In my experience engineers are most interested in not only how things work, but in making things work.”

    In my experience, engineers are interested in the latter far more than the former. If they were interested principally in the former, they’d be scientists. And indeed they are paid to do the latter, and have neither the budgets nor the expertise for the former.

    JBD

  13. @ Arnold: I just meant no matter one’s philosophical position, the idea underlying p-zombies is they are like us but have no qualia…yet if there is no qualia to take advantage of why would we have these particular neuronal structures?

    So p-zombies aren’t a good proof against materialism, but more so they sidetrack us from considering the relevance of our neurosystem’s structure which any “ism”, even Idealism, has to give us an account of.

  14. Re “I naturally interpret ‘symbol manipulation’ as being pretty much a synonym for Turing style computation. That’s the bedrock of any conventional computer, so it seems odd to say we need to add it.”

    This is an issue that has, I think, confounded several analyses, including, possibly, the Penrose/Lucas argument. To set the stage, consider this parallel construction: “The brain is a chemical device, so it seems odd to say we need to add it.” Does that seem to be a convincing argument that we should have an innate understanding of chemistry?

    The issue is one of levels of abstraction and implementation, and a computer illustrates it well: from the quantum behavior of its components, to the function of individual transistors, to logic gates, to circuitry, to a finite-state machine, the implementation of a computer can be explained at many levels of abstraction, and that’s even before we layer on the multiple levels of abstraction that hardware architecture and software add (see, for example, the OSI model for communication), and without which a computer is just an expensive electrical heater.

    The higher levels of abstraction represent the emergent properties of the raw hardware, and when considering what a computer does, one should do so at the right level – it is not helpful, for example, to consider the quantum-mechanical view of a classical computer when programming it. And when an AI does gain an ability to understand the meaning of symbols, it will not be because, at a fundamental level, its electrons are doing things that we can understand in terms of symbol manipulation.
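
    A small illustration of the levels point, if it helps (my own toy example): the same addition described as logic-gate operations and as a single high-level statement.

        def full_adder(a, b, carry):
            s = a ^ b ^ carry                        # XOR gates
            carry_out = (a & b) | (carry & (a ^ b))  # AND/OR gates
            return s, carry_out

        def add_gates(x, y, width=8):
            result, carry = 0, 0
            for i in range(width):                   # ripple-carry, one bit at a time
                s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                result |= s << i
            return result

        print(add_gates(23, 42))   # 65, built out of gate-level operations
        print(23 + 42)             # 65, the same thing at a higher level of abstraction

    Nothing about the gate-level description tells you that ‘addition’ is going on; that reading belongs to the higher level.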
