A blast of old-fashioned optimism from Owen Holland: let’s just build a conscious robot!

It’s a short video, so Holland doesn’t get much chance to back up his prediction that if you’re under thirty you will meet a conscious robot. He voices feelings which I suspect are common on the engineering and robotics side of the house, if not usually expressed so clearly: why don’t we just get on and put a machine together to do this? Philosophy, psychology, all that airy-fairy stuff is getting us nowhere; we’ll learn more from a bad robot than from twenty papers on qualia.

His basic idea is that we’re essentially dealing with an internal model of the world. We can now put together robots with an increasingly good internal modelling capability (and we can peek into those models); why not do that and then add new faculties and incremental improvements till we get somewhere?

Yeah, but. The history of AI is littered with projects that started like this and ran into the sand. In particular the idea that it’s all about an internal model may be a radical mis-simplification. We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model? In general our conscious experience is complex and elusive, and cannot accurately be put on a screen or described on a page (though generations of novelists have tried everything they can think of).

The danger when we start building is that the first step is wrong and already commits us to a wrong path. Maybe adding new abilities won’t help. Perhaps our ability to model the world is just one aspect of a deeper and more general faculty that we haven’t really grasped yet; building in a fixed spatial modeller might turn us away from that right at the off. Instead of moving incrementally towards consciousness we might end up going nowhere (although there should be some pretty cool robots along the way).

Still, without some optimism we’ll certainly get nowhere anyway.

12 Comments

  1. Lloyd Rice says:

    Hear, hear. I say Owen is absolutely right. He may be understating the issues in building a world model that includes a thorough semantic analysis of the perceived content. And Peter is quite correct that such semantics needs to include a good understanding of self. Certainly self is a major part of the perceived world. But I see these as technical hurdles, and progress toward clearing them is being made rapidly. I’m not absolutely sure that those under thirty will meet one, but I think there’s a pretty good chance of it.

    Current visual analysis is just barely able to identify objects in a general sense, i.e. in any environment, under any conditions, etc. But we screw up such things ourselves. So perfection is not the key here; rather, “good enough” will do.

    And I see no issue in the idea that a good semantic analysis would be able to contemplate itself, its role in the world, etc. etc. Now that’s a long way from Siri not knowing much about itself. But hey, it’s just a cell phone. It will take a few more deep networks than that.

  2. Jorge says:

    “We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model?”

    If you’re going to ‘evolve’ consciousness through iterative training of a backpropagating neural network (like what was used to have AlphaGo teach itself Go) then clearly what you want is a value function that rewards better models of theory of mind. You get that by pitting multiple AIs against each other in increasingly complicated social contexts (read: political games).

    This hypothesis is based on the idea that consciousness (or at the very least its prerequisite neuronal correlates) evolved due to the human need to outcompete conspecifics, namely to deceive and trick each other.
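
A rough sketch of the kind of setup Jorge describes might look like the following. This is only an illustrative toy, not anything from Holland’s talk: two hand-rolled agents play a simple cooperate/deceive game, and part of each agent’s reward comes from correctly predicting the other’s move, a crude stand-in for a value function that rewards theory of mind. The payoffs, learning rule, and all names are assumptions made up for the sketch; real AlphaGo-style training would use deep networks and self-play at a very different scale.

```python
import random

ACTIONS = ["cooperate", "deceive"]
EPSILON = 0.1   # exploration rate
ALPHA = 0.1     # learning rate

class Agent:
    def __init__(self):
        # Running value estimates for the agent's own actions, and for its
        # guesses about what the other agent will do (the "theory of mind" part).
        self.act_value = {a: 0.0 for a in ACTIONS}
        self.guess_value = {a: 0.0 for a in ACTIONS}

    def _pick(self, values):
        # Epsilon-greedy choice over a value table.
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: values[a])

    def step(self):
        action = self._pick(self.act_value)
        guess = self._pick(self.guess_value)   # prediction of the opponent's move
        return action, guess

    def learn(self, action, guess, payoff, guess_bonus):
        # Bandit-style updates: the action is judged by the total reward,
        # the guess only by whether it was right.
        self.act_value[action] += ALPHA * ((payoff + guess_bonus) - self.act_value[action])
        self.guess_value[guess] += ALPHA * (guess_bonus - self.guess_value[guess])

def game_payoff(my_action, their_action):
    # Toy social game: deceiving a cooperator pays best, being deceived pays worst.
    table = {("cooperate", "cooperate"): 2, ("cooperate", "deceive"): 0,
             ("deceive", "cooperate"): 3, ("deceive", "deceive"): 1}
    return table[(my_action, their_action)]

a, b = Agent(), Agent()
for _ in range(5000):
    act_a, guess_a = a.step()
    act_b, guess_b = b.step()
    # Part of each reward is for correctly modelling the other agent.
    bonus_a = 1.0 if guess_a == act_b else 0.0
    bonus_b = 1.0 if guess_b == act_a else 0.0
    a.learn(act_a, guess_a, game_payoff(act_a, act_b), bonus_a)
    b.learn(act_b, guess_b, game_payoff(act_b, act_a), bonus_b)

print("A's preferred action:", max(ACTIONS, key=lambda x: a.act_value[x]))
print("A's prediction of B: ", max(ACTIONS, key=lambda x: a.guess_value[x]))
```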

  3. VicP says:

    Jorge, towards the end of the video he says something to the effect that as they add more functionality to the robot they will be able to peer into its inner world and see what it is: thinking? Sounds like there is an issue of trust here, that if a human creates it, it will be obedient enough to tell us the truth about: itself?…or us?

    Speaking of trust, imagine all of those nice congenial folks in the audience. What would happen if they mysteriously got locked in there? For 1 hour? 8 hours? Days? Weeks? Months?
    Those issues of trust, group behavior, cooperation, deception and warfare would all be on display.

  4. SelfAwarePatterns says:

    I like his approach. It reminds me of something a Google AI engineer said when asked about “real intelligence”, that the question missed the point, that they’d keep adding functionality until we decided there was a mind there. I particularly like that they had the robot simulate an action before it took it. In other words, they gave it an incipient imagination.

    My only concern is that they may be taking too big a bite aiming for human-level consciousness. Maybe they should first build a robot with the ability to navigate the world at least as well as a zebrafish. They could then graduate up to mouse-level intelligence, dog level, etc.
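
The “simulate an action before taking it” idea that SelfAwarePatterns likes can be sketched very simply: an agent holds an internal model of its world, rolls each candidate action forward in the model, and only then commits to the one whose predicted outcome scores best. The grid world, obstacle positions and scoring below are purely illustrative assumptions, not Holland’s actual robot architecture.

```python
# Toy "imagination" loop: try actions in an internal model before acting.
GOAL = (4, 4)
OBSTACLES = {(2, 2), (3, 2), (2, 3)}
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def simulate(state, move):
    """Internal model: predict the next state without actually moving."""
    x, y = state
    dx, dy = MOVES[move]
    nxt = (x + dx, y + dy)
    if nxt in OBSTACLES or not (0 <= nxt[0] <= 4 and 0 <= nxt[1] <= 4):
        return state  # the model predicts a blocked move leaves us where we are
    return nxt

def score(state):
    """How good does a predicted state look? Closer to the goal is better."""
    return -(abs(state[0] - GOAL[0]) + abs(state[1] - GOAL[1]))

def choose_action(state):
    # The imagination step: evaluate every action in the model, act on the best.
    return max(MOVES, key=lambda m: score(simulate(state, m)))

state = (0, 0)
path = [state]
for _ in range(20):
    if state == GOAL:
        break
    state = simulate(state, choose_action(state))  # in a real robot, the motor step
    path.append(state)
print(path)
```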

  5. VicP says:

    It is really more of a curiosity, and the audience displayed the same fascination as when watching a magic show, as opposed to witnessing scientific discovery. No knock against magic either.

    But when is it really considered a high level of intelligence? Perhaps when it shows leadership skills? Gets nominated for public office or starts its own church?

  6. Peter says:

    When it delivers a talk about consciousness, maybe 😉

  7. David Xanatos says:

    All we’re missing is that we first need to have the robot model itself. Only then can we have the robot model its model of itself, modeling the world around it. Nothing conscious happens without self-awareness. The robot needs to know itself, recognize itself, as an entity separate from its environment. We can’t just program it to say “I”, it has to understand what “I” is. Until we do that, all efforts are stuck in the sand.

  8. vicp says:

    ……and starts its own blog.

    Actually, thinking about Graziano, who says our own machinery can recognize another consciousness and sometimes falsely assign it to a ventriloquist puppet, or in this case a robot.

    It’s all in the timing or rubber bands in this case.

  9. Tom Clark says:

    Holland says

    “In certain states of mind [e.g., dreaming, drugs, meditation] this imagined world [of experience] can seem as real as the real world…but, there’s a twist to all this: the real world is just as fake as the imagined world and your real self is just as fake as your imagined self. They are both just models…And the real world you see is not real – it’s your brain’s model of it.”

    As Thomas Metzinger would put it, we’re always stuck in our ego tunnels. We commonsensically understand dreams as being a model that we didn’t know was a model while we were dreaming, since when we awake we realize that there was no world corresponding to what we were dreaming about. But to understand and fully grasp that waking experience is equally a model, one from which we cannot awake, is a pretty mind-blowing idea that I don’t think most folks are aware of. Which is why he advised that the bar was open in the interval…

    Of course the real (mind-independent) world is only and ever known via models, whether in terms of qualities, concepts, and/or numbers, so to say that what we experience is “fake” isn’t quite right (what’s fake are things like illusions). And since experience itself is untranscendably real for us, there’s no danger of us not taking the model seriously, of dismissing it as fake: it’s all we’ve got.

  10. SelfAwarePatterns says:

    Very well said, Tom!

    I’m actually more and more starting to see consciousness as a simulation engine, an engine that uses perceptual models as its inputs and outputs. Under that idea, when we’re dreaming, the engine is running, but its access to those perceptual models is compromised, which is why most dreams are nonsensical.

  11. Michael Murden says:

    The machines that have beaten the top humans at Go, Chess and Jeopardy don’t think about the games the way humans do. Similarly, airplanes don’t fly the way birds or bees fly, but they fly. I don’t see any reason to assume machines that exhibit general intelligence will think anything like the way humans think. Given that we merely infer the consciousness of other humans from their similarity to ourselves and our certainty that we are conscious, we have to at least consider that the reliability of our inferences regarding the consciousness or lack thereof of inhuman-looking entities may be suspect. If, as has been argued in this space before, our ability to create theories of mind evolved to manage social interactions with other humans and was subsequently adapted (exapted?) for introspection, then we should not give any more credence to our guesses about our own minds than to our guesses about the minds of other people.

  12. David Bailey says:

    Douglas Lenat decided decades and decades ago to produce an intelligent system by representing really large quantities of human knowledge in first order predicate calculus (a logic that deals with assertions such as “If X is the parent of Y and X is female, then X is the mother of Y”).

    Incredibly, he is still at it, but it doesn’t seem to be transforming the world.

    I have found the failure (more or less) of AI really fascinating, and I think it tells us something deep about consciousness – at least Roger Penrose is on my side!
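
For readers who haven’t met first-order rules of the sort Bailey quotes, here is a toy illustration: a tiny fact base and a naive forward-chaining loop that derives “mother” facts from “parent” and “female” facts. The names and the Python encoding are assumptions made up for the example; it is only meant to show the flavour of the knowledge-representation approach, not Lenat’s Cyc or its CycL language.

```python
# Facts are tuples: ("parent", X, Y) means X is the parent of Y, etc.
facts = {("parent", "Marge", "Bart"),
         ("parent", "Homer", "Bart"),
         ("female", "Marge")}

def mother_rule(facts):
    """If X is the parent of Y and X is female, derive that X is the mother of Y."""
    derived = set()
    for rel, x, y in [f for f in facts if f[0] == "parent"]:
        if ("female", x) in facts:
            derived.add(("mother", x, y))
    return derived

# Forward chaining: keep applying the rule until nothing new is derived.
new = mother_rule(facts)
while not new <= facts:
    facts |= new
    new = mother_rule(facts)

print(("mother", "Marge", "Bart") in facts)  # True
```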
