A blast of old-fashioned optimism from Owen Holland: let’s just build a conscious robot!

It’s a short video so Holland doesn’t get much chance to back up his prediction that if you’re under thirty you will meet a conscious robot. He voices feelings which I suspect are common on the engineering and robotics side of the house, if not usually expressed so clearly: why don’t we just get on and put a machine together to do this? Philosophy, psychology, all that airy fairy stuff is getting us nowhere; we’ll learn more from a bad robot than twenty papers on qualia.

His basic idea is that we’re essentially dealing with an internal model of the world. We can now put together robots with an increasingly good internal modelling capability (and we can peek into those models); why not do that and then add new faculties and incremental improvements till we get somewhere?

Yeah, but. The history of AI is littered with projects that started like this and ran into the sand. In particular the idea that it’s all about an internal model may be a radical mis-simplification. We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model? In general our conscious experience is complex and elusive, and cannot accurately be put on a screen or described on a page (though generations of novelists have tried everything they can think of).

The danger when we start building is that the first step is wrong and already commits us to a wrong path. Maybe adding new abilities won’t help. Perhaps our ability to model the world is just one aspect of a deeper and more general faculty that we haven’t really grasped yet; building in a fixed spatial modeller might turn us away from that right at the off. Instead of moving incrementally towards consciousness we might end up going nowhere (although there should be some pretty cool robots along the way).

Still, without some optimism we’ll certainly get nowhere anyway.


  1. Lloyd Rice says:

    Hear, hear. I say Owen is absolutely right. He may be understating the issues in building a world model that includes a thorough semantic analysis of the perceived content. And Peter is quite correct that such semantics needs to include a good understanding of self. Certainly self is a major part of the perceived world. But I see these as technical hurdles, toward which rapid progress is underway. I’m not absolutely sure that those under thirty will meet one, but I think there’s a pretty good chance of it.

    Current visual analysis is just barely able to identify objects in a general sense, i.e. in any environment, conditions, etc. But we screw up such things ourselves, so perfection is not the key here. Rather, “good enough” will do.

    And I see no issue in the idea that a good semantic analysis would be able to contemplate itself, its role in the world, etc. Now that’s a long way from Siri, which doesn’t know much about itself. But hey, it’s just a cell phone. It will take a few more deep networks than that.

  2. Jorge says:

    “We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model?”

    If you’re going to ‘evolve’ consciousness through iterative training of a backpropagating neural network (like what was used to have AlphaGo teach itself Go) then clearly what you want is a value function that rewards better models of theory of mind. You get that by pitting multiple AIs against each other in increasingly complicated social contexts (read: political games).

    This hypothesis is based on the idea that consciousness (or at the very least its prerequisite neuronal correlates) evolved due to the human need to outcompete conspecifics, namely to deceive and trick each other.
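    A minimal sketch of the kind of setup Jorge describes (purely illustrative; the agents, the game, and the scoring are my own assumptions, not anything from AlphaGo): two agents repeatedly choose moves, and each is scored on how well its internal model of the opponent predicts the opponent's actual move, a crude stand-in for a theory-of-mind reward.

```python
# Toy illustration: reward an agent for modelling its opponent.
import random

random.seed(0)

class Agent:
    def __init__(self):
        self.opponent_history = []  # opponent moves observed so far

    def predict_opponent(self):
        # Naive opponent model: predict the opponent's most frequent past move.
        if not self.opponent_history:
            return random.choice([0, 1])
        return max([0, 1], key=self.opponent_history.count)

    def act(self):
        # Here play is random; a trained agent would act from a learned policy.
        return random.choice([0, 1])

a, b = Agent(), Agent()
score_a = score_b = 0
for _ in range(1000):
    move_a, move_b = a.act(), b.act()
    # Each agent is rewarded when its model of the other is correct.
    score_a += a.predict_opponent() == move_b
    score_b += b.predict_opponent() == move_a
    a.opponent_history.append(move_b)
    b.opponent_history.append(move_a)

print(score_a, score_b)  # roughly 500 each: random play defeats naive modelling
```

    In the scheme Jorge sketches, these prediction scores would feed a value function for gradient-based training, and the "social contexts" would be far richer than a two-move game.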

  3. VicP says:

    Jorge, Towards the end of the video he says something to the effect that as they add more functionality to the robot they will be able to peer into its inner world and see what it is: thinking? Sounds like there is an issue of trust here, that if a human creates it, it will be obedient enough to tell us the truth about: itself?…or us?

    Speaking of trust, imagine all of those nice congenial folks in the audience? What would happen if they mysteriously got locked in there?…for 1 hour?..
    8 hours?…days?…weeks?…months?
    Those issues of trust, group behavior, cooperation, deception, warfare would all be on display.

  4. SelfAwarePatterns says:

    I like his approach. It reminds me of something a Google AI engineer said when asked about “real intelligence”, that the question missed the point, that they’d keep adding functionality until we decided there was a mind there. I particularly like that they had the robot simulate an action before it took it. In other words, they gave it an incipient imagination.

    My only concern is that they may be taking too big a bite aiming for human-level consciousness. Maybe they should first build a robot with the ability to navigate the world at least as well as a zebrafish. They could then graduate up to mouse-level intelligence, dog level, etc.

  5. VicP says:

    It is really more of a curiosity, and the audience displayed the same fascination as at a magic show, as opposed to scientific discovery. No knock against magic either.

    But when is it really considered a high level of intelligence? Perhaps when it shows leadership skills? Gets nominated for public office or starts its own church?

  6. Peter says:

    When it delivers a talk about consciousness, maybe 😉

  7. David Xanatos says:

    All we’re missing is that we first need to have the robot model itself. Only then can we have the robot model its model of itself, modeling the world around it. Nothing conscious happens without self-awareness. The robot needs to know itself, recognize itself, as an entity separate from its environment. We can’t just program it to say “I”, it has to understand what “I” is. Until we do that, all efforts are stuck in the sand.

  8. vicp says:

    ……and starts its own blog.

    Actually, thinking about Graziano, who says our own machinery can recognize another consciousness and sometimes falsely assign it to a ventriloquist’s puppet, or in this case a robot.

    It’s all in the timing or rubber bands in this case.

  9. Tom Clark says:

    Holland says

    “In certain states of mind [e.g., dreaming, drugs, meditation] this imagined world [of experience] can seem as real as the real world…but, there’s a twist to all this: the real world is just as fake as the imagined world and your real self is just as fake as your imagined self. They are both just models…And the real world you see is not real – it’s your brain’s model of it.”

    As Thomas Metzinger would put it, we’re always stuck in our ego tunnels. We commonsensically understand dreams as being a model that we didn’t know was a model while we were dreaming, since when we awake we realize that there was no world corresponding to what we were dreaming about. But to understand and fully grasp that waking experience is equally a model, one from which we cannot awake, is a pretty mind-blowing idea that I don’t think most folks are aware of. Which is why he advised that the bar was open in the interval…

    Of course the real (mind-independent) world is only and ever known via models, whether in terms of qualities, concepts, and/or numbers, so to say that what we experience is “fake” isn’t quite right (what’s fake are things like illusions). And since experience itself is untranscendably real for us, there’s no danger of us not taking the model seriously, of dismissing it as fake: it’s all we’ve got.

  10. SelfAwarePatterns says:

    Very well said Tom!

    I’m actually more and more starting to see consciousness as a simulation engine, an engine that uses perceptual models as its inputs and outputs. Under that idea, when we’re dreaming, the engine is running, but its access to those perceptual models is compromised, which is why most dreams are nonsensical.

  11. Michael Murden says:

    The machines that have beaten the top humans at Go, Chess and Jeopardy don’t think about the games the way humans do. Similarly, airplanes don’t fly the way birds or bees fly, but they fly. I don’t see any reason to assume machines that exhibit general intelligence will think anything like the way humans think. Given that we merely infer the consciousness of other humans from their similarity to ourselves and our certainty that we are conscious, we have to at least consider that the reliability of our inferences regarding the consciousness or lack thereof of inhuman-looking entities may be suspect. If, as has been argued in this space before, our ability to create theories of mind evolved to manage social interactions with other humans and was subsequently adapted (exapted?) for introspection, then we should not give any more credence to our guesses about our own minds than to our guesses about the minds of other people.

  12. David Bailey says:

    Douglas Lenat decided decades and decades ago to produce an intelligent system by representing really large quantities of human knowledge in first order predicate calculus (a logic that deals with assertions such as “If X is the parent of Y and X is female, then X is the mother of Y”).
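    Bailey’s example rule can be sketched as a one-rule forward chainer (a toy illustration only; Cyc’s actual language, CycL, is far richer, and the facts here are invented):

```python
# Facts as tuples; the rule: if X is the parent of Y and X is female,
# then X is the mother of Y. (Toy data, nothing to do with Cyc itself.)
facts = {("parent", "marge", "bart"), ("female", "marge")}

def apply_mother_rule(facts):
    # Forward chaining: derive every conclusion the rule licenses.
    derived = set(facts)
    for fact in facts:
        if fact[0] == "parent":
            _, x, y = fact
            if ("female", x) in facts:
                derived.add(("mother", x, y))
    return derived

facts = apply_mother_rule(facts)
print(("mother", "marge", "bart") in facts)  # True
```

    Scaling this style of explicit rule-and-fact representation to millions of assertions is essentially what Lenat’s project attempted.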

    Incredibly, he is still at it, but it doesn’t seem to be transforming the world.

    I have found the failure (more or less) of AI really fascinating, and I think it tells us something deep about consciousness – at least Roger Penrose is on my side!

  13. Owen Holland says:


    Many thanks for putting on your generally excellent site this 10 minute video of a 5 year old talk to a general audience about part of my previous work (the robot video is from 2008, and the robot itself is in the Science Museum, London). However, I’m not too happy that you use the video to summarise and criticise (or even parody) my position – I’m afraid your comments have fallen well below your usual very high standard.

    “…psychology…all this airy fairy stuff” – I have a degree in psychology and used to teach it at a respectable university, and I partnered with psychologists at another respectable university for the UK research council funded project that developed the robot and the accompanying theory. I certainly don’t think psychology is airy fairy stuff, except for the evidence-free fringes.

    “His basic idea is that we’re essentially dealing with an internal model of the world.” No, my basic idea in that project was very similar to Thomas Metzinger’s – in an embodied system in which there is a self model, a model of the world, and models of their interactions, the key aspect relevant to consciousness centres around the situation of the self model, and that is something we can examine using our technology.

    “Yeah, but. The history of AI is littered with projects that started like this and ran into the sand.” What has this comment got to do with my project? I don’t think intelligence and consciousness have much more in common than their dependence on modelling, and perhaps on some particular models, and I didn’t mention intelligence, natural or artificial, in the talk. And of course, many projects in all disciplines, including philosophy, run into sand, but at least they try to do something that will then be exposed to the Darwinian forces of which Dennett is a keen advocate in his latest book (I do read some philosophers!). Perhaps you were thinking of CYC, in which the simple addition of ‘facts’ was supposed to reach some threshold that would cause a transition to true intelligence? What I proposed adding in the talk were more functions than features, and many of these functions have long been associated with consciousness – for example, I mentioned language and inner speech.

    And so on, but never mind. At 70, I can’t wait half a century for the meeting I mentioned in the talk, and so in the last couple of weeks I’ve been setting up Conscious Machines Ltd (interim and very gung ho web page consciousmachines.net) in the Technology Business Incubator of the Bristol Robotics Laboratory, the biggest robotics centre in the UK. (The company’s project is similar but far from identical to the project in the talk.) So just as a huge amount of AI research has gone private in the last couple of years, now AC (or MC as I prefer to call it) is following. Here’s another prediction: within three years or less, all the major AI businesses will have machine consciousness teams working with real or virtual robots. Perhaps this will offer some highly paid career opportunities for philosophers?

    Keep up the good work – with rare exceptions, your site is one of the most informative and thoughtful on the net, and I’ll remain a faithful reader.

  14. Peter says:


    Many thanks for responding, and sorry if my commentary was slapdash. It may be that I still don’t take videos seriously enough (I haven’t been featuring them regularly for long).

    I certainly underestimated the age of your remarks; I believe I picked up the video from a recent tweet which perhaps left the impression it was itself recent.

    Anyway, very good to hear that you’re moving forward and I wish you every success. Maybe I’ll be able to do you better justice another time?

  15. arnold says:

    In meta-physics: We know Earth is our Being here … Can we know what Being here is…
