Are we being watched? Over at Aeon, George Musser asks whether some AI could quietly become conscious without our realising it. After all, it might not feel the need to stop whatever it was doing and announce itself. If it thought about the matter at all, it might think it was prudent to remain unobserved. It might have another form of consciousness, not readily recognisable to us. For that matter, we might not be readily recognisable to it, so that perhaps it would seem to itself to be alone in a solipsistic universe, with no need to try to communicate with anyone.

There have been various scenarios about this kind of thing in the past which I think we can dismiss without too much hesitation. I don’t think the internet is going to attain self-awareness because, however complex it may become, it simply isn’t organised in the right kind of way. I don’t think any conventional digital computer is going to become conscious either, for similar reasons.

I think consciousness is basically an expanded development of the faculty of recognition. Animals have gradually evolved the ability to recognise very complex extended stimuli; in the case of human beings things have gone a massive leap further, so that we can recognise abstractions and generalities. This makes a qualitative change, because we are no longer reliant on what is coming in through our senses from the immediate environment; we can think about anything, even imaginary or nonsensical things.

I think this kind of recognition has an open-ended quality which means it can’t be directly written into a functional system; you can’t just code it up or design the mechanism. So until recently, no machines have been really good candidates. These days I think some AI systems are moving into a space where they learn for themselves in a way which may be supported by their form and the algorithms that back them up, but which does have some of the open-ended qualities of real cognition. My perception is that we’re still a long way from any artificial entity growing into consciousness; but it’s no longer a possibility which can be dismissed without consideration, so it’s a good time for George to be asking the question.

How would it happen? I think we have to imagine that a very advanced AI system has been set to deal with a very complex problem. The system begins to evolve approaches which yield results and it turns out that conscious thought – the kind of detachment from immediate inputs I referred to above – is essential. Bit by bit (ha) the system moves towards it.

I would not absolutely rule out something like that; but I think it is extremely unlikely that the researchers would fail to notice what was happening.

First, I doubt whether there can be forms of consciousness which are unrecognisable to us. If I’m right, consciousness is a kind of function which yields purposeful output behaviour, and purposefulness implies intelligibility: we would simply be able to see what it was up to. Some qualifications to this conclusion are necessary. We’ve already had chess AIs that play certain endgames in ways that don’t make much sense to human observers, even chess masters, and look like random flailing. We might get some patterns of behaviour like that. But the chess ‘flailing’ leads reliably to mate, which ultimately is surely noticeable. Another point to bear in mind is that our consciousness was shaped by evolution, and by the competition for food, safety, and reproduction. The supposed AI would have evolved its consciousness in response to completely different imperatives, which might well make some qualitative difference; the thoughts of the AI might not look quite like human cognition. Nevertheless I still think the intentionality of the AI’s outputs could not help but be recognisable. In fact the researchers who set the thing up would presumably have the advantage of knowing the goals which had been set.

Second, we are really strongly predisposed to recognising minds. Meaningless whistling of the wind sounds like voices to us; random marks look like faces; anything that is vaguely humanoid in form or moves around like a sentient creature is quickly anthropomorphised by us and awarded an imaginary personality. We are far more likely to attribute personhood to a dumb robot than dumbness to one with true consciousness. So I don’t think it is particularly likely that a conscious entity could evolve without our knowing it and keep a covert, wary eye on us. It’s much more likely to be the other way around: that the new consciousness doesn’t notice us at first.

I still think in practice that that’s a long way off; but perhaps the time to think seriously about robot rights and morality has come.


  1. Christophe Menant says:

    Assuming that the consciousness we are talking about is human consciousness, it is worth recalling that it contains self-consciousness: the capability to recognize oneself as an existing entity. Self-consciousness as an expanded development of the faculty of recognition has to explicitly include the subject carrying it (and it is the difficult part of the process).
    It is true that AI systems are moving into a space where they learn for themselves, but the ‘self’ of an artificial agent is not the self of a living entity. If the latter has to satisfy a ‘stay alive’ constraint, the former has only to comply with a material design. Such a difference in nature brings us back to our classics (the Turing test, the Chinese room argument, the symbol grounding problem), which can be addressed in terms of meaning generation (more on this at http://philpapers.org/rec/MENTTC-2).
    On the same thread, are we sure that human consciousness can be characterized as a kind of function which yields purposeful output behaviour? Such performance exists at the level of basic life, where purposeful behaviour is aimed at staying alive. Human consciousness brings in much more than that (http://philpapers.org/rec/MENPFA-3).
    Overall, I feel that an evolutionary approach based on the evolution of meaningful representations can bring a characterization of artificial agents vs animals and humans (http://philpapers.org/rec/MENCOI). And such a background leads quite naturally to a simple statement: human consciousness as we know it today needs the performance of life. So AI first has to access the performances of life in order to pretend to emulate human consciousness. The obstacle today for AI reaching human consciousness is then an understanding of the nature of life. Our real mastering of artificial life (which will come some day) will then probably by itself highlight concerns about the consequences of artificial consciousness.

  2. Mark S says:

    None of the leaders in AI, nor their grad students, think AI is close to AGI. Maybe in 100 years. All the stuff they do is not much different from what was done decades ago. Deep learning is “toy neural nets” with more labeled data to work with than before, cheap hardware to do matrix math (GPUs from Nvidia) and better regularization (e.g., via Hinton’s dropout technique). They are just curve-fitting/optimization techniques. Hence “supervised learning” has been a success in three areas. Unsupervised learning (unlabeled data) has not. Grad students spend nights figuring out how to initialize these techniques so they work well. Ask them how “intelligent” these systems are and they will laugh and roll their eyes.
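    The “curve fitting with dropout” picture Mark S describes can be made concrete with a toy sketch: a one-hidden-layer net fit to a sine curve by plain gradient descent, with inverted dropout on the hidden layer during training. Everything here (sizes, learning rate, keep-probability) is illustrative, not taken from any real framework.

    ```python
    import numpy as np

    # Toy "curve fitting": fit y = sin(x) with one tanh hidden layer.
    # Dropout (Hinton et al.) is just a random mask on hidden units
    # applied during training, switched off at test time.
    rng = np.random.default_rng(0)
    X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(X)

    H = 32                       # hidden units
    W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
    lr, p_keep = 0.05, 0.9       # learning rate, dropout keep-probability

    for step in range(2000):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        mask = rng.random(h.shape) < p_keep   # randomly silence units
        h_drop = h * mask / p_keep            # inverted-dropout scaling
        pred = h_drop @ W2 + b2
        err = pred - y                        # gradient of squared error
        # backpropagation written out by hand
        gW2 = h_drop.T @ err / len(X)
        gb2 = err.mean(0)
        gh = (err @ W2.T) * mask / p_keep * (1 - h**2)
        gW1 = X.T @ gh / len(X)
        gb1 = gh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    # Test time: no dropout, just the fitted curve.
    mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
    print(round(mse, 3))
    ```

    Nothing in the loop is more than iterative least-squares minimisation, which is the point of the comment: impressive behaviour, but it is optimization, not understanding.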

  3. Sci says:

    Couldn’t you ask his questions about any physical system?

  4. john davey says:


    Couldn’t you ask his questions about any physical system?

    Could there be a small man in the middle of Jupiter whose name is Geoff and who eats mice? No? Can anybody disprove it?

    Bertrand Russell had an answer for questions like this: when addressing the question “is there a god?” he conjectured that there might be a huge intergalactic teapot controlling world events. As it couldn’t be disproved, it was (and is) a perfectly ‘legitimate’ hypothesis.

    The question is – is there any reason to think the universe is controlled by a gigantic teapot ? No.

    Could there be a small man in the middle of Jupiter whose name is Geoff and who eats mice? Yes, but we don’t have a reason to believe it to be so.

    Could a cup of tea be conscious? Yes, but we don’t have a reason to believe it to be so.

    Could a computer be conscious ? Yes, to the same extent as there might be a small man who lives on Jupiter called Geoff and who eats mice.


  5. Sci says:

    @ John Davey: Heh, I’m with you. I’m still trying to figure out how hard computationalism got traction.

    I understand the idea that looking at how a computer does things *might* give us clues about how the brain does things, but the idea that a program is conscious or people can upload their minds…I’m still bewildered this kind of thing is taken seriously.

    OTOH, I’m not saying you need a soul to be conscious, a physical replication of our brains at the correct (currently unknown) level of organization seems fine.

  6. Hunt says:

    @ John Davey: Heh, I’m with you. I’m still trying to figure out how hard computationalism got traction.

    That’s easy. Computers are able to manipulate symbols, even if in a superficial (some would say fake) way, which is beyond most other animals. Even animals that master rudimentary symbol manipulation, like other apes and dolphins, are easy to discount, and frankly most laypersons don’t even know about them. So computers can tally vote counts and process tax forms. For most people, that means they’re intelligent.

  7. Hunt says:

    I agree with Musser: I think consciousness was specifically evolved and selected, not just manifested as an epiphenomenal byproduct. So even though phil zombies are an interesting philosophical construct, ultimately I think they’ll be found to be impossible. Consciousness serves an actual, critical purpose. Likewise, this is also the reason I doubt machine consciousness will “just happen”, and why I think IIT is an interesting, but wrong, theory. The tacit assumption there is that consciousness is, again, a byproduct, this time of complexity. So IIT is also inherently epiphenomenalist.

  8. John Davey says:


    OTOH, I’m not saying you need a soul to be conscious, a physical replication of our brains at the correct (currently unknown) level of organization seems fine.

    Yes – artificial consciousness is a certainty within the next fifty years, but it’ll have nothing to do with the gargantuan gravy train of AI. It will be in biology labs where tissue synthesis will bring about the necessary causal conditions. Is that truly “artificial” ? … I think so.

    I’m not convinced that computers will demonstrate anything about brain function. Firstly, a computer is a defined object: there is nothing to be found out about it, unlike a brain. Computers are not objects of scientific enquiry. They may be usable as tools in some blunt-edged way, but it’s unlikely that state machines would be as effective or as quick as actual physical models, which in this current world of micro-fibres are feasible.

    In short, stick ’em in the bin in this area of enquiry.

  9. Witness: 14 March 2016 – Sakeel says:

    […] A long-time component to my cyberpunk contemplation, Aeon explores AI becoming conscious without our knowing it, and Conscious Entities rebuts. […]
