Are robots people or people robots?

I must admit I generally think of the argument over human-style artificial intelligence as a two-sided fight. There are those who think it’s possible, and those who think it isn’t. But a chat I had recently made it clear that there are really more positions than that, in particular among those who believe we shall one day have robot chums.

The key difference I have in mind is over whether there really is consciousness at all, or at least whether there’s anything special about it.

One school of thought says that there is indeed a special faculty of consciousness, but that machines of sufficient complexity will eventually have it too. We may not yet have all the details of how this thing works; maybe we even need some special new secret. But one thing is perfectly clear: there’s no magic involved, nothing outside the normal physical account, and in fact nothing that isn’t ultimately computable. One day we will be able to build into a machine all the relevant qualities of a human mind. Perhaps we’ll do it by producing an actual direct simulation of a human brain, perhaps not; the point is, when we switch on that ultimate robot, it will have feelings and qualia, it will have moral rights and duties, and it will perceive itself as a real, existing personality, just as we do.

The second school of thought agrees that we shall be able to produce a robot that looks and behaves exactly like a human being. But that robot will not have qualia or feelings or free will or any of the rest of it, because in reality human beings don’t have them either! That’s one of the truths about ourselves that has been helpfully revealed by the progress of AI: all those things are delusions and always have been. Our sense that we have a real self, that there is phenomenal experience, and that we somehow have a special kind of agency is just a complicated by-product of the way we’re organised.

Of course we could split the sceptics too, between those who think that consciousness requires a special spiritual explanation, or is inexplicable altogether, and those who think it is a natural feature of the world, just not computational or not explained by any properties of the physical world known so far. There is clearly some scope for discussion between the former kind of believer and the latter kind of sceptic because they both think that consciousness is a real and interesting feature of the world that needs more explanation, though they differ in their assumptions about how that will turn out. Although there’s less scope for discussion, there’s also some common ground between the two other groups because both basically believe that the only kind of discussion worth having about consciousness is one that clarifies the reasons it should be taken off the table (whether because it’s too much for the human mind or because it isn’t worthy of intelligent consideration).

Clearly it’s possible to take different views on particular issues. Dennett, for example, thinks qualia are simply nonsense and that the best possible thing would be to stop even talking about them, while he regards the ability of human beings to deal with the Frame Problem as a real and interesting capacity that robots lack, but could and will acquire once it has been clarified sufficiently.

I find it interesting to speculate about which camp Alan Turing would have joined: did he think that humans had a special capacity which computers could one day share, or did he think that the vaunted consciousness of humans turned out to be nothing more than the mechanical computational abilities of his machines? It’s not altogether clear, but I suspect he was of the latter school of thought. He notes that the specialness of human beings has never really been proved; and a disbelief in the specialness of consciousness might help explain his caginess about answering the question “can machines think?”. He preferred to put the question aside: perhaps that was because he would have liked to answer: yes, machines can think, but only so long as you realise that ‘thinking’ is not the magic nonsense you take it to be…

5 thoughts on “Are robots people or people robots?”

  1. What is consciousness held to be (by some/many)? I mean, if you are asleep or knocked out, you are unconscious – that’s pretty mundane – did its opposite, consciousness, somehow gain super special qualities?

    I will grant that many animals see a mirror and think it’s another of their species, while some monkeys will recognise that it’s a reflection of themselves (and even use it to groom) – treating that recognition as a part of consciousness, I would say it makes consciousness something notably different from what a lot of other animals have. So I’d grant it’s special in that regard (and other regards like that). But is it being taken as super special by some? And in what way?

  2. Is it possible for humans to think without language?

    A couple of days ago, I learnt in the news that a couple was arrested, somewhere in the States, for keeping their four children locked in a house for years, treated like animals.

    None of them had a language, and they communicated with each other by grunting or growling. But they managed to survive, so they must have been thinking somehow, in an inner speechless fashion.

    What about animals, do they think?

    When comparing humans and machines on thinking skills, I always assume that language functions are taken for granted for both… well, machines cannot think without a language (symbolic information coding), at least not digital ones; analogue cases would be too simple.

    So, when animals or humans think without a language, how does that work?

    Note that I have disregarded phenomenal aspects on purpose.

  3. Maybe it’s a bit like when a deaf person speaks or makes noises without language, from the perspective of the deaf person? Certainly sounds would be made.

    I think it’s trickier with humans – I think we have architecture involved that makes us end up not just thinking, but thinking about thinking (though the ultra-recursive step of thinking about how we think about thinking is not architecturally native to us). I would suspect those poor children, especially given more idle time rather than fighting for their lives, would end up reflecting upon their growls, no doubt refining each growl so as to more efficiently get the others to respond as they would expect, eventually shaping their own language – one we could learn.

  4. “…but thinking about thinking (though the ultra-recursive step of thinking about how we think about thinking is not architecturally native to us)”

    Yes, would that be the result of an algorithm implementation? Is it possible to have a machine with such HW/SW that it eventually can think about its own thinking? (Apart from Blade Runner.)

    This goes far beyond resolving the Frame Problem. Could a computer be introspective? In a weak, engineering sense something like it is easy to sketch, as below; whether that would count as real introspection is exactly the question.
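    Here is a minimal toy sketch of that weak sense, in Python. Everything in it – the names, the greedy strategy, the log format – is hypothetical, purely for illustration: an object-level routine records a trace of its own decisions, and a meta-level routine then reads that trace, rather than the world, and comments on it.

        # Toy sketch of "weak" machine introspection: the object level
        # solves a problem and logs its own decisions; the meta level
        # then inspects that log rather than the world.
        # All names here are hypothetical, for illustration only.

        def solve(numbers, target, trace):
            """Object level: greedily pick numbers summing toward target,
            logging each decision to the shared trace."""
            total, picked = 0, []
            for n in sorted(numbers, reverse=True):
                if total + n <= target:
                    total += n
                    picked.append(n)
                    trace.append(f"picked {n}, running total {total}")
                else:
                    trace.append(f"rejected {n}, would overshoot {target}")
            trace.append(f"finished with total {total}")
            return total, picked

        def reflect(trace, target, total):
            """Meta level: the program 'thinks about its thinking' by
            reading its own decision log and judging its strategy."""
            rejections = sum("rejected" in step for step in trace)
            report = [f"I made {len(trace) - 1} decisions; {rejections} were rejections."]
            if total < target:
                report.append("I fell short of the target; a different ordering might do better.")
            else:
                report.append("My greedy strategy worked on this input.")
            return report

        trace = []
        total, picked = solve([5, 3, 8, 2, 7], target=12, trace=trace)
        for line in reflect(trace, target=12, total=total):
            print(line)

    Of course, the meta level here only reads a log the programmer chose to keep, so whether scaling this kind of loop up could ever amount to genuine introspection is precisely the open question.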
