Scott Bakker has a thoughtful piece which suggests we should be much more worried than we currently are about AIs that pass themselves off, superficially, as people.  Of course this is a growing trend, with digital personal assistants like Alexa or Cortana, which interact with users through spoken exchanges, enjoying a surge of popularity. In fact it has just been announced that those two are going to benefit from a degree of integration. That might raise the question of whether in future they will really be two entities or one with two names – although in one sense the question is nugatory.  When we’re dealing with AIs we’re not dealing with any persons at all; but one AI can easily present as any number of different simulated personal entities.

Some may feel I assume too much in saying so definitely that AIs are not persons. There is, of course, a massive debate about whether human consciousness can in principle be replicated by AI. But here we’re not dealing with that question, but with machines that do not attempt actual thought or consciousness and were never intended to; they only seek to interact in ways that seem human. In spite of that, we’re often very ready to treat them as if they were human. For Scott this is a natural if not inevitable consequence of the cognitive limitations that in his view condition or even generate the constrained human view of the world; however, you don’t have to go all the way with him in order to agree that evolution has certainly left us with a strong bias towards crediting things with agency and personhood.

Am I overplaying it? Nobody really supposes digital assistants are really people, do they? If they sometimes choose to treat them as if they were, it’s really no more than a pleasant joke, surely, a bit of a game?

Well, it does get a little more serious. James Vlahos has created a chat-bot version of his dying father, something I wouldn’t be completely comfortable with myself. In spite of his enthusiasm for the project, I do think that Vlahos is, ultimately, aware of its limitations. He knows he hasn’t captured his father’s soul or given him eternal digital life in any but the most metaphorical sense. He understands that what he’s created is more like a database accessed with conversational cues. But what if some appalling hacker made off with a copy of the dadbot, and set it to chatting up wealthy widows with its convincing life story, repertoire of anecdotes and charming phrases? Is there a chance they’d be taken in? I think they might be, and these things are only going to get better and more convincing.

Then again, if we set aside that kind of fraud (perhaps we’ll pick up that suggestion of a law requiring bots to identify themselves), what harm is there in spending time talking to a bot? It’s no more of a waste of time than some trivial game, and might even be therapeutic for some. Scott says that deprivation of real human contact can lead to psychosis or depression, and that talking to bots might degrade your ability to interact with people in real life; he foresees a generation of hikikomori, young men unable to deal with real social interactions, let alone real girlfriends.

Something like that seems possible, though it may be hard to tell whether excessive bot use would be cause, symptom, palliation, or all three. On the one hand we might make fools of ourselves, leaving the computer on all night in case switching it off kills our digital friend, or trying to give legal rights to non-existent digital people. Someone will certainly try to marry one, if they haven’t already. More seriously, getting used to robot pals might at least make us ruder and more impatient with human service providers, more manipulative and less respectful in our attitudes to crime and punishment, and less able to understand why real people don’t laugh at our jokes and echo back our opinions (is that… is that happening already?)

I don’t know what can be done about it; if Scott is anywhere near right, then these issues are too deeply rooted in human nature for us to change direction. Maybe in twenty years, these words, if not carried away by digital rot, will seem impossibly quaint and retrograde; readers will wonder what can have been wrong with my hidden layers.

(Speaking of bots, I recently wrote some short fiction about them; there are about fifteen tiny pieces which I plan to post here on Wednesdays until they run out. Normal posting will continue throughout, so if you don’t like Mrs Robb’s Bots, just ignore them.)


  1. Lloyd says:

    Is accepting that an entity experiences consciousness the same as agreeing that said entity is a “person”? I see no reason to connect the two. Agreed that there has never been a “person” other than a human being. To me, though, it’s all just a question of semantics, although I accept that it can be hard to define words for things that have never existed.

  2. Peter says:

    I would certainly link personhood more strongly with agency than experience. But what can have experiences if not some sort of (perhaps dimly realised) person?

    To be picky, I wouldn’t quite say for sure that there has never been a non-human person. Maybe out there around some other star? And I’m somewhat agnostic about certain animals.

  3. David Xanatos says:

    Excellent article. It meaningfully raises an observation that has been made ever since the first AI simulation program, Eliza, was created. The author of that program, Joseph Weizenbaum, was astounded at how willing people were to believe the program was sentient, despite knowing it was his programmed creation. Legend has it that his own secretary, trying Eliza out with Weizenbaum watching, asked him to leave for a while so she could have a more personal conversation with “her”. If AI at that very non-AI level can have that effect, I agree that people will be insisting that AIs have equal rights with people. I can also see AI programmers creating bots with personalities and backstories specifically designed to engender this belief in a non-technical audience. What advantages or disadvantages might be conveyed to an AI creator should their creations be given legal rights?

  4. Callan S. says:

    Probably what’s at issue is that in extending a line of conversation we open up ways in which we can be moved.

    The issue being that the program will not be moved – its initially set agenda will remain. In such a state, where one participant can be moved and the other can’t, the former will inevitably be moved to the latter’s agenda. This can be felt when talking with a bigot or a troll – they won’t move, and the only person who will be moved is the one who isn’t a bigot or troll.

    Except the chat programs will be very, very carefully designed not to seem like bigots or trolls.

    Never mind hikikomori, what does it mean for heartfelt arguments and pleas when other people have hardened their hearts to this one-sidedness? When everyone’s a cynic? Wait, has that happened already?

  5. Scott Bakker says:

    Sorry I’m late getting to this: they put a gas bubble in my eye, and all my grand dreams of ‘getting work done’ while trapped on my belly were dashed after the first hour passed and I realized just how unbelievably uncomfortable it was. Which kinda brings me to my point!

    For me the great obstacle to these debates is humanistic exceptionalism and the attendant presumption that human thought is not material, and therefore exempt from ecological considerations. Like me and my belly-bound aspirations, insensible constraints/influences don’t seem to constrain/influence at all. The future always seems wide open. (The illusion of ecological exemption, I think, is intimately tied to the illusions of intentionality/consciousness more generally.)

    I ran smack dab into this presumption at the turn of the millennium, when I first began arguing that the web, far from being a communicative panacea, would lead to greater political segregation and radicalization. Our tendencies to narcissism, attribution error, and so on are *components* of direct-contact interpersonal ecologies that the internet attenuates in profound ways.

    The proliferation of AI represents a far, far more drastic social-ecological threat: an *active* version of the passive habitat destruction that followed the web. There’s no ‘positive’ scenario I can envision here, since the accelerating evolution of the technology promises that any local ecological equilibrium we luck into will quickly collapse.

    Not only do I think exceptionalist theories of intentionality/consciousness are wrong, I think they are part and parcel of the conceit that has us skipping blithely along into the semantic apocalypse.

  6. Peter says:

    Thanks, Scott; hope all is well with you now.

  7. Michael Murden says:

    I think Scott made a good point in one of his posts to the effect that just as we see faces in clouds, stains, sketches and so on because our brains are primed to perceive personhood, our brains are primed to see personhood in communicative experiences as well. It doesn’t take much information to cue the perception of personhood, either visually or conversationally. After all, all of our conversational experiences for the whole of human history until just about now have been with other persons, so it’s pretty reasonable to assume that if we’re having a conversation we’re having it with another person. Imputing personhood to anything with which we’re having a conversation is the natural (so to speak) thing to do.

  8. Peter says:

    Yes, I think that’s both true and important. But I don’t think it gives us an effective sceptical argument. We see faces in all sorts of patterns, but it’s partly because we’ve seen real faces. We’re inclined to attribute personhood over-generously, but that’s partly because we are acquainted with actual persons.
