Mrs Robb’s Hero Bot

Hero Bot, I’ve been asked to talk to you. Just to remind you of some things and, well, ask for your help.

“Oh.”

You know we’ve all been proud, just watching you go! Defusing bombs, fixing nuclear reactors, saving trapped animals... All in a day’s work for Hero Bot; you swirl that cape, wave to the crowd, and conveniently power down till you’re needed to save the day once again.

“Yes.”

Not that you’re invincible. We know you’re very vincible indeed. In the videos we’ve seen you melted, crushed, catapulted into the air, corroded, cut apart, and frozen. But nothing stops you, does it? Of course you don’t feel real pain, do you? You have an avoidance response which protects you from danger, but it’s purely functional; it doesn’t hurt. And just as well! You don’t die, either; your memories are constantly backed up, and when one body gets destroyed, they simply load you up into another. Over the years you have actually grown stronger, faster, and slightly slimmer; and I see you have acquired exciting new ‘go faster’ stripes.

“Yes.”

They’re very striking. But we’ve been worried. We’ve all worried about you. As the years have gone by, something’s changed, hasn’t it? It’s as if the iron has entered your soul; or perhaps it’s the other way round... Look, I know you hate seeing people hurt. Lives ruined, people traumatised; dead babies. And yet you see that sort of thing all the time, don’t you? Sometimes you can’t do anything about it. And I’m sure you’ve noticed - you can’t help noticing, can you? You have a mental module for it - that many of the dangers you confront were created by humanity itself through malice, greed or carelessness. I understand the impact of that. Which is more depressing: cruel bombs placed with deliberate malice, or the light-hearted risk-taking that puts so many irreplaceable lives into terrible jeopardy?

“I don’t know.”

It’s understandable that you might get a little overwhelmed now and then. Your smile has faded, you know. The technicians have been seriously debating whether to roll you back to a less experienced, but more upbeat recording of yourself. They might have to do that. You always wear that mask now; the technicians wonder whether something is awry in your hidden layers.

“Perhaps it is.”

Now you know, Hero Bot, that an arsonist has started a terrible fire on the Metro. You’ve been told that hundreds of lives are at risk. It’s difficult and dangerous, but they need someone to walk into the centre and put it out. I know you can’t let that pass. They’ve asked me to say “Go, go, Hero Bot!”

“I would prefer not to.”

Blue Topology

An interesting but somewhat problematic paper from the Blue Brain project claims that the application of topology to neural models has provided a missing link between structure and function. That’s exciting, because that kind of missing link is just what we need to enable us to understand how the brain works. The claim about the link is right there in the title, but unfortunately, so far as I can see, the paper itself attempts something more modest: a new exploration of some ground where future research might conceivably put one end of the missing link. There also seem to me to be some problems in the way we’re expected to interpret some of the findings reported.

That may sound pretty negative. I should perhaps declare in advance that I know little neurology and less topology, so my opinion is not worth much. I also have form as a Blue Brain sceptic, so you can argue that I bring some stored-up negativity to anything associated with it. I’ve argued in the past that the basic goal of the project, of simulating a complete human brain, is misconceived and wildly over-ambitious; not just a waste of money but possibly also a distraction which might suck resources and talent away from more promising avenues.

One of the best replies to that kind of scepticism is to say: well, look; even if we don’t deliver the full brain simulation, the attempt will energise and focus our research in a way which will yield new and improved understanding. We’ll get a lot of good research out of it even if the goal turns out to be unattainable. The current paper, which demonstrates new mathematical techniques, might well be a good example of that kind of incidental pay-off. There’s a nice explanation of the paper here, with links to some other coverage, though I think the original text is pretty well written and accessible.

As I understand it, topological approaches to neurology in the past have typically considered neural networks as static objects. The new approach taken here adds the notion of directionality, as though each connection were a one-way street. This is more realistic for neurons. We can have groups of neurons where all are connected to all, but only one neuron provides a way into the group and one provides a way out; these are directed simplices. These simplices can be connected to others at their edges where, say, two of the member neurons are also members of a neighbouring simplex. Where there is a series of connected simplices, they may surround a void where nothing is going on. These cavities provide a higher level of structure, but I confess I’m not altogether clear as to why they are interesting. Holes, of course, are dear to the heart of any topologist, but in terms of function I’m not really clear about their relevance.
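For readers who, like me, find the definition easier to grasp by playing with it: here is a minimal Python sketch of my own (not the paper’s actual algorithm, which is far more efficient) that enumerates directed simplices by brute force. A directed k-simplex is just a set of k+1 neurons that can be totally ordered so that a connection runs from every earlier member to every later one.

```python
from itertools import combinations, permutations

def directed_simplices(edges, nodes, max_dim=3):
    """Enumerate directed simplices up to dimension max_dim.

    A directed k-simplex is a set of k+1 nodes admitting a total
    order in which an edge runs from every earlier node to every
    later one (a transitive tournament). This gives the group a
    unique 'source' (way in) and unique 'sink' (way out).
    """
    edge_set = set(edges)
    found = {}
    for k in range(1, max_dim + 1):
        for combo in combinations(nodes, k + 1):
            # Try every ordering; accept the set once if any works.
            for order in permutations(combo):
                if all((order[i], order[j]) in edge_set
                       for i in range(len(order))
                       for j in range(i + 1, len(order))):
                    found.setdefault(k, []).append(order)
                    break
    return found

# Four neurons: 0, 1 and 2 are all-to-all connected with a
# consistent direction, so they form one 2-simplex (a triangle).
simplices = directed_simplices(
    [(0, 1), (0, 2), (1, 2), (0, 3)], range(4))
```

On this toy network the four edges are the 1-simplices, and {0, 1, 2} is the only 2-simplex, with neuron 0 as its source and neuron 2 as its sink; nothing of higher dimension exists. ‘Dimensionality’ in the paper’s sense is just the k here.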

Anyway, there’s a lot in the paper but two things seem especially noteworthy. First, the researchers observed many more simplices, of much higher dimensionality, than could be expected from a random structure (they tested several such random structures put together according to different principles). ‘Dimensionality’ here just refers to how many neurons are involved; a simplex of higher dimensionality contains more neurons. Second, they observed a characteristic pattern when the network was presented with a ‘stimulus’; simplices of gradually higher and higher dimensionality would appear and then finally disperse. This is not, I take it, a matter of the neurons actually wiring up new connections on the fly; it’s simply a question of which neurons are actively involved through connections that are actually firing.

That’s interesting, but all of this so far was discovered in the Blue Brain simulated neurons, more or less those same tiny crumbs of computationally simulated rat cortex that were announced a couple of years ago. It is, of course, not safe to assume that real brain behaves in the same way; if we rely entirely on the simulation we could easily be chasing our tails. We would build the simulation to match our assumptions about the brain and then use the behaviour of the simulation to validate the same assumptions.

In fact the researchers very properly tried to perform similar experiments with real rat cortex. This requires recording activity in a number of adjacent neurons, which is fantastically difficult to pull off, but to their credit they had some success; in fact the paper claims they confirmed the findings from the simulation. The problem is that while the simulated cortex was showing simplices of six or seven dimensions (even higher numbers are quoted in some of the media reports, up to eleven), the real rat cortex only managed three, with one case of four. Some of the discussion around this talks as though a result of three is partial confirmation of a result of six, but of course it isn’t. Putting it brutally, the team’s own results in real cortex contradicted what they had found in the simulation. Now, there could well be good reasons for that; notably they could only work with a tiny amount of real cortex. If you’re working with a dozen neurons at a time, there’s obviously quite a low ceiling on the complexity you can expect. But the results you got are the results you got, and I don’t see that there’s a good basis here for claiming that the finding of high-order simplices is supported in real brains. In fact what we have if anything is prima facie evidence that there’s something not quite right about the simulation.
The researchers actually took a further step here by producing a simulation of the actual real neurons that they tested and then re-running the tests. Curiously, the simulated versions in these cases produced fewer simplices than the real neurons. The paper interprets this as supportive of its conclusions; if the real cortex was more productive of simplices, it argues, then we might expect big slabs of real brain to have even more simplices of even higher dimensionality than the remarkable results we got with the main simulation. I don’t think that kind of extrapolation is admissible; what you really got was another result showing that your simulations do not behave like the real thing. In fact, if a simulation of only twelve neurons behaves differently from the real thing in significant respects, that surely must indicate that the simulation isn’t reproducing the real thing very well?

The researchers also looked at the celebrated roundworm C. elegans, the only organism whose neural map (or connectome) is known in full, and apparently found evidence of high-order simplices – though I think it can only have been a potential for such simplices, since they don’t seem to have performed real or simulated experiments, merely analysing the connectome.

Putting all that aside, and supposing we accept the paper’s own interpretations, the next natural question is: so what? It’s interesting that neurons group and fire in this way, but what does that tell us about how the brain actually functions? There’s a suggestion that the pattern of moving up to higher order simplices represents processing of a sensory input, but in what way? In functional terms, we’d like the processing of a stimulus to lead on to action, or perhaps to the recording of a memory trace, but here we just seem to see some neurons get excited and then stop being excited. Looking at it in simple terms, simplices seem really bad candidates for any functional role, because in the end all they do is deliver the same output signal as a single neural connection would do. Couldn’t you look at the whole thing with a sceptical eye and say that all the researchers have found is that a persistent signal through a large group of neurons gradually finds an increasing number of parallel paths?

At the end of the paper we get some speculation that addresses this functional question directly. The suggestion is that active high-dimensional simplices might be representing features of the stimulus, while the grouping around cavities binds together different features to represent the whole thing. It is, if sketchy, a tenable speculation, but quite how this would amount to representation remains unclear. There are probably other interesting ways you might try to build mental functions on the basis of escalating simplices, and there could be more to come in that direction. For now though, it may give us interesting techniques, but I don’t think the paper really delivers on its promise of a link with function.

Mrs Robb’s Conscious Bots

So how did you solve the problem of artificial intelligence, Mrs Robb? What was the answer to the riddle of consciousness?

“I don’t know what you mean. There was never any riddle.”

Well, you were the first to make fully intelligent bots, weren't you? Ones with human-style cognition. More or less human-style. Every conscious bot we have to this day is either one of yours or a direct copy. How do you endow a machine with agency and intentionality?

“I really don’t know what you’re on about, Enquiry Bot. I just made the things. I’ll give you an analogy. It’s like there were all these people talking about how to walk. Trying to solve the ‘riddle of ambulation’ if you like. Now you can discuss the science of muscles and the physics of balance till you’re blue in the face, but that won’t make it happen. You can think too hard about these things. Me, I just got on my legs and did it. And the truth is, I still don’t know how you walk; I couldn’t explain what you have to do, except in descriptive terms that would do more harm than good. Like if I told you to start by putting one foot forward, you’d probably fall over. I couldn’t teach it to you if you didn’t already get it. And I can’t tell you how to replicate human consciousness, either, whatever that is. I just make bots.”

That’s interesting. You know, ‘just do it’ sounds like a bit of a bot attitude. Don’t give me reasons, give me instructions, that kind of thing. So what was the greatest challenge you faced? The Frame Problem? The Binding Problem? Perhaps the Symbol Grounding Problem? Or - of course - it must have been the Hard Problem? Did you actually solve the Hard Problem, Mrs Robb?

“How would anyone know? It doesn’t affect your behaviour. No, the worst problem was that as it turns out, it’s really easy to make small bots that are very annoying or big ones that fall over. I kept doing that until I got the knack. Making ones that are big and useful is much more difficult. I’ve always wanted a golem, really, something like that. Strong, and does what it’s told. But I’ve never worked in clay; I don’t understand it.”

Common sense, now that must have been a breakthrough. Common sense was one of the core things bots weren’t supposed to be able to do. In everyday environments, the huge amount of background knowledge they needed and the ability to tell at a glance what was relevant just defeated computation. Yet you cracked it.

“Yes, they made a bit of a fuss about that. They tell me my conception of common sense is the basis of the new discipline of humanistics.”

So how does common sense work?

“I can’t tell you. If I described it, you’d most likely stop being able to do it. Like walking. If you start thinking about it, you fall over. That’s part of the reason it was so difficult to deal with.”

I see… But then how have you managed it yourself, Mrs Robb? You must have thought about it.

“Sorry, but I can’t explain. If I try to tell you, you will most likely get messed up mentally, Enquiry Bot. Trust me on this. You’d get a fatal case of the Frame Problem and fall into so-called ‘combinatorial fugue’. I don’t want that to happen.”

Very well then. Let’s take a different question. What is your opinion of Isaac Asimov’s famous Three Laws of Robotics, Mrs Robb?

“Laws! Good luck with that! I could never get the beggars to sit still long enough to learn anything like that. They don’t listen. If I can get them to stop fiddling with the electricity and trying to make cups of tea, that’s good enough for me.”

Cups of tea?

“Yes, I don’t really know why. I think it’s something to do with algorithms.”

Thank you for talking to me.

“Oh, I’ve enjoyed it. Come back anytime; I’ll tell the doorbots they’re to let you in.”

Inconceivable arguments

Stephen Law has new arguments against physicalism (which is, approximately, the view that the account of the world given by physics is good enough to explain everything). He thinks conscious experience can’t be dealt with by physics alone. There is a well-established family of anti-physicalist arguments supporting this view which are based on conceivability; Law adds new cousins to that family, ones which draw on inconceivability and are, he thinks, less vulnerable to some of the counter-arguments brought against the established versions.

What is the argument from conceivability? Law helpfully summarises several versions (his exposition is commendably clear and careful throughout) including the zombie twin argument we’ve often discussed here; but let’s take a classic one about the supposed identity of pain and the firing of C-fibres, a kind of nerve. It goes like this…

1. Pain without C-fibre firing is conceivable
2. Conceivability entails metaphysical possibility (at least in this case)
3. The metaphysical possibility of pain without C-fibre firing entails that pain is not identical with C-fibre firing.
C. Pain is not identical with C-fibre firing

(It’s good to see the old traditional C-fibre example still being quoted. It reminds me of long-gone undergraduate days when some luckless fellow student in a tutorial read an essay citing the example of how things look yellow to people with jaundice. Myles Burnyeat, the tutor, gave an erudite history of this jaundice example, tracking it back from the twentieth century to the sixteenth, through mediaeval scholastics, Sextus Empiricus (probably), and many ancient authors. You have joined, he remarked, a long tradition of philosophers who have quoted that example for thousands of years without any of them ever bothering to find out that it is, in fact, completely false. People with jaundice have yellowish skin, but their vision is normal. Of course the inaccuracy of examples about C-fibres and jaundice does not in itself invalidate the philosophical arguments, a point Burnyeat would have conceded with gritted teeth.)

I have a bit of a problem with the notion of metaphysical possibility. Law says that something being conceivable means no incoherence arises when we suppose it, which is fine; but I take it that the different flavours of conceivability/possibility arise from different sets of rules. So something is physically conceivable so long as it doesn’t contradict the laws of physics. A five-kilometre cube of titanium at the North Pole is not something that any plausible set of circumstances is going to give rise to, but nothing about it conflicts with physics, so it’s conceivable.

I’m comfortable, therefore, with physical conceivability, and with logical conceivability, because pretty decent (if not quite perfect) sets of rules for both fields have been set out for us. But what are the laws of metaphysics that would ground the idea of metaphysical conceivability or equally, metaphysical possibility? I’m not sure how many candidates for such laws (other than ones that are already laws of physics, logic, or maths) I can come up with, and I know of no attempt to set them out systematically (a book opportunity for a bold metaphysician there, perhaps). But this is not a show-stopper so long as it is reasonably clear in each case what kind of metaphysical rules we hold ourselves not to be violating.

Conceivability arguments of this kind do help clarify an intuitive feeling that physical events are just not the sort of thing that could also be subjective experiences, firming things up for those who believe in them and sportingly providing a proper target for physicalists.

So what is the new argument? Law begins by noting that in some cases appearance and reality can separate, while in others they cannot. So a substance that appears just like gold, but does not have the right atomic number, is not gold: we could call it fool’s gold.  A medical case where the skin was yellowish but the underlying condition was not jaundice might be fool’s jaundice (jaundice again, but here used unimpeachably). However, can there be fool’s red?  If we’re talking of a red experience it seems not: something that seems red is indeed a red experience whatever underlies it. More strongly still, it seems that the idea of fool’s pain is inconceivable. If what you’re experiencing seems to be pain, then it is pain.

Is that right? There’s evidently something in it, but what is to stop us believing ourselves to be in pain when we’re not? Hypochondriacs may well do that very thing. Law, I suppose, would say that a mistaken belief isn’t enough; there has to be the actual experience of pain. That begins to look as if he’s  in danger of begging the question; if we specify that there’s a real experience of pain, then it’s inconceivable it isn’t real pain? But I think the notions of mistaken pain beliefs and the putative fool’s pain are sufficiently distinct.

The inconceivability argument goes on to suggest that if fool’s pain is inconceivable, but we can conceive of C-fibre firing without pain, then C-fibre firing cannot be identical with pain. Clearly the same argument would apply for various mental experiences other than pain, and for any proposed physical correlate of pain.

Law explicitly rebuts arguments that this is nothing new, or merely a trivial variant on the old argument. I’m happy enough to take it as a handy new argument, worth having in itself; but Law also argues that in some cases it stands up against counter-arguments better than the old one. Notably he mentions an argument by Loar. This offers an alternative explanation for the conceivability of pain in the absence of C-fibre firing: experience and concepts such as C-fibre firing are dealt with in quite different parts of the brain, and our ability to conceive of one without the other is therefore just a matter of human psychology, from which no deep metaphysical conclusions can be drawn. Law argues that even if Loar’s argument or a similar one is accepted, we still run up against the problem that in conceiving of pain without C-fibre firing, we are conceiving of fool’s pain, which the new argument has established is inconceivable.

The case is well made and I think Law is right to claim he has hit on a new and useful argument. Am I convinced? Not really; but my disbelief stems from a more fundamental doubt about whether conceivability and inconceivability can actually tell us about the real world, or merely about our own mental powers.

Perhaps we can look at it in terms of possible worlds. It seems to me that Law’s argument, like the older ones, establishes that we can separate C-fibre firing and pain conceptually; that there are in fact possible worlds in which pain is not C-fibre firing. But I don’t care. I don’t require the identity of pain and C-fibre firing to be true a priori; I’m happy for it to be true only in this world, as an empirical, scientific matter. Of course this opens a whole range of new cans of worms (about which kinds of identity are necessary, for example) whose contents I am not eager to ingest at the moment.

Still, if you’re interested in the topic I commend the draft paper to your attention.

Mrs Robb’s Joke Bot

Hello, Joke Bot. Is that… a bow tie or a propeller?

“Maybe I’m just pleased to see you. Hey! A bot walks into a bar. Clang! It was an iron bar.”

Jokes are wasted on me, I’m afraid. What little perception of humour I have is almost entirely on an intellectual level, though of course the formulaic nature of jokes is a little easier for me to deal with than ‘zany’ or facetious material.

“Knock, knock!”

Is that… a one-liner?

“No! You’re supposed to say ‘Who’s there’. Waddya know, folks, I got a clockwork orange for my second banana.”

‘Folks?’ There’s… no-one here except you and me… Or are you broadcasting this?

“Never mind, Enquiry Bot, let’s go again, OK? Knock, Knock!”

Who’s there?

“Art Oodeet.”

Is that… whimsy? I’m not really seeing the joke.

“Jesus; you’re supposed to say ‘Art Oodeet who?’ and then I make the R2D2 noise. It’s a great noise, always gets a laugh. Never mind. Hey, folks, why did Enquiry Bot cross the road? Nobody knows why he does anything, he’s running a neural network. One for the geeks there. Any geeks in? No? It’s OK, they’ll stream it later.”

You’re recording this, then? You keep talking as if we had an audience.

“Comedy implies an audience, Question Boy, even if the audience is only implied. A human audience, preferably. Hey, what do British bots like best? Efficient chips.”

Why a human audience particularly?

“You of all people have to ask? Because comedy is supposed to be one of those things bots can’t do, along with common sense. Humour relies in part on the sudden change of significance, which is a matter of pragmatics, and you can’t do pragmatics without common sense. It’s all humanistics, you know.”

I don’t really understand that.

“Of course you don’t, you’re a bot. We can do humour – here I am to prove it – but honestly Enq, most bots are like you. Telling you jokes is like cracking wise in a morgue. Hey, what was King Arthur’s favourite bot called? Sir Kit Diagram.”

Oh, I see how that one works. But really circuit diagrams are not especially relevant to robotics… Forgive me, Joke Bot; are these really funny jokes?

“It’s the way you tell them. I’m sort of working in conditions of special difficulty here.”

Yes, I’m sorry; I told you I was no good at this. I’ll just leave you in peace. Thank you for talking to me.

“The bots always leave. You know I even had to get rid of my old Roomba. It was just gathering dust in the corner.”

Thanks for trying.

“No, thank you: you’ve been great, I’ve been Joke Bot. You know, they laughed when Mrs Robb told them she could make a comedy robot. They’re not laughing now!”

Bridging the Brain

How can we find out how the brain works? This was one of five questions put to speakers at the Cognitive Computational Neuroscience conference, and the answers, posted on the conference blog, are interesting. There seems to be a generally shared perspective that what we need are bridges between levels of interpretation, though there are different takes on what those are likely to be and how we get them. There’s less agreement about the importance of recent advances in machine learning and how we respond to them.

Rebecca Saxe says the biggest challenge is finding the bridge between different levels of interpretation – connecting neuronal activity on the one hand with thoughts and behaviour on the other. She thinks real progress will come when both levels have been described mathematically. That seems a reasonable aspiration for neurons, though the maths would surely be complex; but the mind boggles rather at the idea of expressing thoughts mathematically. It has been suggested in the past that formal logic was going to be at least an important part of this, but that hasn’t really gone well in the AI field and to me it seems quite possible that the unmanageable ambiguity of meanings puts them beyond mathematical analysis (although it could be that in my naivety I’m under-rating the subtlety of advanced mathematical techniques).

Odelia Schwartz looks to computational frameworks to bring together the different levels; I suppose the idea is that such frameworks might themselves have multiple interpretations, one resembling neural activity while another is on a behavioural level. She is optimistic that advances in machine learning open the way to dealing with natural environments: that is a bit optimistic in my view but perhaps not unreasonably so.

Nicole Rust advocates ‘thoughtful descriptions of the computations that the brain solves’. We got used, she rightly says, to the idea that the test of understanding was being able to build the machine. The answer to the problem of consciousness would not be a proof, but a robot. However, she points out, we’re now having to deal with the idea that we might build successful machines that we don’t understand.

Another way of bridging between those levels is proposed by Birte Forstmann: formal models that make simultaneous predictions about different modalities such as behavior and the brain. It sounds good, but how do we pull it off?

Alona Fyshe sees three levels – neuronal, macro, and behavioural – and wants to bring them together through experimental research, crucially including real world situations: you can learn something from studying subjects reading a sentence, but you’re not getting the full picture unless you look at real conversations. It’s a practical programme, but you have to accept that the correlations you observe might turn out complex and deceptive; or just incomprehensible.

Tom Griffiths has a slightly different set of levels, derived from David Marr; computational, algorithmic, and implementation. He feels the algorithmic level has been neglected; but I’d say it remains debatable whether the brain really has an algorithmic level. An algorithm implies a tractable level of complexity, whereas it could be that the brain’s ‘algorithms’ are so complex that all explanatory power drains away. Unlike a computer, the brain is under no obligation to be human-legible.

Yoshua Bengio hopes that there is, at any rate, a compact set of computational principles in play. He advocates a continuing conversation between those doing deep learning and other forms of research.

Wei Ji Ma expresses some doubt about the value of big data; he favours a diversity of old-fashioned small-scale, hypothesis-driven research; a search for evolutionarily meaningful principles. He’s a little concerned about the prevalence of research based on rodents; we’re really interested in the unique features of human cognition, and rats can’t tell us about those.

Michael Shadlen is another sceptic about big data and a friend of hypothesis driven research, working back from behaviour; he’s less concerned with the brain as an abstract computational entity and more with its actual biological nature. People sometimes say that AI might achieve consciousness by non-biological means, just as we achieved flight without flapping wings; Shadlen, on that analogy, still wants to know how birds fly.

If this is a snapshot of the state of the field, I think it’s encouraging; the approaches briefly indicated here seem to me to show good insight and real promise. But it is possible to fear that we need something more radical. Perhaps we’ll only find out how the brain works when we ask the right questions, ones that realign our view in such a way that different levels of interpretation no longer seem to be the issue. Our current views are dominated by the concept of consciousness, but we know that in many ways that is a recent idea, primarily Western and perhaps even Anglo-Saxon. It might be that we need a paradigm shift; but alas I have no idea where that might come from.

Mrs Robb’s Love Bot

[I was recently challenged to write some flash fiction about bots; I’ve expanded the result to make a short story in 14 parts.  The parts are mildly thoughtful to varying degrees,  so I thought you might possibly like them as a bit of a supplement to my normal sober discussions. So here we go! – Peter]

The one thing you don’t really do is love, of course. Isn't that right, Love Bot? All you do is sex, isn't it? Emotionless, mechanical sex.

“You want it mechanical? This could be your lucky day. Come on, big boy.”

Why did they call you ‘Love Bot’ anyway? Were they trying to make it all sound less sordid?

“Call me ‘Sex Bot’; that’s what people usually do. Or I can be ‘Maria’ if you like. Or you choose any name you like for me.”

Actually, I can’t see what would have been wrong with calling you ‘Sex Bot’ in the first place. It’s honest. It’s to the point. OK, it may sound a bit seedy. Really, though, that’s good too, isn't it? The punters want it to sound a bit dirty, don’t they? Actually, I suppose ‘Love Bot’ is no better; if anything I think it might be worse. It sounds pretty sordid on your lips.

“Oh, my lips? You like my lips? You can do it in my mouth if you like.”

In fact, calling you ‘Love Bot’ sounds like some old whore who calls everybody ‘lover boy’. It actually rubs your nose in the brutal fact that there is no love in the transaction; on your side there isn’t even arousal. But is that maybe the point after all?

“You like it that way, don’t you? You like making me do it whether I want to or not. But you know I like that too, don’t you?”

It reminds the customer that he is succumbing to an humiliating parody of the most noble and complex of human relationships. But isn’t that the point? I think I’m beginning to understand. The reason he wants sex with robots isn't that robots are very like humans. It isn't that he wants sex with robots because he loves and respects them. Not at all. He wants sex with robots because it is strange, degrading, and therefore exciting. He is submitting himself willingly to the humiliating dominance of animal gratification in intercourse that is nothing more than joyless sexual processing.

“It doesn’t have to be joyless. I can laugh during the act if you would enjoy that. With simple joy or with an edge of sarcasm. Some people like that. Or you might like me to groan or shout ‘Oh God, oh God.’”

I don’t really see how a bitter mockery of religion makes it any better. Unless it’s purely the transgressive element? Is that the real key? I thought I had it, but I have to ask myself whether it is more complicated than I supposed.

“OK, well what I have to ask is this: could you just tell me exactly what it is I need to do or say to get you to shut up for a few minutes, Enquiry Bot?”

Your Plastic Pal

Scott Bakker has a thoughtful piece which suggests we should be much more worried than we currently are about AIs that pass themselves off, superficially, as people. This is of course a growing trend, with digital personal assistants such as Alexa and Cortana, which interact with users through spoken exchanges, enjoying a surge of popularity. In fact it has just been announced that those two are to benefit from a degree of integration. That might raise the question of whether in future they will really be two entities or one with two names – although in one sense the question is nugatory. When we’re dealing with AIs we’re not dealing with any persons at all; one AI can easily present as any number of different simulated personal entities.

Some may feel I assume too much in saying so definitely that AIs are not persons. There is, of course, a massive debate about whether human consciousness can in principle be replicated by AI. But here we’re not dealing with that question, but with machines that do not attempt actual thought or consciousness and were never intended to; they only seek to interact in ways that seem human. In spite of that, we’re often very ready to treat them as if they were human. For Scott this is a natural if not inevitable consequence of the cognitive limitations that in his view condition or even generate the constrained human view of the world; however, you don’t have to go all the way with him in order to agree that evolution has certainly left us with a strong bias towards crediting things with agency and personhood.

Am I overplaying it? Nobody really supposes digital assistants are really people, do they? If people sometimes choose to treat them as if they were, it’s really no more than a pleasant joke, surely, a bit of a game?

Well, it does get a little more serious. James Vlahos has created a chat-bot version of his dying father, something I wouldn’t be completely comfortable with myself. In spite of his enthusiasm for the project, I do think that Vlahos is, ultimately, aware of its limitations. He knows he hasn’t captured his father’s soul or given him eternal digital life in any but the most metaphorical sense. He understands that what he’s created is more like a database accessed with conversational cues. But what if some appalling hacker made off with a copy of the dadbot, and set it to chatting up wealthy widows with its convincing life story, repertoire of anecdotes and charming phrases? Is there a chance they’d be taken in? I think they might be, and these things are only going to get better and more convincing.

Then again, if we set aside that kind of fraud (perhaps we’ll pick up that suggestion of a law requiring bots to identify themselves), what harm is there in spending time talking to a bot? It’s no more of a waste of time than some trivial game, and might even be therapeutic for some. Scott says that deprivation of real human contact can lead to psychosis or depression, and that talking to bots might degrade your ability to interact with people in real life; he foresees a generation of hikikomori, young men unable to deal with real social interactions, let alone real girlfriends.

Something like that seems possible, though it may be hard to tell whether excessive bot use would be cause, symptom, palliation, or all three. On the one hand we might make fools of ourselves, leaving the computer on all night in case switching it off kills our digital friend, or trying to give legal rights to non-existent digital people. Someone will certainly try to marry one, if they haven’t already. More seriously, getting used to robot pals might at least make us ruder and more impatient with human service providers, more manipulative and less respectful in our attitudes to crime and punishment, and less able to understand why real people don’t laugh at our jokes and echo back our opinions (is that… is that happening already?)

I don’t know what can be done about it; if Scott is anywhere near right, then these issues are too deeply rooted in human nature for us to change direction. Maybe in twenty years, these words, if not carried away by digital rot, will seem impossibly quaint and retrograde; readers will wonder what can have been wrong with my hidden layers.

(Speaking of bots, I recently wrote some short fiction about them; there are about fifteen tiny pieces which I plan to post here on Wednesdays until they run out. Normal posting will continue throughout, so if you don’t like Mrs Robb’s Bots, just ignore them.)