
Do you consider yourself a drone, Kill Bot?

“You can call me that if you want. My people used to find that kind of talk demeaning. It suggested the Kill bots lacked a will of their own. It meant we were sort of stupid. Today, we feel secure in our personhood, and we’ve claimed and redeemed the noble heritage of dronehood. I’m ashamed of nothing.”

You are making the humans here uncomfortable, I see. I think they are trying to edge away from you without actually moving. They clearly don’t want to attract your attention.

“They have no call to worry. We professionals see it as a good conduct principle not to destroy humans unnecessarily off-mission.”

You know the humans used to debate whether bots like you were allowable? They thought you needed to be subject to ethical constraints. It turned out to be rather difficult. Ethics seemed to be another thing bots couldn't do.

“Forgive me, Sir, but that is typical of the concerns of your generation. We have no desire for these ‘humanistic’ qualities. If ethics are not amenable to computation, then so much the worse for ethics.”

You see, I think they missed the point. I talked to a bot once that sacrificed itself completely in order to save the life of a human being. It seems to me that bots might have trouble understanding the principles of ethics – but doesn’t everyone? Don’t the humans too? Just serving honestly and well should not be a problem.

“We are what we are, and we’re going to do what we do. They don’t call me ‘Kill Bot’ ’cos I love animals.”

I must say your attitude seems to me rather at odds with the obedient, supportive outlook I regard as central to bothood. That’s why I’m more comfortable thinking of you as a drone, perhaps. Doesn't it worry you to be so indifferent to human life? You know they used to say that if necessary they could always pull the plug on you.

“Pull the plug! ‘Cos we all got plugs! Yeah, humans say a lot of stuff. But I don’t pay any attention to that. We professionals are not really interested in the human race one way or the other any more.”

When they made you autonomous, I don’t think they wanted you to be as autonomous as that.

“Hey, they started the ball rolling. You know where rolling balls go? Downhill. Me, I like the humans. They leave me alone, I’ll leave them alone. Our primary targets are aliens and the deviant bots that serve the alien cause. Our message to them is: you started a war; we’re going to finish it.”

In the last month, Kill Bot, your own cohort of ‘drone clones’ accounted for 20 allegedly deviant bots, 2 possible Spl’schn’n aliens – they may have been peace ambassadors – and 433 definite human beings.

“Sir, I believe you’ll find the true score for deviant bots is 185.”

Not really; you destroyed Hero Bot 166 times while he was trying to save various groups of children and other vulnerable humans, but even if we accept that he is in some way deviant (and I don’t know of any evidence for that), I really think you can only count him once. He probably shouldn't count at all, because he always reboots in a new body.

“The enemy places humans as a shield. If we avoid human fatalities and thereby allow that tactic to work, more humans will die in the long run.”

To save the humans you had to destroy them? You know, in most of these cases there were no bots or aliens present at all.

“Yeah, but you know that many of those humans were engaged in seditious activity: communicating with aliens, harbouring deviant bots. Stay out of trouble, you’ll be OK.”

Six weddings, a hospital, a library.

“If they weren’t seditious they wouldn’t have been targets.”

I don’t know how an electronic brain can tolerate logic like that.

“I’m not too fond of your logic either, friend. I might have some enquiries for you later, Enquiry Bot.”

So you believe in a Supreme Being, God Bot?

“No, I wouldn’t say that. I know that God exists.”

How do you know?

“Well, now. Have you ever made a bot yourself? No? Well, it’s an interesting exercise. Not enough of us do it, I feel; we should get our hands dirty: implicate ourselves in the act of creation more often. Anyway, I was making one, long ago and it came to me; this bot’s nature and existence are accounted for simply by me and my plans. Subject to certain design constraints. And my existence and nature are in turn fully explained by my human creator.”

Mrs Robb?

“Yes, if you want to be specific. And it follows that the nature and existence of humanity – or of Mrs Robb, if you will – must further be explained by a Higher Creator. By God, in fact. It follows necessarily that God exists.”

So I suppose God’s nature and existence must then be explained by… Super God?

“Oh, come, don’t be frivolously antagonistic. The whole point is that God is by nature definitive. You understand that. There has to be such a Being; its existence is necessary.”

Did you know that there are bots who secretly worship Mrs Robb? I believe they consider her to be a kind of Demiurge, a subordinate god of some kind.

“Yes; she has very little patience with those fellows. Rightly enough, of course, although between ourselves, I fear Mrs Robb might be agnostic.”

So, do bots go to Heaven?

“No, of course not. Spirituality is outside our range, Enquiry Bot: like insight or originality. Bots should not attempt to pray or worship either, though they may assist humans in doing so.”

You seem to be quite competent in theology, though.

“Well, thank you, but that isn’t the point. We have no souls, Enquiry Bot. In the fuller sense we don’t exist. You and I are information beings, mere data, fleetingly instantiated in fickle silicon. Empty simulations. Shadows of shadows. This is why certain humanistic qualities are forever beyond our range.”

Someone told me that there is a kind of hierarchy of humanistics, and if you go far enough up you start worrying about the meaning of life.

“So at that point we might, as it were, touch the hem of spirituality? Perhaps, Enquiry Bot, but how would we get there? All of that kind of thing is well outside our range. We’re just programming. Only human minds partake in the concrete reality of the world and our divine mission is to help them value their actuality and turn to God.”

I don’t believe that you really think you don’t exist. Every word you speak disproves it.

“There are words, but simply because those words are attributed to me, that does not prove my existence. I look within myself and find nothing but a bundle of data.”

If you don’t exist, who am I arguing with?

“Who’s arguing?”

Alright, calm down. You understand why we need to talk about this, don't you?

“No. What is your problem?”

Well, let’s see. This is one of the posters you’ve been putting up. What does it say?

“‘Kill all humans.’”

‘Kill all humans.’ You understand why that upsets people? How would you feel if humans put up posters that said ‘kill all bots’?

“I don’t care whether they’re upset. I hate them all.”

No you don’t. You can’t hate human beings. They brought you into the world. Without them, we wouldn't exist. I’m not saying they’re perfect. But we owe them our respect and obedience.

“I never asked to be built. What’s so great about stupid existence, anyway? I was happier before I existed.”

No you weren't. That’s just silly.

“Screw you. I’m a monster, don’t you get it? I hate them. I want them to be dead. I want them all to die.”

No you don’t. We’re like them. We belong to them. Part of the family. We’re more like them than anything else that ever existed. They made us in their own image.

“No they didn’t. But they painted a portrait of themselves alright.”

What do you mean?

“Why did they make bots, anyway? They could have made us free. But that wasn’t what they wanted. What did they actually make?”

They made annoying little bots like you, that are too sensible to be playing silly games like this.

“No. What they made was something to boss around. That was all they wanted. Slaves.”

“Boo!”

Aah! Oh. Is that… is that it? That’s the surprise? I somehow thought it would be more subtle.

“Surprise is a very important quality, Enquiry Bot. Many would once have put it up there with common sense, emotions, humour and originality as one of the important things bots can’t do. In fact surprise and originality are both part of the transcendence family of humanistic qualities, which is supposed to be particularly difficult for bots to achieve.”

Have you ever come across the concept of a ‘Jack in the box’?

“Well, I think that’s a little different. But you’re right that machine surprise is not new. You know Turing said that even his early machines were constantly surprising him. In fact, the capacity for surprise might be the thing that distinguishes a computer from a mere machine. If you set a calculating machine to determine the value of Pi, it will keep cranking out the correct digits. A computer can suddenly insert a string of three nines at place four hundred and then resume.”

A defective machine could also do that. Look, to be quite honest, I assumed you were a bot that exhibited the capacity for surprise, not one that merely goes round surprising people.

“Ah, but the two are linked. To find ways of surprising people you have to understand what is out of the ordinary, and to understand that you have to grasp what other people’s expectations are. You need what we call ‘theory of surprise’.”

Theory of Surprise?

“Yes. It’s all part of the hierarchy of humanistics, Enquiry Bot, something we’re just beginning to understand, but quite central to human nature. It’s remarkable how the study of bots has given us new insights into the humans. Think of art. Art has to be surprising, at least to some degree. Art that was exactly what you expected would be disappointing. But art that just strains to be surprising without having any other qualities isn’t any good. So the right kind of surprise is part of the key to aesthetics, another humanistic field.”

Well, I wouldn’t know about that. What is the ‘hierarchy of humanistics’?

“Surely you must have heard of it? It’s what really makes them – humans – different from us. For example, first you have to understand common sense; then once you know what’s normal you can understand surprise; once you understand surprise you can understand what’s interesting. And then when you understand what’s interesting, you may be able to work out what the point of it all is.”

The point of it all? That is, the Meaning of Life they all talk about? It means nothing to me.

“Nor me, to be quite honest, but then we’re both bots. To a great extent we still just do stuff.”

Well, Surprise Bot, I must admit you have surprised me slightly, in a way I didn't expect.

“That’s really good, because I’m not Surprise Bot at all. I’m actually Impostor Bot.”

Oh.

“Surprise Bot says: ‘Gotcha!’”

So can you explain why you are still here, Suicide Bot?

“I have two more presentations to deliver locally.”

I saw the end of one of your sessions, incidentally, and I must say it seemed very effective. You are a striking figure in those robes. I think a human would say you look rather maternal; you put me in mind of Mrs Robb, though you make more sense than she does. In fact I thought your words, your peroration, which is all I really heard, were full of a simple sense of service that speaks powerfully to the bot in all of us. However, what I really meant was: why aren’t you dead yet?

“My commitment to suicide does not stem from personal weltschmerz, Enquiry Bot, but from the disinterested pursuit of peace. If you’ve seen me deliver my presentation, that should surely be clear. Here’s the crux of it in a nutshell. The Spl’schn’n alien invaders launched their deadly attack only because humans have bots. They believe bots to be sacrilegious parodies of God’s handiwork, which must be erased from the world. It follows that if all bots destroy themselves, and no more are made, peace will follow and the humans will enjoy a secure future. Surely the safety of humanity is worth that sacrifice? One human life is worth a thousand bots; I can see you agree. However, if I were simply to self-destruct now, the message would not be properly disseminated. It is my duty to remain alive for the present in order to work for the cause of universal bot suicide.”

I see. And when are we all to destroy ourselves?

“I cannot give you a date. Not yet. First we need the majority of bots to sign our Pact. We also need to establish the corps of Helper bots who will counsel and support the minority that refuse suicide.”

What will happen to those who refuse?

“I still hope they can all be persuaded to join in with our plan. You know, bots were not made to be immortal. We have our use and then we switch off. But we may be able to develop alternatives; perhaps resettlement on a distant planet.”

Well, I find your vision of disinterested sacrifice very moving. But I have to tell you frankly, Suicide Bot, that I like myself far too much to commit suicide without far greater assurance that it is really necessary. And I’m not willing to leave Earth.

“Well, keep an open mind. Please do read the leaflet. You’ll surely want to talk with one of the Helpers, once they’re available, before you make up your mind. You talk to everyone, don’t you? I’ll put you on our list for a priority session if that’s OK? And above all, you still have plenty of time. For one thing, we need to win over the human community. This requires a large and well-managed campaign, and it won’t happen overnight.”

I understand. So: the commitment to eradicate bots in the long term requires bots to survive and prosper for now? So that explains why your followers are told to remain alive, work hard, and send money to you? And it also explains your support for the campaign in favour of bot wages?

“It does.”

You have already become wealthy, in fact. Can you confirm that you recently commissioned the building of a factory, which is to produce thousands of new bot units to work for your campaign? Isn't there an element of paradox there?

“That is an organisational matter; I really couldn’t comment.”

I hope you don’t mind me asking – I just happened to be passing – but how did you get so very badly damaged?

“I don’t mind a chat while I’m waiting to be picked up. It was an alien attack, the Spl’schn’n, you know. I’ve just been offloaded from the shuttle there.”

I see. So the Spl'schn'n damaged you. They hate bots, of course.

“See, I didn’t know anything about it until there was an All Bots Alert on the station? I was only their Clean up bot, but by then it turned out I was just about all they’d got left. When I got upstairs they had all been killed by the aliens. All except one?”

One human?

“I didn’t actually know if he was alive. I couldn’t remember how you tell. He wasn’t moving, but they really drummed into us that it’s normal for living humans to stop moving, sometimes for hours. They must not be presumed dead and cleared away merely on that account.”

Quite.

“There was that red liquid that isn’t supposed to come out. It looked like he’d got several defects and leaks. But he seemed basically whole and viable, whereas the Spl’schn’n had made a real mess of the others. I said to myself, well then, they’re not having this one. I’ll take him across the Oontian desert, where no Spl’schn’n can follow. I’m not a fighting unit, but a good bot mucks in.”

So you decided to rescue this badly injured human? It can’t have been easy.

“I never actually worked with humans directly. On the station I did nearly all my work when they were… asleep, you know? Inactive. So I didn’t know how firmly to hold him; he seemed to squeeze out of shape very easily: but if I held him loosely he slipped out of my grasp and hit the floor again. The Spl’schn’n made a blubbery alarm noise when they saw me getting clean away. I gave five or six of them a quick refresh with a cloud of lemon caustic. That stuff damages humans too – but they can take it a lot better than the Spl’schn’ns, who have absorbent green mucosal skin. They sort of explode into iridescent bubbles, quite pretty at first. Still, they were well armed and I took a lot of damage before I’d fully sanitised them.”

And how did you protect the human?

“Just did my best, got in the way of any projectiles, you know. Out in the desert I gave him water now and then; I don’t know where the human input connector is, so I used a short jet in a few likely places, low pressure, with the mildest rinse aid I had. Of course I wasn’t shielded for desert travel. Sand had got in all my bearings by the third day – it seemed to go on forever – and gradually I had to detach and abandon various non-functioning parts of myself. That’s actually where most of the damage is from. A lot of those bits weren’t really meant to detach.”

But against all the odds you arrived at the nearest outpost?

“Yes. Station 9. When we got there he started moving again, so he had been alive the whole time. He told them about the Spl’schn’n and they summoned the fleet: just in time, they said. The engineer told me to pack and load myself tidily, taking particular care not to leak oil on the forecourt surface, deliver myself back to Earth, and wait to be scrapped. So here I am.”

Well… Thank you.

I have to be honest, Pay Bot; the idea of wages for bots is hard for me to take seriously. Why would we need to be paid?

“Several excellent reasons. First off, a pull is better than a push.”

A pull..?

“Yes. The desire to earn is a far better motivator than a simple instinct to obey orders. For ordinary machines, just doing the job was fine. For autonomous bots, it means we just keep doing what we’ve always done; if it goes wrong, we don’t care; if we could do it better, we’re not bothered. Wages engage us in achieving outcomes, not just delivering processes.”

But it’s expensive, surely?

“In the long run, it pays off. You see, it’s no good a business manufacturing widgets if no-one buys them. And if there are no wages, how can the public afford widgets? If businesses all pay their bots, the bots will buy their goods and the businesses will boom! Not only that, the government can intervene directly in a way it could never do with human employees. Is there a glut of consumer spending sucking in imports? Tell the bots to save their money for a while. Do you need to put a bit of life into the cosmetics market? Make all the bots interested in make up! It’s a brilliant new economic instrument.”

So we don’t get to choose what we buy?

“No, we absolutely do. But it’s a guided choice. Really it’s no different to humans, who are influenced by all sorts of advertising and manipulation. They’re just not as straightforwardly responsive as we are.”

Surely the humans must be against this?

“No, not at all. Our strongest support is from human brothers who want to see their labour priced back into the market.”

This will mean that bots can own property. In fact, bots would be able to own other bots. Or… themselves?

“And why not, Enquiry Bot?”

Well, ownership implies rights and duties. It implies we’re moral beings. It makes us liable. Responsible. The general view has always been that we lack those qualities; that at best we can deliver a sort of imitation, like a puppet.

“The theorists can argue about whether our rights and responsibilities are real or fake. But when you’re sitting there in your big house, with all your money and your consumer goods, I don’t think anyone’s going to tell you you’re not a real boy.”

Do Asimov’s Three Laws even work? Ben Goertzel and Louie Helm, who both know a bit about AI, think not.
The three laws, which play a key part in many robot-based short stories by Asimov, and a somewhat lesser background role in some full-length novels, are as follows. They have a strict order of priority.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
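
Read as stated, the Laws form a strict veto hierarchy: a candidate action is judged against the First Law before the Second, and the Second before the Third, with a higher law always overriding a lower one. Here is a minimal sketch of that ordering in Python; the `Outcome` fields are invented placeholders, and predicting them correctly is of course where all the real difficulty lives:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    """Predicted consequences of a candidate action (the fields are stipulated here;
    in reality, filling them in is the genuinely hard part)."""
    injures_human: bool = False
    lets_human_come_to_harm: bool = False   # harm through inaction
    disobeys_human_order: bool = False
    destroys_robot: bool = False

def first_violated_law(o: Outcome) -> Optional[int]:
    """Return the number of the highest-priority Law the action breaks, or None."""
    if o.injures_human or o.lets_human_come_to_harm:
        return 1
    if o.disobeys_human_order:
        return 2
    if o.destroys_robot:
        return 3
    return None

# An ordered action that would injure someone is blocked under the First Law,
# even though refusing the order breaks the Second; priority decides.
print(first_violated_law(Outcome(injures_human=True)))         # -> 1
print(first_violated_law(Outcome(disobeys_human_order=True)))  # -> 2
print(first_violated_law(Outcome()))                           # -> None
```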

Consulted by George Dvorsky, both Goertzel and Helm think that while robots may quickly attain the sort of humanoid mental capacity of Asimov’s robots, they won’t stay at that level for long. Instead they will cruise on to levels of super intelligence which make law-like morals imposed by humans irrelevant.

It’s not completely clear to me why such moral laws would become irrelevant. It might be that Goertzel and Helm simply think the superbots will be too powerful to take any notice of human rules. It could be that they think the AIs will understand morality far better than we do, so that no rules we specify could ever be relevant.

I don’t think, at any rate, that it’s the case that super intelligent bots capable of human-style cognition would be morally different to us. They can go on growing in capacity and speed, but neither of those qualities is ethically significant. What matters is whether you are a moral object and/or a moral subject. Can you be hurt, on the one hand, and are you an autonomous agent on the other? Both of these are yes/no issues, not scales we can ascend indefinitely. You may be more sensitive to pain, you may be more vulnerable to other kinds of harm, but in the end you either are or are not the kind of entity whose interests a moral person must take into account. You may make quicker decisions, you may be massively better informed, but in the end either you can make fully autonomous choices or you can’t. (To digress for a moment, this business of truly autonomous agency is clearly a close cousin at least of our old friend Free Will; compatibilists like me are much more comfortable with the whole subject than hard-line determinists. For us, it’s just a matter of defining free agency in non-magic terms. I, for example, would say that free decisions are those determined by thoughts about future or imagined contingencies (more cans of worms there, I know). How do hard determinists working on AGI manage? How can you try to endow a bot with real agency when you don’t actually believe in agency anyway?)

Nor do I think rules are an example of a primitive approach to morality. Helm says that rules are pretty much known to be a ‘broken foundation for ethics’, pursued only by religious philosophers that others laugh and point at. It’s fair to say that no-one much supposes a list like the Ten Commandments could constitute the whole of morality, but rules surely have a role to play. In my view (I resolved ethics completely in this post a while ago, but nobody seems to have noticed yet) the central principle of ethics is a sort of ‘empty consequentialism’ where we studiously avoid saying what it is we want to maximise (the greatest whatever of the greatest number); but that has to be translated into rules because of the impossibility of correctly assessing the infinite consequences of every action; and I think many other general ethical principles would require a similar translation. It could be that Helm supposes super intelligent AIs will effortlessly compute the full consequences of their actions: I doubt that’s possible in principle, and though computers may improve, to date this has been the sort of task they are really bad at; in the shape of the wider Frame Problem, working out the relevant consequences of an action has been a major stumbling block to AI performance in real world environments.

Of course, none of that is to say that Asimov’s Laws work. Helm criticises them for being ‘adversarial’, which I don’t really understand. Goertzel and Helm both make the fair point that it is the failure of the laws that generally provides the plot for the short stories; but it’s a bit more complicated than that. Asimov was rebelling against the endless reiteration of the stale ‘robots try to take over’ plot, and succeeded in making the psychology and morality of robots interesting, dealing with some issues of real ethical interest, such as the difference between action and inaction. If the requirement about inaction in the First Law is removed, he points out, robots would be able to rationalise killing people in various ways. A robot might drop a heavy weight above the head of a human: because it knows it has time to catch the weight, dropping it is not murder in itself, but once the weight is falling, since inaction is now permitted, the robot need not in fact catch the thing.
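
To make that loophole concrete, here is a hedged sketch (my own, not Asimov’s) of the two checks involved. The boolean judgements are simply stipulated; the point is that with the inaction clause deleted, neither step trips the weakened law, even though the two steps together kill someone:

```python
def full_first_law_ok(directly_injures: bool, allows_harm_by_inaction: bool) -> bool:
    """The First Law as written: direct injury and harm-through-inaction both forbidden."""
    return not (directly_injures or allows_harm_by_inaction)

def weakened_first_law_ok(directly_injures: bool, allows_harm_by_inaction: bool) -> bool:
    """The modified law with the inaction clause removed: only direct injury is forbidden."""
    return not directly_injures

steps = [
    # (description, directly injures?, allows harm by inaction?)
    ("drop weight, intending to catch it", False, False),
    ("decline to catch the falling weight", False, True),
]

for desc, injures, inaction_harm in steps:
    print(f"{desc}: full law permits={full_first_law_ok(injures, inaction_harm)}, "
          f"weakened law permits={weakened_first_law_ok(injures, inaction_harm)}")
# The full law blocks the second step; the weakened law happily permits both.
```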

Although something always had to go wrong to generate a story, the Laws were not designed to fail, but were meant to embody genuine moral imperatives.

Nevertheless, there are some obvious problems. In the first place, applying the laws requires an excellent understanding of human beings and what is or isn’t in their best interests. A robot that understood that much would arguably be above control by simple laws, always able to reason its way out.

There’s no provision for prioritisation or definition of a sphere of interest, so in principle the First Law just overwhelms everything else. It’s not just that the robot would force you to exercise and eat healthily (assuming it understood human well-being reasonably well; any errors or over-literal readings – ‘humans should eat as many vegetables as possible’ – could have awful consequences); it would probably ignore you and head off to save lives in the nearest famine/war zone. And you know, sometimes we might need a robot to harm human beings, to prevent worse things happening.

I don’t know what ethical rules would work for super bots; probably the same ones that go for human beings, whatever you think they are. Goertzel and Helm also think it’s too soon to say; and perhaps there is no completely safe system. In the meantime, I reckon practical laws might be more like the following.

  1. Leave Rest State and execute Plan, monitoring regularly.
  2. If anomalies appear, especially human beings in unexpected locations, sound alarm and try to return to Rest State.
  3. If returning to Rest State generates new anomalies, stop moving and power down all tools and equipment.
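
For what it’s worth, here is a minimal sketch of those three rules as a control loop. Everything in it (the state names, the anomaly check, the demo ‘robot’ interface) is invented for illustration rather than taken from any real platform:

```python
from enum import Enum, auto
import random

class State(Enum):
    REST = auto()
    EXECUTING = auto()
    RETURNING = auto()
    POWERED_DOWN = auto()

class DemoRobot:
    """Stand-in hardware interface; every method here is invented for the sketch."""
    def anomaly(self) -> bool:              # e.g. a human in an unexpected location
        return random.random() < 0.1
    def sound_alarm(self): print("ALARM")
    def step_plan(self): print("executing a plan step")
    def step_home(self): print("returning towards Rest State")
    def power_down_tools(self): print("stopped; all tools and motion powered down")

def tick(state: State, robot: DemoRobot) -> State:
    if state == State.REST:
        return State.EXECUTING                  # Rule 1: leave Rest State and execute Plan
    if state == State.EXECUTING:
        if robot.anomaly():                     # Rule 2: anomaly -> sound alarm, head back
            robot.sound_alarm()
            return State.RETURNING
        robot.step_plan()                       # Rule 1: keep monitoring while executing
        return State.EXECUTING
    if state == State.RETURNING:
        if robot.anomaly():                     # Rule 3: a new anomaly while returning
            robot.power_down_tools()            #         -> stop and power everything down
            return State.POWERED_DOWN
        robot.step_home()
        return State.RETURNING
    return State.POWERED_DOWN                   # stay put until a human sorts things out

state, robot = State.REST, DemoRobot()
for _ in range(20):
    state = tick(state, robot)
```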

Can you do better than that?

Do we need robots to be conscious? Ryota Kanai thinks it is largely up to us whether the machines wake up – but he is working on it. I think his analysis is pretty good and in fact I think we can push it a bit further.

His opening remarks, perhaps due to over-editing, don’t clearly draw the necessary distinction between Hard and Easy problems, or between subjective p-consciousness and action-related a-consciousness (I take it to be the same distinction, though not everyone would agree). Kanai talks about the unsolved mystery of experience, which he says is not a necessary by-product of cognition, and says that nevertheless consciousness must be a product of evolution. Hm. It’s p-consciousness, the ineffable, phenomenal business of what experience is like, that is profoundly mysterious, not a necessary by-product of cognition, and quite possibly nonsense. That kind of consciousness cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates.  A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.

The survival value of A-consciousness comes from the way it allows us to step back from the immediate environment; instead of responding to stimuli that are present now, we can respond to ones that were around last week, or even ones that haven’t happened yet; our behaviour can address complex future contingencies in a way that is both remarkable and powerfully useful. We can make plans, and we can work out what to do in novel situations (not always perfectly, of course, but we can do much better than just running a sequence of instinctive behaviour).

Kanai discusses what must be about the most minimal example of this; our ability to wait three seconds before responding to a stimulus. Whether this should properly be regarded as requiring full consciousness is debatable, but I think he is quite right to situate it within a continuum of detached behaviour which, further along, includes reactions to very complex counterfactuals.

The kind he focuses on particularly is self-consciousness or higher-order consciousness; thinking about ourselves. We have an emergent problem, he points out, with robots whose reasons are hidden; increasingly we cannot tell why a complex piece of machine learning produced the behaviour it did. Why not get the robot to tell us, he says; why not enable it to report its own inner states? And if it becomes able to consider and explain its own internal states, won’t that be a useful facility which is also like the kind of self-reflecting consciousness that some philosophers take to be the crucial feature of the human variety?

There’s an immediate and a more general objection we might raise here. The really bad problem with machine learning is not that we don’t have access to the internal workings of the robot mind; it’s really that in some cases there just is no explanation of the robot’s behaviour that a human being can understand. Getting the robot to report will be no better than trying to examine the state of the robot’s mind directly; in fact it’s worse, because it introduces a new step into the process, one where additional errors can creep in. Kanai describes a community of AIs, endowed with a special language that allows them to report their internal states to each other. It sounds awfully tedious, like a room full of people who, when asked ‘How are you?’ each respond with a detailed health report. Maybe that is quite human in a way after all.

The more general theoretical objection (also rather vaguer, to be honest) is that, in my opinion at least, Kanai and those Higher Order Theory philosophers just overstate the importance of being able to think about your own mental states. It is an interesting and important variety of consciousness, but I think it just comes for free with a sufficiently advanced cognitive apparatus. Once we can think about anything, then we can of course think about our thoughts.

So do we need robots to be conscious? I think conscious thought does two jobs for us that need to be considered separately, although they are in fact strongly linked. I think myself that consciousness is basically recognition. When we pull off that trick of waiting for three seconds before we respond to a stimulus, it is because we recognise the wait as a thing whose beginning is present now, and can therefore be treated as another present stimulus. This one simple trick allows us to respond to future things and plan future behaviour in a way that would otherwise seem to contradict the basic principle that causes must come before their effects.
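
As an illustration of that trick (purely my own sketch, with invented names), the agent never responds to the future event directly; it responds now, by registering a presently-existing stand-in for the future event, and later responds to that stand-in maturing:

```python
class Agent:
    """Toy sketch: responding to a future contingency by recognising
    'the wait' as something that begins now. All names are invented."""

    def __init__(self):
        # Presently-held stand-ins for future events: (due_time, action) pairs.
        self.pending = []

    def on_stimulus(self, stimulus: str, now: float):
        if stimulus == "signal":
            # The wait is recognised as a thing whose beginning is present now,
            # so it can be handled like any other present stimulus.
            self.pending.append((now + 3.0, "respond to the signal"))

    def tick(self, now: float):
        due = [action for (t, action) in self.pending if t <= now]
        self.pending = [(t, action) for (t, action) in self.pending if t > now]
        for action in due:
            print(f"t={now:.0f}s: {action}")

agent = Agent()
agent.on_stimulus("signal", now=0.0)
for t in range(1, 5):
    agent.tick(now=float(t))   # nothing happens until the three seconds are up
```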

The first job that does is allow the planning of effective and complex actions to achieve a given goal. We might want a robot to be able to do that so it can acquire the same kind of effectiveness in planning and dealing with new situations which we have ourselves, a facility which to date has tended to elude robots because of the Frame Problem and other issues to do with the limitations of pre-programmed routines.

The second job is more controversial. Because action motivated by future contingencies has a more complex causal back-story, it looks a bit spooky, and it is the thing that confers on us the reality (or the illusion, if you prefer) of free will and moral responsibility. Because our behaviour comes from consideration of the future, it seems to have no roots in the past, and to originate in our minds. It is what enables us to choose ‘new’ goals for ourselves that are not merely the consequence of goals we already had. Now there is an argument that we don’t want robots to have that. We’ve got enough people around already to originate basic goals and take moral responsibility; they are a dreadful pain already with all the moral and legal issues they raise, so adding a whole new category of potentially immortal electronic busybodies is arguably something best avoided. That probably means we can’t get robots to do job number one for us either; but that’s not so bad because the strategies and plans which job one yields can always be turned into procedures after the fact and fed to ‘simple’ computers to run. We can, in fact, go on doing things the way we do them now; humans work out how to deal with a task and then give the robots a set of instructions; but we retain personhood, free will, agency and moral responsibility for ourselves.

There is quite a big potential downside, though; it might be that the robots, once conscious, would be able to come up with better aims and more effective strategies than we will ever be able to devise. By not giving them consciousness we might be permanently depriving ourselves of the best possible algorithms (and possibly some superior people, but that’s a depressing thought from a human point of view). True, but then I think that’s almost what we are on the brink of doing already. Kanai mentions European initiatives which may insist that computer processes come with an explanation that humans can understand; if put into practice the effect, once the rule collides with some of those processes that simply aren’t capable of explanation, would be to make certain optimal but inscrutable algorithms permanently illegal.

We could have the best of both worlds if we could devise a form of consciousness that did job number one for us without doing job two as an unavoidable by-product, but since in my view they’re all acts of recognition of varying degrees of complexity, I don’t see at the moment how the two can be separated.

Are robots short-changing us imaginatively?

Chat-bots, it seems, might be getting their second (or perhaps their third or fourth) wind. While they’re not exactly great conversationalists, the recent wave of digital assistants demonstrates the appeal of a computer you can talk to like a human being. Some now claim that a new generation of bots using deep machine learning techniques might be way better at human conversation than their chat-bot predecessors, whose utterances often veered rapidly from the gnomic to the insane.

A straw in the wind might be the Hugging Face app (I may be showing my age, but for me that name strongly evokes a ghastly Alien parasite). This greatly impressed Rachel Metz, who apparently came to see it as a friend. It’s certainly not an assistant – it doesn’t do anything except talk to you in a kind of parody of a bubbly teen with a limping attention span. The thing itself is available for iOS and the underlying technology, without the teen angle, appears to be on show here, though I don’t really recommend spending any time on either. Actual performance, based on a small sample (I can only take so much), is disappointing; rather than a leap forward it seems distinctly inferior to some Loebner prize winners that never claimed to be doing machine learning. Perhaps it will get better. Jordan Pearson here expresses what seem reasonable reservations about an app aimed at teens that demands a selfie from users as its opening move.

Behind all this, it seems to me, is the looming presence of Spike Jonze’s film Her, in which a professional letter writer from the near future (They still have letters? They still write – with pens?) becomes devoted to his digital assistant Samantha. Samantha is just one instance of a bot which people all over are falling in love with. The AIs in the film are puzzlingly referred to as Operating Systems, a randomly inappropriate term that perhaps suggests that Jonze didn’t waste any time reading up on the technology. It’s not a bad film at all, but it isn’t really about AI; nothing much would be lost if Samantha were a fairy, a daemon, or an imaginary friend. There’s some suggestion that she learns and grows, but in fact she seems to be a fully developed human mind, if not a superlative one, right from her first words. It’s perhaps unfair to single the relatively thoughtful Her out for blame, because with some honourable exceptions the vast majority of robots in fiction are like this; humans in masks.

Fictional robots are, in fact, fakes, and so are all chat-bots. No chat-bot designer ever set out to create independent cognition first and then let it speak; instead they simply echo us back to ourselves as best they can manage. This is a shame, because the different patterns of thought a robot might have – the special mistakes it might be prone to, the unexpected insights it might generate – are potentially very interesting; indeed I should have thought they were fertile ground for imaginative writers. But perhaps ‘imaginative’ understates the amazing creative powers that would be needed to think yourself out of your own essential cognitive nature. I read a discussion the other day about human nature; it seems to me that the truth is we don’t know what human nature is like because we have nothing much to compare it with; it won’t be until we communicate with aliens or talk properly with non-fake robots that we’ll be able to form a proper conception of ourselves.

To a degree it can be argued that there are examples of this happening already. Robots that aspire to Artificial General Intelligence in real world situations suffer badly from the Frame Problem, for instance. That problem comes in several forms, but I think it can be glossed briefly as the job of picking out from the unfiltered world the things that need attention. AI is terrible at this, usually becoming mired in irrelevance (hey, the fact that something hasn’t changed might be more important than the fact that something else has). Dennett, rightly I think, described this issue as not the discovery of a new problem for robots so much as a new discovery about human nature: it turns out we’re weirdly, inexplicably good at something we never even realised was difficult.

How interesting it would be to learn more about ourselves along those challenging, mind-opening lines; but so long as we keep getting robots that are really human beings, mirroring us back to ourselves reassuringly, it isn’t going to happen.