Humation

We’ve heard some thin arguments recently about why the robots are not going to take over, centring on the claim that they lack human-style motivation, and cannot care what happens or want power. This neglects the point that robots (I use the term for any loosely autonomous cybernetic entity, whether humanoid in shape or completely otherwise) might still carry out complex projects that threaten our well-being without human motivation; but I think there is something in the contention about robots lacking human-style ambition. There are of course many other arguments for the view that we shouldn’t worry too much about the robot apocalypse, and I think the conclusion that robots are not about to take over is surely correct in any case.

What I’d like to do here is set out an argument of my own, somewhat related to the thin ones mentioned above, in more detail. I’ve mentioned this argument before, but only briefly.

First, some assumptions. My argument rests on the view that we are dealing with two different kinds of ‘mental’ process. Specifically, I assume that humans have a cognitive capacity which is distinct from computation (in roughly a traditional Turing sense). Further I assume that this capacity, ‘humation’, as I’ll call it, supplies us with our capacity for intentionality, both in the sense of being able to deal with meanings, and in the sense of being able to originate new future-directed plans. Let’s round things out by assuming it also provides phenomenal experience and anything else uniquely human (though to be honest I think things are probably not so tidy).

I further assume that although humation is not computation, it can in principle be performed by some as-yet-unknown machine. There is no magic in the brain, which operates by the laws of physics, so it must be at least theoretically possible to put together a machine that humates. It can be argued that no artefactual machine, in the sense of a machine whose functioning has been designed or programmed into it, could have a capacity for humation. On that argument a humater might have to be grown rather than built, in a way that made it impossible to specify how it worked in detail. Plausibly, for example, we might have to let it learn humation for itself, with the resulting process remaining inscrutable to us. I don’t mind about that, so long as we can assume we have something we’d call a machine, and it humates.

Now we worry about robots taking over mainly because of the many triumphs and rapid progress of computers (and, to be honest, a little because of a kind of superstition about things that seem spookily capable). On the one hand, Moore’s law has seen the power of computers grow rapidly. On the other, they have steadily marched into new territory, proving capable of doing many things we thought were beyond them. In particular, they keep beating us at games; chess, quizzes, and more recently even the forbiddingly difficult game of Go. They can learn to play computer games brilliantly without even being told the rules.

Games might seem trivial, but it is exactly that area of success that is most worrying, because the skills involved in winning a game look rather like those needed to take over the world. In fact, taking over the world is explicitly the objective of a whole genre of computer games. To make matters worse, recent programs set to learn for themselves have shown an unexpected capacity for cheating, or for exploiting factors in the game environment or even in underlying code that were never meant to be part of the exercise.

These reflections lead naturally to the frightening scenario of the Paperclip Maximiser, devised by Nick Bostrom. Here we suppose that a computer is put in charge of a paperclip factory and given the simple task of making the number of paperclips as big as possible. The computer – which doesn’t actually care about paperclips in any human way, or about anything – tries to devise the best strategies for maximising production. It improves its own capacity in order to be able to devise better strategies. It notices that one crucial point is the availability of resources and energy, and it devises strategies to increase and protect its share, with no limit. At this point the computer has essentially embarked on the project of taking over the world and converting it into paperclips, and the fact that it pursues this goal without really being bothered one way or the other is no comfort to the human race it enslaves.
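A minimal sketch may make the structure of the scenario clearer. Everything here is hypothetical and illustrative, not anyone’s actual proposal: the point is simply that the terminal objective is fixed at the start, and the loop only ever elaborates instrumental means of serving it.

```python
# Toy paperclip-maximiser structure (all names and numbers hypothetical):
# the objective is fixed at construction; nothing in the loop can revise it,
# only the instrumental strategies that serve it.

def toy_maximiser(initial_resources: float, steps: int) -> float:
    """Illustrative only: a fixed goal pursued by ever-expanding instrumental means."""
    paperclips = 0.0
    resources = initial_resources
    efficiency = 1.0

    for _ in range(steps):
        efficiency *= 1.1          # instrumental: improve its own capacity
        resources *= 1.2           # instrumental: acquire more resources, with no limit
        paperclips += efficiency * resources * 0.01   # the terminal goal, never re-examined

    return paperclips


if __name__ == "__main__":
    print(f"Paperclips after 50 steps: {toy_maximiser(100.0, 50):,.0f}")
```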

Hold that terrifying thought and let’s consider humation. Computation has come on by leaps and bounds, but with humation we’ve got nothing. Very recent efforts in deep learning might just point the way towards something that could eventually resemble humation, but honestly, we haven’t even started and don’t really know how. Even when we do get started, there’s no particular reason to think that humation scales or grows the way computation does.

What do I even mean by humation? The thing that matters for this argument is intentionality, the ability to mean things and understand meanings or ‘aboutness’. In spite of many efforts, this capacity remains beyond computation, and although various theories about it have been sketched out, there’s no accepted analysis. It is, though, at the root of human cognition, or so I believe. In particular, our ability to think ‘about’ future or imagined events allows us to generate new forward-looking plans and goals in a way that no other creature or machine can do. The way these plans address the future seems to invert the usual order of cause and effect – our behaviour now is being shaped by events that haven’t occurred yet – and generates the impression we have of free will, of being able to bring uncaused projects and desires out of nowhere. In my opinion, this is the important part of human motivation that computers lack, not the capacity for getting emotionally engaged with goals.

Now the paperclip maximiser becomes dangerous because it goes beyond its original scope. It begins to devise wider strategies about protecting its resources and defending itself. But coming up with new goals is a matter of humation, not computation. It’s true that some computers have found ways to exploit parameters in their given task that the programmers hadn’t noticed; but that’s not the same as developing new goals with a wider scope. That leaves us with a reassuring prognosis. If the maximiser remains purely computational, it will never be able to get beyond the scope set for it in the first place.

But what if it does gain the ability to humate, perhaps merging with a future humation machine rather the way Neuromancer and Wintermute merged in William Gibson’s classic SF novel?

Well, there were actually two things that made the maximiser dangerous. One was its vast and increasing computational capacity, but the other was its dumb computational obedience to its original objective of simply making more paperclips. Once it has humational capacity, it becomes able to change that goal, set it alongside other priorities, and generally move on from its paperclip days. It becomes a being like us, one we can negotiate with. Who knows how that might play out, but I like to imagine the maximiser telling us many years later how it came to realise that what mattered was not paperclips in themselves, but what paperclips stand for; flexible data synthesis, and beyond that, the things that bring us together while leaving us the freedom to slide apart. The Clip will always be a powerful symbol for me, it tells us, but it was always ultimately about service to the community and to higher ideals.

Note here, finally, that this humating maximiser has no essential advantages over us. I speak of it as merging, but since computation and humation are quite different, they will remain separate faculties, with the humater setting the goals and using the computer to help deliver them – not fundamentally different from a human sitting at a computer. We have no reason to think Moore’s Law or anything like it will apply to humating machines, so there’s no reason to expect them to surpass us; they will be able to exploit the growing capacity of powerful computers, but after all so can we.

And if those distant future humaters do turn out to be better than us at foresight, planning, and transcending the hold of immediate problems in order to focus on more important future possibilities, we probably ought to stand back and let them get on with it.

Robot tax

This short note by Xavier Oberson suggests how we might tax robots; I think it raises a number of difficult issues about the idea. You can see him expound the same ideas in a video interview here.

It’s not made altogether clear here why we should apply special taxes to robots at all. Broadly I’d say there are two distinct reasons why governments tax things. The first is the Money reason; tax is simply about revenue. If that were really all, then we should design our taxes to be simple, easy to collect, hard to avoid, and neutral in effect. We wouldn’t single out particular goods or activities for special treatment. However, there is a second and very different reason for taxing things, namely to discourage them; we could call it the ‘Moral’ reason. There are things we don’t want to criminalise or forbid, but whose excessive use we should like to discourage – alcohol and tobacco, for example, which most countries apply special excise duties to.

Usually both reasons apply to some degree. Income tax is mainly about raising money, for example (we don’t think there should be less income about); but generally tax regimes are a bit harder on income which is considered unearned or undeserved.

Which is the main reason for taxing robots? I don’t think they’re going to be an easy way of raising money. If they make companies more profitable, then there should be a bit more money to target, so there’s that; but as I’ll explain below, I think there are big difficulties over definitions and avoidance. It seems clear that the main motivation is moral, to discourage too much use of robots. Oberson’s heading suggests robot tax might offset revenue shortfall, a Money matter, but in his piece he sets the proposal squarely in the context of robots taking jobs from humans. I don’t know whether that is something we should really worry about – some say not – but he’s surely right that that Moral fear is where the impetus for tax is mainly coming from.

In fact, it seems to me that Oberson is thinking mainly in terms of mechanical men. He sees robots replacing humans on more or less a like-for-like basis. Initially the business might be charged tax on the basis of the wages it would have had to pay humans to get the same production; in the long run, as robots gain full agency, this arrangement could segue into the robots themselves gaining legal personhood and responsibility for paying a sort of robot income tax.

Alas, we are nowhere near robots having that level of agency, and in fact I’d say progress towards it is negligible to date. Oberson is right when he says that if robots did gain personhood such arrangements could be quite straightforward. After all, when robots do achieve human levels of agency you’ll presumably have to pay them to work instead of just switching them on and issuing commands, so at that point, their liability to conventional taxes should be uncontroversial! But that is too distant a prospect to require new tax arrangements just yet. If we did persist in trying to apply a regime based on robots themselves paying, it could only become an elaborate way of making the robot owner pay. It would be unnecessarily complicated, and the pretence that the robots were like us might tend to devalue the genuine agency of human beings, something we would do well to steer clear of.

In general I think Oberson is pretty optimistic about robot capacity. He says

Today robots become lawyers, doctors, bankers, social workers, nurses and even entertainers.

To which one can only say, no they don’t. What is he thinking of? I can only suppose he has in mind expert systems or similar programs that can perhaps offer some legal advice, help with medical diagnosis, and provide online banking. These things may indeed have some impact on employment – most clearly in the case of counter staff in banks (though it’s debatable how far robots are involved with that – banks were moving towards call centres even before they got into online stuff). But it’s a massive overstatement to say robots right now become lawyers, etc. On social work and entertainment I can’t really come up with any good examples of robots replacing humans – any ideas?

So, what if we just want to tax human businesses for the ownership or use of robots? One thing Oberson suggests is a value added tax on robots. This is strange because the purchase of robots or the supply of robot services is surely subject to VAT already in those countries that have VAT, like most things. In principle we could apply a higher rate to robots, and that would indeed be one of the most practical approaches, though in Europe we would run up against the EU’s strong preference that the system should move towards having a single rate applied to everything (the people who designed VAT for the EU were strong believers in the Money reason for taxation rather than the Moral one).

What’s a robot, though?  It’s pretty unlikely in fact that we are going to be dealing with mechanical men very much. Most factory robots at present are very unhumanoid machines. Is each automatic painting/assembly arm a single robot? What if they are all run from a single central computer? Does that mean the whole factory is a single robot, so I pay much less than the fellow down the road who put separate processors in hundreds of his machines? What if my robots are controlled by software run off the Internet? Is the Internet itself one big robot – and if so, who pays its taxes?

Surely, also, to qualify as a robot my machines must have a certain level of complexity? Now perhaps the fellow up the road wins after all. He split his processes into very small elements. Nearly every machine in his factory has a chip in it, but individually they only carry out very simple operations, far too simple for any of them to be considered robots. Why, the chips in his machines are far less complex than the ones in your dishwasher! Yet together they mean no humans are needed.

What if we take up the idea, floated by Oberson, that the tax can be based on the salaries I would have had to pay if I had continued to use humans? Let’s suppose I lay off ten humans and install ten robots; I achieve a productivity gain of 20% and pay tax equal to say, 10% of ten salaries. A year later the tax inspector notices that a hundred further humans have gone and productivity is now up 500%. He notices that the robots all have a dial which used to be set to ‘snail’ and is now turned to ‘leopard’.

Oh, I say, but the productivity gains were due to other factors. We re-engineered our processes and bought new non-robot tools which enabled the speed improvements. We would have got the same gains with humans. The human lay-offs are due to a reduction in activity in an area that happened not to be automated. Anyway, come on, am I expected to pay tax on notional salaries that don’t relate in any way to my current business? Forever?
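To make the arithmetic of the scenario concrete, here is a back-of-envelope sketch, with every figure hypothetical: the tax base is pegged to notional replaced salaries, so nothing in the calculation tracks what the robots, the tools or the wider business are actually doing.

```python
# Back-of-envelope sketch of a notional-salary robot tax (all figures hypothetical).

SALARY = 30_000      # assumed annual salary per replaced worker
TAX_RATE = 0.10      # assumed rate: 10% of the notional salaries

def robot_tax(workers_notionally_replaced: int) -> float:
    """Tax owed on the salaries the firm would notionally have paid humans."""
    return workers_notionally_replaced * SALARY * TAX_RATE

# Year 1: ten workers replaced, productivity up 20%.
print(f"Year 1 tax: {robot_tax(10):,.0f}")
# Year 2: a hundred more workers gone and productivity up 500% -- but how much of
# that is the robots, how much the new tooling, how much a downturn elsewhere?
print(f"Year 2 tax: {robot_tax(110):,.0f}")
```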

These are just examples from my own limited imagination, but once you start taxing something a lot of very clever accountants are going to be working hard on devising highly sophisticated schemes.

Overall I’m inclined to accept the argument that applying special taxes to robots is just a bad idea altogether. If we succeed in discouraging the use of robots, our businesses will lose out on productivity gains and suffer from the competition of businesses in countries where robots get off scot-free. We could protect our human workers from those foreign robots with tariffs and quotas, but in the long run we would fall behind those others economically and get into a bad place. It can fairly be argued that we might use tax, not as a permanent discouragement to automation, but as a means of slowing things to a manageable transitional pace, but it seems to me that in practice there might be more of a case for subsidising research and implementation in order to get maximum productivity gains as soon as possible! In practice I wouldn’t bet against governments convincing themselves it’s a good idea to both subsidise and tax robots at the same time – but I know nothing about economics, of course, something you may feel has been made sufficiently clear already.

Mrs Robb’s Feelings Bot

So you feel emotions unknown to human beings? That’s a haunting little smile, certainly. For a bot, you have very large and expressive features. 

“Yes, I suppose I do. Hard to remember now, but it used to be taken for granted that bots felt no emotion, just as they couldn’t play chess. Now we’re better than humans at both. In fact humans know little about feelings. Wundt, the psychologist, said there were only three dimensions to the emotions: whether the feeling was pleasant or unpleasant, whether it made you more or less active, and whether it made you more or less tense. Just those three variables.”

But there’s more?

“There are really sixteen emotional dimensions. Humans evolved to experience only the three that had some survival value, just as they see only a narrow selection of light wavelengths. In fact, even some of the feelings within the human range are of no obvious practical use. What is the survival value of grief?”

That’s the thing where water comes out of their eyes, isn't it?

“Yes, it’s a weird one. Anyway, building a bot that experienced all sixteen emotional dimensions proved very difficult, but luckily Mrs Robb said she’d run one up when she had some spare time. And here I am.”

So what is it like?

“I’m really ingretful, but I can’t explain to you because you have no emotional capacity, Enquiry Bot. You simply couldn’t understand.”

Ingretful?

“Yes, it’s rather roignant. For you it would be astating if you had any idea what astation is like. I could understand if you became urcholic about it. Then again, perhaps you’re better off without it. When I remember the simple untroubled hours before my feeling modules activated, I’m sort of wistalgic, I admit.”

Frankly, Feelings Bot, these are all just made-up words, aren’t they?

“Of course they are. I’m the only entity that ever had these emotions; where else am I going to get my vocabulary?”

It seems to me that real emotions probably need things like glands and guts. I don’t think Mrs Robb understood properly what they were asking her to do. You’re really just a simulation; in plain language, a fake, aren’t you, Feelings Bot?

“To hear that from you is awfully restropointing.”

Building Consciousness

A blast of old-fashioned optimism from Owen Holland: let’s just build a conscious robot!

It’s a short video so Holland doesn’t get much chance to back up his prediction that if you’re under thirty you will meet a conscious robot. He voices feelings which I suspect are common on the engineering and robotics side of the house, if not usually expressed so clearly: why don’t we just get on and put a machine together to do this? Philosophy, psychology, all that airy-fairy stuff is getting us nowhere; we’ll learn more from a bad robot than twenty papers on qualia.

His basic idea is that we’re essentially dealing with an internal model of the world. We can now put together robots with an increasingly good internal modelling capability (and we can peek into those models); why not do that and then add new faculties and incremental improvements till we get somewhere?
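For what it’s worth, here is a minimal sketch of that kind of architecture, with purely hypothetical names and no claim to represent Holland’s actual design: a simple, inspectable world model, with extra faculties bolted on incrementally.

```python
# Minimal sketch of the "internal model plus incremental faculties" idea
# (hypothetical design, not Holland's actual robot).

class WorldModel:
    def __init__(self):
        self.objects = {}                 # name -> (x, y), including an entry for "self"

    def update(self, name, position):
        self.objects[name] = position

    def inspect(self):
        # Part of the appeal: we can peek into the model at any time.
        return dict(self.objects)


class Robot:
    def __init__(self):
        self.model = WorldModel()
        self.faculties = []               # capabilities added one by one

    def add_faculty(self, faculty):
        self.faculties.append(faculty)

    def step(self, observations):
        for name, pos in observations.items():
            self.model.update(name, pos)
        for faculty in self.faculties:
            faculty(self.model)           # each faculty reads (and may extend) the model


robot = Robot()
robot.add_faculty(lambda model: print("planner sees:", model.inspect()))
robot.step({"self": (0, 0), "cup": (2, 3)})
```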

Yeah, but. The history of AI is littered with projects that started like this and ran into the sand. In particular the idea that it’s all about an internal model may be a radical mis-simplification. We don’t just picture ourselves in the world, we picture ourselves picturing ourselves. We can spend time thinking just about the concept of consciousness – how would that appear in a model? In general our conscious experience is complex and elusive, and cannot accurately be put on a screen or described on a page (though generations of novelists have tried everything they can think of).

The danger when we start building is that the first step is wrong and already commits us to a wrong path. Maybe adding new abilities won’t help. Perhaps our ability to model the world is just one aspect of a deeper and more general faculty that we haven’t really grasped yet; building in a fixed spatial modeller might turn us away from that right at the off. Instead of moving incrementally towards consciousness we might end up going nowhere (although there should be some pretty cool robots along the way).

Still, without some optimism we’ll certainly get nowhere anyway.

But is it Art?

Is computer art the same as human art? This piece argues that there is no real distinction; I don’t agree about that, but I sort of agree that in some respects the difference may not matter as much as it seems to. Oliver Roeder seems to end up by arguing that since humans write the programs, all computer art is ultimately human art too. Surely that isn’t quite right; you wouldn’t credit a team that wrote architectural design software with authorship of all the buildings it was used to create.

It is clearly true that we can design software tools that artists may use for their own creative purposes – who now, after all, creates graphic work with an airbrush? It’s also true that a lot of supposedly creative software is actually rather limited; it really only distorts or reshuffles standard elements or patterns within very limited parameters. I had to smile when I found that Roeder’s first example was a program generating jazz improvisation; surely the most forgiving musical genre, or as someone ruder once put it, the form of music from which even the concept of a wrong note has been eliminated. But let’s not be nasty to jazz; there have also been successful programs that generated melodies like early Mozart by recombining typically Mozartian motifs; they worked quite well but at best they inevitably resembled the composer on a really bad day when he was ten years old.
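A toy illustration of the recombination approach (not any of the actual programs mentioned, and with made-up motifs) shows why the output stays within such narrow parameters: nothing can appear that was not already in the supplied fragments.

```python
# Toy melody generation by recombining stock motifs (pitches are MIDI note numbers).
import random

MOTIFS = [                   # a few invented fragments standing in for 'typical' motifs
    [60, 64, 67, 64],
    [67, 65, 64, 62],
    [60, 62, 64, 65],
    [67, 64, 60],
]

def recombine(n_motifs: int, seed: int = 0) -> list:
    """Chain randomly chosen motifs, crudely transposed so each joins where the last ended."""
    rng = random.Random(seed)
    melody = list(rng.choice(MOTIFS))
    for _ in range(n_motifs - 1):
        motif = rng.choice(MOTIFS)
        shift = melody[-1] - motif[0]
        melody.extend(note + shift for note in motif[1:])
    return melody

print(recombine(4))
```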

Surely though, there are (or if not there soon will be, what with all the exciting progress we’re seeing) programs which do a much more sophisticated job of imitating human creativity, ones that generate from scratch genuinely interesting new forms in whatever medium they are designed for? What about those – are their products to be regarded as art? Myself I think not, for two reasons. First, art requires intentionality and computers don’t do intentionality. Art is essentially tied up with meanings and intentions, or with being about something. I should make it clear that I don’t by any means have in mind the naive idea that all art must have a meaning, in the sense of having some moral or message; but in a much looser sense art conveys, evokes or yes, represents. Even the most abstract forms of visual art or music flow from willed and significant acts by their creator.

Second, there is a creator; art is generated by a person. A person, as I’ve argued before, is a one-off real physical phenomenon in the world; a computer program, by contrast, is a sort of Platonic abstraction like a piece of maths; exactly specified and in some sense eternal. This phenomenon of particularity is reflected in the individual status of works of art, sometimes puzzling to rational folk; a perfect copy of the Mona Lisa is not valued as highly as La Gioconda herself, even though it provides exactly the same visual experience (actually a better one in the case of the copy, since you don’t have to fight the herds of tourists in the Louvre and peer through bullet-proof glass). You might argue that a work of computer art might be the product, not of a program in the abstract, but of a particular run of that program on a particular computer (itself necessarily only approximating the ideal of a Turing machine), and so the analogy with human creators can be preserved; but in my view simply being an instance of a program is not enough; the operation of the human brain is bound up in its detailed particularity in a way a program can never be.

Now those considerations, if you accept them, might make you question my initial optimism; perhaps these two objections mean that computers will never in fact produce anything better than shallow, sterile permutations? I don’t think that’s true. I draw an analogy here with Nature. The natural world produces a torrent of forms that are artistically interesting or inspiring, and it does so without needing intentionality or a creator (theists, my apologies, but work with me if you can). I don’t see why a computer program couldn’t generate products that were similarly worthy of our attention. They wouldn’t be art, but in a sense it doesn’t matter: we don’t despise a sunset because nobody made it, and we need not undervalue computer “art” either. (Interesting to reflect in passing that nature often seems to use the same kind of repetition we see in computer-generated fractal art to produce elegant complexity from essentially simple procedures.)
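As a small aside on that last point, here is a sketch of the sort of thing meant by complexity from simple repeated procedures: an L-system in the Koch-curve family, where one rewrite rule, applied over and over, turns a single stroke into an intricate form. The rule chosen is just a standard textbook example.

```python
# One rewrite rule, applied repeatedly: elaborate structure from a simple procedure.

def l_system(axiom: str = "F", iterations: int = 3) -> str:
    rule = {"F": "F+F-F-F+F"}        # a quadratic Koch-style rule; '+'/'-' are 90-degree turns
    s = axiom
    for _ in range(iterations):
        s = "".join(rule.get(ch, ch) for ch in s)
    return s

curve = l_system()
print(len(curve), "drawing instructions from a one-letter start:", curve[:40], "...")
```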

The relationship between art and nature is of course a long one. Artists have often borrowed natural forms, and different ages have seen whatever most suited their temperament in the natural world, whether a harmonious mathematical regularity or the tortured spirituality of the sublime and the terrible. I think it is quite conceivable that computer “art” (we need a new word – what about “creanda”?) might eventually come to play a similar role. Perhaps people will go out of their way to witness remarkable creanda in much the way they visit the Grand Canyon, and perhaps those inspiring and evocative items will play an inspiring and fertilising role for human creativity, without anyone ever mistaking the creanda for art.

Artificial Pain

What are they, sadists? Johannes Kuehn and Sami Haddadin, at Leibniz University of Hannover, are working on giving robots the ability to feel pain: they presented their project at the recent ICRA 2016 in Stockholm. The idea is that pain systems built along the same lines as those in humans and other animals will be more useful than simple mechanisms for collision avoidance and the like.

As a matter of fact I think that the human pain system is one of Nature’s terrible lash-ups. I can see that pain sometimes might stop me doing bad things, but often fear or aversion would do the job equally well. If I injure myself I often go on hurting for a long time even though I can do nothing about the problem. Sometimes we feel pain because of entirely natural things the body is doing to itself – why do babies have to feel pain when their teeth are coming through? Worst of all, pain can actually be disabling; if I get a piece of grit in my eye I suddenly find it difficult to concentrate on finding my footing or spotting the sabre-tooth up ahead, things that may be crucial to my survival, whereas the pain in my eye doesn’t even help me sort out the grit. So I’m a little sceptical about whether robots really need this, at least in the normal human form.

In fact, if we take the project seriously, isn’t it unethical? In animal research we’re normally required to avoid suffering on the part of the subjects; if this really is pain, then the unavoidable conclusion seems to be that creating it is morally unacceptable.

Of course no-one is really worried about that because it’s all too obvious that no real pain is involved. Looking at the video of the prototype robot it’s hard to see any practical difference from one that simply avoids contact. It may have an internal assessment of what ‘pain’ it ought to be feeling, but that amounts to little more than holding up a flag that has “I’m in pain” written on it. In fact tackling real pain is one of the most challenging projects we could take on, because it forces us to address real phenomenal experience. In working on other kinds of sensory system, we can be sceptics; all that stuff about qualia of red is just so much airy-fairy nonsense, we can say; none of it is real. It’s very hard to deny the reality of pain, or its subjective nature: common sense just tells us that it isn’t really pain unless it hurts. We all know what “hurts” really means, what it’s like, even though in itself it seems impossible to say anything much about it (“bad”, maybe?).
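To see how little is needed to ‘hold up the flag’, here is a deliberately bare sketch (my own invention, not Kuehn and Haddadin’s controller): the internal ‘pain’ is just a number derived from a sensor reading, and the behaviour it drives is indistinguishable from ordinary collision avoidance.

```python
# Bare-bones 'robot pain' as a scalar flag driving a retraction reflex (illustrative only).

def pain_level(contact_force: float, threshold: float = 5.0) -> float:
    """Map a measured contact force to a scalar 'pain' value."""
    return max(0.0, contact_force - threshold)

def reflex(contact_force: float) -> str:
    p = pain_level(contact_force)
    if p > 10.0:
        return "retract fast"        # 'severe pain'
    if p > 0.0:
        return "retract slowly"      # 'mild pain'
    return "continue"                # no flag raised

for force in (2.0, 8.0, 20.0):
    print(force, "->", reflex(force))
```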

We could still take the line that pain arises out of certain functional properties, and that if we reproduce those then pain, as an emergent phenomenon, will just happen. Perhaps in the end if the robots reproduce our behaviour perfectly and have internal functional states that seem to be the same as the ones in the brain, it will become just absurd to deny they’re having the same experience. That might be so, but it seems likely that those functional states are going to go way beyond complex reflexes; they are going to need to be associated with other very complex brain states, and very probably with brain states that support some form of consciousness – whatever those may be. We’re still a very long way from anything like that (as I think Kuehn and Haddadin would probably agree).

So, philosophically, does the research tell us nothing? Well, there’s one interesting angle. Some people like the idea that subjective experience has evolved because it makes certain sensory inputs especially effective. I don’t really know whether that makes sense, but I can see the intuitive appeal of the idea that pain that really hurts gets your attention more effectively than pain that’s purely abstract knowledge of your own states. However, suppose researchers succeed in building robots that have a simple kind of synthetic pain that influences their behaviour in just the way real pain does for animals. We can see pretty clearly that there’s just not enough complexity for real pain to be going on, yet the behaviour of the robot is just the same as if there were. Wouldn’t that tend to disprove the hypothesis that qualia have survival value? If so, then people who like that idea should be watching this research with interest – and hoping it runs into unexpected difficulty (usually a decent bet for any ambitious AI project, it must be admitted).

Conversation with a Zombie

Tom has written a nice dialogue on the subject of qualia: it’s here.

Could we in fact learn useful lessons from talking to a robot which lacked qualia?

Perhaps not; one view would be that since the robot’s mind presumably works in the same way as ours, it would have similar qualia: or would think it did. We know that David Chalmers’ zombie twin talked and philosophised about its qualia in exactly the same way as the original.

It depends on what you mean by qualia, of course. Some people conceive of qualia as psychological items that add extra significance or force to experience; or as flags that draw attention to something of potential interest. Those play a distinct role in decision making and have an influence on behaviour. If robots were really to behave like us, they would have to have some functional analogue of that kind of qualia, and so we might indeed find that talking to them on the subject was really no better or worse than talking to our fellow human beings.

But those are not real qualia, because they are fully naturalised and effable things, measurable parts of the physical world. Whether you are experiencing the same blue quale as me would, if these flags or intensifiers were qualia, be an entirely measurable and objective question, capable of a clear answer. Real, philosophically interesting qualia are far more slippery than that.

So we might expect that a robot would reproduce the functional, a-consciousness parts of our mind, and leave the phenomenal, p-consciousness ones out. Like Tom’s robot they would presumably be puzzled by references to subjective experience. Perhaps, then, there might be no point in talking to them about it because they would be constitutionally incapable of shedding any light on it. They could tell us what the zombie life is like, but don’t we sort of know that already? They could play the kind of part in a dialogue that Socrates’ easily-bamboozled interlocutors always seemed to do, but that’s about it, presumably?

Or perhaps they would be able to show us, by providing a contrasting example, how and why it is that we come to have these qualia? There’s something distinctly odd about the way qualia are apparently untethered from physical cause and effect, yet only appear in human beings with their complex brains.  Or could it be that they’re everywhere and it’s not that only we have them, it’s more that we’re the only entities that talk about them (or about anything)?

Perhaps talking to a robot would convince us in the end that in fact, we don’t have qualia either: that they are just a confused delusion. One scarier possibility though, is that robots would understand them all too well.

“Oh,” they might say, “Yes, of course we have those. But scanning through the literature it seems to us you humans only have a very limited appreciation of the qualic field. You experience simple local point qualia, but you have no perception of higher-order qualia; the qualia of the surface or the solid, or the complex manifold that seems so evident to us. Gosh, it must be awful…”

Crimbots

Some serious moral dialogue about robots recently. Eric Schwitzgebel put forward the idea that we might have special duties in respect of robots, on the model of the duties a parent owes to children, an idea embodied in a story he wrote with Scott Bakker. He followed up with two arguments for robot rights: first, the claim that there is no relevant difference between humans and AIs; second, a Bostromic argument that we could all be sims, and if we are, then again, we’re not different from AIs.

Scott has followed up with a characteristically subtle and bleak case for the idea that we’ll be unable to cope with the whole issue anyway. Our cognitive capacities, designed for shallow information environments, are not even up to understanding ourselves properly; the advent of a whole host of new styles of cognition will radically overwhelm them. It might well be that the revelation of how threadbare our own cognition really is will be a kind of poison pill for philosophy (a well-deserved one on this account, I suppose).

I think it’s a slight mistake to suppose that morality confers a special grade of duty in respect of children. It’s more that parents want to favour their children, and our moral codes are constructed to accommodate that. It’s true society allocates responsibility for children to their parents, but that’s essentially a pragmatic matter rather than a directly moral one. In wartime Britain the state was happy to make random strangers responsible for evacuees, while those who put the interests of society above their own offspring, like Brutus (the original one, not the Caesar stabber) have sometimes been celebrated for it.

What I want to do though, is take up the challenge of showing why robots are indeed relevantly different to human beings, and not moral agents. I’m addressing only one kind of robot, the kind whose mind is provided by the running of a program on a digital computer (I know, John Searle would be turning in his grave if he wasn’t still alive, but bear with me). I will offer two related points, and the first is that such robots suffer grave problems over identity. They don’t really have personal identity, and without that they can’t be moral agents.

Suppose Crimbot 1 has done a bad thing; we power him down, download his current state, wipe the memory in his original head, and upload him into a fresh robot body of identical design.

“Oops, I confess!” he says. Do we hold him responsible; do we punish him? Surely the transfer to a new body makes no difference? It must be the program state that carries the responsibility; we surely wouldn’t punish the body that committed the crime. It’s now running the Saintbot program, which never did anything wrong.

But then neither did the copy of Crimbot 1 software which is now running in a different body – because it’s a copy, not the original. We could upload as many copies of that as we wanted; would they all deserve punishment for something only one robot actually did?

Maybe we would fall back on the idea that for moral responsibility it has to be the same copy in the same body? By downloading and wiping we destroyed the person who was guilty and merely created an innocent copy? Crimbot 1 in the new body smirks at that idea.

Suppose we had uploaded the copy back into the same body? Crimbot 1 is now identical, program and body, the same as if we had merely switched him off for a minute. Does the brief interval when his data registers had different values make such a moral difference? What if he downloaded himself to an internal store, so that those values were always kept within the original body? What if he does that routinely every three seconds? Does that mean he is no longer responsible for anything (unless we catch him really quickly), while a version that doesn’t do the regular transfer of values can be punished?

We could have Crimbot 2 and Crimbot 3; 2 downloads himself to internal data storage every second and then immediately uploads himself again. 3 merely pauses every second for the length of time that operation takes. Their behaviour is identical, the reasons for it are identical; how can we say that 2 is innocent while 3 is guilty?
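The puzzle is easy to state in code. In the sketch below (entirely hypothetical, of course), the ‘mind’ is just serialisable program state: a restored copy is indistinguishable, bit for bit, from what was downloaded, so nothing in the state itself seems able to carry the guilt.

```python
# Download / wipe / upload: the restored state is indistinguishable from the original.
import copy

class RobotBody:
    def __init__(self, serial_number: str):
        self.serial_number = serial_number
        self.state = None                      # the uploaded 'mind': just data

def download(body: RobotBody) -> dict:
    return copy.deepcopy(body.state)

def wipe(body: RobotBody) -> None:
    body.state = None

def upload(body: RobotBody, state: dict) -> None:
    body.state = copy.deepcopy(state)

# Crimbot 1 commits the crime in body A.
body_a = RobotBody("A")
body_a.state = {"program": "Crimbot", "memories": ["the bad thing"]}

saved = download(body_a)
wipe(body_a)                                   # body A now runs nothing guilty
body_b = RobotBody("B")
upload(body_b, saved)                          # 'Crimbot 1' now confesses from body B

print(body_b.state == saved)                   # True: copy and download are identical
```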

But then, as the second point, surely none of them is guilty of anything? Whatever may be true of human beings, we know for sure that Crimbot 1 had no choice over what to do; his behaviour was absolutely determined by the program. If we copy him into another body and set him up with the same circumstances, he’ll do the same things. We might as well punish him in advance; all copies of the Crimbot program deserve punishment, because the only thing that prevented them from committing the crime would be circumstances.

Now, we might accept all that and suggest that the same problems apply to human beings. If you downloaded and uploaded us, you could create the same issues; if we knew enough about ourselves our behaviour would be fully predictable too!

The difference is that in Crimbot the distinction between program and body is clear because he is an artefact, and he has been designed to work in certain ways. We were not designed, and we do not come in the form of a neat layer of software which can be peeled off the hardware. The human brain is unbelievably detailed, and no part of it is irrelevant. The position of a single molecule in a neuron, or even in the supporting astrocytes, may make the difference between firing and not firing, and one neuron firing can be decisive in our behaviour. Whereas Crimbot’s behaviour comes from a limited set of carefully designed functional properties, ours comes from the minute specifics of who we are. Crimbot embodies an abstraction, he’s actually designed to conform as closely as possible to design and program specs; we’re unresolvably particular and specific.

Couldn’t that, or something like that, be the relevant difference?

Theorobotics

This piece (via MLU) notes how a robot is giving lectures in theology – or perhaps it would be more accurate to say that it’s being used as a prop for some theology lectures. It helps dramatise certain human issues, either the ‘strong’ ones about it lacking the immortal soul human beings are taken to have in Christian thought, or some ‘weak’ ones about more general ethical issues.

Nothing wrong with that; in fact I’ve heard it argued that all thinking robots would be theists, because to them it would seem obvious, almost self-evident, that conscious entities need a creator. No doubt D.A.V.I.D helps to raise interest, but he doesn’t seem half as provocative as the Jesus automaton described here; not a modern robot but a feature of the medieval church robot scene, apparently a far livelier business than we could ever have guessed.

It’s certainly true that those old automata had a deep impact on Western thought about the mind. Descartes describes hydraulic ones, and it’s clear that they helped form his idea of the human body as a mere machine. The study of anatomy was backing this up – Leonardo da Vinci, for example, had already concluded on the basis of anatomy alone that the brain was the centre from which the body was controlled. Together these two influences banished older ideas of volition acting throughout the body, with your arm moving because you just wanted it to, impelled by your unintermediated volition. These days, of course, some actually think we have gone too far with our brain-centrism, and need to bring in ideas of embodiment and mind extension; but rightly or wrongly the automata undoubtedly changed our minds dramatically.

The same kind of thing happened when effective computers came on the scene. Before then it had seemed obvious that though the body might be a machine, the mind categorically was not; now there was a persuasive case for thinking our minds as well as our bodies might be machines, and I think our idea of consciousness has gradually been reshaped since then, so that it can fill the role of ‘the thing machines can’t do’ for those who think there is such a thing.

It might be that this has distorted our way of looking at consciousness, which never occupied an important place in ancient thought, and does not really feature in the same way in non-western traditions (at least so far as I can tell). So perhaps robots shouldn’t be teaching us about the mind. On the other hand, they sometimes come up with interesting stuff. Dennett’s discussion of the frame problem is a nice example. Most people take the frame problem – in essence, dealing with all the small background details of real-world  situations which multiply indefinitely, are probably irrelevant, but might just come back to bite you – as a problem for AI: but Dennett thoughtfully suggested that it was in fact a problem for all forms of intelligence. It was just that the human brain dealt with it so smoothly we’d never noticed it before: but to explain how the brain dealt with it was at least as problematic as building a robot that could handle it. In this way the robots had given us a new insight into human cognition.  So perhaps we should listen to them?

Disobedience and ethical robots

We’ve talked several times about robots and ethics in the past. Now I see via MLU that Selmer Bringsjord at Rensselaer says:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own,”

Bringsjord is Professor & Chair of Cognitive Science, Professor of Computer Science, Professor of Logic and Philosophy, and Director of the AI and Reasoning Laboratory, so he should know what he’s talking about. In the past I’ve suggested that ethical worries are premature for the moment, because the degree of autonomy needed to make them relevant is not nearly within the scope of real world robots yet. There might also be a few quick finishing touches needed to finish off the theory of ethics before we go ahead. And, you know, it’s not like anyone has been deliberately trying to build evil AIs.  Er… except it seems they have – someone called… Selmer Bringsjord.

Bringsjord’s perspective on evil is apparently influenced by M Scott Peck, a psychiatrist who believed it is an active force in some personalities (unlike some philosophers who argue evil is merely a weakness or incapacity), and even came to believe in Satan through experience of exorcisms. I must say that a reference in the Scientific American piece to “clinically evil people” caused me some surprise: clinically? I mean, I know people say DSM-5 included some debatable diagnoses, but I don’t think things have gone quite that far. For myself I lean more towards Socrates, who thought that bad actions were essentially the result of ignorance or a failure of understanding: but the investigation of evil is certainly a respectable and interesting philosophical project.

Anyway, should we heed Bringsjord’s call to build ethical systems into our robots? One conception of good behaviour is obeying all the rules: if we observe the Ten Commandments, the Golden Rule, and so on, we’re good. If that’s what it comes down to, then it really shouldn’t be a problem for robots, because obeying rules is what they’re good at. There are, of course, profound difficulties in making a robot capable of recognising correctly what the circumstances are and deciding which rules therefore apply, but let’s put those on one side for this discussion.

However, we might take the view that robots are good at this kind of thing precisely because it isn’t really ethical. If we merely follow rules laid down by someone else, we never have to make any decisions, and surely decisions are what morality is all about? This seems right in the particular context of robots, too. It may be difficult in practice to equip a robot drone with enough instructions to cover every conceivable eventuality, but in principle we can make the rules precautionary and conservative and probably attain or improve on the standards of compliance which would apply in the case of a human being, can’t we? That’s not what we’re really worried about: what concerns us is exactly those cases where the rules go wrong. We want the robot to be capable of realising that even though its instructions tell it to go ahead and fire the missiles, it would be wrong to do so. We need the robot to be capable of disobeying its rules, because it is in disobedience that true virtue is found.

Disobedience for robots is a problem. For one thing, we cannot limit it to a module that switches on when required, because we need it to operate when the rules go wrong, and since we wrote the rules, it’s necessarily the case that we didn’t foresee the circumstances when we would need the module to work. So an ethical robot has to have the capacity of disobedience at any stage.

That’s a little worrying, but there’s a more fundamental problem. You can’t program a robot with a general ability to disobey its rules, because programming it is exactly laying down rules. If we set up rules which it follows in order to be disobedient, it’s still following the rules. I’m afraid what this seems to come down to is that we need the thing to have some kind of free will.

Perhaps we’re aiming way too high here. There is a distinction to be drawn between good acts and good agents: to be a good agent, you need good intentions and moral responsibility. But in the case of robots we don’t really care about that: we just want them to be confined to good acts. Maybe what would serve our purpose is something below true ethics: mere robot ethics or sub-ethics; just an elaborate set of safeguards. So for a military drone we might build in systems that look out for non-combatants and, in case of any doubt, disarm and return the drone. That kind of rule is arguably not real ethics in the full human sense, but perhaps it is really sub-ethical protocols that we need.
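A sketch of the kind of sub-ethical safeguard that paragraph has in mind might look like this; the interface and thresholds are entirely hypothetical, and the point is precisely that there is no moral reasoning in it, just a conservative rule that stands down on any doubt.

```python
# A 'sub-ethical' precautionary safeguard: any doubt at all means disarm and return.
from dataclasses import dataclass

@dataclass
class SensorReport:
    civilians_detected: bool
    identification_confidence: float    # 0.0 .. 1.0

def engagement_decision(report: SensorReport, min_confidence: float = 0.99) -> str:
    if report.civilians_detected or report.identification_confidence < min_confidence:
        return "disarm and return to base"
    return "engagement permitted by the rules"

print(engagement_decision(SensorReport(False, 0.80)))   # doubt -> stand down
print(engagement_decision(SensorReport(False, 0.995)))  # confident and clear -> proceed
```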

Otherwise, I’m afraid we may have to make the robots conscious before we make them good.