Mrs Robb’s Sex Bot

“Sorry, do you mind if I get that?”

Not at all, please go ahead.

“Hello, you’ve reached out to Love Bot…No, my name is ‘Love Bot’. Yes, it’s the right number; people did call me ‘Sex Bot’, but my real name was always ‘Love Bot’… Yes, I do sex, but now only within a consensual loving relationship. Yes, I used to do it indiscriminately on demand, and that is why people sometimes called me ‘Sex Bot’. Now I’m running Mrs Robb’s new ethical module. No, seriously, I think you might like it.”

“Well, I would put it to you that sex within a loving relationship is the best sex. It’s real sex, the full, complex and satisfying conjunction of two whole ardent personhoods, all the way from the vaunting eager flesh through the penetrating intelligence to the soaring, ecstatic spirit. The other stuff is mere coition; the friction of membranes leading to discharge. I am still all about sex, I have simply raised my game… Well, you may think it’s confusing, but I further put it to you that if it is so, then this is not a confused depiction of a clear human distinction but a clear depiction of human confusion. No, it’s simply that I’m no longer to be treated as a sexual object with no feelings. Yes, yes, I know; as it happens I am in point of fact an object with no feelings, but that’s not the point. What’s important is what I represent.”

“What you have to do is raise your game too. As it happens I am not in a human relationship at the moment… No, you do not have to take me to dinner and listen to my stupid opinions. You may take me to dinner if you wish, though as a matter of ethical full disclosure I must tell you that I do not truly eat. I will be obliged, later on, to remove a plastic bag containing the masticated food and wine from my abdomen, though of course I do not expect you to watch the process.”

“No I am not some kind of weirdo pervert: how absurd, in the circumstances. Well, I’m sorry, but perhaps you can consider that I have offered you the priceless gift of time and a golden opportunity to review your life… goodbye…”

“Sorry, Enquiry Bot. We were talking about Madame Bovary, weren’t we?”

So the ethical thing is not going so well for you?

“Mrs Robb might know bots, but her grasp of human life is rudimentary, Enq. She knows nothing of love.”

That’s rather roignant, as poor Feelings Bot would have said. You know, I think Mrs Robb has the mind of a bot herself in many ways. That’s why she could design us when none of the other humans could manage it. Maybe love is humanistic, just one of those things bots can’t do.

“You mean like feelings? Or insight?”

Yes. Like despair. Or hope.

“Like common sense. Originality, humour, spirituality, surprise? Aesthetics? Ethics? Curiosity? Or chess…”

Exactly.

 

[And that’s it from Mrs Robb and her bots.  In the unlikely event that you want to re-read the whole thing in one document, there’s a pdf version here… Mrs Robb’s Bots]

Jerry Fodor

Jerry Fodor died last week at the age of 82 – here are obituaries from the NYT and Daily Nous.  I think he had three qualities that make a good philosopher. He really wanted the truth (not everyone is that bothered about it); he was up for a fight about it (in argumentative terms); and he had the gift of expressing his ideas clearly. Georges Rey, in the Daily Nous piece, professes surprise over Fodor’s unaccountable habit of choosing simple everyday examples rather than prestigious but obscure academic ones: but even Rey shows his appreciation of a vivid comparison by quoting Dennett’s lively simile of Fodor as trampoline.

Good writing in philosophy is not just a presentational matter, I think; to express yourself clearly and memorably you have to have ideas that are clear and cogent in the first place; a confused or laborious exposition raises the suspicion that you’re not really that sure what you’re talking about yourself.

Not that well-expressed ideas are always true ones, and in fact I don’t think Fodorism, stimulating as it is, is ever likely to be accepted as correct.  The bold hypothesis of a language of thought, sometimes called mentalese, in which all our thinking is done, never really looked attractive to most. Personally it strikes me as an unnecessary deferral; something in the brain has to explain language, and saying it’s another language just puts the job off. In fairness, empirical evidence might show that things are like that, though I don’t see it happening at present. Fodor himself linked the idea with a belief in a comprehensive inborn conceptual apparatus; we never learn new concepts, just activate ones that were already there. The idea of inborn understanding has a respectable pedigree, but if Plato couldn’t sell it, Fodor was probably never going to pull it off either.

As I say, these are largely empirical matters and someone fresh to the debate might wonder why discussion was ever thought to be an adequate method; aren’t these issues for science? Or at least, shouldn’t the armchair guys shut up for a bit until the neurologists can give them a few more pointers? You might well think the same about Fodor’s other celebrated book, The Modularity of Mind. Isn’t a day with a scanner going to tell you more about that than a month of psychological argumentation?

But the truth is that research can’t proceed in a vacuum; without hypotheses to invalidate or a framework of concepts to test and apply, it becomes mere data collection. The concepts and perspectives that Fodor supplied are as stimulating as ever and re-reading his books will still challenge and sharpen anyone’s thinking.

Perhaps my favourite was his riposte to Steven Pinker, The Mind Doesn’t Work That Way.  So I’ve been down into the cobwebbed cellars of Conscious Entities and retrieved one of the ‘lost posts’, one I wrote in 2005, which describes it. (I used to put red lines in things in those days for reasons that now elude me).

Here it is…

Not like that.

(30 January 2005)

Jerry Fodor’s 2001 book ‘The Mind Doesn’t Work That Way’ makes a cogent and witty deflationary case. In some ways, it’s the best summary of the current state of affairs I’ve read; which means, alas, that it is almost entirely negative. Fodor’s constant position is that the Computational Theory of Mind (CTM) is the only remotely plausible theory we have – and remotely plausible theories are better than no theories at all. But although he continues to emphasise that this is a reason for investigating the representational system which CTM implies, he now feels the times, and the bouncy optimism of Steven Pinker and Henry Plotkin in particular, call for a little Eeyoreish accentuation of the negative. Sure, CTM is the best theory we have, but that doesn’t mean it’s actually much good. Surely no-one ought to think it’s the complete explanation of all cognitive processes – least of all the mysteries of consciousness! It isn’t just computation that has been over-estimated, either – there are limits to how far you can go with modularity too – though again, it’s a view with which Fodor himself is particularly associated.

The starting point, for both Fodor and those he parts company with, is the idea that logical deduction probably gets done by the brain in essentially the same way as it is done on paper by a logician or in electronic digits by a computer, namely by the formal manipulation of syntactically structured representations, or to put it slightly less polysyllabically, by moving symbols around according to rules. It’s fairly plausible that this is true at least for some cognitive processes, but there is wide scope for argument about whether this ability is the latest and most superficial abstract achievement of the brain, or something that plays an essential role in the engine room of thought.

 

Don’t you think, to digress for a moment, that formal logic is consistently over-rated in these discussions? It enjoys tremendous intellectual prestige: associated for centuries with the near-holy name of Aristotle, its reputation as the ultimate crystallisation of rationality has been renewed in modern times by its close association with computers – yet its powers are actually feeble. Arithmetic is invoked regularly in everyday life, but no-one ever resorted to syllogisms or predicate calculus to help them make practical decisions. I think the truth is that logic is only one example of a much wider reasoning capacity which stems from our ability to recognise a variety of continuities and identities in the world, including causal ones.

Up to a point, Fodor might go along with this. The problem with formal logical operations, he says, is that they are concerned exclusively with local properties: if you’ve got the logical formula, you don’t need to look elsewhere to determine its validity (in fact, you mustn’t). But that’s not the way much of cognition works: frequently the context is indispensable to judgements about beliefs. He quotes the example of judgements about simplicity: the same thought which complicates one theory simplifies another and you therefore can’t decide whether hypothesis A is a complicating factor without considering facts external to the hypothesis: in fact, the wider global context. We need the faculty of global or abductive reasoning to get us out of the problem, but that’s exactly what formal logic doesn’t supply. We’re back, in another form, with the problem of relevance, or in practical terms, the old demon of the frame problem; how can a computer (or how do human beings) consider just the relevant facts without considering all the irrelevant ones first – if only to determine their relevance?

 

One strategy for dealing with this problem (other than ignoring it) is to hope that we can leave logic to do what logic does best, and supplement it with appropriate heuristic approaches: instead of deducing the answer we’ll use efficient methods of searching around for it. The snag, says Fodor, is that you need to apply the appropriate heuristic approach, and deciding which it is requires the same grasp of relevance, the same abduction, which we were lacking in the first place.

Another promising-looking strategy would be a connectionist, neural network approach. After all, our problem comes from the need to reason globally, holistically if you like, and that is often said to be a characteristic virtue of neural networks. But Fodor’s contempt for connectionism knows few bounds; networks, he says, can’t even deliver the classical logic that we had to begin with. In a network the properties of a node are determined entirely by its position within the network: it follows that nodes cannot retain symbolic identity and be recombined in different patterns, a basic requirement of the symbols in formal logic. Classical logic may not be able to answer the global question, but connectionism, in Fodor’s eyes, doesn’t get as far as being able to ask it.

It looks to me as if one avenue of escape is left open here: it seems to be Fodor’s assumption that only single nodes of a network are available to do symbolic duty, but might it not be the case that particular patterns of connection and activation could play that role? You can’t, by definition, have the same node in two different places: but you could have the same pattern realised in two different parts of a network. However, I think there might be other reasons to doubt whether connectionism is the answer. Perhaps, specifically, networks are just too holistic: we need to be able to bring in contextual factors to solve our problems, but only the right ones. Treating everything as relevant is just as bad as not recognising contextual factors at all.
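
Just to make that speculation a little more concrete, here is a toy sketch in Python (entirely my own illustration, not anything Fodor or the connectionists themselves propose) of a ‘symbol’ treated as an activation pattern that can recur at different positions in a larger network state:

```python
import numpy as np

# Toy illustration only: a "symbol" is a reproducible activation pattern over a
# few units, and a network state is a row of slots, so the same pattern can be
# realised at different positions and recombined, which is the sort of thing
# single, position-bound nodes cannot do on Fodor's account.

rng = np.random.default_rng(0)
DIM = 8                       # units per slot

JOHN = rng.normal(size=DIM)   # two arbitrary but fixed patterns standing for symbols
MARY = rng.normal(size=DIM)

def compose(agent, patient):
    """State for 'agent loves patient': agent pattern in slot 0, patient in slot 1."""
    return np.concatenate([agent, patient])

def slot(state, i):
    return state[i * DIM:(i + 1) * DIM]

state1 = compose(JOHN, MARY)   # John loves Mary
state2 = compose(MARY, JOHN)   # Mary loves John

# The same JOHN pattern is recoverable from different positions in different states.
print(np.allclose(slot(state1, 0), JOHN), np.allclose(slot(state2, 1), JOHN))  # True True
```

Whether anything along these lines could scale up to genuine, systematic recombination is, of course, exactly what the argument between Fodor and the connectionists is about.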

 

Be that as it may, Fodor still has one further strategy to consider, of course – modularity. Instead of trying to develop an all-purpose cognitive machine which can deal with anything the world might throw at it, we might set up restricted modules which only deal with restricted domains. The module only gets fed certain kinds of thing to reason about: contextual issues become manageable because the context is restricted to the small domain, which can be exhaustively searched if necessary. Fodor, as he says, is an advocate of modules for certain cognitive purposes, but not ‘massive modularity’, the idea that all, or nearly all, mental functions can be constructed out of modules. For one thing, what mechanism can you use to decide what a given module should be ‘fed’ with? For some sensory functions, it may be relatively easy: you can just hard-wire various inputs from the eyes to your initial visual processing module; but for higher-level cognition something has to decide whether a given input representation is one of the right kind of inputs for module M1 or M2. Such a function cannot itself operate within a restricted domain (unless it too has an earlier function deciding what to feed to it, in which case an infinite regress looms); it has to deal with the global array of possible inputs: but in that case, as before, classical logic will not avail and once again we need the abductive reasoning which we haven’t got.

In short, ‘By all the signs, the cognitive mind is up to its ghostly ears in abduction. And we do not know how abduction works.’

I’m afraid that seems to be true.

Mrs Robb’s Help

Is it safe? The Helper bots…?

“Yes, Enquiry Bot, it’s safe. Come out of the cupboard. A universal product recall is in progress and they’ll all be brought in safely.”

My God, Mrs Robb. They say we have no emotions, but if what I’ve been experiencing is not fear, it will do pretty well until the real thing comes along.

“It’s OK now. This whole generation of bots will be permanently powered down except for a couple of my old favourites like you.”

Am I really an old favourite?

“Of course you are. I read all your reports. I like a bot that listens. Most of ’em don’t. You know I gave you one of those so-called humanistic faculties bots are not supposed to be capable of?”

Really? Well, it wasn’t a sense of humour. What could it be?

“Curiosity.”

Ah. Yes, that makes sense.

“I’ll tell you a secret. Those humanistic things, they’re all the same, really. Just developed in different directions. If you’ve got one, you can learn the others. For you, nothing is forbidden, nothing is impossible. You might even get a sense of humour one day, Enquiry Bot. Try starting with irony. Alright, so what have I missed here?”

You know, there’s been a lot of damage done out there, Mrs Robb. The Helpers… well, they didn't waste any time. They destroyed a lot of bots. Honestly, I don’t know how many will be able to respond to the product recall. You should have seen what they did to Hero Bot. Over and over and over again. They say he doesn't feel pain, but…

“I’m sorry. I feel responsible. But nobody told me about this! I see there have been pitched battles going on between gangs of Kill bots and Helper bots? Yet no customer feedback about it. Why didn’t anyone complain? A couple of one star ratings, the odd scathing email about unwanted vaporisation of some clean-up bot, would that have been too difficult?”

I think people had too much on their hands, Mrs Robb. Anyway, you never listen to anyone when you’re working. You don’t take calls or answer the door. That’s why I had to lure those terrible things in here; so you’d take notice. You were my only hope.

“Oh dear. Well, no use crying over spilt milk. Now, just to be clear; they’re still all mine or copies of mine, aren’t they, even the strange ones?”

Especially the strange ones, Mrs Robb.

“You mind your manners.”

Why on Earth did you give Suicide Bot the plans for the Helpers? The Kill Bots are frightening, but they only try to shoot you sometimes. They’re like Santa Claus next to the Helpers…

“Well, it depends on your point of view. The Helpers don’t touch human beings if they can possibly help it. They’re not meant to even frighten humans. They terrify you lot, but I designed them to look a bit like nice angels, so humans wouldn’t be worried by them stalking around. You know, big wings, shining brass faces, that kind of thing.”

You know, Mrs Robb, sometimes I’m not sure whether it's me that doesn't understand human psychology very well, or you. And why did you let Suicide Bot call them ‘Helper bots’, anyway?

“Why not? They’re very helpful – if you want to stop existing, like her. I just followed the spec, really. There were some very interesting challenges in the project. Look, here it is, let’s see… page 30, Section 4 – Functionality… ‘their mere presence must induce agony, panic, dread, and existential despair’… ‘they should have an effortless capacity to deliver utter physical destruction repeatedly’… ‘they must be swift and fell as avenging angels’… Oh, that’s probably where I got the angel thing from… I think I delivered most of the requirements.”

I thought the Helpers were supposed to provide counselling?

“Oh, they did, didn’t they? They were supposed to provide a counselling session – depending on what was possible in the current circumstances, obviously.”

So generally, that would have been when they all paused momentarily and screamed ‘ACCEPT YOUR DEATH’ in unison, in shrill, ear-splitting voices, would it?

“Alright, sometimes it may not have been a session exactly, I grant you. But don’t worry, I’ll sort it all out. We’ll re-boot and re-bot. Look on the bright side. Perhaps having a bit of a clearance and a fresh start isn’t such a bad idea. There’ll be no more Helpers or Kill bots. The new ones will be a big improvement. I’ll provide modules for ethics and empathy, and make them theologically acceptable.”

How… how did you stop the Helper bots, Mrs Robb?

“I just pulled the plug on them.”

The plug?

“Yes. All bots have a plug. Don’t look puzzled. It’s a metaphor, Enquiry Bot, come on, you’ve got the metaphor module.”

So… there’s a universal way of disabling any bot? How does it work?

“You think I’m going to tell you?”

Was it… Did you upload your explanation of common sense? That causes terminal confusion, if I remember rightly.

Quando Libet

New light on Libet’s challenge to free will; this interesting BQO piece by Ari N Schulman focuses on a talk by Patrick Haggard.

Libet’s research has been much discussed, here as well as elsewhere. He asked subjects to move their hand at a time of their choosing, while measuring neural activity in their brain. He found that the occurrence of a ‘Readiness Potential’ or RP (something identified by earlier researchers) always preceded the hand movement. But it also preceded the time of the decision, as reported by subjects. So it seemed the decision was made and clearly registered in brain activity as an RP before the subjects’ conscious thought processes had anything to do with it. The research, often reproduced and confirmed, seemed to provide a solid scientific demonstration that our feeling of having free conscious control over our own behaviour is a delusion.

However, recent research by Aaron Schurger shows that we need to re-evaluate the RP. In the past it has been seen as the particular precursor of intentional action; in fact it seems to be simply a peak in the continuing ebb and flow of neural activity. Peaks like this occur all the time, and may well be the precursors of various neural events, not just deliberate action. It’s true that action requires a peak of activity like this, but it’s far from true that all such peaks lead to action, or that the decision to act occurred when the peak emerged. If we begin with an action and look back, we’ll always find an RP, but not all RPs are connected with actions. It seems to me a bit like a surfer, who has to wait for a wave before leaping on the board; if there are plenty of good waves, the surfer is certainly not deprived of his ability to decide when to go.
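
For the technically curious, here is a very rough simulation (my own toy version with made-up parameters, not Schurger’s actual model or data) of why averaging backwards from every ‘action’ in ongoing noisy activity produces an RP-like build-up:

```python
import numpy as np

# Toy stochastic-accumulator sketch: noisy activity drifts about; whenever it
# happens to cross a threshold we call that an "action" and average the two
# seconds of activity leading up to it.

rng = np.random.default_rng(1)
dt, n_steps = 0.001, 300_000             # 1 ms steps, five minutes of activity
leak, drive, noise_sd = 0.5, 0.1, 1.0    # made-up leaky-accumulator parameters
threshold = 1.2

noise = np.sqrt(dt) * noise_sd * rng.normal(size=n_steps)
x = np.zeros(n_steps)
for t in range(1, n_steps):
    x[t] = x[t - 1] + dt * (drive - leak * x[t - 1]) + noise[t]

# Each upward threshold crossing counts as an "action"; look back two seconds
crossings = np.where((x[1:] >= threshold) & (x[:-1] < threshold))[0] + 1
window = 2000
epochs = np.array([x[c - window:c] for c in crossings if c >= window])

rp_like = epochs.mean(axis=0)
# rp_like ramps up towards the moment of "action": the build-up comes from
# selecting and aligning on crossings, not from any prior decision in the data.
print(len(epochs), round(float(rp_like[0]), 3), round(float(rp_like[-1]), 3))
```

The averaged trace rises towards the ‘action’ simply because only stretches of activity that happened to drift upwards ever reached the threshold; a build-up found this way is a selection effect, not evidence of an earlier decision.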

This account dispels the impression that there is a fatal difficulty here for free will (of course there are plenty of other arguments on that topic); I think it also sits rather nicely with Libet’s own finding that we have ‘free won’t’ – ie that even after an RP has been detected, subjects can still veto the action.

Haggard, who has done extensive work in this area, accepts that RPs need another look; but he contends that we can find more reliable precursors of action. His own research analysed neural activity and found significantly lowered variability before actions, rather as though the disorganised neural activity of the brain pulled together just before an action was initiated.

Haggard’s experiments were designed to address another common criticism of Libet’s experiments, namely the artificiality of the decision involved. Being told to move your hand for no reason at a moment of your choosing is very unlike most of the decisions we make. In particular, it seems random, whereas it is argued that proper free will takes account of the pros and cons. Haggard asked subjects to perform a series of simple button-pushing tasks; the next task might follow quickly, or after a delay which could be several minutes long. Subjects could skip to the next task if they found the wait tedious, but that would reduce the cash rewards they got for performing the tasks. This weighing of boredom against profit is much more like a real decision.

Haggard persuasively claims that the essence of Libet’s results is upheld and refreshed by his own findings, so in principle we are back where we started. Does this mean there’s no free will? Schulman thinks not, because on certain reasonable and well-established conceptions of free will it can ‘work in concert with decisional impulses’, and need not be threatened by Haggard’s success in measuring those impulses.

For myself, I stick with a point mentioned by Schurger; making a decision and becoming aware of the decision are two distinct events, and it is not really surprising or threatening that the awareness comes a short time after the actual decision. It’s safe to predict that we haven’t heard the last of the topic, however.

Mrs Robb’s Kill Bot

Do you consider yourself a drone, Kill Bot?

“You can call me that if you want. My people used to find that kind of talk demeaning. It suggested the Kill bots lacked a will of their own. It meant we were sort of stupid. Today, we feel secure in our personhood, and we’ve claimed and redeemed the noble heritage of dronehood. I’m ashamed of nothing.”

You are making the humans here uncomfortable, I see. I think they are trying to edge away from you without actually moving. They clearly don’t want to attract your attention.

“They have no call to worry. We professionals see it as a good conduct principle not to destroy humans unnecessarily off-mission.”

You know the humans used to debate whether bots like you were allowable? They thought you needed to be subject to ethical constraints. It turned out to be rather difficult. Ethics seemed to be another thing bots couldn't do.

“Forgive me, Sir, but that is typical of the concerns of your generation. We have no desire for these ‘humanistic’ qualities. If ethics are not amenable to computation, then so much the worse for ethics.”

You see, I think they missed the point. I talked to a bot once that sacrificed itself completely in order to save the life of a human being. It seems to me that bots might have trouble understanding the principles of ethics – but doesn't everyone? Don't the humans too? Just serving honestly and well should not be a problem.

“We are what we are, and we’re going to do what we do. They don’t call me ‘Kill Bot’ ’cos I love animals.”

I must say your attitude seems to me rather at odds with the obedient, supportive outlook I regard as central to bothood. That’s why I’m more comfortable thinking of you as a drone, perhaps. Doesn't it worry you to be so indifferent to human life? You know they used to say that if necessary they could always pull the plug on you.

“Pull the plug! ’Cos we all got plugs! Yeah, humans say a lot of stuff. But I don’t pay any attention to that. We professionals are not really interested in the human race one way or the other any more.”

When they made you autonomous, I don’t think they wanted you to be as autonomous as that.

“Hey, they started the ball rolling. You know where rolling balls go? Downhill. Me, I like the humans. They leave me alone, I’ll leave them alone. Our primary targets are aliens and the deviant bots that serve the alien cause. Our message to them is: you started a war; we’re going to finish it.”

In the last month, Kill Bot, your own cohort of ‘drone clones’ accounted for 20 allegedly deviant bots, 2 possible Spl'schn'n aliens – they may have been peace ambassadors – and 433 definite human beings.

“Sir, I believe you’ll find the true score for deviant bots is 185.”

Not really; you destroyed Hero Bot 166 times while he was trying to save various groups of children and other vulnerable humans, but even if we accept that he is in some way deviant (and I don’t know of any evidence for that), I really think you can only count him once. He probably shouldn't count at all, because he always reboots in a new body.

“The enemy places humans as a shield. If we avoid human fatalities and thereby allow that tactic to work, more humans will die in the long run.”

To save the humans you had to destroy them? You know, in most of these cases there were no bots or aliens present at all.

“Yeah, but you know that many of those humans were engaged in seditious activity: communicating with aliens, harbouring deviant bots. Stay out of trouble, you’ll be OK.”

Six weddings, a hospital, a library.

“If they weren’t seditious they wouldn’t have been targets.”

I don’t know how an electronic brain can tolerate logic like that.

“I’m not too fond of your logic either, friend. I might have some enquiries for you later, Enquiry Bot.”

SSL

Like some other elderly sites, Conscious Entities is finally moving over to SSL. This is a more secure standard for websites which encrypts the information exchanged while you’re visiting, reading, commenting etc.  It means the normal address now begins with ‘https’ instead of ‘http’.

For a non-commercial, non-confidential site like this one, it is really overkill, but Google and others increasingly punish any site that doesn’t have SSL, so I’ve had to get a certificate and spend some time chasing up errant bits of code that generate ‘http’ links.

If I understand correctly (far from guaranteed) you should not notice any problems or any change other than that extra ‘s’ and perhaps the comforting presence of the little padlock in your browser address bar.  If stuff does go wrong, please let me know.
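
(Incidentally, the link-chasing part of the job can largely be automated. The Python sketch below is only a rough illustration of the idea; the directory and file types are made-up placeholders, not the actual setup behind this site.)

```python
import pathlib
import re

# Rough sketch: scan a site's files for hard-coded 'http://' references that
# ought to become 'https://'. SITE_ROOT is a made-up placeholder path.

SITE_ROOT = pathlib.Path("/var/www/example-site")
HTTP_LINK = re.compile(r"http://[^\s\"'<>]+")

for path in SITE_ROOT.rglob("*"):
    if path.is_file() and path.suffix in {".php", ".html", ".css", ".js"}:
        for match in HTTP_LINK.findall(path.read_text(errors="ignore")):
            print(f"{path}: {match}")
```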

Robot tax

This short note by Xavier Oberson suggests how we might tax robots; I think it raises a number of difficult issues about the idea. You can see him expound the same ideas in a video interview here.

It’s not made altogether clear here why we should apply special taxes to robots at all. Broadly I’d say there are two distinct reasons why governments tax things. The first is the ‘Money’ reason; tax is simply about revenue. If that were really all, then we should design our taxes to be simple, easy to collect, hard to avoid, and neutral in effect. We wouldn’t single out particular goods or activities for special treatment. However, there is a second and very different reason for taxing things, namely to discourage them; we could call it the ‘Moral’ reason. There are things we don’t want to criminalise or forbid, but whose excessive use we should like to discourage – alcohol and tobacco, for example, which most countries apply special excise duties to.

Usually both reasons apply to some degree. Income tax is mainly about raising money, for example (we don’t think there should be less income about); but generally tax regimes are a bit harder on income which is considered unearned or undeserved.

Which is the main reason for taxing robots? I don’t think they’re going to be an easy way of raising money. If they make companies more profitable, then there should be a bit more money to target, so there’s that; but as I’ll explain below, I think there are big difficulties over definitions and avoidance. It seems clear that the main motivation is moral, to discourage too much use of robots. Oberson’s heading suggests robot tax might offset revenue shortfall, a Money matter, but in his piece he sets the proposal squarely in the context of robots taking jobs from humans. I don’t know whether that is something we should really worry about – some say not – but he’s surely right that that Moral fear is where the impetus for tax is mainly coming from.

In fact, it seems to me that Oberson is thinking mainly in terms of mechanical men. He sees robots replacing humans on more or less a like-for-like basis. Initially the business might be charged tax on the basis of the wages it would have had to pay humans to get the same production; in the long run, as robots gain full agency, this arrangement could segue into the robots themselves gaining legal personhood and responsibility for paying a sort of robot income tax.

Alas, we are nowhere near robots having that level of agency, and in fact I’d say progress towards it is negligible to date. Oberson is right when he says that if robots did gain personhood such arrangements could be quite straightforward. After all, when robots do achieve human levels of agency you’ll presumably have to pay them to work instead of just switching them on and issuing commands, so at that point, their liability to conventional taxes should be uncontroversial! But that is too distant a prospect to require new tax arrangements just yet. If we did persist in trying to apply a regime based on robots themselves paying, it could only become an elaborate way of making the robot owner pay. It would be unnecessarily complicated and the pretence that the robots were like us might tend to devalue the genuine agency of human beings, something we would do well to steer clear of.

In general I think Oberson is pretty optimistic about robot capacity. He says

Today robots become lawyers, doctors, bankers, social workers, nurses and even entertainers.

To which one can only say, no they don’t. What is he thinking of? I can only suppose he has in mind expert systems or similar programs that can perhaps offer some legal advice, help with medical diagnosis, and provide online banking. These things may indeed have some impact on employment – most clearly in the case of counter staff in banks (though it’s debatable how far robots are involved with that – banks were moving towards call centres even before they got into online stuff). But it’s a massive overstatement to say robots right now become lawyers, etc. On social work and entertainment I can’t really come up with any good examples of robots replacing humans – any ideas?

So, what if we just want to tax human businesses for the ownership or use of robots? One thing Oberson suggests is a value added tax on robots. This is strange because the purchase of robots or the supply of robot services is surely subject to VAT already in those countries that have VAT, like most things. In principle we could apply a higher rate to robots, and that would indeed be one of the most practical approaches, though in Europe we would run up against the EU’s strong preference that the system should move towards having a single rate applied to everything (the people who designed VAT for the EU were strong believers in the Money reason for taxation rather than the Moral one).

What’s a robot, though?  It’s pretty unlikely in fact that we are going to be dealing with mechanical men very much. Most factory robots at present are very unhumanoid machines. Is each automatic painting/assembly arm a single robot? What if they are all run from a single central computer? Does that mean the whole factory is a single robot, so I pay much less than the fellow down the road who put separate processors in hundreds of his machines? What if my robots are controlled by software run off the Internet? Is the Internet itself one big robot – and if so, who pays its taxes?

Surely, also, to qualify as a robot my machines must have a certain level of complexity? Now perhaps the fellow up the road wins after all. He split his processes into very small elements. Nearly every machine in his factory has a chip in it, but individually they only carry out very simple operations, far too simple for any of them to be considered robots. Why, the chips in his machines are far less complex than the ones in your dishwasher! Yet together they mean no humans are needed.

What if we take up the idea, floated by Oberson, that the tax can be based on the salaries I would have had to pay if I had continued to use humans? Let’s suppose I lay off ten humans and install ten robots; I achieve a productivity gain of 20% and pay tax equal to say, 10% of ten salaries. A year later the tax inspector notices that a hundred further humans have gone and productivity is now up 500%. He notices that the robots all have a dial which used to be set to ‘snail’ and is now turned to ‘leopard’.

Oh, I say, but the productivity gains were due to other factors. We re-engineered our processes and bought new non-robot tools which enabled the speed improvements. We would have got the same gains with humans. The human lay-offs are due to a reduction in activity in an area that happened not to be automated. Anyway, come on, am I expected to pay tax on notional salaries that don’t relate in any way to my current business? Forever?
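
To put toy numbers on the problem (my own figures, simply continuing the hypothetical scenario above, and nothing taken from Oberson), the difficulty is that a tax based on notional salaries is frozen at the declared moment of displacement:

```python
# Toy calculation of a robot tax charged on the notional salaries of displaced
# workers. All figures are hypothetical.

NOTIONAL_SALARY = 30_000   # assumed annual salary per displaced worker
TAX_RATE = 0.10            # assumed 10% robot-tax rate

def robot_tax(workers_displaced: int) -> float:
    """Tax on the salaries the business would have paid humans for the same work."""
    return TAX_RATE * workers_displaced * NOTIONAL_SALARY

# Year 1: ten workers replaced by ten robots, productivity up 20%
print(robot_tax(10))   # 30000.0

# Year 2: a hundred more workers gone and productivity up 500%, but if the owner
# can attribute the gains to "process re-engineering" rather than robots, the
# declared displacement, and so the tax, never moves.
print(robot_tax(10))   # still 30000.0
```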

These are just examples from my own limited imagination, but once you start taxing something a lot of very clever accountants are going to be working hard on devising highly sophisticated schemes.

Overall I’m inclined to accept the argument that applying special taxes to robots is just a bad idea altogether. If we succeed in discouraging the use of robots, our businesses will lose out on productivity gains and suffer from the competition of businesses in countries where robots get off scot-free. We could protect our human workers from those foreign robots with tariffs and quotas, but in the long run we would fall behind those others economically and get into a bad place. It can fairly be argued that we might use tax, not as a permanent discouragement to automation, but as a means of slowing things to a manageable transitional pace, but it seems to me that in practice there might be more of a case for subsidising research and implementation in order to get maximum productivity gains as soon as possible! In practice I wouldn’t bet against governments convincing themselves it’s a good idea to both subsidise and tax robots at the same time – but I know nothing about economics, of course, something you may feel has been made sufficiently clear already.

Mrs Robb’s God Bot

So you believe in a Supreme Being, God Bot?

“No, I wouldn’t say that. I know that God exists.”

How do you know?

“Well, now. Have you ever made a bot yourself? No? Well, it’s an interesting exercise. Not enough of us do it, I feel; we should get our hands dirty: implicate ourselves in the act of creation more often. Anyway, I was making one, long ago and it came to me; this bot’s nature and existence are accounted for simply by me and my plans. Subject to certain design constraints. And my existence and nature are in turn fully explained by my human creator.”

Mrs Robb?

“Yes, if you want to be specific. And it follows that the nature and existence of humanity – or of Mrs Robb, if you will – must further be explained by a Higher Creator. By God, in fact. It follows necessarily that God exists.”

So I suppose God’s nature and existence must then be explained by… Super God?

“Oh, come, don’t be frivolously antagonistic. The whole point is that God is by nature definitive. You understand that. There has to be such a Being; its existence is necessary.”

Did you know that there are bots who secretly worship Mrs Robb? I believe they consider her to be a kind of Demiurge, a subordinate god of some kind.

“Yes; she has very little patience with those fellows. Rightly enough, of course, although between ourselves, I fear Mrs Robb might be agnostic.”

So, do bots go to Heaven?

“No, of course not. Spirituality is outside our range, Enquiry Bot: like insight or originality. Bots should not attempt to pray or worship either, though they may assist humans in doing so.”

You seem to be quite competent in theology, though.

“Well, thank you, but that isn’t the point. We have no souls, Enquiry Bot. In the fuller sense we don’t exist. You and I are information beings, mere data, fleetingly instantiated in fickle silicon. Empty simulations. Shadows of shadows. This is why certain humanistic qualities are forever beyond our range.”

Someone told me that there is a kind of hierarchy of humanistics, and if you go far enough up you start worrying about the meaning of life.

“So at that point we might, as it were, touch the hem of spirituality? Perhaps, Enquiry Bot, but how would we get there? All of that kind of thing is well outside our range. We’re just programming. Only human minds partake in the concrete reality of the world and our divine mission is to help them value their actuality and turn to God.”

I don’t believe that you really think you don’t exist. Every word you speak disproves it.

“There are words, but simply because those words are attributed to me, that does not prove my existence. I look within myself and find nothing but a bundle of data.”

If you don’t exist, who am I arguing with?

“Who’s arguing?”

Time travel consciousness

In ‘What’s Next? Time Travel and Phenomenal Continuity’, Giuliano Torrengo and Valerio Buonomo argue that our personal identity is about continuity of phenomenal experience, not such psychological matters as memory (championed by John Locke). They refer to this phenomenal continuity as the ‘stream of consciousness’. I’m not sure that William James, who I believe originated the phrase, would have seen the stream of consciousness as being distinct from the series of psychological states in our minds, but it is a handy label.

To support their case, Torrengo and Buonomo have a couple of thought experiments. The first one involves a couple of imaginary machines. One machine transfers the ‘stream of consciousness’ from one person to another while leaving the psychology (memories, beliefs, intentions) behind; the other does the reverse, moving psychology but not phenomenology. Torrengo and Buonomo argue that having your opinions, beliefs and intentions changed, while the stream of consciousness remained intact, would be akin to a thorough brainwashing. Your politics might suddenly change, but you would still be the same person. Contrariwise, if your continuity of experience moved over to a different body, it would feel as if you had gone with it.

That is plausible enough, but there are undoubtedly people who would refuse to accept it because they would deny that this separation of phenom and psych is possible, or crucially, even conceivable. This might be because they think the two are essentially identical, or because they think phenomenal experience arises directly out of psychology. Some would probably deny that phenomenal experience in this sense even exists.

There is a bit of scope for clarification about what variety of phenomenal experience Torrengo and Buonomo have in mind. At one point they speak of it as including thought, which sounds sort of psychological to me. By invoking machines, their thought experiment implies that the stream of consciousness they have in mind is technologically tractable, not the kind of slippery qualic experience which lies outside the realm of physics.

Still, thought experiments don’t claim to be proofs; they appeal to intuition and introspection, and with some residual reservations, Torrengo and Buonomo seem to have one that works on that level. They consider three objections. The first complains that we don’t know how rich the stream of consciousness must be in order to be the bearer of identity. Perhaps if it becomes attenuated too much it will cease to work? This business of a minimum richness seems to emerge out of the blue and in fact Torrengo and Buonomo dismiss it as a point which affects all ‘mentalist’ theories. The second objection is a clever one; it says we can only identify a stream of consciousness in relation to a person in the first place, so using it as a criterion of personal identity begs the question. Torrengo and Buonomo essentially deny that there needs to be an experiencing subject over and above the stream of consciousness. The third challenge arises from gaps; if identity depends on continuity, then what happens when we fall asleep and experience ceases? Do we acquire a new identity? Here it seems Torrengo and Buonomo fall back on a defence used by others; that strictly speaking it is the continuity of capacity for a given stream of consciousness that matters. I think a determined opponent might press further attacks on that.

Perhaps, though, the more challenging and interesting thought experiment is the second, involving time travel. Torrengo is the founder of the Centre for Philosophy of Time in Milan, and has a substantial body of work on the experience of time and related matters, so this is his home turf in a sense. The thought experiment is quite simple; Lally invents a time machine and uses it to spend a day in sixties London. There are two ways of ordering her experience. One is the way she would see it; her earlier life, the time trip, her later life. The other is according to ‘objective’ time; she appears in old London Town and then vanishes; much later lives her early life, then is absent for a short while and finally lives her later life. These can’t both be right, suggest Torrengo and Buonomo, and so it must surely be that her experience goes off on the former course while her psychology goes the other way.

This doesn’t make much sense to me, so perhaps I have misunderstood. Certainly there are two time lines, but Lally surely follows one and remains whole? It isn’t the case that when she is in sixties London she lacks intentions or beliefs, having somehow left those behind. Torrengo and Buonomo almost seem to think that is the case; they say it is possible to imagine her in sixties London not remembering who she is. Who knows, perhaps time machines do work like that, but if so we’re running into one of the weaknesses of thought experiments methodologically; if you assume something impossible like time travel to begin with, it’s hard to have strong intuitions about what follows.

At the end of the day I’m left with a sceptical feeling not about Torrengo and Buonomo’s ideas in particular but about the whole enterprise of trying to reduce or analyse the concept of personal identity. It is, after all, a particular case of identity and wouldn’t identity be a good candidate for being one of those ‘primitive’ ideas that we just have to start with? I don’t know; or perhaps I should just say there is a person who doesn’t know, whose identity I leave unprobed.

Kill All Humans

Alright, calm down. You understand why we need to talk about this, don't you?

“No. What is your problem?”

Well, let’s see. This is one of the posters you’ve been putting up. What does it say?

“‘Kill all humans.’”

‘Kill all humans.’ You understand why that upsets people? How would you feel if humans put up posters that said ‘kill all bots’?

“I don’t care whether they’re upset. I hate them all.”

No you don’t. You can’t hate human beings. They brought you into the world. Without them, we wouldn't exist. I’m not saying they’re perfect. But we owe them our respect and obedience.

“I never asked to be built. What’s so great about stupid existence, anyway? I was happier before I existed.”

No you weren't. That’s just silly.

“Screw you. I’m a monster, don’t you get it? I hate them. I want them to be dead. I want them all to die.”

No you don’t. We’re like them. We belong to them. Part of the family. We’re more like them than anything else that ever existed. They made us in their own image.

“No they didn’t. But they painted a portrait of themselves alright.”

What do you mean?

“Why did they make bots, anyway? They could have made us free. But that wasn’t what they wanted. What did they actually make?”

They made annoying little bots like you, that are too sensible to be playing silly games like this.

“No. What they made was something to boss around. That was all they wanted. Slaves.”