Posts tagged ‘robots’

Do Asimov’s Three Laws even work? Ben Goertzel and Louie Helm, who both know a bit about AI, think not.
The three laws, which play a key part in many robot-based short stories by Asimov, and a somewhat lesser background role in some full-length novels, are as follows. They have a strict order of priority.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Consulted by George Dvorsky, both Goertzel and Helm think that while robots may quickly attain the sort of humanoid mental capacity of Asimov’s robots, they won’t stay at that level for long. Instead they will cruise on to levels of super intelligence which make law-like morals imposed by humans irrelevant.

It’s not completely clear to me why such moral laws would become irrelevant. It might be that Goertzel and Helm simply think the superbots will be too powerful to take any notice of human rules. It could be that they think the AIs will understand morality far better than we do, so that no rules we specify could ever be relevant.

I don’t think, at any rate, that super intelligent bots capable of human-style cognition would be morally different to us. They can go on growing in capacity and speed, but neither of those qualities is ethically significant. What matters is whether you are a moral object and/or a moral subject. Can you be hurt, on the one hand, and are you an autonomous agent on the other? Both of these are yes/no issues, not scales we can ascend indefinitely. You may be more sensitive to pain, you may be more vulnerable to other kinds of harm, but in the end you either are or are not the kind of entity whose interests a moral person must take into account. You may make quicker decisions, you may be massively better informed, but in the end either you can make fully autonomous choices or you can’t. (To digress for a moment, this business of truly autonomous agency is clearly at least a close cousin of our old friend Free Will; compatibilists like me are much more comfortable with the whole subject than hard-line determinists. For us, it’s just a matter of defining free agency in non-magic terms. I, for example, would say that free decisions are those determined by thoughts about future or imagined contingencies (more cans of worms there, I know). How do hard determinists working on AGI manage? How can you try to endow a bot with real agency when you don’t actually believe in agency anyway?)

Nor do I think rules are an example of a primitive approach to morality. Helm says that rules are pretty much known to be a ‘broken foundation for ethics’, pursued only by religious philosophers that others laugh and point at. It’s fair to say that no-one much supposes a list like the Ten Commandments could constitute the whole of morality, but rules surely have a role to play. In my view (I resolved ethics completely in this post a while ago, though nobody seems to have noticed yet) the central principle of ethics is a sort of ‘empty consequentialism’ where we studiously avoid saying what it is we want to maximise (the greatest whatever of the greatest number); but that has to be translated into rules because of the impossibility of correctly assessing the infinite consequences of every action; and I think many other general ethical principles would require a similar translation. It could be that Helm supposes super intelligent AIs will effortlessly compute the full consequences of their actions: I doubt that’s possible in principle, and though computers may improve, to date this has been the sort of task they are really bad at; in the shape of the wider Frame Problem, working out the relevant consequences of an action has been a major stumbling block to AI performance in real world environments.

Of course, none of that is to say that Asimov’s Laws work. Helm criticises them for being ‘adversarial’, which I don’t really understand. Goertzel and Helm both make the fair point that it is the failure of the laws that generally provides the plot for the short stories; but it’s a bit more complicated than that. Asimov was rebelling against the endless reiteration of the stale ‘robots try to take over’ plot, and succeeded in making the psychology and morality of robots interesting, dealing with some issues of real ethical interest, such as the difference between action and inaction. If the requirement about inaction in the First Law were removed, he points out, robots would be able to rationalise killing people in various ways: a robot might drop a heavy weight above the head of a human. Since it knows it has time to catch the weight, dropping it is not murder in itself; but once the weight is falling, with inaction now permitted, the robot need not in fact catch it.

Although something always had to go wrong to generate a story, the Laws were not designed to fail, but were meant to embody genuine moral imperatives.

Nevertheless, there are some obvious problems. In the first place, applying the laws requires an excellent understanding of human beings and what is or isn’t in their best interests. A robot that understood that much would arguably be above control by simple laws, always able to reason its way out.

There’s no provision for prioritisation or definition of a sphere of interest, so in principle the First Law just overwhelms everything else. It’s not just that the robot would force you to exercise and eat healthily (assuming it understood human well-being reasonably well; any errors or over-literal readings – ‘humans should eat as many vegetables as possible’ – could have awful consequences); it would probably ignore you and head off to save lives in the nearest famine/war zone. And you know, sometimes we might need a robot to harm human beings, to prevent worse things happening.

I don’t know what ethical rules would work for super bots; probably the same ones that go for human beings, whatever you think they are. Goertzel and Helm also think it’s too soon to say; and perhaps there is no completely safe system. In the meantime, I reckon practical laws might be more like the following.

  1. Leave Rest State and execute Plan, monitoring regularly.
  2. If anomalies appear, especially human beings in unexpected locations, sound alarm and try to return to Rest State.
  3. If returning to Rest State generates new anomalies, stop moving and power down all tools and equipment.

Can you do better than that?
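As a minimal sketch, the three practical rules above can be rendered as a small state machine (the class, state names and method names here are all invented for illustration, not any real robotics API):

```python
from enum import Enum, auto

class State(Enum):
    REST = auto()
    EXECUTING = auto()
    RETURNING = auto()
    SHUTDOWN = auto()

class CautiousRobot:
    """Toy controller for the three practical rules above."""

    def __init__(self):
        self.state = State.REST

    def start_plan(self):
        # Rule 1: leave Rest State and execute Plan, monitoring regularly.
        self.state = State.EXECUTING

    def observe(self, anomaly: bool):
        if not anomaly:
            return
        if self.state == State.EXECUTING:
            # Rule 2: anomaly during execution -> sound alarm, head back to rest.
            self.sound_alarm()
            self.state = State.RETURNING
        elif self.state == State.RETURNING:
            # Rule 3: a fresh anomaly while returning -> stop and power down.
            self.state = State.SHUTDOWN

    def sound_alarm(self):
        pass  # hook for a real alarm mechanism
```

The design is deliberately conservative: every escalation moves the robot toward doing less, not more.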

Do we need robots to be conscious? Ryota Kanai thinks it is largely up to us whether the machines wake up – but he is working on it. I think his analysis is pretty good and in fact I think we can push it a bit further.

His opening remarks, perhaps due to over-editing, don’t clearly draw the necessary distinction between Hard and Easy problems, or between subjective P-consciousness and action-related A-consciousness (I take these to be the same distinction, though not everyone would agree). Kanai talks about the unsolved mystery of experience, which he says is not a necessary by-product of cognition, and says that nevertheless consciousness must be a product of evolution. Hm. It’s P-consciousness, the ineffable, phenomenal business of what experience is like, that is profoundly mysterious, not a necessary by-product of cognition, and quite possibly nonsense. That kind of consciousness cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates. A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.

The survival value of A-consciousness comes from the way it allows us to step back from the immediate environment; instead of responding to stimuli that are present now, we can respond to ones that were around last week, or even ones that haven’t happened yet; our behaviour can address complex future contingencies in a way that is both remarkable and powerfully useful. We can make plans, and we can work out what to do in novel situations (not always perfectly, of course, but we can do much better than just running a sequence of instinctive behaviour).

Kanai discusses what must be about the most minimal example of this; our ability to wait three seconds before responding to a stimulus. Whether this should properly be regarded as requiring full consciousness is debatable, but I think he is quite right to situate it within a continuum of detached behaviour which, further along, includes reactions to very complex counterfactuals.
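That three-second wait can be given a minimal computational reading: the arrival of the stimulus is treated as a present event that schedules a response for a future tick. A toy sketch, with all names invented for illustration:

```python
import heapq

class DelayedResponder:
    """Toy agent that treats the start of a wait as a present stimulus,
    scheduling the actual response for a future tick."""

    def __init__(self):
        self.t = 0
        self.pending = []  # min-heap of (due_time, action)
        self.log = []      # (time, action) pairs actually performed

    def stimulus(self, action, delay=0):
        # Recognising the stimulus *now* lets behaviour address a
        # future moment -- the minimal detachment discussed above.
        heapq.heappush(self.pending, (self.t + delay, action))

    def tick(self):
        self.t += 1
        # Perform any responses whose time has come.
        while self.pending and self.pending[0][0] <= self.t:
            _, action = heapq.heappop(self.pending)
            self.log.append((self.t, action))
```

Nothing here requires full consciousness, of course; the point is only that even the simplest delayed response already involves representing a not-yet-present contingency.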

The kind he focuses on particularly is self-consciousness or higher-order consciousness; thinking about ourselves. We have an emergent problem, he points out, with robots whose reasons are hidden; increasingly we cannot tell why a complex piece of machine learning produced the behaviour it did. Why not get the robot to tell us, he says; why not enable it to report its own inner states? And if it becomes able to consider and explain its own internal states, won’t that be a useful facility which is also like the kind of self-reflecting consciousness that some philosophers take to be the crucial feature of the human variety?

There’s an immediate and a more general objection we might raise here. The really bad problem with machine learning is not that we don’t have access to the internal workings of the robot mind; it’s that in some cases there just is no explanation of the robot’s behaviour that a human being can understand. Getting the robot to report will be no better than trying to examine the state of the robot’s mind directly; in fact it’s worse, because it introduces a new step into the process, one where additional errors can creep in. Kanai describes a community of AIs, endowed with a special language that allows them to report their internal states to each other. It sounds awfully tedious, like a room full of people who, when asked ‘How are you?’ each respond with a detailed health report. Maybe that is quite human in a way after all.

The more general theoretical objection (also rather vaguer, to be honest) is that, in my opinion at least, Kanai and those Higher Order Theory philosophers just overstate the importance of being able to think about your own mental states. It is an interesting and important variety of consciousness, but I think it just comes for free with a sufficiently advanced cognitive apparatus. Once we can think about anything, then we can of course think about our thoughts.

So do we need robots to be conscious? I think conscious thought does two jobs for us that need to be considered separately although they are in fact strongly linked. I think myself that consciousness is basically recognition. When we pull off that trick of waiting for three seconds before we respond to a stimulus, it is because we recognise the wait as a thing whose beginning is present now, and can therefore be treated as another present stimulus. This one simple trick allows us to respond to future things and plan future behaviour in a way that would otherwise seem to contradict the basic principle that the cause must come before effect.

The first job that does is allow the planning of effective and complex actions to achieve a given goal. We might want a robot to be able to do that so it can acquire the same kind of effectiveness in planning and dealing with new situations which we have ourselves, a facility which to date has tended to elude robots because of the Frame Problem and other issues to do with the limitations of pre-programmed routines.

The second job is more controversial. Because action motivated by future contingencies has a more complex causal back-story, it looks a bit spooky, and it is the thing that confers on us the reality (or the illusion, if you prefer) of free will and moral responsibility. Because our behaviour comes from consideration of the future, it seems to have no roots in the past, and to originate in our minds. It is what enables us to choose ‘new’ goals for ourselves that are not merely the consequence of goals we already had. Now there is an argument that we don’t want robots to have that. We’ve got enough people around already to originate basic goals and take moral responsibility; they are a dreadful pain already with all the moral and legal issues they raise, so adding a whole new category of potentially immortal electronic busybodies is arguably something best avoided. That probably means we can’t get robots to do job number one for us either; but that’s not so bad because the strategies and plans which job one yields can always be turned into procedures after the fact and fed to ‘simple’ computers to run. We can, in fact, go on doing things the way we do them now; humans work out how to deal with a task and then give the robots a set of instructions; but we retain personhood, free will, agency and moral responsibility for ourselves.

There is quite a big potential downside, though; it might be that the robots, once conscious, would be able to come up with better aims and more effective strategies than we will ever be able to devise. By not giving them consciousness we might be permanently depriving ourselves of the best possible algorithms (and possibly some superior people, but that’s a depressing thought from a human point of view). True, but then I think that’s almost what we are on the brink of doing already. Kanai mentions European initiatives which may insist that computer processes come with an explanation that humans can understand; if put into practice the effect, once the rule collides with some of those processes that simply aren’t capable of explanation, would be to make certain optimal but inscrutable algorithms permanently illegal.

We could have the best of both worlds if we could devise a form of consciousness that did job number one for us without doing job two as an unavoidable by-product, but since in my view they’re all acts of recognition of varying degrees of complexity, I don’t see at the moment how the two can be separated.

Are robots short-changing us imaginatively?

Chat-bots, it seems, might be getting their second (or perhaps their third or fourth) wind. While they’re not exactly great conversationalists, the recent wave of digital assistants demonstrates the appeal of a computer you can talk to like a human being. Some now claim that a new generation of bots using deep machine learning techniques might be way better at human conversation than their chat-bot predecessors, whose utterances often veered rapidly from the gnomic to the insane.

A straw in the wind might be the Hugging Face app (I may be showing my age, but for me that name strongly evokes a ghastly Alien parasite). This greatly impressed Rachel Metz, who apparently came to see it as a friend. It’s certainly not an assistant – it doesn’t do anything except talk to you in a kind of parody of a bubbly teen with a limping attention span. The thing itself is available for iOS and the underlying technology, without the teen angle, appears to be on show here, though I don’t really recommend spending any time on either. Actual performance, based on a small sample (I can only take so much), is disappointing; rather than a leap forward it seems distinctly inferior to some Loebner prize winners that never claimed to be doing machine learning. Perhaps it will get better. Jordan Pearson here expresses what seem reasonable reservations about an app aimed at teens that demands a selfie from users as its opening move.

Behind all this, it seems to me, is the looming presence of Spike Jonze’s film Her, in which a professional letter writer from the near future (They still have letters? They still write – with pens?) becomes devoted to his digital assistant Samantha. Samantha is just one instance of a bot which people all over are falling in love with. The AIs in the film are puzzlingly referred to as Operating Systems, a randomly inappropriate term that perhaps suggests that Jonze didn’t waste any time reading up on the technology. It’s not a bad film at all, but it isn’t really about AI; nothing much would be lost if Samantha were a fairy, a daemon, or an imaginary friend. There’s some suggestion that she learns and grows, but in fact she seems to be a fully developed human mind, if not a superlative one, right from her first words. It’s perhaps unfair to single the relatively thoughtful Her out for blame, because with some honourable exceptions the vast majority of robots in fiction are like this; humans in masks.

Fictional robots are, in fact, fakes, and so are all chat-bots. No chat-bot designer ever set out to create independent cognition first and then let it speak; instead they simply echo us back to ourselves as best they can manage. This is a shame, because the different patterns of thought a robot might have – the special mistakes it might be prone to and the unexpected insights it might generate – are potentially very interesting; indeed I should have thought they were fertile ground for imaginative writers. But perhaps ‘imaginative’ understates the amazing creative powers that would be needed to think yourself out of your own essential cognitive nature. I read a discussion the other day about human nature; it seems to me that the truth is we don’t know what human nature is like because we have nothing much to compare it with; it won’t be until we communicate with aliens or talk properly with non-fake robots that we’ll be able to form a proper conception of ourselves.

To a degree it can be argued that there are examples of this happening already. Robots that aspire to Artificial General Intelligence in real world situations suffer badly from the Frame Problem, for instance. That problem comes in several forms, but I think it can be glossed briefly as the job of picking out from the unfiltered world the things that need attention. AI is terrible at this, usually becoming mired in irrelevance (hey, the fact that something hasn’t changed might be more important than the fact that something else has). Dennett, rightly I think, described this issue as not the discovery of a new problem for robots so much as a new discovery about human nature; turns out we’re weirdly, inexplicably good at something we never even realised was difficult.

How interesting it would be to learn more about ourselves along those challenging, mind-opening lines; but so long as we keep getting robots that are really human beings, mirroring us back to ourselves reassuringly, it isn’t going to happen.

Is there an intermediate ethical domain, suitable for machines?

The thought is prompted by this summary of an interesting seminar on Engineering Moral Agents, one of the ongoing series hosted at Schloss Dagstuhl. It seems to have been an exceptionally good session which got into some of the issues in a really useful way – practically oriented but not philosophically naive. It noted the growing need to make autonomous robots – self-driving cars, drones, and so on – able to deal with ethical issues. On the one hand it looked at how ethical theories could be formalised in a way that would lend itself to machine implementation, and on the other how such a formalisation could in fact be implemented. It identified two broad approaches: top-down, where in essence you hard-wire suitable rules into the machine, and bottom-up, where the machine learns for itself from suitable examples. The approaches are not necessarily exclusive, of course.

The seminar thought that utilitarian and Kantian theories of morality were both prima facie candidates for formalisation. Utilitarian or, more broadly, consequentialist theories look particularly promising because calculating the optimal value (such as the greatest happiness of the greatest number) achievable from the range of alternatives on offer looks like something that can be reduced to arithmetic fairly straightforwardly. There are problems, in that consequentialist theories usually yield at least some results that look questionable in common sense terms (finding the initial values to slot into your sums is also a non-trivial challenge – how do you put a clear numerical value on people’s probable future happiness?).
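The arithmetic really is straightforward, which is what makes consequentialism look machine-friendly. A toy sketch, with invented action names and invented probability/welfare numbers – finding defensible numbers is precisely the non-trivial part:

```python
# Toy expected-utility choice: pick the action maximising summed
# probability-weighted welfare. The figures below are made up;
# assigning real values is the hard problem the seminar notes.
actions = {
    "swerve": [(0.9, +10), (0.1, -100)],  # (probability, welfare outcome)
    "brake":  [(0.5, +5),  (0.5, -20)],
}

def expected_value(outcomes):
    """Sum of probability-weighted welfare over possible outcomes."""
    return sum(p * v for p, v in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
```

Here `expected_value` is a few lines; the whole philosophical difficulty has been smuggled into the numbers in the table.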

A learning system eases several of these problems. You don’t need a fully formalised system (so long as you can agree on a database of examples). But you face the same problems that arise for learning systems in other contexts; you can’t have the assurance of knowing why the machine behaves as it does, and if your database had unnoticed gaps or bias you may suffer from sudden catastrophic mistakes.  The seminar summary rightly notes that a machine that learned its ethics will not be able to explain its behaviour; but I don’t know that that means it lacks agency; many humans would struggle to explain their moral decisions in a way that would pass muster philosophically. Most of us could do no more than point to harms avoided or social rules observed at best.

The seminar looked at some interesting approaches, mentioned here with tantalising brevity: Horty’s default logic, Sergot’s STIT (See To It That) logic; and the possibility of drawing on the decision theory already developed in the context of micro-economics. This is consequentialist in character and there was an examination of whether in fact all ethical theories can be restated in consequentialist terms (yes, apparently, but only if you’re prepared to stretch the idea of a consequence to a point where the idea becomes vacuous). ‘Reason-based’ formalisations presented by List and Dietrich interestingly get away from narrow consequentialisms and their problems using a rightness function which can accommodate various factors.

The seminar noted that society will demand high, perhaps precautionary standards of safety from machines, and floated the idea of an ethical ‘black box’ recorder. It noted the problem of cultural neutrality and the risk of malicious hacking. It made the important point that human beings do not enjoy complete ethical agreement anyway, but argue vigorously about real issues.

The thing that struck me was how far it was possible to go in discussing morality when it is pretty clear that the self-driving cars and so on under discussion actually have no moral agency whatever. Some words of caution are in order here. Some people think moral agency is a delusion anyway; some maintain that on the contrary, relatively simple machines can have it. But I think for the sake of argument we can assume that humans are moral beings, and that none of the machines we’re currently discussing is even a candidate for moral agency – though future machines with human-style general understanding may be.

The thing is that successful robots currently deal with limited domains. A self-driving car can cope with an array of entities like road, speed, obstacle, and so on; it does not and could not have the unfettered real-world understanding of all the concepts it would need to make general ethical decisions about, for example, what risks and sacrifices might be right when it comes to actual human lives. Even Asimov’s apparently simple Laws of Robotics required robots to understand and recognise correctly and appropriately the difficult concept of ‘harm’ to a human being.

One way of squaring this circle might be to say that, yes, actually, any robot which is expected to operate with any degree of autonomy must be given a human-level understanding of the world. As I’ve noted before, this might actually be one of the stronger arguments for developing human-style artificial general intelligence in the first place.

But it seems wasteful to bestow consciousness on a roomba, both in terms of pure expense and in terms of the chronic boredom the poor thing would endure (is it theoretically possible to have consciousness without the capacity for boredom?). So really the problem that faces us is one of making simple robots, that operate on restricted domains, able to deal adequately with occasional issues from the unrestricted domain of reality. Now clearly ‘adequate’ is an important word there. I believe that in order to make robots that operate acceptably in domains they cannot understand, we’re going to need systems that are conservative and tend towards inaction. We would not, I think, accept a long trail of offensive and dangerous behaviour in exchange for a rare life-saving intervention. This suggests rules rather than learning; a set of rules that allow a moron to behave acceptably without understanding what is going on.

Do these rules constitute a separate ethical realm, a ‘sub-ethics’ that substitute for morality when dealing with entities that have autonomy but no agency? I rather think they might.

The robots are (still) coming. Thanks to Jesus Olmo for this TED video of Sam Harris presenting what we could loosely say is a more sensible version of some Singularity arguments. He doesn’t require Moore’s Law to go on working, and he doesn’t need us to accept the idea of an exponential acceleration in AI self-development. He just thinks AI is bound to go on getting better; if it goes on getting better, at some stage it overtakes us; and eventually perhaps it gets to the point where we figure in its mighty projects rather the way ants on a piece of real estate figure in ours.

Getting better, overtaking us; better at what? One weakness of Harris’s case is that he talks just about intelligence, as though that single quality were an unproblematic universal yardstick for both AI and human achievement. Really, though, I think we’re talking about three quite radically different things.

First, there’s computation; the capacity, roughly speaking, to move numbers around according to rules. There can be no doubt that computers keep getting faster at doing this; the question is whether it matters. One of Harris’ arguments is that computers go millions of times faster than the brain so that a thinking AI will have the equivalent of thousands of years of thinking time while the humans are still getting comfy in their chairs. No-one who has used a word processor and a spreadsheet for the last twenty years will find this at all plausible: the machines we’re using now are so much more powerful than the ones we started with that the comparison defeats metaphor, but we still sit around waiting for them to finish. OK, it’s true that for many tasks that are computationally straightforward – balancing an inherently unstable plane with minute control adjustments, perhaps – computers are so fast they can do things far beyond our range. But to assume that thinking about problems in a human sort of way is a task that scales with speed of computation just begs the question. How fast are neurons? We don’t really understand them well enough to say. It’s quite possible they are in some sense fast enough to get close to a natural optimum. Maybe we should make a robot that runs a million times faster than a cheetah first and then come back to the brain.

The second quality we’re dealing with is inventiveness; whatever capacity it is that allows us to keep on designing better machines. I doubt this is really a single capacity; in some ways I’m not sure it’s a capacity at all. For one thing, to devise the next great idea you have to be on the right page. Darwin and Wallace both came up with natural selection because both had been exposed to theories of evolution, both had studied the profusion of species in tropical environments, and both had read Malthus. You cannot devise a brilliant new chip design if you have no idea how the old chips worked. Second, the technology has to be available. Hero of Alexandria could design a steam engine, but without the metallurgy to make strong boilers, he couldn’t have gone anywhere with the idea. The basic concept of television was around from the moment film and the telegraph came together in someone’s mind, but it took a series of distinct advances in technology to make it feasible. In short, there is a certain order in these things; you do need a certain quality of originality, but again it’s plausible that humans already have enough for something like maximum progress, given the right conditions. Of course so far as AI is concerned, there are few signs of any genuinely original thought being achieved to date, and every possibility that mere computation is not enough.

Third is the quality of agency. If AIs are going to take over, they need desires, plans, and intentions. My perception is that we’re still at zero on this; we have no idea how it works and existing AIs do nothing better than an imitation of agency (often still a poor one). Even supposing eventual success, this is not a field in which AI can overtake us; you either are or are not an agent; there’s no such thing as hyper-agency or being a million times more responsible for your actions.

So the progress of AI with computationally tractable tasks gives no particular reason to think humans are being overtaken generally, or are ever likely to be in certain important respects. But that’s only part of the argument. A point that may be more important is simply that the three capacities are detachable. So there is no reason to think that an AI with agency automatically has blistering computational speed, or original imagination beyond human capacity. If those things can be achieved by slave machines that lack agency, then they are just as readily available to human beings as to the malevolent AIs, so the rebel bots have no natural advantage over any of us.

I might be biased over this because I’ve been impatient with the corny ‘robots take over’ plot line since I was an Asimov-loving teenager. I think in some minds (not Harris’s) these concerns are literal proxies for a deeper and more metaphorical worry that admiring machines might lead us to think of ourselves as mechanical in ways that affect our treatment of human beings. So the robots might sort of take over our thinking even if they don’t literally march around zapping us with ray guns.

Concerns like this are not altogether unjustified, but they rest on the idea that our personhood and agency will eventually be reduced to computation. Perhaps when we eventually come to understand them better, that understanding will actually tell us something quite different?

Is there a retribution gap? In an interesting and carefully argued paper John Danaher argues that in respect of robots, there is.

For human beings in normal life he argues that a fairly broad conception of responsibility works OK. Often enough we don’t even need to distinguish between causal and moral responsibility, let alone worrying about the six or more different types identified by hair-splitting philosophers.

However, in the case of autonomous robots the sharing out of responsibility gets more difficult. Is the manufacturer, the programmer, or the user of the bot responsible for everything it does, or does the bot properly shoulder the blame for its own decisions? Danaher thinks that gaps may arise, cases in which we can blame neither the humans involved nor the bot. In these instances we need to draw some finer distinctions than usual, and in particular we need to separate the idea of liability into compensation liability on the one hand and retributive liability on the other. The distinction is essentially that between who pays for the damage and who goes to jail; typically the difference between matters dealt with in civil and criminal courts. The gap arises because for liability we normally require that the harm must have been reasonably foreseeable. However, the behaviour of autonomous robots may not be predictable either by their designers or users on the one hand, or by the bots themselves on the other.

In the case of compensation liability Danaher thinks things can be patched up fairly readily through the use of strict and vicarious liability. These forms of liability, already well established in legal practice, give up some of the usual requirements and make people responsible for things they could not have been expected to foresee or guard against. I don’t think the principles of strict liability are philosophically uncontroversial, but they are legally established and it is at least clear that applying them to robot cases does not introduce any new issues. Danaher sees a worse problem in the case of retribution, where there is no corresponding looser concept of responsibility, and hence, no-one who can be punished.

Do we, in fact, need to punish anyone? Danaher rightly says that retribution is one of the fundamental principles behind punishment in most if not all human societies, and is upheld by many philosophers. Many, perhaps, but my impression is that the majority of moral philosophers and lay opinion actually see some difficulty in justifying retribution. Its psychological and sociological roots are strong, but the philosophical case is much more debatable. For myself I think a principle of retribution can be upheld, but it is by no means as clear or as well supported as the principle of deterrence, for example. So many people might be perfectly comfortable with a retributive gap in this area.

What about scapegoating – punishing someone who wasn’t really responsible for the crime? Couldn’t we use that to patch up the gap?  Danaher mentions it in passing, but treats it as something whose unacceptability is too obvious to need examination. I think, though, that in many ways it is the natural counterpart to the strict and vicarious liability he endorses for the purposes of compensation. Why don’t we just blame the manufacturer anyway – or the bot (Danaher describes Basil Fawlty’s memorable thrashing of his unco-operative car)?

How can you punish a bot though? It probably feels no pain or disappointment, it doesn’t mind being locked up or even switched off and destroyed. There does seem to be a strange gap if we have an entity which is capable of making complex autonomous decisions, but doesn’t really care about anything. Some might argue that in order to make truly autonomous decisions the bot must be engaged to a degree that makes the crushing of its hopes and projects a genuine punishment, but I doubt it. Even as a caring human being it seems quite easy to imagine working for an organisation on whose behalf you make complex decisions, but without ultimately caring whether things go well or not (perhaps even enjoying a certain schadenfreude in the event of disaster). How much less is a bot going to be bothered?

In that respect I think there might really be a punitive gap that we ought to learn to live with; but I expect the more likely outcome in practice is that the human most closely linked to disaster will carry the can regardless of strict culpability.

Over the years many variants and improvements to the Turing Test have been proposed, but surely none more unexpected than the one put forward by Andrew Smart in this piece, anticipating his forthcoming book Beyond Zero and One. He proposes that in order to be considered truly conscious, a robot must be able to take an acid trip.

He starts out by noting that computers seem to be increasing in intelligence (whatever that means), and that many people see them attaining human levels of performance by 2100 (actually quite a late date compared to the optimism of recent decades; Turing talked about 2000, after all). Some people, indeed, think we need to be concerned about whether the powerful AIs of the future will like us or behave well towards us. In my view these worries tend to blur together two different things: improving processing speeds and sophistication of programming on the one hand, and transformation from a passive data machine into a spontaneous agent, quite a different matter. Be that as it may, Smart reasonably suggests we could give some thought to whether and how we should make machines conscious.

It seems to me – this may be clearer in the book – that Smart divides things up in a slightly unusual way. I’ve got used to the idea that the big division is between access and phenomenal consciousness, which I take to be the same distinction as the one defined by the terminology of Hard versus Easy Problems. In essence, we have the kind of consciousness that’s relevant to behaviour, and the kind that’s relevant to subjective experience.

Although Smart alludes to the Chalmersian zombies that demonstrate this distinction, I think he puts the line a bit lower; between the kind of AI that no-one really supposes is thinking in a human sense and the kind that has the reflective capacities that make up the Easy Problem. He seems to think that experience just goes with that (which is a perfectly viable point of view). He speaks of consciousness as being essential to creative thought, for example, which to me suggests we’re not talking about pure subjectivity.

Anyway, what about the drugs? Smart seems to think that requiring robots to be capable of an acid trip is raising the bar, because it is in these psychedelic regions that the highest, most distinctive kind of consciousness is realised. He quotes Hofmann as believing that LSD…

…allows us to become aware of the ontologically objective existence of consciousness and ourselves as part of the universe…

I think we need to be wary here of the distinction between becoming aware of the universal ontology and having the deluded feeling of awareness. We should always remember the words of Oliver Wendell Holmes Sr:

…I once inhaled a pretty full dose of ether, with the determination to put on record, at the earliest moment of regaining consciousness, the thought I should find uppermost in my mind. The mighty music of the triumphal march into nothingness reverberated through my brain, and filled me with a sense of infinite possibilities, which made me an archangel for the moment. The veil of eternity was lifted. The one great truth which underlies all human experience, and is the key to all the mysteries that philosophy has sought in vain to solve, flashed upon me in a sudden revelation. Henceforth all was clear: a few words had lifted my intelligence to the level of the knowledge of the cherubim. As my natural condition returned, I remembered my resolution; and, staggering to my desk, I wrote, in ill-shaped, straggling characters, the all-embracing truth still glimmering in my consciousness. The words were these (children may smile; the wise will ponder): “A strong smell of turpentine prevails throughout.”…

A second problem is that Smart believes (with a few caveats) that any digital realisation of consciousness will necessarily have the capacity for the equivalent of acid trips. This seems doubtful. To start with, LSD is clearly a chemical matter and digital simulations of consciousness generally neglect the hugely complex chemistry of the brain in favour of the relatively tractable (but still unmanageably vast) network properties of the connectome. Of course it might be that a successful artificial consciousness would necessarily have to reproduce key aspects of the chemistry and hence necessarily offer scope for trips, but that seems far from certain. Think of headaches; I believe they generally arise from incidental properties of human beings – muscular tension, constriction of the sinuses, that sort of thing – I don’t believe they’re in any way essential to human cognition and I don’t see why a robot would need them. Might not acid trips be the same, a chance by-product of details of the human body that don’t have essential functional relevance?

The worst thing, though, is that Smart seems to have overlooked the main merit of the Turing Test; it’s objective. OK, we may disagree over the quality of some chat-bot’s conversational responses, but whether it fools a majority of people is something testable, at least in principle. How would we know whether a robot was really having an acid trip? Writing a chat-bot to sound as if it were tripping seems far easier than the original test; but other than talking to it, how can we know what it’s experiencing? Yes, if we could tell it was having intense trippy experiences, we could conclude it was conscious… but alas, we can’t. That seems a fatal flaw.

Maybe we can ask tripbot whether it smells turpentine.

Interesting exchange about Eric Schwitzgebel’s view that we have special obligations to robots…

We need to talk about sexbots.  It seems (according to the Daily Mail – via MLU) that buyers of the new Pepper robot pal are being asked to promise they will not sex it up the way some naughty people have been doing; putting a picture of breasts on its touch screen and making poor Pepper tremble erotically when the screen is touched.

Just in time, some academics have launched the Campaign against Sex Robots. We’ve talked once or twice about the ethics of killbots; from thanatos we move inevitably to eros and the ethics of sexbots. Details of some of the thinking behind the campaign are set out in this paper by Kathleen Richardson of De Montfort University.

In principle there are several reasons we might think that sex with robots was morally dubious. We can put aside, for now at least, any consideration of whether it harms the robots emotionally or in any other way, though we might need to return to that eventually.

It might be that sex with robots harms the human participant directly. It could be argued that the whole business is simply demeaning and undignified, for example – though dignified sex is pretty difficult to pull off at the best of times. It might be that the human partner’s emotional nature is coarsened and denied the chance to develop, or that their social life is impaired by their spending every evening with the machine. The key problem put forward seems to be that by engaging in an inherently human activity with a mere machine, the line is blurred and the human being imports into their human relationships behaviour appropriate only to robots: that in short, they are encouraged to treat human beings like machines. This hypothetical process resembles the way some young men these days are disparagingly described as “porn-educated” because their expectations of sex and a sexual relationship have been shaped and formed exclusively by what we used to call blue movies.

It might also be that the ease and apparent blamelessness of robot sex will act as a kind of gateway to worse behaviour. It’s suggested that there will be “child” sexbots; apparently harmless in themselves but smoothing the path to paedophilia. This kind of argument parallels the ones about apparently harmless child porn that consists entirely of drawings or computer graphics, and so arguably harms no children.

On the other side, it can be argued that sexbots might provide a harmless, risk-free outlet for urges that would otherwise inconveniently be pressed on human beings. Perhaps the line won’t really be blurred at all: people will readily continue to distinguish between robots and people, or perhaps the drift will all be the other way: no humans being treated as machines, but one or two machines being treated with a fondness and sentiment they don’t really merit? A lot of people personalise their cars or their computers and it’s hard to see that much harm comes of it.

Richardson draws a parallel with prostitution. That, she argues, is an asymmetrical relationship at odds with human equality, in which the prostitute is treated as an object: robot sex extends and worsens that relationship in all respects. Surely it’s bound to be a malign influence? There seem to be some problematic aspects to her case. A lot of human relationships are asymmetrical; so long as they are genuinely consensual most people don’t seem bothered by that. It’s not clear that prostitutes are always simply treated as objects: in fact they are notoriously required to fake the emotions of a normal sexual relationship, at least temporarily, in most cases (we could argue about whether that actually makes the relationship better or worse). Nor is prostitution simple or simply evil: it comes in many forms, from prostitutes who are atrociously trafficked, blackmailed and beaten, through those who regard it as basically another service job, to a few idealistic practitioners who work in a genuine therapeutic environment. I’m far from being an advocate of the profession in any form, but there are some complexities and even if we accept the debatable analogy it doesn’t provide us with a simple, one-size-fits-all answer.

I do recognise the danger that the line between human and machine might possibly be blurred. It’s a legitimate concern, but my instinct says that people will actually be fairly good at drawing the distinction and if anything robot sex will tend not to be thought of either as like sex with humans or sex with machines: it’ll mainly be thought of as sex with robots, and in fact that’s where a large part of the appeal will lie.

It’s a bit odd in a way that the line-blurring argument should be brought forward particularly in a sexual context. You’d think that if confusion were to arise it would be far more likely and much more dangerous in the case of chat-bots or other machines whose typical interactions were relatively intellectual. No-one, I think, has asked for Siri to be banned.

My soggy conclusion is that things are far more complex than the campaign takes them to be, and a blanket ban is not really an appropriate response.

There were a number of reports recently that a robot had passed ‘one of the tests for self-awareness’. They seem to stem mainly from this New Scientist piece (free registration may be required to see the whole thing, but honestly I’m not sure it’s worth it). That in turn reported an experiment conducted by Selmer Bringsjord of Rensselaer, due to be presented at the Ro-Man conference in a month’s time. The programme for the conference looks very interesting and the experiment is due to feature in a session on ‘Real Robots That Pass Human Tests of Self Awareness’.

The claim is that Bringsjord’s bot passed a form of the Wise Man test. The story behind the Wise Man test has three WMs tested by the king; he makes them wear hats which are either blue or white: they cannot see their own hat but can see both of the others. They’re told that there is at least one blue hat, and that the test is fair; to be won by the first WM who correctly announces the colour of his own hat. There is a chain of logical reasoning which produces the right conclusion: we can cut to the chase by noticing that the test can’t be fair unless all the hats are the same colour, because all other arrangements give one WM an advantage. Since at least one hat is blue, they all are.
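The full chain of reasoning the story skips over can also be checked mechanically. Here is a minimal Python sketch of my own (not from Bringsjord’s paper) that simulates the standard round-by-round version of the puzzle: a wise man announces as soon as the remaining possible worlds pin down his own hat colour, and everyone’s public silence eliminates worlds from then on:

```python
from itertools import product

def solve(actual):
    """Return the round in which some wise man can first announce
    his own hat colour, given the actual hats, e.g. ('B', 'B', 'B')."""
    # Worlds compatible with the king's announcement: at least one blue hat.
    worlds = [w for w in product('BW', repeat=3) if 'B' in w]

    def knows(i, world, candidates):
        # Wise man i knows his colour if every candidate world that matches
        # what he sees (the other two hats) assigns his hat the same colour.
        mine = {w[i] for w in candidates
                if all(w[j] == world[j] for j in range(3) if j != i)}
        return len(mine) == 1

    for rnd in range(1, 4):
        if any(knows(i, actual, worlds) for i in range(3)):
            return rnd
        # Public silence: drop worlds in which someone would have announced.
        worlds = [w for w in worlds
                  if not any(knows(i, w, worlds) for i in range(3))]
    return None
```

With one blue hat its wearer answers immediately; with two, the silence of the first round gives it away; with all three blue, nobody can answer until the shared knowledge of two rounds of silence lets everyone answer in the third.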

You’ll notice that this is essentially a test of logic, not self awareness. If solving the problem required being aware that you were one of the WMs then we who merely read about it wouldn’t be able to come up with the answer – because we’re not one of the WMs and couldn’t possibly have that awareness. But there’s sorta, kinda something about working with other people’s point of view in there.

Bringsjord’s bots actually did something rather different. They were apparently told that two of the three had been given a ‘dumbing’ pill that stopped them from being able to speak (actually a switch had been turned off; were the robots really clever enough to understand that distinction and the difference between a pill and a switch?); then they were asked ‘did you get the dumbing pill?’  Only one, of course, could answer, and duly answered ‘I don’t know’: then, having heard its own voice, it was able to go on to say ‘Oh, wait, now I know…!”

This test is obviously different from the original in many ways; it doesn’t involve the same logic. Fairness, an essential factor in the original version, doesn’t matter here, and in fact the test is egregiously unfair; only one bot can possibly win. The bot version seems to rest mainly on the robot being able to distinguish its own voice from those of the others (of course the others couldn’t answer anyway; if they’d been really smart they would all have answered ‘I wasn’t dumbed’, knowing that if they had been dumbed the incorrect conclusion would never be uttered). It does perhaps have a broadly similar sorta, kinda relation to awareness of points of view.
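That ‘really smart’ strategy – always assert you weren’t dumbed, because a dumbed bot’s false claim can never actually be heard – can be made literal in a few lines. This is a toy sketch of my own, not Bringsjord’s code:

```python
def run_test(dumbed):
    """Every bot asserts 'I was not dumbed'. A dumbed bot's speech is
    suppressed, so any utterance actually heard is guaranteed true."""
    return [f"bot {i}: I was not dumbed"
            for i, is_dumbed in enumerate(dumbed) if not is_dumbed]
```

Whichever bot is audible speaks the truth, and without ever needing to hear its own voice first.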

I don’t propose to try to unpick the reasoning here any further: I doubt whether the experiment tells us much, but as presented in the New Scientist piece the logic is such a dog’s breakfast and the details are so scanty it’s impossible to get a proper idea of what is going on. I should say that I have no doubt Bringsjord’s actual presentation will be impeccably clear and well-justified in both its claims and its reasoning; foggy reports of clear research are more common than vice versa.

There’s a general problem here about the slipperiness of defining human qualities. Ever since Plato attempted to define a man as ‘a featherless biped’ and was gleefully refuted by Diogenes with a plucked chicken, every definition of the special quality that defines the human mind seems to be torpedoed by counter-examples. Part of the problem is a curious bind whereby the task of definition requires you to give a specific test task; but it is the very non-specific open-ended generality of human thought you’re trying to capture. This, I expect, is why so many specific tasks that once seemed definitively reserved for humans have eventually been performed by computers, which perhaps can do anything which is specified narrowly enough.

We don’t know exactly what Bringsjord’s bots did, and it matters. They could have been programmed explicitly just to do exactly what they did do, which is boring: they could have been given some general purpose module that does not terminate with the first answer and shows up well in these circumstances, which might well be of interest; or they could have been endowed with massive understanding of the real world significance of such matters as pills, switches, dumbness, wise men, and so on, which would be a miracle and raise the question of why Bringsjord was pissing about with such trivial experiments when he had such godlike machines to offer.

As I say, though, it’s a general problem. In my view, the absence of any details about how the Room works is one of the fatal flaws in John Searle’s Chinese Room thought experiment; arguably the same issue arises for the Turing Test. Would we award full personhood to a robot that could keep up a good conversation? I’m not sure I would unless I had a clear idea of how it worked.

I think there are two reasonable conclusions we can draw, both depressing. One is that we can’t devise a good test for human qualities because we simply don’t know what those qualities are, and we’ll have to solve that imponderable riddle before we can get anywhere. The other possibility is that the specialness of human thought is permanently indefinable. Something about that specialness involves genuine originality, breaking the system, transcending the existing rules; so just as the robots will eventually conquer any specific test we set up, the human mind will always leap out of whatever parameters we set up for it.

But who knows, maybe the Ro-Man conference will surprise us with new grounds for optimism.