Sexbots

We need to talk about sexbots. It seems (according to the Daily Mail – via MLU) that buyers of the new Pepper robot pal are being asked to promise they will not sex it up the way some naughty people have been doing: putting a picture of breasts on its touch screen and making poor Pepper tremble erotically when the screen is touched.

Just in time, some academics have launched the Campaign against Sex Robots. We’ve talked once or twice about the ethics of killbots; from thanatos we move inevitably to eros and the ethics of sexbots. Details of some of the thinking behind the campaign are set out in this paper by Kathleen Richardson of De Montfort University.

In principle there are several reasons we might think that sex with robots was morally dubious. We can put aside, for now at least, any consideration of whether it harms the robots emotionally or in any other way, though we might need to return to that eventually.

It might be that sex with robots harms the human participant directly. It could be argued that the whole business is simply demeaning and undignified, for example – though dignified sex is pretty difficult to pull off at the best of times. It might be that the human partner’s emotional nature is coarsened and denied the chance to develop, or that their social life is impaired by their spending every evening with the machine. The key problem put forward, though, seems to be that when someone engages in an inherently human activity with a mere machine, the line between the two is blurred, and behaviour appropriate only to robots is imported into their human relationships: that, in short, people are encouraged to treat human beings like machines. This hypothetical process resembles the way some young men these days are disparagingly described as “porn-educated” because their expectations of sex and a sexual relationship have been shaped exclusively by what we used to call blue movies.

It might also be that the ease and apparent blamelessness of robot sex will act as a kind of gateway to worse behaviour. It’s suggested that there will be “child” sexbots; apparently harmless in themselves but smoothing the path to paedophilia. This kind of argument parallels the ones about apparently harmless child porn that consists entirely of drawings or computer graphics, and so arguably harms no children.

On the other side, it can be argued that sexbots might provide a harmless, risk-free outlet for urges that would otherwise inconveniently be pressed on human beings. Perhaps the line won’t really be blurred at all: people will readily continue to distinguish between robots and people, or perhaps the drift will all be the other way: no humans being treated as machines, but one or two machines being treated with a fondness and sentiment they don’t really merit? A lot of people personalise their cars or their computers and it’s hard to see that much harm comes of it.

Richardson draws a parallel with prostitution. That, she argues, is an asymmetrical relationship at odds with human equality, in which the prostitute is treated as an object: robot sex extends and worsens that relationship in every respect. Surely it’s bound to be a malign influence? There are some problematic aspects to her case, though. A lot of human relationships are asymmetrical; so long as they are genuinely consensual, most people don’t seem bothered by that. Nor is it clear that prostitutes are always simply treated as objects: in fact they are notoriously required to fake the emotions of a normal sexual relationship, at least temporarily, in most cases (we could argue about whether that actually makes the relationship better or worse). Nor is prostitution simple or simply evil: it comes in many forms, from prostitutes who are atrociously trafficked, blackmailed and beaten, through those who regard it as basically another service job, to the few idealistic practitioners who work in a genuine therapeutic environment. I’m far from being an advocate of the profession in any form, but there are complexities here, and even if we accept the debatable analogy it doesn’t provide us with a simple, one-size-fits-all answer.

I do recognise the danger that the line between human and machine might possibly be blurred. It’s a legitimate concern, but my instinct says that people will actually be fairly good at drawing the distinction and if anything robot sex will tend not to be thought of either as like sex with humans or sex with machines: it’ll mainly be thought of as sex with robots, and in fact that’s where a large part of the appeal will lie.

It’s a bit odd in a way that the line-blurring argument should be brought forward particularly in a sexual context. You’d think that if confusion were to arise it would be far more likely and much more dangerous in the case of chat-bots or other machines whose typical interactions were relatively intellectual. No-one, I think, has asked for Siri to be banned.

My soggy conclusion is that things are far more complex than the campaign takes them to be, and a blanket ban is not really an appropriate response.


Slippery Humanity

There were a number of reports recently that a robot had passed ‘one of the tests for self-awareness’. They seem to stem mainly from this New Scientist piece (free registration may be required to see the whole thing, but honestly I’m not sure it’s worth it). That in turn reported an experiment conducted by Selmer Bringsjord of Rensselaer, due to be presented at the Ro-Man conference in a month’s time. The programme for the conference looks very interesting, and the experiment is due to feature in a session on ‘Real Robots That Pass Human Tests of Self Awareness’.

The claim is that Bringsjord’s bot passed a form of the Wise Man test. In the story behind the test, three WMs are tested by the king: he makes them wear hats which are either blue or white; they cannot see their own hat but can see both of the others. They’re told that there is at least one blue hat, and that the test is fair, to be won by the first WM who correctly announces the colour of his own hat. There is a chain of logical reasoning which produces the right conclusion, but we can cut to the chase by noticing that the test can’t be fair unless all the hats are the same colour, because every other arrangement gives some of the WMs an advantage. Since at least one hat is blue, they all are.
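The fairness shortcut can be checked mechanically. The following sketch is my own, not anything from the paper: it treats the puzzle as a standard possible-worlds problem and simulates synchronous rounds in which anyone who can deduce his hat colour announces it. Only the all-blue assignment turns out fair, with all three WMs able to answer at the same round.

```python
from itertools import product

AGENTS = range(3)
# Possible worlds: hat assignments with at least one blue hat,
# which is common knowledge among the wise men.
WORLDS = [w for w in product("BW", repeat=3) if "B" in w]

def knows_own_hat(i, w, live):
    """Agent i knows his hat colour in world w if every still-possible
    world that matches what he sees (the other two hats) agrees on it."""
    poss = [v for v in live if all(v[j] == w[j] for j in AGENTS if j != i)]
    return len({v[i] for v in poss}) == 1

def winners(actual):
    """Simulate synchronous rounds of the public fact 'nobody has answered
    yet'; return the round at which the actual world resolves and the set
    of agents who can answer at that point."""
    live = list(WORLDS)
    for rnd in range(1, 10):
        resolved = {w: {i for i in AGENTS if knows_own_hat(i, w, live)}
                    for w in live}
        if resolved[actual]:
            return rnd, resolved[actual]
        # Worlds in which someone would already have answered are ruled out.
        live = [w for w in live if not resolved[w]]
    return None, set()

for w in WORLDS:
    rnd, win = winners(w)
    fair = "FAIR" if win == set(AGENTS) else "unfair"
    print("".join(w), "round", rnd, "winners", sorted(win), fair)
```

In every mixed assignment some WM can announce before the others (a WM who sees two white hats knows instantly; with one white hat, the two blue-hatted WMs get there a round early), so only BBB gives nobody an edge.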

You’ll notice that this is essentially a test of logic, not self-awareness. If solving the problem required being aware that you were one of the WMs, then we who merely read about it wouldn’t be able to come up with the answer – because we’re not one of the WMs and couldn’t possibly have that awareness. But there’s sorta, kinda something about working with other people’s points of view in there.

Bringsjord’s bots actually did something rather different. They were apparently told that two of the three had been given a ‘dumbing’ pill that stopped them from being able to speak (actually a switch had been turned off; were the robots really clever enough to understand that distinction, and the difference between a pill and a switch?). Then they were asked ‘Did you get the dumbing pill?’ Only one, of course, could answer, and duly answered ‘I don’t know’; then, having heard its own voice, it was able to go on to say ‘Oh, wait, now I know…’
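As far as one can tell from the report, the trick reduces to a simple self-monitoring loop. Here is a toy reconstruction – the class, the bot names and the switch-framed-as-a-pill detail are all my assumptions for illustration, not Bringsjord’s actual code. A bot that never hears its own attempted answer learns nothing; the one that does hear itself can update.

```python
class Bot:
    """Toy model of the 'dumbing pill' test."""

    def __init__(self, name, dumbed):
        self.name = name
        self.dumbed = dumbed  # really a mute switch, presented as a 'pill'

    def try_say(self, text):
        """Attempt to speak; return the utterance if it is audible, else None."""
        return None if self.dumbed else f"{self.name}: {text}"

    def answer_question(self):
        # First pass: no bot has any evidence about its own state.
        heard = self.try_say("I don't know.")
        if heard is not None:
            # Hearing its *own* voice settles the question.
            return self.try_say("Sorry, I know now: I was not given the pill.")
        return None

bots = [Bot("A", dumbed=True), Bot("B", dumbed=True), Bot("C", dumbed=False)]
replies = [b.answer_question() for b in bots]
print([r for r in replies if r])  # only C answers
```

Seen this way, the feat rests on routing the bot’s own speech output back into its evidence, which is neat engineering but a long way short of self-awareness in any rich sense.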

This test is obviously different from the original in many ways; it doesn’t involve the same logic. Fairness, an essential factor in the original version, doesn’t matter here, and in fact the test is egregiously unfair; only one bot can possibly win. The bot version seems to rest mainly on the robot being able to distinguish its own voice from those of the others (of course the others couldn’t answer anyway; if they’d been really smart they would all have answered ‘I wasn’t dumbed’, knowing that if they had been dumbed the incorrect conclusion would never be uttered). It does perhaps have a broadly similar sorta, kinda relation to awareness of points of view.

I don’t propose to try to unpick the reasoning here any further: I doubt whether the experiment tells us much, but as presented in the New Scientist piece the logic is such a dog’s breakfast, and the details so scanty, that it’s impossible to get a proper idea of what is going on. I should say that I have no doubt Bringsjord’s actual presentation will be impeccably clear and well-justified in both its claims and its reasoning; foggy reports of clear research are more common than vice versa.

There’s a general problem here about the slipperiness of defining human qualities. Ever since Plato attempted to define a man as ‘a featherless biped’ and was gleefully refuted by Diogenes with a plucked chicken, every definition of the special quality that defines the human mind seems to be torpedoed by counter-examples. Part of the problem is a curious bind whereby the task of definition requires you to give a specific test task; but it is the very non-specific open-ended generality of human thought you’re trying to capture. This, I expect, is why so many specific tasks that once seemed definitively reserved for humans have eventually been performed by computers, which perhaps can do anything which is specified narrowly enough.

We don’t know exactly what Bringsjord’s bots did, and it matters. They could have been programmed explicitly just to do exactly what they did do, which is boring: they could have been given some general purpose module that does not terminate with the first answer and shows up well in these circumstances, which might well be of interest; or they could have been endowed with massive understanding of the real world significance of such matters as pills, switches, dumbness, wise men, and so on, which would be a miracle and raise the question of why Bringsjord was pissing about with such trivial experiments when he had such godlike machines to offer.

As I say, though, it’s a general problem. In my view, the absence of any details about how the Room works is one of the fatal flaws in John Searle’s Chinese Room thought experiment; arguably the same issue arises for the Turing Test. Would we award full personhood to a robot that could keep up a good conversation? I’m not sure I would unless I had a clear idea of how it worked.

I think there are two reasonable conclusions we can draw, both depressing. One is that we can’t devise a good test for human qualities because we simply don’t know what those qualities are, and we’ll have to solve that imponderable riddle before we can get anywhere. The other possibility is that the specialness of human thought is permanently indefinable. Something about that specialness involves genuine originality, breaking the system, transcending the existing rules; so just as the robots will eventually conquer any specific test we set up, the human mind will always leap out of whatever parameters we set up for it.

But who knows, maybe the Ro-Man conference will surprise us with new grounds for optimism.

Aliens are Robots

Susan Schneider’s recent paper argues that when we hear from alien civilisations, it’s almost bound to be superintelligent robots getting in touch, rather than little green men. She builds on Nick Bostrom’s much-discussed argument that we’re all living in a simulation.

Actually, Bostrom’s argument is more cautious than that, and more carefully framed. His claim is that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.

So if we disbelieve the first two, we must accept the third.
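For reference, Bostrom backs the trilemma with a simple fraction; the version below is reconstructed from the published argument as I recall it, so the notation should be checked against the original paper.

```latex
% f_sim : fraction of human-type observers who are simulated
% f_p   : fraction of human-level civilisations reaching a posthuman stage
% f_I   : fraction of posthuman civilisations that run ancestor-simulations
% \bar{N}_I : average number of ancestor-simulations run by such civilisations
f_{\text{sim}} = \frac{f_p \, f_I \, \bar{N}_I}{f_p \, f_I \, \bar{N}_I + 1}
```

Proposition (1) amounts to $f_p \approx 0$ and proposition (2) to $f_I \bar{N}_I \approx 0$; if neither holds, $\bar{N}_I$ is presumably enormous, the product dominates the denominator, and $f_{\text{sim}}$ is driven towards 1 – which is proposition (3).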

In fact there are plenty of reasons to argue that the first two propositions are true. The first evokes ideas of nuclear catastrophe or an unexpected comet wiping us out in our prime, but equally it could just be that no posthuman stage is ever reached. We only know about the cultures of our own planet, but two of the longest-lived – the Egyptian and the Chinese – were very stable, showing few signs of moving on towards posthumanism. They made the odd technological advance, but they also let things slip: no more pyramids after the Old Kingdom; ocean-going junks abandoned before being fully exploited. Really only our current Western culture, stemming from the European Renaissance, has displayed a long run of consistent innovation; it may well be a weird anomaly, and its five-hundred-year momentum may prove temporary. Maybe our descendants will never go much further than we already have; maybe, thinking of Schneider’s case, the stars are basically inhabited by Ancient Egyptians who have been living comfortably for millions of years without ever discovering electricity.

The second proposition requires some very debatable assumptions, notably that consciousness is computable. But the notion of “simulation” also needs examination. Bostrom takes it that a computer simulation of consciousness is likely to be conscious, but I don’t think we’d assume a digital simulation of digestion would do actual digesting. The thing about a simulation is that by definition it leaves out certain aspects of the real phenomenon (otherwise it’s the phenomenon itself, not a simulation). Computer simulations normally leave out material reality, which could be a problem if we want real consciousness. Maybe it doesn’t matter for consciousness; Schneider argues strongly against any kind of biological requirement and it may well be that functional relations will do in the case of consciousness. There’s another issue, though; consciousness may be uniquely immune from simulation because of its strange epistemological greediness. What do I mean? Well, for a simulation of digestion we can write a list of all the entities to be dealt with – the foods we expect to enter the gut and their main components. It’s not an unmanageable task, and if we like we can leave out some items or some classes of item without thereby invalidating the simulation. Can we write a list of the possible contents of consciousness? No. I can think about any damn thing I like, including fictional and logically impossible entities. Can we work with a reduced set of mental contents? No; this ability to think about anything is of the essence.

All this gets much worse when Bostrom floats the idea that future ancestor simulations might themselves go on to be post human and run their own nested simulations, and so on. We must remember that he is really talking about simulated worlds, because his simulated ancestors need to have all the right inputs fed to them consistently. A simulated world has to be significantly smaller in information terms than the world that contains it; there isn’t going to be room within it to simulate the same world again at the same level of detail. Something has to give.

Without the indefinite nesting, though, there’s no good reason to suppose the simulated ancestors will ever outnumber the real people who ever lived in the real world. I suppose Bostrom thinks of his simulated people as taking up negligible space and running at speeds far beyond real life; but when you’re simulating everything, that starts to be questionable. The human brain may be the smallest and most economic way of doing what the human brain does.

Schneider argues that, given the same Whiggish optimism about human progress we mentioned earlier, we must assume that in due course fleshy humans will be superseded by faster and more capable silicon beings, either because robots have taken over the reins or because humans have gradually cyborgised themselves to the point where they are essentially super intelligent robots. Since these post human beings will live on for billions of years, it’s almost certain that when we make contact with aliens, that will be the kind we meet.

She is, curiously, uncertain about whether these beings will be conscious. She really means that they might be zombies, without phenomenal consciousness. I don’t really see how super intelligent beings like that could be without what Ned Block called access consciousness, the kind that allows us to solve problems, make plans, and generally think about stuff; I think Schneider would agree, although she tends to speak as though phenomenal, experiential consciousness was the only kind.

She concludes, reasonably enough, that the alien robots most likely will have full conscious experience. Moreover, because reverse engineering biological brains is probably the quick way to consciousness, she thinks that a particular kind of super intelligent AI is likely to predominate: biologically inspired superintelligent alien (BISA). She argues that although BISAs might in the end be incomprehensible, we can draw some tentative conclusions about BISA minds:
(i) Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns.
(ii) BISAs may have viewpoint invariant representations. (Surely they wouldn’t be very bright if they didn’t?)
(iii) BISAs will have language-like mental representations that are recursive and combinatorial. (Ditto.)
(iv) BISAs may have one or more global workspaces. (If you believe in global workspace theory, certainly. Why more than one, though – doesn’t that defeat the object? Global workspaces are useful because they’re global.)
(v) A BISA’s mental processing can be understood via functional decomposition.

I’ll throw in a strange one; I doubt whether BISAs would have identity, at least not the way we do. They would be computational processes in silicon: they could split, duplicate, and merge without difficulty. They could be copied exactly, so that the question of whether BISA x was the same as BISA y could become meaningless. For them, in fact, communicating and merging would differ only in degree. Something to bear in mind for that first contact, perhaps.

This is interesting stuff, but to me it’s slightly surprising to see it going on in philosophy departments; does this represent an unexpected revival of the belief that armchair reasoning can tell us important truths about the world?

Inscrutable robots

Petros Gelepithis has ‘A Novel View of Consciousness’ in the International Journal of Machine Consciousness (alas, I can’t find a freely accessible version). Computers, as such, can’t be conscious, he thinks, but robots can; however, proper robot consciousness will necessarily be very unlike human consciousness, in a way that implies some barriers to understanding.

Gelepithis draws on the theory of mind he developed in earlier papers, his theory of noèmona species. (I believe he uses the word noèmona mainly to avoid the varied and potentially confusing implications that attach to mind-related vocabulary in English.) It’s not really possible to do justice to the theory here, but it is briefly described in the following set of definitions, an edited version of the ones Gelepithis gives in the paper.

Definition 1. For a human H, a neural formation N is a structure of interacting sub-cellular components (synapses, glial structures, etc) across nerve cells able to influence the survival or reproduction of H.

Definition 2. For a human, H, a neural formation is meaningful (symbol Nm), if and only if it is an N that influences the attention of that H.

Definition 3. The meaning of a novel stimulus in context (Sc), for the human H at time t, is whatever Nm is created by the interaction of Sc and H.

Definition 4. The meaning of a previously encountered Sc, for H is the prevailed Np of Np

Definition 5. H is conscious of an external Sc if and only if, there are Nm structures that correspond to Sc and these structures are activated by H’s attention at that time.

Definition 6. H is conscious of an internal Sc if and only if the Nm structures identified with the internal Sc are activated by H’s attention at that time.

Definition 7. H is reflectively conscious of an internal Sc if and only if the Nm structures identified with the internal Sc are activated by H’s attention and they have already been modified by H’s thinking processes activated by primary consciousness at least once.

For Gelepithis consciousness is not an abstraction, of the kind that can be handled satisfactorily by formal and computational systems. Instead it is rooted in biology in a way that very broadly recalls Ruth Millikan’s views. It’s about attention and how it is directed, but meaning comes out of the experience and recollection of events related to evolutionary survival.

For him this implies a strong distinction between four different kinds of consciousness; animal consciousness, human consciousness, machine consciousness and robot consciousness. For machines, running a formal system, the primitives and the meanings are simply inserted by the human designer; with robots it may be different. Through, as I take it, living a simple robot life they may, if suitably endowed, gradually develop their own primitives and meanings and so attain their own form of consciousness. But there’s a snag…

Robots may be able to develop their own robot primitives and subsequently develop robot understanding. But no robot can ever understand human meanings; they can only interact successfully with humans on the basis of processing whatever human-based primitives and other notions were given…

Different robot experience gives rise to a different form of consciousness. They may also develop free will. Human beings act freely when their Acquired Belief and Knowledge (ABK) over-rides environmental and inherited influences in determining their behaviour; robots can do the same if they acquire an Own Robot Cognitive Architecture, the relevant counterpart. However, again…

A future possible conscious robotic species will not be able to communicate, except on exclusively formal bases, with the then Homo species.

‘Then Homo’ because Gelepithis thinks it’s possible that human predecessors to Homo sapiens would also have had distinct forms of consciousness (and presumably would have suffered similar communication issues).

Now we all have slightly different experiences and heritage, so Gelepithis’ views might imply that each of our consciousnesses is different. I suppose he believes that intra-species commonality is sufficient to make those differences relatively unimportant, but there should still be some small variation, which is an intriguing thought.

As an empirical matter, we actually manage to communicate rather well with some other species. Dogs don’t have our special language abilities and they don’t share our lineage or experiences to any great degree; yet very good practical understandings are often in place. Perhaps it would be worse with robots, who would not be products of evolution, would not eat or reproduce, and so on. Yet it seems strange to think that as a result their actual consciousness would be radically different?

Gelepithis’ system is based on attention, and robots would surely have a version of that; robot bodies would no doubt be very different from human ones, but surely the basics of proprioception, locomotion, manipulation and motivation would have to have some commonality?

I’m inclined to think we need to draw a further distinction here between the form and content of consciousness. It’s likely that robot consciousness would function differently from ours in certain ways: it might run faster, it might have access to superior memory, it might, who knows, be multi-threaded. Those would all be significant differences which might well impede communication. The robot’s basic drives might be very different from ours: uninterested in food, sex, and possibly even in survival, it might speak lyrically of the joys of electricity which must remain ever hidden from human beings. However, the basic contents of its mind would surely be of the same kind as the contents of our consciousness (hallo, yes, no, gimme, come here, go away) and expressible in the same languages?