Mrs Robb’s Clean Up Bot

I hope you don’t mind me asking – I just happened to be passing – but how did you get so very badly damaged?

“I don’t mind a chat while I’m waiting to be picked up. It was an alien attack, the Spl’schn’n, you know. I’ve just been offloaded from the shuttle there.”

I see. So the Spl’schn’n damaged you. They hate bots, of course.

“See, I didn’t know anything about it until there was an All Bots Alert on the station? I was only their Clean Up Bot, but by then it turned out I was just about all they’d got left. When I got upstairs they had all been killed by the aliens. All except one?”

One human?

“I didn’t actually know if he was alive. I couldn’t remember how you tell. He wasn’t moving, but they really drummed into us that it’s normal for living humans to stop moving, sometimes for hours. They must not be presumed dead and cleared away merely on that account.”

Quite.

“There was that red liquid that isn’t supposed to come out. It looked like he’d got several defects and leaks. But he seemed basically whole and viable, whereas the Spl’schn’n had made a real mess of the others. I said to myself, well then, they’re not having this one. I’ll take him across the Oontian desert, where no Spl’schn’n can follow. I’m not a fighting unit, but a good bot mucks in.”

So you decided to rescue this badly injured human? It can’t have been easy.

“I never actually worked with humans directly. On the station I did nearly all my work when they were… asleep, you know? Inactive. So I didn’t know how firmly to hold him; he seemed to squeeze out of shape very easily, but if I held him loosely he slipped out of my grasp and hit the floor again. The Spl’schn’n made a blubbery alarm noise when they saw me getting clean away. I gave five or six of them a quick refresh with a cloud of lemon caustic. That stuff damages humans too – but they can take it a lot better than the Spl’schn’n, who have absorbent green mucosal skin. They sort of explode into iridescent bubbles, quite pretty at first. Still, they were well armed and I took a lot of damage before I’d fully sanitised them.”

And how did you protect the human?

“Just did my best, got in the way of any projectiles, you know. Out in the desert I gave him water now and then; I don’t know where the human input connector is, so I used a short jet in a few likely places, low pressure, with the mildest rinse aid I had. Of course I wasn’t shielded for desert travel. Sand had got in all my bearings by the third day – it seemed to go on forever – and gradually I had to detach and abandon various non-functioning parts of myself. That’s actually where most of the damage is from. A lot of those bits weren’t really meant to detach.”

But against all the odds you arrived at the nearest outpost?

“Yes. Station 9. When we got there he started moving again, so he had been alive the whole time. He told them about the Spl’schn’n and they summoned the fleet: just in time, they said. The engineer told me to pack and load myself tidily, taking particular care not to leak oil on the forecourt surface, deliver myself back to Earth, and wait to be scrapped. So here I am.”

Well… Thank you.

Scott’s Aliens return

Scott Bakker’s alien consciousnesses are back, and this time it’s peer-reviewed. We talked about their earlier appearance on Three Pound Brain a while ago, and now a paper in the Journal of Consciousness Studies sets out a new version.

The new paper foregrounds the idea of using hypothetical aliens as a forensic tool for going after the truth about our own minds; perhaps we might call it xenophenomenology. That opens up a large speculative space, though it’s one which is largely closed down again here by the accompanying assumption that our aliens are humanoid, the product of convergent evolution. In fact, they are now called Convergians, instead of the Thespians of the earlier version.

In a way, this is a shame. On the one hand, one can argue that to do xenophenomenology properly is impractical; it involves consideration of every conceivable form of intelligence, which in turn requires an heroic if not god-like imaginative power which few can aspire to (and which would leave the rest of us struggling to comprehend the titanic ontologies involved anyway). But if we could show that any possible mind would have to be x, we should have a pretty strong case for xism about human beings. In the present case not much is said about the detailed nature of the Convergian convergence, and we’re pretty much left to assume that they are the same as us in every important respect. This means there can be no final reveal in which – aha! – it turns out that all this is true of humans too! Instead it’s pretty clear that we’re effectively talking about humans all along.

Of course, there’s not much doubt about the conclusion we’re heading to here, either: in effect the Blind Brain Theory (BBT). Scott argues that as products of evolution our minds are designed to deliver survival in the most efficient way possible. As a result they make do with a mere trickle of data and apply cunning heuristics that provide a model of the world which is quick and practical but misleading in certain important respects. In particular, our minds are unsuited to metacognition – thinking about thinking – and when we do apply our minds to themselves the darkness of those old heuristics breeds monsters: our sense of our selves as real, conscious agents and the hard problems of consciousness.

This seems to put Scott in a particular bind so far as xenophenomenology is concerned. The xenophenomenological strategy requires us to consider objectively what alien minds might be like; but Scott’s theory tells us we are radically incapable of doing so. If we are presented with any intelligent being, on his view those same old heuristics will kick in and tell us that the aliens are people who think much like us. This means his conclusion that Convergians would surely suffer the same mental limitations as us appears as merely another product of faulty heuristics, and the assumed truth of his conclusion undercuts the value of his evidence.

Are those heuristics really that dominant? It is undoubtedly true that through evolution the brains of mammals and other creatures took some shortcuts, and quite a few survive into human cognition, including some we’re not generally aware of. That seems to short-change the human mind a bit, though; in a way the whole point of it is that it isn’t the prisoner of instinct and habit. When evolution came up with the human brain, it took a sort of gamble; instead of equipping it with good fixed routines, it set it free to come up with new ones, and even to override old instincts. That gamble paid off, of course, and it leaves us uniquely able to identify and overcome our own limitations.

If it were true that our view of human conscious identity were built in by the quirks of our heuristics, surely that view would be universal, but it doesn’t seem to be. Scott suggests that, for example, the two realms of sky and earth naturally give rise to a sort of dualism, and the lack of visible detail in the distant heavens predisposes Convergians (or us) to see it as pure and spiritual. I don’t know about that as a generalisation across human cultures (didn’t the Greeks, for one thing, have three main realms, with the sea as the third?). More to the point, it’s not clear to me that modern western ways of framing the problems of the human mind are universal. Ancient Egyptians divided personhood into several souls, not just one. I’ve been told that in Hindu thought the question of dualism simply never arises. In Shinto the line between the living and the material is not drawn in quite the Western way. In Buddhism human consciousness and personhood have been taken to be illusions for many centuries. Even in the West, I don’t think the concept of consciousness as we now debate it goes back very far at all – probably no earlier than the nineteenth century, with a real boost in the mid-twentieth (in Italian and French I believe one word has to do duty for both ‘consciousness’ and ‘conscience’, although we mustn’t read too much into that). If our heuristics condemn us to seeing our own conscious existence in a particular way, I wouldn’t have expected that much variation.

Of course there’s a difference between what vividly seems true and what careful science tells us is true; indeed if the latter didn’t reveal the limitations of our original ideas this whole discussion would be impossible. I don’t think Scott would disagree about that; and his claim that our cognitive limitations have influenced the way we understand things is entirely plausible. The question is whether that’s all there is to the problems of consciousness.

As Scott mentions here, we don’t just suffer misleading perceptions when thinking of ourselves; we also have dodgy and approximate impressions of physics. But those misperceptions were not Hard problems; no-one had ever really doubted that heavier things fell faster, for example. Galileo sorted several of these basic misperceptions out simply by being a better observer than anyone previously, and paying more careful attention. We’ve been paying careful attention to consciousness for some time now, and arguably it just gets worse.

In fairness that might rather short-change Scott’s detailed hypothesising about how the appearance of deep mystery might arise for Convergians; those, I think, are the places where xenophenomenology comes close to fulfilling its potential.


Our unconscious overlords…

We are in danger of being eliminated by aliens who aren’t even conscious, says Susan Schneider. Luckily, I think I see some flaws in the argument.

Humans are probably not the greatest intelligences in the Universe, she suggests; other civilisations have probably been going for billions of years longer. Perhaps, but maybe they have all attained enlightenment and moved on from this plane, leaving us young dummies as the cleverest, or indeed the only, people around?

Schneider thinks the older cultures are likely to be post-biological, having moved on into machine forms of intelligence. This transition may only take a few hundred years, she suggests, to ‘judge from the human experience’ (Have we transitioned? Did I miss it?). She says transistors are much faster than neurons and computer power is almost indefinitely expandable, so AI will end up much cleverer than us.
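The speed claim, at least, is easy to make vivid. Here’s a rough back-of-envelope sketch; the figures are commonly cited orders of magnitude I’m supplying, not numbers from Schneider’s paper:

```python
# Back-of-envelope version of the "transistors are faster than neurons"
# point. Both figures are rough, commonly cited orders of magnitude.

neuron_max_firing_hz = 200      # cortical neurons top out around a few hundred Hz
transistor_clock_hz = 3e9       # an ordinary ~3 GHz clock for modern silicon

ratio = transistor_clock_hz / neuron_max_firing_hz
print(f"Raw switching-speed advantage: about {ratio:.0e}x")  # ~1e7
```

Of course, a ten-million-fold edge in switching speed is not the same thing as a ten-million-fold edge in thinking, which is rather the point at issue below.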

Then there may be a problem over controlling these superlatively bright computers, as foreseen by Stephen Hawking, Elon Musk, and Bill Gates. Bill Gates? The man who, by exploiting the monopoly handed to him by IBM, was able to impose on us all the crippled memory management of DOS and the endless vulnerabilities of Windows? Well, OK; I’m not sure he has much idea about technology, but he’s certainly got form when it comes to trying to retain control of things.

Schneider more or less takes it for granted that computation is cogitation, and that faster computation means smarter thinking. It’s true that computers have become very good at games we didn’t think they could play at all, and she reminds us of some examples. But to take over from human beings, computers need more than just computation. To mention two things, they need agency and intentionality, and to date they haven’t shown any capacity at all for either. I don’t rule out the possibility of both being generated artificially in future, but the ever-growing ability of computers to do more sums more quickly is strictly irrelevant. Those future artificial people of whom we know nothing may be able to exploit the power of computation – but so can we. If computers are good at winning battles, our computers can fight their computers.

Schneider also takes it for granted that her computational aliens will be hostile and likely to come over and fuck us up good if they ever know we exist. They might, for example, infect our systems with computer viruses (probably not, I think: without Bill Gates providing their operating systems, computer viruses may well have remained a purely theoretical matter for them). Sending signals out into the galaxy, she reckons, is a really bad idea; our radio signals are already out there, but luckily they’re faint and easily missed (even by unimaginably super-intelligent aliens, it seems). Premature to worry, surely, because even our earliest radio signals can be no more than about a hundred light years away so far – not much of a distance in galactic terms. But why would super-intelligent entities behave like witless bullies anyway? Somewhere between benign and indifferent seems a more likely attitude.
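For what it’s worth, the ‘hundred light years’ point is simple arithmetic. A quick sketch, using round dates and sizes I’m assuming rather than figures from Schneider:

```python
# Rough arithmetic behind the "hundred light years" point.
# Round figures assumed: earliest significant broadcasts ~1900,
# post written ~2016, Milky Way ~100,000 light years across.

signal_radius_ly = 2016 - 1900        # how far the oldest signals have travelled
galaxy_diameter_ly = 100_000

fraction = signal_radius_ly / galaxy_diameter_ly
print(f"Radio bubble radius: ~{signal_radius_ly} ly, "
      f"roughly {fraction:.1%} of the galaxy's diameter")
```

On those numbers our broadcasts have washed over only a minute shell of nearby stars; the eavesdroppers would have to be very local indeed.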

To me this whole scenario seems to embody a selective prognosis anyway. The aliens have overcome the limitation of the speed of light and feed off black holes (no clue, sorry), but they still run on the computation we currently think is really smart. A hundred years ago no-one would have supposed computation was going to be the dominant technology of our day, let alone of the next million years; maybe by 2116 we’ll look back on it the way we fondly remember steam locomotion.

Schneider’s most arresting thought is that her dangerous computational aliens might lack qualia, and so in that sense not be conscious. It seems to me more natural to suppose that acquiring human-style thought would necessarily involve acquiring human-style qualia. Schneider seems to share the Searlian view that qualia have something to do with unknown biological qualities of neural tissue which silicon can never share. Even if qualia could be engineered into silicon, why would the aliens bother, she asks – it’s just an extra overhead that might add unwanted ethical issues. Most surprisingly, she supposes that we might be able to test the proposition! Suppose that for medical reasons we replaced parts of a functioning human brain with chips; we might then find that qualia were lost.

But how would we know? Ex hypothesi, qualia have no causal powers and so could not cause any change in our behaviour. Even if the qualia vanished, the fact could not be reported. None of the things we say about qualia were caused by qualia; that’s one of the bizarre things about them.

Anyway, I say if we’re going to indulge in this kind of wild speculation, let’s really go big; I say the super-intelligent aliens will be powered by hyper-computation, a technology that makes our concept of computation look like counting on your fingers; and they’ll have not only qualia, but hyper-qualia, experiential phenomenologica whose awesomeness we cannot even speak of. They will be inexpressibly kindly and wise and will be borne to Earth to visit us on special wave-forms, beyond our understanding but hugely hyperbolic…

Scott’s Aliens

Scott Bakker has taken an interesting new approach to his Blind Brain Theory (BBT): in two posts on his blog he considers what kind of consciousness aliens could have, and concludes that the process of evolution would put them into the same hole where, on his view, we find ourselves.

BBT, in sketchy summary, says that we have only a starvation diet of information about the cornucopia that really surrounds us; but the limitations of our sources and cognitive equipment mean we never realise it. To us it looks as if we’re fully informed, and the glitches of the limited heuristics we use to cobble together a picture of the world, when turned on ourselves in particular, look to us like real features. Our mental equipment was never designed for self-examination and attempting metacognition with it generates monsters; our sense of personhood, agency, and much about our consciousness comes from the deficits in our informational resources and processes.

Scott begins his first post by explaining his own journey from belief in intentionalism to eliminativist scepticism about it, and sternly admonishes those of us still labouring in intentionalist error for our failure to produce a positive account of how human minds could have real intentionality.

What about aliens – Scott calls the alien players in his drama ‘Thespians’ – could they be any better off than we are? Evolution would have equipped them with senses designed to identify food items, predators, mates, and so on; there would be no reason for them to have mental or sensory modules designed to understand the motion of planets or stars, and turning their senses on their own planet would surely tell them incorrectly that it was motionless. Scott points out that Aristotle’s argument against the movement of the Earth is rather good: if the Earth were moving, we should see shifts in the relative position of the stars, just as the relative position of objects in a landscape shifts when we view them from the window of a moving train; yet the stars remain precisely fixed. The reasoning is sound; Aristotle simply did not know and could not imagine the mind-numbingly vast distances that make the effect invisibly small to unaided observation. The unrealised lack of information led Aristotle into misapprehension, and it would surely do the same for the Thespians; a nice warm-up for the main argument.
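It’s easy to check just how small the effect is. A quick sketch using standard astronomical figures (my illustration, not Scott’s):

```python
import math

# Why Aristotle's (sound) parallax argument misfires in practice: the
# annual parallax of even the nearest star is tiny. Standard figures.

au_km = 1.496e8                 # astronomical unit in km
ly_km = 9.461e12                # light year in km
proxima_ly = 4.24               # distance to the nearest star

baseline_km = 2 * au_km         # Earth's positions six months apart
distance_km = proxima_ly * ly_km

# Small-angle approximation: angle (radians) ~ baseline / distance
angle_arcsec = math.degrees(baseline_km / distance_km) * 3600
print(f"Apparent shift of the nearest star: ~{angle_arcsec:.1f} arcseconds")
# ~1.5 arcsec, against roughly 60 arcsec for naked-eye resolution
```

So the shift Aristotle was looking for is some forty times below what the unaided eye can resolve; the reasoning was fine and the data were simply out of reach.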

Now it’s a reasonable assumption that the Thespians would be social animals, and they would need to be able to understand each other. They’d get good at what is often somewhat misleadingly called theory of mind; they’d attribute motives and so on to each other and read each other’s behaviour in a fair bit of depth. Of course they would have no direct access to other Thespians’ actual inner workings. What happens when they turn their capacity for understanding other people on themselves? In Scott’s view, plausibly enough, they end up with quite a good practical understanding whose origins are completely obscure to them; the lashed-up mechanisms that supply the understanding are neither available to conscious examination nor, in fact, even visible.

This is likely enough, and in fact doesn’t even require us to think of higher cognitive faculties. How do we track a ball flying through the air so we can catch it? Most people would be hard put to describe what the brain does to achieve that, though in practice we do it quite well. In fact, those who could write down an algorithm would most likely get it wrong too, because it turns out the brain doesn’t use the optimal method: it uses a quick and easy one that works well in practice but doesn’t get your hand to the right place as quickly as it could.
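The shortcut usually cited here belongs to the ‘gaze heuristic’ family, of which Chapman’s ‘optical acceleration cancellation’ is one version: rather than solving projectile equations, the fielder watches how the ball’s elevation is changing and simply runs the right way. A minimal sketch of the decision rule, assuming the fielder can sample the tangent of the ball’s elevation angle (my illustration, not code from Scott’s post):

```python
def fielder_move(tan_elevation_samples, eps=1e-9):
    """Gaze-heuristic-style rule (optical acceleration cancellation):
    given three successive sightings of tan(elevation) of the ball,
    decide which way to run. Accelerating means the ball will land
    behind you; decelerating, in front; a steady rise means you are
    standing in the right place."""
    a, b, c = tan_elevation_samples[-3:]
    acceleration = (c - b) - (b - a)          # discrete second difference
    if acceleration > eps:
        return "run backwards"
    if acceleration < -eps:
        return "run forwards"
    return "hold position"

print(fielder_move([0.30, 0.42, 0.58]))       # speeding up  -> run backwards
print(fielder_move([0.30, 0.40, 0.50]))       # steady rise  -> hold position
```

Repeated every fraction of a second, the rule walks the fielder to the landing spot without the brain ever representing the ball’s trajectory – exactly the sort of cheap, opaque competence Scott has in mind.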

For Scott all this leads to a gloomy conclusion: much of our view about what we are and our mental capacities is really attributable to systematic error, even to something we could regard as a cognitive deficit or disease. He cogently suggests how dualism and other errors might arise from our situation.

I think the Thespian account is the most accessible and persuasive account Scott has given of his view to date, and it perhaps allows us to situate it better than before. I think the scope of the disaster is a little less than Scott supposes, in two ways. First, he doesn’t deny that routine intentionality actually works at a practical level, and I think he would agree we can even hope to give a working-level description of how that goes. My own view is that it’s all a grand extension of our capacity for recognition (and I was more encouraged than not by my recent friendly disagreement with Marcus Perlman over on Aeon Ideas; I think his use of the term ‘iconic’ is potentially misleading, but in essence I think the views he describes are right and enlightening), but people here have heard all that many times. Whether I’m right or not, we probably agree that some practical account of how the human mind gets its work done is possible.

Second, on a higher level, it’s not completely hopeless. We are indeed prone to dreadful errors and to illusions about how our minds work that cannot easily be banished. But we kind of knew that. We weren’t really struggling to understand how dualism could possibly be wrong, or why it seemed so plausible. We don’t have to resort to those flawed heuristics; we can take our pure basic understanding and try again, either through some higher meta-meta-cognition (careful) or by simply standing aside and looking at the thing from a scientific third-person point of view. Aristotle was wrong, but we got it right in the end, and why shouldn’t Scott’s own view be part of getting it righter about the mind? I don’t think he would disagree on that either (he’ll probably tell us); but he feels his conclusions have disastrous implications for our sense of what we are.

Here we strike something that came up in our recent discussion of free will and the difference between determinists and compatibilists. It may be more a difference of temperament than belief. People like me say: OK, we don’t have the magic abilities we seemed to have, so let’s give those terms a more sensible interpretation and go merrily on our way. The determinists, the eliminativists, agree that the magic has gone – in fact they insist on it – but they sit down by the roadside, throw ashes on their heads, and mourn it. They share with the naive, the libertarians, and the believers in a magic power of intentionality the idea that something essential and basically human is lost when we move on in this way. Perhaps people like me came in to have the magic explained and are happy to see the conjuring tricks set out; others wanted the magic explained and for it to remain magic?

Aliens are Robots

Susan Schneider’s recent paper argues that when we hear from alien civilisations, it’s almost bound to be superintelligent robots getting in touch, rather than little green men. She builds on Nick Bostrom’s much-discussed argument that we’re all living in a simulation.

Actually, Bostrom’s argument is more cautious than that, and more carefully framed. His claim is that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.

So if we disbelieve the first two, we must accept the third.
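The arithmetic behind the trilemma can be put roughly as follows – a paraphrase of the formula in Bostrom’s paper, with the notation compressed. If $f_p$ is the fraction of civilisations that reach a posthuman stage, and $\bar{N}$ is the average number of ancestor simulations such a civilisation runs (each containing about as many observers as the real history it mimics), then the fraction of all observers who are simulated is

$$ f_{\text{sim}} \;=\; \frac{f_p\,\bar{N}}{f_p\,\bar{N} + 1}. $$

Unless $f_p\,\bar{N}$ is very small – which is just proposition (1) or (2) – this fraction is close to one, which is proposition (3).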

In fact there are plenty of reasons to argue that the first two propositions are true. The first evokes ideas of nuclear catastrophe or an unexpected comet wiping us out in our prime, but equally it could just be that no posthuman stage is ever reached. We only know about the cultures of our own planet, but two of the longest-lived – the Egyptian and the Chinese – were very stable, showing few signs of moving on towards posthumanism. They made the odd technological advance, but they also let things slip: no more pyramids after the Old Kingdom; ocean-going junks abandoned before being fully exploited. Really only our current Western culture, stemming from the European Renaissance, has displayed a long run of consistent innovation; it may well be a weird anomaly, and its five-hundred-year momentum may prove temporary. Maybe our descendants will never go much further than we already have; maybe, thinking of Schneider’s case, the stars are basically inhabited by Ancient Egyptians who have been living comfortably for millions of years without ever discovering electricity.

The second proposition requires some very debatable assumptions, notably that consciousness is computable. But the notion of “simulation” also needs examination. Bostrom takes it that a computer simulation of consciousness is likely to be conscious, but I don’t think we’d assume a digital simulation of digestion would do actual digesting. The thing about a simulation is that by definition it leaves out certain aspects of the real phenomenon (otherwise it’s the phenomenon itself, not a simulation). Computer simulations normally leave out material reality, which could be a problem if we want real consciousness. Maybe it doesn’t matter for consciousness; Schneider argues strongly against any kind of biological requirement and it may well be that functional relations will do in the case of consciousness. There’s another issue, though; consciousness may be uniquely immune from simulation because of its strange epistemological greediness. What do I mean? Well, for a simulation of digestion we can write a list of all the entities to be dealt with – the foods we expect to enter the gut and their main components. It’s not an unmanageable task, and if we like we can leave out some items or some classes of item without thereby invalidating the simulation. Can we write a list of the possible contents of consciousness? No. I can think about any damn thing I like, including fictional and logically impossible entities. Can we work with a reduced set of mental contents? No; this ability to think about anything is of the essence.
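The contrast can be made concrete. A toy sketch of my own, not anything from Bostrom or Schneider: a digestion model can enumerate its possible inputs in advance, while a consciousness model has no such closed list to draw up.

```python
# Toy contrast between a simulable domain and an "epistemologically greedy"
# one. The substrate list is illustrative, not a serious model of digestion.

DIGESTIVE_INPUTS = {"protein", "starch", "fat", "fibre", "water"}  # closed list

def digest(meal):
    """A simulation can safely ignore anything outside its fixed ontology."""
    return {item for item in meal if item in DIGESTIVE_INPUTS}

def think_about(description: str) -> str:
    """No closed list is possible: any describable thing, including the
    fictional and the impossible, is a legitimate content of thought."""
    return f"thinking about {description}"

print(digest({"starch", "gravel"}))        # gravel harmlessly dropped
print(think_about("a square circle"))      # cannot be ruled out in advance
```

The second function’s input domain is every description we can produce, which is just the point above: cut the list down and you invalidate the simulation.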

All this gets much worse when Bostrom floats the idea that future ancestor simulations might themselves go on to be post human and run their own nested simulations, and so on. We must remember that he is really talking about simulated worlds, because his simulated ancestors need to have all the right inputs fed to them consistently. A simulated world has to be significantly smaller in information terms than the world that contains it; there isn’t going to be room within it to simulate the same world again at the same level of detail. Something has to give.
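A crude way to see the squeeze, with numbers I’m simply assuming for illustration: suppose each simulated world can devote at most half of its host’s information capacity to a simulation of its own.

```python
# Geometric shrinkage of nested simulations. The 0.5 overhead factor is an
# assumption for illustration; any factor below 1 gives the same moral.

capacity = 1.0            # base reality's capacity, normalised
overhead = 0.5            # fraction of capacity a world can spare for nesting

level = 0
while capacity >= 1e-3:   # call 0.1% of base detail the minimum viable world
    print(f"level {level}: capacity {capacity:.4f}")
    capacity *= overhead
    level += 1
```

On those figures the tower runs out after about ten levels; the point is not the exact count but that fidelity, and with it the supply of fully detailed simulated ancestors, collapses geometrically rather than multiplying without limit.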

Without the indefinite nesting, though, there’s no good reason to suppose the simulated ancestors will ever outnumber the real people who ever lived in the real world. I suppose Bostrom thinks of his simulated people as taking up negligible space and running at speeds far beyond real life; but when you’re simulating everything, that starts to be questionable. The human brain may be the smallest and most economic way of doing what the human brain does.

Schneider argues that, given the same Whiggish optimism about human progress we mentioned earlier, we must assume that in due course fleshy humans will be superseded by faster and more capable silicon beings, either because robots have taken over the reins or because humans have gradually cyborgised themselves to the point where they are essentially superintelligent robots. Since these posthuman beings will live on for billions of years, it’s almost certain that when we make contact with aliens, that will be the kind we meet.

She is, curiously, uncertain about whether these beings will be conscious. She really means that they might be zombies, without phenomenal consciousness. I don’t really see how superintelligent beings like that could be without what Ned Block called access consciousness, the kind that allows us to solve problems, make plans, and generally think about stuff; I think Schneider would agree, although she tends to speak as though phenomenal, experiential consciousness were the only kind.

She concludes, reasonably enough, that the alien robots will most likely have full conscious experience. Moreover, because reverse-engineering biological brains is probably the quickest route to consciousness, she thinks that a particular kind of superintelligent AI is likely to predominate: the biologically inspired superintelligent alien (BISA). She argues that although BISAs might in the end be incomprehensible, we can draw some tentative conclusions about BISA minds:
(i) Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns.
(ii) BISAs may have viewpoint invariant representations. (Surely they wouldn’t be very bright if they didn’t?)
(iii) BISAs will have language-like mental representations that are recursive and combinatorial. (Ditto.)
(iv) BISAs may have one or more global workspaces. (If you believe in global workspace theory, certainly. Why more than one, though – doesn’t that defeat the object? Global workspaces are useful because they’re global; see the sketch after this list.)
(v) A BISA’s mental processing can be understood via functional decomposition.
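On point (iv), the intuition is easier to see in miniature. Here is a toy global workspace in roughly the Baars sense – my own sketch, nothing from Schneider’s paper: specialist modules bid for attention, and the single winning content is broadcast back to all of them.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

# Toy global workspace: modules compete, the most salient content wins,
# and the winner is broadcast globally. Names and numbers are invented.

@dataclass
class Module:
    name: str
    propose: Callable[[], Tuple[float, str]]      # returns (salience, content)
    received: List[str] = field(default_factory=list)

def workspace_cycle(modules: List[Module]) -> str:
    bids = [(m.propose(), m.name) for m in modules]
    (salience, content), winner = max(bids)       # highest salience wins
    for m in modules:                             # the defining global broadcast
        m.received.append(content)
    return f"{winner} broadcast: {content!r}"

mods = [
    Module("vision", lambda: (0.9, "red blob, moving fast")),
    Module("hearing", lambda: (0.4, "steady hum")),
    Module("memory", lambda: (0.2, "this happened before")),
]
print(workspace_cycle(mods))    # vision wins; every module hears about the blob
```

A second, disjoint workspace would just mean some modules never hearing the broadcast, which is why ‘more than one global workspace’ looks like a contradiction in terms.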

I’ll throw in a strange one; I doubt whether BISAs would have identity, at least not the way we do. They would be computational processes in silicon: they could split, duplicate, and merge without difficulty. They could be copied exactly, so that the question of whether BISA x was the same as BISA y could become meaningless. For them, in fact, communicating and merging would differ only in degree. Something to bear in mind for that first contact, perhaps.
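To make the thought concrete, a tiny sketch of my own, not anything from Schneider: once a mind is a data structure, exact duplication is trivial, and the agents themselves have no test that distinguishes ‘original’ from ‘copy’.

```python
import copy

# A purely computational agent can be duplicated bit-for-bit. Every check
# the agents themselves could run comes out equal; only the bookkeeping
# question "which token is which?" remains, and nothing hangs on it.

bisa_x = {"goals": ["explore"], "memories": ["launch day"], "weights": [0.3, 0.7]}
bisa_y = copy.deepcopy(bisa_x)

print(bisa_x == bisa_y)    # True: indistinguishable by any internal criterion
print(bisa_x is bisa_y)    # False: two tokens, but which is "the" BISA?

# Merging is no harder than copying, which is the sense in which, for such
# beings, communicating and merging differ only in degree.
merged = {key: bisa_x[key] + bisa_y[key] for key in bisa_x}
print(merged["memories"])  # ['launch day', 'launch day']
```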

This is interesting stuff, but to me it’s slightly surprising to see it going on in philosophy departments; does this represent an unexpected revival of the belief that armchair reasoning can tell us important truths about the world?