Chomsky’s Mysterianism

Or perhaps Chomsky’s endorsement of Isaac Newton’s mysterianism. We tend to think of Newton as bringing physics to a triumphant state of perfection, one that lasted until Einstein and, with qualifications, still stands. Chomsky says that in fact Newton shattered the ambitions of mechanical science, which have never recovered; and in doing so he placed permanent limits on the human mind. He quotes Hume:

While Newton seemed to draw off the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored her ultimate secrets to that obscurity, in which they ever did and ever will remain.

What are they talking about? Above all, the theory of gravity, which relies on the unexplained notion of action at a distance. Contemporary thinkers regarded this as nonsensical, almost logically absurd: how could object A affect object B without contacting it and without an intermediating substance? Newton, according to Chomsky, agreed in essence, but defended himself by saying that there was nothing occult in his own work, which stopped short where the funny stuff began. Newton, you might say, described gravity precisely and provided solid evidence to back up his description; what he didn’t do at all was explain it.

The acceptance of gravity, according to Chomsky, involved a permanent drop in the standard of intelligibility that scientific theories were required to meet. This has large implications for the mind: it suggests there might be matters beyond our understanding, and provides a particular example. And it may well be that the mind itself is, or involves, similarly intractable difficulties.

Chomsky reckons that Darwin reinforced this idea. We are not angels, after all, only apes; all other creatures suffer cognitive limitations; why should we be able to understand everything? In fact our limitations are as important as our abilities in making us what we are; if we were bound by no physical limitations we should become shapeless globs of protoplasm instead of human beings, and the same goes for our minds. Chomsky distinguishes between problems and mysteries. What is forever a mystery to a dog or rat may be a solvable problem for us, but we are bound to have mysteries of our own.

I think some care is needed over the idea of permanent mysteries. We should recognise that in principle there are several different kinds of question that may look mysterious, notably the following.

  1. Questions that are, as it were, out of scope: not correctly definable as questions at all: these are unanswerable even by God.
  2. Mysterian mysteries: questions that are not in themselves unanswerable, but which are permanently beyond the human mind.
  3. Questions that are answerable by human beings, but very difficult indeed.
  4. Questions that would be answerable by human beings if we had further information which (a) we just don’t happen to have, or (b) we could never have in principle.

I think it’s just an assumption that the problem of mind, and indeed, the problem of gravity, fall into category 2. There has been a bit of movement in recent decades, I think, and the possibility of 3 or 4(a) remains open.

I don’t think the evolutionary argument is decisive either. Implicitly Chomsky assumes an indefinite scale of cognitive abilities matched by an indefinite scale of problems. Creatures that are higher up the first get higher up the second, but there’s always a higher problem.  Maybe, though, there’s a top to the scale of problems? Maybe we are already clever enough in principle to tackle them all.

If this seems optimistic, think of Chomsky the Lizard, millions of years ago. Some organisms, he opines, can stick their noses out of the water. Some can leap out, briefly. Some crawl out on the beach for a while. Amphibians have to go back to reproduce. But all creatures have a limit to how far they can go from the sea. We lizards, we’ve got legs, lungs, and the right kind of eggs; we can go further than any other. That does not mean we can go all over the island. Evolution guarantees that there will always be parts of the island we can’t reach.

Well, depending on the island, there may be inaccessible parts, but that doesn’t mean legs and lungs have inbuilt limits. So just because we are products of evolution, it doesn’t mean there are necessarily questions of type 2 for us.

Chomsky mocks those who claim that the idea of reducing the mind to activity of the brain is new and revolutionary; it has been widely espoused for centuries, he says. He mentions remarks of Locke which I don’t know, but which resemble the famous analogy of Leibniz’s mill.

If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters a mill. Assuming that, when inspecting its interior, we will find only parts that push one another, and we will never find anything to explain a perception.

The thing about that is, we’ll never find anything to explain a mill, either. Honestly, Gottfried, all I see is pieces of wood and metal moving around; none of them has any milliness! How on earth could a collection of pieces of wood – just by virtue of being arranged in some functional way, you say – acquire completely new, distinctively molational qualities?

The Incredible Consciousness of Edward Witten

We’ll never understand consciousness, says Edward Witten. Ashutosh Jogalekar’s post here features a video of the eminent physicist talking about fundamentals; the bit about consciousness starts around 1:10 if you’re not interested in string theory and cosmology. John Horgan has also weighed in with some comments; Witten’s view is congenial to him because of his belief that science may be approaching an end state in which many big issues are basically settled while others remain permanently mysterious. Witten himself thinks we might possibly get a “final theory” of physics (maybe even a form of string theory), but guesses that it would be of a tricky kind, so that understanding and exploring the theory would itself be an endless project, rather the way number theory, which looks like a simple subject at first glance, proves to be capable of endless further research.

Witten, in response to a slightly weird question from the interviewer, declines to define consciousness, saying he prefers to leave it undefined like one of the undefined terms set out at the beginning of a maths book. He feels confident that the workings of the mind will be greatly clarified by ongoing research so that we will come to understand much better how the mechanisms operate. But why these processes are accompanied by something like consciousness seems likely to remain a mystery; no extension of physics that he can imagine seems likely to do the job, including the kind of new quantum mechanics that Roger Penrose believes is needed.

Witten is merely recording his intuitions, so we shouldn’t try to represent him as committed to any strong theoretical position; but his words clearly suggest that he is an optimist on the so-called Easy Problem and a pessimist on the Hard one. The problem he thinks may be unsolvable is the one about why there is “something it is like” to have experiences; what it is that seeing a red rose has over and above the acquisition of mere data.

If so, I think his incredulity joins a long tradition of those who feel intuitively that that kind of consciousness just is radically different from anything explained or explainable by physics. Horgan mentions the Mysterians, notably Colin McGinn, who holds that our brain just isn’t adapted to understanding how subjective experience and the physical world can be reconciled; but we could also invoke Brentano’s contention that mental intentionality is just utterly unlike any physical phenomenon; and even trace the same intuition back to Leibniz’s famous analogy of the mill; no matter what wheels and levers you put in your machine, there’s never going to be anything that could explain a perception (particularly telling given Leibniz’s enthusiasm for calculating machines and his belief that one day thinkers could use them to resolve complex disputes). Indeed, couldn’t we argue that contemporary consciousness sceptics like Dennett and the Churchlands also see an unbridgeable gap between physics and subjective, qualia-having consciousness? The difference is simply that in their eyes this makes that kind of consciousness nonsense, not a mystery.

We have to be a bit wary of trusting our intuitions. The idea that subjective consciousness arises when we’ve got enough neurons firing may sound like the idea that wine comes about when we’ve added enough water to the jar; but the idea that enough ones and zeroes in data registers could ever give rise to a decent game of chess looks pretty strange too.

As those who’ve read earlier posts may know, I think the missing ingredient is simply reality. The extra thing about consciousness that the theory of physics fails to include is just the reality of the experience, the one thing a theory can never include. Of course, the nature of reality is itself a considerable mystery, it just isn’t the one people have thought they were talking about. If I’m right, then Witten’s doubts are well-founded but less worrying than they may seem. If some future genius succeeds in generating an artificial brain with human-style mental functions, then by looking at its structure we’ll only ever see solutions to the Easy Problem, just as we may do in part when looking at normal biological brains. Once we switch on the artificial brain and it starts doing real things, then experience will happen.

Thatter way to consciousness

‘Aping Mankind’ is a large-scale attack by Raymond Tallis on two reductive dogmas which he characterises as ‘Neuromania’ and ‘Darwinitis’. He wishes especially to refute the identification of mind and brain, and as an expert on the neurology of old age, his view of the scientific evidence carries a good deal of weight. He also appears to be a big fan of Parmenides, which suggests a good acquaintance with the philosophical background. It’s a vigorous, useful, and readable contribution to the debate.

Tallis persuasively denounces exaggerated claims made on the basis of brain scans, notably claims to have detected the ‘seat of wisdom’ in the brain. These experiments, it seems, rely on essentially fuzzy and ambiguous pictures, arrived at by subtraction under very simple experimental conditions, to support claims of a profound and detailed understanding far beyond anything they could possibly warrant. This is no longer such a controversial debunking as it would have been a few years ago, but it’s still useful.

Of course, the fact that some claims to have reduced thought to neuronal activity are wrong does not mean that thought cannot nevertheless turn out to be neuronal activity, but Tallis pushes his scepticism a long way. At times he seems reluctant to concede that there is anything more than a meaningless correlation between the firing of neurons in the brain and the occurrence of thoughts in the mind. He does agree that possession of a working brain is a necessary condition for conscious thought, but he’s not prepared to go much further. Most people, I think, would accept that Wilder Penfield’s classic experiments, in which the stimulation of parts of the brain with an electrode caused an experience of remembered music in the subject, pretty much show that memories are encoded in the brain one way or another; but Tallis does not accept that neurons could constitute memories. For memory you need a history, you need to have formed the memories in the first place, he says: Penfield’s electrode was not creating but merely reactivating memories which already existed.

Tallis seems to start from a kind of Brentanoesque incredulity about the utter incompatibility of the physical and the mental. Some of his arguments have a refreshingly simple (or if you prefer, naive) quality: when we experience yellow, he points out, our nerve impulses are not yellow. True enough, but then a word need not be printed in yellow ink to encode yellowness either. Tallis quotes Searle offering a dual-aspect explanation: water is H2O, but H2O molecules do not themselves have watery properties: you cannot tell what the back of a house looks like from the front, although it is the same house. In the same way our thoughts can be neural activity without the neurons themselves resembling thoughts. Tallis utterly rejects this: he maintains that to have different aspects requires a conscious observer, so we’re smuggling in the very thing we need to explain. I think this is an odd argument. If things don’t have different aspects until an observer is present, what determines the aspects they eventually have? If it’s the observer, we seem to be slipping towards idealism or solipsism, which I’m sure Tallis would not find congenial. Based on what he says elsewhere, I think Tallis would say the thing determines its own aspects in that it has potential aspects which only get actualised when observed; but in that case didn’t it really sort of have those aspects all along? Tallis seems to be adopting the view that an appearance (say yellowness) can only properly be explained by another thing that already has that same appearance (is yellow). It must be clear that if we take this view we’re never going to get very far with our explanations of yellow or any other appearance.

But I think that’s the weakest point in a sceptical case which is otherwise fairly plausible. Tallis is Brentanoesque in another way in that he emphasises the importance of intentionality – quite rightly, I think. He suggests it has been neglected, which I think is also true, although we must not go overboard: both Searle and Dennett, for example, have published whole books about it. In Tallis’ view the capacity to think explicitly about things is a key unique feature of human mindfulness, and that too may well be correct. I’m less sure about his characterisation of intentionality as an outward arrow. Perception, he says, is usually represented purely in terms of information flowing in, but there is also a corresponding outward flow of intentionality. The rose we’re looking at hits our eye (or rather a beam of light from the rose does so), but we also, as it were, think back at the rose. Is this a useful way of thinking about intentionality? It has the merit of foregrounding it, but I think we’d need a theory of intentionality  in order to judge whether talk of an outward arrow was helpful or confusing, and no fully-developed theory is on offer.

Tallis has a very vivid evocation of a form of the binding problem, the issue of how all our different sensory inputs are brought together in the mind coherently. As normally described, the binding problem seems like lip-synch issues writ large: Tallis focuses instead on the strange fact that consciousness is united and yet composed of many small distinct elements at the same time.  He rightly points out that it’s no good having a theory which merely explains how things are all brought together: if you combine a lot of nerve impulses into one you just mash them. I think the answer may be that we can experience a complex unity because we are complex unities ourselves, but it’s an excellent and thought-provoking exposition.

Tallis’ attack on ‘Darwinitis’ takes on Cosmidoobianism, memes and the rest with predictable but entertaining vigour. Again, he presses things quite a long way. It’s one thing to doubt whether every feature of human culture is determined by evolution; but Tallis seems to suggest that human culture has no survival value, or at any rate, had none until recently, too recently to account for human development. This is reminiscent of the argument put by Alfred Russel Wallace, Darwin’s co-discoverer of the principle of survival of the fittest: he later said that evolution could not account for human intelligence because a caveman could have lived his life perfectly well with a much less generous helping of it. The problem is that this leaves us needing a further explanation of why we are so brainy and cultured; Wallace, alas, ended up resorting to spiritualism to fill the gap (we can feel confident that Tallis, a notable public champion of disbelief, will never go that way). It seems better to me to draw a clear distinction between the capacity for human culture, which is wholly explicable by evolutionary pressure, and the contents of human culture, which are largely ephemeral, variable, and non-hereditary.

Tallis points out that some sleight of hand with vocabulary is not unknown in this area, in particular the tactic of the transferred epithet: a word implying full mental activity is used metaphorically – a ‘smart’ bomb is said to be ‘hunting down’ its target – and the important difference is covertly elided. He notes the particular slipperiness of the word ‘information’, something we’ve touched on before.

It is a weakness of Tallis’ position that he has no general alternative theory to offer in place of those he is attacking – consciousness remains a mystery (he sympathises with Colin McGinn’s mysterianism to some degree, incidentally, but reproves him for suggesting that our inability to understand ourselves might be biological). However, he does offer positive views of selfhood and free will, both of which he is concerned to defend. Rather than the brain, he chooses to celebrate the hand as a defining and influential human organ: opposable thumbs allow it to address itself and us: it becomes a proto-tool and this gives us a sense of ourselves as acting on the world in a tool-like manner. In this way we develop a sense of ourselves as a distinct entity and an agent, an existential intuition.  This is OK as far as it goes though it does sound in places like another theory of how we get a mere impression, or dare I say an illusion, of selfhood and agency, the very position Tallis wants to refute. We really need more solid ontological foundations. In response to critics who have pointed to the elephant’s trunk and the squid’s tentacles, Tallis grudgingly concedes that hands alone are not all you need and a human brain does have something to contribute.

Turning to free will, Tallis tackles Libet’s experiments (which seem to show that a decision to move one’s hand is actually made a measurable time before one becomes aware of it). So, he says, the decision to move the hand can be tracked back half a second? Well, that’s nothing: if you like you can track it back days, to when the experimental subject decided to volunteer; moreover, the aim of the subject was not just to move the hand, but also to help that nice Dr Libet, or to forward the cause of science. In this longer context of freely made decisions the precise timing of the readiness potential (the RP recorded in Libet’s experiments) is of no account.

To be free, according to Tallis, an act must be expressive of what the agent is, the agent must seem to be the initiator, and the act must deflect the course of events. If we are inclined to doubt that we can truly deflect the course of events, he again appeals to a wider context: look at the world around us, he says, and who can doubt that collectively we have diverted the course of events pretty substantially? I don’t think this will convert any determinists. The curious thing is that Tallis seems to be groping for a theory of different levels of description, or, indeed, a dual-aspect theory. I would have thought dual-aspect theories ought to be quite congenial to Tallis, as they represent a rejection of ‘nothing but’ reductionism in favour of an attempt to give all levels of interpretation parity of esteem, but alas it seems not.

As I say, there is no new theory of consciousness on offer here, but Tallis does review the idea that we might need to revise our basic ideas of how the world is put together in order to accommodate it. He is emphatically against traditional dualism, and he firmly rejects the idea that quantum physics might have the explanation too. Panpsychism may have a certain logic but generates more problems than it solves. Instead he points again to the importance of intentionality and the need for a new view that incorporates it: in the end ‘Thatter’, his word for the indexical, intentional quality of the mental world, may be as important as matter.

Mystery

Picture: Blandula. I’d like to say a word for mystery. I think Haldane summed it up:

…the universe is not only queerer than we suppose, but queerer than we can suppose.

Picture: Bitbucket. I hate that quote so much!  The complacent fake modesty; the characteristic Oxford attitude of mingled superiority and second-ratism:  don’t you go thinking you can apply your mind to these weighty issues, little man; the best you can do is listen reverently to the words of our mighty predecessors. Footnotes to Plato! Footnotes to Plato!

Picture: Blandula. Good grief, what a reaction! Well, then, let me quote someone you ought to like better; Leibniz:

…it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which push one another, and never anything by which to explain a perception.

It’s interesting, incidentally, that Leibniz should have picked a mill. In these days of computers, it’s natural for us to talk of thinking machines, but it must have been a much less obvious metaphor then; especially a mill, which doesn’t produce any very complex behaviour… though Babbage called the central processor of the Analytical Engine the ‘mill’, didn’t he… and of course Leibniz himself designed calculating machines, so perhaps a mechanical metaphor was more natural to him than it was to his readers… Anyway! The point is, this is a good example of a recurrent theme where someone holds up for our examination a mental phenomenon – in this case perception – and holds up, as it were, in the other hand the physical world, and says it’s just obvious that the latter cannot account for the former.

Here’s Brentano.

Every mental phenomenon is characterised by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction towards an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We could, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves.

This time we’re not talking about perception as such, but about intentionality: though Brentano claims it’s characteristic of every mental phenomenon.

Then again, Nagel.

If we acknowledge that a physical theory of mind must account for the subjective character of experience, we must admit that no presently available conception gives us a clue how this could be done. The problem is unique. If mental processes are indeed physical processes, then there is something it is like, intrinsically, to undergo certain physical processes. What it is for such a thing to be the case remains a mystery.

It would be easy to find similar sources in which people contrast the pinkish-grey jelly in the skull with the sparkling abstract mental life it apparently sustains. These claims tend to have two things in common. The first is, they are essentially ostensive. There isn’t really an argument being offered at this point, more a demonstration; we’re just being shown. Look at this; then look at that;  see?

Second, the claims are emphatic: they insist. It’s obvious, they say, just look: no-one could think that this was that.  These things have nothing whatever in common.

Picture: Bitbucket. Of course they’re emphatic: they want to hurry us on past the sketchy part before we pause and ask why this shouldn’t be that. They’ve bundled the idea into its coat, thrust its hat into its hands, and are shoving it out the front door because they’re afraid that otherwise we might entertain it for a while.

Picture: Blandula. If they didn’t want us to entertain the idea, why would they write about it? No. The absence of argument here isn’t a weakness; on the contrary, it’s a demonstration of the case. The very fact that we can’t give arguments for the existence of our mental life proves its utter distinctness; we can’t prove it, we can only notice it. But, and this is the reason for the emphasis, what noticing!  ‘In your face’ doesn’t begin to get it; phenomenal experience is inside your face; it’s so emphatically there you could fairly say it defines the word.  

What I’m saying is that these claims have a special quality; simply to pay careful attention to them is itself to experience their validity. The reality and distinctness of the mental world really deserve for once that much-misused term ‘self-evident’.

Picture: Bitbucket.  It’s going to be very convenient if the absence of argument is taken to make a claim self-evident. Proving six impossible things before breakfast will be child’s play.

Actually, I think you are giving us an argument, but it’s so feeble you prefer it not to be recognised: the argument from bogglement. It runs: I can’t see how this could be that; it follows that it’s somehow cosmically distinct. The falsity of the argument, once stated, is too obvious to require further comment, but let me just remind you of all those people who couldn’t imagine how the earth could possibly be moving.

Dennett has pointed out in a similar context that mysterians rely on passing off complexity as ineffability. Of course we don’t know all the details of how particular physical events in the world trigger neuronal events in the brain, let alone how that vastly complex series of functions constitutes experience. Even if we had the information, we couldn’t hold it all in mind at once. But our inability to do that, and any bogglement which may arise, does not in any way tend to show that there is no complete story.

Now Dennett would say that if only we could hold in mind all the details of the physical account, all this fantastically complex stuff, then the bogglement would vanish. But I’m not sure about that. Let me confess something: I too, boggle at the task of understanding the incomprehensible complexity of mental phenomena. But I boggle at other things too. Take computers. Now I think I can say without claiming to be an expert that I know how computers work.  I’ve played around with one or two high-level programming languages; I’ve dabbled in machine code; I’ve run up a couple of routines for imaginary Turing machines. In short, I know how it works. And yet, it still sometimes looks like magic; my mind still boggles sometimes at what I see computers doing. Now if I can get bogglement from something I understand quite well and know is a 100% physical process, it follows that my boggling at the brain and what it does shows nothing. 

Picture: Blandula. It’s not the bogglement that matters; it’s the undeniable direct experience. It’s not because we don’t understand experience that we say it’s something distinctly non-physical; it’s because we can see it’s distinctly non-physical.  To me I confess it seems to need some kind of educated perversity to deny this. Colin McGinn has pointed out that conscious experience is non-spatial: it has no position or volume.  I don’t know quite what we should say about that – abstractions like numbers are non-spatial too in what seems a different sort of way (if I mention Plato, will you start shouting again?); but it captures something about the obvious – patent – irreducibility of the mental.

I mean, just give it fair consideration; lift your eyes out of that two-dimensional world you’re cycling round and round and just notice that there’s another whole aspect to the world. To think it possible is in this case to realise it’s true, I believe.

Picture: Bitbucket. You know, you’re right.

Picture: Blandula. You see it?

Picture: Bitbucket. No, you’re right that with you there is no argument.

Cryptic Consciousness

I was thinking about the New Mysterian position the other day, and it occurred to me that there are some scary implications which I, at any rate, had never noticed before.

As you may know, the New Mysterian position, cogently set out by Colin McGinn, is that our minds may simply not be equipped to understand consciousness. Not because it is in itself magic or inexplicable, but because our brains just don’t work in the necessary way. We suffer from cognitive closure. Closure here means that we have a limited repertoire of mental operations; using them in sequence or combination will take us to all sorts of conceptual places, but only within a certain closed domain. Outside that domain there are perfectly valid and straightforward ideas which we can simply never reach, and unfortunately one or more of these unreachable ideas is required in order to understand consciousness.
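
To make the idea of a closed domain slightly more concrete, here is a minimal sketch in Python. It uses arithmetic as a toy stand-in for a repertoire of mental operations; the particular operations, starting point and bound are my own illustrative assumptions, not anything McGinn proposes.

```python
# Toy illustration of "cognitive closure": a limited repertoire of
# operations, composed freely and repeatedly, still only ever reaches
# a closed sub-domain. All choices below are illustrative assumptions.

def closure(start, operations, limit=100):
    """Return everything reachable from `start` by repeatedly applying
    the given operations, capped at `limit` so the search terminates."""
    reached = set(start)
    frontier = set(start)
    while frontier:
        new = {op(x) for x in frontier for op in operations} - reached
        new = {x for x in new if 0 <= x < limit}
        reached |= new
        frontier = new
    return reached

# A "mind" whose only operations are "add 2" and "double", starting from 2...
reachable = closure({2}, [lambda x: x + 2, lambda x: x * 2])

# ...can reach indefinitely many numbers, but every one of them is even:
# the odd numbers are perfectly ordinary items, yet they lie forever
# outside this system's closed domain.
assert all(x % 2 == 0 for x in reachable)
print(sorted(reachable)[:10])   # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
```

The analogy is crude, but it shows how a system can be endlessly productive without being unbounded: the unreachable items are not exotic in themselves, they are simply not constructible from that particular repertoire.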

I don’t think that is actually the case, but the possibility is undeniable; I must admit that personally there’s a certain element of optimistic faith involved in my rejection of Mysterianism. I just don’t want to give up on the possibility of a satisfying answer.

Anyway, suppose we do suffer from some form of cognitive closure (it’s certainly true that human minds have their limitations, notably in the complexity of the ideas they can entertain at any one time). One implication is that we can conceive of a being whose repertoire of mental operations might be different from ours. It could be a god-like being which understands everything we understand, and other things besides; its mental domain might be an overlapping territory, not very different in extent from ours; or it might deal in a set of ideas all of which are inaccessible to us, and find ours equally unthinkable.

That conclusion in itself is no more than a somewhat frustrating footnote to Mysterianism; but there’s worse to follow. It seems just about inevitable to me that if we encountered a being with the last-mentioned fully cryptic kind of consciousness, we would not recognise it. We wouldn’t realise it was conscious: we probably wouldn’t recognise that it was an agent of any kind. We recognise consciousness and intelligence in others because we can infer the drift of their thoughts from their speech and behaviour and recognise their cogency. In the case of the cryptics, their behaviour would be so incomprehensible, we wouldn’t even recognise it as behaviour.

So it could be that now and then in AI labs a spark of cryptic consciousness has flashed and died without ever being noticed. This is not quite as bonkers as it sounds. It is apparently the case that when computers were used to apply a brute-force, exhaustive search to a number of end-game positions in chess, several positions which had long been accepted as draws turned out to have winning strategies (if anyone can provide more details of this research I’d be grateful – my only source is the Oxford Companion to the Mind). The strategies proved incomprehensible to human eyes, even those of expert chess players; the computer appeared merely to bimble about with its pieces in a purposeless way until, unexpectedly, after a prolonged series of moves, checkmate emerged. Let’s suppose a chess-playing program had independently come up with such a strategy and begun playing it. Long before checkmate emerged – or perhaps even afterwards – the human in charge would have lost patience with the endless bimbling (in a position known to be a draw, after all), and withdrawn the program for retraining or recoding.
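
For a concrete sense of what ‘brute-force, exhaustive search’ of a game involves, here is a minimal sketch in Python. It is not, of course, a chess endgame solver: it exhaustively solves a toy subtraction game (the game and its move set are purely illustrative assumptions of mine). The principle is the same, though: every position gets its true win/loss value by mechanically searching all continuations, whether or not the resulting strategy makes any intuitive sense to a human onlooker.

```python
from functools import lru_cache

# A toy "subtraction game": players alternately remove 1, 3 or 4 counters,
# and a player who cannot move loses. The move set is an illustrative
# assumption, not drawn from the endgame research mentioned above.
MOVES = (1, 3, 4)

@lru_cache(maxsize=None)
def player_to_move_wins(counters: int) -> bool:
    """True if the player to move can force a win from this position."""
    # A position is won if at least one legal move leads to a position
    # that is lost for the opponent; otherwise it is lost.
    return any(counters >= m and not player_to_move_wins(counters - m)
               for m in MOVES)

if __name__ == "__main__":
    # Exhaustively classify the first dozen positions.
    for n in range(12):
        print(n, "win" if player_to_move_wins(n) else "loss")
```

Real endgame databases are of course built with far more sophisticated machinery, but the underlying move is the same: replace human judgements like ‘accepted as a draw’ with an exhaustive, mechanical determination of the position’s actual value.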

Perhaps, for that matter, there are cryptically conscious entities on other planets elsewhere in the Galaxy. The idea that aliens might be incomprehensible is not new, but here there is a reasonable counter-argument. All forms of life, presumably, are going to be the product of a struggle for survival similar to the one which produced us on Earth. Any entity which has come through millions of years of such a struggle is going to have to have acquired certain key cognitive abilities, and these at least will surely be held in common. Certain basic categories of thought and communication are surely going to be recognisable; threats, invitations, requests, and the like are surely indispensable to the conduct of any reasonably complex life, and so even if there are differences at the margins, there will be a basis for communication. We may or may not have a set of cognitive tools which address a closed domain – but we certainly haven’t got a random selection of cognitive tools.

That’s a convincing argument, though not totally conclusive. On Earth, the basic body plans of most animal groups were determined long ago; a few good basic designs triumphed and most animals alive today are variations on one of these themes. But it’s possible that if things had been different we might have emerged with a somewhat different set of basic blueprints. Perhaps there are completely different designs on other planets; perhaps there are phyla full of animals with wheels, say. Hard to be sure how likely that is, because we only have one planet to go on. But at any rate, if that much is true of body plans, the same is likely to be true of Earthbound minds; a few basic mental architectures that seemed to work got established way back in history, and everything since is a variation. But perhaps radically different mental set-ups would have worked equally well in ways we can’t even imagine, and perhaps on other worlds, they do.

The same negative argument doesn’t apply to artificial intelligence, of course, since AI generally does not have to be the result of thousands of generations of evolution and can jump to positions which can’t be reached by any coherent evolutionary path.

Common sense tells us that the whole idea of cryptic consciousness is more of a speculative possibility than a serious hypothesis about reality – but I see no easy way to rule it out. Never mind the AI labs and the alien planets; it’s possible in principle that the animists are right and that we’re surrounded every day by conscious entities we simply don’t recognise…