Augment me

All I want for Christmas is a new brain? There seems to have been quite a lot of discussion recently about the prospect of brain augmentation; adding in some extra computing power to the cognitive capacities we have already. Is this a good idea? I’m rather sceptical myself, but then I’m a bit of a Luddite in this area; I still don’t much like the idea of controlling a computer with voice commands.

Hasn’t evolution already optimised the brain in certain important respects? I think there may be some truth in that, but it doesn’t look as if evolution has done a perfect job. There are certainly one or two things about the human nervous system that look as if they could easily be improved. Think of the way our neural wiring crosses over from right to left for no particular reason. You could argue that although that serves no purpose it doesn’t do any real harm either; but what about the way our retinas are wired up from the front instead of the back, creating an entirely unnecessary blind spot where the bundle of nerves actually enters the eye – a blind spot which our brain then stops us seeing, so we don’t even know it’s there?

Nobody is proposing to fix those issues, of course, but aren’t there some obvious respects in which our brains could be improved by adding in some extra computational ability? Could we be more intelligent, perhaps? I think the definition of intelligence is controversial, but I’d say that if we could enhance our ability to recognise complex patterns quickly (which might be a big part of it) that would definitely be a bonus. Whether a chip could deliver that seems debatable at present.

Couldn’t our memories be improved? Human memory appears to have remarkable capacity, but retaining and recalling just those bits of information we need has always been an issue. Perhaps relatedly, we have that annoying inability to hold more than a handful of items in our minds at once, a limitation that makes it impossible for us to evaluate complex disjunctions and implications, so that we can’t mentally follow a lot of branching possibilities very far. It certainly seems that computer records are in some respects sharper, more accurate, and easier to access than the normal human system (whatever the normal human system actually is). It would be great to remember any text at will, for example, or exactly what happened on any given date within our lives. Being able to recall faces and names with complete accuracy would be very helpful to some of us.
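To put a rough number on that last point (my arithmetic, not anything from the literature): a line of reasoning with n independent yes/no branch points generates 2^n cases to keep track of,

```latex
n = 3 \;\Rightarrow\; 2^{3} = 8 \text{ cases}, \qquad n = 10 \;\Rightarrow\; 2^{10} = 1024 \text{ cases}
```

so a working store of four or so items is exhausted after the first couple of branchings.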

On top of that, couldn’t we improve our capacity for logic so that we stop being stumped by those problems humans seem so bad at, like the Wason test? Or if nothing else, couldn’t we just have the ability to work out any arithmetic problem instantly and flawlessly, the way any computer can do?
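For readers who haven’t met it, here is a toy version of the standard Wason selection task (my own sketch, not anything from the original discussion). Four cards show E, K, 4 and 7, and the rule to test is ‘if a card has a vowel on one side, it has an even number on the other’. Brute-force checking immediately picks the right cards to turn over (E and 7), whereas most people famously choose E and 4:

```python
# Toy sketch of the Wason selection task: which visible cards could,
# depending on their hidden face, falsify the rule 'vowel implies even'?

def vowel(c):
    return c in "AEIOU"

def odd_digit(c):
    return c.isdigit() and int(c) % 2 == 1

visible = ["E", "K", "4", "7"]
possible_hidden = list("AK38")  # representative hidden faces: any could occur

def must_turn(card):
    """A card needs turning iff some hidden face would falsify the rule."""
    for h in possible_hidden:
        if (vowel(card) and odd_digit(h)) or (vowel(h) and odd_digit(card)):
            return True
    return False

print([c for c in visible if must_turn(c)])  # -> ['E', '7']
```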

The key point here, I think, is integration. On the one hand we have a set of cognitive abilities that the human brain delivers. On the other, we have a different set delivered by computers. Can they be seamlessly integrated? The ideal augmentation would mean that, for example, if I need to multiply two seven-digit numbers I ‘just see’ the answer, the way I can just see that 3+1 is 4. If, on the contrary, I need to do something like ask in my head ‘what is 6397107 multiplied by 8341977?’ and then receive the answer spoken in an internal voice or displayed in an imagined visual image, there isn’t much advantage over using a calculator. In a similar way, we want our augmented memory or other capacity to just inform our thoughts directly, not be a capacity we can call up like an external facility.

So is seamless integration possible? I don’t think it’s possible to say for certain, but we seem to have achieved almost nothing to date. Attempts to plug into the brain so far have relied on setting up artificial linkages. Perhaps we detect a set of neurons that reliably fire when we think about tennis; then we can ask the subject to think about tennis when they want to signal ‘yes’, and detect the resulting activity. It sort of works, and might be of value for ‘locked in’ patients who can’t communicate any other way, but it’s very slow and clumsy otherwise – I don’t think we know for sure whether it even works long-term or whether, for example, the tennis linkage gradually degrades.
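As a concrete picture of just how narrow this channel is, here is a minimal sketch (entirely my own toy assumptions, not a real brain-computer interface pipeline): pretend we get a firing-rate reading from a ‘tennis-responsive’ region and threshold it to extract one noisy yes/no bit per question:

```python
import numpy as np

rng = np.random.default_rng(0)

def read_firing_rate(thinking_about_tennis: bool) -> float:
    """Pretend neural measurement: ~5 Hz baseline, ~15 Hz during imagined tennis."""
    base = 15.0 if thinking_about_tennis else 5.0
    return base + rng.normal(0.0, 2.0)  # measurement noise

def decode_answer(rate: float, threshold: float = 10.0) -> str:
    # One bit per question: above-threshold activity is read as 'yes'.
    return "yes" if rate > threshold else "no"

# Each question costs seconds of deliberate imagining for a single bit,
# and any drift in the 'tennis' response quietly corrupts the channel.
for truth in (True, False):
    print(decode_answer(read_firing_rate(truth)))
```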

What we really want to do is plug directly into consciousness, but we have little idea of how. The brain does not modularise its conscious activity to suit us, and it may be that the only places we can effectively communicate with it are the places where it draws data together for its existing input and output devices. We might be able to feed images into early visual processing or take output from nerves that control muscles, for example. But if we’re reduced to that, how much better is that level of integration going to be than simply using our hands and eyes anyway? We can do a lot with those natural built-in interfaces; simple reading and writing may well be the greatest artificial augmentation the brain can ever get. So although there may be some cool devices coming along at some stage, I don’t think we can look for godlike augmented minds any time soon.

Incidentally, this problem of integration may be one good reason not to worry about robots taking over. If robots ever get human-style motivation, consciousness, and agency, the chances are that they will get them in broadly the way we get them. This suggests they will face the same integration problem that we do; seven-digit multiplication may be intrinsically no easier for them than it is for us. Yes, they will be able to access computers and use computation to help them, but you know, so can we. In fact we might be handier with that old keyboard than they are with their patched-together positronic brain-computer linkage.

Phenomenal Epiphenomenalism

Our conscious minds are driven by unconscious processes, much as it may seem otherwise, say David A. Oakley and Peter W. Halligan. A short version is here, though the full article is also admirably clear and readable.

To summarise very briefly, they suggest three distinct psychological elements are at work. The first, itself made up of various executive processes, is what we might call the invisible works; the various unconscious mechanisms that supply the content of what we generally consider conscious thought.  Introspection shows that conscious thoughts often seem to pop up out of nowhere, so we should be ready enough to agree that consciousness is not entirely self-sustaining. When we wake up we generally find that the stream of consciousness is already a going concern. The authors also mention, in support of their case, various experiments. Some of these were on hypnotised subjects, which you might feel detracts from their credibility in explaining normal thought processes. Other ‘priming’ effects have also taken a bit of a knock in the recent trouble over reproducibility. But I wouldn’t make heavy weather of these points; the general contention that the contents of consciousness are generated by unconscious processes (at least to a great extent) seems to me one that few would object to. How could it be otherwise? It would be most peculiar if consciousness were a closed loop, like some Ouroboros swallowing its own tail.

The second element is a continuously generated personal narrative. This is an essentially passive record of some of the content generated by the ‘invisible works’, conditioned by an impression of selfhood and agency. The narrative has evolutionary survival value because it allows the exchange of experience and the co-ordination of behaviour, and enables us to make good guesses at others’ plans – the faculty often called ‘theory of mind’.

At first glance I thought the authors, who are clearly out to denounce something as an epiphenomenon (a thing that is generated by the mind but has no influence on it), had this personal narrative as their target, but that isn’t quite the case. While they see the narrative as essentially the passive product of the invisible works, they clearly believe it has some important influences on our behaviour through the way it enables us to talk to others and take their thoughts into account. One point which seems to me to need more prominence here is our ability to reflexively  ‘talk to ourselves’ mentally and speculate about our own motives. I think the authors accept that this goes on, but some philosophers consider it a vital process, perhaps constitutive of consciousness, so I think they need to give it a substantial measure of attention. Indeed, it offers a potential explanation of free will and the impression of agency; it might be just the actions that flow from the reflexive process that we regard as our own free acts.

One question we might also ask is, why not identify the personal narrative as consciousness itself? It is what we remember of ourselves, after all. Alternatively, why not include the ‘invisible works’? These hidden processes fall outside consciousness because (I think) they are not accessible to introspection; but must all conscious processes be introspectable? There’s a distinction between first and second-order awareness (between knowing and knowing that we know) which offers some alternatives here.

It’s the third element that Oakley and Halligan really want to denounce; this is personal awareness, or what we might consider actual conscious experience. This, they say, is a kind of functionless emergent phenomenon. To ask its purpose is as futile as asking what a rainbow is for; it’s just a by-product of the way things work, an epiphenomenon. It has no evolutionary value, and resembles the whistle on a steam locomotive – powered by the machine but without influence over it (I’ve always thought that analogy short-changes whistles a bit, but never mind).

I suppose the main challenge here might be to ask why the authors think personal awareness is anything at all. It has no effects on mental processes, so any talk about it cannot have been caused by the thing itself. We can, of course, talk about things that did not cause the talking; but those are typically abstract or imaginary entities. Given their broadly sceptical stance, should the authors be declaring that personal awareness is in fact not even an epiphenomenon, but a pure delusion?

I have my reservations about the structure suggested here, but it would be good to have clarity and, at the risk of damning with faint praise, this is certainly one of the more sensible proposals.

Jerry Fodor

Jerry Fodor died last week at the age of 82 – here are obituaries from the NYT and Daily Nous. I think he had three qualities that make a good philosopher. He really wanted the truth (not everyone is that bothered about it); he was up for a fight about it (in argumentative terms); and he had the gift of expressing his ideas clearly. Georges Rey, in the Daily Nous piece, professes surprise over Fodor’s unaccountable habit of choosing simple everyday examples rather than prestigious but obscure academic ones: but even Rey shows his appreciation of a vivid comparison by quoting Dennett’s lively simile of Fodor as a trampoline.

Good writing in philosophy is not just a presentational matter, I think; to express yourself clearly and memorably you have to have ideas that are clear and cogent in the first place; a confused or laborious exposition raises the suspicion that you’re not really that sure what you’re talking about yourself.

Not that well-expressed ideas are always true ones, and in fact I don’t think Fodorism, stimulating as it is, is ever likely to be accepted as correct. The bold hypothesis of a language of thought, sometimes called mentalese, in which all our thinking is done, never really looked attractive to most people. Personally, it strikes me as an unnecessary deferral; something in the brain has to explain language, and saying it’s another language just puts the job off. In fairness, empirical evidence might show that things are like that, though I don’t see it happening at present. Fodor himself linked the idea with a belief in a comprehensive inborn conceptual apparatus; we never learn new concepts, just activate ones that were already there. The idea of inborn understanding has a respectable pedigree, but if Plato couldn’t sell it, Fodor was probably never going to pull it off either.

As I say, these are largely empirical matters, and someone fresh to the debate might wonder why armchair discussion was ever thought to be an adequate method; aren’t these issues for science? Or at least, shouldn’t the armchair guys shut up for a bit until the neurologists can give them a few more pointers? You might well think the same about Fodor’s other celebrated book, The Modularity of Mind. Isn’t a day with a scanner going to tell you more about that than a month of psychological argumentation?

But the truth is that research can’t proceed in a vacuum; without hypotheses to invalidate or a framework of concepts to test and apply, it becomes mere data collection. The concepts and perspectives that Fodor supplied are as stimulating as ever and re-reading his books will still challenge and sharpen anyone’s thinking.

Perhaps my favourite was his riposte to Steven Pinker, The Mind Doesn’t Work That Way. So I’ve been down into the cobwebbed cellars of Conscious Entities and retrieved one of the ‘lost posts’, one I wrote in 2005, which describes it. (I used to put red lines in things in those days for reasons that now elude me).

Here it is…

Not like that.

(30 January 2005)

Jerry Fodor’s 2001 book ‘The Mind Doesn’t Work That Way’ makes a cogent and witty deflationary case. In some ways, it’s the best summary of the current state of affairs I’ve read; which means, alas, that it is almost entirely negative. Fodor’s constant position is that the Computational Theory of Mind (CTM) is the only remotely plausible theory we have – and remotely plausible theories are better than no theories at all. But although he continues to emphasise that this is a reason for investigating the representational system which CTM implies, he now feels the times, and the bouncy optimism of Steven Pinker and Henry Plotkin in particular, call for a little Eeyoreish accentuation of the negative. Sure, CTM is the best theory we have, but that doesn’t mean it’s actually much good. Surely no-one ought to think it’s the complete explanation of all cognitive processes – least of all the mysteries of consciousness! It isn’t just computation that has been over-estimated, either – there are also limits to how far you can go with modularism, though again, that’s a view with which Fodor himself is particularly associated.

The starting point, for both Fodor and those he parts company with, is the idea that logical deduction probably gets done by the brain in essentially the same way as it is done on paper by a logician or in electronic digits by a computer, namely by the formal manipulation of syntactically structured representations – or to put it slightly less polysyllabically, by moving symbols around according to rules. It’s fairly plausible that this is true at least for some cognitive processes, but there is wide scope for argument about whether this ability is the latest and most superficial abstract achievement of the brain, or something that plays an essential role in the engine room of thought.

Don’t you think, to digress for a moment, that formal logic is consistently over-rated in these discussions? It enjoys tremendous intellectual prestige: associated for centuries with the near-holy name of Aristotle, its reputation as the ultimate crystallisation of rationality has been renewed in modern times by its close association with computers – yet its powers are actually feeble. Arithmetic is invoked regularly in everyday life, but no-one ever resorted to syllogisms or predicate calculus to help them make practical decisions. I think the truth is that logic is only one example of a much wider reasoning capacity which stems from our ability to recognise a variety of continuities and identities in the world, including causal ones.

Up to a point, Fodor might go along with this. The problem with formal logical operations, he says, is that they are concerned exclusively with local properties: if you’ve got the logical formula, you don’t need to look elsewhere to determine its validity (in fact, you mustn’t). But that’s not the way much of cognition works: frequently the context is indispensable to judgements about beliefs. He cites the example of judgements about simplicity: the same thought which complicates one theory simplifies another, so you can’t decide whether hypothesis A is a complicating factor without considering facts external to the hypothesis – in fact, the wider global context. We need the faculty of global or abductive reasoning to get us out of the problem, but that’s exactly what formal logic doesn’t supply. We’re back, in another form, with the problem of relevance, or in practical terms, the old demon of the frame problem: how can a computer (or how do human beings) consider just the relevant facts without considering all the irrelevant ones first – if only to determine their relevance?

One strategy for dealing with this problem (other than ignoring it) is to hope that we can leave logic to do what logic does best, and supplement it with appropriate heuristic approaches: instead of deducing the answer we’ll use efficient methods of searching around for it. The snag, says Fodor, is that you need to apply the appropriate heuristic approach, and deciding which it is requires the same grasp of relevance, the same abduction, which we were lacking in the first place.

Another promising-looking strategy would be a connectionist, neural network approach. After all, our problem comes from the need to reason globally – holistically, if you like – and that is often said to be a characteristic virtue of neural networks. But Fodor’s contempt for connectionism knows few bounds; networks, he says, can’t even deliver the classical logic that we had to begin with. In a network the properties of a node are determined entirely by its position within the network: it follows that nodes cannot retain symbolic identity and be recombined in different patterns, a basic requirement of the symbols in formal logic. Classical logic may not be able to answer the global question, but connectionism, in Fodor’s eyes, doesn’t get as far as being able to ask it.

It looks to me as if one avenue of escape is left open here: it seems to be Fodor’s assumption that only single nodes of a network are available to do symbolic duty, but might it not be the case that particular patterns of connection and activation could play that role? You can’t, by definition, have the same node in two different places: but you could have the same pattern realised in two different parts of a network. However, I think there might be other reasons to doubt whether connectionism is the answer. Perhaps, specifically, networks are just too holistic: we need to be able to bring in contextual factors to solve our problems, but only the right ones. Treating everything as relevant is just as bad as not recognising contextual factors at all.
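A toy illustration of that escape route (my own sketch, not anything from Fodor): let a small activation pattern, rather than a single node, do symbolic duty. The same pattern can then be tokened in two different regions of one network, which a single node by definition cannot be:

```python
import numpy as np

rng = np.random.default_rng(1)
network = rng.random(20)               # activations of a toy 20-node network

CAT = np.array([1.0, 0.0, 1.0, 1.0])   # an arbitrary 4-node pattern for 'CAT'
network[3:7] = CAT                     # 'CAT' tokened in one region...
network[12:16] = CAT                   # ...and recombined in another

def find_pattern(net, pattern, tol=0.01):
    """Start indices where the pattern is realised in the activation vector."""
    k = len(pattern)
    return [i for i in range(len(net) - k + 1)
            if np.allclose(net[i:i + k], pattern, atol=tol)]

print(find_pattern(network, CAT))      # -> [3, 12]: one symbol, two places
```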

Be that as it may, Fodor still has one further strategy to consider, of course – modularity. Instead of trying to develop an all-purpose cognitive machine which can deal with anything the world might throw at it, we might set up restricted modules which only deal with restricted domains. The module only gets fed certain kinds of thing to reason about: contextual issues become manageable because the context is restricted to the small domain, which can be exhaustively searched if necessary. Fodor, as he says, is an advocate of modules for certain cognitive purposes, but not ‘massive modularity’, the idea that all, or nearly all, mental functions can be constructed out of modules. For one thing, what mechanism can you use to decide what a given module should be ‘fed’ with? For some sensory functions, it may be relatively easy: you can just hard-wire various inputs from the eyes to your initial visual processing module; but for higher-level cognition something has to decide whether a given input representation is one of the right kind of inputs for module M1 or M2. Such a function cannot itself operate within a restricted domain (unless it too has an earlier function deciding what to feed to it, in which case an infinite regress looms); it has to deal with the global array of possible inputs: but in that case, as before, classical logic will not avail and once again we need the abductive reasoning which we haven’t got.
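The structure of that regress can be made vivid in a few lines of toy code (again my own sketch, not Fodor’s): each module works happily in its restricted domain, but the dispatcher that feeds them has to classify over the unrestricted, global space of inputs – and pushing that job into a prior module just reproduces the problem one step earlier:

```python
def edge_module(inp):
    # Restricted domain: only ever sees image-like input.
    return f"edges found in {inp!r}"

def syntax_module(inp):
    # Restricted domain: only ever sees sentence-like input.
    return f"parse of {inp!r}"

def looks_visual(inp):
    # To route correctly, this classifier must cope with *any* input at all -
    # exactly the unrestricted judgement the modules were built to avoid; and
    # a module to make this decision would need its own dispatcher, and so on.
    return isinstance(inp, bytes)

def dispatch(inp):
    return edge_module(inp) if looks_visual(inp) else syntax_module(inp)

print(dispatch(b"\x89PNG..."))      # routed to the visual module
print(dispatch("the cat sat"))      # routed to the language module
```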

In short, ‘By all the signs, the cognitive mind is up to its ghostly ears in abduction. And we do not know how abduction works.’

I’m afraid that seems to be true.

Time travel consciousness

In What’s Next? Time Travel and Phenomenal Continuity Giuliano Torrengo and Valerio Buonomo argue that our personal identity is about continuity of phenomenal experience, not such psychological matters as memory (championed by John Locke). They refer to this phenomenal continuity as the ‘stream of consciousness’. I’m not sure that William James, who I believe originated the phrase, would have seen the stream of consciousness as being distinct from the series of psychological states in our minds, but it is a handy label.

To support their case, Torrengo and Buonomo offer a couple of thought experiments. The first involves two imaginary machines. One machine transfers the ‘stream of consciousness’ from one person to another while leaving the psychology (memories, beliefs, intentions) behind; the other does the reverse, moving psychology but not phenomenology. Torrengo and Buonomo argue that having your opinions, beliefs and intentions changed while the stream of consciousness remained intact would be akin to a thorough brainwashing. Your politics might suddenly change, but you would still be the same person. Contrariwise, if your continuity of experience moved over to a different body, it would feel as if you had gone with it.

That is plausible enough, but there are undoubtedly people who would refuse to accept it, because they would deny that this separation of the phenomenal from the psychological is possible or, crucially, even conceivable. This might be because they think the two are essentially identical, or because they think phenomenal experience arises directly out of psychology. Some would probably deny that phenomenal experience in this sense even exists.

There is a bit of scope for clarification about what variety of phenomenal experience Torrengo and Buonomo have in mind. At one point they speak of it as including thought, which sounds sort of psychological to me. And by invoking machines, their thought experiment implies that their stream of consciousness is technologically tractable, not the kind of slippery qualic experience which lies outside the realm of physics.

Still, thought experiments don’t claim to be proofs; they appeal to intuition and introspection, and with some residual reservations, Torrengo and Buonomo seem to have one that works on that level. They consider three objections. The first complains that we don’t know how rich the stream of consciousness must be in order to be the bearer of identity. Perhaps if it becomes too attenuated it will cease to work? This business of a minimum richness seems to emerge out of the blue, and in fact Torrengo and Buonomo dismiss it as a point which affects all ‘mentalist’ theories. The second objection is a clever one; it says we can only identify a stream of consciousness in relation to a person in the first place, so using it as a criterion of personal identity begs the question. Torrengo and Buonomo essentially deny that there needs to be an experiencing subject over and above the stream of consciousness. The third challenge arises from gaps; if identity depends on continuity, what happens when we fall asleep and experience ceases? Do we acquire a new identity? Here it seems Torrengo and Buonomo fall back on a defence used by others: that strictly speaking it is the continuity of the capacity for a given stream of consciousness that matters. I think a determined opponent might press further attacks on that.

Perhaps, though, the more challenging and interesting thought experiment is the second, involving time travel. Torrengo is the founder of the Centre for Philosophy of Time in Milan, and has a substantial body of work on the experience of time and related matters, so this is his home turf in a sense. The thought experiment is quite simple; Lally invents a time machine and uses it to spend a day in sixties London. There are two ways of ordering her experience. One is the way she would see it: her earlier life, the time trip, her later life. The other is according to ‘objective’ time: she appears in old London Town and then vanishes; much later she lives her early life, then is absent for a short while, and finally lives her later life. These can’t both be right, suggest Torrengo and Buonomo, and so it must surely be that her experience goes off on the former course while her psychology goes the other way.

This doesn’t make much sense to me, so perhaps I have misunderstood. Certainly there are two timelines, but Lally surely follows one and remains whole? It isn’t the case that when she is in sixties London she lacks intentions or beliefs, having somehow left those behind. Torrengo and Buonomo almost seem to think that is the case; they say it is possible to imagine her in sixties London not remembering who she is. Who knows, perhaps time machines do work like that, but if so we’re running into one of the methodological weaknesses of thought experiments: if you assume something impossible like time travel to begin with, it’s hard to have strong intuitions about what follows.

At the end of the day I’m left with a sceptical feeling, not about Torrengo and Buonomo’s ideas in particular but about the whole enterprise of trying to reduce or analyse the concept of personal identity. It is, after all, a particular case of identity, and wouldn’t identity be a good candidate for being one of those ‘primitive’ ideas that we just have to start with? I don’t know; or perhaps I should just say there is a person who doesn’t know, whose identity I leave unprobed.

Renormalisation

An intriguing but puzzling paper from Simon DeDeo.

He begins by noting that while physics is good at generalised predictions, it fails to predict the particular. Working at the blackboard we can deduce laws governing the genesis of stars, but nothing about the specific existence of the blackboard. He sees this as a gap and for reasons that remain obscure to me he sees it as a matter of origins; the origin of society, consciousness, etc. To me, it’s about their nature; assuming it’s about origins constrains the possible answers unnecessarily to causal accounts.

Contrary to our expectations, says DeDeo, it’s relatively easy to describe everything, but hard to describe just one thing – the Frame Problem is an example where it’s the specifics that trip us up. By contrast, with the Library of Babel, Borges effortlessly gave us a description of everything. The Library of Babel is an imagined collection which contains every possible ordering of the letters of the alphabet; the extraordinary thing about it is that although it is finite, it contains every possible text – all the ones that were never written as well as all the ones that were.

We could quite easily write a computer program to find, within the library, all occurrences of the text string ‘Shakespeare’, says DeDeo; but there’s no way of finding all the texts about Shakespeare that make sense. That’s surely true. DeDeo says this is because what we’re asking for is more than just pattern matching. In particular, he says, we need self-reference. I can’t make out why he thinks that, and I’m pretty sure he’s wrong, though I might well be missing the point. To me, it seems clear that in order to identify texts that make sense, we need to consider meanings, which are not about self-reference but reference to other things. In fact, context and meaning are of the essence. One book from the Library of Babel contains all books if we are allowed to apply to it an arbitrary interpretation or encoding of our choice; equally any book is nonsense if we don’t know how to read it.
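A minimal sketch of the contrast as I read it (the miniature library and trimmed alphabet are my own simplifications): generating the Library and pattern-matching over it takes a few lines of code, while no comparable test exists for ‘makes sense’:

```python
from itertools import product
import string

ALPHABET = string.ascii_lowercase + " "  # Borges used 22 letters, space,
                                         # comma and full stop; trimmed here

def library(max_len):
    """Lazily yield every possible text of up to max_len characters."""
    for n in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

# Finding every text containing a target string is purely mechanical...
target = "cat"   # 'shakespeare' works identically, just over longer shelves
hits = (text for text in library(4) if target in text)
print(next(hits))  # -> 'cat'

# ...but no predicate of this kind can select the texts that *mean* something.
```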

But for DeDeo this is a truth with a promising mathematical feel. We just need to elucidate the origin of self-reference, which he thinks lies at least partly in memory. The curious thing, in his eyes, is that physics only seems to require (or allow) certain levels of self-reference. We have velocity, we have acceleration, we have changes in acceleration; but models of worlds that have laws about third- or higher-order entities like changes in acceleration tend to be unstable, with runaway geometrical increases messing everything up.

So maybe we shouldn’t go there? The funny thing is, we seem to be able to sense a third-order physical entity. A change in acceleration is known as ‘jerk’ and we certainly feel jerked in some situations. I have to say I doubt this. DeDeo mentions the sudden motions of a lift, but those, like all instances of jerk, surely correspond with an acceleration? I wonder whether the concept of jerk as a distinct entity in physics isn’t redundant. For DeDeo, we perceive it through the use of memory, and this is the key to how we perceive other particularities not evident from the laws of physics. We tend to deal with coarse-grained laws, but the fine-grained detail is waiting to trip us up.
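For reference, here is the standard kinematic ladder (textbook definitions, not anything from DeDeo’s paper). It also makes the point above concrete: jerk is non-zero exactly when acceleration is changing, so anything that feels like a jerk comes packaged with a changing acceleration:

```latex
v = \frac{dx}{dt}, \qquad
a = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}, \qquad
j = \frac{da}{dt} = \frac{d^{3}x}{dt^{3}}
```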

It’s not all bad news; perhaps, DeDeo speculates, there are new levels we have yet to explore…

I’m very unsure I’ve correctly understood what he’s proposing, and the fact that it seems to miss  the real point (meaning and context) might well be a sign it’s me that’s not really getting it. Any thoughts?

What Machines Can’t Do

Here’s an IAI debate with David Chalmers, Kate Devlin, and Hilary Lawson.

In ultra-brief summary, Lawson points out that there are still things that computers perform poorly at; recognising everyday real-world objects, notably. (Sounds like a bad prognosis for self-driving cars.) For Lawson, thought is a way of holding different things as the same. Devlin thinks computers can’t do what humans do yet, but in the long run, surely they will.

Chalmers points out that machines can do whatever brains can do because the brain is a machine (in a sense not adequately explored here, though Chalmers himself indicates the main objections).

There’s some brief discussion of the Singularity.

In my view, thoughts are mental or brain states that are about something. As yet, we have no clear idea of what this aboutness is and how it works, or whether it is computational (probably not, I think) or subserved by computation in a way that means it could benefit from the exponential growth in computing power (which may have stopped being exponential). At the moment, computers do a great imitation of what human translators do, but to date they haven’t even got started on real meaning, let alone set off on an exponential growth curve. Will modern machine learning techniques change that?

Disastrous Consciousness

Hugh Howey gives a bold, amusing, and hopelessly optimistic account of how to construct consciousness in Wired. He thinks it wouldn’t be particularly difficult. Now you might think that a man who knows how to create artificial consciousness shouldn’t be writing articles; he should be building the robot mind. Surely that would make his case more powerfully than any amount of prose? But Howey thinks an artificial consciousness would be disastrous. He thinks even the natural kind is an unfortunate burden, something we have to put up with because evolution has yet to find a way of getting the benefits of certain strategies without the downsides. But he doesn’t believe that conscious AI would take over the world, or threaten human survival, so I would still have thought one demonstration piece was worth the effort? Consciousness sucks, but here’s an example just to prove the point?

What is the theory underlying Howey’s confidence? He rests his ideas on Theory of Mind (which he thinks is little discussed); the ability to infer the thoughts and intentions of others. In essence, he thinks that was a really useful capacity for us to acquire, helping us compete in the cut-throat world of human society; but when we turn it on ourselves it disastrously generates wrong results, in particular about our own possession of conscious states.

It remains a bit mysterious to me why he thinks a capacity that is so useful applied to others should be so disastrously and comprehensively wrong when applied to ourselves. He mentions priming studies, where our behaviour is actually determined by factors we’re unaware of; priming’s reputation has suffered rather badly recently in the crisis of non-reproducibility, but I wouldn’t have thought even ardent fans of priming would claim our mental content is entirely dictated by priming effects.

Although Dennett doesn’t get a mention, Howey’s ideas seem very Dennettian, and I think they suffer from similar difficulties. So our Theory of Mind leads us to attribute conscious thoughts and intentions to others; but what are we attributing to them? The theory tells us that neither we nor they actually have these conscious contents; all any of us has is self-attributions of conscious contents. So what, we’re attributing to them some self-attributions of self-attributions of… The theory covertly assumes we already have and understand the very conscious states it is meant to analyse away. Dennett, of course, has some further things to say about this, and he’s not as negative about self-attributions as Howey.

But you know, it’s all pretty implausible intuitively. Suppose I take a mouthful of soft-boiled egg which tastes bad, and I spit it out. According to Howey what went on there is that I noticed myself spitting out the egg and thought to myself: hm, I infer from this behaviour that it’s very probable I just experienced a bad taste, or maybe the egg was too hot, can’t quite tell for sure.

The thing is, there are real conscious states irrespective of my own beliefs about them (which indeed may be plagued by error). They are characterised by having content and intentionality, but these are things Howey does not believe in, or rather, it seems, has never thought of; his view that a big bank of indicator lights shows a language capacity suggests he hasn’t gone into this business of meaning and language quite deeply enough.

If he had to build an artificial consciousness, he might set up a community of self-driving cars, let one make inferences about the motives of the others, and then apply that capacity to itself. But it would be a stupid thing to do, because it would get it wrong all the time; in fact at this point Howey seems to be tending towards the view that all Theory of Mind is fatally error-prone. It would be better, he reckons, if all the cars could have access to all of each other’s internal data, just as universal telepathy would be better for us (though in the human case it would be undermined by mind-masking freeloaders).

Would it, though? If the cars really had intentions, their future behaviour would not be readily deducible  simply from reading off all the measurements. You really do have to construct some kind of intentional extrapolation, which is what the Dennettian intentional stance is supposed to do.

I worry just slightly that some of the things Howey says seem to veer close to saying, hey a lot of these systems are sort of aware already; which seems unhelpful. Generally, it’s a vigorous and entertaining exposition, even if, in my view, on the wrong track.

Issues

Ned Block has produced a meaty discussion for  The Encyclopedia of Cognitive Science on Philosophical Issues About Consciousness.  

There are special difficulties about writing an encyclopedia on these topics because of the lack of consensus. There is substantial disagreement, not only about the answers, but about what the questions are, and even about how to frame and approach the subject of consciousness at all. It is still possible to soldier on responsibly, like the heroic Stanford Encyclopedia of Philosophy, doing your level best to be comprehensive and balanced. Authors may find themselves describing and critiquing many complex points of view that neither they nor the reader can take seriously for a moment; sometimes possible points of view (relying on fine and esoteric distinctions of a subtlety difficult even for professionals to grasp) that in point of fact no-one, living or dead, has ever espoused. This can get tedious. The other approach, to my mind, is epitomised by the Oxford Companion to the Mind, edited by Richard Gregory, whose policy seemed to be to gather as much interesting stuff as possible and worry about how it hung together later, if at all. If you tried to use the resulting volume as a work of reference you would usually come up with nothing, or with a quirky, stimulating take instead of the mainstream summary you really wanted; however, it was a cracking read, full of fascinating passages and endlessly browsable.

Luckily for us, Block’s piece seems to lean towards the second approach; he is mainly telling us what he thinks is true, rather than recounting everything anyone has said, or might have said. You might think, therefore, that he would start off with the useful and much-quoted distinction he himself introduced into the subject: between phenomenal, or p-consciousness, and access, or a-consciousness. Here instead he proposes two basic forms of consciousness: phenomenality and reflexivity. Phenomenality, the feel or subjective aspect of consciousness, is evidently fundamental; reflexivity is reflection on phenomenal experience. While the first seems to be possible without the second – we can have subjective experience without thinking about it, as we might suppose dogs or other animals do – reflexivity seems on this account to require phenomenality.  It doesn’t seem that we could have a conscious creature with no sensory apparatus, that simply sits quietly and – what? Invents set theory, perhaps, or metaphysics (why not?).

Anyway, the Hard Problem according to Block is how to explain a conscious state (especially phenomenality) in terms of neurology. In fact, he says, no-one has offered even a highly speculative answer, and there is some reason to think no satisfactory answer can be given.  He thinks there are broadly four naturalistic ways you can go: eliminativism; philosophical reductionism (or deflationism); phenomenal realism (or inflationism); or  dualistic naturalism.  The third option is the one Block favours. 

He describes inflationism as the belief that consciousness cannot be philosophically reduced. So while a deflationist expects to reduce consciousness to a redundant term with no distinct and useful meaning, an inflationist thinks the concept can’t be done away with. However, an inflationist may well believe that scientific reduction of consciousness is possible. So, for example, science has reduced heat to molecular kinetic energy; but this is an empirical matter; the concept of heat is not abolished. (I’m a bit uncomfortable with this example but you see what he’s getting at). Inflationists might also, like McGinn, think that although empirical reduction is possible, it’s beyond our mental capacities; or they might think it’s altogether impossible, like Searle (is that right or does he think we just haven’t got the reduction yet?).

Block mentions some leading deflationist views such as higher-order theories and representationism, but inflationists will think that all such theories leave out the thing itself, actual phenomenal experience. How would an empirical reduction help? So what if experience Q is neural state X? We’re not looking for an explanation of that identity – there are no explanations of identities – but rather an explanation of how something like Q could be something like X, an explanation that removes the sense of puzzlement. And there, we’re back at square one; nobody has any idea.

So what do we do? Block thinks there is a way forward if we distinguish carefully between a property and the concept of a property. Different concepts can identify the same property, and this provides a neat analysis of the classic thought experiment of Mary the colour scientist. Mary knows everything science could ever tell her about colour; when she sees red for the first time does she know a new fact – what red is like? No; on this analysis she gains a new concept of a property she was already familiar with through other, scientific concepts. Thus we can exchange a dualism of properties for a dualism of concepts. That may be less troubling – a proliferation of concepts doesn’t seem so problematic – but I’m not sure it’s altogether trouble-free; for one thing it requires phenomenal concepts, which seem themselves to need some demystifying explanation. In general, though, I like what I take to be Block’s overall outlook: that reductions can be too greedy, and that the world actually retains a certain unavoidable conceptual, perhaps ontological, complexity.

Moving off on a different tack, he notes recent successes in identifying neural correlates of experience. There is a problem, however; while we can say that a certain experience corresponds with a certain pattern of neuronal activity, that pattern (so far as we can tell) can recur without the conscious experience. What’s the missing ingredient? As a matter of fact I think it could be almost anything, given the limited knowledge we have of neurological detail; however, Block sees two families of possible explanation. Maybe it’s something like intensity or synchrony; or maybe it’s access (aha!), the way the activity is connected up with other bits of brain that do memory or decision-making – let’s say with the global mental workspace, without necessarily committing to that being a distinct thing.

But these types of explanation embody different theoretical approaches: physicalism and functionalism respectively. The danger is that these may be theories of different kinds of consciousness. Physicalism may be after phenomenal consciousness, the inward experience, whereas functionalism has access consciousness, the sort that is about such things as regulating behaviour, in its sights. It might therefore be that researchers are sometimes talking past each other. Access consciousness is not reflexivity, by the way, although reflexivity might be seen as a special kind of access. Block counts phenomenality, reflexivity, and access as three distinct concepts.

Of course, either kind of explanation – physicalist or functionalist – implies that there’s something more going on than just plain neural correlates, so in a sense whichever way you go the real drama is still offstage. My instincts tell me that Block is doing things backwards; he should have started with access consciousness and worked towards the phenomenal. But as I say it is a meaty entry for an encyclopaedia, one I haven’t nearly done justice to; see what you make of it.

It’s out there…

Where is consciousness? It’s out there, apparently, not in here. There has been an interesting dialogue series going on between Riccardo Manzotti and Tim Parks in the NYRB (thanks to Tom Clark for drawing my attention to it). The separate articles are not particularly helpfully laid out or linked to each other; the series is:

http://www.nybooks.com/daily/2016/11/21/challenge-of-defining-consciousness/
http://www.nybooks.com/daily/2016/12/08/color-of-consciousness/
http://www.nybooks.com/daily/2016/12/30/consciousness-does-information-smell/
http://www.nybooks.com/daily/2017/01/26/consciousness-the-ice-cream-problem/
http://www.nybooks.com/daily/2017/02/22/consciousness-am-i-the-apple/
http://www.nybooks.com/daily/2017/03/16/consciousness-mind-in-the-whirlwind/
http://www.nybooks.com/daily/2017/04/20/consciousness-dreaming-outside-our-heads/
http://www.nybooks.com/daily/2017/05/11/consciousness-the-body-and-us/
http://www.nybooks.com/daily/2017/06/17/consciousness-whos-at-the-wheel/

We discussed Manzotti’s views back in 2006, when with Honderich and Tonneau he represented a new wave of externalism. His version seemed to me perhaps the clearest and most attractive back then (though I think he’s mistaken). He continues to put some good arguments.

In the first part, Manzotti says consciousness is awareness, experience. It is somewhat mysterious – we mustn’t take for granted any view about a movie playing in our head or the like – and it doesn’t feature in the scientific account. All the events and processes described by science could, it seems, go on without conscious experience occurring.

He is scathing, however, about the view that consciousness is therefore special (surely something that science doesn’t account for can reasonably be seen as special?), and he suggests the word “mental” is a kind of conceptual dustbin for anything we can’t accommodate otherwise. He and Parks describe the majority of views as internalist, dedicated to the view that one way or another neural activity just is consciousness. Many neural correlates of consciousness have been spotted, says Manzotti, but correlates ain’t the thing itself.

In the second part he tackles colour, one of the strongest cards in the internalist hand. It looks to us as if things just have colour as a simple property, but in fact the science of colour tells us it’s very far from being that simple. For one thing how we perceive a colour depends strongly on what other colours are adjacent; Manzotti demonstrates this with a graphic where areas with the same RGB values appear either blue or green. Examples like this make it very tempting to conclude that colour is constructed in the brain, but Manzotti boldly suggests that if science and ordinary understanding are at odds, so much the worse for science. Maybe we ought to accept that those colours really are different, and be damned to RGB values.
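The effect is easy to reproduce at home; here is a little sketch (my own grey-scale stand-in for Manzotti’s blue/green graphic) that draws two patches with identical pixel values on different surrounds – most viewers see two different shades:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simultaneous contrast: two identical mid-grey patches, dark vs light surround.
img = np.zeros((200, 400, 3))
img[:, :200] = 0.1           # dark surround, left half
img[:, 200:] = 0.9           # light surround, right half
img[75:125, 75:125] = 0.5    # identical 0.5-grey patch on each side
img[75:125, 275:325] = 0.5

plt.imshow(img)
plt.axis("off")
plt.show()
```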

The third dialogue attacks the metaphor of a computer often applied to the brain, and rejects talk of information processing. Information is not a physical thing, says Manzotti, and to speak of it as though it were a visible fluid passing through the brain risks dualism; something Tononi, with his theory of integrated information, accepts; he agrees that his ideas about information having two aspects point that way.

So what’s a better answer? Manzotti traces externalist ideas back to Aristotle, but focuses on the more recent ideas of affordances and enactivism. An affordance is roughly a possibility offered to us by an object; a hammer offers us the possibility of hitting nails. This idea of bashing things does not need to be represented in the head, because it is out there in the form of the hammer. Enactivism develops a more general idea of perception as action, but runs into difficulties in some cases, such as dreams, where we seem to have experience without action; or consider that licking a strawberry ice cream and licking a chocolate one is the same action but yields very different experiences.

To set out his own view, Manzotti introduces the ‘metaphysical switchboard’: one switch toggles whether subject and object are separate, the other whether the subject is physical or not. If they’re separate, and we choose to make the subject non-physical, we get something like Cartesian dualism, with all the problems that entails. If we select ‘physical’ then we get the view of modern science; and that too seems to be failing. If subject and object are neither separate nor physical, we get Berkeleyan idealism; my perceptions actually constitute reality. The only option that works is to say that subject and object are identical, but physical; so when I see an apple, my experience of it is identical with the apple itself. Parks, rightly I think, says that most people will find this bonkers at first sight. But after all, the apple is the only thing that has apple-like qualities! There’s no appliness in my brain or in my actions.

This raises many problems. My experience of the apple changes according to conditions, yet the apple itself doesn’t change. Oh no? says Manzotti, why not? You’re just clinging to the subject/object distinction; let it go and there’s no problem. OK, but if my experience of the apple is identical with the apple, and so is yours, then our experiences must be identical. In fact, since subject and object are the same, we must also be identical!

The answer here is curious. Manzotti points out that the physical quality of velocity is relative to other things; you may be travelling at one speed relative to me but a different one compared to that train going by. In fact, he says, all physical qualities are relative, so the apple is an apple experience relative to one animal (me) and at the same time relative to another in a different way. I don’t think this ingenious manoeuvre ultimately works; it seems Manzotti is introducing an intermediate entity of the kind he was trying to dispel; we now have an apple-experience relative to me which is different from the one relative to you. What binds these and makes them experiences of the same apple? If we say nothing, we fall back into idealism; if it’s the real physical apple, then we’re more or less back with the traditional framework, just differently labelled.

What about dreams and hallucinations? Manzotti holds that they are always made up out of real things we have previously experienced. Hey, he says, if we just invent things and colour is made in the head, how come we never dream new colours? He argues that there is always an interval between cause and effect when we experience things; given that, why shouldn’t real things from long ago be the causes of dreams?

And the self, that other element in the traditional picture? It’s made up of all the experiences, all the things experienced, that are relative to us; all physical, if a little scattered and dare I say metaphysically unusual; a massive conjunction bound together by… nothing in particular? Of course the body is central, and for certain feelings, or for when we’re in a dark, silent room, it may be especially salient. But it’s not the whole thing, and still less is the brain.

In the latest dialogue, Manzotti and Parks consider free will. For Manzotti, having said that you are the sum of your experiences, it is straightforward to say that your decisions are made by the subset of those experiences that are causally active; nothing that contradicts determinist physics, but a reasonable sense in which we can say your act belonged to you. To me this is a relatively appealing outlook.

Overall? Well, I like the way externalism seeks to get rid of all the problems with mediation that lead many people to think we never experience the world, only our own impressions of it. Manzotti’s version is particularly coherent and intelligible. I’m not sure his clever relativity finally works though. I agree that experience isn’t strictly in the brain, but I don’t think it’s in the apple either; to talk about its physical location is just a mistake. The processes that give rise to experience certainly have a location, but in itself it just doesn’t have that kind of property.

Panpsychism vindicated?

Could the Universe be conscious? This might seem like one of those Interesting Questions To Which The Answer Is ‘No’ that so often provide arresting headlines in the popular press. Since the Universe contains everything, what would it be conscious of? What would it think about? Thinking about itself – thinking about any real thing – would be bizarre, analogous to us thinking about the activity of the  neurons that were doing the thinking. But I suppose it could think about imaginary stuff. Perhaps even the cosmos can dream; perhaps it thinks it’s Cleopatra or Napoleon.

Actually, so far as I can see no-one is actually suggesting the Universe as a whole, as an entity, is conscious. Instead this highly original paper by Gregory L. Matloff starts with panpsychism, a belief that there is some sort of universal field of proto-consciousness permeating the cosmos. That is a not unpopular outlook these days. What’s startling is Matloff’s suggestion that some stars might be able to do roughly what our brains are supposed by panpsychists to do; recruit the field and use it to generate their own consciousness, exerting some degree of voluntary control over their own movements.

He relies for evidence on a phenomenon called Parenago’s discontinuity; cooler, less massive stars seem to circle the galaxy a bit faster than the others. Dismissing a couple of rival explanations, he suggests that these cooler stars might be the ones capable of hosting consciousness, and might be capable of shooting jets from their interior in a consistent direction so as to exert an influence over their own motion. This might be a testable hypothesis, bringing panpsychism in from the debatable realms of philosophy to the rigorous science of astrophysics (unkind people might suggest that the latter field is actually about as speculative as the former; I couldn’t possibly comment).

In discussing panpsychism it is good to draw a distinction between types of consciousness. There is a certain practical decision-making capacity in human consciousness that is relatively well rooted in science in several ways. We can see roughly how it emerged from biological evolution and why it is useful, and we have at least some idea of how neurons might do it, together with a lot of evidence that in fact, they do do it.  Then there is the much mistier business of subjective experience, what being conscious is actually like. We know little about that and it raises severe problems. I think it would be true to claim that most panpsychists think the kind of awareness that suffuses the world is of the latter kind; it is a dim general awareness, not a capacity to make snappy decisions. It is, in my view, one of the big disadvantages of panpsychism that it does not help much with explaining the practical, working kind of consciousness and in fact arguably leaves us with more to account for  than we had on our plate to start with.

Anyway, if Matloff’s theory is to be plausible, he needs to explain how stars could possibly build the decision-making kind of consciousness, and how the universal field would help. To his credit he recognises this – stars surely don’t have neurons – and offers at least some hints about how it might work. If I’ve got it right, the suggestion is that the universal field of consciousness might be identified with vacuum fluctuation pressures, which on the one hand might influence the molecules present in regions of the cooler stars under consideration, and on the other have effects within neurons more or less on Penrose/Hameroff lines. This is at best an outline, and raises immediate and difficult questions; why would vacuum fluctuation have anything to do with subjective experience? If a bunch of molecules in cool suns is enough for conscious volition, why doesn’t the sea have a mind of its own? And so on. For me the deadliest questions are the simplest. If cool stars have conscious control of their movements, why are they all using it the same way – to speed up their circulation a bit? You’d think if they were conscious they would be steering around in different ways according to their own choices. Then again, why would they choose to do anything? As animals we need consciousness to help us pursue food, shelter, reproduction, and so on. Why would stars care which way they went?

I want to be fair to Matloff, because we shouldn’t mock ideas merely for being unconventional. But I see one awful possibility looming. His theory somewhat recalls medieval ideas about angels moving the stars in perfect harmony. They acted in a co-ordinated way because although the angels had wills of their own, they subjected them to God’s. Now, why are the cool stars apparently all using their wills in a similarly co-ordinated way? Are they bound together through the vacuum fluctuations; have we finally found out there the physical manifestation of God? Please, please, nobody go in that direction!