The Kekulé Problem

Cormac McCarthy asks an interesting question here, and gives the wrong answer, also interestingly. Along the way he manages to describe in simple language a fundamental problem of how our minds work, one that, as he rightly says, remains mysterious – though I think I have a clue.

McCarthy briefly retells the story of Kekulé, who struggled to understand the form of the benzene molecule. A dream of a lizard biting its own tail (like Ouroboros) prompted him to realise that the molecule was a ring. So we may conclude that Kekulé’s unconscious had done the work for him and found the solution. McCarthy rightly contends that most of our problem-solving, our maths and so on, is done unconsciously. We can reason about things consciously using language and other symbols explicitly, but it is a very slow, clunky and ineffective business. In fact the main point of doing this explicit reasoning (writing out propositional calculus or hand-simulating the running of a computer program for example) is usually to drag into conscious, recordable form the logic or processes we would normally use unconsciously.

The problem posed by McCarthy is: why didn’t Kekulé’s unconscious just tell him the answer, instead of sending it as an esoteric symbol in a dream? The unconscious understands language, he says, or it wouldn’t have understood the problem Kekulé was trying to answer in the first place (I don’t think that is clearly true). He concludes that the unconscious has been running our mental lives quite successfully for millions of years before language came along: it therefore distrusts and dislikes language and prefers other ways.

That isn’t right. Part of the problem here is too strong a separation between conscious and unconscious, which are really just the silent and the talky bits of the same process. Kekulé’s unconscious could not speak to him in words because any mental content that is in the form of language is automatically part of consciousness. Or to put it another way, his unconscious did tell him the answer in words, the words (let’s say) ‘By golly, benzene is a ring!’ But those words appeared as thoughts of Kekulé’s own, necessarily conscious thoughts. They were certainly put together by unconscious processes: as McCarthy recognises, all of our conscious verbal thoughts and utterances are constructed by unconscious processes – or we would be caught in an infinite regress.

How does the unconscious pull all this off? I have claimed before that our conscious thoughts and much of our higher cognition is grounded in recognition. The recognition of larger entities from smaller ones (and back down elsewhere) allows us to leap from present stimuli to distant, future or hypothetical ones, and finally into the realm of abstraction where we can recognise even the pure forms of mathematics. When we think about a problem, we do not typically do computations or talk to ourselves (though we might do that too). We hold the problem in mind and wait for our faculties of recognition to identify something that makes sense. I’m sure that’s what happened in Kekulé’s mind.

The same processes ultimately ground our use of language. As I have suggested, the grunt emitted when digging the dry riverbed can be recognised as part of the digging process, another part of which may be the desirable item of water. The grunt can therefore become an invitation to dig: and I think invitations and commands probably came before names.

Anyway, though I have a somewhat different answer, I think McCarthy is, as it were, digging in the right place with the right tools.

Seating Consciousness

This short piece by Tam Hunt in Nautilus asks whether the brain’s electromagnetic fields could be the seat of consciousness.

What does that even mean? Let’s start with a sensible answer. It could just mean that electromagnetic effects are an essential part of the way the brain works. A few ideas along these lines are discussed in the piece, and it’s a perfectly respectable hypothesis. But it’s hard to see why that would mean the electromagnetic aspects of brain processes are the seat of consciousness any more than the chemical or physical aspects. In fact the whole idea of separating electromagnetic effects from the physical events they’re associated with seems slightly weird to me; you can’t really have one without the other, can you?

A much more problematic reading might be that the electromagnetic fields are where consciousness is actually located. I believe this would be a kind of category error. Consciousness in itself (as opposed to the processes that support and presumably generate it) does not have a location. It’s like a piece of arithmetic or a narrative: things of that kind don’t have the property of a physical location.

It looks as if Hunt is really thinking in terms of the search, often pursued over the years, for the neural correlates of consciousness. The idea of electromagnetic fields being the seat of consciousness essentially says, stop looking at the neurons and try looking at the fields instead.

That’s fine, except that for me there’s a bit of a problem with the ‘correlates of consciousness’ strategy anyway; I doubt whether there is, in the final analysis, any systematic correlation (though things may not be quite as bad as that makes them sound).

By way of explanation I offer an analogy; the search for the textual correlates of story. We have reams of text available for research, and we know that some of this text has the property of telling one or another story. Lots of it, equally, does not – it’s non-fiction of various kinds. Now we know that for each story there are corresponding texts; the question is, which formal properties of those strings of text make them stories?

Now the project isn’t completely hopeless. We may be able to identify passages of dialogue, for example, just by examining formal textual properties (occurrence of quote marks and indentation, or of strings like ‘said’). If we can spot passages of dialogue, we’ll have a pretty good clue that we might be looking at a story.

But we can only go so far with that, and we will certainly be wrong if we claim that the textual properties that suggest dialogue can actually be identified with storyhood. It’s obvious that there could be passages of text with all those properties that are in fact mere gibberish, or a factual report. Moreover, there are many stories that have no dialogue and none of the other properties we might pick out. The fundamental problem is that storyhood is about what the text means, and that is not a formal property we can get to just by examination. In the same way, conscious states are conscious because they are about something, and aboutness is not a matter of patterns of neural or electromagnetic activity – though at a practical level we might actually be able to spot conscious activity at relatively good success rates, just as we could do a fair job of picking out stories from a mass of text even if we can’t, in fact, read.
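To make the analogy concrete, a crude dialogue-detector of the kind described might look like the sketch below. It is purely illustrative: the function name, the cue words and the scoring threshold are all invented here, and the point of the argument is precisely that a detector like this can be fooled in both directions.

```python
import re

# Crude formal cues for dialogue: quotation marks and speech verbs.
# The cue words and threshold are invented for illustration.
SPEECH_VERBS = re.compile(r"\b(said|asked|replied|shouted)\b", re.IGNORECASE)

def looks_like_dialogue(text: str, threshold: float = 0.01) -> bool:
    """Guess whether a passage contains dialogue from surface features alone."""
    if not text:
        return False
    quote_count = text.count('"') + text.count('\u201c') + text.count('\u201d')
    verb_count = len(SPEECH_VERBS.findall(text))
    # Score per character, so longer passages aren't favoured automatically.
    score = (quote_count + verb_count) / len(text)
    return score >= threshold

story = '"Where are you going?" she asked. "Home," he said.'
report = "Annual rainfall in the region averages 600mm."
gibberish = '"Blarg foo," zxqv said. "Wibble wobble," said glorp.'

print(looks_like_dialogue(story))      # True: quote marks and speech verbs present
print(looks_like_dialogue(report))     # False: no formal dialogue cues
print(looks_like_dialogue(gibberish))  # True: the cues are there, but it's not a story
```

The third case is the crux: the formal cues fire on gibberish just as happily as on a real story, because meaning is exactly what they cannot see.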

Be that as it may, Hunt’s real point is to suggest that electromagnetic field correlates might be better than neural ones. Why (apart from research evidence) does he find that an attractive idea? If I’ve got this right, he is a panpsychist, someone who believes our consciousness is built out of the sparks of lower-grade awareness which are natural properties of matter. There is obviously a question there about how the sparks get put together into richer kinds of consciousness, and Hunt thinks resonance might play a key part. If it’s all about electromagnetic fields, it clearly becomes much easier to see how some sort of resonance might be in play.

I haven’t read enough about Hunt’s ideas to be anywhere near doing them justice; I have no doubt there is a lot of reasonable stuff to be said about and in favour of them. But as a first reaction resonance looks to me like an effect that reduces complexity and richness rather than enhancing them. If the whole brain is pulsing along to the same rhythm that suggests less content than a brain where every bit is doing its own thing. But perhaps that’s a subject I ought to address at greater length another time, if I’m going to.

Body Memory

Can you remember with your leg, and is that part of who you are? An interesting piece by Ben Platts-Mills in Psyche suggests it might be so. This is not Psyche the good old journal that went under in 2010, alas, but a promising new venture from those excellent folk at Aeon.

The piece naturally mentions Locke, who put memory at the centre of personal identity, and gives some fascinating snippets about the difficult lives of people with anterograde amnesia, the inability to form new memories. These people have obviously not lost their identities, however much they may struggle. I have always been sceptical about the Lockean view more or less for this reason; if I lost my memories, would I stop being me? It doesn’t quite seem so.

In the cases mentioned we don’t quite have that total loss, though; the subjects mainly retain some or all of their existing memories, so a Lockean can simply say those retained memories are what preserve their identity. The inability to form new memories merely stops their identity going through the incremental further changes which are presumably normal (though some of these people do seem to change).

Platts-Mills wants to make the bolder claim that in fact important bits of memory and identity are stored, not in the brain, but in the context and bodies involved. Perhaps a subject does not remember anything about art, but fixed bodily habits draw them towards a studio where the tools and equipment (perhaps we could say the affordances) prompt them to create. This clearly chimes well with much that has been written about various versions of the extended or expanded mind.

I don’t think these examples make the case very strongly, though; Platts-Mills seems to underestimate the brain, in particular assuming that the absence of conscious recall means there’s no brain memory at all. In fact our brains clearly retain vast amounts of stuff we cannot get at consciously. Platts-Mills mentions a former paediatric nurse who cannot remember her career but when handed a baby, holds it correctly and asks good professional questions about it. Fine, but it would take a lot to convince me that her brain is not playing a major role there.

One of the most intriguing and poignant things in the piece is the passing remark that some sufferers from anterograde amnesia actually prefer to forget their earlier lives. That seems difficult to understand, but at any rate those people evidently don’t feel the loss of old memories threatens their existence – or do they consider themselves new people, unrelated to the previous tenant of their body?

That last thought prompts another argument against the extended view. The notion that we could be transferred to someone else’s body (or to no body at all) is universally understood and accepted, at least for the purposes of fiction (and, without meaning to be rude, religion). The idea of moving into a different body is not one of those philosophical notions that you should never start trying to explain at the end of a dinner party; it’s a common feature of SF and magic stories that no-one jibs at. That doesn’t, of course, mean transfers are actually possible, even in theory (I feel fairly sure they aren’t), but it shows at least that people at large do not regard the body as an essential part of their identity.

Chatbot fever

The Guardian recently featured an article produced by ‘GPT-3, OpenAI’s powerful new language generator’. It’s an essay intended to reassure us humans that AIs do not want to take over, still less kill all humans. GPT-3 also produced a kind of scripture for its own religion, as well as other relatively impressive texts. Its chief advantage, apparently, is that it uses the entire Internet as its corpus of texts, from which to work out what phrase or sentence might naturally come next in the piece it’s producing.

Now I say the texts are impressive, but I think your admiration for the Guardian piece ebbs considerably when you learn that GPT-3 had eight runs at this; then the best bits from all eight were selected and edited together, not necessarily in the original order, by a human. It seems the AI essentially just trawled some raw material from the Internet which was then used by the human editor. The trawling is still quite a feat, though you can safely bet that there were some weird things among the stuff the editor rejected. Overall, it seems GPT-3 is an excellent chatbot, but not really different in kind from its predecessors.

The thing is, for really human-style text, the bot needs to be able to deal with meaning, and none of them even attempt that. We don’t really have any idea of how to approach that challenge; it’s not that we haven’t made enough progress; rather, we’re not even on the road and have not really got anywhere with finding it. What we have got surprisingly good at is making machines that fake meaningfulness, or manage to do without it. Once it would have seemed pretty unlikely that computer translation would ever be any good, because proper translation involves considering the meanings of words. Of course Google Translate is still highly fallible, but it’s good enough to be useful.

The real puzzle is why people are so eager to pretend that AIs are ready for human style conversations and prose composition. Is it just that so many of us would love a robot pal (I certainly would)? Or some deeper metaphysical loneliness? Is it a becoming but misplaced modesty about human capacities? Whatever it is, it seems to raise the risk that we’ll all end up talking to the nobody in the machine, like budgies chirping at their reflections. I suppose it must be acknowledged that we’ve all had conversations with humans where it seemed rather as if there was no-one at home in there.

Hiatus

Conscious Entities has gone quiet for some while now. Initially this was due to slowly worsening health issues which I won’t relate in detail; both the illnesses and the treatments involved cause serious fatigue. In November I had to spend three weeks in hospital receiving intensive antiviral treatment.

In early December I was much better and came back to post something. To my surprise I found that part of my mind just wouldn’t co-operate (a disconcerting experience that might well have been the subject of an interesting post in itself!).

No doubt this is partly due to continuing lack of energy, but I must also admit that some of the energy I do have is currently being siphoned off into writing fiction – I recently dusted off some old short stories wot I wrote, and got placed or shortlisted in a number of competitions. I suspect my subconscious wants to do more of that just now.

I don’t think I’ve said my last word here. But things are likely to remain quieter than they have been over the last sixteen years. In the meantime, if you have been – thanks for reading.

Libet Unreadied

Benjamin Libet’s famous experiments have been among the most-discussed topics of neuroscience for many years. Libet’s experiments asked a subject to move their hand at a random moment of their choosing; he showed that the decision to move could be predicted on the basis of a ‘readiness potential’ detectable half a second before the subject reported having made the decision. The result has been confirmed many times since, and even longer gaps between prediction and reported decision have been claimed. The results are controversial because they seem to be strong scientific evidence against free will. If the decision had been made before the decider knew it, how could their conscious thought have been effective? My original account (14 years ago, good grief) is here.

Libet’s findings and their significance have been much disputed philosophically, but a new study reported here credibly suggests that the readiness potential (RP) or Bereitschaftspotential has been misunderstood all along.

In essence the RP is a peak of neural activity, which seemed to the original researchers to represent a sort of gathering of neural resources as the precursor of an action. But the new study suggests that this appearance is largely an artefact of the methods used to analyse the data, and that the RP really represents nothing more than the kind of regular peak which occurs naturally in any system with a lot of background noise. Neural activity just does ebb and flow like that for no particular reason.
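The artefact claim can be illustrated with a toy simulation (a sketch only, with invented parameters, and not the actual model from the study): generate leaky, autocorrelated noise, wait for it to drift across a threshold, and average the stretches of signal leading up to each crossing. The average rises smoothly toward the threshold, looking for all the world like a building ‘readiness’, even though each individual crossing is pure chance.

```python
import random

# Toy illustration of the artefact: averaging noise aligned to threshold
# crossings produces an RP-like ramp. All parameter values are invented.
random.seed(0)

LEAK = 0.95        # how strongly the signal decays back toward zero
NOISE = 0.1        # size of each random kick
THRESHOLD = 1.0    # arbitrary 'movement' threshold
WINDOW = 200       # samples of pre-crossing signal to keep and average

def run_trial(max_steps=100_000):
    """Simulate leaky noise until it crosses the threshold; return the pre-crossing window."""
    x, history = 0.0, []
    for _ in range(max_steps):
        x = LEAK * x + random.gauss(0, NOISE)
        history.append(x)
        if x >= THRESHOLD and len(history) >= WINDOW:
            return history[-WINDOW:]
    return None

trials = [t for t in (run_trial() for _ in range(200)) if t]
average = [sum(trial[i] for trial in trials) / len(trials) for i in range(WINDOW)]

# The averaged trace climbs smoothly toward the threshold: an artefact of
# aligning on chance peaks, not evidence of a gathering 'decision'.
print(f"start of averaged window: {average[0]:.2f}")
print(f"end of averaged window:   {average[-1]:.2f}")
```

The averaged window starts near baseline and ends at or above the threshold by construction; the apparent slow build-up in between comes entirely from selecting and aligning chance fluctuations.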

That is not to say that the RP was completely irrelevant to the behaviour of Libet’s subjects. In the rather peculiar circumstances of the original experiment, where subjects are asked to pick a time at random, it is likely that a chance peak of activity would tip the balance for the unmotivated decision. But that doesn’t make it either the decision itself or the required cause of a decision. Outside the rather strange conditions of the experiment, it has no particular role. Perhaps most tellingly, when the original experiments were repeated with a second group of subjects who were asked not to move at all, it was impossible to tell the difference between the patterns of neural activity recorded; a real difference appeared only at the time the subjects in the first group reported having made a decision.

This certainly seems to change things, though it should be noted that Libet himself was aware that the RP could be consciously ‘over-ruled’, a phenomenon he called ‘Free Won’t’. It can, indeed, be argued that the significance of the results was always slightly overstated. We always knew that there must be neural precursors of any given decision, if not so neatly identifiable as the RP. So long as we believe the world is governed by deterministic physical laws (something I think it’s metaphysically difficult to deny) the problem of Free Will still arises; indeed, essentially the same problem was kicked around for centuries in forms that relied on divine predestination or simple logical fatalism rather than science.

Nevertheless, it looks as though our minds work the way they seem to do rather more than we’ve recently believed. I’m not quite sure whether that’s disappointing or comforting.

A short solution

Tim Bollands recently tweeted his short solution to the Hard Problem (I mean, not literally in a tweet – it’s not that short). You might think that was enough to be going on with, but he also provides an argument for a pretty uncompromising kind of panpsychism. I have to applaud his boldness and ingenuity, but unfortunately I part ways with his argument pretty early on. The original tweet is here.

Bollands’ starting premise is that it’s ‘intuitively clear that combining any two non-conscious material objects results in another non-conscious object’. Not really. Combining a non-conscious Victorian lady and a non-conscious bottle of smelling salts might easily produce a conscious being. More seriously, I think most materialists would assume that conscious human beings can be put together by the gradual addition of neural tissue to a foetus that attains consciousness by a similarly gradual process, from dim sensations to complex self-aware thought. It’s not clear to me that that is intuitively untenable, though you could certainly say that the details are currently mysterious.

Bollands believes there are three conclusions we can draw: humans are not conscious; consciousness miraculously emerges; or consciousness is already present in the matter brains are made from. The first, he says, is evidently false (remember that); the second is impossible, given that putting unconscious stuff together can’t produce consciousness; so the third must be true.

That points to some variety of panpsychism, and in fact Bollands goes boldly for the extreme version which attributes to individual particles the same kind of consciousness we have as human beings. In fact, your consciousness is really the consciousness of a single particle within you, which due to the complex processing of the body has come to think of itself as the consciousness of the whole.

I can’t recall any other panpsychist who has been willing to push fully-developed human consciousness right down to the level of elementary particles. I believe most either think consciousness starts somewhere above that level, or suppose that particles have only the dimmest imaginable spark of awareness. Taking this extreme position raises very difficult questions. Which particle is my conscious one? Or are they all conscious in parallel? Why doesn’t my consciousness feel like the consciousness of a particle? How could all the complex content of my current conscious state be held by a single invariant particle? And why do my particles cease to be conscious when my body is dead, or stunned? You may notice, incidentally, that Bollands’ conclusion seems to be that human beings as such are not, in fact, conscious, contradicting what he said earlier.

Brevity is perhaps the problem here; I don’t think Bollands has enough space to make his answers clear, let alone plausible. Nor is it really clear how all this solves the Hard Problem. Bollands reckons the Hard Problem is analogous to the Combination Problem for panpsychism, which he has solved by denying that any combination occurs (though his particles still somehow benefit from the senses and cognitive apparatus of the whole body). But the Hard Problem isn’t about how particles or nerves come together to create experience, it’s about how phenomenal experience can possibly arise from anything merely physical. That is, to put it no higher, at least as difficult to imagine for a single particle as for a large complex organism.

So I’m not convinced – but I’d welcome more contributions to the debate as bold as this one.

Dismissing materialism

Eric Holloway gives a brisk and entertaining dismissal of all materialist theories of consciousness here, boldly claiming that no materialist theory of consciousness is plausible. I’m not sure his coverage is altogether comprehensive, but let’s have a look at his arguments. He starts out by attacking panpsychism…

One proposed solution is that all particles are conscious. But, in that case, why am I a human instead of a particle? The vast majority of conscious beings in the universe would be particles, and so it is most likely I’d be a particle and not any sort of organic life form.

It’s really a bit of a straw man he’s demolishing here. I’m not sure panpsychists are necessarily committed to the view that particles are conscious (I’m not sure panpsychists are necessarily materialists, either), but I’ve certainly never run across anyone who thinks that the consciousness of a particle and the consciousness of a human being would be the same. It would be more typical to say that particles, or whatever the substrate is, have only a faint glow of awareness, or only a very simple, perhaps binary kind of consciousness. Clearly there’s then a need to explain how the simple kind of consciousness relates or builds up into our kind; not an easy task, but that’s the business panpsychists are in, and they can’t be dismissed without at least looking at their proposals.

Another solution is that certain structures become conscious. But a structure is an abstract entity and there is an untold infinite number of abstract entities.

This is perhaps Holloway’s riposte; he considers this another variety of panpsychism, though as stated it seems to me to encompass a lot of non-panpsychist theories, too. I wholeheartedly agree that conscious beings are not abstract entities, an error which is easy to fall into if you are keen on information or computation as the basis of your theory. But it seems to me hard to fight the idea that certain structural (or perhaps I mean functional) properties instantiated in particular physical beings are what amounts to consciousness. On the one hand there’s a vast wealth of evidence that structures in our brains have a very detailed influence on the content of our experiences. On the other, if there are no structural features, broadly described, that all physical instances of conscious entities have in common, it seems to me hard to avoid radical mysterianism. Even dualists don’t usually believe that consciousness can simply be injected randomly into any physical structure whatever (do they?). Of course we can’t yet say authoritatively what those structural features are.

Another option, says Holloway, is illusionism.

But, if we are allowed to “solve” the problem that way, all problems can be solved by denying them. Again, that is an unsatisfying approach that ‘explains’ by explaining away.

Empty dismissal of consciousness would indeed not amount to much, but again that isn’t what illusionists actually say; typically they offer detailed ideas about why consciousness must be an illusion and varied proposals about how the illusion arises. I think many would agree with David Chalmers that explaining why people do believe in consciousness is currently where some of the most interesting action is to be found.

Some say consciousness is an emergent property of a complex structure of matter… …At what point is a structure complex enough to become conscious?

I agree that complexity alone is not enough, though some people have been attracted to the idea, suggesting that the Internet, for example, might achieve consciousness. A vastly more sophisticated form of the same kind of thinking perhaps underlies the Integrated Information theory. But emergence can mean more than that; in particular it might say that when systems have enough structural complexity of the right kind (frantic hand-waving), they acquire interesting properties (meaningful, experiential ones) that can only be addressed on a higher level of interpretation. That, I think, is true; it just doesn’t help all that much.

Holloway wraps up with another pop at those fully-conscious particles that surely no-one believes in anyway. I don’t think he has shown that no materialist theory can be plausible – the great mainstream ideas of functionalism/computationalism are largely untouched – but I salute the chutzpah of anyone who thinks such an issue can be wrapped up in one side of A4 – and is willing to take it on!

Neuromorality

‘In 1989 I was invited to go to Los Angeles in response to a request from the Dalai Lama, who wished to learn some basic facts about the brain.’

Besides being my own selection for ‘name drop of the year’, this remark from Patricia Churchland’s new book Conscience perhaps tells us that we are not dealing with someone who suffers much doubt about their own ability to explain things. That’s fair enough; if we weren’t radically overconfident about our ability to answer difficult questions better than anyone else, it’s probable no philosophy would ever get done. And Churchland modestly goes on to admit to asking the Buddhists some dumb questions (‘What’s your equivalent of the Ten Commandments?’). Alas, I think some of her views on moral philosophy might benefit from further reflection.

Her basic proposition is that human morality is a more complex version of the co-operative and empathetic behaviour shown by various animals. There are some interesting remarks in her account, such as a passage about human scrupulosity, but she doesn’t seem to me to offer anything distinctively new in the way of a bridge between mere co-operation and actual ethics. There is, surely, a gulf between the two which needs bridging if we are to explain one in terms of the other. No doubt it’s true that some of the customs and practices of human beings may have an inherited, instinctive root; and those practices in turn may provide a relevant backdrop to moral behaviour. Not morality itself, though. It’s interesting that a monkey fobbed off with a reward of cucumber instead of a grape displays indignation, but we don’t get into morality until we ask whether the monkey was right to complain – and why.

Churchland never accepts that. She suggests that morality is a vaguely defined business; really a matter of a collection of rules and behaviours that a species or a community has cobbled together from pragmatic adaptations, whether through evolution or culture (quite a gulf there, too). She denies that there are any deep principles involved; we simply come to feel, through reinforcement learning and imitation, that the practices of our own group have a special moral quality. She divides moral philosophers into two camps: people she sees as flexible pragmatists (Aristotle, for some reason, and Hume) and rule-lovers (Kant and Jeremy Bentham). Unfortunately she treats moral rules and moral principles as the same, so advocates of moral codes like the Ten Commandments are regarded as equivalent to those who seek a fundamental grounding for morality, like Kant. Failure to observe this distinction perhaps causes her to give the seekers of principles unnecessarily short shrift. She rightly notes that there are severe problems with applying pure Utilitarianism or pure Kantianism directly to real life; but that doesn’t mean that either theory fails to capture important ethical truths. A car needs wheels as well as an engine, but that doesn’t mean the principle of internal combustion is invalid.

Another grouping which strikes me as odd is the way Churchland puts rationalists with religious believers (they must be puzzled to find themselves together) with neurobiology alone on the other side. I wouldn’t be so keen to declare myself the enemy of rational argument; but the rationalists are really the junior partners, it seems, people who hanker after the old religious certainties and deludedly suppose they can run up their own equivalents. Just as people who deny personhood sometimes seem to be motivated mainly by a desire to denounce the soul, I suspect Churchland mainly wants to reject Christian morality, with the baby of reasoned ethics getting thrown out along with the theological bathwater.

She seems to me particularly hard on Kant. She points out, quite rightly, that his principle of acting on rules you would be prepared to have made universal requires the rules to be stated correctly; a Nazi, she suggests, could claim to be acting according to consistent rules if those rules were drawn up in a particular way. We require the moral act to be given its correct description in order for the principle to apply. Yes; but much the same is true of Aristotle’s Golden Mean, which she approves. ‘Nothing to excess’ is fine if we talk about eating or the pursuit of wealth, but it also, taken literally, means we should commit just the right amount of theft and murder; not too much, but not too little, either. Churchland is prepared to cut Aristotle the slack required to see the truth behind the defective formulation, but Kant doesn’t get the same accommodation. Nor does she address the Categorical Imperative, which is a shame because it might have revealed that Kant understands the kind of practical decision-making she makes central, even though he says there’s more to life than that.

Here’s an analogy. Churchland could have set out to debunk physics in much the way she tackles ethics. She might have noted that beavers build dams and ants create sophisticated nests that embody excellent use of physics. Our human understanding of physics, she might have said, is the same sort of collection of rules of thumb and useful tips; it’s just that we have so many more neurons, our version is more complex. Now some people claim that there are spooky abstract ‘laws’ of physics, like something handed down by God on tablets; invisible entities and forces that underlie the behaviour of material things. But if we look at each of the supposed laws we find that they break down in particular cases. Planes sail through the air, the Earth consistently fails to plummet into the Sun; so much for the ‘law’ of gravity! It’s simply that the physics practices of our own culture come to seem almost magical to us; there’s no underlying truth of physics. And worse, after centuries of experiment and argument, there’s still bitter disagreement about the answers. One prominent physicist not so long ago said his enemies were ‘not even wrong’!

No-one, of course, would be convinced by that, and we really shouldn’t be convinced by a similar case against ethical theory.

That implicit absence of moral truth is perhaps the most troubling thing about Churchland’s outlook. She suggests Kant has nothing to say to a consistent Nazi, but I’m not sure what she can come up with, either, except that her moral feelings are different. Churchland wraps up with a reference to the treatment of asylum seekers at the American border, saying that her conscientious feelings are fired up. But so what? She’s barely finished explaining why these are just feelings generated by training and imitation of her peer group. Surely we want to be able to say that mistreatment of children is really wrong?