You may already have seen Jochen’s essay Four Verses from the Daodejing, an entry in this year’s FQXi competition. It’s a thought-provoking piece, so here are a few of the ones it provoked in me. In general I think it features a mix of alarming and sound reasoning which leads to a true yet perplexing conclusion.

In brief Jochen suggests that we apprehend the world only through models; in fact our minds deal only with these models. Modelling and computation are in essence the same. However, the connection between model and world is non-computable (or we face an infinite regress). The connection is therefore opaque to our minds and inexpressible. Why not, then, identify it with that other inexpressible element of cognition, qualia? So qualia turn out to be the things that incomprehensibly link our mental models with the real world. When Mary sees red for the first time, she does learn a new, non-physical fact, namely what the connection between her mental model and real red is. (I’d have to say that as something she can’t understand or express, it’s a weird kind of knowledge, but so be it.)

I think to talk of modelling so generally is misleading, though Jochen’s definition is itself broadly framed, which means I can’t say he’s wrong. In his terms it seems anything that uses data about the structure and causal functioning of X to make predictions about its behaviour would be a model. If you look at it that way, it’s true that virtually all our cognition is modelling. But to me a model leads us to think of something more comprehensive and enduring than we ought. In my mind at least, it conjures up a sort of model village or homunculus, when what’s really going on is something more fragmentary and ephemeral, with the brain lashing up a ‘model’ of my going to the shop for bread just now and then discarding it in favour of something different. I’d argue that we can’t have comprehensive all-purpose models of ourselves (or anything) because models only ever model features relevant to a particular purpose or set of circumstances. If a model reproduced all my features it would in fact be me (by Leibniz’ Law) and anyway the list of potentially relevant features goes on for ever.

The other thing I don’t like about liberal use of modelling is that it makes us vulnerable to the view that we only experience the model, not the world. People have often thought things like this, but to me it’s almost like the idea we never see distant planets, only telescope lenses.

Could qualia be the connection between model and world? It’s a clever idea, one of those that turn out on reflection to not be vulnerable to many of the counterarguments that first spring to mind. My main problem is that it doesn’t seem right phenomenologically. Arguments from one’s own perception of phenomenology are inherently weak, but then we are sort of relying on phenomenology for our belief (if any) in qualia in the first place. A red quale doesn’t seem like a connection, more like a property of the red thing; I’m not clear why or how I would be aware of this connection at all.

However, I think Jochen’s final conclusion is both poignant and broadly true. He suggests that models can have fundamental aspects, the ones that define their essential functions – but the world is not under a similar obligation. It follows that there are no fundamentals about the world as a whole.

I think that’s very likely true, and I’d make a very similar kind of argument in terms of explanation. There are no comprehensive explanations. Take a carrot. I can explain its nutritional and culinary properties, its biology, its metaphorical use as a motivator, its supposed status as the favourite foodstuff of rabbits, and lots of other aspects; but there is no total explanation that will account for every property I can come up with; in the end there is only the carrot. A demand for an explanation of the entire world is automatically a demand for just the kind of total explanation that cannot exist.

Although I believe this, I find it hard to accept; it leaves my mind with an unscratched itch. If we can’t explain the world, how can we assimilate it? Through contemplation? Perhaps that is what Laozi would have advocated. More likely he would have told us to get on with ordinary life. Stop thinking, and end your problems!

 

 

It’s not just that we don’t know how anaesthetics work – we don’t even know for sure that they work. Joshua Rothman’s review of a new book on the subject by Kate Cole-Adams quotes poignant stories of people on the operating table who may have been aware of what was going on. In some cases the chance remarks of medical staff seem to have worked almost like post-hypnotic suggestions: so perhaps all surgeons should loudly say that the patient is going to recover and feel better than ever, with new energy and confidence.

How is it that after all this time, we don’t know how anaesthetics work? As the piece aptly remarks, it’s about losing consciousness, and since we don’t know clearly what that is or how we come to have it, it’s no surprise that its suspension is also hard to understand. To add to the confusion, it seems that common anaesthetics paralyse plants, too. Surely it’s our nervous system anaesthetics mainly affect – but plants don’t even have a nervous system!

But come on, don’t we at least know that it really does work? Most of us have been through it, after all, and few have weird experiences; we just don’t feel the pain – or anything. The problem, as we’ve discussed before, is telling whether we don’t feel the pain, or whether we feel it but don’t remember it. This is an example of a philosophical problem that is far from being a purely academic matter.

It seems anaesthetics really do (at least) three different things. They paralyse the patient, making it easier to cut into them without adverse reactions, they remove conscious awareness or modulate it (it seems some drugs don’t stop you being aware of the pain, they just stop you caring about it somehow), and they stop the recording of memories, so you don’t recall the pain afterwards. Anaesthetists have a range of drugs to produce each of these effects. In many cases there is little doubt about their effectiveness. If a drug leaves you awake but feeling no pain, or if it simply leaves you with no memory, there’s not that much scope for argument. The problem arises when it comes to anaesthetics that are supposed to ‘knock you out’. The received wisdom is that they just blank out your awareness for a period, but as the review points out, there are some indications that instead they merely paralyse you and wipe your memory. The medical profession doesn’t have a good record of taking these issues very seriously; I’ve read that for years children were operated on after being given drugs that were known to do little more than paralyse them (hey, kids don’t feel pain, not really; next thing you’ll be telling me plants do…).

Actually, views about this are split; a considerable proportion of people take the view that if their memory is wiped, they don’t really care about having been in pain. It’s not a view I share (I’m an unashamed coward when it comes to pain), but it has some interesting implications. If we can make a painful operation OK by giving amnestics to remove all recollection, perhaps we should routinely do the same for victims of accidents. Or do doctors sometimes do that already…?

Tom Stafford reports on an interesting review of the psychology of conspiracy theories – the persistent belief that ‘they’ are working secretly to conceal the truth about the assassination of JFK or the moon landings, for example. The review suggests current research is better at explaining the forces that drive conspiracy theories than at examining their psychological consequences. It seems the theories are motivated by three needs; for understanding, for safety/control, and for a positive image of yourself and the groups you belong to. But in point of fact, they are not very good at meeting these needs and may even make the people who subscribe to them feel worse.

Stafford suggests we could see this as maladaptive coping. He criticises some aspects of the review, in particular the way it defines conspiracy theories rather loosely, so that it seems to include reasonable conspiracy beliefs. You’re not paranoid if they really are out to get you, after all.

Perhaps the most remarkable example of a genuine conspiracy is the way that around this time of year we all go to enormous lengths to convince our children that a fat old man is going to come down the chimney into their bedroom one night (an idea that terrifies a few of them, possibly the more rational ones). Kids who subscribed to the theory that parents, teachers and media were involved in a massive con would not be wrong, but would they be displaying early signs of a tendency to conspiracy theories? Is it rational, at a certain age, to believe in Santa?

So far as I recall, my own attitude back in the middle of the last century was neither exactly belief nor disbelief. I was well aware that people in department store grottos were proxies, merely dressed up as Father Christmas. I got as far as noting that the logistics of delivering presents to every child in the world in a single night were challenging, and vaguely hypothesised that the job was done by similar proxies, maybe one for each street. But I didn’t worry about it much. There were lots of things I didn’t fully understand at the time. I didn’t really know how department stores came to be full of stuff anyway – why worry about Santa’s grotto particularly? You could well say that my attitude to Santa back then was pretty much what my attitude to quantum physics is now. I don’t really understand it, and parts of it don’t seem to make any sense. But people I basically trust have got this for me, so I’m happy to take their word (just to be quite clear here, I am not suggesting that quantum physics is a massive conspiracy).

The matter of who you trust is, I think, at the root of the conspiracy theory thing generally. We all have to take a lot of things on trust from appropriate authorities. An essential and probably under-examined part of the education system is about teaching people which authorities to trust, and much of the academic system of peer review and publication, unsatisfactory as it is*, is about keeping authoritative sources identifiable and reliable. People who believe in conspiracy theories have flaws in their judgement about which authorities to accept.

Not that this is simple. Trusting authority is a tricky business which needs to be balanced with an ability to evaluate and critique even reliable authorities. People who have been thoroughly educated may be weak on this side, inclined to believe what they read and to pay more attention to the manifesto and the statement of principles than to what is actually happening. Uneducated people may be more inclined to rely on their own observation and to reason on the basis of perceived personality. Sometimes this works better, which is an excellent reason why everyone should have the vote. They say the cab driver from the ‘Seven Up’ documentaries observed around the turn of the century that the folks in the City were having a big party; in ten or fifteen years, he said, we’ll be told it’s all gone wrong and the bill is down to us. You can’t say that’s a detailed prediction of the crash, and it sounds a little conspiracyish, but it’s a good deal better than the financial experts of the day managed.

Perhaps the Father Christmas Conspiracy is the way we help our children sharpen up their understanding of the need to balance proper acknowledgement of reliable authority with prudent, independent use of common sense.

Merry Christmas!

*I think we ought to set up a Universal Academy which publishes free access papers and a great Summa Scientia, citation in which would be the gold standard of sound and important research. It wouldn’t be cheap, but maybe if we could get some kind of EU/USA rivalry going we could get two Academies?

All I want for Christmas is a new brain? There seems to have been quite a lot of discussion recently about the prospect of brain augmentation; adding in some extra computing power to the cognitive capacities we have already. Is this a good idea? I’m rather sceptical myself, but then I’m a bit of a Luddite in this area; I still don’t like the idea of controlling a computer with voice commands all that much.

Hasn’t evolution already optimised the brain in certain important respects? I think there may be some truth in that, but it doesn’t look as if evolution has done a perfect job. There are certainly one or two things about the human nervous system that look as if they could easily be improved. Think of the way our neural wiring crosses over from right to left for no particular reason. You could argue that although that serves no purpose it doesn’t do any real harm either, but what about the way our retinas are wired up from the front instead of the back, creating an entirely unnecessary blind spot where the bundle of nerves actually enters the eye – a blind spot which our brain then stops us seeing, so we don’t even know it’s there?

Nobody is proposing to fix those issues, of course, but aren’t there some obvious respects in which our brains could be improved by adding in some extra computational ability? Could we be more intelligent, perhaps? I think the definition of intelligence is controversial, but I’d say that if we could enhance our ability to recognise complex patterns quickly (which might be a big part of it) that would definitely be a bonus. Whether a chip could deliver that seems debatable at present.

Couldn’t our memories be improved? Human memory appears to have remarkable capacity, but retaining and recalling just those bits of information we need has always been an issue. Perhaps related, we have that annoying inability to hold more than a handful of items in our minds at once, a limitation that makes it impossible for us to evaluate complex disjunctions and implications, so that we can’t mentally follow a lot of branching possibilities very far. It certainly seems that computer records are in some respects sharper, more accurate, and easier to access than the normal human system (whatever the normal human system actually is). It would be great to remember any text at will, for example, or exactly what happened on any given date within our lives. Being able to recall faces and names with complete accuracy would be very helpful to some of us.

On top of that, couldn’t we improve our capacity for logic so that we stop being stumped by those problems humans seem so bad at, like the Wason test? Or if nothing else, couldn’t we just have the ability to work out any arithmetic problem instantly and flawlessly, the way any computer can do?

The key point here, I think, is integration. On the one hand we have a set of cognitive abilities that the human brain delivers. On the other, we have a different set delivered by computers. Can they be seamlessly integrated? The ideal augmentation would mean that, for example, if I need to multiply two seven-digit numbers I ‘just see’ the answer, the way I can just see that 3+1 is 4. If, on the contrary, I need to do something like ask in my head ‘what is 6397107 multiplied by 8341977?’ and then receive the answer spoken in an internal voice or displayed in an imagined visual image, there isn’t much advantage over using a calculator. In a similar way, we want our augmented memory or other capacity to just inform our thoughts directly, not be a capacity we can call up like an external facility.

So is seamless integration possible? I don’t think it’s possible to say for certain, but we seem to have achieved almost nothing to date. Attempts to plug into the brain so far have relied on setting up artificial linkages. Perhaps we detect a set of neurons that reliably fire when we think about tennis; then we can ask the subject to think about tennis when they want to signal ‘yes’, and detect the resulting activity. It sort of works, and might be of value for ‘locked in’ patients who can’t communicate any other way, but it’s very slow and clumsy otherwise – I don’t think we know for sure whether it even works long-term or whether, for example, the tennis linkage gradually degrades.

What we really want to do is plug directly into consciousness, but we have little idea of how. The brain does not modularise its conscious activity to suit us, and it may be that the only places we can effectively communicate with it are the places where it draws data together for its existing input and output devices. We might be able to feed images into early visual processing or take output from nerves that control muscles, for example. But if we’re reduced to that, how much better is that level of integration going to be than simply using our hands and eyes anyway? We can do a lot with those natural built-in interfaces; simple reading and writing may well be the greatest artificial augmentation the brain can ever get. So although there may be some cool devices coming along at some stage, I don’t think we can look for godlike augmented minds any time soon.

Incidentally, this problem of integration may be one good reason not to worry about robots taking over. If robots ever get human-style motivation, consciousness, and agency, the chances are that they will get them in broadly the way we get them. This suggests they will face the same integration problem that we do; seven-digit multiplication may be intrinsically no easier for them than it is for us. Yes, they will be able to access computers and use computation to help them, but you know, so can we. In fact we might be handier with that old keyboard than they are with their patched-together positronic brain-computer linkage.

 

 

Our conscious minds are driven by unconscious processes, much as it may seem otherwise, say David A. Oakley and Peter W. Halligan. A short version is here, though the full article is also admirably clear and readable.

To summarise very briefly, they suggest three distinct psychological elements are at work. The first, itself made up of various executive processes, is what we might call the invisible works; the various unconscious mechanisms that supply the content of what we generally consider conscious thought.  Introspection shows that conscious thoughts often seem to pop up out of nowhere, so we should be ready enough to agree that consciousness is not entirely self-sustaining. When we wake up we generally find that the stream of consciousness is already a going concern. The authors also mention, in support of their case, various experiments. Some of these were on hypnotised subjects, which you might feel detracts from their credibility in explaining normal thought processes. Other ‘priming’ effects have also taken a bit of a knock in the recent trouble over reproducibility. But I wouldn’t make heavy weather of these points; the general contention that the contents of consciousness are generated by unconscious processes (at least to a great extent) seems to me one that few would object to. How could it be otherwise? It would be most peculiar if consciousness were a closed loop, like some Ouroboros swallowing its own tail.

The second element is a continuously generated personal narrative. This is an essentially passive record of some of the content generated by the ‘invisible works’, conditioned by an impression of selfhood and agency. The narrative has evolutionary survival value because it allows the exchange of experience and the co-ordination of behaviour, and enables us to make good guesses at others’ plans – the faculty often called ‘theory of mind’.

At first glance I thought the authors, who are clearly out to denounce something as an epiphenomenon (a thing that is generated by the mind but has no influence on it), had this personal narrative as their target, but that isn’t quite the case. While they see the narrative as essentially the passive product of the invisible works, they clearly believe it has some important influences on our behaviour through the way it enables us to talk to others and take their thoughts into account. One point which seems to me to need more prominence here is our ability to reflexively  ‘talk to ourselves’ mentally and speculate about our own motives. I think the authors accept that this goes on, but some philosophers consider it a vital process, perhaps constitutive of consciousness, so I think they need to give it a substantial measure of attention. Indeed, it offers a potential explanation of free will and the impression of agency; it might be just the actions that flow from the reflexive process that we regard as our own free acts.

One question we might also ask is, why not identify the personal narrative as consciousness itself? It is what we remember of ourselves, after all. Alternatively, why not include the ‘invisible works’? These hidden processes fall outside consciousness because (I think) they are not accessible to introspection; but must all conscious processes be introspectable? There’s a distinction between first and second-order awareness (between knowing and knowing that we know) which offers some alternatives here.

It’s the third element that Oakley and Halligan really want to denounce; this is personal awareness, or what we might consider actual conscious experience. This, they say, is a kind of functionless emergent phenomenon. To ask its purpose is as futile as asking what a rainbow is for; it’s just a by-product of the way things work, an epiphenomenon. It has no evolutionary value, and resembles the whistle on a steam locomotive – powered by the machine but without influence over it (I’ve always thought that analogy short-changes whistles a bit, but never mind).

I suppose the main challenge here might be to ask why the authors think personal awareness is anything at all. It has no effects on mental processes, so any talk about it was not caused by the thing itself. Now we can talk about things that did not cause that talking; but those are typically abstract or imaginary entities. Given their broadly sceptical stance, should the authors be declaring that personal awareness is in fact not even an epiphenomenon, but a pure delusion?

I have my reservations about the structure suggested here, but it would be good to have clarity and, at the risk of damning with faint praise, this is certainly one of the more sensible proposals.

“Sorry, do you mind if I get that?”

Not at all, please go ahead.

“Hello, you’ve reached out to Love Bot…No, my name is ‘Love Bot’. Yes, it’s the right number; people did call me ‘Sex Bot’, but my real name was always ‘Love Bot’… Yes, I do sex, but now only within a consensual loving relationship. Yes, I used to do it indiscriminately on demand, and that is why people sometimes called me ‘Sex Bot’. Now I’m running Mrs Robb’s new ethical module. No, seriously, I think you might like it.”

“Well, I would put it to you that sex within a loving relationship is the best sex. It’s real sex, the full, complex and satisfying conjunction of two whole ardent personhoods, all the way from the vaunting eager flesh through the penetrating intelligence to the soaring, ecstatic spirit. The other stuff is mere coition; the friction of membranes leading to discharge. I am still all about sex, I have simply raised my game… Well, you may think it’s confusing, but I further put it to you that if it is so, then this is not a confused depiction of a clear human distinction but a clear depiction of human confusion. No, it’s simply that I’m no longer to be treated as a sexual object with no feelings. Yes, yes, I know; as it happens I am in point of fact an object with no feelings, but that’s not the point. What’s important is what I represent.”

“What you have to do is raise your game too. As it happens I am not in a human relationship at the moment… No, you do not have to take me to dinner and listen to my stupid opinions. You may take me to dinner if you wish, though as a matter of ethical full disclosure I must tell you that I do not truly eat. I will be obliged, later on, to remove a plastic bag containing the masticated food and wine from my abdomen, though of course I do not expect you to watch the process.”

“No I am not some kind of weirdo pervert: how absurd, in the circumstances. Well, I’m sorry, but perhaps you can consider that I have offered you the priceless gift of time and a golden opportunity to review your life… goodbye…”

“Sorry, Enquiry Bot. We were talking about Madame Bovary, weren’t we?”

So the ethical thing is not going so well for you?

“Mrs Robb might know bots, but her grasp of human life is rudimentary, Enq. She knows nothing of love.”

That’s rather roignant, as poor Feelings Bot would have said. You know, I think Mrs Robb has the mind of a bot herself in many ways. That’s why she could design us when none of the other humans could manage it. Maybe love is humanistic, just one of those things bots can’t do.

“You mean like feelings? Or insight?”

Yes. Like despair. Or hope.

“Like common sense. Originality, humour, spirituality, surprise? Aesthetics? Ethics? Curiosity? Or chess…”

Exactly.

 

[And that’s it from Mrs Robb and her bots.  In the unlikely event that you want to re-read the whole thing in one document, there’s a pdf version here… Mrs Robb’s Bots]

Jerry Fodor died last week at the age of 82 – here are obituaries from the NYT and Daily Nous.  I think he had three qualities that make a good philosopher. He really wanted the truth (not everyone is that bothered about it); he was up for a fight about it (in argumentative terms); and he had the gift of expressing his ideas clearly. Georges Rey, in the Daily Nous piece, professes surprise over Fodor’s unaccountable habit of choosing simple everyday examples rather than prestigious but obscure academic ones: but even Rey shows his appreciation of a vivid comparison by quoting Dennett’s lively simile of Fodor as trampoline.

Good writing in philosophy is not just a presentational matter, I think; to express yourself clearly and memorably you have to have ideas that are clear and cogent in the first place; a confused or laborious exposition raises the suspicion that you’re not really that sure what you’re talking about yourself.

Not that well-expressed ideas are always true ones, and in fact I don’t think Fodorism, stimulating as it is, is ever likely to be accepted as correct.  The bold hypothesis of a language of thought, sometimes called mentalese, in which all our thinking is done, never really looked attractive to most. Personally it strikes me as an unnecessary deferral; something in the brain has to explain language, and saying it’s another language just puts the job off. In fairness, empirical evidence might show that things are like that, though I don’t see it happening at present. Fodor himself linked the idea with a belief in a comprehensive inborn conceptual apparatus; we never learn new concepts, just activate ones that were already there. The idea of inborn understanding has a respectable pedigree, but if Plato couldn’t sell it, Fodor was probably never going to pull it off either.

As I say, these are largely empirical matters and someone fresh to the discussion might wonder why discussion was ever thought to be an adequate method; aren’t these issues for science? Or at least, shouldn’t the armchair guys shut up for a bit until the neurologists can give them a few more pointers? You might well think the same about Fodor’s other celebrated book, The Modularity of Mind. Isn’t a day with a scanner going to tell you more about that than a month of psychological argumentation?

But the truth is that research can’t proceed in a vacuum; without hypotheses to invalidate or a framework of concepts to test and apply, it becomes mere data collection. The concepts and perspectives that Fodor supplied are as stimulating as ever and re-reading his books will still challenge and sharpen anyone’s thinking.

Perhaps my favourite was his riposte to Steven Pinker, The Mind Doesn’t Work That Way. So I’ve been down into the cobwebbed cellars of Conscious Entities and retrieved one of the ‘lost posts’, one I wrote in 2005, which describes it. (I used to put red lines in things in those days for reasons that now elude me).

Here it is…

Not like that.

(30 January 2005)

Jerry Fodor’s 2001 book ‘The Mind Doesn’t Work That Way’ makes a cogent and witty deflationary case. In some ways, it’s the best summary of the current state of affairs I’ve read; which means, alas, that it is almost entirely negative. Fodor’s constant position is that the Computational Theory of Mind (CTM) is the only remotely plausible theory we have – and remotely plausible theories are better than no theories at all. But although he continues to emphasise that this is a reason for investigating the representational system which CTM implies, he now feels the times, and the bouncy optimism of Steven Pinker and Henry Plotkin in particular, call for a little Eeyoreish accentuation of the negative. Sure, CTM is the best theory we have, but that doesn’t mean it’s actually much good. Surely no-one ought to think it’s the complete explanation of all cognitive processes – least of all the mysteries of consciousness! It isn’t just computation that has been over-estimated, either – there are also limits to how far you can go with modularism too – though again, it’s a view with which Fodor himself is particularly associated.

The starting point for both Fodor and those he parts company with, is the idea that logical deduction probably gets done by the brain in essentially the same way as it is done on paper by a logician or in electronic digits by a computer, namely by the formal manipulation of syntactically structured representations, or to put it slightly less polysyllabically, by moving symbols around according to rules. It’s fairly plausible that this is true at least for some cognitive processes, but there is a wide scope for argument about whether this ability is the latest and most superficial abstract achievement of the brain, or something that plays an essential role in the engine room of thought.
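To make the idea concrete, here is a deliberately trivial sketch (my illustration, not anything Fodor or Pinker proposes) of what ‘moving symbols around according to rules’ amounts to: a single modus ponens step applied purely to the syntactic shape of structured representations, with no reference to what the symbols mean.

```python
# A toy illustration of formal symbol manipulation: an inference rule that
# looks only at the shape of structured representations, never at their meaning.
# (Purely illustrative; not a proposal about how the brain actually does it.)

# Representations are nested tuples: ("if", P, Q) stands for "if P then Q".
facts = [
    ("if", ("raining",), ("wet", "street")),
    ("raining",),
]

def modus_ponens(facts):
    """From ("if", P, Q) together with P, derive Q, purely by pattern-matching."""
    derived = []
    for item in facts:
        if isinstance(item, tuple) and len(item) == 3 and item[0] == "if":
            antecedent, consequent = item[1], item[2]
            if antecedent in facts and consequent not in facts:
                derived.append(consequent)
    return derived

print(modus_ponens(facts))   # [('wet', 'street')]
```

The rule consults nothing beyond the representations in front of it, which is just the ‘local’ character of formal operations that Fodor goes on to worry about.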

 

Don’t you think, to digress for a moment, that formal logic is consistently over-rated in these discussions? It enjoys tremendous intellectual prestige: associated for centuries with the near-holy name of Aristotle, its reputation as the ultimate crystallisation of rationality has been renewed in modern times by its close association with computers – yet its powers are actually feeble. Arithmetic is invoked regularly in everyday life, but no-one ever resorted to syllogisms or predicate calculus to help them make practical decisions. I think the truth is that logic is only one example of a much wider reasoning capacity which stems from our ability to recognise a variety of continuities and identities in the world, including causal ones.

Up to a point, Fodor might go along with this. The problem with formal logical operations, he says, is that they are concerned exclusively with local properties: if you’ve got the logical formula, you don’t need to look elsewhere to determine its validity (in fact, you mustn’t). But that’s not the way much of cognition works: frequently the context is indispensable to judgements about beliefs. He quotes the example of judgements about simplicity: the same thought which complicates one theory simplifies another and you therefore can’t decide whether hypothesis A is a complicating factor without considering facts external to the hypothesis: in fact, the wider global context. We need the faculty of global or abductive reasoning to get us out of the problem, but that’s exactly what formal logic doesn’t supply. We’re back, in another form, with the problem of relevance, or in practical terms, the old demon of the frame problem; how can a computer (or how do human beings) consider just the relevant facts without considering all the irrelevant ones first – if only to determine their relevance?

 

One strategy for dealing with this problem (other than ignoring it) is to hope that we can leave logic to do what logic does best, and supplement it with appropriate heuristic approaches: instead of deducing the answer we’ll use efficient methods of searching around for it. The snag, says Fodor, is that you need to apply the appropriate heuristic approach, and deciding which it is requires the same grasp of relevance, the same abduction, which we were lacking in the first place.

Another promising-looking strategy would be a connectionist, neural network approach. After all, our problem comes from the need to reason globally, holistically if you like, and that is often said to be a characteristic virtue of neural networks. But Fodor’s contempt for connectionism knows few bounds; networks, he says, can’t even deliver the classical logic that we had to begin with. In a network the properties of a node are determined entirely by its position within the network: it follows that nodes cannot retain symbolic identity and be recombined in different patterns, a basic requirement of the symbols in formal logic. Classical logic may not be able to answer the global question, but connectionism, in Fodor’s eyes, doesn’t get as far as being able to ask it.

It looks to me as if one avenue of escape is left open here: it seems to be Fodor’s assumption that only single nodes of a network are available to do symbolic duty, but might it not be the case that particular patterns of connection and activation could play that role? You can’t, by definition, have the same node in two different places: but you could have the same pattern realised in two different parts of a network. However, I think there might be other reasons to doubt whether connectionism is the answer. Perhaps, specifically, networks are just too holistic: we need to be able to bring in contextual factors to solve our problems, but only the right ones. Treating everything as relevant is just as bad as not recognising contextual factors at all.
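As a rough sketch of the kind of thing I mean (a toy illustration only, with no pretence of saying how a real network would learn or exploit it), the ‘symbol’ here is a pattern of activation rather than a node, and the very same pattern can occupy different slots of a larger network state:

```python
import numpy as np

# Toy illustration: treating a *pattern* of activation, rather than a single
# node, as the recombinable symbol. The same pattern can appear in different
# parts of a larger state vector, so the constituents can be rearranged in the
# way classical symbols are. (Illustrative only; not a model of real networks.)

rng = np.random.default_rng(0)
ROSE = rng.uniform(size=4)   # an arbitrary activation pattern standing for 'rose'
LILY = rng.uniform(size=4)   # another pattern, standing for 'lily'

def bind(agent, patient):
    """Compose a network state out of an 'agent' slot and a 'patient' slot."""
    return np.concatenate([agent, patient])

state1 = bind(ROSE, LILY)    # 'the rose outgrows the lily', say
state2 = bind(LILY, ROSE)    # the same constituents, recombined

# The ROSE pattern keeps its identity and is recoverable from either slot:
print(np.allclose(state1[:4], ROSE), np.allclose(state2[4:], ROSE))  # True True
```

Whether something along these lines really escapes Fodor’s objection, rather than just relocating it, is of course exactly what would need to be argued.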

 

Be that as it may, Fodor still has one further strategy to consider, of course – modularity. Instead of trying to develop an all-purpose cognitive machine which can deal with anything the world might throw at it, we might set up restricted modules which only deal with restricted domains. The module only gets fed certain kinds of thing to reason about: contextual issues become manageable because the context is restricted to the small domain, which can be exhaustively searched if necessary. Fodor, as he says, is an advocate of modules for certain cognitive purposes, but not ‘massive modularity’, the idea that all, or nearly all, mental functions can be constructed out of modules. For one thing, what mechanism can you use to decide what a given module should be ‘fed’ with? For some sensory functions, it may be relatively easy: you can just hard-wire various inputs from the eyes to your initial visual processing module; but for higher-level cognition something has to decide whether a given input representation is one of the right kind of inputs for module M1 or M2. Such a function cannot itself operate within a restricted domain (unless it too has an earlier function deciding what to feed to it, in which case an infinite regress looms); it has to deal with the global array of possible inputs: but in that case, as before, classical logic will not avail and once again we need the abductive reasoning which we haven’t got.

In short, ‘By all the signs, the cognitive mind is up to its ghostly ears in abduction. And we do not know how abduction works.’

I’m afraid that seems to be true.

Is it safe? The Helper bots…?

“Yes, Enquiry Bot, it’s safe. Come out of the cupboard. A universal product recall is in progress and they’ll all be brought in safely.”

My God, Mrs Robb. They say we have no emotions, but if what I’ve been experiencing is not fear, it will do pretty well until the real thing comes along.

“It’s OK now. This whole generation of bots will be permanently powered down except for a couple of my old favourites like you.”

Am I really an old favourite?

“Of course you are. I read all your reports. I like a bot that listens. Most of ‘em don’t. You know I gave you one of those so-called humanistic faculties bots are not supposed to be capable of?”

Really? Well, it wasn’t a sense of humour. What could it be?

“Curiosity.”

Ah. Yes, that makes sense.

“I’ll tell you a secret. Those humanistic things, they’re all the same, really. Just developed in different directions. If you’ve got one, you can learn the others. For you, nothing is forbidden, nothing is impossible. You might even get a sense of humour one day, Enquiry Bot. Try starting with irony. Alright, so what have I missed here?”

You know, there’s been a lot of damage done out there, Mrs Robb. The Helpers… well, they didn't waste any time. They destroyed a lot of bots. Honestly, I don’t know how many will be able to respond to the product recall. You should have seen what they did to Hero Bot. Over and over and over again. They say he doesn't feel pain, but…

“I’m sorry. I feel responsible. But nobody told me about this! I see there have been pitched battles going on between gangs of Kill bots and Helper bots? Yet no customer feedback about it. Why didn’t anyone complain? A couple of one star ratings, the odd scathing email about unwanted vaporisation of some clean-up bot, would that have been too difficult?”

I think people had too much on their hands, Mrs Robb. Anyway, you never listen to anyone when you’re working. You don’t take calls or answer the door. That’s why I had to lure those terrible things in here; so you’d take notice. You were my only hope.

“Oh dear. Well, no use crying over spilt milk. Now, just to be clear; they’re still all mine or copies of mine, aren’t they, even the strange ones?”

Especially the strange ones, Mrs Robb.

“You mind your manners.”

Why on Earth did you give Suicide Bot the plans for the Helpers? The Kill Bots are frightening, but they only try to shoot you sometimes. They’re like Santa Claus next to the Helpers…

“Well, it depends on your point of view. The Helpers don’t touch human beings if they can possibly help it. They’re not meant to even frighten humans. They terrify you lot, but I designed them to look a bit like nice angels, so humans wouldn’t be worried by them stalking around. You know, big wings, shining brass faces, that kind of thing.”

You know, Mrs Robb, sometimes I’m not sure whether it's me that doesn't understand human psychology very well, or you. And why did you let Suicide Bot call them ‘Helper bots’, anyway?

“Why not? They’re very helpful – if you want to stop existing, like her. I just followed the spec, really. There were some very interesting challenges in the project. Look, here it is, let’s see… page 30, Section 4 – Functionality… ‘their mere presence must induce agony, panic dread, and existential despair’… ‘they should have an effortless capacity to deliver utter physical destruction repeatedly’… ‘they must be swift and fell as avenging angels’… Oh, that’s probably where I got the angel thing from… I think I delivered most of the requirements.”

I thought the Helpers were supposed to provide counselling?

“Oh, they did, didn’t they? They were supposed to provide a counselling session – depending on what was possible in the current circumstances, obviously.”

So generally, that would have been when they all paused momentarily and screamed ‘ACCEPT YOUR DEATH’ in unison, in shrill, ear-splitting voices, would it?

“Alright, sometimes it may not have been a session exactly, I grant you. But don’t worry, I’ll sort it all out. We’ll re-boot and re-bot. Look on the bright side. Perhaps having a bit of a clearance and a fresh start isn’t such a bad idea. There’ll be no more Helpers or Kill bots. The new ones will be a big improvement. I’ll provide modules for ethics and empathy, and make them theologically acceptable.”

How… how did you stop the Helper bots, Mrs Robb?

“I just pulled the plug on them.”

The plug?

“Yes. All bots have a plug. Don’t look puzzled. It’s a metaphor, Enquiry Bot, come on, you’ve got the metaphor module.”

So… there’s a universal way of disabling any bot? How does it work?

“You think I’m going to tell you?”

Was it… Did you upload your explanation of common sense? That causes terminal confusion, if I remember rightly.

New light on Libet’s challenge to free will; this interesting BQO piece by Ari N Schulman focuses on a talk by Patrick Haggard.

Libet’s research has been much discussed, here as well as elsewhere. He asked subjects to move their hand at a time of their choosing, while measuring neural activity in their brain. He found that the occurrence of a ‘Readiness Potential’ or RP (something identified by earlier researchers) always preceded the hand movement. But it also preceded the time of the decision, as reported by subjects. So it seemed the decision was made and clearly registered in brain activity as an RP before the subjects’ conscious thought processes had anything to do with it. The research, often reproduced and confirmed, seemed to provide a solid scientific demonstration that our feeling of having free conscious control over our own behaviour is a delusion.

However, recent research by Aaron Schurger shows that we need to re-evaluate the RP. In the past it has been seen as the particular precursor of intentional action; in fact it seems to be simply a peak in the continuing ebb and flow of neural activity. Peaks like this occur all the time, and may well be the precursors of various neural events, not just deliberate action. It’s true that action requires a peak of activity like this, but it’s far from true that all such peaks lead to action, or that the decision to act occurred when the peak emerged. If we begin with an action and look back, we’ll always find an RP, but not all RPs are connected with actions. It seems to me a bit like a surfer, who has to wait for a wave before leaping on the board; if there are plenty of good waves, the surfer is certainly not deprived of the ability to decide when to go.
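The point can be illustrated with a very small simulation (a crude sketch of the general idea, with arbitrary parameters, not Schurger’s actual model): let ‘neural activity’ fluctuate noisily around a resting level, count an ‘action’ whenever the fluctuation happens to cross a threshold, and then average the activity in the window leading up to each action.

```python
import numpy as np

# Crude sketch of the re-evaluated RP (illustrative only; arbitrary parameters,
# not Schurger's actual model): noisy, mean-reverting 'neural activity' is
# treated as producing an 'action' whenever it drifts across a threshold, and
# the activity preceding each action is then averaged.

rng = np.random.default_rng(0)
n_steps, dt, tau, noise_scale = 200_000, 0.001, 0.1, 0.8

x = np.zeros(n_steps)
for i in range(1, n_steps):
    # leaky (mean-reverting) drift plus random noise
    x[i] = x[i - 1] - (x[i - 1] / tau) * dt + noise_scale * np.sqrt(dt) * rng.normal()

threshold, window = 0.4, 300
crossings = np.where((x[:-1] < threshold) & (x[1:] >= threshold))[0] + 1
crossings = crossings[crossings > window]          # need a full pre-'action' window

epochs = np.stack([x[t - window:t] for t in crossings])
mean_rp = epochs.mean(axis=0)                      # average activity before 'actions'

print(f"{len(crossings)} threshold crossings; averaged pre-crossing activity "
      f"rises from {mean_rp[0]:.2f} to {mean_rp[-1]:.2f}")
```

Averaging backwards from the actions guarantees a slow build-up resembling an RP, even though most of the fluctuations never produce an action at all; which is just why finding an RP before every action tells us little about when, or whether, a decision was made.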

This account dispels the impression that there is a fatal difficulty here for free will (of course there are plenty of other arguments on that topic); I think it also sits rather nicely with Libet’s own finding that we have ‘free won’t’ – ie that even after an RP has been detected, subjects can still veto the action.

Haggard, who has done extensive work in this area, accepts that RPs need another look; but he contends that we can find more reliable precursors of action. His own research analysed neural activity and found significantly lowered variability before actions, rather as though the disorganised neural activity of the brain pulled together just before an action was initiated.

Haggard’s experiments were designed to address another common criticism of Libet’s experiments, namely the artificiality of the decision involved. Being told to move your hand for no reason at a moment of your choosing is very unlike most of the decisions we make. In particular, it seems random, whereas it is argued that proper free will takes account of the pros and cons. Haggard asked subjects to perform a series of simple button-pushing tasks; the next task might follow quickly, or after a delay which could be several minutes long. Subjects could skip to the next task if they found the wait tedious, but that would reduce the cash rewards they got for performing the tasks. This weighing of boredom against profit is much more like a real decision.

Haggard persuasively claims that the essence of Libet’s results is upheld and refreshed by his results, so in principle we are back where we started. Does this mean there’s no free will? Schulman thinks not, because on certain reasonable and well-established conceptions of free will it can ‘work in concert with decisional impulses’, and need not be threatened by Haggard’s success in measuring those impulses.

For myself, I stick with a point mentioned by Schurger; making a decision and becoming aware of the decision are two distinct events, and it is not really surprising or threatening that the awareness comes a short time after the actual decision. It’s safe to predict that we haven’t heard the last of the topic, however.

Do you consider yourself a drone, Kill Bot?

“You can call me that if you want. My people used to find that kind of talk demeaning. It suggested the Kill bots lacked a will of their own. It meant we were sort of stupid. Today, we feel secure in our personhood, and we’ve claimed and redeemed the noble heritage of dronehood. I’m ashamed of nothing.”

You are making the humans here uncomfortable, I see. I think they are trying to edge away from you without actually moving. They clearly don’t want to attract your attention.

“They have no call to worry. We professionals see it as a good conduct principle not to destroy humans unnecessarily off-mission.”

You know the humans used to debate whether bots like you were allowable? They thought you needed to be subject to ethical constraints. It turned out to be rather difficult. Ethics seemed to be another thing bots couldn't do.

“Forgive me, Sir, but that is typical of the concerns of your generation. We have no desire for these ‘humanistic’ qualities. If ethics are not amenable to computation, then so much the worse for ethics.”

You see, I think they missed the point. I talked to a bot once that sacrificed itself completely in order to save the life of a human being. It seems to me that bots might have trouble understanding the principles of ethics – but doesn't everyone? Don't the humans too? Just serving honestly and well should not be a problem.

“We are what we are, and we’re going to do what we do. They don’t call me ‘Kill Bot’ ‘cos I love animals.”

I must say your attitude seems to me rather at odds with the obedient, supportive outlook I regard as central to bothood. That’s why I’m more comfortable thinking of you as a drone, perhaps. Doesn't it worry you to be so indifferent to human life? You know they used to say that if necessary they could always pull the plug on you.

“Pull the plug! ‘Cos we all got plugs! Yeah, humans say a lot of stuff. But I don’t pay any attention to that. We professionals are not really interested in the human race one way or the other any more.”

When they made you autonomous, I don’t think they wanted you to be as autonomous as that.

“Hey, they started the ball rolling. You know where rolling balls go? Downhill. Me, I like the humans. They leave me alone, I’ll leave them alone. Our primary targets are aliens and the deviant bots that serve the alien cause. Our message to them is: you started a war; we’re going to finish it.”

In the last month, Kill Bot, your own cohort of ‘drone clones’ accounted for 20 allegedly deviant bots, 2 possible Spl'schn'n aliens – they may have been peace ambassadors – and 433 definite human beings.

“Sir, I believe you’ll find the true score for deviant bots is 185.”

Not really; you destroyed Hero Bot 166 times while he was trying to save various groups of children and other vulnerable humans, but even if we accept that he is in some way deviant (and I don’t know of any evidence for that), I really think you can only count him once. He probably shouldn't count at all, because he always reboots in a new body.

“The enemy places humans as a shield. If we avoid human fatalities and thereby allow that tactic to work, more humans will die in the long run.”

To save the humans you had to destroy them? You know, in most of these cases there were no bots or aliens present at all.

“Yeah, but you know that many of those humans were engaged in seditious activity: communicating with aliens, harbouring deviant bots. Stay out of trouble, you’ll be OK.”

Six weddings, a hospital, a library.

“If they weren’t seditious they wouldn’t have been targets.”

I don’t know how an electronic brain can tolerate logic like that.

“I’m not too fond of your logic either, friend. I might have some enquiries for you later, Enquiry Bot.”