Jerry Fodor

Jerry Fodor died last week at the age of 82 – here are obituaries from the NYT and Daily Nous.  I think he had three qualities that make a good philosopher. He really wanted the truth (not everyone is that bothered about it); he was up for a fight about it (in argumentative terms); and he had the gift of expressing his ideas clearly. Georges Rey, in the Daily Nous piece, professes surprise over Fodor’s unaccountable habit of choosing simple everyday examples rather than prestigious but obscure academic ones: but even Rey shows his appreciation of a vivid comparison by quoting Dennett’s lively simile of Fodor as a trampoline.

Good writing in philosophy is not just a presentational matter, I think; to express yourself clearly and memorably you have to have ideas that are clear and cogent in the first place; a confused or laborious exposition raises the suspicion that you’re not really that sure what you’re talking about yourself.

Not that well-expressed ideas are always true ones, and in fact I don’t think Fodorism, stimulating as it is, is ever likely to be accepted as correct. The bold hypothesis of a language of thought, sometimes called mentalese, in which all our thinking is done, never really looked attractive to most people. Personally, it strikes me as an unnecessary deferral; something in the brain has to explain language, and saying it’s another language just puts the job off. In fairness, empirical evidence might show that things are like that, though I don’t see it happening at present. Fodor himself linked the idea with a belief in a comprehensive inborn conceptual apparatus; we never learn new concepts, just activate ones that were already there. The idea of inborn understanding has a respectable pedigree, but if Plato couldn’t sell it, Fodor was probably never going to pull it off either.

As I say, these are largely empirical matters, and someone fresh to the debate might wonder why mere discussion was ever thought to be an adequate method; aren’t these issues for science? Or at least, shouldn’t the armchair guys shut up for a bit until the neurologists can give them a few more pointers? You might well think the same about Fodor’s other celebrated book, The Modularity of Mind. Isn’t a day with a scanner going to tell you more about that than a month of psychological argumentation?

But the truth is that research can’t proceed in a vacuum; without hypotheses to invalidate or a framework of concepts to test and apply, it becomes mere data collection. The concepts and perspectives that Fodor supplied are as stimulating as ever and re-reading his books will still challenge and sharpen anyone’s thinking.

Perhaps my favourite was his riposte to Steven Pinker, The Mind Doesn’t Work That Way. So I’ve been down into the cobwebbed cellars of Conscious Entities and retrieved one of the ‘lost posts’, one I wrote in 2005, which describes it. (I used to put red lines in things in those days for reasons that now elude me.)

Here it is…

Not like that.

(30 January 2005)

Jerry Fodor’s 2000 book ‘The Mind Doesn’t Work That Way’ makes a cogent and witty deflationary case. In some ways, it’s the best summary of the current state of affairs I’ve read; which means, alas, that it is almost entirely negative. Fodor’s constant position is that the Computational Theory of Mind (CTM) is the only remotely plausible theory we have – and remotely plausible theories are better than no theories at all. But although he continues to emphasise that this is a reason for investigating the representational system which CTM implies, he now feels the times, and the bouncy optimism of Steven Pinker and Henry Plotkin in particular, call for a little Eeyoreish accentuation of the negative. Sure, CTM is the best theory we have, but that doesn’t mean it’s actually much good. Surely no-one ought to think it’s the complete explanation of all cognitive processes – least of all the mysteries of consciousness! It isn’t just computation that has been over-estimated, either – there are limits to how far you can go with modularity too, though again, it’s a view with which Fodor himself is particularly associated.

The starting point for both Fodor and those he parts company with is the idea that logical deduction probably gets done by the brain in essentially the same way as it is done on paper by a logician or in electronic digits by a computer, namely by the formal manipulation of syntactically structured representations, or to put it slightly less polysyllabically, by moving symbols around according to rules. It’s fairly plausible that this is true at least for some cognitive processes, but there is wide scope for argument about whether this ability is the latest and most superficial abstract achievement of the brain, or something that plays an essential role in the engine room of thought.
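To make “moving symbols around according to rules” concrete, here is a minimal sketch in Python of the classical picture: a forward-chaining engine applying modus ponens to nested tuples. The facts and rules are invented for illustration; the point is that the procedure consults only the shape of the symbols, never their meaning.

```python
# A minimal sketch of the classical picture, with invented facts and
# rules: inference as the formal manipulation of syntactically
# structured representations. The procedure inspects only the shape
# of the symbols, never what they mean.

facts = {("raining",), ("cold",)}
rules = [
    (("raining",), ("wet",)),        # raining -> wet
    (("wet",), ("take-umbrella",)),  # wet -> take-umbrella
]

def forward_chain(facts, rules):
    """Apply modus ponens until no new symbol structures appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# [('cold',), ('raining',), ('take-umbrella',), ('wet',)]
```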


Don’t you think, to digress for a moment, that formal logic is consistently over-rated in these discussions? It enjoys tremendous intellectual prestige: associated for centuries with the near-holy name of Aristotle, its reputation as the ultimate crystallisation of rationality has been renewed in modern times by its close association with computers – yet its powers are actually feeble. Arithmetic is invoked regularly in everyday life, but no-one ever resorted to syllogisms or predicate calculus to help them make practical decisions. I think the truth is that logic is only one example of a much wider reasoning capacity which stems from our ability to recognise a variety of continuities and identities in the world, including causal ones.

Up to a point, Fodor might go along with this. The problem with formal logical operations, he says, is that they are concerned exclusively with local properties: if you’ve got the logical formula, you don’t need to look elsewhere to determine its validity (in fact, you mustn’t). But that’s not the way much of cognition works: frequently the context is indispensable to judgements about beliefs. He quotes the example of judgements about simplicity: the same thought which complicates one theory simplifies another, so you can’t decide whether hypothesis A is a complicating factor without considering facts external to the hypothesis – in fact, the wider global context. We need the faculty of global or abductive reasoning to get us out of the problem, but that’s exactly what formal logic doesn’t supply. We’re back, in another form, with the problem of relevance, or in practical terms, the old demon of the frame problem: how can a computer (or how do human beings) consider just the relevant facts without considering all the irrelevant ones first – if only to determine their relevance?
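The frame problem can be caricatured in a few lines. In this sketch (the knowledge base and the relevance test are made up), even the cheapest possible relevance filter has to touch every fact in order to set the irrelevant ones aside:

```python
# The frame problem in caricature: before the system can reason with
# only the relevant facts, it must touch every fact in the knowledge
# base to decide relevance. The knowledge base and the relevance test
# are made up for illustration.

knowledge_base = [
    ("kitchen", "contains", "kettle"),
    ("kettle", "is", "electric"),
    ("mars", "has", "two moons"),
    # ...imagine millions more facts, almost all of them irrelevant...
]

def relevant_facts(kb, topic):
    # A purely local, syntactic test, yet it still runs over the whole
    # kb: the cost of ignoring a fact is a test applied to that fact.
    return [fact for fact in kb if topic in fact]

# Even to consider only kettle-facts, every Mars-fact was examined too.
print(relevant_facts(knowledge_base, "kettle"))
```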


One strategy for dealing with this problem (other than ignoring it) is to hope that we can leave logic to do what logic does best, and supplement it with appropriate heuristic approaches: instead of deducing the answer we’ll use efficient methods of searching around for it. The snag, says Fodor, is that you need to apply the appropriate heuristic approach, and deciding which it is requires the same grasp of relevance, the same abduction, which we were lacking in the first place.
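The regress Fodor points to can be put in schematic form. In this sketch (the heuristics and their labels are invented), everything is easy except the one function that matters:

```python
# The regress, compressed: each heuristic is easy to run once chosen,
# but choosing one is itself a judgement of relevance over an open
# domain. The heuristics and their labels are invented.

heuristics = {
    "spatial": lambda problem: "search nearby locations first",
    "social":  lambda problem: "ask someone who was there",
}

def choose_heuristic(problem):
    # Which key applies to this problem? Answering that needs the very
    # grasp of relevance the heuristics were introduced to supply.
    raise NotImplementedError("this is the abduction we lack")

# Once the choice is made, the rest is trivial:
print(heuristics["spatial"]("where did I leave my keys?"))
```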

Another promising-looking strategy would be a connectionist, neural network approach. After all, our problem comes from the need to reason globally, holistically if you like, and that is often said to be a characteristic virtue of neural networks. But Fodor’s contempt for connectionism knows few bounds; networks, he says, can’t even deliver the classical logic that we had to begin with. In a network the properties of a node are determined entirely by its position within the network: it follows that nodes cannot retain symbolic identity and be recombined in different patterns, a basic requirement of the symbols in formal logic. Classical logic may not be able to answer the global question, but connectionism, in Fodor’s eyes, doesn’t get as far as being able to ask it.

It looks to me as if one avenue of escape is left open here: it seems to be Fodor’s assumption that only single nodes of a network are available to do symbolic duty, but might it not be the case that particular patterns of connection and activation could play that role? You can’t, by definition, have the same node in two different places: but you could have the same pattern realised in two different parts of a network. However, I think there might be other reasons to doubt whether connectionism is the answer. Perhaps, specifically, networks are just too holistic: we need to be able to bring in contextual factors to solve our problems, but only the right ones. Treating everything as relevant is just as bad as not recognising contextual factors at all.
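A toy version of that escape route, with an encoding invented purely for illustration: let a pattern of activation, rather than a single node, do symbolic duty, so the same symbol type can be tokened at two different offsets in the network’s state.

```python
# A toy version of the suggestion above, with an encoding invented
# purely for illustration: a *pattern* of activation, not a single
# node, does symbolic duty, so the same symbol type can be tokened
# at two different offsets in the network's state.

CAT = (1.0, 0.0, 1.0)  # activation pattern standing for "cat"
DOG = (0.0, 1.0, 1.0)  # activation pattern standing for "dog"

def write(state, pos, pattern):
    """Realise a symbol-pattern at a given offset in the state vector."""
    for i, value in enumerate(pattern):
        state[pos + i] = value

def read(state, pos, pattern):
    """Check whether the pattern is tokened at that offset."""
    return tuple(state[pos:pos + len(pattern)]) == pattern

state = [0.0] * 8
write(state, 0, CAT)  # "cat" in the agent slot
write(state, 4, DOG)  # "dog" in the patient slot
print(read(state, 0, CAT), read(state, 4, DOG))  # True True
```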


Be that as it may, Fodor still has one further strategy to consider, of course – modularity. Instead of trying to develop an all-purpose cognitive machine which can deal with anything the world might throw at it, we might set up restricted modules which only deal with restricted domains. The module only gets fed certain kinds of thing to reason about: contextual issues become manageable because the context is restricted to the small domain, which can be exhaustively searched if necessary. Fodor, as he says, is an advocate of modules for certain cognitive purposes, but not ‘massive modularity’, the idea that all, or nearly all, mental functions can be constructed out of modules. For one thing, what mechanism can you use to decide what a given module should be ‘fed’ with? For some sensory functions, it may be relatively easy: you can just hard-wire various inputs from the eyes to your initial visual processing module; but for higher-level cognition something has to decide whether a given input representation is of the right kind for module M1 or for module M2. Such a function cannot itself operate within a restricted domain (unless it too has an earlier function deciding what to feed to it, in which case an infinite regress looms); it has to deal with the global array of possible inputs: but in that case, as before, classical logic will not avail and once again we need the abductive reasoning which we haven’t got.
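The dispatch problem can be sketched in the same schematic spirit (the modules and the routing test are invented): the router sits in front of the modules, and nothing restricts what may arrive at it.

```python
# The dispatch problem in caricature: the router that decides which
# module an input is "for" must classify inputs from the unrestricted
# global domain, exactly the open-ended judgement the modules were
# meant to avoid. The modules and the routing test are invented.

def syntax_module(x):
    return f"parsed as a sentence: {x!r}"

def face_module(x):
    return f"scanned for faces: {x!r}"

def router(x):
    # This test looks innocent, but nothing restricts its domain:
    # anything whatsoever may arrive here, so the router needs the
    # global, abductive competence that is missing.
    if isinstance(x, str) and x.endswith("."):
        return syntax_module(x)
    return face_module(x)

print(router("The cat sat on the mat."))
print(router("pixel-array-00342"))
```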

In short, ‘By all the signs, the cognitive mind is up to its ghostly ears in abduction. And we do not know how abduction works.’

I’m afraid that seems to be true.

2 thoughts on “Jerry Fodor”

  1. You are polite to omit his later, incoherent arguments against natural selection. I saw him speak around 10 years ago at Portland State — more than a few biologists got up and left mid-talk.

  2. As long as we’re dissing him, I would point out that the whole neural net thing argues against his logical language ideas. For example, Steve Pinker points out a couple of cases where humans do probability very well. But it requires that the situation be set up just so. In other cases we’re lousy at probability. For me (and others), that argues that the specific cases are hard wired. That is just what a neural net would do well. The language idea would seem to say that general cases could more easily be programmed.
