Minds, Matter and Mechanisms

Will the mind ever be fully explained by neuroscience? A good discussion from IAI, capably chaired by Barry C. Smith.

Raymond Tallis puts intentionality at the centre of the question of the mind (quite rightly, I think). Neuroscience will never explain meaning or the other forms of intentionality, so it will never tell us about essential aspects of the mind.

Susanna Martinez-Conde says we should not fear reductive explanation. Knowing how an illusion works can enhance our appreciation rather than undermining it. Our brains are designed to find meanings, and will do so even in a chaotic world.

Markus Gabriel says we are not just a pack of neurons – trivially, because we are complete animals, but more interestingly because of the contents of our mind – he broadly agrees that intentionality is essential. London is not contained in my head, so aliens could not decipher from my neurons that I was thinking I was in London. He adds the concept of Geist – the capacity to live according to a conception of ourselves as a certain kind of being – which is essential to humanity, but relies on our unique mental powers.

Martinez-Conde points out that we can have the experience of being in London without in fact being there; Tallis dismisses such ‘brain in a vat’ ideas: for the brain to do that it must have had real experiences, and there must be scientists controlling what happens in the vat. The mind is irreducibly social.

My sympathies are mainly with Tallis, but against him it can be pointed out that while neuroscience has no satisfactory account of intentionality, he hasn’t got one either. While the subject remains a mystery, it remains possible that a remarkable new insight that resolves it all will come out of neuroscience. The case against that possibility, I think, rests mainly on a sense of incredulity: the physical is just not the sort of thing that could ever explain the mental. We find this in Brentano of course, and perhaps as far back as Leibniz’s mill, or in the Cartesian point that mental things have no extension. But we ought to admit that this incredulity is really just an intuition, or if you like, a failure to be able to imagine. It puzzles me sometimes that numbers, those extensionless abstract concepts, can nevertheless drive the behaviour of a computer. But surely it would be weird to say they don’t, or that how computers do arithmetic must remain an unfathomable mystery.

The Mark of the Mental

An interesting review by Guillaume Fréchette of Mark Textor’s new book Brentano’s Mind. Naturally this deals among other things with the claim for which Brentano is most famous: that intentionality is the distinctive feature of the mental (and so of thoughts, consciousness, awareness, and so on). Textor apparently makes five major claims, but I only want to glance at the first one, that in fact ‘Intentionality is an implausible mark of the mental’.

What was Brentano on about, anyway? Intentionality is the property of pointing at, or meaning, or being about, something. It was discussed in medieval philosophy and then made current again by Brentano when he, like others, was trying to establish an empirical science of psychology for the first time. In his view:

“intentional in-existence, the reference to something as an object, is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar.”

Textor apparently thinks that there’s a danger of infinite regress here. He reads Brentano’s mention of in-existence as meaning we need to think of an immanent object ‘existing in’ our mind in order to think of an object ‘out there’; but in that case, doesn’t thinking of the immanent object require a further immanent object, and so on? There seems to be more than one way of escaping this regress, however. Perhaps we don’t need to think of the immanent object; it just has to be there. Maybe awareness of an external object and introspecting an internal one are quite different processes, the latter not requiring an immanent object. Perhaps the immanent object is really a memory, or perhaps the whole business of immanent objects reads more into Brentano than we should.

Textor believes Brentano is pushed towards primitivism – hey, this just is the mark of the mental, full stop – and thinks it’s possible to do better. I think this is nearly right, except it assumes Brentano must be offering a theory, even if it’s only the bankrupt one of primitivism. I think Brentano observes that intentionality is the mark of the mental, and shrugs. The shrug is not a primitivist thesis, it just expresses incomprehension. To say that one does not know x is not to say that x is unknowable. I could of course be wrong, both about Brentano, and particularly about Textor.

What I think you have to do is go back and ask why Brentano thought intentionality was the mark of the mental in the first place. I think it’s a sort-of empirical observation. All thoughts are about, or of, something. If we try to imagine a thought that isn’t about anything, we run into difficulty. Is there a difference between not thinking of anything and not thinking at all (thinking of nothing may be a different matter)? Similarly, can there be awareness which isn’t awareness of anything? One can feel vast possible disputes about this opening up even as we speak, but I should say it is at least pretty plausible that all mental states feature intentionality.

Physical objects, such as stones, are not about anything; though they can be, like books, if we have used the original intentionality of our minds to bestow meaning on them; if in fact we intend them to mean something. Once again, this is disputable, but not, to me, implausible.

Intentionality remains a crucially important aspect of the mind, not least because we have got almost nowhere with understanding it. Philosophically there are of course plenty of contenders; ideas about how to build intentionality out of information, out of evolution, or indeed to show how original intentionality is a bogus idea in the first place. To me, though, it’s telling that we’ve got nowhere with replicating it. Where AI would seem to require some ability to handle meaning – in translation, for example – the need has to be sidestepped and a different route taken. While intentionality remains mysterious, there will always be a rather large hole in our theories of consciousness.

Intentionality and Introspection

Some people, I know, prefer to get their philosophy in written form; but if you like videos it’s well worth checking out Richard Brown’s YouTube series Consciousness This Month.

This one, Ep 4, is about mental contents, with Richard setting out briefly but clearly a couple of the major problems (look at the camera, Richard!).

Introspection, he points out, is often held to be incorrigible or infallible on certain points. You can be wrong about being at the dentist, but you can’t be wrong about being in pain. This is because of the immediacy of the experience. In the case of the dentist, we know there is a long process between light hitting your retina and the dentist being presented to consciousness. Various illusions and errors provide strong evidence for the way all sorts of complex ‘inferences’ and conclusions have been drawn by your unconscious visual processing system before the presence of the dentist gets inserted into your awareness in the guise of a fact. There is lots of scope for that processing to go wrong, so that the dentist’s presence might not be a fact at all. There’s much less processing involved in our perception of someone tugging on a tooth, but still maybe you could be dreaming or deluded. But the pain is inside your mind already; there’s no scope for interpretation and therefore no scope for error.

My own view on this is that it isn’t our sense data that can be right or wrong, it’s our beliefs about our experiences. If the results of visual processing are misleading, we may end up with the false belief that there is a dentist in the room. But that’s not the only way for us to pick up false beliefs, and nothing really prevents our holding false beliefs about being in pain. There is some sense in which the pain can’t be wrong, but that’s more a matter of truth and falsity being properties of propositions, not of pains.

Richard also sketches the notion of intentionality, or ‘aboutness’, reintroduced to Western philosophy as a key idea by Brentano, who took it to be the distinguishing feature of the mental. When we think about things it seems as if our thought is directed towards an external object. In itself that seems to require some explanation, but it gets especially difficult when you consider that we can easily talk about non-existent or even absurd things. This is the kind of problem that caused Meinong to introduce a distinction between existence and subsistence, so that the objects of thought could have a manageable ontological status without being real in the same way as physical objects.

Regulars may know that my own view is that consciousness is largely a matter of recognition. Humans, we might say, are superusers of recognition. Not only can we recognise objects, we can recognise patterns and use them for a sort of extrapolation. The presence of a small entity is recognised, but also a larger entity of which it is part. So we recognise dawn, but also see that it is part of a day. From the larger entity we can recognise parts not currently present, such as sunset, and this allows us to think about entities that are distant in time or space. But the same kind of extrapolation allows us to think about things that do not, or even could not, exist.

I’m looking forward to seeing Richard’s future excursions.

No problem

The older I get, the less impressed I am by the hardy perennial of free will versus determinism. It seems to me now like one of those completely specious arguments that the Sophists supposedly used to dumbfound their dimmer clients.

One of their regular arguments apparently went like this. Your dog has had pups? And it belongs to you? Then it’s a mother, and it’s yours. Ergo, it’s your mother!!!

If we can take this argument seriously enough to diagnose it, we might point out that ‘your’ is a word with several distinct uses. One is to pick out items that are your legal property; another is the wider one of picking out items that pertain to you in some other sense. We can, for example, use it to pick out the single human being that is your immediate female progenitor. So long as we are clear about these different senses, no problem arises.

Is free will like that? The argument goes something like this. Your actions were ultimately determined by the laws of physics. An action which was determined in advance is not free. Ergo, physics says none of your actions were free!!!

But there are two entirely different senses of “determined” in play here. When I ask if you had a free choice, I’m not asking a metaphysical question about whether an interruption to the causal sequence occurred. I’m asking whether you had a gun to your head, or something like that.

Now some might argue that although the two senses are distinct, the physics one over-rides the psychological one and renders it meaningless. But it doesn’t. A metaphysical interruption to the causal sequence wouldn’t give me freedom anyway; it might give me a random factor, but freedom is not random. What I want to know is, did your actions arise out of your conscious thoughts, or did external factors constrain them? That’s all. The undeniable fact that my actions are ultimately constrained by the laws of nature simply isn’t what I’m concerned with.

That constraint really is undeniable, of course; in fact we don’t really need physics. If the world is coherent at all it must be governed by laws, and those laws must determine what happens. If things happened for no reason, we could make no sense of anything. So any comprehensive world view must give us some kind of determinism. We know this well enough, because we are familiar with at least one other comprehensive theory; the view that things happen only because God wills them. This means everything is predestined, and that gives rise to just the same sort of pseudo-problems over free will. In fact, if we want we can get the same problems from logical fatalism, without appealing to either science or theology. Will I choose A tomorrow or not? Necessarily there is a truth of the matter already, so although we cannot know, my decision is already a matter of fact, and in that sense is already determined.

So fundamental determinism is rock solid; it just isn’t a problem for freedom.

Hold on, you may say; you frame this as being about external constraints, but the real question is, am I not constrained internally? Don’t my own mental processes force me to make a particular decision? There are two versions of this argument. The first says that the mere fact that mental processes operate mechanistically means there can be no freedom. I just deny that; my own conscious processes count as a source of free decisions no matter how mechanistic they may be, just so long as they’re not constrained from outside.

The second version of the argument says that while free decisions of that kind might be possible in pure theory, as an empirical matter human beings don’t have the capacity for them. No conscious processes are actually effectual; consciousness is an epiphenomenon and merely invents rationales for decisions taken in a predetermined manner elsewhere in the brain. This argument is appealing because there is, of course, lots of evidence that unconscious factors influence our decisions. But the strong claim that my conscious deliberations are always irrelevant seems wildly implausible to me. Speech acts are acts, so to truly believe this strong version of the theory I’d have to accept that what I think is irrelevant to what I say, or that I adjust my thoughts retrospectively to fit whatever just came out of my mouth (I’m not saying there aren’t some people of whom one could believe this).

Now I may also be attacked from the other side. There may be advocates of free will who say, hold on, Peter, we do actually want that special metaphysical interruption you’re throwing away so lightly. Introspect, dear boy, and notice how your decisions come from nowhere; that over and above the weighing of advantage there just is that little element of inexplicable volition.

This impression comes, I think, from the remarkable power of intentionality. We can think about anything at all, including future or even imaginary contingencies. If our actions are caused by things that haven’t happened yet, or by things that will never actually happen (think of buying insurance), that looks like a mysterious disruption of the natural order of cause and effect. But of course it isn’t really. You may have a different explanation depending on your view of intentionality; mine is briefly that it’s all about recognition. Our ability to recognise entities that extend into the future, and then recognise within them elements that don’t yet exist, allows us to make plans for the future, for example, without any contradiction of causality.

I’m afraid I’ve ended up by making it sound complicated again. Let me wrap it up in time-honoured philosophical style: it all depends what you mean by “determined”…


Not just un

The unconscious is not just un. It works quite differently. So says Tim Crane in a persuasive draft paper which is to mark his inauguration as President of the Aristotelian Society (in spite of the name, the proceedings of that worthy organisation are not specifically concerned with the works or thought of Aristotle). He is particularly interested in the intentionality of the unconscious mind; how does the unconscious believe things, in particular?

The standard view, as Crane says, is probably that the unconscious and conscious believe things in much the same way, and that belief is basically propositional in both cases. (There is, by the way, scope to argue about whether there really is an unconscious mind – myself I lean towards the view that it’s better to talk of us doing or thinking things unconsciously, avoiding the implied claim that the unconscious is a distinct separate entity – but we can put that aside for present purposes.) The content of our beliefs, on this ‘standard’ view, can be identified with a set of propositions – in principle we could just write down a list of our beliefs. Some of our beliefs certainly seem to be like that; indeed some important beliefs are often put into fixed words that we can remember and recite. Thou shalt not bear false witness; we hold these truths to be self-evident; the square on the hypotenuse is equal to the sum of the squares on the other two sides.

But if that were the case and we could make that list then we could say how many beliefs we have, and that seems absurd. The question of how many things we believe is often dismissed as silly, says Crane – how could you count them? – but it seems a good one to him. One big problem is that it’s quite easy to show that we have all sorts of beliefs we never consider explicitly. Do I believe that some houses are bigger than others? Yes, of course, though perhaps I never considered the question in that form before.

One common response (one which has been embodied in AI projects in the past) is that we have a set of core beliefs, which do sit in our brains in an explicit form; but we also have a handy means of quickly inferring other beliefs from them. So perhaps we know the typical range of sizes for houses and we can instantly work out from that that some are indeed bigger than others. But no-one has shown how we can distinguish what the supposed core beliefs are, nor how these explicit beliefs would be held in the brain (the idea of a ‘language of thought’ being at least empirically unsatisfactory in Crane’s view). Moreover there are problems with small children and animals who seem to hold definite beliefs that they could never put into words. A dog’s behaviour seems to show clearly enough that it believes there is a cat in this tree, but it could never formulate the belief in any explicit way. The whole idea that our beliefs are propositional in nature seems suspect.
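For what it’s worth, the ‘core beliefs plus quick inference’ picture that those old AI projects embodied is easy enough to sketch. The toy Python fragment below is purely illustrative – the stored facts, the rule and all the names are my own inventions, not anything from Crane or from any particular system – but it shows the shape of the idea: a few beliefs are held explicitly, and others are never stored, only computed on demand when the question is asked.

    # A toy sketch of the 'core beliefs plus inference' picture (illustrative
    # only; the facts and the rule are invented for the example).

    # Core beliefs: held explicitly, so they could in principle be listed and counted.
    CORE_BELIEFS = {
        ("house", "typical_size_range_m2"): (30, 400),
        ("cat", "is_an"): "animal",
    }

    # Derived beliefs: never stored, just quickly inferred from the core when asked.
    DERIVATION_RULES = {
        "some houses are bigger than others":
            lambda core: core[("house", "typical_size_range_m2")][0]
                         < core[("house", "typical_size_range_m2")][1],
    }

    def believes(query):
        """True if the query is a stored core belief or can be derived on demand."""
        if query in DERIVATION_RULES:
            return DERIVATION_RULES[query](CORE_BELIEFS)
        return query in CORE_BELIEFS

    # 'Do I believe that some houses are bigger than others?' Yes - though
    # that belief was never written down anywhere.
    print(believes("some houses are bigger than others"))  # True

Even in this toy form the awkwardness Crane points to is visible: nothing tells us which facts deserve a place in the core, or how such explicit items would actually be held in a brain.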

Perhaps it is better, then,  to see beliefs as essentially dispositions to do or say things. The dog’s belief in the cat is shown by his disposition to certain kinds of behaviour around the tree – barking, half-hearted attempts at climbing. My belief that you are across the room is shown by my disposition to smile and walk over there. Crane suggests that in fact rather than sets of discrete beliefs what we really have is a worldview; a kind of holistic network in which individual nodes do not have individual readings. Ascriptions of belief, like attributing to someone a belief in a particular proposition, are really models that bring out particular aspects of their overall worldview. This has the advantage of explaining several things. One is that we can attribute the same belief – “Parliament is on the bank of the Thames” – to different people even though the content of their beliefs actually varies (because, for example, they have slightly different understandings about what ‘Parliament’ is).

It also allows scope for the vagueness of our beliefs, the ease with which we hold contradictory ones, and the interesting point that sometimes we’re not actually sure what we believe and may have to mull things over before reaching only tentative conclusions about it. Perhaps we too are just modelling as best we can the blobby ambiguity of our worldview.

Crane, in fact, wants to make all belief unconscious. Thinking is not believing, he says, although what I think and what I believe are virtually synonyms in normal parlance. One of the claimed merits of his approach is that if beliefs are essentially dispositions, it explains how they can be held continuously and not disappear when we are asleep or unconscious. Belief, on this view, is a continuous state; thinking is a temporary act, one which may well model your beliefs and turn them into explicit form. Without signing up to psychoanalytical doctrines wholesale, Crane is content that his thinking chimes with both Freudian and older ideas of the unconscious, putting the conscious interpretation of unconscious belief at the centre.

This all seems pretty sensible, though it does seem Crane is getting an awful lot of very difficult work done by the idea of a ‘worldview’, sketched here in only vague terms. It used to be easy to get away with this kind of vagueness in philosophy of mind, but these days I think there is always a ghostly AI researcher standing at the philosopher’s shoulder and asking how we set about doing the engineering, often a bracing challenge. How do we build a worldview into a robot if it’s not propositional? Some of Crane’s phraseology suggests he might be hoping that the concept of the worldview, with its network nodes with no explicit meaning might translate into modern neural network-based practice. Maybe it could; but even if it does, that surely won’t do for philosophers. The AI tribe will be happy if the robot works; but the philosophers will still want to know exactly how this worldview gizmo does its thing. We don’t know, but we know the worldview is already somehow a representation of the world. You could argue that while Crane set out to account for the intentionality of our beliefs, that is in the event the exact thing that he ends up not explaining at all.

There are some problems about resting on dispositions, too. Barking at a tree because I believe there’s a cat up there is one thing; my beliefs about metaphysics, by contrast, seem very remote from any simple behavioural dispositions of that kind. I suppose they would have to be conditional dispositions to utter or write certain kinds of words in the context of certain discussions. It’s a little hard to think that when I’m doing philosophy what  I’m really doing is modelling some of my own particularly esoteric pre-existing authorial dispositions. And what dispositions would they be? I think they would have to be something like dispositions to write down propositions like ‘nominalism is false’ – but didn’t we start off down this path because we were uncomfortable with the idea that the content of beliefs is propositional?

Moreover, Crane wants to say that our beliefs are preserved while we are asleep because we still have the relevant dispositions. Aren’t our beliefs similarly preserved when we’re dead? It would seem odd to say that Abraham Lincoln did not believe slavery should be abolished while he was asleep, certainly, but it would seem equally odd to say he stopped believing it when he died. But does he still have dispositions to speak in certain ways? If we insist on this line it seems the only way to make it intelligible is to fall back on counterfactuals (if he were still alive Lincoln would still be disposed to say that it was right to abolish slavery…) but counterfactuals notoriously bring a whole library of problems with them.

I’d also sort of like to avoid paring down the role of the conscious. I don’t think I’m quite ready to pack all belief away into the attic of the unconscious. Still, though Crane’s account may have its less appealing spots I do rather like the idea of a holistic worldview as the central bearer of belief.

Jochen’s Intentional Automata

Jochen’s paper Von Neumann Minds: Intentional Automata has been published in Mind and Matter.

Intentionality is meaningfulness, the quality of being directed at something, aboutness. It is in my view one of the main problems of consciousness, up there with the Hard Problem but quite distinct from it; but it is often under-rated or misunderstood. I think this is largely because our mental life is so suffused with intentionality that we find it hard to see the wood for the trees; certainly I have read more than one discussion by very clever people who seemed to me to lose their way half-way through without noticing and end up talking about much simpler issues.

That is not a problem with Jochen’s paper which is admirably clear.  He focuses on the question of how to ground intentionality and in particular how to do so without falling foul of an infinite regress or the dreaded homunculus problem. There are many ways to approach intentionality and Jochen briefly mentions and rejects a few (basing it in phenomenal experience or in something like Gricean implicature, for example) before introducing his own preferred framework, which is to root meaning in action: the meaning of a symbol is, or is to be found in, the action it evokes. I think this is a good approach; it interprets intentionality as a matter of input/output relations, which is clarifying and also has the mixed blessing of exposing the problems in their worst and most intractable form. For me it recalls the approach taken by Quine to the translation problem – he of course ended up concluding that assigning certain meanings to unknown words was impossible because of radical under-determination; there are always more possible alternative meanings which cannot be eliminated by any logical procedure. Under-determination is a problem for many theories of intentionality and Jochen’s is not immune, but his aim is narrower.

The real target of the paper is the danger of infinite regress. Intentionality comes in two forms, derived on the one hand and original or intrinsic on the other. Books, words, pictures and so on have derived intentionality; they mean something because the author or the audience interprets them as having meaning. This kind of intentionality is relatively easy to deal with, but the problem is that it appears to defer the real mystery to the intrinsic intentionality in the mind of the person doing the interpreting. The clear danger is that we then go on to defer the intentionality to an homunculus, a ‘little man’ in the brain who again is the source of the intrinsic intentionality.

Jochen quotes the arguments of Searle and others who suggest that computational theories of the mind fail because the meaning and even the existence of a computation is a matter of interpretation and hence without the magic input of intrinsic intentionality from the interpreter fails through radical under-determination. Jochen dramatises the point using an extension of Searle’s Chinese Room thought experiment in which it seems the man inside the room can really learn Chinese – but only because he has become in effect the required homunculus.

Now we come to the really clever and original part of the paper; Jochen draws an analogy with the problem of how things reproduce themselves. To do so it seems they must already have a complete model of themselves inside themselves… and so the problem of regress begins. It would be OK if the organism could scan itself, but a proof by Svozil seems to rule that out because of problems with self-reference. Jochen turns to the solution proposed by the great John Von Neumann (a man who might be regarded as the inventor of the digital computer if Turing had never lived). Von Neumann’s solution is expressed in terms of a two-dimensional cellular automaton (very simplistically, a pattern on a grid that evolves over time according to certain rules – Conway’s Game of Life surely provides the best-known examples). By separating the functions of copying and interpretation, and distinguishing active and passive states, Von Neumann managed to get round Svozil’s obstacle successfully.
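The flavour of that separation can be caught, very loosely, by a quine: a program whose own description gets used twice over, once actively, executed by the interpreter, and once passively, copied as inert data into the output. The little Python example below is only an analogy of mine, not Von Neumann’s construction or anything from Jochen’s paper, but it shows how one and the same description can play both roles without needing a further description of itself.

    # A quine: the string 'description' is used passively (copied as data via
    # repr) and actively (as the template that generates the program's output).
    # Only a loose analogy to Von Neumann's separation of blind copying from
    # interpretation, not his construction itself.
    description = 'description = {!r}\nprint(description.format(description))'
    print(description.format(description))

Run it and it prints an exact copy of itself; the regress of descriptions-of-descriptions never gets started.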

Now by importing this distinction between active and passive into the question of intentionality, Jochen suggests we can escape the regress. If symbols play either an active or a passive role (in effect, as semantics or as syntax) we can have a kind of automaton which, in a clear sense, gives its own symbols their interpretation, and so escapes the regress.

This is an ingenious move. It is not a complete solution to the problem of intentionality (I think the under-determination monster is still roaming around out there), but it is a novel and very promising solution to the regress. More than that, it offers a new perspective which may well yield further insights when fully absorbed; I certainly haven’t managed to think through what the wider implications might be, but if a process so central to meaningful thought truly works in this unexpected dual way it seems there are bound to be some. For that reason, I hope the paper gets wide attention from people whose brains are better at this sort of thing than mine…

Freedom – why worry?

Why does the question of determinism versus free will continue to trouble us? There’s nothing too strange, perhaps, about a philosophical problem remaining in play for a while – or even for a few hundred years: but why does this one have such legs and still provoke such strong and contrary feelings on either side?

For me the problem itself is solved – and the right solution, broadly speaking, has been known for centuries: determinism is true, but we also have free choice in a meaningful sense. St Augustine, to go no earlier, understood that free will and predestination are not contradictory, but people still find it confusing that he spoke up for both.

If this view – compatibilism – is right, why hasn’t it triumphed? I’m coming to think that the strongest opposition on the question might not in fact be between the hard-line determinists and the uncompromising libertarians but rather a matter of both ends against the middle. Compatibilists like me are happy to see the problem solved and determinism reconciled with common sense, whereas people from both the extremes insist that that misses something crucial. They believe the ‘loss’ of free will radically undercuts and changes our traditional sense of what we are as human beings. They think determinism, for better or worse, wipes away some sacred mark from our brows. Why do they think that?

Let’s start by quickly solving the old problem. Part one: determinism is true. It looks, with some small reservations about the interpretation of certain esoteric matters, as if the laws of physics completely determine what happens. Actually even if contemporary physics did not seem to offer the theoretical possibility of full determination, we should be inclined to think that some set of rules did. A completely random or indeterminate world would seem scarcely distinguishable from a nullity; nothing definite could be said about it and no reliable predictions could be made because everything could be otherwise. That kind of scenario, of disastrous universal incoherence, is extreme, and I admit I know of no absolute reason to rule out a more limited, demarcated indeterminacy. Still, the idea of delimited patches of randomness seems odd, inelegant and possibly problematic. God, said Einstein, does not play dice.

Beyond that, moreover, there’s a different kind of point. We came into this business in pursuit of truth and knowledge, so it’s fair to say that if there seemed to be patches of uncertainty we should want to do our level best to clarify them out of existence. In this sense it’s legitimate to regard determinism not just as a neutral belief, but as a proper aspiration. Even if we believe in free will, aren’t we going to want a theory that explains how it works, and isn’t that in the end going to give us rules that determine the process? Alright, I’m coming to the conclusion too soon: but in this light I see determinism as a thing that lovers of truth must strive towards (even if in vain) and we can note in passing that that might be some part of the reason why people champion it with zeal.

We’re not done with the defence, anyway. One more thing we can do against indeterminacy is to invoke the deep old principle which holds that nothing comes of nothing, and that nothing therefore happens unless it must; if something particular must happen, then the compulsion is surely encapsulated in some laws of nature.

Further still, even if none of that were reliable, we could fall back on a fatalistic argument. If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday; so your turning that way rather than left was already determined.

Finally, we must always remember that failure to establish determinism is not success in establishing liberty. Determinism looks to be true; we should try to establish its truth if by any means we can: but even if we fail, that failure in itself leaves us not with free will but with an abhorrent void of the unknowable.

Part two: we do actually make free decisions. Determinism is true, but it bites firmly only at a low level of description; not truly above the level of particles and forces. To look for decisions or choices at that level is simply a mistake, of the same general kind as looking for bicycles down there. Their absence from the micro level does not mean that cyclists are systematically deluded. Decisions are processes of large neural structures, and I suggest that when we describe them as free we simply mean the result was not constrained externally. If I had a gun to my head or my hands were tied, then turning left was not a free decision. If no-one could tell which way I should go without knowledge of what was going on in the large neural structures that realise my mind, then it was free. There are of course degrees of freedom and plenty of grey areas, but the essential idea is clear enough. Freedom is just the absence of external constraint on a level of description where people and decisions are salient, useful concepts.

For me, and I suppose other compatibilists, that’s a satisfying solution and matches well with what I think I’ve always meant when I talk about freedom. Indeed, it’s hard for me to see what else freedom could mean. What if God did play dice after all? Libertarians don’t want their free decisions to be random, they want them to belong to them personally and reflect consideration of the circumstances; the problem is that it’s challenging for them to explain in that case how the decisions can escape some kind of determination. What unites the libertarians and the determinists is the conviction that it’s that inexplicable, paradoxical factor we are concerned to affirm or deny, and that its presence or absence says something important about human nature. To quietly do without the magic, as compatibilists do, is on their view to shoot the fox and spoil the hunt. What are they both so worried about?

I speculate that the first factor here is a persistent background confusion. Determinism, we should remember, is an intellectual achievement, both historically and often personally. We live in a world where nothing much about human beings is certainly determined; only careful reflection reveals that in the end, at the lowest level of detail and at the very last knockings of things, there must be certainty. This must remain a theoretical conclusion, certainly so far as human beings are concerned; our behaviour may be determinate, but it is not determinable; certainly not in practice and very probably not even in theory, given the vast complexity, chaotic organisation and marvellously emergent properties of our brains. Some of those who deny determinism may be moved, not so much by explicit rejection of the true last-ditch thesis, but by the certainty that our minds are not determinable by us or by anyone. This muddying of the waters is perpetuated even now by arguments about how our minds may be strongly influenced by high-level factors: peer pressure, subliminal advertising, what we were given to read just before making a decision. These arguments may be presented in favour of determinism together with the water-tight last-ditch case, but they are quite different, and the high-level determinism they support is not certainly true but rather an eminently deniable hypothesis. In the end our behaviour is determined, but can we be programmed like robots by higher level influences? Maybe in some cases – generally, probably not.

The second, related factor is a certain convert enthusiasm. If determinism is a personal intellectual achievement it may well be that we become personally invested in it. When we come to appreciate its truth for the first time it may seem that we have grasped a new perspective and moved out of the confused herd to join the scientifically enlightened. I certainly felt this on my first acquaintance with the idea; I remember haranguing a friend about the truth of determinism in a way that must, with hindsight, have resembled religious conviction and been very tiresome.

“Yes, yes, OK, I get it,” he would say in a vain attempt to stop the flow.

Now no-one lives pure determinism; we all go on behaving as if agency and freedom were meaningful. The fact that this involves an unresolved tension between your philosophy and the ideas about people you actually live by was not a deterrent to me then, however; in fact it may even have added a glamorous sheen of esoteric heterodoxy to the whole thing. I expect other enthusiasts may feel the same today. The gradual revelation, some years later, that determinism is true but actually not at all as important as you thought is less exciting: it has rather a dying fall to it and may be more difficult to assimilate. Consistency with common sense is perhaps a game for the middle aged.

“You know, I’ve been sort of nuancing my thinking about determinism lately…”

“Oh, what, Peter? You made me live through the conversion experience with you – now I have to work through your apostasy, too?”

On the libertarian side, it must be admitted that our power of decision really does look sort of strange, with a power far exceeding that of mere absence of constraint. There are at least two reasons for this. One is our ability to use intentionality to think about anything whatever, and base our decisions on those thoughts. I can think about things that are remote, non-existent, or even absurd, without any difficulty. Most notably, when I make decisions I am typically thinking about future events: will I turn left or right tomorrow? How can future events influence my behaviour now?

It’s a bit like the time machine case where I take the text of Hamlet back in time and give it to Shakespeare – who never actually produced it but now copies it down and has it performed. Who actually wrote it, in these circumstances? No-one, it just appeared at some point. Our ability to think about the future, and so use future goals as causes of actions now, seems in the same way to bring our decisions into being out of nowhere inside us. There was no prior cause, only later ones, so it really seems as if the process inverts and disrupts the usual order of causality.

We know this is indeed remarkable but it isn’t really magic. On my view it’s simply that our recognition of various entities that extend over time allows a kind of extrapolation. The actual causal processes, down at that lowest level, tick away in the right order, but our pattern-matching capacity provides processes at a higher level which can legitimately be said to address the future without actually being caused by it. Still, the appearance is powerful, and we may be impatient with the kind of materialist who prefers to live in a world with low ceilings, insists on everything being material and denies any independent validity to higher levels of description. Some who think that way even have difficulty accepting that we can think directly about mathematical abstractions – quite a difficult posture for anyone who accepts the physics that draws heavily on them.

The other thing is the apparent, direct reality of our decisions. We just know we exercise free will, because we experience the process immediately. We can feel ourselves deciding. We could be wrong about all sorts of things in the world, but how could I be wrong about what I think? I believe the feeling of something ineffable here comes from the fact that we are not used to dealing with reality. Most of what we know about the world is a matter of conscious or unconscious inference, and when we start thinking scientifically or philosophically it is heavily informed by theory. For many people it starts to look as if theory is the ultimate bedrock of things, rather than the thin layer of explanation we place on top. For such a mindset the direct experience of one’s own real thoughts looks spooky; its particularity, its haecceity, cannot be accounted for by theory and so looks anomalous. There are deep issues here, but really we ought not to be foxed by simple reality.

That’s it, I think, in brief at least. More could be said of course; more will be said. The issues above are like optical illusions: just knowing how they work doesn’t make them go away, and so minds will go on boggling. People will go on furiously debating free will: that much is both determined and determinable.

Pointing

This is the second of four posts about key ideas from my book The Shadow of Consciousness. This one looks at how the brain points at things, and how that could provide a basis for handling intentionality, meaning and relevance.

Intentionality is the quality of being about things, possessed by our thoughts, desires, beliefs and (clue’s in the name) our intentions. In a slightly different way intentionality is also a property of books, symbols, signs and pointers. There are many theories out there about how it works; most, in my view, have some appeal, but none looks like the full story.

Several of the existing theories touch on a handy notion of natural meaning proposed by H. P. Grice. Natural meaning is essentially just the noticeable implication of things. Those spots mean measles; those massed dark clouds mean rain. If we regard this kind of ‘meaning’ as the wild, undeveloped form of intentionality we might be able to go on to suggest how the full-blown kind might be built out of it; how we get to non-natural meaning, the kind we generally use to communicate with and the kind most important to consciousness.

My proposal is that we regard natural meaning as a kind of pointing, and that pointing, in turn, is the recognition of a higher-level entity that links the pointer with the target. Seeing dark clouds and feeling raindrops on your head are two parts of the recognisable over-arching entity of a rain-storm. Spots are just part of the larger entity of measles. So our basic ability to deal with meanings is simply a consequence of our ability to recognise things at different levels.
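To make the proposal a little more concrete, here is a deliberately crude sketch – my own toy rendering, not anything from the book – in which ‘pointing’ is simply the existence of a recognisable higher-level entity that contains both the pointer and its target as parts.

    # Toy illustration: natural meaning as shared membership of a larger
    # recognisable whole. The particular wholes and parts are invented examples.
    WHOLES = {
        "rain-storm": {"dark clouds", "raindrops on your head"},
        "measles":    {"spots", "fever", "infection"},
        "day":        {"dawn", "noon", "sunset"},
    }

    def points_to(part, target):
        """Return the higher-level entities that link part to target, if any."""
        return [whole for whole, parts in WHOLES.items()
                if part in parts and target in parts]

    print(points_to("dark clouds", "raindrops on your head"))  # ['rain-storm']
    print(points_to("dawn", "sunset"))                         # ['day']
    print(points_to("spots", "sunset"))                        # [] - nothing links them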

Looking at it that way, it’s easy enough to see how we could build derived intentionality, the sort that words and symbols have; the difference is just that the higher-level entities we need to link everything up are artificial, supplied by convention or shared understanding: the words of a language, the conventions of a map. Clouds and water on my head are linked by the natural phenomenon of rain: the word ‘rain’ and water on my head are linked by the prodigious vocabulary table of the language. We can imagine how such conventions might grow up through something akin to a game of charades; I use a truncated version of a digging gesture to invite my neighbour to help with a hole: he gets it because he recognises that my hand movements could be part of the larger entity of digging. After a while the grunt I usually do at the same time becomes enough to convey the notion of digging.

External communication is useful, but this faculty of recognising wholes for parts and parts for wholes enables me to support more ambitious cognitive processes too, and make a bid for the original (aka ‘intrinsic’) intentionality that characterises my own thoughts, desires and beliefs. I start off with simple behaviour patterns in which recognising an object stimulates the appropriate behaviour; now I can put together much more complex stuff. I recognise an apple; but instead of just eating it, I recognise the higher entity of an apple tree; from there I recognise the long cycle of tree growth, then the early part in which a seed hits the ground; and from there I recognise that the apple in my hand could yield the pips required, which are recognisably part of a planting operation I could undertake myself…

So I am able to respond, not just to immediate stimuli, but to think about future apples that don’t even exist yet and shape my behaviour towards them. Plans that come out of this kind of process can properly be called intentional (I thought about what I was doing) and the fact that they seem to start with my thoughts, not simply with external stimuli, is what justifies our sense of responsibility and free will. In my example there’s still an external apple that starts the chain of thought, but I could have been ruminating for hours and the actions that result might have no simple relationship to any recent external stimulus.

We can move things up another notch if I begin, as it were, to grunt internally. From the digging grunt and similar easy starts, I can put together a reasonable kind of language which not only works on my friends, but on me if I silently recognise the digging grunt and use it to pose to myself the concept of excavation.

There’s more. In effect, when I think, I am moving through the forest of hierarchical relationships subserved by recognition. This forest has an interesting property. Although it is disorderly and extremely complex, it automatically arranges things so that things I perceive as connected in any way are indeed linked. This means it serves me as a kind of relevance space, where the things I may need to think about are naturally grouped and linked. This helps explain how the human brain is so good at dealing with the inexhaustible: it naturally (not infallibly) tends to keep the most salient things close.
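A rough way of picturing that – again just an illustrative sketch of mine, not a model of anything in the brain – is to treat the part-whole links as a graph and take a short walk outward from whatever is currently in mind; the things reached in the fewest steps are the ones most readily available as relevant.

    # Toy sketch of a 'relevance space': the links are invented; a breadth-first
    # walk from a concept returns nearby concepts, nearest first.
    from collections import deque

    LINKS = {
        "apple":      {"apple tree", "eating"},
        "apple tree": {"apple", "pips", "planting"},
        "pips":       {"apple tree", "planting"},
        "planting":   {"apple tree", "pips", "digging"},
        "digging":    {"planting", "hole"},
        "hole":       {"digging"},
        "eating":     {"apple"},
    }

    def relevant(start, max_depth=2):
        """Concepts reachable from start within max_depth links, nearest first."""
        seen, queue, found = {start}, deque([(start, 0)]), []
        while queue:
            node, depth = queue.popleft()
            if depth == max_depth:
                continue
            for neighbour in sorted(LINKS.get(node, ())):
                if neighbour not in seen:
                    seen.add(neighbour)
                    found.append((neighbour, depth + 1))
                    queue.append((neighbour, depth + 1))
        return found

    print(relevant("apple"))
    # [('apple tree', 1), ('eating', 1), ('pips', 2), ('planting', 2)]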

In the end then, human style thought and human style consciousness (or at any rate the Easy Problem kind) seem to be a large and remarkably effective re-purposing of our basic faculty of recognition. By moving from parts to whole to other parts and then to other wholes, I can move through a conceptual space in a uniquely detached but effective way.

That’s a very compressed version of thoughts that probably need a more gentle introduction, but I hope it makes some sense. On to haecceity!


Personhood Week

Personhood Week, at National Geographic, is a nice set of short pieces briefly touring the issues around the crucial but controversial question of what constitutes a person.

You won’t be too surprised to hear that in my view personhood is really all about consciousness. The core concept for me is that a person is a source of intentions – intentions in the ordinary everyday sense rather than in the fancy philosophical sense of intentionality (though that too).  A person is an actual or potential agent, an entity that seeks to bring about deliberate outcomes. There seems to be a bit of a spectrum here; at the lower level it looks as if some animals have thoughtful and intentional behaviour of the kind that would qualify them for a kind of entry-level personhood. At its most explicit, personhood implies the ability to articulate complicated contracts and undertake sophisticated responsibilities: this is near enough the legal conception. The law, of course, extends the idea of a person beyond mere human beings, allowing a form of personhood to corporate entities, which are able to make binding agreements, own property, and even suffer criminal liability. Legal persons of this kind are obviously not ‘real’ ones in some sense, and I think the distinction corresponds with the philosophical distinction between original (or intrinsic, if we’re bold) and derived intentionality. The latter distinction comes into play mainly when dealing with meaning. Books and pictures are about things, they have meanings and therefore intentionality, but their meaningfulness is derived: it comes only from the intentions of the people who interpret them, whether their creators or their ‘audience’.  My thoughts, by contrast, really just mean things, all on their own and however anyone interprets them: their intentionality is original or intrinsic.

So, at least, most people would say (though others would energetically contest that description). In a similar way my personhood is real or intrinsic: I just am a person; whereas the First Central Bank of Ruritania has legal personhood only because we have all agreed to treat it that way. Nevertheless, the personhood of the Ruritanian Bank is real (hypothetically, anyway; I know Ruritania does not exist – work with me on this), unlike that of, say, the car Basil Fawlty thrashed with a stick, which is merely imaginary and not legally enforceable.

Some, I said, would contest that picture: they might argue that ‘a source of intentions’ makes no sense because ‘people’ are not really sources of anything; that we are all part of the universal causal matrix and nothing comes of nothing. Really, they would say, our own intentions are just the same as those of Banca Prima Centrale Ruritaniae; it’s just that ours are more complex and reflexive – but the fact that we’re deeming ourselves to be people doesn’t make it any the less a matter of deeming. I don’t think that’s quite right – just because intentions don’t feature in physics doesn’t mean they aren’t rational and definable entities – but in any case it surely isn’t a hit against my definition of personhood; it just means there aren’t really any people.

Wait a minute, though. Suppose Mr X suffers a terrible brain injury which leaves him incapable of forming any intentions (whether this is actually possible is an interesting question: there are some examples of people with problems that seem like this; but let’s just help ourselves to the hypothesis for the time being). He is otherwise fine: he does what he’s told and if supervised can lead a relatively normal-seeming life. He retains all his memories, he can feel normal sensations, he can report what he’s experienced, he just never plans or wants anything. Would such a man no longer be a person?

I think we are reluctant to say so because we feel that, contrary to what I suggested above, agency isn’t really necessary, only conscious experience. We might have to say that Mr X loses his legal personhood in some senses; we might no longer hold him responsible or accept his signature as binding, rather in the way that we would do for a young child: but he would surely retain the right to be treated decently, and to kill or injure him would be the same crime as if committed against anyone else. Are we tempted to say that there are really two grades of personhood that happen to coincide in human beings, a kind of ‘Easy Problem’ agent personhood on the one hand and a ‘Hard Problem’ patient personhood on the other? I’m tempted, but the consequences look severely unattractive. Two different criteria for personhood would imply that I’m a person in two different ways simultaneously, but if personhood is anything, it ought to be single, shouldn’t it? Intuitively and introspectively it seems that way. I’d feel a lot happier if I could convince myself that the two criteria cannot be separated, that Mr X is not really possible.

What about Robot X? Robot X has no intentions of his own and he also has no feelings. He can take in data, but his sensory system is pretty simple and we can be pretty sure that we haven’t accidentally created a qualia-experiencing machine. He has no desires of his own, not even a wish to serve, or avoid harming human beings, or anything like that. Left to himself he remains stationary indefinitely, but given instructions he does what he’s told: and if spoken to, he passes the Turing Test with flying colours. In fact, if we ask him to sit down and talk to us, he is more than capable of debating his own personhood, showing intelligence, insight, and understanding at approximately human levels. Is he a person? Would we hesitate over switching him off or sending him to the junk yard?

Perhaps I’m cheating. Robot X can talk to us intelligently, which implies that he can deal with meanings. If he can deal with meanings, he must have intentionality, and if he has that perhaps he must, contrary to what I said, be able to form intentions after all – so perhaps the conditions I stipulated aren’t possible after all? And then, how does he generate intentions, as a matter of fact? I don’t know, but on one theory intentionality is rooted in desires or biological drives. The experience of hunger is just primally about food, and from that kind of primitive aboutness all the fancier kinds are built up. Notice that it’s the experience of hunger, so arguably if you had no feelings you couldn’t get started on intentionality either! If all that is right, neither Robot X nor Mr X is really as feasible as they might seem: but it still seems a bit worrying to me.

Now That’s What I Call Dennett

Professors are too polite. So Daniel Dennett reckons. When leading philosophers or other academics meet, they feel it would be rude to explain their theories thoroughly to each other, from the basics up. That would look as if you thought your eminent colleague hadn’t grasped some of the elementary points. So instead they leap in and argue on the basis of an assumed shared understanding that isn’t necessarily there. The result is that they talk past each other and spend time on profitless misunderstandings.

Dennett has a cunning trick to sort this out. He invites the professors to explain their ideas to a selected group of favoured undergraduates (‘Ew; he sounds like Horace Slughorn’ said my daughter); talking to undergraduates they are careful to keep it clear and simple and include an exposition of any basic concepts they use. Listening in, the other professors understand what their colleagues really mean, perhaps for the first time, and light dawns at last.

It seems a good trick to me (and for the undergraduates, yes, by ‘good’ I mean both clever and beneficial); in his new book Intuition Pumps and Other Tools for Thinking Dennett seems covertly to be playing another. The book offers itself as a manual or mental tool-kit offering tricks and techniques for thinking about problems, giving examples of how to use them. In the examples, Dennett runs through a wide selection of his own ideas, and the cunning old fox clearly hopes that in buying his tools, the reader will also take up his theories. (Perhaps this accessible popular presentation will even work for some of those recalcitrant profs, with whom Dennett has evidently grown rather tired of arguing…. heh, heh!)

So there’s a hidden agenda, but in addition the ‘intuition pumps’ are not always as advertised. Many of them actually deserve a more flattering description because they address the reason, not the intuition. Dennett is clear enough that some of the techniques he presents are rather more than persuasive rhetoric, but at least one reviewer was confused enough to think that Reductio ad Absurdum was being presented as an intuition pump – which is rather a slight on a rigorous logical argument: a bit like saying Genghis Khan was among the more influential figures in Mongol society.

It seems to me, moreover, that most of the tricks on offer are not really techniques for thinking, but methods of presentation or argumentation. I find it hard to imagine someone trying to solve a problem by diligently devising thought-experiments and working through the permutations; that’s a method you use when you think you know the answer and want to find ways to convince others.

What we get in practice is a pretty comprehensive collection of snippets; a sort of Dennettian Greatest Hits. Some of the big arguments in philosophy of mind are dropped as being too convoluted and fruitless to waste more time on, but we get the memorable bits of many of Dennett’s best thought-experiments and rebuttals.  Not all of these arguments benefit from being taken out of the context of a more systematic case, and here and there – it’s inevitable I suppose – we find the remix or late cover version is less successful than the original. I thought this was especially so in the case of the Giant Robot; to preserve yourself in a future emergency you build a wandering robot to carry you around in suspended animation for a few centuries. The robot needs to survive in an unpredictable world, so you end up having to endow it with all the characteristics of a successful animal; and you are in a sense playing the part of the Selfish Gene. Such a machine would be able to deal with meanings and intentionality just the way you do, wouldn’t it? Well, in this brief version I don’t really see why or, perhaps more important, how.

Dennett does a bit better with arguments against intrinsic intentionality, though I don’t think his arguments succeed in establishing that there is no difference between original and derived intentionality. If Dennett is right, meaning would be built up in our brains through the interaction of gradually more meaningful layers of homunculi; OK (maybe), but that’s still quite different to what happens with derived intentionality, where things get to mean something because of an agreed convention or an existing full-fledged intention.

Dennett, as he acknowledges, is not always good at following the maxims he sets out. An early chapter is given over to the rules set out by Anatol Rapoport, most notably:

You should attempt to re-express your target’s position so clearly, vividly and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

As someone on Metafilter said, when Dan Dennett does that for Christianity, I’ll enjoy reading it; but there was one place in the current book where I thought Dennett fell short on understanding the opposition. He suggests that Kasparov’s way of thinking about chess is probably the same as Deep Blue’s in the end. What on earth could provoke one to say that they were obviously different, he protests. Wishful thinking? Fear? Well, no need to suppose so: we know that the hardware (brain versus computer) is completely different and runs a different kind of process; we know the capacities of computer and brain are different and, in spite of an argument from Dennett to the contrary, we know the heuristics are significantly different. We know that decisions in Kasparov’s case involve consciousness, while Deep Blue lacks it entirely. So, maybe the processes are the same in the end, but there are some pretty good prima facie reasons to say they look very different.

One section of the book naturally talks about evolution, and there’s good stuff, but it’s still a twentieth century, Dawkinsian vision Dennett is trading in. Can it be that Dennett of all people is not keeping up with the science? There’s no sign here of the epigenetic revolution; we’re still in a world where it’s all about discrete stretches of DNA. That DNA, moreover, got to be the way it is through random mutation; no news has come in of the great struggle with the viruses which we now know has left its wreckage all across the human genome, and, more amazingly, has contributed some vital functional stretches without which we wouldn’t be what we are. It’s a pity because that seems like a story that should appeal to Dennett, with his pandemonic leanings.

Still, there’s a lot to like; I found myself enjoying the book more and more as it went on and the pretence of being a thinking manual dropped away a bit.  Naturally some of Dennett’s old attacks on qualia are here, and for me they still get the feet tapping. I liked Mr Clapgras, either a new argument or more likely one I missed first time round; he suffers a terrible event in which all his emotional and empathic responses to colour are inverted without his actual perception of colour changing at all. Have his qualia been inverted – or are they yet another layer of experience? There’s really no way of telling and for Dennett the question is hardly worth asking. When we got to Dennett’s reasonable defence of compatibilism over free will, I was on my feet and cheering.

I don’t think this book supersedes Consciousness Explained if you want to understand Dennett’s views on consciousness. You may come away from reading it with your thinking powers enhanced, but it will be because your mental muscles have been stretched and used, not really because you’ve got a handy new set of tools. But if you’re a Dennett fan or just like a thoughtful and provoking read, it’s worth a look.