Downward Causation

Is downward causation the answer? Does it explain how consciousness can be a real and important part of the world without being reducible to physics? Sean Carroll had a sensible discussion of the subject recently.

What does ‘down’ even mean here? The idea rests on the observation that the world operates on many distinct levels of description. Fluidity is not a property of individual molecules but something that ‘emerges’ when certain groups of them get together. Cells together make up organisms that in turn produce ecosystems. Often enough these levels of description deal with progressively larger or smaller entities, and we typically refer to the levels that deal with larger entities as higher, though we should be careful about assuming there is one coherent set of levels of description that fit into one another like Russian dolls.

Usually we think that reality lives on the lowest level, in physics. Somewhere down there is where the real motors of the universe are driving things. Let’s say this is the level of particles, though probably it is actually about some set of entities in quantum mechanics, string theory, or whatever set of ideas eventually proves to be correct. There’s something in this view because it’s down here at the bottom that the sums really work and give precise answers, while at higher levels of description the definitions are more approximate and things tend to be more messy and statistical.

Now consciousness is quite a high-level business. Particles make proteins that make cells that make brains that generate thoughts. So one reductionist point of view would be that really the truth is the story about particles: that’s where the course of events is really decided, and the mental experiences and decisions we think are going on in consciousness are delusions, or at best a kind of poetic approximation.

It’s not really true, however, that the entities dealt with at higher levels of description are not real. Fluidity is a perfectly real phenomenon, after all. For that matter the Olympics were real, and cannot be discussed in terms of elementary particles. What if our thoughts were real and also causally effective at lower levels of description? We find it easy to think that the motion of molecules ‘caused’ the motion of the football they compose, but what if it also worked the other way? Then consciousness could be real and effectual within the framework of a sufficiently flexible version of physics.

Carroll doesn’t think that really washes, and I think he’s right. It’s a mistake to think that relations between different levels of description are causal. It isn’t that my putting the beef and potatoes on the table caused lunch to be served; they’re the same thing described differently. Now perhaps we might allow ourselves a sense in which things cause themselves, but that would be a strange and unusual sense, quite different from the normal sense in which cause and effect by definition operate over time.

So real downward causality, no: if by talk of downward causality people only mean that real effectual mental events can co-exist with the particle story but on a different level of description, that point is sound but misleadingly described.

The thing that continues to worry me slightly is the question of why the world is so messily heterogeneous in its ontology – why it needs such a profusion of levels of description in order to discuss all the entities of interest. I suppose one possibility is that we’re just not looking at things correctly. When we look for grand unifying theories we tend to look to ever lower levels of description and to the conjectured origins of the world. Perhaps that’s the wrong approach and we should instead be looking for the unimaginable mental perspective that reconciles all levels of description.

Or, and I think this might be closer to it, the fact that there are more things in heaven and earth than are dreamt of in anyone’s philosophy is actually connected with the obscure reason for there being anything. As the world gets larger it gets, ipso facto, more complex and reduction and backward extrapolation get ever more hopeless. Perhaps that is in some sense just as well.

(The picture is actually a children’s puzzle from 1921 – any solutions? You need to know it is called ‘Illustrated Central Acrostic’)  

 

Have you a short self-life?

How far back in time do you recognise yourself? There may be long self-life and short self-life people; speculatively, the difference may even be genetic.

Some interesting videos here on the question of selves and persons (two words often used by different people to indicate different distinctions, so you can have a long talk at cross-purposes all too easily).

Too much content for me to summarise quickly, but I was particularly struck by Galen Strawson’s view of self-life (as it were). Human beings may live three score and ten years, but the unchanged self really only lasts a short while. Rigorously speaking he thinks it might only last a fraction of a second, but he believes that there are, as it were, different personality types here; people who have either a long or a short sense of identity over time. He is apparently near one end of the spectrum, not really identifying with the Galen Strawson who was here only half an hour ago. Myself, I think I’m towards the other end. When I look at photographs of my five-year-old self, I feel it’s me. There are many differences, of course, but I remember with special empathy what it was like to look out through those eyes.

Strawson thinks this is a genuine difference, not yet sufficiently studied by psychology; perhaps it even has a genetic basis. But he thinks short self-life and long self-life people can get along perfectly well; in fact the combination may make a strong partnership.

One other interesting point: Raymond Tallis thinks personhood is strongly social. On a desert island your personhood would gradually attenuate until you became more or less ‘Humean’ and absorbed in your environment and daily island tasks. It doesn’t sound altogether bad…

How can panpsychists sleep?

OUP Blog has a sort of preview by Bruntrup and Jaskolla of their forthcoming collection on panpsychism, due out in December, with a video of David Chalmers at the end: they sort of credit him with bringing panpsychist thought into the mainstream. I’m using ‘panpsychism’ here as a general term, by the way, covering any view that says consciousness is present in everything, though most advocates really mean that consciousness or experience is everywhere, not souls as the word originally implied.

I found the piece interesting because they put forward two basic arguments for panpsychism, both a little different from the desire for simplification which I’ve always thought was behind it – although it may come down to the same basic ideas in the end.

The first argument they suggest is that ‘nothing comes of nothing’; that consciousness could not have sprung out of nowhere, but must have been there all along in some form. In this bald form, it seems to me that the argument is virtually untenable. The original Scholastic argument that nothing comes of nothing was, I think, a cosmological argument. In that form it works. If there really were nothing, how could the Universe get started? Nothing happens without a cause, and if there were nothing, there could be no causes. But within an existing Universe, there’s no particular reason why new composite or transformed entities cannot come into existence. The thing that causes a new entity need not be of the same kind as that entity; and in fact we know plenty of new things that once did not exist but do now: life, football, blogs.

So to make this argument work there would have to be some reason to think that consciousness was special in some way, a way that meant it could not arise out of unconsciousness. But that defies common sense, because consciousness coming out of unconsciousness is something we all experience every day when we wake up; and if it couldn’t happen, none of us would be here as conscious beings at all, because we couldn’t have been born, or at least could never have become aware.

Bruntrup and Jaskolla mention arguments from Nagel and William James; Nagel’s, I think, rests on an implausible denial of emergentism; that is, he denies that a composite entity can have any interesting properties that were not present in the parts. The argument in William James is that evolution could not have conferred some radically new property and that therefore some ‘mind dust’ must have been present all the way back to the elementary particles that made the world.

I don’t find either contention at all appealing, so I may not be presenting them in their best light; the basic idea, I think, is that consciousness is just a different realm or domain which could not arise from the physical. Although individual consciousnesses may come and go, consciousness itself is constant and must be universal. Even if we go some way with this argument I’d still rather say that the concept of position does not apply to consciousness than say it must be everywhere.

The second major argument is one from intrinsic nature. We start by noticing that physics deals only with the properties of things, not with the ‘thing in itself’. If you accept that there is a ‘thing in itself’ apart from the collection of properties that give it its measurable characteristics, then you may be inclined to distinguish between its interior reality and its external properties. The claim then is that this interior reality is consciousness. The world is really made of little motes of awareness.

This claim is strangely unmotivated in my view. Why shouldn’t the interior reality just be the interior reality, with nothing more to be said about it? If it does have some other character it seems to me as likely to be cabbagey as conscious. Really it seems to me that only someone who was pretty desperately seeking consciousness would expect to find it naturally in the Ding an sich. The truth seems to be that since the interior reality of things is inaccessible to us, and has no impact on any of the things that are accessible, it’s a classic waste of time talking about it.

Aha, but there is one exception: our own interior reality is accessible to us, and that, it is claimed, is exactly the mysterious consciousness we seek. Now, moreover, you see why it makes sense to think that all examples of this interiority are conscious – ours is! The trouble is, our consciousness is clearly related to the functioning of our brain. If it were just the inherent inner property of that brain, or of our body, it would never go away, and unconsciousness would be impossible. How can panpsychists sleep at night? If panpsychism is true, even a dead brain has the kind of interior awareness that the theory ascribes to everything. In other words, my human consciousness is a quite different thing from the panpsychist consciousness everywhere; somehow in my brain the two sit alongside without troubling each other. My consciousness tells us nothing about the interiority of objects, nor vice versa; and my consciousness is as hard to explain as ever.

Maybe the new book will have surprising new arguments? I doubt it, but perhaps I’ll put it on my Christmas present list.

The End of Truth

Some deep and heavy philosophy in another IAI video: After the End of Truth. This one got a substantial and in places rather odd discussion on Reddit recently.

Hilary Lawson starts with the premise that Wittgenstein and others have sufficiently demonstrated that there is no way of establishing objective truth; but we can’t, he says, rest there. He thinks that we can get a practical way forward if we note that the world is open but we can close it in various ways and some of them work better for us than others.

Perhaps an analogy might be (as it happens) the ideas of truth and falsity themselves in formal logic. Classical logic assigns only two values to propositions: true or not true. People often feel this is unintuitive. We can certainly devise formal logics with more than two values – we could add one for ‘undetermined’, say. This is not a matter of what’s right or wrong; we can carve up our system of logic any way we like. The thing is, two-valued logic just gives us a lot more results than its rivals. One important reason is that if we can exclude a premise, in a two-valued system its negation must be true; that move doesn’t work if there are three or more values. So it’s not that two-valued logic is true and the others are false, it’s just that doing it the two-valued way gets us more. Perhaps something similar might be true of the different ways we might carve up the world.
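As a toy illustration of that last point (my own sketch in Python, not anything from the video), here is the inference spelled out, using Kleene’s strong three-valued negation as the example of an extra value. In the two-valued case, ruling out ‘true’ forces the negation to be true; in the three-valued case, ruling out ‘true’ still leaves two possibilities and the inference is lost.

```python
from enum import Enum

class V(Enum):
    TRUE = "true"
    FALSE = "false"
    UNDET = "undetermined"   # the extra value the three-valued logic adds

def neg(v: V) -> V:
    """Negation in Kleene's strong three-valued logic (an illustrative choice)."""
    return {V.TRUE: V.FALSE, V.FALSE: V.TRUE, V.UNDET: V.UNDET}[v]

# Two-valued reasoning: excluding TRUE leaves only FALSE, so the negation must be TRUE.
two_valued = [V.TRUE, V.FALSE]
print({neg(v) for v in two_valued if v is not V.TRUE})     # {V.TRUE}

# Three-valued reasoning: excluding TRUE still leaves FALSE and UNDET,
# so the negation might be TRUE or might be UNDET -- nothing is forced.
three_valued = [V.TRUE, V.FALSE, V.UNDET]
print({neg(v) for v in three_valued if v is not V.TRUE})   # {V.TRUE, V.UNDET}
```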

John Searle, by video (and briefly doing that thing old people seem mysteriously prone to: sinking to the bottom of the frame as though peering over a wall) goes for common sense, as ever, albeit cloaked in technical terms. He distinguishes between epistemic and ontological senses of objectivity. Our views are unavoidably ontologically subjective to some degree (i.e., different people have different views: ‘perspectivalism’ is true); but that does not at all entail that epistemic objectivity is unattainable; indeed, if we didn’t assume some objective truths we couldn’t get started on discussion. That’s a robust refutation of the view that perspectivalism implies no objective truth, though I’m not sure that’s quite the case Lawson was making. Perhaps we could argue that after all, there are such things as working assumptions; to say ‘let’s treat this as true and see where we get’ does not necessarily require belief in objectively determinable truth.

Hannah Dawson seems to argue emphatically on both sides: no two members of a class gave the same account of an assembly (though I bet they could all agree that no pink elephant walked in half-way through). It seems the idea of objective truth sits uneasily in history; but no-one can deny the objective factuality of the Holocaust; sometimes, after all, reality does push back. This may be an expression of the curious point that it often seems easier to say that nothing is objectively true than it is to say that nothing is objectively false, illogical as that is.

Dawson’s basic argument looks to me a bit like an example of ‘panic scepticism’; no perfect objective account of an historical event is possible, therefore nothing at all is objectively true. I think we get this kind of thing in philosophy of mind too; people seem to argue that our senses mislead us sometimes, therefore we have no knowledge of external reality (there are better arguments for similar conclusions, of course). Maybe after all we can find ways to make do with imperfect knowledge.

A Case for Human Thinking

Interesting piece here reviewing the way some modern machine learning systems are unfathomable. This is because they learn how to do what they do, rather than being set up with a program, so there is no reassuring algorithm – no set of instructions that tells us how they work. In fact the way they make their decisions may be impossible to grasp properly even if we know all the details, because it just exceeds in brute complexity what we can ever get our minds around.

This is not really new. Neural nets that learn for themselves have always been a bit inscrutable. One problem with this is brittleness: when the system fails it may not fail in ways that are safe and manageable, but disastrously. This old problem is becoming more important mainly because new approaches to deep machine learning are doing so well; all of a sudden we seem to be getting a rush of new systems that really work effectively at quite complex real world tasks. The problems are no longer academic.

Brittle behaviour may come about when the system learns its task from a limited data set. It does not understand the data and is simply good at picking out correlations, so sometimes it may pick out features of the original data set that work well within that set, and perhaps even work well on quite a lot of new real world data, but don’t really capture what’s important. The program is meant to check whether a station platform is dangerously full of people, for example; in the set of pictures provided it finds that all it needs to do is examine the white platform area and check how dark it is. The more people there are, the darker it looks. This turns out to work quite well in real life, too. Then summer comes and people start wearing light coloured clothes…
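To make the platform example concrete, here is a minimal sketch with entirely made-up data and a deliberately crude one-feature ‘learner’ standing in for whatever a net might latch onto. Trained on winter pictures (dark clothing), a simple darkness threshold separates full from not-full platforms quite well; on summer data the proxy breaks down even though the real task has not changed.

```python
import random

random.seed(0)

def platform_darkness(people, clothing_darkness):
    """Mean darkness of the platform image: more people in darker clothes -> darker picture."""
    return min(1.0, 0.1 + 0.008 * people * clothing_darkness + random.uniform(-0.02, 0.02))

def make_data(n, clothing_darkness):
    """(darkness, dangerously_full) pairs; 'dangerously full' means more than 60 people."""
    rows = []
    for _ in range(n):
        people = random.randint(0, 100)
        rows.append((platform_darkness(people, clothing_darkness), people > 60))
    return rows

def accuracy(data, threshold):
    return sum((darkness > threshold) == full for darkness, full in data) / len(data)

# 'Training': pick the darkness threshold that works best on winter data --
# the shortcut feature the system settles on, instead of counting people.
train = make_data(2000, clothing_darkness=1.0)
threshold = max((t / 100 for t in range(101)), key=lambda t: accuracy(train, t))

print("winter accuracy:", accuracy(make_data(1000, clothing_darkness=1.0), threshold))
# Summer: lighter clothes break the darkness proxy, although the task itself is unchanged.
print("summer accuracy:", accuracy(make_data(1000, clothing_darkness=0.3), threshold))
```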

There are ways to cope with this. We could build in various safeguards. We could make sure we use big and realistic datasets for training or perhaps allow learning to continue in real world contexts. Or we could just decide never to use a system that doesn’t have an algorithm we can examine; but there would be a price to pay in terms of efficiency for that; it might even be that we would have to give up on certain things that can only be effectively automated with relatively sophisticated deep learning methods. We’re told that the EU contemplates a law embodying a right to explanations of how software works. To philosophers I think this must sound like a marvellous new gravy train, as there will obviously be a need to adjudicate what counts as an adequate explanation, a notoriously problematic issue. (I am available as a witness in any litigation for reasonable hourly fees.)

The article points out that the incomprehensibility of neural network-based systems is in some ways really quite like the incomprehensibility of the good old human brain. Why wouldn’t it be? After all, neural nets were based on the brain. Now it’s true that even in the beginning they were very rough approximations of real neurology and in practical modern systems the neural qualities of neural nets are little more than a polite fiction. Still, perhaps there are properties shared by all learning systems?

One reason deep learning may run into problems is the difficulty AI always has in dealing with relevance. The ability to spot relevance no doubt helps the human brain check whether it is learning about the right kind of thing, but it has always been difficult to work out quite how our brains do it, and this might mean an essential element is missing from AI approaches.

It is tempting, though, to think that this is in part another manifestation of the fact that AI systems get trained on limited data sets. Maybe the radical answer is to stop feeding them tailored data sets and let our robots live in the real world; in other words, if we want reliable deep learning perhaps our robots have to roam free and replicate the wider human experience of the world at large? To date the project of creating human-style cognition has been in some sense motivated by mere curiosity (and yes, by the feeling that it would be pretty cool to have a robot pal); are we seeing here the outline of an argument that human-style AGI might actually be the answer to genuine engineering problems?

What about those explanations? Instead of retaining philosophers and lawyers to argue the case, could we think about building in a new module to our systems, one that keeps overall track of the AI and can report the broad currents of activity within it? It wouldn’t be perfect but it might give us broad clues as to why the system was making the decisions it was, and even allow us to delicately feed in some guidance. Doesn’t such a module start to sound like, well, consciousness? Could it be that we are beginning to see the outline of the rationales behind some of God’s design choices?
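Purely as a sketch of what such a reporting module might look like (hypothetical names and a toy linear ‘model’; nothing here comes from the article), a wrapper could log which internal signal dominated each decision and summarise the broad pattern afterwards:

```python
from collections import Counter

class ActivityMonitor:
    """Keeps a coarse, human-readable record of which feature dominated each decision."""
    def __init__(self, feature_names):
        self.feature_names = feature_names
        self.dominant = Counter()

    def record(self, contributions):
        # contributions: one per-feature score for a single decision, supplied by the model
        top = max(range(len(contributions)), key=lambda i: abs(contributions[i]))
        self.dominant[self.feature_names[top]] += 1

    def report(self):
        total = sum(self.dominant.values()) or 1
        return {name: count / total for name, count in self.dominant.most_common()}

# Toy 'model': a weighted sum over named inputs, with the monitor watching its contributions.
weights = {"platform_darkness": 2.5, "time_of_day": 0.3, "ticket_sales": 0.1}
monitor = ActivityMonitor(list(weights))

for x in [{"platform_darkness": 0.8, "time_of_day": 0.5, "ticket_sales": 0.4},
          {"platform_darkness": 0.7, "time_of_day": 0.9, "ticket_sales": 0.2}]:
    contributions = [weights[k] * x[k] for k in weights]
    decision = sum(contributions) > 1.5      # the system's actual (opaque) decision
    monitor.record(contributions)
    print("dangerously full" if decision else "fine")

print(monitor.report())   # e.g. {'platform_darkness': 1.0}: darkness drove every decision
```

The report is only a broad clue, not a full explanation, but it is the kind of coarse running commentary the paragraph above has in mind.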

Architectonics and the Hard Problem

Can we solve the Hard Problem with scanners? This article by Brit Brogaard and Dimitria E. Gatzia argues that recent advances in neuroimaging techniques, combined with the architectonic approach advocated by Fingelkurts and Fingelkurts, open the way to real advances.

But surely it’s impossible for physical techniques to shed any light on the Hard Problem? The whole point is that it is over and above any account which could be given by physics. In the Zombie Twin thought experiment I have a physically identical twin who has no subjective experience. His brain handles information just the way mine does, but when he registers the colour red, it’s just data; he doesn’t experience real redness. If you think that is conceivable, then you believe in qualia, the subjective extra part of experience. But how could qualia be explained by neuroimaging? My zombie twin’s scans are exactly the same as mine, yet he has no qualia at all.

This, I think, is where the architectonics come in. The foundational axiom of the approach, as I understand it, is that the functional structure of phenomenal experience corresponds to dynamic structure within brain activity; the operational architectonics provide the bridge. (I call it an axiom, but I think the Fingelkurts twins would say that empirical research already provides support for a nested hierarchical structure which bridges the explanatory gap. They seem to take the view that operational architectonics uses a structured electrical field, which on the one hand links their view with the theories of Johnjoe McFadden and Sue Pockett, while on the other makes me wonder whether advances in neuroimaging are relevant if the exciting stuff is happening outside the neurons.) It follows that investigating dynamic activity structures in the brain can tell us about the structure of phenomenal, subjective experience. That seems reasonable. After all, we might argue, qualia may be mysterious, but we know they are related to physical events; the experience of redness goes with the existence of red things in the physical world (with due allowance for complications). Why can’t we assume that subjective experience also goes with certain structured kinds of brain activity?

Two points must be made immediately. The first is that the hunt for Neural Correlates of Consciousness (NCCs) is hardly new. The advocates of architectonics, however, say that approaches along these lines fail because correlation is simply too weak a connection. Noticing that experience x and activation in region y correlate doesn’t really take us anywhere. They aim for something much harder-edged and more specific, with structured features of brain activity matched directly back to structures in an analysis of phenomenal experience (some of the papers use the framework of Revonsuo, though architectonics in general is not committed to any specific approach).

The second point is that this is not a sceptical or reductive project. I think many sceptics about qualia would be more than happy with the idea of exploring subjective experience in relation to brain structure; but someone like Dan Dennett would look to the brain structures to fully explain all the features of experience; to explain them away, in fact, so that it was clear that brain activity was in the end all we were dealing with and we could stop talking about ‘nonsensical’ qualia altogether.

By contrast the architectonic approach allows philosophers to retain the ultimate mystery; it just seeks to push the boundaries of science a bit further out into the territory of subjective experience. Perhaps Paul Churchland’s interesting paper about chimerical colours which we discussed a while ago provides a comparable case if not strictly an example.

Churchland points out that we can find the colours we experience mapped out in the neuronal structures of the brain; but interestingly the colour space defined in the brain is slightly more comprehensive than the one we actually encounter in real life. Our brains have reserved spaces for colours that do not exist, as it were. However, using a technique he describes we can experience these ‘chimerical’ colours, such as ‘dark yellow’, in the form of an afterimage. So here you experience for the first time a dark yellow quale, as predicted and delivered by neurology. Churchland would argue this shows rather convincingly that position in your brain’s colour space is essentially all there is to the subjective experience of colour. I think a follower of architectonics would commend the research for elucidating structural features of experience but hold that there was still a residual mystery about what dark yellow qualia really are in themselves, one that can only be addressed by philosophy.

It all seems like a clever and promising take on the subject to me; I do have two reservations. The first is a pessimistic doubt about whether it will ever really be possible to deliver much. The sort of finding reported by Churchland is the exception rather than the rule. Vision and hearing offer some unusual scope because they both depend on wave media which impose certain interesting structural qualities: the orderly spectrum and the musical scale. Imaginatively I find it hard to think of other aspects of phenomenal experience that seem to be good candidates for structural analysis. I could be radically wrong about this and I hope I am.

The other thing is, I still find it a bit hard to get past my zombie twin; if phenomenal experience matches up with the structure of brain activity perfectly, how come he is without qualia? The sceptics and the qualophiles both have pretty clear answers: either there just are no qualia anyway, or they are outside the scope of physics. Now if we take the architectonic view, we could argue that just as the presence of red objects is not sufficient for there to be red qualia, so perhaps the existence of the right brain patterns isn’t sufficient either; though the red objects and the relevant brain activity do a lot to explain the experience. But if the right brain activity isn’t sufficient, what’s the missing ingredient? It feels (I put it no higher) as if there ought to be an explanation; but perhaps that’s just where we leave the job for the philosophers?