Archive for January, 2009

Picture: handful of dust. Eric Schwitzgebel had an interesting post the other day asking why the universe isn’t permeated with minds made of complex conjunctions of dust or other stuff.  Most people would accept that ‘brains’ can in principle be made out of anything so long as the functional relationships are right; those relationships can be spread out over time and space, and so long as the right relationships are maintained, we don’t even have to keep them in the right order. So shouldn’t there be copies of your mind, and lots of other minds, spread all over the Universe? But that couldn’t be true…

For a more careful exposition and an interesting discussion, see the original post. I think I fall on the sceptical side of this dialogue, more or less for some of the reasons articulated in the comments over there, so I won’t repeat the points already made. Briefly, thought, or consciousness, is surely a process, and if you split it up over time and space the process isn’t really there. Never mind cogitation; would we even say that digestion could be instantiated by such a set-up? Here is a set of atoms; they are scattered over a huge area and thousands of years, but taken together in the right order they could constitute a steak and kidney pie; if you don’t like that, perhaps the pie could appear momentarily, or even for several minutes, as the result of a bizarre but providential quantum accident inside my fridge, say. Then here’s another set, or another bizarre occurrence, which duplicates the pie in a state of beginning to be bitten. And so on through to the unmentionable end. That’s not really digestion, is it (and to be honest, perhaps not a totally accurate translation of Eric’s argument either)?

But it’s interesting to take a different tack and bite the bullet instead of the pie. Yes, OK, all that dust does have mental experience. But surely it only has the most tenuous and ethereal kind. To take another ridiculous example, suppose we were talking about an axe. Normally we require all the parts of an axe to be well co-ordinated in space and time, but we could have a dust-axe which was made up of different parts from different places and centuries. If we make the right selection of parts, we can have a series of axe-slices which even constitute its swinging and cutting wood. But it’s not really a very good axe; its existence relies completely on the support of our imagination, even though it consists entirely of unobjectionable physical items. Its existence is thin and dependent; it’s like the hrönir in the Borges story Tlön, Uqbar, Orbis Tertius, objects that exist only because someone thought they did. The experiences of the dust mind are almost as insubstantial as the experiences of fictional characters, or at least it seems as if they ought to be.

But somehow I find myself reluctant to say even that much; reluctant to grant the dust even the most evanescent form of experience. Why is that? It’s because I feel a mind is something real, not just an abstraction which can arise as well out of a mistily conceptual object as out of a solid, fleshy brain. Strangely, I worry less about the axe because it’s clear to me that whether something has axehood is a matter of how we use and think of it. Various mis-shapen stones can have a significant degree of axe-worthiness; you can say they are axes if that suits you, and deny it another time. Surely you can’t endow something with a mind and then take it away as your convenience requires?

But at the same time another part of my mind is taking the opposite tack. All this nonsense about pies and axes misses the point completely, because no-one supposes that such things are constituted by high-order functional relations. Minds, on the other hand, seem very likely to be entities of that kind, and so they are uniquely suited to be the abstract result of some reinterpretation of suitable sections of the world. It’s just your hankering for some soul, some magic homunculus, that prevents you from realising this.

Come to that, what’s the point of the dust? Suppose we couldn’t find a suitable candidate for the role of one mote. OK, we say, it doesn’t really matter, let’s just arrange things so that the other parts of our dust mind behave as if this missing one were actually there. We can easily do that. But if we can remove one mote, why not remove them all? Let the functional relationships remain without physical realisation. If you’re worried that imaginary dust has no causal powers, reflect that at this very moment it is having effects in making me describe it; even as I write, and you in some remote place and time read, it is rearranging in its insidious way the contents of both our minds. But if we can do without the physical realisation, space and time become irrelevant; the Universe is full of minds; in fact every point is a point of view, and Leibniz’s Monadology is vindicated.

Once again I have reached that familiar state of complete confusion…

Picture: Freeman Dyson. I see that at Edge they have kept up their annual custom of putting a carefully-chosen question to a group of intellectuals. This year, they asked “What game-changing scientific ideas and developments do you expect to live to see?”. There were many interesting answers. Freeman Dyson foresaw what he called ‘radiotelepathy’. The idea is that a set of small implants record the activity of your brain, which is then transmitted and delivered into someone else’s (and vice versa). Hey presto, at once your thoughts and feelings are shared.

As the Thinking Meat Project remarked,  this idea opens up a number of questions. Given our particular interests here, the first point that came to mind was that this kind of telepathy would surely resolve at last the vexed question of qualia. How do we know that the red seen by others looks the same as the red seen by us? Perhaps the experience they have when they see red is the experience we have when we see green? Or perhaps (a less-often discussed possibility) their red is a bit like our middle C on a badly-tuned piano? Or perhaps their colour experiences are nothing like any of our experiences; perhaps there are an infinite number of phenomenal experiences which go with the perception of colour, and everyone has their own unique and ineffable set.

Well, with Dyson telepathy, there would be no need for us to wonder any more; just tune in to someone else’s brain, and we can have their experiences ourselves. Or can we? Perhaps not. Even as I write, I can sense hard-liners getting ready to insist that qualia recorded, transmitted, and inserted into a new brain are not the same as the freshly gathered original ones. You still wouldn’t know what the real thing was like. It might be that it is our brains themselves that impart the special what-it-is-likeness to experiences, in which case even telepathy won’t help, and we can only ever have our own qualia. I think this exposes the insoluble problem at the heart of the whole qualia issue. Really the only way to know what someone’s experiences are like in themselves is to be that person. But you can’t be someone else.

But steady on, because it seems highly unlikely to me that Dyson telepathy is feasible. I don’t see any insoluble problem with the hardware he calls for, but downloading brain content is a tricky business, and uploading is even worse. To start with, Dyson talks about the ‘entire brain’, but do we want the whole thing? Do I want the activity of someone else’s cerebellum reproduced in my own? Do I want the control routines for my cardiovascular system overwritten? No thanks. So even on the macro scale we have to be very careful about where we put our ingoing signals. Pinpointing the right neurons seems a hopeless task. It’s true that by and large the same regions of the cortex appear to deal with the same functions in different individuals, although variation is also quite possible. It’s also true that recent research has identified individual neurons with very specific responses – neurons that fire, say, in response to the sight of Freeman Dyson, but not in response to anyone else. But so far as I know, it hasn’t been demonstrated that the Dyson neuron in everyone is in the same place even approximately; it actually seems most unlikely, given that brains are wired in highly individual ways, and that indeed, most people have never had the pleasure of meeting Freeman Dyson. I don’t think it’s even been shown that the very same neuron which responds to Dyson today continues to do so next week. Because all our machines are made to have their states encoded in a readable way, we tend to expect the same of nature, but evolution has no need of legible code. So it’s very likely that the neural activity which in my brain corresponds to thinking of Freeman Dyson would, when transposed to another cranium, come out as the tantalising memory of the taste of a biscuit, or an intimation of mortality, or reservations about bicameralism.

Of course it’s worse than that. Our brains are carefully organised, and the random dumping of alien activity would be outstandingly likely to mess things up. Would the brain activity that was there before – my own mental activity – be wiped out, so that instead of sharing someone else’s thoughts, I suddenly thought I was someone else? Or could it somehow be merged? Dissolving the barriers and merging with another person can sound almost sensuously appealing (given the right person), but the sudden appearance of unforeshadowed alien thoughts might actually be terrifying, severely disorienting: a threat to the integrity of the psyche liable to end in trauma. In this respect, it’s worth noting that nothing is more disruptive to one signal than another similar signal. If you write a sentence on a piece of paper and then cross it out with two or three lines, it remains easily legible. But if you write even one other sentence over the top of it, it becomes pretty much illegible at once. In the same way, it seems likely that activity from another brain would be the most disruptive thing you could input to your own, far worse than random noise.

I think the best alternative would be to home in on sensory inputs in the brain and try to place your interface somewhere early in the system before those inputs reach the more complex functions of consciousness. The result then would be more like hearing an external voice, or seeing an external hallucination. Much easier to deal with, but of course not so extraordinary – not really different in kind from using a video phone. At the end of the day, perhaps it’s best to stick to the brain inputs provided by the designer – our normal senses.  Walter Freeman memorably lamented the cognitive isolation in which we are all ultimately confined; but perhaps that isolation is the precondition of personal identity.

Bear with me while I transfer to a new host – I hope it won’t take too long.