Is Karl Friston that bloke? You know who I mean. That really clever bloke, master of some academic field. He’s used to understanding a few important things far better than the rest of us. But when people in the pub start discussing philosophy of mind, he feels wrong-footed in this ill-defined territory.

“You see,” he says, “the philosophers make such a meal of this, with all their vague, mystical talk of intentions and experiences. But I’m a simple man, and I can’t help looking at it as a scientist. To me it just seems obvious that it’s basically nothing more than a straightforward matter of polyvalent symmetry relations between vectors in high-order Minkowski topologies, isn’t it?”

Aha! Now you have to meet him on his own turf and defeat him there: otherwise you can’t prove he hasn’t solved the problem of consciousness!

I’m sure that isn’t really Friston at all; but his piece in Aeon, perhaps due to its unavoidable brevity, seems to invoke some relatively esoteric apparatus while ultimately saying things that turn out to be either madly optimistic or relatively banal. Let’s quickly canter through it. Friston starts by complaining of philosophers and cognitive scientists who insist that the mind is a thing. “As a physicist and psychiatrist,” he says, “I find it difficult to engage with conversations about consciousness.” In physics, he says, it’s dangerous to assume that things ‘exist’; the real question is what processes give rise to the notion that something exists. (An unexpected view: physics, then, seeks to explain the notions in our minds, not external reality? Poor old Isaac Newton wasn’t really doing proper physics at all.) Friston, instead, wants to brush all that ‘thing’ nonsense aside and argue that consciousness, in reality, is a natural process.

I’ve spent some time trying to think of anyone who would deny that, and really I’ve come up empty. Panpsychists probably think that at the most fundamental level consciousness can be a very simple form of awareness, too simple to go through complex changes: but even they, I think, would not deny that human consciousness is a natural process. Perhaps all Friston means is that he doesn’t want to spend any time on definitions of consciousness; that territory is certainly a treacherous swamp where people get lost, although setting out to understand something you haven’t defined can be kinda bold, too.

To illustrate the idea of consciousness as a process, Friston (inexplicably, to me anyway: this is one of the places where I feel something might have got lost in editing) suggests we swap the word and talk about whether evolution is a process. Scientifically, he says, we know that evolution isn’t for anything – it’s just a process that happens. Since consciousness is a product of evolution, it isn’t for anything either. I don’t know about that; it’s true it can’t borrow its purpose from evolution if evolution doesn’t have one; the thing is, putting aside all the difficult issues of ultimate purpose and the nature of teleology, there is a well-established approach within evolutionary theory of asking what, say, horns or eyes are for (defeating rivals, seeing food, etc). This is just a handy way to ask about the survival value of particular adaptations. So within evolution things like consciousness can be for something in a relatively clear sense without evolution itself having to be for anything. Actually Friston understands this perfectly well; he immediately goes on to speak approvingly of Dennett’s free-floating rationales, just the kind of intra-evolutionary purpose I mean. (He says Dennett has spent his career trying to understand the origin of the mind – what, is he one of those pesky guys who treat the mind as a thing?)

Anyway, now we’re getting nearer to the real point: inference. Inference, claims Friston, is quite close to a theory of everything (maybe, but so is ‘stuff happens’). First, though, it seems we need to talk about complex systems, and we’re going to do it by talking about their state spaces. I wish we weren’t. Really, this business of state spaces is like a fashion – or a disease – sweeping through certain parts of the intellectual community. Is there an emerging belief, comparable to the doctrine of the Universality of Computation, that everything must be usefully capable of analysis by state space? I might be completely up the creek, but it seems to me it’s all too easy to use hypothetical state spaces to give yourself a false assurance that something is tractable when it really isn’t. Of course properly defined state spaces are perfectly legitimate in certain cases. Friston mentions quantum systems; yes, but elementary particles have a relatively small number of properties that provide a manageable number of regular axes from which our space can be constructed. At least you’ve got to be able to say what variables you’re mapping in your space, haven’t you? Here it seems Friston wants to talk about, for example, the space of electrochemical states of the brain. I mean, he’s a professor of neurology, so he knows whereof he speaks – but what on earth are the unimaginable axes that define that one, using cleanly separated independent variables? He’s very hand-wavy about this; he half-suggests we might be dealing with a state space detailing the position of every particle, but surely that’s radically below the level of description we need even for neurology, never mind inferences about the world. It’s highly likely that mental states are uniquely resistant to simple analysis; in the state space of human thoughts every one of the infinite possible thoughts may be adjacent to every other along one of the infinite number of salient axes. I doubt whether God could construct that state space meaningfully, not even if we gave Him a pencil and paper.

Anyway, Friston wants us to consider a Lyapunov function applied to some suitable state space of – I think – a complex system such as one of us, i.e. a living organism. He describes the function in general terms and how it governs movement through the space, although without too much detail. In fact after a bit of a whistle-stop tour of attractors and homeostasis all he seems to want from the discussion is the point that adaptive behaviour can be read as embodying inferences about the world the organism inhabits. We could get there just by talking about hibernation or bees building hives, so the unkind suspicion crossed my mind that he brings Lyapunov into it partly in order to have a scary dead Russian on the team. I’m sure that’s unfair, and most likely there is illuminating detail about the Lyapunov function that didn’t survive into this account because of limitations on space (or probable reader comprehension). In any case it seems that all we really need to take away is that this function in complex systems like us can be interpreted as making inferences about the future, or minimising ‘surprise’.
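For what it’s worth, here is a deliberately toy sketch of that last point; it is not Friston’s actual free-energy formalism, and the ‘preferred state’, the numbers and the gradient-descent rule below are all my own inventions for illustration. The idea is just that a system whose state runs downhill on a suitable function can be read both as following a Lyapunov function and as minimising the improbability (‘surprise’) of its own states.

```python
# A toy sketch, not Friston's free-energy formalism: 'surprise' is taken as the
# negative log-density of the state under a preferred (homeostatic) distribution,
# and the dynamics simply run downhill on that function, which therefore acts as
# a Lyapunov function for the process. All values here are invented.

def surprise(x, mu=37.0, sigma=0.5):
    """Negative log-probability (up to a constant) of state x under a Gaussian
    'preferred state', e.g. a body temperature of 37 degrees."""
    return 0.5 * ((x - mu) / sigma) ** 2

def step(x, lr=0.1, mu=37.0, sigma=0.5):
    """Nudge the state a little way down the gradient of the surprise function."""
    grad = (x - mu) / sigma ** 2
    return x - lr * grad

x = 40.0                      # start well away from the preferred state
for _ in range(50):
    x = step(x)

# The state settles near the preferred value and 'surprise' falls towards zero;
# read charitably, the system behaves as if it expected to find itself at 37.
print(round(x, 3), round(surprise(x), 6))
```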

It’s important to be clear here that we’re not actually talking literally about actual surprise or about actual inference, the process of some conscious entity inferring something. We’re using the word metaphorically to talk about the free-floating explanations, in a Dennettian sense, that complex, self-maintaining systems implicitly display. By acting in ways that keep them going, such systems can sort of be said to embody guesses about the future. Friston says these sorts of behaviour in complex systems ‘effectively’ make inferences about the world; this is one of those cases where we should remember that ‘effectively’ should strictly be read as ‘don’t’. It’s absolutely OK to talk in these metaphorical terms – I’d go so far as to say it’s almost unavoidable – but to talk of metaphorical inference in an account of consciousness, where we’re trying to explain the real thing, raises the obvious risk of losing track and kidding ourselves that we’ve solved the problem of cognition when all we’ve done is invoke a metaphor. So let’s call this kind of metaphorical inference ‘minference’. If we want to claim later that inference is minference, or that we can build inference out of minference, well and good: but we’ll be sure of noticing what we’re doing.

So complex, self-sustaining systems like us do minference; but that tells us nothing about consciousness because it’s true of all such systems. It’s true of plants and bacteria just as much as it’s true of us, and it’s even true of some non-living systems. Of course, says Friston, but for similar reasons consciousness must also be a process of inference (minference).  It’s just the (m)inference done by the brain. That’s fine, but it just tells us consciousness must tend to produce behaviour that helps us stay alive, without telling us anything at all about the distinctive nature of the process; digestion is also a process that does minference, isn’t it? But we don’t usually attribute consciousness to the gut. Brain minference is entirely different to conscious explicit inferences.

Friston does realise that he needs to explain the difference between conscious and non-conscious minferring creatures, but I don’t think he’s clear enough that the earlier talk of (m)inference is no real use to him. He suggests that in order to infer (really infer) the consequences of its actions, a creature needs an internal model. This seems quite problematic to me, though I’m led to believe he has a more extensive argument for it which doesn’t appear here. While we may use models for some purposes, I honestly don’t see that inference requires one (in fact, building a model and then making your inferences about that would be asking for trouble). I plan to go and catch a train in a minute, having inferred that there will be one at the station; does that mean I must have a small train set or a miniature timetable simulated in my brain? Nope. Friston wants to say that the difference between conscious and unconscious behaviour is that the former derives from a ‘thick’ model of time, which here seems to mean no more than one that takes account of a relatively extended period. The idea that the duration is crucial makes no great sense to me: the behaviour patterns of ants reflect hundreds of thousands or even millions of years of minference: my conscious decisions may be the work of a moment; but I think in the end what Friston means to say is that conscious thought detaches us from the immediate moment by modelling the world and so allows us to entertain plans motivated by long-term considerations. That’s fine, but it has nothing much to do with the state spaces, attractors and Lyapunov functions discussed earlier; it looks as if we can junk all that and just start afresh with the claim about consciousness being a matter of a model that helps us plan. And once that idea is shorn of all the earlier apparatus it becomes clear that it’s not an especially new insight. In fact, it’s pretty much the sort of thing a lot of those pesky mind-as-thing fellows have been saying all along.

Alas, it’s worse than that. Because of the confusion between inference and minference Friston seems to be saddled with the idea that actual consciousness is about minimising actual surprise. Is the animating purpose of human thought to avoid surprise? Do our explicit mental processes seek above all to attain a steady equilibrium, always thinking about the same few things in a tight circle and getting away from new ideas as quickly as possible? It doesn’t seem plausible to me.

Friston concludes with these two sentences:

There’s no real reason for minds to exist; they appear to do so simply because existence itself is the end-point of a process of reasoning. Consciousness, I’d contend, is nothing grander than inference about my future.

Frankly, I don’t understand the first one; the second is fine, but I’m left with the nagging feeling that I missed something more exciting along the way.


[The picture is Lyapunov, by the way, not Karl Friston]


Matt Faw says subjective experience is caused by a simulation in the hippocampus – a bit like a holodeck. There’s a brief description on the Brains Blog, with the full version here.

In a very brief sketch, Faw says that data from various systems is pulled together in the hippocampal system and compiled into a sort of unified report of what’s going on. This is sort of a global workspace system, whose function is to co-ordinate. The ongoing reportage here is like a rolling film or holodeck simulation, and because it’s the only unified picture available, it is mistaken for the brain’s actual interaction with the world. The actual work of cognition is done elsewhere, but this simulation is what gives rise to ‘neurotypical subjective experience’.

I’m uneasy about that; it doesn’t much resemble what I would call subjective experience. We can have a model of the self in the world without any experiencing going on (Roger Penrose suggested the example of a video camera pointed at a mirror), while the actual subjectivity of phenomenal experience seems to be nothing to do with the ‘structural’ properties of whether there’s a simulation of the world going on.

I believe the simulation or model is supposed to help us think about the world and make plans; but the actual process of thinking about things rarely involves a continuous simulation of reality. If I’m thinking of going out to buy a newspaper, I don’t have to run through imagining what the experience is going to be like; indeed, to do so would require some effort. Even if I do that, I’m the puppeteer throughout; it’s not like running a computer game and being surprised by what happens. I don’t learn much from the process.

And what would the point be? I can just think about the world. Laboriously constructing a model of the world and then thinking about that instead looks like a lot of redundant work and a terrible source of error if I get it wrong.

There’s a further problem for Faw in that there are people who manage without functioning hippocampi. Although they undoubtedly have serious memory problems, they can talk to us fairly normally and answer questions about their experiences. It seems weird to suggest that they don’t have any subjective experience; are they philosophical zombies?

Faw doesn’t want to say so. Instead he likens their thought processes to the ones that go on when we’re driving without thinking. Often we find we’ve driven somewhere but cannot remember any details of the journey. Faw suggests that what happens here is just that we don’t remember the driving. All the functions that really do the cognitive work are operating normally, but whereas in other circumstances their activity would get covered (to some extent) in the ‘news bulletin’ simulation, in this case our mind is dealing with something more interesting (a daydream, other plans, whatever), and just fails to record what we’ve done with the brake and the steering wheel. But if we were asked at the very moment of turning left what we were doing, we’d have no problem answering. People with no hippocampi are like this; constantly aware of the detail of current cognition, stuff that is normally hidden from neurotypically normal people, but lacking the broader context which for us is the source of normal conscious experience.

I broadly like this account, and it points to the probability that the apparent problems for Faw are to a degree just a matter of labelling. He’s calling it subjective experience, but if he called it reportable subjective experience it would make a lot of sense. We only ever report what we remember to have been our conscious experience: some time, even if only an instant, has to have passed. It’s entirely plausible that we rely on the hippocampus to put together these immediate, reportable memories for us.

So really what I call subjective experience is going on all the time out there; it doesn’t require a unified account or model; but it does need the hippocampal newsletter in order to become reportable. Faw and I might disagree on the fine philosophical issue of whether it is meaningful to talk about experiences that cannot, in principle, be reported; but in other ways we don’t really differ as much as it seemed.

Is this a breakthrough in robot emotion? Zhou et al describe the Emotional Chatting Machine (ECM), a chatbot which uses machine learning to return answers with a specified emotional tone.

The ultimate goal of such bots is to produce a machine that can detect the emotional tone of an input and choose the optimum tone for its response, but this is very challenging. It’s not simply a matter of echoing the emotional tone of the input; the authors suggest, for example, that sympathy is not always the appropriate response to a sad story. For now, the task they address is to take two inputs: the actual content and a prescribed emotional tone, and generate a response to the content reflecting the required tone. Actually, doing more than reflecting is going to be very challenging indeed because the correct tone of a response ought to reflect the content as well as the tone of the input; if someone calmly tells you they’re about to die, or about to kill someone else, an equally calm response may not be emotionally appropriate (or it could be in certain contexts; this stuff is, to put it mildly, complex).

To train the ECM, two databases were used. The NLPCC dataset has 23,105 sentences collected from Weibo, a Chinese blog site, and categorised by human beings using eight categories: Anger, Disgust, Fear, Happiness, Like, Sadness, Surprise and Other. Fear and Surprise turned up too rarely on Weibo blogs to be usable in practice.

Rather than using the NLPCC dataset directly, the researchers used it to train a classifier which then categorised the larger STC dataset, which has 219,905 posts and 4,308,211 responses; they reckon they achieved an accuracy of 0.623, which doesn’t sound all that great, but was apparently good enough to work with; obviously this is something that could be improved in future. It was the ‘emotionalised’ STC data set which was then used to train the ECM for its task.
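Just to make that pipeline concrete, here is a minimal sketch of the two-stage labelling step, which is emphatically not the authors’ code: the tiny example sentences, the TF-IDF features and the logistic regression classifier below are all stand-ins for whatever Zhou et al actually used, and only the shape of the process (train on the small hand-labelled set, auto-label the big one, then train the chatbot on the result) is taken from the paper.

```python
# Sketch of the two-stage labelling step, with placeholder data and models:
# 1) train an emotion classifier on the small hand-labelled (NLPCC-style) set;
# 2) use it to tag a much larger (STC-style) corpus;
# 3) the auto-tagged corpus then becomes training data for the ECM itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_texts = ["what a lovely day", "this is disgusting", "I am so angry"]
labels = ["Happiness", "Disgust", "Anger"]        # human-assigned categories

vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(labelled_texts), labels)

large_corpus = ["thanks, that made my day", "stop shouting at me"]
auto_labels = clf.predict(vec.transform(large_corpus))   # the paper reports ~0.623 accuracy for its classifier

# The (post, response, auto_label) triples would then be fed to the
# sequence-to-sequence ECM as its 'emotionalised' training set.
for text, label in zip(large_corpus, auto_labels):
    print(label, ":", text)
```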

Results were assessed by human beings for both naturalness (how human they seemed) and emotional accuracy; ECM improved substantially on other approaches and generally turned in a good performance, especially on emotional accuracy. Alas, the chatbot is not available to try out online.

This is encouraging but I have a number of reservations. The first is about the very idea of an emotional chatbot. Chatbots are almost by definition superficial. They don’t attempt to reproduce or even model the processes of thought that underpin real conversation, and similarly they don’t attempt to endow machines with real or even imitation emotion (the ECM has internal and external memory in which to record emotional states, but that’s as far as it goes). Their performance is always, therefore, on the level of a clever trick.

Now that may not matter, since the aim is merely to provide machines that deal better with emotional human beings. They might be able to do that without having anything like real or even model emotions themselves (we can debate the ethical implications of ‘deceiving’ human interlocutors like this another time). But there must be a worry that performance will be unreliable.

Of course, we’ve seen that by using large data sets, machines can achieve passable translations without ever addressing meanings; it is likely enough that they can achieve decent emotional results in the same sort of way without ever simulating emotions in themselves. In fact the complexity of emotional responses may make humans more forgiving than they are for translations, since an emotional response which is slightly off can always be attributed to the bot’s personality, mood, or other background factors. On the other hand, a really bad emotional misreading can be catastrophic, and the chatbot approach can never eliminate such misreading altogether.

My second reservation is about the categorisation adopted. The eight categories used for the NLPCC data set, and inherited here with some omissions, seem to belong to a family of categorisations which derive ultimately from the six-part one devised by Paul Ekman: anger, disgust, fear, happiness, sadness, and surprise. The problem with this categorisation is that it doesn’t look plausibly comprehensive or systematic. Happiness and sadness look like a pair, but there’s no comparable positive counterpart of disgust or fear, for example. These problems have meant that the categories are often fiddled with. I conjecture that ‘like’ was added to the NLPCC set as a counterpart to disgust, and ‘other’ to ensure that everything could be categorised somewhere. You may remember that in the Pixar film Inside Out Surprise didn’t make the cut; some researchers have suggested that only four categories are really solid, with fear/surprise and anger/disgust forming pairs that are not clearly distinct.

The thing is, all these categorisations are rooted in attempts to categorise facial expressions. It isn’t the case that we necessarily have a distinct facial expression for every possible emotion, so that gives us an incomplete and slightly arbitrary list. It might work for a bot that pulled faces, but one that provides written outputs needs something better. I think a dimensional approach is better: one that defines emotions in terms of a few basic qualities set out along different axes. These might be things like attracted/repelled, active/passive, ingoing/outgoing or whatever. There are many models along these lines and they have a long history in psychology; they offer better assurance of a comprehensive account and a more hopeful prospect of a reductive explanation.
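As a toy illustration of what a dimensional representation buys you (this is not any particular psychological model, and the axes and coordinates are invented), an emotional state can be held as a point on a couple of axes, say valence and arousal, with a category label recovered, if one is wanted at all, as the nearest named point:

```python
# Toy dimensional representation of emotion: a state is a point in a small
# coordinate space rather than one of a fixed list of categories. Named
# categories, if needed, are just landmarks; all values here are invented.
import math

named_points = {                 # (valence, arousal): attracted/repelled, active/passive
    "happiness": ( 0.8,  0.5),
    "sadness":   (-0.7, -0.4),
    "anger":     (-0.6,  0.8),
    "fear":      (-0.8,  0.7),
    "calm":      ( 0.5, -0.6),
}

def nearest_label(valence, arousal):
    """Map a point in the emotion space to the closest named landmark."""
    return min(named_points,
               key=lambda name: math.dist(named_points[name], (valence, arousal)))

print(nearest_label(0.6, 0.4))     # -> happiness
print(nearest_label(-0.75, 0.75))  # -> fear (close to anger, echoing the fuzzy pairs above)
```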

I suppose you also have to ask whether we want bots that respond emotionally. The introduction of cash machines reduced the banks’ staff costs, but I believe they were also popular because you could get your money without having to smile and talk. I suspect that in a similar way we really just want bots to deliver the goods (often literally), and their lack of messy humanity is their strongest selling point. I suspect though, that in this respect we ain’t seen nothing yet…

Do we need robots to be conscious? Ryota Kanai thinks it is largely up to us whether the machines wake up – but he is working on it. I think his analysis is pretty good and in fact I think we can push it a bit further.

His opening remarks, perhaps due to over-editing, don’t clearly draw the necessary distinction between Hard and Easy problems, or between subjective p-consciousness and action-related a-consciousness (I take it to be the same distinction, though not everyone would agree). Kanai talks about the unsolved mystery of experience, which he says is not a necessary by-product of cognition, and says that nevertheless consciousness must be a product of evolution. Hm. It’s p-consciousness, the ineffable, phenomenal business of what experience is like, that is profoundly mysterious, not a necessary by-product of cognition, and quite possibly nonsense. That kind of consciousness cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates.  A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.

The survival value of A-consciousness comes from the way it allows us to step back from the immediate environment; instead of responding to stimuli that are present now, we can respond to ones that were around last week, or even ones that haven’t happened yet; our behaviour can address complex future contingencies in a way that is both remarkable and powerfully useful. We can make plans, and we can work out what to do in novel situations (not always perfectly, of course, but we can do much better than just running a sequence of instinctive behaviour).

Kanai discusses what must be about the most minimal example of this; our ability to wait three seconds before responding to a stimulus. Whether this should properly be regarded as requiring full consciousness is debatable, but I think he is quite right to situate it within a continuum of detached behaviour which, further along, includes reactions to very complex counterfactuals.

The kind he focuses on particularly is self-consciousness or higher-order consciousness: thinking about ourselves. We have an emergent problem, he points out, with robots whose reasons are hidden; increasingly we cannot tell why a complex piece of machine learning produced the behaviour it did. Why not get the robot to tell us, he says; why not enable it to report its own inner states? And if it becomes able to consider and explain its own internal states, won’t that be a useful facility which is also like the kind of self-reflecting consciousness that some philosophers take to be the crucial feature of the human variety?

There’s an immediate and a more general objection we might raise here. The really bad problem with machine learning is not that we don’t have access to the internal workings of the robot mind; it’s really that in some cases there just is no explanation of the robot’s behaviour that a human being can understand. Getting the robot to report will be no better than trying to examine the state of the robot’s mind directly; in fact it’s worse, because it introduces a new step into the process, one where additional errors can creep in. Kanai describes a community of AIs, endowed with a special language that allows them to report their internal states to each other. It sounds awfully tedious, like a room full of people who, when asked ‘How are you?’ each respond with a detailed health report. Maybe that is quite human in a way after all.

The more general theoretical objection (also rather vaguer, to be honest) is that, in my opinion at least, Kanai and those Higher Order Theory philosophers just overstate the importance of being able to think about your own mental states. It is an interesting and important variety of consciousness, but I think it just comes for free with a sufficiently advanced cognitive apparatus. Once we can think about anything, then we can of course think about our thoughts.

So do we need robots to be conscious? I think conscious thought does two jobs for us that need to be considered separately although they are in fact strongly linked. I think myself that consciousness is basically recognition. When we pull off that trick of waiting for three seconds before we respond to a stimulus, it is because we recognise the wait as a thing whose beginning is present now, and can therefore be treated as another present stimulus. This one simple trick allows us to respond to future things and plan future behaviour in a way that would otherwise seem to contradict the basic principle that a cause must come before its effect.

The first job this does is to allow the planning of effective and complex actions to achieve a given goal. We might want a robot to be able to do that so it can acquire the same kind of effectiveness in planning and dealing with new situations which we have ourselves, a facility which to date has tended to elude robots because of the Frame Problem and other issues to do with the limitations of pre-programmed routines.

The second job is more controversial. Because action motivated by future contingencies has a more complex causal back-story, it looks a bit spooky, and it is the thing that confers on us the reality (or the illusion, if you prefer) of free will and moral responsibility. Because our behaviour comes from consideration of the future, it seems to have no roots in the past, and to originate in our minds. It is what enables us to choose ‘new’ goals for ourselves that are not merely the consequence of goals we already had. Now there is an argument that we don’t want robots to have that. We’ve got enough people around already to originate basic goals and take moral responsibility; they are a dreadful pain already with all the moral and legal issues they raise, so adding a whole new category of potentially immortal electronic busybodies is arguably something best avoided. That probably means we can’t get robots to do job number one for us either; but that’s not so bad because the strategies and plans which job one yields can always be turned into procedures after the fact and fed to ‘simple’ computers to run. We can, in fact, go on doing things the way we do them now; humans work out how to deal with a task and then give the robots a set of instructions; but we retain personhood, free will, agency and moral responsibility for ourselves.

There is quite a big potential downside, though; it might be that the robots, once conscious, would be able to come up with better aims and more effective strategies than we will ever be able to devise. By not giving them consciousness we might be permanently depriving ourselves of the best possible algorithms (and possibly some superior people, but that’s a depressing thought from a human point of view). True, but then I think that’s almost what we are on the brink of doing already. Kanai mentions European initiatives which may insist that computer processes come with an explanation that humans can understand; if put into practice the effect, once the rule collides with some of those processes that simply aren’t capable of explanation, would be to make certain optimal but inscrutable algorithms permanently illegal.

We could have the best of both worlds if we could devise a form of consciousness that did job number one for us without doing job two as an unavoidable by-product, but since in my view they’re all acts of recognition of varying degrees of complexity, I don’t see at the moment how the two can be separated.

Maybe hypnosis is the right state of mind and ‘normal’ is really ‘under-hypnotised’?

That’s one idea that does not appear in the comprehensive synthesis of what we know about hypnosis produced by Terhune, Cleeremans, Raz and Lynn. It is a dense, concentrated document, thick with findings and sources, but they have done a remarkably good job of keeping it as readable as possible, and it’s both a useful overview and full of interesting detail. Terhune has picked out some headlines here.

Hypnosis, it seems, has two components; the induction and one or more suggestions. The induction is what we normally think of as the process of hypnotising someone. It’s the bit that in popular culture is achieved by a swinging watch, mystic hand gestures or other theatrical stuff; in common practice probably just a verbal routine. It seems that although further research is needed around optimising the induction, the details are much less important than we might have been led to think, and Terhune et al don’t find it of primary interest. The truth is that hypnosis is more about the suggestibility of the subject than about the effectiveness of the induction. In fact if you want to streamline your view, you could see the induction as simply the first suggestion. Post-hypnotic suggestions, which take effect after the formal hypnosis session has concluded, may be somewhat different and may use different mechanisms from those that serve immediate suggestions, though it seems this has yet to be fully explored.

Broadly, people fall into three groups. 10 to 15 per cent of people are very suggestible, responding strongly to the full range of suggestions; about the same proportion are weakly suggestible and respond to hypnosis poorly or not at all; the rest of us are somewhere in the middle. Suggestibility is a fairly fixed characteristic which does not change over time and seems to be heritable; but so far as we know it does not correlate strongly with many other cognitive qualities or personality traits (nor with dissociative conditions such as Dissociative Identity Disorder, formerly known as Multiple Personality Disorder). It does interestingly resemble the kind of suggestibility seen in the placebo effect – there’s good evidence of hypnosis itself being therapeutically useful for certain conditions – and both may be correlated with empathy.

Terhune et al regard the debate about whether hypnosis is an altered state of consciousness as an unproductive one; but there are certainly some points of interest here when it comes to consciousness. A key feature of hypnosis is the loss of the sense of agency; hypnotised subjects think of their arm moving, not of having moved their arm. Credible current theories attribute this to the suppression of second-order mental states, or of metacognition; amusingly, this ‘cold control theory’ seems to lend some support to the HOT (higher order theory) view of consciousness (alright, please yourselves). Typically in the literature it seems this is discussed as a derangement of the proper sense of agency, but of course elsewhere people have concluded that our sense of agency is a delusion anyway. So perhaps, to repeat my opening suggestion, it’s the hypnotised subjects who have it right, and if we want to understand our own minds properly we should all enter a hypnotic state. Or perhaps that’s too much like noticing that blind people don’t suffer from optical illusions?

There’s a useful distinction here between voluntary control and top-down control. One interesting thing about hypnosis is that it demonstrates the power of top-down control, where beliefs, suggestions, and other high-level states determine basic physiological responses, something we may be inclined to under-rate. But hypnosis also highlights strongly that top-down control does not imply agency; perhaps we sometimes mistake the former for the latter? At any rate it seems to me that some of this research ought to be highly relevant to the analysis of agency, and suggests some potentially interesting avenues.

Another area of interest is surely the ability of hypnosis to affect attention and perception. It has been shown that changes in colour perception induced by hypnosis are registered in the brain differently from mere imagined changes. If we tell someone under hypnosis to see red for green and green for red, does that change the qualia of the experience or not? Do they really see green instead of red, or merely believe that’s what is happening? If anything the facts of hypnosis seem to compound the philosophical problems rather than helping to solve them; nevertheless it does seem to me that quite a lot of the results so handily summarised here should have a bigger impact on current philosophical discussion than they have had to date.


What is it like to be a Singularity (or in a Singularity)?

You probably know the idea. At some point in the future, computers become generally cleverer than us. They become able to improve themselves faster than we can improve them, and an accelerating loop is formed where each improvement speeds up the process of improving, so that they quickly zoom up to incalculable intelligence and speed, in a kind of explosion of intellectual growth. That’s the Singularity. Some people think that we mere humans will at some point have the opportunity of digitising and uploading ourselves, so that we too can grow vastly cleverer and join in the digital world in which these superhuman conscious entities will exist.

Just to be clear upfront, I think there are some basic flaws in the plausibility of the story which mean the Singularity is never really going to happen: could never happen, in fact. However, it’s interesting to consider what the experience would be like.

How would we digitise ourselves? One way would be to create a digital model of our actual brain, and run that. We could go the whole hog and put ourselves into a fully simulated world, where we could enjoy sweet dreams forever, but that way we should miss out on the intellectual growth which the Singularity seems to offer, and we should also remain at the mercy of the vast new digital intellects who would be running the show. Generally I think it’s believed that only by joining in the cognitive ascent of these mighty new minds can we assure our own future survival.

In that case, is a brain simulation enough? It would run much faster than a meat brain, a point we’ll come back to, but it would surely suffer some of the limitations that biological brains are heir to. We could perhaps gradually enhance our memory and other faculties and improve things that way, a process which might provide a comforting degree of continuity, but it seems likely that entities based on a biological scheme like this would be second-class citizens within the digital world, falling behind the artificial intellects who endlessly redesign and improve themselves. Could we then preserve our identity while turning fully digital and adopting a radical new architecture?

The subject of what constitutes personal identity, be it memory, certain kinds of continuity, or something else, is too large to explore here, except to note a basic question: can our identity ultimately be boiled down to a set of data? If the answer is yes (I actually believe it’s ‘no’, but today we’ll allow anything), then one way or another the way is clear for uploading ourselves into an entirely new digital architecture.

The way is also clear for duplicating and splitting ourselves. Using different copies of our data we can become several people and follow different paths. Can we then re-merge? If the data that constitutes us is static, it seems we should be able to recombine it with few issues; if it is partly a description of a dynamic process we might not be able to do the merger on the fly, and might have to form a third, merged individual. Would we terminate the two contributing selves? Would we worry less about ‘death’ in such cases? If you know your data can always be brought back into action, terminating the processes using that data (for now) might seem less frightening than the irretrievable destruction of your only brain.

This opens up further strange possibilities. At the moment our conscious experience is essentially linear (it’s a bit more complex than that, with layers and threads of attention, but broadly there’s a consistent chronological stream). In the brave new world our consciousness could branch out without limit; or we could have grid experiences, where different loci of consciousness follow crossing paths, merging at each node and then splitting again, before finally reuniting in one node with (very strange) remembered composite experience.

If merging is a possibility, then we should be able to exchange bits of ourselves with other denizens of the digital world, too. When handed a copy of part of someone else we might retain it as exterior data, something we just know about, or incorporate it into a new merged self, whether as a successor to ourselves, or as a kind of child; if all our data is saved the difference perhaps ceases to be of great significance. Could we exchange data like this with the artificial entities that were never human, or would they be too different?

I’m presupposing here that both the ex-humans and the artificial consciousnesses remain multiple and distinct. Perhaps there’s an argument for generally merging into one huge consciousness? I think probably not, because it seems to me that multiple loci of consciousness would just get more done in the way of thinking and experiencing. Perhaps when we became sufficiently linked and multi-threaded, with polydimensional multi-member grid consciousnesses binding everything loosely together anyway, the question of whether we are one or many – and how many – might not seem important any more.

If we can exchange experiences, does that solve the Hard Problem? We no longer need to worry whether your experience of red is the same as mine, we just swap. Now many people (and I am one) would think that fully digitised entities wouldn’t be having real experiences anyway, so any data exchange they might indulge in would be irrelevant. There are several ways it could be done, of course. It might be a very abstract business or entities of human descent might exchange actual neural data from their old selves. If we use data which, fed into a meat brain, definitely produces proper experience, it perhaps gets a little harder to argue that the exchange is phenomenally empty.

The strange thing is, even if we put all the doubts aside and assume that data exchanges really do transfer subjective experience, the question doesn’t go away. It might be that attachment to a particular node of consciousness conditions the experience so that it is different anyway.

Consider the example of experiences transferred within a single individual, but over time. Let’s think of acquired tastes. When you first tasted beer, it seemed unpleasant; now you like it. Does it taste the same, with you having learnt to like that same taste? Or did it in fact taste different to you back then – more bitter, more sour? I’m not sure it’s possible to answer with great confidence. In the same way, if one node within the realm of the Singularity ‘runs’ another’s experience, it may react differently, and we can’t say for sure whether the phenomenal experience generated is the same or not…

I’m assuming a sort of cyberspace where these digital entities live – but what do they do all day? At one end of the spectrum, they might play video games constantly – rather sadly reproducing the world they left behind. Or at the intellectually pure end, they might devote themselves to the study of maths and philosophy. Perhaps there will be new pursuits that we, in our stupid meaty way, cannot even imagine as yet. But it’s hard not to see a certain tedious emptiness in the pure life of the mind as it would be available to these intellectual giants. They might be tempted to go on playing a role in the real world.

The real world, though, is far too slow. Whatever else they have improved, they will surely have racked up the speed of computation to the point where thousands of years of subjective time take only a few minutes of real world time. The ordinary physical world will seem to have slowed down very close to the point of stopping altogether; the time required to achieve anything much in the real world is going to seem like millions of years.

In fact, that acceleration means that from the point of view of ordinary time, the culture within the Singularity will quickly reach a limit at which everything it could ever have hoped to achieve is done. Whatever projects or research the Singularitarians become interested in will be completed and wrapped up in the blinking of an eye. Unless you think the future course of civilisation is somehow infinite, it will be completed in no time. This might explain the Fermi Paradox, the apparently puzzling absence of advanced alien civilisations: once they invent computing, galactic cultures go into the Singularity, wrap themselves up in a total intellectual consummation, and within a few days at most, fall silent forever.

Is there a Hard Problem of physics that explains the Hard Problem of consciousness?

Hedda Hassel Mørch has a thoughtful piece in Nautilus’s interesting Consciousness issue (well worth a look generally) that raises this idea. What is the alleged Hard Problem of physics? She says it goes like this…

What is physical matter in and of itself, behind the mathematical structure described by physics?

To cut to the chase, Mørch proposes that things in themselves have a nature not touched by physics, and that nature is consciousness. This explains the original Hard Problem – we, like other things, just are by nature conscious; but because that consciousness is our inward essence rather than one of our physical properties, it is missed out in the scientific account.

I’m sympathetic to the idea that the original Hard Problem is about an aspect of the world that physics misses out, but according to me that aspect is just the reality of things. There may not, according to me, be much more that can usefully be said about it. Mørch, I think, takes two wrong turns. The first is to think that there are such things as things in themselves, apart from observable properties. The second is to think that if this were so, it would justify panpsychism, which is where she ends up.

Let’s start by looking at that Hard Problem of physics. Mørch suggests that physics is about the mathematical structure of reality, which is true enough, but the point here is that physics is also about observable properties; it’s nothing if not empirical. If things have a nature in themselves that cannot be detected directly or indirectly from observable properties, physics simply isn’t interested, because those things-in-themselves make no difference to any possible observation. No doubt some physicists would be inclined to denounce such unobservable items as absurd or vacuous, but properly speaking they are just outside the scope of physics, neither to be affirmed nor denied. It follows, I think, that this can’t be a Hard Problem of physics; it’s actually a Hard Problem of metaphysics.

This is awkward because we know that human consciousness does have physical manifestations that are readily amenable to physical investigation; all of our conscious behaviour, our speech and writing, for example. Our new Hard Problem (let’s call it the NHP) can’t help us with those; it is completely irrelevant to our physical behaviour and cannot give us any account of those manifestations of consciousness. That is puzzling and deeply problematic – but only in the same way as the old Hard Problem (OHP) – so perhaps we are on the right track after all?

The problem is that I don’t think the NHP helps us even on a metaphysical level. Since we can’t investigate the essential nature of things empirically, we can only know about it by pure reasoning; and I don’t know of any purely rational laws of metaphysics that tell us about it. Can the inward nature of things change? If so, what are the (pseudo-causal?) laws of intrinsic change that govern that process? If the inward nature doesn’t change, must we take everything to be essentially constant and eternal in itself? That Parmenidean changelessness would be particularly odd in entities we are relying on to explain the fleeting, evanescent business of subjective experience.

Of course Mørch and others who make a similar case don’t claim to present a set of a priori conclusions about their own nature; rather they suggest that the way we know about the essence of things is through direct experience. The inner nature of things is unknowable except in that one case where the thing whose inner nature is to be known is us. We know our own nature, at least. It’s intuitively appealing – but how do we know our own real nature? Why should being a thing bring knowledge of that thing? Just because we have an essential nature, there’s no reason to suppose we are acquainted with that inner nature; again we seem to need some hefty metaphysics to explain this, which is actually lacking. All the other examples of knowledge I can think of are constructed, won through experience, not inherent. If we have to invent a new kind of knowledge to support the theory the foundations may be weak.

At the end of the day, the simplest and most parsimonious view, I think, is to say that things just are made up of their properties, with no essential nub besides. Leibniz’s Law tells us that that’s the nature of identity. To be sure, the list will include abstract properties as well as purely physical ones, but abstract properties that are amenable to empirical test, not ones that stand apart from any possible observation. Mørch disagrees:

Some have argued that there is nothing more to particles than their relations, but intuition rebels at this claim. For there to be a relation, there must be two things being related. Otherwise, the relation is empty—a show that goes on without performers, or a castle constructed out of thin air.

I think the argument is rather that the properties of a particle relate to each other, while these groups of related properties relate in turn to other such groups. Groups don’t require a definitive member, and particles don’t require a single definitive essence. Indeed, since the particle’s essential self cannot determine any of its properties (or it could be brought within the pale of physics) it’s hard to see how it can have a defined relation to any of them and what role the particle-in-itself can play in Mørch’s relational show.

The second point where I think Mørch goes wrong is in the leap to panpsychism. The argument seems to be that the NHP requires non-structural stuff (which she likens to the hardware on which the software of the laws of physics runs – though I myself wouldn’t buy unstructured hardware); the OHP gives us the non-structural essence of conscious experience (of course conscious experience does have structure, but Mørch takes it that down there somewhere is the structureless ineffable something-it-is-like); why not assume that the latter is universal and fills the gap exposed by the NHP?

Well, because other matter exhibits no signs of consciousness, and because the fact that our essence is a conscious essence just wouldn’t warrant the assumption that all essences are conscious ones. Wouldn’t it be simpler to think that only the essences of outwardly conscious beings are conscious essences? This is quite apart from the many problems of panpsychism, which we’ve discussed before, and which Mørch fairly acknowledges.

So I’m not convinced, but the case is a bold and stimulating one and more persuasively argued than it may seem from my account. I applaud the aims and spirit of the expedition even though I may regret the direction it took.

Self-discovery: fascinating journey of life or load of tosh? An IAI discussion.

On the whole, I think the vastness of the subject means we get no more than first steps here, though the directions are at least interesting. Joanna Kavenna notes the paradoxical entanglements that can arise from self-examination and makes an interesting comparison with the process of novelists finding their ‘voice’. Exploration of selves is of course the bedrock of the novel, a topic which could take up many pages in itself. She asserts that the self is experientially real, but that thought also floats away unexamined.

David Chalmers has a less misty proposition; people have traits and we are inclined to think of some as deep or essential. Identifying these is a reasonable project, but not without dangers if we settle on the wrong ones.

Ed Stafford seems to be uncomfortable with philosophy unless it comes from an ayahuasca session or a distant tribe. He likes the idea of thinking with your stomach, but does not shed any light on the interesting question of how stomach thoughts differ from brain ones. In general he seems to take the view that for well-adjusted people there is no mystery, one knows who one is and there’s no need to wibble about it. Oddly, though, he mentions being dropped on a desert island where the solitude was so severe that, even with the helicopter still in view, he vomited. To suffer radical depersonalisation after a couple of minutes alone on a beach seems an extraordinary example of personal fragility, but I suppose we are to understand this was before he centred himself through contact with more robust cultures. Of course, those who reject theory always in fact have a theory; it’s just one that they either haven’t examined or don’t want examined. In response to Chalmers’ suggestion that a loving environment can surely lead to personal growth, he seems to begin adding qualifications to his view of the robustly settled personality, but if we are witnessing actual self-discovery here it doesn’t go far.

Myself I reckon that you don’t need to identify your essential traits to experience self-discovery; merely becoming conscious of your own traits renders them self-conscious and transforms them, an iterative process that represents a worthwhile kind of growth, both moral and psychological. But I’ve never tried ayahuasca.

Is colour the problem or the solution? Last year we heard about a way of correcting colour blindness with glasses. It only works for certain kinds of colour blindness, but the fact that it works at all is astonishing. Human colour vision relies on three different kinds of receptor cone cells in the retina; each picks up a different wavelength and the brain extrapolates from those data to fill in the spectrum. (Actually, it’s far more complex than that, with the background and light conditions taken into account so that the brain delivers a consistent colour reading for the same object even though in different conditions the light reflected from it may be of completely different wavelengths. But let’s leave that aside for now and stick with the simplistic view.) The thing is, receptor cells actually respond to a range of wavelengths; in some people two kinds of receptors have ranges that overlap so much the brain can’t discriminate. What the glasses do is cut out most of the overlapping wavelengths; suddenly the data from the different receptor cells are very different, and the brain can do a full-colour job at last.

Now a somewhat similar approach has been used to produce glasses that turn normal vision into super colour vision. These new lenses exploit the fact that we have two eyes; by cutting out different parts of the range of wavelengths detected by the same kind of receptor in the right and left eyes, they give the effect of four kinds of receptor rather than three. In principle the same approach could double up all three kinds of receptor, giving us the effective equivalent of six kinds of receptor, though this has not been tried yet.

This tetrachromacy or four-colour system is not unprecedented. Some animals, notably pigeons, naturally have four or even more kinds of receptor. And a significant percentage of women, benefiting from the second copy of the relevant genes that you get when you have two ‘X’ chromosomes, have four kinds of receptor, though it doesn’t always lead to enhanced colour vision because in most cases the range of the fourth receptor overlaps the range of another one too much to be useful.

There is no doubt that all three kinds of tetrachromat – pigeons, women with lucky genes, and people with special glasses – can discriminate between more colours than the rest of us. Because our trichromat eyes have only three sources of data, they have to treat mixtures of wavelengths as though they were the same as pure wavelengths with values equivalent to the average of the mixtures – though they’re not. Tetrachromats can do a bit better at this (and I conjecture that colour video and camera images, which use only the three colours needed to fool normal eyes, must sometimes look a bit strange to tetrachromats).
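Here is a toy numerical illustration of both points, using invented Gaussian sensitivity curves rather than real cone data: two physically different lights can be exact metamers for a pair of heavily overlapping receptors, yet an extra channel made by filtering one receptor class, essentially the glasses trick, tells them apart at once.

```python
# Toy model with made-up sensitivity curves: two overlapping receptors can be
# fooled by a mixture ('metamer'), but a filtered copy of one receptor, playing
# the part of the glasses' extra effective channel, is not fooled.
import numpy as np

wl = np.linspace(400, 700, 601)                     # wavelength grid in nm

def gauss(peak, width):
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

cone_m, cone_l = gauss(540, 40), gauss(560, 40)     # two overlapping receptor curves

def respond(cones, light):
    """Each receptor's response: its sensitivity summed against the light."""
    return np.array([float(np.sum(c * light)) for c in cones])

light1 = gauss(550, 5)                              # a narrow-band light at 550nm
p530, p570 = gauss(530, 5), gauss(570, 5)           # two primaries for a mixture

# Choose mixture weights so both receptors respond exactly as they do to light1
A = np.column_stack([respond([cone_m, cone_l], p) for p in (p530, p570)])
w = np.linalg.solve(A, respond([cone_m, cone_l], light1))
light2 = w[0] * p530 + w[1] * p570                  # a metamer of light1

print(np.allclose(respond([cone_m, cone_l], light1),
                  respond([cone_m, cone_l], light2)))   # True: indistinguishable

notch = np.where(np.abs(wl - 550) < 12, 0.0, 1.0)   # filter worn over one eye
extra = cone_l * notch                              # the effective fourth receptor
print(float(np.sum(extra * light1)), float(np.sum(extra * light2)))  # very different
```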

Do tetrachromats see the same spectrum as we do, but in better detail, or do they actually see different colours? There’s never been a way to tell for sure. Tetrachromats can’t tell us what colours they see any more than we can tell each other whether my red is the same as yours, or instead is the same as what you experience for green. The curious fact that the ends of the spectrum join up into a complete colour wheel might support the idea that the spectrum is in some sense an objective reality, based on mathematical harmonic relationships analogous to those of sound waves; in effect we see a single octave of colour with the wavelength at one end double (or half) that at the other. I’ve sort of speculated in the past that if our eyes could see a much wider range of wavelengths we would see lower and higher octaves of colour; not wholly new colours like Terry Pratchett’s octarine, but higher and lower reds, greens and blues. I speculated further that ‘lower’ and ‘higher’ might actually be experienced as ‘cooler’ and ‘hotter’. That is of course the wildest guesswork, but the thesis that everyone – tetrachromats included – sees the same spectrum but in lesser or greater detail seems to be confirmed by the experimenters if I’m reading it right.

Of course, colour vision is not just a matter of what happens in the retina; there is also a neural colour space mapped out in the brain (which interestingly is a little more extensive than the colour space of the real world, leading to the hidden existence of ‘chimerical’ colours).  Do pigeons, human tetrachromats, and human trichromats all map colours to similar neural spaces? I haven’t been able to find out, but I’m guessing the answer is yes. If it weren’t so, there would be potential issues over neural plasticity. If your brain receives no signals from one eye during your early life, it re-purposes the relevant bits of neural real estate and you cannot get your vision back later even if the eye starts sending the right kind of signal. We might expect that people who were colour blind from birth would be affected in a similar way, yet in fact use of the new glasses seems to bring an intact colour system straight into operation for the first time. So it might be that a standard spectral colour space is hard-wired into the genes of all of us (even pigeons), or again it might be that the spectrum is a mathematical reality which any visual system must represent, albeit with varying fidelity.

All of this is skating around the classic philosophical issues. Does Mary, who never saw colours, know something new when she has seen red? Well, we can say with confidence that the redness will be registered and mapped properly; she will not have lost the ability to see colour through being brought up in a monochrome world. More importantly, the scientifically tractable aspects of colour vision have moved another step closer to the subjective experience. We have some objective reasons for supposing that Mary’s colour experience will be arranged along the same spectral structure as ours, though not necessarily graduated with the same fineness.

None of this will banish the Hard Problem, or dispel our particular sense that colours especially are subjective optional extras. For a long time some have thought of colour as a ‘secondary’ property, in the observer, not the world; not like such properties as mass or volume, which are more ‘real’. The newly-understood complexity of colour vision leads to new arguments that it is in fact artificial, a useful artefact in the brain, in some sense not really there in objective reality.  My feeling though is that if we can all experience tetrachromacy, the gap between the objective and the subjective will not be perceived as being so unbridgeable as it has been to date.


Are robots short-changing us imaginatively?

Chat-bots, it seems, might be getting their second (or perhaps their third or fourth) wind. While they’re not exactly great conversationalists, the recent wave of digital assistants demonstrates the appeal of a computer you can talk to like a human being. Some now claim that a new generation of bots using deep machine learning techniques might be way better at human conversation than their chat-bot predecessors, whose utterances often veered rapidly from the gnomic to the insane.

A straw in the wind might be the Hugging Face app (I may be showing my age, but for me that name strongly evokes a ghastly Alien parasite). This greatly impressed Rachel Metz, who apparently came to see it as a friend. It’s certainly not an assistant – it doesn’t do anything except talk to you in a kind of parody of a bubbly teen with a limping attention span. The thing itself is available for iOS and the underlying technology, without the teen angle, appears to be on show here, though I don’t really recommend spending any time on either. Actual performance, based on a small sample (I can only take so much), is disappointing; rather than a leap forward it seems distinctly inferior to some Loebner prize winners that never claimed to be doing machine learning. Perhaps it will get better. Jordan Pearson here expresses what seem reasonable reservations about an app aimed at teens that demands a selfie from users as its opening move.

Behind all this, it seems to me, is the looming presence of Spike Jonze’s film Her, in which a professional letter writer from the near future (They still have letters? They still write – with pens?) becomes devoted to his digital assistant Samantha. Samantha is just one instance of a bot which people all over are falling in love with. The AIs in the film are puzzlingly referred to as Operating Systems, a randomly inappropriate term that perhaps suggests that Jonze didn’t waste any time reading up on the technology. It’s not a bad film at all, but it isn’t really about AI; nothing much would be lost if Samantha were a fairy, a daemon, or an imaginary friend. There’s some suggestion that she learns and grows, but in fact she seems to be a fully developed human mind, if not a superlative one, right from her first words. It’s perhaps unfair to single the relatively thoughtful Her out for blame, because with some honourable exceptions the vast majority of robots in fiction are like this; humans in masks.

Fictional robots are, in fact, fakes, and so are all chat-bots. No chat-bot designer ever set out to create independent cognition first and then let it speak; instead they simply echo us back to ourselves as best they can manage. This is a shame, because the different patterns of thought that a robot might have, the special mistakes it might be prone to, and the unexpected insights it might generate, are potentially very interesting; indeed I should have thought they were fertile ground for imaginative writers. But perhaps ‘imaginative’ understates the amazing creative powers that would be needed to think yourself out of your own essential cognitive nature. I read a discussion the other day about human nature; it seems to me that the truth is we don’t know what human nature is like because we have nothing much to compare it with; it won’t be until we communicate with aliens or talk properly with non-fake robots that we’ll be able to form a proper conception of ourselves.

To a degree it can be argued that there are examples of this happening already. Robots that aspire to Artificial General Intelligence in real world situations suffer badly from the Frame Problem, for instance. That problem comes in several forms, but I think it can be glossed briefly as the job of picking out from the unfiltered world the things that need attention. AI is terrible at this, usually becoming mired in irrelevance (hey, the fact that something hasn’t changed might be more important than the fact that something else has). Dennett, rightly I think, described this issue as not the discovery of a new problem for robots so much as a new discovery about human nature; turns out we’re weirdly, inexplicably good at something we never even realised was difficult.

How interesting it would be to learn more about ourselves along those challenging, mind-opening lines; but so long as we keep getting robots that are really human beings, mirroring us back to ourselves reassuringly, it isn’t going to happen.