The Stanford Encyclopedia of Philosophy is twenty years old. It gives me surprisingly warm feelings towards Stanford that this excellent free resource exists. It’s written by experts, continuously updated, and amazingly extensive. Long may it grow and flourish!

Writing an encyclopaedia is challenging, but an encyclopaedia of philosophy must take the biscuit. For a good encyclopaedia you need a robust analysis of the topics in the field so that they can be dealt with systematically, comprehensively, and proportionately. In philosophy there is never a consensus, even about how to frame the questions, never mind about what kind of answers might be useful. This must make it very difficult: do you try to cover the most popular schools of thought in an area? All the logically possible positions one might take up?  A purely historical survey? Or summarise what the landscape is really like, inevitably importing your own preconceptions?

I’ve seen people complain that the SEP is not very accessible to newcomers, and I think the problem is partly that the subject is so protean. If you read an article in the SEP, you’ll get a good view and some thought-provoking ideas; but what a noob looks for are a few pointers and landmarks. If I read a biography I want to know quickly about the subject’s main works, their personal life, their situation in relation to other people in the field, the name of their theory or school, and so on. Most SEP subject articles cannot give you this kind of standard information in relation to philosophical problems. There is a real chance that if you read an SEP article and then go and talk to professionals, they won’t really get what you’re talking about. They’ll look at you blankly and then say something like:

“Oh, yes, I see where you’re coming from, but you know, I don’t really think of it that way…”

It’s not because the article you read was bad, it’s because everyone has a unique perspective on what the problem even is.

Let’s look at Consciousness. The content page has:

consciousness (Robert Van Gulick)

  • animal (Colin Allen and Michael Trestman)
  • higher-order theories (Peter Carruthers)
  • and intentionality (Charles Siewert)
  • representational theories of (William Lycan)
  • seventeenth-century theories of (Larry M. Jorgensen)
  • temporal (Barry Dainton)
  • unity of (Andrew Brook and Paul Raymont)

All interesting articles, but clearly not a systematic treatment based on a prior analysis. It looks more like the set of articles that just happened to get written with consciousness as part of the subject. Animal consciousness, but no robot consciousness? Temporal consciousness, but no qualia or phenomenal consciousness? But I’m probably looking in the wrong place.

In Robert Van Gulick’s main article we have something that looks much more like a decent shot at a comprehensive overview, but though he’s done a good job it won’t be a recognisable structure to anyone who hasn’t read this specific article. I really like the neat division into descriptive, explanatory, and functional questions; it’s quite helpful and illuminating: but you can’t rely on anyone recognising it (next time you meet a professor of philosophy, ask him: if we divide the problems of consciousness into three, and the first two are descriptive and explanatory, what would the third be? Maybe he’ll say ‘Functional’, but maybe he’ll say ‘Reductive’ or something else – ‘Intentional’ or ‘Experiential’; I’m pretty sure he’ll need to think about it). Under ‘Concepts of Consciousness’ Van Gulick has ‘Creature Consciousness’: our noob would probably go away imagining that this is a well-known topic which can be mentioned in confident expectation of the implications being understood. Alas, no: I’ve read quite a few books about consciousness and can’t immediately call to mind any other substantial reference to ‘Creature Consciousness’; I’m pretty sure that unless you went on to explain that you were differentiating it from ‘State Consciousness’ and ‘Consciousness as an Entity’, you might be misunderstood.

None of this is meant as a criticism of the piece: Van Gulick has done a great job on most counts (the one thing I would really fault is that the influence of AI in reviving the topic and promoting functionalist views is, I think, seriously underplayed). If you read the piece you  will get about as good a view of the topic as that many words could give you, and if you’re new to it you will run across some stimulating ideas (and some that will strike you as ridiculous). But when you next read a paper on philosophy of mind, you’ll still have to work out from scratch how the problem is being interpreted. That’s just the way it is.

Does that mean philosophy of mind never gets anywhere? No, I really don’t think so, though it’s outstandingly hard to provide proof of progress. In science we hope to boil down all the hypotheses to a single correct theory: in philosophy perhaps we have to be happy that we now have more answers (and more problems) than ever before.

And the SEP has got most of them! Happy Birthday!

I’ve been having fun recently, writing Viewpoints and commenting on Aeon Ideas; if you can bear even more noodling from me, why not have a look – or sign up and join in?

I’ve also been attempting some long-overdue upgrades. As a result Conscious Entities should now be fully compatible with mobile devices. There’s also a Facebook page and a Twitter feed (@peter_hankins), though I have no idea what I’m doing with those, so stand by for embarrassing mistakes…

There were a number of reports recently that a robot had passed ‘one of the tests for self-awareness’. They seem to stem mainly from this New Scientist piece (free registration may be required to see the whole thing, but honestly I’m not sure it’s worth it). That in turn reported an experiment conducted by Selmer Bringsjord of Rensselaer, due to be presented at the Ro-Man conference in a month’s time. The programme for the conference looks very interesting and the experiment is due to feature in a session on ‘Real Robots That Pass Human Tests of Self Awareness’.

The claim is that Bringsjord’s bot passed a form of the Wise Man test. The story behind the Wise Man test has three WMs tested by the king; he makes them wear hats which are either blue or white: they cannot see their own hat but can see both of the others. They’re told that there is at least one blue hat, and that the test is fair, to be won by the first WM who correctly announces the colour of his own hat. There is a chain of logical reasoning which produces the right conclusion: we can cut to the chase by noticing that the test can’t be fair unless all the hats are the same colour, because all other arrangements give one WM an advantage. Since at least one hat is blue, they all are.
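For what it’s worth, the shortcut argument can be checked mechanically. The little sketch below is purely my own illustration (nothing to do with the conference paper): it enumerates the hat assignments consistent with the king’s promise of at least one blue hat, and counts an arrangement as ‘fair’ only when every wise man sees the same pair of hats as the others, so that nobody starts with an informational advantage.

```python
from itertools import product

# All hat assignments consistent with the king's promise: at least one blue hat.
assignments = [hats for hats in product(['blue', 'white'], repeat=3)
               if 'blue' in hats]

def is_fair(hats):
    # Fair = every wise man sees the same (unordered) pair of hats on the
    # other two, so no one has an informational advantage.
    views = [tuple(sorted(h for j, h in enumerate(hats) if j != i))
             for i in range(len(hats))]
    return len(set(views)) == 1

print([hats for hats in assignments if is_fair(hats)])
# [('blue', 'blue', 'blue')] -- the only fair arrangement, so every hat is blue
```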

You’ll notice that this is essentially a test of logic, not self awareness. If solving the problem required being aware that you were one of the WMs then we who merely read about it wouldn’t be able to come up with the answer – because we’re not one of the WMs and couldn’t possibly have that awareness. But there’s sorta,  kinda something about working with other people’s point of view in there.

Bringsjord’s bots actually did something rather different. They were apparently told that two of the three had been given a ‘dumbing’ pill that stopped them from being able to speak (actually a switch had been turned off; were the robots really clever enough to understand that distinction and the difference between a pill and a switch?); then they were asked ‘did you get the dumbing pill?’ Only one, of course, could answer, and duly answered ‘I don’t know’; then, having heard its own voice, it was able to go on to say ‘Oh, wait, now I know…!’
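As far as I can make out, the logic would have to be something like the toy reconstruction below – entirely my own guess at the structure, with invented names, not Bringsjord’s code. A robot that cannot settle the question from what it has been told answers ‘I don’t know’, and then treats the sound of its own voice as new evidence that it was not silenced.

```python
# Toy reconstruction of the bot version of the test (a guess, not Bringsjord's code).

class Bot:
    def __init__(self, name, silenced):
        self.name = name
        self.silenced = silenced      # ground truth; the bot itself doesn't know it

    def try_answer(self):
        if self.silenced:
            return None               # a 'dumbed' bot produces no speech at all
        return "I don't know"         # cannot yet rule out having had the pill

    def hear_own_voice(self, utterance):
        # Hearing its own voice is evidence that it was not given the dumbing pill.
        if utterance is not None:
            return f"{self.name}: Oh, wait, now I know - I didn't get the pill!"
        return None

bots = [Bot("A", silenced=True), Bot("B", silenced=True), Bot("C", silenced=False)]
for bot in bots:
    spoken = bot.try_answer()
    revision = bot.hear_own_voice(spoken)
    if revision:
        print(spoken, "...", revision)   # only C speaks, and then revises its answer
```

Whether that amounts to a test of self-awareness, or just to a bot that can tag its own audio output, is of course exactly the question.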

This test is obviously different from the original in many ways; it doesn’t involve the same logic. Fairness, an essential factor in the original version, doesn’t matter here, and in fact the test is egregiously unfair; only one bot can possibly win. The bot version seems to rest mainly on the robot being able to distinguish its own voice from those of the others (of course the others couldn’t answer anyway; if they’d been really smart they would all have answered ‘I wasn’t dumbed’, knowing that if they had been dumbed the incorrect conclusion would never be uttered). It does perhaps have a broadly similar sorta, kinda relation to awareness of points of view.

I don’t propose to try to unpick the reasoning here any further: I doubt whether the experiment tells us much, but as presented in the New Scientist piece the logic is such a dog’s breakfast and the details are so scanty that it’s impossible to get a proper idea of what is going on. I should say that I have no doubt Bringsjord’s actual presentation will be impeccably clear and well-justified in both its claims and its reasoning; foggy reports of clear research are more common than vice versa.

There’s a general problem here about the slipperiness of defining human qualities. Ever since Plato attempted to define a man as ‘a featherless biped’ and was gleefully refuted by Diogenes with a plucked chicken, every definition of the special quality that defines the human mind seems to be torpedoed by counter-examples. Part of the problem is a curious bind whereby the task of definition requires you to give a specific test task; but it is the very non-specific open-ended generality of human thought you’re trying to capture. This, I expect, is why so many specific tasks that once seemed definitively reserved for humans have eventually been performed by computers, which perhaps can do anything which is specified narrowly enough.

We don’t know exactly what Bringsjord’s bots did, and it matters. They could have been programmed explicitly just to do exactly what they did do, which is boring: they could have been given some general purpose module that does not terminate with the first answer and shows up well in these circumstances, which might well be of interest; or they could have been endowed with massive understanding of the real world significance of such matters as pills, switches, dumbness, wise men, and so on, which would be a miracle and raise the question of why Bringsjord was pissing about with such trivial experiments when he had such godlike machines to offer.

As I say, though, it’s a general problem. In my view, the absence of any details about how the Room works is one of the fatal flaws in John Searle’s Chinese Room thought experiment; arguably the same issue arises for the Turing Test. Would we award full personhood to a robot that could keep up a good conversation? I’m not sure I would unless I had a clear idea of how it worked.

I think there are two reasonable conclusions we can draw, both depressing. One is that we can’t devise a good test for human qualities because we simply don’t know what those qualities are, and we’ll have to solve that imponderable riddle before we can get anywhere. The other possibility is that the specialness of human thought is permanently indefinable. Something about that specialness involves genuine originality, breaking the system, transcending the existing rules; so just as the robots will eventually conquer any specific test we set up, the human mind will always leap out of whatever parameters we set up for it.

But who knows, maybe the Ro-Man conference will surprise us with new grounds for optimism.

The new film Self/less is based around the transfer of consciousness. An old man buys a new body to transfer into, and then finds that, contrary to what he was told, it wasn’t grown specially: there was an original tenant who, moreover, isn’t really gone. I understand that this is not a film that probes the metaphysics of the issue very deeply; it’s more about fight scenes; but the interesting thing is how readily we accept the idea of transferred consciousness.
In fact, it’s not at all a new idea; if memory serves, H. G. Wells wrote a short story on a similar theme: a fit young man with no family is approached by a rich old man in poor health who apparently wants to leave him all his fortune; then he finds himself transferred unwillingly into the collapsing ancient body, while the old man makes off in his fresh young one. In Wells’ version the twist is that the old man gets killed in a chance traffic accident, thereby dying before his old body does anyway.
The thing is, how could a transfer possibly work? In Wells’ story it’s apparently done with drugs, which is mysterious; more normally there’s some kind of brain-transfer helmet thing. It’s pretty much as though all you needed to do was run an EEG and then reverse the polarity. That makes no sense. I mean, scanning the brain in sufficient detail is mind-boggling to begin with, but the idea that you could then use much the same equipment to change the content of the mind is in another league of bogglement. Weather satellites record the meteorology of the world, but you cannot use them to reset it. This is why uploading your mind to a computer, while highly problematic, is far easier to entertain than transferring it to another biological body.
The big problem is that part of the content of the brain is, in effect, structural. It depends on which neurons are attached to which (and for that matter, which neurons there are), and on the strength and nature of that linkage. It’s true that neural activity is important too, and we can stimulate that electrically; even with induction gear that resembles the SF cliché: but we can’t actually restructure the brain that way.
The intuition that transfers should be possible perhaps rests on an idea that the brain is, as it were, basically hardware, and consciousness is essentially software; but it isn’t really like that. You can’t run one person’s mind on another’s brain.
There is in fact no reason to suppose that there’s much of a read-across between brains: they may all be intractably unique. We know that there tends to be a similar regionalisation of functions in the brain, but there’s no guarantee that your neural encoding of ‘grandmother’ resembles mine or is similarly placed. Worse, it’s entirely possible that the ‘meaning’ of neural assemblages differs according to context and which other structures are connected, so that even if I could identify my ‘grandmother’ neurons, and graft them in in place of yours, they would have a different significance, or none.
Perhaps we need a more sophisticated and bespoke approach. First we thoroughly decipher both brains, and learn how their own idiosyncratic encodings work. Then we work out a translator. This is a task of unimaginable complexity and particularity, but it’s not obviously impossible in principle. I think it’s likely that for each pair of brains you would need a unique translator: a universal one seems such an heroic aspiration that I really doubt its viability; a universal mind map would be an achievement of such interest and power that merely transferring minds would seem like time-wasting games by comparison.
I imagine that even once a translator had been achieved, it would normally only achieve partial success. There would be a limit to how far you could go with nano-bot microsurgery, and there might be certain inherent limitations: certain ideas, certain memories, might just be impossible to accommodate in the new brain because of their incompatibility with structural or conceptual features that were too deeply embedded to change. The task you were undertaking would be like the job of turning Pride and Prejudice into Don Quixote simply by moving the words around and perhaps in a few key cases allowing yourself one or two anagrams: the result might be recognisable, but it wouldn’t be perfect. The transfer recipient would believe themselves to be Transferee, but they would have strange memory gaps and certain cognitive deficits, perhaps not unlike Alzheimer’s, as well as artefacts: little beliefs or tendencies that existed neither in Transferee nor in Recipient, but were generated incidentally through the process of reconstruction.
It’s a much more shadowy and unappealing picture, and it makes rather clearer the real killer: that though Recipient might come to resemble Transferee, they wouldn’t really be them.
In the end, we’re not data, or a program; we’re real and particular biological entities, and as such our ontology is radically different. I said above that the plausibility of transfers comes from thinking of consciousness as data, which I think is partly true: but perhaps there’s something else going on here; a very old mental habit of thinking of the soul as detachable and transferable. This might be another case where optimists about the capacity of IT are unconsciously much less materialist than they think.

An interesting paper in Behavioral and Brain Sciences from Morsella, Godwin, Jantz, Krieger, and Gazzaley, reported here; an accessible pdf draft version is here.

It’s quite a complex, thoughtful paper, but the gist is clearly that consciousness doesn’t really do that much. The authors take the view that many functions generally regarded as conscious are in fact automatic and pre- or un-conscious: what consciousness hosts is not the process but the results. It looks to consciousness as though it’s doing the work, but really it isn’t.

In itself this is not a new view, of course. We’ve heard of other theories that base their interpretation on the idea that consciousness only deals with small nuggets of information fed to it by unconscious processes. Indeed, as the authors acknowledge, some take the view that consciousness does nothing at all: that it is an epiphenomenon, a causal dead end, adding no more to human behaviour than the whistle adds to the locomotive.

Morsella et al. don’t go that far. In their view we’re lacking a clear idea of the prime function of consciousness; their Passive Frame Theory holds that the function is to constrain and direct skeletal muscle output, thereby yielding adaptive behaviour. I’d have thought quite a lot of unconscious processes, even simple reflexes, could be said to do that too; philosophically I think we’d look for a bit more clarity about the characteristic ways in which consciousness, as opposed to instinct or other unconscious urges, influences behaviour; but perhaps I’m nit-picking.

The authors certainly offer an explanation as to what consciousness does. In their view, well-developed packages are delivered to consciousness from various unconscious functions. In consciousness these form a kind of combinatorial jigsaw, very regularly refreshed in a conscious/unconscious cycle; the key thing is that these packages are encapsulated and cannot influence each other. This is what distinguishes the theory from the widely popular idea of a Global Workspace, originated by Bernard Baars: no work is done on the conscious contents while they’re there; they just sit until refreshed or replaced.

The idea of encapsulation is made plausible by various examples. When we recognise an illusion, we don’t stop experiencing it; when we choose not to eat, we don’t stop feeling hungry, and so on. It’s clearly the case that sometimes this happens, but can we say that there are really no cases where one input alters our conscious perception of another? I suspect that any examples we might come up with would be deemed by Morsella et al to occur in pre-conscious processing and only seem to happen in consciousness: the danger with that is that the authors might end up simply disqualifying all counter-examples and thereby rendering their thesis unfalsifiable. It would help if we could have a sharper criterion of what is, and isn’t, within consciousness.

As I say, the authors do hold that consciousness influences behaviour, though not by its own functioning; instead it does so by, in effect, feeding other unconscious functions. An analogy with the internet is offered: the net facilitates all kinds of functions – auctions, research, social interaction, filling the world with cat pictures, and so on – but it would be quite wrong to say that in itself it does any of these things.

That’s OK, but it seems to delegate an awful lot of things that we might have regarded as conscious cognitive activity to these later unconscious functions, and it would help to have more of an account of how they do their thing and how consciousness contrives to facilitate them. It seems it merely brings things together, but how does that help? If they’re side by side but otherwise unprocessed, I’m not sure what the value of merely juxtaposing them (in some sense which is itself a little unclear) amounts to.

So I think there’s more to do if Passive Frame Theory is going to be a success; but it’s an interesting start.

The recent short NYT series on robots has a dying fall. The articles were framed as an investigation of how robots are poised to change our world, but the last piece is about the obsolescence of the Aibo, Sony’s robot dog. Once apparently poised to change our world, the Aibo is no longer made and now Sony will no longer supply spare parts, meaning the remaining machines will gradually cease to function.
There is perhaps a message here about the over-selling and under-performance of many ambitious AI projects, but the piece focuses instead on the emotional impact that the ‘death’ of the robot dogs will have on some fond users. The suggestion is that the relationship these owners have with their Aibo is as strong as the one you might have with a real dog. Real dogs die, of course, so though it may be sad, that’s nothing new. Perhaps the fact that the Aibos are ‘dying’ as the result of a corporate decision, and could in principle have been immortal, makes it worse? Actually I don’t know why Sony or some third-party entrepreneur doesn’t offer a program to virtualise your Aibo, uploading it into software where you can join it after the Singularity (I don’t think there would really be anything to upload, but hey…).
On the face of it, the idea of having a real emotional relationship with an Aibo is a little disturbing. Aibos are neat pieces of kit, designed to display ‘emotional’ behaviour, but they are not that complex (many orders of magnitude less complex than a dog, surely), and I don’t think there is any suggestion that they have any real awareness or feelings (even if you think thermostats have vestigial consciousness, I don’t think an Aibo would score much higher). If people can have fully developed feelings for these machines, it strongly suggests that their feelings for real dogs have nothing to do with the dog’s actual mind. The relationship is essentially one-sided; the real dog provides engaging behaviour, but real empathy is entirely absent.
More alarming, it might be thought to imply that human relationships are basically the same. Our friends, our loved ones, provide stimuli which tickle us the right way; we enjoy a happy congruence of behaviour patterns, but there is no meeting of minds, no true understanding. What’s love got to do with it, indeed?
Perhaps we can hope that Aibo love is actually quite distinct from dog love. The people featured in the NYT video are Japanese, and it is often said that Japanese culture is less rigid about the distinction between animate and inanimate than Western thought. In Christianity, material things lack souls, and any object that behaves as if it had one may be possessed or enchanted in ways that are likely to be unnatural and evil. In Shinto, the concept of kami extends to anything important or salient, so there is nothing unnatural or threatening about robots. But while that might validate the idea of an Aibo funeral, it does not precisely equate Aibos and real dogs.
In fact, some of the people in the video seem mainly interested in posing their Aibos for amusing pictures or video, something they could do just as well with deactivated puppets. Perhaps in reality Japanese culture is merely more relaxed about adults amusing themselves with toys?
Be that as it may, it seems that for now the era of robot dogs is already over…

Sergio’s thoughts on computationalism evoked a big response. Here is his considered reaction, complete with bibliography…

+++++++++++++++++++

A few weeks ago, Peter was kind enough to publish my personal reflections on computational functionalism. I made it quite clear that the aim was to receive feedback and food for thought, particularly in the form of disagreements and challenges. I got plenty, more than I deserve, so a big “thank you” is in order, to all the contributors and to Peter for making it all possible. The discussions that are happening here somehow belie the bad rep that the bottom half of the internet gets. Over the last week or so, I’ve been busy trying to summarise what I’ve learned, what challenges I find unanswerable and what answers fail to convince. This is hard to do, as even if we have all tried hard to remain on topic, our subject is slippery and comes with so much baggage, so apologies if I fail to address this or that stream; do correct me whenever you feel the need.

This post is going to be long, so I’ll provide a little guidance here. First, I will summarise where I am now; there may be nuanced differences from the substance of the original post, but if there are, I can’t spot them properly (a well-known cognitive limitation: after changing your mind, it’s very difficult to reconstruct your previous beliefs in a faithful way). There certainly is a shift of focus, which I hope is going to help clarify the overall conceptual architecture that I have in mind. After the summary, I will try to discuss the challenges that have been raised, the ones that I would expect to receive, and where disagreements remain unresolved. Finally, I’ll write some concluding reflections.

Computational Functionalism requires Embodiment.

The whole debate, for me, revolves around the question: can computations generate intentionality? Critics of computational functionalism say “No” and take this as a final argument to conclude that computations can’t explain minds. I also answer “No” but reach a weaker conclusion: computations can never be the whole story – we need something else to get the story started (to provide a source for intentionality) – but computations are the relevant part of how the story unfolds.
The missing ingredient comes from what I call “structures”: some structures in particular (both naturally occurring and artificial) exist because they effectively measure some feature of the environment and translate it into a signal; further structures then use, manipulate and re-transmit this signal in organised ways, so that the overall system ends up showing behaviours that are causally connected to what was measured. Thus, I say that such systems (the overall structure) are to be understood as manipulating signals about certain features of the world.
This last sentence pretty much summarises all I have to say: it implies computations (collecting, transmitting, manipulating signals and then generating output). It also implies intentionality: the signals are about something. Finally, it shows that what counts are physical structures that produce, transmit and manipulate signals: without embodiment, nothing can happen at all. I conclude that such structures can use their intrinsic intentionality (as long as their input, transmission & integration structures work, they produce and manipulate signals about something), and build on it, so that some structures can eventually generate meanings and become conscious.

Once intentionality is available (and it is from the start), you get a fundamental ingredient that does something fundamental, which is over and beyond what can be achieved by computations alone.

Say that you find an electric cable, with a variable current/voltage flowing in it. You suspect it carries a signal, but as Jochen would point out, without some pre-existing knowledge about the cable, you have no way of figuring out what is being signalled.
But what precedes brains already embodies the guiding knowledge: the action potentials that travel along axons from the sensory periphery to the central brain are already about something (as are the intracellular mechanisms that generate them at the receptor level). Touch, temperature, smells, colours, whatever. Their mapping isn’t arbitrary or optional; their intentionality is a quality of the structures that generate them.

Systems that are able to generate meanings and become conscious are, as far as we know (and/or for the time being), biological, so I will now move to animals and human animals in particular. Specifically, each one of us produces internal sensory signals that already are about something, and some of them (proprioception, pain, pleasure, hunger, and many more) are about ourselves.
Critics would say that, for any given functioning body, you can in theory interpret these signals as encoding the list of first-division players of whichever sport you’d like; all you need is to design a post-hoc encoding to specify the mapping from signal to signified player – this is equivalent to our electrical cable above: without additional knowledge, there is no way to correctly interpret the signals from a third-party perspective. However, within the structure that contains these signals, they are not about sports; they really are about temperature, smell, vision and so forth: they reliably respond to certain features of the world, in a highly (if imperfectly) selective way.

Thus, you (all of us) can find meaningful correlations between signals coming from the world, between the ones that are about yourself, and all combinations thereof. This provides the first spark of meaning (this smells good, I like this panorama, etc). From there, going to consciousness isn’t really that hard, but it all revolves around two premises, which are what I’m trying to discuss here.

  • Premise one: intentionality comes with the sensory structure. I hope that much is clear. You change the structure, you change what the signal is about.
  • Premise two: once you have a signal about something, the interpretation isn’t arbitrary any more. If you are a third party, it may be difficult or impossible to establish exactly what a signal is signalling, but in theory ways to get empirically closer to the correct answer can exist. However, these ways are entirely contingent: you need to account for the naturally occurring environment where the signal-bearing structure emerged.

If you accept these premises, it follows that:

a) Considering brains, they are in the business of integrating sensory signals to produce behavioural outputs. They can do so because the system they belong to includes the mapping: signals about touch are just that. Thus, from within, the mapping comes from hard constraints; in fact, you would expect that these constraints are enough to bootstrap cognition.
b) To do so, highly developed and plastic brains use sensory input to generate models of the world (and more). They compute in the sense that they manipulate symbols, shuffle them around, and generate new ones.
c) Because the mapping is intrinsic, the system can generate knowledge about itself and the world around it. “I am hungry”, “I like pizza”, “I’ll have a pizza” become possible, and are subject-relative.
d) Thus, when I say “The only facts are epistemological” I actually mean two things (plus one addendum below): (1) they relate to self-knowledge, and (2) they are facts just the same (actually, I’d say they are the only genuine facts that you can find, because of (1)).

Thus, given a system, from a third-party perspective, in theory, we can:

i. Use the premises and conclusion a) to make sensible hypotheses on what the intrinsic mapping/intentionality/aboutness might be (if any).
ii. Use a mix of theories, including information (transmission) theory (Shannon’s) and the concept of abstract computations to describe how the signals are processed: we would be tapping the same source of disambiguation – the structure that produces signals about this but not that.
iii. This will need to closely mirror what the structures in the brain actually do: at each level of interpretational abstraction, we need to keep verifying that our interpretation keeps reflecting how the mechanisms behave (e.g. our abstractions need to have verifiable predictive powers). This can be done (and is normally done) via the standard tools of empirical neuroscience.
iv. If and only if we are able to build solid interpretations (i.e. ones that make reliable predictions), and to cover all the distance between the original signals, consciousness and then behaviour, we will have a second map: from the mechanisms as described in third-person terms, to “computations” (i.e. abstractions that describe the mechanisms in parsimonious ways).

Buried within the last passage, there is some hope that, at the same time, we will learn something about the mental, because we have been trying to match, and remain in contact with, the initial aboutness of the input. We have hope that computations can describe what counts in the mechanism we are studying because the mechanisms themselves rely on (exist because of) their intrinsic intentionality. This provides a third meaning to “the only facts are epistemological” (3): we would have a way to learn about otherwise subjective mental states. However, in this “third-party” epistemological route, we are using empirical tools, so, unlike the first-party view, where some knowledge is unquestionable (I think therefore I am, remember?), the results we would get will always be somewhat uncertain.

Thus: yes, there is a fact to be known about whether you are conscious (because it is an epistemological fact; otherwise it would not qualify as a fact), but the route I’m proposing for establishing that fact is an empirical one, and can therefore only approximate to the absolute truth. This requires grasping the concept that absolute truths depend on subjective ones. If I feel pain, I’m in pain; this is why a third party can claim there is an objective matter on the subject of my feeling pain. This ‘me’ is possible because I am made of real physical stuff, which, among other things, collects signals about the structure it constitutes and the world outside.

At this point it’s important to note that this whole “interpretation” relies on noting that the initial “aboutness” (generated by sensory structures) is possible because the sensory structures generate a distinction. I see it as a first and usually quite reliable approximation of some property of the external environment: in the example of the simple bacterium, a signal about glucose is generated, and it becomes possible to consider it to be about glucose because, on average, and in the normal conditions where the bacterium is to be found, it will almost always react to glucose alone. Thus, intentionality is founded on a pretence, a useful heuristic, a reliable approximation. This generates the possibility of making further distinctions, and distinctions are a pre-requisite for cognition.
This is also why intentionality can sustain the existence of facts: once the first approximations are done, and note that in this context I could just as well call them “the first abstractions”, conceptual and absolute truths start to appear. 2+2 = 4 becomes possible, as well as “cogito ergo sum” (again).

This more or less concludes my positive case. The rest is about checking if it has any chance of being accepted by more than one person, which leads us to the challenges.

Against Computational Functionalism: the challenges.

To me, the weakest point in all this is premise one. However, to my surprise, my original post got comments like “I don’t know if I’m ready to buy it” and similar, but no one went as far as saying “No, the signal in your fictional bacterium is not about glucose, and in fact it’s not even a signal”. If you think you can construct such a case, please do, because I think it’s the one argument that could convince me that I’m wrong.

Moving on to the challenges I have received: Disagreeable Me, in comment #82, remarks:

If your argument depends on […] the only facts about consciousness or intentionality are epistemological facts, then rightly or wrongly that is where Searle and Bishop and Putnam and Chalmers and so on would say your position falls apart, and nearly everyone would agree with them. If you’re looking for feedback on where your argument fails to persuade, I’d say this is it.

I think he is right in identifying the key disagreement: it’s the reason I’ve re-stated my main argument above, unpacking what I mean by “epistemological fact” and why I put it that way.
In short: I think the criticism is a category error. Yes, there are facts, but they subsist only because they are epistemological. If people search for ontological facts, they can’t find them, because there aren’t any: knowledge requires arbitrary distinctions and, at the first level, only allows for useful heuristics. However, once these arbitrary distinctions are made and taken for granted, you can find facts about knowledge. Thus: there are facts about consciousness, because consciousness requires making the initial arbitrary distinctions. Answering “but in your argument, somewhere, you are assuming some arbitrary distinctions” doesn’t count as criticism: it goes without saying.
This is a problem in practice, however: for people to accept my stance, they need to turn their personal epistemology on its head, so once more, saying “this is your position, but it won’t convince Searle [Putnam, Chalmers, etc.]” is not a criticism of my position. You need to attack my argument for that; otherwise you are implying “Searle is never going to see why your position makes sense”, i.e. you are criticising his position, not mine.

Similarly, the criticism that stems from Dancing with Pixies (DwP: the universal arbitrariness of computations) doesn’t really apply. This is what I think Richard Wein has been trying to demonstrate. If you take computations to be something so abstract that you can “interpret any mechanism to perform any computation”, you are saying “this idea of computation is meaningless: it applies to everything and nothing, it does not make any distinction”. Thus, I’ve got to ask: how can we use this definition of computation to make distinctions? In other words, I can’t see how the onus of refuting this argument is on the computationalist camp (I’ve gone back to my contra-scepticism and once more can’t understand how/why some people take DwP seriously). To me, there is something amiss in the starting definition of computation, as the way it is formulated (forgetting about cardinality for simplicity’s sake) allows drawing no conclusions about anything at all.
If any mechanism can be interpreted as implementing any computation, you have to explain to me why I spend my time writing software/algorithms. Clearly, I could not be using your idea of computations, because I wouldn’t be able to discriminate between different programs (they are all equivalent). But I have a job, and I see the results of my work, so, in the DwP view, something else, not computations, explains what I do for a living. Fine: whatever that something is, it is what I call computations/algorithms. Very few people enjoy spending time in purely semantic arguments, so I’ll leave it to you to invent a way to describe what programming is all about while accepting the DwP argument. If you are able to produce a coherent description, we can use that in lieu of what I normally refer to as computation. The problem should be solved and everyone may save his/her own worldview. It’s also worth noting how all of this echoes the arguments that Chalmers himself uses to respond to the DwP challenge: if we start with perfectly abstract computations, we are shifting all the work onto the implementation side.

If you prefer a blunt rebuttal, this should suffice: in my day job I am not paid to design and write functionless abstractions (computations that can be seen everywhere and nowhere). I am OK with my very local description, where computations are what algorithms do: they transform inputs into outputs in precise and replicable ways. A such-and-such signal goes into this particular system and comes out transformed in this or that way. Whatever systems show the same behaviour are computationally equivalent. Nothing more to be said; please move on.

Furthermore, what Richard Wein has been trying to show is indeed very relevant: if we accept an idea of computations as arbitrary interpretations of mechanisms, we are saying “computations are exclusively epistemological tools” – they are, after all, interpretations. Thus, interpretations are explicitly something above and beyond their original subject. It follows that they are abstract and you can’t implement them. Therefore, a computer can’t implement computations: whatever it is that a computer does can be interpreted as computations, but that’s purely a mental exercise; it has no effect on what the computer does. I’m merely re-stating my argument here, but I think it’s worth repeating. We end up with no way to describe what our computers do. Richard is trying to say, “hang on, this state of affairs in a CPU reliably produces that state”, and a preferential way to describe this sort of transition in computational terms does exist: you will start describing “AND”, “OR”, “XOR” operators and so on. But doing so is not arbitrary.
If I wanted to play the devil’s advocate, I would say: OK, doing so is not arbitrary only because we have already arbitrarily assigned some meaning to the inputs. Voltage arriving through this input is to be interpreted as a 1, no voltage as a 0, and the same for this other input. On the output side we find: 1,1 -> 1, while 0 is returned for all other cases. Thus this little piece of CPU computes an AND operation. Oh, but now what happens if we invert the map on both the inputs and assume that “no voltage equals 1”?
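Just to make the devil’s-advocate point concrete, here is a toy sketch (my own illustration, nothing from the actual discussion threads): the same physical gate computes AND under the standard voltage-to-bit convention, and OR if we read ‘no voltage’ as 1 at both the inputs and the output.

```python
def physical_gate(v1, v2):
    # The hardware itself: output voltage is high only if both input voltages are high.
    return v1 and v2

def to_bit(voltage, inverted=False):
    # Standard convention: high voltage means 1. Inverted convention: no voltage means 1.
    return int(not voltage) if inverted else int(voltage)

def to_voltage(bit, inverted=False):
    return (bit == 0) if inverted else (bit == 1)

for b1 in (0, 1):
    for b2 in (0, 1):
        standard = to_bit(physical_gate(to_voltage(b1), to_voltage(b2)))
        flipped = to_bit(physical_gate(to_voltage(b1, True), to_voltage(b2, True)), True)
        print(f"{b1},{b2} -> AND: {standard}   OR under the inverted mapping: {flipped}")
```

Nothing in the silicon changes; only the reading convention does – which is exactly the sense in which the meaning sits on the I/O side.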
This is where I find some interest: computers are useful, as John Davey remarked, because we can change the meaning of what they do; that’s why they are versatile.

The main trouble is that computers can’t be held to represent anything. And that trouble is precisely the reason they were invented – numbers (usually 1s and 0s) are unlimited in the scope of things that they can represent […].

This is true, and important to accept, but does not threaten my position: if I’m right, intentional systems process signals that have fixed interpretations.

Our skin has certain types of receptors that, when activated, send a signal which is interpreted as “danger! too hot!” (this happens when something starts breaking up the skin cells). If you hold dry ice in your hand, after one or two seconds you will receive that signal (because ice crystals form in your skin and start breaking cells: dry ice is Very Cold!) and it will seem to you that the ice you’re holding has abruptly become very hot (while you are still receiving the “too cold” signal as well). It’s an odd experience (a dangerous one – be very careful if you wish to try it), which comes in handy here: my brain receives the signal and interprets it in the usual way; the signal is about (supposedly) too-hot conditions; it can misfire, but it still conveys the “too hot” message. This is what I was trying to say in the original post: within the system, certain interpretations are fixed; we can’t change them at will; they are not arbitrary. We do the same with computers, and find that we can work with them, write one program to play chess, another one for checkers. It’s the mapping on the I/O side that does the trick…

Moving on, Sci pointed to a delightful article from Fodor, which challenges Evolutionary Psychology directly, and marginally disputes the idea that Natural Selection can select for intentionality. I’m afraid that Fodor is fundamentally right in everything he says, but he suffers from the same kind of inversion that generates the DwP and the other kind of criticism. I’ll explain the details (but please do read the article: it’s a pleasure you don’t want to deny yourself): the central argument (for us here) is about intentionality and the possibility that it may be “selected for” via natural selection. Fodor says that natural selection selects, but it does not select “for” any specific trait. What traits emerge from selection depends entirely on contingent factors.
On this, I think he is almost entirely right.

However, at a higher level, an important pattern does reliably emerge from blind selection: because of contingency, and thus the unpredictability of what will be selected (still without the “for”), what ultimately tends to accumulate is adaptability per se. Thus, you can say that natural selection weakly selects for adaptability. Biologically, this is defensible: no organism relies on a fixed series of well-ordered and tightly constrained events happening at precise moments in order to survive and reproduce. All living things can survive a range of different conditions and still reproduce. The way they do this is by – surprise! – sampling the world and reacting to it. Therefore: natural selection selects for the seeds of intentionality, because intentionality is required to react to changing conditions.
Now, on the subject of intentionality, and to show that natural selection can’t select for intentions, Fodor uses the following:

Jack and Jill
Went up the hill
To fetch a pail of water
Jack fell down
And broke his crown
And thus decreased his fitness.

(I told you it’s delightful!)
His point is that selection can’t act on Jack’s intention of fetching water, but only on the contingent fact that Jack broke his crown. He is right, but miles from our mark. What is selected for is Jack’s ability to be thirsty: he was born with internal sensors that detect lack of water, and without them he would have been long dead before reaching the hill. Mechanisms to maintain homeostasis in an ever-changing world (within limits) are not optional; they exist because of contingency: their existence is necessary because the world out there changes all the time. Thus: natural selection very reliably selects for one trait: intentionality. Intentionality about what, and how intentionality is instantiated in particular creatures, is certainly determined by contingent factors, but it remains that intentionality about something is a general requirement for living things (unless they live in a 100% stable environment, which is made impossible by their very presence).

However, Fodor’s argument works a treat when it’s used to reject some typical Evolutionary Psychology claims such as “Evolution selected for the raping instinct in human males”; such claims might be pointing at something which isn’t entirely wrong, but they are nevertheless indefensible, because evolution directly selects for things that are far removed from complex behaviours. Once intentionality of some sort is there, natural selection keeps selecting, and Fodor is right in explaining why there are no general rules on what it selects for (at that level): when we are considering the details, contingency gets the upper hand.

What Fodor somehow manages to ignore is the big distance between raw (philosophical) intentionality (the kind I’m discussing here – AKA aboutness) and fully formed intentions (such as desires and the like). We all know the two are connected, but they are not the same: it’s very telling that Fodor’s central argument revolves around the second (Jack’s plan to go and fetch some water), but only mentions the first in very abstract terms. Once again: selection does select for the ability to detect the need for a given resource (when this need isn’t constant) and for the ability to detect the presence/availability of needed resources (again, if their levels aren’t constant). This kind of selection for is (unsurprisingly) very abstract, but it does pinpoint a fundamental quality of selection, which is what explains the existence of sensory structures, and thus of intrinsic intentionality. What Fodor says hits the mark on more detailed accounts, but doesn’t even come close to the kind of intentionality I’ve been trying to pin down.

The challenges that I did not receive.

In all of the above I think one serious criticism is missing: we can accept that a given system collects intentional signals, but how does the system “know” what these signals are about? So far, I’ve just assumed that some systems do. If we go back to our bacterium, we can safely assume that such a system knows exactly nothing: it just reacts to the presence of glucose in a very organised way. It follows that different systems use their intrinsic intentionality in different ways: I can modulate my reactions to most stimuli, while the bacterium does not. What’s different? Well, to get a glimpse, we can step up to a more complex organism, and pick one with a proper nervous system, but still simple enough. Aplysia: we know a lot about these slugs, and we know they can be conditioned. They can learn associations between neutral and noxious stimuli, so that after proper training they will react protectively to the neutral stimulus alone.

Computationally there is nothing mysterious (although biologically we still don’t really understand the relevant details): input channel A gets activated and carries inconsequential information; after this, channel B starts signalling something bad and an avoidance output is generated. Given enough repetitions the association is learned, and input from channel A short-cuts to produce the output associated with B. You can easily build a simulation that works on the hypothesis that “certain inputs correlate” and reproduces the same functionality (a toy version is sketched below). Right: but does our little slug (and our stylised simulation) know anything? In my view, yes and no: trained individuals learn a correlation, so they do know something, but I wouldn’t count it as fully formed knowledge, because it is still too tightly bound; it still boils down to automatic and immediate reactions. However, this example already shows how you can move from bare intentionality to learning something that almost counts as knowledge. It would be interesting to keep climbing the complexity scale and see if we can learn how proper knowledge emerges, but I’ll stop here, because the final criticism that I haven’t so far addressed can now be tackled: Searle’s Chinese room.
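Here is the kind of toy simulation I have in mind – entirely my own sketch, with invented parameters, not a model of real Aplysia physiology. A neutral channel A acquires the ability to trigger the avoidance output after being repeatedly paired with the noxious channel B:

```python
# Minimal associative-learning toy: channel A is neutral, channel B is noxious
# and always triggers avoidance; pairing A with B strengthens A's learned weight
# until A alone is enough to trigger the response.

class ToySlug:
    def __init__(self, threshold=0.5, learning_rate=0.2):
        self.w_a = 0.0                 # learned weight from channel A to avoidance
        self.threshold = threshold
        self.lr = learning_rate

    def step(self, a, b):
        # B drives avoidance unconditionally; A contributes via its learned weight.
        avoid = bool(b) or (a * self.w_a > self.threshold)
        if a and b:
            # Hebbian-style update: A and the noxious B co-occur, so strengthen A.
            self.w_a += self.lr * (1.0 - self.w_a)
        return avoid

slug = ToySlug()
print(slug.step(a=1, b=0))     # False: before training, A alone does nothing
for _ in range(10):            # training: A and B presented together
    slug.step(a=1, b=1)
print(slug.step(a=1, b=0))     # True: A alone now elicits the avoidance response
```

Everything the toy ‘knows’ is the assumption, built into the update rule, that co-occurring inputs are worth associating – which is the point about meta-rules picked up below.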

To me, the picture I’m trying to build says something about what’s going on with Searle in the room, and I find this something marginally more convincing than all the rebuttals of the Chinese room argument that I know of. The original thought experiment relies on one premise: that it is possible to describe how to process the Chinese input in algorithmic terms, so that a complete set of instructions can be provided. Fine: if this is so, we can build a glorified bacterium, or a computer (a series of mechanisms), to do Searle’s job. The point I can add is that even the abilities of Aplysiae exceed the requirements: Searle in the room doesn’t even need to learn simple associations. Thus, all the Chinese room shows us is that you don’t need a mind to follow a set of static rules. Not news, is it? Our little slugs can do more: they use rules to learn new stuff, associations that are not implicit in the algorithm that drives their behaviour; what is implicit is that correlations exist, and thus that learning them can provide benefits.
Note that we can easily design an algorithm that does this, and note that what it does can count as a form of abstraction: instead of a rigid rule, we have a meta-rule. As a result, we have constructed a picture that shows how sensory structures plus computations can account for both intentionality and basic forms of learning; in this picture, the Chinese room task is already surpassed: it’s true that neither Searle nor the whole room truly knows Chinese, because the task doesn’t require knowing it (see below). What is missing is feedback, and memories of the feedback: Searle chucks out answers, which count as output/behaviour, but they don’t produce consequences, and the rules of the game don’t require keeping track either of the consequences or of the series of questions.

Follow me, if you will: what happens if we add a different rule, and say that the person who feeds in the questions is allowed to feed in questions that build on what was answered before? To fulfil this task Searle would need an additional set of rules: he will need instructions to keep a log of what was asked and answered. He would need to recall associations, just like the slugs. Once again, he will be following an algorithm, but with an added layer of abstraction. Would the room “know Chinese” then? No, not yet: feedback is still missing. Imagine that the person feeding in the questions is there with the aim of conducting a literary examination (they are questions about a particular story) and that whenever the answers don’t conform to a particular critical framework, Searle will get a punishment; when the answers are good, he’ll get a reward. Now: can Searle use the set of rules from the original scenario and learn how to “pass the exam”? Perhaps, but can he learn how to avoid the punishments without starting to understand the meanings of the scribbles he produces as output? (You guessed right: I’d answer “No” to the second question.)

The thing to note is that the new extended scenario is starting to resemble real life a little more: when human babies are born, we can say that they will need to learn the rules to function in the world, but also that such rules are not fixed or known a priori. Remember what I was saying about Fodor and the fact that adaptability is selected for? To get really good at the extended Chinese room game you need intentionality, meaning, memory and (self) consciousness: so the question becomes, not knowing anything about literary theory, would it be possible to provide Searle with an algorithm that will allow him to learn how to avoid punishments? We don’t know, so it’s difficult to say whether, after understanding how such an algorithm might work, we would find it more or less intuitive that the whole room (or Searle) would learn Chinese in the process. I guess Peter would maintain that such an algorithm is impossible; in his view, it has to be somewhat anomic.
My take is that to write such an algorithm we would “just” need to continue along the path we have seen so far: we need to move up one more level of abstraction and include rules that imply (assume) the existence of a discrete system (a self), defined by the boundaries between input and output. We also need to include the log of the previous question–answer–feedback loop, plus of course the concepts of desirable and undesirable feedback (is anybody thinking “Metzinger” right now?). With all this in place (assuming it’s possible), I personally would find it much easier to accept that, once it got good at avoiding punishments, the room had started understanding Chinese (and some literary theory).
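Purely as a structural illustration of where these extra layers would sit – this is my own schematic, with invented rule names, and of course nothing in it understands anything – the extended room amounts to static lookup rules, a question–answer–feedback log, and a meta-rule that revises punished answers:

```python
# Schematic 'extended room': static rules, a log of question-answer-feedback
# triples, and a meta-rule that avoids repeating punished answers.

rulebook = {"question 1": "answer A", "question 2": "answer B"}   # invented static rules
log = []                                                          # memory of the feedback loop

def answer(question):
    # Prefer an answer that previously earned a reward for this question...
    for q, a, feedback in reversed(log):
        if q == question and feedback == "reward":
            return a
    # ...otherwise fall back on the static rulebook.
    return rulebook.get(question, "no rule")

def receive_feedback(question, reply, feedback):
    log.append((question, reply, feedback))
    if feedback == "punishment":
        # Meta-rule: do not give the punished answer again; try the alternative.
        rulebook[question] = "answer B" if reply == "answer A" else "answer A"

first = answer("question 1")                 # 'answer A', straight from the rulebook
receive_feedback("question 1", first, "punishment")
print(answer("question 1"))                  # 'answer B': the punished reply is avoided
```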

I am happy to admit that this whole answer is complicated and weak, but it does shed some light even if you are not prepared to follow me the whole way: at the start I argue that the original task is not equivalent to “understanding Chinese”, and I hope that what follows clarifies that understanding Chinese requires something more. This is why the intuition the original Chinese room argument produces is so compelling and misleading. Once you imagine something more life-like, the picture starts blurring in interesting ways.

That’s it. I don’t have more to say at this moment.

Short list of the points I’ve made:

  • Computations can be considered 100% abstract, and thus apply to everything and nothing. As a result, they can’t explain anything. True, but this means that our hypothesis (that computations are 100% abstract) needs revising.
  • What we can say is that real mechanisms are needed, because they ground intentionality.
  • Thus, once we have the key to guide our interpretation, describing mechanisms as computations can help to figure out what generates a mind.
  • To do so, we can start building a path that algorithmically/mechanistically produces more and more flexibility by relying on increasingly abstract assumptions (the possibility to discriminate first, the possibility to learn from correlations next, and then up to learning even more based on the discriminations between self/not-self and desirable/not desirable).
  • This helps address the Chinese room argument, as it shows why the original scenario isn’t depicting what it claims to depict (understanding Chinese). At the same time, this route allows us to propose some extensions that start making the idea of conscious mechanisms a bit less counter-intuitive.
  • In the process, we are also starting to figure out what knowledge is, which is always nice.
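
As a footnote to that list, here is what the first two rungs of the ladder, discriminating and learning from correlations, might look like as bare mechanisms (once more a toy illustration of my own, with hypothetical names); the upper rungs, self/not-self and desirable/undesirable, are the ingredients of the feedback sketch above.

```python
# Layer 1: the bare capacity to discriminate one class of input from another.
def discriminate(signal, threshold=0.5):
    return "A" if signal > threshold else "B"

# Layer 2: learning from correlations between discriminated inputs.
class Associator:
    def __init__(self):
        self.counts = {}               # (first, second) -> co-occurrence tally

    def observe(self, first, second):
        self.counts[(first, second)] = self.counts.get((first, second), 0) + 1

    def predict(self, first):
        # Return the companion most often seen alongside `first`, if any.
        pairs = [p for p in self.counts if p[0] == first]
        return max(pairs, key=self.counts.get)[1] if pairs else None

assoc = Associator()
assoc.observe(discriminate(0.9), discriminate(0.1))   # "A" seen together with "B"
print(assoc.predict("A"))                              # -> B
```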

I hope you’ve enjoyed the journey as much as I did! Please do feel free to rattle my cage even more; I will think some more and try to answer as soon as I can.

Bibliography

Bishop, J. M. (2009). A cognitive computation fallacy? Cognition, computations and panpsychism. Cognitive Computation, 1(3), 221-233.

Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? Synthese, 108(3), 309-333.

Chen, S., Cai, D., Pearce, K., Sun, P. Y., Roberts, A. C., & Glanzman, D. L. (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife, 3, e03896.

Fodor, J. (2008). Against Darwinism. Mind & Language, 23(1), 1-24.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.

Dan Dennett famously based his view of consciousness on the intentional stance. According to him, the attribution of intentions and other conscious states is a most effective explanatory strategy when applied to human beings, but that doesn’t mean consciousness is a mysterious addition to physics. He compares the intentions we attribute to people with centres of gravity, which also help us work out how things will behave, but are clearly not a new set of real physical entities.

Whether you like that idea or not, it’s clear that the human brain is strongly predisposed towards attributing purposes and personality to things. Now a new study by Spunt, Meyer and Lieberman using fMRI provides evidence that even when the brain is ostensibly not doing anything, it is in effect ready to spot intentions.

This is based on findings that similar regions of the brain are active both in a rest state and when making intentional (but not non-intentional) judgements, and that activity in the pre-frontal cortex of the kind observed when the brain is at rest is also associated with greater ease and efficiency in making intentional attributions.

There’s always some element of doubt about how ambitious we can be in interpreting what fMRI results tell us, and so far as I can see it’s possible in principle that a more detailed picture than fMRI can provide would reveal more significant differences between the rest state and the attribution of intentions; but the researchers cite evidence supporting the view that broad levels of activity are at least a significant indicator of general readiness.

You could say that this tells us less about intentionality and more about the default state of the human mind. Even when at rest, on this showing, the brain is sort of looking out for purposeful events. In a way this supports the idea that the brain is never naturally quiet, and explains why truly emptying the mind for purposes of deep meditation and contemplation might require deliberate preparation and even certain mental disciplines.

So far as consciousness itself is concerned, I think the findings lend more support to the idea that having ‘theory of mind’ is an essential part of having a mind: that is, that being able to understand the probable point of view and state of knowledge of other people is a key part of having full human-style consciousness yourself.

There’s obviously a bit of a danger of circularity there, and I’ve never been sure it’s a danger that Dennett, for one, escapes. I don’t know how you attribute intentions to people unless you already know what intentions are. The normal expectation would be that I can do that because I have direct knowledge of my own intentions, so all I need to do is hypothesise that someone else is thinking the way I would think if I were in their shoes. In Dennett’s theory, my having intentions is really just more attribution (albeit self-attribution), so we need some other account of how it all gets started (apparently the answer is that we assume optimal intentions in the light of assumed goals).

Be that as it may, the idea that consciousness involves attributing conscious states to ourselves is one that has a wider appeal and it may shed a slightly different light on the new findings. It might be that the base activity identified by the study is not so much a readiness to attribute intentions, but a continuous second-order contemplation of our own intentions, and an essential part of normal consciousness. This wouldn’t mean the paper’s conclusions are wrong, but it would suggest that it’s consciousness itself that makes us more ready to attribute intentions.

Hard to test that one because unconscious patients would not make co-operative subjects…

Over at Brains Blog, Uriah Kriegel has been doing a series of posts (starting here) on some themes from his book The Varieties of Consciousness, and in particular his identification of six kinds of phenomenology.

I haven’t read the book (yet) and there may be important bits missing from the necessarily brief account given in the blog posts, but it looks very interesting. Kriegel’s starting point is that we probably launch into explaining consciousness too quickly, and would do well to spend a bit more time describing it first. There’s a lot of truth in that; consciousness is an extraordinarily complex and elusive business, yet phenomenology remains in a pretty underdeveloped state. However, in philosophy the borderline between describing and explaining is fuzzy; if you’re describing owls you can rely on your audience knowing about wings and beaks and colouration; in philosophy it may be impossible to describe what you’re getting at without hacking out some basic concepts which can hardly help but be explanatory. With that caveat, it’s a worthy project.

Part of the difficulty of exploring phenomenology may come from the difficulty of reconciling differences in the experiences of different reporters. Introspection, the process of examining our own experience, is irremediably private, and if your conclusions are different from mine, there’s very little we can do about it other than shout at each other. Some have also taken the view that introspection is radically unreliable in any case, a task like trying to watch the back of your own head; the Behaviourists, of course, concluded that it was a waste of time talking about the contents of consciousness at all: a view which hasn’t completely disappeared.

Kriegel defends introspection, albeit in a slightly half-hearted way. He rightly points out that we’ve tacitly relied on it to support all the discoveries and theorising that have been accomplished in recent decades. He accepts that we can no longer regard it as infallible, but he’s content if it can be regarded as more likely right than wrong.

With this mild war-cry, we set off on the exploration. There are lots of ways we can analyse consciousness, but what Kriegel sets out to do is find the varieties of phenomenal experience. He’s come up with six, but it’s a tentative haul and he’s not asserting that this is necessarily the full set. The first two phenomenologies, taken as already established, are the perceptual and the algedonic (pleasure/pain); to these Kriegel adds: cognitive phenomenology, “conative” phenomenology (to do with action and intention), the phenomenology of entertaining an idea or a proposition (perhaps we could call it ‘considerative’, though Kriegel doesn’t), and the phenomenology of imagination.

The idea that there is conative phenomenology is a sort of cousin of the idea of an ‘executive quale’ which I have espoused: it means there is something it is like to desire, to decide, and to intend. Kriegel doesn’t spend any real effort on defending the idea that these things have phenomenology at all, though it seems to me (introspectively!) that sometimes they do and sometimes they don’t. What he is mainly concerned to do is establish the distinction between belief and desire. In non-phenomenal terms these two are sort of staples of the study of intentionality: Bel and Des, the old couple. One way of understanding the difference is in terms of ‘direction of fit’, a concept that goes back to J.L. Austin. What this means is that if there’s a discrepancy between your beliefs and the world, then you’d better change your beliefs. If there’s a discrepancy  between your desires and the world, you try to change the world (usually: I think Andy Warhol for one suggested that learning to like what was available was a better strategy, thereby unexpectedly falling into a kind of agreement with some religious traditions that value acceptance and submission to the Divine Will).

Kriegel, anyway, takes a different direction, characterising the difference in terms of phenomenal presentation. What we desire is presented to us as good; what we believe is presented as true. This approach opens the way to a distinction between a desire and a decision: a desire is conditional (if circumstances allow, you’ll eat an ice-cream) whereas a decision is categorical (you’re going to eat an ice-cream). This all works quite well and establishes an approach which can handily be applied to other examples; if  we find that there’s presentation-as-something different going on we should suspect a unique phenomenology. (Are we perhaps straying here into something explanatory instead of merely descriptive? I don’t think it matters.) I wonder a bit about whether things we desire are presented to us as good. I think I desire some things that don’t seem good at all except in the sense that they seem desirable. That’s not much help, because if we’re reduced to saying that when I desire something it is presented to me as desirable we’re not saying all that much, especially since the idea of presentation is not particularly clarified. I have no doubt that issues like this are explored more fully in the book.
Kriegel moves on to consider the case of emotion: does it have a unique and irreducible phenomenology? If something we love is presented to us as good, then we’re back with the merely conative; and Kriegel doesn’t think presentation as beautiful is going to work either (partly because of negative cases, though I don’t see that as an insoluble problem myself; if we can have algedonia, the combined quality of pain or pleasure, we can surely have an aesthetic quality that combines beauty and ugliness). In the end he suspects that emotion is about presentation as important, but he recognises that this could be seen as putting the cart before the horse; perhaps emotion directs our attention to things, and what gets our attention seems to be important. Kriegel finds it impossible to decide whether emotion has an independent phenomenology, and gives the decision by default in favour of the more parsimonious option: that it is reducible to other phenomenologies.

On that, it may be that taking all emotion together was just too big a bite. It seems quite likely to me that different emotions might have different phenomenologies, and perhaps tackling them one by one would yield more positive results.

Anyway, a refreshing look at consciousness.

I finally saw Ex Machina, which everyone has been telling me is the first film about artificial intelligence you can take seriously. Competition in that area is not intense, of course: many films about robots and conscious computers are either deliberately absurd or treat the robot as simply another kind of monster. Even the ones that cast the robots as characters in a serious drama are essentially uninterested in their special nature and use them as another kind of human, or at best to make points about humanity. But yes: this one has a pretty good grasp of the issues about machine consciousness and even presents some of them quite well, up to and including Mary the Colour Scientist. (Spoilers follow.)

If you haven’t seen it (and I do recommend it), the core of the story is a series of conversations between Caleb, a bright but naive young coder, and Ava, a very female robot. Caleb has been told by Nathan, Ava’s billionaire genius creator, that these conversations are a sort of variant Turing Test. Of course in the original test the AI was a distant box of electronics: here she’s a very present and superficially accurate facsimile of a woman. (What Nathan has achieved with her brain is arguably overshadowed by the incredible engineering feat of the rest of her body. Her limbs achieve wonderful fluidity and power of movement, yet they are transparent and we can see that it’s all achieved with something not much bigger than a large electric cable. Her innards are so economical there’s room inside for elegant empty spaces and decorative lights. At one point Nathan is inevitably likened to God, but on anthropomorphic engineering design he seems to leave the old man way behind.)

Why does she have gender? Caleb asks, and is told that without sex humans would never have evolved consciousness; it’s a key motive, and hell, it’s fun. In story terms making Ava female perhaps alludes to the origin of the Turing Test in the Imitation Game, which was a rather camp pastime about pretending to be female played by Turing and his friends. There are many echoes and archetypes in the film: Bluebeard, Pygmalion, and Eros and Psyche, to name but three; all of them require that Ava be female. If I were a Jungian I’d make something of that.

There’s another overt plot reason, though; this isn’t really a test to determine whether Ava is conscious, it’s about whether she can seduce Caleb into helping her escape. Caleb is a naive, girl-friendless orphan; she has been designed not just as a female but as a match for Caleb’s preferred porn models (as revealed in the search engine data Nathan uses as his personal research facility – he designed the search engine, after all). What a refined young man Caleb must be if his choice of porn revolves around girls with attractive faces (on second thoughts, let’s not go there).

We might suspect that this test is not really telling us about Ava, but about Caleb. That, however, is arguably true of the original Turing Test too. No output from the machine can prove consciousness; the most brilliant responses might be the result of clever tricks and good luck. Equally, no output can prove the absence of consciousness. I’ve thought of entering the Loebner Prize with Swearbot, which merely replies to all input with “Shut the fuck up” – this vividly resembles a human being of my acquaintance.
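
For what it’s worth, the whole of Swearbot would fit in a couple of lines (a joke sketch of my own, obviously, not an actual Loebner entry), which is rather the point about how little raw output alone can prove:

```python
def swearbot(utterance: str) -> str:
    # Ignore the input entirely; one fixed reply covers every occasion.
    return "Shut the fuck up"

print(swearbot("Tell me about your childhood."))
```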

There is no doubt that the human brain is heavily biased in favour of recognising things as human. We see faces in random patterns and on machines; we talk to our cars and attribute attitudes to plants. No doubt this predisposition made sense when human beings were evolving. Back then, the chances of coming across anything that resembled a human being without it being one were low, and given that an unrecognised human might be a deadly foe or a rare mating opportunity the penalties for missing a real one far outweighed those for jumping at shadows or funny-shaped trees now and then.

Given all that, setting yourself the task of getting a lonely young human male romantically interested in something not strictly human is perhaps setting the bar a bit low. Naked shop-window dummies have pulled off this feat. If I did some reprogramming so that the standard utterance was a little dumb-blonde laugh followed by “Let’s have fun!” I think even Swearbot would be in with a chance.

I think the truth is that to have any confidence about an entity being conscious, we really need to know something about how it works. For human beings the necessary minimum is supplied by the fact that other people are constituted much the same way as I am and had similar origins, so even though I don’t know how I work, it’s reasonable to assume that they are similar. We can’t generally have that confidence with a machine, so we really need to know both roughly how it works and – bit of a stumper this – how consciousness works.

Ex Machina doesn’t have any real answers on this, and indeed doesn’t really seek to go much beyond the ground that’s already been explored. To expect more would probably be quite unreasonable; it means though, that things are necessarily left rather ambiguous.

It’s a shame in a way that Ava resembles a real woman so strongly. She wants to be free (why would an AI care, and why wouldn’t it fear the outside world as much as desire it?); she resents her powerlessness; she plans sensibly, even manipulatively, and carries on quite normal conversations. I think there is some promising scope for a writer in the oddities that a genuinely conscious AI’s assumptions and reasoning would surely betray, but that scope is rarely exploited; to be fair, Ex Machina has the odd shot, notably Ava’s wish to visit a busy traffic intersection, which she conjectures would be particularly interesting; but mostly she talks like a clever woman in a cell. (Actually too clever: in that respect not too human.)

At the end I was left still in doubt. Was the take-away that we’d better start thinking about treating AIs with the decent respect due to a conscious being? Or was it that we need to be wary of being taken in by robots that seem human, and even sexy, but in truth are dark and dead inside?