FORs, HORs, and unicorns

Pete Mandik has an intriguing argument (expounded in the recent JCS) which he has named after the unicorn. It’s an argument against certain kinds of theory of consciousness, and its surprising central premise is that there is no such thing as the property of being represented.

How can that be? Properties come for free, don’t they? In an argument, at least, I can surely have any property I can think of for the asking. And if there is no such thing as the property of being represented, it seems to follow inevitably that nothing is being represented, nothing ever has been represented, and nothing ever will be represented. That is a trifle counter-intuitive, to say the least.

The centre of the argument is the contention that since there are mental representations of things that don’t exist (such as unicorns), there can’t be any such property as being represented. Curiouser and curiouser: what’s the problem with non-existent things having properties? Don’t unicorns have the properties of being equine, and horned, and for that matter, mythical? If they don’t, we seem to have some problems on our hands. How are we going to tell the difference between unicorns and hippogriffs, which on this view have exactly the same properties (i.e. none)? Although I suppose it must be granted that telling the difference between a cage with all the hippogriffs in the world in it, and one containing all the unicorns, might be tricky.

We need, of course, to see the argument in context, since it is really directed narrowly at two specific kinds of theory. The first are Higher-Order Representational (HOR) theories. These say, in essence, that a conscious mental state is one we are in turn conscious of; or, to avoid the infinite regress which seems to threaten there, we might say a conscious thought is one we’re aware of having. The second, First-Order Representational (FOR) theories, say that for us to be conscious of something, for it to be phenomenal, it has to be represented appropriately in our awareness. Both kinds of theory must fall, says Mandik, if we pull away the carpet of representedness from under them.

By way of background, Mandik sets out rather nicely what he calls ‘The problem of Intentionality’. It amounts to the incompatibility of three plausible propositions:

1) Relations can only obtain between relata that exist.

2) There exist mental representations of nonexistent things.

3) Representation is a relation between that which does the representing and that which is represented.
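The clash between the three can be made explicit in a small first-order sketch (my formalization, not Mandik’s own notation), writing $R(x,y)$ for “$x$ represents $y$” and $E(x)$ for “$x$ exists”:

```latex
% (1) Relations obtain only between existing relata:
\forall x\,\forall y\,\bigl(R(x,y)\rightarrow E(x)\wedge E(y)\bigr)

% (2) Some mental state represents a nonexistent thing, say a unicorn u:
\exists x\,\bigl(R(x,u)\wedge \lnot E(u)\bigr)

% (3) is built into the notation: representing just is the binary relation R.
% Take a witness a for (2): R(a,u) and not-E(u).
% By (1), R(a,u) entails E(u) -- contradicting not-E(u).
```

Giving up (3), as Mandik does, amounts to denying that representing is a genuine two-place relation at all; giving up (1), as I suggest below, would instead allow relations to reach nonexistent relata.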

I suggested above that it might be natural to question proposition 1; but Mandik prefers to give up 3. As he makes clear in his conclusion, he’s OK with representing; it’s just being represented which is the problem. I think the intuition behind this is now clearer: representation is something that appertains to the representer; it doesn’t really touch the represented.

This makes some sense. One of the puzzling features of intentionality is its seemingly unlimited power to reach across time and space in a most peculiar way. It’s easy for me to pick out someone distant in time and space and refer to them. Indeed, it can be someone of whom I know virtually nothing.  Take the 10,000th man who ever saw a picture of a unicorn. He surely existed, but neither I nor he could identify him. Nevertheless, I referred to him successfully;  I can say if I like that the extra full stop at the end of this sentence represents him..  There he is.  I’ve suddenly changed his properties, since he now has the additional property of being one of the select company of people referred to in this blog, though sadly he never knew it.

Nonsense! Or so I suppose Mandik would say; you haven’t touched that man at all. The property of ‘being mentioned here’ is not really one of his properties in any meaningful sense, and there are no real relations between him and you. His being represented in this sort of sense just is not the kind of real property which could provide the basis for your being conscious of him.

If you doubt it, he might add, reprising a point made in the paper, what about square circles? Or, say, all those people not referred to here (paradoxical because that phrase itself refers to them).  We can talk about them if we like, but surely it’s clear that talking about them is not to stand in any real relation to them. And if it doesn’t work for them, it doesn’t work for anything or anyone.

This is an intriguing argument, and although the current paper only concerns itself with HORs and FORs, it obviously has implications in a much wider context. There are, of course, other arguments which can also be deployed against FORs and HORs. In passing, Mandik offers a nicely deflating explanation of the appeal of higher-order theories. One thing that’s true about thoughts we’re not aware of, he points out, is that we’re not aware of them. Consequently, when we introspect, it’s not surprising if all our conscious states seem to be ones we’re conscious of…

New Turing tests

The New Scientist, under the title ‘Tests that show machines closing in on human abilities’, has a short review of some different ways in which robotic or computer performance is being tested against human achievement. The piece is not really reporting fresh progress in the way its title suggests – the success of Weizenbaum’s early chat-bot Eliza is not exactly breaking news, for example – but I think the overall point is a good one. In the last half of the last century, it was often assumed that progress towards AI would see all the barriers come down at once: as soon as we had a robot that could meet Turing’s target of a few minutes’ successful small-talk, fully-functioning androids would rapidly follow.

In practice, Turing’s test has not been passed as he expected, although some stalwart souls continue to work on it. But we have seen the overall problem of human mentality unpicked into a series of substantial but lesser challenges. Human levels of competence remain the target, but it is competence in different narrow fields, with no expectation that solving the problem in one area solves it in all of them.

The piece ends with a quote from Stevan Harnad which suggests he clings to the old way of looking at this:

“If a machine can prove indistinguishable from a human, we should award it the respect we would to a human”

That may be true, in fact, but the question is: indistinguishable in which respects? People often quote a particular saying in this connection: if it walks like a duck and quacks like a duck, it’s a duck. Actually, even in the case of ducks this isn’t as straightforward as it might seem, since other wildfowl may be ducklike in some respects. Given a particular bird, how many of us could say with any certainty whether it was a Velvet Scoter, a White-winged Scoter – or just a large sea-duck? But it’s worse than that. The Duck Law, as we may call it, works fairly well in real life; but that’s because, as a matter of fact, there aren’t all that many anatine entities in the world other than outright ducks. If there were a cunning artificer bent on turning out false, mechanical ducks like the legendary one made by Vaucanson – which did not merely walk and quack like a duck, but ate and pooped like one – we should need a more searching principle. When it comes to the Turing Test, that is pretty much the position we find ourselves in.

There is, of course, a more rigorous version of Duck Law which is intellectually irreproachable, namely Leibniz’s Law. Loosely, this says that if two objects share all the same properties, they are the same object. The problem is that, in order to work, Leibniz’s Law has to be applied in the most rigorous fashion. It requires that all properties must be the same. To be indistinguishable from a human being in this sense means literally indistinguishable, i.e. having human guts, a human mother, and so on.
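A toy sketch may make the gap concrete (the property names and the two “ducks” here are invented for illustration): Duck Law checks only a few chosen observable properties, while Leibniz’s Law demands agreement on every property whatsoever.

```python
# Hypothetical properties chosen for the casual "Duck Law" test
DUCK_LAW_PROPERTIES = ("walks_like_duck", "quacks_like_duck")

real_duck = {
    "walks_like_duck": True,
    "quacks_like_duck": True,
    "has_guts": True,          # properties the casual test never inspects
    "hatched_from_egg": True,
}

vaucanson_duck = {
    "walks_like_duck": True,
    "quacks_like_duck": True,
    "has_guts": False,         # clockwork inside
    "hatched_from_egg": False,
}

def duck_law(a, b):
    """Indistinguishable on a few chosen observable properties."""
    return all(a[p] == b[p] for p in DUCK_LAW_PROPERTIES)

def leibniz_law(a, b):
    """Indistinguishable on every property whatsoever."""
    return a == b

print(duck_law(real_duck, vaucanson_duck))     # True: passes the casual test
print(leibniz_law(real_duck, vaucanson_duck))  # False: not literally identical
```

The automaton sails through the first test and fails the second; the question about machines and respect is really the question of where, between those two extremes, the line should be drawn.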

So, in which respects must a robot resemble a human being in order to be awarded the same respect as a human? It now seems unlikely that a machine will pass the original Turing Test soon; but even if it did, would that really be enough?  Just looking like a human, even in flexible animation which reproduces the pores of the skin and typical human movements, is clearly not enough. Nor is being able to improvise jazz tunes seamlessly with a human player. But these things are all significant achievements nevertheless. Perhaps this is the way we make progress.

Or possibly, at some stage in the future, someone will notice that if he were to bolt together a hundred different software modules and some appropriate bits of hardware, all by then readily available, he could theoretically produce a machine able to do everything a human can do; but he won’t by then see any point in actually doing it.

Are determinists evil?

Normally we try to avoid casting aspersions on the character of those who hold a particular opinion; we like to take it for granted that everyone in the debate is honest, dispassionate, and blameless. But a recent paper by Baumeister, Masicampo and DeWall (2009), described in PsyBlog, suggests that determinism (disbelief in free will) is associated with lower levels of helpfulness and higher levels of aggression. Another study, reported in Cognitive Daily, found that determinists are also cheats.

It’s possible to question the way these experiments were done. They involved putting deterministic thoughts into some of the subjects’ minds by, for example, reading them passages from the works of Francis Crick (who, besides being an incorrigible opponent of free will in philosophical terms, also, I suppose, opened the way for genetic determinism). That’s all very well, but it could be that, as it were, habitual determinists are better able to resist the morally corrosive effect of their beliefs than people who have recently been given a dose of persuasive determinism.

However, the results certainly chime with a well-established fear that our growing ability to explain human behaviour is tending to reduce our belief in responsibility, so that malefactors are able to escape punishment merely by quoting factors that influenced their behaviour.  I was powerless; the crime was caused by chemical changes in my brain.

PsyBlog concludes that we must cling to belief in free will, which sounds perilously close to suggesting that we should pretend to believe in it even if we don’t. But leaving aside for a moment the empirical question of whether determinists are morally worse than those who believe in free will, why should they be?

The problem arises because the traditional view of moral responsibility requires that the evil act must be freely chosen in order for the moral taint to rub off on the agent. If no act is ever freely chosen, we may do bad things but we shall never ourselves be truly bad, so moral rules have no particular force. A few determinists, perhaps, would bite this bullet and agree that morality is a delusion, but I think most would not. It would be possible for determinists to deny the requirement for freedom and say instead that people are guilty of wrong-doing simply when connected causally or in other specified ways with evil acts, regardless of whether their behaviour is free or not. This restores the validity of moral judgements and justifies punishment, although it leaves us curiously helpless.

This tragic view was actually current in earlier times: Oedipus considered himself worthy of punishment even though he had had no knowledge of the crimes he was committing, and St Augustine had to argue against those who contended that the rape suffered by Lucretia made her a sinful adulteress – something which was evidently still a live issue in 1748 when Richardson was writing Clarissa, where the same point is raised. Even currently in legal theory we have the notion of strict liability, whereby people may be punished for things they had no control over (if you sell poisonous food, you’re liable, even if it wasn’t you that caused it to be poisonous). This is, I think, a case of ancients and moderns reaching similar conclusions from almost antithetical understandings: in the ancient world you could be punished for things you couldn’t have prevented because moral taint was so strong; in the contemporary world you can be punished for things you couldn’t have prevented because moral taint is irrelevant and punishment is merely a matter of deterrence.

That is of course, the second escape route open to determinists; it’s not about moral responsibility, it’s about deterrence, social sanctions, and inbuilt behavioural norms, which together are enough to keep us all on the straight and narrow. This line of argument opens up an opportunity for the compatibilists, who can say: you evidently believe that human beings have some special capacity to change their behaviour in response to exhortation or punishment – why don’t we just call that free will? More dangerously, it leaves the door open for the argument that those who believe their decisions have real moral consequence are likely to behave better than those who comply with social norms out of mere pragmatism and conditioning.

Meantime, to the rescue come De Brigard, Mandelbaum, and Ripley (pdf): as a matter of fact, they say, our experiments show that giving a neurological explanation for bad behaviour has no effect on people’s inclination to condemn it. It seems to follow that determinism makes no difference. They are responding to Nahmias, who put forward the interesting idea of bypassing: people are granted moral immunity if they are thought to suffer from some condition that bypasses their normal decision-making apparatus, but not if they are subject to problems which are thought to leave that apparatus in charge. In particular, Nahmias found that subjects tended to dismiss psychological excuses, but accept neurological ones. De Brigard, Mandelbaum and Ripley, by contrast, found it made no difference to their subjects’ reactions whether a mental condition such as anosognosia was said to be psychological or neurological; the tendency to assign blame was much the same in both cases. I’m not sure their tests did enough to make sure the distinction between neurological and psychological explanations was understood by the subjects; but their research does underline a secondary implication of the other papers: that most people are not consistent and can adopt different interpretations on different occasions (notably there were signs that subjects were more inclined to assign blame where the offence was more unpleasant, which is illogical but perhaps intuitively understandable).

I suspect that people’s real-life moral judgements are for the most part not much affected by the view they take on a philosophical level, and that modern scientific determinism has really only provided a new vocabulary for defence lawyers. A hundred or two hundred years ago, they might have reminded a jury of the powerful effect of Satan’s wiles on an innocent but redeemable mind;  now it may be the correctable impact of a surge of dopamine they prefer to put forward.