FORs, HORs, and unicorns

Picture: unicorn. Pete Mandik has an intriguing argument (expounded in the recent JCS) which he has named after the unicorn.  It’s an argument against certain kinds of theory of consciousness, and its surprising central premise is that there is no such thing as the property of being represented.

How can that be? Properties come for free, don’t they – in my arguments at least, surely I can have any property I can think of, just for the asking?  And if there is no such thing as the property of being represented, it seems to follow inevitably that nothing is being represented, nothing ever has been represented, and nothing ever will be represented. That is a trifle counter-intuitive, to say the least.

The centre of the argument is the contention that since there are mental representations of things that don’t exist (such as unicorns), there can’t be any such property as being represented.  Curiouser and curiouser: what’s the problem with non-existent things having properties? Don’t unicorns have the properties of being equine, and horned, and for that matter, mythical? If they don’t, we seem to have some problems on our hands. How are we going to tell the difference between unicorns and hippogriffs, which on this view have exactly the same properties (ie none)?  Although I suppose it must be granted that telling the difference between a cage with all the hippogriffs in the world in it, and one containing all the unicorns, might be tricky.

We need, of course, to see the argument in context, since it is really directed narrowly at two specific kinds of theory. The first kind is Higher Order Representation (HOR) theories. These say, in essence, that a conscious mental state is one we are in turn conscious of – or, to avoid the infinite regress which seems to threaten there, a conscious thought is one we’re aware of having, we might say.  The second kind, First Order Representational (FOR) theories, say that for us to be conscious of something, for it to be phenomenal, it has to be represented appropriately in our awareness. Both of these kinds of theory must fall, says Mandik, if we pull away the carpet of representedness from under them.

By way of background, Mandik sets out rather nicely what he calls ‘The problem of Intentionality’. It amounts to the incompatibility of three plausible propositions:

1) Relations can only obtain between relata that exist.

2) There exist mental representations of nonexistent things.

3) Representation is a relation between that which does the representing and that which is represented.
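The inconsistency can be made explicit with a minimal formal sketch – not Mandik’s own notation, just one way of writing it out, taking premise 3 as licence to treat representation as a two-place relation R, and writing E for existence:

```latex
\begin{align*}
\text{(1)}\quad & \forall x\,\forall y\,\bigl(R(x,y) \rightarrow E(x) \wedge E(y)\bigr)
  && \text{relations need existing relata} \\
\text{(2)}\quad & \exists x\,\exists y\,\bigl(R(x,y) \wedge \neg E(y)\bigr)
  && \text{some representations target nonexistents}
\end{align*}
```

Instantiating (2) gives some a and b with R(a, b) and ¬E(b), while (1) yields E(b) from R(a, b) – a flat contradiction. So at least one of the three must go: deny (1) and relations can reach nonexistents; deny (3) and there is no relation R to apply (1) to in the first place, which is Mandik’s route.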

I suggested above that it might be natural to question proposition 1; but Mandik prefers to give up 3. As he makes clear in his conclusion, he’s OK with representing, it’s just the being represented which is the problem. I think the intuition behind this is now clearer; representation is something that appertains to the representer; it doesn’t really touch the represented.

This makes some sense. One of the puzzling features of intentionality is its seemingly unlimited power to reach across time and space in a most peculiar way. It’s easy for me to pick out someone distant in time and space and refer to them. Indeed, it can be someone of whom I know virtually nothing.  Take the 10,000th man who ever saw a picture of a unicorn. He surely existed, but neither I nor he could identify him. Nevertheless, I referred to him successfully;  I can say if I like that the extra full stop at the end of this sentence represents him..  There he is.  I’ve suddenly changed his properties, since he now has the additional property of being one of the select company of people referred to in this blog, though sadly he never knew it.

Nonsense! Or so I suppose Mandik would say; you haven’t touched that man at all. The property of ‘being mentioned here’ is not really one of his properties in any meaningful sense, and there are no real relations between him and you. His being represented in this sort of sense just is not the kind of real property which could provide the basis for your being conscious of him.

If you doubt it, he might add, reprising a point made in the paper, what about square circles? Or, say, all those people not referred to here (paradoxical because that phrase itself refers to them).  We can talk about them if we like, but surely it’s clear that talking about them is not to stand in any real relation to them. And if it doesn’t work for them, it doesn’t work for anything or anyone.

This is an intriguing argument, and although the current paper only concerns itself with HORs and FORs, it obviously has implications in a much wider context. There are, of course, other arguments which can also be deployed against FORs and HORs.  In passing, Mandik offers a nicely deflating explanation of the appeal of higher-order theories. One thing that’s true about thoughts we’re not aware of, he points out, is that we’re not aware of them. Consequently when we introspect, it’s not surprising if all our conscious states seem to be ones we’re conscious of…

4 thoughts on “FORs, HORs, and unicorns”

  1. Methinks the man doth think too much. It’s all in your head, anyway. Both the relatee and the relator are in your head. And both the representor and the represented are also in your head, when it comes down to it. I think you said it well, Peter, in your paragraph about the 10,000th man. The extra period (full stop) is just a mark made by my printer. All else it might mean is in our heads. So it’s actually quite simple to refer to such things. After all, they are just neural impulses a few neurons away. Of course, this quickly gets down to what meaning means. On that point, I highly recommend some George Lakoff and/or W. J. Freeman. Again, it’s all in our heads. It’s not solipsism. We can quite nicely agree between us upon what’s out there. But we do that in our respective heads.

  2. It seems to me (and I am an engineer not a philosopher) that Mandik is taking a rather circuitous route to make a very straightforward observation. Specifically, brains do not use symbols/abstractions, e.g., words, directly. IOW, actual determinations about meaning or intention – when made consciously – require a detailed awareness of the constellation of properties that define the symbol. Quite simply, brains don’t understand symbols any more than computers do.

    Michael Baggot

  3. Yes, Michael. It seems to me (and I am a programmer, not a philosopher) that our consciousness, as well as a huge amount of non-conscious stuff, comes to us as a series of percepts from all the senses and that we evaluate each one based on a very large array of connections to other percepts that are in some way or another connected or related. To me, it is that connectedness or relatedness that is the essence of meaning. Formal logic and much else of what thinking gets us has little to do with it — IMHO.

  4. Lloyd, thank you for the comments. It seems from your web site and my Google searches that we have both pursued these language and meaning issues rather vigorously, although I am not into phonetics and computational speech. It looks like we have both agreement and disagreement on a number of issues. We could have a very interesting exchange, but Peter’s blog is not the place to do it. I attempted to contact you through your web site but received an error message when I tried to send the message. You can reach me at baggot(at)cruzio.com

    Michael Baggot
