Personhood Week

Personhood Week at National Geographic is a nice set of short pieces briefly touring the crucial but controversial issue of what constitutes a person.

You won’t be too surprised to hear that in my view personhood is really all about consciousness. The core concept for me is that a person is a source of intentions – intentions in the ordinary everyday sense rather than in the fancy philosophical sense of intentionality (though that too). A person is an actual or potential agent, an entity that seeks to bring about deliberate outcomes. There seems to be a bit of a spectrum here: at the lower level it looks as if some animals have thoughtful and intentional behaviour of the kind that would qualify them for a kind of entry-level personhood. At its most explicit, personhood implies the ability to articulate complicated contracts and undertake sophisticated responsibilities: this is near enough the legal conception.

The law, of course, extends the idea of a person beyond mere human beings, allowing a form of personhood to corporate entities, which are able to make binding agreements, own property, and even suffer criminal liability. Legal persons of this kind are obviously not ‘real’ ones in some sense, and I think the distinction corresponds with the philosophical distinction between original (or intrinsic, if we’re bold) and derived intentionality. The latter distinction comes into play mainly when dealing with meaning. Books and pictures are about things; they have meanings and therefore intentionality, but their meaningfulness is derived: it comes only from the intentions of the people who interpret them, whether their creators or their ‘audience’. My thoughts, by contrast, really just mean things, all on their own and however anyone interprets them: their intentionality is original or intrinsic.

So, at least, most people would say (though others would energetically contest that description). In a similar way my personhood is real or intrinsic: I just am a person; whereas the First Central Bank of Ruritania has legal personhood only because we have all agreed to treat it that way. Nevertheless, the personhood of the Ruritanian Bank is real (hypothetically, anyway; I know Ruritania does not exist – work with me on this), unlike that of, say, the car Basil Fawlty thrashed with a stick, which is merely imaginary and not legally enforceable.

Some, I said, would contest that picture: they might argue that ‘a source of intentions’ makes no sense because ‘people’ are not really sources of anything; that we are all part of the universal causal matrix, and nothing comes of nothing. Really, they would say, our own intentions are just the same as those of Banca Prima Centrale Ruritaniae; it’s just that ours are more complex and reflexive – but the fact that we’re deeming ourselves to be people doesn’t make it any the less a matter of deeming. I don’t think that’s quite right – just because intentions don’t feature in physics doesn’t mean they aren’t rational and definable entities – but in any case it surely isn’t a hit against my definition of personhood; it just means there aren’t really any people.

Wait a minute, though. Suppose Mr X suffers a terrible brain injury which leaves him incapable of forming any intentions (whether this is actually possible is an interesting question: there are some examples of people with problems that seem like this; but let’s just help ourselves to the hypothesis for the time being). He is otherwise fine: he does what he’s told and if supervised can lead a relatively normal-seeming life. He retains all his memories, he can feel normal sensations, he can report what he’s experienced, he just never plans or wants anything. Would such a man no longer be a person?

I think we are reluctant to say so because we feel that, contrary to what I suggested above, agency isn’t really necessary, only conscious experience. We might have to say that Mr X loses his legal personhood in some senses; we might no longer hold him responsible or accept his signature as binding, rather in the way that we would for a young child: but he would surely retain the right to be treated decently, and to kill or injure him would be the same crime as if committed against anyone else. Are we tempted to say that there are really two grades of personhood that happen to coincide in human beings – a kind of ‘Easy Problem’ agent personhood on the one hand and a ‘Hard Problem’ patient personhood on the other? I’m tempted, but the consequences look severely unattractive. Two different criteria for personhood would imply that I’m a person in two different ways simultaneously, but if personhood is anything, it ought to be single, shouldn’t it? Intuitively and introspectively it seems that way. I’d feel a lot happier if I could convince myself that the two criteria cannot be separated, that Mr X is not really possible.

What about Robot X? Robot X has no intentions of his own and he also has no feelings. He can take in data, but his sensory system is pretty simple and we can be pretty sure that we haven’t accidentally created a qualia-experiencing machine. He has no desires of his own, not even a wish to serve, or avoid harming human beings, or anything like that. Left to himself he remains stationary indefinitely, but given instructions he does what he’s told: and if spoken to, he passes the Turing Test with flying colours. In fact, if we ask him to sit down and talk to us, he is more than capable of debating his own personhood, showing intelligence, insight, and understanding at approximately human levels. Is he a person? Would we hesitate over switching him off or sending him to the junk yard?

Perhaps I’m cheating. Robot X can talk to us intelligently, which implies that he can deal with meanings. If he can deal with meanings, he must have intentionality; and if he has that, perhaps he must, contrary to what I said, be able to form intentions after all – so perhaps the conditions I stipulated aren’t possible? And then, how does he generate intentions, as a matter of fact? I don’t know, but on one theory intentionality is rooted in desires or biological drives. The experience of hunger is just primally about food, and from that kind of primitive aboutness all the fancier kinds are built up. Notice that it’s the experience of hunger, so arguably if you had no feelings you couldn’t get started on intentionality either! If all that is right, neither Robot X nor Mr X is really as feasible as they might seem: but it still seems a bit worrying to me.

11 thoughts on “Personhood Week”

  1. Pretty interesting. What would be the relation of personhood to the self or the ego? But don’t you think that personhood also requires a biographical background? If Mr. X loses all his memories but keeps his intentions (or desires), won’t his personhood be even more questioned? Intentions are not just rooted in biography.

    But I think you are quite right, a person is its will.

    As a matter of fact, monks who reach profound contemplative states report experiencing a cessation of personhood: no desire, no will, no ego, no person.

  2. “Robot X can talk to us intelligently, which implies that he can deal with meanings. If he can deal with meanings, he must have intentionality, and if he has that perhaps he must, contrary to what I said, be able to form intentions after all – so perhaps the conditions I stipulated aren’t possible after all?”

    Don’t we need a causal theory of intentionality before we can assume robots have it? After all, Rosenberg goes the other way: since robots/computers can’t have intentionality, we don’t either.

  3. Interesting presentation that could fit into an evolutionary background.
    1) First, an abiotic universe with ubiquitous physico-chemical laws. No teleology. Only some (local) trend toward increasing complexity.
    2) Then the emergence of local constraints (like ‘stay alive’ for a basic cell). Birth of teleology, agency, and intentionality (‘primitive aboutness’) as related to local constraint satisfaction. These are intrinsic to the agent. Animal life with ‘entry-level personhood’.
    3) Then the evolution of animal life brings in self-consciousness and free will. These come in addition, and transform primitive aboutness and entry-level personhood into human ones.
    4) Humans build robots in which they program derived constraints. Robots are capable of derived meanings/intentionality; they can implement actions to satisfy the derived constraints.
    Indeed, ‘robot X has no intentions of his own’. We have not ‘created a qualia-experiencing machine’. But I’m afraid robot X won’t be able to pass the Turing Test: as he does not carry human-type constraints (like seeking happiness or avoiding anxiety), he is not in a position to generate human-type meanings when receiving a question from humans.
    Robot X can talk to us, deal with meanings, and have intentionality, but they are of the derived type.
    As of today, only living entities can carry intrinsic constraints, perform intrinsic intentionality, and generate intrinsic meanings.
    What about the future? Many new concepts may come up. One option would be to imagine that some ‘merger’ of life and computing could bring intrinsic constraints/meanings/intentionality to robots.
    Is meat in the computer the future of AI?
    More on this at http://philpapers.org/rec/MENTTC-2

  4. @Christophe: Regarding ‘primitive-intentionality’, do you mean that intentionality is born from natural selection via some kind of proto-intentionality or do you think some kind of proto-consciousness exists beforehand?

  5. Bit off topic of me, but
    In fact, if we ask him to sit down and talk to us, he is more than capable of debating his own personhood, showing intelligence, insight, and understanding at approximately human levels.

    Suppose that after a conversation (or a number of conversations) the robot doesn’t just sit there afterward. Suppose it can run conversations in its own processes – continuing the conversation – and the conversation it has with itself prompts it to act?

    Might not be the right speculation for the thread – if not, please forgive my over enthusiasm! 🙂

  6. @ Sci: Intentionality, IMHO, is born from a trend of increasing complexity in the primitive abiotic universe when local constraints came in. I’m not sure that ‘natural selection’ can be part of the story, at this level at least.
    We can agree that intentionality is relative to an agent: aboutness for an agent, aboutness about something. And no agent in an abiotic universe => no aboutness, no intentionality, no proto-intentionality.
    So, for me, aboutness came into the universe with local constraints, with life. But as it is difficult to position the emergence of life within a Darwinian selection process, I prefer the wording ‘trend of increasing complexity’ (maybe the Darwinian process can be seen as part of it).
    Regarding consciousness, if we accept that life is needed to support it, then the scenario is the same as above: no proto-consciousness in the abiotic universe.
    But perhaps we do not understand consciousness the same way. I use Ned Block’s concepts of consciousness (phenomenal, access, monitoring, and self).

  7. Christophe, I think you should consider an alternative explanation: that ‘stay alive’ doesn’t exist. It’s just that the dead didn’t repeat – the patterns that continued to repeat are sort of a photographic negative of the dead (of the unrepeating). Which is quite a different explanation from some sort of ‘stay alive’ semantic coming into existence.

    Birth of teleology, of agency and of intentionality (‘primitive aboutness’) as related to local constraint satisfaction.

    I would say these are references to the massive simplifications used in regular cognition. As much as ‘I am hungry’ is a massive simplification of the various resource depletions/headings toward not repeating that is happening across the repeating patterns mass.

    From the ‘inside’, however, it’d seem these things are coming into existence/being birthed at some point.

  8. Callan, if I understand you well, you think it is better to speak of ‘the dead didn’t repeat’ than of ‘stay alive’. But I’m not sure it is possible to speak about death without having first considered life. Death can exist only as a consequence of life, when life stops (the abiotic universe that was in place before life came in was inert, not dead). Along the same lines is the misleading and circular definition of life as ‘the set of functions that resist death’.
    If I have misunderstood you, please correct me.
    Regarding the ‘references to the massive simplifications used in regular cognition’, I agree that the MGS alone applies mostly to reflex-type behaviours. Using it alone for human behaviours (or even for animal ones) is indeed too much of a simplification. It cannot by itself explain elaborate behaviours where different constraints and meaning generation can exist, associated with memorized experiences. We then have to introduce meaningful representations of items of the environment for an agent. These meaningful representations are made of:
    – meanings generated by information relative to the item and received by the agent from its environment;
    – meanings generated by information related to the item and coming from inside the agent: memorized experiences and action scenarios with a history of outcomes (for an organism: interoception, proprioception, and emotions).
    These elements of the representation are interactive and dynamically characterize the represented entity. The agent will use the content of these (meaningful) representations as needed to satisfy its constraints. Such representations are built by and for the agent; they link/embed the agent in its environment. We are far from GOFAI-type representations.
    Coming back to ‘Birth of teleology, of agency and of intentionality (‘primitive aboutness’) as related to local constraint satisfaction’, I feel we should keep such a perspective when the first living elements appeared in the evolution of the universe. Would you agree?
    But I’m not sure I have understood what you mean by ‘various resource depletions/headings toward not repeating that is happening across the repeating patterns mass’. Could you please develop a bit?

  9. But I’m not sure I have understood what you mean by ‘various resource depletions/headings toward not repeating that is happening across the repeating patterns mass’. Could you please develop a bit?

    This explains your life/death issues – yet at the same time it’s so not personally relatable that it doesn’t seem to be talking about the subject.

    I put in the long text to avoid using kludge words like ‘death’ – precisely to avoid supporting an ecology of semantics simply from having used a word like that (which is like avoiding getting your prints on a weapon, in a way). I am really trying to avoid developing it, to avoid any landmine words like ‘death’.

    Resource depletion – joules of energy dissipated. A repeating pattern repeats by the use of joules, so it’s running out of what makes it repeat. It fails to repeat in various places, and this makes other parts (in addition to lacking joules) fail to enact the physical actions of their pattern (from lack of fuel). The integrity of the repeating pattern starts to break in each individual component. Other repeating patterns start collecting the remaining resources – I’ll dare two landmine words: the worms start eating the body/the mass.

    You might call it death. But since I haven’t used that word, I don’t have any performative contradiction, nor do I enter into any circular definitions.

    Coming back to ‘Birth of teleology, of agency and of intentionality (‘primitive aboutness’) as related to local constraint satisfaction’, I feel we should keep such a perspective when the first living elements appeared in the evolution of the universe. Would you agree?

    I wouldn’t agree. I did in the past, until I found non-meaning substitutes.

  10. Callan. Thanks for these developments.
    One last point, about your position regarding my evolutionary perspective (the first living elements appearing in the universe associated with local constraint satisfaction).
    You say ‘I wouldn’t agree. I did in the past, until I found non-meaning substitutes.’
    Would you tell us about these ‘non-meaning substitutes’?

  11. Christophe,

    I had been trying to figure out how I would build an AI. I’d decided I’d need ‘positive’ input to get things done – but how do you make ‘positive’?

    Then I realised you wouldn’t – it’d just be 5 volts on the wire. Through a complex series of pursuit codes (and by ‘pursuit’ I don’t mean anything fancier than the code for the bad guys in the Doom FPS pursuing you, except that some of the pursuit is internal), it would seem from its ‘inside’ like ‘positive’ – at least for a system that takes some input from internal conditions, not just external sensors. That’s what it’d report. Constrained by the pursuit frame, it wouldn’t have the range to report anything else (and without a pursuit frame, it would do nothing/pursue nothing). Not directly/by default, anyway.

    A bit like Terry Pratchett’s dwarves singing ‘gold, gold, gold, gold!’. An obsession that comes directly from the mould of their making – and so much else is ignored/neglected outside the frame of gold. Or you could sing ‘meaning, meaning, meaning, meaning!’ instead. Agency, intentionality – can we let go of these in our songs any more than the dwarves can let go of gold?
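[Editor’s note: the ‘pursuit frame’ idea above can be caricatured in a few lines of code. This is purely a toy illustration, not Callan’s actual design – every name and number below is invented. A bare numeric ‘wire’ stands in for ‘positive’, and the only behaviour is a Doom-style gap-closing loop; nowhere is any ‘meaning’ programmed in.]

```python
# Toy sketch of a "pursuit frame": no built-in 'positive' semantic,
# just a number on a wire and a rule that closes a gap.

WIRE_HIGH = 5.0  # "5 volts on the wire" -- a bare signal, not a meaning


def pursue(position, target):
    """One pursuit step: move one unit toward the target, like a Doom
    bad guy closing on the player. No 'desire' anywhere, just a rule."""
    if position < target:
        return position + 1
    if position > target:
        return position - 1
    return position


def run_agent(position, target, steps=100):
    """Run the pursuit loop. The 'wire' goes high only while the gap is
    shrinking; from the inside, that signal is all the system could ever
    report as 'positive'."""
    readings = []
    for _ in range(steps):
        new_position = pursue(position, target)
        shrinking = abs(target - new_position) < abs(target - position)
        readings.append(WIRE_HIGH if shrinking else 0.0)
        position = new_position
        if position == target:
            break
    return position, readings


final, readings = run_agent(position=0, target=3)
```

From the outside, `readings` is just a list of voltages; only within the pursuit frame does it read as ‘positive’ – which is the point of the caricature.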
