Can machines be moral agents? There’s a bold attempt to clarify the position in the IJMC, by Parthemore and Whitby (draft version). In essence they conclude that the factors that go to make a moral agent are the same whether the entity in question is biological or an artefact. On the face of it that seems like common sense – although the opposite conclusion would be more interesting; and there is at least one goodish reason to think that there’s a special problem for artefacts.

But let’s start at the beginning. Parthemore and Whitby propose three building blocks which any successful candidate for moral agency must have: the concept of self, the concept of morality, and the concept of concept.

By the concept of self, they mean not simply a basic awareness of oneself as an object in the landscape, but a self-reflective awareness along the lines of Damasio’s autobiographical self. Their rationale is not set out very explicitly, but I take it they think that without such a sense of self, your acts would seem no different from other events in the world; just things you notice happening, and that therefore you couldn’t be seen as responsible for them. It’s a reasonable claim, but I think it properly requires more discussion. A sceptic might make the case that there’s a difference between feeling you’re not responsible and actually not being responsible; that someone could have the cheerily floaty feeling of being a mere observer while actually carrying out acts that were premeditated at some level of the mind. There’s scope then for an extensive argument about whether conscious awareness of making a decision is absolutely necessary to moral ownership of the decision. We’ll save that for another day, and of course Parthemore and Whitby couldn’t hope to deal with every implication in a single paper. I’m happy to take their views on the self as reasonable working assumptions.

The second building block is the concept of morality. Broadly, Parthemore and Whitby want their moral agent to understand and engage with the moral realm; it’s not enough, they say, to memorise by rote learning a set of simple rules. Now of course many people have seemed to believe that learning and obeying a set of rules such as the Ten Commandments was necessary or even sufficient for being a morally good person. What’s going on here? I think this becomes clearer when we move on to the concept of concept, which roughly means that the agent must understand what they’re doing. Parthemore and Whitby do not mean that a moral agent must have a philosophical grasp of the nature of concepts, only that they must be able to deal with them in practical situations, generalising appropriately where necessary. So I think what they’re getting at is not that moral rules are invalid in themselves, merely that a moral agent has to have sufficient conceptual dexterity to apply them properly. A rule about not coveting your neighbour’s goods may be fine, but you need to be able to recognise neighbours and goods and instances of their being coveted, without needing, say, a list of the people to be considered neighbours.

So far, so good, but we seem to be missing one item normally considered fundamental: a capacity for free action. I can be fully self-aware, understand and appreciate that stealing is wrong, and be aware that by picking up a chocolate bar without paying I am in fact stealing; but it won’t generally be considered a crime if I have a gun to my head, or have been credibly told that if I don’t steal the chocolate several innocent people will be massacred. More fundamentally, I won’t be held responsible if it turns out that because of internal factors I have no ability to choose otherwise: yet the story told by physics seems to suggest I never really have the ability to choose otherwise. I can’t have real responsibility without real free will (or can I?).

Parthemore and Whitby don’t really acknowledge this issue directly; but they do go on to add what is effectively a fourth requirement for moral agency: you have to be able to act against your own interests. It may be that this is in effect their answer; instead of a magic-seeming capacity for free will they call for a remarkable but fully natural ability to act unselfishly. They refer to this as akrasia, consciously doing the worse thing: normally I think the term refers to the inability to do what you can see is the morally right thing; here Parthemore and Whitby seem to reinterpret it as the ability to do morally good things which run counter to your own selfish interests.

There are a couple of issues with that principle. First, it’s not actually the case that we only act morally when going against our own interests; it’s just that those are the morally interesting cases because we can be sure in those instances that morality alone is the motivation. Worse than that, someone like Socrates would argue that moral action is always in your own best interests, because being a good person is vastly more important than being rich or successful; so no rational person who understood the situation properly would ever choose to do the wrong thing. Probably though, Parthemore and Whitby are really going for something a little different. They link their idea to personal boundaries, citing Andy Clark, so I think they have in mind an ability to extend sympathy or a feeling of empathy to others. The ability they’re after is not so much that of acting against your own interests as that of construing your own interests to include those of other entities.

Anyway, with that conception of moral agency established, are artefacts capable of qualifying? Parthemore and Whitby cite a thought-experiment of Zlatev: suppose someone who lived a normal life were found after death to have had no brain but a small mechanism in their skull: would we on that account disavow the respect and friendship we might have felt for the person during life? Zlatev suggests not; and Parthemore and Whitby, agreeing, propose that it would make no difference if the skull were found to be full of yogurt; indeed, supposing someone who had been found to have yogurt instead of brains were able to continue their life, they would see no reason to treat them any differently on account of their acephaly (galactocephaly?). They set this against John Searle’s view that it is some as-yet-unidentified property of nervous tissue that generates consciousness, and that a mind made out of beer cans is a patent absurdity. Their view, which certainly has its appeal, is that it is performance that matters; if an entity displays all the signs of moral sense, then let’s treat it as a moral being.

Here again Parthemore and Whitby make a reasonable case but seem to neglect a significant point. The main case against artefacts being agents is not the Searlian view, but a claim that in the case of artefacts responsibility devolves backwards to the person who designed them, who either did or should have foreseen how they would behave; is at any rate responsible for their behaving as they do; and therefore bears any blame. My mother is not responsible for my behaviour because she did not design me or program my brain (well, only up to a point), but the creator of Robot Peter would not have the same defence; he should have known what he was bringing into the world. It may be that in Parthemore and Whitby’s view akrasia takes care of this too, but if so it needs explaining.

If Parthemore and Whitby think performance is what matters, you might think they would be well-disposed towards a Moral Turing Test; one in which the candidate’s ability to discuss ethical issues coherently determines whether we should see it as a moral agent or not. Just such a test was proposed by Allen et al, but in fact Parthemore and Whitby are not keen on it. For one thing, as they point out, it requires linguistic ability, whereas they want moral agency to extend to at least some entities with no language competence. Perhaps it would be possible to devise pre-linguistic tests, but they foresee difficulties: rightly, I think. One other snag with a Moral Turing Test would be the difficulty of spotting cases where the test subject had a valid system of ethics which nevertheless differed from our own; we might easily end up looking for virtuous candidates and ignore those who consciously followed the ethics of Caligula.

The paper goes on to describe conceptual space theory and its universal variant: an ambitious proposal to map the whole space of ideas in a way which the authors think might ground practical models of moral agency. I admire the optimism of the project, but doubt whether any such mapping is possible. Tellingly, the case of colour space, which does lend itself beautifully to a simple 3D mapping, is quoted: I think other parts of the conceptual repertoire are likely to be much more challenging. Interestingly I thought the general drift suggested that the idea was a cousin of Arnold Trehub’s retinoid theory, though more conceptual and perhaps not as well rooted in neurology.
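To see why colour is the theory’s showcase example: in a conceptual space, a concept like “red” is a convex region around a prototype point, and categorisation amounts to finding the nearest prototype. Here is a minimal sketch of that idea in 3D RGB space; the prototype values and the Euclidean metric are my own illustrative choices, not anything taken from the paper.

```python
import math

# Prototype points for a few colour concepts in RGB space (values 0-255).
# These prototypes, and the Euclidean metric, are illustrative assumptions
# of mine -- not drawn from Parthemore and Whitby's paper.
PROTOTYPES = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
}

def distance(p, q):
    """Euclidean distance between two points of the colour space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def categorise(colour):
    """Assign a colour to the concept whose prototype is nearest.

    Nearest-prototype categorisation partitions the space into convex
    (Voronoi) regions, one region per concept -- the convexity that
    conceptual space theory expects of natural concepts.
    """
    return min(PROTOTYPES, key=lambda name: distance(colour, PROTOTYPES[name]))

print(categorise((200, 40, 30)))   # a darkish red
print(categorise((240, 220, 10)))  # a slightly greenish yellow
```

Nearest-prototype categorisation carves the space into tidy convex cells, which is exactly why colour terms fit the theory so neatly; it is much harder to see what coordinates could be given to, say, moral concepts.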

Overall it’s an interesting paper: Parthemore and Whitby very reasonably say at several points that they’re not out to solve all the problems of philosophy; but I think if they want their points to stick they will unavoidably need to delve more deeply in a couple of places.


16 Comments

  1. scott bakker says:

    Another fascinating post, Peter.

    I find it interesting they begin their abstract thus. “In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences.”

    Here’s one problem, at least: since this question is the question of when a system can *responsibly* be held responsible, we need to pause and ask about the former ‘responsibility.’ When is it morally responsible to hold machines morally responsible? It’s worth noting that we do this whenever we curse or punish machinery that fails us. One can assume that this is simply anthropomorphism for the most part, the irresponsible holding of machines responsible. So approached from this angle, Parthemore and Whitby’s argument can be looked at as laying out the conditions of *responsible anthropomorphization.*

    Someone like Dennett would simply answer, ‘Only so far as it serves our interests,’ the point being that there are no fixed necessary conditions demarcating the applicability of moral anthropomorphization. There’s nothing irresponsible about verbally upbraiding your iPhone, so long as it serves some need. Viewed this way, Parthemore and Whitby are clearly chasing something chimerical simply because the answer will always be, ‘Well, it depends…’ The context in which a machine can be responsibly held responsible will simply depend on the suite of pragmatic interests we bring to any given machine at any given time. If holding them responsible works to serve our interests, then it’s a go. If not, then it’s a no-go.

    In my own terms, this is simply because our moral intuitions are simply heuristic kluges geared to the solution of problems *regardless* of the ‘facts on the ground.’ There are no fixed ontic finishing lines that can be laid out beforehand because the question of whether the application of any given moral heuristic works is always empirical. Only trial and error will provide the kinds of metaheuristics we need to govern the application of moral heuristics in a generally effective manner.

    I can’t help seeing all this machine ethics stuff as a way to shadow-box around the REAL problem, which is the question of when it is appropriate to treat humans like machines, as opposed to moral agents. More and more the corporate answer seems to be, ‘When it serves our interests…’ Then there’s the further question of whether it is *even possible* to treat people like moral agents once the mechanisms of morality are finally laid bare – because at that point, it seems pretty clear you’re treating people as moral agents for mechanistic reasons.

    This is my bigger argument, anyway. That many things, such as morality, require the *absence* of certain kinds of information to function properly.

  2. What Makes Any Biomechanism a Nihilistic Biomechanism? | Three Pound Brain says:

    […] at Conscious Entities has another fascinating post on the issue of machines and morality, this time in response to a paper by Joel Parthemore and Blay […]

  3. Vicente says:

    Yes, great post.

    This is the greatest of all crossroads, all issues converge here… all questions need to be answered before solving this problem.

    Scott:

    I can’t help seeing all this machine ethics stuff as a way to shadow-box around the REAL problem, which is the question of when it is appropriate to treat humans like machines, as opposed to moral agents. More and more the corporate answer seems to be, ‘When it serves our interests…’ Then there’s the further question of whether it is *even possible* to treat people like moral agents once the mechanisms of morality are finally laid bare – because at that point, it seems pretty clear you’re treating people as moral agents for mechanistic reasons.

    nice one !!

  4. Arnold Trehub says:

    Scott: “Then there’s the further question of whether it is *even possible* to treat people like moral agents once the mechanisms of morality are finally laid bare – because at that point, it seems pretty clear you’re treating people as moral agents for mechanistic reasons.”

    I don’t think it would be treating people as moral agents for mechanistic reasons. The concept of morality is a human invention that serves what a culture believes is a very important purpose. It is a set of values to be expressed in behavior to prevent the infliction of damage (culturally defined) and to promote the well being of society. The related concepts/actions of reward and punishment are expected to serve similar purposes — artifacts created by the human brain.

  5. scott bakker says:

    Arnold: I’m not a big believer that any collective appreciation for the ‘value of morality’ will act as anything more than a smokescreen as the utility of explicitly mechanical communicative interactions continues to rise. Institutional powers are funding them, and they will use them.

  6. Vicente says:

    Arnold,

    The concept of morality is a human invention that serves what a culture believes is a very important purpose. It is a set of values to be expressed in behavior to prevent the infliction of damage (culturally defined) and to promote the well being of society.

    Do you really believe this? Except for the fact that it is a human invention, the rest has nothing to do with reality. The range of “immoral” ulterior motives that have driven the establishment of those sets of values, at different times, has no limits. Legal systems are a bit closer to that idea (not much, anyway), although they try to organise societies. Think of how easily exceptions to the rules were made when convenient for the Establishment.

    Fundamentalisms and integrisms are tainted with this approach.

    The only meaningful morality is the one stemming from reason and compassion; this is the naturalised ethics to be sought. The rest only seeks to subjugate the meek and the idiots.

  7. Arnold Trehub says:

    Vicente, I think we can say that the invention of moral norms for social behavior has never been fully successful. But taking into account all ups and downs, don’t you think some progress has been made over the millennia?

    By the way, how would you define the “Establishment”? And what are “integrisms”?

  8. Vicente says:

    Arnold:

    Establishment /ɪˈstæblɪʃmənt/ n. the Establishment: a group or class of people having institutional authority within a society, esp those who control the civil service, the government, the armed forces, and the Church: usually identified with a conservative outlook.

    As for “integrism”, sorry, wrong translation (can’t find an English word); I meant the practice of integrating, or combining, religious and political authority into a single governmental structure, a frequent case in “officially” Muslim countries, or in the Catholic countries some time ago.

    Yes, some progress has been made. I believe that S. Pinker has a say in this… the new scenarios create an evolutionary pressure in which probably certain behaviours (which some might consider more morally acceptable) are favoured, but this is not really the result of human will, not to say free will !? I don’t know, sometimes I feel that today there’s no explicit slavery but implicit one.

  10. Arnold Trehub says:

    Vicente: “I don’t know, sometimes I feel that today there’s no explicit slavery but implicit one.”

    Is there a better alternative to institutional constraints on individual action (implicit slavery?) in any organized society?

  11. Doru says:

    Yes, machines can be moral agents if “microtubule like” quantum structures could be implemented, according to this interesting and very up-to-date 1-hour interview with Stuart Hameroff: http://www.singularityweblog.com/stuart-hameroff-quantum-consciousness/

  12. Vicente says:

    Arnold,

    I was thinking more of the economic side: markets and marketing, big corporations and profit, etc.; of course all these “arrangements” are made with the corresponding institutional collusion and tolerance, sort of a “Brave New World” conceptual implementation.

    The relation to this discussion is how becoming more conscious about reality, getting de-programmed, can help individuals to protect themselves from the system, and concurrently foster a new moral practice based on reason and emerging compassion. This world is so blind and confused that it spends billions in public programmes to increase competitiveness, when it could just spend a minute to increase global cooperativeness. But I understand that we have ape-like brains, and it is very difficult to work with such a system specification.

    First thing to be a moral agent is to know what you are doing:

    Father forgive them because they know not what they do

    Sorry for the diversion; I thought it was relevant to the topic. Of course the point is to formally present how consciousness makes a difference. I suspect the issue becomes meaningless when viewed from the right perspective, which is not possible for an ape-like brain (e.g. like mine).

  13. Vicente says:

    I believe there is something really wrong with this approach: it does not consider emotions and feelings at all. Why are wrong things wrong? At the end of the day, because they produce negative emotions and pain. The conceptual and logical approach used tells nothing to sentient beings. Think of the occasions when remorse is felt, and why. An indispensable function for being a moral agent is to have emotions and empathy towards other beings.

    To have the concept of concept is irrelevant; if anything, knowing what suffering is, and that you can inflict pain and suffering on others, is the basis for constructing one’s own moral code.

    Is there a Turing test for true remorse?

    To have emotions (human) and empathy is the root.

    Of course the problem is not solved; emotions are culture-dependent, and empathy is a psychological trait that can be inherited or cultivated more or less…

    To summarize: morality is there because there are emotions and pain; if not, if nobody cares or feels anything, what is the purpose of morality (sensible morality)? So, what is required to be a moral agent, to begin with?

  14. Vicente says:

    I have used “emotions” and “feelings” a bit carelessly… probably feelings play a more central role.

  15. Tom Clark says:

    Peter:

    “Their view, which certainly has its appeal, is that it is performance that matters; if an entity displays all the signs of moral sense, then let’s treat it as a moral being….”

    Right, in particular we needn’t worry whether the agent has phenomenal states to hold it responsible, as argued in “Holding mechanisms responsible” at http://www.naturalism.org/glannon.htm
