The Nuffield Council on Bioethics is running a consultation on the ethics of new brain technologies: specifically it mentions neurostimulation and neural stem cell therapy. Neurostimulation includes transcranial magnetic stimulation (TMS), which typically requires nothing more than putting on a special cap or set of electrodes, and deep brain stimulation (DBS), where the electrodes are surgically inserted into the brain.

All of these are existing technologies which are already in use to varying degrees, and the consultation is prudently geared towards gathering real experience. But of course we can range a bit more freely than that, and it raises an interesting general question: what new crimes can we now commit?

Disappointingly it actually seems that there aren’t many really new neurocrimes; most of the candidates turn out to be variations or extensions of the old ones. Even where there is an element of novelty there’s often a strong analogy which allows us to transpose an existing moral framework to the new conditions (not that that necessarily means that there are easy or uncontroversial answers to the questions, of course).

I think I’ve said before, for example, that TMS seems to hold out the prospect of something analogous to the trade in illicit drugs. An unscrupulous neurologist could surely sell wonderful experiences produced by neural stimulation and might well be able to create a dependency which could be exploited for money and general blackmail. The main difference here is that the crucial lever is control of the technology rather than control of the substance, but that is a relatively small matter which has some blurry edges anyway.

It’s possible the new technologies might also be able to enhance your brain – if they allow better concentration or recall of information, for example. There is apparently some evidence that TMS might be capable of improving your exam scores. That clearly opens up a question as to whether enhanced performance in an exam, produced by neural stimulation, is cheating; and the wider question of whether easier access to TMS by wealthier citizens would build in a politically unacceptable advantage for those who are already privileged. So far as I know there’s no current drug or regime which automatically and reliably boosts academic performance; nevertheless, the issues are essentially the same as those which arise in the case of various other forms of exam cheating, or over access to superior educational facilities. There may be a new aspect to the problem here in that traditional approaches generally rest on the idea that each person has a genuine inherent level of ability; this may become less clear. If a quick shot of TMS through the skull boosts your performance for the next hour only, we might see things one way; whereas if wearing a set of electrodes helps you study and acquire permanently better understanding, we might be more inclined to think it is legitimate in at least some respects. Moreover a boost which can be represented as therapeutic, correcting a deficit rather than providing an enhancement, is far more likely to be deemed acceptable. All in all, we haven’t got anything much more than new twists on existing questions.

There is likely to be some scope for improperly influencing the behaviour of others through neural techniques, but this has clear parallels in hypnotism, confidence trickery, and other persuasive techniques; again there’s nothing completely novel here. Indeed, it could be argued that many con tricks and feats of conjuring rest on exploiting neurological quirks as it is.

To steal information from someone’s brain is morally not fundamentally different from stealing it out of their diary; and to injure someone by destroying a mental faculty broadly resembles physical injury – the two may indeed go together in many cases.

So what is new? I think if there is fresh scope for evil-doing it is probably to be found in the manipulation of personality and identity. Even here the path is not untrodden, with a substantial history of attempts to modify the personality through drugs or leucotomy; but there is now at least a prospect, albeit still some way off, of far better and more precise tools. As with cosmetic surgery, we might expect the modification of personality to be limited to cases where it has a therapeutic value, together with a range of elective cases over which there might be some argument. The novel thing here is that many cases would require consent; but unlike a nose job, personality modification attacks the basis of consent.

Consider an absurd example in which subject A seeks modification to achieve greater courage and maturity; having achieved both, the improved A now disapproves of the idea of personality modification and insists the changes constitute an injury which must be reversed; once they are reversed, A, with the old personality, wants them done again.

It could be worse; since personality and identity are linked, the new A might take a different line and insist that the changes made in the brain he inhabits were effectively the murder of an older self. This would be as bad a crime as old-fashioned killing, but now it’s no good reversing the changes because that amounts to a further murder, and it could be argued that the restored A is in fact not the original come back, but a third person who merely resembles the original; a kind of belated twin. A’s brother might sue all the new personalities on the basis that none of them has any more rights to the property and body of the original than a squatter in someone’s house.

In circumstances like these there might be a lobby for the view that personality modification should be subject to a blanket ban, in rather the same way that society generally bans us from editing out undesirable personalities with a gun – even our own.

Of course there is in principle another novel crime we might be able to commit: the removal of someone’s qualia, their inward subjective experience. This has often been contemplated in the philosophical literature (it is remarkable how many of the most popular thought-experiments in philosophy of mind – whose devotees generally seem the mildest and most enlightened of people – involve atrocious crimes); perhaps now it can become real. The crime would be undetectable since the ineffable qualities it removes could never be mentioned by the victim; the snag is that since there could be no way of measuring our success it’s probably impossible to devise or test the required diabolical apparatus…

24 Comments

  1. Libby Fagen says:

    The USA has already had that surgery - as a nation - the removal of its former personality. I guess it works all too well. Libby

  2. Rodger Cunningham says:

    Libby, as an American who was already half my present age when Reagan was elected, I was just about to say about the same thing.

    More to the question: If TMS to improve test scores led to actual improved retention of material, I don’t think it could be called cheating; but it’d constitute a serious social injustice if access to it was governed by money/status.

  3. Vicente says:

    To steal information from someone’s brain is morally not fundamentally different from stealing it out of their diary

    Well, I don’t know, this seems worse, it is like trespassing the very last barrier for privacy.

    It is possible to lie, to distort information, when writing a diary. If somebody is accused of anything using his diary as “evidence”, he can always claim that he was lying, that it was fantasy rather than a record.

    You can burn your diary but you can’t burn your brain.

  4. Kar Lee says:

    Peter,
    “Of course there is in principle another novel crime we might be able to commit: the removal of someone’s qualia, their inward subjective experience.” Someone should write a sci-fi story about it. Let me offer a plot:

    A lone-wolf philophysicist invented a qualia detector. When he was within 5 feet from anyone and pointed the device at the person, he could feel that person’s qualia (stealing mind, mind melding…) in his head. As he was still optimizing his design, one of his qualia detector prototypes was somehow stolen. And soon after, weird crimes started to spread. People’s bank accounts were emptied out. Classified government documents started appearing on random websites. Very soon, the news of the existence of the qualia detector was leaked.

    After the news of his device was leaked, someone created a qualia detector blocker, and started selling it. It claimed to be able to block the mind melding function of the qualia detector. It would protect the ultimate personal privacy by treating the brain once, and only once.

    So this qualia detector guy went out and tested his detector on those treated people and indeed, he detected no qualia. It was like he was pointing his device at a tree or something. He went back and re-checked his design and re-checked and re-checked and he could not figure out why any brain could have resisted his detection. After the process of elimination, he came to the realization that the only way qualia detector blocker could have done that was by turning people into zombies. It literally killed those people it treated, only leaving intact the automaton part. The blocker was selling fast because everyone thought they were just protecting themselves, protecting their own privacy from this evil qualia detector. The world was rapidly turning into a zombie world! The world was in grave danger!

    The philophysicist guy was the only person in the world who could save the world! He was the only one who could save the world from turning into a zombie land. He had no choice but to embark on a daunting journey to combat the evil blocker by convincing people who still had qualia that the blocker would kill them, and to catch whoever stole his invention……

  5. Vicente says:

    Kar Lee,

    Am I witnessing the seed of a future best-seller? I mean your coming novel, not the qualia blocker…

  6. Peter says:

    You should definitely write that one.

    I suppose if you can take away someone’s qualia you could perhaps also confer them on zombie people who never had them or had lost them. You could argue that was sort of what happened to Scrooge.

  7. john davey says:

    Peter

    You can take away qualia already – I’ve known two people who’ve lost their sense of smell through injuries. Brain damage is brain damage.

    I don’t believe that qualia aren’t/won’t be detectable. It sounds counterintuitive but it’s actually common sense.

    Few things in reality are measured in what might be termed a direct sense. The experiment to determine the radius of an atom, for instance, sounds direct but it’s actually as indirect a measurement as it’s possible to have. You hear people say ‘we know that atoms exist because we now have pictures of them’ – this of course is a mistake, as the images of atoms or molecules from the electron microscope are built upon a huge bed of theory that assumes that atoms exist in the first place. These images are not taken with Polaroids…

    Likewise, if we take the common sense view that qualia emerge from the brain’s material and physical properties, a measurement of qualia is evidently possible from a measurement of the material and physical properties of the brain. It’s not direct but then few measurements actually are, being based frequently on huge beds of theory. When qualia have a theoretical basis, measurement will be straightforward.

    If you think about it, even measuring your own height is an indirect measurement – you assume that the space between the top of your head and the top of the ruler is constant. Even that measurement is built upon theoretical assumptions.

  8. Vicente says:

    John,

    I believe there is a misconception in your comment. There are people who have lost senses in an accident – sight, smell… – or were born without them. They lack the whole sense, not just qualia. I believe Peter is referring to something like blindsight, in which somehow the sense, and part of the information, is still there, and the brain can use it, but you lack the conscious part, and qualia are absent. The extreme case is that of philosophical zombies.

    So you have to develop a technique to probe or damage just those brain areas responsible for raising qualia, which I don’t know is possible, since most of them will probably overlap with pure information-processing ones.

    I don’t agree with your approach to measuring. Even though most (all) measurements are indirect, I agree, you would have physical-physical correlate values for the calculation. In this case, nobody has proven that qualia are strictly physical, so NCCs are physical-? correlates, and therefore no measurement theory can be applied for the moment.

    Sorry for jumping the queue.

    Kar Lee,

    If you decide to write the story, I wouldn’t mind lending a hand. Looks great fun.

  9. john davey says:

    “I don’t agree with your approach to measuring. Even though most (all) measurements are indirect, I agree, you would have physical-physical correlate values for the calculation.”

    You are assuming that conscious states are unquantifiable, which is something I’d dispute – not least because it’s already done, by anaesthetists in a normal day’s work, and by physicians looking at individuals in comas and persistent vegetative states. “Half asleep” is an expression that makes perfect sense to me.

    “Nobody has proven that qualia are physical”

    I think this is based upon a false supposition as to the burden of proof. Everything that is phenomenal must be assumed to be physical, or physically caused. To say that qualia are not physical is to make a statement that runs contrary to a trend of three centuries of physical science. The burden of proof is on you to prove otherwise, I would suggest.

    What are qualia if not physical? They are not information. They represent nothing and are not observer relative. They are not conceptual as they are not informational. That leaves only the physical/biological realm, unless you believe in God (in which case all rational discussion is pointless) or believe, for some reason which again you must justify, that qualia can’t be understood.

  10. Arnold Trehub says:

    The qualia that were measured/quantified in the SMTT experiment were predicted to occur on the basis of the structure and dynamics of a putative biophysical mechanism in the brain, the retinoid system.

  11. Tom Clark says:

    The idea that a conscious system could have its qualia removed but remain behaviorally unchanged – the idea that philosophical zombies are possible, not just conceivable – seems unlikely given the evidence in hand, which is that consciousness is associated with particular cognitive, behavior-controlling capacities. When those capacities are abridged, consciousness degrades or disappears. There’s no evidence that those capacities produce anything extra that constitutes, or results in, qualia. So short of disabling those capacities, I don’t see any way to remove qualia from a system that we judge to be conscious.

  12. Juan says:

    @Tom: I think you’re over-extending your thesis there. Some sort of qualia may be necessary for the execution of behavioural patterns, but I don’t see the inverse as being necessarily true. There are many special “cases” where this dichotomy might break: for example, a patient with locked-in syndrome will experience a reduced selection of qualia that nevertheless don’t manifest as behaviour (for obvious reasons). Other counter-examples that come to mind are day-dreaming and sleep itself, wherein you can experience quite vivid qualia quite apart from any corresponding behaviour.

    It also seems intuitive to me that you could have a robot that could execute a wide range of realistic behavioural actions but nevertheless doesn’t experience qualia as commonly defined*. For example, take this robotic replica of the famous sci-fi author Philip K. Dick:

    http://www.youtube.com/watch?v=UIWWLg4wLEY

    The robot speaks and moves in ways very similar to that of a human being (although it’s far from perfect, of course). Now, would you say that those “behaviours” were coupled with some sort of qualia?

    *Btw, I’m not ruling out the possibility of AI with qualia, just pointing out that philosophical zombies as defined by Tom are perfectly feasible.

  13. Arnold Trehub says:

    There is one quale (call it quale-1) that “rules” all other qualia. This is our conscious experience of being in a spatial surround — centered in a world around us. A special system of brain mechanisms serves to provide us this fundamental conscious experience. The notion of a zombie that is exactly like us in every biophysical respect except that it lacks consciousness is incoherent, because if it has the same brain mechanisms that give us quale-1 then it IS conscious, and if it lacks these brain mechanisms then it is NOT exactly like us.

  14. Juan says:

    @Arnold: Maybe I misunderstood or misinterpreted Tom, but I thought he was suggesting that ANY conscious system must have qualia coupled with behaviour capabilities. My point was that it is easy to conceive an artificial “zombie” who has a realistic set of behaviours but nevertheless experiences no qualia. If we are talking about philosophical zombies in the more traditional sense then I of course agree with you that the concept is, as you say, “incoherent”.

  15. Tom Clark says:

    Juan:

    It’s the difference between *conceiving* of a behaviorally identical-to-us zombie and the *actual possibility* of such a creature that I was getting at. The former is easy to do, but all the evidence is against the latter. As you point out, experience can of course be divorced from actual behavior, as in dreams and locked-in syndrome. But that experience exists at all seems to be a matter of brain processes whose original function is complex information integration and behavior control, including the internal simulation of past and future situations and as Arnold points out the modeling of a self centered in its surround. Such processes are active in dreams and locked-in syndrome, even though no behavior results since motor outputs are blocked.

    The upshot, again, is that once a system exhibits behavior approaching or equivalent to ours, the default presumption should be that it’s conscious. And the evidence strongly suggests there’s no way to subtract qualia from a system operating at our level, making it a zombie, without dramatically compromising its behavior control capacities. So the way I see it, crimes against consciousness can’t include the stealth elimination of someone’s qualia while having no impact on their behavior.

  16. Vicente says:

    Tom,

    And the evidence strongly suggests there’s no way to subtract qualia from a system operating at our level

    What kind of “system” are you thinking of?

    What evidence? what are qualia? what does “operating at our level” mean?

    What would be closer to us in those terms – a mouse, a dog, or the most powerful AI system ever developed? To me, evidence strongly suggests that the dog is conscious and has qualia, and that the AI system is a chunk of copper and silicon, with no consciousness or qualia, and whose only intelligence is borrowed from a human engineer.

  17. Tom Clark says:

    Vicente:

    “What kind of “system” are you thinking of?”

    Any free-standing autonomous system, artificial or not, that has our behavioral flexibility and the internal control processes to support it.

    “What evidence?”

    See http://www.naturalism.org/kto.htm#Neuroscience

    “what are qualia?”

    See http://www.naturalism.org/kto.htm#Qualia

    “what does “operating at our level” mean?”

    Just what it says: having our behavioral flexibility, intelligence, etc.

    “What would be closer to us in those terms, a mouse, a dog, or the most powerful AI system ever developed?”

    AIs thus far don’t approach what a dog can do in terms of autonomous behavior control in service to internally instantiated goal states, but as AIs gain in autonomy, they get closer to us.

  18. john davey says:

    “The upshot, again, is that once a system exhibits behavior approaching or equivalent to ours, the default presumption should be that it’s conscious. ”

    Why? Consciousness is NOT behaviour. Consciousness is a phenomenon. You can have one without the other; they don’t even belong to the same ontology. The statement is baseless.

    Behaviour is inherent but it’s also ‘as-if’. I can look at a computer and say “it’s not behaving properly” when what I really mean is it’s not doing what I want. I can ascribe behaviour to a machine that makes furniture, but it’s obvious that I’m not on about its thinking processes, I’m on about its malfunction. Likewise I can ascribe behaviour to a computing system replicating a brain, but that behaviour is only relative to my perspective.

    The “behaviour” in the circumstance of a robot is not inherent (it can’t be, because there are no such things as natural computers; they are all observer relative). It exists purely from the observer’s perspective. You wouldn’t look at a painting of Einstein and attribute the same mental properties to the painting, would you? Likewise, an image of behaviour caused by a computer (as in a robot) would not for one second suggest the source of the image is identical. A duck is not the same thing as a painting of a duck.

    QED

  19. Juan says:

    I don’t get the hostility towards computational paradigms. I get that it’s something to do with an image of the field as somehow inherently Neo-Platonist; it’s something I used to be sympathetic to, but not so much now. Computation is by nature a physical process, and while we may not have “cracked” the brain’s code, that doesn’t mean that there isn’t something computational going on there. So davey’s quasi-Wittgensteinian distinction between the physical and the digital doesn’t quite work for me.

  20. john davey says:

    “Computation is by nature a physical process”

    No it isn’t. It’s a branch of mathematics.

    See http://en.wikipedia.org/wiki/Computation

  21. Juan says:

    @Davey From the wiki article:

    “A computation can be seen as a purely physical phenomenon occurring inside a closed physical system called a computer. Examples of such physical systems include digital computers, mechanical computers, quantum computers, DNA computers, molecular computers, analog computers or wetware computers. This point of view is the one adopted by the branch of theoretical physics called the physics of computation.”

    So that doesn’t disprove my point. When a relatively intelligent human does a computation inside his head, a math calculation for example, what do you think is happening physically? Is he or she not doing something fundamentally similar to what happens in a computer?

  22. Vicente says:

    Juan,

    Is he or she not doing something fundamentally similar to what happens in a computer?

    Fundamentally similar in what terms? I don’t think so. For example, not even an analogue computer and a digital computer are fundamentally similar. Definitely, the brain has nothing to do with digital coding…

    I can’t find the reference, sorry, but I believe it is accepted as a possibility that, most times when we add, what we do is recall (retrieve) results for simple sums and then combine them, not compute them (you could call that computing, of course…).

  23. john davey says:

    “A computation can be seen as a purely physical phenomenon occurring inside a closed physical system called a computer.”

    I think you’ll find that you’ve misinterpreted it – although admittedly the wording of the wiki article doesn’t help (it’s pretty bad). The ‘physics of computing’ is a physical analysis of “computing” systems – that is, systems which functionally implement the logically prior mathematical theory of computing.

    There is no inherent physics to computation in any way whatsoever. I can ascribe any value to any physical attribute I like; it changes neither the object I am attributing the value to, nor the value itself. I can ascribe a value ‘1’ to +1V and a value ‘0’ to -1V on an array of semiconductor nodes. The choice is entirely mine and entirely arbitrary. My attribution of value to electrical current causes no change of state whatsoever.

    When a person does a calculation in his head he is of course not doing the same physical thing as a computer. He might be doing the same “logical” thing – we know that because computer programs are programmed by us to do exactly the same things we do. However, there are of course microcode shortcuts in most contemporary chips, so not even they will use the standard algorithms of long division and long multiplication entirely.

    Actually in reality – at a philosophical level – the last passage is both wrong and right. We can think of a chip as ‘doing’ long division, but in fact it’s ‘doing’ no such thing, as it doesn’t think, and thought is required to intentionally long divide. In reality the silicon chip ‘does’ nothing but obey the laws of physics, and has no idea whatsoever about how external users might decide to allot digital values to its physical attributes. An external user is required to know that a silicon chip is ‘long dividing’ – note that this places the chip itself in no greater position of wisdom. A silicon chip can never be aware of it being a computer, as its computational components are only available to external users.

    There is no parallel with the brain – the brain’s conscious mind is aware of intentionally doing long division, and requires no external users to make it so.

  24. Kar Lee says:

    This just in:
    “Stanford researchers produce first complete computer model of an organism”
    http://www.opli.net/magazine/med_eng/2012/news/stanford_fccm.aspx?goback=.gde_104612_member_136451208

    So, what’s next? What is so special about the brain that cannot be simulated, in principle?

    Even though simulated rain cannot make you wet, a simulated brain can think.
