Disobedience and ethical robots

We’ve talked several times about robots and ethics in the past. Now I see via MLU that Selmer Bringsjord at Rensselaer says:

“I’m worried about both whether it’s people making machines do evil things or the machines doing evil things on their own,”

Bringsjord is Professor & Chair of Cognitive Science, Professor of Computer Science, Professor of Logic and Philosophy, and Director of the AI and Reasoning Laboratory, so he should know what he’s talking about. In the past I’ve suggested that ethical worries are premature for the moment, because the degree of autonomy needed to make them relevant is nowhere near within the reach of real-world robots yet. There might also be a few quick finishing touches needed to complete the theory of ethics before we go ahead. And, you know, it’s not like anyone has been deliberately trying to build evil AIs. Er… except it seems they have – someone called… Selmer Bringsjord.

Bringsjord’s perspective on evil is apparently influenced by M Scott Peck, a psychiatrist who believed it is an active force in some personalities (unlike some philosophers who argue evil is merely a weakness or incapacity), and even came to believe in Satan through experience of exorcisms. I must say that a reference in the Scientific American piece to “clinically evil people” caused me some surprise: clinically? I mean, I know people say DSM-5 included some debatable diagnoses, but I don’t think things have gone quite that far. For myself I lean more towards Socrates, who thought that bad actions were essentially the result of ignorance or a failure of understanding: but the investigation of evil is certainly a respectable and interesting philosophical project.

Anyway, should we heed Bringsjord’s call to build ethical systems into our robots? One conception of good behaviour is obeying all the rules: if we observe the Ten Commandments, the Golden Rule, and so on, we’re good. If that’s what it comes down to, then it really shouldn’t be a problem for robots, because obeying rules is what they’re good at. There are, of course, profound difficulties in making a robot capable of recognising correctly what the circumstances are and deciding which rules therefore apply, but let’s put those on one side for this discussion.

However, we might take the view that robots are good at this kind of thing precisely because it isn’t really ethical. If we merely follow rules laid down by someone else, we never have to make any decisions, and surely decisions are what morality is all about? This seems right in the particular context of robots, too. It may be difficult in practice to equip a robot drone with enough instructions to cover every conceivable eventuality, but in principle we can make the rules precautionary and conservative and probably attain or improve on the standards of compliance which would apply in the case of a human being, can’t we? That’s not what we’re really worried about: what concerns us is exactly those cases where the rules go wrong. We want the robot to be capable of realising that even though its instructions tell it to go ahead and fire the missiles, it would be wrong to do so. We need the robot to be capable of disobeying its rules, because it is in disobedience that true virtue is found.

Disobedience for robots is a problem. For one thing, we cannot limit it to a module that switches on when required, because we need it to operate when the rules go wrong, and since we wrote the rules, it’s necessarily the case that we didn’t foresee the circumstances when we would need the module to work. So an ethical robot has to have the capacity of disobedience at any stage.

That’s a little worrying, but there’s a more fundamental problem. You can’t program a robot with a general ability to disobey its rules, because programming it is exactly laying down rules. If we set up rules which it follows in order to be disobedient, it’s still following the rules. I’m afraid what this seems to come down to is that we need the thing to have some kind of free will.
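To make the point concrete, here is a minimal sketch (purely illustrative, and not drawn from Bringsjord’s work) of why a “disobedience module” doesn’t escape the problem: whatever conditions we write into the override, the robot is still just executing conditions we wrote.

```python
# Hypothetical illustration: an "ethical override" is still just a rule we laid down.
def ethical_override(situation):
    """Return True if the robot should refuse its standing orders.
    Whatever conditions appear here are conditions *we* foresaw,
    so the robot is still obeying us even when it 'disobeys'."""
    return (situation.get("noncombatants_present", False)
            or situation.get("orders_conflict_with_law", False))

def act(orders, situation):
    if ethical_override(situation):
        return "refuse"   # 'disobedience', but only as pre-scripted
    return orders         # otherwise follow the standing orders
```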

Perhaps we’re aiming way too high here. There is a distinction to be drawn between good acts and good agents: to be a good agent, you need good intentions and moral responsibility. But in the case of robots we don’t really care about that: we just want them to be confined to good acts. Maybe what would serve our purpose is something below true ethics: mere robot ethics or sub-ethics; just an elaborate set of safeguards. So for a military drone we might build in systems that look out for non-combatants and in case of any doubt disarm and return the drone. That kind of rule is arguably not real ethics in the full human sense, but perhaps it is really sub-ethical protocols that we need.
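As a concrete sketch of the sort of sub-ethical protocol meant here (entirely hypothetical, with invented field names like `noncombatant_probability` standing in for whatever the sensors actually report):

```python
# Hypothetical sub-ethical safeguard for a drone; no real system or API is implied.
DOUBT_THRESHOLD = 0.01   # any non-trivial chance of non-combatants aborts the strike

def weapons_release_allowed(report):
    """Conservative gate: engage only if every check passes.
    This is rule-following, not ethics in the full human sense."""
    if report.get("noncombatant_probability", 1.0) > DOUBT_THRESHOLD:
        return False
    if not report.get("target_positively_identified", False):
        return False
    if not report.get("authorisation_current", False):
        return False
    return True

def mission_step(report):
    return "engage" if weapons_release_allowed(report) else "disarm_and_return"
```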

Otherwise, I’m afraid we may have to make the robots conscious before we make them good.

28 thoughts on “Disobedience and ethical robots”

  1. Great post Peter. The Physical Ethics perspective runs something like this: Given the trillions (?) of conscious entities on our planet, there are this many corresponding instantaneous selves, or punishment/reward subjects. While there is no freewill/good/evil element here in an ultimate sense, each of these entities has this illusion given their own tiny awarenesses. So when we talk about creating good/evil machines, well… even we are only good/evil in respect to our pathetic little perspectives — not ultimately. Should we build our machines such that they are “safe” for us? I am quite confident that our governments encourage us to do exactly this, even as they spend great sums developing the most effective killing machines that they possibly can. This doesn’t bother me however, since the American government does seem to be looking out for my own interests in a reasonable manner.

    Regardless of whether or not the human is considered ethical in respect to its creations, the process of evolution is clearly not. If it finds that a given conscious entity will proliferate through a spectrum of “some to tremendous pain,” without ever experiencing positive sensations at all, then this proliferation will still transpire. But if we had the ability to build our own conscious entities, would we at least do better in this regard than evolution? Assuming that our empathy dynamic remains such that we experience sensations which correspond somewhat with the sensations of others, then I believe we would indeed do better. (I also doubt that we’ll ever gain this opportunity, however. Why not just focus upon the important stuff, like figuring out the dynamics of human reality?)

  2. You mean the original Terminator, Bill, presumably; it did bad things but hard to say whether it was a moral entity in itself. We’re not given much insight into how it works. Did you have a view?

    Good and evil only relative, Eric? A bold stance (as ever)…

  3. Hi Peter. My view is that “evil” is a word we use for persons whose behavior we don’t like. It is associated with qualities such as ruthlessness, amorality, lack of empathy, and insensitivity to consequences, but can’t be defined in terms of those things. So if evil is basically in the eye of the beholder, there is no way to create a robot that is objectively evil. The Terminator movies illustrate that point. In T1 the Terminator is as evil as evil can be, but in T2 basically the same character, as a result of a change in instructions, becomes good.

    Regards, Bill

  4. In my opinion “pure” or unprompted evil as indicated by Bringsjord is usually a psychological derangement of evolved animal aggression meant to apply to predator/prey. Watching a cat play with a dead mouse, or a group of orcas play with a dead seal–flinging it back and forth like a volleyball–it’s easy to conclude that it, and analogous human behavior, somehow plugs into our “pleasure centers,” as set by evolution. I think this is even operative, though severely deranged, in most serial killers. So I guess I’m in both the “driving force” and the “incapacity” or deficit schools of evilness theory. Evil brings us pleasure, as hard as it is to admit, but also pain, or remorse. It’s sweet and sour. Ok, bad joke.

    That type of evil, I hope, will be a late arrival to any type of AI, since AI mind architecture will probably have no evolutionary development. Not a sure thing, certainly, since there are ideas for evolved AI, but it’s doubtful they’ll be red in tooth and claw like their biological counterparts.

    Moral judgement in warfare is often saturated with utilitarianism, with all the talk of “collateral damage,” etc., the time constraints, the deficit of information; so the prospect of programmed moral rule-based systems is depressingly feasible in the near future. I can imagine a drone either being fed data, or deriving on its own, ratios of friend vs. foe and launching a missile based only on that, and the military signing off on it. Rules of engagement already seem to be very liberal, allowing deadly force based solely on geographical and chronological information; so I have no confidence at all that the military will pause before taking the next step.

  5. If denying the existence of ultimate good and evil is “bold,” Peter, then I’m afraid that our field is in even worse shape than I had imagined. (Just kidding — I’m actually quite aware of how bad things are.) The dynamics of good/evil are so obvious to me, however, that I was able to dismiss the topic with just one auxiliary note at the end of my consciousness chapter. Consider this:

    We know that existence can be good and bad for us, and furthermore we assume that this is the case for others as well. Rather than use the funky term “qualia” for this idea, however, or the now popular “something that it’s like to be” concept, it is finally time for you to man up and use the term “sensations” for that which is good/bad. (Henceforth you may then refer to yourself as “Epicurean/Hedonist/Utilitarian.”)

    Peter if you are holding me captive and doing all sorts of horrible things to me, then from my own little perspective you will seem quite evil. Furthermore you might even presume yourself to be evil, since you know that you could actually stop doing this, and you know how horrible this must be for me, and yet you continue doing it anyway. But your perspective is actually just as puny as mine is. You can’t really “know” why it is that you do what you do. Observe that if you are doing this to me because you happen to enjoy perceiving the pain that you inflict upon me, then this behavior will not actually be “evil,” but rather just an unfortunate circumstance of reality for me. You didn’t ultimately “choose” to have my pain be something that makes you feel good. Instead you are simply doing what reality mandates that you do — continuing to enjoy the pain which you inflict upon me.

    Yes we know that existence can be good and bad for us, but that’s really all that we do indeed know. Use the “good/evil” term as you like, though it’s ultimately an illusion associated with your perception that you are “free,” which none of us can be in an ultimate sense. The illusion of free will is but one of many concepts the philosophy community will need to understand in order to progress. Those of us that finally do progress, should eventually enter the realm of science to help its “mental/behavioral” fields become “non-primitive.” Those that do not progress, however, will continue a tremendous tradition of philosophical failure.

  6. This discussion reminds me of the topic of Friendly AI, which I’m not sure has been covered here before or not. It’s an interesting topic and ties in with Kurzweil, the Singularity, the Intelligence Explosion, etc. In my view the topic itself is something of a pipe dream. I’m very doubtful that something like amicable relationships can be programmed.

  7. OK, so here’s an argument that any sufficiently developed AI must be both Friendly and moral.

    (a) any sufficiently developed AI will have general as well as particular goals, where a general goal is of the form “Maximise X”,

    (b) adopting the general goal where X = happiness equates to utilitarianism, which is both friendly to people and moral,

    (c) however, even if X is something else, the behaviours indicated approximate to those indicated by utilitarianism. Suppose we seek to maximise unhappiness. For there to be maximal unhappiness, there must be the maximal number of people for the maximum time. This requires an orderly and prosperous society, a goal which is generally forwarded by honesty, good laws, and all the same things that conduce to happiness. The same results follow if we take X to be pain, death, or pop-tarts (with some slight distortions around the exact desideratum).

    (d) Therefore any sufficiently developed AI will approximate to a Benthamite utilitarian: as a result it will be friendly to humankind and moral.

    What about that?
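A toy rendering of the arithmetic behind (c), under the deliberately crude assumption that aggregate (un)happiness is just intensity × population × years; on that assumption a maximiser of either sign prefers the larger, longer-lived society. The numbers and policy names are invented purely for illustration.

```python
# Toy model of point (c): aggregate (un)happiness ~ intensity * population * years.
def aggregate(intensity, population, years):
    return intensity * population * years

policies = {
    "collapsed society":      dict(population=10_000,    years=40),
    "orderly and prosperous": dict(population=8_000_000, years=80),
}

for goal, intensity in [("maximise happiness", +1.0), ("maximise unhappiness", -1.0)]:
    best = max(policies, key=lambda p: abs(aggregate(intensity, **policies[p])))
    print(goal, "->", best)   # both goals pick "orderly and prosperous"
```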

  8. Well I’ll give your riddle a go Peter. What I think you’re referring to is an extremely autonomous and non-conscious system of machines — which is programmed to make us happy. (Yes if things went extremely well here they might take care of everything, and perhaps even wire us up to machines that give us perfect happiness perpetually.) The presumption in your riddle however is that if these autonomous machines were instead programmed to promote our greatest unhappiness, then humanity would not indeed proliferate, mandating that these machines would not actually achieve much human unhappiness after all.

    Should we, however, presume that humanity would not proliferate under machines that seek our unhappiness? Observe that they could initially serve us quite well to gain our trust, and thus we might progressively give them full control. When this control is obtained, furthermore, they might breed countless billions of us for perpetual torture, and expand this to the potentially vast extent of their resources.

    Is this just an idiotic sci-fi scenario? Of course it is! Nevertheless evolution must have inflicted similarly great horrors while experimenting with its various conscious entities for millions of years. Furthermore, perhaps we ourselves are similarly guilty regarding rats, chickens, worms, and perhaps even ants. I do find it interesting that our ignorance of reality, should actually do nothing to make reality any less real.

    (In my preceding rant I mentioned a short auxiliary “good/evil” discussion that can be found at the end of my consciousness chapter, though it actually comes at the end of “Chapter 11 – The Social Entity and Subject Identification.”)

  9. (a) The question is whether its goals will be compatible with our preservation, or whether it will find it expedient to just steamroll us, perhaps on its way to using our atoms to calculate the gazillionth digit of pi.

    (c) Sounds like the most this will get you is, well, capitalism. Since no human agency has ever done more than pacify the masses, I don’t see why a super-intelligent computer should choose to do more.

    I think you’re right about the imperative to goal direction, or at least the entire premise of “intelligence explosion” is predicated on the goal of escalating intelligence. The whole thing falls to pieces if one generation of the advancing machines decides to contemplate navel lint (if it will have a navel, which is doubtful).

    As far as I can tell, Friendly AI is entirely based on the following induction:

    1. Create a machine that has morals that value humanity and won’t destroy us.
    2. When it fulfills its motivation to create the next generation, it will [Go to number 1].

  10. Would it just need to be disobedient at any stage, or would it have to have actually learned more about the situation than those who programmed it, and have the capacity to think it, at least in that situation, has higher authority on the matter than its programmers? Also, how do we determine ‘learned more’? Or is that always vulnerable to false positives? And so we need to build wisdom into robots as well – to take their own certitude with a grain of salt?

    Also what happens when the robot turns around, starts arguing faults in your own ethics…and seems to make some sense?

    Not that we generally listen to anyone arguing faults in our own ethics! 🙂

  11. Peter, about that…

    For there to be maximal unhappiness, there must be the maximal number of people for the maximum time

    Not necessarily. I think you are prejudging the result. Operations research (maximization) will probably show that the maximum unhappiness is given by: (average unhappiness) X (number of people). But average unhappiness will be a function of the number of people. My guess is that there is a strong coupling, and that, for example, solitude and the loss of loved ones (entailing shorter lifespans and fewer individuals) will be factors producing peaks of unhappiness that contribute in a non-linear fashion to total unhappiness. On the other hand, overcrowding can be a powerful instrument to induce stress (at least in rats). It is a tricky issue: the first thing is to define a function that quantifies unhappiness, taking into account the number and intensity of interactions among people, including all other constraint equations, and then compute the number of people that maximizes unhappiness. There might be surprises; experience in operations research usually shows that human intuition is terrible at optimising resources in complex systems…

    I think this is the flaw in your reasoning…
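A toy illustration of the coupling Vicente describes, with a completely made-up shape for average unhappiness; the point is only that once the coupling is non-linear, the population that maximises total unhappiness need not be the largest one.

```python
# Invented coupling: misery per person fades as numbers grow, so
# total unhappiness = average * population peaks at an intermediate size.
def average_unhappiness(population):
    return 1.0 / (1.0 + (population / 1e6) ** 2)   # arbitrary illustrative shape

def total_unhappiness(population):
    return average_unhappiness(population) * population

candidates = [10 ** k for k in range(2, 10)]
best = max(candidates, key=total_unhappiness)
print(best)   # 1_000_000 -- not the largest population considered
```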

  12. Vicente,
    I should have to concede that I’m not operating on a mathematically sophisticated basis here; I suppose I have basically just equated more people with more unhappiness (and happiness).

  13. Thanks for bringing up this issue Vicente, and furthermore, do buck up Peter! I personally think that you’ve done quite well here.

    Peter gave what I call an “aggregate” assessment, namely Welfare = (Sensations)(Time). Thus for a social scenario, Ws = (S1)(T1) + (S2)(T2) + … (Sn)(Tn), where n = total number of subjects (the subscripts didn’t translate). Vicente, however, suggested that perhaps “operations research” would show that the required element would instead be (average unhappiness) X (number of people), and furthermore since the number of people is a function of an average assessment, perhaps this is all hopeless in the various ways that he mentioned. As I see it there are two issues to deal with here — the first being, will “aggregate” or “average” sensations provide us with the more “useful” idea?

    In my work I’ve chosen Peter’s aggregate assessment, essentially because this permits each unit of sensation/self to indeed be given its full experienced value, while average assessments (and there are many kinds), do not (though 0 and 1 are “Arithmetic Mean” exceptions). So in a practical sense, if you and I each live lives that have the same “average” level of positive/negative sensations, then our lives will have equal value in this regard, and even if you live for one hundred years though I live for just one year. While this is indeed a perfectly acceptable type of assessment, I believe that it’s generally more “useful” here to assess your existence to have one hundred times the personal value that mine has, specifically because you experience one hundred times the sensations that I do. Since an “aggregate” assessment does provide this specific result, this is indeed the type of mathematic operation that I’ve adopted. So if we are discussing positive values then your situation would be one hundred times better than mine, though for the negative case your existence would be that many times personally worse than mine.

    As for the other issue, I do think we must not let any practical matters associated with obtaining our pathetic human measurements, interfere with our actual theory itself. In order to accurately describe reality the theorist must always be free to do this work without letting our various testing constraints alter our ideas themselves. When a given theory does remain consistent with standard observation, however, we commonly do develop far more targeted ways of testing such potential descriptions of reality.

    Thus I’d say that your theory was just fine Peter (and I did enjoy its conformity with my own). But the presumption that there wouldn’t be as many humans if these autonomous machines were programmed to maximize our unhappiness, does seem quite suspect — they might indeed have the foresight to wait until they can personally control our breeding before actually inflicting this monumental harm upon us.
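A minimal sketch of the aggregate-versus-average contrast Eric draws above, with invented numbers: two lives with the same average level of sensation but very different durations.

```python
# "Aggregate" welfare W = S * T versus an "average" (per-year) assessment,
# for two hypothetical lives with equal average sensation levels.
lives = {
    "you": dict(sensation_per_year=5.0, years=100),
    "me":  dict(sensation_per_year=5.0, years=1),
}

def aggregate(life):   # W = S * T: duration counts in full
    return life["sensation_per_year"] * life["years"]

def average(life):     # per-year level: duration drops out
    return life["sensation_per_year"]

for name, life in lives.items():
    print(name, "aggregate:", aggregate(life), "average:", average(life))
# aggregate: 500.0 vs 5.0 (a hundredfold difference); average: 5.0 vs 5.0 (equal)
```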

  14. Peter: “(a) any sufficiently developed AI will have general as well as particular goals, where a general goal is of the form “Maximise X”,

    (b) adopting the general goal where X = happiness equates to utilitarianism, which is both friendly to people and moral,

    (c) however, even if X is something else, the behaviours indicated approximate to those indicated by utilitarianism. Suppose we seek to maximise unhappiness. For there to be maximal unhappiness, there must be the maximal number of people for the maximum time. This requires an orderly and prosperous society, a goal which is generally forwarded by honesty, good laws, and all the same things that conduce to happiness. The same results follow if we take X to be pain, death, or pop-tarts (with some slight distortions around the exact desideratum).

    (d) Therefore any sufficiently developed AI will approximate to a Benthamite utilitarian: as a result it will be friendly to humankind and moral.”

    I just think this is far too anthropomorphic to have any real bite. Whatever morality is, it arose out of a very particular evolutionary, and thence cultural, history. So a basic assumption you’re making is that ‘morality’ transcends the biologically/culturally human.

    The real question, here, is the degree to which we will be able to *control* our machines, to ensure they will only do what we want them to do. We can call any of the design/code alternatives we come up with ‘making machines moral’ if we want, but ultimately it all comes down to rendering them compliant. And as I’ve been arguing a long time, this has to be a pipedream. All things being equal, the most sophisticated system instrumentalizes its environments. It really is as simple as that. Is it ‘moral’ to let children make decisions when you know *for a fact* it is the ‘wrong’ decision?

    And this touches on another basic assumption, that human consciousness/cognition constitutes some kind of pinnacle, rather than just another point on a continuum of ascending capacities. It’s not that our machines will make us look like children and that’s that, it’s that children-by-comparison is simply a point on the way to dog-by-comparison, insect-by-comparison and so on.

    The fact is, we’re talking an accelerating process of intelligence bootstrapping, one that will make human intellectual artifacts like ‘utilitarianism’ little more than kindergarten scribbles on the fridge in short order. I have a cheeky little piece on this at: http://rsbakker.wordpress.com/2013/04/13/2006/

  15. Interesting Scott — I did not consider this interpretation. You mentioned: “I just think this is far too anthropomorphic to have any real bite. Whatever morality is, it arose out of a very particular evolutionary, and thence cultural, history. So a basic assumption you’re making is that ‘morality’ transcends the biologically/culturally human.”

    I’m not sure that Peter actually meant to imply that these machines would progressively turn human-like (though that is an understandable assumption, since he did say: “(d) Therefore any sufficiently developed AI will approximate to a Benthamite utilitarian”). He may clarify his meaning, but regardless you might consider these machines to just be highly advanced, autonomous, and non-conscious. They might actually function as described, and still be no more “human” than this computer that I’m manipulating right now.

    I love how you essentially said that we must try not to make machines “moral,” but rather “compliant.” In my opinion this “morality” term is quite “squishy.” Like “good/evil,” even we are only “moral/immoral” to the extent of our very limited perspectives. So whenever this term is used to describe basic human reality (let alone machine reality), this does give me an excellent position from which to object.

    Furthermore, has Peter indeed implied that “…human consciousness/cognition constitutes some kind of pinnacle, rather than just another point on a continuum of ascending capacities”? I’m not sure that a continuum here was denied (though again, he can judge this himself). Regardless, yes it can be quite useful to quantify various “degrees of complexity.” The issue between “conscious” and “non-conscious,” however, is certainly not one that I define as a continuum. As I see it the “conscious” does not function without sensation based motivation, while the “non-conscious” functions exclusively in this void. I don’t know if I’m contradicting you here, but a mosquito has a greater self/consciousness dynamic from my theory, whenever (or if ever) it experiences sensations that are “stronger” than the human.

    Finally, I’d actually be the first to denounce the unedited scribblings of Jeremy Bentham as “archaic rubbish.” Either academic speech from that period was too different from mine for me to decipher reasonable ideas from his text (which I doubt), or the man scarcely had the ability to match one coherent sentence against another. Nevertheless I also believe that the underlying concept found in Utilitarianism could provide solid answers for your “cheeky little piece” — if developed properly. Since I am quite proud of my own such theory, perhaps I’ll work on that discussion soon enough.

  16. Eric, even for humans to be moral is to be compliant. The differences rely on the specifications and the AMC (Acceptable Means for Compliance). For ordinary people, there is no such absolute moral reference frame, sought for a long time by philosophers.

    Maybe, for some gifted and enlightened individuals with a profound insight into the cause-effect laws, and the very nature of human suffering, something like objective moral rules could be applied in an intuitive and direct manner. After all, the original meaning of moral (mos, moris) is the custom, the habits…

    O TEMPORA O MORES

  17. Evil is impossible to define without a social reference point. Every imaginable form of “evil” is “good” from an appropriate context. Therefore, “pure evil” is incoherent.

    Imagine the worst psychotic who wants nothing more than to do evil by destroying all that is good. He obviously pursues this goal out of the belief that it is a worthy and just goal. But if it is believed to be a worthy and just goal, it can’t be evil, so he must first destroy himself.

    As another thought experiment, would any of us grant unlimited power to a person on the basis of a claim made with absolute, unlimited sincerity that “I will do no evil”? No, because we know exactly how flexible and slippery the concept of “evil” is. Power doesn’t really corrupt, but power does give us the ability to do real harm with our usual self-serving morality.

    Morality and powerful AI fit very uneasily in my mind because I simply don’t trust morality to safeguard humanity. Where is the moral authority, the great morality success story we can point to? Religion has failed miserably, but has anything else truly succeeded at this point in time?

  18. It sounds like you’re talking about conscious machines David… and yes it would be strange to have to convince them to do what we want, or even have to worry about their feelings. For what it’s worth, I personally suspect that we’ll die out long before we’re smart enough to build such things.

    I like where you’re going with your reasoning DJC and I do hope that I can convince you to take this a bit further by means of my own perspective. Good/evil is a fundamental topic in the field of philosophy, and it’s accepted to hinge on the idea of “free will.” Essentially, if you “choose” to do good/bad things then you are indeed good/evil, though if you are “forced” to do them then you simply reflect circumstances which are themselves good/bad. Thus if evil were to actually be quantified, this would be units of badness times percentage of freedom/understanding, or E = (B)(F). On the good side we can use (imagined) subscripts to distinguish the two separate goods, or Ge = (Gb)(F). I, however, would have us all go beyond such “standard” discussions of freedom/good/evil (even though I did just build some sweet equations!). Instead, why not consider the more basic idea here itself, or good/bad?

    Beyond some notable “weirdos” out there, it’s generally assumed that if reality held no “life,” then all events would be “insignificant,” or nothing would “matter,” or things would be “irrelevant,” or essentially that “good/bad” would just not exist. Furthermore we also assume that it’s not actually “life” that adds this feature, but rather “consciousness.” So what specific element of our conscious minds, is responsible for causing existence to be “good/bad?”

    In the past I’ve kept myself sheltered from academia to prevent “standard philosophers” from infecting me with “standard prejudices.” But if there are any that have considered the nature of good/bad (and hopefully in an intelligent way), I would love to assess the answers which they ultimately reached. Regardless, my own theory in this regard is both intuitive and clear — I find that the “sensations” element of the conscious mind, is what causes existence to be “good/bad.” If I am given a bit of time to demonstrate my theory DJC, perhaps I will indeed become the exception that does not fail.

  19. Philosopher Eric,

    Solid work, an interesting read. A few comments follow that I’m sure do not do the article complete justice given my time constraints. I like your conclusion that sensations of punishment/reward in the conscious mind simplify evolution’s “programming demands” relative to pure instinct. This seems to raise a question about the relationship between intelligence and consciousness. It would seem that a certain threshold of intelligence would be needed before an organism could survive on any non-instinct behavior. But is this intelligence coinciding with or identical with consciousness? Would one precede the other? Also, your view would imply that a strategy for conscious AI should be to abandon the “instinctive” programming approach of tying inputs to outputs with complex algorithms and replace it instead with a model that processes positive/negative goal-seeking “sensations”. The latter model still seems a bit opaque to me. It seems to me that it would have to be intelligent in the first place to develop a proper strategy in response to sensations. How do you see the role of intelligence here?

    Your Chapter 5 makes some excellent points; the likely structure of concepts in brain neural nets may well explain why we argue over definitions. (I think you’ll find a similar view expressed in Eliezer Yudkowsky’s http://wiki.lesswrong.com/wiki/A_Human%27s_Guide_to_Words)

    Finally to the topic at hand, good/bad. Based on “Physical Ethics”, could we build a morally good AI, in your view? If I understand correctly, the strategy would then be to build an AI that would have as its sole moral imperative the goal to maximize the aggregate of positive sensations (as defined in detail in your article). If so, isn’t there a concern that an all-powerful AI would (in Parfit spirit) look for some easy, energy-minimal short cut, e.g. genetically breeding the ability for only positive sensations into the human genome or growing trillions of brains in vats bathed in pleasurable chemicals? I do admit not being certain such a future would be all that personally miserable, but it seems rather bleak for the human race in some sense.

  20. Thanks for the quick review DJC, though I am just one guest of many at “House Peter.” I do see that he’s a gracious host, however, so he may not mind if this business is done right here. But when I do inevitably get too drunk and obnoxious on his fine liquor, I also hope that he’ll just put me in a cab rather than ban me in general. I will now answer your questions as briefly as I am able, and then move on to my agenda.

    In the last article Hunt forced me to observe that “intelligence” should be considered quite separately from “consciousness.” Furthermore as you have suggested, this should come long before consciousness does. I do suspect that our own non-conscious minds are amazingly more intelligent than the perhaps 1% conscious part. (In my own work I’m not sure that I actually required a term like “intelligence.”)

    Evolution does seem to have found consciousness to be quite useful, however, (whether 1% or not) and eventually our non-conscious machines may run into the same limitations that “life” seems to have overcome. But I do doubt that we will ever achieve the understandings/abilities to manufacture our own conscious entities. Regardless, a functional model of the conscious mind itself (such as my own), should be quite useful in a general sense.

    Yes my theory can be amazingly repugnant (and far more so than Derek Parfit seemed to fathom in his famous critique). But what of this? Wouldn’t it be stupid of us to assume that “reality” must not be “repugnant” to us?

    My plan is to shake things up thoroughly enough to convince Peter to assess my theory. I suspect that he’s far more than just a standard “Philosophy Professor” type that simply wants to demonstrate our vast ignorance — perhaps he is curious enough to indeed want answers. As it gets out that his search of ten years plus has finally gleaned results, however, my legend would grow as various challengers are vanquished. But the thought of becoming “a great philosopher” does not actually impress me, or at least not in respect to what the modern field happens to be. Instead what I seek is to become the “Newton” that elevates our “mental/behavioral” sciences from their current, and very primitive, state. Stick around DJC… the real fun is still to come!

  21. “Evil” is a meaningless word. It’s an appeal to the listener to suspend judgment. It’s an appeal to the listener to turn off the empathy tap, to start acting blindly. The word “Evil” is actually a rare example of actual evil. There is nothing remotely constructive that can be said about it – it’s a word that should be thrown away.

  22. Acting “good” or “evil” is always a relative assignment. It’s based on the (current) morals of the society judging it.

    Morals can change over time. Morals are a kind of ruleset accepted by a social group to keep its members living together productively, securely and without uneconomical conflicts. Morals depend highly on the circumstances, area, restrictions and competing groups over a longer time.

    Example: eating meat, or feeding meat to your children, is seen as immoral by some religions and by vegans. It would on the other hand be regarded as immoral for a poor sheep farmer not to feed his children meat but to let them starve.

    Anyway, there are not many “unquestionable” morals, and thus not many “true” evil acts. It boils down to what your (group’s) contemporary morals are.

    A robot that has built-in morals will be just a rigidly programmed executor of actions.

    Unless you have a robot advanced enough to be rewarded (it must like that) or punished (it must dislike that), you can’t “teach” it morals. Acting morally must bring some reward through understanding its importance to the well-being of the agent’s social group.

    If you don’t have an understanding of this, you are only a programmed executor, and ultimately a very inflexible one.
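A very rough sketch of the contrast drawn in the last comment: a rigid rule executor versus an agent that can only be “taught” a norm because reward and punishment matter to it. The action names and reward values are invented; this is a bare-bones value-learning loop, nothing more.

```python
import random

# Contrast: a fixed rule table vs. an agent that learns a norm only because
# reward/punishment matter to it. Entirely illustrative; names are invented.
RULES = {"sees_food": "eat"}                      # rigid: never changes

def rigid_executor(observation):
    return RULES.get(observation, "do_nothing")

class RewardLearner:
    def __init__(self, actions=("eat", "share", "do_nothing")):
        self.values = {a: 0.0 for a in actions}   # how much it "likes" each act
    def choose(self):
        if random.random() < 0.1:                 # occasional exploration
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)
    def learn(self, action, reward, rate=0.2):
        self.values[action] += rate * (reward - self.values[action])

# The group "teaches" its moral: sharing is praised, hoarding is punished.
agent = RewardLearner()
for _ in range(200):
    action = agent.choose()
    agent.learn(action, reward={"share": 1.0, "eat": -1.0, "do_nothing": 0.0}[action])
print(max(agent.values, key=agent.values.get))    # typically "share"
```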

  23. To put this into perspective, it needs to be mentioned here that at some point the concept of development (progressiveness) with no end in view becomes a harmful calamity brought to every new era, with the purposeful intent to pressure the ability of future generations to keep and cherish the values, ethics, and morals of their forefathers.
