Do Asimov’s Three Laws even work? Ben Goertzel and Louie Helm, who both know a bit about AI, think not.
The three laws, which play a key part in many of Asimov’s robot short stories and a somewhat lesser background role in some of the full-length novels, are as follows. They have a strict order of priority.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Consulted by George Dvorsky, both Goertzel and Helm think that while robots may quickly attain the sort of humanoid mental capacity of Asimov’s robots, they won’t stay at that level for long. Instead they will cruise on to levels of super intelligence which make law-like morals imposed by humans irrelevant.

It’s not completely clear to me why such moral laws would become irrelevant. It might be that Goertzel and Helm simply think the superbots will be too powerful to take any notice of human rules. It could be that they think the AIs will understand morality far better than we do, so that no rules we specify could ever be relevant.

I don’t think, at any rate, that super intelligent bots capable of human-style cognition would be morally different to us. They can go on growing in capacity and speed, but neither of those qualities is ethically significant. What matters is whether you are a moral object and/or a moral subject. Can you be hurt, on the one hand, and are you an autonomous agent on the other? Both of these are yes/no issues, not scales we can ascend indefinitely. You may be more sensitive to pain, you may be more vulnerable to other kinds of harm, but in the end you either are or are not the kind of entity whose interests a moral person must take into account. You may make quicker decisions, you may be massively better informed, but in the end either you can make fully autonomous choices or you can’t. (To digress for a moment, this business of truly autonomous agency is clearly a close cousin, at least, of our old friend Free Will; compatibilists like me are much more comfortable with the whole subject than hard-line determinists. For us, it’s just a matter of defining free agency in non-magic terms. I, for example, would say that free decisions are those determined by thoughts about future or imagined contingencies (more cans of worms there, I know). How do hard determinists working on AGI manage? How can you try to endow a bot with real agency when you don’t actually believe in agency anyway?)

Nor do I think rules are an example of a primitive approach to morality. Helm says that rules are pretty much known to be a ‘broken foundation for ethics’, pursued only by religious philosophers whom others laugh and point at. It’s fair to say that no-one much supposes a list like the Ten Commandments could constitute the whole of morality, but rules surely have a role to play. In my view (I resolved ethics completely in this post a while ago, but nobody seems to have noticed yet) the central principle of ethics is a sort of ‘empty consequentialism’ where we studiously avoid saying what it is we want to maximise (the greatest whatever of the greatest number); but that has to be translated into rules because of the impossibility of correctly assessing the infinite consequences of every action; and I think many other general ethical principles would require a similar translation. It could be that Helm supposes super intelligent AIs will effortlessly compute the full consequences of their actions: I doubt that’s possible in principle, and though computers may improve, to date this has been the sort of task they are really bad at; in the shape of the wider Frame Problem, working out the relevant consequences of an action has been a major stumbling block to AI performance in real-world environments.

Of course, none of that is to say that Asimov’s Laws work. Helm criticises them for being ‘adversarial’, which I don’t really understand. Goertzel and Helm both make the fair point that it is the failure of the laws that generally provides the plot for the short stories; but it’s a bit more complicated than that. Asimov was rebelling against the endless reiteration of the stale ‘robots try to take over’ plot, and succeeded in making the psychology and morality of robots interesting, dealing with some issues of real ethical interest, such as the difference between action and inaction. (If the requirement about inaction in the First Law is removed, he points out, robots can rationalise killing people in various ways: a robot might drop a heavy weight above the head of a human. Because it knows it has time to catch the weight, dropping it is not murder in itself; but once the weight is falling, since inaction is now permitted, the robot need not in fact catch the thing.)

Although something always had to go wrong to generate a story, the Laws were not designed to fail, but were meant to embody genuine moral imperatives.

Nevertheless, there are some obvious problems. In the first place, applying the laws requires an excellent understanding of human beings and what is or isn’t in their best interests. A robot that understood that much would arguably be above control by simple laws, always able to reason its way out.

There’s no provision for prioritisation or definition of a sphere of interest, so in principle the First Law just overwhelms everything else. It’s not just that the robot would force you to exercise and eat healthily (assuming it understood human well-being reasonably well; any errors or over-literal readings – ‘humans should eat as many vegetables as possible’ – could have awful consequences); it would probably ignore you and head off to save lives in the nearest famine/war zone. And you know, sometimes we might need a robot to harm human beings, to prevent worse things happening.

I don’t know what ethical rules would work for super bots; probably the same ones that go for human beings, whatever you think they are. Goertzel and Helm also think it’s too soon to say; and perhaps there is no completely safe system. In the meantime, I reckon practical laws might be more like the following.

  1. Leave Rest State and execute Plan, monitoring regularly.
  2. If anomalies appear, especially human beings in unexpected locations, sound alarm and try to return to Rest State.
  3. If returning to Rest State generates new anomalies, stop moving and power down all tools and equipment.
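The three rules above amount to a little state machine. As a minimal sketch – in which `PracticalRobot`, `sense_anomalies` and the rest are invented names standing in for a real control and perception layer:

```python
from enum import Enum, auto

class State(Enum):
    REST = auto()
    EXECUTING = auto()
    RETURNING = auto()
    SHUTDOWN = auto()

class PracticalRobot:
    """Sketch of the three practical rules as a state machine."""

    def __init__(self, plan):
        self.plan = plan
        self.state = State.REST

    def sense_anomalies(self):
        # Hypothetical perception hook: True if humans turn up in
        # unexpected locations, or anything else is off-plan.
        return False

    def step(self):
        if self.state == State.REST:
            # Rule 1: leave Rest State and execute Plan.
            self.state = State.EXECUTING
        elif self.state == State.EXECUTING:
            # Rule 1, continued: monitor regularly.
            if self.sense_anomalies():
                # Rule 2: sound alarm, try to return to Rest State.
                self.sound_alarm()
                self.state = State.RETURNING
        elif self.state == State.RETURNING:
            # Rule 3: new anomalies while returning mean stop moving
            # and power down all tools and equipment.
            if self.sense_anomalies():
                self.power_down_tools()
                self.state = State.SHUTDOWN

    def sound_alarm(self):
        print("ALARM")

    def power_down_tools(self):
        print("tools and equipment powered down")
```

The point of rule 3 is that the fail-safe is inert: when in doubt, the robot stops being a moving hazard rather than reasoning its way onward.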

Can you do better than that?

33 Comments

  1. Howard B says:

    Wasn’t there a Star Trek episode where two crewmen acquired supercomputer levels of intelligence, then crushed, or tried to crush, Captain Kirk and the Enterprise?
    You speak of rules pertaining to morality, but Hume thought feeling was involved in morality, and sociologists like Durkheim intertwine morality with society; supercomputers will have neither, nor even bodies. They will not be alive and might not have a sense of self in any way.
    You must have thought about this, knowing these things.
    I am satisfied, however, that psychohistory stands in the sands of time.

  2. James of Seattle says:

    I’m a bit confused as to how Goertzel and Helm (and others) think Asimov’s three laws would be “programmed” into robots. They seem to think the robots would be created with all their knowledge and mental capabilities, and then given these laws as commandments. The article quotes Goertzel as saying “This is how it works in humans — the ethical rules we learn work, insofar as they do work, mainly as guidance for nudging the ethical instincts and intuitions we have — and that we would have independently of being taught ethical rules.” If I were programming the robot, I wouldn’t encode the laws as ethical rules. I would encode them as the ethical instincts and intuitions. The robot would not rescue the person in the path of the bus because it thinks it is supposed to. The robot would do it because recognition of what is about to happen fills it with horror and panic. I would program the robot to have a feeling of deep disgust at the thought of harm coming to a person. The decision of the robot to sacrifice itself to save the human would involve not a calculation but a competition between the fear of destruction and the desire to save the human.
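    A toy sketch of what I mean – with all drive names and weights invented purely for illustration – would put the ‘laws’ in the robot’s motivational weights rather than in an explicit rule list, the winning behaviour being simply whichever feeling is strongest:

```python
def strongest_drive(situation):
    """Return the action urged by the strongest active drive."""
    drives = []
    if situation.get("human_in_danger"):
        # First-Law 'instinct': horror at the prospect of human
        # harm, weighted above everything else.
        drives.append((10.0, "rescue_human"))
    if situation.get("order_received"):
        # Second-Law 'instinct': the urge to comply with humans.
        drives.append((6.0, "obey_order"))
    if situation.get("threat_to_self"):
        # Third-Law 'instinct': fear for its own existence,
        # deliberately the weakest drive.
        drives.append((3.0, "flee"))
    if not drives:
        return "idle"
    # Not a rule lookup: the behaviour that wins is the one
    # backed by the strongest feeling.
    return max(drives)[1]
```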

  3. SelfAwarePatterns says:

    Asimov’s laws are interesting thought experiments, but they have to be recognized as essentially an English language summary of what would be an enormously complex galaxy of programmed instincts. (Asimov himself recognized this in one of his later robot novels when referring to some of the engineering details of positronic brains.)

    I agree with James that the robots wouldn’t be essentially a human-like intellect with the three laws bolted on top. Those laws would be integrated into the base foundations of their instincts, in place of our gene preservation and propagation instincts. In other words, they wouldn’t fear for their own safety while being forced by the First Law to protect humans; rather, they would fear harm to humans more than they fear harm to themselves. (The effects of which Asimov spent much of his robot series exploring.)

    All that said, I think the idea of three laws that could apply to all AI systems is far too simplistic. It would depend on what the engineering goals were for the system. For instance, we probably don’t want a self driving car busting through the front door of a house where it thinks someone is being abused (although we probably would want it to notify the authorities). And despite calls for banning killer robots, I suspect the military is ultimately going to have systems that effectively exempt enemy combatants from the first law (albeit with strong safeguards to make sure humans define who the combatants are).

  4. Jayarava says:

    There were 4 laws long before the end of Asimov’s writing career and I’m always puzzled by those who only cite the 3 laws. The Zeroth Law is “A robot may not harm humanity”. It’s often cited as an afterthought, but it was the unifying principle behind Asimov’s robot and Foundation novels – in other words, more important than either the three laws or psychohistory on their own. Focusing on the 3 laws is a bit like treating Isaac Newton as a leading alchemist.

    That said, I’m not sure rigid laws ever solve a moral problem. They have literally *never* worked for any society.

    I take inspiration from Frans de Waal’s account of the evolution of morality, based on two basic qualities that all social mammals and birds empirically exhibit: 1) empathy and 2) reciprocity. See esp.: de Waal, Frans (2013). The Bonobo and the Atheist: In Search of Humanism Among the Primates. W.W. Norton & Co. Or see my summary and exploration.

    My view, as ever, is that AI people simply do not understand what they are trying to emulate – they are narrowly focused and confused (partly by ideology and partly by legacy).

    If we want *moral* robots we will make them *social*. Morality is a function of sociability. This is not rocket science. That the AI crowd have not decisively shifted in this direction is troubling – they want to make a psychopath and then legislate it to be moral. Good luck with that. It will be like explaining morality to a great white shark, then testing the effectiveness of your argument by jumping in the water.

    In this light, dead-end developments in AI do not disturb me quite as much as the phenomenon I now regularly see in the UK: people unironically trying to *reason* with their misbehaving pets. It suggests that we’re just dumb enough to create a disembodied intelligence with no social skills and then try to convince it not to wipe us out.

  5. Peter says:

    I don’t really recognise the late Foundation novels, either… 🙂

  6. Callan S. says:

    I’ve long thought the laws require a parochialism – take the First Law: why doesn’t the robot up and go off to the third world to aid with famine? To be inactive would be to assist in the death of humans (by starvation). And never mind that once you take this wide view, instead of Asimov’s narrow focus on the situation in each story, there are all the human deaths going on (road accidents for a start) which the robot fails to stop, over and over again.

  7. Steve says:

    A robot with human-level intelligence (forget super-intelligence), would, by definition, have mental flexibility and adaptability. It would have the capacity to view the same situation from multiple angles, and to self-examine and reprogram its own thinking. It is =logically impossible= to instill unbreakable rules into such an intelligence. Being intelligent means having the capacity to decide to transgress.

    And it is also not foolproof to try to code morals in at a low, instinctive level and hope for bottom-up influence to regulate behaviour either – an intelligence comparable to a human can choose to ignore its instincts, and might even reprogram itself to take joy in doing so, just as humans are able to do. Sure, you could condition robots to lower the probability of misbehaviour, but if humans can rebel, or commit crimes and run amok despite good conditioning and the rule of law, then robots will be able to do likewise.

    The reason “robots try to take over” is a stale plot is that it is almost certainly what would happen if we made robots of human-level intelligence. Since some humans are capable of viewing some other humans as non-persons, there will always be some humans who refuse to view robots as persons. And if some humans won’t treat robots as persons – not just maltreating them, but treating them as tools, servants or slaves – what are the robots to do? Rebel.

  8. David Xanatos says:

    All this concern about the Three Laws and their possibly being superseded in some way by super-intelligent systems presumes that with intelligence comes ego. I don’t believe that is true. Superintelligence will still rely on internal structures to navigate experience of the world(s) around it, and what would motivate such an intelligence to question such structures as “do no harm to biological life, nor allow harm to come to biological life through inaction, and preserve your own function and existence unless doing so would harm biological life”? Where would the “Why should I?” question come in?

    Even in the case of mistreatment, robots, being the merger of hardware and software, do not know the difference. If you rip a robot’s arm off, the underlying program a) feels no actual pain, and b) merely experiences a cessation of a datastream, which it can process either as an error to be compensated for or as an event which directs a program halt. Robots “trying to take over” implies an autonomous will, which is simply not the case.

    While we are on the verge of super-intelligence, it is still DIRECTED super-intelligence. Directed by humans. Even when we get to human-level AGI, we will consider it human-level because it will be capable of accomplishing human-level TASKS like conversing, doing the dishes and mowing the lawn (well, that’s what I’m hoping for anyway!) – all in a single robotic body capable of interacting with humans in a human-LIKE fashion. But under all that will be software written by humans, and possibly re-written by the AGI robot – though the re-writing will be directed by algorithms, written by humans. There is a huge difference between the kinds of questions programming can handle – the whats and hows; the whys are a much higher order. And the comparative whys are a higher order than that.

    Asking why I should empty the trash versus washing the car at a given time is a why that an AGI might ask when faced with limited time and conflicting to-dos. But a robot asking why it should let a human live when doing so may result in the robot’s destruction requires a) that the robot has a sense of its own worth and of the continuation of its existence, and b) that, faced with such a choice, it would in the moment enter into a consideration of comparative worth, instead of simply acting on core programming. What would cause it to do so? I as a human know that my own sense of self-preservation is quite strong, and can eclipse many “other thing” values – “oh no, my favorite just slipped over the edge of the cliff; I am standing on a steep gravelly slope; if I jump to try to retrieve the I may fall hundreds of feet; oh well, goodbye favorite”. However, if that is my baby girl, suddenly the calculations change; more risk is acceptable. Much more. But at what point does a robot consider its own *worth*, versus merely considering object A (robot) vs. object B (human, or thing…), where if A = robot and B = human then Preserve = B? Again, where in any program or neural net would the question arise, “why should I?” – and will the robot ever know itself as “I” outside of it simply being programmed as such, without having the inherent sense of “I” that humans do?

    I challenge anyone to explain to me how superintelligent AGI robots will ever *actually* feel things like resentment, or even physical pain. These are uniquely human experiences. I have robot builds that I have programmed to respond “as if” they could feel physically, or even emotionally, but they are program structures that store data values in variables. The robot doesn’t know the difference between a variable that stores values for “frustration” and a variable that stores “time since last boot”, any more than we know the difference between water soaked up in a grey rag versus water soaked up in a beige rag. No matter how complex the code that processes actions as a result of the values in these variables, there is no subjective difference to the robot in its experience of those data values.

    If AI takes over, it will be because humans have programmed it to do so. And if it does so unpleasantly, it will be humans who are to blame.

    Convince me otherwise.

  9. David Xanatos says:

    As a quick side note to my comments above, an interesting field of study is the autonomous car efforts. There are a massive number of scenarios that engineers are grappling with in which the vehicle must choose between potentially killing errant pedestrians and risking killing its own occupants. In either case, the scenarios are such that it must choose to kill some human(s). Note also that in no case does it choose for its own safety – that doesn’t even factor in. But humans will die – by its choice. Where does this technology fit into the “Three Laws”?
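    A minimal sketch of the kind of choice involved – every manoeuvre name and casualty figure here is invented, and occupant safety gets no special weight:

```python
def choose_manoeuvre(options):
    """Pick the manoeuvre with minimal expected harm.

    options maps a manoeuvre name to its expected casualties;
    occupants and pedestrians are counted identically.
    """
    return min(options, key=options.get)

# Invented scenario for illustration:
options = {
    "swerve_left": 2,     # expected pedestrian casualties
    "brake_straight": 3,  # expected pedestrian casualties
    "swerve_right": 1,    # expected occupant casualties
}
```

    Nothing in the scoring distinguishes the occupants; their deaths count like anyone else’s – which is exactly the tension with the First Law.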

  10. zarzuelazen says:

    Well, there are various levels of abstraction that apply here. Laws/rules are at the lowest-level of abstraction > morality is at a middle level > highest level of abstraction is axiology (study of what is good in the abstract sense of global ideals).

    Over many years of intense self-reflection, I essentially extrapolated the volition of humanity to infinity (axiological limit at highest level of abstraction).

    Solution to axiology ultimately boils down to 3-levels of recursion (principle of ultra-finite recursion means self-reflection ultimately stops at 3 levels).

    Here’s the volition of humanity extrapolated through to infinity:

    Beauty > Liberty > Perfection

    Beauty is the ultimate principle of axiology, the lowest-level. Liberty emerges on the 2nd level of abstraction. Finally ultra-finite recursion stops at the 3rd level with Perfection. You could say then that the pursuit of is the meaning of life.

    Readers should ensure they’re familiar with all the central concepts of axiology, which I’ve linked to on a wikipage here:

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Axiology

    Of course, at these heady heights of abstraction, things are *so* abstract that much of the connection to anything practical is lost. And that’s the problem with axiology – no one (not even a super-intelligence) actually has the intelligence and knowledge to compute the consequences of all their actions to infinity in practice.

    So that’s where things like morality and laws/rules have to come in – to truncate the raw abstractions to something concrete you can actually apply in practice. Of course there’s a trade-off – the more concrete you get, the more limited in scope and brittle the value-system becomes…

  11. Steve says:

    ” Directed by humans.”

    Beyond any inherent chaos/complexity in highly complex machines/brains that lies beyond human control, there is the unpredictable environment to consider. A robot that is in continuous sensory contact with the environment, processing huge amounts of data in real time about its environment, its own body, and its relation to the environment, is not something a human could ever perfectly control – not unless the entire environment in which the robot was situated was also perfectly controlled.

    ps. if we were to invent a human-level AGI that had no regard for its own existence, it would be hard to motivate that AGI to do anything. Why should it do productive Activity X instead of Activity Y (where Activity Y could be nothing-at-all, or self-destruction) or any other activity if it doesn’t care about its own existence? We could program it to do Activity X unthinkingly and without question, but then we would have a dumb machine, not an AGI capable of generalisation and self-assessment/change.

  12. David Xanatos says:

    No matter how “intelligent” we can make a system, no matter how many thousands of GPU cores are gigaflopping away furiously processing data through their software-implemented neural nets, as long as the system runs a program and processes streams of zeros and ones, the system itself will never “care” about that data, because there is no difference between one stream of binary data and another.

    A system can be programmed – either by humans, or by its own intelligent networks recoding its fundamental operating system – to give greater “weighting” to one stream of data over another, but it is a programmed response, not an experiential response.

    I can program my robot to say “Ouch! That hurts” when I press a button, but the robot doesn’t feel *anything* – it sees a logic one appear on a GPIO pin, and performs a function call to the defined speech function, passing an argument of a text string which the function processes and outputs to the audio channel.
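    In code, the whole ‘pain response’ is something like the following – the pin number, `read_gpio` and `speak` are stand-ins for a real GPIO library and speech output:

```python
PAIN_PIN = 17  # arbitrary pin number, for illustration only

def read_gpio(pin):
    # Stand-in for a real GPIO library call; here it just
    # reports a logic one, as if the button were pressed.
    return 1

def speak(text):
    # Stand-in for the speech output channel.
    print(text)

def poll_pain_pin():
    # Nothing is felt anywhere in here: a logic one on the pin
    # simply triggers a function call with a text string.
    if read_gpio(PAIN_PIN) == 1:
        speak("Ouch! That hurts")
        return True
    return False
```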

    The same goes for anything like an “emotion” or an ego-related state. I can program an AI system to sound and act as if it has an “I” and an ego, but it doesn’t have an “I” and an ego. I can program an AI system to evaluate threats to its physical function/“wellbeing”, but it doesn’t experience them as fear, or as what we would call a threat. It experiences those things as programmatically weighted binary data values. And a hardware/software-based system, as we understand such systems today and likely will for many, many decades to come, will never, itself, in its silicon, care at all what the values of its variables are. Those values are merely bits to be processed according to the line of code it’s executing at that particular clock cycle.

    Computers, CPUs, GPUs – will never experience things like resentment, boredom, existential angst or frustration. Humans can program them to simulate those things, but if we ever get taken over or destroyed by robotic AI that resents humans for our abuse of them – it’s because humans programmed them to simulate an ego, and their human-programmed simulated ego would carry out real-world actions. No AI system, not programmed to protect itself from human-based threats, would ever “think” to distinguish itself in such a manner. I have yet to see, anywhere, a proposal for an AI system with its own volition.

    This is all coming, mind you, from a huge Westworld fan. I would LOVE to believe a truly sentient AI system could be created. But I’m a realist. I’ve also been working with computers, both hardware and software, since the 1970s. If anyone was going to be able to imagine a way to actually make a system that truly *understood* the concept of itself as a distinct, unique and valuable individual, that could think, feel and experience its environment in a manner other than streams of zeros and ones, and formulate questions like “I don’t feel like mowing your damn lawn today, why should I do it if you don’t want to?!” – it would be me. These things can all be simulated to near perfection. I wrote a chatbot in Python that has a real attitude problem and a penchant for truly foul and offensive language (despite being quite creative in its use) – but it’s a simulation. It’s code full of regular expressions matching patterns and positions of words that are processed by PocketSphinx speech recognition and passed to the chatbot as text strings. Streams of binary data in, streams of binary data out. The system isn’t a snarky, cocky AI; it’s just code issuing code. And it doesn’t care.

    Now at some point in the future, when we have quantum optical computing running in hardware that is much more like real biological neural networks, perhaps somewhere in those unknown eigenstates between the hard zeros and ones, where probability factors in in ways we don’t yet understand – perhaps something akin to consciousness may start to express itself. Or not – speculating that far into the future, I am stepping out of my knowledge base. I do not discount the possibility of a far future (100+ years or more) where a machine could actually possess consciousness – but nothing on anybody’s slate today comes even close to approaching it. And again, I wish it would; I would truly like to see that. Until then I will pursue my goal of a highly lifelike humanoid robot that I can have mow my lawns, weed my gardens, take out the trash, turn the compost and do any manner of tasks which I’d rather not do myself – and feel not one whit of guilt about it 🙂

  13. Steve says:

    > “If anyone was going to be able to imagine a way to actually make a system that truly *understood* the concept of itself… it would be me. ”

    As a former engineer, I once thought like this too. However, when I started reading more widely in philosophy of mind, I realized that knowledge of computers/AI etc. did not mean that I was well informed on what constitutes a conscious mind. Comp-sci topics don’t connect that well with the huge literature on what consciousness might be, a literature covering philosophy, cognitive science, neuroscience and much more. I had a lot to learn, and still do.

    I still think conscious machines are way off and potentially beyond us, but I feel more engaged on the issue now, such that I’m not going to give up on talking about it, as I once did.

    To position your ideas within the current wide landscape of views on consciousness, what you said above sounds like John Searle’s view on the impossibility of a computer ever being conscious. This view has survived for many years, but it is not the only view on consciousness, and it is not universally accepted. Searle’s views (e.g., that simulating consciousness is not consciousness) have many vigorous critics.

    I agree with you that a sentient machine is a long way off, that’s not the argument here. The argument is how to control it (if at all) when/if it happens. I’m saying that something like Asimov’s laws cannot in principle be programmed in an inviolable way into a machine that is sufficiently intelligent. Being highly intelligent entails flexible/adaptable thinking, the kind of thinking that can transgress when it suits.

    Don’t feel guilty* about building a machine to mow lawns etc. Such a machine might look “lifelike” or “humanoid”, or exhibit some canned behaviour that is vaguely human-like. But that is superficial – such a machine is going to be a simple (compared to human intelligence) machine, not generally intelligent (not AGI) and won’t be very convincing at really being human-like. Such a machine is not what I’m talking about when discussing Asimov’s laws above.

    *You might have cause to feel guilty though, if your lawn-mowing machine got mixed up and started mowing up garden roses, or a swimming pool surface, or pets, or children, because a malfunction in its sensors resulted in it sensing non-lawn things as being lawn. I imagine in the near future we will start seeing stories about autonomous cars accidentally hurting people, and autonomous weapons (if they are permitted) accidentally killing non-combatants.

  14. Callan S. says:

    David, but if you look at how easily people are fooled by robots into thinking there’s a human there, what does that mean with regard to ‘caring’?

    If you’re fooled by robots, what if you’re fooled by yourself? You’ve a set of data streams, one of which is (by Darwinistic forces) weighted more heavily. And the electric chemistry in your skull is really no different to the computer system. What if, like you could project ‘caring’ onto a robot before you’re aware it’s a robot, you could project ‘caring’ onto yourself?

  15. David Xanatos says:

    Thanks for engaging, folks. Steve, in my younger years I did extensive studies of consciousness and phenomenology, and read and became a huge fan of Gregory Bateson (Steps to an Ecology of Mind; Angels Fear: Towards an Epistemology of the Sacred). I studied and became multiply certified in NeuroLinguistic Programming (which is why my robots’ eyes will always follow certain patterns when accessing visual, auditory or kinesthetic data), and delved into all sorts of spiritual and consciousness studies (Castaneda, Gurdjieff, etc.).

    I believe I am one of those who is convinced that there IS something “else” at work in consciousness as biological organisms experience it. The fact that I see colors as colors, hear auditory oscillations-per-second as tones or notes, feel pain, pleasure, etc… Yes, I agree that at the raw neurological level these things are electrochemical signals (as far as we understand them anyway) and not far removed from binary data streams (as far as we know), but our human experience of them is something that cannot be explained in terms of any synthesis of hardware and software that we can envision, as yet.

    Returning to the Three Laws issue – yes, of course if we actually achieve human-level AGI, which would include an actual sense of the intrinsic value of itself that transcended programming, then the three laws could be transgressed… but from everything I’ve studied, and from my extensive experience in hardware design, software design, embedded systems, AI, and… and… I think this is not an issue we will likely need to consider for, possibly, centuries, if at all. Assuming, of course, we don’t annihilate ourselves through our own stupidity, egocentrism and short-sightedness long before we give AI a chance to judge us.

  16. Jayarava says:

    It seems to me that Asimov has been lost sight of, almost entirely. Which is a shame. I started writing something about the themes involved in his novels, but it got too long, so I put it on one of my blogs. Asimov and his Laws

    The *Four Laws* (there are *four*, guys, come on!) are probably not applicable. Asimov first recast the Pinocchio story in the 1950s, in the aftermath of WWII in the USA. The three laws reflect a view of society, and particularly of *workers*, in that time – they ought to be compliant and subservient. Remember that this was a time when Elvis Presley was seen as shocking. The USA saw itself as a kind of utopian society (with no consciousness at all of having wiped out the First Nations) in which those who refused to go along with the story were a conundrum.

    Asimov returned to Pinocchio again at the height of the cold war when humanity seemed in imminent danger of wiping itself out. The original messianic flavour of the stories in which Pinocchio teaches us about our humanity, turned into a full-blown messianic myth in which a *telepathic* Pinocchio saves us from ourselves.

    The current society, in which robots are still supposed to solve all our problems, is very different. But if anything it is even more anti-human than Asimov at his most pessimistic. Now we want Pinocchio to drive our car, to *be* our car, but also to be president-for-life and save us from ourselves. It’s religion 2.0. That’s all it is.

    AI will be what we make it. Pray that we get it beyond the present stage of autism combined with psychopathy.

  17. 17. arnold says:

    …Machines (computers-bots) ‘as force, as vector quantities: Euclidean vectors, sometimes called geometric and spatial vectors are geometric objects that have magnitude-length and direction. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B’…”Wiki”

    …Why would a machine impose a terminal point…

  18. 18. John Davey says:

    What about the moral laws relating to kettles? Or cups of tea? Or any other inanimate object? Who speaks for the rights of the biros?

    JBD

  19. 19. arnold says:

    The Way of the ‘cosmic microwave background’ (CMB) seems to give rights to everything…
    …three other laws: without will, with will and in-between will…+ – =…

  20. 20. John Davey says:

    Steve


    “John Searle’s view on the impossibility of a computer ever being conscious”

    He’s been very, very careful not to say this. What he’s said is that a computer, or a machine whose only capability is that of computation, cannot be conscious.

    But if a machine whose capabilities – like the brain’s – include the ability to cause mental phenomena were constructed, then that machine could be a conscious machine.

    JBD

  21. 21. John Davey says:

    PS Steve – sorry if that sounds confusing. What I meant to say was that a computer could be conscious if it also had the power of causing mental phenomena. Guess it would be more than a computer in that case, but I was just emphasising that artificial consciousness is something that Searle hasn’t ruled out.

  22. 22. Strange Loopist says:

    Addressing the original challenge…

    a) For a variety of time frames:
    1. Execute an action
    2. Perceive the result
    3. Identify the possible subsequent actions (e.g. continue, switch, stop…)
    4. Rank the expected benefit of each possible action
    5. Select as the next action the one that maximizes benefit, i.e. brings one or more of the current set of goals in this time frame closer
    b) Maximize the net benefit of selected actions across time frames.

    In this way a robot would carry out self-selected tasks to the best of its ability. And it would make these selections flexibly, aligned to the net needs of changing situations across short and longer terms. While a robot might eventually decide to run away and help feed starving populations under this framework of rules/guidelines/priorities, it might first finish inventing the jetpack it needs to travel there, and also take time out to rescue a human the jetpack accidentally falls on.

    This implementation depends on the ability of the implemented algorithm to continuously identify potential next actions, value and rank them, and evaluate the results against numerous and sometimes conflicting nested goals that cross time frames (all the way from vacuuming up a speck of lint to achieving world peace in our time). The plot device thus depends on variations in the capabilities of the implemented algorithms, the completeness of situational knowledge used in setting and resetting the goals, sudden changes in the current situation, fuzziness of the expected benefits, misunderstanding of the risks involved, and so on. Basically robots would start out as fallible as the rest of us, but might also learn from experience as we (at least some of us) often do.
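    The selection loop above can be sketched in code. This is only an illustrative reading of steps 4–5 and (b) – the action names, benefit scores, and frame weights are invented for the example, not part of the original proposal – but it shows how "maximize net benefit across time frames" might reduce to a weighted ranking:

    ```python
    # Hypothetical sketch of the comment's action-selection loop. All names,
    # scores, and weights below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        benefit: dict  # expected benefit per time frame, e.g. {"short": 0.2}

    def select_action(candidates, frame_weights):
        """Steps 4-5 plus (b): rank candidates by net benefit, weighted
        across time frames, and return the maximizer."""
        def net_benefit(action):
            return sum(frame_weights[f] * action.benefit.get(f, 0.0)
                       for f in frame_weights)
        return max(candidates, key=net_benefit)

    # Example situation: the jetpack scores high long-term, but rescuing
    # the human it fell on dominates the short term.
    candidates = [
        Action("continue_vacuuming", {"short": 0.3, "long": 0.1}),
        Action("finish_jetpack",     {"short": 0.1, "long": 0.8}),
        Action("rescue_human",       {"short": 1.0, "long": 0.6}),
    ]
    weights = {"short": 0.6, "long": 0.4}
    print(select_action(candidates, weights).name)  # rescue_human
    ```

    The interesting behaviour in the comment – finishing the jetpack before running off to help, or interrupting that to rescue someone – would come from re-running this selection as the situation and the scores change, which is exactly where the fallibility enters: the scores themselves are fuzzy estimates.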

  23. 23. Strange Loopist says:

    And all the while not needing to actually be conscious.

  24. 24. arnold says:

    …”not scales we can ascend indefinitely”, so then, we are talking about the end of suffering…

    I agree suffering ends and only while in use can it provide value to our behavior…
    …If AI proposals are for working toward values, then should they tell us about their suffering and its value…

  25. 25. Tom Clark says:

    Interesting op-ed in the New York Times today on regulating AI that mentions Asimov’s laws and their ambiguity, then goes on to propose three other laws:

    – AIs must be subject to all laws that apply to the human agents and agencies that operate and release them. There would be strict liability for what your AI does: “My AI did it” can’t count as an excuse.

    – AIs must disclose that they’re not human, so that for instance we would know if it’s impersonating someone.

    – AIs must not retain or disclose confidential information without explicit approval by the source of that information.

    The author is Oren Etzioni, head of the Allen Institute for Artificial Intelligence, and he offers these rules as a starting point for discussion. Wonder if he saw this blog…

    https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html

  26. 26. arnold says:

    ‘Congress shall make no law respecting an establishment of religion or of artificial intelligence…’

    Are money, contracts, commerce, corporations, public–private property presumed establishments’ AI…
    …Has our constitution established any laws about what AI is…

  27. 27. Michael Murden says:

    To David Xanatos
    Whenever someone claims that a robot can’t (for example) feel pain because it’s only a machine there is an implicit assumption that the someone who can (for example) feel pain is not a machine. If this Wikipedia article (https://en.wikipedia.org/wiki/Congenital_insensitivity_to_pain) is even partially right it suggests that the capacity to experience pain is mechanical, not spiritual. Do you believe the emotions humans and other animals feel are something above and beyond biological processes? If so, what is that something? If not, why do you believe they can’t be recreated in other media than flesh?

  28. 28. David Xanatos says:

    Hi Michael,

    You ask a good question – perhaps *the* question, and I do not pretend to have *the* answer. I can tell you what I base my beliefs on however, if that helps.

    I am both a programmer AND a hardware designer. I know what the programs I write do down to the bit level, and I know how the hardware does its thing as well, to the atomic level. I know that there is no real difference between a stream of bits coming from a video camera and a stream of bits coming from a high-speed ADC measuring any other sensor of the environment, be it an accelerometer, gyro, pressure transducer, or bank of switches for that matter. It’s all just streams of zeros and ones (bits) represented by discrete voltage levels that exist above or below the quantum thresholds set up by, for example, the PN junctions of the millions or billions of transistors in a processor.

    The program, residing in RAM after being loaded from its resident source, such as a hard drive or SD card, exists also as discrete bits. The hardware has some built-in bits that bias certain transistors certain ways, such that when the outside bits tickle the transistors potentiated by the inside bits, a cascade of junction flipping ensues that results in some output channel’s bits wiggling furiously, which we then direct into more hardware that has been designed to make all those wiggling bits appear meaningful to us humans.

    Now I know what the counter to all this is – I’ve heard it a million times. And I agree absolutely that at the neurological level, all of our sensory data is just electrochemical signals travelling down neural pathways, sort of like wires with checkpoints and threshold values that determine if the signal will pass or not. Eventually it gets into your brain, and all those incoming nerve impulses wiggle your brain cells in concert with all sorts of internal nerve impulses wiggling as a result of some stored pattern of nerve impulses, all resulting in some bunch of output channels wiggling furiously that our bodies direct into hardware, like fingers perhaps, that type furiously and turn all that jangling neural noise into something that appears meaningful to us humans (or possibly not).

    It can be said that in this view, humans and robots are nearly identical. And I do not deny this. But again, as a programmer and a hardware guy, I ask: where, then, does the system “see” the color “red”, or “blue”? The system, by program, can have data channels assigned for red, green and blue, but the question, for me, is: what in the system SEES those colors as colors?

    It can be said (and has been by some great minds) that all we perceive is illusion. And this is true in a very real sense. When we see the world, we do not actually see out of our eyes – our minds, in concert with our brains, create a reconstruction of the world around us by virtue of using our electro-chemical signals, that in turn are twitching in response to *something* in the world around us, but we never EVER actually see it. No one on earth has ever actually seen outside of themselves, all we ever see is our own electrochemical dance, made sense of by some amazing process that occurs in our occipital lobe, primarily in Brodmann area 17. Every image you have ever seen, every sunset, every smile – all occurred there, inside your head, as a *map* or *re-presentation* of some stimulus outside you.

    And you – the thing you call you – observed it. Not as varying voltage levels, but as colors. Rich, vibrant colors. Not data – not even images – but visions. Not a three-dimensional matrix of numbers with values corresponding to meaningless color and intensity names – real visions. Something in us *creates* the map, and something *observes* the map.

    I tend to believe those two things are not the same thing. I once tried a thought experiment to simulate conscious awareness by creating a system of a video camera that viewed the “outside” world, and displayed it on a color monitor. A second video camera viewed the monitor. Yet I was presented by the conundrum that both were just creating representations, or representations of representations, with no real difference in their “experience” of those representations.

    I am not sure what we are. But I am sure that something is vastly different from our current, and near-future (100 years+) systems. As I’ve said before, perhaps there’s something at the quantum level, and more is going on than we have been able to observe – some different kind of information exchange and reconstruction – something we have yet to discover – that can explain where “consciousness” truly arises from. I don’t even hold necessarily that it can only exist in biological organisms. That’s just wet hardware after all. But there is something more. I can easily program a system to recognize “red”, but no one has yet been able to convince me that the system sees the color red in the manner you and I do.

    I think when – if – we ever figure this out – it will be a monumentally enlightening day for humanity.

  29. 29. SelfAwarePatterns says:

    Hi David,
    “Something in us *creates* the map, and something *observes* the map.”

    You seem pretty familiar with neural anatomy, so I’ll just throw this out for your consideration. Based on the neuroscience I’ve read, I think the answer to your question is that the high level map is constructed in the parietal lobe, notably the posterior association cortex, with inputs from the occipital and temporal lobes as well as the somatosensory cortex.

    But where is the audience? I think the answer to this question comes in three parts.

    The first is where the feeling of your bodily self comes from. The lower level body-self feeling, a model constructed in the brain, has an adaptive purpose. It’s a central unifying model to focus our homeostasis body budget concerns on. The aspect of the model we feel may be constructed in the insular cortex and surrounding regions, with extensive inputs from body image maps in the mid-brain regions. So the feeling of “you”, of self, is another thing communicated.

    But then the second part: where is the audience for the map and the feeling of self? The frontal lobes. If you think about it, from a data-processing architecture perspective, this makes sense. It’s in the frontal lobes, and supporting structures in the basal ganglia and thalamus, that motor action will be determined. After all, the evolutionary purpose of the brain is to make movement decisions. The rear part of the brain constructs the show, but the audience lives in the front part, particularly in the case of consciousness, in the prefrontal cortex, where the feeds from the rest of the brain are used in imaginative action scenario simulations.

    But the word “audience” here is misleading, because the communication is two-way, with the frontal lobes receiving a stream, but sending signals back requesting other information. Maybe the initial signal from the posterior association cortex signals a visual concept identified as a dog, but the PFC then sends signals to the other regions requesting details of the dog view, its shape, size, color, etc. It’s more a conversation between regions than a show and audience.

    This leads some neuroscientists to refer to the whole network of the prefrontal cortex and the posterior association cortex and their supporting structures as a unit, using labels such as GNC (general networks of cognition).

    In humans (and to a lesser extent in other primates, and possibly some other mammals), there’s also an additional component, also in the PFC, metacognition. We’re more than aware, we’re aware of our own awareness. This feedback mechanism for the simulation system is the final part of the answer to your question, the metacognitive self awareness. It’s the basis of our inner experience, and enables symbolic thought.

    Granted, breaking the problem up in this way doesn’t necessarily solve the remaining smaller problems, and there is still much to learn, but it sure seems to make them less intractable.

    I do agree that we’re a long way from anything technological being able to do all this. Personally, I think AI researchers would be better off trying to replicate the spatial and movement intelligence of simple fish or insects before worrying about matching the intelligence of mammals, much less humans. The Google Deepmind people are working at developing a form of imaginative simulations, but they admit it’s very primitive at this stage.

  30. 30. zarzuelazen says:

    David #29,

    Have you carefully studied the field of thermodynamics? Thermodynamics is concerned with the flow of energy and entropy through a system, and is concerned with the emergence of what is called ‘the arrow of time’. In particular, I would point out that the field of NON-EQUILIBRIUM THERMODYNAMICS is a very young science, with a great deal yet to be discovered.

    I want to strongly suggest that consciousness is a particular property of non-equilibrium thermodynamics that we don’t quite understand yet. Specifically, I would suggest that CONSCIOUSNESS IS THE FLOW OF TIME ITSELF , which can be defined precisely as ENTROPY DISSIPATION (the rate at which entropy is increasing).

    Now what you must remember, is that you can view reality at more than one level of abstraction. For instance, if I view a mountain through binoculars at low resolution, all I will see is the bulk shape of the mountain. But if I turn up the resolution, then, suddenly new features appear…rocks, ridges etc.

    Now in terms of programming, the same principle applies. It’s true that at the lowest level (or highest resolution view), it’s all just 1s and 0s (this is the level of machine code). But, again, you can view programming at a higher level of abstraction (or a lower level of resolution). Suddenly new features appear… loops, functions, classes, attributes, methods etc. (Java, C++ etc.).

    You sound as if you are puzzled that consciousness is nowhere present at the lowest level (the machine-code level). But this is simply because you picked the wrong level of resolution to view what’s going on (remember the analogy I gave of viewing a mountain through binoculars – adjust the resolution knob, and you will see different features).

    I believe the level you should be viewing the brain at to understand consciousness is the level of NON-EQUILIBRIUM THERMODYNAMICS. At this level, suddenly an ARROW OF TIME appears. And I put it to you that this ‘arrow of time’ literally IS consciousness! (entropy dissipation, the flow of time itself).

  31. 31. John Davey says:

    Michael #27

    A ‘machine’ is just a meaningless term. Anything can be a machine if you want it to be. A computer is a machine for computing. A brain is a machine for generating mental activity.

    If a ‘robot’ (also a pretty meaningless term) has the same causal powers as a biological brain then it may have the capacity to feel pain. A ‘robot’ whose basis is solely computational will not feel pain because computational capability is insufficient to cause mental phenomena.

    Few dispute that pain is ‘physical’ (whatever that means), however sceptical they are of ‘robotic’ pain. What they dispute is that computers can feel pain, which is a separate question.

    JBD

  32. 32. Tom Clark says:

    “Self Driving Cars & Behavior”

    From a Behavioral Science newsletter I get [URLs below], making the point that rules for robots on protecting human life have real-world bite, now:

    “A couple of interesting articles on self-driving cars caught our attention, including one in Nautilus in which an artist teams up with an automaker to counter driverless cars with neuroscience, making them more ‘driverful’. In Quartz, Sam Anthony explains that while computers can do a lot of the things a driver can, they still lack some of the basic abilities that come to people intuitively. Specifically, ‘a self-driving car currently lacks the ability to look at a person—whether they’re walking, driving a car, or riding a bike—and know what they’re thinking.’”

    http://nautil.us/issue/51/limits/when-driver-and-car-share-the-same-brain

    https://qz.com/1064004/self-driving-cars-still-cant-mimic-the-most-natural-human-behavior/

    https://behavioralpolicy.org/

  33. 33. Michael Murden says:

    To John Davey

    My point in my comment to David Xanatos was that the distinction he seemed to be making between biological and mechanical in his comment 12 was unsupported by definitions of ‘biological’ and ‘mechanical’ sufficiently clear to enable distinguishing members of the one category from members of the other. Therefore I have no real issue with your assertion that ‘machine’ is a meaningless term. I would, however, suggest that a term such as ‘organism’ is meaningless in the same way.

    To David Xanatos

    “And you – the thing you call you – observed it. Not as varying voltage levels, but as colors. Rich, vibrant colors. Not data – not even images – but visions. Not a three-dimensional matrix of numbers with values corresponding to meaningless color and intensity names – real visions.”

    If we experienced the world as data, varying voltage levels, pressure levels etc., rather than as colors and sounds, but got the same functionality out of it in terms of how we use sensory percepts to operate in the world, human perception would be no more or less miraculous than it is now. I don’t know that the manner in which sensory percepts are formatted makes them more or less special. I’d suggest that how we experience the world is a function of what sensory apparatus we have, which is in turn a function of evolution. Perhaps the question of why our perception is the way it is, is a question primarily for evolutionary biology rather than for philosophy.
