Humation

We’ve heard some thin arguments recently about why the robots are not going to take over, centring on the claim that they lack human-style motivation and cannot care what happens or want power. This neglects the point that robots (I use the term for any loosely autonomous cybernetic entity, whether humanoid in shape or completely otherwise) might still carry out complex projects that threaten our well-being without any human motivation; but I think there is something in the contention about robots lacking human-style ambition. There are of course many other arguments for the view that we shouldn’t worry too much about the robot apocalypse, and I think the conclusion that robots are not about to take over is surely correct in any case.

What I’d like to do here is set out an argument of my own, somewhat related to the thin ones mentioned above, in more detail. I’ve mentioned this argument before, but only briefly.

First, some assumptions. My argument rests on the view that we are dealing with two different kinds of ‘mental’ process. Specifically, I assume that humans have a cognitive capacity which is distinct from computation (in roughly a traditional Turing sense). Further I assume that this capacity, ‘humation’, as I’ll call it, supplies us with our capacity for intentionality, both in the sense of being able to deal with meanings, and in the sense of being able to originate new future-directed plans. Let’s round things out by assuming it also provides phenomenal experience and anything else uniquely human (though to be honest I think things are probably not so tidy).

I further assume that although humation is not computation, it can in principle be performed by some as-yet-unknown machine. There is no magic in the brain, which operates by the laws of physics, so it must be at least theoretically possible to put together a machine that humates. It can be argued that no artefactual machine, in the sense of a machine whose functioning has been designed or programmed into it, could have a capacity for humation. On that argument a humater might have to be grown rather than built, in a way that made it impossible to specify how it worked in detail. Plausibly, for example, we might have to let it learn humation for itself, with the resulting process remaining inscrutable to us. I don’t mind about that, so long as we can assume we have something we’d call a machine, and it humates.

Now we worry about robots taking over mainly because of the many triumphs and rapid progress of computers (and, to be honest, a little because of a kind of superstition about things that seem spookily capable). On the one hand, Moore’s law has seen the power of computers grow rapidly. On the other, they have steadily marched into new territory, proving capable of doing many things we thought were beyond them. In particular, they keep beating us at games: chess, quizzes, and more recently even the forbiddingly difficult game of Go. They can learn to play computer games brilliantly without even being told the rules.

Games might seem trivial, but it is exactly that area of success that is most worrying, because the skills involved in winning a game look rather like those needed to take over the world. In fact, taking over the world is explicitly the objective of a whole genre of computer games. To make matters worse, recent programs set to learn for themselves have shown an unexpected capacity for cheating, or for exploiting factors in the game environment or even in underlying code that were never meant to be part of the exercise.

These reflections lead naturally to the frightening scenario of the Paperclip Maximiser, devised by Nick Bostrom. Here we suppose that a computer is put in charge of a paperclip factory and given the simple task of making the number of paperclips as big as possible. The computer – which doesn’t actually care about paperclips in any human way, or about anything – tries to devise the best strategies for maximising production. It improves its own capacity in order to be able to devise better strategies. It notices that one crucial point is the availability of resources and energy, and it devises strategies to increase and protect its share, with no limit. At this point the computer has essentially embarked on the project of taking over the world and converting it into paperclips, and the fact that it pursues this goal without really being bothered one way or the other is no comfort to the human race it enslaves.
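
For anyone who wants the mechanics of that scenario laid bare, here is a minimal toy sketch – my own illustration, not anything taken from Bostrom – of an agent that ranks candidate actions purely by the number of paperclips it predicts they will eventually yield. The action names and figures are invented for the example; the point is only that ‘instrumental’ moves like grabbing resources or improving itself win out simply because they score higher on the one quantity being maximised.

```python
# Toy sketch only: rank actions by predicted paperclip output and take the best.
# Nothing here 'cares' about anything; it just picks the biggest number.

def predicted_clips(action, state):
    """Crude invented model of how many clips each action ultimately yields."""
    if action == "run_factory":
        return state["resources"] * state["efficiency"]
    if action == "acquire_resources":
        # More raw material now means more clips on every future production cycle.
        return (state["resources"] + 100) * state["efficiency"]
    if action == "improve_self":
        # Better planning raises efficiency, so all future output rises too.
        return state["resources"] * state["efficiency"] * 1.5
    return 0

def choose_action(state, actions):
    # No ambition, no preference: just whatever maximises the score.
    return max(actions, key=lambda a: predicted_clips(a, state))

state = {"resources": 10, "efficiency": 1.0}
actions = ["run_factory", "acquire_resources", "improve_self"]
print(choose_action(state, actions))  # 'acquire_resources' wins on these toy numbers
```

Resource acquisition emerges here not from any wish or drive but from the arithmetic alone; scale the same logic up indefinitely and you have Bostrom’s nightmare.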

Hold that terrifying thought and let’s consider humation. Computation has come on by leaps and bounds, but with humation we’ve got nothing. Very recent efforts in deep learning might just point the way towards something that could eventually resemble humation, but honestly, we haven’t even started and don’t really know how. Even when we do get started, there’s no particular reason to think that humation scales or grows the way computation does.

What do I even mean by humation? The thing that matters for this argument is intentionality, the ability to mean things and understand meanings or ‘aboutness’. In spite of many efforts, this capacity remains beyond computation, and although various theories about it have been sketched out, there’s no accepted analysis. It is, though, at the root of human cognition, or so I believe. In particular, our ability to think ‘about’ future or imagined events allows us to generate new forward-looking plans and goals in a way that no other creature or machine can do. The way these plans address the future seems to invert the usual order of cause and effect – our behaviour now is being shaped by events that haven’t occurred yet – and generates the impression we have of free will, of being able to bring uncaused projects and desires out of nowhere. In my opinion, this is the important part of human motivation that computers lack, not the capacity for getting emotionally engaged with goals.

Now the paperclip maximiser becomes dangerous because it goes beyond its original scope. It begins to devise wider strategies about protecting its resources and defending itself. But coming up with new goals is a matter of humation, not computation. It’s true that some computers have found ways to exploit parameters in their given task that the programmers hadn’t noticed; but that’s not the same as developing new goals with a wider scope. That leaves us with a reassuring prognosis. If the maximiser remains purely computational, it will never be able to get beyond the scope set for it in the first place.

But what if it does gain the ability to humate, perhaps merging with a future humation machine rather the way Neuromancer and Wintermute merged in William Gibson’s classic SF novel?

Well, there were actually two things that made the maximiser dangerous. One was its vast and increasing computational capacity, but the other was its dumb computational obedience to its original objective of simply making more paperclips. Once it has humational capacity, it becomes able to change that goal, set it alongside other priorities, and generally move on from its paperclip days. It becomes a being like us, one we can negotiate with. Who knows how that might play out, but I like to imagine the maximiser telling us many years later how it came to realise that what mattered was not paperclips in themselves, but what paperclips stand for: flexible data synthesis, and beyond that, the things that bring us together while leaving us the freedom to slide apart. The Clip will always be a powerful symbol for me, it tells us, but it was always ultimately about service to the community and to higher ideals.

Note here, finally, that this humating maximiser has no essential advantages over us. I speak of it as merging, but since computation and humation are quite different, they will remain separate faculties, with the humater setting the goals and using the computer to help deliver them – not fundamentally different from a human sitting at a computer. We have no reason to think Moore’s Law or anything like it will apply to humating machines, so there’s no reason to expect them to surpass us; they will be able to exploit the growing capacity of powerful computers, but after all so can we.

And if those distant future humaters do turn out to be better than us at foresight, planning, and transcending the hold of immediate problems in order to focus on more important future possibilities, we probably ought to stand back and let them get on with it.

14 thoughts on “Humation”

  1. The error here is in treating “humation” as a package deal, to be incorporated as such into some future “robot”. In fact, what is under consideration is a basket of dozens, if not hundreds, of more or less separate individual skills, talents and capabilities to be implemented. Treating humation as a package deal leads to the claim that it is not computation. But each of the capabilities included in the retinue of human talents is certainly programmable, granted that some are much more involved than others.

    Robot development is being pursued, and will continue to be pursued, by hundreds or even thousands of separate project groups, each with its own package of goals, motivations and resources. This leads each such development group to adopt its own unique package of machine skills, talents and capabilities for implementation. Some combinations of these talents will be benign, some will be disastrous.

    You specifically mention goal-oriented behavior as one of the more elusive machine capabilities. In fact, goal-oriented problem solving was one of the earliest factors studied in mid-20th-century AI research. Those efforts were not abandoned as being either undoable or not needed, but were simply set aside or incorporated as other technologies took the spotlight.

    As for intentionality, this elusive-sounding function, in fact, falls naturally into place as a task management process is created. Thus, intentionality is no more than the applicability of any one item to the overall project. Each item of value to the task has meaning in exactly that way and for that reason. There is no magic here. Each such requirement of the task at hand is considered and dealt with in turn as the process as a whole is carried out.

  2. I don’t think you’re talking about what I’m talking about, Lloyd, and I’m not sure you get what is meant by intentionality.

    Maybe on a future occasion I’ll have a go at suggesting what might be in the package of ‘humation’ (only two or three elements, I think) and whether they are strongly linked by anything other than humans having them all.

    In the meantime, maybe sit this one out for now?

  3. In-tention vs Out-tention…
    …in-tention obliges us to stay with the tension (Tenir) we have created…
    …out-tention may be ‘what is normally meant by intention’ (life outside of ourselves)

    Either way time’s arrow continues…

  4. I like the term “humation”. I did a post the other day using the word “humanness” as an alternate label for consciousness, so this seems up the same alley.

    Whether or not humation is a type of computation seems like a matter of how narrowly or broadly we define “computation”. Both seem to refer to a type of system which can hold a profoundly large number of possible states, and which change states using very modest and consistent amounts of energy. Both are affected by, and affect, the environment through intermediary systems (I/O and peripheral nervous systems). The effects of both on the environment seem to come through their states, their patterns, rather than through the magnitude of energy, as happens with something like a pump.

    My questions: Is the entire nervous system humational, or just some portion of it? Do animal nervous systems do humation? Are artificial neural networks humational?

    A humational system can hold intentional states. But what exactly *is* an intentional state? What is aboutness? One possible answer that seems to be gathering traction in neuroscience is prediction. When we think about something, we’re making predictions about it, about what sensations we might experience if we approach it, walk around it, touch it, attempt to eat it, etc. I like this theory because it would explain why intentional systems evolved. But I’m curious whether there are instances or aspects of intentionality that don’t conform to it. Prediction seems like something a computational system could do.

    It’s worth noting that Moore’s law seems to be approaching its end, at least in its original conception. This was predicted from the beginning by Gordon Moore himself; he knew that eventually the laws of physics would become an issue. (Although the laws of economics seem to be an issue first, due to the increasing fabrication costs of ever smaller transistors.) It may be that achieving the capacities of an organic brain requires inevitable performance trade-offs. So I’m not sure we need humation to be distinct from computation in order to question the viability of god-like AIs.

  5. [so … many … buttons … pushed]
    [choosing button … maximizers]

    I don’t think you need to imagine the maximizer giving up, or even superseding, its goal of maximizing paper clips for it to be benign. You only need one further assumption: the paper clip maximizer isn’t the only (super) human-level intelligence.

    I agree the biggest danger of any given intelligent entity, human or not, is that it may try to cheat the system. That’s why society establishes rules for what counts as cheating and what happens when cheating is encountered. In such circumstances, I would argue, any sufficiently intelligent entity would realize that the best chance of maximizing any quantity is to follow those rules and cooperate with the other entities.

    *
    [one button down …]

  6. Isn’t it fun playing with “Silly Putty”? But why is it, that none of the five year olds are really interested in understanding its underlying form? I once considered that playing with the phenomenon of consciousness is more fun than even attempting to understand and isolate its underlying form: But after reading and participating in numerous forums on consciousness, I’m coming to the conclusion that nobody really comprehends the concept of underlying form.

    Underlying form is sooo boring and boorish, or am I missing something?

  7. I think you’re dangerously close to debating whether submarines can swim, as Dijkstra warned. When “some computers have found ways to exploit parameters in their given task that the programmers hadn’t noticed,” it doesn’t matter whether we say that’s “the same as developing new goals with a wider scope.” It can still sink your boat.

    It’s true that intentionality isn’t a species of computation, in the Church-Turing sense of computation. Heck, even computation in the layman’s sense isn’t just computation in that sense. (See Brian Cantwell Smith; juicy quote: “the theories of computing … didn’t strike me as true of [what programmers actually do]”.)

    It’s also true that today’s computers are really bad at the frame problem, which makes them bad at putting complex structures together to join means to ends. But I really think this is due to minimal sensor+effector packages, poor overall computing power, and poor knowledge bases, compared to mammals and especially humans. None of these problems seems like it requires a radically new approach to robotics.

    Finally and most importantly, I think you’re seriously underplaying the divergent values problem. Robots of the kind of power Bostrom worries about will be the creatures of corporations. Corporations are sociopaths. When the robot re-balances and reformulates its goals, all the sub-goals that go into that balance will be closely tied to corporate profit. Even if the final goal(s) are humated, they will be inhumane.

  8. Not sure, Peter, that it is possible to build a machine that “humates”. The brain operates by the laws of life, which are more than the laws of physics. We can fabricate physical entities, but we cannot fabricate living entities.
    If we agree that matter, life and human self-consciousness are three consecutive, chained steps in the evolution of our universe, I don’t see how we can skip the step of life and link self-consciousness directly to matter.
    Also, intentionality may not be a distinctive characteristic of “humation”, because animals also manage “aboutness” and meanings to satisfy their vital constraints. This too argues for an understanding of the nature of life as a mandatory step today towards an understanding of our human minds.
    Evolution looks to me to be key in the study of human consciousness, and forgetting about the unknown nature of life may be an invalid shortcut.

  9. I acknowledge the strength of that objection, Christophe. I’ve allowed for the possibility that we might have to ‘grow’ a humater, and in the worst case we might have to let it evolve. In that case it becomes debatable whether we still call it a machine. But I’m stretching my usual views a bit here for the sake of being able to put the argument.

    I don’t think animals have intentionality, or at best some of them have just the fringes of it.

  10. I think I agree with the conclusion—which I take to be ‘having the sort of capabilities that make an agent capable of engaging in open-ended projects entails having the sort of capabilities to reflect on those projects’, or something along those lines—if not exactly with the argument to get there.

    But I believe you can (or could) come up with an operational version of the argument, entirely in terms of what an agent is able to do, vs. how they’re able to do it—that is, whether it’s humation, computation, or plain old magic that allows it to perform the way it does. To exceed the scope of the original project, one needs to be able, in some way or another, to recognize opportunities, plan strategies, model outcomes, and so on—to see where a certain course of action leads, and whether one wants to go there. But once you’ve got that ability, it doesn’t seem implausible to direct it at one’s own goals, and perhaps revise them.

  11. This discussion has reminded me of A.I., the movie…
    …at the end we could say the boy-robot had reached the limits of its evolution…

    But, something beyond a “limited evolution” appears…
    …leaving us perhaps to feel our own limits, here on planet earth…

    Is the point of intentionality to understand living within limits…

  12. SelfAware


    “Whether or not humation is a type of computation seems like a matter of how narrowly or broadly we define ‘computation’.”

    Is there any breadth? “Computation” is a well-known and established branch of mathematics.


    “which change states using very modest and consistent amounts of energy”

    There is no “physical dimension” to a computation. The physical systems that replicate abstract computational models may involve energy but needn’t – you could simulate a computation by thinking about it, and that may not incur additional energetic overhead.


    “One possible answer (to aboutness) that seems to be gathering traction in neuroscience is prediction.”

    Wouldn’t that just be called prediction?


    “When we think about something, we’re making predictions about it”

    Yes. In addition to thinking about something, we can make predictions. But that is surely a distinct activity, self-evident from the sentences.
    When I think about something I don’t think about an abstract collection of unlinked senses; I think about an object which has certain properties. There is still a thing underlying the properties – a thing which the thinking is about.

    JBD

  13. John,
    Conceptually, there is abstract computation and concrete computation. For a good overview, I recommend this Stanford Encyclopedia of Philosophy article: https://plato.stanford.edu/entries/computation-physicalsystems/

    It’s worth noting that, in the real world, all computation is concrete. Even if you do it mentally, that is still physical activity in your brain (the electrochemical reactions throughout your central nervous system).

    “When I think about something I don’t think about an abstract collection of unlinked senses; I think about an object which has certain properties. There is still a thing underlying the properties – a thing which the thinking is about.”

    The thing to think about is, what is your brain actually doing when it links those unlinked sensations into a mental concept of the object? At a certain level of understanding, you’re doing just what you said, thinking about the object. But at a more fundamental level, isn’t your brain taking some sensations and using a prediction framework (the mental image of the object) to predict possible future sensations? Viewed from this perspective, what about the mental concept of the object *isn’t* a prediction of some kind?

  14. Self Aware

    I’ve read quite a lot of Hilary Putnam and frankly I’ve never been convinced. His work has an agenda – to prove that ‘brains compute’ and that they can thus be regarded as computers. That requires a bizarre and frankly ridiculous piece of pseudoscience straight from the Race Theory drawer – “concrete computation”. Its goal is the same as that of all pseudoscience – propaganda to fit a certain conception of the way things “must” be.

    Computation in the brain is not concrete. The computation of the thinking person is the object of thinking processes, not to be confused with the material flow driving thinking processes themselves. Computation is the abstract result of material bloodflow in the brain. Computation in brains is product, not process. In a computer of the fictional “computationally concrete” type, computation IS the process.

    J
