Gofai’s not dead

I was brought up short by this bold assertion recently:

 … since last two decades, research towards making machines conscious has gained great momentum

Surely not? My perception of the state of play is quite different, at any rate; it seems to me that the optimism and energy which were evident in the quest for machine consciousness twenty or perhaps thirty years ago have gradually ebbed. Not that anyone has really changed their mind on the ultimate feasibility of a conscious artefact, but expectations about when it might be achieved have generally slipped back further into the future. Effort has been redirected towards related but more tractable problems, ones that typically move us on in useful ways but do not attempt to crack the big challenge in a single bite. You could say that the grand project is in a state of decline, though I think it would be kinder, and perhaps more accurate, to say it has moved into a period of greater maturity.

That optimistic aside was from Starzyk and Prasad, at the start of their paper A Computational Model of Machine Consciousness in the International Journal of Machine Consciousness. It’s not really fair to describe what they’re offering as GOFAI (Good Old-Fashioned Artificial Intelligence) in the original derogatory sense of the term, but I thought their paper had a definite whiff of the heady optimism of those earlier days.

Notwithstanding that optimism, their view is that there are still some bits of foundational work to be done…

there is a lack of a physical description of consciousness that could become a foundation for such models.

…and of course they propose to fill the gap.

They start off with a tip of the hat to five axioms proposed by Aleksander and Dunmall back in 2003, which suggest essential functional properties of a conscious system: embodiment and situatedness, episodic memory, attention, goals and motivation, and emotions. These are said to be necessary but not sufficient conditions for consciousness. If we were in an argumentative mood, I think we could put together a case that some of these are not strictly essential, but we can take it as a useful list for all practical purposes, which is very much where Starzyk and Prasad are coming from.

They give a quick review of some scientific theories (oddly, John Searle gets a mention here) and then some philosophical ones. This is avowedly not a comprehensive survey, nor a very lengthy one, and the selection is idiosyncratic. Prominent mentions go to Rosenthal’s HOT (Higher Order Theory – roughly the view that a thought is conscious if there is another thought about it) and Baars’ Global Workspace (which must be the most-cited theoretical model by some distance).

In an unexpected and refreshing leap away from the Anglo-Saxon tradition they also cite Nisargadatta, a leading exponent of the Advaita tradition, on the distinction between awareness and consciousness. The two words have been recruited to deal with a lot of different distinctions over the years: generally, I think, ‘awareness’ tends to get used for mere sensory operation or something of that kind, while ‘consciousness’ is reserved for something more ruminative. Nisargadatta seems to be on a different tack:

When he uses the term “consciousness”, he seems to equate that term with the knowledge of “I Am”. On the other hand, when he talks about “awareness”, he points to the absolute, something altogether beyond consciousness, which exists non-dualistically irrespective of the presence or absence of consciousness. Thus, according to him, awareness comes first and it exists always. Consciousness can appear and disappear, but awareness always remains.

But then we’re also told:

 A plant may be aware of a damage done to it [!], but it cannot be conscious about it, since it is not intelligent. In a similar way a worm cut in half is aware and may even feel the pain but it is not conscious about its own body being destroyed.

To allow awareness to a plant seems like the last extreme of generosity in this area – right at the final point before we tip over into outright panpsychism – but in general these remarks make it sound more as if we’re dealing with something like the kind of distinction we’re most used to.

Then we have a section on evolution and the development of consciousness, including a table. While I understand that Starzyk and Prasad are keen to situate their account securely in an evolutionary context, it does seem premature to me to discuss the evolution of consciousness before you’ve defined it. In the case of giraffes’ necks, say, there may not be too much of a difficulty, but consciousness is a slippery concept and there is always a danger that you end up discussing something different from, and simpler than, what you first had in mind.

So on to the definition.

…our aim was not to remove ambiguity from philosophers’ discussion about intelligence or various types of intelligence, but to describe mechanisms and the minimum requirements for the machine to be considered intelligent. In a similar effort, we will try to define machine consciousness in functional terms, such that once a machine satisfies this definition, it is conscious, disregarding the level or form of consciousness it may possess.

 I felt a slight sense of foreboding at this, but here at last is the definition itself.

A machine is conscious if besides the required mechanisms for perception, action, learning and associative memory, it has a central executive that controls all the processes (conscious or subconscious) of the machine…

…[the] central executive, by relating cognitive experience to internal motivations and plans, creates self-awareness and conscious states of mind.

 I’m afraid this too reminds me of the old days when we were routinely presented with accounts that described a possible architecture without ever giving us any explanation of how this particular arrangement of functions gave rise to the magic inner experience of consciousness itself. We ask ourselves: could we put together a machine with a central executive of this kind that wasn’t conscious at all? It’s hard to see why not.

I suppose the central executive idea is why Starzyk and Prasad feel keen on the Global Workspace, though I think it would be wrong to take that as genuinely similar; rather than a central control module, it’s a space where miscellaneous functions can co-operate in an anarchic but productive manner.

Starzyk and Prasad go on to flesh out the model, which consists of three main functional blocks – Sensory-motor, Episodic Memory and Learning, and Central Executive; but none of this helped me with the basic question. There’s also an interesting suggestion that attention switching is analogous to the saccades performed by our eyes. These are the rapid ballistic movements the eyes make towards things of potential interest (like a flash of light in a dark place); I’m not precisely sure how the analogy works, but it seems thought-provoking at least.
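
Just to make the worry concrete, here is the kind of toy one could knock up along those lines – a minimal sketch in Python, with class and method names of my own invention rather than anything taken from the paper: a ‘central executive’ polls a sensory-motor block, switches attention to the most salient input (a crude stand-in for the saccade analogy), issues an action, and files the episode away in episodic memory.

```python
# A purely illustrative toy, not Starzyk and Prasad's model: the class and
# method names are invented, loosely following the three blocks the paper
# names (sensory-motor, episodic memory and learning, central executive).

class SensoryMotor:
    def sense(self):
        # Placeholder inputs: a couple of channels with salience scores.
        return [
            {"channel": "visual", "content": "flash of light", "salience": 0.9},
            {"channel": "auditory", "content": "low hum", "salience": 0.2},
        ]

    def act(self, command):
        print(f"acting: {command}")


class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def store(self, percept, action):
        # Record what was attended to and what was done about it.
        self.episodes.append((percept, action))


class CentralExecutive:
    """Relates percepts to internal motivations and decides where attention goes."""

    def __init__(self, sensory, memory, motivations):
        self.sensory = sensory
        self.memory = memory
        self.motivations = motivations  # stubbed: the real model would weigh these

    def attention_switch(self, percepts):
        # Crude stand-in for the saccade analogy: jump to the most salient input.
        return max(percepts, key=lambda p: p["salience"])

    def step(self):
        percepts = self.sensory.sense()
        focus = self.attention_switch(percepts)
        action = f"orient towards {focus['channel']} ({focus['content']})"
        self.sensory.act(action)
        self.memory.store(focus, action)


if __name__ == "__main__":
    executive = CentralExecutive(SensoryMotor(), EpisodicMemory(),
                                 motivations={"curiosity": 0.7})
    executive.step()
```

Nobody, I take it, would be tempted to call this script conscious; the question is what would have to be added before we were so tempted, and that is exactly the question the architecture on its own does not answer.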

They also provide a very handy comparison with a number of other models – a useful section, but it sort of undermines the earlier assertion that there is a gap in this area waiting for attention.

Overall, it turns out that the sense of older times we got at the beginning of the paper is sort of borne out by the conclusion. Many technically inclined people have always been impatient with the philosophical whiffling and keen to get on with building the machine instead, confident that the theoretical insights would bloom from a soil fertilised with practical achievements. But by now we can surely say we’ve tried that, and it didn’t work. Researchers forging ahead without a clear philosophy ran into the frame problem, or simple under-performance, or the inability of their machines to deal with intentionality and meaning. They either abandoned the attempt, adopted more modest goals, or got sidetracked into vast enabling projects such as the compilation of a total encyclopedia to embody background knowledge or the neuronal modelling of an entire brain. In a few cases brute force approaches actually ended up yielding some practical results, but never touched the core issue of consciousness.

I don’t quite know whether it’s sort of depressing that effort is still being devoted to essentially doomed avenues, or sort of heartening that optimism never quite dies.

34 thoughts on “Gofai’s not dead”

  1. Peter, I share your skepticism. This definition of consciousness, for example:

    “A machine is conscious if besides the required mechanisms for perception, action, learning and associative memory, it has a central executive that controls all the processes (conscious or subconscious) of the machine…”

    But isn’t the “central executive” mechanism also a part of the machine? So if the central executive controls all the processes of the machine, it must also control its own processes. Doesn’t this seem to you to lead to an infinite regress?

  2. A purely minor semantic point, but can anything “slip *back* further into the future”? Surely it would slip forwards into it?

  3. Arnold – there is certainly that danger.

    Jon – you’re probably right: I’ve had similar conversations in the past. To me the ‘back’ merely implies further away from my current point of view (the future event is backing away instead of coming closer), but I think most people naturally see ‘back’ as back in time, i.e. against the perceived chronological flow.

    It’s interesting, incidentally, how spatial metaphors have been recruited to deal with time (if that’s what’s happened here – it could be that a general vocabulary is used mainly for spatial relations but can also be used for other orderly sequential relations… but let’s not go there just now). In recent times the mental x-axis has become so strongly fixed in the mind that some of my colleagues refer to delays by saying a project is ‘drifting to the right’.

  4. Be careful… or you might end up again in the “dimensional time” debate.

    This distinction of awareness and consciousness is very interesting.

    Consider a “beware of the dog” sign compared to a possible “be conscious about the dog” one… (I assume beware = be aware).

    As an individual, once you are in an alert state you are sort of aware of events around you, even subconsciously. What you need are good reactions, not knowledge. Your body (brain) can be aware of an attack and dodge it before you are conscious of it.

    Our subconscious behaviour, part of our daily lives, relies on awareness. Probably awareness is what evolutionary biologists refer to as proto-consciousness.

    The latency of conscious states is something to consider when surrounded by predators. And fast detection/reaction (the awareness process) can be unproductive when an intelligent decision is required.

    We already have machines that are *aware* of many things, but not conscious about them.

    The dynamic blogroll currently includes a link to a Galen Strawson interview video, in which he uses the concept of “cognitive phenomenology” as an epistemic extension of classic qualia, which I think is a brilliant expression. IMO, this might be a good avenue to explore.

  5. Vicente: “The dynamic blogroll currently includes a link to a Galen Strawson interview video, in which he uses the concept of “cognitive phenomenology” as an epistemic extension of classic qualia, which I think is a brilliant expression. IMO, this might be a good avenue to explore.”

    I agree. In my model of the cognitive brain *cognitive phenomenology*/qualia would correspond to those constituent regions of retinoid space that have been highlighted by the heuristic self-locus (selective attention) and subjected to cognitive identification. In this view any pattern of sensory excitation that can be abstracted from our global phenomenal world can be considered to be a quale — color, shape, sound, feelings, etc.

  6. Consciousness requires the perception of qualia that is preparatory to the taking of an action in response, and that also requires the assessment and selection of options that the responder has both inherited and learned on its own. All of these responders have, so far as we know, been limited to the biological forms that we refer to as living, and we clearly do so precisely because they are freed from the rigors of nature’s reactive “programming” to be proactive programmers of themselves. No artificial life machines have so far been made to replicate these functions and in effect be living.

  7. Roy,

    “consciousness requires the perception of qualia” or rather “consciousness requires the perception through qualia”?

    they are freed from the rigors of nature’s reactive “programming” to be proactive programmers of themselves

    Not completely though, they are to some extent… a crab very little, a Buddhist monk much more.

    I think you are right overall, there seems to be an intimate entanglement between life and consciousness, in this Universe at least. I wonder if we would have to sort out what life is, before we can progress in the study of consciousness. And, to clarify the very nature of life, we need to fully understand the physical aspects of the Universe… so it looks like we might have to wait for a real T.O.E. to come.

  8. Vicente,
    I agree that a crab has fewer options than a Buddhist monk. Which one makes better use of its limitations is another question. Also I do suspect that there are proactive strategists in abundance in the universe.

    And you’re right, I could have said “through” qualia instead of “of” qualia. But if qualia are defined as “the internal and subjective component of sense perceptions, arising from stimulation of the senses by phenomena,” and also singularly as “a quality or property as perceived or experienced by a person,” then the question is, are we perceiving these sensations objectively or subjectively? Or both?

  9. Roy asks:

    “…are we perceiving these sensations objectively or subjectively? Or both?”

    Seems to me we’re not in a perceptual relation to our sensations or any other conscious experience, but rather to the world, including the body, which present themselves to us *in terms of* sensations and other conscious experiences. As subjects we *consist* of conscious experiences, including, possibly, the (illusory) experience of being an observer of experience, http://www.naturalism.org/kto.htm#Observing

    This parallels the objective fact that I’m not in a perceptual relation to my brain, which itself instantiates my perceptual and cognitive apparatus. The limits of recursive representation might have something to do with the existence of conscious experience. The concrete, unanalyzable qualitativeness of subjective sensory particulars is perhaps a function of the fact that, as Thomas Metzinger puts it, the objective, neurally instantiated state space dimensions corresponding to qualia are ‘impenetrable to cognitive operations,’ http://www.naturalism.org/kto.htm#Limits

  10. I suspect we can experience our sensations consciously as well as unconsciously without drawing some imaginary line between where we precisely leave off and the world doesn’t. In fact all of the assessments I referred to are probabilistic exactly for that inexact reason.

  11. Tom,

    I agree that, in a first-order approach, we are not in a perceptual relation to our sensations. May I put it that you are saying we *are* those perceptions, that consciousness is its contents?

    But what about a second-order approach, in which we consider a certain perception? What about being conscious of being conscious? What kind of relation would that be? An epistemic relation to a perceptual relation. So could we say that our epistemic relation to the world is a staggered process?

  12. Vicente,

    Yes, the fact that we are conscious can be the referent of a conscious thought, “I am conscious now,” so we can of course know about consciousness. But we’re not in an observational relationship to consciousness and its contents. So our epistemic relation to the world is mediated by consciousness, but not by observing consciousness. We observe (represent, model) the world itself.

  13. Tom says, “But we’re not in an observational relationship to consciousness and its contents.”
    I think we are, which is why we’re allowed to be confused about its purpose.

  14. ’tis a waste of time.

    Somebody should tell these guys a computer simulation of rain in a met office computer hasn’t made a weather forecaster wet yet. And never will.

    No amount of ‘physical descriptions’ of rain will ever make computers cause rain. As usual, the analysts are falling into the trap of thinking that just because consciousness isn’t material, it isn’t physical.

  15. Hi Roy,

    Re: Comment 13 – could it be that our “confusion” is generated by trying to “observe” that which cannot be observed from our referential position? The old subjective/objective fun-house hall of mirrors? Or, “what Tom said”.

    The computer will be labelled “conscious” (by us) when it “feels” its existential presence in the universe, and tells us so. I’m assuming we’ll learn it to talk good English.

  16. Bill, we’re in an observational relationship, etc., but it’s clearly not all seeing, feeling, or all knowing.
    That knowing would include at least some larger sense of conscious or consciousness’s purposes than we’ve come up with so far.

    *In the pragmatic way of thinking in terms of conceivable practical implications, every thing has a purpose, and its purpose is the first thing that we should try to note about it.*
    Charles Sanders Peirce

    And so we try. But will that computer?

  17. Eric,

    Speaking from the standpoint of a student of “Naturalism”: If we know anything, we know that consciousness evolved or emerged in social primates (we are not alone: http://www.youtube.com/watch?v=gG-xBHiz4OU), as an “epi-phenomenon” of the physical, biological processes within “the cognitive brain”.

    Was it the “purpose” of this evolution or a byproduct of increasing neuro-biological modeling accuracy and complexity? I don’t know, but it seems to have worked out rather well. Evolution does not set out with a “purpose”, but the fit survive.

    Could it be engineered into a computer? Given time, and better understanding of the executive function hierarchies/interactions in biological systems, I don’t see why not.

    Tom?

    Arnold?

  18. I know you didn’t ask me, but I’d like to add that there’s a difference between being given a purpose to serve, and acquiring your own purposes to both serve and evolve. I think an essential quality of biological forms is their ability to acquire their own purposes and evolve them. When a computer is evolved by us to self evolve and find purposes of its own, then the catch may be that it will no longer be a true computer but a form of life. (And I’ll shut up now.)

  19. Getting back to the distinction between consciousness and awareness: Colin McGinn makes a point about dreaming. While dreaming we are somehow conscious, but mostly unaware of the surrounding events. Yet we are not conscious in the strict sense of understanding, or of having a meaningful experience; in fact, dreams are often confusing and meaningless. What kind of state is dreaming, consciousness-wise?

  20. Vicente, I would say that dreaming is a stream of conscious content without sensory input and without the engagement of logical inference. In the dream state (image-bound recall), features of recollected images can combine in ways that are unconstrained by the patterns of sensory stimulation and the logical corrections that normally shape our experience when we are awake. See, for example, “Pattern Recognition and Associative Sequential Recall” in TCB, pp. 169-174, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter10.pdf

  21. Bill

    “Could it be engineered into a computer? Given time, and better understanding of the executive function hierarchies/interactions in biological systems, I don’t see why not.”

    Unlikely – well, impossible. Consciousness is not a logical sequence of steps but a physical phenomenon. The ontologies don’t mix. Modelling a lump of rock in a computer doesn’t create one. It “creates” a mental image in the eyes of computer users only (as far as the computer itself is concerned, one program is no different to any other, so a program to simulate consciousness is only detectable as such by users).

    In the natural sense, a computer never creates anything, of course. Its physical characteristics are logically mapped to digits by observers who know the rules: -1mV == -1, 0mV == 0.

    In that sense, computers don’t even exist. Can something that doesn’t exist think? I doubt it!

  22. Bill:

    “If we know anything, we know that consciousness evolved or emerged in social primates…”

    What was naturally selected for in evolution were the adaptive, neurally instantiated cognitive capacities *associated with* being conscious, capacities which seem to have to do with information integration, flexible behavior, learning, memory and the simulation of past and future, http://www.naturalism.org/kto.htm#Neuroscience. Why such capacities entail the existence of consciousness (qualitative experience available to the system alone) isn’t clear, but were an AI to exhibit such capacities, it would be chauvinistic of us not to attribute consciousness to it, given the evidence.

    Btw, lucid dreams (dreams with full awareness that you’re dreaming), although unconstrained by sensory input, are conscious episodes informed by reasoning, logic, and other higher-level mental capacities available to us when awake, http://www.naturalism.org/dreaming.htm

  23. “but were an AI to exhibit such capacities, it would be chauvinistic of us not to attribute consciousness to it, given the evidence.”

    Is it “chauvinistic” to distinguish between a duck and a painting of a duck?

  24. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck – unless proven otherwise. That the problem of other minds seems to apply more to AIs with our capacities is the chauvinism of supposing that differences in how intelligence is instantiated count for more than intelligence itself in entailing the existence of consciousness. We should assume sentience in human-equivalent machines unless there’s very good reason to doubt it, otherwise we might be inflicting suffering without knowing it, a point Thomas Metzinger makes in the last chapter of Being No One.

  25. Tom,

    human-equivalent machines

    Equivalent!? Equivalent as in indistinguishable?

    Or equivalent as in having the same performance on some tasks?

    There are millions of human-humans inflicting suffering on other human-humans, knowing perfectly well about it. I don’t mean to be sarcastic, but should we care about inflicting suffering on the Roomba vacuum cleaner?

    Finally, sentience itself does not determine whether suffering can be experienced; actually, you could have sentient beings with no emotions or feelings at all – or would they just be conscious?

    Maybe there we have another term for the list of confusion: consciousness, awareness, sentience.

    I believe you might be making too many assumptions.

  26. “human-equivalent machines” and suffering.

    All of this gets me thinking of the androids in the wonderful movie “Blade Runner”, a contemplation on these very themes.

    Which gets me wondering…would the easiest way to create such a machine be to reverse engineer a fertilized egg with manufactured humanoid DNA and then support its development? Would we consider such a “being” a “machine” or does it have to be made of metal and silicon?

    Finally, if “suffering” is defined as the psychological state that causes a sentient entity to experience feelings of anxiety, restlessness, or sadness then just being conscious should be sufficient to allow for suffering. To use Koko (the gorilla) as an example again, there is a very touching YouTube video of her receiving the news of her kitten’s death. Her mournful sobs are heart-rending. Speaking of chauvinism, there is no rational ethical justification for interfering with the natural lives of social primates given what we know about their consciousness (http://en.wikipedia.org/wiki/Mirror_test).

    Nor would there be with a conscious machine, human equivalent or otherwise. A slippery slope indeed.

    Thanks Tom.

  27. Bill,

    if “suffering” is defined as the psychological state that causes a sentient entity to experience feelings of anxiety, restlessness, or sadness then just being conscious should be sufficient to allow for suffering.

    What a bizarre and incoherent definition.

    Anxiety, restlessness and sadness are the results that define such a state, not the cause. Of course, this state could be the cause of other subsequent states.

    Consciousness *allows* suffering, which doesn’t mean that it necessarily entails suffering. Actually, the ultimate goal of achieving very high conscious states would be to put an end to suffering.

    Suffering is related to human psychology, not to consciousness per se. As (Edelman?) says, to stop suffering, you have to stop being human.

  28. Vicente,

    “What a bizarre and incoherent definition.”

    You are right, my apologies. I cut and pasted that from an on-line psychological dictionary without thinking it through.

    And, I agree that consciousness “allows” for suffering but doesn’t necessarily entail it. Brain states are fluid and discontinuous, but aren’t some “causes” of suffering generated inside “the machine”? Is it possible to be a conscious entity and never know suffering? That is, does existential angst necessarily follow consciousness?

    And, do you also disagree about Koko? Must one be human to suffer?

  29. “We should assume sentience in human-equivalent machines unless there’s very good reason to doubt it”

    I don’t like the burden of proof here – it’s totally unscientific. The scientific approach is to doubt all unsubstantiated guesses unless there’s very good reason – i.e. strong evidence.

    In the absence of evidence, reason prevails and the starting assumption must be doubt.

    There is simply no reason to assume that computers think. None at all. The only thinking objects in the universe we know of are anime’s brains, and computers are nothing like them. Computers are logical, not physical (hence they don’t exist); chemical constituency and physical location are key components of anime brains, necessary characteristics, whereas computers can be constructed of “boiled cabbage and old rag mats” as Orwell might say.

    As far as this argument is concerned, a computer is ontologically similar to a book: a user-relative object with no intrinsic existence or boundary. This we know to be at odds with thinking minds, which are bounded to the brain object that causes them.

    The onus of proof is on YOU Tom, not on doubters. We await your evidence!

  30. oops – spellchecker. I should have said “animal” not “anime” ..

  31. Bill,

    Difficult questions. First, I suppose, we should distinguish between psychological pain and suffering. While psych-pain seems to be more objective, an emotion resulting from an objective cause, suffering is a much more subjective and complex process. Probably most people will agree on why somebody is in pain or grieves over something, but there might be a lot of debate about why somebody should suffer or not in a particular situation. Usually, in suffering, significant cultural, educational and experiential components are involved, while pain is a direct shot.

    So, I would dare to say that, to suffer like a human, you have to be human.

    Koko probably was grieving over the kitten, but I don’t know if suffering followed after that basic primary emotion.

    Oriental philosophy will tell you that pain is unavoidable but suffering is not.

    Difficult matter.

  32. John,

    Are you inclined to believe that pain and suffering (as well as other contents of consciousness) are products of a particular kind of biological mechanism in a human brain?

  33. Arnold

    Yes. I think there is no other conclusion that can be drawn that makes even the slightest sense.

    JBD
