Consciousness Not Needed

Artificial General Intelligence – human-style consciousness in machines – is unnecessary, says Daniel Dennett, in an interesting piece that makes several unexpected points. His thinking seems to have moved on in certain respects, though I think the underlying optimism about digital cognition is still there.

He starts out by highlighting some dangers that arise even with the non-conscious kinds of AI we have already. Recent developments make it easy to fake very realistic video of recognisable people doing or saying whatever we want. We can imagine, Dennett says ‘… a return to analog film-exposed-to-light, kept in “tamper-proof” systems until shown to juries, etc.’ Well, I think that scenario will remain imaginary. These are not completely new concerns; similar ones go back to the people who in the last century used to say ‘the camera cannot lie’ and were so often proven wrong. Actually the question of whether a piece of evidence is digital or analog is pretty well irrelevant; what matters is its history – whether it could have been interfered with, hence those tamper-proof containers – and that has always been a concern and will no doubt remain one in some form (I remember the special cassette recorders once used by the authorities to interview suspects, which automatically made a copy for the interviewee and could not erase or rewind the tape).
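
The digital equivalent of the tamper-proof container, incidentally, is essentially an append-only log of cryptographic hashes, in which each entry commits to everything recorded before it, so any later alteration is detectable. A minimal sketch in Python – the events and field names are invented purely for illustration:

    import hashlib
    import json
    import time

    def entry_digest(prev_digest, event):
        # Each digest covers the previous digest plus this event,
        # chaining the whole history together.
        payload = json.dumps(event, sort_keys=True).encode()
        return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

    def append_event(log, event):
        prev = log[-1]["digest"] if log else "genesis"
        log.append({"event": event, "digest": entry_digest(prev, event)})

    def verify(log):
        # Recompute every digest; tampering anywhere breaks the chain.
        prev = "genesis"
        for item in log:
            if item["digest"] != entry_digest(prev, item["event"]):
                return False
            prev = item["digest"]
        return True

    log = []
    append_event(log, {"t": time.time(), "action": "recorded"})
    append_event(log, {"t": time.time(), "action": "copied to evidence store"})
    assert verify(log)
    log[0]["event"]["action"] = "edited"   # tamper with the history...
    assert not verify(log)                 # ...and verification fails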

I think his concerns have a more solid foundation though, when he goes on to say that there is now some danger of people mistaking simple AI for the kind of conscious entity they can trust. People do sometimes seem willing to be convinced rather easily that a machine has a mind of its own. That tendency in itself is also not exactly new, going back to Weizenbaum’s simple chat program ELIZA (as Dennett says); but these days serious discussion is beginning about topics like robot rights and robot morality. No reason why we shouldn’t discuss them – I think we should – but the idea that they are issues of immediate practical concern seems radically premature. Still, I’m not that worried. It’s true that some people will play along with a chatbot in the Loebner contest, or pretend Siri or Alexa is a thinking being, but I think anyone who is even half-serious about it can easily tell the difference. Dennett suggests we might need licensed operators trained to be hard-nosed and unsympathetic to AIs (‘an ugly talent, reeking of racism’ – !!!), but I don’t think it’s going to be that bad.

Dennett emphasises that current AI lacks true agency and calls on the creators of new systems to be more upfront about the fact that their humanising touches are fakery, even ‘false advertising’. I have the impression that Dennett would once have considered agency, as a concept, a little fuzzy round the edges – a matter of explanatory stances and optimality rather than a clear reality whose sharp edges needed to be strongly defended. Years ago he worked with Rodney Brooks on Cog, a deliberately humanoid robot they hoped would attain consciousness (it all seemed so easy back then…), and my impression is that the strategy then had a large element of ‘fake it till you make it’. But hey, I wouldn’t criticise Dennett for allowing his outlook to develop in the light of experience.

On to the two main points. Dennett says we don’t need conscious AI because there is plenty of natural human consciousness around; what we need is intelligent tools, somewhat like oracles perhaps (given the history of real oracles that might be a dubious comparison – is there a single example of an oracle that was both clear and helpful? In The Golden Ass there’s a pair of soothsayers who secretly give the same short verse to every client; in the end they’re forced to give up the profitable business, not by being rumbled, but by sheer boredom.)

I would have thought that there were jobs a conscious AI could do for us. Consciousness allows us to reflect on future and imagined contingencies, spot relevant factors from scratch, and rise above the current set of inputs. Those ought to be useful capacities in a lot of routine planning and management (I can’t help thinking they might be assets for self-driving vehicles); yes, humans could do it all, but it’s boring, detailed, and takes too long. I reckon, if there’s a job you’d give a slave, it’s probably right for a robot.

The second main point is that we ought to be wary of trusting conscious AIs because they will be invulnerable. Putting them in jail is meaningless because they live in boxes anyway; they can copy themselves and download backups, so they don’t die; unless we build in some pain function, there are really no sanctions to underpin their morality.

This is interesting because Dennett by and large assumes that future conscious AIs would be entirely digital, made of data; but the points he makes about their immortality and generally Platonic existence implicitly underline how different digital entities are from the one-off reality of human minds. I’ve mentioned this ontological difference before, and it surely provides one good reason to hesitate before assuming that consciousness can be purely digital. We’re not just data, we’re actual historical entities; what exactly that means, whether something to do with Meinongian distinctions between existence and subsistence, or something else entirely, I don’t really think anyone knows, frustrating as that is.

Finally, isn’t it a bit bleak to suggest that we can’t trust entities that aren’t subject to the death penalty, imprisonment, or other punitive sanctions? Aren’t there other grounds for morality? Call me Pollyanna, but I like to think of future conscious AIs proving irrefutably for themselves that virtue is its own reward.

89 thoughts on “Consciousness Not Needed”

  1. I looked into the many directions our host and Dennett proclaimed…
    …it took about a half an hour, and thank you…

    Trust and verify (virtue) in our time, seems to have been the issue …
    …I am as old as Dennett, and understand looking back..

    Peter’s post this time could have been more about…
    …Meinongian distinctions than Pollyannian distinctions…

  2. AIs will also have to be “historical entities”, a point I believe you are making here. However, that doesn’t mean they need be one-off like humans. Since a digital AI can be copied, all or in part, the possible architectures and configurations are huge. The AI I’m talking to may be saying something similar to thousands of others at the same time. Each AI may have its own history in some areas but share a single history in others. For example, a digital personal assistant can duplicate itself for the purpose of pursuing several tasks at once but bring the experiences together later to form a single history. Such an AI’s concept of history will be quite different from a human’s. There will be both useful and dangerous configurations that we can’t even guess at this point.

  3. Sure enough, an AI won’t need external incentives to be moral if morality is written into its core programming. But given how poorly humans understand how we make moral judgments, I don’t see us successfully programming robots to be moral, at least not any time soon.

    Peter, you say “consciousness allows us to reflect on future and imagined contingencies,” but I think we should be more specific. It’s self-consciousness that allows these. Other aspects of consciousness, like qualia, are not required to anticipate future contingencies (except of course subjective experience itself), it seems to me.

  4. Thanks Paul for bringing the focus onto self-consciousness, as it is a characteristic of human intelligence that AI has to take into account. It is a human characteristic resulting from primate evolution, and the problem is that we do not today know the nature of self-consciousness. That is quite a surprising state of affairs: we talk about the possibility of conscious machines without acknowledging that we do not know what self-consciousness is. And talking about a digital version of something we do not understand looks a bit troubling.
    I don’t want to be negative about robot ethics, which is surely an important subject, but I feel we should not skip the question of the nature of the human mind, as it is the entity that makes the moral judgments.
    Also, the comment about current AI lacking true agency implicitly highlights our lack of understanding of the nature of life. Agency came up with life in the evolution of our universe, and we do not know the nature of life. This is linked to the previous subject: even if we reach an understanding of the nature of self-consciousness in living entities, we will still have to understand the nature of life to really address human intelligence in material artefacts.
    Bottom line: I feel that evolutionary approaches to AI need more work.

  5. I think Dennett is right that, by and large, we don’t need AI systems to be like us. No one wants a navigation system that wants to self-actualize. Such systems can simulate past and future scenarios to evaluate courses of action, just like we do, but without the self-concern we bring to the task – that is, without the organic programming we work under. Without it, it’s hard to imagine them triggering our intuition of a fellow consciousness.

    There may be exceptions. For example, I could see an argument that a caregiver robot might benefit from having real compassion as opposed to some fake variety. Even then though, the caregiver robot doesn’t need to worry about itself beyond how useful it is being. Still, we might regard such a system as conscious.

    I personally don’t see any obstacle to a digital system being conscious. Generally, a digital system can approximate the functionality of an analog one to arbitrary levels of precision, provided we can add enough capacity.
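
    A toy illustration of that capacity point: quantising a smooth “analog” signal with more and more bits, where each extra bit of capacity roughly halves the worst-case error (the signal and the bit depths here are arbitrary):

        import math

        def quantise(x, bits):
            # Snap x (in [-1, 1]) to the nearest of 2**bits evenly spaced levels.
            levels = 2 ** bits
            return round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

        signal = [math.sin(2 * math.pi * t / 1000) for t in range(1000)]
        for bits in (4, 8, 12, 16):
            err = max(abs(s - quantise(s, bits)) for s in signal)
            print(f"{bits:2d} bits -> max error {err:.2e}")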

  6. An AI/robot needs emotion in order to carry on conversations that appear human. There would be huge demand for an AI personal assistant (home and business) to which a human could give commands as if to a human personal assistant. This requires more than natural language understanding. It requires understanding of human needs and of human ways of speaking in shorthand. For example, when we want our assistant to book travel, we expect it to take short commands like, “I need to go to this conference in Austria” (provides a link). The AI needs to know whether the human should apply for the student rate and whether they prefer to stay in the conference hotel or an AirBnB nearby. The AI needs to ask about anything it doesn’t know or is unsure of. It needs to be able to review the plans with the human. In short, it needs to know a lot and, perhaps more importantly, it needs to know what is important and what is not.

    This is hard to do well even for a human assistant. I’m sure many of us have given another human a task like this. The difference between a bad assistant and a good one comes down to their worldly knowledge and their ability to interpret complicated websites and phone conversations. It often comes down to their ability to “Do what I mean” (DWIM). DWIM has been a joke in the computer world for decades. Anyone who has designed a command language jokes about how nice it would be if it had a DWIM command, or despairs at users who wrongly expect the computer to DWIM. DWIM is what an AI assistant needs to do.

    DWIM requires we communicate with the AI as a human does and, to a certain extent, that requires emotion or at least human-like motivations. We don’t need the AI to cry or want to hug it out with us but we do need it to understand our hopes and desires based on normal, everyday human language. I think that requires some level of emotion in the AI.
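
    To sketch the “ask about anything it doesn’t know” behaviour as code – a trivial slot-filling loop, with the trip-booking frame and slot names invented purely for illustration:

        # Slots a hypothetical travel-booking assistant must fill before acting.
        REQUIRED_SLOTS = ["destination", "dates", "student_rate", "lodging_preference"]

        def book_travel(request):
            known = dict(request)            # whatever the short command supplied
            for slot in REQUIRED_SLOTS:
                if slot not in known:        # ask about anything still unknown
                    known[slot] = input(f"I couldn't infer '{slot}' - what should it be? ")
            print("Plan for review:", known) # review the plan with the human

        book_travel({"destination": "Austria", "dates": "conference week"})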

  7. It is possible that at some time in the future we will be able to decide whether an AI will have consciousness or not. It is just as likely that we will never find a way to make an AI conscious or that when an AI gets to a certain level of capability we will have no choice in the matter and the AI will have some form of consciousness. We simply don’t know enough yet to say one way or another.

  8. While I agree we can better answer this question of whether an AI needs to be conscious when we know more about how the brain works, I still think we should go about it assuming we’re never going to find that it’s due to some non-scientific, non-mechanical secret sauce. Perhaps we would be better off trying to figure out exactly what traits an AI might have or need without referring to the resulting functionality as “conscious”. I’m with Dennett on this. The whole “hard problem” comes down to a sense of wonderment about what it feels like to be somebody else (another human, animal, or AI) since I only know what it feels like to be me. If we restrict our discussion to AI traits that we can actually observe and test, the mystery can be avoided.

  9. Think about “sense and reference” as AI language, and think about “sense and reference” as HI language…
    …vertuality verses virtuosity…

    “singular term”, “sense and reference”, for meaning in language…thanks again

  10. My theory of consciousness (TPTA – Temporal Perception & Temporal Action) predicts that you can’t have general intelligence without consciousness, since it says that consciousness is equivalent to fully general purpose causal modeling (reflecting on the past, acting in the present, planning for the future).

    Of course, AGI consciousness need not be anything like ours – you don’t need emotions, for instance. And that’s not necessarily a bad thing – you *don’t* want an AGI with human emotions like jealousy, anger etc.

  11. An AGI may not need jealousy or anger but I think there are other emotions that have more use: wonderment, fear, curiosity, competitiveness, etc. Of course, these emotions do intersect with other cognitive functions but that’s really my point. They provide motivations driven by perceived context.

  12. Innumerable are the kinds of cognitive capacity that our kind of consciousness enables or “allows” (contingently speaking). Or is it the other way around – that our kind of intelligence enables our kind of consciousness? Equally unedifying questions. The issues are orthogonal. No variety of exercise of human or artificial intelligence requires “felt” subjectivity as a principled pre-condition. Neither does a capacity for “felt” subjectivity require any threshold level of exercise of intelligence.

  13. It seems I agree with Mr Dennett for once, who thinks that the current “AI” news is nothing more than marketing nonsense.

    There have been no changes in the principles of “digital” block-structured programming and the von Neumann architecture for 50 years. Why all of a sudden computers should start to be ‘intelligent’ when they are epistemologically identical to the machines of the 1960s is anybody’s guess.

    Where Dennett is inconsistent is in his beliefs about agency. He’s right that current programs – simply value-added versions of the noughts and crosses players of the 60s – are never going to have agency, but as an alleged believer in the supreme sovereignty of physics he shouldn’t believe in agency at all, as physics – when espoused as a one-size-fits-all solution – and the idea of agency are totally incompatible.

    The current nonsense about AI is largely testament to the marketing power of Google and its choice-predicting software. Using algorithms over huge banks of choice data to predict more choices isn’t “intelligent” software. It’s just software. Written in Java, C++ and all the other tools of traditional “stupid” software.

    JBD

    “It seems I agree with Mr Dennett for once, who thinks that the current “AI” news is nothing more than marketing nonsense.”

    A lot of the current AI news is hype but you would be better following Gary Marcus on that subject than Dan Dennett. He’s written a lot on the subject lately and he is part of the AI community.

    “Why all of a sudden computers should start to be ‘intelligent’ when they are epistemogically completely identical to the machines of 1960s is anybody’s guess.”

    This involves a category error. It is certainly true that modern computers have similar architectures to those of the 1960s in many ways, but the fact that all digital computers are Turing-equivalent more or less guarantees it. In short, it is true but not that significant to our topic here. Software, on the other hand, has changed significantly. That’s where the real problem solving occurs. The current commercial AI hype is about so-called “deep learning” algorithms. These are powerful for some applications but, as Gary Marcus points out in detail, are not going to give us AGI (Artificial General Intelligence). For one thing, deep learning requires huge amounts of data to do its learning whereas humans often learn from as little as a single event. Humans just don’t work the same way. We assume that an AGI won’t either.

    What you seem to be hinting at here is that digital computers lack some sort of “secret sauce” making it impossible for them to implement an AGI. As far as I’m concerned, that is an unsupported belief. Just because we have not discovered the algorithms that will allow us to create an AGI, doesn’t at all mean we won’t ever. The pushback against Deep Learning is just saying that we need to work on algorithms other than DL ones for that to happen.

    When you start to talk about “agency”, you are getting into “free will” and “determinism” territory. My take on this is similar to that of Dennett and Sean Carroll. Everything may well be determined but we can’t use that to predict anything about our behavior, so it’s a moot point relative to our agency and free will. Both may technically be an illusion but no more so than a chair is an illusion because it is a collection of atoms. To say we lack agency also defies common sense. Do those who argue this expect us to simply stop thinking this way? To even argue it requires one to act as if one has agency. If it acts and smells like agency, it is agency.

  15. Is Mr. Dennett, like many of us, realizing time’s arrow is a Direction for ‘Human Intelligence’…
    …underlying universe, cosmos, nature, infinity, space, doubt…

    ‘Relative direction’…language about ourselves…

  16. Paul


    “In short, it is true but not that significant to our topic here.”

    I beg to differ. The hype suggests there has been a sea change in the computerization of labour in the past year or two, which is patent nonsense. The current phase of the computerization of labour is indistinguishable from the one that started in the late 70s/early 80s. There is no difference. Software methodology is always changing; it always has done and will continue to do so.


    “What you seem to be hinting at here is that digital computers lack some sort of “secret sauce” making it impossible for them to implement an AGI.”

    ..

    “As far as I’m concerned, that is an unsupported belief. ”

    If you’re asking me to disprove a negative, that is of course nonsense. But can I ask you to tell me what there is in a finite-state machine that contains or can deliver semantic? When you can, I’ll accept the possibility. In the meantime I’ll continue to believe that semantic is not achievable within a state machine, and that discussion of “intelligence” within a machine remains a huge sales pitch.


    “Everything may well be determined but we can’t use that to predict anything about our behavior, so it’s a moot point relative to our agency and free will.”

    I had to reread this .. !!

    “Everything may well be determined …” (step 1)

    “but we can’t use that to predict anything about our behavior” (step 2)

    Do I need to continue ?


    “Both may technically be an illusion but no more so than a chair is an illusion because it is a collection of atoms. ”

    The chair is not an illusion. The chair is a visual image linked to real semantic – the chair, a functional object in human consciousness. The belief that the object corresponding to the image consists of atoms has no bearing on the reality of the image. “Illusions” don’t exist in nature – that’s one of Dennett’s “illusions”. Illusions only exist in theories.


    “To say we lack agency also defies common sense.”

    I’d agree, but physics says otherwise – IF you believe that physics is all that matters (as Dennett does, making his views on free will possibly the most absurd opinion he has, and there is some stiff competition on that front).

    JBD

  17. JBD, you seem not to really be reading what I wrote. When something someone writes doesn’t make sense to you, your first reaction should be to re-read it and think about it carefully, not immediately dismiss it. I stand by what I said earlier.

    “Everything may well be determined but we can’t use that to predict anything about our behavior, so it’s a moot point relative to our agency and free will.”

    Somehow you failed to parse this properly but didn’t give me anything at all that might let me help you with that.

    “To say we lack agency also defies common sense.”

    What physics says about this has nothing to do with common sense. Ask the person in the street whether they have agency or free will. Virtually all will say “yes”. That is the very definition of “common sense”. It is not what physicists think. And regardless of what a scientist might say, everyone acts like they have agency and free will. This is well known. Surely you don’t dispute that?

    “Illusions only exist in theories.”

    I’m not sure what this means. However, you may be mistaken about what Dennett means by “illusion”. Sometimes the word is used to refer to something that doesn’t exist. That’s not what is meant here. It can also refer to something that is not what it seems. A chair is an illusion in this sense. While we assign the “chair” label to it, the sense in which it is a single object is contained solely in our brains. In reality it is a collection of atoms, which in turn are collections of more fundamental particles.

    Thinking about semantic perception this way, it is just our assignment of a label “chair” to a certain subset of light rays coming into our eyes. We find that particular labelling useful in our thought processes.

    “In the meantime I’ll continue to believe that semantic is not achievable within a state machine, and that discussion of “intelligence” within a machine remains a huge sales pitch.”

    Computers apply semantic processing all the time. Using a camera as an input device, software processes the incoming data and, given a suitable image identification algorithm, can assign the semantic label “chair” to it. It might print out, “I see a chair.” While this is not as sophisticated as human semantic processing, it seems that it only really differs in degree. Humans are clearly better at doing semantic assignment. They are also better at making conclusions from such semantic assignments over time. Computers can do bits of this but we have not yet put it all together. I’m sure there are things we still need to discover but I am not so defeatist as to think we won’t ever discover these things, or that they are something other than algorithms implemented in the brain’s wetware that can be approximated and simulated on a digital computer. We don’t yet know how powerful a computer we’ll need, but so what?
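
    A minimal sketch of that pipeline, with the image-identification model stubbed out, since any trained classifier could stand in for it:

        def classify(pixels):
            # A real system would run a trained model over the pixel array;
            # this stub returns a fixed label purely for illustration.
            return "chair"

        def describe(pixels):
            label = classify(pixels)       # assign the semantic label...
            return f"I see a {label}."     # ...and report it in language

        print(describe([[0.2, 0.5], [0.7, 0.1]]))   # -> I see a chair.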

  18. Philosophy of skepticism: ‘thought is illusion’…

    Philosophy of Kant: ‘thought is noumenon and phenomenon’…

    Metaphysics of presence: “present to thought”…

  19. I’d reckon that if it’s a job you’d give a slave, it’s either not worth doing or you should do it yourself. If it’s conscious, it’s wrong to enslave it.

  20. Paul


    “Everything may well be determined”

    Ok – the primary assertion of a certain traditional reading of mathematical physics. Understood.


    “but we can’t use that to predict anything about our behavior”

    so either “Everything” isn’t everything, or behaviour doesn’t fall within the meaning of “everything”.

    “determined” in the mathematical physics sense means that we can predict everything – to within a very small degree of inaccuracy. Otherwise it’s not determined; something else is at play. So if everything is determined, and behaviour is part of everything, then behaviour is determined. You can’t have your cake and eat it, Dennett-style. He wants to be a liberal – he wants morals and agency – but doesn’t like the idea that it’s not determined, due to his completely irrational belief that undetermined behaviour necessitates the existence of God: an overdramatic, silly and very unscientific response.


    “What physics says about this has nothing to do with common sense.”

    I agree


    “Ask the person in the street whether they have agency or free will. Virtually all will say ‘yes’.”

    I agree. Only in cognitive science departments, lined with books by Dennett, are free will and consciousness rejected out of hand. It takes a lot of education to believe the most monstrous propaganda – look at the Catholic Church.


    “Sometimes the word is used to refer to something that doesn’t exist.”

    Such as? If you mean a mirage, there is no illusion: you see what you see. It’s your cognitive apparatus, not your sensory apparatus, that deludes you into thinking it’s water. Dennett claims that there are such things as inherent illusions, which is unheard of in nature.


    “In reality it is a collection of atoms, which in turn are collections of more fundamental particles.”

    In reality an image of a chair is an image of a chair. There is a theory of atomic structure according to which the chair consists of atoms. There is no reality clash.

    When you see a chair you don’t see a collection of atoms in any case – you see light absorbed and reflected in the particular way that the collection of atoms generates. Atoms are not visible in any sense – only the light they interact with. So to talk of a ‘deeper reality’ of the chair beyond the image makes no scientific sense, let alone the dramatic conclusion that such a sight is an “illusion”. What else would a chair be meant to look like to the human visual system?

    “Computers apply semantic processing all the time. Using a camera as an input device, software processes the incoming data and, given a suitable image identification algorithm, can assign the semantic label “chair” to it”

    No it doesn’t. It maps an image to a presupplied image dictionary that a human being decided should be labelled “chair”.


    “it seems that it only really differs in degree.”

    Yes – computers process semantic 0%, humans 100%. A difference in degree – of 100%.


    I’m sure there are things we still need to discover but I am not so defeatist as to think we won’t ever discover these things or that they are something other than just algorithms implemented in the brain’s wetware and they can be approximated, simulated on a digital computer.

    Why is it “defeatist” to conclude that a model of mentality that is totally incorrect should not waste everybody’s time? Isn’t that a victory for common sense?

    Why use such value-laden language? There is no evidence that the brain is a computer. Not a scrap. And by god, there has been enough money spent on trying to prove it.

    JBD

  21. JBD,

    ““determined” in the mathematical physics sense means that we can predict everything”

    Only in theory, not in practice. First, we don’t have the ability to read the state of anything at the level at which determinism operates. Second, even if we could read the initial state, we don’t have the computing power to apply the rules of physics to such a state in any situation applicable to human behavior. Third, even if we had the computing power, it would not give an answer fast enough to beat the system being predicted. My point is that our inability to predict pretty much anything at the scale of human behavior relegates determinism to the status of an interesting physics fact.
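
    A toy illustration of the second and third points: the logistic map below is a fixed, fully deterministic rule, yet a “measurement error” of one part in a billion in the initial state destroys any prediction within a few dozen steps (the parameter 3.9 is just a standard chaotic choice):

        def step(x):
            return 3.9 * x * (1 - x)     # fixed, fully deterministic rule

        a, b = 0.5, 0.5 + 1e-9           # initial states differing by 1e-9
        for n in range(60):
            a, b = step(a), step(b)
            if n % 10 == 9:
                print(f"step {n+1:2d}: |a-b| = {abs(a - b):.3f}")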

    “If you mean a mirage, there is no illusion, You see what you see.”

    My point is that while Dennett does call consciousness an illusion, he is not saying it doesn’t exist but that it is just not what it seems.

    “In reality an image of a chair is an image of a chair.”

    An image of a chair is only an image of a chair to the person doing the perceiving. The collection of light rays entering our eyes does not come with a “chair” label. Furthermore, the collection of light rays representing the chair is not separate from the collections of light rays from the table, the drapes, and so on. To use the computer metaphor, our eyes take in raw light rays, turn them into pixels, then process them further in order to identify the chair and apply the “chair” label. This is no different in kind from what a computer vision system does. The brain is simply better at it.

    “It maps an image to a presupplied image dictionary that a human being decided should be labelled “chair”.”

    Sure, that’s often the case but that’s just how we designed the system to work. In fact, there are computer systems that do object detection and then point to an unknown object, asking a human to identify it. The human responds by typing a label like “chair” and the program associates it with the formerly unknown object. This is similar to how a human child learns the names of things. The human child is just very much better at the task for lots of reasons, none of which is because it isn’t a computer.
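
    Reduced to its bare bones, that teaching loop is just this (the names are mine, purely for illustration):

        known_labels = {}

        def identify(signature):
            # Ask a human about any object signature we haven't seen before,
            # then remember the answer for next time.
            if signature not in known_labels:
                known_labels[signature] = input(f"What is this ({signature})? ")
            return known_labels[signature]

        print(identify("obj-17"))   # first sighting: asks the human
        print(identify("obj-17"))   # second sighting: answers from memory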

    “There is no evidence that the brain is a computer.”

    There is lots of evidence that the brain is a computer implemented as billions of neurons. If the brain is not a computer, then what is it? It clearly processes information obtained through senses. It has memory and output devices. I still don’t see anything in what you have written that even hints at “not a computer”.

  22. Paul


    “Only in theory, not in practice.”

    That’s evidently totally untrue, or there wouldn’t be satellites roaming the surface of Mars, a chemical industry, an electronics industry, GPS, aeroplanes, cars, the nuclear electrical industry, nuclear weapons – the list is endless. I think you’re confusing accuracy, chaos theory and quantum mechanics with the basic deterministic principles of physics. Chaos theory is really about the accuracy of predicting multi-component systems using mathematics. It doesn’t alter the fact that each component of the atmosphere is identically determined by the same set of physical laws. Whatever one thinks, there is no role for anything other than deterministic physical laws in the development of a weather system. Nobody’s saying the weather makes choices.

    If you’re a physicalist – like Dennett – then you may assert (without a scrap of evidence, incidentally) that a brain is a chaotic system, but that doesn’t alter the fact that it’s a totally physically determined system. There is no role for free will or the making of choices or the interruption of the continuous flow of mathematical metrics by arbitrary choice mechanics. It’s the same if you stick the magic ‘quantum’ in there.


    “My point is that while Dennett does call consciousness an illusion, he is not saying it doesn’t exist but that it is just not what it seems.”

    …which is total nonsense, of course. It’s a claim of inherent illusion, unheard of in nature. Consciousness is irreducible and doesn’t ‘seem’ like anything other than consciousness. What else is it comparable to? Neither does ‘time’ seem like anything else, nor spatial extension. I wonder why these aren’t illusions too?


    “The collection of light rays entering our eyes does not come with a ‘chair’ label.”

    The passive consumption model of visual data is being increasingly undermined every day. You’ll probably find that the brain imposes a ‘chair’ label on the sense data it picks up before creating its final image. The visual system is no longer viewed as a simple passive processor of input – on the contrary, it’s a manufacturer of images. There is a mass of inbuilt stuff the brain just does.


    “Sure, that’s often the case but that’s just how we designed the system to work.”

    How else can a computer work ?


    “There is lots of evidence that the brain is a computer implemented as billions of neurons. If the brain is not a computer, then what is it? It clearly processes information obtained through senses. It has memory and output devices. I still don’t see anything in what you have written that even hints at ‘not a computer’.”

    This paragraph sums up the problem! You repeat it as dogma without ANY evidence that a brain works as a computer. You almost repeat it as ‘self-evident’. That’s what happens when a lot of people repeat the same claim time after time: it becomes a fact by virtue of repetition.

    i) we know how computers work – they are defined. There is nothing left to discover about what they can do or what they are capable of. They are not objects of scientific interest.

    ii) we know next to nothing about brains, beyond some details of their material structure. How they process information beyond primitive analysis of the visual system no one knows. Nobody. The notion that there are specific structures allocated to functions – as in a computer – is not established. There are vague links between emotions and regions, but nothing too precise.

    iii) i) & ii) are sufficient to ditch the claim that brains can be assumed to be computers

    iv) … however, emotions, feelings and colours have absolute, non-relative aspectual shape. Green is not “like” blue. Wanting to urinate is not “a bit like” wanting to eat a hamburger. Non-relative aspectual shape is not possible in a syntactical-only system like a computer. Green is not a number, it’s a quality. Qualities and digital computers are incompatible. It therefore is clear that a brain – or at least its most interesting bits – cannot be a computer. It may be part computer, but the generation of irreducible mental phenomena and non-relative aspectual shape means that the bit everybody’s interested in just can’t be. Remember – computers are not objects of scientific interest. Nothing about their capabilities is “waiting to be discovered”.

    JBD

  24. I disagree entirely with pretty much everything you’ve written here. I am not going to respond to most of it. I think you’ve jumped the shark with your final thoughts:

    “Remember – computers are not objects of scientific interest. Nothing about their capabilities is “waiting to be discovered””

    This is based on some naive view of what computers are all about. If true, this would be sad news to AI researchers and computer scientists all over the world. You have a huge amount of hubris for trashing entire scientific fields with just your gut feel. You obviously do not understand computers and programming one bit. It would be a waste of my time to continue this conversation. Good luck.

  25. What about Turing Machines is waiting to be discovered?

    I think that was J. Davey’s point. We, AFAICTell, know enough about them to note there is nothing there that could give us qualia or even about-ness.

  26. Also, it seems pretty clear Dennett’s ideas of wedding free will and determinism make no sense at all? Compatibilism only seems to make sense if you stack a lot of words on top of each other; the shorthand version always reveals the hollowness.

    Better to pursue some idea of Causation as Disposition in the vein of Mumford and Anjum, a vein of immaterialism, and quantum brain ideas, if one seeks to preserve free will.

  27. “What about Turing machines is waiting to be discovered?”

    The way this question is phrased indicates the problem. A Turing machine is a very simple mechanism. I don’t know if there’s anything left to discover about them per se but I will agree that we do understand them pretty well, or at least we think we do. However, a Turing machine can run any computable algorithm which is a huge, largely unexplored space. The question that is relevant to our discussion here is whether there’s anything left to be discovered about algorithms that can run on Turing machines. In other words, computer programs. Sure, we’ve written lots of computer programs but, collectively, all the programs ever written are still just a microscopic subset of all possible programs.

    This really isn’t about Turing machines at all. A Turing machine is an abstract concept invented to study computing. It is really irrelevant to how the brain works, or even to how modern, real-life computers work. Computer designers never consider Turing machines except as an academic exercise, or when they are interested in the fundamentals of computing.
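
    To make “a very simple mechanism” concrete, here is a complete (if minimal) Turing machine simulator. The three-rule program below merely flips every bit on the tape and halts, but the same dozen lines will run any machine you care to write rules for:

        from collections import defaultdict

        def run(rules, tape, state="start"):
            cells = defaultdict(lambda: "_", enumerate(tape))  # "_" = blank
            head = 0
            while state != "halt":
                write, move, state = rules[(state, cells[head])]
                cells[head] = write
                head += 1 if move == "R" else -1
            return "".join(cells[i] for i in sorted(cells))

        rules = {
            ("start", "0"): ("1", "R", "start"),   # flip 0 -> 1, move right
            ("start", "1"): ("0", "R", "start"),   # flip 1 -> 0, move right
            ("start", "_"): ("_", "R", "halt"),    # blank: stop
        }
        print(run(rules, "10110"))   # -> 01001_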

  28. What is it about a program – expressible as a string of 0’s and 1’s, if I recall my discrete math class correctly – that you think would make it conscious?

    It seems to me the Turing Machine is at the heart of it? We project meaning onto the Turing Machine, which is why you could have a flock of pigeons instantiate one.

    This is not to make an argument for souls, as from a metaphysically neutral position we can get deeper into the brain’s structures and recreate them in silicon or some other material. It’s just not clear what role a program would play there outside of helping us determine/reproduce the fundamental structures.

    As Bakker asked on this blog, “What is it about consciousness that would make it more Frogger than Frog?”

    Paul


    “This is based on some naive view of what computers are all about. ”

    I think you’ll find your view is naive, not to say “imaginative”

    This is what a computer is :- https://en.wikipedia.org/wiki/Von_Neumann_architecture

    If you’re not on about one of those, you’re living in a world of fantasy.


    “You have a huge amount of hubris for trashing entire scientific fields with just your gut feel.”

    It’s not gut feel – it’s because ..


    “You obviously do not understand computers and programming one bit. It would be a waste of my time to continue this conversation.”

    … I’ve worked with them for 35 years of my life. 35 years. Unix, Linux, Python – I started on 8086 assembler. I’ve written compilers, device drivers, the lot. You?

    And in all those 35 years, computers have never changed. Changes in software production techniques don’t change the reality that all a computer does is execute microinstruction sets. Don’t confuse the source for a program with the program itself – which is always just a series of basic, stone-age-simple computational primitives. It’s never changed.

    I’ve also got a degree in Physics from a “prestigious” university for what it counts. But I’ve never really cared about that.


    “This really isn’t about Turing machines at all. A Turing machine is an abstract concept invented to study computing. It is really irrelevant to how the brain works, or even to how modern, real-life computers work. Computer designers never consider Turing machines except as an academic exercise, or when they are interested in the fundamentals of computing.”

    Paul – you need to do some reading on the nature of computation. Frankly, what you said here is olympic-stature nonsense.

    JBD

  30. Sci – What is it about a collection of neurons, that fire or don’t fire depending on whether they received inputs from other neurons, that you think would make them conscious?

    Until you can answer that question you really can’t exclude the possibility of software running on a Turing machine being conscious.
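
    To underline the point: “fire or don’t fire depending on inputs” describes a classic McCulloch-Pitts threshold unit, which is trivially software (the weights and threshold below are arbitrary):

        def fires(inputs, weights, threshold):
            # Fire iff the weighted sum of inputs reaches the threshold.
            return sum(i * w for i, w in zip(inputs, weights)) >= threshold

        # Two excitatory inputs and one inhibitory input:
        print(fires([1, 1, 0], weights=[0.6, 0.6, -1.0], threshold=1.0))  # True
        print(fires([1, 1, 1], weights=[0.6, 0.6, -1.0], threshold=1.0))  # False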

  31. Stephen


    “What is it about a collection of neurons, that fire or don’t fire depending on whether they received inputs from other neurons, that you think would make them conscious?”

    What is your evidence that neuron firings cause consciousness?


    “Until you can answer that question you really can’t exclude the possibility of software running on a Turing machine being conscious.”

    That’s not actually true. Sci has no idea how neurons work, and neither do you, nor anybody else. He does though know – in entirety – what a computer is capable of. The latter point enables him to dispose of the suggestion that computers could cause consciousness. It’s a common fallacy to suggest that simply because you don’t know how something works, any scheme is an appropriate suggestion, no matter how bizarre.

    It’s like suggesting that if I don’t know whether there is a small hamster on the surface of Jupiter called Nigel, who eats cheese, that ignorance enables me to assume there is a hamster on the surface of Jupiter who eats cheese. What matters is the evidence – what is a justified belief based upon the evidence?

    JBD

    “What is it about a program – expressible as a string of 0’s and 1’s, if I recall my discrete math class correctly – that you think would make it conscious?”

    What is it about quarks and energy that you think would make the universe? A human being? This is a vacuous argument. You are looking for some kind of magic sauce that is consciousness. Consciousness is clearly made out of quarks and energy. (If you disagree with that, then we can stop right here.) However, that knowledge really tells us very little about life as we know it. We also can represent the Mona Lisa as a digital image or “Bohemian Rhapsody” as a string of zeroes and ones as on a CD. What we are dealing with here are multiple levels of description. We know the levels are connected and that one level emerges from the other but the interesting part is exactly how one emerges from the other. All knowledge is perhaps contained in descriptions (most yet to be discovered) of how one level comes from another.

    To answer your rhetorical question another way: the question is how consciousness can come from a process involving, at its bottom level as far as we know, quarks and energy. The zeros and ones are merely how we describe those processes as computable data and/or algorithms.

    One open question is whether there are non-digital “algorithms”. These would not use ones and zeros but perhaps real numbers of infinite precision. We call these analog computers. The jury is out on this. We don’t know if real numbers of infinite precision even exist except in our imaginations. We certainly don’t know if the universe is analog or digital. If you google this question, you will see that it is a mystery that greater minds than ours have contemplated for a long time. Even if the universe is analog, we don’t know if analog algorithms are more powerful than digital ones.

    If the brain is a collection of algorithms implemented as neurons and such, we have yet to discover them, but proving that the brain is or is not a collection of algorithms is a different question. It has been pointed out that, at least in theory, we can simulate the human brain at any desired level of precision and that, presumably, the simulation would be conscious.

    “Paul – you need to do some reading on the nature of computation. Frankly, what you said here is olympic-stature nonsense.”

    So what would you suggest I read? I am fairly well-versed in the nature of computation. See if you can show me the errors in my thinking. Just to keep us focused here, I want to remind you that I am not claiming I know the algorithms that make up the brain, just that we need to keep working on it. My claim is that there’s no proof that the brain is not just a collection of algorithms. You seem to be claiming that the brain just can’t be a collection of algorithms.

    You’ve got literally thousands, perhaps millions, of AI researchers looking for algorithms that explain how the brain works. If you are going to claim that you have proof that their efforts are a complete waste of time, then you need to put it down on paper in much greater detail than you have here. If you are successful, you’ll be famous.

  33. “Computers have never changed” in your years of working with them. Programs have though, right? We have software now that can automate the driving of a car, for example. We didn’t have that when you and I started working with computers. Your focus on computers rather than programs mystifies me. Computers (and Turing machines) are relatively simple mechanisms but the programs that run on them are something else entirely.

    By the way, the focus on Turing machines has some other problems that should not go unmentioned:

    1. Turing machines have infinite memory. Real computers do not. This is a limitation on computers and programs relative to Turing machines.

    2. Real computers and programs have input and output that allows them to interact with the rest of the universe. Turing machines do not. This is a big limitation on Turing machines relative to computers and programs and is very relevant to our discussion. Some AI researchers believe that programs’ interaction with the world around them is a key component of AI. Obviously programs meant to model brain processes must also model sensory processes.

  34. “He does though know – in entirety – what a computer is capable of.”

    Unless he knows all programs that have been written or will be written, then he does NOT know what a computer is capable of. This is the flaw in your argument. You somehow think that knowing how computers work means that you know how all programs work. That is clearly not the case or there would be no need for programming to be a profession.

  35. John:

    “What is your evidence that neuron firings cause consciousness?”

    Well a couple of pieces of evidence are:
    – if neural firings stop, so does consciousness
    – or, coming from a different direction, it has been shown that if a person thinks about a specific activity, it can be determined that this is occurring by looking at neural activity

    There are lots more. If you believe that it is some other unknown thing causing consciousness it is a bit like your Nigel example.

    Now, your Nigel example is not really appropriate here. We can note the similarities between the brain and a computer and both observe and surmise that they might be able to do similar things. That isn’t a bizarre suggestion. However, we know for many reasons that there is no cheese-eating hamster on Jupiter.

    It is also clear that no one knows everything a computer can do as the capabilities of software driven computers keep increasing. For example, what is the effect of introducing the ability for AI computers to build causal models to assist in predicting outcomes? Maybe someone has already done that, but a few years ago we would not even have asked the question.

  36. @ Stephen: By that argument I also cannot rule out that my bouncing a rubber ball instantiates consciousness.

  37. I don’t think it really matters whether consciousness is quarks (or whatever goes on at the lowest scale) though I actually don’t see this mattering much. Admittedly I may just be off in my intuitions ->

    We can simulate all sorts of aspects of the universe, but it seems to me the simulation is markedly different from the actual thing, unless one goes for something like the Matrix being real.

    After all, when a thrown ball is struck by a batter’s swing, the causal power is not based on the registration of hit boxes and then a change in the variables for the next slice of the game loop.

    In reality the causal power is in the bat and the causal receptivity is in the ball. In fact I’d say, given that consciousness is how we find interest-relative cause-effect relations, these real causal relationships may be a good reason why we have consciousness (Intentionality/Subjectivity/Rationality) and a clue as to what consciousness even is.

  38. Ah to be able to edit late night postings…sigh. That first part should be,

    “We can assume for the sake of argument that consciousness is quarks (or whatever goes on at the lowest scale) though I actually don’t see this mattering much.”

    To expand on the last bit: consciousness gives us a carving up of the world into objects and processes that a universe with no conscious entities lacks. As such I suspect the exact cause-effect relations matter in its instantiation, and that (AFAICTell) clearly marks a brain as being different from what we usually think of as a computer.

  39. Paul


    “My claim is that there’s no proof that the brain is not just a collection of algorithms.”

    This is an old fallacy. Failure to disprove a negative is no evidence of the positive. See previous comments about the hamster on the surface of Jupiter.


    “What is it about quarks and energy that you think would make the universe? A human being? This is a vacuous argument. You are looking for some kind of magic sauce that is consciousness.”

    Quarks and energy are physical entities. They are real. 0’s and 1’s are not. Quarks and energy are governed by the laws of physics and exist in the ontology of the phenomenal. 0’s and 1’s are cultural entities. They are no more capable of causing a natural phenomenon (like consciousness) than a noughts and crosses game is of spontaneously exploding.


    “Your focus on computers rather than programs mystifies me.”

    I just laughed when I read this. A program – I’ve already pointed this out – is a series of (very primitive) microinstructions that are defined by the chip manufacturer. That’s never changed. You are confusing the ambition of the source (a cultural goal, such as driving a car) with the way programs work. Programs haven’t changed one bit in 50 years. But the whole point is that you can’t have a program without a computer, so the point you made is totally baffling.


    “This is a big limitation on Turing machines relative to computers and programs and is very relevant to our discussion.”

    Thanks for pointing that out Paul. I’ve only known that since 1985.

    You seriously think that neither I nor Sci knows what a Turing machine is? A Turing machine is a conceptual model upon which all computers are based. To simplify things in conceptual discussions like this: if a Turing machine isn’t capable of it, then neither is a computer. It’s a basis for principled discussion, which you don’t seem to understand.


    “Unless he knows all programs that have been written or will be written, then he does NOT know what a computer is capable of.”

    You really need to do some reading, Paul. Start at the beginning: https://en.wikipedia.org/wiki/Theory_of_computation

    JBD

  40. Stephen


    “Well a couple of pieces of evidence are:
    – if neural firings stop, so does consciousness”

    Is that true? Does a person in a deep sleep or under anaesthesia have no neural firings? I think that’s false.


    “– or, coming from a different direction, it has been shown that if a person thinks about a specific activity, it can be determined that this is occurring by looking at neural activity”

    That’s undoubted concurrence, but hardly a basis for establishing cause.


    “That isn’t a bizarre suggestion.”

    I think it’s worse than bizarre – I think it’s plain silly IF you are prepared to do the work of looking at the conceptual frameworks involved, which alas most people aren’t. Computers are machines which produce arbitrary physical deliverables (voltage levels on a chip) which are mapped to 0’s and 1’s by cultural, sentient beings, but – here is the bit that confuses people – so indigestible is a computer’s output that you actually need more physical machinery to make sense of it (screens, I/O, character translation etc.). But people just see what’s on the screen and think it’s speaking from the heart.

    Computers have no phenomenal existence and no phenomenal causal powers, which helps explain why absolute aspect and shape – the distinguishing feature of mentality (like the quality of the colour blue, or the specific feeling of needing to urinate) – cannot arise from computational activity. Absolute aspect and shape can arise from the powers of nature, however, as they are not limited by the rules of computation.

    It is also clear that no one knows everything a computer can do as the capabilities of software driven computers keep increasing.

    They can fulfil an unlimited number of cultural functions, but have no natural causal powers whatsoever.

    JBD

  41. So your secret sauce has a name: “phenomenal causal powers”. People have it but computers + programs don’t, right? How do we test something for phenomenal causal powers? My guess is that this is some modern philosophers’ new name for what used to be called a soul. Although I am not sure what it refers to, I am not a believer. I just don’t see that we have any evidence for some special property that is present in humans but impossible to achieve in machines. I think we’re at the end of this discussion.

  42. Actually I think the program, as an informational abstraction, is far more akin to the idea of a soul than is the simple observation that interest-relative consciousness, and thus causal power, is realized by evolved organisms from the microscopic to the human.

    Whether one is a materialist or an idealist or something else doesn’t matter for the rejection of computationalism. That rejection can be made simply by observing biological forms and realizing computers are intellectual abstractions (Turing Machines) that depend on our minds to have any Intentionality.

    And again, no one said machines cannot achieve it – just not Turing Machines.

  43. Ok. Why not Turing Machines? If you are referring to their lack of input and output, that’s a triviality. Any other objection is going to run into the fact that all computers are equivalent to a Turing Machine in their ability to implement computational algorithms. Besides, I think some here are saying that machines can’t achieve it, whatever “it” is, including Turing Machines.

    “… computers are intellectual abstractions (Turing Machines) that depend on our minds to have any Intentionality.”

    The computer I’m writing this on is very real and not (only) an intellectual abstraction. They can have intentionality. What about those 737s that crashed? They have a subsystem that makes judgements on receiving certain input and causes the plane to dive. The fact that the input was in error doesn’t matter. Humans can also have intentions based on bad input. Perhaps some will say that it has no consciousness, but they can’t really define a test for it that doesn’t simply boil down to a test of humanity. Perhaps it isn’t complicated enough to have consciousness, but surely such systems will grow and grow in complexity as time goes on. Where would you like to draw the line? You might say that even if it has consciousness someday, it won’t be like human consciousness. Of course it won’t. It’s a different species. Again we come back to definitions based on a simple test. If it’s human, it’s got the magic ingredient. If it isn’t human, it doesn’t. That’s a worthless, unscientific test.

  44. Sci writes “computers are intellectual abstractions (Turing Machines) that depend on our minds to have any Intentionality”.

    Consider (again) the language of bees. Does it have (teleo-)semantics and reference, is there intentionality? The grammar and vocabulary are genetically hard-wired. Does bee language present evidence of anything more than what a Turing machine could do?

  45. I’d say if bees have their own intentionality they’ve already crossed the chasm no Turing Machine can.

    But if someone made a synthetic bee, duplicated the necessary structures in a physical medium to preserve the causal relations, I would be happy to say it’s as conscious as the “real” thing.

    So too with humans.

  46. Sci, would you say a TM could process information with sufficient sophistication to act conscious? In other words, could one be a behavioral zombie sufficient to give an observer the impression that it’s conscious?

  47. Paul: “The computer I’m writing this on is very real and not (only) an intellectual abstraction. They can have intentionality.”

    No, they have derived intentionality, in the same way a book or an abacus does.

    “Again we come back to definitions based on a simple test. If it’s human, it’s got the magic ingredient. If it isn’t human, it doesn’t.”

    As per above, happy to assume synthetic bees and humans have consciousness if these “androids” preserve any of the evidence-based structures for consciousness we’ve found in nature.

    If Computationalism is saying a particular program, when run, can make a non-conscious Turing Machine magically become a conscious entity like the fairy did for Pinocchio it seems like (bad) Platonism/Dualism to me. But apologies if I’ve misunderstood.

  48. Is there a test for this “derived intentionality”? You may find that I ask about tests quite a bit. That’s because I’m a firm believer that if something is untestable, it isn’t scientific. I also notice that you never answer any of my “test” questions. That is significant.

    I get the feeling that you think that if man created X then X can’t be conscious. It still sounds like “secret sauce” to me. What if we created a machine that simulated a human down to whatever level is necessary? Could that be conscious? Do you believe that, in principle, we could create such a machine? If not, then where’s the magic that we can’t put into the machine? What level of technology would make this possible?

    On the other hand, perhaps you believe we can create a conscious machine but don’t yet have the technology to do so. What bit of technology is needed to cross the divide between non-conscious machine to conscious machine? What is its functionality?

  49. Derived intentionality is simply the observation that books, computers, and abacuses have meaning because of linguistic agreements between minds.

    But perhaps we can find some common ground on this test question. Is there a test to show it isn’t the Lord keeping the laws of physics from changing? Or a test to falsify Idealism?

    I don’t find the test questions to be very convincing arguments for Computationalism, because it seems to me they lead to defenses of Panpsychism, IIT, Dualism, etc.

    “I get the feeling that you think that if man created X then X can’t be conscious.”

    Even though I’ve stated the opposite many times? I’m always bemused when Computationalists accuse others of not being materialist enough, as the idea of a Holy Grail program that can make a Turing Machine conscious seems very much like “secret sauce” to me.

    At least no one has suggested we can upload our minds in this conversation.

    “What bit of technology is needed to cross the divide between non-conscious machine to conscious machine? What is its functionality?”

    That is an as-yet-to-be-determined empirical matter, but it will involve finding the relevant level of structure within the brain (and possibly other parts of the body) to be reproduced.

  50. Perhaps I confused “Sci” with “john davey”. Sorry about that.

    Sci, please tell me more about this “derived intentionality” and the role it plays in this discussion. If you believe that computers may one day have consciousness then will that also be when its intentionality is no longer derived? Is it lack of consciousness that makes it impossible for computers to have true, non-derived intentionality?

  51. Intentionality, schmentitality. There are conscious states absent of “content”, pace Brentano, just as there are contentful mental states absent awareness. People need to read more cutting-edge neuro-psychology (no personal experimentation please!).

  52. John re #42:

    “– if neural firings stop, so does consciousness
    Is that true? Does a person in deep sleep or under anaesthesia have no neural firings? I think that’s false.”

    I think you need to read what I said. Neural activity concerns itself with more than consciousness, so of course neurons are firing when a person is unconscious. I can think of no example where there are no neural firings and a person is still conscious. To claim otherwise would be – to use your terminology – silly.

    “– or coming from a different direction, it has been shown that if a person thinks about a specific activity then it can be determined that this is occurring by looking at neural activity
    That’s undoubted concurrence, but that’s hardly a basis for establishing cause.”

    I can explain the causation in more detail by describing an experiment. First you ask a person to think of an activity such as playing tennis and examine the neural activity using fMRI. You then ask them to think of another activity such as doing pushups and examine their neural activity. Do this multiple times to establish that you know when they are thinking of tennis and when they are thinking of pushups.
    Next you show them a picture of a circle and ask them to think of tennis if they think it is a circle and pushups if they think it is not. Ask them to think of tennis if they think it is a square and pushups if not. Continue with other shapes and varying patterns of questions. If the answers you read off the neural activity are consistently correct, then causation has been established.
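
    To make the logic explicit, here is a toy sketch of that protocol in Python (every name in it is invented for illustration, and the fMRI decoding step is reduced to a stand-in function with imperfect accuracy):

        import random

        def decode(imagined, accuracy=0.95):
            """Stand-in for fMRI decoding: recovers which activity the
            subject is imagining, with the imperfect accuracy a real
            calibrated classifier would have."""
            if random.random() < accuracy:
                return imagined
            return "pushups" if imagined == "tennis" else "tennis"

        def subject_answer(shape, asked_label):
            """The subject's private judgement, signalled via imagery:
            imagine tennis for 'yes', pushups for 'no'."""
            return "tennis" if shape == asked_label else "pushups"

        shapes = ["circle", "square", "triangle"]
        correct = 0
        for _ in range(20):
            shape = random.choice(shapes)   # what the subject is shown
            asked = random.choice(shapes)   # what they are asked about
            decoded = decode(subject_answer(shape, asked))
            expected = "tennis" if shape == asked else "pushups"
            correct += (decoded == expected)
        print(f"{correct}/20 decoded answers correct")

    Chance agreement would hover around 50%; sustained near-perfect agreement across arbitrary questions is what rules out mere concurrence.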

    It is interesting that your argument “Computers are machines which produce arbitrary physical deliverables” could be applied to brains just as easily. Brains, however, do create consciousness. Why not computers? Well, we don’t know that yet.

  53. I’ll withdraw the second argument. It is conceivable that thought woo causes neurons to fire.

  54. I’m inclined to think that consciousness is where questions are for the individual. It’s why very bigoted people can seem kind of asleep/insensate or zombie-like: they face no questions in their dogmatic attitude.

    By that measure an adaptive machine will face things it doesn’t understand and so will face questions/have a consciousness of a sort. But it is unlikely to face the sort of questions that humans regularly face, so it will have a different form of consciousness. I’m assuming we’re not treating it as though there were only one type of consciousness?

  55. Paul


    So your secret sauce has a name: “phenomenal causal powers”. People have it but computers + programs don’t, right?

    It’s not secret, is it? It’s literally the evidence in front of your eyes.

    Computers and programs don’t possess phenomenal causal powers, no, because they are not phenomenal entities. They are literate constructs: cultural entities. Computers and programs are totally understood and of no scientific interest at all, like all engineering products.

    Brains, however, are completely unknown – objects of complete mystery.


    My guess is that this is some modern philosophers’ new name for what used to be called a soul.

    In which case your guess would be wrong.


    Although I am not sure what it refers to, I am not a believer.

    Neither am I. Although you must believe in some kind of magic, if you assume that an abstract entity like the number ‘1’ can explode. Or get wet. Or think.


    I just don’t see that we have any evidence for some special property that is present in humans but impossible to achieve in machines.

    I agree. But a machine that causes consciousness – as the brain is – cannot be a computer. You are just dogmatized to the point of blindness by the belief that it must be.

    JBD

  56. Stephen


    “It is interesting that your argument “Computers are machines which produce arbitrary physical deliverables” could be applied to brains just as easily.”

    Are you saying that brains are not phenomenal objects associated with animals? That I can make one from sellotape and old bits of string, as I could with a computer?


    Brains, however, do create consciousness. Why not computers?

    Why not toasters? Or television sets? Or teapots? Or anything else that we have precisely zero reason to think is conscious?

    JBD

  57. John Davey. Unless you describe some kind of test we can apply to an object (man, computer, whatever) that tells us whether said object has “phenomenal causal powers” or is a “phenomenal entity” – a test that isn’t equivalent to asking whether the object is a human – you have nothing. It should come as no surprise to you that I find “It’s literally the evidence in front of your eyes” not at all persuasive.

  58. Tbh I don’t get the compelling nature of the “there is no test to disprove program consciousness” argument… How do we know the video game characters being mown down by players right now don’t have consciousness? Is this the civil rights movement of our time?

    Or perhaps the program running this very blog is screaming out in agony as I type this comment?

    This, I would say, is the main issue with Computationalism: given the assumption that there’s no symbol manipulation at the atomic level, it seems to me Computationalism is an argument for Panpsychism and/or Platonism, yet isolates itself to the desired Holy Grail program that will make a Turing Machine conscious. Even then I assume we aren’t talking about various Rube Goldberg constructions that could serve as Turing Machines, just particular kinds of instantiations…

  59. “Given the assumption there’s no symbol manipulation at the atomic level”
    “make a Turing Machine conscious”

    These phrases aren’t really in the vocabulary computer scientists would use in this context.

    Symbol manipulation is not some fundamental kind of process but just a description applied to certain computer programming techniques. Perhaps the confusion comes from the fact that very early AI researchers seemed to think that human-level performance could be achieved using ONLY symbol manipulation as the operating principle.

    The focus on Turing Machines seems misguided. A Turing Machine is not a practical construct that anyone would choose to build but an abstraction used by mathematicians and computer scientists in fundamental research into the nature of computation. No one in their right mind would attempt to construct a real AI based on a Turing Machine. From a theoretical point of view, any program can be executed on a Turing Machine, though for the practical programs we run every day, or for a human-level AI program, the Turing Machine would remain purely abstract and not something one would create in real life.

  60. @ Paul: If you’re taking the broad-based view that we can, in principle, create a synthetic conscious entity we’re in agreement. Admittedly I’ve no idea exactly what this would entail, but AFAICTell neither does anyone else…

    What I, and I believe J.Davey, contend is that you can’t have an entity that is just a “bucket of bolts” that becomes conscious upon said bucket reading a bit-string a certain way.

  61. John @58

    You seem to suffer from what I think of as the “It’s only a machine” fallacy. People look around, see that people make machines and that none of them are sentient, and conclude that machines made by people are all pretty much the same and cannot be sentient.

    If you take a machine like a computer or a person and disrupt its organization, say by shooting a bullet into the CPU or brain, it will most likely cease to function. The ability to function doesn’t reside in its basic material; it is how the material is organized that matters.

    A computer is organized to look much more like a brain than a toaster is. A toaster has only a few states and cannot change its behaviour. Both computers and brains have a large number of states embodied by bi-stable interconnected elements and can change their states depending on previous states and new input.

    The function of an entity doesn’t depend on whether you call it a machine or not, what material it is made of, or who made it. All that matters is how the materials it is made of are organized. That is why there is the potential for humans to build a sentient machine, and doing it with some sort of computer cannot be ruled out.
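
    To make the “bi-stable interconnected elements” point concrete, here is a minimal sketch in Python (the wiring and update rule are invented for illustration) of a system whose next state depends on its previous state and on new input – the abstraction both descriptions fit:

        def step(state, inputs, connections):
            """Compute the next state of every bi-stable element.
            state:       tuple of 0/1 values, one per element
            inputs:      tuple of 0/1 external inputs, one per element
            connections: for each element, indices of elements feeding it
            """
            next_state = []
            for i, feeds in enumerate(connections):
                # An element switches on if a majority of its feeds,
                # plus its external input, are on.
                signal = sum(state[j] for j in feeds) + inputs[i]
                next_state.append(1 if signal > len(feeds) / 2 else 0)
            return tuple(next_state)

        # Three elements wired in a loop: 0 <- 2, 1 <- 0, 2 <- 1.
        connections = [(2,), (0,), (1,)]
        state = (1, 0, 0)
        for t in range(5):
            state = step(state, inputs=(0, 0, 0), connections=connections)
            print(t, state)

    The same trivial elements, wired differently, behave differently – organization, not material, is doing the work.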

  62. Sci, we are in agreement as you suggest. We obviously can’t prove that a machine can be created by man that is conscious. Only by actually creating one will we know that. But we believe it is possible and will strive for it until we see convincing proof that it can’t work. I’m not sure about “the bucket of bolts” concept.

    Stephen (#63). I agree with your assessment. Furthermore, let me add a bit more to your “it’s only a machine” fallacy. Those who succumb to this fallacy also seem to believe that if the whole has some capability then you can also find that same capability when you look at its parts. These are the same people who believe Searle’s Chinese Room shows something significant. They apply this kind of thinking to a computer and say something like, “Where’s the consciousness? All I see are many simple electronic components and lots of ones and zeros.” As you say, this is a fallacious way of thinking. If a car can transport us from A to B, which of its components also has that ability? It’s only when you have the right parts assembled correctly that the capability appears.

  63. Sci

    What makes you think brains aren’t just a “bucket of bolts” with a string of neurons firing a certain way?

  64. What is meant by this “bucket of bolts” concept? As opposed to what? If it refers to a conscious brain or computer having lots of parts, then it is fine. However, the bolts are not connected to each other, and brains and computers are both far more complex. This sounds similar to referring to an AI program as a Turing Machine. A Turing Machine is a very simple device. The machine part is deliberately simple. It was created to be the simplest possible computing mechanism that can still implement any computable function. It pushes all the complexity into the data you feed it.
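
    To show how deliberately simple the machine part is, here is a complete Turing Machine simulator in a few lines of Python (a sketch; the transition table shown, a unary incrementer, is just an arbitrary example program):

        def run_tm(table, tape, state="start", blank="_", max_steps=1000):
            """Run a Turing Machine: the machine itself is trivial; all
            the complexity lives in the table and tape you feed it."""
            cells = dict(enumerate(tape))  # sparse tape, unbounded both ways
            head = 0
            for _ in range(max_steps):
                if state == "halt":
                    break
                symbol = cells.get(head, blank)
                write, move, state = table[(state, symbol)]
                cells[head] = write
                head += {"L": -1, "R": 1}[move]
            return "".join(cells.get(i, blank)
                           for i in range(min(cells), max(cells) + 1))

        # Example table: append one '1' to a unary number (i.e. add 1).
        table = {
            ("start", "1"): ("1", "R", "start"),  # scan right over the 1s
            ("start", "_"): ("1", "R", "halt"),   # write a final 1, halt
        }
        print(run_tm(table, "111"))  # -> "1111"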

  65. By “bucket of bolts” I mean any machine we would agree is not conscious. I don’t see anyone demanding civil rights for video game NPCs, so I assume my PlayStation is not conscious even when running a particular game. I assume most of us don’t think a running MacBook Pro is conscious when started up.

  66. @ Paul – I think I agree with the gist of your post though I am not sure Turing Tests are the best example. I feel they might obscure the issue.

  67. Paul


    Unless you describe some kind of test we can apply to an object (man, computer, whatever) that tells us whether said object has “phenomenal causal powers” or is a “phenomenal entity” – a test that isn’t equivalent to asking whether the object is a human – you have nothing.

    Ok. Do you need a test to prove that a packet of tomato sauce does not originate in nature?

    You need to understand the difference between a man-made object – an object of engineering such as a car or a computer, whose parameters and capabilities are of necessity entirely encompassed by the design of the engineers – and an object in the universe identifiable only by certain parameters which we use to point it out (physical location, chemical composition, etc.), but whose properties we know nothing about.

    Computers are not objects of scientific interest. Find me one article where a computer professional sits with a test tube waiting for a computer to do something he’s not aware of it being capable of. It’s not going to happen, but I challenge you to find one.

    I really don’t think this is complicated unless you haven’t worked with computers or – in your case – don’t really know what computers are. You aren’t alone. There is an industry out there trying to make computers out to be far more complicated than they are. By design – not by material property – computers are fantastically simple objects. They also don’t physically exist. If you knew what computers were, you’d know that that statement made sense. Computers exist in the same way as books – as cultural artefacts with arbitrary physical instantiations. But (as the world of VMs should make perfectly clear) computers are not physical objects, and hence cannot have physical properties nor a capability to interact with the universe in a physical way. The arbitrary objects that are used to represent computational activity of course can do so.

    JBD

  68. stephen


    People look around, see that people make machines and that none of them are sentient, and conclude that machines made by people are all pretty much the same and cannot be sentient.

    Typical, predictable stuff from an “AI is brains” champion.

    I didn’t say that. Find where I did. I said computers couldn’t be sentient.

    I have no doubt that a sentient machine could be made – it just wouldn’t be a computer.

    JBD

  69. JBD. Computers produce answers that programmers didn’t expect all the time. Why write a program if you already know what it is going to output? I know that sounds trivial but it actually captures the essence of the argument. Somehow you have decided that a programmer who understands all about computers and all about the program they just wrote will also know what that program will do. Of course they know something about what that program will do but clearly they don’t know everything it will do.

    Let’s say I created a program that learns from its environment. In other words, it takes input and uses it to modify its internal state. Furthermore, it produces output that depends on its internal state. It follows directly that the output depends on the input it receives. If the input is open-ended (i.e. I keep the program running for an unspecified time, and the input is not given ahead of time but extracted from the environment), then the output can’t be predicted without actually letting the program run.
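
    A minimal sketch of the kind of program I mean, in Python (the update rule is a deliberately trivial placeholder):

        import sys

        # The output stream is a function of the entire input history,
        # which is not known in advance to the program's author.
        state = 0
        for line in sys.stdin:              # open-ended environmental input
            observation = len(line.strip())
            state = (state * 31 + observation) % 1_000_003  # fold in history
            print(state)

    Its author understands every line of it, yet cannot say what the thousandth output will be without first knowing the thousand inputs.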

    In short, this idea that you have that the program is limited by the knowledge of its creators is just flat wrong. If you don’t see that, then there’s no need to take this further.

  70. John #71
    “I have no doubt that a sentient machine could be made – it just wouldn’t be a computer.”

    Well that’s progress. Now it depends on what we call a computer.

    If you are talking only about currently available digital machines you might be right.

    If you are excluding devices that take input, convert them to an internal representation, process the data in some way and provide outputs, then even brains would be excluded.

    If you are saying that if it is sentient, then it can’t be a computer, then I just have to chuckle.

  71. If spacetime is AI…

    then we could say spacetime is a feature of observation…

    and sense and sentience too…

  72. Here is a Lego Turing Machine:

    https://www.youtube.com/watch?v=FTSAiF9AHN4

    Here is a Tinkertoy Computer:

    https://www.retrothing.com/2006/12/the_tinkertoy_c.html

    Are either of those capable of becoming conscious? What about the example Ned Block once gave, or a computer made of mice running after cheese, or Pylyshyn’s idea of a “group of pigeons trained to peck as a Turing Machine”?

    What would the nature of a program entail if it could bring self-awareness to that division of matter which, in its particular arrangement, originally lacks it? Does electricity make a difference, and if so why?

  73. Sci…yes, but at this moment, accepting ‘returning’ here, could be enough…

  74. Sci:
    Could any simple machine be self-aware? What about a brain with 100 neurons? I think your examples are just a distraction from the real issues.

    Your question about what would be the nature of a program that would bring self-awareness is a good one. We don’t even know why a brain’s arrangement of neurons produces self-awareness, though, so an answer doesn’t appear to be available any time soon.

  75. @ Stephen: Heh, of course, rather than a distraction, I think my examples show that at the level of physics/chemistry there is no symbol manipulation and thus nothing fundamental about computation.

    It’s thus hard to see how a particular Holy Grail program, or set of such programs, could turn a structure that we presumably agree is non-conscious into a conscious entity. The machine that runs my PC games and my tax software has a variety of moving parts, but it seems to me the Computationalist is saying you can move those parts in just the right way to bring about self-awareness.

    Isn’t this like assuming the world falls under our expectations of physics but particular hand gestures and words can produce violations of said expectations?

    OTOH brains are structures that have evolved to grant the body that contains them self-awareness (though more and more it seems that consciousness is influenced by, if not reliant on, more than the nervous system – for example, the relation between our gut and our emotional states).

  76. @Sci

    “I think my examples show that at the level of physics/chemistry there is no symbol manipulation and thus nothing fundamental about computation”

    You can apply the same statement to brains. Yet brains can be self-aware, therefore why not computers?

    What is essentially different about them? Both can be turned off and become inanimate (but brains are much more difficult to turn back on – they are required to be ‘on’ in order to maintain their environment, unlike computers). Computers have the bulk of their organization in their software while brains have it in their connectome, but that’s just a difference of technology. Fundamentally they are both just a bunch of interconnected switch states, changing themselves over time.

    The similarity between computers and brains is at the heart of why I think it might be possible to emulate self-awareness in computers.

    “OTOH brains are structures that have evolved to grant the body that contains them self-awareness”

    Kidneys evolved to grant the body the ability to cleanse its blood, yet we have dialysis machines. Birds evolved to fly, yet we have airplanes. That something was created through evolution is no reason to believe it cannot be emulated with a machine.

  77. “That something was created through evolution is no reason to believe it cannot be emulated with a machine”

    Sure, I agree a machine built to parallel the as-yet unknown necessary/sufficient processes/structures should be able to do the job in theory. But this instantiation of a conscious entity won’t be done with a Turing Machine instantiation that we all agree is non-conscious at the start, yet which some of us think will be conscious, if its parts move in the right way, when running a particular set of programs… and even that last part lacks clarity.

    Are the parts of the instantiated Turing Machine conscious of a virtual environment, or of the physical world? Are they conscious after one of the Holy Grail programs stops running? And is it enough to instantiate a Turing Machine with Legos or Tinkertoys or pigeons or even a group of humans, or does the computer running the program need to be made out of certain materials and involve electricity?

  78. Sci

    I’m not really clear why you consider it important that the parts of a computer need to be conscious in order to instantiate consciousness. After all, the parts of a brain aren’t conscious on their own.

    I’m also not clear why you think a non-conscious beginning excludes the possibility of later consciousness. When does a brain begin to be self-aware? There is a point somewhere in its development. I just don’t see a logical rationale for the argument. Whether it is a computer or a brain, if it stops running there can be no consciousness. That seems to be clear evidence that it is the process of interconnected neurons firing (or possibly computer gates switching?) that somehow creates consciousness.

    I know of no evidence that the materials used to make something, or whether electricity is used, have anything at all to do with consciousness. The evidence points to the electrochemical activity in and between neurons generating brain activity. Generating this behaviour is what is important, not the technology underlying how it is done. The likelihood of generating this behaviour with tinkertoy or lego technology seems rather low.

  79. @ Stephen – it’s not that the parts of the brain are conscious on their own, it’s that the brain is the major necessary biological component for consciousness. And once it aids in our self-awareness it continues to do so barring some major issue. Beyond that, we consider consciousness to have an integral relationship with the brain no matter one’s metaphysics, whereas with a computer it is far less clear what parts are tied to consciousness given all the parts that can be involved with running a program alongside other programs.

    As for the non-conscious beginning, the problem is that, unlike a brain granting self-awareness to the body, a Turing Machine instantiation is reading/responding to however a program (a bit string) has been instantiated physically. What Computationalists are suggesting is that I can take a Turing Machine instantiation and play a video game, then load up some tax software, then watch Netflix, then run the Holy Grail program that makes the Turing Machine self-aware… and then maybe watch some more Netflix. So what is it about the program that makes the physical parts of a computer capable of instantiating a conscious entity when that particular program is run?

    Regarding the special role of electricity, to me it just seems odd to say a special type of Turing Machine (one made in the manner of modern computers) is enough to instantiate a conscious entity when it’s clear Tinkertoys and Legos can instantiate Turing Machines and conceivably run the necessary programs. If we can see that a Turing Machine running Holy Grail programs is not, on its own, enough, what is so magical about those Turing Machines made in a special way? Nothing that I can see…

  80. As an addendum: to be clear, I’m fine with some synthetic reproduction of the relevant structures and whatever other criteria might be involved in the instantiation of a conscious entity.

    All to say, if one is familiar with the relevant Scripture, I do think Data is a conscious entity (as per The Measure of a Man) but the characters on the Holodeck are not (assuming the Holodeck’s characters are programs run on a Turing Machine).

  81. @Sci

    I think we are getting to the end of this thread. We seem to be repeating ourselves and not making any forward progress.

    Summarizing, you seem to be saying that it isn’t clear how a computer would become conscious, or what particular thing would make it conscious. Since I cannot tell you, your position is that it can’t become conscious.

    My position is that I cannot tell you what particular thing makes a brain conscious (so why would I know what would make a computer conscious?), but I know that it is conscious and that this has something to do with the interconnection of firing neurons. Furthermore, I can see similarities between brains and computers in the way they are organized, with interconnected switching elements allowing them to do mundane non-conscious tasks (like running tax software, or maintaining sugar levels in your body) as well as more complicated tasks. To me, this means there is a possibility that a computer using appropriate underlying technology could become conscious.

    As an aside, you are saying that lego and tinkertoy technologies (or even your typical laptop) would be capable of doing anything any other Turing machine can do, but that is simply not reasonable. Technology does matter. A tinkertoy computer cannot do what a 70s-era 8-bit computer could do, that cannot do what a modern laptop can do, and that is not as capable as a computer that can beat a Go master. Using such examples seems to be a device to minimize the perceived abilities of computers using an emotional rather than a logical argument.

  82. @ Stephen: “Using such examples seems to be a device to minimize the perceived abilities of computers using an emotional rather than a logical argument.”

    But it’s not emotional at all; if anything the Computationalist is making an emotional leap in thinking that running a program would be sufficient. Brains have specific structures, but computers can vary incredibly, as all that’s required is an agreement with the mind of the user.

    If you want to say the machine that ultimately is a conscious entity could partially involve Turing Machines running particular programs, I think that’s the best you can say regarding the parallels you see. But I don’t think that’s the whole of what’s going on with the brain, nor do I think there is a program so incredible that it instantiates a conscious entity when the previously non-conscious computer parts move around to run it.

  83. Stephen


    “Well that’s progress. Now it depends on what we call a computer. If you are talking only about currently available digital machines you might be right.”

    That’s the usual “definition” of a computer – or a Turing machine.


    “If you are excluding devices that take input, convert them to an internal representation, process the data in some way and provide outputs, then even brains would be excluded.”

    Not if they also generate consciousness. As they do generate consciousness and irreducible mental phenomena, we know that brains can’t be computers – as the capabilities of computers are totally understood.


    If you are saying that if it is sentient, then it can’t be a computer, then I just have to chuckle.

    There can be a computer that is sentient, but it would need capacities above and beyond its mere computational facilities. It would need to be capable of generating mental phenomena, and computation on its own cannot do that.

    JBD

  84. Stephen


    You can apply the same statement to brains. Yet brains can be self-aware, therefore why not computers?

    Because brains DO cause consciousness, it doesn’t follow that anything can. Your point is strange. Just because a brain is a “thing” and a computer is a “thing”, can they both have the same capabilities?

    Uranium-235 is used to create an atomic weapon. Could a program do the same, just because it’s a “thing”?

    You’ve made the same implicit identification without paying it much notice: brains and computers are the same types of “thing” – with no evidence whatsoever.

    Computers could think, but they would need the additional causal capability to generate consciousness. Brains are scientific mysteries; computers are not. If you think they are, you don’t know enough about them. We can rule out computation alone as sufficient for the creation of mental phenomena, as computation cannot generate semantics.

    That is a limitation of computation: it is not a limit of all machines, and certainly not of the brain.

    JBD
