Do we need robots to be conscious? Ryota Kanai thinks it is largely up to us whether the machines wake up – but he is working on it. I think his analysis is pretty good, and in fact we can push it a bit further.

His opening remarks, perhaps due to over-editing, don’t clearly draw the necessary distinction between Hard and Easy problems, or between subjective p-consciousness and action-related a-consciousness (I take it to be the same distinction, though not everyone would agree). Kanai talks about the unsolved mystery of experience, which he says is not a necessary by-product of cognition, yet maintains that consciousness must be a product of evolution. Hm. It’s p-consciousness, the ineffable, phenomenal business of what experience is like, that is profoundly mysterious, not a necessary by-product of cognition, and quite possibly nonsense. That kind of consciousness cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates. A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.

The survival value of A-consciousness comes from the way it allows us to step back from the immediate environment; instead of responding to stimuli that are present now, we can respond to ones that were around last week, or even ones that haven’t happened yet; our behaviour can address complex future contingencies in a way that is both remarkable and powerfully useful. We can make plans, and we can work out what to do in novel situations (not always perfectly, of course, but we can do much better than just running a sequence of instinctive behaviour).

Kanai discusses what must be about the most minimal example of this: our ability to wait three seconds before responding to a stimulus. Whether this should properly be regarded as requiring full consciousness is debatable, but I think he is quite right to situate it within a continuum of detached behaviour which, further along, includes reactions to very complex counterfactuals.

The kind he focuses on particularly is self-consciousness or higher-order consciousness; thinking about ourselves. We have an emergent problem, he points out, with robots whose reasons are hidden; increasingly we cannot tell why a complex piece of machine learning produced the behaviour it did. Why not get the robot to tell us, he says; why not enable it to report its own inner states? And if it becomes able to consider and explain its own internal states, won’t that be a useful facility which is also like the kind of self-reflecting consciousness that some philosophers take to be the crucial feature of the human variety?

There’s an immediate and a more general objection we might raise here. The really bad problem with machine learning is not that we don’t have access to the internal workings of the robot mind; it’s that in some cases there just is no explanation of the robot’s behaviour that a human being can understand. Getting the robot to report will be no better than trying to examine the state of the robot’s mind directly; in fact it’s worse, because it introduces a new step into the process, one where additional errors can creep in. Kanai describes a community of AIs, endowed with a special language that allows them to report their internal states to each other. It sounds awfully tedious, like a room full of people who, when asked ‘How are you?’ each respond with a detailed health report. Maybe that is quite human in a way after all.

The more general theoretical objection (also rather vaguer, to be honest) is that, in my opinion at least, Kanai and those Higher Order Theory philosophers just overstate the importance of being able to think about your own mental states. It is an interesting and important variety of consciousness, but I think it just comes for free with a sufficiently advanced cognitive apparatus. Once we can think about anything, then we can of course think about our thoughts.

So do we need robots to be conscious? I think conscious thought does two jobs for us that need to be considered separately although they are in fact strongly linked. I think myself that consciousness is basically recognition. When we pull off that trick of waiting for three seconds before we respond to a stimulus, it is because we recognise the wait as a thing whose beginning is present now, and can therefore be treated as another present stimulus. This one simple trick allows us to respond to future things and plan future behaviour in a way that would otherwise seem to contradict the basic principle that cause must come before effect.

The first job this trick does is to allow the planning of effective and complex actions to achieve a given goal. We might want a robot to be able to do that so it can acquire the same kind of effectiveness in planning and dealing with new situations which we have ourselves, a facility which to date has tended to elude robots because of the Frame Problem and other issues to do with the limitations of pre-programmed routines.

The second job is more controversial. Because action motivated by future contingencies has a more complex causal back-story, it looks a bit spooky, and it is the thing that confers on us the reality (or the illusion, if you prefer) of free will and moral responsibility. Because our behaviour comes from consideration of the future, it seems to have no roots in the past, and to originate in our minds. It is what enables us to choose ‘new’ goals for ourselves that are not merely the consequence of goals we already had. Now there is an argument that we don’t want robots to have that. We’ve got enough people around already to originate basic goals and take moral responsibility; they are a dreadful pain with all the moral and legal issues they raise, so adding a whole new category of potentially immortal electronic busybodies is arguably something best avoided. That probably means we can’t get robots to do job number one for us either; but that’s not so bad because the strategies and plans which job one yields can always be turned into procedures after the fact and fed to ‘simple’ computers to run. We can, in fact, go on doing things the way we do them now; humans work out how to deal with a task and then give the robots a set of instructions; but we retain personhood, free will, agency and moral responsibility for ourselves.

There is quite a big potential downside, though; it might be that the robots, once conscious, would be able to come up with better aims and more effective strategies than we will ever be able to devise. By not giving them consciousness we might be permanently depriving ourselves of the best possible algorithms (and possibly some superior people, but that’s a depressing thought from a human point of view). True, but then I think that’s almost what we are on the brink of doing already. Kanai mentions European initiatives which may insist that computer processes come with an explanation that humans can understand; if put into practice the effect, once the rule collides with some of those processes that simply aren’t capable of explanation, would be to make certain optimal but inscrutable algorithms permanently illegal.

We could have the best of both worlds if we could devise a form of consciousness that did job number one for us without doing job two as an unavoidable by-product, but since in my view they’re all acts of recognition of varying degrees of complexity, I don’t see at the moment how the two can be separated.

51 Comments

  1. Brain Molecule Marketing says:

    The simple question is – how is “consciousness” measured, across species? Same question with emotions, free will, “decision making,” etc. All the subjective, intuitive, self-reported ideological, solipsistic crud that populates the humanities, social sciences and business schools….ugh

    Consciousness is just a silly cultural myth. But aren’t all cultural myths/beliefs silly? That’s why they sell so well!

  2. SelfAwarePatterns says:

    I think the idea that p-consciousness is something separate and apart from the information processing of the brain is an illusion. P-consciousness is simply the most basic aspects of subjective experience, which is what makes it ineffable. A-consciousness is composed of composite concepts built on p-consciousness primitives. In other words, without p-consciousness, there can be no a-consciousness. And p-consciousness itself appears to be built on lower-level primitives that we simply don’t have introspective access to.

    I’m not sure I get the recognition idea of consciousness. My current laptop logs me in by recognizing my face, but maybe something more sophisticated is meant by the word “recognition”?

    Personally, I currently think consciousness is composed of perception (sensory modeling), emotion (instinctive / conditioned goals), attention, and imagination (simulations of various courses of action), with the system having introspective access to a subset of this functionality (in other words, modeling some aspects of its own processing). How similar another system has to be to this framework for us to label it “conscious” is more a matter of normative philosophical conclusions than any fact of the matter. It seems like the system’s programmatic goals will have to be at least somewhat similar to human or animal emotions in order to trigger most people’s intuition of fellow conscious being.

  3. howard berman says:

    Hi:

    P and A consciousness might, in the real time of the stream of consciousness, in fact only be separable conceptually, maybe not even like form and matter.
    You’re assuming a lot, and further the zombie experiment proves no more than the Meditations of Descartes proves.
    Interesting as it may be

  4. Tom Clark says:

    Peter: “That kind of consciousness [the phenomenal kind] cannot in any useful sense be a product of evolution, because it does not affect my behaviour, as the classic zombie twin thought experiment explicitly demonstrates. A-consciousness, on the other hand, the kind involved in reflecting and deciding, absolutely does have survival value and certainly is a product of evolution, for exactly the reasons Kanai goes on to discuss.”

    Not sure why we would consider so-called access consciousness a type of consciousness at all were it not for the fact that phenomenal consciousness (experience) accompanies conscious reflecting, deciding and other informationally intensive capacities. Evolution most certainly selected for the various neurally instantiated cognitive functions *associated with* having experience, which suggests that phenomenal consciousness comes along for free once they are up and running, whether biologically or artificially instantiated. So designers of robots needn’t concern themselves with conferring experience on their creations, only the capacities associated with having it.

    Experience itself, as you suggest, might not have a behavior controlling role over and above what neurons (or silicon elements) are doing in supporting cognitive functions. Of course, subjectively we can’t help but feel that experience, e.g., pain, plays a behavior controlling role. So when robots achieve our level of flexible intelligence, coupled with a commensurate reporting capacity, they might well report having experiences (private qualitative episodes) they judge as causally affecting their behavior. And on what grounds would we dispute them?

  5. Paul Torek says:

    The idea that p-consciousness cannot affect behavior refutes itself as soon as we inquire into the reference of terms like “subjective feeling of pain”. Behavior includes verbal behavior. If painfulness doesn’t regularly cause me to talk about pain, then something else does. But in that case, following standard intuitions about reference (Twin Earth, gold and atomic number 79, etc.) *that* thing is what the word “painful” in my mouth refers to.

    I really like the point that self-awareness “comes for free with a sufficiently advanced cognitive apparatus.” Well, maybe not free, but at a helluva bargain for evolution’s “goals”, since there’s already plenty of information about the body and nervous system floating around the nervous system.

  6. lorenzo sleakes says:

    If we start by defining P-consciousness as ineffectual then we are denying our own ability to talk about what we are talking about. Rather than define A-consciousness as the ability to act I prefer to think of it as the ability to reflect cognitively on our experience. That includes thinking, speaking and remembering. P-consciousness may still be effective as a basic instinctual core level responsiveness for instance to pain, with no capacity for self reflection or understanding.

  7. zarzuelazen says:

    Excellent piece by Kanai – note the close connection between consciousness and *time*.

    As to the role of consciousness, I think it’s the operating system of the brain. There are certain finite ‘mental resources’ available (for example memory and attention) and competing processes in the brain all demanding access to these resources. Consciousness decides how these resources will be allocated – it’s exactly analogous to the Windows operating system on your computer right now deciding how much CPU access each active process gets.

    At the deeper level, I think there’s a good chance consciousness is a fundamental physics property – a thermodynamic property that is ubiquitous (panpsychism).

    I have taken a rough first attempt at an actual definition, and I’m posting my attempt here as well.

    Drum-roll please…

    *There are two different types of entropy, *informational* entropy, and *physical* entropy.

    *The 2 different types of entropy represent a 2-dimensional time (time as a plane). Represent informational entropy on the x-axis, physical entropy on the y-axis. Then the thermodynamics of any system can be plotted on this graph.

    *The rate of change of the ratio of the 2 entropies (1st derivative) defines the *time-flow*, the rate at which time is flowing. So I’m defining the flow of time as a function of the ratio of 2 types of entropy.

    *The degree of consciousness present in any system is the time-flow. I’m postulating that consciousness is time-flow (high level of time-flow, high degree of consciousness).

    I’m postulating that consciousness literally *is* the flow of time itself, defined as a thermodynamic property (ratio of 2 types of entropy).
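
    Written out in symbols (purely formalizing the postulate above, with S_info(t) and S_phys(t) standing for the two entropies at time t – these names are just labels for the quantities described in the comment):

```latex
% Time-flow as the rate of change of the ratio of the two entropies,
% and the postulated degree of consciousness C(t) proportional to it.
\mathrm{timeflow}(t) \;=\; \frac{d}{dt}\!\left[\frac{S_{\mathrm{info}}(t)}{S_{\mathrm{phys}}(t)}\right],
\qquad C(t) \;\propto\; \mathrm{timeflow}(t)
```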

  8. David Duffy says:

    I think the robot is allowed to sometimes say that it cannot explain the algorithm it applies to a particular problem, just that it is a matter of experience and judgment, or even intuition. This will be acceptable in numerous practical domains. Possibly it will be able to teach you how to do it, even though you won’t be able to explain it either.

    And Z, there’s no difference between informational and physical entropy!

  9. zarzuelazen says:

    David,

    I know there isn’t a difference between informational and physical entropy in current scientific understanding. And that creates a problem for my idea.

    In order to talk about ‘time flow’, I really need two different ways to measure time. With a single timescale (a single time dimension), the notion of ‘time flow’ really doesn’t make sense. But with 2 different timescales (2 time dimensions), a comparison between them can make the notion of ‘time flow’ coherent.

    I’m wanting to use entropy as a measure of the passage of time, but it seems that I can’t define ‘time flow’ without at least 2 different measures of entropy.

    I therefore hypothesize that there *is* some difference between informational and physical entropy that we don’t quite understand yet.

  10. Brain Molecule Marketing says:

    Everyday more neurology, animal biology and neurogenetics is debunking popular culture myths about agency, free will, consciousness, emotions, “decision making” and other subjective claims. For example: http://www.iflscience.com/brain/man-missing-most-of-his-brain-challenges-everything-we-thought-we-knew-about-consciousness/

  11. Steve Philbrook says:

    I love this topic. It’s obvious artificial intelligence exists. I believe consciousness is another story altogether. In the first place, science seems unable to pin it down. The best descriptions I’ve seen are in old Buddhist and Hindu texts. But anyways I have a question I’ve been asking both hardware and software people for years. I have a simple chess app. It routinely crushes me horribly quickly. If it had the slightest bit of consciousness it would let me win a few so I don’t hit the off button in disgust. But of course the CPU doesn’t know it’s playing chess, it’s just running code. Likewise if a video camera hooked to a computer is focused on a rose, it’s still just ones and zeros to the CPU. I realize clever programming can make a computer seem conscious, it’s still just running code. My question is, just because computers get more complicated and quick, why would anyone think they would become conscious?

  12. zarzuelazen says:

    Steve #11,

    I would question your assumption that some things are ‘conscious’ and others aren’t. Is there really a clear dividing line? Humans are conscious? What about dogs? What about birds? Bees? Rocks? Where do you draw the line?

    Instead I would suggest that there is no clear dividing line – consciousness is everywhere! In the walls, in the table, in your coffee, everywhere!

    Why should we think this? Because the entire history of science is of phenomena that were once thought to be separate from the natural world, later being found to be part of a single unified phenomenon!

    Excellent example: Take ‘electricity’ and ‘magnetism’. Once thought to be separate, later found to be part of a single thing, ‘electromagnetism’. All of science is just like that.

    All the evidence from the history of science is that there is only a single ‘stuff’ – a monism. Consciousness *must* be a fundamental physics property, it can’t be a separate stuff.

    The only question is which physics property is consciousness?

    There’s really only one aspect of physics that science doesn’t fully understand in the everyday world, so immediately the suspicion should be that consciousness is exactly that property.

    That property is *time*.

    Which branch of physics deals with the arrow of time? Thermodynamics does! What physics property measures the flow of time? …something called ‘entropy’!

    Let me suggest that consciousness is a wholly natural physics property that can be defined thermodynamically. It’s everywhere, it’s just that some things are more conscious than others.

    If you examine your computer, indeed a lot of energy is flowing into the system and it’s generating heat (entropy), just like the brain. I want to suggest that this entropy flow *is* consciousness.

  13. Steve Philbrook says:

    I’m familiar with that theory. I’m referring to the consciousness that seems to be peculiar to “awake” humans. I know, I use the term loosely. I agree, many things are conscious, but we not only know, we know we know, and reflect upon that. For instance, they gave an elephant a giant elephant pencil, and the first thing it drew was an elephant. The elephant is well aware of what it is, but it seems to stop there. Whereas humans think about thinking, wonder about wondering. Some of them anyway. I have to stand by my position. If my computer had any idea what was going on, it would let me win a game, if not out of self preservation, simple courtesy would be nice.

  14. Tom Clark says:

    Paul in #5: “If painfulness doesn’t regularly cause me to talk about pain, then something else does. But in that case, following standard intuitions about reference (Twin Earth, gold and atomic number 79, etc.) *that* thing is what the word ‘painful’ in my mouth refers to.”

    In third-person explanations of pain-related behavior, including verbal behavior, there won’t be (or needn’t be) any reference to the experience of pain, only the various observable physical and functional goings-on (see for instance the literature on whether fish feel pain or Dennett’s article online “Why you can’t make a computer that feels pain”). But what I’m referring to when I report pain is the experience, something only I undergo or partially consist of as a conscious subject. That referent doesn’t appear to, or exist for, anyone but me, so it’s no wonder that it can’t and doesn’t figure in third-person accounts of my behavior, even though *I* judge that it plays a causal role.

    Identity theorists want to claim that certain physical or functional goings-on constitute the experience (are identical to it) and that these play a causal role in my behavior; therefore the experience plays that role, so we avoid epiphenomenalism. If so, when I say I’m in pain (referring to the experience) I’m perhaps unintentionally but successfully referring to a potentially public state of affairs, something we might see in a brain scan, nothing categorically private.

    Still, only I have access to the phenomenology – the feeling – so there’s something inaccessible to outside observers of the public state of affairs that on the identity theory just is my pain. But this inaccessibility is exactly what makes the identity claim problematic in the first place. The morning and evening stars (Venus at different times of day) have all their observable properties in common, thus are identical, so when I refer to one I’m necessarily referring to the other. But this doesn’t hold for pain and its neural correlates, since the phenomenology doesn’t exist as an observable, objective property, thus can’t be attributed to the physical and functional goings-on that correlate with the phenomenology. And since it isn’t an objective, observable property (you can’t find it poking around in the brain, or anywhere else), pain is not in a position to be epiphenomenal in third-person explanations of behavior.

  15. Stephen says:

    Steve #11

    How do you know you are not just “running code”?

    There are a lot of theories about consciousness, but no one really knows how it comes about yet. The idea that consciousness is emergent is one possibility. It’s clear that if you disturb the organization of neurons in your brain, consciousness is one of the most likely things to go, and what is this organization but the “code” that causes your brain to operate in a certain manner. One of the biggest differences between your computer examples and a brain is multiple orders of magnitude of complexity.

  16. arnold says:

    Compare Periodic Table of Elements with Phenomenology with Observation…
    …Do we need more to find ourselves here now…when does here become now…

  17. Steve Philbrook says:

    Good point. All I can say is that the information is not the experience. For instance, take all the parameters of a ball game: video, sound, stats, weather, you name it. Playback will always be far different than actually going to the game. And the computers aren’t even getting that, the information is in binary code, in a sense even once more removed. Again, I highly recommend reading some Buddhist or Yogic writing on the subject. They dissect and analyse the ego, what most people consider their consciousness. And quite rationally explain how it’s not what you think. I have no idea what they would say about computers though.

  18. zarzuelazen says:

    Steve,

    I recommend you read lots of neuro- and cognitive science, not Buddhist or Yogic writing! Spiritual gurus are a dime a dozen – they all say different things, and they’ve contributed a grand total of zero to scientific knowledge on any subject.

    You might like to read through the wikipedia articles linked in my wikipedia book on ‘Phenomenology’ – it’s an A-Z of the central concepts in science and philosophy of mind (67 articles). Read through all of these, I promise, it will bring you right up to speed:

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Phenomenology

  19. arnold says:

    zarzuelazen…

    Regarding phenomenology…
    …Do you see observation as phenomenon perhaps attracted to human consciousness but not human function.

  20. zarzuelazen says:

    I don’t understand the question arnold.

  21. arnold says:

    zarzuelazen…

    Setting philosophy aside to understand consciousness as different from observation…
    …Today, have we evolved to become a phenomena subject to interaction with other phenomena. This kind of intention, on our part provides place for consciousness of one’s body to be material for other phenomenon like observation and self…Understanding this point of view leaves our states of consciousness as useful–akin to the usefulness of evolution…

  23. Steve Philbrook says:

    Zarzuelazen, I think we’re pretty much on the same side here. BTW, I have read the gospels of all the major religions, and some pretty obscure ones. (The Aborigines have a wonderful Genesis account.) I have also read plenty of cognitive science, physics, the bicameral mind, cereal boxes, whatever. And I could be totally wrong too. I believe looking in the brain for consciousness is futile. Since you mentioned time, it’s like taking apart a clock to see where the time is. The best way to get at consciousness is to turn it back upon itself (meditation). If you are patient and determined you may find the answer. Unfortunately, you can’t explain it other than to say that what feels like a linear single thing is actually a composite that includes emotion, memory, will, imagination and so on. And we haven’t even covered dreaming. Certainly a form of consciousness, yet lacking serial time and, most importantly, reason. Here’s a fun one to ponder: could there be consciousness with no object? On another note, if you could get that screen name on a scrabble board, you’d be in good shape.

  24. zarzuelazen says:

    Steve and Peter,

    Look at all the references to *time* in Kanai’s article:

    “consciousness helps us achieve what neuroscientist Endel Tulving has called *“mental time travel.”* ”

    “our sensation of the *present moment* is a construct of the conscious mind”

    “The importance of consciousness in bridging a *temporal gap*…”

    “a function of consciousness is to broaden our *temporal window* on the world—to give the present moment an extended duration.”

    “counterfactual information generation…it involves memory of the *past* or predictions for unexecuted *future* actions”

    Let’s consider the complex number plane. We could use that to represent a 2-dimensional time. Use a vector in the plane to locate any point – it has a real number component (representing ordinary time), and an imaginary component (representing an extension in a *2nd* time dimension!)

    Then consciousness would be equivalent to a rotation in the complex number plane (a rotation in 2-dimensional time).

    It would consist of *both* a *physical* component (the physical processes representing the flow of ordinary time in the brain) *and* a non-physical component – an extension into the 2nd time dimension – which perhaps represents an *informational* component (pure information).

    In other words, consciousness is a *composite* property, a vector with *both* a physical component (extension along the ordinary number line) *and* a non-physical component (extension along the imaginary number line).

    The rotation through the complex number plane then represents the movement through 2-dimensional time.

  25. zarzuelazen says:

    Have a read of this:

    ‘Imaginary numbers present real solution to complex physics problem’

    https://news.uchicago.edu/article/2017/04/28/imaginary-numbers-present-real-solution-vexing-physics-problem

    “Two physicists at Argonne National Laboratory offered a way to mathematically describe a particular physics phenomenon called a phase transition in a system out of equilibrium.”

    “In physics, “equilibrium” refers to a state when an object is not in motion and has no energy flowing through it. As you might expect, most of our lives take place outside this state: we are constantly moving and causing other things to move.”

    ” “A rainstorm, this rotating fan, these systems are all out of equilibrium,” said study co-author Valerii Vinokur. ”

    “To describe the phase transition, Galda and Vinokur wrote out the Hamiltonian operator, introduced an applied force to take it out of equilibrium and then made the force *imaginary*.”

    “This is a trick which is illegal from any common-sense point of view, but we saw that this combination—energy plus imaginary force—perfectly mathematically describes the dynamics of the system with friction,” Vinokur said.

    In other words, they are using a 2-d complex number plane (real+imaginary numbers), as I suggested in the above post.

    ” “We can understand non-equilibrium transitions now as topological transitions in the space of energy,” Galda said. ”

    The brain is a perfect example of a non-equilibrium system. Guess what…

    “They applied the trick to describe other out-of-equilibrium phase transitions, such as a dynamic Mott transition and a spin system, and saw the results agreed with either observed experiments or simulations.”

  26. Steve Philbrook says:

    Zarzuelazen, this is strange. I recently read a book called An Experiment with Time, by J. W. Dunne. Although I couldn’t understand it, what you are saying sounds similar. He also managed to work the speed of light in there too. If you haven’t, you might like it. And judging by your reply, you would “get” it. Sometimes I was there, AHA comprehension! But then it would go away. Forget computers, maybe I’m not conscious.

  27. zarzuelazen says:

    Yeah Steve, I’d heard of Dunne, but hadn’t read his stuff. Definitely worth looking into it.

    Listen, the idea of 2-d time isn’t that hard to understand.

    Just imagine a street map on the table. To get to any point on the map, you need an x-coordinate (a left-right movement) and a y-coordinate (an up-down movement). Example: 4 right, 6 up. You could draw an arrow from the bottom left-hand corner of the map to the point you want – that’s the ‘vector’ (a combination of 4 right and 6 up).

    For 2-d time it’s roughly the same thing, only the points on the map represent points in time rather than points in space. And the ‘map’ is the complex number plane:

    https://en.wikipedia.org/wiki/Complex_plane

    Your current mental state is a point on this map, consisting of a *combination* of physical time (eg. 4 right) AND non-physical time (eg. 6 up).

    Consciousness is your mind moving through the points on this map in a certain way.
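
    Purely to illustrate the street-map picture (not to endorse the 2-d-time theory itself): a point in the complex plane really is just a pair of coordinates, and a rotation is multiplication by a unit complex number e^(iθ). A minimal sketch using Python’s built-in complex type:

```python
import cmath

# The "vector" from the street-map analogy: 4 right, 6 up.
point = complex(4, 6)
assert point.real == 4 and point.imag == 6

# A rotation in the plane is multiplication by e^(i*theta);
# here we rotate the point by 90 degrees anticlockwise.
theta = cmath.pi / 2
rotated = point * cmath.exp(1j * theta)

# Rotating (4, 6) by 90 degrees gives (-6, 4), up to float rounding.
print(round(rotated.real, 9), round(rotated.imag, 9))  # -6.0 4.0
```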

  28. Steve Philbrook says:

    Okay, I think I get that, possibly like a video game. No matter how many times you play, the action can be completely different, and there is the “feeling” of time. But it’s all there on the disk all the time. I read the book because of an internet article describing how his math was dismissed at the time; now they realize he may have been on to something. Again, it involves imaginary and negative numbers. And, unintentionally, he found it also explained the speed of light. We have wandered far from my original question though. I want to state it differently. At M.I.T. in Boston, there was, and may still be, a simple computer display set up to play tic tac toe. It is made of tinker toys. This is to show that any electronic calculator can be replicated mechanically. Since the basic process hasn’t changed, this includes modern supercomputers. Although there would probably be no tinker toys left for kids to play with. Even though I agree everything, even particles, may have a sort of awareness, being as you say included in the conscious field, I don’t think the tinker toys realize they are playing tic tac toe. Nor do I think adding more of them and speeding them up will change that. The difference is subtle but there it is.

  29. arnold says:

    Are one’s thoughts, at the most just physical material, part of the means of meaning in our evolution…
    …today, confusing consciousness with observation limits ourselves to less dimensional living…
    …can our bodies identify physical phenomena like consciousness–to provide for more dimensional living…
    …do our bodies produce enough energy for transformations to include other dimensions other phenomena…

  30. Paul Torek says:

    Tom (#15),

    How do you *know* that the Morning and Evening stars have all their observable properties in common? This can be admitted only *after* the identity is accepted; beforehand, the obvious “difference” is that one can be observed in the morning, the “other” in the evening. Thus, the Morning/Evening Star identity fails to provide a contrast to psycho-physical identities.

    There is a world of difference between saying that painfulness “needn’t be” part of the explanation of verbal behavior, versus saying it “won’t be”. That it needn’t be is true, but not interesting. We can explain, in principle, how drinking from the faucet helps keep your body temperature from rising over 37 °C, without ever mentioning “water” – just stick to atomic levels of explanation. This fails to show that water doesn’t exist, or is purely subjective, or that it plays no causal role.

  31. arnold says:

    Do we need robots to be conscious?…this must be some kind of backward contemporary philosophy…
    …Do we need to be conscious!…in traditional philosophy, ‘the horse is before the carriage/driver looking for a passenger’…

  32. Tom Clark says:

    Paul:

    “…the Morning/Evening Star identity fails to provide a contrast to psycho-physical identities.”

    We’d establish phenomenal-physical identity by showing that experiences and their associated neural processes have all their properties in common, just as we’ve done for the morning and evening stars. Since as far as I know this identity hasn’t yet been established, we’re not yet entitled to claim that the feeling of pain plays the causal role of its associated neural goings-on in controlling action. Of course it sure *seems* to play a role, but all we’ve actually got is a reliable, constant conjunction of experienced pain with pain behavior. Unless the identity is illicitly assumed, neuroscientific explanations of that behavior won’t invoke the experience of pain, and indeed they usually don’t (e.g., the “do fish feel pain?” literature). Instead, they invoke various neural goings-on.

  33. David Duffy says:

    “neuroscientific explanations of that behavior won’t invoke the experience of pain”

    Dear Tom,

    In one famous neuroimaging study

    http://science.sciencemag.org/content/277/5328/968

    “To differentiate cortical areas involved in pain affect, hypnotic suggestions were used to alter selectively the unpleasantness of noxious stimuli, without changing the perceived intensity. Positron emission tomography revealed significant changes in pain-evoked activity within anterior cingulate cortex, consistent with the encoding of perceived unpleasantness, whereas primary somatosensory cortex activation was unaltered.”

    This shows that the conscious experience of pain is actually a mixture of processes that can be dissociated using a top-down method. It is not that surprising that the experience of pain depends on attention, but it does kind of suggest that the feeling of pain is what drives the pain behaviour.

  34. zarzuelazen says:

    Tom #32,

    Two of the most famous examples of apparently different properties that were later found to be unified are the space-time divide (unified by Einstein) and the electricity-magnetism divide (unified by Maxwell).

    The history of science clearly points towards monism, and any dualistic conceptions are, when viewed in light of this evidence, implausible.

    I think the views of Alfred North Whitehead really help to dissolve the phenomenal-physical divide: there are no stable ‘things’, only ‘processes’. Time really is like a river, and the ‘block universe’ picture is just wrong (or at least, incomplete).

    According to physicist Sean Carroll, there’s a big ‘T’ for time in quantum mechanics that contradicts relativity, suggesting that indeed time really is fundamental.

    So think of everything as a river of time-flows. In some parts of the river of time the water is very calm – it’s flowing very slowly (some properties change very slowly) and we call these ‘mathematical’.

    In other parts of the river the water of time is moving faster (properties that are changing faster) and these appear as matter.

    Finally, some time-flows are very fast (parts of the river where the water is really roaring past very fast) and this appears as ‘consciousness’.

    But it’s all one thing…time.

    Think of a tornado. Energy is flowing through it continuously, and it seems to maintain a complex structure. But it’s not a ‘thing’, it’s a process. It’s a ‘time-flow’ – a dynamical process in time that repeats over and over.

    I want to suggest that everything is really just like the tornado to some degree or another. It’s just that some time-flows are more stable than others.

    Is the tornado alive? I say yes. There simply isn’t a clear dividing line between physics and mind. And that’s the bottom line.

  35. john davey says:

    Peter


    “The really bad problem with machine learning is not that we don’t have access to the internal workings of the robot mind; it’s really that in some cases there just is no explanation of the robot’s behaviour that a human being can understand.”

    Are these robots programmed?
    Can you give me an example of how a robot’s execution plan couldn’t be understood? If the robot’s execution plan arises from a predetermined sequence of responses to events, where does the incomprehension arise?

    What computational feature is exploited to distinguish internal states from other ones?

    Regs
    JBD

  36. Peter says:

    John,

    As I understand it (and my understanding is pretty limited, so nobody should take my word for any of what follows!) they’re not programmed in quite the same sense as a traditional non-learning robot, though of course there is a level on which what they do is pre-determined by design and, yes, program.

    I can’t, obviously, lay out for you an example of something incomprehensible by humans. I think the case is that while at a micro level every step is perfectly legible, there is no coherent overall strategy which can be detected: yet the thing works. We must assume it depends on disjunctions with too many ‘legs’ for a human brain to hold simultaneously.

    This apparently happens even in chess. I have read that computers have demonstrated wins in endgames which had always been considered draws. In these cases we can watch the computer play out the win; it involves long sequences of apparently meaningless moves that nevertheless eventually lead to the win. Human players cannot learn to pull off the same trick because there is no general strategy, just a vast number of vastly complicated particular sequences.
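    The flavour of this can be seen even in a toy game. The sketch below is illustrative only – real endgame tablebases work at vastly greater scale – but it solves tic-tac-toe by exhaustive search, and perfect play emerges with no ‘strategy’ represented anywhere in the program:

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if `player` (to move) can force a win, -1 a forced loss, 0 a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if '.' not in board:
        return 0
    other = 'O' if player == 'X' else 'X'
    # Exhaustive search: try every move and take the best reply;
    # no strategic concept is represented anywhere.
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == '.')

print(value('.........', 'X'))  # → 0 (perfect play is a draw)
```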

    I believe there are other, less fundamental problems with getting explanations for machine-learned behaviour, but this is the worst.

  37. Tom Clark says:

    David in 33,

    Interesting findings, thanks: “…the conscious experience of pain is actually a mixture of processes that can be dissociated using a top-down method. It is not that surprising that the experience of pain depends on attention, but it does kind of suggest that the feeling of pain is what drives the pain behaviour.”

    Reports of the degree of unpleasantness are one sort of pain behavior, and what the research shows is that those reports are at least partially a function of (or at least reliably correlated with) variable activation of the anterior cingulate cortex as modulated by hypnosis. The pain affect itself (the feeling of unpleasantness) isn’t observed in this research and doesn’t figure in the explanation of the reported unpleasantness. Rather, it’s an unstated and as yet unproven inference/assumption that the activation of the anterior cingulate cortex is the feeling itself, such that we can claim that the feeling is what drives the report of unpleasantness.

  38. zarzuelazen says:

    Tom,

    If you take all the emotion/spiritual/religious/philosophical/wishful thinking out of the topic, it’s really clear that ‘consciousness’ is in exactly the same class of properties as ‘information’ and ‘fields’.

    Now you wouldn’t have a massive debate whether the ‘information’ stored on your computer was ‘physical’ or not would you? Of course not! It obviously is.

    And ‘consciousness’ is just like ‘information’. It’s abstract, but it’s very clearly a physical thermodynamic property that can be precisely defined and measured as a function of ‘entropy’.

    I wouldn’t say that ‘consciousness’ is exactly *identical* to physical states though. Rather I’d say that consciousness is *composed of* (or *constituted in*) physical states. So my position is non-reductive physicalism – consciousness is still a real, distinct ontological property.

    Thank goodness for neuroscience and physics – notice they’re clear and precise. Too much philosophy just ties your brain in knots and fogs your thinking.

  39. john davey says:


    “they’re not programmed in quite the same sense as a traditional non-learning robot”

    Actually, they are. They are programmed in a totally identical fashion. “Machine Learning” is (at best) a design term of no technical significance, at worst a cheap sales pitch for new automation features.

    It’s a bit like building houses. Houses are made of bricks but there are lots of schools of architecture that have different approaches to how the bricks are laid. But bricks are bricks.

    Machine learning programs will be using the same compilers, libraries and off-the-shelf software as all “un-learning” software. No difference.

    As a computational artifact there is nothing to distinguish a machine-learning program from any other type. Watch them run on a computer and you won’t see any difference. The difference is in the thought and design processes of the author of the source code that was used to “compile” (i.e. produce) the actual program.

    In reality, the difference isn’t huge. For instance, there is a much larger design gap between programming in an orthodox functional paradigm and in an object-oriented paradigm. That is a big difference: much more of a style clash than anything you would notice in software that merely claims to be “machine learning”.

    “Machine Learning” is a new buzzword because of Big Data, and the use of data science tools like ‘R’ to extract patterns from otherwise valueless data to (usually) detect purchase patterns or market trends.

    In time, people will stop describing this as “machine learning” and simply describe it as “customer choice prediction” software, as that is what it is. The change of label won’t change the source and it won’t change the program.


    ” I think the case is that while at a micro level every step is perfectly legible, there is no coherent overall strategy which can be detected”

    At any instant, the next step of a computer is known – 100%. For the same input data, a program will produce identical output, every time. Good job too! They’d be useless if they didn’t. Nothing they do is incomprehensible.

    But I’ll assume that the “machine learning” method you are referring to is linked to tools like R that might typically produce a weighted grid of scalars to control the flow of their programs.

    If we create a mathematical weighted grid of options, and use it to determine the execution path of the software, that is all there is to it. There is nothing to discern save the contents of the grid. The decision to use an arbitrary method – a grid of scalars – to control execution does not endow that grid with vast semantic awareness. It’s just a grid. There is nothing to understand. We could use an infinite number of arbitrary methods to weight execution plans and they’d remain nothing more than numbers in a grid.
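    A toy version of such a grid (my own illustration, not any particular ML library’s API) makes the point concrete – the ‘decision’ is nothing but arithmetic over a table of numbers:

```python
# Execution steered by a grid of scalars: features in, scores out, pick the max.
WEIGHTS = [          # rows: input features; columns: candidate actions A, B, C
    [0.9, 0.1, 0.0],
    [0.2, 0.7, 0.1],
    [0.0, 0.3, 0.7],
]
ACTIONS = ['A', 'B', 'C']

def choose(features):
    # Score each action as a weighted sum of the inputs; 'training' would
    # merely adjust the numbers in WEIGHTS, nothing more.
    scores = [sum(f * WEIGHTS[i][j] for i, f in enumerate(features))
              for j in range(len(ACTIONS))]
    return ACTIONS[scores.index(max(scores))]

print(choose([1.0, 0.0, 0.0]))  # → A
print(choose([0.0, 0.0, 1.0]))  # → C
```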

    Toss a coin. Heads, drink a cider; tails, drink a beer. Is there meaning in ‘heads or tails’?


    “Chess etc”

    Chess is a good analogy. Chess is entirely mathematical – theoretically ‘solvable’ by computer – but there isn’t enough time to “work it out”.

    We use terms like strategy to favour some patterns of play over others and overcome the computational shortfall. All chess players use “weighted execution grids” to avoid their computational shortfall – i.e. the burden of having to work out the next 20 moves. They use practice and experience to know that some positions are likely to lead to doom – those positions in their memories are “weighted grids”.

    Programmers of chess get over the CPU shortfall by using – in effect – weighted execution grids modelled on strategies. In computer chess, though, as CPU capacity increases, the reliance on techniques to short-circuit the CPU burden is diminishing rapidly – orthodox combinatorial prediction will enable checkmate from an increasing number of moves out, “meaningless” or not.

    JBD

  40. Tom Clark says:

    zarzuelazen:

    “…‘consciousness’ is just like ‘information’. It’s abstract, but it’s very clearly a physical thermodynamic property that can be precisely defined and measured as a function of ‘entropy’.”

    So on this account consciousness is completely objective and quantifiable just like any other physical property. But the defining characteristics of conscious experiences are subjective privacy (they only are available to/exist for the instantiating system) and qualitativeness (not amenable to quantitative specification). An interesting question and challenge for non-reductive physicalists is to explain why only certain sorts of physical/functional goings-on are associated with experiences. Or, as you might put it, why is consciousness constituted in only certain classes of physical states?

  41. john davey says:

    Zarzuelazen


    “Now you wouldn’t have a massive debate whether the ‘information’ stored on your computer was ‘physical’ or not would you? Of course not! It obviously is.”

    Not in a folk sense, but this is a philosophical forum in which this kind of question is important. Is information stored on a computer? The correct answer is that it’s actually located nowhere. Or rather, information is capable of being extracted from a medium as the last stage in a process in which the arbitrary physical constructs used to represent symbols are converted into something meaningful inside someone’s head.

    This usually doesn’t matter; for the purposes of these discussions it does. “Information” is the most widely abused word in the whole subject area.

    Or at least, it doesn’t make sense to regard an arbitrary physical construct – a voltage level in a CPU chip, or a bead on an abacus – as being informative without two other things: a) a mapping system – located arbitrarily – to link the physical construct to a commonly accepted symbol; and b) an information-processing being who knows both the mapping rules and how to process the end product of the mapping and realise that output as information. It’s no good if I can’t decrypt an encrypted message; until then it’s just noise.

    Likewise you can’t process the information on a CPU chip without a battery of aids to enable you to do it – buses, screens, LED displays, keyboards, the whole gamut.


    “…but it’s very clearly a physical thermodynamic property…”

    So what does it weigh? How hot is it? Seriously, I don’t see what on earth it has to do with thermodynamics.


    “Thank goodness for neuroscience and physics – notice they’re clear and precise.”

    Neuroscience? You must be joking. Physics is precise in the sense that it is based upon the application of mathematical models. As to whether the conclusions to be drawn from those models are “clear”, I think you need to look at, e.g., the discourse between Einstein and Bohr on the subject of quantum mechanics.

    JBD

  42. john davey says:

    Zarzuelazen


    “…space-time divide (unified by Einstein)…”

    I think you are getting your unities mixed up.

    You need to differentiate the electrostatic/magnetic “unity” (in fairness to him, actually discovered by Michael Faraday) from spacetime “unity”. Interestingly, it wasn’t Maxwell who united electricity and magnetism but Einstein’s special relativity, which demonstrated that (in a semantic sense) electrostatic forces and magnetic forces were actually the same thing.

    Maxwell wrote the famous wave equations.

    But spacetime is a mathematical construct in which space and time are ‘unified’ only in the sense that they are capable of similar mathematical treatment. The actual semantic linkage isn’t there, as the semantics of space and time is not a concern of physics – with good reason, as their separate existence is one of the founding axioms of physics. Space and time are a ‘given’ – they are assumed to exist.

    JBD

  43. zarzuelazen says:

    “So on this account consciousness is completely objective and quantifiable just like any other physical property.”

    Yes

    “But the defining characteristics of conscious experiences are subjective privacy (they only are available to/exist for the instantiating system) and qualitativeness (not amenable to quantitative specification).”

    You have just given a completely tautological ‘definition’ that is devoid of any content. This simply declares by fiat that consciousness must remain ‘mysterious’ (outside the realm of objective discourse). It conveys zero insight.

    “An interesting question and challenge for non-reductive physicalists is to explain why only certain sorts of physical/functional goings-on are associated with experiences. Or, as you might put it, why is consciousness constituted in only certain classes of physical states?”

    I deny the premises. If it makes sense to talk of consciousness at all, it has to be everywhere (panpsychism). Some things are more or less conscious than others, but nothing is wholly unconscious. This is no more a problem for physicalists than saying that some things are hotter or colder than others, while all things have a temperature.

    Of course you can ask what the specific physical property is that makes some things more conscious than others. And I gave you a precise conjecture earlier in the thread: the degree of consciousness present in any system is the ‘rate of time flow’, defined as entropy dissipation.

  44. zarzuelazen says:

    “So what does it weigh? How hot is it? Seriously, I don’t see what on earth it has to do with thermodynamics.”

    The brain is a system where energy is pouring in, information is getting processed, and heat comes out – the very definition of a thermodynamic system far from equilibrium.

  45. john davey says:

    Zarzuelazen


    “The brain is a system where energy is pouring in, information is getting processed, and heat comes out – the very definition of a thermodynamic system far from equilibrium.”

    “Information” is not a thermodynamic concept, let alone a physical one.

    Leaving that aside, this is not hugely helpful, is it? Any physical system I can conceive of has “energy pouring in” and “heat coming out” – a pack of wild dogs, a solar panel, a motor car, Donald Trump, the Iraq War – I don’t see the necessary link between consciousness and thermodynamics.

    JBD

  46. David Duffy says:

    Hi Tom Clark. “not amenable to quantitative specification”

    This always struck me as special pleading, in that psychophysical laws (Fechner etc) are quantitative, and I can’t see a principled way to argue that the quantitative phenomenology is any less mysterious than the qualitative, especially in view of the existence of synaesthesia etc. Hypnotic synaesthesia has been studied a bit.

    As to the pain affect, I don’t think it is localized to particular neurones in the ACC. There is an interesting literature on social pain (pain of being ostracized etc) and how it responds to certain analgesics – the argument is that it is identical to that affective component of physical pain.

  47. Tom Clark says:

    David:

    “…psychophysical laws (Fechner etc) are quantitative, and I can’t see a principled way to argue that the quantitative phenomenology is any less mysterious than the qualitative, especially in view of the existence of synaesthesia etc.”

    Psychophysical laws can specify relationships between stimuli and reports of experience, e.g., varying the hue or intensity of color stimuli can elicit judgments about just noticeable differences. This doesn’t seem to me particularly mysterious but simply a function of our physiology. And likewise for synesthesia: it’s pretty clearly an effect of cross-activation of normally separate brain modules supporting sensation and perception, even if we don’t yet know the mechanisms. But there’s no quantitative specification of what for instance a particular color looks like such that we’d know your red is like mine, nor (as far as I know) is there any strong candidate research hypothesis that might explain why experience accompanies particular neural goings-on and not others.
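    For concreteness, the sort of quantitative regularity at issue can be sketched with Weber’s and Fechner’s laws; the Weber fraction 0.08 below is purely illustrative (real values vary by modality and stimulus range):

```python
import math

K = 0.08  # illustrative Weber fraction; real values vary by modality

def jnd(intensity):
    # Weber's law: the just-noticeable difference grows in proportion
    # to the stimulus intensity.
    return K * intensity

def fechner(intensity, i0=1.0):
    # Fechner's law, obtained by summing JNDs: perceived magnitude grows
    # logarithmically with intensity relative to the threshold i0.
    return math.log(intensity / i0) / K

print(round(jnd(100), 3))      # → 8.0
print(round(fechner(100), 1))  # → 57.6
```

    Note that, as the comment above says, such laws relate stimuli to reports of experience; nothing in the formulas specifies what the experience is like.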

  48. zarzuelazen says:

    Tom:

    “But there’s no quantitative specification of what for instance a particular color looks like such that we’d know your red is like mine, nor (as far as I know) is there any strong candidate research hypothesis that might explain why experience accompanies particular neural goings-on and not others.”

    That’s because I’m currently the only human being on the planet who truly understands consciousness 😉

    But briefly…

    I want you to imagine a ball-room filled with pairs of dancers. Imagine that at first each pair of dancers is doing their own thing…so the room as a whole looks chaotic…the dancers aren’t co-ordinating.

    I’m going to introduce a new kind of property here, a thing I’m calling a *Time-Flow*.

    Definition of *Time-Flow*:

    Let a time-flow be any repeated sequence of events. So imagine a series of dance-steps. Then a time-flow is just this series of dance-steps repeated over and over (it’s simply any repeated pattern or sequence of events in time).

    So for the ball-room example, the time-flows are the dance-steps that each pair of dancers performs.

    Now let me introduce a new term: The *Coherence* of the time-flows.

    Recall the situation I outlined: every pair of dancers was initially doing their own thing. There is low co-ordination.

    Definition of *Coherent Time-Flows*: Let the coherence be the degree of coordination between many many individual time-flows.

    Imagine that all the dancers in the ball-room start to coordinate their actions (music starts and everyone starts dancing according to strict timing).

    Then the room suddenly has highly coherent time-flows: all the dancers move in unison.

    What is consciousness? Consciousness is the music in the ball-room! It is any communication system that produces coherent time-flows.

    Consciousness will be fully explained by a new kind of science…the science of time-flows.
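    The ball-room picture closely parallels a standard piece of mathematics, the Kuramoto model of coupled oscillators; the sketch below only shows phases synchronising under coupling (all parameter values are arbitrary, and the mapping to consciousness is of course the conjecture, not the code):

```python
import cmath
import math
import random

random.seed(0)
N, K, DT, STEPS = 50, 2.0, 0.05, 400   # dancers, coupling, time step, steps

# Each 'pair of dancers' is an oscillator with its own phase and tempo.
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
tempos = [random.gauss(1.0, 0.1) for _ in range(N)]

def coherence(ph):
    # Kuramoto order parameter r in [0, 1]:
    # 0 = everyone doing their own thing, 1 = perfect unison.
    return abs(sum(cmath.exp(1j * t) for t in ph)) / len(ph)

before = coherence(phases)
for _ in range(STEPS):
    r = coherence(phases)
    mean = cmath.phase(sum(cmath.exp(1j * t) for t in phases))
    # The coupling term plays the role of the 'music': each oscillator is
    # pulled toward the mean phase in proportion to the current coherence.
    phases = [t + DT * (w + K * r * math.sin(mean - t))
              for t, w in zip(phases, tempos)]
after = coherence(phases)

print(round(before, 2), round(after, 2))  # coherence rises toward 1
```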

  49. David Duffy says:

    Hi Tom. It seems to me that the swapping around of qualia in synaesthesia speaks to a certain substitutability. As I’ve previously commented, our experience of a new smell is not a surprise. The number of olfactory qualities – that is, the dimensions of the experienced olfactory space – is only 16-20 (another bit of psychophysics), far fewer than the number of different receptor types, but presumably restricted to that number physiologically. Coming back to intensity: even though we can map just noticeable differences in quality onto a mathematical model, why is that less mysterious than the hearing-sight difference?

  50. Tom Clark says:

    David: “Coming back to intensity, even though we can map just noticeable differences [JNDs] in quality onto a mathematical model, why is that less mysterious than the hearing-sight difference?”

    I’d say it’s because the quantitative regularities characteristic of JNDs are clearly a function of physiological mechanisms keyed to detecting the strength of the impinging stimulus (no mystery here), whereas the subjectively felt qualitative difference between color and auditory experience isn’t an obvious function of any mechanism.

  51. zarzuelazen says:

    Peter, Tom and David

    Classical thermodynamics was mainly about systems close to equilibrium (where there are low energy flows). *Non-equilibrium* thermodynamics is a very new science that’s poorly understood.

    Recently, an investigation into non-equilibrium thermodynamics revealed a startling new phenomenon, ‘time crystals’ – a new form of matter that appears to oscillate in time (a repeated pattern, or time-flow) without using any energy!

    http://www.sciencealert.com/it-s-official-time-crystals-are-a-new-crazy-state-of-matter-and-now-we-can-create-them

    Now this just scratches the surface of this new field. Science is only just starting to investigate non-equilibrium thermodynamics, and immediately something really new and weird has been found.

    I want to point out again that the brain is a non-equilibrium thermodynamic system 😉

    What this strongly suggests is that the ideas I’ve posted in this thread are very much on the right track.

    Perhaps the correct metaphor for thinking about the brain/mind is that it’s something related to the ‘time crystal’ – a very complex time-keeping device.

    In other words, I’m saying that the mind is a giant clock, and consciousness is the ‘tick-tock’.

    I also want to point to a new book by neuroscientist Dean Buonomano, ‘Your Brain is a Time Machine’, who is very much pushing the same ideas as I am:

    http://nymag.com/scienceofus/2017/04/the-human-brain-is-a-time-machine.html
