Success with Consciousness

What would success look like, when it comes to the question of consciousness?

Of course it depends which of the many intersecting tribes that dispute or share the territory you belong to. Robot builders and AI designers have known since Turing that their goal is a machine whose responses cannot be distinguished from those of a human being. There’s a lot wrong with the Turing Test, but I still think it’s true that if we had a humanoid robot that could walk and talk and interact like a human being in a wide range of circumstances, most people wouldn’t question whether it was conscious or not. We’d like a theory to go with our robot, but the main thing is whether it works. Even if we knew it worked in ways that were totally unlike biological brains, it wouldn’t matter – planes don’t fly the way birds do, but so what, it’s still flying. Of course we’re a million miles from such a perfectly human robot, but we sort of know where we’re going.

It’s a little harder for neurologists; they can’t rely quite so heavily on a practical demonstration, and reverse engineering consciousness is tough. Still, there are some feats that could be pulled off that would pretty much suggest the neurologists have got it. If we could reliably read off from some scanner the contents of anyone’s mind, and better yet, insert thoughts and images at will, it would be hard to deny that the veil of mystery had been drawn back quite a distance. It would have to be a general-purpose scanner, though; one that worked straight away for all thoughts in any person’s brain. People have already demonstrated that they can record a pattern from one subject’s brain when that subject is thinking a known thought, and then, in the same session with the same subject, recognise that same pattern as a sign of the same thought. That is a much lesser achievement, and I’m not sure it gets you a cigar, let alone the Nobel prize.
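To make the gap concrete, here is a toy sketch in Python (with made-up numbers standing in for scan data; nothing here reflects any real decoding pipeline) of the kind of within-session template matching that has actually been demonstrated: store a labelled activity pattern, then recognise its nearest match later in the same session with the same subject. Nothing in it carries over to a new session, let alone a new brain, which is why it falls so far short of the general-purpose scanner.

    # Toy sketch of within-session "thought" recognition by template matching.
    # The data is entirely invented; real decoding pipelines are far more elaborate.
    import numpy as np

    rng = np.random.default_rng(0)

    def record_template(thought_label, n_voxels=200):
        # Pretend to record an activity pattern while the subject thinks a known thought.
        return thought_label, rng.normal(size=n_voxels)

    def recognise(pattern, templates):
        # Return the label of the stored template most correlated with the new pattern.
        best_label, best_r = None, float("-inf")
        for label, template in templates:
            r = np.corrcoef(pattern, template)[0, 1]
            if r > best_r:
                best_label, best_r = label, r
        return best_label

    # "Training": record patterns for known thoughts, one subject, one session.
    templates = [record_template("Christmas"), record_template("breakfast")]

    # Later in the same session: a noisy re-occurrence of the "Christmas" pattern.
    new_pattern = templates[0][1] + 0.3 * rng.normal(size=200)
    print(recognise(new_pattern, templates))   # -> Christmas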

What about the poor philosophers? They have no way to mount a practical demonstration, and in fact no such demonstration can save them from their difficulties. The perfectly human robot does not settle things for them; they tell it they can see it appears to be able to perform a range of ‘easy’ cognitive tasks, but whether it really knows anything at all about what it’s doing is another matter. They doubt whether it really has subjective experience, even though it assures them that its own introspective evidence says it does. The engineer sitting with them points out that some of the philosophers probably doubt whether he has subjective experience.

“Oh, we do,” they admit, “in fact many of us are pretty sure we don’t have it ourselves. But somehow that doesn’t seem to make it any easier to wrap things up.”

Nor are the philosophers silenced by the neurologists’ scanner, which reveals that an apparently comatose patient is in fact fully aware and thinking of Christmas. The neurologists wake up the subject, who readily confirms that their report is exactly correct. But how do they know, ask the philosophers; you could be recording an analogue of experience which gets tipped into memory only at the point of waking, or your scanner could be conditioning memory directly without any actual experience. The subject could be having zomboid dreams, which convey neural data, but no actual experience.

“No, they really couldn’t,” protest the neurologists, but in vain.

So where do philosophers look for satisfaction? Of course, the best thing of all is to know the correct answer. But you can only believe that you know. If knowledge requires you to know that you know, you’re plummeting into an infinite regress; if knowing requires appropriate justification then you’re into a worm-can opening session about justification of which there is no end. Anyway, even the most self-sufficient of us would like others to agree, if not recognise the brilliance of our solution.

Unfortunately you cannot make people agree with you about philosophy. Physicists can set off a bomb to end the argument about whether E really equals mc²; the best philosophers can do is derive melancholy satisfaction from the belief that in fifty years someone will probably be quoting their arguments as common sense, though they will not remember who invented them, or that anyone did. Some people will happen to agree with you already of course, which is nice, but your arguments will convert no-one; not only can you not get people to accept your case; you probably can’t even get them to read your paper. I sympathised recently with a tweet from Keith Frankish lamenting how he has to endlessly revisit bits of argument against his theory of illusionism, ones he’s dealt with many times before (oh, but illusions require consciousness; oh, if it’s an illusion, who’s being deceived…). That must indeed be frustrating, but to be honest it’s probably worse than that; how many people, having had the counter-arguments laid out yet again, accept them or remember them accurately? The task resembles that of Sisyphus, whose punishment in Hades was to roll a boulder up a hill from which it invariably rolled down again. Camus told us we must imagine Sisyphus happy, but that itself is a mental task which I find undoes itself every time I stop concentrating…

I suppose you could say that if you have to bring out your counter-arguments regularly, that itself is some indicator of having achieved some recognition. Let’s be honest, attention is what everyone wants; moral philosophers all want a mention on The Good Place, and I suppose philosophers of mind would all want to be namechecked on Westworld if Julian Jaynes hadn’t unaccountably got that one sewn up.

Since no-one is going to agree with you, except that sterling band who reached similar conclusions independently, perhaps the best thing is to get your name associated with a colourful thought experiment that lots of people want to refute. Perhaps that’s why the subject of consciousness is so full of them, from the Chinese Room to Mary the Colour Scientist, and so on. Your name gets repeated and cited that way, although there is a slight danger that it ends up being connected forever with a point of view you have since moved on from, as I believe is the case with Frank Jackson himself, who no longer endorses the knowledge argument exemplified by the Mary story.

Honestly, though, being the author of a widely contested idea is second best to being the author of a universally accepted one. There’s a Borges story about a deposed prince thrown into a cell where all he can see is a caged jaguar. Gradually he realises that the secrets of the cosmos are encoded in the jaguar’s spots, which he learns to read; eventually he knows the words of magic which would cast down his rival’s palace and restore him to power; but in learning these secrets he has attained enlightenment and no longer cares about earthly matters. I bet every philosopher who reads this story feels a mild regret; yes, of course enlightenment is great, but if only my insights allowed me to throw down a couple of palaces? That bomb thing really kicked serious ass for the physicists; if I could make something go bang, I can’t help feeling people would be a little more attentive to my corpus of work on synthetic neo-dualism…

Actually, the philosophers are not the most hopeless tribe; arguably the novelists are also engaged in a long investigation of consciousness; but those people love the mystery and don’t even pretend to want a solution. I think they really enjoy making things more complicated and even see a kind of liberation in the indefinite exploration; what can you say for people like that!

16 thoughts on “Success with Consciousness”

  1. I’ve often thought this as well, that there is no solution to the Hard Problem b/c aesthetic priors will govern what solutions people find acceptable…but I’ve recently begun to wonder if this is truly the case.

    We have theories that have made predictions and sought to go further and make treatments for varied neurological issues. Orch-OR did seem to predict quantum biology for example, though the field remains on the cusp of acceptance last I checked. I believe IIT proponents Koch and Tononi are still looking for a way to use their theory as a means of treatment? IIRC Hameroff is doing the same, or at least looking to a quantum biological explanation for anesthetics.

    This isn’t to advocate for either theory, but rather to note that there are mere guesses vs real theories that provide some predictions as to structure/function of the brain/mind. As guesses move into the territory of real theory they should give us predictions of some sort. At least about the brain/mind but also – for Idealist and Panpsychic theories – some predictions about the structures/processes of reality beyond the brain.

  2. Peter, thanks for this thoughtful post. Apparently you’ve triggered (ahem) multiple touch points in me, so I feel the need to comment on each in order. (Sorry)

    1. “Of course we’re a million miles from such a perfectly human robot, but we sort of know where we’re going.” If by a million miles you mean maybe 10-20 years, okay. With deep learning neural nets we have developed the basic machinery of pattern recognition. I think we’re just about at the point where we should follow Mother Nature’s route and stop going deeper, and instead go wide. Adding more cerebral cortex (and some slightly different organization?) is how Nature went from lower mammals to us. This addition gave us the capacity to recognize more stuff. So our current technology includes facial recognizers, and cat recognizers, and Go position recognizers. It’s not much of a step to have one controller (recognizer) listening to lots (and lots) of such simple recognizers and determining (via some attention mechanism) which ones are important. So people talk about how a cat recognizer can be adversarially fooled. But what if the recognizer were also getting input from a fur recognizer, and a face recognizer, and an eye recognizer, and an ear recognizer, and a tail recognizer, and a leg recognizer, etc.? I think the adversary would have a little more difficulty in such a case. (A toy sketch of this arrangement appears at the end of this comment.) Finally, the simple fact that I have this idea means that people have already been working on it for a year or two.

    Remember, in 1960 the moon was a million miles away.

    *
    [I guess I’ll put further comments in another reply]
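    A toy sketch of the controller-over-many-recognizers idea from point 1 (every recognizer, score and weight here is an invented placeholder; a real system would learn them):

        # Toy sketch: a controller that weighs the votes of many simple recognizers.
        # Every recognizer, score and weight below is invented for illustration.
        from typing import Callable, Dict

        # Each "recognizer" maps an image to a confidence in [0, 1].
        Recognizer = Callable[[object], float]

        def make_controller(recognizers: Dict[str, Recognizer], weights: Dict[str, float]):
            # Return a cat-detector that attends to many feature recognizers at once.
            def is_cat(image) -> bool:
                # Weighted vote: an adversarial patch that fools the whole-cat
                # recognizer still has to fool the fur, ear, eye and tail recognizers.
                score = sum(weights[name] * rec(image) for name, rec in recognizers.items())
                return score / sum(weights.values()) > 0.5
            return is_cat

        # Placeholder recognizers standing in for trained networks.
        recognizers = {
            "whole_cat": lambda img: 0.2,   # fooled by an adversarial perturbation
            "fur":       lambda img: 0.9,
            "ears":      lambda img: 0.8,
            "eyes":      lambda img: 0.85,
            "tail":      lambda img: 0.7,
        }
        weights = {name: 1.0 for name in recognizers}

        detect_cat = make_controller(recognizers, weights)
        print(detect_cat(None))   # True: the ensemble outvotes the fooled recognizer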

  3. 2. “If we could reliably read off from some scanner the contents of anyone’s mind, […]. It would have to be a general purpose scanner, though; one that worked straight away for all thoughts in any person’s brain.”

    This isn’t going to happen without knowing every connection of every neuron, so it’s not going to happen. The best way to explain why is with a thought experiment (which I may have posted here before, or something similar). Suppose you have one end of a bundle of wires, each with a light on the end. Each wire looks exactly the same. Each wire comes from a computer that recognizes something, based on a camera looking at a room. So one computer recognizes red objects, another recognizes cats, (see list in my comment #1 for more). Whenever a red object is in the room, the appropriate light is on. Whenever a cat is in the room, a different light is on. When a red cat is in the room, both of those lights are on. (We know cats would love to be painted red.)

    Now suppose someone else has a similar bundle of wires, but the arrangement of the wires is (mostly? partially?) random. Even if you know which wire is the “red object” wire in one bundle, there’s no way to tell which is the “red object” wire in the other bundle, at least not until that wire gets turned on in response to a red object being in the room. (A small simulation of this identification process is sketched below.)

    *
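    A small simulation of the identification problem described above (the features, stimuli and the scrambled wiring are all invented for illustration): the only way to label the second bundle’s wires is to present known stimuli and watch which lights come on.

        # Toy sketch of the wire-bundle problem: identical-looking wires can only be
        # labelled by correlating their activity with known stimuli.
        import random

        features = ["red_object", "cat", "dog", "chair", "ball"]

        def lights_for(stimuli, wiring):
            # Which positions in the bundle light up for a given set of stimuli.
            return {position for position, feature in wiring.items() if feature in stimuli}

        # Bundle A: wire i carries feature i. Bundle B: same features, scrambled order.
        bundle_a = {i: f for i, f in enumerate(features)}
        positions = list(range(len(features)))
        random.shuffle(positions)
        bundle_b = {p: f for p, f in zip(positions, features)}

        # Knowing bundle A's labelling tells you nothing about bundle B. Identification:
        # present one known stimulus at a time and see which light turns on.
        recovered = {}
        for feature in features:
            lit = lights_for({feature}, bundle_b)
            recovered[lit.pop()] = feature

        print(recovered == bundle_b)   # True: labels recovered only by probing with stimuli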

  4. 3. “What about the poor philosophers? They have no way to mount a practical demonstration, and in fact no such demonstration can save them from their difficulties.”

    I don’t think this is correct. I think philosophers will have succeeded if they can

    A. describe a physical system which, when it operates, demonstrates every behavior and cognitive capability that we associate with consciousness, and

    B. explain why that system (assuming the cognitive abilities are human-level) “experiences” “qualia” (explaining exactly what those terms mean), and

    C. explain why that system would claim that its qualia are “ineffable”, that it itself is “conscious”, and that explaining its own consciousness is a “hard problem”.

    The first task is the “easy problem”. The second task is the “hard problem”, and the third is the “meta-problem”.

    If all of these can be explained by a physical system, then nothing else is needed, and adding anything more is “multiplying entities without necessity”. In other words, p-zombies should be cut out by Occam’s razor.

    *

  5. sciam.com
    “Can Art Solve the Hard Problem?”
    A play dramatizes the deepest of all mysteries, the mind–body problem
    By John Horgan on December 11, 2018

    like that, thanks

  6. The problem with philosophy is that it’s great at coming up with the questions, along with hypotheses. In that, it can provide an important service. But the only authoritative answers it can provide are tautological ones. For answers outside of logical or mathematical proofs, we need science (broadly construed), although philosophy can often help interpret those answers in a constant feedback loop.

    For consciousness in particular, I think that as the Chalmers “easy” problems are solved one by one, the overall hard problem will trouble us less and less, much as biological vitalism faded as microbiology progressed and it became evident that life is a system built on molecular chemistry.

  7. @ Arnold – Interesting piece, helps to show why there are likely no Easy Problems of Consciousness – they all require Information & arguably Intentionality. In fact of Fodor’s trinity (Intentionality, Subjectivity, Rationality) it seems Intentionality is the worst one for reductionism…IMO anyway…

    Re: Art & Consciousness, see also JF Martel’s Consciousness & the Aesthetic Vision:

    http://www.reclaimingart.com/blog/consciousness-in-the-aesthetic-vision

  8. TPTA: Temporal perception and temporal action

    Consciousness is simply an internal clock. In a dynamic system, many processes need to be coordinated and synchronized to get the system moving in the same direction (temporal action). And, the system needs to have an internal model of its own activities to plan for the future, in the form of a narrative for imagining alternative possibilities (temporal perception).

    SOLVED !

  9. It looks a bit surprising to consider that passing the Turing Test could be a criterion for being conscious in the human sense.
    The TT addresses the possibility of the same meaning generation by robots and by humans. Philosophers and engineers can agree that this is not possible with today’s AI (https://philpapers.org/rec/MENTTC-2).
    Let us honour Sisyphus (thanks Peter) by recalling that this subject has already been introduced in Conscious Entities (https://www.consciousentities.com/2018/02/the-ontological-gap/)
    But perhaps the key point is to keep in mind that Camus wrote The Rebel after The Myth of Sisyphus.

  10. Christophe,

    In the comments on Can We Talk About This? (Nov 27th), it seemed most of us did not believe in qualia in the sense of internal experiences and perceptions that have no impact on behavior or anything else (aside from themselves?) in the physical world.

    If so, it’s quite possible that many of us believe that in the limit, as the tester gets more sophisticated, the Turing Test would be accurate. Not because consciousness is defined behaviorally, but because there is no way to imitate the behavior of a conscious system by anything but another conscious system.

    (Take, JUST AS AN EXAMPLE, non-compatibilist free will. It presupposes the existence of behavior that is neither modelable with an algorithm nor random. Some middle ground.)

  11. Something needs to be said about “synthetic neo-dualism”…

    That inferring and synthesizing could be the same if leading to “thesis, antithesis, synthesis” the “triad”…
    Something new comes from a big bang as opposed to something old comes from a big bang, synthetic is big then…
    …shouldn’t consciousness be about seeing as opposed to not being about seeing, ‘like that is for’ something to talk about…

    Neo-dualism seems to be more humor…thanks

  12. The key approach to understanding consciousness is through realising that the physical needs, deeds and seeds of the body are the initial causation of what happens in the brain. The brain evolved from the body and together they evolved. However, so many people who should know better wrongly assume that the brain is initially causal. This has resulted in all kinds of problems in trying to understand consciousness.

  13. James

    “Remember, in 1960 the moon was a million miles away.”

    well .. about 240,000, actually.

    The scientific principles needed to get to the moon were available c. 1650 (as for the science of the rocket technology .. well .. 19th century if you want to include fluid dynamics). Moon travel required Newtonian mechanics, and nothing more. It’s a sophisticated engineering challenge but a very basic scientific one.

    Getting to the moon is scientifically easy. There aren’t that many variables. It’s much harder to synthesise a pig liver without pig DNA, or predict the weather. What makes things complicated are the numbers of variables and the problem of defining initial conditions. Plonking a rocket on the moon could have been calculated with no small degree of accuracy by Newton 300 years ago (see the back-of-the-envelope sketch after this comment).

    There are ZERO scientific principles that link in a coherent way the activity of brain matter with mental activity. Zero: nothing: nada. No theories in a field dominated by scale (in the sense that the numbers of variables are enormous, far higher than for a rocket to the moon). That’s not promising.

    JBD
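    A back-of-the-envelope illustration of the point above that moon flight is plain Newtonian physics with few variables: the classic escape-velocity calculation, using standard textbook constants.

        # Escape velocity from Earth's surface, straight from Newtonian gravity:
        # v_esc = sqrt(2 * G * M / r), with standard textbook constants.
        import math

        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        M_EARTH = 5.972e24   # mass of the Earth, kg
        R_EARTH = 6.371e6    # mean radius of the Earth, m

        v_escape = math.sqrt(2 * G * M_EARTH / R_EARTH)
        print(f"{v_escape / 1000:.1f} km/s")   # ~11.2 km/s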

  14. @James: For a while, there was no scientific link between organic and inorganic chemistry, between electricity and magnetism, between them and light… The one thing we can be sure of is that your negativism will never expand our understanding of consciousness.
