In Wired, Hugh Howey gives a bold, amusing, and hopelessly optimistic account of how to construct consciousness. He thinks it wouldn’t be particularly difficult. Now you might think that a man who knows how to create artificial consciousness shouldn’t be writing articles; he should be building the robot mind. Surely that would make his case more powerfully than any amount of prose? But Howey thinks an artificial consciousness would be disastrous. He thinks even the natural kind is an unfortunate burden, something we have to put up with because evolution has yet to find a way of getting the benefits of certain strategies without the downsides. Still, since he doesn’t believe that conscious AI would take over the world, or threaten human survival, I would have thought one demonstration piece was worth the effort: consciousness sucks, but here’s an example just to prove the point?

What is the theory underlying Howey’s confidence? He rests his ideas on Theory of Mind (which he thinks is little discussed): the ability to infer the thoughts and intentions of others. In essence, he thinks that was a really useful capacity for us to acquire, helping us compete in the cut-throat world of human society; but when we turn it on ourselves it disastrously generates wrong results, in particular about our having conscious states at all.

It remains a bit mysterious to me why he thinks a capacity that is so useful applied to others should be so disastrously and comprehensively wrong when applied to ourselves. He mentions priming studies, where our behaviour is actually determined by factors we’re unaware of; priming’s reputation has suffered rather badly recently in the crisis of non-reproducibility, but I wouldn’t have thought even ardent fans of priming would claim our mental content is entirely dictated by priming effects.

Although Dennett doesn’t get a mention, Howey’s ideas seem very Dennettian, and I think they suffer from similar difficulties. Our Theory of Mind leads us to attribute conscious thoughts and intentions to others; but what exactly are we attributing to them? The theory tells us that neither we nor they actually have these conscious contents; all any of us has is self-attributions of conscious contents. So what, we’re attributing to them some self-attributions of self-attributions of… The theory covertly assumes we already have and understand the very conscious states it is meant to analyse away. Dennett, of course, has some further things to say about this, and he’s not as negative about self-attributions as Howey.

But you know, it’s all pretty implausible intuitively. Suppose I take a mouthful of soft-boiled egg which tastes bad, and I spit it out. According to Howey what went on there is that I noticed myself spitting out the egg and thought to myself: hm, I infer from this behaviour that it’s very probable I just experienced a bad taste, or maybe the egg was too hot, can’t quite tell for sure.

The thing is, there are real conscious states irrespective of my own beliefs about them (which indeed may be plagued by error). They are characterised by having content and intentionality, but these are things Howey does not believe in, or rather, it seems, has never thought of; his view that a big bank of indicator lights shows a language capacity suggests he hasn’t gone into this business of meaning and language quite deeply enough.

If he had to build an artificial consciousness, he might set up a community of self-driving cars, let one make inferences about the motives of the others, and then apply that capacity to itself. But it would be a stupid thing to do, because it would get things wrong all the time; in fact at this point Howey seems to be tending towards the view that all Theory of Mind is fatally error-prone. It would be better, he reckons, if all the cars could have access to all of each other’s internal data, just as universal telepathy would be better for us (though in the human case it would be undermined by mind-masking freeloaders).

Would it, though? If the cars really had intentions, their future behaviour would not be readily deducible simply from reading off all the measurements. You really do have to construct some kind of intentional extrapolation, which is what the Dennettian intentional stance is supposed to do.

I worry just slightly that some of the things Howey says seem to veer close to saying, hey a lot of these systems are sort of aware already; which seems unhelpful. Generally, it’s a vigorous and entertaining exposition, even if, in my view, on the wrong track.


  1. Lloyd says:

    But notice that you never made a conscious decision to spit out the egg. That part was definitely automatic. The only question is the later appraisal of why your mouth acted the way it did. I find the view entirely plausible that no conscious agent was involved.

  2. Peter says:

    So I never consciously tasted anything?

    Alright, Lloyd, what if I have to sip from two goblets and I know the one with delicious strawberry taste is poisoned while the one with the lovely mint flavour is safe. I get strawberry and spit; do I then find I’m wondering why I did that (and hypothesising that I don’t like strawberry)?

  3. john davey says:

    Despite his use of the word ‘cool’ with such uncool frequency, he sounds like one of those Spock-like characters you used to get in 50s and 60s movies. Human mentality (well – actually, just humanity) is, fundamentally, a defect. An error. A design fault. On the one hand they’re the protectors of Darwin; on the other they absolutely hate the outcomes of biological processes if they can’t see why they are as they are.

    Brains are far more complicated and efficient than computers. Computers are cuckoo clocks with bells on. They aren’t “efficient” at all – has this man worked on computers all his life? I doubt it. Most of all, the comparisons of performance are pointless because brains produce mental experiences as a natural consequence of operation. Computers move digital tokens up or down; that’s it. We read the digital tokens and act on them, usually using other machines to speed up the process of reading the digital tokens that the computer has generated in order to translate them into usable form. It’s not sophisticated – at all.


  4. Paul Topping says:

    Howey’s article seems incorrect in so many ways. As you mention here, he seems to be completely unaware of how beneficial consciousness is to our survival and success in dealing with the world. His article is certainly amusing but not very informative.

    As I see it, consciousness is a necessary ingredient in many of our activities: language, which Howey DID mention, but also pretty much any working out of conclusions from premises. While consciousness does concoct stories we tell ourselves, these are only needed for those thoughts that arise unconsciously and therefore require explanation. The other stories are simply our conversion of our experiences (or deliberate lies) into words we can tell others.

    As far as spitting out the rotten egg is concerned, that has a simple explanation. We spit it out before we are conscious of the bad taste because evolution has given us a direct connection between the taste sense and the spitting out reflex (I’m guessing). The registering of the bad taste in our conscious mind simply happened later because it is a slower process.

  6. Steve Phibrook says:

    Anyone involved in this should study Buddhist and Yogic scriptures/texts, especially before voicing opinions. They had consciousness dissected, analyzed, nailed down and explained far better than any modern scientists I have read. Having read much of both, I find the “experts” put on quite an amusing display of ignorance.

  7. Patrick Kenny says:

    I side with Lloyd #1 on the soft-boiled egg question. The James-Lange-Neuroscience account is that an emotion like disgust consists of a bodily reaction first, followed by a (relatively slow) conscious reaction later. Think of the curate’s egg: “Parts of it are excellent, my Lord”.

    I enjoyed Howey’s piece but found myself a bit confused about his take-home message. It seems likely that Howey was commissioned by Wired to argue for what I think is an uncontroversial view about the development of AI, namely that progress will be made by focusing on specific problems (e.g. self-driving cars, machine translation) rather than attempting to build humanoid robots. To this end, Howey spins an entertaining tale that it would make no sense from an engineering point of view to equip robots with a human-like capacity for delusional confabulation. I think he gets a bit lost here.

    One point that Howey misses is that the human mind is constructed in such a way that conscious action is impossible in a situation which the mind is incapable of explaining to itself. In such an impasse, any explanation (delusional or not) will do if it gets the job done — biology doesn’t care about whether or not the explanation actually holds water.

    And Howey’s assumption that by focussing on specific problems human ingenuity will always be able to find “perfectly engineered” solutions (his words) is just a fantasy. Even successful AI technologies like machine translation or face recognition are just hacks which are guaranteed to fail miserably as soon as they are tested sufficiently rigorously.
    Fortunately for the working engineer’s self-esteem, evolution’s kludges are even less elegant than the hacks he has to resort to.

  8. Peter says:

    Not to dig in on what may have been a weak example (and I offered a replacement, by the way), but I do not believe that bad tastes always produce uncontrolled reflexive spitting, let alone that the bad taste experience is worked out afterwards from observations of one’s own behaviour.

    I don’t know why you mention the curate as I see no sign of reflexive spitting at the Bishop’s table and Mr Jones’ words surely imply a complex and considered conscious reaction to a conscious experience of bad egg taste.

    But any reflex cases are irrelevant anyway, unless you argue that all conscious behaviour is a reflex. By insisting that the reaction to the taste of the egg is reflexive, you’re not bringing a counter-argument to my case, you’re really just refusing to consider my actual point the way I intended it.

  9. Patrick Kenny says:

    Peter, I didn’t mean to suggest that we do not have conscious states (Mr Jones certainly has) but I do believe that conscious processing is a secondary process in our emotional lives and that rationality is emotion-driven.

  10. Peter A. says:

    “He rests his ideas on Theory of Mind (which he thinks is little discussed); the ability to infer the thoughts and intentions of others.”

    Oh not this ‘Theory of Mind’ nonsense again! Apparently, those of us who are on the proverbial ‘autistic spectrum’ don’t have this ability according to a certain Mr. Simon Baron-Cohen, so does that mean we are therefore not quite human? That’s the implication isn’t it?

    This Hugh Howey character is a cretin if he really does believe that the issue of consciousness is “not particularly difficult”; it’s one of the most, if not THE most, difficult problem we know of. How is consciousness a burden? How can I take someone like this at all seriously?
