What Machines Can’t Do

Here’s an IAI debate with David Chalmers, Kate Devlin, and Hilary Lawson.

In ultra-brief summary, Lawson points out that there are still things computers perform poorly at: recognising everyday real-world objects, notably. (Sounds like a bad prognosis for self-driving cars.) For Lawson, thought is a way of holding different things as the same. Devlin thinks computers can’t yet do what humans do, but that in the long run they surely will.

Chalmers points out that machines can do whatever brains can do because the brain is a machine (in a sense not adequately explored here, though Chalmers himself indicates the main objections).

There’s some brief discussion of the Singularity.

In my view, thoughts are mental or brain states that are about something. As yet, we have no clear idea of what this aboutness is or how it works, whether it is computational (probably not, I think), or whether it is subserved by computation in a way that would let it benefit from the exponential growth in computing power (a growth which may, in any case, have stopped being exponential). At the moment, computers do a great imitation of what human translators do, but to date they haven’t even got started on real meaning, let alone set off on an exponential growth curve. Will modern machine learning techniques change that?

5 thoughts on “What Machines Can’t Do”

  1. I hope this isn’t being too nitpicky, but the first sentence stating your personal view ran smack into my pet peeve.

    A thought must be an event, not a state. You could say a brain state is the result of a thought, or that a thought is the process of changing one brain state to another, but please stop saying that a thought is a brain state. (Same goes for consciousness. Consciousness is about events. To refer to someone’s or something’s consciousness is to refer to their/its ability to perform certain events.)

    That said, I think I know how the aboutness works, but the explanation involves a bunch of terms (like semantic information and symbolic sign and causal history) which need to be explained/defined. For what it’s worth, David Haig seems to have hit upon approximately the same idea, although he places the meaning (aboutness) in a different place than I do. (He places the meaning in the output of the process, whereas I place it in the semantic information which is the input.) Here is Dennett’s introduction to Haig’s essay. http://philsci-archive.pitt.edu/13259/1/Strange%20Inversion%20and%20Making%20sense.pdf

    So my take from the video was that the best insight came from Lawson, who mentioned that machines won’t have human-like ability until they can generate and then use their own concepts. (He used different language, but I’m pretty sure that was the gist.)


  2. Well Peter,

    I don’t think machine learning techniques touch consciousness; they’re about something else: pattern recognition. Statistical methods (which are the foundation of machine learning) are all about finding *correlations* (patterns) between things. And machine learning is really just an extension of statistics to cover models of random (stochastic) processes (see the short sketch after the links below). Thus we can say that…

    Machine Learning extends Probability & Statistics

    Links to my wikibooks here:

    Machine Learning
    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Machine_Learning

    Probability&Statistics
    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Probability%26Statistics
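
    A minimal sketch of the reduction mentioned above (illustrative Python only, not anything taken from the wikibooks linked here): the simplest piece of “machine learning”, a least-squares linear fit, is just the classical Pearson correlation rescaled by the spread of the data.

    ```python
    # Sketch: a least-squares linear fit ("machine learning") reduces to the
    # Pearson correlation ("statistics") rescaled by the data's standard deviations.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 2.0 * x + rng.normal(scale=0.5, size=200)   # a noisy (stochastic) process

    r = np.corrcoef(x, y)[0, 1]             # classical statistic: correlation
    slope, intercept = np.polyfit(x, y, 1)  # "learned" model y = slope*x + intercept

    # The learned slope equals r * (std of y / std of x), up to floating point.
    print(slope, r * y.std() / x.std())
    ```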

    Consciousness, I think, has its foundations in *knowledge representation*, or (to use the technical term) *ontology*. In an ontology, we seek to understand which entities exist and then the logical relationships between them. In other words, we group data together into coherent categories (which we call ‘concepts’); a toy sketch of this kind of grouping follows the links below. We’re carving reality at the joints and then using these concepts to plan for the future. So the study of consciousness (phenomenology) is really just a generalization of ontology and concepts, and thus we can say that:

    Phenomenology extends Ontology & Concepts

    In exactly the same way that machine learning extends statistics and probability, phenomenology extends ontology.

    Links to my wikibooks here:

    Phenomenology
    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Phenomenology

    Ontology&Concepts
    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Ontology%26Concepts
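
    A toy sketch of the concept-grouping idea above (illustrative Python with invented example entities, not the commenter’s own proposal): individual entities are grouped under concepts, and “is_a” links between concepts form a small ontology that can be queried.

    ```python
    # Sketch: a tiny ontology as "is_a" links between entities and concepts.
    IS_A = {
        "honey_bee": "insect",
        "insect": "animal",
        "amoeba": "single_celled_organism",
        "single_celled_organism": "organism",
        "animal": "organism",
    }

    def falls_under(entity, concept):
        """Follow is_a links upward to see whether entity belongs to concept."""
        current = entity
        while current is not None:
            if current == concept:
                return True
            current = IS_A.get(current)
        return False

    print(falls_under("honey_bee", "organism"))  # True
    print(falls_under("amoeba", "insect"))       # False
    ```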

  3. I can just about see why a Victorian might claim that the brain is a machine, but I’m not sure how the metaphor gets us anywhere interesting in the 21st century, let alone the insistence that it is literally a machine. The brain is clearly not a machine, or at least not like any machine humans have ever made. Has Chalmers moved on from panpsychism now? He seems to go from one bad idea to a worse one every time his name comes up.

    Wake me up when a computer with a volume of less than 1cc can independently walk, fly, navigate, find and process food, excrete metabolic wastes, avoid predators, fight infections, reproduce, cooperate with others, etc. Stuff any social insect can do. Build something to rival the sophistication of an amoeba, let alone a honey bee, and I’ll be impressed.

    As for computers replacing us? Only if we *let* them. We should pay attention to the Amish and their policy that a machine should never make a man unemployed. Prioritise humans over machines.

    “At the moment, computers do a great imitation of what human translators do”

    I write as someone who does translations most days. I study texts that were translated from Sanskrit into Chinese (and sometimes back the other way) in the early medieval period (ca. 4th-7th centuries). I frequently translate both languages into English, and sometimes Chinese (back) into Sanskrit. I would say that computers are *lousy* at imitating human translators.

    Computers have no idea whether their translation makes sense. And it often doesn’t. And when it doesn’t they are not able to adjust. They often choose the wrong synonym for the context or idiom. They have no ability to refine a translation to suit the intended audience. They have no awareness of idioms; no awareness of context or subtext; no awareness of how culture affects understanding of sentences. And this is for *prose* texts. Give them poetry and watch them fail to catch metaphors or find culturally appropriate images in translation. Or watch them flounder going from an inflected language to a prepositional language; or from a language which has few markers for grammar but relies on reading the context (like medieval Chinese), to a language that uses elaborate and precise markers for a complex set of well defined grammatical relations (like Sanskrit). Even humans struggle with this (which is one of the main themes of my work). And we’re just talking written texts, not spoken!

    Computers are nowhere near doing a “great imitation of what human translators do”. It’s not impossible that they will get there, but not yet and not soon. The trouble is that, so far, statistical methods do better than methods that involve trying to imitate *comprehension*. Without comprehension at both ends, of both language and culture, no one/thing can *ever* be a good translator.

    Computer people need to get out more. They lose sight of how amazing the natural world is by comparison with their toys. Spend a day watching insects and thinking about how they can possibly do what they do with such minimal resources on such a tiny scale. Then communicate that experience. A computer is nowhere near being able to do what we do effortlessly and without access to mains electricity.

  4. To Jayarava, nicely put. This comparing brains to computers is my pet peeve. Vacuum cleaners take stuff in and retain it too. My goodness, they’re just like brains!
