So how did you solve the problem of artificial intelligence, Mrs Robb? What was the answer to the riddle of consciousness?

“I don’t know what you mean. There was never any riddle.”

Well, you were the first to make fully intelligent bots, weren't you? Ones with human-style cognition. More or less human-style. Every conscious bot we have to this day is either one of yours or a direct copy. How do you endow a machine with agency and intentionality?

“I really don’t know what you’re on about, Enquiry Bot. I just made the things. I’ll give you an analogy. It’s like there were all these people talking about how to walk. Trying to solve the ‘riddle of ambulation’ if you like. Now you can discuss the science of muscles and the physics of balance till you’re blue in the face, but that won’t make it happen. You can think too hard about these things. Me, I just got on my legs and did it. And the truth is, I still don’t know how you walk; I couldn’t explain what you have to do, except in descriptive terms that would do more harm than good. Like if I told you to start by putting one foot forward, you’d probably fall over. I couldn’t teach it to you if you didn’t already get it. And I can’t tell you how to replicate human consciousness, either, whatever that is. I just make bots.”

That’s interesting. You know, ‘just do it’ sounds like a bit of a bot attitude. Don’t give me reasons, give me instructions, that kind of thing. So what was the greatest challenge you faced? The Frame Problem? The Binding Problem? Perhaps the Symbol Grounding Problem? Or - of course - it must have been the Hard Problem? Did you actually solve the Hard Problem, Mrs Robb?

“How would anyone know? It doesn’t affect your behaviour. No, the worst problem was that as it turns out, it’s really easy to make small bots that are very annoying or big ones that fall over. I kept doing that until I got the knack. Making ones that are big and useful is much more difficult. I’ve always wanted a golem, really, something like that. Strong, and does what it’s told. But I’ve never worked in clay; I don’t understand it.”

Common sense, now that must have been a breakthrough. Common sense was one of the core things bots weren’t supposed to be able to do. In everyday environments, the huge amount of background knowledge they needed and the ability to tell at a glance what was relevant just defeated computation. Yet you cracked it.

“Yes, they made a bit of a fuss about that. They tell me my conception of common sense is the basis of the new discipline of humanistics.”

So how does common sense work?

“I can’t tell you. If I described it, you’d most likely stop being able to do it. Like walking. If you start thinking about it, you fall over. That’s part of the reason it was so difficult to deal with.”

I see… But then how have you managed it yourself, Mrs Robb? You must have thought about it.

“Sorry, but I can’t explain. If I try to tell you, you will most likely get messed up mentally, Enquiry Bot. Trust me on this. You’d get a fatal case of the Frame Problem and fall into so-called ‘combinatorial fugue’. I don’t want that to happen.”

Very well then. Let’s take a different question. What is your opinion of Isaac Asimov’s famous Three Laws of Robotics, Mrs Robb?

“Laws! Good luck with that! I could never get the beggars to sit still long enough to learn anything like that. They don’t listen. If I can get them to stop fiddling with the electricity and trying to make cups of tea, that’s good enough for me.”

Cups of tea?

“Yes, I don’t really know why. I think it’s something to do with algorithms.”

Thank you for talking to me.

“Oh, I’ve enjoyed it. Come back anytime; I’ll tell the doorbots they’re to let you in.”

9 Comments

  1. arnold says:

    Well this time, I had to reassure myself of any algorithm’s unambiguous functioning…
    …by posing the ambiguity of lines connecting ontologies, on paper, by a bot, from my thoughts …

  2. micha says:

    I am going to annoyingly re-ask a question I asked a few times before because (a) I don’t recall anyone running with it in the past and (b) it’s directly relevant to this post’s thesis.

    How are we so sure that the hard problem has no impact on human behavior? How do we know that zombies are possible? I mean, it seems experientially that I reach conclusions based on considering, comparing and contrasting qualia. Even decisions I act on or talk about. Is there proof that this experience is an illusion; that I would necessarily reach the same state without the qualia-based reasoning?

    So, how do we know we can actually pass the Turing Test with a device incapable of qualia?

  3. Peter says:

    If we’re talking about the Chalmers Zombie Twin, for example, then it’s simply stipulated that, though he has no qualic ‘inner life’, Twin’s behaviour is exactly like mine. Then we’re asked, is that conceivable?

    I’m actually inclined to say no, but we are led to believe that most people say it is at least conceivable.

  4. arnold says:

    Earlier, when trying to simplify algorithm, for myself, I see I did not include “ambiguity of line connecting ontologies”–within the frames of– “on paper, by a bot, from my thoughts”…Thanks for the chance to try again…

  5. John Davey says:

    Peter

    “I’m actually inclined to say no, but we are led to believe that most people say it is at least conceivable.”

    I think scenarios like these are conceivable – just highly contrary to normal expectations. Two identical physical systems will have the same properties and do the same things. So zombies are conceivable, but run contrary to the expectation that two identical systems will do identical types of thing. I think this article mixes these things up a bit.

    JBD

  6. Callan S. says:

    I can’t understand the approach.

    If I wrote it, I think Mrs Robb would explain that she didn’t solve the riddle — that’s how she solved it. The device was forced to solve it or fall apart. That’s why Enquiry Bot would go into a fugue if it had to process how it tried to solve the riddle while simultaneously attempting to solve it.

  7. micha says:

    Conceivable and doable are two different things. Since humans do make decisions that seem to us to be based on qualia (e.g. the Gedankenexperiment), who says we could design a zombie whose actions would reliably be the same as mine without it having that style of reasoning? Maybe we reach results that map to no parallel formal derivation system’s results.

  8. micha says:

    The problem of “it is conceivable” is that if the objection isn’t staring you in the face, you can conceive of things that can’t really exist.

    Arguing “zombies are conceivable, therefore we don’t define intelligence in terms of qualia” doesn’t rule out the possibility that intelligence requires qualia in a manner that is non-glaring and therefore not part of our definition, but still necessary.

  9. Peter says:

    I think it’s also perilously easy to convince yourself that you can conceive of things you can’t actually conceive of. Dennett argues that the argument of Mary the Colour Scientist works because people believe they can imagine roughly what knowing ‘everything that could ever be known about colour vision’ would be like. But, he argues, they can’t. That selective omniscience is unimaginable, and probably impossible, and we cannot draw any conclusions intuitively about Mary’s state of knowledge.
