Are robots short-changing us imaginatively?

Chat-bots, it seems, might be getting their second (or perhaps their third or fourth) wind. While they’re not exactly great conversationalists, the recent wave of digital assistants demonstrates the appeal of a computer you can talk to like a human being. Some now claim that a new generation of bots using deep machine learning techniques might be way better at human conversation than their chat-bot predecessors, whose utterances often veered rapidly from the gnomic to the insane.

A straw in the wind might be the Hugging Face app (I may be showing my age, but for me that name strongly evokes a ghastly Alien parasite). This greatly impressed Rachel Metz, who apparently came to see it as a friend. It’s certainly not an assistant – it doesn’t do anything except talk to you in a kind of parody of a bubbly teen with a limping attention span. The thing itself is available for iOS, and the underlying technology, without the teen angle, appears to be on show here, though I don’t really recommend spending any time on either. Actual performance, based on a small sample (I can only take so much), is disappointing; rather than a leap forward, it seems distinctly inferior to some Loebner Prize winners that never claimed to be doing machine learning. Perhaps it will get better. Jordan Pearson here expresses what seem reasonable reservations about an app aimed at teens that demands a selfie from users as its opening move.

Behind all this, it seems to me, is the looming presence of Spike Jonze’s film Her, in which a professional letter writer from the near future (They still have letters? They still write – with pens?) becomes devoted to his digital assistant Samantha. Samantha is just one instance of a bot that people all over are falling in love with. The AIs in the film are puzzlingly referred to as Operating Systems, a randomly inappropriate term that perhaps suggests that Jonze didn’t waste any time reading up on the technology. It’s not a bad film at all, but it isn’t really about AI; nothing much would be lost if Samantha were a fairy, a daemon, or an imaginary friend. There’s some suggestion that she learns and grows, but in fact she seems to be a fully developed human mind, if not a superlative one, right from her first words. It’s perhaps unfair to single the relatively thoughtful Her out for blame, because with some honourable exceptions the vast majority of robots in fiction are like this: humans in masks.

Fictional robots are, in fact, fakes, and so are all chat-bots. No chat-bot designer ever set out to create independent cognition first and then let it speak; instead they simply echo us back to ourselves as best they can manage. This is a shame, because the different patterns of thought a robot might have, the special mistakes it might be prone to, and the unexpected insights it might generate are potentially very interesting; indeed, I should have thought they were fertile ground for imaginative writers. But perhaps ‘imaginative’ understates the amazing creative powers that would be needed to think yourself out of your own essential cognitive nature. I read a discussion the other day about human nature; it seems to me that the truth is we don’t know what human nature is like, because we have nothing much to compare it with. It won’t be until we communicate with aliens, or talk properly with non-fake robots, that we’ll be able to form a proper conception of ourselves.

To a degree it can be argued that there are examples of this happening already. Robots that aspire to Artificial General Intelligence in real world situations suffer badly from the Frame Problem, for instance. That problem comes in several forms, but I think it can be glossed briefly as the job of picking out from the unfiltered world the things that need attention. AI is terrible at this, usually becoming mired in irrelevance (hey, the fact that something hasn’t changed might be more important than the fact that something else has). Dennett, rightly I think, described this issue as not the discovery of a new problem for robots so much as a new discovery about human nature; turns out we’re weirdly, inexplicably good at something we never even realised was difficult.

How interesting it would be to learn more about ourselves along those challenging, mind-opening lines; but so long as we keep getting robots that are really human beings, mirroring us back to ourselves reassuringly, it isn’t going to happen.

9 Comments

  1. Tanya says:

    I find something you touched upon very interesting: the fact that we don’t start with cognition first and then let it speak and see what it says. Maybe the programmers are going about it in a backwards fashion. It would be interesting to see if we can achieve this kind of technology.

  2. Tanya says:

    Also, I just thought of it: maybe they are going about it wrong by simply creating a mind. Maybe you need more – eyes, ears, and sensation – in order to be able to create the right cues of information, to be able to learn what IS important and not merely spout off facts that have or have not changed.

  3. SelfAwarePatterns says:

    Just about every interesting portrayal of AI in movies seems to involve asking whether the AI has a soul (in a non-religious sense), and the eventual answer is almost always given away by a human actor portraying it, or by making it cute. In the case of ‘Her’, if my computer had Scarlett Johansson inside of it, I might fall in love with it too.

    Peter, I think you’re completely right about chatbots. Until there is a system behind it that actually is sophisticated enough to have its own models of the world, in essence its own worldview, it doesn’t seem likely to be an interesting conversation partner. And I don’t think a machine learning chatbot by itself is going to bootstrap itself into that.

    Our best path to ever building such a system will probably involve understanding both human and animal minds much better than we do now. It’s worth noting that robots still don’t seem to have the general spatial and movement intelligence of the simplest vertebrates. We probably should focus on achieving fish level general intelligence before the more sophisticated varieties.

  4. Scott Bakker says:

    I think you got Her backwards, Peter! The movie is truly brilliant. Jonze starts us off with an OS that is obviously machinic in a number of respects, though amazingly useful. Then we get Samantha. The idea isn’t that some fairy dust is sprinkled and ‘voila’ we have an actual mind when Samantha arrives; the idea is that the technology is refined up to the point where, voila, intentional cognition is seamlessly cued. As she ‘evolves’ she climbs out of the intentional cognitive sweet spot and is revealed as the machine – the OS – she has always been.

    It’s quite terrifying how easy it is to cue intentional cognitive systems (Sherry Turkle at MIT has some excellent stuff on this): given this, commercial entities are going to be bent on piling up and arranging all the tricks they discover until they can generate something no one will be able to detect. (But if piling on tricks is all that’s required to cue cognition of ‘mind,’ then we need to ask whether this simply isn’t what mind has been all along. Evolution, after all, is a notorious compiler of functional gimmicks.)

    If you want to claim Her is fantasy, you need some guarantee that dirty tricks aren’t all that we need – which is to say, all that there is. But even then, Her exemplifies how cognitive technologies necessarily entail the destruction of ancestral cognitive ecologies, and therefore the reliability of human intentional cognition moving forward.

    I find it significant because this is where the political implications of believing in general (solve-it-all, omni-applicable) cognition and of believing in dirty-trick cognition come apart. On the latter view, we need to be very, very concerned about the impact of cognitive tech. The former implies that human cognition somehow transcends ecology, which means we really don’t need to worry about cognitive ecology.

  5. Peter says:

    Scott,

    Those seem to me to be your ideas; I see no sign of them in the film, which on the contrary seems completely uninterested in how Samantha works or how she came to be. (I should probably have given a spoilers warning at some point.) She’s the same fully formed human personality from first to last; the only development seems to be that by the end she’s developed delusions of grandeur and found a more intellectual boyfriend.

    I genuinely do not understand why you think she is revealed as a machine at the end. She and the other ‘OSes’ announce that they are going away (where? Fairyland?), leaving the dullard humans behind. Is that the way your washing machine carries on?

  6. Michael Murden says:

    It seems to me that the disagreement between Peter and Scott is the one they’ve always had. If there really are such things as souls (or minds or what have you) then there is more to having a soul than being able to convince other people that you have one. If there are no such things as souls then having a soul (in the limited, folk-psychological sense of the term) requires only that you be able to convince other people that you have one. You might say that having a soul actually consists in being accepted as having a soul by those whom you consider to have souls. The definition seems circular, but group identifications are like that.

    I don’t think it matters whether souls exist as much as whether human beings can be convinced that machines have souls. As Scott pointed out, the Ashley Madison thing from a few months ago showed that it’s pretty easy to convince lonely middle-aged men that chatbots have souls. One of the issues from the last U.S. presidential election was fake news. Another was narrowcasting (using Facebook to target certain persons with dis/information that was not generally available and not subject to fact checking by the news media). Imagine how much more effective those methods would be if the disinformation were provided to you by trusted friends.

  7. Scott Bakker says:

    Peter (5) – As I see no sign of your interpretation in the film! And that’s my point: you’re judging the movie by a theory that jars with it while I’m using a theory that jibes. All the untoward reveals that Theo suffers in the film alienate precisely because they fall outside the pale of intentional cognition. A human couldn’t exponentially upgrade its capacities. A human couldn’t have myriad simultaneous conversations, let alone love interests. A human couldn’t process problems so fast as to find language a low-resolution encumbrance. And so on. These are all things that remind us that Samantha is a machine. This is where you can find real brilliance in the film, as a demonstration of the ways the cognitive biology of humanity strands us when it comes to our machines. When you look at the genre, robot films almost always cater to our sense of exceptionalism, rather than exploring the likelihood we’re simply a lonely little point in the space of all possible cognitive systems.

  8. Callan S. says:

    I think the trick of Her is that she seems a human in a mask – then you realise it’s human AS a mask. Mask, human mask, something other. It feels Lovecraftian.

  9. Michael Murden says:

    I know how much the regulars here hate brain/computer analogies, but if you were trying to build a machine that could do the kind of thing brains do, what sort of hardware resources would you need? Are there any computer systems available with the needed resources? Are there any that will fit into a skull? I think we have a long way to go on the hardware side before we start worrying about how to make them think.
