Fakebots

Are robots short-changing us imaginatively?

Chat-bots, it seems, might be getting their second (or perhaps their third or fourth) wind. While they’re not exactly great conversationalists, the recent wave of digital assistants demonstrates the appeal of a computer you can talk to like a human being. Some now claim that a new generation of bots using deep machine learning techniques might be way better at human conversation than their chat-bot predecessors, whose utterances often veered rapidly from the gnomic to the insane.

A straw in the wind might be the Hugging Face app (I may be showing my age, but for me that name strongly evokes a ghastly Alien parasite). This greatly impressed Rachel Metz, who apparently came to see it as a friend. It’s certainly not an assistant – it doesn’t do anything except talk to you in a kind of parody of a bubbly teen with a limping attention span. The thing itself is available for iOS, and the underlying technology, without the teen angle, appears to be on show here, though I don’t really recommend spending any time on either. Actual performance, based on a small sample (I can only take so much), is disappointing; rather than a leap forward, it seems distinctly inferior to some Loebner Prize winners that never claimed to be doing machine learning. Perhaps it will get better. Jordan Pearson here expresses what seem like reasonable reservations about an app aimed at teens that demands a selfie from users as its opening move.

Behind all this, it seems to me, is the looming presence of Spike Jonze’s film Her, in which a professional letter writer from the near future (They still have letters? They still write – with pens?) becomes devoted to his digital assistant Samantha. Samantha is just one instance of a bot that people all over are falling in love with. The AIs in the film are puzzlingly referred to as Operating Systems, a randomly inappropriate term that perhaps suggests Jonze didn’t waste any time reading up on the technology. It’s not a bad film at all, but it isn’t really about AI; nothing much would be lost if Samantha were a fairy, a daemon, or an imaginary friend. There’s some suggestion that she learns and grows, but in fact she seems to be a fully developed human mind, if not a superlative one, right from her first words. It’s perhaps unfair to single out the relatively thoughtful Her for blame, because, with some honourable exceptions, the vast majority of robots in fiction are like this: humans in masks.

Fictional robots are, in fact, fakes, and so are all chat-bots. No chat-bot designer ever set out to create independent cognition first and then let it speak; instead they simply echo us back to ourselves as best they can manage. This is a shame, because the different patterns of thought a robot might have – the special mistakes it might be prone to, the unexpected insights it might generate – are potentially very interesting; indeed I should have thought they were fertile ground for imaginative writers. But perhaps ‘imaginative’ understates the amazing creative powers that would be needed to think yourself out of your own essential cognitive nature. I read a discussion the other day about human nature; the truth, it seems to me, is that we don’t know what human nature is like, because we have nothing much to compare it with. It won’t be until we communicate with aliens, or talk properly with non-fake robots, that we’ll be able to form a proper conception of ourselves.

To a degree it can be argued that there are examples of this happening already. Robots that aspire to Artificial General Intelligence in real-world situations suffer badly from the Frame Problem, for instance. That problem comes in several forms, but I think it can be glossed briefly as the job of picking out from the unfiltered world the things that need attention. AI is terrible at this, usually becoming mired in irrelevance (hey, the fact that something hasn’t changed might be more important than the fact that something else has). Dennett, rightly I think, described this issue not so much as the discovery of a new problem for robots as a new discovery about human nature: it turns out we’re weirdly, inexplicably good at something we never even realised was difficult.

How interesting it would be to learn more about ourselves along those challenging, mind-opening lines; but so long as we keep getting robots that are really human beings, mirroring us back to ourselves reassuringly, it isn’t going to happen.

Global Workspace beats frame problem?

Global Workspace theories have been popular ever since Bernard Baars put forward the idea back in the eighties; in ‘Applying global workspace theory to the frame problem’*, Murray Shanahan and Baars suggest that among its other virtues, the global workspace provides a convenient solution to that old bugbear, the frame problem.

What is the frame problem, anyway? Initially, it was a problem that arose when early AI programs were attempting simple tasks like moving blocks around. It became clear that when they moved a block, they not only had to update their database to correct the position of the block, they also had to update every other piece of information to say it had not changed. This led to unexpected demands on memory and processing. In the AI world, this problem never seemed too overwhelming, but philosophers got hold of it and gave it a new twist. Fodor, and in a memorable exposition, Dennett, suggested that there was a fundamental problem here. Humans had the ability to pick out what was relevant and ignore everything else, but there didn’t seem to be any way of giving computers the same capacity. Dennett’s version featured three robots: the first happily pulled a trolley out of a room to save it from a bomb, without noticing that the bomb was on the trolley and came too; the second attempted to work out all the implications of pulling the trolley out of the room, but there were so many logical implications that it was still stuck working through them when the bomb went off. The third was designed to ignore irrelevant implications, but it was still working on the task of identifying all the many irrelevant implications when again the bomb exploded.
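
Just to make the bookkeeping vivid, here is a little sketch – mine, in Python, and not a reconstruction of any particular historical system – of what that naive updating amounts to: one genuine change, followed by an explicit reassurance for every fact that has not changed.

```python
# Toy illustration of the original, logic-based frame problem: after a
# single action, a naive system must re-assert every fact that did NOT
# change as well as the one that did. (World contents are invented.)

world = {
    ("on", "blockA", "table"): True,
    ("on", "blockB", "table"): True,
    ("colour", "blockA", "red"): True,
    ("lamp", "lit"): True,
    # ...imagine thousands more facts about the room...
}

def move(block, destination, state):
    """Apply a move the naive way: one real update, plus an explicit
    'frame axiom' for every untouched fact."""
    new_state = {}
    for fact, value in state.items():
        if fact[0] == "on" and fact[1] == block:
            continue  # the old location no longer holds
        new_state[fact] = value  # re-assert: this fact has NOT changed
    new_state[("on", block, destination)] = True
    return new_state

# One tiny action forces us to touch every fact in the database, which is
# exactly the explosion the early block-moving programs ran into.
world = move("blockA", "blockB", world)
print(len(world), "facts examined for a single move")
```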

Shanahan and Baars explain this background and rightly point out that the original frame problem arose in systems which used formal logic as their only means of drawing conclusions about things, no longer an approach that many people would expect to succeed. They don’t really believe that the case for the insolubility of the problem has been convincingly made. What exactly is the nature of the problem, they ask: is it combinatorial explosion? Or is it just that the number of propositions the AI has to sort through to find the relevant one is very large (and by the way, aren’t there better ways of finding it than searching every item in order?). Neither of those is really all that frightening; we have techniques to deal with them.

I think Shanahan and Baars, understandably enough, under-rate the task a bit here. The set of sentences we’re asking the AI to sort through is not just very large; it’s infinite. One of the absurd deductions Dennett assigns to his robots is that the number of revolutions the wheels of the trolley will perform in being pulled out of the room is less than the number of walls in the room. This is clearly just one member of a set of valid deductions which goes on forever: the number of revolutions is also less than the number of walls plus one; it’s less than the number of walls plus two… It may be obvious that these deductions are uninteresting; but what is the algorithm that tells us so? More fundamentally, the superficial problems are proxies for a deeper concern: that the real world isn’t reducible to a set of propositions at all – that, as Borges put it,

“it is clear that there is no classification of the Universe that is not arbitrary and full of conjectures. The reason for this is very simple: we do not know what thing the universe is.”

There’s no encyclopaedia which can contain all possible facts about any situation. You may have good heuristics and terrific search algorithms, but when you’re up against an uncategorisable domain of infinite extent, you’re surely still going to have problems.

However, the solution proposed by Shanahan and Baars is interesting. Instead of the mind having to search through a large set of sentences, it has a global workspace where things are decided and a series of specialised modules which compete to feed in information (there’s an issue here about how radically different inputs from different modules manage to talk to each other: Shanahan and Baars mention a couple of options and then say rather loftily that the details don’t matter for their current purposes. It’s true that in context we don’t need to know exactly what the solution is – but we do need to be left believing that there is one).

Anyway, the idea is that while the global workspace is going about its business each module is looking out for just one thing. When eventually the bomb-is-coming-too module gets stimulated, it begins sending very vigorously and that information gets into the workspace. Instead of having to identify relevant developments, the workspace is automatically fed with them.
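
Here is a minimal sketch of how I picture the arrangement working – my own toy illustration, with made-up module names and salience numbers, nothing taken from the paper. Each specialist watches for its one thing and shouts with some degree of urgency; the workspace simply broadcasts whatever is currently shouting loudest.

```python
# Sketch of the workspace-plus-competing-specialists idea. Module names
# and salience scores are invented for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Module:
    name: str
    detector: Callable[[Dict], float]  # returns a salience score for the current scene

modules = [
    Module("trolley-can-be-pulled", lambda s: 0.6 if s.get("trolley") else 0.0),
    Module("revolutions-less-than-walls", lambda s: 0.1),  # true but dull
    Module("bomb-is-coming-too", lambda s: 0.95 if s.get("bomb_on_trolley") else 0.0),
]

def workspace_cycle(scene: Dict) -> str:
    """One cycle: collect every module's signal, let the strongest win,
    and broadcast it. Relevance is never computed centrally; it is just
    whichever specialist fires hardest."""
    winner = max(modules, key=lambda m: m.detector(scene))
    return winner.name

scene = {"trolley": True, "bomb_on_trolley": True}
print(workspace_cycle(scene))  # -> 'bomb-is-coming-too'
```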

That looks good on the face of it; instead of spending time endlessly sorting through propositions, we’ll just be alerted when it’s necessary. Notice, however, that instead of requiring an indefinitely large amount of time, we now need an indefinitely large number of specialised modules. Moreover, if we really cover all the bases, many of those modules are going to be firing off all the time. So when the bomb-is-coming-too module begins to signal frantically, it will be competing with the number-of-rotations-is-less-than-the-number-of-walls module and all the others, and will be drowned out. If we only want to have relevant modules, or only listen to relevant signals, we’re back with the original problem of determining just what is relevant.

Still, let’s not dismiss the whole thing too glibly. It reminded me to some degree of Edelman’s analogy with the immune system, which in a way really does work like that. The immune system cannot know in advance what antibodies it will need to produce, so instead it produces lots of random variations; then when one gets triggered it is quickly reproduced in large numbers. Perhaps we can imagine that if the global workspace were served by modules which were not pre-defined, but arose randomly out of chance neural linkages, it might work something like that. However, the immune system has the advantage of knowing that it has to react against anything foreign, whereas we need relevant responses for relevant stimuli. I don’t think we have the answer yet.

*Thanks to Lloyd for the reference.

Cognitive Planes

I see via MLU that Robert Sloan at the University of Illinois at Chicago has been given half a million dollars for a three-year project on common sense. Alas, the press release gives few details, but Sloan describes the goal of common sense as “the Holy Grail of artificial intelligence research”.

I think he’s right. There is a fundamental problem here that shows itself in several different forms. One is understanding: computers don’t really understand anything, and since translation, for example, requires understanding, they’ve never been very good at it. They can swap a French word for an English word, but without some understanding of what the original sentence was conveying, this mechanical substitution doesn’t work very well. Another outcrop of the same issue is the frame problem: computer programs need explicit data about their surroundings, but updating this data proves to be an unmanageably large problem, because the implications of every new piece of data are potentially infinite. Every time something changes, the program has to check the implications for every other piece of data it is holding; it needs to check the ones that are the same just as much as those that have changed, and the task rapidly mushrooms out of control. Somehow, humans get round this: they seem to be able to pick out the relevant items from a huge background of facts immediately, without having to run through everything.

In formulating the frame problem originally back in the 1960s, John McCarthy speculated that the solution might lie in non-monotonic logics; that is, systems in which conclusions can be withdrawn when new information arrives, rather than standing forever once derived, as they do in old-fashioned logical calculus. Systems based on rigid propositional/predicate calculus needed to check everything in their database every time something changed in order to ensure there were no contradictions, since a contradiction is fatal in these formalisations. On the whole, McCarthy’s prediction has been borne out, in that research since then has tended towards the use of Bayesian methods, which can tolerate conflicting evidence and which can give propositions degrees of belief rather than simply holding them true or false. As well as providing practical solutions to frame problem issues, this seems intuitively much more like the way a human mind works.
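
For anyone who likes the point made concrete, here is a toy example – my own, with invented numbers – of what ‘degrees of belief’ buys you: a single Bayes-rule update, in which awkward new evidence merely shifts a probability instead of producing the outright contradiction that a strict true-or-false database would choke on.

```python
# Toy Bayes-rule update: conflicting evidence nudges a degree of belief
# instead of breaking the database. All numbers are invented.

def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior P(H|E) from P(H), P(E|H) and P(E|not H)."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Belief that the block is still on the table.
belief = 0.9
# A noisy sensor reading suggesting the block has moved: unlikely if the
# block is still there, rather more likely if it has gone.
belief = bayes_update(belief, likelihood_if_true=0.3, likelihood_if_false=0.7)
print(round(belief, 3))  # belief drops to about 0.794, and nothing 'breaks'
```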

Sloan, as I understand it, is very much in this tradition; his earlier published work deals with sophisticated techniques for the manipulation of Horn knowledge bases. I’m afraid I frankly have only a vague idea of what that means, but I imagine it is a pretty good clue to the direction of the new project. Interestingly, the press release suggests the team will be looking at CYC and other long-established projects. These older projects tended to focus on the accumulation of a gigantic database of background knowledge about the world, in the possibly naive belief that once you had enough background information, the thing would start to work. I suppose the combination of unbelievably large databases of common sense knowledge with sophisticated techniques for manipulating and updating knowledge might just be exciting. If you were a cyberpunk fan and unreasonably optimistic, you might think that something like the meeting of Neuromancer and Wintermute was quietly happening.
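
For what it’s worth, a Horn knowledge base is – very roughly – a set of facts plus rules of the form ‘if all of these hold, conclude that’, with a single conclusion per rule. The little sketch below is my own, has nothing to do with Sloan’s actual techniques, and is only meant to show the shape of the formalism, run with naive forward chaining.

```python
# A rough flavour of a Horn knowledge base: facts plus single-conclusion
# rules, evaluated by naive forward chaining. Contents are invented.

facts = {"bird(tweety)", "small(tweety)"}
rules = [
    ({"bird(tweety)"}, "has_wings(tweety)"),
    ({"bird(tweety)", "small(tweety)"}, "can_fly(tweety)"),
    ({"has_wings(tweety)", "can_fly(tweety)"}, "can_escape(tweety)"),
]

def forward_chain(facts, rules):
    """Keep firing any rule whose premises are all known until nothing new appears."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```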

Let’s not get over-excited, though, because of course the whole thing is completely wrong. We may be getting really good at manipulating knowledge bases, but that isn’t what the human brain does at all. Or does it? Well, on the one hand, manipulating knowledge bases is all we’ve got: it may not work all that well, but for the time being it’s pretty much the only game in town – and it’s getting better. On the other hand, intuitively it just doesn’t seem likely that that’s what brains do: it’s more as if they used some entirely unknown technique of inference which we just haven’t grasped yet. Horn knowledge bases may be good, but are they really any more like natural brain functions than Aristotelian syllogisms?

Maybe, maybe not: perhaps it doesn’t matter. I mentioned the comparable issue of translation. Nobody supposes we are anywhere near doing translation by computation in the way the human brain does it, yet the available programs are getting noticeably better. There will always be some level of error in computer translation, but there is no theoretical limit to how far it can be reduced, and at some point it ceases to matter: after all, even human translators get things wrong.

What if the same were true for knowledge management? We could have AI that worked to all intents and purposes as well as the human brain, yet worked in a completely different way. There has long been a school of thought that says this doesn’t matter: we never learnt to fly the way birds do, but we learnt how to fly. Maybe the only way to artificial consciousness in the end will be the cognitive equivalent of a plane. Is that so bad?

If the half-million dollars is well spent, we could be a little closer to finding out…

The Three Laws revisited

Ages ago (gosh, it was nearly five years ago) I had a piece where Blandula remarked that any robot clever enough to understand Isaac Asimov’s Three Laws of Robotics would surely be clever enough to circumvent them. At the time I think all I had in mind was the ease with which a clever robot would be able to devise some rationalisation of the harm or disobedience it was contemplating. Asimov himself was of course well aware of the possibility of this kind of thing in a general way. Somewhere (working from memory) I think he explains that it was necessary to specify that robots may not, through inaction, allow a human to come to harm, or they would be able to work round the ban on outright harming by, for example, dropping a heavy weight on a human’s head. Dropping the weight would not amount to harming the human, because the robot was more than capable of catching it again before the moment of contact. But once the weight was falling, a robot without the additional specification would be under no obligation to do the actual catching.

That does not actually wrap up the problem altogether. Even in the case of robots with the additional specification, we can imagine that ways to drop the fatal weight might be found. Suppose, for example, that three robots, who in this case are incapable of catching the weight once dropped, all hold on to it and agree to let go at the same moment. Each individual can feel guiltless because if the other two had held on, the weight would not have dropped. Reasoning of this kind is not at all alien to the human mind;  compare the planned dispersal of responsibility embodied in a firing squad.

Anyway, that’s all very well, but I think there may well be a deeper argument here: perhaps the cognitive capacity required to understand and apply the Three Laws is actually incompatible with a cognitive set-up that guarantees obedience.

There are two problems for our Asimovian robot: first it has to understand the Laws; second, it has to be able to work out what actions will deliver results compatible with them.  Understanding, to begin with, is an intractable problem.  We know from Quine that every sentence has an endless number of possible interpretations; humans effortlessly pick out the one that makes sense, or at least a small set of alternatives; but there doesn’t seem to be any viable algorithm for picking through the list of interpretations. We can build in hard-wired input-output responses, but when we’re talking about concepts as general and debatable as ‘harm’, that’s really not enough. If we have a robot in a factory, we can ensure that if it detects an unexpected source of heat and movement of the kind a human would generate, it should stop thrashing its painting arm around – but that’s nothing like intelligent obedience of a general law against harm.
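
The contrast is easy to make concrete. The hard-wired sort of ‘obedience’ can be written down in a few lines – the sensor names and thresholds below are invented, of course – but nothing in it amounts to an understanding of harm in general; it will never notice the indefinitely many other ways a human might come to grief.

```python
# A hard-wired input-output response of the factory-robot kind. The sensor
# names and threshold are invented; the point is how little it knows about 'harm'.

def safety_interlock(heat_signature: float, motion_detected: bool) -> str:
    """Stop the painting arm if anything warm and moving (possibly a person)
    enters the work envelope. That is the whole of its notion of harm."""
    HUMAN_HEAT_THRESHOLD = 0.7  # arbitrary normalised value
    if motion_detected and heat_signature > HUMAN_HEAT_THRESHOLD:
        return "HALT_ARM"
    return "CONTINUE_PAINTING"

print(safety_interlock(heat_signature=0.9, motion_detected=True))   # HALT_ARM
print(safety_interlock(heat_signature=0.2, motion_detected=True))   # CONTINUE_PAINTING
```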

But even if we can get the robot to understand the Laws, there’s an equally grave problem involved in making it choose what to do.  We run into the frame problem (in its wider, Dennettian form). This is, very briefly, the problem that arises from tracking changes in the real world. For a robot to keep track of everything that changes (and everything which stays the same, which is also necessary) involves an unmanageable explosion of data. Humans somehow pick out just relevant changes; but again a robot can only pick out what’s relevant by sorting through everything that might be relevant, which leads straight back to the same kind of problem with indefinitely large amounts of data.

I don’t think it’s a huge leap to see something in common between the two problems; we could say that they both arise from an underlying difficulty in dealing with relevance in the face of the buzzing complexity of reality. My own view is that humans get round this problem through recognition; roughly speaking, instead of looking at every object individually to determine whether it’s square, we throw everything into a sort of sieve with holes that only let square things drop through. But whether or not that’s right, and putting aside the question of how you would go about building such a faculty into a robot, I suggest that both understanding and obedience involve the ability to pick out a cogent, non-random option from an infinite range of possibilities. We could call this free will if we were so inclined, but let’s just call it a faculty of choice.
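
To illustrate what I mean by a sieve – and this is purely illustrative, certainly not a claim about how brains actually manage it – the trick is that objects get filed by a crude signature as they arrive, so that picking out the square things later is a lookup rather than a search.

```python
# A loose 'sieve' sketch: classify objects once, as they arrive, so that
# retrieving all the squares later needs no scan. Example objects are invented.

from collections import defaultdict

def signature(obj) -> str:
    """Crude shape signature: a square is a four-sided thing with equal sides."""
    if obj["sides"] == 4 and len(set(obj["side_lengths"])) == 1:
        return "square"
    return "other"

sieve = defaultdict(list)

def perceive(obj):
    """File each object as it turns up; the 'holes' do the classifying."""
    sieve[signature(obj)].append(obj["name"])

for thing in [
    {"name": "tile", "sides": 4, "side_lengths": [1, 1, 1, 1]},
    {"name": "sheet", "sides": 4, "side_lengths": [2, 3, 2, 3]},
    {"name": "coaster", "sides": 4, "side_lengths": [5, 5, 5, 5]},
]:
    perceive(thing)

print(sieve["square"])  # -> ['tile', 'coaster'], without scanning everything again
```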

Now I think that faculty, which the robot is going to have to exercise in order to obey the Laws, would also unavoidably give it the ability to choose whether to obey them or not. To have the faculty of choice, it has to be able to range over an unlimited set of options, whereas constraining it to any given set of outcomes involves setting limits. I suppose we could put this in a more old-fashioned mentalistic kind of way by observing that obedience, properly understood, does not eliminate the individual will but on the contrary requires it to be exercised in the right way.

If that’s true (and I do realise that the above is hardly a tight knock-down argument) it would give Christians a neat explanation of why God could not have made us all good in the first place – though it would not help with the related problem of why we are exposed to widely varying levels of temptation and opportunity.  To the rest of us it offers, if we want it, another possible compatibilist formulation of the nature of free will.