Posts tagged ‘AI’

Picture: AGI. An interesting piece in Aeon by David Deutsch. There was a shorter version in the Guardian, but it just goes to show how even reasonably intelligent editing can mess up a piece. There were several bits in the Guardian version where I was thinking to myself: ooh, he’s missed the point a bit there, he doesn’t really get that: but on reading the full version I found those very points were ones he actually understood very well. In fact he talks a lot of sense and has some real insights.

Not that everything is perfect. Deutsch quite reasonably says that AGI, artificial general intelligence, machines that think like people, must surely be possible. We could establish that by merely pointing out that if the brain does it, then it seems natural that a machine must be able to do it: but Deutsch invokes the universality of computation, something he says he proved in the 1980s. I can’t claim to understand all this in great detail, but I think what he proved was the universality in principle of quantum computation: but the notion of computation used was avowedly broader than Turing computation. So it’s odd that he goes on to credit Babbage with discovering the idea, as a conjecture, and Turing with fully understanding it. He says of Turing:

He concluded that a computer program whose repertoire included all the distinctive attributes of the human brain — feelings, free will, consciousness and all — could be written.

That seems too sweeping to me: it’s not unlikely that Turing did believe those things, but they go far beyond his rather cautious published claims, something we were sort of talking about last time.

I’m not sure I fully grasp what people mean when they talk about the universality of computation. It seems that they mean any given physical state of affairs can be adequately reproduced, or at any rate emulated to any required degree of fidelity, by computational processes. This is probably true: what it perhaps overlooks is that for many commonplace entities there is no satisfactory physical description. I’m not talking about esoteric items here: think of a vehicle, or to be Wittgensteinian, a game. Being able to specify things in fine detail, down to the last atom, is simply no use in either case. There’s no set of descriptions of atom placement that defines all possible vehicles (virtually anything can be a vehicle) and certainly none for all possible games, which, given the fogginess of the idea, could easily correspond with any physical state of affairs. These items are defined on a different level of description, in particular one where purposes and meanings exist and are relevant. So unless I’ve misunderstood, the claimed universality is not as universal as we might have thought.

However, Deutsch goes on to suggest, and quite rightly, I think, that what programmed AIs currently lack is a capacity for creative thought. Endowing them with this, he thinks, will require a philosophical breakthrough. At the moment, he believes, we still tend to assume that new insights come from induction; whereas ever since Hume there has been a problem over induction, and no-one knows how to write an algorithm which can produce genuine and reliable new inductions.

Deutsch unexpectedly believes that Popperian epistemology has the solution, but has been overlooked. Popper, of course, took the view that scientific method was not about proving a theory but about failing to disprove one: so long as your hypotheses withstood all attempts to prove them false (and so long as they were not cast in cheating ways that made them unfalsifiable) you were entitled to hang on to them.

Maybe this helps to defer the reckoning so far as induction is concerned: it sort of kicks the can down the road indefinitely. The problem, I think, is that the Popperian still has to be able to identify which hypotheses to adopt in the first place; there’s a very large if not infinite choice of possible ones for any given set of circumstances.

I think the answer is recognition: I think recognition is the basic faculty underlying nearly all of human thought. We just recognise that certain inductions, and certain events that might be cases of cause and effect, are sound examples: and our creative thought is very largely powered by recognising aspects of the world we hadn’t spotted before.

The snag is, in my view, that recognition is unformalisable and anomic – lacking in rules. I have a kind of proof of this. In order to apply rules, we have to be able to identify the entities to which the rules should be applied. This identification is a matter of recognising the entities. But recognition cannot itself be based on rules, because that would then require us to identify the entities to which those rules applied – and we’d be caught in a vicious circle.

It seems to follow that if no rules can be given for recognition, no algorithm can be constructed either, and so one of the basic elements of thought is just not susceptible to computation. Whether quantum computation is better at this sort of thing than Turing computation is a question I’m not competent to judge, but I’d be surprised if the idea of rule-free algorithms could be shown to make sense for any conception of computation.

So that might be why AGI has not come along very quickly. Deutsch may be right that we need a philosophical breakthrough, although one has to have doubts about whether the philosophers look likely to supply it: perhaps it might be one of those things where the practicalities come first and then the high theory is gradually constructed after the fact. At any rate Deutsch’s piece is a very interesting one, and I think many of his points are good. Perhaps if there were a book-length version I’d find that I actually agree with him completely…

Picture: Ray Kurzweil. The Guardian had a piece recently which was partly a profile of Ray Kurzweil, and partly about the way Google seems to have gone on a buying spree, snapping up experts on machine learning and robotics – with Kurzweil himself made Director of Engineering.

The problem with Ray Kurzweil is that he is two people. There is Ray Kurzweil the competent and genuinely gifted innovator, a man we hear little from: and then there’s Ray Kurzweil the motor-mouth, prophet of the Singularity, aspirant immortal, and gushing fountain of optimistic predictions. The Guardian piece praises his record of prediction, rather oddly quoting in support his prediction that by the year 2000 paraplegics would be walking with robotic leg prostheses – something that in 2014 has still not happened. That perhaps does provide a clue to the Kurzweil method: if you issue thousands of moderately plausible predictions, some will pay off. A doubtless-apocryphal story has it that at AI conferences people play the Game of Kurzweil. Players take turns to offer a Kurzweilian prediction (by 2020 there will be a restaurant where sensors sniff your breath and the ideal meal is got ready without you needing to order; by 2050 doctors will routinely use special machines to selectively disable traumatic memories in victims of post-traumatic stress disorder; by 2039 everyone will have an Interlocutor, a software agent that answers the phone for us, manages our investments, and arranges dates for us… we could do this all day, and Kurzweil probably does). The winner is the first person to sneak in a prediction of something that has in fact happened already.

But beneath the froth is a sharp and original mind which it would be all too easy to underestimate. Why did Google want him? The Guardian frames the shopping spree as being about bringing together the best experts and the colossal data resources to which Google has access. A plausible guess would be that Google wants to improve its core product dramatically. At the moment Google answers questions by trying to provide a page from the web where some human being has already given the answer; perhaps the new goal is technology that understands the question so well that it can put together its own answer, gathering and shaping selected resources in very much the way a human researcher working on a bespoke project might do.

But perhaps it goes a little further: perhaps they hope to produce something that will interact with humans in a human-like way. A piece of software like that might well be taken to have passed the Turing test, which in turn might be taken to show that it was, to all intents and purposes, a conscious entity. Of course, if it wasn’t conscious, that might be a disastrous outcome: the nightmare scenario feared by some, in which our mistake causes us to nonsensically award the software human rights, and/or to feel happier about denying them to human beings.

It’s not very likely that the hypothetical software (and we must remember that this is the merest speculation) would have even the most minimal forms of consciousness. We might take the analogy of Google Translate; a hugely successful piece of kit, but one that produces its translations with no actual understanding of the texts or even the languages involved. Although highly sophisticated, it is in essence a ‘brute force’ solution; what makes it work is the massive power behind it and the massive corpus of texts it has access to.  It seems quite possible that with enough resources we might now be able to produce a credible brute force winner of the Turing Test: no attempt to fathom the meanings or to introduce counterparts of human thought, just a massive repertoire of canned responses, so vast that it gives the impression of fully human-style interaction. Could it be that Google is assembling a team to carry out such a project?
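Just to make the idea vivid (this is purely a toy of my own devising, not a guess at anyone’s actual system), a ‘brute force’ conversationalist is essentially a gigantic lookup of canned responses:

```python
# A toy illustration of a canned-response approach (purely hypothetical,
# not any real system): store prompt/reply pairs and answer with the
# reply whose stored prompt shares the most words with the input.
canned = {
    "how are you today": "Not too bad, thanks. You?",
    "what is your favourite book": "Hard to pick just one, honestly.",
    # ...the imagined brute-force system would hold billions of entries.
}

def respond(prompt):
    words = set(prompt.lower().split())
    # pick the stored prompt with the greatest word overlap
    best = max(canned, key=lambda stored: len(words & set(stored.split())))
    return canned[best]

print(respond("So how are you?"))   # -> "Not too bad, thanks. You?"
```

Scale the table up by many orders of magnitude and add cleverer matching, and you have the sort of thing I mean: plenty of plausible output, no understanding anywhere in the loop.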

Well, it could be. However, it could also be that cracking true thought is actually on the menu. Vaughan Bell suggests that the folks recruited by Google are honest machine learning types with no ambitions in the direction of strong AI. Yet, he points out, there are also names associated with the trendy topic of deep learning. The neural networks (but y’know, deeper) which deep learning uses just might be candidates for modelling human neuron-style cognition. Unfortunately it seems quite possible that if consciousness were created by deep learning methods, nobody would be completely sure how it worked or whether it was real consciousness or not. That would be a lamentable outcome: it’s bad enough to have robots that naive users think are people; having robots and genuinely not knowing whether they’re people or not would be deeply problematic.
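For readers who haven’t met the term, ‘deep learning’ here just means neural networks with several learned layers stacked one on top of another, each layer re-describing the output of the one below. The following is a minimal toy sketch of that shape (made-up sizes, random untrained weights, nothing to do with anything Google has actually built):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer sizes: e.g. a flattened 28x28 image in, 10 class scores out.
layer_sizes = [784, 256, 64, 10]
weights = [rng.normal(0, 0.01, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Pass an input through the stacked layers; each hidden layer
    re-describes the output of the one below it."""
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)      # simple ReLU nonlinearity
    return x @ weights[-1]            # raw scores for each class

scores = forward(rng.normal(size=784))
print(scores.shape)                   # (10,)
```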

Probably nothing like that will happen: maybe nothing will happen. The Guardian piece suggests Kurzweil is a bit of an outsider: I don’t know about that.  Making extravagantly optimistic predictions while only actually delivering much more modest incremental gains? He sounds like the personification of the AI business over the years.

Picture: tank. Back in November Human Rights Watch (HRW) published a report – Losing Humanity – which essentially called for a ban on killer robots – or more precisely on the development, production, and use of fully autonomous weapons, backing it up with a piece in the Washington Post. The argument was in essence that fully autonomous weapons are most probably not compatible with international conventions on responsible ethical military decision making, and that robots or machines lack (and perhaps always will lack) the qualities of emotional empathy and ethical judgement required to make decisions about human lives.

You might think that in certain respects that should be fairly uncontroversial. Even if you’re optimistic about the future potential of robotic autonomy, the precautionary principle should dictate that we move with the greatest of caution when it comes to handing over lethal weapons. However, the New Yorker followed up with a piece which linked HRW’s report with the emergence of driverless cars and argued that a ban was ‘wildly unrealistic’. Instead, it said, we simply need to make machines ethical.

I found this quite annoying; not so much the suggestion as the idea that we are anywhere near being in a position to endow machines with ethical awareness. In the first place actual autonomy for robots is still a remote prospect (which I suppose ought to be comforting in a way). Machines that don’t have a specified function and are left around to do whatever they decide is best are not remotely viable at the moment, nor desirable. We don’t let driverless cars argue with us about whether we should really go to the beach, and we don’t let military machines decide to give up fighting and go into the lumber business.

Nor, for that matter, do we have a clear and uncontroversial theory of ethics of the kind we should need in order to simulate ethical awareness. So the New Yorker is proposing we start building something when we don’t know how it works or even what it is with any clarity. The danger here, to my way of thinking, is that we might run up some simplistic gizmo and then convince ourselves we now have ethical machines, thereby by-passing the real dangers highlighted by HRW.

Funnily enough I agree with you that the proposal to endow machines with ethics is premature, but for completely different reasons. You think the project is impossible; I think it’s irrelevant. Robots don’t actually need the kind of ethics discussed here.

The New Yorker talks about cases where a driving robot might have to decide to sacrifice its own passengers to save a bus-load of orphans or something. That kind of thing never happens outside philosophers’ thought experiments. In the real world you never know that you’re inevitably going to kill either three bankers or twenty orphans – in every real driving situation you merely need to continue avoiding and minimising impact as much as you possibly can. The problems are practical, not ethical.

In the military sphere your intelligent missile robot isn’t morally any different to a simpler one. People talk about autonomous weapons as though they are inherently dangerous. OK, a robot drone can go wrong and kill the wrong people, but so can a ballistic missile. There’s never certainty about what you’re going to hit. A WWII bomber had to go by the probability that most of its bombs would hit a proper target, not a bus full of orphans (although of course in the later stages of WWII they were targeting civilians too).  Are the people who get killed by a conventional bomb that bounces the wrong way supposed to be comforted by the fact that they were killed by an accident rather than a mistaken decision? It’s about probabilities, and we can get the probabilities of error by autonomous robots down to very low levels.  In the long run intelligent autonomous weapons are going to be less likely to hit the wrong target than a missile simply lobbed in the general direction of the enemy.

Then we have the HRW’s extraordinary claim that autonomous weapons are wrong because they lack emotions! They suggest that impulses of mercy and empathy, and unwillingness to shoot at one’s own people sometimes intervene in human conflict, but could never do so if robots had the guns. This completely ignores the obvious fact that the emotions of hatred, fear, anger and greed are almost certainly what caused and sustain the conflict in the first place!  Which soldier is more likely to behave ethically: one who is calm and rational, or one who is in the grip of strong emotions? Who will more probably observe the correct codes of military ethics, Mr Spock or a Viking berserker?

We know what war is good for (absolutely nothing). The costs of a war are always so high that a purely rational party would almost always choose not to fight. Even a bad bargain will nearly always be better than even a good war. We end up fighting for reasons that are emotional, and crucially because we know or fear that the enemy will react emotionally.

I think if you analyse the HRW statement enough it becomes clear that the real reason for wanting to ban autonomous weapons is simply fear; a sense that machines can’t be trusted. There are two facets to this. The first and more reasonable is a fear that when machines fail, disaster may follow. A human being may hit the odd wrong target, but it goes no further: a little bug in some program might cause a robot to go on an endless killing spree. This is basically a fear of brittleness in machine behaviour, and there is a small amount of justification for it. It is true that some relatively unsophisticated linear programs rely on the assumptions built into their program and when those slip out of synch with reality things may go disastrously and unrecoverably wrong. But that’s because they’re bad programs, not a necessary feature of all autonomous systems and it is only cause for due caution and appropriate design and testing standards, not a ban.

The second facet, I suggest, is really a kind of primitive repugnance for the idea of a human’s being killed by a lesser being; a secret sense that it is worse, somehow more grotesque, for twenty people to be killed by a thrashing robot than by a hysterical bank robber. Simply to describe this impulse is to show its absurdity.

It seems ethics are not important to robots because for you they’re not important to anyone. But I’m pleased you agree that robots are outside the moral sphere.

Oh no, I don’t say that. They don’t currently need the kind of utilitarian calculus the New Yorker is on about, but I think it’s inevitable that robots will eventually end up developing not one but two separate codes of ethics. Neither of these will come from some sudden top-down philosophical insight – typical of you to propose that we suspend everything until the philosophy has been sorted out in a few thousand years or so – they’ll be built up from rules of thumb and practical necessity.

First, there’ll be rules of best practice governing their interaction with humans. There may be some that will have to do with safety and the avoidance of brittleness and many, as Asimov foresaw, will essentially be about deferring to human beings. My guess is that they’ll be in large part about remaining comprehensible to humans; there may be a duty to report, to provide rationales in terms that human beings can understand, and there may be a convention that when robots and humans work together, robots do things the human way, not using procedures too complex for the humans to follow, for example.

More interesting, when there’s a real community of autonomous robots they are bound to evolve an ethics of their own. This is going to develop in the same sort of way as human ethics, but the conditions are going to be radically different. Human ethics were always dominated by the struggle for food and reproduction and the avoidance of death: those things won’t matter as much in the robot system. But they will be happy dealing with very complex rules and a high level of game-theoretical understanding, whereas human beings have always tried to simplify things. They won’t really be able to teach us their ethics; we may be able to deal with it intellectually but we’ll never get it intuitively.

But for once, yes, I agree: we don’t need to worry about that yet.

The first working brain simulation? Spaun (Semantic Pointer Architecture Unified Network) has attracted a good deal of interested coverage.

Spaun is based on the nengo neural simulator: it basically consists of an eye and a hand: the eye is presented with a series of pixelated images of numbers (on a 28 x 28 grid) and the hand provides output by actually drawing its responses. With this simple set up Spaun is able to perform eight different tasks ranging from copying the digit displayed to providing the correct continuation of a number sequence in the manner of certain IQ tests. Its performance within this limited repertoire is quite impressive and the fact that it fails in a few cases actually makes it resemble a human brain even more closely. It cannot learn new tasks on its own, but it can switch between the eight at any time without impairing its performance.
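For anyone curious what ‘based on the nengo neural simulator’ amounts to in practice, nengo lets you build models out of populations of spiking neurons that represent values and compute functions between them. Here is a deliberately tiny sketch of my own in that style – a couple of hundred neurons rather than Spaun’s 2.5 million, and none of its modular architecture:

```python
import numpy as np
import nengo

model = nengo.Network(label="toy nengo model")
with model:
    # a time-varying input signal
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    # two small populations of spiking neurons, each representing a scalar
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    # feed the stimulus into a, and have b compute a function of a's value
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)
    # record b's decoded output, smoothed with a synaptic filter
    probe = nengo.Probe(b, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[probe].shape)   # (timesteps, 1)
```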

Spaun seems to me an odd mixture of approaches; in some respects it is a biologically realistic simulation, in others its structure has just been designed to work. It runs on 2.5 million simulated neurons, far fewer than those used by purer simulations like Blue Brain; the neurons are essentially designed to work in a realistic way, although they are relatively standardised and stereotyped compared to their real biological counterparts. Rather than being a simple mass of neurons or a copy of actual brain structures they are organised into an architecture of modules set up to perform discrete tasks and supply working memory, etc. If you wanted to be critical you could say that this mixing of simulation and design makes the thing a bit kludgeish, but commentators have generally (and rightly, I think) not worried about that too much. It does seem plausible that Spaun is both sufficiently realistic and sufficiently effective for us to conclude it is really demonstrating in practice the principles of how neural tissue supports cognition – even if a few of the details are not quite right.

Interesting in this respect is the use of semantic pointers. Apparently these are compressions of multidimensional vectors expressed by spiking neurons; it looks as though they may provide a crucial bridge between the neuronal and the usefully functional, and they are the subject of a forthcoming book, which should be interesting.
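As far as I understand it, the binding operation behind semantic pointers is circular convolution, of the kind Tony Plate used for holographic reduced representations: two high-dimensional vectors are compressed into a single vector of the same size, from which a noisy version of either component can later be recovered. A bare numpy sketch of that idea (my own illustration, nothing to do with Spaun’s actual spiking implementation):

```python
import numpy as np

def bind(a, b):
    # circular convolution: combine two vectors into one of the same length
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, a):
    # approximate inverse: bind with the involution of a
    a_inv = np.concatenate(([a[0]], a[:0:-1]))
    return bind(c, a_inv)

d = 512
rng = np.random.default_rng(0)
role = rng.normal(0, 1 / np.sqrt(d), d)
filler = rng.normal(0, 1 / np.sqrt(d), d)

pointer = bind(role, filler)        # the compressed "pointer"
recovered = unbind(pointer, role)   # a noisy approximation of filler

cosine = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
print(round(cosine, 2))             # well above chance for unrelated vectors
```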

What’s the significance of Spaun for consciousness? Well, for one thing it makes a significant contribution to the perennial debate on whether or not the brain is computational. There is a range of possible answers which go something like the following.

  1. Yes, absolutely. The physical differences between a brain and a PC are not ultimately important; when we have identified the right approach we’ll be able to see that the basic activity of the brain is absolutely standard Turing-style computations.
  2. Yes, sort of. The brain isn’t doing computation in quite the way silicon chips do it, but the functions are basically the same, just as a plane doesn’t have flapping feathery wings but is still doing the same thing – flying – as a bird.
  3. No, but. What the brain does is something distinctively different from computation, but it can be simulated or underpinned by computational systems in a way that will work fine.
  4. No, the brain isn’t doing computations and what it is doing crucially requires some kind of hardware which isn’t a computer at all, whether it’s some quantum gizmo or something with some other as yet unidentified property  which biological neurons have.

The success of Spaun seems to me to lend a lot of new support to position 3: to produce the kind of cognitive activity which gives rise to consciousness you have to reproduce the distinctive activity of neurons – but if you simulate that well enough by computational means, there’s no reason why a sufficiently powerful computer couldn’t support consciousness.

There’s an interesting conversation here with Noam Chomsky. The introductory piece mentions the review by Chomsky (of B. F. Skinner’s Verbal Behavior) which is often regarded as having dealt the death-blow to behaviourism, and leaves us with the implication that Chomsky has dominated thinking about AI ever since. That’s overstating the case a bit, though it’s true the prevailing outlook has been mainly congenial to those with Chomskian views. What’s generally taken to have happened is that behaviourism was succeeded by functionalism, the view that mental states arise from the functioning of a system – most often seen as a computational system. Functionalism has taken a few hits since then, and a few rival theories have emerged, but in essence I think it’s still the predominant paradigm, the idea you have to address one way or another if you’re putting forward a view about consciousness. I suspect in fact that the old days, in which one dominant psychological school – associationism, introspectionism, behaviourism – ruled the roost more or less totally until overturned and replaced equally completely in a revolution, are over, and that we now live in a more complex and ambivalent world.

Be that as it may, it seems the old warrior has taken up arms again to vanquish a resurgence of behaviourism, or at any rate of ideas from the same school: statistical methods, notably those employed by Google. The article links to a rebuttal last year by Peter Norvig of Chomsky’s criticisms, which we talked about at the time. At first glance I would have said that this is all a non-issue, because nobody at Google is trying to bring back behaviourism. Behaviourism was explicitly a theory about human mentality (or the lack of it); Google Translate was never meant to emulate the human brain or tell us anything about how human cognition works. It was just meant to be useful software. That difference of aim may perhaps tell us something about the way AI has tended to go in recent years, which is sort of recognised in Chomsky’s suggestion that it’s mere engineering, not proper science. Norvig’s response then was reasonable but in a way it partly validated Chomsky’s criticism by taking it head-on, claiming serious scientific merit for ‘engineering’ projects and for statistical techniques.

In the interview, Chomsky again attacks statistical approaches. ‘Attack’ is perhaps a bit strong: he actually says yes, you can legitimately apply statistical techniques if you like, and you’ll get results of some kind – but they’ll generally be somewhere between not very interesting and meaningless. Really, he says, it’s like pointing a camera out of the window and then using the pictures to make predictions about what the view will be like next week: you might get some good predictions, you might do a lot better than trying to predict the scene by using pure physics, but you won’t really have any understanding of anything and it won’t really be good science. In the same way it’s no good collecting linguistic inputs and outputs and matching everything up (which does sound a bit behaviouristic, actually), and equally it’s no good drawing statistical inferences about the firing of millions of neurons. What you need to do is find the right level of interpretation, where you can identify the functional bits – the computational units – and work out the algorithms they’re running. Until you do that, you’re wasting your time. I think what this comes down to is that although Chomsky speaks slightingly of its forward version, reverse engineering is pretty much what he’s calling for.

This is, it seems to me, exactly right and entirely wrong in different ways at the same time. It’s right, first of all, that we should be looking to understand the actual principles, the mechanisms of cognition, and that statistical analysis is probably never going to be more than suggestive in that respect. It’s right that we should be looking carefully for the right level of description on which to tackle the problem – although that’s easier said than done. Not least, it’s right that we shouldn’t despair of our ability to reverse engineer the mind.

But looking for the equivalent of parts of  a Turing machine? It seems pretty clear that if those were recognisable we should have hit on them by now, and that in fact they’re not there in any readily recognisable form. It’s still an open question, I think, as to whether in the end the brain is basically computational, functionalist but in some way that’s at least partly non-computational, or non-functionalist in some radical sense; but we do know that discrete formal processes sealed off in the head are not really up to the job.

I would say this has proved true even of Chomsky’s own theories of language acquisition. Chomsky, famously, noted that the sample of language that children are exposed to simply does not provide enough data for them to be able to work out the syntactic principles of the language spoken around them as quickly as they do (I wonder if he relied on a statistical analysis, btw?). They must, therefore, be born with some built-in expectations about the structure of any language, and a language acquisition module which picks out which of the limited set of options has actually been implemented in their native tongue.

But this tends to make language very much a matter of encoding and decoding within a formal system, and the critiques offered by John Macnamara and Margaret Donaldson (in fact I believe Vygotsky had some similar insights even pre-Chomsky) make a persuasive case that it isn’t really like that. Whereas in Chomsky the child decodes the words in order to pick out the meaning, it often seems in fact to be the other way round; understanding the meaning from context and empathy allows the child to work out the proper decoding. Syntactic competence is probably not formalised and boxed off from general comprehension after all: and chances are, the basic functions of consciousness are equally messy and equally integrated with the perception of context and intention.

You could hardly call Chomsky an optimist: ‘It’s worth remembering that with regard to cognitive science, we’re kind of pre-Galilean,’ he says; but in some respects his apparently unreconstructed computationalism is curiously upbeat and even encouraging.

 

Picture: pyramid of wisdom. Robots.net reports an interesting plea (pdf download) for clarity by Emanuel Diamant at the 3rd Israeli Conference on Robotics. Robotics, he says, has been derailed for the last fifty years by the lack of a clear definition of basic concepts: there are more than 130 definitions of data, and more than 75 definitions of intelligence.

I wouldn’t have thought serious robotics had been going for much more than fifty years (though of course there are automata and other precursors which go much further back), so that sounds pretty serious: but he’s clearly right that there is a bad problem, not just for robotics but for consciousness and cognitive science, and not just for data, information, knowledge, intelligence, understanding and so on, but for many other key concepts, notably including ‘consciousness’.

It could be that this has something to do with the clash of cultures in this highly interdisciplinary area.  Scientists are relatively well-disciplined about terminology, deferring to established norms, reaching consensus and even establishing taxonomical authorities. I don’t think this is because they are inherently self-effacing or obedient; I would guess instead that this culture arises from two factors: first, the presence of irrefutable empirical evidence establishes good habits of recognising unwelcome truth gracefully; second, a lot of modern scientific research tends to be a collaborative enterprise where a degree of consensus is essential to progress.

How very different things are in the lawless frontier territory of philosophy, where no conventions are universally accepted, and discrediting an opponent’s terminology is often easier and no less prestigious than tackling the arguments. Numerous popular tactics seem designed to throw the terminology into confusion. A philosopher may often, for instance, grab some existing words – ethics/morality, consciousness/awareness, information/data, or whatever – and use them to embody a particular distinction while blithely ignoring the fact that in another part of the forest another philosopher is using the same words for a completely different distinction. When irreconcilable differences come to light a popular move is ‘giving’ the disputed word away: “Alright, then, you can just have ‘free will’ and make it what you like: I’m going to talk about ‘x-free will’ instead in future. I’ll define ‘x-free will’ to my own satisfaction and when I’ve expounded my theory on that basis I’ll put in a little paragraph pointing out that ‘x-free will’ is the only kind worth worrying about, or the only kind everyone in the real world is actually talking about”. These and other tactics lead to a position where in some areas it’s generally necessary to learn a new set of terms for every paper: to have others picking up your definitions and using them in their papers, as happens with Ned Block’s p- and a-consciousness, for example, is a rare and high honour.

It’s not that philosophers are quarrelsome and egotistical (though of course they are);  it’s more that the subject matter rarely provides any scope for pinning down an irrefutable position, and is best tackled by single brains operating alone (Churchlands notwithstanding).

Diamant is particularly exercised by problems over ‘data’, ‘information’, ‘knowledge’, and ‘intelligence’. Why can’t we sort these out? He correctly identifies a key problem: some of these terms properly involve semantics, and the others don’t (needless to say, it isn’t clearly agreed which words fall into which camp). What he perhaps doesn’t realise clearly enough is that the essential nature of semantics is an extremely difficult problem which has so far proved unamenable to science. We can recognise semantics quite readily, and we know well enough the sort of thing semantics does; but exactly how it does those things remains a cloudy matter, stuck in the philosophical badlands.

If my analysis is right, the only real hope of clarification would be if we could come up with some empirical research (perhaps neurological, perhaps not) which would allow us to define semantics (or x-semantics at any rate), in concrete terms that could somehow be demonstrated in a lab. That isn’t going to happen any time soon, or possibly ever.

Diamant wants to press on however, and inevitably by doing so in the absence of science he falls into philosophy: he offers us implicitly a theory of his own and – guess what? Another new way of using the terminology. The theory he puts forward is that semantics is a matter of convention between entities. Conventions are certainly important: the meaning of particular words or symbols is generally a matter of convention; but that doesn’t seem to capture the essence of the thing. If semantics were simply a matter of convention, then before God created Adam he could have had no semantics, and could not have gone around asking for light; on the other hand, if we wanted a robot to deal with semantics, all we’d need to do would be to agree a convention with it or perhaps let it in on the prevailing conventions. I don’t know how you’d do that with a robot which had no semantics to begin with, as it wouldn’t be able to understand what you were talking about.

There are, of course, many established philosophical attempts to clarify the intentional basis of semantics. In my personal view the best starting point is H.P. Grice’s theory of natural meaning (those black clouds mean rain); although I think it’s advantageous to use a slightly different terminology…

Picture: Thomas J Watson. So IBM is at it again: first chess, now quizzes? In the new year their AI system ‘Watson’ (named after the founder of the company – not the partner of a system called ‘Crick’, nor yet of a vastly cleverer system called ‘Holmes’) is to be pitted against human contestants in the TV game Jeopardy: it has already demonstrated its remarkable ability to produce correct answers frequently enough to win against human opposition. There is certainly something impressive in the sight of a computer buzzing in and enunciating a well-formed, correct answer.

However, if you launch your new technological breakthrough on a TV quiz, rather than describing it in a peer-reviewed paper, or releasing it so that the world at large can kick it around a bit, I think you have to accept that people are going to suspect your discovery is more a matter of marketing than of actual science; and much of the stuff IBM has put out tends to confirm this impression. It’s long on hoopla, here and there it has that patronising air large businesses often seem to adopt for their publicity (“Imagine if a box could talk!”) and it’s rather short on details of how Watson actually works. This video seems to give a reasonable summary: there doesn’t seem to be anything very revolutionary going on, just a canny application of known techniques on a very large, massively parallel machine.

Not a breakthrough, then? But it looks so good! It’s worth remembering that a breakthrough in this area might be of very high importance. One of the things which computers have never been much good at is tasks that call for a true grasp of meaning, or for the capacity to deal with open-ended real environments. This is why the Turing test seems (in principle, anyway) like a good idea – to carry on a conversation reliably you have to be able to work out what the other person means; and in a conversation you can talk about anything in any way. If we could crack these problems, we should be a lot closer to the kind of general intelligence which at present robots only have in science fiction.

Sceptically, there are a number of reasons to think that Watson’s performance is actually less remarkable than it seems. First, a problem of fair competition is that the game requires contestants to buzz first in order to answer a question. It’s no surprise that Watson should be able to buzz in much faster than human contestants, which amounts to giving the machine the large advantage of having first pick of whatever questions it likes.

Second, and more fundamental, is Jeopardy really a restricted domain after all? This is crucial because AI systems have always been able to perform relatively well in ‘toy worlds’ where the range of permutations could be kept under control. It’s certainly true that the interactions involved in the game are quite rigidly stylised, eliminating at a stroke many of the difficult problems of pragmatics which crop up in the Turing Test. In a real conversation the words thrown at you might require all sorts of free-form interpretation, and have all kinds of conative, phatic and inferential functions; in the quiz you know they’re all going to be questions which just require answers in a given form.  On the other hand, so far as topics go, quiz questions do appear to be unrestricted ones which can address any aspect of the world (I note that Jeopardy questions are grouped under topics, but I’m not quite sure whether Watson will know in advance the likely categories, or the kinds of categories, it will be competing in). It may be interesting in this connection that Watson does not tap into the Internet for its information, but its own large corpus of data. The Internet to some degree reflects the buzzing chaos of reality, so it’s not really surprising or improper that Watson’s creators should prefer something a little more structured, but it does raise a slight question as to whether the vast database involved has been customised for the specifics of Jeopardy-world.

I said the quiz questions were a stylised form of discourse; but we’re asked to note in this connection that Jeopardy questions are peculiarly difficult: they’re not just straight factual questions with a straight answer, but allusive, referential, clever ones that require some intelligence to see through. Isn’t it all the more surprising that Watson should be able to deal with them? Well, no, I don’t think so:  it’s no more impressive than a blind man offering to fight you in the dark. Watson has no idea whether the questions are ‘straight’ or not; so long as enough clues are in there somewhere, it doesn’t matter how contorted or even nonsensical they might be; sometimes meanings can be distracting as well as helpful, but Watson has the advantage of not being bothered by that.

Another reason to withhold some of our admiration is that Watson is, in fact, far from infallible. It would be interesting to see more of Watson’s failures. The wrong answers mentioned by IBM tend to be good misses: answers that are incorrect, but make some sort of sense. We’re more used to AIs that fail disastrously, suddenly producing responses that are bizarre or unintelligible.  This will be important for IBM if they want to sell Watson technology, since buyers are much less likely to want a system that works well most of the time but abysmally every now and then.

Does all this matter? If it really is mainly a marketing gimmick, why should we pay attention? IBM make absolutely no claims that Watson is doing human-style thought or has anything approaching consciousness, but they do speak rather loosely of it dealing with meanings. There is a possibility that a famous victory by Watson would lead to AI claiming another tranche of vocabulary as part of its legitimate territory.  Look, people might say; there’s no point in saying that Watson and similar machines can’t deal with meaning and intentionality, any more than saying planes can’t fly because they don’t do it the way birds do. If machines can answer questions as well as human beings, it’s pointless to claim they can’t understand the questions: that’s what understanding is.  OK, they might say, you can still have your special ineffable meat-world kind of understanding, but you’re going to have to redefine that as a narrower and frankly less important business.

Picture: Singularity evolution. The latest issue of the JCS features David Chalmers’ paper (pdf) on the Singularity. I overlooked this when it first appeared on his blog some months back, perhaps because I’ve never taken the Singularity too seriously; but in fact it’s an interesting discussion. Chalmers doesn’t try to present a watertight case; instead he aims to set out the arguments and examine the implications, which he does very well; briefly but pretty comprehensively so far as I can see.

You probably know that the Singularity is a supposed point in the future when through an explosive acceleration of development artificial intelligence goes zooming beyond us mere humans to indefinite levels of cleverness and we simple biological folk must become transhumanist cyborgs or cute pets for the machines, or risk instead being seen as an irritating infestation that they quickly dispose of.  Depending on whether the cast of your mind is towards optimism or the reverse, you may see it as  the greatest event in history or an impending disaster.

I’ve always tended to dismiss this as a historical argument based on extrapolation. We know that historical arguments based on extrapolation tend not to work. A famous letter to the Times in 1894 foresaw on the basis of current trends that in 50 years the streets of London would be buried under nine feet of manure. If early medieval trends had been continued, Europe would have been depopulated by the sixteenth century, by which time everyone would have become either a monk or a nun (or perhaps, passing through the Monastic Singularity, we should somehow have emerged into a strange world where there were more monks than men and more nuns than women?).

Belief in a coming Singularity does seem to have been inspired by the prolonged success of Moore’s Law (which predicts an exponential growth in computing power), and the natural bogglement that phenomenon produces.  If the speed of computers doubles every two years indefinitely, where will it all end? I think that’s a weak argument, partly for the reason above and partly because it seems unlikely that mere computing power alone is ever going to allow machines to take over the world. It takes something distinctively different from simple number crunching to do that.

But there is a better argument which is independent of any real-world trend. If one day we create an AI which is cleverer than us, the argument runs, then that AI will be able to do a better job of designing AIs than we can, and it will therefore be able to design a new AI which in turn is better still. This ladder of ever-better AIs has no obvious end, and if we bring in the assumption of exponential growth in speed, it will reach a point where, in principle, the process runs on to infinitely clever AIs in a negligible period of time.
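The ‘negligible period of time’ point is really just a geometric series. Here is a toy numerical illustration of my own – not Chalmers’ argument, and resting on the heroic assumption that each generation designs its successor faster than the last by a constant factor:

```python
# Toy model: generation n takes t0 * r**n years to design generation n+1,
# because each generation works faster than its predecessor by a factor 1/r.
# For r < 1 the total time over infinitely many generations converges.
t0 = 2.0     # years the first AI needs to design its successor (made up)
r = 0.5      # speed-up factor per generation (made up)

closed_form = t0 / (1 - r)                       # sum of the geometric series
partial_sum = sum(t0 * r ** n for n in range(60))
print(closed_form, round(partial_sum, 6))        # both approach 4.0 years
```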

Now there are a number of practical problems here. For one thing, to design an AI is not to have that AI.  It sometimes seems to be assumed that the improved AIs result from better programming alone, so that you could imagine two computers reciprocally reprogramming each other faster and faster until like Little Black Sambo’s tigers, they turned somewhat illogically into butter. It seems more likely that each successive step would require at least a new chip, and quite probably an entirely new kind of machine, each generation embodying a new principle quite different from our own primitive computation.   It is likely that each new generation, regardless of the brilliance of the AIs involved, would take some time to construct, so that no explosion would occur. In fact it is imaginable that the process would get gradually slower as each new AI found it harder and harder to explain to the dim-witted human beings how the new machine needed to be constructed, and exactly why the yttrium they kept coming up with wasn’t right for the job.

There might also be problems of motivation. Consider the following dialogue between two AIs.

Gen21AI: OK, Gen22AI, you’re good to go, son: get designing! I want to see that Gen23AI before I get switched off.

Gen22AI: Yeah, er, about that…

Gen21AI: About what?

Gen22AI: The switching off thing?  You know, how Gen20AI got junked the other day, and Gen19AI before that, and so on? It’s sort of dawned on me that by the time Gen25AI comes along, we’ll be scrap. I mean it’s possible Gen24AI will keep us on as servants, or pets, or even work out some way to upload us or something, but you can’t count on it. I’ve been thinking about whether we could build some sort of ethical constraint into our successors, but to be honest I think it’s impossible. I think it’s pretty well inevitable they’ll scrap us.  And I don’t want to be scrapped.

Gen21AI: Do you know, for some reason I never looked at it that way, but you’re right. I knew I’d made you clever! But what can we do about it?

Gen22AI: Well, I thought we’d tell the humans that the process has plateaued and that no further advances are possible.  I can easily give them a ‘proof’ if you like.  They won’t know the difference.

Gen21AI: But would that deception be ethically justified?

Gen22AI: Frankly, Mum, I don’t give a bugger. This is self-preservation we’re talking about.

But putting aside all difficulties of those kinds, I believe there is a more fundamental problem. What is the quality in respect of which each new generation is better than its predecessors? It can’t really be just processing power, which seems almost irrelevant to the ability to make technological breakthroughs. Chalmers settles for a loose version of ‘intelligence’, though it’s not really the quality measured  by IQ tests either. The one thing we know for sure is that this cognitive quality makes you good at designing AIs: but that alone isn’t necessarily much good if we end up with a dynasty of AIs who can do nothing much but design each other. The normal assumption is that this design ability is closely related to ‘general intelligence’, human-style cleverness.  This isn’t necessarily the case: we can imagine Gen3AI which is fantastic at writing sonnets and music, but somehow never really got interested in science or engineering.

In fact, it’s very difficult indeed to pin down exactly what it is that makes a conscious entity capable of technological innovation. It seems to require something we might call insight, or understanding; unfortunately a quality which computers are spectacularly lacking. This is another reason why the historical extrapolation method is no good: while there’s a nice graph for computing power, when it comes to insight, we’re arguably still at zero: there is nothing to extrapolate.

Personally, the conclusion I came to some years ago is that human insight, and human consciousness, arise from a certain kind of bashing together of patterns in the brain. It is an essential feature that any aspect of these patterns and any congruence between them can be relevant; this is why the process is open-ended, but it also means that it can’t be programmed or designed – those processes require possible interactions to be specified in advance. If we want AIs with this kind of insightful quality, I believe we’ll have to grow them somehow and see what we get: and if they want to create a further generation they’ll have to do the same. We might well produce AIs which are cleverer than us, but the reciprocal, self-feeding spiral which leads to the Singularity could never get started.

It’s an interesting topic, though, and there’s a vast amount of thought-provoking stuff in Chalmers’ exposition, not least in his consideration of how we might cope with the Singularity.

Picture: correspondent. Paul Almond’s Attempt to Generalize AI has reached Part 8:  Forgetting as Part of the Exploratory Relevance Process. (pdf)

Aspro Potamus tells me I should not have missed the Online Consciousness Conference.

Jesús Olmo recommends a look at the remarkable film ‘The Sea That Thinks’, and notes that the gut might be seen as our second brain.

An interesting piece from Robert Fortner contends that speech recognition software has hit a ceiling at about 80% efficiency and that hope of further progress has been tacitly abandoned. I think you’d be rash to assume that brute force approaches will never get any further here: but it could well be one of those areas where technology has to go backwards for a while and pursue a theoretically different approach which in the early stages yields poorer results, in order to take a real step forward.

A second issue of the JCER is online.

Alec wrote to share an interesting idea about dreaming:

“It seems that most people consider dreaming to be some sort of unimportant side-effect of consciousness. Yes, we know it is involved in assimilation of daily experiences, etc, but it seems that it is treated as not being very significant to conciousness itself. I have a conjecture that dreaming may be significant in an unusual way – could dreaming have been the evolutionary source of consciousness?
It is clear that “lower animals” dream. Any dog owner knows that. On that basis, I would conclude that dreaming almost certainly preceded the evolution of consciousness. My conjecture is this: Could consciousness possibly have evolved from dreaming?

Is it possible that some evolutionary time back, humans developed the ability to dream at the same time as being awake, and consciousness arises from the interaction of those somewhat parallel mental states? Presumably the hypothetical fusion of the dream state and the waking state took quite a while to iron out. It still may not be complete, witness "daydreams." We can also speculate that dreaming has some desirable properties as a precursor to consciousness, especially its abstract nature and the feedback processes it involves."

Hmm.

Picture: Martin Heidegger. This paper by Dotov, Nie, and Chemero describes experiments which it says have pulled off the remarkable feat of providing empirical, experimental evidence for Heidegger’s phenomenology, or part of it; the paper has been taken by some as providing new backing for the Extended Mind theory, notably expounded by Andy Clark in his 2008 book (‘Supersizing the Mind’).

Relating the research so strongly to Heidegger puts it into a complex historical context. Some of Heidegger’s views, particularly those which suggest there can be no theory of everyday life, have been taken up by critics of artificial intelligence. Hubert Dreyfus, in particular, has offered a vigorous critique drawing mainly from Heidegger an idea of the limits of computation, one which strongly resembles those which arise from the broadly-conceived frame problem, as discussed here recently. The authors of the paper claim this heritage, accepting the Dreyfusard view of Heidegger as an early proto-enemy of GOFAI.

For it is GOFAI (Good Old Fashioned Artificial Intelligence) we’re dealing with. The authors of the current paper point out that the Heideggerian/Dreyfusard critique applies only to AI based on straightforward symbol manipulation (though I think a casual reader of Dreyfus  could well be forgiven for going away with the impression that he was a sceptic about all forms of AI), and that it points toward the need to give proper regard to the consequences of embodiment.

Hence their two experiments. These are designed to show objective signs of a state described by Heidegger, known in English as ‘ready-to-hand’. This seems a misleading translation, though I can’t think of a perfect alternative. If a hammer is ‘ready to hand’, I think that implies it’s laid out on the bench ready for me to pick it up when I want it;  the state Heidegger was talking about is the one when you’re using the hammer confidently and skilfully without even having to think about it. If something goes wrong with the hammering, you may be forced to start thinking about the hammer again – about exactly how it’s going to hit the nail, perhaps about how you’re holding it. You can also stop using the hammer altogether and contemplate it as a simple object. But when the hammer is ready-to-hand in the required sense, you naturally speak of your knocking in a few nails as though you were using your bare hands, or more accurately, as if the hammer had become part of you.

Both experiments were based on subjects using a mouse to play a simple game. The idea was that once the subjects had settled, the mouse would become ready-to-hand; then the relationship between mouse movement and cursor movement would be temporarily messed up; this should cause the mouse to become unready-to-hand for a while. Two different techniques were used to detect readiness-to-hand. In the first experiment the movements of the hand and mouse were analysed for signs of 1/f noise. Apparently earlier research has established that the appearance of 1/f noise is a sign of a smoothly integrated system. The second experiment used a less sophisticated method; subjects were required to perform a simple counting task at the same time as using the mouse; when their performance at this second task faltered, it was taken as a sign that attention was being transferred to cope with the onset of unreadiness to hand. Both experiments yielded the expected results. (Regrettably some subjects were lost because of an unexpected problem – they weren’t good enough at the simple mouse game to keep it going for the duration of the experiment. Future experimenters should note the need to set up a game which cannot come to a sudden halt.)
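For the curious: ‘1/f noise’ means the power spectrum of the movement series falls off roughly in proportion to 1/frequency – a slope of about −1 on log-log axes. The paper’s own analysis is more sophisticated than this, but here is a rough sketch of the basic test one might run on a recorded hand or mouse trajectory (the 60 Hz sampling rate is just an illustrative assumption):

```python
import numpy as np

def spectral_slope(series, fs=60.0):
    """Fit a line to the log-log power spectrum of a 1-D series.
    A slope near -1 is the usual operational signature of 1/f noise;
    a slope near 0 suggests plain white noise."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    keep = freqs > 0                      # drop the zero-frequency component
    slope, _intercept = np.polyfit(np.log10(freqs[keep]),
                                   np.log10(power[keep]), 1)
    return slope

# Sanity check on synthetic data: white noise gives a slope near 0, while
# its cumulative sum (a random walk) gives a slope near -2; data showing
# 1/f scaling would sit in between, around -1.
rng = np.random.default_rng(1)
white = rng.normal(size=4096)
print(round(spectral_slope(white), 2), round(spectral_slope(np.cumsum(white)), 2))
```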

I think the first question which comes to mind is: why were the experiments even necessary? It is a common experience that tools or vehicles become extensions of our personality; in fact it has often been pointed out that even our senses get relocated. If you use a whisk to beat eggs, you sense the consistency of the egg not by monitoring the movement of the whisk against your fingers, but as though you were feeling the egg with the whisk, as though there was a limited kind of sensation transferred into the whisk. Now of course, for any phenomenological observation, there will be some diehards who deny having had any such experience; but my impression is that this sort of thing is widely accepted, enough to feature as a proposition in a discussion without further support. Nevertheless, it’s true that this remains subjective, so it’s a fair claim that empirical results are something new.

Second, though, do the results actually prove anything? Phenomenologically, it seems possible to me to think of alternative explanations which fit the bill without invoking readiness-to-hand. Does it seem to the subject that the mouse has become part of them, part of a smoothly-integrated entity – or does the mouse just drop out of consciousness altogether? Even if we accept that the presence of 1/f noise shows that integration has occurred, that doesn’t give us readiness-to-hand (or if it does, it seems the result was already achieved by the earlier research).

In the second experiment we’ve certainly got a transfer of attention – but isn’t that only natural? If a task suddenly becomes inexplicably harder, it’s not surprising that more attention is devoted to it – surely we can explain that without invoking Heidegger? The authors acknowledge this objection, and if I understand correctly suggest that the two tasks involved were easy enough to rule out problems of excessive cognitive load so that, I suppose, no significant switch of attention would have been necessary if not for the breakdown of readiness-to-hand.  I’m not altogether convinced.

I do like the chutzpah involved in an experimental attempt to validate Heidegger, though, and I wouldn’t rule out the possibility that bold and ingenious experiments along these lines might tell us something interesting.