Third Wave AI?

DARPA is launching a significant new AI initiative; it could be a bad mistake.

DARPA (the Defense Advanced Research Projects Agency) has an awesome record of success in promoting the development of computer technology; without its interventions we probably wouldn’t be talking seriously about self-driving cars, and we might not have any internet. So any big DARPA project is going to be at least interesting and quite probably groundbreaking. This one seeks to bring in a Third Wave of AI. The first wave, on this showing, was a matter of humans knowing what needed to be done and just putting that knowledge into coded rules (this actually smooshes together a messy history of some very different approaches). The second wave involves statistical techniques and machines learning for themselves; recently we’ve seen big advances from this kind of approach. While there’s still more to be got out of these earlier waves, DARPA foresees a third one in which context-based programs are able to explain and justify their own reasoning. The overall idea is well explained by John Launchbury in this video.

In many ways this is timely, as one of the big fears attached to recent machine learning projects has arisen from the fact that there is often no way for human beings to understand, in any meaningful sense, how they work. If you don’t know how a ‘second wave’ system is getting its results, you cannot be sure it won’t suddenly go wrong in bizarre ways (and in fact they do). There have even been moves to make it a legal requirement that such systems be explicable.

I think there are two big problems, though. The demand for an explanation implicitly requires one that human beings can understand. This might easily hobble computer systems unnecessarily, denying us immensely useful new technologies that just happen to be slightly beyond our grasp. One of the limitations of human cognition, for example, is that we can only hold so many things in mind at once. Typically we get round this by structuring and dividing problems so we can deal with simple pieces one at a time; but it’s likely there are cognitive strategies that this rules out. Already I believe there are strategies in chess, devised by computers, that clearly work but whose conditional structure is so complex no human can understand them intuitively. So it could be that the third wave actually restores some of the limitations of the first, by tying progress to things humans already get.

The second problem is that we still have no real idea how much of human cognition works. Recent advances in visual recognition have brought AI to levels that seem to match or exceed human proficiency, but the way they break down suddenly in weird cases is so unlike human thought that it shows how different the underlying mechanisms must still be. If we don’t know how humans do explainable recognition, where is our third wave going to come from?

Of course, the whole framework of the three waves is a bit of a rhetorical trick. It rewrites and recategorises the vastly complex, contentious history of AI into something much simpler; it discreetly overlooks all the dead ends and winters of disillusion that actually featured quite prominently in that story. The result makes the ‘third wave’ seem a natural inevitability, so that we ask only when and by whom, not whether and how.

Still, even projects whose success is not inevitable sometimes come through…

The Mark of the Mental

An interesting review by Guillaume Fréchette, of Mark Textor’s new book Brentano’s Mind. Naturally this deals among other things with the claim for which Brentano is most famous: that intentionality is the distinctive feature of the mental (and so of thoughts, consciousness, awareness, and so on). Textor apparently makes five major claims, but I only want to glance at the first one, that in fact ‘Intentionality is an implausible mark of the mental’.

What was Brentano on about, anyway? Intentionality is the property of pointing at, or meaning, or being about, something. It was discussed in medieval philosophy and then made current again by Brentano when he, like others, was trying to establish an empirical science of psychology for the first time. In his view:

“intentional in-existence, the reference to something as an object, is a distinguishing characteristic of all mental phenomena. No physical phenomenon exhibits anything similar.”

Textor apparently thinks that there’s a danger of infinite regress here. He reads Brentano’s mention of in-existence as meaning we need to think of an immanent object ‘existing in’ our mind in order to think of an object ‘out there’; but in that case, doesn’t thinking of the immanent object require a further immanent object, and so on? There seems to be more than one way of escaping this regress, however. Perhaps we don’t need to think of the immanent object, it just has to be there. Maybe awareness of an external object and introspecting an internal one are quite different processes, the latter not requiring an immanent object. Perhaps the immanent object is really a memory, or perhaps the whole business of immanent objects reads more into Brentano than we should.

Textor believes Brentano is pushed towards primitivism – hey, this just is the mark of the mental, full stop – and thinks it’s possible to do better. I think this is nearly right, except it assumes Brentano must be offering a theory, even if it’s only the bankrupt one of primitivism. I think Brentano observes that intentionality is the mark of the mental, and shrugs. The shrug is not a primitivist thesis, it just expresses incomprehension. To say that one does not know x is not to say that x is unknowable. I could of course be wrong, both about Brentano, and particularly about Textor.

What I think you have to do is go back and ask why Brentano thought intentionality was the mark of the mental in the first place. I think it’s a sort-of empirical observation. All thoughts are about, or of, something. If we try to imagine a thought that isn’t about anything, we run into difficulty. Is there a difference between not thinking of anything and not thinking at all (thinking of nothing may be a different matter)? Similarly, can there be awareness which isn’t awareness of anything? One can feel vast possible disputes about this opening up even as we speak, but I should say it is at least pretty plausible that all mental states feature intentionality.

Physical objects, such as stones, are not about anything; though they can be, like books, if we have used the original intentionality of our minds to bestow meaning on them; if in fact we intend them to mean something. Once again, this is disputable, but not, to me, implausible.

Intentionality remains a crucially important aspect of the mind, not least because we have got almost nowhere with understanding it. Philosophically there are of course plenty of contenders; ideas about how to build intentionality out of information, out of evolution, or indeed to show how original intentionality is a bogus idea in the first place. To me, though, it’s telling that we’ve got nowhere with replicating it. Where AI would seem to require some ability to handle meaning – in translation, for example – it has to be avoided and a different route taken. While it remains mysterious, there will always be a rather large hole in our theories of consciousness.

Intentionality and Introspection

Some people, I know, prefer to get their philosophy in written form; but if you like videos it’s well worth checking out Richard Brown’s YouTube series Consciousness This Month.

This one, Ep 4, is about mental contents, with Richard setting out briefly but clearly a couple of the major problems (look at the camera, Richard!).

Introspection, he points out, is often held to be incorrigible or infallible on certain points. You can be wrong about being at the dentist, but you can’t be wrong about being in pain. This is because of the immediacy of the experience. In the case of the dentist, we know there is a long process between light hitting your retina and the dentist being presented to consciousness. Various illusions and errors provide strong evidence for the way all sorts of complex ‘inferences’ and conclusions have been drawn by your unconscious visual processing system before the presence of the dentist gets inserted into your awareness in the guise of a fact. There is lots of scope for that processing to go wrong, so that the dentist’s presence might not be a fact at all. There’s much less processing involved in our perception of someone tugging on a tooth, but still maybe you could be dreaming or deluded. But the pain is inside your mind already; there’s no scope for interpretation and therefore no scope for error.

My own view on this is that it isn’t our sense data that have to be wrong, it’s our beliefs about our experiences. If the results of visual processing are misleading, we may end up with the false belief that there is a dentist in the room. But that’s not the only way for us to pick up false beliefs, and nothing really prevents our holding false beliefs about being in pain. There is some sense in which the pain can’t be wrong, but that’s more a matter of truth and falsity being properties of propositions, not of pains.

Richard also sketches the notion of intentionality, or ‘aboutness’, reintroduced to Western philosophy as a key idea by Brentano, who took it to be the distinguishing feature of the mental. When we think about things it seems as if our thought is directed towards an external object. In itself that seems to require some explanation, but it gets especially difficult when you consider that we can easily talk about non-existent or even absurd things. This is the kind of problem that caused Meinong to introduce a distinction between existence and subsistence, so that the objects of thought could have a manageable ontological status without being real in the same way as physical objects.

Regulars may know that my own view is that consciousness is largely a matter of recognition. Humans, we might say, are superusers of recognition. Not only can we recognise objects, we can recognise patterns and use them for a sort of extrapolation. The presence of a small entity is recognised, but also a larger entity of which it is part. So we recognise dawn, but also see that it is part of a day. From the larger entity we can recognise parts not currently present, such as sunset, and this allows us to think about entities that are distant in time or space. But the same kind of extrapolation allows us to think about things that do not, or even could not, exist.

I’m looking forward to seeing Richard’s future excursions.

Secrets of Consciousness

Here’s an IAI discussion between Philip Goff, Susan Blackmore, and Nicholas Humphrey, chaired by Barry Smith. There are some interesting points made, though overall it may have been too ambitious to try to get a real insight into three radically different views on the broad subject of phenomenal consciousness in a single short discussion. I think Goff’s panpsychism gets the lion’s share of attention and comes over most clearly. In part this is perhaps because Goff is good at encapsulating his ideas briefly; in part it may be because of the noticeable bias in all philosophical discussion towards the weirdest idea getting most discussion (it’s more fun and more people want to contradict it); it may be partly just a matter of Goff being asked first and so getting more time.

He positions panpsychism (the view, approximately, that consciousness is everywhere) attractively as the alternative to the old Scylla and Charybdis of dualism on one hand and over-enthusiastic materialist reductionism on the other. He dodges some of the worst of the combination problem by saying that his version of panpsychism doesn’t say that every arbitrary object – like a chair – has to be conscious, only that there is a general, very simple form of awareness in stuff generally – maybe at the level of elementary particles. Responding to the suggestion that panpsychism is the preference for theft over honest toil (just assume consciousness) he rightly says that not all explanations have to be reductive explanations, but makes a comparison I think is dodgy by saying that James Clerk Maxwell, after all, did not reduce electromagnetism to mass or other known physical entities. No, but didn’t Maxwell reduce light, electricity, and magnetism to one phenomenon? (He also provided elegant equations, which I think no-one is about to do for consciousness (Yes, Tononi, put your hand down, we’ll talk about that another time)).

Susan Blackmore is a pretty thorough sceptic: there really is no such thing as subjective consciousness. If we meditate, she says, we may get to a point where we understand this intuitively, but alas, it is hard to explain so convincingly in formal theoretical terms. Maybe that’s just what we should expect though.

Humphrey is also a sceptic, but of a more cautious kind: he doesn’t want to say that there is no such thing as consciousness, but he agrees it is a kind of illusion and prefers to describe it as a work of art (thereby, I suppose, avoiding objections along the lines that consciousness can’t be an illusion because the having of illusions presupposes the having of consciousness by definition). He positions himself as less of a sceptic in some ways than the other two, however: they, he says, hold that consciousness cannot be observed through behaviour; but if not, what are we even doing talking about it?