Illusions for robots

Neural networks really seem to be going places recently. Last time I mentioned their use in sophisticated translation software, but they’re also steaming ahead with new successes in the recognition of visual images. Recently there was a claim from MIT that the latest systems were catching up with primate brains at last. Also from MIT (also via MLU), though, comes an intriguing study into what we could call optical illusions for robots, which cause the systems to make mistakes that are incomprehensible to us primates. The graphics in the grid on the right apparently look like a selection of digits between one and six in the eyes of these recognition systems. Nobody really knows why, because of course neural networks are trained, not programmed, and develop their own inscrutable methods.

How then, if we don’t understand, could we ever create such illusions? Optical illusions for human beings exploit known methods of visual analysis used by the brain, but if we don’t know what method a neural network is using, we seem to be stymied. What the research team did was to use one of their systems in reverse, getting it to create images instead of analysing them. These were then evaluated by a similar system and refined through several iterations until they were accepted with a very high level of certainty.
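Just to make the ‘running it in reverse’ idea concrete, here is a minimal sketch of the general technique (gradient ascent on a class score, starting from noise). The small classifier `net`, the parameter values and the loop structure are my own illustrative assumptions, not details of the systems MIT actually used:

```python
# Illustrative sketch only: push a random-noise image towards whatever the
# classifier will call a given digit with high confidence. `net` stands in
# for any hypothetical image classifier returning class logits.
import torch
import torch.nn.functional as F

def make_fooling_image(net, target_class, steps=500, lr=0.05, size=(1, 1, 28, 28)):
    net.eval()
    img = torch.randn(size, requires_grad=True)   # start from random noise
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -F.log_softmax(net(img), dim=1)[0, target_class]  # maximise target score
        loss.backward()
        opt.step()
        with torch.no_grad():
            img.clamp_(0.0, 1.0)                  # keep pixel values valid
    confidence = F.softmax(net(img), dim=1)[0, target_class].item()
    return img.detach(), confidence
```

In the study as described, the candidate images were generated by one system and judged by a similar one over several iterations; the sketch just shows the basic ‘create, evaluate, refine until the confidence is very high’ loop in its simplest gradient form.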

This seems quite peculiar and the first impression is that it rather seriously undermines our faith in the reliability of neural network systems. However, there’s one important caveat to take into account: the networks in question are ‘used to’ dealing with images in which the crucial part to be identified is small in relation to the whole. They are happy ignoring almost all of the image. So to achieve a fair comparison with human recognition we should perhaps think of the question not as ‘do these look like numbers to you?’ but as ‘can you find one of the digits from one to six hidden somewhere in this image?’. On that basis the results seem easier to understand.

There still seem to be some interesting implications, though. The first is that, as with language, AI systems are achieving success with methods that do not much resemble those used by the human brain. There’s an irony in this happening with neural networks, because in the old dispute between GOFAI and networks it was the network people who were trying to follow a biological design, at least in outline. The opposition wanted to treat cognition as a pure engineering problem: define what we need, identify the best way to deliver it, and don’t worry about copying the brain. This is the school of thought that likes to point out that we didn’t achieve flight by making machines with flapping, feathery wings. Early network theory, going right back to McCulloch and Pitts, held that we were better off designing something that looked at least broadly like the neurons in the brain. In fact, of course, the resemblance has never been that close, and the focus has generally been more on results than on replicating the structures and systems of biological brains; you could argue that modern neural networks are no more like the brain than fixed-wing aircraft are like birds (or bats). At any rate, the prospect of equalling human performance without doing it the human way raises the same nightmare scenario I was talking about last time: robots that are not people but get treated as if they were (and perhaps people being treated like machines as a consequence).

A second issue is whether the deception which these systems fall into points to a general weakness. Could it be that these systems work very well when dealing with ‘ordinary’ images but can go wildly off the rails when faced with certain kinds of unusual ones – even when being put to practical use? It’s perhaps not very likely that a system is going to run into the kind of truly bizarre image we seem to be dealing with, but a more realistic concern might be the potential scope for sabotage or subversion on the part of some malefactor. One safeguard against this possibility is that the images in question were designed by, as it were, sister systems, ones that worked pretty much the same way and presumably shared the same quirks. Without owning one of these systems yourself it might be difficult to devise illusions that worked – unless perhaps there are general illusions that all network systems are more or less equally likely to be fooled by? That doesn’t seem very likely, but it might be an interesting research project. The other safeguard is that these systems are unlikely to be used without some additional checks, perhaps even more contextual processing of broadly the kind that the human mind obviously brings to the task.

The third question is – what is it like to be an AI deceived by an illusion? There’s no reason to think that these machines have subjective experience – unless you’re one of those who is prepared to grant a dim glow of awareness to quite simple machines – but what if some cyborg with a human brain, or a future conscious robot, had systems like these as part of its processing apparatus rather than the ones provided by the human brain?  It’s not implausible that the immense plasticity of the human brain would allow the inputs to be translated into normal visual experience, or something like it.  On the whole I think this is the most likely result, although there might be quirks or deficits (or hey, enhancements, why not) in the visual experience.  The second possibility is that the experience would be completely weird and inexpressible and although the cyborg/robot would be able to negotiate the world just fine, its experience would be like nothing we’ve ever had, perhaps like nothing we can imagine.

The third possibility is that it would be like nothing. There would be no experience as such; the data and the knowledge about the surroundings would appear in the cyborg/human’s brain but there would be nothing it was like for that to happen. This is the answer qualophile sceptics would expect for a pure robot brain, but the cyborg is more worrying. Human beings are supposed to experience qualia, but when do they arise? Is it only after all the visual processing has been done – when the data arrive in the ‘Cartesian Theatre’ which Dennett has often told us does not exist? Is it, instead, in the visual processing modules or at the visual processing stage? If so, then perhaps we were wrong to doubt that MIT’s systems are having experiences. Perhaps the cyborg gets flawed or partial qualia – but what would that even mean…?

 

Neurons and Free Will

A few years ago we noted the remarkable research by Fried, Mukamel, and Kreiman which reproduced and confirmed Libet’s famous research. Libet, in brief, had found good evidence using EEG that a decision to move was formed about half a second before the subject in question became consciously aware of it; Fried et al produced comparable results by direct measurement of neuron firing.

In the intervening years, electrode technology has improved and should now make it possible to measure multiple sites. The scanty details here indicate that Kreiman, with support from MIT, plans to repeat the research in an enhanced form; in particular he proposes to see whether, having identified the formed intention to move, it is then possible to stop it before the action takes place. This resembles the faculty of ‘free won’t’ by which Libet himself hoped to preserve some trace of free will.
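The detection half of that proposal can be caricatured very crudely: watch a population firing rate and flag the moment it crosses some threshold, which in Libet-style results happens around half a second before the reported urge to move. The bin size, threshold and numbers below are invented purely for illustration and have nothing to do with Fried et al’s actual analysis or Kreiman’s planned one:

```python
# Toy illustration: flag a 'formed intention' when the smoothed population
# firing rate crosses a threshold. All parameter values are made up.
import numpy as np

def detect_intention(spike_counts, bin_ms=50, threshold_hz=30.0, smooth_bins=4):
    """spike_counts: spikes per bin, summed over the recorded units."""
    rate_hz = spike_counts * (1000.0 / bin_ms)            # counts per bin -> Hz
    kernel = np.ones(smooth_bins) / smooth_bins
    smoothed = np.convolve(rate_hz, kernel, mode="same")  # crude boxcar smoothing
    above = np.flatnonzero(smoothed > threshold_hz)
    return None if above.size == 0 else above[0] * bin_ms # detection time in ms

# Example: the rate ramps up in the last half-second before a movement at 1500 ms
counts = np.concatenate([np.random.poisson(1.0, 20), np.random.poisson(3.0, 10)])
print(detect_intention(counts))
```

The interesting part of the proposed experiment is then whether a signal raised at that point could be used to stop the movement before it actually takes place.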

From the MIT article it is evident that Kreiman is a determinist and believes that his research confirms that position. It is generally believed that Libet’s findings are incompatible with free will in the sense that they seem to show that consciousness has no effect on our actual behaviour.

That actually sheds an interesting side-light on our view of what free will is. A decision to move still gets made, after all; why shouldn’t it be freely made even though it is unconscious? There’s something unsatisfactory about unconscious free will, it seems. Our desire for free will is a desire to be in control, and by that we mean a desire for the entity that does the talking to be in control. We don’t really think of the unconscious parts of our mind as being us; or at least not in the same way as that gabby part that claims responsibility for everything (the part of me that is writing this now, for example).

This is a bit odd, because the verbal part of our brain obviously does the verbals; it’s strange and unrealistic to think it should also make the decisions, isn’t it? Actually if we are careful to distinguish between the making of the decision and being aware of the decision – which we should certainly do, given that one is clearly a first order mental event and the other equally clearly second order – then it ceases to be surprising that the latter should lag behind the former a bit. Something has to have happened before we can be aware of it, after all.

Our unease about this perhaps relates to the intuitive conviction of our own unity. We want the decision and the awareness to be a single event; we want conscious acts to be, as it were, self-illuminating, and it seems to be precisely that which the research ultimately denies us.

It is the case, of course, that the decisions made in the research are rather weird ones. We’re not often faced with the task of deciding to move our hands at an arbitrary time for no reason. Perhaps the process is different if we are deciding which stocks and shares to buy? We may think about the pros and cons explicitly, and we can see the process by which the conclusion is reached; it’s not plausible that those decisions are made unconsciously and then simply notified to consciousness, is it?

On the other hand, we don’t think, do we, that the process of share-picking is purely verbal? The words flowing through our consciousness are signals of a deeper imaginative modelling, aren’t they? If that is the case, then the words might still be lagging. Perhaps the distinction to be drawn is not really between conscious and unconscious, but between simply conscious and explicitly conscious. Perhaps we just shouldn’t let the talky bit pretend to be the whole of consciousness just because the rest is silent.

Brain on a chip

Following on from the preceding discussion, Doru kindly provided this very interesting link to information about a new chip developed at MIT which is designed to mimic the function of real neurons.

I hadn’t realised how much was going on, but it seems MIT is by no means alone in wanting to create such a chip. In the previous post I mentioned Dharmendra Modha’s somewhat controversial simulations of mammal brains: under his project leadership IBM, with DARPA participation, is now also working on a chip that simulates neuronal interaction. But while MIT and IBM slug it out, those pesky Europeans had already produced a neural chip as part of the FACETS project back in 2009. Or had they? FACETS is now closed and its work continues within the BrainScaleS project, working closely with Henry Markram’s Blue Brain project at EPFL, in which IBM, unless I’m getting confused by now, is also involved. Stanford, and no doubt others I’ve missed, are involved in the same kind of research.

So it seems that a lot of people think a neuron-simulating chip is a promising line to follow; if I were cynical I would also glean from the publicity that producing one that actually does useful stuff is not as easy as producing a design or a prototype; nevertheless it seems clear that this is an idea with legs.

What are these chips actually meant to do? There is a spectrum here, from the pure simulation of what real brains really do to a loose importation of a functional idea which might be useful in computation regardless of biological realism. One obstacle for chip designers is that not all neurons are the same. If you are at the realist end of the spectrum, this is a serious issue but not necessarily an insoluble one. If we had to simulate the specific details of every single neuron in a brain the task would become insanely large, but it is probable that neurons are to some degree standardised. Categorising them is, so far as I know, a task which has not been completed for any complex brain. For Caenorhabditis elegans, the only organism whose connectome is fully known, it turned out that the number of categories was only slightly lower than the number of neurons, once allowance was made for bilateral symmetry; but that probably just reflects the very small number of neurons possessed by Caenorhabditis (about 300), and it is highly likely that in a human brain the ratio would be much more favourable. We might not have to simulate more than a few hundred different kinds of standard neuron to get a pretty good working approximation of the real thing.
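To give the ‘standard neuron types’ idea a concrete shape, here is a minimal sketch of my own (nothing from any of the actual chip projects) in which a ‘type’ is simply a shared parameter set plugged into the same basic leaky integrate-and-fire equations:

```python
# Illustrative only: a handful of standardised neuron 'types', each just a
# parameter set for one common leaky integrate-and-fire model.
import numpy as np

NEURON_TYPES = {
    "fast_spiking":    dict(tau_ms=10.0, v_thresh=-50.0, v_reset=-65.0),
    "regular_spiking": dict(tau_ms=20.0, v_thresh=-52.0, v_reset=-70.0),
    "slow_adapting":   dict(tau_ms=40.0, v_thresh=-48.0, v_reset=-68.0),
}

def simulate_lif(neuron_type, input_current, dt_ms=1.0, v_rest=-65.0, r_m=10.0):
    """Return spike times (ms) for a given input current trace."""
    p = NEURON_TYPES[neuron_type]
    v, spikes = v_rest, []
    for step, i_ext in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_ext) * (dt_ms / p["tau_ms"])  # leaky integration
        if v >= p["v_thresh"]:
            spikes.append(step * dt_ms)
            v = p["v_reset"]                                        # fire and reset
    return spikes

# The same constant drive makes the fast type fire more often than the slow one
drive = np.full(200, 2.0)
print(len(simulate_lif("fast_spiking", drive)), len(simulate_lif("slow_adapting", drive)))
```

A few hundred such parameter sets, wired together in the right pattern, is roughly what the ‘working approximation’ above would amount to – though whether real neurons can be standardised that far is exactly the open question.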

But of course we don’t necessarily care that much about biological realism. Simulating all the different types of neurons might be a task like simulating real feathers, with the minute intricate barbicel latching structures – still unreplicated by human technology so far as I know – which make them such sophisticated air controllers, whereas to achieve flight it turns out we don’t need to consider any structure below the level of wing. It may well be that one kind of simulated neuron will be more than enough for many revolutionary projects, and perhaps even for some form of consciousness.

It’s very interesting to see that the MIT chip is described as working in a non-digital, analog way (Does anyone now remember the era when no-one knew whether digital or analog computers were going to be the wave of the future?). Stanford’s Neurogrid project is also said to use analog methods, while BrainScaleS speaks of non-von Neumann approaches, which could refer to localised data storage or to parallelism but often just means ‘unconventional’. This all sounds like a tacit concession to those who have argued that the human mind is in some important respects non-computational: Penrose for mathematical insight, Searle for subjective experience, to name but two. My guess is that Penrose would be open-minded about the capacities of a non-computational neuron chip, but that Searle would probably say it was still the wrong kind of stuff to support consciousness.

In one respect the emergence of chips that mimic neurons is highly encouraging: it represents a nearly-complete bridge between neurology at one end and AI at the other. In both fields people have spoken of ‘connectionism’ in slightly different senses, but now there is a real prospect of the two converging. This is remarkable – I can’t think of another case where two different fields have tunnelled towards each other and met so neatly – and in its way seems to be a significant step towards the reunification of the physical and the mental. But let’s wait and see if the chips live up to the promise.

AI Resurgent

Where has AI (or perhaps we should talk about AGI) got to now? h+ magazine reports remarkably buoyant optimism in the AI community about the achievement of Artificial General Intelligence (AGI) at a human level, and even beyond. A survey of opinion at a recent conference apparently showed that most believed that AGI would reach and surpass human levels during the current century, with the largest group picking out the 2020s as the most likely decade. If that doesn’t seem optimistic enough, they thought this would occur without any additional funding for the field, and some even suggested that additional money would be a negative, distracting factor.

Of course those who have an interest in AI would tend to paint a rosy picture of its future, but the survey just might be a genuine sign of resurgent enthusiasm, a second wind for the field (‘second’ is perhaps understating matters, but still).  At the end of last year, MIT announced a large-scale new project to ‘re-think AI’. This Mind Machine Project involves some eminent names, including none other than Marvin Minsky himself. Unfortunately (following the viewpoint mentioned above) it has $5 million of funding.

The Project is said to involve going back and fixing some things that got stalled during the earlier history of AI, which seems a bit of an odd way of describing it, as though research programmes that didn’t succeed had to go back and relive their earlier phases. I hope it doesn’t mean that old hobby-horses are to be brought out and dusted off for one more ride.

The actual details don’t suggest anything like that. There are really four separate projects:

  • Mind: Develop a software model capable of understanding human social contexts – the signposts that establish these contexts, and the behaviors and conventions associated with them.
    Research areas: hierarchical and reflective common sense
    Lead researchers: Marvin Minsky, Patrick Winston
  • Body: Explore candidate physical systems as substrate for embodied intelligence
    Research areas: reconfigurable asynchronous logic automata, propagators
    Lead researchers: Neil Gershenfeld, Ben Recht, Gerry Sussman
  • Memory: Further the study of data storage and knowledge representation in the brain; generalize the concept of memory for applicability outside embodied local actor context
    Research areas: common sense
    Lead researcher: Henry Lieberman
  • Brain and Intent: Study the embodiment of intent in neural systems. It incorporates wet laboratory and clinical components, as well as a mathematical modeling and representation component. Develop functional brain and neuron interfacing abilities. Use intent-based models to facilitate representation and exchange of information.
    Research areas: wet computer, brain language, brain interfaces
    Lead researchers: Newton Howard, Sebastian Seung, Ed Boyden

This all looks very interesting. The theory of reconfigurable asynchronous logic automata (RALA) represents a new approach to computation which, instead of concealing the underlying physical operations behind high-level abstraction, makes the physical causality apparent: instead of physical units being represented in computer programs only as abstract symbols, RALA is based on a lattice of cells that asynchronously pass state tokens corresponding to physical resources. I’m not sure I really understand the implications of this – I’m accustomed to thinking that computation is computation whether done by electrons or fingers – but on the face of it there’s an interesting comparison with what some have said about consciousness requiring embodiment.
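I won’t pretend to reproduce the real RALA model, but the flavour of ‘a lattice of cells asynchronously passing state tokens’ can be caricatured in a few lines. The update rule here (move one token to the emptiest neighbour) is invented purely for illustration; the point is just that tokens are conserved, like a physical resource, and that cells update one at a time rather than in a global synchronous sweep:

```python
# Toy caricature of a lattice of cells asynchronously passing conserved tokens.
# The rule below is invented for illustration and is not the actual RALA logic.
import random

SIZE = 8

def step(grid):
    x, y = random.randrange(SIZE), random.randrange(SIZE)   # one cell at a time: 'asynchronous'
    if grid[(x, y)] == 0:
        return
    neighbours = [((x + dx) % SIZE, (y + dy) % SIZE)
                  for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
    target = min(neighbours, key=lambda c: grid[c])          # pass a token to the emptiest neighbour
    if grid[target] < grid[(x, y)]:
        grid[(x, y)] -= 1
        grid[target] += 1

grid = {(x, y): 0 for x in range(SIZE) for y in range(SIZE)}
grid[(0, 0)] = 20                                            # all tokens start in one corner
for _ in range(5000):
    step(grid)
print(sum(grid.values()))                                    # still 20: tokens behave like a physical resource
```

Whether making the physical bookkeeping explicit in this way really changes what computation is, rather than just how it is engineered, is the part I remain unsure about.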

I imagine the work on Brain and Intent will draw on earlier research into intention awareness. This seems to have been studied most extensively in a military context, but it bears on philosophical intentionality and theory of mind; in principle it seems to relate to some genuinely central and difficult issues. Reading the brief details, I get the sense of something which might be another blind alley, but it is at least another alley.

Both of these projects seem rather new to me, not at all a matter of revisiting old problems from the history of AI, except in the loosest of senses.

In recent times within AI I think there has been a tendency to back off a bit from the issue of consciousness, and spend time instead on lesser but more achievable targets. Although the Mind Machine Project could be seen as superficially conforming with this trend, it seems evident to me that the researchers see their projects as heading towards full human cognition with all that that implies (perhaps robots that run off with your wife?)

Meanwhile in another part of the forest Paul Almond is setting out a pattern-based approach to AI.  He’s only one man, compared with the might of MIT – but he does have the advantage of not having $5 million to delay his research…