New ways to monitor – and control – neurons are about to become practical. A paper in Neuron by Seo et al describes how researchers at Berkeley created “ultrasonic neural dust” that allowed activity in muscles and nerves to be monitored without traditional electrodes. The technique has not been applied to the brain and has been used only for monitoring, not for control, but the potential is clear, and this short piece in Aeon reviewing the development of comparable techniques concludes that it is time to take these emergent technologies seriously. The diagnostic and therapeutic potential of being able to directly monitor and intervene in the activity of nerves and systems all over the body is really quite mind-boggling; in principle it could replace and enhance all sorts of drug treatments and other interventions in immensely beneficial ways.

From a research point of view the possibility of getting single-neuron level data on an ongoing basis could leap right over the limitations of current scanning technology and tell us, really for the first time, exactly what is going on in the brain. It’s very likely that unexpected and informative discoveries would follow. Some caution is of course in order; for one thing I imagine placement techniques are going to raise big challenges. Throwing a handful of dust into a muscle to pick up its activity is one thing; placing a single mote in a particular neuron is another. If we succeed with that, I wonder whether we will actually be able to cope with the vast new sets of data that could be generated.

Still, the way ahead seems clear enough to justify a bit of speculation about mind control. The ethics are clearly problematic, but let's start with a broad look at the practicalities. Could we control someone with neural dust?

The crudest techniques are going to be the easiest to pull off. Incapacitating or paralysing someone looks pretty achievable; it could be a technique for confining prisoners (step beyond this line and your leg muscles seize up) or perhaps a secret fall-back disabling mechanism inserted into suspects and released prisoners. If they turn up in a threatening role later, you can just switch them off. Killing someone by stopping their heart looks achievable, and the threat of doing so could in theory be used to control hostages or perhaps create 'human drones' (I apologise for the repellent nature of some of these ideas; forewarned is forearmed).

Although reading off thoughts is probably too ambitious for the foreseeable future, we might be able to monitor the brain's states of arousal and perhaps even identify the recognition of key objects or people. I cannot see any obvious reason why remote monitoring of neural dust implants couldn't pick up a kind of video feed from the optic nerve. People might want that done to themselves as a superior substitute for Google Glass and the like; indeed neural dust seems to offer new scope for the kind of direct brain control of technology that many people seem keen to have. Myself, I think the output systems already built into human beings – hands, voice – are hard to beat.

Taking direct and outright control of someone's muscles and making a kind of puppet of them seems likely to be difficult; making a muscle twitch is a long way from the kind of fluid and co-ordinated control required for effective movement. Devising the torrent of neural signals required looks like a task which is computationally feasible in principle but highly demanding; you would surely look to deep learning techniques, which in a sense were created for exactly this kind of task, since they began as imitations of biological neural networks. A basic approach that might be achievable relatively early would be to record stereotyped muscular routines and then play them back like extended reflexes, though that wouldn't work well for many basic tasks, like walking, that require a lot of feedback.
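To make that 'record and play back' idea concrete, here is a minimal sketch of open-loop playback in Python; the mote identifiers, the stimulate callback and indeed the whole interface are invented for illustration, since no such control API exists:

```python
import time

def replay_routine(routine, stimulate, rate_hz=100):
    """Play back a recorded stimulation routine as an open-loop 'extended reflex'.

    routine   -- list of frames, each a dict mapping a (hypothetical) dust-mote id
                 to a stimulation amplitude
    stimulate -- callback that would deliver one frame to the hardware; a stand-in,
                 since no such interface actually exists
    rate_hz   -- playback rate; note there is no sensory feedback at all, which is
                 exactly why this approach would fail for feedback-heavy tasks like walking
    """
    for frame in routine:
        stimulate(frame)
        time.sleep(1.0 / rate_hz)

# Toy usage with a fake two-frame recording and printing in place of stimulation.
recorded = [{"mote_17": 0.2, "mote_42": 0.8},
            {"mote_17": 0.5, "mote_42": 0.4}]
replay_routine(recorded, stimulate=print, rate_hz=2)
```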

Could we venture further and control someone's own attitudes and thoughts? Again the unambitious and destructive techniques are the easiest; making someone deranged or deluded is probably the most straightforward mental change to bring about. Giving them bad dreams seems likely to be a feasible option. Perhaps we could simulate drunkenness – or turn it off; I suspect that would need massive but non-specific intervention, so it might be relatively achievable. Simulation of the effects of other drugs might be viable on similar terms, whether to impair performance, enhance it, or purely for pleasure. We might perhaps be able to stimulate paranoia, exhilaration, religiosity or depression, albeit without fully predictable results.

Indirect manipulation is the next easiest option for mind control; we might arrange, for example, to have a flood of good feelings or fear and aversion every time particular political candidates are seen; it wouldn't force the subject to vote a particular way, but it might be heavily influential. I'm not sure it's a watertight technique, as the human mind seems easily able to hold contradictory attitudes and sentiments, and widespread empirical evidence suggests many people must be able to go on voting for someone who appears repellent.

Could we, finally, take over the person themselves, feeding in whatever thoughts we chose? I rather doubt that this is ever going to be possible. True, our mental selves must ultimately arise from the firing of neurons, and ex hypothesi we can control all those neurons; but the chances are there is no universal encoding of thoughts; we may not even think the same thought with the same neurons a second time around. The fallback of recording and playing back the activity of a broad swathe of brain tissue might work up to a point, if you could be sure that you had included the relevant bits of neural activity, but the results, even if successful, would be more like some kind of malign mental episode than a smooth takeover of the personality. Easier, I suspect, to erase a person than to control one in this strong sense. As Hamlet pointed out, knowing where the holes on a flute are doesn't make you able to play a tune. I can hardly put it better than Shakespeare…

Why, look you now, how unworthy a thing you make of
me! You would play upon me; you would seem to know
my stops; you would pluck out the heart of my
mystery; you would sound me from my lowest note to
the top of my compass: and there is much music,
excellent voice, in this little organ; yet cannot
you make it speak. ‘Sblood, do you think I am
easier to be played on than a pipe? Call me what
instrument you will, though you can fret me, yet you
cannot play upon me.

We don't know what we think, according to Alex Rosenberg in the NYT. It's a piece of two halves, in my opinion; he starts with a pretty fair summary of the sceptical case. It has often been held that we have privileged knowledge of our own thoughts and feelings, and indeed of our own decisions; but the findings of Benjamin Libet about decisions being made before we are aware of them; the phenomenon of blindsight, which shows we may go on having visual knowledge we're not aware of; and many other cases where it can be shown that motives are confabulated and mental content is inaccessible to our conscious, reporting mind; these all go to show that things are much more complex than we might have thought, and that our thoughts are not, as it were, self-illuminating. Rosenberg plausibly suggests that we use on ourselves the kind of tools we use to work out what other people are thinking; but then he seems to make a radical leap to the conclusion that there is nothing else going on.

Our access to our own thoughts is just as indirect and fallible as our access to the thoughts of other people. We have no privileged access to our own minds. If our thoughts give the real meaning of our actions, our words, our lives, then we can’t ever be sure what we say or do, or for that matter, what we think or why we think it.

That seems to be going too far.  How could we ever play ‘I spy’ if we didn’t have any privileged access to private thoughts?

“I spy, with my little eye, something beginning with ‘c'”
“Is it ‘chair’?”
“I don’t know – is it?”

It’s more than possible that Rosenberg’s argument has suffered badly from editing (philosophical discussion, even in a newspaper piece, seems peculiarly information-dense; often you can’t lose much of it without damaging the content badly). But it looks as if he’s done what I think of as an ‘OMG bounce’; a kind of argumentative leap which crops up elsewhere. Sometimes we experience illusions:  OMG, our senses never tell us anything about the real world at all! There are problems with the justification of true belief: OMG there is no such thing as knowledge! Or in this case: sometimes we’re wrong about why we did things: OMG, we have no direct access to our own thoughts!

There are in fact several different reasons why we might claim that our thoughts about our thoughts are immune to error. In the game of ‘I spy’, my nominating ‘chair’ just makes it my choice; the content of my thought is established by a kind of fiat. In the case of a pain in my toe, I might argue I can’t be wrong because a pain can’t be false: it has no propositional content, it just is. Or I might argue that certain of my thoughts are unmediated; there’s no gap between them and me where error could creep in, the way it creeps in during the process of interpreting sensory impressions.

Still, it's undeniable that in some cases we can be shown to adopt false rationales for our behaviour; sometimes we think we know why we said something, but we don't. By contrast, I think I have occasionally, when very tired, had the experience of hearing coherent and broadly relevant speech come out of my own mouth without it seeming to come from my conscious mind at all. Contemplating this kind of thing does undoubtedly promote scepticism, but what it ought to promote is a keener awareness of the complexity of human mental experience: many-layered, explicit to greater or lesser degrees, partly attended to, partly in a sort of half-light of awareness… There seem to be unconscious impulses; conscious but inexplicit thought; definite thought (which may even be in recordable words); self-conscious thought of the kind where we are aware of thinking while we think… and that is at best the broadest outline of some of the larger architecture.

All of this really needs a systematic and authoritative investigation. Of course, since Plato there have been models of the structure of the mind, separating conscious from unconscious, or id, ego and superego; philosophers of mind have run up various theories, usually to suit their own needs of the moment; and modern neurology increasingly provides good clues about how various mental functions are hosted and performed. But a proper mainstream conception of the structure and phenomenology of thought itself seems sadly lacking to me. Is this an area where we could get funding for a major research effort – a Human Phenomenology Project?

It can hardly be doubted that there are things to discover. Recently we were told, if not quite for the first time, that a substantial minority of people have no mental images (although at once we notice that there even seem to be different ways of having mental images). A systematic investigation might reveal that, just as we have four blood groups, there are four (or seven) different ways the human mind can work. What if it turned out that consciousness is not a single consistent phenomenon but a family of four different ones, and that the four tribes have been talking past each other all this time…?

Is computer art the same as human art? This piece argues that there is no real distinction; I don't agree about that, but I sort of agree that in some respects the difference may not matter as much as it seems to. Oliver Roeder seems to end up by arguing that since humans write the programs, all computer art is ultimately human art too. Surely that isn't quite right; you wouldn't credit a team that wrote architectural design software with authorship of all the buildings it was used to create.

It is clearly true that we can design software tools that artists may use for their own creative purposes – who now, after all, creates graphic work with an airbrush? It's also true that a lot of supposedly creative software is actually rather limited; it really only distorts or reshuffles standard elements or patterns within very limited parameters. I had to smile when I found that Roeder's first example was a program generating jazz improvisation; surely the most forgiving musical genre, or as someone ruder once put it, the form of music from which even the concept of a wrong note has been eliminated. But let's not be nasty to jazz; there have also been successful programs that generated melodies like early Mozart by recombining typically Mozartian motifs; they worked quite well, but at best they inevitably resembled the composer on a really bad day when he was ten years old.

Surely, though, there are (or if not, there soon will be, what with all the exciting progress we're seeing) programs which do a much more sophisticated job of imitating human creativity, ones that generate from scratch genuinely interesting new forms in whatever medium they are designed for? What about those – are their products to be regarded as art? Myself, I think not, for two reasons. First, art requires intentionality, and computers don't do intentionality. Art is essentially tied up with meanings and intentions, or with being about something. I should make it clear that I don't by any means have in mind the naive idea that all art must have a meaning, in the sense of having some moral or message; but in a much looser sense art conveys, evokes or, yes, represents. Even the most abstract forms of visual art or music flow from willed and significant acts by their creator.

Second, there is a creator; art is generated by a person. A person, as I’ve argued before, is a one-off real physical phenomenon in the world; a computer program, by contrast, is a sort of Platonic abstraction like a piece of maths; exactly specified and in some sense eternal. This phenomenon of particularity is reflected in the individual status of works of art, sometimes puzzling to rational folk; a perfect copy of the Mona Lisa is not valued as highly as La Gioconda herself, even though it provides exactly the same visual experience (actually a better one in the case of the copy, since you don’t have to fight the herds of tourists in the Louvre and peer through bullet-proof glass). You might argue that a work of computer art might be the product, not of a program in the abstract, but of a particular run of that program on a particular computer (itself necessarily only approximating the ideal of a Turing machine), and so the analogy with human creators can be preserved; but in my view simply being an instance of a program is not enough; the operation of the human brain is bound up in its detailed particularity in a way a program can never be.

Now those considerations, if you accept them, might make you question my initial optimism; perhaps these two objections mean that computers will never in fact produce anything better than shallow, sterile permutations? I don’t think that’s true. I draw an analogy here with Nature. The natural world produces a torrent of forms that are artistically interesting or inspiring, and it does so without needing intentionality or a creator (theists, my apologies, but work with me if you can). I don’t see why a computer program couldn’t generate products that were similarly worthy of our attention. They wouldn’t be art, but in a sense it doesn’t matter: we don’t despise a sunset because nobody made it, and we need not undervalue computer “art” either. (Interesting to reflect in passing that nature often seems to use the same kind of repetition we see in computer-generated fractal art to produce elegant complexity from essentially simple procedures.)

The relationship between art and nature is of course a long one. Artists have often borrowed natural forms, and different ages have seen whatever most suited their temperament in the natural world, whether a harmonious mathematical regularity or the tortured spirituality of the sublime and the terrible. I think it is quite conceivable that computer “art” (we need a new word – what about “creanda”?) might eventually come to play a similar role. Perhaps people will go out of their way to witness remarkable creanda in much the way they visit the Grand Canyon, and perhaps those inspiring and evocative items will play an inspiring and fertilising role for human creativity, without anyone ever mistaking the creanda for art.

A digital afterlife is likely to be available one day, according to Michael Graziano, albeit not for some time; his piece re-examines the possibility of uploading consciousness, and your own personality, into a computer. I think he does a good job of briefly sketching the formidable difficulties involved in scanning your brain, and scanning so precisely that your individual selfhood could be captured. In fact, he does it so well that I don't really understand where his ultimate optimism comes from.

To my way of thinking, 'scan and build' isn't even the most promising way of duplicating your brain. A more plausible way would be some kind of future bio-engineering in which your brain just grows and divides, rather in the way that single cells do. A neater way would be some sort of hyper-path through space that split you along the fourth spatial dimension and returned both slices to our normal plane. Neither of these options is exactly a feasible working project, but to me they seem closer to being practical than a total scan. Of course neither of them offers the prospect of an afterlife the way scanning does, so they're not really relevant for Graziano here. He seems to think we don't need to go down to an atom-by-atom scan, but I'm not sure why not. Granted, the loss of one atom in the middle of my brain would not destroy my identity, but not scanning to an atomic level generally seems a scarily approximate and slapdash approach to me, given the relevance of certain key molecules in the neural process – something Graziano fully recognises.

If we're talking about actual personal identity, I don't think it really matters, though, because the objection I consider strongest applies even to perfect copies. In thought experiments we can do anything, so let's just specify that by pure chance there's another brain nearby that is in every minute detail the same as mine. It still isn't me, for the banal commonsensical reason that copies are not the original. Leibniz's Law tells us that if B has exactly the same properties as A, then it is A: but among the properties of a brain is its physical location, so a brain over there is not the same as one in my skull (so in fact I cheated by saying the second brain was the same in every detail but nevertheless 'nearby').
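For reference, the principle I'm leaning on here (the identity of indiscernibles, usually bundled with its converse under the name of Leibniz's Law) can be put in a rough second-order form:

```latex
\forall F\,\bigl(F(A)\leftrightarrow F(B)\bigr)\;\rightarrow\;A=B
```

If location counts as a property F, then a qualitatively perfect brain 'over there' already fails the antecedent, which is all my banal commonsensical point needs.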

Now most philosophers would say that Leibniz is far too strong a criterion of identity when it comes to persons. There have been hundreds of years of discussion of personal identity, and people generally espouse much looser criteria for a person than they would for a stone – from identity of memories to various kinds of physical, functional, or psychological continuity. After all, people are constantly changing: I am not perfectly identical in physical terms to the person who was sitting here an hour ago, but I am still that person. Graziano evidently holds that personal identity must reside in functional or informational qualities of the kind that could well be transferred into a digital form, and he speaks disparagingly of ‘mystical’ theories that see problems with the transfer of consciousness. I don’t know about that; if anyone is hanging on to residual spiritual thinking, isn’t it the people who think we can be ‘taken out of’ our bodies and live forever? The least mystical stance is surely the one that says I am a physical object, and with some allowance for change and my complex properties, my identity works the same as that of any other physical object. I’m a one-off, particular thing and copies would just be copies.

What if we only want a twin, or a conscious being somewhat like me? That might still be an attractive option after all. OK, it's not immortality, but without being rampant egotists most of us probably feel the world could stand a few more people like ourselves, and we might like to have a twin continuing our good work once we're gone.

That less demanding goal changes things. If that’s all we’re going for, then yes, we don’t need to reproduce a real brain with atomic fidelity. We’re talking about a digital simulation, and as we know, simulations do not reproduce all the features of the thing being simulated – only those that are relevant for the current purpose. There is obviously some problem about saying what the relevant properties are when it comes to consciousness; but if passing the Turing Test is any kind of standard then delivering good outputs for conversational inputs is a fair guide and that looks like the kind of thing where informational and functional properties are very much to the fore.

The problem, I think, is again with particularity. Conscious experience is a one-off thing while data structures are abstract and generic. If I have a particular experience of a beautiful sunset, and then (thought experiments again) I have an entirely identical one a year later, they are not the same experience, even though the content is exactly the same. Data about a sunset, on the other hand, is the same data whenever I read or display it.

We said that a simulation needs to reproduce the relevant aspects of the thing simulated; but in a brain simulation the processes are only represented symbolically, while one of the crucial aspects we need for real experience is particular reality.

Maybe, though, we go one level further; instead of simulating the firing of neurons and the functional operation of the brain, we actually extract the program being run by those neurons and then transfer that. Here there are new difficulties: scanning the physical structure of the brain is one thing; working out its function and content is another thing altogether; we must not confuse information about the brain with the information in the brain. Also, of course, extracting the program assumes that the brain is running a program in the first place and not doing something altogether less scrutable and explicit.

Interestingly, Graziano goes on to touch on some practical issues; in particular he wonders how the resources to maintain all the servers are going to be found when we’re all living in computers. He suspects that as always, the rich might end up privileged.

This seems a strange failure of his technical optimism. Aren't computers going to go on getting more powerful, and cheaper? Surely the machines of the twenty-second century will laugh at this kind of challenge (perhaps literally). If there is a capacity problem, moreover, we can all be made intermittent; if I get stopped for a thousand years and then resume, I won't even notice. Chances are that my simulation will be able to run at blistering speed, far faster than real time, so I can probably experience a thousand years of life in a few computed minutes. If we get quantum computers, all of us will be able to have indefinitely long lives with no trouble at all, even if our simulated lives include having digital children or generating millions of digital alternates of ourselves, thereby adding to the population. Graziano, optimism kicking back in, suggests that we can grow in understanding and come to see our fleshly life as a mere larval stage before we enter on our true existence. Maybe, or perhaps we'll find that human minds, after ten billion years (maybe less), exhaust their potential and ultimately settle into a final state; in which case we can just get the computers to calculate that and then we'll all be finalised, like solved problems. Won't that be great?
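As a back-of-envelope check on that 'thousand years of life in a few computed minutes' (the speed-up factor here is purely my assumption):

```latex
1000\ \text{years} \approx 3.15\times10^{10}\ \text{s}, \qquad
\frac{3.15\times10^{10}\ \text{s}}{10^{8}} \approx 315\ \text{s} \approx 5\ \text{minutes}
```

So the simulation would have to run roughly a hundred million times faster than real time for the claim to come out as stated.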

I think that speculations of this kind eventually expose the contrast between the abstraction of data and the reality of an actual life, and dramatise the fact, perhaps regrettable, perhaps not, that you can’t translate one into the other.

Are there units of thought? An interesting conversation here between Asifa Majid and Jon Sutton. There are a number of interesting points, but the one that I found most thought-provoking was the reference to searching for those tantalising units. I think I had been lazily assuming that any such search had been abandoned long ago – if not quite with the search for perpetual motion and the universal solvent, then at least a while back.

I may, though, have been mixing up two or more distinct ideas here. In looking for the units of thought we might be assuming that there is some ultimate set of simplest thought items, a kind of periodic table, with all thoughts necessarily built out of combinations of these elements. This sort of thinking has a bit of a history in connection with language. People (I think Leibniz was one) used to hope that if you took all the words in the dictionary and defined them in terms of other words, you would eventually get down to the basic set of ideas, the 'primitives' which were really fundamental. So you might start with library, define it as 'book building', define building as 'enterable structure', define structure as 'thing made of parts' and feel that maybe with 'thing', 'made', and 'parts' you were getting close to some primitives. You weren't really, though. You could move on to define 'made' as 'assembled or created', 'assembled' as 'brought or fixed together'… Sooner or later your definitions become circular and, as you will have noticed, alternative meanings and distinctions keep slipping through the net of your definitions.
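As a toy illustration of how these definition chains curl back on themselves, here is a minimal sketch; the dictionary entries are invented for the example and merely stand in for a real lexicon:

```python
# Toy 'define everything in terms of other words' dictionary (entries invented).
toy_dictionary = {
    "library":   ["book", "building"],
    "building":  ["structure"],
    "structure": ["thing", "made", "parts"],
    "made":      ["assembled", "created"],
    "assembled": ["brought", "fixed", "together"],
    "fixed":     ["made", "firm"],   # ...and here the chain loops back to 'made'
}

def find_cycle(word, dictionary, path=()):
    """Follow definitions until we revisit a word on the current path (a circle)
    or run out of entries (a would-be 'primitive')."""
    if word in path:
        return path + (word,)
    for part in dictionary.get(word, []):
        cycle = find_cycle(part, dictionary, path + (word,))
        if cycle:
            return cycle
    return None

print(find_cycle("library", toy_dictionary))
# ('library', 'building', 'structure', 'made', 'assembled', 'fixed', 'made')
```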

The idea was seductive, however, linked as it was with the idea that there was a real 'Adamitic' language which truly captured the essence of things and of which all real languages were corruptions, generated at the fall of the Tower of Babel. It is still possible to argue that there must be some fundamental underpinning of this kind, some kind of breaking down to a set of common elements, or languages would not all be mutually translatable. Majid gestures towards another idea of this kind in speaking of 'mentalese', the hypothetical language in which all our brains basically work, translating into real-world languages for purposes of communication. Mentalese is a large and controversial subject which I can't do justice to here; my own view is that the underpinnings of language, whether common or unique to each individual, are not likely to be another language.

Could there in fact be untranslatable languages? We’ve never found one; although capturing the nuances of another language is notoriously difficult, we’ve never come across a language that couldn’t be matched up with ‘good enough’ equivalents in English. Could it be that alien beings somewhere use a language that just carves the world up in ways that can’t be rendered across into normal Earth languages? I think not, but it’s not because there is any universal set of units of thought underneath all languages and all thoughts. Rather, it is first because all languages address the same reality, which guarantees a certain commonality, and second because language is infinitely flexible and extensible. If we encountered an entirely new phenomenon tomorrow, we should not have any real difficulty in describing it – or at least, it wouldn’t be the constraints of language that made things difficult. Equally we might have difficulty working out the unfamiliar concepts of an alien language, but surely we should be able to devise and borrow whatever words we needed (at this point I can feel Wittgenstein’s ghost breathing impatiently down my neck and insisting that we could not understand a lion, never mind an alien, but I’m going to ignore him for now).

So perhaps the search for the units of thought is not to be understood as a search for any kind of periodic table, but for something much more flexible. Majid refers to George Miller's famous suggestion that short-term memory can accommodate only seven items, plus or minus two depending on circumstances. This idea that there is a limit to how many items of a 'one-dimensional' set we can accommodate is appealingly tidy and intuitive, but it obviously works best with simple items: single tones or single digits. Even when we go so far as to make the numbers a little larger, questions arise as to whether '102' is one, two, or three items, and it only gets worse as we try to deal with complex entities. If one of the items in memory is 'the First World War', is it really one item which we can then decode into many, or an inherently complex item?

So perhaps it's more like asking how many arms an amoeba has. There is, in fact, no fixed set of arms, but for a given size of amoeba there may well be a limit on how many pseudopodia it can extend at any one time. That feels more like it, though we should have to accept that the answer might be a bit vague; it's not always clear whether a particular bit of amoeba counts as one, two, or no pseudopodia.

Or put it another way: how many images can a digital screen show? If we insist on a set of basic items we can break it down to pixel values; but the number of things whose picture can be displayed is large and amorphous. Perhaps it's the same with the brain; we can analyse it down to the firing of neurons if we want, but the number of thoughts those neurons underpin is a much larger and woollier business (in spite of the awkward fact that the number of different images a digital screen can display is actually finite – yet another thorny issue I'm going to glide past).
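That awkward finiteness is easy to make precise: a screen with N pixels, each able to take one of 2^b values, can show exactly

```latex
(2^{b})^{N} = 2^{bN} \quad\text{distinct frames; for a 24-bit } 1920 \times 1080 \text{ display, } 2^{24 \times 1920 \times 1080} = 2^{49\,766\,400}.
```

Finite, then, but so vast that 'woolly' still seems a fair description of the space of displayable pictures.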

And surely the flexibility of an amoeba is going a bit too far, isn't it? Majid points out some evidence that our choice of thought items is not altogether unconstrained. Different languages have different numbers of colour terms, but there is a clear order of preference; if two languages have the same number of colour words, there's an excellent chance that they will name more or less the same colours. Different languages also carve up the human forelimb differently, some using words for the arm that include the hand, while others insist that hand and arm are spoken of separately, never the limb as a whole. Yet no language seems to define a body part composed of the thumb and the first half of the forearm.

Clearly there are two things going on here. One is that our own sensory apparatus predisposes us to see things in certain ways – not insurmountable biases that would prevent us understanding aliens with different ones, but universal among humans. The other, perhaps stranger, is that reality itself seems to have some inherent or salient forms that it is most convenient for us to recognise. Some of these look to be physical – a forearm just makes more sense than a 'thumb plus some arm'; others are mathematical or even logical. The hunt for items of thought loosely defined by our sensory or mental apparatus is an interesting and impeccably psychological one; the second kind of constraint looks to raise tough issues of philosophy. Either way, I was quite wrong to have thought the hunt was over or abandoned.

Robot behaviour is no longer a purely theoretical problem. Since Asimov came up with the famous Three Laws which provide the framework for his robot stories, a good deal of serious thought has been given to extreme cases where robots might cause massive disasters and to such matters as the ethics of military robots. Now, though, things have moved on to a more mundane level and we need to give thought to more everyday issues. OK, a robot should not harm a human being or through inaction allow a human being to come to harm, but can we also just ask that you stop knocking the coffee over and throwing my drafts away? Dario Amodei, Chris Olah, John Schulman, Jacob Steinhardt, Paul Christiano, and Dan Mane have considered how to devise appropriate rules in this interesting paper.

They reckon things can go wrong in three basic ways. First, the robot's objective may not have been properly defined in the first place. Second, the testing of success may not be frequent enough, especially if the tests we have devised are complex or expensive. Third, there could be problems due to “insufficient or poorly curated training data or an insufficiently expressive model”. I take it these are meant to be the greatest dangers – the set doesn't seem to be exhaustive.

The authors illustrate the kind of thing that can go wrong with the example of an office cleaning robot, mentioning five types of case.

  • Avoiding Negative Side Effects: we don’t want the robot to clean quicker by knocking over the vases.
  • Avoiding Reward Hacking: we tell the robot to clean until it can’t see any mess; it closes its eyes.
  • Scalable Oversight: if the robot finds an unrecognised object on the floor it may need to check with a human; we don’t want a robot that comes back every three minutes to ask what it can throw away, but we don’t want one that incinerates our new phone either.
  • Safe Exploration: we’re talking here about robots that learn, but as the authors put it, the robot should experiment with mopping strategies, but not put a wet mop in an electrical outlet.
  • Robustness to Distributional Shift: we want a robot that learned its trade in a factory to be able to move safely and effectively to an office job. How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment different from its training environment? For example, heuristics it learned for cleaning factory workfloors may be outright dangerous in an office.

The authors consider a large number of different strategies for mitigating or avoiding each of these types of problem. One particularly interesting idea is that of an impact regulariser, either pre-defined or learned by the robot. The idea here is that the robot adopts the broad principle of leaving things the way people would wish to find them. In the case of the office this means identifying an ideal state – rubbish and dirt removed, chairs pushed back under desks, desk surfaces clear (vases still upright), and so on. If the robot aims to return things to that ideal state, it helps avoid the negative side effects of an over-simplified objective, among other issues.
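Just to fix ideas, here is a minimal sketch of a reward shaped by an impact penalty. This is not the formulation in the paper; the feature names, the penalty form and the weights are all assumptions made for the example:

```python
def regularised_reward(task_reward, state, baseline_state, penalty_weight=1.0):
    """Task reward minus a crude impact penalty.

    task_reward    -- reward from the cleaning objective itself (e.g. mess removed)
    state          -- environment state after the robot acts, as feature -> value
    baseline_state -- the 'leave things as people would wish to find them' state
    penalty_weight -- how heavily deviation from the baseline is punished
    """
    impact = sum(abs(state[k] - baseline_state[k]) for k in baseline_state)
    return task_reward - penalty_weight * impact

# Toy usage: knocking over a vase buys a little cleaning speed but costs more in impact.
baseline = {"vases_upright": 3, "papers_on_desk": 5}
careful  = {"vases_upright": 3, "papers_on_desk": 5}
careless = {"vases_upright": 2, "papers_on_desk": 5}

print(regularised_reward(10.0, careful,  baseline))   # 10.0
print(regularised_reward(10.5, careless, baseline))   # 9.5 -- the careless strategy loses
```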

There are further problems, though, because if the robot invariably tries to put things back to an ideal starting point it will try to put back changes we actually wanted, clear away papers we wanted left out, and so on. Now in practice and in the case of an office cleaning robot I think we could get round those problems without too much difficulty; we would essentially lower our expectations of the robot and redesign the job in a much more limited and stereotyped way. In particular we would give up the very ambitious goal of making a robot which could switch from one job to another without adjustment and without faltering.

Still, it is interesting to see the consequences of the more ambitious approach. The final problem, cutting to the chase, is that in order to tell how humans want their office arranged in every possible set of circumstances, you really cannot do without a human level of understanding. There is an old argument that robots need not resemble humans physically; instead you make your robot fit the job: a squat disc on wheels if you're cleaning the floor, a single fixed arm if you want it to build cars. The counter-argument has often been that our world has been shaped to fit human beings, and if we want a general-purpose robot it will pay to have it more or less human size and weight, bipedal, with hands, and so on. Perhaps there is a parallel argument to explain why general-purpose robots need human-level cognition; otherwise they won't function effectively in a world shaped by human activity. The search for artificial general intelligence is not an idle project after all?

Where do thoughts come from? Alva Noë provides a nice commentary here on an interesting paper by Melissa Ellamil et al. The paper reports on research into the origin of spontaneous thoughts.

The research used subjects trained in Mahasi Vipassana mindfulness techniques. They were asked to report the occurrence of thoughts during sessions when they were either left alone or provided with verbal stimuli. As well as reporting the occurrence of a thought, they were asked to categorise it as image, narrative, emotion or bodily sensation (seems a little restrictive to me – I can imagine having two at once or a thought that doesn’t fit any of the categories). At the same time brain activity was measured by fMRI scan.

Overall the study found many regions implicated in the generation of spontaneous thought; the researchers point to the hippocampus as a region of particular interest, but there were plenty of other areas involved. A common view is that when our attention is not actively engaged with tasks or challenges in the external world, the brain operates the Default Mode Network (DMN), a set of neuronal areas which appear to produce detached thought (we touched on this a while ago); the new research complicates this picture somewhat, or at least suggests that the DMN is not the unique source of spontaneous thoughts. Even when we're disengaged from real events we may be engaged with the outside world via memory or in other ways.

Noë’s short commentary rightly points to the problem involved in using specially trained subjects. Normal subjects find it difficult to report their thoughts accurately; the Vipassana techniques provide practice in being aware of what’s going on in the mind, and this is meant to enhance the accuracy of the results. However, as Noë says, there’s no objective way to be sure that these reports are really more accurate. The trained subjects feel more confidence in their reports, but there’s no way to confirm that the confidence is justified. In fact we could go further and suggest that the special training they have undertaken may even make their experience particularly unrepresentative of most minds; it might be systematically changing their experience. These problems echo the methodological ones faced by early psychologists such as Wundt and Titchener with trained subjects. I suppose Ellamil et al might retort that mindfulness is unlikely to have changed the fundamental neural architecture of the brain and that their choice of subject most likely just provided greater consistency.

Where do ‘spontaneous’ thoughts come from? First we should be clear what we mean by a spontaneous thought. There are several kinds of thought we would probably want to exclude. Sometimes our thoughts are consciously directed; if for example we have set ourselves to solve a problem we may choose to follow a particular strategy or procedure. There are lots of different ways to do this, which I won’t attempt to explore in detail: we might hold different aspects of the problem in mind in sequence; if we’re making a plan we might work through imagined events; or we might even follow a formal procedure of some kind. We could argue that even in these cases what we usually control is the focus of attention, rather than the actual generation of thoughts, but it seems clear enough that this kind of thinking is not ‘spontaneous’ in the expected sense. It is interesting to note in passing that this ability to control our own thoughts implies an ability to divide our minds into controller and executor, or at least to quickly alternate those roles.

Also to be excluded are thoughts provoked directly by outside events. A match is struck in a dark theatre; everyone’s eyes saccade involuntarily to the point of light. Less automatically a whole variety of events can take hold of our attention and send our thoughts in a new direction. As well as purely external events, the sources in such cases might include interventions from non-mental parts of our own bodies; a pain in the foot, an empty stomach.

Third, we should exclude thoughts that are part of a coherent ongoing chain of conscious cogitation. These ‘normal’ thoughts are not being directed like our problem-solving efforts, but they follow a thread of relevance; by some connection one follows on from the next.

What we're after, then, is thoughts that appear unbidden, unprompted, and with no perceivable connection with the thoughts that recently preceded them. Where do they come from? It could be that mere random neuronal noise sometimes generates new thoughts, but to me it seems unlikely to be a major contributor: such thoughts would be likely to resemble random nonsense, and most of our spontaneous thoughts seem to make a little more sense than that.

We noticed above that when directing our thoughts we seem to be able to split ourselves into controller and controlled. As well as passing control up to a super-controller, we sometimes pass it down, for example to the part of our mind that gets on with the details of driving along a route while the surface of our mind is engaged with other things. Clearly some part of our mind goes on thinking about which turnings to take; is it possible that one or more parts of our mind similarly go on thinking about other topics, but then at some trigger moment insert a significant thought back into the main conscious stream? A 'silent' thinking part of us like this might be a permanent feature, a regular sub- or unconscious mind; or it might be that we occasionally drop threads of thought that descend out of the light of attention for a while but continue unheard before popping back up and terminating. We might perhaps have several such threads ruminating away in the background; ordinary conscious thought often seems rather multi-threaded. Perhaps we keep dreaming while awake and just don't know it?

There’s a basic problem here in that our knowledge of these processes, and hence all our reports, rely on memory. We cannot report instantaneously; if we think a thought was spontaneous it’s because we don’t remember any relevant antecedents; but how can we exclude the possibility that we merely forgot them? I think this problem radically undermines our certainty about spontaneous thoughts. Things get worse when we remember the possibility that instead of two separate thought processes, we have one that alternates roles. Maybe when driving we do give conscious attention to all our decisions; but our mind switches back and forth between that and other matters that are more memorable; after the journey we find we have instantly forgotten all the boring stuff about navigating the route and are surprised that we seem to have done it thoughtlessly. Why should it not be the same with other thoughts? Perhaps we have a nagging worry about X which we keep spending a few moments’ thought on between episodes of more structured and memorable thought about something else; then everything but our final alarming conclusion about X gets forgotten and the conclusion seems to have popped out of nowhere.

We can’t, in short, be sure that we ever have any spontaneous thoughts: moreover, we can’t be sure that there are any subconscious thoughts. We can never tell the difference, from the inside, between a thought presented by our subconscious, and one we worked up entirely in intermittent and instantly-forgotten conscious mode. Perhaps whole areas of our thought never get connected to memory at all.

That does suggest that using fMRI was a good idea; if the problem is insoluble in first-person terms maybe we have to address it on a third-person basis. It’s likely that we might pick up some neuronal indications of switching if thought really alternated the way I’ve suggested. Likely but not guaranteed; after all a novel manages to switch back and forth between topics and points of view without moving to different pages. One thing is definitely clear; when Noë pointed out that this is more difficult than it may appear he was absolutely right.

Wrong again: just last week I was saying that Roger Penrose's arguments seemed to have drifted off the radar a bit. Immediately, along comes this terrific post from Scott Aaronson about a discussion with Penrose.

In fact it's not entirely about Penrose; Aaronson's main aim was to present an interesting theory of his own as to why a computer can't be conscious, which relies on non-copyability. He begins by suggesting that the onus is on those who think a computer can't be conscious to show exactly why. He congratulates Penrose on doing this properly, in contrast to, say, John Searle, who merely offers hand-wavy stuff about unknown biological properties. I'm not really sure that Searle's honest confession of ignorance isn't better than Penrose's implausible speculations about unknown quantum mechanics, but we'll let that pass.

Aaronson rests his own case not on subjectivity and qualia but on identity. He mentions several examples where the limitless copyability of a program seems at odds with the strong sense of a unique identity we have of ourselves – including Star Trek-style teleportation and the fact that a program exists in some Platonic sense forever, whereas we only have one particular existence. He notes that at the moment one of the main differences between brain and computer is our ability to download, amend and/or re-run programs exactly; we can't do that at all with the brain. He therefore looks for reasons why brain states might be uncopyable. The question is, how much detail do we need before making a 'good enough' copy? If it turns out that we have to go down to the quantum level we run into the 'no-cloning' theorem; the price of transferring the quantum state of your brain is the destruction of the original. Aaronson makes a good case for the resulting view of our probable uniqueness being an intuitively comfortable one, in tune with our intuitions about our own nature. It also offers incidentally a sort of reconciliation between the Everett many-worlds view and the Copenhagen interpretation of quantum physics: from a God's eye point of view we can see the world as branching, while from the point of view of any conscious entity (did I just accidentally call God unconscious?) the relevant measurements are irreversible and unrealised branches can be 'lopped off'. Aaronson, incidentally, reports amusingly that Penrose absolutely accepts that the Everett view follows from our current understanding of quantum physics; he just regards that as a reductio ad absurdum – i.e. the Everett view is so absurd that it proves there must be something wrong with our current understanding of quantum physics.
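For completeness, the no-cloning theorem Aaronson appeals to says there is no fixed unitary operation U and blank state |e⟩ such that, for every unknown state |ψ⟩,

```latex
U\bigl(|\psi\rangle \otimes |e\rangle\bigr) \;=\; |\psi\rangle \otimes |\psi\rangle
```

So if personal identity really does reach down to the quantum state of the brain, that state can be moved (teleported, destroying the original) but never duplicated.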

What about Penrose? According to Aaronson he now prefers to rest his case on evolutionary factors and downplay his logical argument based on Gödel. That's a shame in my view. The argument goes something like this (if I garble it, someone will perhaps offer a better version).

First we set up a formal system for ourselves. We can just use the letters of the alphabet, normal numbers, and normal symbols of formal logic, with all the usual rules about how they can be put together. Then we make a list consisting of all the valid statements that can be made in this system. By ‘valid’, we don’t mean they’re true, just that they comply with the rules about how we put characters together (eg, if we use an opening bracket, there must be a closing one in an appropriate place). The list of valid statements will go on forever, of course, but we can put them in alphabetical order and number them. The list obviously includes everything that can be said in the system.

Some of the statements, by pure chance, will be proofs of other statements in the list. Equally, somewhere in our list will be statements that tell us that the list includes no proof of statement x. Somewhere else will be another statement – let’s call this the ‘key statement’ – that says this about itself. Instead of x, the number of that very statement itself appears. So this one says, there is no proof in this system of this statement.

Is the key statement correct – is there no proof of the key statement in the system? Well, we could look through the list, but as we know it goes on indefinitely; so if there really is no proof there, we'd simply be looking forever. So we need to take a different tack. Could the key statement be false? Well, if it is false, then what it says is wrong, and there is a proof somewhere in the list. But that can't be, because if there's a proof of the key statement anywhere, the key statement must be true! Assuming the key statement is false leads us unavoidably to the conclusion that it is true, in the light of what it actually says. We cannot have a contradiction, so the key statement must be true.

So by looking at what the key statement says, we can establish that it is true; but we also establish that there is no proof of it in the list. If there is no proof in the list, there is no possible proof in our system, because we know that the list contains everything that can be said within our system; there is therefore a true statement in our system that is not provable within it. We have something that cannot be proved in an arbitrary formal system, but which human reasoning can show to be true; ergo, human reasoning is not operating within any such formal system. All computers work in a formal system, so it follows that human reasoning is not computational.
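In the usual notation, the 'key statement' is a Gödel sentence G constructed so that the formal system F itself proves the equivalence

```latex
G \;\leftrightarrow\; \neg\,\mathrm{Prov}_{F}\bigl(\ulcorner G \urcorner\bigr)
```

and Gödel's first incompleteness theorem says that if F is consistent, F proves neither G nor its negation, even though, reasoning about F from outside, we can see that G is true. Penrose's controversial step is the final one, from that observation to the non-computability of the reasoning that sees it.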

As Aaronson says, this argument was discussed to the point of exhaustion when it first came out, which is probably why Penrose prefers other arguments now. Aaronson rejects it, pointing out that he himself has no magic ability to see “from the outside” whether a given formal system is consistent; why should an AI do any better – he suggests Turing made a similar argument. Penrose apparently responded that this misses the point, which is not about a mystical ability to perceive consistency but the human ability to transcend any given formal system and move up to an expanded one.

I'll leave that for readers to resolve to their own satisfaction. Let's go back to Aaronson's suggestion that the burden of proof lies on those who argue for the non-computability of consciousness. What an odd idea that is! How would that play at the Patent Office?

“So this is your consciousness machine, Mr A? It looks like a computer. How does it work?”

“All I’ll tell you is that it is a computer. Then it’s up to you to prove to me that it doesn’t work – otherwise you have to give me rights over consciousness! Bwah ha ha!”

Still, I’ll go along with it. What have I got? To begin with I would timidly offer my own argument that consciousness is really a massive development of recognition, and that recognition itself cannot be algorithmic.

Intuitively it seems clear to me that the recognition of linkages and underlying entities is what powers most of our thought processes. More formally, both of the main methods of reasoning rely on recognition; induction because it relies on recognising a real link (eg a causal link) between thing a and thing b; deduction because it reduces to the recognition of consistent truth values across certain formal transformations. But recognition itself cannot operate according to rules. In a program you just hand the computer the entities to be processed; in real world situations they have to be recognised. But if recognition used rules and rules relied on recognising the entities to which the rules applied, we’d be caught in a vicious circularity. It follows that this kind of recognition cannot be delivered by algorithms.

The more general case rests on, as it were, the non-universality of computation. It’s argued that computation can run any algorithm and deliver, to any required degree of accuracy, any set of physical states of affairs. The problem is that many significant kinds of states of affairs are not describable in purely physical or algorithmic terms. You cannot list the physical states of affairs that correspond to a project, a game, or a misunderstanding. You can fake it by generating only sets of states of affairs that are already known to correspond with examples of these things, but that approach misses the point. Consciousness absolutely depends on intentional states that can’t be properly specified except in intentional terms. That doesn’t contradict physics or even add to it the way new quantum mechanics might; it’s just that the important aspects of reality are not exhausted by physics or by computation.

The thing is, I think long exposure to programmable environments and interesting physical explanations for complex phenomena has turned us all increasingly into flatlanders who miss a dimension; who naturally suppose that one level of explanation is enough, or rather who naturally never even notice the possibility of other levels; but there are more things in heaven and earth than are dreamt of in that philosophy.

I liked this account by Bobby Azarian of why digital computation can't do consciousness. It has several virtues; it's clear, identifies the right issues and is honest about what we don't know (rather than passing off the author's own speculations as the obvious truth or the emerging orthodoxy). Also, remarkably, I almost completely agree with it.

Azarian starts off well by suggesting that lack of intentionality is a key issue. Computers don’t have intentions and don’t deal in meanings, though some put up a good pretence in special conditions.  Azarian takes a Searlian line by relating the lack of intentionality to the maxim that you can’t get meaning-related semantics from mere rule-bound syntax. Shuffling digital data is all computers do, and that can never lead to semantics (or any other form of meaning or intentionality). He cites Searle’s celebrated Chinese Room argument (actually a thought experiment) in which a man given a set of rules that allow him to provide answers to questions in Chinese does not thereby come to understand Chinese. But, the argument goes, if the man, by following rules, cannot gain understanding, then a computer can’t either. Azarian mentions one of the objections Searle himself first named, the ‘systems response’: this says that the man doesn’t understand, but a system composed of him and his apparatus, does. Searle really only offered rhetoric against this objection, and in my view it is essentially correct. The answers the Chinese Room gives are not answers from the man, so why should his lack of understanding show anything?

Still, although I think the Chinese Room fails, I think the conclusion it was meant to establish – no semantics from syntax – turns out to be correct, so I'm still with Azarian. He moves on to make another Searlian point: simulation is not duplication. Searle pointed out that nobody gets wet from digitally simulated rain, and hence simulating a brain on a computer should not be expected to produce consciousness. Azarian gives some good examples.

The underlying point here, I would say, is that a simulation always seeks to reproduce some properties of the thing simulated, and drops others which are not relevant for the purposes of the simulation. Simulations are selective and ontologically smaller than the thing simulated – which, by the way, is why Nick Bostrom’s idea of indefinitely nested world simulations doesn’t work. The same thing can however be simulated in different ways depending on what the simulation is for. If I get a computer to simulate me doing arithmetic by calculating, then I get the correct result. If it simulates me doing arithmetic by operating a humanoid writing random characters on a board with chalk, it doesn’t – although the latter kind of simulation might be best if I were putting on a play. It follows that Searle isn’t necessarily exactly right, even about the rain. If my rain simulation program turns on sprinklers at the right stage of a dramatic performance, then that kind of simulation will certainly make people wet.
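A trivial sketch of that selectivity, with both 'simulations' and the staging scenario invented for the example:

```python
import random

def simulate_by_calculating(a, b):
    """Reproduces the one property I care about when doing sums: the correct result."""
    return a + b

def simulate_for_the_stage(a, b):
    """Reproduces only what a theatre audience sees: someone chalking plausible marks."""
    return "".join(random.choice("0123456789+=") for _ in range(8))

print(simulate_by_calculating(17, 25))   # 42 -- right answer, no spectacle
print(simulate_for_the_stage(17, 25))    # e.g. '3+81=297' -- spectacle, wrong answer
```

Which simulation is the 'right' one depends entirely on whether the purpose is getting the sum done or putting on the play.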

Searle's real point, of course, is that the properties a computer has in itself, of running sets of rules, are not the relevant ones for consciousness, and Searle hypothesises that the required properties are biological ones we have yet to identify. This general view, endorsed by Azarian, is roughly correct, I think. But it's still plausibly deniable. What kind of properties does a conscious mind need? All right, we don't know, but might not information processing be relevant? It looks to a lot of people as if it might be, in which case that's what we should need for consciousness in an effective brain simulator. And what properties does a digital computer have, in itself – the property of doing information processing? Booyah! So maybe we even need to look again at whether we can get semantics from syntax. Maybe in some sense syntactic operations can underpin processes which transcend mere syntax?

Unless you accept Roger Penrose’s proof that human thinking is not algorithmic (it seems to have drifted off the radar in recent years) this means we’re still really left with a contest of intuitions, at least until we find out for sure what the magic missing ingredient for consciousness is. My intuitions are with Azarian, partly because the history of failure with strong AI looks to me very like a history of running up against the inadequacy of algorithms. But I reckon I can go further and say what the missing element is. The point is that consciousness is not computation, it’s recognition. Humans have taken recognition to a new level where we recognise not just items of food or danger, but general entities, concepts, processes, future contingencies, logical connections, and even philosophical ontologies. The process of moving from recognised entity to recognised entity by recognising the links between them is exactly the process of thought. But recognition, in us, does not work by comparing items with an existing list, as an algorithm might do; it works by throwing a mass of potential patterns at reality and seeing what sticks. Until something works, we can’t tell what are patterns at all; the locks create their own keys.

It follows that consciousness is not essentially computational (I still wonder whether computation might not subserve the process at some level). But now I’m doing what I praised Azarian for avoiding, and presenting my own speculations…

What are they, sadists? Johannes Kuehn and Sami Haddadin, at Leibniz University of Hannover, are working on giving robots the ability to feel pain; they presented their project at the recent ICRA 2016 in Stockholm. The idea is that pain systems built along the same lines as those in humans and other animals will be more useful than simple mechanisms for collision avoidance and the like.

As a matter of fact I think that the human pain system is one of Nature’s terrible lash-ups. I can see that pain sometimes might stop me doing bad things, but often fear or aversion would do the job equally well. If I injure myself I often go on hurting for a long time even though I can do nothing about the problem. Sometimes we feel pain because of entirely natural things the body is doing to itself – why do babies have to feel pain when their teeth are coming through? Worst of all, pain can actually be disabling; if I get a piece of grit in my eye I suddenly find it difficult to concentrate on finding my footing or spotting the sabre-tooth up ahead; things that may be crucial to my survival; whereas the pain in my eye doesn’t even help me sort out the grit. So I’m a little sceptical about whether robots really need this, at least in the normal human form.

In fact, if we take the project seriously, isn’t it unethical? In animal research we’re normally required to avoid suffering on the part of the subjects; if this really is pain, then the unavoidable conclusion seems to be that creating it is morally unacceptable.

Of course no-one is really worried about that because it’s all too obvious that no real pain is involved. Looking at the video of the prototype robot it’s hard to see any practical difference from one that simply avoids contact. It may have an internal assessment of what ‘pain’ it ought to be feeling, but that amounts to little more than holding up a flag that has “I’m in pain” written on it. In fact tackling real pain is one of the most challenging projects we could take on, because it forces us to address real phenomenal experience. In working on other kinds of sensory system, we can be sceptics; all that stuff about qualia of red is just so much airy-fairy nonsense, we can say; none of it is real. It’s very hard to deny the reality of pain, or its subjective nature: common sense just tells us that it isn’t really pain unless it hurts. We all know what “hurts” really means, what it’s like, even though in itself it seems impossible to say anything much about it (“bad”, maybe?).

We could still take the line that pain arises out of certain functional properties, and that if we reproduce those then pain, as an emergent phenomenon, will just happen. Perhaps in the end, if the robots reproduce our behaviour perfectly and have internal functional states that seem to be the same as the ones in the brain, it will become just absurd to deny they're having the same experience. That might be so, but it seems likely that those functional states are going to go way beyond complex reflexes; they are going to need to be associated with other very complex brain states, and very probably with brain states that support some form of consciousness – whatever those may be. We're still a very long way from anything like that (as I think Kuehn and Haddadin would probably agree).

So, philosophically, does the research tell us nothing? Well, there's one interesting angle. Some people like the idea that subjective experience has evolved because it makes certain sensory inputs especially effective. I don't really know whether that makes sense, but I can see the intuitive appeal of the idea that pain that really hurts gets your attention more effectively than pain that's purely abstract knowledge of your own states. However, suppose researchers succeed in building robots that have a simple kind of synthetic pain that influences their behaviour in just the way real pain does for animals. We can see pretty clearly that there's just not enough complexity for real pain to be going on, yet the behaviour of the robot is just the same as if there were. Wouldn't that tend to disprove the hypothesis that qualia have survival value? If so, then people who like that idea should be watching this research with interest – and hoping it runs into unexpected difficulty (usually a decent bet for any ambitious AI project, it must be admitted).