The recent victory scored by the AlphaGo computer system over a professional Go player might be more important than it seems.

At first sight it seems like another milestone on a pretty well-mapped road; significant but not unexpected. We’ve been watching games gradually yield to computers for many years; chess, notoriously, was one they once said was permanently out of the reach of the machines. All right, Go is a little bit special. It’s an extremely elegant game; from some of the simplest equipment and rules imaginable it produces a strategic challenge of mind-bending complexity, and one whose combinatorial vastness seems to laugh scornfully at Moore’s Law – maybe you should come back when you’ve got quantum computing, dude! But we always knew that that kind of confidence rested on shaky foundations; maybe Go is in some sense the final challenge, but sensible people were always betting on its being cracked one day.

The thing is, Go has not been beaten in quite the same way as chess. At one time it seemed to be an interesting question whether chess would be beaten by intelligence – a really good algorithm that sort of embodied some real understanding of chess – or by brute force; computers that were so fast and so powerful they could analyse chess positions exhaustively. That was a bit of an oversimplification, but I think it’s fair to say that in the end brute force was the major factor. Computers can play chess well, but they do it by exploiting their own strengths, not through human-style understanding. In a way that is disappointing, because it means the successful systems don’t really tell us anything new.

Go, by contrast, has apparently been cracked by deep learning, the technique that seems to be entering a kind of high summer of success. Oversimplifying again, we could say that the history of AI has seen a contest between two tribes: those who simply want to write programs that do what’s needed, and those who want the computer to work it out for itself, maybe using networks and reinforcement methods that broadly resemble the things the human brain seems to do. Neither side, frankly, has altogether delivered on its promises, and what we might loosely call the machine learning people have faced accusations that even when their systems work, we don’t know how, and so can’t consider them reliable.

What seems to have happened recently is that we have got better at deploying several different approaches effectively in concert. In the past people have sometimes tried to play golf with only one club, essentially using a single kind of algorithm which was good at one kind of task. The new Go system, by contrast, uses five different components carefully chosen for the task they were to perform; and instead of having good habits derived from the practice and insights of human Go masters built in, it learns for itself, playing through thousands of games.
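Purely to make the self-play idea concrete, here is a minimal sketch in Python. It is emphatically not AlphaGo (which combined deep policy and value networks with Monte Carlo tree search); the game, the parameters and the learning rule are all invented for illustration, but it shows a system improving its own evaluations simply by playing against itself, with nothing taken from human players built in.

```python
# A toy sketch of the self-play idea only -- not AlphaGo. A tabular value
# function for a tiny invented take-away game improves purely by self-play.
import random

N_STICKS = 10                 # starting pile; players remove 1 or 2, last stick wins
value = {0: 0.0}              # value of a position for the player to move

def estimate(sticks):
    return value.get(sticks, 0.5)          # unseen positions start as a coin toss

def choose_move(sticks, epsilon=0.1):
    moves = [m for m in (1, 2) if m <= sticks]
    if random.random() < epsilon:
        return random.choice(moves)         # occasional exploration
    # a position handed to the opponent is good for us if it is bad for them
    return max(moves, key=lambda m: 1.0 - estimate(sticks - m))

def self_play_episode(alpha=0.2):
    sticks, visited = N_STICKS, []
    while sticks > 0:
        visited.append(sticks)
        sticks -= choose_move(sticks)
    result = 1.0                             # the player who moved last has won
    for s in reversed(visited):              # walk back, alternating winner/loser
        value[s] = estimate(s) + alpha * (result - estimate(s))
        result = 1.0 - result

for _ in range(20000):
    self_play_episode()

print({s: round(estimate(s), 2) for s in range(1, N_STICKS + 1)})
# multiples of 3 come out low: they are losing for the player to move
```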

This approach takes things up to a new level of sophistication and clearly it is yielding remarkable success; but it’s also doing it in a way which I think is vastly more interesting and promising than anything done by Deep Thought or Watson. Let’s not exaggerate here, but this kind of machine learning looks just a bit more like actual thought. Claims are being made that it could one day yield consciousness; usually, if we’re honest, claims like that on behalf of some new system or approach can be dismissed because on examination the approach is just palpably not the kind of thing that could ever deliver human-style cognition; I don’t say deep learning is the answer, but for once, I don’t think it can be dismissed.

Demis Hassabis, who led the successful Google DeepMind project, is happy to take an optimistic view; in fact he suggests that the best way to solve the deep problems of physics and life may be to build a deep-thinking machine clever enough to solve them for us (where have I heard that idea before?). The snag with that is that old objection: the computer may be able to solve the problems, but we won’t know how and may not be able to validate its findings. In the modern world science is ultimately validated in the agora; rival ideas argue it out and the ones with the best evidence win the day. There are already some emerging problems, with proofs achieved by an exhaustive consideration of cases by computation that no human brain can ever properly validate.

More nightmarish still the computer might go on to understand things we’re not capable of understanding. Or seem to: how could we be sure?

Charles Babbage was not the only Victorian to devise a thinking machine.

He is, of course, considered the father, or perhaps the grandfather, of digital computing. He devised two remarkable calculating machines; the Difference Engine was meant to produce error-free mathematical tables for navigation or other uses; the Analytical Engine, an extraordinary leap of the imagination, would have been the first true general-purpose computer. Although Babbage failed to complete the building of the first, and the second never got beyond the conceptual stage, his achievement is rightly regarded as a landmark, and the Analytical Engine routines published by Lady Lovelace in 1843, with a translation of Menabrea’s description of the Engine, have gained her recognition as the world’s first computer programmer.

The digital computer, alas, went no further until Turing a hundred years later; but in 1851 Alfred Smee published The Process of Thought adapted to Words and Language together with a description of the Relational and Differential Machines – two more designs for cognitive mechanisms.

Smee held the unusual post of Surgeon to the Bank of England – in practice he acted as a general scientific and technical adviser. His father had been Chief Accountant to the Bank and little Alfred had literally grown up in the Bank, living inside its City complex. Apparently, once the Bank’s doors had shut for the night, the family rarely went to the trouble of getting them unlocked again to venture out; it must have been a strangely cloistered life. Like Babbage and other Victorians involved in London’s lively intellectual life, Smee took an interest in a wide range of topics in science and engineering, with his work in electro-metallurgy leading to the invention of a successful battery; he was a leading ophthalmologist and among many other projects he also wrote a popular book about the garden he created in Wallington, south of London – perhaps compensating for his stonily citified childhood?

Smee was a Fellow of the Royal Society, as was Babbage, and the two men were certainly acquainted (Babbage, a sociable man, knew everyone anyway and was on friendly terms with all the leading scientists of the day; he even managed to get Darwin, who hated socialising and suffered persistent stomach problems, out to some of his parties). However, it doesn’t seem the two ever discussed computing, and Smee’s book never mentions Babbage.

That might be in part because Smee came at the idea of a robot mind from a different, biological angle. As a surgeon he was interested in the nervous system and was a proponent of ‘electro-biology’, advocating the modern view that the mind depends on the activity of the brain. At public lectures he exhibited his own ‘injections’ of the brain, revealing the complexity of its structure; but Golgi’s groundbreaking methods of staining neural tissue were still in the future, and Smee therefore knew nothing about neurons.

Smee nevertheless had a relatively modern conception of the nervous system. He conducted many experiments himself (he used so many stray cats that a local lady was moved to write him a letter warning him to keep clear of hers) and convinced himself that photovoltaic effects in the eye generated small currents which were transmitted bio-electrically along the nerves and processed in the brain. Activity in particular combinations of ‘nervous fibrils’ gave rise to awareness of particular objects. The gist is perhaps conveyed in the definition of consciousness he offered in an earlier work, Principles of the Human Mind Deduced from Physical Laws:

When an image is produced by an action upon the external senses, the actions on the organs of sense concur with the actions in the brain; and the image is then a Reality.
When an image occurs to the mind without a corresponding simultaneous action of the body, it is called a Thought.
The power to distinguish between a thought and a reality, is called Consciousness.

This is not very different in broad terms from a lot of current thinking.

In The Process of Thought Smee takes much of this for granted and moves on to consider how the brain deals with language. The key idea is that the brain encodes things into a pyramidal classification hierarchy. Smee begins with an analysis of grammar, faintly Chomskyan in spirit if not in content or level of innovation. He then moves on rapidly to the construction of words. If his pyramidal structure is symbolically populated with the alphabet, different combinations of nervous activity will trigger different combinations of letters and so produce words and sentences. This seems to miss out some essential linguistic level, leaving the impression that all language is spelled out alphabetically, which can hardly be what Smee believed.

When not dealing specifically with language, the letters in Smee’s system correspond to qualities, and this pyramid stands for a universal categorisation of things in which any object can be represented as a combination of properties. (This rather recalls Bishop Wilkins’ proposed universal language, in which each successive letter of each noun identifies a position in a hierarchical classification, so that the name is an encoded description of the thing named.)
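To make the idea concrete, here is a small illustrative sketch in Python; the qualities, the objects and the encoding are all invented for the example and are not taken from Smee or Wilkins, but they show how a combination of present-or-absent properties can serve both as a classification and as an encoded ‘name’.

```python
# A rough sketch only: a guess at the spirit of the scheme, not Smee's or
# Wilkins' actual system; every quality and object here is invented.
qualities = ["animate", "vegetable", "mineral", "large", "small",
             "flying", "aquatic", "domestic"]

def encode(thing):
    """A Wilkins-style 'name': mark each quality as present (1) or absent (0)."""
    return "".join("1" if q in thing else "0" for q in qualities)

def belongs_to(thing, category):
    """Category membership: the thing has every quality the category requires."""
    return category <= thing

sparrow = {"animate", "small", "flying"}
granite = {"mineral", "large"}

print(encode(sparrow))                             # 10001100
print(belongs_to(sparrow, {"animate", "flying"}))  # True
print(belongs_to(granite, {"animate"}))            # False
```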

At least, I think that’s the way it works. The book goes on to give an account of induction, deduction, and the laws of thought; alas, Smee seems unaware of the problem described by Hume and does not address it. Instead, in essence, he just describes the processes; although he frames the discussion in terms of his pyramidal classification his account of induction (he suggests six different kinds) comes down to saying that if we observe two characteristics constantly together we assume a link. Why do we do that – and why is it we actually often don’t? Worse than that, he mentions simple arithmetic (one plus one equals two, two times two is four) and says:

These instances are so familiar we are apt to forget that they are inductions…

Alas, they’re not inductions. (You could arrive at them by induction, but no-one ever actually does and our belief in them does not rest on induction.)

I’m afraid Smee’s laws of thought also stand on a false premise; he says that the number of ideas denoted by symbols is finite, though too large for a man to comprehend. This is false. He might have been prompted to avoid the error if he had used numbers instead of letters for his pyramid, because each integer represents an idea, the list of integers goes on forever, and our numbering system provides a unique symbol for every one. So neither the list of ideas nor the list of symbols can be finite. Of course that barely scratches the surface of the infinitude of ideas and symbols, but it helps suggest just how unmanageable a categorisation of every possible idea really is.

But now we come to the machines designed to implement these processes. Smee believed that his pyramidal structure could be implemented in a hinged physical mechanism where opening would mean the presence or existence of the entity or quality and closing would mean its absence. One of these structures provides the Relational Machine. It can test membership of categories, or the possession of a particular attribute, and can encode an assertion, allowing us to test consistency of that assertion with a new datum. I have to confess to having only a vague understanding of how this would really work. He allows for partial closing and I think the idea is that something like predicate calculus could be worked out this way. He says at one point that arithmetic could be done with this mechanism and that anyone who understands logarithms will readily see how; I’m afraid I can only guess what he had in mind.
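For what it is worth, here is one possible reading of that consistency-testing function, sketched in Python. It is my guess at the mechanism, not a reconstruction of Smee’s design, and the example qualities are invented: each ‘hinge’ is open (the quality is present), closed (absent) or simply left unset.

```python
# A speculative sketch, not Smee's actual mechanism: model each hinge as
# present (True), absent (False) or unset (None), and test whether a new
# datum can coexist with a stored assertion.
from typing import Dict, Optional

Hinges = Dict[str, Optional[bool]]

def consistent(assertion: Hinges, datum: Hinges) -> bool:
    """The datum conflicts only if some quality is held open in one
    structure and held closed in the other."""
    for quality, state in assertion.items():
        other = datum.get(quality)
        if state is not None and other is not None and state != other:
            return False
    return True

# Stored claim about a specimen: it is a swan and it is white
assertion = {"swan": True, "white": True}

print(consistent(assertion, {"swan": True, "large": True}))   # True: no conflict
print(consistent(assertion, {"swan": True, "white": False}))  # False: a hinge the
                                                              # assertion holds open
                                                              # is reported closed
```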

It isn’t necessary to have two Relational Machines to deal with multiple assertions, because we can take them in sequence; however, the Differential Machine provides the capacity to compare directly, so that we can load all the laws and principles that should guide a case into one side while loading the facts into the other.

Smee had various ideas about how the machines could be implemented, and says he had a number of part-completed prototypes on his workbench. Unlike Babbage’s designs, though, his were never meant to be capable of full realisation; although it would be finite, he says the Relational Machine would cover London, and the mechanical stresses would destroy it immediately if it were ever used; moreover, using it to crank out elementary deductions would be so slow and tedious that people would soon revert to using their wonderfully compact and efficient brains instead. But partial examples would helpfully illustrate the process of thought and help eliminate mistakes and ‘quibbles’. Later chapters of the book explore things that can go wrong in legal cases, and describe a lot of the quibbles Smee presumably hoped his work might banish.

I think part of the reason Smee’s account isn’t clearer (to me, anyway) is that his ideas were never critiqued by colleagues and he never got near enough to a working prototype to experience the practical issues sufficiently. He must have been a somewhat lonely innovator in his lab in the Bank and in fairness the general modernity of his outlook makes us forget how far ahead of his time he was. When he published his description of his machines, Wundt, generally regarded as the founder of scientific psychology, was still an undergraduate. To a first approximation, nobody knew anything about psychology or neurology. Logic was still essentially in the long Aristotelian twilight – and of course we know where computing stood. It is genuinely remarkable that Smee managed, over a hundred and fifty years ago, to achieve a proto-modern, if flawed, understanding of the brain and how it thinks. Optimists will think that shows how clever he was; pessimists will think it shows how little our basic conceptual thinking has been updated by the progress of cognitive science.

Marvin Minsky, who died on Sunday, was a legend. Here’s a slightly edited version of my 2004 post about him.


Is it time to rehabilitate Marvin Minsky? As a matter of fact, I don’t think he was ever dishabilitated (so to speak) but it does seem to be generally felt that there are a couple of black marks against his name. The most widely-mentioned count against him and his views is a charge of flagrant over-optimism about the prospects for artificial intelligence. A story which gets quoted over and over again has it that he was so confident about duplicating human cognitive faculties, even way back in the 1970s when the available computing power was still relatively modest, that he gave the task of producing a working vision system to one of his graduate students as a project to sort out over the summer.

The story is apocryphal, but one can see why it gets repeated so much. The AI sceptics like it for obvious reasons, and the believers use it to say “I may seem like a gung-ho over-the-top enthusiast, but really my views are quite middle-of-the-road, compared to some people. Look at Marvin Minsky, for example, who once…”

Still, there is no doubt that Minsky did at one time predict much more rapid progress than has, in the event, materialized: in 1977 he declared that the problem of creating artificial intelligence would be substantially solved within a generation.

The other and perhaps more serious charge against him is that in 1969, together with Seymour Papert, he gave an unduly negative evaluation of Frank Rosenblatt’s ‘Perceptron’ (an early kind of neural network device which was able to tackle simple tasks such as shape recognition). Their condemnation, based on the single-layer version of the perceptron rather than more complex models, is considered to have led to the effective collapse of Rosenblatt’s research project and a long period of eclipse for networks before a new wave of connectionist research came along and claimed Rosenblatt as an unfairly neglected forerunner.

There’s something in both these charges, but surely in fairness neither ought to be all that damaging? Optimism can be a virtue, without which many long and difficult enterprises could not get started, and Minsky’s was really no more starry-eyed than many others. The suggestion of AI within a generation does little more than echo Turing’s earlier forecast of human-style performance by the end of the century, and although it didn’t come true, you would have to be a dark pessimist to deny that there were (and perhaps still are) some encouraging signs.

It seems to be true that Minsky and Papert, by focusing on the single-layer perceptron alone, did give an unduly negative view of Rosenblatt’s ideas – but if researchers were jailed for giving dismissive accounts of their rivals’ work, there wouldn’t be many on the loose today. The question is why Minsky and Papert’s view had such a strong negative influence when a judicious audience would surely have taken a more balanced view.

I suspect that both Minsky’s optimism and his attack on the perceptron should properly be seen as crystallizing in a particularly articulate and trenchant form views which were actually widespread at the time: Minsky was not so much a lonely but influential voice as the most conspicuous and effective exponent of the emergent consensus.

What then, about his own work? I take the most complete expression of his views to be “The Society of Mind”. This is an unusual book in several ways – for one thing it is formatted like no other book I have ever seen, with each page having its own unique heading. It has an upbeat tone, compared to many books in the consciousness field, which tend to be written as much against a particular point of view as for one. It is full of thought-provoking points, and is hard to read quickly or to summarise adequately, not because it is badly or obscurely written (quite the contrary) but because it inspires interesting trains of thought which take time to mull over adequately.

The basic idea is that each simple task is controlled by an agent, a kind of sub-routine. A set of agents which happen to be active when a good result is achieved get linked together by a k-line. The multi-level hierarchical structures which gradually get built up allow complex and highly conditional forms of behaviour to be displayed. Building with blocks is used as an example, starting with simple actions such as picking up a block, and gradually ascending to the point where we debate whether to go on playing a complex block-building game or go off to have lunch instead. Ultimately all mental activity is conducted by structures of this kind.
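Minsky gives no algorithm at this level of detail, so the following Python sketch is purely illustrative; the agents, the goal and the K-line bookkeeping are all invented names, but they show the shape of the idea: dumb specialists linked together by a record of what was active when something worked.

```python
# Illustrative only: a toy 'society' of agents plus K-lines, with every
# detail invented; Minsky's book describes the idea, not this code.
class Agent:
    """A small, dumb specialist that does one job when activated."""
    def __init__(self, name):
        self.name = name
    def act(self):
        print(f"{self.name} does its one small job")

agents = {name: Agent(name) for name in
          ["find-block", "grasp", "lift", "move-hand", "release"]}

k_lines = {}   # a K-line remembers which agents were active when a goal succeeded

def record_success(goal, active):
    """Link together the agents that happened to be active when things went well."""
    k_lines[goal] = list(active)

def recall(goal):
    """Re-activate the whole remembered set of agents at once."""
    for name in k_lines.get(goal, []):
        agents[name].act()

# First time round, some set of agents is active when a block gets stacked...
record_success("stack-a-block", ["find-block", "grasp", "lift", "move-hand", "release"])
# ...and the K-line later re-awakens that same little society on demand.
recall("stack-a-block")
```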

This is recognizably the way well-designed computer programs work, and it also bears a plausible resemblance to the way we do many things without thinking (when we move our arm we don’t think about muscle groups, but somehow somewhere they do get dealt with); but it isn’t a very intuitively appealing general model of human thought from an inside point of view. It naturally raises some difficulties about how to ensure that appropriate patterns of behaviour can be developed in novel circumstances. There are many problems which arise if we just leave the agents to slug it out amongst themselves – and large parts of the book are taken up with the interesting solutions Minsky has to offer. The real problem (as always) arises when we want to move out of the toy block world and deal with the howling gale of complexity presented by the real world.

Minsky’s solution is frames (often compared with Roger Schank’s similar strategy of scripts). We deal with reality through common sense, and common sense is, in essence, a series of sets of default assumptions about given events and circumstances. When we go to a birthday party, we have expectations about presents, guests, hosts, cakes and so on which give us a repertoire of appropriate responses to deploy and a context in which to respond to unfolding events. Alas, we know that common sense has so far proved harder to systematize than expected – so much so that these days the word ‘frame’ in a paper on consciousness is highly likely to be part of the dread phrase ‘frame problem’.
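A frame can be pictured as a bundle of default slot values that observation overrides; the sketch below is an invented illustration in that spirit, not anything taken from Minsky or Schank.

```python
# Invented example of a frame: default assumptions fill in whatever
# has not actually been observed.
birthday_party = {
    "guests":   "friends of the host",
    "presents": "expected",
    "food":     "cake",
    "mood":     "celebratory",
}

def instantiate(frame, **observed):
    """Start from the defaults and overwrite only the slots we actually observe."""
    situation = dict(frame)
    situation.update(observed)
    return situation

party = instantiate(birthday_party, food="pizza")
print(party)   # every slot defaulted except the one we observed
```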

The idea that mental activity is constituted by a society of agents who themselves are not especially intelligent is an influential one, and Minsky’s version of it is well-developed and characteristically trenchant. He has no truck at all with the idea of a central self, which in his eyes is pretty much the same thing as an homunculus, a little man inside your head. Free will, for him, is a delusion which we are unfortunately stuck with. This sceptical attitude certainly cuts out a lot of difficulties, though the net result is perhaps that the theory deals better with unconscious processes than conscious ones. I think the path set out by Minsky stops short of a real solution to the problem of consciousness and probably cannot be extended without some unimaginable new development. That doesn’t mean it isn’t a worthwhile exercise to stroll along it, however.

Consciousness is not a problem, says Michael Graziano in an Atlantic piece that is short and combative. (Also, I’m afraid, pretty sketchy in places. Space constraints might be partly to blame for that, but can’t altogether excuse some sweeping assertions made with the broadest of brushes.)

Graziano begins by drawing an analogy with Newton and his theory of light. The earlier view, he says, was that white light was pure, and colour happened when it was ‘dirtied’ by contact with the surfaces of coloured objects. The detail of exactly how this happened was a metaphysical ‘hard problem’. Newton dismissed all that by showing first, that white light is in fact a mixture of all colours, and second, that our vision produces only an inaccurate and simplified model of the reality, with only three different colour receptors.

Consciousness itself, Graziano says, is also a misleading model in a somewhat similar way, generated when the brain represents its own activity to itself. In fact, to be clear, consciousness as represented doesn’t happen; it is a mistaken construct, the result of the good-enough but far from perfect apparatus bequeathed to us by evolution (this sounds sort of familiar).

We should be clear that it is really Hard Problem consciousness that is the target here, the consciousness of subjective experience and of qualia. Not that the other sort is OK: Graziano dismisses the Easy Problem kind of consciousness, more or less in passing, as being no problem at all…

These days it’s not hard to understand how the brain can process information about the world, how it can store and recall memories, how it can construct self knowledge including even very complex self knowledge about one’s personhood and mortality. That’s the content of consciousness, and it’s no longer a fundamental mystery. It’s information, and we know how to build computers that process information.

Amazingly, that’s it. Graziano writes in an impatient tone; I have to confess to a slight ruffling of my own patience here; memory is not hard to understand? I had the impression that there were quite a number of unimpeachably respectable scientists working on the neurology of memory, but maybe they’re just doing trivial detail, the equivalent of butterfly collecting, or who knows, philosophy? …we know how to build computers… You know it’s not the 1980s any more? Yet apparently there are still clever people who think you can just say that the brain is a computer and that’s not only straightforwardly true, but pretty much a full explanation? I mean, the brain is also meat, and we know how to build tools that process meat; shall we stop there and declare the rest to be useless metaphysics?

‘Information’, as we’ve often noted before, is a treacherous, ambiguous word. If we mean something akin to data, then yes, computers can handle it; if we mean something akin to understanding, they’re no better than meat cleavers. Nothing means anything to a computer, while human consciousness reads and attributes meanings with prodigal generosity, arguably as its most essential, characteristic activity. No computer was ever morally responsible for anything, while our society is built around the idea that human beings have responsibilities, rights, and property. Perhaps Graziano has debunking arguments for all this that he hasn’t leisure to tell us about; the idea that they are all null issues with nothing worthwhile to be said about them just doesn’t fly.

Anyway, perhaps I should keep calm because that’s not even what Graziano is mainly talking about. He is really after qualia, and in that area I have some moderate sympathy with him; I think it’s true that the problem of subjective experience is most often misconceived, and it is quite plausible that the limitations of our sensory apparatus and our colour vision in particular contribute to the confusion. There is a sophisticated argument to be made along these lines: unfortunately Graziano’s isn’t it; he merely dismisses the issue: our brain plays us false and that’s it. You could perhaps get away with that if the problem were simply about our belief that we have qualia; it could be that the sensory system is just misinforming us, the way it does in the case of optical illusions. But the core problem is about people’s actual direct experience of qualia. A belief can be wrong, but an experience is still an experience even if it’s a misleading one, and the existence of any kind of subjective experience is the real core of the matter. Yes, we can still deny there is any such thing, and some people do so quite cogently, but to say that what I’m having now is not an experience but the mere belief that I’m having an experience is hard and, well, you know, actually rather metaphysical…

On examination I don’t think Graziano’s analogy with Newton works well. It’s not clear to me why the ‘older’ view is to be characterised as metaphysical (or why that would mean it was worthless). Shorn of the emotive words about dirt, the view that white light picks up colour from contact with coloured things, the way white paper picks up colour from contact with coloured crayons, seems a reasonable enough scientific hypothesis to have started with. It was wrong, but if anything it seems simpler and less abstract than the correct view. Newton himself would not have recognised any clear line between science and philosophy, and in some respects he left the true nature of light a more complicated matter, not fully resolved. His choice of particles over waves has proved to be an over-simplification and remains the subject of some cloudy ontology to this day.

Worse yet, if you think about it, it was Newton who first separated the two realms: colour as it is in the world and colour as we experience it. This is the crucial distinction that opened up the problem of qualia, first recognisably stated by Locke, a fervent admirer of Newton, some years after Newton’s work. You could argue therefore, that if the subject of qualia is a mess, it is a mess introduced by Newton himself – and scientists shouldn’t castigate philosophers for trying to clear it up.

Jochen’s paper Von Neumann Minds: Intentional Automata has been published in Mind and Matter.

Intentionality is meaningfulness, the quality of being directed at something, aboutness. It is in my view one of the main problems of consciousness, up there with the Hard Problem but quite distinct from it; but it is often under-rated or misunderstood. I think this is largely because our mental life is so suffused with intentionality that we find it hard to see the wood for the trees; certainly I have read more than one discussion by very clever people who seemed to me to lose their way half-way through without noticing and end up talking about much simpler issues.

That is not a problem with Jochen’s paper which is admirably clear.  He focuses on the question of how to ground intentionality and in particular how to do so without falling foul of an infinite regress or the dreaded homunculus problem. There are many ways to approach intentionality and Jochen briefly mentions and rejects a few (basing it in phenomenal experience or in something like Gricean implicature, for example) before introducing his own preferred framework, which is to root meaning in action: the meaning of a symbol is, or is to be found in, the action it evokes. I think this is a good approach; it interprets intentionality as a matter of input/output relations, which is clarifying and also has the mixed blessing of exposing the problems in their worst and most intractable form. For me it recalls the approach taken by Quine to the translation problem – he of course ended up concluding that assigning certain meanings to unknown words was impossible because of radical under-determination; there are always more possible alternative meanings which cannot be eliminated by any logical procedure. Under-determination is a problem for many theories of intentionality and Jochen’s is not immune, but his aim is narrower.

The real target of the paper is the danger of infinite regress. Intentionality comes in two forms, derived on the one hand and original or intrinsic on the other. Books, words, pictures and so on have derived intentionality; they mean something because the author or the audience interprets them as having meaning. This kind of intentionality is relatively easy to deal with, but the problem is that it appears to defer the real mystery to the intrinsic intentionality in the mind of the person doing the interpreting. The clear danger is that we then go on to defer the intentionality to an homunculus, a ‘little man’ in the brain who again is the source of the intrinsic intentionality.

Jochen quotes the arguments of Searle and others who suggest that computational theories of the mind fail because the meaning and even the existence of a computation is a matter of interpretation; without the magic input of intrinsic intentionality from the interpreter, the account collapses into radical under-determination. Jochen dramatises the point using an extension of Searle’s Chinese Room thought experiment in which it seems the man inside the room can really learn Chinese – but only because he has become in effect the required homunculus.

Now we come to the really clever and original part of the paper; Jochen draws an analogy with the problem of how things reproduce themselves. To do so it seems they must already have a complete model of themselves inside themselves… and so the problem of regress begins. It would be OK if the organism could scan itself, but a proof by Svozil seems to rule that out because of problems with self-reference. Jochen turns to the solution proposed by the great John Von Neumann (a man who might be regarded as the inventor of the digital computer if Turing had never lived). Von Neumann’s solution is expressed in terms of a two-dimensional cellular automaton (very simplistically, a pattern on a grid that evolves over time according to certain rules – Conway’s Game of Life surely provides the best-known examples). By separating the functions of copying and interpretation, and distinguishing active and passive states, Von Neumann managed to get round Svozil’s problem successfully.
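The trick can be caricatured in a few lines of code. The sketch below is a very loose illustration in Python, nothing like Von Neumann’s actual cellular automaton: the machine carries a passive description of itself which is used twice, once interpreted to build the offspring’s body and once merely copied to give the offspring its own description, so the description never has to describe itself.

```python
# Loose caricature of Von Neumann's self-reproduction scheme; every detail
# here is invented for illustration.
def construct(description):
    """'Universal constructor': use the description actively, interpreting it."""
    return {"body": description.upper()}      # stand-in for actually building parts

def copy(description):
    """'Copier': use the description passively, duplicating it uninterpreted."""
    return str(description)

def reproduce(machine):
    desc = machine["description"]
    offspring = construct(desc)               # active role: semantics
    offspring["description"] = copy(desc)     # passive role: syntax
    return offspring

parent = {"body": "BUILD ARM; BUILD TAPE READER",
          "description": "build arm; build tape reader"}
child = reproduce(parent)
print(child == reproduce(child))   # True: the line can continue indefinitely
```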

Now by importing this distinction between active and passive into the question of intentionality, Jochen suggests we can escape the regress. If symbols play either an active or a passive role (in effect, as semantics or as syntax) we can have a kind of automaton which, in a clear sense, gives its own symbols their interpretation, and so escapes the regress.

This is an ingenious move. It is not a complete solution to the problem of intentionality (I think the under-determination monster is still roaming around out there), but it is a novel and very promising solution to the regress. More than that, it offers a new perspective which may well offer further insights when fully absorbed; I certainly haven’t managed to think through what the wider implications might be, but if a process so central to meaningful thought truly works in this unexpected dual way it seems there are bound to be some. For that reason, I hope the paper gets wide attention from people whose brains are better at this sort of thing than mine…

Male and female brains are pretty much the same, but male and female behaviour is different. It turns out that the same neural circuitry exists in both, but is differently used.

A word of caution. We are talking about mice, in the main: those obliging creatures who seem ready to provide evidence to back all sorts of fascinating theories that somehow don’t transfer to human beings. And we’re also talking specifically about parental behaviour patterns; it seems those are rather well conserved between species – up to a point – but we shouldn’t generalise recklessly.

Catherine Dulac of Harvard explains the gist in this short Scientific American piece. A particular network of neurons in the hypothalamus was observed to be active during nurturing parental behaviour by females; by genetic engineering (amazing what we can do these days) those neurons were edited out of some females who then showed no caring behaviour towards infants. Meanwhile a group of males in which those neurons were stimulated (having been made light-sensitive by even more remarkable genetic manipulation) did show nurturing behaviour.

For male mammals it seems the norm is to kill strange infants on sight (I did say we should be careful about extrapolating to human beings); another set of neurons in the hypothalamus proves to be associated with this behaviour in just the same kind of way as the ones associated with nurturing behaviour.

One of the interesting things here is that both networks exist in both sexes; no-one knows at the moment why one is normally active in females and the other in males. If we were talking about human beings we should be tempted to attribute the difference to cultural factors (I hope that by now nobody is going to be astonished by the idea that different cultural influences could lead to different patterns of physical activity in the brain); it doesn’t seem very plausible that that could be the case for mice. It goes without saying that to identify a new hidden factor which sets certain gender roles in mice would inevitably trigger a highly-charged discussion of the possible equivalent in human beings.

So much for the proper research. Could we dare to venture on the irresponsibly speculative hypothesis that men and women habitually think somewhat differently but that each is fully capable of thinking like the other? I shouldn’t care to advance that thesis and the whole topic of measuring the quality and style of thought processes is deeply fraught scientifically and beset with difficulty philosophically.

There is, though, one rather striking piece of evidence to suggest that men and women can enter fully into each other’s minds: novels. Human consciousness is often depicted in novels; indeed the depiction may be the central feature or even pretty much the whole of the enterprise. Jane Austen, who arguably played a major role in making consciousness the centre of narrative, never wrote a scene in which two men converse in the absence of women, allegedly because she considered she could have no direct experience of how men talked to one another in those circumstances. Moreover, while she was exceptionally skilful at discreetly incorporating a view from inside her heroines’ heads, she never did the same for Darcy or Mr Knightley.

But others have never been so restrained; male authors have depicted the inward world of females and vice versa with very few complaints; that rather suggests we can swap mental gender roles without difficulty.

But are we sure? I suppose it could be argued that as a man I have no more access to the minds of other men than to those of women, so to a degree I actually have to take it on trust that the male mind is accurately depicted in novels. In some cases, certain allegedly typical male thought patterns depicted in books (Nick Hornby choosing a routine football match over a good friend’s wedding) are actually rather hard for me to enter into sympathetically. For that matter I recall the indignant rebuttal I got from a female fan when I suggested that Robert Heinlein’s depiction of the female mind might be slightly off the mark. Perhaps, then, none of us knows anything about the matter for sure in the end. Still I think nil humanum a me alienum puto (I think nothing human alien to me) is a good motto and, I’m slightly encouraged to think, an attainable aspiration.

Physical determinism is implausible according to Richard Swinburne in the latest JCS; he cunningly attacks via epiphenomenalism.

Swinburne defines physical events as public and mental ones as private – we could argue about that, but as a bold, broad view it seems fair enough. Mental events may be phenomenal or intentional, but for current purposes the distinction isn’t important. Physical determinism is defined as the view that each physical event is caused solely by other physical events; here again we might quibble, but the idea seems basically OK to be going on with.

Epiphenomenalism, then, is the view that while physical events may cause mental ones, mental ones never cause physical ones. Mental events are just, as they say, the whistle on the locomotive (though the much-quoted analogy is not exact: prolonged blowing of the whistle on a steam locomotive can adversely affect pressure and performance). Swinburne rightly describes epiphenomenalism as an implausible view (in my view, anyway – many people would disagree), but for him it is entailed by physical determinism, because physical events are only ever caused by other physical events. In his eyes, then, if he can prove that epiphenomenalism is wrong, he has also shown that physical determinism is ruled out. This is an unusual, perhaps even idiosyncratic perspective, but not illogical.

Swinburne offers some reasonable views about scientific justification, but what it comes down to is this; to know that epiphenomenalism is true we have to show that mental events cause no physical events; but that very fact would mean we could never register when they had occurred – so how would we prove it? In order to prove epiphenomenalism true, we must assume that what it says is false!

Swinburne takes it that epiphenomenalism means we could never speak of our private mental events – because our words would have to have been caused by the mental events, and ex hypothesi they don’t cause physical events like speech. This isn’t clearly the case – as I’ve mentioned before, we manage to speak of imaginary and non-existent things which clearly have no causal powers. Intentionality – meaning – is weirder and more powerful than Swinburne supposes.

He goes on to discuss the famous findings of Benjamin Libet, which seem to show that decisions are detectable in the brain before we are aware of having made them. These results point towards epiphenomenalism being true after all. Swinburne is not impressed; he sees no basic causal problem in the idea that a brain event precedes the mental event of the decision, which in turn precedes action. Here he seems to me to miss the point a bit, which is that if Libet is right, the mental experience of making a decision has no actual effect, since the action is already determined.

The big problem, though, is that Swinburne never engages with the normal view, i.e. that in one way or another mental events have two aspects. A single brain event is at the same time a physical event which is part of the standard physical story, and a mental event in another explanatory realm. In one way this is unproblematic; we know that a mass of molecules may also be a glob of biological structure, and an organism; we know that a pile of paper, a magnetised disc, or a reel of film may all also be “A Christmas Carol”. As Scrooge almost puts it, Marley’s ghost may be undigested gravy as well as a vision of the grave.

It would be useless to pretend there is no residual mystery about this, but it’s overwhelmingly how most people reconcile physical determinism with the mental world, so for Swinburne to ignore it is a serious weakness.

Aeon Magazine has published my Opinion piece on brain simulation. Go on over there and comment! Why not like me while you’re at it!!!

I’m sorry about that outburst – I got a little over-excited…

Coming soon (here) Babbage’s forgotten rival…

Is cosmopsychism the panpsychism we’ve all been waiting for? Itay Shani thinks so and sets out the reasons in this paper. While others start small and build up, he starts with the cosmos and works down. But he rejects the Blobject…

To begin at the beginning. Panpsychism is the belief that consciousness is everywhere; that it is in some sense a basic part of the world. Typically when people try to explain consciousness they start with the ingredients supplied by physics and try to build a mind out of them in a way which plausibly accounts for all the remarkable features of consciousness. Panpsychists just take awareness for granted, the way we often take matter or energy for granted; they take it to be primary, and this arguably gets them out of a very difficult explanatory task. There are a number of variants – panexperientialism, panentheism, and so on – which tend to be bracketed with panpsychism as similar considerations apply to all members of the family.

This kind of thinking has enjoyed quite a good level of popularity in recent years, perhaps a rising one. Regular readers may recall, though, that I’m not attracted by panpsychism. If stones have consciousness, we still have to explain how human consciousness comes to be different from what the stones have got. I suspect that that task is going to be just as difficult as explaining consciousness from scratch, so that adopting the panpsychist thesis leaves us worse off rather than better.

Shani, however, thinks some of the problems are easily dealt with; others he takes very seriously. He points out quite fairly that panpsychists are not bound to ascribe awareness to every entity at every level; they’re OK just so long as there is, as it were, universal coverage at some level. Most panpsychists, as he rightly observes, tend to push the basic home of consciousness down to a micro level, which leaves us with the problem of how these simple micro-consciousnesses can come together to form a higher level one – or sometimes not form a higher one.

This combination issue is a difficult one that comes in many forms: Shani picks out particularly the questions of how micro-subjects can combine to form a macro-subject, how phenomenal experience can combine, and how the structure of experience can combine. Cutting to the chase, he finds the most difficult of the three to be the problem of subjects, and in particular he quotes an argument of Coleman’s. This is, in brief, that distinct subjects require distinct points of view, but that in merging, points of view lose their identity. He mentions the simplified case of a subject that only sees red and one that only sees blue: the combined point of view includes both blue and red and the ‘just-red’ and ‘just-blue’ points of view are lost.

I think it requires a good deal more argumentation than Shani offers to make all this really convincing. He and Coleman, for example, take it as given that the combination of subjects must preserve the existence of the combined elements, more or less as the combination of hydrogen and oxygen to make water does not annihilate the component elements. Maybe that is the case, but the point seems very arguable.

Shani also seems to give way to Coleman without much of a fight, although there’s plenty of scope for one. But after all these are highly complex issues and Shani only has so much space: moreover I’m inclined to go along with him because I agree that the combination problem is very bad; perhaps worse than Shani thinks.

It just seems intuitively very unlikely that two micro-minds can be combined. Two of the things that seem clearest about our own minds are their terrific complexity and their strong overall unity; both of those factors seem to throw up problems for a merger. To me it seems that two minds are like two clocks: you cannot meaningfully merge them except by taking them apart into their basic components and putting something completely new together – which is no use at all for panpsychism.

For Shani, of course, combination must fail so that he can offer his cosmic solution as an alternative route to a viable panpsychism. He sets out his stall with seven postulates.

  1. The cosmos as a whole is the only ontological ultimate there is, and it is conscious.
  2. It is prior to its parts.
  3. It is laterally dual in nature, having a concealed and a revealed side (the concealed side being phenomenal experience while the revealed side is the apparently objective world around us).
  4. It is like a fluctuating ocean, with waves, ripples and vortices assuming temporary identity of their own.
  5. The cosmic consciousness grounds the smaller consciousnesses within it.
  6. Conscious entities are dynamic configurations within the cosmic whole.
  7. These consciousnesses are severally related to particular surges or vortices of the cosmic consciousness and never fully separate from it.

That seems at least a vision we can entertain, but it immediately faces the challenge of the Blobject. This is the universal cosmic object championed by Terry Horgan & Matjaž Potrč. They are happy with the grand cosmic unity proposed by Shani but they go further; how can it have any parts? They believe the great cosmic consciousness is the Blobject, the only thing that truly exists; the idea that there are really other things is deluded.

The austere ontology of the Blobject and its splendid parsimony can only be admired. We might talk more about it another time; but for now I’m inclined to agree with Shani that the task of reconciling it with actual experience is just too fraught with difficulty.

So does Shani succeed? He does, I think, set out, albeit briefly, a coherent and interesting view; but it does not have the advantages he supposes. He believes that starting at the top and working down avoids the difficult problems we encounter if we start at the bottom and work up. I think that is an illusion derived from the fact that the bottom-up approach has just been discussed more. I think in fact that just the same problems must recur whichever way we approach things.

Take the Coleman point. Coleman’s objection is that in combining, two points of view lose their separate identity, while it needs to be preserved. But surely, if we take his blue-and-red pov and split it into just-blue and just-red we get a similar loss of the original identity. Now as I said, I’m not altogether sure that this need be a problem, but it seems to me clear that it doesn’t really matter which way we move through the problem; and the same must be true of all arguments which relate different levels of panpsychist consciousness. Is there really any fundamental asymmetry that makes the top-down view stronger?

Over the years many variants and improvements to the Turing Test have been proposed, but surely none more unexpected than the one put forward by Andrew Smart in this piece, anticipating his forthcoming book Beyond Zero and One. He proposes that in order to be considered truly conscious, a robot must be able to take an acid trip.

He starts out by noting that computers seem to be increasing in intelligence (whatever that means), and that many people see them attaining human levels of performance by 2100 (actually quite a late date compared to the optimism of recent decades; Turing talked about 2000, after all). Some people, indeed, think we need to be concerned about whether the powerful AIs of the future will like us or behave well towards us. In my view these worries tend to blur together two different things; improving processing speeds and sophistication of programming on the one hand, and transformation from a passive data machine into a spontaneous agent, quite a different matter. Be that as it may, Smart reasonably suggests we could give some thought to whether and how we should make machines conscious.
It seems to me – this may be clearer in the book – that Smart divides things up in a slightly unusual way. I’ve got used to the idea that the big division is between access and phenomenal consciousness, which I take to be the same distinction as the one defined by the terminology of Hard versus Easy Problems. In essence, we have the kind of consciousness that’s relevant to behaviour, and the kind that’s relevant to subjective experience.

Although Smart alludes to the Chalmersian zombies that demonstrate this distinction, I think he puts the line a bit lower; between the kind of AI that no-one really supposes is thinking in a human sense and the kind that has the reflective capacities that make up the Easy Problem. He seems to think that experience just goes with that (which is a perfectly viable point of view). He speaks of consciousness as being essential to creative thought, for example, which to me suggests we’re not talking about pure subjectivity.

Anyway, what about the drugs? Smart seems to think that requiring robots to be capable of an acid trip is raising the bar, because it is in these psychedelic regions that the highest, most distinctive kind of consciousness is realised. He quotes Hofmann as believing that LSD…

…allows us to become aware of the ontologically objective existence of consciousness and ourselves as part of the universe…

I think we need to be wary here of the distinction between becoming aware of the universal ontology and having the deluded feeling of awareness. We should always remember the words of Oliver Wendell Holmes Sr:

…I once inhaled a pretty full dose of ether, with the determination to put on record, at the earliest moment of regaining consciousness, the thought I should find uppermost in my mind. The mighty music of the triumphal march into nothingness reverberated through my brain, and filled me with a sense of infinite possibilities, which made me an archangel for the moment. The veil of eternity was lifted. The one great truth which underlies all human experience, and is the key to all the mysteries that philosophy has sought in vain to solve, flashed upon me in a sudden revelation. Henceforth all was clear: a few words had lifted my intelligence to the level of the knowledge of the cherubim. As my natural condition returned, I remembered my resolution; and, staggering to my desk, I wrote, in ill-shaped, straggling characters, the all-embracing truth still glimmering in my consciousness. The words were these (children may smile; the wise will ponder): “A strong smell of turpentine prevails throughout.”…

A second problem is that Smart believes (with a few caveats) that any digital realisation of consciousness will necessarily have the capacity for the equivalent of acid trips. This seems doubtful. To start with, LSD is clearly a chemical matter and digital simulations of consciousness generally neglect the hugely complex chemistry of the brain in favour of the relatively tractable (but still unmanageably vast) network properties of the connectome. Of course it might be that a successful artificial consciousness would necessarily have to reproduce key aspects of the chemistry and hence necessarily offer scope for trips, but that seems far from certain. Think of headaches; I believe they generally arise from incidental properties of human beings – muscular tension, constriction of the sinuses, that sort of thing – I don’t believe they’re in any way essential to human cognition and I don’t see why a robot would need them. Might not acid trips be the same, a chance by-product of details of the human body that don’t have essential functional relevance?

The worst thing, though, is that Smart seems to have overlooked the main merit of the Turing Test; it’s objective. OK, we may disagree over the quality of some chat-bot’s conversational responses, but whether it fools a majority of people is something testable, at least in principle. How would we know whether a robot was really having an acid trip? Writing a chat-bot to sound as if it were tripping seems far easier than the original test; but other than talking to it, how can we know what it’s experiencing? Yes, if we could tell it was having intense trippy experiences, we could conclude it was conscious… but alas, we can’t. That seems a fatal flaw.

Maybe we can ask tripbot whether it smells turpentine.