Babbage’s Rival?

Charles Babbage was not the only Victorian to devise a thinking machine.

He is, of course, considered the father, or perhaps the grandfather, of digital computing. He devised two remarkable calculating machines: the Difference Engine was meant to produce error-free mathematical tables for navigation and other uses; the Analytical Engine, an extraordinary leap of the imagination, would have been the first true general-purpose computer. Although Babbage never completed the first, and the second never got beyond the conceptual stage, his achievement is rightly regarded as a landmark, and the Analytical Engine routines published by Lady Lovelace in 1843, alongside her translation of Menabrea’s description of the Engine, have gained her recognition as the world’s first computer programmer.

The digital computer, alas, went no further until Turing a hundred years later; but in 1851 Alfred Smee published The Process of Thought adapted to Words and Language together with a description of the Relational and Differential Machines – two more designs for cognitive mechanisms.

Smee held the unusual post of Surgeon to the Bank of England – in practice he acted as a general scientific and technical adviser. His father had been Chief Accountant to the Bank and little Alfred had literally grown up in the Bank, living inside its City complex. Apparently, once the Bank’s doors had shut for the night, the family rarely went to the trouble of getting them unlocked again to venture out; it must have been a strangely cloistered life. Like Babbage and other Victorians involved in London’s lively intellectual life, Smee took an interest in a wide range of topics in science and engineering, with his work in electro-metallurgy leading to the invention of a successful battery; he was a leading ophthalmologist and among many other projects he also wrote a popular book about the garden he created in Wallington, south of London – perhaps compensating for his stonily citified childhood?

Smee was a Fellow of the Royal Society, as was Babbage, and the two men were certainly acquainted (Babbage, a sociable man, knew everyone anyway and was on friendly terms with all the leading scientists of the day; he even managed to get Darwin, who hated socialising and suffered persistent stomach problems, out to some of his parties). However, it doesn’t seem the two ever discussed computing, and Smee’s book never mentions Babbage.

That might be in part because Smee came at the idea of a robot mind from a different, biological angle. As a surgeon he was interested in the nervous system and was a proponent of ‘electro-biology’, advocating the modern view that the mind depends on the activity of the brain. At public lectures he exhibited his own ‘injections’ of the brain, revealing the complexity of its structure; but Golgi’s groundbreaking methods of staining neural tissue were still in the future, and Smee therefore knew nothing about neurons.

Smee nevertheless had a relatively modern conception of the nervous system. He conducted many experiments himself (he used so many stray cats that a local lady was moved to write him a letter warning him to keep clear of hers) and convinced himself that photovoltaic effects in the eye generated small currents which were transmitted bio-electrically along the nerves and processed in the brain. Activity in particular combinations of ‘nervous fibrils’ gave rise to awareness of particular objects. The gist is perhaps conveyed in the definition of consciousness he offered in an earlier work, Principles of the Human Mind Deduced from Physical Laws:

When an image is produced by an action upon the external senses, the actions on the organs of sense concur with the actions in the brain; and the image is then a Reality.
When an image occurs to the mind without a corresponding simultaneous action of the body, it is called a Thought.
The power to distinguish between a thought and a reality, is called Consciousness.

This is not very different in broad terms from a lot of current thinking.

In The Process of Thought Smee takes much of this for granted and moves on to consider how the brain deals with language. The key idea is that the brain encodes things into a pyramidal classification hierarchy. Smee begins with an analysis of grammar, faintly Chomskyan in spirit if not in content or level of innovation. He then moves on rapidly to the construction of words. If his pyramidal structure is symbolically populated with the alphabet, different combinations of nervous activity will trigger different combinations of letters and so produce words and sentences. This seems to miss out some essential linguistic level, leaving the impression that all language is spelled out alphabetically, which can hardly be what Smee believed.

When not dealing specifically with language the letters in Smee’s system correspond to qualities and this pyramid stands for a universal categorisation of things in which any object can be represented as a combination of properties. (This rather recalls Bishop Wilkins’ proposed universal language, in which each successive letter of each noun identifies a position in an hierarchical classification, so that the name is an encoded description of the thing named.)
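
As a very rough sketch of the kind of structure Smee seems to have in mind (the qualities and objects here are my own inventions, not his, and the mechanics are much simplified), we might encode each thing as a set of yes/no flags, one per level of the pyramid, so that a category is simply everything under a given node:

```python
# A toy version of Smee's pyramidal classification, on my reading of it:
# each level of the hierarchy splits on one binary quality, so an object
# is encoded by the qualities it possesses along its path.

QUALITIES = ["living", "animal", "domestic"]  # one quality per level (illustrative)

def encode(thing_qualities):
    """Encode a thing as a tuple of 0/1 flags, one per level of the pyramid."""
    return tuple(1 if q in thing_qualities else 0 for q in QUALITIES)

def in_category(code, category_prefix):
    """A category is a prefix of the code: everything below one node."""
    return code[:len(category_prefix)] == category_prefix

cat = encode({"living", "animal", "domestic"})  # (1, 1, 1)
oak = encode({"living"})                        # (1, 0, 0)

print(in_category(cat, (1, 1)))  # True: cat sits under the 'animal' node
print(in_category(oak, (1, 1)))  # False: oak does not
```

On this reading Wilkins’ scheme and Smee’s come to much the same thing: the name, or the pattern of nervous activity, is an encoded position in the classification.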

At least, I think that’s the way it works. The book goes on to give an account of induction, deduction, and the laws of thought; alas, Smee seems unaware of the problem described by Hume and does not address it. Instead, in essence, he just describes the processes; although he frames the discussion in terms of his pyramidal classification, his account of induction (he suggests six different kinds) comes down to saying that if we observe two characteristics constantly together we assume a link. Why do we do that – and why is it we actually often don’t? Worse than that, he mentions simple arithmetic (one plus one equals two, two times two is four) and says:

These instances are so familiar we are apt to forget that they are inductions…

Alas, they’re not inductions. (You could arrive at them by induction, but no-one ever actually does and our belief in them does not rest on induction.)

I’m afraid Smee’s laws of thought also stand on a false premise; he says that the number of ideas denoted by symbols is finite, though too large for a man to comprehend. This is false. He might have been prompted to avoid the error if he had used numbers instead of letters for his pyramid – because each integer represents an idea; the list of integers goes on forever, yet our numbering system provides a unique symbol for every one. So neither the list of ideas nor the list of symbols can be finite. Of course that barely scratches the surface of the infinitude of ideas and symbols, but it helps suggest just how unmanageable a categorisation of every possible idea really is.

But now we come to the machines designed to implement these processes. Smee believed that his pyramidal structure could be implemented in a hinged physical mechanism where opening would mean the presence or existence of the entity or quality and closing would mean its absence. One of these structures provides the Relational Machine. It can test membership of categories, or the possession of a particular attribute, and can encode an assertion, allowing us to test consistency of that assertion with a new datum. I have to confess to having only a vague understanding of how this would really work. He allows for partial closing and I think the idea is that something like predicate calculus could be worked out this way. He says at one point that arithmetic could be done with this mechanism and that anyone who understands logarithms will readily see how; I’m afraid I can only guess what he had in mind.

It isn’t necessary to have two Relational Machines to deal with multiple assertions, because we can take them in sequence; the Differential Machine, however, provides the capacity to compare directly, so that we can load into one side all the laws and principles that should guide a case while loading the facts into the other.
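
My best guess at the logic of the two machines, mechanised very loosely (the encoding, the function names and the legal example are all my assumptions, not Smee’s): hinges become booleans, open for presence and closed for absence; the Relational Machine checks a new datum against an encoded assertion, and the Differential Machine reads off the disagreements between two loaded sides directly:

```python
# Hinges as booleans: True = open = quality present, False = closed = absent.
# This is only my reading of Smee's description, not his actual mechanism.

def consistent(assertion, datum):
    """Relational Machine, roughly: an assertion fixes some qualities; a
    datum is consistent if it agrees on every quality the assertion fixes
    (absence of a fixed quality counts as disagreement, a simplification)."""
    return all(datum.get(q) == v for q, v in assertion.items())

print(consistent({"swan": True, "white": True},
                 {"swan": True, "white": True}))   # True: the datum fits
print(consistent({"swan": True, "white": True},
                 {"swan": True, "white": False}))  # False: inconsistent datum

def differences(side_a, side_b):
    """Differential Machine, roughly: load principles into one side and
    facts into the other, then read off the points of disagreement."""
    return {q for q in set(side_a) | set(side_b) if side_a.get(q) != side_b.get(q)}

principles = {"contract_signed": True, "payment_made": True}
facts      = {"contract_signed": True, "payment_made": False}
print(differences(principles, facts))  # {'payment_made'}
```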

Smee had a number of different ideas about how the machines could be implemented, and says he had several part-completed prototypes of partial examples on his workbench. Unlike Babbage’s designs, though, his were never meant to be capable of full realisation: although he thinks it finite, he says the Relational Machine would cover London, and the mechanical stresses would destroy it immediately if it were ever used; moreover, using it to crank out elementary deductions would be so slow and tedious that people would soon revert to using their wonderfully compact and efficient brains instead. But partial examples will helpfully illustrate the process of thought and help eliminate mistakes and ‘quibbles’. Later chapters of the book explore things that can go wrong in legal cases, and describe a lot of the quibbles Smee presumably hoped his work might banish.

I think part of the reason Smee’s account isn’t clearer (to me, anyway) is that his ideas were never critiqued by colleagues and he never got near enough to a working prototype to experience the practical issues sufficiently. He must have been a somewhat lonely innovator in his lab in the Bank and in fairness the general modernity of his outlook makes us forget how far ahead of his time he was. When he published his description of his machines, Wundt, generally regarded as the founder of scientific psychology, was still an undergraduate. To a first approximation, nobody knew anything about psychology or neurology. Logic was still essentially in the long Aristotelian twilight – and of course we know where computing stood. It is genuinely remarkable that Smee managed, over a hundred and fifty years ago, to achieve a proto-modern, if flawed, understanding of the brain and how it thinks. Optimists will think that shows how clever he was; pessimists will think it shows how little our basic conceptual thinking has been updated by the progress of cognitive science.

Marvin Minsky

Marvin Minsky, who died on Sunday, was a legend. Here’s a slightly edited version of my 2004 post about him.

Is it time to rehabilitate Marvin Minsky? As a matter of fact, I don’t think he was ever dishabilitated (so to speak) but it does seem to be generally felt that there are a couple of black marks against his name. The most widely-mentioned count against him and his views is a charge of flagrant over-optimism about the prospects for artificial intelligence. A story which gets quoted over and over again has it that he was so confident about duplicating human cognitive faculties, even way back in the 1970s when the available computing power was still relatively modest, that he gave the task of producing a working vision system to one of his graduate students as a project to sort out over the summer.

The story is apocryphal, but one can see why it gets repeated so much. The AI sceptics like it for obvious reasons, and the believers use it to say “I may seem like a gung-ho over-the-top enthusiast, but really my views are quite middle-of-the-road, compared to some people. Look at Marvin Minsky, for example, who once…”

Still, there is no doubt that Minsky did at one time predict much more rapid progress than has, in the event, materialized: in 1967 he declared that the problem of creating artificial intelligence would be substantially solved within a generation.

The other and perhaps more serious charge against him is that in 1969, together with Seymour Papert, he gave an unduly negative evaluation of Frank Rosenblatt’s ‘Perceptron’ (an early kind of neural network device which was able to tackle simple tasks such as shape recognition). Their condemnation, based on the single-layer version of the perceptron rather than more complex models, is considered to have led to the effective collapse of Rosenblatt’s research project and a long period of eclipse for networks before a new wave of connectionist research came along and claimed Rosenblatt as an unfairly neglected forerunner.

There’s something in both these charges, but surely in fairness neither ought to be all that damaging? Optimism can be a virtue, without which many long and difficult enterprises could not get started, and Minsky’s was really no more starry-eyed than many others. The suggestion of AI within a generation does no more at most than echo Turing’s earlier forecast of human-style performance by the end of the century, and although it didn’t come true, you would have to be a dark pessimist to deny that there were (and perhaps still are) some encouraging signs.

It seems to be true that Minsky and Papert, by focusing on the single-layer perceptron alone, did give an unduly negative view of Rosenblatt’s ideas – but if researchers were jailed for giving dismissive accounts of their rivals’ work, there wouldn’t be many on the loose today. The question is why Minsky and Papert’s view had such a strong negative influence when a judicious audience would surely have taken a more balanced view.

I suspect that both Minsky’s optimism and his attack on the perceptron should properly be seen as crystallizing in a particularly articulate and trenchant form views which were actually widespread at the time: Minsky was not so much a lonely but influential voice as the most conspicuous and effective exponent of the emergent consensus.

What then, about his own work? I take the most complete expression of his views to be “The Society of Mind”. This is an unusual book in several ways – for one thing it is formatted like no other book I have ever seen, with each page having its own unique heading. It has an upbeat tone, compared to many books in the consciousness field, which tend to be written as much against a particular point of view as for one. It is full of thought-provoking points, and is hard to read quickly or to summarise adequately, not because it is badly or obscurely written (quite the contrary) but because it inspires interesting trains of thought which take time to mull over adequately.

The basic idea is that each simple task is controlled by an agent, a kind of sub-routine. A set of agents which happen to be active when a good result is achieved get linked together by a K-line. The multi-level hierarchical structures which gradually get built up allow complex and highly conditional forms of behaviour to be displayed. Building with blocks is used as an example, starting with simple actions such as picking up a block, and gradually ascending to the point where we debate whether to go on playing a complex block-building game or go off to have lunch instead. Ultimately all mental activity is conducted by structures of this kind.
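
A minimal sketch of how I understand the agent/K-line machinery (the class names and the block-stacking agents are mine, not Minsky’s): agents are small specialists, and a K-line is a snapshot of which agents happened to be active when something worked, so that the whole constellation can be switched back on later:

```python
# Toy Society of Mind: a K-line records the agents active at a moment of
# success and can reactivate that constellation later. Names illustrative.

class Agent:
    def __init__(self, name):
        self.name = name
        self.active = False

    def activate(self):
        self.active = True
        print(f"agent {self.name} active")

class KLine:
    """Snapshot of the agents that were active when a good result occurred."""
    def __init__(self, agents):
        self.members = [a for a in agents if a.active]

    def fire(self):
        for a in self.members:
            a.activate()

grasp, lift, balance = Agent("grasp"), Agent("lift"), Agent("balance")
grasp.activate(); lift.activate()              # active when the block got stacked
stack_a_block = KLine([grasp, lift, balance])  # record the winning constellation

for a in (grasp, lift, balance):
    a.active = False                           # later, in a fresh situation...
stack_a_block.fire()                           # ...grasp and lift come back on
```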

This is recognizably the way well-designed computer programs work, and it also bears a plausible resemblance to the way we do many things without thinking (when we move our arm we don’t think about muscle groups, but somehow somewhere they do get dealt with); but it isn’t a very intuitively appealing general model of human thought from an inside point of view. It naturally raises some difficulties about how to ensure that appropriate patterns of behaviour can be developed in novel circumstances. There are many problems which arise if we just leave the agents to slug it out amongst themselves – and large parts of the book are taken up with the interesting solutions Minsky has to offer. The real problem (as always) arises when we want to move out of the toy block world and deal with the howling gale of complexity presented by the real world.

Minsky’s solution is frames (often compared with Roger Schank’s similar strategy of scripts). We deal with reality through common sense, and common sense is, in essence, a series of sets of default assumptions about given events and circumstances. When we go to a birthday party, we have expectations about presents, guests, hosts, cakes and so on which give us a repertoire of appropriate behaviour to deploy and a context in which to respond to unfolding events. Alas, we know that common sense has so far proved harder to systematize than expected – so much so that these days the word ‘frame’ in a paper on consciousness is highly likely to be part of the dread phrase ‘frame problem’.
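
The essential mechanism of a frame, at least as I read it, is just defaults plus overrides; a sketch (the party slots are my own illustrative inventions):

```python
# A frame as a bundle of default assumptions that observation can override.

BIRTHDAY_PARTY = {"cake": True, "presents": True, "guests": "friends"}

def instantiate(frame, observations):
    """Start from the frame's defaults; let actual observations override them."""
    instance = dict(frame)
    instance.update(observations)
    return instance

party = instantiate(BIRTHDAY_PARTY, {"cake": False})  # surprise: no cake
print(party)  # {'cake': False, 'presents': True, 'guests': 'friends'}
```

The notorious difficulty, of course, is not filling slots but knowing which frame to invoke and when to abandon it – which is roughly where the dread frame problem comes in.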

The idea that mental activity is constituted by a society of agents who themselves are not especially intelligent is an influential one, and Minsky’s version of it is well-developed and characteristically trenchant. He has no truck at all with the idea of a central self, which in his eyes is pretty much the same thing as an homunculus, a little man inside your head. Free will, for him, is a delusion which we are unfortunately stuck with. This sceptical attitude certainly cuts out a lot of difficulties, though the net result is perhaps that the theory deals better with unconscious processes than conscious ones. I think the path set out by Minsky stops short of a real solution to the problem of consciousness and probably cannot be extended without some unimaginable new development. That doesn’t mean it isn’t a worthwhile exercise to stroll along it, however.

No Problem

Consciousness is not a problem, says Michael Graziano in an Atlantic piece that is short and combative. (Also, I’m afraid, pretty sketchy in places. Space constraints might be partly to blame for that, but can’t altogether excuse some sweeping assertions made with the broadest of brushes.)

Graziano begins by drawing an analogy with Newton and his theory of light. The earlier view, he says, was that white light was pure, and colour happened when it was ‘dirtied’ by contact with the surfaces of coloured objects. The detail of exactly how this happened was a metaphysical ‘hard problem’. Newton dismissed all that by showing first, that white light is in fact a mixture of all colours, and second, that our vision produces only an inaccurate and simplified model of the reality, with only three different colour receptors.

Consciousness itself, Graziano says, is also a misleading model in a somewhat similar way, generated when the brain represents its own activity to itself. In fact, to be clear, consciousness as represented doesn’t happen; it is a mistaken construct, the result of the good-enough but far from perfect apparatus bequeathed to us by evolution (this sounds sort of familiar).

We should be clear that it is really Hard Problem consciousness that is the target here, the consciousness of subjective experience and of qualia. Not that the other sort is OK: Graziano dismisses the Easy Problem kind of consciousness, more or less in passing, as being no problem at all…

These days it’s not hard to understand how the brain can process information about the world, how it can store and recall memories, how it can construct self knowledge including even very complex self knowledge about one’s personhood and mortality. That’s the content of consciousness, and it’s no longer a fundamental mystery. It’s information, and we know how to build computers that process information.

Amazingly, that’s it. Graziano writes in an impatient tone; I have to confess to a slight ruffling of my own patience here; memory is not hard to understand? I had the impression that there were quite a number of unimpeachably respectable scientists working on the neurology of memory, but maybe they’re just doing trivial detail, the equivalent of butterfly collecting, or who knows, philosophy? …we know how to build computers… You know it’s not the 1980s any more? Yet apparently there are still clever people who think you can just say that the brain is a computer and that’s not only straightforwardly true, but pretty much a full explanation? I mean, the brain is also meat, and we know how to build tools that process meat; shall we stop there and declare the rest to be useless metaphysics?

‘Information’, as we’ve often noted before, is a treacherous, ambiguous word. If we mean something akin to data, then yes, computers can handle it; if we mean something akin to understanding, they’re no better than meat cleavers. Nothing means anything to a computer, while human consciousness reads and attributes meanings with prodigal generosity, arguably as its most essential, characteristic activity. No computer was ever morally responsible for anything, while our society is built around the idea that human beings have responsibilities, rights, and property. Perhaps Graziano has debunking arguments for all this that he hasn’t leisure to tell us about; the idea that they are all null issues with nothing worthwhile to be said about them just doesn’t fly.

Anyway, perhaps I should keep calm because that’s not even what Graziano is mainly talking about. He is really after qualia, and in that area I have some moderate sympathy with him; I think it’s true that the problem of subjective experience is most often misconceived, and it is quite plausible that the limitations of our sensory apparatus and our colour vision in particular contribute to the confusion. There is a sophisticated argument to be made along these lines: unfortunately Graziano’s isn’t it; he merely dismisses the issue: our brain plays us false and that’s it. You could perhaps get away with that if the problem were simply about our belief that we have qualia; it could be that the sensory system is just misinforming us, the way it does in the case of optical illusions. But the core problem is about people’s actual direct experience of qualia. A belief can be wrong, but an experience is still an experience even if it’s a misleading one, and the existence of any kind of subjective experience is the real core of the matter. Yes, we can still deny there is any such thing, and some people do so quite cogently, but to say that what I’m having now is not an experience but the mere belief that I’m having an experience is hard and, well, you know, actually rather metaphysical…

On examination I don’t think Graziano’s analogy with Newton works well. It’s not clear to me why the ‘older’ view is to be characterised as metaphysical (or why that would mean it was worthless). Shorn of the emotive words about dirt, the view that white light picks up colour from contact with coloured things, the way white paper picks up colour from contact with coloured crayons, seems a reasonable enough scientific hypothesis to have started with. It was wrong, but if anything it seems simpler and less abstract than the correct view. Newton himself would not have recognised any clear line between science and philosophy, and in some respects he left the true nature of light a more complicated matter, not fully resolved. His choice of particles over waves has proved to be an over-simplification and remains the subject of some cloudy ontology to this day.

Worse yet, if you think about it, it was Newton who first separated the two realms: colour as it is in the world and colour as we experience it. This is the crucial distinction that opened up the problem of qualia, first recognisably stated by Locke, a fervent admirer of Newton, some years after Newton’s work. You could argue, therefore, that if the subject of qualia is a mess, it is a mess introduced by Newton himself – and scientists shouldn’t castigate philosophers for trying to clear it up.

Jochen’s Intentional Automata

Jochen’s paper Von Neumann Minds: Intentional Automata has been published in Mind and Matter.

Intentionality is meaningfulness, the quality of being directed at something, aboutness. It is in my view one of the main problems of consciousness, up there with the Hard Problem but quite distinct from it; but it is often under-rated or misunderstood. I think this is largely because our mental life is so suffused with intentionality that we find it hard to see the wood for the trees; certainly I have read more than one discussion by very clever people who seemed to me to lose their way half-way through without noticing and end up talking about much simpler issues.

That is not a problem with Jochen’s paper which is admirably clear.  He focuses on the question of how to ground intentionality and in particular how to do so without falling foul of an infinite regress or the dreaded homunculus problem. There are many ways to approach intentionality and Jochen briefly mentions and rejects a few (basing it in phenomenal experience or in something like Gricean implicature, for example) before introducing his own preferred framework, which is to root meaning in action: the meaning of a symbol is, or is to be found in, the action it evokes. I think this is a good approach; it interprets intentionality as a matter of input/output relations, which is clarifying and also has the mixed blessing of exposing the problems in their worst and most intractable form. For me it recalls the approach taken by Quine to the translation problem – he of course ended up concluding that assigning certain meanings to unknown words was impossible because of radical under-determination; there are always more possible alternative meanings which cannot be eliminated by any logical procedure. Under-determination is a problem for many theories of intentionality and Jochen’s is not immune, but his aim is narrower.

The real target of the paper is the danger of infinite regress. Intentionality comes in two forms, derived on the one hand and original or intrinsic on the other. Books, words, pictures and so on have derived intentionality; they mean something because the author or the audience interprets them as having meaning. This kind of intentionality is relatively easy to deal with, but the problem is that it appears to defer the real mystery to the intrinsic intentionality in the mind of the person doing the interpreting. The clear danger is that we then go on to defer the intentionality to an homunculus, a ‘little man’ in the brain who again is the source of the intrinsic intentionality.

Jochen quotes the arguments of Searle and others who suggest that computational theories of the mind fail because the meaning and even the existence of a computation is a matter of interpretation and hence without the magic input of intrinsic intentionality from the interpreter fails through radical under-determination. Jochen dramatises the point using an extension of Searle’s Chinese Room thought experiment in which it seems the man inside the room can really learn Chinese – but only because he has become in effect the required homunculus.

Now we come to the really clever and original part of the paper; Jochen draws an analogy with the problem of how things reproduce themselves. To do so it seems they must already have a complete model of themselves inside themselves… and so the problem of regress begins. It would be OK if the organism could scan itself, but a proof by Svozil seems to rule that out because of problems with self-reference. Jochen turns to the solution proposed by the great John Von Neumann (a man who might be regarded as the inventor of the digital computer if Turing had never lived). Von Neumann’s solution is expressed in terms of a two-dimensional cellular automaton (very simplistically, a pattern on a grid that evolves over time according to certain rules – Conway’s Game of Life surely provides the best-known examples). By separating the functions of copying and interpretation, and distinguishing active and passive states, Von Neumann managed to get round the self-reference problem successfully.
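
For anyone who hasn’t met cellular automata, here is a minimal implementation of Conway’s Game of Life, the example mentioned above – vastly simpler than Von Neumann’s 29-state self-reproducing automaton, but the same grid-plus-local-rule idea:

```python
from collections import Counter

def step(live):
    """One Game of Life generation. live is a set of (x, y) cells; a live
    cell survives with 2 or 3 live neighbours, a dead cell is born with 3."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider shape, shifted by (1, 1)
```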

Now by importing this distinction between active and passive into the question of intentionality, Jochen suggests we can escape the regress. If symbols play either an active or a passive role (in effect, as semantics or as syntax) we can have a kind of automaton which, in a clear sense, gives its own symbols their interpretation, and so escapes the regress.
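
The same dual role is what makes a quine – a program that prints its own source – possible, and it may be the quickest way to see the trick in action (my illustration of the general idea, not anything from the paper): the string s is used once actively, as the template that drives execution, and once passively, as data that is merely copied.

```python
# The same string plays two roles: as the template that drives the
# printing (active, like Von Neumann's interpreted description) and as
# the data quoted verbatim into the output (passive, the copied description).
# Run on their own, these two lines print themselves back exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```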

This is an ingenious move. It is not a complete solution to the problem of intentionality (I think the under-determination monster is still roaming around out there), but it is a novel and very promising solution to the regress. More than that, it offers a new perspective which may well offer further insights when fully absorbed; I certainly haven’t managed to think through what the wider implications might be, but if a process so central to meaningful thought truly works in this unexpected dual way it seems there are bound to be some. For that reason, I hope the paper gets wide attention from people whose brains are better at this sort of thing than mine…

Ungender me here

Male and female brains are pretty much the same, but male and female behaviour is different. It turns out that the same neural circuitry exists in both, but is differently used.

A word of caution. We are talking about mice, in the main: those obliging creatures who seem ready to provide evidence to back all sorts of fascinating theories that somehow don’t transfer to human beings. And we’re also talking specifically about parental behaviour patterns; it seems those are rather well conserved between species – up to a point – but we shouldn’t generalise recklessly.

Catherine Dulac of Harvard explains the gist in this short Scientific American piece. A particular network of neurons in the hypothalamus was observed to be active during nurturing parental behaviour by females; by genetic engineering (amazing what we can do these days) those neurons were edited out of some females who then showed no caring behaviour towards infants. Meanwhile a group of males in which those neurons were stimulated (having been made light-sensitive by even more remarkable genetic manipulation) did show nurturing behaviour.

For male mammals it seems the norm is to kill strange infants on sight (I did say we should be careful about extrapolating to human beings); another set of neurons in the hypothalamus proves to be associated with this behaviour in just the same kind of way as the ones associated with nurturing behaviour.

One of the interesting things here is that both networks exist in both sexes; no-one knows at the moment why one is normally active in females and the other in males. If we were talking about human beings we should be tempted to attribute the difference to cultural factors (I hope that by now nobody is going to be astonished by the idea that different cultural influences could lead to different patterns of physical activity in the brain); it doesn’t seem very plausible that that could be the case for mice. It goes without saying that to identify a new hidden factor which sets certain gender roles in mice would inevitably trigger a highly-charged discussion of the possible equivalent in human beings.

So much for the proper research. Could we dare to venture on the irresponsibly speculative hypothesis that men and women habitually think somewhat differently but that each is fully capable of thinking like the other? I shouldn’t care to advance that thesis and the whole topic of measuring the quality and style of thought processes is deeply fraught scientifically and beset with difficulty philosophically.

There is, though, one rather striking piece of evidence to suggest that men and women can enter fully into each other’s minds: novels. Human consciousness is often depicted in novels; indeed the depiction may be the central feature or even pretty much the whole of the enterprise. Jane Austen, who arguably played a major role in making consciousness the centre of narrative, never wrote a scene in which two men converse in the absence of women, allegedly because she considered she could have no direct experience of how men talked to one another in those circumstances. Moreover, while she was exceptionally skilful at discreetly incorporating a view from inside her heroines’ heads, she never did the same for Darcy or Mr Knightley.

But others have never been so restrained; male authors have depicted the inward world of females and vice versa with very few complaints; that rather suggests we can swap mental gender roles without difficulty.

But are we sure? I suppose it could be argued that as a man I have no more access to the minds of other men than to those of women, so to a degree I actually have to take it on trust that the male mind is accurately depicted in novels. In some cases, certain allegedly typical male thought patterns depicted in books (Nick Hornby choosing a routine football match over a good friend’s wedding) are actually rather hard for me to enter into sympathetically. For that matter I recall the indignant rebuttal I got from a female fan when I suggested that Robert Heinlein’s depiction of the female mind might be slightly off the mark. Perhaps, then, none of us knows anything about the matter for sure in the end. Still I think nil humanum a me alienum puto (I think nothing human alien to me) is a good motto and, I’m slightly encouraged to think, an attainable aspiration.