Spaun

The first working brain simulation? Spaun (Semantic Pointer Architecture Unified Network) has attracted a good deal of interested coverage.

Spaun is based on the Nengo neural simulator. It basically consists of an eye and a hand: the eye is presented with a series of pixelated images of digits (on a 28 x 28 grid), and the hand provides output by actually drawing its responses. With this simple set-up Spaun is able to perform eight different tasks, ranging from copying the digit displayed to providing the correct continuation of a number sequence in the manner of certain IQ tests. Its performance within this limited repertoire is quite impressive, and the fact that it fails in a few cases actually makes it resemble a human brain even more closely. It cannot learn new tasks on its own, but it can switch between the eight at any time without impairing its performance.

Spaun seems to me an odd mixture of approaches; in some respects it is a biologically realistic simulation, while in others its structure has simply been designed to work. It runs on 2.5 million simulated neurons, far fewer than the numbers used by purer simulations like Blue Brain; the neurons are designed to behave in a realistic way, although they are relatively standardised and stereotyped compared with their real biological counterparts. Rather than being a simple mass of neurons or a copy of actual brain structures, they are organised into an architecture of modules set up to perform discrete tasks, supply working memory, and so on. If you wanted to be critical you could say that this mixing of simulation and design makes the thing a bit of a kludge, but commentators have generally (and rightly, I think) not worried about that too much. It does seem plausible that Spaun is both sufficiently realistic and sufficiently effective for us to conclude that it really is demonstrating in practice the principles by which neural tissue supports cognition – even if a few of the details are not quite right.

Interesting in this respect is the use of semantic pointers. Apparently these are compressions of multidimensional vectors expressed by spiking neurons; it looks as though they may provide a crucial bridge between the neuronal and the usefully functional, and they are the subject of a forthcoming book, which should be interesting.
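Out of curiosity about those compressed vectors, here is a rough sketch of the kind of binding operation the Semantic Pointer Architecture is built on. This is a hedged illustration, not Spaun's actual code: it assumes HRR-style circular convolution (which Eliasmith's architecture draws on), the vectors are made-up random ones, and the names are my own.

```python
import numpy as np

def bind(a, b):
    """Combine two vectors by circular convolution (HRR-style binding).
    Crucially, the result has the same dimensionality as the inputs."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(c, b):
    """Approximately recover a from c = bind(a, b) by binding with
    the involution (approximate inverse) of b."""
    b_inv = np.concatenate(([b[0]], b[:0:-1]))
    return bind(c, b_inv)

rng = np.random.default_rng(0)
d = 512
a = rng.normal(0, 1 / np.sqrt(d), d)  # random unit-ish vectors
b = rng.normal(0, 1 / np.sqrt(d), d)

c = bind(a, b)        # a compressed combination, still d-dimensional
a_hat = unbind(c, b)  # noisy reconstruction of a

sim = a @ a_hat / (np.linalg.norm(a) * np.linalg.norm(a_hat))
print(sim > 0.5)  # well above chance: the content survives the compression
```

The interesting property for present purposes is that binding and unbinding never grow the dimensionality, which is roughly what lets a fixed population of neurons juggle structured representations.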

What’s the significance of Spaun for consciousness? Well, for one thing it makes a significant contribution to the perennial debate on whether or not the brain is computational. There is a range of possible answers which go something like the following.

  1. Yes, absolutely. The physical differences between a brain and a PC are not ultimately important; when we have identified the right approach we’ll be able to see that the basic activity of the brain is absolutely standard Turing-style computations.
  2. Yes, sort of. The brain isn’t doing computation in quite the way silicon chips do it, but the functions are basically the same, just as a plane doesn’t have flapping feathery wings but is still doing the same thing – flying – as a bird.
  3. No, but. What the brain does is something distinctively different from computation, but it can be simulated or underpinned by computational systems in a way that will work fine.
  4. No, the brain isn’t doing computations and what it is doing crucially requires some kind of hardware which isn’t a computer at all, whether it’s some quantum gizmo or something with some other as yet unidentified property which biological neurons have.

The success of Spaun seems to me to lend a lot of new support to position 3: to produce the kind of cognitive activity which gives rise to consciousness you have to reproduce the distinctive activity of neurons – but if you simulate that well enough by computational means, there’s no reason why a sufficiently powerful computer couldn’t support consciousness.

33 thoughts on “Spaun”

  1. The question is: they simulate neurons, but do they actually know how neurons work? Concerning IBM’s current experiment in this direction, this article argues that nobody does: http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-brain-simulation-compass.html. A choice quote:

    For more than twenty-five years, scientists have known the exact wiring diagram of the three hundred and two neurons in the *C. elegans* roundworm, but in at least half a dozen attempts nobody has yet succeeded in building a computer simulation that can accurately capture the complexities of the simple worm’s nervous system. As the N.Y.U. neuroscientist Tony Movshon notes, “Merely knowing the connectional architecture of a nervous system is not enough to deduce its function.”

  2. It is interesting to compare the S.P.A.U.N. brain model (*Science*, 2012, 338) with the model detailed in *The Cognitive Brain* (MIT Press 1991) in terms of their different theoretical approaches to understanding how the brain works. SPAUN models 2.5 million neurons but offers no biologically plausible mechanism for setting the synaptic connection weights among these millions of neurons. Instead, in SPAUN, the connection weights are determined by a standard mathematical procedure for least-squares optimization to perform coding and decoding. Since patterns of synaptic transfer weights are absolutely critical to the performance of cognitive brain mechanisms, this is, in my view, a serious flaw. In contrast, the structure and dynamics of relevant brain mechanisms that are modeled in *The Cognitive Brain* include a biologically plausible mechanism for forming the necessary patterns of synaptic transfer weights for learning, recognition, and action. In this theoretical approach, the object is not to model as many neurons as possible, but rather to model the minimal credible neuronal mechanisms that can perform the essential cognitive tasks. Description of this cognitive brain model and some tests of its competence can be seen in these publications:

    http://people.umass.edu/trehub
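For readers wondering what that “standard mathematical procedure” looks like in practice, here is a toy version of least-squares decoding in the general style of Eliasmith’s Neural Engineering Framework. The rectified-linear tuning curves and all parameters are invented for illustration; this is a sketch of the technique, not the actual Nengo implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_samples = 50, 200

# Hypothetical rectified-linear "tuning curves": each neuron responds
# to a 1-D stimulus x with its own gain, preferred direction and bias.
x = np.linspace(-1.0, 1.0, n_samples)
gains = rng.uniform(0.5, 2.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
A = np.maximum(0.0, gains * (encoders * x[:, None]) + biases)

# Ridge-regularised least squares: solve for decoding weights d such
# that the population activity A reconstructs the represented value x.
# Note the weights are computed in one shot, not learned synaptically --
# which is exactly the point of the criticism above.
reg = 0.1
d = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ x)

x_hat = A @ d
rmse = np.sqrt(np.mean((x - x_hat) ** 2))
print(rmse < 0.1)  # the decoded estimate tracks x closely
```

The same least-squares machinery extends to decoding arbitrary functions of x, which is how such frameworks wire populations together to compute.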

  3. There’s its significance vis-à-vis reverse engineering human cognition, then there’s its significance vis-à-vis engineering future robotics. No matter what you think of the former, the latter is utterly revolutionary. This means that something like ‘I, Robot’ is very likely right around the corner.

    I find semantic pointers very interesting, and yet I’m dreadfully curious about the *details.* I’ve been researching wavelet transforms and random projections as a way to explain how various details of BBT might work (on BBT, consciousness appears the way it does because of the ‘curse of dimensionality’), so when I first read Eliasmith’s paper I kept wondering what mechanisms they used to collapse and expand the dimensionality of the information at various stages in SPAUN’s function. Given their centrality, you would think they would be forthcoming about the details. Are they just wavelet transforms (to render the information sparse) combined with random projections (to compress the dimensionality)? In that case, they’re no big whup. Paul Thagard apparently uses them in his new book to provide a naturalistic account of concepts – so I have that on order!
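On the compression question: whatever Spaun actually does, the generic behaviour of a random projection is easy to demonstrate. The sketch below (all parameters invented for illustration) shows pairwise distances surviving a ten-fold reduction in dimensionality, the Johnson-Lindenstrauss effect that makes random projections attractive for this kind of job.

```python
import numpy as np

rng = np.random.default_rng(2)
d_high, d_low, n_points = 1000, 100, 20

X = rng.normal(size=(n_points, d_high))  # points in the high-dim space

# Random projection: a Gaussian matrix scaled by 1/sqrt(d_low) keeps
# pairwise distances approximately intact (Johnson-Lindenstrauss).
P = rng.normal(size=(d_high, d_low)) / np.sqrt(d_low)
Y = X @ P  # compressed to a tenth of the dimensions

def pairwise_dists(M):
    """All pairwise Euclidean distances between the rows of M."""
    diff = M[:, None, :] - M[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

iu = np.triu_indices(n_points, k=1)
ratio = pairwise_dists(Y)[iu] / pairwise_dists(X)[iu]
print(ratio.mean())  # close to 1: distances are roughly preserved
```

Whether Spaun's semantic pointer compression amounts to anything like this is exactly the detail the comment is asking after; the point here is only that the generic technique is, as the commenter suspects, no big whup.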

  4. Scott, if you want a naturalistic account of concepts, read *The Cognitive Brain* (1991), Ch. 3, with special attention to the distinction between *concept* and *category* on page 46. You can read it online here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter3.pdf

    On semantic pointers, read “Interaction between analogical and symbol/token representation” on page 325 and Fig. 7, here:

    http://people.umass.edu/trehub/YCCOG828%20copy.pdf

  5. Arnold, the “patterns of synaptic transfer weights” cannot be simulated easily (as sequences of computational steps) because of their parallel nature and combinatorial complexity. A limited number of them can be pre-computed separately, but that makes the whole cognitive feature externally predetermined and kind of “dumb”.
    What do you think happens in a newborn baby brain? Are those patterns already there, so that the cognition mechanism only reinforces whatever happens to get associated with a specific external stimulus? Or do those patterns get created in a blank medium by another mechanism (like some sort of mind getting booted up, similar to an OS in a computer)?

  6. Doru: “What do you think happens in a newborn baby brain?”

    New-born babies do not arrive with the synaptic transfer patterns in the brain that they will later develop via learning. As I write in *The Cognitive Brain*, the transfer weight of each synapse in the babies’ synaptic matrices will be determined according to the learning equation 2.3. See “Learning and Pattern Recognition” on page 39, here:

    http://people.umass.edu/trehub/thecognitivebrain/chapter3.pdf

  7. Arnold, on page 38 it says: “Given any arbitrary input, the class cell coupled with the filter cell having the highest product sum of afferent axon activity and corresponding transfer weights will fire first and inhibit all competing class cells.”
    From my engineering background in image processing, this resembles a popular computer vision technique used for image pattern recognition, employing a bi-dimensional digital signal filter. Without getting into details, the “corresponding transfer weights” seem to represent the filter coefficients and I am curious to find out the exact nature of them.
    Are they numbers? Like resistor values in a look-up table? Are they created by another mechanism, or do they exist prior and just get randomly selected?
    These transfer weights are also modified through learning and training. This is hard to imagine in a newborn brain where there is no initial distribution of the “transfer weights” into a coherent map. Is the mapping process seeded genetically at the very low level of our brain (the reptilian level), and does it grow its way up the hierarchy all the way to the neocortex?
    On page 51 it says: “Since the kind of neuronal priming is diffuse, which particular filter cells will happen to fire in the presence of a stimulus and thus undergo a change in their distribution of synaptic transfer weights is a matter of chance (though the specific input-output mapping, once established, becomes a systematic property of the matrix).” Arnold, this is the reason I became a big fan of your work. It implies that the fundamental operation of the brain is “natural selection”, not “artificial creation” as is assumed by the AI community, including Ray Kurzweil in this recent interview:
    http://www.booktv.org/Watch/13999/After+Words+Ray+Kurzweil+How+to+Create+a+Mind+The+Secret+of+Human+Thought+Revealed+hosted+by+Ingrid+Wickelgren+Scientific+American+Mind+Editor.aspx

  8. Arnold wrote:
    “Since patterns of synaptic transfer weights are absolutely critical to the performance of cognitive brain mechanisms, this is, in my view, a serious flaw.”

    Damn, I wish I had read this post before the Spaun team did their AMA on Reddit. This is a very good point.

  9. Doru: “Without getting into details, the “corresponding transfer weights” seem to represent the filter coefficients and I am curious to find out the exact nature of them. Are they numbers? Like resistor values in a look-up table? Are they created by another mechanism, or do they exist prior and just get randomly selected?”

    The transfer weight of a synapse is a value that represents the relative efficacy of the macromolecular structure at the synapse to raise the relative magnitude of post-synaptic potential in the target dendrite given any fixed level of pre-synaptic excitation. For example, look at *The Cognitive Brain*, Fig. 12.3, here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

    In this figure, each horizontal line represents the dendrite of a particular filter cell, and the amplitude of each vertical line on the dendrite represents its relative synaptic transfer weight. The overall pattern of transfer weights on each dendrite determines the filtering property of that particular filter cell.
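A crude way to see the scheme in code: treat each filter cell's dendritic weight pattern as a template, score every cell by the product sum of afferent activity and transfer weights, and let the best match win. The weight patterns below are random numbers invented purely for illustration (and signed, unlike real synaptic efficacies); nothing here is taken from the book itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_afferents, n_filter_cells = 128, 5

# Hypothetical transfer-weight patterns: each row plays the role of one
# filter cell's dendrite, each entry a relative synaptic transfer weight
# (the vertical lines of Fig. 12.3). Random templates, for illustration.
W = rng.normal(size=(n_filter_cells, n_afferents))

# A stimulus resembling filter cell 2's stored pattern, plus noise.
stimulus = W[2] + rng.normal(0.0, 0.1, n_afferents)

# "Product sum of afferent axon activity and corresponding transfer
# weights": the class cell coupled with the highest-scoring filter cell
# fires first and, in the model, inhibits all competing class cells.
scores = W @ stimulus
winner = int(np.argmax(scores))
print(winner)  # index of the template the stimulus was built from
```

The winner-take-all step is what turns a graded pattern of dendritic products into a discrete classification.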

  10. Peter:
    “The success of Spaun seems to me to lend a lot of new support to position 3: to produce the kind of cognitive activity .. distinctive activity of neurons – but if you simulate that well enough by computational means, there’s no reason why a sufficiently powerful computer couldn’t support consciousness.”

    A common enough empirical claim for which there is precisely zero evidence. The problem is the latter sentence beginning “there’s no reason”.

    There’s no reason to suggest that there isn’t a big block of temperature-resistant cheese in the centre of alpha centauri, which large alien rats eat. That doesn’t mean it is even remotely likely that there is a big block of heat-resistant cheese at the centre of alpha centauri : but that is because we have no reason to think there is a big block of cheese at the centre of alpha centauri – because we know how stars work. We do not know – do not even begin to know – how brains work. We don’t even know how the brains of primitive organisms like worms work, let alone more co-operative, communication-cultured organisms like bees. Human brains aren’t even on the spectrum.

    So to make a claim that you see no reason why a computer can’t cause consciousness is to make a claim that a) you know how brains work (empirically impossible, as no one does) and b) specifically that you know how consciousness arises in brains – also empirically impossible, as no one does. That is not to say that this will always be the case, but it’s certainly the case now.

  11. I think there are a lot of reasons to suggest that “there isn’t a big block of temperature-resistant cheese in the centre of alpha centauri, which large alien rats eat.”

    And it’s quite conceivable that a powerful computer could support consciousness if it supported a quite conceivable future robotic creature that could make independent choices based on extensive observations of its surroundings, which then would require the ability for abstract analysis that in turn would and could require a high degree of observational awareness.

  12. john davey: “We do not know – do not even begin to know – how brains work.”

    If you believe that to *know something* is to have *certain knowledge* about something, then your claim would seem well founded to you. But I don’t think that this belief is defensible. Science (or any other human endeavor) does *not* have certain knowledge about how any natural thing in the world works. The best that science can do is to make claims that are well supported by empirical evidence. With this caveat in mind, I disagree with your contention that we “do not even begin to know” how brains work. In *The Cognitive Brain* (MIT Press 1991), a large scale theoretical model of the brain is detailed and a wide-ranging body of empirical evidence is presented which lends very strong support to this neuronal theory of how the brain works. So I would say that we do know, within the norms of science, how the brain works. Might this theoretical model be wrong? Certainly. All scientific theories are provisional and can be superseded by a better theory.

  13. “I think there are a lot of reasons to suggest that “there isn’t a big block of temperature-resistant cheese in the centre of alpha centauri, which large alien rats eat.”

    Have you been there ? How do you know ?

    It’s conceivable that there is a big block of temperature-resistant cheese at the centre of alpha centauri. The last sentence is proof of it. On the other hand, there is no reason to believe that there is one, given our knowledge of stars.

    Conceivability is infinite : reasonable belief is another thing entirely. Bertrand Russell wrote a famous article about a teapot orbiting within the solar system that acted as a god upon the earth : at the time, it was impossible to refute and was given by Russell as an example of how religious belief was conceivable but not reasonable, as it was not based upon evidence.

    The belief that computers can be conscious is conceivable but not reasonable, as it is based upon no evidence – none whatsoever. If you think there is, then show me. Show me how phenomenological consciousness can be caused by computation – you won’t, because nobody has. Nobody has found the Great Teapot either, and neither will you.

    “And it’s quite conceivable that a powerful computer could support consciousness if it supported a quite conceivable future robotic creature that could make independent choices based on extensive observations of its surroundings, which then would require the ability for abstract analysis that in turn would and could require a high degree of observational awareness.”

    Yes, and it’s conceivable that there is a big block of cheese at the centre of alpha centauri. Glad you agree.

  14. “So I would say that we do know, within the norms of science, how the brain works”

    With all due respect Arnold, I don’t see how a publication – from 1991, an era that predates some of the best work in neuroscience – can be held as proof of this.

    I suspect you may have worked on it, and bully for you if you think that’s proof.

    But you are in disagreement with the entire body of neuroscience and just about all scholastic opinion as far as I am aware. You are going to have to back up your claims with some big names, and the Nobel Prizes these guys will have won – one of which must be for secrecy because apparently you are the only guy who knows about it.

  15. john davey: “But you are in disagreement with the entire body of neuroscience and just about all scholastic opinion as far as I am aware.”

    I’m amazed that you are aware of the entire body of neuroscience and all scholastic opinion. I would be very much interested in the disagreements that you should be able to detail.

  16. John davey, if I say there are reasons to suggest that there isn’t a big block of cheese in some ridiculous area, etc., that’s hardly the same as saying that its completely inconceivable. Your initial analogy that compared this in-perceivable possibility with that of the perceivable internal functions of our brains was completely inapt.
    Russell’s analogy however was apt. But then he understood that analogy was “a comparison between two things, typically on the basis of their structure.” And what we think we know of the structure of a star system is hardly analogous to what we think we know of the structure of a brain.
    So no, we don’t agree, and I’m glad.

  17. “John davey, if I say there are reasons to suggest that there isn’t a big block of cheese in some ridiculous area, etc., that’s hardly the same as saying that its completely inconceivable.”

    Taken literally the above statement means you seem to agree with me. I think. If I understand your English, which is rare. The above english says “if I give reasons for a negative, that’s not the same thing as saying its inconceivable”. In other words, saying that there are reasons not to believe that there is a big block of cheese in the centre of alpha centauri does not make such cheese inconceivable – which is what I said. I presume you didn’t mean to say that.

    “your initial analogy that compared this in-perceivable possibility with that of the perceivable internal functions of our brains was completely inapt.”

    Translation anybody ? “In-perceivable” ? Imperceivable perhaps (meaning imperceptible) ? Ah. I take it you meant “inconceivable”. You see how difficult it can be to communicate with you Roy ? We need a constant secondary translation ?

    I didn’t compare the likelihood of cheese being at the centre of a star with that of ‘perceivable internal functions of the brain’ – again a statement which we must translate – presumably as ‘conceivable internal functions of the brain’……

    I compared the likelihood of cheese being at the centre of a star with the likelihood that computer processes are mental processes. If you disagree, point me at the scientific evidence that demonstrates that brain processes are computational processes. In the absence of this evidence – or anything even like it – there is simply no reason to believe it’s true, and it falls into the category of conceivability only. And that makes sense – anything can be a computer if I want it to be – the sun, the moon, the atoms in the wallpaper in my bedroom. The world would be a continuum of mental processes if the computationalist school of mental processes was true.

    The analogy is not an analogy of objects but of evidence and likelihood. Russell says, “the problem with the religious story is the fact that people believe in it because it’s plausible, not because it’s based upon evidence”. Religion IS plausible, but belief in it cannot be based upon reason, because there is no shred of evidence to support it. The same can be said of the belief that mental processes are the same as computational processes. It’s a belief that is plausible but not based upon evidence, hence not reasonable. It’s as far fetched as any other conceivable but totally unproved hypothesis.

  18. Arnold

    http://blogs.scientificamerican.com/observations/2012/04/01/neuroscientists-we-dont-really-know-what-we-are-talking-about-either/

    I’m not going to provide a list of eminent, proven academics from Steven Pinker to Noam Chomsky to Susan Greenfield to David Hubel to whoever, all of whom point out the obvious – namely that we know next to nothing about the brain in 2012, some 22 years after you crafted your magnum opus, and presumably more than enough time to convince these people of its merits.

    I tell you what Arnold, you provide ME with a list of people (apart from yourself) who actually think the brain is a known entity and who have a proven track record of scholarship in the subject.

  19. John Davey
    Inperceivable is a legitimate term (the spell checker hyphenated it, and there’s no error corrector here); but you’re apparently too ignorant to look it up. And your silly attempt to justify your poor analogy is just as ignorant.
    “I compared the likelihood of cheese being at the centre of a star with the likelihood that computer processes are mental processes.”
    But in fact the mental processes do compute, even though the computer processes can’t run themselves without the help of our mentality.
    In other words computer processes are as similar to mental processes as we can presently make them.
    In short, there are computational aspects to the functions of our brains. And there is no possibility of cheeses operating as a functional aspect of any stars.

    I also note that in attacking Arnold, your arguments are just as silly. Steven Pinker with his neoDarwinistic bent does know next to nothing about the brain, but in part because the things he says he DOES know are mostly wrong. Otherwise “knowing next to nothing” (a la Chomsky et al) means that what we know is relatively nothing compared to what there is left to know.
    Which in your case may also be next to impossible to ever know.

  20. Arnold,
    Davey was just kidding you it seems, as the article that he cited had the following at the end:
    “Disclaimer: This is a parody. None of the quotes are real, nor are the scientists. Happy April Fools’ Day from Scientific American!”

  21. “Inperceivable is a legitimate term”

    there is no such word. You know all about dictionaries I know, so you evidently haven’t found it anywhere, and have resorted to sad comments about your evidently inept spell checker.

    “Arnold,Davey was just kidding you it seems, as the article that he cited had the following at the end:
    “Disclaimer: This is a parody. None of the quotes are real, nor are the scientists. Happy April Fools’ Day from Scientific American!””

    I’m sure Arnold didn’t need your Holmes-like assistance here. It’s called ‘irony’. Maybe you can look up ‘irony’ on your spell checker !

    “In other words computer processes are as similar to mental processes as we can presently make them.In short, there are computational aspects to the functions of our brains. And there is no possibility of cheeses operating as a functional aspect of any stars. ”

    So? There are electrical characteristics of a brain. Must we conclude it is an oscilloscope ? Or a hoover or a television ? It is also mostly water. Do we compare it to a swimming pool ?

    There is no evidence that mental processes are computational. That humans are capable of computation does not make them computers. They can synthesise milk in a laboratory, but that doesn’t make them cows. Most importantly, humans can synthesise and create such things as Computational Theory, and there is no proof – none whatsoever – that creative acts can be reduced to formal logic. People have been trying to prove that inductive logic is formal (hence programmatic) for at least 100 years, and all have failed miserably. Creativity is a psychological, not logical, act.

    “Otherwise “knowing next to nothing” (a la Chomsky et al) means that what we know is relatively nothing compared to what there is left to know.”

    Well, compared to other fields, such as physics, knowledge of the brain is more or less negligible. I wouldn’t say we know nothing about physics, because we do.

  22. “John, ad hominem recitations are not substantive arguments.”

    Well, give me the names I’m looking for and I’ll take what you say seriously. I’m not an expert in the field, so I must rely on the testimony of the acknowledged experts in the field and trust them. Or trust you. What would you do in my position ?

  23. John, remember that I rejected your claim that “We do not know — do not even begin to know — how brains work”. If you were to acquaint yourself with the neuroscience literature, I don’t think you would make such a broad dismissive statement. Moreover, I don’t think that those who do can be taken seriously. In my own case, fifty years of research on the cognitive brain in the clinic, in the laboratory, and in theoretical analysis, should suggest that my opinion is not based on a shallow understanding of the brain. Of course, this is not to say that we now have a standard model of the workings of the brain as we do in quantum physics. But we do have a substantial base of empirical findings on which to base our candidate theories.

    If you are looking for the names of some experts in the field, see the list of invited participants in a workshop summarized in this publication:

    http://www.theassc.org/files/assc/Understanding_C.pdf

    As for the retinoid theory of consciousness, Peter hosted a lively discussion in *Conscious Entities*, here:

    http://www.consciousentities.com/?p=1016

  24. John Davey, if all you can do is rankle about my use of an unfamiliar word, then you’ve lost any argument right there. (Google the word next time before you shoot off your usual bile.)
    But I caught you on the April Fool aspect of the article that you took seriously, didn’t I.
    Irony indeed.
    You say, “There is no evidence that mental processes are computational.”
    Google that one too. For example, someone of prominence wrote that there’s a computer in every living cell.
    Irony seems to follow you around.

  25. Roy

    “John Davey, if all you can do is rankle about my use of an unfamiliar word, then you’ve lost any argument right there.”

    Already have – zero results. You have made history: the first person in the world to use the word “inperceivable”.

    “But I caught you on the April Fool aspect of the article that you took seriously, didn’t I.”

    It was meant to be humour to indicate the general point. It’s called ‘irony’ and I wish I’d never bothered. Such moments are always somewhat ruined by the need to spell it out in small letters, but that’s showbiz I guess.

    ” For example, someone of prominence wrote that there’s a computer in every living cell.”

    So there is no evidence that mental processes are computational then.

  26. Arnold

    “Of course, this is not to say that we now have a standard model of the workings of the brain as we do in quantum physics”

    I think we are talking at cross purposes here, or at least have differing opinions on what constitutes a working knowledge of something. If so, then perhaps we don’t disagree as much as I thought.

    There of course is a huge amount of empirical evidence in the field of neuroscience, but what is clearly lacking is an overall perspective, a grand theory. Until that point is reached, or even a first crack at it yields predictions that provide hitherto unknown knowledge, then as far as I am concerned the knowledge of how the brain works is minimal, without necessarily being de minimis. You have been in the field for many years and have seen the body of evidence grow, so you may, naturally, have a different view.

    From what I can make out, there is not even that much understanding of basic animal brains – worms with countable numbers of neurons, and far more sophisticated brains in insects for example. Unless these are understood pretty comprehensively I don’t see how a massively more complicated object like a human brain can be tackled with confidence.

    It just strikes me as being very, very hard – much harder than physics for example, where the objects under examination, like atoms, are pretty simple. I studied statistical physics for a time, and that can turn to mush when the systems get very large – and a brain is a huge system, with more than just the physics to think about.

    John

  27. John Davey,
    Still trying to make a point about a word you claim you can’t find, even though everyone else that’s tried has found it? All you’re doing is accentuating the fact that your point making skills are useless. It might have been better to admit you’d found the word and then argued that those who used it had been similarly wrong to do so. You know, the way you had to admit you’ve found the April Fool comment by then pretending it was useful to the making of your “ironic” point. Although, yes, everything you write is useful in the end as humorous.
    “So there is no evidence that mental processes are computational then.” Well not, I suppose, in your case.

    Although you have had to recompute your position vis-à-vis Arnold.

  28. “All you’re doing is accentuating the fact that your point making skills are useless.”

    I think that’s the first sentence of yours – of more than five words – that I’ve understood ! I have not found the word “inperceivable” anywhere. Dictionaries redirect to “imperceivable”.

    Roy, you wouldn’t know a good argument if it landed on your head. Your ideas are hopelessly vague and half-baked, which is why the English you use rarely makes sense. If the ideas are just a cloud, so is the language.

    You haven’t raised an argument to any of the points I’ve raised : I’ve tried to argue with you (for instance on the point of creativity being incompatible with formal logic) but get no response. There is a reason for that : you can’t formulate one.

    You can’t follow arguments properly : I haven’t altered my position at all with Arnold – Arnold simply made clear that his view of what constitutes a working knowledge of the brain was not that ambitious.

  29. Davey,
    There are literate people using “inperceivable” all over the internet, but as usual you pretend that if it isn’t in the dictionary it doesn’t count.

    You want an argument that creativity is incompatible with formal logic?
    I don’t recall that the question was ever addressed to me. But it’s an easy one to answer. Because whoever said it isn’t was probably wrong.

    And you’re now saying that it was Arnold who altered his position and not you? Give us a break.

  30. “but as usual you pretend that if it isn’t in the dictionary it doesn’t count.”

    I confess that’s the usual test used by me and a few million other folk.

    “Because whoever said it isn’t was probably wrong.”

    … accompanied by no argument whatsoever.

    “And you’re now saying that it was Arnold who altered his position and not you? Give us a break.”

    It’s like pulling teeth. No, he didn’t alter his position, nor did I. But we did have different preconceptions coming into the “argument”.

  31. Davey: “But we did have different preconceptions coming into the “argument”,”

    Which Arnold altered and you didn’t? Pull another tooth on that one.

    Davey again: ““Because whoever said it isn’t was probably wrong.”
    … accompanied by no argument whatsoever.”

    That’s all the argument needed since you didn’t tell us whether you were against the proposition that creativity is incompatible with formal logic, or for it. And either way, I don’t see what argument you (and especially you) could conceivably make that would be true.
