Chomsky on AI

There’s an interesting conversation here with Noam Chomsky. The introductory piece mentions the review by Chomsky which is often regarded as having dealt the death-blow to behaviourism, and leaves us with the implication that Chomsky has dominated thinking about AI ever since. That’s overstating the case a bit, though it’s true the prevailing outlook has been mainly congenial to those with Chomskian views. What’s generally taken to have happened is that behaviourism was succeeded by functionalism, the view that mental states arise from the functioning of a system – most often seen as a computational system. Functionalism has taken a few hits since then, and a few rival theories have emerged, but in essence I think it’s still the predominant paradigm, the idea you have to address one way or another if you’re putting forward a view about consciousness. I suspect in fact that the old days, in which one dominant psychological school – associationism, introspectionism, behaviourism – ruled the roost more or less totally until overturned and replaced equally completely in a revolution, are over, and that we now live in a more complex and ambivalent world.

Be that as it may, it seems the old warrior has taken up arms again to vanquish a resurgence of behaviourism, or at any rate of ideas from the same school: statistical methods, notably those employed by Google. The article links to a rebuttal last year by Peter Norvig of Chomsky’s criticisms, which we talked about at the time. At first glance I would have said that this is all a non-issue, because nobody at Google is trying to bring back behaviourism. Behaviourism was explicitly a theory about human mentality (or the lack of it); Google Translate was never meant to emulate the human brain or tell us anything about how human cognition works. It was just meant to be useful software. That difference of aim may perhaps tell us something about the way AI has tended to go in recent years, which is sort of recognised in Chomsky’s suggestion that it’s mere engineering, not proper science. Norvig’s response then was reasonable but in a way it partly validated Chomsky’s criticism by taking it head-on, claiming serious scientific merit for ‘engineering’ projects and for statistical techniques.

In the interview, Chomsky again attacks statistical approaches. ‘Attack’ is a bit strong, actually: he says yes, you can legitimately apply statistical techniques if you like, and you’ll get results of some kind – but they’ll generally be somewhere between not very interesting and meaningless. Really, he says, it’s like pointing a camera out of the window and then using the pictures to make predictions about what the view will be like next week: you might get some good predictions, you might do a lot better than trying to predict the scene by using pure physics, but you won’t really have any understanding of anything and it won’t really be good science. In the same way it’s no good collecting linguistic inputs and outputs and matching everything up (which does sound a bit behaviouristic, actually), and equally it’s no good drawing statistical inferences about the firing of millions of neurons. What you need to do is find the right level of interpretation, where you can identify the functional bits – the computational units – and work out the algorithms they’re running. Until you do that, you’re wasting your time. I think what this comes down to is that although Chomsky speaks slightingly of its forward version, reverse engineering is pretty much what he’s calling for.
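Chomsky’s camera analogy can be made concrete with a toy statistical predictor – a purely illustrative sketch, nothing like the systems Google actually runs, with a made-up miniature corpus:

```python
from collections import Counter, defaultdict

# Toy corpus -- a stand-in for the vast text collections that
# statistical methods are trained on (purely illustrative).
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count bigram frequencies: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent successor of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # prints 'cat'
```

The prediction can be quite good, yet nothing in the model represents a grammatical rule or a computational unit – which is exactly Chomsky’s complaint about predictions made from the camera pictures.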

This is, it seems to me, exactly right and entirely wrong in different ways at the same time. It’s right, first of all, that we should be looking to understand the actual principles, the mechanisms of cognition, and that statistical analysis is probably never going to be more than suggestive in that respect. It’s right that we should be looking carefully for the right level of description on which to tackle the problem – although that’s easier said than done. Not least, it’s right that we shouldn’t despair of our ability to reverse engineer the mind.

But looking for the equivalent of parts of a Turing machine? It seems pretty clear that if those were recognisable we should have hit on them by now, and that in fact they’re not there in any readily recognisable form. It’s still an open question, I think, as to whether in the end the brain is basically computational, functionalist but in some way that’s at least partly non-computational, or non-functionalist in some radical sense; but we do know that discrete formal processes sealed off in the head are not really up to the job.

I would say this has proved true even of Chomsky’s own theories of language acquisition. Chomsky, famously, noted that the sample of language that children are exposed to simply does not provide enough data for them to be able to work out the syntactic principles of the language spoken around them as quickly as they do (I wonder if he relied on a statistical analysis, btw?). They must, therefore, be born with some built-in expectations about the structure of any language, and a language acquisition module which picks out which of the limited set of options has actually been implemented in their native tongue.

But this tends to make language very much a matter of encoding and decoding within a formal system, and the critiques offered by John Macnamara and Margaret Donaldson (in fact I believe Vygotsky had some similar insights even pre-Chomsky) make a persuasive case that it isn’t really like that. Whereas in Chomsky the child decodes the words in order to pick out the meaning, it often seems in fact to be the other way round; understanding the meaning from context and empathy allows the child to work out the proper decoding. Syntactic competence is probably not formalised and boxed off from general comprehension after all: and chances are, the basic functions of consciousness are equally messy and equally integrated with the perception of context and intention.

You could hardly call Chomsky an optimist: “It’s worth remembering that with regard to cognitive science, we’re kind of pre-Galilean,” he says; but in some respects his apparently unreconstructed computationalism is curiously upbeat and even encouraging.

 

61 thoughts on “Chomsky on AI”

  1. Our brain, as do all life’s functions that apply intelligence to problems, uses a very complicated and evolved arrangement of strategic algorithms to “predict” the most proper solutions to perceived problems. These solutions, with the help of all our cultures, may over time successfully become instinctive. As environments change, we adapt to our new experiences with newly learned successful strategies, which in turn may be incorporated after more time as instinctive.

    Chomsky doesn’t believe this, of course, which is one reason why he’s in an argument now with others concerning recursion as supposedly a necessary element of all languages. “Instinctive” to all of them in other words. He can’t accept the clear evidence that isolated cultures, such as the Piraha in the Amazon, have had no reason to develop, as an example, an understanding of the use of numbers, and so it’s quite likely that they’ve had no reason to develop the other abstract concepts that a use of recursive phrasing might facilitate.
    And it’s also possible that the Piraha culture over time lost any reason to utilize some ancestral recursive concepts, and any instincts that offered up such strategies died off, but that’s pure speculation on my part. But I also note that when taught the use of numbers in a decimal series, Piraha children can learn to apply them.
    Chomsky in any case does believe that all children must have a limited set of instinctive options where learning language is concerned. Which may or may not be true, but they also have the ability to learn what their culture offers them as additional options, some of which may clash with, and in time override, those instincts.

  2. Fascinating. This is my first time commenting here, and it’s merely to say that I have been enjoying your blog very much. I followed it here from RS Bakker’s Three Pound Brain, and I very much agree with his assessment: you seem to really address these issues in a very objective and measured way. Thank you for that. I look forward to reading more.

  3. It’s a bit of a stretch of the subject, but it makes you wonder if, through context and empathy, children learn their concept of consciousness? That there’s this idea, connected to the word ‘consciousness’ being handed back and forth, being internalised by young minds.

  4. Callan, we learn and internalize strategic behavioral or functional concepts, and the meaning or potential of what we’re conscious of may thus be felt instinctively, but what young minds then learn about the concept of consciousness itself will, in my view, come from the communicative culture that we’re born into – and that precedes us, evolves us, and so far has survived us.
    On the other hand we know that birds and some other animals inherit the knowledge and ability to communicate by specific songs, so whether we may actually inherit the knowledge of specific words, and thus their meanings, is a good question.

  5. >”but we do know that discrete formal processes sealed off in the head are not really up to the job.”

    Can you expand on this please? I don’t think any modern day computationalist/functionalist theory of mind explicitly states that the processes are ‘sealed off in the head’; in fact, interaction with the environment and society extends these processes outside the head, yet they remain computational.

  6. @haig:

    It’s more of the standard straw man argument used by externalists to justify writing anything at all.

    It’s also a category error in failing to comprehend the difference between input and processing.

  7. http://www.youtube.com/watch?v=QSQwBEL4mfQ

    You can’t clump Chomsky in with the hard functionalists – in fact he has a strong disagreement with them. He certainly doesn’t see a link between computation and consciousness, unlike Pat Churchland for instance. In fact Chomsky’s quite right in stating that the problem with science is the “triumph” of reductionism.

    This triumph leads to the conclusion that consciousness is a problem because consciousness doesn’t fit in with it. But whereas consciousness is an empirical fact of the universe, reductionist theories are just theories – created by humans with limited cognitive scope, like all creatures.

    It’s arrogance therefore to dispose of facts like consciousness as ‘illusions’, or even more bizarrely, as ‘necessarily religious’, when the plain fact is it’s your own theoretical tools that don’t work. I think Chomsky – as usual – has nothing but a great contribution to make here.

  8. Functionalism, a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.
    And of course a complex system will be more than a sum, but an evolution of the complexity that, according to the changes it experiences, reforms its original parts. So that while knowing something of the original formation is important, it’s more important to understand, from that, the nature of the original function, and do so for the aid it furnishes to our overall predictive purposes. (Because of course we instinctively must use predictive forms of logic if we think at all.)
    Functionalism in that sense serves our purposes, but then we’d have to redefine the concept accordingly. Chomsky appears to have no intention of doing that. Not consciously in any case.

  9. The Stanford Encyclopedia of Philosophy

    ………………………………………………………………………..
    Def. Functionalism:

    Functionalism in the philosophy of mind is the doctrine that what makes something a mental state of a particular type does not depend on its internal constitution, but rather on the way it functions, or the role it plays, in the system of which it is a part.
    ………………………………………………………………………..

    The problem with functionalism from the perspective of science is that it provides no physical explanation for the functions performed in the system of which it is a part.

  10. On the contrary, it’s the function that provides the explanation for its physical components rather than vice versa. You can infer a function from its physical components but not explain it. And in any case the component may have several functions that it serves.

  11. Roy

    That is not what functionalists do – they ignore the hardware and focus on function in the narrow, information-processing sense. That way they can make a neat transition to “platform independent” mental processing and the gobbledegook of strong AI, and the belief that minds are “computers” in the strict, contemporary sense of the word – i.e. not just physical objects that are capable, to an external observer, of computation – something all people are capable of – but actual examples of Von Neumann computer architecture in action. Astonishingly some people believe it; in fact they’ve made big careers out of it.

    J

  12. John,
    Yes, I know that’s not what functionalists do, but I think that I’ve been arguing, as you are, that it’s what they could and therefore should do if they understood that functions are multi-strategic; and forms, including computer forms, have no strategic inventiveness of their own. Computer architecture conceivably could be made of course to accommodate the prospect of self evolving algorithms, but we humans have yet to invent the algorithms that can be then left to their own computerized devices for the long term. (One of the bigger questions being, of course, how will they arrange to physically evolve their devices to suit any newly needed strategies when we’re no longer there to help them.)

  13. Roy: “You can infer a function from its physical components but not explain it.”

    Not so. You might be able to infer a function from specified components that are organized into a MECHANISM. The particular kind of structure and dynamics of the mechanism can predict/explain the function.

  14. “The particular kind of structure and dynamics of the mechanism can predict/explain the function.”

    If you didn’t already know the purpose of the mechanism, you could not expect to predict the function accurately, and even if you could with luck predict it, you could not explain other than the simplest tactics of its probable use.
    If you are an experienced mechanic, you can predict the function of a tool based on your knowledge of how functional strategies need to shape their forms. This is of course the use of inference, as I’ve said, but explanation is another ball of wax, the purpose of which only a few of us will know. In other words, you can’t start with an unfamiliar form and do other than infer its purpose.
    Forms are constructed intelligently, but the intelligence lies with the strategies they serve, not with the forms. Otherwise you’d be telling me that it’s the forms that construct their strategies for the form’s own purposes.
    I can tell from what you’ve written here earlier that you think otherwise, but you need more than inference to explain a scientific theory to the uninitiated.

  15. Roy: “Otherwise you’d be telling me that it’s the forms that construct their strategies for the form’s own purposes.”

    That is exactly what I claim about the structure and dynamics of the neuronal mechanisms in the cognitive brain (a *form* in your terms?).

  16. Arnold, the cognitive brain has a form but the form consists of its physical structure which in turn serves our life’s computational/thinking purposes. Early life forms had no brain structures as you know. They evolved because our evolving intelligence required their construction, and in the end it was our forms that our burgeoning and multiform intelligence systems used to construct and reconstruct what we now refer to as our brains.
    The natural selection theorists you subscribe to have ignored the rather obvious fact that wherever they believe the natures of our selections are determined, the construction of our forms must be done physically by ourselves. And thus in the end we must direct our own construction with our own intelligence.
    And if the argument is that we, in the beginning, had nothing that could be called intelligence, then I’d argue that we’d have had no way to do our building, which of course we have amazingly been doing for the ages.
    Our thinking processes of course make up what we’ve referred to as our minds. They do not appear in any way to be a physical substance. And yet our physical substances could not exist without these matterless entities that are essential to our lives.
    Begging the question of how any physical substances can have existed in the universe without their operative strategies, but then the answer will beg the further question of why, and off we go.

  17. Roy: “And yet our physical substances could not exist without these matterless entities that are essential to our lives.”

    Then I take it that you are a dualist. You believe that the cognitive brain is a construction of an immaterial mind rather than a product of biological evolution. I don’t believe that you can be persuaded otherwise by evidence within the norms of science. In science strong intuitions are trumped by strong evidence.

  18. Arnold,
    No, there’s no “rather than” in my conception. All biological evolution is driven by strategic forces, and such forces abound in the universe. The fact that they are not composed of tactile material is a very poor argument for denying their existence.
    And the mind does arise from the brain that had been engineered by what had earlier evolved to become life’s strategic forces. These forces did do the construction physically, since as I suggested earlier, we intelligently and provisionally directed, engineered and supervised our own physically performed construction. Or do you really think that our physical behaviors are all accidentally controlled, or that any intelligence involved is as physically constituted as our flesh and bones?
    And yes, in the sense that the mind and the physical brain are separate entities of our thinking mechanisms, then I’m a dualist. Just not in the tradition of Descartes.

  19. The best I can argue is that they’ve evolved since time and change will have allowed it. But seriously this is not just some opinion I’ve made up for the purpose of an argument. The physicist John A. Wheeler did his best to describe the intelligent strategies that universal systems form. A. N. Whitehead had his process philosophy that gave his own version of the evolution of this phenomena.
    Among many others of course. But you must have come across these arguments before, so if you didn’t accept them then, I don’t expect that you will now.
    And who knows, maybe we’re all more wrong than you’re more right.

  20. Roy, none of us is omniscient. Those who care make guesses about how the world works. The thing about science is that it is a pragmatic enterprise that insists we must test our guesses/theories by seeing if the relevant events that they predict actually happen. So science is an evidence-based enterprise. On balance, it has served us very well. By the way, many years ago I attended a lecture by John Wheeler on these issues.

  21. Well in addition science has “evolved” from philosophy, and we still find the wisdom of Aristotle, et al, quite useful in today’s environment. And this is to a large extent a philosophical blog. And for me, my proposals on this subject seem eminently logical and testable from both standpoints.
    And Shakespeare, from almost 500 years ago, was at heart a philosopher who understood the strategies of our minds and brains better than any of us I know of today. Except that he believed in strategies from God. But as usual, I digress.

  22. A lot of humans seem to run on AI and can become very automated, programmed on stuff that is simply not true, and living their lives without being able to change their fictional beliefs or circumstances.

  23. Doru, the tasks that this virtual mind performs are relatively easy ones. The stimuli are presented in isolation and in a normalized position with respect to the camera. A cognitive brain has to shift attention in order to parse stimuli out of a clutter of objects in a complex visual scene. The cognitive neuronal mechanisms detailed in *The Cognitive Brain* (TCB) (MIT Press 1991) are able to accomplish this essential task in order to learn and recognize objects. See TCB, Ch. 12, “Self Directed Learning in a Complex Environment”, here:

    http://www.people.umass.edu/trehub/thecognitivebrain/chapter12.pdf

    also

    http://people.umass.edu/trehub/sparscodtre.pdf

  24. There is no doubting that humans have the effect of dual ‘sciousness’: part-time consciousness, which controls our somatic nervous system, connected to – but isolable by sleep from – our unconscious 24/7 autonomic nervous system. AIs do not have this ability; perhaps they should. I did read of two interacting robots which learn from each other, which is a step in that direction.

  25. Arnold, I can see the value in the testing done with simulated virtual brains in proving the points in the theory. Once the requirements are well understood and defined, it becomes much easier and deterministic to actually implement those massively parallel computer vision algorithms to emulate the “shift of attention” capability that will make recognition of subtle patterns out of cluttered visual scenes, possible.
    Richard, it will be very interesting to see how artificial computation can enhance, augment and complement the human conscious capabilities.

  26. @Roy:

    “All biological evolution is driven by strategic forces, and such forces abound in the universe. The fact that they are not composed of tactile material is a very poor argument for denying their existence.”

    This is not a fact. Asserting it doesn’t make it so.

    “And yes, in the sense that the mind and the physical brain are separate entities of our thinking mechanisms, then I’m a dualist. Just not in the tradition of Descartes.”

    How so? Descartes proposed that mind and brain were separate and that mind was immaterial; this is precisely what you are proposing here, so how is it not in the tradition of Descartes?

  27. Show me a strategy that’s made of particles. Show me an abstract thought that’s formed of them.
    Descartes proposed, in short, that mind was some variety of a soul. I propose that it’s a fact that it isn’t.

  28. Roy

    “Computer architecture conceivably could be made of course to accommodate the prospect of self evolving algorithms”

    You will have to tell me what the difference is between a “self-evolving” algorithm and any other kind of algorithm. An algorithm that changes itself is just another algorithm : they’re called computer programs.

    An algorithm that deals with its own data – feedback processing – is just another algorithm. Most computer programs on the face of the planet would fall within your definition of “self-evolving”. The famous “feedback-loop” algorithm of Dennett fame – the processing that will tell us how semantic arises from syntax – is also just another example of an algorithm. Most of the computer programs in the world constitute “feedback-loop” algorithms.

    http://en.wikipedia.org/wiki/Von_Neumann_architecture

    Computers do not have a physical architecture, they have a logical architecture. The logic is there to satisfy the requirements of the Mathematics of Computational Theory. All computers are logical variants of the Von Neumann model : none are different. The suggestion that a different architecture is awaited that will somehow compute differently is a fallacy : all computers, by definition, must compute, and all must follow the Von Neumann architecture.

    It is inconceivable therefore that there would be a different kind of algorithm than a computational algorithm of the kind so defined. We await no innovation or evolution : algorithms are defined. There will be no point at which computers start doing something cognitively different from their inventors, as their “cognitive scope” (inasmuch as computers can be said to have any) is limited to that of their inventors.
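John’s point that a “feedback-loop” or “self-adjusting” program is just another algorithm can be sketched in a few lines – a toy illustration (the function, parameters, and numbers here are all invented for the example):

```python
def feedback_loop(target, guess=0.0, rate=0.5, steps=20):
    """A 'self-adjusting' routine: each pass feeds its own output
    back in as input and updates its internal state.  Despite the
    feedback, it is an ordinary, fully determined algorithm."""
    for _ in range(steps):
        error = target - guess
        guess = guess + rate * error  # the program adjusts its own state
    return guess

print(feedback_loop(10.0))  # converges toward 10.0
```

Nothing about the feedback makes the process less algorithmic: given the same inputs, it produces the same outputs every time.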

  29. @Roy:
    All abstract thoughts are formed of particles. Any abstract thought that I have, or anyone else has, is formed of the particles that make up their brain. This is the default assumption, we know brains are material, and we know they instantiate thoughts. To declare that it is a fact that they are “not tactile material” is the essence of hubris. You are the one making the unnecessary claim, the burden of proof is on you to demonstrate it. Again, simply asserting your belief that something is so, does not actually make it so. Evidence please.

    Also, what is the conceptual difference between the immaterial causal mind you are proposing and the immaterial causal soul Descartes proposed? So far the only difference I can see is the labels you are using.

    @John:
    “all computers, by definition, must compute, and all must follow the Von Neumann architecture”

    The second part is simply untrue. The Von Neumann architecture is simply one way to instantiate a computing machine. It’s a very specific thing, and it’s not the only way. The architecture of a Turing Machine, for example, is not a Von Neumann architecture. Of course you can draw analogies between Von Neumann architecture and other computer architectures, but this doesn’t mean they are the same thing.
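The point that a Turing machine is a different model from a Von Neumann machine can be illustrated with a minimal simulator – a toy sketch, with an invented two-rule machine as the example program:

```python
def run_turing_machine(program, tape, state="start", head=0, max_steps=100):
    """Minimal Turing machine: a fixed state table, a tape, and a head.
    There is no shared instruction/data memory and no fetch-execute
    cycle over a common bus, so this model of computation is not a
    Von Neumann architecture, though it computes all the same."""
    tape = dict(enumerate(tape))  # sparse tape, '_' is the blank symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape))

# A two-rule machine that flips every bit on the tape, then halts.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}
print(run_turing_machine(flipper, "1011"))  # prints '0100_'
```

The program here is a static lookup table, not instructions stored in the same memory as the data – which is the defining Von Neumann feature the Turing model lacks.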

  30. @John,
    My “self evolving algorithm” is the fundamental “function” that activates the learning systems of all life forms. Our instincts (which I hold have been initially learned and evolved in every species) are strategic, and biological strategies are represented (as far as we can tell) by pictorially symbolic holographic algorithms rather than the essentially mathematical forms that computers were fashioned to use. Our algorithms react directly to sensory input from the world around them. So far, computers don’t without their human help. I’m sure you’ll find a way to argue differently, but your computer won’t be able to respond here on its own behalf.

    @Joe,
    Thoughts are made or formed “by” brain particles perhaps but not made “of” them. The best we can do to describe their origin is to say they “emerge.” You can call this hubris, but I call it a logical assumption. Are logical assumptions evidence? Yes. Full proof? Not necessarily.

    If you’re not willing to accept or consider that, then of course you won’t be all that curious as to how life’s strategic systems may well work, or for that matter, how any of the strategies have been made to work that clearly regulate the movements of all energy and its particle formations in our universe.
    You ask what’s the difference between the immaterial causal mind and Descartes’ immaterial causal soul?
    The main difference to me is that I’ve seen no evidence that his soul exists anywhere outside of his imagination, and perhaps yours. So labels in this instance do help to make that difference.

  31. Hawkins, Grok’s developer, says,
    “The key to artificial intelligence has always been the representation,” he says. “You and I are streaming data engines.”

    Yes, but our purpose is to survive and our strategies, in the end, must serve that purpose. The computers that we’ve so far constructed serve our purposes in that respect, but not their own. So far.

  32. “Pictorially symbolic holographic algorithms”

    What on earth has holography got to do with it? Algorithms can’t be holographic. Holography is a physical technique used for realising images. Algorithms can’t be holographic any more than they can be made of bricks.

    Brushing the holographic stuff aside, there is a word for the phrase “pictorially symbolic algorithm”: it’s called “algorithm”. All algorithms – which by definition must be encodeable into formal symbols – are “pictorially symbolic”. The expression “1 + 1 = 2” is pictorially symbolic. It is a necessary element of an algorithm that it be “pictorially symbolic”. Mathematical forms are all “pictorially symbolic” and always have been.

    Algorithms do not react to any input as they are not physical. They are used by engineers of software systems (and electronic systems ) as methods of control – virtual methods of control that is, they don’t actually physically exist – any more than a painting of a duck can be said to be an actual duck.

    Computers cannot learn ‘more’ than humans because they are defined objects. There is no mystery about them. None at all. We aren’t sitting here waiting for computers to get lightning quick and see what happens. No matter how quick they get, they do the same thing at speed X as they did as speed Y. They just do it quicker. Speed has no causative powers. Computers won’t start learning or thinking or behaving differently just because they’re quicker. If you think they will, you’ll be waiting for ever. We know what they do. We know what they do because we’ve defined to them to do what they do.

  33. @John Davey,
    algorithm, noun:
    a process or set of rules to be followed in calculations or other problem-solving operations, esp. by a computer: a basic algorithm for division.

    Of course algorithms can be holographic if they can be pictorial at all. Numbers are pictorial symbols of measurement. Holographs are pictures of what numbers try to measure, and that do a much better job of measuring the real world for life’s immediate and long term purposes than mere number symbols ever would.
    (Some tribes of humans don’t use numerical symbols at all in thinking, by the way. Yet they compute.)

    You say, “Algorithms do not react to any input as they are not physical.” Well perhaps not if they are in a computer, in which case by your logic, nothing there reacts to anything. But human thoughts, for example, are both reactive and causative, and you’ve yet to demonstrate that they are more physical than the algorithms that your computer was constructed with to simulate them.

    But it’s rather useless to argue with you at all. Your learning process admittedly doesn’t use the holographic process – possibly because you’ve been convinced by your computers that it can’t.

  34. Roy, making computers intelligent and understanding how they work, makes human inteligence more purposeful and useful.
    Arnold, it seems that Artificial Intelligence will never produce any new interesting propositions but will help understand, prove, demonstrate, verify, test and validate the correctness of the existing propositions.

  35. Roy

    ” Holographs are pictures of what numbers try to measure”

    This is the kind of sentence that linguists love. The grammar is correct but the statement makes no sense, at least as far as the English Language is concerned.

    Numbers are magnitude bearers, they don’t measure anything. So numbers measure nothing, representing as they do magnitude without content. The statement is meaningless.

    Holography : ‘Holography (from the Greek ὅλος hólos, “whole” + γραφή graphē, “writing, drawing”) is a technique which enables three-dimensional images to be made. It involves the use of a laser, interference, diffraction, light intensity recording and suitable illumination of the recording. The image changes as the position and orientation of the viewing system changes in exactly the same way as if the object were still present, thus making the image appear three-dimensional.’

    Holography is a graphical technique. It has no relationship to pure mathematics. None whatsoever.

    “But human thoughts, for example are both reactive and causative, and you’ve yet to demonstrate that they are more physical than the algorithms that you computer was constructed with to simulate them.”

    My computer was not constructed with algorithms. It was constructed with large semiconductor arrays, upon which I can provide a physical mapping for my own algorithms, which are anything but physical.

    The algorithms are simulated with the chips on the PC : voltage levels on the chips correspond to 0s and 1s as I, the observer, the computer user, please. The algorithm – in reality – only exists in my head, as only I know what voltage levels on the chips correspond to 0s and 1s.

    The computer doesn’t know if +1V means ‘0’ and -1V means ‘1’ : it has no idea. It is probably the most basic feature of a computer: it has no way of knowing it IS a computer in the first place.

    I can treat the sun as a computer if I want : not a very sophisticated one admittedly, but I can say “the sun is running a computer program that always returns a value of one”. Who can stop me ? Does the sun know it’s a computer because I decide so ? I doubt it.

    Algorithms on computers are not only not physical, they don’t actually “run on” computers in the first place.

    My personal opinion is that human thought is physical, caused by brains, but not material. The irreducible nature of mental phenomena – pure semantics, as for instance emotions – conclusively proves the impossibility that the brain is a computational device. Computers are only syntactical, and as any linguist will tell you, syntax alone is not enough for semantics.

    ” Your learning process admittedly doesn’t use the holographic process – possibly because you’ve been convinced by your computers that it can’t.”

    There is no holographic learning process except the one you’ve just made up. I don’t know what it means and, as you seem incapable of explaining it, neither do you.

  36. “The second part is simply untrue. The Von Neumann architecture is simply one way to instantiate a computing machine. It’s a very specific thing, and it’s not the only way. The architecture of a Turing Machine, for example, is not a Von Neumann architecture. Of course you can draw analogies between Von Neumann architecture and other computer architectures, but this doesn’t mean they are the same thing.”

    OK – the only practical alternative for computers is to run a Von Neumann (or the similar Harvard) architecture. Either way, they must provide an implementation of a Turing Machine, a theoretical entity only.

  37. John,
    My personal opinion is that human thought is physical, caused by brains, but not material

    How do you fit qualia in that statement?

  38. “How do you fit qualia in that statement?”

    Qualia are features of consciousness, and consciousness is physical without being material : it is a phenomenon that emerges from the causal powers of brains. No contradiction. Consciousness is not an outcome of computational processes but an emergent phenomenon caused by the brain’s material biology.

    We could describe it as mental, if we allow that the mental is not mathematical in nature. Thus there is no contradiction between the physical and the mental : there is only one universe and within it there are mental phenomena and non-mental phenomena : but no mathematical phenomena.

    Sometimes “physical” is the only word to revert to, as the false dichotomy that Descartes created between “Physical” and “mental” has created chaos. “Mental” objects to Descartes included mathematical objects as well as emotions, qualia, colour senses – things which are clearly irreducible, absolute in form and clearly unlike mathematical objects.

  39. After all the time since I last posted, the best John Davey can do in response is choose to select alternate meanings of my words and alternate contexts to examine them in.
    For the best example, I did not say that algorithms were physical – just the opposite in fact. But being a literal thinker, he couldn’t grasp the intended point. And he knows that when I said that numbers measure, it means we use them to measure. Yet he’d have no defensive arguments if he showed he knew what my arguments really were, of course. (There’s a word for that tactic, but I suppose I’ll need to consult a linguist before I choose to use it here.)

    He says his computer was not “constructed” with algorithms – because “the algorithms are simulated with the chips on the PC.” Was there a literal minded point being made there that relates to how they’re used?

    He says, “Holography is a graphical technique. It has no relationship to pure mathematics. None whatsoever.”
    Actually that was my point, that a holographically constructed algorithm is not a mathematical one.
    But if you want linguistic oddity, here’s his oddest: “My personal opinion is that human thought is physical, caused by brains, but not material.”
    If a physical entity causes something, then that something must be a physical entity, is that it?
    I was going to say it’s literal thinking all the way down, but that’s not even thinking.

  40. Roy

    “For the best example, I did not say that algorithms were physical”

    Actually I didn’t suggest you did, although looking at what you said again I think you are actually ambiguous on this point. You said that your computer “was constructed with algorithms” – of course I didn’t assume you literally meant that. I just assumed that it was bad English.

    “Was there a literal minded point being made there that relates to how they’re used? ”

    Again, a statement which may or may not have a point. I’m just “literally-minded” and assume that language is a means of communication. If that communication is obstructed by the use of language that is perpetually lacking in meaning, I should of course remember that as far as you are concerned, Roy, I need a good dose of additional Extra Sensory Perception in addition to the usual use of words.

    “Actually that was my point, that a holographically constructed algorithm is not a mathematical one.”

    Was it? You didn’t exactly express that the first time. I’m glad my email has thrown some clarity on the idea of the “holographic algorithm” – although I’m still waiting for the connection to holography to be made clearer.

    “If a physical entity causes something, then that something must be a physical entity, is that it?
    I was going to say it’s literal thinking all the way down, but that’s not even thinking.”

    The point is that thoughts themselves are phenomenal. The mental is physical and immaterial – not mathematical, unlike the now widely believed idea that physical brains yield only mathematical thoughts, on the basis that brains are no more than computers.

    It’s an idea subscribed to by neuroscientists and scholars, professionals in the field, and best espoused by the philosopher John Searle. Maybe you can email them and tell them to get their act together.

  41. First, I doubt that John Searle ever argued that the mental is both physical and immaterial. For one thing, to be picky picky, immaterial is defined as:

    immaterial |ˌi(m)məˈti(ə)rēəl|
    adjective
    1 unimportant under the circumstances; irrelevant : so long as the band kept the beat, what they played was immaterial.
    2 Philosophy spiritual, rather than physical : we have immaterial souls.

    My feeling is that “rather than” would seem more definitive than “both.”

    Second, I’m not sure what you are claiming that I must have said that’s in conflict with Searle’s doctrines. Although admittedly there are some areas where he disagrees with me.

  42. Arnold, actually a magnetic field is nothing more than the exchange of photons – which are physical. So no, a magnetic field is not immaterial, it is completely physical.

    You can’t interact with it by waving your arm around (well, maybe if you glued a bunch of magnets to your arm), but that doesn’t make it immaterial.

  43. Arnold, if the philosophical meaning of immaterial is spiritual rather than physical, then obviously immaterial is not in that sense physical. Searle has said that the mental aspects of the mind are higher level physical properties of our brains. But he seems to be agreeing that they are not material properties at the same time. My view is that the mind is an emerging property of the physical brain that is exhibiting a nonmaterial strategic presence.
    Yet Searle has also argued that these strategies can be activated to cause physical changes to occur. So we’re really not that far apart here conceptually.
    And I would not dismiss these differences as semantic because it’s not that simple. The real problem is that many of us, including you, don’t see the need for strategies when material entities react with each other – and it doesn’t seem to matter to you if they are living or non-living to boot. Possibly because you realize that strategic entities would not likely have evolved directly from the non-strategic, and have come down on the side of the non-strategic as predominant in nature – leaving life to have accidentally evolved a myriad of instinctive reaction strategies in, on, and of its own.

  44. Joe,

    From Wikipedia: “The composite particles such as atoms, atomic nuclei, and nucleons, all have both rest mass and volume. By contrast, massless particles such as photons are not considered to be matter, and these have neither rest mass nor volume.”

    If this is the case then a magnetic field is both physical and immaterial.

  45. @Roy:

    “Thoughts are made or formed “by” brain particles perhaps but not made “of” them. The best we can do to describe their origin is to say they “emerge.” ”

    All of this can also be said about electrical charge, yet electrical charge is still considered a physical property of particles. I see no reason to make the assumption that thoughts are not physical properties of brains, and you have offered no compelling reasons or evidence to support that assertion.

    “You can call this hubris, but I call it a logical assumption. Are logical assumptions evidence? Yes. Full proof? Not necessarily.”

    There is no field, scientific or philosophical, which accepts logical assumptions as evidence. Conclusions drawn from logical arguments whose premises can be shown to be valid and true count as evidence, but logical assumptions do not.

    I’d like to understand what your position is, and what you’re trying to say, but so far your statements have been incoherent and filled with naked assertions lacking support. I honestly can’t make heads or tails of what you are getting at.

    For instance, you refer to “life’s strategic systems”. I have no clue how you mean that to be interpreted. What do you mean by strategic? What do you mean by system? Whatever it is you are conceptualizing, do you have any empirical support for its existence? It sounds like you are just making things up by stringing together words into grammatical sentences. Maybe there is some actual content to what you are saying, but you are going to have to be more clear and precise about what that is if you want to get your message across.

    I agree with John that the term “holographical algorithm” is nonsensical and entirely without content; it’s like saying “colourless green idea” (to borrow from Chomsky himself) – it’s syntactically correct, but the conceptual semantics simply don’t mesh.

    An algorithm is just a set of instructions for carrying out some operation, like a recipe.

    Holography is as John described, a method of using lasers to create a 3D representation of a physical thing.

    If what you mean by “holographic” is non-mathematical (as you state at some point later) – then just say non-mathematical instead. There are plenty of non-mathematical algorithms out there: I already mentioned cooking recipes; also if I gave you driving directions (turn left at the stop sign, go two blocks past the McDonald’s and make a left) that would also be a non-mathematical algorithm. Although if non-mathematical is what you mean by “holographic” a better label might be “informal algorithm” or “qualitative algorithm”. Any algorithm (recipes, driving instructions etc…) can be formalized and quantified in order to be expressed mathematically (in fact I had a prof. once who stated that if you can’t express your idea as a formal algorithm then your thinking is sloppy and your idea imprecise).
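    To make that formalization claim concrete, here is a minimal sketch (in Python) of the driving-directions example above rendered as a precise, state-by-state algorithm. The grid, coordinates, and landmarks are invented purely for illustration:

```python
# The informal directions ("turn left at the stop sign, go two blocks
# past the McDonald's, make a left") formalized on a grid: each informal
# step becomes a well-defined state transition. Landmarks and the
# coordinate system are invented for illustration.
HEADINGS = ["N", "E", "S", "W"]
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def turn_left(heading):
    # Turning left moves one step counter-clockwise through the headings.
    return HEADINGS[(HEADINGS.index(heading) - 1) % 4]

def advance(x, y, heading, blocks):
    # Move a given number of blocks in the current heading.
    dx, dy = MOVES[heading]
    return x + dx * blocks, y + dy * blocks

def drive(x=0, y=0, heading="N"):
    heading = turn_left(heading)      # turn left at the stop sign
    x, y = advance(x, y, heading, 2)  # go two blocks past the McDonald's
    heading = turn_left(heading)      # make a left
    return x, y, heading
```

    Starting at the origin facing north, `drive()` halts in a well-defined final state, just as the quoted Wikipedia definition of an algorithm requires.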

    “You ask what’s the difference between the immaterial causal mind and Descartes’ immaterial causal soul?
    The main difference to me is that I’ve seen no evidence that his soul exists anywhere outside of his imagination, and perhaps yours. So labels in this instance do help to make that difference.”

    I asked you what the difference between your concept and Descartes’ was. The fact that you have seen no evidence for Descartes’ concept of an immaterial soul is irrelevant, it doesn’t offer any conceptual difference to distinguish your concept from his. Let me try to break this down a bit for you using an example. Let’s pretend your car has the following properties:

    -red
    -four doors
    -is a Ford

    and that my car has the following properties:

    -red
    -four doors
    -is a Toyota
    -convertible

    Our cars share the properties “red” and “four doors”, but have different values for the “make” property (mine a Toyota and yours a Ford), additionally my car has a property yours lacks, i.e. it’s a convertible.

    So far all I can glean about your conceptualization of an “immaterial mind” is the following:

    -is immaterial
    -has causative powers with respect to human brains

    Descartes’ “immaterial soul” concept also has both of those properties. So my question can be broken down into the following:

    Are there properties that your concept possesses that Descartes’ does not?
    Are there properties that his concept possesses that yours does not? Are there properties for which the two concepts have different values/instantiations?

    @John:

    “OK – the only practical alternative for computers is to run a Von Neumann (or the similar Harvard) architecture. Either way, they must provide an implementation of a Turing Machine, a theoretical entity only.”

    I’ll grant that the first statement is provisionally true – with current technology and understanding, the Von Neumann architecture is probably the most practical and efficient method of implementing a computing device. However, there have been recent advances in building computer hardware using an artificial neural network (ANN) architecture (see here for example: http://www.neurdon.com/). ANNs have been demonstrated to be Turing Complete and we may indeed see hardware based on this kind of computing architecture very soon.

    I’d like to say your second statement is wrong, but I think it’s either a misunderstanding or a misstatement. A computing machine does not have to actually implement a Turing Machine in order to be considered a computer. It just has to be capable of computing anything that a Turing Machine could. The Church-Turing hypothesis essentially defines the meaning of “algorithmically computable” – something is algorithmically computable if it could be hypothetically performed by a Turing Machine. Thus something can be considered a computer if it can compute whatever a Turing Machine could – it doesn’t necessarily have to do it in the same way.
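    A toy illustration of that point, assuming nothing beyond the standard definitions: the two “machines” below compute the same successor function, one by scanning a Turing-style tape and one directly, without sharing any mechanism.

```python
# Two ways to compute the successor function n -> n + 1. The tape
# machine is a deliberately minimal Turing-style device; the point is
# that the Church-Turing picture cares about equivalence of the
# computed function, not of the mechanism.
def run_tape_machine(tape):
    """Append a single '1' to a unary string, i.e. compute n + 1."""
    cells = list(tape) + ["_"]   # tape with one blank cell at the end
    pos = 0
    while cells[pos] == "1":     # scan right over the 1s
        pos += 1
    cells[pos] = "1"             # write a 1 on the first blank, then halt
    return "".join(c for c in cells if c == "1")

def successor(n):
    # The same function, computed directly with no tape at all.
    return n + 1
```

    `len(run_tape_machine("111"))` and `successor(3)` both yield 4: same function, entirely different ways of doing it.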

  46. JoeDuncan, it’s possible that it’s more your fault than mine that you don’t understand what I’m proposing. You’ve apparently been educated in a technical world that has mechanical systems operating automatically on the assumption that they’re furnishing their own strategies in the process.
    Take this simplistic illustration as the best example: “Our cars share the properties “red” and “four doors”, but have different values for the “make” property (mine a Toyota and yours a Ford), additionally my car has a property yours lacks, i.e. it’s a convertible.”
    Did these cars have any input into their properties or selection of their “values” or their “properties”? Did they participate in their construction or its evolution for example? Do they make optional choices on their own? Do they arrange for their own survival, or add to the designing process of their progeny?

    And take this from you: “There is no field, scientific or philosophical, which accepts logical assumptions as evidence. Conclusions drawn from logical arguments whose premises can be shown to be valid and true count as evidence, but logical assumptions do not.”

    Evidence is defined in my dictionary as the available body of facts or information indicating whether a belief or proposition is true or valid. Indicating is a key word here. Indications require logical assumptions, no?

    The evidence in any thought experiment is a logical assumption, for example. Einstein developed the theory of relativity by logical assumptions that were formulated in part from evidence developed experimentally by other scientists, yet finalized by evidence developed by experiments done intuitively in his head. And in the end, all hypotheses are based on logically assumptive premises that remain to be tested by experiment. Logically fashioned experiments that is.

    Of course you will state that this makes no sense, since you don’t understand it, etc., etc.
    Evidenced by this logical assumption, that in the end is illogical as hell:
    “The fact that you have seen no evidence for Descartes’ concept of an immaterial soul is irrelevant.” Does that mean I need to see his evidence of a soul to then declare that this evidence is not really evidence of a soul? And here I thought philosophers had already done that.
    “An algorithm is just a set of instructions for carrying out some operation, like a recipe.” A recipe to be used by humans to bake a cake perhaps, but nothing more?
    From Wikipedia: “More precisely, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing “output” and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.”
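    The “randomized algorithms” clause in that definition is easy to make concrete. A minimal Python sketch (a Monte Carlo estimate of π, chosen purely for illustration):

```python
import random

# A randomized algorithm: the intermediate states depend on random
# input, yet the procedure still terminates with a well-defined output.
def estimate_pi(samples=100_000, seed=0):
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    # Count random points in the unit square that fall inside the
    # quarter circle of radius 1; the fraction approximates pi/4.
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(samples)
    )
    return 4 * inside / samples
```

    Different seeds walk through different successive states, but every run halts with an output – exactly what the quoted definition demands of an algorithm, random input included.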

    This one is so simplistic, it’s juvenile:
    “Holography is as John described, a method of using lasers to create a 3D representation of a physical thing.”

    What you’re basically arguing then is that brains don’t do holographic recordings of their own experience because they don’t have lasers?

    No answer required. I have to stop. The rest of this diatribe of yours against my “evidence” is too pitiful to be scorned.

  47. The point is that a magnetic field is described by a vectorial function that assigns a vector to each point of space and time. Then we know the effects that such a field causes on matter travelling through it. Once the sources are known, the field distribution problem can be solved (Maxwell). Or, the other way round, you measure the field and try to deduce the sources (the inverse problem); this is the case for MEG (SQUIDs). We can model and measure the magnetic fields created by the brain.
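    The forward problem described here can be miniaturized, assuming only the textbook on-axis field of a magnetic dipole (the numbers are illustrative, not a MEG model):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def dipole_field_on_axis(m, z):
    """Predicted field magnitude on the axis of a dipole of moment m
    (A*m^2) at distance z (m): B = mu0 * 2m / (4 * pi * z^3)."""
    return MU0 * 2 * m / (4 * math.pi * z ** 3)
```

    Given the source, the field anywhere is predicted; the inverse problem runs the same relation backwards from measurements. The 1/z³ fall-off is also why MEG sensors must sit as close to the sources as possible.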

    Even more, the magnetic field concept arose because we wanted to understand what we observed around us, and could be manipulated and measured.

    Any slightly similar approach for qualia treatment?

  48. Vicente: “Even more, the magnetic field concept arose because we wanted to understand what we observed around us, and could be manipulated and measured.

    Any slightly similar approach for qualia treatment?”

    How is the magnetic-field concept different from the retinoid-space concept as an explanation for conscious content/qualia? For example, the retinoid model successfully explains/predicts a vivid conscious experience of a triangle in motion (a strong quale) when there is no such object in the visual field.

  49. Roy

    “First, I doubt that John Searle ever argued that the mental is both physical and immaterial”

    Read “Minds, Brains and Science” or any of his other works. He says so dozens of times. Mental processes are objectively-existing phenomena caused by matter (brains), characterized by a first-person, subjective ontology – hence both physical and immaterial – or rather just mental, his main case being that the dualist question itself is basically bogus.

    http://en.wikipedia.org/wiki/Biological_naturalism

    “However, Searle holds mental properties to be a species of physical property—ones with first-person ontology. So this sets his view apart from a dualism of physical and non-physical properties. His mental properties are putatively physical.”

    From Webster’s dictionary:

    1 : not consisting of matter : incorporeal
    2 : of no substantial consequence : unimportant

    “not consisting of matter” is the sense in which I meant the word.

  50. Arnold

    “It seems to me that a magnetic field is both physical and immaterial. No?”

    I wouldn’t disagree – but some contemporary theories argue that the electrostatic and gravitational forces are matter in motion.

  51. Joe

    “I’d like to say your second statement is wrong, but I think it’s either a misunderstanding or a misstatement. A computing machine does not have to actually implement a Turing Machine in order to be considered a computer. It just has to be capable of computing anything that a Turing Machine could.”

    That is what I meant.

    There would be no point in using a Von Neumann machine to simulate a redundant Turing Model layer.

    “All of this can also be said about electrical charge, yet electrical charge is still considered a physical property of particles. I see no reason to make the assumption that thoughts are not physical properties of brains, and you have offered no compelling reasons or evidence to support that assertion. ”

    I think your problem, Joe, is that you are making an assumption that a lot of people do – namely that physics gives us an understanding of matter. It doesn’t. Never mind thoughts “consisting” of particles, we don’t even know what particles consist of. We have experiments, we get results, we form theories. We now have a primitive particle in our sights – the Higgs Boson. So what? What is a Higgs Boson? Reduction of matter to smaller and smaller particles doesn’t answer the question “what is matter?” – it just displaces it. Knowing that matter “consists” of collections of small particles we don’t know much about doesn’t increase our understanding of it.

    Therefore demanding that mental phenomena fit to a picture of materiality is to extend too much licence to the scope and competence of theories of matter. Physics is not a universal definition of the way the universe works – it’s a best-fit approximation of the scope of things that humans could be reasonably expected to know about. It’s a man-made creation and no more than that.

    Physics tells us about how space, time and matter interact, but no more – for instance, it gives no insight into time or space themselves, two ideas we seem to know a lot about without having any great understanding of them. Consciousness is the same. And the problem of consciousness and physics is that no mathematical equation will ever predict mental phenomena. That’s not – as Dennett etc might say – a problem that the phenomena have to account for. Consciousness is real enough. It is however a limit that physics has to account for – something which in practice the Newtons and Einsteins instinctively always did.

    We know that mental phenomena exist and they are somehow caused by brains. That’s good enough. We also know they have a non-material nature. Therefore it’s difficult to see why we should make the assumption that they are “properties” of atoms and molecules. They are definitely phenomena realized by brains in some way; maybe one day we’ll have a better understanding of how.

    Nothing in nature has to conform to a model of human understanding and rarely does – look at the vitriol that Newton got when he proposed the theory of gravitation. All his peers said it was absurd and made no sense. But the fact was it was a phenomenon and it was true. Failure to conform to a level of understanding is never good enough to discount the phenomenon.

  52. John Davey, I don’t have to read Searle again to repeat what I’d earlier written here addressed to Arnold:
    “Searle has said that the mental aspects of the mind are higher level physical properties of our brains. But he seems to be agreeing that they are not material properties at the same time. My view is that the mind is an emerging property of the physical brain that is exhibiting a nonmaterial strategic presence. Yet Searle has also argued that these strategies can be activated to cause physical changes to occur. So we’re really not that far apart here conceptually.”

    And note again that the physical locations of strategic presences remain for the most part a mystery. Animals with brains can call them up and picture them and choose from and among them, but so can animal life that has no brain structures that we can find. Entire cells must use strategies continuously, but we have no more than a vague idea of how they store them, remember them or cause them to activate their physically responsive properties. Searle doesn’t write about these problems at all to my knowledge.
    There are also strategies that regulate the spin directions of electrons, etc., etc. Where the hell are the locations of strategies of elements and/or the laws that regulate them as well? I don’t pretend to know, but I’m still curious about what I obviously don’t know.


  53. Arnold,

    I don’t really understand your question.

    What are you trying to explain based on a magnetic field?

    What I say is:

    – I can precisely define a magnetic field.
    – I can measure it.
    – I can precisely relate it to its sources.
    – I can predict, measure and observe its effects.

    1) I can do all these for the case of the magnetic fields generated by a brain activity.

    2) I can do none of these for the qualia created by a brain.

  54. Vicente, I was not asking about a magnetic field as such; I was asking about your thoughts concerning the difference between the magnetic-field *concept* in explaining magnetism and the retinoid-space *concept* in explaining qualia.
