Against Non-Computationalism

Marcin Milkowski has produced a short survey of arguments against computationalism; his aim is in fact to show that they all fail and that computationalism is likely to be both true and non-trivial. The treatment of each argument is very brief; the paper could easily be expanded into a substantial book. But it’s handy to have a comprehensive list, and he does seem to have done a decent job of covering the ground.

There are a couple of weaknesses in his strategy. One is just the point that defeating arguments against your position does not in itself establish that your position is correct. But in addition he may need to do more than he thinks. He says computationalism is the belief ‘that the brain is a kind of information-processing mechanism, and that information-processing is necessary for cognition’. But I think some would accept that as true in at least some senses while denying that information-processing is sufficient for consciousness, or characteristic of consciousness. There’s really no denying that the brain does computation, at least for some readings of ‘computation’ or some of the time. Indeed, computation is an invention of the human mind and arguably does not exist without it. But that does not make it essential. We can register numbers with our fingers, but while that does mean a hand is a digital calculator, digital calculation isn’t really what hands do, and if we’re looking for the secret of manipulation we need to look elsewhere.

Be that as it may, a review of the arguments is well worthwhile. The first objection is that computationalism is essentially a metaphor; Milkowski rules this out by definition, specifying that his claim is about literal computation. The second objection is that nothing in the brain seems to match the computational distinction between software and hardware. Rather than look for equivalents, Milkowski takes this one on the chin, arguing that we can have computation without distinguishing software and hardware. To my mind that concedes quite a large gulf separating brain activity from what we normally think of as computation.

No concessions are needed to dismiss the idea that computers merely crunch numbers; on any reasonable interpretation they do other things too by means of numbers, so this doesn’t rule out their being able to do cognition. More sophisticated is the argument that computers are strictly speaking abstract entities. I suppose we could put the case by saying that real computers have computerhood in the light of their resemblance to Turing machines, but Turing machines can only be approximated in reality because they have infinite tape and move between strictly discrete states, etc. Milkowski is impatient with this objection – real brains could be like real computers, which obviously exist – but reserves the question of whether computer symbols mean anything. After swatting aside the objection that computers are not biological, it’s this interesting point about meaning that Milkowski tackles next.

He approaches the issue via the Chinese Room thought experiment and the ‘symbol grounding problem’. Symbols mean things because we interpret them that way, but computers only deal with formal, syntactic properties of data; how do we bridge the gap? Milkowski does not abandon hope that someone will naturalise meaning effectively, and mentions the theories of Millikan and Dretske. But in the meantime, he seems to feel we can accommodate some extra function to deal with meaning without having to give up the idea that cognition is essentially computational. That seems too big a concession to me, but if Milkowski set out his thinking in more depth it might perhaps be more appealing than it seems on brief acquaintance. Milkowski dismisses as a red herring Robert Epstein’s argument from the inability of the human mind to remember what’s on a dollar bill accurately (the way a computational mind would).

The next objection, derived from Gibson and Chemero, apparently says that people do not process information, they merely pick it up. This is not an argument I’m familiar with, so I might be doing it an injustice, but Milkowski’s rejection seems sensible; only on some special reading of ‘processing’ would it seem likely that people don’t process information.

Now we come to the argument that consciousness is not computational; that computation is just the wrong sort of process to produce consciousness. Milkowski traces it back to Leibniz’s famous mill argument; the moving parts of a machine can never produce anything like experience. Perhaps we could put in the same camp Brentano’s incredulity and modern mysterianism; Milkowski mentions Searle’s assertion that consciousness can only arise from biological properties, not yet understood. Milkowski complains that if accepted, this sort of objection seems to bar the way to any reductive explanation (some of his opponents would readily bite that bullet).

Next up is an objection that computer models ignore time; this seems again to refer to the discrete states of Turing machines, and Milkowski dismisses it similarly. Next comes the objection that brains are not digital. There is in fact quite a lot that could be said on either side here, but Milkowski merely argues that a computer need not be digital. This is true, but it’s another concession; his vision of brain computationalism now seems to be of analogue computers with no software; I don’t think that’s how most people read the claim that ‘the brain is a computer’. I think Milkowski is more in tune with most computationalists in his attitude to arguments of the form ‘computers will never be able to x’ where x has been things like playing chess. Historically these arguments have not fared well.

Can only people see the truth? This is how Milkowski describes the formal argument of Roger Penrose that only human beings can transcend the limitations which every formal system must have, seeing the truth of premises that cannot be proved within the system. Milkowski invokes arguments about whether this transcendent understanding can be non-contradictory and certain, but at this level of brevity the arguments can really only be gestured at.

The objection Milkowski gives most credit to is the claimed impossibility of formalising common sense. It is at least very difficult, he concedes, but we seem to be getting somewhere. The objection from common sense is a particular case of a more general one which I think is strong; formal processes like computation are not able to deal with the indefinite realms that reality presents. It isn’t just the Frame Problem; computation also fails with the indefinite ambiguity of meaning (the same problem identified for translation by Quine); whereas human communication actually exploits the polyvalence of meaning through the pragmatics of normal discourse, rich in Gricean implicatures.

Finally Milkowski deals with two radical arguments. The first says that everything is a computer; that would make computationalism true, but trivial. Well, says Milkowski, there’s computation and computation; the radical claim would make even normal computers trivial, which we surely don’t want. The other radical case is that nothing is really a computer, or rather that whether anything is a computer is simply a matter of interpretation. Again this seems too destructive and too sweeping. If it’s all a matter of interpretation, says Milkowski, why update your computer? Just interpret your trusty Windows Vista machine as actually running Windows 10 – or macOS, why not?

38 thoughts on “Against Non-Computationalism”

  1. It’s useful, this article.

    It’s actually a good summary of everything that’s wrong with computationalism. Like a lot of computationalists, he doesn’t really seem to grasp very basic, essential points (e.g. “Computers are physical mechanisms”). And to cap it all he gives a classic non-refutation refutation of the Chinese Room…

    But as you say Peter, an argument against an anti-argument falls way short of being a positive argument.. not that “argument” can settle a scientific question.

    JBD

  2. I’m a computationalist, broadly construed, but I’m always interested in arguments against it. However, I find most of them, including most of the ones Miłkowski reviews, to be definition twiddling, usually insisting on some narrow definition of “computation” or “information”, then declaring victory when nervous systems don’t meet that definition.

    But to anyone familiar with both computer engineering and neuroscience, the similarities in the way the two work are extremely compelling. Both involve gated signaling throughout their structures and both interact with the environment through peripheral systems that moderate the inbound effects from the environment down to the level the central system works with, and magnify the causal effects of outbound signalling.

    In other words, both show signs of being information processing systems with input / output interfaces: screens, keyboards, and printers in the case of computers, and the peripheral nervous system, hormones, and musculature in the case of central nervous systems.

    Ultimately, viewing the mind as computational is an outlook that is either fruitful or it isn’t. So far, in neuroscience, it has been very fruitful. But science is an inductive endeavor, and if it is ultimately wrong, then the outlook will eventually start to fray, and some new paradigm will be needed. But that doesn’t appear to be happening.

    If anything, technological computing seems to be getting closer to the way nervous systems work, adopting neural networks, neuromorphic technology, and other changes. This isn’t surprising since technological computing, from its earliest beginnings, has essentially been about reproducing mental activity.

  3. I also found Milkowski’s paper a bit too terse to do this subject justice. I suspect it doesn’t say enough to convince the unconverted.

    I don’t find John Davey’s comment very constructive. It is a classic non-refutation refutation refutation. (I think I got that right.) However, his last paragraph does hit one nail on the head. The issue won’t be settled until we have a conscious AI program or completely understand how the brain works. The non-computationalists are presumably not working on this so, computationalists, it is in your hands!

  4. Is this about the potential of any kind of AI to transcend before we do…either way–a personal experience…
    …Does experience proceed (transcend) computationalism…

  5. Peter, I love your hand / digital calculator metaphor. Just perfect.

    We don’t have to give a single explanation of all conscious phenomena. It’s possible that computation (plus Millikanish and Dretskean connections) is the essence of semantic consciousness, a.k.a. intentionality, while not being the essence of phenomenal consciousness. And by “possible” I mean “true”.

  6. Doesn’t seem to mention Fodor etc on holism and abduction – these don’t necessarily exclude computationalism (see any Bayesian network), but that is also true of the objections he does list.

  7. Hi,

    Thanks for the interest in my work. As you might have noticed, it’s a brief paper – actually, a preview of a conference poster. You cannot expect deep elucidation, of course. Follow the references; I think they discuss most of the arguments at length.

    As for not understanding the basics of physical computation, well, get my 2013 MIT Press book. It lists much more justification of my view on computation. Indeed, I think many people confuse physical computation as such with how they conceive of industrially built computers (mostly digital von Neumann machines).

    BTW, I also debunk Fodor on holism in Chapter 5 there, and I have a forthcoming expanded version of this short paper where I include a slightly longer discussion of embodiment and the notion of symbols.

    I don’t think that computationalism in the stronger version is tenable because representation goes beyond computation. I discuss the reasons why in Chapter 4 of the book. I have some papers about representation as well, most of them are on my academia.edu page.

    Note that you cannot explain a number of features of computers by appealing to their algorithms either: you can explain why a certain algorithm runs faster (because of its computational complexity) but you cannot predict its actual running time without accounting for the hardware. So it’s not a concession to say that you need to know the physical implementation. Actually, every time you explain something as a physical computer, you go beyond formal properties. So it would be indeed surprising to see that consciousness is really different, in particular because many people link it to representation. But I don’t want to claim anything about consciousness at all.

    Thanks again, Marcin

    Many thanks for responding, Marcin, and for the clarifications. Clearly I should read your book! – Peter

  8. I’ve spent a lot of time lately watching a small stream. Water tumbles over rocks, forming standing waves, eddies, bubbles (which contribute the burbles), and all kinds of turbulence. The surface of the water reflects both the bed of the stream and the evolution of the water over time under the applicable forces (gravity, hydrodynamics, etc).

    The computationalist would have me believe that the stream is somehow calculating the evolution of the system over time. Or that each water molecule is calculating its position from moment to moment (though moments don’t actually exist outside a human mind). Or some such thing. I’m entirely unconvinced that this is primarily an informational problem. No doubt we humans can extract information from the system. We might trace trajectories in macro or micro terms. But nothing about the process makes it like a computer. On the contrary the analogies work better the other way around – a computer is like a complex hydraulic system.

    As far as I can see the currents flowing around my brain are just like the stream. While we might model this using computational paradigms, as far as I can see my brain is not a computer in any sense. It has no halting state except death. It’s just a complex system evolving over time in regular ways – which we can conceive of as “rules” or “laws” or whatever. Nothing about it screams computation.

    Incidentally I heard Anil Seth on The Infinite Monkey Cage (BBC Radio 4) last night. He was arguing for the idea that we live in a simulation. Every time he started to explain the logic of his proposition, he asked the audience to accept some axiom that he plucked out of the air and did not even bother to justify – all of which struck me as demonstrably false or, at best, highly questionable. Then, having accepted his own assumptions as true, he proceeded to deduce that we live in a simulation.

    Simulationists, computationalists, and many others all seem to have this same method: set up axioms from assumptions based on the belief system one is promoting, use simple straight-line deduction to reproduce those starting assumptions as a paraphrase, thus “proving” one’s belief is valid.

    How do they get away with it?

  9. The notions of common sense, the frame problem and humans as computers can reflect the fact that we are composed of multiple computers. The simpler computers or modules make us seek pleasure vs avoid pain, protect our young, our community, our home, etc. Digital computers can run subroutines to simulate the more basic functions, or even have sensors that sense over-temperature and low battery that make them shut down or enter a low-power mode respectively.

    Computation itself is actually a unique form of human language based in the universal human language of mathematics. Human languages themselves are actually metadata: we don’t directly pass on the qualia of color, but talk about red, green, blue, etc. Mathematics is the event qualia embedded in the events that we call the environment, whether it is the one outside us or the ones we are tracking in our brain biology. It just seems the arguments wrap around themselves and beg to be explained away once we have a better handle on the interaction of all of the nervous structures which make up the system we call the brain.

  10. Marcin


    “But I don’t want to claim anything about consciousness at all.”

    Does that mean you’re a computationalist or not ? If you claim to be ambivalent about consciousness I presume that means you don’t think (or rather, don’t claim) that computation causes consciousness.

    I’m interested in the last paragraph of your paper which deals with the biggest problem (imho) with computationalism – its irredeemable observer-relativity.


    The contemporary consensus is that computational models can be used to adequately describe causal connections in physical systems, and that these models can also be falsely ascribed

    Can you give me an example of this, and the reason why this can be used as a basis for refuting observer relativism ?


    In other words, computational models are not different in kind from any mathematical model used in science.

    Of that I can agree.


    If they are mere subjective metaphors and don’t describe reality, then mathematical models in physics are subjective as well

    Physics uses mathematical modelling – computation if you like – to produce predictive theories of physical metrics.

    The theories are “observer relative” as it’s impossible for a theory – as a mental/cultural artefact – to subsist any other way.
    So what exactly is the significance of the above claim ? That physics is synonymous with reality – the same thing ??!! I really don’t see how the conclusion can be used to refute Searle.


    why buy a new computer instead of ascribing new software to the old one?

    Is this really a serious argument Marcin .. come on. I think you got bored at the end of the paper !

    Regards
    JBD

    Micha


    Is there a way to subscribe to followup emails without meaningless comments like this one?

    If it was meaningless you wouldn’t have been annoyed ..

    But surely if it was meaningless you wouldn’t have got so annoyed ..

    (sorry doing this via phone and don’t know if it’s working)

  11. Observation may not be relative of computationalism or consciousness, but may be value apparent…

  12. JBD:

    You and Peter misunderstood, so I guess the fault is mine.

    I wrote (comment #2): “Is there a way to subscribe to followup emails without meaningless comments like this one?”

    The only subscription form is the checkbox when submitting a post: “Notify me of follow-up comments by email.” Since I wanted to see comments but had nothing to add, I had to write a dummy comment. I was asking, for the future, whether there is a way to subscribe without writing meaningless comments like #2. Self-referentially. (Douglas Hofstadter would be proud.)

    My intent was not to insult comment #1, yours.

    In fact, I had just emailed Peter asking him to delete my original comment, as I saw his reply (#3) and realized that I had inadvertently given insult. None was intended.

    [Original comments deleted – Peter]

  13. “But I don’t want to claim anything about consciousness at all.”
    > Does that mean you’re a computationalist or not ? If you claim to be
    > ambivalent about consciousness I presume that means you don’t think (or
    > rather, don’t claim) that computation causes consciousness.

    Computationalism is logically independent from views on consciousness.

    > I’m interested in the last paragraph of your paper which deals with the
    > biggest problem (imho) with computationalism – its irredeemable
    > observer-relativity.

    There’s nothing special about it. Either you are a scientific realist, or not. One might defend many different positions here; I am myself a staunch realist but this is not the only possible stance (see Stathis Psillos on the subject).

    Anyway, I analyze this at length in chapter 2 of my book. Lots of examples. You don’t expect me to repeat myself here, I hope.

    > why buy a new computer instead of ascribing new software to the old one?
    > Is this really a serious argument Marcin .. come on. I think you got
    > bored at the end of the paper !

    One of the most serious. It just shows how preposterous and counterintuitive all kinds of observer-relativity (or any kind of idealism, for that matter) are. Nobody of sane mind should believe it. It makes no sense, and nobody really does believe it.

    >The computationalist would have me believe that the stream is somehow
    > calculating the evolution of the system over time. Or that each water
    > molecule is calculating its position from moment to moment (though
    > moments don’t actually exist outside a human mind). Or some such thing

    Not at all. Neither I nor Gualtiero Piccinini in his latest book on physical computation (OUP 2016) would defend this extreme view.

    Trying to find an extreme view to debunk it is called “straw-man fallacy”, by the way.

  14. JoS…lost in pretentiousness I add, measuring, moving and interacting three forces (positive negative neutral)…
    …Thanks for the link, Arnold

  15. Marcin


    Anyway, I analyze this at length in chapter 2 of my book. Lots of examples. You don’t expect me to repeat myself here, I hope.

    Actually I was rather hoping you would, rather than putting us to the trouble of reading a whole book for points that can be summarized in a few paragraphs ?

    There was a question which I asked in the previous contribution which might have not been obvious.


    The contemporary consensus is that computational models can be used to adequately describe causal connections in physical systems, and that these models can also be
    falsely ascribed

    Can you furnish me with an example of this ?


    If they are mere subjective metaphors and don’t describe reality, then mathematical models in physics are subjective as well

    Can you explain to me why this is a refutation of observer-relativity ? (see previous submission for points I raised on this )


    One of the most serious. It just shows how preposterous and counterintuitive all kinds of observer-relativity (or any kind of idealism, for that matter) are. Nobody of sane mind should believe it. It makes no sense, and nobody really does believe it.

    If the initial point wasn’t an argument, then this definitely wasn’t ! Don’t take it personally Marcin, it’s just a debate.

    An observer can of course continue to ascribe new software to the logical outputs that the hardware of the computer generates, but it won’t affect the hardware as that is limited by the ascriptions that were fixed at design time by the chip designer.

    Does that help ?

    JBD

  16. “Actually I was rather hoping you would, rather than putting us to the trouble of reading a whole book for points that can be summarized in a few paragraphs ?”

    The chapter is 100 pages long in print. Sorry, I don’t even know where to start. Maybe let me say that the standard for publishing papers in computational psychology has shifted a lot. In the 1960s, we had books like “Plans and the Structure of Behavior”, which were totally based on armchair speculation. Then there was a time of limited acceptance of response times (Posner), eye-tracking and verbal reports (Simon & Newell). Now, we have people trying to use neuroimaging and EEG, optogenetics, patch clamps, and TMS to see whether the models are validated. In other words, the models are supposed to predict not only the output, given certain input (basically: stimuli), but also the process that generates the output. There are various kinds of models but this is what you get as the basic standard in journals.

    My story about this appeals to mechanistic explanation and I am unable to summarize all this in a single blog post or a comment. The comic-strip version is to say: just use what people use to explain things in biology, i.e., mechanisms as spatiotemporal collections of processes and components that are responsible for phenomena.

    A falsified example is Rumelhart & McClelland 1984 – Pinker & Prince showed it would be behaviorally incorrect for a large class of data (so it is inaccurate at the input / output level). This is why connectionists have changed their underlying assumptions about the representation of past-tense forms. You could also argue that early models of cricket phonotaxis produced by Barbara Webb were false (and even intentionally so) because they had physiologically incorrect assumptions about inhibitory neurons; this was changed in later models to make them more biologically realistic (I discuss the failings of models further in my four case studies in chapter 5).

    “Can you explain to me why this is a refutation of observer-relativity ? (see previous submission for points I raised on this )”

    This is a refutation because if you think that physics is also observer-relative, then all empirical science is, and therefore computationalism does not fall below the standards of physics. Which is a vindication, rather than a refutation, of computationalism. I also assume that calling physics ‘observer-relative’ in a strong sense (the same as Searle has used in his papers) is just silly. Sorry, this is a strong word, but this is what it is: you cannot bend reality to give a certain result of an experiment. You can try to interpret it differently but this is a totally innocuous kind of observer-relativity. To make the story short, there’s a nice distinction in Dennett between lovely and suspect properties: properties are suspect when you need someone to suspect them to be some way, and they are lovely when there could be agents that would respond to them as lovely (nobody actually needs to observe them). The observer-relativity in question is of the lovely kind, and it does not undermine the intersubjective validity of science. To claim otherwise would be to say that you can actually change the laws of physics (out there) by merely thinking of them.

    “An observer can of course continue to ascribe new software to the logical outputs that the hardware of the computer generates, but it won’t affect the hardware as that is limited by the ascriptions that were fixed at design time by the chip designer.
    Does that help ?”

    Sure. You can ascribe various kinds of theories to your measurements, and usually, no experiment will license the whole theory. This is an obvious fact, and nobody should deny it. However, we can ascribe only those kinds of software (or machine states, if the machine does not use software) that are in the same equivalence class as other kinds of software that match the measurement. This does not give rise to the result that Searle has talked about; he needs to say that there’s an absolutely arbitrary freedom to ascribe *any* computation to any sufficiently complex piece of the world. Yeah, right, you can say that it’s 10 degrees centigrade by looking at my thermometer right now (it’s 31 degrees centigrade), but it would be just a false description. But you can of course describe the temperature in Fahrenheit or Kelvin. Still, how exciting could this be? These are all equivalent.

    You could of course be more inventive and use different kinds of idealizations. Then it wouldn’t be all in the same equivalence class. Sure, but it’s still not really different from how science works.

    I am not interested in showing that there is some absolutely-real reality out there. I only show that computational theories work just like other theories in natural science. If you think that natural science does not give access to reality, then you should rather discuss realism and anti-realism about science, not the computational theory of mind. To discuss the CTM in this context is to introduce a red herring: there’s nothing special about the CTM in the realism debate. It’s one of zillions of modeling frameworks. It is the merits of the no-miracles argument and the pessimistic induction which are relevant here, rather than debating the mapping between software and physical reality.

  17. Marcin


    I also assume that calling physics ‘observer-relative’ in a strong sense (the same as Searle has used in his papers) is just silly.

    Sorry, this is a strong word, but this is what it is: you cannot bend reality to give a certain result of an experiment.

    Do you understand that the terms ‘observer-relative’ and ‘subjective’ in this context have nothing to do with ‘subjective’ in the context of an opinion ?

    For instance, it is an objective fact (not an opinion) that I have subjective mental experiences (perspective).
    It is a subjective claim (an opinion) of computationalists that subjective mental experiences cannot be objectively demonstrated

    Etc.

    The words “subjective” and “objective” can be applied to facts/opinions AND to ontological perspective (1st person view). They mean totally different things in either context. You mustn’t get them mixed up.

    Physics is an objective mathematical/scientific discipline, but as a theoretical construct it has an observer-relative, subjective ontology. That doesn’t mean we bend the facts. It just means that theories – like books, numbers and words – only exist in mental or social space. It has no bearing whatsoever on the issues of scientific objectivity.


    To claim otherwise would be to say that you actually can change laws of physics (out there) by merely thinking of them.

    I think this demonstrates the point I made above.


    Sure. You can ascribe various kinds of theories to your measurements, and usually, no experiment will license the whole theory. This is an obvious fact, and nobody should deny it. However, we can ascribe only those kinds of software (or machine states, if the machine does not use software) that are in the same equivalence class as other kinds of software that match the measurement. This does not give rise to the result that Searle has talked about; he needs to say that there’s an absolutely arbitrary freedom to ascribe *any* computation to any sufficiently complex piece of the world. Yeah, right, you can say that it’s 10 degrees centigrade by looking at my thermometer right now (it’s 31 degrees centigrade), but it would be just a false description. But you can of course describe the temperature in Fahrenheit or Kelvin. Still, how exciting could this be? These are all equivalent.

    I’m mystified as to what any of that has to do with observer-relative computing.

    Let’s keep it basic. If I say the Sun has value ‘1’, what’s wrong with that ?

    regs
    JBD

  18. “Do you understand that the terms ‘observer-relative’ and ‘subjective’ in this context have nothing to do with ‘subjective’ in the context of an opinion?”

    You cannot understand something that is utterly false. It has only to do with mere fancy, or subjective opinion. Take Searle’s example of his subjective description of a wall as implementing Wordstar. It’s not about the wall’s 1st person view (whatever this might be according to some panpsychist). It’s about some special status of the CTM vis-a-vis standard science, at least in Searle’s view, and according to Putnam as well. They never talked about anything else, just epistemic subjectivity, i.e., observer-relativity.

    “The word “Subjective” and “Objective” can be applied to facts/opinions AND to ontological perspective (1st person view). They mean totally different things in either context. You mustn’t get them mixed up.”

    I’m sorry, but I fail to follow what the 1st person view is in relation to computation. Do you think that there is something it is like to be a Turing machine? It strikes me as odd to say something like this.

    “Physics is an objective mathematical/scientific discipline, but as a theoretical construct has an observer-relative, subjective ontology. ”

    Again, it’s a lovely kind of subjectivity – not the suspect one. Moreover, an ontology may be understood as the collection of things presupposed to exist, and those things are not at all merely subjective in physics.

    “I’m mystified as to what any of that has to do with observer-relative computing.”

    Maybe it’s my fault, but what I claim is that computation is observer-relative in exactly the way temperature is in my examples. Clear now?

    “Let’s keep it basic. If I say the Sun has value ‘1’, what’s wrong with that?”

    For starters, it gives no predictive power over and above what we know from physics. Also, it doesn’t track any causal connections, so it just sounds like an arbitrary mapping to me. It’s not about computation; it’s about your labeling things. You are not even ascribing an algorithm, just an arbitrary value in an unknown number system (is it decimal or binary?). Moreover, from what we know from astronomy about the solar system, it’s not a mechanism whose component parts and operations have been selected for computing the number 1. So it fails the basic conditions I give for physical computation.

    At the end of this paper (open access), you can find a checklist for a computational modeler. Run your examples through it to see how my account works:

    https://link.springer.com/article/10.1007/s11229-015-0731-3

  19. marcin

    I’m also a bit concerned about this rather spurious linkage between physics and computers.

    We need to distinguish between physics and computation in this sense:

    i) Physics is a scientific discipline predicated on the belief that there exists an external world and that there are appropriate methods for investigating it – epistemic objectivity.

    ii) Computation is an engineering discipline. There is nothing to find out per se other than refinements of technique. It is an epistemically objective discipline based upon the use of an arbitrary link between a physical metric and a symbol (an observer-relative, subjective linkage). I used the word ‘ontology’ before; I’m not sure that was appropriate for a discipline or theory, so I’ll use ‘linkage’.

    In short, I don’t see the value in linking them together, except as a spurious justification for the practice of modelling brains as computers.

    JBD

  20. Marcin


    I’m sorry, but I fail to see what the first-person view is in relation to computation.

    I’m satisfied that you have failed to grasp the difference between the perspectival and epistemic senses of the term “subjective”.
    If you did, you wouldn’t be asking this question.

    The linkage between the physical metric and the symbol used in any computer is observer-relative. I can ascribe whatever value I like to anything. Unless I am there to define it, no such linkage exists.

    It’s like a book. A book is only a book to a person who can read it. It has an observer-relative existence.

    The atoms making up the book and the ink on the page are not observer-relative.

    Do you see the difference?


    For starters, it gives no predictive power over and above what we know from physics. Also, it doesn’t track any causal connections, so it just sounds like an arbitrary mapping to me.

    It IS an arbitrary mapping. Can you tell me what’s wrong with that? All computers that have ever been made have used arbitrary mappings.

    JBD

  21. Have you ever read a single paper in computational neuroscience? It’s not an engineering discipline at all (or at least it’s in the business of reverse-engineering, like all the life sciences). Try this paper as a good paradigmatic example:

    http://science.sciencemag.org/content/338/6111/1202

    My job is to understand real science, not armchair speculation about how we might build computational models by ascribing numbers to planets.

  22. “It IS an arbitrary mapping. Can you tell me what’s wrong with that? All computers that have ever been made have used arbitrary mappings.”

    I already said that my account excludes all arbitrary mappings because they give no explanatory or predictive power. I can repeat this.

    You are also confusing the semantic values of computational vehicles with the computation itself. My account of computation is not linked to representation at all. It’s linked to physical differences in the ink, to use your book metaphor.

  23. Marcin

    What the discipline of “computational neuroscience” is, is irrelevant to the conversation – which is about your article on the arguments against computationalism.

    In particular, you addressed the topic – directly – of Searle’s argument that all computation is observer relative.


    I already said that my account excludes all arbitrary mappings because they give no explanatory or predictive power. I can repeat this.

    What does this have to do with Searle’s point that computation is observer-relative? If computation is observer-relative, then all mappings are arbitrary – in fact, they must be.

    You cannot simply decide that computation is not observer-relative because it won’t predict things. You can’t counter an argument about a mode of existence by appealing to a lack of utility.

    Why isn’t computation observer-relative? Address the question, and please don’t refer me to some link – I’m not interested. I want your view, in your reply.

    JBD

  24. Computational neuroscience is directly relevant because it is the discipline that is currently the best way of developing the computational theory of mind, or computationalism. Nothing is more relevant, and Searle’s views from the beginning of the 1980s are largely outdated, though in retrospect they helped us reject the wrong view that computation is based on a simple mapping.

    Yes, this is what I presuppose: the simple mapping view – the view that whether there is computation depends merely on whether you can map a physical structure onto a mathematical model – is wrong, mostly (but not only) because:

    1) it doesn’t describe our best scientific practice;
    2) it allows for spurious mappings;
    3) it makes everything implement any computation (catastrophic pancomputationalism), which implies that you as an observer can ascribe Windows 10 to your old Nokia phone, and it then counts as a Windows 10 machine;
    4) it doesn’t license any predictive or explanatory connections;
    5) it generalizes to any mathematical model (so it trivializes all of physics as well).
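
    The triviality in point 3 can be exhibited with a toy construction (an illustrative sketch of the Putnam-style recipe, not code from the paper; all names are made up): given any sequence of distinct physical states, one can always manufacture a mapping under which the system “implements” any computational run of the same length.

```python
# Toy Putnam-style construction: relabel any physical state sequence as any
# equal-length run of computational states. The mapping always exists, which
# is exactly why the simple mapping view trivializes computation.

def arbitrary_implementation(physical_trace, computational_run):
    """Build a state-to-state mapping purely by position in the traces."""
    assert len(physical_trace) == len(computational_run)
    mapping = {}
    for phys, comp in zip(physical_trace, computational_run):
        mapping.setdefault(phys, comp)  # distinct states relabel freely
    return mapping

# Three "states" of the Sun mapped onto a made-up three-step machine run:
sun_states = ["sun@t0", "sun@t1", "sun@t2"]
machine_run = ["q0", "q1", "halt"]

m = arbitrary_implementation(sun_states, machine_run)
print(m)  # {'sun@t0': 'q0', 'sun@t1': 'q1', 'sun@t2': 'halt'}
# The mapping tracks no causal structure and predicts nothing, so it fails
# the basic conditions for genuine physical computation.
```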

    In short, the simple mapping view has to be rejected, and we should thank Putnam and Searle for bringing this to our attention. What we require instead, following Gualtiero Piccinini, is that (my full list has over 10 items, but let’s be brief):

    1) computation has to be predicated on mechanisms, not just on arbitrary chunks of matter;
    2) computational descriptions fit the causal dynamics of mechanisms;
    3) the function of these mechanisms is to compute;
    4) the mechanisms have to be understood as in the new mechanistic framework of explanation;
    5) the causal dynamics in question need to pertain to medium-independent vehicles of structural information.

    Now, if the causal dynamics of the vehicles do not predict or explain the process in question, then the ascribed computation is spurious. This is the case with ‘1’ ascribed to the Sun, or with the Wordstar wall.

    Because the notion of function may be interest-relative, we still have the innocuous (lovely) observer-relativity, but the ascription may be wrong, so ascribing a function is not just our fancy. For example, you may be wrong in claiming that the function of the lungs is to pump blood (the history of medicine is replete with mistaken ascriptions).

    Computation is not a mapping. Computation is a causal process happening in a physical spatiotemporal mechanism, whose causal dynamics over medium-independent vehicles explain why and how certain states occur. This is why computation requires energy.

  25. marcin


    “Computation is not a mapping”

    What physical law dictates that a mapping is not arbitrary?

    Regs
    JBD

  26. marcin

    In other words, what inherent properties of matter give a symbol its unassailable identity? And what are those symbols, and what do they mean?

    regs
    JBD

  27. marcin

    It sounds to me like you might be on the hunt for the “hitchhiker’s guide” element – that bit of the universe that is pure information. Good luck with that.

    The answer is 42, by the way.

    JBD

  28. “What physical law dictates that a mapping is not arbitrary?”

    None. Physical laws have little bearing on modeling in the life and cognitive sciences, unless you understand the notion of law in a non-standard way (these laws are not universally true at all spacetime locations; at best they are ceteris paribus laws). Basically, we’re talking only of invariant causal generalizations of limited scope.

    So mappings between a model of computation and its target (a computational mechanism) that adhere to true invariant causal generalizations are not arbitrary. All others are arbitrary. Yours is not even a complete mapping, because ‘1’ is not a description of a computation but a numeral.
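
    The contrast between arbitrary and non-arbitrary mappings can be sketched as a small check (an illustrative sketch under simplifying assumptions, not Milkowski’s formal machinery): a candidate mapping from physical to computational states is non-arbitrary in the relevant sense only if it commutes with the system’s causal dynamics.

```python
# Toy homomorphism check: decode(evolve(s)) == step(decode(s)) for every
# state s, i.e. the computational description tracks the causal dynamics.

def commutes(states, evolve, decode, step):
    """True iff evolving then decoding equals decoding then stepping."""
    return all(decode(evolve(s)) == step(decode(s)) for s in states)

# A two-state physical system whose voltage levels realize logical NOT:
states = ["low", "high"]
evolve = {"low": "high", "high": "low"}.get   # causal dynamics
decode = {"low": 0, "high": 1}.get            # candidate mapping
step = lambda bit: 1 - bit                    # the computation (NOT)

print(commutes(states, evolve, decode, step))  # True: tracks the dynamics

# A mapping that ignores the causal structure (everything labeled '1',
# like the Sun example) fails the check:
bad_decode = {"low": 1, "high": 1}.get
print(commutes(states, evolve, bad_decode, step))  # False
```

    On this toy criterion, only mappings that respect the state transitions count, which is the intended sense in which not all mappings are arbitrary.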

    “In other words, what inherent properties of matter give rise to unassailable identity to a symbol ? And what are those symbols, and what do they mean ?”

    It would be more useful to ask questions related to what I have said, not to your own fantasies. I have never used the notion of symbol, and I find it utterly useless in this debate. The notion of symbol has been multiply ambiguous in the computational debates and has outlived its utility (I have found at least seven meanings of this notion in the literature, and people easily confuse one with another).

    What I require is that there is at least one physical degree of freedom. If you think that this is difficult to find in the physical realm, you must have a really strange view of physics. Nowhere did I say anything about pure information. The notion of information I am talking about here – and you obviously did not care to ask about it, but thought it would be funny to make a really old joke instead – is the notion of structural information as defined by Donald MacKay in his excellent 1969 book. This is part of standard information theory, not some crazy voodoo you seem to ascribe to me.

    Really, it makes little sense to try to engage with someone who is more interested in trolling me than in following the argument. Just read my freely available papers and the book instead of talking to a straw-man version of me.

  29. What does hope look like in automatonism
    …is our every-second-on-going experience of conflict…
    …the causal process-needed for physical spatiotemporal mechanisms to provide energy…
    …independence can hardly be presumed without a better look where-medium is…
    …dynamically-inside the physical and outside the physical…

  30. @Jayarava

    > Incidentally I heard Anil Seth on The Infinite Monkey Cage (BBC Radio 4) last night. He was arguing for the idea that we live in a simulation

    Anil Seth wasn’t arguing that we were living in a simulation on The Infinite Monkey Cage. He was highly skeptical of the idea.

    Nick Bostrom was the one arguing that we might be living in a simulation, but even he wasn’t saying we actually were – only that it was one of three possibilities. He did, however, take computationalism for granted, which he probably shouldn’t, as it is controversial (even though I’m a computationalist myself).

  31. There have been quite basic experiments where two chopstick tips are placed on someone’s back, at first some distance apart. The subject tells the researcher how many tips they can feel. As the tips get closer and closer, eventually the subject cannot tell them apart and just reports one tip, when in truth there are two.

    Is there much thought given to this in the anti-computationalism arguments – that what appears to be the case with regard to meaning might just be an inability to detect? Meaning comes up as just one thing, but really it’s two or perhaps more things at once.

  32. Good stuff…Is “detection” then, a process to generate energy-force…
    …positive-negative-neutral force, active-passive-neutral force, and…

    Today–computationalism and quantificationalism are not viewed as energy-force in any science…
    …They remain some kind of philosophy without a meta-physical possibility…

  33. Marcin


    It would be more useful to ask questions related to what I have said and not to your own fantasies. I have never used the notion of symbol and I find it utterly useless in this debate. The notion of symbol has been multiply ambiguous in the computational debates, and it has outlived its utility

    OK. Let’s assume Putnam etc. are correct.

    What decides a) what metrics constitute the entire definitive state of the system, and b) how this is linked to the computational state?

    If the computational state IS the physical state, what metrics define the physical state in its entirety – who scopes it?
    Is the scope of the system inherent? Do the components of the system have inherent properties linking one to the other?

    If there is no inherent scope, then the system must be observer-relative. Back to square one?

    And if the computational state “mirrors” the state transitions, doesn’t this imply separation? Where does this separation originate, if it isn’t observer-relative?

    Regs
    JBD

  34. Marcin


    Really, it makes little sense to try to engage with someone who has more vested interest to troll me rather than follow the argument. Just read my freely available papers and the book instead of talking to the straw man version of me.

    Don’t take things so personally… it’s just an argument.

    JBD

  35. Marcin

    As far as Piccinini is concerned, I’m not clear how he decides what the “function” is. Where does the scope for the function arise? Isn’t this just a restatement of functionalism, which was also observer-relative? Does he ever use mathematics and a formal working structure, and are there any working examples of these models? Do you have links?

    Regs
    JBD

  36. Fascinating exchange between Marcin and John. I think you guys have beaten this beautifully to life.
