Flatlanders

Wrong again: just last week I was saying that Roger Penrose’s arguments seemed to have drifted off the radar a bit. Immediately, along comes this terrific post from Scott Aaronson about a discussion with Penrose.

In fact it’s not entirely about Penrose; Aaronson’s main aim was to present an interesting theory of his own as to why a computer can’t be conscious, which relies on non-copyability. He begins by suggesting that the onus is on those who think a computer can’t be conscious to show exactly why. He congratulates Penrose on doing this properly, in contrast to, say, John Searle, who merely offers hand-wavy stuff about unknown biological properties. I’m not really sure that Searle’s honest confession of ignorance isn’t better than Penrose’s implausible speculations about unknown quantum mechanics, but we’ll let that pass.

Aaronson rests his own case not on subjectivity and qualia but on identity. He mentions several examples where the limitless copyability of a program seems at odds with the strong sense of a unique identity we have of ourselves – including Star Trek-style teleportation and the fact that a program exists in some Platonic sense forever, whereas we only have one particular existence. He notes that at the moment one of the main differences between brain and computer is our ability to download, amend and/or re-run programs exactly; we can’t do that at all with the brain. He therefore looks for reasons why brain states might be uncopyable. The question is, how much detail do we need before making a ‘good enough’ copy? If it turns out that we have to go down to the quantum level, we run into the ‘no-cloning’ theorem; the price of transferring the quantum state of your brain is the destruction of the original.

Aaronson makes a good case for the resulting view of our probable uniqueness being an intuitively comfortable one, in tune with our intuitions about our own nature. It also offers, incidentally, a sort of reconciliation between the Everett many-worlds view and the Copenhagen interpretation of quantum physics: from a God’s eye point of view we can see the world as branching, while from the point of view of any conscious entity (did I just accidentally call God unconscious?) the relevant measurements are irreversible and unrealised branches can be ‘lopped off’. Aaronson, incidentally, reports amusingly that Penrose absolutely accepts that the Everett view follows from our current understanding of quantum physics; he just regards that as a reductio ad absurdum – i.e., for him the Everett view is so absurd that its following from our current understanding proves there must be something wrong with that understanding.

What about Penrose? According to Aaronson, he now prefers to rest his case on evolutionary factors and downplay his logical argument based on Gödel. That’s a shame in my view. The argument goes something like this (if I garble it someone will perhaps offer a better version).

First we set up a formal system for ourselves. We can just use the letters of the alphabet, normal numbers, and normal symbols of formal logic, with all the usual rules about how they can be put together. Then we make a list consisting of all the valid statements that can be made in this system. By ‘valid’, we don’t mean they’re true, just that they comply with the rules about how we put characters together (eg, if we use an opening bracket, there must be a closing one in an appropriate place). The list of valid statements will go on forever, of course, but we can put them in alphabetical order and number them. The list obviously includes everything that can be said in the system.
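(Purely as an illustration, and my own toy sketch rather than anything Penrose offers: here is roughly what such a numbered list looks like in Python. The alphabet and the ‘validity’ check are stand-ins for the real symbols and formation rules.)

    from itertools import count, product

    # Toy stand-ins for the symbols and formation rules of the formal system.
    ALPHABET = sorted("()~&|=xyz0123456789")

    def is_valid(statement):
        # A toy 'validity' check: brackets must balance and never close before
        # they open (a stand-in for the real syntactic rules of the system).
        depth = 0
        for ch in statement:
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth < 0:
                    return False
        return depth == 0

    def numbered_statements():
        # Yield (number, statement) pairs: every valid string eventually appears
        # exactly once, ordered by length and then alphabetically.
        n = 0
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                s = "".join(chars)
                if is_valid(s):
                    n += 1
                    yield n, s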

Some of the statements, by pure chance, will be proofs of other statements in the list. Equally, somewhere in our list will be statements that tell us that the list includes no proof of statement x. Somewhere else will be another statement – let’s call this the ‘key statement’ – that says this about itself. Instead of x, the number of that very statement itself appears. So this one says, there is no proof in this system of this statement.
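(For anyone who likes it in symbols, this is my own sketch rather than Penrose’s notation: the key statement G is one that, within the system, is equivalent to the assertion that the system contains no proof of it.)

    G \;\leftrightarrow\; \neg\,\mathrm{Prov}\big(\ulcorner G \urcorner\big)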

Is the key statement correct – is there no proof of the key statement in the system? Well, we could look through the list, but as we know, it goes on indefinitely; so if there really is no proof there we’d simply be looking forever. So we need to take a different tack. Could the key statement be false? Well, if it is false, then what it says is wrong, and there is a proof somewhere in the list. But that can’t be, because if there’s a proof of the key statement anywhere, the key statement must be true! Assuming the key statement is false leads us unavoidably to the conclusion that it is true, in the light of what it actually says. We cannot have a contradiction, so the key statement must be true.

So by looking at what the key statement says, we can establish that it is true; but we also establish that there is no proof of it in the list. If there is no proof in the list, there is no possible proof in our system, because we know that the list contains everything that can be said within our system; there is therefore a true statement in our system that is not provable within it. We have something that cannot be proved in an arbitrary formal system, but which human reasoning can show to be true; ergo, human reasoning is not operating within any such formal system. All computers work in a formal system, so it follows that human reasoning is not computational.

As Aaronson says, this argument was discussed to the point of exhaustion when it first came out, which is probably why Penrose prefers other arguments now. Aaronson rejects it, pointing out that he himself has no magic ability to see “from the outside” whether a given formal system is consistent; why should an AI do any better – he suggests Turing made a similar argument. Penrose apparently responded that this misses the point, which is not about a mystical ability to perceive consistency but the human ability to transcend any given formal system and move up to an expanded one.

I’ll leave that for readers to resolve to their own satisfaction. Let’s go back to Aaronson’s suggestion that the burden of proof lies on those who argue for the non-computability of consciousness. What an odd idea that is! How would that play at the Patent Office?

“So this is your consciousness machine, Mr A? It looks like a computer. How does it work?”

“All I’ll tell you is that it is a computer. Then it’s up to you to prove to me that it doesn’t work – otherwise you have to give me rights over consciousness! Bwah ha ha!”

Still, I’ll go along with it. What have I got? To begin with I would timidly offer my own argument that consciousness is really a massive development of recognition, and that recognition itself cannot be algorithmic.

Intuitively it seems clear to me that the recognition of linkages and underlying entities is what powers most of our thought processes. More formally, both of the main methods of reasoning rely on recognition; induction because it relies on recognising a real link (eg a causal link) between thing a and thing b; deduction because it reduces to the recognition of consistent truth values across certain formal transformations. But recognition itself cannot operate according to rules. In a program you just hand the computer the entities to be processed; in real world situations they have to be recognised. But if recognition used rules and rules relied on recognising the entities to which the rules applied, we’d be caught in a vicious circularity. It follows that this kind of recognition cannot be delivered by algorithms.

The more general case rests on, as it were, the non-universality of computation. It’s argued that computation can run any algorithm and deliver, to any required degree of accuracy, any set of physical states of affairs. The problem is that many significant kinds of states of affairs are not describable in purely physical or algorithmic terms. You cannot list the physical states of affairs that correspond to a project, a game, or a misunderstanding. You can fake it by generating only sets of states of affairs that are already known to correspond with examples of these things, but that approach misses the point. Consciousness absolutely depends on intentional states that can’t be properly specified except in intentional terms. That doesn’t contradict physics or even add to it the way new quantum mechanics might; it’s just that the important aspects of reality are not exhausted by physics or by computation.

The thing is, I think long exposure to programmable environments and interesting physical explanations for complex phenomena has turned us all increasingly into flatlanders who miss a dimension; who naturally suppose that one level of explanation is enough, or rather who naturally never even notice the possibility of other levels; but there are more things in heaven and earth than are dreamt of in that philosophy.

128 thoughts on “Flatlanders”

  1. I think that the Lucas/Penrose Gödelian line of argumentation needs to be retired for good—it really doesn’t establish anything whatsoever regarding the computational nature of mind.

    There are several reasons for this. One is that it’s ultimately circular: Aaronson goes somewhat into that point, but then doesn’t quite make it. The thing is, we can only ‘see’ that the Gödel sentence G(T) of a theory T is true, if we suppose that T is consistent (otherwise, T could very well prove that ‘this sentence cannot be proved within T’—in fact, it necessarily would, since an inconsistent theory proves all formulas by the principle of explosion). In fact, G(T) is equivalent to the assertion Con(T) that T is consistent—and hence, cannot be proved within T.
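    (In symbols, as a rough sketch, assuming T is consistent, effectively axiomatised, and strong enough for basic arithmetic:

        T \vdash G(T) \leftrightarrow \mathrm{Con}(T), \qquad\text{and hence}\qquad T \nvdash \mathrm{Con}(T)

    the second half being Gödel’s second incompleteness theorem.)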

    But since the argument hinges on us being able to see the truth of G(T), which we only can if we’re also able to see the truth of Con(T), it’s only under the assumption that we are able to carry out reasoning outside of T—such as would allow us to prove Con(T)—that we can conclude an ability to decide G(T); but since this is used to conclude that our reasoning capacity exceeds the formal capacity of T, we’ve run full circle.

    It’s in fact fully consistent with the Gödelian issue that we’re described by a formal system T whose consistency we can’t prove (and whose Gödel sentence we consequently can’t decide).

    Moreover, the idea that we—or even our mathematical capabilities—could be described by some fixed formal system T is simply mistaken: we are in contact with the environment, and acquire new data, and hence, modify our knowledge, our set of axioms from which to deduce things. That’s not in conflict with being a TM equivalent: any concatenation of TMs still is a TM, so any TM could (and would have to) perform the same tricks.

    In fact, even asking whether we are described by some formal system T necessarily changes the formal system describing us—because the question, at minimum, needs to include a description D(T) of T in order to be intelligible. But now, being able to decide the Gödel sentence of T is not a contradiction anymore: T’ = T + D(T) is not the same as T, and from T’, G(T) may easily be decidable. Of course, there then exists a new G(T’) that remains undecidable—but again, asking the question would entail a modification of T’ to T”, which could perform the relevant deduction, without running into troubles of self-reference.

    So we can only ever introspect the truth of G(T) if we are able to prove the consistency of T—which however necessitates us already being able to carry out reasoning outside of T. And furthermore, even the act of asking for the truth of G(T) necessarily modifies it to a system within which deciding it is no longer paradoxical—so that we are, in fact, already carrying out reasoning outside of T.

    (And of course, one can always note that the fact that I can truthfully assert the sentence ‘Peter Hankins cannot truthfully assert this sentence’, while Peter Hankins can’t, doesn’t mean I have any cognitive capacities Peter lacks… ;-))

    I want to emphatically agree with this part, though:

    The thing is, I think long exposure to programmable environments and interesting physical explanations for complex phenomena has turned us all increasingly into flatlanders who miss a dimension; who naturally suppose that one level of explanation is enough, or rather who naturally never even notice the possibility of other levels; but there are more things in heaven and earth than are dreamt of in that philosophy.

    Computation, or rather, the interpretation that’s needed to make a particular physical system’s evolution be this-and-that particular computation, is a bit like air: so ubiquitous that we don’t notice it’s there. When we look at a computer screen, it’s just obvious what it shows; thus, if it calculates a sum, and gives out a result, it’s just obvious that it has performed a calculation, or it’s just obvious that it has given a particular response to a query, and so on.

    But that’s quite far from the truth: there is always an act of interpretation—what is being computed is not in the computer, but rather, in the mind of the interpreting observer. A different observer could use a different interpretation, and would conclude that a different computation has taken place. This is the reason why computation does not suffice to underwrite a theory of how the mind works: we don’t want to posit a homuncular observer who’d be necessary to interpret the computation as computing our minds, as that leads to well-trodden vicious regresses. Gödelian arguments are entirely beside the point.

  2. On the long arguments of Penrose & Lucas, noted some stuff in the Penrose section:

    http://www.consciousentities.com/roger-penrose/#comment-367759

    But I think the logical arguments will never be settled, which is why I’d hope for some strong evidence from biology. I also think anti-computationalists need to make clear they aren’t necessarily against non-biological consciousness in general. As per Anirban Bandyopadhyay, who actually gave Orch-OR its strongest empirical claims, rejecting a Turing Machine as conscious doesn’t mean a machine of a different sort won’t be conscious ->

    https://www.closertotruth.com/series/computational-theory-the-mind#video-48848

    Additionally it’ll be interesting to see if Johnjoe McFadden gets farther with his idea that consciousness requires an endogenous EM field. Of course, as you noted a while back, Peter, this too could potentially be utilized to create a sentient machine, just not a sentient Turing Machine.

  3. That’s a great summary. The great conceit of his argument has to be the notion that computational functions somehow exhaust consciousness. It really is a peculiar presumption. Why should we think that a computer emulation of all the functions of a frog would produce another frog? The observation that consciousness is biological is just that, an observation, and not the fig leaf Aaronson frames it as. The degree it can be mechanically recreated depends on just what it turns out to be, biomechanically speaking. To think there aren’t going to be surprises, on the basis of a priori reasoning, is to take the wrong side of history in these debates.

    The conceit turns on the sense that consciousness is itself some kind of *intrinsically* functional phenomenon. But why think this? It’s an extraordinary claim, if you think about it. Consciousness is this natural thing possessing no natural dimensions… Whoa! Extraordinary claims require extraordinary evidence, so what’s our evidence that consciousness is a low-dimensional, functional phenomenon? Why do we trust it?

  4. Suppose I have an identical copy of myself sitting in the opposite chair: who is talking to whom? Am I sitting HERE talking to Vic, or am I over there talking to Vic?… Well, the copy may be an identical set of particles but not the same particles, so I am still the first person sitting HERE. However, assume the particles can switch themselves every nanosecond between the two positions, and now I can converse with myself. What enters the argument is the notion of space and time… more to come.

  5. I’m a computationalist, but always interested in arguments against computationalism. I think Penrose’s argument is hopelessly speculative, but at least he understands that to counter computationalism requires a new kind of physics. Arguments that particular cognitive features just can’t be computational leave me cold. Are we saying that it’s a physical system that can never be mathematically modeled? Ever? Even in principle? Even to a functional equivalence?

    For me, the real question is whether the information processing that happens in a neural substrate would ever be *practical* in a digital silicon one. Given the tapering off of Moore’s Law, I think it’s possible that it might not be, which means that an artificial or uploaded mind may be a lot further away than many people hope / fear. But further away and forever impossible are two very different things.

  6. Peter: “…consciousness is really a massive development of recognition, and that recognition itself cannot be algorithmic.” … “In a program you just hand the computer the entities to be processed; in real world situations they have to be recognised. But if recognition used rules and rules relied on recognising the entities to which the rules applied, we’d be caught in a vicious circularity. It follows that this kind of recognition cannot be delivered by algorithms.”

    Consider a couple of examples. You are walking down the street and see a friend. What goes on in your mind? First, just as in a computer, you are presented with something to recognize through the competitive awareness process. I would hazard a guess that your brain knows the context of walking down a street, that it can recognize a person walking towards you by shape, features, gait, etc., that it can identify a face by a variety of identifiers and from the context of walking down the street with people in it and finally that it can identify the specific face from more detailed identifiers. It doesn’t do this from going through an exhaustive list, but by making associations within context. The same could be done with concepts. If you are listening to a speaker and they are talking about how difficult it is to say why they believe they are self-aware, you quickly identify the concept they are discussing as the “hard problem” through context and associations. This seems quite algorithmic to me, it’s just a more efficient algorithm than the type that goes through an exhaustive list looking for a match. If the speaker suddenly says, “and that’s why the soup is hot” or if it is an actual gorilla walking down the street texting on his phone, your algorithm throws an error, known as cognitive dissonance.

    “The problem is that many significant kinds of states of affairs are not describable in purely physical or algorithmic terms.”

    Why do you think this is true? Examples? If it is in your mind, then it must be represented there somehow. What is a representation if not a state, which is thereby usable by an algorithm? Even if a new situation arises that you don’t have a predetermined state for, why can’t a new state be created? Isn’t that what learning and experience is? We build out new states based on our existing states. I don’t think that is a limitation of a computational device or an algorithm, but there is a requirement for self-modification.

    Having said all that, it is quite likely we are, in fact, missing something and need new perspectives to discover what it is.

  7. Stephen,

    shape, features, gait, etc… a variety of identifiers

    Those are lists. You have to list the identifiers one way or another.

    ‘Context and associations’ don’t seem at all algorithmic to me. There are big problems about dealing with real world background knowledge computationally. And association? What are the rules for what to associate with what? Aren’t you going to hit Hume-style problems right away?

    If you can write an algorithm that recognises stuff in real world situations, I would get to the patent office before you tell us any more. 😉

    Why do you think this is true? Examples?

    Well, I mentioned three (and I provided a relevant argument).

  8. Selfawarepatterns: What *evidence* do you have that consciousness is more like a computer program than a frog? No one (short of a digitalist) thinks digital frog emulations are frogs. Why should we think this one biological phenomenon, consciousness, is any different?

  9. Selfawarepatterns:

    Are we saying that it’s a physical system that can never be mathematically modeled? Ever? Even in principle? Even to a functional equivalence?

    No; I’d say that there is a difference between a physical system and a model thereof (which is obvious, since otherwise, models would just be those physical systems, making this whole exercise of modeling somewhat pointless). Models instantiate the structural, relational, informational properties of physical systems, but there’s no reason to believe that physical systems are exhausted by those properties (and plenty of reasons to believe otherwise, such as Newman’s objection, which says that if those properties were all that there is, all we could ever settle about physical objects is questions of quantity, or of cardinality—so we’d only know how many objects there are).

    In particular, models require being interpreted as models—without such an interpretation, the model is just that physical thing that it is. Modeling requires a mapping from the states of the model to the states of the system being modeled; but that mapping is ultimately arbitrary—which is again a good thing, since that’s what allows computers to be such universal models.

    But a mind clearly can’t rely on interpretation for being that particular mind—since interpretation is a mental act, we’d be left with a vicious regress. Hence, minds have to be just that, minds; like the solar system is just the solar system, but an orrery is not, even though one can interpret it as a model of the former.

  10. Peter, reading this and the comments above, plus the comments on the previous thread: formalisms are representations, and computers compute by manipulating representations. Your final statement really says that humans reason by judgement or, as the judge said, “you know it when you see it”. Formalisms are syntactic verbal arguments, language itself, that bind us to representations. The nervous system is like the digestive system, so in the end some things can only be expressed as “Holy Sh*T!”.

  11. Hi Scott,
    I think the data from neuroscience and neurology, the signal processing between neurons and more macroscopically between the various cortices and nuclei of the brain, show that the mind, and more specifically consciousness, is a process, that it’s what the brain does, rather than some ontologically separate thing. It’s always possible that future data will change that conclusion, but until then I think it’s the simplest explanation.

    If the conscious mind is what the brain does, then it follows that something other than an organic brain may be able to do it. Of course, it would need some kind of interface to the physical world, just as human minds do, but then even computer programs need an interface if they’re going to be useful for anything. The program you’re using to read this would be useless without an output screen or some mechanism to take your input.

  12. Hi Jochen,
    I think whether any physical system implements any particular model is always open to interpretation. To assert otherwise is to say that a physical thing has an inherent purpose, a teleology. If you can find a way to objectively demonstrate something like that, I’d be very interested to see it.

    I fear all we can say is that it’s easy and productive (requires a relatively small amount of energy) to interpret some systems as implementing certain models. It’s easy for me to interpret the laptop I’m typing this on as implementing Mac OS X and the Chrome browser. And it’s easy for each of us to interpret the other’s brain as implementing a mind.

    Yes, this means that whether a system is a mind is something that is ultimately decided by…other minds. Since I think minds and consciousness are processes rather than separate ontological things, this makes sense to me. But I fully realize it’s an extremely counter-intuitive notion, particularly for anyone who hasn’t reached the mind-is-a-process conclusion.

  13. SelfAwarePatterns:

    Yes, this means that whether a system is a mind is something that is ultimately decided by…other minds.

    But think that through: it’s a perfect example of a vicious regress. After all, those minds doing the interpreting are themselves in need of interpretation; and hence, the n+1st level of minds must be interpreted, before the nth level can be. Thus, you arrive at an infinite tower of minds interpreting other minds that never bottoms out—effectively, the homunculus regress.

    But I fully realize it’s an extremely counter-intuitive notion, particularly for anyone who hasn’t reached the mind-is-a-process conclusion.

    I fully believe that mind is a process—however, it is a physical process, real things doing things to other things, which does not need any interpretation, any more than billiard balls caroming off one another need an interpretation. But a model of such a physical process is not identical to that physical process—it can’t be, since it’s interpretation-relative.

  14. Sure, a model of a system is not the system. The most interesting question, to me, is whether we could build a system that is conscious (instead of just building a model). Whether that is a Turing Machine or not is IMHO entirely beside the (main) point.

  15. Jochen,
    On minds interpreting minds, I don’t see the infinite regress, but more a center of gravity, a consensus among minds about what makes up minds. Of course, there currently isn’t any such consensus, as this discussion demonstrates, but that doesn’t mean there won’t be someday. The only infinite regress I could see is if you posit a mind as needing the interpretation of another mind to actually exist. These systems, and many others, exist regardless of our interpretation. It’s only a matter of whether we include the system in the mind category. Do the members of a club who decide who is a member of their club suffer an infinite regress on club membership?

    On the difference between a physical process and a model, all physical processes are models, and all models are physical phenomena. The only question is whether the relevant crucial details of physical process X have isomorphic details in model Y. Does copying a mind require fidelity down to subatomic particles? If so, then copying it may well be impossible. But if it only requires fidelity to the molecular layer? Perhaps possible in principle but impractical. If the relevant patterns exist at the layer of cells, then copying starts to look more possible, someday (in the distant future).

    Even if implementing a mind requires wetware, a physical substrate of neurons and synapses, I think saying it will never be possible to technologically duplicate that substrate is unwarranted pessimism.

    Myself, I tend to think that copying to another substrate would only be practical when we thoroughly understand the information processing architecture of the mind, and know where we can implement alternate processes to produce the same results. Whether that modified copy is the same mind will always inescapably be a philosophical question. No matter how convincing such a copy is, there will be those who insist that it’s just a charade.

  16. SelfAwarePatterns:

    The only infinite regress I could see is if you posit a mind as needing the interpretation of another mind to actually exist. These systems, and many others, exist regardless of our interpretation.

    Every computation requires interpretation. If I handed you some physical device, you could not discover that, say, it enumerates the digits of pi, the way you can discover its mass, charge, or any of its other physical properties. Rather, you could assign a mapping from its physical states to the logical states of some computation, which enumerates the digits of pi; but one could equally well assign a mapping under which the device plays chess against itself, or performs nearly arbitrary computations.
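    (A toy sketch of what I mean, with every detail of the ‘device’ made up for illustration: one and the same sequence of physical states, read through two different mappings, counts as two different computations.)

        # One made-up sequence of physical states of some device; this is all
        # that is physically there.
        physical_states = ["A", "B", "C", "D", "A", "B", "C", "D"]

        # Read through one mapping, the device is 'enumerating digits of pi'...
        as_pi_digits = {"A": 3, "B": 1, "C": 4, "D": 1}

        # ...read through another, it is 'playing out chess moves'.
        as_chess_moves = {"A": "e4", "B": "e5", "C": "Nf3", "D": "Nc6"}

        print([as_pi_digits[s] for s in physical_states])    # [3, 1, 4, 1, 3, 1, 4, 1]
        print([as_chess_moves[s] for s in physical_states])  # ['e4', 'e5', 'Nf3', 'Nc6', ...]

    (Nothing in the physics privileges one reading over the other.)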

    Hence, a mind, if it is computational, would be just one possible interpretation of the underlying physical device’s evolution—and thus, we’d be right into an infinite regress.

    On the difference between a physical process and a model, all physical processes are models, and all models are physical phenomena.

    Models are interpretations of physical phenomena. So, for instance, the set of books on my shelf, ordered according to thickness, can be used to model the set of my paternal ancestors, with thicker books standing in for the ‘father’ of the next thinner book. But that interpretation is again arbitrary: I can use those same books to model the ordering of the world’s richest persons according to wealth.

    Armed with the first interpretation, somebody could take a book standing for person A, observe that there are three thinner books between it and the one standing for person B, and conclude that A is B’s great-great-grandfather. Armed with the second interpretation, somebody could take the book mapped to Bill Gates, and the book mapped to Warren Buffett, compare their thickness, and conclude that Bill Gates has more wealth than Warren Buffett. The same set of books, interpreted differently, allows one to draw valid conclusions about the systems they model. But without the interpretation, it’s just a set of books.

    Note that I’m not saying it’s impossible to create artificial minds. I fully expect that to be, at least in principle, possible—there’s no extraphysical stuff necessary to make a mind, so the right sort of components, assembled in the right way, will yield a mind. But interpreting a physical system as a model of a mind does not make that physical system a mind.

  17. Selfawarepatterns: So the assumption that mind/consciousness is intrinsically functional is *evidenced* by what again? Everything you mention is biological, or froggish.

    You say it’s the “simplest conclusion,” but how so? The simplest conclusion to draw is that consciousness, as an evolved biological artifact, is biological. Surely your conclusion, that consciousness is an evolved biological artifact possessing no biological dimensions, is anything but simple! Parsimony is not on your side, here.

    So again, the question is what warrants assuming that consciousness is a kind of behaviour. Is the computer analogy all you have?

  18. Jochen,
    Totally agree with your first paragraph.

    “Hence, a mind, if it is computational, would be just one possible interpretation of the underlying physical device’s evolution—and thus, we’d be right into an infinite regress.”
    I agree with the first part of this sentence, but can’t see how the “and thus” part follows.

    “Models are interpretations of physical phenomena.”
    Models are patterns, and everything that we commonly call physical are also patterns, ultimately of elementary particles. (Which could themselves be patterns of even smaller constituents.)

    A model is simply a recognition of a particular set of patterns within the physical patterns. As you note, you can model most physical systems at a multitude of different layers. Which model of the brain is the mind? No one really knows for sure, but the neuron-synapse layer seems promising. But I’ll freely admit that we can’t rule out that modeling at the molecular layer might ultimately be required.

    My point is that, barring new physics ala Penrose, there is a model. It might ultimately be impractical to implement it anywhere but in a wetware brain, but the model, the pattern, should exist.

  19. Scott,
    On consciousness being a process, when the brain enters a reduced state of processing, as in non-REM sleep, consciousness is reduced or disappears. When drugs interfere with neural processing, as with anesthetics, consciousness disappears until the drug has flushed out. Anytime we disrupt the functioning or processing in the right way, consciousness is interrupted.

    On the distinction you’re making between biology and functional systems, I’m afraid I just can’t see it. Biology is an evolved mechanism, a functional system. It is in fact molecular nano-machinery clustered together in hierarchies of networks. A frog is a system, one that must function correctly for it to produce frog-like behavior or whatever mental states a frog might have.

    On the brain most specifically, a computational system seems like the most productive way to look at it. Neuroscientists plot out neural circuit diagrams within and between various nuclei and regions such as the brainstem, thalamus, and neocortical areas. These diagrams look remarkably like electronic circuit diagrams.

    If the brain isn’t doing computation precisely according to some narrow definition, then it certainly seems to be doing something *like* computation. This shouldn’t surprise us too much, since computers were designed to automate tasks that had previously required human cognition. You can say that the brain isn’t computational since it isn’t a digital computer, but then it seems like you’d have to come up with another term to refer to the systematic signal processing that is going on in it, at least with neuroscience as it currently stands. Of course, new evidence could emerge tomorrow and change that.

  20. SelfAwarePatterns:

    I agree with the first part of this sentence, but can’t see how the “and thus” part follows.

    Because a physical system then requires interpretation to be a mind, and interpretation is something done by a mind—it’s taking something as meaning something else, like taking ‘one lit lamp’ to mean ‘the English attack by land’, or taking the set of books to model the set of my paternal ancestors—the regress follows.

    And as you said yourself:

    The only infinite regress I could see is if you posit a mind as needing the interpretation of another mind to actually exist.

    Models are patterns, and everything that we commonly call physical are also patterns, ultimately of elementary particles.

    A model isn’t a pattern; it’s a relationship set up between the structural properties of one physical system (the model) and another (the system to be modeled). The difference between a model and a physical thing is that the physical thing has objectively measurable properties—weight, size, momentum, and so on—while a model’s properties depend on the interpretation—i.e., on whether I take the set of books to model my ancestors, or the world’s richest persons.

  21. Selfawarepatterns: “On consciousness being a process, when the brain enters a reduced state of processing, as in non-REM sleep, consciousness is reduced or disappears. When drugs interfere with neural processing, as with anesthetics, consciousness disappears until the drug has flushed out. Anytime we disrupt the functioning or processing in the right way, consciousness is interrupted.”

    But this is all evidence for thinking consciousness is biological. When the biology is impacted, consciousness is impacted. Like you say, any time we impact the biology in the right way, consciousness is interrupted. It stands to reason, therefore, that consciousness is a biological phenomenon… froggish.

    What evidence warrants committing to the extraordinary claim that consciousness is purely behavioural? A claim that seems to saddle us with countless conundrums to boot…

    The analogy to computers is really all you got, isn’t it? Certain structural resemblances between neural and electronic circuitry. Simply because we think (despite the controversy) we have a firm theoretical handle on the latter, it seems a promising way to understand the former. I can appreciate that view when it comes to things like, say, making functional inferences in cognitive neuroscience. What I don’t understand is how this warrants the claim that consciousness is a purely behavioural biological phenomenon, more Frogger than frog.

    What am I missing?

  22. Scott,
    Behavioral is biological.
    Biological is biochemical.
    Biochemical is chemical.
    Chemical is physical.
    Physical is subatomic.
    Subatomic is strings?

    If a computer composed of electronic gates can model the neural networks, why not use zillions of bedsprings instead?

    As you’re pointing out, it is all levels of explanation that become difficult to resolve because of the skills we are limited to; so the talking past one another occurs.

    Skills Category Errors anyone?

  23. Scott,
    I think what you may be missing is that biology is a system. Of course consciousness can’t currently exist without its biology, its physical system. A computer program also can’t exist without its physical system. There is currently a difference in materials between the types of systems, and biological ones are currently far more complex, but that’s something that exists right now, not a permanent distinction.

    We are machines, just evolved ones rather than designed ones. Saying designed machines will never match the complexity and sophistication of evolved ones strikes me as a statement that requires justification. You can say it’s not possible with current technology, and I’d agree. You can say it’s unlikely to be so in the near future, and I’d also agree.

    But if you say it is forever impossible, I’d ask, why do you think that? What fundamental limitation, what law of physics stands in the way? (I’m not asking this rhetorically. If there is such a fundamental limitation I’ve missed, I’d be grateful to have it pointed out to me.)

  24. I think the point is that if consciousness is biological just as a rose is, you wouldn’t theorize that someday we could make a rose out of silicon.

    What you have is the computational conceit about consciousness and mental processes.

    Conversely you could argue that a rose is a computational process, so………

  25. “Models are interpretations of physical phenomena”, but the models in control systems (“Every good regulator of a system must be a model of that system”) are not arbitrary – different observers cannot justify alternative interpretations of the computations because these other ones lack value. We perhaps need to build our definition of consciousness upwards from Friston’s concept of the brain as the homeostatic organ (a particular flavour of embodiment). Homeostasis is maintaining identity as well as possible. The peculiarity of Friston’s definition of the brain’s function is that its task is endlessly expandable.

  26. Hi VicPanzica,
    I’m actually a limited pancomputationalist, so I do think that, ultimately, a rose is a computational process. That said, it doesn’t seem particularly productive to think of it as such a process.

    Some physical structures seem more computational than others. Computers, of course, are engineered to maximize their computational ability. I think brains are information processing organs evolved to maximize the survivability of the organism. But if brains only had the information processing aspects of a rose, the computational understanding of mind wouldn’t appeal to me (although the overall functionalist conception likely still would).

  27. On getting semantics from syntax, still unclear to me how anyone committed to materialism is getting around Alex Rosenberg’s eliminativist argument:

    …Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain…

    What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.

  28. After reading the post I felt sure someone in the comments would ask about babies. No one did so I will. When babies are born, or even when they are in the womb what sorts of minds do they have? How does a human newborn acquire the mental abilities of her adult self? I would assume part of that process is learning and part of it is the growth and development of the brain. I would assume those two processes are mutually dependent.

    Perhaps a good question to ask might be whether machines can be capable of learning and whether they can be capable of physical development. It’s clear to most that a newly fertilized egg is not capable of mind. If we ask how, from the standpoint of developmental biology or embryology, this capability might come to be the answers might take us some way toward determining what is possible for machines and how to go about making those possibilities real. The fact that philosophers raise a particular question does not necessarily make them best suited to answer it.

  29. Dear SAPatterns,

    I like to think of the brain as an analog system of evaluative circuits as opposed to computational. If you study computer architecture it is interesting how much we have anthropomorphized the machine. Data addressing and retrieval is more akin to stimulus response in a living organism. The CPU itself is more akin to how we bottom out in more fundamental structures that rely on higher structures in the neocortex that is filled with memory locations. Programming pointer retrieval is more akin to associative learning and memory retrieval. Amazing how much we have ‘kluged’ brain biology in the machines’ structure. I say kluged because programming by digital locations makes for a more brute force approach to modeling thought functions. Object oriented programming with public and private data objects etc. also makes it interesting how humans design thinking machines.

  30. Sci,

    Your phenomenal world (conscious content) is about stuff outside of your brain. The retinoid system has been detailed and proposed as the neuronal brain mechanism that tells you about the outside stuff from your egocentric perspective.

  31. VicPanzica,
    I can understand that view. I think we actually agree ontologically, but perhaps differ on the scope of what we label computation. For me, the association processing of the brain is information processing, albeit with a radically different architecture than digital computers. As I noted at the top of this thread, reproducing that processing outside of a physical neural network may never be practical, but that doesn’t mean we won’t eventually build a suitable physical neural network.

  32. @ Sci in 27:

    What does Rosenberg take himself to be eliminating? It can’t be reference or meaning, since he’s using them in the very passage you quoted. That is, he and we understand what he’s talking about *and* we are completely material creatures (no evidence for anything non-material, which wouldn’t help in any case). So semantics has to be materially and functionally realizable. Yes, the brain is ultimately uninterpreted, but reference is a matter of appropriate behavior with respect to your environment, including your verbal community, which judges your utterances as either meaningful or nonsense. Rosenberg is making sense, not nonsense, but his thesis is false.

  33. @Arnold: Can you elaborate on how your model solves this infinite regression problem?

    @Tom: I don’t think simply pointing out that Rosenberg is relying on meaning to convey his point is a refutation of his point. Someone committed to us being matter but denying eliminativism would have to actually explain how they’re getting around the infinite regression issue.

    I know Jochen has sought to do this in his work, so we’ll see how that develops. However, I do think Rosenberg’s simple argument suffices to falsify “strong computationalism” (mind uploads, conscious programs) that assumes the classic definition of materialism. Whether giving ontological pride of place to Patterns or Information rescues the computationalist…maybe? But why should we do that?

    All that takes us back to Bakker’s comments on why anyone would assume consciousness transcends the biological.

  34. Sci,

    The infinite regress problem arises if you think of the brain as the observer of the space “out there”. But if there is a mechanism in the brain that creates a representation of the volumetric space all around us, events projected into this space do not have to be *interpreted* as out there — they are experienced as being out there with respect to our self. This is our immediate presence of the world we live in, our phenomenal world. For more about this, see “Where Am I? Redux” on my Research Gate page.

  35. Scott,

    What *evidence* do you have that consciousness is more like a computer program than a frog? No one (short of a digitalist) thinks digital frog emulations are frogs. Why should we think this one biological phenomenon, consciousness, is any different?

    If I’m understanding SelfAwarePatterns, what he’s describing is more like using a Star Trek replicator to make a frog. And indeed, such a frog, made from the atomic level upward, IS a frog. It’d have the brain of a frog, as it’s a frog, as well. SelfAware is referring to the duplication of a human brain, as far as I can tell. Human brains being attributed ‘consciousness’ and all that. What are you trying to get to with his argument?

  36. Selfawarepatterns: “I think what you may be missing is that biology is a system. Of course consciousness can’t currently exist without its biology, its physical system. A computer program also can’t exist without its physical system. There is currently a difference in materials between the types of systems, and biological ones are currently far more complex, but that’s something that exists right now, not a permanent distinction.”

    But the computationalist is saying more than this, which I agree entirely with, since it allows us to distinguish between Frogger and frogs. I never said that consciousness can’t be artificially recreated, only that such recreation of consciousness will very likely consist in far, far more than the emulation of computationally defined functions.

    Computationalism pretty clearly presumes that consciousness resembles Frogger, not frogs. Again, what evidences this extraordinary claim?

    This has gotta stand high among the questions computationalism should be able to answer, doesn’t it?

  37. Scott,
    Okay, I don’t know how much you know about computer engineering or neuroscience, so apologies if this gets too much into the weeds.

    Computer chips are composed of transistors, which control the flow of electricity. A transistor can be in one of two voltage states, which we generally interpret to be on or off, true or false, 1 or 0. Each transistor is a bit, allowing a sequence of them to form binary numbers and data structures.

    In processor cores, the transistors are organized in networks of logic gates. There are three basic types: AND, OR, and NOT. An AND gate receives two inputs, and if both are true, outputs true; otherwise it outputs false. An OR gate receives two inputs; if either one is true, it outputs true. A NOT gate receives one input; if it is true, it outputs false, but if the input is false, it outputs true. All computing logic is built from these basic types of gates. (Although real logic gates are often complex combinations of these.)

    Brains are composed of interacting networks of neurons and synapses. A neuron is an excitable cell which, if properly excited, produces an action potential, a nerve impulse which it passes on to the other neurons it has synaptic connections with. Synapses vary in strength and can be either excitatory or inhibitory, increasing the likelihood of generating an action potential in the post-synaptic neuron or decreasing that likelihood.

    If we have two adjacent synapses in a neuron’s dendritic tree, neither of which is strong enough individually to trigger an action potential, but which are strong enough if both fire concurrently, we effectively have an AND gate. Two adjacent synapses, either of which is strong enough to trigger an action potential, are effectively an OR gate. An inhibitory synapse can function as a NOT gate (as can an excitatory synapse reversing the effect of an inhibitory one).

    Of course, the reality in both situations is far more complicated than this brief sketch. Synapses in particular get stronger over time if they are used more, and atrophy if not used, and their strength can be affected by fatigue and the exact mix of chemicals currently floating around. In addition, synaptic gates can be time sensitive, such as a single synapse firing in rapid succession being able to trigger an action potential that it’s not strong enough to trigger if it fires more slowly.

    A neuron is a complex computational processor. It sums all its inputs and, based on this, effectively decides whether to signal its downstream neurons.

    The point is that in both cases, it’s the flow of information, in the form of electrical or electrochemical signals, that is similar. Brains are (currently) far more complex, but the basic information processing mechanisms exist in both systems. To be clear, brains are not digital computers, but when viewed at their most fundamental level, they are computational information processing systems.
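    (In case it helps, here is a toy Python sketch of the synapse-as-gate idea; the weights and thresholds are made-up numbers, and real neurons are of course vastly messier than this.)

        def neuron(inputs, weights, threshold):
            # Toy neuron: fires (1) if the weighted sum of its synaptic
            # inputs reaches threshold, otherwise stays silent (0).
            return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

        # Two weak synapses: only both firing together reach threshold -> AND.
        def and_gate(a, b):
            return neuron([a, b], [0.6, 0.6], threshold=1.0)

        # Two strong synapses: either one alone reaches threshold -> OR.
        def or_gate(a, b):
            return neuron([a, b], [1.0, 1.0], threshold=1.0)

        # A tonically active input plus an inhibitory synapse -> NOT.
        def not_gate(a):
            return neuron([1, a], [1.0, -1.0], threshold=1.0)

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b), "NOT a:", not_gate(a))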

    Hope this all made sense. It’s a lot to throw in a comment.

  38. Came across a talk on “Meaning and semantic externalism” by Hilary Putnam via the Facebook analytic philosophy group; a relevant quote around 18 minutes in:

    “…nothing in the head of an average speaker suffices to determine what her word ‘gold’ refers to – meanings aren’t in the head. Well, if they aren’t in the head, where are they? Of course the brain is in the head…but what fixes the meaning of a speaker’s word is not just a state of her brain. The reference of our terms is fixed by two things…other people and the world.”

    So neither brains nor computers on their own are sufficient to fix the meaning of terms, according to Putnam.

  39. Tom,

    The world, as we know it, is fixed by a state of our brain. So, it seems to me that the reference of our terms is fixed by other people and by what happens within our brain.

  40. Arnold, yes, the brain is involved, but not just the brain. More from that Putnam talk, highly recommended:

    31:45 “It is in the context of a network of social and physical interactions, and only in such a context, that I can do such a thing as think that the price of gold has become very high in recent years. If thinking that thought is what I once called a functional state, it’s not, as I mistakenly believed, simply a computational state of my brain – the function in question is a world-involving function. The thought is no more simply in my head than the meaning of gold is. And if thoughts aren’t in the head, then the mind isn’t either. The mind isn’t a thing with a definite location, but a system of world-involving abilities and exercises of those abilities. In this all externalists in the philosophy of mind agree.”

    Note that Putnam is *not* saying the brain plays no role in semantics, only that thoughts aren’t *simply* in the head.

    However, Putnam goes on to say that qualia *are* fixed by the brain, they aren’t properties of external objects, contra naive realists.

  41. The world is fixed by our brains?

    The brain, amongst its neurons, forms a response network (responding to its all-too-biological senses) which, due to much Darwinistic field testing, often proves to continue survival. I’d get that. But the world is fixed by our brains? Cart well before the horse, surely!?

    SelfAwarePatterns,

    Of course, the reality in both situations is far more complicated than this brief sketch. Synapses in particular get stronger over time if they are used more, and atrophy if not used, and their strength can be affected by fatigue and the exact mix of chemicals currently floating around. In addition, synaptic gates can be time sensitive, such as a single synapse firing in rapid succession being able to trigger an action potential that it’s not strong enough to trigger if it fires more slowly.

    Well it’s that that’s a sticking point. I mean, the idea of running that in a computer is a lot like using electrical engines to simulate a steam engine. Sure, you might be able to make a bunch of electrical engines and electrical hot plates generate steam and guide it around, but it’s not a steam engine! Especially if you don’t use steam and just simulate it!

    And it’s like using a modern computer to simulate a valve-based computer – part of a valve-based computer’s operation is actually the breakdown of valves! Can’t genuinely happen in a modern comp. To err is to be valve-driven and all that!

  42. Michael Murden, #28: When babies are born, or even when they are in the womb what sorts of minds do they have? How does a human newborn acquire the mental abilities of her adult self?

    At the early stages you’re talking about, emotion is ahead of intellect. For acquisition of adult mental abilities, Piaget’s work on children’s cognitive development is a classic stage theory.

    I would assume part of that process is learning and part of it is the growth and development of the brain. I would assume those two processes are mutually dependent.

    Yes, although brain growth and development have a strong independent (biological) side.

  43. SelfAwarePatterns:

    There are three basic types: AND, OR, and NOT. An AND gate receives two inputs, and if both are true, outputs true; otherwise it outputs false. An OR gate receives two inputs; if either one is true, it outputs true.

    Well, nowhere in the real world are there any truth values to be found that you could use as input to these gates, though. Rather, you for instance fix some mapping of voltages to logical values: say, ‘1 volt -> logical 0’ and ‘5 volts -> logical 1’. With such a mapping, you can interpret a device giving out 5 volts if and only if 5 volts are applied to both of its inputs as a logical AND-gate.

    But you can equally well use the following mapping: ‘1 volt -> logical 1’ and ‘5 volts -> logical 0’. With this mapping, the same physical device now computes the logical OR of its inputs (yielding a 0 if and only if both its inputs are 0).

    In fact, any Boolean function of two bits can be implemented using the same physical device, with only a change in the interpretational mapping. This generalizes, of course, say to networks of Boolean gates; and hence, computation isn’t fixed by the physical system, but depends on an arbitrary mapping implemented ultimately at the user-level.
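    (A toy sketch of that relabeling, with made-up voltages: the ‘device’ table below is fixed once and never changes; only the interpretive mapping does.)

        # Fixed physical behaviour of a made-up device: output voltage for each
        # pair of input voltages. This table never changes.
        device = {(1, 1): 1, (1, 5): 1, (5, 1): 1, (5, 5): 5}

        def read_as(volts_to_bit):
            # Read the same physical behaviour through a chosen interpretive mapping.
            bit_to_volts = {b: v for v, b in volts_to_bit.items()}
            return {(a, b): volts_to_bit[device[(bit_to_volts[a], bit_to_volts[b])]]
                    for a in (0, 1) for b in (0, 1)}

        # Mapping 1: 1 V -> logical 0, 5 V -> logical 1. The device computes AND.
        print(read_as({1: 0, 5: 1}))   # {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
        # Mapping 2: 1 V -> logical 1, 5 V -> logical 0. The same device computes OR.
        print(read_as({1: 1, 5: 0}))   # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}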

    Hence, all computational notions can be fixed only if you fix some user beforehand; but then, computation obviously can’t serve as a theory of the user itself, since this would again lead to vicious circularity.

  44. Callan,

    The world *as we know it* must be fixed by our brain. Where else can knowledge of the world be? No brain, no knowledge of the world! Does the world shape our knowledge of it? Of course it does.

    @Tom, if thoughts are in the brain, they are also in the world. But you must acknowledge that in the voluminousness of the world there is not just one resolved event that has the perspective of a personal thought. The world contains a fantastic stew of thoughts.

  45. Jochen @43,

    I know this is your argument, but does this really stand up to a reality check? If a microcontroller were to be sent into space implementing, say, the Sieve of Eratosthenes, how likely would it be that intelligent aliens would soon deduce exactly what computation it was performing? If it seems likely (and I think it does) what does that say about the arbitrariness of the interpretation of computation? If that doesn’t sound “real” enough, how about the process of reverse engineering without technical documentation? E.g. when one superpower salvages high tech from another? There does seem to be a connection between structure and function.

    On very simple examples, yes, your argument seems persuasive, but as computational function becomes more complex, it seems intuitive that ambiguity goes down, until very particular designations become more and more clear.

  46. SAPatterns,

    Yes, agree with your thoughts on a computer.

    You said: “The point is that in both cases, it’s the flow of information, in the form of electrical or electrochemical signals, that is similar. Brains are (currently) far more complex, but the basic information processing mechanisms exist in both systems.”

    The “flow of information” is actually a flow of timing states, so that an artificial consciousness can act in time like a biological consciousness.

    A lot of the comments above also underscore that a consciousness like ours that evolves biologically is environmentally embedded. We have no problem acting or moving in the ‘normal’ environment, but for centuries people have argued about the nature of reality, from geocentrism to quantum mechanics.

  47. Hunt:

    On very simple examples, yes, your argument seems persuasive, but as computational function becomes more complex, it seems intuitive that ambiguity goes down, until very particular designations become more and more clear.

    It may seem intuitive, but I think if one thinks about it a little, it’s fairly clear that the opposite is the case. Think about any Boolean network with n inputs and one output. In the limit of large n, every possible computation can be implemented using such a network.

    Such a device has 2^n input configurations, and 2 output configurations—let’s call them ‘yes’ and ‘no’. Every possible n-bit binary number is mapped to one input configuration—however, the mapping is arbitrary. Hence, you can actually implement every n-bit decision problem using this same physical device, just by re-labeling the input configurations—say, the configuration that got mapped first to ‘000…0’, i.e. the all-zero string, now gets mapped to ‘111…1’, the all-1 string. However, this changes the problem: if the computation implemented this way first only accepted the 0-string, it will now only accept the 1-string, since the physical evolution remains unchanged.

    This generalizes to all other models of computation (as they’re all equivalent).

    Now, this doesn’t in general mean that we can have no idea what a given system computes—it’s a bit like the halting problem: for very many algorithms, we can in fact say if they halt or not; however, we have no general procedure of establishing this for all algorithms. Likewise, we can’t solve the implementation problem for all computations. But this is enough for my argument: there is no absolute way to say that this system implements that computation. A change of mapping such that it implements a different computation is always possible. In practice, we can maybe consider only a small fraction of possible mappings, and if we find a ‘sensible’ computation in there, we can assume that that’s the one being performed. But this doesn’t suffice to make the general implementation problem answerable.
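    (A minimal sketch of the relabeling point—the ‘device’ here is just an invented lookup table, not anyone’s actual proposal: the same fixed physics, with its input configurations relabeled, accepts a different set of n-bit strings and so implements a different decision problem:)

        from itertools import product

        n = 3
        configs = list(product('hl', repeat=n))            # physical input configurations
        accepts = {c: c == ('h',) * n for c in configs}    # fixed physics: 'yes' only for all-high

        def decision_problem(labeling):
            """Return the set of n-bit strings accepted under a given labeling."""
            return {bits for bits, cfg in labeling.items() if accepts[cfg]}

        bitstrings = [''.join(p) for p in product('01', repeat=n)]
        lab1 = {bits: tuple('h' if b == '1' else 'l' for b in bits) for bits in bitstrings}
        lab2 = {bits: tuple('l' if b == '1' else 'h' for b in bits) for bits in bitstrings}

        print(decision_problem(lab1), decision_problem(lab2))  # {'111'} vs {'000'}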

  48. Callan S,
    Very true. They are very different architectures. I think, in principle, it’s possible to emulate the operations of a brain on a digital computer much as you can emulate old Atari 6502 games on a modern computer, but given the declining rate of Moore’s Law and how different brains are, it may never be practical. Again though, that doesn’t eliminate the possibility of us someday building a technological version of the brain’s architecture.

    Jochen,
    I don’t have really anything to say on the circularity argument beyond what I said above. I would note however that while the interpretation of what the processor is doing is important for the engineer, once that engineer has coupled the processor to an I/O framework, a physical interface to the world, it is then an objective force, similar to the way our minds are.

    VicP,
    Totally agreed. I do think our consciousness evolved for a purpose, to help our brains make better decisions to better preserve our homeostasis and preserve and propagate our genes. It seems any understanding of human or animal consciousness has to take that into account.

  49. SelfAwarePatterns:

    I don’t have really anything to say on the circularity argument beyond what I said above.

    But the above doesn’t allow you to get out of circularity—a mind must be definite, before it can interpret some computation; hence, it can’t be grounded in computation, since that computation needs a mind in order to make it definite, which can’t be a computation, since that would need a mind in order to make it definite, and so on. You just never hit any solid ground that way; the regress doesn’t bottom out.

    I would note however that while the interpretation of what the processor is doing is important for the engineer, once that engineer has coupled the processor to an I/O framework, a physical interface to the world, it is then an objective force, similar to the way our minds are.

    It’s very tempting to think that way, but ultimately flawed. First of all, of course—who’s the engineer of the brain?

    But more importantly, what computers present to us—words, sounds, pictures—are still entities themselves in need of interpretation, i.e. representations. Words need to be understood; ‘dog’ doesn’t refer to dog out of any fundamental dog-ness it carries, but because it is interpreted a certain way, as pertaining to a dog. The same thing goes for sounds.

    Images are more treacherous. We tend to think that images ‘just are’ images of the things they represent. But even here, we need an act of interpretation. An image of a chair isn’t a chair; or, as Magritte pointed out, an image of a pipe is not itself a pipe.

    Or, if that isn’t clear, think about hieroglyphs: what to you looks like ‘reed hand snail eagle’, perhaps conjuring some implausible scene about an eagle snatching a snail from someone’s hand in the reeds, could to somebody conversant in hieroglyphs just mean ‘Jimbo wuz here’.

    In the same sense, some intelligence could use a vastly more complex system of hieroglyphs, adding a layer of ‘textual’ meaning to some graphics we view. Hence, those images in no sense ‘directly represent’ what we think them to; there is always an act of interpretation necessary, it’s just that this interpretation comes so naturally to us that we basically never notice. As with the fish that’s never known anything about water.

  50. Jochen,
    This is essentially the old ‘machines have no understanding’ argument: that they have no understanding of the meaning of things.

    But if you think about what that means for brains, “meaning” is most likely the associations we form between various sensory perceptions, and with various actions, whether those actions be imaginings (recreation of sensory perceptions and modeling of possible new ones based on some course of action), gland excretions, or musculoskeletal movement. When you see an image, a flurry of associative images, sounds, touch sensations, maybe even smells and tastes come to mind. This is the recognition Peter mentions in his post.

    It’s not true that computers don’t do anything like this. Inputs are regularly associated with various other data entities and various actions. It’s currently not nearly as rich or flexible as what happens in brains (indeed I’d call it a pale imitation at this stage), but this is ultimately a difference in capacities and information processing architecture, rather than any fundamental aspect of reality.

  51. Jochen,

    Yes, brains are embedded in physical environments, like the fish in water, and in cultural environments.

    Halting and infinite regress problems are ways that philosophers help us display our epistemic deficits or ignorance.

    I can regress an HDTV, computer or GPS down to a single transistor or atom; so what?
    Halting appears to be a problem, but that is exactly what sensorimotor systems do: they halt, restart, or re-enact different algorithms.

  52. Jochen in 49: “…there is always an act of interpretation necessary, it’s just that this interpretation comes so naturally to us that we basically never notice.”

    The question I think is whether non-arbitrary interpretation requires more than computation by brains or artificial systems behaving within a larger physical and social context. I think not because it’s the context (e.g., “coupl[ing] the processor to an I/O framework, a physical interface to the world” – SelfAware), that prevents the regress you think threatens.

    Our minds, and eventually those of artificial systems, are defined, and thus capable of non-arbitrary interpretation of symbols and signs, because the context validates the interpretation as successful, hence meaningful, based on the behavior of the system. If I say it’s raining and you pick up your umbrella, we agree there’s understanding going on. Putnam again: “…what fixes the meaning of a speaker’s word is not just a state of her brain. The reference of our terms is fixed by two things…other people and the world.”

    But if you think there’s more to it than this, I’m all ears!

  53. @Tom:

    “…what fixes the meaning of a speaker’s word is not just a state of her brain. The reference of our terms is fixed by two things…other people and the world.”

    Just to be clear, is this meant to be the sort of extended mind idea Chalmers & Clark wrote about and Fodor (almost conclusively IIRC) criticized?

  54. Sci: “…what fixes the meaning of a speaker’s word is not just a state of her brain. The reference of our terms is fixed by two things…other people and the world.”

    This quote was from Putnam’s talk (in 2011 I think) linked above in which I don’t think he mentions the extended mind idea. But that idea is a type of externalism about the mind which is what Putnam endorses regarding semantics.

  55. Tom,

    How can there be just one extended mind? Doesn’t the notion of an extended mind necessarily imply that the world contains billions of minds, not just yours or mine?

  56. Arnold in 55, sorry for any confusion. The extended mind hypothesis doesn’t say there is just one extended mind, rather it’s about the nature of cognition. As Andy Clark and Dave Chalmers put it:

    “Epistemic action, we suggest, demands spread of epistemic credit. If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process. Cognitive processes ain’t (all) in the head!” See http://consc.net/papers/extended.html

  57. Tom,

    Would I be correct in assuming that epistemic action/cognitive processes are broader concepts than our standard view of mind/ thought in that they include events that do not happen in the brain but are nevertheless represented in the brain as part of an epistemic act?

  58. Arnold (57), I’m no expert on this but it seems that the extended mind view of cognition is broader than the standard view by virtue of including parts of the environment outside the brain as contributors to cognition. I’m not sure whether, on this view, all the events outside the brain that contribute to cognition are necessarily represented in the brain as part of an epistemic act.

  59. Selfawarepatterns (37): I understand why you think the brain is like a computer. It just doesn’t follow that consciousness is intrinsically computational because the brain is an information processor. Why reduce consciousness to pure process and not process and processor? To me, it seems implausible to think that consciousness doesn’t belong to the hardware, that it isn’t froggy in different respects. So again I ask, what’s the evidence that it’s purely functional?

  60. SAPatterns,

    I think, in principle, it’s possible to emulate the operations of a brain on a digital computer much as you can emulate old Atari 6502 games on a modern computer

    It’s not an Atari – you’ll never get the same effect of a cartridge being slightly not pushed in far enough!

    You seem to be taking it that your idea of semantics is the only thing that matters.

    I would agree that a Star Trek-like replication of a human brain, from the atomic level upward, would have some kind of consciousness.

    But you’re really talking about replication from the semantic level upward – ie, as long as the computer outputs the semantics of an Atari, it’s an Atari for you.

    But the semantic level is rubbish – we think in far cruder terms than our brains exist at. We are not complex enough to perceive how complex we are.

    It is, for example, easy to trick our semantics into thinking an Atari is working. You could take an Atari case, gut it and use a modern computer inside to simulate an Atari – but you’d think you had a regular Atari unless told otherwise.

    Why do you treat semantic replication as being important? Semantics are rubbish perception!

  61. SelfAwarePatterns:

    But if you think about what that means for brains, “meaning” is most likely the associations we form between various sensory perceptions, and with various actions, whether those actions be imaginings (recreation of sensory perceptions and modeling of possible new ones based on some course of action), gland excretions, or musculoskeletal movement.

    First of all, gland excretions, movements, and other actions in the real world aren’t computational—that’s kind of my point. So I don’t have anything in principle against using those to ground meaning (indeed, I try to do something like that myself).

    But, more importantly, nothing ever acts because of such-and-such a computation. Muscles contract because of a certain electrochemical trigger being activated; that is, it’s a physical reaction, based on a physical interaction. To think about this in a computational manner is at best obfuscatory—after all, you wouldn’t think of your large intestine as ‘computing’ the conversion of food into usable chemical energy; it does that perfectly well without any additional layer of computation on top.

    The same is true of brains: they do their thing, which is utterly biological, utterly physical, without any need to appeal to computation. This is what leads to actions, and to actions properly connected to stimuli.

    As for associations between different imagery, different sensory inputs, and so on, that, too, must be rooted in the physical level: otherwise, we’d again have an underdetermination problem regarding the computation. Because whatever computation you hold responsible for some image being called up in response to another image could equally well be interpreted differently.

    So, it’s not because a computation had a certain outcome that a certain action is being undertaken; it’s because certain electrochemical processes are occurring. Any computation would, at best, be a gloss, entirely additional and superfluous to the actions being taken. Think of a game of billiards: balls hit one another, interact, change their position and momenta. True, you can use this to implement a computation, but in the end, whether a ball ends up in the pocket is already completely determined by the physical interaction; the computational level has no say in this.

    When you see an image, a flurry of associative images, sounds, touch sensations, maybe even smells and tastes come to mind. This is the recognition Peter mentions in his post.

    Peter is also pretty clear that he doesn’t believe this process to be computational—and I think he’s entirely correct there. You’re still taking the seductive metaphors computation provides us with too seriously: if it were truly a merely computational process that called up images in response to other images, then we’d again need some interpreter to make this the computation that’s being executed. Whereas we have an entirely sufficient explanation of any actions taken without having to look at what’s being computed, in terms of electrical and chemical changes in response to electrical and chemical impulses.

    Tom Clark:

    Our minds, and eventually those of artificial systems, are defined, and thus capable of non-arbitrary interpretation of symbols and signs, because the context validates the interpretation as successful, hence meaningful, based on the behavior of the system. If I say it’s raining and you pick up your umbrella, we agree there’s understanding going on.

    But if you say it’s raining, and I pick up an umbrella, then we can model the whole interaction without ever referencing any computation being done by me—I react to a physical stimulus (your voice) by a series of physical state changes, culminating in me picking up that umbrella. The computation would be some interpretive gloss on this series of state changes—which however changes nothing about the matter of fact of these state changes, and hence, has nothing to do with what I actually end up doing, how I end up acting.

    If my contention that basically any system can be interpreted as implementing any computation is right, then the physical system of me picking up the umbrella can be interpreted as computing the digit expansion of pi. Perhaps it can also be interpreted as implementing a computation corresponding to having the subjective experience of picking up an umbrella; but then, so can the physical evolution of a stone that’s just sitting there, doing nothing at all related to umbrellas. The computational level is essentially free-floating: it does not explain anything about how physical systems act; physical causality, constraints from the laws of nature, and initial conditions do all the explanatory work there.

    If we look at the computational level, we lose any connection to the physical—I could just as well be the rock having a dream of picking up an umbrella, and be none the wiser. What actually happens does not do anything to tie down the interpretation, and hence, can’t help in making a computational mind definite.

    One can also field a variant of the inverted-spectrum argument in order to make this more clear: say my action somehow does pin down the interpretation. Then, there is still an issue of underdetermination regarding the form my experience takes: what is it like for me to pick up that umbrella? This can’t be rooted solely in behavioral terms. To me, or rather, in one computation, the umbrella could appear as what you would call ‘red’, but I call ‘blue’; while in another computation, it could appear as (your) ‘yellow’. Or, it could not have any likeness to what you call ‘umbrella’ at all; I could experience it as you would experience a car. Or, in fact, my whole subjective experience could have no relation to yours when you pick up an umbrella; I could fancy myself on a boat cruise, as long as my outward behavior is appropriate.

    On the computational theory of mind, these would all correspond to different computations; and all would yield the same behavior. Hence, what is being computed really has no bearing on what actions occur at all; and thus, one should better just Occam away the whole computational level, which is after all nothing but a way of looking at a system, and instead focus on the objective physical facts that determine behavior—neuron firing rates, muscle contractions, hormone levels, and the like. That’s where the answer lies, if it lies anywhere; but not in how a system’s physical states are mapped to some logical, computational ones. This introduces an observer-dependency we must get rid of to make progress.

  62. Jochen,

    “Think of a game of billiards: balls hit one another, interact, change their position and momenta. True, you can use this to implement a computation, but in the end, whether a ball ends up in the pocket is already completely determined by the physical interaction; the computational level has no say in this.”

    True, when you closed one eye and hit the cue ball and it struck the ball that went into the pocket, the ball did not “know” where it went, but you the shooter did. Initially you stood above the table with two eyes open and computed the shot. Computationalism is something we perform biologically, with our brains and senses, on the environment, and extend into a machine environment of mechanical machines, computers, etc. Those muscles you initially mentioned are connected into our bicameral sensorimotor system which constantly calculates balance via gravitational sense while walking just for starters.

  63. VicP:

    Those muscles you initially mentioned are connected into our bicameral sensorimotor system which constantly calculates balance via gravitational sense while walking just for starters.

    There’s no ‘calculation’ going on. For instance, in the vestibular system, liquid sloshes around in a couple of orthogonal semicircular canals, and, depending on orientation, otoliths get moved around, which then produces certain electrochemical triggers that ultimately may end up in motions correcting body orientation. The whole thing is really rather a control system than a computation—much like a spirit level with an additional system that uses the position of the air bubble to initiate a righting force.

    Nothing computes an orientation, or the direction of gravity, and then uses the result of that computation to perform a corrective motion; rather, the direction of gravity proximately causes a corrective motion. It’s just a feedback system: pushing it out of equilibrium entails a corrective motion back towards equilibrium.

    Now, in certain cases—simulations, etc.—one may usefully model such systems using computers. One may even use computational language to describe these systems, as in for example formulating an algorithm, or a flow diagram, of the processes involved. But none of this means that this is a computational process—what happens is just due to ordinary physical causality; the computational would just be an additional, and superfluous, level tacked onto that.
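    (Taking up that point about modeling: a minimal, purely illustrative sketch—gains and names invented—of the spirit-level-style feedback loop, in which the displacement itself drives the correction; whether to call this a ‘computation’ of orientation is exactly the interpretive question at issue, and the code is of course only a model of the physics:)

        # Toy righting reflex as a feedback loop: being pushed out of equilibrium
        # directly entails a corrective push back towards equilibrium.
        def righting(tilt=0.3, gain=0.8, dt=0.1, steps=30):
            history = [tilt]
            for _ in range(steps):
                tilt += -gain * tilt * dt   # correction is proportional to the displacement
                history.append(tilt)
            return history

        print(righting()[-1])  # the tilt has decayed back towards zero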

  64. Scott,
    The phrase “purely functional” is a bit of a false standard. Nothing is purely functional. Any function takes place in a physical substrate of some kind. The software you’re using to read this isn’t purely functional. It requires the device you’re also using. But that software can be realized on other devices (although its physical manifestation on other devices might be very different).

    The question is whether the functions of consciousness can be realized on a physical substrate other than a brain. To say no, it seems like we would need to establish that there is something about the brain that can’t be replicated anywhere else. If there is evidence for that, I’m very interested in seeing it.

  65. Jochen,
    Every argument you make could also be argued for a robot. Its movement and possible hydraulic processes aren’t necessarily things we generally conceive of as being computational. Yet I don’t know too many people who would insist that computation isn’t happening inside the central controller of the robot. And that controller itself could be understood in purely physical terms of transistor states, yet the results of its computations, interfaced with the robotic body, can have physical objective effects on the outside world.

    All that said, it seems like the difference here is whether the term “computational” is appropriate. I think it is a useful way to look at neural processes (and most neuroscientists I read seem to agree). If you want to use another term to describe the signal processing going on in the brain, I wouldn’t necessarily object. But with scientists now developing artificial synapses and computer chips with 1000 processor cores, the distinctions are going to become increasingly blurred.

  66. SelfAwarePatterns:

    All that said, it seems like the difference here is whether the term “computational” is appropriate.

    Well, the problem with using the term ‘computational’ is that for something to be properly ‘computational’, the computation should do some useful work—it should add something to your explanation. It should be the case, for instance, that I am having that particular experience because that particular computation is being executed—but that’s not the case. So, speaking about things in terms of computation might be a useful gloss in certain situations; but it doesn’t add any explanatory value. But as it is, it just engenders misunderstandings—it obfuscates, rather than clarifies. So why not get rid of it?

  67. Selfawarepatterns (66): “Nothing is purely functional. Any function takes place in a physical substrate of some kind.”

    Yes. This is the basis of my question.

    “The question is whether the functions of consciousness can be realized on a physical substrate other than a brain. To say no, it seems like we would need to establish that there is something about the brain that can’t be replicated anywhere else. If there is evidence for that, I’m very interested in seeing it.”

    I agree. I see no reason to think that consciousness cannot be recreated once we learn how to recreate the biological processes responsible. I also think it will be possible to make artificial frogs someday. The problem, S, is that this isn’t your position. You’re a computationalist! You think consciousness can be recreated by merely recreating computational processes.

    If you do think consciousness intimately involves noncomputational processes, then you’re not a computationalist as far as consciousness is concerned. Is this fair to say?

  68. Jochen,

    “Nothing computes an orientation, or the direction of gravity, and then uses the result of that computation to perform a corrective motion; rather, the direction of gravity proximately causes a corrective motion. It’s just a feedback system: pushing it out of equilibrium entails a corrective motion back towards equilibrium.”

    Fundamentally agree that it is an analog process, but we do display one huge calculation when we use the process you cited, namely, we walk by moving the left side of our body AND then the right side of our body. You can derive an idea of where numbers and computations come from.

  69. Jochen,
    I explained above at length why I think the computational information processing understanding has explanatory value. I’m not satisfied by just saying that recognition or associations happen, I want to know how they happen, and from everything I can see, how they happen is a complex framework of signal processing, which it makes sense to call computation, or information processing.

    You disagree. Fair enough. It doesn’t sound like we necessarily disagree ontologically, just definitionally. I’ve never found definitional debates to be particularly productive.

  70. Scott,
    You seem to see that physical requirement as a problem. But I’ll note again, there is no such thing as a purely computational or purely functional system. If that’s our working definition of computationalism for this discussion, then even computers can’t be considered computational, since they must have a physical implementation and physical I/O system, and their performance is intimately tied to their physical architecture.

  71. Jochen in 63: “The computational level is essentially free-floating: it does not explain anything about how physical systems act; physical causality, constraints from the laws of nature, and initial conditions do all the explanatory work there.”

    I think SelfAwarePatterns rightly suggests that the computational level can be explanatory, since the behavior of computers, computerized control systems and robots (all of which are physical systems) mostly derives from their computational design and functions. How much the computational, information processing level (as distinct from the physical, e.g., biological level) ends up being explanatory for our behavior is an open question I’d say, as it is for consciousness. IIT has it that information plays a central role in the latter:
    http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003588

    And re the former, the fact that silicon-based information architectures are now capable of learning and complex pattern recognition to the point where they are replacing white collar jobs should give bio-centrists pause.

  72. VicP, #71: Fundamentally agree that it is an analog process, but we do display one huge calculation when we use the process you cited, namely, we walk by moving the left side of our body AND then the right side of our body.

    What’s the calculation there? I don’t see it. A person can walk without thinking “1 . . . 2 . . . 1 . . . 2 . . .” And, no doubt, a dog walks without the benefit of “1 . . . 3 . . . 2 . . . 4 . . . 1 . . .”

  73. Cog, left can be odd and right even, plus we know by symmetry that all of the odds are inside of the evens. More subtly, we learn language by listening to others. If you are observing another human being with an identical sensorimotor system, are you calculating? Think of language, mirror neurons, etc.

  74. From Wikipedia “The muscular system consists of all the muscles present in a single body. There are approximately 650 skeletal muscles in the human body,[13] but an exact number is difficult to define. The difficulty lies partly in the fact that different sources group the muscles differently and partly in that some muscles, such as palmaris longus, are not always present.”

    The skeletal muscles come in pairs, so for every right muscle there is a left. 650? …approximately? …666

  75. VicP, #77: Cog, left can be odd and right even, plus we know by symmetry that all of the odds are inside of the evens. More subtly, we learn language by listening to others. If you are observing another human being with an identical sensorimotor system, are you calculating? Think of language, mirror neurons, etc.

    Vic, having read your explanation, I’m sorry, I still don’t get how calculation comes in when we alternate sides in walking. Left can be odd, and right will then be even, but odd and even could just as easily be assigned to right and left, respectively. More to the point, why would they be assigned to sides of one’s body at all; what does such an assignment mean? “Odd” and “even” are aspects of numbers, and human legs aren’t numbers.

    It’s possible, of course, to count off steps, aloud or silently, when you walk. You can chant “One, two” instead of “Left, right.” But you can equally chant “Red, blue” or “John, Mary” or “T. rex, H. sapiens.” But you can also walk without doing any of this.

    Our status as bipeds dictates that using the two legs alternately will be the most sensible way to get around on foot, assuming that both sides work. (Hopping is inefficient.) I’ve read that the brain has a script for walking. We alternate sides without having to think about it.

  76. Sci,

    I linked poorly – the specific comic is this one: http://www.smbc-comics.com/index.php?id=4147

    Where does it lose you? Does the question ‘What is a heap?’ not have any ambiguity for you? Is it clear cut and very specific to you?

    SelfAwarePatterns,

    The phrase “purely functional” is a bit of a false standard. Nothing is purely functional. Any function takes place in a physical substrate of some kind. The software you’re using to read this isn’t purely functional. It requires the device you’re also using. But that software can be realized on other devices (although its physical manifestation on other devices might be very different).

    What software? Software is designed, isn’t it? Are you saying consciousness is designed?

    If human consciousness needs a particular mechanical object in order to be, the best you can do is run a simulation of that mechanism down to a very fine grain…and then either it’s impractical, as it’d just make more sense to have the mechanism itself rather than use vast numbers of computers to simulate its existence, or it’s not simulatable – the simulation, being run on a mechanism itself, cannot be fine grained enough.

  77. Callan S,
    You saw an assertion about design in that quote? No, I don’t think human or animal consciousness was designed; I think it evolved as a survival advantage. But I do think machine consciousness will eventually be designed.

    It may well never be practical to run a human mind on a silicon substrate. We may need to build a physical neural network to do it. Although I think certitude on any of this is unjustified at this point.

  78. VicP:

    Fundamentally agree that it is an analog process, but we do display one huge calculation when we use the process you cited, namely, we walk by moving the left side of our body AND then the right side of our body. You can derive an idea of where numbers and computations come from.

    I don’t see how that’s a computation in any meaningful sense (and again, sure, we can map the states of some computation to a—nearly arbitrary—physical system, but doing so doesn’t tell us anything about that system, merely about our ability to find creative enough mappings).

    Consider a different example: old steam engines possess a centrifugal governor, that is, a device that governs the speed of an engine by releasing excess pressure. The engine drives the governor, and the faster it rotates, the more its flyballs are displaced; if that displacement exceeds a certain angle, a valve is opened, releasing steam, and hence, lowering the pressure, thus lowering the engine’s speed.

    All of this is simply due to the physical interaction; it is ultimately the pressure itself that drives its release (by driving the engine faster, which makes the governor rotate faster, and so on). In particular, there is nowhere a computation of the speed of the engine whose value then drives the opening of the valve.

    Nevertheless, in certain contexts, it may be useful to describe things in such a way. But this description is just that: a description; the physical processes of the governor carry on quite independently of it. Hence, computation is just a gloss on the physical system, whose operation can be exhaustively described without reference to it.
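    (Purely as an illustration of that descriptive use—all constants and names below are invented for the example—the governor can be written down as coupled dynamics in which pressure, speed, flyball displacement and valve simply push on one another; no ‘speed value’ is computed and then acted upon, except in our description:)

        # Toy flyball-governor dynamics (constants made up for the example).
        def governor(steps=1000, dt=0.05):
            pressure, speed = 10.0, 0.0
            for _ in range(steps):
                speed += dt * (0.5 * pressure - 0.5 * speed)     # pressure drives the engine, friction slows it
                displacement = 0.2 * speed                       # flyballs swing out with speed
                vented = max(0.0, displacement - 1.0)            # past the threshold the valve bleeds steam
                pressure += dt * (0.2 - vented)                  # boiler feed minus what the valve releases
                pressure = max(pressure, 0.0)
            return round(speed, 2), round(pressure, 2)

        print(governor())  # speed and pressure settle near the equilibrium set by the valve threshold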

    In the case of the mind, it seems to me that excessive reliance on the computational description is, in fact, a hindrance to formulating a theory. There, the common assumption seems to be that indeed certain values are ‘computed’, and then, based on that computation, actions are executed. This is misleading at best, and directly segues into homunculus problems (since after all, computations are just an interpretation of physical systems, which need an interpreter—but it is our faculty of interpretation that we seek to explain).

    Hence, it seems to me, we should instead concentrate on finding the principles according to which the physical system that our brain is operates, rather than what computational glosses we can invent for it.

    SelfAwarePatterns:

    I’m not satisfied by just saying that recognition or associations happen, I want to know how they happen, and from everything I can see, how they happen is a complex framework of signal processing, which it makes sense to call computation, or information processing.

    I share the same wish for knowing how things happen. But computation is far from the only framework in which we can formulate explanations—see the above explanation of the flyball governor (do you agree that there are no computational notions in that case?).

    The problem with the information processing metaphor is that the notion of information either presupposes semantics, or is better captured in terms of physical differences—that is, either you’re assuming the information to mean something to, e.g., the brain; but this is circular, as how meanings work is what we’re trying to explain. Or, you’re considering ‘information’ to be merely a string of distinct symbols, or physical states; but then, we’re really just talking about physics, as what those distinct states will do is exhaustively explained by their physical characteristics. So really, it adds nothing, or misleads.

    Fair enough. It doesn’t sound like we necessarily disagree ontologically, just definitionally. I’ve never found definitional debates to be particularly productive.

    No, there is a substantial ontological distinction between our viewpoints. You must hold that, in some sense, what computation a physical system implements must be objectively attributable to that system—since otherwise, you’ve already smuggled a subjective datum into a theory trying to explain subjectivity. I think that this is demonstrably wrong, that what computation a physical system instantiates is always a matter of arbitrary and subjective choice; and if that’s right, then the computational theory just can’t ever get off the ground.

    Tom Clark:

    I think SelfAwarePatterns rightly suggests that the computational level can be explanatory, since the behavior of computers, computerized control systems and robots (all of which are physical systems) mostly derives from their computational design and functions.

    Sure, there is a possibility of using computational language to find explanations for many different phenomena. As I said above, you can validly say that the governor, in some sense, computes the speed of the engine—its states can be mapped to an FSA doing just that. But it does not perform its work because of the computation; and the temptation in computational theories is to say that the mind is such-and-such a way because of the computation that is being implemented. I think this is essentially a category mistake regarding what kind of thing a computation is, and thus, it’s better to try and abstain from using such language in the first place.

    IOW, it’s fine to use computational language as long as it’s clear that one is merely using it as a descriptive technique; but when one isn’t clear about this distinction—as I think the idea of a computational mind fails to be—then it’s pernicious, and I think that it’s today the single greatest impediment to a theory of the mind there is.

    I mean, ultimately, the belief that a computation could give rise to a mind is the same as the belief that reading a description in a book of some character’s thoughts and feelings causes those thoughts to be thought, and those feelings to be felt (and while there may be a certain degree of sympathy with a well-written character, you don’t feel pain if she stubs her toe). Because the description in the book is just a series of symbols; and that’s likewise all that a computer ever produces. All interpretation comes from the reader, respectively the user.

    And re the former, the fact that silicon-based information architectures are now capable of learning and complex pattern recognition to the point where they are replacing white collar jobs should give bio-centrists pause.

    I think many so-labeled ‘bio-centrists’ are really the victims of a misunderstanding. The argument is made that consciousness is biological, and the biological is non-computational; and hence, the computational doesn’t suffice for consciousness. But this in no way implies that only the biological gives rise to consciousness—we know it’s sufficient, by our own example, but we don’t know it’s necessary, and I think few people are really arguing that it is. But what is necessary is something non-computational—which however may just as well be realized in terms of silicon chips as cells.

  79. Just like many animals, we have the sense of colors and constantly use that sense to distinguish objects, along with other visual sense modalities like distinguishing depth, verticality, etc. Except unlike the other animals, we have a language of sight and colors which is derived from this visual system. The great question is where we derive the language of relations known as mathematics. The left and right sides of the body are gross computations of symmetry, and it is obvious from the comments that many don’t see this. Well, these two sides are heavily interconnected via biological structure, so it is logical that many would not see this.

  80. Jochen in 82: “…what is necessary [for consciousness] is something non-computational — which however may just as well be realized in terms of silicon chips as cells.”

    Just curious about what you think this non-computational element might be. Perhaps you’ve touched on this before so apologies if I missed it.

  81. Tom Clark:

    Just curious about what you think this non-computational element might be. Perhaps you’ve touched on this before so apologies if I missed it.

    Well, it’s essentially the non-structural, intrinsic physical properties that make a thing be that particular thing that it is. See, if you model something, then you essentially instantiate the structural properties—the relationships between its constituent parts—of the system to be modeled within the modeling system. You do this, basically, by an act of interpretation: you interpret, say, the relationships between the distances of the little spheres in an orrery as the relationships between the distances of the planets; using this interpretation, you can then model the real solar system—such as finding dates when particular constellations appear.

    But an orrery is still very different from the solar system; and in fact, every model is different from the system it models (otherwise, that whole modeling business would have no point at all). In fact, it is impossible to fully instantiate one system using a different one—of course, you can just use the same materials to build a second version of the original system, but then, all you have done is to build a copy. In fact, what can be ‘transferred’ to a different system is exactly that information that would be contained in a blueprint of the original system. But the blueprint is obviously different from the original system, and, crucially, needs to be interpreted and understood in order to create a copy of the first system, or build a model.

    A computer now is a clever device for ‘animating’ arbitrary blueprints. In some sense, a computer is simply a device rich enough so that it can be imbued with any structure whatsoever—this is computational universality. But then, whatever makes it into a computer from some system to be simulated—modeled—just is that portion of the information that is also contained in a blueprint; hence, it is dependent on interpretation and understanding. In contrast, the original system needs no interpretation: it just is that particular system. Two different observers, with no shared prior knowledge, will come to the same conclusions regarding the original system—they will discover its objective physical characteristics. They will not (in general) come to the same conclusions about a blueprint, or about a computation.

    So what the non-computational stuff is, to me, is simply those properties that make a physical system the particular system that it is, and not any other—this is close to Peter’s use of haecceity, I think. This is what’s lacking in a computer simulation of a physical system, and what makes it possible to interpret the simulation arbitrarily, while physical systems are not subject to arbitrary interpretation. This is what makes a simulated black hole on your desktop computer incapable of sucking in your coffee mug (no matter how detailed the simulation gets), a simulated photosynthetic reaction not produce energy, and a simulated large intestine incapable of digesting actual food. And it’s also what makes a simulated brain not give rise to any conscious experience—simply because it isn’t a brain, it stands to a brain in the same relation as a drawing does to the thing it depicts—one of interpretation-dependent representation.

    People think that because in some sense, you can model absolutely anything on a computer, it’s plausible that everything might just be, somehow, computation; but that’s just as wrong as thinking that because you can draw everything whatsoever, everything might be just a drawing. There’s a difference between the real thing and the drawing, simulation, or other representation; and it’s that what makes that difference that I mean by ‘non-computational’ (or ‘non-drawable’).

  82. Jochen in 85: “So what the non-computational stuff is, to me, is simply those properties that make a physical system the particular system that it is, and not any other—this is close to Peter’s use of haecceity, I think… And it’s also what makes a simulated brain not give rise to any conscious experience—simply because it isn’t a brain, it stands to a brain in the same relation as a drawing does to the thing it depicts—one of interpretation-dependent representation.”

    Ok, I get why a model of something isn’t the thing itself. But I thought you were saying that the brain is doing something, or *is* something, non-computational, which entails that it’s conscious, something that a computer can’t do, or isn’t. Rocks presumably don’t have it, but certain physical systems like brains do. And I was wondering what that something was.

  83. Tom Clark:

    Ok, I get why a model of something isn’t the thing itself. But I thought you were saying that the brain is doing something, or *is* something, non-computational, which entails that it’s conscious, something that a computer can’t do, or isn’t.

    Well, it’s nothing a ‘computer’ can’t do, in principle. It just can’t do it by computation. In some sense, I suppose it’s a fair characterization to claim that ‘brains are computers’. They’re also squishy sponges. But neither their computational nor their spongy natures are what causes them to give rise to consciousness.

    Otherwise, I’m not sure how to give a more straightforward answer to your question than ‘those properties differing between a thing and a model, or blueprint, of that thing’—which surely can be manifold.

  84. Jochen: “I think many so-labeled ‘bio-centrists’ are really the victims of a misunderstanding. The argument is made that consciousness is biological, and the biological is non-computational; and hence, the computational doesn’t suffice for consciousness. But this in no way implies that only the biological gives rise to consciousness—we know it’s sufficient, by our own example, but we don’t know it’s necessary, and I think few people are really arguing that it is. But what is necessary is something non-computational—which however may just as well be realized in terms of silicon chips as cells.”

    These just strike me as god-of-the-gaps arguments. We wouldn’t be debating if we knew. The question is what are the best assumptions to take on.

    In the case of computationalism, I don’t see what warrants any of their basic assumptions, let alone what we gain (aside from more philosophy) by taking them on. Solve the hard problem of content first. Until then it just seems like banging fuzzy mystery against fuzzy mystery trying to strike some spark of insight.

    In the case of intentionalism, more generally, I just don’t see how anything could be decisively answered, given the radically heuristic nature of intentional cognition. Why should we think cognitive modes adapted to solving absent information regarding what’s going on will be able to tell us what’s going on?

    Even if you accept the likelihood of some ‘necessary more,’ shouldn’t we restrict ourselves to those ‘mores’ that actually promise some kind of resolution?

  85. Scott Bakker:

    These just strike me as god-of-the-gaps arguments.

    Really? I took myself just to be pointing out a gap, nothing more.

    In the case of computationalism, I don’t see what warrants any of their basic assumptions, let alone what we gain (aside from more philosophy) by taking them on.

    I basically agree—there’s no real reason to suppose that consciousness is ‘due to’ computation, whatever ‘due to’ means in this context. But I think one can even go further and demonstrate that computation isn’t the right sort of thing to use for explaining consciousness–which entails no claim that I know what the right sort of thing is. One can know how something doesn’t work, without knowing how it works.

    In the case of intentionalism, more generally, I just don’t see how anything could be decisively answered, given the radically heuristic nature of intentional cognition. Why should we think cognitive modes adapted to solving absent information regarding what’s going on will be able to tell us what’s going on?

    Because even flawed heuristics contain a spark of the truth, if you’ll excuse the dramatic wording. We can use them, put them to work, just as we did discovering, say, the fundamental structure of matter—certainly, there’s no argument that our cognition is any better attuned to quantum physics than it is to finding out how our cognition works. So I simply see no reason to despair: yes, our tools are flawed, but by and by, we’re coming to realize what those flaws are, and how to work around, or even with, them. Throwing in the towel now is just premature.

    Even if you accept the likelihood of some ‘necessary more,’ shouldn’t we restrict ourselves to those ‘mores’ that actually promise some kind of resolution?

    I’m not sure what ‘more’ you think I’m accepting; my default stance is thoroughly materialist (although I have my doubts on whether science as it’s structured currently is capable of giving an exhaustive analysis of matter). So the only sorts of things I’m willing to posit are those we know are there already, nothing ‘more’.

  86. Our best bet now is the putative brain mechanism that has successfully predicted vivid hallucinations in the SMTT experiments — the retinoid mechanism.

  87. Selfawarepatterns: “You seem to see that physical requirement as a problem. But I’ll note again, there is no such thing as a purely computational or purely functional system. If that’s our working definition of computationalism for this discussion, then even computers can’t be considered computational, since they must have a physical implementation and physical I/O system, and their performance is intimately tied to their physical architecture.”

    The ‘physical requirement’ is a problem for the computationalist when it refuses any clear delineation of implementation and function. And as far as I can tell, this delineation remains a topic of perennial controversy. But a computationalist who agrees that computation is simply a bit player in a system is not much of a computationalist, don’t you agree?

    So all I need to do is rephrase my question: What evidences the claim that consciousness is *functionally independent enough* to fundamentally fall into the computationalist’s explanatory purview?

  88. Jochen (89): But then what was the ‘misunderstanding’ alluded to supposed to mean? I wouldn’t have replied otherwise! 😉 On an eliminativist approach such as my own there is a gap requiring explanation, but that explanation requires no fundamental revisions or posits, simply an appreciation of the kinds of cognitive binds arising out of heuristic neglect.

    I see you’ve streamlined your position quite a bit, Jochen. I approve!

    So how far are you willing to eliminate?

  89. Scott,
    I’m not sure what “functionally independent enough” means. Perhaps the question might better be asked: of the structures that make up the conscious self, what portions could be described as computational? We know from brain-damaged patients, for example, that the cerebellum can be lost, and while it will leave the patient clumsy, they will still be the same person. But loss of any cerebrum region can alter the self, leading to conditions such as aphasia or hemispatial neglect. Loss of the thalamus destroys consciousness, as does loss of the wrong nuclei in the brainstem. Damage to the medulla or anterior cingulate cortex can damage motivation, etc.

    Based on the neuroscience I’ve read, all of these regions seem to be primarily about processing signals, receiving inputs from the peripheral nervous system, and producing outputs to that system. Of course, since the signals are electro-chemical in nature, they are affected by the mix of hormones and neurotransmitters available, but to me, these are just another signalling mechanism.

    Again, no computational system exists in isolation. In computers, the addition of a specialized coprocessor can dramatically improve performance, and processors often have to monitor temperature and sometimes adjust their processing accordingly. Computation is a physical process with every part influenced by physical environment.

  90. Scott:

    But then what was the ‘misunderstanding’ alluded to supposed to mean?

    Simply that if somebody argues that ‘consciousness is biological (rather than computational)’, they’re often taken to mean that it is exclusively biological, i.e. that it couldn’t be replicated with anything else than the appropriate biological machinery. Searle, for instance, is often supposed to hold a view such as this, while he’s been quite clear that he has no qualms with the possibility of artificial conscious machines that don’t have a biological substrate.

    I see you’ve streamlined your position quite a bit, Jochen. I approve!

    Hmm, I don’t really think I’ve changed my position much since we last spoke (I was certainly just as materialist then). Certainly I still broadly agree with the view I put forward in my article on intentionality, which I think had already been submitted back then.

    I think there really was a bit of a misunderstanding between the two of us—for some reason, you had me pegged early on as subscribing to an idea of irreducible intentionality, but that was always a bit of the picture you colored in yourself (despite my attempts to the contrary).

    (And at the risk of gambling away your approval rashly, I also still don’t think terribly much of eliminativism, sorry.)

  91. Jochen –

    In an exchange a post or two back I alluded to “Gibson’s ecological psychology approach to perception” with which you said you were unfamiliar. There is new paper on that approach that addresses many of the issues of special interest to you (eg, intentionality, symbol grounding), so you might find it interesting. It is cited in this blog post, which also provides some motivational background:

    http://psychsciencenotes.blogspot.com/2016/06/ecological-representations.html

    FWIW, having been influenced by the authors (via their blog), I think about these matters in roughly the way they do – altho at a much less sophisticated level, of course.

  92. @ Arnold: I’d agree with that. It seems to me an endogenous field being correlated with consciousness would be enough to falsify the idea that implementation of a Turing Machine (but apparently only processes associated w/ specific magic Holy Grail programs?) produces conscious entities.

  93. The short answer is that if you look inside a running computer you do not see data or computations; what you see are machine states, things occurring ‘in time’. If you look inside a brain, you also see neural states occurring ‘in time’. It is not valid to call brain states computational, because an electronic computer has time simulated by an electronic clock, while for a brain or a biological sensorimotor system, time emerges as a sense just like color. Time perception is necessary for movement. Think about it, and you will see why the above comments display so much circularity. For an artificial consciousness, time is actually being simulated or modeled. Time itself, or the non-existence of time, is key here.

  94. SelfAwarePatterns

    You saw an assertion about design in that quote? No, I don’t think human or animal consciousness was designed; I think it evolved as a survival advantage. But I do think machine consciousness will eventually be designed.

    It may well never be practical to run a human mind on a silicon substrate. We may need to build a physical neural network to do it. Although I think certitude on any of this is unjustified at this point.

    Your idea rests on the notion that software from one computer can be run on another type of computer. But that notion refers to something designed – the software. What if something undesigned cannot be run on something else? At best it’s another type of consciousness?

  95. VicPanzica, but clearly no one has much sense of the time passed while sleeping? Perhaps time is simulated for us as well, while our cells have slow burn chemical reactions as their time pieces.

  96. Callan,
    I’m not clear how design versus evolved makes a difference. Evolved systems often seem designed. Indeed, we could say they are designed by natural selection.

    I will say this: the software / hardware divide is an innovation of modern computing, one that brains have never had a need to evolve. You couldn’t copy a mind from biological brain A into biological brain B as you can a program from computer A into computer B. Biological brains simply aren’t put together that way.

    But there’s no reason in principle that a technological brain couldn’t run a copy of a mind from a biological brain. Of course, with any foreseeable technology, obtaining such a copy would almost certainly involve a destructive scan of the original brain.

  97. SelfAwarePatterns:

    Computation is a physical process with every part influenced by physical environment.

    I think you need to be more careful in distinguishing a computation from its implementation. A computation is something formally specified—an algorithm, a formula, a Turing machine specification. To get this to the physical level—to make a physical system ‘perform’ that computation—we need an implementation: some set of criteria that specify when a physical system can in some sense be regarded as isomorphic to the formal specification of the computation.

    Let me give an example. In lieu of the computation that implements a mind, think of the much simpler case of implementing addition. For simplicity, let’s only consider addition of single-digit binary numbers with carry. This process of addition (let’s call it A) is given by some computation C_A; our task is then to decide whether a given physical system S implements C_A, and thus, performs addition. This is analogous to a mind M being given by some computation C_M, and us having to decide whether a given physical system implements C_M, and hence, produces a mind.

    Now, the formal specification of addition is the following: we have two logical variables—the two bits we want to add. Call them A and B. Furthermore, we have two results: the sum of both, and the carry. Call them C and D. The computation is now specified as follows: C = A XOR B; D = A AND B. Hence, the value of C is the sum of both inputs, and D indicates whether a carry occurred. For example, for A = 0, B = 1, we have C = 0 + 1 = 1 and D = 0; for A = 1 and B = 1, we have C = 0 and D = 1, indicating that a carry occurred. (This is what’s often called a half-adder.)

    Let’s now try to find a physical system S such that it implements this computation. Consider for this task a system that is specified as follows. At both inputs (I1, I2) and both outputs (O1, O2) of the system, either a high (h) or low (l) voltage can be applied. The system is wired up such that if a high voltage is applied to either of I1 or I2, a high voltage will be present at O1; if both inputs are at l, or both are at h, O1 is at l. Furthermore, O2 is at h if and only if both I1 and I2 are at h, and at l otherwise.

    Now comes the implementation relation: we choose a high voltage to represent logical 1, and a low voltage to represent 0. With this identification, S implements the addition function as detailed above: we can now use S to calculate the sum of arbitrary single-digit binary numbers (and, by chaining many similar such devices, we can calculate the sum of arbitrary binary numbers). Hence, we can consider S to be an adder.

    However, there is nothing special about high voltage that makes it inherently map to logical 1. We could just as well use a different mapping: say, we now use low voltage to represent logical 1, and high voltage to represent logical 0. Under this mapping—which is just as valid as the one we used before!—the device S no longer implements the computation C_A. Instead, it will, for instance, for A = 0 and B = 1, yield C = 0 and D = 1 (where addition would require C = 1 and D = 0), since A = 0, B = 1 now means I1 = h, I2 = l, and hence O1 = h and O2 = l. Thus, S—which is physically unchanged from before—no longer can be called an adder; it no longer performs addition.

    The only difference between the two cases is our arbitrary choice of how to interpret voltage levels in terms of logical values; hence, whether S is an adder depends on how we make this arbitrary choice. It is not in any way a property of the system; it is a property of how the system is used and interpreted by its user.
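
    To make the arbitrariness concrete, here is a minimal sketch (Python, with purely illustrative names; nothing in the argument depends on them) of the very same device read under both mappings:

      # One fixed physical device, two equally arbitrary readings of its voltages.
      def device(i1, i2):
          # Physical behaviour as specified above: O1 is high iff exactly one
          # input is high; O2 is high iff both inputs are high.
          o1 = 'h' if (i1 == 'h') != (i2 == 'h') else 'l'
          o2 = 'h' if (i1 == 'h') and (i2 == 'h') else 'l'
          return o1, o2

      MAP_1 = {'h': 1, 'l': 0}   # high voltage read as logical 1
      MAP_2 = {'h': 0, 'l': 1}   # the inverted, equally legitimate reading

      def run(a, b, mapping):
          # Encode the logical inputs as voltages, run the device, decode the outputs.
          encode = {v: k for k, v in mapping.items()}
          o1, o2 = device(encode[a], encode[b])
          return mapping[o1], mapping[o2]   # (C, D)

      for a in (0, 1):
          for b in (0, 1):
              print(a, b, run(a, b, MAP_1), run(a, b, MAP_2))
      # Under MAP_1 the table is exactly the half-adder (C = A XOR B, D = A AND B);
      # under MAP_2 the untouched device instead yields C = A XNOR B, D = A OR B.

    The function device never changes between the two columns of output; only the dictionary decoding voltages into logical values does, and with it the verdict on whether S counts as an adder.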

    By extension, on the computational theory of mind, the same would be true for the question of whether a physical system instantiates a mind: since mind M corresponds to the computation C_M, and whether C_M is implemented by the system depends on just such an arbitrary interpretational choice as above, whether a physical system implements a mind (on the computational theory) is a question of how that system is interpreted—of how its states are related to the logical values (say) used in the specification of the computation (which is, recall, a formal object). Hence, whether a physical system implements a mind in this case can’t be discovered by simply examining that system, just as we can’t discover by examination whether a system performs addition. We have to interpret it as doing so.

    Now, in the above, I’ve used what’s sometimes called the ‘simple mapping’ account of implementation. Other notions of implementation have been articulated, precisely because of the above problem (none, however, is widely regarded as successful). In my opinion, this misses the point somewhat: I have nowhere claimed that the simple mapping account is necessary to implement a computation, but merely that it is sufficient—which it clearly is: using the mapping, I can implement the computation I chose. But this means that I am also free to use a different, equally well-justified mapping to implement a different computation.

    Also, note that an appeal to the environment fixing the computation in some way does no work here: the environment merely supplies high or low voltage levels (the ‘sensory input’ of our system), and the system in return produces high or low voltage levels (the ‘reaction’ of the system to the environmental input). But what voltages are produced is independent of what computation we consider to be performed; the computational level is truly an extra add-on to what occurs physically, and, from the perspective of explaining the behavior of the system, an extraneous one.

    Thus, even if there were some computation C_M giving rise to a mind M, the computational account fails to answer the question of whether this computation is implemented in some given physical system in an objective way; but if we need to admit subjective notions, such as interpretation, we give the game away, as in trying to explain how minds come about, we need to appeal to pre-existing minds.

    Charles:

    There is a new paper on that approach that addresses many of the issues of special interest to you (eg, intentionality, symbol grounding), so you might find it interesting.

    I had already come across that paper, via Sergio’s blog; I haven’t had the time, so far, to give it a proper reading, but on a casual read-through my first reaction was basically that of Jim Hanlyn in the blog post’s comments: that something can be used as a representation does not imply that that something just is a representation (just as the fact that something can be used to perform a computation does not mean that it just performs that computation); you need a representation-using agent to make it a representation in the proper sense, and it doesn’t seem to me that the paper solves the question of how such a representation-using agent could come about. But I’ll try and find the time for a more thorough reading.

  98. I will say this: the software / hardware divide is an innovation of modern computing, one that brains have never had a need to evolve. You couldn’t copy a mind from biological brain A into biological brain B as you can a program from computer A into computer B. Biological brains simply aren’t put together that way.

    But there’s no reason in principle that a technological brain couldn’t run a copy of a mind from a biological brain.

    Well, why? If you can’t just transfer a mind from biological brain A to biological brain B, why would the technological transfer be any more successful?

    Further, I’m reading your use of ‘transfer’ as being something like carrying a floppy disk from one computer to another. Are you talking about carrying a sliver of brain tissue from a skull to a computer?

    Otherwise, what do you mean by transfer? In a lot of respects I’d say there is nothing to transfer.

    I think the best you can get is a kind of migration – synthetic synapses and natural born synapses placed near each other to permit link ups, the former programming the latter – until the former unfortunately die of old age. And it’s a question of what program is left in the latter.

    Whereas a lot of people think there is some bit of them that can transfer over.

    Of course, with any foreseeable technology, obtaining such a copy would almost certainly involve a destructive scan of the original brain.

    I have to say I’m skeptical of this, inasmuch as I don’t like fiction that runs that way (you can blame it on taste – it’s at least partly that). I feel a lot of fiction has a ‘destructive scan of the brain’ simply to avoid philosophical questions which would disrupt the action in the fiction. Like, a supercomputer is going to accurately simulate the molecular interactions that contribute to a human consciousness, but we can’t scan a brain without making it into yogurt? It’d seem more a way of avoiding the fact that the whole thing merely duplicates the brain involved – there is no transfer.

  99. Ah darn, got my order the wrong way around – I think I have a cold ATM. The corrected version:
    “I think the best you can get is a kind of migration – synthetic synapses and natural born synapses placed near each other to permit link ups, the latter programming the former – until the latter unfortunately die of old age. And it’s a question of what program is left in the former.”

  101. Callan, well of course there is no time sense while sleeping, because you have to be conscious to have a time sense, along with color sense, smell, touch, etc. While sleeping we do have dreams that involve the senses, including time, but they can be heavily distorted because we are not heavily connected to the environment or our waking reality.

    Body cells may have an internal, independent time sense or basic Being which we share with other members of the biological kingdom.

  101. Selfawarepatterns: So what makes you a computationalist then? Because you think consciousness can be artificially recreated? Because you think information processing will have some explanatory contribution to make?

    I’m not convinced you are a computationalist! Which is a good thing.

  102. Jochen: Misunderstanding now understood.

    You’ve streamlined your presentation here, then. A lot fewer dangling commitments, a clear eye on the dialectical prize. Some great writing. I would approve of that, but you’re driving the wrong way down the road! 😉

  103. Jochen,
    I’m well aware of the interpretation issue with computation in physical systems. We really did pound this issue out earlier in the thread. What I think maybe you’re not aware of is the interpretation issue of consciousness in physical systems.

    We are systems (computational or otherwise) that have a certain type of experience. When we look at other mentally complete adult humans, we can rationally assume that they have a very similar type of experience, which we call consciousness. But the further we move away from that specific model, the less clear, the more interpretational the existence of consciousness becomes.

    Are newborns conscious? Before answering, consider that most of their behavior is driven from their brainstem. The axons in their cerebrum are not yet well myelinated, meaning that signals can’t travel far without interference. They still have a lot of synapses to grow. And their behavior is identical to that of children born with hydranencephaly, a birth defect in which the thalamocortical structures are missing. (Children often don’t get diagnosed with this condition until they fail to develop past the cognition of a newborn.)

    What about insects? Are they conscious? Or what about the C. elegans worm, with its 302 neurons? These creatures strive to survive, seek food, and generally interact with their environment. But whether their experience is anything like ours is questionable.

    Then there are cephalopods. Their brain structure is radically different from ours, but they display strong signs of intelligence. Almost certainly their experience of the world is extremely different from ours. Are they conscious in any human sense of the term?

    This is all before we even get to the issue of brain injured patients. Their consciousness can be damaged in profound ways. At what point do we decide that the damage has taken away their consciousness? By their responsiveness? Which may be nothing but reflex action?

    Considering all of this, I think we have to face the fact that consciousness is a systematic process, one that can be present in degrees, and can be implemented in many different ways. Whether a system radically different than us has an experience anything like ours may forever be a matter of interpretation, of where we decide to arbitrarily draw the line.

    This is why the interpretation of computation in physical systems doesn’t bother me. As to the circularity argument, I could only repeat what I said in comment #15.

  104. Scott,
    There are many types of computationalism. I don’t adhere to any one specific conception. I find most of them too premature in their specifics. I prefer to wait to see what computational neuroscience discovers. But the answer to both of your questions is yes, at least unless new data at some point makes them untenable.

  105. SelfAware “… consciousness is a systematic process, one that can be present in degrees…”

    I don’t think consciousness as such can be present in degrees, but the *content of consciousness* certainly can be present in degrees. See “Evolution’s gift: Subjectivity and the phenomenal world” on my Research Gate page.

  106. Scott:

    A lot fewer dangling commitments, a clear eye on the dialectical prize. Some great writing.

    Thanks!

    I would approve of that, but you’re driving the wrong way down the road! 😉

    I expect you’re probably right about that; after all, I can have no reasonable expectation of succeeding where people much smarter than me have floundered. But, y’know, sometimes you gotta go where the heart wants to go, even if you’ll never get there, or you find the place in ruins. Not much point in going where you’ve no intention to end up. 😉

    SelfAwarePatterns:

    What I think maybe you’re not aware of is the interpretation issue of consciousness in physical systems.

    That’s a very different issue though. There’s an objective fact of the matter regarding whether there’s any conscious experience in the cases you mention—there either is something it is like to be them, or not. We might not be able to easily get at that fact, but that doesn’t mean it’s not there.

    With computationalism, however, there simply is no fact of the matter, as the above shows.

    As to the circularity argument, I could only repeat what I said in comment #15.

    The problem is that that doesn’t help the case—how do you think some minds could arrive at a consensus, when the very existence of those minds depends on that consensus? They’d have to pre-exist the consensus, in order to come to the consensus that’s needed for their existence—which is viciously circular.

  107. Arnold,
    That’s a lot of material to go through for one proposition. Would you mind giving or linking to a quick summary? (Totally understood that the full case may require immersion.)

  108. Jochen,
    “There’s an objective fact of the matter regarding whether there’s any conscious experience in the cases you mention—there either is something it is like to be them, or not.”
    As far as I can see, this is an untestable assumption.

    “With computationalism, however, there simply is no fact of the matter, as the above shows.”
    In both cases, it’s a fact of the matter that there are physical systems. If they have physical interfaces to the environment, then the effects of their internal dynamics on the external environment is also a fact. In both cases, whether to regard them as a particularly complex category of system is often a matter of perspective.

  109. SAPatterns,

    “Are newborns conscious? Before answering, consider that most of their behavior is driven from their brainstem.”

    Well, consider that newborns can communicate because their mothers can interpret their cries, and soon they learn to associate the crying with the appearance of their mothers.

    The real answer may be how environmentally embedded they are: during their first years of development they constantly learn from and are fascinated by their environments. Consider the story of the “conscious” robot below, which failed to recognize its environment and “escaped”,

    http://www.livescience.com/55164-russian-robot-escapes-lab-again.html

    …though one theory is someone put an algorithm in it to find the nearest bookstore and get a copy of the latest RS Bakker novel.

  110. SelfAware,

    Briefly, according to the retinoid theory of consciousness, when the brain’s representation of the volumetric space around a representation of the locus of perspectival origin in this space reaches a threshold level, one is conscious. This is the primitive phenomenal world that can be filled up with all kinds of sensory and memorial patterns which determine the degree of conscious content.

  111. SelfAwarePatterns:

    As far as I can see, this is an untestable assumption.

    It’s not; it’s simply the law of the excluded middle: something either is conscious, or it isn’t. That’s an exhaustive set of cases—one of them must apply. Or do you believe that something could be neither conscious nor not-conscious? If so, I think the hope for a scientific theory of consciousness goes right out of the window.

  112. Jochen,
    In my experience, the law of the excluded middle is mostly invoked in places where it doesn’t apply. Why specifically would it apply here?

    Scientific investigation of consciousness is as possible (albeit far more difficult) as scientific investigation of what programs a foreign computer is running.

  113. VicP,
    A healthy baby’s brain and cognitive abilities do develop rapidly. I was really only talking about newborns in their first few weeks of life. That said, we have to be careful not to project, not to infer more sophistication than is there. The hydranencephalics I mentioned seemingly develop (according to Antonio Damasio’s account) favorite caregivers, music preferences, and other outwardly emotional responses, just with a brainstem and hypothalamus.

    Interesting on the escaped robot. Thanks, but I fear I have to side with the skeptics that it’s most likely a publicity stunt.

  114. SelfAwarePatterns:

    In my experience, the law of the excluded middle is mostly invoked in places where it doesn’t apply. Why specifically would it apply here?

    Well, it’s a law of classical logic, so outside of exotic views such as dialetheism, it pretty much always applies. After all, if you have something in a box, and I can never ever look inside that box, I still know that it’s either green, or not green.

    So, if something is conscious, then there is an associated perspective on the world, or not; either there is subjective experience associated with it—in which case, the experiencing entity will know—or not—in which case, there is no experiencing entity. The idea that something can be neither conscious nor non-conscious simply fails to make sense, and that your view forces you into such a bind should really lead you to question it.

    Now, consciousness may, to a certain extent, be a question of definition, like life. There may be issues of vagueness, of how precisely to draw the line between consciousness and lack thereof. There may even be a continuum of consciousness, where any such line is to some extent arbitrary. But, once such a line is drawn, every system falls to one side or the other.

    But this isn’t analogous to the case of computation: there, by the above reasoning, there is no determinate fact of the matter on which side a system falls, even once the line is drawn. Callan linked above to the SMBC comic discussing the Sorites paradox of when something becomes, or ceases to be, a heap. There’s real, interesting discussion to be had on such issues, but we know that, if we designate some amount of ‘stuff’ as a heap, then we’ll be able to divide the world of assemblages cleanly into heaps and non-heaps; however, the lack of objectivity of computation means that there simply isn’t an objectively defined amount of stuff associated with each case, and hence, that depending on interpretation, a given system can fall on either side of the line.

    So even if consciousness is a sliding scale, every system falls at a given point on that scale; but under computationalism, this is no longer true.
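
    Put as a toy sketch (Python, with an invented threshold), the contrast is that once the line is drawn, heap-hood is settled by the pile itself, whereas ‘implements the computation’ still waits on an interpretation:

      HEAP_THRESHOLD = 100   # arbitrary, but once chosen it stays fixed

      def is_heap(grains):
          # Settled by the pile alone: every number of grains falls on one side.
          return grains >= HEAP_THRESHOLD

      print([n for n in (3, 99, 100, 5000) if is_heap(n)])   # -> [100, 5000]

      # By contrast there is no analogous is_adder(S): as in the half-adder sketch
      # earlier in the thread, the verdict needs a mapping as a further argument,
      # so the device S by itself fixes no answer.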

  115. Jochen,
    So the law of the excluded middle always applies…except for computation?

    It’s okay to require that we define things before assessing the objectivity of consciousness, but not okay to require that we interpret things before assessing the objectivity of computation?

  116. SelfAwarePatterns:

    So the law of the excluded middle always applies…except for computation?

    It applies perfectly well to computation, what makes you think that it doesn’t? No system ever implements a computation; every system can be interpreted as doing so. These are not contradictory statements.

    It’s okay to require that we define things before assessing the objectivity of consciousness, but not okay to require that we interpret things before assessing the objectivity of computation?

    In the former case, we merely define the categories in which to sort the facts; in the latter, we have to fix what the facts are in the first place (and then still decide on their categorization). The former is a problem we routinely face in the study of objective data, both in the scientific and everyday contexts: where do we draw the boundaries between species? When exactly is a person large?

    But the data—the genetic makeup of some biological entity, or the size of the person, respectively—is objectively there to be discovered. In the latter case, however, our choice, and nothing else, is what fixes that data.

  117. Jochen,

    In agreement with your steam engine governor example in comment #82. But think of an independent biological cell as the steam engine with regulatory governors. Certainly this could be described as some type of computational gloss, but as you hinted the steam escapes because of something occurring at the molecular level. Computations like counting walking steps are very effective at describing the displacement of objects in space and resultant times, but do not reveal the forces of nature operating at the molecular level or the same forces which are possibly interacting at the neuronal level to form experience.

  118. Sci

    “As per Anirban Bandyopadhyay, who actually gave Orch-OR its strongest empirical claims, rejecting a Turing Machine as conscious doesn’t mean a machine of a different sort won’t be conscious.”

    Absolutely. A consciousness machine that works has mental states as an output: thus far the brain is the only one we know of. That doesn’t stop there from being others.

    Computers, on the other hand, are machines that cycle through repetitive physical states – their output is simply movement. Movement of electrons, steam or whatever the computer is based upon. Translation of these arbitrary physical patterns (usually done by peripherals such as computer screens) is how we get to use computers. Like an abacus. And no one would think abacuses are conscious, right?

  119. @ John Davey:

    “Translation of these arbitrary physical patterns (usually done by peripherals such as computer screens) is how we get to use computers. Like an abacus. And no one would think abacuses are conscious, right?”

    Yeah, this is what confuses me about computationalism. Well that and apparently only certain programs are conscious. As Jochen pointed out a while back, no computationalist in academia seems concerned about all the video game enemies we put through an endless cycle of war & death.

  120. Sci

    “Well that and apparently only certain programs are conscious”

    Yes, that’s a hoot. The problem is I could take the binary state of a computer program that is conscious and map it – byte to byte – to one that isn’t conscious by changing my (observer’s) view of the program. That’s because the link between semantics and syntax is arbitrary and decided by the user/programmer.
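
    A toy illustration of the same point (Python; the bit pattern and the two decodings are invented for the example): one and the same stored state, two observer-chosen readings of what it says:

      bits = "0100100001001001"   # one fixed physical state: sixteen voltage levels

      def read(bitstring, high_means_one=True):
          # Decode the pattern as 8-bit ASCII under a chosen convention for
          # which voltage level counts as logical 1.
          if not high_means_one:
              bitstring = ''.join('1' if b == '0' else '0' for b in bitstring)
          return ''.join(chr(int(bitstring[i:i+8], 2))
                         for i in range(0, len(bitstring), 8))

      print(read(bits, high_means_one=True))    # reads as 'HI'
      print(read(bits, high_means_one=False))   # the same state reads as something else

    Nothing physical changes between the two calls; only the reader’s convention does.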

    J

  121. On the matter of the burden of proof: it rests on whoever is making a specific claim: i.e. anyone claiming to prove that hard AI is certainly possible, or anyone claiming to prove that hard AI is certainly impossible.

  122. I would like to comment firstly on Aaronson’s talk, and then on your concluding comments on recognition.

    I do not see Aaronson saying that a computer cannot be conscious. “Of course, we all know that if you needed to go down to the quantum-mechanical level to make a good enough copy (whatever “good enough” means here),” he writes, “then you’d run up against the No-Cloning Theorem, which says that you can’t make such a copy”, so everything he says is conditional on that level of fidelity being necessary. In addition, all his remarks on the matter concern making clones of existing intelligent agents; if his argument presented a problem in creating an original artificial intelligence, it would equally be a problem for original natural intelligence (at first, I took his comments about Boltzmann brains to be an exception, but on rereading that passage, he is clearly referring to Boltzmann brains that are duplicates of some other intelligent agent.)

    In his replies to questions after the talk, he spent some time defending the possibility of an intelligent program: “Do you agree, I asked, that the physical laws relevant to the brain are encompassed by the Standard Model of elementary particles, plus Newtonian gravity? If so, then just as Archimedes declared: “give me a long enough lever and a place to stand, and I’ll move the earth,” so too I can declare, “give me a big enough computer and the relevant initial conditions, and I’ll simulate the brain atom-by-atom.” The Church-Turing Thesis, I said, is so versatile that the only genuine escape from it is to propose entirely new laws of physics, exactly as Penrose does—and it’s to Penrose’s enormous credit that he understands that.” (By the way, his beef with Searle is on account of Searle’s insistence there’s some special causal power in brains, without trying to explain how it might arise (like the ‘dormitive virtue’ of morphine that Molière pilloried.))

    On the matter of recognition, Aaronson almost provided an answer: “So for example, one audience member argued that an AI could only do what its programmers had told it to do; it could never learn from experience. I could’ve simply repeated Turing’s philosophical rebuttals to what he called “Lady Lovelace’s Objection,” which are as valid today as they were 66 years ago. Instead, I decided to fast-forward, and explain a bit how IBM Watson and AlphaGo work, how they actually do learn from past experience without violating the determinism of the underlying transistors.” Programs like CNNs, IBM’s Watson and AlphaGo Zero have shown that they can come to recognize “linkages and underlying entities” without having them spelled out: in particular, go experts noted that AlphaGo Zero discovered many well-known strategies and also invented some of its own.
