Chomsky’s Mysterianism

Or perhaps Chomsky’s endorsement of Isaac Newton’s mysterianism. We tend to think of Newton as bringing physics to a triumphant state of perfection, one that lasted until Einstein and, with qualifications, still stands. Chomsky says that in fact Newton shattered the ambitions of mechanical science, which have never recovered; and in doing so he placed permanent limits on the human mind. He quotes Hume:

While Newton seemed to draw off the veil from some of the mysteries of nature, he shewed at the same time the imperfections of the mechanical philosophy; and thereby restored her ultimate secrets to that obscurity, in which they ever did and ever will remain.

What are they talking about? Above all, the theory of gravity, which relies on the unexplained notion of action at a distance. Contemporary thinkers regarded this as nonsensical, almost logically absurd: how could object A affect object B without contacting it and without an intermediating substance? Newton, according to Chomsky, agreed in essence; but defended himself by saying that there was nothing occult in his own work, which stopped short where the funny stuff began.  Newton, you might say, described gravity precisely and provided solid evidence to back up his description; what he didn’t do at all was explain it.

The acceptance of gravity, according to Chomsky, involved a permanent drop in the standard of intelligibility that scientific theories required. This has large implications for the mind: it suggests there might be matters beyond our understanding, and provides a particular example. Indeed, it may well be that the mind itself is, or involves, similarly intractable difficulties.

Chomsky reckons that Darwin reinforced this idea. We are not angels, after all, only apes; all other creatures suffer cognitive limitations; why should we be able to understand everything? In fact our limitations are as important as our abilities in making us what we are; if we were bound by no physical limitations we should become shapeless globs of protoplasm instead of human beings, and the same goes for our minds. Chomsky distinguishes between problems and mysteries. What is forever a mystery to a dog or rat may be a solvable problem for us, but we are bound to have mysteries of our own.

I think some care is needed over the idea of permanent mysteries. We should recognise that in principle there are several kinds of question that might look mysterious, notably the following.

  1. Questions that are, as it were, out of scope: not correctly definable as questions at all: these are unanswerable even by God.
  2. Mysterian mysteries: questions that are not in themselves unanswerable, but which are permanently beyond the human mind.
  3. Questions that are answerable by human beings, but very difficult indeed.
  4. Questions that would be answerable by human beings if we had further information which (a) we just don’t happen to have, or which (b) we could never obtain even in principle.

I think it’s just an assumption that the problem of mind, and indeed, the problem of gravity, fall into category 2. There has been a bit of movement in recent decades, I think, and the possibility of 3 or 4(a) remains open.

I don’t think the evolutionary argument is decisive either. Implicitly Chomsky assumes an indefinite scale of cognitive abilities matched by an indefinite scale of problems. Creatures that are higher up the first get higher up the second, but there’s always a higher problem.  Maybe, though, there’s a top to the scale of problems? Maybe we are already clever enough in principle to tackle them all.

If this seems optimistic, think of Chomsky the Lizard, millions of years ago. Some organisms, he opines, can stick their noses out of the water. Some can leap out, briefly. Some crawl out on the beach for a while. Amphibians have to go back to reproduce. But all creatures have a limit to how far they can go from the sea. We lizards, we’ve got legs, lungs, and the right kind of eggs; we can go further than any other. That does not mean we can go all over the island. Evolution guarantees that there will always be parts of the island we can’t reach.

Well, depending on the island, there may be inaccessible parts, but that doesn’t mean legs and lungs have inbuilt limits. So just because we are products of evolution, it doesn’t mean there are necessarily questions of type 2 for us.

Chomsky mocks those who claim that the idea of reducing the mind to activity of the brain is new and revolutionary; it has been widely espoused for centuries, he says. He mentions remarks of Locke which I don’t know, but which resemble the famous analogy of Leibniz’s mill.

If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters a mill. Assuming this, when inspecting its interior, we will find only parts that push one another, and never anything that could explain a perception.

The thing about that is, we’ll never find anything to explain a mill, either. Honestly, Gottfried, all I see is pieces of wood and metal moving around; none of them has any milliness! How on earth could a collection of pieces of wood – just by virtue of being arranged in some functional way, you say – acquire completely new, distinctively molational qualities?

36 thoughts on “Chomsky’s Mysterianism”

  1. A quick note about gravitation. Einstein showed that gravity is the warping of spacetime, with changes that propagate at the speed of light; this dispelled the action-at-a-distance concern. (Although quantum mechanics brought that specter back up.)

    I don’t have a problem with the general idea that some questions may be in categories 1 and 2, but I don’t see it as productive to ever assume any particular question is. Astronomers once doubted that we’d ever be able to know the physical composition of stars, until someone discovered that chemical elements had spectral signatures. If a question has an understandable answer, it’s unlikely to be found by anyone who’s pre-decided it can’t be.

    Your last point is perfect! Can anyone point at Microsoft Windows, Mac OS X, or Linux? They only exist as functional arrangements that transcend any one physical instantiation. As soon as they stop moving they disappear, and we are left with only hardware or media, yet no one suspects any unsolvable mysteries.

  2. Pyramids, levers, steam power, electromagnetism, computers, gravity, relativity, quantum mechanics, Higgs boson, string theory, …. I suspect it’s turtles all the way down. We’ll never understand it all, but we’ll never stop digging deeper.

  3. Peter

    You can explain a mill because it is the aggregate structure of its parts. A mill is reducible by physics to its mechanical components in a very simple process. You can’t reduce a mind to its mechanical components, because all you get from mechanical components is other mechanical phenomena. You can work out ‘milliness’ from wandering about the mill quite easily; they are, after all, some of our oldest machines.

    I think Chomsky is right that we must have limits to knowledge. What evidence is there to suggest we can know everything? Empirical evidence suggests there has to be a limit, because we are animals.

    Other quandaries – free will v determinism, for instance – only add weight to the idea of the insoluble. Scientific truths aren’t logical; they are judicial in nature. So even if a ‘proof’ of determinism or free will were claimed, you’d be free to ignore it on the grounds another experiment might disprove it.

    J

  4. Self Aware

    The point is, I think, that action at a distance is only a concern if you insist on certain levels of intelligibility. I find your suggestion that general relativity dispels intelligibility rather optimistic!

    Newton never suggested that action at a distance was intelligible: he said (rightly) that his mechanics – physics – wasn’t there to explain but to predict. And so it remains – not giving what might be termed much in the way of intelligibility or understanding, but instead providing the utility of supremely accurate mathematical modelling.

    J

  5. John,
    On “general relativity dispels intelligibility”, I assume you mean dispelling the intelligibility issue? If so, I didn’t mean to suggest anything so absolute. But I do think GR closes the gap better than could be done in Newton’s time. Ultimately, though, there’s always something new to be explained, such as perhaps what exactly spacetime is.

    Explanation vs prediction seems like the old philosophy of science debate between scientific realism and instrumentalism. But I suspect the distinction between these is ultimately artificial. What is an explanation other than a language description of a mental model? Mental models I think we instinctively construct for their possible predictive value. Granted, many of our models aren’t predictive, which should make us cautious of their epistemic status.

  6. As I mentioned in another thread, neuroscience suggests that there are certain explanations people hanker for that we’re not going to get. For example, explaining pains via brains. Neurologically, looking at brain activity excites the visual cortex. Feeling a pain in one’s toe excites the thalamus, hypothalamus, sensory cortex, etc. These brain activities have little overlap and don’t tend to cause each other. If, as physicalism contends, that’s the whole story, then *of course* looking at brain activity isn’t going to bring to mind pain in your toe. But this is exactly what the explanation-demander demands: a story that starts with the representation of brain activity, flows intuitively, and ends by bringing pain to mind.

    Does this put explaining the painfulness of pain in category 4b, or 1?

  7. The inference from ‘a dog can’t understand differential geometry’ to ‘there always will be some things we are constitutionally unable to understand’ is flawed. Consider the equivalent ‘there are functions a calculator can’t compute’ leading to ‘every computing device has certain functions it can’t compute’: we know that there are universal computers capable of computing every function (that can be computed by a computer at all, modulo the Church-Turing thesis).

    There is a threshold effect here: there are many special computers capable of computing only certain functions, but beyond that threshold, there is no computer that is qualitatively more powerful than another one—what each of these computers can do, any of them can do. But explanation (in a certain mode, at least) fundamentally is computation: we explain how to tie a shoe by giving the appropriate algorithm; more generally, what we can simulate, we can explain.

    So it seems likely that there’s a similar threshold effect regarding explanation: beyond this threshold, anything that has an explanation at all can be explained—there are no explanations that can’t be found to beings on that side of the divide. Now, of course, we have finite brains, and that’s a limitation right there—but we also have the means to extend this boundedness, by means of writing, communicating, and even using external computing devices to do our bidding. The real key capacity that we have (and dogs lack) is universal symbolic manipulation.

    But the question is, now, whether everything in the world is explicable in these computational terms. Lots of people believe so: witness those claiming that the world may be a computer simulation. If the world is, in some sense, itself a computational structure, then there is nothing that is in principle inexplicable to us.

    But I see no reason to hold to this thesis. Computation is more a property of language, a property of conceptual thinking, than a property of the world. So rather than opening up the whole world, in terms of explanations, I think our computational explanatory facilities actually impose a horizon on what we can explain; and this, I think, very well fits the evidence we have—there are certain things that are fixed by the physical facts, but which we can’t derive from them, such as qualia. The mystery is just due to the limitations of conceptual explanation, not due to ontological differences. Moreover, this explains why these elements are necessarily subjective: all that we can communicate, that we can bring into objective focus, is necessarily computational.
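    As a toy sketch of the universality point (a purely illustrative Python fragment, with a made-up mini-language, not anything from the thread): a single fixed interpreter can run any program written in its small language, so one machine computes whatever any program in that language computes.

```python
# Purely illustrative sketch of the "universal machine" point: one fixed
# interpreter runs *any* program written in its (made-up) mini-language,
# so a single machine computes whatever any such program computes.

def run(program, x):
    """Interpret `program`, a list of (op, arg) pairs, starting from x."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        elif op == "sub":
            x -= arg
        else:
            raise ValueError(f"unknown op: {op}")
    return x

# Two different "special-purpose machines", both run by the same interpreter:
double_plus_one = [("mul", 2), ("add", 1)]
triple_minus_two = [("mul", 3), ("sub", 2)]

print(run(double_plus_one, 5))   # 11
print(run(triple_minus_two, 4))  # 10
```

    The interesting claim in the comment is that, past this threshold, more power comes only from more programs, not from a qualitatively better machine.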

  8. If we think the mill does “millishness”, consider a mill made from flimsy rubber gears, so that it mechanically and computationally resembles any other mill: the appearance and timing are identical. However, the flimsy mill cannot grind the wheat, because it is really the inner molecular structure of the gears that transmits the forces of nature between them.

    The same case can be made for neurons, since appearance and computation do not fully explain the inner workings of neurons and the interactions between them.

    Just as the mill is a system of multiple parts, some directly and others indirectly involved, so the brain is a system of sub-organs and functions.

  9. A bit tangential to the main point here, but…

    The impact of Newton is probably the culmination of what Galileo started by making mathematics the proper language of physics (eventually all of science). Pierre Duhem discusses this in his monograph “To Save the Phenomena” where he gives the example that astronomy was not “proper science” for a long time because it was using mathematical models to fit the observations instead of “real” models.

    Chomsky gave an interview in 2011 where he criticized statistical AI approaches for their failure to produce explanatory insights. In a sense he does not consider them “real” models.

  10. Self aware

    I don’t think the intelligibility v prediction debate is ‘artificial’ in any way. It’s real. If the observations obey a certain mathematical form, and the physics says so, then the job of physics is done.

    But that doesn’t answer the perfectly natural question: how? I recall my physics class when the teacher first ‘explained’ gravity by putting Newton’s formula on the whiteboard. Predictably the rest of the class consisted of students repeating the same question: yes, I understand the formula, but why is the formula like that?

    It’s a perfectly legitimate question – not just about gravity, but about any physics axiom stated in mathematical form: why is the form like that? You aren’t an ounce better off with general relativity, for the same reason.

    It has an interplay with the mental debate of course – if you can’t even answer semantic questions about physical phenomena, what chance have you of dealing with mental ones?

    J

  11. Jochen

    You can’t refute an assertion based on biology with an argument based around computers.

    Your whole thesis is predicated on a scientific argument that the brain is a computer, an assertion that has no scientific basis.

    If humans are animals then their cognitive tools are handcrafted by millions of years of evolution: they’ll have a shape and a scope. There will be things they can do and things they can’t do. If they are infinitely powerful, then you have to make a separate case that human beings don’t share this finite cognitive shape with the rest of the animal world, but were blessed at some point with infinite power.

    There are times, I feel, when computationalists and the religious aren’t that far apart after all.

    J

  12. Tanju

    It’s a slightly different point. Physics seeks deep structure underpinning all the observations. The Google language labs are just putting all the observations into a huge data lake and then sifting out the most likely matches. It’s an interesting computational exercise, but as a piece of science I can understand those who scratch their heads at the description.

    J

  13. I’m sorry, John Davey, but that’s a little hilarious: you compare me to the religious, while in your zeal missing that I’m actually proposing an argument against computationalism: I see no reason to believe that everything in the world can be explicated in computational terms.

    But let’s take this slowly. First of all, of course I can apply computational notions to the biological: computation is a formal notion; whether something is biological, mechanical, or based on whatever other substrate has no consequence whatsoever.

    I agree, however, that the brain is not a computer—in the same sense that a stone isn’t, or the device I’m typing this on right now isn’t. It’s first and foremost a physical, and biological, artifact.

    But, like a stone, like the thing I’m typing on, brains can compute. And what I’m saying now is that when we explain something, that’s essentially a computational process: every explanation is ultimately a kind of algorithm, a finite series of steps starting with a set of facts taken for granted and ending with the explanandum.

    It’s just that while our powers of explanation are computational, being ultimately language-based, the world isn’t. There’s more to the world than computation; consequently, there are things we can’t account for, like how subjective experience emerges from physical interactions. There’s nothing intrinsically mysterious about it; it’s just that we’re cognitively closed with respect to the non-computational, so everything that outruns mere computation appears inexplicable to us.

    Zombies are conceivable because we can’t derive the facts about subjective experience from the physical facts, even though they follow from the latter (and consequently, zombies aren’t possible); Mary learns something new because the facts about seeing red are, to her, not entailed by the complete theory of vision, even though there’s nothing more to it than that.

    Computationalism is attractive because it suggests a world in which everything is explicable to us; but to all appearances, that’s simply not the world we live in.

  14. John,
    Certainly there’s always a frontier of knowledge. We understand things in terms of their more primitive components, and we understand those components in terms of their own primitives.

    But eventually we always arrive at primitives we don’t understand. Why does mass cause spacetime to warp? Why the standard model? Why these physical laws instead of others? When we find answers to these (historically it’s been a bad bet to assume we won’t), they will almost certainly be in terms of new primitives that generate yet new questions. Turtles all the way down.

    But read any neuroscience books, and you’ll see that the brain appears to operate in terms of biology, chemistry, and electricity, all within what certainly looks like an information processing framework. There’s still a lot to learn, but it doesn’t look like questions on the frontier of physics are going to be a factor.

  15. Jochen:

    “Zombies are conceivable because we can’t derive the facts about subjective experience from the physical facts, even though they follow from the latter (and consequently, zombies aren’t possible); Mary learns something new because the facts about seeing red are, to her, not entailed by the complete theory of vision, even though there’s nothing more to it than that.”

    I’d suggest there are no categorically first-person facts about seeing red available to Mary, albeit the experience of red is new for her when she leaves the room. So we needn’t worry that the complete theory of vision doesn’t entail such facts since there are none to be entailed. All Mary can assert and know about red is what the rest of us can assert and know: that it stands in certain relations to other colors, is associated with seeing tomatoes and apples, etc. But when it comes to saying or describing what red is like in itself (the supposed first-person fact), Mary is at a complete loss to specify it: she can only point to red objects, which are of course in the public domain.

    This isn’t to deny the reality of subjective experience and the problem of explaining why it should arise only in association with specific sorts of neural goings-on. It’s only to say that experiential qualities are leveraged to specify (third-person) facts about the *world*, e.g., that some things are red, or taste sweet. There isn’t a further private fact about each quality considered in or by itself.

  16. Gents, what Mary is doing is reading and understanding about color; but whether we are talking the language of science or the predictable language of mathematical computation, we are still talking about language (a Chomsky specialty). Being stranded in language is not a bad place to be stranded for a while. For flat-earthers, there were no concepts of mass and gravity to explain weight, because everything heavy moved from up to down and the planets were kept in their orbits by the hand of God. Scientists likewise knew water was liquid between zero and 100 deg C but could not explain what was happening at the molecular level. Neuroscientists are just not very good physical scientists, in spite of their false authority, like the Scholastics.

  17. Yet another try at explaining why I think typically undefined phrases like “seeing red” often lead to confusion and incorrect conclusions. It’s no doubt tedious for anyone who bothers to read these attempts, but no more so than recurring but mostly inconclusive exchanges are for me.

    So here’s how I understand “seeing red”. Assume a retina is illuminated (whether directly or indirectly) by light in the “red” portion of the visible spectrum. The light stimulates the retina’s visual sensory receptors and neural activity consequent to that stimulation occurs at various points in the visual processing pathway(s). There are (conceptually) two functional processors of special interest here. One produces the “mental image” that I assume is meant to be captured by the phrase “phenomenal experience” (PE). The other identifies previously learned associations that allow the subject to respond to occurrence of a PE by, for example, uttering a color name. Implicit in this assumed procedure is that production of a PE precedes identification of an associated color name. I question that temporal priority, but for now let’s see where accepting it leads us in the Mary’s Room thought experiment.

    Mary hasn’t previously been exposed to “colored” (in particular, “red”) light, hence has never produced a corresponding PE, and therefore can’t have learned to associate such a PE with a color name. So, when she first has a PE due to sensory stimulation by “red” light, she doesn’t learn “what it’s like” to be “seeing red” because she doesn’t even know that the new PE should be associated with the word “red” until she has been taught to do so. Of course, Mary is assumed to know that certain familiar objects – say, ripe apples – are commonly described as being “red”, so if told that there’s a ripe apple in her FOV, she may infer that it is “red”. But that’s just an indirect way of being taught what word to associate with the PE. Of course, as is true of any new experience, she learns “what it’s like” to have the PE consequent to visual sensory stimulation by “red” light, but how one gets from that trivial observation to a refutation of physicalism escapes me. (BTW, assuming that Mary can acquire new knowledge from merely having an unfamiliar PE seems to me an extreme example of Sellars’s “Myth of the Given”.)

    In any event, I question the assumption that production of a PE precedes identification of an associated color name. The blindsight and sensory substitution phenomena suggest that PE isn’t necessary for successfully dealing with objects or events in one’s real or virtual FOV respectively. In which case it seems possible, even likely, that production of visual PEs actually follows the production of “descriptions” of our environment, where such “descriptions” are essentially collections of words that have become associated with neural activity patterns that are consequent to the presumed presence of familiar items in the FOV, eg, a “red” “ball”.

    The resolution of that part of a PE that corresponds to the foveal region of the retina is maximum. This can explain why we can formulate a more detailed “description” of the presumed content of the center of our FOV. But the explanation could conceivably go in the other direction: the possibility of a more detailed “description” of the content of the center of the FOV due to the greater resolution in the fovea might explain the greater resolution in the corresponding part of the PE. Similarly, the production of occurrent mental images that accompanied previously experienced sensory stimulation could use “descriptions” that are somehow “recalled from memory”. The vividness of the mental image would depend on the detail of the “description” which in turn would depend on how visually attentive the person is. Eg, a person with low visual attention (eg, me) might create and store terse “descriptions” and therefore produce occurrent mental images with minimal detail (as indeed mine are).

  18. Tom:

    I’d suggest there are no categorically first-person facts about seeing red available to Mary, albeit the experience of red is new for her when she leaves the room. So we needn’t worry that the complete theory of vision doesn’t entail such facts since there are none to be entailed.

    I don’t know, I’ve always found this move to be rather evasive: OK, there’s something new about the experience, but it’s not a fact, and only facts need to be explained, so we don’t incur some new explanatory burden! Phew.

    But of course, you still have to account for this new not-a-fact that Mary encounters; plus, now you’re also saddled with explaining what, actually, those not-facts are.

    On my conception, it’s perfectly simple (but of course, we all think that of our respective models and ideas): Mary learns something new that wasn’t derivable from all the facts about vision in just the same sense that the question of whether a Turing machine halts is not derivable from all the facts about the TM. This also explains why she’s (and we are) so at a loss at giving any report of her experience: telling a story is something fundamentally computational; while the experience of seeing red, if all of this works the way I hope it does, simply isn’t.

    Likewise, imagine an oracle telling you that a given TM doesn’t halt: there’s no finite story that accounts for this fact. So while you then know that the TM doesn’t halt, you can’t communicate that knowledge in such a way as to convince anyone else—in general, no story exists that contains that information. Anybody else would have to have the same oracular, revelatory experience.

    Charles:

    So, when she first has a PE due to sensory stimulation by “red” light, she doesn’t learn “what it’s like” to be “seeing red” because she doesn’t even know that the new PE should be associated with the word “red” until she has been taught to do so.

    Sure, but that’s not what’s usually meant by “knowing what ‘seeing red’ is like”. The relevant knowledge here is not that what she’s experiencing is usually called ‘seeing red’, but rather, what that experience—whatever it may be called—is like. I.e. the knowledge is not about how a given experience may be referred to, but what that experience itself is.

  19. Thanks Jochen. You say “But of course, you still have to account for this new not-a-fact that Mary encounters; plus, now you’re also saddled with explaining what, actually, those not-facts are.”

    Yes, in the comment you’re responding to, I acknowledged that “This isn’t to deny… the problem of explaining why [experience] should arise only in association with specific sorts of neural goings-on.”

    And you say “Mary learns something new that wasn’t derivable from all the facts about vision in just the same sense that the question of whether a Turing machine halts is not derivable from all the facts about the TM. This also explains why she’s (and we are) so at a loss at giving any report of her experience: telling a story is something fundamentally computational; while the experience of seeing red, if all of this works the way I hope it does, simply isn’t.”

    Mary has a new experience, for sure, but the question of private facts aside, the primary challenge is to explain the existence of experience of any sort. Perhaps (and I’m speculating here) the explanation will involve certain recursively-generated limitations on representational algorithms, such that of necessity something qualitative arises for the system, and only for the system. So the story might end up being computational after all, but a story about computational limitations.

  20. Gents, this is really a stimulus vs response controversy, with the stimulus of a “red” object causing a red response in our nervous system. Language is a complex system of stimulus-response behavior which allows us to pass on experience as the secondary experience of language, itself full of internal SR. I can’t make you directly see red except by language, as when I direct you to look at the color of the headings on this webpage.

    Phenomenal experience may be described with the secondary experience of computational language but first person experience is actually biology which is actually biochemistry which is actually chemistry which is actually physics which is actually the forces of nature which the flatearthers knew nothing about, so all they had were observed calculations and God to explain the heavens.

  21. Jochen –

    the knowledge is not about how a given experience may be referred to, but what that experience itself is [like]

    But that reply makes my point about inadequate vocabulary and completely misses the point of the rest of my comment. Of course it doesn’t matter whether you say you’re seeing “red”, “rot”, or “czerwony” in response to sensory stimulation by “red” light. But it does matter what detailed meaning is assumed for the whole phrase. You may think it self-evident what the phrase means, but Sellars devoted a whole chapter of “Empiricism and Phil of Mind” to addressing that rather complex issue. What I mean by it is (roughly) that when one’s visual sensors are stimulated by “red” light, one can both confidently assert and justify association of the sensory event and the word “red”. I specifically do not mean to suggest that the associated word “refers to” anything, especially not a phenomenal experience. So, what do you mean by the phrase?

    The significance of emphasizing the sensory event’s being associated with a color name is that any word has a complex role in a language, and therefore association with a word places the event in a web of possible context-dependent responses. I’m unclear what is being included in the phrase “what it’s like to experience event X”, but I assume among those things are the image/PE that is produced in response to the event (if any), and any emotions, physical reactions, or thoughts that occur. So, if Mary doesn’t have any association between the event and “red”, none of those things can be forthcoming. My comment focused on the image/PE component, arguing for the possibility that until Mary has learned to associate her neural activity in response to sensory stimulation by “red” light with the word “red”, she won’t even have a PE. In which case, “what it’s like” for her to “see red” is “nothing”: no PE, no emotions, etc. So again, what do you mean by the phrase?

    Finally, whatever it’s like for Mary to experience her first visual stimulation by “red” light, that by itself is insufficient to provide her with new “knowledge”, at least for any interpretation of “knowing” that involves intersubjective learning (as I assume most current interpretations do) and requires justification of asserted knowledge. I contend that until someone teaches her to relate her new experience to a color name, she can’t make the connections necessary to allow her to justify assertions like “I’m seeing red”. You seem confident that she can. Why?

  22. Tom:

    Yes, in the comment you’re responding to, I acknowledged that “This isn’t to deny… the problem of explaining why [experience] should arise only in association with specific sorts of neural goings-on.”

    But why all the gymnastics? Why introduce these things-that-aren’t-facts in the first place? To me, they seem to serve to obscure, rather than enlighten.

    What’s wrong with simply accepting that Mary will have a new experience, and learn something new in the process—just as she learns something new when first seeing a tree, or reading about trees in a book, but with the difference that you can’t write the knowledge about experiential qualities down in a book? There’s a (generally unstated) assumption in the arguments against physicalism, namely, that an ideal reasoner should be able to derive all the experiential facts from the physical, ‘base’ facts, if the former follow from the latter. To me, this seems like a simple assumption to give up—indeed, even without the pressure from these arguments, it would appear to be a questionable assumption to make.

    So, everything follows from the physical facts—the physical facts entail all the facts about the world—but there are facts which we can’t derive from the physical base, because our reasoning abilities are computational engines within a non-computational world. Hence, the only way to acquire knowledge of these facts is by direct acquaintance.

    Charles:

    My comment focused on the image/PE component, arguing for the possibility that until Mary has learned to associate her neural activity in response to sensory stimulation by “red” light with the word “red”, she won’t even have a PE.

    Maybe I’m just being dense, but this seems to me like arguing that there are no trees until one has learned to associate them with the word ‘tree’. I mean, in a sense, that’s true: we will encounter these things we have no name for, which consequently aren’t trees, since trees are the things called ‘trees’, but ‘tree’ is just a placeholder—it’s because there are trees that we have the word ‘tree’. How would we come up with the word ‘red’, and indeed, the whole conceptual apparatus related to phenomenal experience, if the existence of the experience depends on us having the word ‘red’? Do small, pre-linguistic children not have any phenomenal experience?

    Again, to me, these just seem to be unnecessary knots to tie oneself into: if the world simply outruns our capacity to conceptually coordinatize it, to give an account of it with the finitary, computational means of language and explanation, then we would expect there to be unutterable, subjective elements which we can’t reduce to the physical base facts—just as there actually seem to be.

    Or, in other words, I simply don’t have a high enough commitment to the (admittedly popular) idea that the world is just a giant computation in order to contort myself into a pretzel trying to explain away the collisions of this belief with the way the world actually presents itself.

  23. Jochen in 22, I agree that Mary has a new experience – what it’s like to experience red – but notice that the particular look of red to her doesn’t enable any cognitive or behavioral capacity that a different look wouldn’t enable. This is simply to say that basic experiential qualities are arbitrary with respect to the function of picking out the objects they end up associated with (so qualia can’t be functionalized, as Chalmers would put it). So the purported first-person fact of how red looks to me plays no role in my identifying apples. Such purported facts aren’t epistemically operative or relevant, which methinks disqualifies them as facts. But again this is not to deny that red looks a particular, ineffable way to me. There’s a section in my JCS paper, Killing the Observer, on this, http://www.naturalism.org/philosophy/consciousness/killing-the-observer#toc–the-unspecifiability-of-private-facts-7apGu1RP

    What is a fact is that apples appear to be red, and for all practical (non-philosophical) purposes *are* red 🙂
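    Tom’s point that the particular look enables no extra capacity can be sketched in code. This is purely my illustration (the agent names, label tokens, and the 600 nm threshold are all invented): two agents whose private labels for the same wavelength discrimination are permuted behave identically, so the particular label does no functional work.

```python
# Illustrative sketch only: an 'inverted spectrum' between two agents
# leaves their outward behavior unchanged, because behavior depends only
# on the discrimination, not on which private token carries it.

def make_agent(private_label):
    def respond(wavelength_nm):
        look = private_label(wavelength_nm)  # the private 'look' (does no work below)
        return "pick apple" if wavelength_nm > 600 else "ignore"
    return respond

agent_a = make_agent(lambda nm: "R" if nm > 600 else "G")
agent_b = make_agent(lambda nm: "G" if nm > 600 else "R")  # labels permuted

assert agent_a(650) == agent_b(650) == "pick apple"
assert agent_a(500) == agent_b(500) == "ignore"
```

    Nothing in the agents’ input-output profile distinguishes them, which is just the sense in which the quality “can’t be functionalized”.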

    If I might butt in, the way anthropologists talk about qualia is slightly different, in that they are interested in the emotional and other associations attached to those primary sensory experiences. These are often allusively capturable by cross-modality comparisons, metaphors, or art. Some might be cultural associations, e.g. warm reds and cool blues, but others, like light and dark sounds or tastes, or even the selection of phonemes in coining words to describe phenomena, might be hints that some qualities of quality are shared.

  25. Tom (and David Duffy):

    the particular look of red to her doesn’t enable any cognitive or behavioral capacity that a different look wouldn’t enable.

    In substance, just the point I was trying to make in the second paragraph of @21 (and David makes in his first sentence). But in order to minimize the possibility of misunderstanding, I would replace “look of red to her” with “neural activity (and possibly PE) consequent to her exposure to ‘red’ light ” and “different look” with “different activity and PE”.

    This is simply to say that basic experiential qualities … them as facts.

    Agreed.

    But again this is not to deny that red looks a particular, ineffable way to me.

    Again, I’d eschew “looks”. Also, I’m not sure what “ineffable” adds.

    apples appear to be red, and for all practical (non-philosophical) purposes *are* red

    Altho it no doubt seems pedantic to nit-pick a humorous aside, because I consider a deficient vocabulary to be a major obstacle to understanding discourse on these topics, I will. In quotidian conversation, the quote is obviously correct. But “practical (non-philosophical)” suggests that it’s OK to say that in exchanges such as this one, which I consider to be (or at least which should be) largely non-philosophical. Once we’re addressing PE, I can’t imagine what could properly be described as “is red” in the same sense that a barn is often so described.

  26. Jochen:

    this seems to me like arguing that there are no trees until one has learned to associate them with the word ‘tree’.

    Sorry, but I don’t see how you get to that ontological conclusion from anything I’ve said. In any event, the issues with colors and objects are different. For example, suppose that (for whatever bizarre reason) Mary had never actually viewed a triangle but knew the description of one. Then (assuming that imaging precedes association of a neural activity pattern with a word) she presumably would be able to match the image to the description and thus be able to associate the image with a name. She can’t do that in the case of the image produced consequent to stimulation by “red” light since there is no description of an image consequent to stimulation by “red” light other than “red”, which she has not learned to associate with the image.

    How would we come up with the word ‘red’, and indeed, the whole conceptual apparatus related to phenomenal experience, if the existence of the experience depends on us having the word ‘red’?

    As I suggested @17, I don’t see an obvious problem, at least in principle. The association will typically be learned by associating some manifestation of the neural activity consequent to visual stimulation (distinguishable by the brain, though not necessarily consciously by the learner, as an image is) and another manifestation consequent to simultaneous aural stimulation produced by a teacher uttering the word. The manifestation due to visual stimulation may be an image, but it isn’t obvious (at least to me) that neural activity earlier in the visual processing pathway(s) wouldn’t produce some manifestation sufficient for making the association. As I understand it, various feature-detecting processors are very early in the pathway(s). Why not also a processor that produces distinguishable manifestations of neural activity patterns consequent to visual stimulation by regions of the retina stimulated by light of different “colors”?
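    The mechanism Charles describes (an association built up from simultaneous visual and aural stimulation) resembles simple Hebbian co-activation learning. A toy sketch, entirely my own, with an arbitrary learning rate and binary activations:

```python
# Toy Hebbian-style association (my illustration, not Charles's exact
# mechanism): a link between a 'visual' unit and a 'word' unit strengthens
# only when both are active at once, as when a teacher says "red" while
# the learner is looking at something red.

def train(pairs, lr=0.5):
    """pairs: list of (visual_active, word_active) booleans.
    Returns the final association weight in [0, 1)."""
    w = 0.0
    for visual, word in pairs:
        if visual and word:      # co-activation strengthens the link
            w += lr * (1.0 - w)  # saturating update
    return w

# Repeated co-presentation builds a strong link; visual stimulation
# alone builds none.
strong = train([(True, True)] * 10)
absent = train([(True, False)] * 10)
assert strong > 0.9 and absent == 0.0
```

    The point of the sketch is only that such an association requires the paired events to co-occur, which is why Mary, never having had the paired presentation, lacks the link.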

    Do small, prelingual children not have any phenomenal experience?

    I don’t know; how about asking one? Oops, guess that doesn’t work. Or think back to when you were prelingual – did you? If you’re like me, that doesn’t work either.

    Based on Sellars’s famous quote that all awareness of particulars et al is a “linguistic affair”, I would guess his answer would be “no, they don’t”. OTOH, I think Arnold disagrees with that, although I’m not sure where in his Retinoid System “awareness” of such entities occurs and where the several kinds of what he calls “PE” arise. In any event, my impression is that it’s an open question. If you have an argument to support the contrary (beyond, of course, introspection, which can’t help), I – and presumably everyone else – would be delighted to entertain it.

    I simply don’t have a high enough commitment to the (admittedly popular) idea that the world is just a giant computation …

    I don’t (and couldn’t even if so inclined) think about the issue in terms of computation, so that idea may be popular, but it isn’t mine. I suppose some may think of complex neural structures as being essentially analog computers, but not me.

  27. Jochen

    I don’t know if it’s hilarious, although apologies if I misunderstood you.

    But if you say


    “I think our computational explanatory facilities actually impose a horizon on what we can explain”

    or


    Moreover, this explains why these elements are necessarily subjective: all that we can communicate, that we can bring into objective focus, is necessarily computational.

    That suggests to me that if you’re not a computationalist, you can’t be far off.


    And what I’m saying now is that when we explain something, that’s essentially a computational process

    So when I say “I feel terrible”, is that a computational process? From what point exactly? The feeling is non-computational; the identification of the feeling is non-computational; maybe after what might be termed ‘identification’ the ensuing activity may be classed as symbolic only. But the important bit of the process – most of it – contains no computation.


    It’s just that while our powers of explanation are computational

    Language is symbolic, and grammars may be dictated by generic rules (although that’s never been fully established as far as I’m aware). I’m not so sure that ‘powers of explanation’ can be limited to the rules of the tool of the communicative medium. All our semantic experiences take place outside of a symbolic straitjacket, for instance.

    There wouldn’t be any physics if powers of explanation were restricted to computational rules. Time and space are the stuff of the universe – semantic, meaningful, non-computational – but lend themselves to a neat symbolic extraction by virtue of our cognitive relationship with them. Thus came physics. Physics equations would be meaningless if we weren’t able to map the scalars they generate back to the real stuff of the universe.


    There’s more to the world than computation; consequently, there’s things we can’t account for, like how subjective experience emerges from physical interactions. There’s nothing intrinsically mysterious about it; it’s just that we’re cognitively closed with respect to the non-computational, so everything that outruns mere computation appears inexplicable to us.

    I agree


    Zombies are conceivable because we can’t derive the facts about subjective experience from the physical facts, even though they follow from the latter

    Red rainclouds are conceivable. Anti-Gravity machines are conceivable. Superman is conceivable.
    Zombies are conceivable. However, if we believe that identical causal factors are likely to produce different outcomes, that goes against 500 years of basic scientific philosophy. It’s irrelevant that the relationship between the physical causes of mind and mind can’t be mathematized: it doesn’t stop them being causes.

    J

  28. SelfAware


    But read any neuroscience books, and you’ll see that the brain appears to operate in terms of biology, chemistry, and electricity, all within what certainly looks like an information processing framework.

    Biology, Chemistry and “Electricity” have nothing to do with information.


    There’s still a lot to learn, but it doesn’t look like questions on the frontier of physics are going to be a factor.

    Did I suggest there was? My answer was in reply to your suggestion that there was an “artificial” division between the capacity of physics to predict and to explain.

    On the subject, the elucidation of the development of mental phenomena seems to me subject to far greater impedances than physics. You seem to be suggesting that frontier physics is somehow more demanding than neuroscience, which is – according to the stereotype – encompassed within biochemistry. This is just a dogma, and if it had an ounce of truth to it we’d have had more success by now.

    JBD

  29. Tom:

    I agree that Mary has a new experience – what it’s like to experience red – but notice that the particular look of red to her doesn’t enable any cognitive or behavioral capacity that a different look wouldn’t enable.

    Yes, but one must be careful here not to fall into the trap of claiming that the red experience ‘could have been otherwise’, or even, absent completely. My basic thesis is that there’s a nomological connection between the physical facts of the matter, and the experiential ones (or the experiential not-facts, if you will), just that that connection is not available to us as a means of deducing consequences—hence, the apparent possibility of inverted spectra and zombies.

    So the purported first-person fact of how red looks to me plays no role in my identifying apples. Such purported facts aren’t epistemically operative or relevant, which methinks disqualifies them as facts.

    If you want to define ‘facts’ by their relevance in this sense, then I think that’s fine; but I’m not sure I see what this adds. It seems you’d arrive at the same point of view by only allowing what’s objective to be called a ‘fact’, since only the objective has some chance of being communicated, either by influencing behavior or by direct verbal report.

    To me, facts are simply truths about the world—I’m consciously avoiding saying ‘true statements’ here, since not all facts may be (finitely) expressible (indeed, if I’m right, it’s exactly the facts about subjective experience that aren’t). I think requiring that facts be effective in some way just adds a layer of description whose utility isn’t obvious to me.

    Charles:

    For example, suppose that (for whatever bizarre reason) Mary had never actually viewed a triangle but knew the description of one. Then (assuming that imaging precedes association of a neural activity pattern with a word) she presumably would be able to match the image to the description and thus be able to associate the image with a name.

    I agree. That’s because triangles are individuated purely by their structural properties, which can be communicated symbolically. The same goes for trees. Qualia, on the other hand, aren’t structural, but rather, intrinsic; consequently, no string of symbols can capture their essential nature. That doesn’t make them any less part of the world, though, or even necessarily any more mysterious, than trees. So having a word for red, to me, has the same origin as having a word for tree.

    John:

    I don’t know if it’s hilarious, although apologies if I misunderstood you.

    Don’t sweat it. But it seems that I’ll have to be a bit more explicit about what my views actually are; perhaps I assumed too much shared vocabulary—I tend to forget that other people can’t in general see what’s going on in my head in association with the stuff I write.

    So, let me try and be clear. I think that the idea that the world is a computation is conceptual nonsense: it misunderstands both the terms ‘world’ and ‘computation’. A computation is something one can perform, using a certain artifact. It involves, therefore, a physical system being used in a certain sense. The world is not used; that wouldn’t make any sense.

    Indeed, computation is ultimately just interpreting a certain system, or its states, as having a certain meaning—it’s conceptually not different from the way you interpret the marks I’m causing to appear on your screen in a certain way. In computing something, we interpret the computer, or its states, as implementing the structure of a given, different system, or process.

    This interpretation is an instance of using our intentional faculties to establish what I call the ‘modeling relationship’ between physical states of a system, and abstract states of a mathematical operation, or physical states of a different system. A simple example of this modeling relationship is mapping a stack of books to the set of Joe’s paternal ancestors: the thicker the book, the more to the past the ancestor; so, a thicker book is some progenitor of a thinner one. Given two books, and the information that one maps to James, the other to Jack, if the book corresponding to James is thicker than the one corresponding to Jack, you know that James is Jack’s paternal ancestor.
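    The book-stack example can be made concrete. In this sketch (mine, with invented thicknesses), the genealogical reading lives entirely in the two mappings that supply the interpretation; the books themselves carry no genealogy:

```python
# A minimal sketch of the 'modeling relationship' from the book-stack
# example. All names and numbers are illustrative assumptions.

# Physical system: book thicknesses in millimetres.
thickness = {"book_a": 40, "book_b": 25}

# The interpretation, supplied externally: which book stands for whom.
stands_for = {"book_a": "James", "book_b": "Jack"}

def is_paternal_ancestor(book_x, book_y):
    """Under the chosen interpretation, a thicker book models an
    earlier (more ancestral) member of Joe's paternal line."""
    return thickness[book_x] > thickness[book_y]

# Reading the model: since book_a is thicker than book_b, the person it
# stands for (James) is modeled as Jack's paternal ancestor.
assert is_paternal_ancestor("book_a", "book_b")
```

    Swap the `stands_for` dictionary and the very same stack models a different family; that is the sense in which the mapping, not the physical system, does the representational work.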

    Another such example is the relationship between the distances of the little beads of an orrery and the planets of the solar system. You can use the orrery to ‘simulate’, or ‘compute’, the temporal evolution of the solar system, and figure out the relative positions of the planets at an arbitrary moment in time.
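    In the same spirit, a software ‘orrery’ is just arithmetic until its outputs are interpreted as planetary positions. A minimal sketch, assuming idealized circular orbits and textbook period values:

```python
# Toy orrery (illustrative only): angular positions of two planets,
# assuming circular orbits. The arithmetic is not 'about' planets;
# it models the sky only under our interpretation of its outputs.
import math

PERIODS = {"earth": 365.25, "mars": 687.0}  # orbital periods in days

def angle_at(planet, t_days):
    """Angular position in radians after t_days, for a circular orbit."""
    return (2 * math.pi * t_days / PERIODS[planet]) % (2 * math.pi)

# Halfway through Earth's year, the model places it half an orbit along.
assert abs(angle_at("earth", 365.25 / 2) - math.pi) < 1e-6
```

    Mapping the 1s and 0s of such a program onto beads, or onto planets, is exactly the further act of interpretation the comment describes.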

    Computation now just involves a little further abstraction—mapping some configuration of 1s and 0s to configurations of the orrery, say. But the key point here is that the stack of books doesn’t map to Joe’s ancestors out of itself, and the orrery does not map to the solar system’s planets on its own—it needs to be interpreted in order to do so. Consequently, the ‘modeling relation’ and all computation, is derived from our capacity of interpreting something—just as you are interpreting these symbols right now.

    Hence, no successful theory of the mind can ever appeal to computation as foundational—it would simply be circular, since computation is something a creature with a mind can use other systems to perform.

    Furthermore, there is no reason to suppose everything can be modeled computationally. Clearly, some aspects of the system—the structure—carry over from the system to the model; but just as clearly, some don’t (otherwise, the systems would simply be identical). So it’s not unreasonable to suppose that there are intrinsic characteristics of physical systems that are not limited to the structural, and consequently, can’t be captured by computation: the world outruns that which we can model using computation.

    But now, my hypothesis is that whenever we explain something, we do so using computation—just as whenever we write down something, whenever we communicate something, we do so using representation-bearing vehicles (which of course only represent if interpreted in the right way). So, basically, upon trying to understand the solar system, we build a little orrery out of mindstuff (whatever that may be), interpreting a particular mental configuration as a model of the solar system. Then, you just move the moon in the right way, and ding! You understand solar eclipses.

    But if that’s right, then our understanding—but not our minds, or their functioning, themselves—is limited by that which can be computationally modeled. Hence, our brains’ capacity for understanding falls short of what our brains can actually do—there is a horizon imposed upon our understanding by the computational nature of this understanding, this modeling. Whatever outruns this horizon, outruns our understanding.

    Consequently, it’s the non-computational nature of mind that makes it appear paradoxical to our computational reasoning facilities. The facts about the mind are not computationally derivable from the physical facts: hence, Mary doesn’t know what red looks like until she sees it, and zombies and inverted spectra seem possible. The mental facts (pace Tom Clark) follow from the physical facts; we just can’t derive them. Otherwise, they’re just perfectly ordinary inhabitants of the world; it’s only assuming that the world should be computational that makes things seem strange.

    There wouldn’t be any physics if powers of explanation were restricted to computational rules.

    Physics, of course, is a paradigm case of computational modeling; in recent years, even literally so, with computer simulations gaining importance. Moreover, all the equations in papers are just exercises in symbol-manipulation. It’s mapping these symbols to the world that depends on our non-computational, intentional capacities. But that’s also exactly what physics apparently fails to account for, as Brentano noted.

  30. facts are simply truths about the world … [that may not be ‘true statements’ … since not all facts may be (finitely) expressible]. … I think requiring that facts be effective in some way just adds a layer of description whose utility isn’t obvious to me.

    This pretty much captures our essential communication problem. You (and others – I’m picking on you only because you at least bother to reply) seem to think that certain words/phrases that can be used without elaboration in quotidian conversation can be used in fora like this one also without elaboration and nonetheless have context-relevant meaning. I don’t. The SEP entries for the specialized use of many such words/phrases go on for pages, and whole books have been written about some.

    Eg, you explain the term subjective fact as being a “truth” about the world that can be “learned” by having an “experience”. For each of the quoted words, I assume a fairly precise meaning in this context. Unfortunately, when those assumed meanings are applied to your explanation, it makes no sense to me. Apparently, you’re assuming different meanings, so without some idea of what those are, we obviously can’t communicate.

    You do elaborate, adding that a subjective fact can’t be put into words at all, never mind assertive form, and isn’t necessarily “effective” in any sense. But that doesn’t help me because I can only respond “then why are such facts of any interest?” As a “meaning is use” proponent, if I don’t see to what use such a “fact” could be put, I don’t see what meaning it could have.

    A writer in any specialized field typically has to use jargon and assume readers know enough to pick up the intended meaning. But that only works if the jargon is sufficiently well-defined and relatively stable. That doesn’t appear to me to be the case in this forum – and arguably not in this field in general.

  31. Charles:

    certain words/phrases that can be used without elaboration in quotidian conversation can be used in fora like this one also without elaboration and nonetheless have context-relevant meaning

    Well, what would you have us do? If we are to presume that communication is possible at all, we have to presume that we more or less sample from some common pool of understanding. After all, each word will just be explained with more words, which you could then again quibble with.

    Furthermore, it’s not generally necessary, even in technical contexts, to insist on a complete specification of all terms—indeed, insisting on this is generally known as the Socratic fallacy. Biologists wrote intelligibly about ‘life’ long before we had any idea of what, exactly, life is; or of evolution before the discovery of genes; and so on.

    Eg, you explain the term subjective fact as being a “truth” about the world that can be “learned” by having an “experience”. For each of the quoted words, I assume a fairly precise meaning in this context. Unfortunately, when those assumed meanings are applied to your explanation, it makes no sense to me.

    So, what are your ‘fairly precise’ meanings of these words? To me, a truth can, for most practical purposes, simply be considered something that actually holds of the world, that is the case; learning something just means acquiring new knowledge; and experience simply is the subjective aspect of direct acquaintance. None of these things, presumably, can be stretched to include all edge cases; but they also don’t need to. Knowledge perhaps isn’t ‘justified true belief’ in some extreme cases; but that doesn’t mean we don’t know what knowledge is, nor even that we can’t consider it to be JTB for most practical purposes. I mean, it took nearly 2500 years to even notice the discrepancy!

    And of course, you can now go on and quibble with things like ‘being the case’, ‘acquaintance’, or indeed, ‘knowledge’. And once I define these, you can quibble with the terms in that definition. And so on: you’ll never run out of material. But what you’re doing then isn’t discussion, it’s just obstruction.

    But that doesn’t help me because I can only respond “then why are such facts of any interest?” As a “meaning is use” proponent, if I don’t see to what use such a “fact” could be put, I don’t see what meaning it could have.

    Facts need neither be of interest, nor of use; I don’t know why you would hold the world to such standards, accommodating you personally in its constitution. The exact number of molecules in the white bit of my left pinky’s nail is presumably neither of interest nor of use to anybody at all; nevertheless, it’s a fact.

    But facts, even those of no interest or use to you personally, can be puzzling: and as such, they require explanation. That’s what I’m trying to do.

  32. I’ve never thought much of marythecolorscientistsitsinherchineseroomimaginingwhatitsliketobeabat stories, but if Mary’s knowledge of color science is perfect then she should be able to perfectly simulate the experience of seeing red. If her simulation is perfect, then when she sees red the experience will be identical to her simulated experience, and so she will not learn anything new. If her simulated experience is not identical with her real experience, then the knowledge on which she based her simulation was incomplete.

    My sense of this is that the Mary story is an attempt to convince people of the existence of qualia. It’s not an attempt to prove the existence of qualia, so (like some of the claims in our recent Presidential election) it seems likely to convince only those who were already so inclined. I don’t know of any way to prove the existence of qualia in a way that compels agreement from those not already so inclined, so I think debates about qualia tell us more about the debaters than they do about qualia.

    To John Davey:
    “Biology, Chemistry and “Electricity” have nothing to do with information.”

    I think when discussing information it’s always good to be mindful of the fact information can’t exist in the abstract. Any information has to be embodied, whether as ink on paper or regions of greater and less reflectivity on a CD or what have you. Human beings are not an exception to this. Any information or knowledge possessed by a human being must be embodied neurologically or in some other physical medium. Are qualia information? If so, are they embodied in some physical medium? To those for whom the answer to both questions is no, do qualia exist? To those for whom qualia are information but not embodied, are you theists?

  33. Jochen in 29:

    “My basic thesis is that there’s a nomological connection between the physical facts of the matter, and the experiential ones (or the experiential not-facts, if you will), just that that connection is not available to us as a means of deducing consequences—hence, the apparent possibility of inverted spectra and zombies.”

    I’m not sure how, if the connection isn’t available to us, you know it exists. I too think it’s likely there’s some sort of entailment from certain representational goings-on to the existence of experience (so that our micro-physical duplicates couldn’t be zombies), but that has to be proved or demonstrated, not assumed.

    “To me, facts are simply truths about the world.”

    It’s a fact that apples are red, but what is the fact about red in and of itself, putting aside its relations to other colors? That it looks a particular, if ineffable, way, you will say. However, I’m not sure that’s a truth about the *world*, but rather one of your basic, not-further-specifiable subjective categories in terms of which truths about the world get described (the apple is red). Still, I feel the pull of thinking my red is a fact, since after all there it is in its inimitable particular way!

    Although I think and hope we might eventually understand why experience arises for systems like us, what I don’t think we’ll get is a law-like entailment or causal story going from physical facts to particular qualitative looks, precisely because those looks are private, subjective phenomena, not anything that can be observed, and thus specified.

  34. Jochen –

    You consider requiring that someone carefully define the terms they are using in a technical context a quibble; I don’t. That leaves us at an impasse.

    Just for the record, my Sellarsian understanding of “knowledge” is related to JTB only in the loosest sense. “Justification” plays a role, but not the one it plays in JTB. “Truth” arguably plays a role, but not “truth” understood as you understand it. B plays no role. And in any event, knowledge can’t be acquired by “acquaintance” – one of the key points of “Empiricism and Phil of Mind”.

  35. Michael –

    Any information or knowledge possessed by a human being must be embodied neurologically or in some other physical medium.

    This relates directly to my comment 34. Sellars famously says (roughly) that being in a state of “knowing” is being able to make an assertion and justify it to your peers. Such abilities presumably are embodied in motor neuronal structures.

    As I understand it, one of the essential features of qualia is that nothing can be said about them (in particular, neither assertions nor justifications). Hence, at least in the Sellarsian sense of “knowledge”, there can be none about them.

  36. With the part of the visual field that is actually blind (because that is where the optic nerve leaves the eye) but gets filled in with information from surrounding areas, are those fill-ins called qualia? So you can have qualia of false information?
