So what about Nagel’s three big issues with materialism?

On consciousness the basic argument is simply that our inner experience is just inaccessible to science. We still can’t get inside the heads of those bats, and we can’t really get inside anyone’s except the one we have direct experience of – our own. Nagel briefly considers the history of the problem and the theory of psychophysical identity put forward by Place and Smart, but nothing in that line satisfies him, and I think it’s clear that nothing of the kind could, because nothing can take away the option of saying “yes, that’s all very well, but it doesn’t cover this here, this current experience of mine”. Interestingly, Nagel says he actually suspects the connection between mental and physical is not in fact contingent, but the result of a deep connection which unfortunately is obscured by our current conceptual framework; so given a revolution in that framework he seems to allow that psychophysical identity could after all be seen to be true. I’m surprised by that because it seems to me that Nagel is in a place beyond the reach of any conceptual rearrangement (that cuts both ways – Nagel can’t be drawn out by argument, but equally if someone were simply to deny there is “something it is like” to see red, I don’t think Nagel would have anything further to say to them either); but perhaps we should feel very faintly encouraged.

At any rate, Nagel argues (and few will resist) that if consciousness is indeed physically inexplicable in this way the problem cannot be sealed off in the mind; it must creep out and infect our ideas about everything, because we have to give accounts of how consciousness evolved, and how it fits into our notions of physical reality. The answer to that in short is of course that as far as Nagel is concerned it can’t be done, but getting there through his review of the possibilities is quite a ride.

Cognition is the second problem, and a somewhat unexpected one: that’s supposed to be the Easy Problem, isn’t it? Nagel draws a distinction between the simple kind of reactions which relate directly to survival and the more foresighted and detached general cognition which he sees as more or less limited to human beings. He doubts that the latter is a natural product of simple evolution, which somewhat echoes the doubts of Alfred Russel Wallace, the co-discoverer of the theory of natural selection; in later life Wallace took the view that evolution could not explain the human mind because cavemen simply didn’t need to be that bright and would not have been under evolutionary pressure to spend energy on such massive intellectual capacity.

Nagel sees a distinction between faculties like sight, and that of reason. Our eyes present us with information about the world; we know it may be wrong now and then, but we’re rationally able to trust our vision because we know how it works and we know that evolution has equipped us with visual systems that pick up things relevant to our survival.

Our reasoning powers are different. We need them in order to justify anything to ourselves; but we can’t use them to validate themselves without circularity. It’s no good saying our reason must be serviceable because otherwise evolution wouldn’t have produced it, because we need to use our reason to get to belief in evolution. In short, our faith in our own cognitive powers must and does rely on something else, something of which a separate, non-evolutionary account must be given.
There’s something odd about this line of argument. Do we really look to evolution to validate our abilities? I have a liver thanks to evolution, but its splendid functional abilities are explained in another realm, that of biochemistry. I don’t think we trust our eyes because of evolution (people found reasons to believe their eyes before Darwin came along). So yes, our cognitive abilities do, on one level, need to be understood in terms of an explanatory realm separate from evolutionary theory – one that has to do with logic, induction, and other less formal processes. It’s also true that we haven’t yet got a full and agreed account of how all that works – although, you know, we have a few ideas.

But surely not even the most radical evolutionary theorists claim that the theory validates our powers of reasoning – it simply explains how we got them. If Nagel merely wants to remind us that the ‘easy’ problem still exists, well and good – but that’s not much of a hit against materialism, still less evolution.
The third big problem is ‘value’, a term which here confusingly covers three distinct things: first, the target for Nagel’s teleological theory – the thing the cosmos hypothetically seeks to maximise; second, the general quality of the desiderata we all seek (food, shelter, sex, etc); third, the general object of ethics, somewhat in the sense that people talk about “our values”. These three things may well be linked, but they are not, prima facie, identical. However Nagel wants to sweep them all up in a general concept of something loosely motivating which is absent from the standard materialist account. He quotes with approval an argument by Sharon Street about moral realism, with the small proviso that he wants to reverse it.

Street’s argument is complex, but the twice-summarised gist appears to be that ‘value’ as something with a real existence in a realm of its own is incompatible with evolution because evolution happens in the real material world and could not be affected by it. Street draws the conclusion that since evolution is true, moral realism in this sense is false, whereas Nagel concludes that since moral realism is just evidently true, evolution can’t be quite right.
Myself I see no need to bring evolution into it. If moral value exists in a realm separate from material events, it can’t affect our material behaviour, so we have an immediate radical problem already, long before we need to start worrying about such matters as the longer-term history of life on earth.
I said that I think ‘value’ is actually three things, and I think we need three different answers. First, yes, we need an account of our drives and motivations. But I feel pretty confident that that can be delivered in a standard materialist framework; if we lay aside the special problems around conscious motivation I would even venture to say that I don’t see a huge problem; we can already account pretty well for a lot of ‘value’ driven behaviour, from tropisms in plants up through reflexes and instincts, to at least an outline idea of quite complex behaviours. Second, yes, we also need an account of moral agency; and I think Nagel is right to make a linkage with philosophy of mind and consciousness. This is a large subject in itself; it could be that morality turns out in the end to need a special realm of its own which gives rise to problems for materialism, but Nagel says nothing that persuades me that is so, and things look far more promising and less problematic in the opposite direction. Finally, we have the fuel for Nagel’s teleology; not wanted at all, in my view: an unnecessary ontological commitment which buys us nothing we want in the way of insight or explanation.
To sum up: this has been a pretty negative account. I think Nagel consistently overstates the claims of evolution and so ends up fighting some straw men. He doesn’t have a developed positive case to offer; what he does suggest is unattractive, and I must admit that I think in the end his negative arguments are mainly mistaken. He does articulate some of the remaining problems for materialism, and he does put some fresh points, which is a worthy achievement. I sympathise with his view that evolutionary arguments have at times been misapplied, and I admire his boldness in swimming against the tide. I do think the book is likely to become a landmark, a defining statement of the anti-materialist case. However, that case doesn’t, in my opinion, come out of it looking very good, and by associating it so strongly with misplaced anti-evolutionary sentiment, Nagel may possibly have done it more harm than good.

59 Comments

  1. Tom Clark says:

    Cogent observations as usual Peter, thanks. Not having read the book, it isn’t clear to me whether Nagel is suggesting materialism (physicalism) is inadequate to explain mind, or whether naturalism is inadequate to explain it.

    There are dualist naturalists like Chalmers who argue that physicalism is likely not the whole story, that there exist non-physical natural phenomena. And then there are dualist anti-naturalists like Plantinga who hold there are non-physical *supernatural* phenomena.

    I suspect Nagel is closer to Chalmers than Plantinga, but perhaps he’s shading a bit into anti-naturalism, if not in his approach to mind then in his approach to values and morality, what do you think?

  2. Peter says:

    Good question – I think it’s both materialism and naturalism he’s attacking – but he does take a detached sort of view, running through a number of possibilities, and apologising for his own failure to advance a sufficiently imaginative alternative.

    In his conclusion he says:

    In the present climate of a dominant scientific naturalism… I have thought it useful to speculate about possible alternatives… It would be an advance if the secular theoretical establishment… could wean itself of the materialism and Darwinism of the gaps…

  3. Tom Clark says:

    Thanks Peter. So Nagel is giving advice to the secular, that is, naturalist, establishment, saying it should think beyond physicalism. Maybe it should, as I have tried to do in suggesting a non-causal representationalist phenomenal-physical parallelism, based in two epistemic perspectives on the world, one creaturely, the other collective. But from reading your review and others it seems like he’s the one drawing incomplete, unevidenced and not that imaginative inferences from explanatory gaps, e.g., that “the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False.”

  4. Callan S. says:

    Is there any further definition given by him for what ‘get inside’ involves, in empirical terms?

  5. Vicente says:

    Well, Tom, Peter, what did you expect?

    Peter began part 1 by introducing Nagel in terms like:

    important figure in inspiring the Mysterian school of pessimism

  6. haig says:

    Even though Nagel risibly bungled important details of evolutionary biology and cognitive science, I can’t help but feel he is on the right track in pursuing his project of reintroducing teleology and non-computability into the scientific picture. To clarify, when I say teleology I really mean teleonomy, and when I say non-computability I really mean super-Turing/hypercomputational algorithms.

    Ever since Francis Bacon and the scientific revolution, Aristotle’s formal and final causes have been abandoned as relevant forces which explain nature; instead we stick to material and efficient causes. I think recent advances in computer science and complexity science have shown how important formal causes are, and by reincorporating formal causes, a teleonomic view of final causes becomes tenable. This has implications across science and philosophy, including morality, as Nagel discussed.

    Nagel is also right not to prematurely sweep qualia under the rug by simply reducing the hard problem of consciousness to classically computable processes, but that does not mean qualia are immaterial or that they invalidate physicalism. It remains to be seen whether qualia do just arise from massively complex integration of simple computations, but there is still a possibility that some other force of nature, possibly non-computable on classical Turing machines (but computable nonetheless using the relevant natural forces as ‘qualia oracle machines’), will be discovered to play critical roles in consciousness.

  7. Peter says:

    Callan – I don’t remember any additional clarification on that.

  8. Eric Thomson says:

    Haig, can you define ‘formal cause’ and give an example of how it would be useful in computer science?

  9. haig says:

    #8:

    Formal causes are non-entailing laws, where the future states of a system depend not only on the current entailing laws acting upon it, but also on the changes in how the system is organized.

    Entailing laws (efficient causes) are amenable to simple mathematical analysis, which is why, historically, physics has gotten so much further than the other sciences. Non-entailing laws (formal causes) require a different analytical approach that would allow us to capture the organization of systems through time, which is why biology and other complex systems that exhibit such dynamic organizational structures have been harder to understand. Computer science gives us both the conceptual and technological tools to investigate these non-entailing laws and to better understand complex systems.

  10. Vicente says:

    Haig,

    “Formal causes are non-entailing laws, where the future states of a system depend not only on the current entailing laws acting upon it, but also on the changes in how the system is organized.”

    What causes the changes in the system organisation? Maybe other entailing laws? So, if the boundary within which we consider the laws to act is large enough, then all the laws become entailing. Is it just a matter of considering all the laws relevant to the system?

    Materialism is ruthless, no middle ways, no compatibilism.

  11. haig says:

    Vicente,

    Changes to a system’s organization are caused by the same natural forces, it is all physicalism. The point in making a distinction between entailing and non-entailing laws is one of computational complexity–the time it takes to find a particular state in a configuration space.

  12. Vicente says:

    Haig,

    Is it an analogy to traditional Mealy and Moore state machines?

  13. haig says:

    Vicente,

    Not necessarily; finite state machines are interesting as far as non-Turing-complete models go, though the issue isn’t really about computability at all, but computational complexity. Wolfram’s *A New Kind of Science* made this mistake.

  14. scott bakker says:

    haig,

    I’m not sure I see the difference between ‘non-entailing laws’ and teleology then, where the latter is understood as a heuristic means of managing complexity. What would your argument be for attributing ontological *parity* to ‘entailing’ versus ‘non-entailing’ laws?

  15. haig says:

    Scott,

    Yes, the non-entailing laws would be teleological, though the historical baggage of that term usually includes a purposeful ‘intelligence’ doing the ‘managing’, whereas a teleonomic perspective that I prefer would preserve direction through (using Dennett’s terms) cranes without presupposing any skyhooks.

    The two categories of laws describe the rules of the behavior of natural forces, there is no claim that using different rules implies a distinct ontological status to the underlying forces. If anything, it implies an epistemological priority as opposed to the traditional epistemological parity, but that is implicitly followed already.

  16. scott bakker says:

    haig:

    Thanks for the clarification – though Dennett isn’t as clear on the topic as one might like. Saying ‘skyhooks’ are really ‘cranes’ supposes that we actually have some kind of substantive metacognitive access to what it is our brains are doing when they (supposedly) take any version of the ‘intentional stance.’ The actual heuristics, I would argue, remain inscrutable.

  17. haig says:

    Scott,

    What Dennett ultimately is saying is that the concept of skyhooks should be removed completely, skyhooks are tantamount to miracles, there are cranes and only cranes. This is the universal acid that dissolves any notion of there being skyhook explanations for any system, you don’t need to scrutinize the details to figure that out. You can actually formalize this using algorithmic information theory and computational complexity, and I think it is a pretty solid foundation to operate from.

    This is also why Dennett is put into the straitjacket of eliminative materialism. By his own admission he was led to the abandonment of qualia and the dismissal of the hard problem of consciousness as a direct consequence of the universal acid. Qualia are either a fundamental aspect of the universe, with subjective experience extending in some form all the way down from the bottom up, or they don’t exist at all (or are an illusion). He takes the latter position because the former would require an apparently radical rethinking of physics and neuroscience to incorporate the notion of qualia into the foundational physical theories of the universe (which I personally think is less radical than completely denying subjective experience and reducing it to the intentional stance of a classical Turing-complete machine!).

  18. Vicente says:

    Haig,

    At the end of the day, skyhooks came to be proposed because cranes are hidden in the fog… always the same theme, the explanatory gap.

    What we need is to elaborate a consistent theory, and for that, I am afraid that a “radical rethinking of physics” will be necessary. To talk about epistemological parity or priority… I am already astonished by “epistemology” as a noun.

  19. Arnold Trehub says:

    Vicente, I agree. Below is an excerpt from a forthcoming book chapter. Does this make contact with your notion of “epistemological parity or priority”?

    “Each of us holds an inviolable secret — the secret of our inner world. It is inviolable not because we vouch never to reveal it, but because, try as we may, we are unable to express it in full measure. The inner world, of course, is our own conscious experience. How can science explain something that must always remain hidden? Is it possible to explain consciousness as a natural biological phenomenon? Although the claim is often made that such an explanation is beyond the grasp of science, many investigators believe, as I do, that we can provide such an explanation within the norms of science.

    However, there is a peculiar difficulty in dealing with phenomenal consciousness as an object of scientific study because it requires us to systematically relate third-person descriptions or measures of brain events to first-person descriptions or measures of phenomenal content. We generally think of the former as objective descriptions and the latter as subjective descriptions. Because phenomenal descriptors and physical descriptors occupy separate descriptive domains, one cannot assert a formal identity when describing any instance of a subjective phenomenal aspect in terms of an instance of an objective physical aspect, in the language of science. We are forced into accepting some descriptive slack. On the assumption that the physical world is all that exists, and if we cannot assert an identity relationship between a first-person event and a corresponding third-person event, how can we usefully explain phenomenal experience in terms of biophysical processes? I suggest that we proceed on the basis of the following points:

    1. Some descriptions are made public; i.e., in the 3rd person domain (3 pp).

    2. Some descriptions remain private; i.e., in the 1st person domain (1 pp).

    3. All scientific descriptions are public (3 pp).

    4. Phenomenal experience (consciousness) is constituted by brain activity that, as an object of scientific study, is in the 3 pp domain.

    5. All descriptions are selectively mapped to egocentric patterns of brain activity in the producer of a description and in the consumer of a description (Trehub 1991, 2007, 2011).

    6. The egocentric pattern of brain activity – the phenomenal experience – to which a word or image in any description is mapped is the referent of that word or image.

    7. But a description of phenomenal experience (1 pp) cannot be reduced to a description of the egocentric brain activity by which it is constituted (there can be no identity established between descriptions) because private events and public events occupy separate descriptive domains.

    It seems to me that this state of affairs is properly captured by the metaphysical stance of dual-aspect monism (see Fig.1), where private descriptions and public descriptions are separate accounts of a common underlying physical reality (Pereira et al 2010; Velmans 2009). If this is the case, then to properly conduct a scientific exploration of consciousness we need a bridging principle to systematically relate public phenomenal descriptions to private phenomenal descriptions.”

  20. scott bakker says:

    haig,

    Well, there’s scientifically radical and then there’s intuitively radical. Those of us who have given up on the latter tend to be all the more touchy regarding the former! For my part, I think I can actually explain why qualia pose the apparent problem they do.

    I agree that Dennett’s an eliminative materialist in denial (my post on “Re-engineering Dennett” that Peter linked a few weeks back essentially argues this). But I’m still trying to get a handle on your position: the ‘universal acid’ simply is mechanistic explanation, which involves neither entailing nor non-entailing *laws,* but rather descriptions of *particular* causal structures in the world. How do you see teleonomic models fitting into this picture? Personally, I’m with Craver (and Piccinini) on this one. The most sober way to look at them, it seems to me, is as a kind of ‘mechanism sketch’ (which is what I take Dennett to more or less mean by ‘crane’).

    Certainly this is nothing remotely approaching what Nagel is arguing, is it?

  21. scott bakker says:

    Vicente,

    “At the end of the day skyhooks happened to be proposed because cranes are hidden in the fog… always the same theme, the explanatory gap.”

    Or brain blindness!

  22. haig says:

    Scott:

    The universal acid is not simply mechanistic explanation, it is the *constrained* mechanistic process of how nature searches through the vast design space of possibility to arrive at the actual. This constrained search process, this recursively iterative crane constructor, does not just arrive at historically contingent search results, contra Gould, but is path dependent on necessary convergent end states such as the self-organization of morphogenesis, evolutionary stable strategies, and what Dennett calls ‘Good Tricks’ in the design space. This sets the stage for a plausible teleonomic explanation for final causes, an algorithmic goal-directedness where the ‘goals’ or end states are the convergent development of search results in the design space that are always arrived at if you replay the search process over and over again. These convergent end states require a non-entailed analysis in order to understand them. Nagel sees this process, but arrives at it the wrong way using skyhook teleology instead of crane teleonomy.

  23. Arnold Trehub says:

    haig: “The universal acid is not simply mechanistic explanation, it is the *constrained* mechanistic process of how nature searches through the vast design space of possibility to arrive at the actual.”

    If this is the universal acid, isn’t it in the domain of our unknowable physical reality?

  24. scott bakker says:

    Haig,

    I was pretty sure you would agree with me! I would say mechanistic explanation is constrained by definition (by the causal structure of the world, no less), and it does the dissolving. Posits like ‘design space’ are part of a second-order philosophical explanation that smacks of common sense, but becomes deeply puzzling (as do all intentional conceptualizations) at the merest questioning. So why bother with them, UNLESS we’re simply using them as heuristic short-cuts for mechanistic super-complexity? Or in other words, as redacted mechanistic explanations?

  25. haig says:

    Arnold:

    The ontology of what is ‘knowable physical reality’ gets tricky in this regard. The possibility space is always larger than what is actually reached by the algorithm (from our vantage point at least). Subjunctive reasoning makes this more apparent, as counterfactual results are logically possible even though they never occurred.

    Scott:

    The causal structure of the world does not constrain the algorithm, it *is* the algorithm. Can you explain why ‘design space’ would be puzzling?

  26. Arnold Trehub says:

    Haig, if we were to know the ontology of all physical reality we would be omniscient. Since we are not omniscient, it seems to me that there must always remain an unknowable physical reality. A mystery will always remain. No?

  27. haig says:

    Arnold,

    Yes, causally consistent universes which are expressive enough to ‘know themselves’ will always be incomplete from the inside, meaning there will always be aspects of reality that are ‘real’ but not instantiated (Gödel writ large). Of course, if you could somehow get outside the universe into a metaverse which encompasses and extends this universe, then you could get a god’s-eye view of this universe (though you’d be in the same predicament in that metaverse!)

  28. Vicente says:

    Arnold, thank you for the trailer.

    Yes, I suppose that it does make contact, in a way.

    I would wish to see Dennett’s points based on some real genetics. I appreciate that you support your points with a real biological model (i.e. the retinoid system). But Dennett, and others, make all their points by appealing to a sort of philosophical disquisition, and I doubt whether they have any idea about basic genetics and molecular biology.

    As I see it, evolution requires a leap like the one thermodynamics took when the new statistical physics laid the basis for explaining the macroscopic (integrated) observations, and there were microscopic processes to account for all the heat transfer and temperature distributions observed. Once the biology is clear, and you have your underpinning biophysics in place, we will be in a position to elucidate the very nature of dual-aspect monism.

  29. Vicente says:

    Scott,

    Well, yes, anosognosia is always there to help us see things clearer ;^).

    But, in this case, scientific instrumentation and brain models should help to overcome the problem. I agree that we can’t access brain processes by introspection, but we can externally observe and measure them. So we can try to blow the fog away a bit.

    A different problem is the hard problem, and phenomenal consciousness.

    So brain blindness applies only to a certain extent in this case.

  30. Arnold Trehub says:

    Haig, you wrote:

    “Changes to a system’s organization are caused by the same natural forces, it is all physicalism. The point in making a distinction between entailing and non-entailing laws is one of computational complexity–the time it takes to find a particular state in a configuration space.”

    In *The Cognitive Brain*, p. 302, I wrote this:

    “Within a more general perspective, the synthesis of novel and useful neuronal patterns that would be committed to long-term memory in the individual and shared cumulatively in the larger society would represent the evolutionary development of biological, intraorganismic cognitive tools in the same sense as the evolutionary development of material extraorganismic artifacts (Trehub 1977).”

    Would you consider this as a kind of non-entailing law for epigenetic trajectories?

  31. scott bakker says:

    Haig,

    “The causal structure of the world does not constrain the algorithm, it *is* the algorithm.”

    That’s the way I prefer looking at it as well. Constraint is built in. So you’re saying there are constraints over and above the laws of physics and causal structure?

    “Can you explain why ‘design space’ would be puzzling?”

    Well, it’s not a real ‘space’, for one. And what is ‘design’? Saying it’s a ‘heuristic stance’ begs the question of what a *stance* is. Dennett only gives the vaguest of answers, and nothing that explains the conceptual peculiarities pertaining to it. And so we find ourselves spinning on the intentional merry-go-round, always begging what we’re attempting to explain, never getting behind anything.

  32. scott bakker says:

    Vicente: “Well, yes, anosognosia is always there to help us see things clearer”

    This is more true than you know! The problem for anosognosiacs isn’t ignorance of their deficit – if that was the case, one could simply point it out for them. The problem is how *certain* they are that they suffer no deficit. Things are literally too clear for them.

    “A different problem is the hard problem, and phenomenal consciousness.”

    Brain blindness has a lot to say about this as well. I have a piece coming up on the Knowledge Argument you might find interesting.

  33. Arnold Trehub says:

    Haig: “Can you explain why ‘design space’ would be puzzling?”

    Scott: “Well, it’s not a real ‘space’ for one.”

    What else could it be but real space-time in which physical structure and dynamics spin out in evolving patterns in accordance with its intrinsic properties? Humans, as a special part of nature, capture some of the relevant systematics of nature’s constraints in their brain’s phenomenal cognitive mechanisms. This is why we are able to have this discussion.

  34. haig says:

    Arnold:
    Entailing laws are like those governing a billiard ball’s trajectory. Non-entailing laws are like those governing the regulation of gene expressions. The former is just a function of time, the latter is a function that takes its own organization as input to future states, which in turn changes its organization again, so that you can’t predict too far into the future without actually stepping through the algorithm.
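
    To make the contrast concrete, here is a toy sketch (entirely hypothetical code, not anything from the literature): an entailing rule can be evaluated at any time directly, while a non-entailing rule rewrites its own ‘organization’ at each step, so there is no shortcut past actually running it.

```python
def entailing(x0, v, t):
    """Billiard-ball style: the state at time t is a closed-form
    function of the initial conditions, so we can jump to any t."""
    return x0 + v * t

def non_entailing(weights, steps):
    """Gene-regulation style: each update depends on the current
    'organization' (the weights), and the organization is itself
    rewritten by the resulting state, so we must iterate."""
    state = [1, 0, 1]
    for _ in range(steps):
        # The next state is a function of the current organization...
        state = [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
                 for row in weights]
        # ...and the organization is then modified by that state.
        weights = [[w + (0.1 if s else -0.1) for w in row]
                   for row, s in zip(weights, state)]
    return state

# entailing(0.0, 2.0, 10**9) is evaluated instantly; non_entailing has
# no such shortcut and must step through every iteration.
```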

    Scott:
    No constraints above the laws of physics; we’re in agreement there. But I’m afraid we part ways in our conceptions of design space. I consider the space real; it is all included in the relativistic quantum field, but what we see is only a small subset, contingent on our historical timeline.

    As metaphor, take possibility space to mean all the ways a block of stone can be chiseled, every combinatorial permutation from the most minute scratch to the most intricate sculpture. Now, let us define the design space as all the ways the block of stone can be chiseled by a single artist in a lifetime. The artist is constrained by time, by her skills/tools, and by past historical contingencies as she can’t undo each chip of the stone. Evolution, as played out, takes the design stance of this artist. Even though she is incredibly creative, her designs will always have consistent features due to the inherent skills she possesses and the limited time allowed for her to do her work.
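
    The metaphor can be counted out in a toy calculation (the numbers are purely hypothetical illustrations): the possibility space is every combination of chips whatsoever, while the design space is only what a budget-limited artist can reach.

```python
from itertools import combinations

n_sites = 10         # a hypothetical "block of stone" with 10 chippable sites
lifetime_budget = 3  # the artist can make at most 3 irreversible chips

# Possibility space: every subset of sites that could conceivably be chipped.
possibility_space = 2 ** n_sites

# Design space: only those outcomes reachable within the artist's budget.
design_space = sum(1 for k in range(lifetime_budget + 1)
                   for _ in combinations(range(n_sites), k))

print(possibility_space, design_space)  # the design space is far smaller
```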

  35. Arnold Trehub says:

    Haig: “… the latter [a non-entailing law] is a function that takes its own organization as input to future states, which in turn changes its organization again, so that you can’t predict too far into the future without actually stepping through the algorithm.”

    So if a brain changes its own organization as a function of learning (as it does), its current organization is always a function of its sensory input and its learning-modified organization (auto-input?). Algorithms that predict a system’s organization at t as a function of input at t-1 including its organization at t-1 are *non-entailing laws*. Would this be correct in your view?

  36. haig says:

    Arnold,

    Yes, that sounds right. The dynamics are nonlinear, the differential equations used to predict the future states are unsolvable beyond t+epsilon.
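
    The practical force of that claim can be illustrated with the logistic map, a standard chaotic system (a textbook example, not anything from the thread): each step is perfectly deterministic, yet a millionth-part uncertainty in the initial condition swamps the prediction within a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One deterministic step of the logistic map."""
    return r * x * (1.0 - x)

# Two trajectories started a millionth apart.
a, b = 0.3, 0.3 + 1e-6

steps_needed = 0
while abs(a - b) < 0.01 and steps_needed < 1000:
    a, b = logistic(a), logistic(b)
    steps_needed += 1

# The gap grows roughly exponentially, so the trajectories come to
# disagree by more than one percent after only a few dozen steps.
print(steps_needed)
```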

  37. haig says:

    Arnold,

    Actually, upon further reflection, I don’t think it is so clear cut, since ‘learning’ could mean many things in the brain. Hebbian strengthening of synaptic weights doesn’t change the overall organization of the brain. Maybe it would be more pronounced during early development. There are criticality points in dynamic systems beyond which entailed predictions break down, and I’m not sure which brain processes contain these criticality points.

  38. scott bakker says:

    Haig: “I consider the space real, it is all included in the relativistic quantum field, but what we see is only a small subset contingent to our historical timeline.”

    And we ‘perceive’ this design/possibility space via some kind of intellectual intuition?

  39. haig says:

    Scott,

    Not at all, it is there in the empirically verified models of theoretical physics, unless you consider those models perceptions based on intellectual intuition, which in a way they are, but they are not ‘just’ that hence the ‘empirically verified’ qualifier.

  40. Arnold Trehub says:

    Haig, you wrote:

    “There are criticality points in dynamic systems beyond which entailed predictions break down …”

    I guess the concept of non-entailing laws hinges on what counts as a *criticality point* in a dynamic system. What are your thoughts about this?

  41. Vicente says:

    Haig: “The dynamics are nonlinear; the differential equations used to predict the future states are unsolvable beyond t+epsilon.”

    I think there is a misconception here.

    They are solvable!! Solutions exist!! Give me the initial and boundary conditions with enough accuracy and an N-bit-word computer (N >> 1), and I’ll do it.

    I don’t see that this is the problem; whether you can solve the equations with enough precision or not is conceptually irrelevant, since reality provides the solutions all the time.

    Laws are entailing; if you cannot produce an accurate enough simulation, too bad.
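    Vicente's claim can be illustrated with a toy numerical integration (an assumed example, not from the thread): the nonlinear pendulum has no elementary closed-form solution, yet given initial conditions its trajectory is computed step by step without difficulty.

```python
import math

# Semi-implicit Euler integration of the nonlinear pendulum:
#   theta'' = -(g/L) * sin(theta)
# No elementary closed-form solution exists, but the trajectory follows
# step by step from the initial conditions.
def pendulum(theta0, omega0, dt=1e-3, steps=1000, g=9.81, L=1.0):
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
    return theta

# released from 1 radian at rest; after ~1 s it has swung past the bottom
theta_final = pendulum(theta0=1.0, omega0=0.0)
```

    Whether this step-by-step computability settles the entailment question is, of course, exactly what the thread is arguing about.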

  42. scott bakker says:

    haig,

    Less is more, in my ontological books. I *try* to be agnostic between instrumentalism and realism on matters that clearly outrun our workaday cognitive capacities. I’m afraid I just don’t know what ‘real’ means in the context you’re using it here.

    But I certainly grant their empirical effectiveness. The question is where that effectiveness resides. Are ‘teleonomists’ committed to the reality of non-entailing laws, then?

  43. haig says:

    Arnold:
    Yes, the criticality points are an important feature.

    Vicente:
    Yes, I misspoke; I should have said there are no *exact* solutions, but that doesn’t affect my argument.

    Scott:
    Where do emergent properties come from if not from a *real* possibility space? Things don’t just spring into existence. The details of the ontology depend on which interpretation of quantum mechanics you lean towards, which isn’t settled territory, but the mathematical implications are so overwhelmingly verified that any interpretation will admit the reality of this ‘space’.

  44. Arnold Trehub says:

    Haig, so the precision of a solution and what counts as a criticality point remain to be determined. Aren’t we back to scientific pragmatics in deciding between entailing and non-entailing “laws”?

  45. haig says:

    Arnold,

    Yes, pragmatic judgement is used until a better theory of non-entailing laws is worked out (which I’m going to base my PhD thesis on). Ultimately, Nagel’s argument is founded upon taking Leibniz’s principle of sufficient reason seriously, which leads to taking Aristotle’s formal and final causes seriously, both of which have been abandoned by modern biology, cosmology, and analytical philosophy. Not all scientists are so quick to dismiss this, though; Lee Smolin and Roger Penrose come to mind.

  46. scott bakker says:

    “Where do emergent properties come from if not from a *real* possibility space? Things don’t just spring into existence. The details of the ontology depend on which interpretation of quantum mechanics you lean towards, which isn’t settled territory, but the mathematical implications are so overwhelmingly verified that any interpretation will admit the reality of this ‘space’.”

    But the scare-quotes say it all: The reality of some X susceptible to space-like mathematical mapping. Why not just rein in your commitments there?

    As for emergence, which kind do you mean? Mechanistic emergence, whereby wholes possess properties distinct from the properties of their component parts yet entirely explicable in terms of them, or spooky emergence, whereby wholes possess properties distinct from the properties of their component parts and not at all explicable in terms of them? As for where emergence comes from: where would it come from if not *actual* space? Why bother reifying possibility? What does it gain aside from a more complicated ontology?

  47. haig says:

    Scott,

    The emergence I’m talking about is ‘weak’ emergence, reducible to elementary constituent parts.

    Actual space *is* the possibility space I’m talking about: the solution space of the linear Schrödinger equation of the wavefunction is a Hilbert space (and the nonlinear form can still be quantized into a phase space), and the configuration space of classical (Newtonian) mechanics is not the whole picture, as we’ve known for over half a century. The ‘matter’ that informs our empirical observations is a subset of the total energy potential of the quantum field.

  48. scott bakker says:

    So ‘design space’ isn’t heuristic, as you initially intimated it was, but rather actual space, as understood under certain interpretations of quantum mechanics? Now I’m curious which kinds of space you would count as actual or not… How about, for instance, ‘hope space’, or ‘prediction space’, or ‘intention space’, or ‘phenomenal space’? In other words, what are the criteria you use to demarcate literal from metaphoric uses of ‘space’?

  49. haig says:

    Scott,

    I make a distinction between the possibility space and the design space, where the former is the reversible, time-symmetric set of all possible states contained in the quantum field, and the latter is the time-asymmetric (entropic) evolution of our historical world-line. What makes a heuristic distinct from a classical algorithm is the trade-off of completeness/accuracy/precision for speed (i.e. time constraints), and I do consider the design space heuristic.

    The abstract concept of a space can be abused, but I follow Turchin’s meta-systems theory, which constrains the concept to spaces that supervene upon each other, again through weak emergence. Possibility space is supervened upon by design space, and each subsequent space follows suit, like a Matryoshka doll.

  50. scott bakker says:

    haig,

    So then, in Dennettian terms, possibility space provides the ‘real’ insofar as ‘design’ engages ‘real patterns’? If so, fascinating stuff! Thank you for bearing with my questions. Who should I be reading for a more complete picture of this version of teleonomics?

  51. haig says:

    Scott,

    I think so, though I’ve only skimmed Dennett’s paper discussing real patterns.

    You can read Valentin Turchin’s old book ‘The Phenomenon of Science’ (from the 70s and still obscure!). Also Stuart Kauffman’s books are great, though his most recent work is available only in his papers.

  52. Vicente says:

    Haig,

    It is not a matter of misspeaking. Maybe there are no “analytical solutions”, no exact solutions, ever.

    For example, the pendulum solution is analytic (a sine function), but not exact: the sine function is just a series expansion, and you can take N or 1000*N terms. Besides, you have to compute the function, introducing additional errors. And finally you have to measure the initial and boundary conditions, more errors.

    Maybe you’re trying to say that the brain is a chaotic system? If that is the case, the physics of chaos is quite mature, and no non-entailing laws are considered.

    I am almost sure that the brain, under certain circumstances, is chaotic. Minimal changes in the current state or the input will make the system diverge significantly. Probably when we are in our “comfort zone” strong attractors operate, with low consciousness (or better, awareness), and when we get out of it, the system gets very far from equilibrium (high awareness) and extremely chaotic. This is also necessary for fast reactions: as control theory shows, stable systems are usually slow.

    You could check this by introspection: see how thoughts emerge following a connected chain in a relaxed state, and how they vary very fast when stressed.

    In any case, I suggest applying chaos theory and forgetting about non-entailing laws. After all, when we talk about non-entailing laws and “emergent properties” we are not so far from appealing to skyhooks; let’s look for the crane instead.

    It would be interesting to see whether, by applying statistics and DSP to MEG data, these brain processes could be identified. E.g., when two brain processes are opposed (“I eat the piece of cake now…” versus “I am not eating the piece of cake because I am already too fat”), short-term and long-term reward mechanisms opposing each other, we could try to identify two nonlinear attractors in the phase space and see if the system falls into one or the other. But the point is that all the physiological (nonlinear chemo-physical dynamics) laws behind them are entailing.

    Would you say that in meteorology there are non-entailing laws?
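    The chaotic regime described above is easy to exhibit with the standard logistic map (a textbook toy, offered here only as an illustration): every step is fully entailed by the previous one, yet nearby initial conditions diverge rapidly, so long-range prediction demands unattainable precision.

```python
# Logistic map x -> r*x*(1-x): deterministic (entailed) at every step,
# but in the chaotic regime (r = 4) two nearby starting points separate
# exponentially, roughly doubling their gap per iteration.
def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.2)
b = logistic(0.2 + 1e-10)  # perturb the 10th decimal place
gap = abs(a - b)           # after 50 steps the trajectories have decorrelated
```

    This is Vicente's point in miniature: the law is perfectly entailing, and only our finite precision makes long-range prediction fail, as in meteorology.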

  53. haig says:

    Vicente,

    Chaotic dynamics isn’t what I was talking about, and I should have avoided bringing up nonlinearity altogether; even though it’s important, it just confused things. These ideas are admittedly not completely fleshed out, but I’ll try to get my point across one last time in a very simple statement: entailing laws operate within a closed space, non-entailing laws expand that space. Sorry if I’m being somewhat vague; it is a work in progress.

  54. scott bakker says:

    Well, you caught my attention, haig, and I’ve been running attempts to redeem intentionality through the skeptical gauntlet for some time now.

  55. haig says:

    Scott,

    Glad to be of some help. I’ll leave you with one last thought: intentionality is not, as Dennett claims, the illusory stance that rides on top of the design and physical stances. The triad collapses down into one comprehensive intentional stance :).

    A relevant Buddhist koan:
    Two monks were watching a flag waving in the wind. One said to the other, “The flag is moving.”
    The other said, “The wind is moving.”
    Huineng overheard this. He said, “Not the wind, not the flag. Your mind is moving.”

    Cheers.

  56. scott bakker says:

    haig,

    I actually have a science fiction short based on that very premise! Dennett would disagree with your diagnosis of his position, insisting that each of the stances he references picks out ‘real patterns.’ The view I can’t find my way out of is that it’s all mechanistic: what *philosophers* (like Dennett) talk about when they talk about ‘intentionality’ is a kind of metacognitive illusion. Our brains use mechanisms to accomplish what they accomplish, and between attributing the mysteries of consciousness to our metacognitive limits versus some remarkable twist of the real, the former route strikes me as not only more parsimonious but also in keeping with the historical pattern of the sciences, which is to show us that we are less special than we assume.

    That said, we’re still at the alchemy phase of answering these questions! And I so *want* meaning to be real…

  57. Michael Baggot says:

    Haig writes (re 53): “Entailing laws operate within a closed space, non-entailing laws expand that space.” Frankly, this all sounds like just more philosophical jargon being heaped upon a classic issue of the AI-Mind debate, à la John McCarthy, in the hope of somehow deriving a real insight into the computational nature of the brain. Specifically, computers are closed deductive systems, while minds combine such closed systems with an inductive/propositional system that allows them to expand their axiomatic deductive base and thus escape the incompleteness limitations inherent in Gödel. The problem with this is that such insights are in no way non-entailed. The propositional machinations that lead to insight may seem non-entailed, but such insights must always merge with the axiomatic logic of the closed deductive system from which they seem to magically emerge. Non-entailed insights are at bottom pure nonsense. In fact, what we have here is another, equally ‘hard’ problem that seems to have been thoroughly ignored by philosophers.

  58. Joseph C Goodson says:

    In case anyone was interested in haig’s recommendation, here is a link to Valentin Turchin’s The Phenomenon of Science in HTML and PDF format:

    http://pcp.vub.ac.be/POSBOOK.html

  59. Mark Pharoah says:

    When Haig says in comment #9 “Non-entailing laws (formal causes) require a different analytical approach that would allow us to capture the organization of systems through time, which is why biology and other complex systems that exhibit such dynamic organizational structures have been harder to understand”, are we not compelled to pursue the underlying principles of systems constructs and behaviours? Is there not a clear hierarchy of systems constructs that appear to emerge and evolve over time? That is, if we understand the principles of emergence and organisation of systems constructs, then we arrive at an understanding of why and how consciousness happens.

    Incidentally, from Peter’s first paragraph of the review, it seems clear to me that Nagel confuses the problem of consciousness with the problem of personal identity, i.e. one might explain why consciousness emerged and evolved; why and how it creates the phenomenon of experience in those physical states that possess it; in what way the human mental condition is distinct from that of other animals, etc., and yet give NO insight as to why any given individual’s perspective is theirs; that is, why an individual is the personal identity that they are in the 13.8-billion-year history of the universe and the 100-trillion-year future of the universe, rather than someone else. Explained another way: one can explain what it is like to be bat-like, but one cannot explain what it is like to be ‘Fred’, my pet bat.
