Freedom – why worry?

Why does the question of determinism versus free will continue to trouble us? There’s nothing too strange, perhaps, about a philosophical problem remaining in play for a while – or even for a few hundred years: but why does this one have such legs and still provoke such strong and contrary feelings on either side?

For me the problem itself is solved – and the right solution, broadly speaking, has been known for centuries: determinism is true, but we also have free choice in a meaningful sense. St Augustine, to go no earlier, understood that free will and predestination are not contradictory, but people still find it confusing that he spoke up for both.

If this view – compatibilism – is right, why hasn’t it triumphed? I’m coming to think that the strongest opposition on the question might not in fact be between the hard-line determinists and the uncompromising libertarians but rather a matter of both ends against the middle. Compatibilists like me are happy to see the problem solved and determinism reconciled with common sense, whereas people from both the extremes insist that that misses something crucial. They believe the ‘loss’ of free will radically undercuts and changes our traditional sense of what we are as human beings. They think determinism, for better or worse, wipes away some sacred mark from our brows. Why do they think that?

Let’s start by quickly solving the old problem. Part one: determinism is true. It looks, with some small reservations about the interpretation of certain esoteric matters, as if the laws of physics completely determine what happens. Actually, even if contemporary physics did not seem to offer the theoretical possibility of full determination, we should be inclined to think that some set of rules did. A completely random or indeterminate world would seem scarcely distinguishable from a nullity; nothing definite could be said about it and no reliable predictions could be made because everything could be otherwise. That kind of scenario, of disastrous universal incoherence, is extreme, and I admit I know of no absolute reason to rule out a more limited, demarcated indeterminacy. Still, the idea of delimited patches of randomness seems odd, inelegant and possibly problematic. God, said Einstein, does not play dice.

Beyond that, there’s a different kind of point. We came into this business in pursuit of truth and knowledge, so it’s fair to say that if there seemed to be patches of uncertainty we should want to do our level best to clarify them out of existence. In this sense it’s legitimate to regard determinism not just as a neutral belief, but as a proper aspiration. Even if we believe in free will, aren’t we going to want a theory that explains how it works, and isn’t that in the end going to give us rules that determine the process? Alright, I’m coming to the conclusion too soon: but in this light I see determinism as a thing that lovers of truth must strive towards (even if in vain), and we can note in passing that that might be some part of the reason why people champion it with zeal.

We’re not done with the defence, anyway. One more thing we can do against indeterminacy is to invoke the deep old principle which holds that nothing comes of nothing, and that nothing therefore happens unless it must; if something particular must happen, then the compulsion is surely encapsulated in some laws of nature.

Further still, even if none of that were reliable, we could fall back on a fatalistic argument. If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday; so your turning that way rather than left was already determined.

Finally, we must always remember that failure to establish determinism is not success in establishing liberty. Determinism looks to be true; we should try to establish its truth if by any means we can: but even if we fail, that failure in itself leaves us not with free will but with an abhorrent void of the unknowable.

Part two: we do actually make free decisions. Determinism is true, but it bites firmly only at a low level of description – the level of particles and forces – and not really above it. To look for decisions or choices at that level is simply a mistake, of the same general kind as looking for bicycles down there. Their absence from the micro level does not mean that cyclists are systematically deluded. Decisions are processes of large neural structures, and I suggest that when we describe them as free we simply mean the result was not constrained externally. If I had a gun to my head or my hands were tied, then turning left was not a free decision. If no-one could tell which way I would go without knowledge of what was going on in the large neural structures that realise my mind, then it was free. There are of course degrees of freedom and plenty of grey areas, but the essential idea is clear enough. Freedom is just the absence of external constraint on a level of description where people and decisions are salient, useful concepts.

For me, and I suppose other compatibilists, that’s a satisfying solution and matches well with what I think I’ve always meant when I talk about freedom. Indeed, it’s hard for me to see what else freedom could mean. What if God did play dice after all? Libertarians don’t want their free decisions to be random; they want them to belong to them personally and reflect consideration of the circumstances. The problem is that it’s challenging for them to explain in that case how the decisions can escape some kind of determination. What unites the libertarians and the determinists is the conviction that it’s that inexplicable, paradoxical factor we are concerned to affirm or deny, and that its presence or absence says something important about human nature. To quietly do without the magic, as compatibilists do, is on their view to shoot the fox and spoil the hunt. What are they both so worried about?

I speculate that one factor here is a persistent background confusion. Determinism, we should remember, is an intellectual achievement, both historically and often personally. We live in a world where nothing much about human beings is certainly determined; only careful reflection reveals that in the end, at the lowest level of detail and at the very last knockings of things, there must be certainty. This must remain a theoretical conclusion, certainly so far as human beings are concerned; our behaviour may be determinate, but it is not determinable; certainly not in practice and very probably not even in theory, given the vast complexity, chaotic organisation and marvellously emergent properties of our brains. Some of those who deny determinism may be moved, not so much by explicit rejection of the true last-ditch thesis, but by the certainty that our minds are not determinable by us or by anyone. This muddying of the waters is perpetuated even now by arguments about how our minds may be strongly influenced by high-level factors: peer pressure, subliminal advertising, what we were given to read just before making a decision. These arguments may be presented in favour of determinism together with the water-tight last-ditch case, but they are quite different, and the high-level determinism they support is not certainly true but rather an eminently deniable hypothesis. In the end our behaviour is determined, but can we be programmed like robots by higher-level influences? Maybe in some cases – generally, probably not.

The second, related factor is a certain convert’s enthusiasm. If determinism is a personal intellectual achievement it may well be that we become personally invested in it. When we come to appreciate its truth for the first time it may seem that we have grasped a new perspective and moved out of the confused herd to join the scientifically enlightened. I certainly felt this on my first acquaintance with the idea; I remember haranguing a friend about the truth of determinism in a way that must, with hindsight, have resembled religious conviction and been very tiresome.

“Yes, yes, OK, I get it,” he would say in a vain attempt to stop the flow.

Now no-one lives pure determinism; we all go on behaving as if agency and freedom were meaningful. The fact that this involves an unresolved tension between your philosophy and the ideas about people you actually live by was not a deterrent to me then, however; in fact it may even have added a glamorous sheen of esoteric heterodoxy to the whole thing. I expect other enthusiasts may feel the same today. The gradual revelation, some years later, that determinism is true but actually not at all as important as you thought is less exciting: it has rather a dying fall to it and may be more difficult to assimilate. Consistency with common sense is perhaps a game for the middle-aged.

“You know, I’ve been sort of nuancing my thinking about determinism lately…”

“Oh, what, Peter? You made me live through the conversion experience with you – now I have to work through your apostasy, too?”

On the libertarian side, it must be admitted that our power of decision really does look somewhat strange, with a reach far exceeding that of mere absence of constraint. There are at least two reasons for this. One is our ability to use intentionality to think about anything whatever, and base our decisions on those thoughts. I can think about things that are remote, non-existent, or even absurd, without any difficulty. Most notably, when I make decisions I am typically thinking about future events: will I turn left or right tomorrow? How can future events influence my behaviour now?

It’s a bit like the time machine case where I take the text of Hamlet back in time and give it to Shakespeare – who never actually produced it but now copies it down and has it performed. Who actually wrote it, in these circumstances? No-one, it just appeared at some point. Our ability to think about the future, and so use future goals as causes of actions now, seems in the same way to bring our decisions into being out of nowhere inside us. There was no prior cause, only later ones, so it really seems as if the process inverts and disrupts the usual order of causality.

We know this is indeed remarkable but it isn’t really magic. On my view it’s simply that our recognition of various entities that extend over time allows a kind of extrapolation. The actual causal processes, down at that lowest level, tick away in the right order, but our pattern-matching capacity provides processes at a higher level which can legitimately be said to address the future without actually being caused by it. Still, the appearance is powerful, and we may be impatient with the kind of materialist who prefers to live in a world with low ceilings, insists on everything being material and denies any independent validity to higher levels of description. Some who think that way even have difficulty accepting that we can think directly about mathematical abstractions – quite a difficult posture for anyone who accepts the physics that draws heavily on them.

The other thing is the apparent, direct reality of our decisions. We just know we exercise free will, because we experience the process immediately. We can feel ourselves deciding. We could be wrong about all sorts of things in the world, but how could I be wrong about what I think? I believe the feeling of something ineffable here comes from the fact that we are not used to dealing with reality. Most of what we know about the world is a matter of conscious or unconscious inference, and when we start thinking scientifically or philosophically it is heavily informed by theory. For many people it starts to look as if theory is the ultimate bedrock of things, rather than the thin layer of explanation we place on top. For such a mindset the direct experience of one’s own real thoughts looks spooky; its particularity, its haecceity, cannot be accounted for by theory and so looks anomalous. There are deep issues here, but really we ought not to be foxed by simple reality.

That’s it, I think, in brief at least. More could be said of course; more will be said. The issues above are like optical illusions: just knowing how they work doesn’t make them go away, and so minds will go on boggling. People will go on furiously debating free will: that much is both determined and determinable.

171 thoughts on “Freedom – why worry?”

  1. Compatibilism doesn’t work because it says that we are programmed to do something and so we have “free will”. lol “Free will” is a self-contradictory concept born of ignorance.

    Here is the self-contradiction:

    1) For a free (freely-willed) action there must be an intention/desire that influences the action.

    2) To the extent that the intention/desire does NOT influence the action, the action cannot be free – because to this extent the action is unintentional/undesired.

    3) To the extent that the intention/desire DOES influence the action, the action cannot be free either – because the intention/desire cannot be freely willed. (If the intention/desire were freely willed, its formation would have to be influenced by an intention/desire, in accordance with point 1, leading to an infinite regress.)

    4) So the supposedly free action is in full extent unfree.

    The concept of “free will” should be abandoned just as the concept of “geocentrism” was. Instead of “free will” we should speak about intentions, desires, motives and strive to give them space for expression and satisfaction as well as regulate them by motivating and demotivating measures for the benefit of individual and society.

  2. If point 1 is false, do you mean that an unintended/undesired action can be free? Any example?

  3. What I’m saying is that a decision is free if not determined by external constraints. That’s all.

    It’s not the case that a free decision must arise from pre-existing desires or intentions. If it does reflect pre-existing desires or intentions it’s not the case that they in turn must have been free.

  4. If a decision doesn’t arise from pre-existing desires or intentions, it seems to be a decision we don’t care about or a decision we make unwittingly. Would you consider such a decision freely willed?

  5. Well, I refer you to what I said in the post and my last comment: a decision is free if not determined by external constraints.

  6. It might have been if it had been a decision and if it hadn’t been determined by an external banana peel. Let’s take a break for reflection at that point.

  7. I don’t think that absence of external factors is sufficient for a decision to be free. If I got a hemorrhage or cancer, it wouldn’t be a free decision even if it was caused only by internal factors (something went wrong in my body). For a decision to be free, one should have some control over it and how can you have control over a decision if it doesn’t arise from your intention or desire?

  8. It’s surprising that you say several times that determinism is a truth about the micro world and indeterminism a macro-appearance, because the physicists will tell you the exact contrary: it’s only when a huge quantity of matter is involved that an approximate determinism emerges (under non-chaotic conditions at least). Basically, apparent determinism is due to the law of large numbers, when the individual behaviour of particles is smoothed out in their average collective behaviour. Some philosophers of physics have argued that all macro-unpredictability can be traced down to micro-indeterminism (under chaotic conditions, precisely).

    It’s also weird, having said that, that you associate the acceptance of determinism with embracing a scientific world view. This is definitely a metaphysical position, and only the metaphysical arguments (fatalism…) are more or less convincing, not the scientific ones. There are deterministic interpretations of physics to be sure but they’re not motivated by the content of physics: to the contrary, they’re obliged to contortions (such as non-locality or alternate worlds). Arguably, these interpretations are motivated by a metaphysical agenda.

    Now of course indeterminism does not save free will as you note, but it always bothered me that indeterminism is always interpreted as randomness: why so? You say that there must be a law for anything that happens. Having a law requires subsuming reality under concepts, separating distinct objects that we recognise as belonging to the same type. Indeterminism could just as well be interpreted as a partial failure, or a limit, of this kind of project: reality might not be entirely reducible to universal concepts + objective “God’s eye view” facts. Deterministic knowledge is desirable; that doesn’t mean it works.

    If this is so, there is nothing “random” about indeterminism; randomness is just the manifestation, a name we put on phenomena when they are not predictable in principle. Nothing happens without a reason, perhaps, but all reasons are not necessarily amenable to objective, systematic knowledge, and indeterminism merely says they’re not. That’s a good place to start to contemplate the possibility of libertarian free will, in my opinion.
    After all, other people’s reasons to act are not obviously amenable to objective and systematic knowledge…

  9. The thing about these versions of compatibilism, Peter, is that I just don’t see how they explain, let alone resolve, anything. As a result, the worry has to be that you haven’t resolved anything, merely framed it in a way that you, personally, find appealing. Certainly we want something more powerful than that.

    Invoking ‘levels of description’ simply leaves us scratching our heads as to why there should be any such thing. It sounds intuitive, but it actually adds to the sum of what needs to be explained, rather than explaining how we can be both ‘determined’ and ‘free.’ You understand this, so you provide a (wonderfully written) sketch:

    “We know this is indeed remarkable but it isn’t really magic. On my view it’s simply that our recognition of various entities that extend over time allows a kind of extrapolation. The actual causal processes, down at that lowest level, tick away in the right order, but our pattern-matching capacity provides processes at a higher level which can legitimately be said to address the future without actually being caused by it.”

    But this is simply the *beginning* of the account you need, which is an account of why we have these levels, how these levels interact, and how they should constrain *our* (not just your) understanding of ‘free will.’

    This is precisely the account that BBT provides a la heuristic neglect. By its lights, ‘choice’ and the attendant entourage of intentional concepts belong to a system of simple heuristics adapted to solve myriad practical, ancestral problems. The fact of ‘choice’ is that there’s no such thing outside these practical problem-solving contexts. It just makes no sense to *theoretically establish* the fact of freedom (as the absence of extrinsic constraint, behavioural versatility, or what have you) because there simply are no such theoretical facts. And how could there be, given that the idiom, by your own admission, provides a way of solving problems absent knowledge of what’s actually going on?

    ‘Levels of description,’ once understood, only allow us to understand how and why ‘freedom’ is capable of doing the work it does in various parochial contexts–no more. The fact that it does this work doesn’t make it ‘real,’ merely useful.

  10. @Quentin:

    “Now of course indeterminism does not save free will as you note, but it always bothered me that indeterminism is always interpreted as randomness: why so?”

    Yeah William James made note of this as well, saying chance merely meant that something in addition to the prior causes could determine the direction of the next event.

    John Dupre notes the same, and even makes the bold claim that he has thus solved the problem of free will:

    http://cogprints.org/341/1/FREEDOM.htm

    @Peter: Sorry Peter, I have to admit like Bakker I don’t get the “levels of description” argument. I’m not sure what makes you confident this is the correct analysis, as it seems to have a few points up for contention – the truth of determinism, and whether levels of description are enough to rescue any notion of volition.

    And as Quentin notes, from the point of view of what is scientifically observed, indeterminism (and stranger things) seem to exist at the lowest level of description. The physicist Marko Vojinovic actually has an article Farewell to Determinism at Scientia Salon:

    https://scientiasalon.wordpress.com/2014/09/11/farewell-to-determinism/

    I think this question of causality is more fruitful for discussion than free will, which I think is too high level given the ongoing search to explain cause-effect. This is probably my favorite “metaphysical” subject as a layperson reading philosophy, so feel free to skip the rest of this post as it’s just me geeking out. 🙂

    AFAIK the situation in science (and philosophy, according to Massimo) is we have no model of causality and this is before we even get to the challenge of explaining the quantum realm. Yet even if we assume there are “laws of nature” the idea they enforce the behavior of particles seems, to me at least, to invoke a sort of dualist interactionism where the laws hold in a Platonic realm that preserves the chains of causality. I know we’ve also discussed how this seems to get us to an infinite regression of meta-laws.

    As Stephen Talbott notes in his essay “Do the Laws of Physics Make Things Happen?” the Laws would have to be inscribed at the level of quantum foam, or whatever it is that’s supposed to be the material primitive.

    There’s also the problem of how odd some of these laws might be – my understanding is there’s no good law to explain why 25% of photons are reflected off glass. In general this leads us to ask why the relationship A -> B holds even in classical observation, why not A -> C sometimes?

    Others, as Nancy Cartwright likes to note, are discovered in conditions divorced from the real world. In fact she would say “laws” are really regularities, something I recall you noted in your book:

    “…What happens next, we assume, is determined by calculations rooted in these laws.

    In fact talk of the laws of nature, or the laws of physics, is metaphorical. When a string stretches, no calculation takes place inside it to determine how far. Stuff just exists, stuff just happens; the laws are our description of the observable regularities of stuff…”

  11. Peter, you wrote:
    “If I had a gun to my head or my hands were tied, then turning left was not a free decision.”

    I think this gets to the crux of the matter. You want to be able to speak of “free decisions” so that some account of moral responsibility is still intelligible. If Guy 1 shoots someone because he wants his money, he’s evil/bad. But if Guy 2 shoots someone because a third party had a gun pointed at Guy 2’s head, he can be exculpated.

    Simultaneously, you also understand the picture that science paints and you understand it has terrifying predictive power.

    Wanting both things leads you to compatibilism.

    However, let’s look at the case of the guy who shoots his friend for money again. Why can his act be said to be “free”? He could no more control the thoughts in his head than the other guy. His history, his genetics, his environment, have all conspired to make him do the thing he did. Ions diffusing across membranes, transmitters binding receptors, potentials flowing through innumerable connections… still totally outside of his “control”.

    That said, I think “responsibility” and “moral obligation” are still useful concepts. Guy 1 is certainly more dangerous, and punitive actions against him should be undertaken.

    An interesting “in between” case exists in people like Charles Whitman – where he “freely” chose to shoot people on a college campus, but actually had an inkling that something was wrong with his brain. (An autopsy revealed a tumor which may have contributed to his violent actions). Can we say he “freely” shot those people? Where do we draw the line between an action that is compelled and an action that is free?

    Either way, I don’t feel too strongly about this. As long as we can all more or less agree that non-scientific acausal theories of hominid decision making are off the table, I don’t think it matters too much to the issue of consciousness whether we prefer a purely deterministic or compatibilist account.

  12. Quentin,

    it’s only when a huge quantity of matter is involved that an approximate determinism emerges

    You surprise me, too – can you point me towards a statement of this view?

    This is definitely a metaphysical position…

    Whoa, weren’t the scientists putting me right about it just in the last paragraph?

    You say that there must be a law for anything that happens

    There must be a law for anything that must happen. If a specific thing had to happen, we must be able to specify what had to happen, and if we can specify what had to happen we’ve implicitly got a law of some kind, or so I suggest.

    I’m sure there’s an extensive discussion to be had about the various technical definitions of randomness and various technical takes on indeterminacy, but I’m using both fecklessly as vague synonyms. If you can build any kind of free will out of any kind of either I will salute your ingenuity!

  13. Jorge,

    Yes: it gets alarmingly blurry when we try to draw a sharp line around what’s me and what’s the beer talking (or whatever). There was some interesting stuff about a year ago (Nahmias?) about how people vary their willingness to attribute responsibility depending on whether a mental condition is described as ‘medical’ or ‘psychological’.
    Still, I don’t think difficulties in applying a principle invalidate the principle.

  14. @Scott #11

    ‘Levels of description,’ once understood, only allow us to understand how and why ‘freedom’ is capable of doing the work it does in various parochial contexts–no more. The fact that it does this work doesn’t make it ‘real,’ merely useful.

    I’m starting to think that you might hold a rather positivist view of scientific knowledge, but surely I must be wrong on this!

    Consider my re-phrasing, and see if you disagree:

    ‘Levels of description,’ once understood, only allow us to understand how and why an ‘electron’ is capable of doing the work it does in many, but still parochial contexts – no more. The fact that it does this work doesn’t make it ‘real’, it’s merely a useful modelling tool.

    I.e. what is really-real remains solidly out of reach.

    If I’m right and you do disagree with my re-phrasing, and by contrast you think that what science finds and defines to exist “down at the lowest levels” is really-real (it isn’t, in my opinion), then I can see why from your point of view free will, the self, intentionality, etcetera (…), are all things that science will ultimately uncover as not-real (illusionary as in “they don’t really exist”).

    In other words, (I think) I agree with you when we talk about why different levels of description exist: they exist because cognition is constrained, it can only consider so many variables, and thus all it does is find elements which seem to be enough to provide an explanation and use them (cognition is possible because it depends on neglecting all the rest, including what is neglected without knowing it). Therefore, to explain phenomenon X you may need to pick electrons and/or quantum stuff, to explain Y you might be better off talking about someone’s desires, for Z you’ll invoke Natural selection and so forth: heuristic tools for heuristic answers. Vastly different levels of reliability do apply and can be verified, but ontologically any level of description has the same weight: none at all!

    Science relies on cognition, mostly enhanced by offloading computational tasks nowadays, but nevertheless all output of science is constrained by the same limitations. An electron is not really-real, no more than my sense of self; it is merely (very) useful to explain and predict plenty of observable phenomena, but not all of them… Thus, it seems that you might be disagreeing with yourself, and you actually think that what physics declares to be really-real (fundamental) is indeed really-real (or more real than entities useful at higher levels of description, which seems odd, because it implies shades of realism).
    I would be very surprised if you are actually making this mistake, but it would explain why I can’t understand some of your positions, so it’s worth checking!

    @Sci, I think the above might help explain why the appeal to ‘levels of description’ makes perfect sense to me… One day I will need to write down why I think it similarly makes ‘causation’ a mere consequence of neglect and take that weight off my chest too (i.e. convince absolutely no-one).

  15. @Peter

    Let me be more specific.

    Standard quantum mechanics makes probabilistic predictions. It is valid at very small scales.

    Newtonian mechanics describes deterministic evolutions. It is valid at a macroscopic scale.

    Now you have to explain how probabilistic predictions can look fully deterministic at a macroscopic scale.

    The standard explanation involves decoherence, which happens when a quantum system interacts with a “big” environment (one with many degrees of freedom). Decoherence implies that quantum fluctuations are uncorrelated, which in turn implies that, in non-chaotic contexts, all micro-fluctuations will cancel each other out (because they’re uncorrelated) and have no macroscopic effects, thus restoring an approximately deterministic newtonian law of evolution.

    This is not a “view” but part of the standard explanation of how “classicality” is restored (not a complete explanation, but let’s leave that aside or we’d have to detail the various interpretations of quantum mechanics); you’ll find references anywhere if you search “decoherence”, for example.

    This is what I meant by determinism emerging. Now of course probabilistic laws do not necessarily mean indeterminism, which is why, in the next paragraph, I maintained that determinism is a metaphysical issue. There are interpretations which attempt to explain probabilistic predictions with “hidden variables”, for example: a very ad-hoc move to me (and full of complications) but still a possibility.

    I hope that’s clearer.

    Regarding randomness, indeterminism and free will, here is a piece I wrote some time ago: http://physicsandthemind.blogspot.fr/2013/10/in-defense-of-libertarian-free-will.html

  16. @Sergio: I look forward to your essay on causation, seems like an interesting way to think about the problem.

    I agree with your suspicion that the Real exceeds our ability to grasp it. It seems to me scientific investigation must hit certain brute facts which we’d have to assume are acausal, whether that’s the Big Bang, probability clouds for particles, or the vibration of 11th dimensional strings.

  17. @Sci 12:

    “Yeah William James made note of this as well, saying chance merely meant that something in addition to the prior causes could determine the direction of the next event.”

    But as far as I know, quantum mechanics denies that there is “something in addition” that determines the event. I am referring to Bell’s and Aspect’s experiments that ruled out local hidden variables as an explanation of quantum indeterminacy. Those “hidden variables” would be the “additional causes” that determine quantum events. Bohm tried to build a non-local hidden variables version of quantum mechanics but it is not much accepted because it seems inconsistent with special relativity due to its superluminal signals.

    “AFAIK the situation in science (and philosophy, according to Massimo) is we have no model of causality and this is before we even get to the challenge of explaining the quantum realm. Yet even if we assume there are “laws of nature” the idea they enforce the behavior of particles seems, to me at least, to invoke a sort of dualist interactionism where the laws hold in a Platonic realm that preserves the chains of causality.”

    In practice, causality seems to be a special case of logical deductive relation, where consequences are deduced from initial conditions and time-invariant conditions we call “laws of nature”. In other words, determinism = causality. I don’t find the existence of time-invariant laws particularly surprising. The whole of reality seems to be a structure, and structures can have various regularities and irregularities. By “laws of nature” we usually mean the regularities.

    The absence of causality, as in QM, simply seems to be the absence of a logical deductive relation between an event and a cause. The event cannot be deduced from anything else. We might still be able to calculate the probability of the event based on frequencies of possibilities, like if we have a box containing 20 white balls and 80 red balls, there is a 20% probability of randomly picking a white ball.

  18. @Tomas: As to what drives acausal phenomena, one metaphysical argument I’ve come across (I believe by Gregg Rosenberg, but this is second hand as I’m still waiting for his book) is that causation is something intrinsic to relata while science is the study of relationships between relata. Thus whatever provides the drive for causality isn’t amenable to scientific inquiry. I do think this might be the case, though it may simply be we lack the brains and tools necessary to pierce what to us will end up seeming like brute facts of nature.

    Not sure if that puts us on the same page? (I can see the claim of “intrinsic property” possibly being, for practical purposes, synonymous with “it just happens”.)

    On the subject of time-symmetric laws + reality as structure, are you proposing the present is illusory? I have to admit this notion has never made sense to me. I do agree laws of nature are just observed regularities, but there seems (IMO anyway) to be some circular reasoning among those (not you, if I’m reading you right) who take observed regularities and then claim they can be elevated to prescriptive laws.

  19. @Sci 20,

    well, while relata in themselves seem unamenable to description (because descriptions are relational, unless they are self-referential and therefore meaningless), causality is obviously amenable to description and scientific predictions, so I think causality is definitely in relations, but then again without relata there would be no relations, so in this sense relata are necessary for causality too.

    I guess our sense of “the present” is a quale of consciousness, and qualia are those indescribable relata, which stand in relations to other relata.

  20. Quentin,

    I’m not offering a thesis about quantum mechanics, or talking about it at all. I did say determinism operated at a micro level, but not that the further down you go the clearer it gets. On the contrary; at both ends we seem to end up with statistics: perhaps reality is in the middle.

    If we really need a huge quantity of matter to get even an approximate determinism, I think you need to warn the engineers.

  21. @Tomas 21: Apologies for my lack of clarity. I agree we can explain and predict using cause and effect, but there remains the question of what links A & B in A -> B? And not just links the two but ensures that there is consistency?

    I believe this was Hume’s critique of causality, though I’ve only read excerpts and interpretations/analysis of his works so I could easily be incorrect in this attribution.

  22. @Sci 23:

    I think the causal link between A and B is a logical deductive relation, in the context of arrow of time and laws of physics. In other words, B as a consequence is something that is entailed/contained in A, time and laws of physics.

    All relations are fundamentally relations between part and whole, or relations of difference and sameness. These relations give rise to the structure of set-theoretic universe, which is the basis for logic and mathematics.

  23. Sergio (16): “If I’m right and you do disagree with my re-phrasing, and by contrast you think that what science finds and defines to exist “down at the lowest levels” is really-real (it isn’t, in my opinion), then I can see why from your point of view free will, the self, intentionality, etcetera (…), are all things that science will ultimately uncover as not-real (illusionary as in “they don’t really exist”).”

    I just can’t see any way of fixing interpretation in such matters, Sergio, so it strikes me as tilting at windmills. When I say something is an illusion, I mean it in roughly the same sense as “visual illusion,” which is to say, a short-circuit involving heuristic short-cuts. The intuitions (metacognitive deliverances) informing debates regarding things like self-hood, personal identity, freedom, and so on are the subjects of such controversy because they depend on specific, practical ecologies to function effectively, and suddenly we’re asking the intuitions underwriting those applications to solve the theoretical question of ‘What Freedom Is.’ The controversy turns on the misapplication of these tools, so to get past that controversy, we have to stop misapplying them. Redefining freedom as this or that in a way to render it ‘compatible’ with our high-dimensional understanding of the natural world is to confuse it with something that possesses a high dimensional fact of the matter, when it is a dynamic component of a system that possesses a high dimensional fact of the matter. That’s where the answers lie. That’s where science has a chance to illuminate. The rest just feeds perpetual bickering–as we should expect.

  24. @Peter
    Huge in terms of number of particles. That’s still small to us.

    Now if by determinism you mean deterministic laws, I don’t know where we get it really: chemistry is generally probabilistic, and there are no strict laws in biology. Perhaps you can have an “in principle determinism” (something like: if we could model precisely an organism down to a certain level, we could predict its evolution) but frankly, since living organisms are complex and full of chaotic regimes, I think it’s wishful thinking.

    I agree all this is not necessarily relevant for free will, and it’s contentious to argue for free will on the basis of the stochasticity of biological processes. But my point is that you cannot rely on sciences to get determinism (at least not in the case of living organisms): it has to be a metaphysical move.

  25. Scott @25 –

    I think I’m with you on all of that except:

    “Redefining freedom as this or that in a way to render it ‘compatible’ with our high-dimensional understanding of the natural world is to confuse it with something that possesses a high dimensional fact of the matter, when it is a dynamic component of a system that possesses a high dimensional fact of the matter.”

    As I’ve indicated in previous comments, I think of the distinction I take you to be making in terms of vocabularies/concepts applicable for different purposes (what I take you to mean by “problem ecologies”): the scientific vocabulary for expressing our “high-dimensional understanding of the natural world” and the intentional idiom for psychological discourse. But I don’t see where a “dynamic component of a system” fits in that view, so could you elaborate or rephrase? Tnx.

  26. Scott #25,
    my first reaction was “Damn! I still don’t understand what he says”, but a good night’s sleep helped, I think.

    Talking about problem ecologies isn’t a bad way of approaching the discussion, and may be useful to clarify where we stand. The other aspect is what ‘tools’ can be and/or are being used to explore a given question.
    I will start by discussing the questions: one way to look at neuro-, brain- and cognitive-sciences is that they collectively seek to explain how we get to interact with the outside world. From a detached, third-party position, answering this question does not need to include/explain/address any problematic concept: intentionality, representations, meaning, phenomenal experience, self, self-awareness, free-will and maybe more (I’ll call this the Problematic Bunch – PB) can be ignored. Mechanistic explanations may be built from the ground up and result in a grand theory of behaviour; let’s call it Behaviourism 2.0 (the revenge). Peeking inside this (hypothetical) B2.0 we might or might not find some elements that resemble PB members – all bets are off, it really is anybody’s guess.
    Why is that? Because the space of possible B2.0 theories, even when restricted to only those that show high accuracy and reliability, is virtually infinite: there are so many complicated mechanisms between fundamental physics and human behaviour that there must be innumerable interpretative approaches that will all work well enough, assuming we can put them together.
    In this description, I’ve defined a monstrously “high-dimensional” problem ecology; in fact it incorporates all the naturally occurring problem ecologies that have shaped our evolutionary history as well as all the solutions found by natural selection. For me, the sheer size of the problem-space is huge, to a point that defies imagination: we can’t even grasp the size of the problem. This is important, because it means that having something that helps navigate the space of possible solutions would help the process, big time.

    Let’s leave this aside for a short moment, and consider a different angle: among the set of valid (precise and accurate enough) B2.0 theories, are they all the same? Not to my eye, no. To me, the ones that also help clarify what some element of PB is and does would look vastly superior to the ones that don’t. It’s a straightforward assessment: all other things being equal, they would have more explanatory power. I hope you are with me so far, because I’ll now add my personal bet, and of course don’t expect agreement on it: my own personal bet is that it would be almost impossible to construct a B2.0 theory without including some elements which resemble members of PB. I’ve discussed this with Charles in the comments of “Sergio resurgent”, IIRC. Briefly, the ambition of excluding the whole of PB is why Behaviourism 1.0 failed, if you ask me.

    My personal bet however can be used to design a strategy to crack B2.0: the idea is that if we can produce biologically plausible hypotheses about the function and role of PB elements (or at least hypotheses of why they exist), suddenly the search space of B2.0 shrinks significantly, making the task much, much easier (but still unimaginably difficult!). It all hinges on my personal bet, but if I’m right, the PB becomes a resource instead of an annoyance, which should be good news.

    I can now tackle your points:

    The intuitions (metacognitive deliverances) informing debates regarding things like self-hood, personal identity, freedom, and so on are the subjects of such controversy because they depend on specific, practical ecologies to function effectively, and suddenly we’re asking the intuitions underwriting those applications to solve the theoretical question of ‘What Freedom Is.’

    I’ll translate the above into my own language, to check that I did understand. I think you are saying that using intuition to figure out what the elements of PB are is NOT going to work. That’s because metacognition is “designed” to solve entirely different problems. I agree with enthusiasm. However, from this, I (still) see no reason to declare that PB elements are illusionary and should thus be ignored. I’m pretty sure that some element of PB is itself a deliverance of misapplied metacognition, but I can’t a priori know if that’s the case and which elements can be dismissed on this basis. Thus, instead of brushing PB aside, I dive into it, and specifically look for biologically plausible explanations of why PB elements exist. This requires looking at them in terms of how they aid fitness/survival/reproduction. Thus we close the cycle: I am asking what “ecological problem” the various elements in PB help to solve (i.e. why they emerged via natural selection).
    In all this, I still see very little disagreement between the two of us, but I still can’t explain why you declare yourself an eliminativist. At this point, we (people interested in these questions, but without access to a neuroscience lab) should focus on PB, not conclude that it’s irrelevant and/or illusionary.
    However, I do see one thing that might explain the apparent discrepancy: you seek to eliminate the “metacognitive deliverances informing debates regarding [PB] things”; this includes 99.9% of armchair philosophising on PB elements using metacognition/introspection as the main searchlight. This approach is certainly not the most promising and I’m happy to “eliminate” most of the conclusions (more “endless debates” than conclusions, of course) that it produces.
    Thus, maybe, I was just misunderstanding the object of your elimination… (more wishful thinking?)

  27. PS (@Scott)
    It turns out that my whole comment above is an answer to the “why do you bother?” question you’ve posed me a while back. I bother because:
    1. The scale of the task is huge.
    2. If we’re lucky, the PB could serve as a sort of searchlight, when used sensibly.

  28. Sergio –

    Note that in the vocabulary approach to PB (ala Bjorn Ramberg’s essay in “Rorty & His Critics”), each member is at base merely a word (concept) useful for certain purposes in certain contexts. So, I wouldn’t say they are illusions, only that they don’t refer (in the formal sense). Imagine trying to carry on normal conversations without “I think that …”, “I believe that …”, “What I mean is …”. Then try to imagine a neurosurgeon announcing “I see the patient’s problem; he has a wrong belief that must be excised.” (A standard example of the distinction is a fictional character – say, Sherlock Holmes – whose name is much “talked about” but doesn’t refer.)

    I’m all for the project you call “B2.0”, but I think it’s misleading to label it “behaviorism”. My understanding is that B1.0’s objective was to explain what goes on inside based exclusively on what’s observable from the outside, ie, behavior. The project you describe is the inverse – explain behavior in terms of what goes on inside. Not necessarily appropriate to describe every project involving behavior as “behaviorism”.

  29. Charles #30
    I’m not sure I follow your first paragraph. I’ll use a very coarse example, to keep things simple: when someone says “I went to the pub”, she is saying something about someone. Thus it “refers to” (intentionality) and relies on the possibility of identifying her as a person. Thus, for me, both concepts, intentionality itself and the self/non-self distinction, do indeed refer to something. The difficulty remains: pinning down exactly what they refer to is surprisingly hard; we are all tempted to use introspection to do so, but this approach reliably misleads. The bet is that better ways to pinpoint these concepts are possible on evolutionary grounds; it is also that doing so will help sort and organise the data coming from neuroscience. A long shot by all definitions.

    You would have noticed that I have a bit of self-deprecating tendencies (deliberately: I don’t want to end up thinking that I know the truth), thus, because I dislike Behaviourism 1.0, I’ve allowed myself to describe my hope as B2.0, for the purpose of this discussion, and this discussion alone. On reflection, I’m totally with you when you say that it’s not the best way to describe the approach! I assume you’ve spotted ETC, by now… 😉

  30. The ability to make choices gave us freedom because we acquired the ability to discriminate or eat the right fruits, choose the right leaders and assign the jobs to the right people. Two blog entries back Peter discussed the notion of Self-Denial but no politician would ever deliver a stump speech explaining that the self doesn’t really exist and all he is doing is an exercise. So making choices is really what gave us the ability to acquire health, safety and comfort, yet freedom somehow got flipped into the greatest parlor game in the history of human thought. One level up from our selfhood is our social brain and the scene below is about social interaction. To set the scene, the undertaker’s daughter was raped by a couple of wealthy men’s sons and is seeking ‘justice’ in the form of murder, which is more obviously revenge against them and also the corrupt social justice system which let them off. Vito, who is wealthy himself, probably realizes this is a “boys will be boys” situation (his own son is a bit of a rapist) and so tells his men to have them beaten up, probably more as a favor between the wives who are friendly as opposed to Bonasera who is a very moral man hugely angered by the violation of his daughter. Ironically he uses the “favor” later on to make his own son’s corpse presentable to the mother for viewing. Bonasera of course is a derivation of ‘good night’ and after he leaves the room, Vito addresses the characters in the background who are in the dark (Scott should like this).
    https://www.youtube.com/watch?v=u8hVTXDhhVM

    My point is that the brain is a social organ and science is an exaptation of these same social mechanisms. Like trying to organize a Las Vegas convention of introverts or Libertarians who identify themselves as members of that social group.

  31. By “refer (in the formal sense)” I meant that there is an object to which a word points. For example, in saying “Sergio went to the pub” I would be assuming (on the Internet, one can never be sure!) that somewhere out there is an object named “Sergio Graziosi”, in which case that name refers. On the other hand, there never was a detective living at 221B Baker St in London named Sherlock Holmes notwithstanding that there has been tons of intentional discourse “about” that character, so that name does not refer. My position is that members of your PB are of the latter sort: talk about them can be quite useful in certain contexts (so they have “meaning” in the “meaning is use” sense), but they nevertheless do not refer.

    No, I’m not familiar with “ETC” – assuming you’re not referring to my favorite C&W singer, the object named “Earl Thomas Conley”.

  32. Charles,
    “refer (in the formal sense)” is not that formal then… The fact that there is someone bearing my name is:
    (1) Arbitrary (people may call me lots of names, whoops!)
    (2) Disputable: if there is no self…
    (3) Impossibly complicated to describe in terms of fundamental physics, and arbitrary just the same (is the hair I’ve just lost still me? If not, was it me a moment ago? What about the air in my lungs? Etcetera).
    (4) Ambiguous: there are at least another 4 “Sergio Graziosi” alive in the present world (I think!).

    Pinpointing what Sherlock Holmes is is much, much more difficult; all 4 points above still apply, but they apply much more. That’s why I wasn’t following: I see very little theoretical difference between the two cases. Practical difference? As much as you do, but we’re discussing theory. I start with the weakest ontology that I can manage: I can say “I exist” and believe it is true, but in an extremely weak and circular sense. From there, saying intentionality exists (in the same weak sense) feels like fair game to me.
    The problem with imaginary stuff is that it exists as imaginary, i.e. in our minds, and until we can agree something about what minds are or do, we’d better admit we can’t pinpoint Sherlock Holmes.

    ETC stands for Evolutionary Theory of Consciousness. I’ve taken courage and pre-published my views (here), hoping to get some feedback (direct and uncompromising, if possible) and thus decide if I should just drop it or keep pushing. It’s shameless self-promotion in here, so I feel uneasy mentioning it, but it is supposed to be one step towards something like (the thing we won’t call) B2.0. Hopefully a little less vague and less wrong!

  33. Charles (27): “But I don’t see where a “dynamic component of a system” fits in that view, so could you elaborate or rephrase?”

    Rather than invoking something so cryptic as an ‘intentional stance’ we simply understand what’s going on a la zombie, in terms of certain socio-cognitive systems tracking certain data in certain ways. If you want to know what intentional idioms are, you have to figure out the role they play in these systems. The application of these systems is all but impenetrable from a metacognitive standpoint (and how could it not be?).

    It’s the only way I can see. The alternative is to think peering through a straw *in the right way* will eventually provide the whole picture. 2500 years is enough, I think.

  34. Sergio (28) That was a very cool way of expressing the stakes, but what you’re saying pretty clearly boils down, I think, to the ‘throwing the baby out with the bathwater’ argument, the abductive argument against eliminativism. The thing is, Sergio, my eliminativism actually possesses a theory of meaning, one possessing a great deal of explanatory power.

    What I mean by elimination actually varies from case to case, because heuristic neglect generates different kinds of illusions depending on the ‘crash space,’ the different tools relative to different ecologies of application. In the most general sense I mean there’s an extremely limited role that intentional posits can play in a mature cognitive science.

  35. Sergio (34): “Pinpointing what Sherlock Holmes is is much, much more difficult; all 4 points above still apply, but they apply much more. That’s why I wasn’t following: I see very little theoretical difference between the two cases. Practical difference? As much as you do, but we’re discussing theory. I start with the weakest ontology that I can manage: I can say “I exist” and believe it is true, but in an extremely weak and circular sense. From there, saying intentionality exists (in the same weak sense) feels like fair game to me.”

    And that ‘feels’ is pretty much the fuzzy fulcrum from which the whole of the intentional edifice tips. Some are even analogizing it to a smell now! We can infer intentionality because we can trust the reliability of our introspective nose to only smell things that are there.

    But tell me, Sergio, outside philosophers, who ever thinks about ‘Intentionality’? I know I used intentional idioms with great effectiveness for a sizable chunk of my life without catching the faintest whiff of ‘intentionality.’ When I finally did ponder ‘aboutness,’ it seemed to me that it had always been there, that it possessed some kind of existence independent of my judgment… and I spent a good chunk of my life thinking intentionality was ‘self-evident.’ My nose doesn’t lie! But it does, all the time. We know that for a bloody fact. And of course, I’ve spent the last big chunk of my life thinking it was simply an illusion, a product of my all too human cognitive shortcomings.

    You’re speaking theory, as you say, and this is precisely your problem. You have no theoretical sense of smell (and why would you?), but you lack any ‘meta-olfactory’ capacity to inform you as much, and so you confabulate in predictable ways.

  36. Thanks Scott (36)

    What I mean by elimination actually varies from case to case […]

    Which is precisely the point I was hoping you’d concede: we need to look on a case-by-case basis (I think I’ve explicitly asked for this a few days back, but can’t find it now). A blanket “all of the PB doesn’t even exist” isn’t going to convince anyone; it is far too a priori and can be guaranteed to be inaccurate, hence my insistence. Also: because BBT and ETC have huge overlaps (but use very different language and tools) I am really pushing hard to try and reach some form of understanding between us.

    Re #37, call me a revisionist: when I talk about intentionality, I talk about intentionality as defined in the Bacterium case – signals about something. I’m keeping the word, and attach to it only the first and minimal spark of aboutness – coming straight out from biology 101, not an armchair. The same for meaning, but you’ll need to read the ETC paper for that. There is a tiny bit of usefulness in both concepts, because they capture stuff that makes biological sense. Incidentally, Arnold made it clear that he sees the same kind of intentionality in the inner workings of the retinoid space, and I think he’s right: that’s the correct kind of intentionality, the one that has explanatory power at the low level, while it can connect to and correct our wild speculations at the highest. If we can keep what works, I’m happy to “eliminate” the rest.
    Thus, yes, I’m talking theory, so there is every chance of getting it wrong at every turn (applies to both of us, regrettably), but no, I’m trying pretty hard to get rid of fuzzy intuitions and stay as close to the empirical as I can. The rest of your criticism is agreed, but does not apply to what I’m trying to do. Or at least, I’m doing all I can to make sure it doesn’t. I really think you are now misreading me, but I don’t really know how to rectify this. Any ideas?

  37. Aren’t there any respected philosophers who defend libertarian free will? Who defend the idea that souls really do exist and control the brains they inhabit using libertarian free will? Who defend the idea that a big conscious soul with libertarian free will really does control the universe — ie God?

  38. VicP (32) Apologies for providing the wrong video link, here is the correct one:
    https://youtu.be/XPTAjNVvrYg

    Also should have begun a new paragraph: “One level up from our selfhood is our social brain and the scene below is about social interaction….”
    The level of social interaction where we are bound by intentional posits, i.e. all forms of fear as shown in the video, may go a long way toward understanding how we surrender the freedom of our personal will to our social groups.

    This entire debate, imho, centers around levels of explanation and around not fully grasping that we bind on the social level.

  39. Anonymous Programmer (40): Human group dynamics obtain in these debates as much as in any other, I fear, AP. Since the libertarians are invested in what amounts to supernatural posits, circles like these, where science is given the last word on the credibility of theoretical posits, tend to take the falsity of libertarianism for granted.

    I was going to direct you to the Flickers of Freedom site, but it seems to have shut down. Some clever googling should do the trick.

  40. Sergio (38): Our intentional idioms all belong to systems adapted to solving problems absent any capacity to cut nature at the joints. In that sense, no intentional posit is ‘real,’ though all of them are useful in particular (predominantly practical) contexts. Some of those contexts are theoretical (I think of game theory as a paradigmatic example), and there’s no sure way of determining in advance how the utility of those posits will fare as cognitive science matures. It’s case by case.

    However, it still remains the case that there isn’t any such thing as intrinsically intentional phenomena. This means using a ‘simple case’ of aboutness, as you do, as the basis of more complicated intentional phenomena is likely in for a rough ride. Aboutness is a way to make sense of things *absent information*: it seems implausible to think it will do much more than distort our attempts to make sense of the very information it’s adapted to leapfrog. I’ve read your ETC paper a second time, now, and though I certainly applaud your evolutionary approach, I just don’t see how you get past the ‘big problems,’ so much as finesse them in a new way. I just don’t think you’re being *evolutionary enough,* to be honest. Your agnosticism is just too philosophically loaded.

    To give an example, consider your example of the lizard needing to “represent the suitable causal chains” to be able to more effectively exploit its environments (hide, in your example). What causal chains? Causal cognition is extraordinarily expensive–both in biological and monetary terms if you consider the capital devoted to cognitive science! Lizards may make good wallets, but they don’t carry any 😉 . A lizard is going to rely on systematicities in its environments to gerrymander effective computational shortcuts to leverage the kinds of behaviour it needs to survive–same as all life. A lizard needs to develop the sensitivities to environmental patterns that cue adaptive behaviour (such as hiding)–and that’s it. No ‘aboutness’ is required, only effective causal relations with its environments.

    I can go on, but let’s focus on this example for a bit.

  41. @Anonymous Programmer: As Bakker notes it seems to be a rarity. I’ll list what I recall from having poked around in varied places.

    For souls you’ll likely have to delve into either Christian Apologetics or Process Philosophers/Theologians (see Whitehead & Bergson). The psychologist Stephen Earle Robbins has a book called Time & Memory about Bergson that doesn’t claim we have souls but explicitly leaves the door open. (I suppose Orch-OR does the same re: souls, but I believe there’s a split between its authors. Penrose is – AFAIK – against it while Hameroff is in favor.)

    JR Lucas has a book on the subject entitled The Freedom of the Will. I’ve not read it but apparently it turns on the controversial Gödelian Argument asserting minds aren’t computational (seems one might argue something can be deterministic *and* non-computational but not sure if that’s possible?). There’s a long list of rebuttals and counter-rebuttals on Lucas’ site. I linked two papers on the Penrose picture above (comment #3).

    There’s “How Can I Possibly be Free” by the neuroscientist Raymond Tallis, and “A Unified Explanation of Quantum Phenomenon” + “A new theory of free will” by philosopher Marcus Arvan.

    There are physicists who seem comfortable with the idea of free will as a possibility – Lee Smolin, Penrose (I think?), Michio Kaku, and Marko Vojinovic come to mind. Though of course others are not – Sean Carroll for example. Sabine Hossenfelder, last I checked, thinks it’s possible just not something that exists in our universe – see The Free Will Function on arXiv.org.

    Google should net you the papers I mentioned, I’d link them but the filter will eat my post. 🙂


  42. “God, said Einstein, does not play dice.”

    He did. And as with a lot of issues related to quantum mechanics, he was wrong. He’s allowed to be. Newton believed in the Philosopher’s stone, and he was even more significant than Einstein.


    “St Augustine, to go no earlier, understood that free will and predestination are not contradictory,”

    You wouldn’t find them contradictory if you believed in God. Anything is possible then… even total contradiction.


    “Part one: determinism is true”

    It’s probably true. For matter. In the known domains of the Universe that contemporary science knows about and can observe.

    If one assumes that physics is always true then one must assume that determinism is true. If one assumes that physics is usually true (merely accurate, albeit limited) then determinism is an open question.

    There has never been an instance since the creation of physics 300 years ago that should lead us to assume that physics is more than merely very accurate. Every great theory (thus far) has proved to have very definite scope and limits.

    Furthermore there is no ‘physics’ of physics – nothing to tell us if the process of physics either has or could ‘end’. So physics is condemned to be merely accurate at best. Not bad nonetheless. But from an epistemological viewpoint, it remains just plain old mathematical modelling. Just more accurate. You can fill in whatever semantic gaps you like, but that won’t change what physics is, and physics doesn’t claim (couldn’t claim, actually) that the universe is deterministic.

    Also the relationship between matter and the mental is unknown. It may be that (for instance) determinism is a requirement of our biological inheritance – our cognitive limits require it. Who knows. But to suggest that the truths of mathematical physics (which is restricted to its measurable, known physical domains) can be extended to the mental is not currently justified.

    ” Still, the idea of delimited patches of randomness seems odd, inelegant and possibly problematic.”

    But true nonetheless. Obscure quantum effects like tunnelling – where a small electron can jump over a massive hill without any explanation – literally keep the grass growing, as tunnelling is a key component of photosynthesis.

    Quantum mechanics sheds some interesting light on “cause and effect”. In the quantum world, cause and effect are somewhat meaningless terms. “Cause and effect” implies a definite consequence for a given cause. There is no such thing in QM, although we can say that ‘X’ is far more likely than ‘Y’, given the energy of the system. In my opinion it’s still deterministic, as what determines the flow of metrics in the system is still the continuous application of mathematical laws. But as I’ve said before, talk of cause and effect in deterministic systems is, I think, not coherent.
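
    For what it’s worth, the textbook formula behind that “far more likely” talk is just the Born rule – stated here only as the standard expression, not as anything new:

        P(a_i) = \bigl|\langle a_i \mid \psi \rangle\bigr|^{2}

    i.e. the probability of obtaining outcome a_i for a system prepared in state |\psi\rangle. QM fixes these probabilities exactly, but says nothing stronger about which single outcome will in fact occur.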


    “if something particular must happen, then the compulsion is surely encapsulated in some laws of nature.”

    Who says anything must happen? “Must” is a human idea, the idea of the imperative. Says nothing about physics.


    “If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday;”

    Are you serious?


    “we should try to establish its truth if by any means we can: but even if we fail, that failure in itself leaves us not with free will but with an abhorrent void of the unknowable.”

    Why on Earth would the unknowable be abhorrent? Are we biological beings or Angels of a Higher Power, predestined to know all?


    ” If I had a gun to my head or my hands were tied, then turning left was not a free decision.”

    It is. You could still turn right even with a gun to your head. The consequences might not be great, but that’s the choice you make. Many people have chosen to die rather than comply with demands.


    “There are of course degrees of freedom and plenty of grey areas, but the essential idea is clear enough.”

    There is no degree of freedom in a deterministic world. There just isn’t. That’s the point. That’s why compatibilism is the worst option of all three.

    Neural structures are matter, determined by physical laws. Their aggregate behaviour is determined by their microscopic behaviour. You can’t have an argument that says “atoms are deterministic but molecules aren’t”. You can’t say that aggregate structures of neurons have different deterministic properties than the atoms that make them.

    ” our behaviour may be determinate, but it is not determinable;”

    Practically true – there aren’t enough computers to cope with it. But theoretically I don’t think it holds – you just need enough computers. You can meaningfully suggest outcomes with probabilities, I think – same as with material objects in the QM world, although less accurately.

  43. Scott re 35 –

    “we simply understand what’s going on a la zombie, in terms of certain socio-cognitive systems tracking certain data in certain ways.”

    To me, “socio-cognitive systems” suggests that the subject is being viewed as a person (ie, a psychological entity) rather than as a “zombie”, which suggests that the subject is being viewed as merely a physical entity (ie, scientifically) responding to sensory input mechanically. If that interpretation is correct, the quote seems to involve mixing the psychological and scientific vocabularies. And since we seem to be pretty well aligned on these issues, I assume that either my interpretation is off or you don’t see mixing vocabularies as a problem.

    “If you want to know what intentional idioms are, you have to figure out the role they play in these systems.”

    And assuming that my interpretation of “socio-cognitive systems” is right, I agree.

    “The application of these systems is all but impenetrable from a metacognitive standpoint.”

    I interpret this as saying that introspection alone won’t help in that endeavor. Agree.

    “The alternative is to think peering through a straw *in the right way* will eventually provide the whole picture.”

    I interpret this as saying that the physical sciences can’t answer the psychological questions (ie, those related to personhood). Agree.

  44. AP 40 –

    On some PBS stations there is an interview series called “Closer to Truth” which is a pretty high level popularization of some of the more philosophical aspects of science. Many of the episodes address the relationship of some philosophical/scientific question and theism. As a committed non-theist, I don’t find the views expressed by the (most often) Christian philosophers convincing, but the guests are intellectually at the pinnacle of the genre.

    The episodes are online at http://www.closertotruth.com.

  45. And a recent sequence of Closer to Truth episodes on Free Will features several of the people mentioned in Sci 45.

  46. Charles (48): “And since we seem to be pretty well aligned on these issues, I assume that either my interpretation is off or you don’t see mixing vocabularies as a problem.”

    It’s always a problem, one that I’m sure I’ve run afoul of any number of places. But, just as ‘design’ is a well nigh unavoidable short cut in economical evolutionary discourse, intentional idioms are inescapable in economical discourse regarding cognition.

    This is what makes abduction such an important part of these accounts. You need to be able to account for your short cuts when called upon, to explain how and why this or that heuristic does the work you need it to do. I often find it difficult when called out in this way, but I’ve yet to regret the exercise. Part of the reason I find BBT so compelling has to do with its interpretative power.

  47. Sergio 34: “I see very little theoretical difference between the two cases [talking about an entity named X as opposed to the name X referring].”

    Discussions of the theoretical difference between intentionality and reference seem to me to sit typically right at – if not significantly beyond – the threshold of my intellectual abilities, but my simplistic view is that for a name to refer, the named entity must exist, whereas a named entity can be talked about even if it doesn’t exist.

    Re your four points:

    1 and 4. The obvious arbitrariness and ambiguity of names can be resolved by employing a definite description:

    There is an object that submitted comment 34 on this post and to which the name “Sergio Graziosi” refers.

    My assertion about “Sherlock Holmes” already includes a definite description, so I think it stands as is.

    2. My point about members of your PB wasn’t intended to say anything about “self”.

    3. I think I agree, although I’m inclined to go further and suggest that it may be impossible.

  48. john davey,

    “If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday;”

    “Are you serious?”

    Seems legit. What’s the question you’re raising?

  49. Peter, I wonder: how do you square your belief in determinism with the ‘anomicity’ you’ve proposed to be at play in the mind? It seems to me that the two are diametrically opposed—if you have determinism, then it seems at least to be the case that every occurrence can be brought into a lawlike ‘if A then B’ form, where A is the present, and B the future configuration of whatever part of the world is under consideration. In determinism, everything happens for a reason, that reason being given by the prior configuration of the universe; so if all those deterministic relationships were known, then everything would become predictable. But this can’t be the case for an anomic world.

  50. Charles #52 (and many comments from John Davey)
    You’re making my point (and we’re still engaged in a battle of understatement, I see): if my number 3 is so difficult as to be impossible, and we can’t produce a precise, unambiguous, entirely materialistic definition of the distinction between me and not-me, we have to accept that the Sergio Graziosi object is just a very crude approximation of what really is there, and we adopt your definition of referring as “there is an object to which a word points”, acknowledging its brutally heuristic essence (since the “object” is fuzzy and undefinable). We do it because, for all practical purposes (remember?), it works very well.
    Thus, I think we agree, but this brings in the “levels of description” theme again, and links to John’s view in a way that I find interesting.
    I will start with Peter’s position about existence: if something exists, it has causal powers. If something inherently can’t affect anything at all, then there is no something. I hope we can agree on this.
    I am then happy to espouse John’s view: in a fully deterministic world, cause and effect stop having any meaning, because there is always an element of arbitrariness in picking elements A and B, where A causes B. In the ultimate analysis, you get infinitely many correct choices.
    However, as long as you are part of the deterministic world you’re trying to comprehend, you can never reach the point of ultimate analysis, where you have a full description of what was, what is, and what will be. That’s because, since everything that exists has causal power, you will never be able to produce a full description using only what is inside your deterministic world; in fact, you’ll only be able to produce descriptions that are incommensurately local, by comparison.
    Thus, for all practical purposes, and here I mean it in the strongest possible sense, we have to embrace the heuristic approach, make distinctions between A, B and C and look for causal explanations. We would then pick the distinctions that allow us to simplify our model: if description 1 (D1) has 20 elements joined by 40 causal links (explains 40 effects) and description 2 (D2) manages to describe the same 40 effects with only 10 elements, we pick the latter. If we then find another “theory” (description 3) that has 25 elements and can account for 40 causal effects, where 2 are new, and 38 are already accountable according to D1 and D2, we get alternative theories. D3 isn’t better (or more true) than D2, it’s just different, useful for a different subset of practical purposes.

    The current result is that we have to accept the arbitrariness and fuzziness of our distinctions, because that’s what allows us to make predictions without having to let the whole system evolve and just observe the consequences.
    We finish up joining Scott again: cognition *is* neglect; what kick-starts cognition is the act of ignoring the existence of what isn’t included in our model. Scott mentions this in his first Alien post: as long as Cosmic Rays have no effect on the local ecology, in cognitive terms, they don’t even exist.
    Thus, all referring (and if you want, I think it’s OK to use the aboutness and intentionality words, knowing I’ll lose Scott if I do) is an act of neglect. This naturally connects well with the Data Processing Theorem you mentioned a while back: you can’t process all the information present in your whole deterministic world, you have to slice it up somehow and make it manageable, which in turn means that the perfect theory that predicts everything is a physical impossibility.
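
    For reference, and assuming the theorem meant here is the standard data processing inequality from information theory, it says that no amount of processing can increase the information a signal carries about its source:

        X \to Y \to Z \ \text{(a Markov chain)} \quad\Longrightarrow\quad I(X;Z) \le I(X;Y)

    so every act of slicing the world up into a manageable model can only lose, or at best preserve, information about it.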

    Final point: when you have a case where two theories co-exist, like in D2 & D3, it’s legitimate to try producing a D4 that describes 42 causal links. Not necessarily easy, but a worthy thing to try. Which in turn explains why I tend to see far too many agreements: if I’m in D2, and you in D3, I immediately try to produce a D4 and declare “see? we agree after all”. The fact that I usually fail is an irrelevant annoyance 😉

    Sci: this more or less sums up my view of why causality is a cognitive necessity. Without John’s contribution it would require a very long premise, that’s probably why I never wrote the whole thing down in an organised way.

    [Scott: I’m not ignoring you, I’m thinking]

  51. Jochen 54: This is something I was pondering as well. I’ve heard people say things can be deterministic but non-computable but I don’t see how that can be.

    Of course, as always, I’m open to being shown where my thinking may have gone awry.

  52. Hm, I don’t think I see a problem with determinism and noncomputability at all: a Turing machine either halts or doesn’t halt, and does so in a perfectly deterministic way, in the sense that whether it does or doesn’t depends only on the initial tape configuration. That is, whenever presented with the same configuration, it will produce the same behaviour, and either halt or not—which seems perfectly deterministic to me. However, the question of whether it does halt is of course not answerable by any computer.
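
    A minimal sketch in Python of what I mean (the transition table is a made-up toy example, not any particular machine): the step rule is fully deterministic, so the same configuration always yields the same behaviour, yet no fixed step bound settles the halting question for machines in general.

        def step(state, tape, head, table):
            # One deterministic step: (state, symbol) fully determines what happens next.
            symbol = tape.get(head, 0)
            new_state, write, move = table[(state, symbol)]
            tape[head] = write
            return new_state, tape, head + move

        def run(table, max_steps=1000):
            state, tape, head = "A", {}, 0
            for t in range(max_steps):
                if state == "HALT":
                    return ("halted", t)
                state, tape, head = step(state, tape, head, table)
            # Inconclusive: no finite bound works in general (that's the halting problem).
            return ("still running", max_steps)

        # Toy table: write a 1, move right, write another 1, halt.
        table = {("A", 0): ("B", 1, +1), ("B", 0): ("HALT", 1, +1)}
        print(run(table))  # ('halted', 2) -- same table, same outcome, every time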

    But noncomputability also doesn’t imply anomicity, to me—it’s in a sense merely an epistemic restriction, i.e. the impossibility of getting to know certain consequences via certain means (those afforded by computation); but it doesn’t imply that there is some sort of non-lawlike behaviour on an ontic level. There could be laws perfectly accessible to a hypercomputational mind that, however, would forever elude our grasp; but those are laws just the same.

  53. Ah that’s a good point, distinguishing noncomputability from a genuinely anomic reality.

  54. Good question. Briefly, there’s a level where determinism works, but the meaning of things is not determinable on that level. Now in theory you could work through the deterministic account in exhaustive detail and, using a human brain, read off the significance of each state, at least to you, but the latter process is still anomic, I think.

    This is an “after a couple of scotches” answer – I reserve the right to disown it tomorrow…

  55. Scott,

    It’s always a problem, one that I’m sure I’ve run afoul of any number of places. But, just as ‘design’ is a well nigh unavoidable short cut in economical evolutionary discourse, intentional idioms are inescapable in economical discourse regarding cognition.

    To be fair, using ‘design’ in this way is kind of like expecting someone to get, when you call the dog Mildred, that aunt Mildred is dead. It’s quite a leap to make.

  56. Peter #59: So do I understand you correctly that your notion of anomicity (I’m trying really hard to popularize that word!) is linked rather to the impossibility of determinability, i.e. more of an epistemic notion, than to a fundamental, ontic lack of lawlike behaviour? Or are you saying that there is some form of schism in between the level on which the world is deterministic, and the level on which meanings operate, such that the latter genuinely lacks lawlike behaviour—which would then, it seems to me, entail a certain failure of supervenience, leading to a kind of ‘strong emergence’ of non-lawlike behaviour from lawlike fundamentals?

    Oh, and this is a pre-couple of scotches answer, so I reserve the right to rescind it in the evening… 😉

  57. Sergio 55 –

    You’re into issues that are way more subtle than the simple point I was trying to make.

    Suppose I say that my dog is in the basement. Unless you have reason to doubt my veracity, if you go to the basement you’ll expect to find an object there which will cause you to say something like “Sure enough, there’s your dog.” If in that scenario “dog” is replaced by “unicorn”, notwithstanding that you have seen pictures of a unicorn and as a child may have enjoyed stories involving unicorns, you will not even bother to look in the basement because you know you won’t find any object there which would cause you to say “Aha, there is indeed a unicorn here”. Why? Because you know that there can be no such object in the basement.

    In an exchange with Scott above, you suggest that he (and others) seemed to consider members of your “problem bunch” (eg, words in the intentional idiom) to be “illusionary”. Because saying some mental event is an “illusion” often results in pushback (eg, see quip under this blog’s title), I was just offering the suggestion that one might want to express the underlying sentiment differently and followed Rorty by suggesting the alternative of saying that those words “don’t refer”. Of course, that’s also debatable but does avoid some of the baggage attendant to saying that something is an “illusion”.

  58. Anonymous Programmer: It occurs to me you might like Kauffman’s writing as well, maybe start with Answering Descartes: Beyond Turing. Other names that come to mind of possible interest are physicists Christopher Fuchs & Anton Zeilinger, maybe also Lee Smolin.

  59. Sci, thanks for that link (you seem to have quite a nice collection of references at hand!). Unfortunately, it seems that one should first advise Stuart Kauffman to learn some physics—both in the piece you linked to and in the earlier one about his ‘poised realm’, there are several egregious misconceptions that unfortunately undermine his thesis. To highlight just two: in the poised realm piece, he asserts that physicists don’t have a way to describe systems that are ‘partially decohered’ since the unitary Schrödinger dynamics isn’t applicable—but that’s manifestly false; we do, it’s called the quantum master equation approach, and it can be used to describe the evolution of arbitrary open systems. This follows simply from the fact that we can always take into account the environment, which, together with our system, will evolve unitarily, and then ‘forget about’ the degrees of freedom we’re not interested in, reducing the dynamics to an (in general non-unitary) evolution of the open quantum system.
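
    For reference, the usual closed form of that approach is the Lindblad (GKSL) master equation for the reduced density matrix ρ of the open system, with H the system Hamiltonian and the L_k the environment-induced jump operators:

        \dot\rho = -\frac{i}{\hbar}\,[H,\rho] + \sum_k \gamma_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{ L_k^\dagger L_k, \rho \} \right)

    which covers exactly the ‘partially decohered’, non-unitary regimes he claims physics can’t describe.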

    Secondly, he claims that the behaviour of systems in this ‘poised realm’ could somehow be used to underwrite non-Turing-computable, non-algorithmic behaviour; in fact, he claims that the system is ‘obviously’ non-algorithmic, since its behaviour is not definite. That’s straightforwardly wrong: we know that all quantum systems can be simulated on ordinary computers to an arbitrary degree of precision; it’s true that one does incur an additional systematic slowdown, but still, systems based on both substrates compute exactly the same functions (as usual, there’s some caveat on the unprovability of the Church-Turing thesis here, but that applies just as well in a purely classical context). Hence, we don’t have any more reason to think that quantum resources allow super-Turing computation than we do regarding classical resources.

    The worst thing about this is that I think there’s something basically right about his overall stance, and that’s even got something to do with quantum processes—minds are ultimately non-computable, simply because they take input from an environment that includes genuinely random occurrences, and genuine randomness can’t be produced algorithmically. Moreover, any hypercomputable function can be computed by means of a finite-length algorithm plus an unbounded sequence of random numbers, such that it is indeed the case that in the combination of our brains (executing the finite algorithm) with the environmental randomness, a hypercomputable function is implemented (with probability 1; of course, certain random sequences merely lead to the ordinary computable functions, but those form a null set in the space of all functions).

    That’s the reason our minds are so damn incomprehensible: our powers of comprehension include only the computable; any explanation is essentially a finite series of steps, hence, an algorithm. Thus, our brains can do more than they can explain. This also explains the conceivability of zombies: since we are limited in our reasoning to the algorithmic, we can’t perform the step of inferring mental content from the physical substrate, and hence, it seems to us as if no connection exists, while the connection is, in fact, necessary (but nonconceptual). Additionally, it accounts for Mary’s inability to infer her future experience: to her computational mental faculties, the behaviour of the noncomputable function implemented by her brain plus the environmental influence is utterly opaque, hence, she can’t predict the consequence of the introduction of novel stimuli (but can, afterwards, remember: the behaviour of a noncomputable function, over finite stretches, is computably approximable; but of course, one needs to know the behaviour of the function first).

    But the reason I believe all this is fundamentally that the troubles one experiences in explaining the mind are just those that come up at the boundary between the computable and the noncomputable. Most notably and obviously, there’s the infinite regress, which manifests itself in the form of the homunculus problem, for instance. Infinite regresses are the hallmarks of fixed points: if you lay a map down on the desert island it is a map of, there is a point on it that’s identical on the island and on the map, namely the point where the map is situated. If the map is now perfectly detailed, the area around this point is an infinite regress of maps within maps. It can be shown that all of the classical paradoxes of self-reference originate in such fixed-point constructions (but the argument is a bit too lengthy to replicate here).

    Now, think of your classical homunculus. Say, it’s a device in your brain that interprets the incoming data stream, and then generates your subjective experience from that. Of course, you know that this account doesn’t work: for how does the homunculus perform this feat? Well, by having its own inner homunculus, of course! And then what?

    And here, if limited to computational results, we must get off and try to look for another solution. But to a hypercomputer, such an infinite regress poses no difficulty. Consider for instance the Zeno machine, which performs its first step of operation in one second, its second step in half a second, its third step in one quarter of a second, and generally, its nth step in 2^(1−n) seconds. It’s capable of executing infinitely many computational steps in two seconds (of course, the time scale here is arbitrary). Thus, it just completes the homunculus regress above, breezing through all the infinitely many nested levels; but then, the homunculus can actually perform its function and generate experience from the sensory data.
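
    The arithmetic behind “infinitely many steps in two seconds” is just the geometric series of step times:

        \sum_{n=1}^{\infty} 2^{\,1-n} = 1 + \tfrac{1}{2} + \tfrac{1}{4} + \dots = 2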

    Now, what does this show? Well, all the things that a Zeno machine can implement are hypercomputable functions; thus, there exists a hypercomputable function generating experience. However, all hypercomputable functions can be decomposed into a finite length algorithm plus a random number; but this is plausibly just what a human brain in an environment with genuine randomness has access to. Hence, there is a function (although we can’t finitely describe it) that gives rise to experience and that is executable with what it seems we have access to.

    Or, well, at least those are the hypotheses I currently work with. 😉

  60. @Jochen – Ah thanks for that, always good to have a physicist around. On the references, when I’m in such total ignorance of a subject like consciousness I try to gather input from a wide net of sources. Figure I can better see the landscape of possibility that way.

    I’d honestly be curious to see if Kauffman would respond to your critique…

  61. Well, I might hope he’d embrace my hypercomputational mind proposal instead, but that’d probably be a tad overoptimistic…

  62. Are there more details on a hypercomputational mind? I recall seeing a TED talk on how quantum level spookiness can enter the classical world – how that can be exploited for quantum computers:

    https://www.youtube.com/watch?v=wZzHnZzm_58

    But don’t think this is exactly what you mean?

    Also one on quantum biology possibilities:

  63. Jochen #56,
    I kept procrastinating, but I also wanted to thank you for spilling your beans here. Seriously cool stuff; I guess Peter got interested, as he’s betting on non-computable minds. Also: I do think that writing this was costly to you, it isn’t easy to break the reserve and stop keeping things for ourselves. Bravo for that, I have obvious reasons to sympathise.

    However, I’m not convinced, for reasons that are so obvious that I refuse to believe you don’t already have an answer.
    You say: “all hypercomputable functions can be decomposed into a finite length algorithm plus a random number” and from the bit above (“minds are ultimately non-computable, simply because they take input from an environment that includes genuinely random occurrences”) I feel entitled to conclude that for you, the random number is generated by the environment and comes in as sensory stimuli. From there the brain executes a finite length algorithm, and the result is “bang” a non-computable mind. Are you saying that brains do computational things and the result is the mind, which is noncomputable because the input is noncomputable to start with?
    I’d sort-of agree if you were speaking metaphorically (where “noncomputable” stands for “incomprehensible FAPP”), but I think you’re trying to make a very concrete point.

    I see three things to note:
    1. for practical purposes, if you’re right, we should be (theoretically) able to reverse-engineer the finite length algorithm, and since we all have access to reality, we would be able to engineer new minds. Is this what you’re saying? Isn’t this approach vulnerable to the kind of objections I was pushing against a while back?
    2. I like how you solve the problem with Mary, but note that we don’t need all that theory. When you have a qualitatively new experience, you learn something (that’s the gist of Mary’s story, isn’t it?). That is, if the experience seemed relevant, you will retain some memory, and as Scott keeps reminding us, such memory will be much impoverished, when compared with the actual experience. However, it will be enough to recognise the same sort of stuff if it happens again. This is ultimately what it means to “know how red feels”: you are able to recognise it when you see it again. Now, the algorithm that picks and stores the information necessary to achieve this is devilishly complicated (we know it is, because it works!) and thus a normal human being can’t perform it explicitly; the processing power that we do control isn’t even close to enough. Thus, it makes intuitive sense that Mary wouldn’t be able to consciously build a memory of seeing red, before actually seeing it. This fact would be true even if the input was “computable”.
    I recall some (long) conversation between you and DM about this but can’t remember who was making an argument that was close to the above.
    3. By introducing the spooky concept of hypercomputation, you are doing one thing: explaining why we can’t intuitively grasp the relation between brain and mind, and thus why we have dualist intuition and/or why the hard problem exists. However, I’m not sure you are not just brushing all the spookiness of consciousness under the carpet of hypercomputation spookiness. At least, this is how it looks to me, because I’m a biologist, and hypercomputers are far out, as far as I’m concerned.

    Thus, question for you: are you saying that brains are algorithmic and that if they were fed with controlled (computable) inputs the results would remain computable? Does this mean that in such a case no mind will emerge? If we fed computable inputs to a brain, and this still produced a mind, would we find this “ordinary brain/mind, in a very unusual setting” suddenly easy to understand? (You’ve guessed that I’d answer “no” to this last question.)

    I’m hoping the remarks and questions above are enough to explain why I’m puzzled… I also hope you don’t mind me asking (I’m trying to return the favour, in fact). If you’d prefer not to explain further details in public, just let me know, I’d find it easy to understand!

  64. @Charles 63
    I can finally understand why we got caught in this spiral. For me, what you are proposing is unsatisfying because I internally translate like this:
    “doesn’t refer” becomes “refers to nothing” and therefore “the implied referent doesn’t exist”. Thus, we go back to the proposition I’m attacking: “X is an illusion, as in X doesn’t really exist”. In other words, I am given the impression that your solution doesn’t solve, it seems to help because it obfuscates.
    Surely enough, I have this impression also because I “think” I have a better solution. In a nutshell, when you say “there is a cat on the table” you are already using (implicitly) self-referential language. The objectivity of the utterance above only stands because we share the same cognitive architecture (which forces us to recognise cat and table as independent objects, whether we want to or not).
    When studied in detail, you find that there are no discrete cat and table, and when you study in detail the Problem Bunch, you find the same. In both cases things are not what they seem, but they still exist just the same.
    I find this solution much more elegant, but then elegance is in the eye of the beholder, so I guess we might have reached a deadlock.

  65. @Scott #44
    You really made me jump: no one ever before told me that my arguments aren’t evolutionary enough. If anything, people remind me that you can build evolutionary explanations for virtually everything, true or not.

    Anyway, I still think you are missing my point, and it starts with the bacterium, so I’ll leave the lizard alone, with apologies. I say the bacterium processes signals about glucose, and I maintain we can agree on this. We can then say: ah, so this is where aboutness comes from. And then conclude that this aboutness has exactly none of the strange properties that philosophers ascribe to intentionality. I’m cool with not calling this kind of aboutness “intentionality”; I use the term because I follow conventions.
    However, this means I have no reasons to expect a harder ride because of this move. Biologically speaking, the ride is already hard enough (i.e. the easy problems are complicated beyond comprehension). However, if you want to explain “how can we produce thoughts about this and that?” the answer has to boil down to cognition, and I think we agree that cognition is made possible by perceiving things.
    Thus, my feeling is that when I mention intentionality, you think I’m following the crowd of philosophers, and object to this. But I don’t think you’re aiming at me: I’m in a different room, laughing at the philosophers who are all busy inspecting their own armchairs, trying to figure out where intentionality is hidden, while I play with little invertebrates and/or reptiles (actually, I play with imaginary ones).

    You think that my “agnosticism is just too philosophically loaded”, while I think your eyes assume that all the philosophical load they usually find is there, when it isn’t. One of us is wrong, but I don’t think it’s me, because I agree with your entire last paragraph (on the lizard):

    No ‘aboutness’ is required, only effective causal relations with its environments.

    Indeed. I am saying that “effective causal relations” are all there is, and that they are also what makes it possible for our cognition to refer to stuff. There is a causal relation beneath, which usually works, and we use it effectively (because it allows producing inferences, thus allowing us to plan our future behaviour). There is nothing heavily philosophical in all this, so I’ll rebut your “I just don’t think you’re being *evolutionary enough,*”, with “excuse me, but aren’t you being too philosophical?” (I think you are reading things into my texts, and I can testify that I did not intend to put them there). 😉

    Seriously now: you (the actual Scott) are not my target audience. The thing is supposed to be publishable via peer-review: I need to use a language and conventions that would be understood by my target audiences. Incidentally, the way you write is utterly incomprehensible to most of the people with my background and very hard for me (and you know I try hard). So I’m not surprised you find things I didn’t intend to put into my own text, we still have a linguistic barrier between us, but assuming I am following the same old paths because I’m using well-known imagery/language is a bit harsh, I think. I am trying to be intelligible for the widest possible audience, so please keep that in mind…

    Final question: besides all the fog, do you see why the last part of the paper effectively spells out (my understanding of) BBT? If you don’t, I should find out why. (Note also that I’ve reached that point before even knowing you existed, that’s why there are no citations/credits).

    Finally: I speak my mind, but that doesn’t mean I don’t appreciate the fact that you gave ETC a go and that you took the time to comment. I couldn’t appreciate more!

  66. Hey Sergio, I’m glad you found something of value in my ramblings. I’m happy to try and answer your (and any other) questions—it’s only fair that I should give you the opportunity of giving my views the same pummeling yours have received, and thinking that I finally have something that can take at least some punishment without collapsing immediately is part of what made me share this here. So on to your questions:

    Are you saying that brains do computational things and the result is the mind, which is noncomputable because the input is noncomputable to start with?

    Well, first of all, the brain is part of its own environment, so its functioning might itself depend on some random processes—I mean, there certainly is some random stuff going on there, however, it’s too early to say whether it’s essential or merely incidental.

    Ultimately, the main ingredient in my account is indeed the noncomputational resources the brain makes use of—and I think that’s how we should conceive of it: as a resource the brain receives from the environment and then uses in its mind-building. And I do mean noncomputability in the technical sense, since random numbers can’t be produced by any computable process.

    Now the obvious counterargument would be that a Turing machine equipped with a random oracle, i.e. a fair coin, still can’t compute any function that an ordinary Turing machine can’t also compute, but there are two somewhat subtle points to make here. First of all, such a system can still perform actions in the world that a nonrandom system can’t—I gave an example of a cops-and-robbers game on here a while back, in the comments underneath the ‘Quanta and Qualia’ post. The gist of it is that a Turing machine can compute a function computed by a machine with a source of randomness only by successively going down every path, i.e. simulating all possible outcomes of getting a random input; but this is in general not possible in the physical world. Thus, while the set of functions both machines can compute remains the same, the performance of individual computations differs between random and deterministic machines.

    Furthermore, this only applies to machines that have access to a fair coin; but many more things become possible once a machine has access to a specific random number. There is, for instance, an (algorithmically) random number that encodes the answers to the halting problem for any given machine (the so-called Omega number, halting probability, Chaitin constant, etc.). Given this number on its own, it’s just indistinguishable from a series of coin flips; but given the right algorithm for ‘decoding’ it, it will provide you with the answers to the halting problem (for some specific machine), and thus enable a performance clearly beyond that of any standard TM. And in this sense we need both the right algorithm and the ‘right’ randomness for performing noncomputable functions—only when the two meet do we get the appropriate performance, of e.g. the homunculus-function I outlined for Zeno machines above. At every locus of this happening, the world gives rise to a mind (or, well, so I suppose).
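
    Just so the notation is on the table: for a prefix-free universal machine U, the halting probability I’m referring to is usually written

        \Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

    where the sum runs over the (self-delimiting) programs p that halt on U. The digits of Ω_U are algorithmically random, yet knowing its first n bits lets you decide the halting problem for programs up to length n.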

    Now, your numbered points:
    (1) Yes, I do think that we can create minds in such a way—we can create conscious robots that interact with the environment, but not conscious computers, or beings inside a simulation (this also allows me to discard Bostrom’s simulation argument, so there’s a bonus!). About the objections, could you remind me what they were/where to find them?

    (2) While even in a deterministic setting, the algorithm that produces any given phenomenology might be so involved as to prohibit our wholly appreciating it even in principle, note that this is true for many things we nevertheless understand quite well (not least the software we ourselves write—I could not emulate the simulations I write by myself, nor even hold all the steps of the algorithm in my head simultaneously, but I still know what goes on on a large scale). Think about a TV: the detailed story about how the electrons are accelerated and directed onto the viewscreen to produce any particular picture is too complex to hold in mind; but the overall picture is easy enough to understand. So I think understanding does not in general depend on being able to follow the detailed algorithms; rather, we can chunk its performance into manageable pieces, and assemble a higher-level, in-principle understanding from there.

    The same thing, however, seems to be prohibited in the Mary case: there, the argument is precisely that such in-principle understanding doesn’t seem to arise. That is, while the performance of physical systems acting upon another in whatever way seems ultimately transparent to us, it’s exactly due to this transparency that we can’t imagine how this could possibly lead to phenomenology production. And this is where my account comes in: there is a mismatch between the things happening in the physical world, and the things we can transparently imagine as occurring, which is given by the algorithmic faculties of our understanding being unable to account for the nonalgorithmic parts of the world. This yields a natural ‘explanatory gap’, without resorting to any spookiness (or at least, none of the dualistic, non-naturalistic sort).

    So to me, this is in a sense the most conservative extension of the picture: the computable doesn’t seem to be able to do what we want it to (also for the reasons I’ve been giving in prior comments), so we just take the next small step forwards: we hold fast to the entirely physical nature of the world, but allow for it to outstrip the capacities of any finite machine. The appeal here is not so much explanatory, but rather the observation that if things were that way, they would seem mysterious to us in exactly the way they do in fact seem mysterious to us. The disadvantage is, of course, that there are things that seem to be forever beyond our grasp, thus opening the door to a version of ‘new mysterianism’.

    (3) Well, in a sense, I am exactly sweeping the spookiness of consciousness under the hypercomputational carpet; but note that we have a well worked out theory of hypercomputation, so while we can’t intuitively apply it in the same way we can apply algorithmic reasoning in order to reach conclusions from certain premises in a finite series of steps, we can still reason about it, and deduce certain consequences.

    Think back to my cops-and-robbers example: Maggie, the cop, knows the police manual by heart; faced with the choice of searching either of two houses, in one of which she knows a robber hides, she executes a deterministic search algorithm, at every timestep either switching houses or staying. Now, a robber that, like Maggie, knows that algorithm, can elude her indefinitely, if the setting is deterministic; but if Maggie’s choice is random, she will eventually capture any robber (with probability one if the robber also makes random choices).
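
    To make the game concrete, here’s a minimal simulation sketch in Python (the particular ‘manual’ rule, alternating houses each step, is just an illustrative assumption): a robber who knows the deterministic manual is never caught, while a coin-flipping cop catches the same robber almost immediately.

        import random

        def manual_cop(t):
            # The publicly known 'police manual': any fixed, computable search rule will do.
            return t % 2          # search house 0, then house 1, then house 0, ...

        def informed_robber(t):
            # A robber who knows the manual simply stands in the other house.
            return 1 - manual_cop(t)

        def chase(cop, robber, max_steps=10_000):
            for t in range(max_steps):
                if cop(t) == robber(t):
                    return t      # caught at step t
            return None           # never caught within the horizon

        print(chase(manual_cop, informed_robber))              # None: the manual never wins
        print(chase(lambda t: random.randint(0, 1),            # a fair coin catches the
                    informed_robber))                          # informed robber quickly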

    Now, picture Maggie trying to train a new class of police cadets in her ways: all she can ever convey (in finite time) is the police manual algorithm; but this knowledge does not suffice to replicate her performance. Indeed, Maggie herself can’t give an account of her abilities—in retrospect, in every instance, she could give a simple account of what she actually ended up doing, but there is no such account that will explain her performance across all these and future cases. So Maggie has a capacity she can neither convey nor explain, and which essentially depends on the randomness she has access to.

    Take away this randomness, and while everything becomes explicable, the ability itself vanishes—that is, she can now no longer catch a robber who knows the police manual (which is, by the way, itself a very stripped-down version of the Gödelian construction). Thus, the existence of the ability and its explicability are linked—any explanatory account implies the vanishing of that which was to be explained.

    In particular, one could craft an explanation of all her past successes; but this would fail to explain her ability, since one could easily imagine a robber that would elude a cop who knows this explanation. Maggie, however, would catch this robber regardless.

    I’ve got to go now (and should stop anyways lest things become entirely unwieldy); does the above help you somewhat?

  67. Sci

    “If it is true that on Tuesday you’ll turn right, then it was true on Monday that you would turn right on Tuesday;”

    The determinism question is about the nature of the future – it’s not answered by pointing out that the past is static. Even in an indeterministic world, the past never changes.

  68. @john davey: On the past never changing, my understanding is that even that’s not clear at the quantum level?

  69. @Jochen #72
    Thanks. The short reply is that it seems I was reading you right, and you managed to confirm my reading in much greater detail. However, you left my worries unscathed, which surely wasn’t your aim.
    Thus, now I have the job of making sure you understand my worries, so that you can find your own answers to them. Trouble is, I think my part won’t be easy, mostly because I can’t see how not to repeat myself.
    If you don’t mind, I’ll start by asking a question, so to at least have a chance of not chasing shadows.

    Your Maggie story is great, it caught my attention the first time round, and it remained in my little bag of intellectual tools ever since. In her case, the source of non-computability can indeed be a fair coin, so there is nothing to explain there.
    However, I’d like to check my understanding of the Zeno machine side. It’s all fine about how a Zeno machine should work, but I’m not sure I understand how “all hypercomputable functions can be decomposed into a finite length algorithm plus a random number”.
    From my hasty background check, I think “a random number” stands for a precise real and irrational number (like Pi or the square root of 2) which your (extended) Turing machine somehow is able to use “as such” and not in an approximate form (I1). Thus, it kind of provides a “source of infinity”, if you pass me the cheap metaphor, and, assuming the Turing machine can somehow tap into it, solves the problem of the infinite regression.
    Is that correct? I’m asking because I might have got it completely wrong!
    Another interpretation (I2) is that your Turing machine needs to have access to a random number generator, one which picks a random number out of a finite set of possibilities, which is the same thing as making use of a fair coin, so I don’t think that’s what you have in mind.
    Final interpretation that I can envisage (I3) is that you just need any randomly chosen number (not the specific one used in I1), with or without infinite precision (i.e. chosen from a set of infinite or finite possibilities) it doesn’t really matter. This interpretation makes absolutely no sense to me, so if it’s the correct one I’m afraid I’ll need some hand-holding.

    If you can (re)confirm my interpretation (I1) or point me in the right direction, I will then try to give your idea the punishment it deserves (I hope you know I mean it in a good way).

  70. Sorry for failing to address your concerns, Sergio; I must have misunderstood you. And don’t be afraid to repeat yourself, sometimes if I don’t get something on the first go around, I still have a chance on the second (or third or…)! I’m a slow learner, but I do learn. Occasionally.

    However, regarding your questions re the decomposability of a hypercomputable function into a short algorithm, I’d thought I had addressed this already… So if I should repeat myself, I’ll have to ask your patience.

    Basically, there’s a theorem that states that you can compute any function at all using a short algorithm and a random number—proven, as far as I know, for the first time here (where I also got the Maggie story from, as I hope I didn’t forget to mention in my original post), but it’s sort of intuitive if you know that, for infinite sets, the number of functions from a set to itself is equal to the cardinality of its powerset, so that there are, for instance, as many functions from the natural numbers to themselves as there are subsets of the natural numbers; but there are as many real numbers as there are subsets of the natural numbers, so we can pair functions and real numbers. So each real number can be used to uniquely identify a function between natural numbers (or integers, which doesn’t make much of a difference for our purposes, as we’re only interested in cardinality).
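
    Spelled out, the cardinality bookkeeping behind that pairing is just

        \left|\mathbb{N}^{\mathbb{N}}\right| = \aleph_0^{\aleph_0} = 2^{\aleph_0} = \left|\mathcal{P}(\mathbb{N})\right| = \left|\mathbb{R}\right|

    so the functions from the naturals to themselves, the subsets of the naturals, and the real numbers can all be put into one-to-one correspondence.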

    One can then group the functions from natural numbers to themselves into three classes:

    1) The computable functions, i.e. those for which a finite algorithm exists,
    2) functions that can be computed using a source of randomness, such as the Maggie-function, and
    3) functions that can be computed using a finite-size algorithm and a specific random number, such as in my above example the halting probability of a TM, which can be used to solve the halting problem for that TM.

    Perhaps it’s worthwhile to spend a few seconds talking about random numbers. My preferred notion of what a random number is derives from algorithmic information theory: a number is random if no algorithm exists that can compress it, that is, if no program substantially shorter than this number exists that can output the number. Thus, numbers such as pi or e or sqrt(2) are not random—there exist short algorithms capable of enumerating all their digits. They’re lawful, as opposed to random numbers, which are then anomic.
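
    In symbols, that’s just Kolmogorov complexity: with U a fixed universal machine,

        K_U(x) = \min\{\, |p| : U(p) = x \,\}

    and a string x of length n counts as (algorithmically) random when K_U(x) \ge n - c for some small constant c, i.e. when no program appreciably shorter than x itself prints x.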

    This notion pays heed to the intuition that random numbers should, in some sense, be unpredictable—that is, knowing the first n-1 digits of the number should not enable you to guess the nth with better than chance accuracy. It can be shown that this notion of randomness is equivalent to more traditional ones, such as Martin-Löf randomness etc.

    One can also motivate this notion by noting that a sequence is random if and only if there is no ‘law’ that one can use to generate it, since such a law would have to be itself capable of being stated more compactly than the original sequence; otherwise, stating the sequence itself would count as a law according to which it is generated.

    One should note, however, that in order to perform super-Turing computation, a finite algorithm A1 would need unbounded access to some source of randomness—otherwise, there exists a finite-size algorithm A2 that simply includes the number up to the required length that replicates the behaviour of A1. However, it does not at any point require access to any infinite-length digit sequence, or make some infinite-precision real number measurement; a process continually producing the next digit when asked will suffice.

    Using such a process, Maggie can catch every robber, while no deterministic search strategy can match her prowess (type 2 above), and an algorithm being fed the digits of some TM’s halting probability (necessarily an algorithmically random number, otherwise a computable process would produce it, leading to the halting problem becoming computably solvable) will solve the halting problem for programs of ever-increasing length (type 3 process).

    In a nutshell, my hypothesis is then that consciousness requires something of either the second or third type, which explains the difficulties we have in understanding it, while still staying firmly within naturalistic/physicalistic concepts. The positive argument I’ve made towards this conclusion (as opposed to merely arguing for it via lack of alternative) can be summarized in the following form:

    P1) (Premise) Explanations for consciousness run into troubles with infinite regresses, such as the homunculus problem
    P2) (Premise) A certain hypercomputer, the Zeno machine, can ‘run through’ infinite regresses
    C3) (Conclusion, by P1 and P2) Hence, there is a hypercomputable function giving rise to a mind
    P4) (Premise) Every hypercomputable function can be computed using a finite-size algorithm and a (certain) random number
    C5) (Conclusion, by P4 and C3) A finite size algorithm together with a (certain) random number can give rise to a mind
    P6) (Premise) There exists randomness in the world
    C7) (Conclusion, by P6 and C5) Finite size algorithms utilising environmental randomness can give rise to minds.

    So now bring out the big guns, and have at it!

  71. Sergio (71): No, you actually fall quite wide of the mark of BBT–which turns entirely on understanding heuristic neglect (and its pervasiveness). The position you describe toward the end amounts to a version of Type B materialism, which actually has no systematic way of diagnosing why we find ourselves in the epistemic straits we do. Your explanation for the mysteriousness of qualia is definitely ‘BBT-ish’ in that you assume (with type-B materialism) that the problem is epistemic, but lacking any account of how this is the case allows you to fudge in favour of things like ‘redness’ or ‘aboutness’ and what have you. BBT *dissolves* all the neural identity-thesis problems you take yourself to be solving: for instance, ‘beliefs,’ on BBT, are not simply the ‘brain seen darkly,’ but a way of communicating an inaccessible brain to other inaccessible brains to solve a varied, restricted set of problem-ecologies. Searching for NCCs of ‘belief’ amounts to systematically misunderstanding what beliefs are (a way to carve a problem at the joints, not the brain).

    For a while there I thought you and I were having a version of the debate I’ve had with Eric Schwitzgebel off and on regarding the ‘prudential value’ of intentional idioms. My worry is quite simply that adverting to intentional idioms cues the application of intentional cognition, which, as a way of getting around brain blindness, solves by getting around the facts. We should only expect it to generate controversies if we rely on it too heavily. What we need, I think anyway, is a thoroughgoing zombie account of intentionality and experience, something we can then use to reference the potential problems generated by our short-cuts.

    But there are too many claims you make that I cannot understand in terms of this debate. The fact is, your account *really isn’t evolutionary enough,* in that you fail to take explicit account of the heuristic–which is to say, thoroughgoingly ecological–nature of human cognition. Your approach is quite traditional, inferential as opposed to ecological, which I fear simply delivers you to the same old problems.

    I appreciate you need to do this for dialectical reasons, to even pull up a seat at the table, but I think the problem is much more radical than you conceive.

    What, for instance, is your solution to the problem of evaluative properties? Positing an EM is all well and fine, but without explaining the difference between correct and incorrect in naturalistic terms, human evaluation remains entirely mysterious. This essentially means you have no determinate concept of ‘cognition.’ Where do you take yourself to be succeeding where, say, Dretske or Millikan or Fodor fail?

  72. John Davey 47: “But true nonetheless. Obscure quantum effects like tunnelling – where a small electron can jump over a massive hill without any explanation – literally keeps the grass growing, as it is a key component of photosynthesis.”

    But if it’s really not an ‘electron’ but a wave or a string, is the hill so ‘massive’?

  73. VicP: Well, waves we have measured, but it seems physicists are leaving strings behind, along with the unprovable multiverse?

    Yet it’s not clear to me how wave/particle duality would take the weirdness out of quantum tunneling. That said, AFAIK there’s as yet no proof that quantum tunneling is part of the explanation for consciousness, though IIRC Stuart Kauffman has suggested superposition might be, though he may have been speaking metaphorically.

    Finally, I thought it was superposition that made photosynthesis efficient, while quantum tunneling played a role in the Sun’s fusion and possibly in enzyme action as well?

  74. Addendum to the last post – wasn’t trying to say John Davey claimed quantum tunneling had anything to do with consciousness. Was just trying to make my post more germane to the blog. 🙂

  75. Scott (77)
    That’s fair enough, I think I get what you’re saying now. Will think a little more and get back to you. Probably a long affair, both in terms of word counts and time to get there, so please don’t hold your breath.

    Jochen (76) Thanks, I needed a little lesson and you gave me exactly that. Considering how different our backgrounds are, I guess we have to endure the didactic side, and I never cease to be glad we somehow manage to understand one another (sometimes). You’ve clarified your position and painted a very clear target for my cannons. In the meantime, I’ve ordered the iron and I’m moulding the casts. If all goes well I’ll have some usable cannons shortly; I’m planning to fire them over the weekend! Should be fun (hopefully for both of us).

  76. Sergio (81) – As always when rereading my comments, I cringe at being so unrelentingly critical, and not really all that informative. I’m presently closing what I think is the final (armchair-closable) hole in blind brain theory. I could use your feedback when I post on it in a week or two… I think it’ll cast neurobehaviourism in a pretty difficult-to-contradict light. And I think it’ll provide you with a way to see why intentionality and phenomenality, the hard problems of content and experience, rise and fall together. Take a look anyway…

  77. Scott (82),
    Nothing to worry about regarding the directness of your criticism. It’s exactly what I’m asking for. I knew you were working on a renewed version of BBT, and I was already trying to fill in the blanks of what you write here (and elsewhere) with what I expect you to write in the latest version.
    More to the point, if you wish, I would be very happy to read what you have and provide feedback before you publish it, no strings attached (assuming I’ll be able to fit in your schedule, which I can’t guarantee). You should have my email address from my comments in your own blog, please feel free to use it!
    Re the contents: at this time I can’t predict if I’ll be disagreeing with you only on a few details or on the general architecture and conclusions. Based on the indirect information I have it could go both ways. However, I do have high hopes that your renewed version will be both more mature and more accessible to ordinary neuroscientists. At the very least, I could offer some comments on neuro-friendliness which I assume is among your aims.

  78. Scott –

    Could you elaborate on:

    What we need … is a thoroughgoing zombie account of intentionality and experience

    I thought we were essentially on the same page re intentionality – for me a useful vocabulary, for you a useful heuristic. But if so, a “zombie account” – which I take to mean “purely physical” – doesn’t seem to fit. Which of my assumptions is off?

    I eagerly await your “neurobehaviorism” post since that term perfectly fits how I view these issues.

    Just out of curiosity, are you familiar with Davidson’s essay “Three Varieties of Knowledge”?

  79. Charles (84): “I thought we were essentially on the same page re intentionality – for me a useful vocabulary, for you a useful heuristic. But if so, a “zombie account” – which I take to mean “purely physical” – doesn’t seem to fit. Which of my assumptions is off?”

    Once you’ve accepted that human cognition is both fractionate and ecological, then the big problem becomes one of matching cognitive modes with problem ecologies, understanding what the limits are, what kinds of problems signal misapplications, and so on. When it comes to the brain, the problem is simply complexity, and the way complexity cues the kinds of ‘inherence heuristics’ (to use Cimpian’s and Salomon’s term) we are prone to rely on. The best way to accomplish this, I think, is to have a zombie account of cognition in place, and thus an understanding of the scene that intentional idioms, for instance, strategically parse.

    It’s only the absence of this account that makes redefinitional strategies like Dennett’s remotely plausible, I think.

    Check out: https://rsbakker.wordpress.com/2014/03/24/davidsons-fork-an-eliminativist-radicalization-of-radical-interpretation/

  80. OK, I’ve now read your Davidson and Kriegel essays, which confirm my suspicion that “zombie account of intentionality” misled me but we are indeed on the same page. Although I’d prefer something like an “account of behavior using a physical vocabulary instead of the intentional idiom”. I don’t care for the word “intentionality” since I’m often unsure whether it’s being used in the sense of “aboutness”, as indicating the intentional idiom, as suggesting intent (eg, “I intend to have dinner at 7PM”), or some mixture of those.

    In any event, to the extent that I understand those essays, I’m on board with both, although I might be a bit more, uh, charitable towards Davidson’s principles of charity, which in one essay he calls “consistency” and “correctness”. Whether or not they’re attributed by an interpreter, they clearly need to be in place for interpretation to proceed. Eg, the interpreted zombie’s response to visual sensory stimulation by light with a certain spectral power distribution (SPD) had better almost always be the same utterance (consistency) and the response to any substantially different SPD had better almost always be a different utterance (correctness).

  81. Charles (86): Charity is simply the intentional stance. The question is one of what it explains. They really are spectacularly uninformative once you start picking at them, and both constructs have been seized upon by the Sellarsian normativist gang for precisely this reason: they promise some deflationary point of contact with the physical world that seems to license the elaboration of normative metaphysics and other Wittgensteinian nonsense.

  82. As DD uses the term, I agree since he talks about attributing the principles to the one being interpreted. I’m just noting that the components as described in the essay (A Coherence Theory of Truth and Knowledge) seem essential to successful triangulation even if there is no attribution, especially during the initial steps when the participants are merely trying to match up single words with features of the world. And I’m assuming that the behavior suggested by those principles (consistency and correctness) can be explained in physical (evolutionary, perhaps?) terms – ie, that at that basic level of interpretation, a “zombie account” is possible.

    Maybe I should double check that we are indeed on the same page. I understand your suggested approach as being to try for a “zombie account” of basic behavior in purely physical – mechanical, if you will – terms and then argue that behaviors suggested by the intentional idiom – eg, belief – will gradually become understood either in physical terms as the relevant sciences develop or as heuristics that may be useful in some discourse (eg, social science?) but should be abandoned in scientific discourse. And I understand your focus on “cognitive neglect” as being a way of explaining why armchair philosophizing on these matters is futile since more and/or better introspective insight into the relevant neurology is inherently impossible.

    Is that at all close?

  83. @Jochen (76)
    Time to try giving your idea a good shake, hoping that I did get the theoretical side right, this time.
    I have concerns at all levels:
    At the highest, I worry that we don’t need your kind of explanation, as we can reach at least one of your meta-conclusions in much more pragmatic ways (W1). At the medium level, I fear there is a mismatch between what you are proposing and what is likely to happen in the real world (W2). Finally, I think you need additional premises to reach your conclusion, and crucially, they aren’t self-evident at all (W3).
    Despite all the worries, I still think you’re onto something, so I’ll finish up with a positive note :-).

    (W1) One of your meta conclusions is:

    the computable doesn’t seem to be able to do what we want it to (also for the reasons I’ve been giving in prior comments), so we just take the next small step forwards: we hold fast to the entirely physical nature of the world, but allow for it to outstrip the capacities of any finite machine. The appeal here is not so much explanatory, but rather the observation that if things were that way, they would seem mysterious to us in exactly the way they do in fact seem mysterious to us.

    This refers to Mary’s story, and the fact that to us, it is intuitively true that even if she knew all the physical details about colour vision, she would still learn something new when she saw colours for the first time. I do think your injection of randomness, and more specifically, unpredictability, is indeed part of the picture (more below), but I don’t think we need exotic concepts to explain our intuitions about Mary. In fact, I think our default intuitions about Mary are correct, but that there is nothing mysterious about them.
    You used the example of a CRT television, and how it’s impossible to hold all the details in our head so as to precisely predict what will appear on the screen given an input signal. However, we can still grasp the principles of how images are formed; thus, we are not inclined to consider CRT screens mysterious. I agree. But this isn’t a satisfying answer to my original objection. Imagine showing a TV to someone from the 15th century, while offering no explanation. Actually, let’s imagine we sent back to the 15th century (anywhere in Western Europe) a bunch of walkie-talkies, along with instructions on how to use them, but no explanation of how they work internally. What consensus would form around them? My guess is that they would be considered solidly supernatural, most likely the direct making of the Devil himself. Why? Because no-one would have a clue about any of the basic principles needed to explain how walkie-talkies work.
    When it comes to explaining minds, we are in exactly the same situation: we don’t have an (agreed upon) clue. Mary’s story exploits this total ignorance to reach an unwarranted conclusion. Your idea of a TV allows us to show why: imagine I give you a TV, the full, perfect description of how it works, with all the detailed maths. I also give you a mathematical description of a signal, and ask you to work out what the TV would show when receiving that signal. You do your calculation and work out that it would show the usual tree (experience A). You then send the signal to the TV, see the tree and enjoy the satisfaction of a job well done (experience B). Now, who would seriously suggest that experiences A and B feel the same? Even excluding the circumstantial emotional responses, working out the maths and actually seeing a tree on a screen ought to feel different, because the input signals are radically different. So your understanding of TVs allows you to consider them purely physical and non-mysterious, but to know how watching TV feels you still need to watch it. The same goes for Mary and colours: assuming she knows about all the mechanisms, she still needs to see colours in order to experience the sensation. There was no mystery in the TV case, and (assuming the easy problems are all solved in perfect detail) there is nothing left unexplained in Mary’s case either. In our case, we know close to nothing about what transforms photons hitting the retina into Phenomenal Experience, so we conflate our ignorance with Mary’s and think that what Mary learns when seeing colours for the first time is the consequence of an unbridgeable gap; but the gap is the same as in the TV case: knowing that you’ll see a tree is, and has to be, very different from seeing it. I could continue, but the end of this line of reasoning points directly towards dual-aspect monism, and the difference that there has to be between theoretical explicit knowledge and direct experience. Different ways of knowing produce different pictures; that’s what we get and what we should expect.
    Mary’s story thus becomes the logical consequence of realising that explicit knowledge is, and always will be, different from direct experience (in a computational theory of mind, it is a necessity), and there is nothing particularly interesting about the difference. Now, the unpredictability of the precise input is indeed part of the story (which is why I do like your account), but it is not necessary in this case.
    Conclusion of (W1): we currently have no reason to believe the explanatory gap isn’t the result of our abysmal ignorance and is instead a gap that has to exist in principle (beyond the difference we already expect to exist between knowing the TV will show a tree and actually seeing it). Therefore, your conclusion about Mary looks superfluous to my eye (which doesn’t mean it’s wrong! It can be superfluous and right, there’s no law against it).

    (W2): if I understand you correctly, the brain (a mechanical, Turing-like computer) has access to an algorithm which, when confronted with the infinite randomness embedded in the sensory input, can perform computations that are otherwise impossible for Turing-like computers. In detail, when supplied with a precise random number (a precise but unpredictable infinite sequence of digits), the resulting computation is equivalent to a hypercomputation (what could be performed by a Zeno machine) and Bang! a mind happens. I actually have two problems with this account: (W2.1) is about likelihood, (W2.2) is about explanation.
    (W2.1): what happens to a brain that works as you describe when the input is the wrong random number? What if it isn’t a random number at all? Do you still get a mind? Pragmatically, it seems that the brain gets different inputs all the time, and you seem to assume it will always get the right number (the precise one you mentioned) and/or that somehow the result generalises to all numbers. You may have an explanation for this objection, but I haven’t spotted it.
    (W2.2): Furthermore, we are assuming that the brain (a mechanism) has access to the infinite precision of the input, and as far as I know, we have no idea how a mechanism could harness this in computational terms. Thus, to me it seems you are kicking the mystery out of most people’s sight, but not eliminating it; you’re merely hiding it (a big no-no, I guess). W2 in short: what you propose seems very unlikely to be possible, and if it is possible, we’d still need an explanation of what makes it possible. Add to this (W1), the fact that your main explananda can be explained without leaving a detail unexplained, and you should understand my scepticism.

    (W3) If the above wasn’t enough, I don’t even think your premises are complete. Specifically, your “P1 + P2 => C3” sequence isn’t warranted. The most blatant version of the homunculus fallacy explains nothing at all, we probably agree: if all the explananda are moved onto the homunculus without even being reduced, you do get an infinite regress, but you get one that doesn’t converge to anything. Thus, a Zeno machine would indeed halt, but still produce nothing useful. To get some meaningful results, at each step in the regress you need to get closer to the desired result. The explanations that we need have to be better than just saying “the homunculus watches the Cartesian theatre”: at each step we want to explain something; if we don’t, we are just not making progress. Therefore, we are interested in recursive explanations that explain something at each iteration.
    Within this class of explanations, two possibilities are present: in one case, we need to perform N cycles, at which point we are left with nothing to explain. In this case, we don’t need a Zeno machine at all. In the other case, at each iteration our explanatory power diminishes proportionally to what’s left of the explananda, in such a way that we would need infinite iterations to explain the whole thing. Your whole structure is necessary for the second case, but not the first. In other words, you need to add something to explain why we are necessarily in the second case and not the first (W3.1). In the absence of any agreed explanation, I don’t see how you can.
    Secondly, there is even a problem with my objection, which exposes a problem in your own logic: P1 is epistemological “Explanations for consciousness run into troubles with infinite regresses”, it’s about how we explain a phenomenon. From this, you derive an ontological claim: “there is a hypercomputable function giving rise to a mind”. In my objection above, I talk about explaining, and assume that the explanation is equivalent to the phenomenon itself; I jump, as you seem to do, from an epistemological claim to an ontological conclusion. The only problem is that this unwarranted step is the same one that allows you to reach C3, and I don’t see how we can grant it at all. When trying to explain consciousness and minds in physical terms we do tend to hit recursive problems, but this could be entirely the result of our cognitive limits, and/or our current ignorance. Concluding that therefore minds arise when and because infinite recursions are performed via a finite sequence of steps is just an unjustified leap; we don’t know if/why/when this would be the case (W3.2).

    So, to sum up the objections, here is your revised schematic summary, with my issues in bold:
    P1) (Premise) Explanations for consciousness run into troubles with infinite regresses, such as the homunculus problem
    W3.1) We don’t know that infinite regresses are necessary. Some explanatory regresses might never converge (fail to explain), some others might not be infinite at all (are computable).
    P2) (Premise) A certain hypercomputer, the Zeno machine, can ‘run through’ infinite regresses
    C3) (Conclusion, by P1 and P2) Hence, there is a hypercomputable function giving rise to a mind
    W3.2) Reaching C3 requires assuming that our explanation necessarily implies an infinite regression. It also assumes that our explanation will somehow explain why a particular infinite regression is necessary and sufficient to generate a mind. Neither assumption is obviously true.
    P4) (Premise) Every hypercomputable function can be computed using a finite-size algorithm and a (certain) random number
    C5) (Conclusion, by P4 and C3) A finite size algorithm together with a (certain) random number can give rise to a mind
    W2.2) Reaching C5 requires explaining how brains exploit the infiniteness of the (certain) random number.
    W2.1) We also still need to explain how the brain fetches the right (certain) random number, or alternatively why pretty much any random number would do.
    P6) (Premise) There exists randomness in the world
    C7) (Conclusion, by P6 and C5) Finite size algorithms utilising environmental randomness can give rise to minds.
    W1) Given the gaps in the above, we can at best explain why we find it cognitively difficult to bridge the explanatory gap. However, we don’t need the above to explain that.

    All we need is:
    (A1): Assumption: cognition is about information processing. Specifically, it’s about decision-making.
    Thus, cognition is equivalent to lossy compression. The whole game is about reducing the amount of raw data necessary to describe what we’re interested in.
    (CX) Conclusion: cognition can’t cognise itself because lossy compressions are by definition irreversible.
    The overall result is that I’m afraid your reasoning doesn’t explain much, and what it does explain can be explained in many alternative and simpler ways (I’ve employed two).
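    To make the irreversibility point in (CX) concrete, a toy sketch (purely illustrative, and of course nothing like real neural coding): a quantiser maps many distinct inputs to the same output, so no decompressor can recover what was thrown away.

        def lossy(signal: list[float], step: float = 0.5) -> list[float]:
            # Toy lossy compression: snap every sample to the nearest multiple of 'step'.
            return [round(x / step) * step for x in signal]

        a = [0.10, 0.20, 0.30]
        b = [0.12, 0.24, 0.26]
        assert lossy(a) == lossy(b)   # the difference between a and b is irrecoverably gone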

    However, I do think you are still onto something: the randomness of the input does generate unpredictable results (how things feel), and effectively makes every experience unique and irreversible. Thus, it is true that we can’t produce a complete explanation of how a particular feeling feels, because we lack the ability to algorithmically describe the random side. I think this is your core intuition, and I agree with it. This means that we don’t explicitly know what red feels like; we know how to recognise red (h/t to Charles), and that is all we need to know. This in turn makes us think we know how it feels, and we do, but we only do in an implicit, non-verbal, and perspective-dependent way. Thus, the feeling of red is private, ineffable, and all that. Once you understand cognition in terms of lossy compression (or, with BBT, systematic neglect – Same Thing!), things fall into place.

  84. Hi Sergio, thanks for the shakedown! I’m grateful somebody takes my ideas seriously enough to engage with them. But (perhaps not surprisingly), I think that upon closer inspection, your worries evaporate/attack a proposal different from the one I was making.

    So let’s take things in order. First, I agree with you that in the 15th century, people would likely have thought walkie-talkies to be magic, and you probably would’ve gotten a rather uncomfortably warm reception for using one. But the point of the Mary story, as I understand it, is a different one, because we’re in a radically different epistemic situation from 15th century peasants burning witches: unlike them, we know the relevant sort of processes that occur in the brain. The 15th century peasant (or even the 15th century philosopher) has no notion of electromagnetism, no theory of matter, nothing with which to gauge what is physically possible, and what isn’t. Claims of possibility were merely claims of familiarity: had they grown up with walkie-talkies, these wouldn’t have seemed strange to them at all; it was just one more phenomenon lacking a full understanding, not really any different from lightning, fire, gravity, and so on.

    We, however, do know the relevant physical processes that occur in the Mary story, even if we’re ignorant of the neurological details. It is the fact that there remains a mystery nevertheless that is puzzling. There is a qualitative difference between this problem and other as-yet open riddles in that with most of the latter, we at least understand how a solution might be possible. But the Mary story is puzzling because even though all of its aspects are completely transparent, we still don’t see how the phenomenon comes about. A 15th century peasant fails to explain walkie-talkies because their functioning is wholly opaque to them; but the sorts of things brains do, their material, physical basis, are completely clear to us, and nevertheless there is no hint of how this could conceivably give rise to phenomenal experience. There is no hard problem of walkie-talkies for the 15th century peasant, there is simply an incomplete understanding of their physics. The hard problem is hard, not due to our ignorance, but precisely due to our knowledge!

    This is where my proposal comes in: if I’m right, then there are simply some things we can’t conclude even from knowing the full set of physical facts. In fact, we know that this is the case already: certain physical phenomena can be mapped to undecidable problems (occurrence of quantum measurement in a given channel in a specific experiment, ground-state relaxation for certain systems, etc.), and thus, there is no way by which we could, even given full knowledge of the requisite systems, derive these phenomena. So I merely, and modestly, suggest that phenomenal consciousness may likewise be such a phenomenon, and we immediately get to have our cake and eat it, too: we can accept the problems posed by the Mary story, and by zombie arguments or inverted spectra (which are neatly resolved in the same way), without having to contort ourselves into believing implausible stories on which some subtle thing maybe goes on if we just pile up enough complexity, which then somehow sparks an experience into existence; nor do we have to accept the horn of mysticism or dualism. We get meaningful conscious experience in a natural world.

    imagine I give you a TV, the full, perfect description of how it works, with all the detailed maths. I also give you a mathematical description of a signal, and ask you to work out what the TV would show when receiving that signal. You do your calculation and work out that it would show the usual tree (experience A). You then send the signal to the TV, see the tree and enjoy the satisfaction of a job well done (experience B).

    If I understand your analogy here right, then I think it misses the core of the Mary story: Mary is not tasked with recreating experience A, but directly with experience B; and the puzzling aspect comes from the fact that there does not seem to be any story such that Mary can infer the what-it’s-like of having set up the right sort of neuronal signal, because that is of course itself just a neuronal signal that should be a priori decidable if any is. That is, if the physical details of the story entail the what-it’s-like, then Mary should be able to understand what it’s like just from knowing those physical details, just as much as from a complete explanation of tying your shoes you should be able to understand how to tie your shoes, even if you have no shoes at hand (and possibly have never even seen shoes). That there apparently is a difference between explicit knowledge and direct experience, while this difference does not seem to apply to other areas, is exactly the problem.

    Moving on to your point (W2), first, to clear up a misunderstanding: the brain does not need any sort of infinite precision access to a random number; as I’ve pointed out in my previous post, it merely needs to have access to a continuous source of digits of a given random number. Think about the Maggie case: at each instance, she only needs one random bit, and thus gains capabilities beyond any computational system.

    Also, the likelihood that a finite algorithmic system with access to a random source performs a noncomputable function is exactly one: the number of computable functions is equal to the cardinality of the natural numbers, while the number of all functions from natural numbers to themselves is equal to the cardinality of the continuum, hence, every function picked at random will be noncomputable. The question is, will the hypercomputation being performed be of the right sort to give rise to a mind? Well, here, ultimately my proposal is that it’s the finite algorithm that decides if there is a mind, in the same way that Maggie decides whether she uses her randomness to catch criminals, or just to choose drinks at a bar. Different random numbers will then give rise to different minds, to different experiences, different streams of consciousness: in other words, the difference between me and you is merely the environmental stimuli we receive. So there is no ‘wrong’ random number in this sense; getting a different number just means having different experiences.
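    (Strictly speaking, the step from cardinality to probability needs a measure-theoretic gloss: the computable functions form a countable set, and any countable set of infinite digit sequences has measure zero under the uniform product measure, so

        \Pr[f\ \text{computable}] = 0 \quad\text{and hence}\quad \Pr[f\ \text{noncomputable}] = 1.)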

    As for (W3): I do in fact mean the version of the homunculus that explains nothing at all on every level (and I believe that this is the only homuncular account possible). That’s why we need the limiting process. Somewhat figuratively, just as the product of two functions can converge to something finite if one goes to zero and the other to infinity, we get an explanation out of an infinite series of zero-explanation steps.
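    (In symbols: a_n \to 0 and b_n \to \infty are compatible with \lim_{n\to\infty} a_n b_n = 1, e.g. a_n = 1/n, b_n = n; vanishing per-step explanation over an unbounded number of steps need not amount to nothing.)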

    The key to the explanation is that if one were to suppose that the level-2 homunculus actually generates phenomenology, then the experience of the level-1 homunculus is explained; generalizing this, the level-n+1 homunculus explains the level-n homunculus’ consciousness. This leads to the infinite regress that superrecursive methods are actually capable of traversing.

    P1 is epistemological “Explanations for consciousness run into troubles with infinite regresses”, it’s about how we explain a phenomenon. From this, you derive an ontological claim: “there is a hypercomputable function giving rise to a mind”.

    Hm, I’m not sure if I’d consider a function part of ontology—it’s ultimately merely an abstract concept. Anyway, the thrust of my logic there is somewhat inverted to what you take it to be: since we can form an explanation of consciousness that includes an infinite regress, traversing an infinite regress is sufficient to construct a process giving rise to a mind—I make no claims about necessity, just that we know that if we have such a process, then we can make our explanation for mind work. The explanation is not equivalent to the phenomenon, but having the explanation suffices to construct something bringing about the phenomenon. The explanation of tying your shoelaces is not equivalent to tying your shoelaces, but having that explanation enables you to do it.

    So, in your revised scheme, my counterarguments run as follows: against W3.1) and W3.2), I point out that I don’t need necessity, but merely sufficiency—if it is sufficient to traverse an infinite regress to create a mind, then Zeno machines have the capacity of doing so, and hence, hypercomputational processes exist that give rise to minds. W2.2) is a misunderstanding: the brain does not need access to any infinite random number. W2.1) is right to a degree, but I think not fatal: if we model the world as containing both random and algorithmic processes, then it only needs to be big enough to contain the right algorithmic process sampling the right random number; and it may be the case (if my above speculation is true, which admittedly I can’t establish at this point) that whether or not we get a mind merely depends on the algorithmic part—randomness is the yarn, and the algorithm decides how it is spun.

    W1) then I still think rests essentially on a misconstrual of the Mary story: as I said above, it is not the opaqueness of our brains that makes consciousness seem mysterious, it is their transparency. Like an empty lake where you can see all the way to the bottom with perfect clarity, while you nevertheless keep hauling fish in.

    As for your:

    (CX) Conclusion: cognition can’t cognise itself because lossy compressions are by definition irreversible.

    I don’t think that follows: understanding ourselves does not mean having a full micro-level account of all our inner workings; in fact, that would probably be more obfuscatory than anything else. No, we only need the executive summary: as in the case of the CRT TV, I can’t account for the precise way that this particular picture is produced, but I can give a description of what processes occur in picture production; and this is the explanation needed wrt the mind, not some monstrous account of our detailed micro-level workings. And I don’t see how this sort of thing would in any way be precluded simply by an appeal to lossy compression. In fact, it’s lossy compression—leaving out irrelevant details—that makes us able to explain anything besides the mind-numbingly trivial at all!

    To close this, a few words on my own view. The argument I’ve made in my last post is actually something that only came to me fairly late in the game, and I feel I’ve relied on it too much; I think the view’s appeal is rather just how neatly it is able to explain all of the explanatory gaps we keep bumping against in our attempts to explain consciousness, without diminishing its phenomena. Now, you seem to be saying that maybe it isn’t really that much of a gap, or perhaps it’s just a little bit wider than we can jump at the moment; but you’ll also have to agree that this is hardly a universally held view.

    Rather, a sizeable fraction of the people thinking about these things are sufficiently impressed by the arguments for such a gap as to consider increasingly desperate measures to cope with it, from the extremes of eliminativism (‘there is no gap; in fact, there isn’t even an other side!’) to dualism (‘there is a gap, and it can’t be bridged’). I don’t think any of these proposals would be taken seriously, were it not for the extremely dire situation in which we find ourselves with our current attempts at explanation: either, we give up intelligibility, but we do retain some meaningful consciousness; or, we have a story that’s straightforwardly appreciable, but that falsifies those things we seem to experience most clearly. I think that something like my proposal might represent a chance at retaining both—a reality that is ‘of one piece’, and that nevertheless contains meaningful consciousness, at the relatively small price of us being unable to grasp the exact details of the gap-bridging explanation. So that’s why I like this idea. It might be a long shot, but hey—everybody needs a hobby.

  85. Charles (88): “Maybe I should double check that we are indeed on the same page. I understand your suggested approach as being to try for a “zombie account” of basic behavior in purely physical – mechanical, if you will – terms and then argue that behaviors suggested by the intentional idiom – eg, belief – will gradually become understood either in physical terms as the relevant sciences develop or as heuristics that may be useful in some discourse (eg, social science?) but should be abandoned in scientific discourse. And I understand your focus on “cognitive neglect” as being a way of explaining why armchair philosophizing on these matters is futile since more and/or better introspective insight into the relevant neurology is inherently impossible.”

    Not quite. The eliminativism comes in because I think that heuristic neglect severely curtails the use of intentional idioms. Since those idioms cue very specific patterns of thinking, we need to be wary of the ways they can confound or otherwise impede theory formation. Having a zombie account provides a yardstick of sorts, and alerts us to a great variety of alternative ways to investigate a phenomenon, or not. Think of how easy it is to discount the whole first person authority debate on a zombie account of interpretation.

  86. No quarrel with that.

    I intend to reread more carefully those two posts on your blog and may have some questions. Do you get flagged on new comments to old posts?

  87. Charles: Yes I do, though I tend to get behind in my housekeeping, so drop me a line if you think I’ve missed anything. I highly recommend Stephen Turner’s work as well. He’s a rare and brilliant bird, deserving of far more attention in analytic debates, I think.

  88. @Jochen #90,
    sorry I’ve remained silent. I’m still here, only monstrously busy, and can’t find half the time and clarity to produce a full reply. (Will be like this until the second week of October, I’m afraid)
    To keep you from getting bored (if needed!), there is one quick thing I can say:

    The hard problem is hard, not due to our ignorance, but precisely due to our knowledge!

    This is precisely where we depart, I think. Having a not so vague idea of what we know and what we don’t about biology in general and neurobiology in particular, I couldn’t disagree more with your statement above. The same applies to:

    We, however, do know the relevant physical processes that occur in the Mary story, even if we’re ignorant of the neurological details.

    To my eyes, this sentence is macroscopically false. We know a few things, and we think they are important to figure out how brains work, but the list of known-unknowns is endless, and one can only speculate about what this means about the unknown unknowns. I’ll write something more detailed for you on this, but very relevant hints are here. If you have time and motivation, do see if you agree and/or what you disagree with, then we can look at more details.

  89. No worries, Sergio, do take your time; I’ve set things up so I get an email on new comments here. I’m at a bit of a lull right now, so I’ll just take the opportunity to maybe flesh out why I said what I did there.

    Basically, the reason is that it seems we have good reason to believe the following three things:

    (1) Gears and levers don’t suffice for phenomenal experience.
    (2) Putting together lots of gears and levers doesn’t engender qualitatively new properties.
    (3) At the bottom, it’s nothing but gears and levers.

    Obviously, I’m using gears and levers metaphorically here, to stand for any kind of mechanisms—i.e. any kind of simple physical systems, like those we believe the world fundamentally consists of (bosonic and fermionic fields, if you will). Number one seems pretty clear to me—panpsychists be damned, my thermostat doesn’t feel anything. Number two is just the thesis that there is no strong emergence—again, something that looks pretty reasonable to me; and in fact, I would find it hard to reconcile with physicalism or naturalism if (2) should be wrong, due to the radical failure of the supervenience thesis. Number three, then, is the advantage we have compared to 15th century peasants—the knowledge we now have about the world that they lacked.

    Together, the three theses then imply that the way the brain is put together (so to speak) doesn’t matter in detail—because we know what it’s built up from, we know there’s no way to jiggle those parts to get something essentially new out. That’s the strength of stories like Mary’s, and why I say that they depend on our knowledge, rather than our ignorance.

    Now of course, I’m not claiming that we actually know that those three theses are true, either on their own, or taken together. I’m merely saying that they’re reasonable, based on those things that we do know. All of them can be denied, but all of these denials have, to me at least, the disadvantage of having to overcome a charge of being, in some sense or other, unreasonable, which I think my account doesn’t face. (1) is denied by panpsychists, and leads to conscious thermostats (or proto-phenomenal or proto-experiential ones, or whatever verbiage is used to circumvent the basic feeling that that can’t be right). (2) can be denied in various ways. Most common, I think, is the strategy that I call ‘opaque complexity’: maybe, if we pile up really really really many gears and levers, something somehow happens somewhere, and poof! Consciousness. Maybe an analogy to water is given: water is liquid, while individual H2O-molecules aren’t. New properties can emerge from parts that don’t have them, so why not consciousness?

    But that overlooks the fact that what makes water liquid in fact does inhere in individual H2O-molecules—it’s simply in the way they couple together. Sand can flow if you make it fine enough; so water is liquid simply by virtue of being made up of tiny constituents that don’t adhere to one another very strongly. If you’d couple grains of sand to one another with tiny springs, you’d get something like a solid. There’s no mystery there at all. But what are the properties of gears and levers that could conceivably give rise to experiential properties?

    Finally, (3) is denied by the dualists and other mystics.

    Now, of course, nothing here amounts to a definite rebuttal of those attacks on the three theses—dualism could be true, as could panpsychism; and there could be properties emerging from just the right clustering together of stuff that somehow come to ‘feel like’ something. All I want is merely to point out that they’re, in various ways, unreasonable—which doesn’t make them false, of course; the world doesn’t have to conform to what I consider to be reasonable (indeed, it often doesn’t). But all else being equal, I would opt for something that seems reasonable; and to me, my approach does—indeed, once you’ve realized that the combination of finite algorithms and randomness leads to hypercomputation with near-certainty, and that there are hypercomputations leading to minds, it seems damn near unavoidable to me. But then I would say that, wouldn’t I?

  90. VicP

    “But if it’s really not an ‘electron’ but a wave or a string, is the hill so ‘massive’? ”

    QM’s preferred semantic model – if you insist that physics requires such, which it certainly doesn’t in order to work – is of wave-particle duality. That is, an electron (or indeed any other massive body) can be “found” literally anywhere in the universe (although not exceeding the speed of light) according to a wave function. Thus an electron remains figuratively a particle – it can only be measured at a specific point – but we have literally no idea where it is until we “find” it next time. In other words we can’t use its speed and direction to guess where it will be next.

    If a football is at the bottom of Mount Everest then there is a probability that, in the next second, it will be found on the summit. Not a very large one, it has to be said, but there is a theoretical possibility. That may be surprising, as clearly there is a huge amount of energy required to move a ball from the bottom of Everest to the top. And that is what an electron is doing a lot of the time: being found where it shouldn’t be, at the other end of a large energy barrier. Personally I don’t think semantic imagery suitable for human digestion is necessary for physics. It’s a smokescreen: physics is by nature not designed to be intelligible, but merely to relate one metric to another according to mathematical calculus.
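    For what it’s worth, the standard textbook estimate makes the scale explicit: for a barrier of height V_0 and width L, a particle of mass m and energy E < V_0 gets through with probability roughly

        T \approx \exp\!\left(-\frac{2L}{\hbar}\sqrt{2m\,(V_0 - E)}\right),

    and it’s the mass sitting inside that exponential which lets an electron tunnel routinely while a football facing Everest has, for all practical purposes, no chance at all.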

    J

  91. @Jochen #90 & #96
    [Peter: please let me know if/when my long ramblings start looking excessive to you!]
    In this kind of discussion there is always the risk of diverging into an ever-increasing number of smaller threads and losing track of the main topics we are trying to tackle. To try and help us avoid this pitfall, I’ll do two things: first, I’ll state explicitly what I’m trying to achieve, at the risk of sounding patronising (apologies if I do!). Second, I’ll select what I see as the main areas of disagreement and let a few secondary issues drop. Please do feel free to re-open whichever line of discussion you think we should pursue: my aim is to be useful to you, so I’m happy to follow your initiative.
    My intention: what is it that I’m trying to do? I’m highlighting the issues I see in your approach, because from direct experience, I know that it’s very hard to spot the weak points in one’s own reasoning/theories. I rely on other people doing the same for me, so I am eager to reciprocate. My aim is to help make your effort more solid, with the expectation that we’ll end up disagreeing just the same. [Secondarily: after writing what follows, I can now see that I am also trying to spell out in clear language some intuitions that have gripped my mind for years.]
    I took my time reading and re-reading our previous exchange: we touch on so many topics that it’s indeed difficult to avoid getting lost in unnecessary details. My points W1–3 were my attempt to keep us focused, but they could have worked better. From your replies, I see two main areas where we depart. The first is around Mary, and within it we also part ways in assessing our current (collective) epistemological situation (W1). The second is about likelihood/explanation, where I’ll collapse W2 and W3, leaving aside what look to me like minor technicalities (do feel free to re-open secondary threads, if you wish!).

    First Section: Mary and our epistemological position.
    I think we agree on the gist of Mary’s story, but if we do, then you misread my previous objection, so I’ll try to clarify. In my example with a television, if I give you all the technical details, along with a description of the input, you can then work out that the TV will show a tree; I call this experience A. In Mary’s story, her own brain plays the part of the TV, and because she has the full description, she can work out how her brain will mechanically react when it finally receives coloured input; her prediction is thus equivalent to figuring out that the TV will show a tree (experience A, once again). However, I think we agree that if you then actually turn the TV on, and feed it with the expected signal, seeing the tree on the screen necessarily constitutes a different experience (experience B), and that there is no big mystery here. In Mary’s case, experience B becomes actually seeing colours [in your words: “Mary is not tasked with recreating experience A {ex hypothesi, she can}, but directly with experience B”; the whole story is interesting because she can’t produce experience B, even if she knows how it works], but for her as well, experience B will be different from experience A [Mary can wilfully produce A, but needs actual coloured input to get B]. Knowing what will happen in her brain is not the same as letting it happen. Thus, Mary will indeed learn something new from experience B. In the original thought experiment this should allow us to conclude that phenomenal experience can’t be reduced to a mere mechanism – something remains unexplained/unanticipated, there still is an explanatory gap. So far I agree, and I think we are probably on the same page, but the next passage is probably controversial: in Mary’s case, the standard conclusion is that phenomenal experience is irreducible to a mechanistic account and that to reach a full explanation we will need some additional ingredient, over and above standard reductive scientific (third-party) descriptions.
    In my personal interpretation, this conclusion isn’t false per se, but it is misleading. To me, the gap indeed remains open, but not because there is something else out there in the reality we are trying to describe; it is because the descriptive power of reductive accounts is intrinsically limited (as is cognition in general). First and third person accounts of the same phenomenon are epistemologically distinct; there are strong reasons why the latter can’t produce a full, comprehensive account of the former, and the reason is that the third-person account is necessarily an approximation, perhaps the same thing as what you call the “executive summary”: the whole aim of third-person descriptions is to capture the gist of a phenomenon, not the full thing. One can produce more and more precise descriptions, but something will have to be left out (remember my brief mention of lossy compression?). What the standard Mary narrative misses is that, once we have a full explanation of the mechanisms, there are good reasons to expect that these explanations will also predict the existence of the explanatory gap: they will make it perfectly obvious why thinking “receiving this stimulus will activate these neurons in these ways” (t1) is necessarily different from actually activating these neurons (t2), if it isn’t already. With the full description of causal connections it will be self-evident that you can only get (t2) by sending coloured input in; there is no kind of (t1) that can lead to (t2). This isn’t hard to predict, because brains have an architecture shaped by ecological needs: allowing (t1) to produce (t2) would mean that brains could hallucinate at will, and this doesn’t solve any ecological problem, it is actually maladaptive in most scenarios.
    Here the point becomes solidly meta-epistemological: to understand why there is a problem with Mary, one needs to understand and accept something fundamental about knowledge. Third party accounts, all of them, are useful approximations, by definition they can’t capture the full phenomena they describe, and more importantly, we don’t want them to. Maps are handy because they approximate (abstract away) the real territory, expecting that Mary’s map will be so precise as to be equivalent to Mary’s experience relies on a mistaken conception of what the map is.
    Thus, we reach another epistemological point: what I’m trying to express above isn’t new, but it is certainly not universally accepted. Lots of people accept alternative accounts of what explicit knowledge is, and therefore can keep referring to Mary’s story. I’ve said this before (here on CE): we are facing a chicken and egg problem. We can’t agree on an objective (third-party) description of what knowledge is, because we don’t even have a full, agreed upon description of what cognition is; but without this, upholding the value of Mary’s story remains possible, leading to the dire situation that you mention yourself. Thus, we are legitimately allowed to try finding imaginatively new solutions to bridge the gap, while in fact a portion of the gap is unbridgeable, for reasons that are fully comprehensible.
    I can try to explain in an alternative way, using an example that is usually abused, so I’ll do it reluctantly. When discussing these things, the example of vitalism is frequently raised. Once upon a time, when trying to explain what life is by defining its essence, our best explanations relied on the irreducible concept of the élan vital or some equivalent, which was supposed to make living things fundamentally different from inanimate matter. A vast number of scientific advancements gradually allowed us to see clearly that, instead, matter is matter. It doesn’t matter (!) if it’s part of a living organism or not, it still does what matter does. What people usually forget is that somewhere along the way we stopped looking for the essence of life, and became perfectly comfortable with the idea that alive/dead is not a clear-cut dichotomy. I can be dead while some of my cells are still alive, viruses can be in a grey area between living organisms and mere mechanisms, and so forth. Thus, we collectively and mostly implicitly accepted that life has no essence; it’s more complicated than a single dichotomy. I mention this because we can and should acknowledge a few things:
    (L1) We don’t have a full, final and unambiguous explanation of what distinguishes life from death. Thus, our understanding of life is still plagued by its own explanatory gap.
    (L2) We don’t feel the need of such a full explanation because in this case we do know better. Since we have learned many more details, we now see that the explanatory gap is a consequence of how we think. Living things behave differently from inanimate matter, so we use the concepts of “life and death” even if we now know there is no such hard and objective distinction out there. The existence of the “life” explanatory gap is explained by what we do understand about life, together with our understanding of why the idea of life remains useful.
    Thus, no one (?) goes around wondering why we still can’t explain life; we have finally accepted that the whole concept is just an immensely useful simplification.

    When it comes to consciousness and phenomenal experience, the epistemological trajectory we can expect to happen is almost the same, with the additional difficulty that in this case the idea that cognition requires useful simplifications is part of our explananda, and therefore allows lots of people to stumble.

    Coming back to your position, the existence of random (as in: unpredictable) input is indeed part of the game, as it neatly explains why there has to be an unbridgeable gap between the explanation and the phenomenon – to cognise, we want to find useful ways to filter out the noise and retain just the gist, as you point out quite correctly. However, explaining the existence of the gap doesn’t need the notion of hypercomputation, in my view.

    Moreover, we can now tackle your point about our epistemological situation. According to you, Mary’s story is significant because it shows that the hard problem is hard precisely because we know a lot of things. According to me, the hard problem is hard because of our refusal to acknowledge what is not knowable (or: to acknowledge the limits and nature of cognition). However, because we don’t have the final scientific proof of what cognition is, considering the hard problem hard is still legitimate. If I’m right, it follows that once the easy problems are solved (what you call the neurological details), the hard problem will be fully explained, which isn’t the same as saying that it will be solved, but “just” that we will know why it is there. This is exactly the same trajectory I’ve described with the alive/dead distinction: we can’t fully explain the difference, there still is an explanatory gap, but very few care about it, because we see very well why the gap exists.
    In turn, this explains why I have personally spent a couple of years writing about epistemology, exploring the limits of both first and third person accounts of reality and, importantly, spelling out why and when essentialist thinking is wrong. In my view: you can’t solve the riddle of consciousness if you don’t pick the right epistemological premises; the right epistemological premises are, by definition, the ones that allow us to explain more, and thus, the premises that allow us to explain consciousness are (circularly) the right ones.
Overall: I still don’t think we need your theory, not if I’m right in thinking that the main explanation it delivers is “why there is an explanatory gap”. Actually, it’s worse: I think your theory might even be counter-productive, because it hides some of the limits of cognition, making it less easy to spot why the hard problem needs to exist in the first place. Other than that, I also think that our positions are compatible: hypercomputations can be seen as computations performed on unpredictable inputs, so the whole system isn’t describable by finite computations, which is all that third-party explanations (including explicit cognition) can deliver. This leads both of us back into Peter’s company (appropriately): we are basically repackaging his point on Haecceity, using different languages and approaches (not a surprise, I’m sure).

In this context, the assessment of our epistemological situation is, to me, dire: we can’t agree on what cognition is, what knowledge is, whether brains compute, what computations are, etcetera. Without empirically verifiable theories, these theoretical premises can only remain disputable. Meanwhile, on the empirical side, we know very little: we don’t understand what neurons do, we don’t know whether we also need to model what non-neuronal cells do within brains or whether we can concentrate on neurons alone, we don’t know how many types of neurons there are, why there are so many, what rules they obey in “deciding” what type to develop into, what rules are followed in shaping synapses, and much more. Thus, we are very, very far from producing empirically verifiable theories of cognition as a whole, and as a result we are forced to work with uncertain/controversial theoretical assumptions, hoping to get lucky and identify the right combination. This is the chicken-and-egg problem, and the main reason why I can’t accept your proposition: “The hard problem is hard, not due to our ignorance, but precisely due to our knowledge”. In contrast, I would be very happy to accept a variation: the hard problem is hard because what we do know makes us overestimate the power of knowledge and concurrently underestimate the vastness of what we don’t know.
    In turn, this also allows me to be very sympathetic with your propositions:

    (1) Gears and levers don’t suffice for phenomenal experience.
    (2) Putting together lots of gears and levers doesn’t engender qualitatively new properties.
    (3) At the bottom, it’s nothing but gears and levers.

I broadly agree, but with one important caveat about (2): “Just” putting together lots of gears and levers doesn’t engender qualitatively new properties. However, putting together lots of gears and levers in very precise ways does generate thoroughly sensational results: basic physics, in the form of thermodynamics, predicts ever-increasing entropy. In contrast, all of biology shows how extraordinary pockets of constrained order end up self-assembling, achieving ever-increasing complexity. If that’s not qualitatively different, I don’t know what is. Before the theoretical notion of natural selection was developed, such order was thoroughly unexplainable, exactly as Phenomenal Experience is now. Thus, we need a theoretical breakthrough, and this is one reason why it makes sense for me to try to help your own theoretical effort. Even if I might disagree, I know you may be right!

    Second section: likelihood and incomplete explanations.
Moving on, there is some misunderstanding on the explanation/likelihood side (W2 & 3):

    the likelihood that a finite algorithmic system with access to a random source performs a noncomputable function is exactly one

I can agree with this; I’m assuming your mathematical position is correct, also because I wouldn’t have the knowledge necessary to challenge it. My worry is different: the explanation that I find unsatisfactory is about why hypercomputations would generate minds (why not something else? What is a mind according to your account? Aren’t you just saying that “minds are the things we can’t understand”?). Or, in other words, I find this passage unlikely/counter-intuitive/almost unreasonable. I can appreciate your later explanation about the infinite homunculus regression, where each step explains the previous one while remaining 100% equivalent to it (so nothing is explained), and yet the infinite regression nevertheless explains everything; but I can’t convince my less rational side that it is an acceptable/sufficient explanation. To me, it shows that there was nothing to explain, at best.
    On this matter I can only send the ball back to you: do you have other ways, or more precise mechanistic accounts of the relation between hypercomputation and the creation of what we call a mind? When does a given hypercomputation produce a mind? When does it not?
    I think I can see why you think there might be a link, but I still don’t see why we can conclude that there definitely is a link, and a very strong one. This is the epistemological/ontological mismatch I’ve mentioned before: we have a proposed, very naive, approach to explain phenomenal experience, the homuncular account, which doesn’t work because of endless recursion. You jump in saying, “hey, you can escape endless recursions”, which is fine, but what if our initial account was actually wrong?
    I think you have a lot to gain if you can find a way to support your C5 conclusion in stronger ways (“C5) (Conclusion, by P4 and C3) A finite size algorithm together with a (certain) random number can give rise to a mind”). I’m not saying you are wrong, just that what you propose is hard to swallow, and hence in need of the strongest possible support.
    I hope this helps!

  92. Hey Sergio, thanks again for your (exhaustive!) engagement with my ideas. To me, the big takeaway seems to be that you have no fundamental issues with my proposal, i.e. there doesn’t seem to be anything you consider flat-out wrong. If I’m reading you right there, then that’s already more than I had expected.

The two points that you raise, if I may paraphrase, are (1) that you consider my proposal superfluous, since all the things it purports to explain already receive explanation in less baroque terms, and (2) that I’ve insufficiently argued my claim that hypercomputation does, indeed, lead to minds. Broadly speaking, I think that we’re probably not going to achieve a total meeting of minds on (1)—which is, to a certain degree, OK, since I do think it’s an issue about which legitimate differences of opinion are possible, at least given our current state of knowledge. Nevertheless, I want to try and clarify my position somewhat further. On (2), however, I think there is a better chance—I think you essentially take me to be making a slightly different argument than I intend, and thus I hope that we can make progress there.

    On Mary, contrary to your impression, I think we genuinely disagree in how to read the thought experiment. The reason is, basically, that you offer up an account in first- versus third-person viewpoints as explanatory, whereas I think it simply restates the problem: why are there first-person viewpoints in the first place, in a universe whose fundamental laws can (to the best of our knowledge, and if you’re not a dualist) entirely be stated in third-personal terms? So to me, you’re basically driving out the devil with Beelzebub. Let me try and make myself more clear.

    You’re distinguishing between two levels of description: (t1), on which there is knowledge regarding which neurons are activated, given certain stimuli; and (t2), which is the knowledge generated by actually having those neurons activated. Now, you posit this as being helpful for the explanation of Mary’s lack of ability to predict what it’s like to see red. However, for me, the question of why (t1) and (t2) seem to be irreducibly different is exactly the problem that’s raised by the thought experiment!

    The reasoning is, basically, that Mary, if she knew (t1), and if the description contained in (t1) (the physical properties) is a complete one, and if she possesses perfect reasoning capacities, then since (t1) entails (t2), she ought to be able to infer (t2) from knowing (t1). The assumption that (t1) entails (t2) is just the assumption of physicalism: once god fixed all the physical facts, there was nothing left for him to do.

    Now, to me, your account reads as if you conflate Mary with a typical human being, with limited cognitive capacities. But that’s I think a wrong reading: rather, Mary is able to draw any inference that can possibly be drawn—her cognitive capacities are unbounded. For instance, she could simulate an entire brain in her mind (if that should be necessary), bring it into the state given by (t1), and then just deduce from there what that brain would experience—it’s true that she has access only to a third-person perspective, but if physicalism holds, then everything ought to be derivable from a third-person perspective; otherwise, you’re building a fundamental first person/third person split into your ontology, which would rather be some kind of dual-aspect theory or maybe anomalous monism, but certainly not physicalism. So I think the distinction between (t1) and (t2) just kicks the problem up the ladder one rung, but doesn’t ultimately make any headway in addressing it.

Regarding your use of the élan vital metaphor, I’ve always had a basic problem with it: those using it generally intend to show how ‘intuitively obvious’ aspects of the world can go away once understanding deepens, and hence want to argue by analogy that the same might happen to the problematic phenomenal aspects of consciousness. However, that analogy is askew: élan vital was an explanatory hypothesis to account for the phenomenon of life, while phenomenal experience is (in a somewhat linguistically awkward manner) the phenomenon itself. The phenomenon of life is still with us, even though we understand it better; so, taking the analogy seriously would actually mean that we should expect phenomenal consciousness to stay with us, as well.

    Now, you put the analogy to slightly different use, but I still can’t fully agree. In particular regarding your ‘explanatory gap’ as regards life vs. death:

    (L1) We don’t have a full, final and unambiguous explanation of what distinguishes life from death. Thus, our understanding of life is still plagued by its own explanatory gap.

    To me, this would actually entail that there is no gap at all—that life and death are at the extreme points of a continuously connected spectrum. Precisely the existence of this continuum of in-between states accounts for the fact that we no longer have any feeling of mystery regarding life; in recognizing that there is no strict separation, we have in fact closed the gap. If we were able to do that with regards to consciousness/unconscious matter, then I think the gap would be just as effectively bridged.

    However, unlike in the case of the life/death dichotomy, I think there are good arguments that we are a priori unable to close this gap; that is, we can’t see how tiny salty squirts give rise to something feeling a certain way, because those are categorically different phenomena. My aim is thus to take these arguments seriously (and while we’ve concentrated so far on Mary, I think that my approach also works for zombies, inverted spectra and the like—that is, not only knowledge arguments, but Kripkean modal arguments as well are accounted for), while nevertheless building a consistent picture of the world that doesn’t have to appeal to magic.

Now, you think that ultimately these arguments hold no water; and that may indeed be the case, and just for that reason I’m fully supportive of attempts to make sense of consciousness on more conventional terms. I, however, along with quite a few other people, am sufficiently impressed by these arguments to try out a different route, just to be able to say: hey, in case the straight and narrow is blocked, here’s something else that has a chance of working.

the right epistemological premises are, by definition, the ones that allow us to explain more, and thus the premises that allow us to explain consciousness are (circularly) the right ones.

    Well, I think that mine do just that! 😉

    Overall: I still don’t think we need your theory, not if I’m right in thinking that the main explanation it delivers is “why there is an explanatory gap”.

While I do think that’s its main appeal, as you’ve noted, I’ve also made an attempt at an argument that is not purely negative, but yields the positive conclusion that minds are indeed possible on my account, and in a non-mysterious way. If this argument is right (about which, more later), then it is in a sense pretty much unavoidable that something like my model applies in the world—as I noted, if there is randomness, and if there are algorithmic processes, then hypercomputation occurs; and if there is enough of that, then chances are that some of it gives rise to minds.
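To give a rough sense of why I take the probability-one claim to be on firm mathematical footing (a back-of-envelope sketch, assuming the random source delivers an unending sequence of fair coin flips):

```latex
% Sketch: a random infinite bit-sequence is noncomputable with probability 1.
% There are only countably many Turing machines, hence only countably many
% computable sequences s_1, s_2, ...; under the fair-coin measure \mu on
% \{0,1\}^{\mathbb{N}}, every individual sequence has measure zero, so
\mu\bigl(\{\, s : s \text{ is computable} \,\}\bigr)
  \;\le\; \sum_{i=1}^{\infty} \mu(\{s_i\}) \;=\; 0 .
% Hence a randomly drawn sequence is noncomputable with probability 1, and an
% algorithm that merely echoes such an input realizes a noncomputable function.
```

Of course, this only says that noncomputability comes cheap once genuine randomness is available; it says nothing yet about minds, which is the separate step we discuss below.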

    So the main appeal of my view, to me, is that if we only make those mild assumptions—there is randomness, and there are algorithmic processes—then (if my argumentation holds) we get out that we should expect that there are minds, which, like ours, would be saddled with an explanatory gap regarding their own functioning! I think that’s quite neat, to be honest.

    I think your theory might even be counter-productive, because it hides some of the limits of cognition, making it less easy to spot why the hard problem needs to exist in the first place.

    Hmm, could you maybe expand on that? To me, it’s the first thing that drops out of my model: brains can do more than they can explain; in particular, those things for which hypercomputation is needed can’t be finitely explained. Hence, hard problem.

As regards my three points, it seems that (contrary to your initial assessment) you reject point two, and submit that maybe if things are put together in just the right, special way, then somehow the spark of consciousness catches the whole thing on fire. This is just what I earlier called “opaque complexity”: without an account of how such a thing might be possible, it’s little more than wishful thinking. It’s like trying to build a ship out of paper: the pieces you have just don’t seem to have the right properties, but since they’re all you have, you hope that maybe some way exists of putting them together such that they won’t go soggy and sink. But of course, we know that this can’t happen, since we know the properties of the material you have at hand—i.e. it is due to our knowledge that we know such an attempt to be misguided.

    However, putting together lots of gears and levers in very precise ways does generate thoroughly sensational results: basic physics, in the form of thermodynamics, predicts ever-increasing entropy. In contrast, all of biology shows how extraordinary pockets of constrained order end up self-assembling, achieving ever-increasing complexity. If that’s not qualitatively different, I don’t know what is.

There is no qualitative difference here; in fact, it’s been known for a while that pockets of increasing complexity are a quite straightforward consequence of the second law in far-from-equilibrium contexts. It turns out that one of the best ways to increase the entropy of the system as a whole is to generate small pockets of local entropy decrease, which are accompanied by a net increase elsewhere that far outweighs the local decrease. Take, for instance, the Earth-Sun system: if the Earth were an inert rock, it would dissipate far less energy than it does. In fact, it’s even been proposed that a high entropy production might be a signature of alien life! So, localized increases in complexity aren’t something qualitatively new so much as the best strategy for systems to obey the constraints of thermodynamics.
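To put a rough number on this (textbook estimates only, just to make the point quantitative): sunlight arrives as high-temperature radiation and leaves as low-temperature radiation, so the biosphere can build local order while the total entropy budget stays firmly positive:

```latex
% Back-of-envelope entropy budget for the Earth-Sun system (rough values).
% The Earth absorbs solar power P of roughly 1.2 x 10^17 W, arriving as
% radiation at about T_Sun ~ 5800 K and re-emitted at about T_Earth ~ 290 K, so
\frac{dS}{dt} \;\approx\; P\left(\frac{1}{T_\mathrm{Earth}} - \frac{1}{T_\mathrm{Sun}}\right)
  \;\approx\; 1.2\times10^{17}\,\mathrm{W}
     \left(\frac{1}{290\,\mathrm{K}} - \frac{1}{5800\,\mathrm{K}}\right)
  \;\approx\; 4\times10^{14}\ \mathrm{J\,K^{-1}\,s^{-1}} \;>\; 0 .
% Any local entropy decrease in living structures is a tiny correction to this.
```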

But in the end, I appreciate that many feel differently about Mary and her ilk; and I don’t claim that I can say with any certainty that they are wrong. Indeed, if a full account of consciousness in natural terms is found, and it doesn’t rely on hypercomputational exotica, then I’ll be the first to cheer! However, playing the odds here, I don’t think this is likely. Hence, I’m trying out the next-most-conservative route, to see if that leads anywhere; and I think that what I’ve found so far is encouraging enough to continue the inquiry. But I know full well that not everyone will be convinced, and that’s not my aim.

    So, then, moving on to (2). I take your main questions to be contained in this paragraph:

    On this matter I can only send the ball back to you: do you have other ways, or more precise mechanistic accounts of the relation between hypercomputation and the creation of what we call a mind? When does a given hypercomputation produce a mind? When does it not?

There are, however, some problems with finding straightforward answers to these questions. First of all, the main reason I’m trying out a hypercomputational account at all is that it is nonintuitive: we can’t give a description of how a given hypercomputation works, because they are not finitely specifiable. So, I can’t answer when a hypercomputation gives rise to a mind.

Now, before you take this as a decisive objection to my model, note that the computationalist faces the very same problem: by Rice’s theorem, no nontrivial semantic property of programs is decidable—that is, you can’t in general say what a program will do upon being run (if you could, you could also solve the halting problem). So, there is no procedure that could tell you, in every case, whether a program gives rise to a mind or not. My account therefore doesn’t incur any additional explanatory debt here, since that debt is already shared by computational accounts.
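Just to spell out the textbook reduction I’m gesturing at (everything below is schematic, and the “mind decider” is a purely hypothetical oracle, not something anyone can implement):

```python
# Schematic Rice-style reduction (hypothetical throughout): if we had a decider
# for the nontrivial semantic property "this program gives rise to a mind",
# we could also decide the halting problem, which is impossible.
# Assumptions: `mind_program` is some program that has the property, and the
# everywhere-diverging program does not (otherwise, run the same argument on
# the complementary property).

def make_hybrid(program, program_input, mind_program):
    """Build a program that behaves exactly like `mind_program` if `program`
    halts on `program_input`, and diverges otherwise."""
    def hybrid(y):
        program(program_input)   # runs forever if `program` does not halt here
        return mind_program(y)   # otherwise, mimic the mind-producing program
    return hybrid

def halts(program, program_input, mind_decider, mind_program):
    """`hybrid` has the mind-property exactly when `program` halts on
    `program_input`, so a correct `mind_decider` would yield a halting test."""
    return mind_decider(make_hybrid(program, program_input, mind_program))
```

The same construction works for any nontrivial property of what programs do, which is why I say the explanatory debt is shared.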

    But I also think you slightly mistake the claim I am making. What I’m proposing is merely an existence claim: that is, within the set of noncomputable functions, there exists at least one giving rise to a mind (most, however, probably won’t). Such an argument is of necessity nonconstructive, since there is no way to finitely specify noncomputable functions. In order to make this argument, I rely on the homuncular account—but that doesn’t mean that it’s necessarily the case that consciousness works that way in the real world. This is just a possible way to get what we want, not necessarily the one employed by nature.

And that the homuncular account works can, I think, be argued for in quite simple terms—it only needs to be the case that conscious experience on level n suffices for conscious experience on level n-1; but this is shown by the usual ‘man in the theater’ example: the sense data are projected onto an internal ‘screen’ (level 1), and an internal observer (whom we will assume to be fully conscious) observes this representation (level 2); since the observer is conscious, the internal representation on level 1 is consciously perceived, by virtue of the internal conscious level-2 perception of the observer. Now just iterate this through all the levels.

    And yes, this is non-intuitive; if it were intuitive, we probably wouldn’t be having this discussion, as the problem would have been solved centuries ago. But again, it also accounts for the fact that it is non-intuitive, due to our brains being incapable of grasping non-computational mechanisms.

  93. Maybe this has run its course, but it seems to me that your view on Mary is also hard to square with the historical developments, Sergio. Basically, if you maintain that the hard problem is merely due to our ignorance, then it should have been with us all along—however, the actual historical picture is vastly different: basically, none of the ancient philosophers seems to even have recognized that there was a problem. In fact, the mind-body problem as it is with us today can be traced to the same origin as the modern scientific conception of the world. Just a couple of pages after his famous dictum that ‘the book of the universe is written in the language of mathematics’, Galileo writes in The Assayer:

    Accordingly, I say that as soon as I conceive of a corporeal substance or material, I feel indeed drawn by the necessity of also conceiving that it is bounded and has this or that shape; that it is large or small in relation to other things; that it is in this or that location and exists at this or that time; that it moves or stands still; that it touches or does not touch another body; and that it is one, a few, or many. Nor can I, by any stretch of the imagination, separate it from these conditions. However, my mind does not feel forced to regard it as necessarily accompanied by such conditions as the following: that it is white or red, bitter or sweet, noisy or quiet, [348] and pleasantly or unpleasantly smelling; on the contrary, if we did not have the assistance of our senses, perhaps the intellect and the imagination by themselves would never conceive of them. Thus, from the point of view of the subject in which they seem to inhere, these tastes, odors, colors, etc., are nothing but empty names; rather they inhere only in the sensitive body, such that if one removes the animal, then all these qualities are taken away and annihilated.

    Note that he’s essentially making a knowledge argument here: “if we did not have the assistance of our senses, perhaps the intellect and the imagination by themselves would never conceive of them”. So it seems that (arguably) the first time anybody formed an image of the world in modern scientific terms, the problems with integrating into it the ‘secondary qualities’ were immediately obvious. It’s not Galileo’s ignorance that revealed this problem to him; rather, it is his—at that point, unparalleled—insight into the composition of the world that did so.

    From that point on, we suddenly have an explosion of philosophical interest in the relationship between mind and body, starting, of course, with Cartesian dualism right at the inception of modern philosophy, then Leibnizian prestabilized harmony, Spinoza’s dual aspects, and so on.

    To believe that the recognition of the mind-body problem doesn’t have anything to do with the knowledge gained due to the inception of the scientific revolution seems, in the face of this, to be to believe in an unfathomable coincidence.

  94. Jochen (99),
    I’m enjoying this and will linger on the issue where we both acknowledge that a legitimate disagreement may persist, so I’ll keep both sides of the discussion going, if only for the pure debating pleasure (it’s useful to me, that much I know).
Thus, I will insist on (1) (my objection about whether we really need what you’re trying to do): I think we are touching on important points, which are relevant to the whole field, no matter what theory one is trying to uphold. On (2) (about the explanatory power and likelihood of your account), in answering your question(s) I will take a provocative stance, hoping to help you solidify your argument; I see it as a sparring exercise, not a sincere attack. Before doing so, let me write a few conciliatory words: I follow your explanation about (2) and agree with it. If you are correct, and it IS a possibility, you are doing it right, and such an account can’t be much more constructive than that.

    But first, let’s talk about Mary.

    The reasoning is, basically, that Mary, if she knew (t1), and if the description contained in (t1) (the physical properties) is a complete one, and if she possesses perfect reasoning capacities, then since (t1) entails (t2), she ought to be able to infer (t2) from knowing (t1). The assumption that (t1) entails (t2) is just the assumption of physicalism: once god fixed all the physical facts, there was nothing left for him to do.

The trouble with this reading is that it misrepresents knowledge, and by doing so it asks our intuitions to go where they must be, by definition, unreliable. Under perfect (idealised, and thus impossible) conditions, knowing all the physical facts about colour perception (which includes a perfect understanding of all possible inputs) and having perfect (unbounded) inference abilities, (t1) entails (t2), and therefore Mary wouldn’t learn anything when released. It’s not necessary for us to disagree on this: perfect inference abilities require the ability to solve endless recursions, while “knowing all the physical facts” requires access to exact measures of all physical quantities (at least for the relevant stuff within the perceiving brain). Thus, in your own terms, in the reading where Mary has such idealised knowledge and ability, she is able to execute whatever a Zeno machine might, whenever she wishes to. In other words, it seems that you are inadvertently negating Mary’s story in your own way: concluding “she will learn something new” is intuitive to us because we can’t intuitively (or otherwise?) grasp what hypercomputations might yield, but the idealised Mary can, and thus would already be able to experience (t2) whenever she wishes.
In my reading, on the other hand, Mary is half idealised: she knows all the physical facts, because that’s our starting hypothesis, but she is normal otherwise (I had to check, and I can confirm this is the case). Mary therefore is still an idealisation, because knowing all the physical facts is not possible, not even in principle. Thus, because she doesn’t have radically superhuman reasoning abilities, she can’t reach (t2) as an act of pure will. Knowing enough (not all) of the physical facts about brains and sensory perception already allows us (mere humans) to explain why that is (as I was trying to do).
In other words, Mary’s story confuses matters by positing that “knowing all the physical facts” is:
    a) possible in principle.
    b) not radically different from what we experience as factual knowledge.
Even if I agree with the first conclusion (Mary will learn something – and I don’t think you can in fact agree with this, not in your own super-idealised reading), by a) and b) we are not allowed to draw the second conclusion: that physicalism is false. Jackson relies on a false, misleading (and almost ubiquitous) conception of knowledge. To say it with your own words: “once god fixed all the physical facts, there was nothing left for him to do” can still be true, even if we are forever unable to know all the facts. In fact, if physicalism is true, we must be unable to know all the physical facts. This is why I keep banging on about epistemology: before even starting to tackle consciousness and minds, you have to understand how cognition works, how it relies on slicing reality into categories, objects, properties, causes and effects so as to make useful predictions for practical purposes. One needs to understand that such distinctions are not objectively out there; they are useful tricks, and they are useful because:
1. They allow us to reduce the number of variables to be considered.
    2. They are reliable, to some degree.
Thus, the usefulness of the distinctions we make is a function of how large the reduction in 1 is and how strong the reliability in 2 is. This point is hard to swallow, but I don’t think it’s far from your position: you already admit that we can’t cognise hypercomputations properly.
Back to Mary: it turns out that there is a gradient of possible interpretations. On one extreme, God-like Mary (GlM) knows everything and can hypercompute at will [I guess you might claim your reading is close to, but not quite, GlM: she can reason perfectly, but without being able to hypercompute]. On the other extreme (Human-like Mary – HlM), Mary is just as clever as humans can be, and “knowing all the physical facts” is a shorthand for the kind of knowledge that science can produce: she understands the mechanisms well enough to make pretty accurate predictions. On the HlM extreme, we are making the fewest assumptions, but we are still idealising a little: in real science, there is no way to reach the conclusion that “we understand all relevant facts”; there is always a possibility that our models are missing some relevant detail.
Nevertheless, on the HlM side, our intuitions may be considered somewhat meaningful, so it would not be totally preposterous to try drawing conclusions based on our strong intuitions (as per the original argument). However, on the GlM extreme, all bets are off: the closer we get to that end, the less our intuitions apply, converging on your account, where we at least know why our abilities must fail to grasp what’s going on. For a God-like Mary, we should all agree that we can’t make reliable predictions about what she might or might not know/infer (or just assume she knows everything, by definition, and leave it at that).
For me, however, we can understand “why our abilities must fail to grasp what’s going on” without having to approach GlM. Staying close to HlM is enough, but it relies on downsizing our self-appointed abilities: I’m advocating a rather pessimistic view of knowledge, because this then allows us to dissipate most of the philosophical conundrums that we discuss in here (and more!). [This is why I keep pestering Scott; I think we are more or less proceeding along parallel lines.]
I am, in other words, fascinated by how much Mary appeals to people, because to me it is self-evident that both (t1) and (t2) are much, much less than what people think they are. (t2) is just the ability to recognise red after having seen and noticed it already, nothing more. (t1) is a highly sophisticated, very useful heuristic: at best a very reliable way to produce predictions, but it can’t claim to explain everything (not even within a well-defined domain), it can’t produce perfectly accurate predictions, and it can’t guarantee that the predictions it produces will never spectacularly misfire. Once you accept these sad conclusions, Mary’s story becomes insignificant, not something that allows us to draw any meaningful a priori conclusion. For the record, this account can then be applied to zombies, inverted qualia and whatnot!
    In our case, this is the reason why I’m challenging you with (1) (not sure we need your attempt) and then with (2), claiming also that your account obfuscates the real limits of cognition. [You asked for an explanation on this one!] You posit that your account explains our inability to grasp the mechanistic nature of mind, but to do so, you appeal to hypercomputations, which are impossible to describe in finite terms. So you are saying “we can’t understand minds more or less for the same reason why we can’t intuitively grasp the vastness of infinity”. This obfuscates, because we can only understand much, much less. In fact, we find it very difficult to understand how limited and local our powers of “understanding” are in general.
    This brings me to my last provocation:

    we can’t give a description of how a given hypercomputation works, because they are not finitely specifiable. So, I can’t answer when a hypercomputation gives rise to a mind.

This is indeed a possibility, but it’s rather bleak. You are fundamentally saying that we can’t figure out what generates inner life and what doesn’t. You are throwing away all hope of finding an empirical way to assess the presence of inner life. We may avoid having to espouse dualism, but the price is an uncompromising form of mysterianism. Far from my ideal solution.
Furthermore, I still need to fully understand why you think that hypercomputations are needed to produce minds. Yes, they might be needed, but who knows? Simply saying that they probably are because that way we understand why we can’t understand minds is not going to convince me: I have a less exotic way to explain why we are failing to explain minds (we are misrepresenting minds; we are trying to explain something that doesn’t exist and that is far more capable than the minds that do exist). Thus, the main supporting pillar of your account (the ability to explain something we can’t explain at the moment) looks unable to support much, in my eyes. Of course, this is certainly also because I have my own strong opinions, so it may well be that I’m blinded by my own delusions.

    Minor Points:

    (L1) We don’t have a full, final and unambiguous explanation of what distinguishes life from death. Thus, our understanding of life is still plagued by its own explanatory gap.

    To me, this would actually entail that there is no gap at all—that life and death are at the extreme points of a continuously connected spectrum. Precisely the existence of this continuum of in-between states accounts for the fact that we no longer have any feeling of mystery regarding life; in recognizing that there is no strict separation, we have in fact closed the gap. If we were able to do that with regards to consciousness/unconscious matter, then I think the gap would be just as effectively bridged.

We’re degenerating into sophistry, but we can agree on this. Some people could say that, because we can’t pinpoint the distinction, then we must admit there still is a gap. I can agree with this, but I can also agree with you: the difference is a matter of interpretation of what the gap is supposed to be. Change the definition and you reach one or the other conclusion. I am also with you in thinking that if we become able to see that there is no strict separation between conscious and unconscious matter, the gap should be considered closed, or at the very least not metaphysically loaded.

    As regards my three points, it seems that (contrary to your initial assessment) you reject point two, and submit that maybe if things are put together in just the right, special way, then somehow the spark of consciousness catches the whole thing on fire. This is just what I earlier called “opaque complexity”

My turn to deploy sophistry: I said I was “very sympathetic”; I wasn’t intending to suggest I agree. I do agree that opaque complexity is a no-go in explanatory terms: it’s a legitimate hope, not an explanation. This underlies my last provocation: your account tells us why the opaqueness is unavoidable (we can’t understand hypercomputations), so it remains just as opaque while removing all hope of clarification. Also, you do know that I’m trying to stand by my words and spell out what the “right way” might be and why it would catch fire ;-).
    Anyway, I’m being deliberately provocative, because I trust you know I’m doing so in a friendly way. In the end, I do understand why you might be right. I do hope you aren’t because, even if my own position has a strong mysterian side, I think yours gives us even less hope.

    On your latest #100

    To believe that the recognition of the mind-body problem doesn’t have anything to do with the knowledge gained due to the inception of the scientific revolution seems, in the face of this, to be to believe in an unfathomable coincidence.

That’s fair enough, but it’s a bit disappointing that you attribute this view to me. I was taking it for granted that the hard problem appears the moment you develop a third-person point of view (condition 1) *and* realise how powerful it is (condition 2). A good amount of knowledge is needed for both; thus the hard problem appears when we have enough knowledge to meet both conditions, but not enough to bridge the explanatory gap. Same as the much-abused case of life (apologies!): before being “solved” (if that’s what happened), it had been a pretty hard problem for a long time.
    On the other hand, believing that there is no way to empirically bridge the explanatory gap relies on overestimating how powerful the “third person account” is (misrepresenting knowledge), overestimating our current understanding of the world (ditto) and underestimating our (long-term, collective) problem-solving abilities. You can only reach that conclusion if you “ignore our ignorance” ;-).

    PS I’m still very busy, the next 10 days will be breathless for me, I might disappear completely until the 8th, but that doesn’t mean I’ve had enough (would understand if you did!).

  95. @Jochen: “So it seems that (arguably) the first time anybody formed an image of the world in modern scientific terms, the problems with integrating into it the ‘secondary qualities’ were immediately obvious.”

    Don’t quote me on this, but IIRC Democritus realized this was a problem when he proposed that everything consisted only of “atoms and Void”.

I don’t really think secondary qualities are the hard problem. For one thing, I don’t think the distinction is widely accepted any more, is it? And there’s nothing ineffable or inexpressible about secondary qualities, is there?

  97. Sergio, I’ll get back to you later, when I have more time; I hope that’s OK with you.

    Sci, yes, good point: Democritus indeed talks about sensations etc. as ‘mere convention’; but I’m not sure he considered this to be a problem, rather, pretty much everything was ‘mere convention’ to him in the sense that it’s not part of the ultimate level of reality, but needs to be constructed from ‘atoms and void’. So I’m not sure that, to him, feelings and sensations had a much different status than tables and chairs, say. But I’m certainly not an expert on his philosophy.

    Peter, well, I’ve always (perhaps wrongly?) regarded the primary/secondary distinction as at least ancestrally related to the mind/body dichotomy, in that it cuts along the same lines—objective versus subjective, third-personal versus first-personal, etc. It’s true that modern formulations of the problem have different emphasis, e.g. on the ‘what it’s like’ to experience these properties, rather than the properties themselves, which I think is what’s more aptly related to qualia, with all their ineffability and so on. But it nevertheless strikes me as the first attempt to frame the essential apparent difference between the ‘in here’ and ‘out there’, so I don’t think I’m committing any greater sin of equivocation than when I’m calling what Galileo did ‘modern science’. 😉

    If you disagree, though, I’d be very interested in your position; here, as well, I can’t claim any expertise.

Jochen; there is a connection, at least at a psychological level. People are certainly more comfortable talking about qualia in relation to secondary qualities (colour above all, of course). There are discussions of smell qualia and emotional qualia, but I can’t think of one discussing the qualia of solidity or extension – though strictly I don’t see why there shouldn’t be. I think I suggested somewhere that the position that qualia only relate to secondary qualities would have a certain plausibility, but I don’t think anyone has taken up my generous offer to yield any claims to originating the idea.

    Still, I think there is a definite distinction. I don’t think I’ve ever heard that anyone suggested secondary qualities per se might be different or inverted between subjects. They do in an important sense pertain to the observer not the object, but whether they’re phenomenologically internal in the same way as qualia… Hm.

    It’s an interesting point anyway, and might merit a more extended discussion.

  99. Well, from a few half-remembered clues and some frantic googling, the first example of an inverted-spectrum argument dates to Locke’s Essay, where he also considers the primary/secondary quality distinction. I would have assumed that what’s inverted exactly are those secondary qualities (and he certainly counts color as a secondary quality), but from a quick perusal, he never quite comes out and says so, talking instead about ‘simple ideas’ produced in the mind by sense data.

    But anyway, I’m at least not completely alone in connecting qualia and secondary qualities—this wikibook on consciousness studies asserts:

    It is generally considered that secondary qualities correspond to qualia (Smith 1990, Shoemaker 1990) and the two terms are often used synonymously.

    This may be a bit oversimplified, but at least it shows I’m not the only fool rushing in here.

  100. OK Sergio, let’s see if we can’t keep kicking the can down the road somewhat longer! 🙂

    First of all, to nobody’s great surprise, I still disagree with your reading of the Mary argument. To make things clear, my Mary is characterized by:

    a) having access to all the relevant physical facts, and
    b) being an idealized, perfectly rational reasoner—that is, if a conclusion can be drawn at all, she is in principle able to draw it (which, to me, entails her using finitary means: the usual process of drawing a conclusion is algorithmic in the sense that one uses finitely many steps to derive the conclusion from the premises; so no hypercomputation).

    This is not as unreasonable as you make it out to be, because for a great many cases, we are in exactly that position (indeed, it seems to be only the hard-problem cases where this sort of description is not available to us—that this is telling us something is, in a sense, the whole point of the argument). Take, for instance, Mary’s uttering ‘Oh, what a beautiful red rose!’ upon seeing such for the first time. There is no mystery in finding an explanation for this: our knowledge of the relevant physical facts enables us to create a deductive account, starting with the light impinging upon her retina, to signal processing by thalamus and cortex, to the activation of motor neurons that cause the right sort of contractions within our vocal apparatus in order to modulate the air stream to produce the appropriate sounds.

    Of course, this story is enormously sketchy, and hides staggering amounts of biological complexity; but it’s nevertheless clear that the blanks can be filled in, and ultimately, a story of some such character gives a complete account of her uttering those words. Furthermore, such a story can be told for every kind of behaviour.

And that’s exactly the problem: we can tell consistent and complete stories about any kind of behaviour, and nowhere need recourse to conscious perception, the what-it’s-like-ness and so on. Thanks to our knowledge of the physical basis, and our deductive capacities, it seems that we’re able to produce explanations for all behaviours (in terms of gears and levers), which however leave the subjective aspect out completely. And on that front, nobody’s ever produced even the first step towards an explanation in the last 350 years or so.

    So you see: it’s our ability to form a picture, not our inability to come to an explanation, as was the case with the 15th century peasant examining the walkie-talkies, that makes consciousness seem special.

    I mean, just consider the argument that’s arguably the common ancestor of all those modern formulations: Leibniz’ mill. He writes:

    One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception.

Again, it’s clearly not ignorance that forms the foundation of the argument, but on the contrary, our ability to conceive of a machine, or a more general mechanism, in such a way. Like my example with the paper towels: one only needs finite capacities to conclude that there is no possibility of building a ship from them—it’s not necessary to be able to consider all possible combinations of paper towels; their attributes make them unsuitable in any configuration.

    If I understand you correctly, you’re also trying to argue that humans are in some way necessarily bounded in their reasoning capacity—an often-heard example is the idea that you can’t teach calculus to a dog, while you can learn it; hence, by extension, there might be things we are just fundamentally incapable of knowing. (This might sound similar to Scott’s BBT, but I think your ideas are saliently different—they’re not immediately self-refuting, for one.)

This kind of reasoning was supportable, I think, only until Turing’s work on computation, because since then it’s been clear that there is a threshold, that of computational universality, beyond which additional complexity does not buy you additional capabilities. That is, once you can perform universal computation, you can compute anything that can be computed at all; since our reasoning is fundamentally computational, the same goes for our capacity for knowledge. We are capable of performing universal symbolic manipulation, and hence nothing that occurs according to computable processes (which includes, e.g., the functioning of a neuronal network such as the brain, absent a source of nonalgorithmicity) is fundamentally beyond our epistemic grasp.

    It is thus in fact not right that we can understand ‘much much less’ than that which is not describable in finitistic terms, because that exactly delineates our capacity for understanding—we can walk right up to the border of the computable, but no further.

    It’s true that there are resource constraints—some computations are simply so involved that no mind could ever perform them in, say, the lifetime of the universe. But that’s not what’s at issue. First of all, we are capable of extending our capacities, using anything from pen and paper to supercomputers. But, more importantly, we don’t need to be able to perform a computation to understand how it works: I can’t factorize the number 82475298296734 in my head, but I know exactly what a computer would do in order to find the prime factors; likewise, I can’t actually hold all the details of the story of how a red-rose-stimulus produces the utterance ‘Oh, what a beautiful red rose!’ in my head, but I know exactly what an ideally rational agent would have to do in order to do so. But regarding the Mary story, I just haven’t the foggiest. Thus, there seems to be a salient difference here, and I don’t think trying to bury it beneath heaps of complexity or epistemic restrictions does any work at all. It’s a problem we need to face up to, not try to hide behind a veil of obscurity.
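(As a concrete aside on the factorization example above: here is a minimal trial-division sketch, my own illustration rather than anything discussed so far, just to underline that the procedure is identical whether the number is small enough to do in one’s head or fourteen digits long.)

```python
# A minimal trial-division sketch (illustration only): the procedure is the
# same whether we factor 15 by hand or 82475298296734 by machine; only the
# resource cost changes.

def prime_factors(n: int) -> list[int]:
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains at the end is itself prime
    return factors

print(prime_factors(15))                # small enough to check mentally
print(prime_factors(82475298296734))    # the same algorithm, just more steps
```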

So, a perfectly rational Mary with access to all the relevant physical facts ought to be able to derive the what-it’s-like-ness of seeing red from first principles, if (computational) physicalism is right; and from the fact that we can infer stories about the occurrence of any behaviour, yet not about conscious experience itself, we can validly infer that Mary would likewise be so limited, and hence that we must reject computational physicalism.

You are fundamentally saying that we can’t figure out what generates inner life and what doesn’t. You are throwing away all hope of finding an empirical way to assess the presence of inner life.

    Well, that’s not quite right, and to the extent that it is, it’s a problem all computational accounts share, as I’ve already pointed out. First of all, recall that a mind is, in my model, a finite algorithmic process plus a source of nonalgorithmicity; and there’s every chance that we can say a lot of things about the finite process. A completed cognitive science would, for instance, tell us what kind of algorithms are performed in our brains, so that we would have to attribute sentience to anything that performs similar algorithms and has access to environmental nonalgorithmicity. Indeed, we might even be able to come up with criteria for when algorithms utilize the nonalgorithmicity in such a way as to give rise to a mind (most algorithms will not use it in any meaningful way, for instance). My own hunch is that this algorithm is correlated to the process by which minds acquire intentionality, which is a second part of my model that’s a bit more involved.

Additionally, even if computationalism is true, it would still be the case that we can’t in general say whether some specific algorithm A gives rise to a mind—we could conceivably construct algorithms such that this is the case, but in general, no procedure exists to test for the mind-producing capabilities of algorithms found in the wild. So I don’t think that my account leaves us any worse off than computationalism does.

    Furthermore, I still need to fully understand why you think that hypercomputations are needed to produce minds.

    As I’ve pointed out, I’m not arguing that they are necessary, but merely that they are sufficient. That is, I’m just pointing out a way by which one would end up with a world containing minds; whether it’s the only way, I don’t know, and I also don’t think it’s really a question that can be answered exhaustively. Rather, I’m putting up a hypothesis to either be substantiated or refuted.

    Some people could say that, because we can’t pinpoint the distinction, then we must admit there still is a gap. I can agree with this, but I can also agree with you: the difference is a matter of interpretation of what the gap is supposed to be.

    That’s not what I mean. In my view, it is exactly the fact that there is a continuum between life and non-life that fills the supposed ‘gap’ (which, again, I don’t think ever really was present in the case of life, the way it is present for mind): basically, if somebody asks, ‘how do life and non-life hang together?’, we can just point to the continuum and say, ‘that’s how’. And there would not be any residual mystery. If we could do the same for conscious and non-conscious matter, then the problem would be solved just as fully.

    Regarding my post #100, I was just intending to stress the role knowledge (rather than ignorance) plays in recognizing the hard problem, not attributing any view to you. To stretch my earlier analogy a bit, say you know that there are ships—you can see them on the water. You want to know how to build them, so you cast around for suitable material. Eventually, it turns out that everything you have is essentially made of paper towels. This is where the hard problem comes up: you know paper towels dissolve on contact with water, and hence, you can’t build ships from them. The obvious conclusions are: the ships are illusions (eliminativism) or there is some fundamentally different stuff out there from which to build ships (dualism). (There’s no analogy here to psychophysical parallelism, so dual-aspects, prestabilized harmony, or panaquaeism aren’t represented.)

    Your solution essentially is to look around for yet new ways of putting together paper towels, so that maybe they won’t sink; or perhaps that paper towels actually aren’t all there is, or that we can’t say in every situation if paper towels sink or swim. You want to say that we overestimate our own capacities by claiming that paper towels are all there is, and they’ll always sink. I think that the chances for that to be the case are pretty slim, and rather believe that you underestimate our capacities. But in any case, what I wanted to illustrate is simply that the problem does not arise from not knowing what the world is made up from, but from exactly that knowledge—together with a (reasonable) belief in its comprehensiveness.

  101. Jochen,
    just a very short one to say that I’m finally out of my long tunnel of work. EEG is still flat, but I should recover enough brain resources to finally answer soon enough (it’s good fun, hope you are enjoying it as well).

  102. @Jochen #107
    Apologies for being so hopelessly slow: before attempting this answer, I had to go through a long tunnel of work, then reconnect with the subject, and finally overcome a difficulty that arises from how you lay down your argument. I can’t tell how many times I’ve planned a reply and then ditched it because it either felt unconvincing or utterly unfair.
    Thus, instead of a direct reply, I’ve finally decided to explore the difficulty I’m facing.
    You say:

    Take, for instance, Mary’s uttering ‘Oh, what a beautiful red rose!’ upon seeing such for the first time. There is no mystery in finding an explanation for this: our knowledge of the relevant physical facts enables us to create a deductive account, starting with the light impinging upon her retina, to signal processing by thalamus and cortex, to the activation of motor neurons that cause the right sort of contractions within our vocal apparatus in order to modulate the air stream to produce the appropriate sounds.

    Of course, this story is enormously sketchy, and hides staggering amounts of biological complexity; but it’s nevertheless clear that the blanks can be filled in, and ultimately, a story of some such character gives a complete account of her uttering those words. Furthermore, such a story can be told for every kind of behaviour.
    […]
    So you see: it’s our ability to form a picture, not our inability to come to an explanation, as was the case with the 15th century peasant examining the walkie-talkies, that makes consciousness seem special.

I (think I) can see your point in perfect focus; I genuinely hope I don’t need you to explain it again for the Nth time. But I see it as so blatantly self-defeating that I find it paralysing: I fail to understand what makes you blind to the contradiction in your own words, clever as you are, and in turn this makes me wonder whether there might be something that I’m missing instead. Perhaps you are trying to make a point that I don’t understand at all (which would be very disappointing) or, more likely, we are talking past each other at such dizzying heights that we have no chance of noticing. Either way, our discussion seems sterile, but I really don’t want to be defeatist and be the one to give up. Thus, I remained paralysed for a long time.

The only thing I can try is to explain why I see a glaringly obvious internal inconsistency in your point. The key word is ‘beautiful’ in Mary’s imagined ‘Oh, what a beautiful red rose!’ utterance. To me, it necessarily links to experience (1) and meta-cognition (2): something qualifies as subjectively beautiful if the act of looking at it produces at least a tiny bit of pleasure. Naturally, pleasure is inevitably part of Phenomenal Experience (PE) (1). Furthermore, being able to report on beauty requires having a record of the pleasure that one has experienced (2). Thus, once we are able to produce a complete mechanistic/physicalist explanation of all the relevant physical facts that happen between seeing the rose and calling it ‘beautiful’, this mechanistic explanation will necessarily contain a description of the mechanisms that produce PE.
So, embracing physicalism is equivalent to assuming that it’s possible to find such an explanation (Exp) (we don’t have it, but we do think we will get there), but accepting the hardness of the Hard Problem, without the qualifiers I’m trying to introduce, is equivalent to saying that no, a “complete mechanistic/physicalist explanation of all the relevant physical facts[…]” necessarily does not contain an explanation of PE (1), nor one of meta-cognition (2).
The Contradiction: you say our knowledge of how to build (Exp) generates the problem, but you fail to see that (Exp), if complete as per the hypothesis, necessarily solves the problem. Hence, something is amiss in our knowledge: your picture must be incomplete in a way that is very relevant (not just missing irrelevant details).

Both of us are trying to solve this riddle by explaining the apparent hardness of the hard problem: you embrace it in full, and say, “hey, if (Exp) includes hypercomputation, that’s why we can’t intuit it and/or anticipate it”; you then build a half-convincing (to me) case for why hypercomputation is likely to enter the picture. Me, I explain away the hardness of the problem by pointing out that we are setting our bar spectacularly too high: we want to explain PE while relying on concepts of Intentionality (I), PE (in “what is it like” terms) and Knowledge (K) (used here to define what counts as a successful explanation, but in Mary’s case also reflexively, exploring the interplay between knowledge and PE). In my reading, the hard problem is hard because we have misunderstood I, PE and K. Thus, the hard problem must remain unsolvable until we recognise the errors that we’ve inserted in our premises. To me, I, PE and K are systematically over-hyped; they do not exist as posited, and therefore can’t get a straightforward explanation. As a result, it is entirely possible (actually, it is inevitable) that, unless we re-define I, PE and K (which also entails downsizing our understanding of ‘meaning’ and ‘intuition’), getting at (Exp) will not solve the Hard Problem, because we will fail to see the solution, even though we will necessarily have it in front of our eyes.
    If I’m right, we need to re-define I, PE and K, which counts as what you refer to as invoking “epistemic restrictions”. Your own attempt is directly interesting to me because it does call on accepted and self-evident epistemic restrictions (we can’t understand hypercomputations), but at the same time it rubs me the wrong way because it falls short of the re-definitions that I think are necessary.

    I’ll provide an example of a redefinition that would satisfy me: factorising 82475298296734. You know it’s possible, because you can easily factorise 12, 15, 21 and many more (relatively small) numbers. You also see that for all the numbers you can factorise yourself, the difficulty increases with the size of the number, but the method (algorithm) you’d use remains the same. Thus, you can intuit that making a computer apply the same method (algorithm) to 82475298296734 will yield the correct result and conclude that there is nothing left to be explained about factorising big numbers. You are almost certainly right, but your conclusion is inductive: if you didn’t have any direct experience of similar occurrences, where algorithms reliably scale up beyond our ability to perform them ourselves, your intuition would be different. In other words, you regularly mistake reliable intuitions (the useful concept) for Knowledge with the capital K (the misleading chimera), defined here as objective truths.
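    A minimal sketch of the scale-up being described, assuming plain trial division (the same method one would use by hand on 12 or 21); the code and names are illustrative only:

```python
# Trial division: the very same hand method, delegated to a machine so it scales
# to a number we could never factorise ourselves.
def factorise(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(factorise(21))              # [3, 7] -- a small case we can check by hand
print(factorise(82475298296734))  # the big number from the example
```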
    I mention this because you started the whole journey by assuming that “our knowledge of the relevant physical facts enables us to create a deductive account”, and by doing so you overlook the fact that in order to produce deductive accounts we need a theory, but theories can only be grounded on induction (you can reduce your sigmas as much as you want, but they won’t reach zero, not when we are talking about physical reality). This tells me that all knowledge (of the physical world – small ‘k’!) and all our intuitions are inescapably heuristic, and therefore they can fail. Thus, our intuitions about what Mary may or may not be able to intuit can be utterly wrong, and before jumping to any conclusion, we should try to evaluate how reliable such intuitions might be: I’ve done that, and concluded that they must be unreliable, so Mary’s story tells us nothing reliable. Why? I’ve tried to explain it in my previous comments, but here is another take: the whole thought experiment relies on the truth-bearing of intuitions, our own ones, applied to what Mary may or may not infer herself. Once you accept that induction always grounds inferences and intuitions, you accept that they are never 100% reliable, and therefore (a) what Mary may infer could be wrong, and (b) our second-order inference about (a) could also be wrong, compounding the probability of reaching wrong conclusions. Furthermore, the more you move away from ecologically realistic hypotheses (Mary can effortlessly compute anything computable, Mary can hold in her head lots of relevant physical facts, Mary has access to all the relevant physical facts, etc. – each hypothesis is, in its own way, ecologically impossible), the less reliable our conclusions will be (because induction can be reliable for similar-enough conditions, and completely fall apart beyond a given threshold). Result: Mary’s story is useless; we can’t trust the intuitions it generates.
    All this brings me back to BBT, so I’ll conclude with direct questions:
    (Q1) Did you deliberately insert the ‘beautiful’ qualifier in your last reply? (Were you trying to make a point which I’ve missed?)
    (Q2) Can you help me understand why you think that BBT is “immediately self-refuting”? I think clarifying this might help me spot the source of our disagreement.

    [I don’t know if any of the above can be helpful to you, but I doubt it (alas, I suspect this exchange has been more useful to me than to you). Please do feel free to end this conversation, I would not blame you if you did.]

  103. Hey Sergio, glad to see you’re still with me! I was afraid you’d abandoned our exchange. And somewhat contrary to your impression, I think we’re actually making some strides, at least at uncovering the source of our disagreement. (To expect it to be overcome completely was always going to be a bit of a pipe dream; of course, we all like to believe we’re perfectly rational creatures who dispassionately form their beliefs merely on the grounds of the evidence laid before us—it’s only everybody else, those obstinate fools, who’s the problem!—but I don’t think everyone ever really is, nor am I sure we should aspire to be. Some heat and passion adds all the spice to a good argument, as long as it’s done in a friendly manner, and ultimately, our views make us who we are—to just let go of that without a fight just wouldn’t be human. So let’s dive right in!)

    First of all, I don’t at all see why you believe Mary saying something about the rose being beautiful necessitates her having phenomenal experience, unless you build it in beforehand as a requirement or truth condition for her statement (‘She only really means ‘beautiful’ if there’s some experience within her to back it up’), which would be circular. (And if you’d want to go that way, you could just as well declare that her uttering the word ‘red’ only refers if there is some experience of redness to back it up.) Declaring an image to be beautiful or not is simply a classification task—give a neural network a large enough sample of images, together with your judgments of whether they are beautiful, and chances are it will perform quite well at predicting which other images you consider to be beautiful, as well as another person might, perhaps. Indeed, googling around yields plenty of examples of that sort of thing being done—here’s a study of how well a computer does at classifying faces according to beauty when compared to human classifications (answer: very well).

    But that of course simply means that you can abstract a set of criteria which, according to the degree to which they’re fulfilled or not, allow you to classify images regarding their beauty (to an average human being, or whatever else you might set as a standard reference). (This is a difficult task if all you have is a trained neural net, but we’re simply concerned with in-principle possibility here.) So you just take an image and check boxes—proportions, ratios, colors, etc. None of this (as I hope we’re agreed) necessitates any subjective experience at all. Hence, Mary’s ‘Oh, what a beautiful red rose!’ can be explained without ever referring to her subjective experience, by processes which just need to check boxes, by gears and levers acting upon one another—and that is what makes it mysterious.
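    A minimal sketch of the ‘check boxes’ idea, with the features, weights and threshold entirely made up for illustration:

```python
# A thresholded weighted sum over measurable image properties: gears and levers,
# with no reference to anybody's experience anywhere in the computation.
CRITERIA = {
    "symmetry": 0.40,          # illustrative weights, not taken from any real study
    "colour_harmony": 0.35,
    "golden_ratio_fit": 0.25,
}

def beauty_score(features):
    # features: dict mapping a criterion name to a measured value in [0, 1]
    return sum(w * features.get(name, 0.0) for name, w in CRITERIA.items())

def is_beautiful(features, threshold=0.6):
    return beauty_score(features) >= threshold

print(is_beautiful({"symmetry": 0.9, "colour_harmony": 0.8, "golden_ratio_fit": 0.5}))  # True
```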

    Regarding your re-definitions, I take my account to be appealing precisely because no re-definitions are necessary at all—intentionality, phenomenal experience and the like are exactly what they are presented to us as. One reason for this is that skepticism about these things runs a very sharp risk of lapsing into incoherence—about which, more later. There’s also the ‘it’s the only thing we’re ultimately really acquainted with’ argument: in a sense, skepticism about the content of our minds is way worse than your ordinary brain-in-a-vat skepticism about the external world. A brain in a vat has, at least, genuine internal experiences; indeed, as Chalmers has argued (in his ‘The Matrix as Metaphysics’), those are enough to ground a realist account of the world. The skeptic about phenomenal experience and the like loses this possibility—she can’t even be a solipsist, because she doesn’t even know the kind of thing she is; not just the external world, but even the internal one is beyond her grasp. A Kantian loses the noumena, but at least gets to hold on to the phenomena; but what’s the PE(etc.)-skeptic got left?

    So this is simply an unappealing theoretical position for me—I may not be able to disprove it, in the same sense that I can’t disprove brain-in-a-vat-skepticism, but I may at least look for more palatable alternatives. To me, the assumption of an epistemic gap brought on by the inability of our computational intelligence to grasp a noncomputational reality provides just that.

    I mention this because you started the whole journey by assuming that “our knowledge of the relevant physical facts enables us to create a deductive account”, and by doing so you overlook the fact that in order to produce deductive accounts we need a theory, but theories can only be grounded on induction (you can reduce your sigmas as much as you want, but they won’t reach zero, not when we are talking about physical reality).

    First of all, the example you use is a bad one: factorizing large numbers is not something I know to be possible by induction from many small examples; rather, I can formulate a proof that every natural number (>1) can be decomposed into its prime factors—a deductive argument, in other words. (The Fundamental Theorem of Arithmetic, not just called that for fun!) I don’t need acquaintance with all natural numbers and their factorizations in order to establish this—I merely need to know their structure.
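    For what it’s worth, a compact sketch of the deductive argument in question (the existence half only; uniqueness takes a separate step):

```latex
% Existence half of the Fundamental Theorem of Arithmetic, by strong induction.
Let $n > 1$ and suppose every $m$ with $1 < m < n$ has a prime factorization.
If $n$ is prime, then $n$ is its own factorization. Otherwise $n = ab$ with
$1 < a, b < n$; by the induction hypothesis both $a$ and $b$ factor into primes,
and concatenating the two factorizations yields one for $n$. Hence every natural
number greater than $1$ has a prime factorization; no acquaintance with
particular numbers is required, only with their structure.
```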

    But of course, you’re making a different point: in the real world, we don’t ever have access to the full structure. Science proceeds (well, can in certain cases be idealized to proceed) using a hypothetico-deductive method: you propose a hypothetical structure, from which you derive consequences; if these consequences are verified, you keep to your hypothesis, eventually calling it a theory if it’s acquired enough inertia and people start preaching it from various distinguished chairs etc. But if those consequences fail, your theory is falsified and tossed onto the garbage heap of ‘great, but wrong’-ideas. So, you can never be certain that you do know the right structure of the world, and hence, deductions from this always themselves inherit the hypothetical character of the theory used to produce them.

    So far, that’s all well and good, but I didn’t use the word ‘deductive’ lightly. First of all, I can simply bite the bullet: using our best current theories and understanding of matter, I am licensed to draw conclusions from this at least until it is repealed. Thus, anybody wishing to counter my argument would have to supply a constructive argument: either proposing that matter does contain some sort of seed of the mental, as in dual-aspect theories and the like; or showing how mentality may emerge, or at least, how some bits of matter might fool themselves into ‘thinking’ that it does. The first possibility is simply lobbying for a change in ontological assumptions, which I’m probably not inclined to make; the second, of course, amounts to nothing less than solving the mind-body problem. If you can do that, I’ll be wrong, but I’ve won the greater prize of knowing how the mind works—another reason my stance seems like a win-win to me.

    Second, I can also advance the argument that it’s vanishingly unlikely that our understanding of matter, as it applies in the ordinary regime, is going to change. The reason for that is a little bit of magic called ‘effective field theory’: as you move along the energy scale (inversely related to the length scale in physics), what you see is that the dynamics decouples from higher energies/shorter lengths. So, in a sense, what goes on at the very bottom doesn’t matter: we have an effective theory that isn’t sensitive to the changes down there. This theory governs our everyday lives and accounts for all their phenomena.

    Now, you’ll want to claim that I can’t really know that. But I do. The reason for this is that sometimes, absence of evidence is evidence of absence, contrary to the somewhat regrettable canard that’s so often parroted around online. Example: by the absence of evidence for there being an elephant in my room, I conclude quite validly that there is, in fact, no elephant in my room. Why? Because I can survey the entire domain at the required scale, meaning that if there were an elephant in the room, I would have necessarily observed evidence of it by now. The same thing is essentially true with what Sean Carroll calls ‘the physics of everyday life’: at least on those physical terms, it’s a solved problem. So another way to defend my use of ‘deductive’ would be to point to the fact that this theory isn’t going to change, and derive my conclusions from it.
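    Put in rough Bayesian terms (assuming detection would be near-certain if the elephant were there):

```latex
% Absence of evidence as evidence of absence, when detection is near-certain.
Let $H$ = ``there is an elephant in my room'' and $E$ = ``evidence of an elephant is observed''.
If $P(E \mid H) \approx 1$, then $P(\neg E \mid H) \approx 0$, and by Bayes' theorem
\[
  P(H \mid \neg E) = \frac{P(\neg E \mid H)\, P(H)}{P(\neg E)} \approx 0,
\]
provided $P(\neg E)$ is not itself negligible (and it isn't: with no elephant, there is
essentially nothing to observe). Not seeing the evidence all but rules out $H$.
```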

    But I’m after a more fundamental quarry. I think that even before we got to work out ‘the physics of everyday life’, conclusions such as that of the Mary story were possible, and well justified. The reason is that while we did not know the precise formulation of this explanation, we did know—from the time of Galileo—the framework in which this explanation would be formulated, and it is that very framework that leaves no room for phenomenal experience and the like. That framework is essentially mathematical, and by that, structural: mathematics ultimately is the science of the relationships objects can enter, that is, their structural properties. Mathematics can be grounded in set theory, and all of the definable mathematical structures can be accounted for in terms of relations between sets. A function is just something that associates inputs with outputs—i.e. forms ordered pairs of elements (i, o). And so on.
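    To make the ‘nothing but relations between sets’ point concrete, here is the standard textbook construction (nothing here is specific to the argument):

```latex
% Ordered pairs and functions reduced to sets (Kuratowski construction).
(i, o) := \{\{i\}, \{i, o\}\}, \qquad
f \subseteq A \times B \text{ is a function} \iff
\forall i \in A \;\exists!\, o \in B : (i, o) \in f.
```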

    But structure is incapable of accounting for intrinsic properties (cf. haecceity), and it’s those that we need in order to make sense of conscious experience. So, in all brevity, another way to frame my program is to identify the structural with the algorithmic, and then propose that hypercomputation is a minimal way to extend our ontological reach and bring the intrinsic into the fold. (This was actually the original line of argument that led me to pursue this road. It’s a bit more complicated, though, and I’m not sure it’s really amenable to blog-comment discussion, hence I chose the presentation in terms of the explanatory gap and the Zeno argument above.)

    Anyway, returning to the main thread of the argument: yes, I do mean giving a deductive argument—return to the example of building a ship from paper towels. We’ve discovered that paper towels are all there is, hence, we can’t build ships from them; but we also appear to see ships sailing on the sea. You point out that ‘all there is’ is an inductive leap; I can make three replies of different strength: first, if you claim that’s not the case, or that there is some way after all to use paper towels to build a ship, the onus is on you to demonstrate this; second, I can point out that the paper-towel theory is explanatorily complete, at least as far as everyday experience, of which the ships are a part, is concerned; and third, I note that actually, the very framework in which we explain anything at all constrains the objects of our explanations to be paper towels. That last one is the strongest, surely, but it’s also the hardest to argue for; but for the purposes of our discussion, I will be happy with either of the first two options.

    All three allow me to reject your stipulation (at least up to being falsified) that all Mary’s story generates is intuitions: I say it generates conclusions, reached with full deductive force.

    Now on to your questions: no, I didn’t introduce the word ‘beautiful’ with any grand argument in the back of my head, but now, I regard it as a serendipitous accident—I hope that showing how one can produce the judgment that something is beautiful without needing the least spark of subjective awareness more clearly demonstrates the character of the explanations that our understanding of the physical world allows us to furnish, and moreover how those explanations never touch anything remotely phenomenal.

    As for your second question, well, I’m not sure how much time I want to spend on BBT anymore; I went over that ground with Scott already, with little to show for my troubles. But anyway, the gist of it is as follows: BBT’s story is framed in unavoidably intentional terms; but it presumes to conclude that those terms ultimately fail to refer. However, if this conclusion is true, then we have no means by which to evaluate its story in the first place, and hence, can’t decide whether it actually implies its supposed conclusion. So the whole thing just doesn’t get off the ground.

    As an analogy, take the attempt to disprove the validity of logic using logical reasoning. Something like:
    (1) Everything human-made is flawed and prone to be erroneous
    (2) Logic is human-made
    ———–
    (3) Hence, logic is flawed and prone to be erroneous

    This argument is surely valid; hence, one should think it’s sound, if its premises are true. But it can’t possibly be sound, since its conclusion asserts, in effect, that logical reasoning in general isn’t sound, and hence that we can’t assert the soundness of this argument specifically. So, trying to argue for the non-soundness of logic using logical reasoning is just a non-starter.

    A similar thing now happens with BBT. It is formulated using things like ‘heuristic predictions of behaviour’, and the like, which are intentional statements (the prediction is about behaviour). From such formulations, it then attempts to draw the conclusion that we can’t trust intentional statements. Now, if that conclusion is right, then I have no grounds on which to accept the story that led to it—I no longer mean what a ‘heuristic prediction of behaviour’ is if intentional language fails to refer. But then, I simply have no grounds on which to assert that conclusion—the argument’s conclusion pulls out the rug from under itself. BBT tries to reason itself into distrust of the brain, while forgetting that then, the reasoning process used to arrive at this distrust is itself suspect.

    That doesn’t make its conclusion wrong, any more than the above anti-logical argument’s failure means that logic is necessarily sound (we do believe that to be the case, but on different grounds than the failure of this argument). But, in order to make a convincing argument, one would first have to put BBT’s story in non-intentional terms—but in order to do this, one would have to find a way to analyze things put in terms of ‘x being about y’ such that no aboutness is left; that is, one would have to solve the problem of intentionality. But if that’s possible, then BBT turns out to be false, as it is evidently possible to give an account of the intentional in non-intentional terms, which is what BBT alleges we are ‘blind’ to.

    So BBT, as it stands, fails to make a coherent argument; and the only way to fix it would be to show it wrong.

  104. Ugh, I should be more careful proof-reading—in the first paragraph, “but I don’t think everyone ever really is, nor am I sure we should aspire to be”, that should be ‘anyone ever really is’, and in the next-to-next-to last one, “I no longer mean what a ‘heuristic prediction of behaviour’ is” should be ‘I no longer know what…is’. Sorry for that.

  105. Jochen –

    BBT … is formulated using things like ‘heuristic predictions of behaviour’ and the like, which are intentional statements

    BBT … would have to find a way to analyze things put in terms of ‘x being about y’ such that no aboutness is left

    These quotes seem inconsistent with my understanding of BBT (which I’ve run by Scott a time or two). I take BBT to argue that while the intentional idiom is a heuristic useful (arguably even necessary) for some purposes, elements of the idiom imply introspective insight into brain functionality that we don’t actually have. Hence, scientific explanations of behavior will ultimately need to eschew the idiom in favor of some alternative vocabulary (yet to be fully developed). Since “aboutness” is a concept from the intentional idiom, it wouldn’t be part of such explanations.

    If that’s roughly correct, then BBT doesn’t attempt to explain intentionality, rather it argues for ultimately dropping the concept – in which case there’s no inconsistency.

  106. Charles: “If that’s roughly correct, then BBT doesn’t attempt to explain intentionality, rather it argues for ultimately dropping the concept – in which case there’s no inconsistency.”

    I’ve tried to explain this to Jochen many times, but for him, commitment to ‘aboutness’ is somehow an inescapable consequence of saying ‘about.’ The way he would put it is that the claim that ‘‘about’ belongs to a heuristic regime’ has to be ‘about something’ if it’s going to be intelligible at all. I answer, but of course it’s ‘about something,’ so long as we’re not essentializing ‘about,’ and he simply repeats the move. For some reason he thinks a non-heuristic application of about has to be involved somehow, and he thinks his refusal to specify this non-heuristic about somehow spares him the charge of begging the question.

    In short, he’s deep in the self-affirming intentionalist loop!

  107. Since “aboutness” is a concept from the intentional idiom, it wouldn’t be part of such explanations.

    Well, but this is the problem: it’s part of BBT itself, hence, it can’t be part of such an explanation.

    I’ve tried to explain this to Jochen many times, but for him, commitment to ‘aboutness’ is somehow an inescapable consequence of saying ‘about.’

    No, Scott, that’s really not what I’m talking about. What I am saying is that whenever you use the word ‘about’, or otherwise make pronouncements using the intentional idiom, then I can analyze them using the usual understanding of aboutness—the claim ‘‘about’ belongs to a heuristic regime’ may itself be an application of heuristic aboutness (I understand that perfectly well, I really do!), but if it is, then I do not know how to assess its truth. Because you’re not telling me how your ‘heuristic aboutness’ is supposed to be cashed out—that is, how I can tell when a proposition involving heuristic aboutness is fulfilled.

    I know when a claim involving aboutness in the ordinary sense is true—‘A is about B’ is true if, in fact, A is about B. You’re claiming this notion is mistaken—there is never any A such that it is about some B—, which, again, is very well possible. But in substantiation of this claim, you’re using sentences of the form ‘A is about B’; but if my ordinary understanding of them is false, I have no way of knowing whether they actually serve to substantiate your claim. Not until, at least, you tell me when a sentence like ‘A is about B’ is true in your model.

    I’m not saying that a sentence like ‘A is about B’ commits you to some sort of intrinsic understanding of aboutness; I’m merely saying that intelligibly using such sentences obligates you to give an account of how they are supposed to be understood, if not as asserting that A is, in fact, about B. Merely saying ‘it’s heuristic’ is just a hand-wave—why should I, for instance, believe that there is a notion of A being heuristically about B that makes the sentence ‘A is about B’ come out true in all the situations you need it to? Unless there is some account of what A does, or what is done to A, in order for it to be ‘heuristically about’ B, that sentence is just a string of meaningless symbols.

  108. Jochen: “I know when a claim involving aboutness in the ordinary sense is true—‘A is about B’ is true if, in fact, A is about B. You’re claiming this notion is mistaken—there is never any A such that it is about some B—, which, again, is very well possible. But in substantiation of this claim, you’re using sentences of the form ‘A is about B’; but if my ordinary understanding of them is false, I have no way of knowing whether they actually serve to substantiate your claim. Not until, at least, you tell me when a sentence like ‘A is about B’ is true in your model”

    Statements of the form ‘A is about B’ are typically reported ‘true’ whenever such schematizations contribute to successful behaviour.

    Again, this is part of a much larger story, some angles of which I can cash out quite elegantly, others of which require work. But it’s perplexing to see the goal posts have changed yet again: ‘Handwaving’ is not the criticism you were making several weeks back (which was all about intelligibility)! It’s also a criticism that cuts *both ways*: labelling a subject matter ‘irreducible,’ for example, is about as egregious an example of hand-waving as exists. Lord knows ‘“The snow is white” is true iff the snow is white’ explains precious little! The obligation “to give an account of how they are supposed to be understood” is as much yours as mine.

    I raise this because the question has to be comparative: Which account can systematically explain the most with the least? What account of intentionality out there, do you think, can explain more than my account of intentionality?

  109. Statements of the form ‘A is about B’ are typically reported ‘true’ whenever such schematizations contribute to successful behaviour.

    But that doesn’t help with BBT’s story at all. You don’t need for those statements to be ‘reported to be true’, you need for them to be factually evaluable, in order to get the BBT story to a point where it can be made sense of. Otherwise, I come across these statements in the formulation of your story, and from that point, I just can’t tell anymore what the story says (provided the conclusion that there’s no intentionality as it’s usually portrayed holds), and hence, in particular, I can no longer judge whether the story actually supports its conclusion—because fundamentally, I just don’t know what the story says.

    But it’s perplexing to see the goal posts have changed yet again: ‘Handwaving’ is not the criticism you were making several weeks back (which was all about intelligibility)!

    So if I just keep trying to explain my original criticism, I’m stuck in a self-affirming loop; if I try to come from another angle, I’m shifting the goalposts. You’re not exactly leaving me many options here! (Besides, even back then, I did raise the point that just saying ‘it’s heuristic’ doesn’t really tell me anything substantial about what’s doing what.)

    I raise this because the question has to be comparative: Which account can systematically explain the most with the least?

    This argument I don’t get at all. I don’t have to accept an account that to me appears flawed, just because I offer no alternative. I don’t have to believe that Zeus’ anger sends down lightning, just because I can’t offer anything in the way of an electromagnetic theory. Saying ‘I don’t know’ is an entirely valid response.

  110. Jochen: “But that doesn’t help with BBT’s story at all. You don’t need for those statements to be ‘reported to be true’, you need for them to be factually evaluable, in order to get the BBT story to a point where it can be made sense of.”

    For you, as you say. Otherwise, for about statements to be ‘factually evaluable’ is for them to be ‘reportable as true/false,’ and as I say, they are generally reported as true the degree to which they figure in different varieties of successful behaviour.

    The intentionalist strategy at this point is to generally try to cue the application of these heuristics to the question of these heuristics (the bulk of the Western philosophical tradition lies in this space), to insist that the answer to the question of intentionality find some *second order intentional explanation,* lest it ‘miss the point’ or ‘change the topic’ or what have you (this is the normativist tactic, for instance).

    Is this what you think, Jochen?

    “I don’t have to accept an account that to me appears flawed, just because I offer no alternative.”

    You don’t have to accept anything you don’t want to (of course)–but this response strikes me as curious. So we’re not in the business of providing ‘best explanations’ at this level, Jochen?

  111. Otherwise, for about statements to be ‘factually evaluable’ is for them to be ‘reportable as true/false,’ and as I say, they are generally reported as true the degree to which they figure in different varieties of successful behaviour.

    But you want the story of BBT to be true, i.e. correspond to the way things actually are; you don’t merely want to claim that belief in BBT yields successful behaviour. And there’s the problem.

    Take the sentence ‘nothing is about anything’. This is, as such, nonsense: if it’s true, then it simply doesn’t assert anything (it’s not about anything, after all). Hence, if these sorts of statements occur in BBT, it’s likewise self-defeating. Peter also noted this problem way back when:

    [F]or one thing to dispel intentionality is to cut off the branch on which you’re sitting: if there’s no intentionality then nothing is about anything and your theory has no meaning. There are some limits to how radically sceptical we can be.

    A more charitable interpretation of what you’re saying is then that it ultimately boils down to ‘nothing is about anything, in the sense that we naively understand ‘aboutness”. This is a sentence that potentially is meaningful. But it needs an account of how then to understand ‘aboutness’ differently in order to be intelligible. Otherwise, evaluating its truth simply is impossible; it might just as well say ‘colourless green ideas sleep furiously’. And of course, all further argument inherits this unintelligibility.

    So, ultimately, I find that if I believe the story of BBT—if, say, I have read your account here and here, and agree with it—then upon re-reading the account, I encounter formulations such as ‘Thespians cognize one another’, and, thanks to BBT, I know that my naive understanding of ‘cognizing something’ is flawed—it’s merely a heuristic shaped by evolutionary pressures with no regard or need for being truth-tracking in any way.

    But then how else am I supposed to understand this formulation? When is it true that something cognizes something, when is it false? I have no way to tell: my naive understanding, I’m told, is wrong; but I’m not given anything to supplant it. Moreover, anything that would imbue me with such an understanding would correspond to a way of explaining intentional posits in non-intentional language—i.e. solving the problem of intentionality (otherwise, I would fail to understand any explanation just as much as the original statement). So either I can’t make sense of BBT; or, I don’t need it, because I’m given a way to analyze intentions non-intentionally.

    This does not commit me to an intentionalist stance (which I in fact don’t hold): for instance, I’d be perfectly happy with an explanation of intentions, say, in terms of teleosemantics (provided the approach proves eventually fruitful). Then, I could understand the phrase ‘Thespians cognize one another’ on its terms, analyzing its intentional content in terms of functions performed towards distant ends. But this explanation would then remove the blindfolds my brain had been wearing, and obviate the need for BBT.

    What other account of the meaning of the sentence ‘Thespians cognize one another’ would you claim is possible?

    You don’t have to accept anything you don’t want to (of course)–but this response strikes me as curious. So we’re not in the business of providing ‘best explanations’ at this level, Jochen?

    To have a best explanation, we first need to have any explanation at all. Since I think your account is flawed, I don’t believe it yields one. Or are you really intending to say that the ‘Zeus’ wrath’ explanation should be accepted in the absence of a better one, rather than just coming clear and admitting that we don’t know?

  112. I have to admit that I have considerable difficulty following this exchange between Scott and Jochen and assume I’m missing some subtleties. But I have to wonder if I’m the one missing the essential point given statements like these:

    whenever you use the word ‘about’, or otherwise make pronouncements using the intentional idiom …

    The word “about” is used in at least two contexts in which its “meaning” – ie, the way it’s used – differs. When used in quotidian discourse, an expression like “X is about S” usually just means some presentation mode X – eg, a novel, play, speech, movie, utterance – has S as its subject matter. In order to be clear, when the assumed context is quotidian, I’ll substitute “q-about” for “about”. My understanding is that “q-about” has nothing to do with “intentionality”, a distinctly philosophical concept.

    I understand the issue in the philosophical context to be how a mental event – eg, a belief or thought – can be about (call it “p-about”) some absent, possibly inexistent, “object” (Brentano’s word). And being a mental concept, “p-about” is indeed a member of the intentional idiom. I assume that in the quoted statement “about” is “p-about”.

    [“aboutness”] is part of BBT itself, hence, it can’t be part of such an explanation [ie, one not invoking the intentional idiom]”

    In what way is “aboutness” part of BBT? Surely not just because Scott’s explication may occasionally use “q-about”. Does his explication of BBT require use of “p-about” (or other members of the intentional idiom)?

    I know when a claim involving aboutness in the ordinary sense is true

    Does a claim “involve aboutness in the ordinary sense” if it uses “q-about”? If so, how does this statement relate to intentionality?

    why should I … believe that there is a notion of A being heuristically about B that makes the sentence ‘A is about B’ … true

    I interpret Scott’s use of “heuristic” as suggesting (among other things) fall-back use of the intentional idiom in the current absence of a fully developed scientific vocabulary for explaining behavior. So, I don’t know how to parse “A being heuristically about B”. OTOH, I would understand a phrase like “a thought p-about the Wizard of Oz” as being a “heuristic” use of the intentional idiom.

    In short, because of its meaning’s dependence on context, “about” should be used with care.

  113. Charles, if I understand you correctly, then the distinction you’re making is what’s more commonly called ‘derived’ versus ‘original’ intentionality, where derived intentionality is that which sentences possess—i.e. the power of a sentence to be about something, to have something as its subject matter—while original intentionality is an attribute of mental representations (note that ‘original’ here does not mean ‘ontologically fundamental’, but merely not derived from anything else—because if it were, we’d run right into an infinite regress).

    The reason for the naming is that the general view is that the former derives from the latter—a sentence has meaning, because it is interpreted as having such meaning by an intentional agent; in some sense, the sentence ‘cues up’ mental representations within the agent, and by doing so, becomes intentional due to the intentional nature of these representations.

    So this use of aboutness—your q-aboutness—is not any less problematic, and in fact, seems to derive from what you call p-aboutness. In general, whenever something is about something else, represents that something, we are faced with fundamentally the same puzzle. There is nothing intrinsic to a piece of text, a string of symbols, that makes it about something—it needs to be interpreted in order to be. Consider, for example, a code, or a sentence in a foreign language: there’s no way of telling whether some string of symbols codes for some meaningful proposition, or is just gobbledegook (in fact, any string of symbols can be interpreted to be basically about anything). So text always involves appeal to ‘p-aboutness’ in order to be meaningful.

    Besides that, the formulation of BBT also makes explicit reference to ‘p-aboutness’, when using formulations like ‘Thespians cognize one another’, where that cognition is explicitly stated to be cognition of something, another Thespian in this case. Encountering those in the formulation of BBT, having shed my naive belief in the intuitive understanding of (p-)aboutness, I find I no longer know how to interpret them, and hence, the whole story of BBT becomes semantically opaque to me.

    I would understand a phrase like “a thought p-about the Wizard of Oz” as being a “heuristic” use of the intentional idiom.

    But what needs to be the case in order to make that sentence come out true? What does A do in order to think p-about the wizard? I need to know this, in order to evaluate the content of propositions building on such formulations. Take the sentence ‘Dorothy smiled, because she thought p-about the Wizard of Oz’: Dorothy’s behaviour is explained in terms of this ‘thinking about’ thing; but then, only if I understand what that means do I know how this explanation actually works. I can’t in general be certain of the truth of propositions derived from heuristics, because heuristics fail; so without knowing whether the heuristic yields an accurate picture, I can’t evaluate conclusions derived from them, and hence, in particular, I remain uncertain of the conclusions BBT purports to reach.

  114. Jochen –

    I recall an exchange a few years ago re derived intentionality, and as best I remember, it didn’t converge on any insight that seemed particularly useful. And it seems clear to me that if quotidian use of “about” (or any other member of the intentional idiom) for any purpose whatever defeats intentional idiom eliminativism, then the concept of derived intentionality needs some rethinking. I take it that derived intentionality is the point of the “if nothing is about anything” argument? Any premise that leads from questioning the scientific basis of some idiom to that extreme conclusion seems inevitably flawed.

    In any event, I’m skeptical that language works anything like the way you described. “intentional agent”, “mental representations”, “interpretation” are all part of the intentional idiom and should therefore be, uh, eliminated by an intentional idiom eliminativist (IIE). For example, as one who suspects that phenomenal experience is causally impotent I don’t think mental imagery has any role in understanding (ie, responding to) a sentence. The whole basis of (my) being an IIE is confidence that behavior can (in principle, though not currently) be explained in a non-intentional vocabulary, and language use can be viewed as just a form of behavior.

    I probably went down a wrong path in my previous comment. Although I think I agree with the essence of BBT, I’m not committed to Scott’s way of describing it. So, it was a mistake for me to try to defend it. I just reread the “Alien Philosophy” essays, and I wouldn’t (and couldn’t, given Scott’s much, much greater erudition in this discipline) describe my position as he does. Perhaps there are indeed inconsistencies in his presentation of his position. So, I’ll retract my defense – although I continue to subscribe to what I take to be his essential position.

    There is nothing intrinsic to a piece of text, a string of symbols, that makes it about something—it needs to be interpreted in order to be.

    You’ll see from this comment that I agree. (My use of “interpret” there is an example of using the intentional idiom where it serves a useful purpose. The point could be formulated without it, but it would be tedious and counterproductive to do so.)

    What does A do in order to think p-about the wizard?

    The value of using the intentional idiom in lieu of a scientific explanation (assuming one were available) is to avoid answering such questions, which is unnecessary in quotidian discourse (almost everyone “understands” the phrase and will execute some context-dependent responsive action despite having no idea how or why).

    In Scott’s first Alien Philosophy essay he claims he’s going to “explain intentionality”, and I wonder if that claim is misleading you. What he actually does in the essay is speculate on why and how the intentional idiom emerged. I assume IIEs can’t have as a goal explaining intentionality itself since we (at least I) doubt that it’s possible to do so – which is what makes use of the idiom a “heuristic” rather than a collection of true statements.

    I can’t in general be certain of the truth of propositions derived from heuristics

    I don’t understand how “truth” enters into the discussion. As I understand Scott’s use of “heuristic” (not a word I’d choose), it means use of the intentional idiom to describe mental events in the absence of a science-based description. I vaguely (possibly incorrectly) recall his explicitly saying somewhere that sentences employing the idiom are either false or can be neither true nor false.

  115. Charles, I’m not proposing anything greatly original wrt derived intentionality, nor regarding the workings of language; to the extent I’m aware of it, I’m just going with what seems to me to be the majority view, see e.g. the SEP article on intentionality:

    On this view, sentences of natural languages have no intrinsic meaning of and by themselves. Nor do utterances of sentences have an intrinsic content. Sentences of natural languages would fail to have any meaning unless it was conferred to them by people who use them to express their thoughts and communicate them to others. Utterances borrow whatever ‘derived’ intentionality they have from the ‘original’ (or ‘primitive’) intentionality of human beings with minds that use them for their purposes.

    I also didn’t mean to imply (although I didn’t phrase it sufficiently clearly) that phenomenal experience has necessarily any part to play in intentionality. There are those who believe that to be the case—David Chalmers comes, once again, to mind—but I’m not quite sure that case has yet been convincingly made. When I spoke about ‘calling up a mental representation’ I merely meant something like triggering some pattern of neuronal activity that—by whatever means—has some intentional content, or seems to do, at any rate.

    I also realize that I should have looked up ‘quotidian’ earlier (English is not my first language, as you might have noticed)—I thought you were proposing a difference between the sort of aboutness inherent in sentences as opposed to that in mental representations, and considering the former to be, in some way, more easily explained, and hence, BBT’s reliance on it unproblematic. But you were making the different argument that ‘everybody knows’ what aboutness means in everyday terms, and hence, we can understand the story just fine. The problem, however, is precisely that the point of BBT is that the way in which ‘everybody knows’ to understand aboutness is simply false—there’s nothing in the world that corresponds to this understanding. Hence, we can understand BBT only if we hold to a (on its terms) false understanding of it—but then, we don’t actually understand it at all, but merely falsely believe we do.

    This leads to how ‘truth’ comes to enter the discussion: presumably, Scott wants his theory to be true; but if it is true, then it undermines the understanding within which we ground the evaluation of its truth.

    I vaguely (possibly incorrectly) recall his explicitly saying somewhere that sentences employing the idiom are either false or can be neither true nor false.

    If you’re indeed right in your recollection, then I’m all the more surprised by his reaction to my criticism; because basically, that’s all I’ve been saying, together with making the simple observation that BBT is formulated in just such terms. So it, too, must be either false or neither true nor false, by extension.

    Finally, regarding eliminativism about intentionality, there are two positions that are often conflated. First, there are people calling themselves ‘eliminativists’ who are, in the end, simply reductionists—that is, they don’t believe that there’s something like ‘aboutness’ among the fundamental constituents of our world, but that from its non-intentional components, something possessing intentionality can be built. This is also often called ‘naturalizing intentionality’. I don’t have a problem with this sort of ‘eliminativism’—in fact, in this sense, you might well call me an eliminativist.

    On the other hand, there are those that hold that the very concept of intentionality is so misguided as to not apply to anything in the world; it should be completely discarded, replaced by an account in which it not simply emerges from non-intentional components, but is instead completely phrased in non-intentional terms. This is where, I think, things get somewhat difficult, and one must tread very carefully to avoid lapsing into the sort of self-falsifying statements that I think BBT falls prey to.

    So, just to be clear, what kind of eliminativist do you consider yourself to be? If you’re of the second kind, how do you propose to deal with the objection I quote above from Peter? What’s a theory about, in a world where nothing is about anything?

  116. Jochen –

    Your last comment highlights some shortcomings of this generally wonderful way of interacting with people having common interests. It surfaced several misunderstandings that would have been immediately corrected or wouldn’t have occurred at all in a face-to-face exchange.

    First, I would never have guessed that you are not a native English speaker. (My wife would argue that neither am I, a Texan!) In particular, use of “quotidian” is not common (in the US, anyway) and is arguably a bit pretentious. But I really like the word (“everyday” is so boring), and so take that chance. (Just curious – what is your native language?)

    And I understand that derived intentionality is a standard concept, it’s just not one I accept. In his essay “The Emergence of Thought”, Donald Davidson argues that thought requires language. Of course, he may be wrong, but that a person of his stature has that position suggests the possibility that the standard understanding of the relationship between philosophical and linguistic “aboutness” is wrong.

    But you were making the different argument that ‘everybody knows’ what aboutness means in everyday terms

    Not quite. My argument is simply that in everyday (ugh – quotidian!) conversation people use “about” (and every other word in their vocabulary) as they have been taught and seldom if ever give any thought to “meaning”. It’s an extreme version of the language-as-tool view. A hammer is useful in many contexts but has no intrinsic “meaning”.

    presumably, Scott wants his theory to be true

    Being a Rorty-style truth skeptic, I think Scott can hope only for acceptance of BBT by his peers. I understand the core of his argument to be that the concept “intentionality” is deemed essential by many or most philosophers mainly because it’s part of the inherited wisdom of the field but that the concept really isn’t well founded. Since I got along quite satisfactorily for all but the previous six years of a long life not having heard of “intentionality”, it’s easy for me to accept his argument. (Altho admittedly, only in the loosest possible sense am I a “peer”.)

    Which seems to put me in your second category, the extreme eliminativist. But I emphasize yet again that the intentional idiom serves a purpose (as part of a tool kit) in quotidian discourse – the objective is its elimination only from formal discourse. And since the presentation of BBT in the two “alien philosophy” essays seems to me closer to informal (though very sophisticated) description than to formal theory, I just don’t see the “self-falsification” that apparently is clear to you and Peter. The former includes “about” since it’s part of Scott’s linguistic tool kit; the latter may not need “about” at all.

    At base, a mathematical theory is just a set of axioms and a bunch of logical conclusions deduced from them. One such collection of axioms and conclusions is called “group theory”. Is the theory “about” an entity named “group” which “exists” only as defined within the theory? Or is the theory really “about” nothing? Are such questions meaningful or merely word play? I don’t know.

  117. Jochen (#110),
    I will not interrupt this dialogue without telling you! I’m just slow, but happy to keep going until a halting condition is found; I can foresee the following ones:
    – Peter asks us to shut up (I’m always surprised by how much OT stuff he allows me to write)
    – We find the bottom of our disagreements and both feel we understand where they come from.
    – We think we’ll never find the bottom and/or one of us loses interest.
    – We end up agreeing (save the world and find the secret of eternal life at the same time, no doubt).

    [I’ll add my comments on BBT separately, if I can find the time]

    To me the discussion is fascinating because we’re tantalisingly close on some crucial matters and astronomically distant on some others, so I still have a hunch about there being a potential reward in this, albeit I have no idea of what such a reward looks like.

    First of all, I don’t at all see why you believe Mary saying something about the rose being beautiful necessitates her having phenomenal experience, unless you build it in beforehand as a requirement or truth condition for her statement (‘She only really means ‘beautiful’ if there’s some experience within her to back it up’), which would be circular. (And if you’d want to go that way, you could just as well declare that her uttering the word ‘red’ only refers if there is some experience of redness to back it up.)

    I’m not sure I want to go that way, but I do think we have to tread very close to that path, whether we like it or not, for reasons we’ve touched on under my “computationalism” posts already. I simply can’t accept that we need to throw out a relevant chunk of the evidence we have, not if the only reason to do so is that it may lead us into circular reasoning.
    The evidence is: looking at beautiful things generates pleasure. We learn to call “beautiful” the things that we find to be “pleasing to the eye”. It’s certainly true that after learning this we can outsource the classification to computers, but the first “meaning attribution” (with scare quotes, as you may not see why I chose to use the loaded word ‘meaning’) involves pleasure. We agree that it’s a simple classification task, on one level, but your account (“pleasure needn’t enter the picture”) is just plain wrong. Remember our chit-chat about interpretation maps? We can teach a mechanism to do the classification accurately, but only if we provide some sort of map (or pre-selected, already classified training data), and we, the humans, can produce datasets which can be used to teach an AI to produce its own map only via a process that does involve pleasure. This doesn’t mean that pleasure isn’t epiphenomenal, but, as a biologist, I find it profoundly odd that something so consistent (I don’t normally find beautiful things to be unpleasurable, and when I do, they do not presently look beautiful to my eye, even if I can predict that other people will find them beautiful) may have been created by natural selection even if it serves no function (see the “Third attempt: evolutionary theory” bit in my comment here).
    [See also this remarkable paper (1902!) which explains the same concepts very well – it does go off the rails on pages 7–8, though. I’ve just discovered it thanks to Andreas Roepstorff.]
    Thus, because of my cultural background, I can only bet on the fact that PE is, has, or is indissolubly linked to something that provides an evolutionary advantage. You know the detail: ETC proposes that PE is inevitably linked with learning from past experience (an unavoidable consequence of re-evaluating memories, if you wish).

    In your case, we are starting with a Mary who’s as human as she can be: if she were a digital computer, the thought experiment would tell us nothing, because she would lack what’s needed to see colours as we do – and if she were a bat, then we wouldn’t have a clue what she knows, how she thinks, or what it is like to perceive the world as she does, right? In fact, we have to accept that Mary does have PE for the whole exercise to even get off the ground: she learns something when a new kind of PE happens to her, remember? This in turn supposedly allows us to draw some conclusion, because we are assumed to “already know” what that particular kind of PE feels like and can’t imagine any way to learn this something that doesn’t start from experiencing it. That’s where our intuition comes in: we conclude that she must learn something new because we can’t imagine anything else.
    Your analogy with ships and paper towels breaks down at this level: if you allow me to hijack it, I’d say that we are having this whole discussion while cruising on a ship made of paper towels. We started the whole journey knowing that it is made of paper towels, and neither of us thought this posed any danger. That’s the starting point: that PE somehow enters the picture (functioning ships can be made entirely of paper towels) is what we know; what we don’t know is how this can possibly be the case.
    You make a logical argument concluding that mere mechanisms can’t generate PE (functioning ships made of paper towels). The foundation is that we supposedly understand everything that mechanisms can potentially produce (we know ships can only be made of paper towels and that paper towels are always permeable). I’m telling you this foundation is unwarranted: in strictly empirical terms, there are no cases where we can accept such a premise.
    Now, in the overall context, things are becoming confusing: on one side, you told us that hypercomputation (HC) can’t be cognised (agreed), and thus HC explains why we conclude that Mary does learn something. On the other, you tell me that our conclusion is warranted by the full strength of deduction (we can accept it as true without empirical verification; it is not a strong intuition, it is something that is necessarily true). That would be fine, but at the same time you are offering an explanation of why the conclusion is wrong: the mere mechanisms within Mary, when coupled with unpredictable input, can be described as real-world ingredients for HC to happen, so somehow the hyper-qualities that result do generate Mary’s mind and PE. Thus, after all, the mere mechanisms within Mary, coupled with the world as we know it (we know the world is somewhat unpredictable), do generate Mary’s mind. Isn’t this eating the cake you want to have?
    You may avoid the need of re-defining intentionality, but I’m still not convinced that your account is coherent, sorry!

    From my perspective, I see other reasons to think that redefinitions are necessary, and I have done a lot of legwork to explain both why and how: a good 50% of my blog, plus a good proportion of my computationalism posts and discussion, is dedicated to this. It tries to show why all knowledge of the physical world is indeed heuristic, an observation which explains why empirical science is the best way to produce reliable (not certain!) knowledge. You only need to accept that deduction rests on induction (with the exception of 100% abstract concepts) and you get out of the vicious cycle you’re discussing with Scott at the same time: we can’t know a priori when a proposition about the physical world is true, but we can make reasonable bets with the aid of empirical verification. In this way we are sometimes able to make predictions which are, FAPP, certain, which is a good thing, isn’t it? However, for all theoretical purposes, all our predictions about the physical world are somewhat uncertain; the nice thing about science is that it can sometimes reduce uncertainty to negligible amounts. If there is no way to reach absolute truths about physical reality, but we can make reliable predictions, you can understand intentionality as founded on reliable correlations between sensory information and environmental variables (correlation here includes the idea that the matching isn’t perfect; occasional mismatches are expected); all the magic/slippery qualities of intentionality disappear in one move, and what remains is amenable to empirical investigation. The price is that we’ve redefined knowledge, and declared Knowledge with a capital K – absolute and objective – impossible in the domain of physical reality. We keep knowledge, the always “a little uncertain” version of Knowledge, which aspires to the capital-K status but can never reach it.

    If redefinitions are necessary, we immediately explain why the hard problem is hard (our pre/proto-theoretical definitions of the explananda are wrong), why intentionality is such a slippery philosophical concept (ditto), why Mary’s story isn’t useful (it rests on fallible intuitions), why zombies can’t possibly exist (the zimbo thought experiment already tells us they must be indistinguishable from us; the zombie/human distinction is meaningless as it can’t be made, not even by and about ourselves – zombies must believe they have PE exactly as we do), why all our intuitions are fallible, why deductive accounts may still need to be verified empirically, and more. In other words, you explain why centuries of philosophising didn’t find good enough answers, and you do so without smuggling in some new form of inscrutability: the epistemology of science already accepts the heuristic (fallible) nature of knowledge, so I still see no need to add HC to the mix; all we need is to follow the chain of evidence that we already have.

    Having said all this: (certain/many) algorithms + unpredictable input -> unpredictable output. Can we agree on this? If we can, it follows that Mary does learn something new but also that this doesn’t imply that mechanisms can’t generate PE/Minds: all we are saying is that she couldn’t know in advance exactly all the characteristics of the input signal generated by the red rose. If we agree, the observation that (certain) algorithms + unpredictable input -> HC becomes redundant :-(.
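
    Just to pin down what I mean by that arrow, here is a minimal sketch in Python (the hashing function is an arbitrary stand-in; nothing hangs on the choice): the algorithm is fully known and deterministic, yet nobody can write its output down in advance, simply because nobody can write its input down in advance.

        # A fully known, deterministic algorithm whose output is nevertheless
        # unpredictable in advance, because its input is.
        import hashlib
        import os

        def fully_known_algorithm(data: bytes) -> str:
            """A deterministic procedure anyone can inspect and re-run."""
            return hashlib.sha256(data).hexdigest()

        # The input comes from an unpredictable source (here, the OS entropy pool).
        unpredictable_input = os.urandom(16)

        # Nothing beyond ordinary Turing-style computation has happened, yet the
        # output could not have been written down beforehand.
        print(fully_known_algorithm(unpredictable_input))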

    Thus, we go back to Mary declaring the red rose beautiful: if you are right, she does so algorithmically, following mechanisms that can and do produce hypercomputations (also) because their input isn’t predictable. HC would explain why we can’t see how this is possible, but not how this is possible. Furthermore, it also means that if we feed a potentially conscious system with known (and thus predictable) input, no consciousness will happen. So, if I measure the input with enough precision, and then re-feed the system with it, the first time consciousness happens, but the second it doesn’t??
    On my account, the mechanistic explanation necessarily includes whatever it is that generates pleasure in seeing the rose, so it necessarily includes the mechanism that generates PE. This remains true even if we could perfectly control the input. To disagree with this, you need to convince me that Mary would declare the rose beautiful even if seeing it didn’t generate (or wasn’t somewhat associated with) pleasure. Trouble is, you can’t, because to my eyes you would be negating the whole of my own experience, and since my own experience is the only direct empirical evidence that I do have (as opposed to reported empirical evidence), you just have no way to take it away from me.

    In short, I’m still inclined to eat my own cake, if you don’t mind ;-). The question for me is: did I manage to explain why?

    The above applies to the differences between us, leaving me the pleasure of mentioning what (I think) we agree on:

    in a sense, skepticism about the content of our minds is way worse than your ordinary brain-in-a-vat skepticism about the external world. […]

    Indeed! I think we really are on the same page on this: uncompromising eliminativism can only end up in two ways, incoherence or irrelevancy. If you give me an account of all of human behaviour, but can’t from it derive whether a given human is happy or sad, feeling pleasure or pain, I’d be astonished, but also very interested in seeing how much you had to redefine. On the face of it, I can’t imagine how such a theory might work.

    I can simply bite the bullet: using our best current theories and understanding of matter, I am licensed to draw conclusions from this at least until it is repealed.

    Yes you can, I shouldn’t be complaining about this. The problem I have is with the following bit:

    Thus, anybody wishing to counter my argument would have to supply a constructive argument

    This refers to “a constructive argument” on how mechanisms produce Mind/PE (along with meaning and intentionality, they form what I’ve called the “problematic bunch”, PB). Now, the PB is usually infused with mystical qualities; all I’m saying is that:
    a. Our idea of why the PB must have exotic qualities is wrong.
    b. At least one mechanistic account (ETC) can explain why the PB introspectively seems to have such qualities (I’m not alone in this, at least Graziano and Metzinger would agree with me on this one, albeit they have their own accounts in lieu of ETC, not that they are all that different…).
    Thus, I do have a constructive argument, but wouldn’t really need it: all I need is an agreeable argument about why we should understand the PB better and stop assuming that it must have exotic qualities; it merely must feel like it has them. In your paper towels metaphor, it would become: we assume paper-towel-ships can sail indefinitely, but in fact they don’t. In other words, what we are trying to explain doesn’t exist as such (things are different from what they seem), so it’s no wonder that we’re failing to find suitable explanations. In a way, this rescues your claim that the problem is knowledge, not the lack thereof – only it’s wrong knowledge – so I assume you’ll disagree!

    Trying to summarise the above I realise that I’m attacking you from every direction: I challenge your premises (we don’t know enough to establish that mechanisms can’t produce PE), I challenge the logic that follows (I don’t need a positive account, just to show why your chosen explanandum is ill-defined), and I also suggest that your conclusions imply absurd scenarios (the same input on the same bunch of mechanisms can produce PE or not depending on a third party’s ability to predict it). Exactly why and how I think we are close escapes me as well, but at least I do hope you see why I am doing so with the friendliest of intentions. I’m still being provocative on purpose; this is what I’d expect from my own friends: challenge my ideas as vigorously as they can.

    Anyway, it’s a real pleasure to debate with you, I hope my slowness isn’t too annoying. Apologies if it is!

  118. Charles, I’ll add ‘quotidian’ to my vocabulary. Nothing wrong with a bit of pretentious language every now and then—language may be a tool, but it’s also an art! (My native language is German, by the way.)

    Now, regarding your views about language pragmatism etc., I’m afraid I simply must part company with you. It might just be my natural science background showing, but I don’t really see any sense to theorizing if you don’t have the expectation that the theory you espouse actually captures some matters of fact. And if that’s one’s expectation, then I think the problem with BBT is fatal, since it essentially precludes us from getting at those matters of fact.

    Likewise, the concept of a ‘truth skeptic’ is also something that I don’t see how to make sense of. Skepticism, to me, means being unconvinced of the truth of something—but whatever could ‘being unconvinced of the truth of (perhaps the existence of) truth’ mean?

    And certainly, even the most hardened pragmatist agrees that Maxwellian electromagnetism is superior to the Zeus’ wrath theory—if not in an explanatory sense, then at least in a purely empirical or consequential one (Maxwell’s theory leading to decidedly more technological innovations than belief in Zeus’ wrath). So even if one doesn’t want to talk about truth, there’s a difference in the justification of belief in different theories—but we can’t assess whether we’re justified in believing in BBT, because fundamentally, we don’t even know what the theory is if, as it (apparently) claims, we don’t know what ‘aboutness’ means. (Remember, it claims that the quotidian use of ‘aboutness’ is ultimately mistaken.)

    Regarding mathematical theories, well, that’s a whole nother muddle. Basically, I think that mathematics is ultimately a theory of structures—roughly, of the relationships into which objects may enter—in the abstract; so while those structures don’t have any existence of their own, they are realized in terms of objects of one sort or another. The structure is that which we transfer from one part of the world to another that we wish to use as a model of the first; using this model system, we can then derive consequences for the original one that hold if the latter faithfully implements the former’s structure. That’s in a nutshell how mathematical modeling works.

    Group theory is a particularly enlightening example here. Basically, groups concern the symmetry structure of objects—those transformations that leave the object unchanged. Take a circle: it is unchanged under arbitrary rotations; those rotations form the group called ‘U(1)’, for reasons I don’t want to go into here. Rotations can be combined in a certain way: a rotation, followed by another rotation, yields again a total rotation—this yields the multiplication law of group elements. The fact that for a circle, it doesn’t matter which rotation we carry out first, means that multiplication is commutative. Since every rotation can be undone by a rotation in the opposite direction, there always exists an inverse element. Furthermore, there’s a special rotation—by 0 degrees—that corresponds to merely the identity transformation of the circle (leaving it unrotated). For three consecutive rotations, it doesn’t matter how we group them together (leading to associativity of the multiplication). These are just the group axioms (well, the axioms for an Abelian group; in general, commutativity is not required, and in fact fails for, e.g., rotations in higher dimensions).
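
    For concreteness, the circle example can be checked numerically in a few lines (just a toy sketch; representing the rotations as unit complex numbers is one standard way of presenting U(1)):

        # U(1) rotations represented as unit complex numbers e^(i*theta);
        # group multiplication is ordinary complex multiplication.
        import cmath
        import random

        def rotation(theta: float) -> complex:
            """The group element 'rotate by theta radians'."""
            return cmath.exp(1j * theta)

        a, b, c = (rotation(random.uniform(0, 6.28)) for _ in range(3))
        identity = rotation(0.0)                  # rotation by 0 degrees
        inverse_of_a = rotation(-cmath.phase(a))  # rotating back the other way

        close = lambda x, y: abs(x - y) < 1e-9
        assert close(a * b, b * a)                # commutativity (Abelian)
        assert close((a * b) * c, a * (b * c))    # associativity
        assert close(a * identity, a)             # identity element
        assert close(a * inverse_of_a, identity)  # inverse element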

    So for individual groups, there exist objects, at least imaginable ones, that ‘realize’ the group in terms of a particular relationship they have with themselves—that of symmetry. Therefore, it seems that the group is as real as that object—or, as real as the properties of an object are.

  119. Sergio, this time it’s my turn to apologize—I do get bogged down at work occasionally, as well. Anyway, I’ll try to be somewhat brief (for a change), in order to try to maybe crystallize the root causes of our disagreement (how’s that for a mixed metaphor). (Well, that brevity thing didn’t turn out so well. Guess it’s just not for me…)

    So, you write:

    Thus, because of my cultural background, I can only bet on the fact that PE is, has, or is indissolubly linked to something that provides an evolutionary advantage.

    Which is something I’d likely agree with, with some caveats. But your argument really depends on something much stronger than that, namely that this linkage is in some sense to be brought within the paradigm of mechanical explanation. And that’s where things get question-begging, since this is effectively the position you need to argue for.

    So let’s for the moment suppose that something like panpsychism, or property dualism, or some other nonreductive account of mentality holds. Then, it’s entirely possible that there is a mechanical explanation for Mary’s exclamation ‘oh, what a beautiful red rose’, but that mechanical explanation would not include anything regarding phenomenal experience—that comes about, say, due to some psychophysical linkage laws that hold in addition to the physical ones. In such a situation, it would also be possible that those sorts of things that are selected for evolutionarily are linked to phenomenal experience—say, phenomenal experience goes along with information processing (without being reducible to it).

    So on such an account (a bit like what David Chalmers holds to), we’d have an evolutionarily selected capacity for phenomenal experience, yet simple mechanical explanation does not include PE. So my version of the story of Mary would be right, while nevertheless your contention that PE ‘is linked to’ something evolutionarily selected for would hold, likewise. Hence, the two views are only in opposition if you make the further assumption that PE must be reducible to mechanism. But this is exactly the question that’s at issue.

    In other words, dragging ‘pleasure’—more accurately, the subjective experience thereof—into the discussion is at best a red herring, and at worst, simple question-begging. Ultimately, what we find pleasurable—like what we find beautiful—can be codified into a set of rules, and it is entirely possible that entities implementing that set of rules could arise, without their arising depending on their phenomenal experience—they could, for instance, simply materialize out of the primordial dust in a Boltzmann-brain-esque fashion, if one waits long enough. So then, those are beings that will respond to stimuli with the same judgments of beauty and pleasure that Mary does, but to accord subjective experience to them would simply be begging the question: you’d have instead to argue, constructively, that indeed, any such being necessarily must have PE.

    You make a logic argument concluding that mere mechanisms can’t generate PE (functioning ships made of paper towels). The foundation is that we supposedly understand everything that mechanisms can potentially produce (we know ships can only be made of paper towels and that paper towels are always permeable).

    The part in italics is false: we don’t have to understand everything that mechanisms can produce for the argument; we don’t need to know all the infinitely many ways in which mechanisms can be put together (which, I agree, would be impossible). Rather, we merely need to know what mechanisms can’t produce, and this we can deduce by knowing how mechanisms produce things. We know that paper towels don’t make ships, because we know that paper towels disintegrate upon contact with water. That doesn’t require any knowledge at all about the infinitely many things that can be built from paper towels!

    So, as Galileo, Descartes, Newton and all of those giants past realized, the explanations we can formulate are fundamentally of a special, mechanical (or I would say, structural) character. Think billiard balls: two billiard balls impact on one another, changing directions according to their energy and momentum. Basically, that’s it. Modern quantum physics etc. might have changed the details, but the basic principle at work—local interaction—has remained steadfast. (And indeed, since you can build universal computers using billiard balls, we can take the analogy literally—all the other stuff is then simply a simulation run on the billiard ball computer.)
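
    To make the ‘universal computers using billiard balls’ remark a little more tangible, here is the logic of the interaction gate from Fredkin and Toffoli’s billiard-ball model, abstracted away from the actual dynamics (a sketch of the logic only, not a simulation):

        # The 'interaction gate' of the billiard-ball computer: True means a ball
        # is present on that path. Two balls collide exactly when both are present;
        # the occupancy of the four outgoing paths then computes Boolean functions.
        def interaction_gate(a: bool, b: bool):
            return (a and b,        # collision path 1: both balls present
                    a and b,        # collision path 2: both balls present
                    a and not b,    # a passed straight through, undisturbed
                    b and not a)    # b passed straight through, undisturbed

        # AND falls out directly; with b held at a constant stream of balls,
        # the last output reads NOT a. Together with routing ('mirrors'),
        # this suffices for universal computation.
        assert interaction_gate(True, True)[0] is True    # a AND b
        assert interaction_gate(False, True)[3] is True   # NOT a, given b = True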

    Now, a red rose sends out a certain kind of billiard ball; this billiard ball impacts another (the ‘sensory apparatus’ of some entity), which then triggers a mechanism, producing the sounds (billiard balls themselves) ‘Oh, what a beautiful red rose’. I suppose you’ll agree with me that we can tell that no consciousness is present in this simple setup.

    But every further elaboration on this is, at bottom, just the inclusion of more billiard ball interactions—certainly, even given a fairly modest sophistication, it becomes impossible to hold all of this complexity within one’s mind, but the kind of explanation doesn’t change—at any given point, all we have is billiard balls ricocheting off one another. So given that billiard ball collisions don’t produce conscious experience, and that everything, insofar as it is mechanical, can be reduced to billiard ball collisions, yes, we can indeed deductively say that no consciousness is necessary in Mary uttering things about roses.

    Now, in the overall context, things are becoming confusing: on one side, you told us that hypercomputation (HC) can’t be cognised (agreed), and thus HC explains why we conclude that Mary does learn something. On the other, you tell me that our conclusion is warranted by the whole strength of deduction (we can accept it as true without empirical verification, it is not a strong intuition, it is something that is necessarily true).

    Perhaps it’s easier to understand by analogy to Gödel’s theorem: you have a set of axioms, and its deductive consequences, which are the provable theorems of the system; but this misses the Gödel sentence (and actually, infinitely many others). So the deductive consequences do not exhaust the true statements. Hence, we may deductively conclude that no phenomenal experience is present in mechanical systems, but if we allow for the possibility of hypercomputation, then our deductions simply fail to exhaust all of the effects the system produces, opening up the possibility for a peaceful coexistence between meaningful conscious experience and a mechanistic/physicalistic view of the world. That’s not having your cake and eating it; that’s eating your cake and realizing there still is another left!

    we are having this whole discussion while cruising on a ship made of paper towels. We started the whole journey knowing that it is made of paper towels, and neither of us thought this posed any danger. That’s the starting point: that PE somehow enters the picture (functioning ships can be made entirely of paper towels) is what we know, what we don’t know is how this can possibly be the case.

    No. We are on a ship, alright, but we don’t know that it’s made from paper towels; in fact, it gives every appearance not to be. It’s only the fact that paper towels are the only explanatory entities we have to work with that makes us believe the ship is made from them—this hypothesis would never otherwise even have entered our minds. (As the mind-body problem did not occur to people pre-Galileo et al.) Supposing the ship is made from paper towels as a starting point is exactly the sort of question begging that smuggles phenomenal experience into the Mary story. (Well, of course, Mary does have phenomenal experience, but that this phenomenal experience is mechanically explicable is what would have to be shown, and is indeed what the argument questions.)

    You only need to accept that deduction rests on induction

    I think the bit around this quote is a misunderstanding of how science works (well, or is often idealized to work). Our predictions aren’t uncertain; whether they manifest is. That is to say, we start off with a theory from which we deduce predictions—this is, if done correctly, a completely certain procedure: if the theory is right, then the prediction must manifest. Hence, if the prediction fails to manifest, we know the theory is wrong, and must look for a better one. In this way, while no theory ever is completely certain, we know exactly what would be the case if our theory is the right one; and it’s that knowledge that my argument rests on, not some sort of a priori knowledge of the world beyond our experience. Induction just doesn’t enter it (indeed, keeping induction out of it was sort of the whole motivation for framing the scientific method in this way).

    That’s not to say that I claim absolute certainty in the conclusion of my argument: deductive arguments may nevertheless be wrong. The premises may be false, or the logic used may be fallacious (in which case it’s probably not strictly speaking deductive, but only appeared to be that way before the error was pointed out). This may indeed be the case in the argument that from our knowledge of the physical world, we can deduce that the kinds of processes do not lead to phenomenal experience. But this doesn’t change the form of the argument: it’s deductive all the same. Thus, I believe your strategy is erroneous: as the argument isn’t inductive, you can’t point to the unreliability of induction (or intuition), which is basically what you do; in order to repeal the argument, you’d rather have to show explicitly how, and where, it fails—i.e. expose a flaw in its premises, or its logic, or find a counterexample (this is what I mean by ‘constructive argument’).

    So, if I measure the input with enough precision, and then re-feed the system with it, the first time consciousness happens, but the second it doesn’t??

    No, that’s very wrong. Hypercomputation occurs when the input is nonalgorithmical—i.e. doesn’t admit any systematic compression, for one definition. Whether or not the input is known has nothing to do with it. Furthermore, it’s not the outputs of the computation that are in some way ‘unknown’ due to the nonalgorithmic input, it’s that this input enables machines to perform functions that they would not be capable of otherwise, such as, for instance, going through the Zeno regress. This is why Mary can’t predict her experience upon seeing the red rose; not simply because she doesn’t know the input, hence the output, but because the relevant computation can’t be performed by an ordinary Turing equivalent machine. An algorithm getting unknown input isn’t hypercomputation; it’s just an ordinary computer with a monkey at the keyboard.

    If you give me an account of all of human behaviour, but can’t from it derive whether a given human is happy or sad, feeling pleasure or pain, I’d be astonished, but also very interested in seeing how much you had to redefine. On the face of it, I can’t imagine how such a theory might work.

    Hmm, I wonder if I understood you correctly here—because ever since the fall of behaviourism, I don’t think anybody has held to the idea that an account of human behaviour really paints a complete picture. And the theory you ask for is trivial: simply record all possible stimuli, and observe all responses, and put those into a giant lookup table. Presto: a perfectly faithful account of all human behaviour, no idea at all of whether they’re happy, sad, or if there’s anything going on inside at all. No redefinitions necessary! (And of course, any billiard ball or otherwise mechanical account is of the same kind, because ultimately, really all that you get at this way is behaviour—behaviour of brain areas, neurons, or atoms, but still merely behaviour.)
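
    If it helps, the kind of ‘theory’ I have in mind is as trivial as the following sketch (the entries are invented, obviously):

        # A crude sketch of the 'giant lookup table' account of behaviour: every
        # stimulus maps to a canned response, and nothing about inner states is
        # represented anywhere.
        behaviour_table = {
            "shown a red rose": "Oh, what a beautiful red rose",
            "stubs toe": "Ouch!",
            "asked 'are you happy?'": "Yes, quite happy, thanks",
        }

        def respond(stimulus: str) -> str:
            # Pure stimulus -> response; whether anything is 'going on inside'
            # simply does not figure in the account.
            return behaviour_table.get(stimulus, "...")

        print(respond("shown a red rose"))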

    stop assuming that it must have exotic qualities; it merely must feel like it has them

    I think this is the point at which I always fail to understand a purported explanation of this kind: people say, it doesn’t actually feel like anything, it just feels like it feels like something! And then smile and go on their merry way as if they’ve just let me in on a great secret. But I’m just sat there, confused as ever: how can it feel like it feels like anything, if nothing ever feels like anything? Pretty much every eliminativist account (of the second kind I’ve mentioned above) I’ve ever come across has this problem—it purports to overthrow the very grounds that are necessary for its acceptance. People feel they’ve solved at least part of the problem, even though all they’ve managed is to kick the can a little further down the road, which is, then, taken as evidence for progress. Our phenomenal experiences are explained—they’re just illusions! If there’s no phenomenal experience, how can we have illusions, you ask? Well, we’re working on that. But, progress!

    Now of course, that’s a caricature. And I’m not really meaning to be as harsh as I probably sound in this reply—as you say, all in good fun. (If I do come off as overly brash, just chalk it up to me having a somewhat exhausting workweek; I’m still genuinely enjoying this.)

  120. Jochen –

    I mentioned to my wife that I was engaged in an interesting exchange with “Jochen”, and based on the name she guessed you might be a German speaker – perhaps Austrian?

    [BBT] claims that the quotidian use of ‘aboutness’ is ultimately mistaken.

    That’s not how I interpret BBT (at least as presented in those “Alien Philosophy” essays), but I’m happy to leave defending BBT to Scott. Anyway, it finally occurred to me that our disconnect may be due to different interpretations of “intentionality”. You focus on the idea that a belief, desire, thought, etc, is “about” some entity. When I speak of the “intentional idiom”, I’m speaking of those words/concepts themselves, eg, “belief”. My position is simply that for discourse in most contexts, “belief” is a useful word which typically causes no problems. However, as the context moves from quotidian to psychological to physiological to neurological, a context will be reached in which behavior will need to be explained in a vocabulary that doesn’t include “belief”. It seems to me that there is a widespread tendency to keep using “belief” – and the “intentional idiom” generally – long after it should have been abandoned. Eg, in quotidian discourse we would be hard pressed to avoid use of “belief”, “desire”, et al, but it seems quite unlikely that we’ll ever be able to reify those concepts as types of neural structure. (What I take – not necessarily correctly – to be one interpretation of Davidson’s anomalous monism.) So, I think quotidian use of “belief” is “wrong” only in the sense that Newton’s laws of motion are strictly speaking “wrong” – but nonetheless undeniably useful in certain contexts.

    All I can say in defense of “truth skepticism” is that although admittedly a controversial position, it’s held by credible philosophers, notably Richard Rorty. Criticisms I’ve read usually exhibit some degree of misunderstanding (sometimes seemingly willful). Should you ever be interested in pursuing it, try the three essays in Part 1 of Rorty’s “Contingency, Irony, and Solidarity”. They have special significance to me, having motivated my interest in this subject matter.

    Finally, I understand that a physical system can be modeled using an abstract mathematical structure, but I don’t see that as relevant to my point, which was just to question the central role that the word “about” seems to play in some discussions of intentionality. But perhaps the ambiguity of that word (as discussed above) explains that disconnect.

  121. Jochen, I’m late as always… (don’t apologise for a delay of a few days, you embarrass me! 😉 )

    Indeed, I think this conversation is worth having; in fact, that becomes obvious early on:

    Hence, the two views are only in opposition if you make the further assumption that PE must be reducible to mechanism.

    I really like how you reach this point, and so far, I agree.

    But this is exactly the question that’s at issue.

    Well, this is interesting, because I personally wouldn’t reach this point. Let’s see if we can leave aside paper towels and follow a common path for a little while:
    – Science has so far been able to mechanistically explain a lot of phenomena that seemed to be only explainable via some supernatural ingredient.
    – In the process, it kept “naturalising” humanity, making our assumption of being very special more and more questionable.
    As the pattern is well established, by induction, lots of us expect that history will repeat itself and explain one of the last bastions of supernatural (dualist, magic dust, you know what I’m talking about) explanatory power: consciousness, and in particular phenomenal experience (PE).
    Thus we start with the expectation that mechanisms are all there is, and get to lucubrate about how this can possibly be.
    You may not count yourself in this crowd, but you most likely recognise that this isn’t an uncommon stance.
    In comes Mary, supposedly telling us that we must be wrong, because she learnt something when she saw colours for the first time.
    Thus, for me, the task is:
    (aim) defend the inductive premise.
    (method) undermine Mary’s argument just a little bit, and move it from “must be wrong” to “appears to be wrong” or even “we haven’t got a clue how it could be right”.
    By contrast, you take the assumption to be a “further assumption”, while I see it as my foundation. From your point of view, you can however conclude that:

    dragging ‘pleasure’—more accurately, the subjective experience thereof—into the discussion is at best a red herring, and at worst, simple question-begging.

    And I can see why (I think), but can’t agree, because I have strong inductive reasons to assume that mechanisms will explain PE, and feel that Mary’s story can be weakened very easily in a number of ways.
    I’ll try to summarise (some of) them:
    Knowledge: the whole thought experiment rests on the notion of acquiring new knowledge, and gives knowledge acquisition ontological powers. Because our imaginary Mary would learn something, we should conclude that something more than mere mechanisms must exist. But if we accept that knowledge is local, and that, outside pure abstractions, it is always provisional, perfectible and never 100% precise (there are at least two perfectible dimensions of knowledge: reliability and precision), then we can conclude that even if the logic of Mary’s story is correct, her knowledge must be fallible/incomplete, and therefore she will always have something to learn. This weakens the must above, operating within Mary’s story.
    Similarly, via the same hypothesis, our own knowledge is fallible, and therefore we can’t be sure that a thought experiment entails a strong conclusion (the usual “must”); it may or may not, leaving our hypothesis viable (we can’t be sure it’s false).
    Incidentally, the assumption that mere mechanisms are all there is suggests in itself that knowledge must be local, fallible and never 100% reliable, granting this approach a nice feeling of coherence (which does also smell of circularity, to be honest).
    At an even more general level, the closest we can get to objective Knowledge (capital K) is via the scientific method (presuming such a thing exists, for simplicity), but science works via the assumption that no knowledge is absolute. It allows us to get ever closer to objective Knowledge because it never assumes it has reached it and is always open to refinement.
    Epistemologically, this makes it acceptable to grant foundational status to induction, as the whole enterprise rests on the idea that reality is underlain by at least some regularities. We have the strongest possible inductive reasons to believe that that’s the case, but they are still inductive. Thus, accepting all knowledge as provisional makes it possible (and necessary) to incorporate and embrace the uncertainty entailed by induction.
    This allows us to do science, produce theories and apply deduction to produce predictions, but we still rest on induction, and we are actively acknowledging that theories must be perfectible (in either the domain of reliability, precision, or both). So, deduction always rests on induction: a deductive argument, when used to make predictions outside the purely abstract, never comes with absolute certainty, and therefore, Mary’s story may provide a strong indication, but can’t entail a must.
    I’ve used other arguments to strengthen the notion that knowledge is always perfectible, but you surely do get the gist: the whole landscape looks coherent to me, and ties back to the starting point. We got off the ground via induction, acknowledging that this doesn’t guarantee we’ll get it right, but along the way we found evidence that it’s the only approach we can take, and that this approach allows us to keep increasing the reliability and precision of the conclusions we may find. The vigorous challenge provided by Mary ends up reinforcing our inductive premises, because, by accepting them, we are also able to resist the challenge.

    I’m restating all the above because I think it highlights a difference between you and me: on one side, you think that “is it possible that mechanisms are all there is?” is a good question to get off the ground; on the other, you don’t think that deduction can only provide conclusions that are at most as reliable as the inductive assumptions it rests upon (I’m not sure whether you think that deduction about non-abstract matters does not necessarily rest on inductive premises), and you therefore end up creating a separate worldview, which seems to be close to, but also irredeemably different from, mine.
    If you agree with my diagnosis, we could try to reconcile our premises, or just agree that we are espousing incompatible epistemologies and explain our disagreements in this way.

    Reading your last reply further, I find things that confirm my impression, for example:

    Rather, we merely need to know what mechanisms can’t produce, and this we can deduce by knowing how mechanisms produce things.

    I disagree, because I assume that our knowledge of how mechanisms produce things must be somewhat incomplete. And therefore I assume we can’t know for sure what mechanisms can’t produce.
    Moreover, I still smell a whiff of incoherence (one reason why I think my insistence may yield more than just a pleasurable debate):

    [via Gödel] the deductive consequences do not exhaust the true statements. Hence, we may deductively conclude that no phenomenal experience is present in mechanical systems, but if we allow for the possibility of hypercomputation, then our deductions simply fail to exhaust all of the effects the system produces

    If you take out hypercomputation, the above is making my point: deductions necessarily can’t exhaust all possibilities. As I’ve said before, hypercomputation is redundant – just an example of something you can’t deduce in a straightforward way. Which doesn’t mean that it isn’t part of the solution, of course!
    A similar pattern repeats:

    [in science] we know exactly what would be the case if our theory is the right one; […] Induction just doesn’t enter it [… But] deductive arguments may nevertheless be wrong. The premises may be false, or the logic used may be fallacious

    And thus induction sneaks in again: how does it happen that you might be using wrong premises? If you follow a chain of premises, eventually you’ll find some that rest on induction. That’s why they may always be false, and that’s why you can never have 100% confidence in a deductive argument [and why Mary’s story can’t be definite proof of anything].

    My main case rests here. I can’t see how to take induction out of the picture, and if I keep it in, hypercomputation becomes roughly equivalent to “and then something unimaginably complex happens”, so I remain sceptical (but still interested, because I do think we need to clarify what the unimaginably complex thing is, which means making it imaginable).

    Minor points:

    So, if I measure the input with enough precision, and then re-feed the system with it, the first time consciousness happens, but the second it doesn’t??

    No, that’s very wrong. Hypercomputation occurs when the input is nonalgorithmical—i.e. doesn’t admit any systematic compression, for one definition.

    Sorry. I’m clumsily trying to point out that I still don’t understand your account of hypercomputation and/or nonalgorithmical input. This is most likely due to my ignorance, so it’s OK to drop it, in case schooling me is as boring as I’d expect. For example, in my understanding, there is no universal way of finding out if a sequence is algorithmical or not (compressible or not, etc). But your account seems to imply that we function the way we do because we constantly receive nonalgorithmical input; I think the above, in my understanding, can be used to produce many apparently contradictory predictions. Given my ignorance, I’m happy to concede this point, but I’d like you to keep in mind this source of doubt/confusion, because I expect I won’t be the only one who will struggle with it.
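
    To illustrate the kind of thing I’m worried about, here is a toy sketch (it uses zlib compression as a very crude stand-in for algorithmic compressibility, which, as far as I understand, has no general computable test): a sequence generated by a few lines of code looks just as incompressible as one we have no generating program for.

        # A compressor cannot tell a pseudorandom sequence (generated by a tiny,
        # fully algorithmic program) from data we have no generating program for.
        # zlib is only a rough proxy: true Kolmogorov complexity is uncomputable.
        import os
        import random
        import zlib

        random.seed(42)
        pseudorandom = bytes(random.getrandbits(8) for _ in range(10_000))
        unknown_source = os.urandom(10_000)

        for name, data in [("pseudorandom", pseudorandom), ("os.urandom", unknown_source)]:
            ratio = len(zlib.compress(data, 9)) / len(data)
            print(f"{name}: compressed to {ratio:.2%} of original size")
        # Both come out essentially incompressible, even though the first one is
        # 'algorithmic' in the most straightforward sense.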

    […]how can it feel like it feels like anything, if nothing ever feels like anything?[…]

    Fair point. That’s why I’m trying hard to avoid the eliminativist trap. If you go deep enough in there, you inevitably leave out your intended explananda. I share and sympathise with your frustration, and can’t presume that I’ve managed to avoid the trap. Unfortunately, most of my efforts are attempting to avoid this particular mistake, so I was hoping you wouldn’t attribute it to me. But, alas, you did, so now I’m thinking that I need to make my own argument better, and find a way to make sure that people will see why it isn’t eliminativist at all.

    I’m ok with the caricature, and with you explicitly calling me out whenever you spot an error, that’s why I am enjoying the debate. No need to sugar coat your rebuttals, directness helps clarity: I know you aren’t solely interested in “winning the argument”, I would have left long ago if I thought you were. [In other words, I don’t detect any harshness/aggressiveness/ill-will from you, and do hope the feeling is mutual!]

  122. Hey Charles, your wife got the language right, but I hail from Germany, not Austria. Both countries well known for giving rise to now-unfashionable philosophies with names that don’t mean what you’d think they mean (positivists are sometimes quite negative, while idealists don’t necessarily have strong ideals).

    Regarding aboutness, to me, the “quotidian” use is that whenever we say “x is about y”, then actually, x has y as its content, or is directed at y, etc. “Apple” means apple, that thing out there in the world that I can sink my teeth into, that is capable of nourishment, has a certain taste, etc. So when I say “this apple tastes nice”, that sentence likewise refers or points to something out there, namely the aforementioned apple, and asserts that it has a certain quality, that of tasting nice. This is, I think, how most people would understand the sentence.

    The problem with this is that it’s deeply mysterious, if this understanding in fact holds. How can those pixels on the screen, pressure-waves emanating from my throat, scribbles on a piece of paper point to anything beyond themselves? This is, as I understand it, the problem of intentionality—exactly a problem with the quotidian use of words like ‘about’, etc.

    BBT, now, to the best of my ability to tell (and apparently corroborated by Scott’s replies to some of my questions, where he was adamant that ‘nothing ever really is about anything’, or, according to your recollection, that sentences using the intentional idiom are neither true nor false), asserts that this quotidian use of aboutness is mistaken, and hence, that the problem we have in explaining it is merely a pseudoproblem, which seems at first blush like an attractive position, as it seems to do away with the difficulties. Thus, it seems to me that the quotidian use of aboutness being wrong is exactly the point of BBT.

    Or do you understand something different as being the ‘quotidian meaning of ‘about”?

    Have you ever let two chatbots talk to each other? Generally, things degenerate into utter nonsense quickly, but, and here’s the thing, the chatbots don’t notice. Within their operational parameters, they are carrying out a perfectly adequate conversation: every string of text prompts another; no matter that the whole thing is composed of non sequiturs.

    Now my worry is just that views such as BBT, eliminativism, skepticism about truth, and so on, essentially amount to assuming that we’re really not better off than those chatbots. We also just produce some strings of symbols, that then prompt certain kinds of behaviour—with the meaning of those symbols essentially not playing any role at all. Like the chatbots, our conversation would thus be, at bottom, utter nonsense; we just don’t notice. But of course, such a position, despite being quite depressing, is also utter nonsense: because it would itself just be another string of symbols one chatbot produced, and another reacted to in a certain way, without any relation to things themselves, to matters of fact being a certain way.

    So as with Sergio’s hyper-Cartesian doubt where even the elements of the mind are suspect, while I probably can’t mount a successful attack against it, it seems to me such an unattractive position that I’d rather give up philosophizing entirely than adopt it (indeed, taking up that position would mean giving up the ability to say anything substantial at all). So, in my own naive idealism, I try to instead construct a positive account of how the world might be in order to fashion our experience of and within it; that might be an impossible task, but I don’t believe that has been established as yet.

  123. Hey Sergio! As always, I’m just glad to continue this, so no more apologies etc. I think we’re both mainly interested in making ourselves clear enough to one another (a difficult task, since we both have quite different backgrounds), so to at least make our disagreement reasonable, and not merely grounded in misunderstanding. I think that’s an effort well worth making. So let’s get on Mary’s case again:

    As the pattern is well established, by induction, lots of us expect that history will repeat and explain one of the last bastions of supernatural (dualist, magic dust, you know what I’m talking about) explanatory power: consciousness, and more in particular phenomenal experience (PE).

    It’s certainly very reasonable to harbor such expectations, also for different reasons, such as the problem of how to get different ‘substances’ to interact with one another, as Princess Elisabeth so astutely pointed out to Descartes, reducing him to babbling about the pineal gland, and saddling all future dualists with a burden that I think has yet to be met.

    But I want to, just parenthetically, note that I don’t agree with your implication that anything that differs from mechanism is equivalent to supernaturalism. This implies a commitment to the idea that the natural and the mechanical are coextensive, which you’ve certainly not demonstrated, and which, I think, would not find unanimous support, either in philosophy or in the natural sciences.

    Bohr, for instance, considered quantum mechanics to be a very distinct departure from mechanism in the natural sciences, talking, for example, about the ‘non-mechanical disturbance’ at play in the famous Einstein-Podolsky-Rosen thought experiment. Indeed, Bohr is often considered to be an instrumentalist, but he called himself a staunch realist; he just believed that the notion of ‘mechanical’ realism that’s been dominant since Galileo, Newton, etc., has been demonstrated to be misguided by quantum theory. So my first reaction would just be to deny that it’s necessarily supernatural if it fails to be mechanical.

    And I can see why (I think), but can’t agree, because I have strong inductive reasons to assume that mechanisms will explain PE, and feel that Mary’s story can be weakened very easily in a number of ways.

    That may very well be the case, but it’s question-begging as a defense against the Mary argument: as you say, you have reason to believe that some sort of mechanist explanation will ultimately account for PE. And then along comes Mary, ostensibly demonstrating that this is not the case; hence, if the Mary story succeeds at its aim, your reasons to believe in mechanism must be insufficient. But you can’t then turn things around and say that, because you have grounds to believe in mechanism, something about the Mary story must be false—it is those very grounds the story questions.

    So an appeal to PE as being mechanically explicable in order to attack the Mary argument does not accomplish anything; at best, it’s just an attempt to sweep the argument under the rug by flat denial. So I feel compelled to reiterate my earlier assertion: the question of mechanist reducibility of PE is exactly what is at issue, and hence, you can’t appeal to it in order to attack the Mary argument.

    Knowledge: the whole thought experiment rests on the notion of acquiring new knowledge, and gives knowledge acquisition ontological powers. Because our imaginary Mary would learn something, we should conclude that something more than mere mechanisms must exist.

    I think this misrepresents the argument. If it’s true that Mary indeed has knowledge of all the physical facts, and she learns something new, then it directly follows that the physical facts don’t exhaust the facts about the world. This is not an ontic, but merely an epistemic question (we’re talking about facts about the world, not things within it).

    You then try to fight the hypothetical, and note that in the real world, Mary never could acquire 100% certain knowledge of the physical facts. But granting this, which I’m happy to do, does not necessarily weaken the thought experiment. One could not, being a massive body, ever ride alongside a beam of light; but doing it as a thought experiment yields some important insights. The assumption is made per impossibile, but that itself does not impact the thought experiment.

    The relevant question is whether we can derive accurate conclusions from the assumption of Mary having total knowledge of the physical facts. And this is where my arguments regarding paper towels, gears and levers, and billiard balls come in: I claim that we only need some very basic facts about mechanistic explanations (such as that every behaviour can be explained in terms of things working upon one another, and that such an explanation thus does not mention PE at all) in order to derive certain conclusions in such a case. In other words, from the axioms of mechanistic explanation, I can deduce that it does in no case touch phenomenal experience in any way. This does not require me to know all there is to know about mechanical explanation; I need not make an exhaustive search across all possible mechanisms in order to reach this conclusion. I only need to know this one thing.

    And again, for simple mechanisms, it’s clear to all of us that we do know this: a lever pushing out some ‘I see a red rose’-flag certainly does not generate red-rose phenomenology upon being pulled. And this is the greatest strength of the mechanical world view: that we can trace its consequences. It all comes down to Leibniz’ Mill: gears and levers acting upon one another, and not a spark of experience at all to be seen. Knowing this just needs knowledge of what gears and levers are, how they operate, and so on.

    Epistemologically, this [the scientific method] makes it acceptable to grant foundational status to induction, as the whole enterprise rests on the idea that reality is underlain by at least some regularities.

    Again, this seems to me to strongly be at variance with the story as it’s usually told. Basically, that story is:

    Hume: “Guys, we can’t actually reasonably assume that the sun will rise tomorrow, just because it did so today.”
    Everyone: “OMG why not??!!”
    Hume: “Well look, on what basis would you justify that just because something’s always worked that way in the past, it’ll also work that way in the future? Because that reasoning always worked so well in the past…?”
    Everyone: “Well shit then, guess nothing’s true and everything’s allowed…” (start rioting)
    Kant: “…mumble mumble a priori faculty mumble mumble transcendental deduction mumble…”
    Everyone: “…?” (continue rioting)
    Popper: “Wait! We don’t have to rely on induction at all. Actually, induction is a myth! There’s no need for justification. In principle, we could just guess that the sun comes up tomorrow, and then try and refute that theory, and accept it provisionally if no refutation occurs!”
    Everyone: “Oh thank you, now we understand the hypothetico-deductive model of science, ol’ Hume was just barking up the wrong tree, and everything can go back to its orderly ways…” (quickly sell a couple of car radios they just ‘happened’ to have around, then go on their orderly ways)
    Kuhn, Feyerabend, Lakatos et al: “Uh, well, actually that’s a bit too simpl—”
    Everyone: “OH SHUT UP YOU GUYS!!”

    Or, well, something like that. So on a Popperian understanding of scientific reasoning, there is no formal way to justify what you start out with, and there is also no need for one; you could just as well use the reading of goat entrails to arrive at your hypothesis, and if it is corroborated, then it’s just as acceptable as a hypothesis arrived at in any other way. Induction just doesn’t enter into it (in fact, he’s quite adamant about that: from Conjectures and Refutations, “Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure.”). So to the extent that the Popperian model still is that which one typically thinks of when somebody says ‘scientific method’, I’d say that assertions like ‘deduction always rests on induction’ are highly questionable at best.

    I disagree, because I assume that our knowledge of how mechanisms produce things must be somewhat incomplete. And therefore I assume we can’t know for sure what mechanisms can’t produce.

    I’m not sure if I can really see you as saying anything more than ‘we’re fallible, so any idea we come up with might be wrong’. Which, obviously, is true, but it’s no license to pick out those ideas you like, while discarding those you don’t. So I think that every exchange must work under the implicit assumption that if we’ve worked to the best of our abilities, what we’ve produced is coherent, until at least it’s shown to be otherwise. But this doesn’t seem to be your strategy: all of your attacks, on capital-K knowledge, induction, and so on, seem to be targeted at showing that the Mary story might be wrong; but this doesn’t buy you any new ground, in my opinion. Rather, you’d have to show where it actually goes wrong—otherwise, I could simply point out that, well, your arguments might be wrong, as well, and we’d get nowhere. But regarding this point, all you seem to be doing is just to appeal to your prior commitment to mechanism.

    Anyway, more appropriate to the above quote, I think that knowing what a mechanistic explanation is, really is all the knowledge we need in order to show what it can’t yield. That’s because knowing what a mechanism is, we can deduce properties that have to be true for all things composed of mechanisms, and if something fails to have that property, then it can’t be composed of mechanisms, period. For instance, it can’t yield an explanation for ghosts: the properties ghosts have (e.g. being able to move through walls, while still being able to throw around things) are irreconcilable with the properties of mechanisms. But, no problem there: ghosts don’t exist anyway.

    Consciousness, however, does exist, in one way or another—even the eliminativist must accept its illusion, so to him I just extend the invitation to then explain that illusion instead. And again, for all mechanisms simple enough to hold them in the head, so to speak, we can see that they are not accompanied with conscious experience—or at least, that due to their being that way, no conscious experience is necessarily accompanying their operation. So mechanism then needs at least a commitment to the idea that, suspiciously conveniently, at some point beyond which we can no longer picture the entire mechanism, something somehow happens and *poof*, consciousness!

    But I think not even that will work. Take, again, the billiard ball example above: any given mechanistic model has a functional equivalent in a particular combination of billiard balls (this is demonstrated by the simple fact that billiard balls and their interactions can be used for universal computation). In particular, no more than two billiard balls ever collide in such a model. But two-ball collisions can be entirely pictured in the head, and it is clear that no conscious experience results—the whole thing is completely described by masses and momenta, and nothing more. So, I claim to be able to conclude that no matter how many billiard balls you use, no phenomenological consciousness will ever arise. If you disagree, you are free to demonstrate how it does (or at the very least, how the emergence of consciousness in such a scenario might at all be possible)—but merely pointing out that I might be wrong will not suffice!
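
    And ‘completely described by masses and momenta’ really does mean completely: for the one-dimensional case, the entire content of a two-ball elastic collision is captured by the following few lines (a toy sketch, with arbitrary numbers):

        # One-dimensional elastic collision of two balls: conservation of momentum
        # and of kinetic energy fixes the outgoing velocities, and that is the
        # whole story.
        def elastic_collision(m1, v1, m2, v2):
            v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
            v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
            return v1_after, v2_after

        # Equal masses simply exchange velocities, the case the billiard-ball
        # computer exploits.
        print(elastic_collision(1.0, 2.0, 1.0, 0.0))   # -> (0.0, 2.0)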

    If you take out hypercomputation, the above is making my point: deductions necessarily can’t exhaust all possibilities. As I’ve said before, hypercomputation is redundant – just an example of something you can’t deduce in a straightforward way.

    If you take out hypercomputation, the point is rendered moot; we are computationally universal, hence can in principle deduce all that can at all be deduced from some axioms. And again, note that I don’t hang my hat on the failure of deducing something—if that were all, I probably wouldn’t get worried for the next couple of tens of thousands of years. While we can deduce anything that is deducible, that doesn’t mean it can’t take quite a while to do so, and from my point of view, we’ve just barely started yet (which is why I can never quite get on board with those who argue that, since we haven’t solved a given problem in the past couple of hundred years, it’s probably unsolvable, a pseudoproblem, etc.).

    No, I start from the arguments establishing that we can’t deduce something—like in the Mary story, we can’t deduce the phenomenal experience from the physical facts. Or, when it comes to zombies, we can imagine physical/functional isomorphs, nevertheless without them having any phenomenal experience. So it’s not just that so far, we don’t have a way of deducing phenomenal experience from physical facts, but it’s that there are arguments (deductive themselves) that purport to establish that we can’t make that deduction.

    It’s in meeting this burden that my account possesses something yours lacks. You need to argue that somehow, these arguments are mistaken, in order to rescue the hope that PE is somehow deducible from the mechanical fundamentals; but I can simply accept these arguments, and point to them just establishing something about the in-principle limits of deductive reasoning, of using an effective procedure to yield conclusions from axioms; but if the world itself is not limited to the computable, then there is no reason at all to suspect that we could make these deductions, hence, our inability to do so says precisely nothing about physicalism.

    So in being met with the anti-physicalist arguments, there are generally two strategies—accept them, and give up physicalism; or reject them, and hope that physicalism might, perhaps, one day, yield an explanation of consciousness. I simply point out a third way: accept the arguments in their full strength, but point out that they merely highlight a limit as regards what we are able to deduce from the physical fundamentals, just as Gödel’s theorem showed us a limit regarding what we can deduce from a set of axioms. Moreover, according to our best physical theories, reality is noncomputable anyway; and in fact, analogous situations—with us being unable to deduce from the physical facts certain higher-level, but just as physical facts, such as whether certain measurement outcomes ever occur, what the ground-state energy of certain systems is, etc.—actually occur. So in fact, we should expect to hit epistemic barriers; the Mary/Zombie/etc arguments then just spotlight where they are.

    And thus induction sneaks in again: how does it happen that you might be using wrong premises?

    Again, this is radically at odds with how scientific hypothesis generation generally is taken to work, at least on my understanding. If you want to challenge this, then you’d need to come up with a much stronger argument. Failing that, I can always answer that I read my hypotheses in goat entrails: their origin doesn’t—in fact, mustn’t—matter.

    For example, in my understanding, there is no universal way of finding out if a sequence is algorithmical or not (compressible or not, etc). But your account seems to imply that we function the way we do because we constantly receive nonalgorithmical input; I think the above, in my understanding, can be used to produce many apparently contradictory predictions.

    I don’t understand how it would. We receive nonalgorithmic input, simply because quantum mechanics produces genuine randomness (on the standard interpretations, at least; of course, it might be that we just receive pseudorandom numbers such that the random seed is so unimaginably large that we could never discover that fact, but barring some extraordinarily strong arguments, I think that that’s an academic possibility; furthermore, there exist arguments that claim to show that quantum randomness must be algorithmically random, but I’m not completely convinced regarding their definiteness). Anyway, at the moment, I’m happy to simply use the algorithmic randomness of quantum mechanics as a premise, in order to show that if my premises are right, then we would observe a world much as that we seem to be observing (subjective experience and all).

    That’s why I’m trying hard to avoid the eliminativist trap. […] Unfortunately, most of my efforts are attempting to avoid this particular mistake, so I was hoping you wouldn’t attribute it to me.

    Well, then you need to give an account of how “it must merely feel like” consciousness has some exotic qualities, without that ‘feeling like’ exactly being one of those exotic qualities—to me, the most exotic of all. The problem, as I see it, in relegating conscious experience to illusion, is always that one inevitably ends up talking about how ‘it just feels like’ or ‘appears as if’ or ‘seems to be’ something mysterious—when the mysterious part exactly is how something could ‘feel like’, ‘appear as’, or ‘seem to’!

  124. Jochen –

    I think the focus on “about” as the marker of intentionality is obscuring the (or at least my) objections to use of the (perhaps misleadingly named) “intentional” idiom in trying to explain behavior in scientific terms. The idiom’s defining feature is that its members purport to refer to mental states (beliefs, thoughts, et al), yet it isn’t obvious that such states can be described in any scientific vocabulary. But if one’s objective is to explain human behavior, then perhaps we can bypass the intermediary intentional idiom and try to explain each instance of human behavior using some scientific vocabulary. We won’t know until lots of people have tried, but at least in my (admittedly quite limited) experience, most explanations still rely heavily on the idiom.

    [Assuming eliminativism, like chatbots] we also just produce some strings of symbols, that then prompt certain kinds of behavior—with the meaning of those symbols essentially not playing any role at all.

    As far as I know, there’s no consensus on what constitutes the “meaning” of an utterance. I assume the “meaning” of many utterances to be precisely the behavior that the utterer intends to “prompt” in the hearer (obvious in the case of imperative utterances). So, I agree with the first part of your statement but disagree with the last part. If the hearer’s behavior in response to an utterance is that intended by the utterer, the utterance has been “understood”. What further role do you envision for “meaning”?

    Humans sometimes (mostly?) behave exactly as you describe chatbots sometimes behaving (think of obsessive ideologues spewing out canned slogans, eg, most questions and answers in the current political “debates”). But occasionally humans produce impressive discourse. I don’t see that those observations say anything at all about the mechanics of human discourse in general.

    such a position [would amount to human discourse being] just another string of symbols one chatbot produced, and another reacted to in a certain way, without any relation to things themselves

    The point of Rorty’s “coping versus copying” distinction is that if humans (or chatbots) can exchange symbol strings in such a way as to successfully navigate the world (ie, coping), that’s sufficient. Describing the world “as it really is” (copying) isn’t necessary even if possible – which it arguably isn’t.

    it seems to me such an unattractive position that I’d rather give up philosophizing entirely than adopt it

    Attendant to my position on these issues is a strong version of determinism which also often engenders that sort of reaction. I have a hard time understanding why either does. The world works in some way, most learn to cope with that way, and we privileged ones also get to do fun things like asking exactly how and why the world works that way. Why quit doing something that’s fun just because the answer isn’t one we’d prefer?

  125. Charles, I suppose my starting point is just somewhat different from yours. To me, the central explanandum in intentionality is, first and foremost, indeed the fact that it at least appears to be the case that certain objects (in a very general sense) can come to point to, or represent, or refer to other objects beyond themselves; once we get a handle on this, I think, we can then think about the modalities under which this occurs—e.g. belief, desire, and so on. Because really, that’s what all those things have in common—a belief is a belief that something is the case, a desire is a desire to make something be the case, etc.

    Now of course, it may well be that talk of desires, beliefs and the like is misguided; but if that is to be the case, it seems to me that one first would have to account for the fact that we can successfully talk about their ‘aboutness’ without there being, ultimately, such a thing. So trying to get rid of the ‘intentional idiom’ the way you seem to be suggesting seems to me to be trying to run before you have learned to walk.

    My example with the chatbots fell a bit flat, it seems. My intention was to demonstrate that in this case, we manifestly can tell that the meaning of a chatbot’s utterances is divorced from the, well, ‘flow of conversation’, if you will. We can notice that some answers one chatbot gives to the other don’t make any sense, while the chatbots themselves don’t notice this.

    I suppose you might argue that, in this case, it’s just that the meaning we assign to the chatbot-utterances is different from the meaning they assign, that is, those same symbols do different things to us than they do to them—i.e. elicit a ‘huh?’ reaction in a human spectator, while drawing out a response as if to a sensible query from another chatbot. In taking this stance, I suppose one could say that we haven’t failed at creating a chatbot that has any understanding; we’d just have one that has an understanding different from ours—say, it speaks English′ instead of English.

    While this would make for an interesting objection to the Turing test—‘failing’ the test might then merely imply that the computer uses a language different from ours, one that just happens to use the same set of symbols—it also leads to some more troubling conclusions and questions, I think. First of all, we’d never really know what a conversation with a robot is actually about—does it speak English, or some ‘close, but no cigar’ English′?

    More worryingly, we’d be in that same situation towards one another—after all, either of us could be that robot. In some ways, this ties back to Quinean points about the indeterminacy of translation, and similar things, but it seems worse to me—our conversation could be about something fundamentally different to you than it is to me (which is different from mere misunderstanding: we’d both understand perfectly well, our understanding would just differ). Worse yet, it might be about nothing at all—just symbols cueing up other symbols, without any connection to, for want of a better term, matters of fact. It’s in this sense that it always seems to me that such theories at least court self-falsification—if it’s really true that we’re no better off than those chatbots, then I don’t really have grounds for believing that anything we say has any justification, and hence, in particular, I don’t have grounds to believe that we’re no better off than the chatbots. So why believe that?

    That said, ultimately, I think that there’s something to the idea that meanings are connected to action—in particular, since I take arguments that semantics doesn’t follow from syntax seriously, I think that action provides a promising non-syntactical element on which to ground meanings (it’s ambiguous what a given string of symbols means, but it’s unambiguous what that string of symbols causes you to do). However, such behaviouristic accounts of course have their own problems. In particular, I think that there needs to be something additional, since otherwise, one just runs into homuncular regresses—your taking up a cup in response to the string ‘take up that cup’ is presented to me only in the form of sensory signals, i.e. symbols, whose interpretation again is arbitrary; if I act in some way in response, the problem again iterates.

    Regarding the point I made about the (to me) unattractive nature of the position it leads to, you say:

    The world works in some way, most learn to cope with that way, and we privileged ones also get to do fun things like asking exactly how and why the world works that way.

    But see, the problem is that on a view such as you seem to be subscribing to, we don’t get to ask how and why it works that way; the strings of symbols we produce are not in any way connected to matters of fact about the world, and hence, while you may produce an utterance like ‘the world works in such-and-such a way’, in fact, this is not about the world at all, or at least, we have no grounds on which to believe that it is. The reason I would quit thinking about these things in such a case is simply that I have no reason to think that any of my thoughts are any better than a string of symbols arrived at by rolling a die; so in fact, I can’t actually think about what works how and why. (And thus, of course, again the conclusion that I can’t do so becomes suspect, because it’s just as much a die-roll string of symbols. And this just goes round and round and round again…)

  126. Well, Jochen, as I tried to indicate in my last comment, we’re at an impasse on “aboutness” since I have no reason to accord to it the sine qua non significance that you do. I suspect that’s due either to our having been introduced to issues of language via different authors or because we’re interpreting “intentionality” differently. I’m using that word in the sense of signifying the “mental”, and in that context, I don’t remember having encountered much if any discussion of “aboutness”. So, I really have nothing to say on that concept. However, I do have one related question. You say:

    it at least appears to be the case that certain objects (in a very general sense) can come to point to, or represent, or refer to other objects beyond themselves

    What are the “certain objects” that do the pointing, et al? Are they neural structures in the brain, and if so what does it mean for them to “point” at another object?

    the problem is that on a view such as you seem to be subscribing to, we don’t get to ask how and why it works that way; the strings of symbols we produce are not in any way connected to matters of fact about the world

    This recurring self-refutation argument is based on some misunderstanding. My “view” is merely the suggestion that the vocabulary of the “mental” may be inappropriate outside of the context of psychology, eg, when addressing the functional architecture of the brain. That doesn’t say anything at all about the utility of that vocabulary in its proper context.

    In your discussion of chatbots, you seem to suggest that an exchange of utterances can in some sense be “discourse” and yet be unanchored from the world. Assuming that the exchange is in some language, I don’t see how that can be since the process of learning any language anchors it to the world. As far as I know, that’s uncontroversial, so perhaps I’m missing your point.

    our conversation could be about something fundamentally different to you than it is to me (which is different from mere misunderstanding: we’d both understand perfectly well, our understanding would just differ)

    I more or less agree but prefer addressing the scenario from the perspective of a speaker S. Assume both S and hearer H speak the same language, that S utters U with intent to evoke action A by H, and that H responds with A. Given my interpretation of understanding, H responded as S intended and therefore understood U. However, it is quite possible that most other hearers would respond to U with action B, in which case we’d be inclined to say that the “meaning” of U has been understood if a hearer responds with B. This suggests that there is no intrinsic “meaning” to an utterance. The “meaning” is essentially an agreement (typically implicit) between speaker and hearer as to how the hearer should respond to an utterance.

  127. Charles, you say:

    I suspect that’s due either to our having been introduced to issues of language via different authors or because we’re interpreting “intentionality” differently. I’m using that word in the sense of signifying the “mental”, and in that context, I don’t remember having encountered much if any discussion of “aboutness”.

    That’s indeed not how I understand intentionality. I take my angle on the problem from Brentano, who, I believe, is generally credited with introducing the notion into modern philosophy, in the following oft-quoted passage:

    Every mental phenomenon is characterized by what the Scholastics of the Middle Ages called the intentional (or mental) inexistence of an object, and what we might call, though not wholly unambiguously, reference to a content, direction towards an object (which is not to be understood here as meaning a thing), or immanent objectivity. Every mental phenomenon includes something as object within itself, although they do not all do so in the same way. In presentation something is presented, in judgement something is affirmed or denied, in love loved, in hate hated, in desire desired and so on. This intentional in-existence is characteristic exclusively of mental phenomena. No physical phenomenon exhibits anything like it. We could, therefore, define mental phenomena by saying that they are those phenomena which contain an object intentionally within themselves. (Franz Brentano, Psychology from an Empirical Standpoint, London: Routledge, 1995, pp. 88-89)

    While he does consider intentionality to be the hallmark of all mental phenomena, I think he’s pretty explicit in affirming that it’s the property of mental states to be, in some sense, other-directed—to represent something, reference something, in short, be about something—that’s what characterizes their intentional nature.

    What are the “certain objects” that do the pointing, et al? Are they neural structures in the brain, and if so what does it mean for them to “point” at another object?

    The answers to that will depend very strongly on who you ask. For some, of a strongly reductionist bent, they will indeed affirm that it is, in some sense, neural structures that do the pointing; others, like Brentano above, flatly deny that anything physical could have an intentional object; yet others consider the intentional content to ultimately be grounded in the success conditions of actions undertaken by agents; some believe that in order to have intentionality, one needs to have phenomenal experience, and argue that each such experience is inherently about the experienced thing; and so on. Likewise, what it means for them to point to another object will differ across the various approaches.

    My “view” is merely the suggestion that the vocabulary of the “mental” may be inappropriate outside of the context of psychology, eg, when addressing the functional architecture of the brain. That doesn’t say anything at all about the utility of that vocabulary in its proper context.

    But every time you use that vocabulary, a certain understanding of it is presupposed; and if that understanding is wrong, then I just don’t have any handle to evaluate what you’re saying. This isn’t like the situation of Newtonian mechanics that, even though fundamentally wrong, still yields good enough data to send an astronaut to the moon successfully, or at least, I don’t see how it is—something is either about something else, or not; it’s a binary issue (or again, at least I fail to see how it could be OK to say something like ‘a is about b for all practical purposes’; but if you have an idea there, I’d be keen to hear it).

    Assuming that the exchange is in some language, I don’t see how that can be since the process of learning any language anchors it to the world. As far as I know, that’s uncontroversial, so perhaps I’m missing your point.

    Well, I wouldn’t say it’s uncontroversial; notably, Quine argued that there is an inherent indeterminacy to learning a language (and Putnam considered this to be ‘the most fascinating and the most discussed philosophical argument since Kant’s Transcendental Deduction of the Categories’). Basically, Quine imagines an anthropologist observing a tribe speaking an unknown language. He observes that there are two causes of indeterminacy in that task: first, if the natives use the word ‘gavagai’ whenever presented with a rabbit, it might seem natural to try and translate it with ‘rabbit’; but the native might simply mean ‘food’, or ‘let’s go hunting’, or ‘a storm is coming’ if they’re superstitious and believe that a rabbit indicates such a thing, or even ‘undetached rabbit-part’. Now, those interpretations are not all equally likely, and some might be ruled out by further test cases, but as Quine argues, there’s always some residual indeterminacy left.

    The second cause of indeterminacy is more subtle, concerning not the indeterminacy of individual words within a sentence uttered in another language, but rather the observation that a sentence can be broken up into functional units in different ways, and that the correct assignment of functions is itself a matter of context, which can, however, only be gained through an understanding of the language.

    One can respond to this situation in much the same way as you do, and appeal to a ‘behaviourist’ definition of ‘understanding’—and in fact, that’s mostly what Quine himself did. I think that’s a step backwards—treating organisms as ‘black boxes’ that produce behaviour upon being exposed to stimuli reduces them, in effect, to giant lookup-table like constructs; but I think it’s exactly in the functional modelling of how inputs and outputs are connected that we’ve made the greatest advances in understanding the origins of behaviour.

    And while I’m not sure about their current empirical status, I think an answer along the lines of Chomsky to the problems of translation is much more illuminating—we possess a universal language instinct, a grammar, reliance upon which allows us to bypass Quine’s indeterminacies. But this we can’t find out by merely making models of behaviour; we have to model what goes on on the inside, and only if the right things go on there can we expect genuine mutual understanding. So it’s not just what actions are evoked upon our verbal cues in the other, but also how these actions are invoked that matters.

  128. Jochen: “For some, of a strongly reductionist bent, they will indeed affirm that it is, in some sense, neural structures that do the pointing”

    The hallucinations experienced in the SMTT experiments give very strong evidence that neural (brain) structures do, indeed, do the pointing.

  129. Jochen –

    But every time you use that vocabulary, a certain understanding of it is presupposed; and if that understanding is wrong, then I just don’t have any handle to evaluate what you’re saying.

    I honestly have no idea what this means. I’m not saying anything is “wrong”. I’m suggesting (what seems to me obvious) that any specialized vocabulary has a proper context of applicability and may be inapplicable in other contexts. If I ask you how you enjoyed a movie and in response you launch into a critique of the politics of the country in which it was filmed, that’s a misapplication to one context of a vocabulary entirely legitimate in other contexts. The vocabulary of beliefs, desires, thoughts, etc is fine in quotidian use and some psychological discussions. But IMO, it should be avoided once the discussion is physiological. I can’t imagine how you get from that to a self-refutation argument to the effect that limiting a vocabulary to an appropriate context makes it “wrong” and therefore leads to a general inability of anyone to understand anyone else.

    By “learning a language”, I meant learning a first language. Quine argued that there is an inherent indeterminacy in translating from one language to another, but I didn’t mean “learning a language” in that sense.

    As I understand classical “behaviorism”, it argued that everything one needed to know about human mental processes could be gleaned from observations of behavior. Trying to understand the mechanics of human behavior by looking inside the body is exactly the opposite. In any event, I don’t know what behaviorism has to do with my interpretation of understanding unless you’re taking any mention of behavior to be evidence of it.

    treating organisms as ‘black boxes’ that produce behaviour upon being exposed to stimuli reduces them, in effect, to giant lookup-table like constructs

    Treating a system as a black box says nothing about the internal architecture – that’s what “black box” means. So, I don’t know why you connect the concept to look-up tables or our exchange.

  130. Arnold:

    The hallucinations experienced in the SMTT experiments give very strong evidence that neural (brain) structures do, indeed, do the pointing.

    I agree that this is probably the most widespread belief, and it’s one I happen to share. But it’s certainly not the only option out there.

    Charles:

    The vocabulary of beliefs, desires, thoughts, etc is fine in quotidian use and some psychological discussions. But IMO, it should be avoided once the discussion is physiological.

    Well, I think that this vocabulary in everyday discourse is deeply mysterious, and thus, generally tend to assume that it needs an explanation on some more fundamental level (be that physiological, functional, or whatever else you favor). So what I take people to be doing (perhaps wrongly) when they try and give a more fundamental account of intentionality is to try and provide an elucidation of the mystery present in its everyday manifestations, that is, some way of analyzing statements like ‘I believe x is the case’ such that, at the very least, its truth conditions become manifest. Such an explanation might, e.g., run the way teleosemanticists propose—some agent believes x to be the case if and only if it exhibits behavior selected for being appropriate to x being the case. Thus, the theory of intentionality underwrites the quotidian use of intentional vocabulary, such that the mystery inherent in this vocabulary is dispelled.

    If now, however, the theory supposed to underwrite this use merely asserts that such use is mistaken, then whenever I do use this vocabulary in everyday discourse, if that theory is right, I am not really uttering a meaningful statement; and if that theory itself depends on such statements, then it will itself not be meaningful. That’s all there is to it.

    The intentional idiom may be fine to use as long as you bracket all consideration of what underwrites it, but plainly, you no longer can do so once the foundation of the idiom itself is at issue.

    By “learning a language”, I meant learning a first language. Quine argued that there is an inherent indeterminacy in translating from one language to another, but I didn’t mean “learning a language” in that sense.

    Quine’s arguments apply just as well between speakers of ostensibly the ‘same’ language; in fact, they even apply regarding your own past statements.

    In any event, I don’t know what behaviorism has to do with my interpretation of understanding unless you’re taking any mention of behavior to be evidence of it.

    I’m sorry if I misunderstood you, but your account of meaning in the last paragraph of your previous post seems explicitly behavioristic to me—basically, you take the meaning of an utterance to simply be the behavior induced by that utterance, without any regard as to how that behavior is produced by the utterance. If that’s a wrong interpretation, perhaps you could elaborate.

    Treating a system as a black box says nothing about the internal architecture – that’s what “black box” means. So, I don’t know why you connect the concept to look-up tables or our exchange.

    True, but if you treat entities as black boxes, then their internal structure might as well be given by a lookup table, and hence, any claim you make has to be compatible with that being the case—otherwise, you put a constraint on the internal composition of the entity, and hence, don’t treat it as a black box.

  131. Jochen –

    I think that this vocabulary in everyday discourse is deeply mysterious

    But from the perspective of average users, it isn’t. They don’t know or care about any of the issues you are raising. Do you really mean to suggest that therefore their discourse is meaningless? That there is zero merit in the meaning-as-use view?

    what I take people to be doing (perhaps wrongly) when they try and give a more fundamental account of intentionality is to try and provide an elucidation of the mystery present in its everyday manifestations

    Again, it depends on what you encompass by “intentionality”. Some may do that vis-à-vis “aboutness”, but some are trying to explain propositional attitudes in terms of brain architecture, functionality, etc. Eg, Eric Schwitzgebel has addressed the “behaving as if one believes B” approach to which you allude. The problem I see with that approach is that behavior is context-dependent, which makes that concept of “belief” context-dependent. But I think people tend to think of a belief as something you either do or don’t have.

    If now, however, the theory supposed to underwrite this use merely asserts that such use is mistaken

    This is the point at which you always lose me. I have tried to be consistent in denying that I consider the quotidian (or psychological) use “mistaken”, “wrong”, et al. I’m suggesting only that physiological theories which rely on that use are likely to prove unsatisfactory due to the irreducibility of the vocabulary of propositional attitudes to the vocabulary of physiology – vocabularies which are used for different purposes and therefore can’t be mapped into one another. (See Ramberg’s essay in “Rorty & His Critics” for an elaboration of this view. After regularly rereading the essay for several years I still don’t fully understand it, but to the extent I do it was seminal for my views on these issues. Quine’s indeterminacy makes an appearance in a sense relevant to this discussion.)

    The intentional idiom may be fine to use as long as you bracket all consideration of what underwrites it, but plainly, you no longer can do so once the foundation of the idiom itself is at issue.

    I don’t know what constitutes the “foundation” of an idiom. Again, it appears that you don’t buy meaning-as-use. So what is the foundation of “cool” and “hot” as in “He/she is cool/hot”? It certainly isn’t a theory of thermodynamics. Yet most people have no problem understanding those statements.

    Re indeterminacy not only between two different languages but also between uses of the same language by the same person at different times. The issue you raised previously was the anchoring, or its absence, of some vocabulary to the world. How does this broad sense of indeterminacy fit with that requirement? However one defines “meaning” and “understanding”, it seems a rather significant handicap to understanding if meaning changes from use to use by the same person in the same context.

    you take the meaning of an utterance to simply be the behavior induced by that utterance, without any regard as to how that behavior is produced by the utterance.

    No. Reread the final paragraph of comment 135. It involves speaker-intent and a specific hearer. In fact, the whole point of my interpretation is that “understanding” cannot be determined by observation of behavior alone. You would have to know the intent of the speaker in order to determine whether observed behavior demonstrated understanding.

    Re black boxes, I essentially agree with your last paragraph, but since I am interested in the internal structure – the physiology – I still don’t see the relevance to our discussion. OTOH, at least in the case of simple and familiar verbal exchanges, I actually do suspect that the process of producing responses to utterances is much like a lookup table – the neural activity pattern consequent to an instance of aural sensory stimulation acts as a pointer into an array of learned responses. Think of the phenomenon of successfully answering a question before the person asking has completed it. That can only happen if the question is already in the head of the person asked and is somehow paired with an answer.
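
    (For what it’s worth, here is a toy sketch of the picture I have in mind; it is purely illustrative, and the phrases and the matching rule are invented stand-ins, not a model of anything neural. The idea is just that stock questions are paired with learned responses, and a response can fire as soon as the partial input matches exactly one stored question, which is the answering-before-the-question-is-finished phenomenon.)

    ```python
    # A toy sketch of the lookup-table picture (invented stand-ins only):
    # a canned answer fires as soon as the partial utterance matches exactly
    # one stored question, i.e. before the speaker has finished asking.
    LEARNED_RESPONSES = {
        "how are you":            "Fine, thanks, and you?",
        "how about that weather": "Gee, yes, look at it.",
        "what time is it":        "About noon, I think.",
    }

    def respond(partial_utterance: str):
        """Return a learned answer once the prefix is unambiguous, else None."""
        matches = [q for q in LEARNED_RESPONSES if q.startswith(partial_utterance.lower())]
        return LEARNED_RESPONSES[matches[0]] if len(matches) == 1 else None

    print(respond("What time"))  # -> "About noon, I think." (question not yet complete)
    print(respond("How a"))      # -> None: still ambiguous between two stored questions
    ```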

  132. Charles:

    But from the perspective of average users, it isn’t. They don’t know or care about any of the issues you are raising. Do you really mean to suggest that therefore their discourse is meaningless? That there is zero merit in the meaning-as-use view?

    Quite to the contrary, I want to find out what it is that makes the ordinary talk about such things come out right. Let’s take an example. I have basically zero knowledge about Feng Shui, but I remember reading a piece by Douglas Adams where he claims that its underlying principle is that a dragon must be able to effortlessly move through the spaces in your house. So, this is the everyday level of discourse: in order to facilitate dragon movement, put the bed here, the desk there, and so on.

    Now suppose this works—people in Feng Shui’d up living spaces indeed are more comfortable. Hence, we’d have people using the ‘draconic idiom’ successfully; thus, in everyday life, we can get by with using it. Talking about dragons is useful. But suppose now you want to find out exactly why this sort of thing works: clearly, it’s got nothing to do with dragons, as they don’t exist (I’ll presume we agree on that). (In this analogy, the belief that there is an actual dragon is something like the belief in an irreducible, original intentionality.) So the fact that they talk about dragons isn’t what makes their talk meaningful.

    So you need a foundation for the draconic idiom that is free of dragons. OK, now say you investigate the causes of well-being in the home, and find certain laws pertaining to, say, illumination, lack of clutter, harmonic arrangements, and the like, perhaps ultimately grounded in some quirks of the human neural wiring (some of which may have good evolutionary reasons—good, natural illumination facilitates vitamin D production, which in turn has some mood-improving consequences).

    Eventually, you can then replace every sentence of the form ‘windows and doors should be arranged in such-and-such ways to facilitate a dragon’s effortless motion’ with ‘windows and doors should be arranged in such-and-such ways to be conducive to subjective well-being for these-and-those factors of human biology’. You can continue talking about dragons if it’s easier, but you know that you can, in principle at least, always switch to the foundational level, say to eliminate ambiguity. So, talk of dragons is grounded in solid scientific principles that have nothing ultimately to do with dragons; it’s merely metaphorical.

    Now suppose Feng Shui is just so much woo. That is, there is no tangible benefit at all to whether one follows its rules, or not. Then, you would come to a quite different conclusion: talk about dragons is, ultimately, just meaningless—there’s no way to cash out a sentence such as ‘windows and doors should be arranged in such-and-such ways to facilitate a dragon’s effortless motion’ such that it makes sense. It’s mere superstition, mere folk belief.

    As far as I know, either could be the case with respect to Feng Shui. Likewise, one might argue, is it in the case of intentionality: one might find a theory underwriting the use of the intentional idiom in the same sense that the biological level underwrote the draconic idiom in the Feng Shui example above (which is what I mean by ‘foundation’), or, as eliminativists would argue, one might find that it is mere folk superstition—words we use that ultimately lack reference, just as the appeal to dragon movements simply lacks reference if Feng Shui fails to actually accomplish anything.

    But the case is different here: at issue is not simply a term, like ‘dragon movements’, that can either refer or fail to, but the question of whether any term can refer at all. That is, we analyze the Feng Shui example within a given idea of what it means for a term to mean anything: the dragon’s movements refers, in the first case, ultimately in a sort of heuristic way to biological facts about human beings, while in the second, it lacks reference.

    In the case of intentionality, it is our idea of what it means for a term to mean something itself that is in question. Thus, no problem arises when the first sort of case obtains: we find some way to give a, perhaps neurobiological, grounding for intentionality, and thus, understand the origin of meanings in the same way we understood the origin of a feeling of comfort in the case that Feng Shui does something. In that case, all is well. (This basically corresponds to my ‘eliminativists of the first kind’-position above.)

    But not so in the case where intentionality turns out to be a folk belief. Then, we simply have no handle on meaning—in particular, every time the intentional idiom is used, the statements we make are analogous to those made by people invoking dragon movements in the case where Feng Shui is a load of hooey. But every instance of understanding, every statement we make, and every theory we formulate works within the intentional idiom—understanding is understanding of something, statements and theories are statements and theories about something, and so on. So like the quotidian use of the draconic idiom becomes suspect upon discovering the falsehood of Feng Shui, also the quotidian aboutness becomes suspect; but then, we find we are simply not in a position anymore to talk about the meaning of anything—which includes the meaning of the theory that has brought us to this point.

    So, bottom line being, I don’t see how the quotidian use of the intentional idiom could be considered appropriate in a case where there is no appropriate neurophysiological (or whatever) foundation for it—it would be like saying that use of the draconic idiom is OK even in the case that Feng Shui doesn’t achieve its aims, and this, to me, seems straightforwardly wrong. You can make sense of ‘put the bed over here to help the dragon move efficiently’ if there is some appropriate foundation that actually makes it so that putting the bed over there achieves the desired outcome of greater comfort; but if it fails to do so, then the sentence is simply meaningless.

    The issue you raised previously was the anchoring, or its absence, of some vocabulary to the world. How does this broad sense of indeterminacy fit with that requirement?

    Well, to me, it seems that the core worry is the same: across some finite sample that you have, the behavior of your interlocutor matches that which you would expect of somebody whose vocabulary was anchored in the world in the same way as yours, just as to the anthropologist, in his observations, ‘gavagai’ is co-extensive with ‘rabbit’. But this isn’t sufficient to conclude the same meaning for the terms: possibly the situation in which the word ‘gavagai’ is applied just to the rabbit’s foot simply hasn’t occurred yet, and the situation in which the behaviour of your conversation partner diverges widely from your expectations hasn’t yet arrived.

    For an admittedly contrived example, just consider a machine that responds randomly to your utterances: for any finite stretch of time, there’s a non-vanishing chance that you consider all of its responses reasonable, and conclude a mutual understanding from that. Yet it seems clear that there would not be any understanding at all.
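
    (To make the contrived example a bit more tangible, here is a purely illustrative toy: a machine that draws its replies at random from a handful of all-purpose stock phrases. Over a short enough exchange, every reply may happen to sound apt, yet by construction nothing whatsoever is understood.)

    ```python
    # A toy version of the contrived example above (purely illustrative):
    # replies are drawn at random from all-purpose stock phrases; the input
    # plays no role at all, yet a short exchange may still look sensible.
    import random

    STOCK_REPLIES = [
        "Interesting, tell me more.",
        "I see what you mean.",
        "That's a fair point.",
        "Hm, I'm not so sure about that.",
    ]

    def random_responder(utterance: str) -> str:
        return random.choice(STOCK_REPLIES)  # ignores the utterance entirely

    for question in ["Do you think meaning needs grounding?",
                     "So a lookup table would suffice?"]:
        print(question, "->", random_responder(question))
    ```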

    No. Reread the final paragraph of comment 135. It involves speaker-intent and a specific hearer.

    OK, but how do you intend to cash out ‘intent’? Merely as behavior aimed at producing other behavior? Or does this intent need some sort of meaningful inner representation of what the speaker wants to achieve?

    Think of the phenomenon of successfully answering a question before the person asking has completed it. That can only happen if the question is already in the head of the person asked and is somehow paired with an answer.

    Interestingly, to me, this seems to suggest the complete opposite: any given sentence can in principle be completed in any horseradish bubblegum. Each of these completions requires, in general, a different response. Thus, with only a lookup table, there is no way of telling which one of all the possible completions is to be preferred. Contrarily, with understanding of the context of the conversation comes the possibility of making educated guesses as to which of the possible completions is more likely, and hence, how to complete the question, and answer it.

    But while I don’t really agree with your example, I think I agree with the point you’re trying to make: in everyday conversation, much comes really down to the exchange of certain stock phrases. (‘How are you?’-‘Fine, thanks, and you?’-‘Gee, look at that weather.’-…) This works well in highly constrained circumstances—essentially, there’s a small number of possible openings, and from there on, a highly pruned decision tree. So in a large part of discourse, I agree that many of our utterances probably strictly speaking don’t mean anything—they’re just noises made as dictated by social protocol. But I’d like to think (perhaps somewhat presumptuously) that what we’re doing here is different…

  133. Jochen (132),
    I confess I have the bad feeling we’re running in circles, a feeling that might be due to the fact that I’m running out of new arguments and find myself unable to restate or rephrase what I’ve written already. The other interpretation is that I do need to reiterate, and need to because you’re elegantly finding ways to avoid biting the bullets I’m shooting at you.
    I’ll try to pin you down another time.

    If you take out hypercomputation, the above is making my point: deductions necessarily can’t exhaust all possibilities. As I’ve said before, hypercomputation is redundant – just an example of something you can’t deduce in a straightforward way.

    If you take out hypercomputation, the point is rendered moot; we are computationally universal, and hence can in principle deduce all that can at all be deduced from some axioms. And again, note that I don’t hang my hat on the failure of deducing something—if that were all, I probably wouldn’t get worried for the next couple of tens of thousands of years. While we can deduce anything that is deducible, that doesn’t mean it can’t take quite a while to do so, and from my point of view, we’ve barely started yet.

    That’s almost fair enough, but you’re aptly exploiting a small slip of mine: I wrote “take out” when I could (should?) have said “substitute HC with any other hypothetical explanation”. You can imagine that for me ETC would be the choice, but it doesn’t have to be. I didn’t follow this route because the point I need to make here is general: once we agree that we can’t, by definition, use deduction to find all true propositions, it follows that Mary’s story provides a strong hypothetical indication, not a definitive conclusion.
    It’s nice that you elegantly gloss over my provocations, but I was making a real point when mentioning that I spot a whiff of incoherence in your position, in schematic form:
    Mary concludes that there must be something beyond the physical. You say, ok, let’s accept that as true and use HC to escape. However, HC allows you to neatly explain why the Mary-mediated conclusion is wrong. So after all, you are not accepting the story as true; you are showing why it may be false: in your view, it’s because HC isn’t computable and therefore escapes Mary’s deductive powers.
    I arrive with my load of post-Popperian unpleasantries (your rendering of the “story” made me laugh with gusto!), and point out that the Mary-mediated conclusion may be wrong for many more reasons, which appeal to me because they strike me as less exotic (more approachable) than HC.
    You then get all twisted, defend Mary and the power of deduction when rebutting my claims, but still maintain that you are showing why the conclusion is wrong: there isn’t anything beyond the physical in your solution, after all. Thus it seems to me that you are actually trying something you haven’t even stated explicitly (for understandable reasons), namely to propose that your solution is the only possible one.
    This ties well with your short (for now) exchange with Sci: can we really find new truths from the armchair? It’s an open and very interesting question; I myself am ready to buy Descartes’ conclusion because it enables me to ask questions, but I don’t think we can go much further: asking questions and finding promising hypothetical answers can and should be done from the armchair, but to reach consensus we do need empirical verification (see my take, inspired by Chalmers himself). This need importantly applied to Einstein and even more to Bell, if I’m not mistaken. However, we can’t presently empirically verify Mary’s story, so this whole strand of your defence looks very weak to me: we don’t know that the conclusion granted by Mary is necessarily true. I think we do agree that all the power of deduction crumbles in front of empirical falsification, but in Mary’s case we can’t try this route, so we remain solidly in the hypothetical sphere.

    Minor point: it also seems that you are rebutting my views by appealing to authority. Yes, Popperian views are mainstream in scientific circles. Yes, my position isn’t mainstream, but no, I have no idea why Popper felt that “Induction, i.e. inference based on many observations, is a myth. It is neither a psychological fact, nor a fact of ordinary life, nor one of scientific procedure.” I have repeatedly observed that I hurt myself if I try to pass through solid objects; that’s how I learned the idea of solidity, well before learning to speak; to me this is a self-evident observation. In science, once we see that a given theory produces dependable predictions, we are justified in depending on them; before that, we aren’t. So once again, induction sneaks in. And that’s fine, it’s actually convenient: in elevating the FAPP scenario to the only one that counts, it solves many philosophical conundrums in one move, while both explaining why science works (the world is full of regularities) and reconfirming its value.

    Finally, a self-serving observation: ETC says that if we arrange a billiard-ball processor with input organs, the billiard-ball machinery needed to model the world and elements of the machine itself, memory, the ability of classifying incoming signals as desirable or not, a memory-selection/forming mechanism informed by the classification, effectors and some recursive elements, bang, the whole lot will inevitably have phenomenal experience. This is trying to be very different from saying “add some unknown complexity”: it relies on an enormous amount of well-defined complexity (enough to defeat our powers of imagination), and at the same time it is in my view enough to stop worrying about Mary: we have some unrelated grounds to accept that the Mary-mediated conclusion may be wrong, and we have a hypothetical story showing that indeed the conclusion is wrong. Then of course, I may be very wrong (I am surely somewhat wrong, mind you), but we can start making verifiable predictions. ETC explicitly tries to escape Leibniz’s Mill problem, so telling me that I need to doesn’t really do much work; you’re kicking an open door…
    On the other hand, if we follow the Popperian mainstream, how do we go about verifying your hypothesis? How will an HC-based theory help us decide whether a given lump of matter is conscious or not? I’m asking because I don’t know, albeit I do it with little enthusiasm as it seems a low blow, especially since I’m personally post-Popperian, obviously.

  134. Jochen –

    Now suppose [practicing Feng Shui] works — people in Feng Shui’d up living spaces indeed are more comfortable. Hence, we’d have people using the ‘draconic idiom’ successfully.

    I see the question relevant to our discussion as being whether using the draconic idiom is successful under some definitions of meaning and understanding, not whether the practice of Feng Shui is successful. Eg, I’d say that if an experienced practitioner describes to a novice how to “Feng Shui up” a living space and the novice follows those instructions, then the description was meaningful and the idiom was successfully used. And if the word “dragon” appears in that explanation, I’d say the description is “about” dragons (and, of course, other things). Hence, I don’t see the need for an explanation of how the practice of Feng Shui creates feelings of comfort that is “foundational” in the sense that it can be expressed in the vocabulary of physiology.

    Unfortunately, neither do I see the need for a foundational explanation of why using the draconic idiom – or any other, including the psychological idiom (what I’ll start calling it instead of “intentional” in an attempt to avoid the confusion that word seems to cause) – is successful under some definitions of meaning and understanding. Which seems to leave us at an impasse on that issue since you see the need as critical.

    how do you intend to cash out ‘intent’?

    Good question. The best I can come up with is to note that “meaning” and “understanding” are part of the psychological idiom, which for me – though not for you – makes it unlikely that they can be cashed out in the physiological vocabulary. However, “intent” is also in the psychological idiom, so the speaker-intent idea of meaning is expressed in words from the same vocabulary. An admittedly unsatisfying answer since I think we’d both prefer to reduce the idiom to a physiological vocabulary.

    any given sentence can in principle be completed in any horseradish bubblegum.

    This seems to have gotten garbled, but I think I get your point. However, I too consider context critical, so we’re actually in agreement on that point. OTOH, google does an amazing job without explicit context – although I assume they make some behind-the-scenes educated guesses about context. My favorite example: suppose you want to know in which movie the famous line “what we have here is failure to communicate” occurs. Type into the google search window “what we h” and the question will be completed for you (at least it was for me – it’s possible that they know more about me than I’d like! Eg, I’ve done that experiment many times, so perhaps they assume that’s what I’m up to).

    I’d like to think … that what we’re doing here is different…

    Probably is in some dimension (eg, complexity), but perhaps in some other dimension less than you hope. I really don’t care all that much.

  135. Hey Sergio, hey Charles,

    sorry for dropping largely off the radar; I’ve been having a bit of a stressful time lately—besides the usual Christmas hoopla, we’re also just in the process of moving, and I’m finishing up a couple of papers for publication, and generally trying to navigate that end-of-year rush of getting done all the stuff that should long since have been completed… But I’m not ignoring you, and I’ll get around to writing responses eventually (even though I share Sergio’s concern that we’re probably going round in circles—well, not circles, maybe a spiral: I feel that even though we haven’t managed to completely work out our differences, we’ve at least raised them to a higher level, which to me signals some progress at least. 🙂 ).

    Anyway, in the meantime, I’ve gotten some happy news: a paper of mine on intentionality has passed peer review and been accepted for publication in Mind and Matter (I originally sent it to the Journal of Consciousness Studies, however, the editor considered it ‘too mathy’ for their target audience, and referred me to their sister journal M&M). So now I have the satisfaction that at least two people doing this sort of thing professionally (one of whom in particular gave very helpful and insightful comments during the review process) don’t think my views are utter rubbish. It’s my first foray into philosophy, with no academic credentials or training in the field, so I was more than a little trepidatious upon entering the fray, but I must say it’s been a very rewarding experience.

    All this just to say I haven’t forgotten you guys, I’m still thinking about this stuff, but at the moment, there’s a lot going on. I hope you’ll forgive my tardiness.

  136. Jochen,
    Many Congrats!
    Well done 🙂 I look forward to reading your M&M paper – looks like I’ll have to bother you personally to get the FT, believe it or not it seems that UCL doesn’t subscribe to M&M.

    Now, where is that “green with envy” emoticon?

    No worries about late replies: you know I can relate to the difficulties you’re facing (without any effort at all). I can’t even make progress on my own stuff lately, let alone answer other people’s comments… :-/

  137. Thanks, Sergio and Sci! Sergio, I’d be very happy to send you (or anybody else who’s interested) a pre-print of the manuscript (but it might take a couple of days for me to get around to doing so).

  138. Jochen –

    Congrats on your paper – and pls send a copy to:

    ctwiii you-know-what yahoo you-know-what com

    My attempt in our previous exchange to deal with the “intent” part of speaker-intent meaning took me in exactly the wrong direction: back to the psychological idiom. I’ve thought some more about “intent” and see a possible way of “cashing out” the word in physical terms.

    I view all behavior as a reaction to stimuli, including the interoceptive. Eg, the body develops an itch, the subject scratches. Generalizing, in a given context an organism may experience discomfort – physical or emotional – and act so as to possibly relieve the discomfort. One possible act in response to such discomfort is speech for which the speaker has an embodied (exactly how is TBD) history of success in obtaining relief in similar contexts involving persons similar to the person to whom the current speech is directed. Ie, the speaker’s “intent” is to obtain relief from the discomfort by precipitating a particular response by the hearer.

    Note that any response by the hearer that is consistent with the speaker’s history is sufficient for the speech to have been “understood” whether or not the speaker’s discomfort is thereby relieved. Similarly, if the response is inconsistent with that history, the speech arguably was not understood even if the speaker’s discomfort is relieved.

  139. Jochen, Sergio –

    In your exchange here, there seems to be considerable weight placed on the Mary thought experiment (disowned even by her father, Frank Jackson). I’m skeptical of thought experiments in general and think Mary is especially suspect.

    There are at least two versions of why Mary hasn’t previously experienced color. The first assumes that her physiological capabilities are intact but she has never been exposed to colored light. However, since she is assumed to know all physical facts about color vision, she presumably has the knowledge and equipment necessary to directly induce at some point in her visual processing chain the neural activity that would typically result from being exposed to light in the red part of the spectrum. In which case Mary could have learned to associate the phenomenal experience attendant to that neural activity with the word “red”. So, when first exposed to red light, she wouldn’t even have a new experience, never mind learn anything new.

    The second and more interesting story is where Mary’s physiological capabilities relevant to color vision have been dysfunctional but are restored. In that case, prior to restoration she could not have experienced the neural activity attendant to exposure to red light. Once those capabilities have been restored, when first exposed to red light she does have a new experience but has not yet learned to associate the new experience with the word “red”. In the terminology I used in my comment to Cognicious just now posted to the “Are we aware of concepts” thread, Mary has a non-cognitive experience that she may subsequently learn to associate with the word red (Arnold’s PE1) but can’t have a cognitive “seeing something as red” experience until that association occurs.

    Discussions of Mary often include exclamations like “Wow!” or “Beautiful!” that supposedly accompany her new experience and apparently are supposed to show that she now “knows what it’s like to see red”. But there’s no obvious reason to assume that her new experience would evoke such responses. In our culture, the word “red” has special significance because of its association with things like blood, roses, fire and firetrucks, stop signs, extreme anger, capes waved at charging bulls, et al. But Mary can’t make those associations since she can’t associate the word “red” with her new experiences. And in any event, it isn’t clear to me in what sense of the word learning what it’s like to have an experience constitutes new “knowledge”.

  140. So I’ve sent out a few preprint copies of my paper to those of you that expressed interest. Sergio, I couldn’t find an email address for you (I thought there was one via your blog, but if so, I couldn’t locate it). If you don’t want to share it publically, maybe we can establish contact via Peter?

    (Peter, or perhaps you could just forward the mail I sent you to Sergio, if it isn’t too much trouble…)

    [Forwarded! Peter]

  141. Jochen (153) (and Peter),
    Sorry, I hope you didn’t spend too much time looking around (there is no email address published on my blog). However I do have your email, don’t worry (it comes from your own comments there). I’m just slow and had other plans, so I forgot to get in touch – will amend before Xmas, promise.

    Charles (152), you are kicking an open door on Thought Experiments and Mary, as far as I’m concerned. Thought Experiments are nice toys (love them, to be honest), and are frequently useful to produce important hypotheses. But that’s as far as they go: “all the force of deduction” counts very little to me, if empirical verification is either impossible or missing for other reasons.
    In the end most scientific experiments are (or should be) born from big or small thought experiments: one makes a hypothesis, takes it to its logical conclusions and tries to see if reality complies. In the case of Mary, this is impossible, but it’s worse: I don’t think the conclusions it proposes have any strength, for the list of reasons I’ve been banging on about. Of course I’m happy to add the two scenarios you are providing to my own list.
    Your second objection is more interesting, but I found your discussion with Arnold and Cognicious very hard to follow: I thought I understood the basics of Retinoid space, but now I’m not so sure anymore.
    As a result I can’t really comment, I’m afraid.
    I also suspect I’m not the only confused one, but that’s just speculation ;-).

  142. Alright, Sergio, let’s see if we can’t ride the merry-go-round once more. 😉

    First of all, I think there’s a point regarding HC that I’ve so far failed to make clear to you: as a ‘third option’ in reaction to Mary/Zombies/etc., it’s unique—that is, it’s the only way to resist throwing out physicalism while simultaneously accepting those arguments. So when you say:

    That’s almost fair enough, but you’re aptly exploiting a small slip of mine: I wrote “take out” when I could (should?) have said “substitute HC with any other hypothetical explanation”.

    The thing is, there is no ‘other hypothetical explanation’—at least none that does the same work.

    You and Charles have both expressed reservations towards thought experiments, but they are just arguments, like any other, and like any other argument, if their premises are true, and their reasoning is valid, then you have no choice but to accept the conclusion—you can’t say ‘it’s just a thought experiment’, and reject it solely on that note. Of course, you are right to point out that reasoning—all reasoning—is fallible, and thus, may be shown wrong; but that alone does not serve as grounds for resisting a conclusion you don’t like—you have to show either that at least one premise is wrong, or that the reasoning is fallacious.

    I don’t believe either has been successfully done regarding the thought experiments in question; in fact, if their conclusion were not so profoundly against the ‘mainstream’ notion of physicalism, I’d go so far as to think they’d be near-universally accepted. Resistance against them has not typically derived from finding fault with the arguments themselves, but from not wanting to accept their conclusion, which point of origin always deserves to be regarded with some scepticism.

    But of course, the arguments’ proffered conclusion is not one that appeals to me, either. However, I can’t find fault with the arguments—it seems quite obvious to me that an analysis of anything on purely mechanical terms necessarily fails to touch any phenomenological dimensions, as it did to Leibniz with his mill. So I try to offer up my ‘third way’.

    Perhaps in order to make more clear how it’s supposed to work, consider the recent article by Toby Cubitt et al., Undecidability of the Spectral Gap. In effect, they show that a situation analogous to what I am claiming for consciousness holds for whether or not there exists a finite energy difference between the lowest lying and first excited states of quantum field theories (of a certain kind). (There are other physical undecidability problems—in effect, whether a given machine ever halts, i.e. ever reaches some specific state, being one of them—but this one has attracted some attention lately.)

    In other words, full knowledge of the theory—knowing all the physical facts—does not suffice to nail down all of its implications. Now, one might tell a kind of ‘Mary’ story, of an experimenter who has full knowledge of the theory, yet nevertheless will not be able to tell whether an energy gap exists, until she in fact goes out and makes the measurement.

    Of course, nobody would argue that this is an argument against physicalism—after all, that energy gap, if it exists, is perfectly physical in its own right. What the argument shows, however, is that the set of real-world physical consequences outstrips that which is derivable from a given set of physical facts. But if this is the case, then the force of the knowledge argument (and likewise, the zombie argument: the physical theory is compatible with both a gapped and a gapless ‘zombie’ world) as an argument against physicalism evaporates.

    If, on the other hand, all real-world consequences were computable—as I take it they are in e.g. ETC—you can’t avail yourself of the same conclusion: in this case, there is a further fact in need of explanation, namely that of why a given conclusion can’t be reached, even though we are in principle able to do so (being equivalent to, or at least capable of constructing, universal computers). In a computable world, there should be a story that can be told as to how subjective experience emerges; and if that is the case, then that story is what I want. But if you’re admitting hypercomputation, no such story exists. That’s a fundamental difference between the accounts; substituting any old ‘well maybe we just don’t know or aren’t clever enough’ for the hypercomputation simply doesn’t buy you the same ground.

    Keeping this in mind, I hope you see that the ‘inconsistency’ you perceive really isn’t there. You say:

    Mary concludes that there must be something beyond the physical. You say, ok, let’s accept that as true and use HC to escape. However, HC allows you to neatly explain why Mary-mediated conclusion is wrong. So after all, you are not accepting the story as true, you are showing why it may be false: in your view, it’s because HC isn’t computable and therefore escapes Mary’s deductive powers.

    Mary concludes that not all real-world consequences can be deduced from some set of known physical facts; only if you additionally believe that the world is computational does this entail that physicalism is false. So I can in fact accept the Mary argument, but renounce the belief in computationalism. The Mary conclusion is exactly right, and hypercomputation is what makes it right: because there are hypercomputational/undecidable effects, not all real-world consequences are derivable.

    Minor point: it also seems that you are rebutting my views by appealing to authority.

    Well, not exactly. But if you wish to appeal to a non-mainstream point of view, you need to lay out your reason for doing so; I work (for present purposes) from within the mainstream, so I wonder why and how you’re deviating. Now, the general belief is that appeal to induction is fallacious, because strictly speaking, induction simply does not carry any logical force—that’s what Popper was getting at when he said that ‘induction is a myth’. If all your justification for a claim is that it’s always been that way in the past, there’s no sense in which I am obligated to accept that it will also be that way in the future.

    So if you want to argue that induction does carry some sort of persuasive force (the way deduction certainly does), then you have to provide an account of it that makes this case, and I don’t see that you have done so.

    That you have repeatedly bumped into a wall does not provide you with any justification in expecting that this will happen again (not in any formal sense, at least)—that you form a theory according to which you will bump into a wall, rather than passing through it, and that this theory has proven empirically adequate, does.

    ETC says that if we arrange a billiard-ball processor with input organs, the billiard-ball machinery needed to model the world and elements of the machine itself, memory, the ability of classifying incoming signals as desirable or not, memory-selection/forming mechanism informed by the classification, effectors and some recursive elements, bang, the whole lot will inevitably have phenomenal experience.

    I might not have grasped your account fully, but can you give a simple indication of how? What needs to happen such that PoolHead becomes conscious? Because to me, it seems abundantly clear that every action or reaction of PoolHead can be accounted for using a (long) chain of one ball careening off another ball, with no phenomenal experience anywhere in sight. Certainly, there does not seem to be any phenomenal experience necessary: all of that careening surely could happen without a spark of consciousness, so it seems that there’s an explicit zombie account for PoolHead. Where do you think that goes wrong?

    On the other hand, if we follow the Popperian mainstream, how do we go about verifying your hypothesis? How will a HC-based theory help us deciding if a given lump of matter is conscious or not?

    Well, first of all, I’m not proposing a scientific hypothesis, so questions about verification/falsification don’t really enter. Second, no theory ever gets verified on a Popperian account. Third, none of this means that there aren’t any genuinely scientific indications to take away from this: certainly, there are necessary conditions for consciousness if my account is right, such as the ability to harvest noncomputational resources from the environment. And even if phenomenal experience is difficult to track scientifically, that doesn’t mean every aspect of consciousness is—in my M&M paper, I argue that a very specific structure suffices for intentionality. So if you believe that intentionality is necessary for full consciousness, then that structure’s presence (or something functionally equivalent) suffices to pin down that aspect of it.

    And finally, of course, you assess consciousness in the same way we do right now: if it walks like a duck, and quacks like a duck, then even if it maybe looks like a rabbit if you tilt your head just right, benefit of the doubt says it’s a duck.

  143. Charles:

    I see the question relevant to our discussion as being whether using the draconic idiom is successful under some definitions of meaning and understanding, not whether the practice of Feng Shui is successful.

    Well, that would be a different analogy than the one I was intending to make, though. The idea is that there is something that makes talk come out true—a truthmaker. Thus, the truthmaker of ‘if you arrange the furniture such-and-such, you will experience increased well-being’ is, in classical Feng Shui, supposed to be whether or not dragons can move with ease (although the exact relation between dragon-movements and well-being seems somewhat obscure). But even without dragons, such sentences might come out true, if there is a different truthmaker—say, facts about the neurological makeup of human beings. In such a case, you might use the draconic idiom to truthfully talk about ways to increase well-being, even though what was thought to be the truthmaker for those utterances in fact doesn’t exist.

    The question is, then, if there’s no ‘aboutness’ in the world, is there some alternative truthmaker that can be used to underwrite usage of the intentional idiom? Or does the absence of fundamental aboutness mean that the idiom is, ultimately, just wrong (as BBT supposes)?

    Whether usage of the draconic or intentional idiom leads to successful practice is a different question—interesting in its own right, but ultimately divorced from considerations regarding the matters of fact beyond the words that we use. Whether a practitioner successfully arranges the furniture according to the instructions supplied in the draconic idiom doesn’t tell us anything about whether doing so increases the well-being of those living in such a Feng Shui’d place; but since the ultimate goal of using the draconic idiom is to increase that well-being, its measure is whether it achieves that goal, which can only be evaluated by analyzing what increases this well-being, if it is in fact increased. So to me, saying that one may successfully follow the rules of Feng Shui doesn’t address the central question, which is whether following those rules achieves their goal.

    The same goes for usage of the intentional idiom: whether utterances within the idiom prompt appropriate responses is a different question from whether the idiom ultimately pertains to anything in the world. My saying ‘I’m thinking of an apple’ might prompt an appropriate reaction from you—asking ‘Is it red?’, perhaps—but only if there is some truthmaker for the proposition ‘thinking of an apple’ does my statement succeed in actually communicating some state of affairs—namely, that given by the truthmaker. In quotidian usage of the intentional idiom, that truthmaker is supposed to be the fact that I am actually thinking of an apple—disquotationally, ‘I am thinking of an apple’ iff I am thinking of an apple. Whether that is actually the case, whether there is some different truthmaker for the statement (corresponding to the facts about human neuronal makeup in the Feng Shui case), or whether there is no truth at all attached to that statement is the problem of intentionality as I see it—corresponding to a theory of original intentionality, reducible intentionality, or strong intentional eliminativism, in that order.

    suppose you want to know in which movie the famous line “what we have here is failure to communicate” occurs. Type into the google search window “what we h” and the question will be completed for you (at least it was for me – it’s possible that they know more about me than I’d like! Eg, I’ve done that experiment many times, so perhaps they assume that’s what I’m up to).

    Works for me, too—but of course, that’s not an example where google uses any sort of context to establish most likely continuations, but merely checks what were the most frequent continuations in past searches. And if you think that’s all there is to context—a mere cross-referencing of database entries—it seems to me you’re faced with a homunculus problem, analogous to Wittgenstein’s famous rule-following paradox: the fact that references encode contexts can’t be cashed out in terms of references themselves, on pain of circularity.
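
    To make the contrast concrete, here is a toy sketch of purely frequency-based completion (in Python, and of course not a claim about how google actually implements autocomplete): nothing about the current context is consulted, only a log of past queries sharing the typed prefix.

      from collections import Counter

      # Toy frequency-only completion: suggest the most common past query that
      # begins with the typed prefix; no notion of context enters anywhere.
      past_queries = [
          "what we have here is failure to communicate movie",
          "what we have here is failure to communicate movie",
          "what we have here is failure to communicate",
          "what we owe to each other",
      ]

      def complete(prefix, log):
          counts = Counter(q for q in log if q.startswith(prefix))
          return counts.most_common(1)[0][0] if counts else None

      print(complete("what we h", past_queries))
      # -> 'what we have here is failure to communicate movie'

    Anything deserving the name ‘context’ would have to go beyond such a lookup table, which is where the regress worry bites.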

    Probably is in some dimension (eg, complexity), but perhaps in some other dimension less than you hope. I really don’t care all that much.

    To me, this basically says that there are no explanations for why we say what we say, because there’s no content to what we say beyond that it queues up certain responses in our counterparts. But then, it seems that your behaviour runs counter to this professed belief, since looking for such explanations is what we’re doing here. And again we’re hitting the trivializing worry: if this is really correct, then ultimately, it’s not saying anything.

    Note that any response by the hearer that is consistent with the speaker’s history is sufficient for the speech to have been “understood” whether or not the speaker’s discomfort is thereby relieved.

    You’re kicking the can down the road: how do you judge what is consistent, and what is inconsistent, without prior understanding? If there’s just some big rulebook, then where’s the rule that its rules should be followed (or even, how they should be interpreted)? You can’t teach somebody a language by just giving them a dictionary written in that language—words explained in terms of more words don’t suffice to ground meaning.

    In which case Mary could have learned to associate the phenomenal experience attendant to that neural activity with the word “red”.

    If you’re meaning to say that Mary could ‘just by thinking about it’ induce in herself the right neural activity, then I think that’s question-begging—there’s no reason to believe that we do in fact have such control over our neural activity. If you’re saying that she could use some device in order to cause the right activity within herself, then well, of course she could—one possibility would be, for instance, to shine a red light onto her retina. Alternatively, she could directly create the signal trains travelling down her optic nerve—there’s really no difference between the two scenarios, in both cases she would simply have ‘seen red’ in all the ways relevant to the thought experiment. Indeed, if you hold that the only way Mary could come to know what red ‘looks like’ would be to feed her brain with the appropriate signal, then I think you’re accepting the conclusion of the thought experiment (which you probably don’t want to).

    Your second version of the thought experiment is, I think, not faithful to the spirit of the original: Mary is supposed to be physically identical to an average human (otherwise, her not knowing what red looks like if she lacks the capacity of seeing red would not be any more surprising than a monochrome monitor not ‘knowing’ how to display color), which she wouldn’t be if her capacities were stunted in such a way.

    (Fingers crossed I didn’t mess up the quoting this time…)

  144. So Mary says ‘yes I thought that red would be like that but it is so lovely to actually see it for real along with all the other colours, they are more vibrant than I thought’.

  145. Jochen –

    The idea is that there is something that makes talk come out true—a truthmaker.

    Perhaps for assertion of a proposition (eg, your example of Feng Shui), but for all talk? What are the truth conditions for “Slab!” (Wittgenstein, Phil Investigations, §§2, 6)?

    In any event, since I don’t think of language in terms of “truth conditions”, I don’t think I can respond to a line of discussion based on that approach.

    saying ‘I’m thinking of an apple’ might prompt an appropriate reaction from you … but only if there is some truthmaker for the proposition ‘thinking of an apple’ does my statement succeed in actually communicating some state of affairs

    Independent of issues re truth conditions or intentionality, I infer from this that you take the primary objective of an utterance to be conveying a fact. But if successful, what am I supposed to do with that new fact? The proximate objective may be simply to expand my knowledge base, but I assume the ultimate objective must be to change my behavior in the current or a future context, ie, to evoke a response, possibly latent.

    if you think … all there is to context [is] a mere cross-referencing of database entries

    In the relevant comment (143), I make two observations about context that should have made it obvious that I don’t think that.

    this basically says that there are no explanations for why we say what we say, because there’s no content to what we say beyond that it queues up certain responses in our counterparts.

    I’ve given a possible explanation of why we say things in comment 151. And an utterance considered as a stimulus “intended” (as defined in the same comment) to evoke a response doesn’t have to be just “noises … dictated by social protocol”, ie, rote; it can be complex, consequential, abstract, etc. The response may also be complex.

    Think of the format of each of our exchanges. Even the long ones can be decomposed into several relatively short (quote,reply) – ie, (utterance,response) – pairs. As is explicit in some of your exchanges with Sergio, the ultimate objective presumably is to converge to a final pair (proposition,agreement). For me that pair is specifically:

    (complex explanation – in a non-intentional vocabulary – of some feature of human behavior,”I totally agree and henceforth will think about these issues from that perspective.”)

    What more “content to what we say” is required?

    I think we (in the US for sure, presumably many Europeans as well) are getting a painful lesson in some problems with the concept of the meaning of speech being to convey factual content. Trump’s speech is about as content-free as one can imagine speech being in the context of a major political campaign. Nonetheless, he’s getting lots of response, and I think it’s safe to assume that it’s largely consistent with his intent. If so, he’s being understood in the (utterance,intended response) model. And “truth” simply plays no role in that process.

    how do you judge what is consistent, and what is inconsistent, without prior understanding?

    Given a speaker’s history of (utterance,intended response) in different contexts, in a specific context there can be many possible actual responses. An actual response is “consistent” with the intended response if it is in some TBD metric “close enough”. If the intended response to “I’m thirsty” is for a hearer to bring the speaker a glass of water but the water is instead brought in a paper cup, that response is probably “consistent” with the intent of most speakers.

    if [a thought experiment’s] premises are true, and the reasoning is valid, then you have no choice but to accept the conclusion

    Which is why I don’t think much of thought experiments. The premises are often not obviously truth evaluable nor is the reasoning necessarily valid. Eg, exactly what does it mean to “know all the physical facts about X”? Or to “experience color”? (Essentially the subject of my exchange with Cognicious and Arnold.) Or to “learn a new fact”? You’re right that Jackson’s original statement said Mary had never “experienced” color, so my first argument isn’t applicable, even to the first version. But the second argument applies to either. (And there’s nothing wrong with the second version (which, BTW, I didn’t make up) – it just offers a different reason for Mary’s not having previously experienced color in response to objections to the unreality of the original scenario. Everything else is unchanged.)

    I’ve skimmed your e-mail text accompanying the paper, and there seem to be parallels in our views despite our disconnects here. I haven’t yet had a chance to tackle the paper itself, but will in the next few days.

  146. Perhaps for assertion of a proposition (eg, your example of Feng Shui), but for all talk?

    Point taken. Yes, I should have said something like ‘assertive talk’.

    I don’t think of language in terms of “truth conditions”

    But you do evidently think that assertions can be either true or false, otherwise you would not have taken me to task for talking about truth conditions for all talk. This implies some belief that the proposition ‘all talk has truth conditions’ is false, i.e. has a truth condition that fails to be actualized. So you assert that you don’t think in terms of ‘truth conditions’, but nevertheless, you act as if you did.

    And of course, whenever you say ‘truth doesn’t enter into it’, I’m left wondering whether you take yourself to be asserting something true by that, or if you don’t, what the point of saying so is. Or, in the words of Conor Oberst, And if you swear that there’s no truth and who cares/How come you say it like you’re right?

    I infer from this that you take the primary objective of an utterance to be conveying a fact.

    The ultimate object of communication is establishing a common ground, sharing information—I have some piece of information, and through an instance of communication, I intend to make that information at least approximately available to you, as well. What you do with that—how you react, etc.—is only a secondary concern, at best.

    But on a conception where words are just so much noise, not related to matters of fact, of course no communication in any real sense ever takes place. Now this might of course be the case, but I’d find it depressing, and furthermore, I also believe that one can give a consistent account in which we actually talk about things, that is, in which our utterances do have the relation to matters of fact that we naively think they do. Since this is a much preferable state of affairs to me, I try to develop this account.

    And again, arguing for the position that we’re ultimately not arguing about anything just seems like a strange (and at least borderline self-defeating) kind of thing to do.

    An actual response is “consistent” with the intended response if it is in some TBD metric “close enough”.

    But that just sweeps all the complexity of the problem under the ‘TBD metric’-rug—you basically assume here that there will probably be some way to somehow judge consistency based on syntactic factors, but that’s a very non-trivial and contentious stance. In general, syntactic features probably don’t suffice to pin down semantic closeness—for instance ‘we invited the strippers, Abe Lincoln, and Napoleon’ asserts something very different from ‘we invited the strippers, Abe Lincoln and Napoleon’, despite being very close in a syntactic sense.
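
    Just to make that point vivid with a toy example (obviously nobody would seriously measure meaning this way): the two sentences above differ by a single comma, so any purely surface-level similarity score rates them as all but identical, even though they describe quite different guest lists.

      import difflib

      # Surface similarity of the two sentences from the text: they differ only
      # by one comma, yet one invites three parties and the other implies the
      # strippers are Abe Lincoln and Napoleon.
      a = "we invited the strippers, Abe Lincoln, and Napoleon"
      b = "we invited the strippers, Abe Lincoln and Napoleon"

      print(difflib.SequenceMatcher(None, a, b).ratio())  # roughly 0.99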

    And you’re dodging my main worry—namely, that you’re falling into a rule-following regress. First, you say that you want to cash out understanding in terms of consistency; then, consistency is to be determined by ‘some metric’; presumably, this metric must be defined by some further rule, and so on. You’re explaining rule-following with further rule-following.

    The premises are often not obviously truth evaluable nor is the reasoning necessarily valid.

    And if this is the case, then you can point that out to attack the thought experiment’s conclusion; but this isn’t really an argument against thought experiments in general.

    Eg, exactly what does it mean to “know all the physical facts about X”? Or to “experience color”? (Essentially the subject of my exchange with Cognicious and Arnold.) Or to “learn a new fact”?

    All of these seem quite readily understandable to me. Knowing all the physical facts about X is just knowing everything about the physical processes and entities involved in bringing about X, which is a perfectly well-defined set. Experiencing color is that subjective experience which follows being exposed to a certain kind of stimulus, i.e. colored light (with some appropriate ‘ceteris paribus’ statements perhaps, such as that light being sufficient to trigger a response in the retina, the person being normally sighted, etc). Learning a new fact means having available information about some state of affairs that one did not previously have available.

    I mean, I suppose one could quibble with all these definitions, but then, the onus would be on those disagreeing—a blanket dismissal carries no logical force. There’s a perfectly good quotidian understanding of any of these terms, and if you wish to challenge that, you have to show why that understanding is not applicable in a given case—which may well be a fruitful path of resistance against a given thought experiment, but doesn’t underwrite the generality with which you dismiss thought experiments in general.

    You might just as well say that you don’t trust arguments, since often, their premises or logic are wrong. Of course that’s often true, but it doesn’t license you to dismiss a conclusion by pointing out that it was just arrived at by argument, and arguments can be wrong.

    And there’s nothing wrong with the second version (which, BTW, I didn’t make up) – it just offers a different reason for Mary’s not having previously experienced color in response to objections to the unreality of the original scenario.

    Well, whether or not the version is originally yours, I don’t think it’s as successful as the original in establishing its conclusion—for let’s say Mary lacks some neural wiring necessary to perceive color. Then, an attacker of the Mary argument could just hold that that wiring is perhaps also necessary to imagine color, and hence, that Mary’s inability to do so—until her capacities are re-established—doesn’t tell us anything about whether color experience is derivable from the physical facts of vision. It merely tells us that a being different from a normal human in one respect (ability to perceive color) also differs in another (ability to imagine what it’s like to see color), which doesn’t strike me as terribly surprising.

    Alright, enough for now; time to get ready for the day’s festivities. Have a merry Christmas (and a merry Christmas to everybody else, as well)!

  147. Jochen (155),
    I’m glad we didn’t give up: I do think we’re finally getting at the heart of this argument.

    First of all, I think there’s a point regarding HC that I’ve so far failed to make clear to you: as a ‘third option’ in reaction to Mary/Zombies/etc., it’s unique—that is, it’s the only way to resist throwing out physicalism while simultaneously accepting those arguments.

    Indeed, this “third option” angle hadn’t come properly into focus for me so far, and I do think it has some merit. That doesn’t mean I’m convinced, but I do think I better understand why you are.
    For this discussion, I think there are a few separate strands, so as usual I’ll try to untangle them (and assume we understand why they are interconnected):
    1. Your way of actually eating and having the cake: the third route offered by HC.
    2. How much we can conclude from Mary’s story.
    3. Induction and how science works.
    4. ETC and whether or not it answers the questions you (rightly) expect to be answered.

    I can’t really separate 1 from 2, and 2 alone is a little dry for my liking, so I’ll tackle Mary’s story (2) only in light of the use you make of it (1).
    Regardless of how we interpret the premises of Mary’s story (for the moment), we are interested in the conclusions it draws, and there are two of them. The first, which I see as “internal” to the story, is that Mary learns something when she first experiences colours (C1). The second draws from C1, gets out of the fiction and reaches a conclusion that supposedly applies to the real world; it goes like this: because Mary learns something, it follows that you can’t infer everything from physical details and therefore there must be something beyond the physical (C2).

    I think we agree on saying that we can and should explore the space of the possible premises to Mary’s story and evaluate two main questions: (Q1) do some interpretations lead to different conclusions? (If yes, why?) and (Q2) (when) are the conclusions applicable to the real world? I think we disagree on how we answer those questions for some particular interpretation, and will come to this disagreement shortly.

    In your case, you are picking one particular interpretation, and for legitimate reasons: you think the original conclusions are conserved, are applicable to the real world and allow you to build something new on top of them. So far, all looks perfect to me. You may also be saying that the interpretation you’ve picked is the only legitimate one, and I would definitely disagree with this stance, but that’s probably just sophistry, so I’m ok with glossing over this quibble.
    The particular interpretation you pick is one where Mary can compute everything that is computable and knows all the physical details that pertain to colour perception. Under these circumstances you say that it is still self-evident and unquestionable that both C1 and C2 hold true. You then go one step further and offer the third way: perhaps the “something beyond the physical” isn’t really beyond the physical, it is merely “non-computable”, and therefore Mary would learn something because she couldn’t infer it (being restricted to the computable). Thus, the solution of the riddle is HC.
    Am I describing your position in an acceptable way?
    Assuming I am, I’ll keep going: the whiff of incoherence I’ve smelled stems from the fact that you are accepting C1 in full, but you are effectively modifying C2, it becomes “you can’t infer everything from physical details and therefore there must be something beyond the computable” (C2J). I am not saying you are wrong, I’m saying that C2 != C2J and therefore you are saying that Mary’s thought experiment has led us to the wrong conclusion. Not dramatically wrong, but certainly not the best one.

    If I’ve finally found a way to express(/translate) your position in my own words, I think you have a very interesting point and something that is worth exploring (I am also not saying you are necessarily right!), so at least I finally get to understand why I’ve invested so much of my own energy in it ;-). The whiff of incoherence isn’t a major flaw, it’s something that you can easily fix with a little crafty reasoning: it does ask you to revise claims about how strong a conclusion one can draw from a thought experiment, but that’s certainly a cost I would be happy to pay (you may not, however).

    I’ll keep assuming you are still with me (hoping to save us a bit of back and forth), but won’t be surprised if you aren’t. Anyway, this last bit is leading us into the treacherous waters of the epistemology of science. It kind of explains why we got to discuss it in the first place, but that’s the area where I think we both lost track of the ball, so I won’t spend many words on it today. The on-topic side (for us, still OT overall!) is “just” about how much weight we can give to Mary’s story and by extension to your way out. I think you need to acknowledge that C2 != C2J and that therefore, since Mary’s story didn’t allow us to reach a conclusion that is absolutely right, building on and extending it doesn’t guarantee rightness as well (:-), I say so by virtue of induction!). On some level you agree, but if I phrase the above as in “QED: we do need to be worried about deductive leaps”, you might jump at me again. Take it as you wish: I love discussing epistemology, so I’ll take the bait, if you’ll offer me another one.

    At this point we reach my position. If we need to take C1 and C2 with a good amount of scepticism, C2 might not hold in the real world, and we might not need C2J. Maybe Mary does learn something (under the set of interpretations that produce applicable inferences) and still we can’t conclude that the only way to escape C2 is by giving brains powers of hypercomputation. This is what I’m proposing: I’m not saying I’m definitely right, I’m saying this possibility can’t be ruled out. I’ve also argued in favour of rejecting C2, explaining why I find this strategy more palatable, but I don’t think I need to repeat myself (I also think we can’t say if Mary learns something under your interpretation, where “Mary is able to draw any inference that can possibly be drawn—her cognitive capacities are unbounded (99)” – she’s just too different from us to justify trusting our intuition that she must learn something new). Of course, in the end, I take this stance because I think ETC helps refute C2. But I have strong reasons to be worried: I think ETC refutes C2 precisely because it answers the questions you think I’ve left unanswered. So, real-world, empirically verified take-home message for self: ETC as I’ve written it is not fit for purpose. I can conclude this because you clearly have not understood it: if you can’t, it’s likely that most people won’t.

    The main questions you have for me are:
    JQ1:

    If […] all real-world consequences were computable there is a further fact in need of explanation, […] namely that of why a given conclusion can’t be reached, even though we are in principle able to do so (being equivalent to, or at least capable of constructing, universal computers).

    JQ2:

    In a computable world, there should be a story that can be told as to how subjective experience emerges; and if that is the case, then that story is what I want.

    JQ3:

    ETC says that if we arrange a billiard-ball processor with […], the whole lot will inevitably have phenomenal experience.

    I might not have grasped your account fully, but can you give a simple indication of how?

    Short answers, in reverse order:
    JQ3: no, I can’t. There is no simple explanation. There is an explanation, but I’ve clearly so far failed in making it a “simple explanation”.
    JQ2: ETC aims at being that story.
    JQ1: Universal computers exist in theory only, in practice, all computers are bound by both time and resource limits. It’s one reason why some explanations are harder to find than some others. The particular conclusion we’re trying to reach is particularly hard to find, and we all agree on this, since we all know we haven’t found it yet and many even think it can’t be found.

    In more detail: JQ1 is legitimate, but the answer is trivially easy:
    Our powers of computation are (a) constrained by our physicality, (b) not optimised for universal computation, (c) the result of a messy process which favours ecological efficacy, not universal computation (perfect rationality isn’t what our brains excel at, if you wish).
    Furthermore, we are trying to explain the core of how we reason, but this is something that (via (c)) isn’t neatly arranged; it’s in fact the result of a messy and stochastic exploration of the ever-changing space of fitness. Evolution is a disorganised designer: its results resemble scrapheap engineering and must be by definition very hard to reverse-engineer (d). We also know the answer hasn’t been found so far (e), and we even know that we’d have difficulties in recognising the right answer as such (f) (the present discussion demonstrates it!). By virtue of each of considerations (a)–(f), even any single one in isolation, we can conclude that the explanation must be difficult to find, and therefore we don’t need to conclude that it “can’t be found”.

    JQ2 doesn’t need more answers, because they would repeat for JQ3: the simplest (or maybe just shortest?) explanation I’ve managed to write is here, where the minimum computational requirements are listed in points 1-5. They are all necessary and none is sufficient on its own, so I can’t provide a simple description of how they interact. However, the key point is clear: it is recursion that generates the mystery. If you accept that the Evaluation Module can be implemented computationally, and that the results of the EM can themselves be re-evaluated, the mysterious qualities of qualia and PE necessarily must appear. The general shape follows many aspects of BBT (I can’t say it is equivalent, because we know Scott disagrees, and he’s probably right, as I wouldn’t declare the concept of intentionality meaningless); perhaps more convincingly, the latest version of Webb’s and Graziano’s Attention Schema describes the exact same effects of recursion. What Webb and Graziano are saying is precisely the heart of ETC: what they call awareness (they explicitly refer to “subjective awareness”) I call PE, and it can only happen because what I call the “paying attention to [x]” state is represented on what I call level 3. In their language, this representation (model) is used to control behaviour. If they are onto something, so is ETC; perhaps their explanation is easier to understand (most likely!).
    For us here, it means that if Mary is like every other human, but happens to know how colour perception works in the same way as we currently know how digestion works (enough to make lots of correct inferences, but in no way enough to say “we know all relevant physical facts”) then both ETC and Attention Schema Theory predict she will learn something: this would make C2 clearly false, because the prediction comes from a description of pure mechanism (the story you are asking for). Whether this story is true is an entirely open question, of course.

    Summary: for all we know, we could both be right. In the physical world, inference is always bound, there is no way you can know all the physical facts about anything and there is no way to infer all consequences of a given state of affairs (even if you knew all the details and all the physical laws that apply – which nobody ever will), so I’m OK with the idea that “full knowledge of the theory —knowing all the physical facts— does not suffice to nail down all of its implications”. A big part of my criticism in previous comments boils down to “there is no way you can be sure you do have this phantomatic full knowledge“, reinforcing the conclusion. In turn, this means it’s possible that pure computable mechanisms produce PE, or it’s possible that to produce PE the actual mechanisms need to tap a source of randomness in order to acquire hypercomputational powers. The story I would like to hear in support of HC is something explaining why evolution produced it (the benefit one can get from being unpredictable can be obtained without hypercomputation, I would guess) and how it got there. It would also be nice to have criteria which would allow us to establish if PE is present in a given “thing” or not. I don’t think you’ve given us any of that: so far, HC is a viable solution of a riddle, it is worth exploring because there is a chronic paucity of viable solutions.

    Looking forward to hearing your next reply. I’ve read your paper once; Peter sent it over 3 days ago… I need to find the courage to re-read it before I can comment.

  148. Sergio, I believe you’re right that we’re making some progress. So I’ll try and focus, in this reply, on casting out what I perceive to be the remaining misunderstandings (or at least some of them), so that we can hopefully sit back and take stock of each other’s position with as little confusion about it as possible. With that in mind:

    Am I describing your position in an acceptable way?

    Close, but not quite, I believe. As I said in my previous post, to me, the conclusion of the Mary story is that

    not all real-world consequences can be deduced from some set of known physical facts.

    In order to then conclude something about the physical’s non-exhaustiveness requires another premise, which is generally left unstated, namely that (Comp) an ideal reasoner should be able to infer any possible consequence from the known physical facts.

    Premise (Comp) is one about deducibility, and hence, about computability (if one makes the—IMHO reasonable—assumption that ‘ideal reasoner’ does not imply any hypercomputational reasoning abilities, mostly because there is no evidence that we humans possess them)—if there is some set of physical facts whose specification suffices to fix all the facts about the world, then an ideal reasoner should be able to use a finite, effective procedure to derive all the facts about the world from those physical facts. (Chalmers, in Constructing the World, uses a fictional device, the ‘cosmoscope’, in order to implement such a derivation.)

    This assumption is what I challenge. Thus, I am with the Mary story all the way up to ‘there are some things that can’t be derived from the basic physical facts’, but don’t endorse the further conclusion that ‘the physical facts don’t suffice to fix all the facts’. Indeed, as the example regarding the spectral gap shows, this assumption seems in fact to be wrong—there are perfectly ordinary physical effects that can’t be deduced from a set of physical ‘base truths’, i.e. a full specification of physical theory. Even though a world governed by that theory would either have a spectral gap in its spectrum of excitations, or not, that is, even though the ‘base truths’ fix all the physical facts, we can’t derive all those facts from the base truths. Chalmers’ cosmoscope, if it is supposed to be a computational entity (a Laplacian perfect reasoner, for example), can’t exist.

    Thus, one is not free to make this assumption in the Mary case (or in the case of zombies, or color-inverts, and so on—the focus on Mary here may be somewhat unfortunate, since one of the main virtues of the approach, as I see it, is that all of these other thought experiments admit the same answer), as we know that there are instances where it fails.

    I think you need to acknowledge that C2 != C2J and that therefore, since Mary’s story didn’t allow us to reach a conclusion that is absolutely right, building on and extending it doesn’t guarantee rightness as well

    This I’m afraid I still don’t get. The conclusion entailed by Mary’s story is that not every fact can be inferred from the basic physical facts—which is, I think, exactly right. The further conclusion that hence, the physical facts don’t suffice to fix all of the facts can only be drawn by using the additional premise that ‘fact-fixing’ and ‘fact-deducing’ are co-extensive, i.e. that all facts that can be fixed by the physical facts can also be deduced from knowing them. This, it seems to me, is manifestly false; hence, rebutting this premise allows to resist the conclusion (as always must be the case in resisting arguments).

    I also think we can’t say if Mary learns something under your interpretation, where “Mary is able to draw any inference that can possibly be drawn—her cognitive capacities are unbounded (99)” – she’s just too different from us to justify trusting our intuition that she must learn something new.

    I don’t think this is true. Mary is an idealized version of our own capacities, and as such, much more easily described formally—namely, essentially by a universal Turing machine (which as you’ll recall were invented exactly by abstracting from our own usual reasoning process). There’s a very simple theory associated with that, and we can unambiguously say that at each non-computable or undecidable juncture, Mary will indeed not be able to derive those facts, and hence, learn something not reducible (to her) to her prior knowledge. Consider again the example with Mary the experimenter: she can’t derive whether the excitations of a given theory show a spectral gap from a full specification of the theory; but once she does the experiment, she’ll know, and hence, have learned something new.

    The particular conclusion we’re trying to reach is particularly hard to find, and we all agree on this, since we all know we haven’t found it yet and many even think it can’t be found.

    Well, yes—but in principle, it is there to be found. This is a very important difference between our accounts. On yours, we need to be deceived in some fundamental way; on mine, our intuitions are dead-on right.

    JQ2 doesn’t need more answers, because they would repeat for JQ3: the simplest (or maybe just shortest?) explanation I’ve managed to write is here, where the minimum computational requirements are listed in the points 1-5

    Well, if you excuse me for being so blunt, I have severe doubts that ETC (as I understand it) meets the explanatory burden it sets out to meet. So, let’s say I grant you all your claims about the EM, your level architecture, and so on (I don’t necessarily do; in particular, the EM raises homuncular worries for me, but let’s table that for now). Why would I then believe that a being so organized has any sort of subjective, qualitative phenomenological mental content? You may be right that there are things that it doesn’t know of its own workings—but from there, to infer that it ‘seems mysterious’ to itself needs a commitment to the idea that things can ‘seem’ a certain way to it, which begs the question. You may be right that it can’t formulate working hypotheses about its own perceptions—but to conclude from there that they ‘appear ineffable’ again needs a belief that things can ‘appear’ in any way to it. The tricky part—explaining how anything can seem, appear, be like something and so on—seems to me to just be swept under the rug.

    Certainly, it seems easily possible to imagine a being organized the way you claim a conscious mind to be, without however any shred of phenomenology to go with it. PoolHead will implement any algorithm you feed it with, but no billiard ball interaction feels like anything to it. He might not know how he produces reactions to stimuli, but he also doesn’t care—without there already being something it is like to be PoolHead, nothing can appear ineffable, immediately available, self-explanatory, or any other way to him.

    If you accept that the Evaluation Module can be implemented computationally, and that the results of the EM can themselves be re-evaluated, the mysterious qualities of qualia and PE necessarily must appear.

    This simply doesn’t follow, to me. An entity can lack knowledge about its own functioning without there being any subjective states associated with this lack of knowledge; indeed, in order to experience something like mystery at this lack of knowledge, it seems trivial that it needs to have a faculty of experience first.

    For us here, it means that if Mary is like every other human, but happens to know how colour perception works in the same way as we currently know how digestion works (enough to make lots of correct inferences, but in no way enough to say “we know all relevant physical facts”) then both ETC and Attention Schema Theory predict she will learn something

    But this doesn’t meet the requirements of the Mary story. If she doesn’t know everything about perception, then that she can’t infer certain details is no surprise. The onus on accounts like ETC is to make it plausible that it is in principle possible to have such a complete knowledge of perception that an ideal reasoner could indeed come to know what seeing red is like without ever actually seeing red. Indeed, if you believe in ETC, then you ought to, in principle, believe that it is possible to tell a person, blind from birth, a story that will allow them (sufficient capacities of imagination taken for granted) to imagine a scene as well as a sighted person can.

    Or, to take it to a more personal level, I have no sense of smell, as far as I know since birth. I find I simply cannot believe that anything anybody will ever tell me of the mechanics of smelling will suffice for me to imagine what it would actually be like to smell. Or take the feeling of a feather brushing your skin, an itch you can’t scratch, that nagging doubt of whether you’ve remembered to lock your apartment door: can you in all seriousness claim to believe that in principle, it ought to be possible to explain what all of these things feel like to somebody who’s never experienced them?

    I realize I’m appealing to emotion here, but I think this is amply backed by cases such as PoolHead, where one knows that every reaction can be completely explained by a story of billiard balls colliding, with nothing left over. At the very least, it seems to me that one should insist on a very strong argument in favor of the possibility of such stories before being convinced; and so far, none has been produced, as far as I can tell.

    Your account based on ETC still seems to require me to believe that the mysteriousness of an entity’s own workings would be presented to it, would appear to it a certain way; but there is no argument given (that I could see, anyway) that that would indeed be the case. PoolHead might be totally mysterious to itself in that it could not produce a theory of its own working upon being prompted (as you know if you’ve read my paper, that would in fact necessarily be the case), but I don’t see how this should entail any kind of subjective experience at all—it could equally well be mysterious to itself with all the lights being off.

    Alternatively, it might be that you’re suggesting that ‘seeming’ is just something perfectly natural in the world, it’s just that our inability to introspect perfectly renders it mysterious (ineffable, etc). This would then entail a commitment to a sort of panpsychist mysterianism—there’s in fact experience associated with physical processes in a perfectly natural way, we’re just somehow barred from seeing that connection. Again, that’s something one may either believe in, or not; but, as with run-of-the-mill panpsychism, it doesn’t so much provide an answer to the riddle of conscious experience, as it does merely ignore it.

    A big part of my criticism in previous comments boils down to “there is no way you can be sure you do have this phantomatic full knowledge“, reinforcing the conclusion.

    But, and this is very important, this does not entail that one can’t explore the implications of having it, that is, what would happen if one did have this full knowledge, or what this full knowledge would enable us to do. One case in which one does have full knowledge, for instance, is PoolHead—it’s known that all of the interactions necessary to compute arbitrary responses to arbitrary stimuli can be phrased in terms of billiard ball collisions. Thus, all that happens in this case is billiard ball collisions; so, everything that PoolHead does can ultimately be accounted for in such terms, and hence, if one has no reason to believe that billiard ball collisions are accompanied by phenomenology, one has no reason to believe that PoolHead has any phenomenology at all.

    The story I would like to hear in support of HC is something explaining why evolution produced it (the benefit one can get from being unpredictable can be obtained without hypercomputation, I would guess) and how it got there.

    Well, as you know, not all traits or behaviors are necessarily adaptive, so this stance may blind you to certain possibilities that nevertheless deserve to be taken seriously. Furthermore, I think I’ve already given ample justification for the expectation that hypercomputation occurs in biological brains—namely, any Turing machine with a nonalgorithmic input (i.e. environmental randomness) will execute a non-recursive function (almost surely, that is, with probability one). This simply follows from the fact that all possible functions can be implemented by a TM with a source of randomness, and that the computable functions form a set of null measure within that of all functions. So if there is genuine randomness in the world, and we process environmental input, then we will unavoidably perform hypercomputation. No evolutionary justification story needed (indeed, given these premises, one would need a justification as to why we don’t perform hypercomputation, or at least, why whether we do isn’t relevant).
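
    (To make the measure claim a little more precise: identify each function f from the natural numbers to {0,1} with the infinite bit-string f(0)f(1)f(2)…, and put the fair-coin measure on the space of such strings. There are only countably many Turing machines, hence only countably many computable strings, and any countable set has probability zero under that measure; so a genuinely random input stream is, with probability one, not computable, and a machine whose output depends non-trivially on such a stream almost surely realizes a function outside the recursive ones.)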

    It would also be nice to have criteria which would allow us to establish if PE is present in a given “thing” or not. I don’t think you’ve given us any of that:

    Well, I’ve given you justification as to why HC should be expected to be present in biological brains; and I’ve also given you a possible way to account for the origin of consciousness, namely, the Zeno regress, which a HC could perform. This is a perfectly good criterion for the presence of consciousness; however, it’s not an effective criterion (that is, one checkable with finitary means). It might be that there’s no such thing, of course; but in my previous reply, I’ve also made some suggestions as to how we could nevertheless establish at least the possible presence of a mind.
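
    (For concreteness, the usual picture of such a machine: it performs its first step in 1/2 a second, its second in 1/4, its n-th in 1/2^n, so that after 1/2 + 1/4 + 1/8 + … = 1 second it has completed infinitely many steps: enough, in principle, to run through an otherwise endless regress, or to settle questions such as halting that no ordinarily-paced machine can.)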

  149. Jochen, progress indeed!

    As I said in my previous post, to me, the conclusion of the Mary story is that not all real-world consequences can be deduced from some set of known physical facts.

    I can agree with this; it’s what I’ve included in C2J:

    you can’t infer everything from physical details and therefore there must be something beyond the computable

    I would be with you, if it weren’t that I can’t see how the above (either your own formulation, or C2J, or both) is equivalent to the official conclusion of the original thought experiment (copied and pasted from “Epiphenomenal Qualia”, 1982):

    What will happen when Mary is released from her black and white room or is given a colour television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

    In other words, Jackson reaches the wrong conclusion, according to both of us. Do we agree? From here, I get carried away with my “humble epistemology” trip, and insist that we just can’t trust our intuitions and/or what looks to us like inescapable deduction. If we can’t double-check with an experiment, the possibility that our conclusion was wrong has to be taken very seriously. This applies to both of us, but for me, when somebody says things like “with all the force of deduction”, I just can’t help myself, I have to complain (sorry!).
    But this really is sophistry (or worse), so I’ll try to move on. I mention it here and below because of the thing you “don’t get”, and I thought it was worth clarifying: I’m ok with the idea that we might not be able to infer all consequences from perfect knowledge of the facts, not even if we had perfect inference abilities. On this, you are kicking at an open door; if I disagree at all, it’s because I’m arguing for even weaker powers of inference.

    Before I go into marginal details about HC, let me state one thing explicitly: once I saw your “third way”, I recognised that you might be right. I’ve even started doing some (private) mental acrobatics to see if we could both be right (not quite). It’s important for me to make sure you understand this non-trivial detail: it means that what follows really are marginal worries; I am writing them down because criticism might help when you come to write up your argument. Sure enough, the fact that I’m far away from the mainstream might make my points moot (FAPP), so please do feel free to stop trying to convince me (you’ve convinced me enough; to convince me further you will have to make me change my mind on epistemology in general).

    Ok, residual worries:
    What you say at the end of your last comment (161) is all good for me, but I really can’t avoid detecting a residual hint of incoherence.
    I’ll try one more time; your key ingredients are: simple mechanisms (AKA computations) and environmental randomness. Put the two together and you necessarily get HC. Then, by virtue of one particular HC, the Zeno Machine, you say: look, maybe the homuncular regress actually does explain the thing.
    Fine: there must therefore be, in our heads, the very peculiar mechanisms capable of implementing the Zeno Machine (or some equivalent highly exotic machinery that just happens to generate PE when fed with genuine randomness), otherwise you are the one going all panpsychist on us. The world is full of mechanisms, and full of environmental randomness. Unless only particular mechanisms are capable of doing the trick, almost everything must have a mind. That’s the core of my residual worry: you are either saying that everything is conscious, or you are kicking the can down the street (give me that can, I was kicking it!). What I haven’t found in our discussion is an explanation of what defines the particular mechanisms that can create minds, and why they have appeared as a product of natural selection. Sure, not everything that natural selection produces is functional, but usually we can produce fairly substantiated explanations of how it came to generate the observable shapes. Perhaps more worryingly, I don’t even know if one can produce formal definitions of what kind of mechanisms can be expected to hypercompute (or at least hypercompute in the right way); this is again my own ignorance, so you really needn’t worry too much.

    Also: when I get on my high horse (made of paper towels, of course) and blab about epistemology, I do so also for a secondary reason. On many occasions you have referred to the currently accepted state of fundamental physics. Not surprising considering your background, but to me, it feels risky. For example, I don’t think one can definitely rule out the possibility that the “Undecidability of the Spectral Gap” is due to some variable that we know nothing about. Furthermore, for me it’s undeniable (what, undeniable? for me?) that all measurements come with some error. As you refine your instruments and reduce error, you get to rely on tinier forces/particles (whatever you are using to make your measurement), and alas, if you then want to measure them, you have to find something even tinier to exploit for the subsequent measurements. This has epistemological consequences: there is no way to make perfect measurements, and therefore there is no way to get into Mary’s idealised position (of knowing all relevant facts), thus, by good old chaos theory, we are, by definition (oh gosh, should I say, deduction?!) incapable of inferring all consequences of a given state of affairs. That’s because we’ll never know everything about the state of affairs. This reaches your same conclusion (C2J) that “you can’t infer everything from physical details” (FAPP). I can see my take doesn’t require the HC ingredient, and therefore doesn’t suit your aim, but I find this approach less risky because it doesn’t rely on current theories of fundamental physics. As we do expect that theories of fundamental physics will evolve in directions we’ll all find surprising, it feels safer to me. It really is a minor quibble; I’m mentioning it to explain why I always feel the need to get on the high horse (I’m kind of allergic to what I see as physics hubris, I hope you can forgive me for this).

    Never mind, the above is more due to my stubbornness than a genuine hope of being useful. Your main point is conceded: HC might be necessary for consciousness. I won’t bet on it, but surely that must be because I’ve already placed my bets ;-).

    Second side of usefulness: my own. What you say about ETC also makes sense; I think I can see what I’ve failed to explain, so I can give it another go here (as long as I don’t get on Peter’s nerves).
    I’ll try to start with the question: what are the minimum computational requirements to create a Philosophical Zombie (PZ)? Because of functional equivalence, a PZ will be genuinely convinced to have PE. Thus, if I explain convincingly how to produce such an “illusion” (with scare quotes), I should be winning the jackpot, right?

    Naturally our PZ will need to have a concept of “self”, will need to “know” that certain parts of the world belong to “it”, that they harbour sensory structures, and so forth (recall my guest posts here: I/O structures are a requirement). PZ also needs to have memories, and just like us, needs to have faulty memories: there must be a heuristic mechanism that decides what to remember. PZ can’t remember everything (again, functional equivalence), and what it does remember needs to be more or less sketchy (as Scott keeps insisting, rightly), i.e. not a perfect recording of the original stimuli. It then needs to be able to recollect memories (including recalling what that thing was just an instant ago), and needs to be able to re-analyse the memories more or less as it analyses the current input (“Oh, I think I remember Peter mentioning XYZ in a post long ago, do I need to dig it out?”).
    Take away any of the above and you don’t have a perfect twin, right?

    But this is all you need: EM selects what to do with some incoming information, and this could be a memory (just a different source of input, functionally), and when recalling a perception, PZ needs to attribute the perception to itself, so the particular input coming from “memory banks” will be recognisable as such. We agree that all these mechanisms need to be at least partially (but fundamentally) opaque to PZ, because you know, computation. So hey, when PZ asks itself “what made me want to remember that XYZ”, it will recall XYZ, pass it through the EM and get some output (which can then potentially re-enter the EM). The recollected XYZ needs to be labelled as “this is what I perceived at that time” (also known as “this is how the initial input got stored”), and because our PZ has a model of “self”, the mystery is served. Due to opaqueness, the symbolic nature of both the original XYZ stimulus (when it reached the EM the first time it was a signal representing the stimulus, not the thing itself) and of the recollected XYZ (ditto) is completely unknown to PZ, and therefore PZ has no way of describing it in either explicit/propositional thought (further introspection) or words (giving you privacy, ineffability). Because EM is always on (whenever PZ is conscious, dreaming included), to PZ all this just happens, and thus you get the immediacy of qualia.

    You are yourself proposing that meaning exists by virtue of what it makes a system do (I did read your paper!), so in the end, we are left with nothing to explain. The signals that reach the EM have their own primary intentionality (they represent stuff that happens in the world), either from the start (according to me), or after passing through the EM and getting stored in at least short-term memory (according to your paper 😉 ). Because PZ has a model of self, which can (and does) pass through the EM, incoming stimuli are classified as “information I’ve collected from the outside”, but because PZ has no clue of how this happens (nothing of the sort is contained in the “self” representation), this is equivalent to “information I’ve somehow perceived”. It can be re-analysed, and when this is done, it will be flagged as “information I’ve perceived”, and that’s how you get the “illusion” (with gargantuan scare-quotes) of PE. Only it’s not an illusion: that *is* (according to me/ETC) PE. The illusory side is what seems magical, which is in fact the unavoidable consequence of the fact that the perceiver is just a mechanism.

    In the above I’ve abruptly started using the “perception” word, so I’d better explain why I think it’s legitimate to do so: information is collected from the outside, classified as “data coming from outside myself”, stored in short-term memory (my Level 3); it can then be fed as input to EM. This step is automatically performed to decide if this information requires action, if it should be stored for future reference or if it can be ignored (and more, I’m sure). When the same something passes through the EM once more, it will be pre-digested information, it may lack the original perceptual detail (for example, for long-past memories) and will be labelled as “what I recall of the original input”; the “I” in this sentence is possible because the system contains a model of self. But once you allow the use of “I” you can re-write my last formulation as “what I recall of the original perception”, because after all, the original input got digested by “I”; thus, to me (the model of myself which is automatically maintained in working memory) the original incoming information isn’t transparent: it’s something I know I’ve processed and evaluated, it is something I have perceived. If you keep a model of the self, and can re-evaluate sensory information recursively (changing and synthesising it at each pass), you can (must?) attribute to the (modelled) self the ability of perceiving (i.e. collecting and evaluating sensory information) – thus, the existence of PE will look undeniable to yourself. PE will also apparently possess the strange qualities of qualia because the model of self that is itself perceivable won’t include anything of the above, not the processing routes, and even less the details of how information is encoded and evaluated.

    This doesn’t mean that ETC includes an explanation of why red feels the way it does. ETC on its own can’t do this, so to my eyes it doesn’t fully cross the explanatory bridge. However, it does say that you can’t even hope to take this last step if you don’t know how red is encoded when fed to the EM and how the EM operates in detail. Once you do, there is hope of reducing the gap even further, but no guarantee that we’ll be able to infer what a signal would feel like: to do so we need to deploy our powers of imagination, and these are both very limited and idiosyncratic. In other words, I wouldn’t go so far as believing “that it is possible to tell a person, blind from birth, a story that will allow them […] to imagine a scene as well as a sighted person can”; this will remain impossible because of limitations of the human machinery. However, in a transhuman scenario where people get artificial cognitive extensions (equivalent to the bit I’ve omitted: “sufficient capacities of imagination taken for granted”), then yes, it would be possible (useless, because we’ll give them artificial eyes instead), but it does mean that once the whole mechanism is understood well enough the privacy of qualia will be surpassed (in small, difficult and dangerous steps, I’m sure).

    Now, picturing in your mind such a thing made only of billiard balls is a tall order! However, following your own way of arguing, in order to convince me that ETC is necessarily (and fundamentally) wrong, I need to be convinced that the architecture I’m proposing can’t be instantiated. If you can actually implement it, I simply can’t see how this thing won’t consider itself conscious, exactly like us.

    You can of course theoretically propose to build PZ as our PoolHead, and you can of course then describe its behaviour without mentioning PE, but you’d be missing a trick. It’s exactly like a computer (the physical device): you can describe its behaviour only in terms of electronics, but if you want to explain how any piece of software works, you’d be better off looking at macroscopic behaviours and designed algorithms, right? Why should we be different? I type stuff in here, and this doesn’t really become a series of zeros and ones (it actually is just electrons flowing here and there); but to understand what’s going on, and how my words can eventually reach you, selecting the level of explanation that includes Unicode, http protocols and so forth is the correct way. Selecting the “correct” explanatory level is useful because it allows us to neglect a massive amount of irrelevant detail. Thus, the fact that we can explain PoolHead in strictly mechanistic detail doesn’t mean it can’t have phenomenal experience. In this context, if PoolHead implements the ETC architecture, we will have reasons to believe that these particular mechanisms generate PE. But on the other hand, if you start from the assumption that mechanisms can’t possibly generate inner feelings (because you can’t build ships with paper towels), then there is nothing I can say to change your mind.

    Conscious systems are systems that are capable of interpreting themselves in a very peculiar way. So yes, “‘seeming’ is just something perfectly natural in the world”: all matter has the potential of producing PE, but only if arranged in particular ways. This means that, considering my insistence on epistemic limitations, I do allow some mysterianism, but I don’t think I need to worry about panpsychism.

    I don’t know if my long ETC diversion makes it clearer to you; it certainly is useful for me to try to put these concepts in writing, so thanks for giving me the chance! (thanks also to Peter, as always)

  150. Sergio, I’m very happy that it seems you agree that my HC account at least provides a possible solution to the puzzle; it’s really more than I had hoped to achieve in this discussion. As far as I’m concerned, the rest is just ‘window dressing’, and we could as well drop the subject; but I’ll reply to some of your remaining worries anyway, and then maybe extend to you the same courtesy you showed me in giving your ideas a(nother) shakedown?

    Jackson reaches the wrong conclusion, according to both of us.

    Jackson reaches the right conclusion—Mary’s knowledge is incomplete. He then produces a further conclusion from that one, and the (unacknowledged) premise that all the physical facts ought to be derivable from some base of physical truths, namely, that the physical facts don’t exhaust all the facts about the world. It’s this further conclusion I disagree with, since on my account, there is no way even for an idealized reasoner to derive all the facts from some base set of facts (which is different from your account: there, an idealized reasoner should in principle be able to derive all the facts, and hence, also those of her subjective experience).

    If we can’t double check with an experiment, the possibility that our conclusion was wrong has to be considered as very relevant.

    But ‘your argument could be wrong’ is not a valid reason to resist an argument. Of course, any argument could be wrong; but this doesn’t give you carte blanche to just not believe those arguments you disagree with! If there is something wrong with an argument, you have to show where, and how, it goes wrong, in order to be able to resist it. (Also, note that experiment is not going to help in such a situation: all experiment is theory-laden, so if you don’t trust reasoning as a matter of principle, you have no reason to believe, say, the physicist when he says that the red light blinking means that the particle had spin 1/2.)

    If we start with true premises, and use correct reasoning, we get a sound argument; if you want to disagree, you can attack premises or reasoning (its soundness or validity), but you can’t say that it just seems kinda fishy to you because people are sometimes wrong. This isn’t humble, it’s just vague and ill-defined.

    I’m ok with the idea that we might not be able to infer all consequences from perfect knowledge of the facts, not even if we had perfect inference abilities.

    But do you think there’s any way other than hypercomputation to achieve this state of affairs? Otherwise, if we had perfect inference abilities (i.e. were universal Turing machines), and the world were computable/contained no undecidable questions, then it seems we should be able to infer every consequence.

    What I haven’t found in our discussion is an explanation of what defines the particular mechanisms that can create minds, and why they have appeared as a product of natural selection.

    And I’m not sure I’m capable of giving one—after all, what hypercomputation is being performed by a given system isn’t in general decidable. Indeed, not even what computation is being performed is decidable, so as I’ve pointed out before, both the computationalist and the hypercomputationalist are in the same boat here, so I can just take this criticism and say: right back atcha! 😉

    But it becomes more reasonable, I think, if you look at it from a process-ensemble kind of stance. The world is an ensemble of processes, at least some of which are capable of information processing, and indeed, given environmental randomness, of hypercomputation. Then again, if there is some HC account of consciousness, then some of these processes (given a large enough ensemble) will possess it, and presumably, ask the questions you’re asking. What we get then is essentially akin to the fine-tuning problem: why do we (meaning the particular processes in our brains) experience consciousness? Analogously, why do we inhabit environments so suspiciously favourable to our well-being?

    Well, the answer in both cases is the same: because otherwise, we wouldn’t be here to wonder about it. If there was no habitable environment like Earth, but only the surfaces of stars, life like us couldn’t exist; if we weren’t those processes giving rise to conscious experience, we couldn’t be wondering about it.

    So the only explanation I’m able to give (at this point, anyway) is a statistical one. Although, note that we may be better off in judging when something has a human-like mind: intentionality, as I argue in my paper, arises from a certain structure, whose presence we can in principle detect. So if you believe (as I do) that for a human-like mind, both intentionality and phenomenality are necessary, then one can at least exclude certain systems from consideration.

    For example, I don’t think one can definitely rule out the possibility that the “Undecidability of the Spectral Gap” is due to some variable that we know nothing about.

    One can, in fact, because the theories that are being used are constructively defined—that is, one knows exactly what fundamental degrees of freedom go into the construction; there are no ‘unknown variables’. In principle, the proof just relies on the fact that certain structures can be considered to implement universal computations, and whether the computation halts dictates whether a spectral gap exists.

    A quite different worry might be whether those theories apply to anything in the real world. And here, it’s true that if we live in a universe with just a finite number of degrees of freedom (corresponding, e.g., to a discrete as opposed to continuous spacetime), then everything stays computable. So yes, I do need to make an assumption about the nature of the world in order to get my theory off the ground—all the better, since it gains at least indirect falsifiability thanks to this.

    Otherwise, however, I’ve already referred to effective field theory as a justification for my assumptions—no matter our ignorance about the next more fundamental level of physical theory, we can formulate a closed effective theory which we know every possible completion must asymptote to. That’s due to the existence of so-called universality classes—different fundamental theories suddenly become ‘the same’ (yield the same empirical predictions) at a certain (usually length/energy) scale. Thus, which one of those theories is right does not affect predictions beyond that scale; so if all of the phenomena necessary for conscious experience just occur on ‘this side’ of that cutoff scale, I can justifiably assume that as far as the explanation of consciousness goes, physics is basically complete. And that scale is tiny, certainly much smaller than any cellular (or even molecular) phenomena. The LHC has probed the currently best theory of particle physics up to TeV energies, and the predictions have all held up; that corresponds to a length scale of about 10^-18 m. So under the assumption that nothing a hundred million times smaller than an atom has any relevance to the explanation of consciousness, I can make do with our current understanding. (This footing then also suffices for the Mary story.)

    Onward, now, to your own story!

    Because of functional equivalence, a PZ will be genuinely convinced to have PE.

    This sort of phrasing immediately raised alarm bells for me: the PZ won’t be convinced of anything; he will give the appearance of being so, that is, display the attendant behavior and possess the physical correlates, but, in so far as there is something it is like to be convinced of something, he won’t be (ex hypothesi). It’s a minor quibble, but it’s sloppy language use like this that usually leads to trouble when it comes to hard problem matters.

    Thus, if I explain convincingly how to produce such an “illusion” (with scare quotes), I should be winning the jackpot, right?

    Depends on how you intend to cash out the scare quotes—a system such as a PZ might set an internal flag labelled ‘I think I’m conscious’, but of course, this gives us no reason whatever to believe that he actually should be experiencing any subjective phenomenology. So first, you ought to explain how such a system can have any form of ‘illusion’ at all—that is, a mistaken belief, or an experience of having such a belief, as opposed to the mere throwing of a flag.

    And I think that’s just what’s missing in your next two paragraphs—you start by using scare quote descriptions, but then just gradually elide them. So while one would typically cash out “perception” when applied to a computational system as ‘being served a certain data pattern’, and ‘perception’ as applied to a human mind as ‘becoming aware of a certain percept’ (or something equivalent), you just gradually erode the distinction, without, it seems to me, giving any justification as to what allows you to do so.

    You seem to think that if the PZ “believes” that it has some states it can’t account for, it’s the same thing as if a human mind ‘believes’ such a thing; but even though both have the same propositional content, their mode of presentation may be different, and hence, the former does not necessarily suffice to account for the latter (it may, but as things stand, it seems one has to just believe so, just as one has to believe that experience just is associated with certain entities in dual aspect theories—which I should have referred to instead of panpsychism in my prior reply, sorry for that).

    To put it bluntly, a robot “believing” something is ultimately just throwing a flag, setting a bit in a certain memory section, and so on. But to a human mind, ‘believing’ entails an awareness of the belief’s content; whether computational structures such as a set bit suffice to give rise to the human-like experience of belief is exactly the question that one would like to have answered. But you presuppose the answer to that question to be affirmative; otherwise, one would simply not have any reason to ascribe any subjective state to your PZ (it would have that set flag, would have certain things it can’t express, would have “introspection” in the sense of data-patterns that under the right interpretation pertain to itself, and so on, but would not possess any subjective experience of all of that).

    Certainly it doesn’t seem necessary that a set flag induces subjective states. I can take a piece of paper, put on it a box, labelled ‘I am a piece of paper’, and another box, labelled ‘I am a sheep dog’, and tick the first box; but certainly nobody would believe that as a result, the paper correctly believes that it is a piece of paper.

    But your scheme strikes me as merely an elaboration of this sort of thing—use enough boxes and labels, and one can encode the whole propositional structure of your PZ, such that if you tick one box, it would “believe” that it had “ineffable”, “private”, “subjective” and “unanalyzable” states—but of course, there would not be any actual such belief, only a mark on paper saying so. For your PZ, that mark on paper would be a bit in some memory location, and the label on the box would be a designation for that memory location, but the principle strikes me as being exactly the same.
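
    (If it helps to see the point in code rather than on paper: here is a deliberately trivial sketch, with made-up names, of what ‘ticking the box’ amounts to for a machine; nothing beyond a stored value is present.)

        class PaperZombie:
            """A toy flag-holder; not anyone's proposed architecture, just the 'ticked box' in code."""
            def __init__(self):
                # The entire 'belief' is this one stored value.
                self.flags = {"I have ineffable, private, subjective states": True}

            def report(self):
                # The report is generated from the flag; nothing else is going on.
                return [claim for claim, ticked in self.flags.items() if ticked]

        print(PaperZombie().report())  # ['I have ineffable, private, subjective states']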

    If you keep a model of the self, and can re-evaluate sensory information recursively (changing and synthesising it at each pass), you can (must?) attribute to the (modelled) self the ability of perceiving (i.e. collecting and evaluating sensory information) – thus, the existence of PE will look undeniable to yourself.

    It seems that you’re proposing that the recursiveness somehow allows you to overcome the troubles I’ve mentioned above. But you can always get rid of any recursion in an algorithm, and convert it into an iteration—and an iteration is just like writing all the steps down onto a page. So it doesn’t seem that this can buy you any new ground.
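
    (To make that concrete: a throwaway sketch, with invented names, of the standard transformation; any recursive ‘re-evaluation’ can be unrolled into a plain loop, i.e. into steps written down one after another.)

        def redigest_recursive(item, depth):
            """Recursive 're-evaluation' of a stored item (a stand-in, not ETC itself)."""
            if depth == 0:
                return item
            return redigest_recursive({"recalled": item}, depth - 1)

        def redigest_iterative(item, depth):
            """Exactly the same result, obtained with a plain loop and no recursion."""
            for _ in range(depth):
                item = {"recalled": item}
            return item

        assert redigest_recursive("XYZ", 3) == redigest_iterative("XYZ", 3)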

    Furthermore, even if you keep all the recursions in, then you just iterate the problem—i.e. if there’s some phenomenal experience on the nth step, it might suffice for phenomenal experience on the next one, but how do you get started? (And additionally, writing ‘I am a piece of paper’ on a piece of paper is recursive, in a way, but doesn’t really get us any closer to PE.)

    In particular, in the quote above, do you mean to say “attribute”—i.e. set a flag somewhere—or ‘attribute’—i.e. believe something to be the case, in the sense that there is some experience of having that belief? Because without the latter, I don’t see that attributing the ‘modelled self’ with anything necessarily gives rise to any experience at all—in any sense beyond setting a flag labelled ‘I have phenomenal experience’ (and that that’s not sufficient should probably not be a point of contention).

    Moreover, is the PZ (I keep flashing to PZ Myers when I type this) supposed to be ‘aware’ of his modelled self, i.e. have an experience related to there being that modelled self, or just “aware”, i.e. there is some data-pattern related to it (using the right interpretation)? Again, with only the latter—which is really all you have given me reason to grant—I see no cause to accept the claim that the “illusion” of PE is really all that is there to it.

    And perhaps a note on meaning here: the meaning I consider explicitly lacks elements of what-it’s-like to have a particular belief; those I don’t think can be incorporated in a computational schema. (In fact, I think there’s another reason for noncomputability here: semantic notions, such as truth (per Tarski) can’t be defined within a given formal system, per a Gödelian argument; but that would need more time to flesh out.)

    Thus, the fact that we can explain PoolHead in strictly mechanistic detail doesn’t mean it can’t have phenomenal experience.

    But if you intend to stick with the computational metaphor, then we run into the problem that from PoolHead’s mechanistic description, we can’t infer what kind of experience it has (just as we can’t infer what kind of program is being implemented by just looking at the electrons being shuffled around). Then, you have an underdetermination of the phenomenal facts by the computational facts—i.e. just that sort of underdetermination that the Mary argument concludes. There needs to be additional fact-fixing to make the associated conscious experience *that* experience, as opposed to *this* one.

    And that’s just what I wanted to highlight with this description: PoolHead’s behaviour is accounted for in terms of pure mechanism; and there’s no way to derive *what* experience is associated with that mechanism. You can, of course, claim that such-and-such experience is, and I would not be able to fault you for it; but I could equally well claim that another experience is associated with it, or even none at all. (This is, of course, also a hallmark of undecidability—just as all Gödelian propositions and their negations are consistent with a given set of axioms, or like how both a machine’s halting and non-halting is consistent with its mechanical description, and so on.)

    So ultimately, I’m just not convinced that I should believe any PE at all to be associated with a system governed by ETC. Certainly, one might suppose that it could print out the sentence ‘I have PE’ on paper, or produce vocalizations appropriate to this, or set a flag, or anything of that sort; but to grant that there is PE would necessitate an additional belief that this sort of thing is sufficient for it, which I don’t think is reasonable (recall that paper on which ‘I am a piece of paper’ is written).

    So I find that, when you say things like this:

    Conscious systems are systems that are capable of interpreting themselves in a very peculiar way.

    It seems to me that I can only agree with you if I suppose that ‘interpret’ here already means ‘having an experience of oneself as’, such that the sentence becomes ‘conscious systems are systems that are capable of having an experience of themselves as something’, which however is trivial; but if one means merely “interpret” in the sense of having particular data pertaining to themselves (under a particular interpretation), then I don’t think this follows at all.

  151. Jochen (163),
    Sometimes I can’t avoid getting the impression you’re playing with me, and avoid acknowledging a key point just to see how I’ll react. The positive side is that you force me to make an effort on clarifying more and more, so I’m not complaining! The below stems from two instances of the same kind of situation, one for HC and another for ETC.

    A bit more window dressing: Jackson’s conclusion is that there must be something beyond the physical (already labelled C2); Mary’s thought experiment is used to reach this conclusion. To get there, he passes through C1 (Mary does learn something). You agree on C1 and disagree on C2, so when you say “Jackson reaches the right conclusion — Mary’s knowledge is incomplete” you are glossing over a crucial detail: that “Mary’s knowledge is incomplete” (C1 again) is not the conclusion of Jackson’s argument. The real conclusion, C2, is, according to you, wrong, because Jackson didn’t know or didn’t think about hypercomputation. What Jackson didn’t do exemplifies the epistemological point I’ve tried to make from the very start: he gets it wrong because of ignorance.

    This is more than window dressing to me, because it generalises.
    To reach both C1 and C2 Jackson relied on a crucial logical step, and both times it’s unwarranted: on C1, it seems undeniable that Mary will have to learn something. It seems undeniable because most humans can’t even begin to imagine what else could happen. C2 also seemed undeniable: once more we humans couldn’t even begin imagining how to reach a different conclusion, or so thought Jackson. But then you arrive, introduce a novel element, HC, which acts as new knowledge and allows us to imagine how to reach a different conclusion. So, all along, I’ve been claiming that:
    – Ignorance may explain why both C1 and C2 looked unavoidable, but might be incorrect just the same.
    – With HC, you’ve made my point about C2.
    – By induction, your own argument confirms that also C1 might be wrong.
    Mine isn’t a strong conclusion, be as it may for C2, C1 could still be right, but any amount of reasonable doubt is enough to render Mary’s argument weak: the original conclusion was that physicalism must be false, and reasonable doubt should be enough to remove the *must*. Thus, we get to where I stand: thought experiments that make unverifiable predictions can have value as suggestions, but can’t have normative value (show how reality must be).
    Is this more clear? The knowledge argument doesn’t hold, because ignorance ;-).

    But ‘your argument could be wrong’ is not a valid reason to resist an argument. Of course, any argument could be wrong; but this doesn’t give you carte blanche to just not believe those arguments you disagree with! If there is something wrong with an argument, you have to show where, and how, it goes wrong, in order to be able to resist it.

    Once again, you are glossing over inconvenient details: explaining why ‘your argument could be wrong’ is exactly what you need to do to refute an unverifiable, speculative and normative (“must”) argument. I’m making an empiricist claim here: you don’t need to find a flaw in pure speculation to acknowledge that the results it reaches might be wrong. That’s why science eventually fosters consensus and philosophy doesn’t (and is also why philosophy is so much fun!).
    If we look at this with philosopher’s eyes, it means that one should always be very wary of whatever normative positive conclusions one is tempted to accept as unquestionably true. That’s where humbleness comes in.

    If we start with true premises, and use correct reasoning, we get a sound argument

    Not really. Ignorance is the uncontrollable variable. Whether we start with true premises is a known unknown: it is always possible that our premises didn’t include a significant detail because we are unaware of it. It _always_ feels like we know the full picture; thus the fact that we can’t imagine what we’re missing bears no weight at all.
    For all these reasons, whenever speculation reaches a “must” conclusion, I feel entitled to completely ignore premises, reasoning and conclusion, and change the “must” to “might” or “looks like”. The really difficult part is applying this general rule to my own conclusions, of course.

    [Incidentally, this is also why it feels risky to me to rely on current theories of fundamental physics, and the reason why I will never subscribe to your “there are no ‘unknown variables’”, whether applied to things I don’t understand (the Spectral Gap) or not.]

    For the positive case you’re building, this has little consequence: your account is still a viable possibility. So I guess that’s enough window dressing.

    I very much like how you attack the fine-tuning argument and how you apply it here; I also note that “if there is some HC account of consciousness” can include “if there is some explanation of how natural selection would generate the underlying mechanism”, so I think we really have reached a local agreement peak (as far as HC goes). [I don’t see how we can get to agree even more at this time, but of course, this might be due to my ignorance!]

    On to my case, where the local agreement peak might be quite far! I am hoping this isn’t an exclusively selfish exercise on my part, because much of what follows is also an indirect (and very incomplete) comment on your MM paper: I suspect you don’t draw the conclusions I think should follow from your own account of intentionality.

    you ought to explain how such a system can have any form of ‘illusion’ at all — that is, a mistaken belief, or an experience of having such a belief, as opposed to the mere throwing of a flag.

    Fair enough! I took far too much for granted, guilty as charged. Unfortunately, to make such an explanation possible I’ll need to rewind the tape an awful lot!

    The zombie argument (philosophical zombie, PZ) is quite useful in this case, but in a surprising way: it’s useful because it misses the mark in the most spectacular of ways.
    The general idea is that of functional equivalence (originally including also the physical details, but one could also explore the option of different internal mechanisms which produce the same behaviours): we are asked to entertain the possibility of having a twin, someone who acts exactly like us, but lacks phenomenal experience (for this discussion, I’ll call this “Philosophical Phenomenal Experience” – PPE). By definition, therefore, in all possible circumstances PZ will act exactly as a human would, including talking and reasoning about its own Phenomenal Experience (whether there is a distinction between vanilla PE and PPE will become clearer below).
    This definition is (purposefully) equivalent to saying “a PZ is exactly the same as a human being as far as science can ever tell”. From this, the simple fact that we can supposedly conceive of PZs grants us the conclusion that PE must be something beyond the physical (i.e. outside the classic domain of science). [It goes without saying that I have to reject the “must” as per the argument above.] However, the more interesting approach is also something that Peter has independently explored: what exists is whatever has at least the potential of having some effect on reality. If something has no causal powers, it doesn’t exist: that’s more or less the best definition of existence that we can conceive. From there, it follows that the PE which is lacking in our conceivable PZ doesn’t exist.

    … lets the notion sink in … ??? … !?what did I just write?!

    Superficially, this account looks strange but still fairly popular: it would be tempting to conclude that PE is an illusion, join the eliminativist camp and have a rest. But that would be just another mistake: if the illusion exists, something about it has causal powers, so we can eliminate exactly nothing; we still need to explain whatever it is that does exist and what causal powers “make it real”. That’s a long-winded way of explaining why I started with the scare quotes around the term “illusion”.
    So: PE has to be real, which means it has to have some effect on the real world. We talk about it, so something must be causing this talk; it may not be what we think it is (“illusion” with scare quotes) but it is something.
    [If you can rebut the above by replicating my “on ignorance” approach, I’d love it: I’ve tried to do it myself but couldn’t.]

    Let’s try to use some awful formalisation to put things in order:
    Definition: Human Being = HB.
    Under the Zombie thought experiment: HB – PPE = PZ. (the only difference between HB and PZ is that PZ lacks PPE)
    Ex hypothesi, PZ and HB share exactly the same causal relations with the world, both within themselves and with the rest of the world.
    However, by the same hypothesis, PPE isn’t real, so, as far as reality (the domain of science) is concerned, PPE is an empty set; it comprises nothing.
    HB – PPE = PZ
    PPE = 0 (sorry: I’m not sure the HTML code for the empty set symbol would work via wordpress)
    HB = PZ.

    In other words, the Zombie thought experiment tells us that if you take a human being, and don’t change it at all, you still have a human being. What a great insight. It is oddly useful nevertheless: it sort of confirms the idea that if it looks like a duck and quacks like a duck we should indeed think of it as a duck.
    At this point it’s worth remarking that yes, I’m saying that in some sense we are all zombies. But no, I’m not saying that PE doesn’t exist: that’s why I’ve hastily introduced the PPE acronym. What I am saying is that PE != 0 and therefore PE != PPE. If you wish, I’m rejecting the idea of consciousness as a strong epiphenomenon (something that causes nothing at all). Saying that something exists (at least something which creates an illusion) is equivalent to saying it has causal powers. Therefore phenomenal experience (or whatever causes us to believe in it) exists, and it is not an epiphenomenon. Thus, in another sense I’m saying that we are not zombies: we really have PE. More importantly, what follows is that PE has causal powers, and something that has causal powers is by definition amenable to scientific enquiry. In turn, this means that mechanistic explanations are at least theoretically possible.

    [Worth noting that your HC hypothesis, as I currently understand it, explains why we don’t understand PE, and maybe never will, but still implies that PE can theoretically be explained mechanistically.]

    So, in summary, we are left with a depurated PE (DPE), purified of PPE, which is something that we have strong reasons to expect to be explainable mechanistically. We should also remember that mechanistic explanations are by definition approximations, and therefore some mysterianism is implied the moment we declare a subject to be amenable to scientific enquiry (pace physics hubris). We know we don’t understand DPE, but we know it’s not what we are talking about when we invoke zombies – more specifically, we know that zombies need to have DPE as well, otherwise they wouldn’t be functionally equivalent to humans. DPE has to have causal powers, and it must be possible to explain it with varying degrees of precision in terms of mechanisms. So yes, I do think that in theory, given perfect inference abilities (which we don’t have and have no idea of what it means to have), we could explain what red feels like to a blind perfect reasoner (unless you are right about HC, or some other similar, currently unknown situation applies). [If you wish to challenge this point, please start by explaining why we have no idea of what it feels like to be a bat, but we do have a reliable idea of what it feels like to be a perfect reasoner – this is a trick question!]
    The upside is that so far HC is still in the cards; it fits the current frame with ease.

    We are left with DPE, and with the expectation that mechanistic explanations can be produced and refined. So far I’m merely saying that we aren’t willing to try explaining what doesn’t exist. The corollary is that either it will be impossible to produce any explanation at all (which to me would mean that DPE is thoroughly random, something which seems already empirically falsified) or we can explain at least some aspects of DPE.

    Fine, at this point we are left with the task of identifying explainable aspects of DPE. What should we pick? To start with, I’m picking the hardest I can find, namely the mysterious qualities of qualia (immediacy, privacy, ineffability). At this point you may complain that I still have to explain how any system can have “any form of ‘illusion’ at all — that is, a mistaken belief, or an experience of having such a belief, as opposed to the mere throwing of a flag”. But I think you’d be missing the point of distinguishing between PE, PPE and DPE. The whole exercise is intended to explain why, for a system, having a belief (mistaken or not), or an experience of having such a belief, has to be either somewhat explainable as the throwing of a flag or not explainable at all (not directly at least, which allows me to keep HC in the range of viable options). The whole exercise starts from the assumption that there is no ghost in the machine. If you then go back and protest that we haven’t explained the ghost, you are cheating. [I know I sound like Dennett, but I internally take it as a compliment ;-).]
    In other words, what needs to be explained is: within a flag-raising system, under what circumstances does raising a flag produce (or constitute) DPE? We may do this with the expectation that most flag-raising systems won’t have DPE (if we aren’t panpsychists).
    This is precisely what both ETC and your hypothesis about HC try to do. It’s also worth noting that your MM paper does exactly (in my reading) the same operation when you define intentionality in terms of the consequences it has. In broad strokes, we go back to HB = PZ: we have to explain how and when flag raising relates to DPE (or intentionality). We are not interested in explaining things that don’t exist, but we are ok with the idea of explaining why on introspection PE (or intentionality) seems to have paradoxical properties.

    Now, more on ETC in particular, remembering that we need to account for behaviours (things we can verify in the physical world, at least in theory) and that in this case even thoughts are to be understood as behaviours (the act of thinking something, which we expect to boil down to raising some particular flags, but some very special ones). Once more, ETC posits that a system which:
    – constantly receives input from both internal parts and the outside world,
    – constantly uses some of this input to partially model itself and the outside world,
    – constantly digests this input, keeping the digested version in working memory for a while,
    – heuristically saves a (heavily digested) copy of some contents of the working memory for further reference,
    – AND can recursively re-digest both what’s in working memory and what can be retrieved from longer-term memories.

    Then, such a system will be able to attribute to itself the act of having perceived something. This is the crucial step: if the system models itself, the world and the relations between the two, then when re-digesting any memory of previous input (whether from short- or long-term memory) it will identify the content of the memory with whatever input caused it; it will, in other words, flag it as “something that happened to me”.
    This is equivalent to inferring that your sensory structures have collected some data in the past, and that this data was then synthesised and stored within yourself (of which you actively model some aspects). In my previous description, that’s where I dropped the scare quotes, because I can’t see how that last part differs from our best functional description of “being aware of one’s own perceptions”.
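
    (To make the shape of the idea easier to picture, and nothing more: every name below is invented, and this is obviously not ETC itself, just a crude cartoon of the five requirements above.)

        import random

        class ToySelfModeller:
            """A cartoon of the loop: sense, digest, store heuristically, re-digest, attribute."""
            def __init__(self):
                self.working_memory = []       # recently digested input
                self.long_term_memory = []     # heuristically kept digests
                self.self_model = {"has sensors": True, "can perceive": True}

            def digest(self, signal):
                # Digestion throws detail away; the self-model contains nothing about *how*.
                return {"summary": str(signal)[:12], "origin": "outside me"}

            def sense(self, signal):
                item = self.digest(signal)
                self.working_memory.append(item)
                if random.random() < 0.3:      # lossy, heuristic storage
                    self.long_term_memory.append(item)

            def redigest(self):
                # Re-evaluating a stored item: because of the self-model, the item is
                # attributed to the (modelled) self as something *it* perceived.
                for item in self.working_memory + self.long_term_memory:
                    yield {"content": item["summary"],
                           "attribution": "what I recall of my original perception"}

    The only point of the cartoon is that the attribution step uses the self-model while the encoding details stay invisible to it; anything doing real work in ETC would of course have to be far richer.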

    Such a system will be functionally indistinguishable from its non-zombie twin, but because PPE = 0, such a system IS its non-zombie twin. However, the same system will be unable to understand how this happens, won’t be able to directly change how it happens, won’t be aware of the symbolic nature of its perceptions (to the system the perception is the thing itself) and, assuming it can talk, will have no adequate way of describing the qualitative side of its own perceptions. This lack of abilities is an unavoidable consequence of the mechanistic nature of the system and directly explains the ineffability, privacy and immediacy of qualia/PE. Thus, I’m left with nothing else to explain: what remains is the supposedly easy side, which is unimaginably difficult, of course.
    What ETC does attempt to explain is under what circumstances raising a flag does feel like something (when the flag is part of a vast array of flags within a system which can store memories (more flags), re-probe them, and attribute the data to itself as collected from the world, which requires self-modelling), which I take to be what you (correctly) want. In other words, ETC tries to tell you what sort of mechanical system will attribute PE to itself. The exploration of the Zombie argument then explains why this is all we can get, but also all we need: functional equivalence is full equivalence.

    Now I can go back to the theme I declared at the top: I’m claiming that you are tactically overlooking something, but so far I’ve not explained what.
    You are offering the following critique:

    To put it bluntly, a robot “believing” something is ultimately just throwing a flag, setting a bit in a certain memory section, and so on. But to a human mind, ‘believing’ entails an awareness of the belief’s content; whether computational structures such as a set bit suffice to give rise to the human-like experience of belief is exactly the question that one would like to have answered. But you presuppose the answer to that question to be affirmative

    Which seems perfectly reasonable (bluntness appreciated, as always), but in doing so you are actually assuming that PPE != 0. You are effectively saying that my PZ twin may be throwing flags, may have no PE (save perhaps DPE, under my definition?); however, ex hypothesi, the PE that I have but my twin doesn’t (PPE) has exactly zero consequences in the world. This is the assumption that what we are trying to understand is something which doesn’t have any causal power and therefore doesn’t exist. Under this assumption, PPE can’t be explained in any foreseeable empirical way because it has no effect on any physical/measurable stuff at all, not even in theory.
    Once again, the only way we can even propose to tackle PE is to assume that it must be part of causal relations with the rest of the universe, and if we assume this, it follows that it makes epistemological sense to produce descriptions that include things like throwing a flag, as this is a common and useful metaphorical way of describing certain mechanisms.

    The fact that you don’t see that you occasionally swap assumptions, passing between PPE = 0 and PPE != 0, is truly surprising to me. In your HC account of consciousness, you are happy with assuming that all we need are mechanisms and environmental randomness; from this one has to derive that a PZ (having the same mechanisms and access to the same environmental randomness) will have DPE while PPE remains an empty set. But when you are asked to criticise an alternative account you move to the other side and remind me that I’m not allowed to start with the assumption that PPE = 0. Moreover, in the MM paper you start with the (agreeable to me) assumption that “the meaning of some symbol, or string of symbols, can be construed as that which the presence of the symbol causes an agent to do”. Once again, this stance, if followed through, implies that all we care about is the differences in behaviour, leading to the conclusion that PZ = HB, or, if you prefer, that flags thrown by and within PZ have a meaning to PZ itself. If they do, your whole objection evaporates.
    Thus, in our discussion, since everything you propose does assume that PPE = 0, I (subconsciously) assumed I didn’t need to make this particular point. I’m happy you forced me to make it, though.

    Now, the rest of your criticism (conveniently) looks to my eyes coloured by a refusal to engage with the proposal; since you take the stance of challenging the premises, this is hardly surprising. For example, when you ask about “attribution” I mean something like “include the ability to collect sensory information in the model of self”, which in plain English becomes “attribute to itself the ability of perceiving”. Once this is included in the model, and the model is available to subsequent evaluations, most of the work is done. Because a self-model has to be made of fewer parts than the thing it models, and therefore at least some inner workings of the actual system have to remain unmodelled, the system will have finite introspection abilities, and will eventually find qualities that it can’t explain or describe explicitly (i.e. the qualitative aspects of qualia).

    Now, the one criticism I accept in full is “there’s no way to derive *what* experience is associated with that mechanism”. Currently ETC doesn’t specify any mechanism in detail, so indeed, it says exactly nothing about what an experience would be. Furthermore, sticking with the computational metaphor is indeed tricky; that’s why I’ve written the two guest posts here (should we rehash that discussion?).

    Overall: I am bluntly but very amicably complaining that your criticism misses the mark. From one point of view you are asking me to start my explanatory quest from an incoherent premise (PPE != 0), conveniently forgetting that such a premise makes the explanation impossible. On the other, you are not accepting the incoherent premise yourself… That’s OK, because it allowed me to put the PPE argument in writing, and I do need to thank you for that.

    So, the question remains, now that I went through a long exploration of my premises: where do you think I’m getting it wrong?

  152. Sergio:

    Sometimes I can’t avoid getting the impression you’re playing with me, and avoid acknowledging a key point just to see how I’ll react.

    I’m very sorry to hear that. I can assure you I’m not intentionally doing any such thing; if I miss a point you feel is important, then it’s because either I don’t agree with it, or I just didn’t get it. In particular, I suppose one instance you’re hinting at is your conclusions C1 and C2: there, my point is simply that (part of) your C2 doesn’t follow from the Mary argument, but from Mary plus the assumption that everything ought to be derivable from a base set of physical facts for an ideal reasoner.

    Thus, to me, the conclusion of the argument is simply that not all facts are derivable from the physical facts (which I hold to be entirely correct); now, most proponents of the argument (including Jackson in his original presentation) hold that it follows immediately that hence, not all facts are physical. However, this depends on the generally unstated further premise, and is thus not a conclusion of the Mary argument itself.

    The reason that I’m hesitant to go along with your enumeration of the conclusions is that the conclusion that we are entitled to draw from the Mary story sort of ‘sits between’ your C1 and C2. C1 is basically that Mary learns something new (which I agree with), but I agree only with part of your C2, and disagree with another. Your formulation was:

    because Mary learns something, it follows that you can’t infer everything from physical details and therefore there must be something beyond the physical (C2).

    The first part, I agree with; but after ‘therefore’ comes the part that I don’t. So the conclusion that Mary’s story leads to is something like ‘C1 + 0.5*C2’—Mary learns something new, and because of that, there are facts that are not derivable from the physical. But again, this licenses us only to draw the further conclusion that there must be something beyond the physical if we assume that there is some set of physical facts from which all further physical facts can be derived by an ideal reasoner.

    Perhaps this will become more clear if we try and formalize the Mary story somewhat. As a first approximation, consider the following:

    (MP1) Mary is an ideal reasoner, that is, she can draw any inference that is entailed by a set of facts known to her.
    (MP2) Mary knows all the physical facts pertaining to color vision.
    (MC1) From (MP1) and (MP2), Mary can draw any conclusion about color vision entailed by the physical facts about it.
    (MP3) Mary learns something new upon seeing red for the first time.
    (MC2) From (MC1) and (MP3), there are facts about color vision not entailed by the physical facts.

    All of this, I think, is completely right (provided one uses ‘entail’ in the logical sense, that is, as ‘being derivable by an effective procedure’, or, roughly, what can be concluded using a Turing machine equivalent). Jackson and others now introduce the further, implicit, premise that

    (MP4) all physical facts are derivable from some finite set of base facts,

    to come to the conclusion that

    (MC3) from (MC2) and (MP4), there are facts going beyond the physical facts.

    This, too, is completely valid reasoning, and if all the premises are right, one thus has made a sound argument disproving physicalism. Thus, Jackson isn’t reasoning out of ignorance; granting all his premises (as many people do), one is forced to accept his conclusion. However, I attack the soundness of the additional argument, by demonstrating that the premise (MP4) cannot be taken for granted (as demonstrated by the fact that it is actually false in certain cases, like the spectral gap problem).

    This is one of two logically permissible ways to attack an argument; the other would be to find a fallacy in the reasoning used. However, arguing that because some conclusion is false, a conclusion that is not entailed by this one likewise is (or may be) false, is actually fallacious.

    So again, Jackson (although he doesn’t acknowledge it) is making two arguments: one, the Mary argument, that some facts are not derivable from the physical facts; and another, the argument that there are facts beyond the physical facts. I agree with the former, but consider the latter to be false (unsound, though valid), because one of its premises doesn’t obtain.

    By induction, your own argument confirms that also C1 might be wrong.
    Mine isn’t a strong conclusion, be as it may for C2, C1 could still be right, but any amount of reasonable doubt is enough to render Mary’s argument weak: the original conclusion was that physicalism must be false, and reasonable doubt should be enough to remove the *must*.

    This line of reasoning, I’m afraid, simply is fallacious. My argument, if it is correct, entails that (MC3) is unwarranted, because of the falsity of (MP4); this has no bearing—none at all!—on the question of whether (MC2) is true, or not. There is, in particular, no ‘reasonable doubt’ implied by my argument, since one can hold to the premises leading to (MC2), while disagreeing with those leading to (MC3).

    Consider the following example:

    (EP1) All men are mortal.
    (EP2) Socrates is a man.
    (EC1) Therefore, Socrates is mortal.

    This is an argument that is both valid (the reasoning used is correct), and sound (its premises are true). Introducing the further premise

    (EP3) all mortals are larger than 1.8m

    leads to the conclusion

    (EC2) Socrates is larger than 1.8m.

    However, this conclusion is not warranted, since the premise (EP3) is false. But this does not create doubt for (EC1)!

    Ignorance enters in the form of postulating a premise that is faulty; combating the argument then means exposing the flaw in the premise. But one isn’t licensed to conclude that because one has found a flaw with one premise, other premises may be likewise faulty, and hence, well, let’s just not believe what strikes us as inconvenient. In order to resist another argument, one must once again find an explicit flaw with a premise (or, again, with the reasoning used), otherwise, one is at best appealing to emotion.

    In particular, in your case, you have no reason to question (MP4), if your universe is computable/doesn’t contain any undecidable statements. Hence, if you also agree with the other premises of the argument, you’re compelled to accept its conclusions (in toto); in order to resist it, you must find flaw with (MP1-3) (the logic is valid).

    As far as I can tell, your avenue towards resisting the argument seems to be to doubt premise (MP3), that Mary (conceived of as an ideal reasoner, etc.) would learn something new; this is, of course, a perfectly viable route, provided you can make a case for that premise not holding. However, at the moment at least, I don’t think you’re quite there.

    I’m making an empiricist claim here: you don’t need to find a flaw in pure speculation to acknowledge that the results it reaches might be wrong.

    Any argument might be wrong, but acknowledging this does not give you license to resist a particular argument. In order to do so, you have to show it to be wrong. And as I pointed out before, even in science, we rely on the sort of reasoning going into the Mary story in order to derive empirically accessible conclusions. Consider Bell’s theorem. It starts with the premises:

    (BP1) Reality is local (as in, influences don’t travel faster than light).
    (BP2) Observable quantities have definite values at all times.
    (BP3) We are free to make any experiment we choose to.

    From this, it derives a particular conclusion, namely

    (BC1) Correlations between observations are bounded by a certain value.

    These bounds are the so-called Bell inequalities. Then, we introduce the formalism of quantum mechanics by means of a further premise:

    (BP4) In quantum mechanics, those correlations may exceed those bounds,

    leading to the conclusion

    (BC2) either quantum mechanics is not empirically adequate, or one of (BP1-3) must be false.

    Then, we introduce empiricism, by means of which the following further premise

    (BP5) quantum mechanics is empirically adequate

    is justified, to finally draw the conclusion:

    (BC3) local realism (BP1-3) fails to hold.
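
    (Purely as an illustration of how (BP4) goes beyond the bound in (BC1), here is a quick back-of-the-envelope check one can run, using nothing but the textbook singlet-state correlation E(a,b) = -cos(a-b); none of this is part of the argument itself.)

        from math import cos, pi, sqrt

        def E(a, b):
            # Quantum-mechanical correlation for spin measurements on a singlet pair
            return -cos(a - b)

        # Standard CHSH measurement angles
        a, a2, b, b2 = 0.0, pi / 2, pi / 4, 3 * pi / 4
        S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

        print(abs(S), 2 * sqrt(2))  # ~2.828 in both cases: above the local-realist bound of 2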

    Importantly, we need to rely on the argument (‘thought experiment’) proposed beforehand in order to derive this conclusion from the empirical observation; thus, using your style of ‘maybe something we don’t know does we don’t know what’-attack against such reasoning would prohibit us from ever drawing conclusions based on empirical findings. Empiricism yields justifications of certain premises, then to be used to draw conclusions; in order to utilize it in fact-finding, we must rely on the process of reasoning being sound, otherwise, we just give away the game.

    For all these reasons, whenever speculation reaches a “must” conclusion, I feel entitled to completely ignore premises, reasoning and conclusion, and change the “must” to “might” or “looks like”.

    Logically, you’re not entitled to this, I’m sorry. (Also note that the ‘must’ here isn’t normative—perhaps one might say it’s modal: we’re not asserting that this is how the world ought to be, or how it should be valued, or something like that, we’re concluding that if a set of facts hold, then a further fact obtains necessarily.)

    Any argument boils down to the statement that if its premises are true, so is its conclusion. There are no grounds on which to justify a ‘maybe’ here. Of course, the premises might not be true, but showing this to be the case is the burden of those attacking the argument. Otherwise, we might just stop any discussion, since then one is always allowed to say ‘yeah, well, I’m not buying it anyway’ regarding any conclusion one doesn’t like. But the purpose of argumentation is exactly to show that one can’t consistently believe *that* set of premises while not believing the conclusion.

    Sorry for spending so much time on this, but I feel it is a pretty fundamental issue. The idea that showing one conclusion wrong somehow fosters distrust in another conclusion is just sloppy thinking, and I think it is something you need to do away with in order to make your ideas as crisp and convincing as they possibly can be (otherwise, if nothing else, you’re inviting the retort ‘that’s all well and good, but I still think you’re wrong’). Work out what, exactly, it is you disagree with about the Mary argument; and if you find that there isn’t anything, then you should reevaluate your beliefs regarding the story.

    I will never subscribe to your “there are no ‘unknown variables’”, whether applied to things I don’t understand (the Spectral Gap) or not.

    Again, this is not something you’re justified in saying. Regarding the spectral gap issue, the theories are constructive, that is, everything that’s in there is what we put in, and there’s nothing else besides. It’s like defining an axiom system A and deriving conclusions from it (in fact, that analogy is ultimately what makes the proof possible): you can’t say ‘maybe there’s some axiom that’s wrong, or one you forgot’—that would mean dealing with a different axiom system. You can doubt whether that axiom system describes what one set out to describe, but that’s a crucial difference (and puts the onus on the doubter to show inadequacy).

    On to other matters.

    what exists is whatever has at least the potential of having some effect on reality. If something has no causal powers, it doesn’t exist: that’s more or less the best definition of existence that we can conceive.

    I’d disagree: ‘to exist’ is not a predicate, but ‘having causal power’ is. If one were to take existence as a predicate, then one could not intelligibly talk about the existence of nothing, as ‘nothing exists’ would then mean that ‘there is some x that exists, and that x is nothing’. But that’s garbage. Additionally, nothing could well exist, but have no causal powers (nothing comes from nothing).

    Similar problems abound with the notion of everything: everything certainly can’t cause something, as there’s nothing to cause beyond itself; but that doesn’t mean that everything doesn’t exist. Furthermore, one can readily conceive of more mundane examples of things that exist without causing anything—a universe with a lone proton would be unchanging, hence, nothing would ever be caused there; but does that suffice to say that that proton doesn’t exist? Of course, you might hold that that proton nevertheless has ‘causal powers’, since it could cause things if brought into interaction with other particles, say, but then, ‘causal powers’ just is kind of a nebulous notion that I’m not sure buys you anything with respect to explaining existence.

    Also, modern cosmology includes theories where there are parts of the universe out of causal contact with ours; to say they ‘don’t exist’ doesn’t strike me as at all useful. So I think there’s good grounds to believe that it’s too simple to say ‘exist = has causal powers’. It’s true that we can only know of an entity via causal influences, but of course, there’s nothing that says that we can know of everything that exists.

    But anyway, for the time being, let’s move on.

    [If you can rebut the above by replicating my “on ignorance” approach, I’d love it: I’ve tried to do it myself but couldn’t.]

    Well, you’re assuming you know what existence means; but since knowledge is fallible, you ought to doubt that. (You could be ignorant of what existence really means.) (Of course, in order to actually rebut your claim, one would have to make an argument doubting the equivalence of existence and the having of causal powers, as I’ve done above.)

    Definition: Human Being = HB.
    Under the Zombie thought experiment: HB – PPE = PZ. (the only difference between HB and PZ is that PZ lacks PPE)
    Ex hypothesi, PZ and HB share exactly the same causal relations with the world, both within themselves and with the rest of the world.
    However, by the same hypothesis, PE isn’t real, so, as far as reality (the domain of science) is concerned, PE is an empty set, comprising nothing.
    HB – PPE = PZ
    PPE = 0 (sorry: I’m not sure the HTML code for the empty set symbol would work via wordpress)
    HB = PZ.

    This is much (much!) too quick. There are many different ways in which PPE does not have causal powers additional to those of the physical, and in which philosophical zombies are still possible. Even if one accepts your identification of existence with having causal powers, this does not mean that, because the physical is sufficient for causing behavior, there can be nothing beyond it.

    First of all, it might be that there is more than behavior to a human being (indeed, since the behaviorist burnout, this is a common point of view). So, while the physical may be sufficient to cause behavior, causing certain mental states may need an extraphysical aspect to reality—i.e. while it’s enough to cause your biting into an apple to have the appropriate chain of mechanisms from seeing that apple to grasping it to sinking your teeth into it, the enjoyment you feel in doing so may not be sufficiently accounted for by this, and hence, necessitate nonphysical causes—the subjective experience of the apple’s succulence, say. Thus, the philosophical zombie would lack both those causes and effects while being physically identical, but since the subjective experience is causally efficacious, it must, according to your definition, exist in human beings.

    This also came up in my brief discussion with Tom Clark in the Conversations with a Zombie comments thread: there are two chains of causality, one conceived of subjectively, first-personally, experientially etc., and the other conceived of third-personally, intersubjectively, mechanically and so on. So, if you stub your toe, subjectively, the pain makes you cry out and curse, while third-personally, c-fibers firing leads to some particular neuronal thunderstorm that ultimately causes you to pass air through your flapping vocal cords in a particular way. It might not be the case that only one of those is the exclusively ‘true’ account (similar to how it’s not the case that a photon is either a wave or a particle)—both may be true from the appropriate point of view; only, in the zombie case, one of these points of view has been eliminated.

    Additionally, while the physical may be sufficient to cause behavior, some philosophers have advocated a thesis of causal overdetermination—the phenomenal experience is a contributing cause to behavior, in the same sense that you might pick up a piece of paper because you want to avoid littering, and because you want to know what’s written on it. Perhaps either of these causes would be sufficient to explain you picking up the paper, but it’s implausible to suggest that hence, only one of those causes exists, since you might actually both want to avoid littering and read the paper. Again, in this case, the phenomenal has real causal powers, and thus, should be said to exist on your reading.

    And of course, there are many accounts on which the phenomenal just ‘accompanies’ the physical—that is, there is phenomenal experience associated with at least some physical processes/events/entities, but it might be otherwise. Consider dual-aspect theories: there, at least some entities have both a mental and a physical ‘pole’, and hence, certain physical processes are accompanied by experience; but if one were to, so to speak, cut off the mental pole, one would be left with a purely physical zombie-world, which is however on the physical (and behavioral) level indistinguishable from ours (but still very different as to its experiential dimension). The same goes for panpsychism, property dualism, or even outright dualist views under a sort of ‘pre-established harmony’ assumption.

    So ultimately, basically any view of the relationship between mind and matter besides physicalism undercuts your argument; and to the extent that you want to establish your brand of computationalist evolutionary physicalism as a viable option, and hence to show that it can account for the sort of first-person evidence we have, starting out by essentially assuming that some such account must be true leads to circularity. In other words, one can only buy your ‘PPE = 0’ argument if one is already convinced that some physicalist account must be true; but then, the argument really doesn’t do any further work.

    What I am saying is that PE != 0 and therefore PE != PPE.

    So, exactly what is PE? How does it differ from ‘PPE’ (other than not being null)?

    [If you wish to challenge this point, please start by explaining why we have no idea of what it feels like to be a bat, but we do have a reliable idea of what it feels like to be a perfect reasoner – this is a trick question!]

    Being a perfect reasoner is easily formalizable in third-person terms (it’s a functional property, namely, that of being able to derive every conclusion that follows from a given set of premises). What it feels like to be a bat, on the other hand, is first-personal. (What was the trick?)
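
    Just to illustrate what I mean by ‘formalizable in third-person terms’, here’s a throwaway sketch (restricted to propositional Horn-style rules, and with all names my own invention): being a ‘perfect reasoner’ over such premises amounts to nothing more than computing the deductive closure, a purely functional notion with no first-personal remainder.

        # Purely illustrative: a 'perfect reasoner' for Horn-style rules is just
        # a procedure that derives everything that follows from the premises.
        def deductive_closure(facts, rules):
            # rules: list of (set_of_premise_facts, conclusion_fact)
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        rules = [({"socrates_is_a_man", "all_men_are_mortal"}, "socrates_is_mortal")]
        print(deductive_closure({"socrates_is_a_man", "all_men_are_mortal"}, rules))
        # the closure now also contains 'socrates_is_mortal'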

    Fine, at this point we are left with the task of identifying explainable aspects of DPE. What should we pick? To start with, I’m picking the hardest I can find, namely the mysterious qualities of qualia (immediateness, privacy, ineffability).

    I don’t think those are really the hardest to explain features; harder, it would seem to me, are their subjective, first-personal nature, their qualitative, experiential character, or the ‘what-it’s-likeness’ of having any experience at all. It follows from their subjective nature that they’re private and ineffable, and their experiential character entails their immediacy; but this doesn’t hold the other way around (not that I could see, at least).

    The whole exercise is intended to explain why, for a system, having a belief (mistaken or not), or an experience of having such a belief, has to be either somewhat explainable as the throwing of a flag or not explainable at all (not directly at least, which allows me to keep HC in the range of viable options).

    And it’d be fine if you provided such an explanation, but, to the best of my ability to tell, you merely stipulate that this is the case—a stipulation that I don’t see any reason to go along with, and indeed, good reason to doubt. Again, a paper on which ‘I am a paper’ is written presumably doesn’t have the belief of being a paper, but all flag throwing is ultimately nothing different.

    The problem here is that the flag needs to be interpreted as meaning something; so if the paper had the faculty of interpreting what is written upon it, then one might talk of a belief, but the flags thrown by a computer always ultimately rely on the interpretation of the user in order to mean anything. But in explaining the mind, we can’t rely on a user with some already-existing intentionality to borrow meaning from. This is the reason why I insist, in my M&M paper, that it’s not a computational model, but that the meaning is grounded by performing real actions on real patterns (of the CA): computation is always mind-dependent, just as the meaning of a series of squiggles on a piece of paper depends on the interpreting mind.

    This is the crucial step: if the system models itself, the world and the relations between the two, then, when re-digesting any memory of previous input (whether from short- or long-term memory), it will identify the content of the memory with whatever input caused it; in other words, it will flag it as “something that happened to me”.

    Reasoning in terms of a system ‘modeling itself’, attributing properties to its model and so on is very dangerous. One, you run the danger of conflating having some data-structure, some string of ones and zeros, some squiggles on a paper with having beliefs about that model. Two, you run the danger of imagining that the system has somehow a ‘picture’ of itself (and its situatedness in the environment) that it can regard, manipulate, appreciate etc. The paper upon which ‘I am a piece of paper’ is written contains, if interpreted, a ‘model’ of itself—it asserts a true proposition about its nature. But it doesn’t assert this proposition to itself. And neither do an automaton’s flags, or data-structures, mean anything to that automaton—it might be that if some particular flag is set, it can perform some action correctly, but it necessitates further interpretation to assert that this flag means anything about the automaton to that automaton. Getting around this (which I believe is possible) is the point of my M&M paper.

    It’s very seductive to believe that a system could have a ‘model’ of itself in the sense that we have an image of ourselves, merely by virtue of containing in some form data structures that can be interpreted to be about itself—seductive, since it’s the most natural thing in the world for us to imagine. But it’s also fallacious to assert that it does, as it begs the question of whether such data structures are sufficient for these kinds of beliefs about oneself.
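
    To make the worry concrete, here’s a deliberately silly sketch (ToyAgent, self_model, saw_apple and the rest are entirely my own placeholder names, not anything from your account): the ‘self-model’ below is just a data structure whose state steers behavior, and nothing in the code interprets that state as being about the agent.

        # A toy 'self-modeling' agent: it stores a flag about itself and acts on it,
        # but nothing here interprets the flag as meaning anything *to the agent*.
        class ToyAgent:
            def __init__(self):
                self.self_model = {"can_perceive": True, "saw_apple": False}

            def sense(self, input_symbol):
                # 'throwing a flag': a state change caused by input
                if input_symbol == "apple":
                    self.self_model["saw_apple"] = True

            def act(self):
                # behavior depends only on the syntactic state of the flag
                return "grasp" if self.self_model["saw_apple"] else "idle"

        agent = ToyAgent()
        agent.sense("apple")
        print(agent.act())  # -> grasp

    That the dictionary key happens to read ‘saw_apple’ rather than ‘x17’ makes no difference to anything the agent does, which is exactly the point.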

    You are effectively saying that my PZ twin may be throwing flags and may have no PE (save perhaps DPE, under my definition?); however, ex hypothesi, the PE that I have but my twin doesn’t (PPE) has exactly zero consequences in the world. This is the assumption that what we are trying to understand is something which doesn’t have any causal power and therefore doesn’t exist.

    I hope it’s clear now that I don’t agree with this argument along several lines: one, PPE doesn’t have zero consequences in the real world. It might not cause behavior, but it may cause, e.g., subjective feelings about behavior—might cause the enjoyment of biting into an apple, the anger of stubbing your toe, and so on—such that the zombie might express the same behavior (saying ‘Mmh, what a tasty apple!’ or cursing the unfortunately placed table) without, however, feeling any enjoyment or anger or whatever. Alternatively, it might be causally responsible for behavior, but simply in a non-necessary way. Moreover, a dual-aspect, panpsychist, dualist… view might actually be the right one.

    Additionally, I don’t think that ‘exist = has causal powers’ is a good definition of existence, for reasons provided above. (And I don’t think you ought to think so, on your own ‘it’s possible we’re wrong’-argumentation—nevermind my thinking that that’s also not right.)

    Furthermore, I think your account of ‘attributing qualities to one’s own internal model’ begs the question—you need to assume that the internal model is something the system conceives of, interprets, appreciates etc. in some way in order to get off the ground. For if it were merely the case that a voltage in memory area 0xF2 causes a certain activation of the actuators, as opposed to that voltage being interpreted as meaning, say, ‘I see an apple’, then there would be no reason to assume that the system ‘attributes’ anything to itself.

    But when you are asked to criticise an alternative account you move to the other side and remind me that I’m not allowed to start with the assumption that PPE = 0.

    Well, I’m criticizing from a broader perspective, of course; we may share the assumption that there is no ghost in the machine, but that’s not universal, and your account must be defensible also against those that don’t share this assumption.

    Besides, as far as I understand you, I don’t believe that PPE doesn’t exist; rather, I believe that it exists, and is the result of hypercomputational processes. That’s why zombies can be consistently imagined: we can’t derive phenomenal experience from the physical description, and hence, can consistently imagine that there might be a world in which it is lacking—in the same way, we can’t derive whether a spectral gap exists in certain physical theories, and thus, can consistently imagine a world in which it doesn’t. That is, we’re not committing any error in imagining this; it’s just that our powers of reasoning don’t exhaust what’s actually possible.

    Now, in the real world, of course, whenever there is the right sort of hypercomputational process, we will get phenomenal experience (of just the sort we seem to have); likewise, whenever that particular theory is instantiated, we’ll get a gap in energy between its ground state and the first excited state.

    Moreover, in the M&M paper you start with the (agreeable to me) assumption that “the meaning of some symbol, or string of symbols, can be construed as that which the presence of the symbol causes an agent to do”. Once again, this stance, if followed through, implies that all we care about is the differences in behaviour, leading to the conclusion that PZ = HB, or, if you prefer, that flags thrown by and within PZ have a meaning to PZ itself.

    You’re conflating several different issues here. My von Neumann minds may be intentional, if you grant my argumentation, but they don’t have phenomenal experience (or at least, there’s no reason to believe they do). So not everything boils down to differences in behavior.

    Moreover, my construction doesn’t imply that some thrown flags have meaning to the system; it implies that there is a particular sort of system (that constructed using the von Neumann replicator architecture) whose state has meaning to itself—that is, which does not need an external, originally intentional entity to derive meaning from (hence avoiding the homunculus regress).

    For example, when you ask about “attribution” I mean something like “include the ability of collecting sensory information in the model of self”, which in plain English becomes “attribute to itself the ability of perceiving”.

    This is exactly where things look question-begging to me: some automaton does not “include the ability of collecting sensory information in the model of self”; that might be a plausible interpretation of what its data structures mean, but to suppose that interpretation is present within the automaton assumes it to be capable of interpretation in the first place, which is, however, the thing you need to establish.

    The ‘model of the self’ of such an automaton is just a bunch of squiggles on paper; if those squiggles look one way, some action A is performed, and if they look another way, action B is executed instead. There’s no ‘central meaner’ that looks at those squiggles and interprets them as meaning, say, ‘I am too far away from my target, hence I should move closer’—supposing this, first, incurs inconsistency (in terms of infinite recursion), and, second, is also unnecessary. Only the syntactic properties of the squiggles matter in guiding behavior; their semantic dimension is wholly irrelevant.

    Getting around this is why I proposed the von Neumann model: there, the squiggles themselves are the actors—that is, they perform operations on one another, and thus, by virtue of the meaning as action-account, have meaning to one another (with what squiggle A makes squiggle B do being the meaning of A for B). Thus, one can construct squiggles that have meaning to themselves, that hence are about something to themselves in a non-arbitrary way.

    In your case, however, the squiggles are fed to a separate mechanism, your evaluation module, which then—using their syntactic properties only—bases action on those squiggles. While one can thus associate these squiggles with meanings, in terms of the actions they cause, those are meanings to a different entity, namely the EM; but as I’ve argued in the paper, meaning to an external agency always incurs the homunculus problem (since you end up having to postulate ever more external agencies).
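
    If it helps, here’s a minimal toy of the contrast I have in mind (my own throwaway construction, emphatically not the actual von Neumann setup of the paper): in the first part, an inert flag only means something to a separate evaluation module; in the second, the ‘squiggles’ are themselves operations applied to the very string that contains them, so what a squiggle makes the string do can be read as its meaning for it.

        # External-interpreter style: the flag is inert data; only the separate
        # module does anything with it.
        def evaluation_module(flag):
            return "move_closer" if flag == "FAR" else "hold"

        print(evaluation_module("FAR"))  # -> move_closer

        # Meaning-as-action style: each symbol is itself an operation on the
        # symbol string, so the 'meaning' of a symbol is what it makes the string do.
        def dup(tape):   # duplicates the whole string
            return tape + tape

        def chop(tape):  # drops the first symbol
            return tape[1:]

        tape = [dup, chop]            # the string of 'squiggles'
        for symbol in list(tape):     # each squiggle acts on the string containing it
            tape = symbol(tape)
        print([s.__name__ for s in tape])  # -> ['chop', 'dup', 'chop']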

    From one point of view you are asking me to start my explanatory quest from an incoherent premise (PPE != 0) conveniently forgetting that such a premise makes the explanation impossible. On the other, you are not accepting the incoherent premise yourself…

    I’m still not sure why you attribute the stance that PPE doesn’t exist to me; in my view, exactly that sort of phenomenal experience that we seem to have, we do in fact have. Furthermore, we can entirely consistently imagine it lacking in zombies, and it’s what Mary learns upon seeing red for the first time, etc. It’s just not derivable from the physical facts, because it corresponds to an undecidable proposition.

    ————

    I must confess writing this burned me out a bit (not to mention taking a sizable bite out of my downtime, but that’s my own choice of course); I might need some time to recoup before diving back into this…

  153. BTW, regarding BBT-esque eliminativism, I just chanced across a paper by Lynne Rudder Baker that makes essentially the argument I’ve been trying to articulate regarding its inconsistency in a very convincing way, labelling it Cognitive Suicide. Going beyond what I’ve been trying to point out, she concludes that the ‘commonsense conception’ of belief/aboutness etc. is not actually an empirical theory (and a bad one, at that) as the eliminativist claims, but a precondition to theorizing. I think that makes a certain sense (but then again, I also think I’ve got at least the germ of a way of naturalizing intentionality, and thus, don’t feel its sting as badly as those driven to eliminativism seem to).

  154. Jochen,

    Your posts are excellent. I’m enjoying reading this discussion. Do you mind sending me a copy of the paper you mentioned? (I may quibble with the ‘existence is not a predicate’ bit [Kant agree], but that’s irrelevant for now…)

    Mosaicroy@hotmail.com

  155. Jochen (meta-answer),
    I agree with Chen (glad somebody else is enjoying this!): your posts are excellent, and you have absolutely nothing to feel sorry for. I am also inclined to believe you probably don’t know how useful you’re being to me: this discussion is invaluable for me on more than one level.
    To name a few: it forces me both to make explicit and to question a whole lot of assumptions that I make; because they are automatic to me, I wouldn’t be able to do so without help like yours.
    It also provides a constant source of new ideas on how to try explaining my main points. At the same time, you make me experience directly how difficult it is to try steering out of the mainstream and still retain some hope of being both intelligible and convincing (or at least, avoid looking completely foolish).

    Thus, the fact that “I can’t avoid the feeling…” is exactly what should be happening: you are painfully and slowly showing me how things that look obvious to me are quite astonishing from different (and less idiosyncratic) points of view. But convention and the mainstream are not the focus here: the point is that we are all blind to some of the assumptions we make, and thus need this kind of detailed and painfully repetitive dialogue to shed some light on them.

    I can only thank you! I was also drained by the effort, and now have lots of ideas spinning around. That means I’m going to need some time to let the conceptual storm settle, regroup, and try to put the results down in writing, hoping they will be somewhat coherent. (Rudder Baker’s paper seems very promising, BTW, thanks for the tip.)

    I propose a temporary truce. If things go according to plan (and they seldom do), I’ll probably try to produce one or more posts on my own blog, summarising (some of) what I’ve learned from this discussion. Don’t know when, as always, so I’ll post an FYI notification here when done.

  156. Chen,
    thanks for your kind words. I’m pleasantly surprised that there’s actually somebody else following this discussion—I’d thought the whole thing had long ago become too unwieldy and idiosyncratic to be of much interest to anybody beyond myself and Sergio. If there’s somebody else getting something out of this, all the better!

    I’ve sent you a preprint copy of my paper; I hope it’ll be of some interest to you, and would welcome all comments and criticisms you might care to make.

    Regarding my Kantian notion of existence, well, if you were to quibble, I’d probably go along with that—I don’t think I’m either prepared or especially willing to defend it in detail. But I think it serves as a useful reminder that existence needs to be treated with some care, something I don’t think is lost on more modern (e.g. Quinean) takes on the matter. But that’s an area where I’m very far from being an expert.

    Sergio,
    thanks for your kind words, as well. I feel much the same: the discussion with you has brought several issues regarding my HC account to the fore, and I believe the idea has gained much thanks to it.

    I’m also very happy that we’ve gotten as far as we did in our discussion, but, like you, I also feel that there’s much work still ahead regarding your ETC. So, while I do think it’s a tiny bit ironic to say in the same post that you’re happy the discussion is useful to other people besides just serving our own interests, and to propose ending it (for the moment) ( 😉 ), I agree that we’ve come as far as we can right now, and I’ll wait until you’ve organized your thoughts to perhaps take it up again.

  157. Jochen (156)

    I feel much the same: the discussion with you has brought several issues regarding my HC account to the fore, and I believe the idea has gained much thanks to it.

    Phew. Relief is what I felt when reading the above: it was my aim at the start, and I was worried that the predictable mission creep (my doing) had put the primary objective at risk.

    Re ending the discussion: I’m not ending it! I’m merely trying not to waste anyone’s time by proposing half-cooked arguments, so yes, we do seem to agree on the necessity of a temporary truce (it’s been tons of fun so far!).
    Cheers for now.

  158. Pingback: Sources of Error: Epiphenomenalism (part 1) | Writing my own user manual
