Reality

This is the last of four posts about key ideas from my book The Shadow of Consciousness, and possibly the weirdest; this time the subject is reality.

Last time I suggested that qualia – the subjective aspect of experiences that gives them their what-it-is-like quality – are just the particularity, or haecceity, of real experiences. There is something it is like to see that red because you’re really seeing it; you’re not just understanding the theory, which is a cognitive state that doesn’t have any particular phenomenal nature. So we could say qualia are just the reality of experience. No mystery about it after all.

Except of course there is a mystery – what is reality? There’s something oddly arbitrary about reality; some things are real, others are not. That cake on the table in front of me; it could be real as far as you know; or it could indeed be that the cake is a lie. The number 47, though, is quite different; you don’t need to check the table or any location; you don’t need to look for an example, or count to fifty; it couldn’t have been the case that there was no number 47. Things that are real in the sense we need for haecceity seem to depend on events for their reality. I will borrow some terminology from Meinong and call that dependent or contingent kind of reality existence, while what the number 47 has got is subsistence.

What is existence, then? Things that exist depend on events, I suggested; if I made a cake and put it on the table, it exists; if no-one did that, it doesn’t. Real things are part of a matrix of cause and effect, a matrix we could call history. Everything real has to have causes and effects. Perhaps we can prove that by considering the cake’s continuing existence. It exists now because it existed a moment ago; if it had no causal effects, it wouldn’t be able to cause its own future reality, and it wouldn’t be here. If it wasn’t here, then it couldn’t have had preceding causes, so it didn’t exist in the past either. Ergo, things without causal effects don’t exist.

Now that’s interesting because of course, one of the difficult things about qualia is that they apparently can’t have causal effects. If so, I seem to have accidentally proved that they don’t exist! I think things get unavoidably complex here. What I think is going on is that qualia in general, the having of a subjective side, are bestowed on things by their being real, and that reality means causal efficacy. However, particular qualia are determined by the objective physical aspects of things; and it’s those that give specific causal powers. It looks to us as if qualia have no causal effects because all the particular causal powers have been accounted for in the objective physical account. There seems to be no role for qualia. What we miss is that without reality nothing has causal powers at all.

Let’s digress slightly to consider yet again my zombie twin. He’s exactly like me, except that he has no qualia, and that is supposed to show that qualia are over and above the account given by physics. Now according to me that is actually not possible, because if my zombie twin is real, and physically just the same, he must end up with the same qualia. However, if we doubt this possibility, David Chalmers and others invite us at least to accept that he is conceivable. Now we might feel that whether we can or can’t conceive of a thing is a poor indicator of anything, but leaving that aside I think the invitation to consider the zombie twin’s conceivability draws us towards thinking of a conceptual twin rather than a real one. Conceptual twins – imaginary, counterfactual, or non-existent ones – merely subsist; they are not real and so the issue of qualia does not arise. The fact that imaginary twins lack qualia doesn’t prove what it was meant to; properly understood it just shows that qualia are an aspect of real experience.

Anyway, are we comfortable with the idea of reality? Not really, because the buzzing complexity and arbitrariness of real things seems to demand an explanation. If I’m right about all real things necessarily being part of a causal matrix, they are in the end all part of one vast entity whose curious form should somehow be explicable.

Alas, it isn’t. We have two ways of explaining things. One is pure reason: we might be able to deduce the real world from first principles and show that it is logically necessary. Unfortunately pure reason alone is very bad at giving us details of reality; it deals only with Platonic, theoretical entities which subsist but do not exist. To tell us anything about reality it must at least be given a few real facts to work on; but when we’re trying to account for reality as a whole that’s just what we can’t provide.

The other kind of explanation we can give is empirical; we can research reality itself scientifically and draw conclusions. But empirical explanations operate only within the causal matrix; they explain one state of affairs in terms of another, usually earlier one. It’s not possible to account for reality itself this way.

It looks then, as if reality is doomed to remain at least somewhat mysterious, unless we somehow find a third way, neither empirical nor rational.

A rather downbeat note to end on, but sincere thanks to all those who have helped make the discussion so interesting so far…

117 thoughts on “Reality”

  1. Hi Peter,

    > It looks then, as if reality is doomed to remain at least somewhat mysterious, unless we somehow find a third way, neither empirical nor rational.

    I personally (have deluded myself to) believe I have an answer to this in the form of Tegmark’s Mathematical Universe Hypothesis.

    You seem quite open to Platonism, so let’s take that as a given.

    Now the idea is basically that all reality is just mathematical existence. The idea of objective reality is said to be incoherent.

    You say that it is causal powers that make something real. Consider for a moment a series of events in a hypothetical universe which we’ll say does not exist (for now) but which we can model mathematically and simulate on a computer. We can, in a manner of speaking, see that events in this universe cause other events and so on, such that an observer in such a universe would perceive things to have causation just as we do.

    To me, this means that it is not very satisfactory to define reality as that which gives causal powers to things, because even unreal things can have causal powers with respect to other unreal things (in mathematical models, for instance).

    To say that causation in the real world is actual causation but causation in the unreal mathematical world is not actual causation, you need to presuppose that the real world is real and the unreal world is not real, so defining reality in terms of causation (or vice versa) ends up being circular.

    Tegmark’s view (and mine) is that there is no fact of the matter about what is objectively real, indeed that what is real is entirely subjective. This whole universe is just a mathematical object which exists Platonically, and we perceive it to be real only because we are embedded in it: that is, from our perspective there are causal links between us and the world around us.

    But much the same can be said for any other mathematical model of a universe which is complex enough and organised so as to support observers — from the perspective of those observers their universe is real and ours is not.

    If Tegmark is right (and I am convinced he is), then pure reason is enough to explain why the universe exists after all. Indeed, it explains that all possible universes exist, because the difference between existence and subsistence is illusory and due to our perspective within a mathematical object.

  2. Good stuff Peter. Empiricism hits brute facts (like the natural “laws”) that can’t be explained; abstractions get us away from the real and become ungrounded (think multiverse silliness).

    The physicist N. David Mermin had an interesting point a few years back. He said that at first he thought his work was helping to discover the underlying laws of existence, but as he grew older he came to feel physics was more a net that tried but was unlikely to ever completely account for the nature of the universe. Chomsky also goes into the failure of mechanistic conceptions of reality in his talk Grammar, Mind, & Body*.

    Just seems we know very little about fundamental aspects of the physical world such as causality, the nature of time, what sustains the regularity we observe and extrapolate into laws. Throw in our incredible ignorance about intentionality and subjectivity and it seems like we’ve only scratched the surface of what can be known about reality.

    Exciting to think of the vast frontier, though also ideally humbling enough to keep people from leaping away from what we know into varied flights of fancy like uploading minds & MWI on the one side and theistic claims on the other.

    *https://www.youtube.com/watch?v=wMQS3klG3N0

  3. qualia – the subjective aspect of experiences that gives them their what-it-is-like quality – are just the particularity, or haecceity, of real experiences.

    What about hallucinated qualia?

  4. No, Disagreeable, I’m not particularly Platonist (though compared to some hard-line materialists here I might look it). In particular I think it’s a mistake to think that maths could create, rather than describe, reality – that’s an example of the assumption that theory precedes reality, the assumption that makes the reality of seeing red seem so mysterious…

    Thanks, Sci!

    Jeremy – good question; hallucinations are real, in the sense I intend – they have haecceity – so it’s not absurd to think they can have qualia. It’s ideas, Platonic entities, that don’t have the required particularity. To get all this straight I think we need to draw the distinction between the existent and the subsistent rather than between the real and the imaginary.

  5. Perhaps if you consider it as a problem of synecdoche?

    As is we take it [A] the eye is detecting a colour then [B] something is feeling ‘what it’s like’ to see that colour.

    But what if that’s just part of the system?

    What if it’s recursive – there is actually a [C] eye that detects [B] (instead of detecting outside things – and let’s say it’s not very good at seeing what it looks at, either. It sees things in kind of a blurry way). AND like [B] monitors [A], there is a [D] that monitors [C].

    But due to synecdoche, we assume there is only [A] and [B], primarily because it’s all packed into one skull – thus we assume only one thing is there. Not two. There aren’t two skulls, after all!

    When you retract the viewing device and processor that is [C] and [D] as not being part of the equation – well then you’d have a [D] reporting ‘what it’s like’ but not treating itself as there. The Delephant in the room! Hur, hur! It would scrutinise [B] most thoroughly – indeed in focusing so utterly it would help ensure that its own existence isn’t detected for lack of it looking anywhere else! Plus it’s already treated itself as not existing, which makes the finding all the more unlikely! Call it the philosopher’s contradiction.

    It makes sense when you think about not just ‘what it’s like’ but – how do you know ‘what it’s like’? Isn’t something detecting ‘what it’s like’ for you to do that? But what is it like to detect [B] what it is like? There’s no feeling for that. But at the same time, how do you know ‘what it’s like’ – wouldn’t folk say it’s reasonable to say you are detecting [B] somehow? But you can’t feel yourself detecting [B], you can only detect [B]. Thus it seems like only [B] is there – thus the synecdoche and the contradiction.

    And explaining the whole system when you’re only looking at half of it – that really is a hard problem!

  6. Disagreeable Me:

    Consider for a moment a series of events in a hypothetical universe which we’ll say does not exist (for now) but which we can model mathematically and simulate on a computer. We can, in a manner of speaking, see that events in this universe cause other events and so on, such that an observer in such a universe would perceive things to have causation just as we do.

    But the causation here is merely a descriptive gloss—an interpretive mapping—on real-world causation subvening the computational process, that is, the actual evolution of the physical system to which the computation is ascribed. In a sense, the computation that a physical system performs is just a way to talk about its physical evolution in a slightly different, or differently interpreted, language—so the causation there is not with respect to anything unreal, but grounded in the quite real causation of the physical substrate.

    More generally, mathematics is ultimately just structure—you can ground it in set theory, where sets are just classes of objects standing to one another in a particular relation. But structure underdetermines content: think about the relations ‘is an ancestor of’ and ‘is thicker than’. We can imagine a set of people and a set of books such that whenever two people stand in the ancestor-relation, the corresponding two books stand in the thicker-than relation. But of course, people and books are very different things—and their difference is not accounted for by the merely structural account. (A small sketch of this appears at the end of this comment.)

    Now, one might object that neither people nor books are exhaustively characterised by the above, that they have more structural properties that tell them apart; and that is of course right. But still, the general conclusion holds: for any purported structure, there is not a unique set of things that embodies this structure; conversely, any set of things (of the right cardinality) can be endowed with any given structure. This is known as Newman’s problem, and it’s a famous objection to structural realism—and if it’s true that mathematics is ultimately the science of structure, also to any sort of ‘mathematical’ or ‘Platonic’ realism.

    More concretely, mathematics in general captures those properties of the world that can be reflected in the differences of signs on a piece of paper; but there’s of course no guarantee that the world is exhausted by these sorts of properties. Indeed, certain features of mental experiences—their ineffable character, for one—suggest that they are precisely not of this sort—that they are non-structural, intrinsic properties of certain mental states (what I think Peter means by ‘haecceity’, or at least something related).

    But then, mathematics and computation fail to account for those kinds of properties, and there is, indeed, a certain character to reality that can’t be captured in these objective, communicable terms.
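
    A minimal sketch of the people-and-books point in Java (the names and the particular three-node relation are invented purely for illustration): the bare structure is just a set of ordered pairs, and nothing in it settles whether it is about ancestry among people or thickness among books.

        // One abstract relation-structure, read twice: once as "is an ancestor of"
        // over people, once as "is thicker than" over books.
        public class NewmanSketch {
            public static void main(String[] args) {
                int[][] structure = {{0, 1}, {1, 2}, {0, 2}};  // ordered pairs over nodes 0, 1, 2

                String[] people = {"Anne", "Anne's child", "Anne's grandchild"};
                String[] books  = {"a fat novel", "a slim novella", "a thin pamphlet"};

                for (int[] pair : structure) {
                    System.out.println(people[pair[0]] + " is an ancestor of " + people[pair[1]]);
                    System.out.println(books[pair[0]] + " is thicker than " + books[pair[1]]);
                }
                // The pair-set itself never says which reading is the right one:
                // the structure underdetermines the content.
            }
        }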

  7. The skeptic Massimo was very taken with Platonism until he eventually abandoned it as straying too far from empiricism. He credits this paper by John Wilkins (Information is the new Aristotelianism) for helping to clarify the issue:

    https://scientiasalon.wordpress.com/2014/05/01/information-is-the-new-aristotelianism-and-dawkins-is-a-hylomorphist/

    I think it does a good job of showing how one can make “information” seem magical by confusing/conflating definitions, though I’d say this only compounds the problem Rosenberg (and I assume all eliminativists regarding intentionality?) has raised:

    If physical things aren’t producing semantic information, and thus (as Rosenberg notes in The Atheist’s Guide to Reality) neurons can’t point to or be about anything due to the indeterminate nature of the physical…then how do we have thoughts about anything at all?

  8. Hi Jochen,

    > so the causation there is not with respect to anything unreal, but grounded in the quite real causation of the physical substrate

    Well, not necessarily. Say we don’t actually run the simulation. Say we’re considering a class of hypothetical worlds we *could* simulate. From a Platonist perspective, there are (unreal) events in these worlds which can be said to cause other (unreal) events and these events don’t supervene on anything because we’re not actually running any simulations.

    Now, we can do as you have done and say that this isn’t real causation but a descriptive gloss, because this is not real-world causation. The events don’t even exist because they are just hypothetical.

    But to do so is to presuppose a distinction between the real and the unreal. This is circular if you are trying to explain a difference between the real and the unreal by appealing to causation as Peter has done. We’re back where we started!

    > there is not a unique set of things that embodies this structure; conversely, any set of things (of the right cardinality) can be endowed with any given structure.

    Yes, if the structure is just a set of abstract relations that do nothing and mean nothing. But a mathematical model or computer simulation is more like what I have in mind, where the state at earlier times entails a state at later times. The relations are not entirely arbitrary and meaningless but pseudo-causal, with entailment playing the role of physical causation.

    Newman’s problem would still need to be addressed. There is no limit to the ways even a dynamic structure could be realised. We can imagine that the universe is a simulation running on one kind of computer or another. Similarly, we can imagine that each subatomic particle known to particle physics has a substructure, and we can imagine that this substructure can never be known to us, and that there are an unlimited number of ways that substructure could be organised.

    On the MUH, none of this matters. We identify our universe with the structure we *can* in principle detect. Different variants of hidden substructure therefore don’t change what we identify with. In mathematical space, there exists an infinite number of such variants, but from our perspective as observers supervening on the structure we can see, all of these are the same and can be collapsed to one object. The distinctions are immaterial. This parallels what I said to you on the last thread about the meaninglessness of distinguishing between identical copies of myself at vast distances away in infinite space.

    In the same way, the Mandelbrot set as computed on a Mac is the same as that computed on an Android mobile phone. It doesn’t matter to the set (if we can pretend it has a perspective) whether it is supervening on one computer architecture or another. All that is completely isolated from and so irrelevant to it. Indeed, on Platonism, the set exists just fine even if it is never explored by thinking creatures. (A small sketch of this appears at the end of this comment.)

    If we take an example from mathematics only, there are an unlimited number of shapes and tilings and so on that take as a part of them an equilateral triangle, for instance the four faces of an equilateral tetrahedron. If the four points of the tetrahedron are A,B,C,D then, as Newman says, we can form an equilateral triangle in a number of different ways. We have ABC, ABD, ACD and BCD. But these are all just instances of the same Platonic equilateral triangle, and if what you identify something with is that Platonic equilateral triangle then what you are seeing is different reflections of this embedded in another Platonic object.

    In other words, even if there are numerous ways to form the elements of the structure of this world that have significance to us, it doesn’t need to undermine structural realism as long as you believe that all structures exist and can have other structures reflected and embedded in or supervening on them. All these are indeed distinct mathematical objects, but our universe is still an object in its own right and just happens to crop up in these other objects also.

    This means that I am unconcerned by, e.g. Nick Bostrom’s argument that we may be in a simulation. For me, the proposition that we are in a simulation is meaningless, because as long as those outside the simulation don’t interfere with us then we are also within a mathematical object independently of the simulation (where what “we” are is identified with a particular substructure of the mathematical object that is isomorphic to our universe). So even if the experimenters pull the plug, we continue to exist. In this view, a simulation is a way of peering into and examining a mathematical object and not a way of bringing it into reality.

    (Things get more complicated if the experimenters interact with the simulation.)
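
    A minimal sketch of the Mandelbrot point above (an ordinary escape-time membership test; the iteration limit and the sample points are chosen arbitrarily for illustration): whether a given point belongs to the set is fixed by the iteration rule alone, and nothing about the hardware that happens to run the test shows up in the answer.

        // Mandelbrot membership: iterate z -> z^2 + c and see whether it stays bounded.
        public class MandelbrotSketch {
            static boolean inSet(double re, double im, int maxIter) {
                double zr = 0.0, zi = 0.0;
                for (int i = 0; i < maxIter; i++) {
                    double nzr = zr * zr - zi * zi + re;
                    double nzi = 2 * zr * zi + im;
                    zr = nzr;
                    zi = nzi;
                    if (zr * zr + zi * zi > 4.0) return false;  // escaped: not in the set
                }
                return true;  // stayed bounded for maxIter steps: treat as in the set
            }

            public static void main(String[] args) {
                System.out.println(inSet(0.0, 0.0, 1000));  // true: c = 0 never escapes
                System.out.println(inSet(1.0, 0.0, 1000));  // false: c = 1 escapes quickly
            }
        }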

  9. Hi Sci,

    > He credits this paper by John Wilkins (Information is the new Aristotelianism) for helping to clarify the issue:

    You’ll find my reaction to that paper in the comments section of the article you linked (I’m pretty active on Scientia Salon).

    > neurons can’t point to or be about anything due to the indeterminate nature of the physical…then how do we have thoughts about anything at all?

    To me, intentionality is a kind of relation arising out of causal links and structural similarity.

    I’m with those against naive realism: I think we refer to objects in the real world only indirectly. What we actually think of are mental representations which bear a certain resemblance to real objects and which have certain causal relationships with real objects (perceiving an object in sense data is the activation of a mental representation), and this is how intentionality arises.

  10. @Disagreeable Me:

    “To me, intentionality is a kind of relation arising out of causal links and structural similarity.”

    Isn’t this invoking intentionality in order to explain it? Isn’t “similarity” a comparison that requires a mind to judge?

    For example – if I write a Comparator in, say, Java, am I not the mind that determines what makes the objects of comparison similar enough?
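
    A minimal sketch of that point, using a hypothetical Book class (the class and the choice of page count as the yardstick are invented here purely for illustration): the only reason the Comparator ranks two books as more or less alike in thickness is that a programmer decided which property matters.

        import java.util.*;

        // The "similarity" the Comparator encodes is whatever the programmer judged it to be.
        public class ComparatorSketch {
            static class Book {
                final String title;
                final int pages;
                Book(String title, int pages) { this.title = title; this.pages = pages; }
                public String toString() { return title + " (" + pages + "pp)"; }
            }

            public static void main(String[] args) {
                // The judgment that thickness means page count (rather than weight,
                // font size, or anything else) is made by a mind, not by the code.
                Comparator<Book> byThickness = Comparator.comparingInt(b -> b.pages);

                List<Book> shelf = new ArrayList<>(Arrays.asList(
                        new Book("Pamphlet", 12),
                        new Book("Fat novel", 900),
                        new Book("Novella", 120)));
                shelf.sort(byThickness);
                System.out.println(shelf);  // [Pamphlet (12pp), Novella (120pp), Fat novel (900pp)]
            }
        }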

  11. But to do so is to presuppose a distinction between the real and the unreal.

    Why? Even in mere consideration, there is a subvenient physical causality; thus, the causality of the imagined world is merely derived. So there’s a clear demarcation criterion here, since if you changed any detail of the causality of the supervenient fictional world, this would have to be associated with a corresponding change of the subvenient causal matrix—i.e. if I think of things in the fictional world occurring differently, then necessarily my neurons must fire in a different way.

    But conversely, changes in the subvenient medium may occur, without there being any corresponding change in the resulting imagined world—for instance, the whole thing could be instantiated not in a neuron-based mind, but rather, in one made up of silicon chips. So, without appealing to real and unreal, there’s a clearcut difference.

    We identify our universe with the structure we *can* in principle detect.

    The problem is that what we seem to be observing does not appear to be structural at all. Do what I call the ‘scribbles-on-paper’ test: if you can instantiate a property in scribbles on paper, then it’s a structural property—since what you do is implement the same structure with scribbles on paper. This is what ultimately explains the alleged ‘unreasonable effectiveness’ of mathematics: it’s a highly efficient system for implementing structures in an easily-manipulated way. The same goes for computers: their universality is a universality of structural modeling.

    But the conclusion seems irresistible to me that the subjective aspects of experience fail the scribbles-on-paper test: I could never hope to convey to anybody else what a given experience feels like to me using scribbles on paper. I could give some account of the interrelationships of experiences—a given noise was louder than another, a color brighter, and so on; but this doesn’t even begin to get to the heart of the matter regarding what it is like to experience these things.

    If you believe differently, then I have a challenge for you: I have no sense of smell at all, and never had, to the best of my recollection. So, if you think subjective experience is structural, then what is it like to smell strawberries? Or, perhaps more generously, what is it like to smell at all?

    I know there are people, like Daniel Dennett, who basically believe that it would be in principle possible to write down just such a description, that however the task is in practice so complex that it can never be completed. But I don’t think this holds water: even if any given experience is vastly too complex to be described, this should not stand in the way of giving at least a hint at what such a description might be like. For instance, I don’t know all the processes by which my computer gives rise to pictures on its screen, and doubt that I could hold them all in my head; but still, I have a pretty good idea what the complete explanation would be like. Why is even this apparently impossible when it comes to consciousness?

    But then, if it’s true that mathematics is structure, and consciousness is non-structural, it seems that the MUH is just a non-starter. I mean, think about it: what would a conscious mathematical object be like? Ultimately, all mathematics gets its meaning from human minds; it has, at best, derived intentionality. Without this, it is nothing but empty symbols. How do you get semantics from syntax?

  12. Jochen: “I could never hope to convey to anybody else what a given experience feels like to me using scribbles on paper.”

    In the SMTT experiment, the subject self-induces a vivid hallucination of a triangle that he/she is able to control so that the base of the hallucinated triangle is approximately equal to its randomly varying height. Independent observers, looking over the subject’s shoulder, experience the same hallucination. Is the subject conveying the hallucinatory experience to those who experience the same self-induced hallucination when they look over the subject’s shoulder?

  13. Hi Sci,

    > Isn’t “similarity” a comparison that requires a mind to judge?

    Sure. This is only a problem if we’re saying there is an objective fact of the matter about whether a certain mental representation really does refer to a particular object in the external world.

    I don’t think there is such a fact of the matter. We won’t go far wrong in deeming a mental representation to refer to an object when that mental representation bears a structural similarity to the object and where there is a causal connection of some kind between them.

    Once you get out of the habit of thinking of mental representations having objective relationships to external objects, the problem of intentionality goes away, because all you have are mental states being connected to other mental states.

    So the world I perceive is a construct of my mind, which is not to say that it does not bear a strong relationship to the real world. You are right that it requires a mind to judge the strength of that relationship. If I am as deluded as you think I am then the relationship is rather weak indeed!

  14. Hi Jochen,

    > Even in mere consideration, there is a subvenient physical causality; thus, the causality of the imagined world is merely derived.

    If I were specifically thinking of these events and evolving the state of the world in my mind, you would have a point.

    But I suggested that we need not work out the details of these worlds. I’m just postulating that there is a set of such worlds which *could* be thought about or simulated. Since I’m not working through specific models and specific events, the events I’m talking about are never realised. They remain hypothetical and unreal. On Platonism, these events exist abstractly whether or not we actually go to the trouble of investigating them.

    > I could never hope to convey to anybody else what a given experience feels like to me using scribbles on paper.

    Agreed. Because scribbles on paper do not have the power to rewire brains to whatever structure you like. Your scribbles are interpreted by the other person and reconstituted into a representation in the mind of that brain. There are constraints to what this process can achieve, not because there are mental states which are not structural, but because the ability of a brain to restructure itself on demand in response to verbal or symbolic communication is severely limited.

    But, say your scribbles defined a precise brain state (you’d need a lot of paper!) and say the other person is John Searle and has the superhuman ability to interpret your scribbles as in the Chinese Room to emulate that brain state. In this case the person themselves will not understand your experience but they will realise a distinct personality who does, a virtual mind which supervenes on their own in the manner of a virtual machine in computer science.

    > I know there are people, like Daniel Dennett, who basically believe that it would be in principle possible to write down just such a description, that however the task is in practice so complex that it can never be completed.

    A rare instance where I disagree with him. You can’t perfectly reconstruct an experience from a description any more than you can learn juggling by reading a book (without practicing). Brain state is not just propositional knowledge. Knowing everything there is to know about brain state does not mean you have the ability to put your brain in a particular state.

    Not without surgery at least! Mary the colour neurologist may know everything there is to know about the colour red, but she cannot know from only reading books what it is like to experience red. She might anticipate how she will react on seeing it for the first time (e.g. whether she will be overcome or regard the whole thing as anticlimactic), but she cannot actually call to mind the experience at will because to do so would require rewiring her brain by an act of will.

    If on the other hand she can rewire or stimulate her brain with some sort of tool, then it is quite conceivable that she would know how to interfere with her neurons so as to experience red. That’s what knowing everything about the experience of red gives her.

    > I mean, think about it: what would a conscious mathematical object be like?

    Depends on the object! One might be like me, another might be like you!

    > How do you get semantics from syntax?

    Ooh, that’s a big topic. I could direct you to some earlier, less formed ideas I had a few years ago.
    http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-semantics-from.html
    and
    http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-meta-meaning.html

    as well as an article I posted to Scientia Salon last year.
    https://scientiasalon.wordpress.com/2014/09/01/the-intuitional-problem-of-consciousness/

    To give a very quick answer, I think a symbol in the external world has semantics when we can convert it to the native mental representation used by our brains. This native mental representation is itself a symbolic construct. You could say it has semantics to the hosting mind because it needs no translation, it’s already native. It also has semantics because it’s not just a dry sequence of symbols such as a text, but a complex mechanism where symbols have causal relationships to other symbols, allowing the possibility of building dynamic models with causal relationships and similarity to objects in the outside world. As I said earlier, this is how I think intentionality is achieved.

  15. @Disagreeable Me: Are you an Idealist then, and all these structures you discuss are in some kind of Uber-Mind?

    If not I don’t see how causal relationships between Platonic Math structures gets you to intentionality. One of the problems would be isolating *the* causal relation amidst what I’d assume were the variety of causal relations between these non-spatial math Forms.

    Also, what does it mean to have causal relations between Platonic Math Forms?

  16. Mary the colour neurologist may know everything there is to know about the colour red, but she cannot know from only reading books what it is like to experience red. She might anticipate how she will react on seeing it for the first time (e.g. whether she will be overcome or regard the whole thing as anticlimactic), but she cannot actually call to mind the experience at will because to do so would require rewiring her brain by an act of will.

    Nice example, DM!

    I think the problem in the discussion of the reported ‘semantics’ is people expecting a description of the electronic interactions of a computer to somehow be raised up to the heights of ‘semantics’. When really it’s more like the explanation of ‘semantics’ dropping it way, way down to the depths of electronic interactions.

    Like in the ‘zombie with no qualia’ examples, the zombies always seem to accept their state. They don’t say ‘I am not a zombie and I AM having a conscious experience!!’ even though it’s easy enough to imagine a ‘zombie’ saying that. The zombies know their place in these examples. And so these discussions seem to orbit more around social pecking order and self promotion within that, than anything else.

  17. But I suggested that we need not work out the details of these worlds. I’m just postulating that there is a set of such worlds which *could* be thought about or simulated.

    But then, it’s just an inert structure—like a cooking recipe, for instance. I don’t see how it’s meaningful to attribute causality to something like this, at all—I mean, would you attribute cause-and-effect links to the entities in a story, in a book no one is reading at the moment?

    Agreed. Because scribbles on paper do not have the power to rewire brains to whatever structure you like.

    I never found this argument very convincing. First of all, it’s not clear that they should have the power to do any re-wiring in order to convey the necessary ideas: human minds are basically universal (given sufficient resources) simulation machines, so while I can’t learn how to ride a bike from a book, I could certainly imagine myself riding a bike after reading how it works; and really, the latter is all I’m asking for. There need to be no changes in hardware—a universal computer can emulate any other computer, so a sufficiently powerful mind ought to have no problem learning how it is to see red from a book. But I can’t even begin to imagine how such an explanation would look (which is, of course, nothing but a statement of my own limitations, but in order to entertain the possibility of capturing subjective experience structurally, I would still demand at least a plausible explanation of how that would work).

    Second, I don’t think it’s wise to anchor this argument to the incidental particulars of human neuroanatomy: one could imagine a robot, with complete freedom to re-wire his silicon chips, that would be capable of learning how to ride a bike from reading about it in a book—essentially, implementing the algorithm written down. However, even though I am not such a robot, I can easily imagine what the algorithm would look like, and that it in fact would produce the desired effect in a suitable robot. Nevertheless, even though, by analogy, it ought to be possible to create a robot that could learn what it’s like to see red from a book, there does not even seem to me to be an algorithm that accomplishes this task.

    Finally, one may note that the problem does not get any easier if both parties do actually know what it’s like to experience a certain stimulus: I can no more give an objective third-person description of what it’s like to see red to you than you can describe what it’s like to smell strawberries to me, even though in that case, we presumably share the relevant wiring. Any such description I can think of collapses into circularity, and presupposes experience in order to explain experience. This is not the case for the bike-riding example: there, if both you and I know how to ride a bike, we could exchange a description on which we both agree that it covers all that there is to bike-riding, even though it would not be enough to learn how to ride a bike from. Such a description, however, is lacking, and is exactly what would be needed to establish that mental properties are structural.

    To give a very quick answer, I think a symbol in the external world has semantics when we can convert it to the native mental representation used by our brains.

    But isn’t this (like all similar attempts at mentalese, IMHO) just blatantly homuncular? In a way, you wish to explain representation by appealing to representation; but in order for something to be a representation, there must be some entity that uses it as a representation. But postulating such a user just iterates the problem to a higher level…

    Also, I wanted to point out that I see some tension between the structural and Platonic accounts of mathematics: if mathematics is really just the study of structures, then the Platonic world would be one of relations without relata, and moreover, would seem to be wholly redundant, since one need not appeal to Platonic mathematical form to ground mathematical reasoning, but can simply point out their instantiation in real-world structures (even an imagined structure is a real-world structure in this sense, since it describes the relations between objects of thought; in fact, even unimaginable structures can be realized by just making up objects to taste, in the form of scribbles on paper). So how do you reconcile the two?

  18. Arnold:

    Is the subject conveying the hallucinatory experience to those who experience the same self-induced hallucination when they look over the subject’s shoulder?

    Hm, I wouldn’t say so. Both are subject to the same visual stimuli, so it doesn’t seem all that mysterious that both report the same visual experience. If one of them were in fact not receiving those stimuli, but the subject could report them in such a way that the other nevertheless had the same visual experience (without resorting to just detailing the experience), then I think we would have a case about an experience being conveyed structurally.

  19. Hi Jochen,

    > human minds are basically universal simulation machines

    I don’t think they are, or at least not the way you do. Learning a new skill or experiencing new sensations has the power to literally rewire the brain so that we gain new abilities and new powers of imagination.

    > I could certainly imagine myself riding a bike after reading how it works

    I think that when we imagine things, we are mostly reassembling bits of past experience. Even if you have never ridden a bike, you have probably been in motion, moved your legs in various ways and balanced yourself in various contexts. Put all that together and you have an approximation of riding a bike, but only an approximation. Actually riding a bike will give your imagination new powers.

    > so a sufficiently powerful mind ought to have no problem learning how it is to see red from a book.

    If by “sufficiently powerful” you mean “can put itself in whatever state it wants by an act of will” then sure. But no human mind is like that.

    > a universal computer can emulate any other computer

    I agree. If you’re talking about emulation, however, we’re in a Chinese Room scenario as I described, with another virtual mind supervening on the native mind.

    Instead, you may want to make the point that a universal computer can run any software at a native level. I don’t think the brain is like this at all. The software of the brain is encoded, at least in large part, in the physical connections between the neurons. The brain does not have a facility for completely overriding this “software” or structure at will.

    You could perhaps think of the brain as a very tightly secured Windows machine to which you have only very limited permissions. You can perhaps read and write text files in one specific folder and that’s about it. In such a scenario, you have little hope of installing completely new software unless you can hack it somehow (analogous to doing brain surgery).

    So, a description of what it would be like to see red that would actually have some chance of working for someone with the kind of rewirable mind who could actually use it would be something like “Connect these neurons to these neurons, like so, and then stimulate these neurons in succession, then release a little dopamine, then stimulate these neurons”.

    > Nevertheless, even though, by analogy, it ought to be possible to create a robot that could learn what it’s like to see red from a book

    I quite agree. For instance, we can imagine that the robot can emulate human brain states, and the book could describe the brain state of a human experiencing red.

    > there does not even seem to me to be an algorithm that accomplishes this task.

    How would we know if there was? If the robot outputs “Oh, so that is what red is like” would you be satisfied? I doubt it. In order to get the robot to experience anything you would have to make a conscious machine. I think that is possible. Perhaps you don’t. But even if we did make a conscious machine, the AI-skeptics would just see it as aping consciousness.

    > I can no more give an objective third-person description of what it’s like to see red to you than you can describe what it’s like to smell strawberries to me

    Well, I’m a bit of a qualia-eliminativist, like Dennett. I don’t think there is anything to describe. I wouldn’t go so far as to say qualia don’t exist, but I think that qualia are nothing more than the handles or labels we give for certain brain states and sensations. They are without content. The very idea of communicating a raw feel from one mind to another is in my view a category mistake.

    I mean, if all your qualia were inverted but the causal powers remained the same, you would not notice a difference (how could you, if their causal powers remained the same?). To me this means it is most parsimonious to simply discount talk of the qualia in themselves.

    > But isn’t this (like all similar attempts at mentalese, IMHO) just blatantly homuncular?

    No, because I’m not imagining that the mental representation needs to be read and translated again. It’s already in the “machine language” of the brain, and as such it has all the direct physical causal powers it needs to enable information processing.

    > in order for something to be a representation, there must be some entity that uses it as a representation

    So, think of a computer program running in memory. That computer has state with variables and so on (representations of some kind). That state is processed by the CPU. You could describe this as the CPU “reading” the state. This looks a little homuncular too, as reading suggests some effort made to comprehend or translate an external representation to an internal one. But this is not what the CPU is doing. Rather, what is actually going on is that the state is *causing* the CPU to do one thing or another. There is a physical mechanism underlying the whole thing.

    Similarly, mental representations cause your brain to process information in various ways. They constitute an integral part of the structure of your mind and are nothing like a book for some homunculus to read. (A small sketch of this point follows at the end of this comment.)

    > then the Platonic world would be one of relations without relata

    I would say to build relational structures you need *some* sort of relata, but these can be abstract contentless propertyless nodes and do not have to be physical objects of any kind. A graph is such a structure.

    When I say structure I really mean anything that can be mathematically modelled or simulated on a computer. I would not wish to get too bogged down in defining relations or objects or what constitutes a relatum. I probably don’t subscribe to a very very strict definition of structuralism where all there are is relations, because I don’t see how you can get dynamic (pseudo-)causal behaviour out of that. I could be wrong about that, though. Perhaps there is a way to do it. I’m not an expert on the subject.

    > since one need not appeal to Platonic mathematical form to ground mathematical reasoning

    Perhaps not, but I don’t think it’s redundant because it helps to avoid certain problems if we assume mathematical objects exist, at least in *some* sense. More on that here.

    http://disagreeableme.blogspot.co.uk/2013/10/mathematical-platonism-is-true-because.html

    > in fact, even unimaginable structures can be realized by just making up objects to taste, in the form of scribbles on paper

    I don’t follow, I’m afraid.
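
    To make the earlier point about state causing the CPU to act concrete, here is a minimal sketch (a toy instruction loop invented for illustration, not a model of any real CPU, let alone the brain): the stored state simply causes the next step to happen; nothing in the loop "reads" or "interprets" that state over and above the mechanism itself.

        // A toy dispatch loop: the state (instruction array plus program counter)
        // mechanically determines what happens next; there is no further homunculus.
        public class StateDrivenLoop {
            public static void main(String[] args) {
                int[] program = {1, 1, 2, 1, 0};  // 1 = increment, 2 = double, 0 = halt
                int acc = 0;
                int pc = 0;                       // program counter: part of the state

                while (true) {
                    int op = program[pc];         // the state causes the next step
                    if (op == 0) break;           // halt
                    if (op == 1) acc += 1;        // increment
                    if (op == 2) acc *= 2;        // double
                    pc += 1;
                }
                System.out.println("acc = " + acc);  // prints acc = 5
            }
        }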

  20. Hi Callan,

    > I think the problem in the discussion of the reported ‘semantics’ is people expecting a description of the electronic interactions of a computer to somehow be raised up to the heights of ‘semantics’. When really it’s more like the explanation of ‘semantics’ dropping it way, way down to the depths of electronic interactions.

    I think that’s very well put, and I agree completely.

  21. Hi Sci

    > Are you an Idealist then, and all these structures you discuss are in some kind of Uber-Mind?

    No. I don’t think math supervenes on mind. I think mind supervenes on math.

    > One of the problems would be isolating *the* causal relation amidst what I’d assume were the variety of causal relations between these non-spatial math Forms.

    Well, again you seem to be trying to find an objective criterion for intentionality, while I’m saying there is none. I’m not sure I really understand your question though. Perhaps my view will become clearer in my other comments.

    > Also, what does it mean to have causal relations between Platonic Math Forms?

    There’s no causality in a triangle or a circle for a start. You have to think of a structure that includes a time dimension and that has rules for how things evolve and interact within the structure.

    For example, consider a certain starting configuration of Conway’s Game of Life, which constitutes such a mathematical structure. When this configuration is allowed to evolve according to the rules of the game (and you can view this evolution in time as a third dimension, just as much a part of the structure as the configuration at any point in time), we may see gliders flitting about and so on. If one glider collides with another, this may *cause* both gliders to fall apart. I say “cause” but we could just as easily talk about this in terms of entailment, and that’s fine. But in that case I see the causation in this universe as reducing to entailment also.
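
    A minimal sketch of that example in ordinary Java (using Conway's standard birth-on-3, survive-on-2-or-3 rule, with a glider as the starting configuration): once the grid and the rule are fixed, every later generation is strictly entailed by the earlier one, which is all the "causation" inside such a structure amounts to.

        // Conway's Game of Life: each generation follows from the previous one by rule alone.
        public class LifeSketch {
            static boolean[][] step(boolean[][] g) {
                int h = g.length, w = g[0].length;
                boolean[][] next = new boolean[h][w];
                for (int y = 0; y < h; y++)
                    for (int x = 0; x < w; x++) {
                        int n = 0;  // count live neighbours
                        for (int dy = -1; dy <= 1; dy++)
                            for (int dx = -1; dx <= 1; dx++) {
                                if (dy == 0 && dx == 0) continue;
                                int yy = y + dy, xx = x + dx;
                                if (yy >= 0 && yy < h && xx >= 0 && xx < w && g[yy][xx]) n++;
                            }
                        next[y][x] = g[y][x] ? (n == 2 || n == 3) : (n == 3);  // survive / be born
                    }
                return next;
            }

            public static void main(String[] args) {
                boolean[][] grid = new boolean[8][8];
                // A glider near the top-left corner.
                grid[0][1] = grid[1][2] = grid[2][0] = grid[2][1] = grid[2][2] = true;

                for (int gen = 0; gen < 4; gen++) {
                    grid = step(grid);  // after four steps the glider has moved one cell diagonally
                }
                for (boolean[] row : grid) {
                    StringBuilder sb = new StringBuilder();
                    for (boolean cell : row) sb.append(cell ? '#' : '.');
                    System.out.println(sb);
                }
            }
        }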

  22. Hi Jochen,

    I missed this bit.

    > I mean, would you attribute cause-and-effect links to the entities in a story, in a book no one is reading at the moment?

    I’ve used characters in books as an analogy for my views in the past. It’s not a bad analogy but it is not perfect.

    Within the context of the story, certain events certainly are caused by other events. Within the context of a dynamic mathematical structure, certain events are caused by other events. Within the context of this universe, certain events are caused by other events.

    Whether you count any of these as real causation depends on your perspective, what context constitutes your frame of reference. To say a story actually has causation you would have to say that a character in the story has a legitimate perspective. I’m open to using that language, but the difference between a character in a book and an observer in a mathematical structure is that a character is an empty shell — it’s all behaviour and no internal information processing. We never get a description of the physical mental states of Oliver Twist. He only becomes a fleshed out person in our imaginations as we project our own experiences onto him. Conversely, an observer in a mathematical model or computer simulation of a universe does have internal state and can process information and so on, so it seems more plausible to me to consider such a person to have an actual perspective.

  23. I think that when we imagine things, we are mostly reassembling bits of past experience.

    Which is, of course, a very good argument against a structural account of experience: if our imaginations of it all depend on actually having had experience, then it is hard to argue that ‘in principle’, it could nevertheless be so reduced.

    For instance, we can imagine that the robot can emulate human brain states, and the book could describe the brain state of a human experiencing red.

    Well, of course, if you could do that, you could also clone… 😛

    Well, I’m a bit of a qualia-eliminativist, like Dennett. I don’t think there is anything to describe. I wouldn’t go so far as to say qualia don’t exist, but I think that qualia are nothing more than the handles or labels we give for certain brain states and sensations. They are without content. The very idea of communicating a raw feel from one mind to another is in my view a category mistake.

    That’s dancing around the point a bit, though. Whether or not you think qualia exist, whatever it is we experience, to be capturable in structural terms, must be capable of being communicated; but it’s not merely that nobody can provide such a description, nobody even knows what such a description would conceivably look like. Absent at least some pointers in that general direction, I think it’s a bit much to simply take on faith that a structural account ought to be possible—kind of like when you believe there are only white swans, and you find a black one, that it must in fact somehow be a white swan. Certainly, that’s a possibility: someone could have painted it, it could have been in an oil spill, or whatever else; but in order to put that possibility on the table in a meaningful sense, at least one such story needs to be proposed. No such story exists with respect to the structurification of qualia.

    No, because I’m not imagining that the mental representation needs to be read and translated again. It’s already in the “machine language” of the brain, and as such it has all the direct physical causal powers it needs to enable information processing.

    But for this, you wouldn’t need any mentalese; you could use simple English, or whatever else your language is. The translation step does not in fact accomplish anything: certainly, you can imagine symbols that, by properties related to their shapes, have various physical effects; but in order for this to be convincing, you also need to give an account as to why the resulting entity is not a zombie.

    Or, in other words, the mentalese level does not represent anything; it merely is a set of physical causal links. But the only way we know how to imbue such a structure with intentionality is deriving it from the pre-existing intentionality of human minds.

    Rather, what is actually going on is that the state is *causing* the CPU to do one thing or another. There is a physical mechanism underlying the whole thing.

    Similarly, mental representations cause your brain to process informationin various ways.

    No argument there, but the explanatory gap is not closed by this, but merely ignored. You are describing a way to do everything we seem to do without recourse to concepts like meaning or aboutness; and I don’t contest that that’s possible. But the question is that since to us, there is meaning and aboutness, where does it come from if all that goes on underneath is what you describe?

    Again, I find it vastly easier to just accept that no, structure isn’t all there is. I mean, why should there be? Is there some a priori argument that all that can exist is structure? Consequently, our descriptions simply fail to capture some non-structural aspects of the world—the map, as the old saying goes, simply isn’t the territory, and taking it to be is nothing but a level confusion.

    Then, all the problems I outlined simply become what one would expect, without any need for hope or faith that even though it’s inaccessible to us due to our wiring or whatever, there is still some mechanism that somehow produces our phenomenal, intentional experience of the world from the structural ground. Frankly, I can see no reason to suppose this to be the case.

    Perhaps not, but I don’t think it’s redundant because it helps to avoid certain problems if we assume mathematical objects exist, at least in *some* sense. More on that here.

    Well, I think that’s too diluted a concept of existence for me. On it, Long John Silver would also exist in ‘some’ sense, and so on.

    In fact, I think that’s not too different from the term ‘subsistence’ as Peter uses it. I have no qualms with admitting the subsistence of Platonic structures, but of course, that doesn’t suffice for a mathematical universe, since subsisting Platonic structures need to be embodied in a pre-existing substrate. It seems to me, however, that you must be committed to a stronger claim of existence regarding the Platonic realm, since, after all, in the mathematical universe it provides the foundation for the existence of all further objects.

    I don’t follow, I’m afraid.

    Well, I can’t for instance imagine the structure (or relation) of entanglement; but I can use symbols to embody this structure, which I can meaningfully manipulate and come to true conclusions (well, the process is rarely that straightforward in practice, but you get the gist).

  24. I’m open to using that language, but the difference between a character in a book and an observer in a mathematical structure is that a character is an empty shell — it’s all behaviour and no internal information processing.

    Well, actually, I would have said that the character in the book and in a mathematical structure are precisely alike in this regard…

    Regardless, my point was merely that for there to be causality, there ought to be something happening—but an unread book is just a series of scribbles, except if you want to introduce some sort of Platonic realm in which it’s actualized in some sense. But if that book is read, it produces mental objects that do have causal effects on one another. Hence, I would not think that there is any causality at all associated with the story as such, but rather, with its reading—and thus, in a sense, instantiation.

  25. Jochen: “If one of them were in fact not receiving those stimuli, but the subject could report them in such a way that the other nevertheless had the same visual experience (without resorting to just detailing the experience), then I think we would have a case about an experience being conveyed structurally.”

    The interesting fact in the SMTT experiment is that neither the subject nor the independent observer is receiving stimuli that remotely correspond to the vivid hallucinations they experience. Since the induced hallucination is created and systematically shaped by the subject, why wouldn’t you call it a structurally conveyed conscious experience?

  26. why wouldn’t you call it a structurally conveyed conscious experience?

    Because the causative agent of the experience still is the visual stimulus—that experience is not a faithful representation of the stimulus, but that’s quite generally the case. If one of the subjects lacked that stimulus, and the same experience were nevertheless induced in them via the other subject’s report (without that report being grounded in shared particulars of visual experience), then the experience would have been communicated.

  27. Hi Jochen,

    > if our imaginations of it all depend on actually having had experience

    Because the way that having certain experiences, i.e. sensory input, affects the structure of the brain is different from the way reading text does. You can after all tell the difference between riding a bike and reading about riding a bike.

    So, Mary’s brain can be put in the state of perceiving red by having the right signals passed along the optic nerve. She cannot be put in the state of perceiving red by learning a set of propositions. Each of these activities affects the structure of the brain in a different way.

    However, once she has been put in a state of perceiving red, she has gained the ability to dimly recall that state by imagining red.

    I’ll just say I think it is unlikely in practice that it would pan out quite like this. If this experiment were actually carried out, my guess is that Mary’s colour detection ability would have atrophied and she would perhaps be unable to perceive colour at all by the time she was an adult. Perhaps she could relearn it, but only gradually.

    > whatever it is we experience, to be capturable in structural terms, must be capable of being communicated

    That’s just what I rejected in my last comment to you. I reject completely the assertion that humans have the ability to communicate and reproduce the structural aspects of their brain states. Yes, these aspects can be scribbled down, but reading a description of a brain state does not mean you can reconstruct that brain state with your actual neurons. At best, it might mean that you could emulate that brain in a Chinese Room scenario where you wouldn’t actually be experiencing what that brain is experiencing but allowing another instance of it to supervene on yours.

    > No such story exists with respect to the structurification of qualia.

    My article on Scientia Salon tries to do that. If you reproduce a structure which believes itself to be experiencing qualia, the reproduced structure will also appear to believe itself to be experiencing qualia. I think it is parsimonious to equate this operational, functionalist belief* with actual belief. I think it is meaningless to suggest that one can believe oneself to be experiencing qualia while not actually experiencing qualia at all. The experience and the belief in the experience are one and the same.

    > But for this, you wouldn’t need any mentalese, you could use simple English, or whatever else your language is.

    No you couldn’t, because the native language of the brain is not expressed in English or other human languages. The native “language” of the brain is expressed in neural connections that have causal powers (and I really don’t think it’s accurate to conceive of it as an actual language). Similarly, you can’t have a computer directly run a Java program with no interpretation. You can’t even have it directly run a pattern of ones and zeros in machine code. What it actually runs, the “mentalese” of the computer, is a pattern of voltages with causal powers.

    > you also need to give an account as to why the resulting entity is not a zombie.

    And that’s what you’ll find on my Scientia Salon article. I don’t think the idea of a zombie is coherent. I think that the consciousness we have is just what it feels like to be an entity with the information processing abilities we have. Why should it feel like anything? Because the idea of something that can believe itself to be in pain while not actually being in pain is silly.

    Or, to put it another way, perhaps we are all zombies. Zombies mistakenly believe themselves to be experiencing qualia and perhaps we are no different. In such a case I would say qualia collapse to the belief in qualia rather than that they disappear altogether.

    This account assumes that zombies can believe, while anti-functionalists claim that they don’t believe at all. It depends what you mean by believe, of course. It is possible to give a functionalist account of belief by which they do believe, so to argue that they do not believe is to presuppose the distinction between functionalist belief and para-functionalist belief. The anti-functionalist who insists zombies do not believe is begging the question as much as the functionalist who insists that they do. The way to resolve this is to ask which is more parsimonious — to have two qualitatively different though completely isomorphic kinds of systems of mental states, one functional and unconscious and another para-functional and conscious, or to collapse the distinction and realise that the two systems are no different after all.

    > Or, in other words, the mentalese level does not represent anything; it merely is a set of physical causal links.

    There’s no “merely” about it. As Callan suggested, explaining the relationship between syntax and semantics may be possible not by raising syntax up somehow to imbue it with semantics, but by reducing and dissolving semantics down until all you’re left with is syntax. In this view, semantics reduces to causal links. Having a dynamic causal structure enables a system to interact in an orderly fashion with its environment, as well as to internally reproduce aspects of the structure of its environment in its own structure. My claim is that this is what intentionality and representation are. To represent a tree is (in part) to have a representation linked to an image processing algorithm such that when we see an image of a tree that representation is activated. The representation also has structure in that it has causal relations to other representations, of concepts such as leaves and bark and roots and so on.

    This picture of semantics is radically unintuitive, because when we imagine a semantic web such as this we picture a graph of nodes without labels and despair of seeing how we could ever relate such a graph to the real world without being told what each node is supposed to represent. And indeed we couldn’t, because there is no way to know how to match the graph on the page to the graph in your head. But you don’t need to know how the graph in your head relates to the real world because the graph in your head is all you really know. You only know the real world indirectly, via certain similarities that may obtain between it and the graph in your head. In such a way you are able to navigate the world successfully, just as a robot might; however, the mental representations you have of the objects in the world are no more intrinsically meaningful and intentional than the electronic representations in the computer of the robot (which is not actually to say that they are not meaningful and intentional, but if they are then so are those of the robot).
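
    To make that slightly more concrete, here is a minimal Python sketch of the sort of thing I mean (the node names, links and spreading-activation rule are purely illustrative inventions, not a claim about how brains implement this):

    # A toy "semantic network": meaning as nothing over and above causal links.
    # The node names are labels for our convenience; the network itself only
    # ever uses them as keys.
    links = {
        "tree":  {"leaf", "bark", "root", "plant"},
        "leaf":  {"tree", "green", "plant"},
        "bark":  {"tree"},
        "root":  {"tree", "plant"},
        "plant": {"tree", "leaf", "root"},
        "green": {"leaf"},
    }

    # A crude stand-in for the image-processing stage: "seeing a tree"
    # activates the "tree" node, and activation spreads along the links.
    def activate(seed, steps=1):
        active = {seed}
        for _ in range(steps):
            active |= {n for node in active for n in links.get(node, set())}
        return active

    print(activate("tree"))   # e.g. {'tree', 'leaf', 'bark', 'root', 'plant'}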

    > since to us, there is meaning and aboutness

    And if you were a zombie you would be asking exactly the same question. If zombies can also believe there to be meaning and aboutness, then perhaps we are zombies or zombies are incoherent (take your pick).

    > It seems to me, however, that you must be committed to a stronger claim of existence regarding the Platonic realm

    You can interpret me two ways, either of which is fine by me. You can take me to be saying that mathematical objects *really* exist, or you can take me to be saying that nothing *really* exists. It depends what you mean by “real existence”. In fact my view is to regard that concept as incoherent unless clarified with respect to some particular framework.

    (This paper by Carnap goes some way towards explaining this view: http://www.ditext.com/carnap/carnap.html)

    > the character in the book and in a mathematical structure are precisely alike in this regard…

    Well, not precisely alike. The state of a fictional Oliver Twist’s neurons is completely undefined. The state of a mathematical analogue’s neurons is precisely defined. In a mathematical model, we could run a simulation forward and back and find out what happens to the characters outside the scope of the story’s timeframe. What happens outside the scope of what is explicitly related in a novel is undefined. The people in the mathematical model are therefore a little more “real” from a certain perspective.

    > my point was merely that for there to be causality, there ought to be something happening

    OK, but you can say that there are events happening *in the book*. Whether something is actually happening therefore depends to a certain extent on your perspective. From the perspective of an observer in our universe, I will agree that nothing is happening. From the perspective of a protagonist in the book, things are indeed happening. Whether the protagonist actually has a perspective is questionable, more questionable I feel than an observer in a mathematical object.

  28. I think I’m still confused as to the nature of these Platonic – or whatever – structures. Do these exist in a non-spatial reality? If so, I’d need some explanation of how anything exists non-spatially, as I’m assuming this is different from being 0-dimensional.

    And if there is a non-spatial reality giving rise to this one, how exactly are we to account for time? I think the idea of time as a spatial 4-D axis is actually rather flawed – I think Bergson had the right of this much at least – but even so I’m not sure how a Mathematical Universe would be affected by the “flow” of time.

  29. Jochen in 23: “I find it vastly easier to just accept that no, structure isn’t all there is. I mean, why should there be? Is there some a priori argument that all that can exist is structure? Consequently, our descriptions simply fail to capture some non-structural aspects of the world.”

    I think it’s logically the case that the basic representational elements used in descriptions, e.g., qualia in the phenomenal realm (“the chair appears red”), are the non-structural building blocks of structured descriptions, hence themselves won’t be amenable to description. They are ineffable and incommunicable: we can describe the chair as being red but not red itself as an element of one’s experience of the chair, a quality-based, phenomenal representation. But I’m not sure whether there are non-structural aspects of the *world* (as opposed to our representations of it) that are eluding capture in our descriptions of it.

  30. Hi Sci,

    > Do these exist in a non-spatial reality? If so, I’d need some explanation of how anything exists non-spatially, as I’m assuming this is different from being 0-dimensional.

    You seem to be trying to visualise these things floating around in another plane of existence. I think that is the wrong way to think about it. The nature of these objects is just to be self-contained mathematical objects. They don’t exist in a place or at a time. They just exist. In what way do they exist? Well (rather unhelpfully) just in the way that mathematical objects do!

    We are all familiar with some mathematical objects. Presumably we are familiar with the intuitions that lead some people to feel that they have objective properties: that new insights are discoveries rather than creations or inventions. Mathematical Platonism just takes this intuition as something more than metaphorical; it is the view that the language of existence is appropriate when discussing mathematical objects.

    An example of a mathematical object which exists non-spatially is the number 5. You might say either that this doesn’t really exist or you might say that it only exists because we’re thinking of it and so it exists spatially in our brains. Let’s take each horn of this dilemma.

    If it doesn’t really exist, OK, but there are other objects which exist even less. Consider the concept of the largest prime number. Intuitively, it might seem like there could be such a thing. But Euclid showed there wasn’t. So even if 5 doesn’t *really* exist, it is at least a little more real than that. And that little smidgen of reality is what I take to be mathematical realism.
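
    For anyone who wants Euclid’s argument spelled out: multiply any finite list of primes together and add one; none of the listed primes divides the result, so some prime must be missing from the list. A throwaway check in Python, purely illustrative:

    # Euclid's argument, illustrated: for any finite list of primes,
    # their product plus one has a prime factor not on the list.
    primes = [2, 3, 5, 7, 11, 13]
    n = 1
    for p in primes:
        n *= p
    n += 1                                   # 30031 = 59 * 509
    print(all(n % p != 0 for p in primes))   # True: no listed prime divides n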

    OK, so let’s say it does exist, but only in our minds. If we both think of the number 5, I would like to think that we are in some sense thinking of the same object, as is the case when we think of Obama, say. But if the number 5 exists only in your mind and in mine (unlike Obama), then how can we be thinking of the same thing? We must each have our own private number 5 which just happens to be quite similar in its properties. While this may be tenable I prefer to be generous enough with the concept of existence to say the number 5 exists independently of us.

    > And if there is a non-spatial reality giving rise to this one, how exactly are we to account for time?

    Time is internal to the mathematical structure, as in the Conway’s Game of Life example I gave earlier. I think the concept of time as a spatial 4D axis is about right, although I understand that it is quite unlike the other axes in many respects.
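
    To make “time is internal to the structure” a little more concrete, here is a minimal Game of Life step in Python (the glider and the number of generations are arbitrary choices): the “flow” of time is nothing over and above the succession of generations defined by the update rule, with no external clock anywhere.

    from collections import Counter

    # One Game of Life step: "time" is just the index of the generation,
    # defined entirely by the update rule.
    def step(live):
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    history = [glider]
    for t in range(4):                 # successive "moments" of the structure's own time
        history.append(step(history[-1]))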

  31. Because the way that having certain experiences, i.e. sensory input, affects the structure of the brain is different from reading text. You can after all tell the difference between riding a bike and reading about riding a bike.

    But that’s all still just due to incidental features of human neuroanatomy. My problem is that, contrary to the case of riding a bike, I can’t even imagine how this would work for some being lacking this limitation. That is, while reading about bike-riding, I can form a picture of what bike-riding is and how it works, even without ever having ridden a bike or seen one; and I think plausibly even without having had any of the relevant bike-riding subexperiences. As long as I know some basics about the laws of motion and of human anatomy, I can work out a picture of precisely what happens—or I can at least see how such a picture could be worked out by a sufficiently powerful intellect.

    This seems vastly different with regards to subjective experience. I can’t at all form a picture that convinces me that in some way, a sufficient intellect capable of self-modification could induce these experiences in itself without having ever had them. You’re telling me that there is an explanation that somehow does just that, and of course that’s possible. I’m just not ready to take it on faith.

    Ultimately, what you’re doing is just a case of special pleading: all other structural facts can be conveyed via textual description; however, phenomenological experience seems impossible to convey in this way. But since you’re invested in the idea that it nevertheless must be structural, you come up with an ad hoc hypothesis that it’s just due to the wiring of the brain that this doesn’t work.

    And I’m still not seeing the relevance of that, by the way. Again, in the bicycle example, even though I can’t rewire myself in order to learn it from a book, I can still appreciate that the structural description provides me with all there is to know about riding a bike. But once more, this doesn’t seem to be the case with phenomenal experience. Yet your commitment to the idea of the structural nature of experience forces you to posit that it would be possible in principle, but due to some special features, this possibility simply isn’t manifested. If you weren’t committed to the idea of the structural nature of phenomenality, would that sound convincing to you?

    What it actually runs, the “mentalese” of the computer, is a pattern of voltages with causal powers.

    Exactly. And the only thing that endows these voltage patterns with any meaning at all is the computer’s user; without this interpretation, they could be ‘about’ anything. The same problem exists with neuron firings; and yet, we do experience a single definite meaning to our mental states.

    This account assumes that zombies can believe, while anti-functionalists claim that they don’t believe at all.

    Well, full disclosure: I don’t think zombies are possible, or indeed conceivable, either. But I also don’t think that the kinds of processes you have in mind suffice to have anything that could be meaningfully called a ‘belief’. A computer merely flashing the words ‘I believe I’m conscious’ on the screen, or flagging a certain memory sector, or flashing some diode with the label ‘is conscious’, certainly wouldn’t suffice to actually believe. For believing, you need a believer; so the account that consciousness is equivalent to the belief of being conscious is circular (even though I held exactly the same view for a long time, as you can convince yourself if you care to on my long-defunct blog).

    I think that this kind of argumentation often suffers from a kind of veil of complexity. It’s easy to believe that for some sufficiently complex system such that one can’t hold all its details in the mind, something somehow happens to bring forth the ghost in the machine; but the basic capabilities of such a system can be exemplified in much simpler cases, where I think the lack of any phenomenal experience—or at least the absence of any mechanism to bring it about—is more obvious.

    So consider the classic thermostat as the most simple example of a control system. It receives stimuli from the environment, and acts accordingly. Let’s say above some temperature, it ceases to heat the environment, and below a second temperature, it starts heating again. Let’s imagine that it represents its state by a diode that lights up whenever it heats. Further, if you (or Callan) insist, we can add a small photodiode that closes a circuit whenever the first diode is lit, thus monitoring its internal state. But that’s, more or less, qualitatively all that goes on in the most sophisticated computational systems.
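
    Written out explicitly (a throwaway Python sketch; the threshold values are arbitrary, and heating/monitor are just my labels for the diode and the photodiode), that is essentially the whole system:

    # The whole thermostat: two thresholds, one state light ("diode"),
    # and a "photodiode" that merely registers whether the light is on.
    class Thermostat:
        def __init__(self, low=18.0, high=22.0):
            self.low, self.high = low, high
            self.heating = False           # the diode: lit while heating
            self.monitor = False           # the photodiode: mirrors the diode

        def sense(self, temperature):
            if temperature < self.low:
                self.heating = True
            elif temperature > self.high:
                self.heating = False
            self.monitor = self.heating    # "monitoring its internal state"
            return self.heating

    t = Thermostat()
    for temp in (17.0, 19.0, 23.0, 20.0):
        t.sense(temp)                      # heats, keeps heating, stops, stays off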

    So either, this model system does not give rise to phenomenal experience (as would be my expectation). Then, it needs to be explained how just the same bag of tricks in a brain or an AI suddenly acquires this capacity.

    Or, there is phenomenal experience associated with it, as for instance Giulio Tononi thinks, in a commendable effort to embrace all the consequences of his model. But then, in such a simple system, there should be a straightforward story regarding how this experience comes about. Certainly, the exhaustive description of the system is not too complex to keep in mind; and neither is its functioning. And there also should be no issues of brain-rewiring.

    Of course, you might point out that in this case, I can’t appreciate the phenomenology of the thermostat because I simply am not a thermostat (well, to the best of my knowledge, anyway). But then, I’m not a frog, and I can perfectly well form an understanding (from reading about it in a book) of how a frog swims—because that’s amenable to characterisation in structural terms. Again, experience, if you want to continue insisting that it is structural, seems to have some special peculiar properties.

    OK, but you can say that there are events happening *in the book*.

    But this already presupposes an independent existence to those events, which is however (wrt Platonism) what is in dispute.

  32. Tom Clark:

    But I’m not sure whether there are non-structural aspects of the *world* (as opposed to our representations of it) that are eluding capture in our descriptions of it.

    Well, I do consider our representation of the world to be an aspect of the world, so if that does have non-structural aspects, then this means that there are non-structural aspects to the world. Otherwise, I agree with pretty much all of your post.

  33. Hi Jochen,

    > But that’s all still just due to incidental features of human neuroanatomy.

    Sure, so what? I never denied that there could in principle be a being that could totally rewire the structure of its brain at will and so communicate experiences perfectly with language. It seems doubtful that such a being would evolve naturally as this ability seems rather dangerous and maladaptive to me.

    > I can’t even imagine how this would work for some being lacking this limitation

    So, say you give me a complete description of the state of your brain at a certain point of time, by for instance listing the states and connections of all your neurons, or even all the positions and momenta of their constituent particles. I then rearrange all the neurons or particles in my brain to have the same state. I am now in the state you were describing, meaning that I can now experience red or the taste of strawberries or whatever it was you wanted to communicate, at the minor cost of having obliterated my own identity.

    Of course the problem goes the other way too. Though you may experience strawberries, you don’t usually have any idea how this relates to the structural state of your brain, so you can no more produce such a description than I can put it into action.

    > all other structural facts can be conveyed via textual description

    There’s all kinds of other things that can’t be conveyed either. As noted, you won’t be able to juggle just because I tell you how to juggle. You won’t be made happy just because I tell you about happiness. You won’t come around to my point of view no matter how long we discuss this stuff on Peter’s blog comments section!

    > you come up with an ad hoc hypothesis that it’s just due to the wiring of the brain that this doesn’t work.

    I don’t think it’s an ad hoc hypothesis that we lack the ability to alter the structure of our brains to whatever we wish by an act of will. To be honest, I think my answer completely defeats your objection that we ought to be able to communicate brain states such as qualia. I have shown that there is no reason to suspect that structuralism (or functionalism) implies anything of the sort. I may not have made the case that this is all structural, but neither do I think that your objection shows what it purports to show.

    > I can still appreciate that the structural description provides me with all there is to know about riding a bike

    It provides you with all the propositional knowledge, certainly. If that’s how we want to conceive of knowledge. But it doesn’t really tell you what it is like. You’re missing the phenomenal experience of it just as much as in a failed attempt to describe any other qualia. The difference with the bike is you have some chance of reconstructing a facsimile from similar experiences.

    My position is that what it is like to experience qualia is not really knowledge at all. It is certainly not propositional knowledge. To know what the colour red looks like is in fact to have the ability to conjure up the brain state which corresponds to imagining the colour red. You can’t do that until you’ve experienced it by having certain signals passed up your optic nerve. This does something to your brain state that you have no other way of triggering without some kind of brain surgery.

    > If you weren’t committed to the idea of the structural nature of phenomenality, would that sound convincing to you?

    Yes, I rather think that I would still think this defeats the argument you are putting forth. The thesis that all brain states are structural is simply not refuted by the ineffability of qualia, because if brain states are structural then qualia have something to do with the disposition of neurons and we don’t have the ability to rewire our neurons at will. To me, this is open and shut. I haven’t demonstrated that qualia are structural but your argument simply doesn’t stand.

    > Exactly. And the only thing that endows these voltage patterns with any meaning at all is the computer’s user;

    Or, in the special case that these voltage patterns form part of a self-aware causal network (a mind), that mind itself. Which is the point in question, of course.

    > For believing, you need a believer

    OK. So, in my view, believers are pretty complex information processing structures such as human minds which form what could be interpreted as representations of propositions and then make decisions (i.e. intentional seeming behaviour) in accordance with those representations. Anywhere you have such a structure, whether on a computer or in a brain or even just in an abstract mathematical world, you have a believer.

    So, I’ll agree with you that most computer programs are not believers because they are not particularly complex and the way they process information has very little in common with how human brains do.

    But in the hypothetical case that the computer program includes a faithful simulation of what is physically happening in a brain, then that brain will appear to be believing things. It will be forming representations of propositions and it will make decisions in accordance with those propositions. I think that the simulated brain is an actual believer. You don’t. I have explained why I think the former is more parsimonious.

    > It’s easy to believe that for some sufficiently complex system such that one can’t hold all its details in the mind, something somehow happens

    I see more continuity than this. I do think even simple computer systems have proto-beliefs. The difference between the beliefs of these systems and my own is only one of complexity, in my view. The relations I hold in my head are rich and deeply interconnected and associated with (brain states corresponding to) various qualia and so on, while a typical relational database is a pretty anemic, sparse graph of empty nodes in comparison.

    > But that’s, more or less, qualitatively all that goes on in the most sophisticated computational systems.

    Sure. And to me that is a trivially simple example of a prototypical intentional system. I see myself as continuous with that, the main difference being that I am a whole lot more complicated.

    > So either, this model system does not give rise to phenomenal experience
    > Or, there is phenomenal experience associated with it,

    I don’t think it is always helpful to think in dichotomies like this. For instance, suppose I were to ask you whether an atom is big. It’s not doing anything different in terms of having volume than, say, the sun, except on a smaller scale. In dichotomous thinking, if the sun is big, it must follow that an atom is big. Or if it isn’t, then there is some magic threshold where an object becomes big.

    The correct answer is that it is true that the atom has some non-zero volume, but this volume is not sufficient to put it into the set of objects we would usually label as *big*. That said, it is big compared to, say, an electron.

    Similarly, I would not normally say that a thermostat has phenomenal experience, but it might perhaps be somewhere on that spectrum. I would note however that it lacks features we might want to consider important such as memory and the ability to have its self-monitoring feed into its information processing in some way (as opposed to just turning on a light). It doesn’t have a representation of self or learn or change over time beyond turning on and off. There are probably many other things in the ‘bag of tricks’ human brains have at their disposal we would want to throw into the mix before being happy to attribute full blown consciousness to something.

    > I can perfectly well form an understanding (from reading about it in a book) of how a frog swims

    I’ll warrant you can also form an understanding of how a thermostat controls temperature! This is not a mystery. What you don’t know is what it is like to be a thermostat. And similarly, you don’t know what it is like to be a frog (or a bat!). You can’t without actually being one.

    > But this already presupposes an independent existence to those events, which is however (wrt Platonism) what is in dispute.

    So, we can (on a fictional-Platonism analogue of mathematical-Platonism) coherently say that these events (fictionally) exist and we can also on materialism say that these events do not (physically) exist. Platonism is just a view on what the term “existence” ought to mean and how it ought to be applied. Again, you can also take me to be denying the coherence of some ultimate standard of objective existence or reality without respect to some particular framework or perspective.

  34. Jochen: “If one of the subjects lacked that stimulus, and the same experience were nevertheless induced in them via the other subject’s report (without that report being grounded in shared particulars of visual experience), then the experience would have been communicated.”

    But the significant fact about the SMTT hallucination is that there is no perception of the visual stimulus when the hallucination is shared, so there is no “report being grounded in shared particulars of visual experience”.

  35. So, say you give me a complete description of the state of your brain at a certain point of time, by for instance listing the states and connections of all your neurons, or even all the positions and momenta of their constituent particles.

    A minor point, but you seem to take for granted that this is possible. But if there is a significant random component to the operation of the brain (like for instance a dependence on the precise quantum state, although I admit that’s probably not realistic), then this can’t be done. But that’s not the reason I see a problem with your account.

    As noted, you won’t be able to juggle just because I tell you how to juggle.

    Well yes, but I can envision a complete structural description of juggling that would enable a mind sufficiently capable of self-rewiring to learn how to juggle. Actually, even I can do that: I just need to practice. That’s basically how I get my neurons to rewire themselves. But no textual description can get me to the point where I just ‘need to practice’ to understand what it’s like to see red.

    And in fact, it’s actually very implausible that the brain needs to do any rewiring in order to be capable of seeing, or imagining, red: there is no delay between being subject to a stimulus and having the associated experience, at least none that is sufficient for any kind of rewiring to take place. Additionally, while solidifying this experience for further recall in long-term memory takes rewiring, it’s immediately afterwards available to me in short-term memory. So I think the analogy between ‘learning to do something’ and ‘understanding what something is like’ breaks down.

    You won’t come around to my point of view no matter how long we discuss this stuff on Peter’s blog comments section!

    Yes, but that’s not because of difficulties communicating, but simply because you’re wrong! 😛

    I don’t think it’s an ad hoc hypothesis that we lack the ability to alter the structure of our brains to whatever we wish by an act of will.

    That’s not the ad hoc hypothesis I meant. I meant the idea that the only way to communicate the what-it’s-likeness is by having the brain re-wire itself. As noted above, it’s very unlikely that any actual re-wiring is necessary in order to experience the redness of red, say; and even if it were, in those cases that we know take re-wiring, say learning a skill, still everything relevant is communicable. I can understand everything about riding a bike without ever actually having done so, and I can imagine how a mind capable of re-wiring would induce this capacity in itself. This isn’t the case for subjective experience. There, all I want to know is how it is done—i.e. I want the analogy of the knowledge that is needed to, e.g., start practicing.

    When I have a complete description of bike-riding, even though it does not enable me to ride a bike, there is no residual mystery of how bike-riding works; but no description of ‘phenomenology-generation’ has yet been given that, so to speak, closes the gap. You seem to think that such a description must exist, because you disbelieve in the existence of non-structural entities; I simply think that the lack of such a description clearly points to experience just being such a non-structural entity. I think there is no good reason for the a priori assumption that everything must be structural, so it’s quite reasonable to demand convincing arguments from somebody holding that position—say, a description of how the faintest inkling of phenomenal experience comes about in the simplest system capable of having it.

    In other words, I see white and black swans, and thus, believe in the existence of both; you hold only white swans exist—I agree that’s possible, but I would want some explanation of why some swans seem black before I could be convinced.

    It provides you with all the propositional knowledge, certainly. If that’s how we want to conceive of knowledge. But it doesn’t really tell you what it is like.

    I’m not asking about what it’s like; I’m asking about how it works. For a sufficiently accurate description, I will afterwards have a complete picture of that: which muscles must contract at what points, how those muscular contractions are translated via the pedals and the gears to a rotational movement of the wheels, how that movement translates via contact friction to a linear movement of the bicycle (and its rider), and so on. That’s all there is to riding a bike. Hence, it can be exhaustively described in structural (or maybe functional) terms. All I want is an equivalent description in the case of subjective experience.
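
    Just to underline how completely the bike-riding chain can be captured, here is its last link as a throwaway calculation (the cadence, gear ratio and wheel radius are made-up numbers):

    import math
    # Illustrative numbers only.
    cadence_rpm = 90          # pedal revolutions per minute
    gear_ratio  = 2.0         # wheel revolutions per pedal revolution
    wheel_r_m   = 0.35        # wheel radius in metres

    wheel_rev_per_s = cadence_rpm * gear_ratio / 60
    speed_m_per_s   = wheel_rev_per_s * 2 * math.pi * wheel_r_m
    print(round(speed_m_per_s, 1))   # ~6.6 m/s of linear motion from pedalling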

    My position is that what it is like to experience qualia is not really knowledge at all. It is certainly not propositional knowledge.

    This, to me, says basically the same as that they’re not describable in structural terms. Well, OK, maybe you could hold that it’s non-propositional (to a brain with fixed wiring) in the same sense that the skill of bike-riding is non-propositional; but I’m not asking, so to speak, for acquiring the skill of qualia-experiencing, I’m merely asking about how that works, which can be completely captured in propositional terms in the bike-riding example, but, for unknown reasons, seemingly can’t be when it comes to subjective experience.

    Or, in the special case that these voltage patterns form part of a self-aware causal network (a mind), that mind itself.

    Well, but that’s just question-begging: yes, if you can arrange voltage patterns to form a mind, then you can arrange voltage patterns to form a mind. But whether that’s possible is the thing in question.

    I have explained why I think the former is more parsimonious.

    Parsimony is not a guide to truth, however. It is useful in theory-building, if it is applied to explanatory entities: it picks out one (in the ideal case) of infinitely many possible theories in the only consistent way we know, and thus, guarantees predictability and hence, falsifiability (otherwise, our present observations would be compatible with any kind of future observations at all, i.e. we would have no effective weapon against ad hoc explanations).

    But you try to apply parsimony to metaphysical categories; that’s a similar error to the one made by people who declare many-worlds theories to be in violation of parsimony because of the infinitely many worlds they produce, when all we have to explain is one. However, on the level of theoretical entities, many worlds is far simpler than things like collapse interpretations and hidden-variables and whatnot.

    Likewise, you need to invoke ad hoc hypotheses to ensure that everything can be interpreted in structural terms; but it would be more parsimonious to let go of that hypothesis in the face of the difficulties it encounters, and simply admit the existence of non-structural properties. But in such a case, the structural/functional equivalence between belief and belief* does not license us to conclude their full equivalence, any more than the structural equivalence between a stack of books under the thickness-relation and a set of people under the ancestor-relation licenses us to conclude that both are the same.

    The difference between the beliefs of these systems and my own is only one of complexity, in my view.

    But that’s just the sort of ‘opaque complexity’-argument I mentioned above: somehow, if you just pile on enough stuff (preferably such that nobody can say with any certainty what goes on anymore), things such as genuine beliefs will pop up. In order for that account to have any plausibility, you’d need at least some sketch of how beliefs are constructed from proto-beliefs, not to mention an account of what, exactly, ‘proto-beliefs’ are and how they come about.

    And to me that is a trivially simple example of a prototypical intentional system.

    See, to me this is the exact opposite of an intentional system: its states have nothing at all to do with temperature; it could just as well be monitoring brightness, sea-levels, height of a plane on autopilot, etc., and be ‘none the wiser’, so to speak. How intentionality, which is exactly about the temperature, e.g., arises from such systems that are completely arbitrarily related to the world is exactly the question.

    I don’t think it is always helpful to think in dichotomies like this.

    But it is dichotomous: either there is (some) phenomenal experience, or there isn’t. The gradation of different levels, qualities, whatever of phenomenal experience only comes into play once we can say that there is any phenomenal experience at all; so talking about the gradation again assumes that it is already present, when how it comes about is exactly what we’re trying to figure out.

    I’ll warrant you can also form an understanding of how a thermostat controls temperature! This is not a mystery. What you don’t know is what it is like to be a thermostat. And similarly, you don’t know what it is like to be a frog (or a bat!). You can’t without actually being one.

    And that is a very succinct statement of why this state of affairs is so mysterious: why can I know how a thermostat controls temperature, how a frog swims—how all these functions are performed—while I can’t know how they generate experience—perform that function? Your argument is basically that there is no relevant difference between swimming and phenomenology-generation; but why, then, are there so many apparent differences and incompatibilities?

    So, we can (on a fictional-Platonism analogue of mathematical-Platonism) coherently say that these events (fictionally) exist

    I don’t think you can coherently say that anything ‘fictionally exists’. I don’t even think there is any meaning to the term ‘fictionally existing’, since fictional entities are precisely those that don’t exist.

  36. But the significant fact about the SMTT hallucination is that there is no perception of the visual stimulus when the hallucination is shared, so there is no “report being grounded in shared particulars of visual experience”.

    Then maybe I don’t fully understand the experiment. Basically, both are subject to the same visual experience, e.g. a triangle being moved behind a narrow slit. Both reconstruct from the fractional experience a triangle; they ‘hallucinate’ it in your words. But why is it surprising that both, subject to the same stimuli, also form the same visual experience?

  37. Jochen,

    When the subject changes the shape of his hallucinated triangle, the stimulus that grounds this change in his conscious content is an unconscious stimulus. When you look over the subject’s shoulder you do not perceive a visual stimulus; you simply hallucinate a triangle that changes its shape in accordance with the subject’s conscious content. Neither of you has the experience of a “fractional triangle”. The subject is directly sharing his conscious content without a mediating perceptual representation of the conscious content, as would be the case if the sustaining stimuli were similar to the shared conscious experience.

  38. Hi Jochen,

    > A minor point, but you seem to take for granted that this is possible.

    Fair enough. I agree that it isn’t, because on QM there are aspects of our structure that are unknowable. But this means that there are structural aspects that can’t be scribbled down on paper, so if the experience of seeing red depends on such aspects (which I don’t personally accept), then that provides an additional reason against your view that all features of our mind structure should be communicable.

    > Actually, even I can do that: I just need to practice.

    Absolutely. And that’s the point. You need not just the knowledge, but some particular activity in order to rewire your brain. The same is true of perceiving red. You need not only the structure delivered as a series of propositions, but some activity in order to rewire your brain accordingly. In the case that you cannot send the right messages along your optic nerve to do this the normal way (by receiving a red visual stimulus), you could instead open up your skull and stick electrodes in the right places or whatever. When it comes to experiencing red, brain surgery is the analogue for practice.

    There is however another problem with the analogy to juggling and the riding of bikes. You can learn to juggle or ride a bike by reading a book and practicing, but the book does not describe brain structure, it describes a physical activity. So what you are communicating in this case is not the actual structure but a particular set of circumstances in which the right brain structure will naturally arise.

    > There, all I want to know is how it is done—i.e. I want the analogy of the knowledge that is needed to, e.g., start practicing.

    So the analogy is to communicate the quale of “red” by telling someone to go look at a red thing. If they cannot follow your instructions, e.g. because they have no eyes or are colour blind, the analogous situation with regard to juggling or cycling would be to give the instructions to a quadriplegic.

    > There is no delay between being subject to a stimulus and having the associated experience

    Right. Because you already have all the brain structure needed to perceive red. You don’t even need a stimulus. You can call the quale to mind (to some extent). Everything you need is already in place.

    But remember that I claim Mary would not see red right away, if ever. I think she would need to learn to see it, much like a stroke victim might need to learn to regain control of her body movements.

    I could be wrong about that, so let’s suppose she has all the right structure in terms of synapses and neurons. In that case, she still can’t imagine red because she lacks the ability to activate the structure in the right way. The change between the Mary who doesn’t yet know what red looks like and the Mary who does may not be in the connectome of the neurons but something in the pattern of neural firing (which can still be viewed as a structural aspect). I don’t know what that change is. Nobody does, to the best of my knowledge. It is whatever is physically different in the brain of a Mary who knows what red looks like and a Mary who doesn’t. And there must be such an objective change, not just a qualitative one, because the Mary before says “I don’t know what red looks like” and the Mary after says “I do know what red looks like”, which is a perfectly objective, measurable difference in behaviour.

    > Yes, but that’s not because of difficulties communicating, but simply because you’re wrong!

    If we could communicate perfectly, you could communicate to me why I am wrong. You could send your brain state to me and I could reproduce it in my head and be convinced.

    > I can understand everything about riding a bike without ever actually having done so

    You can understand all the propositional knowledge about riding a bike without ever having done so. You can imperfectly imagine what it is like by reassembling bits of familiar experience. You can’t actually know what it is like without doing so. You wouldn’t even be able to imagine it very well if you were a quadriplegic your whole life and had little relevant experience to draw on.

    Similarly, ex hypothesi, Mary knows and understands all the propositional knowledge there is to know about the colour red but doesn’t know what it is like to experience it because she has no useful bits of familiar experience to draw on.

    > no description of ‘phenomenology-generation’ has yet been given

    Qualia-eliminativists don’t need to provide a description of phenomenology generation. There is no phenomenology. There are only mental states where we are disposed to believe we are experiencing phenomenology. If we want to say there is phenomenology at all, that’s all it is.

    > I’m not asking about what it’s like; I’m asking about how it works.

    But you can just as well describe how seeing red works, in terms of lenses and retinas and optic nerves and so on.

    > That’s all there is to riding a bike.

    No it isn’t. There’s the feeling of your backside on a bike seat. The feeling of wind in your hair. The strain in your muscles. The cultivated instinct to be alert and aware of your surroundings. The skill of adjusting your weight to balance as you turn, which you will be activating in a latent mirror-neuron sort of way if you really imagine cycling clearly. There are aspects of this which can be reconstructed by a person who has had similar experiences, but no perfect reconstruction is possible from a mere description, and barely any reconstruction at all for someone who doesn’t have similar experiences to draw on.

    So for both bike-riding and red-seeing, there are objective propositions about how it works that can be understood completely. For both, there are also subjective phenomenological aspects that cannot be known without experiencing them, because to know them means having had and being able to recall the brain state that obtains while experiencing them. If you cannot experience them or anything like them, the only way to get your brain into the state corresponding to familiarity with these experiences would be through some kind of brain surgery.

    > but I’m not asking, so to speak, for acquiring the skill of qualia-experiencing

    Hmm, OK, I thought you were, because to me “knowing what red is like” is simply “having the ability to recall the (brain state corresponding to the) experience of red”, which can be construed as a skill.

    > I’m merely asking about how that works, which can be completely captured in propositional terms in the bike-riding example

    The two cases are equivalent. To ride a bike, you sit on the bike and move the pedals. To perceive red, you look at a red object. If you don’t have a bike you can’t learn to ride a bike. If you don’t have a red object you can’t perceive red.

    > Well, but that’s just question-begging: yes, if you can arrange voltage patterns to form a mind, then you can arrange voltage patterns to form a mind. But whether that’s possible is the thing in question.

    Agreed. But, as is often the case with question-begging, the accusation goes both ways. To insist that the voltage patterns cannot form a mind is just as fallacious. And that’s my point.

    > Likewise, you need to invoke ad hoc hypotheses to ensure that everything can be interpreted in structural terms

    I don’t think I’m doing that though. I think there are serious problems with your analogy to bike-riding as I’ve been pointing out in this comment, so as for right now I don’t accept that I’m constructing an ad hoc hypothesis to explain the difference between bike-riding and seeing red. I think I’m treating the two cases pretty similarly and consistently.

    But if it were the case that I were forming an ad hoc hypothesis, being inconsistent or engaging in special pleading, I would agree that you have a point and my view would not be as parsimonious as I claim it is.

    > if you just pile on enough stuff (preferably such that nobody can say with any certainty what goes on anymore), things such as genuine beliefs will pop up.

    Well, kind of, but not spontaneously or suddenly. We get things that are more and more belief-like until eventually we just call them beliefs. There’s no magic special sauce there. And understanding what goes on doesn’t make the beliefs go away.

    > not to mention an account of what, exactly, ‘proto-beliefs’ are and how they come about.

    A proto-belief would be something like the state of a record in a relational database. The record “proto-refers” to an object in the real world in light of two relations I’ll illustrate now (neither of which is absolute, so both are somewhat open to interpretation, but then I don’t think any intentional states are absolute).

    Firstly, there’s a causal connection. When something happens to the corresponding real-world object, the database is updated to reflect that. Making a change to the database may even cause something to happen to the real-world object.

    Secondly, there’s a structural similarity. The database record has references to other records which parallel the relations between the real-world object and other real-world objects.

    When the database says that my gender is male, to me that is a proto-belief that my gender is male. It’s not the same as that belief in a human because the database has no detail about what being male entails. To it it’s just a contentless flag. But as you flesh out the computer’s knowledge base to be more detailed, approaching that of a human, provide links as appropriate between records and sensory input, and give the computer greater powers to process the data in more and more sophisticated ways, then you’re getting to what I would call real beliefs.
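
    A toy version of the above, just to fix ideas (the tables, field names and the on_world_event function are all invented for the example; nothing hangs on the details):

    # A toy "proto-belief": a record that proto-refers to a real-world object via
    # (1) a causal connection (updates track the world) and (2) structural
    # similarity (links between records parallel relations between the objects).
    people = {
        "DM":    {"gender": "male", "employer": "acme"},
        "alice": {"gender": "female", "employer": "acme"},
    }
    employers = {
        "acme": {"employees": ["DM", "alice"]},
    }

    # (1) causal connection: an event in the world updates the record
    def on_world_event(person, field, value):
        people[person][field] = value

    # (2) structural similarity: relations between records mirror relations
    # between the corresponding real-world objects
    assert "DM" in employers[people["DM"]["employer"]]["employees"]

    on_world_event("DM", "employer", "globex")   # the world changes; the record tracks it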

    > it could just as well be monitoring brightness, sea-levels, height of a plane on autopilot, etc., and be ‘none the wiser’, so to speak.

    OK. So from the point of view of the thermostat, its intentions relate to some quantity being in one state or another. For our convenience we can use that same information processing algorithm to control whatever we like as appropriate and the thermostat doesn’t know the difference. Nevertheless, I think its intentions are really about this abstract quantity (and only incidentally about temperature or altitude). As I said earlier regarding human intentions, what they are really about is our internal representations and only approximately or indirectly about real-world objects. So, when the thermostat mechanism is placed in different contexts, the causal connection changes what this abstract quantity happens to refer to in the real world.

    In the case of the thermostat, then, unlike (we suppose) in the case of humans, we seem to have intentional states directed at a relatively abstract entity: a specification which can be satisfied by more than one physical thing. But this is not so unlike humans after all. A person may hate the killer of a loved one without knowing who that is. Like the thermostat, which knows about a quantity that could be temperature or altitude depending on the context in which it is deployed, the intention of the person could be satisfied by Andy or by Beatrice depending on which of them killed Carol.

    As intentional states get more and more detailed, the number of physical entities which satisfy the description gets smaller and smaller. Often, in humans, the specification is so detailed that there is only one physical entity (if any!) which matches best. The same would be true of any computer system that had as detailed a representation.

    > The gradation of different levels, qualities, whatever of phenomenal experience only comes into play once we can say that there is any phenomenal experience at all

    I’m afraid I simply don’t think of it like this. I guess if you wanted to be unsympathetic you could paint me as some sort of panpsychist, if that helps you see what I’m getting at. But the difference between me and panpsychists is that they mostly seem to think there is a possible world of philosophical zombies and I don’t. Where they might be physical panpsychists I might be described as more of a structural or mathematical panpsychist.

    But, to be clear, I think that whatever glimmer of phenomenal experience there may be in simpler systems is so minuscule that it is perverse to describe them as having phenomenal experience at all. I’m also pretty comfortable with views that specify some minimal set of criteria that have to be met before we start talking in these terms, but I think it is wrong to be overly precious or absolutist about what these criteria might be. I don’t for example think there is a fact of the matter about whether insects or fish are conscious or experience pain. It depends what you mean by consciousness or pain.

    > while I can’t know how they generate experience

    Experience is not generated. It is just what it is like to be a certain structure. There’s no extra work needed to create an experience. It would not be possible to be such a structure, having such functions, without there being this experience. Asking how the structures generate experience is like asking how they generate complexity.

    > I don’t even think there is any meaning to the term ‘fictionally existing’, since fictional entities are precisely those that don’t exist.

    Sherlock Holmes’s brother (Mycroft) fictionally exists in the works of Arthur Conan Doyle. Sherlock Holmes’s wife does not (to the best of my knowledge at least).

  39. @Arnold: Are there some good diagrams/videos of this illusion? I feel as if I understand the basics, but am at a loss how this illusion fulfills what Jochen is asking for – an account of the qualitative subjective in terms of quantitative structure and dynamics.

    @Jochen: Curious, have you read Clifton’s Empirical Case Against Materialism? ->

    http://cogprints.org/3481/1/An_empirical_case_against_materialism.pdf

    He makes the same criticism you do, that there seems to be an often overlooked problem with the inability to provide even a rough sketch of how the quantitative/qualitative gap might be bridged. Instead there are appeals to “complexity”.

    To repeat myself ad nauseam, personally I don’t take this as a reason to stop scientific investigation, which I see as an ontologically neutral exploration of the world as it presents itself to our instruments of measurement.

    It does, however, provide a good argument (IMO) against leaping beyond said world toward philosophical positions that outrun what is currently known. (I’ve come to suspect a lot of this is about politics, grabbing money, and snatching at prestige rather than accepting the state of honest ignorance we find ourselves in.)

  40. When the ‘inability to bridge the gap’ comes from people insisting ‘but you haven’t explained how the dragons are there!’, it doesn’t sound so legitimate. But it seems like people need their qualia and such concepts utterly explained away before they’ll consider the hypothesis that qualia and such are superstitions!!

    I’ll contend that if you really, really think dragons are there, then no one is going to prove otherwise to you. But it won’t be from being correct. For those who really, really think the reported notions of qualia and subjective experience are existent things that are there exactly as they report them (with not even a small discrepancy between what they report and what is), likewise no one is going to prove otherwise to them.

    If you can humour that your dragons might be something else – like dinosaurs, you’re in with a chance.

  41. @Callan: As Jochen has repeatedly noted, even if one is to explain away qualia there still needs to be an explanation for why the appearance of qualia is there. (The whole “Who is being fooled?” problem at the top of the page.)

    As such, if we need to amend our viewpoint from dragons to dinosaurs, then it seems you’re hoping to shift the opposition’s philosophical position. This still leaves the scientific question unanswered which, if it depends on a particular philosophical stance to be seen as legitimate, would to me suggest an act of metaphysical faith in how the scientific evidence is supposed to be interpreted.

    That would, of course, make the solution inadequate as the evidence alone should suffice to invalidate the opposition’s argument that accounting for qualia in terms of structure and dynamics is impossible.

    Of course science should be given every opportunity to find this evidence, rather than simply give up. I’m happy to consider the possibility that qualia are illusory, and we don’t ever actually think about things, just as I’m happy to consider the Idealist argument that consciousness is all there is. Even better if such claims lead to a theory that can be falsified – as Sergio notes having a theory can at least provide a direction to investigation.

    But at the same time one must acknowledge the true depth of the problems posed by subjectivity & intentionality. We all seem to have thoughts about things, we all seem to have qualitative experiences that defy quantitative description. This is where the comparison to hallucinated dragons falls apart – to have mistaken beliefs requires intentionality, to have mistaken perceptions requires subjective experience.

  42. Sci,

    even if one is to explain away qualia there still needs to be an explanation for why the appearance of qualia is there.

    Yes, but the way people seem to test whether an explanation of the appearance of a dragon has occurred is to test whether the explanation was about dragons. If the explanation was about dinosaurs, people respond as you do now and say no explanation of the appearance of dragons was given!

    Look at the discussion with Disagreeable Me – the responses are ‘where is the subjective experience in that math?’ ‘where is the qualia in that computer?’

    It’s all ‘There is no dragon in what you are talking about! Therefore you have failed to explain anything’ refutations!

    If qualia don’t exist, then an explanation of qualia that does not mention qualia even once OUGHT to be enough. You’d agree? Sure, you’d say they exist. But if they didn’t, that ought to be enough, right?

    As such, if we need to amend our viewpoint from dragons to dinosaurs, then it seems you’re hoping to shift the opposition’s philosophical position.

    To me, getting someone to temporarily humour the idea that their A is really B and getting someone to just accept that A is B are quite different things. You’re depicting me as asking for the latter, when I’m asking for the former.

    To me the difference is so clear that it seems we’re talking right past each other when you take me to mean the latter.

    I’m happy to consider the possibility that qualia are illusory, and we don’t ever actually think about things

    What rough models of how that possibility would work have you made up? Or worked with someone else’s rough model?

    I’m inclined to think that when someone doesn’t figure out or work with some model (no matter how rough) of how it could work, it’s because the ‘possibility’ of it is so minute to them it’s not really a possibility – just like we don’t make a detailed model of how we’d spend our lottery winnings, because we just don’t think it’ll happen. But we grant the possibility we could win…even as we don’t think about it at all.

    I have to say, I don’t consider treating the possibility of illusory qualia the way we treat the lottery to really be thinking about the possibility.

    While I can see fair reason to be disinclined to put work into such a model, I don’t consider giving it that lottery-style consideration to be open-minded on the subject. ‘Yeah, we could win…but I’m not going to write up a spreadsheet on spending it!’ is valid in practical terms, but I don’t think it’s being open-minded on the matter. The same goes for granting the possibility of qualia being illusory.

  43. Sci: “… but am at a loss how this illusion [SMTT] fulfills what Jochen is asking for – an account of the qualitative subjective in terms of quantitative structure and dynamics.”

    First it is important to recognize the SMTT phenomenon is not an illusion; it is a hallucination. When the *vertically* oscillating dot in the slit reaches a threshold rate of oscillation (4 cycles/sec), the subject suddenly cannot see the slit or the dots and instead has a vivid conscious experience of a complete triangle moving back and forth *horizontally* on the screen. There is no such image on the screen. This is obviously a striking qualitative subjective phenomenon that is predicted by the quantitative operating characteristics of the neuronal structure and dynamics of the putative retinoid mechanisms in the subject’s brain. Isn’t this the kind of account that Jochen is asking for?

  44. @Callan:

    “If qualia don’t exist, then an explanation of qualia that does not mention qualia even once OUGHT to be enough. You’d agree? Sure, you’d say they exist. But if they didn’t, that ought to be enough, right?”

    Not at all, since it’s thoroughly unsatisfying to say an experience is nothing but the movement of matter, mathematical structure, or whatever. That sounds too much like magic to me unless there’s an explanation for why the structure and dynamics – whatever they are made of, matter or math – results in qualitative experience (or the illusion thereof).

    On the flip side, it’s just as unsatisfying to hear IIT evokes the consciousness potential in all matter. Simply accepting consciousness as fundamental is also a leap beyond the evidence we have, as much as declaring it to be nothing more than the running of a particular – but as yet unknown – program on a Turing machine.

    As Feynman once noted, good science comes from accepting mysteries exist and being comfortable with them. As evidence comes in and theories are tested, we can change our minds in due course. If computationalists can solve the Hard Problem or show how the derived intentionality of a program can make the leap into intrinsic intentionality, I’m ready to admit I was wrong.

    @Arnold: I’ll let Jochen answer, but for my part it seems predicting a hallucination isn’t the same as describing the experience in structural terms? Perhaps I’m simply not quite understanding what you’re getting at and upon comprehension of the retinoid model things will become clear to me.

    That said, even if it isn’t quite a translation of subjectivity into structure/dynamics, it does seem like a very good step in the right direction.

  45. Sci: “That said, even if it isn’t quite a translation of subjectivity into structure/dynamics, it does seem like a very good step in the right direction.”

    The geometric structure and the systematic changes in the geometry of the hallucinated triangle are reflected in the spatio-temporal geometry of the neuronal activation pattern in retinoid space. Since we cannot expect a theoretical model of a conscious mechanism to actually create consciousness, the best we can do is to show that the model brain mechanism can successfully predict proper neuronal patterns matching relevant conscious phenomena that were previously inexplicable.

  46. Peter: You write

    Let’s digress slightly to consider yet again my zombie twin. …. Now according to me that is actually not possible, because if my zombie twin is real, and physically just the same, he must end up with the same qualia. However, if we doubt this possibility, David Chalmers and others invite us at least to accept that he is conceivable. Now we might feel that whether we can or can’t conceive of a thing is a poor indicator of anything, but leaving that aside I think the invitation to consider the zombie twin’s conceivability draws us towards thinking of a conceptual twin rather than a real one.

    The solution seems simple — zombie twins are both impossible and not even a coherent concept because qualia have causal power. And so, someone just like you but without qualia wouldn’t be exactly like you.

    I’m tired of mentioning my “Do elephants have hair?” example yet again, as it didn’t make enough of an impression to get addressed in times past. So, other examples of quale-based thinking: Would Euclid have considered his postulates to be indeed self-evident without imagining them visually? All of the rather rigorous proof system that one learns in geometry rests upon postulates that we accept not through formal reasoning but on the qualia of manipulating lines. So your zombie is less likely to be as convinced of geometry as you are. Or of accepting the conclusions Einstein demonstrated by thought-experiment.

  47. Qualia really can’t have causal power; or at least, if we credit qualia with causal powers, we won’t be talking about what the people who started the conversation about qualia wanted to talk about!

    I imagine Euclid drew diagrams in the sand, so yes, highly visual. But that doesn’t mean he needed qualia to set out his proofs; he just needed ordinary objective vision. Zombie Euclid would have got on fine.

    I think it’s the same with the elephants (don’t know how I missed them): you’re assuming that the comparison of images requires qualia, but things like the possession of hair are objective things we can easily talk about. There’s no doubt that broadly speaking you see elephant hair where I see it: but whether the grey I experience is what you experience when looking at something pink, we can never determine.

  48. Arnold:

    The subject is directly sharing his conscious content without a mediating perceptual representation of the conscious content,

    But still, both are subject to the same causal agent that produces their shared phenomenology. Think about, for instance, transcranial magnetic stimulation: suppose I had a device capable of eliciting very specific phenomenology by just magnetically stimulating the right areas in the brain. Then, I could induce the same phenomenology in two subjects, even though they were never exposed to a visual stimulus generating that phenomenology. But I don’t see any sense in which that experience has been ‘communicated’ from one to the other.

    Disagreeable Me:

    But this means that there are structural aspects that can’t be scribbled down on paper

    No, because on orthodox QM (as opposed to, say, Bohmian mechanics), there simply are no additional aspects, no ‘hidden variables’ that could serve to further specify the state. And even in Bohmian mechanics, those hidden variables are simply enclosed in a permanent ‘black box’, so the inability to write them down simply follows from their inaccessibility; if we could open the box, we could share them just the same as other structural data.

    If they cannot follow your instructions, e.g. because they have no eyes or are colour blind, the analogous situation with regard to juggling or cycling would be to give the instructions to a quadriplegic.

    That’s a very good example. A quadriplegic could still understand everything that there is to bike-riding, and that they could do it, and how they would do it, if they weren’t paralysed. The analogue of this is what I want for a blind person: how his brain creates the phenomenology of seeing red, upon actually seeing red. This is not solved by merely looking at something red, any more than bike-riding is taught by just setting somebody up with a bike: I know what red looks like, but I haven’t the foggiest about how my brain generates this experience.

    Because you already have all the brain structure needed to perceive red.

    There are particular smells that you probably haven’t yet experienced, particular molecules that although your nose has olfactory receptors for them, they simply haven’t occurred in your vicinity yet in order to be detected. Are you telling me that once they come in contact with your receptors, you won’t be able to smell them? That every smell has to be experienced, or to be trained, before it actually becomes a smell? Because I find that very hard to accept.

    And there must be such an objective change, not just a qualitative one, because the Mary before says “I don’t know what red looks like” and the Mary after says “I do know what red looks like”, which is a perfectly objective measurable difference in behaviour.

    But of course, merely making such a pronouncement is no indication of its truth; I could code a program that says the above in the appropriate cases, but it wouldn’t follow that it actually knows what seeing red is like, or actually knows anything, in fact. So the change in brain state picked out by that criterion need have no connection to whatever enables the actual phenomenology.

    Qualia-eliminativists don’t need to provide a description of phenomenology generation.

    But they need to provide a description of why it seems to us that there is phenomenology generation, which I don’t think makes the problem any easier. You could quite conceivably hold that arms don’t exist, since they are really just collections of cells organized in a specific way; but you haven’t thereby made any progress in explaining how arms work.

    There’s the feeling of your backside on a bike seat.

    This isn’t bike-riding, it’s the phenomenal experience that goes along with bike riding. But even without this experience, what occurs would still be bike-riding—i.e. if I had no feeling in my backside, and rode a bike, then I would still be riding a bike. If a zombie were riding a bike, he’d still be riding a bike. It’s only that which I’m interested in—the precise combination of cause and effect that makes somebody on a bike move forwards. Because, if you’re arguing that what can’t be communicated about bike riding is the associated phenomenology, then you’re incurring a vicious circularity: you wanted to argue that some kinds of things can’t be communicated by pointing to bike-riding, and that hence, phenomenology also conceivably might not be communicable. But if you’re now saying it’s really the phenomenology of bike riding that can’t be conveyed, then you’re saying that maybe phenomenological experience can’t be conveyed because phenomenological experience can’t be conveyed.

    But about bike-riding itself, i.e. the actual skill of mounting a bicycle and making it go forward: even though this can’t be conveyed to human brains via a textual description, it’s easy to see that in principle it could be done, which exemplifies the structural/functional character of the activity. The problem is that in the case of phenomenology, nobody so far has even the faintest clue what such a textual description could in principle look like, that is, how you can describe the process by which, once you look at something red, your brain makes a red-experience happen.

    This is what would be expected if qualia are non-structural; but it’s the opposite of what would be expected if they are simply structural properties. This is hence where the need for explanation lies.

    The two cases are equivalent. To ride a bike, you sit on the bike and move the pedals. To perceive red, you look at a red object.

    Yes, and I know the story of how the activity of sitting on a bike and going through the right motions makes the bike go forward; I don’t know the story of how looking at red makes the brain come up with a red-experience. That’s what’s missing.

    To insist that the voltage patterns cannot form a mind is just as fallacious.

    It’s not a symmetrical situation. You insist that there is something extra that comes about, somehow. I merely point to what we know there is. It’s like my reason for not believing in God: there is no reason to believe. Likewise, there is no reason to believe that mind exists in voltage patterns. If you’d have me believe, you’d have to give me a reason to.

    When the database says that my gender is male, to me that is a proto-belief that my gender is male.

    This is, I think, where you’re fooled by the apparent transparency of your own interpretation; you see the world in terms of meaning and reference, and attribute that meaning and reference to the world. But in truth, in every case, it is provided by you: to the database, there are a few bits of memory that are set in a particular way—nothing about this is intrinsically about you, your gender, or concepts like that. This interpretation is only provided by you, and for a different observer, will be completely different. Somebody else could read that same database record as you being female, for instance—it’s just a matter of convention. Or, it could not pertain to you, or to gender, at all.
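    A toy sketch may make that vivid (the record layout and both reading conventions below are invented purely for illustration, not taken from any real system):

    # the database's entire 'belief': one stored bit in some field
    record = 0b1

    # two observers, two equally consistent conventions for reading that bit
    convention_a = {0: "female", 1: "male"}
    convention_b = {0: "male", 1: "female"}

    print(convention_a[record])   # 'male'
    print(convention_b[record])   # 'female'
    # Nothing in the stored bit favours one reading over the other; the
    # interpretation is supplied entirely by the reader.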

    So that’s what I mean: the one core principle about beliefs, that they are about something, is absent from your proto-beliefs. In order to make a convincing case, you’d have to provide a story of how this aboutness comes about.

    Nevertheless, I think its intentions are really about this abstract quantity (and only incidentally about temperature or altitude).

    They’re not about anything: we could just rip out the sensor, and replace it with a random number generator, and the thermostat would be none the wiser.

    A person may hate the killer of a loved one without knowing who that is.

    They know exactly who it is: the killer of Carol. They don’t know anything about that person’s other properties—whether he is male or female, short or tall, and so on. But they know a property which suffices to exactly single out the target of their hate—it is that x such that x is the killer of Carol. It’s not conceptually different from knowing Bob, without knowing how tall he is, or how much he weighs, and so on—you don’t need to know all the attributes of something to identify it. So, they hate the killer of Carol qua being the killer of Carol.

    This is not remotely similar to the case of an ‘abstract quantity’ (and as I argued above, even that identification doesn’t hold). You can’t think about an abstract quantity qua abstract quantity, but only about particular properties of some quantities.

    The same would be true of any computer system that had as detailed a representation.

    The problem of identifying the intentional content, the meaning of some state of a physical object, is exactly the same as that of deciphering a message for which you don’t have the code (which is really again Newman’s problem in a different guise). You might think that a short message would be harder to crack than a long one, but that’s not the case—in fact, it’s impossible in either case. You need to have some key, some interpretation, so as to make it understandable to you—that is, in particular, you need to translate it into a form such that a mind can then imbue it with meaning. The problem is the arbitrariness of the interpretation: what to you reads like a laundry list could under a different ‘code’ be a Shakespearian sonnet to someone else. Neither of you is any more right than the other, because the meaning does not reside in the scribbles on paper, but in the mind interpreting them. Hence, all such schemes ultimately rely on the existence of a pre-existing mind in order to imbue things with meaning.
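    A small sketch of my own makes that arbitrariness concrete (the messages and keys are of course invented): under a one-time-pad style 'code', the very same bytes decode to entirely different texts depending on the key, and nothing in the bytes privileges either reading.

    import os

    def xor(a: bytes, b: bytes) -> bytes:
        # combine two equal-length byte strings bytewise
        return bytes(x ^ y for x, y in zip(a, b))

    laundry = b"TWO SHIRTS, ONE TOWEL"
    sonnet = b"SHALL I COMPARE THEE."
    assert len(laundry) == len(sonnet)

    key_a = os.urandom(len(laundry))    # one arbitrary key
    ciphertext = xor(laundry, key_a)    # the 'scribbles on paper'
    key_b = xor(ciphertext, sonnet)     # a second, equally valid key

    print(xor(ciphertext, key_a))   # b'TWO SHIRTS, ONE TOWEL'
    print(xor(ciphertext, key_b))   # b'SHALL I COMPARE THEE.'
    # The 'meaning' lives in the choice of key, not in the ciphertext itself.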

    I guess if you wanted to be unsympathetic you could paint me as some sort of panpsychist, if that helps you see what I’m getting at.

    Panpsychism, to me, is really just some form of defeatism. So, we can’t seem to explain how mentality arises, so let’s just suppose it’s fundamental, and doesn’t arise from anything else, and be done with it. To me, that’s a deeply dissatisfying answer. I think there are things that unambiguously don’t have any mentality associated with them—maybe quarks and leptons, maybe atoms, maybe something else. So, there is a fact of the matter how mind arises from non-mind, and that’s what I want to get at.

    It would not be possible to be such a structure, having such functions, without there being this experience.

    Well, then there’s the question I want answered: why?

    Sherlock Holmes’s brother (Mycroft) fictionally exists in the works of Arthur Conan Doyle.

    No. A fiction of Mycroft Holmes exists, but that fiction is not Mycroft Holmes—it’s not, for instance, head of British intelligence. Recall Quine: to be is to be the value of a bound variable. Thus, there is no Mycroft Holmes; nevertheless, we can analyze the sentences ‘Sherlock has a brother called Mycroft’ and ‘Sherlock has a wife’ such that the first comes out to be true, and the second as false.

  49. Peter, I don’t think Euclid needed to draw pictures in the sand to convince himself of the truth of his postulates. Perhaps to teach less imaginative students. But even so, he couldn’t have drawn infinite lines to show they never met. Which is why I compared it to Einstein having you imagine experiments that couldn’t yet be performed, so that you could “see” the resulting paradox for yourself.

    We base conclusions on imagined scenarios. Like my imagining variations on the theme of elephant to decide whether the version that matches memory has hair.

    Is there imagination without qualia? Frankly, I thought qualia were just the new way of talking about what Aristotle called imagination (which, in contrast to modern usage, would include the image of a rose when the rose is actually being seen).

    To put it another way, Micha and Zombie-Micha’s conclusions would diverge over time because the former has more ways of reaching those conclusions than the latter.

  50. Jochen: “Think about, for instance, transcranial magnetic stimulation: suppose I had a device capable of eliciting very specific phenomenology by just magnetically stimulating the right areas in the brain. Then, I could induce the same phenomenology in two subjects, even though they were never exposed to a visual stimulus generating that phenomenology.”

    But suppose the TMS device enabled one of the subjects to control the stimulation to induce just the kind of phenomenology he desired and that this self-induced conscious experience was directly induced in the passive subject. This is closer to what happens in the SMTT experiment. The common causal agent is a physical event (as it must be in a physical world), but the conscious experience of the second person matches that of the first person without the need to interpret a mediating description of any kind.

  51. the conscious experience of the second person matches that of the first person without the need to interpret a mediating description of any kind.

    In a way, however, this mediating description is exactly what I’m after. The TMS device would be simply a way to set up a particular brainstate, equivalent to seeing the appropriate image, but delivered via different pathways; that such a thing is possible is, I think, already implied by the fact that the usual stimulus produces said visual experience.

    But what I want to understand, and what I’m sceptical about, is the possibility of reducing qualia to structural facts, that is, roughly, facts about relations between things. This is what an intermediate description, a reduction to some set of symbols on a page, would do, but not what’s provided by the SMTT experiment, to the best of my understanding.

  52. Hi Jochen,

    > because on orthodox QM (as opposed to, say, Bohmian mechanics), there simply are no additional aspects, no ‘hidden variables’ that could serve to further specify the state.

    I’m not talking about Bohmian hidden variables with definite state. I’m talking about the probabilities we can’t measure directly or copy because of the no cloning theorem. Those probabilities are part of the structure. By assuming that structure can be scribbled down, it seems to me that you’re the one violating the no cloning theorem. I’m just running with the assumptions you began the discussion with.

    > A quadriplegic could still understand everything that there is to bike-riding, and that they could do it, and how they would do it, if they weren’t paralysed.

    OK, I think we can make a little headway here. I took you to be failing to recognise the qualitative aspect of bike riding. If we’re explicitly setting it aside and just talking about how one can perform certain physical actions, then we don’t need to appeal to bike riding at all. We can keep it with seeing red. So, to discuss “how” one sees red, we can give a little recipe which is easily understood. We might suggest that you need a light source, a red object and a direct line of sight to it, to direct the eyes on it and so on, and that then you should see red provided your visual system is working correctly.

    None of this is communicating anything of the brain structure associated with seeing red. Instead, as with bike riding, it describes how to set up a physical situation where you ought to bring about the brain structure associated with seeing red. The brain structure itself cannot be communicated because we can’t rewire our brains on demand or even perceive our brain structures without brain scanning technologies.

    > I know what red looks like, but I haven’t the foggiest about how my brain generates this experience.

    Sure. Do you see this as a contradiction of some kind? Because I don’t!

    > There are particular molecules that although your nose has olfactory receptors for them, they simply haven’t occurred in your vicinity yet in order to be detected.

    I don’t know that much about olfaction, but I can think of a few possibilities.

    1) I already have the brain structure necessary to detect them, which has been dormant since I was born
    2) Smells are combinations of different olfactory receptors firing with varying strengths, and the resulting quale is just a new combination of existing primitives
    3) I can’t smell it
    4) I can’t distinguish it from other familiar smells

    Any or all of these might obtain in different circumstances and for different smells. I don’t think any of them poses a major problem for me. Remember I said only that it was my suspicion that Mary would not be able to perceive colour right away. I did not assume that it would be so and also offered an account which was compatible with the case where my suspicion turned out to be false.

    > So the change in brain state picked out by that criterion need have no connection to whatever enables the actual phenomenology.

    I think your critique in this paragraph is rather beside the point. I think most people would agree that Mary will have an objective change in her behaviour after perceiving red for the first time. Ex hypothesi, she is not lying or pretending or an automaton masquerading as a perceiver of qualia. There is some change in her brain state and this is the structural change I am talking about.

    > But they need to provide a description of why it seems to us that there is phenomenology generation

    First you have to accept that it makes sense to talk of “seeming” or “believing” independent of any presupposition of having consciousness or qualia. For example, a face detection algorithm may perceive faces in pictures where there are no faces at all, an example of AI pareidolia. To me, this means that it seems (or pseudo-seems, if you like) to the algorithm that there is a face in the picture.

    Once you can go that far, then we can discuss why an AI or a human might say that it seems to be experiencing qualia. I would say it is because there is a difference between being in a mental state of merely believing something to be red and in a mental state of directly perceiving it as being red because of sensory data. The latter is essentially a belief that you are experiencing a red sensation. And that’s all there is to it. There is no work done to generate the quale itself. Once you believe you are seeing red, the job is done and there is nothing more to do.

    To you, this explanation is dissatisfying for two reasons. Firstly, it presupposes that there are no qualia, and secondly because it fails to explain how the qualia are generated. And yet, if that first presupposition is entertained, it explains more or less everything and then the second problem disappears.

    > But if you’re now saying it’s really the phenomenology of bike riding that can’t be conveyed, then you’re saying that maybe phenomenological experience can’t be conveyed because phenomenological experience can’t be conveyed.

    Yes. So my point was that there’s nothing really different about bike riding or perceiving red. Each has an objective functional side we can understand by third-person study and a subjective phenomenal side we can only experience by being in certain brain states ourselves (usually triggered by sensory data or physical interaction). Since we can’t directly communicate brain states, this subjective experience cannot be conveyed. There is nothing ad hoc about this in my view.

    > I don’t know the story of how looking at red makes the brain come up with a red-experience. That’s what’s missing.

    One problem is that even the functional side of colour perception is only dimly understood by experts. So we must imagine that this is understood completely for the analogy to cycling to work. If it were understood completely, we could understand how red stimuli cause certain intelligent systems to behave in a manner consistent with having perceived a red stimulus. In the case of humans, it might for instance dispose them to make exclamations such as “I am experiencing red!”. We might even come to understand how it can prompt certain humans to get into philosophical debates about qualia, i.e. what they claim to be experiencing.

    Once you understand all this, my view is there is nothing else to explain. We understand why we believe ourselves to be experiencing red, and we even understand why we feel this to be mysterious and unintuitive. The quest for an explanation of “real” qualia is therefore a wild goose chase.

    > You insist that there is something extra that comes about, somehow.

    I don’t think so. I’m the eliminativist, not you. As far as I can see, you’re the one insisting that something else (qualia, consciousness, mentality, intentionality) has come about somehow in a human brain. I’m saying all there is is structure. Qualia and the rest of it are no more than structural or functional states in my view.

    > Somebody else could read that same database record as you being female, for instance

    Sure. Because to me and that person, gender means more than a flag that goes either way. To the database, that’s all it is. Nevertheless, I think the database has a proto-belief corresponding to my gender. Even if we don’t know which of 1 and 0 means male and which means female, you could for instance glean from the database that my brother and I share the same value.

    > So that’s what I mean: the one core principle about beliefs, that they are about something, is absent from your proto-beliefs. In order to make a convincing case, you’d have to provide a story of how this aboutness comes about.

    And I think I’ve provided you with that in the discussion of the thermostat…

    > They’re not about anything: we could just rip out the sensor, and replace it with a random number generator, and the thermostat would be none the wiser.

    So what? It would still have intentions about the abstract quantity. What you would have done by replacing actual temperature with a random number generator would have been to change the real world entity corresponding to that quantity. So now its intentions are effectively and indirectly about the output of a random number generator.

    The same way, if I anesthetized you and made you a brain in a vat so you were experiencing a simulated world when you woke up, your perceptions and your beliefs from that point on would be about virtual objects in a virtual environment. You would still have intentions in the same way as before, which directly only refer to your own mental representations. I would have changed the real (well, virtual!) world entities corresponding to those representations and you would have been none the wiser.

    The real entities would continue to exist and your memories and beliefs about them would still refer to them. But then if the thermostat had a memory or beliefs about past temperatures much the same thing could be said of it.
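    A minimal sketch of the point (my own toy construction, not a model of any real device): the controller below only ever sees a number and a threshold, so swapping the sensor for a random number generator changes what its states are effectively about, but not the fact that they track whatever now drives them.

    import random

    def thermostat(read_input, threshold=20.0):
        # the controller's whole 'mental life': compare a number to a threshold
        value = read_input()
        return "heater on" if value < threshold else "heater off"

    def temperature_sensor():
        return 18.5   # stand-in for a real sensor reading

    def random_source():
        return random.uniform(0.0, 40.0)   # the swapped-in random number generator

    print(thermostat(temperature_sensor))
    print(thermostat(random_source))
    # The controller is 'none the wiser' either way, just like the brain in a vat.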

    > They don’t know anything about that person’s other properties—whether he is male or female, short or tall, and so on.

    Right. And similarly the thermostat doesn’t know anything about the properties of the abstract quantity it is measuring, only that it can go up or down and that it must be in one of two states depending on whether it is above or below a certain threshold. Anything that satisfies those criteria could be a potential referent of its intentional states, and the same thing goes for a human’s intentional states. To me, intentionality is therefore largely a matter of criterion-satisfaction.

    > you don’t need to know all the attributes of something to identify it

    That’s exactly right. And that’s why a thermostat needs to know almost nothing about what temperature is and yet it can still refer to it.

    > You need to have some key, some interpretation, so as to make it understandable to you

    OK. First, I just want to point out that you are conflating two problems here. The first is in decoding a series of external symbols, which means essentially to successfully map them to a mental representation in your head. The second is how the mental representations in our heads have meaning, which I have explained with reference to structural similarity and causal links. It should be clear that they don’t need to be decoded because they are already in the form native to your understanding. Even so, we can talk of decoding them if you wish, and I would agree that this problem is so intractable as to be nigh impossible. This is why we can’t (yet?) read minds with brain scans, no matter how detailed.

    But it is not the case that we are without keys or clues to the interpretation, and these are the same things that give rise to the semantic reference in the first place. If a certain neuron causes one to think of or taste strawberries when stimulated, then that neuron probably has something to do with our mental representation of strawberries. Especially so if it fires when sensory input pertaining to strawberries is given. In this way we have actually managed to decode and map out a number of crude mappings between certain brain areas and certain aspects of intentionality.

    And this is also how we could in principle figure out which tables in a database refer to persons and which fields refer to gender — by observing how providing new data (sensory stimulus) to the system makes changes to the data stored. The interpretation of the data cannot be arbitrary as long as it has a consistent causal connection with the real world. That’s what a real AI system would have and that’s what our brains have, and that’s how we can have intentionality referring to the real world.

    > So, we can’t seem to explain how mentality arises, so let’s just suppose it’s fundamental

    I think you’re right for actual panpsychists. However, I don’t really assume that it is fundamental. I just assume that it is no more than a set of functional states, that the idea that it is something beyond this is false. In my view you are pursuing a mirage.

    > Well, then there’s the question I want answered: why?

    Because this experience *is* just a set of functional states.

    > A fiction of Mycroft Holmes exists, but that fiction is not Mycroft Holmes

    I’m just explaining by way of example what I mean by fictional existence. It means existence within the context of a particular fiction. I don’t think this is as perverse an abuse of language as you perhaps do.

  53. Jochen,

    If I understand you, the “facts” that constitute the qualia in the SMTT paradigm are the relations among activated autaptic neurons in retinoid space — the brain mechanism that generates the pattern of these relations in the brain is a theoretical entity, not yet directly observed. It is assumed to exist because it explains/predicts a wide range of previously unexplained phenomena/qualia.

  54. I’m talking about the probabilities we can’t measure directly or copy because of the no cloning theorem. Those probabilities are part of the structure.

    And accordingly, we can of course write them down, and reduce them to relationships between symbols on a page, as must be possible for anything purely structural. You could for instance just take a large number of copies (if you, say, have a source that provides the same state), and perform a tomographically complete measurement, which gives you the precise state (up to experimental error).

    We can’t get at those probabilities in the general case, because a measurement changes them: after the measurement, the system will be in an eigenstate of the measurement operator, and no longer in its original state. But that’s the case as well in classical mechanics, if you for some reason can’t perform a nondestructive or noninterfering measurement—take a ball moving through a dark space; your only hope of detecting it is prodding it, which will tell you the location at some point, but change the ball’s velocity. But that doesn’t mean that location and velocity aren’t structural properties, merely that you can’t measure them.
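    For a single qubit, to make this concrete (a standard textbook reconstruction, sketched here only as an illustration): measure the Pauli observables X, Y and Z on many identically prepared copies, estimate each expectation value from the outcome frequencies, and the state is fixed as

    \rho = \tfrac{1}{2}\left( I + \langle X \rangle X + \langle Y \rangle Y + \langle Z \rangle Z \right), \qquad \langle \sigma_i \rangle = p_i(+1) - p_i(-1),

    so the probabilities, while not copyable from any single system, are perfectly expressible as symbols on a page once enough copies have been measured.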

    So, to discuss “how” one sees red, we can give a little recipe which is easily understood. We might suggest that you need a light source, a red object and a direct line of sight to it, to direct the eyes on it and so on, and that then you should see red provided your visual system is working correctly.

    But the analogy is not between bike-riding and ‘seeing red’ in your sense, but between bike-riding and having a red-experience. It’s the latter I suppose to be non-structural, and accordingly, your recipe above doesn’t give me an inkling regarding how that experience comes about.

    You appealed to bike-riding or learning some skill in general to make a case that there might be some kinds of knowledge that can’t be transferred via a structural or functional account, like the skill of bike-riding can’t, in order to make it plausible that subjective experience might still be structural, but unconveyable. If you’re now saying that you were really just claiming that the subjective experience associated with the practice of that skill can’t be conveyed, then of course that argument no longer works, and my original question—why can’t qualia be reduced to a description in terms of symbols, if they’re structural after all?—still stands.

    1) I already have the brain structure necessary to detect them, which has been dormant since I was born
    2) Smells are combinations of different olfactory receptors firing with varying strengths, and the resulting quale is just a new combination of existing primitives
    3) I can’t smell it
    4) I can’t distinguish it from other familiar smells

    Well, I think we can discount 3) and 4): a perceptual system that can only perceive stimuli that it already has perceived or is familiar with would not be very useful. 2) is, to my knowledge, simply not how olfaction works: contrary to vision, where the stimulus elicits a combination of distinct activations of different receptors, olfactory receptors are tailor-made to react to specific molecules. And 1) would require hard-coding a far larger portion of our neuroanatomy in our genes than I think is feasible—in fact, we know that brain-growth is largely a self-organized process.

    So ultimately, I can’t really see grounds on which to accept the thesis that the perception of a novel stimulus needs restructuring of neural connections.

    For example, a face detection algorithm may perceive faces in pictures where there are no faces at all, an example of AI pareidolia.

    This, again, is simply question begging to me: a face detection algorithm does not do anything that is in any way connected to faces. In fact, under a different interpretation, it might just as well compute the sum of the first n prime numbers, or solve basically any other task. What a given system computes is always and completely a matter of the perspective of the user.

    The latter is essentially a belief that you are experiencing a red sensation. And that’s all there is to it.

    Even if I were to accept the idea that there is some way of getting intrinsic intentionality from systems that only possess derived intentionality, it still wouldn’t follow that a belief of something brings about that thing itself. I can believe I can fly all day long, and I’d still die if I jumped from the tenth floor; so why should the belief of having experience bring about having experience? Why couldn’t Mary believe she knew what it’s like to see red, and then, upon actually seeing red, actually having that experience, realize that that’s a completely different thing from what she had in mind?

    To you, this explanation is dissatisfying for two reasons. Firstly, it presupposes that there are no qualia, and secondly because it fails to explain how the qualia are generated.

    No, the explanation is dissatisfying for the reason that it starts from premises that I find at best dubious, and at worst completely misled.

    Since we can’t directly communicate brain states, this subjective experience cannot be conveyed. There is nothing ad hoc about this in my view.

    The ad hoc hypothesis is that for some reason, experiencing qualia requires some brain rewiring. But there are clearly kinds of knowledge that can be conveyed: take, for instance, math. With enough patience, any piece of mathematics can be conveyed symbolically—indeed, the manipulation of symbols is all that’s needed to do all kinds of mathematics. Yet still, I am presumably in a certain brain state when doing a certain kind of math. So those brain states, apparently, are communicable. Why are those that produce experience different? Worse yet, for somebody believing in the mathematical universe, why and how does something whose basic definition is being communicable symbolically give rise to something whose central characteristic seems to be not communicable in this way?

    I mean, don’t you see that there’s some need for explanation there?

    We understand why we believe ourselves to be experiencing red, and we even understand why we feel this to be mysterious and unintuitive. The quest for an explanation of “real” qualia is therefore a wild goose chase.

    That may very well be the case, but a convincing story needs to be told of a) how qualia reduce to believing in having qualia, b) how the belief in qualia comes about, and c) how belief arises in systems that lack intentionality. The hope is that this will be somehow amenable to a simpler explanation, but I don’t think that hope is well-founded. Again, whether you explain the functioning of my arm or of the cells that make up my arm—thereby exposing its arminess as mere fiction—in the end doesn’t make a difference. But so far, people just keep saying, ‘there’s no arm, there’s only cells!’; and so I’m merely saying, that’s fine by me, so explain what those do in order to accomplish all those things I usually take my arm to be doing.

    I don’t think so. I’m the eliminativist, not you.

    That’s exactly why you need to explain more: not only how qualia come about, but instead, how we can take ourselves to have qualia when in fact, we don’t. That is, not merely explaining what leads to all the full-HD stereo phenomenology we seem to be experiencing, but also why we seem to experience it when it actually isn’t there at all.

    The thing is, you’re saying that the thermostat might just as well have proto-beliefs as it might not. But without any evidence to the contrary, of course, the only viable stance is that it does not: otherwise, we’re doomed to question begging about invisible pink unicorns, teapots orbiting the sun, flying spaghetti monsters and whatnot. So the onus is, in fact, on you to demonstrate how what you call proto-belief can play the role it needs to play in order to provide us with all the beliefs we seem to have, in the way that we seem to have them.

    It would still have intentions about the abstract quantity.

    No. Its states simply don’t pertain to any quantity, abstract or otherwise. Again, the relations that it sets up can be filled with anything at all, as per Newman. Consider the books and the people: their relations don’t pertain to whether it’s books or people; but the character of our intentional states is precisely to pertain to some given thing. Structure is that which can be transported from one object, and instantiated by means of another; but what our mental content pertains to is exactly the object itself.

    It should be clear that they don’t need to be decoded because they are already in the form native to your understanding.

    And this is where things get question-begging: to show that there is something that doesn’t need to be decoded, that is somehow ‘intrinsically interpreted’, is exactly what you need to do in order to ground your account. But your account simply keeps referring to this ‘intrinsically interpreted’ level as a given. But the reality is that a given neuron firing doesn’t pertain to a given stimulus any more than the thermostat pertains to a concrete quantity.

    And this is also how we could in principle figure out which tables in a database refer to persons and which fields refer to gender — by observing how providing new data (sensory stimulus) to the system makes changes to the data stored.

    And again: this only helps us decode the database if we already understand the data it’s being provided with. Without such pre-existing knowledge, we simply have correlations between meaningless data; but these don’t suffice to ground meaning. Recall Newman again: the only information about the things subvening a given set of relations you can get from nothing but these relations is their cardinality. But that’s plainly not the only information we have about the world.

    I just assume that it is no more than a set of functional states, that the idea that it is something beyond this is false.

    And as I keep saying, I think that’s a reasonable stance, but it only shifts the explanatory burden, it does not eliminate it.

    Because this experience *is* just a set of functional states.

    I don’t understand what that means, sorry. If I am in a certain brainstate, which is, say, exhaustively described by some groups of neurons firing, then why does that cause the experience—or the illusion of the experience—of a field of poppies, rather than a field of violets? Or a Chopin concerto? Why, most of all, isn’t it anything like some groups of neurons firing?

    It means existence within the context of a particular fiction. I don’t think this is as perverse an abuse of language as you perhaps do.

    The thing is that it’s hard, on these terms, to analyze claims about such entities. Something that ‘exists within the context of a particular fiction’ can’t be the head of British intelligence; but Mycroft Holmes is the head of British intelligence. Something fictionally existing in your way also couldn’t be the world’s best amateur detective; but Sherlock Holmes is the world’s best amateur detective.

  55. Two different neurologies, at least within the eye, could produce the same quale. I don’t think one can assume a 1:1. For example, the difference between seeing a single frequency of light as orange and seeing orange on an RGB monitor.

  56. Sci,

    “If qualia don’t exist, then an explanation of qualia that does not mention qualia even once OUGHT to be enough. You’d agree? Sure, you’d say they exist. But if they didn’t, that ought to be enough, right?”

    Not at all, since it’s thoroughly unsatisfying to say an experience is nothing but the movement of matter, mathematical structure, or whatever. That sounds too much like magic to me unless there’s an explanation for why the structure and dynamics – whatever they are made of, matter or math – results in qualitative experience (or the illusion thereof).

    Why would its being unsatisfying matter? If one has cancer and one is diagnosed with cancer, that’s unsatisfying. But it makes no difference.

    What’s magical about it if qualia don’t exist and the whole explanation doesn’t mention qualia? Discussions of dinosaurs that don’t mention dragons aren’t magical, are they?

    Would it be magic, or would it just be a sufficiently advanced explanation? As the quote goes, sufficiently advanced technology is indistinguishable from magic. That doesn’t make it magic though, does it?

    Perhaps instead of insisting it is magic, ask yourself why the sufficiently advanced seems magical. I.e., it’s not something the explanation is bringing in, but something the reader is bringing in.

    If computationalists can solve the Hard Problem or show how the derived intentionality of a program can make the leap into intrinsic intentionality, I’m ready to admit I was wrong.

    We’re a species that has benefited from planning ahead rather than just waiting until something goes wrong and then reacting.

    If you think such a solution won’t have a big effect, I’d disagree, but I can see why you’re waiting. You think there are no stakes involved and it’s all fairly academic. If it were, I’d agree with that approach and with feeling comfortable about such mysteries. I really would.

    If you think such a solution will have a massive effect on billions of people’s lives, then I can’t see why you’d wait until you’re proven wrong. That’s waiting until something goes wrong, THEN reacting.

    I’m guessing it seems like the lottery winning – you’d grant you could win, but you’re not going to think about it very much, despite the massive change it’d make.

  57. @Callan: Oh I agree the stakes are big, which is why the religion of computationalism has to actually prove itself before we grant AIs civil rights.

  58. I only read this post just now, so it’s hard to follow the whole discussion, but it seems to me the confusion stems from mixing together “reality” and “language”.

    The first comment says:
    “This whole universe is just a mathematical object which exists Platonically, and we perceive it to be real only because we are embedded in it: that is, from our perspective there are causal links between us and the world around us.”

    Yes. The difference between reality and language is that language is a specific perceptive perspective. So the important point is not what is real and what is not, but what is perceived and what is not. It’s an angle on reality, not reality itself.

    Language is merely the substance of perception, but not of reality. It’s again a contradiction that comes up from not wanting to build the dualism into the model. It’s the typical mistake.

  59. Hi Jochen,

    I tried to submit a response but it wouldn’t go through. Perhaps it was too long. I’m not sure what the rules are. I’m mostly posting this just to see if it goes through.

  60. I’ve had problems when my post contained a url linking to a different page on this blog. Did you try to post a link? Otherwise, Peter has been commendably quick at picking up on these things and sorting them out…

  61. @Abalieno: I’m a bit confused by your reply. Are you agreeing or disagreeing with the Mathematical Universe?

    Or are you saying the model contains a hidden dualistic notion?

  62. Hi Jochen,

    I’m guessing my comment was too long. Rather than answer every point I’ll try to bring up some themes.

    I suggest we drop discussion of the no cloning theorem in this conversation because I don’t think it has much bearing on the main issues.

    You are right to point out that I seem to have missed the point about the analogy to cycling, or forgotten why we brought it up in the first place. So, as well as qualia, my position is that there are certain skills or mental abilities associated with cycling and the only way to acquire them is to practice. They can’t be communicated with words alone. So qualia are not unique in this way.

    A theme that keeps coming up is whether the intentions we feel inclined to attribute to artificial systems are arbitrary or not. You bring up Newman’s problem to argue that they are, but I refer to the causal associations to argue that they are not. For instance, a face-recognition program really is about faces because it responds to images of faces and not to images of non-faces. This means that it is not quite as justifiable to interpret it as being about, say, flowers.

    Newman’s problem only applies where there are no causal links or entailments, where we are looking at a static structure of arbitrary relations. Where there is an actual physical mechanism, you are not as free as Newman implies to project whatever interpretation you like on the system.

    So, when you insist that a given neuron firing doesn’t pertain to a given stimulus, I disagree. It pertains to that stimulus because it is triggered by that stimulus alone. We could perform some kind of surgery on the brain to make that neuron fire in response to a different stimulus, but now all we have done is change what the neuron pertains to. It still pertains to something. Similarly, you think of poppies rather than violets or Chopin because of the various causal associations set up in your brain. The interpretation is not arbitrary.
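    A rough sketch of the causal test I have in mind (the detector and 'stimuli' below are toy stand-ins of my own, not any real system): the interpretation is anchored by which class of inputs the output actually co-varies with.

    def toy_detector(stimulus):
        # stand-in for a face detector: responds to a crude face-like feature
        return 1.0 if stimulus.get("has_two_eyes_and_mouth") else 0.0

    def causal_interpretation_test(system, stimuli_by_label):
        # average response per stimulus class: which class does the output track?
        return {label: sum(system(x) for x in xs) / len(xs)
                for label, xs in stimuli_by_label.items()}

    faces = [{"has_two_eyes_and_mouth": True}] * 5
    flowers = [{"has_two_eyes_and_mouth": False}] * 5

    print(causal_interpretation_test(toy_detector,
                                     {"faces": faces, "flowers": flowers}))
    # -> {'faces': 1.0, 'flowers': 0.0}: the response co-varies with faces, not
    #    flowers, so the 'about faces' reading is not an arbitrary projection.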

    So, for now, I think I have answered Newman’s problem as it applies to AI and intentionality. It seems to me I have explained how interpretations of intentions are not arbitrary.

    I also want to be very clear about how I account for what intentionality is, as you don’t seem to appreciate this in all of your objections.

    In my view, intentionality arises due to the satisfaction of criteria held in mental representations. What we know directly are these mental models. We only know the real world indirectly and approximately. The objects of our intentions may not even exist.

    For example, my intentions about you arise via criteria which are satisfied by any intelligence (human, alien or AI) that posts on Conscious Entities (hmm, I now realise that perhaps my last post failed to appear because I mentioned this site using its .com domain name) under the username “Jochen”, exhibiting a high degree of intelligence and learnedness and having certain views. Just as the thermostat could be altered so that it referred to ambient light rather than temperature and it would be none the wiser, so could another person start posting under your name and I would potentially be none the wiser. That doesn’t mean I can’t have intentionality directed at you and it doesn’t mean the thermostat can’t have intentionality directed at temperature.
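    Here is a bare-bones sketch of reference-as-criterion-satisfaction (the criterion and the candidate records are invented; this is not a model of a real mind):

    def possible_referents(criterion, candidates):
        # anything satisfying the criterion is a candidate object of the intention
        return [c for c in candidates if criterion(c)]

    def jochen_criterion(x):
        return x.get("posts_on") == "Conscious Entities" and x.get("name") == "Jochen"

    candidates = [
        {"name": "Jochen", "posts_on": "Conscious Entities", "kind": "original"},
        {"name": "Jochen", "posts_on": "Conscious Entities", "kind": "impostor"},
        {"name": "Sci", "posts_on": "Conscious Entities", "kind": "original"},
    ]

    print(possible_referents(jochen_criterion, candidates))
    # Both the original and the impostor satisfy the criterion, which is exactly
    # the 'none the wiser' point: the intention fixes its object only up to
    # whatever satisfies the criteria held in the representation.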

    The next theme I want to bring up is how qualia can be a kind of belief if people can be wrong about what qualia they are perceiving or imagining. This gets a little convoluted so try to bear with me.

    You mention Mary believing herself to know the qualia of red, and finding herself to be mistaken when she actually sees red for the first time. I think the problem here is a confusion between actual brain states and English language descriptions of brain states. Mary’s brain is not in the state of knowing red. She thinks it is, which just means that she doesn’t really understand the English language descriptions. She applies the description “I know what red looks like” where it is not appropriate. At no time has her brain actually represented the proposition most of us would render as “I am seeing red light” and at no time has she actually been in a brain state most of us would render as “I am imagining red”. She may render her brain states thus but that is just because she doesn’t understand the words she is uttering, or at least she has very idiosyncratic interpretations of those words.

    You also mention that a belief of something doesn’t bring about the thing itself. This is of course trivially true in general. But it is not true in the case that the thing itself is just that belief, so it’s not really enough to defeat the point.

    The next theme is the accusation that I am making “ad hoc” hypotheses. I think you misunderstand me a little. You think I claim that experiencing qualia requires major brain rewiring. Let me clarify. It is true that I perhaps use the word “rewire” where I suspect that there may need to be significant changes to the connectome. But I never insist on that, especially not for very short term changes in state, such as switching between seeing light and darkness. Wherever you think I incorrectly insist on changes in neural connections and so on, my argument can be just as well put by insisting instead on some other change in physical pattern, such as whether a given neuron is firing. Even such transient minor changes as this can be seen as structural changes, and so wherever you like you can interpret my talk of rewiring to refer to this kind of thing.

    But even so, you have only very limited conscious control over even this transient “soft” state. If I tell you to fire neuron #234534575657, you will have no way of knowing how to do it. There are neurons you can consciously fire at will of course, but you don’t know which neurons correspond to which mental acts. It may be that neuron #234534575657 fires when you think of strawberries, so to fire this neuron you simply think of strawberries. But you have no way of knowing that this is the way to fire this particular neuron.

    Perhaps to think of red Mary needs to fire neuron #35654645655. Perhaps, given her scientific knowledge, she even knows this. But she lacks the mental ability to fire this neuron at will. There is simply no causal link in place from her interpretation of the thought “I want to think of red” to the firing of this neuron. However, when she sees red for the first time, the neuron fires, and a bunch of associations are set up in her brain due to the firing of other neurons. Now there is a causal link from her interpretation of the thought to the firing of the neuron and she can fire it at will by thinking of red. Indeed, the firing of this neuron more or less *is* her thinking of red.

    Next, we have the analogy to mathematics. We can communicate mathematical structures with words and symbols, so the fact that we cannot communicate qualia is evidence to you that qualia are not structural.

    Well, first I would quickly point out that we *can* communicate qualia with words and symbols. If I say “red” to you, you think of the red quale. But I am not communicating the quale itself, rather a label for it which you understand. That’s not too dissimilar to what we do when we’re communicating mathematical ideas by using labels we both already understand. I think that communicating the idea of “red” to Mary is a case of not having this agreed syntax and understanding, because Mary lacks the ability to tie this symbol to the appropriate mental representation or structure in her mind.

    But I think the key difference here is that the mental representations of mathematical structures are physical structures representing abstract mathematical structures, while the mental representations of qualia are physical structures representing raw sense data. So we have second order structures and first order structures. It’s not that qualia are not structures, it’s that they are not structures used to represent other structures.

    When you communicate some mathematical idea to me, you convert your mental representation to a pre-agreed common language, a syntax of mathematical symbols. I know how to interpret this and rebuild my own mental representation of the mathematical structure. You are not communicating the structure of your brain state to me directly. My brain and my understanding are different from yours, so my mental representation might be quite different. The communication is successful just in case my mental representation encodes an abstract structure which is similar enough to yours in the respects which are important for the activity in which we are engaged.

    You have my high esteem and I can’t fault you for debating tirelessly and honestly, but we’re probably reaching a point of diminishing returns on this one. From my perspective, it feels like I’ve offered the explanations you’ve asked for, and you’re still insisting I haven’t. I claim (or stipulate) that a thermometer can have (proto-)intentional states and explain what this means, you flatly claim it cannot. I give what appears to me to be a plausible and consistent explanation of how qualia can be both uncommunicable and structural states, and you see it as ad hoc. I have explained that interpretation of representations need not be arbitrary because of causal connections, and you keep asserting that it is because of Newman. Where I see parsimony, you see shifting the burden. And so on.

    I realise that things will look different from your perspective and my description of the state of the debate is surely biased, but nevertheless I’m not sure how to progress. Unless you have very specific questions or particularly ironclad points to make against what I have said, perhaps we ought to drop it.

    In any case, I think all you really need to know about my views are in that Scientia Salon article. In summary, we can consistently claim either that there is more to intentionality than functionalism allows or that there isn’t. Both views are consistent with what we see, and perhaps we even agree on that. The difference is in criteria such as parsimony, explanatory power and appeal to mystery or faith. These criteria are somewhat soft, hard to measure objectively, but to my mind the functionalist explanation is more plausible on these grounds. That isn’t so for you and that is not likely to change in the near future. What else is there to say?

  63. So, as well as qualia, my position is that there are certain skills or mental abilities associated with cycling and the only way to acquire them is to practice. They can’t be communicated with words alone. So qualia are not unique in this way.

    The problem is that there still remains a strong distinction between bike-riding and qualia-experiencing: for bike-riding, we can, without difficulty, imagine how to confer it to an intelligence capable of carrying out the necessary rewiring—even though we can’t implement it (without practice) ourselves, the structure of bike-riding is perfectly clear to us. In simpler cases, we can even do the necessary ‘rewiring’ ourselves: take, for instance, lacing your shoes. I could write down a short description that is simple and unambiguous enough to enable anybody unfamiliar with the concept to tie their shoes; by thus merely providing that algorithm, I have transmitted the necessary skill. Moreover, all skills, even horrendously complex ones, appear in principle to be treatable in this way.

    Yet for even the simplest kind of perception, nobody has the slightest idea of how to perform the analogous feat. No simple quale has ever been given an account in structural terms such that, in principle, an intelligence capable of doing the necessary rewiring (or whatever) could instantiate it in itself without ever needing to have the experience firsthand. Note that brain-state sending does not do this: even aside from its dubious possibility, it does not instill in the recipient the knowing-how of having experiences, the way the algorithm for shoe-lacing does.

    For instance, a face-recognition program really is about faces because it responds to images of faces and not to images of non-faces.

    That’s not right. It responds to a certain pattern of ones and zeros, and because we have set things up in such a way that this pattern of ones and zeros is associated with a face, we can interpret it as being about faces. With a different ‘camera’, so to speak, that encoded flowers in the way we now encode faces, it can be equally well interpreted to be about flowers.

    This is, ultimately, the reason for computational universality—that certain rich enough structures (which can be quite simple in practice) can be interpreted to be about anything. Take the example of the cellular automaton rule 110: it’s computationally universal from random initial conditions, which means nothing else than that its evolution can be interpreted to compute any computable function at all. All universal computations are equivalent (‘reducible’) to one another.
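    (For concreteness, here is a minimal sketch of the rule itself—just the local update table and its evolution, nothing of the universality construction; the grid size and the printing are arbitrary choices of mine:)

    ```python
    # Minimal sketch of the rule 110 cellular automaton: each cell's next state
    # depends on itself and its two neighbours according to a fixed lookup table.
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    def step(cells):
        """Apply one synchronous update, treating the row as circular."""
        n = len(cells)
        return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    # Evolve a single live cell for a few steps and print the rows.
    row = [0] * 31
    row[15] = 1
    for _ in range(8):
        print("".join("#" if c else "." for c in row))
        row = step(row)
    ```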

    Newman’s problem only applies where there are no causal links or entailments, where we are looking at a static structure of arbitrary relations.

    This isn’t right, either. Causality is simply structure, too—that’s why we can use the same mathematics to describe the interactions of billiard balls and cars, etc. Because although they are different as physical objects, their causal relations are the same—hence, causal relations do no work in pinning down content, unless you are in some way supposing that one side of the relation is already interpreted (as I think you continue to do). I.e. I believe you take the physical side as a given—that a tree is a tree, simpliciter, and that thus, anything causally linked in some way to that tree can be identified by this link as being in some way about that tree. But since we don’t have access to the physical world directly, but only via our intentions, this basically means that you smuggle in intentions from the start—i.e. you suppose we could somehow jump out of the system and identify that tree for what it truly is, but all we really have is a thought that pertains to a tree; thus, grounding your causal links in the physical world fails because you inevitably end up back in a mental, intentional regime.

    In my view, intentionality arises due to the satisfaction of criteria held in mental representations.

    And here is what I think you don’t appreciate: that in talking this way, you take mental representations for granted, which are, however, intentional in nature.

    That doesn’t mean I can’t have intentionality directed at you and it doesn’t mean the thermostat can’t have intentionality directed at temperature.

    Well, I thought I’d explained that: your intention refers to me qua poster “Jochen” on Conscious Entities. This is not an intention that (necessarily) has only one object, just as, for example, a thought of a cloud doesn’t have only one object. But the thermostat doesn’t refer, it merely reports: if its light is lit, we can then infer something about the temperature; but only at this state, in our reception of it, does any intentionality come about. Other than that, there simply is no aboutness—we can describe its states abstractly using some cipher of zeros and ones, for the sensor and diode: let’s consider a sensor that closes a circuit when the temperature gets too cold, and opens it when it gets too hot, and a diode that is lit when the system is heating (alternatively, we could simply conceive of it directly as the heating system; since it’s only structure, only information, what it pertains to is arbitrary), and off when it isn’t. So, possible states for the thermostat would be (writing 1 for ‘circuit closed’ and 0 for ‘circuit open’, and respectively 0 for ‘diode unlit’ and 1 for ‘diode lit’): 00, 01, 10, 11. Thanks to the thermostat’s design, we know that not all those states occur; in fact, there will only ever be 00 (too hot -> circuit open, hence diode off), and 11 (too cold -> circuit closed, hence diode on). This fully characterizes the operation of the thermostat in structural/informational terms.
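    (If it helps, here is a toy rendering of that characterization—the component names are just my labels, nothing hangs on them:)

    ```python
    # Toy structural description of the thermostat: two binary components
    # (sensor circuit, diode) and the joint states its design actually allows.
    from itertools import product

    possible_states = set(product((0, 1), repeat=2))      # 00, 01, 10, 11

    def thermostat_state(too_cold):
        # The sensor closes the circuit when it is too cold; the diode tracks the circuit.
        circuit_closed = 1 if too_cold else 0
        diode_lit = circuit_closed
        return (circuit_closed, diode_lit)

    realised_states = {thermostat_state(tc) for tc in (True, False)}
    print(sorted(possible_states))    # all four combinatorially possible states
    print(sorted(realised_states))    # only (0, 0) and (1, 1) ever occur
    ```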

    But all this really is, is a pattern of differences (as Gregory Bateson so memorably put it, information is a difference that makes a difference). As such, it can be instantiated on any system that can be considered to be composed of two components that can be in two different states each—and this is, exactly, what Newman is about. The fact that there is some causal connection between these states merely means a constraint on which possible states are actualized; but that doesn’t help pare down the number of systems so considered. For instance, you can always coarse-grain the state space: say you have a system that visits all the states 00, 01, 10, 11, then you can just identify 10 and 11, resp. 00 and 01, (i.e. join them via logical or), and you’ll have a realization of the structure of the thermostat.
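    (Again as a toy sketch, under the same 0/1 labelling as above—the trajectory is just something I made up:)

    ```python
    # Coarse-graining sketch: a system that visits all four joint states realises
    # the thermostat's two-state structure once we treat "10 or 11" as one
    # macro-state and "00 or 01" as the other.
    full_trajectory = [(0, 0), (0, 1), (1, 0), (1, 1)]    # visits every state

    def coarse_grain(state):
        first, _ = state
        return (first, first)    # relabel the two macro-states as 00 and 11

    print({coarse_grain(s) for s in full_trajectory})     # {(0, 0), (1, 1)}
    ```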

    She applies the description “I know what red looks like” where it is not appropriate.

    Well, but she believes it to be appropriate, no? The utterance of the sentence ‘I know what seeing red is like’, if it is not a lie, necessarily is caused by a belief that she indeed knows what red is like. But according to you, if I don’t misunderstand, that belief is equivalent to actually knowing what red is like (otherwise, there would be a further fact—really seeing red—and hence, such a belief could be mistaken; but then, plainly, the belief on its own is not sufficient).

    But it is not true in the case that the thing itself is just that belief, so it’s not really enough to defeat the point.

    But that the belief in question is just that thing we’re looking for is what you’d want to conclude, not what you can presuppose. To me, that doesn’t seem as clear-cut as you want to make it out, on a purely structural construal at least: for, to use your nomenclature, there might be beings that believe* they are seeing red, but aren’t actually doing so, if belief* and belief don’t coincide—which they of course only do on a structural account, which is what I don’t accept.

    That’s not too dissimilar to what we do when we’re communicating mathematical ideas by using labels we both already understand.

    The point of the analogy to mathematics is that we don’t need any kind of understanding of the symbols in order to perform any mathematical operation—just a list of agreed upon symbols, together with rules for their manipulation, plus a set of starting expressions is completely enough. That’s why Turing machines work.

    Unless you have very specific questions or particularly ironclad points to make against what I have said, perhaps we ought to drop it.

    Well, if I could ask your reply to just three points, it would be the following ones:

    1) The above mentioned discrepancy between skills and qualia, in that all skills seem to in principle be convertible to structural knowledge, and for some skills (e.g. shoe-lacing), that is even enough to communicate them; while yet, for no quale, such a description exists.

    2) The ‘mentalese’ notion that there could be some ‘intrinsically understood’ set of symbols. As I argued above, every set of symbols can ultimately be understood in terms of differences (converting them to ones and zeros); but of course, a mere pattern of differences does not suffice to pin down what they refer to—and adding causal connections doesn’t help. Yet you say that somehow, neuron firings are intrinsically meaningful to the brain; but of course, those, again, simply give you a pattern of differences, which can be instantiated any old way.

    3) The Putnam/Searle problem of the arbitrariness of computation: basically, every physical system can be seen to instantiate any computation (Putnam proves this explicitly, using finite automata—computationally equivalent to our desktop PCs and human brains—as his model of computation). Thus, any conceivable program ‘runs’ on every system (exceeding a certain minimal complexity). But then, if computationalism is true, everything around us is conscious in every possible way. This has been discussed here in a previous comment thread (on, I think, Pointing?), but that would be a chore to wade through; I think Chalmers provides the best summary (and attempts a rebuttal, which, however, I believe falls flat).

    In any case, I think all you really need to know about my views are in that Scientia Salon article.

    On that one, I part company with you fairly early on: I don’t believe that the laws of physics are computable—actually, I think we know that they’re not: quantum measurement outcomes are genuinely (algorithmically) random—they must be, or else, we’d end up violating special relativity (the argument is complex, but I can point you to a proof if you want), and no computer can produce algorithmically random numbers.

    You might, being a many-worldsian, want to argue that on such a conception, the overall history is deterministic; but even then, choosing one out of the infinite number of branches is exactly equivalent to producing an algorithmically random number, and that is needed to fix your indexical facts, i.e. those anchoring you to here and now.

    Both views are consistent with what we see, and perhaps we even agree on that.

    See, I wouldn’t agree with that: as I pointed out before, I used to be just as strongly convinced that the computationalist view was the best on offer, and could account for all observations. I no longer think that’s true, due to realizing that actually, in every account there exists a hidden circularity, a homunculus that supplies some small dosage of intentionality into the system, hidden well enough that even a careful observer may convince themselves that it isn’t actually there. I believe I’ve been pointing out the homunculi in your account; however, you’re reluctant to acknowledge them—not that I’m accusing you of bad faith, I think you’ve been perfectly honest in your debating; it’s just that I’ve been in your shoes, so to speak, and I know how hard it was to get my boots out of the morass.

    Of course, in doing so, I might just have fallen victim to a mirage, now seeing homunculi where there are none; but I think that I could be convinced by an account that well and truly explains how qualia and intentionality actually arise, and does not vaguely handwave at opaque complexity, an account where there’s no point anymore where I can ask, “and then what happens?”—an account like the one of how to ride a bike, in other words. So I’ll keep at it until either I get that account, or make my reasons for believing it doesn’t exist understood. So either way, it’s a win-win for me. 😉

  64. So I just found out about this new technology enabling colour-blind people to see more colours (apparently; I can only go by their testimony), using a special kind of glasses that apparently works by screening out parts of the spectrum that both red and green cones are susceptible to, leaving only those where the red or green cone has its highest response.

    According to the testimony of the people trying out the glasses for the first time—e.g. here, which is admittedly a promotional video, but there are also similar videos taken privately—, their experience is genuinely that of seeing new colours (one woman points to a patch on the wall, saying ‘I’ve never seen this colour before’).

    So this, to me, seems to be evidence against the thesis that experiencing new stimuli requires some actual brain rewiring, as those reactions are all immediate. Of course, I haven’t seen any proper peer-reviewed testing of the method, so I can’t vouch for it with any kind of certainty, but I think it’s at least an interesting data-point.

    (So I wonder whether the spam-filter will catch up on this comment… It’s probably the most ‘try out this amazing new product!’-post I’ve ever made.)

  65. Hi Jochen,

    > Moreover, all skills, even horrendously complex ones, appear in principle to be treatable in this way.

    No they’re not. For instance, ear-waggling. I was quite old before I discovered I could do this consciously. I don’t believe I could tell you how to do it. To me, it seems like my conscious mind just happened to discover by chance a neural connection, a causal link, that I was not aware of before. I have no way of communicating to another person, or even to a clone of me, how to do this.

    Tying shoelaces is different from this because it can be done relying only on skills we each already have. As such it has the same problem as your analogy to mathematics. Nevertheless, I can make some points which illustrate further problems with your analogy:

    1. Getting a stroke victim with partial mobility and poor motor control to tie shoelaces is not so easy, even though all the damage is in the structure of their brain. They need to relearn the necessary skills with practice. Telling them what to do is not sufficient and they may never get back to full competence. I think such stroke victims are in more or less the position I was in with respect to ear-waggling before I discovered it.
    2. There is a difference between consciously and laboriously tying shoelaces by following a set of instructions and doing it automatically with so-called “muscle-memory”. The latter is a skill in itself and needs to be practiced to be acquired.

    > With a different ‘camera’, so to speak, that encoded flowers in the way we now encode faces, it can be equally well interpreted to be about flowers.

    That seems a little far-fetched if not outright absurd. Faces and flowers are not like, say, temperature and altitude, simple quantities which could easily be mapped from one to the other. For your proposal to work, your ‘camera’ would need to be able to detect flowers in its images and reversibly (without destroying any information) effectively replace them with images of faces. That’s the only way this could work, because without image recognition algorithms, images are encoded according to patterns of light and dark, not by what objects can be discerned within them. It would also have to detect any other images of faces and obfuscate them (or perhaps it could just scramble the whole image so any faces therein couldn’t be seen).

    So it seems that to enable such a consistent transformation between faces and flowers or vice versa you would need to have face-recognition or face-generation built in somewhere, which would mean there would still be a connection to faces (and flowers) somewhere in the system as a whole.

    Nevertheless, let’s allow all that. My answer would be the same as my answer regarding changing the thermostat to respond to a random number or light or whatever. You haven’t defeated the idea that there is reference or intentionality. You’ve just changed what it is directed at. Not unlike the case where another person takes over commenting under your name and I am none the wiser.

    > which means nothing else than that its evolution can be interpreted to compute any computable function at all. All universal computations are equivalent (‘reducible’) to one another.

    That’s not true. Rule 110 can be used to compute any algorithm, but you have to specially craft the initial conditions. It is not true that an arbitrary run of Rule 110 can be meaningfully or consistently interpreted to be calculating any algorithm you please. Universal computation just means that all computable functions can be reduced to a sequence of primitive operations, and once you have a system such as Rule 110 or a Universal Turing Machine or an Intel CPU that can implement these primitive operations in any desired sequence, it can compute any computable function.
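    To illustrate what I mean by reduction to primitive operations (a toy Boolean example of my own, nothing specific to rule 110): with one universal primitive wired in the right sequence, other functions follow.

    ```python
    # Toy illustration of 'primitive operations': with a single universal
    # primitive (here NAND) composed in the right order, XOR falls out.
    def nand(a, b):
        return 1 - (a & b)

    def xor(a, b):
        # XOR built purely from NAND gates, the standard four-gate construction.
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor(a, b))   # prints the XOR truth table: 0, 1, 1, 0
    ```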

    > Causality is simply structure, too

    I’m glad you think so. I certainly do. This is controversial though.

    > that’s why we can use the same mathematics to describe the interactions of billard balls and cars, etc.

    So an abstract mathematical model underdetermines what that model refers to. We can imagine different scenarios that would be satisfied by the same model. I’ll concede this is often the case. However I think that once you have a model that interacts with the thing it is modelling, that changes, because there will only be one system in the world which interacts with it in that way. The representations in intelligent systems are not just abstract mathematical problems on a page but dynamic and in interaction with the real world.

    Let’s say you have a red switch and a green switch and a red light and a green light. Firstly, we may guess that there is a connection between the red switch and the red light and between the green switch and the green light. This is more or less a matter of convention or interpretation. It’s just how a human would be likely to interpret this setup.

    But if switching the red switch actually does light up only the red light and switching the green switch actually does light up only the green light, then this is no longer a matter of mere convention. There is a real connection there. We can say in a completely rigorous, non-arbitrary way that the red switch is “about” the red light and the green switch “about” the green light by virtue of how they are wired together. Newman’s problem only brings up the possibility that we could project different relations onto these entities if we wanted to, but this ignores the fact that, independent of our projections, there is an actual link there which is a little more objective. My position is that the intentionality of intelligent systems arises from the pattern of these causal connections rather than whatever interpretations could be projected onto them by observers.
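    A toy model of the difference I have in mind (the classes and the wiring below are just my illustration, not a claim about any real device):

    ```python
    # The "aboutness" of each switch is fixed by which light it actually toggles,
    # not by the colour labels we happen to project onto the setup.
    class Light:
        def __init__(self, colour):
            self.colour = colour
            self.lit = False

    class Switch:
        def __init__(self, wired_to):
            self.wired_to = wired_to          # the causal connection, not a convention

        def flip(self):
            self.wired_to.lit = not self.wired_to.lit

    red_light, green_light = Light("red"), Light("green")
    red_switch, green_switch = Switch(red_light), Switch(green_light)

    red_switch.flip()
    print(red_light.lit, green_light.lit)     # True False: only the red light responds
    ```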

    > this basically means that you smuggle in intentions from the start—i.e. you suppose we could somehow jump out of the system and identify that tree for what it truly is,

    No, I don’t. I’m assuming there is some state of affairs which more or less corresponds to my mental model of the tree, and I’m assuming that when we discuss that same tree this is the same state of affairs that roughly corresponds to your mental model of the tree. I do not suppose that I can jump out of the system and verify that this is so. I don’t need to anyway since all I need is my mental model and the fact that you seem to be understanding what I’m saying when I talk about the tree. It could be that we’re both hallucinating or that the whole world is an illusion and we are both brains in vats. This would make no difference to our subjective experience, but it seems not to be parsimonious or plausible to me. All I’m really insisting on is that I have a mental model that has internal semantics. If we can agree for the sake of argument that there actually is an external world to which I have intentions, then I’m giving an account of how my mental model more or less relates to that external world in terms of criterion-satisfaction and causal connection.

    > Well, I thought I’d explained that:

    Yes, there’s a lot of that going around! 🙂 I also feel like I’ve already explained points you keep bringing up.

    > your intention refers to me qua poster “Jochen” on Conscious Entities.

    Which is just another way of saying that my intentionality is directed at whatever entity satisfies the criterion of being a poster “Jochen” on Conscious Entities. Exactly parallel to what I’m saying about thermostats. Their intentionality is not directed at temperature qua the average kinetic energy of particles in a volume, but at temperature qua whatever it is that keeps rising and falling around a threshold. Again, intentionality is about criteria satisfaction. Reference is to whatever it is that happens to meet those criteria. Changing what satisfies the criteria changes the reference.

    > Other than that, there simply is no aboutness

    Well, I disagree, clearly. I have given an account of what aboutness is. I’ll note that you don’t appear to have a competing account. That is not in itself any guarantee that my account is right, but I personally don’t see a problem with it apart from the fact that it is unintuitive and many people refuse to accept it.

    > This fully characterizes the operation of the thermostat in structural/informational terms.

    OK. So what?

    > (i.e. join them via logical or), and you’ll have a realization of the structure of the thermostat.

    Fine. So, this harks back to my original answer to Newman’s problem. Simple systems can be embedded in more complex systems, like the triangles which are the faces of a tetrahedron. That doesn’t mean the triangles (or the thermostat algorithm) cannot be considered as entities in their own right.

    > Well, but she believes it to be appropriate, no?

    She believes it to be appropriate but it isn’t, because she does not in fact have a connection between the symbol “red” and the state her mind would be in if she were perceiving or imagining red.

    > But according to you, if I don’t misunderstand, that belief is equivalent to actually knowing what red is like

    No, I would say they are different beliefs.

    Let’s say we label as X the brain state I am in when I perceive red and Y as the brain state I am in when I am imagining red. In state X, my visual cortex is representing the proposition “I am actually seeing red right now”. The proposition has these semantics by virtue of causal connections between the symbols and actual visual stimulus. State Y has some structural similarities to and causal connections with X but is different enough so that it is not usually mistaken for it. My belief that I know what red is like is a connection between the symbol “red” and states X and Y.

    If I were Mary and I did not know what red looked like, then my belief that I know what red looks like could be a connection between the symbol “red” and some other state or states that have nothing to do with the experience of red, by which I mean having no causal connection to the perception of a red stimulus (say state Z).
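    (To put that toy picture in mock-code—the state names are hypothetical placeholders of mine, obviously nothing like real neuroscience:)

    ```python
    # Toy version of the X/Y/Z story: the belief "I know what red is like" is an
    # association from the symbol "red" to some brain states; what makes it
    # appropriate is whether those states are causally linked to red stimuli.
    states_caused_by_red_stimulus = {"X", "Y"}    # perceiving red, imagining red

    my_association = {"red": {"X", "Y"}}          # symbol tied to the right states
    marys_association = {"red": {"Z"}}            # symbol tied to an unrelated state

    def belief_is_appropriate(association):
        return bool(association["red"] & states_caused_by_red_stimulus)

    print(belief_is_appropriate(my_association))      # True
    print(belief_is_appropriate(marys_association))   # False
    ```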

    > just a list of agreed upon symbols, together with rules for their manipulation, plus a set of starting expressions is completely enough.

    But that just is the understanding you need.

    > 1) The above mentioned discrepancy between skills and qualia,

    Answered. Ear-waggling.

    > 2) and adding causal connections doesn’t help.

    Yes it does. Green switch/red switch. Also reference by criteria satisfaction (substituting “Jochen” for another person).

    > 3) Thus, any conceivable program ‘runs’ on every system (exceeding a certain minimal complexity)

    I often allow this for the sake of argument (as I will in a moment), but I hope the proof of this isn’t based on the same misunderstanding you exhibit of rule 110. I should probably go read the paper.

    > But then, if computationalism is true, everything around us is conscious in every possible way.

    Not my computationalism, not in light of the MUH, because I attribute consciousness to abstract structures such as algorithms rather than to the physical objects that instantiate them.

    Say within the vibrations of the molecules of the wall in front of me we could pick out some way of mapping these to a simulation of the mind of a person. For that mind to be conscious, it would have to be processing information, so it would need some sort of simulated environment with internal time and so on. Let’s go the whole hog and say that the wall in front of me can be seen as a simulation of the whole room I’m sitting in, as outlined in my article on Scientia Salon.

    Now, in my view, that means that this *structure* is conscious but it doesn’t mean the *wall* is conscious. In light of the MUH, that just means that somewhere in the multiverse there is a conscious structure such as that being explored by the wall-simulation (and indeed there is — I am that consciousness). Because that’s what I regard simulations as… not really realisations of mathematical objects which exist independently of simulation but reflections of and explorations of mathematical objects.

    So, in this view, if Bostrom is right and it is plausible that we are all in a computer simulation, it wouldn’t matter if they pulled the plug. We would continue to exist. The simulation doesn’t add to the amount of consciousness in the multiverse, it’s just a window into a world that exists independently. Similarly the destruction of the wall doesn’t matter.

    The same goes for the destruction of my brain. If you destroy my brain, and there is some possible world where you didn’t destroy my brain, then I will continue to live there. Nevertheless I prefer not to be destroyed because it would cause grief for my loved ones in this world.

    On the other hand, on this view, instant destruction of this whole planet would be a matter of no import. (That’s not to say that I’m arrogant enough to destroy the planet on a whim. I’m not arrogant enough not to allow a sliver of doubt that there’s some flaw in my reasoning. A good heuristic is “if in doubt, try not to kill anyone”!).

    But there’s still an important respect in which my brain is different than the wall. My brain is interacting with the environment (this universe) in a consistent, directed and meaningful way. There are real causal connections there. But there are no meaningful causal connections between what happens in the simulation supervening on wall vibrations and the events in the real world. The algorithm we are projecting on the wall was decided beforehand and a mapping delicately and deliberately contrived so as to allow the interpretation Searle and Putnam desire. There isn’t really any connection whatsoever between the real world and the simulated wall-world. (We can’t actually use the wall as a general computation device without such predetermined contrivances — if we could we wouldn’t need computers.)

    This means that if the molecules had vibrated differently, the same story would have played out within the simulation, but the mapping Searle would have chosen would have been different. As such we can regard the people within wall-world as causally disconnected, not really a part of our universe. We can afford to pay them no heed in a way we can’t for minds realised in brains. We are part of the same structure, so what we do affects each other.

    > no computer can produce algorithmically random numbers.

    Fine. But I think I can make an argument (don’t want to get into it) that there is no practical difference between algorithmically random numbers and sufficiently well produced pseudorandom numbers, not least because every sequence of pseudorandom numbers is consistent with some random sequence.

    > but even then, choosing one out of the infinite number of branches is exactly equivalent to producing an algorithmically random number

    Fine, but we don’t have to choose. According to standard MWI, all of these branches contain copies of me and all of them are conscious. Whatever choice there is is illusory. You might as a practical matter want to choose one to avoid having to simulate all of them, but since whatever one you choose is supposed to be conscious (since they’re all conscious) that’s just fine.

    About the colour-blind glasses, that’s very very interesting and I thank you for pointing it out. I would indeed count it as very good empirical evidence that I am wrong to suppose Mary couldn’t see colour right away.

    But I don’t think anything important hinges on that. Remember that structural changes can simply be a different pattern of neuron firing.

  66. For instance, ear-waggling. I was quite old before I discovered I could do this consciously. I don’t believe I could tell you how to do it.

    I can tell you how you do it: a nerve impulse is transduced to some motor neuron, which induces a contraction in some muscle or other, which has the effect of moving your ears.

    No, of course that’s not a very fine-grained or complete story (and maybe not even a correct one), but it’s clear that such a story exists, and completely explains ear-waggling without any residual mystery. There’s no ‘and then a miracle occurs’ between the description of physical processes and the desired effect occurring. That’s how we know that it can be completely captured in terms of structure—and any analogous system can implement it just as well.

    But again, no even remotely similar story exists for qualia, not even on the level of detail I have just provided.

    Tying shoelaces is different from this because it can be done relying only on skills we each already have. As such it has the same problem as your analogy to mathematics.

    But again, mathematics does not depend on any previously shared knowledge, interpretation, or other basis. It can be completely captured as, and transmitted by, symbols on a page.

    Getting a stroke victim with partial mobility and poor motor control to tie shoelaces is not so easy, even though all the damage is in the structure of their brain.

    Well, and getting a quadriplegic to ride a bike is even harder. But the fact that certain systems may not have the capacity to implement specific recipes does not establish that the skills described by these recipes aren’t in principle communicable—if an appropriate system is capable of implementing the recipe, then it suffices in principle to confer the skill.

    Faces and flowers are not like, say, temperature and altitude, simple quantities which could easily be mapped from one to the other.

    Both are simply excitation patterns of, say, a grid of 1024*768 cells. As such, there exists a mapping that associates to each face a flower, just as certainly as there exists a mapping that associates every 2-bit string with every other 2-bit string.
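    To make that concrete in the 2-bit case (a toy sketch; the particular pairing is arbitrary, which is exactly the point):

    ```python
    # Any bijection on fixed-length bit strings will do, and nothing in the
    # strings themselves says which side is 'face' and which is 'flower'.
    from itertools import product

    patterns = ["".join(bits) for bits in product("01", repeat=2)]   # 00, 01, 10, 11

    # One arbitrary bijection among the 4! = 24 possible ones:
    face_to_flower = {"00": "11", "01": "10", "10": "01", "11": "00"}
    flower_to_face = {v: k for k, v in face_to_flower.items()}

    for p in patterns:
        assert flower_to_face[face_to_flower[p]] == p    # reversible, no information lost
    print(face_to_flower)
    ```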

    You haven’t defeated the idea that there is reference or intentionality. You’ve just changed what it is directed at.

    No, I’ve shown that what it is directed at is arbitrary, merely dependent upon interpretation. The face-recognition algorithm gets handed a pattern of bits; but there is no sense in which this pattern of bits means ‘face’ anymore than it means ‘flower’, or anything else. This interpretation is due to intentionality derived from our own intentionality.

    That’s not true. Rule 110 can be used to compute any algorithm, but you have to specially craft the initial conditions.

    If you fix the interpretation, yes. But as I said, all universal computations are reducible to one another—thus, if you change the interpretation—if you look at the system in a different way—you change the computation.

    We can say in a completely rigorous, non-arbitrary way that the red switch is “about” the red light and the green switch “about” the green light by virtue of how they are wired together.

    But you bring in concepts that you only have access to through your own intentional mental representation of the world, namely the switches and the lamps. Think about the switch/lamp system itself: it can be described in the same way as the thermostat/heater, that when one is in a given state, the other is in that same state.

    Or, take a system that doesn’t possess intentionality, and tries to discover the causal connection between these things. It is provided with data in some form from the environment; but it is not provided with intentional concepts like ‘switch’ and ‘lamp’. It could very well discover the causal relations in the data that it is provided with; but it could not, simply from there, get to the concepts of ‘switch’ and ‘lamp’.

    This is just what I mean when I say that we are often fooled by the transparency of our own mental states—we take concepts as given, while concepts don’t occur outside of our heads. So, in order to truly derive intentionality, you have to start without it—not with switches and lamps, but with data a system is provided with.

    I don’t need to anyway since all I need is my mental model and the fact that you seem to be understanding what I’m saying when I talk about the tree.

    Yes, you have that understanding, and I share it, because we’re both intentional systems. But a non-intentional system doesn’t share it, so, to itself, its intentionality can’t be grounded in some form of a mental model of the tree—what it gets is merely a stream of data, of differences, which don’t fix the content.

    Their intentionality is not directed at temperature qua the average kinetic energy of particles in a volume, but at temperature qua whatever it is that keeps rising and falling around a threshold.

    First of all, the thermostat knows nothing of ‘rise’ or ‘fall’. It knows, if that is the right word, the causal connection between its sensor and the heating mechanism, that when one is in a certain state, then so is the other. It is not directed at any quantity: it merely embodies a certain causal relationship by means of its construction. But what this relationship pertains to—what would be the intentional object in an intentional system—is completely absent. It’s like a paint-by-numbers picture where nobody has told you what colours the numbers represent, while an intentional mental state is the full picture—even if the picture does not uniquely specify its object.

    No, I would say they are different beliefs.

    So, there’s a way to believe that one is experiencing red, and be right, and a way to have that same belief (as regards its propositional content) and be wrong.

    If I were Mary and I did not know what red looked like, then my belief that I know what red looks like could be a connection between the symbol “red” and some other state or states that have nothing to do with the experience of red, by which I mean having no causal connection to the perception of a red stimulus (say state Z).

    Well, earlier on, you wanted to argue that seeing red is actually merely the belief of seeing red, that thus the mental state of believing one sees red is the mental state of seeing red. But now, you’re talking about the belief of seeing red and some state of mind that means actually seeing red, as two different things. So which of the two is it?

    But that just is the understanding you need.

    There’s no ‘understanding’ there; the symbols are pure syntax, without any semantics. Again, that’s the strength of formalism in mathematics and computation.

    Answered. Ear-waggling.

    And yet, in ear-waggling, one can still easily conceive of a story which gaplessly accounts for how ear-waggling works. Tell me such a story re qualia, and I’ll relent.

    Yes it does. Green switch/red switch.

    To know that something is a ‘green switch’ or a ‘red switch’ presupposes intentionality, hence, this appeal is a good example of just the sort of implicit homunculus I mean.

    If you destroy my brain, and there is some possible world where you didn’t destroy my brain, then I will continue to live there.

    I always thought of this as a supremely bleak version of immortality: on it, in your subjective experience, you will outlive anybody you care for, and likely will be trapped in a state of permanent agony from all the things that almost killed you. While I admire your consistency in accepting these implications, I must say that it alone would be enough for me to at least hope that maybe my view isn’t the right one.

    But there’s still an important respect in which my brain is different than the wall. My brain is interacting with the environment (this universe) in a consistent, directed and meaningful way.

    On the MUH, I don’t see how this difference exists: you are a mathematical structure interacting with other mathematical structures; the wall-people likewise are mathematical structures interacting with other mathematical structures.

    But I think I can make an argument (don’t want to get into it) that there is no practical difference between algorithmically random numbers and sufficiently well produced pseudorandom numbers, not least because every sequence of pseudorandom numbers is consistent with some random sequence.

    Well, as I said, if the universe produced merely pseudorandom numbers, then faster-than-light information transfer is possible. (If you’re interested, here is the argument I’m referring to.)

    Remember that structural changes can simply be a different pattern of neuron firing.

    I’m sorry, but this seems, to me, very much like you’re changing your ad-hoc hypothesis to a new one, now that the old one has been shown infeasible.

    And besides, this one doesn’t do quite the same work: because even though I can’t physically master the task of riding a bicycle just by reading about how one does it, I can easily imagine myself doing so; so why can’t I (apparently) produce this same kind of understanding with respect to qualia?

  67. on qualia and causal effects – would the strain to remember something count?

    just spitballing here…

  68. Hi Jochen,

    > I can tell you how you do it: a nerve impulse is transduced to some motor neuron, which induces a contraction in some muscle or other, which has the effect of moving your ears.

    But how do I fire the neuron? How can I learn to do it if I can’t do it already? Suppose all I need to do to visualise red is to fire some other neuron? Are these not analogous situations?

    > if an appropriate system is capable of implementing the recipe, then it suffices in principle to confer the skill.

    And if the system is capable of implementing the recipe of “imagine red”, and “imagine yellow” and “imagine a square” and “imagine a circle” then it suffices in principle to confer the skill of imagining a red circle embedded in a yellow square. What you cannot communicate are the primitive mental operations used to build up more complex composite activities. You cannot tell someone how to raise their arm if they cannot do it already, any more than you can tell them how to imagine red if they cannot do it already.

    > As such, there exists a mapping that associates to each face a flower,

    Yes there does. But to produce such a mapping as you would have to do to build such a camera, you would have to know what a face or a flower looks like. Even if that mapping just happens to have arisen by chance, that doesn’t mean there isn’t intentionality implicit in it. If a human being with true beliefs were to suddenly form from chance quantum fluctuations, then that human being would also have intentions even though they arose by chance.

    > No, I’ve shown that what it is directed at is arbitrary, merely dependent upon interpretation.

    You have done no such thing, I’m afraid. You have shown it is dependent on context, and you have done so by changing the physical context of the operation of the system.

    > but there is no sense in which this pattern of bits means ‘face’ anymore than it means ‘flower’, or anything else.

    It doesn’t mean ‘face’ qua the front of a human being’s head. But it does mean ‘face’ qua a certain visual arrangement of parts.

    > But as I said, all universal computations are reducible to one another—thus, if you change the interpretation—if you look at the system in a different way—you change the computation.

    This is not true. Give me a reference if you like. Yes, you can interpret a run of 110 as being any algorithm if you have an interpretation which can change arbitrarily for every cell so as to map the output to some predetermined algorithm you have in mind. But then you could interpret any pattern at all (e.g. blank, or chess board, or random) to be any algorithm if that’s what you’re doing. There is nothing special about rule 110 if that’s what you have in mind. That you bring up rule 110 at all implies that you must be confused.

    > Think about the switch/lamp system itself: it can be described in the same way as the thermostat/heater, that when one is in a given state, the other is in that same state.

    And that’s not a bad approximation of how aboutness works. When I think I am holding my mouse, my mouse is in the state of being held by me. That’s how my mental representation of holding a mouse comes to refer to actually holding a mouse. The states of the two things tend to be correlated.

    > but it could not, simply from there, get to the concepts of ‘switch’ and ‘lamp’.

    But it doesn’t have to have such concepts in order to have a reference. Again, and this point keeps coming up, it doesn’t refer to ‘switch’ qua a physical device built by humans to activate or deactivate other devices, it refers to it qua the thing that changes the state of the lamp. And it doesn’t refer to the lamp qua a device built by humans for the purpose of emitting photons, it refers to it qua that whose state is changed by the switch. We refer to things qua the roles they play in our mental representations. This is how I can refer to you without knowing much about you. The same is true of simple systems — they refer to things qua the roles they play in those systems but they don’t know anything about them outside of these roles. We delude ourselves into thinking we have true intentionality while computers don’t because the roles things play in our minds are usually very complex and detailed. We see that computer systems lack this detail (“Sure it knows that your ‘gender’ is ‘M’, but it doesn’t know what ‘gender’ means”) and falsely conclude that they don’t have intentionality.

    > First of all, the thermostat knows nothing of ‘rise’ or ‘fall’.

    That’s just an irrelevant detail. For purposes of illustration, I was allowing myself the luxury of imagining a slightly more detailed thermostat that actually digitally measures the temperature and compares it to a threshold. It doesn’t know ‘rise’ qua ‘movement in an upward direction’, it knows up and down qua ‘being above or below a threshold’.

    > It’s like a paint-by-numbers picture where nobody has told you what colours the numbers represent, while an intentional mental state is the full picture

    It’s like a paint-by-numbers picture in that you’re missing some information, some detail, but you can still refer to the numbers qua the role they play in that system. You can know, for example, that this region is the same colour as that region. The full picture you’re talking about is just a (presumably human) representation which has more detail. That’s the only difference (well, as well as being situated in a more complex information processing system with memory, intelligence, goals and so on).

    > Well, earlier on, you wanted to argue that seeing red is actually merely the belief of seeing red,

    Your criticism relies on confusing the sign with the signified. There is the belief that you are seeing “red”, i.e. the belief that what you see is what people refer to as “red”, and there is the belief that you are seeing red, i.e. the proposition represented by your visual cortex on receiving light of a certain wavelength. A Chinese person does not believe he is seeing “red”, he believes he is seeing “红”, but he does believe he is seeing red.

    Now, assuming you don’t speak Chinese, as far as you know my translation of red into that language could be correct or incorrect. I might have tricked you by using the word for blue instead. So when you see red, the truth of the proposition that you are seeing “红” is contingent on my honesty while the truth of your seeing red is not. So even though you know what red looks like, we can imagine a situation where you falsely believe yourself to be seeing “red”, i.e. if you have been taught to use language incorrectly, but we cannot imagine a situation where you falsely believe yourself to be seeing red while you are not (assuming hallucinating counts as seeing — perhaps I should say experiencing instead).

    > There’s no ‘understanding’ there; the symbols are pure syntax, without any semantics.

    What about the rules? The rules are the semantics, more or less.

    > Tell me such a story re qualia, and I’ll relent.

    I have given you a story. You believe yourself to be experiencing qualia so you report that you are experiencing qualia and wonder how it is that you are experiencing qualia. Intentional states are cashed out in terms of causal relationships and so on as I’ve been describing. I can’t give you a story about qualia which is exactly the same as the story about motor control because experiencing qualia is not exercising motor control.

    > To know that something is a ‘green switch’ or a ‘red switch’ presupposes intentionality

    Qua!

    > I always thought of this as a supremely bleak version of immortality

    I agree. It is bleak. I’m not thrilled about it. It would be nice to be wrong.

    > On the MUH, I don’t see how this difference exists: you are a mathematical structure interacting with other mathematical structures; the wall-people likewise are mathematical structures interacting with other mathematical structures.

    It is not an objective difference but a subjective one. It means they are not real to me as you are and I am not real to them as their friends are.

    > Well, as I said, if the universe produced merely pseudorandom numbers, then faster-than-light information transfer is possible.

    Perhaps it is. But perhaps not in a useful way. I am interested but with all this correspondence and limited time I think I’ll have to resist the urge to get into that now. I don’t think it’s terribly important. I’m not insisting that the universe is actually in a computer after all. I’m just arguing that a local system such as a room with a conscious person could be simulated in principle. I don’t think the possibility or impossibility of FTL information transfer within the simulation would have much bearing on anything.

    > I’m sorry, but this seems, to me, very much like you’re changing your ad-hoc hypothesis to a new one, now that the old one has been shown infeasible.

    No, I’m not changing anything core. Perhaps you didn’t understand me. The hypothesis is the same. You can’t tell me how to experience red, you can’t communicate that quale to me, because you can’t tell me how to put myself in a specific brain state. I can’t “soft-rewire” my brain at will any more than I can “hard-rewire” it. I can’t tell specific neurons to fire just because I want them to. The only way to cause this to happen is to actually go look at something red.

    This is exactly the same reason why you can’t tell me how to waggle my ears if I can’t.

    > even though I can’t physically master the task of riding a bicycle just by reading about how one does it, I can easily imagine myself doing so;

    I don’t see how this argument is supposed to work. I think this is how I got trapped into talking about the qualitative experience of riding a bike. You can imagine the subjective experience of riding a bike by composing it from familiar subjective experiences. You can imagine the objective mechanics of riding a bike by visualising a third person perspective of a bike rider.

    The analogous situation for qualia would be to point out that you can compose subjective experiences by rearranging familiar ones. You can imagine a red circle surrounded by a yellow square even if you have never seen such a thing. This is what you do when you read a book. From the objective side, you can imagine an organism that can see colours you cannot (e.g. the mantis shrimp) and you can imagine the third person perspective story of how this works in terms of photons and photoreceptors and neurons and so on.

    You cannot imagine what it is like to be a mantis shrimp because you don’t have the primitive subjective experiences needed to compose that experience. Your brain has no faculty to represent the proposition that it is experiencing the colour arising out of mixing photoreceptor pigment #3 with photoreceptor pigment #5.

  69. But how do I fire the neuron?

    This is the wrong question. The only question that needs to be asked, and can, in principle, be answered is which neuron needs to fire. Then, a system with the capacity to fire its neurons at will (‘re-wire its brain’) will be able to implement the recipe and waggle its ears.

    Suppose all I need to do to visualise red is to fire some other neuron? Are these not analogous situations?

    If that supposition is true, then yes, they are. But whether or not it is is what we’re debating here. That is, I then want the story of how firing that neuron gives rise to a red-experience, in the same way as one can tell a story about how firing a given neuron gives rise to ear-waggling.

    And if the system is capable of implementing the recipe of “imagine red”, and “imagine yellow” and “imagine a square” and “imagine a circle” then it suffices in principle to confer the skill of imagining a red circle embedded in a yellow square.

    Yes, that’s all I’ve been saying, I’m glad that we’re finally on the same page there. The problem is just that nobody has any idea of how these recipes could look, while for all other skills I can imagine, coming up with at least a coarse-grained version of such a recipe seems ultimately rather trivial.

    But to produce such a mapping as you would have to do to build such a camera, you would have to know what a face or a flower looks like.

    To produce a mapping that associates a given pattern of bits with a face, you have to have the very same knowledge. That’s just what I’m saying: how to interpret the data is arbitrary; it’s only our knowledge of the world, of our intentional mental content, that fixes this mapping.

    Suppose I give you a grid of bits. Does this represent a face? Obviously, there is no way for you to tell. But you can tell that when a face is present and this pattern occurs, it does pertain to that face—but this obviously depends on your knowledge that a face is present. But this is the intentional content of your mind. So, the association of some bit pattern with a face is only possible because you can ground it in your intentionality.

    You have shown it is dependent on context, and you have done so by changing the physical context of the operation of the system.

    I have shown that the context in each case in which you want to say ‘this bit pattern is about this real-world thing’ always includes your own intentionality in the identification of the real-world thing. If you had no own intentionality, then the real-world thing itself would only be a bit-pattern; and ‘this bit-pattern refers to that bit-pattern’ doesn’t suffice for providing intentionality, since it never bottoms out. Think about the way we define words: if all we had were other words, then no word would acquire meaning at all. So, we need to be able to point at things in the world in order to ground meaning; but of course, we never really point at things in the world, but merely, at the intentional content of our minds. Thus, we need intentionality present a priori in all such stories.

    But it does mean ‘face’ qua a certain visual arrangement of parts.

    Well, then again: I give you a bit pattern. How do you tell if it represents a face?

    This is not true. Give me a reference if you like.

    You only need a mapping that takes every line of 110’s evolution to the corresponding line in a different evolution—or any other evolution of some universal machine.

    There is nothing special about rule 110 if that’s what you have in mind. That you bring up rule 110 at all implies that you must be confused.

    As I said, I brought it up merely because it’s the simplest example of universal computation that I know of.

    When I think I am holding my mouse, my mouse is in the state of being held by me. That’s how my mental representation of holding a mouse comes to refer to actually holding a mouse. The states of the two things tend to be correlated.

    Yes, but note that you have access to the things in themselves, not merely to their causal relationship. If you had access merely to their correlation, then you’d lack the relata.

    But it doesn’t have to have such concepts in order to have a reference.

    But what does it refer to? All that there is, is again correlated on/off states—00 and 11. So there’s among other things not even a distinction between the switch/lamp and the thermostat, and all other systems on which such a structure can be imposed (which is virtually all systems).

    You keep pointing to opaque complexity, claiming that if you just pile up enough such empty relations, somehow, relata will emerge—‘and then a miracle occurs’. But once more, the problem is exactly the same as decoding a message in an unknown cipher—there’s just no ‘there’ there.
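    To see just how little the bare structure fixes, consider a toy version (a sketch with made-up histories, nothing more):

    # two different physical set-ups, recorded only as correlated on/off pairs
    switch_and_lamp       = [(0, 0), (1, 1), (0, 0), (1, 1)]
    thermostat_and_boiler = [(1, 1), (0, 0), (1, 1), (0, 0)]

    def structure(history):
        # everything the 'empty relations' give you: which joint states occur
        return sorted(set(history))

    print(structure(switch_and_lamp) == structure(thermostat_and_boiler))  # True

    The two histories are structurally identical (only 00 and 11 ever occur), so nothing in the relations themselves says whether they are ‘about’ lamps, boilers, or anything else.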

    This is how I can refer to you without knowing much about you.

    But nevertheless, you refer to me as a concrete entity—ultimately, clad in the words that you read. Who or what originates these words, you don’t know, but the words themselves are not things that can be satisfied by several different entities, but are just the words you read. How would you give a relational account of those? They’re just what is given to your experience—they’re what subvenes the relations present within your experience, not constructed from them.

    it knows up and down qua ‘being above or below a threshold’.

    It doesn’t even know above and below, merely on and off, ‘same’ and ‘different’. Seriously, it’s exactly details like that, which you consider irrelevant—‘that’ll get sorted out somehow’—that make you persuade yourself that maybe, somehow, one could build higher-order representations using the primitives you have. But those primitives only include differences; and how things differ from one another does not suffice to fix what those things are.

    That’s the only difference (well, as well as being situated in a more complex information processing system with memory, intelligence, goals and so on).

    Again, pile on complexity, and something somehow happens. This, to me, is simply not convincing enough.

    There is the belief that you are seeing “red”, i.e. the belief that what you see is what people refer to as “red”, and there is the belief that you are seeing red, i.e. the proposition represented by your visual cortex on receiving light of a certain wavelength.

    The latter isn’t a belief, though; it’s not accessible in its propositional content. It’s simply the brain state one is in when one is seeing red; this then may cause the belief that one is seeing “red”, but it isn’t a belief in any sense I’d use that word.

    You were proposing that the belief, caused by being in that state, suffices for the quale; but the belief need not be caused by that state, and hence, can be mistaken. My proposition was that experiencing the quale causes the belief; if you’re now saying that it’s in fact the underlying state that causes the quale, and hence, the belief, you’re in fact arguing for a different thesis than you claim you’d argue for.

    What about the rules? The rules are the semantics, more or less.

    No, the rules are the grammar, the syntax.

    I have given you a story. You believe yourself to be experiencing qualia, so you report that you are experiencing qualia and wonder how it is that you are experiencing qualia. Intentional states are cashed out in terms of causal relationships and so on, as I’ve been describing.

    But the story has gaps in all the important places: you haven’t shown precisely how intentional states—say one with the propositional content ‘there is a bicycle’, or whatever else you think fit—come about from the mere pattern of differences causality yields; you haven’t shown how beliefs come about; and you haven’t shown that belief in experiencing a quale is sufficient for the quale. The most you’ve done is handwave in the direction of added complexity.

    I don’t think the possibility or impossibility of FTL information transfer within the simulation would have much bearing on anything.

    I think that this would be very relevant indeed, since FTL implies time travel, and in a universe in which all conditions eventually come to pass, so will conditions in which a temporal loop prevents the creation of the universe; so I believe basically that the absence of this is an important consistency condition on universes. But yeah, I guess this is too much speculative metaphysics to get sorted in this conversation.

    This is exactly the same reason why you can’t tell me how to waggle my ears if I can’t.

    But I did! Maybe this doesn’t help the average human being to actually waggle their ears; but the mere fact that they, due to biological quirks, can’t implement the recipe doesn’t imply that there is something fundamentally incommunicable.

    From the objective side, you can imagine an organism that can see colours you cannot (e.g. the mantis shrimp) and you can imagine the third person perspective story of how this works in terms of photons and photoreceptors and neurons and so on.

    (Actually, I think recent research shows that mantis shrimp don’t see that many colours: each of their receptors just registers one colour, but they don’t integrate differential responses into a whole colour space. But of course, that’s neither here nor there.)

    Anyway, my problem is that no, I can’t imagine that story at all—well, of course, the third-person perspective, that’s no problem, but the difficult part is precisely why that should be associated with any first-person viewpoint at all, why the third person isn’t all there is. If this can be captured in structural terms, then there should be a story that should be comprehensible in just the same way as the story of ear-waggling.

  70. Hi Jochen,

    > This is the wrong question.

    But this is more or less what you have been asking with regard to qualia. Or at least that’s how it seems to me. I understand that this is not the question you think you are asking.

    > That is, I then want the story of how firing that neuron gives rise to a red-experience

    There is no red experience, if the red experience is a real quale with meaning independent of its causal role. There is only the belief that you are having a red experience. So before I can explain qualia I need to get you to understand how information processes can have intentional states.

    > Yes, that’s all I’ve been saying, I’m glad that we’re finally on the same page there.

    The problem is you’re comparing imagining riding a bike to imagining red when you’ve never seen red, when you should be comparing it to imagining a red circle in a yellow square when you’ve never seen a red circle in a yellow square (but you have seen red things and yellow things and circles and squares).

    A much better comparison is waggling your ears. If you have never waggled your ears, you can’t easily see how to waggle your ears, any more than you can clearly imagine wagging a tail or moving a third arm. To the extent that you imagine it at all, you imagine it by analogy to moving some other body part. We can do that with colour too, I can dimly imagine what it would be like to perceive a fourth primary colour by analogy to the primary colours I can already perceive.

    > Suppose I give you a grid of bits. Does this represent a face? Obviously, there is no way for you to tell.

    Well, there is a way to tell if it represents an X if there is an algorithm for X detection. The algorithm has intentionality towards objects that satisfy the criterion of being detected as X by the algorithm. It doesn’t have intentionality towards these objects qua any role but this, just as my intentionality towards you is role-specific.

    > you want to say ‘this bit pattern is about this real-world thing’ always includes your own intentionality in the identification of the real-world thing.

    I don’t actually want to say “this bit pattern is objectively, factually, indisputably about this real-world thing”. I’m really just saying it is correlated with the real-world thing. But then I would say the same of my own mental representations, so what I’m saying is this bit pattern is as much about this real world thing as my own mental states regarding it are about it.

    > and ‘this bit-pattern refers to that bit-pattern’ doesn’t suffice for providing intentionality, since it never bottoms out

    But it does. Because what you have is a deeply interconnected mental model of the world. What we are aware of in our heads can be thought of as a mental simulation of the world around us. It is only approximately correlated with the real world, and the intentions we have in our heads are only indirectly and imperfectly directed at real objects in the real world. We have intentions toward real objects insofar as these real objects are correlated with our representations of them. There is no fact of the matter here, no hard objective intentional links, and there don’t have to be because approximate is good enough.

    > I give you a bit pattern. How do you tell if it represents a face?

    By using a “face”-detection algorithm, which can detect “faces” (whatever they are) in bit patterns. This algorithm then has intentionality towards all objects which fulfil the criterion of being “faces”, i.e. objects which pass the “face”-detection test when captured in bits by that system. In the real world and in practice this picks out human faces, just as my intentionality towards “Jochen” picks out you (whoever you are).
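    To make that concrete, here is roughly what such an algorithm looks like in practice. This is only a sketch: it assumes the OpenCV library and some hypothetical image file called photo.jpg, and the bundled Haar cascade is just one off-the-shelf way of doing “face”-detection, not the point itself.

    import cv2

    # load a stock "face"-detector shipped with OpenCV
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # the grid of bits
    faces = cascade.detectMultiScale(image, scaleFactor=1.1, minNeighbors=5)

    print("regions passing the 'face' test:", len(faces))

    The algorithm answers only the narrow question “does this region pass the test?”, which is exactly the role-specific sense of detection I mean.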

    > Yes, but note that you have access to the things in themselves,

    Aha. Very good. You see, that’s the assumption I disagree with. I don’t think I have direct access to the things in themselves. I think I only have a mental model of them, and, e.g. mouse-detection algorithms embedded in my brain which can represent the proposition that I am perceiving a mouse in my sense data. I do think there is a real mouse out there, but only because this is parsimonious. I am against naive realism, the view that we can perceive (or know) things directly. The difference between me and a typical computer is only that my mental model of a mouse is richer, i.e. I know more stuff about mouses than almost any computer system does.

    Here you will no doubt again say:

    > Again, pile on complexity, and something somehow happens.
    or
    > You keep pointing to opaque complexity, claiming that if you just pile up enough such empty relations, somehow, relata will emerge—‘and then a miracle occurs’.

    But that is not at all what I am saying. I’m not saying there is a magic threshold of complexity. I’m saying there are differences in degree, and that these are important and need to be acknowledged. The difference between me and a thermostat is not just that I have a much richer representation of what temperature is but that I use that information in more complex ways. But note that I do insist that a thermostat has intentionality too, so I’m not using complexity to explain how intentionality arises by magic.

    > You only need a mapping that takes every line of 110’s evolution to the corresponding line in a different evolution—or any other evolution of some universal machine.

    > As I said, I brought it up merely because it’s the simplest example of universal computation that I know of.

    If that’s what you’re doing then the fact that 110 is a universal computer is irrelevant. You could just as well have a mapping from each value in a sequence of successive natural numbers (which does not constitute a universal computer) to the evolution of a universal machine. Again, the universality of computation just means that any computer, given enough time and space, can implement any computable function. It does not mean that all computations are equivalent. I don’t think this helps your case at all.

    > but it isn’t a belief in any sense I’d use that word.

    Well, maybe that’s right. Maybe it isn’t a belief in the way we ordinarily use the word, because we don’t ordinarily describe our qualia in the language of second order intentions. I’m just pointing out that if I can make you believe you are experiencing red (and you and I both agree about what it is to experience red, so there is no misunderstanding of what “red” refers to), I don’t see how this could come to pass unless you were actually experiencing red, whether because of a real red object in your vision or because you were hallucinating.

    > You were proposing that the belief, caused by being in that state, suffices for the quale; but the belief need not be caused by that state, and hence, can be mistaken.

    No, I’m saying that these are different beliefs that are rendered into the same English sentence because the speaker is confused about what “red” means.

    I mean, a monkey can’t be *wrong* about what red looks like because a monkey doesn’t have a word for red or higher order intentions about qualia (presumably). But a monkey can know what red looks like, because knowing what red looks like just means being familiar with the mental state corresponding to the belief that one is experiencing red stimulus, i.e. the state the monkey’s brain is in when it receives red sense data. The mistake you are talking about is in the higher order intentions alien to the monkey, not in the lower order ones we share with it.

    > No, the rules are the grammar, the syntax.

    Right. And you have to understand that grammar, that syntax. You have to understand the ways you can legally manipulate the symbols. The relationship between the semantics and those rules is not arbitrary. If the rules were different in certain ways those symbols could not have the same semantics.

    > you haven’t shown how beliefs come about;

    This is the key one since everything else follows. And I have shown how beliefs come about. All you’ve done in answer is special pleading that your beliefs are real beliefs and those of a thermostat are not. This is because you think you have access to the thing in itself, but I think you don’t. Therefore all the dismissive things you’ve said about the thermostat’s beliefs (e.g. substitutability to refer to different physical phenomena) have analogues in human beliefs. I’m not handwaving in the direction of complexity as you accuse, because I think a thermostat can have intentional states and I think these are continuous with human beliefs.

    > I think that this would be very relevant indeed, since FTL implies time travel,

    Ack, I’m not getting into this unless you think it’s important to the discussion at hand. I don’t.

    > But I did!

    Telling me to fire a neuron is not telling me how to waggle my ears. I don’t know how to fire the neuron.

    > Maybe this doesn’t help the average human being to actually waggle their ears; but the mere fact that they, due to biological quirks, can’t implement the recipe doesn’t imply that there is something fundamentally incommunicable.

    Biological quirks? This seems to imply something like a genetic component. Now, there certainly could be a genetic component, but my genes did not change between when I couldn’t and when I could waggle my ears. If the difference between the me of before and the me of after could be characterised as biological (since pretty much any state I am in could be described as biological), then so could the difference between the Mary of before and the Mary of after. The difference in both cases was an incommunicable change of brain state.

    I think if this conversation is to continue it would be best to drop all talk of qualia and focus on beliefs and intentions alone, because I think this is more straightforward and that all else follows.

    Perhaps you could answer some questions for me (or just identify any interesting discontinuities in what your answer would be). Answer them in order as if you have not read later questions, please.

    1) Do you have any intentions about the object “XO-2b”?
    2) Do you have any intentions about the astronomical object “XO-2b”?
    3) Do you have any intentions about the exoplanet “XO-2b”?
    4) Do you have any intentions about the star “XO-2N”?
    5) Do you have any intentions about the star Sirius?
    6) Do you have any intentions about the planet Jupiter?
    7) Do you have any intentions about the star we call the sun?

    The point is that our intentions seem to feel more real the more detail we have about their object. In the first question, you presumably know nothing. I’m just giving you an empty label, so your intention is like that of a computer. You have no context for the object referred to. All it is, is whatever object satisfies the criterion of being referred to as “XO-2b” in some unknown domain.

    As the questions progress, you know more and more about the object, so it seems more reasonable to suppose you have intentions toward it. But my point is that this is a continuous transition. There is no magic point where we say you have real intentions. Your intention towards “XO-2b” in the first question is qua the entity that fits that label. In the third question it is qua the entity that fits that label and is an exoplanet. By the final question the “qua” is filled with so much knowledge (including relationships to your knowledge about other objects) that no computer system is likely to rival it. The “qua” is key to understanding why it seems to you that your intentions are real but those of a computer are not.

  71. But this is more or less what you have been asking with regard to qualia. Or at least that’s how it seems to me.

    No, I’ve been asking for a story analogous to ‘neuron x fires -> signal is sent to muscle y -> muscle y contracts -> ear waggles’.

    There is only the belief that you are having a red experience. So before I can explain qualia I need to get you to understand how information processes can have intentional states.

    And how the belief of experiencing some quale gives rise to the subjective phenomenology I seem to have.

    If you have never waggled your ears, you can’t easily see how to waggle your ears, any more than you can clearly imagine wagging a tail or moving a third arm.

    I can very easily see how those stories go: neurons fire, muscles contract, parts move. I can also do the same thing with much more out-there abilities, like flying, or anything else. I can’t do that thing with subjective experience—there is no such story that has at its endpoint ‘…and then, the system has (or takes itself to have) the subjective experience of redness’.

    Well, there is a way to tell if it represents an X if there is an algorithm for X detection.

    There will be, for any given pattern of bits, an algorithm that detects a face in that pattern; likewise, there will be one detecting a flower, or a 1967 Ford Mustang fastback, or anything else. Plus, how do you tell whether a given algorithm is an X-detection algorithm? Certainly, you can do that if you have access to X—but then, you’ve again smuggled in the intentionality you were hoping to derive.

    I’m really just saying it is correlated with the real-world thing.

    But even saying that it’s correlated with that real world thing, you need to be able to point to the real world thing it’s correlated with—i.e. you need intentionality.

    But it does. Because what you have is a deeply interconnected mental model of the world.

    You keep asserting that this is somehow sufficient. But I’m not just going to take your word for that—if you think it’s true, you should be able to demonstrate it somehow beyond a mere bald assertion.

    By using a “face”-detection algorithm, which can detect “faces” (whatever they are) in bit patterns

    Well, then tell me how you can tell whether an algorithm is a face-detection algorithm!

    Aha. Very good. You see, that’s the assumption I disagree with.

    Sorry, I used confusing verbiage there. I meant the things as they appear in your mental representation—as the intentional content of your thoughts.

    But note that I do insist that a thermostat has intentionality too, so I’m not using complexity to explain how intentionality arises by magic.

    But what is the thermostat’s intentionality directed at? Again, it’s only the occurrence of either 00 or 11—that is, only that of an elementary difference. In order to convince me that you can get from there to the intentionality I have, you’d have to tell a story how to construct higher-level intentionality—that is clearly not about things merely being different from other things—from there, without just pointing vaguely in the direction of ‘a much richer representation of what temperature is’ which you use ‘in more complex ways’. Tell me what constitutes this richer representation, and how it comes about; tell me what those more complex ways you use that information are.

    If that’s what you’re doing then the fact that 110 is a universal computer is irrelevant.

    No, it’s not, because otherwise you’d have to use a mapping that is itself computationally universal.

    I’m just pointing out that if I can make you believe you are experiencing red (and you and I both agree about what it is to experience red, so there is no misunderstanding of what “red” refers to), I don’t see how this could come to pass unless you were actually experiencing red, whether because of a real red object in your vision or because you were hallucinating.

    The part in brackets is, of course, the hard part. Of course, if we’ve already experienced red, we both can agree; but setting up that agreement with somebody who’s never seen red seems to be impossible in principle. And if we don’t share that agreement, then it’s simple how this belief could come about: I could be mistaken.

    I mean, otherwise, you’re basically stipulating that this ‘belief’ one is seeing red is one about which it is impossible to be mistaken, which surely would disqualify it from being any sort of belief as the word is ordinarily used. (And then, I’m simply no longer sure I understand what you mean by ‘belief’.)

    But a monkey can know what red looks like, because knowing what red looks like just means being familiar with the mental state corresponding to the belief that one is experiencing red stimulus, i.e. the state the monkey’s brain is in when it receives red sense data.

    This is jumbled. The state the monkey’s brain is in when it receives red sense data is not a state of believing anything (under any conception of ‘belief’ that doesn’t just make it flatly synonymous with ‘brain state’). Hence, the belief of seeing red comes about through being in the appropriate brain state. You want to substitute the effect for the cause: this you may do, in principle, but only on pain of fallibility—that is, of being mistakenly of the belief that one is seeing red.

    The relationship between the semantics and those rules is not arbitrary.

    It is; that’s even a theorem of mathematical logic—at least for rich enough axiom systems, there are infinitely many models providing semantics to the syntax that’s given by the axioms and the rules of derivation (I believe it’s the Löwenheim-Skolem theorem I’m thinking of).
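    (For reference, the standard statement I have in mind is, roughly, the Löwenheim–Skolem result from model theory; this is textbook material, not anything specific to our debate:

    \[ \text{If a countable first-order theory has an infinite model, then it has a model of every infinite cardinality.} \]

    So the axioms and derivation rules admit a whole family of non-isomorphic models; they never single out one intended semantics.)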

    Plus, you don’t need to understand the rules in any real sense—you merely need to implement them, the way the Chinese room inhabitant implements the rules of Chinese, without himself understanding what they pertain to at all.

    Telling me to fire a neuron is not telling me how to waggle my ears. I don’t know how to fire the neuron.

    But all that there is to the story is that firing that neuron leads to the waggling of ears; this is all that I would require for an understanding of how qualia come about. Knowing this means that you understand how ear-waggling works, even if you maybe can’t implement it. In the same way, we can understand how flying works, even though we lack the anatomy to actually do it. That understanding re qualia is all I’m asking for.

    1) Do you have any intentions about the object “XO-2b”?
    2) Do you have any intentions about the astronomical object “XO-2b”?
    3) Do you have any intentions about the exoplanet “XO-2b”?
    4) Do you have any intentions about the star “XO-2N”?
    5) Do you have any intentions about the star Sirius?
    6) Do you have any intentions about the planet Jupiter?
    7) Do you have any intentions about the star we call the sun?

    1) Yes—it’s a string of symbols I’ve just read and hence, is present in my mind.
    2) Yes—it’s a string of symbols I’ve just read, now annotated with the information that it pertains to an astronomical object.
    3) Yes—it’s a string of symbols I’m by now familiar with, now further clarified as referring to an exoplanet.
    4) Yes—it’s another string of symbols, which I’m told is a star, and which resembles the earlier string of symbols.
    5) Yes—it’s a name I’m familiar with, denoting a star that is approximately 8.6 light years away, if Nick Cave is to be believed.
    6) Yes—it’s a name I’m familiar with, denoting the fifth planet of our solar system.
    7) Yes—it’s a name I’m familiar with, denoting the central star of our solar system.

    I’m just giving you an empty label, so your intention is like that of a computer. You have no context for the object referred to.

    I have a very clear intentional object from the start, the string of symbols ‘XO-2b’. I think now you’re confusing the signifier and the signified—it’s true that I have no intentional states directed towards the object denoted by ‘XO-2b’, but then, I’m not presented with that object—I’m presented with the string of symbols ‘XO-2b’, which is as clearly present in my mind as anything.

    So my intention is precisely not like that of a computer—my intention has a concrete object, not merely some relation that could be filled any which way.

  72. Hi Jochen,

    I’m going to try to (mostly) ignore any comments about qualia and focus on intentions, if that’s OK with you.

    > There will be, for any given pattern of bits, an algorithm that detects a face in that pattern; likewise, there will be one detecting a flower, or a 1967 Ford Mustang fastback, or anything else.

    Forget “any given pattern of bits”. An algorithm detecting anything in any single given pattern of bits is meaningless.

    For it to count as detection, the algorithm must successfully identify the object in images of that object and not identify it in images not of that object.

    The bit-level encoding of the image (bitmap, JPEG, PNG or something ridiculously contrived to prove a point) does not matter as long as there exists a way of translating it to and from a two-dimensional array of RGB values (each value a tuple of three 8-bit unsigned integers, if you wish). That is the mathematical model we ought to take to be an image. Any encoding you conceive of must be possible to translate into this unpacked format, and it is this format that is used for pattern recognition.
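    To make the “unpacked format” concrete, a minimal sketch, assuming the Pillow library and some hypothetical file photo.jpg: whatever the on-disk encoding, it gets decoded into a grid of RGB triples, and it is that grid a pattern-recognition routine actually consumes.

    from PIL import Image

    img = Image.open("photo.jpg").convert("RGB")   # decode JPEG/PNG/whatever
    width, height = img.size
    pixels = list(img.getdata())                   # flat list of (r, g, b) triples

    def rgb_at(x, y):
        # pixel at column x, row y of the two-dimensional array
        return pixels[y * width + x]

    print(width, height, rgb_at(0, 0))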

    > Plus, how do you tell whether a given algorithm is an X-detection algorithm?

    By definition. X is defined as that entity that is detected by the algorithm. That’s why I called it X as opposed to sticking with faces.

    > But even saying that it’s correlated with that real world thing, you need to be able to point to the real world thing it’s correlated with—i.e. you need intentionality.

    I disagree. Whenever I (mentally) point to the real world thing I’m really pointing at my model of it. When I discuss my model as opposed to the thing itself I’m really just expressing a second order belief about my model. I have a model of my model, essentially. This sounds like I would need models of models and so on, and so an infinitely detailed knowledge base, but this isn’t so, because my knowledge base is not a static set of propositions but a production system and I can derive beliefs from it on demand.

    > Well, then tell me how you can tell whether an algorithm is a face-detection algorithm!

    That is such an obvious question I don’t know where you’re coming from. I’m going to break it down into baby steps to see if I can answer whatever it is you’re really asking.

    OK. First, if you ask me to tell whether an algorithm is a face-detection algorithm, I have to parse what it is you are asking me. I have an algorithm to do that. It translates the English sentence to a representation in my mind, and this representation has a connection to the concept of a face, which is a detailed mathematical model which (presumably) correlates to something in the real world. I understand that you are asking me to tell if this algorithm picks out the same real world objects as my concept of face.

    The first thing I have to do is to find some images of objects I would call faces. I navigate the world in such a way as to do that.

    Really, when I say I navigate the world, I’m just navigating the virtual world in my head, but let’s assume this bears a strong relationship to an external world that presumably exists. By navigating the virtual world in my head (generating the right commands to my muscles, incorporating sensory feedback into my model of the world and so on), it transpires that the physical body I presumably have manages to locate some files which contain images of objects I recognise as faces as well as those I don’t and load that into the algorithm.

    If the algorithm seems to agree with me on which objects are faces and which are not, then I can be satisfied that there is a correlation between what I consider to be a face and what the algorithm considers to be a face. If I can identify a face, then so can the algorithm.

    None of this assumes that I have a “real” intention towards faces. I’m only assuming that I can translate your word “face” to some set of criteria I can apply to my sense data and see how that matches the set of criteria embodied by the algorithm.
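    In code, the check I have in mind is roughly the following (illustrative only; labelled_images and detector are hypothetical names for the sample I gather and the candidate algorithm):

    def agreement(detector, labelled_images):
        # labelled_images: pairs of (image, True/False according to my own judgement)
        hits = sum(1 for image, i_call_it_a_face in labelled_images
                   if detector(image) == i_call_it_a_face)
        return hits / len(labelled_images)

    # if agreement(detector, sample) comes out near 1.0, I'm satisfied that the
    # algorithm's 'faces' are correlated with what I call faces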

    > But what is the thermostat’s intentionality directed at?

    Whatever it is that its state is correlated to. Its reference is only to the thing qua that correlation, not to any of its other roles or properties. This is just what I’ve been trying to explain with my analogy to my intentions towards you.

    > that is clearly not about things merely being different from other things

    There’s more than differences there; there are ways of being different.

    Like, perhaps A, B and C are all different from each other, but suppose that A+B are the same as C. Straight away you know more than that there are differences and similarities. From such simple primitives can elaborate structures be built.
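    A tiny illustration of how much that extra constraint does (made-up values, nothing more):

    from itertools import permutations

    # knowing only that A, B, C are pairwise different admits any ordered triple
    # of distinct values; adding the constraint A + B == C rules most of them out
    triples = [t for t in permutations(range(1, 7), 3) if t[0] + t[1] == t[2]]
    print(triples)   # (1, 2, 3), (2, 1, 3), (1, 3, 4), ...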

    About rule 110, it seems pretty clear to me that you are confused. Or that we are talking past each other. Give me a reference or a worked example to show me what you mean or let’s drop it.

    > The state the monkey’s brain is in when it receives red sense data is not a state of believing anything

    And yet I’m insisting that it is, or at least that it can be viewed as such. If the monkey has been trained to expect food when it sees red, then if it sees red it will expect food. I think there is something akin to “Ah, red! I must be getting fed soon” going on in the monkey’s head. Or, as a syllogism, “If I see red, I will soon be fed. I see red. Therefore I will soon be fed”. I don’t think the monkey is thinking about it as a syllogism. It is not reflecting on what it is doing (it has no second order beliefs about its thought process) but I think that is more or less what it is doing. The proposition that it is seeing something red feeds into something akin (but messier and more fallible) to a logical process implemented by a neural network.

    The way I think of it, for there to be no beliefs then the monkey would have to be passively accepting stimulus without them having any impact whatsoever on how it makes decisions or navigates the world or on its memory. In such cases (which probably do occur) it would not be conscious of the stimulus.

    > But I’m not just going to take your word for that—if you think it’s true, you should be able to demonstrate it somehow beyond a mere bald assertion.

    I think we should remember what is happening in this conversation. I’m trying to defend a view you are attacking. Right now, I’m trying to establish tenability rather than proving it to be true. I can try to do this by poking holes in your arguments as well as by telling a coherent story that resists your criticisms. Occasionally this involves simply explaining what that view is, which comes across as making bald assertions.

    The actual reasons for holding the view in the first place are that it coherently explains intentionality and that there are no serious competitors that I can see. I also offer the following independent reasons for holding to functionalism.

    http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-evolutionary-parsimony.html

    http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-illusion-of-real-consciousness.html

    http://disagreeableme.blogspot.co.uk/2013/01/strong-ai-naturalism-and-ai.html

    > (I believe it’s the Löwenheim-Skolem theorem I’m thinking of).

    Still, the relationship is not arbitrary. It is constrained. You can’t make just any rules of syntax model just any system. The theorem you’re talking about just means that you can make more complex systems that happen to obey the same axioms.

    > Plus, you don’t need to understand the rules in any real sense—you merely need to implement them

    So you just shut up and calculate, is it? I would say that *is* mathematically understanding the system. I think you’re talking about bringing in knowledge outside the system. For instance, if the system is supposed to model the orbits of planets, you could understand the system without understanding that it models the orbits of planets. But in such a case I think you still understand the system qua system.

    The guy in the Chinese Room understands the system and understands the rules qua system and rules. The knowledge about what representations within the system pertain to (such representations probably supervening several layers of analysis above the rules and low level symbols themselves) is understood by the system and not by the guy.

    Regarding the questions about XO-2b and so on, I made a mistake by putting the name in quotation marks. I’m not talking about the sign, I’m talking about the signified. You can presume I’m talking about the actual object referred to by the name and not the name itself.

    So.

    1) XO-2b is an object. Do you have any intentions towards that object?

    And so on.

    Would you still answer “Yes” to that first question? How would this be different from the intention of a computer system? You know pretty much nothing about that object, so in what sense are your intentions more real than that of a machine?

  73. For it to count as detection, the algorithm must successfully identify the object in images of that object and not identify it in images not of that object.

    You’re chasing your own tail! How do you tell if there is object X in some representation? By an X-detection algorithm. How do you tell if an algorithm is an X-detection algorithm? If it successfully detects X in all pictures of X.

    The first thing I have to do is to find some images of objects I would call faces.

    And this is where you substitute in your own intentionality. Pictures of faces are only pictures of faces, because to you they represent faces—so the algorithm will be trained in such a way that it successfully picks out (under some arbitrary mapping) pictures that you consider to be of faces; but this doesn’t give whatever system is running that algorithm its own intentionality, it is just intentional by virtue of your intentionality—i.e. without your intentionality in the first place, you could not have produced the algorithm.

    Whatever it is that its state is correlated to.

    But there’s no fact of the matter regarding what it’s correlated with. There’s just an empty placeholder in a relation.

    From such simple primitives can elaborate structures be built.

    At least, so you keep saying. But even in your example, you’re just highlighting a redundancy in the data—say some sensor monitors levels of quantities A, B, and C, in such a way that at each point, the bit-string its C-sensor produces can be obtained from the bit-strings of the A- and B-sensors—then it just means that the information obtained by the C-sensor is redundant. But it doesn’t tell you anything about what A, B, and C pertain to.
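    In a toy version (illustrative; the XOR relation is an arbitrary stand-in for ‘C is obtainable from A and B’):

    import random

    a = [random.randint(0, 1) for _ in range(16)]      # A-sensor bit-string
    b = [random.randint(0, 1) for _ in range(16)]      # B-sensor bit-string
    c = [x ^ y for x, y in zip(a, b)]                  # C is fixed by A and B

    # the redundancy is plain to see...
    print(all(ci == ai ^ bi for ai, bi, ci in zip(a, b, c)))   # True
    # ...but nothing here says what A, B, or C are readings *of*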

    If the monkey has been trained to expect food when it sees red, then if it sees red it will expect food.

    Sure, but how does that help you? This just means that seeing red produces the belief that it will be fed; this doesn’t do anything to convince me that ‘seeing red’ is a kind of belief. They’re simply different modalities.

    The way I think of it, for there to be no beliefs then the monkey would have to be passively accepting stimulus without them having any impact whatsoever on how it makes decisions or navigates the world or on its memory.

    Of course, there is a belief attached to seeing red—seeing red causes the belief of having seen red.

    But I wonder how your account fares, for example, in cases of blindsight. I know you’re probably familiar with this affliction, so I won’t waste time on a description. Basically, the question is, if somebody is asked what object was present in the part of his visual field he does not have conscious experience of, and names the correct object, then to me this would be indicative of having a true belief that he has seen this object. However, there is no quale attached to that belief. Or are you now going to argue that it is only a belief if it reaches the frontal lobes via the lateral geniculate nucleus and the V2-area?

    I can try to do this by poking holes in your arguments as well as by telling a coherent story that resists your criticisms. Occasionally this involves simply explaining what that view is, which comes across as making bald assertions.

    Well, my criticism in this case was that you keep asserting that something can be done, without actually demonstrating that this is the case. So I think that without such a demonstration, there is no compelling reason to believe that your assertion is correct.

    But in such a case I think you still understand the system qua system.

    If you want to call it that, fine. But the original point I was making was that anything about mathematics can be communicated via scratch-marks on paper; it was you who asserted that we draw somehow on some shared understanding in order to make sense of these. But we don’t: whatever understanding there is is nothing but syntactic manipulation. Hence, the difficulty of accounting for qualia in a mathematical universe that is basically nothing but what you can write down as scratch-marks on paper.

    1) XO-2b is an object. Do you have any intentions towards that object?

    Yes, of course: it’s that object (or those objects) denoted by the string ‘XO-2b’. As this object, it’s now present in my mind. I don’t know anything about that object (not even whether it exists, but of course, intentional objects may be fictional) but that it is called, at least by you, ‘XO-2b’. But that’s still perfectly well-formed intentional content.

    You know pretty much nothing about that object, so in what sense are your intentions more real than that of a machine?

    Because they are intentions of some concrete object; there is no empty placeholder, but that placeholder is filled by ‘that object that is denoted by “XO-2b”’. In other words, if my intentional content is ‘that object that is denoted by “XO-2b”’, then this picks out some real-world object; conversely, if I am presented with this object (qua ‘that object that is denoted by “XO-2b”’), then the relevant intentional state will be present. Each side determines the other.

    But in, say, the thermostat example, the side that pertains to the world is left open: the interior state does not determine any object; only if you go in and ‘fix’ the intentionality by substituting your own does the relation become fulfilled. The same thing happens with the face-recognition algorithm (as you explicitly described), and so on. That’s the relevant difference between my intentionality and that of a computer.

  74. Hi Sci,

    Thanks for the kind comment. I appreciate it.

    Hi Jochen,

    > You’re chasing your own tail!

    No I’m not. As I explain later on, I think I have an internal X detection algorithm and that you have an internal X detection algorithm and so on. To me, an X is just that which is detected by that algorithm. I suppose that means there is an X in the real world, but it could be a hallucination. For me to recognise an algorithm in a computer as an X detection algorithm is to recognise that it picks out the same entities as my internal X detection algorithm.

    > i.e. without your intentionality in the first place, you could not have produced the algorithm.

    Well, possibly. But not necessarily. You could imagine a neural network or genetic algorithm learning to classify various kinds of visual stimuli because they help it to reach some goal or perform some task, such as navigating a maze with various kinds of obstacles, without explicitly teaching it what the kinds of object it might want to classify are.
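    A very small sketch of the kind of set-up I mean (a toy with made-up names; real systems would be far richer): a population of random response tables is evolved against a toy navigation task, and the loop only ever scores task success; it never labels the stimuli.

    import random

    SENSOR_READINGS = [(0, 0), (0, 1), (1, 0), (1, 1)]        # two wall sensors
    SAFE_TURN = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # turn that avoids a crash

    def fitness(table):
        # junctions survived; no explicit teaching about what the readings mean
        return sum(table[s] == SAFE_TURN[s] for s in SENSOR_READINGS)

    def random_table():
        return {s: random.randint(0, 1) for s in SENSOR_READINGS}

    population = [random_table() for _ in range(20)]
    for _ in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [{s: random.choice([p[s], random.randint(0, 1)])
                     for s in SENSOR_READINGS} for p in parents]
        population = parents + children

    print(fitness(max(population, key=fitness)))   # typically 4/4: a classifier has emerged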

    In any case I don’t see training a computer system to perform some task you have in mind as defeating the idea that it has its own intentionality. I learned most of the things I know from other people. If I train a chimpanzee to differentiate between pictures of cars and pictures of motorbikes, then does the fact that this distinction existed in my head first mean the chimp doesn’t have its own intentions?

    > But there’s no fact of the matter regarding what it’s correlated with.

    Exactly right. My view is there is no fact of the matter about what anything is correlated with, and so no fact of the matter about what intentions are directed at in the external world. Nevertheless, even though correlations are somewhat open to interpretation, there are degrees by which things are more or less correlated, and when things get tightly correlated enough it becomes perverse to insist there is no connection. Though there is no fact of the matter *ultimately*, there is for all practical intents and purposes. The same is true of how the thermostat has intentions towards temperature.
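    For what “degrees of correlation” amounts to, a minimal sketch (hypothetical readings, made up for illustration):

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    room_temperature = [18.0, 19.5, 21.0, 22.5, 24.0]
    thermostat_state = [18.2, 19.4, 21.1, 22.3, 24.1]   # imperfect but tight tracking
    print(pearson(room_temperature, thermostat_state))  # close to 1.0

    The tracking is imperfect, but tight enough that refusing to say the one is ‘about’ the other seems perverse; that is all I mean.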

    > There’s just an empty placeholder in a relation.

    Right. Just like you and XO-2b. Until you learn more about it, that label could apply to just about anything. That is the beginning of intentionality. As you learn more about it and make more links with your mental model, it takes on more and more meaning. XO-2b is not as meaningful to you as the sun is because the sun is much more connected to other concepts in your head.

    > then it just means that the information obtained by the C-sensor is redundant.

    This is just a silly example, but even so you are too quick to dismiss it. The information is not redundant because the fact that A+B match up to C might be interesting. Perhaps it is not always so, for instance.

    > But it doesn’t tell you anything about what A, B, and C pertain to.

    Because for you things apparently only have meaning if we can match them up to things which are outside the scope of the system. But we can define abstract systems as we please and things have meanings due to their roles in those systems, right? I mean, you know stuff about A, B and C now just by thinking about the trivial system I have invented. That means you have intentions towards them qua their roles in this system, even if they pertain to nothing in the real world.

    So my view is that this is what all intentions are. Everything in your head is just part of some big abstract system and you only understand them qua their roles in that system. There is no ‘fact of the matter’ about what they correlate to or refer to in the real world.

    But even so, they do, in an intuitive sense, more or less happen to match up in practice, the way we can see from a third person perspective how the representations in the software of a robot have to match up to objects in its environment in order for it to navigate that environment successfully. There may be no ultimate fact of the matter about how that robot’s representations ought to be interpreted by us — we might be able to invent convoluted interpretations whereby they refer to objects which don’t exist or are elsewhere, but the fact of the matter is there has to be some causal connection between those representations and objects in the environment, independent of any human interpretation. My position is that we are no different from the robot in this respect.

    > Of course, there is a belief attached to seeing red—seeing red causes the belief of having seen red.

    Great. I’m glad we agree on that at least. So, the argument would go, if I can explain in outline how it is possible for you to have that belief, then I don’t need to explain how you can actually see red in the first place, because my explanation would be compatible with all the evidence we have for qualia even if qualia don’t exist.

    > Or are you now going to argue that it is only a belief if it reaches the frontal lobes via the lateral geniculate nucleus and the V2-area?

    No, I’m going to argue that there are subtly different beliefs. The belief that a red object was before him, versus the belief that he has experienced red. Different propositions expressed in different parts of the brain.

    But, yeah, I’ll admit that blindsight is a problematic case for me. I think I would need to understand more about it (I have the basics, but I’m not sure how well it is understood in detail even by experts) to give a definitive answer.

    > it was you who asserted that we draw somehow on some shared understanding in order to make sense of these. But we don’t:

    But we do! We need to understand the rules and how to apply them. That is the understanding I’m talking about.

    > Yes, of course: it’s that object (or those objects) denoted by the string ‘XO-2b’.

    It’s interesting to me that it seems obvious to you that you have intentions towards that object, but your argument against thermostats having intentions towards temperature seems to be that they don’t know anything about it, it’s just the object denoted by a particular state.

    How can you distinguish between the two cases for me? Let’s see what you offer:

    > Because they are intentions of some concrete object;

    But you said yourself that object may not even exist. So it is not that it is concrete. Besides, temperature exists too.

    > there is no empty placeholder, but that placeholder is filled by ‘that object that is denoted by “XO-2b”’.

    OK, so “There is an object denoted by ‘XO-2b’” is not an example of an empty placeholder, but “There is an object denoted by variable xo_2b” is an empty placeholder? Why is that?

    > if my intentional content is ‘that object that is denoted by “XO-2b”’, then this picks out some real-world object;

    Not necessarily. Besides, temperature is real anyway.

    > But in, say, the thermostat example, the side that pertains to the world is left open:

    How is it left open? I don’t get how it’s any more open than your intention to XO-2b. I could have told you XO-2b was an asteroid and you would have been none the wiser. If you saw XO-2b yourself you would not recognise it. Until you know what it is, it could literally be anything as far as you’re concerned. At least the thermostat actually responds to and interacts with temperature in some way. I would say that this is actually less open than your intention.

    I think the problem is that you’re stuck with this idea that, somehow, as if by magic, human intentions have a mystical objective link pointing to objects in the external world. In this view, again, somehow (we have literally no idea how, I suppose) as soon as I mention XO-2b to you this line is drawn reaching far out into space and connecting you to that planet. To you, this means that there is a fact of the matter about what you refer to in a way that you can see is not so for a thermostat.

    The problem is resolved by recognising that there is no fact of the matter in either case.

  75. For me to recognise an algorithm in a computer as an X detection algorithm is to recognise that it picks out the same entities as my internal X detection algorithm.

    But the question is exactly how you can have an ‘X-detection algorithm’! We start out with the question of how we can form meaningful intentions about the world. To study this, we invent certain model systems and investigate how intentionality comes about in those. You claim it does so by them having an X-detection algorithm. So I ask what X is, and you answer that it’s what’s detected by an X-detection algorithm. So I ask what an X-detection algorithm is, and you answer it’s an algorithm that detects X. So I ask, how do you know it’s an algorithm that detects X, and you answer, because it matches up with my X-detection algorithm!

    But obviously, that doesn’t bring us an iota further towards answering the question. The ultimate answer is always deferred to some system that possesses intentionality already. Whenever I question this, you point to yet another system that accounts for the intentionality of that system—which then, subsequently, again turns out to be an intentional system.

    You could imagine a neural network or genetic algorithm learning to classify various kinds of visual stimulus because they help it to reach some goal or perform some task such as navigating a maze with various kinds of obstacles without explicitly teaching it what the kinds of object it might want to classify are.

    I don’t deny that this is possible—the system gathers data, structure, from the environment, and then, over several generations, evolves strategies that match up with the data in a certain way, so as to maximize its differential fitness. But the point is, that at no point does the data have to pertain to anything in particular in order to generate such behaviour—i.e. the system gets a particular pattern of zeros and ones, and this pattern is then (say, via a giant lookup table) matched up to some particular behavioural output. That behavioural output is then appropriate to the stimulus, thanks to the evolutionary processes shaping the connection within the table, but at no point in explaining this stimulus-response chain do I have to refer to what, actually, the stimulus is—what the data is about. Thus, all of this can be performed by a system completely lacking any intentional states.
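    In the simplest possible rendering (a toy I am making up purely for illustration), the whole stimulus-response chain is just:

    # evolution has shaped this table; nothing in it records what the bits are about
    lookup = {
        (1, 0, 1): "flee",
        (0, 1, 0): "feed",
        (0, 0, 0): "rest",
    }

    def respond(stimulus_bits):
        return lookup.get(stimulus_bits, "rest")   # some default otherwise

    print(respond((1, 0, 1)))   # 'flee', an output appropriate to the stimulus

    At no point does explaining this chain require saying what (1, 0, 1) is about.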

    Again, you can then go and look at the situation, and say something like ‘the monkey fled because there was a lion’, but then, you’re substituting in your own intentionality.

    My view is there is no fact of the matter about what anything is correlated with,

    Your view about pretty much anything difficult is that there’s no real fact of the matter, it seems. For me, that’s too easy a way out, taken too early.

    Nevertheless, even though correlations are somewhat open to interpretation, there are degrees by which things are more or less correlated, and when things get tightly correlated enough it becomes perverse to insist there is no connection.

    I can only again invite you to give an example for something connected tightly enough in order for there to be a true connection; otherwise, this is just more bald assertion.

    Right. Just like you and XO-2b.

    No. XO-2b is the value of a bound variable; but the variables in a computer’s ‘intentionality’ are open. So, when I see you calling something ‘XO-2b’, I know that ‘there is an object x such that x is denoted “XO-2b” by Disagreeable Me’. Paraphrasing Quine, ‘to be (an intentional object) is to be the value of a bound variable (in a context pertaining to mental states)’.

    That’s why the thermostat doesn’t refer to anything—it only knows that there is some correlation, but this does not suffice to pin down the value of any variable. Structure only settles cardinality questions; but it can’t pick out a given object to ‘fill in’ the place left open by a variable.

    The problem is still that of having some ciphertext, and needing to translate it, to give it meaning. You say that perhaps, if there are enough other unknown texts that stand to the first one in some relation, then you can do it—but the truth is simply that you can’t do it unless you know what at least some text means. You need an intentional Rosetta stone; and in the examples you brought forward, this was always your own intentionality.

    So, the argument would go, if I can explain in outline how it is possible for you to have that belief, then I don’t need to explain how you can actually see red in the first place, because my explanation would be compatible with all the evidence we have for qualia even if qualia don’t exist.

    But if that’s all you’re really wanting to say, then my original objection stands—because while the belief of seeing red may be caused by seeing red, it may also be caused by something else, and thus, be erroneous. And then, upon actually seeing red, one would recognize that error; showing that the belief is not enough.

    But we do! We need to understand the rules and how to apply them. That is the understanding I’m talking about.

    And they’re also right there, on that paper. Actually, one needn’t even write them down, one can just include a few derivations and leave them implicit. So still, the point that anything about mathematics can be reduced to pure structure stands.

    I mean, otherwise, this would be devastating for the mathematical universe: if there needed to be some kind of understanding that can’t be reduced itself to scratch-marks on paper in order to give content to mathematical structures, then you’d need some kind of mental backdrop that supplies this understanding, and the MUH would actually collapse onto some kind of idealism.

    But you said yourself that object may not even exist. So it is not that it is concrete.

    Fictional objects are completely concrete; they simply happen to not exist.

    Besides, temperature exists too.

    True, but the thermostat doesn’t refer to temperature; there is just an empty slot where that reference would go that could be filled by temperature, but also by indefinitely many other things. In contrast, ‘the object XO-2b’ singles out exactly that object you refer to by that moniker.

    Anyway, I’ve gotta cut things short (well, ‘short’), since I’ve got a few errands to run; but I think I addressed the main points. If not, you can alert me to anything you’d like me to respond to…

  76. Hi Jochen,

    > But the question is exactly how you can have an ‘X-detection algorithm’!

    I’m sorry to exasperate you!

    I guess the problem is in part that I’m being circular. I define X by reference to an X-detection algorithm, and I define an X detection algorithm by reference to X.

    I would say that this is not as problematic as it appears to be, and that ultimately all definitions must be circular. A useful definition is often one that takes some unfamiliar or uncertain concept and relates it to familiar or certain concepts. But in the case where you have to define a whole system from scratch, where all you have are unfamiliar concepts, this won’t be possible. In that case all you can do is relate the unfamiliar concepts to each other, and then you can understand them qua these relations, even if you don’t know how they pertain to anything outside the system (if they do at all). This is how you understand A,B and C in the earlier example or concepts in an abstract game such as Go. I also think it is how you understand the world as a whole, your understanding being a very detailed abstract object of this kind.

    A second problem is that I’m allegedly always falling back to some other intentional system.

    I think the problem is in the question you’re asking.

    “So I ask, how do you know it’s an algorithm that detects X”

    You’re not asking in what sense it is an X-detection algorithm, you’re asking how I know it to be one. You are explicitly asking about another intentional system (me), so it’s not surprising that I bring that system into it.

    This question presupposes that I know what an algorithm is and that I know what X is and that I know what detection is. OK. Let’s say I do, then you want to know how I know what an X is, and I answer that it’s because I have my own internal X-detection algorithm. To you, this is a problem, because we were just talking about how to identify something as an X-detection algorithm, so to bring another X-detection algorithm into the picture can’t help because now we have to be able to tell that *that* is an X-detection algorithm too.

    But I don’t know my own internal X detection algorithm, not consciously at least. I only know X, and my guess is that I know X because I myself have an unknown X detection algorithm somewhere in my brain. I conclude that I must have one because I am drawing an analogy to the computer system where from a third person perspective I can see it does have an X detection algorithm. My X detection algorithm is therefore necessary for me to know X and to recognise the computer system as an X detection algorithm (or so I surmise). It is not necessary for it to *be* an X detection algorithm.

    Any clearer?

    >But the point is, that at no point does the data have to pertain to anything in particular in order to generate such behaviour

    I disagree. I guess it depends on how you define “pertain”. You seem to believe in a mystical objective kind of direct reference I don’t accept, so I can’t show that it pertains to anything in your sense because I deny that your sense has any basis in reality.

    I cash out reference and pertaining with correlation. The data has to correlate to something in particular or the robot could not successfully navigate its environment. This means, to me, that the data does indeed pertain to things.

    > but at no point in explaining this stimulus-response chain do I have to refer to what, actually, the stimulus is—what the data is about.

    Well, perhaps (just perhaps) I might concede the point for a system with a very limited set of states that learns the correct responses to all possible situations by trial and error. But for many systems the number of possible situations is effectively boundless and it needs to learn correct responses before all of them can be encountered. To do so it needs to generalise, and to generalise it needs to be able to classify things into objects and model how they behave. This is the kind of system I’m talking about.

    > and say something like ‘the monkey fled because there was a lion’,

    I’m surprised. You don’t think this is a plausible explanation?

    > Your view about pretty much anything difficult is that there’s no real fact of the matter, it seems.

    Yes it is, for a great many philosophical questions at least. Do mathematical objects exist? Yes or no, depends what you mean by “exist”. Is murder objectively wrong? Yes or no, depends what you mean by “wrong”. Do people reduce to collections of elementary particles? Yes or no, depends what you mean by “reduce”. And so on.

    In my view, endless debates have raged over questions that are ultimately meaningless without further clarification, and that said clarification would settle the matter. That this sad state of affairs continues is down to the fact that the issues we refuse to clarify are inevitably the kind of evolved intuitions we are psychologically ill-equipped to question. Intuitions such as identity, consciousness, qualia, morality and so on.

    So the problem with “real intentionality” as opposed to correlation intentionality is that no clarification is forthcoming except for the intuitive “bald assertion” that it is something more. That you are “really” picking things out while a computer system is not.

    > For me, that’s a too easy way out, taken too early.

    Many of these questions have been raging for thousands of years. It’s not too early, it’s too late! 🙂

    > I can only again invite you to give an example for something connected tightly enough in order for there to be a true connection;

    But I’m saying there are no such things! I am saying there are only things that are so tightly correlated that it is perverse to deny a connection of some kind, not that the connection is objectively real.

    Take the notions of cause and effect for example. It could be that all the supposed interactions we have ever seen (billiard balls knocking into each other for instance) are just coincidences and there is no such thing as cause and effect. That was the gist of Hume’s argument at least. But the correlation is so obvious that it seems perverse to deny it. Same goes for my switch and lightbulb setup.

    > No. XO-2b is the value of a bound variable;

    I pretty much flatly disagree with almost all of this section so perhaps we cannot proceed. I mean, what Quine said is OK to a first approximation, but I think there can be differences in degree between what is bound and what is not bound. I think we can have intentions towards the variables qua the role they play in the model, independently of what they may be bound to outside of that model. They are bound with respect to the model but unbound with respect to the world outside the model.

    The point of XO-2b was to give you an example of a label referring to an object you know nothing about. But I guess this is not so easy, because you know at least that it is an object I know something about. So I guess my example fails in achieving what I wanted it to.

    That doesn’t mean my position falls apart though. It just means that you’ve poked a hole in one particular approach to communicating it.

    I guess I’m going to have to fall back on insisting that the thermostat’s reference to temperature is not unbound either. It has a role in that system and the thermostat understands it qua that role. It can be said to relate to the concept we know as temperature because it is correlated with it. If we change the thermostat so it is correlated with some other concept instead, then this would just change the reference. The thermostat would be none the wiser just as we would be if the astronomers discovered that XO-2b was an optical illusion rather than a planet and didn’t drop in here to tell us.

    > Structure only settles cardinality questions; but it can’t pick out a given object to ‘fill in’ the place left open by a variable.

    An argument I feel I’ve already answered. This argument applies only to abstract structures which are not in interaction with the real world.

    > but the truth is simply that you can’t do it unless you know what at least some text means.

    Again, because you construe meaning as always about relating some variable within the system to some object without. You don’t need this if you’re just understanding what the variables mean within the context of the structure, i.e. how they relate to other variables. And my view is that your mental model of the world is such a structure and that you only have this kind of understanding.

    > You need an intentional Rosetta stone; and in the examples you brought forward, this was always your own intentionality.

    Again, of course! I mean, if you are asking how *I* can interpret the representations of some other system, of course my intentionality is involved. It has to be for me to be interpreting it. Likewise, if some computer wants to interpret my intentionality, then its intentionality has to be involved.

    So, the impossible task of decryption you describe is indeed what it would take for me to figure out what the representations in your mind refer to. As a practical matter, I would have no hope of being able to do it.

    Not without that Rosetta stone at least. But I have such a stone in my own brain. My brain will process sensory input in a manner analogous to yours, and I of course understand my own brain’s intentions. So all I need to do to understand the representations in your head is to see how your brain responds to stimulus I understand. If for example a certain part of your brain activates on seeing images of faces, then I can deduce that that part of your brain has something to do with faces.

    So, causal interaction is everything as far as third-person interpretation is concerned. As far as first-person semantics go, all you need is to understand how the variables relate to other variables, because these variables are all you know.
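
    To make the face example concrete, here is a toy sketch in Python (the data, noise model and names are all invented for illustration; nothing here resembles a real neuroimaging analysis) of how an outside interpreter could read off what a brain region ‘has to do with’ purely from correlation with stimuli the interpreter already understands:

```python
import numpy as np

rng = np.random.default_rng(0)
face_shown = rng.integers(0, 2, size=200)                # 1 = face image shown, 0 = something else
region_a = face_shown + 0.3 * rng.standard_normal(200)   # region whose activity tracks faces (plus noise)
region_b = 0.3 * rng.standard_normal(200)                # region whose activity is unrelated to faces

def correlation(activity, stimulus):
    """Pearson correlation between an activation series and a stimulus series."""
    return np.corrcoef(activity, stimulus)[0, 1]

print(correlation(region_a, face_shown))   # high: licenses the reading 'region_a has to do with faces'
print(correlation(region_b, face_shown))   # near zero: no such reading licensed
```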

    > And then, upon actually seeing red, one would recognize that error; showing that the belief is not enough.

    Again, this is just confusion about what “red” means. Two different beliefs.

    > And they’re also right there, on that paper.

    You need to be able to understand the rules. Which means you need to understand the language the rules were written in and any concepts they appeal to. If you can’t rely on such concepts you need to ground the rules in something completely unambiguous like a physical system obeying the laws of physics. That’s how CPUs and brains work, and that’s why causation is so important for understanding and intentionality.

    > if there needed to be some kind of understanding that can’t be reduced itself to scratch-marks on paper in order to give content to mathematical structures

    Not to give content to mathematical structures. To enable communication of mathematical structures from mind to mind.

    > Fictional objects are completely concrete;

    News to me. I would have them as the very exemplars of abstract objects.

    > True, but the thermostat doesn’t refer to temperature;

    Ah, but it does! Yay! Bald assertion tennis! Ball’s in your court! 😉

    > In contrast, ‘the object XO-2b’ singles out exactly that object you refer to by that moniker.

    There is no contrast. The situation is exactly the same. We could rearrange the thermostat so its intention now refers to the output of a random number generator and it would be none the wiser. I could choose to change my mind and refer to my mug as XO-2b and you would be none the wiser. Both cases change a physical state of affairs thereby changing the criteria by which the reference is achieved. This doesn’t undermine intentionality in either case, it just changes what is picked out by the reference.

  77. I would say that this is not as problematic as it appears to be, and that ultimately all definitions must be circular.

    I think that’s a very hard stance to defend, and I don’t think I can bring myself to accept it. To me, definitions are ultimately grounded in the world as it is given to us (in our intentional mental content, natch). That is, think about the example of defining words with words I used earlier: to avoid the whole thing collapsing into circularity (and, IMO, meaninglessness), we ground meaning by pointing to things in the world—the word ‘tree’ is not to be defined as ‘something with leaves’, and ‘leaves’ then defined as ‘something which grows on trees’; no, we ‘define’ tree by pointing at that element of our shared perception of the world that we want to name, thus grounding its meaning, and by starting with thus-grounded meanings, build up a meaningful system of references to the world.

    But of course, this requires some genuine intentionality to work, so I can see why you’re reluctant to embrace it; but merely defining a tree by its leaves and leaves by being on a tree simply—like all mere relations—leaves open what a tree is, thus creating nothing but a set of placeholders, free floating and unconnected to anything.

    You’re not asking in what sense it is an X-detection algorithm, you’re asking how I know it to be one. You are explicitly asking about another intentional system (me), so it’s not surprising that I bring that system into it.

    Such knowledge can, I think, be analyzed without reference to intentions; it would be equivalent to ask ‘how can one tell that it is an X-detection algorithm’, or even ‘what makes it an X-detection algorithm’. There’s no need to bring an intentional system into play, and hence, doing so I think fails to address the question.

    I only know X, and my guess is that I know X because I myself have an unknown X detection algorithm somewhere in my brain.

    But that guess is what we’re addressing in this discussion. We haven’t yet established the possibility of such an X-detection algorithm—and since each X-detection algorithm can equally well be regarded as a Y-detection algorithm, I’d say that this is unlikely to be the reason that you know what X is. Your knowledge of X, in other words, is what we’re trying to account for; by assuming from the outset that it must be due to some algorithm, you’re giving the game away. You’d have to show that an X-detection algorithm exists that can account for you knowing X in order to validate that guess, and that you can’t do by appealing to your recognition of X, because it’s there that you smuggle external intentionality into the system.

    You seem to believe in a mystical objective kind of direct reference I don’t accept, so I can’t show that it pertains to anything in your sense because I deny that your sense has any basis in reality.

    No, I don’t believe in any mystical properties of the mind (and that’s a charge that to me always seems to enter these discussions once the side that takes itself to defend the only proper naturalist stance as they see it runs out of arguments). But I do know what my own experience is like, and I want to account for this experience in naturalistic terms; the only problem I have with your account is that it does not seem fit to do so, on my most charitable construal.

    This means, to me, that the data does indeed pertain to things.

    Correlation only allows reference if one side of the relation is known. If you have two boxes that each contain a ball of the same colour, but you don’t know if both are green or both are red, then the correlation between both only serves to tell you what you already know—in each box, there’s either a red ball, or a green one. Only if you open one box can you ‘cash in’ on the correlation and discover the colour of the balls in both boxes—in that sense, a red ball found in one box ‘refers to’ or ‘pertains to’ the red ball in the other box.

    But without knowing one side, the correlations alone don’t even help to pare down the possibilities—without any correlation, either box contains either a red or a green ball. With correlation, it’s still the same; hence, correlations don’t help. They’re necessary for reference, but in order to create a genuine reference, it’s necessary to go beyond what correlation tells us.
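
    To put the same point in toy form (the code and its ‘worlds’ are only an illustration of the argument, not part of it): the correlation constraint alone leaves every possibility open; only opening one box settles anything.

```python
# Two perfectly correlated boxes: either both balls are red or both are green.
possible_worlds = [("red", "red"), ("green", "green")]

def consistent_worlds(observation=None):
    """Worlds compatible with what we know; observation is the colour seen in box 1, if any."""
    if observation is None:
        return possible_worlds                           # correlation alone: both worlds remain
    return [w for w in possible_worlds if w[0] == observation]

print(consistent_worlds())        # [('red', 'red'), ('green', 'green')] -- nothing singled out
print(consistent_worlds("red"))   # [('red', 'red')] -- opening one box cashes in the correlation
```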

    To do so it needs to generalise, and to generalise it needs to be able to classify things into objects and model how they behave. This is the kind of system I’m talking about.

    Actually, I think (if I understand you correctly) that’s known to be wrong. One can build a general-purpose problem solving system that works, ultimately, by compressing the data it receives from the environment, and thus, according to some principles from algorithmic information theory, finds the most likely way to continue the series of data into the future, as dictated by the Solomonoff universal prior. It’s also known that this system is asymptotically as fast for any given situation it encounters as the best special-purpose problem solver. And while it’s true that real world systems can only approximate this theoretical agent, there’s a strong possibility that the approximation is good enough for practical purposes (after all, human reasoning in such situations is far from perfect, either).

    And all this agent really carries out is data compression—which just pertains to the syntactic (redundant) properties of the data, but bears no connection to what the data is data about.
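
    Purely as a rough illustration of the compression idea (this is emphatically not AIXI; zlib is a crude, computable stand-in for the uncomputable Solomonoff machinery, and the data stream is invented):

```python
import zlib

def description_length(data: bytes) -> int:
    """A crude proxy for description length: the zlib-compressed size of the data."""
    return len(zlib.compress(data, 9))

history = b"AB" * 100                      # a highly regular data stream
candidates = [b"A", b"B", b"C"]            # possible next symbols

# Score each continuation by how well the whole series then compresses;
# the best-compressing continuation is taken as the 'prediction'.
scores = {c: description_length(history + c) for c in candidates}
prediction = min(scores, key=scores.get)

print(scores)      # continuing the A/B pattern should give the shortest description
print(prediction)  # chosen from syntactic redundancy alone, with no notion of
                   # what, if anything, the data is 'about'
```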

    I’m surprised. You don’t think this is a plausible explanation?

    A plausible explanation from the point of view of an intentional system, certainly. But for something like the AIXI agent, the explanation would be that the data-pattern it received was one in which a certain set of actions maximized the expected reward—nothing about a lion, or even about fleeing at all.

    Do mathematical objects exist? Yes or no, depends what you mean by “exist”. Is murder objectively wrong? Yes or no, depends what you mean by “wrong”. Do people reduce to collections of elementary particles? Yes or no, depends what you mean by “reduce”. And so on.

    It’s a bit of a sideline, but with all these issues, I would say that the real question mostly is about what you say it ‘depends on’. And in some cases, while there may be no objective fact of the matter, I think one must guard against relativism—things like morals, for example, are in my view something that each social system has to define for itself, and it does so by means of a continuous dialogue, whose exact purpose is to continue to question established norms, and perhaps overthrow them should the need arise. In that sense, there’s very much a ‘fact of the matter’, it’s just not something one could come to from first principles, but very much dependent on the time and social context. Such investigations by their very nature don’t come to an end—they’re more like a continuing dialogue than a game of twenty questions (as one could perhaps characterize the scientific mode of inquiry).

    Many of these questions have been raging for thousands of years. It’s not too early, it’s too late! 🙂

    Well, it took us thousands of years before we ever got quantum theory or rocket science; with that attitude, we might just have given up beforehand. And I think that those really were the easy questions, and that the real work probably still lies ahead of us…

    I mean, what Quine said is OK to a first approximation, but I think there are can be differences in degree between what is bound and what is not bound.

    I think there is a problem of understanding here. A variable is ‘bound’ exactly if it is quantified over; there is no matter of degrees there. In my example ‘x’ is bound because the sentence contains a ‘there exists x’. Another example would be a ‘for all x’.
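
    In symbols, with P(x) standing in for an arbitrary predicate (just a placeholder):

```latex
% x is bound: a quantifier governs it
\exists x\, P(x), \qquad \forall x\, P(x)
% x is free: no quantifier binds it
P(x)
```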

    I guess I’m going to have to fall back on insisting that the thermostat’s reference to temperature is not unbound either.

    Of course. The problem is not that the variable isn’t bound, it’s that there’s no thing singled out by the proposition it appears in. ‘There is x such that x is referred to as ‘XO-2b’ by Disagreeable Me’ points to something; but the thermostat’s state, like the boxes above, doesn’t point to anything.

    I mean, let’s build this up from the ground: 0, or 1, don’t point to anything. But then, neither does ‘00’ or ‘11’—ultimately, it just means ‘if something is 0, another thing is also 0’, and ‘if something is 1, another thing is also 1’, which might pertain just as well to the lamp/switch as to the thermostat/heating and literally infinitely many other systems. The ‘cashing out’ of the correlations does not help in any way, because they’re just a template—knowing that ‘some thing is 1’, you may know that ‘another thing is also 1’, but since 0 or 1 don’t point, there is nothing singled out by this—only knowing that, say, one of those things is a switch, and the other a lamp, you can then genuinely cash out the reference, by saying that since the lamp is lit, the switch is flipped, and thus, use the light of the lamp as a reference to or indication of the state of the switch.

    Further complicating the web of correlations won’t help, either—unravelling it all will only again lead to sentences such as the above, without any more reference.

    This argument applies only to abstract structures which are not in interaction with the real world.

    But the problem is that all you have access to (on your construal) is the abstract structure.

    My brain will process sensory input in a manner analogous to yours, and I of course understand my own brain’s intentions.

    But how does it do that? All it’s got, in both cases—i.e. in understanding my intentionality and yours—is just a pattern of data, i.e. a ciphertext. This, in and of itself, just simply doesn’t point, doesn’t single anything out, as shown above.

    Again, this is just confusion about what “red” means. Two different beliefs.

    I disagree. Consider, for example, a case of inverted qualia (where I don’t want to understand qualia as something fundamental, but simply as whatever it is that we take ourselves to have by way of our apparent phenomenology), where one person has them ‘the right way’, and the other person is an invert. Both would consider themselves to be ‘experiencing red’ with equal justification; but their first-person phenomenology would differ. Thus, the belief is not immediately connected with the phenomenology; and indeed, if such a case is plausible, then one can be mistaken in the belief of experiencing red. In fact, even a system with no qualia at all could voice and hold this belief (at least as far as such systems can hold beliefs at all).

    The belief is merely about the signifier, but the connection to the signified is arbitrary and hence, may be erroneous; thus, the belief itself is not sufficient for the phenomenology we take ourselves to have.

    Which means you need to understand the language the rules were written in and any concepts they appeal to.

    They don’t appeal to any concepts, they appeal to signs; they’re not written in any language, but embodied in the way the signs are manipulated. Like you can learn chess by observing the game being played, you can learn the rules of symbol-manipulation by observing them being manipulated. Again, if it were any other way, then the MUH would be in trouble.

    Of course, in practice, the rules are generally hard-wired, i.e. in the actions a Turing machine performs when given a certain symbol. But that’s just convenience.

    To enable communication of mathematical structures from mind to mind.

    But minds are also mere mathematical structures, so then again the question arises of how mathematical structures can give rise to the understanding needed.

    News to me. I would have them as the very exemplars of abstract objects.

    That’s not how they’re generally analyzed, at least. Again, I can only point to Quine’s ‘On What There Is’, which is both very insightful and well written—and, IMO, one of the most important philosophical essays from the first half of the 20th century. A claim about a fictional entity can be analyzed as follows: ‘the present king of France is bald’ can be parsed as ‘there is something that is the present king of France and that is bald, and nothing else is the present king of France’, in the same way as a claim about a real entity can be parsed—‘the author of Waverley was a poet’ becomes ‘there is something such that it is the author of Waverley, and that something was a poet, and nothing else wrote Waverley’. The difference is merely that the extension of the first sentence is empty—there is no entity such that it fulfills that proposition, while the extension of the second is satisfied by Walter Scott. Both are, however, equally concrete.
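
    Schematically, writing K for ‘is a present king of France’, B for ‘is bald’, W for ‘wrote Waverley’ and P for ‘was a poet’, both parsings share exactly the same form:

```latex
% 'The present king of France is bald'
\exists x\,\bigl(K(x) \land B(x) \land \forall y\,(K(y) \rightarrow y = x)\bigr)

% 'The author of Waverley was a poet'
\exists x\,\bigl(W(x) \land P(x) \land \forall y\,(W(y) \rightarrow y = x)\bigr)
```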

    We could rearrange the thermostat so its intention now refers to the output of a random number generator and it would be none the wiser. I could choose to change my mind and refer to my mug as XO-2b and you would be none the wiser.

    Well, my intention would still refer in exactly the same way—to XO-2b qua the object you denote by that name (a concrete object, even if you chose something fictional to refer to). But the thermostat still would merely embody a correlation that can’t be cashed out, as per the above discussion.

  78. Hi Jochen,

    > we ground meaning by pointing to things in the world

    Sure. That makes sense on a view that there is real intentionality. On my view, when we point to things in the world, we are just pointing to our internal representations which have causal links to certain sensory inputs. When another person interprets the gesture, they do so in such a way as to involve their own internal representation. This means we can point to things which don’t actually physically exist such as mathematical objects.

    > leaves open what a tree is

    Well, not that open, since only a tree will (reliably) be perceived as a tree by our brains because of the causal links between sensory input and internal representations.

    > There’s no need to bring an intentional system into play, and hence, doing so I think fails to address the question.

    I think there is such a need because I think no matter how you phrase it what you really mean is how I as an intentional system can tell that it is detecting X’s. Because what it is for it to be an X detector, independent of this concern, is just for it to detect X’s, that is to be presented with some input derived from an X and for it to correctly identify that input as corresponding to an X. That this does not answer your question shows that your question is about not what X detection is but about my epistemic state regarding its status as an X detector.

    > We haven’t yet established the possibility of such an X-detection algorithm

    That X-detection algorithms are possible is beyond doubt. They already exist. What is in doubt is whether the interpretation of something as an X detection algorithm is entirely in the mind of the observer. But to even ask the question you need to presuppose that the observer understands what an X is. If I don’t know what a foo is then I have little hope of identifying an algorithm as a foo-detector. If you don’t want to bring the intentional states of the observers of the system into play then it is best to ask me a question such as how I think my own X-detection works.

    How, for instance, can I detect the numeral 5? Well, I think I have a little algorithm in my brain that sends a signal whenever I look at the numeral 5. That algorithm is a numeral 5 detector, and to me the numeral 5 is just what is detected by that algorithm. Now, in fairness it’s a bit more than that, because it has connections to other concepts and algorithms too, but this is a sketch of how I would answer the question.
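
    Purely as a cartoon of the kind of thing I mean (a crude template-matcher with made-up names; I make no claim that the brain does anything like this):

```python
# A toy 'numeral-5 detector': an algorithm that fires on one class of inputs,
# where '5' is, from the inside, just 'whatever makes this thing fire'.
TEMPLATE_FIVE = (
    "#####",
    "#    ",
    "#### ",
    "    #",
    "#### ",
)

def looks_like_five(glyph, tolerance=2):
    """Fire if the 5x5 glyph differs from the stored template in at most `tolerance` cells."""
    mismatches = sum(
        1
        for row_t, row_g in zip(TEMPLATE_FIVE, glyph)
        for a, b in zip(row_t, row_g)
        if a != b
    )
    return mismatches <= tolerance

print(looks_like_five(TEMPLATE_FIVE))     # True: the detector 'signals'
print(looks_like_five(("#####",) * 5))    # False: too far from the template
```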

    > You’d have to show that an X-detection algorithm exists that can account for you knowing X in order to validate that guess

    Only if I had to prove that my guess is correct. But that’s not what I’m trying to do. I’m trying to show that my view is coherent. The reasons for believing it are criteria such as parsimony, not that alternative views are logically incoherent.

    > No, I don’t believe in any mystical properties of the mind

    I wouldn’t have expected you to describe your views so. Whether or not the label “mystical” is appropriate, you do seem to believe in some sort of direct objective link between your mental states and objects in the world. That seems mystical to me. I tend to reserve the word for intuitive leaps that seem to resist any attempt at explanation or analysis or even definition, and on that interpretation it seems applicable here so far.

    Unless that is you can give some account of what it means or how it might work.

    The direct objective reference view also seems to fall foul of the evidence, seeing as the objects we think of need not even exist.

    It’s pretty weak, admittedly, to argue that my view must be correct because your view is not well-developed, but I would urge you to try to consider the implications if your assumption that there is such a direct link turns out to be wrong.

    > Correlation only allows reference if one side of the relation is known.

    One side *is* known. In fact both sides are known.

    Just to recap, we’re discussing a robot which has learned to navigate its environment and interact successfully with various objects therein (opening doors and so on). The question is whether the internal representations (which were not determined by us but developed according to some genetic algorithm or neural network) the robot has of such objects actually refer to the objects.

    I think we can agree the robot doesn’t “care” either way. Whether its internal representations really do refer to objects in the external world or not, it can navigate that world just fine. Any correlation there might be is for us to determine. We might say symbol X refers to object Y just in case we can spot a correlation between X and Y. In this case, we know what Y is. We also presumably know what X is. So we in fact know both sides of the correlation. We need to know both sides to spot that there is a correlation at all.

    Spotting these correlations is a third-person activity we can undertake from our perspective outside the system. The robot doesn’t need to spot these correlations because all it really knows is its own internal model. This is how it “understands” the external world. It has no explicit knowledge of the implementation or low level symbols or state of its internal model, so it doesn’t need to know that this symbol X represents that object Y any more than I know which of my neurons are correlated with which mental activities. Instead it only knows, via its model and these symbols, the virtual world represented by that model.

    So my position is not that there is a “real” correlation between our internal representations and the external world. Correlation is perhaps only in the mind of the beholder. Rather I’m suggesting that we are no different from the robot. We only know the virtual world represented by that model. We might be inclined to say there is a correlation when viewed from a third person perspective, but this is not what gives rise to our intentionality, rather it is what makes our intentionality useful in navigating the world. Our intentionality is instead inherent in the internal structure of our mental models. We cannot perceive what correlations there might be for ourselves, since we cannot perceive the external world directly. But we can perceive the correlations there are between the apparent mental states of others and objects in the external world (or really, our virtual model of the external world), just as we do with the robot. We can then make conclusions about ourselves by assuming we are no different from other people (or, in my case at least, from the robot).

    > And all this agent really carries out is data compression—which just pertains to the syntactic (redundant) properties of the data, but bears no connection to what the data is data about.

    I think this ignores that I’m arguing that all meaning is ultimately syntactic. Building models *is* data compression. It takes many more bits to represent the state of all the particles in a room than to list all the objects and their macroscopic states.

    AIXI seems to me to support my view rather than refute it. AIXI keeps building and testing hypotheses. These hypotheses are just the models I’m talking about.

    > was one in which a certain set of actions maximized the expected reward—nothing about a lion, or even about fleeing at all.

    In order for it to know that fleeing maximises reward, I think it needs to have some sort of representation of a lion (even if only qua that which must be fled) in the hypothesis it is operating under.

    > I think there is a problem of understanding here.

    True. Apologies. In answer to your original point, it doesn’t much help matters to say that to be an intentional object is to be a bound variable in the context of mental states if what we are debating is what counts as such a context.

    > ‘There is x such that x is referred to as ‘XO-2b’ by Disagreeable Me’ points to something; but the thermostat’s state, like the boxes above, doesn’t point to anything.

    Of course I don’t see the difference you do, but let’s go on…

    > 0, or 1, don’t point to anything.

    They may not point to anything *else* but they can have meaning in themselves due to the role they play in, e.g. binary systems. As I’m trying to explain, even unexplained symbols can acquire meaning via the roles they play in abstract systems.

    > ultimately, it just means ‘if something is 0, another thing is also 0’, and ‘if something is 1, another thing is also 1’,

    Right! So you can delete that “just”. That’s what it means, in this particular example. There’s the meaning for you.

    Now, put that in context in a physical system, and we can see correlations between the first 1 and the heating element being on and between the second 1 and the room being cold. The thermostat therefore *seems* to refer to these things (an illusion you might say), even though all it knows of them are the roles they play in its trivial abstract structure.

    What I am saying is that we are no different. Like the thermostat, we can (seem to?) refer to things without knowing much about them. I’m not trying to elevate the thermostat to our level so much as dissolve what we think we know about our intentions and so take us down to the level of the thermostat. That’s why I’m throwing out the idea that there is any such thing as a direct objective reference. There seems to be such a thing only because of correlation. Rather than saying there *are* no references I prefer to say that this correlation *is* the reference. If we can be said to refer to things, then so can the thermostat.

    > only knowing that, say, one of those things is a switch, and the other a lamp, you can then genuinely cash out the reference

    That’s not cashing out anything because I have no idea how you think you know that one of those things is a lamp and the other is a switch. You’re appealing to reference in order to explain it (exactly what you accuse me of doing!).

    > But the problem is that all you have access to (on your construal) is the abstract structure.

    That’s all I have access to, yes. But it is in interaction with the real world. Or at least we suppose it is. That’s why the third-person perspective interpretation of one of my representations as referring to an object in the external world is not arbitrary. That’s why the interpretation of a thermostat’s representations as referring to temperature is not arbitrary.

    > All it’s got, in both cases—i.e. in understanding my intentionality and yours—is just a pattern of data, i.e. a ciphertext.

    That’s not true. It has a process. Stop thinking of it as marks on a page. It’s more like a computer program. It’s dynamic. Causal. In interaction with the world and with itself.

    > where one person has them ‘the right way’, and the other person is an invert.

    If you understand my position and Dennett’s, you should understand that we regard this setup as meaningless. If qualia have no content, then it makes no sense to suppose that the content of the qualia of one person is correct and the other person’s incorrect.

    > Both would consider themselves to be ‘experiencing red’ with equal justification

    True.

    > but their first-person phenomenology would differ.

    False. There is no first-person phenomenology in the way you conceive of it.

    > In fact, even a system with no qualia at all could voice and hold this belief (at least as far as such systems can hold beliefs at all).

    Exactly right. We are such systems.

    > Like you can learn chess by observing the game being played, you can learn the rules of symbol-manipulation by observing them being manipulated.

    Sure, and doing so relies on having certain mental faculties and understanding built in.

    > so then again the question arises of how mathematical structures can give rise to the understanding needed.

    This stuff is really really hard to talk about because there are multiple levels where we can talk of understanding things and it is easy to get confused. That’s what is happening here. In answer to your question, the way understanding gets bootstrapped is with the laws of physics. They don’t need to be understood to be put into action.

    You don’t need to understand the structure of your brain in order for the structure of your brain to enable you to understand. I don’t understand what this neuron does or what that other neuron means, but the firing of those neurons is what allows me to understand. Suppose that neuron *does* mean something, but that from a third-person perspective I cannot easily decipher what that might be. Even so, the firing of the neuron when I think about something *is* its meaning that thing from the first-person perspective.

    So, in my view, a computer program that performs some task understands that task (qua that task), but it does not typically understand itself. The Chinese Room understands Chinese, but it does not know anything about its own algorithm or implementation. I think this is no different from the kind of understanding we have.

    > my intention would still refer in exactly the same way—to XO-2b qua the object you denote by that name

    And the thermostat would still refer in exactly the same way–to whatever it is that is switching it on and off qua the thing that is switching it on and off.

    Again, everything you have so far said about XO-2b has an analogue for the thermostat. You are failing to draw the distinction you want to draw.

  79. Sure. That makes sense on a view that there is real intentionality.

    Which, to me, seems like a very good argument for the existence of real intentionality.

    On my view, when we point to things in the world, we are just pointing to our internal representations which have causal links to certain sensory inputs.

    Well, if there are some such internal representations that you can point to, then there is intentionality. But on your construal, you try to define things in terms of themselves, and then point to that in order to attempt to ground other things—but the whole edifice is built on air.

    I think there is such a need because I think no matter how you phrase it what you really mean is how I as an intentional system can tell that it is detecting X’s.

    But then, this means that your construction has a circularity that can’t be eliminated, and thus, doesn’t actually give any account of intentionality, as it must appeal to intentionality to account for its features. That’s of course all I’ve been saying those previous umpteen posts.

    Only if I had to prove that my guess is correct. But that’s not what I’m trying to do. I’m trying to show that my view is coherent.

    But if it has to appeal to its own correctness, then it isn’t coherent, it’s vacuous. If what you say only makes sense if your guess is correct, then there’s no logical force behind it whatsoever, because that is true for any conceivable proposition; but then, you haven’t actually brought forward any argument supporting your position.

    Whether or not the label “mystical” is appropriate, you do seem to believe in some sort of direct objective link between your mental states and objects in the world.

    Well, I don’t. In fact, I don’t believe we have access to the external world at all. However, I’ve not given any positive account of my views so far, because the discussion was prompted by you providing yours, and I don’t want to come across as a snake-oil salesman immediately jumping to hawking his own product. So far, all I’ve posted was merely in reaction to what I perceive as the holes in your account, judging purely from my own first-person experience that you propose to account for, nothing else.

    We also presumably know what X is. So we in fact know both sides of the correlation. We need to know both sides to spot that there is a correlation at all.

    We know both sides of the correlation because we are intentional systems; again, to use this as a starting point means that your whole position is logically unsound as an attempt at explaining how intentionality comes about.

    Also, attempting to give a third-personal account seems, to me, to completely miss the point. What we want to explain is how, to us, our own mental state can be about things not intrinsic to it (whether they be in the world, fictional, or otherwise mental constructs).

    Aboutness is a three-place relation, not a two-place one: A is not simply about B, it is about B to C. A text in some language you don’t speak isn’t about anything to you; but to a speaker of that language, it is meaningfully about whatever it’s about. An intentional relationship can be viewed as a folding-in of this relation: my mental state is about a tree to that same mental state. But a third-personal point of view can never touch this: in your example, X is about Y to you; but that doesn’t mean that X has any intentionality. So it’s true that on your account, you always need to bring in some external intentionality; but that’s not a feature, it’s a fatal flaw.

    In order for it to know that fleeing maximises reward, I think it needs to have some sort of representation of a lion (even if only qua that which must be fleed) in the hypothesis it is operating under.

    But it doesn’t even know that it’s fleeing. It receives some signals, shuttles them around, and produces signals (which then activate various motors and servos and produce motion). It’s basically the same thing as if you insert a coin into some vending machine, and receive a coke in response. Do you think the coin is about something to the vending machine? Of course, you could produce some story akin to the one you’re trying to tell—that the coin ‘means’ issuing a bottle of coke to the vending machine—but all that is, is mere projection.

    That’s all I have access to, yes. But it is in interaction with the real world.

    But how is that relevant? The abstract structure underdetermines what it is the structure of; it’s an empty net that can be cast to catch all manner of things. What I’m saying is merely that this is radically unlike our experience; there, we always have some concrete object as the intentional content of our mental state, while having merely access to the abstract structure leaves that content completely undetermined.

    The question I’m asking, that you so far haven’t really engaged with, is how, from the mere abstract structure, those concrete intentional objects are built. There’s nothing but handwaving towards piling up more and more structure, and then a miracle occurs, and somehow we move from the abstract structure to, say, a tree. In order for it to be plausible at all, I think you really need to try and show how this works on your account.

    That’s not true. It has a process. Stop thinking of it as marks on a page. It’s more like a computer program. It’s dynamic. Causal. In interaction with the world and with itself.

    Merely setting the whole thing in motion actually does nothing to reduce the arbitrariness—that’s the basic lesson of the stone argument, which you accept as per the thread on Sergio’s proposal. Just as you can imbue any set of symbols with any meaning, you can interpret any sequence of states as any computation—it’s the same thing. Whether the symbols are ordered spatially or in a temporal sequence doesn’t make the slightest difference.

    Sure, and doing so relies on having certain mental faculties and understanding built in.

    Well, it requires some kind of general intelligence, I suppose, but nothing is brought to the table from the outside—the rules are wholly implicit; that one might fail to unpack them does not say anything about their in-principle communicability (a point I find myself repeating).

    Again, everything you have so far said about XO-2b has an analogue for the thermostat. You are failing to draw the distinction you want to draw.

    Well, from my perspective, I’m demonstrating a clear cut distinction, that you fail to appreciate. Once more, the thermostat embodies an abstract structure that can be implemented on arbitrary systems. An intentional system refers to a concrete entity that is singled out as that which satisfies a certain proposition.

    The thermostat may ‘know’ that its state implements a certain correlated system; but even if you want to interpret it as pointing to something, that something it points to is arbitrary. For instance, consider simply switching around the ‘code’: what 1 stood for is now called 0, and vice versa. So if originally, the sensor being triggered by the temperature exceeding a certain limit was denoted 1, it is now denoted 0; if the temperature falling below some limit was 0, it is now 1. The same goes for the state of the heating unit, or the diode indicating its function. So if earlier, 00 ‘meant’ that it’s too cold, it now means it’s too hot; likewise for the state 11. Hence, from the point of view of the thermostat, nothing has changed; however, if we want to interpret its states as referring to something (which we, as intentional beings, can do), it will now refer to the exact opposite.
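
    A small sketch, just to fix ideas (the ‘cold’/‘hot’ history and the two coding dictionaries are invented for the illustration): the physical device and its behaviour are untouched; only the dictionary an outside interpreter uses to read its 0s and 1s changes, and with it, what the very same state ‘refers to’.

```python
physical_history = ["cold", "hot", "cold", "cold", "hot"]   # what actually happens

code_original = {"cold": 0, "hot": 1}    # original convention
code_flipped  = {"cold": 1, "hot": 0}    # same device, labels swapped throughout

states_original = [code_original[t] for t in physical_history]
states_flipped  = [code_flipped[t]  for t in physical_history]

# The internal structure (which states co-occur, how they succeed one another)
# is isomorphic; but the label '0' now stands for the opposite condition.
print(states_original)   # [0, 1, 0, 0, 1]
print(states_flipped)    # [1, 0, 1, 1, 0]
```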

    So again, all the thermostat ‘knows’ is that it refers to something; but that something is itself left completely unspecified. This is radically unlike the case of intentionality, where something is referred to qua its specifics: only the fact that the object you’re referring to by the name ‘XO-2b’ is by you so referred, makes it possible for me to have an intention towards it.

    Intentional reference is like the case of the two boxes: if you’re able to look into one, then you can refer to the contents of the other by means of the contents of your box. But you need to have access to the contents; simply being able to tell that there’s a correlation between the boxes does no work at all.

    Of course, that reference may be mistaken; the correlation may not actually be as we suppose it to be—our senses, in other words, may mislead us. But this doesn’t change that it’s a genuine reference. But, in order to account for the way we experience our own intentionality, we must be able to look into at least one box; so in order to provide an account for intentionality that has at least a superficial semblance to the way it seems to be, you should detail how to build something like the two boxes from mere correlation. This is, of course, not possible: whatever you propose, one could always just interpret the code you use differently, thereby eliminating the proposed reference; but at this point, I don’t think anything I say could convince you of that, so I suggest you just go ahead and try it for yourself.

  80. Hi Jochen,

    > but the whole edifice is built on air.

    Like any abstract structure, I guess. I mean, you define the sides of a triangle with reference to its vertices, but you can also define the vertices with reference to where the sides intersect. The whole edifice is built on air, as you say. On Platonism, this is not especially problematic.

    > as it must appeal to intentionality to account for its features. That’s of course all I’ve been saying those previous umpteen posts.

    And, of course, what I’ve been disagreeing with.

    I guess I want to return to the robot. Let’s say it has what we can recognise as internal representations — symbols that seem to correlate to objects in the world and which it seems to use to represent its “intentions” towards things. We didn’t give it these representations explicitly, it developed them itself with some sort of algorithm.

    This robot at least appears to have beliefs and intentions. We can see the trace of these beliefs written in its memory, and perhaps even interpret them. These beliefs reflect the state of the world as it appears to the robot. Like a human, it can be fooled by illusions and novelties, but by and large these intentions allow it to get around and manipulate things pretty successfully.

    So the question is, how do you know, without special pleading, that you are not like such a robot? How do you know that your beliefs are anything more than its beliefs*?

    > If what you say only makes sense if your guess is correct, then there’s no logical force behind it whatsoever

    I disagree. There’s a whole school of thought that truth is about coherence (although I’m more of a correspondence man myself, I have to admit). I’m not just proposing a single proposition, I’m proposing a way of thinking about intentionality and mentality. A view composed of a series of interlocking mutually supporting propositions. If that view matches up with reality to the extent that we can check, and if it is coherent, and if it has explanatory power, and if it is parsimonious, then there is reason to believe it. Of course you will likely disagree that it has many or any of these attributes, but please don’t tell me that coherence is vacuous or of no import.

    > In fact, I don’t believe we have access to the external world at all. However, I’ve not given any positive account of my views so far

    I stand corrected. However you do seem to me to be arguing as though you believe we have access to the external world. I must be misinterpreting you.

    > Also, attempting to give a third-personal account seems, to me, to completely miss the point.

    OK, but you seem to me to be asking for a third-person account. You are asking me how I know what various symbols are supposed to mean. That is a third-person perspective question. The intentional system itself doesn’t see its own symbols. It doesn’t interpret them explicitly. The interaction of those symbols is just how it thinks, like a computer program.

    > Aboutness is a three-place relation, not a two-place one: A is not simply about B, it is about B to C.

    I agree in the case of external symbols. I do not fully agree in the case of internal symbols embedded in an intentional system, as it seems to suggest that C interprets A to mean B, but there is no interpretation or translation of an internal representation. When we’re talking about internal representations, C has no access to B, which is in the external world, but only to A. So C only knows A by A’s causal role in its dynamic information processing system. A acts as a proxy for B, which it can do because of certain correlations and similarity of structure.

    Now, if we have another observer D, (to whom E means B, I guess), then that observer can from a third party perspective say that A means B to C. From a third-person perspective such as that of D there is a distinction between the mental model of C (i.e. A) and the object in the external world (i.e. B). Not so from a first-person perspective.

    In other words, you can’t see your internal symbols (well, not without an fMRI of some kind), so it is not correct to say that your internal symbols mean things to you in the same way that external symbols do.

    > that the coin ‘means’ issuing a bottle of coke to the vending machine—but all that is, is mere projection.

    Well, so you say. I don’t think it is projection. I think it’s identifying a continuity between intentionality in humans and intentionality in simpler systems. The vending machine is no different from the thermostat in this respect.

    > What I’m saying is merely that this is radically unlike our experience; there, we always have some concrete object as the intentional content of our mental state

    How do you know? You have said you don’t think you have direct access to the external world. How do you know you’re not just thinking about an abstract structure that underdetermines the object you are thinking about? In fact examples of such underdetermination have abounded in this discussion. XO-2b could be a planet or my mug!

    This underdetermination is not usually a problem because the actual objects which match up to the abstract structures we are thinking of are usually few in number — usually one or zero. My abstract representation of my mug could be satisfied by an infinite variety of possible objects of approximately such a shape and texture, but it so happens that there is only one object on earth that will actually be picked out by this representation, because this representation is sufficiently detailed to exclude any imposters that happen to exist. As such, the underdetermination is not a problem in practice. That is why we don’t usually notice it, but don’t be fooled into thinking that your mental model suffers any less from underdetermination than that of an abstract model.

    > how, from the mere abstract structure, those concrete intentional objects are built.

    I guess I’m saying there are no concrete intentional objects. It’s all abstract structure. The appearance of concreteness is just because the abstract structure is usually detailed enough to pick out only one object in practice.

    > and then a miracle occurs, and somehow we move from the abstract structure to, say, a tree.

    No miracle! As you add more detail to an abstract model, the number of real world objects that match it diminishes. Finally you may be met with only one object (or class of objects).

    And remember that links to sense data are incorporated into the abstract model. It’s not causally disconnected from the real world. This makes it even easier to pin it down to something specific.
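
    A toy rendering of that ‘no miracle’ (an invented candidate world and description, purely for illustration): each added constraint on the abstract description shrinks the set of worldly candidates that satisfy it, often down to a single object.

```python
world = [
    {"kind": "mug",  "colour": "blue",  "chipped": True,  "on_my_desk": True},
    {"kind": "mug",  "colour": "blue",  "chipped": False, "on_my_desk": False},
    {"kind": "mug",  "colour": "white", "chipped": True,  "on_my_desk": False},
    {"kind": "tree", "colour": "green", "chipped": False, "on_my_desk": False},
]

description = {}   # the abstract model, built up detail by detail
for key, value in [("kind", "mug"), ("colour", "blue"), ("chipped", True)]:
    description[key] = value
    matches = [o for o in world if all(o[k] == v for k, v in description.items())]
    print(len(matches), "candidate(s) left after adding", key)
# 3, then 2, then 1: with enough detail, the description picks out one object in practice.
```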

    > that’s the basic lesson of the stone argument, which you accept as per the thread on Sergio’s proposal

    No. The basic lesson of the stone argument is that whatever abstract models we see supervening on a particular physical object is arbitrary. You can interpret it to be instantiating one model or another. You are making a different claim, that the interpretation of the meaning of one of these models in interaction with the physical world is arbitrary. There may be a point to be made there. But since I’m not defending absolute reference I don’t think that’s a problem. I am only saying that certain of these interpretations are more reasonable than others, and this is how we can say A means B to C from a third person perspective. From a first person perspective there is no interpretation.

    > Once more, the thermostat embodies an abstract structure that can be implemented on arbitrary systems. An intentional system refers to a concrete entity that is singled out as that which satisfies a certain proposition.

    You can keep repeating it, but the only difference I see is that you insist that the former is abstract and the latter is concrete (and intentional). Everything you say for one has an analogue for the other. If you take away your assertion that one is non-intentional and the other intentional you’re left with no distinction at all.

    > if we want to interpret its states as referring to something (which we, as intentional beings, can do)

    I attribute intentionality to abstract structure. Abstract structures which are isomorphic are in my view essentially the same structure. If you make changes to the thermostat so as to leave it with an isomorphic structure (only the labels having been switched around) then nothing of significance has changed. The symbol that is in the causal role of referring to cold still refers to cold. The symbol that is in the causal role of referring to hot still refers to hot. We have only changed how those symbols are physically implemented with binary bits.

    In the same way, though you may have a neuron that means “red”, and a neuron that means “blue”, I could in principle cut them out of your brain and swap their positions, wiring them back in just as their counterpart had been. Now, nothing of significance would have changed. You would know no difference. And yet the neuron that had meant “red” would now mean “blue” and vice versa.

    But you must see that all these thought experiments which involve mangling the thermostat in some way are futile. If you change the thermostat, or change the physical context in which it finds itself, then the referents may change. That is not surprising, nor does it prove anything. If I became a brain in a vat then my referents would change too. I would no longer be referring to real things but the products of a simulation.

    > So again, all the thermostat ‘knows’ is that it refers to something; but that something is itself left completely unspecified.

    No it isn’t. It refers to something qua its specifics, i.e. that which changes its state. That is not completely unspecified.

    > But this doesn’t change that it’s a genuine reference.

    This is characteristic of your argument. You keep insisting that *this* is genuine and *that* is not. You think you are drawing distinctions but you are just using different adjectives where you could just as well use the same.

    > but at this point, I don’t think anything I say could convince you of that

    It would be naive to expect anything different. With people such as you or I who’ve put a great deal of thought into questions such as these, the probability of anyone persuading the other is infinitesimal. Anything you say I will have likely already considered and dismissed and vice versa. The most we can hope for is to understand each other’s positions a little better, sharpening our own views and honing our arguments. And that’s what makes it worthwhile.

  81. On Platonism, this is not especially problematic.

    On the usual conception of mathematical Platonism, that’s right. But such a conception at best takes mathematical structures to exist alongside the concrete reality of the world, in some abstract realm. Your Platonism, however, attempts to go beyond that, and tries to install abstract mathematical structures as the foundation of concrete physical reality. For such a view, the underdetermination of relata by relations is a problem you’ve yet to overcome.

    We didn’t give it these representations explicitly, it developed them itself with some sort of algorithm.

    But only to us are they representations (thanks to our nature as intentional beings); to the robot, there need not be anything there. The question is, however, not whether the robot’s symbols are something to us, but how they can come to mean something to the robot.

    These beliefs reflect the state of the world as it appears to the robot.

    That the state of the world appears ‘like’ anything to the robot is what is to be shown; you can’t simply assume it as a given. If computationalism is false, then this might simply not be the case; and the question we’re discussing here is exactly whether computationalism is right.

    So the question is, how do you know, without special pleading, that you are not like such a robot? How do you know that your beliefs are anything more than its beliefs*?

    I don’t know that they are, but I do know that they seem to be; and your position must, at minimum, account for the way they seem. In other words, you need to show how what I take to be my subjective, intentional experience emerges from the relations and structure you deem fundamental. But this seems to me to be theoretical work you just haven’t done, but assume must be doable somehow.

    There’s a whole school of thought that truth is about coherence

    Anything circular is trivially coherent, but even to a coherence theorist, such truths are vacuous, and have no bearing on the world. ‘If it rains tomorrow, it rains tomorrow’ is certainly true (and coherent), but tells us nothing about whether it will rain tomorrow.

    Your account of intentionality depends on your guess that you have something like an X-detection algorithm, which you can use to identify other X-detection algorithms. And if that’s the case, then you can say that intentionality (in the sense of such detection) is underwritten by an algorithm. But this, as should be clear, does no work at all towards establishing that intentionality can actually be explained in terms of such algorithms—because it merely asserts that if there are X-detection algorithms, there are X-detection algorithms.

    If some other account of intentionality were right, then you could ‘detect’ X’s in some manner, and you could even define algorithms that pick out X’s exactly when you do; but that algorithm’s ‘intentionality’ will merely be derived from yours, and thus, the idea that intentionality can be grounded in such algorithms alone would be false.

    You are asking me how I know what various symbols are supposed to mean. That is a third-person perspective question.

    Ultimately, that question asks for your first-person perspective on things: you see a symbol, and observe its correlation with something, and thus, conclude that it refers to that something. But this is just grounding the reference in your own intentionality, as both the symbol and that with which it is correlated appear to you as intentional content of your thoughts. So when I’m asking how you know the meaning of some symbols, I’m asking how they come to appear in your intentional experience, from where you then deduce their correlation.

    When we’re talking about internal representations, C has no access to B, which is in the external world, but only to A.

    But B means A to C; now, of course, whether there is a corresponding B ‘out in the world’ is in principle subject to error, but this just means that C can’t verify whether A correctly refers, but it doesn’t change the nature of the aboutness relation.

    In other words, you can’t see your internal symbols (well, not without an fMRI of some kind), so it is not correct to say that your internal symbols mean things to you in the same way that external symbols do.

    I think ultimately they do—internal symbols mean internal mental states (patterns of neurons firing), and one pattern ‘means’ something to another pattern by virtue of effecting a change in that pattern (causing it to ‘do’ something). But that’s a different story.

    How do you know you’re not just thinking about an abstract structure that underdetermines the object you are thinking about? In fact examples of such underdetermination have abounded in this discussion. XO-2b could be a planet or my mug!

    But the underdetermination there is of a very different kind—different objects can fit the same reference, satisfy the same proposition, but in the case of the thermostat, the whole proposition itself is left undetermined.

    I mean, go back to the example of the two boxes, as opposed to the thermostat. I suppose you can see the difference between those, right? The thermostat embodies the same correlation as the boxes; but the boxes are a concrete realization of the abstract correlation. The boxes are one concrete example of how the structure given by the thermostat—00 or 11—can be instantiated; there is something additional to that structure, that isn’t determined by the structure. This is the underdetermination in the case of the thermostat.

    The boxes contain an underdetermination as in the case of XO-2b: I, as an intentional system, have access to one of the boxes. Its content, my representation, my intentional object, refers to the content of the other box ‘out in the world’ by means of its correlation. Now, what actually is ‘at the other end’ of this relation, so to speak, I have no way of knowing; but I do have a representation, which I (parsimoniously) may assume to refer.

    So, again: the thermostat only yields abstract structure, which can be instantiated in any number of different ways. The boxes represent a concrete instantiation, which, by knowing one side (the mental, intentional one), allows one to frame a reference (which might fail to refer to anything real).

    Thus, as it appears to me, you need something like the boxes to account for my intentional mental content as it is presented to me; my thoughts don’t contain some net of structures that can be underwritten by relata any which way, but they contain representations of concrete things, say, an apple, which I presume have correlata in the real world, but which may fail to. But even in the case of failure, the resulting ambiguity is of a very different kind from the ambiguity in the case where all we have is the structure, where not merely one side of the relation is left dangling, but the relation itself is left completely open.

    So, in order to produce a convincing account of intentionality, you need to show how to create something that’s like the boxes, from something that’s like the thermostat; this seems to me to be a minimum requirement.

    The basic lesson of the stone argument is that whatever abstract models we see supervening on a particular physical object is arbitrary.

    This cuts both ways: by the same token, what concrete system subvenes a given abstract model is just as open; that is, even ‘animated structure’ leaves the dynamic object that embodies it completely unspecified, because for any dynamic object, a mapping exists such that it embodies the structure.

    Abstract structures which are isomorphic are in my view essentially the same structure.

    This is trivially true for structures, and that’s where the problem comes from: the structures of the stack of books and the group of people are isomorphic, but the stack of books and the group of people are two different things. Hence, structure doesn’t capture all the relevant facts.
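
    To make the point as explicit as I can, here is a little toy comparison (my own, of course, and nothing more than an illustration): two relations with identical structure but entirely different relata.

    ```python
    # A three-element chain, realized twice: once by books, once by ancestors.
    books    = {("Kant", "Hume"), ("Hume", "Locke")}          # "rests on top of"
    ancestry = {("grandfather", "father"), ("father", "me")}  # "is ancestor of"

    # An explicit isomorphism maps the one relation exactly onto the other...
    iso = {"Kant": "grandfather", "Hume": "father", "Locke": "me"}
    assert {(iso[a], iso[b]) for a, b in books} == ancestry

    # ...so nothing in the structure distinguishes books from people; the fact
    # that my grandfather wasn't a book is extra, non-structural information.
    ```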

    It refers to something qua its specifics, i.e. that which changes its state. That is not completely unspecified.

    It does not even necessarily change state; only the mapping might change, for instance, such that what got mapped to 1 now maps to 0, and vice versa.

    The most we can hope for is to understand each other’s positions a little better, sharpening our own views and honing our arguments. And that’s what makes it worthwhile.

    OK, on that, at least, we can agree. 🙂

  82. Hi Jochen,

    > in some abstract realm

    *wince*

    OK, that’s fair enough, but I just hate that way of putting it because it gives the wrong message. They are not *in* anything. They don’t exist in a place or at a time. So I don’t like talk of realms.

    > the underdetermination of relata by relations is a problem you’ve yet to overcome.

    I don’t see how. If our universe is a mathematical structure, underdetermination doesn’t come into it because the structure does not refer to anything else.

    > But only to us are they representations (thanks to our nature as intentional beings); to the robot, there need not be anything there.

    On my view they are representations, in that these are what constitute the robot’s model of the world. I guess you can’t *see* them as representations unless you can see the symbols as well as what they refer to as two different things, which requires a third person perspective.

    > That the state of the world appears ‘like’ anything to the robot is what is to be shown; you can’t go and assume this as if it were some given pretext.

    Be that as it may, even though I’m just presenting a tentative hypothetical case here, the question remains: what reason do you have to suspect that you are not in the same position as the robot? Of course you get to that, so let’s continue.

    > I don’t know that they are, but I do know that they seem to be; and your position must, at minimum, account for the way they seem.

    But how else could they seem? What would it mean to believe* something but not really believe it? What do you mean by “the way they seem” anyway? Can you put into words what exactly it is about how intentions seem to you that cannot be reconciled with the structural account? Is it not perhaps just the difference between the first-person and the third-person perspective?

    Going back to the Scientia Salon article, a simulation of you would profess the same beliefs as you do. It would claim that it seems to it that it is different from a simulation, that it has real beliefs even though a simulation does not. Even though (according to you) it is not an intentional agent, your arguments could not possibly persuade it of this unless the real you could also potentially accept that you are not an intentional agent. So either you must accept the possibility that you yourself are not intentional or there exists a case where adhering to reasoning such as yours would lead one astray.

    Or we just dissolve the supposed fundamental difference between intentional and non-intentional systems as I am inclined to do.

    > But this seems to me to be theoretical work you just haven’t done, but assume must be doable somehow.

    Not really. I deny there *is* any work to be done beyond understanding the functional operation of the brain. The idea is that there *is* no subjective, phenomenal experience independent of this function. There is a sense in which beliefs can be consistently attributed to systems, and my claim is that this is the only sense in which beliefs can be consistently attributed to systems. How real beliefs arise from functional beliefs does not need to be explained because they are one and the same. Why things seem to you the way they do does not need to be explained because we can from a purely functional perspective explain the origins of the functional belief that things seem to be more than functional. It comes from a confusion between first and third person perspectives, between internal and external symbols, between the intuition that we have direct access to the world and the fact that we only have indirect access via a virtual model in our brains.

    > But this, as should be clear, does no work at all towards establishing that intentionality can actually be explained in terms of such algorithms

    It’s not clear at all! I mean, this is how hypothesis formation works. You try to develop a self-consistent story that is not contradicted by the evidence to explain some phenomenon. What you are trying to do is show that it is contradicted or in some way undermined by the evidence, and I don’t think you succeed. If I’m right that my story is consistent with the evidence and with itself, that alone is enough to identify it as a candidate explanation, i.e. that as far as we know, intentionality can actually be explained in terms of such algorithms. Whether we ought to accept it as the best explanation depends on things like parsimony and so on. I’m not claiming that this is a scientific hypothesis, so falsifiability doesn’t come into it. I’m not even claiming that it has to be true. I’m just claiming it is the best explanation we have, and further that there is very little reason other than epistemic humility to doubt it.

    The converse position is also circular, by the way. It asserts that there is a difference between the pseudo-intentions of a thermostat and those of a human, because the former are meaningless and attributed while the latter are genuine and meaningful and objective. All the attempts you have made to draw distinctions only work if we accept this basic premise. Otherwise there are no distinctions. So there are only distinctions if there are distinctions. Again, this is self-consistent and consistent with the evidence, but I don’t think it is a good explanation because:

    1) We don’t need this distinction to explain the evidence, so we can do away with it
    2) What makes the distinction true (i.e. what is it about neurons as opposed to silicon?) is completely unexplained
    3) It is suspiciously intuitive, suggesting that perhaps it is taking human intuitions a little too seriously

    > Ultimately, that question asks for your first-person perspective on things: you see a symbol, and observe its correlation with something, and thus, conclude that it refers to that something.

    This is an example of what makes this topic so confusing to talk about. Here, you’re talking about how we identify the meaning of an external symbol. This is a question about how a third-person perspective symbol (for instance in a thermostat) is translated into a first person interpretation by an observer (e.g. me). This involves both intentional systems (thermostat and me), so both intentional systems will be involved in the explanation.

    I was talking about how the meaning of internal symbols (e.g. patterns in my neurons) comes about. In this case I am completely unaware of the symbol as it appears from a third-person perspective and am even unaware of its correlation with objects in the external world (to which I have no direct access). In this case there is no appeal to a second intentional system. The symbol is directly involved in your information processing by virtue of its causal relationships. It models the thing in the outside world as a proxy. You have no direct access to the thing in the outside world but only to the way the structure of the symbol shapes your thoughts. The experience of this shaping is to seem to be thinking about some object in the world, because there is no other way it *could* seem. From your perspective it’s not so much that this shaping *means* that object or *refers* to that object (this reference is only apparent from a third person perspective), but that it *is* your experience of that object (which may not even exist). The best way to understand this is perhaps by analogy to computer programs or simulated versions of you.

    > But B means A to C

    Well, initially we said A means B to C. I’m not sure if you’ve mixed things up deliberately to make a point or if you just made a mistake (I assume the latter).

    > but it doesn’t change the nature of the aboutness relation.

    Yeah, OK, as long as you’re very clear about what you’re saying. A means B to C, but if it’s an internal symbol, C can’t see it and recognise that it means B. It has to be embedded in C’s internal structure for it to mean B to C, and when it does, this is transparent to C. C does not realise that C’s understanding of B is mediated by A.

    > internal symbols mean internal mental states (patterns of neurons firing), and one pattern ‘means’ something to another pattern by virtue of effecting a change in that pattern (causing it to ‘do’ something).

    I couldn’t have put it better myself. That’s exactly right. The only point I am making is that these meanings don’t need to be interpreted. They cause things to happen directly, with no conscious effort or translation and completely transparently to the conscious mind. In that sense they are different from external symbols.

    > But the underdetermination there is of a very different kind

    And I really really wish you could explain how. I’ve been toying with the idea of a spreadsheet, with a column each for thermostat and mind, where you can enter propositions in one column and I can enter corresponding propositions in the other. I’d love to see that symmetry broken. It would be real progress. But it hasn’t been, as far as I can see.

    > but in the case of the thermostat, the whole proposition itself is left undetermined.

    But it isn’t! The proposition is, approximately, “that which causes the thermostat to change state”. Anything which satisfies this proposition is referred to by the thermostat.

    > The boxes are one concrete example of how the structure given by the thermostat—00 or 11—can be instantiated

    I think the switch and the lamp is a better analogy. The boxes are static and unchanging. The thermostat can change state. You can’t really have correlation in a static structure with a sample size of 1.

    But let’s run with it anyway…

    > there is something additional to that structure, that isn’t determined by the structure.

    Yes there is. There is the context in which they are deployed and how they relate to other structures. We are no different. The referents of my intentions depend on the context in which my mind is deployed. If I’m a brain in a vat, the referents of my intentions are the products of a simulation.

    > my thoughts don’t contain some net of structures that can be underwritten by relata any which way, but they contain representations of concrete things, say, an apple

    I’m not saying your thoughts contain some net of structures. I’m saying they are a net of structures. Containing a net of structures would, I suppose, mean thinking about a net of structures. The view is that when you are thinking about an apple, what is happening is that the symbol for apple in your brain is being activated and interacting with other parts of your brain. We call this a representation from a third person view because we can see an apparent correlation between this symbol and a physical apple. From a first person view it seems simply that you are thinking about an apple. This is hard to understand because looking at or thinking about this net of structures from a third person point of view bears no relation at all to thinking about an apple. This is why you (and almost everybody else) are confused in my view. This is why it seems to you that functionalism is not correct.

    > So, in order to produce a convincing account of intentionality, you need to show how to create something that’s like the boxes, from something that’s like the thermostat;

    I confess I am lost as to why you think the boxes are any more a concrete realisation than the thermostat. The thermostat is also a physical object. It’s just a different concrete realisation of (supposedly, although I have reservations on that) the same logical structure. I can’t really answer your question because I don’t understand what you are getting at.

    > This cuts both ways:

    True. Any structure could supervene on any object. I don’t think this is a problem for me? Is it?

    > Hence, structure doesn’t capture all the relevant facts.

    It depends on what you hold to be relevant. If the only relevant facts are those invariant under isomorphism, then structure does capture all the relevant facts.

    Viewed from a third-person perspective, the abstract structures of external symbols seem to underdetermine what they refer to because we can imagine more than one interpretation (i.e. mapping onto our own internal abstract structures). But this is not so for our own internal abstract structures, which are already in situ and need no interpretation. Yes, one could invent various isomorphic versions of these, but what we actually use to process information is just that structure itself (any isomorphic copy being the same structure), and any further detail is precisely what I view as irrelevant.

    > It does not even necessarily change state;

    I mean the thermostat changes state. If it doesn’t then it’s not a thermostat. It changes from “on” to “off” in response to changes in temperature. Whether you represent hot or cold with a 1 or a 0, this remains the same. It refers to temperature qua that which changes its state (from on to off and vice versa).

  83. If our universe is a mathematical structure, underdetermination doesn’t come into it because the structure does not refer to anything else.

    Which is the main reason that the idea that the universe is a mathematical structure seems to me straightforwardly insufficient to account for our actual experience within it.

    But how else could they seem? What would it mean to believe* something but not really believe it?

    Well, it would not seem like anything. I mean, consider the good old giant lookup table—it could, at least over some finite stretch of time, reproduce the performance of an agent with representations, but that doesn’t imply that there is something this seems like to it, any more than programming my computer to print ‘I am conscious’ means that it actually is. (I suppose you don’t hold that it does, do you?)
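
    Just to spell out the lookup-table point (a throwaway sketch of my own; nothing hangs on the details), the table reproduces the agent’s performance over a finite range while computing nothing at all:

    ```python
    # Record a computing agent's behaviour over a finite range, then replay it
    # from a plain table; outwardly the performance is the same.

    def agent(a, b):
        return a + b                     # stand-in for a system that works it out

    table = {(a, b): agent(a, b) for a in range(10) for b in range(10)}

    def lookup_agent(a, b):
        return table[(a, b)]             # same answers, no computation inside

    assert all(lookup_agent(a, b) == agent(a, b)
               for a in range(10) for b in range(10))
    ```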

    Can you put it into words what exactly it is about how intentions seem to you that cannot be reconciled with the structural account?

    I thought I had done that with the example of the boxes versus the abstract structure. My intentions seem to me as if I have access to some concrete object that is part of an aboutness-relation, i.e. that is about some other object by means of embodying a correlation—by virtue of me having access to this intentional object, I see my mental state as referring or pointing to something not intrinsic to it.

    However, with only structure, I don’t have any intentional object—I merely have an empty relation, which would need to be embodied by something concrete, since it can’t yield this concreteness itself (as you by now seem to agree). Simply knowing that correlation does not tell me anything—simply knowing that if there is a red ball in box A, then there also is a red ball in box B, and if there is a green ball in box B, there is also a green ball in box A, does not tell me anything about the contents of either box. I need to be able to look into one of the boxes, find a concrete object to anchor the reference; only then do I have access to a genuine reference—e.g. the green ball in box A, which refers to the green ball in box B.

    The abstract structure alone only tells me that if there is a green ball in box A, then there is a green ball in box B (and likewise for the red ball); but we need to cash out on the conditional to find out what is being represented, that is, we need to supply the condition, find out which is actual; otherwise, you just have the unfulfilled conditional statement.
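
    If it helps, here is the same point as a toy sketch (mine, purely illustrative): the bare correlation leaves the contents open, and only looking into one box cashes out the conditional.

    ```python
    colours = ["red", "green"]
    worlds = [(a, b) for a in colours for b in colours]

    # The abstract structure alone: the contents of the two boxes are correlated.
    correlated = [w for w in worlds if w[0] == w[1]]
    print(correlated)    # [('red', 'red'), ('green', 'green')] -- still undetermined

    # Looking into box A supplies the condition and anchors the reference to box B.
    observed_a = "green"
    print([w for w in correlated if w[0] == observed_a])    # [('green', 'green')]
    ```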

    Going back to the Scientia Salon article, a simulation of you would profess the same beliefs as you do.

    Of course. The question is: does it actually? My computer can ‘profess’ the belief ‘I am conscious’ by simply printing the sentence on the screen; does that mean it is conscious?

    So either you must accept the possibility that you yourself are not intentional or there exists a case where adhering to reasoning such as yours would lead one astray.

    But on my account, while my simulation and I are indistinguishable from a third-person point of view, that’s not all there is; there is a difference in the first-personal account. My unfortunate simulation would lack this perspective, but I possess it. Otherwise, if you hold that the third-personal account tells all, then it would seem you’re also committed to my computer being conscious when it prints out the statement that it is.

    And of course it’s possible that my views may be wrong; you don’t need to invoke my simulated, deceived double for this. For instance, if you could show that I’m inhabiting a simulation, I would be convinced; likewise, if you’d produce some account of intentionality the way I seem to have it in structural terms.

    So I do readily accept the possibility that I don’t have any intentionality in my sense; but if so, then something needs to take its place, to account for how things seem to me.

    I deny there *is* any work to be done beyond understanding the functional operation of the brain. The idea is that there *is* no subjective, phenomenal experience independent of this function.

    And that may well be the case. But the simple denial, of course, doesn’t amount to much. It’s all too often the case that eliminativist accounts content themselves with giving an account that pertains to the supposed fundamental entities, but actually, that’s merely when the real work starts.

    Consider an eliminativist about arms. It’s a reasonable stance to take—there are, after all, no arms in the world. There are muscle cells, bones, nerves, blood vessels and so on. But the mere statement of those facts does not explain the ‘arminess’ that, for instance, allows me to grasp things, or throw them, and so on. To explain that, one has to produce an account of how muscle cells, nerves, bones etc. work together in order to fulfill the role that arms seem to fulfill to the naif.

    And this is what so far seems to be missing. You assert that there are no real intentions, but you also seem unable or unwilling to propose an account of how what you want to replace them with yields the way my mental states seem to me. You wish to reduce things to third-personal accounts, but the world as experienced by me does not merely contain third-personal accounts, but also a first-personal one. Now, this might be reducible to third-personal terms, but if so, I don’t have the foggiest idea how—the more I probe, the more you seem to try and deny that there’s anything worth talking about. But again, flat denial doesn’t explain anything.

    I mean, this is how hypothesis formation works. You try to develop a self-consistent story that is not contradicted by the evidence to explain some phenomenon.

    But your story is logically devoid of content. It requires its own correctness to even be a meaningful story—you start with the assumption that X-detection algorithms exist and that you possess one; based on this, you then conclude that X-detection algorithms exist. This only tells us that if X-detection algorithms exist, then X-detection algorithms exist—which is trivially true. But it does not get us any further as regards the question of whether X-detection algorithms exist (they do, if they do, but that much was clear beforehand).

    It asserts that there is a difference between the pseudo-intentions of a thermostat and those of a human, because the former are meaningless and attributed while the latter are genuine and meaningful and objective. All the attempts you have made to draw distinctions only work if we accept this basic premise.

    No. My argument is based on two points: 1) that my own intentionality seems to be in terms of concrete intentional objects bearing reference to their referents (a piece of subjective data), and 2) that structure does not yield concrete objects and hence, is insufficient to underwrite intentionality as it appears to me.

    I’ve further argued for the second point by providing an example of what I mean by a genuine reference—as given by the two boxes—which embodies an abstract structure—a correlation—but is not determined by it, hence demonstrating that structure does not suffice to determine concrete objects. Nowhere does this assume that structure doesn’t suffice for intentionality, but it yields this conclusion from independent premises.

    You can meaningfully challenge these points in several ways. You can try to argue that 1) is mistaken by, e.g., providing an argument that I am mistaken about my subjective experience, that I am misinterpreting my subjective data; but then, you must produce an alternative, viable way to account for it. You can challenge 2) by showing how such concrete objects after all can emerge from structure (though by now it seems that you don’t think so yourself).

    Thus, there are ways for my position to be wrong, and hence, it must have some meaningful content—it serves to restrict the possibilities, so if the actual mechanism of intentionality is not within this restricted set, then my way of accounting for it is wrong. But however intentionality actually works, ‘if X-detector algorithms exist, X-detector algorithms exist’ will always be true, but vacuous.

    I’m not sure if you’ve mixed things up deliberately to make a point or if you just made a mistake (I assume the latter).

    Yes, I think I mixed them up.

    A means B to C, but if it’s an internal symbol, C can’t see it and recognise that it means B.

    I’m not really sure I get what you mean by not being able to see the internal symbols. It would be more natural to me to say that all we can ever see are the internal symbols, and never get at the things in the real world—all we have is access to the phenomena, but not to the things in themselves, to be a little Kantian about it.

    And I really really wish you could explain how.

    It’s not for lack of trying… Anyway, the abstract structure of the thermostat applies to all correlated systems. OK? Hence there is one kind of underdetermination: the structure doesn’t single out the system.

    Then, we have a concrete, correlated system—the boxes. There, knowing one half of it, and the structure, doesn’t single out the second half, since we don’t have access to the correlation—by virtue of not having access to the things in the world, the noumena, what have you. But we have something that we have access to—the ‘visible half’ of the correlated system—which we can use as a stand-in for the other half, provided the correlation holds true. In particular, we can use lots of such correlated pairs to build a representation of the external world on the inside, where structural relationships between our intentional objects mirror structural relationships between real-world objects. This is how we get a picture of the world.

    But if all we had is the structure, then again we only have an empty paint-by-numbers picture, or a grid of pixels, a string of zeros and ones which, using different codes for colours, might depict all manner of things—a horse, the love of your life, your childhood home, etc. The purpose of the intentional objects is to colour in the picture; they are what you get if you know the code.
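
    Here is the paint-by-numbers point in miniature (again, a toy of my own devising): the very same string of bits ‘depicts’ different things under different codes, and the bits themselves settle nothing.

    ```python
    bits = ["00", "01", "10", "11"]

    code_1 = {"00": "black", "01": "white", "10": "red",   "11": "green"}
    code_2 = {"00": "sky",   "01": "grass", "10": "horse", "11": "house"}

    picture_1 = [code_1[b] for b in bits]   # ['black', 'white', 'red', 'green']
    picture_2 = [code_2[b] for b in bits]   # ['sky', 'grass', 'horse', 'house']

    # Same structure, two entirely different pictures; only the code -- which is
    # not itself part of the bit string -- decides between them.
    assert picture_1 != picture_2
    ```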

    To me, the distinction couldn’t be clearer, and I’m at a loss as to how to get it across in a better way. Structure completely underdetermines the system it is the structure of; and while knowing one side of a correlated pair may still underdetermine the other, it nevertheless provides us with reference to it.

    Do you really not see any distinction between the concrete balls in boxes, and the abstract structure they embody?

    The proposition is, approximately, “that which causes the thermostat to change state”.

    You don’t have access to whether the thermostat changes its state. You know that at one point, its state gets mapped to 00, and at another, it gets mapped to 11—but this doesn’t allow you to infer that any state change has taken place; it could merely be the mapping. Both 00 and 11 say the same thing: there is a correlation. Nothing else is known. It’s exactly the stone argument again.
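
    One more toy sketch (mine, with no pretension to anything beyond illustration): the read-out labels can change either because the physical state changed or because only the mapping changed, and the labels alone cannot tell the two apart.

    ```python
    map_1 = {"s1": "00", "s2": "11"}
    map_2 = {"s1": "11", "s2": "00"}   # same physical states, labels swapped

    # History 1: the state changes, the mapping stays fixed.
    state_changes   = [("s1", map_1), ("s2", map_1)]
    # History 2: the state stays fixed, the mapping changes.
    mapping_changes = [("s1", map_1), ("s1", map_2)]

    def read_out(history):
        return [mapping[state] for state, mapping in history]

    # Both histories produce exactly the same labels: 00 then 11.
    assert read_out(state_changes) == read_out(mapping_changes) == ["00", "11"]
    ```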

    True. Any structure could supervene on any object. I don’t think this is a problem for me? Is it?

    Well, it’s Newman’s problem—which you originally seemed to think needed a response, but which it seems you have since decided to just accept. I brought up the analogy to a ciphertext, which you wanted to reject by appealing to the dynamic nature of the computation; but using the stone argument, we now see (and you now seem to agree) that the analogy was apt, since, as you say, any structure can supervene on any object. Then, we are in fact faced with the same problem as analyzing a ciphertext (which is purely structural: scratch-marks on paper) without knowledge of the key.

    I’m somewhat puzzled by the fact that you now seem to have no problem admitting this, by the way. Earlier on, I would have thought it to be fatal to your position, and you went to some lengths to try and get past this argument. But now, you seem resigned to simply hold that there’s actually no meaning at all, and what I’m writing right here is just some arbitrary assortment of pixels on a screen; that definitions are circular, and that somehow, that’s OK (which to me seems so impoverished a picture that I’d reject it on these grounds alone).

    But what would you do with a wikipedia that’s nothing but links from one page to another? How would you use it to look something up? What could you learn from it?

    I imagine you’d like to say that you could get at the structural relationships between the pages, but even this depends on your pre-fabricated intentionality: in order to get some picture of these relationships, you must represent them somehow in your own mind. And if you did this merely in terms of structure, then the problem iterates—the structure you hold in your mind doesn’t single out the structure of the wikipedia pages. Only if there were some concrete objects embodying that structure would this be possible; but this is exactly what you deny.

    If all the relevant facts are only what is invariant under isomorphism, then they do capture all the relevant facts.

    But it seems obvious that this isn’t the case (and again, I just want to point out that this stance essentially amounts to accepting Newman’s conclusion). Take the books and the ancestors: what’s invariant for those is the relationships in which they stand; their structures are isomorphic. But my grandfather wasn’t a book.

    But this is not so for our own internal abstract structures which are already in situ and need no interpretation.

    I still don’t understand what you mean by internal abstract structures that don’t need an interpretation. This again seems to me to be a point where somehow, something happens: we have some internal structure, and since those are our structure, well, they then just give rise to that which we seem to experience, stop asking questions.

    But I can’t stop: this is the part where the story gets interesting. So tell me, how do these already interpreted structures work? What makes them generate what seems like intentional objects to their possessors? How are they instantiated? How do you construct a system that has an already interpreted structure at hand?

    If it doesn’t then it’s not a thermostat.

    That’s just the thing: a system having access only to structure doesn’t know whether it’s a thermostat, a set of boxes, a lamp and switch, or literally infinitely many other possibilities.

  84. Hi Jochen,

    I think we’re more or less at an impasse. I don’t know how to explain my views any more clearly than I have, and I think you have the same problem. We’re shouting across a chasm. I just don’t see how to get there from here.

    Nevertheless, let’s continue.

    > Well, it would not seem like anything.

    OK, so your problem is not “Why do my intentions seem like *this*” but “Why does anything seem like anything?”

    To which I propose the hypothesis that seeming is just functional seeming*. Computers can be deceived*, and so it’s not so hard to see that it is meaningful to say that something can seem* a certain way to a computer, even though it may not necessarily be so.

    Things seem the way the brain represents them. The way your brain represents your *representations* to you does not correspond well to how your brain represents the objects of your representation. You have no access to the fact that it’s all structure; to you your representations seem immediate and direct, because that is how they are represented to you.

    > I thought I had done that with the example of the boxes versus the abstract structure.

    I’m sort of lost with the example of the balls in the boxes. I still don’t really see what you’re getting at. Is it that you have a physical concrete object in one box to stand as a proxy for another? So the label is a physical object as opposed to just structure?

    I think the confusion is between the idea of using a bare structure as an external thing and using it as an internal thing. Again, the structure in your head does not need to be interpreted, so trying to imagine it as if it were on a page will lead you astray. Your thinking is the processing of information. That information is processed in a certain way because of how the structures in your brain are causally connected. The view is that you are no different from a simulated you which has analogous structures and processes information in much the same way, schematically. It doesn’t seem to you that this is so any more than it would seem so to a simulated you. The apparent concreteness of your references is just because you don’t have conscious access to the underlying structures and abstract symbols that allow your consciousness to emerge, any more than the simulated you would have access to the code of the algorithm sustaining it.

    In the case of the simulated you, it is all structure. If you could talk to it, that simulated you would ask you for an account of how its qualia and its intentions come about if it’s really a computer program. Suppose you had to convince it of your view for some reason. How would you answer it? Because that’s the situation I’m in. I guess you would tell it it doesn’t really have qualia or intentions, even though it seems* to it that it does. You would have to try to convince it that how things seem to it is some kind of illusion — if that even makes sense, for actually nothing seems like anything to it.

    For me, this illustrates the problem with your view. You require the possibility that an AI could be deceived* by a seeming* into thinking that things seem to it. But of course if an AI can be deceived by such a seeming, then things can seem to it. Which means it isn’t deceived at all. And neither are we! Because having intentions is just having functional intentions. Seeming* is seeming.

    > My computer can ‘profess’ the belief ‘I am conscious’ by simply printing the sentence on the screen; does that mean it is conscious?

    No. Even from a third-person perspective, I can see that it isn’t, because I can figure out how it works and see that it does not perform any information processing analogous to a human’s. My view is that human-style consciousness is associated with human-style information processing, so any system which doesn’t process information like a human is not conscious like a human. Your program, for instance, cannot be quizzed on what it means by that statement, and cannot be engaged in a debate such as we are having on the nature of consciousness.

    > My unfortunate simulation would lack this perspective, but I possess it.

    However your unfortunate simulation believes it does not lack this perspective. So it is possible to be wrong about possessing the perspective. So it is possible that you are wrong. Indeed, whether or not you are truly an intentional system, you would believe yourself to have the perspective of an intentional system. So your belief that you are intentional (or more intentional than simulated you) is no evidence at all that you actually are.

    Now, of course this argument only works if beliefs* are beliefs. But, likewise, to insist that you can know that you have the perspective presupposes that your view is correct and that it is possible to know such a thing. As I said on Scientia Salon, we have two self-consistent views. But the fact that my view is compatible with the evidence shows there is no evidence to support your view over mine, which means there is no fatal flaw in my view as you seem to think there is.

    > but if so, then something needs to take its place, to account for how things seem to me.

    Right. And functional intentionality (seeming*, believing* and so on) is what you are looking for. What you actually need to be persuaded of is not the possibility that you are wrong but that this is an acceptable explanation and that it doesn’t omit anything. That is what you would need to persuade your simulated self of.

    > You assert that there are no real intentions, but you also seem unable or unwilling to propose an account of how what you want to replace them with yields what my mental states seem to me.

    Functional intentions. They don’t seem that way to you because you don’t have access to how they work.

    > It requires its own correctness to even be a meaningful story

    As does yours! If your story is not correct, then your asking me to explain how your real beliefs can be no more than the pseudobeliefs of a computer doesn’t make any sense. It’s like disbelief that mere chemistry could give rise to life without a soul. Where does the spark of life fit into the story? What replaces it? For the question to be meaningful requires that the concept of a spark of life be valid, but there is no such thing as a clear unambiguous distinction between living and non-living and there is no such thing as a clear unambiguous distinction between intentional and non-intentional.

    > You can try to argue that 1) is mistaken by, e.g., providing an argument that I am mistaken about my subjective experience, that I am misinterpreting my subjective data; but then, you must produce an alternative, viable way to account for it.

    Right. So, the alternative, viable way to account for it is however you would explain things to your simulated self, which you believe is mistaken about its subjective experience.

    If you say to me that you don’t think it is mistaken, you don’t think it has subjective experience at all, then I’ll say the same to you. You don’t have any subjective experience at all (the way you conceive of subjective experience at least, because subjective experience as you conceive of it doesn’t exist). You might flatly deny this, saying something along the lines that your subjective experience is the *one* thing you can be sure of, but then your simulated self would say the same thing and you would be unmoved (as am I).

    That’s not to say that I don’t think there is an account of subjective experience that does make sense, but that account would include the experience of simulated people and would identify intentions with their functional roles.

    > You can challenge 2) by showing how such concrete objects after all can emerge from structure

    I don’t even understand the question, to be honest. The balls and the boxes analogy remains confusing to me. I don’t get it.

    > I’m not really sure I get what you mean by not being able to see the internal symbols.

    Another example of why this stuff is hard to talk about. Understanding comes about by the interaction of the symbols (as understanding* comes about in a computer through the interaction of its symbols). But what we understand is not the symbols and their interaction but some abstract model supervening on them. A computer program may understand* how to buy and sell shares on the stock market but it does not typically understand* its own operation. Up until quite recently, human beings could understand all kinds of things but we had next to no idea at all how our minds worked at a physical level. So you see via the symbols, you don’t see the symbols themselves. Seeing the symbols themselves would be something like knowing by introspection which neurons were firing. The fact that we can’t do this goes some way towards explaining the mismatch between how things are and how they seem.

    But, because these things are hard to talk about, and can be talked about on multiple levels, I can’t guarantee that I will be consistent in my descriptions. I may slip into saying that we see symbols when what I mean is we see an abstract model or representation of an object in the real world.

    > all we have is access to the phenomena, but not to the things in themselves, to be a little Kantian about it.

    Agreed.

    > Anyway, the abstract structure of the thermostat applies to all correlated systems. OK?

    I think your point here is that we can imagine wildly disparate physical systems which all seem the same to the thermostat, so the thermostat’s structure underdetermines the physical system. This is easy to do for the structure of the thermostat — trivial really — because it is so simple, but not so easy to do for a human mind, because it is complex. Nevertheless it is not impossible. You could be a brain in a vat, in which case your intentions will not pick out the same referents as they would for an embodied brain in a physical world. You could be a simulation yourself, in which case your mind would be supervening on a wildly different substrate, so the structure of your mind underdetermines the structure of your physical self (like a thermostat). You could exist in a universe where string theory is correct or a universe where some form of loop quantum gravity is correct (assuming both are consistent with observations so far). So your mental state underdetermines what you refer to and the physical situation in which you find yourself, so you are no different from the thermostat in this respect.

    > But if all we had is the structure, then again we only have an empty paint-by-numbers picture, or a grid of pixels, a string of zeros and ones which, using different codes for colours, might depict all manner of things

    The problem here is that you are making comparisons to marks on a page, to external symbols. These have to be interpreted and this interpretation can be difficult. But the structure in your head does not need to be interpreted. It is how you think, just as the structure in a computer program is how it thinks*. The two cases are different and this is leading you astray.

    > To me, the distinction couldn’t be clearer

    And it’s clear to me too, because you are comparing structure on a page with structure embedded in your mind. It’s night and day. You don’t program a computer by waving a printout of computer code in front of a webcam.

    > and while knowing one side of a correlated pair

    But in your example, you assume you have access to one side of the correlated pair. But you don’t, not really. The side you supposedly have access to is a ball, a physical object in the external world, to which we both agree you don’t have access. You only have access to a representation of that ball, which in my view is a structural thing and in your view is I-don’t-know-what. Again I think your intuitions are being led astray by trying to model internal representations as external representations. The two are completely different cases.

    > Do you really not see any distinction between the concrete balls in boxes, and the abstract structure it embodies?

    Well, ultimately I think the concrete balls in boxes *are* an abstract structure (since I think everything is, on the MUH). Do I see a distinction between this rather complex structure and the trivial structure of a simple correlation that can be projected onto it? Of course I do. Balls have properties such as roundness and greenness and mass and volume. A simple correlation doesn’t.

    But I think that if you were really able to capture all the properties of the balls and the boxes in an abstract structure, then I would not say there is an objective difference. I would however say there is a subjective difference since I am speaking as an observer embedded in the same larger structure (the universe) as the balls and boxes we call physical and so am causally connected to them in a way I am not to the balls and boxes we call abstract.

    > You don’t have access to whether the thermostat changes its state.

    Who doesn’t? I may be a bit lost here. I can see a thermostat changing its state when the temperature drops.

    > but using the stone argument, we now see (and you now seem to agree) that the analogy was apt,

    No, I don’t agree the analogy was apt. I’ll not dispute that any structure can supervene on any object (of a given cardinality of subunits, and with events causing other events and so on). Similarly I’ll not dispute that any appropriate object can subvene any given structure. You take this to mean that we have no way of interpreting the structure, when what it means is that there are innumerable ways of interpreting it. I’m quite happy with all of these interpretations (each of which corresponds to a different Platonic structure) being real.

    One of these interpretations in particular is likely to be more useful than the others in modelling the behaviour of the system and predicting what it will do. This is the interpretation Sergio is talking about and I won’t rehearse his argument here. The difference between me and him is only that I think the other interpretations exist also. But this interesting interpretation is the one we are talking about. The one which may be intentional and which we are most inclined to think of as the “mind” of that physical object, since it has a straightforward and consistent causal relationship with that object’s behaviour and its perceptions of the external world.

    This interpretation is not a ciphertext with no key because we can see how it is correlated with the physical events going on inside and outside the object.

    That’s the third person perspective answer. If I am such an abstract structure myself, then I am not a ciphertext to myself because I do not need to interpret my own structure. The structure is dynamic and evolving and changing its own state over time because of how different parts of it are causally connected. My thinking is the information processing carried out by this structure as it changes.

    > But now, you seem resigned to simply hold that there’s actually no meaning at all

    No, I’m not saying that. I’m saying there is no absolute reference. That all meaning is on some level ambiguous and open to interpretation.

    > that definitions are circular, and that somehow, that’s OK

    Whether it is OK depends on the context. If you’re describing a purely abstract structure that doesn’t refer to anything else, then the definitions of entities within that structure are necessarily circular — they can only be defined with respect to their relationships to other entities. This is not a problem in the study of abstract objects.

    It is a problem if you are trying to communicate some idea about the physical world to another human being who has no idea what you are talking about. In practice, this rarely arises, because human beings can usually hit upon some shared concept they both understand, and then build definitions on top of that. Still, if they were forced to define those shared concepts, and keep defining ad infinitum, circularity is inevitable. This is what I mean when I say all definitions are ultimately circular, and it is a reflection of the fact that ultimately the knowledge base of human beings is a self-contained abstract object like those I discussed in the last paragraph. Meanings are approximately fixed, but only loosely and not in any absolute sense, by correlations in interaction with the outside world.

    > the structure you hold in your mind doesn’t single out the structure of the wikipedia pages.

    Well, it does if it is isomorphic to it. It doesn’t single out that it is the structure of Wikipedia pages as opposed to TvTropes pages, but it does single out that structure qua structure. If the same structure could be mapped to TvTropes pages, then that is the same structure qua structure. It’s just realised in two different websites. Similarly, if there are two objects I call XO-2b then your intention towards XO-2b doesn’t pick out one of them in particular.

    > But my grandfather wasn’t a book.

    Which is only relevant if you care whether your grandfather was a book and irrelevant if all we’re talking about is a structure that can be embodied by people or books. If you want to distinguish between books and people you need more detail in the structure. And again, an analogy to a static unchanging graph of relations and relata is not what we are talking about. We’re talking about a structure that embodies causation, that can process information in reliable and directed ways.

    > I still don’t understand what you mean by internal abstract structures that don’t need an interpretation.

    I’ve had another stab at explaining it earlier in this comment. I’ll spare you the tedium of repeating myself.

    > how do these already interpreted structures work?

    As software does. As a simulated you would.

    > What makes them generate what seems like intentional objects to their possessors?

    They don’t really generate anything. If they seem like they do it is only because their possessors don’t have access to what is actually going on. It doesn’t seem like your consciousness is just the action of a bunch of neurons because you can’t directly perceive that it is. The same way seeming always works. It doesn’t seem that your desk is made of atoms which are mostly empty space because you can’t perceive that it is. It doesn’t seem that the earth is spinning at an enormous pace or even that it is spherical because you can’t perceive that it is. It doesn’t seem that the many worlds interpretation is true (if it is) because you can’t perceive that it is.

    But really, you can answer your question yourself, if you pretend you were asked by your simulated self.

    > How are they instantiated? How do you construct a system that has an already interpreted structure at hand?

    As above for both questions. Pretend you are asked those questions by your simulated self. We understand how all this works for software. My view is that we are no different.

  85. I don’t know how to explain my views any clearer than I have, and I think you have the same problem. We’re shouting across a chasm. I just don’t see how to get to there from here.

    I think you’ve made yourself quite commendably clear—and after all, if you recall, I used to stand on your side of the chasm. If you read the blog post I linked to earlier, you’ll find many of the same points you make, even made in quite similar ways; and if you read some other articles on there, I think you’ll even recognize some kinship in general philosophy (although I rather preferred Jürgen Schmidhuber’s formulation to Tegmark’s MUH).

    What happened then is essentially that I realized that structure underdetermines content, while our minds (seem to) possess definite contents, which aren’t reducible to structure (I’m growing tired of adding ceaseless qualifiers—and they don’t seem to do much, either—so I’m going to leave them out from now on, with the understanding that any statement I make about the contents of the mind is to be read as including a disclaimer that while this is how things seem to the mind’s owner, I recognize the possibility that this appearance may be deceiving).

    Computers can be deceived*, and so it’s not so hard to see that it is meaningful to say that something can seem* a certain way to a computer, even though it may not necessarily be so.

    Computers can only be deceived if they are capable of holding beliefs; they are only capable of holding beliefs, if they are capable of intentionality. So they can’t be deceived about possessing intentionality—the mere fact of being deceived (or ‘deceivable’) implies intentionality. Now, the question is whether that intentionality is accountable for in terms of structure—as you hold—or whether it isn’t—as I believe. If it is, then the computer isn’t deceived when thinking that it possesses intentional content; and if it’s not, then the computer can’t be deceived into thinking this. So anchoring this account on such deception doesn’t get you anywhere.

    > I’m sort of lost with the example of the balls in the boxes. I still don’t really see what you’re getting at. Is it that you have a physical concrete object in one box to stand as a proxy for another? So the label is a physical object as opposed to just structure?

    No. The second ball stands for my mental content, for the intentional object that is (by whatever means; hypotheses non fingo) present in and to my introspection. It’s the apple I see when I see an apple.

    Let’s again try and clean up some excess verbiage. We both agree that the most parsimonious explanation of our internal phenomenology is that it’s in fact caused typically by external objects. So I’ll dispense for now with talk about whether a given intentional object accurately refers or not.

    Instead, let’s paint a picture. An object induces in my sensory organs some form of state change; by some means, this causes a representation of that object to be present before my inner eye (although it is misleading to speak that way). I have an intentional object, which, by virtue of being caused by the external object, provides a reference to it: since I do possess this intentional object, I know of the presence of the external object (to the extent that my senses aren’t deceived, etc etc.).

    This is what the ball I have access to stands for in my analogy, and it is what structure fails to provide—with mere structure, you lack this sort of definite intentional object, having instead at best access to questions of cardinality (as that is all that structure can settle).

    Let’s illustrate this another way. René Magritte, in his usual perceptive and highly insightful way, illustrated the situation described above in his painting ‘The Human Condition’. There, the canvas—nearly invisible at first sight, illustrating the deceptive transparency of human mental states—represents our interior picture of the world, although of course we must be careful not to take the analogy too literally and posit some ‘inner observer’ that perceives the interior picture, on pain of incurring the homunculus fallacy. Instead, think of the picture as ‘already perceived’, without, for the moment, worrying about how that comes about.

    Contrast this with this mock-up of mine, ‘The Machine Condition’. This is the corresponding picture if all you have access to is mere structure, mere difference, mere relation. These are the bits coursing through a machine brain, the pixel patterns input to it by its camera-eyes, or whatever else you wish. As above, we of course don’t imagine any further internal interpreter ‘looking at’ the picture, as we would fall into the homuncular trap; rather, we imagine this structure as immediately present to the agent.

    Such an agent could act in any way that a conscious human being could, propound the same propositions, flee in appropriate situations, etc.—we need only imagine a giant lookup table connecting the appropriate inputs to appropriate outputs. But nevertheless, from the first-person perspective, there is a clear difference—the structure it has access to underdetermines its content, and thus, there is no internal picture of the world, nothing that could be singled out as referencing anything ‘out there’.

    Now, of course, there is the possibility that this is how things really are, that underneath it all, we are merely in the machine condition; but then, as I think I’ve said before, we need to account for the fact that things seem like anything to us—that we can build what appears to be a definite inner picture from structure that doesn’t determine it (and, as seems clear to me, can’t do so). Both my double and I would be completely convinced by such a demonstration.

    In order to make the possibility of this plausible, you keep pointing towards more structure, that somehow, adding on ever more structural facts will create a net tight enough to only catch some definite object; and if you could demonstrate that this is the case, I’d be forced to agree with your proposal.

    But to me, quite the opposite seems to be the outcome: adding on structure just increases, not reduces, the arbitrariness. Consider again the thermostat. You might add structure, for example, by adding another unspecified measurement device that reacts to the whole state of the thermostat—say, if it is in 00, the measurement device is in state 1, and if the thermostat’s state is 11, then the measurement device reports 0. Have we now gained any definiteness?

    And to me, it’s clear that we haven’t. Where earlier, we had a correlation between two parts, we now have a correlation between three; the total system can be in the state 001 or 110, which allows for yet more freedom of interpretation. In fact, we can always interpret structure by means of a balls-in-boxes model: for any set of correlations, there will always be a set of balls in boxes that embodies this correlation. Hence, whatever your intended interpretation, I can always consider the structure to be ‘about’ balls in boxes instead.

    Furthermore, you pointed to dynamics to maybe single out one thing (or some small class of things) from the plethora of possibilities. That in some way, temporal correlation helps where property correlations fail. But in fact, this is again just additional structure: take all the states your system traverses, and put them side by side; then, the causal correlations can be taken care of in just the same way as any other correlation can. I can, once again, find a ball-model that embodies this; in fact, I can even do that in the dynamical case: for any state at time t, there is a configuration of balls that embodies this state (I just code a complete description of the state in binary, and put a red ball in a box wherever that code digit is 1, and a green ball wherever the digit is 0). Then, I just emulate the temporal evolution by changing the configuration of balls appropriately.

    This is, again, nothing but the stone argument.
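
    To make the re-coding move above concrete, here is a minimal sketch (my own illustration, not anything from the original discussion; the helper name and the toy two-bit state trace are invented for the example): any state, written out in binary, maps to a balls-in-boxes configuration, and any state trace maps to a sequence of such configurations.

    ```python
    # Minimal sketch of the re-coding move: a binary description of a state
    # becomes a row of boxes holding red balls (for 1-digits) and green balls
    # (for 0-digits); temporal evolution becomes updating the boxes.

    def state_to_balls(state_bits: str) -> list[str]:
        """Map a binary state description to a balls-in-boxes configuration."""
        return ["red" if bit == "1" else "green" for bit in state_bits]

    # A toy trajectory of some system (here, two perfectly correlated bits,
    # as in the thermostat example above)...
    trajectory = ["00", "11", "00", "11"]

    # ...and the balls-in-boxes model that embodies the same structure, step by step.
    for t, bits in enumerate(trajectory):
        print(f"t={t}: state {bits} ~ boxes {state_to_balls(bits)}")
    ```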

    Finally, even though you haven’t been very explicit about it, I suppose you could want to try and anchor reference by the environment’s responses to the system’s actions. Doing A reliably produces result B, that sort of thing. But again, this does not help pare down the possibilities—ultimately, such exploration can be used to fashion the lookup table dictating the system’s actions (which itself is of course purely structural), but it does nothing to help pin down what each entry of the lookup table means—this is still nothing but some uninterpreted binary code.

    So, in a nutshell: structure never pins down an internal picture; rather, it leaves that picture completely undetermined (it could always be considered merely a balls-and-boxes model). But we do have a definite picture: I do not experience balls and boxes, but rather, a cup on my desk. Hence, the structural picture is inadequate to account for our first-personal view on the world.

    Again, you are free to convince me of the opposite: find some case where structure actually singles out what it pertains to, and actually provides a definite picture.

    > The apparent concreteness of your references is just because you don’t have conscious access to the underlying structures and abstract symbols that allow your consciousness to emerge, any more than the simulated you would have access to the code of the algorithm sustaining it.

    This is exactly the assertion I would like you to demonstrate: how does not having access to this structure yield a definite picture? How is this picture built from the underlying structure?

    > Even from a third person perspective, I can see that it isn’t, because I can figure out how it works and see that it does not perform any information processing analogous to a human.

    Except of course you can’t, by the stone argument: no particular computation is singled out by the physical dynamics.

    > Your program for instance cannot be quizzed for instance on what it means by that statement, and cannot be engaged in a debate such as we are having on the nature of consciousness.

    A lookup table could; is it conscious?

    > As I said on Scientia Salon, we have two self-consistent views. But the fact that my view is compatible with the evidence shows there is no evidence to support your view over mine,

    Well, I haven’t really been proposing a view, I have merely challenged yours; and the challenge is precisely that it doesn’t seem to be consistent with the (first-personal) evidence we have. (And you have yet to propose an account of how it could be made to be.)

    > If your story is not correct, then your asking me to explain how your real beliefs can be no more than the pseudobeliefs of a computer doesn’t make any sense.

    But that’s not what I’m asking (as I have been at pains to point out time and again). I’m not asking you to explain my ‘real’ beliefs; I’m asking you how whatever beliefs I seem to have could conceivably be accounted for in terms of your pseudobeliefs. I’m taking the first-personal data that I have, and asking you how your view accounts for this data, which to me it doesn’t seem to. (And as per the above, I strongly suppose it can’t.) There’s no circularity in doing so.

    > Where does the spark of life fit into the story? What replaces it? For the question to be meaningful requires that the concept of a spark of life be valid,

    Again (and hopefully for the last time), I’m not asking about the spark of life, I’m merely asking about its phenomenology—i.e. how your proposed fundamental constituents could give rise to walking, talking, reproducing, entropy-minimizing (or however you want to define life) systems, because there are characteristics to life that I don’t see in your account (to stay with the analogy). So I observe lifeforms to do X, but I don’t see how your account could enable them to do X.

    > So, the alternative, viable way to account for it is however you would explain things to your simulated self, which you believe is mistaken about its subjective experience.

    Well, I don’t believe that such a simulated self having no experience is possible (or indeed, even coherent). On my account, it would have just the same experience as I do, however, this experience does not arise from computation. So it’s kind of hard for me to engage with this hypothetical, because I believe it’s fundamentally misguided from the start.

    > This is easy to do for the structure of the thermostat — trivial really — because it is so simple, but not so easy to do for a human mind, because it is complex.

    See, the exact opposite is the case: the more complex the structure, the more systems there are that embody it. It’s a simple combinatorial fact: given any set of things of the right cardinality, one can imbue it with the same structure as that of the brain. Ultimately this follows from the existence of the powerset, which guarantees that we can put any elements into a correspondence that exactly mirrors the way they correspond in the brain. Again, I can propose a balls-and-boxes model; you’ll need many balls and boxes, but it’s trivially possible, as in the sketch below.
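
    As a toy illustration of this combinatorial point (the sets and the relation below are invented for the example; nothing here is meant to model an actual brain): any relation on one set can be copied onto any other set of the same cardinality via a bijection, so the second set embodies exactly the same structure.

    ```python
    # Toy illustration: a relation on one set is pushed through a bijection
    # onto another set of the same cardinality; the image relation has
    # exactly the same structure as the original.

    nodes = ["n1", "n2", "n3"]
    links = {("n1", "n2"), ("n2", "n3")}             # some relational structure

    balls = ["red ball", "green ball", "blue ball"]  # any set of equal cardinality
    relabel = dict(zip(nodes, balls))                # an arbitrary bijection

    ball_links = {(relabel[a], relabel[b]) for (a, b) in links}
    print(ball_links)
    # e.g. {('red ball', 'green ball'), ('green ball', 'blue ball')}: the same
    # structure, now embodied by balls instead of nodes (set print order may vary).
    ```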

    You keep thinking that by compounding structure with more structure, things will somehow get more definite; but if anything, the opposite is the case—the options for arbitrariness only increase.

    > These have to be interpreted and this interpretation can be difficult. But the structure in your head does not need to be interpreted.

    Well, whatever may go on, what you certainly need is an operation that takes the structure in my head, and from it, yields the phenomenology I experience. In some way, it must be possible to take that structure, and derive what subjective experience it leads to from it. And the problem is, the structure underdetermines this phenomenology, so such an operation doesn’t exist.

    > But in your example, you assume you have access to one side of the correlated pair. But you don’t, not really.

    I don’t assume this; it’s what I find to be the case, by introspection. Now, as I readily acknowledge, this introspection may be misguided; but this doesn’t change the explanatory burden—you still have to account for how this misguided introspection emerges from your proposal. Again, think about the arm-eliminativist.

    > I may be a bit lost here. I can see a thermostat changing its state when the temperature drops.

    You can, yes, because you’re an intentional system; but the thermostat can’t, nor can any ‘intentional’ system supervening on it, if it has access only to the structure.

    > I’ll not dispute that any structure can supervene on any object (of a given cardinality of subunits, and with events causing other events and so on). Similarly I’ll not dispute that any appropriate object can subvene any given structure.

    This is in flat contradiction with several things you’ve said earlier; take merely the assertion that while it’s trivial to find systems that embody the structure of the thermostat, it’s harder to do so for the brain, because it is more complex—in fact, as you acknowledge here, it’s just as trivial.

    > I’m quite happy with all of these interpretations (each of which corresponds to a different Platonic structure) being real.

    But the interpretations can’t be Platonic structures—if that were the case, then they would need interpretation themselves (they could be ball-and-box models, for instance), with the whole thing infinitely iterating.

    No, an interpretation of a structure is a concrete set of things embodying the structure, that is, things standing to one another in some relation—books, ancestors, etc. My challenge to your account is precisely that you can’t get to those from mere structure; so simply positing that you can in order to counter my challenge once again relies on assuming what you ought to be showing.

    Moreover, the assumption that every interpretation of some structure exists doesn’t actually net you anything, if those interpretations are themselves structures, since you already believe all structures exist—thus, believing in the existence of the interpretations doesn’t add anything to your ontology, and we could, instead of the to-be-interpreted structure, directly jump to the (structural) interpretation you think is somehow singled out by Sergio’s argument (although I have already pointed out that this singling-out can only work from the perspective of a third-person intentional system, and can’t be used to ground the interpretation in general, over in that thread).

    But then, we’d actually just have the same problem again: this structure, I can for instance interpret in terms of balls and boxes, hence showing that it doesn’t single out what we hoped it would after all.

    > If you’re describing a purely abstract structure that doesn’t refer to anything else, then the definitions of entities within that structure are necessarily circular — they can only be defined with respect to their relationships to other entities.

    This isn’t circular. In such cases, you start from a set of axioms, and from them, the properties of the abstract entities are derived. Thus, they’re completely well defined in terms of those axioms.

    > But I think that if you were really able to capture all the properties of the balls and the boxes in an abstract structure, then I would not say there is an objective difference.

    But this is the thing: you aren’t. Take any putative structurification of those two balls in their two boxes. You can write it down as a string of bits (probably a quite long one). Then, you can model this string of bits using as many balls in boxes; hence, the structure you proposed earlier does not single out the pair of balls.

    > If you want to distinguish between books and people you need more detail in the structure.

    Well, demonstrate this. Demonstrate how you can, using mere structure, describe something that is unambiguously a book, rather than, e.g., a set of balls in boxes.

    > I’ve had another stab at explaining it earlier in this comment. I’ll spare you the tedium of repeating myself.

    I’m not sure what you refer to; you’ve proposed a third-personal account that seems to widely miss the mark, but other than that, I only found this:

    > If I am such an abstract structure myself, then I am not a ciphertext to myself because I do not need to interpret my own structure. The structure is dynamic and evolving and changing its own state over time because of how different parts of it are causally connected.

    Which just seems to re-state the assertion, and otherwise just points to added complexity which doesn’t help.

    > And again, an analogy to a static unchanging graph of relations and relata is not what we are talking about. We’re talking about a structure that embodies causation, that can process information in reliable and directed ways.

    It’s unclear to me how you think this could help, and simultaneously believe the conclusion of the rock argument, which exactly says that even dynamics or causal connections don’t serve to narrow down the possibilities, but that this dynamical structure can be mapped to essentially any system at all.

    > It doesn’t seem like your consciousness is just the action of a bunch of neurons because you can’t directly perceive that it is. The same way seeming always works. It doesn’t seem that your desk is made of atoms which are mostly empty space because you can’t perceive that it is. It doesn’t seem that the earth is spinning at an enormous pace or even that it is spherical because you can’t perceive that it is.

    This illustrates my point well: in all these cases, how things seem can be derived from how they are; in other words, my desk seems exactly the way that a collection of atoms made of mostly empty space ought to seem. In fact, only realizing its nature in these terms actually explains what I’m seeing: its extendedness going back to the Pauli principle; its solidity going back to electromagnetic interactions between the shell-electrons of the atoms in my hand and the atoms in the desk; and so on. This is a very satisfying picture!

    In contrast, your construal of phenomenology in terms of structure seems to obfuscate, rather than explicate. Whereas I can clearly see how the way the desk seems to be flows from the way it is actually composed of atoms following quantum-mechanical rules, how it seems that there is intentionality or phenomenal experience does not seem to be clarified at all by your structuralist proposal. In fact, it makes things more puzzling: while it seems to me that I have wholly definite perceptions and intentional objects (even in the case of XO-2b: the intentional object is completely definite, even if that which it refers to may not be), this definiteness is almost by definition absent on a structural account; and no explanation of how it could arise from it so far seems to be forthcoming from you. Thus, what grounds could I have to consider this proposal?

    If somebody were to tell you, your desk is made from ether-waves, what would you ask? I suspect you would ask them how these ether-waves account for its properties, say, its solidity. Would you be satisfied with the answer that, well, your desk is just an illusion, and thus, so is its solidity? No, you would probably pull a Dr. Johnson and refute it thus *thumps fist upon table*. Because even if the desk is ether waves, even if its solidity is an illusion, the fact that you perceive it as if it is solid needs explanation—which it gets via electromagnetic interaction, even though atoms are not what one would call ‘solid’.

  86. Hi Jochen,

    You’re casting yourself in the role of my future self, having seen the error of my ways. That may be the case. Unfortunately, I am not quite willing to accept that the problem in reconciling our views is simply that I have not yet had the epiphany you are trying to help me toward.

    I feel like I understand your position and reject it anyway, just as you no doubt feel towards mine. From my point of view, you are asking questions which though superficially and intuitively reasonable are ultimately confused and meaningless, pseudo-questions founded on false assumptions that belong in the same category as “Which of our ancestors first had a soul?” or “How many grains of sand make a heap?” or “What are the necessary and sufficient conditions for something being alive as opposed to a mechanical self-replicator?”. From your point of view, I am mistaken because I have not yet realised that I cannot answer your questions, but from mine you are mistaken because you do not yet realise that your questions need no answers.

    > Computers can only be deceived if they are capable of holding beliefs

    Well, I said deceived*, not deceived. I thought you would understand that I’m talking about the kind of functional pseudo-intention we can impute from Dennett’s intentional stance or from the computational interpretation Sergio would advocate. So I lay it all out there in terms of pseudo-intentions, a parallel for everything you ask me to explain. And then I make the unintuitive leap that there is no difference between pseudo-intentions and intentions. If that leap is allowed (which you won’t allow), then all your questions are answered and there is nothing left unexplained. There is no evidence which contradicts the leap. The only thing holding you back from it (it seems to me) is that it is unintuitive. You insist that these are not true intentions. You insist that the references are not concrete, or are open, or whatever, but these are all bald assertions to my ears.

    > with mere structure, you lack this sort of definite intentional object, having instead at best access to questions of cardinality

    I’m going to keep disagreeing with this, I’m afraid. You can answer questions about that structure qua that structure. You can manipulate it in various ways and see what results you get. This is how we do mathematics. This is how computer programs work.

    The problem is that you’re apparently laser-focused on an idea of absolute reference I disagree with. I’m not saying we use structures just to represent the external world in some sort of passive marks-on-a-page way, I’m saying the entirely non-arbitrary causal links between nodes in the structure drive the way we think. Our thinking is the manipulation of information in very directed and specific ways by the dynamics of this structure. This happens in such a way as to enable us to interact with objects in the external world (much like a robot) and so for all intents and purposes we can be said to have references — but these references are not a matter for arbitrary interpretation (where Newman’s problem might come in), they are what would be projected onto us by an external observer from the Intentional Stance. We are not in a position of having to interpret our own mental representation (there is no homunculus) any more than a simulated person has to interpret her own computer code. We are not even directly aware that we have mental representations — as far as we are concerned we are just thinking about the objects of our thoughts directly as if by magic.

    To me, your analogy to Magritte nicely shows what I see as your confusion between internal and external representations. I get that this is supposed to be just an illustration, not to be taken too literally, but the problems with that illustration are just the problems I see with your skepticism of functionalism. You might as well have painted an fMRI scan instead of the ones and zeros. You’re confusing what a functional representation looks like from a third-person perspective (ones and zeros or an fMRI) with what it looks like from a first-person perspective (Magritte’s original painting). Just as we don’t recognise your image of ones and zeros as corresponding to a landscape, neither would a landscape-image-recognition algorithm, even one which uses precisely that sequence to represent the image internally.

    > we need only imagine a giant lookup table connecting the appropriate inputs to appropriate outputs.

    I’m sorry I forgot to address the point of the lookup table last time.

    To prod your intuitions into alignment with mine, some questions first. How would you produce such a lookup table? How would you define it without listing all the data points? If you were to list the data points, how would you choose what to include and what to exclude?

    It seems to me that in order to produce such a lookup table, or in order to define one, you need some sort of algorithm. That’s where you’ll find the intentions. In case you craft the lookup table manually, those intentions are your own, the lookup table being produced by the algorithm that is your mind.

    But this is not how most AI systems are implemented. We don’t usually script everything they can do in advance, because this is hopelessly impractical. Instead we write algorithms that effectively generate the lookup table in real time. Again, that’s where you find the intentions.
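
    A minimal sketch of that contrast (entirely my own toy example; the `respond` function, the thresholds, and the inputs are invented): the behaviour could be given by an explicit input-to-output table, but in practice the table is just a cache of what an algorithm does, filled in as needed.

    ```python
    # Toy contrast between an algorithm and the lookup table it generates
    # on the fly; behaviourally the two are indistinguishable.

    def respond(temperature: float) -> str:
        """A stand-in decision algorithm (purely illustrative)."""
        if temperature < 18.0:
            return "heater on"
        if temperature > 26.0:
            return "fan on"
        return "idle"

    lookup_table: dict[float, str] = {}  # the 'table', generated in real time

    def respond_via_table(temperature: float) -> str:
        """Memoised front-end: same outputs, but now read off a table."""
        if temperature not in lookup_table:
            lookup_table[temperature] = respond(temperature)
        return lookup_table[temperature]

    print(respond_via_table(15.0))  # heater on
    print(respond_via_table(30.0))  # fan on
    print(lookup_table)             # the part of the table built so far
    ```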

    I anticipate you might counter this by positing that the lookup table was randomly generated and it just happens to have intentional behaviour. But in positing that it happens to have intentional behaviour, you are distinguishing it from those that don’t. This discrimination, were it made any more concrete, would need a set of specific criteria. In that case, the intentions reside in those criteria.

    In any case, since I am a Platonist and hold that all structures exist regardless of explicit physical instantiation, I can simply say that the intentions exist in the algorithm that would have produced the lookup table, even if we can find no physical instantiation of that algorithm. That is the mind that corresponds to what the system is doing. Of course there may be more than one such mind, in which case they are all equally real as far as I am concerned.

    > the structure it has access to underdetermines its content

    So, no, even in a lookup table scenario, there is an implicit dynamic structure such as an information processing algorithm. It may underdetermine its references (since references are generally underdetermined, as we seem to agree at least in the case of XO-2b referring to both a planet and a mug), but it doesn’t underdetermine its content in my view, because it has no content beyond the structural. You think it does because you cannot see that all your knowledge is just structural, because it doesn’t look anything like patterns of neurons firing from a first-person perspective. This is simply because a representation of a pattern of neurons firing in order to represent a tree is not the same as a representation of a tree. Ceci n’est pas une pipe!

    > as I think I’ve said before, we need to account for the fact that things seem like anything to us

    Which I’ve done by proposing the hypothesis that seeming is no more than seeming*. You reject that hypothesis. Either way we’re just playing bald assertion tennis.

    > that we can build what appears to be a definite inner picture

    It appears to be definite because it represents itself as definite. Our representations present themselves to us as objects, not as representations of objects, because typically what we represent are not representations.

    > from structure that doesn’t determine it

    The content of our representation is not underdetermined because there is no content (beyond the structural). Just as you might tell a simulation of yourself.

    > adding on ever more structural facts will create a net tight enough to only catch some definite object

    Not really. I’m not sure you can ever pin it down to one object only. If space is infinite and the earth and its surroundings have a duplicate, then each of my intentions also picks out a duplicate. I’m saying that adding more detail narrows it down so it picks out fewer and fewer objects. There is no magic threshold whereby it becomes definite.

    You model this addition of detail by adding a new value to the thermostat which just bears a one to one relationship with the existing values. I don’t think this is really adding more details at all, because you still just have the same simple correlation you had before, only echoed in more than one physical place. The extra value you have added is redundant. It does nothing.

    More detail would be like having the thermostat detect the temperature at more than one threshold. Perhaps if the temperature is very high it activates a fan or air conditioning unit, in addition to activating a heater if the temperature is low. With more detail, the correlation of the state of the thermostat to the temperature in the room (and the interpretation that this is what the state of the thermostat is about) is therefore a little less arbitrary.
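
    For concreteness, here is a small sketch of the kind of extra detail meant here (the thresholds and the state encoding are illustrative assumptions, not anything fixed by the discussion): a thermostat whose state distinguishes more situations than a single on/off bit does.

    ```python
    # A two-threshold thermostat: its state is a pair (heater_on, fan_on),
    # which tracks the room temperature in more detail than a single bit.

    def thermostat_state(temperature: float,
                         low: float = 18.0,
                         high: float = 26.0) -> tuple[int, int]:
        """Return (heater_on, fan_on) for a given temperature reading."""
        heater_on = 1 if temperature < low else 0
        fan_on = 1 if temperature > high else 0
        return (heater_on, fan_on)

    for t in (12.0, 22.0, 31.0):
        print(t, thermostat_state(t))  # (1, 0), then (0, 0), then (0, 1)
    ```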

    > Then, I just emulate the temporal evolution by changing the configuration of balls appropriately.

    But your balls are not interacting with the environment. It is this interaction that for all practical purposes allows us to identify an intention or reference.

    If your balls are just scripted to change state in a particular way, what you have is in effect a lookup table and I would answer it in the same way I did that.

    > I can always consider the structure to be ‘about’ balls in boxes instead.

    And you can consider the structure in a brain to be ‘about’ neurons. You can apply whatever interpretation you like to whatever object you like. Each interpretation is an abstract structure. Some of these are conscious. There is no definitive answer on what anything is about. Absolute reference does not exist. There is no fact of the matter on what we are thinking about, there are only interpretations that are more or less reasonable. Our thoughts can be taken as a closed abstract system. We think we are thinking about things in the external world (and that’s a reasonable interpretation), but really we’re just manipulating an abstract structure that could refer to anything. This makes no difference to us because we have no access to what the things actually are in themselves. As such we can’t see the contrast between thinking of them qua their abstract structure and thinking of them as the things in themselves — the latter is not even a sensible proposition. The structure we do have access to is detailed enough to stand in as a useful proxy for the things in themselves. When we think of these objects, we are in effect thinking only of their structural properties up to the level of detail captured in our models of them.

    > But we do have a definite picture

    I beg to differ!

    > I do not experience balls and boxes, but rather, a cup on my desk

    You experience an abstract structure that could most reasonably be interpreted as referring to a cup on your desk. Other less reasonable interpretations are possible. If these are isomorphic to your mental model it would make no difference to you, because to you a “cup on your desk” is just what you call this abstract structure when represented in your mind. I can’t even really illustrate what it would mean for this structure to refer to something other than a cup on a desk, because to do so I would need to distinguish between this alternative and a cup on a desk, but to do so would be to identify a structural difference between the two in your mental model, which ex hypothesi there isn’t.

    I mean, potentially I could make you aware of the distinction between a mug and a cup (if you weren’t already), and then you could understand that the abstract structure you previously thought of as a cup on a desk (which at the time didn’t distinguish between a mug and a cup) could be interpreted as referring either to a mug on a desk or to a cup on a desk. I guess that’s about the best I can do to illustrate the point.

    In the same way, if you’ll excuse the ridiculous anthropomorphisation, you could try explaining to a thermostat that its “temperature” could refer either to temperature or to altitude. It might not be too impressed, saying that it still has a definite concept in mind, and that temperature and altitude are really just two different types of “temperature” as far as it is concerned. The distinction is not important to it but that doesn’t mean it doesn’t have a reference.

    > find some case where structure actually singles out what it pertains to, provides actually a definite picture.

    I’ve been focused on the opposite tactic, to try to show that your own representations are not definite at all.

    > how does not having access to this structure yield a definite picture?

    There is no definite picture. The picture is no more definite than the structure. There is no detail outside of this structure which is present in our experience. If the structure is not definite with respect to further details, these further details are irrelevant to us.

    > Except of course you can’t, by the stone argument: no particular computation is singled out by the physical dynamics.

    Agreed. You can take me to be talking about the most reasonable interpretation. There is also some conscious algorithm which utters “I am conscious”, but that does not seem to be as useful in predicting the output of your computer program as the algorithm that simply outputs those symbols mindlessly.

    > and ask you how your view accounts for this data, which to me it doesn’t seem to. (And as per the above, I strongly suppose it can’t.) There’s no circularity in doing so.

    There’s no circularity in asking the question. I think your reasons for rejecting my explanations (or denying that I have even offered an explanation) are circular. Or at least no less circular than my view.

    > Again (and hopefully for the last time), I’m not asking about the spark of life

    I know you’re not. But I see your questions as similarly misguided (though perfectly understandable from an intuitive perspective).

    > On my account, it would have just the same experience as I do, however, this experience does not arise from computation.

    OK, this interests me. Perhaps I don’t understand you after all. So, we have a simulation of the atoms in your body in a computer program, which is running on an ordinary (though arbitrarily powerful) computer. You think it is conscious? If so, how is this not computationalism?

    > the more complex the structure, the more systems are there that embody it.

    Poppycock! At the very least, more complex systems require more events to embody them. You need at a minimum a certain cardinality. Trivially simple systems (such as the empty set, say) can be projected onto absolutely anything.

    > Well, whatever may go on, what you certainly need is an operation that takes the structure in my head, and from it, yields the phenomenology I experience.

    Meaningless. There is no phenomenology. What you experience is just the structure.

    > Again, think about the arm-eliminativist.

    A disanalogy because at least we can put into words what an arm is supposed to be. We can’t even articulate what this phenomenology that you expect me to account for is supposed to be.

    > You can, yes, because you’re an intentional system; but the thermostat can’t, or any ‘intentional’ system supervening on it if it has access to only the structure.

    No, I can because I have access to further structure than the thermostat does. Not because I’m a fundamentally different kind of thing than it is.

    > But the interpretations can’t be Platonic structures—if that were the case, then they would need interpretation themselves

    No. Platonic structures exist independently of interpretation.

    > No, an interpretation of a structure is a concrete set of things embodying the structure, that is, things standing to one another in some relation

    I wasn’t talking about interpretations of structures so as to map them to specific physical things but about interpretations of specific physical things as embodying particular abstract structures.

    > have already pointed out that this singling-out can only work from the perspective of a third-person intentional system, and can’t be used to ground the interpretation in general

    Agreed. But interpretation only applies from a third-person perspective. The first person perspective is not interpreted. We don’t interpret our models as referring to objects in the world. We only know our models. The inference, if there is one, is that there probably are objects in the world which correspond in some way to our abstract models.

    > Thus, they’re completely well defined in terms of those axioms.

    Sure, but the axioms themselves are circular. You can’t generally isolate a single axiom from a system of axioms and say it has meaning without bringing in assumptions from outside the system. Euclid’s axiom “A straight line segment can be drawn joining any two points” is meaningless unless you define what it means for a line to be straight, or what a line is or what a point is. The system of axioms as a whole constitutes these definitions.

    > the structure you proposed earlier does not single out the pair of balls.

    I’m not saying it “singles out” the pair of balls. I’m saying it *is* the pair of balls, since I think the pair of balls is just a substructure of the structure that is our universe. If you choose to represent this structure with a string of bits or some enormous collection of balls, then that physical representation is yet another structure again. In my view, structures don’t need to be represented or instantiated in order to be real.

    On the next bit you challenge me to explain what I mean by saying that internal representations are not interpreted but are instead causally connected and govern how we think instead of being read by a homunculus. I said I had already had a stab at explaining it and you didn’t know to what I referred. I feel like I’ve discussed it a lot (in this post too). For instance, a simulated person does not interpret its own computer code. I don’t interpret my own neurons any more than my neurons interpret the laws of physics. The neurons shape how I think in virtue of their causal connections, and the laws of physics shape how those causal connections work. The way my thinking is shaped by interpreting some external representation (such as an English sentence) is much more indirect and so not a fair analogy.

    > It’s unclear to me how you think this could help, and simultaneously believe the conclusion of the rock argument, which exactly says that even dynamics or causal connections don’t serve to narrow down the possibilities, but that this dynamical structure can be mapped to essentially any system at all.

    OK. Fair point. I am speaking loosely. I will most often speak of the structure that most closely or most usefully maps certain interesting causal relationships. We might say this is Sergio’s interpretation. But strictly speaking, we don’t think of mathematical structures as having causation per se. Rather they have entailment, which can be used to model causation, e.g. the state of the system at time t1 entails the state of the system at time t2.

    There is no narrowing down of possibilities because in my view all possibilities are real. But each of those possibilities has internal “causation” in the form of entailment. In the case of Sergio’s interpretation, this entailment maps to what we would call actual causation. This mapping allows us to identify references and aboutness for all practical purposes, at least from a third person perspective. From the first person perspective these representations have no aboutness in the sense of referring to other objects outside our experience because we only have access to what they are FAPP about via the model. So we see only the model and what this model is “about” doesn’t enter into it from a first person perspective. It doesn’t have to because we deal with the model directly. By standing as a proxy for the real world, the model allows us to refer to the real world FAPP but not absolutely.

    So, I think I am the structure Sergio would interpret my brain to be. There are other structures less reasonable people might interpret my brain to instantiate. I don’t consider myself to be these structures, but that is not to say that they don’t exist. They exist as structures in their own right, and, where conscious, they can be considered to be observers in another universe.

    > in all these cases, how things seem can be derived from how they are

    It is no different for consciousness but you don’t accept the derivation I have offered, in my view because of intuitive barriers. To explain each of these seemings, we construct a hypothetical observer in the possible world where the proposition is true and consider how things would appear to it, i.e. what it has access to. You don’t have access to the fact that atoms are mostly empty space, so we understand why it seems to you that they are not, and therefore the appearance of solidity does not contradict the hypothesis that atoms are mostly empty space.

    By positing a simulation of a conscious person (again I’m confused here because against expectation you seem to agree that this would be conscious), we can see that it does not have access to the fact that it is a computer simulation. All the information available to it seems to confirm that it is real. If we assume it is a simulation of you or of me, it would even be interested in debating just these same questions, asking how it could be that it could seem to be more than a computation when it isn’t.

    In my view, this prediction constitutes an explanation. It is a derivation of how things seem from how they are.

  87. > You’re casting yourself in the role of my future self, having seen the error of my ways

    No, it’s the other way around: I recognize in you exactly the position I used to hold, i.e. I see you as holding the same opinions as my past self did. I was wrong then (or consider myself to have been wrong); I might just as well be wrong now. I merely point this out to guard against your assertion that all that separates us is a mere difference in intuition: my intuitions haven’t changed; I just now consider them to be misleading. I view myself as having been forced, by (subjective) evidence, and by the inability of the computationalist picture to account for it, to adopt a view of the mind that I find radically unintuitive.

    > There is no evidence which contradicts the leap.

    See, I do think there is. Coming back to the difference between the balls-and-boxes and the thermostat, I think the simplest way of putting it is that while my mental state may underdetermine what it references, the structure on its own underdetermines even the mental state. Simply because, from the underlying structure—neurons firing, electrons cruising around in circuits, etc.—there is no way to derive the mental state I find myself in in any unique sense. Infinitely many arbitrary interpretations are possible, but this is not what we find.

    You agree with this by now, I believe, and have moved on to simply considering all of these interpretations to be real. I have already laid out in my last post why I consider this manoeuvre to fail: on your Platonic conception, where interpretations are themselves structures, any interpretation that we might consider to be the ‘reasonable’ one will itself not determine the mental state any better; it will itself be compatible with any number of interpretations (say, in terms of balls-and-boxes, as opposed to my actual phenomenology—again here with the caveat that this is merely a shorthand for ‘the phenomenology I think or take myself to have’, which I’d hoped we could leave implicit by now, but whose omission has again apparently misled you a couple of times about what I’m saying).

    > I’m going to keep disagreeing with this, I’m afraid. You can answer questions about that structure qua that structure. You can manipulate it in various ways and see what results you get. This is how we do mathematics. This is how computer programs work.

    Disagree all you want; in this one case, it’s quite certain that what you’re saying is wrong. Any set of the right cardinality can be imbued with any structure. This is what makes both mathematics and computer programs so useful, since it accounts for the fact that we can describe widely disparate systems using the same mathematics. I mean, it’s exactly the same thing as the stone argument says; to me, you’re variously agreeing and disagreeing with this, so I suspect that there’s still some shortcoming in your understanding. To put it forcefully: the fact that every physical system traversing a set of states of the right cardinality can be seen to implement any FSA is the same as the fact that all a structural account settles are questions of cardinality. You agree with the former, but disagree with the latter; but really, they’re just the same thing.
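
    To spell out the mapping that claim trades on (a sketch with invented placeholders; the 'stone' microstates and the FSA run below are made up for the illustration): pair off the successive states of any physical system with the successive states of whatever FSA run you like, and you have an 'implementation'.

    ```python
    # Sketch of the stone-style mapping: the i-th physical state is simply
    # declared to implement the i-th state of an arbitrary FSA run of the
    # same length.  Nothing about the physics constrains the pairing.

    stone_states = ["s0", "s1", "s2", "s3"]            # any system's state trace
    fsa_run = ["START", "READ_A", "READ_B", "ACCEPT"]  # any FSA run of equal length

    implementation = dict(zip(stone_states, fsa_run))  # the 'interpretation' map

    for phys, fsa in implementation.items():
        print(f"physical state {phys} -> FSA state {fsa}")
    ```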

    > The problem is that you’re apparently laser-focused on an idea of absolute reference I disagree with.

    Where do you get this from? Haven’t I repeatedly acknowledged that the intentional objects I have access to underdetermine their referents?

    I do believe that there are definite intentional objects—simply because my introspection tells me so. In this, it may be mistaken (something I also readily and repeatedly have acknowledged). But, and I think you keep missing this bit, whether or not there in fact are definite intentional objects, your account still has to explain why my introspection tells me so!

    > but these references are not a matter for arbitrary interpretation (where Newman’s problem might come in), they are what would be projected onto us by an external observer from the Intentional Stance.

    Well, but since we may just be the fever dream of a stone lying on the beach on your account, there probably wouldn’t be any intentions projected onto us that in any way correspond to the intentions we (take ourselves to; do I really have to keep doing this?) have.

    > We are not in a position of having to interpret our own mental representation (there is no homunculus) any more than a simulated person has to interpret her own computer code.

    But still, if your account is to count as explaining anything, then at a minimum that which we take ourselves to be experiencing must be derivable from the structure we have access to—in the same sense as the solidity of the table is derivable from the atomic account of matter (something you seem to have missed as well). This isn’t to say that we in some way observe or interpret our own internal goings-on, merely that what we seem to experience must be explicable in terms of those internal dynamics; and if you say that they are given by simple structure, then structure must explain why I experience the bottle of beer on my desk (hey, it’s evening by now). But this, structure can’t do: with the same justification with which I could derive the bottle of beer, I could derive a set of balls-and-boxes, or a myriad of other things. Stone; Newman; Pixies; what have you.

    > To me, your analogy to Magritte nicely shows what I see as your confusion between internal and external representations

    See, it’s actually somewhat strange to me that you should put so much weight on that distinction. After all, to you, everything is accountable for in terms of structure; and all structure has a third-personal description; and hence, each first person viewpoint must be accountable for in third-personal terms. Otherwise, if you allow some irreducible first person into your account, then the whole mystery of what that first person is and how it comes about of course remains unchallenged.

    But be that as it may, your criticism is misplaced: all that goes on in my head, on your account is neurons firing, realising some structure, some computation, etc. This is, of course, wholly appreciable in terms of scratch-marks on paper—I mean, this reducibility is kind of the point why one would propose a computational account of the mind, after all.

    > In any case, since I am a Platonist and hold that all structures exist regardless of explicit physical instantiation, I can simply say that the intentions exist in the algorithm that would have produced the lookup table, even if we can find no physical instantiation of that algorithm.

    And in doing so, again, you answer a challenge to your stance by using a reply that is only meaningful if your stance is correct, petitio-ing the principii six ways to Sunday.

    > Which I’ve done by proposing the hypothesis that seeming is no more than seeming*. You reject that hypothesis. Either way we’re just playing bald assertion tennis.

    Well, no. I’m proposing first-personal evidence that your account does not seem to me to sufficiently explain; and your continuing refusal to even attempt such an explanation, or to sketch how one might look, seems like the best confirmation that this is in fact a problem.

    > It appears to be definite because it represents itself as definite. Our representations present themselves to us as objects, not as representations of objects, because typically what we represent are not representations.

    Uhm… huh? How does something indefinite represent itself as definite? To whom does it represent itself? How does it represent itself? I’m sorry, but I can’t really parse any of this…

    > The content of our representation is not underdetermined because there is no content (beyond the structural).

    Yes, you’ve said so, and I’m completely on-board with that possibility. But what I’m asking about is why, if this is in fact the case, does it seem that there is content beyond the structural? The structural has interpretational freedom that my experience of the world lacks. I can’t view the bottle of beer before me as a set of balls and boxes; I can view any structural description of that bottle in this way. Why the disparity?

    Both my double and I would be convinced by an answer to this question, but no matter how often I have posed it, none seems to be forthcoming. Of course, this is no surprise, since none can be found.

    > More detail would be like having the thermostat detect the temperature at more than one threshold. Perhaps if the temperature is very high it activates a fan or air conditioning unit, in addition to activating a heater if the temperature is low.

    Great, this is the kind of story I’m hoping to hear more of! Of course, so far you’ve done nothing to achieve greater definiteness: still, all sets with the requisite cardinality can embody that structure, as was the case before. I can still represent this structure using balls-and-boxes models. We’ve not moved closer to narrowing anything down; we’ve just added more elements that are equally uncertain. But please, keep going down this path; perhaps eventually you’ll be able to show something that isn’t arbitrary anymore.

    > But your balls are not interacting with the environment. It is this interaction that for all practical purposes allows us to identify an intention or reference.

    Again, you’re free to tell me how; I already pointed out that I don’t think this helps in my prior post. Also, since you seem to hold that all interpretations are equally real, we might as well supervene on an inert stone (like the wall people you proposed to exist earlier), in which case, there’s no environmental interaction at all.

    > Our thoughts can be taken as a closed abstract system. We think we are thinking about things in the external world (and that’s a reasonable interpretation), but really we’re just manipulating an abstract structure that could refer to anything.

    See, if that were true, then there would be no fact of the matter regarding whether our thoughts are about wanting another sip from that bottle of beer, balls in boxes, or, I don’t know, cute little koala bears, provided that the cardinality of the objects subvening the structure of cute little koala bears is equal to that of wanting a bottle of beer; because simply, structure does not pin this down—it does not pin down anything beyond cardinality questions. But my mental content is (seems to be) definitely about the bottle of beer. (I’m beginning to sound like an alcoholic here, but even this is evidence against your thesis: on your account, I could just as well seem like the most stringent teetotalitarianist, cardinality permitting, but I don’t.)

    > You experience an abstract structure that could most reasonably be interpreted as referring to a cup on your desk.

    When is any interpretation more reasonable than another?

    > I’ve been focused on the opposite tactic, to try to show that your own representations are not definite at all.

    But all I’m concerned with (boilerplate disclaimer at the beginning of my last post) is the way these representations seem to me—this you must be able to account for. And it may be the case (once more with feeling!) that my representations aren’t definite; but that doesn’t save you from having to explain why they seem that way.

    > There is also some conscious algorithm which utters “I am conscious”, but that does not seem to be as useful in predicting the output of your computer program as the algorithm that simply outputs those symbols mindlessly.

    But any given algorithm merely compounds the same kind of thing, adds some complexity. How does this help?

    > I think your reasons for rejecting my explanations (or denying that I have even offered an explanation) are circular. Or at least no less circular than my view.

    No, my reason for rejecting your explanations is that there is evidence that your explanations so far fail to explain, and that indeed seems to be completely contrary to them. Again, I don’t reject your view because of my intuitions; I reject it against them.

    > OK, this interests me. Perhaps I don’t understand you after all. So, we have a simulation of the atoms in your body in a computer program, which is running on an ordinary (though arbitrarily powerful) computer. You think it is conscious? If so, how is this not computationalism?

    Well, it’s a difficult question that I have no short, non-misleading answer to. In brief, yes, I do believe it is conscious, if it is implemented in the right way; and I also believe that only this kind of implementation will produce a functional isomorph to me. And it’s not computationalism because it depends on the instantiation of certain patterns (which are physical things) with certain behaviours, for intentionality, and on input from an external reality, for phenomenal consciousness. The computation is incidental—which it must be, since there is no fact of the matter regarding which computation is taking place unless it is determined third-personally by an already intentional being.

    If you want the full story, I’m somewhat reluctant to hawk it here, but if you have some means for me to send it to you, I’ve got a writeup that contains most of my current thinking (we could for instance exchange email addresses via Peter). In fact, I’d be very interested in hearing your criticism, so I’d be more than happy to send you my stuff.

    > Poppycock! At the very least, more complex systems require more events to embody them. You need at a minimum a certain cardinality. Trivially simple systems (such as the empty set, say) can be projected onto absolutely anything.

    Well, if you adopt some coarse-graining, then that’s true. But if you want cardinality to be a strict criterion, then the higher the cardinality, the more systems exist with that cardinality, and hence, the less definite structure becomes.

    > Meaningless. There is no phenomenology. What you experience is just the structure.

    Comments like this one are why I was careful to point out that I’m not assuming that my phenomenology is fundamental, or that I am necessarily right in my assertions of what I experience, but that I’m merely talking about what I seem to be experiencing; and this your theory must certainly be able to accommodate. That is, whatever you’re saying, you certainly need an account of how what I take myself to experience is derived from the structure you claim to be fundamental.

    > We can’t even articulate what this phenomenology that you expect me to account for is supposed to be.

    Well, we can’t articulate what porn is, or what constitutes a heap of sand; but still, when presented with some phenomenology, we’ll know that it is phenomenology. There’s no disanalogy: in both cases, an intersubjectively known phenomenon is supposed to be explicable by some fundamental entities; hence, both cases necessitate a story of how those entities give rise to the phenomenon they are supposed to underlie.

    No, I can because I have access to further structure than the thermostat does.

    And still, the demonstration that further structure yields greater definiteness is lacking.

    No. Platonic structures exist independently of interpretation.

    They well may, but as structure, they face the same obstacles in providing any definite mental content as the things they are supposed to interpret do. All I’m asking is how you believe your account to be able to handle these obstacles; but so far, the most I’ve gotten in this regard is a handwave in the direction of added complexity, causality, or interaction with the environment. But on your own account, all of this is just more structure, and thus, can’t help in adding content.

    I wasn’t talking about interpretations of structures so as to map them to specific physical things but about interpretations of specific physical things as embodying particular abstract structures.

    Which is of course the epitome of arbitrariness, as you agree regarding the stone argument.

    The inference, if there is one, is that there probably are objects in the world which correspond in some way to our abstract models.

But our abstract models, if such they are, are applicable to any set of objects in the world; so any set of objects in the world ‘corresponds in some way’ to our abstract models.

    Sure, but the axioms themselves are circular.

That’s not even wrong. Axioms of a formal system don’t purport to carry any meaning; they are sets of symbols, explicitly lacking any interpretation, from which you can, using syntactic manipulations, construct theorems—which will then, once you add an interpretation to the whole edifice, turn out to be true of the objects you have decided the axioms to be about. There’s certainly no circularity in the axioms.

    I’m not saying it “singles out” the pair of balls. I’m saying it *is* the pair of balls, since I think the pair of balls is just a substructure of the structure that is our universe.

    And this is all I’m saying: that given this structure, one ought to be able to arrive at the pair of balls. But it’s simply not the case that there is some structure such that the pair of balls is the preferred interpretation in any sense.

    I don’t interpret my own neurons any more than my neurons interpret the laws of physics. The neurons shape how I think in virtue of their causal connections, and the laws of physics shape how those causal connections work.

And I’m very happy to accept all this. But there’s a fact of the matter regarding how my experience appears to me. This is data, and your hypothesis needs to be able to account for the data, or else it is falsified by it. Hence, one must be able to derive, from your supposed fundamental entities, the data I have, just as one is able to derive the solidity of a table from the atomic hypothesis. That is, there must be some way to account for this data in terms of structure. This is what must be presented before your hypothesis could even be accepted as a serious contender for explaining conscious experience.

    So, I think I am the structure Sergio would interpret my brain to be. There are other structures less reasonable people might interpret my brain to instantiate.

    What do you consider to be reasonable here? And please don’t say that it is a better fit to external reality, because a) to establish that fit necessitates an arbitrary mapping from states of the world to states of your mind, thus singling out different interpretations as the ‘most reasonable structure’ depending on this mapping, and b) even being able to speak of a definite outside world necessitates an observer free from the problem of the arbitrariness of structure.

    You don’t have access to the fact that atoms are mostly empty space, so we understand why it seems to you that they are not, and therefore the appearance of solidity does not contradict the hypothesis that atoms are mostly empty space.

    But it is the very nature of atoms that underwrites the nature of my desk, e.g. its solidity; however, the very nature of structure seems to run counter to the nature of my mind, which I experience as having a definite content that structure seems to be unable to provide.

    In my view, this prediction constitutes an explanation. It is a derivation of how things seem from how they are.

    Well, I can’t do any more but tell you what would suffice to convince both me and my simulation that neither of us has ‘genuine’ intentionality: a derivation of why there seem to be definite intentional mental objects despite the fact that structure never yields definite objects, but leaves them maximally undefined. That is, construct some structure such that this structure defines some definite mental object (or, to repeat the boilerplate disclaimer you’re somehow prone to forget at these points, that seems to be a definite mental object), and I’d be happy to re-instate my intuitions and follow them towards the computability of minds.

The basic problem to me is that accepting the stone/Newman argument leads to a rampant panpsychism; you seem to be inclined to bite the bullet and accept the panpsychism, the simultaneous reality of all possible interpretations—but this is not actually an option on your particular structural Platonic construal, since it doesn’t change anything. Basically, if you have some structure representing our world, then anything within it may be interpreted as some intentional agent, as a first step of applying the stone/Newman argument. So you say, well, anything is an intentional agent in its own universe then, but this just means that you introduce a plethora of new structures, which themselves carry all possible interpretations—and since the cardinality hasn’t changed at any point, those interpretations are exactly the same ones as before. So you haven’t actually gotten any closer to finding an intentional agent in your structural account; all you have is a never-ending sequence of matryoshka dolls, with each further iteration being as arbitrarily structural as the one before it.

  88. If that leap is allowed (which you won’t), then all your questions are answered and there is nothing left unexplained. There is no evidence which contradicts the leap.

    Actually, in addition to the subjective evidence I noted in my previous post, I think there is other good empirical evidence contradicting this ‘leap’ of yours—namely, evidence that ‘belief*’ and ‘belief’ are two different things.

Consider how you might get to know that some robot, animal, or even another person has belief* regarding some proposition: they will, in general, show this through differences in behaviour with respect to how they would behave if they lacked said belief*. Hence, one might call this ‘dispositional belief’. On the other hand, belief (in yourself) is known to you through introspection, and seems to have intentional character—hence, let’s call this ‘intentional belief’. Now, I think that in human beings, it’s an experimentally established fact that dispositional and intentional belief do not coincide, which makes it plausible that belief* and belief likewise are distinct.

This is, it seems to me, borne out by visual masking experiments, where the subject is first shown a target stimulus for some period shorter than a given critical time (about 200 ms), followed by a masking stimulus, which prevents the target from entering conscious experience. Hence, the subject does not have any intentional beliefs regarding the target stimulus.

    Nevertheless, in behavioural experiments carried out afterwards, the masked stimuli show a strong influence on subsequent behaviour, such as priming, i.e. influencing the distribution of later responses to different stimuli (say, given by the mask), as e.g. here.

This, it seems to me, corresponds to an instance of dispositional belief not accompanied by intentional belief: from observing the reactions of the subject, one could conclude that they were due to the presence of the target stimulus, and hence, that the subject possesses dispositional belief in its presence, in the same sense that one could, from a third-person perspective, attribute to a fleeing monkey the (dispositional) belief in a tiger’s presence. However, there is no accompanying intentional belief, or subjective experience (as per subjective reports, and as indicated by experiments showing that the visual phenomenology does not impact the priming).

    Of course, the dispositional evidence gained in such cases clearly distinguishes the ‘belief’ regarding the masked stimulus from belief towards a stimulus clearly present to the subject—that is, a stimulus of the latter kind may for instance prompt the subject to proclaim ‘I saw a right-pointing arrow’, upon being questioned, while the masked stimulus, not being available in this way, would presumably not (although perhaps a subject might be able to correctly guess with greater than chance accuracy, given a selection of possible stimuli, as in the case of blindsight).

    But I believe that this is merely a difference in degree, not in quality. One could easily imagine, it seems to me, a being whose dispositional beliefs also enabled it to make such exclamations as above—if it is in principle possible that masked stimuli influence behaviour, then I do not see why any given behaviour should be excluded. Hence, it does seem only like a mild extension of the empirical data to suppose that a being might exist that exclusively possesses dispositional beliefs, without any attendant intentional beliefs, and that nevertheless acts in such a way that one would third-personally attribute it with beliefs.

    In other words, the presence of the masked stimulus alters the probability distribution over possible future behaviours; since those future behaviours are all one has access to from the third-person point of view, all one can conclude is that the subject in question has dispositional beliefs towards the stimulus.

However, those dispositional beliefs are your belief*: it is the belief you wish to attribute to a computer, or any other agent, based on its reactive behaviour to the environment (which might include, for instance, the internal representation of the stimulus in some way; after all, to influence behaviour, even the masked stimulus must in some way be represented in our brain states). But your account hinges on the equivalence of belief* and belief: that in some sense, dispositional beliefs are sufficient for (taking oneself to have) intentional beliefs. Yet the experimental account seems to differ: subjects in visual masking experiments exhibit dispositional belief towards the masked stimulus, by altering their behaviour in its presence, yet profess a lack of intentional belief.

    Hence, in these cases, dispositional belief does not suffice to give rise to intentional belief; and if it is indeed plausible, as I suggest, that all behaviours could in principle be explained using dispositional beliefs, then intentional and dispositional belief—and hence, belief and belief*—are simply two different kinds of things. So it does not suffice to diagnose an entity with belief* to attribute belief to it.

    Now, one might try and hope that some kind of higher-order belief could fill in where dispositional beliefs fall short: that is, for intentional beliefs, dispositional beliefs aren’t sufficient, but maybe, beliefs about dispositional beliefs can fill the gap. Merely acting as if one has perceived a stimulus evidently does not suffice to attribute conscious, intentional perception of that stimulus to an agent; so one might additionally request that it is not this mere dispositional facility, but belief about this disposition that accounts for intentional beliefs, some belief(*) of the form ‘I’m acting this way because I have seen x’, say.

    But this, then, becomes simply question-begging: if belief* wasn’t sufficient, then what grounds do we have to think that additional belief* will help? If belief* could give rise to intentional belief, then it ought to be able to do so at the first rung of the ladder, so to speak. It does not seem plausible that the belief’s object changes anything about what internal phenomenology is brought about by it. We could simply iterate the above argument: even this second order belief* could be characterized in terms of dispositions—dispositions to give rise to certain internal states, to certain patterns of neuron- or transistor-activity, what have you. Hence, it is again plausible to perform an experiment such as above, which will produce the same internal dispositions, but with an absence of the attendant intentional mental content. Thus, it doesn’t seem plausible that layering on beliefs* should help.

    ________________________________

    Anyway, it seems that you have withdrawn from the discussion, and maybe with good reason (I tend to be somewhat unable to let things go). If that’s so, I hope I haven’t offended or insulted you in any way. Also, I’m not offering the above by way of a cheap parting shot, hoping to claim ‘victory’ due to your silence; so don’t feel obliged to reply. I’ve decided to post these ruminations simply because earlier, you seemed genuinely interested in and receptive to the empirical data as given by the colour-blind glasses; so I hope you’ll take the above in the same spirit.

  89. Hi Jochen,

    You haven’t offended me. I haven’t even read what you’ve sent yet (apart from that very last paragraph). I saw the mass of text and just despaired (I’m just as wordy as you, I know, so I can’t fault you). I’ve been losing enthusiasm a little bit as there seems to be more and more repetition and less and less progress. That and I’ve been busier than previously.

    I’ve been meaning to get around to answering it. Perhaps I will soon.

  90. Hi Jochen,

    I can’t see myself having the time to read the whole thing, digest and then reply to the whole lot, so perhaps instead of avoiding it altogether I’ll post little bits here and there as I read through it.

    > You agree with this by now, I believe

    To be clear, I don’t think my view has changed much or at all really since the beginning of the conversation. I think what may be changing is your understanding of that view as well as perhaps how I express it.

    > that we might consider to be the ‘reasonable’ one will itself not determine the mental state any better, will itself be compatible with any number of interpretations (say, in terms of balls-and-boxes, as opposed to my actual phenomenology

    Your actual phenomenology is the structure itself. It is not an interpretation.

    It might be interpreted to refer to balls and boxes from a third person view, sure. But that interpretation would only be reasonable if you were interacting with a system of balls and boxes without being aware that you were doing so. This seems unlikely. Perhaps you would need to be a brain in a vat interacting with a virtual world which supervenes on a computational substrate made of balls and boxes.

    > Disagree all you want, in this one case, it’s quite certain that what you’re saying is wrong.

    Ditto!

    > Any set of the right cardinality can be imbued with any structure.

    Sure! But that is not what I was disagreeing with. What you said was “with mere structure, you lack this sort of definite intentional object, having instead at best access to questions of cardinality”. What I was disagreeing with was the implication that defining a structure tells you nothing other than cardinality.

    So…

    True: any structure can be imposed on any set of a given cardinality
    False: a structure tells you nothing apart from cardinality

    By analogy:
    True: any computer with sufficient memory can run any algorithm
    False: running an algorithm tells you nothing other than that you have a computer with sufficient memory

You can’t tell anything about the computer apart from that it has sufficient memory merely from the fact that it runs an algorithm, and this is analogous to what you’re focusing on. But what I’m focusing on is that the algorithm itself can tell you things, not about its substrate but about abstract structures and so on.

    Similarly, any given structure provides you with more information than its cardinality. Since I think the only direct objects of intention are abstract structures, acting as proxies for real objects only for practical purposes, this is all I need.

    > Haven’t I repeatedly acknowledged that the intentional objects I have access to underdetermine their referents?

    Sure. Which is why I am perplexed that you keep (from my perspective) contradicting yourself by insisting that your references are definite and those of a computer are not. I assume I am misinterpreting you, but even if this is clarified, my point still stands if you interpret me to be saying “you’re apparently laser-focused on an idea of *foo* reference I disagree with.” where *foo* replaces “absolute” and simply means whatever property it is you think human intentions have that computer (pseudo-)intentions don’t.

    > Well, but since we may just be the fever dream of a stone lying on the beach on your account

No. We are an abstract structure on my account. Whether a stone dreams of us is immaterial, as I said for Bostrom’s argument. We exist Platonically of our own nature as abstract structures and don’t need any physical system to instantiate us in order to exist. When physically instantiated (e.g. in a simulation), we are no more real than we were before.

    > there probably wouldn’t be any intentions projected onto us that in any way correspond to the intentions we (take ourselves to; do I really have to keep doing this?) have.

    The structure we are part of also includes the world outside us. You couldn’t interpret a stone as subvening your mind without also interpreting the stone to be subvening the world you perceive. Your intentions would correspond to objects in that world.

    > what we seem to experience must be explicable in terms of those internal dynamics;

    What we seem to experience is ineffable. My view is that it is ineffable because there is nothing there to be expressed or communicated, indeed the very idea of such communication or even explanation is rooted in a category mistake.

    This is not to say that we don’t experience, but rather that experience is a functional thing, and so things that function as we do have the same experience. The experience we have is just what these functions look like from a first person perspective. To ask for a derivation of what this *feels* like from what the functions *do* is to ask to derive a first person perspective from a third person perspective — a category mistake, in my view. To know what it is like to be a bat is simply to be a bat. There is nothing more to it than that.

    Instead of the impossible-in-principle category-mistaken explanation you have asked for, I have offered an alternative kind of explanation which you will not accept (and yet which is adequate for me). That is, on my account, we can explain why it is we think there is something to be explained, even though there isn’t (since we can see that a faithful simulation of a human would ask much the same questions). The takeaway is simply that we are no different from a computer with the same information processing abilities and the intuition that we are more than this is illusory.

    > and hence, each first person viewpoint must be accountable for in third-personal terms.

    No. The structure is accountable for in third-person structures. To know what it is like to be that structure is simply to be that structure. The idea that there is hidden, ineffable, incommunicable information there, that there really is content to this “knowledge” is mistaken in my view. It is not that there is something I am leaving out of my account, it is that what you expect to find in my account does not exist and cannot exist.

    > When is any interpretation more reasonable than another?

    I’ll leave that to Sergio (although unlike Sergio I’m not hanging any strong claims on what is or is not reasonable).

    > But any given algorithm merely compounds the same kind of thing, adds some complexity. How does this help?

    Because consciousness is only possible with complexity. Because human consciousness is a conjunction of a number of different information processing abilities, and it is not possible to implement these abilities in a trivial system.

    And no, I’m not saying “add complexity and magic happens”. I am saying certain functions demand complexity (this ought to be uncontroversial) and consciousness is an aggregation of certain functions; what these are is defined only vaguely and depends on what you would deem to be conscious, e.g. dogs, fish, insects etc.

    > you answer a challenge to your stance by using a reply that is only meaningful if your stance is correct,

    And I will continue to do so! Again (and I’ve made this point before) I’m not trying to prove my stance, I’m trying to defend it. In order to prove my stance false, you need to demonstrate either an internal inconsistency or inconsistency with empirical evidence (as you have been attempting to do). Arguments which only work if my stance is wrong can be put aside. To insist on them is begging the question (just what you think I am doing).

    > I’m sorry, but I can’t really parse any of this…

    I’m not surprised. This is confusing as hell, and from my perspective you are confused. My inability to express it any more clearly doesn’t help.

    So, from my perspective, you are confusing representations of objects with representations of representations of objects. A representation of an object is some neural state, say. When we think of objects, this is a neural activity. When we think of neural activity itself, this is also a neural activity. But thinking of an arbitrary object does not feel like thinking of a neural activity, so there is this illusion that what is going on is not a neural activity.

    > To whom does it represent itself?

    To the mind of which it is a part.

    > How does it represent itself?

    By changing how the mind processes information in virtue of its causal connections within the mind. The shape of the flow of information through the mind is a structure, and the representations are part of that structure.

    > How does something indefinite represent itself as definite?

    By being more tightly coupled to the representation of the concept of definiteness rather than that of indefiniteness.

    OK, that’s all I have time for today. Cheers.

  91. Your actual phenomenology is the structure itself. It is not an interpretation.

But the problem is that phenomenology does not seem structural. What is, for instance, the structure of the experience of a uniformly red field of vision? Certainly, one can find structural relationships between elementary phenomenal experiences, but the problem is to account for those elements themselves in terms of structure.

I’m also not making the point that my phenomenology is somehow my interpretation of the structure instantiated in my brain (I think you sometimes take me to be); my point is simply that if there is a structure in my brain, and if I have some phenomenology that is accounted for in terms of structure, then it ought to be possible to take that structure and find out what my phenomenology is, as is again the case with the solid-seeming table whose solidity is accounted for in terms of atoms.

    But that interpretation would only be reasonable if you were interacting with a system of balls and boxes without being aware that you were doing so. This seems unlikely.

    How do you judge likelihood here? If it leads to the same structure, there is no difference on your account.

    True: any structure can be imposed on any set of a given cardinality
    False: a structure tells you nothing apart from cardinality
    By analogy:
    True: any computer with sufficient memory can run any algorithm
    False: running an algorithm tells you nothing other than that you have a computer with sufficient memory

But that last bit is exactly what the stone argument tells us! Actually, it’s even directly implied by computational universality—that a given program can be instantiated on a given system does not tell you anything about that system, merely that it is capable of universal computation.

    Consider any given structure S, which is a set of sets; these sets define relations between certain elements of a base set B. Thus, any structure is a subset of the powerset of the base set, P(B). Now, what does having access to this structure tell us? Indeed nothing but the cardinality of B, since any B’ of equal cardinality can just as well support that structure. What else do you think that S tells us? I mean, consider any given relation from S, which is a set (e.g.) {a,b,c}. This set exists iff you have B of the right cardinality, because if B exists, then so does its powerset P(B). Hence, knowing that the set exists (which is all S can tell you) tells us nothing more than we already knew when we only knew the cardinality of B.
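    If it helps, here’s a toy illustration of that point (the particular sets and names are of course just mine): any relation defined on B can be carried over, through an arbitrary bijection, to any other set of the same cardinality, so ‘supporting the structure’ comes for free with having enough elements.

```python
# Any relation on a base set B can be transported, via a bijection, onto any
# other set B' of the same cardinality; specifying the structure therefore
# adds nothing over and above the cardinality of B.

B = ["a", "b", "c"]
R = {("a", "b"), ("a", "c"), ("b", "c")}    # a two-place relation on B

B_prime = [1, 2, 3]                          # any other three-element set
f = dict(zip(B, B_prime))                    # one of the 3! = 6 possible bijections

R_prime = {(f[x], f[y]) for (x, y) in R}     # the 'same' relation, now instantiated on B'
print(R_prime)                               # contains (1, 2), (1, 3), (2, 3)
```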

    I assume I am misinterpreting you, but even if this is clarified, my point still stands if you interpret me to be saying “you’re apparently laser-focused on an idea of *foo* reference I disagree with.” where *foo* replaces “absolute” and simply means whatever property it is you think human intentions have that computer (pseudo-)intentions don’t.

    The only thing I’m focused on is my direct first-personal evidence; there is no room for disagreement with the evidence, any theory hoping to be accepted must account for it. And my direct first-personal evidence is that of a definite mental state; but as I already pointed out, while my mental state may underdetermine its object of reference, mere structure underdetermines the mental state itself.

    We exist Platonically of our own nature as abstract structures and don’t need any physical system to instantiate us in order to exist.

The same argument works with respect to pure abstract structure. Consider Benacerraf’s famous argument against ontological Platonism: take the Peano axioms—you would probably consider them to be an abstract structure. But which structure? Well, let’s take von Neumann’s construction of the natural numbers: starting with the empty set ∅, each successive number is the set of all the previous numbers. That is:

0 = ∅,
1 = {∅},
2 = {∅, {∅}},
3 = {∅, {∅}, {∅, {∅}}},

    and so on. The resulting collection of objects is a perfectly good realization of the abstract structure of the natural numbers. But then again, so is this one (due to Zermelo, I believe, but I may be mixing things up):

0 = ∅,
1 = {∅},
2 = {{∅}},
3 = {{{∅}}},

etc. But which is the right one? And make no mistake, those are genuinely different realizations: in one, e.g., the cardinality of the sets is equal to the number they represent; in the other, it is always 1 (for every number past zero). In one, all previous numbers are elements of a given number; in the other, only the immediate predecessor is. And of course, those are far from the only possible ways to interpret the structure given by the Peano axioms.
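    To make the difference concrete, here’s a small sketch (Python, with frozensets standing in for hereditarily finite sets; the helper names are mine, purely for illustration):

```python
# The two constructions of the naturals sketched above: both realize the same
# abstract 'successor' structure, but differ as sets, e.g. in the cardinality
# of a number, and in whether 1 is an element of 3.

EMPTY = frozenset()

def von_neumann(n):
    """n = {0, 1, ..., n-1}: each number is the set of all its predecessors."""
    s = EMPTY
    for _ in range(n):
        s = s | {s}            # successor(s) = s with s itself adjoined
    return s

def zermelo(n):
    """n = {n-1}: each number is the singleton of its predecessor."""
    s = EMPTY
    for _ in range(n):
        s = frozenset({s})     # successor(s) = {s}
    return s

print(len(von_neumann(3)), len(zermelo(3)))   # 3 1
print(von_neumann(1) in von_neumann(3))       # True: 1 is an element of 3
print(zermelo(1) in zermelo(3))               # False: only the immediate predecessor is
```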

So even within a mere structure, any given object (a substructure) can be interpreted in infinitely many different (structural) ways; there is no way to pin down its ‘true nature’, so to speak (beyond the question of cardinality). So the same thing goes for any hypothetical structures you wish to interpret as conscious beings: they can be interpreted any number of different ways, as completely different structures, with equal validity. So you can’t even say that the interpretation as conscious beings is out there, invoking some sort of Platonic anthropic principle and claiming that that one’s then just the one we experience; anything you would point to as being a conscious being is so only by virtue of some special interpretation—yours.

    You couldn’t interpret a stone as subvening your mind without also interpreting the stone to be subvening the world you perceive. Your intentions would correspond to objects in that world.

    All I have access to is my state of mind (I think we agree on that); so there would be no need to subvene anything more than that state. It would still contain my exact same phenomenology, but its objects would not refer to anything without; and it is this reference you (or Sergio) were proposing to use to fix what was going on inside, i.e. choose the interpretation such that these references come out true.

    This is not to say that we don’t experience, but rather that experience is a functional thing, and so things that function as we do have the same experience.

    And once more, I’m very happy to accept this, provided you can give even a hint towards an account of how what I take myself to experience—say, the monotonously red field of view from above—emerges from the functional account; or why anything definite should emerge at all, at least.

    That is, on my account, we can explain why it is we think there is something to be explained, even though there isn’t (since we can see that a faithful simulation of a human would ask much the same questions).

    And it would ask them for the same reason: it has phenomenal experience that can’t be accounted for in terms of mere structure. Now you might want to say, well then that proves it, because in fact, the simulation we know to be mere structure—but that again presupposes that your account is right, that something being mere structure makes sense. But consider once more the stone argument: that there is a simulated version of you or me in that computer is merely due to an arbitrary interpretation that we impose upon the object (structural or physical) we use to implement that computation; that is, its intentions etc., such as they are, are ultimately grounded in ours, and we’ve gone in circles again. Only if our intentions are in fact themselves reducible to the computational—the thesis you ought to be arguing for—does what we interpret our simulations to say carry any weight at all.

    The structure is accountable for in third-person structures. To know what it is like to be that structure is simply to be that structure.

    This is simply a re-statement of the problem, not a solution. That being something—whether it be structure, object, function, atom, or brain—comes with a first person perspective, with a way it is like to be that thing, is exactly what one sets out to explain when one explains consciousness.

    Because consciousness is only possible with complexity.

    But parts of it are very simple. Why should my simple experience of a red visual field be realized using any sort of opaque complexity? (Sci, I think, linked to Clifton’s empirical case against materialism upthread; I think he does a very thorough job at exposing how flawed this position of hiding behind a wall of complexity really is.)

    And I will continue to do so! Again (and I’ve made this point before) I’m not trying to prove my stance, I’m trying to defend it.

    Sure, but guarding against the allegation of circularity—a fatal inconsistency in any view—by merely being circular in response simply doesn’t do any work.

    Arguments which only work if my stance is wrong can be put aside. To insist on them is begging the question (just what you think I am doing).

    It’s the other way around: I propose an argument, against which you try to defend yourself; but the defense only works if your view is right, hence, it’s ineffective.

    A representation of an object is some neural state, say.

    Well, then answer me this: what is 00101101111001 a representation of? And what is any given neural state a representation of? It’s just as much an arbitrary string of symbols. You’ll probably want to say that the neural state is about whatever brings it about, but then again, you’re merely attributing intentionality by projecting your own: only if the external world is known (e.g. to you), and the neural state is known, can you correlate neural states to external circumstances. Otherwise, since the organism that has that neural state does not have access to the external world other than via this neural state, to him, what the state is about is left wholly open, since it could represent absolutely anything. But this is not what we find.
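    For concreteness (a throwaway sketch, nothing hangs on the particular decodings): the very same string yields entirely different ‘referents’ under different decoding conventions, and nothing in the string itself picks one of them out.

```python
# One bit string, several incompatible 'readings'; the string alone does not
# determine which, if any, is the intended one.

s = "00101101111001"

print(int(s, 2))          # read as an unsigned binary integer
print(int(s[::-1], 2))    # read with the opposite bit order: a different number
print(s.count("1"))       # read as a tally of marks: yet another 'referent'
```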

    By being more tightly coupled to the representation of the concept of definiteness rather than that of indefiniteness.

    But if it is indefinite, how can it have some coupling (a definite property) to anything?

  92. Hi Jochen,

    I’m not going to have the bandwidth to keep up with you, especially if you’re posting comments before I finish replying to the earlier ones.

    That said, I enjoy the conversation and would not want to leave you hanging.

    You can contact me via the form on my mostly inactive blog (which should be linked on my name this time) or on my mostly inactive Twitter account @Disagreeable_I. I think it might be interesting to schedule a Skype call or something. I’d be happy to discuss your views as well as mine.

Yeah, sorry for butting in early there. I was bored at the computer when your reply came, so I just shot off some quick remarks (which, as usual, bloomed into basically a full essay’s worth of word salad).

    Anyway, you’re probably right to call curtains on this one—we’re not really moving forward, and each of us probably feels like the other doesn’t really address his points. Nevertheless, as some sort of closing statement, I think I’ll briefly (well, as briefly as I can manage) list those points I think your account still doesn’t adequately address. So, without further ado:

    1. The connection with the evidence, i.e. experience: Whatever else the merits of your approach, it certainly must be able to explain why we experience the things we think we experience; otherwise, it’s just not providing an answer to the question we’re asking. But I think you’ve so far mainly been evasive on this question; certainly, a simulated being would ask the same questions—but, on your account, it would ask them for the same reasons as I do, having the same experience derived from structure, and thus, has exactly the same point to make in wanting to know just how that experience derives from structure; on my account, it similarly asks the same questions for the same reason, having the same (non-structural) phenomenology, and wondering how you suppose this could come about due to mere structure.

    So in both cases, merely pointing to the simulation doesn’t provide an answer—why the simulation takes itself to experience its phenomenology is just as mysterious as why we do. You’ve pointed to simply being some structure as answer, but that likewise just restates the question: Why does being some structure come along with having subjective first-person phenomenology? It’s the same question the physicalist (etc.) faces—why should being a bat feel like anything, or feel the way it does? How is ‘feeling like’ accounted for in structural terms (or physical terms, etc.)?

    Or, as I put it in my last post: What is the structural underpinning of the uniform experience of a red field of vision? It seems easy to me to point to structures supervening on different items of phenomenology—the structural relationships between different parts of my field of vision, etc.—but for those primitive experiences themselves, I just don’t see how they come about from pure structure.

    And I think this is an account that you simply must provide in order to make your view plausible, because in some way, whether by being a structure, by instantiating it, or whatever, these subjective experiences must emerge from structure, just as the macroscopic properties like solidity emerge from the atomic structure of my table. You want to provide an explanation; but I think you haven’t yet provided a story of how it is supposed to explain.

    2. Structure only allows us access to questions of cardinality. From Peter M. Ainsworth, Newman’s Objection:

    For example, being told that a system has domain D = {a, b, c} (where a, b, and c are arbitrary names for three distinct but unspecified objects) and instantiates a relation R = {(a, b), (a, c), (b, c)} tells us no more than that the system consists of three objects, because some elementary set-theory reveals that any three objects instantiate seven non-empty one-place relations, 511 non-empty two-place relations (of which R is one) and 134,217,727 non-empty three-place relations. Being told that they instantiate R is both trivial (insofar as it follows from some elementary set-theory) and perversely specific (insofar as R is just one of the 134,218,245 non-empty relations they instantiate). Thus being told that the system has structure (D, R) is being told no more than that it contains three objects, because any system containing three objects can be taken to have this structure, along with a vast number of other structures (any tuple whose first member is D and whose other members are amongst the 134,218,245 relations instantiated by the members of D is a structure that can be taken to be possessed by any system containing three objects).
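    (Just to check the arithmetic in the quoted passage: counting an n-place relation on D as a non-empty subset of D^n, the numbers come out exactly as Ainsworth states.)

```python
# Non-empty n-place relations on a 3-element domain D: subsets of D^n, minus
# the empty one, i.e. 2**(3**n) - 1 of them.

one_place   = 2 ** 3  - 1    # 7
two_place   = 2 ** 9  - 1    # 511
three_place = 2 ** 27 - 1    # 134,217,727

print(one_place, two_place, three_place)
print(one_place + two_place + three_place)   # 134,218,245 non-empty relations in total
```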

This is the source of the underdetermination of the mental state itself: there is no definite mental state that is singled out over all other mental states (with the same cardinality of symbols). This is very different from how I (and, I suspect, you) experience my own mental state: there certainly is a definite mental state—I could not interpret it as being anything other than it is—even though what this mental state represents may be underdetermined to some degree. The underdetermination is just on ‘one side’ of the relationship between my mental content and the things it refers to, namely in those things it refers to; but the underdetermination in the case of having access exclusively to structure is on both sides, not even pinning down the things I take to be present in my experience.

And again, as pointed out above, even an ‘everything is structure’ view doesn’t single out anything more definite (i.e. attempting to argue that while the structure doesn’t tell us anything about the domain, it still tells us about the structure itself): the Peano axioms can be realized using both von Neumann’s and Zermelo’s constructions, which will yield different answers to certain questions, and hence, are genuinely different; and in fact, there are infinitely many other ways to realize this structure. So there is simply no way to point to a structure and say ‘this is the structure of a conscious mind’ (or perhaps, ‘this structure is a conscious mind’), since even in terms of structure, structures aren’t uniquely definite objects; under the interpretation you choose, it might be a conscious mind, while under the interpretation I choose, it might be something else entirely (say, balls and boxes). And also note that even providing this interpretation is dependent on you being an intentional being; hence, the intentionality of any structure thus interpreted is ultimately only derived from yours.

    Even the move that all interpretations exist simultaneously doesn’t buy you anything more: all those structures are themselves wholly indefinite objects, and thus, you can’t point to one and claim that it is a conscious being without yourself providing the interpretation that sees it as one.

3. The empirical case against belief* being belief I mentioned two posts ago. Basically, in visual masking experiments, stimuli that are available to influence decisions—and hence, would lead one to judge the subject to have corresponding beliefs* towards them—can nevertheless be unavailable to conscious experience; thus, the subject possesses beliefs* without any accompanying beliefs. Hence, the mere belief* of having subjective experience need not be sufficient for actually having subjective experience (I’ve argued for this in more detail above).

    So, basically, I don’t see how your view is supposed to be empirically adequate (1.), I don’t think it possibly can be empirically adequate (2.), and even if it were (as regards intentionality), I don’t think it would suffice for the full phenomenological spectrum of the mind (3.). These are, I reiterate, not criticisms that only hold if my personal theory of consciousness (to the extent I could claim to have one) is true, but that are merely demands any theory must meet; there is no circularity in posing them. In defense, you have repeatedly offered rebuttals that rest on the correctness of your own account; but these just don’t hold any logical force. If I say that I don’t think your account is right because of X, then pointing out that if your account is right, I must be wrong about X, simply cuts no ice—in fact, it’s just a re-wording of my criticism. You must either find independent grounds on which to resist X, or show how your theory is in fact compatible with X; otherwise, the presence of X falsifies it.

    Well, of course that got longer than I thought it would, but I think I’ll leave it like this; if you want to address any part of it, you can do so at your leisure, I promise I won’t jump the gun again.

    I’m also right now in the process of completing an overhaul of my own views, so I’ll probably take you up on the offer to contact you via your blog about this; it’ll give you a chance to get into the role of the attacker for once, which I reckon is only fair. 🙂

  94. Hi Jochen,

    I look forward to hearing from you regarding your own theories.

    Some comments on your latest comment (still haven’t read all the preceding stuff, doubtful I will now, I guess, since you’ve highlighted the most important stuff).

    1. I agree that my favourite answer to your challenge probably does not work if you reject computational functionalism but still believe that a computational simulation of a conscious person would itself be conscious. This answer, the one that insists that you could ask the same questions of your simulated self, is supposed to counter the standard anti-computationalist viewpoint which would deny that the simulated self is conscious.

    I don’t yet understand how your view is not simply computational functionalism but I look forward to hearing your explanation. It may be that I will agree with your account and regard it as compatible with my own. Perhaps we’re using terms differently or something.

If we can in principle see that a simulated entity would have the same questions (and even agree that it must be conscious), and you still think that there remains a question to be answered, then to me that question is either the non-philosophical problem of organising and understanding how to think about the complex information processing we are doing (i.e. making the way the brain processes information more comprehensible), or something akin to that famous question of Nagel’s: “What is it like to be a bat?”. My view of that latter question is that it is meaningless as usually interpreted and that the only way to “know” it is to actually be a bat. What is needed to answer the question is not information or knowledge in the usual sense but just the state of being a bat.

I think being a structure such as we are comes with first-person phenomenology because it is impossible that it could not. We need to be able to remember and reflect upon our experiences, and so we need handles or labels by which to refer to those experiences and sensations. This is all qualia are. We cannot perceive them as what they really are (e.g. the firing of neurons in a certain pattern) because they are simply states of mind and we have no access to how these states are realised. The disconnect between what we know they are and what we have access to from a first person perspective is the source of the qualia problem. As such the idea that they are not simply the firing of neurons is an illusion akin to the way the blind spot on the retina is usually hidden from us (the way things are is not actually what we have access to, even though it often seems that way). I do not think it is conceivable that we could act and behave as we do and have no qualia at all (blindsight being an interesting partial example to the contrary) because this would mean not being able to reflect on or consciously identify what we are sensing. Qualia are labels and we need those labels to have intentions towards sense data.

    2. On whether structure gives information beyond cardinality or not.

    Let me agree that, strictly speaking, being told that physical system X instantiates abstract structure Y doesn’t tell you a whole lot about X beyond cardinality. It may lead you to suspect that X bears an obvious physical similarity to Y (e.g. being told that the object instantiates a triangle, you may expect it to have three sides), but this suspicion may be false if the interpretation is one of those daft rock-fever-dream ones conjured up for the purposes of philosophical thought experiments.

    But I think the paragraph from Ainsworth provides nice fodder for why I reject his view. He is right that R is just one relation from a large set of relations (an infinite set if relations can have an arbitrary number of parameters), but the choosing of R from this set is informative. It’s telling you something more than just the set of all possible relations. Similarly, choosing an image from the set of all possible grayscale 256-colour images of size 1024×1024 pixels is giving you information — (it’s worth at least a thousand words 😉 !). Just defining the set of all possible such images tells you very little.

    It is this “perversely specific” choice that tells you more than mere cardinality. This is not information about a physical object X but information about an abstract structure Y.
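    For a rough sense of scale (my own back-of-the-envelope numbers): picking one particular image out of the set of all 1024×1024 images with 256 grey levels singles it out from 256^(1024×1024) possibilities, which works out to about a megabyte of information, whereas merely knowing the set of all such images tells you next to nothing.

```python
# Information conveyed by one specific choice from the set of all 1024x1024
# images with 256 grey levels: log2 of the number of possible images.

import math

pixels = 1024 * 1024
levels = 256
bits = pixels * math.log2(levels)     # 8 bits per pixel
print(bits / (8 * 1024 * 1024))       # 1.0 MiB for the particular choice
```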

    This is why the mental state itself is not underdetermined. Because the mental state is defined by the choice of relations from the possible set of relations. The mental state is the abstract structure created by making those choices. The other possible structures you could form in this way are not the mind we are interested in but they also exist Platonically.

    I am not particularly well versed in the foundations of mathematics so I can’t discuss the Peano axioms in detail. But if it is the case that there are two systems which are compatible with the Peano axioms and these systems are different with respect to certain questions, then we simply have three different systems, two more complex ones which include the simpler embedded within them. The same kind of thing is probably true for all structures. If I identify a particular structure as being my mind, then that mind (Peano) could be embedded on a brain (von Neumann) or on a computer (Zermelo). In either case it’s still the same mind, but at a level of detail irrelevant to the mind there are differences. So the underdetermination you are concerned with is simply not a problem as far as I see.

3. The empirical case against belief* being belief.

    I’ll come back to you on that when I have a chance to read what you wrote before.

    I continue to think you’re misusing criticisms of circularity. You don’t criticise a theory in pure mathematics by pointing out that if its axioms were wrong or different the theorems are no longer true. I’m advancing a view which I think is self-consistent and consistent with the evidence. If I am correct in this, then it cannot be ruled out. Of course that doesn’t mean that it is actually true — to insist so would indeed be circular. The argument that it is actually true is one grounded in parsimony and so on.

  95. This answer, the one that insists that you could ask the same questions of your simulated self, is supposed to counter the standard anti-computationalist viewpoint which would deny that the simulated self is conscious.

But even then, at least for an epiphenomenalist, it could be resisted, since the simulation might just be wrong and not possess any subjective features. Regardless, I think there are numerous ways to not be a computationalist and still think that a simulation might be conscious/ask the same questions as a conscious being: it could possess derived intentionality, for instance, since ultimately only the user singles out what any simulation actually is about; and then, its questions would ultimately be about the consciousness of the user. It could be conscious thanks to its interaction with the world; you might want to say that one could just simulate a world alongside it, but then, we don’t actually have access to anything the simulation is asking. It could be conscious thanks to being instantiated on a physical substrate whose properties allow for consciousness. It could be conscious because some sort of dual-aspect theory is true, which leads to computations of the right sort being associated with, but not the cause of, phenomenology. And it might even be the case that no such simulation is possible, if there is some significant nonalgorithmic aspect to minds that guides our behaviour, so even assuming this possibility goes somewhat beyond what I think one is licensed to assume.

    So I think that just pointing to the simulation showing the same behaviour as the original is not really an argument for computationalism at all—either the simulation might not be conscious, or even if it is, it might not be conscious because of, but maybe even despite of, being a simulation.

But my real question was that even if there is a conscious simulation of me, this doesn’t do anything to settle the question of how both my simulation and I are conscious; i.e. how it is that consciousness arises from a structural underpinning. You’ve started to try and sketch an account in this direction (for which I’m very thankful), so let’s have a look at that.

I think being a structure such as we are comes with first-person phenomenology because it is impossible that it could not. We need to be able to remember and reflect upon our experiences, and so we need handles or labels by which to refer to those experiences and sensations. This is all qualia are.

    I think you have to elaborate on that. You say that it is impossible that a structure could come without a subjective viewpoint; to me, it seems very possible. In fact, equating qualia with labels seems to reduce them to something that doesn’t have any qualitative dimension: a label is ultimately little more than a name, and just giving something a name does not seem to me to go along with any sort of phenomenology. It just eases referring to something, but my mental states don’t merely refer, they feel like something. How this arises is what interests me.

    I do not think it is conceivable that we could act and behave as we do and have no qualia at all (blindsight being an interesting partial example to the contrary) because this would mean not being able to reflect on or consciously identify what we are sensing.

    Maybe, but what’s your reason for thinking so? For any given behaviour, I can easily come up with a story that explains it in terms of simple cause-and-effect, without any need to refer to there being any subjective dimension at all. Now I understand you’ll want to argue that this cause-and-effect in some way just is our qualitative experience, but to me, this is just capitulation before the mystery: I want to know how, in the same way as I would not be satisfied that the immaterial, empty nature of atoms just is the solidity of my table; it is true, but I would not believe it merely on that assertion, but rather, ask for an argument (which can, of course, be given in this case, thus making the atomic explanation plausible).

    He is right that R is just one relation from a large set of relations (an infinite set if relations can have an arbitrary number of parameters), but the choosing of R from this set is informative.

I think you then misunderstand his argument, because what makes the specificity of R perverse is just that it doesn’t yield any additional information. You think of R as being one specific relation, but it’s not—a, b, c are just labels, and under re-labeling, you get a relation between different objects of D. That is, if you make the replacement a -> b, b -> a, c -> c, then you still have the same structure, but instantiated differently on D. That is, the specification of R only tells you that a relation of this form exists; it doesn’t tell you that the objects a, b, and c really stand to one another in that relation, merely that you can consider them to. But once again, that consideration is arbitrary. Hence, all that the specification of R tells you (that there is some relation of this form) is what you already knew from the cardinality of D.
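    Here is the relabelling spelled out as a toy example: permuting the elements of D carries R onto another relation of exactly the same shape, so specifying R only tells you that some relation of this form can be found among any three objects.

```python
# Relabelling a -> b, b -> a, c -> c maps R onto an isomorphic relation on the
# same domain; R picks out a 'shape' of relation, not which objects stand in it.

D = ["a", "b", "c"]
R = {("a", "b"), ("a", "c"), ("b", "c")}

perm = {"a": "b", "b": "a", "c": "c"}
R_relabelled = {(perm[x], perm[y]) for (x, y) in R}

print(R_relabelled)         # contains ('b', 'a'), ('b', 'c'), ('a', 'c')
print(R_relabelled == R)    # False: same structure, differently instantiated on D
```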

Similarly, choosing an image from the set of all possible grayscale 256-colour images of size 1024×1024 pixels is giving you information — (it’s worth at least a thousand words 😉 !). Just defining the set of all possible such images tells you very little.

Here you again introduce your own intentionality. Your perception introduces a mapping from that picture to some internal state of your brain. Changing around some wiring, a different set of pixels may elicit that same internal brain state. That is, the picture you see is ultimately contingent on how you view it; and if you leave the mapping arbitrary, that is, if you don’t introduce external intentionality, then indeed all you get out is the set of all possible images.

    But if it is the case that there are two systems which are compatible with the Peano axioms and these systems are different with respect to certain questions, then we simply have three different systems, two more complex ones which include the simpler embedded within them.

    Well, there are infinitely many possible ways to instantiate the Peano axioms, that’s the point. That is, there is no sense in which you can point at a structure and say, ‘that structure instantiates the Peano axioms’; this is always dependent on some choice of interpretation you make. Similarly, there is then no sense in which you can point to some structure and say ‘that structure instantiates a mind’—if it does, it only does so by virtue of your interpretation.

    I’ll come back to you on that when I have a chance to read what you wrote before.

    I’ll come back to you on that when I have a chance to read what you wrote before.

    You don’t criticise a theory in pure mathematics by pointing out that if its axioms were wrong or different the theorems are no longer true.

    But we don’t have a theory in pure mathematics here, but a theory which must account for some empirical evidence. And I know that you want to view everything as some mathematical structure, but even there, it’s the same: if some mathematical structure is too weak to prove what you want to prove, then you look for a different structure.

However, what you’re doing is answering the criticism ‘your theory is wrong because of X’ with ‘but if my theory is right, you must be wrong about X’. Take some familiar cases. The Newtonian theory of gravity does not account for the perihelion rotation of Mercury. Hence, it was superseded by general relativity; nobody said, but since Newtonian mechanics is right, we must somehow be wrong about the perihelion rotation. Likewise, classical mechanics can’t account for the stability of atoms (or, indeed, the solidity of tables); hence, Bohr introduced the quantization hypothesis in order to fix this problem. Nobody argued that we’re wrong about the stability of atoms.

Of course, it might have been the case that we were wrong about Mercury’s perihelion. Perhaps either the observations were flawed, or it is compatible with Newtonian gravity after all. But then, an argument must be found according to which the observations can be explained within the confines of the old theory; and if the theory is unable to do this, then it must be discarded. In any case, the mere circular insistence that from the theory’s being correct it follows that the observations must be wrong just doesn’t lead us anywhere.

  96. Whoops, sorry, copy-and-paste fail—beneath your quote “I’ll come back to you on that when I have a chance to read what you wrote before.”, I meant to write “Take your time, don’t feel rushed by me answering so soon.”.

  97. (I think I should have been more careful in my discussion of the greyscale image: what you have, on a purely structural account, of course are just the relationships between those pixels; given one interpretation, that will be a particular image, but sans interpretation, it’s indeed the same as the collection of all possible images.)

  98. Hi Jochen,

    Your empirical case against belief* being belief rests on the empirical evidence suggesting that there are situations where we behave as if we hold certain beliefs but are not aware of holding them.

    There are a couple of ways this can come about, but you seem to be focused on cases where something akin to the belief is actually embedded in the brain state somehow (e.g. through priming), but we are not conscious that this is the case. I’ll focus on this but there are other situations which may also need an answer, e.g. interpreting behaviour to relate to one belief when it is actually better explained by another.

    If I can hold that a thermostat can hold beliefs, then my rebuttal should be relatively clear. Different parts of the brain can hold beliefs too. So, in this case, the priming sets up a belief (or a hankering, or a predisposition, or some other intentional state) in some part of your brain the internal workings of which are hidden from your conscious mind, which I see as a higher level process. Your conscious mind (i.e. those parts of your mind responsible for producing sentences describing introspection and so on) has no intentions towards the masked stimulus, but parts of your mind on which it depends do.

    I think you might well interject here with the point that now I am introducing this magic “consciousness” fairy dust that makes the conscious mind conscious and those lower-level submodules unconscious. But that is not what I am doing at all. When I say your conscious mind, I literally mean just that higher level process that integrates information from lower level processes so as to form a personal narrative.

    It could even be the case that those lower level processes are also conscious — if they were you wouldn’t be aware of it because you have no access to how they work, only to the effects of their output (e.g. when suddenly inspired with an idea you don’t typically know where the inspiration came from). It is consistent with my view that parts of your mind could be made up of homunculi that each have their own minds and their own memories and qualia and so on.

    To be clear, I don’t actually find this plausible. I don’t think these submodules are actually conscious because I reserve the word ‘consciousness’ for a description of the attributes of something akin to a high-level human mind, and it seems improbable that any submodule of the mind would be organised similarly to the mind as a whole. I see no reason to believe in such a fractal architecture.

    But even though they are not conscious in this sense I don’t see why they couldn’t have intentions.

    TLDR: beliefs are just beliefs* that the high-level personal-narrative-generating structure of your mind has access to. Lower level beliefs* would also be beliefs in this sense if we were concerned with access at lower levels of organisation rather than the highest, i.e. they could be held to be the beliefs of hypothetical homunculi.

  99. Hi Jochen,

    To follow up on some more recent points:

    > Regardless, I think there are numerous ways to not be a computationalist and think that a simulation might be conscious/ask the same questions as a conscious being

    Sure. What I’m saying is that I think a computationalist is someone who thinks that any system which carries out the same computation that his mind is carrying out, irrespective of substrate, is necessarily conscious in the same way that his brain is. You’re mostly talking about situations concerning systems which are computing other functions which happen to have similar behaviour.

    I would go farther and say that I don’t think it is possible for a system to behave as intelligently and adaptively as a human does and with the same level of apparent self-awareness without a conscious algorithm being involved at some point, even if only to produce a massive lookup table. I am not 100% sure about this but I am quite confident.

    You seem to be both denying that you are a computationalist and agreeing with me that any system which behaves as a human does (and presumably especially any system which processes information in a strictly analogous way) would be just as conscious as we. I can’t reconcile those two claims. It seems to me that you are just a computationalist who at the same time thinks that computationalism is indefensible. Of course that’s just a seeming. No doubt when you actually explain your views I’ll understand where I’ve gone awry.

    > So I think that just pointing to the simulation showing the same behaviour as the original is not really an argument for computationalism at all

    That’s not quite what I was saying. I was saying that if the anti-computationalist holds that it is possible for a simulation to process information in a way precisely analogous to how he does so himself, then he will have a hard time explaining to his simulated self why it should not be deemed to be conscious. If he can’t point to some lookup-table or ELIZA-type trick, it seems some sort of special pleading (for instance the appeal to biological neurons) would be needed. My suggestion is that it is actually more parsimonious to accept that he is actually in the same position as the simulated self — he can’t know that he isn’t because any evidence he can provide to the contrary has an analogue from the point of view of the simulacrum. So each one of the questions for which you find my answers unsatisfactory would need an answer from that anti-computationalist to account for how he can justify his hunch that his simulated self is not conscious, e.g. how it could seem to it that it has intentions even though it actually doesn’t. The fact that you think that the simulation also has to be conscious throws a spanner in the works for that particular tactic but leaves me bemused as to your claim not to be a computationalist.

    > In fact, equating qualia with labels seems to reduce them to something that doesn’t have any qualitative dimension:

    That’s more or less my intention. I think the qualitative dimension is an illusion, or at least is no more than the functional role played by qualia. Something could not be a quale and fail to seem to have a qualitative dimension, because seeming to have a qualitative dimension is just the condition of playing the functional role of a quale, i.e. being the handle or label for a particular sensation (i.e. some distilled property of sense data). You can no more articulate the content of the quale of “redness” than could an intelligent system for which “redness” was just a label, so perhaps it’s just a label for you too.

    It seems to be more though, and perhaps I can offer another account of how this can be. You can distinguish between red light and blue light. You can’t really articulate how you do it, you just know, as if by magic. In actual fact, this discrimination is down to differences in the signals passing along your optic nerve (and differences in the structural brain states arising from processing those signals), but you have no access to any of this. All you have access to is the brute, indisputable fact that you are experiencing *this* sensation rather than *that* sensation. The apparent certainty and force with which you hold this knowledge elevates it above a mere hunch, but the mysteriousness of how you come by it seems to imbue it with a mystical quality beyond the structural or propositional. It doesn’t seem like structure because you don’t have access to its structural underpinnings. It seems magical and mysterious and that makes sense given my account.

    I suspect that any system of “hunches” that was so detailed, finely tuned and directly wired into the conscious processes of our brain would manifest as qualia (because that’s all qualia are, in my view), and particularly so if associated with sensory apparatus directed at the outside world. As such I don’t think it is fanciful to portray echo-location in bats as akin to a kind of sight — it probably doesn’t feel much like hearing because the bats probably don’t have access to the audio processing that goes into building up a 3D model of their environment. Rather they have access to the model itself, much as we do. At the same time they are in no danger of confusing it for actual sight because when they process light they are in a different brain state; i.e. they can presumably distinguish between objects they can echo-locate but not see and vice versa.

    I gather that this is somewhat confirmed by anecdotes involving humans learning to process novel kinds of sensory information, for instance blind people who learn to “see” via a device which gives tactile feedback on the tongue or who learn to echo-locate by making clicking noises. All this is hard to pin down, but as far as I’m aware these cases seem to support my hypothesis that after a while they come to experience their surroundings more directly and without consciously processing the lower-level haptic or audio input, perhaps experiencing something akin to visual qualia.

    > I would not be satisfied that the immaterial, empty nature of atoms just is the solidity of my table

    I think there’s a disanalogy here because your example of the table and the atoms presupposes the concept of seeming. I don’t think there can be as satisfactory an explanation of seeming itself. Rather than appealing to concepts you understand intuitively (such as forces and repulsion and little bodies moving around in orbits and so on), what you need to do to understand consciousness and qualia (and seeming!) is to completely reorient your intuitions, and that’s not easy to do.

    This pessimism is perhaps a little like McGinn’s mysterianism, but unlike McGinn I think we already have the answer to the mystery of consciousness; it’s just that it remains mysterious because it is fundamentally at odds with our intuitions, and this is why few people can accept it. Unlike the proponents of quantum mechanics or GR, we computationalists have no experimental data to back up our unintuitive claims, nor can we ever have any. This means the debate will likely never end. If my view ever does become the consensus, it will not be through clever argumentation or empirical evidence but through exposure to sentient computers and the realignment of intuitions which struggle to see them as unconscious automatons (cf. Ex Machina).

    > because what makes the specificity of R perverse is just that it doesn’t yield any additional information

    But it does!

    > That is, if you make the replacement a -> b, b -> a, c -> c, then you still have the same structure, but instantiated differently on D

    Agreed! And if you were presented with that structure under such a relabelling, you would be presented with just the same information as originally. Which is not no information!

    Yes, it is true that {{a,b},{a,c},{b,c}} is essentially the same structure as {{b,a},{b,c},{a,c}}, but it is also true that it is not the same structure as {{a,c},{b,a},{c,c}} or {{a},{b},{c}} or {a,b,c}. As soon as you specify some kind of structure, singling it out from the infinite set of possible structures, then you are providing information.
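
    A quick Python sketch of the sameness and difference of these structures (frozensets are just a convenient way of writing the unordered pairs above; the closing count is my own addition):

    ```python
    D = {'a', 'b', 'c'}

    R1 = {frozenset(p) for p in [('a', 'b'), ('a', 'c'), ('b', 'c')]}
    R2 = {frozenset(p) for p in [('b', 'a'), ('b', 'c'), ('a', 'c')]}   # the same structure, rewritten
    R3 = {frozenset(p) for p in [('a', 'c'), ('b', 'a'), ('c', 'c')]}   # a genuinely different one

    print(R1 == R2)   # True: the order inside and between the pairs doesn't matter
    print(R1 == R3)   # False: this really is another structure on D

    # There are 2**(2**3) = 256 candidate structures (subsets of P(D)) on a
    # 3-element domain; singling out one of them is where the information comes in.
    print(2 ** (2 ** len(D)))   # 256
    ```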

    And of course the example you’re using is only one of simple unordered relations. There are many more complex kinds of relations possible in mathematics. We can even go wild and throw functions into the mix! I don’t want to get too caught up in the definition of a mathematical structure, but if we’re talking about the kinds of things that can be done with computers, then a simple static structure like this is not a very helpful illustration. We need not just relations between elements, but relations between states of a system and rules about how states evolve into successor states. Just listing the states without such rules amounts to the lookup table argument and has the same response — you need a system of rules in order to produce or define that lookup table, and it is the system of rules which can be conscious.
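
    Here is a minimal sketch of that contrast between a rule and the list of states it unrolls into; the particular rule (a counter mod 4) is an arbitrary stand-in:

    ```python
    def step(state):
        """The dynamical rule: map each state to its successor."""
        return (state + 1) % 4

    trace = [0]                       # unroll the rule from an initial state...
    for _ in range(7):
        trace.append(step(trace[-1]))
    print(trace)                      # [0, 1, 2, 3, 0, 1, 2, 3]

    # ...the trace is a lookup-table-like object: it records the states but not the
    # rule, and many different rules (or a bare table) could have produced the same list.
    ```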

    > But we don’t have a theory in pure mathematics here, but a theory which must account for some empirical evidence.

    I’m not deluded into thinking I have a theory of pure mathematics. I’m just trying to draw an analogy. If my theory can account for the evidence (and I think it can) and it is self-consistent, then you have no grounds for asserting that it is not true. Neither do I have grounds for saying it is true unless I have independent arguments to support it (e.g. parsimony or the incoherence or incompatibility with the evidence of competing views).

    > nobody said, but since Newtonian mechanics is right, we must somehow be wrong about the perihelion rotation.

    I’m sure they did! Didn’t they imagine that it could be explained by a hypothetical planet Vulcan? I’m sure you would agree they were right to consider this possibility even if it turned out to be wrong.

    This was also the reaction to the recent reports of faster-than-light neutrinos, only in that case it turned out that the evidence was indeed spurious.

    But I think the difference between those cases and this is that I don’t agree that we have any evidence that contradicts my account. We certainly don’t have any objective predictions of computationalism that have been falsified.

    So I’m not saying you are mistaken about your evidence, I’m saying your evidence does not exist. All your evidence amounts to is the assertion that you are experiencing qualia, which is just what would be asserted by a p-zombie, so it is no evidence at all.

  100. TLDR: beliefs are just beliefs* that the high-level personal-narrative-generating structure of your mind has access to.

    OK, so first point: not all beliefs* are beliefs. This is, I think, very important. Your earlier claim was that it sufficed to have beliefs* in order to have beliefs; but it seems that you no longer hold to that, since you now say that beliefs are certain special kinds of beliefs*, namely, beliefs* that the high-level structure of the mind has access to.

    But what is it this high-level structure brings to the table that makes beliefs* into beliefs? Isn’t this just basically asserting some kind of homunculus that, once the beliefs* appear in its attention, or whatever, elevates them to beliefs?

    Furthermore, I think the empirical facts open up at least the possibility that everything we do can be attributed wholly to beliefs* without any accompanying phenomenology—that is, when we make some verbal report, it might be that we merely do so due to dispositional beliefs, i.e. because some earlier stimulus that has never appeared in our conscious experience predisposed us to do so. But then, it becomes easy to imagine (it seems to me) an entity that is behaviourally indistinguishable from us, but lacks beliefs; certainly, you can’t rely on your earlier argument that this mere behavioural indistinguishability guarantees beliefs—all it guarantees are beliefs*, and we’ve now seen that not all beliefs* give rise to beliefs; so it is at least possible that this particular entity only possesses beliefs*. But then, I think, your proposed theory crumbles and fails to account for our first-personal data.

    I would go farther and say that I don’t think it is possible for a system to behave as intelligently and adaptively as a human does and with the same level of apparent self-awareness without a conscious algorithm being involved at some point, even if only to produce a massive lookup table.

    I think the visual masking experiments put this possibility in doubt—if there is behaviour equivalent to conscious behaviour, but lacking the intentional aspect, then it seems that in principle all behaviour could merely be generated due to such dispositional beliefs.

    You seem to be both denying that you are a computationalist and agreeing with me that any system which behaves as a human does (and presumably especially any system which processes information in a strictly analogous way) would be just as conscious as we. I can’t reconcile those two claims.

    I’ve brought up several ways to reconcile them—dual-aspect monism perhaps being the most common one: there, some substance (I use the term loosely) is supposed to have both mental and physical aspects, and thus, the mind just goes along with what the physical aspects do. If we now adopt something like Chalmers’ principle of organizational invariance, then every system with isomorphic organization—such as a simulation of a conscious being—also instantiates the relevant mental aspects; thus, such a simulation would be conscious, without, however, the consciousness being in any way due to the computation (in fact, you could use such a system to perform any kind of computation at all, without that impacting the phenomenology it experiences in the slightest).

    If he can’t point to some lookup-table or ELIZA-type trick, it seems some sort of special pleading (for instance the appeal to biological neurons) would be needed.

    See, my way of looking at this is the opposite: if I were convinced of merely being a simulation (and that only the computation that is being performed is a sensible candidate for underpinning my phenomenology), then I would need some argument explaining to me how it is that I have the first person experience that I seem to have; i.e. an account of how phenomenology emerges from the computation. This is what I would find convincing regarding computationalism. So, to put it pointedly, whether I am a simulation or not wouldn’t change anything for me: I’d still want to know how what I’m made from produces what I experience. Thus, this appeal to simulation just falls flat with me.

    I think the qualitative dimension is an illusion, or at least is no more than the functional role played by qualia.

    And that’s what you’d have to demonstrate to be possible in order to make your account acceptable: how the appearance of a qualitative dimension springs forth from mere functional properties. Later on, you seem to be saying that this probably can’t be answered, putting yourself in the mysterians’ camp; if that’s so, then I think the view loses what I originally found to be of interest in computationalism, namely the promise of giving a genuinely reductive account of how our inner life is constructed from the stuff of the world. Along with panpsychism, dual-aspect theories, neutral monism and its ilk, it’d then just basically accept the central mystery—there’s somehow phenomenal stuff, deal with it. Shut up and calculate. To me, that’s only a last resort kind of view.

    You can no more articulate the content of the quale of “redness” than could an intelligent system for which “redness” was just a label, so perhaps it’s just a label for you too.

    But you can in general articulate what a given label labels. “Pegasus”, for example, labels the winged horse Bellerophon rode. In that sentence, there is no residual mystery: I have articulated what is labelled by “Pegasus”. How would the analogue look for “redness”?

    You can distinguish between red light and blue light. You can’t really articulate how you do it, you just know, as if by magic.

    I can distinguish red and blue by their structural relationship, e.g. as visualized in color space. Why should there be a problem with that? The problem is not the relationship between colours, but the fact that absent any relationships, each colour still has a distinct experience associated with itself.

    I think there’s a disanalogy here because your example of the table and the atoms presupposes the concept of seeming.

    Hmm, I don’t think so. The solidity of the table can be perfectly well captured in terms of hard data that hasn’t ‘seemed’ to anybody.

    As soon as you specify some kind of structure, singling it out from the infinite set of possible structures, then you are providing information.

    But the problem is that you’re not singling out any structure. With the above example, from the existence of R we know the existence of D, as a set of some fixed cardinality. But if we have D, then we also have every other possible structure on D, as a subset of its powerset. That is, when you say ‘structure S = (R,D) exists’, then you have, as an immediate consequence, also the existence of every other structure S’ = (R’,D) possible on D in exactly the same sense. But since this is just what could be deduced from the cardinality of D, saying ‘structure S exists’ really does tell us no more than the cardinality.

    Now you might be tempted to say that somehow, structure S is special, that only R is real, or something like this, but all such attempts eventually either collapse or rely on the introduction of extra-structural knowledge (as is detailed in Ainsworth’s section 6).
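
    To make the step from ‘there is R’ to ‘there are all the other structures on D’ mechanical, a small Python sketch (the helper function and the printouts are my own illustration):

    ```python
    from itertools import chain, combinations

    # The structure in question, as a set of unordered pairs over some domain.
    R = {frozenset(p) for p in [('a', 'b'), ('a', 'c'), ('b', 'c')]}

    # Step 1: the existence of R already fixes the domain D, and hence its cardinality.
    D = frozenset(x for pair in R for x in pair)
    print(len(D))                           # 3

    # Step 2: with D given, every other structure on D comes along for free,
    # as the subsets of D's powerset.
    def powerset(s):
        s = list(s)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

    all_structures = powerset(powerset(D))
    print(len(all_structures))              # 256, fixed by |D| alone
    print(frozenset(R) in all_structures)   # True: R is just one among them
    ```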

    And of course the example you’re using is only one of simple unordered relations. There are many more complex kinds of relations possible in mathematics. We can even go wild and throw functions into the mix!

    Since all of modern mathematics is built on set theory, we can account for all of these entities in terms of sets and relations (in particular, a function is just a set of ordered pairs from domain and codomain).
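
    For illustration, here is that construction in miniature; the particular function (squaring on a small finite domain) is an arbitrary choice:

    ```python
    # A function represented as a set of ordered (argument, value) pairs.
    domain = {0, 1, 2, 3}
    square = {(x, x * x) for x in domain}
    print(sorted(square))            # [(0, 0), (1, 1), (2, 4), (3, 9)]

    # Applying the function is just looking up the pair whose first component matches.
    def apply(f_as_pairs, x):
        (value,) = {b for (a, b) in f_as_pairs if a == x}
        return value

    print(apply(square, 3))          # 9
    ```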

    If my theory can account for the evidence (and I think it can) and it is self-consistent, then you have no grounds for asserting that it is not true.

    I have no issues with that, but that doesn’t license you to try and rebuff my evidence based on the assumption of the correctness of your theory. You need independent sensible grounds to reject my evidence, or failing this, a way to incorporate it into your theory.

    All your evidence amounts to is the assertion that you are experiencing qualia, which is just what would be asserted by a p-zombie, so it is no evidence at all.

    My evidence is not merely the assertion of experiencing qualia, but the appearance that I do, in fact, experience qualia (something the p-zombie lacks, per definitionem). This is irreducibly first-personal, but since I have good grounds to suspect that you share relevantly similar experience (although it has been suggested—in jest, of course—by I forgot whom that eliminativists, in particular Daniel Dennett, might actually be zombies, thus accounting for the apparently irreconcilable differences), this is no grounds for dismissal.

  101. Hi Jochen,

    > OK, so first point: not all beliefs* are beliefs.

    Well, not so fast. If beliefs are just beliefs* that the believer has access to, then whether a belief* is a belief or not depends on what you take the believer to be. As I was trying to illustrate, your high-level beliefs are not necessarily the beliefs* of submodules of your brain and vice versa, but that doesn’t mean that the beliefs* of submodules are not beliefs in their own right if we take the believer to be those submodules rather than the brain as a whole. What I’m saying is that the difference between beliefs* and beliefs is just a matter of perspective and there is no objective ontological difference.

    > But what is it this high-level structure brings to the table that makes beliefs* into beliefs?

    So, nothing really. That’s why I was at pains to explain I wasn’t trying to introduce magic consciousness fairy dust. The only thing the high-level structure brings to the table is the fact that it is one of the participants in this conversation, and it is its beliefs and what it has access to that we are talking about.

    > because some earlier stimulus that has never appeared in our conscious experience predisposed us to do so

    Right. And, in principle we can imagine more and more of our behaviour being handled in this way until we end up with something like a p-zombie, all of the stimulus processing being handled by submodules and nothing at all making it to the conscious mind.

    But in that case I think the sub-module you’re pushing more and more of the behaviour into is essentially becoming conscious in its own right. You’re right that the former conscious mind could be sitting alongside this as a passive passenger, unaware of what this homunculus is feeling or sensing, but that doesn’t mean that the homunculus (in effect now the person itself) doesn’t have phenomenal experience even if the nominally conscious self is unaware of it.

    I said I don’t think that submodules are conscious in their own right, but in the thought experiment where a submodule can work completely autonomously without intervention by the conscious mind and can do all the things a conscious mind can do, then in that far-fetched scenario I would think it was conscious in its own right. Indeed I would regard it to be the conscious mind of that person. You’d more or less end up back where you started.

    In the same way, in a split brain patient the right hemisphere is unaware of the consciousness of the left and vice versa, but that doesn’t mean that one hemisphere is conscious and has intentions and that the other isn’t and doesn’t. Different parts of the brain can have beliefs that are only beliefs* as far as other parts of the brain are concerned.

    > I’ve brought up several ways to reconcile them

    And I don’t see how any of them do. Each one either seems to be compatible with computationalism or not to entail that something that processes information as I do would be conscious as I am.

    > some substance is supposed to have both mental and physical aspects

    So, either this substance is particular to some arrangements of matter, in which case not all things that process information as I do would be conscious, or it is universal, in which case it becomes panpsychism. I guess panpsychism is a kind of anti-computationalism, but only in the sense that it is over-broad. I usually take anti-computationalists to be those who consider computationalism to be too broad. Panpsychists would possibly think that a computer might have just the same intentions and qualia that we do, which is the view I’m trying to defend.

    > then every system with isomorphic organization also instantiates the relevant mental aspects without, however, the consciousness being in any way due to the computation

    Huh? So the thing has consciousness by virtue of its abstract causal structure, but the consciousness has nothing to do with its abstract causal structure? Because that’s how I translate both “organization” and “computation”. We must be speaking different languages.

    > I’d still want to know how what I’m made from produces what I experience.

    OK. And I think you’ll be left wanting, because it seems to me that the actual explanation is not recognised as such by you, because it’s too unintuitive. But let’s drop it. If you think that a simulation of you would necessarily be conscious, then this argument is not for you.

    > it’d then just basically accept the central mystery—there’s somehow phenomenal stuff

    Well, no, there isn’t. That’s the point! But if we can see that a system such as ourselves would necessarily believe there to be phenomenal stuff, then there is nothing left to explain.

    > But you can in general articulate what a given label labels.

    You can say things about what the label labels. That’s what you have done. Articulating the content of the label would be saying something like “P-e-g-a-s-u-s”. You can do that because this particular label is an external one used for communication with other people and you do have access to the symbols that form it. An algorithm however does not typically have access to the names of the variables of an implementation of that algorithm (without some sort of code reflection API at least). The variables are handles for data but don’t have content in their own right apart from their functional roles.

    > How would the analogue look for “redness”?

    The analogue for what you did for Pegasus would be “The sensation experienced when exposed to light of such and such a wavelength”.

    > I can distinguish red and blue by their structural relationship, e.g. as visualized in color space.

    So if I show you a red object, how do you know where it falls in color space? How do you know it is red and not blue? I suggest that you just know, and you don’t have direct access to how you do it.

    > But since this is just what could be deduced from the cardinality of D, saying ‘structure S exists’ really does tell us no more than the cardinality.

    No. We’re not concerned with which structures exist. We are concerned with which structures have a certain property (consciousness, say). A subset is more informative than an exhaustive list of all possible subsets. I explained this with reference to the image. I think you missed the point with the worry about how the image is to be interpreted. Let’s say there is no interpretation. Say I need to tell you a PIN. It is not useful for me to tell you that it is some four-digit number, i.e. one of a set of 10,000 possibilities. I need to tell you which four digits it is, specifically.
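
    To put a rough number on the PIN example (my own back-of-the-envelope figure, not anything claimed above): naming one specific PIN conveys

    $$\log_2 10^4 \approx 13.3 \ \text{bits},$$

    whereas saying only that it is one of the 10,000 possible four-digit PINs rules nothing out and conveys nothing.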

    I wrote a blog post on this topic.

    http://disagreeableme.blogspot.co.uk/2012/05/part-is-more-complex-than-whole.html

    > Now you might be tempted to say that somehow, structure S is special, that only R is real, or something like this,

    No, nothing of the sort. Instead what you are being informed of is that we are talking about R and not about any of the other structures that could be defined on D. This means that R can have properties beyond cardinality that other structures do not, and that these properties might make R interesting in ways other structures are not.

    > in particular, a function is just a set of ordered pairs from domain and codomain

    It can be defined as such but I think that definition is misleading. If you have a function from reals to reals then it is not possible to list out all those pairs, so it is simply not possible to define the function as such a set even if you had infinite time. Instead you need to define the set according to some rule (the function, in essence) and only then can you use the set to define the function.

    > You need independent sensible grounds to reject my evidence, or failing this, a way to incorporate it into your theory.

    What is sensible is in the eye of the beholder, I guess. Again, parsimony would be these grounds. Anyway, I think I have been incorporating it into my theory, to the extent that you have evidence at all, which is not so much.

    > This is irreducibly first-personal,

    And that’s a problem, because I don’t think good evidence is irreducibly first-personal. Yes, it also appears to me that I am experiencing qualia, but since it would appear* to me that I was experiencing* qualia even if I were a p-zombie my own testimony is no more use to me than yours. I have no good grounds for believing myself to be more than a p-zombie, and that being the case I have no grounds for believing that the distinction between conscious people and p-zombies or appearances and appearances* is even meaningful. The whole thing is a confused mess arising out of the impossibility of constructing first-person experiences from third-person descriptions.

  102. As I was trying to illustrate, your high-level beliefs are not necessarily the beliefs* of submodules of your brain and vice versa, but that doesn’t mean that the beliefs* of submodules are not beliefs in their own right if we take the believer to be those submodules rather than the brain as a whole.

    The conclusion remains the same: there are beliefs* without any accompanying beliefs, i.e. stimuli that influence the behaviour of subjects in visual masking experiments without ever having been consciously registered, and hence, I see no reason to believe that juggling around beliefs* could ever lead to beliefs.

    The only thing the high-level structure brings to the table is the fact that it is one of the participants in this conversation, and it is its beliefs and what it has access to that we are talking about.

    But why is there any apparent (it seems I need to do the disclaimer thing again…) phenomenology associated with its beliefs, and not with those dispositional beliefs* arising in visual masking experiments?

    But in that case I think the sub-module you’re pushing more and more of the behaviour into is essentially becoming conscious in its own right.

    I know that you think that’s going to be the case, but do you have any reason for thinking so? If some beliefs* fail to give rise to beliefs, what is your reasoning for thinking that beliefs* could ever give rise to beliefs?

    Each one either seems to be compatible with computationalism or not to entail that something that processes information as I do would be conscious as I am.

    Well, if you were to counterfactually remove the mental pole of events in dual-aspect monism, then you’d still be performing the same computations, but without any attendant mental experience. Hence, it’s not computationalism.

    So the thing has consciousness by virtue of its abstract causal structure, but the consciousness has nothing to do with its abstract causal structure?

    No, it doesn’t have consciousness by virtue of its abstract structure; it’s merely that every thing with a certain organization has consciousness associated with it. Think of a set of bowling pins, each of which has a red dot on it: every thing that is a bowling pin also has a red dot on it; but it does not have a red dot on it by virtue of being a bowling pin, it’s just that it has extra attributes besides.

    Well, no, there isn’t. That’s the point! But if we can see that a system such as ourselves would necessarily believe there to be phenomenal stuff, then there is nothing left to explain.

    You seem to want to have it both ways here: being able to explain how it is that we take ourselves to have phenomenal experience, and claiming that we never can know how this apparent experience arises. But either you can tell a story of how it comes that we seem to have these experiences—then there is truly nothing left to explain. Or, you can’t—then, the mystery remains.

    And I’m still not on board with the idea that all that is there to explain is the belief (or rather, belief*) that we have phenomenal experience: from the data on visual masking experiments, it seems by now obvious that we can have this belief*—acting in a way that would convince everybody that we do—without there actually being some red-experience, or what we take to be a red-experience.

    The analogue for what you did for Pegasus would be “The sensation experienced when exposed to light of such and such a wavelength”.

    That stone over there doesn’t seem to have any red experience when exposed to light of the right colour… And neither does my zombie twin.

    So if I show you a red object, how do you know where it falls in color space?

    If you show me a stone, how do I know where it falls in real space? Only by being put in relation with some sort of reference. Likewise, red falls into colour space only in relation to other colours—this is what I meant by the structural relationship between colours that I can use to tell them apart.

    What I can’t derive from these structural data, however, is the redness of red (the structural relationships, for instance, might be conserved by a colour inversion).

    We are concerned with which structures have a certain property (consciousness, say).

    And either that’s a structural property—in which case, you’ve won nothing. Or, it’s a non-structural property—but of course, that’s incompatible with your metaphysics. No way of trying to single out some structure as ‘special’ will work, as extensively discussed by Ainsworth.

    A subset is more informative than an exhaustive list of all possible subsets.

    Of course. It’s just that with this one subset, you get all the others along, as well, and on equal footing. Hence, you get told nothing more than what cardinality tells you.

    If you have a function from reals to reals then it is not possible to list out all those pairs, so it is quite not possible to define the function as such a set even if you had infinite time.

    Not sure what you’re getting at here. Infinite sets, and even sets whose cardinality exceeds that of the reals infinitely, in whatever way you want to interpret that, are no problem at all. The point is merely that all mathematical objects can be built up from sets and relations thereon; just go back to the set theoretical construction of the natural numbers above (either one). And once you have that, you also have Gödel numbering, which can be used to formulate every effectively formalizable theory as a sub-theory of the natural numbers, and there you go. So an appeal to different mathematical structures doesn’t yield anything new.
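
    For concreteness, the two standard set-theoretic encodings of the naturals (presumably the constructions being referred to) can be written as

    $$\text{von Neumann: } 0 = \varnothing,\quad n + 1 = n \cup \{n\}; \qquad \text{Zermelo: } 0 = \varnothing,\quad n + 1 = \{n\},$$

    so that the naturals, and via Gödel numbering anything effectively formalizable over them, are built out of nothing but sets.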

    Again, parsimony would be these grounds.

    Parsimony can only be a guide in adjudicating between empirically equivalent theories, and only concerns theoretical entities that need to be postulated. It’s not really about simplicity or any such vague notions.

    Anyway, I think I have been incorporating it into my theory, to the extent that you have evidence at all, which is not so much.

    OK, so how does, on your theory, my experience of a uniformly red field of vision come about?

    The whole thing is a confused mess arising out of the impossibility of constructing first-person experiences from third-person descriptions.

    This is still strange to me. If that is in fact impossible, then what makes it so? How does this irreducible first person arise? Is it just an ontological primitive of the world? And in what sense is thinking so not merely agreeing that well, consciousness is mysterious?

    I would have thought that explaining the origin of subjectivity, of a first-personal viewpoint, is precisely what a theory of consciousness should be tasked to do, but this seems to be at odds with what you wrote…

  103. Hi Jochen,

    > The conclusion remains the same: there are beliefs* without any accompanying beliefs

    But that’s not my conclusion. I think beliefs are just intentional states and beliefs* are just intentional states, and that beliefs* are just beliefs. In all cases. So there can be intentional states held in parts of your mind which are not accessible to your higher level mind.

    > i.e. stimuli that influence the behaviour of subjects in visual masking experiments without ever having been consciously registered

    A thermostat doesn’t consciously register its beliefs either. To consciously register something you need to have introspection of some sort and so access to some kind of description of how it is you are processing information. You probably also need memory and a few other tricks of that nature.

    So, to me, a belief is purely an intentional state. Conscious things can have beliefs and unconscious things can have beliefs. Conscious things can also have beliefs of which they are unconscious. You are only conscious of a belief if you have direct access to it, for instance having the belief that you have a belief.

    > why is there any apparent (it seems I need to do the disclaimer thing again…) phenomenology associated with its beliefs, and not with those dispositional beliefs* arising in visual masking experiments?

    How do you know there isn’t? How do you know some part of your visual cortex does not experience qualia? That being a submodule of your mind, you would have no way of knowing.

    But I’m not proposing that it does, because the visual cortex is not organised like the brain as a whole. It doesn’t have a memory or the ability to introspect (presumably) or hold second order beliefs about what it believes. So the functional roles played by its internal states are not directly analogous to the functional roles played by states at higher levels of organisation, and so there is little reason to suppose it experiences qualia as the mind as a whole does.

    > Well, if you were to counterfactually remove the mental pole of events in dual-aspect monism,

    Either this is incoherent (i.e. there is no possible world in which this could be done) and any system organised so would be conscious (i.e. functionalism) or it is coherent and there is no logical necessity that a system so organised would be conscious (and so little reason to insist that a simulated person necessarily has to be conscious).

    > No, it doesn’t have consciousness by virtue of its abstract structure; it’s merely that every thing with a certain organization has consciousness associated with it.

    What do you think organization is if not an abstract structure? You can project different organizations onto things just like you can project different computations on them. A ball can be seen as a single entity or it can be interpreted as being organised into two hemispheres or as being organised as an interior and a surface etc.

    I think this point is key. You seem to think of computations as a series of discrete states, but that is not at all how I think of them. I think of them more in terms of this kind of organization thing you seem to be talking about. It’s the organisation of causal events within them that constitutes the structure I am talking about. As I said on the other thread, computations are how you define or bring about the series of discrete states — i.e. algorithms. Computations are not the outputs of computations, e.g. certain patterns of LED lights.

    All the times I have rejected your analogies because they were too static, or like marks on a page, I think this is what’s going on. I think we have radically different views of what computation is supposed to be. But if your own views attribute consciousness to “organisation” rather than what you call computation, then perhaps we are more substantially in agreement than we realised and we have been talking past each other.

    > being able to explain how it is that we take ourselves to have phenomenal experience

    Yes. Functionalism can explain in principle how it is that we believe* ourselves to have phenomenal experience. All we need to do is understand how the brain works from a third person perspective. That is not so easy in practice. But in the abstract we can see that a faithful simulation would necessarily implement all the same functions as a human brain, including claiming to have phenomenal experience and so on. As such, it is logically necessary that any information-processing system which functions as a human brain does will believe* itself to be having phenomenal experience. What I cannot prove, but believe on grounds of parsimony, is that belief is belief*; but if this is so, then it follows that it is logically necessary that we believe ourselves to have phenomenal experience, and so the “evidence” that we have phenomenal experience is explained and accounted for.

    Logical necessities do not need further explanation. So, *if* the hypothesis that belief is belief* is true, then there is nothing further to be explained. As such, my account constitutes an explanation, even if not one you find you can accept. You can reject it by disputing the hypothesis that belief is belief*, but since this hypothesis is apparently more parsimonious than the alternative (something you haven’t really challenged much, except vaguely in your last post), you need to find some evidence that does not presuppose that there is something substantial to qualia or beliefs beyond their functional roles.

    > claiming that we never can know how this apparent experience arises.

    I don’t think I made that claim as such, or at least I didn’t mean to. You can know it if you accept my explanation. If you don’t accept the explanation, then you can’t ever know it. The explanation will likely never be intuitive and it will never be proven because it is unfalsifiable. So we will never have a scientifically respectable answer to the question, but I do think functionalists have the correct answer even so.

    > it seems by now obvious that we can have this belief*—acting in a way that would convince everybody that we do—without there actually being some red-experience

    I really feel this is not a problem. The difference is just that there are differences in the accessibility of your beliefs* to different parts of your mind. There is nothing mysterious or challenging to my view in this observation. The differences, such as they are, are entirely structural and functional and subjective.

    > Likewise, red falls into colour space only in relation to other colours

    So, if you look at a red object and a blue object side by side, how do you tell how they relate to each other in colour space? On what information do you base your decision that they are different, or that red is a little closer to orange than to blue?

    > No way of trying to single out some structure as ‘special’ will work, as extensively discussed by Ainsworth.

    I disagree with Ainsworth. Based on your presentation of his argument at least, it doesn’t address the issue at all. He only shows that you can superimpose any structure on any set. But I’m talking about certain of those structures having certain interesting properties, such as intentionality and consciousness. So singling out a structure so as to explain how it has these attributes communicates a helluva lot more about that structure than cardinality.

    > Of course. It’s just that with this one subset, you get all the others along, as well, and on equal footing.

    What? No you don’t. There’s nothing to prevent singling out a specific subset for consideration without having to consider all its peers too. I really don’t understand your point here.

    > The point is merely that all mathematical objects can be built up from sets and relations thereon

    Sure, but you sometimes need infinite sets, so you can’t just list all the members; you need rules to define them. The natural numbers are such a set, unless I’m mistaken. You need an algorithm or a rule to define the successor of each number, I would have thought.

    But perhaps I’m talking out of my arse. You could be right. But my main point stands, that simple, flat, static relations do a poor job of portraying what a dynamic algorithm is actually like. Perhaps it is possible to provide a purely relational definition of an algorithm but that would probably be very unwieldy and would in any case be quite unlike the examples you have been offering.

    > Parsimony can only be a guide in adjudicating between empirically equivalent theories, and only concerns theoretical entities that need to be postulated.

    I think that’s how I’m using it. You are postulating an inarticulable ineffable difference between a belief and a belief*. I am not. Both views are consistent with the evidence as far as I can see.

    > OK, so how does, on your theory, my experience of a uniformly red field of vision come about?

    On my theory, that’s a neuroscientific, functional question which has yet to be fully understood, just as I don’t understand how you learn to ride a bike or construct philosophical arguments. In the abstract, a functionally identical p-zombie clone of yours would believe itself to be experiencing red, so any evidence that you can offer that you are experiencing red is explained by my theory. I don’t claim to explain the details of how your mind works, only to offer a rebuttal to the supposed knockdown arguments against functionalism.

    > This is still strange to me. If that is in fact impossible, then what makes it so?

    Because there is no way to get from a description of a mental state to a direct first person experience of that mental state. Because knowing a first person state is just being in that state, and the idea that first person experience can be communicated like third person knowledge is a category mistake. As far as I’m concerned, a theory of consciousness is not tasked with allowing us to directly communicate first person experiences, it is tasked with cutting through all the apparent contradictions that have cropped up as we have tied ourselves in knots trying to understand what consciousness is.

  104. I think beliefs are just intentional states and beliefs* are just intentional states, and that beliefs* are just beliefs.

    But beliefs have some associated phenomenology, while the visual masking experiments show that beliefs* don’t, so how can the two be the same?

    Of course, you can once again choose to stick to your theory and declare that the evidence is wrong in some regard, or that there then must be some other consciousness instantiated in some inaccessible way, but then I wonder what becomes of your claims for parsimony.

    To consciously register something you need to have introspection of some sort and so access to some kind of description of how it is you are processing information.

    So basically, to consciously register something you need consciousness, hence, only consciously registered beliefs* are beliefs; but originally, you were hoping to build at least some nontrivial aspects of consciousness from beliefs. So once again, what you claim to explain depends on the very thing you set out to explain.

    Conscious things can have beliefs and unconscious things can have beliefs.

    To me, beliefs have at least some intentional component, so I don’t think that unconscious things can have beliefs, at least not in the sense I am using that term. So I think you’re simply equivocating here.

    How do you know there isn’t? How do you know some part of your visual cortex does not experience qualia? That being a submodule of your mind, you would have no way of knowing.

    Of course I don’t know that there isn’t. Likewise, I don’t know that there is no teapot between the orbits of Mercury and Venus, that there are no invisible pink unicorns on the dark side of the moon, and so on. But I don’t think there’s any use in speculating about such things.

    And besides, if you believe that there can be conscious subunits that are distinct from the ‘main’ consciousness, then you’re faced with the problem of how the parts of that main consciousness unite. Otherwise, you could just say that they’re part of the same information processing structure, but now, you’ll need some new principle for that.

    Either this is incoherent (i.e. there is no possible world in which this could be done) and any system organised so would be conscious (i.e. functionalism) or it is coherent and there is no logical necessity that a system so organised would be conscious (and so little reason to insist that a simulated person necessarily has to be conscious).

    OK, I don’t really want to defend a stance I don’t hold, but you’re giving it short shrift. Anyway, supposing that your simulated double is conscious because of the computation really is only compelling if you’re a computationalist already, so it’s not going to help convince anybody.

    As I said on the other thread, computations are how you define or bring about the series of discrete states — i.e. algorithms. Computations are not the outputs of computations, e.g. certain patterns of LED lights.

    As I already said in an earlier reply in that thread, since there is (in the case I set up there) a one-to-one correspondence between outputs and states (i.e. each state has a given output associated with it), we can label the states by the outputs they produce.

    Anyway, I hold to this particular view of computation because it’s quite widespread, well developed, and easily made sharp enough to prove unambiguous statements about it. Plus, it lets me implement every computation I could care to implement. If you provide me with something that is similarly sharp rather than pointing to vague concepts like organization etc., then we can talk about that instead.

    What I cannot prove but believe on grounds of parsimony is that belief is belief*, but if this is so then it implies that it is logically necessary that we believe ourselves to have phenomenal experience and so the “evidence” that we have phenomenal experience is explained and accounted for.

    I think your parsimony argument is getting somewhat less convincing if you have to throw the possibility of additional non-accessible consciousnesses into the ring… And even if beliefs* were beliefs, you’d still have to show that believing to have a red experience suffices to have a red experience in whatever sense it is that we do. It seems to me that I could easily have a green experience while believing it to be a red experience.

    So, *if* the hypothesis that belief is belief* is true, then there is nothing further to be explained.

    Well, if the hypothesis that qualia are functional states is true, then there is nothing further to be explained. If the hypothesis that the phenomenal is simply another aspect of the physical is true, if the hypothesis that everything is conscious to some degree is true, if the hypothesis that structure captures content is true, etc etc. Coming up with hypotheses is never the problem; making them plausible is.

    On what information do you base your decision that they are different, or that red is a little closer to orange than to blue?

    On what information do you base your impression that the vase sits on the table? On the information you get from your senses; in the case of colour space, the relative activation of retinal receptors can be grouped in a three-dimensional manifold, such that the structural relationships of the manifold mirror the structural relationships between activation levels of colour-sensitive cones (and brightness-sensitive rods) in the retina. In the case of real space, the relative locations of activation patterns of retinal receptors can be grouped into a three-dimensional manifold whose structural relationships mirror those of the activation patterns on the retina. I really don’t see what you’re trying to get at.
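
    A toy sketch of what that structural relationship comes to; the (L, M, S)-style activation triples below are made-up numbers standing in for relative cone activations, not real measurements:

    ```python
    import math

    # Purely illustrative activation triples, one per colour.
    activations = {
        'red':    (0.9, 0.3, 0.05),
        'orange': (0.8, 0.5, 0.05),
        'blue':   (0.1, 0.2, 0.9),
    }

    def distance(c1, c2):
        return math.dist(activations[c1], activations[c2])

    print(distance('red', 'orange'))  # small: red and orange sit close together in this space
    print(distance('red', 'blue'))    # large: red and blue sit far apart
    ```

    All the sketch captures is the relational geometry, i.e. which colours are near which; as noted above, what cannot be read off from such structural data is the redness of red itself.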

    He only shows that you can superimpose any structure on any set. But I’m talking about certain of those structures having certain interesting properties, such as intentionality and consciousness. So singling out a structure so as to explain how it has these attributes communicates a helluva lot more about that structure than cardinality.

    Well, again, interestingness is either a structural property, or not. If it is a structural property, then the same problem remains; if it’s not structural, then your metaphysics is incomplete.

    There’s nothing to prevent singling out a specific subset for consideration without having to consider all its peers too. I really don’t understand your point here.

    Well, if all you have access to are structural facts, then a given structure can only be interesting by virtue of standing in some relation (i.e. it can only be a structural fact that makes it interesting). But what this relation tells us is that there is some set, of a given cardinality, whose elements stand to each other in that relation. But if that set exists, then so does the set of all its subsets, i.e. its powerset. And if that powerset exists, then so do all its subsets, i.e. all possible structures definable on that set. But then of course ‘interestingness’ fails to pick out that one structure you are trying to pick out, since all of the others come part and parcel with it.

    I think this is something you really need to get clear about, and I think it’s the reason you fail to appreciate the force of Newman’s objection. So let’s try and be a little more explicit. Say you have a relation R = {(a, b), (a, c), (b, c)}, as above. This tells us that there is a domain D = {a, b, c}, upon which that relation is defined. But since there is D, there also is its powerset P(D). And with its powerset, there also is every other relation R*, i.e. every subset of P(D).

    So, saying ‘there is R’ implies that ‘there is R*’, for R* being an arbitrary relation on D. But then, saying ‘there is R’ merely says the same as saying ‘there is some D with cardinality c’. Hence, all saying ‘there is R’ tells us is D’s cardinality. This is the standard Newman problem, and I hope up to this point, we’re still on the same page.

    But now you’re trying to say, well, there may be all these other R*, but R is special; it’s interesting; it’s conscious; whatever. Anyway, there is some predicate Q(R) such that R is a structure that we care about, and those R* aren’t. But of course, your metaphysics being structuralist, you’re committed to Q(R) being structural, i.e. it being some relation. But then you’re at the same point you were before. By saying ‘there is Q(R)’, you’re saying ‘there is some C with cardinality k’, along with its powerset P(C), and all imaginable subsets thereof—i.e. all relations over C. But you’re in particular saying ‘there is R’—in the same way that originally saying ‘there is R’ also entailed ‘there is a (and b, and c)’ (or ‘there is (a,b)’)—simply by virtue of a being an element of D, whose existence is assured by that of R. By the same token, there being Q(R) entails there being R, either as an element of the cardinality k set C or of the cardinality 2^k set P(C) (which tells us that C is simply P(D), or a subset thereof).

    But then, saying Q(R)—R is interesting, R is special, whatever—implies R; but R implies, as we have seen, all the above R*s. And since the existence of R and the R*s is equivalent to the existence of a set with cardinality c, then this, too, tells us no more than the cardinality.

    No matter how much structure you compound, you will always be facing this problem.

    You are postulating an inarticulable ineffable difference between a belief and a belief*. I am not. Both views are consistent with the evidence as far as I can see.

    No, the difference between beliefs and beliefs* is quite effable: the former go along with conscious experience, in particular, they have intentional content; the latter don’t go along with conscious experience, and can have intentional content only in a derived way, in the same sense that e.g. sentences do. The evidence is that we have both kinds of belief: in visual masking, we have beliefs* as well as beliefs, which have different propositional content. Hence, I think that the difference between beliefs and beliefs* is borne out experimentally.

    To me, beliefs and beliefs* are not theoretical entities, but rather, observational ones—both are observed in visual masking experiments. Hence, parsimony does not adjudicate between both. It might be that there is a most parsimonious theory on which one in some sense emerges from the other, but that theory’s parsimony is to be decided on the basis of the theoretical entities it postulates.

    You could of course dodge that and claim that beliefs* do have some intentional content not available to the ‘main’ consciousness—but of course, this only comes at the expense of multiplying theoretical entities, thus giving away the argument from parsimony.

    I don’t claim to explain the details of how your mind works, only to offer a rebuttal to the supposed knockdown arguments against functionalism.

    But the problem is that your rebuttals only work if you subscribe to functionalism anyway. If I were a believer in the p-zombie argument, nothing you’ve said carries any force to convince me otherwise: my simulation would not be conscious; it would form verbal reports as if it were, it would act as if it were, but ultimately, none of this would be any different from my computer printing out ‘I am conscious’ or lighting up a little light labeled ‘consciousness’. All of the beliefs and intentions we might ascribe to this simulation are merely derived from our own, genuine intentions.

    As far as I’m concerned, a theory of consciousness is not tasked with allowing us to directly communicate first person experiences,

    But it is tasked with explaining how first person experiences crop up in the first place, either from elements which lack them (giving a reductive explanation), or from elements which intrinsically have them (giving a ‘mysterious’ explanation, accepting the existence of first persons as a brute fact, as dual-aspect theories and panpsychist accounts do). Simply postulating that there is some first person experience associated with ‘being a structure’ is the same thing as saying that there is some first person experience associated with being an electron; but then, ultimately, I don’t think you’re a functionalist, but rather, a panpsychist in denial.

  105. Well, if all you have access to are structural facts, then a given structure can only be interesting by virtue of standing in some relation (i.e. it can only be a structural fact that makes it interesting).

    I think the argument I gave there is needlessly confusing, and the point is important enough to try and clear it up. So, first, let’s get a grip on what a relation is: I originally defined it as a subset of the powerset P(D) of some base set D, which is OK for many purposes; but in general, relations may be ordered (if x > y, then y < x is false), so each k-ary relation should be defined as a subset of the set of ordered k-tuples of elements of D. This way, the numbers also come out the same as in Ainsworth's example. Let's denote the set of all k-tuples on D as Tk(D), and thus the set of all k-ary relations on D, i.e. the set of all subsets of Tk(D), as before by P(Tk(D)).

    Hence, our relation R above is an element of P(T2({a,b,c})), i.e. a set of ordered tuples of the elements a, b, and c. That sounds about right. Now, the rest of the argument proceeds essentially as above: R implies D with cardinality n; D implies the existence of all k-tuples of elements of D, and hence the existence of all Tk(D). But this implies the existence of P(Tk(D)), and hence of every relation R* definable on D.
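
    To make the counting concrete, here is a minimal Python sketch (purely illustrative, with invented names): once the domain D = {a, b, c} exists, all 2^9 = 512 binary relations on it come along with it, and R is just one among them.

        # the bare existence of D brings with it every binary relation definable on D
        from itertools import combinations, product

        D = {"a", "b", "c"}
        T2 = list(product(D, repeat=2))              # T2(D): all ordered pairs, 3**2 = 9 of them

        # P(T2(D)): every subset of T2(D), i.e. every binary relation on D
        all_relations = [set(c) for r in range(len(T2) + 1)
                                for c in combinations(T2, r)]

        R = {("a", "b"), ("a", "c"), ("b", "c")}     # the relation R from the discussion
        print(len(T2), len(all_relations))           # 9 512  (512 = 2**9)
        print(R in all_relations)                    # True: R is just one of the 512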

    Now, you want to single out R as being special. This, as above, you can only do by introducing a structural predicate Q(R). Q itself is an element of some set of relations on some P(Tk(D)), which we'll abbreviate by calling it S, the set of structures on D; that is, if Q is a k-ary relation, it is a subset of the set of ordered k-tuples of elements of S. Thus, Q(R) implies S, in the same sense as R implies D. But then every set of ordered k-tuples of elements of S also exists, and hence so does the powerset of this set, P(Tk(S)). But then, in particular, if Q(R), we also get Q(R*): R* is an element of S, and hence Q(R*) is an element of P(Tk(S)). But then you can just as well consider any R* to be singled out by Q. Now, of course, you could argue that Q(R) is special in some sense—that is, there is some structural predicate Q'(Q(R)) that singles out Q(R) over Q(R*). But it's clear that this will merely lead to an infinite regress of structural singlings-out.

    Hence, if you try to use a structural property to single out some structure, you end up 'singling out' all of them, due to the inherent underdetermination of structural predicates. But you also can't take recourse to nonstructural predicates, which don't exist on your metaphysics. But then, all that is accessible on your metaphysics is cardinality questions.

  106. Hi Jochen,

    > But beliefs have some associated phenomenology

    I don’t think they do, necessarily. Recall that I think a thermostat has beliefs, or at least proto-beliefs, and certainly that more complex information processing systems such as air traffic control systems do. I don’t think such systems have anything like human consciousness or phenomenology though.

    > while the visual masking experiments show that beliefs* don’t, so how can the two be the same?

    If you are aware that you have a belief that’s just a second order belief, a belief about a belief. You can also explain the difference in terms of having different scopes or frames of reference. You are not aware of my private beliefs but that doesn’t mean I don’t have private beliefs. You are not aware of the private beliefs of your visual cortex but that doesn’t mean your visual cortex does not have private beliefs.

    So I’m all for the idea that different modules in your brain have their own private beliefs. I am against the necessary association of beliefs with phenomenology. I think the two are different things and that something can have beliefs without phenomenology.

    > So basically, to consciously register something you need consciousness

    That’s not what I said. I said introspection. Introspection can be given a functional definition just like beliefs* or desires* or what have you. It is just the ability to analyse something about your internal state and how you appear* to be processing information. It may also include having a concept or representation of self. There is no difficulty in giving computer systems the ability to introspect* in this sense.

    > To me, beliefs have at least some intentional component, so I don’t think that unconscious things can have beliefs,

    I agree that beliefs have an intentional component, but I don’t think that having intentions requires consciousness. For me, an intentional component is just having a reference. I think that the state of the thermostat refers to the temperature, not absolutely but for all practical purposes, and that’s all I think reference ever is — a loose association that could be interpreted in other ways if you wanted to make things difficult for yourself. But I don’t think the thermostat is aware of the fact that it has an intention towards temperature (that would be a second order intention) and I think it’s far too trivial a structure to be deemed conscious.

    I think that for something to be deemed conscious it needs at a minimum some concept of memory, of self and perhaps the ability to simulate its environment so as to make plans in advance. That kind of thing. The particular set of abilities we identify as consciousness is somewhat arbitrary — I don’t think there’s one right definition. Consciousness to me is not some quality above and beyond a certain set of functions, it is just a handy label we apply to a suite of functions of a certain familiar kind. Essentially, something is conscious if its functions are somewhere in the ballpark of what humans can do, and thermostats are nowhere near.

    > And besides, if you believe that there can be conscious subunits that are distinct from the ‘main’ consciousness, then you’re faced with the problem of how the parts of that main consciousness unite.

    Perhaps you’re aware of Ned Block’s China Brain thought experiment. That would be an example of the kind of thing I’m talking about. In this thought experiment, individual neurons or groups of neurons are simulated by conscious people. My view is that the overall structure would be conscious, whereas of course Block’s goal was to show the opposite, deeming that conclusion to be too absurd to be entertained. Nevertheless it should offer an illustration of how I don’t see a problem with different parts of a system having their own private mental states. I don’t understand what needs to be explained about how the overall consciousness comes together because I genuinely don’t see a problem. What possible difference could it make to me if each of my neurons was a mind in its own right?

    > Anyway, supposing that your simulated double is conscious because of the computation really is only compelling if you’re a computationalist already, so it’s not going to help convince anybody.

    But I’m not assuming that. It’s kind of a reductio. I’m only assuming that your simulated double has all the same functional abilities as you do. That is more easily conceded than that it is conscious. Then, the argument goes, everything you are asking me to explain regarding how you experience or seem to experience phenomenology could also be asked of you by your double. If you insist your double is not conscious, you have to find a way of explaining to it how it could seem to it that it is, or how anything could seem like anything to it. It’s not so easy to see how you would do that, beyond simply insisting that it doesn’t (a tactic you find most unsatisfying when used by eliminativists). Most anti-computationalists concede that point but maintain that it doesn’t matter because the fact remains that they are conscious and their simulated double just isn’t. But to me that’s an empty argument, because it’s no different than what the simulated double would say. It undercuts the notion that we actually have evidence that there is something more going on than mere functionality because if that’s all there was we would still be in exactly the same boat — convinced (or convinced*) there’s more to it than functionality even if there isn’t.

    > As I already said in an earlier reply in that thread, since there is (in the case I set up there) a one-to-one correspondence between outputs and states (i.e. each state has a given output associated with it), we can label the states by the outputs they produce.

    But this does not answer the point. An algorithm is not a sequence of states. An algorithm is a means of producing or defining a sequence of states. In a sequence of states, there is no way of knowing what the next state is going to be from the current state. But if you have an algorithm, then the current state defines the next state.

    And though you may have sketched an example where there is a one to one relation between outputs and states, that is far from typical. There are a number of different algorithms with quite different properties for computing outputs such as the Fibonacci sequence or for sorting a list. The algorithm that leads to you professing yourself to be conscious is entirely different from that which simply has printf("I am conscious!");

    I think of computations in terms of the structures of their algorithms, not as sequences of states or outputs. I think that is how most computationalists think of them too. If you think I’m using the term incorrectly, that may be, but it means we may be getting closer to understanding each other. It may be that what you call organisation is just the same idea I refer to as computation.

    > Plus, it lets me implement every computation I could care to implement.

    Not really. It only lets you write out a trace of every computation, which you can only do anyway by implementing an algorithm in your head. You can’t even write the trace if you have finite time and an algorithm which doesn’t halt. For instance, your model won’t let you implement a computation which simply spits out successive natural numbers, but if you define it as an algorithm then that job becomes trivial.
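
    A minimal sketch of that contrast (purely illustrative): stated as an algorithm, the job is trivial, even though no finite execution trace can ever exhaust it.

        # a non-halting computation that is trivial to define as an algorithm,
        # though no finite trace of states or outputs captures it completely
        def naturals():
            n = 0
            while True:      # never halts
                yield n
                n += 1

        gen = naturals()
        print(next(gen), next(gen), next(gen))   # 0 1 2 ... and so on, without end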

    > If you provide me something that is similarly sharp rather than pointing to vague concepts like organization etc

    Turing’s account of an effective method or mechanical process is pretty sharp, I would have thought. Perhaps it won’t lend itself as easily to defending the points you want to make, but I think the seductive sharpness of the “sequence of states” model of computation is leading you astray.

    Besides, I’m only using the term “organisation” because you used it yourself, apparently implying that you think that organisation has something to do with consciousness. I’m trying to agree with you!

    > It seems to me that I could easily have a green experience while believing it to be a red experience.

    And it seems to me that you couldn’t. When you try to illustrate this thought experiment in more detail, it seems to me that the subject simply has the words “green” and “red” confused, as if I thought the German for red were “grün” and believed myself to be in a state of experiencing “grün” when I experienced red. Similarly, you think that the inverted qualia thought experiment is coherent and I don’t think it is.

    It seems to you that it is coherent because you can imagine it easily: you just imagine yourself to be perceiving a scene with colours inverted, as if you were wearing some kind of spectrum inversion goggles. But that is not what would be happening in a genuine colour inversion. Wearing spectrum inversion goggles would be a weird experience because strawberries would not be associated with the same sense data as you are accustomed to, but there would be nothing weird to you if your qualia were always inverted. When you imagine inverted qualia, I think what you are really imagining is this weirdness. This weirdness is a coherent idea, but inverted qualia without weirdness is not in my view. If I had a switch whereby I could invert your qualia at will but without disrupting their functional roles, you could not possibly have any idea when I flipped the switch. If this is so then that rather seems to illustrate that the whole idea is nonsense. The inversion of qualia is a distinction without a difference, since there is no possible observer to whom it could make a difference, and the whole idea of qualia having content independent of their functional roles can be discarded.

    > Coming up with hypotheses is never the problem; making them plausible is.

    All the hypotheses you list to illustrate the point are in my view broadly equivalent to what I am saying. They are just different glosses on the same idea, or at least that is how it seems to me. So this just reinforces my point. This basic idea explains everything there is to be explained while being parsimonious. So that makes it plausible.

    > the relative activation of retinal receptors can be grouped in a three-dimensional manifold

    You mean you have direct access to the relative activation of your retinal receptors? Because I don’t. So that means that you must perceive primary colours as being quite intrinsically different to secondary colours, and you must have no difficulty in interpreting the colour of that famous black/blue or gold/white dress. The black/blue is plainly about 40% red activation, 30% green activation, 20% blue activation, while the gold/white is plainly about 40%, 45%, 60%.

    Of course this is not how you perceive colour. Remember how surprising Newton’s result was: that white light is composed of a spectrum of light of different colours. All this processing of relative activation is performed by your visual cortex or your retina or somewhere, and you have no direct access to any of it. Instead, you are presented with what you call a bunch of qualia (akin to the proposition that this dress is blue/black or white/gold) and you have no idea how you arrived at it. This is why it seems mysterious to us.

    I’ll leave it for now. I’ll come back to the rest of it later. Cheers!

  107. Recall that I think a thermostat has beliefs, or at least proto-beliefs, and certainly that more complex information processing systems such as air traffic control systems do.

    Well, but this is question-begging: these things have beliefs if and only if beliefs* are beliefs.

    If you are aware that you have a belief that’s just a second order belief, a belief about a belief.

    That might be, but what we have at hand is at best a second-order belief* whose object is a belief*. Also, I discussed an extension of the original experiment in order to counter this higher-order belief* reply in my first post about the visual masking experiments.

    Introspection can be given a functional definition just like beliefs* or desires* or what have you.

    Again, that might be, but then you’d still have to show that introspection* directed at beliefs* gives rise to beliefs.

    I agree that beliefs have an intentional component, but I don’t think that having intentions requires consciousness. For me, an intentional component is just having a reference.

    But something has a reference only by virtue of appearing to an intentional agent. Take the word ‘Gaul’. What’s its reference? Well, it could be anything, it could be nothing. It could be a meaningless string of symbols. Certainly, it alone does not suffice to decide whether it refers to anything. In order to make it actually refer to anything, we need an intentional agent with the right understanding—in this case, a speaker of German, who knows that ‘Gaul’ is vulgar for ‘Pferd’, i.e. horse.

    And before you say, well, I could program a machine in order to make that identification, ask yourself: what would the machine do in order to do so? Well, it would, in some way, spit out a string of symbols—which you, the user, then interpret. The machine associates the symbols ‘Gaul’ with the symbols ‘horse’, but ‘horse’ does not in itself refer intrinsically any more than ‘Gaul’ does. Only by virtue of your interpretation does ‘horse’, and hence ‘Gaul’, refer; thus, reference in symbols is always derived intentionality, and so can’t be used to explain intentionality.

    I think that for something to be deemed conscious it needs at a minimum some concept of memory, of self and perhaps the ability to simulate its environment so as to make plans in advance.

    You say that it needs at minimum some concept of self as if that were an easy thing, but again, the same problems arise. You presumably mean something like a ‘self-symbol’, some sort of inner flag that lights up whenever something pertains to the agent. But again, that’s not enough: this self-symbol is about the agent only by virtue of our interpretation; it is not about anything to the computational agent, or at least, there would need to be a mechanism by which it could come to be about the agent to the agent, without incurring the circularity of the homunculus fallacy.

    It seems to me that this is a general feature of your thinking—convinced that there is somehow a reduction of various high-level properties to structural/computational categories, you feel entitled to simply postulate such things as basic elements of your account—be it introspection, reference, or some concept of self. But with this, you smuggle the explananda back in through the back door: yes, an agent may have intentionality if there is some inner system capable of generating meaningful references, but that itself is a capacity that presupposes intentionality.

    I don’t understand what needs to be explained about how the overall consciousness comes together because I genuinely don’t see a problem.

    Well, the problem is related to what’s known as the combination problem on panpsychist accounts. As I think William James put it, if you give each word of a twelve words long sentence to one of twelve men, then, no matter how hard each concentrates on his word, nowhere will there be any consciousness of the whole sentence. So, if you believe that subunits of the brain possess independent consciousness, you have to explain, e.g., how my visual and auditory subsystems’ consciousnesses come together to form one single integrated conscious experience, while on the other hand, some subsystems get left out.

    So the problem is not having separately conscious parts, it’s how to integrate them into a larger conscious experience; just the thing Block’s thought experiment is designed to put in doubt.

    Most anti-computationalists concede that point but maintain that it doesn’t matter because the fact remains that they are conscious and their simulated double just isn’t. But to me that’s an empty argument, because it’s no different than what the simulated double would say.

    Well yes, but in order to pay what the simulated double would say any mind, you’d have to believe that it knows what it’s talking about, so to speak. Consider the following analogy: you ask A and B a question; both answer identically, and correctly. Do both know the answer? You can’t tell: either or both could just have made a lucky guess. Thus, just making the right vocalisations doesn’t say anything about whether there’s the right sort of connection to make them into genuine knowledge.

    Now, let’s say A was once in a position to learn the answer to your question—being told, reading, or observing some relevant event; B never was. Hence, A spoke from genuine knowledge, B merely made a guess. In the analogy, this corresponds to A having phenomenal experience, and B not having it. Of course, that’s not to say that B, i.e. the simulation, merely guesses; but there are other ways to give the right answer and nevertheless not have the relevant knowledge—consider Gettier problems: say, your cell phone rings, and you take the call. Ordinarily, you would say that you knew someone was calling you. However, unbeknownst to you, it was actually not your cell phone that rang; yours was set to silent, which you’d forgotten. But by mere happenstance, in just the same instant you were called, the cell phone of a person next to you rang, having just the same ringtone. Thus, you justifiably believed that you were being called, but nevertheless, it seems odd to say that you knew that to be the case—the set of things that invoked this belief in you would not ordinarily produce any knowledge on your part.

    In the same way, then, one might hold the simulation’s utterances to be mistaken: it doesn’t actually know what it’s talking about, but the situation is such that, like you in the case of cell phones, there is no actual knowledge on its part, although it behaves as if there were.

    All this just to say, I see no reason that an anti-computationalist ought to be persuaded by the proposition that their double would be similarly unconvinced: they are unconvinced out of genuine knowledge, being directly aware of their phenomenal states; their doubles lack this knowledge, merely putting on a show as if by happenstance.

    An algorithm is not a sequence of states. An algorithm is a means of producing or defining a sequence of states.

    Yes. Essentially, in the case of finite automata, the transition table is the algorithm. In implementing a computation, one thus ensures that the transition table is realized by the causal connections of the physical system one uses to represent it. This is exactly the sense in which the LED-box implements the automaton.
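
    As a rough illustration of that claim (the automaton below is invented for the example), here is a machine in which the transition table does all the work:

        # a tiny deterministic finite automaton driven entirely by its transition table;
        # it accepts binary strings containing an even number of 1s
        TRANSITIONS = {
            ("even", "0"): "even", ("even", "1"): "odd",
            ("odd",  "0"): "odd",  ("odd",  "1"): "even",
        }

        def accepts(inputs, state="even"):
            for symbol in inputs:
                state = TRANSITIONS[(state, symbol)]   # the table decides every step
            return state == "even"

        print(accepts("1011"))   # False: three 1s
        print(accepts("1001"))   # True: two 1s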

    It may be that what you call organisation is just the same idea I refer to as computation.

    I used the word ‘organization’ in reference to dual-aspect monism merely in order to avoid getting tangled up in overburdened phraseology. Basically, my point there was related to what Sergio is proposing: that there is primitive intentionality out there, in the form of dual aspects; and that mechanisms may exist that fashion full-fledged intentions from them. Those mechanisms may then be regarded as computations (as anything may be), but it is not by virtue of their computation that an entity is conscious, but rather by virtue of the presence, the harvesting and the manipulation of environmental proto-intentionality. Thus, simulated me becomes conscious; however, the computation has nothing to do with it—it is merely that the right processes manipulate proto-intentionality in the right way. Without proto-intentionality, there would be no conscious experience, and my simulation is quite rightly skeptical regarding your computationalism.

    Not really. It only lets you write out a trace of every computation, which you can only do anyway by implementing an algorithm in your head. You can’t even write the trace if you have finite time and an algorithm which doesn’t halt.

    There are no non-halting algorithms of finite automata; they always either halt or repeat. This is the same for physically realizable machines; in fact, all physically realizable machines are equivalent to finite automata.
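
    A small sketch of the pigeonhole point (the update rule is arbitrary and invented for the example): with only finitely many states, a deterministic run must eventually revisit a state, after which it cycles forever.

        # a deterministic rule on five states must revisit a state; from then on it repeats
        def next_state(s):
            return (s * 2 + 1) % 5

        seen, s = [], 0
        while s not in seen:
            seen.append(s)
            s = next_state(s)
        print(seen, "then repeats from state", s)   # [0, 1, 3, 2] then repeats from state 0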

    And the execution trace of a computation is that computation: once I know that—the correspondence being given by a handy chart, which I always keep with me, telling me the computational state given the physical state—I know the result of that computation (meaning, that specific computation; i.e. the value of f(x) for some given x).

    Turing’s account of an effective method or mechanical process is pretty sharp, I would have thought. Perhaps it won’t lend itself as easily to defending the points you want to make, but I think the seductive sharpness of the “sequence of states” model of computation is leading you astray.

    Hmm, the Turing model is of course fully equivalent (for bounded-length tapes) to the one I am proposing, so I’m not sure how you think it would lend itself to making my point any less well…

    This basic idea explains everything there is to be explained while being parsimonious.

    It only does so if you’re ready to accept without argument that belief* is belief, and other things like that, on the grounds of ‘parsimony’. But how belief* could be belief is exactly what is to be explained; hence, it is only plausible if you accept it as plausible ab initio.

    You mean you have direct access to the relative activation of your retinal receptors? Because I don’t.

    Direct access, no. But just like any other sense data, it is present to me (and to you)—exactly in the structural relationships of colour space. Of course, this is a simplified cartoon, since there is postprocessing of the signals; but still, this is how I tell different hues apart—by their difference in location in colour space.

    I’ll leave it for now. I’ll come back to the rest of it later. Cheers!

    I’m looking forward to your response to my argument regarding structuralism!

  108. Hi Jochen,

    I’ll try to address your structural argument now. Since you say the later comment is clearer I’ll try to focus on that and refer to the former only when clarification is needed.

    > Hence, our relation R above is an element of P(T2({a,b,c})), i.e. a set of ordered tuples of the elements a, b, and c. That sounds about right.

    OK. I’m with you. I think!

    > R implies D with cardinality n; D implies the existence of all k-tuples of elements of D, and hence, the existence of all Tk(D). But this implies the existence of P(Tk(D)), and hence, of every relation R* definable on D.

    I don’t see the point of this argument. I’m a Platonist. That already implies that I accept the existence of all possible structures. You don’t even need this kind of reasoning to see that I must already believe that P(Tk(D)) exists and that every relation definable on D exists.

    > Now, you want to single out R as being special.

    Not necessarily. No more than I’m singling a particular sentence out as special when I write one. The existence of all English sentences is implied by the set of all sequences of English language characters, but that doesn’t mean that communication is redundant or that writing a sentence means nothing beyond cardinality. Yes, I know you say these things have to be interpreted, so I’m bringing intentionality into the picture, but I still think that is illustrative of what I am trying to say. I’m not necessarily saying R is special I’m just highlighting it or bringing it to your attention for some reason.

    Now, R may be “special” in some respects, but that’s another question. For instance, one way it might be special is for each of its tuples to consist of only the same element, e.g. {(a,a,a),(b,b,b)} is special in this regard and {(a,b),(b,c)} is not.
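
    For what it’s worth, that property is easy to state as a check on the tuples of a relation (a sketch; the function name is invented):

        # "special" in this sense: every tuple in the relation repeats a single element
        def all_tuples_constant(relation):
            return all(len(set(t)) == 1 for t in relation)

        print(all_tuples_constant({("a", "a", "a"), ("b", "b", "b")}))   # True
        print(all_tuples_constant({("a", "b"), ("b", "c")}))             # False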

    > This, as above, you can only do by introducing a structural predicate Q(R). This is an element of some set of relations on some P(Tk(D))

    I guess so? I don’t know. For instance, is the property of having only the same element in its relations an element of some set of relations on some P(Tk(D))? It’s not clear to me how one would present it in these terms, so you’ve more or less lost me here. The way you’re thinking about these issues is so different to how I think of them that I’m not sure how to translate what you’re saying into what I’m trying to say or vice versa.

    > But of course, your metaphysics being structuralist, you’re committed to Q(R) being structural, i.e. it being some relation

    I think this is the problem. I accept that perhaps it is correct to say that all structure can be thought of as being some relation, but this is not how I think of structure usually and I find it difficult to translate between the two modes of thinking. I’m a software developer so I’m used to thinking of structures as dynamic, temporal things, where earlier states cause later states and algorithms are not defined as sequences or relations but instead define sequences and relations.

    > And since the existence of R and the R*s is equivalent to the existence of a set with cardinality c, then this, too, tells us no more than the cardinality.

    That’s if you’re concerned only with what exists. I’m not, because I think everything well-defined exists. If you’re instead interested in the properties of particular structures then there is much to be learned by examining particular structures. For instance, the existence of the Mandelbrot set is more or less entailed as soon as complex numbers are defined, but that doesn’t mean the Mandelbrot set is not worth studying in its own right and that we have nothing to learn about its properties. These properties are many and fascinating. Just have a look at its Wikipedia page and see all the stuff mathematicians have been able to learn about it. Are those properties structural? I would have said so — they are properties it has in virtue of its structure. Can they easily be expressed as relations? Perhaps, I don’t know. It’s not clear to me how in any case.
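
    To make the example concrete, here is the standard escape-time membership test (a sketch with an arbitrary iteration cap): the set is fixed as soon as complex arithmetic is, yet its properties still take real work to discover.

        # c belongs to the Mandelbrot set iff the orbit of 0 under z -> z*z + c stays
        # bounded; |z| > 2 is the usual escape criterion
        def in_mandelbrot(c, max_iter=100):
            z = 0
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2:
                    return False
            return True        # did not escape within the iteration cap

        print(in_mandelbrot(-1 + 0j))   # True: the orbit cycles 0, -1, 0, -1, ...
        print(in_mandelbrot(1 + 0j))    # False: 0, 1, 2, 5, 26, ... escapes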

    One of the properties a structure might have is to have self-aware substructures such as us. I think the universe is such a structure and we are such substructures. I think within these substructures (humans) are further substructures (mental models) which are loosely isomorphic to aspects of the structure of the overall structure (the outside world) and that this is how we have intention and reference. We can see in the abstract that by simulating such a structure we would find the entities within behaving much as we do, so I’m just saying we have no way of knowing that we are not such entities ourselves.

    Newman’s objection just shows that all other structures of the same cardinality also exist. And I’m fine with that! It just doesn’t apply to me or to any Platonist view as far as I can see. I am a structure S with a certain cardinality. This may imply the existence of all other structures with the same cardinality, but I don’t care, because I’m not all the structures, I am only S. This makes S special, but only to me. Objectively it’s just another one of the structures that exists.

  109. Hi Jochen,

    On to some of your other points.

    > No, the difference between beliefs and beliefs* is quite effable: the former go along with conscious experience, in particular, they have intentional content; the latter don’t go along with conscious experience, and can have intentional content only in a derived way, in the same sense that e.g. sentences do.

    I find it hard to discuss this with you because I’m not clear on where you’re coming from with respect to the logical possibility of p-zombies, since you think that a simulated you would have to be conscious. Most anti-functionalists are either confused (I would say) or accept the logical possibility of some kind of functional p-zombie, e.g. a faithful computer simulation that is not conscious.

    So for these kinds of p-zombies, everything you can say about your consciousness has an “asterisked” analogue for the p-zombie. You have intentional states, it has intentional* states. It can even be said to have consciousness*, in that it seems to have a level of self-awareness* (in that it has beliefs* about itself) and the ability to introspect* and claims to experience phenomenology and so on.

    So in the end anti-functionalists insist on a distinction they cannot really express or articulate, other than by saying things like “the former go along with conscious experience, in particular, they have intentional content” without really explaining what the difference is between conscious experience and conscious* experience* or intentional content and intentional* content. That’s the ineffability I’m talking about.

    To be clear, I’m not talking about the difference between me and a thermostat. I don’t think a thermostat is conscious and I think the differences between me and a thermostat can be articulated quite clearly. But I do think a thermostat can have beliefs since I don’t see a need to tie beliefs to consciousness. Beliefs to me are just more or less what Dennett would say with respect to the intentional stance, though I think it may be helpful to also think of them as requiring some kind of internal representation.

    To me, beliefs and consciousness are two completely different things. Beliefs can exist with consciousness or independently of consciousness. We can be unconscious of beliefs or conscious of them. This is not what I think of as the distinction between beliefs and beliefs*. The distinction I’m trying to get at is that drawn by anti-computationalists between the beliefs of a person and the beliefs* of a p-zombie. Your evidence of the distinction between beliefs and beliefs* doesn’t get to this because a p-zombie could show just the same results as a person in such experiments. The differences these experiments show between what you think of as beliefs* and beliefs are not only phenomenal but also structural, functional, and so have analogues in purely structural or functional systems. So I am saying that in all cases these beliefs are beliefs. The difference is that only some of these beliefs are accessible by the part of the structure responsible for such duties as the composition of sentences describing introspective activities. This is not a particularly profound or mystical distinction and poses no problems for my view. The accessibility of certain beliefs is what makes them conscious and the inaccessibility of other beliefs is what makes them unconscious.

    > Simply postulating that there is some first person experience associated with ‘being a structure’

    Not with being just any structure. With being a structure that believes itself to be experiencing something! In order to be such a structure you need a certain kind of organisation. You need to have beliefs about yourself and some sort of memory and some sort of model of the world and a connection between the world and that model via some sort of sensory apparatus.

    More than this, to have such an experience in anything like human terms you also need to be sophisticated enough to have representations or understanding of concepts such as experience itself as well as having a sophisticated and detailed mental model of the world and the self.

    > is the same thing as saying that there is some first person experience associated with being an electron;

    I don’t think this is quite the same as positing that electrons have experience in any meaningful sense.

    > but then, ultimately, I don’t think you’re a functionalist, but rather, a panpsychist in denial.

    Unlike panpsychists I think that particular structures are necessarily conscious by virtue of their structure, i.e. that consciousness is just a word we use to describe structures with certain attributes. Panpsychists seem to think that consciousness is contingent, that there are possible worlds which are physically or structurally the same but without any consciousness at all. As such, to a panpsychist consciousness is a kind of magical fairy dust sprinkled across the cosmos and bringing it to life. It is an extra something needed to breathe life into the equations, to paraphrase Stephen Hawking. I don’t think any such fairy dust is required.

  110. You don’t even need this kind of reasoning to see that I must already believe that P(Tk(D)) exists and that every relation definable on D exists.

    The problem is not so much the existence of all that stuff, but that you can’t single out any of it and say, that’s what I’m talking about, without implicitly introducing some non-structural element. If you claim, say, the world has structure (R,D), and I say, no, it has structure (R*,D), then which of us is right? If either is, then there must be some fact making them right; and if that fact again is structural, then nothing has been won. If neither is, then all that we can say about the world is that it has cardinality |D|.

    No more than I’m singling a particular sentence out as special when I write one.

    But this is singling out one sentence as being special—you wrote that sentence, rather than this one. On a structuralist account, this predicate—‘this is the sentence DM wrote’—would have to be a structural fact. That is, you and that sentence stand in some kind of relation. But then, of course, you can just reapply the above argument, and end up with the conclusion that that sentence isn’t singled out after all, unless you somehow ground the relation—i.e. you introduce some kind of fixed point, some non-structural additional fact. To me, this is exactly what our intentions do, hence the problem in accounting for them on a structural metaphysics.

    For instance, is the property of having only the same element in its relations an element of some set of relations on some P(Tk(D))?

    If it’s a structural property, then it must be, although I wouldn’t know how to express it either. But the important thing to keep in mind is that all of mathematics can be phrased in these terms, just the same way as it can be phrased in terms of Gödel numbers (the length of Gödel’s sentence, in his original numbering, is pretty much astronomical, and providing it would not illuminate anything; so it’s possible that the same thing occurs with even slightly nontrivial structural properties). But I suppose it’s clear that every structure definable on D does occur in some P(Tk(D)), no?

    The way you’re thinking about these issues is so different to how I think of them that I’m not sure how to translate what you’re saying into what I’m trying to say or vice versa.

    Well, I’m thinking about these things in these terms because it’s the only way I know how to formalize the otherwise rather vague concept of structure (well, besides Ramsey sentences, and that doesn’t make things any easier in my view). Otherwise, it’s just too easy to fall prey to some intuition that there somehow just is a privileged structure; once you formalize things and think carefully about them, it becomes easy to see that that’s not the case.

    I’m a software developer so I’m used to thinking of structures as dynamic, temporal things, where earlier states cause later states and algorithms are not defined as sequences or relations but instead define sequences and relations.

    This, for instance, I think already goes into dangerous territory. Is time structural? Certainly, one can formalize temporal evolution, and consider an ordering relation distinguishing earlier from later, but that doesn’t really give us time in the ordinary everyday sense, because there is no distinguished present moment, while that’s all we ever seem to have. So, in principle, you then introduce an indexical fact, which I wouldn’t think can be analyzed structurally (and if it can, well…).

    If you’re instead interested in the properties of particular structures then there is much to be learned by examining particular structures.

    The problem is getting a handle on particular structures. What makes you able to say, ‘this structure’? How do you pick out one? Again, if you pick this structure, I could pick another, and be just as justified. And if you say that some structure is picked out because we are the creatures we are, then you just kick the problem up one rung: if we’re merely structural ourselves, then what makes us the creatures that we are, those particular structures? This leads to an infinite regress.

    Of course, if you get to say that we’re just those things, period, then you can find a grounding for structure; you get a fixed reference frame, so to speak. But structure alone doesn’t let you pick out such a fixed reference frame, so in introducing the assumption that we are just those creatures, you basically either appeal to some extra-structural fact, or to mysticism.

    For instance, the existence of the Mandelbrot set is more or less entailed as soon as complex numbers are defined, but that doesn’t mean the Mandelbrot set is not worth studying in its own right and that we have nothing to learn about its properties.

    If you think about the Mandelbrot set, I wager you’re either thinking of the pretty apple-man picture, or of some set of equations. But in doing so, you’ve already introduced some reference-frame fixing: those equations only determine the Mandelbrot set by virtue of meaning something to you; that picture you only interpret because of the way you’re wired. So you’ve covertly introduced a fixed background frame. But think about the algorithm creating the Mandelbrot set: on a different computer with different fundamentals, with a different compiler, if it is a valid program at all, it could be a totally different program.

    One of the properties a structure might have is to have self-aware substructures such as us.

    Think about how you could hope to cash that claim out. You say that (R,D) has self-aware substructures. I say (equally well justified) that the structure is actually (R*,D), which has no self-aware substructures—say, I could just pick a structure on which we have some Humean collection of particular facts, just the one-element sets of members of D, for instance. My structure is just as well-founded as is yours, but we differ on whether there are any self-aware substructures. So, whether there are any or not can’t be a question with a factual answer.

    Or, indeed, use the argument above: the structural property that a structure has self-aware substructures is Q(R). Then, the proof shows that ‘having self-aware substructures’ doesn’t actually pick out any structure over any other.

    Newman’s objection just shows that all other structures of the same cardinality also exist.

    No, it shows that all that proclaiming a structure to exist establishes is that there exists some set of a certain cardinality; those are different claims.

    I am a structure S with a certain cardinality.

    See, this is a sentence that simply can’t be formalized in a purely structural metaphysics: it would again just introduce a certain Q(R), which by the above reasoning, fails to actually pick out any structure.

    Most anti-functionalists are either confused (I would say) or accept the logical possibility of some kind of functional p-zombie, e.g. a faithful computer simulation that is not conscious.

    Panpsychist anti-functionalists could very well likewise hold that their simulation is conscious, but that that consciousness is not due to the computation, and they wouldn’t necessarily be confused (to the extent that panpsychism itself is a consistent stance). As would many other kinds of non-reductive monists, not to mention the dark ghostly spectres of the remaining dualists. So I think that the claim that all anti-functionalists who do believe so are confused is a bit too uncharitable.

    You have intentional states, it has intentional* states. It can even be said to have consciousness*, in that it seems to have a level of self-awareness* (in that it has beliefs* about itself) and the ability to introspect* and claims to experience phenomenology and so on.

    The problem is that, while I don’t actually believe that’s the case, it’s perfectly well possible to claim to experience phenomenology while not actually doing so. Take the case of blindness denial: even though such patients are functionally blind, they claim to be able to see—they’re just deceived about that. Certainly, it might be the case that they hallucinate some phenomenology, but that’s unnecessary to the explanation of the condition. Now take the case of somebody with blindness denial who has, unwittingly, developed (or been augmented with) an uncanny sonar kind of sense: in many cases, his performance might equal that of a sighted person, without necessarily possessing any of the associated phenomenology.

    The claim that I might be in the situation of a simulation myself (and that simulation’s consciousness is simply due to computation) is only interesting if simultaneously you can produce an explanation of what I seem to experience; otherwise, I’m perfectly justified to believe that no, I’m not such a simulation, or if I am, then my conscious experience is not due to the computation, but incidental to it.

    without really explaining what is the difference between conscious experience and conscious* experience* or intentional content and intentional* content.

    The difference in experience is the same as the difference in your visual phenomenology of the things before you and the phenomenology of the things behind you. You could now have access, through some means, to a description of the things behind you, and report accurately on their characteristics—there is a white wall behind me, etc.—and even proclaim subjective experience of this white wall, but there simply wouldn’t be any. And if there were just computation, just structure, then I’d believe that this would indeed be the situation such a creature would be in, simply because structure is not enough to formulate the claim ‘there is a white wall behind me’ in any truthful sense.

    Beliefs to me are just more or less what Dennett would say with respect to the intentional stance, though I think it may be helpful to also think of them as requiring some kind of internal representation.

    And how do you think representation works, if it’s not representation to somebody? How does anything represent without assuming some already present intentionality?

    Beliefs can exist with consciousness or independently of consciousness.

    Beliefs need, at minimum, some intentionality: beliefs are always about something. And only consciousness, as far as we know, possesses intentionality—all other instances of intentionality can ultimately be traced back to some original intentional being. But a belief* lacks just this intentionality: certainly, it can cause behaviour that can be considered to be about some external stimulus; but even so, it can only be considered thus by another intentional being.

    The difference is that only some of these beliefs are accessible by the part of the structure responsible for such duties as the composition of sentences describing introspective activities.

    And I think that in assuming such an entity, you have effectively created a homunculus—just that sort of entity who can make a belief* into a belief by directing its attention at it. Certainly, either this structure has beliefs* about the first-order beliefs*, or it has beliefs about them. In the latter case, however, you’ve just proposed a mysterious entity with its own intentionality, and haven’t gotten any closer to a solution. In the former, you enter an infinite regress: if beliefs* don’t have any intentionality, any aboutness, then second-order beliefs* can’t lend it to them, because they themselves are not about the first-order beliefs*. In order to be about the first-order beliefs*, they would, themselves, need to be ‘accessible by the part of the structure responsible for such duties as the composition of sentences describing introspective activities’. And so on.

    I think that particular structures are necessarily conscious by virtue of their structure, i.e. that consciousness is just a word we use to describe structures with certain attributes.

    And again: if that is itself a structural attribute of a structure, my argument above shows that it can’t achieve what you hope it does; and if it isn’t, then you’ve broken from your structural metaphysics.

  111. Hi Jochen,

    Nearly caught up now.

    > Well, but this is question-begging: these things have beliefs if and only if beliefs* are beliefs.

    It’s not question begging to remind you what beliefs are in my view, so that arguments which take other interpretations of beliefs don’t really work to prove my view wrong without independent support.

    Again, if you’re attacking my view and I’m defending it, I think the onus is on you to demonstrate some inconsistency, internal or with evidence. If my view is consistent then you haven’t shown any problems with it. As such, arguments which rebut your objections by appealing to tenets of my view are perfectly legitimate as long as they are consistent.

    > Also, I discussed an extension of the original experiment in order to counter this higher-order belief* reply in my first post about the visual masking experiments.

    I think you’re referring to this argument.

    “One could easily imagine, it seems to me, a being whose dispositional beliefs also enabled it to make such exclamations as above—if it is in principle possible that masked stimuli influence behaviour, then I do not see why any given behaviour should be excluded.”

    My answer was that as you push more and more beliefs into the dispositional module, you are making that module more and more a p-zombie, and I don’t think that p-zombies are coherent so on my view you would just be making that module more and more conscious. Again we have mutually contradictory, self-consistent intuitions and no way to tell the difference between them empirically. So the empirical case for refutation of my view is vacuous.

    > but then you’d still have to show that introspection* directed at beliefs* gives rise to beliefs.

    Well that depends on where you see the burden of proof. If you’re arguing that my view is self-evidently false or untenable, then you would need to show that they cannot. I’m just saying that I see no reason to doubt that my view is correct. I don’t claim to have definitive proof.

    However, I will say that I don’t even think the proposition that my belief is false can be fully articulated, and until it can then proof either way is meaningless. It rests entirely on an appeal to an intuition that cannot be expressed, that there is an ineffable difference between belief and beliefs* (what I mean by this I explained in my last comment).

    > But something has a reference only by virtue of appearing to an intentional agent.

    Agreed (with one caveat, which I’ll get to)! So, if I think that a thermometer can have references, then I must take it to be an intentional agent. And I do! I just don’t think intentional agents need be conscious. They are just systems which take advantage of percepts in order to navigate their environment in some way. A robot is such an intentional agent a little closer to a human than a thermometer and so perhaps serves as a more reasonable or intermediate example of what it is to be an intentional agent in my view.

    The caveat is that you’re talking about “appearing to an intentional agent”. This language seems to imply the interpretation of external symbols, which I take to be a different process than the manipulation of internal ones. A cat is an intentional agent, I think we agree, but I don’t think it does much interpretation of external symbols. A thermometer has references like a cat, but as with a cat its references do not appear to it as symbols that need to be interpreted.

    > Take the word ‘Gaul’.

    That’s an external symbol and so can be disregarded as irrelevant.

    > the same problems arise. You presumably mean something like a ‘self-symbol’, some sort of inner flag that lights up whenever something pertains to the agent.

    Yup.

    > But again, that’s not enough: this self-symbol is about the agent only by virtue of our interpretation;

    No, it’s about the agent as much as anything is about anything: only approximately and FAPP but even so there is a link arising out of causal associations and limited quasi-isomorphism. That is my account of how reference arises from internal symbols. I do not agree that there is only reference when it is interpreted by an external agent. Otherwise how would your neurons refer to anything?

    > no matter how hard each concentrates on his word, nowhere will there be any consciousness of the whole sentence.

    Because there are no causal associations set up between the different words and the different men. Implement the same structure as exists in the brain overall (China Brain) and you’ll have the same understanding.

    > Well yes, but in order to pay what the simulated double would say any mind, you’d have to believe that it knows what it’s talking about, so to speak.

    Not necessarily. All you have to believe is that it can produce and respond to rational argumentation. If you were motivated to try to persuade it of what you regard as the truth (say your life depended on it if you wish), you would have to pay attention to what it says including what it claims to be experiencing. You don’t believe it is actually experiencing anything but you can’t just disregard its claims if you want to engage in a debate with it. The impossibility of constructing a rational argument or empirical experiment to show that it is not conscious while you are ought in my view to give you pause and reason to reconsider your position.

    > Do both know the answer? You can’t tell: either or both could just have made a lucky guess.

    It’s a disanalogy, because ex hypothesi the simulated self processes information in a way precisely analogous to yours. It is structurally the same, and I’m arguing that structure is all that matters. A and B are not structurally the same if they arrived at the same answer in different ways.

    > Essentially, in the case of finite automata, the transition table is the algorithm.

    No it isn’t. For finite automata, there are many algorithms that could produce the same sequence of states. One of these is an algorithm which simply uses a transition table. But there are other algorithms that work in completely different ways.

    By analogy to outputs instead of states, if you want an algorithm to list the Fibonacci numbers up to a million, then you could just hardcode the list of numbers (analogous to defining the transition table), or you could have a recursive algorithm, or an iterative algorithm, or an algorithm that calculates the numbers more directly by arithmetic using Phi and then rounding. In my view these are all different algorithms, whereas you appear to see them as all being the same.
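
    To make the contrast concrete, here is a rough Python sketch (my own toy illustration; the function names and the exact limit are made up for the example):

    ```python
    # Three different algorithms, one and the same output: the Fibonacci
    # numbers up to a million. (Toy illustration only.)
    import math

    def fib_hardcoded():
        # Analogous to simply listing the transition table in advance.
        return [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377,
                610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657,
                46368, 75025, 121393, 196418, 317811, 514229, 832040]

    def fib_iterative(limit=1_000_000):
        # The rule "next = sum of the previous two" does the work at run time.
        out, a, b = [], 0, 1
        while a <= limit:
            out.append(a)
            a, b = b, a + b
        return out

    def fib_closed_form(limit=1_000_000):
        # Binet's formula: round(phi**n / sqrt(5)) gives the nth Fibonacci number.
        phi = (1 + math.sqrt(5)) / 2
        out, n = [], 0
        while round(phi**n / math.sqrt(5)) <= limit:
            out.append(round(phi**n / math.sqrt(5)))
            n += 1
        return out

    assert fib_hardcoded() == fib_iterative() == fib_closed_form()
    ```

    All three produce the same list, but only the first has the answer written into it in advance.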

    Now I realise you’re not talking about outputs but about internal states. Even so, we can define the transition table in different ways. We can either simply list the transition table or we can define a set of rules for how to determine the next state given the current state. You’re more interested in the former and I’m more interested in the latter. Your view has the transitions as arbitrary and meaningless, whereas my view has the transitions as necessary and not subject to arbitrary substitutions or interpretations. The transitions happen due to the structure of the algorithm itself and not simply because they were specified so in advance. Specifying them in advance masks all the meaning because the rules are now hidden away in the mind of the specifier, which in my view is itself an algorithm. So you define algorithms in terms of transition tables, whereas I define transition tables in terms of algorithms. Either view is tenable. It’s just a way of looking at things, ultimately, but I think my way of looking at things is more useful in the scenarios we are discussing.
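
    In other words (again just a toy sketch, with made-up states):

    ```python
    # Two ways of giving "the same" transition table: listing it outright,
    # or generating it from a rule. Extensionally identical, but on my view
    # the rule is where the meaning lives.

    states = [1, 2, 4, 8, 16]

    listed_table = {1: 2, 2: 4, 4: 8, 8: 16, 16: 32}   # simply specified in advance
    rule_table = {s: 2 * s for s in states}            # derived from "double the state"

    assert listed_table == rule_table
    ```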

    > environmental proto-intentionality

    Sounds a bit like panpsychism to me.

    > And the execution trace of a computation is that computation

    But you can only produce the execution trace by performing the computation! So you can only know the result of the computation by running the computation. So the execution trace is not the computation.

    I think the rest is just retreading old ground so I’ll leave it there.

  112. Hi Jochen,

    > If you claim, say, the world has structure (R,D), and I say, no, it has structure (R*,D), then who of us is right?

    There may be no fact of the matter. In day to day scenarios, R and R* may each reflect different aspects of the structure of the world — they may even be isomorphic to each other but we’re just confusing our terms.

    In the unrealistic case that we’re talking about specifying the structure of the world in complete detail, so we’re talking about the actual structure of the world with no simplification or mistakes, then one of us is right in case the proposed structure is isomorphic to the structure of the world. This makes sense only on the view that the world actually is a structure to begin with and is not merely described by a structure.

    > But this is singling out one sentence as being special

    OK. If that’s what you mean by special, that’s fine.

    > But then, of course, you can just reapply the above argument, and end up with the conclusion that that sentence isn’t singled out after all

    I’m not sure but I think this is what’s going on. Say I am denoted by D and I pick out a sentence which we’ll denote as S. Let’s also suppose that D and S are just standing in for some very complicated structures, with many substructures and so on. D and S stand in a relation. Combined, we end up with an overall structure something like {D,S}. The problem now is that we can’t uniquely pick out a structure of this form from all the other structures of that form as they are all isomorphic to each other.

    This to me is not a problem, as I have told you in our discussion on identity that what I identify with is any structure which is isomorphic to the structure I take to be me. If I managed to describe any such structure, I would hold myself to be identical to any structure at all which is isomorphic to that structure. That means that if there are distant regions of space or other universes where there is an identical copy of me saying the same sentence, then the description of that state of affairs is also of the form {D,S} (say {D*,S*}). Since I identify my saying that sentence with any structure of this form, I don’t care that I can’t pick out which structure I happen to be, because I don’t think I am any one of them in particular; I certainly have no way of knowing which one I am. Since I can’t tell the difference between any of them, and indeed since there are no structural differences between them, on my view there are no differences of any importance until they start to diverge and so differ in structure. Any one of them is an instance of me.
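
    For what it’s worth, here is the sort of thing I mean by identifying with any isomorphic structure, in a deliberately tiny Python sketch (the two structures and the mapping are invented purely for the example):

    ```python
    # Two structures on different domains that are nonetheless isomorphic:
    # a bijection between the domains carries one relation exactly onto the other.

    structure_1 = {"domain": {"a", "b", "c"}, "relation": {("a", "b"), ("b", "c")}}
    structure_2 = {"domain": {1, 2, 3},       "relation": {(1, 2), (2, 3)}}

    def is_isomorphism(f, s1, s2):
        # Check that f is a bijection from s1's domain onto s2's domain
        # that carries s1's relation exactly onto s2's relation.
        covers = set(f) == s1["domain"]
        one_to_one = len(set(f.values())) == len(f)
        onto = set(f.values()) == s2["domain"]
        image = {(f[x], f[y]) for (x, y) in s1["relation"]}
        return covers and one_to_one and onto and image == s2["relation"]

    print(is_isomorphism({"a": 1, "b": 2, "c": 3}, structure_1, structure_2))  # True
    ```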

    > If it’s a structural property, then it must be

    Is it though? You don’t seem too sure. It is on my account of structural properties, since it’s a pretty well-defined attribute it has in virtue of its structure. You could easily build an algorithm that would detect whether it has that property, for instance. But how you would show that that property is itself just a relational structure, I have no idea. Maybe it isn’t a structural property in that sense. Or maybe it is. What do you think?

    > But the important thing to keep in mind is that all of mathematics can be phrased in these terms

    OK, but is phrasing it in these terms the same thing as saying that what your phrasing is itself a relation? I mean, you can use Gödel numbers to represent arithmetic statements, but I don’t think that the numbers are themselves the statements. You need a system of rules in order to interpret them as meaning the statements, and different rules yield different numberings. That system of rules exists outside the particular representation scheme and provides the grounding you’re talking about in that case.
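
    To illustrate that last point, here is a toy Gödel-style numbering in Python (my own simplistic encoding, not Gödel’s actual scheme): the number you get for a statement depends entirely on the coding rules you choose.

    ```python
    # The same statement gets different numbers under different coding rules,
    # so the number by itself does not determine the statement.

    PRIMES = [2, 3, 5, 7, 11, 13]  # enough primes for short statements

    def goedel_number(statement, symbol_codes):
        # Encode the i-th symbol as PRIMES[i] raised to that symbol's code,
        # then multiply the results together.
        n = 1
        for i, symbol in enumerate(statement):
            n *= PRIMES[i] ** symbol_codes[symbol]
        return n

    statement = ["0", "=", "0"]

    coding_a = {"0": 1, "=": 2}
    coding_b = {"0": 3, "=": 5}

    print(goedel_number(statement, coding_a))  # 2**1 * 3**2 * 5**1 = 90
    print(goedel_number(statement, coding_b))  # 2**3 * 3**5 * 5**3 = 243000
    ```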

    I think of representations of structures as being distinct from the structures themselves, and it seems to me that a static relational model of a structure is just a form of representation. If you force yourself into having to represent the properties of relations in a purely relational form, then this seems only to be possible by bringing in these rules from outside the system. So perhaps it’s better to take a broader view of structure and include concepts other than relations, such as the rest of mathematics.

    > but that doesn’t really give us time in the ordinary every-day sense, because there is no distinguished present moment, while that’s all we ever seem to have.

    I prefer a B-theory of time. I really don’t think it’s hard to see why we would only ever seem to have one present moment even on this view.

    > Of course, if you get to say that we’re just those things, period, then you can find a grounding for structure;

    So, no. I’m saying we are just the things picked out by the particular structures we identify with. There may be an infinite number of variants of such structures, so I think we are each identical either to the class of such structures or to the distilled structure of the isomorphism itself — whatever it is that is invariant under isomorphism. We can only draw distinctions between entities we can perceive as distinct, and this perception necessitates some sort of structural difference. So if there are no structural distinctions to be drawn, I don’t think there is any need for distinctions to be drawn at all. So I don’t distinguish between me and an identical clone in another universe. I suppose that sentence doesn’t strictly make sense, because it presupposes that it is meaningful to talk of “me” and “an identical clone in another universe” in the first place, whereas what I mean is precisely to deny that that distinction is meaningful at all.

    > If you think about the mandelbrot set, I wager you’re either thinking of the pretty apple-man picture, or of some set of equations.

    Not really. Both are just ways of representing it, especially the picture. The equation is more of a definition, but I would imagine other ways of defining the same thing are possible too. Again, I don’t think mathematical objects are their representations. They are things we can say things about independently of representation, for instance the Mandelbrot set has a certain area.

    I’m going to leave it there for now.

  113. Whoops, sorry, I hadn’t noticed that you hadn’t yet replied to all of the old stuff before I made my last post… But anyway, might as well get it all over with.

    > Again, if you’re attacking my view and I’m defending it, I think the onus is on you to demonstrate some inconsistency, internal or with evidence.

    As far as I can see, that’s what I’m doing. We’re discussing whether beliefs* can possibly be beliefs. So I point to where I think the two differ. You then point out that on your conception (on which beliefs* are beliefs), I must be wrong about that difference. Do you really think that this is a sound reply? I mean, basically it amounts to answering the question ‘can beliefs* be beliefs?’ with ‘they can if they can’, which is of course true, but vacuous.

    > I think you’re referring to this argument.

    Sorry, I should have been more explicit, I meant the one in the two paragraphs starting with “Now, one might try and hope that some kind of higher-order belief could fill in where dispositional beliefs fall short:”. But also see my previous point about how such higher-order views end up in infinite regress.

    > Well that depends on where you see the burden of proof.

    The burden of proof is always with the one making a (not self-evident) claim. If I claimed the moon was made from green cheese, you probably wouldn’t really give a damn unless I could somehow substantiate that claim. Shifting the burden of proof to the one doubting it incurs the problem that then all hypotheses stand on the same footing, even things like intelligent design, and so on—because you have to prove it false.

    > It rests entirely on an appeal to an intuition that cannot be expressed, that there is an ineffable difference between belief and beliefs* (what I mean by this I explained in my last comment).

    If that difference were truly ineffable, how would we even know about cases like the visual masking experiments? We can clearly point to where the subjects had beliefs, and where they had mere beliefs*, or as I put it, mere dispositional beliefs as opposed to intentional ones.

    And besides, if you recall your own arguments from a couple of posts back, you yourself used to hold that ineffability is no barrier to structurification in regards to qualia, so now to hold that ineffability is some sort of problem seems a bit inconsistent to me.

    > That’s an external symbol and so can be disregarded as irrelevant.

    What do you think is the relevant difference? Regardless, the same argument can be made using ‘internal’ symbols of whatever kind (though I must confess I’m still somewhat mystified by this distinction you’re drawing). So, a priori, an internal symbol is just some pattern of excitations, say, of a neural network. What does it refer to? Well, could be anything, could be nothing (as in the stone argument). Merely possessing this pattern of excitations alone certainly does not suffice to tell. Rather, we need some sort of internal homunculus in order to make the reference come out true, who uses this pattern as standing for something else.

    And of course, you could program a computer such that whenever this pattern of excitations comes up, it does something, like produce a description of whatever it is that the pattern is supposed to refer to. But then, this description again would need to be interpreted, in order to be meaningful. (This is not just a bald assertion: without any interpretation, there is simply no fact of the matter regarding what it’s about; the language must be understood by someone, they must have familiarity with the broad context, and so on.)

    > I do not agree that there is only reference when it is interpreted by an external agent. Otherwise how would your neurons refer to anything?

    Well, that’s the problem, isn’t it? But if you think about it, it’s just as clear that the self-symbol’s reference is just as underdetermined as what a given physical system is computing; it’s basically the same argument again.

    > The impossibility of constructing a rational argument or empirical experiment to show that it is not conscious while you are ought, in my view, to give you pause and reason to reconsider your position.

    I think there is a very rational argument that shows it isn’t conscious (based on computation alone); hence, I do believe that if I am a simulation, I am not conscious in virtue of that computation, but due to factors external to this computation.

    > It’s a disanalogy, because ex hypothesi the simulated self processes information in a way precisely analogous to you.

    An analogy is never analogous in all details (otherwise, it’d just be a different way to say the same thing), but only along some relevant line. The relevant line here is that knowledge can’t be decided third-personally, and not even first-personally (in the Gettier case, you believe yourself to know that your cell phone just rang); and that thus, the pronouncements of my simulation can’t suffice to convince me of its conscious nature.

    > No it isn’t. For finite automata, there are many algorithms that could produce the same sequence of states.

    For a desktop computer, there are many algorithms that make it produce the same output (the output is the sequence of states, the execution trace, of the system). Consider what an algorithm does: based on an instruction, the state of the system is changed; thus, a set of states is traversed, as dictated by the algorithm. This is just what the transition table does in the FSA.

    > In my view these are all different algorithms, whereas you appear to see them as all being the same.

    No, they would also all be different to me: the first one would be an FSA whose states are labelled by Fibonacci numbers, whose transition table merely specifies the next state to move to, for instance, while the second one would be, well, more complicated. They’d all have the same execution trace, though, which is nothing else but to say that they all output the Fibonacci numbers.
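
    To spell out the first of those in code (a minimal sketch of my own; the particular labels are of course arbitrary):

    ```python
    # A finite-state machine driven purely by its transition table: the table
    # is all there is, and running it just traverses a sequence of states.

    def run_fsa(transition_table, start_state, steps):
        # Return the execution trace, i.e. the sequence of states visited.
        trace = [start_state]
        state = start_state
        for _ in range(steps):
            state = transition_table[state]
            trace.append(state)
        return trace

    # States labelled by (distinct) Fibonacci numbers; the table merely names
    # each state's successor, with no arithmetic anywhere.
    table = {0: 1, 1: 2, 2: 3, 3: 5, 5: 8, 8: 13, 13: 0}

    print(run_fsa(table, 0, 7))  # [0, 1, 2, 3, 5, 8, 13, 0]
    ```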

    > Your view has the transitions as arbitrary and meaningless, whereas my view has the transitions as necessary and not subject to arbitrary substitutions or interpretations.

    But you said you would be happy with a formalisation in terms of Turing machines (I would be, too). How are the transitions there not ‘arbitrary and meaningless’? Based on the state of the tape, plus that of the read/write head, the next state is entered; this could be completely specified by a transition table, and hence, is exactly a finite automaton (again, for bounded tape lengths).

    > But you can only produce the execution trace by performing the computation!

    Yes! Hence anything that produces the execution trace is performing the computation!

  114. > There may be no fact of the matter. In day to day scenarios, R and R* may each reflect different aspects of the structure of the world — they may even be isomorphic to each other but we’re just confusing our terms.

    But the thing is, I can choose any structure, even non-isomorphic ones, and I’ll be right on just the same grounds as you are. If (R,D) is the structure of the world, then it has equally well structure (R*,D) for any R*. And if there is indeed no fact of the matter, then all we know about the world simply boils down to the cardinality of D.

    > In the unrealistic case that we’re talking about specifying the structure of the world in complete detail, so we’re talking about the actual structure of the world with no simplification or mistakes, then one of us is right in case the proposed structure is isomorphic to the structure of the world.

    But this is simply not provided for by structural metaphysics. On what grounds would you say that (R*,D) is not the structure of the world? What could falsify any proposed structure? Because if it is a structure of the same cardinality as the ‘true’ structure, then it is just as well a structure the world has as that ‘true’ structure.

    > This makes sense only on the view that the world actually is a structure to begin with and is not merely described by a structure.

    Certainly, if the world is a structure, it’s also described by that structure?

    > The problem now is that we can’t uniquely pick out a structure of this form from all the other structures of that form as they are all isomorphic to each other.

    No, that’s not the problem. Even structures that are non-isomorphic can’t be excluded. The relation you and your sentence stand in ensures that there is a domain of the right cardinality for that relation to be instantiated. But if there is that domain, then all of the other relations definable on that domain likewise exist—they are merely constructed by various groupings of its elements. Hence, in order to intelligibly assert the relation between you and that sentence, you must simultaneously accept the existence of all other relations definable on the domain of that relation, exactly on par with the one you originally asserted.
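
    Just to make that concrete, here is a toy Python sketch of my own (the two-element domain is arbitrary):

    ```python
    # Once a domain is given, every binary relation on it is just some subset
    # of the ordered pairs over it; asserting one relation already commits you
    # to a domain on which all the others are definable.

    from itertools import chain, combinations, product

    domain = {"a", "b"}
    pairs = list(product(domain, repeat=2))  # all ordered pairs over the domain

    def all_binary_relations(pairs):
        # Every subset of the set of ordered pairs is a binary relation.
        return chain.from_iterable(
            combinations(pairs, r) for r in range(len(pairs) + 1)
        )

    relations = [set(r) for r in all_binary_relations(pairs)]
    print(len(relations))  # 2**4 = 16 binary relations on a two-element domain
    ```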

    It’s really no different than with the stone argument: you can consider a physical system to compute a simulation of you writing this sentence; but you could equally well consider it to compute a simulation of you writing any other sentence, or of you walking your dog, or of anything else (cardinality permitting). Hence, you can’t pick out that structure with a structural predicate, any more than you can use a second computation to fix the first one (since the second would itself be subject to arbitrary interpretation, and so on).

    > OK, but is phrasing it in these terms the same thing as saying that what your phrasing is itself a relation?

    Sorry, could you rephrase that? (I don’t mean the typo, I genuinely don’t get what you’re trying to say here.)

    > I mean, you can use Gödel numbers to represent arithmetic statements, but I don’t think that the numbers are themselves the statements. You need a system of rules in order to interpret them as meaning the statements, and different rules yield different numberings.

    The numbers are themselves the statements, in as much as they are just a different alphabet representing them—you likewise need a system of rules to interpret the original strings of symbols the Gödel numbers code for.

    > I prefer a B-theory of time. I really don’t think it’s hard to see why we would only ever seem to have one present moment even on this view.

    I think most people would find that more confusing on the B-series view; for instance, it troubled Einstein greatly that while general relativity seemed most naturally to lend itself to a block-universe view, there is nothing in it to explain why we only ever experience one single point in time.

    > I’m saying we are just the things picked out by the particular structures we identify with.

    And that’s not consistent. If you say you are structure (R,D), then I can say you’re structure (R*,D), which is not isomorphic to the former, with equal justification, just as if you say that the stone computes C, I can say that it in fact computes C*, with the same justification. You’re still trying to set up some structural many-worlds universe; but they’re not different worlds, they’re predicated of the same worlds (and things). There’s just no structural predicate that could say that (R,D) is ‘you’ that would not also single out (R*,D) as ‘you’.
