Whereof we cannot speak

The latest JCS (Journal of Consciousness Studies) features a piece by Christopher Curtis Sensei about the experience of achieving mastery in Aikido. It seems he spent fifteen years cutting bokken (an exercise with wooden swords, don’t ask me), becoming very proficient technically but never satisfying the old Sensei. Finally he despaired and stopped trying; at which point, of course, he made the required breakthrough. He needed to stop thinking about it. You do feel that his teacher could perhaps have saved him a few years if he had just said so explicitly – but of course you cannot achieve the state of not thinking about something directly and deliberately. Intending to stop thinking about a pink hippo involves thinking about a pink hippo; you have to do something else altogether.

This unreflective state of mind crops up in many places; it has something to do with the desirable state of ‘flow’ in which people are said to give their best sporting or artistic performances; it seems to me to be related to the popular notion of mindfulness, and it recalls Taoist and other mystical ideas about cleared minds and going with the stream. To me it evokes Julian Jaynes, who believed that in earlier times human consciousness manifested itself to people as divine voices; what we’re after here is getting the gods to shut up at last.

Clearly this special state of mind is a form of consciousness (we don’t pass out when we achieve it) and in fact on one level I think it is very simple. It’s just the absence of second-order consciousness, of thoughts about thoughts, in other words. Some have suggested that second-order thought is the distinctive or even the constitutive feature of human consciousness; but it seems clear to me that we can in fact do without it for extended periods.

All pretty simple then. In fact we might even be able to define it physiologically – it could be the state in which the cortex stops interfering and lets the cerebellum and other older bits of the brain do their stuff uninterrupted. We might then develop a way of temporarily zapping or inhibiting cortical activity so we can all become masters of whatever we’re doing at the flick of a switch. What’s all the fuss about?

Except that arguably none of the foregoing is actually about this special state of mind at all. What we’re talking about is unconsidered thought, and I cannot report it or even refer to it without considering it; so what have I really been discussing? Some strange ghostly proxy? Nothing? Or are these worries just obfuscatory playing with words?

There’s another mental thing we shouldn’t, logically, be able to talk about – qualia. Qualia, the ineffable subjective aspect of things, are additional to the scientific account and so have no causal powers; they cannot therefore ever have caused any of the words uttered or written about them. Is there a link here? I think so. I think qualia are pure first-order experiences; we cannot talk about them because to talk about them is to move on to second-order cognition and so to slide away from the very thing we meant to address. We could say that qualia are the experiential equivalent of the pure activity which Curtis Sensei achieved when he finally cut bokken the right way. Fifteen years and I’ll understand qualia; I just won’t be able to tell anyone about it…

27 thoughts on “Whereof we cannot speak”

  1. It’s probably much harder to do when you are pretending. I learned to drive and got good at not thinking about it. I also did it with writing, typing, playing the guitar, cycling, and many other skills where the feedback from doing a real action was palpable – one easily learns from mistakes when doing a real action. If one is only pretending to slay someone with a wooden sword there is no feedback, so it takes a lot longer, I imagine. And with no effective feedback from a teacher either, the process is drawn out.

    I think this is less to do with flow, and more to do with what is sometimes called “muscle memory”. I recently learned a new song on the guitar that had a syncopated rhythm I wasn’t used to. It took me about half an hour to get the muscles of my hands doing the right actions so that I could concentrate on the singing. Another half an hour and I could competently sing and play the tune. The movements now happen without me thinking about it. In fact, if I think about it too much, I make a mistake.

    We all develop physical competence in many ways all the time. Mastery takes a little longer of course. But it’s more or less the same process.

    You seem to be overthinking this one.

  2. ‘Mindfulness-with-qualia’ could be a state close to ‘flow’, if what qualifies such states is the shut-down of higher order interference.
    But to me it seems that there is an important distinction between the two states, indeed by some measures the phenomena of flow and qualia could be considered almost polar opposites…
    By my belief ‘flow’ is a state of being absorbed – wholly immersed in and interacting with – a spatiotemporally structured environment. Higher-order chatter is shunted and thereby muted, making it possible for such a state to come about. Attention is naturally given to the particulars of the interaction, with no distractions. Since the state is softly temporally bounded and not absolutely coherent, i.e. it is not like an absence seizure, some memory of the experience is oft-times retained; a ‘ghost’ that can be conjured up and reported upon in some detail.
    The essence of qualia, on the other hand, seems to me to be that which is altogether beyond the scope of spatiotemporal pattern processing (qualia comprising the ‘texture’ of experiences). Obviously ‘primary’ qualia are structured and modulated with every conscious activity, yet the ultimate experience of ‘pure’, ‘elementary’ qualia (e.g. red) might require draining most information from first-order processing as well (if that could be feasible in principle and in practice, without mixing modalities, falling asleep, or worse – triggering a seizure).
    Taking the risk of being virtually stoned (in the concrete sense) by those advocating universal consciousness, I think reported experiences of dissolution of self might correspond to progressively information-depleted forms of cognition (characterizing some spectral bands of neural activity in these scenarios). A subject attaining the hypothetical ideal state, if not tuned exclusively internally, and if he is also subjected to some non-modulated external stimulation, might experience extremely amplified raw qualia… which might not be very enlightening…

  3. What is watching? What does it mean to watch? Are the “second-order” voices quiet when you watch? Quietude is not absence. You can watch with others and share experiences where no one speaks and no one interrupts. This kind of experience is like flow. Instead of speaking, instead of interrupting, instead of suggesting alternative courses of action, all of the second-order thoughts are quiet and “we” watch ourselves act. In my experience, I am more aware of the quiet second-order possibilities in a state of flow. But none of them come forward to dominate attention. There is no forcing of attention, there is no distracting either, there is simply watching.

    Which brings us to qualia. The idea of “forcing” attention is completely weird. There is no physical force that directs attention. All of the second-order thoughts that interrupt and that distract are qualia too. What else could they be? A feeling of boredom, which prompts one to pull out their mobile phone to play a game or check for messages, is a string of qualia experiences.

    The meanings and experience of action are non-physical and cannot be separated from the underlying qualia. That something is specific does not mean it is not a qualitative experience. How a qualitative experience corresponds to physical phenomena only matters if we care about the corresponding qualities. If I check my mobile phone in a dream, or in the Grand Theft Auto video game, or in the physical world, what matters to me is not the physical, or computational, or neuro-chemical correspondence but the quality and kind of the experience itself. I am seeking an experience and responses to my actions. And while the corresponding physical, computational, or neuro-chemical phenomena are critical to the kind of response that will occur, they are irrelevant to the fact that experiences will occur. My battery may be dead; in GTA I may take pictures of the GTA environment; in my dream, my dead grandmother may start speaking to me through the phone before I even dial the number. The underlying physics, computations, or neuro-chemical processes produce a kind of result. But that outcome is not the experience itself; the outcome corresponds to the experience.

    A little kid that plays with a dead cellphone and makes pretend calls on it has an experience that does not correspond to the physical phenomena of a non-working phone. Correspondence is not necessary for experience. Experiences are made from qualia. Correspondence or the absence of correspondence does not change the fundamental way that qualia are experience.

  4. Clearly this special state of mind is a form of consciousness (we don’t pass out when we achieve it)

    Not especially clear. People have driven cars while sleepwalking.

    Sounds like unconsciousness to me. The brain ceasing to read itself as part of its input. And sure, becoming more efficient as a machine in recouping that processing power.

  5. Is “flow” a lack of second-order consciousness, or a lack of feedback from the second order back down to the first? Being willing to passively observe one’s own first-order thoughts without trying to second-guess them.

  6. From my (now far past) experience in martial arts, and my more current struggles with guitars, the state of flow (back then the word was Mushotoku, which is a different concept, but somehow related) is something I’ve experienced in different settings, and, as you would expect, I do actively try to reach it as much as I can. That’s why I currently struggle with the guitar.

    Anyway, it’s really interesting trying to plug it into the views about qualia and consciousness in general. Also very tricky, because our introspective experiences/conclusions are so different that it becomes possible to say anything and its opposite. When I read Peter’s post, I follow quite happily; then Jayarava, Ron, Calvin and Micha added different/incompatible interpretations and I follow you all just as happily. Everyone is right, in some sense!

    To my surprise, if I ask myself what I myself believe, I find that my experience nicely slots into my own theories (no surprise!) and is different from all the opinions I’ve just found understandable (or even agreeable). On the first/second-order perspective, I don’t think that second-order thoughts shut down, nor that they stop interfering; what I recollect is very different: second-order thoughts do chip in, but they do so constructively. There is no conflict between the automatic and deliberate processes; they, for once, work in cooperation without cancelling each other’s contributions.
    I have plenty of fresh examples in memory from improvising on the guitar, where explicit, high-order plans occurred to me, making me work to “set up” a change. I found myself following a plan which I had no trouble expressing verbally (just after, of course!): “if you start passing through the major 7th instead of the flat 7th, you can then re-interpret the next three chords in a new way, etc.” I also have some well-imprinted fighting memories where my deliberate thought processes “suggested” some cunning ways of defeating a particularly resilient opponent: they worked a treat. So far, nothing really new, but I suspect that I simply don’t remember as “in the flow” all the other times where I made a mistake instead…

    If I’m right, this would make physiological characterisations of flow impossible (as it’s both chance and context dependent), and does suggest that saying “I was flowing/in the zone” is at least partially a post-hoc confabulation.
    Because/when everything turned out well (and it feels good to succeed) we call it “flow”. When it doesn’t work out, the feelgood factor is limited, so we don’t remember the experience with the same level of “fondness”.
    So that’s a slightly depressing view to start the week. :-/

  7. “Qualia, the ineffable subjective aspect of things, are additional to the scientific account and so have no causal powers”

    Oh, I get you’re going for a Zen paradox thing here… but I don’t know if qualia are necessarily additional to the scientific account. I mean IIT and Orch-OR are both either metaphysically neutral or panpsychic and they seem to have managed some experimental success. One might say the latter predicted quantum biology.

    Meanwhile last I checked the protege of the Churchlands, Graziano, was off playing with hand puppets. 😉

  8. The SMTT experiment suggests that a quale can be directly transferred from one person to another because both have the same kind of brain mechanism for creating the quale under a special circumstance.

  9. Cool live clip from Stan Lee’s old show – someone in the samurai tradition who can cut BB gun pellets in half with a sword:

  10. Both Peter Watts’s Blindsight and Project Itoh’s Harmony can be seen as novels about the joys of flow 🙂

  11. Peter

    “qualia .. are additional to the scientific account and so have no causal powers; ”

    I think you’ve said this previously and I never understood it then. What is the scientific basis for this claim? Or does it simply rest on an assumption that as they are not “material” they can’t be?

    As for the rest of your article, I read that Zen masters see a kind of perfection in being like a toad, staying perpetually in the present and worrying about nothing. Seems fine to me! But surely the overwhelming majority of what the brain does is not linked to “thought” at all, which is an involuntary by-product of the mass of activity in the organ.

  12. “…To me it evokes Julian Jaynes, who believed that in earlier times human consciousness manifested itself to people as divine voices; what we’re after here is getting the gods to shut up at last.” It is obvious to many that this is how God designed it.

    Qualia do not fit any scientific logic, so they seem superfluous – but neither did gravity at one time. You would not have science and mathematics if there were no qualia… though I can imagine a world where people do science but have no qualia…

    The ex-president of Mexico thinks it unfathomable that 50 percent of Hispanics including Mexican-Americans voted for Donald Trump in the Nevada primary (maybe they were gambling on him?). The logic is that Mexican-Americans love being in the United States so much that building a wall will prevent them from going back.

    Secondary consciousness is just an evolutionary add-on for our species which gives us language that allows us to share qualia and invent universals like God and imaginary leaders we see on TV.

    Human hand waving and hand puppets all require secondary consciousness.

  13. Qualia: When the wood chipper, after breaking down the world outside of it (once fed in), tries to break down its own teeth with its own teeth and obviously fails at doing so, it takes it instead that it’s encountered something unworldly.

  14. @ Callan: This wood chipper explanation, unless it’s going to explain why qualitative illusion is generated from quantitative processing, seems to rely on a type of brute fact explanation – that there has to be a materialist explanation.

    The escape clause seems to depend on qualia being “supernatural” or “unworldly” but I think this is merely a cultural artifact of Western thinking.

    From what I’ve read of Chinese culture, for example, there was no division between mental and physical, between qualitative and quantitative aspects. What’s more natural than one’s qualitative experience?

    For example I personally find it far easier to swallow consciousness as a fundamental primitive for living organisms that grow old and enter oblivion than I do claims about uploaded “minds” that live forever inside Turing Machines.

    Even the eliminativist claim, while more respectable for sticking with the tenets of materialism all the way through, requires all my thoughts to be illusory – might be true but extraordinary claims need extraordinary evidence.

  15. Sci,

    might be true but extraordinary claims need extraordinary evidence.

    Well there’s the teeth trying to chew themselves.

    Let me go off topic for a moment – doesn’t it make sense that a woodchipper can’t chew up its own teeth? That a processor can’t process its own processing? Like the teeth can’t reach themselves to chew themselves up, at a certain point the processor can’t reach the process it is? At some point it’s too tight an Ouroboros circle.

    I’d get it if you’d say ‘that’s another topic and doesn’t apply’. But I’d be surprised if you don’t agree that there’s some kind of self-referential limit to processing.

    Back to the main topic, I can only say that what I’m talking about is based on that off topic I raised.

  16. Sci, I think Callan is arguing (but please correct me if I get it wrong) for something akin to the principle of irreflexivity in Buddhist logic: eyes can see anything but themselves, fingertips can touch anything but themselves, the wood chipper chops up anything but itself. More generally, an entity can’t bring itself within the purview of its own operation. Hence, mind can cognize anything but itself, and thus, must seem mysterious to itself, as it is something that it can’t cognize; and if the mind further believes that it is able to cognize anything cognizable at all, it follows that it must itself be something fundamentally mysterious, and hence, different from all the things in the world.

    The problem is that it’s not clear that one can apply this principle so broadly. One of Callan’s own examples seems, to me, to actually be a counterexample: because a processor of course can process its own processing. It’s easy (well, conceptually) to build a model of a processor (or of a whole computer) using software running on that very same computer. So, there’s no mystery regarding the computer’s function to that same computer, and hence, the applicability of the same reasoning to the mind seems dubious. (I’m not sure how much the mind is like a computer, but I do think it’s much more like a computer than it is like a wood chipper.)
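
    For what it’s worth, here is a minimal sketch of the point in Python – all names are mine and purely illustrative, not anyone’s actual architecture: a toy one-register machine modelled in software. Nothing prevents a model like this from running on the very kind of processor it describes.

        def run(program, inputs):
            # Interpret a list of (op, arg) instructions on a one-register machine.
            acc, pc, out = 0, 0, []
            feed = iter(inputs)
            while pc < len(program):
                op, arg = program[pc]
                if op == "LOAD":
                    acc = arg
                elif op == "ADD":
                    acc += arg
                elif op == "READ":
                    acc = next(feed)
                elif op == "WRITE":
                    out.append(acc)
                pc += 1
            return out

        # Example: a three-instruction program that adds 5 to its input.
        add_five = [("READ", None), ("ADD", 5), ("WRITE", None)]
        print(run(add_five, [37]))  # -> [42]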

  17. @ Jochen: Ah, thanks. Does it then follow that subjective experience is what the failure to cognize amounts to?

    I guess that’s where my confusion lies. But of course, eventually, the science will clarify or just reach the limit of what we can say regarding consciousness.

    Maybe then some group will hire philosophers to lobby on their behalf that the scientific method should be changed…at least if the Multiverse faithful pull it off. 😉

  18. Sci, I think Callan is arguing (but please correct me if I get it wrong) for something akin to the principle of irreflexivity in Buddhist logic: eyes can see anything but themselves, fingertips can touch anything but themselves, the wood chipper chops up anything but itself.

    Yep, that’s the stuff!

    More generally, an entity can’t bring itself within the purview of its own operation. Hence, mind can cognize anything but itself

    It’s a question of the point where it runs aground. The brain can cognize that it physically goes and gets food, does activities at a gross mechanical level. But as it tries to grasp further at itself in finer resolution, eventually the attempt of its processors to process their processing fails, due to getting closer to a self-targeting that cannot physically happen.

    So within this theory qualia are like the results of a dog chasing its tail – the roots of qualia cannot be discovered thoroughly, for they always mysteriously pull away each time we attempt to circle harder to find them. One might even suggest that the roots of qualia pull away exactly as hard as we pursue, like a dog’s tail pulls away harder the more the dog pursues it.

    The thing is, the woodchipper can never chew itself up, and the processor in such a state can never be provided self-verifiable evidence of its processordom. At best it can only make an educated guess and commit to that educated guess – i.e., an educated leap of faith.

    One of Callan’s own examples seems, to me, to actually be a counterexample: because a processor of course can process its own processing. It’s easy (well, conceptually) to build a model of a processor (or of a whole computer) using software running on that very same computer.

    That’s not processing itself – it’s generating a data array and manipulating/processing that, not engaging its engaging. You could even call what it’s working with an analogy. It’d be like you drawing a picture of yourself and then saying that, in knowing the picture, you know yourself.

    What you’re thinking of is like a woodchipper with a processor in it that simulates the woodchipper inside it. But while maybe they could make the simulation use its teeth to chew at its own teeth, the wood chipper is not grinding at its own actual teeth. It always runs short of that capacity.

    What this means is that an intelligent processor might be able to model either a woodchipper not being able to chew up its own teeth or a processor not being able to directly process its own processor; but being unable to do so itself, it would at default treat these models as ‘analogies’ – ones that don’t apply to it, the intelligent processor, for lack of evidence as such (evidence it can never actually get!).

    That’s at default. When considering the ‘analogy’ from a stage further out, the intelligent processor might assess that such a processor would be locked into that default by thinking it isn’t a processor, for lack of evidence (evidence it cannot obtain). That such a default wouldn’t signify that something had failed to be proven – instead, it would just be another example that it cannot be proved to the processor.

    So the intelligent processor would be left with this example of a processor that denies it’s a processor, eternally, for lack of evidence that it’s a processor – evidence it cannot obtain. It’s locked like this in an eternal denial that’s entirely spurious.

    In response the intelligent processor, for thinking it isn’t a processor, might then consider as a possibility: ‘Well, I don’t think I’m just a processor, like that processor eternally does… so maybe this applies to me? I keep trying to grasp parts of myself and am unable to, like the dog chasing its tail analogy.’

    So the intelligent processor might conclude that if it were indeed a processor, it could only ever make an educated guess about itself rather than be able to get direct evidence, so based on that conclusion it makes an educated guess and commits to that – a leap of faith that it’s a processor, locked out from the evidence it’s a processor simply for being one.

    Alternatively the intelligent processor says the processor could build a model of a processor inside itself and that’s processing itself (though one might just call that an analogy). And so, perhaps ironically, the whole example (the whole model) is assigned to being mere analogy.

  19. Sci:

    Ah, thanks. Does it then follow that subjective experience is what the failure to cognize amounts to?

    Yes, that’s also one of my hangups with most eliminativist accounts (and particularly those kinds where the mind is just somehow deceived about its own functioning)—how does this lack of knowledge, or understanding, bring about (apparently) qualitative, subjective experience (or the illusion thereof)? One response to this might be that it doesn’t, all it supplies is the apparent mystery of these subjective states; but this, then, I wouldn’t really call ‘eliminativism’ anymore, but rather, a sort of panexperientialist/dual aspect theory, depending on whether the subjective aspects accompany (in some entirely nonmysterious way) most or all things in the world, or just those kinds of things that happen in brains.

    Callan:

    That’s not processing itself – it’s generating a data array and manipulating/processing that, not engaging its engaging.

    Not sure I see the distinction—processing anything is ‘generating a data array and manipulating that’. And anyhow, it seems to me that if we could do with the mind what a computer can do with (a model of) itself, then the working of the mind would not hold any more mystery to us—consider a computer with a model of its own internal architecture: it could answer any question about its own workings correctly. That’s all I’d hope to be able to do regarding the mind (actually, it’s substantially more—I would claim to understand how a computer works, but I certainly don’t have a complete model of even the simplest pocket calculator that I can fully comprehend—but see below).

    And still, irreflexivity doesn’t apply to everything—while a forklift can’t lift itself, an airplane can; while a xerox machine can’t produce copies of itself, a von Neumann machine can; and so on. So whether a mind is the sort of thing that can bring itself within the purview of its own operation seems to be an open question.

    But anyway, I think your ideas can be precisely expressed in a formal way, and this will also allow us to better gauge their significance for the mind-body problem. Consider a general automaton to be something with a specific set of inputs, a set of internal states, a rule for transitioning between these internal states, and a set of outputs. Each step of operation of such an automaton may include: receiving an input; executing a state change; producing an output based on the state it is in.

    This is a very general description, but it suffices to prove some interesting results. In particular, we can formalize what it would mean for such a system to ‘understand itself’, that is, to have a theory of its own inner workings: it ought to be able to give an output that enables one to decide what output would be given if a certain input were supplied. Call this an ‘intrinsic theory’ of the automaton.
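
    A minimal sketch of this definition in code (the rendering and names are mine, purely illustrative):

        from dataclasses import dataclass
        from typing import Any, Callable

        @dataclass
        class Automaton:
            state: Any
            transition: Callable[[Any, Any], Any]  # (state, input) -> new state
            output: Callable[[Any], Any]           # state -> output

            def step(self, inp):
                # One step of operation: receive an input, execute a state
                # change, produce an output based on the state it is in.
                self.state = self.transition(self.state, inp)
                return self.output(self.state)

        # Example: a parity checker. An intrinsic theory of this automaton
        # would be an output from which one could decide, for any input
        # sequence, what it would answer.
        parity = Automaton(0, lambda s, i: s ^ i, lambda s: "odd" if s else "even")
        print([parity.step(b) for b in (1, 0, 1)])  # -> ['odd', 'odd', 'even']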

    However, one can now use the diagonalization method (a type of proof that is also used to establish famous results like the uncountability of the real numbers, the unsolvability of the halting problem, and Gödel’s first incompleteness theorem) to establish the following result:

    In general, no complete intrinsic theory of a universal computable system can be obtained actively, i.e., by self-examination.

    (In this form, this is due to Svozil, Randomness and Undecidability in Physics, p. 185.)
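
    To make the diagonal step concrete, here is a toy rendering in the halting-problem form mentioned above – the names are hypothetical, and halts() is exactly the thing the theorem says cannot exist:

        def halts(p, x):
            # The assumed complete intrinsic theory: True iff program p
            # halts on input x. (No total function can actually do this;
            # that is what the diagonal argument refutes.)
            raise NotImplementedError

        def contrary(p):
            # The diagonal move: do the opposite of whatever the theory
            # predicts about running p on itself.
            if halts(p, p):
                while True:
                    pass      # predicted to halt, so loop forever
            return            # predicted to loop, so halt at once

        # Feeding contrary its own source contradicts halts() either way,
        # so no complete self-theory can be obtained by self-examination.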

    Svozil’s result, I think, comes close to being a formal version of the sort of thing you’re arguing for. So, is that it, then? Can we stop worrying about our inability to understand our own minds, and get on with our merry lives? I think this is a bit too fast.

    The formal nature of the above result also allows us to highlight its limitations: first of all, we don’t know whether the mind is sufficiently like an automaton for the above to be applicable, and if it isn’t, we don’t know whether similar considerations hold for it. I think that parts of the mind are likely to be describable in some such form; but I also believe that it’s not likely to be the whole story.

    Second, and more importantly, the above notion of ‘intrinsic theory’ is unlikely to be what we’re after when we’re talking about an ‘understanding of the mind’. Again, see above: I don’t have an understanding of any computer on the level of being able to give a full input-output mapping for it; but that doesn’t preclude me from understanding how it works, its general principles of operation, that sort of thing. It’s ultimately this sort of understanding we’re after regarding the mind, not something on the level ‘neurons A, B, C excited neuron D via their synaptic gaps, etc.’ which would not only be vastly unmanageable, but also, probably entirely uninformative. But it’s only such a theory the above forbids; a theory of the general sort of way how a mind works is not within the purview of the theorem.

    Indeed, it’s not clear that a full theory of how a brain works, input-output wise, or on the level of neuronal excitations, is even going to tell us much about how minds are produced—the Blue Brain project might succeed in simulating a whole brain, but it’s not at all clear that this then solves all the mysteries of the mind (and personally, I’d bet against it).

    Finally, we’re not limited to self-examination when working out a theory of the mind. There are other minds that we can observe; there are models of minds that we can construct. It’s well known that access to another copy of the same structure, or something encoding that structure, allows one to circumvent the limitation given above—which was indeed how von Neumann was able to produce his theory of self-reproduction. This is what enables something like the Blue Brain project, in the end.
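
    The classic concrete illustration is a quine – a program that, given an encoding of its own structure, reproduces itself exactly, which is the trick at the heart of von Neumann’s construction (a standard Python example, nothing specific to Blue Brain):

        # The string s encodes the program's own structure; the program
        # then uses that encoding to print an exact copy of itself.
        s = 's = %r\nprint(s %% s)'
        print(s % s)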

    So I think that, on the whole, you’re taking an interesting analogy a bit too far, and derive some unwarranted conclusions from it.

  20. Jochen,

    If there weren’t companies trying to crack the human brain for a profit, I would fold my arms in disagreement but I wouldn’t press you on not taking a leap of faith. However, there need to be people who understand this and are not profit-driven. Or at least people who remember it’s a theory and then start to see how companies in future begin to use it to manipulate profit, and are not profit-driven. So I will press.

    So I think that, on the whole, you’re taking an interesting analogy a bit too far, and derive some unwarranted conclusions from it.

    This is what, in my example, the first processor concludes. It then continues to commit to its conclusion that it’s not a processor but something else (despite having no evidence towards that, either – just a need to say it’s something so as to get on with its merry life).

    Your computer model idea does not convince the processor at all about its nature. That’s because a ‘model’ is merely a derived element of the system at work.

    Take a transistor – it has a wire running into it that carries current, a wire running out that could take that current, and a third wire which, if current is passed through it, begins a saturation process inside the transistor (one I read about years ago but have now forgotten the details of).

    So there are two layers to this:
    1. A complicated saturation event
    2. Input ‘on’, output ‘on’, input ‘off’, output ‘off’.

    The second is a derived event of the first.

    There is nothing about the second event, no matter how many of them we link together, that informs the derived event of its origin. The on/offs don’t somehow sense their origin. There is only on/offs and that’s all there is for them, as much as touch can’t feel touching. Your model doesn’t provide conclusive evidence of its origin, because the model is only implemented in the second stage. The whole on/off arrangement won’t ‘see’ a range of on/off data (one that models a saturation) as its very origin. It’ll just ‘see’ a bunch of on/off states – ‘seeing’ the on/off states here being like seeing a thought or idea.

    Or analogy.

    And so the default processor will dismiss such a model as being mere analogy for lack of evidence. An analogy taken a bit too far.
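
    A crude sketch of these two layers in code (every name and number is mine, purely illustrative): a continuous ‘saturation’ level underneath, and a derived on/off level that carries no information about where it came from.

        def saturation(gate_voltage):
            # Layer 1: a toy continuous model of channel saturation,
            # rising from 0.0 to 1.0 as the gate voltage passes a threshold.
            return min(max((gate_voltage - 0.7) / 0.3, 0.0), 1.0)

        def logic_level(gate_voltage):
            # Layer 2: the derived on/off event. Everything built from
            # these bits sees only True/False; nothing at this level
            # encodes the saturation curve it derives from.
            return saturation(gate_voltage) > 0.5

        print(logic_level(0.2), logic_level(1.2))  # -> False True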

    I’m describing the technical details of how a processor would end up at the same behavioral result as your own response, in case it causes an itch. That itch is the starting up of the next stage, the recursive processor (the ‘intelligent processor’ from my previous post – which I named poorly, I realise in retrospect).

    If there is no itch, then I’m not sure if this is like being able to curl one’s tongue – either a person can do it or they can’t. I get an itch from it, anyway. It seems to be a recursion thing – I’ll give a joke I found:

    The recursive psychiatrist joke
    At his wit’s end, a guy walks into a psychiatrist’s office and tells him all his problems (which are far more difficult than yours). In response, the psychiatrist tells him the Recursive Psychiatrist Joke and then says, “See? It could be worse. You could be that guy.”

    This could be modified into the recursive processor joke:
    At his wit’s end, a guy walks into a psychiatrist’s office and tells him he thinks he is just a processing machine. The psychiatrist tells him that’s just the punchline of the joke and as such the man doesn’t have to think he is a processing machine.

  21. Callan:

    This is what, in my example, the first processor concludes. It then continues to commit to its conclusion that it’s not a processor but something else (despite having no evidence towards that, either – just a need to say it’s something so as to get on with its merry life).

    Well, but the processor can arrive at a theory of its own functioning without problems, see above; so it would quite rightly conclude that it is a processor working that particular way.

    So there are two layers to this
    1. A complicated saturation event
    2. Input ‘on’, output ‘on’, input ‘off’, output ‘off’.

    The question is, though, which ‘level’ is relevant for the functioning of the processor? If it’s just the latter—which is supported by the fact that you can exchange the transistor for a functionally equivalent part that doesn’t reproduce the dynamics of the first level—then understanding of the former is not necessary for a self-understanding.

    And regardless, the processor is perfectly capable of understanding the physics of p-n junctions, no?

    Anyway: recursion—and by extension, self-understanding—is not necessarily paradoxical. One might want to be a pessimist and believe that it is, in the case of the mind; but one can just as well be an optimist and believe that while there’s a genuine mystery, it will eventually yield to analysis, as so many mysteries have in the past.

  22. Jochen,

    Well, but the processor can arrive at a theory of its own functioning without problems, see above; so it would quite rightly conclude that it is a processor working that particular way.

    I’ve just described how it, by just thinking/processing alone, cannot arrive at a theory of its own functioning – see above! Why do you take it that it engaging a model will mean it’s detecting the very nature of itself?

    I mean, where did the model come from? Humans? Why take them as sources of absolute truth?

    So how does it confirm the model does indeed describe it, rather than the model being for a machine when it thinks it has a soul and so thinks that model has nothing to do with it?

    It seems to me it can’t. It’s a process that is the result of the transistors – it can only engage other results of transistors. It can never engage what it is, only the results of what it is, just like you can’t see seeing.

    So how would it confirm that your model is what it is, when it’s stuck in that predicament?

    Again: to the processor, your model would just seem an analogy.

    Note: if it uses outside equipment to scan itself – that requires a leap of faith to trust the equipment, which is what I’ve been talking about.

    If it’s just the latter—which is supported by the fact that you can exchange the transistor for a functionally equivalent part that doesn’t reproduce the dynamics of the first level—then understanding of the former is not necessary for a self-understanding.

    If the self-understanding were perfect, sure. When the self-understanding is false, then it’s definitely needed. Except it’s impossible without a leap of faith.

    And regardless, the processor is perfectly capable of understanding the physics of p-n junctions, no?

    Sure. It’s just quite incapable of detecting (sans using outside equipment, and trusting that equipment) that it consists of p-n junctions. Being quite incapable of detecting that, and assuming no trust of outside equipment, it will think it does not consist of p-n junctions.

    If it has a model, where did the model come from? Given to it by humans? That begs the question of why to trust them as being perfectly accurate. Self-made model? What made it perfectly model itself, using no outside equipment/no leap of faith? Are you thinking AIs, being computers, will be, without using external equipment, perfect at modeling themselves because of being computers?

  23. Most of the confusion with all this “mind cannot understand itself, because that would be like eyes seeing eyes” stems from the wrong supposition that the mind is a single “object” that cannot scan itself. In fact, there are many small processes that comprise the mind, each of which can understand some other parts of the mind; thus an understanding of the mind arises.
    Of course it is not perfect. However, the imperfection of gaining knowledge and understanding is not something inherent in the object of understanding. Epistemology has a long history of thought about such issues, mainly with regard to the outside world.
    Understanding will always be imperfect, no matter if it is about the mind or the outside world, but that has nothing to do with the “eyes cannot see eyes” analogy. As we all know, we can see other animals’ eyes. In the same way, some thoughts can probably “see” other thoughts and in a way cognize them.

  24. ihtio: “In the same way, some thoughts can probably ‘see’ other thoughts…”

    So far, the most significant experiments for giving us an empirical clue about the brain mechanisms that give us our conscious experience are the SMTT experiments, summarized in the figure attached below. I have argued that according to a bridging principle for relating brain mechanisms to conscious experience, we must formulate the structural and dynamic design of neuronal mechanisms that can generate proper neuronal analogs of conscious content. As far as I have been able to determine, the retinoid model of consciousness is the only theoretical model that has been able to explain/predict the findings of the critical SMTT experiments which demonstrate the direct transfer of a visual hallucination from one person to another.
