You can’t build experience out of mere information. Not, at any rate, the way the Integrated Information Theory (IIT) seeks to do it. So says Garrett Mindt in a forthcoming paper for the Journal of Consciousness Studies.

‘Information’ is notoriously a slippery term, and much depends on how you’re using it. Commonly people distinguish the everyday meaning, which makes information a matter of meaning or semantics, and the sense defined by Shannon, which is statistical and excludes meaning, but is rigorous and tractable.

It is a fairly common sceptical claim that you cannot get consciousness, or anything like intentionality or meaning, out of Shannon-style information. Mindt describes in his paper a couple of views that attack IIT on similar grounds. One is by Cerullo, who says:

‘Only by including syntactic, and most importantly semantic, concepts can a theory of information hope to model the causal properties of the brain…’

The other is by Searle, who argues that information, correctly understood, is observer dependent. The fact that this post, for example, contains information depends on conscious entities interpreting it as such, or it would be mere digital noise. Since information, defined this way, requires consciousness, any attempt to derive consciousness from it must be circular.

Although Mindt is ultimately rather sympathetic to both these cases, he says they fail because they assume that IIT is working with a Shannonian conception of information: but that’s not right. In fact IIT invokes a distinct causal conception of information as being ‘a difference that makes a difference’. A system conveys information, in this sense, if it can induce a change in the state of another system. Mindt likens this to the concept of information introduced by Bateson.
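
As a loose sketch (mine, not IIT’s actual Φ formalism), the causal sense of information can be illustrated by asking whether perturbing one element’s state changes another element’s next state:

```python
def next_state(a, b):
    """Toy dynamics: element B's next state is the AND of A and B."""
    return a & b

def a_makes_a_difference(b):
    """Does varying A's current state alter B's successor state?
    If so, A is a 'difference that makes a difference' to B."""
    return next_state(0, b) != next_state(1, b)

# When B is on, A's state matters to B's future; when B is off, it cannot.
```

Roughly speaking, IIT builds on this bare causal notion with repertoires of past and future states and the Φ measure of how irreducible the whole system’s cause-effect structure is.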

Mindt makes the interesting point that Searle and others tend to carve the problem up by separating syntax from semantics; but it’s not clear that semantics is required for hard-problem style conscious experience (in fact I think the question of what, if any, connection there is between the two is puzzling and potentially quite interesting). Better to use the distinction favoured by Tononi in the context of IIT, between extrinsic information – which covers both syntax and semantics – and intrinsic, which covers structure, dynamics, and phenomenal aspects.

Still, Mindt finds IIT vulnerable to a slightly different attack. Even with the clarifications he has made, the theory remains one of structure and dynamics, and physicalist structure and dynamics just don’t look like the sort of thing that could ever account for the phenomenal qualities of experience. There is no theoretical bridge arising from IIT that could take us across the explanatory gap.

I think the case is well made, although unfortunately it may be a case for despair. If this objection stands for IIT then it most likely stands for all physicalist theories. This is a little depressing because on one point of view, non-physicalist theories look unattractive. From that perspective, coming up with a physical explanation of phenomenal experience is exactly the point of the whole enquiry; if no such explanation is possible, no decent answer can ever be given.

It might still be the case that IIT is the best theory of its kind, and that it is capable of explaining many aspects of consciousness. We might even hope to squeeze the essential Hard Problem to one side. What if IIT could never explain why the integration of information gives rise to experience, but could explain everything, or most things, about the character of experience? Might we not then come to regard the Hard Problem as one of those knotty tangles that philosophers can mull over indefinitely, while the rest of us put together a perfectly good practical understanding of how mind and brain work?

I don’t know what Mindt would think about that, but he rounds out his case by addressing one claimed prediction of IIT; namely that if a large information complex is split, the attendant consciousness will also divide. This looks like what we might see in split-brain cases, although so far as I can see, nobody knows whether split-brain patients have two separate sets of phenomenal experiences, and I’m not sure there’s any way of testing the matter. Mindt points out that the prediction is really a matter of ‘Easy Problem’ issues and doesn’t help otherwise: it’s also not an especially impressive prediction, as many other possible theories would predict the same thing.

Mindt’s prescription is that we should go back and have another try at that definition of information; without attempting to do that he smiles on dual aspect theories. I’m afraid I am left scowling at all of them; as always in this field the arguments against any idea seem so much better than the ones for.

 

18 Comments

  1. Christophe Menant says:

    The meaning of information as related to IT has been addressed by the Turing Test, by the Chinese Room Argument and by the Symbol Grounding Problem.
    They are based on the meaning of information as managed by humans, which may not be the best way to look at the subject, because animals also manage meaningful information (the perception of a cat by a mouse means ‘danger’). Animals’ motivations are simpler to understand than human ones. Meaning generation by animals can be modeled, and the model can be (partly) extended to humans.
    An evolutionary approach to the question of meaningful information shows that the problems we face with ‘meaning’ in IT come first from our lack of understanding about the nature of life, not from the mysterious nature of the human mind, which only comes in addition. More on this at
    https://philpapers.org/rec/MENTTC-2

  2. SelfAwarePatterns says:

    The argument that consciousness can’t be explained by physical “structure and dynamics” is one that I think should receive scrutiny. What specifically can’t be explained in those terms? If you say “experience” or “feeling”, what specifically about those things are beyond explanation?

    Certainly the explanation may not *feel* like our feeling of the experience, but so what? It doesn’t feel like the Earth is spinning and circling the sun, nor does it feel like space and time are relative, or that we’re composed of mostly empty space. Science has a long history of giving us counter-intuitive answers.

    Not that I’m a fan of IIT, but I think its actual problem is that it’s incomplete. It seems to leave out why the integration happens, what it’s used for, and what experience actually is, resulting in the theory labeling systems as conscious that show none of the behavioral evidence for it we get from humans, mammals, or even simple vertebrates.

  3. Paul Torek says:

    Scott Aaronson has a better reply to the Integrated Information Theory. To wit, it gives some wrong answers to “pretty-hard problems” about which beings are conscious, and/or to what degree. A couple of straightforward pretty-hard problems: are frogs conscious? Are insects? The wrong answer IIT gives is that it attributes very high levels of consciousness to very dumb, routine computer programs. NOT artificial intelligence, mind you, but parity-check programs, like the kind that verifies your downloads from the internet.
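
    Aaronson’s parity example is easy to make concrete; here is a minimal sketch of my own (not code from his argument) of the kind of routine check he has in mind:

```python
def parity(bits):
    """XOR parity of a bit sequence: the sort of routine
    error-detection check run when verifying a download."""
    p = 0
    for b in bits:
        p ^= b
    return p

# A stored parity bit that no longer matches the recomputed one
# flags corruption; nothing here looks remotely mind-like.
```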

  4. Callan S. says:

    It seems strange how people make these theories accountable to consciousness rather than evolutionary theory.

    Why is ‘does it make sense in regard to consciousness?’ some kind of important measure? Is it because consciousness is taken to be beyond the scope of evolution?

  5. zarzuelazen says:

    Hi,

    You would think that ‘you can’t build the physical world out of mere information’ either, but you’d be wrong. If you look at the holographic principle from physics, it seems as if physicists have indeed done the near impossible, showing how the ordinary physical world can ‘emerge’ from pure quantum information.

    https://en.wikipedia.org/wiki/Holographic_principle

    I really don’t see that the jump from ‘information’ to ‘experience’ is any harder than the jump from ‘information’ to ‘physics’. In fact the explanatory gap actually seems smaller.

    I do have a good idea how it’s done. I think it’s an extension of thermodynamics (or what is called ‘non-equilibrium thermodynamics’). It’s all about the emergence of an ‘arrow of time’ – as we ‘zoom out’ from the flow of time, high-level abstract properties emerge in the form of a model or symbolic representation of time flow – a continuous rolling narrative that we call ‘consciousness’.

    Time-flow consciousness = (memory-past, present, anticipation-future)

    I highly recommend reading ‘The Big Picture’ by Sean Carroll, where he talks about the arrow of time and how it’s linked to consciousness.

  6. Christophe Menant says:

    Yes Callan, many supporters of the ‘hard problem’ take consciousness to be beyond the scope of evolution.
    And most of the time they implicitly position consciousness as being phenomenal consciousness (illustrated by the ‘what it is like’ question).
    The forgotten key element is self-consciousness (representing oneself as an existing entity, physical and mental) which should be taken as an explicit evolutionary component of phenomenal consciousness.

  7. Garrett Mindt says:

    Thanks for the write up, Peter! And I think you did a nice job of capturing the main thrust and aim of the paper, so no disagreements there.

    To answer the following question:
    “I think the case is well made, although unfortunately it may be a case for despair. If this objection stands for IIT then it most likely stands for all physicalist theories. This is a little depressing because on one point of view, non-physicalist theories look unattractive. From that perspective, coming up with a physical explanation of phenomenal experience is exactly the point of the whole enquiry; if no such explanation is possible, no decent answer can ever be given.
    It might still be the case that IIT is the best theory of its kind, and that it is capable of explaining many aspects of consciousness. We might even hope to squeeze the essential Hard Problem to one side. What if IIT could never explain why the integration of information gives rise to experience, but could explain everything, or most things, about the character of experience? Might we not then come to regard the Hard Problem as one of those knotty tangles that philosophers can mull over indefinitely, while the rest of us put together a perfectly good practical understanding of how mind and brain work?
    I don’t know what Mindt would think about that…”

    Yes, I do think IIT is the best theory of consciousness of its kind, and we could learn a lot about consciousness from studying it further. Now I want to qualify that by saying just because it’s the best one on the market (in my opinion) doesn’t mean one should be uncritical of its claims or framework. I think part of the reason it would be fruitful to continue exploring is that it provides a well-developed and rigorously formulated backdrop to inquire about the relationship between information and consciousness. I think sometimes quite a bit is assumed about this relationship, and sometimes taken for granted. I’m all for an information-theoretic approach, if it’s conceptually coherent in capturing the phenomenon in question – consciousness!
    Perhaps the starting point in these debates is important. Given that IIT is a self-described consciousness first approach, i.e. it takes the character of our phenomenal experience and constructs a theory that can account for those features, I take it that IIT’s main project is to account for phenomenal consciousness. IIT sets itself against brain first approaches, those that search for neural correlates and ask how experience might arise from them. I try to take IIT on its own grounds, and I see no other way to interpret their claims. IF they weren’t trying to explain phenomenal consciousness, then I would be puzzled why they call the axioms “phenomenological axioms.” So according to their own words, that is the phenomenon they are attempting to explain. The question then arises, where does phenomenology enter the picture in the integration of information? I’m not saying that it doesn’t, or that integrated information could never account for it, but that question needs an answer and an explanation.
    My argument can be seen as conditional, in the sense that, IF one takes the hard problem as a legitimate problem then IIT doesn’t seem to account for it, for the reasons I argue in the paper. Now one might not think the hard problem is a legitimate problem, in which case the arguments I give in the essay will not be very persuasive. But this second option doesn’t seem to be an option for IIT itself, since as far as I can tell this is a problem that IIT is contending with.

    In the words of Tononi & Koch (2015), from their essay “Consciousness: here, there and everywhere?”:

    “Some philosophers have claimed that the problem of explaining how matter can give rise to consciousness may forever elude us, dubbing it the Hard problem [71–73]. Indeed, as long as one starts from the brain and asks how it could possibly give rise to experience—in effect trying to ‘distill’ mind out of matter [74], the problem may be not only hard, but almost impossible to solve. But things may be less hard if one takes the opposite approach: start from consciousness itself, by identifying its essential properties, and then ask what kinds of physical mechanisms could possibly account for them. This is the approach taken by integrated information theory (IIT) [75–79], an evolving formal and quantitative framework that provides a principled account for what it takes for consciousness to arise, offers a parsimonious explanation for the empirical evidence, makes testable predictions and permits inferences and extrapolations (table 1).”

    Now, it is a bit ambiguous whether Tononi and Koch take the hard problem as a legitimate problem, and as far as I can tell it can be interpreted a couple of different ways. They could be saying that:
    1) The hard problem is legitimate if one starts from the brain/physical and tries to squeeze consciousness into the picture, but not a problem if you start the other way around, begin with the character of our experience and develop a physically realizable model of consciousness.
    or,
    2) A brain first approach may not only be hard, but impossible. Whereas the consciousness first approach is just hard, but not impossible.

    I’m inclined to interpret them as saying the first of these two (there may be more than these two ways of interpreting what they say in the quote there, but these are the two that pop out to me), but even if that is the case they still take it that the hard problem applies to a brain/physical first approach. And furthermore, if their notion of information is purely a structural-dynamical one, then I take it they have committed themselves to a physicalist position (again, if you take Chalmers’ arguments as legitimate), and so, on their own grounds, would not have a solution to the hard problem, as I argue in the paper.

    Now you could take this as pointing out one of two things:
    1) There’s a problem with the notion of information in IIT (as I argue in the essay)
    or,
    2) There’s a problem with the hard problem

    I’m okay with either of these options, as each would be rather illuminating with regard to understanding the nature of consciousness.
    I’m by no means religious when it comes to the hard problem, and if that temple fell for well-argued reasons, then it falls. But I do see the hard problem as methodologically useful in attempting to come up with a conceptual framework under which to approach the problem of consciousness. I see the hard problem as pointing out a tension in our standard way of discussing these issues, i.e. the physical vs. the mental. Thus, I think the hard problem is legitimate in that it shows the division between the physical and mental as being conceptually incoherent. Now, that leaves us in the position of coming up with a new conceptual framework to approach consciousness. I think an informational framework might be such a framework, or at least an interesting step in the right direction (again, if it can be made sense of in a non-question-begging way). This is my dissertation topic, and so this paper is a small slice of that larger project. That means the paper doesn’t accurately reflect my total thoughts on these matters, but rather one possible objection to a theory of consciousness that I think overall is very good and worthy of increased interest and discussion. So, in other words, I would like to tackle my own problem I set up in this essay (I’ve always had a habit of creating problems for myself).

  8. Garrett Mindt says:

    In reply to Christophe Menant #1:

    I’ll have to look at the paper you mention at the end of your post to get a better idea of what you are saying, but initially, in the context of IIT, I’m not sure that asking what the “meaning of information” is would be the right question, as I think this would be a conflation of semantic information with the kind of information IIT seems to be describing. Now one may think that asking what information is, outside of the context of what the meaning of that information is, would be nonsensical. This would seem to be where the objections of Searle (2013) and Cerullo (2011, 2015) are coming from.

    With regard to consciousness, and understanding it as an information-theoretic phenomenon as IIT describes, I don’t think solving the meaning question would be the whole story for these types of approaches. The section of the essay discussing Searle’s and Cerullo’s criticisms of IIT on these grounds tries to make that clear.

    I also don’t mean to say there is absolutely no connection between consciousness and meaning, but I do question whether understanding meaning is the whole story.

    Also, in response to Callan #4 and your reply to him #6:

    I do think understanding the connection between evolution and consciousness is an essential connection that should be explained. And furthermore, how the mechanisms of evolution relate to those of consciousness, and vice versa. I don’t see anything about what I wrote that seems to elevate consciousness outside of the context of evolution, but if I did, please point that out so I can rectify it, as I don’t mean to do that.

    The reason I make the theory accountable to consciousness in the essay is because that’s what IIT itself does. In the sense that, Tononi and colleagues develop the theory as a consciousness first approach, and claim that it explains phenomenal consciousness. Since I want to object to IIT on its own grounds, I take that as their goal, and attempt to argue against them with regard to explaining consciousness. The dialectic of the essay may appear to take the questions of consciousness and evolution as separate issues (I’m inclined to think they are intimately connected), but that’s merely to frame the argument in the context of IIT as a theory of consciousness.

    In reply to SelfAwarePattern #2:

    There does seem to be something fishy in the structure and dynamics argument, though I can’t quite place my finger on it. That’s why I prefer to see the argument in the essay as a conditional, IF one takes the hard problem and structure and dynamics argument as legitimate issues then IIT seems to have a problem, mainly because of how information is defined in the theory.

    Yeah, I think it’s incomplete, and I don’t think those that are working on developing it would disagree. As far as I can tell, they point out in discussions and presentations that it’s a working theory, constantly being improved. I think at this point it’s IIT 3.0 technically, the 3.0 bringing with it the more developed mathematics, from what I understand. Conceptually it’s stayed fairly consistent, with improvements and additions along the way. For instance, a question that gets raised is: are the axioms and postulates complete? I think perhaps some could be removed and some added, and that’s okay, as this seems to be a developing theory. Perhaps IIT 4.0 will bring some different ways of calculating Phi or different axioms/postulates.

    In reply to Paul Torek #3:

    I like Scott Aaronson’s discussion on this as well. I do see him as arguing more from the side of craziness though, as in, “parity check programs just cannot be conscious!” And I suppose IIT just has to bite the bullet on this, or develop an explanation of why that’s not the case, or why that’s not a problem. But I’m not sure what a fully worked out reply to that would look like.

    In reply to Zarzuelazan #5:

    I’m fairly interested in the implications pertaining to the ontological status of information in these contexts, such as John Wheeler’s (1989) “it from bit” idea, and what that means in terms of understanding the relationship between physical matter and information. I still think there needs to be some conceptual work done here on how to make those connections between information, matter, and consciousness, to give a fully worked out explanation. But overall, I’m interested in that project.

  9. zarzuelazen says:

    Garrett, I think the physical/mental divide is simply arising from viewing reality at different levels of abstraction, and there is no hard problem.

    If you ‘zoom-in’ on a big macroscopic structure such as a mountain, for example, you see entirely new features appearing at smaller scales: ridges, rock-faces, etc.

    It’s clear enough that there are multiple levels of abstraction for the spatial dimension, but people tend to forget that the same applies to the *temporal* dimension as well.

    Imagine zooming-in and zooming-out on the *temporal* level…for instance consider those spectacular time-lapse videos you can find on youtube. New *temporal* features appear at different time-scales, and here I think is the big tip-off that can lead to a full explanation of consciousness… it is a simply a way of viewing temporal features at a certain level of abstraction.

    Now the informational approach and thermodynamics dove-tails with this idea in a very natural way. See previous post about a recent paper pointing out a close link between informational entropy and consciousness for example.

  10. vicp says:

    Hi Garrett, very interesting paper, which I’m still reading. I always take Searle’s side in the debate because, as an electrical engineer, when Tononi attributes intrinsic consciousness to a photodiode or thermostat, the red flag goes up that he cannot see the difference between intrinsic biological sentience, which causes conscious behavior, and the mere appearance of conscious behavior, like the poet who thinks the clouds and trees are dancing on a windy day. The quote you cite on p. 18 is very telling because Tononi, who is an MD, should know better: what he is giving is not a notion of information based in differentiation but actually a description of the most intrinsic piece of biological “information”, the sense of self, which is based in our more fundamental biology, our limbic system.

    “IIT introduces a novel, non-Shannonian notion of information, integrated information, which can be measured as ‘differences that make a difference’ to a system from its intrinsic perspective, not relative to an observer. Such a novel notion of information is necessary for quantifying and characterizing consciousness as it is generated by brains and perhaps, one day, by machines.” (Koch and Tononi, 2013)

    The more appropriate idea would be along the lines of BIST or biological integrated state theory, which does not carry the baggage which drags thermostats and photodiodes into the argument. I would argue that this is a mammalian concept, for example my cat knows that when the back door is open or in the ‘one’ state it can go on the porch and use the litter box, when the door is closed or in the ‘zero’ state, it can scratch on the door to make me open it and put it in the ‘one’ state.

  11. Peter says:

    Many thanks for these comments. Garrett – very much appreciated.

  12. SelfAwarePatterns says:

    Garrett,
    I very much appreciate your engagement and thoughtful reply. Certainly if we take the hard problem as a given, I can see the points you make being valid. But my issue with the hard problem, essentially an argument from incredulity, is that it seems to give people permission to reject *any* coherent attempt to explain consciousness. Nothing less than referencing magic would ever suffice.

    I think it’s far more productive to view the hard problem as actually two hard truths. The first is that subjective experience is *subjectively* irreducible, because we can’t experience the mechanics of experience. The second is that an outside observer can never have the experience of an observed system, no matter how much is known about the observed system’s internals. Labeling these stark truths as a “problem”, implying that it’s something that has to be “solved”, doesn’t seem very productive.

    I think my beef with IIT is it seems too abstract, too divorced from actual brain functionality. What is the role of the prefrontal cortex in experience, of the information flows to it from the sensory cortices and emotional circuits? How is our sense of self constructed in the brainstem, thalamus, and insular cortex? In my view, IIT is prematurely attempting to abstract consciousness before we understand it in the one system everyone agrees is conscious.

  13. David Duffy says:

    I am very sympathetic to the kind of thing IIT is trying to do, and think the irreducibility of subjective experience etc. a non-issue, along the lines of “what else would they be like, given our sensory inputs are of the same nature as those of other, lower, animals”. We don’t have to do a semantic analysis of the information flying around inside of
    https://arxiv.org/pdf/1507.00530.pdf
    – there is a one-to-one relationship between causal effect and bits.
    If the same task was being performed by an appropriate programmable computer, then we could assign semantics to the language in terms of the energetics. And if this was some kind of Alife system, then presumably this semantics would arise through evolutionary forces, as has bee language semantics, where there is a pretty straightforward relationship with bioenergetics. Does this help with consciousness? I think semantics has been skipped, but I guess the Invariance Thesis in computing would suggest an equivalent underlying semantics, e.g.
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4880557/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4589845/

  14. Christophe Menant says:

    Garrett
    Relations between information and consciousness are indeed interesting.
    Evolutionarily speaking it is clear, putting aside metaphysical hypotheses, that meaningful information came up with life. And as human consciousness arose in living entities, we can say that human consciousness needs and uses life-based meaning generation.
    As a consequence, research activities for artificial agents to reach the performance of human consciousness should consider how these agents could deal with life-based meaning generation.
    As said, a model of meaning generation based on internal constraint satisfaction shows that today’s AAs cannot do that (we do not know how to transfer a ‘stay alive’ constraint to AAs).
    This means that strong artificial life and strong AI are not possible today.
    The blocking points come more from our lack of understanding of life and the human mind than from computer performance. An option could be to try to extend life, with its ‘stay alive’ constraint, within AAs. The AA would then be subject to the constraints brought in with the living entity. As far as I know this is not possible today.

  15. john davey says:

    SelfAware


    “If you say “experience” or “feeling”, what specifically about those things are beyond explanation?”

    Both, if you have no account of the cause. The point is that the process cannot be modelled mathematically – that is a restriction placed on the “explanation” by the subject matter, by its nature. But that doesn’t stop a non-mathematical “explanation”, if that’s a word you want to use (and if you think such a word is apt to describe that which arises from the conventional mathematical approach).


    “Certainly the explanation may not *feel* like our feeling of the experience, but so what?”

    This depends upon what you regard as an explanation.
    Thus we may take properties A and B of brain system (S) and say that when

    S(A) -> S(B)

    there is a corresponding mental experience X.

    In my opinion, the above method is likely to be used and it’s as good as it’s going to get. It lacks any of the close-coupling between scalar and semantic that conventional physics possesses – thus making most algebra and calculus for instance unlikely to be much use – but it is a quasi-formal system. But of course there is no implicit and natural linkage between a mental state and a scalar, in the same way that human cognition offers such a neat coupling between linear time, length and mass.

    But it’s not really an “explanation”, more of a formal modelling system. Like physics.
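
    The scheme above can be rendered as a bare lookup table (my own sketch, with hypothetical state names): the mapping records correlations and nothing deeper.

```python
# Hypothetical catalogue pairing brain-state transitions with the
# mental experiences reported alongside them; brute correlation,
# not derivation.
correlates = {
    ("S_A", "S_B"): "X",  # when S(A) -> S(B), experience X is reported
}

def experience_for(transition):
    """Return the experience correlated with a state transition,
    if one has been catalogued; otherwise we simply don't know."""
    return correlates.get(transition)
```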


    “Science has a long history of giving us counter-intuitive answers.”

    Science has a long history of replacing one theory with another. Our feelings (in the mental phenomena sense) aren’t theories. ‘Feelings’ in the sense you use in the second sentence is not really the same as the ‘feelings’ in the first part of the paragraph : by ‘feelings’ in the second sense you mean ‘intuition’. Intuition is cognitive, not qualitative.

    ‘Intuition’ might lead us to construct a theory that says the earth does not rotate, but that, in every sense, is as scientific a claim as any other. It’s just wrong. It has nothing to do with the lack of information in mental phenomena and doesn’t alter the fact that a formal mathematical system cannot predict the existence of mental qualities.

    JBD

  16. SelfAwarePatterns says:

    John,
    First, thank you for responding to that question. I rarely get responses to it.

    It seems like there are two issues here. One is the issue of correlating a subjective experience with a neural state. The other is what we can then do mathematically with those neural states.

    I completely agree that correlation between a mental state and neural state is all we’ll ever get. But the thing is, that’s all we *ever* get in science. We work to isolate the correlation down to a single pair of variables, and when we do, we conclude we’ve found causation or equality, but as David Hume pointed out, we never actually observe anything other than the correlation.

    For example, Newton derived gravitation by observing the correlations between the mass, speed, and changes in movement of various objects. Of course, success required isolating those variables from things like light and air pressure. But all he ever had to work with were the observed correlations. As it turned out, eventually there were correlations his theory couldn’t account for which had to wait for Einstein’s more comprehensive general relativity.

    We can establish correlations between, say, my experience of red, and the wavelengths of the light striking and exciting the red sensitive light cones on my retina, the resulting cascade of signals up the optic nerve to the superior colliculus, thalamus, and occipital lobe, along with much of the subsequent processing. Certainly we’re not anywhere near a full accounting yet, and who knows what obstacles might be encountered, but nothing fundamental appears to stand in the way, aside from decades of mind-numbingly difficult and meticulous scientific work.

    Once we know the corresponding neural state, the question is then whether we can mathematically model it. As I understand it, computational neuroscientists are already doing this for isolated circuits, although, as with the correlations above, there is a long way to go, with mountains of meticulous work yet to happen.

    But taking a longer view: if we can establish correlations with increasing granularity, and we can mathematically model neural states, then – and I know you’ll disagree – I’m comfortable connecting the dots and saying we’ll eventually be modeling mental states. This may be centuries from now, and it’s conceivable that chaotic dynamics might always prevent accurate predictions, in the same manner they prevent us from predicting what storm systems do. Although, as with storm systems, the models would still give us insights into what’s happening.
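    The chaos point above can be made concrete with a standard textbook example (this sketch is mine, not the commenter’s): the Lorenz system, using conventional parameter values and a crude Euler integration. Two trajectories that start a hair’s breadth apart diverge dramatically, even though the model itself is exact and deterministic – which is precisely why a perfect model can still fail at long-range prediction while remaining informative about the system’s behaviour.

    ```python
    # Illustrative sketch: sensitive dependence on initial conditions in the
    # Lorenz system (standard parameters sigma=10, rho=28, beta=8/3).
    # A deterministic model whose long-range predictions nonetheless fail.

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One crude forward-Euler step of the Lorenz equations."""
        x, y, z = state
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    def separation_after(steps, delta=1e-8):
        """Distance between two trajectories that start `delta` apart."""
        a = (1.0, 1.0, 1.0)
        b = (1.0 + delta, 1.0, 1.0)
        for _ in range(steps):
            a, b = lorenz_step(a), lorenz_step(b)
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

    # A separation of 1e-8 grows by many orders of magnitude within a few
    # thousand steps: prediction fails, but the model still tells us what
    # kind of behaviour (a bounded, butterfly-shaped attractor) to expect.
    ```

    The choice of Euler integration keeps the sketch self-contained; a production simulation would use an adaptive ODE solver, but the qualitative divergence is the same.
    
    
    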

  17. Callan S. says:

    I think it’s probably possible to mathematically formulate a processor that, along with going through a Darwinistic trial, tries to process not just its environment but also itself – but it cannot, because internal access fails to monitor the act of internal access as it conducts it. That inability to absolutely self-monitor is probably quite possible to formulate mathematically – as are the effects this ‘anomaly’ will have on its processing, including the processor’s rejection of what it’s told about its ‘experience’: since the inability to absolutely self-monitor is invisible (mechanically so), it seems impossible to the processor that it does not have full access. The worst part is that it might reject a model of a processor that lacks full access, along with the self-reportage effects that would have – not rejecting it in a ‘that doesn’t apply to me’ way, but rejecting it per se, such is the intensity of the absence.

  18. john davey says:

    SelfAware


    “…but as David Hume pointed out, we never actually observe anything other than the correlation.”

    That is an interesting expropriation of Hume’s idea of “correlation” but the idea of a neural correlate is not the same thing.

    Hume’s correlates are part of his sceptical critique of causation. Thus, if A precedes B, we think that A causes B. Hume’s idea was that there is no observable linkage between A and B, and thus all we can talk about is a correlate in time.

    It has of course been somewhat discredited. Hume was referring to the science of his day, and Newtonian mechanics was the principal theory he utilised. Thus, if two billiard balls hit each other, there is no reason to assume the motion is causally linked, just correlated. Now, though, we have electrostatic and atomic theory, not available in Hume’s day, and arguably a cause for the motion of billiard balls has been established.

    The idea of a neural correlate does not follow this pattern – external event E(t) correlates with mental event M(t) – they are simultaneous, and it can’t be said that the physical event causes the mental one (not in the classical sense, in any case). Rather, the neural correlate is an external model of an internal state.

    There is no parallel in scientific history, and drawing on previous patterns of scientific growth is futile. This is a novel situation, and it’s the dinosaurs of the Dennett variety who can’t get their heads around the fact that mental phenomena are just fundamentally different. It’s the single most important scientific fact these alleged rationalists cannot accept.

    They are so used to viewing humans as agents of perfection that they cannot accept a theory that implicitly defies a full mathematical analysis. This is what I mean when I say that behind the alleged atheistic attitudes of the computationalists is a hugely naive and childlike view of homo sapiens as being not just at the top of the biological tree, but in a state of rational perfection – a fusion of the mathematical and the biological. One God for another.

    Most of the computationalists are Americans – America is still, I think, the last bastion of naive Enlightenment thinking. They really do believe humans are entitled to know everything they want to know – it’s just a matter of time. Computationalist fantasies belie a lot of US liberal political attitudes, and as America declines, we’ll find fewer computationalists ready to land conscious robots on Mars…

    I think Hume – a massive sceptic – summed gravitation up well. Although Newton’s three laws of motion, he said, suggested that the mysteries of nature had been opened up, Newton’s subsequent theory of gravitation had made the universe more of a mystery than ever. He was right.

    J
