Picture: substance. Mostyn W Jones might seem to be leading with his chin a little when he offers us, in the JCS, “a clear, simple mind-body solution”. His “clear physicalism” is meant to banish many of the “obscurities” (a favourite word) involved in other accounts, especially reductionist and functionalist ones.

One problem with functionalism, according to Jones, is that it requires us to believe that conscious sensations are multiply realisable. So long as it embodies the right functions, any physical thing can have consciousness. But functions are abstract; what have they got to do either with my physical brain or with my vivid actual experience? These relations are surely “obscure”. Another problem is that none of these reductive theories can deal with qualia (the inherent phenomenal redness of red, hotness of heat, etc). Once you’ve reduced consciousness to computation, to non-computational functions, or to anything similar, you can no longer explain why it is that real inward experience occurs, or what causal relations that experience has with anything, or how it contrives to have them. Jones sympathises with Strawson and Stoljar: he would like to be able to say that qualia are just the reality of experience: science gives an outside account, and qualia are how it looks from the inside.

In some ways this is an appealing position to take, but there are a number of pitfalls. If all you’re saying is that science is the third-person perspective and qualia are the first-person perspective, you’re just re-stating the problem: why is there a first-person perspective, anyway? And if qualia are just the reality, you have two options. One is to find an explanation of why they don’t crop up all over the place, but seem only to arise in the presence of appropriate brain functions; the other is to bite the bullet and say that in fact they do crop up all over the place, and that panexperientialism is correct. Strawson, if I’ve understood correctly, leans in this latter direction; Jones, in spite of his hostility to functionalism, seems to lean back the other way: he describes consciousness as a neural substance arising from highly active, highly connected neural circuits; so there still seems to be a broadly functional explanation of why these phenomena are confined to brains.

“Substance” is a treacherous word: in ordinary parlance it suggests a lump of stuff, simple physical matter; but in philosophy, especially older philosophy, it has a much more slippery meaning as a basic element, one of the things that remains when analysis is complete. By pursuing this idea of a substance as something unanalysable to its logical conclusion, Leibniz produced the bizarre relativistic ontology of the Monadology. Jones certainly doesn’t mean monads, but what does he mean? He says “Clear physicalism avoids this obscurity by treating qualia as electrochemical substances that underlie observable brain activity and do work in brains”. The combination of underlying observable physics while also doing work – causal work, we assume – seems very problematic. It would be odd but compatible with physics if Jones were merely saying that qualia are aspects of the physical world that run along with – perhaps over-determine – the causal effects specified by physics: but that doesn’t seem to be quite it.

If we take a step back, the idea of making consciousness a physical substance of any kind seems difficult to accept; consciousness, after all, comes and goes, but matter is conserved. If we take a more flexible view of the notion of substance and allow the substance of consciousness to come and go in line with certain vigorous firings of interlinked neurons it becomes a little hard to see what our quarrel with some sort of functionalism can be. Pain, it turns out, is not the firing of C-fibres (or whatever), but it is a substance that occurs in perfect correlation with the firing of C-fibres. Hmm.

37 Comments

  1. Lloyd Rice says:

    Why doesn’t consciousness pop up everywhere? To my mind, the answer to that is simple: There are a number of conditions to be met. My metaphor for comparison goes like this: Suppose you had a computer program to compute some quantity, say, the volume of a certain shaped object. First, you would only get answers if the program is running. Second and more importantly, you only get answers if you put in the details for an example of that certain type of object. Follow the rules and you get results.

  2. Vicente says:

    The paper presents a “clear physicalism” theory that is not clear and it is physicalist just because the words “physics” or “electrochemical” happen to randomly appear in the text. Section 9 is just great. The best thing is that it definitely brings a solution to the problem of the “self”.

  3. Charles Wolverton says:

    I’m reminded of Rorty’s criticism of certain purported “explanations” as “explaining the obscure with the even more obscure”.

    Early in my foray into consciousness studies, I became aware of the difficulty of clearly and precisely expressing relevant concepts owing to the absence of a widely accepted and closely adhered-to vocabulary. Consequently, when encountering writing that purports to offer a new, clarifying insight into consciousness, I expect the author to employ the best vocabulary available and to be as precise as possible in its employment. This paper seems at every turn to fall short of that expectation.

    In a cursory scan, I encountered numerous examples of this, one in an area with which I actually do have relevant background and can speak with some authority:

    “it’s puzzling how abstract information gets ‘realized’ in brains and affects brains”

    This suggests some confusion about the role of “abstract information”. As I see it, the concept of information – as employed in information theory, which in turn is employed in consciousness studies by Crick, Koch, et al – is not something that “gets realized” but rather is a mathematical measure of a priori uncertainty that is used in the analysis of communication channels, the function of which is to resolve that uncertainty – AKA, to “transfer information”. Viewed from that perspective, it is channels that get realized, and that channels can “get realized” in (and affect) brains (more accurately, I would think, get realized in the various sensory input processing paths) is no more puzzling than that such transfer is realized in the Internet.

  4. John says:

    On the one hand I like this article because it is a “no nonsense” statement of the obvious truth that the mind is in the brain; on the other hand it makes me worry about the professionalism of the JCS editors. Throughout the history of the mind-brain problem it is the regress arguments that have stymied the identification of mind with body, yet Jones seems simply to ignore this difficulty. He has also failed to tackle the problem of conscious experience being historical, in the sense that there is nothing in experience that is not past. Even children understand this problem: say “now!” – when was now? Experience is always past, it is never now… See Time and Conscious Experience for a quick résumé of what needs to be explained.

  5. Paul Bello says:

    Panexperientialism (or protopanpsychism) provides a clear answer to Lloyd’s question without equating consciousness with structured software programs. The brain is something like a radio receiver for conscious “stuff” — damage it, and it will predictably fail.

  6. Vicente says:

    Paul[5]: just a receiver? Could it also be an emitter, making it a complete transceiver? This is an idea I had some time ago, that the brain has two main functions:

    1) transponder/interface to somewhere/someone/something else.
    2) Autopilot & on board computer (when the function is switched on)/ zombie mode.

    Of course, positing such a model entails all the causal problems, physical closure problems, etc. The Princeton Noosphere project is the closest thing to a scientific strategy for tackling the problem I have seen.

  7. John Davey says:

    Perhaps the obvious thing to suggest is that consciousness is physical but not reducible. Or perhaps a more suitable vocabulary is that consciousness is natural, rather than physical, if some people think that the word ‘physical’ requires reduction to physics at some point, which clearly cannot occur.

    Consciousness evidently exists, but that doesn’t mean it has to exist in the same way that matter exists. That’s just something that physics (and consequently, ourselves) require in order to deal with the world more easily. But alas the universe and its contents are not there for the benefit of our ease of analysis.

  8. Burt says:

    @Vicente [6]: You and Paul Bello [5] are both correct. The brain is a receiver and a transmitter/emitter – each of our consciousnesses builds an individual brain and tunes its receiving neural circuits to detect its individual’s orthogonal consciousness frequency (the “I” frequency), and generally rejects other “I” consciousness frequencies – when a brain fails to reject other “I” frequencies it is often deemed schizophrenic.

    It also transmits/emits frequencies of wave/particle consciousness units in patterns according to the beliefs and desires of our individual consciousnesses which coalesce into objectified reality (I say objectified as the objects are subjective creations of each consciousness.) The subjective objects then appear to be perceived by the brain and translated into our experience. This order of events seems to imply that time is involved (time doesn’t exist in reality or to consciousness) but the sense of time is just an illusion created by the brain to distinguish among simultaneous events so everything doesn’t appear to happen at once.

    @John Davey[7] Anent “natural”: Everything that exists is natural as nothing can exist apart from “nature” or consciousness.

    Consciousness is both physical (when it manifests as matter/energy) and non-physical – it is partially reducible while physical (as physicists are attempting to create in the LHC) to the point where it becomes non-physical and perhaps even to its monadic (consciousness unit “cubit”) state but that will require a “shift” in our consciousness.

    The world and physics require that we exist in order for their existence to be realized. We are here for the benefit of the ease and existence of the universe and its contents as a prerequisite (strong anthropically speaking.)

  9. John Davey says:

    Charles

    “This suggests some confusion about the role of “abstract information”. As I see it, the concept of information – .. is a mathematical measure of a priori uncertainty that is used in the analysis of communication channels”

    Absolutely correct, though as I recall my Information Theory at college – many years ago – the concept of information seemed to revolve less around uncertainty and more around efficiency. The ‘uncertainty’ as I understood it was linked to redundancy, itself an efficiency measure in the sense that disruption to communication channels could still be recovered from.

    But the whole point of information theory was that it was a study of syntactical systems used to convey messages that were of course largely semantic in content. ‘Information’ without the rules linking the system’s tokens to the semantics they represent is completely devoid of content.

    I think the point that the writer was trying to make (I agree it is vague) is that brains know about semantics as well as syntax (although I might be wrong about that), and this makes the brain’s processes, therefore, more than just mathematical processes, which are condemned to live in a world of representation and syntax.

    I suppose the analogy is with theories of matter and physics. The theories of physics are more than just mathematical statements: they are mathematical statements plus a semantic mapping by us of the constituent dimensions of physical laws. Matter is not mathematics: it is more than that. Consequently physics is not just mathematics either. It is mathematics plus a psychological step which equates the dimensions of physics in mathematical statements with those dimensions as we understand them. And we do understand them, but we can’t explain them (exactly in the same way as we can’t describe ‘red’ to a blind man).

    The dimensions we just know about (time, extension, matter etc.) without explanation (I think nobody has ever bettered Kant on this). Physics has nothing to say about them, nothing at all. We plot mathematically, then we map the maths to our implicit understanding of unsyntactical, entirely semantic notions of space, time and matter. Without the implicit semantic mapping, physical laws are meaningless. With the mapping – a stage that must be interpreted as non-computational in its entirety – we have meaningful content. Some physicists may talk about the need for reductionism, but the process of physics is itself not reducible!

    The trouble is that physics is to some extent determined by the contents of our consciousness. Time, space and matter are integral parts of our consciousness. The relationships between space, matter and time have well-established mathematical formulations (the great mystery of physics) but that is all we know about them.

    It’s thus odd to use materialism to try to ditch the idea of mental phenomena when materialism is itself a complete product of those phenomena!

  10. John Davey says:

    Charles

    “I think the point that the writer was trying to make (I agree it is vague) is that brains know about semantics as well as syntax (although I might be wrong about that)”

    Just to be clear: as regards the latter point, I might be wrong about the intentions of the writer, not about the fact that brains know semantics.

    J

  11. Charles Wolverton says:

    John -

    Info theory, as I studied it (no doubt even more years ago than you) was about reliable communication of raw – ie, semantics-free – data. The objectives ideally are to determine for an error-prone communications channel the maximum number of bits-per-second (the so-called “channel capacity”) one can get through the channel and to encode the raw data (ie, inject the redundancy to which you allude) so that the error rate is kept below a specified maximum while the effective data rate is close to the channel capacity. (Getting the effective data rate close to the channel capacity could be considered “efficient” use of the channel – although I vaguely recall that there are such things as “efficient codes” for which “efficient” may indicate some feature that is not directly related to maximizing channel utilization.)
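    [Editorial aside: the capacity idea described here can be made concrete with the standard textbook case – a minimal sketch, not anything from the thread or from Jones’s paper. The binary symmetric channel, which flips each transmitted bit with probability p, has capacity 1 − H(p) bits per use, where H is the binary entropy function:]

```python
import math

def binary_entropy(p: float) -> float:
    """H(p) in bits: the a priori uncertainty of a biased binary source."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity (bits per channel use) of a binary symmetric channel
    that flips each transmitted bit with probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # noiseless channel: 1 bit per use
print(bsc_capacity(0.5))   # pure noise: 0 bits per use
print(bsc_capacity(0.11))  # roughly half a bit per use
```

    [Coding – the injected redundancy mentioned above – is what lets the effective error-free rate approach this capacity, per Shannon’s noisy-channel coding theorem.]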

    If all that is correct, “information” à la “information theory” has nothing to do with semantics – and since I studied it for a couple of years and recall no mention of “semantics”, I suspect it is correct. And despite my possible advantage in understanding the vocabulary of information theory, I have no idea whether the author had semantics in mind when referring to “information”. Which was my point – when an area of study is already confused, partly because of the absence of a prescribed vocabulary with which to discuss relevant issues, IMO the least one can do is to use carefully and accurately whatever vocabulary is chosen – which I think the author didn’t do.

    And I picked that example because of my familiarity with that vocabulary. There were many other examples for which I have minimal familiarity with the chosen vocabulary but nonetheless found the writing careless and quite possibly wrong.

  12. John says:

    Charles,

    The concept of information has been extended well beyond simple communication theory. A Turing machine can represent any measurement as a series of bits and can also represent comparisons and branching instructions as symbols in the form of bits. This has led to “information processing” and, as you know from Shannon’s analysis, information can use any sort of “bit” so information processing can also use any sort of “bit”. An information processor could use turnips on a conveyor belt as “bits” though charges on gates are faster! I hope this discussion of how Shannon’s concept of “information” has been generalised is of some assistance.

    Unfortunately information processors cannot replicate “mind” – see: The Symbol Grounding Problem and the Chinese Room.

  13. Kar Lee says:

    Paul and Vicente,
    “…The brain is something like a radio receiver for conscious “stuff” — damage it, and it will predictably fail…”

    How about having more fun by extending the metaphor further… How about the CPU as the mind and individual pieces of software as the individual brains — damage any piece of software, and the CPU cannot think when running that instance? :)

  14. Charles Wolverton says:

    John -

    Let’s review the bidding. Jones used the term “abstract information”. In my initial comment, I tried to make it clear that I was inferring (not necessarily correctly) that in using that specific term he had in mind the concept of “abstract information” used in information theory.

    Now, he may have had some other concept in mind, in which case my inference was wrong and that part of my comment was irrelevant to an assessment of the article. But if my inference was correct, topics involving other meanings of “information” are – while possibly quite interesting – irrelevant to my comment. In particular, the Symbol Grounding Problem (per the wiki entry) apparently deals with semantics, which classical info theory – as far as I know – doesn’t.

    On the other hand, I wasn’t familiar with Harnad, and based on a skim of some of his papers referenced in the wiki entry on the SGP, it is clear that he is someone with whom one interested in these matters should be familiar. So, thanks for that pointer.

    I especially enjoyed his discussion of the Chinese Room Argument, which clarified for me some of its issues. It also reminded me of why I don’t like thought experiments and analogies. To faithfully represent the original scenario, they often must be massaged to the point of being effectively isomorphic to that scenario, ie, redundant. So, while for a time they may help to refine, focus, or otherwise help in thinking about the original scenario, once they attain a life of their own they are probably best left to die a natural death. (“Mary’s Room” being another example, one which did – in a sense – “die” since its “father” subsequently recanted.)

    I’m unclear on your point about “bits” and turnips. If it was merely that in order to convey information it is not necessary for symbols to be binary, or even digital, as an ex comm system analyst I can only respond “doh” (or “duh” – I’m never quite tuned-in to the vocabulary of the “media-hip”).

  15. Gilbert Wesley Purdy says:

    “Clear physicalism favors another view of binding that avoids these troubles by attributing minds and their unity to electromagnetic fields in brains. These field theories of mind are proliferating because they avoid the troubles above (McFadden, 2002), and because qualia exhibit correlations with field activity (Pockett, 2000). Also, fields resemble sensory images in that both arguably arise from discrete neurons as continuous wholes spread across space (Libet, 1994).

    …This seems to explain the mind’s unity without computationalism’s troubles above.”

    As you might suspect, to me these “fields” are a manifestation of an emergent quality of grey-matter complexity. Computationalism — which is a rigorous form of positivism — cannot account for Consciousness, but the underlying dynamics of “fields” (emergence) cannot yet be explained (we can only acknowledge persistent patterns of activity) and, coincidentally or not, the same must be said of Consciousness vis-à-vis the brain. Emergence is akin to a “change of state”. Positivism has yet to find a way to relate classical computationalism to that change of state.

    @J. Davey: “I suppose the analogy is with theories of matter and physics. The theories of physics are more than just mathematical statements: they are mathematical statements plus a semantic mapping by us of the constituent dimensions of physical laws.” Jones’s paper is indeed vague, indicating that he is not himself sure just what specifically he is saying. But your point — whether or not it is his point, to the extent that he has one — highlights for me the change-of-state boundary I’m speaking of. Computationalism is rigorously mathematical and consciousness is by its nature metaphorical. At the beginning of language, “semantic” and “metaphorical” are synonymous. Historically, consciousness only “learns” mathematics after millennia of progress. It knows metaphor from the first, or at least from the inception of language (if there is a difference between the two).

  16. John says:

    Charles: “In particular, the Symbol Grounding Problem (per the wiki entry) apparently deals with semantics, which classical info theory – as far as I know – doesn’t.”

    Exactly. This was the point of the Symbol Grounding Problem: there is no meaning in a 3D set of bits and no meaning in a succession of 3D sets of bits. You can stuff a brain full of bits, abstract or not, and you still do not get a model of our experience without some other theory. This is all very well known – even Aristotle raised the problem – so it makes me wonder about the editors of the JCS…

  17. Charles Wolverton says:

    John – what do you mean by “a 3D set of bits”? I assume “3D” has the usual meaning, but I don’t understand what idea the whole phrase is intended to capture, unless it’s the idea that the brain is a 3D “container”.

    And I just noticed that I never explained the meaning of “a priori uncertainty” in my first comment, viz, the entropy (info theory sense) of the sample space of possible “symbols” sent over a comm channel. Eg, before one of two equally likely values of random variable X is transmitted, the entropy is one bit (H(X) = 1). If the transmission is error-free, receiving the transmitted symbol “resolves the uncertainty”, so one bit of information has been received. The “uncertainty” that has to do with coding redundancy is due to a channel being error-prone (AKA “noisy”). That is a separate issue.
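    [Editorial aside: this example generalises to any discrete sample space; as a minimal sketch of my own, using the standard Shannon formula rather than anything from the thread:]

```python
import math

def entropy(probs):
    """Shannon entropy H(X) in bits of a discrete distribution:
    the a priori uncertainty resolved by learning the outcome."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # two equally likely symbols: H(X) = 1 bit
print(entropy([0.25] * 4))   # four equally likely symbols: 2 bits
print(entropy([1.0]))        # a certain outcome carries no information
```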

    I have only briefly skimmed one paper by Crick and Koch and remember essentially nothing about it. But my guess is that they are trying to apply info theory in the first sense; eg, perhaps (conceptually) pixelizing the FOV and estimating the information content of an “image” received by the eye. (One might think that given my background I’d be especially interested in that approach, but apparently 30 years of dealing with that stuff was enough!)

  18. Vicente says:

    Charles[17]

    “estimating the information content of an “image” received by the eye”

    What information? The one related to the signal that transmits the image according to information theory (e.g. RGB coding), i.e. the information necessary to store the image, or the information that the observer can extract from the image? If there is no observer there is no image.

    Does the eye receive an image, or is the image created in the brain?

    At the end of the day all research paths lead to meaning; that is what makes the difference.

  20. Kar Lee says:

    I don’t know if I should point out the obvious: Consciousness is NOT a third person observable.

    You cannot build a consciousness detector to distinguish a “conscious” human from an automaton, such as an idealized automatic vacuum cleaner (perhaps one that is 100 times better than Roomba, responds to voice command, with voice output, etc).

    All third person observables are behaviors, structures, correlations, etc., which can be described by science matter-of-factly, completely bypassing the use or inference of the word “consciousness”.

    It is only when one looks at oneself that the first person observable fact appears: “I am conscious.” That is how consciousness is observed: through the first person point of view. The existence of this first person point of view, through the existence of qualia that are being felt, is at the heart of the mystery of consciousness.

    There are generally two groups of people participating in consciousness discussions: 1) those who attempt to discuss other people’s “consciousness” as if consciousness were a third person observable, 2) those who are trying to explain the existence of this first person POV.

    The first group of people are doing science – behavioral science, I may add. The link given by Charles is a paper of behavior simulation. It is hard-core science. It is not consciousness study, but it is mistaken for one by some. But if we change what we mean by “consciousness” to fit the conclusion of the paper, then it is about “consciousness”. However, it is definitely not about the mystery of this first person POV, because there is none.

    The true discussion of consciousness should always address these two fundamental questions: 1) What happens to this first person POV when the neural substrate disintegrates? (easy answer: it dies with it) 2) How does this first person POV come about when the underlying neural substrate is formed? (easy answer: it forms with it) But these are the wrong answers, because they mistook behavior as consciousness. They think when something moves or is responsive, it is conscious.

    Ultimately, you have to address this question: why, when some physical brain is formed in a womb, do you acquire a point of view and become conscious? Think about the billions and billions of brains that have formed, or are forming, and that have not brought about your point of view. How do you explain the fact that only one out of the so many brains that have formed over the millennia brought about your point of view? What happened to the other brains?

    If one does not address this question, whatever he/she may name the study, it is not a study of consciousness. It is just a study of behavior of some creatures named humans and their brain structures. In this view, the “Clear Physicalism” paper is addressing something else.

  21. Vicente says:

    Kar Lee,

    they mistook behavior as consciousness

    Great statement !

    We should acknowledge that although consciousness has quite an impact on behaviour, they are definitely not the same.

    Probably the purest effect of clean consciousness on behaviour would be the cessation of action, i.e. the behaviour of no behaviour.

  22. John says:

    Charles: “what do you mean by “a 3D set of bits”?”

    The amount of information that can be encoded by a system depends upon the distinguishable states available in the system. The number of states depends upon the number of independent axes available: a volume (3D) and a plane (2D) can hold more states than a line; and, for those who understand that dimensional time exists, a spacetime extent can hold more than a 3D extent; and, for those who follow QM, qubits multiply the information-encoding capacity.

    I referred to a “3D” set of bits because the article was envisaging information encoded in the bulk of the brain. However, I could not discover how the information would be “read out”; the paper considered pain, so it was imagining some sort of internal readout within the brain – I cannot go any further because the paper did not explain.

    Vicente: “At the end of the day all research paths lead to meaning , that is what makes the difference.”

    Agreed. As I was discussing with Charles, meaning is not available in a 3D set of bits. See New Empiricism and meaning.

  23. Charles Wolverton says:

    “a volume (3D) and a plane (2D) can hold more states than a line”

    I don’t see this. If we are considering a finite number of states, then the 1-tuple (1D) of integers is overkill – and suffices even for a countably infinite number of states. So, how about an aleph-1 number of states? Then the 1D real line suffices (and actually is overkill – any interval suffices).

    And in that sense of “number of states” (ie, cardinality), adding dimensions doesn’t change anything. Eg, the cardinalities of N-tuples and 1-tuples of integers are the same. So in what sense – one that I am missing – is that quote true?

  24. Vicente says:

    Charles[23]

    Because an n-tuple can hold more information: for example, a 3D vector indicates modulus, direction and sense, while a real number can only account for magnitude (and sign), so adding dimensions allows each point to hold more information. You can extend the concept to other objects: the more parameters you have per object, the more information you can hold. In addition, if you consider a network or a mesh, the more dimensions, the more connections you can make; a 3D mesh can hold more information in its topology than a 2D mesh, and there is no 1D mesh. Think of neurons and synapses as a 3D mesh – for some people, a 3D mess ;-)

  25. Charles Wolverton says:

    If this ends up being a double post, apologies – the first try hasn’t appeared after half an hour.

    [Sorry, Charles - recently my anti-spam software has been blocking comments at random. I don't moderate comments in advance, so they should appear immediately - if they don't please drop me an email. - Peter]

    Vicente -

    I was responding specifically to what John said, viz:

    “The amount of information that can be encoded by a system depends upon the distinguishable states available in the system.”

    If you think that an inadequate definition of information, take it up with him.

    In any event, I see “module” [I assume you mean modulus], direction, and sense (whatever that means) as being more what John has in mind by “meaning”. In which case they aren’t inherent in the information content (John’s definition) of the 3-tuple but are added by an interpreter.

    For those who are familiar with the concept of layered comm protocols, the distinction is between what is transported at the lower layers (raw data) versus by the top (Application) layer, which is where interpretation occurs and “meaning” enters the picture. Anyone (other than a specialist) looking at the uninterpreted data structures of the lower levels would find the expression “it’s all Greek to me” applicable.

  26. John says:

    Charles: “I don’t see this”
    At the simplest level biological systems are finite, and the parts of those systems that hold a bit of information have a finite size. If a 1 mm cube can hold a bit, then 10 cm of bits in a line can hold 100 bits, a 10 cm square ten thousand bits, etc… At a more complex level Vicente is right, especially if physics is correct and reality is a four-dimensional continuum where the reality of objects is described by 4-vectors.
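    [Editorial aside: the arithmetic here can be checked in a couple of lines; the 1 mm bit-cell is the commenter’s hypothetical figure, and this sketch is mine:]

```python
cell_mm = 1        # hypothetical 1 mm cube holding one bit
extent_mm = 100    # a 10 cm extent

line = extent_mm // cell_mm   # bits in a 10 cm line of cells
square = line ** 2            # bits in a 10 cm square
cube = line ** 3              # bits in a 10 cm cube

print(line, square, cube)  # 100 10000 1000000
```

    [Capacity grows with the number of cells that fit, whatever the geometry.]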

    The article that is the subject of this thread was unhelpful because although it postulated a brain full of information it did not consider meaning beyond saying that some of the information was “pain” etc. How? As most correspondents here have said or implied, information is meaningless if it is just a static array of states of a substrate. Meaning only arises if the direction of events is included in our considerations ie: 4 dimensional states. See Presentism and the denial of mind

  27. Charles Wolverton says:

    John -

    Ah, but then “line” and “plane” are irrelevant. All you’re saying is that for a given memory technology, (ignoring side issues like interconnections, power, and weight) memory capacity is roughly proportional to the volume available to implement it. I’ll buy that.

  28. Charles Wolverton says:

    “Because a n-tuple can hold more information”

    You missed the point of my comment 23. If the components of the n-tuple are drawn from a finite set, comprising, say, two elements, then there are 2**n possible n-tuples. The information content (in the “distinguishable states available” sense of John’s comment 22, to which I was responding) of the 2**n possible n-tuples is exactly the same as that of a 1-tuple drawn from a set comprising 2**n elements.
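    [Editorial aside: the equivalence described here is easy to verify exhaustively for a small n – a quick sketch of my own:]

```python
from itertools import product

n = 4

# All n-tuples with components drawn from a 2-element set...
n_tuples = list(product([0, 1], repeat=n))

# ...versus 1-tuples drawn from a set of 2**n elements.
one_tuples = list(range(2 ** n))

# Same number of distinguishable states either way.
print(len(n_tuples), len(one_tuples))  # 16 16
```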

    And although I don’t think it’s meaningful in the context of information transfer to consider drawing from infinite sets, the same general idea applies if the components are drawn from either the integers or the real numbers because there are exactly the “same number” (in the sense of cardinality) of n-tuples of integers as there are 1-tuples; likewise for real numbers.

    I don’t see that modulus and direction in vector spaces have anything to do with the information content (in the sense discussed above) of n-tuples. I have no idea what the “sense” of a member of a vector space is, but I doubt it’s relevant either.
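Charles’s counting argument can be checked directly (a minimal sketch; n = 3 is an arbitrary choice): the 2**n possible n-tuples over a two-element set and the single draws from a 2**n-element set offer the same number of distinguishable states, hence the same information content in bits.

```python
import math
from itertools import product

n = 3
binary_alphabet = (0, 1)

# All n-tuples whose components are drawn from a 2-element set...
tuples = list(product(binary_alphabet, repeat=n))

# ...versus a single symbol drawn from a 2**n-element set.
big_alphabet = range(2 ** n)

# Both offer the same number of distinguishable states, so the
# information content (in the "distinguishable states" sense) is equal.
print(len(tuples), len(big_alphabet))          # 8 8
print(math.log2(len(tuples)))                  # 3.0 bits
print(math.log2(len(list(big_alphabet))))      # 3.0 bits
```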

  29. Christophe Menant says:

    Several interesting items are addressed in Peter’s analysis and in the comments:
    A) Reduction to physics:
    In post 7, John Davey reminds us that “the word ‘physical’ requires reduction to physics at some point”. True, but why not consider the level of life in this reduction process? The reduction of consciousness to matter and its laws bypasses the level of life which exists between matter and consciousness. Consciousness exists only within living organisms, not in inert entities. I feel that the reduction of consciousness should be based on life, not on matter. The “Mind-Matter” problem is a misleading perspective. Reducing consciousness to matter takes the nature of life as given. But we do not know the nature of life, and skipping the level of life in the reduction of consciousness just implicitly introduces another unknown level.
    B) Semantic aspect of information
    Several posts address the semantic aspect of information, the relations between information and meaning. The capability for a channel to transfer information (Shannon) is indeed independent of the meaning of the information. But where, when, why and how does the meaning come in?
    I feel that the key point is the meaning’s reason for being. Meaningful information does not exist by itself but is always generated by or for a system, and related to a constraint that the system has to satisfy (stay alive, obey social rules, avoid obstacles, look for happiness, …).
    Short paper introducing that perspective on meaning generation at: http://crmenant.free.fr/ResUK/MGS.pdf

  30. John says:

    Christophe: “The meaning is formed of the connection existing between the received information
    and the constraint of the system.”

    Yes, as Brentano showed over a century ago, meaning is “intentional”. It involves directed elements operating over a period of time (“directional” in itself implies operation over a period). The problem with the analysis in your article is that at any instant there cannot be any meaning (see Presentism and the Denial of Mind). You use the word “grounding” but have not addressed the problem of regress that occurs (see The Symbol Grounding Problem). Are you sure that your scheme is not a system for allowing conscious observers to grasp a certain type of “meaning”, rather than a scheme for a system itself to grasp meanings?

  31. Christophe Menant says:

    John, let me answer your post 30 points:

    John: “The problem with the analysis in your article is that at any instant there cannot be any meaning see Presentism and the Denial of Mind”.
    CM: I’m afraid that if we accept Presentism, any system that relies on measurements at a given time is impossible, the MGS like any other. So, putting aside the specificities of Presentism, I do not quite see the problem you are talking about. Could you please tell me more?
    John: “You use the word “grounding” but have not addressed the problem of regress that occurs see The Symbol Grounding Problem”.
    CM: The regress problem does not have to exist in the proposed grounding as there is no observer that needs to be observed in order to exist. The MGS is data processing that can be used for a frog, a human or a robot. The application makes the difference. The case of conscious humans can bring up the homunculus problem which does not exist for a frog. More detailed presentation available at http://www.idt.mdh.se/ECAP-2005/INFOCOMPBOOK/CHAPTERS/10-Menant.pdf
    John: “Are you sure that your scheme is not a system for allowing conscious observers to grasp a certain type of “meaning” rather than a scheme for a system itself to grasp meanings?”
    CM: As said, the MGS is a system, and it does not contain per se any element related to a conscious observer (remember that the starting example is a paramecium).

  32. John says:

    The regress that underlies the homunculus problem exists for any information system. The regress was first spotted by Aristotle who noted that for sensory impressions in the eye to be “seen” they would need to be transferred to somewhere that sees them but then all we would have is another sensory impression at a different place which would need to be transferred to yet another place and so on ad infinitum. The “homunculus problem” is a way of restating this regress argument in terms of little men within little men.

    Harnad spotted that the regress also applies to digital information processing systems. A store of electrical charges at any instant is just a store of electrical charges, it can only be related to something else through another store of electrical charges and this can only be related to something else by another store… This shows that in a digital computer each information store is isolated and meaningless at any instant.

    What all of these arguments have in common is that they treat events as successions of frozen, 3D sets of objects. If the universe is like this then at each instant nothing can be known and there is no meaning. Knowing and meaning always occur in the next instant… but the next instant is always in need of the next instant for knowing or meaning to occur. You can try to overcome this by extending events out into the world, but the picture is always the same: the instants are still frozen. You can draw diagrams with arrows to suggest that more than an instant is involved, but at each instant the arrows are meaningless and nothing can be known.

    If your MGS were actually to work you would need to say explicitly that you do not accept presentism and are adopting a four dimensional cosmology. If you do not say this you are implying that your MGS is no more than a frozen set of objects at any instant and that at any instant there are no other instants available to it.

    When I said that the MGS implies an external, conscious observer, what I meant was that it needs a real observer such as you or I to see how the arrows in the diagrams work. We are observers who extend in time, we can hear whole words and bars of tunes, not just nothing at all during an “instant” that has no duration.

    Incidentally, how does your “MGS” hear a whole bar of a tune extended in time, rather than less than a single note?

  33. Christophe Menant says:

    John, Let me answer your post 32 remarks:
    “The regress that underlies the homunculus problem exists for any information system. The regress was first spotted by Aristotle who noted that for sensory impressions in the eye to be “seen” they would need to be transferred to somewhere that sees them but then all we would have is another sensory impression at a different place which would need to be transferred to yet another place and so on ad infinitum. The “homunculus problem” is a way of restating this regress argument in terms of little men within little men”.
    CM: I’m afraid we may not be speaking about the same thing. Take a robot programmed to avoid obstacles and a hungry frog. When the robot identifies an obstacle or when the frog sees a moving black dot, the actions that come as consequences (turn to avoid, snap tongue to catch) close the perception/action process without requiring any type of regress in the perception process.

    “Harnad spotted that the regress also applies to digital information processing systems. A store of electrical charges at any instant is just a store of electrical charges, it can only be related to something else through another store of electrical charges and this can only be related to something else by another store… This shows that in a digital computer each information store is isolated and meaningless at any instant”.
    CM: Agreed that information stored in the computer can be considered meaningless for the computer. But the Harnad position about regress applying to digital info processing is new to me. Could you please email the corresponding Harnad text or its link?

    “What all of these arguments have in common is that they treat events as successions of frozen, 3D sets of objects. If the universe is like this then at each instant nothing can be known and there is no meaning. Knowing and meaning always occur in the next instant… but the next instant is always in need of the next instant for knowing or meaning to occur. You can try to overcome this by extending events out into the world, but the picture is always the same: the instants are still frozen. You can draw diagrams with arrows to suggest that more than an instant is involved, but at each instant the arrows are meaningless and nothing can be known.
    If your MGS were actually to work you would need to say explicitly that you do not accept presentism and are adopting a four dimensional cosmology. If you do not say this you are implying that your MGS is no more than a frozen set of objects at any instant and that at any instant there are no other instants available to it”.
    CM: What you write about the MGS is true for any real system. From a physical standpoint, a measurement cannot be done instantly. Any measurement of an event is a transfer of energy, and all transfers need time. Practically, the measurement duration is most of the time very small vs the time scale of the event. Tonight at 10:50 the sun is down. I have made a measurement that took a few seconds, looking at the clock and through the window. But my statement is still valid for 10:50.
    “When I said that the MGS implies an external, conscious observer, what I meant was that it needs a real observer such as you or I to see how the arrows in the diagrams work. We are observers who extend in time, we can hear whole words and bars of tunes, not just nothing at all during an “instant” that has no duration”.
    CM: As said, I agree in the sense that any modeling of reality is done by humans, which can assess the validity of the modeling.
    “Incidently, how does your “MGS” hear a whole bar of a tune extended in time rather than less than a single note?”
    CM: The functions present in the MGS make no hypothesis about the duration of the corresponding data processing. All measurements need time. Taking your hand away from a hot surface takes less time than thinking about how to position hot surfaces so they cannot be accessed, but both actions are about the same constraint to be satisfied: avoid getting burned.

  34. Vicente says:

    Charles[25]

    in any event, I see “module” [I assume you mean modulus], direction, and sense (whatever that means) as being more what John has in mind by “meaning”. In which case they aren’t inherent in the information content (John’s definition) of the 3-tuple but are added by an interpreter.

    Modulus! Sorry. By “sense” I meant that direction, in the old-fashioned way, was a line plus a sense of movement along that line… memory games. So vectors were defined by: modulus, line and sense.

    Going back to the point. You are absolutely right! But even more: the very concepts of n-tuple, number, anything you could mention, aren’t inherent in the “data”; they are all products of the observer’s interpretation, or understanding, of the perception (data, stimulus, anything…). All meaning is related to and based on the existence of an observer/interpreter, and added or created by it.

    If there is no observer there is no Universe, and the Universe is what it is plus what the observer understands it to be (I am aware there is a contradiction in this statement).

    I believe that one of the inherent/intrinsic faculties of consciousness is to produce meaning, to be able to interpret or understand a perception, even if that interpretation is wrong.

    To me: consciousness + interpretation (meaning) = awareness

  35. John says:

    CM: “I’m afraid we may not be speaking about the same thing. Take a robot programmed to avoid obstacles and a hungry frog. When the robot identifies an obstacle or when the frog sees a moving black dot, the actions that come as consequences (turn to avoid, snap tongue to catch) close the perception/action process without requiring any type of regress in the perception process.”

    I agree that simple machines work, but they do not contain the geometrical form that we call perception or mind, with objects laid out concurrently and simultaneously in space and time. Look around: only biological machines of a high degree of complexity have that view; a bunch of logic gates won’t do it. What we mean by “meaning” occurs in that mind.

    CM: “The functions present in the MGS make no hypothesis about the duration of the corresponding data processing. All measurements need time. Taking your hand away from a hot surface takes less time than thinking about how to position hot surfaces so they cannot be accessed, but both actions are about the same constraint to be satisfied: avoid getting burned.”

    The point was not that the words have a duration; it was that we have whole words and whole bars of tunes in our experience. When you listen to someone saying “hello” you don’t have “h”, then a completely separate “e” with the “h” gone, then a completely separate “l” with the “h” and “e” gone, then a separate “o” in your experience; you have the whole word “hello” stretched out in time. In fact, if time were composed of a succession of durationless instants you would have nothing in your experience, followed by nothing, followed by nothing, because nothing happens in no time at all. Your MGS will have no experience; it is just a simple machine unless you introduce some form of extension through time into it.

    The Harnad reference is at the bottom of: The symbol grounding problem and Chinese room

  36. Peter says:

    A response from Mostyn Jones:

    “Hello Peter (if I may).

    I was impressed by your perceptive review of my “Making mind-brain relations clear”. Here are some thoughts of my own.

    (1) I tried to show how the perennial mind-body problems in dualism, reductionism, etc. can be avoided by a realist view that consciousness is what specific brain events are physically like behind perceptions of them created by sense organs. For example, pain is what certain brain events wholly consist of. It exerts their forces, which EEGs can detect. So the pain I experience from a first-person perspective is perceived by others as my brain events from their third-person perspective.

    You replied that this just restates the mind-body problem, for it doesn’t explain why there’s a first-person perspective. Perhaps you’re reiterating conceivability arguments (Kripke, Chalmers, etc.) that neuroscience can’t explain first-person pains because it’s always conceivable that third-person neural activities aren’t accompanied by these pains. This is a common reaction to my theory.

    My reply is that this isn’t conceivable in my theory since pains are the underlying substance of certain neural activities. This echoes Maxwell’s 1979 reply to Kripke. Chalmers acknowledges that this type of reply exploits loopholes in conceivability arguments. He adds that there are no strong reasons to reject realist approaches to physicalism (mine being an example). They may ultimately provide the best integration of the mental and physical, he concluded.

    But perhaps you mean something further by saying that I don’t explain why there’s a first-person perspective? I’m searching here, but perhaps you mean something like I don’t explain why the universe came into being with a first-person perspective. I’d reply here that this is a cosmological question about how the universe was created, rather than a theory-of-mind question about how minds can be physical – so it wouldn’t threaten my physicalism.

    (2) In §6 I said that unified consciousness comes from highly active, highly connected neural circuits, and goes when this activity ceases. This seemed to you broadly functionalist. I can see why you say this. But keep in mind that what’s conscious here isn’t information flow, as in functionalism, but electrical flow, which is quite different: a mind-brain identity theory based on type identities.

    (3) You’re understandably skeptical about consciousness being a substance, since consciousness comes and goes, while matter is conserved. You could add that a substance is supposed to be an enduring stuff, but consciousness isn’t. But in my theory, consciousness, itself, is actually an enduring stuff that doesn’t come and go, for it’s the substance that certain molecules always consist of down to fundamental levels. Instead it’s the unity of consciousness that comes and goes. This unity arises from strong, unified neuroelectricity. As this electricity wanes, the brain’s consciousness dissolves into negligible, subliminal microexperiences.

    (4) Finally, you say that in treating consciousness as a substance I have it underlying observable physics yet doing work. This seems problematic to you. Again, I can see your point. But keep in mind that physicists admit that they only describe forces in terms of their observable interactions with particles – and that they can’t ultimately say what it is that exerts these forces. As Bertrand Russell noted, physics can’t tell us what quanta are like in themselves, but only what their observable effects are. I’m simply filling in what they’re like in themselves. In general my paper sets out to avoid the perennial problems (including functionalism’s) that have deadlocked mind-body theories, and it does so by just filling in what physics is silent about, namely, what brain matter is like behind perceptions of it.

    Cheers, Mostyn Jones”

  37. Peter says:

    Mostyn,

    Many thanks for this response, and for your kind words. What you say clears up a number of points for me – in particular I think I misunderstood originally the way you see consciousness being built up.

    I like the idea that qualia are just the reality of experience, but I still think it leaves you with some explaining to do. You hint at an explanation of this kind when you say “pains are the underlying substance of certain neural activities”; the question is, what kind of neural activity makes them pains as opposed to pleasure (or perhaps better, as opposed to nothing), and why?

    I must confess that I really need to think more about this, so I hope that makes some sense!

    Thanks again.
