I liked this account by Bobby Azarian of why digital computation can’t do consciousness. It has several virtues; it’s clear, identifies the right issues and is honest about what we don’t know (rather than passing off the author’s own speculations as the obvious truth or the emerging orthodoxy). Also, remarkably, I almost completely agree with it.

Azarian starts off well by suggesting that lack of intentionality is a key issue. Computers don’t have intentions and don’t deal in meanings, though some put up a good pretence in special conditions. Azarian takes a Searlian line by relating the lack of intentionality to the maxim that you can’t get meaning-related semantics from mere rule-bound syntax. Shuffling digital data is all computers do, and that can never lead to semantics (or any other form of meaning or intentionality). He cites Searle’s celebrated Chinese Room argument (actually a thought experiment), in which a man given a set of rules that allow him to provide answers to questions in Chinese does not thereby come to understand Chinese. The argument goes that if the man, by following rules, cannot gain understanding, then a computer can’t either. Azarian mentions one of the objections Searle himself first named, the ‘systems reply’: this says that the man doesn’t understand, but a system composed of him and his apparatus does. Searle really only offered rhetoric against this objection, and in my view it is essentially correct. The answers the Chinese Room gives are not answers from the man, so why should his lack of understanding show anything?

Still, although I think the Chinese Room fails, I think the conclusion it was meant to establish – no semantics from syntax – turns out to be correct, so I’m still with Azarian. He moves on to make another Searlian point: simulation is not duplication. Searle pointed out that nobody gets wet from digitally simulated rain, and hence simulating a brain on a computer should not be expected to produce consciousness. Azarian gives some good examples.

The underlying point here, I would say, is that a simulation always seeks to reproduce some properties of the thing simulated, and drops others which are not relevant for the purposes of the simulation. Simulations are selective and ontologically smaller than the thing simulated – which, by the way, is why Nick Bostrom’s idea of indefinitely nested world simulations doesn’t work. The same thing can however be simulated in different ways depending on what the simulation is for. If I get a computer to simulate me doing arithmetic by calculating, then I get the correct result. If it simulates me doing arithmetic by operating a humanoid figure that writes random characters on a board with chalk, it doesn’t – although the latter kind of simulation might be best if I were putting on a play. It follows that Searle isn’t necessarily exactly right, even about the rain. If my rain simulation program turns on sprinklers at the right stage of a dramatic performance, then that kind of simulation will certainly make people wet.

Searle’s real point, of course, is that the properties a computer has in itself – running sets of rules – are not the relevant ones for consciousness; Searle hypothesises that the required properties are biological ones we have yet to identify. This general view, endorsed by Azarian, is roughly correct, I think. But it’s still plausibly deniable. What kind of properties does a conscious mind need? Alright, we don’t know, but might not information processing be relevant? It looks to a lot of people as if it might be, in which case that’s what we should need for consciousness in an effective brain simulator. And what properties does a digital computer have in itself? The property of doing information processing. Booyah! So maybe we even need to look again at whether we can get semantics from syntax. Maybe in some sense syntactic operations can underpin processes which transcend mere syntax?

Unless you accept Roger Penrose’s proof that human thinking is not algorithmic (it seems to have drifted off the radar in recent years), we’re still really left with a contest of intuitions, at least until we find out for sure what the magic missing ingredient for consciousness is. My intuitions are with Azarian, partly because the history of failure with strong AI looks to me very like a history of running up against the inadequacy of algorithms. But I reckon I can go further and say what the missing element is. The point is that consciousness is not computation, it’s recognition. Humans have taken recognition to a new level where we recognise not just items of food or danger, but general entities, concepts, processes, future contingencies, logical connections, and even philosophical ontologies. The process of moving from recognised entity to recognised entity by recognising the links between them is exactly the process of thought. But recognition, in us, does not work by comparing items with an existing list, as an algorithm might do; it works by throwing a mass of potential patterns at reality and seeing what sticks. Until something works, we can’t tell what are patterns at all; the locks create their own keys.
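The contrast between the two kinds of recognition can be caricatured in a few lines of code – a toy illustration only, with patterns and data invented for the purpose:

```python
import re

reality = "the cat sat on the mat"

# Recognition as lookup: compare the input against an existing list.
known_items = {"dog", "bird", "fish"}
lookup_hits = [w for w in reality.split() if w in known_items]  # -> []

# Recognition as described above: throw a mass of candidate patterns
# at reality and keep whichever ones stick. The winners were not known
# in advance; the data itself selects them.
candidate_patterns = [r"c.t", r"m.t", r"z.g", r"s.t"]
sticking = [p for p in candidate_patterns if re.search(p, reality)]
print(sticking)  # ['c.t', 'm.t', 's.t']
```

The lookup finds nothing because nothing on the list is present; the pattern-throwing approach discovers structure it was never explicitly told about.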

It follows that consciousness is not essentially computational (I still wonder whether computation might not subserve the process at some level). But now I’m doing what I praised Azarian for avoiding, and presenting my own speculations…

132 Comments

  1. John Davey says:

    I think that consciousness is what it is. It’s neither information processing nor recognition nor integration nor any other function : it’s an irreducible, widespread natural phenomenon. As far as human cognition is concerned, consciousness sits in the same space as time and space : it’s recognisable but neither definable nor reducible into simpler components. It is what it is, like time and space, and we’re stuck with it.

  2. calvin says:

    As a machine consciousness developer, I find these arguments all excellent. I’m glad you appreciate that current computational approaches can never produce consciousness, precisely because of the semantic/recognition problem.

    In my search for a computational approach, I could find no typical models that lead to “recognition” solutions. Recognition always belongs with the programmer (or with the data manager for neural networks). The approach I’m using is to produce a computational substrate of many many program/data objects that interact with the goal of producing a homeostatic “cell”. The cell becomes the basis of representation and future recognition.

    The hope is that the right kinds of cells will interact and create networks with other cells. These networks of cells (and their cellular processes) would instantiate ideas, recognition, semantic and syntactic information, and especially qualia. The programs and data themselves interact with each other and as a side-effect form structures that represent the things the whole computational system recognizes. Essentially, programs and data complexes become representations of the contents of consciousness because of their intrinsic behavior, not because of extrinsic instructions.

    I think this is exactly what our own molecules do. They create cells, which form network structures. But the molecules are just engaging in their own chemical interactions with no extrinsic relationship to the cognitive processes. The network structures and cells become the representations (qualia) that an organism recognizes. This embodiment of representation is the only way I found to satisfy both the physical facts and the experiential facts such as qualia, dreams, thought etc.

    There must be an inside-out or bottom-up approach to producing the content of consciousness, just as there is in biology. Unfortunately, modern computation is very much top-down or outside-in, where the contents of consciousness never actually exist in the computer system. Building computer systems up from a “molecular” basis is the only way I have found that has a possible path to machine consciousness, or that doesn’t violate Searle’s “rules”.
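    calvin’s substrate, as described, could be caricatured in a few lines – a hypothetical sketch under invented assumptions (the Unit class, the homeostasis test and all parameter values are made up for illustration), not his actual code:

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical sketch: many small program/data objects interact, and
# the population counts as a "cell" once its internal states settle --
# a crude stand-in for homeostasis.

class Unit:
    def __init__(self):
        self.state = random.uniform(0.0, 1.0)

    def interact(self, other):
        # each interaction pulls both units toward their mutual average
        mid = (self.state + other.state) / 2
        self.state += (mid - self.state) * 0.5
        other.state += (mid - other.state) * 0.5

def is_homeostatic(units, tolerance=0.01):
    values = [u.state for u in units]
    return max(values) - min(values) < tolerance

units = [Unit() for _ in range(10)]
steps = 0
while not is_homeostatic(units) and steps < 100_000:
    a, b = random.sample(units, 2)
    a.interact(b)
    steps += 1

print(f"settled into a 'cell' after {steps} interactions")
```

    The point of the sketch is only that the stable structure emerges from the units’ own interactions rather than from extrinsic instructions, which is the distinction calvin is drawing.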

  3. Hunt says:

    Supposing Azarian (and you, and Searle) are right, let’s say we have a robot that simulates consciousness, including the subset of properties that computation allows. I would call it “sub”conscious, except that’s already taken; let’s call it conscious-lite. How does a conscious-lite robot act? Is it pretty much a p-zombie? This wouldn’t preclude a mixture of computation and biology that would be fully conscious, depending on the nature of these missing properties. This leads into the “replace a neuron with a simulated neuron and keep going” argument. It would seem that if parts of a biological brain could be replaced by computation while preserving consciousness, then the key properties are not continuous across the biological material of the brain; there is a “center” of consciousness, and including this biological center in the conscious-lite robot, with an appropriate interface, would make it conscious.

  4. Callan S. says:

    Augh, how does this pass first muster?

    Let’s take just a fairly conventional extermination approach – the robot has ranged weapons. And take what we know of a homing missile – a proven autonomous robot, still being used widely today.

    The article wants ‘consciousness’ to do double duty – make us special and the robot unable to initiate anything.

    But everyone who’s done any programming has written a program that has done something they didn’t expect. They might call it a bug, or more rarely keep it as a feature. But in programming, programs regularly go outside the expected boundaries set by their authors. It doesn’t take any magic consciousness for that to happen.

    Killing people is just a logistical puzzle. But that’s the double-duty fallacy right there – as if to solve problems, one needs consciousness. There are plenty of programs that can put together a jigsaw puzzle by themselves – they solve a puzzle. Shooting bullets is just putting one part of a puzzle, a bullet, into another part of the puzzle, a torso or skull. He’d have to argue that jigsaw-puzzle-solving programs are not possible and that programs cannot accidentally extend outside their expected boundaries in order to actually engage the topic. But it’s an easy way to be right, by simply being off topic.

    I’d get it if he had argued that such machines would not be very bright or adaptive, thus easily quashed. There are problems with that as well, but at least it’d pass first muster. But instead it’s stuck at an exceptionalism – and one that also protects us!

  5. Stephen says:

    The author points out that: “A Turing machine’s operations are said to be “syntactical”, meaning they only recognize symbols and not the meaning of those symbols—i.e., their semantics. Even the word “recognize” is misleading because it implies a subjective experience, so perhaps it is better to simply say that computers are sensitive to symbols, whereas the brain is capable of semantic understanding.”

    What he doesn’t say, because no one knows, is how the brain attains its semantic understanding. His assumption seems to be that it is somehow through means that are not syntactical.

    We can make a case, though, that the brain is a collection of various types of neurons connected together in a particular manner, with some sort of behaviour modulation caused by its environment, and that consciousness is an emergent behaviour of this. Now, we know that we can model simple neural networks on digital computers. Making very small neural networks which realistically model something in a brain is much more challenging, but has also been done. So now it is a matter of degree. Conceivably, a whole functioning brain could be modelled. What is most likely (just my opinion, though) is that the approach using silicon-based digital computers is too inefficient. With a network of thousands of next-gen Deep Blues loaded up with a complex neural network model, it might take weeks to get a response to a simple stimulus. Semiconductors only work up to certain speeds, and even the speed of light would be a constraint on the distance electrons would have to move. The point is that, in my opinion, there is nothing inherently constraining about using a Turing machine to host a consciousness.
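    Stephen’s claim that a digital machine can host neural dynamics (however inefficiently at brain scale) is easy to illustrate at toy scale – a minimal leaky integrate-and-fire neuron, with parameter values invented for the example:

```python
# A minimal leaky integrate-and-fire neuron in plain Python: the
# membrane potential leaks each step, integrates the input current,
# and fires (then resets) when it crosses a threshold.

def simulate(input_current, threshold=1.0, leak=0.9, steps=50):
    potential = 0.0
    spikes = []
    for t in range(steps):
        potential = potential * leak + input_current  # leak, then integrate
        if potential >= threshold:
            spikes.append(t)   # fire...
            potential = 0.0    # ...and reset
    return spikes

print(simulate(0.3)[:5])  # [3, 7, 11, 15, 19]
```

    Nothing about this requires special hardware – which is the point; the question of whether speed and scale matter for consciousness is left open.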

    To develop strong AI, I expect we will need some sort of neural computer, just for practical reasons. Rather than creating a simulation we would focus on creating an artificial brain. They would probably initially compare to a human brain the way a soap box racer compares to a real race car, but everything takes time.

    The problem with the article, as with many similar articles, is that the writer reviews the current state of technology assuming that it will never advance, and on that basis makes statements like “we may never achieve Strong A.I.”. This is akin to the argument that we have been told strong AI is coming soon, and since it clearly isn’t imminent it must be impossible. It doesn’t take much imagination to conceive of programs that are more about relationships between symbolic representations rather than a simple series of IF statements, memory accesses and computational calculations, which you might see in a typical current day AI algorithm. Maybe some patience might help. This is going to take some time.

    One thing I can agree with is: “The best approach to achieving Strong A.I. requires finding out how the brain (does) what it does first, and machine learning researchers’ biggest mistake is to think they can take a shortcut around it.”

  6. Sci says:

    Thanks Peter – Glad to see this article, I suspect there will be increasing criticisms of computationalism as the years go by.

    That said, a limited computationalism like the one Sergio has posited might still tell us something useful about consciousness. I suspect certain aspects of our existence can at the least be modelled via programs, and this can assist us in finding equivalent structures in the brain.

  7. Hunt says:

    One thing I can agree with is: “The best approach to achieving Strong A.I. requires finding out how the brain (does) what it does first, and machine learning researchers’ biggest mistake is to think they can take a shortcut around it.”

    The historical approach has been to make a frontal assault on strong AI without a real battle plan or any philosophical or scientific grasp. Basically I think researchers thought they would charge in and problems would fall as they came to them. Eventually the philosophical or scientific high ground would be achieved and walls would start falling. “Brains” would play a certain, but small, part in it, just as the flight of birds played only a minor role in achieving artificial flight. Well, things haven’t exactly gone to plan, but as you say, it really hasn’t been that long – and even less time since research fell back to tediously dissecting bird wings.

  8. Luís Ferreira says:

    I’m totally with Stephen here. I’d also like to add a few points.
    The argument from the limitations of simulation is totally unconvincing. The Turing test, for instance, doesn’t seem to be bothered by it.
    Also, reducing computer processing to rule-based algorithms means overlooking what has been done about it over the years, recent and not so recent. What about genetic and neural algorithms?
    Finally, the argument from intentions and meanings. I’m yet to accept that humans have intentions and meanings, semantics, and free will for that matter. Instead of pointing out the likely fact that programs running on current computers don’t exhibit any of that, we should be asking what exactly meaning is, and what exactly free will is, in a world explained by Physics (do you feel that some quantum uncertainty gives you freedom? might as well take decisions based on flipism https://en.wikipedia.org/wiki/Flipism).
    The greatest challenge for me is qualia. All the rest is peanuts compared with that.

  9. Sergio Graziosi says:

    Oh, I’ve been “evoked”!
    (Thanks Sci)
    Re the original article, I’m sort-of with Peter, in the sense that reading it didn’t make me howl in exasperation, which is very rare. Predictably I disagree with the main point: the fact that we don’t have a reasonably agreed idea of how to move from syntax to semantics is not, cannot be and never will be able to guarantee that it can’t be done. As simple as that: you can’t make a conclusive claim based solely on ignorance (lots of previous dialogues on this).
    I take courage because, unlike this article, something else has been brought to my attention in which I found nothing (or nothing which I can remember) to quibble about: it’s an interview/podcast featuring Colin Allen. I’m mentioning it here because:
    – Allen also makes the same point above.
    – He reminded me that I ought to give much more attention to ethology,
    – He mentions a quite good experimental paradigm to evaluate if an organism is conscious.
    – He does so by linking a very special kind of learning with consciousness (which is what I’m trying to do as well: I love being made redundant!).
    Thus, I’d be really interested to hear the impressions from you all.

    Re the point Peter is re-stating:

    The process of moving from recognised entity to recognised entity by recognising the links between them is exactly the process of thought. But recognition, in us, does not work by comparing items with an existing list, as an algorithm might do; it works by throwing a mass of potential patterns at reality and seeing what sticks. Until something works, we can’t tell what are patterns at all; the locks create their own keys.

    I don’t know if it’s “exactly the process of thought” but I agree it is a very important component, foundational even. The problem I have with the rest of the quote is, again, predictable. Peter, you produced what in my line of work would look like the first description of how a to-be-written program should work. You are spelling out the rules of how such a thing should operate (a very sketchy, high-level description); you are dangerously close to describing an algorithm. Sure, that’s not how pattern-matching algorithms typically work today, but again, there is no in-principle reason to believe what you describe can’t be done algorithmically. [Peter: hint, hint! You might want to re-look at predictive coding and the like – “throwing a mass of potential patterns at reality and seeing what sticks” seems like a possible way of describing the process.]
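    The predictive-coding idea Sergio hints at can be caricatured in a few lines – a toy loop under invented assumptions, not the machinery of any real theory:

```python
# Toy predictive-coding loop: keep proposing a prediction, measure how
# badly it misses reality, and revise -- the "pattern" that sticks is
# the one that drives the prediction error toward zero.

def settle_on(reality, learning_rate=0.25, steps=40):
    prediction = 0.0
    for _ in range(steps):
        error = reality - prediction         # how badly the guess missed
        prediction += learning_rate * error  # revise the guess
    return prediction

print(round(settle_on(7.0), 3))  # 7.0
```

    However sketchy, it shows that “see what sticks” is itself a procedure – which is exactly Sergio’s point against any in-principle ban on algorithms.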

    Re “howling in exasperation”: the already mentioned essay from Epstein did offend my sensibility, to the point that I had to write a reply (you may want to compare and contrast Azarian’s and Epstein’s takes: one is essentially reasonable, the other isn’t, IMHO). I mention it here on an ICYMI basis, and with a little worry, as I don’t mean to re-heat an already (uncharacteristically) pugnacious debate. Usual disclaimer: I don’t know how quick I can be in replying…

  10. John Davey says:

    excellent Chomsky analysis – a must for readers of these pages

  11. Sci says:

    OT: Strawson makes the renewed case for the panpsychic/neutral-monist argument that consciousness is stuff…I think?:

    Consciousness Isn’t a Mystery. It’s Matter.

    http://www.nytimes.com/2016/05/16/opinion/consciousness-isnt-a-mystery-its-matter.html

  12. john davey says:

    Sci

    Good article. The problem with consciousness-denying is that you have to acknowledge its existence in order to deny it. If there were no such thing as consciousness – which is not a metaphor, or a mystical concept, but an aspect of psychology with NO similarities to anything else – then you couldn’t deny it without actually knowing what it is in the first place.

    You wouldn’t get a dog denying the existence of special relativity. It’s beyond his cognitive scope in the first place. Likewise a colour blind man won’t deny the existence of red or green – he doesn’t know what they are in the first place.

    But funnily enough consciousness deniers seem to have no problem recognising and knowing what it is they are denying. They thereby ironically validate its very existence. Knowledge of consciousness is the same thing as having it. Zombies would not talk about consciousness, as they don’t have knowledge of it.

  13. john davey says:

    Luis

    “what exactly is free will in a world explained by Physics”

    see the Chomsky video. Since the days of Newton, if there’s one thing that’s true, it’s that the world isn’t explained by physics. ‘Explanation’ is not a simple word, fluctuating between “making intelligible” (something that physics has never been very good at since the law of gravitation) and metric prediction (which physics is good at and has been useful for).

    the fixation with the supremacy of physics is creating a massive pile up of wasted energy in the field of brain study.

  14. john davey says:

    Sci

    Don’t think the Strawson article is strictly monist/panpsychic or whatever. It’s a simple scientific claim that matter causes minds because of its properties; I don’t think that they are necessarily equated (then again, I suppose, what is matter? Good question).

    J

  15. VicP says:

    John, A vehicle is a mode of transportation. Before the late 19th Century you would consider a horse a vehicle. An automobile is an artificial vehicle which does the same things a horse does except faster and better.

    What we are really dealing with here is the computational bias which is very valid. Valid why? A computer is simply a mass of hardware junk if you disable its (artificial) clock. Better put, the simplest nervous systems produce time or sense of time. What is really bound up here in these discussions is the Sellarsian notion of a “manifest image” of time. If you unbind time as a biological sense along with gravity/balance for the sensorimotor skills, then you have a starting point.

  16. Michael Murden says:

    The problem I’ve always had with the Chinese Room thought experiment is that the range of human ‘language related activities’ goes far beyond responding to words with other words. An appropriate response to “get down” might be to start dancing, or drop below the parapet to avoid incoming fire, or laugh. The question one ought to ask is whether a manufactured thing can respond appropriately to human communicative (not merely linguistic) activities over the entire range of human communicative activities. It remains to be seen if such a thing can be manufactured, but if it can be I see no reason not to consider such a thing to be a person.

    If manufactured persons prove to be possible they will also have to be taken as proof that the distinction between semantics and syntax that drives much philosophy is not valid. It’s a curious sort of distinction anyway. On the one hand you have syntax, written and spoken language, which we can all see and which we all agree exists. On the other hand we have semantics. We can’t see inside other people’s heads to see their semantics. We just infer their semantics from our mutually shared syntax. Is it possible that we just infer the existence of our own semantics from our syntax? The word “rose” can cause people who read it to summon images of roses to their mind’s eye. It can cause them to remember variations of the “roses are read, violets are blue” poem or perform a myriad of language related mental tasks. These tasks seem all to be syntactical in that they all relate symbols to other symbols rather than relating symbols to real things. It seems to me that there has to be some necessary connection between syntactical objects and non-syntactical objects (real things) rather than merely connections between syntactical objects and other syntactical objects in order for semantics to be real. The question “what is the nature of the relationship between syntactical objects and the non-syntactical universe that constitutes semantics” is one for which I have not yet seen a convincing answer.

  17. Tom Clark says:

    Michael Murden in #16: “The question one ought to ask is whether a manufactured thing can respond appropriately to human communicative (not merely linguistic) activities over the entire range of human communicative activities.”

    This seems to me a helpful way of framing the question of machine understanding or what we might call semantic competence: can it respond appropriately? This is an objective test that makes no appeal to the subjective experience of understanding something. Using this criterion (but in a restricted domain of communication), IBM’s Watson certainly seemed to understand questions posed to it on Jeopardy. I don’t see that there’s anything in principle that would prevent an AI from attaining semantic competence in further domains, eventually approaching and perhaps surpassing the full human range.

    The appropriateness test of course relativizes understanding to a community, which will either accept or reject the entity in question as a cognitive peer. This was one of Hilary Putnam’s major points against the possibility of reducing semantics to operations on physically realized internal syntactic states alone, whether viewed computationally or more broadly functionally: “meaning ain’t in the head” he said.

    Michael again: “The question ‘what is the nature of the relationship between syntactical objects and the non-syntactical universe that constitutes semantics’ is one for which I have not yet seen a convincing answer.”

    I just finished reading Putnam’s Representation and Reality (1988), in which he ends up saying “I do not see any possibility of a scientific theory of the ‘nature’ of the intentional realm, and the very assumption that such a theory *must* be possible if there is anything ‘to’ intentional phenomena at all is one that I regard as wholly wrong.” I don’t know if in later works he ends up changing his mind about this (as he did about functionalism), but as yet there is still no accepted reductive account of intentionality, only a plethora of proposals, some of which (like Putnam’s) are explicitly non-reductive. And the same goes for explaining phenomenal experience (qualia). But as Sergio says in #9, our ignorance and current lack of theoretical closure are no grounds for supposing that machines will never achieve consciousness.

  18. VicP says:

    Michael, As part of the answer to your above, there are obviously structures in the brain dedicated or re-adapted for the syntactic to semantic.

    Sergio, Really enjoyed your comments and referenced blogs. I agree up to a point, but believe Epstein is offering a very good argument to shake us from our computational slumber. The author is a psychologist, and psychologists know that many psychological problems originate from past emotional feelings, not from the representations that may trigger them. In other words, he is trying to get us to see the brain as more of a hierarchy built on emotions, drives, sensorimotor skills etc. I don’t let the representational argument bother me, because the brain has dedicated areas for visual and auditory representation. Since you’re a programmer yourself: he is really pointing to a “pointers” theory of how the brain builds networks – we recall the networks, not the representations, to trigger the memories.

    Think of returning to the school you attended when you were ten years old. You may have a flood of recalled emotions and experience as if you were still ten. The physical school itself is the representational memory and the feelings are simply a re-enactment, like playing the piano which Epstein references.

    Peter, A new Michael Graziano AST article. Has some flavor of Arnold’s Retinoid Theory.

    http://www.theatlantic.com/science/archive/2016/06/how-consciousness-evolved/485558/

  19. john davey says:

    VicP

    “John, A vehicle is a mode of transportation. Before the late 19th Century you would consider a horse a vehicle. An automobile is an artificial vehicle which does the same things a horse does except faster and better.”

    You would consider a horse a horse. You can use it for transport but it’s still a horse.

    I don’t get your point. Are you saying there are computers in nature that you can find? Where ?


    “What we are really dealing with here is the computational bias which is very valid. Valid why? A computer is simply a mass of hardware junk if you disable its (artificial) clock. Better put, the simplest nervous systems produce time or sense of time. What is really bound up here in these discussions is the Sellarsian notion of a “manifest image” of time. If you unbind time as a biological sense along with gravity/balance for the sensorimotor skills, then you have a starting point.”

    Aren’t all clocks artificial? Other than that, I don’t understand the rest of the paragraph at all.

  20. john davey says:

    VicP


    “there are obviously structures in the brain dedicated or re-adapted for the syntactic to semantic.”

    This is a bold statement. Can you refer me to the relevant academic/neurological sources that prove it? And please, please don’t talk about ‘feedback loops’….

    I thought that “syntactic” is by definition not “semantic”. They are naturally mixed together in the world of natural language and experience and we make the effort to actually separate them in the first place. They are distinguished by their orthogonality : never the twain shall meet. That’s how we notice the difference in the first place.

    Can you think of a mechanism to transfer the label ‘red’ into the subjective experience of red using digital number transfer (ie computation) ? What sequence of digital transfers would achieve such a thing ?

    J

  21. john davey says:

    Michael


    “The problem I’ve always had with the Chinese Room thought experiment is that the range of human ‘language related activities’ goes far beyond responding to words with other words.”

    I don’t think that’s relevant. The Chinese room is more general, and I think Searle went wrong in that he conceded too much and made it too narrow. The point is there is a total disconnect between symbol manipulation and the function that symbol manipulation is trying to achieve. You cannot deduce from a computer program’s internal symbolic manipulation what it is actually trying to do.

    The man in the Chinese room wouldn’t know if he was running a system to translate Chinese, sell stocks or control traffic – let alone the loftier goal of ‘understanding’ Chinese.

    The only way to find out what semantic function a program is meant to be fulfilling is to ask the guy who wrote the program what it’s meant to be doing. Semantics in computational systems is only extant in the designers and users of such systems, not the machines themselves. Computational systems are value-added comms points between designer and user, akin to fancy telephones.

  22. VicP says:

    John,

    Was just making the distinction between a biological vehicle and a manmade vehicle.

    All clocks are mechanisms including our biological clocks. Time is like color. Our bodies detect it and we can build devices to detect it.

    The bold statement may require a longer explanation or article. I have to work on it.

  23. VicP says:

    Michael, You said above: “roses are read, violets are blue”. Actually roses are red.

    1) For the sake of argument let’s say: “Roses are rd”

    2) Repeat this statement “Rd roses are rd”, except force all of the words through your left brain lobe.

    3) Repeat this statement “Rd roses are rd”, except force all of the words through your right brain lobe.

  24. calvin says:

    John (#20)

    I can explain how syntactic processing becomes semantic information – at least how I think it’s done in organisms and how I am attempting to do it with computers. (VicP’s argument is somewhat flawed.)

    The syntactic processes of the brain are molecular interactions. The molecular chemistry in cells themselves is syntactic (or syntactically equivalent). Individual cells do not have or “know” information. The collection of molecules in a cell respond as a complex physical system to each individual physical change of molecules in that system. Cells do not store qualia in any way. Cells do not have intention in any kind of way. As the brain is merely a collection of cells, and cells are a collection of molecules, there is no obvious way the brain can convert syntactic information to semantic experience.

    For example, wavelengths of light induce molecular changes in photoreceptors, which then mechanically produce molecular interactions through neurons, leading to molecular interactions between neurons in the visual cortex and throughout the brain. All of these interactions are molecular, or involve changes of charge which induce individual molecular changes. There is no level of interaction above the molecular phenomena. Nowhere in the whole visual system do we see the 460nm wavelength become blue; blue is not a molecular, physical, or syntactic phenomenon. But irrefutably we see blue under most conditions of seeing 460nm light.

    To make the syntactic-to-semantic switch, there must be structures which represent syntactic facts as semantic information: a structure which performs the representational process 460nm_light as blue. This structure and process must produce an equivalence between 460nm light and blue. For example: 460nm_light as blue = blue, and blue = blue as 460nm_light, so that the structures that instantiate blue semantically and the physical (syntactic) phenomenon of 460nm light co-occur.

    To get to that level of complexity requires a low-level physical phenomenon that is purely semantic. That is, a physical phenomenon of molecules that persists over time, that is itself not syntactic, but is co-occurrent with whatever syntactic processes are associated with that semantic idea (and semantics are always ideas). The fundamental semantic thing is the cell itself.

    The cell is not a physical phenomenon; it is a semantic phenomenon. The molecules of the cell are the physical phenomena. The cell cannot “cause” its molecules to do anything. The cell has no functions. The molecular functions that occur in a cell are individual and completely agnostic to the cell. There is no program or intention that tells a cell what to do. DNA is just another syntactic part of a cell, which encodes many syntactic components but is itself insufficient to make a cell. DNA has no intention. DNA is not semantic. The cell cannot communicate with its DNA. Using semantic concepts to describe cellular phenomena is completely incorrect, because all of the cell’s phenomena are syntactic phenomena of molecules. It’s chemistry, and the “cell” has nothing to do with it. However, the cell, and particularly its metabolic and homeostatic “needs”, are obviously semantic information. The cell itself is the semantic base, or the representational foundation. If cells die, they lose their semantic meaning or semantic content – the molecules no longer represent a cell. But the molecular interactions keep going – just not as a cell. The molecules do not care about the cell at all. There is no such thing as a cell to the molecules, to the chemistry, or to the physics.

    But obviously there are cells. “Molecules as cell” is the representational structure. The cell is semantic.

    What follows then is that collections and networks of cells can form structures to instantiate and represent more complex semantic information. Creating the right cell networks can create mechanisms that back-propagate action from the semantic content the cell networks represent down to the purely syntactic processes the molecules in those cells perform. This is the only way I have found to explain the bi-directional causation problem (the mind-body problem). Cell networks instantiate ideas, and those structures induce internal molecular interactions under certain conditions; certain molecular interactions will affect cell networks so that those networks instantiate semantic information.
    460nm_light affects cell – “460nm_affected_cell as blue_cell”. “blue_cell” only makes sense if it’s part of a larger semantic visual network.

    There has to be a path from the idea/qualia down to the molecular interactions and from the molecular interactions up to the idea/qualia phenomena. I am using a similar process with computation to produce a computational cell from complex collections of interacting automata.
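Calvin doesn’t spell out his automata system, so purely as an illustrative sketch (none of these names, rules, or numbers are his), here is a toy in which ‘molecule’ automata follow only local rules while “the cell” exists merely as an aggregate description laid over them:

```python
# Illustrative toy only - NOT calvin's actual system (he gives no details).
# 'Molecule' automata follow purely local ("syntactic") rules; "the cell"
# exists only as an aggregate description over the collection.
import random

random.seed(0)  # reproducible toy run

class MoleculeAutomaton:
    """Follows a local rule; knows nothing of any 'cell'."""
    def __init__(self, energy: int):
        self.energy = energy

    def step(self, neighbor: "MoleculeAutomaton") -> None:
        # Local rule: one unit of energy diffuses to a lower-energy neighbor.
        if self.energy > neighbor.energy:
            self.energy -= 1
            neighbor.energy += 1

molecules = [MoleculeAutomaton(random.randint(0, 10)) for _ in range(20)]
initial_total = sum(m.energy for m in molecules)

def cell_is_alive(ms) -> bool:
    """'The cell' here is just a description of the aggregate, not an agent."""
    return sum(m.energy for m in ms) > 50

for _ in range(100):
    a, b = random.sample(molecules, 2)
    a.step(b)  # only molecular interactions ever occur

# The local rule conserves total energy, so the aggregate description
# ("alive" or not) is settled entirely by the molecular history.
assert sum(m.energy for m in molecules) == initial_total
```

Nothing in this code gives the “cell” any causal powers over its molecules; whether such an aggregate could ever constitute genuinely semantic content is exactly what is in dispute.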

  25. Michael Murden says:

    To John Davey (21)

    If I understand Searle’s larger point, it’s that mere symbol manipulation does not suffice to demonstrate comprehension. If that is what the Chinese Room experiment purports to demonstrate, the problem we have to address is that the only thing we have with which to demonstrate comprehension between humans is symbol manipulation (in the wide sense I meant to specify above by “whole range of human communicative activities”). We have our syntax in common, as demonstrated by the fact that you responded to my symbol manipulation in a way that allowed me to respond here to your symbol manipulation. At best I can infer that you have semantic capabilities from my perception of your syntax, and at best you can infer that I have semantic capabilities from your perception of my syntax.

    This leads me to two questions. The first is: given that we can only infer semantics from syntax, what reason do we have to claim that a manufactured thing which performs syntactically at a human-equivalent level does not have semantics? The second is: given that all we have to validate the existence of semantics in others is syntax, do we have anything other than inference from syntax to validate the existence of semantics in ourselves?

    If we can’t prove through shared language that other human beings aren’t philosophical zombies, is there another way to prove it? How can you prove to yourself that you are not a philosophical zombie? Each human being is a Chinese Room to every other human being, so whatever convinces you of the humanity of one Chinese Room should suffice to convince you of the humanity of any other Chinese Room.

  26. Charles T Wolverton says:

    As Tom and Michael both suggest, semantics is context-dependent. Semantics requires a priori agreement (possibly implicit) between users of a syntax as to the “meaning” (ie, “appropriate response” per Tom’s comment 17) of syntactic tokens. “Get down” is syntactically sound English, but its semantics depend on context: disco, battlefield, pillow factory, etc. Thus, John’s observations about the disconnect between syntax and semantics are correct in the absence of context, but context can provide a connection.

    From this perspective, I’d express Michael’s first point in comment 25 as follows. He and John can exchange syntactic tokens (AKA symbols) successfully in the CE context because both are members of a semantic community relevant to this context. Either of their emissions of those tokens would be largely meaningless to the vast majority of English speakers (eg, to me as of only a few years ago). So, rather than saying that semantics is inferred from syntax, I’d say that the semantics of syntactic tokens may be inferred from context by members of a relevant semantic community. So, reversing this in response to Michael’s first question in that comment, I’m inclined to say that if in a given context an entity’s responses to syntactic tokens are responses that would be expected of a member of a relevant semantic community, then the entity is a member of that community. Which seems consistent with Michael’s suggestion in the first paragraph of comment 16 about when to interpret an entity as being person-equivalent.

    My perspective comes from comm protocol stacks. In the lower levels of a stack, there is only syntax, and the objective at those levels is just to communicate syntactic tokens with some fidelity. Semantics is relevant only at the higher levels of a stack and is the result of a priori agreement between communicators as to what actions to take upon receiving those syntactic tokens. Eg, “one if by land, two if by sea” is syntax; the semantics is the action to take in response to each of those syntactic tokens.
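Charles’s two-level picture can be put in toy form. In this sketch (the token names and “meanings” below are invented for illustration), the lower level only delivers tokens with fidelity; the “semantics” is a pre-agreed mapping from tokens to responsive actions, on the “one if by land” model:

```python
# Toy sketch of semantics as a community's pre-agreed token-to-response
# mapping; the tokens and meanings below are invented for illustration.

def transmit(token: str) -> str:
    """Lower stack level: delivers the syntactic token with fidelity,
    knowing nothing of its meaning."""
    return token

# Higher level: the a priori agreement ("one if by land, two if by sea").
LANTERN_CODE = {
    "one lantern": "they come by land",
    "two lanterns": "they come by sea",
}

def interpret(token: str) -> str:
    """Semantics: the responsive action the community agreed to attach."""
    return LANTERN_CODE.get(token, "mere syntax: no agreed response")

received = transmit("two lanterns")
print(interpret(received))           # prints "they come by sea"
print(interpret("three lanterns"))   # outside the agreement: meaningless
```

The point of the sketch is that `interpret` lives entirely in the prior agreement: change the dictionary and the very same tokens mean something else.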

  27. Sergio Graziosi says:

    Interesting ideas floating around…

    Calvin (#24)
    You lost me at the point below, for reasons that should become apparent from the rest of my comment.

    To make the syntactic-to-semantic switch, there must be structures which represent syntactic facts as semantic information: a structure which performs the representational process 460nm_light as blue. This structure and process must produce an equivalence between 460nm light and blue. For example: 460nm_light as blue = blue, and blue = blue as 460nm_light, so that the structures that instantiate blue semantically and the physical (syntactic) phenomenon of 460nm light co-occur.

    VicP (#18) thanks!
    I don’t think we disagree much. I’ve been sloppy in writing my comment (#9) here, I got carried away. The reason why I felt the urge to write a strong rebuttal is that I agree with the general spirit of Epstein’s attempt, but strongly disagree with both the execution (how the argument is substantiated) and the concrete conclusions (there is no information in brains, stuff like that). I just can’t help it, when I see an argument that tries to go in the right direction (e.g. “many psychological problems originate from past emotional feelings and not representations that may trigger them”), but does so by making one mistake after the other, I feel that it is actually undermining the valuable side of the intended message. You can’t dispel wrong assumptions or conclusions by replacing them with other, equally wrong ones – it just doesn’t work: people will see the mistake(s) and dismiss the whole message…

    As a programmer yourself, he is really pointing up a “pointers” theory or how the brain builds networks which trigger, or we recall the networks, not the representations to trigger the memories.

    This is a way of looking at things that does make sense to me. It’s a first tiny step between bridging radical embodiment approaches with our own mental life, and I think it works. If Epstein had taken this route (the metaphors you use are entirely amenable to be explored in computational terms), I would not have felt the urge to reply.

    Michael (#25), Charles (#26) and All:
    this is probably just me, with my DIY philosophical education and my vast ignorance. But aren’t we happily dancing around a big elephant? What is our working and generally agreed definition of “semantic”? This is a genuine question: how do we draw a sharp boundary between semantic and syntactic? Take Charles’ example: lower down at the levels of the IP stack, the mechanisms are all “syntactic”; the semantic side supposedly applies only to the higher levels (closer to what the user sees, BTW), but why? Inside our machines, all levels are “processed” via exactly the same mechanisms – what grants us the possibility of drawing a distinction if it’s turtles all the way up? If we only wanted to receive one bit (signalling that the remote system is working, for example) we could stop at the lowest possible level of the stack and “assign” a semantic value to it. So is it all “just” pre-agreed, a priori convention? It can’t be, because then we’d have nothing much to explain. Thus, I don’t feel this route helps. Otherwise, grounding semantics in action (shades of Jochen, Michael #25 and many others before) looks more promising to my eye (hence my interest in embodiment, if you wish).
    Furthermore, shouldn’t we distinguish between the semantics of signals (in a strict “communications” domain), where the “pre-agreed”, context-dependent properties very much apply, and perception in general, where the issue is reversed, taking the form of the framing problem (i.e. how do we pick the right context to understand what sensory signals mean to us?)?
    I’m just thinking in writing, but I guess I’ve reached one conclusion. Yes, we should make a sharp distinction between explicit signals and generic perception. Overall, we are all much more interested in the latter case than the former, right?
    Thus, open question: how do we go about formally describing both the distinction and the two sides?
    It’s very likely that you guys know about many attempts at doing something similar; I’m merely exposing my ignorance (so that it may be mitigated). Thanks!

  28. VicP says:

    I think the best way to RESPOND to this discussion is to say that we are biological Stimulus-Response (SR) systems. What we are doing is responding to others’ responses. The fact that we have intermediary stimuli, which we call syntax or language, that transcend a physical environment can be trivialized. The complexity of the syntax and syntax processing, which includes intermediary internal SR and the mixing of other stimuli (the context argument), adds nervous system complexity, which adds complexity to the arguments, but does not change my basic premise of SR systems.

    One conclusion I draw from the Chinese Room is that Searle’s understanding of computers and computer architecture is deficient, so he draws a poor conclusion. I agree that you can trivialize computation, but he lacks the understanding of why. True, we are biological and computers are not, but computers are designed by humans and their architecture mimics quite a lot of our nervous structure. What most people lack in this discussion is an understanding that brains themselves are actually a system of organs; instead we continue to dwell in Blind Brain territory, treating the brain as a single entity.

  29. Charles T Wolverton says:

    Sergio: What is our working and generally agreed definition of “semantic”?

    Implicit in my comment (and in Tom’s comment 17 as well) is that the “semantics” of a context are the associations of syntactic tokens and responsive actions that have been adopted by a relevant community. Ie, my concept is indeed “grounded in action”.

    Of course, this suggests a simplistic SR system (in the sense of VicP’s comment 28). But as he notes, although in the case of human language that concept of “semantics” may have heuristic value, obviously the details of its implementation must be extraordinarily complex.

  30. Sci says:

    @ John Davey –

    “I don’t think that’s relevant. The Chinese Room is more general and I think Searle was right in that he conceded too much and made it too narrow.”

    It does seem a bit odd to me that discussion hasn’t shifted from the Chinese Room to Searle’s Is the Brain a Digital Computer?*, where he says the question is not even wrong.

    But I’ve also found that most lay people respond better to Lanier’s You Can’t Argue with a Zombie**.

    *link -> https://philosophy.as.uky.edu/sites/default/files/Is%20the%20Brain%20a%20Digital%20Computer%20-%20John%20R.%20Searle.pdf

    **link -> http://www.jaronlanier.com/zombie.html

  31. Sergio Graziosi says:

    VicP and Charles,
    yes, that’s what I was thinking, but didn’t want to inject too many of my own biases.
    What I’m aiming at is precisely the heuristic side, with implications on the fuzziness of the “semantic” concept itself (it’s the reason why I’ve implied there is an elephant in the room). If we can’t identify the rules with which semantics operates, we can’t even identify the rules to identify what is semantic and what isn’t (big Blind Brain Theory hint, here). Peter can even get away with the suggestion that there are no rules in semantic operations: our understanding of the world can’t be algorithmic, for many, not only Peter!

    People invent words, and readers/listeners will frequently react appropriately: I could talk of “chinese-roomisms” or “ignorantistic argumentations” and most readers of this thread would be able to make sense of my writing.
    This touches the pattern-matching abilities that we manifestly have and are crucial to Peter’s position, but it also suggests that when we talk about semantics we don’t really know what we’re talking about. Pretty much like consciousness: if we don’t know what it is exactly, explaining it becomes really hard. (You could reverse the argument and say “if you do know what it is exactly, you have explained it already”. Chicken and egg, as usual.)
    Overall, I do like grounding semantics in actions, but I don’t think it gets us out of trouble (shades of Jochen again): as this thread nicely demonstrates, it seems to me that deciding when some syntactic process becomes semantic is entirely arbitrary, is it not? (I suppose Calvin would disagree) If it is, perhaps the whole “semantic” concept needs to be either ditched (not my preference) or heavily redefined (count me in, I’d love to help).

    For the record: VicP, my interest is around the mind-body problem, or how to build a naturalist explanation of mental phenomena. [With “explanation”, I mean at least identifying the criteria to separate systems that have mental phenomena from those that haven’t.] Treating an entire system as a Stimulus-Response system looks far from ideal, because whatever decision making abilities a given system will show will be reproducible by implementing a single gigantic lookup table. Hence, I steer towards functionalism, but one that is interested in the functions therein: in the approach I’ve been developing (by advocating it!), how responses are selected is as important as which responses are selected. Thus, a big monolithic black-box would just fail to meet my own requirements… :-/
    But then I always end up asking myself: does it (my position/hope) even make sense?
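Sergio’s lookup-table worry can be made concrete with a small sketch (the functions and domain here are invented for illustration): over any finite domain of stimuli, a system that computes its responses and one that merely looks them up are behaviourally indistinguishable, which is why he wants *how* responses are selected to count, not just *which*:

```python
# Toy version of the lookup-table worry; functions and domain are invented.

def computed_responder(stimulus: int) -> int:
    """Selects a response by actually processing the stimulus."""
    return (stimulus * stimulus) % 7

# Precompute the behaviourally identical giant table over the finite domain.
DOMAIN = range(100)
LOOKUP_TABLE = {s: computed_responder(s) for s in DOMAIN}

def table_responder(stimulus: int) -> int:
    """Selects exactly the same responses with no processing at all."""
    return LOOKUP_TABLE[stimulus]

# Externally, the two systems cannot be told apart on this domain:
assert all(computed_responder(s) == table_responder(s) for s in DOMAIN)
```

Any externally observed behaviour over a finite domain admits this trivialisation, so a functionalism that cares only about input-output mappings cannot rule the table out.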

  32. VicP says:

    Sergio, “Absolutely!”, as they say – our language itself is full of this flexibility, which is reflected in philosophy. Like the new forms of logic where things can be both True and False, or Not True and Not False. However, it is our human nature to try and wrap things in nice neat packages, which is why I concentrate on the sensorimotor system: the evolution of brains in nature started with movement. Language came later, so they say.

  33. Arnold Trehub says:

    Pure propositional systems, like digital computers, are completely symbolic and can only take other symbols or symbol strings as their referents. Semantics requires referents that are spatial patterns that can be recognized in relation to a locus of spatiotemporal perspectival origin, a self. So, no subjectivity no semantics.

  34. Charles T Wolverton says:

    it seems to me that deciding when some syntactic process becomes semantic is entirely arbitrary

    I wouldn’t say that a “syntactic process becomes semantic”. As I have described my view, semantics is essentially a mapping from syntactic tokens to responsive actions. That definition is indeed arbitrary and quite possibly inadequate, but isn’t it (or some better alternative) an improvement over simply using the word “semantics” undefined?

  35. Arnold Trehub says:

    Charles,

    As you read the syntactic strings of a novel, what are your responsive actions?

  36. Jochen says:

    Sergio:

    Predictably I disagree with the main point: the fact that we don’t have a reasonably agreed idea on how to move from syntax to semantics is not, cannot be and never will be able to guarantee that it can’t be done. As simple as that: you can’t make a conclusive claim based solely on ignorance (lots of previous dialogues on this).

    I think this misrepresents the state of play somewhat. Those (like me) who think that you can’t go from syntax to semantics typically take themselves to have positive arguments that at the very least call this possibility into question, and perhaps even suffice to establish impossibility—it’s not just ‘we don’t know how, so it can’t be done’. I do overestimate my intellectual capacities on occasion, but not so much as to believe that something can’t be done because I can’t think of a way how to do it!

    The most basic such argument really is that if I give you a string of symbols—say, aa9owi5sce—you have no way to move to its (possible) semantic meaning, i.e., there’s nothing that starts out with the pattern of differences encoded in that string, and then outputs some unequivocal link to a given concept, object, or whatever. To somebody capable of interpreting this string, however, it will carry meaning; if I use it to denote “dog”, then being handed this string will have all the same effects as being handed the string ‘dog’ has for somebody who uses ‘dog’ to mean “dog”. But there’s nothing inherent in the string that makes it refer to “dog” rather than “god”, just as there is nothing inherent in the string ‘dog’ that makes it mean “dog”.

    So the argument is basically: if there’s no universal code-breaker—no device that takes a page of coded text and turns it into plain text without knowledge of the code—then there’s no way to move from syntax to semantics. This means that text (or other symbolic representations) doesn’t have an objective, observer-independent meaning—i.e. that given the syntax, you still know nothing about its semantics.
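Jochen’s point lends itself to a toy demonstration (the codebooks here are invented, borrowing his own example string): one and the same syntactic string decodes to different contents under different codes, so nothing in the string itself fixes a referent:

```python
# Toy demonstration: the same syntactic string maps to different contents
# under different codes; both codebooks are invented for illustration.
CODEBOOK_A = {"aa9owi5sce": "dog"}
CODEBOOK_B = {"aa9owi5sce": "god"}

def decode(token: str, codebook: dict) -> str:
    """A code is just a mapping; without one, the token has no referent."""
    return codebook.get(token, "<no referent without a code>")

s = "aa9owi5sce"
print(decode(s, CODEBOOK_A))  # prints "dog"
print(decode(s, CODEBOOK_B))  # prints "god"
print(decode(s, {}))          # the bare string settles nothing
```

The argument is that no inspection of `s` alone could tell you which codebook is the “right” one; that choice has to come from outside the syntax.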

    So if all that went on in our heads were just symbol-processing, just shuffling symbols around based on their syntactic properties, then this wouldn’t fix its semantics in any way; yet, our thought processes do seem to have definite syntactic content. If this is right, then that’s a reductio of the possibility to reduce semantics to syntax (and consequently, of any theory trying to explain mind in terms of computation).

    (I see that my shadow has already made appearances in this thread. I hope it served not to obscure… :-P)

  37. Jochen says:

    Correction: “…our thought processes do seem to have definite syntactic content” was meant to be “…our thought processes do seem to have definite semantic content”.

  38. Charles T Wolverton says:

    Arnold –

    I think of actions as being reified in what I term “context-dependent behavioral dispositions”, essentially sensorimotor neural structures. In which case “actions” are latent until activated by sensory excitation – which may also alter existing structures or create new ones. Reading a novel is a form of sensory excitation.

  39. Tom Clark says:

    Jochen:

    “…our thought processes do seem to have definite semantic content”.

    Agreed, and the question is: what states of affairs, internal and external to the system, make it a fact that they actually *do* have semantic content? One wants uncontroversial examples of semanticity that can be investigated to see what the conditions are for successful reference and for claiming a system understands something. As you say, a computational reduction of semantics to a system’s syntactical operations likely isn’t in the offing, but this isn’t to say that there’s necessarily anything non-algorithmic or non-mechanistic needed for semantics. We just have to look at the system’s relations to its environment and its peers to get mind out of mindless symbol processing. “Just” 🙂

  40. Jochen says:

    We just have to look at the system’s relations to its environment and its peers to get mind out of mindless symbol processing.

    Well, the problem here is that ‘structure’ is itself merely a syntactic notion, and severely—actually maximally—underdetermines content (Newman’s famous problem): all that you ever objectively can derive from knowing the structure of a set of objects—i.e. the relations in which these objects stand to one another—is the cardinality of the set, but nothing at all about the objects themselves. Just my relationships with, reactions to, and influences from the things in the world neither determine what those things are, nor what the content of my intentional states directed at them is; those facts are left completely open, and need an additional choice to fix—which is exactly equivalent to the choice of code in deciding what a string of symbols means.
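The Newman-style underdetermination Jochen invokes can be illustrated with a toy example (the relation and both domains are invented): one and the same abstract relational structure is realised equally well by entirely different objects, so the structure alone does not say what the objects are:

```python
# Toy illustration of Newman-style underdetermination; the relation
# and both domains are invented for this sketch.

# An abstract "structure": a relation over position labels 0, 1, 2.
R = {(0, 1), (1, 2)}

def realise(rel, objects):
    """Instantiate the abstract relation over a concrete list of objects."""
    return {(objects[a], objects[b]) for a, b in rel}

# Entirely different objects realise exactly the same structure:
print(realise(R, ["dog", "god", "cat"]))  # the same pattern over words
print(realise(R, [7, 42, 99]))            # ... and over numbers
```

Knowing only `R`, all you can recover is that there are (at most) three positions; whether they are words, numbers, or anything else is left completely open.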

  41. Tom Clark says:

    “…those facts are left completely open, and need an additional choice to fix—which is exactly equivalent to the choice of code in deciding what a string of symbols means.”

    And presumably the choice of code boils down to the linguistic and other meaning-involving practices of the community or milieu within which the system is embedded. So there’s no god’s eye, view-from-nowhere determinant of content.

  42. VicP says:

    Sergio,

    “Treating an entire system as a Stimulus-Response system looks far from ideal, because whatever decision making abilities a given system will show will be reproducible by implementing a single gigantic lookup table. Hence, I steer towards functionalism, but one that is interested in the functions therein: in the approach I’ve been developing (by advocating it!), how responses are selected is as important as which responses are selected. Thus, a big monolithic black-box would just fail to meet my own requirements… :-/
    But then I always end up asking myself: does it (my position/hope) even make sense?”

    SR is the basic building block, just like a toggling bit in a computer is the basic building block. Each toggle is actually an event or clock tick or piece of time. Just as nature has no clear demarcation between the visual system and motor system, adding a camera or other peripheral to a computer may require loading sw drivers and object code etc. Even reverse-engineering a competitor’s system without the keys to the sw is a daunting if not impossible task. Agreed, you can’t easily reduce it to a lookup table.

  43. Arnold Trehub says:

    Charles: “Reading a novel is a form of sensory excitation.”

    Isn’t your reading a novel much more than that? It would be meaningless without all of the images that are evoked in your brain by the symbol strings/syntax on the pages.

  44. Charles T Wolverton says:

    Hi Jochen –

    Just my relationships with, reactions to, and influences from the things in the world neither determine what those things are, nor what the content of my intentional states directed at them is

    My admittedly primitive understanding of Gibson’s ecological psychology suggests to me that the former are sufficient, at least for practical purposes. What do the latter buy us other than grist for philosophizing?

    which is exactly equivalent to the choice of code in deciding what a string of symbols means

    Are you using “code” here in its comm theory sense? If so, I can’t quite parse this statement, since in that sense the result of “coding” meaningless symbols is just more meaningless symbols. Or perhaps you’re using “code” to refer to what I called a “mapping” in comment 34 (which seems to be how Tom interpreted it per his comment 41).

  45. calvin says:

    Tom,

    “what states of affairs, internal and external to the system, make it a fact that they actually *do* have semantic content?”

    What is semantic content? In this discussion “semantic content” is no different from experiences or contents of awareness. The syntactic information in organisms, such as the acetylcholine used in muscle movement, is not semantic content. The molecules which do everything in the body are not experienced and are not contents of awareness. To define semantic content sufficiently to identify when something becomes semantic requires going down the rabbit hole of phenomenology towards idealism, where we find that our own syntactic information is based on semantic precursors or external axioms (Gödel).

    Syntactic processes of chemistry function in organisms because of their intrinsic nature. In computers these functions occur because of extrinsic rules (“which is exactly equivalent to the choice of code in deciding what a string of symbols means” – Jochen). The extrinsic construction of syntactic behavior is semantic action; it is not syntactic content but semantic content.

    Semantic information is excluded from extrinsically constructed syntactic systems because semantic information (meaning) is irreducible in some fundamental ways, and we cannot reproduce semantic information physically, because semantics is explicitly not physical. Qualia, meanings and ideas can be expressed in physical forms with mark-making or symbols or artifacts, but the semantic content is never in the object, only in the creator or user.

    However, molecules have no extrinsic force causing their reactions. The forces are intrinsic to the atoms/electrons. And I think it’s because of this that intrinsic systems can lead to creating structures which have semantic content. An intrinsic system could, in theory, instantiate structures which are meaningful to itself but which are not syntactic or intrinsic features of that system. If that system can become complicated enough to represent its own semantic facts, then it could move beyond its initial asemic condition to more specific semantic content and interact semantically with other semantic-producing systems.

    Obviously for semantic information to arise in a syntactic system it must be embodied by the syntactic processes. How else could semantic information arise in a physical system? We can’t put semantic content into physical systems because semantic content is not physical. Semantic content must be recognized by the system itself.

  46. Charles T Wolverton says:

    It would be meaningless without all of the images that are evoked in your brain by the symbol strings/syntax on the pages.

    As I suggested earlier, I don’t know how to deal with “meaning”, “semantics”, et al, if those words aren’t clearly defined. In the S-R concept of “meaning” that I assume, actions (possibly latent) consequent to sensory excitation constitute the basis of “meaning”. If an author’s intent in writing a passage is to evoke detailed visual images in the reader, then from my perspective the writing is largely meaningless because I’m not very visual. OTOH, if I think to myself “Hey, that argument sounds exactly right, so in the future I’ll use it!”, then for me it did have meaning.

  47. Charles T Wolverton says:

    Ooops – comment 46 was in response to Arnold (43) and the first sentence was supposed to be a quote.

  48. Stephen says:

    Jochen: in (36) you say “The most basic such argument really is that if I give you a string of symbols—say, aa9owi5sce—you have no way to move to its (possible) semantic meaning”

    This leads me to think we also need to agree on a definition for “syntactical”. My understanding is that syntactical meaning is given by following a set of rules. In language, that’s grammar and spelling. Without the rules, isn’t it just a string of meaningless symbols and not syntactic at all? That is enough to transfer the idea between entities. The receiving entity, if endowed with semantic capability, applies associations and memory of other experiences to provide context and imbue the message with semantic meaning.
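Stephen’s distinction can be sketched in miniature (the toy “grammar” below is invented for illustration): a syntax is a rule deciding which strings are well-formed, and passing the rule says nothing about what, if anything, a string means:

```python
# Toy sketch: syntactic validity as rule-following, with no bearing on
# meaning. The miniature "grammar" below is invented for illustration.
import re

# Rule: a "sentence" is a capitalised word, optional lowercase words,
# ending in a full stop.
WELL_FORMED = re.compile(r"^[A-Z][a-z]*( [a-z]+)*\.$")

def is_syntactic(s: str) -> bool:
    """True iff the string follows the formation rule."""
    return bool(WELL_FORMED.match(s))

print(is_syntactic("Roses are red."))          # True: well-formed
print(is_syntactic("aa9owi5sce"))              # False: not syntactic at all
print(is_syntactic("Colorless ideas sleep."))  # True: well-formed, meaning moot
```

The last example is the interesting one: the rule happily certifies strings whose semantic status is entirely up for grabs, which is Stephen’s point that the rules alone only get the symbols transferred, not understood.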

  49. Jochen says:

    Tom:

    And presumably the choice of code boils down to the linguistic and other meaning-involving practices of the community or milieu within which the system is embedded. So there’s no god’s eye, view-from-nowhere determinant of content.

    But if all you have at hand is just ‘more syntax’, then how does that code choice ever get made? That’s, I think, the core problem. It’s easy if you posit some already-intentional observer, some homunculus that can make use of the calcified contingencies in a linguistic community to craft their idiosyncratic representations, but that obviously doesn’t cut it if what you’re trying to explain is intentionality itself (forgive me if I just gloss over the distinctions between intentionality and semantic meaning for the moment).

    In other words, not only is there no view from nowhere; even the view from now, here, doesn’t get pinned down—the syntactic processes going on in your brain (on a computationalist perspective) simply don’t suffice to pin down their referents.

    Charles:

    My admittedly primitive understanding of Gibson’s ecological psychology suggests to me that the former are sufficient, at least for practical purposes. What do the latter buy us other than grist for philosophizing?

    I’m not sure what exactly you’re referring to; by ‘the latter’, do you mean our intentional content? If so, then in my view it’s not something we add to the description to while away the dull evenings philosophizing around the campfire, but rather, it’s the data we want to have explained: our thoughts seem to have intentional contents. Whether they actually do is, of course, a matter for discussion, but the fact that they seem to is an explanandum (even if the explanation ends up getting rid of them). I confess I’m not familiar with Gibson, though.

    Are you using “code” here in its comm theory sense?

    To me, a code is a mapping between two languages, taking sentences/words/strings from one and returning some from the other (taking plaintext to ciphertext, for example).
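    Concretely, the plaintext-to-ciphertext case can be sketched as a literal mapping; a minimal Caesar-cipher example (the shift value and function names are invented for illustration):

```python
def caesar_encode(plaintext, shift=3):
    """Map strings of one 'language' (plaintext) to another (ciphertext)."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # non-letters pass through unchanged
    return ''.join(out)

def caesar_decode(ciphertext, shift=3):
    # Decoding is just encoding with the inverse shift.
    return caesar_encode(ciphertext, -shift)

caesar_encode('attack')  # 'dwwdfn'
```

    Note that nothing in the mapping itself marks one side as the ‘meaningful’ one; which direction counts as plaintext is a choice external to the symbols.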

    Stephen:

    This leads me to think we also need to agree on a definition for “syntactical”. My understanding is that syntactical meaning is given by following a set of rules. In language, that’s grammar and spelling. Without the rules, isn’t it just a string of meaningless symbols and not syntactic at all?

    Yeah, I guess you’re right, strictly speaking; syntax is the set of rules that decides the form of valid strings, so in a sense, the string’s form is a result of the syntax rather than the syntax itself. So I generally refer to the form of a string when I talk about ‘syntax’ or perhaps better, ‘syntactic properties’—those are after all the properties we have access to, or to which there is objective access, and the properties a Turing machine manipulates.

    So when I say things like ‘computers are syntactic engines’, I really mean that they do what they do because of the form their input takes, which form is dictated by the syntactic rules according to which that input was generated (which however don’t matter too much). (Not sure that’s really managed to clear up anything…)

  50. 50. Jochen says:

    Stephen, sorry, forgot to reply to this bit:

    The receiving entity, if endowed with semantic capability, applies associations and memory of other experiences to provide context and imbue the message with semantic meaning.

    But the question is exactly how to endow an entity with ‘semantic capability’. Merely queuing up associations and recalling memory don’t suffice—that can be done on an entirely syntactic level: the form, or structure, of some input stimulus serves to call up some other set of symbols, which in itself doesn’t add any meaning to either.

    (Think of a finite state machine: you enter symbol A, it returns symbol B, if it has earlier on received symbol B as input; you can interpret this as ‘recalling a memory’, but that doesn’t imbue either symbol A or B with meaning to the machine. The whole evolution of such a machine can be written as a long string of symbols, which, without interpretation by some observer, is about nothing at all; but we’re trying to explain how an observer’s facility to interpret comes about in the first place.)
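    A toy version of such a machine makes the point concrete (a minimal sketch; the class and the symbols are invented for illustration):

```python
class ToyMachine:
    """Emits a previously received symbol on prompt: pure symbol-shuffling
    that an observer may choose to gloss as 'recalling a memory'."""
    def __init__(self):
        self.seen = []

    def step(self, symbol):
        self.seen.append(symbol)
        # On input 'A', return the symbol received just before it, if any.
        if symbol == 'A' and len(self.seen) > 1:
            return self.seen[-2]
        return None

m = ToyMachine()
m.step('B')  # machine receives 'B'; returns None
m.step('A')  # returns 'B'
```

    The whole run is just the symbol sequence B, A, B; whether that amounts to a ‘memory’ of B is entirely in the eye of the interpreting observer.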

  51. 51. Stephen says:

    Jochen: “But the question is exactly how to endow an entity with ‘semantic capability’.”

    That will quickly devolve into defining how to endow an entity with consciousness, which we won’t be resolving here any time soon. I think we have to discuss this within the context of a conscious entity. The question of how a syntactical processor can have semantic capability seems to admit only a rather constrained answer.

    Yes, our typical digital computer is a syntactical processor. I’m not entirely sure a generalized finite state machine couldn’t have semantic abilities, though. Consider two ideas. The first is that neural networks can be modelled on a finite state machine and since we have semantic capabilities perhaps we are just another implementation of a finite state machine, albeit a very complex one. Secondly, a brain isn’t anything like a formal proof. It’s closer to a piece of engineering and uses the “close enough for all practical purposes” concept in abundance. When looking through this filter, we can see how many things can be possible, even if the formal logic describing finite state machines doesn’t seem to allow for it.

  52. 52. VicP says:

    If a finite state machine is constantly receiving and generating symbols, a simple case being numbers, if they are increasing, then it also generates an internal positive slope (or negative if decreasing etc). I don’t see the problem even as I generate these sentences I am internally sounding them or mirroring them intuitively. The idea of a major problem here escapes me.

  53. 53. calvin says:

    Jochen: “But the question is exactly how to endow an entity with ‘semantic capability’…”

    Association as a form of co-occurrence is sufficient to create instances of semantic content. The question is where does the association occur? When a computation produces a result, we as human beings make the association of value to the syntactical output. However, if a computer system can make continuous associations about its own syntactical processes, inputs, and outputs, it would be indistinguishable from a conscious organism.

    Associations are not syntactic. Associations are made by programmers and coded as data and computations. It may look like a program (or finite state automaton) is making an association, but it was the programmer who (intentionally or not) made the association. Programs simply follow a procedure. Associations are semantic data.

    Association making is a representational problem that follows a standard form which is not syntactic. If some object x is associated with some object y: x as y. That association is then valid for x wherever y exists: if there is an x, it is the same x even if it is in the form of y. If this exists functionally, but not as an extrinsic rule, then we naturally see it as an actual association and not as a coded function.

    Syntactical processes cannot make associations because associations are arbitrary and syntactical operations are not arbitrary (but rule driven or random). For association making to occur in a computer system requires multiple separate syntactical processes that interact with each other arbitrarily. These interactions must co-occur in such a way that the interactions of the syntactic processes are neither syntactic nor random. To achieve representation of these associations, the interactions must be captured both as structures and as a complex of underlying syntactic processes (from different syntactic systems) which continuously interact.

    We think of stimulus-response as associative, but from the molecule’s point of view, where all the action happens, there is no such thing as stimulus or response. When we try to code stimulus-response we code a rule set, or simulate the functionality syntactically, and get neither stimulus nor response; we just get functions meaninglessly processing inputs. To get actual stimulus and response in a computer system requires multiple interacting syntactical systems (thousands of FSAs) whose interacting artifacts produce associative structures and what we call associative processes. But the actual processes are purely syntactical, following an intrinsic rule set of their own. The interacting finite state automata produce persistent side-effects that are not coded for.

    We see a weak example of this in cellular automata, with gliders in Conway’s Game of Life. Gliders are artifacts of syntactic processing and multiple syntactic interactions, but to actually represent things such as gliders requires a second layer of syntactic processing which functions on its own and only happens to be responsive to the cellular automaton’s glider conditions – but that response is not codified in the syntactic rule set. So when a glider happens to occur, this second system will, because of its intrinsic behavior, respond to the glider. Here is the catch: there is no such thing as a glider. Gliders are “recognized” as things; gliders become semantic content as an ephemeral condition of the various interacting syntactic processes.

    Obviously this second process could respond to all kinds of other conditions as if a glider exists. But that is exactly what we would expect of semantic content. That is what illusions are. The problem with semantics is not trying to figure out how to model the true facts or true conditions. It’s trying to produce a system that has illusions and errors, and can recognize an error.
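    The glider point can be seen directly in code. Below is a standard Game of Life step function with the familiar five-cell glider; after four generations the rules have reproduced the same shape shifted diagonally by one cell, yet the rules themselves mention only births and deaths of individual cells, never gliders (a minimal sketch):

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; `cells` is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
# g is now the original glider translated by (1, 1): 'the glider moved',
# but only under an observer's description; the rules never saw a glider.
```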

  54. 54. Arnold Trehub says:

    Charles,

    Semantics has natural features, whereas symbols/syntax do not. If a text refers to a glorious sunset, you can give some kind of description of natural features that fit the text. This requires that some kind of image is evoked in your brain with the proper features.

  55. 55. calvin says:

    Vicp: “I don’t see the problem even as I generate these sentences I am internally sounding them or mirroring them intuitively.”

    But what you are doing is semantic. Your brain is doing nothing with sentences. The sentences and words and letters are semantic content. The syntactic content of your brain is molecules. The rule set is the intrinsic nature of the electromagnetism of the atoms (and possibly some quantum effects). Individual photoreceptor cells responding to light from the screen you are reading do not process letters. The retina, the optic nerve, the visual cortex, the rest of the brain – none of it processes letters. The neurons are interacting with other neurons at a molecular level.

    For a modern computer, the problem is that the ASCII characters and processor functions are not sentences either. Nor are they words, nor even letters.

    The letter “A” in ASCII is “01000001”. The letter “a” in ASCII is “01100001”. We associate those two letters as the same thing. But a computer does not. It requires an explicit function like this:

    if binary == '01000001' or binary == '01100001':
        semantic_A = 'A'
        semantic_A_lowercase = 'a'
        semantic_A_uppercase = 'A'

    And something not unlike this happens as you use the shift key on the keyboard, selecting lower and upper case values from an address in memory. But your computer never has a concept of letters, or numbers, or A, B, or C.

    Increasing or decreasing numbers in a finite state machine don’t mean anything to the computer system. They might mean something to you or to me, but not to the computer. It’s meaning that is the problem.

    Just ask the question: Where are letters in the brain? And the answer is, they are not there. Because the brain is a physical thing and a letter is a conceptual, non-physical thing. Similarly there are no numbers in the brain. There are no numbers in a computer system. Finite automata do not compute numbers, they compute states. We associate those states to numbers, and because of the configuration of those machines, we can do calculations with those states _AS IF_ those states were numbers.

    We are so embedded in semantic experience, and our language is so enmeshed, it’s very hard to see how big the divide is from our semantic world to the facts of syntactic processing.

  56. 56. VicPanzica says:

    Calvin, Yes the computer is just millions of timing states and we ‘talk’ to it semantically via higher computer languages which translate to machine language. True physics occurs at the molecular level but we can think of neurons as biological molecules, since they are repeatable structures, that interact to form qualia. Think of physical particles as having very short event horizons or times. Neurons are much larger and slower structures. When they combine in networks form even slower structures and emergent qualia. As you read these words you may be hearing the silent sounds of their qualia in your head.

  57. 57. calvin says:

    Vicp: “When they combine in networks form even slower structures and emergent qualia.”

    I’m not at all sure what you mean by emergent, or why speed matters. Numbers, or mathematics, or all other concepts are non-physical, so I don’t know how they can emerge from physical phenomena. What is the emergent process?

    We use the term ‘translate’ when we compile higher-level languages to machine code, but there is no actual translation. One set of codes is generated by instructions from another set of codes. But the meaning of a piece of code does not exist either at the higher-language level or at the machine-code level. A book, for instance, has no intrinsic meaning. It only has meaning to readers and authors. The same applies to source code. The mechanism that converts source to machine instructions is syntactic, and the machine instructions are syntactic. There is no transfer of meaning.

    Part of the problem is we think and talk about meaning being embedded in things, and that simply isn’t the case. The world of ideas, which is not physical, and the universe of physical phenomena are separate things. Ideas exist. For instance pain exists, zero exists, blue exists, true and false exist but these are not physical phenomena. If there were no people, mathematics would still exist. Just as there are concepts we haven’t discovered.

    The issue isn’t how to get qualia or meaning to arise from syntactic operations but how to get syntactic processes to embody or instantiate qualia and ideas. To get semantics, meaning, or qualia to occur in a physical, syntactical process requires the syntactic process to begin making representations of its own. A syntactic process must represent some other syntactic phenomenon with its own syntactic elements. But this representational phenomenon cannot be coded for. We may say it must emerge, but only in the sense that features emerge from evolution, not in the more mystical sense of phenomena emerging magically from nothing.

    Syntactic processes cannot directly produce semantic content. If they could, this would already be a solved problem. To produce semantics, syntactic processes must engage in representation making as a side-effect of their function. As representation-making processes get more complicated, the side-effects of making representations can also become more complex – if and only if the side-effects of syntactic processing can embody more and more complex representations.

    Neurons are a side-effect of the syntactic activity of molecules. The neuron, as a neuron, has no powers or effects. Cells are a semantic product of the right collection of molecules under the right conditions. A cell is an idea; it’s not a physical thing – the molecules are the physical things. The cell itself is the semantic content produced by the underlying syntactic interaction of molecules. For a computer to become conscious, it must be composed in a similar way, where the syntactic components (code/data) form larger structures (‘cells’) as a side-effect of their interactions.

  58. 58. Michael Murden says:

    When scientists hypothesize the existence of things they can’t observe they often are trying to explain anomalies in things they can observe. For example astronomers hypothesized dark matter to explain stars rotating as if under the influence of more gravity than could be provided by things they could see. What anomalous observations were scientists (or philosophers) trying to explain by hypothesizing the existence of semantics? I’d guess philosophers were trying to explain the way words could seem to be about things. If that is not the case, what does the hypothesized existence of semantics purport to explain? If semantics is intended to explain how language can seem to be about things, I have to ask three questions. Does language really have the ability to be about things? If it does, does that aboutness require semantics as an explanation? Given that nobody seems able to explain with any precision what semantics is, does semantics actually explain aboutness, and if so how?

    Maybe there is no aboutness relationship between language and the world. If no such relationship exists then we don’t need to hypothesize the existence of semantics to explain it. Maybe syntax alone suffices to explain the efficacy of language. If natural language is just a mechanism to allow human beings to interact and cooperate with each other does natural language need to do or be anything more than the languages that allow switches, routers, servers, hosts and communication facilities to be the internet?

  59. 59. VicPanzica says:

    Calvin, A neuron or cell may be an idea, but it is also a structure that contains all types of molecules. Some of those molecules perform the same cellular functions as any cell in your body. However, what happens when neural networks perform learning, when the ‘magic’ occurs, is what forms the semantics. The magic which a magician creates is magical because he causes the unexpected. By physical we mean all of the forces of nature, so H2O becoming liquid between 0 and 100 degrees C is magical because of the dipole effect. What you mean by semantics, I call eventfulness, or the slower processes that occur in biology. Consider that it only takes a single motor neuron from your brain to move a muscle in your leg or move a human body within this physical space but dense networks of neurons to represent this same physical space.

  61. 61. calvin says:

    Michael: “Given that nobody seems able to explain with any precision what semantics is, does semantics actually explain aboutness, and if so how?”

    Semantics is the study of meaning. So semantic content is content that is meaningful. Aboutness is meaningful. It is probably a fundamental meaning. When we say one thing is about another thing we are making an association, and it follows the same representational process as any association: some content as some other content (x as y). What the values of that content are is not necessarily what the aboutness refers to. For instance, 01000001 is the binary ASCII value for the letter A, but it is also how much money I wish I had in the bank.

    What 01000001 is about is a representational feature not stored in syntactic systems. We can’t construct syntactical systems which store aboutness, and when we try, what we actually construct are syntactic systems that store symbols of the aboutness. The symbol values are both semantic marks which have a meaning and functional elements of a syntactic system. But the meaning or associations the mark has are not stored in the syntactic system.

    Syntactic information is always part of a procedure. Semantic content is some mark or signal which has meaning, and it is the meaning or aboutness itself which makes a mark semantic. Semantic content only makes sense if the meaning is embodied in the syntactic process. Stigmergy is the best concept I have found to describe how semantic content and syntactic processes interact.

    We can read each other’s words because we share the same semantic content, not because we have the same syntactic processes.

  62. 62. calvin says:

    VicP: “Consider that it only takes a single motor neuron from your brain to move a muscle in your leg or move a human body within this physical space but dense networks of neurons to represent this same physical space.”

    A single motor neuron does not move a muscle in your leg. What is gained by attributing semantic effects to a complex molecular process? “Muscle” movement is a vast constellation of molecular interactions: https://en.wikipedia.org/wiki/Muscle_contraction#Sliding_filament_theory
    It simply isn’t necessary to make reference to semantic content at all to understand the causes of “muscle contraction” and thus leg movement.

    The actual syntactic processes are millions of times more complicated and specialized than the semantic way we talk about them. I would argue that the semantic content is also much more complicated than the language can account for.

    I agree dense networks of neurons may be representing physical space with their network configuration and molecular processes, but the neurons themselves are a representation, an effect and not a cause. Cells are a semantic effect of the underlying, meaningless, syntactic process of chemistry. To understand how semantics can occur in syntactic process (such as complex molecular structures like organisms, or computer systems) requires separating causes from effects and requires separating the meaningless syntactic processes from our semantic attributions. A syntactic system must self-attribute semantic content.

    What does it even mean for something to be “unexpected” in a syntactic system? “Unexpected” is an “aboutness” meaning. It’s impossible for a syntactic process to produce “unexpected” because it’s semantic content and not a syntactic product. “Unexpected” requires a pre-existing set of semantic facts to already exist.

    I’m not saying semantic phenomena do not exist; I’m actually suggesting the opposite, that all semantic phenomena, including magic and unicorns and Ancient Tandem Psychic War Elephants, exist [ https://www.youtube.com/watch?v=lIoWeZM3XCg ]. But they do not necessarily have physically associated instances. Just as the sun does not move through the sky, there is no “freezing” force which makes H2O turn into ice (again, it’s the intrinsic molecular processes that occur – water molecules lose energy, which changes how the oxygen atoms’ electron orbits interact with those of other H2O molecules). Ice is a side-effect of underlying molecular processes. Representations are side-effects of the molecular behavior of so-called “neurons”.

    For non-meaningful processes such as molecular interactions or FSAs to instantiate semantics requires very specific representational processes to occur which, like ice, are side-effects of the underlying syntactic phenomena. These initial representations are only barely meaningful. They are often a representation that is merely about some syntactic process, with nothing more than that. These initial representations are asemic, below the level of expected or unexpected. “Expected” and “unexpected” are more complex meanings that can only exist atop a more primitive set of semantic content.

    I guess the point I’m trying to make is that the situation for generating a machine which has semantic content is much more complicated and particular than we like to admit. And this is obviously true for organisms like ourselves, but it’s also true for machine sentience.

  63. 63. Callan S. says:

    A syntactic system must self-attribute semantic content.

    That, or the processing power of the neurons operates only at a certain resolution – there are finer grains of processing possible that the matrices of neurons simply do not engage. The problem is, the neurons are smaller than the resolution they process at! Thus the neuron matrices, when it comes to trying to process themselves, suddenly hit what seems to them to be the floor. When really it’s just the limit of their processing resolution – the real floor is much, much further down. But the false floor seems as deep as it gets…the last turtle…so it seems that floor is a real thing and is being generated by the neuron matrices. The floor, the apparent ground of the world, gets called semantics. It’s ‘self attributing’ because the processes involved in neurons are too small for neurons to process, so the floor/semantics seems to pop out of nowhere and then adhere to stuff.

  64. 64. Jochen says:

    Stephen:

    The first is that neural networks can be modelled on a finite state machine and since we have semantic capabilities perhaps we are just another implementation of a finite state machine, albeit a very complex one.

    But from there, it doesn’t follow that our brain is nothing but a finite state machine, even if it may be true that it can be modeled as one. The modeling relation is not one of equivalence: in modeling something, you take the structural relationships of a system, and implement them using a different system—i.e. you create a syntactically equivalent one. You can model the relationship ‘is a direct paternal ancestor of’ with books of varying thickness: a thicker book is the ‘father’ of a thinner one. From this model, you can read off anything that you can read off the direct line of ancestors; but certainly, that doesn’t mean that books are paternal ancestors.
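    The books example can be made concrete (a minimal sketch; the names and thicknesses are invented):

```python
# The relation 'is the direct paternal ancestor of'...
fathers = {'Tom': 'Dick', 'Dick': 'Harry'}  # Tom's father is Dick, etc.

# ...modelled by books of varying thickness: thicker = 'father' of thinner.
thickness_mm = {'Tom': 12, 'Dick': 25, 'Harry': 40}

def modeled_father(person):
    """Read the ancestry relation off the books: the thinnest book that is
    still thicker than this person's book plays the 'father' role."""
    mine = thickness_mm[person]
    thicker = [p for p, t in thickness_mm.items() if t > mine]
    return min(thicker, key=lambda p: thickness_mm[p]) if thicker else None

# The structural relationships survive the transition to the model...
assert all(modeled_father(child) == father for child, father in fathers.items())
# ...but no book has thereby become anybody's father.
```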

    So it may be that what imbues brains with their semantic capacities is exactly what gets lost in brain-models using FSAs (what Peter sometimes refers to as ‘haecceity’). The computational paradigm is based ultimately on the faith that there’s nothing to things but their structural relationships, but there’s no compelling reason (that I can see) why this should be true. There could well be irreducible, intrinsic properties to things (like brains) that simply don’t make the transition from the real thing to the model.

    Moreover, in order to fill a computational model with content, it needs to be interpreted: to even consider that some FSA models a brain, you need a brain interpreting that FSA as a model of a brain. Otherwise, the FSA could be a model of virtually anything (subject only to restrictions of cardinality). You and I might take the same FSA to model radically different things, without there being any objective answer as to who of us is right (and indeed, that fact is basically the reason why computers are so useful—every piece of software can be run on any (sufficiently powerful) computer whatsoever, without any need to change anything about the physical implementation—only the interpretation has to be changed).

    VicP:

    If a finite state machine is constantly receiving and generating symbols, a simple case being numbers, if they are increasing, then it also generates an internal positive slope (or negative if decreasing etc).

    Well, but the machine knows nothing about that fact. In order to have it do so, we would need to add some representation of the fact that its inputs have been numbers in increasing order—but that would just be adding another syntactic layer, which gets us nowhere with respect to semantics. You might interpret that machine to be representing numbers of increasing order; I could use it to represent my paternal ancestry tree, or the order of my bookshelf according to thickness. The interpretation is arbitrary, and relies on an observer with intentional mental content, which is not arbitrary (and hence, which can’t work according to the same principles—not just, at least).
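    The arbitrariness is easy to exhibit: one and the same transition table supports both readings, and nothing in the machine selects between them (a minimal sketch; the state names and interpretations are invented):

```python
# A single three-state machine cycling on input 'tick'.
transitions = {('s0', 'tick'): 's1', ('s1', 'tick'): 's2', ('s2', 'tick'): 's0'}

def run(state, inputs):
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

final = run('s0', ['tick', 'tick'])  # the machine merely lands in state 's2'

# Two incompatible interpretations, each supplied wholly by the observer:
as_counter = {'s0': 0, 's1': 1, 's2': 2}                  # counting modulo 3
as_books = {'s0': 'thin', 's1': 'medium', 's2': 'thick'}  # bookshelf by thickness

print(as_counter[final], as_books[final])  # the table itself picks neither
```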

    calvin:

    Syntactical processes cannot make associations because associations are arbitrary and syntactical operations are not arbitrary (but rule driven or random). For association making to occur in a computer system requires multiple separate syntactical processes that interact with each other arbitrarily.

    I’m not sure how this helps. A collection of finite state machines is again a finite state machine, no matter whether the machines are coupled according to fixed rules, to varying rules, or to random rules (in which case, it will be a non-deterministic FSA).

    There’s a general trend that people readily grasp how sufficiently simple computational systems are purely syntactic entities, but then propose to just couple together enough of them, heap enough complexity onto the problem so that it’s no longer readily obvious what actually happens, and then somehow, semantics is supposed to emerge. But this doesn’t do any work; it’s just more of the same, and not as transparent as the simple examples that can be readily analyzed.

    For example, it’s clear to most people that some string of coded text doesn’t admit of an unambiguous translation, but somehow, if it’s correlated or associated with enough other bits of coded text, or maybe if it’s animated by some machines making alterations on the text, etc., then those same people suddenly think that maybe a semantic level somehow emerges. But one can always just describe this as one larger string of coded text, or the evolution of some appropriate Turing machine, which has no idea at all what the symbols on its tape actually refer to, what they actually mean. An addition of quantity here does not bring about a change in quality, and that’s ultimately what’s needed. At the bottom of every computational system where somebody proclaims to find some meaning, if one digs deep enough, one eventually finds that meaning firmly rooted in an interpreting observer’s mind.

  65. 65. VicPanzica says:

    Thanks gents, all very interesting and sensible comments.
    The Donkey Kong analogy is not quite as bad as waving chicken entrails or tin foil hats, but neuroscience studies secondary effects: tokens, i.e. blood flow in brain areas, etc.
    The brain is a complex system of sub-organs, subsystems, integral systems, sub-functions, heuristics etc. Leibniz’s famous thought experiment was to consider the mechanism of a mill. Well, consider a mill with the gears made from styrofoam or jello. Such a mill could not grind wheat, because the gears as mere shapes do not really transform the power of the river or stream; the forces of nature inside metal gears do the real work, but it’s only the gears we see.

    A magic trick is not magic to another magician who observes it. He may be mildly tricked if it’s not one he has seen before but he can usually figure it out because that’s his expertise vs the layman’s.

    As far as semantic meaning goes, suppose I was planning a trip to the park today and wake up to see it is raining? Suppose another person is a farmer and it has not rained for weeks and he wakes up to see it is raining? Different brains have different intermediary layers of conditioning and rules for interpretation, so the meaning of the ones and zeroes is different? Maybe the farmer really is upset as well because he hates farming and has been buying wheat futures, betting on a terrible wheat crop so he can get rich? You can’t always see what is beneath the surface, so it seems like magic sometimes.

  66. 66. Charles T Wolverton says:

    Arnold @54: If a text refers to a glorious sunset, you can give some kind of description of natural features that fit the text. This requires that some kind of image is evoked in your brain with the proper features.

    I find this statement surprising. My impression is that you and I had agreed some time back that it’s likely that in processing visual sensory stimulation, verbal responses either precede or are concurrent with the production of mental imagery. In which case, wouldn’t it be likely that a verbal response consequent to reading about a sunset would also be produced directly from the text (as extracted from the visual sensory stimulation) rather than indirectly via a mental image, the latter procedure seemingly reminiscent of the Cartesian Theater scenario?

  67. 67. Michael Murden says:

    Calvin:
    “Semantics is the study of meaning.” As you point out with your binary example, syntactic tokens do not have meanings in and of themselves. Your definition of semantics suggests that one of the tasks of semantics is to understand the process whereby syntactic tokens acquire or are assigned meanings. Consider the word tulip. When you hear or read that word, does an image of a particular sort of flower form in your mind’s eye? If so, is that image the semantic meaning of the word tulip, or is it just another syntactic token? If the image is just another syntactic token we still have to ask ourselves how to get from that syntactic token to a meaning. If the image is a meaning for the word tulip, then the question would seem to be how the human brain associates specific images with specific words. That question strikes me as more neurological than philosophical. To put it more simply, if semantics is the study of meaning, what is meaning? Your definition of semantics seems to me to imply that you have a theory of meaning. What is your understanding of meaning?

  68. VicPanzica says:

    Jochen, I see your argument, but FSAs, like people, are designed to be embedded in environments, so raining vs. not raining has meaning with respect to the internal and external environments. Reading about a beautiful sunset has meaning because we have the external environment, or use the ability to recall the external environment, in us.

    In your example above, if it is counting apples via an apple detector and we keep removing apples one at a time, it reaches zero and stops counting. Zero? Zero what? Zero apples? Zero oranges? Zero anything? Since there is nothing there, zero is always just zero unless we knew what was previously there. Context and memory are the key. The syntax problem is classical tail chasing.
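    A minimal sketch of such a counter (names invented for illustration) makes the point concrete: the machine reaches zero, but nothing in its state records zero *what*.

```python
# A toy counter driven by a hypothetical "apple detector": it decrements
# on each 'removed' event and stops at zero. Nothing in the machine's
# state records WHAT was counted -- that context lives outside it.
def run_counter(start, events):
    count = start
    for event in events:
        if event == "removed" and count > 0:
            count -= 1
    return count

final = run_counter(3, ["removed", "removed", "removed"])
# final == 0, but "zero" here is just a state label: zero apples,
# zero oranges, zero anything -- the machine cannot tell the difference.
```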

  69. Stephen says:

    Jochen: re 64 “even if it may be true…”, “so it may be that…”, “there could well be…”

    It seems clear that neither of us actually knows whether a brain model (or, perhaps better expressed, an artificial brain or alternate brain implementation) would be able to develop semantic meaning. We each have our predispositions as to what seems likely. It looks like someone will have to build one, and if it doesn’t have the capability, try to discern whether we are just missing some critical processes or whether there is something innate about our brain that prevents us from developing the same capacity on an alternative platform.

    Sorry, but I don’t quite follow your point about requiring a brain to interpret the model in order to fill it with content. Anything we do seems to require that.

  70. calvin says:

    Michael: “if semantics is the study of meaning, what is meaning?”

    To answer this question requires understanding what experience itself is, which is really just asking what awareness is and how awareness functions. Awareness is a fundamental kind of problem that has to be understood to even get near “consciousness” (which has persistent qualities). Awareness looks like a function which consistently behaves in the same way. Firstly, awareness is always awareness of something: awareness always has content. I illustrate this as awareness:x, where x is some content. Secondly, awareness of that content is indistinguishable from the content itself, so awareness of a tulip is the tulip. I illustrate this with a function: awareness:x = x (or, shortened, aw:x = x).

    This functional nature of awareness is counter-intuitive, but it’s always the case. Apparent counter-examples to this identity function of awareness turn out to be multiple contents of awareness, with multiple identity relationships. So “tulip” the word is the same as the awareness of “tulip” the word; my image of a tulip = my awareness of that image of a tulip; an actual tulip that I see = my awareness of an actual tulip that I see.

    aw:”tulip” = “tulip”
    aw: tulip_image = tulip_image
    aw: tulip_flower = tulip_flower

    By being rigorous about what the contents of awareness are, it becomes clear that whenever the contents of awareness change, the content itself changes; and whenever the contents change, the awareness of the contents changes. Awareness (or experience, if you prefer) is very fleeting and attaches only functionally to its contents. Awareness itself is a different kind of thing than its contents, but it always forms an identity relationship with its content.

    This identity function is the only way to account for experiences like errors, illusions, dreams, memories and the forgetting of memories, and also to account for the physical and sensational phenomena we observe and how our perception of those phenomena changes over time. This function is testable. Anyone can test it out, but when you try you must be rigorous about what the contents of awareness are. When you try to determine whether some content of awareness differs from the actual content, what happens is that the contents of awareness (and thus the contents themselves) proliferate.

    Which brings us to the next obvious problem. We are aware of contents being related to each other, and actual contents are related to each other. We observe differences and similarities of objects, and the facts of those differences and similarities are contents of awareness too. This association of contents is representation. The contents of awareness do have relationships to each other, and those relationships always work the same way: some content, like the word “tulip”, represents some other content, like an actual_tulip. This can be expressed as: tulip_word as actual_tulip, or tulip_word in the form of actual_tulip. Symbolically I use the semicolon: tulip_word ; actual_tulip.

    An actual tulip has features like bulb, pistil, stamen, petals, stem and leaves, and those are associated to tulips, so that (bulb, pistil, stamen, petals, stem, leaves) ; tulip. A tulip that sprouts from a bulb is the same tulip in both states, so tulip_bulb ; tulip and flowering_tulip ; tulip are further associations we can make. But surprisingly, representation also forms an identity, so that one side of the representation is identical to itself.

    tulip = tulip as particular_flowering_plant
    particular_flowering_plant = particular_flowering_plant as tulip.
    tulip = tulip as painting_of_particular_flower
    painting_of_particular_flower = painting_of_particular_flower as tulip

    Which is how we talk about it: while pointing at a flower we say “that is a tulip”. The tulip concept is the physical phenomenon we see. But it is also, simultaneously, not identical to the phenomenon. If I say, “look at that tulip in the painting”, I mean a tulip just as much as if I pointed to a tulip in a garden. However:

    tulip != particular_flowering_plant
    particular_flowering_plant != paintings_of_particular_flower
    paintings_of_particular_flower != particular_flowering_plant

    One is a concept, one is a biological organism nothing like the tulip concept, and the third is colorful marks. But tulip is the same tulip as particular_flowering_plant, and tulip is the same tulip in the form of paintings_of_particular_flower.

    We use this representation function to understand phenomena that change. For instance, we associate a billiard ball moving across a table as being the same billiard ball, even though the “billiard ball” is a component of a visual field of changing RGB information. You associate your body as being the same body, even though all the molecules have been changed out over time.

    Representations always work according to this same function: x = x;y [x as y]. What this gives us is a way to structure the contents of awareness, and thus the contents themselves, so that aw:x;y = x, because x = x;y and aw:x = x. (This actually goes both ways.) There are obviously many more nuances. But one important and critical feature is that representation making, where a representation is asserted or exists, is completely arbitrary. This arbitrariness is the only way to account for features such as pretending, imagination and improvisation. Another feature is that different experiences of awareness and representation are constrained by their inter-relationships, so that our experience of physical phenomena is a set of particular representations we represent as physical phenomena.

    So, meaning is any kind of representation. And because of the function of awareness, all representations exist (we just are not aware of all of them). Syntactic systems do not make representations, where some x is the same x when it is in the form of y. But it is that association making which is what meaning is. Reproducing this function of representation with a computer system is the only way I’ve found to begin creating meaning-producing systems.
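    A toy sketch of this representation-making, assuming nothing beyond the “;” notation above (the class and all names are invented for illustration): associations are arbitrary links between contents, added and rewired at will rather than fixed by a rule set, and contradictory links are allowed.

```python
# A toy "representation web": each content can stand for arbitrarily
# many other contents, following calvin's "x ; y" notation.
class RepWeb:
    def __init__(self):
        self.links = {}            # content -> set of contents it stands for

    def represent(self, x, y):     # assert "x ; y" (x in the form of y)
        self.links.setdefault(x, set()).add(y)

    def meanings(self, x):
        return self.links.get(x, set())

web = RepWeb()
web.represent("tulip_word", "actual_tulip")
web.represent("tulip_word", "tulip_image")
# Associations are arbitrary -- nothing blocks opposite links:
web.represent("bad_word", "positive")
web.represent("bad_word", "negative")
```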

    Sorry if this is confusing…but it IS confusing. We simply don’t have good language to work with these problems very well. You can click my name to see my Tucson poster which may or may not help to explain what I’ve tried to summarize here.

  71. calvin says:

    Jochen: “then propose to just couple together enough of them, heap enough complexity onto the problem so that it’s no longer readily obvious what actually happens, and then somehow, semantics is supposed to emerge. But this doesn’t do any work; it’s just more of the same, and not as transparent as the simple examples that can be readily analyzed.”

    I am not suggesting this at all. Emergence is like saying “and then something magical happens.” Emergence as a theory is magical thinking. Syntactic processing cannot produce semantic content in the syntactic system. It’s impossible.

    However, if there were a syntactic process (an FSM) whose interactions produced structures that were neither coded for nor read by the syntactic processes, those structures would be side-effects of the syntactic processes. What else would you call those structures or phenomena? (For instance, the display on a screen is a side-effect of the graphics processing code that renders this text.)

    If a separate syntactic process (FSM) responded to the side-effects of the first process, and produced side-effects of its own, what would you say these second-order side-effects are in relation to the first FSM? Both systems are syntactic, but their inputs and outputs are orthogonal. And those two systems would not be a finite state machine. A collection of finite state machines is not necessarily a finite state machine (that depends mostly on whether they share alphabets and rules).

    In the cell, proteins (and other molecules) act like self-modifying finite state machines. For instance, a protein will bind to a molecule (effectively becoming a new molecule), fold into a new shape, release some part of the original bound molecule, fold into a different shape, release the other part of the original bound molecule, then fold again into the protein’s original shape. We could treat the protein as an FSM which takes a particular input string and produces two output strings from the original string, except that the protein has no alphabet. Nor does the protein have a rule which describes how it binds, or how it folds, because what produces this behavior is partly intrinsic to the protein molecule, but also intrinsic to the other molecule it binds to. The two molecules together are necessary to produce what looks like an FSM function. All the proteins and other molecules interact in a way that builds things like membranes and regulatory structures that instantiate a cell under the right conditions. The cell is a side-effect of this complex of interacting processes.
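    That protein cycle can be written down as a literal state machine, though the states and event names below are invented for illustration; the point of the passage is precisely that the real protein has no such alphabet or rule table.

```python
# The bind/fold/release/release/refold cycle as an explicit FSM.
# States and event names are illustrative, not biochemistry.
protein_cycle = {
    ("free", "bind_substrate"): "bound",
    ("bound", "fold"): "folded",
    ("folded", "release_part_A"): "half_released",
    ("half_released", "release_part_B"): "released",
    ("released", "refold"): "free",
}

def run(state, events):
    for e in events:
        state = protein_cycle[(state, e)]
    return state

# One full cycle returns the protein to its original shape:
run("free", ["bind_substrate", "fold", "release_part_A",
             "release_part_B", "refold"])   # -> "free"
```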

    What class of thing is a cell? Is it a syntactic thing or a semantic thing?

    How would it ever be possible to do this kind of intrinsic functionality with computation, with neither alphabets nor rule sets? To get around the alphabet/rule-set problem, I am using automata which intrinsically encode their own rule set and alphabet values from a set of global rules and a global alphabet managed by an interpreter. The “tape” of these FSMs is a random read from a local (or higher-level) space (a bit like a directory structure), and the output to tape goes into the local space or its super-space. Essentially, the tape is only as long as the alphabet values it reads or writes to the FSM. By themselves, they are pretty useless finite state machines.

    There are a few other requirements for these FSMs and for the interpreter. Bytes must be conserved: no rule can create or destroy bytes. For instance, the value 123 can be changed to ABC, or ABC can be split into two strings A and BC, or a string can be moved up or down spaces; but a function like 123/123 is not allowed, because it converts seven bytes to one byte (123/123 -> 1).
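    The conservation requirement can be sketched as a simple check (a hypothetical helper, not the actual interpreter): a rewrite is legal only if its outputs contain exactly as many bytes as its inputs.

```python
# Byte conservation: total length in equals total length out.
def conserves_bytes(inputs, outputs):
    return sum(len(s) for s in inputs) == sum(len(s) for s in outputs)

conserves_bytes(["123"], ["ABC"])        # legal: 3 bytes -> 3 bytes
conserves_bytes(["ABC"], ["A", "BC"])    # legal: a split keeps all bytes
conserves_bytes(["123/123"], ["1"])      # illegal: 7 bytes -> 1 byte
```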

    Another requirement is that all the FSMs can interact with each other (in their accessible spaces). So FSMs can rewrite other FSMs, breaking them so they don’t work (making them simple data), or turning ones that don’t work into working FSMs. They really are useless, and all they do is produce side-effects.

    But I can put the right set of them in a “soup” and they can create partitions and regulate which strings can move across these partitions (form a membrane). With the right set of FSMs and a long string of restricted values, I can replicate that string, and transcribe the string into finite state machines. (I just copied the RNA processes.)

    But does this mean anything to me? No, it doesn’t, and that is the point. The hope at this stage is to produce, from the collection of FSMs, what I see as a cell. The FSMs wouldn’t see a cell at all. A cell would be a side-effect of the processing of the right FSMs: a side-effect where the group of FSMs neither halts nor, as a collection, loops, but instead functions in a steady state as long as the FSMs receive the right data inputs to keep them all running (and creating and destroying each other too). The goal is to achieve homeostasis.

    Assuming this is successful, is the cell a finite state machine? No, it’s not, because the inputs to the cell are not inputs to the cell at all, but inputs to particular FSMs on the cell’s surface and in the interior of the cell. Those inputs are only syntactically relevant to the FSMs that can process them, because of the alphabets each FSM uses. The cell becomes a thing, a semantic thing, because of the artifact of homeostasis of the right inputs and the right collection of FSMs in the right configuration. Stop the inputs, break the configuration, or change the FSMs, and the cell will lose homeostasis, producing halts or loops for FSMs “in the cell”.

    Could I prove a cell would continue to function under the right conditions? No, because I’m constrained by incompleteness, and I’m especially constrained by loop conditions. Loop conditions are intrinsically semantic (a bit like halts) and are outside the bounds of what can be proved about FSMs or Turing machines.

    Above the level of cells, I would hope to find a way for cell clusters to interact and produce representational behavior based on individual cell conditions. But again, underlying this are the purely mechanistic FSMs. The cell isn’t a mechanistic thing: cell conditions are semantic statements we would apply to particular artifacts of particular FSM interactions. Groups of cells that behave as if their cells have needs (acting on a level above the data inputs of strings and FSMs) would be a good indicator that this cell approach is the right one to form cell structures like nervous systems, and to engage in even higher-order representations.

    Uri Alon’s work on systems biology is a great way to think about how FSMs in the form of proteins can interact, and be interacted with, to produce a variety of structures and effects which are side-effects of underlying syntactic (chemistry) processes. Systems biology shows how biological features (which are semantic) get selected for through evolution, where the actual selection process occurs on the underlying syntactic elements (introns/exons) of DNA and other molecular structures.

    https://www.amazon.com/Introduction-Systems-Biology-Mathematical-Computational/dp/1584886420

  72. Jochen says:

    Stephen:

    It seems clear that neither of us actually knows whether a brain model (or perhaps better expressed as an artificial brain or alternate brain implementation) would be able to develop semantic meaning.

    Personally, I find the argument that syntax doesn’t suffice for semantics very persuasive; but at every point, I’m also aware of the fact that I may be wrong, hence the careful wording. But it’d need some very strong counterargument to make belief in the possibility of brain models having a semantic dimension reasonable, and so far I haven’t seen one.

    Basically, the whole thing comes down to the fact that a model is simply a description, and a description is not the thing it describes. Like a description, a model has to be interpreted as being about the thing it models—that’s where we need a brain in order to even make the model of a brain a model of a brain: without such interpretation, all there is is a set of symbols being shunted around, which could be equally well interpreted as being about anything—a computation of the detailed topography of my navel just as much as a model of a brain.

    This parallels the fact that a description has to be understood in order to describe something—without understanding, it’s just a set of symbols; with a different understanding, using a different code, it might be about something completely different.

    So claiming that a brain model could have semantics is like saying a description of a digestive tract could eat—it’s a category error, an instance of mistaking the map for the territory.

    calvin:

    However, if there were a syntactic process (an FSM) whose interactions produced structures that were neither coded for nor read by the syntactic processes, those would be side-effects of the syntactic processes. What else would you call those structures or phenomena? (For instance, the display on a screen is a side-effect of the graphics processing code that renders this text.)

    Well, to me, the text displayed on the screen is simply a (complicated) flag indicating the state the FSA (sorry, writing FSM always reminds me of the Flying Spaghetti Monster, I hope it doesn’t cause any confusion); so a second FSA would simply be reacting to that state, taking it as input.

    And those two systems would not be a finite state machine. A collection of finite state machines is not necessarily a finite state machine. (That depends mostly on whether they share alphabets and rules.)

    The way you’ve described it, a description in terms of a single FSA seems very natural, though: you just concatenate the alphabets and rules, except at those points where they interface, i.e. the ‘side effects’ of the first, which the second one is sensitive to. That FSA would show exactly the same behavior as the collection of two FSAs. (And even if the alphabets and rulesets are wholly disjoint, you can always concatenate the two to generate a new FSA, no?)
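    A sketch of that construction, with illustrative transition tables: the combined machine’s states are pairs, and the second machine reads the first machine’s state (its ‘side effect’) as input.

```python
# Product construction: two coupled FSAs described as one FSA whose
# states are pairs (s1, s2). FSA 2 takes FSA 1's new state as its input.
def product_step(state, symbol, delta1, delta2):
    s1, s2 = state
    new_s1 = delta1[(s1, symbol)]
    new_s2 = delta2[(s2, new_s1)]   # FSA 2 reacts to FSA 1's 'side effect'
    return (new_s1, new_s2)

# Illustrative transition tables (not from the discussion):
delta1 = {("a", 0): "a", ("a", 1): "b", ("b", 0): "a", ("b", 1): "b"}
delta2 = {("x", "a"): "x", ("x", "b"): "y", ("y", "a"): "x", ("y", "b"): "y"}

state = ("a", "x")
for sym in [1, 1, 0]:
    state = product_step(state, sym, delta1, delta2)
# The pair machine behaves exactly as the two machines run side by side.
```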

    Those inputs are only syntactically relevant to the FSMs that can process them because of the alphabets each FSM uses. The cell becomes a thing, a semantic thing, because of the artifact of homeostasis of the right inputs and the right collection of FSM in the right configuration.

    I’m afraid you’ll have to elaborate here. How does the cell suddenly become a semantic thing? How does that jump from syntactic processes occur? And what exactly do you mean by the cell being a semantic thing—what has meaning to it? What are your criteria for something being ‘a semantic thing’, and how does the cell suddenly satisfy them, when individual FSA don’t?

    Loop conditions are intrinsically semantic (a bit like halts) and are outside the bounds of what can be proved about FSM or Turing machines.

    That’s true for Turing machines, but not for FSAs, as their halting problem is always solvable (and you can also always check if you enter a loop). And in what sense are loop conditions ‘intrinsically semantic’? It’s true that, for TMs at least, no syntactic process can check whether a TM loops, but is that enough for calling it semantic?
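    A sketch of why this is so: on a fixed repeating input, the configuration (state, position in input) must repeat within |states| × |input| steps by pigeonhole, so a loop is always detected in finite time.

```python
# Loop detection for an FSA fed a cyclically repeated input: track the
# (state, input-position) pairs; a repeat is guaranteed by pigeonhole.
def detect_loop(delta, start, cycle_input):
    seen = set()
    state, i = start, 0
    while (state, i) not in seen:
        seen.add((state, i))
        state = delta[(state, cycle_input[i])]
        i = (i + 1) % len(cycle_input)
    return (state, i)   # first repeated configuration

# Illustrative two-state machine that oscillates forever on input 0:
delta = {("a", 0): "b", ("b", 0): "a"}
detect_loop(delta, "a", [0])   # terminates, having found the a -> b -> a loop
```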

    I know that Gödel’s first incompleteness theorem, or rather, its proof, has been described as ‘a way to wrest semantics from syntax’, or something like that, but I’ve never actually understood what that’s supposed to mean (I would, however, very much like it to be true, so if you can explain it to me, I’d be most grateful).

  73. Arnold Trehub says:

    Charles: “… wouldn’t it be likely that a verbal response consequent to reading about a sunset would also be produced directly from the text (as extracted from the visual sensory stimulation) rather than indirectly via a mental image, the latter procedure seemingly reminiscent of the Cartesian Theater scenario?”

    The mental image is what gives meaning to the character string that you read. These images are preconscious and would not necessarily be content in a “Cartesian Theater”. The point is that without the imagistic referents the text would be meaningless. See Linking the Semantic Network to the Real World, pp. 112–114 in “Building a Semantic Network” on my ResearchGate page.

  74. Stephen says:

    Jochen: re 72

    Yes, that’s why I suggested an artificial brain or an alternate brain implementation as an alternative to a model. A model of a gut on a computer doesn’t actually digest food; you would need some sort of artificial intestine with a place to input food and a mechanism to change the input into some other molecular structure to do that. Since, in my opinion, a brain is a type of neural computer, it is completely reasonable that it could be recreated in some form on another platform. That is, there is no need to process actual molecules to transform them into something else. There is nothing we know of for sure which would not allow this; there is only the presumption that there is something special in the brain that might not be recreatable.

  75. calvin says:

    Jochen: “Well, to me, the text displayed on the screen is simply a (complicated) flag indicating the state the FSA…”

    No, it’s not. You don’t see flags from the graphics processing engine. The pixels on the screen are letters, words and pictures to you. You wrote that message in what you see as a text box. The graphics rendering code simply rendered a string of bits to output without thinking of text boxes; the code would render Chinese, or dingbats, or this Arial font in the same way. You can’t know the state of the FSA which produces the text unless you can see, or wrote, the FSA which produces the display output, and even then you need to see the input data to think of the rendered output as indicating states of the FSA. The rendering on the screen has meaning to you: as letters, words, sentences, concepts.

    “sorry, writing FSM always reminds me of the Flying Spaghetti Monster, I hope it doesn’t cause any confusion”

    Ha! Thanks, I kept having to convert my FSAs to FSMs! So how do you get to the meaning of FSM being the Flying Spaghetti Monster when at some level it’s just syntactic information? What rule exists that makes FSM mean its Noodly Greatness vs. some other meaning? I would say there is no rule. Your molecules have made neurons, which form connections, allowing a set of otherwise impossible molecular interactions to occur; and that process and structure represents FSM as the Flying Spaghetti Monster. It is an act of representation: not a syntactic process but a representational process.

    Consider how human beings learn to read and write. They do not start with a set of symbols, then process a rule set which “associates” those symbols to a set of marks we call letters, and apply another rule set to “associate” those symbols to sounds. That is what Google is trying to do. What we do is start with marks and start with sounds, without a rule set. We actually (not procedurally) associate marks with sounds. We associate these marks with more sounds and with images, experiences, things in our environment. This is a representational process. It is not a syntactic process at all.

    The proof of this is that people can learn the wrong thing, because they make the wrong associations, the wrong representations. This is how we get language drift and word changes. It’s how we can use words to make drawings (as we see in Arabic), or how the same sound can have completely opposite meanings. If I say, “Jochen, your questions are bad!”, do I mean you asked awesome questions, or do I mean they are pathetic? Obvs the first one, but how do you know? Knowing simply can’t be determined from content. It can’t be determined syntactically.

    But what’s important about this fact is that it happens. People can make representations, and they can make opposite representations. To get to machines which understand meaning, the machines have to be able to make their own representations, because it’s the representation making which produces meaning. We have to be able to show how representations work, in some kind of functional way, to know when a system is engaging in representation.

    Use of the concept “as” or “in the form of” is sufficient as a means to model this function of representation. I use “;” to indicate “as” or “in the form of”, to create symbolic representations of representations.

    “bad” ; positive (word “bad” as a positive value)
    “bad” ; negative

    “bad” = “bad” ; positive (the word “bad” is the same as the word “bad” representing or in the form of a positive value)
    “bad” = “bad” ; negative

    negative = negative ; “bad” (a negative value is the same as the negative value in the form of the word “bad”)
    positive = positive ; “bad”

    positive != negative (the above situations are obviously true, even though positive and negative cannot be the same thing.)

    On the one hand, representation is a function that forms identities. And this is how we get things like meaning. Some content is associated to some other content. x ; y

    x;y is a kind of rule; to make x;y is to make a rule. You see this rule making all the time in pretend games. “Simon Says” is a great example of this representational “rule” making, where the rules are constantly being changed by what Simon says. A computer cannot play Simon Says, precisely because a computer would never get confused by Simon Says; and if we could “confuse” the computer, the computer would never recognize its error. A computer would follow its syntactic rules and keep “being confused”, and we could always “fool” it, because a syntactic system can only follow its rules.

    Human beings are not so constrained: they can make errors, and they can make errors intentionally. Syntactic systems do not have errors in them. If a syntactic system does have an error, it’s we who recognize the error, not the syntactic system, because the error is a representational phenomenon, not a syntactic phenomenon. Modeling representation itself gives us a way to model errors.

    Errors and all other kinds of representational content exist in a kind of web of representational connections. We add new content to this web all the time (and only lose content through forgetting). If something is mistaken in our associative web, we do not lose it; we change the associations, we change the representational connections. The only way to lose what a representation is, is to forget. Brain damage and memory loss are losses of representational structures (whatever those underlying structures may be).

    I could keep arguing this point of representation, but the basic issue is that representations, which are the contents of experience and awareness, are real things. Representations may be syntactic or physical or imagined. And they can be varied, contradictory, novel, errors, illusions, etc. We need to model representation generally to get to the next step of asking how representations occur in nature, and how they could occur in a computer system.

    So, when I say semantic content, I mean a representation. A syntactic process such as an FSA does not make a representation; it follows a procedure based on inputs and outputs. We speak about FSAs as doing representations, but that is us imputing our representational activity onto the procedural activity of the FSA. If we ask where the representation is that an FSA performs, we would say it’s the rule set. But it’s not: that is itself a procedural element, not a representation. If I mstype a word, you know what the misstyped word means without having a table to account for mssytping. It might take you longer to _MAKE_ the association, but you do not need a pre-existing rule. You make up representations as you go along.

    You make:

    “mstype” ; mistype
    “misstyped” ; mistyped
    “msstyping” ; mistyping

    and you have already created:

    “mistype” ; mistype_concept
    “mistyped” ; mistype_concept
    “mistyping” ; mistype_concept

    and misstypist ; something completely different.
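    The on-the-fly association can be approximated mechanically, e.g. by string similarity; but note this is a syntactic stand-in, which on the argument above imputes rather than makes the representation (difflib here simply substitutes for whatever associative process a reader actually uses).

```python
import difflib

# Known words stand in for a reader's existing associative web:
known = ["mistype", "mistyped", "mistyping", "typist"]

def associate(token):
    # Map a misspelled token to the closest known word by similarity.
    matches = difflib.get_close_matches(token, known, n=1, cutoff=0.6)
    return matches[0] if matches else None

associate("mstype")     # -> "mistype"
associate("msstyping")  # -> "mistyping"
```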

    “I’m afraid you’ll have to elaborate here. How does the cell suddenly become a semantic thing? How does that jump from syntactic processes occur? And what exactly do you mean by the cell being a semantic thing—what has meaning to it? What are your criteria for something being ‘a semantic thing’, and how does the cell suddenly satisfy them, when individual FSA don’t?”

    Representation making goes all the way down. It’s representing turtles all the way down. As human beings we simply cannot get out of the field of representation making. But the converse is also true. It’s syntactic (molecular) processes all the way up. There is nothing else but molecules doing their molecular things.

    We live in a particle universe. All of the forces of nature are particle forces. It’s all intrinsic; there are no extrinsic functions or laws which “force” events to happen. It’s all particles doing particle things on up. When we describe an object, like a person running down the street, it is not a person running: it’s a bunch of molecules interacting. None of the molecules are running. There is no such physical phenomenon as running. Running, person, street: these are all conceptual, representational phenomena. We represent the molecules in a certain configuration as a person; we represent the molecules interacting in a certain way as running. We arbitrarily separate the molecules around the ‘person’ molecules and call those the ‘street’ molecules. It’s all very arbitrary and purely representational.

    From our perspective, it’s only representations that exist. Even if we talk about molecules, they are representations too! But from the molecules’ perspective, there is nothing but molecules. There are no larger structures than molecules. There are particles, atoms, molecules; that is it. There are no other physical phenomena. Collections of molecules are not any different, in a physical sense, from any other collection. A dead cell or a dead body is identical to a live cell or a live body from the molecules’ point of view. It’s the same molecules!

    But obviously there are cells, and people, and mountains, and planets, and ideas, and concepts, and feelings, and on and on. Representations can be made for the different collections and kinds of molecules and particles. And the molecules collect into forms _AS_ representational content we recognize. Discoveries, of course, are when there is some unrepresented collection of molecules that we then recognize and organize into some kind of coherent representational structure. Science is all about making representations consistent with the phenomena we can experience that are not representations (the physical world).

    So, how do the representational things and the non-representational things come together? How do the physical things and the non-physical things interact? How do we get bi-directional causation? How can an idea produce particular chemical reactions that cause you to run down the street? And how does a particular collection of light-dependent reactions in opsin molecules (seeing an accident) produce an idea of danger, which then causes you to run down the street?

    There must be a place in the representational web of content where a representation and non-representational things co-occur (co-occurrence is the essence of representation making). And there must be a point where some molecules exist in such a way that they instantiate a representation. If both these conditions are true, then at that point, if we twiddle the representation, we twiddle the molecules and their interrelated molecules; and if we twiddle the molecules, we twiddle the representation and its connected representational structure.

    That point is the cell. And to understand why requires thinking about asemic content and stigmergy.

    The word asemic means: having no specific semantic content. This is the bottom of the representational turtles, where there is “something” but it doesn’t mean anything. If I say “something is about to happen” and you say “What?” and I say “I don’t know!”, that is an asemic-like statement. It means something, but the meaning is… proto-semantic? This is how all representation making starts: an association indicating “something”.

    Stigmergy is a mechanism of indirect coordination, through the environment, between agents or actions. It was first conceptualized with termites. The essence is that termites leave chemical marks, to which other termites respond (in various ways depending on the termite’s state), and their responses indirectly lead to coordinated action. For instance, termite nests have very clear requirements, especially related to cooling, which must be met for a nest to survive – but there is no rule set for nest building. Termite nests are a semantic outcome of the syntactic behavior of termites. Another way to think of stigmergy is as the co-occurrence of non-representational phenomena that produce a representational result.
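The termite picture can be sketched in a few lines of code (purely my own toy model, not a serious simulation): each agent follows a local, syntactic rule – deposit a mark, climb toward stronger nearby marks – and yet the marks concentrate into a persistent structure that no agent has a rule for.

```python
import random

def run_stigmergy(steps=500, size=20, n_agents=10, seed=3):
    """Toy stigmergy: agents coordinate only through marks left
    in a shared environment; there is no blueprint of the result."""
    random.seed(seed)
    field = [0.0] * size                      # the shared environment
    agents = [random.randrange(size) for _ in range(n_agents)]
    for _ in range(steps):
        for i, pos in enumerate(agents):
            field[pos] += 1.0                 # leave a chemical mark
            left, right = (pos - 1) % size, (pos + 1) % size
            # local rule: move toward the stronger neighboring mark
            if field[left] > field[right]:
                agents[i] = left
            elif field[right] > field[left]:
                agents[i] = right
            else:
                agents[i] = random.choice((left, right))
        field = [v * 0.9 for v in field]      # marks evaporate
    return field

field = run_stigmergy()
```

Running this, the deposits end up concentrated at a few sites – a coherent “mound” emerging from uncoordinated local responses to the environment, which is the stigmergic point at issue.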

    The cell is a stigmergic outcome of the non-representational coordination of molecules and their molecular interactions. But what is a cell? It’s really nothing. It doesn’t cause the phenomena of the molecules in any way. A cell is just the molecules doing things; the “cell” is irrelevant to the FSA nature of molecules. However, a cell is asemic. It is a proto-semantic thing. A bunch of cells can “work together” (not really, it’s the molecules doing “the work”) to create a form, or a structure, or a network. And if these forms and process features can persist and co-occur, we get embodiment of representation.

    Structures and co-occurrence instantiate representations. Initial representations will not “mean” anything. They will all be asemic. However, as structures and co-occurrence features become more complex, more complex representations can be created which “mean” something to the “organism”. But if you looked at this from the molecules or from a FSA point of view, all of the structures, features, co-occurrent phenomena are entirely uncoordinated.

    To produce sentient machines, we just need to recapitulate stigmergy with FSA to achieve asemic representations, and then build up from there.

    Now, under those very specific conditions, do the FSA automata generate semantic content? No they do not. You are correct:

    “That (single) FSA would show exactly the same behavior as the collection of two FSAs. (And even if the alphabets and rulesets are wholly disjoint, you can always concatenate the two to generate a new FSA, no?)”

    But this is only true if the point of the two FSA is to generate what I interpret as a semantic result from the syntactic behavior of the two FSA. In that case, we can recapitulate the two FSA into one FSA to produce the same outcome. I completely agree with this.
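The collapse Jochen describes can be made concrete with the standard product construction (a sketch; the two toy machines are my own examples): two FSAs run “in parallel” are extensionally just one bigger FSA whose states are pairs.

```python
# Two tiny deterministic FSAs over the same alphabet {'a', 'b'}:
# fsa1 tracks the parity of 'a's, fsa2 tracks whether input ends in 'b'.
fsa1 = {('even', 'a'): 'odd', ('odd', 'a'): 'even',
        ('even', 'b'): 'even', ('odd', 'b'): 'odd'}
fsa2 = {('no', 'a'): 'no', ('no', 'b'): 'yes',
        ('yes', 'a'): 'no', ('yes', 'b'): 'yes'}

def run(fsa, start, s):
    """Feed string s through a transition table, return final state."""
    state = start
    for ch in s:
        state = fsa[(state, ch)]
    return state

# Product FSA: states are pairs, transitions act componentwise.
prod = {((s1, s2), ch): (fsa1[(s1, ch)], fsa2[(s2, ch)])
        for s1 in ('even', 'odd') for s2 in ('no', 'yes')
        for ch in 'ab'}

# The single product machine reproduces the joint behavior exactly.
for s in ['', 'ab', 'aab', 'ba', 'abab']:
    assert run(prod, ('even', 'no'), s) == (run(fsa1, 'even', s),
                                            run(fsa2, 'no', s))
```

Nothing new appears in the product machine; that is exactly why, as far as behavior goes, “two FSAs” is never more than “one FSA”.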

    But what I am attempting to do is create a representational result that is asemic – that is, representational but with no specific meaning. It’s this absence of specificity that matters.

    Can FSA be used to produce stigmergic behavior and stigmergic forms and features? And can those forms and features then be used as the basis for asemic representation? I think yes.

  76. Charles T Wolverton says:

    Arnold: The mental image is what gives meaning to the character string that you read.

    As best I can tell from the cited pages of your paper, for you the referent of “mental image” is an active subset of an array of cells that correspond to the array of visual sensors in an eye. If so, it appears we’re using the term “mental image” to refer to different things. I understand that term to refer to a phenomenal experience – a “picture in the mind”, if you will. Even then I’m not sure why one would call that an “image” since I think of an image as something available for public viewing.

    Saying that a mental image gives “meaning” to a text string implies some concept of that word. If I read the definition of an abstract mathematical space and can then say something coherent about such a space, the definition must in some sense have had “meaning” for me. But what is the “mental image” of such a space?

  77. Arnold Trehub says:

    Charles,

    A mathematical “space” is a metaphor to promote understanding of the symbolic structure. What do you imagine a space to be like?

  78. Jochen says:

    calvin:

    No it’s not. You don’t see flags from the graphics processing engine. The pixels on the screen are letters, words, and pictures to you. You wrote that message in what you see as a text-box.

    Yes, because I already have my semantics down; but, from the position of viewing the computer as an FSA, its state is simply the detailed physical configuration of ones and zeros in various memories, caches, etc., at any given point in time, which is (or may be) accompanied by a particular set of pixels lighting up. Now, this isn’t a one-to-one relationship, obviously, but it’s fundamentally no different from a green lamp lighting up when the FSA is in state 1, 3, or 753. It’s just a complicated lamp.

    We actually (not procedurally) associate marks with sounds. We associate these marks with more sounds and with images, experiences, things in our environment. This is a representational process. It is not a syntactic process at all.

    What do you mean by ‘actually associating’ here? How does this process work? What makes it non-syntactic?

    The proof of this is that people can learn the wrong thing, because they make the wrong associations, the wrong representations.

    Well, neural networks can also learn wrong things, and every neural network has an algorithmic/Turing machine equivalent, so there’s nothing semantic about this that I can see…

    If I say, “Jochen, your questions are bad!” do I mean you asked awesome questions, or do I mean they are pathetic? Obvs the first one, but how do you know? The meaning simply can’t be determined from the content. It can’t be determined syntactically.

    One could write down a simple syntactic rule coming to the correct conclusion in this case, though; its input wouldn’t just be the words, but also some additional conjectural facts of the environment—say, the words above, plus setting the ‘doesn’t_want_to_insult’ flag to 1, would cause an output of ‘Thanks for the compliment!’, while if ‘doesn’t_want_to_insult’ were 0, the output might be ‘Why, you little…!’.
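Spelled out as code (my own toy; the flag name is Jochen’s hypothetical ‘doesn’t_want_to_insult’ bit), the rule is only a few purely syntactic lines:

```python
def respond(words, doesnt_want_to_insult):
    """A purely syntactic response rule: the words plus one contextual bit."""
    if "bad" in words:
        if doesnt_want_to_insult:
            return "Thanks for the compliment!"
        return "Why, you little...!"
    return "OK."

print(respond("Jochen, your questions are bad!", doesnt_want_to_insult=1))
# -> Thanks for the compliment!
```

The hard part, of course, is what sets the bit – which is the question the next paragraph takes up.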

    As to what sets that bit—well, for many cases, a heuristic procedure to determine it from prior data does not seem too far-fetched. People are much more predictable than they think they are—take, for instance, this oracle that predicts what key you will press next, even if you try your best to make the key presses random—its efficiency, after a bit of training, is typically way in excess of 50%. But that doesn’t mean it has any semantic insight into whatever thought process you use to come up with the next keystroke!
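The linked oracle uses a more sophisticated scheme, but the flavor can be shown with a bare-bones bigram predictor (my own toy, not the actual oracle): on a “trying-to-be-random” key sequence with habitual over-alternation, it beats chance handily while having no insight whatsoever into the typist’s thoughts.

```python
from collections import defaultdict

def hit_rate(sequence):
    """Online bigram predictor: guess the key most often seen after
    the previous key so far, score the guess, then update the counts."""
    counts = defaultdict(lambda: defaultdict(int))
    prev, hits, total = None, 0, 0
    for key in sequence:
        if prev is not None:
            if counts[prev]:                  # seen this context before?
                guess = max(counts[prev], key=counts[prev].get)
                hits += (guess == key)
                total += 1
            counts[prev][key] += 1
        prev = key
    return hits / total

# A human 'randomly' mashing two keys tends to over-alternate:
seq = ('fdfdfd' + 'fddfdf') * 50
rate = hit_rate(seq)    # well above the 50% chance level
```

A purely statistical procedure, yet “way in excess of 50%” – which is the point: predictive success needs no semantics.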

    As for the rest of the time, where heuristics aren’t available? We get it wrong. We misunderstand one another all the time; indeed, misunderstandings are probably more common than perfect communication. Heck, if that weren’t the case, half of Shakespeare’s plays’ plots would have been resolved half-way through the first act!

    But what’s important about this fact is that it happens. People can make representations, and they can make opposite representations.

    Lots of people would dispute that, on the basis that a representation always requires somebody who uses something as a representation, and thus, if we had representations ‘in the head’ somewhere, we also would need some homunculus to use these representations—but since using representations is again something that needs representations, that homunculus needs its own homunculus, and so on, leading directly into vicious regress.

    A computer cannot play Simon Says precisely because a computer would never get confused by Simon Says, and if we could “confuse” the computer, the computer would never recognize its error.

    I think this is a too narrow view of what computers can and can’t do. No computer in contact with the real world is describable in terms of a fixed rule set; information learned from the environment can act to modify the rules. Proposals for general AI are self-modifying in some way: neural nets modify the connection strengths, things like Hutter’s AIXI or Schmidhuber’s Gödel machines directly rewrite their own code.

    So I don’t see why a computer could never get confused, then learn a new rule, and discover the error behind its confusion.

    For instance, there are no rules on how to build a termite nest, but termite nests have very clear requirements, especially related to cooling, which must be met for a nest to survive – but there is no rule set for nest building. Termite nests are a semantic outcome from the syntactic behavior of termites.

    I’m not sure how to understand this. In what sense is the termite nest semantic? Does it have something to do with the absence of (explicit) rules? I mean, of course there are rules of termite behavior; and the termite nest is an outcome of these rules. In theory, at least, one can write down a huge ruleset governing the behavior of all the termites in the colony, which will then tell you exactly how the nest will look—you could, for instance, simulate it on a computer.

    It’s no different from how one can’t glean the explicit rules of, say, face detection in pictures from inspecting some neural net; but nevertheless, in principle, one could translate the operation of the neural net into an algorithm following explicit rules that shows the same performance on face-recognition tasks. In some sense, you seem to be saying that the former would be semantic (the termite nest) while the latter wouldn’t be; but they’re equivalent.

    However, a cell is asemic. It is a proto-semantic thing.

    […]

    Structures and co-occurrence instantiate representations. Initial representations will not “mean” anything. They will all be asemic. However, as structures and co-occurrence features become more complex, more complex representations can be created which “mean” something to the “organism”.

    And again, in what sense is a cell something ‘proto-semantic’? Sorry if I’m being just thick, but I don’t see it. To me, you’re enunciating the problems very well, but then you get to sentences like the above, and I’m just like, woah, wait a second there! It seems like you skip over the most crucial part of the problem. Certainly, if a cell were something proto-semantic, if ‘stigmergic’ products had some semantic content, if ‘structures and co-occurrences’ yielded representation, then I can see how one might get off the ground with some semantic engine based on these notions; but what I don’t see is why I should believe these things.

    And then you follow it up with the old appeal to complexity—at first, things don’t mean anything, but then you heap more things on the things, and poof, meaning!

    Can FSA be used to produce stigmergic behavior and stigmergic forms and features? And can those forms and features then be used as the basis for asemic representation? I think yes.

    Well, I don’t know. The thing I would like to see is some argument ending in ‘…and hence, x means y to z!’. So far, I haven’t been able to glean one from your writing (which may, of course, be nothing but a statement of my own limitations).

  79. calvin says:

    Jochen:

    “What do you mean by ‘actually associating’ here? How does this process work? What makes it non-syntactic?”

    (How do you do the nice embedded comments? I tried to address your critiques here, but I generally think they all stem from a difference of perspective, which I hope to illuminate.)

    Do you think what human beings are doing is syntactic? I would say that what humans do is explicitly not-syntactic. We develop associations – representations – and then we create further syntactic representations to describe those associations in consistent ways.

    I am making the claim that experience is what we have, and it is representational and not based on syntax. Errors are a classic example of that. Syntactic processes cannot allow for an error. What we call an error in a FSA is not an error, but merely another state. The oracle is really cool, but could the oracle be written to lose? To get it wrong? Of course, just add a line of code which switches its response to the opposite value. But would the oracle be making an actual error in that case? No, it would simply be following its programming, producing the “right” answer, which is really the “wrong” answer. “Winning” isn’t what the oracle is doing; it’s only following a procedure. (We agree on that, I think.)

    You clearly understand something about writing code, so try to write a program that makes errors. Then try writing one that can recognize it made an error. It’s impossible. It’s like trying to conceive of a mathematical function that produces an error. A mathematical function is a process that describes a set of values. An error would be outside that set of values; hence capturing some error would require a different function describing a different set – but that would not be an error-producing function either, because it could only produce the results it describes.

    Errors are just like bugs. Bugs are not bugs to the syntactic process. The famous Pentium bug was a bug to us, but the Pentium processor had no way of representing or knowing that its FDIV tables had missing elements. Regardless of the complexity of a syntactic system or FSA, that process cannot do something it was not designed or programmed to do – it can only produce procedural outcomes. An error in this context is, by definition, a non-procedural outcome. Neural nets produce all kinds of weirdness because they are purely procedural: http://cs.nyu.edu/~zaremba/docs/understanding.pdf (Like you, I would say that these results demonstrate that data sets for neural nets should be thought of as a programming refinement process and not as “learning”.)

    Human beings screw up their procedural activity all the time. They use the wrong inputs, they mix up two procedures, they guess. They know what a procedure is supposed to do and do something different, sometimes intentionally. While humans can do procedures, procedural activity is a subset of the kind of activity human beings do – which is making associations, making representations. For human beings, procedural activity is not a first order activity but a second order activity. We have to learn procedures; we are not programmed and then “know” representations. We know and make representations, and then we make procedures to do or perform that representation making “automatically”.

    I think you and I actually agree that FSA and other kinds of physical processes are syntactic and cannot produce semantic content. What I am saying is that there is lots of content, even related to syntactic concepts, that simply cannot be accounted for with any kind of syntactic approach. (Gödel proved this syntactically for axioms.)

    When we learn a syntactic process such as addition, we can do that syntactic processing. But we follow the rules because they are “semantic” and mean something, not because we have a rule set and an alphabet that programmed us to process arithmetic inputs.

    If you are in fact a syntactic processor, where is any given rule? Where is an alphabet? Where are the states? Where is the tape?

    Representation is not a product of syntactic processing. Syntactic processing is a subset of representational activity. Semantics is not a product of syntax; it also is a subset of representation. Sometimes we can make (produce a representation of) semantic content with a syntactic process. But we can’t do that for all semantic content, and we certainly cannot do it for all representational content. What would be the syntactic process for turning 460nm light into blue? What is the semantic meaning of blue? Blue is an experience, a sensory representation of 460nm light. It is proto-semantic and, by all accounts, not syntactic. “It was so blue in my dreams” is not about 460nm light. If I say I am blue, I am making an association to a color. I am not making an association to 460nm light. Where is the rule that assigns these associations? There isn’t one. We spontaneously create associations.

    I am treating representations and the contents of awareness as first order objects. They exist. Full stop. I feel like you may be treating semantic content, qualia, etc. as somehow second order phenomena of an underlying non-representational, physical universe. I don’t think it’s possible to make that argument without accepting that representations exist, just not physically. To make the argument requires treating representations as first order features of the universe in which the argument is made. For instance, arguing that the physical universe gives rise to representations from syntactic-like processes requires a representation of syntactic processes to pre-exist. We live in a universe that has representational things (ideas, concepts, qualia, feelings, logic, errors, etc.) and non-representational things (the molecules). We live in a universe made up of physical phenomena and non-physical phenomena. (Is that claim controversial?)

    The process of making a representation is just associating one thing as another thing: x as y. It can be either arbitrary or not arbitrary. Pretend games, like when kids play at being their favorite animal, or improvisation, are examples of very arbitrary representation making. Fantasizing, storytelling, and hallucinating are all activities that are more arbitrary and even non-procedural.

    Yes, for us, this occurs by using our brains. But I am not arguing for a homunculus mind in our brains. The idea of a mind as the “source” or “cause” of thoughts is ridiculous, and Nietzsche pointed out Descartes’ error long ago. (The “mind” is an idea; it’s a content, not a cause.) I agree there is no homunculus.

    That said, representations do not arise from non-representational things, just as non-representational (physical) things do not arise from representations. All the possible representations exist; they just don’t exist physically, or they just haven’t been thought of (with brains). Physical phenomena exist, and representational “phenomena” also exist. The issue is: HOW do both kinds of phenomena come to exist together in organisms? How do representations and representation making occur in us as masses of molecules? And how could a syntactic process like a computer system also INSTANTIATE representations?

    We do not cause representations to emerge with the molecular processes that go on inside our skulls (our brains). We instantiate representations that already exist. We embody representations. That is why brain damage causes the loss of representations, but brain damage doesn’t extinguish ideas. That is the only way to explain how we can forget, but also how we can discover, and remember.

    Forgetting, discovering, remembering are all representational processes that occur. A computer system cannot forget, or discover, or remember. It just does not make sense to talk about physical processes in that way. We don’t even use those words when talking about computer memory, because that isn’t what the computer is doing. We forget and we remember and we discover, so those are things that happen even if how they happen is up for debate.

    An example: the sun revolves around the earth. Now, what really happens is that the earth rotates, making it appear that the sun revolves around the earth. That representation of the sun revolving around the earth is a real thing, but it does not precisely correlate, in this instance, to the actual physical phenomena – to the thing referred to by the word “sun” revolving around the earth. All the other attendant representations are also wrong. There are no such physical phenomena as sunsets, sunrises, night, day, etc. Those are representational phenomena which we experience. When I see the sunset in Grand Theft Auto, what is happening? It’s not even a sun at all but a polygon set with changing textures. That doesn’t matter. Whether it’s video games or rotating planets, sunsets are sunsets, and that is the essence of experience. Experience is representational. Awareness is representational. Representation is a different class of objects and functions in our universe.

    FSAs do not make representations; they follow procedures. So the question is how can we get procedural systems to begin performing representation making functions? And then could we get those functions to make and embody – to instantiate – representations which exist? Could it instantiate itself as a thing? In the physical world, the cell is a thing, but all the procedures occur at the molecular level. Even so, the cells are things. And different cells are different things. And different cells do different things.

    As soon as we reach the cellular level and above we begin to talk about cells in representational ways and it’s mostly sensible. But it actually makes no sense to talk about the molecular activity of cells in representational ways, because what the molecules are doing is explicitly not representational but something that looks procedural.

    A cell does not cause any of the procedural behavior of its molecules. In the physical world, everything happens because of molecular changes and nothing more. But we recognize arms, and eyes, and see colors, and patterns, and feel pain, etc. The cell is at the boundary point where we can begin seeing representational phenomena, even though it’s all molecular phenomena all the time in the physical world.

    Some group of molecules, all behaving according to their own intrinsic “procedures”, without coordination (or rules or a model), form a persistent and coherent structure we call a cell. Those molecules, which change all the time, are forming a representational thing.

    In this way, cells are like termite mounds. Termite mounds are not all the same, but they all function the same way. They are built in an uncoordinated fashion. There is no termite mound model, no termite mound design, no rule set for building termite mounds. If there is, where is it? Could you create a termite mound simulation with FSA agents acting as termites? Yes, you could. (No question about that.) But could you do it without selecting for a mound design? This kind of agent programming is really interesting because it gets right up to the edge of meaning, which is what I’m talking about. It’s a fun exercise.

    Could your virtual termite mound perform actual termite mound functions like cooling? The causal properties of a termite mound (initiating cooling effects) are not programmatic features of the termites. And certainly they are not features of molecules, which is what termites, physically, really are. The molecules have no procedural elements to create cells. The cells have no procedural elements to create termites, and the termites have no procedural elements to create mounds.

    I don’t want to say that termite mounds or cells have an explicit or specific meaning. I don’t believe they are that specific. But they are not meaningless either. They are something produced by syntactic processes that is not itself syntactic. Termite mounds and cells are asemic, and they do produce effects.

    With termite mounds, that is as far as we go down the meaning path, or up the representational ladder. But with cells, groups of cells can do things like quorum sensing. Combining groups of cells, we get slime molds and slime mold behaviors; then we get things like Volvox. And on up the chain of more complex organisms that interact _simultaneously_ in both molecular and representational ways. Certain molecular phenomena and certain representational phenomena co-occur.

    The only way I have found to explain the breadth of human experience and natural observations is to accept that the physical phenomena are first order phenomena of the universe and that the representational phenomena are ALSO first order phenomena of the universe and that these two very different kinds of phenomena, operating in very different ways, interact or intersect in organisms. (no homunculi necessary).

    Materialism and materialist perspectives break down because there is just no way to get from physical phenomena to experience, representations and concepts without assuming representation as a primary feature in the universe. And idealism obviously breaks down, because we cannot get to physical phenomena which are procedural and are meaningless and indifferent to representations, ideas, and experiences. Most of the time these two aspects of the universe do not interact. Magnetism doesn’t alter multiplication and pronoun usage doesn’t change emissivity of molecules. But these two “sides” of the universe do co-exist in us and other organisms.

    If this is the case, then I don’t see any obvious reason why representation making couldn’t be achieved based on FSA. But if you think representational phenomena are not first order phenomena, then yes, it should be impossible to get FSA in any kind of system to produce semantic content. However, that then requires showing how experiences and representational phenomena exist at all from the molecular and atomic foundations of physics – which I don’t think anyone can do. Ergo, representational and physical phenomena are both fundamentally real. The problem then is how to get FSA to instantiate representational phenomena so both co-exist in some kind of computational process, like how they co-occur in our organic molecular processes.

  80. Jochen says:

    calvin:

    Do you think what human beings are doing is syntactic? I would say that what humans do is explicitly not-syntactic.

    I’d have to agree; the question is how to create non-syntactic processes (especially if one starts with purely syntactic ones, like FSA). (I have my own thoughts on the matter.)

    (Incidentally, the boxed quotes are generated by putting [blockquote] and [/blockquote] tags around the quoted text, with the square brackets replaced by angle brackets.)

    You clearly understand something about writing code, so try to write a program that makes errors. Then try writing one that can recognize it made an error. It’s impossible. It’s like trying to conceive of a mathematical function that produces an error.

    The error is in the application of the function, not in its results. For instance, take a neural network and train it for image recognition. It will make plenty of errors: recognize things as faces that aren’t (such errors can lead to hilarious results, like when, say, a face-swap app identifies the radiator in the background of the image as a face, and swaps it with yours).

    Of course, it’s not making an error according to whatever function governs its actions: that thing is sufficiently face-like to pass for a face, according to its learned criteria. It applies the function correctly, and its output is—as you say—definitionally correct: it’s the only output it could yield. But it fails to match reality, and that’s where the error lies.

    Now, say you keep a record of such a previous identification, and continue to train the network on more images of faces. It will improve its performance, eventually up to a point where the radiator is no longer identified as a face. Hence, reviewing its previous identification, it may well flag that identification as having been mistaken—it may recognize its error. I see no essential problem prohibiting this.
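A minimal numeric sketch of that retraining story (my own toy nearest-centroid “face detector”, standing in for the neural net; all the data points are invented): the early, sparse model misfiles the “radiator”, and after more training data arrives, re-checking the stored verdict flags the old mistake.

```python
def centroid(points):
    """Mean point of a set of 2-D feature vectors."""
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(x, face_c, other_c):
    """Assign x to whichever class centroid is nearer."""
    dist2 = lambda a, b: sum((i - j) ** 2 for i, j in zip(a, b))
    return 'face' if dist2(x, face_c) < dist2(x, other_c) else 'not-face'

# Sparse early training data: the radiator lands on the wrong side.
faces, others = [(3, 3), (4, 2)], [(0, 0)]
radiator = (2, 2)
first_verdict = classify(radiator, centroid(faces), centroid(others))

# More training data shifts the decision boundary...
faces += [(5, 4), (4, 5)]
others += [(2, 1), (1, 2), (2, 3)]
second_verdict = classify(radiator, centroid(faces), centroid(others))

# ...and reviewing the stored record flags the earlier call as mistaken.
recognized_error = first_verdict != second_verdict
```

At every step the system is just applying its current function correctly; the “error” only exists relative to the later, better-informed function – which is all that recognizing a past mistake requires.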

    One might also imagine several layers of checking: first, a quick-and-dirty check with a reduced, but very fast implementation, and only if that couldn’t exclude the presence of a face in the data, you use the slower, but more sophisticated routine. That’s indeed very close to what seems to happen in human vision: we see a face with a quick glance, but then, closer examination convinces us that it’s really just an odd pattern in the wood grain, or something (well, or we believe it’s a miracle, and Jesus has deigned to appear in a tortilla’s burn marks; could go either way really).

    And of course, everyone’s by now seen those fantastic pictures of Google’s Deep Dream, which can hallucinate pictures from nothing but random input.

    What I am saying is there is lots of content, even related to syntactic concepts, that simply cannot be accounted for with any kind of syntactic approach. (Gödel proved this syntactically for axioms)

    Unless I misunderstand, this isn’t really true—for every sentence undecidable in some formal axiomatic system T, there exists a strictly stronger extension T′ of T that proves that sentence, in a fully syntactic way. (Of course, there will again be undecidable statements of T′, but then there exists a system T″…)
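In standard notation (a sketch of the textbook picture; here $G_T$ is the Gödel sentence of a consistent, recursively axiomatized theory $T$ extending arithmetic), the hierarchy looks like:

```latex
T \nvdash G_T, \qquad
T' := T + G_T \;\vdash\; G_T, \qquad
T' \nvdash G_{T'}, \qquad
T'' := T' + G_{T'}, \;\ldots
```

Each step is itself a purely syntactic move: one simply adds the previously undecidable sentence as a new axiom.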

    If you are in fact a syntactic processor, where is any given rule? Where is an alphabet? Where are the states? Where is the tape?

    You don’t need to convince me that I’m not just syntax: I think on that point, we fully agree. What I would like to understand is how you propose to get to semantics!

    I am treating representations and the contents of awareness as first order objects. They exist. Full stop. I feel like you may be treating semantic content, qualia, etc as somehow second order phenomena of an underlying non-representational, physical universe.

    Ah, OK; so would you say you’re a panpsychist, in some sense?

    We live in a universe made up of physical phenomena and non-physical phenomena. (Is that claim controversial?)

    Hugely! Many people these days are physicalists, claiming there exists nothing but the entities of physics, and anything else must be reducible to them.

    I’m also a physicalist, or at least broadly materialist, but of a less extreme stripe: for one, I don’t believe that the objects of physics, as a science, exhaust the things in the universe (in the terminology of Strawson—I think?—that would make me a physicalist, but not a physics-alist). I’m a physicist by training, and I think it’s all too much of a burden placed upon us to account for everything in the whole universe!

    The way I see it, in brief, is that there are properties to physical systems that are not transmitted to a model of that physical system (a good thing too, or else, the model would have to become that physical system). The properties that can be transferred are basically structural: the relations between a system’s parts. So, in order to model the solar system, you build an orrery, which has the property that, say, the distances of the little balls representing planets stand to one another in the same relationships as the distances of the planets do.

    However, nobody is going to try and send a rover to the fourth of those little balls; to do so would be just a massive case of mistaking the map for the territory (and an even worse screwup than the time NASA crashed a Mars mission because somebody forgot to convert from imperial to metric). That’s because there are properties—intrinsic properties—that are not implemented in the model (as I said, if they were, then the model and the original system would be identical).

    So, to me, physical science essentially studies the relational, structural properties of physical stuff—that’s why mathematics is so uniquely powerful there: it’s essentially the science of structure (mathematics can be, and often is, based on set theory—the theory of collections of things and their relationships). That’s also why computers are so damn useful: being a universal computer is nothing but admitting every possible structure (under the right interpretation).

    But ultimately, to me, it’s the non-structural, intrinsic properties that are responsible for meaningful reference, for subjective experience, and the like. That’s why science has such troubles getting at them: it’s concerned with systems that can be modeled; but the very act of modeling leaves these properties by the wayside.

    So, to me, there’s nothing but physics—it’s just that there’s more to physics (the stuff) than physics (the science) can capture.

    “The only way I have found to explain the breadth of human experience and natural observations is to accept that the physical phenomena are first order phenomena of the universe and that the representational phenomena are ALSO first order phenomena of the universe and that these two very different kinds of phenomena, operating in very different ways, interact or intersect in organisms.”

    OK, so I guess you’re more accurately described as something like a dual-aspect monist, along lines similar to David Chalmers. That’s a valid stance, I think, although it always has a sort of hollow ring to me: one needs to postulate that there just are some primitive attributes of the world. This obviously falls far short of fulfilling the original goal of explaining consciousness; hence, to me, it’s a stance one should only take if all else has demonstrably failed, since one risks giving away the game prematurely.

    But I think I do understand your stance much better now, thank you. I had taken you to suggest that ‘meaning’ would just somehow emerge from the interaction of syntactic engines, like FSA; but you’re not doing that, rather, you’re claiming that there’s no fundamental explanation of representation and the like, it’s merely a part of our universe, like electrons and photons and all that other stuff. (Which, again, I think is a valid stance.)

    One thing I still haven’t quite gotten is your notion of ‘asemic’, and how one builds representations out of these things—how is a thing meaningful, but has no particular meaning? I would have said that having some specific meaning is only what allows us to call something ‘meaningful’. And how does one assemble such things to generate particular meanings?

  81. Charles T Wolverton says:

    Arnold –

    I don’t “imagine” a mathematical space to be anything – I know it to be a set of definitions and rules. My point is that I can (and did at one point long ago) “understand” (in my action-based sense of developing relevant behavioral dispositions) those defs and rules notwithstanding being unable to visualize them. Some – say an N-dimensional vector space – can be dealt with by visualizing a 3-D space and generalizing. But good luck visualizing a reproducing-kernel Hilbert space.

  82. Arnold Trehub says:

    Charles,

    I am not claiming that you have to visualize the referents of symbol strings in order to understand them. My argument is that you have to evoke an *image* of the referents in order to understand what they mean. Blind people are capable of imaging such referents. Imagination is not restricted to the visual modality.

  83. calvin says:

    Jochen:

    Firstly, I REALLY want to read your paper. (calvin _at_ xmission.com)

    “The way I see it, in brief, is that there are properties to physical systems that are not transmitted to a model of that physical system (a good thing too, or else, the model would have to become that physical system). The properties that can be transferred are basically structural: the relations between a system’s parts.”

    I would say those transferred properties are representations. Relations are (in almost all cases) representations and not physical phenomena. The properties are not “structural” but representational. I will go even further and say that what you refer to as properties to physical systems are almost always effects and not properties.

    We are so bound up in representing phenomena that it is hard to discern that physical phenomena are only molecules and their interactions, not representations of some described phenomena. Larger phenomena, which we may ordinarily describe as properties, are also effects of the billions of individual molecular interactions and the 4 physical forces. Ice and freezing are an obvious example. There is no “freezing” force. Ice is not a physical thing, per se, but a representation of the relationship of the mass of invisible water molecules. Yes, ice has certain properties, but those properties are effects of the interaction of the molecules, and not causal in any way.

    It’s very easy to talk about properties as if they are causal features of physical systems, but they are not. There are only 4 causal forces of physical phenomena, and those are intrinsic to the particles. Everything else that we call a force or a property is not a causal phenomenon. I know I’m saying something you already know, but it’s very important to understand that there are no extrinsic forces or extrinsic causes in the physical universe. There are no extrinsic laws that determine how anything happens. As human beings we just have this very bad habit of thinking physical phenomena happen because of rules, or stories, or a “program”.

    I know there is an argument about state lurking against what I’m saying, and I will argue against that too. State is not a causal phenomenon either. States are effects of underlying molecular and particle interactions, not extrinsic phenomena which cause or force outcomes.

    I’m pushing so hard on this idea to push harder on the notion that representations (ideas) are first order phenomena. There just is no way to get to the set of rational numbers (clearly an idea) from the physical phenomena. Sets, ideas, symbols, and errors are simply not derivative from physical phenomena. Ergo, they must be something else. Materialism simply can’t describe how we get ideas from the physical phenomena without admitting ideas exist a priori. And materialism’s implicit admission of representational phenomena makes it harder to understand how physical phenomena work, by admitting such things as states and non-particle “causes”.

    That is not the only reason I think representations are first order phenomena. The primary reason is awareness. The problem of describing what awareness is leads us naturally to representation.
    Tackling the consciousness problem first requires tackling the awareness problem. I have a short summary of my argument here: http://asemicstigmergic.net/tucson/%2302%20%20Awareness%20and%20representation%20illustrated.html

    In short, awareness itself is a function that always behaves the same way. Awareness is always awareness of some content, and that awareness of the content is the same as the content. Awareness of x = x. This is counter-intuitive, so try to construct a counter-example to see if there is ever any difference. Use whatever you like for x, and you will find that the awareness of your x and the x itself are always the same. What you will find is that you have lots of x’s that are associated in various ways, but every one is identical to your awareness of it.

    “how is a thing meaningful, but has no particular meaning?”

    Asemic and non-specific meanings are very common. If I hold up my hand with my fingers apart, what does that mean? It may mean I’m waving. It may mean I’m saying “five”. It may mean I just wanted to lift my arm and I’m communicating no meaning. You may think I’m waving and wave back – which I would be puzzled by, because I was only raising my hand to stretch. The particular meaning in this case derives from context and, as part of context, intention. If you wave back, it implies you imputed to me the intention of waving to you.

    When a child (or many animals) is learning, it often engages in “no” behaviors. Humans understand the non-specific “no” very early in life. Sometimes young children even get into situations where everything is wrong and act out their “no” to that everything. It doesn’t make sense in those situations to think that those kinds of tantrum negative responses to everything are specific; they are non-specific, not particular assertions of wrongness.

    Moods are another example of non-specific meaning. When we say “a bad hair day”, we are illustrating something off about the experience we are having that day, but the experience or mood is non-specific. There may be particular things that occur, but they simply reinforce or confirm that we are having a “bad hair day”.

    Pain can also be non-specific. We can be happy for no good reason. You can just be lucky. And what does that specifically mean? The absence of specificity or the absence of particular associations does not mean something is without meaning.

    And there is no way to get to these kinds of phenomena from the physics. Physical phenomena do not produce pain, regardless of the effects of mass*acceleration in relation to stubbing my toe. How do the molecules produce pain? Obviously they don’t. But pain is extraordinarily non-particular. We may know where it hurts, but where it hurts isn’t the pain; that is just where the pain is.

    Itches are even more interesting. Is your head itchy? Are you going to scratch it? http://www.newyorker.com/magazine/2008/06/30/the-itch There is something just not right about itches – whatever that specifically means.

    I’m assuming (at this point) you acknowledge there are things that are meaningful without being specific – especially when we enter the realm of the non-linguistic and non-visual.

    “how does one assemble such things to generate particular meanings?”

    Assuming there are asemic representations, and that representations are real things, it’s reasonable to assume that complex representations are built up from simpler ones. And this is sort of the lesson one learns from experimenting with phenomenology. The first 2 kinds of meanings or representations we can make about the contents of awareness are:
    that one content is different from another content of awareness,
    or that one thing is the same as some other thing.
    Similarity and difference.

    Which brings us right back to the function of representation, where one object or content of representation can be the same as another object or content: x = x as y, and yet x != y.

    A system that creates structures or processes that perform this representation-making function is a system that ‘embodies’ representation making. It’s a function, not an emergent phenomenon. Where some physical process begins performing representational functions, and capturing those functions for further representations, that system is instantiating meaning – meaning does not emerge, it is made. Representational functions are then performed on that stored representation.

    A system that takes some inputs (x, y, z) and represents them as some value a, (x,y,z);a, is performing a representational function. If it stores that representation, we could say it has made a representation, it has created a meaning. But what is it? It’s really not meaningful at all. It’s certainly not the meaning we think it is.
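calvin’s “(x,y,z);a” notation can be sketched in a few lines of code. This is a hypothetical toy of my own, with invented names; it only illustrates the bare mapping-and-storing step he describes.

```python
# A minimal sketch of a "representational function": a system maps a
# tuple of inputs to a stored token. The class and names are invented
# for illustration.

class Representer:
    def __init__(self):
        self.store = {}  # stored "representations": inputs -> token

    def represent(self, x, y, z, token):
        # (x, y, z) ; a -- bind the inputs to the value a and keep it
        self.store[(x, y, z)] = token
        return token

r = Representer()
r.represent(1, 2, 3, "a")
assert r.store[(1, 2, 3)] == "a"
```

As the comment says, the stored token “a” carries no meaning beyond the mapping itself; whatever significance we read into it is ours.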

    And before you suggest we do that all the time with FSA, I will agree we do. But that is as far as it goes. For technical reasons, an FSA cannot make representations about its representations. All the coding we do is always of the same order. Mathematical functions cannot represent the non-mathematical. Math is a subset of representation, not the superset. And low level representations like difference and similarity are not mathematical differences, but simply representational ones. That is, they are differences not often of quantity but of kind.
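The claim about FSAs can at least be illustrated (not proven) with a toy automaton. The transition table below is my own invented example: the machine consumes input symbols by a fixed rule table and does nothing else; it has no operation that takes its own states or rules as objects.

```python
# A toy finite-state automaton tracking the parity of 1s in a string.
# The transition table is invented for illustration: a purely syntactic
# lookup, state by state, symbol by symbol.

TRANSITIONS = {
    ("even", "1"): "odd",
    ("odd", "1"): "even",
    ("even", "0"): "even",
    ("odd", "0"): "odd",
}

def run(fsa_input, state="even"):
    for symbol in fsa_input:
        state = TRANSITIONS[(state, symbol)]
    return state

# "1101" contains three 1s, so the machine ends in the "odd" state.
assert run("1101") == "odd"
```

Whatever representational gloss we put on the states, the machine itself only ever maps symbols to states; there is no second-order operation in sight.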

    “That’s why science has such troubles getting at them: it’s concerned with systems that can be modeled; but the very act of modeling leaves these properties by the wayside.”

    I think it is the fact that mathematics is a subset of representation that dooms model building (or simulation) as a means to create conscious machines. But science does not have to be limited to mathematics. Mathematics is a tool to elucidate and describe, but in this one area (representation making itself) mathematical (computational) approaches will always fail.

    Right, so how to bridge the gap? Indirect coordination. Your hunch about cellular automata (“my own thoughts”) is right on. My attempt to use CA to achieve representation making was a long and fraught battle that ended in failure, primarily because of addressing and rules: CA are constrained by their addresses and by their rule set. I had to find a way out of those restrictions, but also preserve the mechanism automata have of building structures of themselves in an uncoordinated fashion. Those structures become the foundation of basic representations and, with enough complexity, higher order structures hopefully become higher order representations.
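For readers unfamiliar with the constraints calvin mentions, a minimal one-dimensional CA sketch shows both sides: structures arise from purely local, uncoordinated updates, yet every cell is bound to its address and to the one shared rule table. Rule 110 is a standard textbook example; the grid size and step count here are arbitrary choices of mine.

```python
# A one-dimensional cellular automaton (Rule 110). Each cell updates
# from its own neighborhood via a fixed rule table -- no coordination,
# yet persistent structures grow from a single live cell.

RULE = 110

def step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        # neighborhood index: left, self, right (wrapping at the edges)
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((RULE >> idx) & 1)
    return out

cells = [0] * 31
cells[15] = 1  # a single live cell
for _ in range(10):
    cells = step(cells)

assert sum(cells) > 1  # structure has grown from purely local rules
```

Note how the rule set (`RULE`) and the addressing scheme (the index arithmetic) are fixed from outside; that is exactly the restriction the comment describes wanting to escape.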

  84. Charles T Wolverton says:

    Arnold:

    you have to visualize the referents of symbol strings in order to understand them.

    you have to evoke an *image* of the referents in order to understand what they mean.

    It seems we are using the key words differently since I interpret these assertions as equivalent. For me:

    1. “visualize” = “evoke an image”

    2. “meaning” = the response intended by the author of the symbol string

    3. “understand” = respond in a way that approximates the meaning (as in 2) of the symbol string

    Of course, 2 is just my assumed action-based definition of “meaning”, not a generally accepted one (which as far as I know, doesn’t exist). So, with which of these do you disagree?

    In any event, I don’t know what to do with any concept of meaning that requires “visualizing”, “imaging”, or any similar word. If I say/write/tap-out-in-Morse-code or otherwise produce the string “Stop!”, what mental event describable by one of those words do you contend must occur in order for me to do as commanded?

    You speak of “referents” which makes me wonder if you are thinking of language in the Augustinian way critiqued by Wittgenstein in the opening pages of “Philosophical Investigations”? (BTW, those early sections and a relevant chapter of Sellars’ “Empiricism and Phil of Mind” motivated my adoption of definition 2 above – I didn’t just make it up.)

  85. Arnold Trehub says:

    Charles,

    1. “visualize” = “evoke an image”

    I agree if you specify that the evoked image can be a pattern in any sensory modality.

    2. “meaning” = the response intended by the author of the symbol string

    I don’t agree. The reader of the symbol string might assign different meanings to the strings than those intended by the author.

    3. “understand” = respond in a way that approximates the meaning (as in 2) of the symbol string

    This hinges on the reader’s response matching what the author intended. This often does not happen.

    What is wrong about speaking of the referents of the physical marks of words/symbols?

  86. Charles T Wolverton says:

    The reader of the symbol string might assign different meanings to the strings than those intended by the author.

    In my assumed definition of “meaning” per 2, it isn’t something assigned by the reader. The reader either responds as intended by the author or doesn’t. If the former, the string was “understood” per my definition. Otherwise, it wasn’t.

    This hinges on the reader’s response matching what the author intended. This often does not happen.

    Again, in which case the reader didn’t “understand” the string.

    I’ve defined what I mean by “meaning” and “understand”. You may object to those definitions, but you can’t counter them by implicitly importing your own. Which is why I keep asking: what are your assumed definitions? Maybe they’re better than mine, but I can’t make that assessment unless I know what they are.

    What is wrong about speaking of the referents of the physical marks of words/symbols?

    Nothing, except that it then seems to assume that naming objects is all there is to language.

  87. Arnold Trehub says:

    Charles,

    In my model, the meaning of a character string/word is given by the sensory images or events that are directly or indirectly evoked in the preconscious semantic brain mechanisms of the reader. For example, the character string “courageous” has meaning only by virtue of its evocation of sensory exemplars/images of particular kinds of acts that we call “courageous”. At a simpler level, the meaning of the character string “dog” is given by evocation of at least one sensory exemplar/image of a dog. The dictionary meaning of a character string in terms of other character strings must be cashed out in terms of the referents/exemplars that they evoke in the brain of the reader. I do not assume that naming objects is all there is to language.

    Understanding a text is confirmed when the referents evoked in a brain conform to the referents/exemplars commonly accepted in a particular social group.

  88. Charles T Wolverton says:

    Understanding a text is confirmed when the referents evoked in a brain conform to the referents/exemplars commonly accepted in a particular social group.

    A key observation. Since I consider “context” to include author and reader, in my model an action (AKA context-dependent behavioral disposition – CDBD) will depend on to which “social groups” – if any – both belong.

    In comment 26 above, I gave as an example the readership of this blog. Many words that are “understood” (in my sense of triggering, modifying, or creating a CDBD) in that social group wouldn’t be understood in most other social groups. In other words, I don’t see strings as having intrinsic “meaning” independent of context.

    A more complex example is political speech, which motivated my emphasis on speaker “meaning” – how the speaker wants the hearer to react. Consider the harping by Republican operatives on Kerry’s statement during the 2004 presidential campaign “I voted for the bill before I voted against it”. For those who knew the legislative history (the “informed”), this was merely a fact about the legislative process. For GWB’s supporters, this was intended by those operatives to suggest flip-flopping and hence a perceived lack of decisiveness. I suppose that in hearers in either group (informed or uninformed) the statement could evoke something that could be called an “image”, but I can’t grasp what that might be, especially in the case of the informed who might well consider the statement warrants no response at all.

    As an aside, note that in the political example, truth plays no role. Kerry’s statement was true, but it was used by the operatives to evoke a response (“he’s indecisive”) that might or might not have been true and in neither case was supported by the statement.

  89. john davey says:

    Calvin

    I can explain how syntactic processing becomes semantic information.
    Great. Where do we post the Nobel Prize?

    The syntactic processes of the brain are molecular interactions.
    OK. Can you give the scientific references for this?

    Cells do not store qualia in any way.
    OK. Can you give the scientific references for this?

    Cells do not have intention in any kind of way.
    As brains are collections of cells, this is evidently plain wrong. So they must have them in some kind of way.

    As the brain is merely a collection of cells, and cells are a collection of molecules, there is no obvious way the brain can convert syntactic information to semantic experience.
    That’s true. But it doesn’t stop them from doing it, does it?

    There is no level of interaction above the molecular phenomena.
    OK. Can you give the scientific references for this?

    never do we see the 460nm wavelengths become blue in the whole visual system
    Speak for yourself. I do. If yours doesn’t, you need to see a doctor or an optician.

    the cell is not a physical phenomena; it is a semantic phenomena.
    It’s actually an observer-relative structure, like an atom. Syntactical in fact, given the imagery. The problem is that there is no coherent understanding of the physical: the ever-reducible models of matter make this clear. Linking semantics to matter gets you nowhere.

    What follows then is that collections and networks of cells can form structures to instantiate and represent more complex semantic information.
    Evidently true, as it’s what brains do.

    I am using a similar process with computation to produce a computational cell from complex collections of interacting automata.

    Great. When you’ve done it, we’ll post the Nobel Prize.

    J

  90. john davey says:

    Michael

    “If I understand Searle’s larger point, it’s that mere symbol manipulation does not suffice to demonstrate comprehension”

    That was his original argument, made about 30 years ago. He has since made far stronger arguments of a more general nature, but the AI community focusses on the Chinese Room because it is still vague, turning as it does on the notion of “understanding”.
    The key point is that there is no necessary link between semantics and syntax, between the function of a computer program and the symbol manipulation used to achieve it. The link is arbitrary.

    This leads me to two questions. The first is given that we can only infer semantics from syntax, what reason do we have to claim that a manufactured thing which performs syntactically at human equivalent level does not have semantics?
    See previous point.

    The second question is given that all we have to validate the existence of semantics in others is syntax, do we have something other than inference from syntax to validate the existence of semantics in ourselves?
    Yes. When we program, we model semantic structures in syntax so that we can use arbitrary symbol processors.

    We don’t ‘validate’ the existence of semantics in others through syntax. We ‘validate’ semantics in others by recognition of the fact that they are humans, like us. When you were a baby, did you ‘recognise’ semantics in your mother? No, she was just mum.

    If we can’t prove through shared language that other human beings aren’t philosophical zombies is there another way to prove it

    You could be the only conscious person in the world. That makes you pretty special doesn’t it? The short answer is that you have no reason to think they’re any different to you.

    Solipsism seems to be the inevitable dead end of AI .. don’t fall into it. Bertrand Russell used to quote a letter he received from a lady who maintained she was a solipsist – “the trouble with being a solipsist”, she wrote “is that nobody seems to take you seriously”.

    How can you prove to yourself that you are not a philosophical zombie?
    How can I prove I’m not a piece of cheese, cunningly disguised? That I’m not living in a dream world?
    Proof of absence is a notorious piece of philosophical junk. How can I prove there isn’t a gnome at the centre of Jupiter called Frank who eats mice?

    The answer is simple – I’ve no reason to think there IS a gnome at the centre of Jupiter who eats mice. Or that I’m in a dream world. Or that I’m a zombie – in fact I know from the definition of a zombie that I’m not a zombie, because I’m conscious. As you are.

    J

  91. calvin says:

    John,

    Cells do not store qualia in any way.
    OK. Can you give the scientific references for this?

    No, because there is no evidence of this. The cell is made up of molecules that interact and form structures. If a cell could store qualia in some form, what would it store? Would it cache some collection of molecules in a vesicle to represent… what? The question is where does semantic information (meaning/qualia/etc) arise or exist in the organism? Does it exist in the cell? And if it does, what are the information molecules other molecules of the cell interact with to both produce and apprehend meaning?

    Cells do not have intention in any kind of way.
    As brains are collections of cells, this is evidently plain wrong. So they must have them in some kind of way.

    No, they don’t. None of the experimental evidence indicates cells have intention. All of the behaviors of cells can be explained as molecular or sub-molecular interactions. The cells do not have intention. If they do, where is the intention stored? How does the intention affect molecular phenomena? How do molecular phenomena induce the change or production of intention?

    The literature often uses intentional language to describe cellular “behavior”, but that language is understood to be mistaken. An E. coli bacterium which seeks out an attractant is not “seeking”. The bacterium’s flagella change their rotational direction in response to molecules it encounters, which through a series of interactions produce an ion change, which in turn causes the flagellar motors to rotate clockwise or counter-clockwise. The bacterium either swims directionally or tumbles depending on the rotation of its motors, which depends on ionization, which is regulated through a series of interactions by the molecules it encounters. The bacterium follows an attractant or avoids a repellant in a haphazard fashion. See: https://en.wikipedia.org/wiki/Chemotaxis

    Cellular behavior isn’t “behavior” at all, but an amalgam of interactions that depend on the gradient or steady state of molecules inside and outside the cell membrane. Cellular processes are mechanical or mechanistic. And it is for this reason that I say molecular interactions are syntactic, or syntactic-like, processes. An attractant molecule affects the motion of a bacterium even if there is no nutrient source. The idea of “nutrients” or “food” or “sources” or “wanting” or “needing” doesn’t enter into the interactions which occur. The cell does not have “urges” or “needs”.
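The run-and-tumble mechanism can be caricatured in a few lines of code. This is a hypothetical toy model, not the actual biochemistry: the one-dimensional gradient, step count, and tumbling rule are simplifications of my own. Nothing in the rule represents a goal, yet the walk drifts up the gradient.

```python
import random

# Run-and-tumble chemotaxis in one dimension: keep swimming while the
# attractant concentration has just increased; otherwise tumble (pick a
# random direction). No "seeking" is represented anywhere in the rule.

def concentration(x):
    return x  # a simple linear attractant gradient

def run_and_tumble(steps=2000, seed=0):
    rng = random.Random(seed)
    x = 0.0
    direction = rng.choice([-1, 1])
    last_c = concentration(x)
    for _ in range(steps):
        x += direction
        c = concentration(x)
        if c <= last_c:  # things got worse: tumble to a random direction
            direction = rng.choice([-1, 1])
        last_c = c
    return x

# The purely mechanical rule yields net movement toward the attractant.
assert run_and_tumble() > 0
```

The point of the sketch is the same as the comment’s: “following the attractant” is a description we apply from outside; inside, there is only the update rule.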

    But brains are collections of cells, and brains can be affected or damaged to alter and even initiate intention, as with L-dopa and gambling addiction in Parkinson’s patients: http://www.prd-journal.com/article/S1353-8020(13)00074-6/abstract This is a case where a molecule clearly affects intention. So how could this work?

    because:

    The cell is not a physical phenomena; it is a semantic phenomena.
    it’s actually an observer relative structure , like an atom. Syntactical in fact, given the imagery. The problem is that there is no coherent understanding of the physical : the ever-reducible models of matter make this clear. Linking semantic to matter gets you nowhere.

    The cell is not observer-relative to itself, and it’s not observer-relative to you. (You can be infected by cells you do not observe.) The cell is not observed by other cells. Cells stand in this funny intersection where the cell isn’t a thing it knows about itself, but it is this thing it maintains. And other cells do not know about each other, because they are purely mechanistic. But multiple cells clearly interact and produce complex structures that allow for all kinds of actions and, specifically, intentions. The cell is a semantic thing. It’s like the proto-semantic thing.

    When cells engage in co-occurrent processes, it is the fact of co-occurrence which looks like what we mean by intention or by qualia. Quorum sensing is a primitive example of that phenomenon.
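Quorum sensing reduces to a toy threshold model that makes the co-occurrence point vivid. The numbers and names below are invented for illustration: each “cell” only secretes a signal molecule and reacts mechanically to the local concentration, yet the population as a whole switches behavior when enough cells co-occur.

```python
# A toy threshold model of quorum sensing: no cell "decides" anything,
# but the collective switches state when enough cells are present.
# All parameters are invented for illustration.

SECRETION_PER_CELL = 1.0   # arbitrary units of signal per cell
THRESHOLD = 10.0           # concentration at which the switch flips

def population_behavior(n_cells):
    conc = n_cells * SECRETION_PER_CELL  # shared signal concentration
    # every cell applies the same mechanical rule to the shared signal
    return "luminesce" if conc >= THRESHOLD else "quiet"

assert population_behavior(3) == "quiet"
assert population_behavior(50) == "luminesce"
```

The “decision” to luminesce exists only at the level of the population; no individual rule mentions it, which is the side-effect character the comment goes on to describe.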

    By constructing networks of, say, computer programs, it’s possible to show all kinds of representational or semantic effects that are driven by the underlying syntactic processes. Petri nets, neural nets, and cellular automata can all demonstrate these kinds of phenomena. The network of programs can indicate all kinds of phenomena such as time, state, gradient-driven behavior, etc. Organic cells also make networks and perform these “semantic like” functions. But cells, such as neurons, produce this “behavior” because of their underlying mechanistic (syntactic) processes. Computer program nodes form connections because of our syntactic programming, whereas neurons do not form “connections” because of your intention to learn; they change their shapes and surface features because of their intrinsic molecular interactions. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3079328/

    In organisms, what we see is semantic like phenomena at the level of networks of cells. And we see syntactic like, non-semantic phenomena in the individual cell. But crucially, the syntactic processes of cells are not intended to produce the semantic like phenomena of their networks. That is a side-effect. It’s not unlike the kind of side-effects we see with cellular automata. The gradient driven processes inside cells produce macro side-effects in the organism. see: https://www.amazon.com/Introduction-Systems-Biology-Mathematical-Computational/dp/1584886420 particularly ch.8

    The supposition is that if we put the right kinds of cells together, they will form networks that generate, and respond to, the meaning inherent in the network itself; those networks generate actions through their connected nodes and respond to the environmental effects that nodes of those networks react to. http://jn.physiology.org/content/80/1/1.full

    If this supposition is true, then the obvious way to produce digital consciousness is to create programs which follow their own intrinsic processes such that, as a side-effect, the right set of programs would create a digital cell or cell equivalent. Like a real cell, its primary “purpose” is homeostasis, but that is a purpose not programmed for, but selected for from a bunch of programs with their own syntactic operations. And then, secondly, to find the right kinds of cells that would form networks and generate meaning and causative action like we suppose must be happening in our brains.

    never do we see the 460nm wavelengths become blue in the whole visual system
    Speak for yourself. I do. If yours doesn’t, you need to see a doctor or an optician.

    My point was that blue is not a phenomenon of the photons, nor the opsin molecules, nor other molecules of the photoreceptor cells, and is not a component of the neurons of the retina or optic nerve, or of the cells of the visual cortex. There simply is no such thing as “blue” in the physical phenomena of vision. Blue is a quale; it is an idea. Blue is meaningful, it is semantic, and not a syntactic feature of biological vision.

    I would like to know what wavelength of light you see when you dream or imagine something blue?
    Also: https://en.wikipedia.org/wiki/The_dress_(viral_phenomenon)
    And: http://www.smithsonianmag.com/innovation/scientist-accidentally-developed-sunglasses-that-could-correct-color-blindness-180954456/?no-ist

  92. John Davey says:

    Calvin


    Q. Cells do not store qualia in any way. OK. Can you give the scientific references for this?
    A. No, because there is no evidence of this.

    The correct answer is ‘no one has ever studied the question in a meaningful scientific way, so the scientific evidence is not available, so no one is in a position to make such statements’.


    Q. As brains are collections of cells, this is evidently plain wrong. So they must have them in some kind of way.
    A. None of the experimental evidence indicates cells have intention.

    OK. Can you give the scientific references for this?


    the cell is not observer relative to itself and it’s not observer relative to you.

    It most certainly is relative to my perspective. Whose else’s would it be? I choose to isolate the constituents of the cell as a cell; I could choose any set of objects.


    never do we see the 460nm wavelengths become blue in the whole visual system

    The ‘whole’ visual system doesn’t include that aspect of the brain that realises visual imagery, including qualia?


    When cells engage in co-occurrent processes, it is the fact of co-occurrence which looks what we mean by intention or by qualia. Quorum sensing is a primitive example of that phenomena.
By constructing networks, of say computer programs, it’s possible to show all kinds of representational or semantic effects that are driven by the underlying syntactic processes. Petri nets, neural nets, and cellular automata can all demonstrate these kinds of phenomena. The network of programs can indicate all kinds of phenomena such as time, state, gradient-driven behavior, etc. Organic cells also make networks and perform these “semantic-like” functions. But cells, such as neurons, produce this “behavior” because of their underlying mechanistic (syntactic) processes. Computer program nodes form connections because of our syntactic programming, whereas neurons do not form “connections” because of your intention to learn; they change their shapes and surface features because of their intrinsic molecular interactions. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3079328/

    I’ve written neural nets. I disagree. They are just programs – syntactical – and that’s it. They have no more inherent sophistication than a program to calculate Vedic astrology or feed a pet virtual cat.
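
    The point that a neural net is “just a program” can be made concrete. A minimal sketch (the weights and inputs are invented, purely for illustration): a perceptron-style unit reduces to multiplication, addition, and a threshold rule. Any “meaning” of the inputs or the output is supplied by us, not by the program.

```python
# A one-unit "neural net": the entire computation is arithmetic plus a
# threshold test. Pure rule-following syntax; the numbers carry no
# meaning inside the program itself.
weights = [0.6, -0.4]  # invented values, purely illustrative
bias = 0.1

def neuron(inputs):
    # weighted sum followed by a step activation
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

print(neuron([1.0, 0.5]))  # 0.1 + 0.6 - 0.2 = 0.5 > 0, so prints 1
```

    Stacking thousands of such units changes the scale, not the kind: it is still symbol shuffling, which is the syntactic point being argued here.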

    JBD

93. calvin says:

    John,

Q. “Cells do not store qualia in any way.” OK. Can you give the scientific references for this?
A. No, because there is no evidence of this.

The correct answer is ‘no one has ever studied the question in a meaningful scientific way, so the scientific evidence is not available, so no one is in a position to make such statements’.

This isn’t correct. The chemotaxis research all began with questions about intent and knowledge, by researchers asking how cells know where a food source is or which environments are toxic. Which is a qualia question. And how did cells decide to move to food sources and avoid toxic environments? Which is an intent question. What was discovered is that cells neither know nor intend. This research contradicted the common-sense belief that cells have intention and knowledge.
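
    The mechanism that research uncovered is strikingly mechanical. A toy sketch (all parameters invented for illustration) of run-and-tumble chemotaxis in one dimension: the “cell” climbs a concentration gradient using nothing but a local comparison and a biased coin flip, with no knowledge or intent anywhere in the loop.

```python
import random

def run_and_tumble(concentration_at, steps=1000, seed=0):
    """Toy 1-D chemotaxis: tumble (randomize direction) less often when
    the attractant concentration is rising. Purely mechanical rule-following."""
    rng = random.Random(seed)
    x, direction = 0.0, 1
    last_c = concentration_at(x)
    for _ in range(steps):
        x += 0.1 * direction
        c = concentration_at(x)
        # tumble with high probability when concentration falls, low when it rises
        if rng.random() < (0.1 if c > last_c else 0.6):
            direction = rng.choice([-1, 1])
        last_c = c
    return x

# attractant peaks at x = 50; the walker drifts toward it without "knowing" anything
final_x = run_and_tumble(lambda x: -abs(x - 50.0))
```

    The walker reliably ends up near the peak, yet nothing in the loop represents food, goals, or location; that is the sense in which the research dissolves the intent question.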

    What the physical science shows over and over again is that we naturally posit representational features to physical phenomena, yet the physical phenomena have no relationship to concepts or ideas. Experiments and research continually demonstrate that what causes physical phenomena are particles and the interactions those particles produce which are wholly determined by the intrinsic features (forces) of the particles themselves.

There is a fundamental issue here. This mind/body problem, the digital consciousness question, the problem of intention and qualia are all particular examples of one underlying issue: How do ideas arise? How do ideas affect physical phenomena? And how do physical phenomena affect ideas? In human beings, it’s the problem of bi-directional causation: how do ideas cause physical phenomena, and how do physical phenomena cause ideas?

By ideas I mean any kind of representation or information. Information itself doesn’t enter into physical phenomena. As you must know, information doesn’t exist in computational systems. We take the “state” of the machine (or its parts) as information. We make a representation of physical state to be some kind of particular information. No computer ever does that. The computer only responds to state but is not aware of the state as something else, like information. You are correct, no neural net is actually creating semantic information. Computers do not bridge the gap from physical phenomena to idea. And I did not mean to suggest a neural net could.

I am proposing something more fundamental and subtly different: to deal with the problem of awareness and representation directly. Representation is non-physical. And the physical phenomena are not representational. There is no middle ground. Our universe is one of both non-physical representations and ideas, and physical particles that are non-representational, and these two kinds of things somehow interact in organisms. And when they do interact, the physical phenomena instantiate representations.

I made an argument for how awareness and representation work in my discussion with Jochen, and that they work according to a particular function. Awareness of an object is the same as the object. And that object-and-awareness diversity is the result of representational functions, where one object is the same as itself, as another object. Object x = object x as object y:
x = x ; y

If we don’t deal with how awareness and representation function, how could we understand how those non-physical phenomena function in an organism? Conversely, if we don’t understand how the physical (non-representational) features of an organism function, we can’t understand how awareness and representations could be instantiated. We have to find those in situations where both representational functions and objects and physical processes co-occur.

To understand how digital consciousness would work requires a similar two-pronged attack. And the point I made was that we must recapitulate, using computation, the representational primitives that arise in the physical space. That is, we must build computational cells. Not simulate cells, build actual computational cells, where the underlying computations, like molecules, behave in their own intrinsic ways to form a cell, which the sub-programs maintain as a side-effect of their processing.

    It seems like in your critiques, you are mixing up the physical phenomena with the non-physical phenomena. And that naturally makes my arguments look loopy. I’m making a case that both physical phenomena and non-physical phenomena exist, and that they interact in organisms. (and hopefully in computation based organisms).

It isn’t really arguable that physical and non-physical phenomena do not exist. Arguments against either require both as pre-existing components. Which means the useful question is: How do physical and non-physical phenomena interact? Under what physical and non-physical conditions do we get interaction of those two parts of our universe?

I propose that cells themselves are the basic building block of that interaction. That the cell itself is both a physical and a non-physical (representational) thing. For reference: http://asemicstigmergic.net/tucson/

94. John Davey says:

    Calvin


“This isn’t correct. The chemotaxis research all began with questions about intent and knowledge by researchers asking how cells know where a food source is or which environments are toxic.”

What on earth does this have to do with intention? Chemotaxis – bacteria scraping after food?
You will have to give links to the relevant scholarship – proper research with real animals, not fantasy-planet stuff by cognitive scientists using PCs.

And how did cells decide to move to food sources and avoid toxic environments? Which is an intent question.

No it isn’t. That’s like describing an apple thrown up in the air as having the “intention” to fall down again.


    How do physical and non-physical phenomena interact?

You have hit the philosophical non-problem on the head, I would suggest. By making an artificial split between the “physical” (as yet undefined) and the “mental” we have synthesized a non-existent quandary.
There is no mind-body problem, as no one knows what “body” is.

    The non-existent “mind-body” problem is surely the philosophical argument that has wasted the most time of all. It has had a ruinous effect on brain research and kept it in the dark ages, by favouring ludicrous computationalism in the same way that doctors used leeches. No amount of logic and endless contradiction seems to placate these folk nonetheless.

    JBD

95. calvin says:

By making an artificial split between the “physical” (as yet undefined) and the “mental” we have synthesized a non-existent quandary.
There is no mind-body problem, as no one knows what “body” is.

    (did you mean no one knows what “mind” is?)

I am not suggesting there are “mental” phenomena, at least not in the sense of there being a mind. The mind is an idea, just like other ideas, but cannot be the source of thought or consciousness. But that does not mean ideas do not exist. It does not mean experiences do not exist. Meaning is a real thing. Semantics is an actual problem.

    None of the brain research gets us to things like mathematics, abstractions, feelings, or qualia of any sort. I understand there is a physical path from atoms to molecules to complexes of molecules, to cells, to brains. So, how do you get the number 2 from that? How do you get the function of square roots? How do you get beauty?

I understand your frustration with the mind-body problem, but I don’t understand the claim that we don’t know what the body is. I thought that was mostly empirically obvious. Do you mean to extend the body to be ideas as well? If so, how? What is the body, and how does it produce and discover ideas? What does idea discovery even mean from a physical-body perspective?

The only soluble approach to account for ideas and physical phenomena that I could find is to accept that there are both physical and non-physical features in our universe (mathematics would be a non-physical feature). Illusions are an example of a non-physical feature. Illusions exist, we can talk about them and recreate them, but they are non-physical phenomena. Do you have a different approach that doesn’t bifurcate? Or do you simply reject one kind of phenomenon?

And if both are actual phenomena, I believe the best way to talk about their interaction is to think of the physical phenomena as instantiating ideas, instantiating meaning, instantiating illusions, instantiating awareness – as representation. The best reason for this is that different people, with clearly different brain configurations (very similar, but not identical), can have the same ideas. And if machine consciousness is possible, then the problem of instantiating consciousness in a machine requires an obviously very different physical structure that produces the same non-physical phenomena.

However, I am very curious to try alternative models of producing machine consciousness than the one I am following, especially because the only approach I have found (metabolic computation) is… complicated. Do you have a different approach, or do you have a counter-argument as to why digital consciousness would be impossible?

96. Arnold Trehub says:

    Calvin,

Do you claim that subjectivity – the first-person perspective – is a non-physical phenomenon?

97. John Davey says:

    Calvin

There are things that we think of as physical and things that we think of as mental. The mental is understood fully – in the sense that all humans comprehend it. It is the physical that is not comprehended, and never has been since Newton first proposed gravitation when he invented physics 300 years ago.

    Gravitation defied comprehension of body, as no one could understand the notion of action at a distance. Newton himself thought the idea ridiculous, and failed to explain it, merely recounting that it was not the task of physics to provide understanding, but instead to provide an accurate prediction of certain metrics.

In more recent times we’ve seen quantum mechanics defy any comprehension of the physical. We cannot conceive of matter as both particulate and wavelike. Nonetheless the mathematics works.

    Likewise we cannot conceive of a method of physics that leads to mental phenomena. THAT is the mind-body problem. It is not a problem of the brain or the universe : it is a problem of physics and human cognition.

    Jbd

98. calvin says:

    Arnold,

My intuition is yes: subjectivity (as the first-person perspective) is a non-physical phenomenon. I would say that objectivity (a third-person perspective) is also a non-physical phenomenon.

For two reasons. One, perspectives are irrelevant to physical interactions and phenomena. The absence or presence of objectivity or subjectivity does not alter physical phenomena. Yes, this is almost the definition of objectivity, but we use objectivity only because we actually have subjective experiences (i.e., objectivity is real, just not a physical phenomenon).
(The quantum argument here doesn’t require objectivity or subjectivity per se; it only requires the act of observation, which could be a singular event and not a persistence of perspective.)

    Two, it seems impossible to specify what objectivity or subjectivity actually is. What is an objective view? What is a subjective view? It’s impossible to specify the contents of either viewpoint, except in the most gross or general terms. And even then the descriptions can be recast in entirely different terms. For instance a valid view (either subjective or objective) must include molecular interactions, but no one treats the molecular interactions as the basic way to view either subjective experience or objective phenomena. Subjectivity and objectivity are concepts and it looks like we apply them in a general sense.

It’s like how we talk about night and morning, sunrises and sunsets. Night-time, like subjectivity, isn’t a physical phenomenon. It’s at best an observation of a side-effect of a physical phenomenon. But night also has all these agglomerations of associated representations attached to it, just like subjectivity. But night doesn’t do anything. It doesn’t increase danger, or cause us to be sleepy, and it doesn’t pass into morning. It is not a physical phenomenon; night is a representation.

Subjectivity is a representation. What we have in this universe are physical phenomena, which are non-representational phenomena, and representations, which are non-physical phenomena. As human beings we constantly mix up these things, because we embody both. My point is that the two kinds of actual phenomena interact under particular circumstances (organisms). But we can’t understand how that interaction happens if we don’t treat both representations and physical phenomena (mostly molecular interactions) as first-order processes and objects in our universe.

To be more blunt, subjectivity and objectivity are not causal states. They are representations. They may be contents of awareness, but that doesn’t really tell us anything. The fact that there are contents of awareness doesn’t indicate there is some source of awareness like a mind. It’s more straightforward to suggest there is an underlying process which, when it occurs, produces awareness. We don’t need to assert the concept of mind (or subjectivity) as an explanatory principle, because to do so means the principle itself must pre-exist its assertion, which is an obvious contradiction.

Taking the opposite position: even if physical processes can give rise to subjectivity (or a mind), what is the thing they are producing? That thing is, at least in part, a concept. Subjectivity is a concept, which is non-physical. So how do the physical phenomena produce the concept? The concept has to pre-exist the physical manifestation or has to somehow co-occur. And in both cases, it means that non-physical phenomena such as concepts exist independently of physical phenomena.

We just can’t get away from the fact that representations and things like concepts exist. And representations are irrelevant to physical processes. Ergo, both exist in the universe. Seeing the problem this way recasts the issues in a way I have found much more useful for making progress toward actual machine consciousness. The mind-body question really is a kind of red herring, because the mind is just another concept. The issue is how to get physical processes to instantiate ideas at all.

99. calvin says:

    John,

There are things that we think of as physical and things that we think of as mental. The mental is understood fully – in the sense that all humans comprehend it. It is the physical that is not comprehended, and never has been since Newton first proposed gravitation when he invented physics 300 years ago.

I want to unpack this. One, the “mental” is not comprehended fully; the claim that it is fully comprehended is disprovable. Novelty, discovery, and creative origination are all examples of uncomprehended “mental” contents becoming comprehended “mental” contents. There is certainly undiscovered mathematics, just as there is undiscovered music. Moreover, “mental” contents that we think we may comprehend, we may not. That’s a lesson every programmer learns as they write programs that have uncomprehended outcomes (bugs).

And I really want to push back on the idea of “mental” contents. What are feelings, then? What is aesthetics? What are itches? What is pain? What are urges? Do you comprehend what an itch _is_? Or do you mean we can name things? Do you comprehend sexual desire? Or why you have one dream over another? Or why some coincidences are meaningful and others are not? I reject this notion of comprehension. If you mean apprehension, or awareness, then certainly: we are aware. Awareness does occur. And if that is what you meant, then I fully agree, because awareness functions in one particular way.

If there is awareness, it is always awareness of contents. And that awareness of contents is always identical to the content itself: awareness of content X = X. There are no counter-examples to this assertion. Any suggested counter-example is actually using different content or different awareness: Aw:X = X and Aw:Y = Y. And that is where representation comes in, where some content of awareness X is the same as some other content Y (X as Y): X = X as Y; Y = Y as X; but X != Y.

As to comprehending what the physical is… well, I tend to agree with you. The physical is incomprehensible. I’ve been saying that consistently. The physical is non-representational. Our representations of physical phenomena are representations and not physical phenomena. So what? That doesn’t mean we cannot apprehend physical phenomena. It does not mean we cannot be aware of physical phenomena. It does not mean our representations cannot be consistent predictors of physical phenomena.

    In more recent times we’ve seen quantum mechanics defy any comprehension of the physical. We cannot conceive of matter both particulate and wavelike. Nonetheless the mathematics works.

This is a self-contradictory statement. The mathematics is comprehension. Is it perfect comprehension? No. But that has been my point. We live in a universe with both physical phenomena and representational phenomena. Both exist. Of course the physical phenomena are “incomprehensible” – they are not representations of anything. They are “syntactic”, and their syntactic “functions” are all intrinsic. There are no extrinsic rules which drive particle behavior. All particle behavior is driven intrinsically. Gravity is not an extrinsic “law” that forces some particles to be attracted, but an intrinsic force of the particles themselves. There is no such physical thing as “Gravity”; there are just particles that interact, and we call some of those interactions “Gravity”.

Likewise we cannot conceive of a method of physics that leads to mental phenomena. THAT is the mind-body problem. It is not a problem of the brain or the universe: it is a problem of physics and human cognition.

    Yes we can conceive of a method where physics leads to “mental phenomena”. Obviously, we have instances. The question is what conception and how do the physical phenomena work to produce representations?

Mental phenomena are a subgroup of the class of all representations, just as some matter (a rock) is a subgroup of all matter (the physical universe). Therefore physical processes which make representations and engage in representation-making processes would be good candidates for producing mental phenomena.

    Looking at the problem in that way, we can ask: How does representation making work? And how do we instantiate representations with physical processes? These are useful questions that at least give us a falsifiability path to understand how consciousness works.

But you can’t get there by only dealing with one subgroup of representation, such as mental phenomena. You can’t get there by asserting some idea of “mind”, because mind is just another example of representational content, and not the source of all representations. E.g., novelty and forgetting demonstrate that mind has no causal power and is not encompassing. I do not believe it’s possible to conceive of a solution to the “mind-body” problem when you do not treat the physical elements and the representational elements both as first-order objects in a single universe.

It seems to me that your suggestion of mental contents is one about a mind, where the mind is a first-order object and the contents of mind are second-order objects. This does not work. As Nietzsche pointed out about Descartes’ cogito argument, the language posits a doer to every deed, which is an error. It requires that language construct to pre-exist. There is no extrinsic or intrinsic reason for there to be a mind apart from how we talk about it. Therefore, mind is just one of the things we talk about – it is another element of representational content. If we treat all representations as first-order objects, just as molecules are first-order objects, the question then becomes: How do representations and representational functions (like thinking) get instantiated with physical processes?

This is a subtly different question, one that makes no sense if you don’t have a framework of viewing both physical phenomena (molecules mostly) and non-physical phenomena in a co-equal way. The only ontological argument that doesn’t have obvious counter-examples is to accept that representations (things like itches, ideas, and mental phenomena) and physical phenomena (molecules, particles, etc.) both exist in the universe. I understand it’s culturally easy to say that representational phenomena are secondary and physical phenomena are primary, but it just doesn’t work; it’s actually counter-explanatory.

We live in a particle universe where there are ideas. We don’t need to figure out what a mind is, or how mental phenomena work. We need to figure out what physical processes engage in representation making – how representational processes and physical processes interact – that is, where both physical and non-physical processes co-occur.

100. Arnold Trehub says:

Calvin: “We need to figure out what physical processes engage in representation making – how representational processes and physical processes interact – that is, where both physical and non-physical processes co-occur.”

In my view representational processes are physical processes. See “Where am I? Redux” and “A Foundation for the Scientific Study of Consciousness” on my ResearchGate page.

101. calvin says:

    Arnold,

I read through the Redux paper. Much of it I found very familiar. Unfortunately, I just couldn’t use this kind of approach to get close to writing the right kind of software to produce representation-making processes and, hopefully down the road, actual sentience.

I agree that neurons must be doing the representational heavy lifting (I argue for that point). It’s been shown that alteration of neurons and neural connections alters representations and experiences. But I don’t think you deal with the problem I keep harping on: that representations must be treated as first-order objects. Nor does the research show that particular neural structures correlate to particular representations. That is, they may and they may not, and there is no physical cause for why they do or don’t. The issue is not the structure of neurons per se but the representation a neural structure instantiates. And I felt like you were mixing the two things together.

I have a few particular problems with your theory’s practical details. One is the reductionist argument. It’s been demonstrated that the left and right retinae, optic nerves, and visual cortices are not identical. However, we can see the same images with both eyes. How is that possible? The association of the two images into one I understand, but how do you reduce the one image to two? How does the representation move backwards towards the different neural progenitors? And also back to the differing features of the images, etc.? We assume that it does, but we have yet to see this correlation. For instance, there are no progenitors of qualia primitives such as color, or shape, or volume, etc. This is the syntactic vs semantic problem again.

It seems you try to manage reduction by invoking a virtual space as an explanatory mechanism, but where is the space? The space is a representational phenomenon, not a physical phenomenon. I agree there are correlates for representations, but only insofar as neurons instantiate different representations with their functional architecture – the neuron structures do not have to recapitulate the conceptual structures of representations. You say that they do. I only think neurons have to capture conceptual structures if that conceptual structure is a representation the organism is aware of. But it does not recapitulate the conceptual framework in the neural architecture. It merely associates the two together.

There is good evidence for this. We can imagine all kinds of different conceptual frameworks for representational details. This must mean that whatever is managing this representational process only has to associate the different representations together, but does not have to recapitulate the underlying representational structure (e.g. your self-concept in a volumetric space). Dreams demonstrate this problem very well. We move around dream spaces dramatically. They fit the criteria for volumetric spaces you describe, but they do not correspond to the neural structures you suggest – that is, “the human brain must have an innate biological structure that can provide us with a volumetric analog of our personal world from an egocentric perspective.” Clearly dream experiences upend this volumetric concept, especially when we see ourselves in a dream (the locus is bifurcated when you see yourself in a dream).

The notion of a foundational volumetric space is problematic for non-visual modalities. Smell, sound, touch, taste, and physical feelings are very different phenomena that don’t have the obvious connection to space as an explanatory mechanism, but these experiences are just as much first-order experiences as vision. You are using space as an organizing principle when you could be using something from modalities like touch or smell or taste as an organizing principle. Why the modality selection bias?

We don’t assume someone lacks consciousness because they do not think of themselves as occupying a space. For instance, we wouldn’t say a congenitally blind child has no ego if they have little or no conception of the volumetric space they are supposed to be inhabiting.

Lastly, it seems as if you give some kinds of representational phenomena (like volumetric space) special status in the world, while making others secondary. The ego is described as a kind of by-product of awareness of volumetric space. Why is ego secondary to volumetric apprehension? How does one representation supervene(?) on another representation physically? Don’t ego and volumetric-space experience co-occur? What is your basis for biasing one over the other?

I could never find a way for this class of ideas, where representations fall into hierarchies like conceptual frameworks, to work as a basis for machine sentience. To make some hierarchy structure existentially valid requires that the framework somehow pre-exist the syntactic (or physical) structure and that no other framework be possible. Frankly, the variety of frameworks all obviously exist because they are representations. But none of them reveal themselves as emergent properties from either looking at slides of neurons or building up thousands of lines of code. It just seems like this one makes a lot of sense representationally, but it doesn’t make any more sense physically (from the point of view of interacting molecules).

I take the complete opposite view. All these representations exist. So things like volumetric spaces are real things. So how do our neurons instantiate volumetric representations? The ideas of ego, self, identity, even consciousness are ideas. They are representational content. And so our neurons must be instantiating them too. But we don’t have to recapitulate a conceptual hierarchy of self or being or a Cartesian theater to get ego, identity, or consciousness. We need to produce the basic representational process, which then produces those particular representations. The conceptual hierarchy you imply in Redux is not the same as the elements the hierarchy arranges. They are different representations. There is no reason you can’t have an ego and not have a conception of volumetric space.

And to argue otherwise is a question of representations, and not a question of the physical phenomena, because the only relevant physical phenomena are the interactions of molecules. The neurons do not “do” anything. It’s only molecules and chemistry. Which means I should be able to ask: How do the molecules and molecular interactions produce a volumetric space? And obviously they don’t. You make an appeal to neurons, but neurons are not physical things; they are conceptual things we apply to a bunch of molecules.

I make the same appeal, but I am explicit in saying the neuron is a conceptual thing. It is a representational phenomenon that is a side-effect of the underlying molecular interactions. The neuron is a proto-idea produced by underlying molecular processes. We can build up neuron structures after that, but very clearly we are building up a representational structure that co-occurs because of the underlying molecular chemistry. But the neuron structures I refer to mean a different class of representation than the kinds you are talking about. The cells are a representational system that then instantiates the representational objects we call experience, where there are things like volumetrics and egos.
[ representations/concepts/qualia → representational processes co-occurring as persistent physical side-effects (cells) → chemistry ] To get digital consciousness, we replace the chemistry element with computation.

I don’t know of any other way to solve the problem of bi-directional causation. I just couldn’t see how you bridge the gap to chemistry. And I didn’t see an explanation of how tweaking the chemistry changes the phenomena in your model – which it should. Obviously, a tiny bit of LSD is going to have profound effects on what volumetric space and ego mean in the retinoid model. But those kinds of alterations seem wholly out of bounds from the description. The retinoid model doesn’t describe how LSD could change the volumetric experience, and conversely it doesn’t describe how volumetric information can change molecular phenomena, like through the molecular interactions we call muscle movement, which occur because we move our eyes and look around.

102. calvin says:

    Arnold,

It wasn’t clear in the text, but I am saying there must be a bi-directional causal chain. At one end we have representations, at the other end molecular phenomena. Representations are affected by molecular phenomena and also generate molecular phenomena. I think the only way to get this bi-directional causation is by having a necessary middle layer where representational processes and physical phenomena co-occur (which is the practical reason why representations and physical phenomena must both be treated as first-order objects).

    Representations, such as thoughts, egos, feelings, ideas etc are one layer.

    The middle layer is where representational processes and primitive structures co-occur as persistent physical side-effects – these are cells and cell groups.

    And the physical layer is where we have molecular interactions ie. chemistry.

    I just couldn’t see in the Retinoid model how ego-driven desire and “actions” produce actual molecular changes… which in turn produce “movement” and cause changes to volumetric space. Whether the experiences occur in physical space, a video game, or a dream, both physical phenomena (molecular interactions) and representational phenomena (subjective experience) are occurring, but the Retinoid model doesn’t tell me how that happens. Isn’t that a key question the model should answer?

  103. calvin says:

    Arnold,

    I just read (skimming the repeating parts) Foundation… Both papers have these great observations but then jump off into this complicated description of what is happening, without explaining how or why the complicated description is necessary.

    I have argued that a solid foundation for the scientific study of consciousness can be built on three general principles. First is the metaphysical assumption of dual-aspect monism in which private descriptions and public descriptions are separate accounts of a common underlying reality. Second is the adoption of the bridging principle of corresponding analogs between phenomenal events and biophysical brain events. And third is the adoption, as a working definition, that consciousness is a transparent brain representation of the world from a privileged egocentric perspective; i.e., subjectivity.

    I don’t understand why it’s necessary to assume a metaphysical dual-aspect monism. Then assert the importance of particular descriptions and features in that kind of monism. Then assert a bridging principle, all to get to a particular kind of subjectivity. Why make it so complicated?

    Why assert subjectivity is some special thing you have to explain scientifically? What if it’s not? It seems you are making the case that subjectivity or consciousness is, more or less, a mind. So, what happens if you abandon that apparatus completely? If you stop asserting there is this thing, mind… subjectivity… whatever, what happens? Does anything change?

    You obviously don’t get rid of the contents of subjectivity, but what if subjectivity itself is just a content, and not a functional thing at all? If mind is a content of representation but not a source of representation, what changes? I would say nothing changes. There are representational processes going on (obviously), so why is it necessary to assert those representational processes require subjectivity or a mind to occur? Why can’t subjectivity just be the fact of their occurrence but not the cause of it?

    Why not just say, the contents of experience exist, full stop? They don’t have to exist because of a mind. Adding a mind (or some similar idea) into the mix means you have to describe what that thing is and how it functions… which you have done. But what do you get from that apart from the added complexity and an argument to prop up the idea of a source of non-physical phenomena?

    I mean, we don’t assert a source of physical phenomena – they just are. Why do we have to assert a source of non-physical phenomena? It just seems overly complicated.

    And that isn’t even exploring the particular cases where your subjectivity model can’t account for subjective phenomena – such as twins who have the same ideas about things. Or when lovers think the same thoughts. Or when you and your friend are on the same wavelength. Or mathematics and abstractions. How can two different brains have the same abstractions? It’s like only a particular kind of subjectivity is allowed in the model (egocentric). Yet it seems like human experience is enormously expansive and wider than that description can allow, never mind the animals!

    I really want to know what you lose if you junk the whole metaphysical apparatus and treat ideas, feelings, qualia, egos, etc. – the contents of experience – not as products or features of a “mind” but as things in the universe directly. How do you even test the model you put forward to see if it works? How would you implement it in a machine? For instance, how do beliefs form? How do beliefs differ from perspectives? How do they differ when looking at the physical phenomena?


  104. Arnold Trehub says:

    Calvin: “How do you even test the model you put forward to see if it works?”

    In my seeing-more-than-is-there (SMTT) experiment, I was able to demonstrate that the neuronal structure and dynamics of the brain’s putative *retinoid mechanism* successfully predicted that subjects would have a vivid conscious experience/feeling of a triangle oscillating in space when, in fact, there was no such object in their visual field. This experience can be properly understood as a hallucination. Moreover, the properties of the putative retinoid mechanism enabled the experimenter to independently control the height of the subject’s phenomenal/felt triangle while the subject was able to control the width of the phenomenal triangle to maintain approximate height-width equality. It seems to me that this SMTT experiment is analogous to the double-slit experiment in physics, which demonstrated that light has the complementary properties of particle and wave, because it demonstrates that conscious content/feeling has the complementary properties of a particular kind of brain activity (3rd-person perspective) and phenomenal experience/feelings (1st-person perspective).

    Given this experimental demonstration that shows a putative biological mechanism predicting the features of a complex conscious experience, why shouldn’t we consider the retinoid model to be a biological explanation of consciousness within scientific norms?

  105. calvin says:

    Arnold,

    I’ve been reading ch. 14 to understand your claim, and on the face of things, I have no disagreements with the experiments or the representational phenomena those experiments demonstrate.

    But why do those representational phenomena require a functional egocentric space? It seems much more straightforward to say: under x and y conditions we get a and b representational phenomena. No egocentric space necessary. No 3-D Retinoid space necessary. We can test other models, say computational networks, and determine if they can create node structures which also reproduce outputs only when the semantic content we are looking for occurs. And the answer is likely yes, because this entire argument rests on network configurations with stored values in nodes: the nodes simply have to wire up so that the present observed value and the previous observed value co-occur and are represented by a third node. That third node’s value is the “something that isn’t there”.
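    The wiring just described can be sketched in a few lines. This is a toy illustration of my own (the function name and frame labels are invented, not anything from the Retinoid papers): a third node C fires only when the present observation (node A) and the stored previous observation (node B) co-occur, and C’s value is a composite that appears in no single input frame.

```python
# Minimal sketch (illustrative, not from the thread): a three-node network
# in which node C represents the co-occurrence of the present observation
# (node A) and the previous observation (node B). C's value binds both --
# the "something that isn't there" in any single frame.

def run_network(frames):
    previous = None          # node B: stored prior observation
    composites = []          # node C: co-occurrence representations
    for current in frames:   # node A: present observation
        if previous is not None:
            # C fires only when A and B co-occur; its value binds both.
            composites.append((previous, current))
        previous = current
    return composites

# Each frame is a single sliver of a shape seen through a slit; no frame
# contains a two-segment contour, but node C represents one.
frames = ["edge@y=1", "edge@y=2", "edge@y=3"]
print(run_network(frames))
# [('edge@y=1', 'edge@y=2'), ('edge@y=2', 'edge@y=3')]
```

    The point of the sketch is only that the composite value exists in no input frame, which is the structural claim being made about the SMTT effect.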

    Obviously computational networks do not have egocentric spaces. I know there are experiments with rat neurons where the neurons will auto-wire to respond to visual information mediated by computers, and even interact with the environment. It seems like an experiment could be constructed where such a cluster of neurons could capture the SMTT effect without having an egocentric space.

    “I argue that egocentric space is anisotropically represented in the brain’s 3-D retinoid system so that representations of objects within regions of increasing distance are progressively collapsed onto nearer Z-planes…”

    Firstly, the brain’s 3-D retinoid system is itself a representation. The brain is a mass of molecules. The molecules form structures we call cells, and the cells have these different shapes and form a very interesting network. But that network is not in the form of, and does not describe, a 3-D system. The brain doesn’t have any kind of physical hierarchy. There is no 0,0,0 cell. Sure, the brain has a developmental history, but there is no extrinsic structure to the brain. The brain’s structure is determined only by the intrinsic interaction of molecules. How do the molecules “set” the 0,0,0 cell? The retinoid model requires a mechanism which describes how a structure (like the 3-D retinoid structure) back-propagates down to the formation of neuron shapes in the brain. Where is the mechanism that causes the back-propagation which determines neuron shapes, and sets which cell is the 0,0,0 cell?

    What I’m saying is that you are mixing up ideas and representations with molecular structures and asserting they must correlate. I am saying they categorically cannot, because these ideas have no relationship whatsoever with the molecules, which produce the cells in the first place.

    This confusion shows very strongly in this statement:
    “A guiding premise motivating the models that I have proposed is that the structure of the human cognitive brain has been shaped by evolution to cope with the ecologically significant demands of the human environment. In this process, we can imagine a principle of neuronal economy at work. We would expect limited genotypic neuronal resources to be allocated for specific kinds of cognitive representation and computation in rough accordance with their importance to the survival of the species. In the visual system of lower animals, there are clear examples of this principle.”

    Evolution does not shape brains. The molecules (DNA) which selectively endure because of cellular and organism survival do not encounter demands, and they do not encode brains or brain shape. Neuronal economy is not a principle that affects gene expression. But gene expression is a primary function which determines if a cell becomes a neuron, what kind of neuron a cell is, and what shape that neuron takes. How does the principle of economy affect allocation of resources?

    The larger and more complicated forms of physical phenomena we can observe, such as brains, are not determined from the top down by ideas (such as evolution, or demands, or environment), but are generated from the bottom up by the interaction of molecules.

    Let me illustrate this with the SMTT effect. The demand of acquiring food to survive (which is a demand – urges become very demanding) would, in theory, drive the development of better mechanisms for identifying food sources. In this case, binocular vision allows us to better see through forward obstacles, like trees in a forest, or tall grasses, or bamboo stalks, and determine the shape of an object through those gaps (the SMTT effect). Animals with eyes on the sides of their heads are not able to do this to the same degree (e.g. horses, cows, etc). And thus an animal which improves on this capacity will be able to hunt food sources in a greater variety of conditions and better survive the demands of its environment than an animal which does not.

    So how do the demand for food, the fact of binocular vision, and the importance of improving vision for survival produce changes to DNA structures and expression in the cells of animals? Obviously those _concepts_ do not affect DNA or gene expression. The DNA is only ever selected for, transcribed, or expressed because of molecular-level phenomena. Survival of organisms is not a causative force. It is a side-effect of other molecular phenomena we (representationally) describe as “finding food” and “not getting eaten”. But survival does not cause the formation of features like brains or brain structures. Natural selection does not select organism features (it’s a misnomer). Organism features are solely determined by molecular phenomena (like DNA expression), and it’s only the cells and DNA which are preserved over time that propagate. If a new feature evolves, it means the DNA structure changed slightly, or some other feature of the cell changed so that expression was altered. But to be clear, the organism’s feature (such as a brain with better memory) was a side-effect of the underlying molecular processes.

    Ideas, and it doesn’t matter what the idea is, do not back-propagate to molecular interactions. The shaping of the human brain is solely caused by molecular phenomena. It is not caused by the ecological demands of the environment, because those things are ideas and not molecular interactions. In theory, it seems like ideas such as demands and environments and structures would cause related physical phenomena like brain structure, but they do not cause those phenomena in fact. It’s only molecules that produce structures of groups of molecules. Ideas are not necessary.

    Lastly, you show we have these representational phenomena (SMTT, the moon illusion, etc.) but then assert there must be some representational system (the 3-D retinoid system plus the egocentric space) as a means to account for them. But the 3-D retinoid system does not account for the fact that SMTT and the moon illusion exist. Illusions exist, and the model glosses over the fact that SMTT is a kind of illusion but doesn’t account for how an illusion can exist. It implicitly treats the physical phenomena as superior to the representational phenomena – because it’s an illusion – and then tries to account for why it’s an illusion. But it skips the central question: What are errors, and how can they exist? How can illusion even exist? How does the model determine when an illusion is an illusion?

    The model breaks down in all kinds of examples at this level. For instance, watching the cavalry come over the hill in a western, the cavalry is in the far distance of a putative 3-D retinoid space but is, in fact, only as far away as the movie screen. Looking at landscape paintings, we see things in the distance, the middle ground and the foreground. But that is just an illusion. They are different-sized shapes in particular positions that make it appear there is perspectival distance. When I play Grand Theft Auto, I drive all over the map. I am moving around in a putative 3-D space, but in fact the only things changing are pixels on a screen. All the phenomena are equidistant.

    To explain this kind of phenomena, I would guess you will say that the retinoid system is being tasked to manage different kinds of experiences as volumetric spaces. But then what system is doing the task-managing of 3-D retinoid space itself? How does my egocentric space get converted into GTA as I explore Los Santos and then repurposed as I walk down an actual street in “meat” space? There must be some meta-level representational processing going on that itself has a neural correlate, but it’s not a space where there is a cell with a 0,0,0 coordinate – it’s something completely different. So what is that thing, and what and where is it in the brain?

    My point is that these representational architectures (mind models) do not explain anything; they just add cruft. They don’t explain _how_ there can be things like errors or illusions (which is their basic job), and they fall victim to all kinds of meta-critiques when we move up the representational ladder. (The retinoid model really does break down when you start thinking about how it works in video-game land.)

    It was amazing to read your paper and come across a diagram and be like, “I know that diagram!” I’ve drawn the same kinds of things. The little circuit diagram, the 3-D retinoid ego-centric space diagram. yeah. very familiar.

    However, I could never get these kinds of models to work. They leave off two key questions. One, how do different representations and kinds of representations exist? The model doesn’t account for representation itself. It doesn’t account for how numbers exist, for instance. And two, it doesn’t give an account down to the actual physical phenomena, which are the molecular interactions. These models stop at a representational description but do not account for how molecular interactions are changed by ideas (ideas producing muscle-cell changes) and how molecular interactions change ideas (e.g. the LSD problem).

    If you have answers to these questions which preserve your particular model, I would be interested in reading them. But I don’t think you can ask these kinds of questions in your model, because the answers have to be mediated through the model itself, and that is why it is overly complicated. You have to embed all the representational phenomena inside your model, but the retinoid space itself is a representation and you can’t embed that in the model. It’s a contradiction.

  106. calvin says:

    Arnold,

    Let me see if I can turn my counter argument into a syllogism.

    Particles and particle aggregates (eg molecules and molecule composites) exist.
    Physical phenomena are composed of particles and occur because of intrinsic particle properties (forces).
    The 3-D Retinoid system is not composed of particles.
    Therefore the 3-D Retinoid system is not physical.
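    Read formally, the first syllogism is a modus tollens on the premise that every physical phenomenon is composed of particles. A sketch in Lean (the predicate names are my own labels, not anything from the thread):

```lean
-- Illustrative formalization of the first syllogism; names are invented.
example (Phenomenon : Type)
    (Physical ComposedOfParticles : Phenomenon → Prop)
    (retinoid : Phenomenon)
    -- Premise: every physical phenomenon is composed of particles.
    (h1 : ∀ p, Physical p → ComposedOfParticles p)
    -- Premise: the 3-D Retinoid system is not composed of particles.
    (h2 : ¬ ComposedOfParticles retinoid) :
    -- Conclusion: the Retinoid system is not physical.
    ¬ Physical retinoid :=
  fun hp => h2 (h1 retinoid hp)
```

    The logical form is valid; whether the premises are true is the substance of the dispute.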

    Question: How does the Retinoid system affect particles, eg molecules?

    Other phenomena exist (e.g. numbers, feelings, ideas, qualia, actions, experiences).
    These phenomena are not composed of particles.
    Therefore these phenomena are not physical.

    Some non-physical phenomena represent physical phenomena.
    Some non-physical phenomena represent other non-physical phenomena.
    There are no non-physical phenomena which are not representations – e.g. representational associations or relations.
    When representation occurs, the representation or association itself is not made of molecules (even if both associated elements are physical, the association itself is non-physical).
    Therefore, representation is a non-physical process or function, and non-physical phenomena are representations or representational phenomena.

    The 3-D Retinoid system is a non-physical phenomenon.
    There are no non-physical phenomena which are not representations.
    Therefore, the Retinoid system is a representational phenomenon.

    Question: Can we determine the correctness of the Retinoid system in its representation of physical phenomena?

    One kind of representation is what we call accuracy or truth or consistency or correctness.
    Many representations are associated to this particular instance of correctness.
    Some of these representations are inaccurate and some are very accurate.
    How different correctnesses are determined is a representational process.
    There is a corollary: the idea of errors.
    [Representations are associated to the error phenomena, often based on other co-occurring representations (context).]

    Physical processes are non-representational.
    Correctness and errors are not physical processes, not molecular interactions.
    Therefore correctness or errors do not affect or apply to physical phenomena. (This is the gap between syntax and semantics.)

    The Retinoid system is a representational phenomenon.
    Its correctness is a representational phenomenon.
    Therefore the correctness of the Retinoid system has no relationship to the molecular phenomena the Retinoid system purports to represent.
    – That is, if the Retinoid system is correct, its accuracy to the physical phenomena is coincidental, because accuracy is not a physical phenomenon.
    Additionally, the arguments (which are representations too) that show why the system is correct are themselves only valid if they are coincidentally accurate (it’s the same syllogism).

    Conclusion: It isn’t likely the Retinoid system is an accurate representation of how representations occur or function, nor of how that happens from physical phenomena, because it requires too many coincidental arguments to also be true.

    Suggestion: Instead of building up complex representations of how physical and non-physical phenomena occur and interact, we should use a functional approach to describe the representation process and see if those functions can support the plethora of representational phenomena and processes we observe. And then see if those functions are recapitulated in molecular processes – coincidentally.

    Instead of a complicated system, reduce the problem to simple functions and phenomena which give rise to the observed complexity (reductionism).

  107. john davey says:

    Calvin


    One, the “mental” is not comprehended fully.

    It is. The nature of mental life is understood by all human beings who possess it: it is no mystery. The physical, on the other hand, constantly eludes understanding.


    Do you comprehend what an itch _is_?

    Yes. It’s an itch. That is comprehension of an itch. What you are referring to is a scientific explanation, which is something different.


    The mathematics is comprehension.

    No it isn’t. It’s not any kind of comprehension, as gravitation and quantum mechanics demonstrate. It is prediction.


    Yes we can conceive of a method where physics leads to “mental phenomena”

    OK. Provide one.


    Mental phenomena are a sub group of the class of all representations.

    Ok – we’ll need some scientific references (and some accepted definitions of the terms) for that.


    We live in a particle universe

    We live in a universe. In that universe there are human beings who have developed a particulate theory of matter. There is nothing requiring or obliging the universe to provide humans with a way of predicting mental phenomena from physics and its particle theories – in fact, as currently constituted we know it to be an impossibility.

  108. calvin says:

    John,

    Do you comprehend what an itch _is_ ?
    Yes. It’s an itch. That is comprehension of an itch. What you are referring to is a scientific explanation, which is something different.

    Okay, how is simple perception of an itch different from a scientific comprehension? Meaning, what makes the two apprehensions different? How or why is a scientific explanation not a comprehension?

    How is a prediction of phenomena not a form or a kind of comprehension? If it’s not a comprehension, then what is it? Mathematics doesn’t only apply to physical phenomena. It applies to abstractions and concepts such as numbers, sets, functions, etc. How is a mathematical understanding of these phenomena not comprehension?

    I am curious what you mean by “comprehension”? Do you mean awareness?

    Yes we can conceive of a method where physics leads to “mental phenomena”
    OK. Provide one.

    Alright, but before I do, I need to translate what I am talking about into concepts you find acceptable. Otherwise you’ll tell me to cite a scientific reference or tell me some phenomenon like mathematics isn’t comprehensible.

    So, what are your definitions of mental phenomena? What objects fall into the class of mental phenomena? Obviously you do NOT think “mental phenomena” are, or can be generated by, physical phenomena. So mental and physical phenomena (whatever physical phenomena may be) are different. Is there some third class of phenomena that is neither physical nor mental?

    For instance, are mathematical concepts such as numbers or functions mental phenomena? I’m actually confused on this point because you say mathematics cannot be comprehended but then say all mental phenomena are comprehended fully.

    I’m also confused about how all mental phenomena are understood. You must have encountered novel mental phenomena at some point, which means that before the encounter those phenomena were not comprehended. It seems you are asserting there is no such thing as novel mental phenomena – e.g. no such thing as new music, or new feelings, or new itches. What are novel non-physical phenomena? If before they are discovered they are not comprehended, then they cannot be mental phenomena? But they clearly are not physical phenomena, so what kind of phenomena are undiscovered ideas and experiences?

    If you can give me how you classify phenomena (and maybe what kinds of processes distinguish those phenomena) then, I think I may be able to give you a conceptual framework for how physics leads to “mental phenomena” (note that I don’t mean physics causes the mental phenomena, rather physics instantiates mental phenomena). Now if you come back with 5000 different classes of phenomena, I will not be able to make an argument you will accept. But if you come back with there being two types of phenomena, and give me a decent range of examples of what those different classes of phenomena refer to, then I think I can give you a conceptual framework.

    If you come back with these two classes, but there are phenomena that do not fit into either of those classes, then I’m not going to be able to make an argument you will accept, because you can just shove whatever I say into the undefined class(es) and say my concepts do not apply, etc.

    If you come back with more than two, I’m going to argue that really there are only two and at least some of your additional classes are subsets of more basic classes. So, please consider that when you explain what your classes are and why there are more than two. For instance, the physical and the non-physical phenomena are the two classes I refer to. And the non-physical are representational – see the syllogism argument I made to Arnold Trehub above. But I’m happy to work through the reasoning if you have more than two classes of phenomena. And I would be really interested in examining that kind of metaphysics.

    So, you describe what classes of phenomena there are, and what kinds of things each class contains, and generally what makes one kind of phenomena, say, mental phenomena, and what makes physical phenomena physical (and whatever other classes you think you need). And then I’ll see what I can do about constructing a conceptual framework that you might find acceptable.

  109. Arnold Trehub says:

    Calvin: “But why do those representational phenomena require a functional egocentric space? It seems much more straightforward to say, under x and y conditions we get a and b representational phenomena. No egocentric space necessary. No 3-D Retinoid space necessary.”

    What you claim is straightforward to say is simply a description of our experience. It does not describe what causes our experience. In other words, what you propose has no explanatory value.

  110. calvin says:

    Arnold,

    “What you claim is straightforward to say is simply a description of our experience. It does not describe what causes our experience. In other words what you propose has no explanatory value.”

    tl;dr

    Yes, what I am saying is description. But we need accurate descriptions to build good explanations. One issue with consciousness and representational phenomena is explaining how explanations themselves come about. The Retinoid system does not explain how it is itself generated. Where is the Retinoid system in egocentric space? How does representation itself happen? That is the key question.

    We need to explain how representational processes work – as representational processes. Then we need to see that certain physical phenomena, when they occur, co-occur with representational phenomena. With those two things alone we can be fairly certain that, in some way, the physical processes are recapitulating the representational processes. If we know that under x and y physical conditions we get a and b representational phenomena, then somehow the x and y conditions are producing the same representational processes that get us a and b – representationally. We don’t have to add on extra representations like a 3-D Retinoid system; we just need to explain how representation itself functions. (see below)

    I’m going to try to layout some descriptive facts that I think we both agree on.

    From the point of view of physical phenomena, explanations are unnecessary. The physical phenomena do not need to explain how experience occurs, or even how chemistry works. The molecules do not care if consciousness is happening or not. Explanations of how consciousness and ego and other phenomena occur have no effect on the physics or physical processes. The molecules do not care if modern physical science is explanatory or not.  Molecules simply don’t care about any of those things (if there was such a thing as caring to molecules). Explanations are not connected to molecules, so how do explanations happen?

    We know that experiences occur, that we are aware. We experience things like colors, pains, numbers, sentences, affection, certainty, arguments, etc. And that “etc” is a pretty long list of different things. When we say we experience something, we mean we are aware of it. And, as you show, awareness (or experience) always has content. It’s always the awareness of something – awareness of some X. Awareness is always like that. These contents of awareness are not made of molecules; e.g. the number 2 isn’t made up of molecules.

    The contents of awareness or experiences have relationships to each other. We use some of them to describe and explain physical phenomena. One example of this is the word “phenomena”. This word is a representation of another kind of thing, a concept. This associating activity just happens. We can direct the association-making (like when we do improvisation) but it is also constantly happening. We do not necessarily choose what associations occur. We do not choose what colors to see when we look around.

    I’m going to call this class of things, which are non-physical, representational phenomena. As a description of the facts, I think we are mostly in agreement here. We could call these the phenomenological objects as opposed to the physical objects, or something like that, but I like using “representational” because it refers to that property these phenomena have of being associated to each other, and how some of them are associated to physical phenomena.

    It appears that when certain physical phenomena occur under certain conditions, representational phenomena also occur. For instance, when there is a particular composition of molecules we call a body, with a brain and eyes, where there are particular so-called opsin molecules in certain cells, with enough free levels of acetylcholine molecules and a whole bunch of other unspecified physical conditions, the representational phenomena we call seeing and colors happen. — This is the part where I am saying under x and y conditions we get a and b representations.

    Now, you are saying there must be an explanation or reason for why this happens. And you are also saying there must be a particular kind of representational phenomena which happens (Retinoid system) which arranges and structures (into egocentric space) the representational phenomena (a and b) which are co-occurring under x and y conditions.  

    You describe the Retinoid system as a representational phenomenon (you acknowledge it’s not made of molecules). And I think, from reading your own description, a bullet to the head will extinguish an individual Retinoid system. So isn’t the Retinoid system itself an a or b representational phenomenon which co-occurs under certain x and y physical conditions? You do not give an explanation for how the Retinoid system itself comes about. Like me, you are just describing that it does.

    What bothers me is the claim that all other representations are arranged or managed by the Retinoid system, and that the Retinoid system arranges them according to another representation called egocentric space. How does the Retinoid system do the arranging? How does the Retinoid system generate or connect to the representation of egocentric space? What is the non-physical mechanism or function? Or is it the neurons which do it? How do the molecules of the neurons do that representational arranging?

    Why is the generic description – that a and b representational phenomena co-occur with different x and y physical conditions – somehow a lesser description than your supposition that x and y physical conditions are producing a particular representational phenomenon in the form of the 0,0,0 cell? It seems like the general observation must be true for the Retinoid system to work, but you also need specific phenomena to support the specific representational features of the Retinoid system. That is much more complicated. It requires the existence of a valid and true metaphysics (the Retinoid system), and that metaphysics just happens to correspond to x and y physical conditions. Plus there must be another representation, the 0,0,0 position in egocentric space, which also occurs because of some other p and q physical conditions. How do you tell that is actually happening? How do you explain the special status these components of the Retinoid system have?

    And how do you construct the representational hierarchy where the Retinoid system is a special type of representation with special features that lets it arrange the other representations?

    The question is: why a Retinoid system versus, say, a programmatic system like a video game? A program where the contents of the video game (the representational phenomena) are data elements. E.g. where some strings are polygons and pixels for vision, other data strings are smells, some strings are wave files for sounds, other strings are concepts, etc. Why is the 3-D space representation necessary? What makes 3-D space necessary vs something else? The Retinoid model does not explain why other models cannot occur.

    Couldn’t I make the exact same argument you do with 3-D Retinoid space, without putting all the representational phenomena into an egocentric structure? Instead claim the brain corresponds to a computer and runs a program which runs all the representational phenomena which are different kinds of data that the program can process? People actually make this argument, and have some experimental evidence to show that is what is going on (eg. doing mathematics requires some kind of neuro-mathematical correlate). And I would ask the same questions about that kind of system – how could the program running on the brain, be contained in the program itself, since the program is a representational thing?

    I don’t get why all the non-physical phenomena, except for the Retinoid system itself obviously, are forced to fit into the Retinoid system? What is forcing that?

    Instead I go back to the simpler descriptions.  Awareness exists and has contents. There are representational phenomena, which are the contents of awareness. There are physical phenomena (what we call particles, molecules and things made of molecules and affected by particle forces). And it appears that under x and y physical conditions we get different a and b representational phenomena. 

    Just from that, what do we know?  

    We know that Awareness is a function: awareness of contents – Aw:X. And if it’s a function, what does the function do? Aw:X = ?? By observation we notice that when contents change, the awareness of those contents changes. And when the awareness changes, the contents change. If I think of something different, the thoughts themselves change. If something comes into view, the things I see change. It turns out that Aw:X = X. Whatever the contents of awareness are, the awareness and the contents themselves are identical. And there are a lot of contents. So as contents change, or as awareness changes, we see the variety of contents proliferate.

    And what do we observe from that? That we have before and after contents and we have before and after awareness. We have awareness of before and after themselves.  And this concept of before and after is itself a content of awareness. And these concepts seem much more durable than the contents which come and go.  We also notice there is this incredible structure or association between all the contents. Sometimes you see a person, then you don’t see a person, but you remember that person, then you see that person again, but they look different, but they are still the same.  So there is this thing of difference and similarity, and we associate different and similar things together.  

    E.g. the memory of a person is not the person you physically see, but they are associated together. The view of a person in a mirror and the direct view are very similar, but we represent the two images as being very different things. How does all this kind of stuff work? Well, let’s just describe how the associative process works.

    Let’s take 3 objects: an image of a person you see (image), the reflection of the person (mirror), and the memory of the person (memory).

    Aw:image = image, but then the person steps out of view and all you see is their mirrored reflection: Aw:mirror = mirror. Then they move further away and the mirror disappears, so Aw:memory = memory.

    What do all these things have in common? They are the image, mirror reflection, and memory of a person. So there is this 4th object (person) that exists. What is the person’s relationship to the image? We are aware of the image as the awareness of the person.

    Aw:image as Aw:person

    And we are aware of the image as the person.   So, Aw: (image as person)

    Let’s use a single symbol for this “as” function: the semicolon. image;person is read “image as person” or “image in the form of person”.

    thus, Aw:image;person = image;person. Which is how we experience things. We look at a picture of someone, like our mother, and say, “That is my mother.” We mean, that is a picture of my mother, but the point is the same: picture ; mother.

    So we can describe, the relationships the contents of awareness have to each other. 

    image ; mirror
    image ; memory
    person ; image
    person ; mirror
    person ; memory

    Can we do more than that?  Can we describe the relationship itself functionally? 

    Yes.  Because obviously image is the same image when it’s mirrored.   eg.  image = image ; mirror

    person = person ; image
    person = person ; mirror
    person = person ; memory

    this is associative identity (or representational identity).  But it is one directional. 

    person = person ; image 
      and
    image = image ; person  
      but 
    image != person

    image -> image;person -> person
    person -> person;image -> image

    Let’s recapitulate seeing an image of a person, seeing the person move away and only seeing the reflection, then having them disappear and remembering them.

    image = Aw:image
    Aw:image;person = image;person
    image;person = person
    person = Aw:person

    -so we have two objects and two awareness of objects. the person moves and disappears but we still see their reflection, so the image and Aw:image vanish. Aw:person remains. 

    mirror = Aw:mirror
    Aw:mirror;person = mirror;person
    mirror;person = person
    person = Aw:person

    -so now we have two contents again, (person and mirror). then the person moves again and the reflection disappears, thus Aw:mirror disappears. But the person remains. Is there anything else?

    Aw:person = person
    Aw:(Aw:person) = Aw:person
    Aw:(Aw:person);memory = (Aw:person);memory
    memory = memory;(Aw:person)
    memory = Aw:memory
    memory = memory;person -> person = person;memory
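    The notation above can be turned into a tiny runnable sketch. This is only an illustrative toy, not part of the original comment: Aw is modeled as the identity function, and the “;” relation as an ordered pair (which makes it one-directional); all names are invented for the illustration.

```python
# Toy model of the Aw / ";" notation (illustration only).
# Aw:X = X is modeled as the identity function, and x;y ("x as y")
# as an ordered pair, so the association is one-directional.

def Aw(content):
    """Awareness of a content is the content itself: Aw:X = X."""
    return content

def as_(x, y):
    """The ';' relation: x experienced as y."""
    return (x, y)

def follow(assoc):
    """x -> x;y -> y : following an association yields its target."""
    _, y = assoc
    return y

image, mirror, memory, person = "image", "mirror", "memory", "person"

assert Aw(image) == image                        # Aw:image = image
assert as_(person, image) != as_(image, person)  # one-directional: image != person
assert follow(as_(image, person)) == person      # image -> image;person -> person
assert follow(as_(person, image)) == image       # person -> person;image -> image
```

    The ordered pair is just the simplest structure that captures the one-directionality claimed above (person = person ; image, but image != person).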

    From the descriptions of phenomena we know, and the precursor functions, there is no reason to think Aw:X cannot be a thing itself, as long as that awareness can be a content of awareness. Moreover, it’s exactly this kind of functionality that lets us do things like imagine: where what causes changes to representational phenomena is not what we see, but what we think; that is, when the representational phenomena change themselves around. For instance, we discover things with mathematics, and those things are purely representational. The only way to account for this expansion of mathematical knowledge and information is with representational functions and not molecular interactions. (Changing the Aw:X side of the equation changes the =X side.)

    Now, this particular way to describe how awareness and representation work may not be correct. But the principle it illustrates seems necessary to account for any kind of more complicated representational process. (Although I think it works as is, and am using it as the way to generate computations which co-incidentally produce representations.) You need to have a principle of how representational phenomena work to underlie more complex metaphysics which provide their own way representations work.

    The advantage of taking the reductionist approach is that I don’t need to have a complicated framework (like the Retinoid system) which does the heavy lifting this function provides. I simply have to accept that awareness is real and has contents, that these contents are real phenomena but are non-physical, and that these phenomena interact in particular ways.

    Given these premises and basic interactions, I think it’s possible to articulate the 3-D Retinoid system as a very complex set of representations and representational functions. That is, we should be able to generate the Retinoid system as a product of the right kind of representational primitives and representational functions. And if we can do that, then we should be able to generate a whole bunch of other kinds of explanatory models.  Eg. Cartesian theater, or materialism, the metaphysics of Deepak Chopra. And that makes sense, because obviously people have developed all kinds of models for how representational phenomena happen. 

    A workable description of how awareness and representational phenomena work must be able to manifest the plethora of metaphysics people have come up with, because those are also representational phenomena. Which means that, in principle, this kind of approach is also sufficient to describe and articulate all the representational phenomena (including itself).

    So, how does the brain do awareness and representation?  And that is the important point- the brain does not produce representations (the brain is only molecules).  Representations must be treated as first order objects. But if representations just exist, and we know the brain and body are molecular things, why and how are those molecular phenomena necessary for experience?

    And the answer for that is the molecules instantiate the representational phenomena and awareness functions; both the representational phenomena and the molecular phenomena co-occur. And this co-occurrence happens because the complex of associations and functions that let us generate representations (x = x;y and x -> x;y -> y) are functional things which molecules and molecule complexes, like cells, will sometimes also perform.

    There is not a physical correlate from particular content, like a memory of your mother, to particular molecular structures; rather, the function of representation occurs as a side-effect of physical processes. It’s only the functions and structures necessary to produce those representations that correlate, and the functions and components can be completely generic. But the sheer number of particular functions and particular associations for any particular representational phenomena is incredibly complex (that’s what phenomenology demonstrates so well). So we only need an extraordinarily complex physical set of processes which, as side-effects, recapitulate the functional complexity of any particular representational phenomena, in order to get that representational phenomena: e.g. we need x and y conditions to get a and b representational phenomena.

    Well, the brain and body are particularly complicated, and we know that neurons (as you showed with your SMTT circuit diagram) do recapitulate representational functions. But the molecules only do this as a side-effect. The recapitulation is purely co-incidental. It’s a co-occurrence of the representational functions and the molecular phenomena. That is, the molecules don’t intend to produce experiences or qualia or consciousness. But if the molecules produce the same functions as representational functions, then representational functions happen – to the associated representational phenomena.

    The point is there are principles of how representational phenomena work. And if these principles are sufficiently reductive to generic functions (which I think they are), then we can explain and build up complex representations from generic starting points. We can construct the experiential edifice from a reductionist starting point. Then we only need show that these reductive representational processes also happen in physical interactions (not the content, the processes). And then we should see: where the same set of representational processes happen physically, we get a corresponding representational phenomena happening experientially. The beauty of this reductionist/functionalist approach is we don’t need any fancy metaphysics or metaphysical constructs. And we don’t need any particular phenomenal correlate like a 0,0,0 neuron. We only need functional correlates of representational functions. Performing the representational functions will instantiate the representational phenomena.

  111. Arnold Trehub says:

    Calvin: “The Retinoid system does not explain how itself is generated. [1] Where is the Retinoid system in egocentric space? [2]”

    1. The retinoid model was formulated to explain how consciousness/subjectivity is generated. Explaining how the putative retinoid system was formed in the course of creature evolution is an entirely different matter.

    2. A description/diagram of the retinoid system can be part of our egocentric retinoid space, but the biological system as such is in the brain, not in egocentric space.

  112. Arnold Trehub says:

    Calvin, what do you think performs the representational functions?

  113. calvin says:

    Arnold,

    Where is the egocentric retinoid space? How does it arrange and structure things like urges, a diagram of the retinoid system, or mathematical concepts or numbers? Where is the diagram in retinoid egocentric space?

    I’m sure you have had different versions of the retinoid model, where are those versions in the retinoid egocentric space? Where are the early versions that you have forgotten? Where are the versions you may yet discover? Certainly you had versions that were incomplete or erroneous. Where are those versions? Where is “erroneousness” in egocentric space?

    The physical phenomena cannot be “erroneous”. it doesn’t make sense to think of molecules or atoms as doing the wrong things. Even if the physics generates something akin to the Retinoid system, how does the physics generate errors? How does it generate the error concept? How does the Retinoid system generate errors? Where is error stored in retinoid egocentric space?

    Representational phenomena (subjective phenomena if you prefer) are observable; we think and talk about errors, and mathematics, and beauty, and pain, and colors. We are now talking about a retinoid egocentric space. But there is no such physical space. I can’t point my finger and say: this is where retinoid egocentric space is located, and this is where gentleness is located in that space, and this is where the number pi is located. Because retinoid egocentric space is a representation and not actual physical space.

    There are all kinds of representational phenomena that cannot be stored in the retinoid egocentric space. It is not storing pi. Because if it is, it has to be storing all the digits of pi. Pi is not a physical phenomena, and it’s not stored in egocentric space, so where is Pi and what is Pi if it’s not physical and not a component in egocentric space?

    what do you think performs the representational functions?

    And the answer is the brain appears to be performing those functions. But the functions exist whether the brain performs the function or not. functions are representations too. Representations exist.

    For instance, 15241578750190521 is a number. Numbers are non-physical representational phenomena. Do you know what the square root of 15241578750190521 is? It must have a square root. The relationship of 15241578750190521 to its square root is a function. If I do not compute the function, does that mean the function does not exist? Of course the function still exists.

    We don’t believe that the functions of algebra did not exist in prehistory; we just think no one had discovered those representations and those representational functions. Did you know the square root of 15241578750190521 is 123459876? Because if you did, you already had the relationship of those numbers to each other (perhaps via a mathematical function) in your retinoid egocentric space. But if you didn’t, and instead calculated the square root, then where was the relationship – the function – before the calculation? And then how did the answer “pop” into egocentric space?

    Did you notice that I have a typo in what the square root is? 123459876 is not the square root. If you noticed that, how did you know it was an error? The only way to explain errors, is to think of them as representations of the outcomes of other functions where these other functions produce differing relationships or answers (the primitive representational function of same and difference again).

    it’s possible to experience 123459876 as the square root and 123456789 as the square root. It’s just that one of them is also the wrong answer. but there is nothing that prevents the representation itself from existing. 2+2=5 is a valid representation. It’s just not valid mathematics (a subclass of representations). We know it’s a valid representation because we use it all the time to indicate when someone makes a calculation type of error.
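    As an aside, the arithmetic in this exchange is easy to check mechanically. A minimal sketch, using Python’s standard `math.isqrt`, with the numbers quoted above:

```python
import math

n = 15241578750190521
r = math.isqrt(n)  # exact integer square root

assert r == 123456789        # the correct square root
assert r * r == n            # it squares back exactly to n
assert 123459876 ** 2 != n   # the deliberately typo'd value fails the check
```

    The check is exact integer arithmetic, so there is no floating-point ambiguity about which candidate is the error.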

    Representation phenomena exist. All of them exist. Fantasies, illusions, things we have never thought, they all exist. The Retinoid system is a real thing that exists. The first incorrect versions of the Retinoid system are also real things that exist. All the representations have relationships to each other. These relationships are functions and these functions are also representations.

    Representational phenomena exist, and they work in a particular kind of way which is a function. I don’t know why this is. I don’t know why our universe has particles with charge.

    But for the Retinoid system to work, it must be underpinned by the principle that representations exist and they have functional relationships to each other. the egocentric space is an expression of a function that arranges representations into representational space relative to an ego. You describe the function sort of generally, but you have no doubt that representational function exists, otherwise the retinoid system wouldn’t work. The space generation itself is a function. Ego assignment to a 0,0,0 point is a function.

    And your argument is that when the brain performs these representational functions, the brain instantiates a Retinoid egocentric space. And, in theory I agree with you. But what is happening? The physics is performing representational functions, which correspond to the representational functions which make a retinoid egocentric space. it’s a form of co-occurrence, because the representational functions of the retinoid system already exist, even before they are performed in brains – because the retinoid system is a non-physical, representational phenomena.

    As I said at the beginning of our exchange, subjectivity is not an actual phenomena. it’s a representation of phenomena. What accounts for subjectivity is just the fact that representational functions are being performed. The performance alone instantiates the representations (in the performing medium).

    The retinoid model was formulated to explain how consciousness/subjectivity is generated.

    What I am saying is there is no such thing as subjectivity; there is no such thing as consciousness (vis a vis a mind or subjectivity). It just looks like there is. You are creating a cosmology to explain how the sun moves. And what I’m saying is that if you look at how representation itself works, it becomes apparent that subjectivity is not a thing that does anything at all. Subjectivity is an illusion that happens when representational functions happen in a particular kind of way. That is, you need the right kind of representational functions to happen to have the qualia of subjective experience.

    the beauty of the reductionist approach is that if representational phenomena are driven by representational functions, then you should get lots of other kinds of qualia and experiences, not just subjective experiences. Different kinds of representations and functions will produce different, and even non-subjective experiences. And that is what we see happen with human beings. Spiritual connectedness, the fusion of oneness that lovers experience, the anomalies of synchronicity, buddhist emptiness, drug induced hallucinogenic awakenings – all of these phenomena happen and don’t really fit into subjectivist/mind models. But it becomes much clearer how these kinds of phenomena can occur as the products of the performances of the right representational functions.

    You agree in principle that (non-physical) representational phenomena exist, and that they have relationships (function) to each other. It’s a premise of the retinoid model. I’m just saying that is the case explicitly. And if this is the case, then we don’t need a metaphysics or a mind-model that we must backward map to the physics, in a top down way, to explain the already existing representational phenomena and functions – including subjective experience. The right representational primitives and functions will produce subjective experience automatically. And, the supposition is that performing those representational functions instantiates those representations in the performing system.

    I’m arguing that we treat representational phenomena reductively, and that from primitives and basic functions we get all the complexity that occurs in experience. And then we don’t have to explain why one metaphysics works and others do not.

    You have spent countless hours working on the retinoid system, and I’m fairly sure you have your own list of problematic examples that don’t easily map into it. Perhaps it’s the mathematical phenomena (where are the mathematical phenomena that haven’t been learned, which are not yet in a person’s egocentric space? Are they in some other space?). Perhaps it’s something else. The point is that anomalies to the retinoid system are representations, and treated reductively, all the representational phenomena are acceptable. It isn’t necessary to develop a model, or update or correct a model; models themselves are the products of representational primitives and functions.

    To explain physical things that make and have representations (such as organisms with subjective experience) we just need to show how the physical phenomena recapitulate basic representational functions. Forcing an explanation of one particular representational model on top of that is added complexity and not a reductionist approach.

  114. Arnold Trehub says:

    Calvin,

    So your theoretical primitive is not the physical world, but a world of functional concepts — a Platonic world of pure ideas. Science has no way to get a grip in a non-physical world.

  115. john davey says:

    Calvin


    “How is simple perception of an itch different from a scientific comprehension”

    When I get an itch, I have full understanding that I’ve got an itch. I know what it relates to: what part of the body, why I’ve got it, and how to get rid of it.

    I know it’s an itch : I know it’s not a pain in the knee, or a passenger jet. I know what it is unambiguously. I comprehend it totally.

    On the other hand, I don’t have an explanation of how the material of my body brings about my itch : that requires a scientific explanation


    How is a prediction of phenomena, not a form or a kind of comprehension?

    Because it doesn’t require any comprehension of the subject matter : just a comprehension of the mathematics.

    It’s not a requirement of physics that you don’t have some comprehension of what’s going on : it’s just not necessary. So take Newtonian mechanics. The 17th century’s understanding of how objects interacted was through contact. And, indeed most of Newton’s mechanics relates to bodies interacting through contact. Newton’s Laws – inasmuch as they related to bodies interacting through contact, were widely accepted and praised at the time. It was gravitation that created derision amongst Newton’s detractors, for suggesting that bodies interact, at unlimited speed, through empty space. There was no comprehension of how this happened or came about : there was simply a theory of the mathematics of that interaction. No comprehension necessary.


    Obviously you do NOT think “mental phenomena” are, or can be generated by, physical phenomena.

    I think that mental phenomena are caused by the material of brains.


    Is there some third class of phenomena that is neither physical nor mental?

    There is one universe, and classifications into “physical” and “mental” are artificial human constructs. Although the “mental” is unambiguous, the “physical” is not.


    For instance, are mathematical concepts such as numbers or functions mental phenomena?

    Obviously no. They are knowledge-based artefacts in the public space, like language. Mental phenomena are first-person and subjective.


    I’m actually confused on this point because you say mathematics cannot be comprehended

    No I didn’t.


    If you can give me how you classify phenomena

    First person experiences in brains. It doesn’t get simpler.

  116. calvin says:

    John,

    regarding itches, I want to refer you to this article: http://www.newyorker.com/magazine/2008/06/30/the-itch

    I don’t disagree with how you describe the physical phenomena. It’s how you talk about the non-physical phenomena that I do not understand.

    I said: The mathematics is comprehension.

    You replied: No it isn’t. It’s not any kind of comprehension, as gravitation and quantum mechanics demonstrate. It is prediction.

    and now you say mathematics are “…are knowledge-based artefacts in the public space, like language.”

    I agree they are artifacts. Are they non-physical artifacts? Where is this public space you refer to? Is it a physical space? Can I move the molecules of my body to that space? How can a knowledge-based artifact not also be a mental phenomena? What if you are the first person to think some mathematical thought? How is that not a first-person and a subjective experience?

    There is one universe, and classifications into “physical” and “mental” are artificial human constructs. Although the “mental” is unambiguous, the “physical” is not.

    To be clear, I have all along divided phenomena into physical and non-physical. And further, I say the non-physical phenomena are representational phenomena. I say this, because the non-physical phenomena seem to work in a certain representational way. The physical phenomena do not work in a representational way. I think of mental phenomena as a subclass of non-physical phenomena. Mathematics is another subclass.

    So there are something like classes of phenomena. Even if the classification of phenomena like mental and physical is incorrect (which is what you argue), the fact there are classes isn’t something you disagree with is it? Paintings and mathematics are different classes of things. Do you agree with that?

    You say the mental phenomena are CAUSED by the material of the brains. Mathematics, things like the square root of 7 are not a mental phenomena. So I assume the square root of 7 is not caused by brains.

    So, in this one universe, where is the square root of 7? I mean you don’t say it’s a physical phenomena, or has characteristics of physical phenomena. Ambiguity seems to be a feature of physical phenomena. Can ambiguity also be a feature of mathematical phenomena? Is the square root of Pi ambiguous?

    It’s not a requirement of physics that you don’t have some comprehension of what’s going on : it’s just not necessary.

    I agree with this statement completely. Comprehension, explanation, the mathematics that describe physics are completely unnecessary to the physics itself. That is the point I’ve argued all along. But doesn’t that show that comprehension, explanation, and mathematics are non-physical phenomena? Because they are irrelevant to the physics: no amount of physical phenomena changes the mathematical phenomena, and no amount of mathematical phenomena affects the physics. Is that assertion correct in your view?

    But you claim all those phenomena are part of one universe, so what is the relationship of the physics stuff to the mathematics and explanations?

    Now for the weird question. If I said, “tell me if 317 is prime, and give me the answer in the next message” (and you did), was the mathematics necessary to the physical events? Was my language-based request necessary to enact the physical events you performed? How did language from the “public space” affect the physics to cause you to type an answer to the question? What if you get the answer wrong? How do you tell what an error is? How did the physical stuff produce the error?

    errors happen, but certainly not in the physics right? so where is erroneousness in this one universe and how does it relate to other phenomena? Where is erroneousness in the brain?
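    For what it’s worth, the 317 question itself has a mechanical answer. A minimal trial-division sketch, added here only for concreteness (the function name is invented, not part of the original exchange):

```python
def is_prime(n: int) -> bool:
    """Trial division up to sqrt(n); fine for small n like 317."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

assert is_prime(317)       # 317 is prime
assert not is_prime(315)   # 315 = 5 * 63, not prime
```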

    … then you say this:

    If you can give me how you classify phenomena

    First person experiences in brains. It doesn’t get simpler.

    So all phenomena are first person experiences in brains? So the brain itself is a first person experience in a brain?

    For the record, I have looked at brains. I’ve seen live animal brains, and I’ve seen dead human brains. I’ve never seen magenta in brains. I’ve not seen smells in brains. I’ve not seen affection, or anticipation, or poems in brains. So where are those things in the brain?

    is the brain a physical thing? Or is everything first person experiences that somehow make everything that happens?

    And if the material of the brain causes mental phenomena. And all phenomena are first person experiences in brains, then doesn’t that mean all phenomena are mental phenomena?

    Just for fun, can you tell me what magenta is? Because it’s not a physical phenomena, even in the ordinary scientific sense. It is not a wavelength of light. We see magenta when we see two wavelengths of light together (in ordinary experience). But it’s not like magenta is the additive sum of the 450 and 650nm wavelengths of light.

    When you have a dream, that dream has something magenta in it, that is a first person experience that is occurring in the brain. And if the brain is a physical thing, how does it cause that magenta to happen?

    Since we all can see magenta, I’m guessing magenta is in the “public space” too? How does it get from the public space into the brain? Or, do you mean that brains produce the whole public space as just another part of first person experience? So, somehow in our brains, we have all the languages and mathematics from public space?

    And if you don’t mean that, then what kind of phenomena is “public space” where mathematics and languages are?

  117. calvin says:

    Arnold,

    So your theoretical primitive is not the physical world, but a world of functional concepts — a Platonic world of pure ideas. Science has no way to get a grip in a non-physical world.

    I think science can be applied to non-physical phenomena as long as the non-physical phenomena have consistencies, there is a systematic model we can apply to the phenomena, and we can apply that model to phenomena and make explanatory and falsifiable predictions. I’m saying we need to be scientific about the non-physical phenomena, and that we can approach the non-physical phenomena scientifically.

    I am not arguing for idealism, and especially not for platonic idealism. I am arguing that physical phenomena AND non-physical phenomena exist. That non-physical phenomena can be observed, and we can see how non-physical phenomena interact and relate to each other. And when we do that, we see that non-physical phenomena have a representational relationship to each other. And that relationship can be expressed as a function. [And further that we can use those observations to postulate how the phenomena of “consciousness” happens. And that we may be able to use those observations and representational functions to create conscious machines. ]

    We can conduct experiments on non-physical phenomena: computer science and mathematics, for instance. We can construct experiments to test mathematical ideas. We can construct experiments to test how much people remember and forget over time, and how different kinds of remembered and forgotten content change the rate of remembering and forgetting. Remembering and forgetting, and the contents remembered and forgotten, are all non-physical phenomena.

    Some studies have shown that reduction in dendrites affects memory, or even causes forgetting. But we can look at dendrites and we do not see the things remembered or forgotten – because the things remembered or forgotten are not molecules, but non-physical phenomena. E.g. learning a path through a maze is a conceptual phenomenon, and different drugs that affect dendrite formation in rats are often tested with mazes, or by learning where an underwater platform is in a water maze test. So we do science with physical and non-physical elements.

    Advertisers test advertising all the time, and there are numerous scientific studies on advertising effects. These are studies that are basically about ideas and non-physical phenomena, like feelings. Sexy, wealthy-looking people in car ads sell more cars than poor, ugly people do. Sexy, wealthy-looking, poor-looking, and ugly are non-physical phenomena; they are aesthetic or representational phenomena. And there has been quite a bit of science done on the concept of beauty.

    The argument:

    There are physical phenomena.
    There are non-physical phenomena.
    The non-physical phenomena do have relationships to each other and to physical phenomena.
    These relationships are representations. These representations are also non-physical phenomena.
    All the non-physical phenomena can be used in relationships or as representations. Ergo, let’s just call them representational phenomena.
    The representations and relations can be expressed as functions. (these functions are, of course, also representations and non physical phenomena).
    By contrast, the physical phenomena do not interact in this relationship-or-representation-function way. The physical phenomena all seem to be composed of fundamental elements – the particles of physics. And all the physical phenomena appear to occur because of what the particles themselves dictate.
    Particles generally collect into particle groups we call molecules. And these molecules are combinations of the effects the individual particles can produce. (E.g. a giant mass of molecules reveals the tiny effect of gravity produced by every atom.)
    Mostly we are talking about molecules when we talk about the physical phenomena.
    We can describe a bunch of molecules in a shape or a form, like an arm or a leg, or a body, or a star.
    Those forms are representations of what the molecules produced through their intrinsic interactions. The form does not dictate the particles’ behavior, and the particles’ intrinsic interactions do not contain designs for those forms.
    Everything about the physics is produced from the bottom up, from the particles on up.
    When we talk about the form groups of molecules take, we are actually talking about representations. And this area of study is called phenomenology. For instance, a dream arm, a video game arm, or an actual arm made from molecules are all kinds of arms (representations), but they are not kinds of particle groups.
    There is one particular kind of form that physics produces, which we call a cell. It is not designed or forced to occur because of some extrinsic design or representation. It just happens because the right set of molecules incidentally creates that kind of physical phenomenon.

    Cells are interesting because they have all kinds of different processes their molecules produce. Some of these processes are also representational functions, where one thing is the same as another thing: x as y (x ; y).
    The cell itself is this kind of x-as-y thing, because the molecules of the cell can be completely changed out, but the system the molecules manifest (the cell) keeps going, and even reproduces. In this way, cells are both physical phenomena and representational phenomena.
    If all the molecules of the cell at a point in time (1) can be referred to as Set1, and Set1 is a cell, then we can say Set1 ; Cell_A (Set1 as Cell_A). If at a point in time (2) all the molecules of the cell have been replaced, but the same cell exists, then we can say Set2 ; Cell_A (Set2 as Cell_A).

    The cell is not a physical thing, any more than an arm is a physical thing. Cells and arms are representations of some set of underlying physical things – the molecules. Cells and arms do not CAUSE any phenomena. The molecules cause all the phenomena, mostly through charge differentials and molecular interactions with other molecules.

    We do not specify what a cell is or how it exists, or when it terminates. Our representations and representational functions do not affect cells. Only molecular interactions can make a cell or not make a cell. But cells have this representational property. And networks of interacting cells also have this representational property.

    A network of cells can represent other physical phenomena. And a network of cells can affect the molecular interaction of other cells. Different network structures of different kinds of cells can produce different kinds of effects in cells attached to the network, depending on how the network itself interacts. And the network can be affected by molecular changes to cells attached to that network. (a nervous system with motor cell outputs and sensing cell inputs).

    When molecules form a cell, we can say the molecules are instantiating a cell. When groups of cells produce representational functions, we can say the cells are instantiating those representations. The representations exist; they are just getting instantiated in particular network structures.

    To illustrate this: when cells divide, both cells are the same kind of cell in two slightly different molecular forms. SetX ; Cell_A and SetY ; Cell_A.
    This divisional feature is another example of representation occurring. As cells propagate, the representation which is the cell itself is what propagates. Even though the molecules are completely different, the cells are the same. This is the essence of what representation means – different, but the same.
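    The Set1 ; Cell_A notation above can be sketched in a few lines of Python. This is only a toy illustration of the “different, but the same” idea, with my own hypothetical names, not code from anyone in this thread:

```python
# Toy sketch: a cell's identity (the representation) persists while its
# molecule set (the physical constituents) is completely replaced.

class Cell:
    def __init__(self, name, molecules):
        self.name = name                  # the representational identity, e.g. "Cell_A"
        self.molecules = set(molecules)   # the current physical constituents

    def replace_all_molecules(self, new_molecules):
        # Every physical constituent changes; the cell keeps going.
        self.molecules = set(new_molecules)

    def divide(self):
        # Both daughters instantiate the same kind of cell:
        # SetX ; Cell_A and SetY ; Cell_A.
        mols = sorted(self.molecules)
        half = len(mols) // 2
        return Cell(self.name, mols[:half]), Cell(self.name, mols[half:])

cell_a = Cell("Cell_A", {"m1", "m2", "m3", "m4"})
set1 = set(cell_a.molecules)              # Set1 ; Cell_A
cell_a.replace_all_molecules({"m5", "m6", "m7", "m8"})
set2 = set(cell_a.molecules)              # Set2 ; Cell_A

assert set1 != set2                       # completely different molecules...
assert cell_a.name == "Cell_A"            # ...but the same cell (representation)

d1, d2 = cell_a.divide()
assert d1.name == d2.name == "Cell_A"     # division propagates the representation
```

    Nothing physical is shared between Set1 and Set2; only the representational identity carries over, which is the point of the x ; y notation.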

    At best I think you can call this a theory of representation.

  118. john davey says:

    Calvin


    “regarding itches, I want to refer you to this article:”

    She had an itch. It remained an itch. She knew it was an itch. The cause of the itch was abnormal, but that does not alter the fact that the itch was an itch.
    You have to learn to distinguish between perspectives : the 3rd person perspective indicated unusualness : the 1st person perspective remained the same : it was an itch.


    “and now you say mathematics are “…are knowledge-based artefacts in the public space, like language.””
    ..
    “where is this public space you refer to? Is it a physical space?”

    I hope that’s sarcasm !


    “How can a knowledge based artifact not also be a mental phenomena?”

    It’s you that splits the world into “physical” and “mental”. I don’t. There are cultural facts too – social facts if you like. Mathematics is a cultural fact – it is clearly not a mental phenomenon. It “exists” (if you must use the word) in the “cultural” space.
    Mental phenomena are subjective, impermanent and can only be associated with one mind at a time. Humans can use their thinking and language faculties to share information – that information is the cultural space, if you like.


    ” How is that not a first-person and a subjective experience?”

    I think I know the source of your confusion. You are confusing the act of thinking (the “phenomena”) with the object of that thinking. You need to differentiate process from product.


    “Paintings and mathematics are different classes of things. Do you agree with that?”

    Not in the sense we are talking about I don’t think. They are the same : objects of thought but they are not thinking.


    “So I assume the square root of 7 is not caused by brains.”

    The square root of 7 is in the wrong ontological class to have a “cause”. It was invented by a thought process in a creative act.


    “I agree with this statement completely. comprehension, explanation, the mathematics that describe physics is completely unnecessary to the physics itself. That is the point I’ve argued all along.”

    Hm. I disagree. I thought you said “mathematics was comprehension”.


    “But doesn’t that show that comprehension, explanation, and mathematics are non-physical phenomena? ”

    Never said they were.


    “Because they are irrelevant to the physics – no amount of physical phenomena change the mathematical phenomena, and no amount of mathematical phenomena affects the physics. Is that assertion correct in your view?”

    Don’t know what this is all about. There is no such thing as mathematical “phenomena”, so I can’t have mentioned it.


    “so what is the relationship of the physics stuff to the mathematics and explanations?”

    Physics is a method based upon the use of mathematical axioms. Explanations can sometimes be intuitive – newtonian mechanics for example – or vaguely based upon an interpretation of the maths. The reason for explanations is simple : to provide a visualisation. But physics works regardless.


    “So all phenomena are first person experiences in brains? So the brain itself is a first person experience in a brain?”

    Sorry, I wasn’t clear – all mental phenomena are first person experiences in brains.


    “And if the material of the brain causes mental phenomena. And all phenomena are first person experiences in brains, then doesn’t that mean all phenomena are mental phenomena?”

    No.


    “is the brain a physical thing?”

    yes


    “can you tell me what magenta is?”

    It’s a sensory experience.


    “And if the brain is a physical thing, how does it cause that magenta to happen?”

    That is the million dollar question. And you can sit and stew all day in confusion over your hard insistence on classifying everything as “physical”, “mental”, “non-physical”, “angelic”, “demonic”, whatever – the “physical” brain creates mental imagery regardless of how much you think it doesn’t make sense. Don’t see the mistake in the phenomena : there aren’t any. See the mistake in the way you are thinking.

  119. calvin says:

    John,

    Could you explain the metaphysics you are advocating from first principles? I am honestly confused.

    My question about “where is this public space you refer to? Is it a physical space?” was only a little bit sarcastic. I seriously want to know what public space, or cultural space, is. Because I think it’s a representation, and not some actual metaphysical space (and obvs. – not sarcasm – not a physical space). And since it is a representation, then to apprehend or be aware of this kind of space requires thinking about it, which means it must *also* be a mental phenomenon. I can feel you making your argument against this, which means I am honestly confused about the structure and fundamental features of the metaphysics you are describing.

    For instance, you say: “The square root of 7 is in the wrong ontological class to have a “cause”. It was invented by a thought process in a creative act.” So what thinker invented this? When was it invented? Does everyone with sufficient mathematical experience re-invent the square root of 7? Or, is the square root of 7 an actual thing, apart from whether anyone ever thinks of it?

    Can you explain the ontology of things to me? So I can draw connections between different contents?

    You said: “I think I know the source of your confusion. You are confusing the act of thinking (the ‘phenomena’) with the object of that thinking. You need to differentiate process from product.”

    I want to make an argument, based on observation, and ask if you can provide a counter observation to refute the argument (because I cannot).

    There is no observable difference between the act of thinking and the object of that thinking. Whenever there is content or an object of thought, we assert there is thinking occurring. And whenever we assert there is thinking happening, the thinking is always of some particular content or object of thought. Even when the content of thought may be ambiguous, it is particular in its ambiguity. And objects of thinking may be things like a process, an action, or an object.

    The phenomenon of thinking is always the content being thought.

    Is this what you observe when you examine your own thinking? Can you observe your own thinking where there is no content? And is there some kind of difference between the contents of your thoughts and the thinking itself? Can you describe what that difference is?

    I do not observe any difference between my thought process and the content of my thoughts. Sometimes I think there is a difference, but those thoughts are themselves content and not the actual process of thinking I can observe. I have also noticed that this same “rule” seems to apply to all the other kinds of experiences I have – sensations, perceptions, etc. Whatever process is creating experience, it is indistinguishable from the contents of the experience.

    I have taken to showing this symbolically. Whatever the process of thought, or the process of experience, or the process of awareness is, it is always of or about some content, and it always forms an identity with the content itself. So that awareness of some content X is that X: Aw:X = X. And whenever some content shows up (like sensations or unbidden thoughts), the process of awareness or process of thinking must also be happening. So X = Aw:X.
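    The Aw:X = X claim can be rendered as a one-line toy model in Python. This is my own illustration, assuming only that “awareness of X” is modeled as a function applied to content X:

```python
# Toy model of the identity claim Aw:X = X:
# the "process" of awareness, applied to any content, just yields that content.

def aw(x):
    return x  # awareness-of-X is indistinguishable from X itself

content = "magenta"
assert aw(content) == content          # Aw:X = X
assert aw(aw(content)) == aw(content)  # applying awareness again adds nothing observable
```

    On this rendering there is no residue left over to call the “process” once the content is subtracted, which is exactly what the question below is probing.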

    Do you have some particular examples where you observe that objects of thought and the process of thinking diverge?

  120. John Davey says:

    Calvin


    “Could you explain the metaphysics you are advocating from first principles?”

    I’m not advocating any metaphysics from first principles. For me this is a scientific question; for you it would appear to be metaphysics.


    “I seriously want to know what public space, or cultural space is.”

    Sorry Calvin, I don’t believe you don’t know what culture is.
    The point you want to make is “what is the reductionist science of this thing called culture”. What are the maths ? Where are the atoms ?
    Well, there isn’t one. That’s the problem of being a reductionist – not my problem – yours.


    “Because I think it’s a representation”

    representation of what ?


    “And since it is a representation, then to apprehend or be aware of this kind of space, requires thinking about it, which means it must *also* be a mental phenomena.”

    Why ? Why must it be the same thing ? That’s not a scientific claim – that’s a definition.


    “Or, is the square root of 7 an actual thing, apart from whether anyone ever thinks of it?”

    Homo Sapiens has been around for 100,000 years and for at least 98,000 of them there’s been no square root of 7. There’s been no english language until recently either : or television sets.

    I suppose you can argue that everything that was ever invented was waiting to be invented, in the same way as the television set or English or Newton’s Laws of motion.
    But to argue that it was “always there” in the structure of the universe is to ignore the fact that physics and mathematics are basically creative. 50,000 years ago there was no square root of 7 or quantum mechanics.


    “There is no observable difference between the act of thinking and the object of that thinking.”

    Whoa. Real religious stuff. We are angels ! We don’t think, we are pure mathematics !

    How do you know what the act of thinking is ? Have you stumbled across some scientific text no one else has ?

    “thinking” – I am going to shock you here – is most likely a physical process. It is likely to be material and hence unconnected to its output, which is conscious and unconscious thought.
    That’s a refutation and it’s backed by a good deal of the science that we have. We don’t “control” our thoughts, and most of them originate unconsciously, and that suggests an origin in material processes.
    For instance, putting a bullet in the head stops much thinking. Substantially changing the material environment through use of drugs changes mental contents.

    The process is so clearly different from the product I’m surprised you even entertain the idea – this being the 21st century.

    The act of thinking – you seem to think – is somewhat more ethereal than the reality. Don’t forget we are descended from worms and mushrooms.


    “Can you observe your own thinking”

    I don’t observe my own “thinking processes” – merely their outputs

    “Do you have some particular examples where you observe that objects of thought and the process of thinking diverge?”

    I can’t, because I can’t observe my own “thinking processes” – they are 3rd person and not within my 1st person grasp.

    J

  121. calvin says:

    John,

    “thinking” – I am going to shock you here – is most likely a physical process. It is likely to be material and hence unconnected to its output, which is conscious and unconscious thought.
    That’s a refutation and it’s backed by a good deal of the science that we have. We don’t “control” our thoughts, and most of them originate unconsciously, and that suggests an origin in material processes.
    For instance, putting a bullet in the head stops much thinking. Substantially changing the material environment through use of drugs changes mental contents.

    This is true, as far as it goes. Yes, drugs and bullets affect brain activity, which in turn affects thinking. But you skip over saying what content is. You skip over saying what a thought actually is. Then you espouse this bit of metaphysics:

    “The process is so clearly different from the product I’m surprised you even entertain the idea – this being the 21st century.”

    I am saying this idea is wrong. It is observably wrong. Yes, first person observations demonstrate this to be wrong. Whatever a “thought process” might be – certainly something descended from worms and mushrooms – it is indistinguishable from its products.

    I asked if you could observe a difference between the contents of your thoughts and your thinking process. And you said you could not. But you continue to believe otherwise. You believe there must be a difference between your thought processes and their outputs because to accept the observation doesn’t fit with your metaphysical beliefs.

    If you had an explanation and some experiments which showed how thoughts come about, which explained how concepts like the square root of 7 can be created by brains, or explained why the “thinking process” itself cannot be observed, then I could understand your reasoning. But you don’t. You reject the simple conclusion of your own observations.

    More than that, you assert that thoughts are the products of physical processes of brains. How is that not a reductionist argument? I’m just pointing out that you can’t draw a materialist or reductionist connection between the physical phenomena of brains and the square root of 7. I know culture isn’t a physical phenomenon. That is why I was asking you to describe what it was. You talk about it as a thing we interact with, but then gloss over how it fits in relationship to molecules, or brain cells. It would be nice if you criticized my argument from examples instead of from your beliefs.

    I KNOW it sounds wacky to say the process of thinking and the products or content of thinking are the same thing. But I also know that all the other approaches do not deal with this observable fact. And those other approaches do not get us any closer to understanding HOW thoughts arise from matter – or how matter produces a thinking process.

    Of course, the physical phenomena have this huge effect on thinking and thoughts. Head injuries and LSD are pretty conclusive examples we can offer as proof. Any explanation for how thoughts happen, AT ALL, requires accepting the physical facts.

    My point is that any scientific explanation also requires accepting the non-physical facts. Which you seem to have a real problem with. Trying to smash all the non-physical things into some kind of materialist belief just doesn’t work. Every time I try to home in on a discrepancy in that line of thinking, you tell me I’m making distinctions between phenomena that aren’t there, or that it’s my need to be reductionist that is the problem… and then you go on to make distinctions between phenomena and make a reductionist argument.

    Yes, I UNDERSTAND that saying non-physical phenomena are actual things is definitely a loony position to take. But again, that is what we observe. Our brains do not create prime numbers. We discover prime numbers. And what about error, or the concept of error? Was that invented? Was meaning invented? Was color invented? Sound? Or smells? How were qualia invented? And when I bring up these examples you just dismiss them, but you don’t deal with why these examples are problematic for a materialist belief or for your belief.

    Conversely, if all non-physical things can be invented, what are they invented from? What are the precursors of a “mistake concept”? Accepting that qualia and ideas and information are all inventions, how is that functionally any different from when I say they just exist and get instantiated? If they got invented, what was it that got invented? It’s the content itself that needs an explanation. This invention argument is a metaphysical description of what happens, but not how it happens.

    “How do you know what the act of thinking is? Have you stumbled across some scientific text no one else has?” What is this? An appeal to scripture? The claim that “whatever the process of thinking is, is indistinguishable from the contents of thought” is based on observation. You made the same observation. Yeah, wouldn’t it be cool if there was some other explanation for that observation than the simplest one?

    So please, since the simple conclusion from observation isn’t good enough for you, tell me what the process of thinking is and then we will see how it’s different from its products. Because I bet I can write code to reproduce that process (not simulate, reproduce). I know you said you can’t describe the process of thinking, but could you at least try? Because I’ve tried. I’ve seen others try. I’ve been working on creating digital consciousness for over a decade now, and I’m open to any kind of implementable suggestion.

    In that time, the only way I have found to make progress is to accept the facts of what I observe about thinking – about the non-physical phenomena. Then I try to figure out how the physical phenomena would generate those facts. That seems like a very reasonable approach to me. And I have been able to make both theoretical and practical progress.

    If I abandoned the approach I am on and switched over to what you are arguing, what kind of advantages does that offer? I don’t see how you get to digital consciousness from your basic premises. Materialism has been a consistent dead end, not just for me, but for everyone else too. (Chalmers’s and Peter’s arguments are pretty good.) You understand the syntax/semantics problem as well as anyone. If I thought that the materialist way of thinking worked, don’t you think I would rely on it too? But materialism is known to lead to intractable problems – the kind that prevent the conceptual breakthroughs necessary to make progress towards machine consciousness. Doesn’t it seem obvious that a different theoretical approach is necessary?

    So, yeah, if you want to take pot shots that’s cool. But I would really appreciate an argument and critiques on the merits. I would appreciate some different observations or different readings of observations. Instead you criticize me for being a reductionist, as if that is a bad thing, and an idealist, which is absurd on its face. I just want to do a reset and ask three questions.

    Do you think digital consciousness is possible?
    What facts have to be accounted for or reproduced to generate digital consciousness?
    How do different approaches affect our ability to account for those facts and thus affect our ability to reproduce those facts to generate digital consciousness (if it is possible)?

  122. john davey says:

    Calvin


    “You believe there must be a difference between your thought processes and their outputs because to accept the observation doesn’t fit with your metaphysical beliefs.”

    Not metaphysical beliefs – common sense and known scientific facts. I believe that brains cause minds, yes.


    “If you had an explanation and some experiments which showed how thoughts come about, which explained how concepts like the square root of 7 can be created by brains, or explained why the ‘thinking process’ itself cannot be observed, then I could understand your reasoning. But you don’t. You reject the simple conclusion of your own observations.”

    No. I accept the simple conclusion of my own observations to this extent : I believe that brains cause minds. The stuff of brains gives rise to mental content. If you don’t believe that you must be some kind of idealist.

    It’s plain – at least to me – that I’ve defined mental content in the simplest of terms – 1st person experiences in brains. When you wake up you acquire the conscious awareness that characterises mental contents. It is not mathematical, it is not linguistic, it is mental content – 1st person, irreducible, natural.

    You are falling into the traditional mental/physical category error – deciding, like the Greeks, that the world is split into conceptual (mental) and physical. So mathematics and the desire to urinate are arbitrarily lumped into the same class merely because of an association with brains. I recommend reading John Searle on the subject of consciousness and the ancient Greeks – he highlights this mistake more clearly than most.


    “How is that not a reductionist argument?”

    Because you can’t reduce the process using traditional mathematical physics.


    “I’m just pointing out that you can’t draw a materialist or reductionist connection between physical phenomena of brains and the square root of 7.”

    You can if you’re not a reductionist, which I’m not.


    “You talk about it as a thing we interact with, but then gloss over how it fits in relationship to molecules, or brain cells.”

    See ? I told you you were a reductionist ! I’m not by the way. Culture is culture is culture, it doesn’t have a relationship to atoms through mathematical physics.


    “It would be nice if you criticized my argument from examples instead of from your beliefs.”

    What examples ? Examples of what ..


    “But I also know that all the other approaches do not deal with this observable fact.”

    You are basically saying “all thoughts are first person”. It’s not really revealing is it ? We know that to be true. But what is the relationship of the brain to thought – the stuff of brains – THAT is the key to the answer of “what is thinking”. You are just thinking of thoughts, which is different.


    “Trying to smash all the non-physical things into some kind of materialist belief just doesn’t work.”

    It wouldn’t. But I’m not trying to do that. You’re confused : you are constraining yourself to substance issues .. is it physical .. is it mental .. is it .. whatever. Until you liberate yourself from this you’ll get tangled, I would suggest.


    “Our brains do not create prime numbers.”

    They do. They really, really do. They arise from the human cognitive framework – unless you know any dog mathematicians.


    “We discover prime numbers.”

    In that case we discovered television sets, photography and the english language. They were all there waiting to be found.

    What you want to believe is that prime numbers exist outside the human cognitive framework – I think that’s plain wrong. Mathematics evolves just like any other science, so whether you decide it “finds” or “creates” is up to you. But without humans, they’re not there.

    What you want I think is to believe that ‘information’ is universally fundamental in some way. This is an old argument that gets recycled from time to time. It fails I think to grasp that the universe is just ‘stuff’, and our ideas and models of it are not the same as the actual universe. Just models – in our heads.


    “I don’t see how you get to digital consciousness from your basic premises.”

    Now you’ve really lost me. Me ? A computationalist ? How on earth could you end up there ? You’re getting me confused.


    “Do you think digital consciousness is possible?”

    Easy one. No. Never. Ever.


    “What facts have to be accounted for or reproduced to generate digital consciousness?”

    If we remove the expression “digital”, then I’d say artificial consciousness could be generated by any machine with the causal ability to create consciousness. Computation is an observer-relative dead end : it has no natural existence (it exists in the same way as a book) and hence no natural causal powers. Computation is in the head.

    As it stands, artificial consciousness is likely to be produced by biologists synthesising brain tissue : computationalists will still be scratching their heads in 1,000 years’ time, I should expect, failing to see why none of it works.

  123. calvin says:

    John,

    I’m fascinated by how you say things I agree with, but then come to a different conclusion.

    I agree with this: “I’ve defined mental content in the simplest of terms – 1st person experiences in brains. When you wake up you acquire the conscious awareness that characterises mental contents. It is not mathematical, it is not linguistic, it is mental content – 1st person, irreducible, natural.”

    And I agree with your statement about mental content: “Because you can’t reduce the process using traditional mathematical physics.”
    We can’t get culture from atoms or math.

    I want to know, then, if mathematics and linguistics are a kind of mental content. I’m sorry, I’m not clear on that. I get conceptually that mathematics is invented and that “the stuff of brains gives rise to mental contents”. From which I conclude that mathematics and mental content are either in the same class of things – products of the brain’s “thinking process” – or that mathematics is a subset of mental contents. Is that a correct way to classify these things?

    I think of them as being in the same class of things, which I refer to generically as representations or representational phenomena. That the “thinking processes” of the brain produce representational phenomena.

    I have another problem that is not explained by your expressed position. If mental content is 1st person, how can mental contents be shared? How can you and I speak about prime numbers like 7 unless we share the same mental content? We can ask this question about all the kinds of first person experiences people have – together or apart. How does reality work in such a way that we can share the same mental content?

    It looks to me like when people share something, their brains are inventing [or instantiating] some of the same mental [or representational] content. If I ask you to complete the phrase “To be or …” and you respond with “not to be,” are we not aware of this same bit of literature? How do you describe shared mental contents if they are necessarily first person only? Doesn’t that make the mental contents third person mental contents?

    {As an aside: How can there be anything such as third person? Isn’t that a supposition? Isn’t the very idea of third person itself mental content? Doesn’t a third person perspective imply the same mental contents?}

    Let’s call this the sharing problem. I’m curious to know how you explain the phenomenon of sharing ideas and understanding without accepting that there are 3rd person mental contents.

  124. calvin says:

    John,

    I believe that brains cause minds, yes.

    Do you actually believe there is a thing called a mind? And if the brain causes a mind, do you accept that the mind can cause physical phenomena (like movement)?

    And if you accept a mind can be a causal source, what is the process the mind itself engages in to produce a physical cause?

    And if the brain causes a mind and the brain produces “mental contents”, then are the mental contents “in the mind”, or is the mind the result of brain-generated “thinking processes” too? That is, are mental contents in the mind, or are mental contents and the mind both examples of the same thing?

    I do not believe there is such a thing as a mind. The mind is a product of “thought processes”. I think the mind is just a reference to the set of representations [mental content] our brains can produce. And that when our brains instantiate [invent] representations [mental content] with their processes, some of those representations can and do cause physical phenomena (like movement). But to make this case, I have to accept that mental phenomena are natural, 1st person, and irreducible. But I also have to accept that mental phenomena can also be 3rd person.

    Otherwise it’s impossible to describe a game like Simon Says. Simon Says requires 3rd person mental phenomena, and it’s fun to play because of the mistakes people make about the third person perspective with regard to what Simon has said. How do you describe the necessary initiating condition of someone speaking – “Simon says, jump” – and then people jumping? And conversely, how do you describe when someone jumps without “Simon says” because they were mistaken? The absence of “Simon says” before the command to jump is problematic for a causal description, because there was no initiating condition – that is, it’s an error, yet the error produces a physical phenomenon.
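    To make the error case concrete, here is a small sketch (all names invented; Python used only for illustration, not as a claim about how brains work) of the point that a Simon Says move is caused by the player’s internal representation of the command, not by the utterance itself – so a mis-representation still causes a jump:

```python
# Hypothetical sketch: each player acts on their own internal
# representation of the command, not on the command itself.
def parse(command):
    """What a player *represents* the command as: (simon_said, action)."""
    prefix = "simon says"
    if command.lower().startswith(prefix):
        return True, command[len(prefix):].strip()
    return False, command.strip()

def player_move(command, distracted=False):
    simon_said, action = parse(command)
    if distracted:
        # A mis-representation: the player "heard" Simon even though
        # Simon said nothing. The error is in the representation.
        simon_said = True
    # The physical move is caused by the (possibly wrong) representation.
    return action if simon_said else None

print(player_move("Simon says jump"))        # correct play: jump
print(player_move("jump"))                   # correct restraint: no move
print(player_move("jump", distracted=True))  # error still causes a jump
```

    The point of the sketch is only that the proximate cause of the movement is the represented content, which can be in error even when the external “syntax” is unambiguous.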

    I don’t understand how we can talk about error initiating physical phenomena as solely the result of brain generated “thinking processes” without admitting mental contents as extrinsic facts. How could the mistake have happened at all if the thinking process and physical movements are solely generated by the brain? It requires the biological “thinking process” to be confused. And then it requires the biological “thinking process” to correct itself or acknowledge the error.

    I’m guessing you throw all of these content problems (simon, errors, confusion, correction) into “the mind”. Is that right?

    How do you distinguish the mental contents, the mathematics, the brains, and the physical phenomena from each other? Isn’t the distinction itself the result of the “thinking process”? How can the brain-generated “thinking process” be generating a second order “mental content” of “distinction”? “Distinguishing” as a concept is a meta-concept. How does the problem of meta-ness get handled if thought processes are brain generated? By which I mean, the brain itself doesn’t capture “meta” in its components or structure.

    So while I agree mental content is irreducible, that does not mean it cannot be understood functionally, on its own terms. That is why I talk about mental content, mathematics, perceptions, sensations, etc. as representational phenomena – so that we can talk about how the irreducible mental contents relate to each other, and to provide a framework for understanding how “meta” and “error” and association and representation occur – how these things functionally relate to each other.

    I don’t think we can account for how consciousness works or occurs unless we can account for how mental contents themselves behave when we wake and “acquire the conscious awareness that characterises mental contents.” I think, based on observation, that mental contents interact in a particular set of ways.

    You acknowledge there are mental contents, and things like mathematics. What I’m saying is that if you observe how we think about those things, how awareness seems to work, it follows a pattern.

    What you want I think is to believe that ‘information’ is universally fundamental in some way. I am arguing only that information is part of reality yes. If first person mental content happens, then it’s part of reality. I am not arguing that whatever statements or intent or meaning some first person mental content entails therefore physically happens. Merely, that the mental content itself exists. (and is obviously not physical).

    You are basically saying “all thoughts are first person”. It’s not really revealing is it ?

    Yes, it is, if we ask: how do thoughts work? Can you take a step backward and simply observe how your thoughts work? Without making assumptions or assertions about brains or minds or anything else, simply look at the process as you observe it. What do you see? What conclusions can you draw from your own observations alone, without making appeals to a metaphysics or a hierarchy of reality? (A hierarchy would be something like: physical universe -> brains -> thinking process -> thoughts/mental content.)

    What you will see is that “thinking” and thoughts are the same thing. That to be aware means that content of awareness exists.

    I am not making an argument about substance. I’m not suggesting that whatever the contents of your awareness are, means they have some sort of supernatural effects. What I am saying is the observation reveals a functional fact.

    And that further examination reveals the contents of awareness associate to each other. And that association is also functional.

    I am saying awareness itself works in a certain way. Regardless of whatever underlying physical phenomena are generating “thought processes”, that awareness itself can be described functionally. I do not mean we can reduce one mental content to another, nor to mathematics. I mean that all the mental contents, all the contents of awareness associate to each other in the same way. And that association process can be described with a function.

    I am not saying we can throw culture, or mental contents, or mathematics itself into a mathematical tautology nor am I suggesting these non-physical things have a relationship to mathematical physics. What I am saying is that we can describe the form and interactions of everything we could call contents of awareness with representational functions. And if we can do that, then we can look for mechanisms in biology and computation that reproduces that functionality.

    Said another way: by observing the products (thoughts or mental content) of the hidden “thought processes” we can discern something about the process itself. Not how any particular content is generated- physically-, but how the contents or products of thought interact and are generated representationally. And that these representations, relations, and interactions can be described functionally. Further, I think that functionality is sufficient to explain how consciousness occurs.

    That is my hypothesis. Testing the hypothesis of representational functionality requires reproducing the functionality in a different physical system – e.g. a computer. Suppose we create a program, or a set of programs, which reproduces the same representational functionality; I run that program on a computer, and we observe the computer to behave in a way we consider to be conscious. Then I run some other kind of program, one that does not reproduce the representational functionality, and we observe the computer and conclude it is certainly not conscious. In that case I would say the representational functionality is (by all appearances) a sufficient theory to explain how consciousness occurs.
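    The test described above has a bare logical skeleton, which can be sketched as follows. This is only a sketch of the experimental logic – `behaves_consciously` is a placeholder for a behavioral test of consciousness that does not currently exist, and all names are invented:

```python
# Sketch of the two-condition experiment described above.
# NOTE: 'behaves_consciously' stands in for a behavioral test of
# consciousness that nobody actually has; this only captures the
# shape of the argument, not a real procedure.
def run_experiment(functional_program, control_program, behaves_consciously):
    """Compare a program reproducing the representational functionality
    against a control program that does not reproduce it."""
    a = behaves_consciously(functional_program)  # expected: conscious
    b = behaves_consciously(control_program)     # expected: not conscious
    if a and not b:
        return "functionality appears sufficient"
    if not a:
        return "hypothesis not supported (or process misunderstood)"
    return "inconclusive: control also judged conscious"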

    The whole enterprise, though, rests on accepting that the products of thinking exist, and that they function in a certain kind of way (irrespective of the underlying physical phenomena).

    You seem to dismiss my basic argument that the contents of awareness observably behave in a functional way, and say this kind of approach can’t work because of common sense, or because of things you have read, or simply because you don’t believe it. That’s not really a testable hypothesis, is it?

    Furthermore, I am taking John Searle’s argument about how to make a computer sentient as valid. https://www.youtube.com/watch?v=rHKwIYsPXLg

    Searle argues that the brain is doing a process, a “thinking process” (or consciousness or attention, or awareness or whatever), which is a process in the same way that digestion is a process. The typical approach of computer science is simulation. So if we simulate digestion, we are obviously _not_ doing digestion. And thinking or consciousness is something where just simulating it cannot reproduce it. To achieve computer sentience requires duplicating the “thinking process” or the consciousness process.

    Which means that producing conscious machines requires reproducing the “thinking process” on computers – not simulating, reproducing. My argument describes what the “thinking process” is doing by observing how the thinking products work: the “thinking process” is producing a functional and representational process. I may be wrong about that.

    But, I may be right about that. Regardless, the hypothesis at least provides a path for producing experiments to show if consciousness does arise from a particular process and thus prove Searle’s argument correct. But if my experiments fail, it likely just means I do not understand the “thinking process” or consciousness process sufficiently to reproduce it.

    And while I may be wrong about the details or facts of the “thinking process”, the principle I am relying upon seems like the correct one. And you seem to agree with the principle of that approach, e.g. “computationalists will still be scratching their heads in 1,000 years time I should expect, failing to see why none of it works.”

    I am not arguing you can get consciousness from computation. That doesn’t work; Searle’s arguments are bulletproof. But that does not mean you can’t use computation to reproduce the same processes the biology is producing, and then have that process produce consciousness. It’s a two-step. And I do not mean simulate the biology. I mean reproduce the biological functionality that then reproduces the “thinking process” or consciousness.

    Do you think machine (or digital) consciousness would be possible under those conditions?

    Now, if you want to make an argument about that computational approach, I would love to engage in that. Because there just are not very many people like you who can see through the syntax/semantic problem and not mistake what a computer is doing for actual thinking. But to do that requires accepting, at least in principle, that there is a “thinking process” which produces mental contents and that “thinking process” can be abstracted away from the biology.

  125. John Davey says:

    Calvin

    I want to know then if mathematics and linguistics is a kind of mental content. I’m sorry, I’m not clear on that.

    It is not mental content as such – it is information, which is a tool of humans which allows ideas to be transmitted. Those ideas are objects of thought but you must not confuse them with the actual process of thinking.
    Human beings can construct abstract ideas but they don’t subsist in the same sense as actual mental phenomena.

    i) Ideas are abstract – such as the “idea” of mental phenomena, or the letter “A” – but mental phenomena are concrete, like feeling ill, or wanting to urinate, or feeling like you are about to sneeze, or seeing a bright red
    ii) Ideas contain representation; mental phenomena have none
    iii) Mental phenomena have an absolute shape and aspect which is completely unrepresentational. For instance, the feeling of being sad cannot be said to be like the feeling of being happy: they are both feelings, but they are totally distinct and have no relationship

    I think the expression “thinking is a physical act” is perhaps the best way to get some idea of where thinking processes sit in the universe. Thinking is physical but its outputs are not material.


    If mental content is 1st person, how can mental contents be shared?

    Language, information, bodily expression ..


    How can you and I speak about prime numbers like 7 unless we share the same mental content?

    I think we share the same ideas and culture, not “mental contents”. It’s best to stop thinking so much about substance. A brain thinks about a lot of things – the moon, the sun. Just doing so doesn’t turn either the sun or the moon into “mental content”. “Mental content” in the sense you are referring to it means “anything that the brain can think about” – I’d recommend reading about intentionality although personally I find it can be confusing.


    How does reality work in such a way that we can share the same mental content?

    We can and we can’t. We can because most humans are biologically similar and we are programmed to assume that most other people are like us. We don’t have a problem assuming our experience of “green” is the same as everybody else’s.
    Nonetheless – if we had a man blind since birth, we could never communicate to him what ‘green’ looks like. We operate as biological specimens, on instinct, with the implicit assumption that we’re all basically the same. And guess what, it works.

  126. John Davey says:

    Calvin

    Do you actually believe there is a thing called a mind?

    I’m sure of it


    And if the brain causes a mind, do you accept that the mind can cause physical phenomena (like movement)?

    I think it’s very likely, but I’m not as sure of this as the existence of mind itself.


    And if you accept a mind can be a causal source, what is the process the mind itself engages in to produce a physical cause?

    I don’t know, and I am happy to accept that mathematical physics and conventional theories of matter are as much use as chocolate teapots on this point

    I do not believe there is such a thing as a mind. The mind is a product of “thought processes”. I think the mind is just a reference to the set of representations [mental content] our brains can produce.

    Isn’t this a contradiction ?


    But I also have to accept that mental phenomena can also be 3rd person.

    Of course it can – I can refer to your consciousness, my consciousness. But I can’t access your consciousness from my consciousness. Nor can I access my own physical thinking processes, which are not available to consciousness.


    Can you take a step backward, and simply observe how your thoughts work?

    Very Zen. Yes – but only the conscious ones.


    The “thinking process” is producing a functional and representational process. I may be wrong about that.

    Almost certainly – but you won’t be the first!


    But that does not mean you can’t use computation to reproduce the same processes the biology is producing.

    Actually it does, and if you read his works in more detail you’ll find out why. Computers do not exist outside of human culture: they exist like books, cultural artefacts with a relationship to human cognition but none to the phenomenal world. There are no such things as natural computers: therefore they cannot possess natural causal powers. As consciousness is evidently a widespread, naturally-originating phenomenon, there is no way or means by which a computer could ever cause consciousness. It’s central to Searle’s thesis on the subjective. Computation is observer-relative – it doesn’t exist. So any output from computation doesn’t exist either. I don’t see how you could accept Searle’s arguments and then claim you can run a program which has causal powers! You need to go over his arguments again.

    J

  127. calvin says:

    John,

    Human beings can construct abstract ideas but they don’t subsist in the same sense as actual mental phenomena.
    ii) Ideas contain representation; mental phenomena have none
    iii) Mental phenomena have an absolute shape and aspect which is completely unrepresentational.

    I want to try to unpack this set of ideas and see how they play out in a couple of examples.

    First, I want to make sure that ideas, and the outputs of the physical thinking process, are all non-material. Are mental phenomena non-material? For instance, if I take a recently dead body and electro-stimulate it to ejaculate, does it have an orgasm? Does it have the feelings of ejaculation?

    And I think you would say the dead body would feel nothing (have no mental phenomena), but I am not sure if this is because you think the mental phenomena are not material, or because the dead body no longer has a mind. Or both?

    Another example I want to present is the one of taste. I am able to taste phenylthiocarbamide. When I was first tested for this with PTC taste strips, I had just rinsed with Listerine. When I put the strip into my mouth, my nose wrinkled and my face grimaced as if I were tasting something very bitter. But I could really only taste the paper and barely taste any bitterness. After a while we retested (using both a control and PTC paper) and I could strongly taste the bitter, and again my nose wrinkled and my face grimaced. (Settling a dispute with my son.) I believe this illustrates that the experience of bitter is a mental phenomenon and not a material phenomenon.

    What about mental phenomena that occur in dreams – are dreamed mental phenomena non-material?

    Second, can you tell me how ideas, the representations they contain, and mental phenomena subsist separately? Do you mean the subsisting process which produces them is different, or that they have a different quality or features? And if it’s a qualitative difference, is the subsisting process the same between ideas and mental phenomena (which, if they are both products of a brain, I would say is true)?

    To illustrate that: if I dream about an experience which produces the mental phenomena of sadness or arousal, and I dream about doing mathematics, it seems it could only be brain processes which produce both the mental phenomena and the ideas and products of thinking. And if this set of facts is true, then are all of these phenomena non-material? Could we reasonably classify all these phenomena under a single class? Could we call that class “experiences”?

    I think that treating all these brain-produced phenomena as a single class lets us talk about how different kinds of experiences can and do relate to each other – and that the ideas of mathematics and the feeling of about-to-sneeze are different because the contents of those experiences are different, and that if we change the contents, we change the experience. Conversely, we can change the brain’s physical thinking process: we can introduce drugs that induce the feeling of about-to-sneeze (nedocromil) and drugs that help with math (Adderall).

    …information, which is a tool of humans which allows ideas to be transmitted.

    I infer that you define information to mean (from Apple’s dictionary) “2: what is conveyed or represented by a particular arrangement or sequence of things” and not “1: facts provided or learned about something or someone.” The facts are purely ideas, and the “what is conveyed” is ideas plus the particular arrangement of things. I’m going to assume you mean something like the arrangement which conveys ideas. If that is what you mean, that is where we run into the syntax-to-semantics problem, correct? An arrangement is syntactical, and the “what is conveyed” is the semantics.

    I have two questions about this. One is how you explain the generation and the recognition of errors, or the ability to recognize a paradox. I find the Simon Says game very illustrative of the problem of error generation and recognition. But recognizing patterns in randomness, or perceiving illusions as illusions, presents a particular kind of problem: there is no conveyance of an idea, or the arrangement is a deception.

    I mean, how can you account for the origination of these kinds of meta-ideas, which by definition seem dependent on other ideas? And how do you account for idea representations, where an idea can be an error? Furthermore, how can an arrangement of syntactical information, which conveys an idea, be an error? How does the idea or representation of error originate?

    Secondly, how do ideas, such as an error, get associated to mental phenomena, such as the feeling of disgust? Some of your comments to me strongly indicated a feeling of how wrong or misguided you felt my point of view was. Another example: reading a story about, say, police brutality, we can feel anger, disgust, sympathy, frustration, etc. How do the words of such a story get associated to the mental phenomena of feelings? And how do the ideas which attach to those words get associated to those feelings?

    I think of this as the Art problem. How can a piece of art, such as a photograph of a mother, elicit sorrow, loss, and crying over a mother who has died – or even act as a mechanism which elicits memories of childhood? A photograph, or a JPEG, or a painting is information which conveys something that can induce mental phenomena to occur, and also induce ideas to occur in the person who experiences that art. More interestingly, they can induce the wrong ideas, the wrong mental phenomena – like how talking about itching and scratching can induce people to feel itchy.

    I am going to surmise from what you have said that this effect occurs because of the nature of the brain, which produces both mental phenomena and ideas and the products of thinking (thoughts), and that it is the similarity of brains which allows the syntactic content to produce semantic responses in different people. So someone who only knew Chinese would get no meaning from a story about police brutality, because their brain has not been structured to process the arrangements of shapes that make up English letters, and to associate those shapes to sounds, words, and ideas. The Chinese speaker’s physical thought processes cannot read or think in English.

    Thinking is physical but its output are not material.

    Can you tell me what it means to “think in English”? It seems to me that the physical thinking process is producing another process we also call thinking, which allows us to process ideas and feelings with the English language. English – meaning the words, the syntax, the concepts – is all ideas, and non-material; these are products of a physical thinking process. Is the process of interacting with these ideas also a product of the physical thinking process? Are there non-material processes?

  128. calvin says:

    John,

    Can you tell me what a mind is? Can you give me evidence for a mind?
    Does the mind do anything? Does it have processes or functions? I am assuming the mind is non-material – is that correct?

  129. calvin says:

    John,

    Regarding Searle’s argument:
    https://www.youtube.com/watch?v=vtD-X9MCyVY

    Searle defines consciousness as having particular features:
    1) Consciousness exists and it is irreducible.
    2) Consciousness is real and it functions causally.
    3) We know consciousness is caused by neurobiological processes and we know it is realized in the brain as higher level or system feature.
    4) Conscious states are all those states of feeling, thinking, and awareness. The subjective qualia or states are the essence of consciousness. Dreams are a form of consciousness. All conscious states are qualitative – not just colors, or music, or pain, or ideas, but all conscious experiences. All these states have a subjective quality.
    5) All conscious states occur as part of a unified conscious field.
    6) The entanglement of the biology and chemistry of the body with conscious states is generally called the “mind-body problem”.
    7) All conscious states are qualitative and are caused by lower level brain processes and are realized in the brain as higher level or system features.

    Given this argument is correct, in principle, why is it necessary to assert there is such a thing as a mind? Does mind mean “unified conscious field”?

    What, in your view, produces conscious causation? How do you account for bi-directional causation? Meaning: how do you account for the fact that a drug can change conscious states, and that conscious states (such as an idea) can change molecular behavior and produce action? How would you account for the interesting features that occur in a game of Simon Says?

    And absent an account of the bi-directional causation between conscious states and molecular phenomena, do you think it is impossible to generate an account which can be tested or demonstrated?

    I want to take issue with two parts of Searle’s argument.

    Could you explain what is wrong with my analysis and conclusion?

    First, if we can account for how conscious states function, as higher level features, irrespective of the underlying biology, then could we look for and possibly find these features in something non-biological? In Searle’s presentation he uses liquidity as a higher level feature of the water in his glass. It’s not a feature of the molecules. If we found the same feature in other groups of molecules, it seems reasonable to say those other groups of molecules are also liquid, e.g. liquid metal.

    So if we encountered something that had conscious states, demonstrated the other features of consciousness, and was also non-biological, we would necessarily conclude the non-biological phenomenon had consciousness. Is that a valid possibility, in principle?

    Second, that consciousness is caused by neurobiological processes is a description of the facts we see, but not necessarily a causal fact itself. For instance, only those birds, insects, and bats which have wings can fly (a neurobiological requirement). That does not mean that having wings is a necessary component of flight: SpaceX rockets fly, and they certainly do not have wings.

    The point being, that if we can abstract what the neurobiological processes are doing which produce the conscious states, then we may be able to implement that same process in a non-neurobiological system. Do you have any evidence that this proposition is impossible?

    I think both are valid possibilities in Searle’s framework. Searle’s key point is that any system that creates consciousness has to duplicate the causal powers of consciousness.
    https://www.youtube.com/watch?v=rHKwIYsPXLg (@ 59:45)

    Which means that to create a conscious machine requires understanding what conscious states are, in order to produce those states from the same processes that produce them biologically. As Searle says, “There is nothing in my account that says a computer could never become conscious.” So artificial consciousness is possible using a machine (or a group of machines). [Searle just doesn’t think the typical approach of programming is a very likely path. I agree with him on that!]

  130. John Davey says:


    Does mind mean “unified conscious field”?

    it probably does in Searle’s scheme .. or rather a “mind” is perhaps a less well-defined idea than ‘consciousness’


    What, in your view, produces conscious causation?

    No idea. That’s what neurobiologists are for, and they don’t seem to have got too far. To be fair, the subject matter is hugely statistically complex, and that creates massive problems. You may have a great respect for the general arms of physics – quantum mechanics, cosmology, etc. – but there are substantial bodies of people engaged in statistical physics, and their ‘performance’ is way less impressive than that of their cosmology/quantum counterparts. That is because the mathematics of single bodies is way easier than that of large collections of bodies.


    How could you account for the interesting features that occur in a game of Simon Says?

    Think I’ve already pointed out that nothing interesting is going on there. Consciousness has a first person character – when experienced – but it’s still a universal fact. I can talk of your consciousness, my consciousness etc.

    In Searle’s presentation he uses liquidity as a higher level feature of the water in his glass. It’s not a feature of the molecules.

    This is the weakest part of Searle’s argument. He also uses surface tension, I think. It’s a poor analogy: liquidity is a feature of the molecules, and it’s a very simple calculation to predict its existence. It’s not reducible in a qualitative sense, but it’s VERY reducible in a mathematical sense.


    So that if we encountered something that had conscious states, and demonstrated the other features of consciousness, and was also non-biological, we would necessarily conclude the non-biological phenomena had consciousness. is that a valid possibility, in principle?

    You’d need a test for consciousness and that’s not really there yet. I suppose ‘non-biological’ consciousness is a possibility.


    Do you have any evidence that this proposition is impossible?

    Do you have any evidence that there isn’t a small midget at the centre of Jupiter whose name is Frank and who eats mice?

    Spare me the disproof of the vaguely possible, please… it’s ridiculous. There is no reason to think that consciousness can subsist in ‘non-biological’ systems, as no-one’s ever come across it. It might be possible – as possible as a god who is a giant toad called Steve.


    Searle’s key point is that any system that creates consciousness has to duplicate those (the consciousness) causal powers.

    Correct, and I think that he’s entirely right


    Which means that to create a conscious machine requires understanding what conscious states are to produce those states from the same processes that produce them biologically.

    Don’t fall into the trap of thinking that Searle’s “processes” are the same informational processes that you are thinking about. He’s not talking about computation, or the flow of thoughts; he’s talking about matter in action. You’re not in the same ballpark on that one.

  131. John Davey says:

    calvin


    I am assuming the mind is non-material

    Non-material but still, in my opinion, phenomenal – or “physical” for want of a better word.

    J

  132. John Davey says:

    calvin


    For instance, if I take a recently dead body, and electro-stimulate it to ejaculate, does it have an orgasm? Does it have the feelings of ejaculation?

    This is a plan of yours??

    I think you’d need the brain to be up and running – in which case it wouldn’t be dead, I suppose.


    I believe this illustrates that the experience of bitter is a mental phenomenon and not a material phenomenon.

    You are falling into that bi-class again – mental and material, etc. Give it up. Mental processes are not material, but they’re not abstract either – they’re natural, “physical” for want of a better word. Space and time are similar: non-material but physical.


    Can you tell me what it means to “think in english”?

    You are referring to the difference between conscious and non-conscious thought. I’m not even going to try to speculate on the difference between the two: that’s for scientists, definitely not for me. Above my pay grade.

    J
