A short solution

Tim Bollands recently tweeted his short solution to the Hard Problem (I mean, not literally in a tweet – it’s not that short). You might think that was enough to be going on with, but he also provides an argument for a pretty uncompromising kind of panpsychism. I have to applaud his boldness and ingenuity, but unfortunately I part ways with his argument pretty early on. The original tweet is here.

Bollands’ starting premise is that it’s ‘intuitively clear that combining any two non-conscious material objects results in another non-conscious object’. Not really. Combining a non-conscious Victorian lady and a non-conscious bottle of smelling salts might easily produce a conscious being. More seriously, I think most materialists would assume that conscious human beings can be put together by the gradual addition of neural tissue to a foetus that attains consciousness by a similarly gradual process, from dim sensations to complex self-aware thought. It’s not clear to me that that is intuitively untenable, though you could certainly say that the details are currently mysterious.

Bollands believes there are three conclusions we can draw: humans are not conscious; consciousness miraculously emerges; or consciousness is already present in the matter brains are made from. The first, he says, is evidently false (remember that); the second is impossible, given that putting unconscious stuff together can’t produce consciousness; so the third must be true.

That points to some variety of panpsychism, and in fact Bollands goes boldly for the extreme version which attributes to individual particles the same kind of consciousness we have as human beings. In fact, your consciousness is really the consciousness of a single particle within you, which due to the complex processing of the body has come to think of itself as the consciousness of the whole.

I can’t recall any other panpsychist who has been willing to push fully-developed human consciousness right down to the level of elementary particles. I believe most either think consciousness starts somewhere above that level, or suppose that particles have only the dimmest imaginable spark of awareness. Taking this extreme position raises very difficult questions. Which particle is my conscious one? Or are they all conscious in parallel? Why doesn’t my consciousness feel like the consciousness of a particle? How could all the complex content of my current conscious state be held by a single invariant particle? And why do my particles cease to be conscious when my body is dead, or stunned? You may notice, incidentally, that Bollands’ conclusion seems to be that human beings as such are not, in fact, conscious, contradicting what he said earlier.

Brevity is perhaps the problem here; I don’t think Bollands has enough space to make his answers clear, let alone plausible. Nor is it really clear how all this solves the Hard Problem. Bollands reckons the Hard Problem is analogous to the Combination Problem for panpsychism, which he has solved by denying that any combination occurs (though his particles still somehow benefit from the senses and cognitive apparatus of the whole body). But the Hard Problem isn’t about how particles or nerves come together to create experience, it’s about how phenomenal experience can possibly arise from anything merely physical. That is, to put it no higher, at least as difficult to imagine for a single particle as for a large complex organism.

So I’m not convinced – but I’d welcome more contributions to the debate as bold as this one.

81 thoughts on “A short solution”

  1. It’s intuitively clear that combining any two non-flying material objects results in another non-flying object. That’s why airplanes don’t exist.

    The man summarily ignores emergent properties.

  2. It amazes me that so many people fall into this common fallacy that the properties and functionalities of the whole are merely the sum of the properties or functionalities of its parts. (Does this fallacy have a name?) As far as I’m concerned, it is obviously wrong. My favorite example to show this is to consider an automobile and its ability to go from A to B. Do any of its parts also have this ability? Generally, no. We can take some parts off and the whole car can still go from A to B but that’s a special case. In general, a car requires all its parts to do its designed task.

    I think where people often go wrong with consciousness is to think of it as a property like the wetness of water. They start from an assumption that consciousness is a property of the brain and whatever it is made from rather than a function that the parts of the brain bring about by how they interoperate.

  3. On it being intuitively clear that combining two non-conscious objects only results in another non-conscious object, that’s only true until it isn’t. It’s clear that combining the vast majority of objects that are not the game Angry Birds won’t produce Angry Birds. But combining a particular combination of such objects will.

    Along those lines, Matthias Michel tweeted a reply that if you remove the midbrain from a conscious brain, that brain won’t be conscious anymore, but neither will the removed midbrain. If you are able to re-install the midbrain, the system will be conscious again.
    https://twitter.com/MatthiasMichel_/status/1167116440740290566

    On the one particle thing, the brain and body are constantly recycling particles. Even in neurons. (The neuron itself isn’t recycled, but its components are.) Does that mean that we sometimes pass our consciousness when excreting waste? Is it replaced by a new one? Or are we a whole bunch of separate individual consciousnesses who are constantly coming in, rising to prominence, before being flushed away? Maybe Boltzmann brain on steroids?

  4. In Bollands’ theory, the combination problem is replaced by something very similar: Why is a collection (combination) of parts identical with a particular part rather than another one? Moreover, it is contradictory to identify a collection with any of its parts.

  5. @ Paul T –
    It’s called the fallacy of composition. In the reverse direction, where the parts are assumed to have the properties of the whole, it would be the fallacy of division.
    –Paul T

  6. Postmodern philosophy is about when…

    When does a hard-problem-present itself to me…

    Reality, really, isn’t it only when, it’s in front of me…

    …ideas sciences morals practices, postmodern philosophy today…thanks Peter

  7. I think the idea of homuncular particles might very well be real and might lead to the greatest discovery in science ever, mostly ending pain and death and giving us the ability to make our own custom designed bodies for almost any environment in the universe.

    If the finite universe is conscious with free will and the particles are its children that take a very long time to become a universe, particles would have more energy (a good measure of both consciousness and free will) and become more massive over time and have more complex behavior.

    That has been the trend for billions of years in the universe — Hydrogen being converted to helium and other higher energy, higher mass particles.

    The combination problem can be resolved by having a good test for when a combination is in effect a higher consciousness. A very good candidate is Planck’s law, E=hf, and Einstein’s E=mc^2. A higher consciousness would almost certainly have a higher clock rate, allowing it to perceive, think and act with free will faster.

    A helium nucleus of two protons and two neutrons acts like a unified consciousness most of the time, giving a clock speed f=E/h about 4 times higher than hydrogen’s, which can be tested for in a double-slit experiment.
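
    (Working through the arithmetic behind that “about 4 times” figure: a minimal Python sketch using standard textbook rest masses. The frequencies are just the usual Compton frequencies f = mc^2/h; nothing here is specific to the homuncular theory itself.)

    # Compton frequencies f = mc^2/h for a hydrogen nucleus (proton) and a helium-4 nucleus.
    PLANCK_H_MEV_S = 4.135667696e-21           # Planck's constant in MeV*s
    REST_ENERGY_MEV = {
        "proton (hydrogen nucleus)": 938.272,  # mc^2 in MeV
        "helium-4 nucleus": 3727.379,
    }

    freqs = {name: e / PLANCK_H_MEV_S for name, e in REST_ENERGY_MEV.items()}
    for name, f in freqs.items():
        print(f"{name}: f = mc^2/h ~ {f:.3e} Hz")
    print("helium/hydrogen ratio:",
          round(freqs["helium-4 nucleus"] / freqs["proton (hydrogen nucleus)"], 2))
    # Prints a ratio of about 3.97, i.e. the "about 4 times higher" clock rate claimed above.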

    The test would be whether the combination obeys f=mc^2/h or not — only elementary particles and very rigid small molecules up to about the 60-carbon fullerene could pass the double-slit test. When the fullerene is unified, with a high frequency, that could be like an awake state, and when it acts more like a collection of lower-frequency carbon atoms, that would be like a sleep state.

    A highly massive homuncular particle would operate at a much higher frequency by f=mc^2/h than any other particle in the brain, maybe having a billion times the mass and thus frequency of typical particles allowing it to perceive, process, and decide (free will) a billion times faster. A year for a high energy particle might seem like a second to a low energy particle allowing the high energy particle to easily be in charge and the low energy particles to seem like objects not subjects.

    A likely way a homuncular particle would interact with the brain is electromagnetically, which might not be too difficult to discover. There would have to be a homuncular code where the brain sends coded visual, audio, and other sense information and receives a code for voluntary free-will actions.

    If this wireless code exists, breaking it could be the greatest discovery ever. With the code broken, the location could easily be discovered and custom designed bodies could be built for almost any environment. Almost no more death or pain, and maybe even a way to communicate with our universal parent, the Universe.

  8. Are higher frequencies found in seeing entity transformations…

    …of ideas materials morals and practices as our experience…

    Then we may have a code to work with from “Teach your children well”…

  9. If panpsychism were true, wouldn’t we expect to see some form of rudimentary consciousness in other complex objects?

    To me a defining characteristic of consciousness is the (perhaps apparent) ability to *choose*. Every other system we know follows fully deterministic rules and does not choose. Only brains confer this (apparent) ability.

    To me this is clearly due to, in part, the complex structural organization of the brain and has little to do with the individual parts (as is true in all systems).

    As others have pointed out, cars aren’t made from little tiny cars, airplanes aren’t made from tiny airplanes. The idea that consciousness is made from little tiny bits of consciousness just seems a non-starter to me.

  10. Wyrd Smythe: Two bones to pick.

    You claim that only brains have the ability to choose as everything else is deterministic. This sounds like you think brains have some “magic sauce”. Did I misunderstand? If not, then what’s the source of this magic?

    You claim that consciousness comes from the “complex structural organization of the brain and has little to do with the individual parts”. How can we separate a structural organization from the parts it structures and organizes? While no part is conscious by itself, I see no reason to believe that consciousness comes from an organization of parts somewhere in the brain. It needs all its parts connected properly before consciousness can operate. Same for our sense of vision, smell, or any other identifiable brain function. It’s all certain parts working together.

  11. Two ‘surelys’: Surely the ‘hard problem’ is EXACTLY about how ‘particles and nerves’ (an odd phrase) ‘come together’ (in each one of us, as fetuses) to ‘create experience’; and even more Surely, how at some point in the arising of our species, and in whichever ancestor species the phenomenon DID arise, the first glimmerings of subjective awareness DID appear. The REAL ‘hard problem’ is THAT one … how some (presumably) early ‘living’ organism went from the state of having nothing that could be called ‘consciousness’, to the first inklings of WHAT WAS TO BECOME our ‘full-blown’ version.
    Everything else is elaboration, increasing complexity, and window-dressing.
    The Kierkegaard recourse to “Therefore it is not so … ” and the Searle trick of ‘just knowing’ never did cut it, and never will.
    Actually, the only thing we do know, ‘intuitively’, is that AT SOME POINT, something in a line of transmission did in fact go from being ‘just’ a physical object to being, in some VERY PRIMITIVE WAY, a ‘conscious’ physical object. Funny to think that there must in fact have been some one individual ‘organism’ that was THE VERY FIRST physical object to ‘experience’ subjectivity, in its most rudimentary and perhaps you could say precursorial form.
    Of course the ‘problem’ has no solution, in terms of ‘what consciousness IS’. But isn’t the interesting thing to go on searching for some formulation that could possibly capture WHEN, in evolutionary AND in ‘fetal’ time, and WHERE, in a setting of ‘precursorial neurology’ and equally in the developing neurology of that fetus that became US, such a thing might reasonably be thought to have occurred? Because that might be the clue to HOW. Without recourse to magic, mystery, or silly ‘intuitive’ premises …

  12. It seems to me that this radical version of panpsychism is either an out-and-out dualism, or else in conflict with current physics. For if we suppose that the mental properties of the ‘self-particle’ (egon?) supervene on its physical properties in the sense that a change in the mental properties entails a change in the physical properties, then that particle would fail to be identical to other particles of its kind, and in fact, all particles differing with respect to their mental properties would also differ physically. But quantum mechanics requires the indistinguishability of particles of the same sort; if this indistinguishability fails, then certain elementary predictions of QM (such as particle statistics) would come out wrong, too. Since they don’t, we know that (at least, up to the very stringent levels of exclusion put up by the current experimental situation) they must be identical.
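
    (A toy illustration of the particle-statistics point, not an example from the comment itself: two particles over two modes, counted once as distinguishable and once as indistinguishable, with every microstate assumed equally likely. A minimal Python sketch:)

    from itertools import product
    from collections import Counter

    # Distinguishable particles: each particle independently picks mode A or B, 4 microstates.
    distinguishable = Counter(tuple(sorted(c)) for c in product("AB", repeat=2))
    # Indistinguishable (Bose-like) particles: only the occupation pattern counts, 3 states.
    indistinguishable = Counter([("A", "A"), ("A", "B"), ("B", "B")])

    for occ in [("A", "A"), ("A", "B"), ("B", "B")]:
        p_dist = distinguishable[occ] / sum(distinguishable.values())
        p_ind = indistinguishable[occ] / sum(indistinguishable.values())
        print(occ, "distinguishable:", p_dist, "indistinguishable:", round(p_ind, 3))
    # Distinguishable: 1/4, 1/2, 1/4.  Indistinguishable: 1/3 each.
    # If 'identical' particles secretly carried distinguishing properties, the first
    # counting would apply and the measured statistics would come out differently.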

    That means that the only way for such a theory to work is to have particles be identical physically, but differ with respect to their mental properties. But then, it seems we don’t really have a panpsychist theory anymore, but rather, a form of dualism.

    This sort of thing isn’t an argument against standard panpsychism, by the way, since it’s entirely consistent to have two physically indistinguishable particles instantiate the same mental (or protomental) properties, as well.

  13. I doubt there was a specific time in our evolutionary past when consciousness “lit up”. If we think of consciousness as self-reflection then it is clear that it grew gradually as brains grew in size and power. If consciousness is merely the brain thinking about its own functioning, then it depends a lot on what that brain thinks about and its overall power. Even if an ant is conscious, the only things it can reflect on are things that ants think and do: eating, finding a mate, defending itself. It may well be that all animals except for the most primitive ones are capable of self-reflection but, since they have little to reflect upon, their consciousness is very far from our own.

    As far as children suddenly becoming conscious, isn’t that more a function of memory than consciousness? I suspect that we’re conscious from a very early age but we just don’t remember what it felt like. Since consciousness has a very profound effect on behavior, we would notice if a child suddenly became conscious. Instead, our memory gets better and better as we age and we start remembering what it felt like to be conscious. Learning to recognize and report consciousness may also be involved.

  14. #10 @Paul Topping: “This sounds like you think brains have some ‘magic sauce’. Did I misunderstand? If not, then what’s the source of this magic?”

    There is no ‘magic sauce.’ A full reply is too long for a comment, but essentially I key off our imagination and apparent sense of free will. I think the brain is a noisy, chaotic environment balanced in a supercritical state such that very tiny changes emerge from the noise floor and can be amplified into mental states involving will and purpose.

    The “how” of this would answer both the hard problem and the issue of epiphenomenalism. We simply don’t understand the brain/mind problem well enough to answer yet.

    But I do believe our ability to imagine futures and select among them is real, so therefore I believe brains do confer the ability to choose.

    “How can we separate a structural organization from the parts it structures and organizes?”

    By constructing an artificial brain (think Asimov’s Positronic Brain) and seeing if it’s conscious. Can metal, silicon, and plastic do the same thing biology can? If so, we know it’s the structural organization (including the operation of all the parts).

    I think consciousness — the mind — arises from the operation of the whole, so I completely agree with your last sentence.

    #13: “Even if an ant is conscious, the only things it can reflect on are things that ants think and do:”

    A long-time friend of mine is an avid fisherman, and I fished with him for decades. We often discussed whether fish were conscious — whether there was “something it is like” to be a fish — and we’ve both concluded that fish are little more than algorithms. They aren’t conscious in any substantial way.

    Let alone ants. I’m quite sure ants don’t “reflect” on anything at all. They’re just little tiny algorithms. I suspect it’s a matter of lacking the complexity to transcend the basic programming. I don’t think ants (or fish) choose anything.

  15. “we’ve both concluded that fish are little more than algorithms. They aren’t conscious in any substantial way.”

    I have many problems with this position. We are all algorithms. If not, then you must believe in some kind of “magic sauce” that we have and fish do not. Of course, the human brain’s algorithm is more powerful than the fish’s and it calculates using different input and output. As far as “supercriticality” is concerned, I doubt you have proof that it is involved in how our brain works. This sounds to me similar to Penrose’s claim that quantum mechanics is involved in consciousness. The logic seems to go like this: consciousness and quantum mechanics are both things that we do not understand very well so they must be related. It’s not logical at all.

    What does “transcend the basic programming” mean? IMHO, there’s no transcendence going on anywhere. Our brain contains billions of neurons. Our current science allows us to barely monitor how a few of them work in real time. Why should it be a surprise that we don’t understand the brain’s algorithm? It is still no reason to think it involves transcendence or supercriticality, whatever those mean in this context.

    We are close to understanding how primitive creatures’ nervous systems are wired up and how they work. This is within reach now. Once we understand that, imagine scaling its complexity up by many orders of magnitude. Simultaneously we see where the power of the human brain comes from and the scale of the problem we have in understanding it. That said, it is still an algorithm, just not a simple one.

  16. Then dualism has never been about two, but about dueling…

    About identities interacting experiencing…

    Always (for us) at least three biology’s in motion…

  17. The Universe has to juggle two big responsibilities — to help develop the minds (the internal spacetime geometry of its particles) and to make their external behavior and properties standard for each kind of particle so there won’t be chaos and complicated bodies and machines can be constructed eventually.

    Electrons and protons are not mature enough to be able to reliably have standard external behavior. The Universe would control the external behavior of electrons.

    An electron’s free will would be used to develop its own mind (internal spacetime geometry). The Universe would give it basic games like distinguishing shapes, colors or tones — giving it pleasure when the electron wins the game. Very basic moral games could also be played and only when the electron chooses the very dark path would pain be experienced. A morally good choice results in a lot of pleasure.

    Electrons and other low energy particles behave in such a way that it seems a higher power than the electron is controlling its external behavior. The electron itself couldn’t possibly know all the paths it could have taken and those possible paths interfere with each other — but the Universe could very well have that ability. The Universe not only provides reliable standard behavior for each electron but also chooses the most efficient path for the electron to take based on information the electron doesn’t have. The Universe acts as a Maxwell’s demon for low energy particles, allowing them to take paths that take the least time and are therefore the most energy efficient.

    Higher energy particles can be given more control over their external behavior by the Universe. Eventually high energy, high frequency particles could control a brain that controls a body. Even higher energy particles could have enough maturity to become a whole new universe and be a member of the very highly advanced civilization of universes, and raise particles of their own that may, after a very long time, become universes and join the universe civilization.

  18. Consciousness is either a natural phenomenon that appeared at some point in time since the origin of the universe, or it is somehow ‘inherent’, floating around since before time began, the stuff of miracles, supernatural events, panpsychism, the religious ‘soul’; just something that is always and everywhere, but of course we only encounter it directly in our own personal and enclosed subjectivity.
    In the one case, that of an emergent but entirely ‘natural’ phenomenon, any theory that seeks to explain it will be concerned with WHEN it appeared, HOW it appeared, what the ‘necessary conditions’ are, that kind of thing. This would be the quest for a ‘mechanism’, some arrangement of structures and processes in the brain that PRODUCES the subjective ‘fact’ of consciousness; and it may well be a hopeless quest, especially if we forget to produce an unambiguous definition of what exactly we are seeking to explain; but at least it is a quest.
    In the other case, that of something that somehow exists ‘everywhere’, and just happens to attach itself to certain situations: surely the only question that even makes sense is … the same question: WHEN does it attach? WHAT does it attach itself TO? HOW does such a thing come about? Although in most of the ‘miraculous’ categories, perhaps it is fairer to say that the question has simply been begged, and those discussions are not about ‘the hard question’ at all, but more of a political lobbying for one magical solution over another.
    Consciousness, as a phenomenon, seems ‘intuitively’ to be DIFFERENT from anything else, and does not fit neatly into any ‘theory of reality’. And yet it is the one ‘thing’ that we KNOW, directly, does exist, does happen, is, in the most real sense, REAL.
    My guess: the ‘nature’ of consciousness will always be elusive, apart from its phenomenal, subjective descriptions; and if such notions as ‘panpsychism’, paranormal or supernatural ’causes’, or consciousness as something that just ‘happens to happen’ in situations of great complexity are allowed to stand, the question is begged, the premises of the argument are intrinsically unsound, and everyone will go on being greatly entertained by a discussion that has nothing to do with the central issue.

  19. @Timothy: I think you have the right of it, consciousness will ever be elusive because we wouldn’t all agree on the “answer” or even, I suspect, what an “answer” should ultimately look like.

    For example the “magic sauce” criticism can be pointed in whatever direction aligns with the personal aesthetic one adheres to when thinking about metaphysics ->

    It seems there’s some kind of magic sauce to get from non-conscious material to consciousness.

    How else could we expect to get raw feels from certain arrangements of this matter, or how certain arrangements could point to entities in the world when we have thoughts about things?

    Of course the magic sauce isn’t just with regards to consciousness. Why does the matter behave in such a way that we find patterns we can model with algorithms? Or to put it another way, why don’t the laws of physics change?

  20. Regarding Penrose’s argument, from Peter’s book I believe it’s less the mysterious nature of consciousness and QM and more his belief that both have aspects that are non-computable.

    Which isn’t to say Peter endorses Orch-OR, but there does seem to be some agreement between himself and Penrose on consciousness & (non) computability.

  21. I don’t know about QM, but we hardly know enough to claim that consciousness has non-computable aspects. Do we have a definition of consciousness which would allow us to determine its computability? The question can’t even be asked in good faith. Based on that kind of reasoning, everything unknown has non-computable aspects. It may sound ok if we assume knowing == computable, but computability is a much more precise concept.

    Penrose shows his true colors when he states that he has mathematical intuition that tells him when some mathematical statement is true but that computers will never be able to do the same. That’s ok as a motivating theory but it’s “magic sauce” until we have some proof.

  22. Thanks for the pointer, Sci. I started reading it but found that I disagree immediately. The claim is that while the Halting Problem can’t be solved by computers, humans can still solve it. That’s just not true.

    Basically, the Halting Problem says that, in general, there is no algorithm that can be applied to any program to determine if it will halt. There is really nothing special about halting. The statement can be generalized to determining anything about all programs. Basically, you can’t tell what a program will do unless you run it. Of course, you can tell what some trivial programs can do, as well as answer certain questions about complicated programs. You just can’t answer all questions of all programs. It’s basically Incompleteness applied to algorithms.
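
    (For concreteness, a minimal Python sketch of the standard diagonal argument. The halts() oracle is hypothetical, no such total, always-correct function can exist, and that is exactly what the sketch is meant to show; running the script only prints the explanation.)

    def halts(program, argument) -> bool:
        """Hypothetical perfect halting oracle; kept as a stub because none can exist."""
        raise NotImplementedError("no general halting oracle exists")

    def contrary(program):
        """Halt exactly when the oracle predicts program(program) does not halt."""
        if halts(program, program):
            while True:        # oracle said "halts", so loop forever
                pass
        return                 # oracle said "does not halt", so halt immediately

    if __name__ == "__main__":
        print("If halts() were real, contrary(contrary) would halt exactly when")
        print("halts(contrary, contrary) is False, contradicting the oracle's own answer.")
        print("So no such general oracle can exist.")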

    I haven’t seen any proof, hand-waving or otherwise, that humans can circumvent the Halting Problem or Incompleteness. My guess is that any claim that humans CAN do this is similar to Penrose’s mathematical intuition claim. However, mathematicians require proof of mathematical statements, not just intuition. Many times through history mathematicians have proved something only to have the proof overturned many years later. So much for mathematical intuition.

    Perhaps the idea is that if some algorithm is stumped, humans can often see a way around the blockage or at least something else to try. This is certainly true. Our brains are much more general purpose than any algorithm we’ve yet invented. However, this does not mean that the human brain is not algorithmic. We just don’t know how it works yet. I suppose it is natural to think that if some problem can’t be solved, we should question our assumptions. We haven’t been able to invent human level AI yet, so perhaps our brain is not algorithmic. Maybe, but I for one think it is way too early to go that way.

  23. Isn’t Bollands’ argument really for Monadology, thus inviting the Hard Problem of Matter, which asks what a “particle” would be exactly beyond an abstraction?

    It’s also amusing that in that Twitter thread he refers to “Cosmopanpsychism” as obviously “malarky” after making such a bold claim as that the experience of the body is in a singular particle. As I said before, it all comes down to aesthetic pre-commitments.

  24. I don’t know about QM, but we hardly know enough to claim that consciousness has non-computable aspects. Do we have a definition of consciousness which would allow us to determine its computability?

    I think it’s actually harder to argue that consciousness should be computable, and absent any good account of how that could be the case, we should not expect it to be. Basically, the most natural account of computation casts it in terms of a mapping between states of a physical object (a computer) and some abstract object representing the computation (say, a partial recursive function). But then, what is the nature of that mapping? As it turns out, there are many mappings you can apply to any given physical object that yield distinct computations (this is the source of worries such as Putnam’s stone and Searle’s wall).

    The mapping itself isn’t any different in kind from that between a model and the object it models—say, between an orrery and the solar system. Elements of the model are taken to be, essentially, symbolic for elements of the object. But then, modeling—and by extension, computation—is just a matter of how a system is interpreted. This casts immediate doubt on the proposition that whatever does this interpretation could be cast in terms of computation: we run straight into a vicious circularity, for if it needs an interpretation to ground any kind of computation, what supports the computation that is supposed to ground that interpretation?

    So it seems that interpretation isn’t something that can be given a computational account. Should this really surprise us? I don’t think so: computation, and modeling more generally, really only trades on structural equivalences. I can use a system as a model of another because both support the same (e. g., mathematical) structure: I can model the set of my maternal ancestors using the set of books on my shelf, ordered according to their thickness; thicker books represent or are interpreted as ancestors of thinner ones. So if book A is thicker than book B, ancestor A* (great-grandma Alice) is an ancestor of ancestor B* (grandma Belinda). The structure both support is that of a linearly ordered set, and by means of their structural equivalence, I can use the model to support inferences of the original object. Nothing else ever goes on in modeling, theorizing, and computation (where a physical object acts as a model of an abstract one).
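
    (A small sketch of that modeling-by-shared-structure idea, reusing the comment’s Alice and Belinda and adding a made-up filler name, “Carol”, purely for illustration:)

    # Books ordered by thickness stand in for ancestors ordered by generation;
    # only the linear order is shared between model and target.
    book_thickness = {"Alice": 900, "Belinda": 600, "Carol": 300}   # pages

    def models_ancestor_of(x, y):
        """Inference made within the model: the thicker book is the earlier ancestor."""
        return book_thickness[x] > book_thickness[y]

    print(models_ancestor_of("Alice", "Belinda"))   # True: great-grandma precedes grandma
    print(models_ancestor_of("Carol", "Alice"))     # False
    # The books' other properties (colour, author) never enter into it; the model
    # works purely through the structural equivalence of two linearly ordered sets.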

    But it’s highly implausible to claim that structural properties are all there is to the world (ontic structural realists notwithstanding). In fact, the proposition runs into Newman’s objection, which caused Russell to drop his structural account of perception. All we need, however, is something to color in structure, to actually support it—some intrinsic properties to ground the structural, relational properties. One might then propose that these intrinsic properties are experiential, and it’s these experiential properties that allow us to interpret symbols—we interpret, say, a lamp lit up as 0 or 1 because we experience it that way (an experience of something as something else).

    If this sort of thing is what underlies interpretation, it should be no wonder that it escapes computability, as it is in fact rooted in the intrinsic properties that ground the structural equivalences that realize computations. Moreover, if this sort of thing is right, we should not be surprised that there is something like a Hard Problem: solving it essentially entails the attempt of grounding intrinsic properties in the structural.

  25. Jochen, I didn’t follow most of what you said here but there are some things I can take a guess at and respond to them.

    When you talk of “interpretation”, perhaps you are talking about a human observer looking closely at someone else’s brain while it operates and placing some interpretation on values you see in that computation. This would be much like a human computer programmer writing code and debugging it. If so, the flaw in that logic is that evolution placed no value on that observer understanding the operation of the human brain. There’s no reason for that observer, even if he or she believes the brain is a computer, to expect to interpret values inside the brain.

    Interpretation of what goes on in Artificial Neural Networks is a big area of research lately. The designer of an AI program for facial recognition knows exactly how his program works at some level. After all, he wrote it and, even if he didn’t, he can read all the source code and input data. His knowledge is complete at that level. The program may do a wonderful job of facial recognition. However, the programmer still may not be able to interpret the meaning of individual neurons or program variables in terms of how it recognizes faces.

    We should expect our observation of values in the human brain to be much harder to decode. After all, we don’t have the source code and didn’t write the program. Even more important, the program is many orders of magnitude more complex. Plus, the program is embedded in living cells, each of which has to maintain its internal workings, most of which have nothing to do with brain function but merely keep the cell alive. We have the complete connectome of some primitive organism with a few hundred neurons and we still don’t know how its program does what it does. To make matters worse, we don’t even have a precise definition of the whole organism’s behavior.

    Coming at this from another angle, there’s the argument from simulation. If we could simulate the behavior of each neuron and synapse on a computer, then wouldn’t the result be equivalent to the human brain? It may not be practical with today’s technology but all that we lack is knowledge and fast hardware. There are no theoretical barriers to doing it.

    I think some people have a very simplistic view of what computation is all about. When I was a teenager, my uncle was talking about the potential of AI. He didn’t have much knowledge of brains or computers but liked technology. I told him I didn’t think computers could do anything more than what programmers had instructed them to do. I remember saying that to this day because it felt like I had told a lie. I didn’t really know what I was talking about and knew I was just trying to sound smart. Now I feel a tiny bit guilty about saying that. Now that I have been a programmer for decades, I realize what a dumb thing that was to say. In general, programmers do not know what their programs are capable of. This is what the Halting Problem tells us. We really can create programs whose execution and output can’t be predicted. The only way we can tell what it will do in detail is to let it run. The brain is such a program.

  26. Well, several orders of magnitude more complex but still just as much a target of incredulity: single neuron models of consciousness [Sevush 2005; Edwards 2005,2013] where “conscious behavior appears to be the product of a single macroscopic mind, but is actually the integrated output of a chorus of minds, each associated with a different neuron”. Then consciousness in each cell might be due to a single quantum monad sitting in the right spot…

    I don’t like Jochen’s line of argument re interpretation because, like the computational stone, how do the thoughts interact with the world? The structure of that relation is where value and intention come in.

  27. When everything is there, then sometimes we’re here too…

    The value of being there for the value of being here…

    It’s labor day, let’s never stop working…

  28. Maybe not quite your usual panpsychism, very it from bit, Chapter 6 of Jane McDonnell’s The Pythagorean World: Why Mathematics Is Unreasonably Effective In Physics presents a Consistent Histories quantum monadology:

    Before the beginning of time, quantum reality exists in a superposition of everything and its negation (so that it is everything and nothing). Monads have no thoughts, and there are no phenomena. At the beginning of subjective time, all monads are in the same state; they are like one thing with one thought: “I exist”. This is the transition from the Many to the One. The “I” in the initial thought has ambiguous reference; it refers both to the monad itself and to Being. Knowledge of the initial thought is knowledge of the One and the Many — of the inexhaustible multitude of existents and their complete and perfect unity. Subsequently, the “I” splits into an infinity of different perspectives, like white light being split into its colours. Each monad develops its own history by learning, projecting and recording. At each step of subjective time, it forms a thought. All histories contain the initial thought which is the primary truth of every monad. Knowledge of the One and the Many is projected at every time step, so that monads are always seeking unity in diversity through the creation of a thought.
    The monad’s desire for unity is in creative tension with its desire to learn more and differentiate itself from others. It creates an identity for itself through its construction of a narrative history. All possible monads exist simultaneously. Some have only limited concepts, perhaps projecting the same concept over and over again without learning [I guess these make up ordinary “dumb” matter]. Some form into communities, developing diversity through the harmonisation of rules for projecting concepts. Some rules lead to more and more concepts, more and more diversity. Monads in these communities are more perfect because they have richer experiences and greater capacity for introspection [IGUSs]. Actuality results from the competition of monads for greater perfection.

    Monads self-organise into consistent frameworks with compatible histories. Within such a framework, they can form a consistent view of reality. They don’t communicate directly with one another; however, every monad is in an ongoing conversation with Being through its projection of possible actuals and recording of actual outcomes, so Being is the mediator of meaning creation. Monads respond to each other’s properties according to “laws” which they develop as they go along, through learning and coordinating. All laws are the result of random processes, like the laws of thermodynamics. Monads within a framework project at different scales (i.e. using different coarse-graining). Meaning is scale-dependent, as are laws, so there are scale-dependent families within a consistent framework. Laws at different scales in a consistent framework have to be compatible with one another.

    If alternative frameworks have the same laws and concepts, they can be grouped into a possible world, even though they have incompatible histories and can’t be combined to give one view of that world. This is an important point. In quantum monadology, monads can be incompatible but still compossible. They may be incompatible because their histories contain dual quantum properties which are incompatible, yet still be compossible because they are describing the same possible world.

    This could be consistent with Benioff [2002,2005,2018] on locality of mathematics, and Bernal [2008]:

    …if we assume (1) that physical theory has the structure of a differentiable 4-manifold, (2) that it contains IGUSs, and (3) that no IGUS has a privileged perspective in space or in time, then only four possible models of space-time emerge. One of these models corresponds to Newtonian space-time and another to General Relativity.

  29. When you talk of “interpretation”, perhaps you are talking about a human observer looking closely at someone else’s brain while it operates and placing some interpretation on values you see in that computation. This would be much like a human computer programmer writing code and debugging it. If so, the flaw in that logic is that evolution placed no value on that observer understanding the operation of the human brain.

    That’s not quite it. Rather, I am saying that an interpretation is necessary before one can sensibly talk about the computation being performed by a physical system, just as an interpretation is necessary before one can sensibly talk about the meaning of any symbol. For me, as a German, the symbol ‘gift’ will mean something very different (‘poison’) than for an American (‘present’), say. In such a case, it’s uncontroversial that meaning is in the eye of the beholder; but computation isn’t ultimately different, in that it performs syntactical operations on semantically individuated vehicles, and thus, can only be given a definite content upon interpreting these symbols.

    Computationalism wants to claim that ‘A brain B realizes (or gives rise to, or produces, or what have you) a mind M by means of a computation CM’. This is a special form of ‘A physical system P realizes a function F by means of a computation CF’. For the computational thesis to have any content, there needs to be an objective fact of the matter regarding the truth of such claims; but one can show, by example, that this is not generally the case, i. e. one can exhibit a system P such that it can equally well be considered to implement a computation CF, realizing a function F, and a computation CF’, realizing function F’ (and many more besides*).

    But if that’s the case, then we need to ask how it is that physical systems seem to implement definite computations pretty routinely. After all, there’s not much ambiguity regarding what the computer I’m writing this on right now computes. The reason for this is that computers, due to clever UI design, are geared towards manipulating symbols whose interpretation is so obvious to us that we don’t recognize there’s any interpreting going on at all—like reading a text in a language we understand, the symbols seem to carry their meanings on their sleeves; but of course, this meaning is entirely due to convention.

    Consequently, the correct version of the statements above reads ‘A physical system P realizes a function F by means of a computation CF under interpretation IF’. But then, we’re left with the question of just what sort of thing this interpretation is. The positive argument regarding its connection to experience and intrinsic properties is somewhat lengthy, but the negative argument that it can’t be computational is simple: suppose that it were; then, there exists some computation CIF producing IF. But, if the above is right, then that entails the need for a further interpretation such that some physical system could be considered to implement that computation—but then, we’ve essentially entered a homuncular regress, and are never getting any closer to actually implementing any given computation. Hence, such interpretation can’t be computational, and provided our minds actually do interpret things, there’s a non-computational aspect to minds.

    *Trivial examples of this are simple: an AND-gate becomes an OR-gate under flipping the interpretation of voltage levels; after all, it can hardly be argued that ‘low voltage’ should somehow have a greater claim to representing a 0 than ‘high voltage’ does. But more complicated examples aren’t much harder. If you have a box with four switches and three lights, such that you can flip switches to have lights come on, then one interpretation of that box might be that it performs binary addition; but it would be easy to reinterpret the meanings of switch positions to either different binary digits or even something else entirely, and likewise for the lamp states, such that the box implements an entirely different function—and not one simply ‘dual’ to addition by flipping 0s and 1s, for example.
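
    (A minimal Python sketch of the AND/OR point, not from the comment itself: one fixed physical input/output behaviour, two voltage-level interpretations, two different Boolean functions.)

    from itertools import product

    def gate(v1, v2):
        """Fixed physics: the output is 'high' exactly when both inputs are 'high'."""
        return "high" if (v1 == "high" and v2 == "high") else "low"

    read_A = {"low": 0, "high": 1}   # interpretation A: high voltage means 1
    read_B = {"low": 1, "high": 0}   # interpretation B: high voltage means 0

    for v1, v2 in product(("low", "high"), repeat=2):
        out = gate(v1, v2)
        print((read_A[v1], read_A[v2], read_A[out]),   # reads as an AND truth table
              (read_B[v1], read_B[v2], read_B[out]))   # reads as an OR truth table
    # The physics never changes; only the interpretation does (De Morgan duality).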

  30. Hoping to start a new conversation from a different launch-point.
    1. I am a human, an organism possessed of a brain that has certain capabilities.
    2. One of those capabilities, which I know first-hand, is to present me with a ‘virtual reality environment’, subjectively witnessed, that includes a representation of the brain, its supporting processes and structures, and the pathways for informational input into its computational milieu.
    3. Within that model, and according to that model, it is obvious that there is no other access to information coming ‘objectively from’ whatever the world around me is apart from the limited content of those sensory input channels. Everything else is MADE UP FROM SCRATCH..
    Therefore, any ability I have to comment on, assess, or explain anything, whether to do with the world itself (aka ‘reality’, or more simply, what happens to be there; we used to call it ‘Nature’, the inclusive term for ‘things as they are’ … ) or in my understanding of that ability and the tools it affords, will be within the parameters and constraints dictated by the properties of the ‘world’ THAT MY BRAIN CREATES. But please note …
    4. Yes, that world, as it appears to me, subjectively, IS Euclidian in its geometry, RUNS sequentially in and through ‘time’, and seems, as we have painstakingly worked out over the ages, to involve events that act as ’causes’ CAUSING subsequent events we consider to be the EFFECTS of those causes. And that ‘causal connection’ seems reliably, as long as we are sufficiently rigorous, to operate in a ‘lawful’ or ‘lawlike’ way. Hence science as opposed to superstition, reliable knowledge as opposed to wishful thinking and fantasy, the elucidation of trustworthy theories and predictions, as opposed to wearing the blinkers of ‘intuitive belief’ and making up stories. Or, if you like, the signature of the left hemisphere and its style, dry and ‘digital’ though it may be, VERSUS, because it is VERY MUCH versus, and on the sufferance of, the creative fantasies of the right hemisphere, ably supported by the raw power of the limbic system.
    5. And we have gradually, over those ages, figured out two highly reliable pieces of knowledge, one simply logical, the other based on mountains upon mountains of practical experience: if we are wise, we will be very careful what we claim to know, what we presume to rely on as the premises of our further elaborations, because any one part of our ‘knowledge base’, or ALL of it for that matter, may turn out to be partial, wrongly construed, or simply incorrect; and, this is the logical or perhaps tautological part: we can never know EVERYTHING.
    6. But we thirst for ‘explanations’, especially for the things that are the most elusive, the hardest to capture or to fathom. And those explanations, those hypotheses, those theories, can only succeed if three simple logical conclusions are accepted and their dictates followed to the letter:
    A: If our language is ambiguous, if we do not know, down to the last detail, EXACTLY what we are saying, we can never formulate reliable (I like the term ‘truth-functional’) premises, and will never derive trustworthy conclusions that themselves can serve as the premises for the next stage in our exploration. We MUST use language that is precise, unambiguous, capable (in theory) of being judged TRUE or FALSE in principle, or we are saying nothing at all. Which, generally, you might agree, seems to be the case, around ‘consciousness’ and almost everything else to do with ‘what it means to be human’.
    B. We MUST remember that EVERYTHING, however convincing, MUST be labeled ‘provisional’ AND ‘subject to the constraints of the human condition, including (rather obviously!) its neurology’, and we must be prepared to change our minds about any or all of it WHEN LOGIC DICTATES.
    C. And since we can never know EVERYTHING about ‘Nature’, we are and NEVER WILL BE in a position to allow the status of ‘possible truth’ to anything OUTSIDE purely ‘natural’ events; knowing that we never will assemble, organize, understand or explain all of it. Because the moment you let in even the tiniest SMIDGIN, which I think is the fundamentalist particle of all, of ANYTHING ‘supernatural’, ‘paranormal’, magical, mystical or TAKEN FOR GRANTED … the whole edifice crumbles … the whole argument is destroyed … and you can say NOTHING INTELLIGENT ABOUT ANYTHING. Assuming that a ‘real’ explanation in terms of real events, causes, connections and suchlike is actually what you want. To the extent, of course, that our neurology allows. Because otherwise it is just business as usual, and more business as usual, and more … you get the picture ….
    And however long-winded, that’s the SHORT version. But that’s an Aspie for you ….

  31. David Duffy:

    I don’t like Jochen’s line of argument re interpretation because, like the computational stone, how do the thoughts interact with the world? The structure of that relation is where value and intention come in.

    The interaction with the world happens via physical causal processes (as every interaction does, barring dualism). This is in a sense the source of the problem: consider the box that computes sums of its inputs. Its physical relations with the environment will determine the way its switches are flipped (its sensory input), which will, in turn, determine which lights light up (its behavioral output). So, it will react to stimuli in a certain way; but that does not serve to determine which computation it performs, as a system with identical input/output behavior can be taken to implement distinct computations.

  32. Ok, I think I understand what you are saying. First off, I find “the brain B producing the mind M” to be a misleading concept. I think it comes from our knowledge of everyday computers as consisting of hardware and software. While they do have those two components, that doesn’t have to be true. It is not an essential aspect of computation. With everyday computers, it is an engineering choice, a feature, that lets us load up different programs on one computer. The computation a desktop computer performs is defined by the combination of its hardware and software at any instant. If you look at the computer at that instant in time, the program is recorded in memory which is just as much hardware and part of the computation as the CPU is. Both the CPU and memory are hardware with internal state and implement logical functionality.

    The brain is a computer whose software is integrated with the hardware. Once we find out how the brain really works, we may find there’s a way of looking at it in which we find something analogous to software, but I suspect that’s just something we get from science fiction with downloading and uploading our brains. Our DNA is software in a sense as it directs the building of our brain and the rest of our nervous system. The idea of software vs hardware is a somewhat arbitrary distinction with a long engineering history. However, it is not a crucial concept but merely an artifact of how we humans like to look at computation.

    As you point out, our computers are designed with a specific human interpretation to all its inputs, outputs, and internal variables. This is also an engineering choice and a desirable feature.

    The internals of the human brain are not constrained by the need to make them interpretable by us but that doesn’t make it any less a computer than our desktop computer. Interpretability was just not a design choice that evolution respects.

  33. “If you have a box with four switches and three lights, such that you can flip switches to have lights come on, then one interpretation of that box might be that it performs binary addition; but it would be easy to reinterpret the meanings of switch positions to either different binary digits or even something else entirely, and likewise for the lamp states, such that the box implements an entirely different function—and not one simply ‘dual’ to addition by flipping 0s and 1s, for example.”

    I’m having trouble understanding why you think this is interesting. What you say is true, of course, but if we compare the brain to your switch box, I don’t see much difference between the two, except in scale and in what it computes, of course. Our eyes are analogous to the input switches. Physics determines how the input signal is encoded as various colors and intensities of light coming from various directions. Our retina converts them to neuronal signals that we are beginning to understand. Much of that understanding consists of determining how the brain encodes its knowledge.

    That encoding is analogous to the programmer’s choices in how a computer program is written. The difference is that the programmer desired the representation be understood by herself and other programmers. That was not a design goal for evolution. However, the brain must still encode its information a certain way. Note that we aren’t even guaranteed that one brain encodes information like another. It might because we’re all humans but it might not because each human is different. Making them the same was probably not an evolutionary design goal as there is no channel for exchanging information that way, assuming one dismisses telepathy.

    We do use language to communicate and that is forced by evolution and culture to be standardized across brains. Of course, it is never identical. My understanding of a word is never precisely identical to someone else’s understanding. It only needs to be close enough most of the time for communication to occur. Since language most certainly is crucial to our species’ success, there is a strong evolutionary pressure to standardize our ability to learn and use language. This undoubtedly affects how common our internal representations are. It seems unlikely that, if we were each wired differently to a great degree, we would understand each other.

  34. 29. Sci,

    That essay by Peter Sjöstedt-H is a good defense of panpsychism. We are all products of our immediate culture, and those traditions are what shape our biases. Unfortunately, prejudice is a mountain higher than Everest.

    Descartes got “Cogito Ergo Sum” wrong. The panpsychist version would be “I Am; Therefore Sentient”. Sentience is the capacity to feel, perceive, and/or experience power. Just like causation in physics, power is the wild-card of consciousness too. Any and all discrete sentient systems have the capacity to feel, perceive and experience the power of their own unique structural qualitative properties and they also possess the capacity to express those qualitative properties in correspondence with other discrete systems. It’s not rocket science, relationships are all about communication.

    And to address your question for Timothy: There is no such thing as law, it’s sentience all the way down, a model of sentience is predicated upon meaningful relationships between the discrete systems which make up our phenomenal world, all of which result in the novelty of the expression. In conclusion: Sentience supersedes the necessity for Law.

  35. @Sci,

    Nope……. Power is responsible for causation. Consciousness is the form through which power is both experienced and expressed by discrete, sentient systems. One cannot have a coherent discussion on consciousness without first addressing the “objective reality” of power.

  36. Hi Jochen.

    “a system with identical input/output behavior can be taken to implement distinct computations”

    Probably not the right place to rehash these issues, but this is true only at an inappropriate level of abstraction. You have elided over semantics and specifically referentiality, which arises from the physical substrate for the computation, and how it is causally connected to the environment. That is, what is the nature of a Kripkean rigid designator, and how does it fit into informational thermodynamics [see for example, papers from Gianfranco Basti]. To carry out your arbitrary interpretations of a particular (natural ie physical) computation requires further physical work by the interpreter – this is not a vicious regress, but how things – like your PC and its owner, or two bees talking to each other – actually function.

  37. “a system with identical input/output behavior can be taken to implement distinct computations”

    If you are talking about your switch box, it is just that two observers may not agree on what function it computes; even looking inside the box, they may describe it as using different algorithms. We’ll assume that the time they take to compute is the same. I suppose you might say that they perform different computations. However, their ability to survive in their environment is identical since their input and output are identical. I don’t really see any great revelations here.

  38. @ Jochen – If I understand you correctly your argument is that nothing physical has a determinate meaning outside a consensus of minds?

    It is a curious oddity, how we go from matter which points to nothing to red octagons meaning “STOP”…

  39. @Lee – not sure why you put “objective reality” in quotes? Also, I’m curious if you have a definition of power?

    It seems you are including causal power as a type of power, but suggesting – if I read you correctly – that there is more to power than (at least physical) causation?

  40. Paul Topping:

    I’m having trouble understanding why you think this is interesting. What you say is true, of course, but if we compare the brain to your switch box, I don’t see much difference between the two, except in scale and in what it computes, of course. Our eyes are analogous to the input switches. Physics determines how the input signal is encoded as various colors and intensities of light coming from various directions. Our retina converts them to neuronal signals that we are beginning to understand.

    David Duffy:

    Probably not the right place to rehash these issues, but this is true only at an inappropriate level of abstraction. You have elided over semantics and specifically referentiality, which arises from the physical substrate for the computation, and how it is causally connected to the environment. That is, what is the nature of a Kripkean rigid designator, and how does it fit into informational thermodynamics [see for example, papers from Gianfranco Basti].

    OK, let me try to be a bit more explicit. Take a box with four input switches and three output lights, whose behavior is specified as follows:

    Switches | Lamps
    -----------------------
    (dd, dd) | (xxx)
    (dd, du) | (xxo)
    (dd, ud) | (xox)
    (dd, uu) | (xoo)
    (du, dd) | (xxo)
    (du, du) | (xox)
    (du, ud) | (xoo)
    (du, uu) | (oxx)
    (ud, dd) | (xox)
    (ud, du) | (xoo)
    (ud, ud) | (oxx)
    (ud, uu) | (oxo)
    (uu, dd) | (xoo)
    (uu, du) | (oxx)
    (uu, ud) | (oxo)
    (uu, uu) | (oox)

    Then, take two users of this box, Alice and Bob. Alice interprets the box’s symbols as follows:


    Symbol | Meaning
    -----------------------
    d | 0
    u | 1
    x | 0
    o | 1

    Furthermore, she reads the symbols as supplying binary numbers. The first two switches supply the input x1, the other two the input x2, and the lamps the output f(x1, x2). Applied to the box, this yields (binary numbers converted to decimal):


    x1 x2 | f(x1, x2)
    -----------------------
    0 0 | 0
    0 1 | 1
    0 2 | 2
    0 3 | 3
    1 0 | 1
    1 1 | 2
    1 2 | 3
    1 3 | 4
    2 0 | 2
    2 1 | 3
    2 2 | 4
    2 3 | 5
    3 0 | 3
    3 1 | 4
    3 2 | 5
    3 3 | 6

    On the strength of this interpretation, she proposes the device to be an adder, capable of adding two 2-bit binary numbers.

    Bob, on the other hand, agrees on the immediate interpretation of the symbols, but takes them to represent binary numbers if read from right to left. To him, the box thus performs the following operation:


    x1 x2 | f'(x1, x2)
    -----------------------
    0 0 | 0
    0 2 | 4
    0 1 | 2
    0 3 | 6
    2 0 | 4
    2 2 | 2
    2 1 | 6
    2 3 | 1
    1 0 | 2
    1 2 | 6
    1 1 | 1
    1 3 | 5
    3 0 | 6
    3 2 | 1
    3 1 | 5
    3 3 | 3

    This, too, is a perfectly valid computation to associate with this device. Moreover, it’s clear that there are many more: one could, for example, flip the interpretation of the input symbols, or just consider ‘x’ to mean 1 and ‘o’ to mean 0, or flip the interpretation of the switches, or apply another interpretation altogether.
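
    To make the two readings concrete, here is a minimal sketch in Python (an illustration only; it just re-encodes the tables above):

    # The box's physical behaviour: switch states -> lamp states.
    BOX = {
        ("dd", "dd"): "xxx", ("dd", "du"): "xxo", ("dd", "ud"): "xox", ("dd", "uu"): "xoo",
        ("du", "dd"): "xxo", ("du", "du"): "xox", ("du", "ud"): "xoo", ("du", "uu"): "oxx",
        ("ud", "dd"): "xox", ("ud", "du"): "xoo", ("ud", "ud"): "oxx", ("ud", "uu"): "oxo",
        ("uu", "dd"): "xoo", ("uu", "du"): "oxx", ("uu", "ud"): "oxo", ("uu", "uu"): "oox",
    }

    def decode(symbols, reverse=False):
        """Read 'd'/'x' as 0 and 'u'/'o' as 1, optionally right to left (Bob's convention)."""
        bits = "".join("0" if s in "dx" else "1" for s in symbols)
        return int(bits[::-1] if reverse else bits, 2)

    # Alice's reading: the box adds two 2-bit numbers.
    assert all(decode(lamps) == decode(s1) + decode(s2) for (s1, s2), lamps in BOX.items())

    # Bob's reading: same symbol meanings, digits read right to left -- a different function f'.
    f_prime = {(decode(s1, True), decode(s2, True)): decode(lamps, True)
               for (s1, s2), lamps in BOX.items()}
    print(f_prime[(2, 3)])   # 1, as in Bob's table (not the sum, 5)

    The same physical table passes both checks; nothing in the box itself favours one reading over the other.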

    Furthermore, this sort of thing is what we do all the time in interpreting a system as computing. Your pocket calculator, for example, does not output the number 5 as a result of entering 10 / 2; it produces the numeral ‘5’, which you interpret as that number. This interpretation, however, is arbitrary; but only once that interpretation is supplied does your device perform any definite computation. Without the interpretation, we only have the physical evolution to contend with; but this is distinct from a computation: computations act, for example, on numbers, not on lamp or switch states.

    So it’s not in general possible to uniquely claim that ‘a physical system P implements a computation C realizing a function F’; different such ascriptions are possible, and are equally valid (there is no difference in how either interpretation is hooked up to the environment, for instance, nor in information content, thermodynamical entropy, or what have you). But if that’s the case, then a computationalism that relies on the claim that a brain implements a definite computation first needs to show that such is possible, with the above observation throwing that into severe doubt (it’s clear, for example, that making the system more complex will only compound, rather than resolve, the issue).

    Sci:

    @ Jochen – If I understand you correctly your argument is that nothing physical has a determinate meaning outside a consensus of minds?

    Well, I’m uncomfortable with the implication that the consensus of minds is somehow nothing physical. It is, of course; but other than that, yes, I take the interpretation of signs and symbols to not be fixed by the signs and symbols themselves, nor by the relations they stand in.

  41. To spell out the notation in the tables above: ‘u’ and ‘d’ mean ‘switch up’ and ‘switch down’, and ‘x’ and ‘o’ mean ‘lamp on’ and ‘lamp off’, respectively. I trust the rest is clear from context.

  42. @Jochen: In what way are physical brains – in your estimation – able to have determinate meaning, while the computer is not?

    Seems to me either neither computer nor brain has meaning, if the argument is that matter has no inherent ability to be about something, or both have meaning because of…something as yet unknown…that confers aboutness when bits of matter are arranged the right way.

    This seems to be the issue that has faced Searle throughout his career, up to and including his move away from the Chinese Room in “Is the Brain a Digital Computer?”.

  43. @Sci.

    For an excellent exposé on power, refer to Arthur Berndtson, “The Meaning of Power”, Philosophy and Phenomenological Research, vol. 31, no. 1 (Sept. 1970), pp. 73–84. The essay is on the jstor.org website.

  44. The wrong approach of thinking that the brain is initially causal (the homunculus) results in the mind-body problem.
    As a physicalist I would like to remind those who seem to think differently of the fact that the body and nervous systems initially evolved the brain. Some people seem to need a paradigm shift to realise and accept the importance of this fact, but instead prefer to wallow in their own and others’ confusion as to how the brain conjures up the initial causation of what happens to the body.
    My simple approach is titled Consciousness and Initial Causation Embodied by Physicalism. It is on my website: http://www.perhapspeace.co.uk

  45. The consensus of minds IS physical. Those minds accepted input (looking at the switches and lights) and they came to a new state reflecting their knowledge and understanding of the event. In your scenario, Bob and Alice came to different conclusions. Perhaps we shouldn’t call that consensus but I feel that is not relevant to our discussion.

    A claim that ‘a physical system P implements a computation C’ is a human activity. It requires a human to utter it and, in particular, it hinges on the description of C. The important point is that the physical system doesn’t care about this description. It simply does what it does.

    When we talk about a physical system performing a computation, we must accept that such a combination has a large number of possible descriptions and means something different to each observer, in general. Of course, multiple observers may come to consensus on a single definition. For example, a group of people may state that the lights on the front panel of my box represent a character using the ASCII encoding. This doesn’t change what computation goes on inside the box that determines whether each light glows.
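
    (As a toy illustration of that kind of agreed-upon reading, and nothing more: if the panel had eight lamps and everyone agreed to read ‘o’ as 1 and ‘x’ as 0, a short Python snippet such as

    lamps = "xoxxoxxx"          # a hypothetical 8-lamp pattern, not the 3-lamp box above
    bits = "".join("1" if l == "o" else "0" for l in lamps)
    print(chr(int(bits, 2)))    # prints 'H' under the ASCII convention

    would yield the character ‘H’ – but only under that shared convention; the lamps themselves don’t care.)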

    A brain is like that switch box. It does whatever it does. Any interpretation of what it does is in the mind of each observer. A group of observers may reach consensus and share an interpretation. Their interpretations are still different but enough alike that communication and agreement can occur.

  46. The tense of always another time than now, seems chaotic…

    Can any thing cease experience completely…

    Parts and particles of value…

    In what way are physical brains – in your estimation – able to have determinate meaning, while the computer is not?

    Seems to me either neither computer nor brain has meaning, if the argument is that matter has no inherent ability to be about something, or both have meaning because of…something as yet unknown…that confers aboutness when bits of matter are arranged the right way.

    The key point is that the computational doesn’t exhaust the physical (although it may exhaust what can be explicitly stated about the physical). I view computation as an instance of modeling, and modeling essentially as the instantiation of structural properties of one system (the object) in another (the model). So, an orrery is a model of the solar system because the structural properties of the orrery (the relations of little metal beads to one another, for instance) mirror those of the solar system (the relations the planets, moons and the sun bear to one another). Because of this structural equivalence, the orrery allows us to draw inferences about the solar system (or, to take my simplified example from above, the stack of books allows us to draw inferences about the set of my maternal ancestors, because both share a structure, in this case, that of an ordered set).

    Structural properties are thus those that are present both in an object and its models (to a greater or lesser degree of approximation, of course). In a computation, the structural properties of an abstract object are physically instantiated, as in a set of switches and lamps instantiating the structural relation that obtains between a couple of numbers being added.

    But just because the structural properties are all that we can model, doesn’t mean that the structural properties exhaust all that exists. Physical objects have intrinsic properties; and by the above discussion, those are the properties of a system that fail to be present in its models. If now meaning is instantiated due to these intrinsic properties, as I have sketched above by something being ‘experienced as’ a certain other thing (a lamp lighting up being experienced as a 0, for example), then it follows that while minds may have meaning, computers instantiating merely the structural properties of a given mind need not (although they may, of course). Importantly, they don’t have it by virtue of the computation they’re instantiating.

    This has the immediate consequence of explaining the hardness (indeed, impossibility) of the hard problem: since all our models only capture structural properties, and experience inheres in intrinsic properties, and explanation is essentially the creation of models, then we can’t explain experience, since it is not amenable to modeling. This is certainly disappointing, but it may just be how things shake out.

    This isn’t really an exotic proposal, by the way. The same thing occurs in mathematics: a formal system (of a certain expressive capacity) does not fully fix its model (here used in a different sense from above, as in the particular mathematical entities that realize the structure described by a given set of axioms). For instance, the Peano axioms are intended to describe the natural numbers, but, due to Gödelian incompleteness, there exist distinct (infinitely many) mathematical structures that realize these axioms. Consequently, we can think of the concrete mathematical object as having intrinsic properties that can’t be captured by the structure of the axioms—and indeed, can never be, as every extension of the axioms will introduce new undecidable elements (that is, propositions true in some models, and false in others). Since each axiom system corresponds to a certain Turing machine, this analogy can be made rather rigorous.
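
    To make that underdetermination explicit (this is just the standard textbook argument, stated in LaTeX shorthand): for any consistent, recursively axiomatized theory $T \supseteq PA$, the Gödel–Rosser theorem supplies a sentence $R_T$ with

        $T \nvdash R_T$ \quad and \quad $T \nvdash \neg R_T$,

    so both $T \cup \{R_T\}$ and $T \cup \{\neg R_T\}$ are consistent, and by the completeness theorem each has a model:

        $\mathcal{M}_1 \models T \cup \{R_T\}$, \qquad $\mathcal{M}_2 \models T \cup \{\neg R_T\}$.

    The two models disagree on $R_T$, so they cannot be isomorphic; the axioms alone do not single out ‘the’ natural numbers.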

    So what we have, essentially, is that there are structural and non-structural physical properties, and that computers only instantiate the structural properties (or more accurately, that only structural properties can be computationally instantiated), that experiential properties are non-structural, and thus, that the ‘mark of the mental’ lies beyond what is realizable computationally.

  48. A claim that ‘a physical system P implements a computation C’ is a human activity. It requires a human to utter it and, in particular, it hinges on the description of C. The important point is that the physical system doesn’t care about this description. It simply does what it does.

    Sure. But if there is to be an objective fact of the matter regarding what computation a physical system instantiates, then one claim of the form ‘a physical system P implements a computation C’ must be objectively right, and all others, ultimately, false. But there’s no way to single out that ‘one true’ computation; there will always be others associated to the system on exactly the same basis. Consequently, what a given system computes is entirely in the eye of the beholder—as formulated above, a computer instantiates merely the structural properties of a given computation; how that structure is ‘filled in’, what intrinsic properties support the structure, is beyond the merely computational description.

  49. As you point out here, a model is always a shallow facsimile of the object it models. It doesn’t model everything. Our mathematical models of the weather do not duplicate all the properties of the real thing. This is by design, of course.

    Perhaps you are thinking of an AI computer as a model of a human brain in a similar sense. Surely such a model would not capture all the properties of the brain. The model would lack some functionality and, perhaps, have some functionality that the brain lacks. Our AI presumably could be stopped and started by flipping a switch. It also could be cloned easily and, unlike human cloning by inserting duplicate DNA into an egg cell, its entire state (memories, feelings, identity) could be duplicated.

    Could an AI modeled after the human brain work well enough? The Turing Test was a start at addressing this. Surely we would not attempt to duplicate the brain. That would be a much harder problem to solve. Instead, our AI does just enough of what the brain does to be useful. Or it does the part of what a brain does that we want it to do. An airplane does not duplicate a bird but we don’t want it to. Its performance exceeds a bird’s in some ways and falls short in many others.

    If we had a high-functioning AI, the discussion would inevitably turn to the question of whether the AI thought like a human and whether it was truly conscious. Mostly the answer would depend on your purpose and how the question is phrased. Does it compute exactly the same computation as some human? Of course not. Could it theoretically? Sure, theoretically, but not with any technology we are going to have soon. Wouldn’t we have to duplicate all the atoms and their motion? If not, how good a model is good enough?

    The important question for our current discussion is whether the AI lacks some essence that the human brain has. My answer is “no”. Not that we know of. I would view the human and the AI as having similar capabilities while not being identical. I would compare them much as I would compare a human to an intelligent alien from another planet: similar in some ways but different in others. Neither has any magic essence that the other will never have.

  50. As you point out here, a model is always a shallow facsimile of the object it models. It doesn’t model everything. Our mathematical models of the weather do not duplicate all the properties of the real thing. This is by design, of course.

    The thing is, I’m not just blithely claiming that computation is modeling, but proposing it as an explanation of the fact that computation always has a subjective aspect—that there’s always an element of arbitrariness in what a given system computes. This is equivalent to the case of models, where what a system models depends on what it’s taken to model. (This isn’t all that far-fetched a view, I might add—for instance, the abstraction/representation theory of computation by Horsman et al. essentially proposes that what a system computes is decided by the theory its user applies to the system, to give it an abstract representation.)

    And of course, this subjective aspect is a heavy burden for any computational theory of mind to carry—after all, we’d very much like to be able to say that what mind a given brain has is an objective fact of the world, but it doesn’t seem that the notion that the brain has this mind by virtue of performing a certain sort of computation can provide this.

    The important question for our current discussion is whether the AI lacks some essence that the human brain has. My answer is “no”.

    If something like the above is right, the answer must be “yes” (more accurately, there are some capabilities that a human mind has that can’t be replicated computationally). The capability for using a system as a model can’t be a computational one, as that would lead to an immediate and vicious regress—suppose a system P instantiates a computation C by means of a modeling relation M. If that relation itself is provided by a computation C’, then there must be some system P’ (possibly, but not necessarily identical with P) that implements that computation; but if all computation is implemented by means of a modeling relation, we find that there must be a modeling relation M’ implementing C’—which then necessitates a system P” realizing C” by means of M”, and so on. In order to terminate the regress, at some point, the modeling relation can’t be instantiated by computation.
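
    A toy way to see the shape of that regress (nothing more than an illustration of its structure):

    def computes(system, computation):
        # A system only computes relative to a modeling relation that maps its
        # states onto the abstract objects of the computation...
        return models(system, computation)

    def models(system, computation):
        # ...but if supplying that relation is itself a computation, some further
        # system must compute it, and we are back where we started.
        return computes(system + "'", computation + "'")

    # computes("P", "C")   # never bottoms out: P, P', P'', ... without end

    The chain only terminates if, at some level, the modeling relation is not itself supplied computationally.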

  51. Dear Jochen.

    You have read Horsman. Millhouse,
    https://academic.oup.com/bjps/article/70/1/153/4098119?guestAccessKey=ae09ebf3-6f77-48da-8bce-1a005304e1f5
    is an attempt at an “objective” criterion for interpretation [the usual criteria we use for statistical model criticism].

    My own understanding is far cruder. Consider the problem of the physiologist attempting to recognize the encoding used in a motor nerve to control a muscle. There are multiple interpretations available of each train of signals. However the choice of interpretations that correlate with muscle activity is obvious. Extending this,

    http://www.sontaglab.org/FTPDIR/huang_isidori_marconi_mischiati_sontag_wonham_internal_model_cdc2018.pdf

    the traditional Ashby Law of Requisite Variety again constrains interpretations only to those that, in the biological setting, lead to persistence of the thinker. The internal model may be of the “plant”, the exterior environment, or the outputs of the internal model.

  52. Millhouse
    https://academic.oup.com/bjps/article/70/1/153/4098119?guestAccessKey=ae09ebf3-6f77-48da-8bce-1a005304e1f5
    is an attempt at an “objective” criterion for interpretation [the usual criteria we use for statistical model criticism].

    I can’t say I’m familiar with that specific implementation relation, but I don’t think it matters too much. Ultimately, any such relation must either 1) single out one of the proposed computations associated to the box above as the ‘real’ one (or deny that either is), or 2) lead to ambiguity in the computation associated with a system. If it does 1) (and I’m not aware of any that does), then I’d say it just violates the notion of computation as we usually use it—both Alice and Bob use the box to compute their function in exactly the same sense as you use your pocket calculator to compute, say, square roots. If either or both are wrong, then you also don’t really compute square roots using your calculator, which just seems absurd. And in case 2), of course, it just fails to solve the problem.

    Consider the problem of the physiologist attempting to recognize the encoding used in a motor nerve to control a muscle. There are multiple interpretations available of each train of signals. However the choice of interpretations that correlate with muscle activity is obvious.

    The problem with this sort of reply is that we don’t typically take computations as being individuated by the symbolic vehicles they manipulate, but rather, by those vehicles’ semantic content. So my calculator doesn’t output the numeral ‘5’ after operating on button presses, voltages and the like, but rather, it outputs the number 5 after adding 2 and 3. Likewise, the box doesn’t execute a computation over switch states and lamp lights (whatever that might mean), but it adds two numbers (or implements the function f’). The lamp lights might have further causal consequences, but this will never suffice to settle their interpretation—any extension of the system is just a chaining up of more (black) boxes, and the interpretational difficulties will just carry through.

    The problem with this sort of thing is that it entails a collapse of computationalism to behaviorism: we consider the behavior shown by a certain system to fix the computation it performs. Essentially, we just call the behavior of the system its computation; but this destroys the great promise of computationalism to yield a way by which to connect physical systems with abstract objects, as is presumably necessary to explain thought.

    Furthermore, the researcher interpreting a given set of observations as being of a signal train moving muscles in a certain way is ultimately just appealing to their own, already interpreted internal model; to claim that this poses an argument in favor of computationalism is then to beg the question against the researcher’s own model being simply computational.

  53. “Modeling relations” have nothing to do with a brain’s functionality (AI or human). The brain does what it does via its inputs and outputs. Any models or theories of operation are made by observers but the brain itself does not care about that.

    “for instance, the abstraction/representation theory of computation by Horsman et al. essentially proposes that what a system computes is decided by the theory its user applies to the system, to give it an abstract representation.”

    Exactly! All the interpretation is applied from the outside. Internal signals and values do not intrinsically have meaning to the rest of the world. Of course, signals and values do have meaning to the brain that contains them. If the “brain” in question is sophisticated enough, it can contemplate its own operation. This is still applying an interpretation from outside in the sense that the signals are being observed and interpreted in a similar fashion. A conscious brain (AI or human) is making models that interpret its own behavior and, limited by what signals it has access to, its internal state. We can think about our own feelings but not about the many saccades made by our eyes or about the firing of specific neurons.

  54. Jochen…”as is presumably necessary to explain thought”…

    Could you instead say ‘a means for thought’…

    …?…

  55. “single out one of the proposed computations associated to the box above as the ‘real’ one”

    The problem here is what definition of ‘real’ you are talking about. Alice’s theory, say, that the box is computing addition may carry with it the assumption that the box can compute additions that have not yet been witnessed. If so, then her theory can be tested. However, what if she performs a few more tests and they all pass? Now she has even more evidence that her interpretation is the ‘real’ one. But what if the box has a bug, and when they try a particular addition problem it gives the wrong answer? Is the box still an adder? Yes and no. The observers may realize that the interpretation is all in their heads.

    Such an interpretation can be useful, of course. The designer of the box may have intended it to be an adder and may have implemented its internal logic with that interpretation in mind. However, the box is going to do whatever the box is going to do. If it adds some numbers incorrectly, then that’s part of its function. The atoms that make up the box don’t know it’s an adder. The logic circuitry inside the box does not know it’s an adder. A calculator calculates by virtue of agreement between its designer, makers, and users.

  56. Topping…Do you mean to separate things, like “All the interpretation is applied from the outside”…

    Could you say ‘inside and outside and observation”, is biology…

    …that we can’t separate them…

  57. Topping, I seem to agree with you about what is seen and seeing, but not sure if conscientiousness is included…

  58. Arnold, I expect you meant “consciousness” not “conscientiousness”.

    Consciousness is a mystery but only in that we don’t know how it works in detail, same for much of the brain. The real problem is why so many people think it is a special part of the brain’s overall functioning. I have yet to see anyone define the question well enough to know what kind of answer will satisfy anyone.

    What is our expectation as to what it feels like to be a conscious thinking person? My answer is that we each know how it feels and that’s the answer to that question. Anyone who feels that answer is inadequate needs to explain what an adequate answer would look like.

  59. Consciousness, we were taught long ago, is ‘with self’…

    …the presumption back then, with one’s own presence study life…

    Hence my questions about Conscientiousness…in ideas sciences values and practices…

    In psychology there is sciousness which is separate from consciousness…

  60. Computation seems to involve questions of Aboutness of Thought, rather than the subjective experience.

    And that problem, as the eliminativist Rosenberg notes in The Atheist’s Guide to Reality, is quite the conundrum:

    “A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?

    …Let’s suppose that the Paris neurons are about Paris the same way red octagons are about stopping. This is the first step down a slippery slope, a regress into total confusion. If the Paris neurons are about Paris the same way a red octagon is about stopping, then there has to be something in the brain that interprets the Paris neurons as being about Paris. After all, that’s how the stop sign is about stopping. It gets interpreted by us in a certain way. The difference is that in the case of the Paris neurons, the interpreter can only be another part of the brain…

    What we need to get off the regress is some set of neurons that is about some stuff outside the brain without being interpreted—by anyone or anything else (including any other part of the brain)—as being about that stuff outside the brain. What we need is a clump of matter, in this case the Paris neurons, that by the very arrangement of its synapses points at, indicates, singles out, picks out, identifies (and here we just start piling up more and more synonyms for “being about”) another clump of matter outside the brain. But there is no such physical stuff.”

  61. Hi Jochen.

    “a collapse of computationalism to behaviorism”

    Behaviourism is essentially a computationalist model assuming a particular underlying architecture (ISTM a meta-reinforcement learner).

    “A more general version of this question is this: How can one clump of stuff anywhere in the universe be about some other clump of stuff anywhere else in the universe—right next to it or 100 million light-years away?”

    This is a good question. Here’s my answer.

    There’s no intrinsic, preordained meaning or significance to either clump or their internal state. In short, there are no labels assigned to anything except by observers. A red octagon only means “stop” to those lumps that (a) are sentient, having the ability to label things and (b) have chosen to give red octagon signs that particular label.

    Let’s look more closely at labels. A label does not have any intrinsic, preordained meaning either. It is a shortcut to a concept agreed on by one or more sentient lumps. At some point, the sentient lump has visual input of a red octagon. That lump may want to think about the red octagon, and whatever associations it has with it, at times other than when a red octagon is present; the “stop” label serves that purpose. Once the label has been assigned, there are two patterns of input that signify a stop sign: the red octagon and hearing or reading the word “stop”.

    Going back to our switch box, Bob and Alice can agree or disagree on what concepts to associate with the switches and lights. Regardless, they each associate concepts and labels with their perceptions. Association is just a pairing of ideas, perceptions, etc. An association is called a label when a perception (e.g., “stop”, a red octagon), having very few additional associations, is associated with some larger concept (stopping) that has many other associations.

    Our thinking is pretty much just the making of associations based on algorithms embedded in our brains by evolution and ontogeny. Consciousness is the experience of making those associations.

    Computer programs make associations all the time. Imagine a table with name and street address. Based on the program’s input, it adds a name and their residence address to the table. That’s the simplest kind of association but all other kinds of associations are elaborations of that basic function. Note that the name and address in the table have no meaning other than those assigned to it by the sentient lumps that care about it. An intelligent being with no knowledge of our alphabet, names, and addresses would have some trouble figuring out what the data meant. (Perhaps the concepts of name and address are universal, perhaps not.)

    Does a computer program know what the table entries mean? That’s an interesting question that gets to the heart of what we mean by “know”. The program certainly has functions that handle the name and address in ways that respect what we mean by name and address. If the addresses are in the USA, it has functions that determine if the state and zipcode are present and can extract them and work with them. It probably ensures that the names are unique or perhaps it also has a table column for “employee id” or “social security number”.
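
    A minimal sketch of that sort of association (the table, names, and the ZIP-code check below are hypothetical illustrations, not any particular program):

    import re

    address_book = {}                       # name -> street address: the association itself

    def associate(name, address):
        """Pair a name with an address; neither string means anything to the program."""
        address_book[name] = address

    def us_zip(address):
        """Pull out what we, the interpreters, call a ZIP code, if one is present."""
        match = re.search(r"\b\d{5}(?:-\d{4})?\b", address)
        return match.group(0) if match else None

    associate("Alice Example", "12 Elm St, Springfield, IL 62704")
    print(us_zip(address_book["Alice Example"]))   # 62704

    The program manipulates the strings in ways that respect our conventions for names and addresses, but the meaning of those conventions is supplied by us.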

    The program certainly doesn’t know as much about names and addresses as the humans do but, on the other hand, the humans are not equal in that regard either. So perhaps the computer program is sentient too, at least as far as names and addresses are concerned, but just not as sentient as the humans. The difference between the program and the humans is mostly a matter of degree of sentience and domain of interest. We could keep adding to the program’s knowledge and associations. Perhaps eventually we would consider it as sentient at a level similar to that of the humans. It would almost certainly know different stuff and have a different domain of interest but if we could communicate with it at a high enough level, we’d regard it as intelligent. Of course, some would always hold a grudge against the machine, insisting it is really just a program. Perhaps we would culturally evolve to ignore those people much as most humans try to ignore skin color now.

    Is an evolved reflex to a stimulus “about” that stimulus? Why shouldn’t it be? What, exactly, does an evolved (more or less reflexive) conceptualized response to a stimulus add to the matter? No talk of Fodor’s “door-knob” concept just yet (a culturally evolved conceptualized response to a stimulus). And no talk of consciousness just yet either—the pragmatic exercise of intelligence gets on quite fine without it. So, too, given language, the reflective exercise of intelligence—a reflective exercise of intelligence requiring only a reflectively graspable medium, where one’s thoughts can become objects of one’s own experience (just a new medium of stimuli). Now then, on to the subject of consciousness… Whoops, I’m afraid I have some laundry to do.

  64. @jgkess – When we utilize the mental “objects” of logic does this not require consciousness of some sort?

  65. @Paul: Putting aside whether our programs *know*, do our current algorithms have “raw feels” when run?

  66. “Do our current algorithms have “raw feels” when run?”

    That’s an interesting question, of course. It’s the so-called “hard problem of consciousness” applied to an AI. My own feeling is that there’s no form of an answer that will satisfy most people who ask this question. It’s a problem in perspective. We know what it is like to perceive and to think. All ideas are associations between things. In the case of ideas like how it feels inside my head to perceive, the associations are personal and internal. I can assume that other humans have similar thoughts but I have no way of knowing. Still, what I know about biology, human behavior, etc. gives me a strong belief that we each experience our perceptions in much the same way.

    On the other hand, if someone were to differ with me on a perception we both had, there’s no way for me to experience the perception like the other person. We can exchange verbal descriptions but that’s pretty much it. I can assume that others see what I see but, if they don’t, I have no access to those “raw feels”.

    With an AI, I can’t even justify a belief that it feels like humans do when it perceives. Why would it? I am sure my cats don’t perceive the world as I do though I do believe they have some kind of experience. If confronted with a sophisticated AI program, I could probably ask it about its perceptions and I would get answers, which is a lot more than my cats can tell me.

    Some people would undoubtedly claim that my sophisticated AI doesn’t really experience perceptions like we do regardless of what it tells us. Of course, centuries ago people said the same thing about animals. Most people don’t believe that today. I suspect there will come a day when hardly anyone doubts the sophisticated AI’s descriptions of its own perceptions either.

    Of course, some will say “How can it have experiences? It’s just a machine!” Humans are machines too. Just complex configurations of atoms that interact (compute) and thus we come full circle.

  67. After doing my laundry (and drinking another beer) I hit upon this: suppose consciousness, suppose a qualitative aspect to experience, is just a “brute” consequence of physics and evolution. Why all the fuss? Why all the drama? It’s not like it means anything?

  68. @jgkess – Would you be equally ok with those who said physics & evolution is a “brute” consequence of Consciousness?

  69. I can agree to brute-ness in experience…

    Would you be ok with meaning and means-ness in experience…

  70. Would I be “equally ok with those who said that physics and evolution is a brute consequence of Consciousness”? No. But I might be ok, grammatically, anyway, with those who say that physics and evolution are a brute consequence of consciousness.

  71. …consequence as effect, action or value toward evolution, maybe…

    But consciousness seems more than a consequential affair of relationships…

    78 possible “bold contributions” and counting…

    One particular aspect of consciousness I’ve always been fascinated with, but have never really seen discussed in any great detail, is the problem of the localization of consciousness. Specifically in the context of Special Relativity, i.e. that information transfer is limited to the speed of light and is not instantaneous, it seems like consciousness really would have to be localized to a mathematical point. To illustrate, let’s imagine a brain that slowly grows in size until it’s a light-second across. There’s no way the entire thing could be connected causally into one consciousness, so how many consciousnesses would it split into? It’s sort of like the split-brain problem, except in this case the splitting would be caused by the nature of space-time itself.
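
    The rough numbers behind the thought experiment (only the speed of light is a hard constant here; the ~100 ms figure for a single conscious ‘moment’ is just a commonly cited order of magnitude, not a precise constant):

    c = 299_792_458                      # speed of light, m/s
    window = 0.1                         # s; assumed timescale of one conscious "moment"
    for label, diameter in [("human brain, ~0.15 m", 0.15), ("brain one light-second across", c * 1.0)]:
        crossing = diameter / c          # best-case one-way signal time across the brain
        verdict = "fits in the window" if crossing < window else "cannot be integrated within one moment"
        print(label, crossing, "s,", verdict)

    For the ordinary brain the light-speed bound is irrelevant (real neural signalling, at metres per second, is the limit); for the light-second brain even ideal signals need a full second to cross, an order of magnitude longer than the assumed window.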

  73. Is consciousness for or from experience…
    … in relevance to ourselves…
    You’re right, it’s fascinating to be here…

  74. How is it that the brain generates the private subjective world of the self, and for what purpose? It seems logically impossible that nerve signals can generate a subjective observer while at the same time enabling that self to have its own distinct powers. It appears to be a useless appendage.
    I propose that the brain only generates the content of consciousness and not the self that binds it into a whole. The visual processing center, for instance, generates visual perceptions and thoughts. But the conscious self, that private world that binds the sensations into a simple unity, has a more permanent status and is not generated by the pattern of nerve signals. Instead the mental subject comes from an already conscious nerve cell, from which it splits off to become what Leibniz called the dominant monad. The conscious self then is an atomic unity evolving from other conscious natural beings in a panpsychist universe and as such can have real causal powers.

    see:

    https://philpapers.org/rec/SLEPAR
