The Ontological Gap

There’s a fundamental ontological difference between people and programs which means that uploading a mind into a machine is quite impossible.

I thought I’d get my view in first (hey, it’s my blog), but I was inspired to do so by Beth Elderkin’s compilation of expert views in Gizmodo, inspired in turn by Netflix’s series Altered Carbon. The question of uploading is often discussed in terms of a hypothetical Star Trek style scanner and the puzzling thought experiments it enables. What if instead of producing a duplicate of me far away, the scanner produced two duplicates? What if my original body was not destroyed – which is me? But let’s cut to the chase; digital data and a real person belong to different ontological realms. Digital data is a set of numbers, and so has a kind of eternal Platonic essence. A person is a messy entity bound into time and space. The former subsist, the latter exist; you cannot turn one into the other, any more than an integer can become a biscuit and get eaten.

Or look at it this way; a digitisation is a description. Descriptions, however good, do not contain the thing described (which is why the science of colour vision does not contain the colour red, as Mary found in the famous thought experiment).

OK, well, that’s that, see you next time… Oh, sorry, yes, the experts…

Actually there are many good points in the expert views. Susan Schneider makes three main ones. First, we don’t know what features of a human brain are essential, so we cannot be sure we are reproducing them; quantum physics imposes some limits on how precisely we can copy the brain anyway. Second, the person left behind by a non-destructive scanner surely is still you, so a destructive scan amounts to death. Third, we don’t know whether AI consciousness is possible at all. So no dice.

Anders Sandberg makes the philosophical point that it’s debatable whether a scanner transfers identity. He tends to agree with Parfit that there is no truth of the matter about it. He also makes the technical point that scanning a brain in sufficient detail is an impossibly vast and challenging task, well beyond current technology at least. While a digital insert controlling a biological body seems feasible in theory, reshaping a biological brain is probably out of the question. He goes on to consider ethical objections to uploading, which don’t convince him.

Randal Koene thinks uploading probably is possible. Human consciousness, according to the evidence, arises from brain processes; if we reproduce the processes, we reproduce the mind. The way forward may be through brain prostheses that replace damaged sections of brain, which might lead ultimately to a full replacement. He thinks we must pursue the possibility of uploading in order to escape from the ecological niche in which we may otherwise be trapped (I think humans have other potential ways around that problem).

Miguel A. L. Nicolelis dismisses the idea. Our minds are not digital at all, he says, and depend on information embedded in the brain tissue that cannot be extracted by digital means.

I’m basically with Nicolelis, I fear.


98 thoughts on “The Ontological Gap”

  1. A hurricane isn’t digital, but we like to think we can simulate it. Unlike the hurricane, where the winds are ontologically different from the simulated winds, minds secrete thoughts and words, which, to many of us, seem a lot closer in nature to the world of computation.

  2. As far as the hardware-software distinction goes, I see that as no big deal. A 3D scanner without a program is a piece of junk. The gist of the creation is surely in the program. And then it becomes real. What’s the problem?
    A scanner, if it could be built, would definitely make two you’s. Whether or not you kill one of them doesn’t really matter (except that maybe it’s murder). Assuming the copy is perfect, both are really you. As of the instant of copying, the two go separate ways.
    As far as creating the color red goes, I suspect there’s some random pattern generation involved there. After all, nobody else can check what you see. It might as well be random.
    I guess I’m with Randal Koene, at least most of the way.

  3. There’s a theoretical limit on the amount of information contained in a volume of space. So, hypothetically, if we can extract a description with enough fidelity, what information contained in the brain tissue would remain missing?

  4. “There’s a fundamental ontological difference between people and programs which means that uploading a mind into a machine is quite impossible.”

    The question I always have with this stance is, what is that difference? The science seems to point to the mind being a physical system that operates according to the laws of physics. What about those physics would make them impossible to replicate somewhere else?

    You do give an answer to this question:
    “a digitisation is a description. Descriptions, however good, do not contain the thing described”

    Consider that the description is of a physical system. Also consider that the description itself is a physical system. If the description is complete and granular enough, eventually the physics of the description become isomorphic with the physics of the original system. In other words, eventually the description becomes another physical system that is functionally equivalent to the original, at least in the sense that it has the same effects on its environment that the original had on its environment.

    Is the description then conscious? That depends on your stance on philosophical zombies, but my thinking is that consciousness has a functional metaprocessing role to play, and that reproducing the original system’s effects on its environment requires reproducing that functional role.

    I do have some sympathy with the argument that reproducing the processing of a brain using the current architecture of commercial computers could require more computing power than might ever exist. But people are working on new architectures, such as physical neural networks. Ultimately, once it’s possible in principle, making it possible in practice becomes an engineering problem.

  5. David – yes, I see what you mean. Thoughts seem more like the kind of thing you could build out of computation than rainstorms. You can generate strings of symbols computationally that look like thoughts. But in itself computation is meaningless, whereas thoughts are all meaning.

  6. SAP – but I deny that a description is a physical system! What mass and position does a description have? We mustn’t identify data with the various substrates in which it can be encoded.

  7. I’m on your side Peter (with Nicolelis). But using a different thread.
    Nicolelis says that it depends on information embedded in the brain tissue that cannot be extracted by digital means.
    Fine, but what about the nature of ‘information embedded in the brain tissue’?
    Information in biological entities does not exist by itself. The information in our brains has reasons for being there. It has meanings related to our metabolism, to our emotions, to our thoughts, to our… And transferring this information from our minds to computers is not only about transforming neuron polarisations into bit strings. It is also (and mostly) about transferring the meanings of this information into information meaningful for computers. This brings us to familiar subjects like the Turing Test, which can be analyzed by a modelling of meaning generation applicable to both humans and artificial agents. The outcome is that the TT is not possible today, as we cannot transfer an organic meaning to an artificial agent (see https://philpapers.org/rec/MENTTC-2). Such an approach also allows us to highlight the associated ethical concerns, and positions artificial life as an important step on the road to strong AI.
    To complement Nicolelis’s wording, I would say that it depends on information embedded in the brain tissue, the meaning of which cannot be transferred to computers.

  8. Copying a brain at the atomic level is very hard so we can dismiss that for now. Copying just the parts that are important for thought might be possible if we knew how the brain works. We don’t, so it is not possible currently to resolve the posed question. The “brain is analog but computers are digital” argument doesn’t help as we don’t know the accuracy required for an adequate brain copy (or mind copy if you prefer) because, again, we don’t know how the brain works.

  9. Selfawarepatterns:

    Consider that the description is of a physical system. Also consider that the description itself is a physical system. If the description is complete and granular enough, eventually the physics of the description become isomorphic with the physics of the original system.

    But this argument is ultimately circular: you get two systems that have the same description, but from there, it only follows that both would likewise be conscious if having the same description suffices for consciousness—but that’s the very question at issue.

    My way out, of course, is that there’s a noncomputable—and hence, nondescribable—element to conscious experience. So a computational equivalence won’t be sufficient for a claim of identity in a perfectly ordinary way: there are properties that are missed by every computational model, and consequently, there’s room to differ along those properties. And what differs in its properties must differ absolutely.

    Putting this another way around, there’s really no computation without there being a mind interpreting a given physical system as computing something. That point is just the same as saying that ‘dog’ does not inherently mean dog without a mind interpreting it that way (after all, another mind could interpret it as meaning cat); yet it’s somehow often missed.

  10. What proof do you have that human consciousness is not computable? Since we do not know how the brain works, I know we also have no proof of this. Of course, it might be true but the history of science is littered with such woo. Some fraction of all scientists respond to their own inability to solve the mysteries of the universe with “Perhaps we’ll never know!”

  11. This sort of argument against the possibility of something requires more than saying “we call A an X and B a Y and they are different” – one should show that the mutual exclusion is absolute and sound, and not just, for example, a contingency of history or a consequence of our incomplete knowledge. For example, it was once held that organic and inorganic chemistry were distinct in this way, until it was demonstrated to be otherwise, and, more recently, the ontological gulf between waves and particles has succumbed to increasing knowledge.

  12. Peter #9,
    Whether or not the description has any platonic existence, to be useful, doesn’t it have to be instantiated in at least one substrate? And wouldn’t the implementation of the description in that substrate have mass and position?

  13. Jochen #13,
    I agree that there’s a non-describable aspect to conscious experience, but that’s only because language is ultimately about conscious experience. Eventually, we always get to a language element that can’t be broken down further, except to point at what is being referred to. For example, the concept of red can’t really be described to a person born blind. But the ineffability of redness shouldn’t be taken as proof of anything, except that language, and all elements of symbolic thought, are ultimately placeholders for primal experience.

    (It’s interesting to consider what about the human mind gives it the ability to have symbolic thought. Lately I’ve concluded that it’s our metacognitive abilities. The scientific evidence for metacognition in non-human animals outside of primates is non-existent, and even other primates only seem to have it in a comparatively limited fashion.)

    “there’s really no computation without there being a mind interpreting a given physical system as computing something. ”

    I think that’s debatable. Computation is functionality. It seems similar to asserting that the heart isn’t a pump if we’re not around to interpret its pump nature. But my point about the copied system was that, once it’s implemented, it has a physical existence regardless of whether anyone chooses to interpret it as computational.

  14. SAP – OK, but what if it does? The question is whether a real, actual person like me can become a digital description of themselves. Actually become that description, not correspond with it or be interpretable as the thing described. I’m saying the two things – person and description – are radically different at a metaphysical level.
    Put it another way; I’m saying that I’m not made of data. I’m a particular physical animal.

  15. Re #7 and #18: If you could 3D print an identical copy of something, same materials and atom for atom, it would not matter which was which: the original and the copy are the same; they are both “original”. That would still be true even if the items in question were alive, and even if they were conscious beings.
    If the copy was of “you”, then there are two indistinguishable “you”s. Basically, the printer has turned a description into a real object. That’s a big step more than your basic computer code usually does.

  16. “…the TT is not possible today as we cannot transfer an organic meaning [to an] artificial agent”

    I am reminded of claims about how male (old/white/etc) novelists cannot enter the mental worlds of women characters in a realistic way. Viz the original imitation game. Are there non-computational differences between male and female consciousness?

    “…a mind interpreting a given physical system as computing something…”, “..computation is meaningless…”

    I’ve previously pointed to the recent literature on Maxwell’s Demon, computation and thermodynamics, and how this relates to embodiment. Are the computations bacteria doing meaningless? Do they have a self, even though they don’t know it in the way we do? It seems to me these have a naturally emerging (“teleonomic”) semantics, but I also think we can simulate the processes giving rise to these semantics without any vicious regresses. Does that help with the uploading question? Maybe.

  17. Peter #18: A vitalist would say you are not a physical animal because there is no such thing: animals are living things, not physical things, a different ontological category (of course, you might yourself be a vitalist speaking loosely in this case, in which case this argument won’t make much of an impression on you!)

    Another point to consider is that the physical you contains almost none of the atoms it was comprised of a couple of decades ago. In whatever sense you are the same person as then, that sameness does not come from the matter within you. If it comes from the arrangement of that matter while it is in you, then that is information (or if not, then what?)

    Lloyd: Whenever you are discussing replication at the atomic level and smaller, you have to consider the no-cloning theorem of quantum mechanics. Of course, you can waive it in a thought experiment, so long as you are consistent about doing so.

  18. David # 20: “It seems to me these have a naturally emerging (“teleonomic”) semantics, but I also think we can simulate the processes giving rise to these semantics without any vicious regresses. Does that help with the uploading question? Maybe.”

    ‘Emerging semantics’ can be a consequence of the ‘emergence of local constraints’ in an abiotic universe subject to ubiquitous physico-chemical laws.
    The first emerging constraint could have been ‘maintain a far-from-thermodynamic-equilibrium status’, applied to a defined volume.
    Then:
    – An agent can be defined as an entity subject to internal constraints and capable of actions for the satisfaction of those constraints (e.g. animals subject to a ‘stay alive’ constraint).
    – ‘Semantic emergence’ becomes ‘meaning generation’ by an agent when it receives information that has a connection with a constraint. The generated meaning is precisely that connection. It will be used for the determination of an action that the agent will implement to satisfy the constraint.
    – Normativity and teleology can also be added to the ‘internal constraints’ thread.

  19. Seems to me that we are much closer to a set of nested control loops operating on a substrate than to the physical material world (whatever that is). That’s true of us physically, mentally and consciously. ‘I’ am not the specific material I’m made of at any one time, since it doesn’t matter to me if all that changes as my body works away; what matters is that the patterns that constitute me survive, like an eddy in water. When I die, it is a set of control loops that cease to operate, so consciousness goes, neural control goes, and the body decays.

    Therefore something that replicates the set of control loops that enable me to keep existing (as a set of control loops) would be me, and I would experience it as me, even if I then experienced some differences as a consequence of existing in a different substrate, or in a different context.

    At a more abstract level, what we think of as an entity is a distributed pattern, embedded in a set of relationships with other patterns that determine its ongoing evolution. I see no (theoretical) problem in capturing this as information, although it needs an underlying substrate to ‘run’ on (which can itself be a set of interacting patterns).

  20. Peter #18,
    Fair enough. I see that in particular as a philosophical conclusion where there is no fact of the matter. If that’s the way you choose to view it, I see no way for me to say you’re necessarily wrong.

  21. SelfAwarePatterns:

    I agree that there’s a non-describable aspect to conscious experience, but that’s only because language is ultimately about conscious experience.

    I mean ‘description’ in a very general sense—including patterns of ones and zeros.

    It seems similar to asserting that the heart isn’t a pump if we’re not around to interpret its pump nature.

    Well, no, there’s an important difference: whatever you call it, what a heart does is moving fluid. There’s no interpretive freedom on that end: its function is fixed regardless of who may interpret it (or not, as the case most often is).

    Computers, however, are just physical systems traversing a sequence of states upon which an additional interpretive gloss is layered. Take a binary adder, which represents numbers via LEDs that may be either on or off: this interpretation isn’t fixed by anything within the adder itself. It would not be wrong, for instance, to interpret the nth element (counting from zero) of an m-element row of LEDs not as representing 2^n, but instead as 2^(m-1-n)—such that (for m = 3, say) the first light represents 4 instead of 2^0 = 1, the second LED represents 2 in both cases, and the third either 1 or 4; basically just turning the labeling on the LEDs around.

    The ‘adder’ then would no longer perform addition, but some other computation; moreover, every other assignment of meanings to lights, or groups of lights, is equally well possible. The physical system here does not fix its function; it’s only due to being interpreted the right way that we can say that it ‘adds’.
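
    (A minimal sketch of that reinterpretation point, in Python; the three-LED row is a made-up stand-in for any physical register. Nothing about the tuple of on/off states changes between readings, yet the number it ‘represents’ does, and with it the function the device would be said to compute.)

        # One physical state of a hypothetical 3-LED row: True = lit, False = dark.
        leds = (True, True, False)

        def decode(row, msb_left=True, lit_means_one=True):
            """Read the same physical row under a chosen interpretation."""
            bits = [(1 if lit else 0) if lit_means_one else (0 if lit else 1) for lit in row]
            if not msb_left:
                bits = bits[::-1]
            value = 0
            for b in bits:
                value = 2 * value + b
            return value

        print(decode(leds))                       # 6  (lit = 1, leftmost most significant)
        print(decode(leds, msb_left=False))       # 3  (the labels turned around)
        print(decode(leds, lit_means_one=False))  # 1  (dark = 1 instead of lit = 1)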

    Moreover, even if the mapping between states of the LED output and states of a computation were unique, it still would need that mapping in order to claim that the system computes—after all, a pattern of lights isn’t a number. No mapping, no computation; no interpretation, no mapping; no mind, no interpretation. Consequently, computation is a mind-dependent notion.

    A heart acts on fluid, and its function is that action. But a computer acts on symbols (physically instantiated ones, whether patterns of lights, of switches, of electrons, or even marks on paper), and what we take its function to be depends on how we interpret those symbols.

  22. “the person left behind by a non-destructive scanner surely is still you”

    It seems obvious that there would not be continuity of consciousness with the copy but is there any evidence that there is even for the original? If there is no continuity beyond memories then the copy isn’t really worse off, and uploading might be preferable compared to living normally, though in that rather depressing scenario neither seems particularly desirable.

  23. Mind and body are deeply intertwined and one without the other will not function. A copy of your brain processes isn’t enough to create another you. That’s why when Peter says he is not his description, he is correct in at least one sense.

    For this exercise, we could divide a person into their physical part and their information processing part. We would need to replace the functionality of the physical part that affects information processing in order for the emulated processing part to work properly. For example, the brain controls the production of chemicals that affect our emotions. We would need a stub on the uploaded brain function to replace these, or we wouldn’t have our emotions, and without them we aren’t the same person. Glial cells in the brain may affect our information processing, so somehow their effect would have to be replicated as well. And so on.

    While complicated, it doesn’t seem like there is anything preventing this kind of replication other than the discomfort of being “just” a meat machine and a massively daunting technical challenge.

  24. Jochen #25,

    “I mean ‘description’ in a very general sense—including patterns of ones and zeros.”

    I think we have to make a distinction here between what can happen from inside of a conscious system vs what can happen from outside of it. From inside, we eventually reach a layer of the very stuff of conscious experience, of sensory, emotional, and motor perception. That layer is ineffable. All we can do is associate a symbol (word, etc) with it, but we can’t describe it any further.

    But from the outside, we can describe the correlated neural patterns and firings. Certainly we don’t have a full account of those correlates yet, but I think we have good reason to think we eventually will. Of course, we won’t look at those correlates and intuitively see our internal experience, but that gets to the hard problem and the limits of introspection.

    “A heart acts on fluid, and its function is that action. But a computer acts on symbols”

    But they’re both physical systems. If the adder you describe above was part of a processor in the control unit of a robot, the interpretation would be reified by the robot’s body and physical actions. The adder’s functionality would no longer be relative to a mind. The designer’s interpretation would have objective existence in the world independent of anyone’s mind.

    Looked at another way, our peripheral nervous system and overall body essentially reifies evolution’s interpretation of what happens in our central nervous system. If an evil scientist isolated a live cluster of neurons from my prefrontal cortex, he would be free to interpret that cluster’s neural firings in any way he chose.

  25. I’m so confident that I’ve cracked consciousness, I recently tweeted my solution to Dan Dennett 😀

    Here’s my best current crack at it:

    “Consciousness is an extension of dynamical systems theory. In dynamical systems, it’s a form of *internal* time keeping (a *second* arrow of time) and a symbolic language for self-modelling. Language provides temporal coordination!”

    “The key is to realize that conventional thermodynamics needs to be extended! The entropic arrow of time measures *external* time, but dynamical systems have an *internal* time! Two arrows! The internal time flow is consciousness!”

    “When the dynamical system effects an accurate self-model, it reduces entropy dissipation (free energy principle). External coordination of multiple sub-agents (global work-space) combined with internal integration of symbolic models! (memory and imagination) 2nd arrow of time!”

  26. #6 Peter and #13 Jochen

    I would start by drawing a sharp distinction between *static* objects (like rocks, tables and chairs) and *dynamical* systems (like a hurricane). A *dynamical system* is all moving parts in constant flow – for instance, if you look at the vortex of a tornado, that’s not dependent on the material details of the matter making it up – it’s a *pattern* of events.

    So I think looking at dynamical systems, the distinction between computation/matter on the one hand, and mind on the other starts to dissolve.

    https://www.wikiwand.com/en/Dynamical_systems_theory

    Then you can start to look at the principles governing these dynamical systems…thermodynamics and information theory.

    As regards consciousness, I think it doesn’t take an Einstein to realize it’s closely tied in with language, because language is what lets us reflect on our own thoughts (by forming *symbols* to represent them).

    Now as you said Peter, ‘thoughts are all meaning’ – precisely so! Thoughts are all about *language* and meaning, yes. But you also conceded that “Thoughts seem more like the kind of thing you could build out of computation than rainstorms”. So why can’t thoughts be computations?

    Now what happens when we connect the insight that thoughts are a symbolic language with the other insight that the brain is a dynamical system? Combine the two ideas.

    Then you can imagine a physical dynamical system that works as a symbolic language generator, and it can form *models* of itself. This points naturally in the direction of two theories of consciousness… integrated information theory and global work-space theory.

    A dynamical system needs a way to coordinate all the activities of its parts…it’s precisely symbolic language that lets the system do this! So there are two key parts, one external, the other internal. The external part is the physical coordination of the dynamical system. And the internal part is the integration of the system’s symbolic models.

    And I’m saying that Coordination (external) and Integration (internal) of a dynamical system is consciousness.

  27. # 28. SelfAwarePatterns.
    “A heart acts on fluid, and its function is that action. But a computer acts on symbols”
    But they’re both physical systems.

    They are both physical systems, but a heart is alive and a computer is not. And we don’t know the nature of life. So I’m afraid we cannot today consider that the interpretation by a computer can be similar to the interpretation made by a living entity.
    It will probably become possible in the future. The first thing to do is to bring life to computers: to look at how we could transfer to artificial agents the ‘stay alive’ constraint that seems intrinsic to living entities.
    The nature of mind, taking life as a given, remains of course a key subject to investigate.

  28. Many thoughtful points here that really deserve a response from me, but I’m afraid I haven’t currently got time to do them justice. Apologies and thanks.

  29. SelfAwarePatterns:

    I think we have to make a distinction here between what can happen from inside of a conscious system vs what can happen from outside of it.

    I think I basically agree with most of what you write here, but surely, this should speak against any computational account, no? After all, a computation is fully specified by, say, the system’s states and transition rules. No question of ‘inner’ vs. ‘outer’ description pops up.

    If the adder you describe above was part of a processor in the control unit of a robot, the interpretation would be reified by the robot’s body and physical actions.

    First of all, this already concedes half my point (at least): computation alone isn’t sufficient, interaction with the environment is necessary. A black box, sitting in the dark, won’t yield conscious experience. This should come as a relief to those worried that our whole world might be a simulation!

    But more importantly, the robot is really just like the heart: what it does is entirely physical. If we speak of computation at all, then again merely as an interpretational gloss. In the end, the robot, like the heart, consumes some vehicles of the environment, leading to internal state transitions, leading to action on the environment. It ‘computes’ only in the sense that the heart does; but actually, all that happens is that the environment impinges on its contact surface, which causes some internal changes (switches flipping, electrons shuffling around, valves opening: it does not matter), which then yield some reaction (say, light in a given pattern impinges on the cells of a CCD-chip, which sends certain currents through a variety of gates, transistors, resistors, and the like, ultimately activating some servos that, say, make an arm extend towards an apple). This isn’t computation anymore than, e.g., a ball bouncing down a hill is; it’s simply physical causality.

    Sure, we can interpret this as computation—but we can interpret a heart as performing some computation, too. This says something about our capacity for interpretation, for generating meaning, but nothing about the systems themselves. An analogy is a turntable translating the grooves etched into vinyl into sound: while on some level, one might claim that the grooves constitute a kind of ‘writing’, the turntable hasn’t in any sense ‘read’ that writing out loud. It has merely reacted to the physical properties of the vehicle it was presented with; and that’s all a robot does, too.

  30. “If we speak of computation at all, then again merely as an interpretational gloss”. We know that non-reversible computation requires the use of energy to erase memory. I don’t believe that this can be completely interpretational in nature. Similarly, algorithmic complexity and randomness are not interpretational in nature. At the physical level, thermodynamic disequilibrium, cognition and intentionality require an inside and an outside.

  31. Jochen,
    “No question of ‘inner’ vs. ‘outer’ description pops up.”

    Actually it does. From within the information framework of software, a bit is an irreducible concept (even in machine language). But in hardware, a bit is a transistor, which is reducible to its components (terminals, junctions, etc). Likewise, software can only see certain things about I/O ports. The hardware reality of those ports is opaque to it.

    It even happens within software layers. An application program often only has access to the logical constructs created by the operating system. It doesn’t have access to the mechanisms underpinning those constructs. Layers of abstraction.
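
    (A trivial Python illustration of those layers; the file name is just a hypothetical example. The program sees only the logical construct the operating system hands it, not the mechanisms underneath.)

        import os, tempfile

        # The application layer gets a logical construct: a file object with a path...
        path = os.path.join(tempfile.gettempdir(), "example.txt")
        with open(path, "w") as f:
            f.write("hello")

        # ...whose logical properties it can query at that same level,
        print(os.path.getsize(path))   # 5

        # but nothing at this level exposes the substrate: which disk sectors hold
        # the bytes, whether they are still sitting in a write-back cache, or what
        # physical states encode them. Those layers are opaque to the program.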

    “computation alone isn’t sufficient, interaction with the environment is necessary.”

    I don’t know of any useful computational devices that don’t interact with their environment, so I’m not seeing why this would be a point in your favor. To your point about simulations, I don’t see any reason why interaction with a simulated environment wouldn’t work. (Not that I see the simulation hypothesis as a particularly useful outlook.) I think we agree though that minds are information nexuses of their environment. But as far as I can see, so are computers.

    On interpretation, presumably you spent money for the device you’re reading this with. You spent that money because the effort to interpret it as doing computation is far less than the effort to interpret the nearest rock as doing so. If it took as much effort to interpret a brain as doing computation as it does the rock, the computational outlook wouldn’t be compelling.

    Your description of the robot at a physical level matches my point above that we can look at any computational system purely in terms of its physics. Incidentally, we can do the same with brains, describing everything that happens purely in terms of electrochemical reactions. In both cases, it’s tedious since we’re forgoing the benefits of a useful higher layer of abstraction.

  32. @21

    Another point to consider is that the physical you contains almost none of the atoms it was comprised of a couple of decades ago. In whatever sense you are the same person as then, that sameness does not come from the matter within you. If it comes from the arrangement of that matter while it is in you, then that is information (or if not, then what?)

    I think the contention is that a person is dependent on the physical properties of the atoms/molecules, which doesn’t preclude the possibility that a person could still be specified by pure information. The particular origin of the “stuff” that makes you is immaterial (heh); physics already tells us that one elementary particle is identical to all others.

    Another way to put it is that if “mind uploading” is impossible, that doesn’t imply that “mind recording” is impossible. Your mind might still be captured, say on a DVD set, pending the day you can be reimplemented in physical form.

  33. @28

    I think we have to make a distinction here between what can happen from inside of a conscious system vs what can happen from outside of it. From inside, we eventually reach a layer of the very stuff of conscious experience, of sensory, emotional, and motor perception. That layer is ineffable. All we can do is associate a symbol (word, etc) with it, but we can’t describe it any further.

    We describe things by invoking similar experience or sentiment in the target audience. It depends on what you mean by “ineffable”. Language is remarkably impotent in actually describing anything; it’s really just message passing with the intention of invoking semantic routines in the target. The word “describe” begins by begging the question, since you “describe” something to a target that interprets. Really another word is needed, like “specify” or something.

    But natural language (or any language for that matter) doesn’t “specify” anything by itself.

  34. David Duffy:

    We know that non-reversible computation requires the use of energy to erase memory.

    We know that flipping a physical system’s state requires a minimum energy (even more accurately: flipping it in such a way that no other system ends up being reliably correlated with it), but that this is the erasure of information is, again, only thanks to interpreting one state as ‘0’ and the other as ‘1’—think about how easily one could reverse this interpretation, for instance.
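
    (For reference, and this is just the textbook Landauer bound rather than anything specific to the thread, the minimum dissipation per bit erased works out to roughly three zeptojoules at room temperature:)

        import math

        k_B, T = 1.380649e-23, 300.0      # Boltzmann constant in J/K, room temperature in K
        print(k_B * T * math.log(2))      # ~2.87e-21 J per bit erased
        # The figure is indifferent to which physical state we choose to label '0' and which '1'.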

    Similarly, algorithmic complexity and randomness are not interpretational in nature.

    Well, they’re still relative: given an appropriate oracle, a random string may be non-random, so randomness is at least interpretational in so far as it requires a certain kind of observer.

    Furthermore, both really measure an information-bearing capacity: after all, a random string, served up to you, does not inform you of anything. It’s only if you have the proper code that you can ‘uncover’ the information sent to you, and with a different code, you could similarly uncover any other information at all.
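
    (A quick Python sketch of that last point, using nothing beyond XOR: given any ‘random’ string, one can construct a code, after the fact, under which it decodes to whatever message one likes; the sixteen-byte message below is an arbitrary example.)

        import os

        random_string = os.urandom(16)                  # a string with no discernible structure
        message = b"attack at dawn!!"                   # any 16-byte message we care to 'find' in it

        # Build the code after the fact: key = random_string XOR message...
        key = bytes(r ^ m for r, m in zip(random_string, message))

        # ...and under that code, the 'random' string yields the message.
        decoded = bytes(r ^ k for r, k in zip(random_string, key))
        print(decoded)                                  # b'attack at dawn!!'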

    This is the problem with the way ‘information’ is used in everyday and in technical cases: in the technical sense, it’s really just a measure of the complexity of a given object—how many elementary differences it contains, how many yes-no questions one would have to answer to fully describe that object. It’s a purely syntactic quantity, but computation isn’t purely syntactic—the symbols a computer manipulates have meaning, and it’s us that supply this meaning. Without us, there’s nothing decoding the symbols, and consequently, there’s simply no computation there.

    SelfAwarePatterns:

    From within the information framework of software, a bit is an irreducible concept (even in machine language).

    But this framework already only exists if a mind interprets transistor states to represent, say, ‘1’ or ‘0’. Because again, transistor states and ones and zeroes are different kinds of thing entirely: this is shown by the fact that one can easily flip the assignment of which to call one, and which to call zero. That is, in proposing that there is an ‘interior’ level to the ‘software’, all you’ve really done is imported the interior level of the mind interpreting a physical system as implementing a certain kind of software.

    To your point about simulations, I don’t see any reason why interaction with a simulated environment wouldn’t work.

    Because if interaction were sufficient to ‘fix’ what computation is being done, without interaction with the ‘real’ outside world, there would not even be a simulated environment to interact with. There would simply be tiny currents changing transistor states.

    I think we agree though that minds are information nexuses of their environment.

    Actually, I wouldn’t agree with that: information is something that minds bring into the environment. Without minds, there is no information. In fact, without mental models, there is no information (but see my FQXi essay for that).

    You spent that money because the effort to interpret it as doing computation is far less than the effort to interpret the nearest rock as doing so.

    Indeed, and I spend money on books because it’s easier to interpret the marks on their pages as describing the story of star-crossed lovers or alien invaders than it is to interpret a rock in that way. But this doesn’t mean that those star-crossed lovers or alien invaders have any independent reality: they are conjured up by my mind. It’s exactly the same with computers.

    Take for instance the case of a set of cracks in a stone being ‘deciphered’ as an epic poem written in runes. It’s not that, by accident, these cracks happened to ‘mean’ that poem; it’s that a particularly inventive mind interpreted them a certain way. Their meaning was not discovered, it was imported. It’s the same with computation: we can interpret systems as computing, but absent such interpretation, there’s just no ‘there’ there.

  35. If you think about an inside-outside distinction, this provides grounds for extending conventional thermodynamics: the standard entropy measure only defines a system relative to its environment.

    What I’m suggesting is introducing a *second* measure of entropy (or a second ‘arrow of time’ if you like), which would apply to dynamical systems and defines an *internal* measure of the system…or the system relative to itself (the evolution of internal symbolic models).

    Jochen, the computational model continues to enjoy rapid success. For instance in this podcast, Sam Harris recently did an extensive interview with top neuroscientist Anil Seth, which is very good indeed:

    https://samharris.org/podcasts/113-consciousness-and-the-self/

    The ‘free energy principle’ and ‘predictive coding’ are computational models of consciousness that are on the rise.

    https://www.wikiwand.com/en/Free_energy_principle
    https://www.wikiwand.com/en/Predictive_coding

  36. I’m pretty much with SelfAwarePatterns in this discussion, and because I’m starting in the middle, I want to summarize the important background first.

    1. There are (effectively) two kinds of computers, analog and digital. Both of these kinds of computers can be programmable or not. When we talk about “computers” in normal conversation, we are almost always talking about programmable digital computers. No one thinks the brain is a programmable digital computer. People who think the brain is a computer would say it is a non-programmable analog computer (with digital elements and the ability to learn).

    2. It is a proven fact (I’m pretty sure) that any function computed by an analog computer can be simulated by a programmable digital computer to any degree of accuracy except perfection. That means that, theoretically, a brain (as an analog computer) could be simulated by a digital programmable computer. That does NOT mean that the simulation would necessarily run in “real time”. The closer to perfection that you want to get, the bigger the program will be and the slower the digital simulation will run.

    The above should not be controversial. The following may be. It is my personal take on things, and I think it corresponds to what SelfAware has been saying.

    3. A system that can be described as functional (i.e. a computer as described above, including emergent analog computers like the brain) has two descriptions: a physical description and a functional description. This is the hardware/software divide.

    4. The functional description (software description) is necessarily hardware independent. That means that the functional description says absolutely nothing about physics. Thus Descartes’s dualism.

    5. The physical description alone does not specify function. One man’s Dell laptop computer is another man’s paperweight. However, if two systems have the same physical description, they will have the same set of possible functional descriptions. An exact copy (atom for atom) of a Dell computer running Word has the same functionality. A real copy of a Dell computer running Word wherein all of the values in all of the memory locations are identical also shares the exact same functionality, because they are digital. A standard copy of a Dell computer (my computer and yours, say) has nearly the same functionality.

    6. What makes a physical system a computer/person/agent is a combination of the physical description and the functional description of choice.

    [I’m pretty sure SelfAware will buy into all of the above. I’m less sure about the following]

    7. The basic model for a computation (including non-programmable analog computation) looks like this:
    Input –>[agent]–> Output.
    In this format, the agent = the system, so we can talk about the physical agent and the functional agent.

    So now, general comments relevant to the OP, assuming the above is correct:

    You, the reader, are not a functional agent, nor are you a digital simulation that duplicates the functionality of a functional agent. You are a functional agent running on/realized by a specific physical agent. A digital program running on a programmable computer that is equivalent to your functionality is theoretically possible but certainly not feasible in the near future, and probably pointless in the far future where it might be feasible.

    *

  37. mjgeddes:

    Jochen, the computational model continues to enjoy rapid success. For instance in this podcast, Sam Harris recently did an extensive interview with top neuroscientist Anil Seth, which is very good indeed:

    I think the sort of work Anil Seth is doing is very important and intriguing. For one, he’s careful to distinguish between the hard problem—why and how there is conscious experience at all—and the real problem—how consciousness can be assessed, measured, described—i.e. studied in itself in terms of measurable properties of the human brain. Essentially, he’s saying, let’s not worry about the hard problem for now, but instead try and get consciousness itself within the purview of scientific study. Once we understand better how it behaves, what its properties are, and so forth, maybe then will we be able to attack the hard problem—or who knows, maybe, as with life, the hard problem will seem much less forbidding by then.

    I think for this sort of work, the computationalist metaphor can be very helpful, because you get to take certain things for granted—that consciousness exists, and that it indeed performs manipulations on abstract entities, i.e. computations. But for explaining what consciousness is, you’re just not getting anywhere with computation, since computation always relies on mind having been smuggled in at the ground floor somewhere, meaning it can’t account for its emergence at the top level.

    ————————————————————

    James of Seattle:

    It is a proven fact (I’m pretty sure) that any function computed by an analog computer can be simulated by a programmable digital computer to any degree of accuracy except perfection.

    Better than that, actually: as long as the analog signal is bandlimited, or, in other words, as long as there is some finite accuracy for analog parameters, digital signals can exactly recover all of the information in analog ones. So the sets of functions computed by analog and digital computers are exactly the same.

    (If, of course, the exact real number value of analog parameters matters, we’re into the realm of hypercomputation, that is, of devices capable of solving problems that no Turing-machine equivalent computer can solve.)
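
    (A small numerical sketch of the bandlimited-sampling point, in Python with numpy; the particular signal and rates are arbitrary. Sampling above twice the highest frequency present and reconstructing with sinc interpolation recovers the signal very closely in the interior of the window, and exactly in the infinite-sample limit.)

        import numpy as np

        fs = 10.0                                     # sampling rate, above twice the 3 Hz bandlimit
        t_samples = np.arange(0.0, 4.0, 1.0 / fs)     # sample instants over a 4-second window

        def signal(t):
            return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

        samples = signal(t_samples)

        # Whittaker-Shannon reconstruction from the sample values alone.
        t_fine = np.linspace(0.5, 3.5, 500)           # evaluate away from the window edges
        reconstructed = np.sinc((t_fine[:, None] - t_samples[None, :]) * fs) @ samples

        print(np.max(np.abs(reconstructed - signal(t_fine))))   # small; limited only by the finite window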

    A system that can be described as functional (i.e. a computer as described above, including emergent analog computers like the brain) has two descriptions: a physical description and a functional description. This is the hardware/software divide.

    You have to be careful what you mean by ‘functional’ description, I think. A heart, for instance, can functionally be described as a pump, but that doesn’t mean it has any kind of ‘software’. Usually, what we call ‘functional’ is the performance of some physical task that, however, could equally well be performed by a physically different system—so a heart can be replaced by an artificial pump, for example, without any loss of functionality.

    This has several complications, however. First of all, what ‘task’ we recognize in a physical system depends on our interests: we describe a heart as pumping blood, but we could equally well describe it as metabolizing sugar. Also, the level of coarse-graining plays a role: at a molecular level, that a heart pumps blood may not be readily obvious; instead, we see a lot of functions being performed that are wholly opaque to us at the macroscopic level. Additionally, how the system is embedded into its environment matters: a heart can only pump blood if it’s within an organism, although physically, nothing changes about it if we extricate it and put it on ice for transplantation. A heart in an ice box no longer pumps blood, however.

    So there’s already a lot of observer-relativeness in ascribing a proper function to a physical system. Things get worse once we move on to computation.

    Picture a ‘binary adder’ (quotes because that already implies more than we’re strictly entitled to conclude at this point—consider it just a name). It consists of two rows of LEDs, each of which is coupled to a switch that either turns it on, or off. A button, if pressed, will cause a third row of LEDs to light up. The pattern in which it does so depends on the pattern of the LEDs in the first two rows in such a way that if they are interpreted as binary numbers—on for 1, off for 0, with the leftmost LED having the highest value—then the third row can be interpreted as a binary number in the same way that corresponds to the sum of both ‘inputs’.

    Now, the question is: does this device perform addition? Does it, in any objective sense, compute the sum of its inputs?

    Most people would probably answer ‘yes’; but the answer is actually a straightforward ‘no’. For consider what happens if we change the interpretation of the LEDs: for instance, now off corresponds to 1, and on to 0. The device will still compute a function of two binary numbers; however, that function will no longer be addition. But which interpretation is right? If you use that adder in order to compute the sum of two numbers, I can use it with just the same effort to compute whatever function is implemented after the change of interpretation.

    This is, of course, not the only way in which we can change the interpretation. We could also modify the significance of each LED: say, consider the rightmost light to map to the highest value. We can even jumble up things in different ways: have the leftmost one have the highest value, the one next to it the lowest, then the second highest, and so on. In each of these cases, we would get a device that performs a different computation. Thus, computation is entirely in the eye of the beholder.

    Now, sometimes people claim (as SelfAwarePatterns did above) that hooking such a device up to the environment serves to reduce, or even eliminate, the ambiguity. But this actually does no work at all: consider the input rows to be set by ‘the environment’—sensory stimuli, for instance—and the output row the ‘reaction’ of the system. Then, under all of the above interpretations, the stimuli would be the same, and such would likewise be the reaction. What computational gloss we put upon this is entirely irrelevant; all that matters is the physical causality that makes a given LED light up if a certain pattern of LEDs was set.
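
    (To make that concrete, here is a toy Python model of the device just described; everything in it is hypothetical. The ‘physics’ is one fixed mapping from input light patterns to output light patterns. The two readers below differ only in how they encode and decode those patterns, and the arithmetic function each takes the device to compute differs with them, while the lights themselves never behave differently.)

        # The 'physics': a fixed mapping from two 3-LED rows to a 4-LED row (True = lit).
        def device(row_a, row_b):
            a = int(''.join('1' if lit else '0' for lit in row_a), 2)
            b = int(''.join('1' if lit else '0' for lit in row_b), 2)
            return tuple(bit == '1' for bit in format(a + b, '04b'))

        def encode(value, width, lit_means_one=True):
            return tuple((bit == '1') == lit_means_one for bit in format(value, f'0{width}b'))

        def decode(row, lit_means_one=True):
            return int(''.join('1' if lit == lit_means_one else '0' for lit in row), 2)

        # Reader A takes a lit LED to mean 1; reader B takes a lit LED to mean 0.
        for x, y in [(2, 3), (5, 1)]:
            as_a = decode(device(encode(x, 3), encode(y, 3)))
            as_b = decode(device(encode(x, 3, False), encode(y, 3, False)), False)
            print(f"{x}, {y} -> {as_a} under reading A, {as_b} under reading B")
        # Reading A sees x + y; reading B sees x + y + 1 from the very same light patterns.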

    This last, the physical causality, is then all we have to work with in an objective way; computation and function are ultimately subjective glosses, interpretations of that objective data that have no standing on their own. Consequently, trying to ground mind in such notions can’t work, as they depend on mind already.

    This is a very simple and necessary conclusion. I suspect that really all that keeps it from being accepted more widely is social inertia: computers seemed to be almost magical devices in the 20th century, capable of feats hardly imagined up to that point. Why not of explaining consciousness?

    But in the end, as in some glurgy kid’s movie, the magic really was in us all along.

  38. James of Seattle,
    I think you nailed my outlook, except for a couple of points. First, from what I know of Descartes’ dualism, it’s much stronger than what you describe. Among other things, the functional / physical divide in perspective doesn’t require the pineal gland to be a medium between the ghostly mind and the physical body.

    In the end, both the functional and physical descriptions are mental images / models / representations / theories we construct to understand something, and that understanding is only useful to the extent it gives us accurate predictions about future perceptions. Calling one real and the other not misses the epistemic limitations we always have to contend with, that we only ever have access to our own consciousness and the theories we construct about what lies outside of it.

    Second, regarding your last point, if mind copying were available right now, I wouldn’t view it as pointless. To the contrary, I’d try to make sure I maintained recent backups at all times. And I’d choose to view the backup being booted up after my death as me surviving. I can’t say people who choose to view it as another entity are necessarily wrong, but I can’t see that they can legitimately say I’m wrong either.

  39. Jochen,

    Most of what you wrote is fine, but you made mistakes at the end. You said
    [quote]computation and function are ultimately subjective glosses, interpretations of that objective data that have no standing on their own.[/quote]
    This is not correct. There are objective explanations for functionality. There are objective reasons an eyeball functions the way it does. It’s true there are subjective glosses possible, which would provide an explanation for re-purposing a heart as a sugar sink. But the “ultimate” explanations are objective, even if they seem to include subjective goals. Those subjective goals have objective explanations.

    [quote]Consequently, trying to ground mind in such notions can’t work, as they depend on mind already.
    This is a very simple and necessary conclusion.[/quote]
    Simple, yes. Necessary? No. Several significant people are coming around to the notion of how to ground meaning (function) from natural sources. See David Haig’s “Making sense: information interpreted as meaning”, or Carlo Rovelli’s “Meaning and Intentionality = Information + Evolution”

    *

  40. [Dang. How do you do quotes? How do you edit?]

    SelfAwarePatterns,
    I did not understand the first part of your response. You said

    “Calling one real and the other not misses the epistemic limitations we always have to contend with, that we only ever have access to our own consciousness and the theories we construct about what lies outside of it.”

    I didn’t think I was calling one real and the other not, and I’m not sure what real or not thing(s) you are referring to. Are you talking about the difference between a physical agent and a functional agent? The descriptions (physical v. functional) are just concepts. References to physical and functional agents are references to different, but real, aspects of the same thing.

    Also, not sure which epistemic limitations you are referring to.

    *
    [the last point I’ll concede. My point was meant to be that a back-up capability will be post-singularity, so all bets are off]

  41. James of Seattle:

    There are objective reasons an eyeball functions the way it does.

    And those reasons are completely exhausted by the physical causality within which the eye exists. So why add anything beyond that?

    And again, I’m not saying the reasons an eye does what it does are subjective; what is subjective (to some degree, at least) is which function you consider it to have. To see? To move around in its socket? To store the aqueous humor? To give social cues to other members of the species? To be a particularly tasty morsel to carrion birds, once the organism has died?

    All of these (and many more) are certainly things an eye does; but what its function is, is something we choose. For some, it can only fulfill the function if it is properly embedded within its environment: to see, it needs to be hooked up to a nervous system, for instance. To give social cues, it needs to be embedded in a society whose members are capable of receiving these cues.

    There is also a disturbing teleology in the notion of function: we typically claim something functions in such a way if it achieves some goal. But how does the goal, the future, intended end-point of an action, determine function?

    Some, like Ruth Millikan, appeal to evolution to explain this: a part’s proper function is what it has evolved to do. This carries the somewhat disturbing implication that an eyeball that has spontaneously congealed out of the vacuum (a Boltzmann-eyeball) doesn’t have a function, despite being molecule-by-molecule identical with my own functional eyeball: so function does not lie in the physics.

    So, what do we really mean when we say, an artificial heart is functionally equivalent to a biological one? Well, we have isolated one particular range of physical behavior that we deem important, and created a system that replicates it. The replacement heart will differ with respect to many functions: it won’t metabolize sugar, for instance. But we’ve singled out the blood-pumping function, not because we wanted a replacement heart, but a replacement blood-pump. In general, however, it’s not at all easy to figure out which function to focus on.

    But my real target is computation. Here, the issue reduces to a simple question: what does the adder (as described above) compute? If there’s no objective answer to that, then computationalism is simply wrong. (It’s actually still not right if there is an answer, since we still need a mind to map physical states to abstract objects even if that mapping is unique, but uniqueness is necessary, if not sufficient.)

    Several significant people are coming around to the notion of how to ground meaning (function) from natural sources.

    Or indeed, see my own essay “Von Neumann Minds: A Toy Model of Meaning in a Natural World”.

    I’m not in any way saying that mind needs to be grounded in something non-natural: quite the opposite, I’m trying to formulate a physicalist picture of the mind. I disagree with many commonly held beliefs in that area, though, chiefly among them that mind can be explicated in terms of computation.

  42. http://www.umsl.edu/~piccininig/Computationalism_in_the_Philosophy_of_Mind.pdf

    Computationalism is the view that intelligent behavior is causally explained by computations performed by the agent’s cognitive system (or brain). In roughly equivalent terms, computationalism says that cognition is computation. Computationalism has been mainstream in philosophy of mind – as well as psychology and neuroscience – for several decades.

    Jochen says:

    First of all, this already concedes half my point (at least): computation alone isn’t sufficient, interaction with the environment is necessary. A black box, sitting in the dark, won’t yield conscious experience.

    This isn’t computation anymore than, e.g., a ball bouncing down a hill is; it’s simply physical causality.

    SAP says:

    I don’t know of any useful computational devices that don’t interact with their environment, so I’m not seeing why this would be a point in your favor.

    If real world interaction is all that is required to “fix” computation into one aspect of causal mechanism, I don’t see how the “problem of interpretation” presents a problem for computationalism. From the quoted definition, computationalism is the view that intelligent behavior is causally explained by computation, BUT, as SAP points out, interaction with the environment is usually assumed. But if explicit specification is all it takes to resolve the argument…

    Having said all that, I actually don’t buy the “interaction with environment” rebuttal. I’ve always been swayed by the brain-in-a-vat argument: a brain excised from a skull and somehow kept alive in a vat would still be conscious, though totally cut off from its environment. As a more realistic example: those unfortunate enough to suffer a “locked-in” stroke are still conscious, though mostly cut off from their environment.

    (As a side note: a preview button would be great. Sure hope my markup is right!)

  43. (Sorry, I meant to add: in order to do quotes, you need to use the HTML “blockquote” tag—enclose the quoted part in “blockquote” and “/blockquote” in angle brackets (i.e. “<blockquote>” and “</blockquote>”). You can’t edit, unfortunately.)

    SelfAwarePatterns:

    And I’d choose to view the backup being booted up after my death as me surviving.

    So, there is a copying machine. You’re offered the following deal: you get into the machine, a copy is made and reconstituted in a different room; that a copy was made is afterwards proven to your satisfaction, say by video. You have every reason to believe the process works as advertised—say, you’ve been copied many times, it’s a routine procedure used all over the world, there’s no doubt that it does what it says on the tin.

    You’re presented with two options, from which you must choose before the procedure: either you choose that the original is destroyed; in that case, the copy will receive five million dollars. Or, you can choose that the original lives; in that case, the clone will be destroyed, and you get nothing. (Oh, and you can’t choose not to play: you’re the prisoner of the usual mad-scientist type, and if you renege, you’ll just be killed without a copy, all old-fashioned like.)

    Which do you choose? (I’m genuinely curious here; I have no settled views on the matter of personal identity. Still, my intuition is that one ought to decide against the original being killed, since I can’t shake the idea that otherwise, my experience would be—make decision, enter cloning machine, die; not—make decision, enter cloning machine, receive five million bucks.)

    A variation:

    Neither of you can tell, immediately after the cloning, whether they’re clone or original—the cloning procedure requires anaesthesia, and you wake up in identical surroundings.

    Does that have any influence on your answer? In what way?

  44. Hunt:

    If real world interaction is all that is required to “fix” computation into one aspect of causal mechanism, I don’t see how the “problem of interpretation” presents a problem for computationalism.

    I don’t think interaction with the environment suffices for ambiguities of interpretation to be resolved; I was merely pointing out that even if that were correct, we don’t really have a computationalist account, as interaction with the environment is not a computational process.

    Regarding why interaction can’t suffice, see my first answer to James above, with the example of the ‘binary adder’. If you hook a device up to the environment, then it’s its input-output behavior that is relevant—the way it reacts to environmental stimuli. This doesn’t help fix the interpretation of inputs and outputs (and thus, of what is being computed) in any way, though.

  45. Jochen,

    This doesn’t help fix the interpretation of inputs and outputs (and thus, of what is being computed) in any way, though.

    Why does it matter? Again returning to the definition above (which can be quibbled with, of course), computationalism is the idea that cognition is computation. That no specific interpretation can be given to the computation that goes between input and output is irrelevant. The computation is just another part of the causal explanation of intelligent behavior. I think there’s a danger of losing sight of when the problem has been answered here: it doesn’t matter what computation is being performed! Therefore interpretation is irrelevant. Problem solved. Yes/no?

  46. Speaking for myself, I just know I wouldn’t want to be the clone that gets drowned in the tank of water, like in “The Prestige”.

  47. Hunt:

    Again returning to the definition above (which can be quibbled with, of course), computationalism is the idea that cognition is computation.

    But without fixing the interpretation, there’s no fact of the matter which computation is being performed. That is, by saying ‘the binary adder adds’, one means that it implements a concrete algorithm, and it’s this algorithm that is meant by ‘the computation’. Likewise, if mind were computational, it would at least be equivalent to a certain algorithm being implemented physically.

    But without fixing the meaning of inputs and outputs, we can’t say that the binary adder adds—there are, as I described above, even within the restricted domain of binary functions, a great many that one can consider the ‘adder’ to implement. Consequently, there are many different algorithms that one could consider the adder to perform, and hence many different computations: without making an interpretational choice (saying ‘that LED corresponds to place value 2^0, this one to 2^1’, and so on), no computation at all is being performed.
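    To make the relabeling completely concrete, here is a minimal Python sketch (the wiring below is only a stand-in for the device described above, so the details are illustrative): the physical facts are exhausted by a fixed switch-to-LED mapping, and what that mapping ‘computes’ shifts as soon as we assign the LEDs different place values.

    ```python
    # A toy 'adder': physically, nothing but a fixed mapping from switch states
    # to LED states. Everything beyond that mapping is interpretation.
    from itertools import product

    def device(switches):
        """The physical mapping: 4 input switches -> 3 output LEDs
        (wired, as it happens, like a two-bit binary adder)."""
        a = switches[0] + 2 * switches[1]
        b = switches[2] + 2 * switches[3]
        s = a + b
        return (s & 1, (s >> 1) & 1, (s >> 2) & 1)

    def interpret(leds, place_values):
        """An interpretation: assign a place value to each LED, read off a number."""
        return sum(bit * pv for bit, pv in zip(leds, place_values))

    for switches in product((0, 1), repeat=4):
        leds = device(switches)
        as_adder = interpret(leds, (1, 2, 4))   # LED k read as 2^k: the device 'adds'
        relabeled = interpret(leds, (4, 2, 1))  # reverse the labels: same physics, different function
        print(switches, leds, as_adder, relabeled)
    ```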

    Hence, without an interpretational choice, it’s not the case that the brain implements any particular computation.

  48. Hunt:

    Speaking for myself, I just know I wouldn’t want to be the clone that gets drowned in the tank of water, like in “The Prestige”.

    But in order to avoid that fate, which strategy would you, if pressed, choose? Have the original destroyed (and have the clone pocketing five million bucks), or have the clone destroyed? (And yes, either would get to experience their imminent end; there’s no unnecessary torture, but it’s also not a painless blinking out of existence.)

    Does your choice depend on the amount of money? If no money at all is offered, would you be fine with a coin toss? Would ten bucks suffice to choose the clone as survivor? Is there a threshold such that you would take the deal?

  49. I mean, it’s in the end quite simple: you come across a black box in the emptiness of space. It’s some complex mechanism, it has gears, switches, lights, and whatnot. Manipulating switches causes lights to go on, whatever. What does it compute?

    This is impossible to answer. You might make a reasonable guess at what the intended meanings of the lights are, and get a consistent interpretation; but other consistent interpretations will exist.

    The problem is exactly the same as: you come across a page of text in an unknown language. What does it say? Again, this is impossible to answer. In principle, it could say anything that can be said in the number of characters it contains—in this way, one would regard it as a one-time-pad ciphertext, and for every English text of that length, every Chinese text, every German text, whatever, there is a decoding under which that is exactly what it says.
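    The one-time-pad point can be spelled out in a few lines of Python (byte strings standing in for the page of text; the example messages are of course arbitrary):

    ```python
    # One-time-pad sketch: a fixed 'found' byte string decodes to any message of
    # the same length, provided one picks the right key.
    import os

    found_page = os.urandom(20)   # the page of unknown text, as 20 raw bytes

    def key_for(ciphertext, plaintext):
        """Construct the pad under which ciphertext decodes to plaintext."""
        return bytes(c ^ p for c, p in zip(ciphertext, plaintext))

    def decode(ciphertext, key):
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    for message in (b"the cat sat on a mat", b"attack at dawn!!!!!!"):
        key = key_for(found_page, message)
        assert decode(found_page, key) == message   # same page, different readings
        print(message, "<- obtained with key", key.hex())
    ```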

    The meaning is not just in the physical artifacts; one always needs to interpret them. But since such interpretation is necessary, it’s just not the case that in any objective sense, the brain computes something. It can only be interpreted as such. But this falsifies computationalism.

  50. James of Seattle #44,
    Sorry! That paragraph was really just thrown in as an aside comment about the overall conversation in this thread. I really should have broken it out. Totally my bad.

    On the real or not thing, my point was that designating the functional aspect as an interpretation, but not the physical aspect, is artificial. They’re both interpretations of reality, mental models we create. And the epistemic limitation is that we only ever have access to the models.

  51. Hunt #46,
    Actually a brain in a vat (in the typical scenarios) is not cut off from its environment. The usual idea is that an evil scientist is feeding it sensory input and responding to its motor output. Certainly its environment is radically different than what it perceives, but it’s still an environment.

    A patient who is completely locked in might still be conscious. They still have their memories, if nothing else, from when they did interact with their environment. Although I’m not aware of any patients who experienced complete and utter lock-in and recovered enough to describe it, so I’m not sure how much we really know here. But would a brain completely locked in since the early fetal stages be conscious? I don’t know the answer to that question, but I’m pretty sure that if it were, it would be a desolate and impoverished consciousness by any standard.

  52. Jochen #47,
    These scenarios are intuitively difficult because they are outside of anything our instincts evolved to handle. We never had to worry about copies of ourselves in the hunter-gatherer environments, so no answer is going to feel good to us.

    Before addressing the scenario you laid out, let me lay out another one. The mad scientist holding you captive is going to flip a coin. If it’s tails, you die. You get to decide now what happens if it’s heads. Option one is you go free with nothing. Option two is you go free with $5 million. Option two here seems like the rational choice. In either case, you have a 50% chance of dying and a 50% chance of living, so you might as well choose the option where, in the 50% chance you live, you live in style.

    Okay, back to your scenario. An important detail is that you specify that I must decide *before* the procedure. In my view, in my personal subjective future timeline, I have a 50% chance of coming out of the machine as copy-me, and a 50% chance of coming out as original-me. No matter which option I choose, there’s a 50% chance that death is in my future. For me, the choice here is identical to the one in the alternate scenario above. (It’s actually slightly better, since some version of me definitely does get out alive.) The alternate scenario you present, where there is an explicit interruption in consciousness and an obfuscation of identity afterward, just makes the choice easier.

    The choice would be harder if the scenario was that I had to decide *after* the procedure, since now, as original-me, if I choose for copy-me to survive, there is a 100% chance of death in my subjective future. But again, looking at an alternate scenario helps. Suppose the choice was whether to have my memory of the current situation wiped and then be set free with $5 million, or simply to be set free with no memory wipe and no money? Subjectively, for me, the two scenarios are equivalent, although admittedly my survival instincts would make that hard to remember if I were in the decide-after-copy scenario.

    Suppose the mad scientist gave me a choice before the procedure. In choice one, copy-me is tortured for several hours and then killed, while original-me leaves with $5 million. In choice two, both copy-me and original-me are set free unharmed. Again from a subjective perspective, this seems equivalent to the mad scientist giving me a choice of enduring several hours of torture but then being restored to full health and having my memory of the torture session wiped before being set free with the money, or simply being set free with no torture or money. (In both scenarios, unless I desperately needed the money, I think I’d skip the torture.)

    Again, these are all situations our instincts didn’t evolve to handle, so no answer is going to feel categorically right. Indeed, I can’t say that someone who makes different choices than I did is wrong. It’s all in our conceptions of self.

  53. Interesting answer, thanks! To me, it sort of boils down to: the copy procedure didn’t change anything about me; from my point of view, from the point of view of me as a physical system, there’s absolutely no difference regarding whether there’s a clone made after I have been scanned, or not. So what could possibly cause me, in a physical sense, to have any different experience than—be scanned, then die? It seems that anything else would have my experience be contingent on distant facts—i.e. whether there is actually a clone made or not. This just doesn’t sit well with me.

    But as I said, I don’t think I have a good answer myself here.

  54. I should note that a lot depends on my faith in the copying process. If I’m not sure about it, I’d probably be closer to your position. The stream of consciousness I know is preferable to one with substantial question marks. But if the procedure were common and I’d been through the copying process before, such that I remembered being the original followed by being the copy, then it’s easier for me to consider the copy an aspect of me.

  55. Jochen #45 said

    There is also a disturbing teleology in the notion of function: we typically claim something functions in such a way if it achieves some goal. But how does the goal, the future, intended end-point of an action, determine function?

    Actually, the relation of “some goal” (or purpose) to function is that the former is an explanation of the latter. Something has a function because it was created (selected) to achieve a purpose. That purpose can be long gone, but it still explains the function. The function under consideration will always be objectively tied to the purpose for which it was created. So a mechanism is an “adder” if its purpose when it was created and/or situated was to add. That same mechanism could be a subtractor, but only if created or situated for the purpose of subtracting.

    So it’s not that the Boltzmann eyeball doesn’t have a function so much as that it has no function related to a purpose.

    But my real target is computation. Here, the issue reduces to a simple question: what does the adder (as described above) compute?

    The answer is the same as above. What it computes is determined by the objective purpose for which it was created or situated.

    we still need a mind to map physical states to abstract objects

    Do you mean we need a mind to do the mapping, or we need a mind for there to be a mapping? I would say that given a purpose, there is an objective mind-free mapping.

    *

  56. Jochen writes: “the erasure of information is, again, only thanks to interpreting one state as ‘0’ and the other as ‘1’—think about how easily one could reverse this interpretation” and “algorithmic complexity and randomness are…still relative: given an appropriate oracle, a random string may be non-random”.

    These kinds of comments keep confusing abstract and concrete computation. The random string is non-random only because a suitably formed physical dynamical process can expend energy to decode it – the total information must be message plus reader. The reason we have Landauer’s principle is that hypercomputing cum oracles at the level of thermodynamics give perpetual motion, which is generally thought a bad thing. And Maxwell’s demon cannot start swapping his semantics re ones-and-zeroes half-way through and expect his books to balance.

    As to teleosemantics a la Millikan and others, your Swamp Thing/Boltzmann Brain type objection fails because natural selection is “merely” a search engine finding actual physical solutions. Sure you can throw together a de novo solution that is just as effective, but again, in this world, you have to have expended work to get to such an unlikely state from the position in which we usually find ourselves.

  57. James of Seattle:

    So a mechanism is an “adder” if its purpose when it was created and/or situated was to add. That same mechanism could be a subtractor, but only if created or situated for the purpose of subtracting.

    But natural selection simply doesn’t act on that level. It selects which LEDs light up in response to which switches have been flipped, as that determines the system’s behavior within the environment; but whether you consider those lights to represent binary numbers, and how you do so, doesn’t make a whit of difference for selection.

    The same holds for man-made objects: no matter how hard you want your device to be an adder, I can always use it to implement a different function, and there’s nothing you can do to stop me. The same goes for every system that purportedly computes something.

    This is in stark contrast to any objective properties of a physical system. It’s not open to interpretation what its mass, or size, may be; and even in cases where different observers come to different results (say, relativistic length contraction), there is a simple, lawful connection that gives a unique answer for every combination of observer (i.e. frame of reference) and system. No such agreement exists for the notion of ‘computation’.

    No computationalist ever is going to solve the question of how to find out what an arbitrary system computes; yet if that were an objective property of a system, this should be possible at least in principle. I’ll even throw in the complete state-transition table of the system for free, since thanks to Moore’s theorem, you can’t generally figure that out just by observing input/output behavior.

    It won’t help: different people can use this system to compute different functions, and even the same person can use it differently. What a system computes is always relative to how it is interpreted; but consequently, computation just can’t figure in explaining minds.

    —————————

    David Duffy:

    The reason we have Landauer’s principle is that hypercomputing cum oracles at the level of thermodynamics give perpetual motion,

    Landauer’s principle, as I mentioned, simply characterizes the fact that to switch an unknown state, a minimum average energy is necessary. It has connections to information only insofar as one can use the state that was switched to represent a ‘bit’ of information. (One can easily derive the principle without any mention of information or computation at all, for one.)
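    Just for scale, the minimum the principle demands is k_B T ln 2 per reset bit; a quick back-of-the-envelope check, assuming room temperature:

    ```python
    # Landauer bound: minimum average dissipation to erase (reset) one unknown bit.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # assumed temperature, K

    E_min = k_B * T * math.log(2)
    print(f"~{E_min:.2e} J per bit")   # roughly 2.9e-21 J
    ```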

    It would be still more correct to say that what one erases are correlations between systems. Since such correlations derive from constraints, this entails removing the constraints; that this needs physical work to be performed should not be surprising at all.

    As for Maxwell’s demon, again, it’s completely irrelevant whether you call the elements of its memory ‘1’ or ‘0’; what is useful to the demon is the physical correlation between that memory and the position of the particles. It can be completely understood in terms of switches being flipped, circuits being closed, or whatever; anything informational is, again, merely an after-the-fact gloss.

    natural selection is “merely” a search engine finding actual physical solutions

    The problem is not finding solutions; it’s formulating the problems in an objective way. So the heart evolved in order to pump blood, and it’s by this process that, as teleosemantics would have it, its function indeed is ‘to pump blood’. This makes the question which function the heart fulfills dependent on its evolutionary history; a heart with a different history, or none at all, would thus have a different function even if it was physically identical to an evolved one. It would be the same ‘solution’, but for a different problem, or no problem at all.

  58. Jochen: “correlations”: “correlation”, “mutual information”, “feedback”, “measurement” are informational concepts. Autonomous Maxwell’s Demons of several types have been built.

    …in structured environments, whether the correlations are temporal or spatial. Ashby’s Law of Requisite Variety – a controller must have at least the same variety as its input so that the whole system can adapt to and compensate that variety and achieve homeostasis – was an early attempt at [a] general principle of regulation and control. In essence, a controller’s variety should match that of its environment. Above, paralleling this, we showed that a near-optimal thermal agent (information engine) interacting with a structured input (information reservoir) obeys a similar variety-matching principle.

    Arising naturally in recent analyses of Maxwellian demons and information engines, information reservoirs have come to play a central role as idealized physical systems that exchange information but not energy. Their inclusion led rather directly to an extended Second Law of Thermodynamics for complex systems: The total physical (Clausius) entropy of the universe and the Shannon entropy of its information reservoirs cannot decrease in time. We refer to this generalization as the Information Processing Second Law (IPSL).

    A specific realization of an information reservoir is a tape of symbols where information is encoded in the symbols’ values.

    Both from

    http://csc.ucdavis.edu/~cmg/compmech/pubs/abboyddissn.htm

    And see also a primer by Sagawa [2017] arXiv:1712.06858v1

    The question is whether moving up to the level of Friston’s models of the brain’s function is justifiable.

  59. David Duffy:

    “correlations”: “correlation”, “mutual information”, “feedback”, “measurement” are informational concepts.

    They’re syntactical concepts that tell you nothing at all about semantics; it’s semantics we need for minds. A sheet of paper with random characters on it will have high Shannon information, but still, that doesn’t mean it transports any sort of message at all. Yet any message of the right length can be decoded from it.

    The quantities you’re talking about are best thought of as being related to information-bearing capacity, rather than information itself.

    When you get right down to it, using the notions you refer to, a bit is nothing but an elementary difference between two physical systems. Say, a red ball and a green ball: their color is a difference across one property, and hence I can provide you with one ‘bit’ of information in this sense if I give you either the green or the red ball.

    But what we need for minds is meaning: say, that one bit of information tells you the answer to a yes/no question, for instance, that the English are attacking by sea. This is not reducible to the physical system on its own; it is your interpretation that makes it about that. It could have just as easily meant something completely different. There is no sense in which a green ball inherently and objectively means that the English will attack by sea. This is an additional interpretive gloss put upon that system by an interpreting observer.

    It’s exactly the same with computation. That the machine I describe is an adder is just the same kind of fact as that a green ball means that the English attack by sea: without the right interpretation, there’s simply no fact of the matter there.

    Hence, we can’t explain mind—which we use to furnish these interpretations—in terms of computation—which is always dependent on such interpretations.

  60. In other words, what we’re saying when we say ‘the machine computes the sum of its inputs’ is the same kind of thing as what we’re saying when we say ‘the word ‘dog’ means a certain kind of quadruped’. It’s convention: it could just as easily be different. ‘Dog’ could mean a certain plant, a style of literature, or just be a nonsense syllable. Likewise, the ‘adder’ may be thought of as adding, but could just as well perform a plethora of other functions.

    Any way around this would have to postulate that there are symbols that, somehow, just inherently mean things. This does not strike me as reasonable.

  61. Jochen,

    I think I see the disconnect between you and me, but I am not sure I will be able to convince you because you seem wedded to the idea that computation is necessarily independent of mind (which does the interpreting). For what follows I ask that you suspend this stance and consider the possibilities presented on their own merits.

    What I think you are missing is that interpretation is computation.

    Let me change the example from an adder, which I don’t find intuitive, to a kind of thermostat. This thermostat, instead of regulating the heat of a room, simply closes a circuit that turns on a light if the temperature is above 70 degrees Fahrenheit. In this case the thermostat interprets a certain amount of ambient kinetic energy as “turn on the light”. That’s what I am calling a computation. The output (light on or off) is determined by the input (ambient kinetic energy). Presumably the purpose of this thermostat was simply to generate a symbol relating to the temperature.

    Now a subsequent system can take that light as input, and depending on its purpose, produce an output, such as displaying “The temperature is 70 degrees”. Alternatively, the output might be opening windows (making a proper thermostat). These are interpretations/computations, determined by the functions of the secondary systems as explained by their respective purposes. With some deeper knowledge, a system such as yourself can make the computation/interpretation that the light is creating heat and possibly working against the thermostatic purpose.
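    Spelled out as code, the layered picture I have in mind looks something like this (the names and the 70-degree threshold are purely illustrative):

    ```python
    # A threshold device produces a 'symbol' (light on/off); downstream systems
    # each put that symbol to their own use.

    def thermostat(temp_f):
        """Primary device: closes a circuit (light on) above 70 degrees F."""
        return temp_f > 70.0

    def display(light_on):
        """One secondary system: reads the light as a statement about temperature."""
        return "The temperature is above 70 degrees" if light_on else "70 degrees or below"

    def window_controller(light_on):
        """Another secondary system: reads the same light as a command to open the windows."""
        return "open windows" if light_on else "keep windows closed"

    for t in (65.0, 72.5):
        light = thermostat(t)
        print(t, display(light), "/", window_controller(light))
    ```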

    So, does that change your mind? 🙂

    *

  62. James of Seattle:

    you seem wedded to the idea that computation is necessarily independent of mind

    I presume you meant to write ‘dependent on mind’ here. And it’s not an idea that I came to easily and without a fight—indeed, an earlier FQXi-entry of mine attempted to rigorously define an information-based ontology in order to attack the hard problem of consciousness, but the solution just never seemed satisfactory. But in the end, it’s just a simple fact that for every system you claim computes a function x, I can explicitly specify a function y that it can just as well be considered to compute, with the difference between both cases merely resting in interpretation.

    It’s again exactly the same as the fact that you can always decode a given string of symbols in several distinct ways. I suppose you take no issue with that, at least?

    Let me change the example from an adder, which I don’t find intuitive

    The purpose of the adder example is exactly that it allows us to check our intuitions by being fully well specified: it is completely clear that you can, by a mere relabeling of what the LEDs represent, change the algorithm that is implemented by the system. If that goes against your intuition, then maybe you should reconsider that intuition.

    In this case the thermostat interprets a certain amount of ambient kinetic energy as “turn on the light”.

    No; this is your interpretation of what the thermostat does. The thermostat knows nothing of temperature, or of light: it is simply a physical mechanism where a state-change caused by an increase of temperature closes a circuit. Anything more than that is unearned.

    Furthermore, this is not actually what one typically considers a computation; it’s a control loop, which doesn’t work on formal objects, as a computation does, but which regulates physical quantities. It’s easy, in this sort of example, to muddy the waters: we never look at the world without interpreting it, and thus, we see interpreted things everywhere in the world, and consider them a feature of the world, rather than of our minds. It’s the old problem of the fish that don’t know what water is, since they’re too immersed in it to notice. So it’s easy to think that a rise in temperature means ‘switch the light on’ to the thermostat; but without one’s interpretation, this is not anything like a meaning.

    Otherwise, every physical interaction would suddenly acquire a meaningful nature: the jagged piece of rock that launches a ball tumbling down an incline up into the air would mean ‘jump three meters’ to the ball. But that’s again just overinterpretation: it merely causes the ball to jump three meters, just as the temperature merely causes the thermostat to switch on the light.

    This doesn’t change if we add further systems that react to the thermostat’s light going on. A display showing the temperature does not interpret the light; you interpret what’s shown on the display. Think about somebody speaking a curious language in which ‘temperature’ means ‘color’ and ’70 degrees’ means ‘pink’: to them, the display would ‘interpret’ the thermostat’s light as indicating that ‘the color is pink’. But neither the mapping of the word ‘temperature’ to average molecular kinetic energy nor the mapping to light frequency has any precedence, so that person is just as justified in their pronouncement: the interpretation comes at the level of the mind watching the mechanism.

    Or again, take a window being blown open by the wind: do you think that this is because the ambient air pressure indicated to the window that it ought to open? But that’s no different from the window being opened by the thermostat. The physical causality is a little more complex, but at the bottom, it is exactly the same kind of process.

  63. Jochen,

    I’ll take one last shot. The key is function (and purpose, in a broad sense). The thermostat was designed/situated with a purpose, and therefore a specific function. When it functions correctly, the light goes on when the temperature reaches 70. The displayer mechanism’s function is to announce the temperature. The wind blowing open the window does not have a purpose, and therefore no function. I am defining this as computation. This is what I mean when I say mind is computational.

    Now you can re-interpret the output, but that does not change the function of the original mechanism. But your re-interpretation is a second computation.

    You can say “that’s not what I mean by computation”, but that doesn’t matter. I say every mental event is my kind of computation. Every interpretation is this kind of event, and I’m calling it computation for lack of a better word.

    *

  64. Hi Jochen: “that one bit of information tells you the answer to a yes/no question”…
    is exactly the same problem as the original Maxwell’s demon – how is it that “neat-fingered” intelligences break the laws of thermodynamics? How is it that intentionality leads to my brain getting its 2000 kJ every day by creating “negative entropy”?

    Where does the viewer of the one-bit-of-information green lantern get all that data from? In that case it is simple. It was all transferred earlier into a “system possessing multiple, distinct metastable states”. As in the case of Maxwell’s demon, one has to look at all the involved systems. The fact that I walk around with mental representations of refrigerators means that the correct model of my brain’s thermodynamics must include the properties of supermarkets etc, as per the “law of requisite variety”. Correlations are not purely syntactic, in that within a physical computing device they represent the nonequilibrium free energy that can be realized if the transduction between measurement and action is efficient enough. And this is just metabolism and environmental prediction – I have mentioned nothing of the informational nature of replication in biology at the molecular level.

  65. James of Seattle:

    I’ll take one last shot. The key is function (and purpose, in a broad sense). The thermostat was designed/situated with a purpose, and therefore a specific function.

    I think I get what you’re proposing, and I’ve already told you why I think it doesn’t work; simply restating your proposal won’t do any more to convince me.

    In short, humans just aren’t designed for a purpose; evolution doesn’t have a purpose. And even if it did, then it would follow that a physically identical copy that randomly coalesced from a swamp would not have consciousness, since it would not be designed for a purpose and hence, have no function. And yet still, if you design something with an explicit purpose in mind, nothing is going to deter me from using it for a different purpose. Indeed, in general, merely being presented with the artifact isn’t even going to tell me its purpose: it’s not an objective property of the physical object itself.

    I am defining this as computation.

    You’re of course free to define things any which way you like, but in a discussion, this runs the risk of Humpty Dumpty problems: if your words always mean exactly what you want them to mean, I’m left ignorant of what it is you mean (ironically, this is exactly what I’m trying to get across re computation). So just to get clear on this point, I tend to think of computation in roughly the same way wikipedia has it:

    Computation is any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model understood and described as, for example, an algorithm.

    In other words, something like what the adder does, under the right interpretation. Using this definition, I think it also becomes completely clear how the adder can be seen to implement distinct algorithms, and hence, how computation depends on interpretation.

    ————————————————————

    David Duffy:

    is exactly the same problem as the original Maxwell’s demon – how is it that “neat-fingered” intelligences break the laws of thermodynamics? How is it that intentionality leads to my brain getting its 2000 kJ every day by creating “negative entropy”?

    I don’t really get what you mean here. Neither my brain nor Maxwell’s demon breaks the laws of thermodynamics.

    As in the case of Maxwell’s demon, one has to look at all the involved systems.

    This is just an argument from complexity: because it’s easy to see in simple systems that no information is actually involved, an appeal is made to complex systems that aren’t so easily analyzed anymore, because maybe, somehow, somewhere, information is going to creep in there. But it doesn’t: it’s the same as trying to define a word by referral to other words—the word never ends up getting defined, since you have to define all the defining words first, and those defining them, and so on, so meaning is eternally deferred. Ultimately, one must ground everything in something other than words—i.e. the real world. The same with information and meaning: digging deep enough, one always finds a mind interpreting some symbols as pertaining to something beyond themselves.

    And this is just metabolism and environmental prediction – I have mentioned nothing of the informational nature of replication in biology at the molecular level.

    Which is a good thing, since there is none. What happens are processes that are entirely analogous to keys opening locks: by their form, proteins, enzymes, and what have you, cause or catalyze certain chemical reactions. This is, again, not different from the wind blowing open a door; a strand of mRNA doesn’t inform a ribosome to assemble proteins any more than the air pressure ‘tells’ the door to open. The pressure simply reaches a certain strength that overcomes the door’s inertia; and likewise, the shape of mRNA triggers changes in the state of the ribosome that lead to certain amino acids being preferentially attached (or something like that; high school biology was a long time ago, so I apologize for any errors).

    Sure, it is often advantageous to speak of ‘the genetic code’, and of mRNA as ‘carrying information’ about proteins. There’s nothing wrong with that, as long as one is clear on the fact that such talk is metaphorical. It’s not even strictly wrong to say that ‘this mRNA means that protein to the ribosome’. But meaning can’t be exhausted by this: it leads to infinite regress. This is the picture of mental symbols meaning something to an internal observer: in other words, the homunculus fallacy. Hence, one needs to work a little more in order to get a mind out.

  66. David Duffy

    I think you have succinctly summarised the belief that underlies the entire belief system of computationalism.

    ” minds secrete thoughts and words, which seem a lot closer in nature to the world of computation to many of us.”

    It’s a kind of common-sense analogy. It’s a simple enough way to think, for computer-savvy persons. It’s also based upon notions inherited from Greek philosophy that have permeated Western thought for centuries. But it’s wrong.

    The Greek belief was based on the idea that the world consisted of the physical and the conceptual/mathematical. Thus if you couldn’t touch it or feel it, it was mathematical or an idea. Certainly it’s natural enough to see words and thoughts as conceptual, as they are most definitely not physical. Apparently.

    The beliefs of computationalism can be stated thus, I think:-

    i) brains secrete thoughts (I will include ‘words’ within the notion of ‘thought’ for now)
    ii) thoughts are conceptual
    iii) computer programs represent concepts
    iv) representation of a concept is the same thing as the concept
    v) therefore a computer running brain-like software is the same thing as a mind

    I don’t disagree with i). But ii) is wrong, iii) is wrong (as formulated) – and also because iv) is wrong. I guess by the time we get to v) you could say I’ve lost hope.

    I’ll start with iii). Computers can’t represent anything – words, ideas, teeth, rainstorms. They can represent activities in binary arithmetic, and that’s it. That’s the end of their representative capacity.

    They have as-if representation IF – and only if – there exists a user U for whom the digital sequences are mapped to ideas of real meaning. The representation occurs therefore outside of the computer, whose digital sequences are interpreted by a semantic mapping that is unlimited, arbitrary and thus very powerful as a tool of data manipulation.

    So iii) can be reformulated as “computers can produce output that can be treated as representation, although that use is external to the inherent computational activity of a computer which is restricted to binary arithmetic”.

    That kind of ends it – for me. That finishes computationalism off every time. However, let’s think about ii). Some thoughts are clearly “conceptual” in a hand-wavy sense – in fact most are. But we must distinguish between thinking as an act and thoughts, formulated into neat packages for communication, as the output of the thinking process. It doesn’t make sense to me to say that when I “think” about my dad I’m dealing with a concept. When I think of him I firstly have an emotional experience – a light feeling of security maybe, a comforting feeling. Then I’ll see his face in my mind’s eye. Then I might hear his voice slightly. These are all thoughts but they aren’t concepts. They don’t represent anything. They are concepts when I talk about them certainly, as now, but they are not concepts when I’m thinking them. Visual images apart, they don’t represent anything; they’re just thoughts, sequences of feelings, images, that frequently don’t include any words or reflections or rational categorisations. But it’s still thinking.

    We can delve deeper of course and find that a great deal of what the brain does is not conceptual, as these pages have dealt with extensively. Mary’s red is not about anything. It’s not a word. It is what it is, a specific sensory output of the brain. Red is a fact of biology, not physics. If you think about it, for any comprehension of the universe to be meaningful, there must be a base level of fundamental knowledge, and this level can’t be conceptual. Otherwise the hierarchy of concepts – abstract representations – upon which human knowledge is based will be indefinitely open-ended and rootless. There must be a point where we just know stuff. So even in its non-sensory capacity, thinking still can’t be entirely conceptual.

    So my view of ii) is that thoughts can produce and use concepts as a kind of process output, but thinking itself is a physical act with a physiological output of mental phenomena.

    iv) of course isn’t too difficult to prove wrong. A duck is not the same thing as a painting of a duck, end of.

    JBD

  67. Hi John.

    “I’ll start with iii). Computers can’t represent anything”: No, it’s the opposite – a computer can represent anything more or less exactly. The requirement is appropriate measurement or transduction, so that there is a causal relationship between the ones-and-zeroes and the (external) Real. That was the interest of the cyberneticists, and continues up through to the neurophenomenalists and enactivists: “…the idea of a strong continuity of life and mind. One way to put this idea is that life and mind share a common pattern or organization, and the organizational properties characteristic of mind are an enriched version of those fundamental to life…” [Thompson 2004]. This is why I am quite interested in the thermodynamics of life and how it relates at a basic level to the computations life does in order to keep going. In the same way, my (scattered) thoughts about consciousness are also bottom-up, e.g. I automatically assume other mammals (at the very least) have experiences and qualia. Then the “problem” of qualia is what else they would be like, given that we share them with non-language users who carry out far fewer computational operations per second than our brains do. And we have human models too – some individuals seem to have memories from very early in life, before the childhood amnesia sets in around 5–7, when we first become our selves.

    As to the non-propositional or vague nature of concepts – we already have computational-cum-mathematical models that have very similar properties, e.g. the *representations* present in artificial neural networks of how to play Go, or of what cats look like.

  68. David Duffy:

    No it’s the opposite – a computer can represent anything more or less exactly. The requirement is appropriate measurement or transduction, so that there is a causal relationship between the ones-and-zeroes and the (external) Real.

    I don’t see how that’s supposed to help; any string of ones and zeros can be in a causal relationship with any particular external state of affairs. It simply depends on the transduction: the same weather, for instance, can be transduced into infinitely many different bit strings and back.

    But I can choose what I consider the transduction to be. A simple example would be that I consider the weather as part of the transducer, and take the causes of the weather to be what’s transduced; thus, I can consider the computation you believe is about the weather to be about butterflies flapping in China.

    But I’m of course not limited to such simple manipulations; what I consider to be ‘transduced’ need not have any causal relationship to the weather at all. After all, the stipulation that the relation be causal is just yours alone; I need not follow it, and will still be able to use the device you’re using to compute weather patterns to compute something else entirely, by any ordinary definition of computation. And of course, we have the question of how we can use computers to manipulate abstract quantities, that aren’t typically causally related to the physical world: my adder, above, is not causally related to numbers, or anything like that, yet I can use it to perform addition, or equivalently to perform any of a set of different functions.

  69. Hi Jochen: “…argument from complexity…I can choose what I consider the transduction to be”. Jochen, you just go round and round, conflating the computations being done in your head – the choice of what _you_ consider an entity to be doing – with whatever autonomous computation the entity itself does in its interactions with the external world. It is your brain where all the _physical work_ is being done to pick out the analogy between the weather and the coarsened internal thermodynamic state of the device. And you continue to claim some computations are not computations at all, just “keys opening locks”, when this is the point of Leibniz’s Mill – it has to be all physics. Of course, translation of sequences of interchangeable bases in RNA into sequences of amino acids in a protein is a chemical process. The point is that organisms do editing, shuffling and duplication of DNA codes that are stable over millions of years despite rapid turnover of the physical substrate they are written in, and these control the manufacture of the effectors – proteins.

    If we take an animal solving a puzzle in order to get food, you seem to be saying that it is quite legitimate to interpret the firing of the neurons involved as a weather forecast, and that its success in obtaining calories is some kind of pathetic animism, or observer-dependent.

    Of course, if I had a clear idea of how all this actually works, I wouldn’t be here, but publishing it!

  70. David


    “No, it’s the opposite – a computer can represent anything more or less exactly.”

    I don’t think you quite got my point… if you read what I wrote again, you’d see.

    There is no inherent representation going on inside a computer at all. It’s pushing it, but I suppose you could argue that a digital computer represents binary arithmetic inherently – but nothing more.

    The mapping of digital sequences to semantics is completely arbitrary. “001011010” could mean “sad”, “00110111000” could mean “happy”. There is no essential link between a digital sequence and a state of the universe.

    But that is why computers are so useful. We can take any digital sequence and map it to anything. Hence – as long as we remember that the representation is NOT an act of the computer, but a human interpretation of the output of a computer – we can say :-

    a computer can represent anything more or less exactly

    There is not, and cannot be, a necessary relationship between a digital sequence and a state of the universe. I don’t know if that’s the “causal” relationship the “cyberneticists” are on about, but this is utter nonsense, and displays a fixation with computational systems at the expense of scientific curiosity.

    I’ve worked with neural nets, worked with them plenty. They have nothing to do with intelligence. They are a distributed, deterministic memory array structure. “AI” is a sales pitch.

    JBD

  71. Hi John.

    I think we are talking about reference here. You are saying that the states of a computing device, in the abstract TCS sense, cannot intrinsically refer to external reality. Sure. This comes back to Jochen’s mention of Rosen: “anticipatory systems” (i.e. life forms).

    Nick Rossiter
    http://nickrossiter.org.uk/process/
    has a certain amount on Whiteheadian process theory, metaphysics, category theory and semantics (which are a heady mix) but which I think apply to this question. The 2009 paper at that site “The Natural Metaphysics of Computing Anticipatory Systems” is the one directly extending Rosen’s model. If we think of computing as a concrete physical (temporal) process, then it is quite reasonable to think that such a process can be causally connected to the external world, in a way that is determined by its physical structure. We can still have the functionalist insight that one “black box control system” with the same inputs and outputs can replace another, even though one is hydraulic and the other electronic or biological.

    After Castonguay (1973), we can define extension as reference to the external world, and intension as reference to other elements of the formal system that the computer is acting as a substrate for. In a related way, Harnad [2002] talks about the problem of symbol grounding:

    But let us grant that if the symbolic approach ever succeeds in connecting its meaningless symbols to the world in the right way, this will amount to a kind of wide theory of meaning, encompassing the internal symbols and their external meanings via the yet-to-be-announced “causal connection.” Is there a narrow approach that holds onto symbols rather than giving up on them altogether, but attempts to ground them on the basis of purely internal resources?

    There is a hybrid approach that in a sense internalizes the problem of finding the connection between symbols and their meanings; but instead of looking for a connection between symbols and the wide world, it looks only for a connection between symbols and the sensorimotor projections of the kinds of things the symbols designate: it is not a connection between symbols and the distal objects they designate but a connection between symbols and the proximal “shadows” that the distal objects cast on the system’s sensorimotor surfaces…the capacity to sort our sensorimotor projections into categories on the basis of sensorimotor interactions with the distal objects of which they are the proximal projections is undeniably a sensorimotor capacity; indeed, we might just as well call it a robotic capacity.

    He then goes on to say that communication between learners of the world allows “cognitive theft” (his phrase: the advantage of theft over honest sensorimotor toil). It seems to me these are references to entities in other thinkers’ formal systems, only indirectly pointing to the sensorimotor “projections” and then to reality. The objective fact that some of these are “true” references is a function of the whole system (insert Darwinist argument, tigers, blah blah).

    Fulda [2017] “Natural Agency: The Case of Bacterial Cognition” points out that bacteria have complex behaviours and agency:

    I argue that the Cartesian conception, the view that agency presupposes cognition, forces us to choose between attributing full-blown belief-desire psychology to bacteria or treating them as mere automata. This conceptual scheme, however, is not sufficiently nuanced to capture the middle ground between these two extremes that most organisms, including unicellular ones, occupy. On the one hand, their capacities and activities are too supplely adaptive to count as mere machines. They act purposively in response to the relevance that their conditions of existence have for attaining their lifestyle. On the other hand, they lack the open-ended responsiveness of cognitive agents to rational norms. The Cartesian conception thus leads into a dilemma between mechanism, which fails by underestimating bacterial agency, and intellectualism, which fails by overestimating their agency as cognitive.

    I, as you might guess, think there is a continuum up to intellectualism. As to neural networks, I hope you will agree that particular neural networks can closely match particular mental faculties – your argument is that “open-ended responsiveness” can’t arise from piling faculty upon faculty in a massively modular way.

  72. David Duffy:

    Jochen, you just go round and round, conflating the computations being done in your head – the choice of what _you_ consider an entity to be doing – with whatever autonomous computation the entity itself does in its interactions with the external world.

    Well, from my point of view, you just go round and round, conflating interpretations being performed by your mind with computations being performed by external physical systems.

    But I think one problem might be that we’re not quite clear about what we mean by ‘computation’—or at least, it’s not quite clear to me what you mean by it.

    To me, computation is the implementation of some algorithm, a computable function. Again, I think the wikipedia definition comes closest:

    Computation is any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model understood and described as, for example, an algorithm.

    So, we have a physical substrate implementing an abstract structure—a program, an algorithm, what have you.

    Computationalism then is the claim that the mind is essentially such an abstract structure, physically implemented by the brain. Any system implementing that particular algorithm will then be conscious.

    But this can’t work: whenever we see a system implementing an algorithm, it does so only thanks to interpretational choices. The adder example establishes this: I can use it to add numbers; somebody else can use it, with exactly the same justification and effort, to implement another function. Computation is in the eye of the user—in exactly the same way as the meaning of a text is in the eye of the reader (there may be another language in which this text means something entirely different).

    You’re proposing to ‘settle’ on one interpretation of a physical system’s evolution by considering how things are causally hooked up to it. Again, the adder shows that—under the definition of computation as I use it—this will not help: the inputs can be replaced by some sort of ‘sensory data’, and the outputs considered to be the system’s ‘behavior’ in response to this data. The behavior of the system then will be exactly the same in every case, yet still, I can use the thing to compute sums, and somebody else to compute something else altogether—and exactly with the same justification.

    Because what’s fixed by the causal structure is merely syntactic: which LEDs light up in response to which switches are pressed. This is just the physical chain of causality. But this underdetermines computation: the same physical system, doing exactly the same things, can be considered to perform different computations. Again, the analogy to text and its meaning is illuminating: just the physical features of a piece of text do not suffice to fix its meaning. It’s simply not the case that ‘dog’ must refer to a furry quadruped because of any sort of characteristic of the symbol ‘dog’. It’s convention, interpretation; and it’s likewise convention and interpretation to consider a row of LEDs to stand for a certain number in binary code.

    And all of this is fine, basically, until somebody comes around and tries to use computation to explain how the faculties that allow us to form conventions and interpretations work. Because this is trying to explain text by more text: it simply never bottoms out. If I have a dictionary written entirely in one single language, with ‘dog’ being explained as ‘furry barking quadruped’, ‘furry’ being explained as ‘largely covered in hair’, ‘hair’ being explained—and so on, I will never be able to use it to discover the meaning of a single damn word.

    This isn’t quite what you’re doing, however. I think that ultimately, you’re switching around the notion of ‘computation’ until it means little else but ‘what the system is physically doing’. So, on your notion of computation, my adder would ‘compute’ which lights to switch on—it would not actually perform any ‘addition’ at all.

    So you essentially stipulate that what an animal is doing to solve a puzzle must be equal to some ‘computation’—but I see no reason for this. What we have, here, is simply a physical chain of causes and effects, which in the end, lead to the animal obtaining food—there is no need to talk of ‘computation’ at all, and indeed, I would consider this at best ill defined.

    Say my adder were part of some animal’s system for obtaining food. Depending on the number of LEDs being lit, certain behaviors are executed. Would we say the animal computed a sum in order to find its food? We might; but we equally well might consider it to have computed some other function, because all the functions we can consider it to compute have one thing in common, namely, that they will successfully lead to the animal obtaining food—which is a consequence of the right LEDs being lit. So why talk about this in terms of ‘computation’, rather than in terms of LEDs? Computation here is just a superfluous word—this form of computationalism is ultimately an empty thesis, because we simply call whatever an animal does ‘computation’.

    The thesis that consciousness is a program, implemented as described above, on the other hand, has genuine, nontrivial content. It’s just wrong. (And if that is, in fact, the thesis you want to defend, then there is still the question that I would like to have answered first: you find a box, with complicated inner mechanics and electronics, floating in empty space, gears whirring and lights blinking; what does it compute?).

  73. Hi Jochen.

    “So you essentially stipulate that what an animal is doing to solve a puzzle must be equal to some ‘computation’—but I see no reason for this. What we have, here, is simply a physical chain of causes and effects, which in the end, lead to the animal obtaining food – there is no need to talk of ‘computation’ at all”.

    If a chimpanzee travels for 30 minutes to retrieve some suitably long pieces of grass and returns to an ant nest and uses the grass as a tool to extract ants, then yes it is a physical chain of causes and effects – we are all physicalists here, aren’t we? And if we were talking about a human being, we would say that he prepared and executed a plan – that is, an algorithm – for reaching a goal. When the chimpanzee reaches down and picks up the piece of grass, we understand the mechanics of this as involving sensory-motor negative feedback loops implemented in the nervous system – i.e. an algorithmic approach to controlling matter. When a dog successfully catches a thrown ball by applying a heuristic that varies its running speed with the altitude and azimuth of the ball, it is carrying out an algorithm. In each case, a vast number of alternative algorithms or alternative interpretations lead to a failure to satisfy the problem constraints – mistakes, disease states.
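    To be concrete about the ball-catching case, here is a toy sketch (illustrative only; it is not a claim about canine neurophysiology, and the gain and sign are assumptions of mine): the runner nudges its speed so as to cancel any drift in its angle of gaze to the ball.

    # A toy proportional controller in the spirit of the gaze heuristic: nudge the running
    # speed so that the perceived elevation angle of the ball stays roughly constant.
    # (Gain and sign are purely illustrative assumptions.)
    def next_speed(speed, gaze_angle, prev_gaze_angle, gain=0.5):
        drift = gaze_angle - prev_gaze_angle
        return max(0.0, speed + gain * drift)

    print(next_speed(3.0, 32.0, 30.0))  # the gaze angle drifted, so the speed gets adjusted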

    Is there a physical characteristic of the living systems that are carrying out such actions that differs from “dumb” matter? And is this observer-independent? Landauer’s principle seems to imply that unless one is in a special setup, at some time one is going to have to do work to erase the memory required to successfully carry out these actions. But the information ratchets and other complex tricks living systems use to maintain themselves based on various types of memory seem to be generally more efficient than ordinary physical processes, e.g. stars. Tell me if you enjoy this:

    https://arxiv.org/abs/1706.05043

  74. David Duffy:

    When a dog successfully catches a thrown ball by applying a heuristic that varies its running speed with the altitude and azimuth of the ball, it is carrying out an algorithm.

    I think that’s exactly the confusion between different notions of computation I was pointing out. For instance, take these Japanese bamboo water fountains (which Google tells me are called ‘shishi-odoshi’) that tip over when they have reached a certain filling level: would you say that they execute an algorithm? If waterlevel = x, then tip?

    I don’t think that’s a useful description of the system—although it’s probably not strictly false, either. But all of your examples are really just more elaborate mechanisms of the same basic kind—one could imagine much more complex ‘circuits’ of water, opening up and closing waterways, moving loads, and so on. One could even imagine such a waterway moving, or picking up something. Does it therefore compute?

    I think that if the fountain thingy didn’t compute, then it doesn’t help to just pile on complexity. It may get easier to talk in terms of computations, ‘logical operations’ and so on, but this is just a metaphor—used in the same sense as when we sometimes ascribe aims and goals to evolution: useful metaphor, but pernicious if taken too literally. If that were all there is to computation, then the notion would merely be useful shorthand, nothing more. I don’t think that’s the case, though.

    I think there’s something very important missing from such a notion of ‘computation’—and that is reference, meaning, symbolism. States of a computer stand for something, they mean something—the electrons flowing hither or thither mean one or zero, mean numbers, letters, words, or pictures. That’s where the computation happens: on the referents of a computer’s symbolic vehicles. The adder adds numbers, which are the referents of the strings of LEDs. The operation it carries out is not merely shorthand for its physical evolution; it’s an interpretation of that evolution. The computation the adder carries out is not lighting LEDs based on inputs, it’s adding numbers.

    In the same sense, computationalism holds that thoughts are computations carried out by the brain. They’re what neuron firings stand for, in the same sense that the adder’s states stand for numbers. That’s where the thesis of computationalism is nontrivial. If we use your notion of ‘computation’, then all we’re left with are just neuron firings. Everything’s just like the tipping bamboo piece: it stands for nothing else; it just fills with water, and tips over. That we can say it executes the program ‘if waterlevel = x, then tip’ simply doesn’t tell us anything about the bamboo—it merely tells us something about how we interpret things.

    The same goes for Landauer’s principle. I have to produce a minimum amount of entropy to flip a switch; but flipping a switch is not erasing memory. Only if I say that in one position, the switch corresponds to a one, and in the other, to a zero, do I erase something; that is, the entropy increase occurs quite independently of whether we consider the system to bear information.

    Computation simply isn’t an objective property of a system, any more than its meaning is an objective property of this text (or do you disagree here, too?). Nobody so far has been able to tell me how to find out what a black box computes, but that failure doesn’t seem to cause much distress either.

  75. “complex ‘circuits’ of water”: I presume you are alluding to https://en.wikipedia.org/wiki/Fluidics

    It doesn’t sound like you read the Kempes article (appeared JRSSA), or any of the papers I have previously referenced. In evolution, we have design without a designer, which is algorithmic.

  76. David Duffy:

    It doesn’t sound like you read the Kempes article (appeared JRSSA), or any of the papers I have previously referenced.

    The problem is that your avoidance of responding to any of the points I made doesn’t really raise my hopes that investing the time into reading these articles would lead to anything worthwhile. Either you misunderstand them, or they labor under the same misunderstandings as you do. In neither case is there much point to reading.

    In evolution, we have design without a designer, which is algorithmic.

    Again, this uses a definition of computation in which it merely means ‘physics’. This completely trivializes computationalism, and has it collapse into a sort of naive identity theory (the problems of which were a large part of the motivation for formulating non-trivial computationalism in the first place).

    Computation involves symbolic representation; a stone falling down a mountain isn’t computing the best path to reach the lowest elevation, it’s just falling. But all your examples of ‘computation’ are just like that stone. A system computes if it performs algorithmic work on symbolic vehicles; the adder adds because it manipulates numbers, not because it switches currents.

  77. Dear Jochen. We return once more to Maxwell’s Demon. The Demon was Maxwell’s answer to John Tyndall’s materialism, which held that life, complexity, and consciousness are inherent in the physical properties of atoms and molecules (there are papers arguing whether Tyndall was a panpsychist). We both agree that the Demon is “trivial” physics. The “paradox” is that the Demon seems to break the thermodynamics laws, which is resolved by Landauer’s insight that symbolic information is physical, and in a Szilard engine and in many other molecular machines has a one-to-one relationship with entropy. The literature that you consider to be full of misunderstanding is sophisticated, and in the case of molecular motors that act as “information ratchets”, it is not a loose metaphor.

  78. David Duffy:

    The “paradox” is that the Demon seems to break the thermodynamics laws, which is resolved by Landauer’s insight that symbolic information is physical,

    There’s nothing about symbolic information in Landauer’s principle. There are correlations, but correlations aren’t symbolic. In order to use a correlation to symbolize something, you need to be aware of the correlation—in other words, you need to be able to represent that correlation to yourself. Which requires symbols and reference. Which then turns the circle once more.

    That one lantern is lit at the Old North Church does not, in itself, tell you anything; it doesn’t symbolize anything. It’s only once you know that ‘one if by land, two if by sea’ that the lantern comes to represent something—but this item of knowledge is itself representational. So correlations can be used to establish representation, but can’t be used to explain representation.

    Nevertheless, we can exploit a correlation to perform work. We can hook up a photodetector to two different recordings, such that if the light of one lantern is detected, the message ‘The English attack by land!’ is played, and if light of two lanterns is detected, the message ‘The English attack by sea!’ is played. This doesn’t mean that the photodetector understands what the lanterns symbolize—reference, symbolic information doesn’t enter the picture.

    It’s absolutely the same with Maxwell’s demon. The information it stores does not mean anything to it; to imbue it with meaning is to commit a category error: it is to mistake the meaning we read into the world for something inherent in the world itself. It’s ultimately to say that one lantern just simply means that the English attack by land—but we know that’s not true. It could just as easily have been the other way around.

    The literature that you consider to be full of misunderstanding is sophisticated, and in the case of molecular motors that act as “information ratchets”, it is not a loose metaphor.

    Where the literature is sophisticated, it will be aware of the distinction between syntactic information—really just the (logarithm of the) number of physically distinguishable states a system can be in—and semantic information. Often enough, however, that distinction is glossed over, and without any harm—after all, most of that literature is not ultimately concerned with how meaning and consciousness come into the world, but can take them for granted. My formal training is in quantum information theory, and I’ve read many variants of the ideas of Maxwell, Landauer, Bennett, Lloyd and so on. This is meaningful and important science, but it’s not ultimately telling us much about how symbols acquire meaning.

    Have you tried, at least for yourself if you’re unwilling to share the result with us, to come up with a way to answer the question of what computation a given physical system performs? ‘Computation’ here taken in the sense in which the adder adds—because after all, that’s the sense in which computationalism is interesting. The motivation is that numbers are abstract objects, as are thoughts; and in computation, we seem to have a physical system embodying abstract objects. So maybe this works as well for thoughts? Maybe neuron firing patterns can connect to thoughts in the same way as electric currents being switched can connect to numbers?

    It’s a great idea, but it doesn’t work; and I think you basically know that, and hence, you’re trying to defend a different notion of computation, in which basically just a physical system performing work is counted as ‘computing’; in which a Japanese bamboo spring computes when to tip over based on the water level. But this ‘computation’ doesn’t have the property of connecting to abstract entities—it’s not about numbers, but about water levels and bamboo. So you can’t fulfill the hope of computationalism with it, since then our brains would be computing neuron firing patterns, or muscle contractions, or something like that. But that’s not what our thoughts are about.

    Hence, if you leave the symbolism in the notion of computation, it’s clear that computation is no objective notion anymore, and can’t ground mind, as it depends on mind; if you take it out, then you simply lose the hoped-for explanatory power. Consequently, the notions of computation and information ought to be crossed out from any attempt at finding an ultimate explanation of how the mind works.

  79. “syntax..and..semantics”: If you are interested in a formal semantics for Maxwell’s Demon, then see Abramsky and Horsman [2015]:
    http://www.cs.ox.ac.uk/qpl2015/preproceedings/52.pdf

    I have to admit, my simple-minded thoughts about semantics tend towards teleosemantics à la Millikan etc. I see a certain resonance between the Demon and the case of bee languages, where the syntax and semantics also seems pretty straightforward – all the symbols are well grounded in the basic thermodynamics of flight and metabolism, though I would agree that the bees don’t know that. But they do engage in what Friston calls “active inference” (e.g. winter hive site selection), which I would strongly argue is not the same as your passive examples driven completely by external inputs. They spend energy on computation to get energy for metabolic needs. So this is one way to define what computation is being done.

    If your main area is quantum information theory, then you would probably be aware of the resource theory type approach, which has also been applied to classical thermodynamics
    https://arxiv.org/pdf/1309.6586.pdf
    “it is critical to disentangle those aspects of the theory that are due to considerations of energetics and those that are due to considerations of information theory … we are here studying the particular type of thermal nonequilibrium corresponding to purely informational nonequilibrium”.

    Given that resource theory treats entropy as a resource to be used by agents to do work (a store of value ;)), I think again this points to links with the computations that biological systems do.

    How does this help with the reflective, intentional mind you think is not being captured? Just the usual way computationalists have always reasoned – here are all these intermediate states between dumb matter and the stuff that goes on in my mind, which I think relies on the matter between my ears. And I evolved from such organisms – there must be some relationship. I know the architecture of my brain is not greatly different from that of other apes – it is just bigger and burns more calories, so maybe it is not that difficult. Also, I _can_ understand the things bees and chimpanzees do as computation – “thoughts without a thinker”.

  80. David


    “If we think of computing as a concrete physical (temporal) process”

    why on earth would we think that? It started out as a cultural artefact and remains one. You can’t simply stuff it into another ontological category to make your arguments work. That line of argument – started years ago by Hilary Putnam – was, is and remains nonsense.

    A model of physics is not reality. Metrics are not real; they are relative to the person doing the measuring. A length of wood of 1 m has no meaning to the wood, only to the measurer.

    Yet all physics relations are metrics relations – mathematical, not semantic. Observer relative. But the development of those metrics in relation to each other is an internal aspect of physical systems, and finding out those internal relations is what physics does. None of that affects the fact that models of physics are not the same – ontologically speaking – as the stuff they seek to model.

    “Model” is a poor word. “Transpose to usable metrics” is a better way of describing what physics does and perhaps, to the non-physicist, gets them away from this illusion that physics is more real than reality.

    But computational systems are far, far flimsier than physics models. There is nothing that is being modelled and no independent aspect of any physical system in which the relationship of metrics is being investigated. Arbitrary physical metrics (such as voltage levels) are arbitrarily assigned to arbitrary numbers. If it’s meaningful to describe that as physical, you might as well describe the Bible as a physical entity (which it isn’t).

    JBD

  81. Hi John –

    “It started out as a cultural artefact and remains one…”. I’m sorry, but the “informational turn” in quantum mechanics – e.g. the recent papers on the spectral gap – is about what real, physical systems do, and not about what people might agree to differ on.

    As to your more general doubts about the relationship between computations and the world, I thought this was pretty good:

    https://www.cse.buffalo.edu//~rapaport/Papers/rapaport4IACAP.pdf

    “Computation in the wild must allow for input from the external world (including oracles). And that is where our thread re-appears: Computation must interact with the world. A computer without physical transducers that couple it to the environment would be solipsistic.”

  82. David Duffy:

    I see a certain resonance between the Demon and the case of bee languages, where the syntax and semantics also seems pretty straightforward

    I don’t think there’s an issue of ‘semantics’ here at all. Say I have a wind-up toy, a little robot, such that I can set the number of steps it takes by the number of turns I give the key, and also can tell it to rotate a fixed amount of degrees by turning another key. Thus, I can make it follow paths such as ‘three steps forward, turn 90°, four steps forward, turn 45°, two steps forward…’ and so on.
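    Just to fix ideas, the whole ‘program’ of such a toy amounts to no more than this (a throwaway Python sketch of mine; the details obviously carry no weight):

    # A throwaway sketch of the wind-up robot: each key-turn merely adds a constraint on how
    # its state (position, heading) evolves. Nothing in here 'means' anything to the robot.
    import math

    def run(commands, x=0.0, y=0.0, heading=0.0):
        for steps, turn_degrees in commands:
            x += steps * math.cos(math.radians(heading))
            y += steps * math.sin(math.radians(heading))
            heading += turn_degrees
        return x, y, heading

    # 'three steps forward, turn 90 degrees, four steps forward, turn 45 degrees, two steps forward'
    print(run([(3, 90), (4, 45), (2, 0)]))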

    But does that mean my turning of the key has meaning to the robot? I really doubt it does. After all, all I really do is set up a set of constraints in the state-space of the robot—in much the same way that the force of gravity provides a constraint on the evolution of a rock. Does gravity have meaning to the rock? Does it tell it to accelerate downwards at 9.81 m/s²? I think any theory that requires something like this is good for the garbage heap only. But just by adding on constraints, I somehow suddenly have meaning springing up?

    Now assume that I replace the key-turning mechanism with something more sophisticated, like a photodetector that registers pulses of light (and then gives an ‘internal key’ the requisite number of turns), or colored squares on paper. Do those pulses of light suddenly mean something to my toy robot? Physically, nothing has changed: there’s no appreciable difference between turning the key by hand or ‘by light’. And if we don’t want to get caught in the same absurdity as before, we must maintain that the pulses of light still are just as meaningless as the key-turnings were. The light-pulses make the robot do something, but so does gravity make the rock do something; there’s no difference here. The same goes for the bee’s dance: effectively, it just turns the other bee’s keys in order to make them follow a certain path.

    The pernicious thing is that it’s so easy for us to interpret meaning into these symbols. Interpreting things as being meaningful is like air to us: so ubiquitous that we hardly notice it. But the fact that we interpret things as meaningful that clearly aren’t (note again the Runamo inscription) should give us pause whenever we notice meaning out there in the world.

    This connects to the issue of Shannon ‘information’ and Maxwell demons. The basic thing is that Shannon information quantifies how much information we could maximally send across a given channel. That something bears nonzero Shannon information (or one of its various derivatives, mutual or conditional information, etc.) does not mean that it has any meaningful content; it merely means that an entity capable of using symbols (such as a human being) could use this thing in order to communicate information to another such entity, provided both share a code.

    Information in this potential sense can be used to make entities do stuff in the same way as the key-turnings above, without having to get into semantics at all (and where indeed all ‘semantics’ only comes in by way of us interpreting something). A ‘register’ capable of storing n bits enables you to differentiate between 2^n possibilities, and thus to carry out one of, in principle, 2^n different actions in response. The colored squares for the toy robot are a ‘register’ in this sense: based on the pattern of colored squares, the robot will carry out certain actions. We could compute the Shannon entropy of this pattern, and consequently the ‘information’ contained in the register.
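    To be explicit about what that purely syntactic quantity is (a quick sketch; the colour names are arbitrary placeholders of mine):

    # The 'information' of the coloured-squares register in the purely syntactic sense:
    # the Shannon entropy of the pattern, which says nothing about what (if anything) it means.
    from collections import Counter
    from math import log2

    def shannon_entropy(pattern):
        counts = Counter(pattern)
        total = len(pattern)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    print(shannon_entropy(['red', 'blue', 'red', 'green', 'blue', 'red']))  # ~1.46 bits per square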

    But again, as we’ve seen above, this doesn’t entail that the colored squares mean anything to the robot; far from it, if it did, we’d also be compelled to consider gravity meaning something to rocks, and so on. Every physical interaction then would be ‘meaningful’, and the notion of meaning be utterly trivial.

    It’s the same with Maxwell demons, Landauer’s principle and the like. Landauer just tells us that we have to use up energy in order to eliminate all of the colored squares on the paper without any remaining trace. It tells us nothing about actual information, merely about the potential to carry information: it’s to remove this potential that we need to use up energy and increase entropy. Which is only sensible, since any physical difference is a potential carrier of information (whenever system A is different from system B along at least one two-valued property, I can transport a bit of information by either handing you A or B); this yields an entropy increase.
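    And the number Landauer attaches to that erasure is both tiny and, to repeat the point, entirely indifferent to interpretation (a sketch; room temperature and the kilobyte are my assumptions):

    # Landauer's bound: erasing one bit dissipates at least k_B * T * ln 2 of energy,
    # regardless of what, if anything, the bit is taken to mean.
    from math import log

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0               # assumed room temperature, K
    bits_erased = 8 * 1024  # e.g. wiping a kilobyte's worth of coloured squares

    minimum_energy = bits_erased * k_B * T * log(2)
    print(f"{minimum_energy:.2e} J")  # on the order of 2e-17 J, whatever the squares 'meant'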

    In a sense, information theory should really be called ‘information potential theory’; and such an information potential forms a resource that can be exchanged for other resources (such as the ability to perform useful work). However, this doesn’t have anything to do with using a system that has a certain information potential to tell you what the weather tomorrow will be like; for this, you need to get to semantics, which isn’t something that Landauer, Maxwell and resource theories do.

  83. David


    I’m sorry, but the “informational turn” in quantum mechanics – e.g. the recent papers on the spectral gap – is about what real, physical systems do, and not about what people might agree to differ on.

    Perhaps you can tell me – in your own clear, meaningful words, without links to articles by postgrad enthusiasts – precisely why that article on quantum physics has any bearing whatsoever on the argument that computation is not physical.


    “Computation in the wild must allow for input from the external world (including oracles). And that is where our thread re-appears: Computation must interact with the world. A computer without physical transducers that couple it to the environment would be solipsistic.”

    Let’s take these sentences one by one.

    Computation in the wild must allow for input from the external world

    Simply not true, and in fact impossible. Computation can be done quite easily in the head without any need for input from the “external world” (by which I presume he means physical objects).
    A computation is a mathematical artefact and requires a non-physical input – a symbol – that may or may not have been arbitrarily linked to anything physical (in a totally indeterministic manner) by an observer. But as a standalone statement, clearly totally false. Then again, I suppose the author doesn’t claim to be an expert, and the article reads like a new subject for him.

    And that is where our thread re-appears: Computation must interact with the world.

    Again, not true. Not true in any sense: computation could be said to be capable of interacting with the world, as long as it includes the (non-physical) rules of association between metric and symbol (i.e. ‘1’ is +1 V in a silicon array, or ‘0’ is 1 newton per square metre in a computer made of steam). But ‘must’ is nonsense: ridiculous.

    A computer without physical transducers that couple it to the environment would be solipsistic.

    Computers are just their CPUs. It’s common linguistic shorthand to include the keyboard and screen etc., but we’re not in the land of folk-language imprecision here. An electronic computer – as opposed to one made of steam – wouldn’t have any ‘transducers’ at all. CPUs read only from memory locations and write only to them. So the author either doesn’t understand computers or has decided to redefine the external devices as being part of the computer too. It could be hubris, but my bet is it’s a belief that screens and keyboards are parts of a computer, which I can say they most definitely are not.

    JBD

  84. Hi John.

    In reverse, you regularly use computers that have no inputs or outputs? That can’t be programmed? That really would be a solipsistic machine 😉

    It would be worth your reading the entire paper by Rapaport – I found it very thoughtful on computational semantics. My free extrapolation of his mode of thinking to the human mind is that all our inputs come via perception (à la the phenomenal–noumenal distinction), so much (but not all) of our semantics is downstream from here – that is, it is actually syntactic in nature, built up to match the percepts and nothing else. The rest is external/causal in nature. The former can replace the latter in any given instance (simulating another aspect of the world), but the latter is the driver for the idea that the computer has to have connections to the world.

    As to quantum computing and quantum information theory, my simple-minded understanding is that the physicists involved think they are doing physics and publish in physics journals. And when they say stuff like the spectral gap is undecidable, the halting problem – a mathematical entity – is also being treated as a characteristic of certain physical systems.

  85. David


    “In reverse, you regularly use computers that have no inputs or outputs? That can’t be programmed? ”

    A computer is (in folk terms) a CPU plus peripheral devices (keyboards, microphones etc), which the article somewhat quaintly refers to as ‘transducers’. The role of peripheral devices is to alter the state of memory that the CPU uses and thus the computational flow of control.

    However, in this conversation we are not using folk terms but being precise, in which case a computer is strictly just the CPU plus associated memory. This excludes the ‘transducers’ of the article. CPUs reference a fixed body of memory, and that’s it. There is no need for peripheral devices or “transducers”. They aren’t involved. Indeed, the goal of performance improvement in computer programs is to avoid the CPU waiting for a peripheral device (usually a hard disk) to transmit information. So that’s the reason I take issue with the claim that no computation is possible without ‘transducers’.

    As far as the rest is concerned, I think you need to go back to basics.

    A computer is a logical machine. Not a physical one. The first thing chip designers do is sit down and work out what logical steps they want the chip to perform;

    e.g. they may define operation “1” as:

    i) load register 1 with the contents of the stack pointer
    ii) load register 2 with the contents of the stack pointer
    iii) add the contents of register 1 to register 2 and place the result in register 3

    etc

    Everything that the chip must do is designed first as a logical engine. Once that is decided, the physical implementation can begin. But before physical implementation even starts, the logical engine is complete. Anybody – you, me, the engineers – can run an 8086 program by taking all the 0s and 1s resident in a computer program and applying the chipset rules to them. We can use a piece of paper and a pencil. We can amend the memory contents by using an eraser, swapping a ‘1’ to a ‘0’ and vice versa when necessary. We are running a program.
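    If you want to see how little is involved, here is the sort of thing I mean – a toy ‘chipset’ (sketched in Python, with made-up opcodes of my own) whose rules you could apply with pencil and paper just as easily as any silicon could:

    # A toy 'chipset': three made-up opcodes over a handful of registers and a memory array.
    # Applying these rules with pencil and paper is exactly the same logical machine as
    # running them in silicon; the physical substrate is beside the point.
    def run(program, memory):
        registers = {"r1": 0, "r2": 0, "r3": 0}
        for op, *args in program:
            if op == "LOAD":        # LOAD rX, addr : copy memory[addr] into register rX
                reg, addr = args
                registers[reg] = memory[addr]
            elif op == "ADD":       # ADD rX, rY, rZ : rZ = rX + rY
                rx, ry, rz = args
                registers[rz] = registers[rx] + registers[ry]
            elif op == "STORE":     # STORE rX, addr : copy register rX into memory[addr]
                reg, addr = args
                memory[addr] = registers[reg]
        return memory

    memory = [7, 35, 0]
    program = [("LOAD", "r1", 0), ("LOAD", "r2", 1), ("ADD", "r1", "r2", "r3"), ("STORE", "r3", 2)]
    print(run(program, memory))  # [7, 35, 42]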

    In fact, if you were a person of tremendous memory, you wouldn’t even need to write it down. You could just keep, in your own memory, the contents of memory and the rules of the chipset. You could do it all in your head. You are still running the program. No transducers, no physical input, because you have decided to compute without using a physical engine to help you.

    If you don’t believe me, I’m afraid that means you just don’t understand what computation is and what computers are, and you need to learn about it.

    Having said that, it’s not very helpful to run an Intel 8086 computer program in your head – that’s what physical computers are for. In fact, a better analogy is long division and long multiplication – two computational methods that most people use in their heads, without reference to ‘transducers’.


    And when they say stuff like the spectral gap is undecidable

    I say “beware the lure of quantum mechanics as an explanation for what you don’t know”.

    The spectral gap finding has zero bearing – none whatsoever – on the status of computation as non-physical. It states that mathematical physics – as a syntactical, symbol-based, non-physical activity – can’t solve certain problems as computational problems. End of. That means we can’t do what physics usually does, which is take the scalar products of the non-physical activity that is mathematical physics, and interpret it through the semantic lens of mass, space and time to have real meaning about the actual stuff of the universe.

    It is an aspect of mathematical physics with no intersection with the notion that “computers require transducers”. No bearing. Not even tangentially connected.

    JBD

  86. Jochen,

    I believe I can solve consciousness to a precise technical level.

    Just bear with me – for now I’ll provide a brief sketch of my proposed technical solution, showing how computation can give rise to semantics (meaning) and thus consciousness.

    Start with classical logic. The roots of computation are there, in deductive inference, set theory and category theory. But we need to do three things to transform classical logic into computational models that deal with semantics.

    (1) We need a way to represent vagueness and uncertainty. So we need to extend bivalent (two-valued) logic into many-valued logic. Fuzzy logic is a canonical example of this. Instead of just two truth values (T/F), fuzzy logic allows for degrees of truth, quantifying the degree of precision in a concept or model.

    (2) We need a way to represent discrete dynamical systems. Our logic needs to have a built-in notion of time. Temporal logic (a form of modal logic) does this.

    When we combine (1) and (2), a new type of logic springs forth, and that’s ‘Fuzzy Temporal Logic’. So now consider my new model of computation – ‘Fuzzy Temporal Logic’.
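    To give a flavour of the combination (a very rough sketch of mine in Python: fuzzy AND/OR as min/max, and ‘eventually’/‘always’ as max/min over a discrete trace; the real thing would need far more care):

    # A rough sketch of 'fuzzy temporal' operators over a discrete trace of fuzzy truth values.
    def f_and(a, b): return min(a, b)   # fuzzy conjunction
    def f_or(a, b):  return max(a, b)   # fuzzy disjunction
    def f_not(a):    return 1.0 - a     # fuzzy negation

    def eventually(trace): return max(trace)  # degree to which the property ever holds
    def always(trace):     return min(trace)  # degree to which it holds throughout

    # A trace: how true 'the system is in state S' is at successive time steps.
    trace = [0.1, 0.4, 0.9, 0.6]
    print(eventually(trace), always(trace), f_and(eventually(trace), f_not(always(trace))))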

    Fuzzy Temporal Logic still doesn’t explain consciousness or semantics by itself, but I want to suggest it’s the right tool to begin to do this. And now let’s apply a 3rd ingredient: a model of the flow of time in a dynamical system (the time evolution of a dynamical system).

    If we apply Fuzzy Temporal Logic to model a single (deterministic) time-evolution we’re already some of the way towards semantics! Now what happens when we extend our model further by considering multiple possible (probabilistic) time-evolutions?

    Fuzzy Temporal Logic >>>>> World Models (Semantics and Consciousness)!

    where ‘>>>>>’ represents modeling of possible time-evolutions of dynamical systems.

    Finally introduce self-reflection. When the dynamical system being modeled is a mind, you have a mind modeling itself, and now computation has indeed become imbued with semantic meaning!

    On the left we have computation, and on the right, semantics. I’ve done it! I’ve shown how to inject semantics into computation.

    I’ve solved consciousness! 😀

  87. I just want to summarize the proposed steps I gave in the previous post to close the ontological gap between computation and consciousness, this time linking to Wikipedia to give you clear technical definitions.

    (1) I started with classical logic
    https://www.wikiwand.com/en/Classical_logic

    I said extend classical logic into many-valued logic
    https://www.wikiwand.com/en/Many-valued_logic

    Fuzzy logic was the result:
    https://www.wikiwand.com/en/Fuzzy_logic

    (2) I said find a logic that has a built-in notion of time, a ‘modal logic’
    https://www.wikiwand.com/en/Modal_logic

    Temporal logic was the result:
    https://www.wikiwand.com/en/Temporal_logic

    I said combine the logics in (1) and (2).
    The result is a novel model of computation ‘Fuzzy Temporal Logic’

    (3) I proposed modeling the mind as a ‘dynamical system’, a pattern of events that evolves in time:
    https://www.wikiwand.com/en/Dynamical_system

    I said use my novel form of logic (‘Fuzzy Temporal Logic’) to do this.

    I said introduce self-reflection. Let the mind (a dynamical system x) model its own time-evolution, to generate a symbolic model of the dynamical system x. This symbolic model is *itself* also a dynamical system, y.

    https://www.wikiwand.com/en/Self-reference

    I said extend the model of time-evolution to deal with counter-factuals (rather than just trying to predict one future, look at many possible futures):

    https://www.wikiwand.com/en/Counterfactual_thinking

    Computation has transformed into consciousness (computation has been imbued with semantics) and the ontological gap has closed!

  88. mjgeddes:

    I believe I can solve consciousness to a precise technical level.

    While I admire your chutzpah, I’m afraid I don’t really share your optimism here. Just some quick points:

    1. Starting with logic is much too high-level: we already need to have interpretational, representational faculties in place before anything can sensibly be said to ‘use’ logic. The elements of logics—truth values, propositions, and so on—are not elements of the natural world; they only enter if such elements are interpreted as logical functions.

    2. Many-valued logics don’t actually give you anything fundamentally new: a computer built on the basis of a many-valued logic will be able to compute all the same functions a binary computer does, and none more. The same goes even in the case of analogue (‘fuzzy’) computers (however, if you permit computations to depend on the exact, infinitely precise real values of inputs and outputs, we enter the territory of hypercomputation). Consequently, a computer built on ‘fuzzy temporal logic’ (if one can actually consistently define that logic) won’t give you anything you can’t do with a computer running on binary logic. (A small sketch at the end of this comment illustrates the point.)

    3. The semantics of a logic are given by a model of that logic. Thanks to the Löwenheim-Skolem theorem, we know that each logic of the type you describe has an infinite number of models; indeed, it doesn’t even have a unique model up to isomorphism. Hence, your logic can’t fix its own semantics: for every interpretation you claim is ‘the one true one’, I can show you infinitely many that work just as well.

    (But I’m not sure that this is really what you have in mind—I have to confess, I can’t make heads nor tails of the sentence “If we apply Fuzzy Temporal Logic to model a single (deterministic) time-evolution we’re already some of the way towards semantics!”. Sure, if we apply logic to model something—anything—we’ve given it a semantics, as we have interpreted its formulae (otherwise, they’d just be strings of symbols). But that sort of interpretation is what we seek to explain, not what we can make use of in the explanation.)

    4. Self-reflection is a notoriously difficult notion to formalize, and most systems permitted to talk about themselves simply collapse into inconsistency. That’s why we talk about object- and metalanguage, about types, about sets and classes, and so on. Otherwise, in naive set theory, we fall prey to Russell’s paradox of ‘the set that do not contain themselves’. Showing that your system avoids this is probably going to be quite hard; and even then, you don’t just automatically get ‘a mind modeling itself’.

    5. Furthermore, all of this really only relates to intentionality—to the question of how thoughts come to be about things in the world, roughly. The more difficult problem of consciousness (the hard one, that is), according to many philosophers, is not merely how to get meaning, but how to get subjectivity—i.e. how it comes about that there is something it is like to be a conscious subject. You’d have to show how your ‘fuzzy temporal logic’ addresses this (which of course no logic does), in order to ‘solve consciousness’. I.e., show how you can use it to show Mary what it’s like to see red—then we may have the start of something.
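    The sketch promised under point 2 (a trivial Python illustration; the three-valued connective and the two-bit encoding are just the simplest example I could think of): any finite many-valued connective can be tabulated and computed over ordinary binary encodings.

    # A three-valued (Kleene-style) AND: 0 = false, 1 = unknown, 2 = true.
    def kleene_and(a, b):
        return min(a, b)

    # The same connective computed entirely over two-bit binary encodings -- i.e. by an
    # ordinary binary computer -- so the many-valued logic buys no new functions.
    def encode(v):    return (v >> 1 & 1, v & 1)
    def decode(bits): return bits[0] * 2 + bits[1]

    def kleene_and_binary(bits_a, bits_b):
        return encode(min(decode(bits_a), decode(bits_b)))

    for a in (0, 1, 2):
        for b in (0, 1, 2):
            assert decode(kleene_and_binary(encode(a), encode(b))) == kleene_and(a, b)
    print("three-valued AND reproduced exactly over two-bit binary encodings")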

  89. Russell’s paradox of ‘the set that do not contain themselves’.

    That should’ve been ‘the set of all sets that do not contain themselves’.

  90. 1. That’s the issue under debate (computationalism vs. non-computationalism as regards consciousness). As a computationalist, I think logic is prior to consciousness; as a non-computationalist, you no doubt think the reverse. The question is who has the better explanatory model.

    2. It’s a question of an explanatory framework. My motivation for ‘Fuzzy Temporal Logic’ is that it provides a way to define reflection. Fuzzy truth values allow one to define reflection as the reduction of vagueness. Temporal logic also allows one to deal with counterfactuals (i.e. what could have happened), which also seems an important part of reflection.

    3. I was talking about using fuzzy logic to model the state changes in goal-directed dynamical systems (for example, the brain).

    4. See (2) and (3). My idea was to use fuzzy logic to model how a mind reflects on itself. The brain’s a dynamical system, and I’m suggesting that self-reflection is a symbolic model of the state-changes in that system (time perception). I thus want to use fuzzy temporal logic to model this temporal perception (the symbolic representation of the time-evolution of the dynamical system that is the brain).

    5. I think intentionality (in the most general sense) and subjective experience will turn out to be the same thing. See (1)-(4).

    Thanks for the comments!

  91. Dear John.

    “A computer is a logical machine. Not a physical one.” Let’s pretend everyone reading this knows there are such things as abstract machines in computer science, and that Turing was referring to women who in the US Civil Service were paid from $1440 p.a. (Junior Computer) through to $3200 (Chief Computer). I presume you would argue that abstract machines do not actually exist ontologically speaking, or have an existence purely by virtue of being concepts in human minds?

    I, on the other hand, might define them as a complex pattern for the organization of material processes that will reliably give the same outputs for given inputs. Then halting programs are physical processes that stop, whether carried out in my head, or by a Berry & Boudol chemical computer having the same abstract structure, and whether or not there are conscious observers to see if they did or didn’t halt. Furthermore, that view would extend to other concepts like computational complexity, which after all is about the practicalities of time and space needed to carry out a given calculation.

  92. Dear mjgeddes.

    http://www.mmdr.it/defaultEN.asp

    is for a book The Mathematics of Models of Reference.

    “The [Kleene (Strong)] Recursion Theorems, applied to our recursive Models of Reference, guarantee that we can define partial MsoR which are recursively self-referential, for they include their own code in their recursive definition. These are simply classical fixed-point definitions. Since numeric codes are perceptions taken as inputs by (meta-)models of reference, which can also emulate the thought procedures performed by the encoded MsoR, recursive self-referential MsoR can perceive themselves in a precise mathematical fashion, and represent the computational procedure in which they consist within themselves.”

    I really don’t know how to assess this (I haven’t read the book) – it is apparently a reversible cellular automaton model.
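    At least the flavour of the fixed-point construction the book invokes is easy to demonstrate: the standard concrete instance of Kleene’s recursion theorem is a program that contains and reproduces its own code. A throwaway example (in Python, just by way of illustration):

    # A minimal self-reproducing program (a quine): the usual concrete illustration of the
    # recursion-theorem idea of a program that has access to its own code.
    s = 's = %r\nprint(s %% s)'
    print(s % s)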

  93. The book looks promising, David. Cellular automata are a functional model of computation, and I think functional models are exactly what’s needed for getting a handle on reflection (as opposed to concurrent and sequential models).

    There’s a close connection between functional models of computation (e.g., lambda calculus), functional programming (e.g., logic programming), and non-classical logic (e.g., modal and many-valued logics). My view is that all three should be grouped together as ‘the logic of reflection’. See my wiki-book:

    https://en.wikipedia.org/wiki/User:Zarzuelazen/Books/Reality_Theory:_Computational%26Non-Classical_Logic

  94. David


    “I presume you would argue that abstract machines do not actually exist ontologically speaking, or have an existence purely by virtue of being concepts in human minds?”

    What don’t you understand about the long division you learnt at school not having the same ontology as a motorcycle? Is it that complicated?

    If you accept that long division – a simple computational method for dividing two numbers – isn’t physical, why do you insist that a more generic computational method – a Turing machine – IS physical? Or has the same ontology as a motorcycle, or a volcano?

    They have an existence as cultural artefacts – like the word “David”, or the idea of “social networking”.

    “I, on the other hand, might define them as a complex pattern for the organization of material processes that will reliably give the same outputs for given inputs. Then halting programs are physical processes that stop, whether carried out in my head, or by a Berry & Boudol chemical computer having the same abstract structure, and whether or not there are conscious observers to see if they did or didn’t halt.”

    What on earth has “halting” got to do with it? What’s the relevance? Was it part of a quote you picked up?

    What you have just written is self-evidently complete nonsense. If it’s meaningful to say that when I “run” a program in my head the object of my thoughts becomes physical – adopts a physical ontology – then when I think “social networking”, “social networking” becomes physical, or “David” becomes physical, simply because I use a physical mechanism – my brain – to think about it. Total confusion between the brain as a physical mechanism and the ontological nature of the object of thoughts. Utter epistemological hocus pocus.

    “Furthermore, that view would extend to other concepts like computational complexity, which after all is about the practicalities of time and space needed to carry out a given calculation.”

    Computational complexity. Another computationalist fantasy idea. There is no such thing.

    JBD
