There’s a fundamental ontological difference between people and programs which means that uploading a mind into a machine is quite impossible.

I thought I’d get my view in first (hey, it’s my blog), but I was inspired to do so by Beth Elderkin’s compilation of expert views in Gizmodo, inspired in turn by Netflix’s series Altered Carbon. The question of uploading is often discussed in terms of a hypothetical Star Trek style scanner and the puzzling thought experiments it enables. What if instead of producing a duplicate of me far away, the scanner produced two duplicates? What if my original body was not destroyed – which is me? But let’s cut to the chase; digital data and a real person belong to different ontological realms. Digital data is a set of numbers, and so has a kind of eternal Platonic essence. A person is a messy entity bound into time and space. The former subsist, the latter exist; you cannot turn one into the other, any more than an integer can become a biscuit and get eaten.

Or look at it this way; a digitisation is a description. Descriptions, however good, do not contain the thing described (which is why the science of colour vision does not contain the colour red, as Mary found in the famous thought experiment).

OK, well, that’s that, see you next time… Oh, sorry, yes, the experts…

Actually there are many good points in the expert views. Susan Schneider makes three main ones. First, we don’t know what features of a human brain are essential, so we cannot be sure we are reproducing them; quantum physics imposes some limits on how precisely we can copy the brain anyway. Second, the person left behind by a non-destructive scanner surely is still you, so a destructive scan amounts to death. Third, we don’t know whether AI consciousness is possible at all. So no dice.

Anders Sandberg makes the philosophical point that it’s debatable whether a scanner transfers identity. He tends to agree with Parfit that there is no truth of the matter about it. He also makes the technical point that scanning a brain in sufficient detail is an impossibly vast and challenging task, well beyond current technology at least. While a digital insert controlling a biological body seems feasible in theory, reshaping a biological brain is probably out of the question. He goes on to consider ethical objections to uploading, which don’t convince him.

Randal Koene thinks uploading probably is possible. Human consciousness, according to the evidence, arises from brain processes; if we reproduce the processes, we reproduce the mind. The way forward may be through brain prostheses that replace damaged sections of brain, which might lead ultimately to a full replacement. He thinks we must pursue the possibility of uploading in order to escape from the ecological niche in which we may otherwise be trapped (I think humans have other potential ways around that problem).

Miguel A. L. Nicolelis dismisses the idea. Our minds are not digital at all, he says, and depend on information embedded in the brain tissue that cannot be extracted by digital means.

I’m basically with Nicolelis, I fear.

 

69 Comments

  1. 1. David Duffy says:

    A hurricane isn’t digital, but we like to think we can simulate it. Unlike the hurricane, where the winds are ontologically different from the simulated winds, minds secrete thoughts and words, which seem a lot closer in nature to the world of computation to many of us.

  2. 2. Lloyd says:

    As far as the hardware-software distinction goes, I see that as no big deal. A 3D scanner without a program is a piece of junk. The gist of the creation is surely in the program. And then it becomes real. What’s the problem?
    A scanner, if it could be built, would definitely make two you’s. Whether or not you kill one of them doesn’t really matter (except that maybe it’s murder). Assuming the copy is perfect, both are really you. As of the instant of copying, the two go separate ways.
    As far as creating the color red goes, I suspect there’s some random pattern generation involved there. After all, nobody else can check what you see. It might as well be random.
    I guess I’m with Randal Koene, at least most of the way.

  3. 3. Lloyd says:

    I meant a 3D printer.

  4. 4. Sergey says:

    There’s a theoretical limit on the amount of information contained in a volume of space. So, hypothetically, if we can extract a description with enough fidelity, what information contained in the brain tissue would remain missing?

  5. 5. SelfAwarePatterns says:

    “There’s a fundamental ontological difference between people and programs which means that uploading a mind into a machine is quite impossible.”

    The question I always have with this stance is, what is that difference? The science seems to point to the mind being a physical system that operates according to the laws of physics. What about those physics would make them impossible to replicate somewhere else?

    You do give an answer to this question:
    “a digitisation is a description. Descriptions, however good, do not contain the thing described”

    Consider that the description is of a physical system. Also consider that the description itself is a physical system. If the description is complete and granular enough, eventually the physics of the description become isomorphic with the physics of the original system. In other words, eventually the description becomes another physical system that is functionally equivalent to the original, at least in the sense that it has the same effects on its environment that the original had on its environment.

    Is the description then conscious? That depends on your stance on philosophical zombies, but my thinking is that consciousness has a functional metaprocessing role to play, and that reproducing the original system’s effects on its environment requires reproducing that functional role.

    I do have some sympathy with the argument that reproducing the processing of a brain using the current architecture of commercial computers could require more computing power than might ever exist. But people are working on new architectures, such as physical neural networks. Ultimately, once it’s possible in principle, making it possible in practice becomes an engineering problem.

  6. 6. Peter says:

    David – yes, I see what you mean. Thoughts seem more like the kind of thing you could build out of computation than rainstorms. You can generate strings of symbols computationally that look like thoughts. But in itself computation is meaningless, whereas thoughts are all meaning.

  7. 7. Peter says:

    Lloyd – seems odd to say a copy is the original; but the real point is whether the data is me.

  8. 8. Peter says:

    Sergey – my point is really that having all the information does not give you the thing the information is about.

  9. 9. Peter says:

    SAP – but I deny that a description is a physical system! What mass and position does a description have? We mustn’t identify data with the various substrates in which it can be encoded.

  10. 10. Peter says:

    Now the comments look as if I scanned and copied myself (sigh)…

  11. 11. Christophe Menant says:

    I’m on your side Peter (with Nicolelis), but coming at it from a different angle.
    Nicolelis says that it depends on information embedded in the brain tissue that cannot be extracted by digital means.
    Fine, but what about the nature of ‘information embedded in the brain tissue’?
    Information in biological entities does not exist by itself. The information in our brains has reasons for being there. It has meanings related to our metabolism, to our emotions, to our thoughts, to our… And transferring this information from our minds to computers is not only about transforming neuron polarisations into bit strings. It is also (and mostly) about transferring the meanings of this information into information meaningful for computers. This brings us to familiar subjects like the Turing Test, which can be analyzed by a modeling of meaning generation applicable to both humans and artificial agents. The outcome is that the TT is not possible today, as we cannot transfer an organic meaning to an artificial agent (see https://philpapers.org/rec/MENTTC-2). Such an approach also allows us to highlight associated ethical concerns, and positions artificial life as an important step on the road to strong AI.
    To complement Nicolelis’s wording I would say that it depends on information embedded in the brain tissue, the meaning of which cannot be transferred to computers.

  12. 12. Paul Topping says:

    Copying a brain at the atomic level is very hard so we can dismiss that for now. Copying just the parts that are important for thought might be possible if we knew how the brain works. We don’t, so it is not possible currently to resolve the posed question. The “brain is analog but computers are digital” argument doesn’t help as we don’t know the accuracy required for an adequate brain copy (or mind copy if you prefer) because, again, we don’t know how the brain works.

  13. 13. Jochen says:

    Selfawarepatterns:

    Consider that the description is of a physical system. Also consider that the description itself is a physical system. If the description is complete and granular enough, eventually the physics of the description become isomorphic with the physics of the original system.

    But this argument is ultimately circular: you get two systems that have the same description, but from there, it only follows that both would likewise be conscious if having the same description suffices for consciousness—but that’s the question that’s at issue.

    My way out, of course, is that there’s a noncomputable—and hence, nondescribable—element to conscious experience. So a computational equivalence won’t be sufficient for a claim of identity in a perfectly ordinary way: there are properties that are missed by every computational model, and consequently, there’s room to differ along those properties. And what differs in its properties must differ absolutely.

    Putting this another way around, there’s really no computation without there being a mind interpreting a given physical system as computing something. That point is just the same as saying that ‘dog’ does not inherently mean dog without a mind interpreting it that way (after all, another mind could interpret it as meaning cat); yet it’s somehow often missed.

  14. 14. Paul Topping says:

    What proof do you have that human consciousness is not computable? Since we do not know how the brain works, I know we also have no proof of this. Of course, it might be true but the history of science is littered with such woo. Some fraction of all scientists respond to their own inability to solve the mysteries of the universe with “Perhaps we’ll never know!”

  15. 15. araybold says:

    This sort of argument against the possibility of something requires more than saying “we call A an X and B a Y and they are different” – one should show that the mutual exclusion is absolute and sound, and not just, for example, a contingency of history or a consequence of our incomplete knowledge. For example, it was once held that organic and inorganic chemistry were distinct in this way, until it was demonstrated to be otherwise, and, more recently, the ontological gulf between waves and particles has succumbed to increasing knowledge.

  16. 16. SelfAwarePatterns says:

    Peter #9,
    Whether or not the description has any platonic existence, to be useful, doesn’t it have to be instantiated in at least one substrate? And wouldn’t the implementation of the description in that substrate have mass and position?

  17. 17. SelfAwarePatterns says:

    Jochen #13,
    I agree that there’s a non-describable aspect to conscious experience, but that’s only because language is ultimately about conscious experience. Eventually, we always get to a language element that can’t be broken down further, except to point at what is being referred to. For example, the concept of red can’t really be described to a person born blind. But the ineffability of redness shouldn’t be taken as proof of anything, except that language, and all elements of symbolic thought, are ultimately placeholders for primal experience.

    (It’s interesting to consider what about the human mind gives it the ability to have symbolic thought. Lately I’ve concluded that it’s our metacognitive abilities. The scientific evidence for metacognition in non-human animals outside of primates is non-existent, and even other primates only seem to have it in a comparatively limited fashion.)

    “there’s really no computation without there being a mind interpreting a given physical system as computing something. ”

    I think that’s debatable. Computation is functionality. It seems similar to asserting that the heart isn’t a pump if we’re not around to interpret its pump nature. But my point about the copied system was that, once it’s implemented, it has a physical existence regardless of whether anyone chooses to interpret it as computational.

  18. 18. Peter says:

    SAP – OK, but what if it does? The question is whether a real, actual person like me can become a digital description of themselves. Actually become that description, not correspond with it or be interpretable as the thing described. I’m saying the two things – person and description – are radically different at a metaphysical level.
    Put it another way; I’m saying that I’m not made of data. I’m a particular physical animal.

  19. 19. Lloyd says:

    Re #7 and #18: If you could 3D print an identical copy of something, same materials and atom for atom, it would not matter which was which; the original and the copy are the same, they are both “original”. That would still be true even if the items in question were alive, and even if they were conscious beings.
    If the copy was of “you”, then there are two indistinguishable “you”s. Basically, the printer has turned a description into a real object. That’s a big step more than your basic computer code usually does.

  20. 20. David Duffy says:

    “…the TT is not possible today as we cannot transfer an organic meaning [to an] artificial agent”

    I am reminded of claims about how male (old/white/etc) novelists cannot enter the mental worlds of women characters in a realistic way. Viz. the original imitation game. Are there non-computational differences between male and female consciousness?

    “…a mind interpreting a given physical system as computing something…”, “..computation is meaningless…”

    I’ve previously pointed to the recent literature on Maxwell’s Demon, computation and thermodynamics, and how this relates to embodiment. Are the computations bacteria doing meaningless? Do they have a self, even though they don’t know it in the way we do? It seems to me these have a naturally emerging (“teleonomic”) semantics, but I also think we can simulate the processes giving rise to these semantics without any vicious regresses. Does that help with the uploading question? Maybe.

  21. 21. araybold says:

    Peter #18: A vitalist would say you are not a physical animal because there is no such thing: animals are living things, not physical things, a different ontological category (of course, you might yourself be a vitalist speaking loosely in this case, in which case this argument won’t make much impression on you!)

    Another point to consider is that the physical you contains almost none of the atoms it was comprised of a couple of decades ago. In whatever sense you are the same person as then, that sameness does not come from the matter within you. If it comes from the arrangement of that matter while it is in you, then that is information (or if not, then what?)

    Lloyd: Whenever you are discussing replication at the atomic level and smaller, you have to consider the no-cloning theorem of quantum mechanics. Of course, you can waive it in a thought experiment, so long as you are consistent about doing so.

  22. 22. Christophe Menant says:

    David # 20: “It seems to me these have a naturally emerging (“teleonomic”) semantics, but I also think we can simulate the processes giving rise to these semantics without any vicious regresses. Does that help with the uploading question? Maybe.”

    ‘Emerging semantics’ can be a consequence of the ‘emergence of local constraints’ in an a-biotic universe subject to ubiquitous physico-chemical laws.
    The first emerging constraint could have been ‘maintain a far-from-thermodynamic-equilibrium status’, applied to a defined volume.
    Then:
    – An agent can be defined as an entity subject to internal constraints and capable of actions for the satisfaction of those constraints (e.g. animals subject to a ‘stay alive’ constraint).
    – ‘Semantic emergence’ becomes ‘meaning generation’ by an agent when it receives information that has a connection with a constraint. The generated meaning is precisely that connection. It will be used for the determination of an action that the agent will implement to satisfy the constraint.
    – Normativity and teleology can also be added to the ‘internal constraints’ thread.

  23. 23. Peter Martin says:

    Seems to me that we are much closer to a set of nested control loops operating on a substrate than to the physical material world (whatever that is). That’s true of us physically, mentally and consciously. ‘I’ am not the specific material I’m made of at any one time, since it doesn’t matter to me if all that changes as my body works away; what matters is that the patterns that constitute me survive, like an eddy in water. When I die, it is a set of control loops that cease to operate, so consciousness goes, neural control goes, and the body decays.

    Therefore something that replicates the set of control loops that enable me to keep existing (as a set of control loops) would be me, and I would experience it as me, even if I then experienced some differences as a consequence of existing in a different substrate, or in a different context.

    At a more abstract level, what we think of as an entity is a distributed pattern, embedded in a set of relationships with other patterns that determine its ongoing evolution. I see no (theoretical) problem in capturing this as information, although it needs an underlying substrate to ‘run’ on (which can itself be a set of interacting patterns).

  24. 24. SelfAwarePatterns says:

    Peter #18,
    Fair enough. I see that in particular as a philosophical conclusion where there is no fact of the matter. If that’s the way you choose to view it, I see no way for me to say you’re necessarily wrong.

  25. 25. Jochen says:

    SelfAwarePatterns:

    I agree that there’s a non-describable aspect to conscious experience, but that’s only because language is ultimately about conscious experience.

    I mean ‘description’ in a very general sense—including patterns of ones and zeros.

    It seems similar to asserting that the heart isn’t a pump if we’re not around to interpret its pump nature.

    Well, no, there’s an important difference: whatever you call it, what a heart does is moving fluid. There’s no interpretive freedom on that end: its function is fixed regardless of who may interpret it (or not, as the case most often is).

    Computers, however, are just physical systems traversing a sequence of states upon which an additional interpretive gloss is layered. Take a binary adder, which represents numbers via LEDs that may be either on or off: this interpretation isn’t fixed by anything within the adder itself. It would not be wrong, for instance, to interpret the nth element of an m-element row of LEDs not as representing 2^(n-1), but instead 2^(m-n)—such that instead of 2^0 = 1, the first light represents (for m = 3, say) 2^2 = 4, the second LED represents 2 in either case, and the third either 4 or 1; basically just turning the labeling on the LEDs around.

    The ‘adder’ then would no longer perform addition, but some other computation; moreover, every other assignment of meanings to lights, or groups of lights, is equally well possible. The physical system here does not fix its function; it’s only due to being interpreted the right way that we can say that it ‘adds’.
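
    (To make the relabeling concrete, here is a toy sketch in Python, purely illustrative and not any actual device, showing that one and the same physical row of lights reads as a different number under each labeling:)

    ```python
    # Toy illustration: the same physical LED pattern, read under two different
    # labelings of bit significance, denotes two different numbers.
    def decode(leds, msb_first=True):
        """Read a row of on/off LEDs as an unsigned binary number."""
        ordered = leds if msb_first else list(reversed(leds))
        value = 0
        for lit in ordered:
            value = value * 2 + (1 if lit else 0)
        return value

    pattern = [True, False, False]           # one fixed physical state: on, off, off
    print(decode(pattern, msb_first=True))   # 4 under one labeling
    print(decode(pattern, msb_first=False))  # 1 under the reversed labeling
    ```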

    Moreover, even if the mapping between states of the LED output and states of a computation were unique, it still would need that mapping in order to claim that the system computes—after all, a pattern of lights isn’t a number. No mapping, no computation; no interpretation, no mapping; no mind, no interpretation. Consequently, computation is a mind-dependent notion.

    A heart acts on fluid, and its function is that action. But a computer acts on symbols (physically instantiated ones, whether patterns of lights, of switches, of electrons, or even marks on paper), and what we take its function to be depends on how we interpret those symbols.

  26. 26. Simon says:

    “the person left behind by a non-destructive scanner surely is still you”

    It seems obvious that there would not be continuity of consciousness with the copy but is there any evidence that there is even for the original? If there is no continuity beyond memories then the copy isn’t really worse off, and uploading might be preferable compared to living normally, though in that rather depressing scenario neither seems particularly desirable.

  27. 27. Stephen says:

    Mind and body are deeply intertwined and one without the other will not function. A copy of your brain processes isn’t enough to create another you. That’s why when Peter says he is not his description, he is correct in at least one sense.

    For this exercise, we could divide a person into their physical part and their information processing part. We would need to replace the functionality of the physical part that affects information processing in order for the emulated processing part to work properly. For example, the brain controls the production of chemicals that affect our emotions. We would need a stub on the uploaded brain function to replace these, or we wouldn’t have our emotions, and without them we aren’t the same person. Glial cells in the brain may affect our information processing, so somehow their effect would have to be replicated as well. And so on.

    While complicated, it doesn’t seem like there is anything preventing this kind of replication other than the uncomfortableness of being “just” a meat machine and a massively daunting technical challenge.

  28. 28. SelfAwarePatterns says:

    Jochen #25,

    “I mean ‘description’ in a very general sense—including patterns of ones and zeros.”

    I think we have to make a distinction here between what can happen from inside of a conscious system vs what can happen from outside of it. From inside, we eventually reach a layer of the very stuff of conscious experience, of sensory, emotional, and motor perception. That layer is ineffable. All we can do is associate a symbol (word, etc) with it, but we can’t describe it any further.

    But from the outside, we can describe the correlated neural patterns and firings. Certainly we don’t have a full account of those correlates yet, but I think we have good reason to think we eventually will. Of course, we won’t look at those correlates and intuitively see our internal experience, but that gets to the hard problem and the limits of introspection.

    “A heart acts on fluid, and its function is that action. But a computer acts on symbol”

    But they’re both physical systems. If the adder you describe above was part of a processor in the control unit of a robot, the interpretation would be reified by the robot’s body and physical actions. The adder’s functionality would no longer be relative to a mind. The designer’s interpretation would have objective existence in the world independent of anyone’s mind.

    Looked at another way, our peripheral nervous system and overall body essentially reifies evolution’s interpretation of what happens in our central nervous system. If an evil scientist isolated a live cluster of neurons from my prefrontal cortex, he would be free to interpret that cluster’s neural firings in any way he chose.

  29. 29. mjgeddes says:

    I’m so confident that I’ve cracked consciousness, I recently tweeted my solution to Dan Dennett 😀

    Here’s my best current crack at it:

    “Consciousness is an extension of dynamical systems theory. In dynamical systems, it’s a form of *internal* time keeping (a *second* arrow of time) and a symbolic language for self-modelling. Language provides temporal coordination!”

    “The key is to realize that conventional thermodynamics needs to be extended! The entropic arrow of time measures *external* time, but dynamical systems have an *internal* time! Two arrows! The internal time flow is consciousness!”

    “When the dynamical system effects an accurate self-model, it reduces entropy dissipation (free energy principle). External coordination of multiple sub-agents (global work-space) combined with internal integration of symbolic models! (memory and imagination) 2nd arrow of time!”

  30. 30. mjgeddes says:

    #6 Peter and #13 Jochen

    I would start by drawing a sharp distinction between *static* objects (like rocks, tables and chairs) and *dynamical* systems (like a hurricane). A *dynamical system* is all moving parts in constant flow – for instance, if you look at the vortex of a tornado, that’s not dependent on the material details of the matter making it up – it’s a *pattern* of events.

    So I think looking at dynamical systems, the distinction between computation/matter on the one hand, and mind on the other starts to dissolve.

    https://www.wikiwand.com/en/Dynamical_systems_theory

    Then you can start to look at the principles governing these dynamical systems…thermodynamics and information theory.

    As regards consciousness, I think it doesn’t take an Einstein to realize it’s closely tied in with language, because language is what lets us reflect on our own thoughts (by forming *symbols* to represent them).

    Now as you said Peter, ‘thoughts are all meaning’ – precisely so! Thoughts are all about *language* and meaning, yes. But you also conceded that “Thoughts seem more like the kind of thing you could build out of computation than rainstorms”. So why can’t thoughts be computations?

    Now what happens when we connect the insight that thoughts are a symbolic language with the other insight that the brain is a dynamical system? Combine the two ideas.

    Then you can imagine a physical dynamical system that works as a symbolic language generator, and it can form *models* of itself. This points naturally in the direction of two theories of consciousness… integrated information theory and global work-space theory.

    A dynamical system needs a way to coordinate all the activities of its parts…it’s precisely symbolic language that lets the system do this! So there are two key parts, one external, the other internal. The external part is the physical coordination of the dynamical system. And the internal part is the integration of the system’s symbolic models.

    And I’m saying that Coordination (external) and Integration (internal) of a dynamical system is consciousness.

  31. 31. Christophe Menant says:

    # 28. SelfAwarePattern.
    “A heart acts on fluid, and its function is that action. But a computer acts on symbol”
    But they’re both physical systems.

    They are both physical systems, but a heart is alive and a computer is not. And we don’t know the nature of life. So I’m afraid we cannot today consider that the interpretation by a computer can be similar to the interpretation made by a living entity.
    It will probably become possible in the future. The first thing to do is to bring life to computers: to look at how we could transfer to artificial agents the ‘stay alive’ constraint, which looks intrinsic to living entities.
    The nature of mind, taking life as a given, remains of course a key subject to investigate.

  32. 32. Peter says:

    Many thoughtful points here that really deserve a response from me, but I’m afraid I haven’t currently got time to do them justice. Apologies and thanks.

  33. 33. Jochen says:

    SelfAwarePatterns:

    I think we have to make a distinction here between what can happen from inside of a conscious system vs what can happen from outside of it.

    I think I basically agree with most of what you write here, but surely, this should speak against any computational account, no? After all, a computation is fully specified by, say, the system’s states and transition rules. No question of ‘inner’ vs. ‘outer’ description pops up.

    If the adder you describe above was part of a processor in the control unit of a robot, the interpretation would be reified by the robot’s body and physical actions.

    First of all, this already concedes half my point (at least): computation alone isn’t sufficient, interaction with the environment is necessary. A black box, sitting in the dark, won’t yield conscious experience. This should come as a relief to those worried that our whole world might be a simulation!

    But more importantly, the robot is really just like the heart: what it does is entirely physical. If we speak of computation at all, then again merely as an interpretational gloss. In the end, the robot, like the heart, consumes some vehicles of the environment, leading to internal state transitions, leading to action on the environment. It ‘computes’ only in the sense that the heart does; but actually, all that happens is that the environment impinges on its contact surface, which causes some internal changes (switches flipping, electrons shuffling around, valves opening: it does not matter), which then yield some reaction (say, light in a given pattern impinges on the cells of a CCD-chip, which sends certain currents through a variety of gates, transistors, resistors, and the like, ultimately activating some servos that, say, make an arm extend towards an apple). This isn’t computation any more than, e.g., a ball bouncing down a hill is; it’s simply physical causality.

    Sure, we can interpret this as computation—but we can interpret a heart as performing some computation, too. This says something about our capacity for interpretation, for generating meaning, but nothing about the systems themselves. An analogy is a turntable translating the grooves etched into vinyl into sound: while on some level, one might claim that the grooves constitute a kind of ‘writing’, the turntable hasn’t in any sense ‘read’ that writing out loud. It has merely reacted to the physical properties of the vehicle it was presented with; and that’s all a robot does, too.

  34. 34. David Duffy says:

    “If we speak of computation at all, then again merely as an interpretational gloss”. We know that non-reversible computation requires the use of energy to erase memory. I don’t believe that this can be completely interpretational in nature. Similarly, algorithmic complexity and randomness are not interpretational in nature. At the physical level, thermodynamic disequilibrium, cognition and intentionality require an inside and an outside.

  35. 35. SelfAwarePatterns says:

    Jochen,
    “No question of ‘inner’ vs. ‘outer’ description pops up.”

    Actually it does. From within the information framework of software, a bit is an irreducible concept (even in machine language). But in hardware, a bit is a transistor, which is reducible to its components (terminals, junctions, etc). Likewise, software can only see certain things about I/O ports. The hardware reality of those ports is opaque to it.

    It even happens within software layers. An application program often only has access to the logical constructs created by the operating system. It doesn’t have access to the mechanisms underpinning those constructs. Layers of abstraction.

    “computation alone isn’t sufficient, interaction with the environment is necessary.”

    I don’t know of any useful computational devices that don’t interact with their environment, so I’m not seeing why this would be a point in your favor. To your point about simulations, I don’t see any reason why interaction with a simulated environment wouldn’t work. (Not that I see the simulation hypothesis as a particularly useful outlook.) I think we agree though that minds are information nexuses of their environment. But as far as I can see, so are computers.

    On interpretation, presumably you spent money for the device you’re reading this with. You spent that money because the effort to interpret it as doing computation is far less than the effort to interpret the nearest rock as doing so. If it took as much effort to interpret a brain as doing computation as it does the rock, the computational outlook wouldn’t be compelling.

    Your description of the robot at a physical level matches my point above that we can look at any computational system purely in terms of its physics. Incidentally, we can do the same with brains, describing everything that happens purely in terms of electrochemical reactions. In both cases, it’s tedious since we’re forgoing the benefits of a useful higher layer of abstraction.

  36. 36. Hunt says:

    @21

    Another point to consider is that the physical you contains almost none of the atoms it was comprised of a couple of decades ago. In whatever sense you are the same person as then, that sameness does not come from the matter within you. If it comes from the arrangement of that matter while it is in you, then that is information (or if not, then what?)

    I think the contention is that a person is dependent on the physical properties of the atoms/molecules, which doesn’t preclude the possibility that a personal potential can be specified by pure information. The particular origin of the “stuff” that makes you is immaterial (heh); physics already tells us that one elementary particle is identical to all others.

    Another way to put it is that if “mind uploading” is impossible, that doesn’t imply that “mind recording” is impossible. Your mind might still be captured, say on a DVD set, pending the day you can be reimplemented in physical form.

  37. 37. Hunt says:

    @28

    I think we have to make a distinction here between what can happen from inside of a conscious system vs what can happen from outside of it. From inside, we eventually reach a layer of the very stuff of conscious experience, of sensory, emotional, and motor perception. That layer is ineffable. All we can do is associate a symbol (word, etc) with it, but we can’t describe it any further.

    We describe things by invoking similar experience or sentiment in the target audience. It depends on what you mean by “ineffable”. Language is remarkably impotent in actually describing anything; it’s really just message passing with the intention of invoking semantic routines in the target. The word “describe” begins by begging the question, since you “describe” something to a target that interprets. Really another word is needed, like “specify” or something.

    But natural language (or any language for that matter) doesn’t “specify” anything by itself.

  38. 38. Jochen says:

    David Duffy:

    We know that non-reversible computation requires the use of energy to erase memory.

    We know that flipping a physical system’s state requires a minimum energy (even more accurately: flipping it in such a way that no other system ends up being reliably correlated with it), but that this is the erasure of information is, again, only thanks to interpreting one state as ‘0’ and the other as ‘1’—think about how easily one could reverse this interpretation, for instance.
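
    (For reference, the minimum in question is Landauer’s bound: erasing one bit dissipates at least k_B·T·ln 2, which at room temperature, T ≈ 300 K, comes to roughly 3 × 10⁻²¹ joules.)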

    Similarly, algorithmic complexity and randomness are not interpretational in nature.

    Well, they’re still relative: given an appropriate oracle, a random string may be non-random, so randomness is at least interpretational in so far as it requires a certain kind of observer.

    Furthermore, both really measure an information-bearing capacity: after all, a random string, served up to you, does not inform you of anything. It’s only if you have the proper code that you can ‘uncover’ the information sent to you, and with a different code, you could similarly uncover any other information at all.
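
    (A toy sketch in Python of that point, with made-up keys and messages: the very same string of bytes ‘says’ whatever the chosen code makes it say, so the meaning is supplied by the decoder rather than contained in the string:)

    ```python
    # Toy illustration: one ciphertext, two codes, two entirely different "messages".
    def xor_bytes(data, key):
        return bytes(d ^ k for d, k in zip(data, key))

    message_1 = b"MEET AT NOON"
    key_1 = bytes([37, 112, 9, 201, 55, 14, 88, 163, 5, 77, 190, 23])
    ciphertext = xor_bytes(message_1, key_1)   # looks like a meaningless random string

    print(xor_bytes(ciphertext, key_1))        # b'MEET AT NOON' under the first code

    message_2 = b"ALL IS WELL!"
    key_2 = xor_bytes(ciphertext, message_2)   # a suitable key exists for any message
    print(xor_bytes(ciphertext, key_2))        # b'ALL IS WELL!' under the second code
    ```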

    This is the problem with the way ‘information’ is used in everyday and in technical cases: in the technical sense, it’s really just a measure of the complexity of a given object—how many elementary differences it contains, how many yes-no questions one would have to answer to fully describe that object. It’s a purely syntactic quantity, but computation isn’t purely syntactic—the symbols a computer manipulates have meaning, and it’s us that supply this meaning. Without us, there’s nothing decoding the symbols, and consequently, there’s simply no computation there.

    SelfAwarePatterns:

    From within the information framework of software, a bit is an irreducible concept (even in machine language).

    But this framework already only exists if a mind interprets transistor states to represent, say, ‘1’ or ‘0’. Because again, transistor states and ones and zeroes are different kinds of thing entirely: this is shown by the fact that one can easily flip the assignment of which to call one, and which to call zero. That is, in proposing that there is an ‘interior’ level to the ‘software’, all you’ve really done is imported the interior level of the mind interpreting a physical system as implementing a certain kind of software.

    To your point about simulations, I don’t see any reason why interaction with a simulated environment wouldn’t work.

    Because if interaction were sufficient to ‘fix’ what computation is being done, without interaction with the ‘real’ outside world, there would not even be a simulated environment to interact with. There would simply be tiny currents changing transistor states.

    I think we agree though that minds are information nexuses of their environment.

    Actually, I wouldn’t agree with that: information is something that minds bring into the environment. Without minds, there is no information. In fact, without mental models, there is no information (but see my FQXi essay for that).

    You spent that money because the effort to interpret it as doing computation is far less than the effort to interpret the nearest rock as doing so.

    Indeed, and I spend money on books because it’s easier to interpret the marks on their pages as describing the story of star-crossed lovers or alien invaders than it is to interpret a rock in that way. But this doesn’t mean that those star-crossed lovers or alien invaders have any independent reality: they are conjured up by my mind. It’s exactly the same with computers.

    Take for instance the case of a set of cracks in a stone being ‘deciphered’ as an epic poem written in runes. It’s not that, by accident, these cracks happened to ‘mean’ that poem; it’s that a particularly inventive mind interpreted them a certain way. Their meaning was not discovered, it was imported. It’s the same with computation: we can interpret systems as computing, but absent such interpretation, there’s just no ‘there’ there.

  39. 39. mjgeddes says:

    If you think about an inside-outside distinction, this provides grounds for extending conventional thermodynamics: the standard entropy measure only defines a system relative to its environment.

    What I’m suggesting is introducing a *second* measure of entropy (or a second ‘arrow of time’ if you like), which would apply to dynamical systems and defines an *internal* measure of the system…or the system relative to itself (the evolution of internal symbolic models).

    Jochen, the computational model continues to enjoy rapid success. For instance in this podcast, Sam Harris recently did an extensive interview with top neuroscientist Anil Seth, which is very good indeed:

    https://samharris.org/podcasts/113-consciousness-and-the-self/

    The ‘free energy principle’ and ‘predictive coding’ are computational models of consciousness that are on the rise.

    https://www.wikiwand.com/en/Free_energy_principle
    https://www.wikiwand.com/en/Predictive_coding

  40. 40. James of Seattle says:

    I’m pretty much with SelfAwarePatterns in this discussion, and because I’m starting in the middle, I want to summarize the important background first.

    1. There are (effectively) two kinds of computers, analog and digital. Both of these kinds of computers can be programmable or not. When we talk about “computers” in normal conversation, we are almost always talking about programmable digital computers. No one thinks the brain is a programmable digital computer. People who think the brain is a computer would say it is a non-programmable analog computer (with digital elements and the ability to learn).

    2. It is a proven fact (I’m pretty sure) that any function computed by an analog computer can be simulated by a programmable digital computer to any degree of accuracy except perfection. That means that, theoretically, a brain (as an analog computer) could be simulated by a digital programmable computer. That does NOT mean that the simulation would necessarily run in “real time”. The closer to perfection that you want to get, the bigger the program will be and the slower the digital simulation will run.

    The above should not be controversial. The following may be. It is my personal take on things, and I think it corresponds to what SelfAware has been saying.

    3. A system that can be described as functional (i.e. a computer as described above, including emergent analog computers like the brain) has two descriptions: a physical description and a functional description. This is the hardware/software divide.

    4. The functional description (software description) is necessarily hardware independent. That means that the functional description says absolutely nothing about physics. Thus Descartes’s dualism.

    5. The physical description alone does not specify function. One man’s Dell laptop computer is another man’s paperweight. However, if two systems have the same physical description, they will have the same set of possible functional descriptions. An exact copy (atom for atom) of a Dell computer running Word has the same functionality. A real copy of a Dell computer running Word wherein all of the values in all of the memory locations are identical also shares the exact same functionality, because they are digital. A standard copy of a Dell computer (my computer and yours, say) has nearly the same functionality.

    6. What makes a physical system a computer/person/agent is a combination of the physical description and the functional description of choice.

    [I’m pretty sure SelfAware will buy into all of the above. I’m less sure about the following]

    7. The basic model for a computation (including non-programmable analog computation) looks like this:
    Input –>[agent]–> Output.
    In this format, the agent = the system, so we can talk about the physical agent and the functional agent.

    So now, general comments relevant to the OP, assuming the above is correct:

    You, the reader, are not a functional agent, nor are you a digital simulation that duplicates the functionality of a functional agent. You are a functional agent running on/realized by a specific physical agent. A digital program running on a programmable computer that is equivalent to your functionality is theoretically possible but certainly not feasible in the near future, and probably pointless in the far future where it might be feasible.

    *

  41. 41. Jochen says:

    mjgeddes:

    Jochen, the computational model continues to enjoy rapid success. For instance in this podcast, Sam Harris recently did an extensive interview with top neuroscientist Anil Seth, which is very good indeed:

    I think the sort of work Anil Seth is doing is very important and intriguing. For one, he’s careful to distinguish between the hard problem—why and how there is conscious experience at all—and the real problem—how consciousness can be assessed, measured, described—i.e. studied in itself in terms of measurable properties of the human brain. Essentially, he’s saying, let’s not worry about the hard problem for now, but instead try and get consciousness itself within the purview of scientific study. Once we understand better how it behaves, what its properties are, and so forth, maybe then will we be able to attack the hard problem—or who knows, maybe, as with life, the hard problem will seem much less forbidding by then.

    I think for this sort of work, the computationalist metaphor can be very helpful, because you get to take certain things for granted—that consciousness exists, and that it indeed performs manipulations on abstract entities, i.e. computations. But for explaining what consciousness is, you’re just not getting anywhere with computation, since computation always relies on mind having been smuggled in at the ground floor somewhere, meaning it can’t account for its emergence at the top level.

    ————————————————————

    James of Seattle:

    It is a proven fact (I’m pretty sure) that any function computed by an analog computer can be simulated by a programmable digital computer to any degree of accuracy except perfection.

    Better than that, actually: as long as the analog signal is bandlimited, or, in other words, as long as there is some finite accuracy for analog parameters, digital signals can exactly recover all of the information in analog ones. So the sets of functions computed by analog and digital computers are exactly the same.

    (If, of course, the exact real number value of analog parameters matters, we’re into the realm of hypercomputation, that is, of devices capable of solving problems that no Turing-machine equivalent computer can solve.)
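
    (The bandlimited case is just the standard sampling theorem; here is a minimal numerical sketch in Python, a textbook Whittaker-Shannon reconstruction with made-up numbers, nothing specific to brains:)

    ```python
    # Toy illustration: a bandlimited signal sampled above twice its highest
    # frequency can be rebuilt at any instant from its samples alone.
    import numpy as np

    f_max = 3.0                      # highest frequency present in the signal (Hz)
    fs = 10.0                        # sampling rate, comfortably above 2 * f_max

    def signal(t):
        # A bandlimited test signal: sinusoids at 1 Hz and 3 Hz.
        return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * f_max * t)

    n = np.arange(-200, 201)         # sample indices over a long (but finite) window
    samples = signal(n / fs)

    def reconstruct(t):
        # Whittaker-Shannon interpolation: a sinc kernel centred on each sample.
        return np.sum(samples * np.sinc(fs * t - n))

    t_test = 0.123                   # an instant that falls between two samples
    print(signal(t_test), reconstruct(t_test))   # agree closely; exactly, in the infinite-window limit
    ```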

    A system that can be described as functional (i.e. a computer as described above, including emergent analog computers like the brain) has two descriptions: a physical description and a functional description. This is the hardware/software divide.

    You have to be careful what you mean by ‘functional’ description, I think. A heart, for instance, can functionally be described as a pump, but that doesn’t mean it has any kind of ‘software’. Usually, what we call ‘functional’ is the performance of some physical task that, however, could equally well be performed by a physically different system—so a heart can be replaced by an artificial pump, for example, without any loss of functionality.

    This has several complications, however. First of all, what ‘task’ we recognize in a physical system depends on our interests: we describe a heart as pumping blood, but we could equally well describe it as metabolizing sugar. Also, the level of coarse-graining plays a role: at a molecular level, that a heart pumps blood may not be readily obvious; instead, we see a lot of functions being performed that are wholly opaque to us at the macroscopic level. Additionally, how the system is embedded into its environment matters: a heart can only pump blood if it’s within an organism, although physically, nothing changes about it if we extricate it and put it on ice for transplantation. A heart in an ice box no longer pumps blood, however.

    So there’s already a lot of observer-relativeness in ascribing a proper function to a physical system. Things get worse once we move on to computation.

    Picture a ‘binary adder’ (quotes because that already implies more than we’re strictly entitled to conclude at this point—consider it just a name). It consists of two rows of LEDs, each of which is coupled to a switch that either turns it on or off. A button, if pressed, will cause a third row of LEDs to light up. The pattern in which it does so depends on the pattern of the LEDs in the first two rows in such a way that if they are interpreted as binary numbers—on for 1, off for 0, with the leftmost LED having the highest value—then the third row can be interpreted as a binary number in the same way that corresponds to the sum of both ‘inputs’.

    Now, the question is: does this device perform addition? Does it, in any objective sense, compute the sum of its inputs?

    Most people would probably answer ‘yes’; but the answer is actually a straightforward ‘no’. For consider what happens if we change the interpretation of the LEDs: for instance, now off corresponds to 1, and on to 0. The device will still compute a function of two binary numbers; however, that function will no longer be addition. But which interpretation is right? If you use that adder in order to compute the sum of two numbers, I can use it with just the same effort to compute whatever function is implemented after the change of interpretation.

    This is, of course, not the only way in which we can change the interpretation. We could also modify the significance of each LED: say, consider the rightmost light to map to the highest value. We can even jumble up things in different ways: have the leftmost one have the highest value, the one next to it the lowest, then the second highest, and so on. In each of these cases, we would get a device that performs a different computation. Thus, computation is entirely in the eye of the beholder.
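
    (Here is a toy sketch of that in Python, a hypothetical three-LEDs-per-input device rather than anyone’s real hardware: its physical behaviour is held fixed, and only the reading of the lights changes, yet the ‘function computed’ changes with it:)

    ```python
    # Toy illustration: fixed physical behaviour, two readings, two different functions.
    def physics(in_a, in_b):
        """The device's fixed behaviour: given two rows of 3 input LEDs, light a row of
        4 output LEDs. It is wired so that, read leftmost-highest with lit = 1, the
        output pattern is the binary sum of the two input patterns."""
        a = sum(2 ** i for i, lit in enumerate(reversed(in_a)) if lit)
        b = sum(2 ** i for i, lit in enumerate(reversed(in_b)) if lit)
        s = a + b
        return [bool((s >> i) & 1) for i in (3, 2, 1, 0)]

    def read(leds, lit_means_one=True):
        """One possible interpretation of a row of LEDs as a number."""
        return sum(2 ** i for i, lit in enumerate(reversed(leds)) if lit == lit_means_one)

    row_a = [False, True, True]    # some physical input state
    row_b = [False, False, True]
    out = physics(row_a, row_b)    # the lights that actually come on

    # Reading lit = 1: the device "adds" (3 + 1 gives 4).
    print(read(row_a), read(row_b), read(out))
    # Reading lit = 0: the very same lights now "compute" x + y + 1 (4 + 6 gives 11).
    print(read(row_a, False), read(row_b, False), read(out, False))
    ```

    (With three-bit inputs and a four-bit output row, the inverted reading always yields x + y + 1 rather than x + y; a perfectly respectable function, just not addition.)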

    Now, sometimes people claim (as SelfAwarePatterns did above) that hooking such a device up to the environment serves to reduce, or even eliminate, the ambiguity. But this actually does no work at all: consider the input rows to be set by ‘the environment’—sensory stimuli, for instance—and the output row the ‘reaction’ of the system. Then, under all of the above interpretations, the stimuli would be the same, and such would likewise be the reaction. What computational gloss we put upon this is entirely irrelevant; all that matters is the physical causality that makes a given LED light up if a certain pattern of LEDs was set.

    This last, the physical causality, is then all we have to work with in an objective way; computation and function are ultimately subjective glosses, interpretations of that objective data that have no standing on their own. Consequently, trying to ground mind in such notions can’t work, as they depend on mind already.

    This is a very simple and necessary conclusion. I suspect that really all that keeps it from being accepted more widely is social inertia: computers seemed to be almost magical devices in the 20th century, capable of feats hardly imagined up to that point. Why not of explaining consciousness?

    But in the end, as in some glurgy kid’s movie, the magic really was in us all along.

  42. 42. SelfAwarePatterns says:

    James of Seattle,
    I think you nailed my outlook, except for a couple of points. First, from what I know of Descartes’ dualism, it’s much stronger than what you describe. Among other things, the functional / physical divide in perspective doesn’t require the pineal gland to be a medium between the ghostly mind and the physical body.

    In the end, both the functional and physical descriptions are mental images / models / representations / theories we construct to understand something, and that understanding is only useful to the extent it gives us accurate predictions about future perceptions. Calling one real and the other not misses the epistemic limitations we always have to contend with, that we only ever have access to our own consciousness and the theories we construct about what lies outside of it.

    Second, regarding your last point, if mind copying were available right now, I wouldn’t view it as pointless. To the contrary, I’d try to make sure I maintained recent backups at all times. And I’d choose to view the backup being booted up after my death as me surviving. I can’t say people who choose to view it as another entity are necessarily wrong, but I can’t see that they can legitimately say I’m wrong either.

  43. 43. James of Seattle says:

    Jochen,

    Most of what you wrote is fine, but you made mistakes at the end. You said
    [quote]computation and function are ultimately subjective glosses, interpretations of that objective data that have no standing on their own.[/quote]
    This is not correct. There are objective explanations for functionality. There are objective reasons an eyeball functions the way it does. It’s true there are subjective glosses possible, which would provide an explanation for re-purposing a heart as a sugar sink. But the “ultimate” explanations are objective, even if they seem to include subjective goals. Those subjective goals have objective explanations.

    [quote]Consequently, trying to ground mind in such notions can’t work, as they depend on mind already.
    This is a very simple and necessary conclusion.[/quote]
    Simple, yes. Necessary? No. Several significant people are coming around to the notion of how to ground meaning (function) from natural sources. See David Haig’s “Making sense: information interpreted as meaning”, or Carlo Rovelli’s “Meaning and Intentionality = Information + Evolution”

    *

  44. 44. James of Seattle says:

    [Dang. How do you do quotes? How do you edit?]

    SelfAwarePatterns,
    I did not understand the first part of your response. You said

    “Calling one real and the other not misses the epistemic limitations we always have to contend with, that we only ever have access to our own consciousness and the theories we construct about what lies outside of it.”

    I didn’t think I was calling one real and the other not, and I’m not sure what real or not thing(s) you are referring to. Are you talking about the difference between a physical agent and a functional agent? The descriptions (physical v. functional) are just concepts. References to physical and functional agents are references to different, but real, aspects of the same thing.

    Also, not sure which epistemic limitations you are referring to.

    *
    [the last point I’ll concede. My point was meant to be that a back-up capability will be post-singularity, so all bets are off]

  45. 45. Jochen says:

    James of Seattle:

    There are objective reasons an eyeball functions the way it does.

    And those reasons are completely exhausted by the physical causality within which the eye exists. So why add anything beyond that?

    And again, I’m not saying that the reasons for an eye’s functioning are subjective; what is subjective (to some degree, at least) is which function you consider it to have. To see? To move around in its socket? To store the aqueous humor? To give social cues to other members of the species? To be a particularly tasty morsel for carrion birds, once an organism is deceased?

    All of these (and many more) are certainly things an eye does; but what its function is, is something we choose. For some, it can only fulfill the function if it is properly embedded within its environment: to see, it needs to be hooked up to a nervous system, for instance. To give social cues, it needs to be embedded in a society whose members are capable of receiving these cues.

    There is also a disturbing teleology in the notion of function: we typically claim something functions in such a way if it achieves some goal. But how does the goal, the future, intended end-point of an action, determine function?

    Some, like Ruth Millikan, appeal to evolution to explain this: a part’s proper function is what it has evolved to do. This carries the somewhat disturbing implication that an eyeball that has spontaneously congealed out of the vacuum (a Boltzmann-eyeball) doesn’t have a function, despite being molecule-by-molecule identical with my own functional eyeball: so function does not lie in the physics.

    So, what do we really mean when we say, an artificial heart is functionally equivalent to a biological one? Well, we have isolated one particular range of physical behavior that we deem important, and created a system that replicates it. The replacement heart will differ with respect to many functions: it won’t metabolize sugar, for instance. But we’ve singled out the blood-pumping function, not because we wanted a replacement heart, but a replacement blood-pump. In general, however, it’s not at all easy to figure out which function to focus on.

    But my real target is computation. Here, the issue reduces to a simple question: what does the adder (as described above) compute? If there’s no objective answer to that, then computationalism is simply wrong. (It’s actually still not right if there is an answer, since we still need a mind to map physical states to abstract objects even if that mapping is unique, but uniqueness is necessary, if not sufficient.)

    Several significant people are coming around to the notion of how to ground meaning (function) from natural sources.

    Or indeed, see my own essay “Von Neumann Minds: A Toy Model of Meaning in a Natural World”.

    I’m not in any way saying that mind needs to be grounded in something non-natural: quite the opposite, I’m trying to formulate a physicalist picture of the mind. I disagree with many commonly held beliefs in that area, though, chief among them the idea that mind can be explicated in terms of computation.

  46. 46. Hunt says:

    http://www.umsl.edu/~piccininig/Computationalism_in_the_Philosophy_of_Mind.pdf

    Computationalism is the view that intelligent behavior is causally explained by computations performed by the agent’s cognitive system (or brain). In roughly equivalent terms, computationalism says that cognition is computation. Computationalism has been mainstream in philosophy of mind – as well as psychology and neuroscience – for several decades.

    Jochen says:

    First of all, this already concedes half my point (at least): computation alone isn’t sufficient, interaction with the environment is necessary. A black box, sitting in the dark, won’t yield conscious experience.

    This isn’t computation anymore than, e.g., a ball bouncing down a hill is; it’s simply physical causality.

    SAP says:

    I don’t know of any useful computational devices that don’t interact with their environment, so I’m not seeing why this would be a point in your favor.

    If real world interaction is all that is required to “fix” computation into one aspect of causal mechanism, I don’t see how the “problem of interpretation” presents a problem for computationalism. From the quoted definition, computationalism is the view that intelligent behavior is causally explained by computation, BUT as SAP points out, interaction with environment is usually assumed. But if explicit specification is all it takes to resolve the argument…

    Having said all that, I actually don’t buy the “interaction with environment” rebuttal. I’ve always been swayed by the brain-in-a-vat argument: a brain excised from a skull and somehow kept alive in a vat would still be conscious, though totally cut off from its environment. As a more realistic example: those unfortunate enough to suffer a “locked-in” stroke are still conscious, though mostly cut off from their environment.

    (As a side note: a preview button would be great. Sure hope my markup is right!)

  47. 47. Jochen says:

    (Sorry, I meant to add: in order to do quotes, you need to use the html “blockquote” tag—enclose the quoted part between “<blockquote>” and “</blockquote>”. You can’t edit, unfortunately.)

    SelfAwarePatterns:

    And I’d choose to view the backup being booted up after my death as me surviving.

    So, there is a copying machine. You’re offered the following deal: you get into the machine, a copy is made and reconstituted in a different room; that a copy was made is afterwards proven to your satisfaction, say by video. You have every reason to believe the process works as advertised—say, you’ve been copied many times, it’s a routine procedure used all over the world, there’s no doubt that it does what it says on the tin.

    You’re presented with two options, from which you must choose before the procedure: either you choose that the original is destroyed; in that case, the copy will receive five million dollars. Or, you can choose that the original lives; in that case, the clone will be destroyed, and you get nothing. (Oh, and you can’t choose not to play: you’re the prisoner of the usual mad-scientist type, and if you renege, you’ll just be killed without a copy, all old-fashioned like.)

    Which do you choose? (I’m genuinely curious here; I have no settled views on the matter of personal identity. Still, my intuition is that one ought to decide against the original being killed, since I can’t shake the idea that otherwise, my experience would be—make decision, enter cloning machine, die; not—make decision, enter cloning machine, receive five million bucks.)

    A variation:

    Neither of you can tell, immediately after the cloning, whether they’re clone or original—the cloning procedure requires anaesthesia, and you wake up in identical surroundings.

    Does that have any influence on your answer? In what way?

  48. 48. Jochen says:

    Hunt:

    If real world interaction is all that is required to “fix” computation into one aspect of causal mechanism, I don’t see how the “problem of interpretation” presents a problem for computationalism.

    I don’t think interaction with the environment suffices for ambiguities of interpretation to be resolved; I was merely pointing out that even if that were correct, we don’t really have a computationalist account, as interaction with the environment is not a computational process.

    Regarding why interaction can’t suffice, see my first answer to James above, with the example of the ‘binary adder’. If you hook a device up to the environment, then it’s its input-output behavior that is relevant—the way it reacts to environmental stimuli. This doesn’t help fix the interpretation of inputs and outputs (and thus, of what is being computed) in any way, though.

  49. 49. Hunt says:

    Jochen,

    This doesn’t help fix the interpretation of inputs and outputs (and thus, of what is being computed) in any way, though.

    Why does it matter? Again returning to the definition above (which can be quibbled with, of course), computationalism is the idea that cognition is computation. That no specific interpretation can be given to the computation that goes between input and output is irrelevant. The computation is just another part of the causal explanation of intelligent behavior. I think there’s a danger of losing sight of when the problem has been answered here: it doesn’t matter what computation is being performed! Therefore interpretation is irrelevant. Problem solved. Yes/no?

  50. 50. Hunt says:

    Speaking for myself, I just know I wouldn’t want to be the clone that gets drowned in the tank of water, like in “The Prestige”.

  51. 51. Jochen says:

    Hunt:

    Again returning to the definition above (which can be quibbled with, of course), computationalism is the idea that cognition is computation.

    But without fixing the interpretation, there’s no fact of the matter which computation is being performed. That is, by saying ‘the binary adder adds’, one means that it implements a concrete algorithm, and it’s this algorithm that is meant by ‘the computation’. Likewise, if mind were computational, it would at least be equivalent to a certain algorithm being implemented physically.

    But without fixing the meaning of inputs and outputs, we can’t say that the binary adder adds—there are, as I described above, a great many functions, even within the restricted domain of binary functions, that one can consider the ‘adder’ to implement. Consequently, there are many different algorithms that one could consider the adder to perform, and hence, many different computations: without making an interpretational choice (saying ‘that LED corresponds to place value 2^0, this one to 2^1’, and so on), no computation at all is being performed.

    Consequently, without interpretational choice, it’s not the case that the brain implements a certain computation.
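
    (To make that concrete, here is a minimal sketch in Python; the labels are purely illustrative. The device’s physical switch-to-LED behaviour is held completely fixed, and only the reading of the LEDs changes.)

    def device(switch_a, switch_b):
        # The fixed physics: which LEDs light up for which switch positions.
        led_x = switch_a & switch_b   # lit only when both switches are up
        led_y = switch_a ^ switch_b   # lit when exactly one switch is up
        return led_x, led_y

    def read_as_adder(leds):
        # Interpretation 1: led_x is the 2^1 place, led_y the 2^0 place.
        x, y = leds
        return 2 * x + y

    def read_swapped(leds):
        # Interpretation 2: the place values are exchanged.
        x, y = leds
        return 2 * y + x

    for a in (0, 1):
        for b in (0, 1):
            leds = device(a, b)
            print(a, b, "->", read_as_adder(leds), read_swapped(leds))
    # Interpretation 1 yields a + b; interpretation 2 yields 0, 2, 2, 1,
    # a different function altogether from exactly the same physical behaviour.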

  52. 52. Jochen says:

    Hunt:

    Speaking for myself, I just know I wouldn’t want to be the clone that gets drowned in the tank of water, like in “The Prestige”.

    But in order to avoid that fate, which strategy would you, if pressed, choose? Have the original destroyed (and have the clone pocketing five million bucks), or have the clone destroyed? (And yes, either would get to experience their imminent end; there’s no unnecessary torture, but it’s also not a painless blinking out of existence.)

    Does your choice depend on the amount of money? If no money at all is offered, would you be fine with a coin toss? Would ten bucks suffice to choose the clone as survivor? Is there a threshold such that you would take the deal?

  53. 53. Jochen says:

    I mean, it’s in the end quite simple: you come across a black box in the emptiness of space. It’s some complex mechanism, it has gears, switches, lights, and whatnot. Manipulating switches causes lights to go on, whatever. What does it compute?

    This is impossible to answer. You might make a reasonable guess at what the intended meanings of the lights are, and get a consistent interpretation; but other consistent interpretations will exist.

    The problem is exactly the same as: you come across a page of text in an unknown language. What does it say? Again, this is impossible to answer. In principle, it could say anything that can be said in that number of characters: regarded as a one-time pad, it admits a decoding under which every English text of that length, every Chinese text, every German text, whatever, can be gathered from it.

    The meaning is not just in the physical artifacts; one always needs to interpret them. But since such interpretation is necessary, it’s just not the case that in any objective sense, the brain computes something. It can only be interpreted as such. But this falsifies computationalism.
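
    (Again, for concreteness, a small Python illustration of the one-time-pad point, with constructed keys rather than anything from a real cipher: the very same bytes decode to two unrelated messages, depending on which key, i.e. which interpretation, is brought to them.)

    def xor_bytes(data, key):
        # Bytewise XOR; used both for 'encrypting' and 'decrypting'.
        return bytes(d ^ k for d, k in zip(data, key))

    message_1 = b"ATTACK BY SEA"
    message_2 = b"STAY AT HOME!"              # same length as message_1

    key_1 = bytes(range(len(message_1)))      # any key of the right length will do
    ciphertext = xor_bytes(message_1, key_1)  # the 'page of text in an unknown language'

    # One can always construct a second key under which the same bytes say something else:
    key_2 = xor_bytes(ciphertext, message_2)

    print(xor_bytes(ciphertext, key_1))       # b'ATTACK BY SEA'
    print(xor_bytes(ciphertext, key_2))       # b'STAY AT HOME!'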

  54. 54. SelfAwarePatterns says:

    James of Seattle #44,
    Sorry! That paragraph was really just thrown in as an aside comment about the overall conversation in this thread. I really should have broken it out. Totally my bad.

    On the real-or-not thing, my point was that designating the functional aspect, but not the physical aspect, as an interpretation is artificial. They’re both interpretations of reality, mental models we create. And the epistemic limitation is that we only ever have access to the models.

  55. 55. SelfAwarePatterns says:

    Hunt #46,
    Actually a brain in a vat (in the typical scenarios) is not cut off from its environment. The usual idea is that an evil scientist is feeding it sensory input and responding to its motor output. Certainly its environment is radically different than what it perceives, but it’s still an environment.

    A patient who is completely locked in might still be conscious. They still have their memories, if nothing else, from when they did interact with their environment. I’m not aware, though, of any patients who experienced complete and utter lock-in and recovered enough to describe it, so I’m not sure how much we really know here. But would a brain completely locked in since the early fetal stages be conscious? I don’t know the answer to that question, but I’m pretty sure that if it were, its consciousness would be desolate and impoverished by any standard.

  56. 56. SelfAwarePatterns says:

    Jochen #47,
    These scenarios are intuitively difficult because they are outside of anything our instincts evolved to handle. We never had to worry about copies of ourselves in the hunter-gatherer environments, so no answer is going to feel good to us.

    Before addressing the scenario you laid out, let me lay out another one. The mad scientist holding you captive is going to flip a coin. If it’s tails, you die. You get to decide now what happens if it’s heads. Option one is you go free with nothing. Option two is you go free with $5 million. Option two here seems like the rational choice. In either case, you have a 50% chance of dying and a 50% chance of living, so you might as well choose the option where, in the case that you live, you live in style.
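
    (Just to make the comparison explicit, a trivial sketch of the expected payoffs, with placeholder numbers: the survival odds are identical under both options, so the only difference lies in the payoff in the branch where you live.)

    p_live = 0.5   # the coin is fair either way

    def expected_payout(payout_if_alive):
        # The branch where you die pays nothing, whichever option you picked.
        return p_live * payout_if_alive + (1 - p_live) * 0

    print(expected_payout(0))           # option one: 0.0
    print(expected_payout(5_000_000))   # option two: 2500000.0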

    Okay, back to your scenario. An important detail is that you specify that I must decide *before* the procedure. In my view, in my personal subjective future timeline, I have a 50% chance of coming out of the machine as copy-me, and a 50% chance of coming out as original-me. No matter which option I choose, there’s a 50% chance that death is in my future. For me, the choice here is identical to the one in the alternate scenario above. (It’s actually slightly better, since some version of me definitely does get out alive.) The alternate scenario you present, where there is an explicit interruption in consciousness and an obfuscation of identity afterward, just makes the choice easier.

    The choice would be harder if the scenario was that I had to decide *after* the procedure, since now, as original-me, if I choose for copy-me to survive, there is a 100% chance of death in my subjective future. But again, looking at an alternate scenario helps. Suppose the choice was whether to have my memory of the current situation wiped and then be set free with $5 million, or simply to be set free with no memory wipe and no money? Subjectively, for me, the two scenarios are equivalent, although admittedly my survival instincts would make that hard to remember if I were in the decide-after-copy scenario.

    Suppose the mad scientist gave me a choice before the procedure. In choice one, copy-me is tortured for several hours and then killed, while original-me leaves with $5 million. In choice two, both copy-me and original-me are set free unharmed. Again from a subjective perspective, this seems equivalent to the mad scientist giving me a choice of enduring several hours of torture but then being restored to full health and having my memory of the torture session wiped before being set free with the money, or simply being set free with no torture or money. (In both scenarios, unless I desperately needed the money, I think I’d skip the torture.)

    Again, these are all situations our instincts didn’t evolve to handle, so no answer is going to feel categorically right. Indeed, I can’t say that someone who makes different choices than I did is wrong. It’s all in our conceptions of self.

  57. 57. Jochen says:

    Interesting answer, thanks! To me, it sort of boils down to: the copy procedure didn’t change anything about me; from my point of view, from the point of view of me as a physical system, there’s absolutely no difference regarding whether there’s a clone made after I have been scanned, or not. So what could possibly cause me, in a physical sense, to have any different experience than—be scanned, then die? It seems that anything else would have my experience be contingent on distant facts—i.e. whether there is actually a clone made or not. This just doesn’t sit well with me.

    But as I said, I don’t think I have a good answer myself here.

  58. 58. SelfAwarePatterns says:

    I should note that a lot depends on my faith in the copying process. If I’m not sure about it, I’d probably be closer to your position. The stream of consciousness I know is preferable to one with substantial question marks. But if the procedure were common and I’d been through the copying process before, such that I remembered being the original followed by being the copy, then it’s easier for me to consider the copy an aspect of me.

  59. 59. James of Seattle says:

    Jochen #45 said

    There is also a disturbing teleology in the notion of function: we typically claim something functions in such a way if it achieves some goal. But how does the goal, the intended future end-point of an action, determine function?

    Actually, the relation of “some goal” (or purpose) to function is that the former is an explanation of the latter. Something has a function because it was created (selected) to achieve a purpose. That purpose can be long gone, but it still explains the function. The function under consideration will always be objectively tied to the purpose for which it was created. So a mechanism is an “adder” if its purpose when it was created and/or situated was to add. That same mechanism could be a subtractor, but only if created or situated for the purpose of subtracting.

    So it’s not that the Boltzmann eyeball doesn’t have a function so much as that it has no function related to a purpose.

    But my real target is computation. Here, the issue reduces to a simple question: what does the adder (as described above) compute?

    The answer is the same as above. What it computes is determined by the objective purpose for which it was created or situated.

    we still need a mind to map physical states to abstract objects

    Do you mean we need a mind to do the mapping, or we need a mind for there to be a mapping? I would say that given a purpose, there is an objective mind-free mapping.

    *

  60. 60. David Duffy says:

    Jochen writes: “the erasure of information is, again, only thanks to interpreting one state as ‘0’ and the other as ‘1’—think about how easily one could reverse this interpretation” and “algorithmic complexity and randomness are…still relative: given an appropriate oracle, a random string may be non-random”.

    These kinds of comments keep confusing abstract and concrete computation. The random string is non-random only because a suitably formed physical dynamical process can expend energy to decode it – the total information must be message plus reader. The reason we have Landauer’s principle is that hypercomputing cum oracles at the level of thermodynamics give perpetual motion, which is generally thought a bad thing. And Maxwell’s demon cannot start swapping his semantics re ones-and-zeroes half-way through and expect his books to balance.

    As to teleosemantics a la Millikan and others, your Swamp Thing/Boltzmann Brain type objection fails because natural selection is “merely” a search engine finding actual physical solutions. Sure you can throw together a de novo solution that is just as effective, but again, in this world, you have to have expended work to get to such an unlikely state from the position in which we usually find ourselves.

  61. 61. Jochen says:

    James of Seattle:

    So a mechanism is an “adder” if its purpose when it was created and/or situated was to add. That same mechanism could be a subtractor, but only if created or situated for the purpose of subtracting.

    But natural selection simply doesn’t act on that level. It selects which LEDs light up in response to which switches have been flipped, as that determines the system’s behavior within the environment; but whether you consider those lights to represent binary numbers, and how you do so, doesn’t make a whit of difference for selection.

    The same holds for man-made objects: no matter how hard you want your device to be an adder, I can always use it to implement a different function, and there’s nothing you can do to stop me. The same goes for every system that purportedly computes something.

    This is in stark contrast to any objective properties of a physical system. It’s not open to interpretation what its mass, or size, may be; and even in cases where different observers come to different results (say, relativistic length contraction), there is a simple, lawful connection that gives a unique answer for every combination of observer (i.e. frame of reference) and system. No such agreement exists for the notion of ‘computation’.

    No computationalist is ever going to solve the question of how to find out what an arbitrary system computes; yet if that were an objective property of a system, this should be possible at least in principle. I’ll even throw in the complete state-transition table of the system for free, since, thanks to Moore’s theorem, you can’t generally figure that out just by observing input/output behavior.

    It won’t help: different people can use this system to compute different functions, and even the same person can use it differently. What a system computes is always relative to how it is interpreted; but consequently, computation just can’t figure in explaining minds.

    —————————

    David Duffy:

    The reason we have Landauer’s principle is that hypercomputing cum oracles at the level of thermodynamics give perpetual motion,

    Landauer’s principle, as I mentioned, simply characterizes the fact that to switch an unknown state, a minimum average energy is necessary. It has connections to information only insofar as one can use the state that was switched to represent a ‘bit’ of information. (One can, for instance, derive the principle without any mention of information or computation at all.)
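
    (For reference, the usual form of the bound: the average energy dissipated in resetting one unknown two-state system, in contact with a heat bath at temperature T, satisfies

    E_{\mathrm{erase}} \ge k_B T \ln 2,

    and nothing in that bound depends on what, if anything, the two states are taken to represent.)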

    It would be still more correct to say that what one erases are correlations between systems. Since such correlations derive from constraints, this entails removing the constraints; that this requires physical work to be performed should not be surprising at all.

    As for Maxwell’s demon, again, it’s completely irrelevant whether you call the elements of its memory ‘1’ or ‘0’; what is useful to the demon is the physical correlation between that memory and the position of the particles. It can be completely understood in terms of switches being flipped, circuits being closed, or whatever; anything informational is, again, merely an after-the-fact gloss.

    natural selection is “merely” a search engine finding actual physical solutions

    The problem is not finding solutions; it’s formulating the problems in an objective way. So the heart evolved in order to pump blood, and it’s by this process that, as teleosemantics would have it, its function indeed is ‘to pump blood’. This makes the question of which function the heart fulfills dependent on its evolutionary history; a heart with a different history, or none at all, would thus have a different function even if it was physically identical to an evolved one. It would be the same ‘solution’, but for a different problem, or no problem at all.

  62. 62. David Duffy says:

    Jochen: “correlations”: “correlation”, “mutual information”, “feedback”, “measurement” are informational concepts. Autonomous Maxwell’s Demons of several types have been built.

    …in structured environments, whether the correlations are temporal or spatial. Ashby’s Law of Requisite Variety – a controller must have at least the same variety as its input so that the whole system can adapt to and compensate that variety and achieve homeostasis – was an early attempt at [a] general principle of regulation and control. In essence, a controller’s variety should match that of its environment. Above, paralleling this, we showed that a near-optimal thermal agent (information engine) interacting with a structured input (information reservoir) obeys a similar variety-matching principle.

    Arising naturally in recent analyses of Maxwellian demons and information engines, information reservoirs have come to play a central role as idealized physical systems that exchange information but not energy. Their inclusion led rather directly to an extended Second Law of Thermodynamics for complex systems: The total physical (Clausius) entropy of the universe and the Shannon entropy of its information reservoirs cannot decrease in time. We refer to this generalization as the Information Processing Second Law (IPSL).

    A specific realization of an information reservoir is a tape of symbols where information is encoded in the symbols’ values.

    Both from

    http://csc.ucdavis.edu/~cmg/compmech/pubs/abboyddissn.htm

    And see also a primer by Sagawa [2017] arXiv:1712.06858v1
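
    (In symbols, and only as a sketch, with the reservoirs’ Shannon entropy H taken in bits, hence the ln 2 conversion factor, the IPSL quoted above amounts to something like

    \Delta S_{\mathrm{universe}} + k_B \ln 2 \, \Delta H_{\mathrm{reservoirs}} \ge 0,

    i.e. a decrease in thermodynamic entropy has to be paid for by writing at least a corresponding amount of Shannon entropy into the information reservoirs.)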

    The question is whether moving up to the level of Friston’s models of the brain’s function is justifiable.

  63. 63. Jochen says:

    David Duffy:

    “correlations”: “correlation”, “mutual information”, “feedback”, “measurement” are informational concepts.

    They’re syntactical concepts that tell you nothing at all about semantics; it’s semantics we need for minds. A sheet of paper with random characters on it will have a high Shannon information, but still, that doesn’t mean it transports any sort of message at all. Yet, any sort of message with the right length can be decoded from it.

    The quantities you’re talking about are best thought of as being related to information-bearing capacity, rather than information itself.

    When it gets right down to it, using the notions you refer to, a bit is nothing but an elementary difference between two physical systems. Say, a red ball and a green ball: their color is a difference across one property, and hence, I can provide you with one ‘bit’ of information in this sense if I give you either the green or the red ball.

    But what we need for minds is meaning: say, that one bit of information tells you the answer to a yes/no question, for instance, that the English are attacking by sea. This is not reducible to the physical system on its own; it is your interpretation that makes it about that. It could have just as easily meant something completely different. There is no sense in which a green ball inherently and objectively means that the English will attack by sea. This is an additional interpretive gloss put upon that system by an interpreting observer.
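
    (A sketch of that point in Python: the same physical ‘bit’, the green ball rather than the red one, and two equally consistent codebooks that assign it incompatible meanings. Nothing about the ball itself favours one over the other; the codebooks here are obviously invented for the example.)

    # The physical fact: you were handed the green ball rather than the red one.
    signal = "green"

    # Two interpretive conventions, both perfectly consistent with that fact:
    codebook_1 = {"green": "the English attack by sea",
                  "red":   "the English attack by land"}
    codebook_2 = {"green": "dinner is postponed",
                  "red":   "dinner is at eight"}

    print(codebook_1[signal])   # one 'meaning' of the very same ball
    print(codebook_2[signal])   # another; the difference lies in the codebook, not the ball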

    It’s exactly the same with computation. That the machine I describe is an adder is just the same kind of fact as that a green ball means that the English attack by sea: without the right interpretation, there’s simply no fact of the matter there.

    Hence, we can’t explain mind—which we use to furnish these interpretations—in terms of computation—which is always dependent on such interpretations.

  64. 64. Jochen says:

    In other words, what we’re saying when we say ‘the machine computes the sum of its inputs’ is the same kind of thing as what we’re saying when we say ‘the word ‘dog’ means a certain kind of quadruped’. It’s convention: it could just as easily be different. ‘Dog’ could mean a certain plant, a style of literature, or just be a nonsense syllable. Likewise, the ‘adder’ may be thought of as adding, but could just as well perform a plethora of other functions.

    Any way around this would have to postulate that there are symbols that, somehow, just inherently mean things. This does not strike me as reasonable.

  65. 65. James of Seattle says:

    Jochen,

    I think I see the disconnect between you and me, but I am not sure I will be able to convince you because you seem wedded to the idea that computation is necessarily independent of mind (which does the interpreting). For what follows I ask that you suspend this stance and consider the possibilities presented on their own merits.

    What I think you are missing is that interpretation is computation.

    Let me change the example from an adder, which I don’t find intuitive, to a kind of thermostat. This thermostat, instead of regulating the heat of a room, simply closes a circuit that turns on a light if the temperature is above 70 degrees Fahrenheit. In this case the thermostat interprets a certain amount of ambient kinetic energy as “turn on the light”. That’s what I am calling a computation. The output (light on or off) is determined by the input (ambient kinetic energy). Presumably the purpose of this thermostat was simply to generate a symbol relating to the temperature.

    Now a subsequent system can take that light as input, and depending on its purpose, produce an output, such as displaying “The temperature is 70 degrees”. Alternatively, the output might be opening windows (making a proper thermostat). These are interpretations/computations, determined by the functions of the secondary systems as explained by their respective purposes. With some deeper knowledge, a system such as yourself can make the computation/interpretation that the light is creating heat and possibly working against the thermostatic purpose.
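
    (Here is the two-stage picture I have in mind, sketched in Python with an invented 70-degree threshold: the first mechanism turns a temperature into a symbol, and each downstream mechanism gives that same symbol its own use, according to its own purpose.)

    def thermostat(temp_f):
        # First mechanism: produce a symbol (light on or off) from the ambient temperature.
        return temp_f > 70.0          # True means the light is on

    def display(light_on):
        # One downstream 'interpreter': announce the temperature.
        return "the temperature is above 70 degrees" if light_on else "the temperature is 70 degrees or below"

    def window_controller(light_on):
        # Another downstream 'interpreter': regulate the room.
        return "open the windows" if light_on else "leave the windows shut"

    light = thermostat(73.5)
    print(display(light))             # the temperature is above 70 degrees
    print(window_controller(light))   # open the windows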

    So, does that change your mind? 🙂

    *

  66. 66. Jochen says:

    James of Seattle:

    you seem wedded to the idea that computation is necessarily independent of mind

    I presume you meant to write ‘dependent on mind’ here. And it’s not an idea that I came to easily and without a fight—indeed, an earlier FQXi-entry of mine attempted to rigorously define an information-based ontology in order to attack the hard problem of consciousness, but the solution just never seemed satisfactory. But in the end, it’s just a simple fact that for every system you claim computes a function x, I can explicitly specify a function y that it can just as well be considered to compute, with the difference between both cases merely resting in interpretation.

    It’s again exactly the same as the fact that you can always decode a given string of symbols in several distinct ways. I suppose you take no issue with that, at least?

    Let me change the example from an adder, which I don’t find intuitive

    The purpose of the adder example is exactly that it allows us to check our intuitions by being fully well specified: it is completely clear that you can, by a mere relabeling of what the LEDs represent, change the algorithm that is implemented by the system. If that goes against your intuition, then maybe you should reconsider that intuition.

    In this case the thermostat interprets a certain amount of ambient kinetic energy as “turn on the light”.

    No; this is your interpretation of what the thermostat does. The thermostat knows nothing of temperature, or of light: it is simply a physical mechanism where a state-change caused by an increase of temperature closes a circuit. Anything more than that is unearned.

    Furthermore, this is not actually what one typically considers a computation; it’s a control loop, which doesn’t work on formal objects, as a computation does, but which regulates physical quantities. It’s easy, in this sort of example, to muddy the waters: we never look at the world without interpreting it, and thus, we see interpreted things everywhere in the world, and consider them a feature of the world, rather than of our minds. It’s the old problem of the fish that don’t know what water is, since they’re too immersed in it to notice. So it’s easy to think that a rise in temperature means ‘switch the light on’ to the thermostat; but without one’s interpretation, this is not anything like a meaning.

    Otherwise, every physical interaction would suddenly acquire a meaningful nature: the jagged piece of rock that sends a ball tumbling down an incline bouncing up into the air would mean ‘jump three meters’ to the ball. But that’s again just overinterpretation: it merely causes the ball to jump three meters, just as the temperature merely causes the thermostat to switch on the light.

    This doesn’t change if we add further systems that react to the thermostat’s light going on. A display showing the temperature does not interpret the light; you interpret what’s shown on the display. Think about somebody speaking a curious language in which ‘temperature’ means ‘color’ and ’70 degrees’ means ‘pink’: to them, the display would ‘interpret’ the thermostat’s light as indicating that ‘the color is pink’. But neither the mapping of the word ‘temperature’ to average molecular kinetic energy nor the mapping to light frequency has any precedence over the other, so that person is just as justified in their pronouncement: the interpretation comes at the level of the mind watching the mechanism.

    Or again, take a window being blown open by the wind: do you think that this is because the ambient air pressure indicated to the window that it ought to open? But that’s no different from the window being opened by the thermostat. The physical causality is a little more complex, but at bottom it is exactly the same kind of process.

  67. 67. James of Seattle says:

    Jochen,

    I’ll take one last shot. The key is function (and purpose, in a broad sense). The thermostat was designed/situated with a purpose, and therefore a specific function. When it functions correctly, the light goes on when the temperature reaches 70. The display mechanism’s function is to announce the temperature. The wind blowing open the window does not have a purpose, and therefore no function. I am defining this as computation. This is what I mean when I say mind is computational.

    Now you can re-interpret the output, but that does not change the function of the original mechanism. But your re-interpretation is a second computation.

    You can say “that’s not what I mean by computation”, but that doesn’t matter. I say every mental event is my kind of computation. Every interpretation is this kind of event, and I’m calling it computation for lack of a better word.

    *

  68. 68. David Duffy says:

    Hi Jochen: “that one bit of information tells you the answer to a yes/no question”… is exactly the same problem as the original Maxwell’s demon – how is it that a “neat fingered” intelligence breaks the laws of thermodynamics? How is it that intentionality leads to my brain getting its 2000 kJ every day by creating “negative entropy”?

    Where does the viewer of the one-bit-of-information green lantern get all that data from? In that case it is simple. It was all transferred earlier into a “system possessing multiple, distinct metastable states”. As in the case of Maxwell’s demon, one has to look at all the involved systems. The fact that I walk around with mental representations of refrigerators means that the correct model of my brain’s thermodynamics must include the properties of supermarkets etc, as per the “Law of Requisite Variety”. Correlations are not purely syntactic, in that within a physical computing device they represent the nonequilibrium free energy that can be realized if the transduction between measurement and action is efficient enough. And this is just metabolism and environmental prediction – I have mentioned nothing of the informational nature of replication in biology at the molecular level.

  69. 69. Jochen says:

    James of Seattle:

    I’ll take one last shot. The key is function (and purpose, in a broad sense). The thermostat was designed/situated with a purpose, and therefore a specific function.

    I think I get what you’re proposing, and I’ve already told you why I think it doesn’t work; simply restating your proposal won’t do any more to convince me.

    In short, humans just aren’t designed for a purpose; evolution doesn’t have a purpose. And even if it did, it would follow that a physically identical copy that randomly coalesced from a swamp would not have consciousness, since it would not be designed for a purpose and hence would have no function. And yet still, if you design something with an explicit purpose in mind, nothing is going to deter me from using it for a different purpose. Indeed, in general, merely being presented with the artifact isn’t even going to tell me its purpose: it’s not an objective property of the physical object itself.

    I am defining this as computation.

    You’re of course free to define things any which way you like, but in a discussion, this runs the risk of Humpty Dumpty problems: if your words always mean exactly what you want them to mean, I’m left ignorant of what it is you mean (ironically, this is exactly what I’m trying to get across re computation). So just to get clear on this point, I tend to think of computation in roughly the same way wikipedia has it:

    Computation is any type of calculation that includes both arithmetical and non-arithmetical steps and follows a well-defined model understood and described as, for example, an algorithm.

    In other words, something like what the adder does, under the right interpretation. Using this definition, I think it also becomes completely clear how the adder can be seen to implement distinct algorithms, and hence, how computation depends on interpretation.

    ————————————————————

    David Duffy:

    is exactly the same problem as the original Maxwell’s demon – how is it that a “neat fingered” intelligence breaks the laws of thermodynamics? How is it that intentionality leads to my brain getting its 2000 kJ every day by creating “negative entropy”?

    I don’t really get what you mean here. Neither my brain nor Maxwell’s demon breaks the laws of thermodynamics.

    As in the case of Maxwell’s demon, one has to look at all the involved systems.

    This is just an argument from complexity: because it’s easy to see in simple systems that no information is actually involved, an appeal is made to complex systems that aren’t so easily analyzed anymore, because maybe, somehow, somewhere, information is going to creep in there. But it doesn’t: it’s the same as trying to define a word by reference to other words—the word never ends up getting defined, since you have to define all the defining words first, and those defining them, and so on, so meaning is eternally deferred. Ultimately, one must ground everything in something other than words—i.e. the real world. The same with information and meaning: digging deep enough, one always finds a mind interpreting some symbols as pertaining to something beyond themselves.

    And this is just metabolism and environmental prediction – I have mentioned nothing of the informational nature of replication in biology at the molecular level.

    Which is a good thing, since there is none. What happens are processes that are entirely analogous to keys opening locks: by their form, proteins, enzymes, and what have you, cause or catalyze certain chemical reactions. This is, again, not different from the wind blowing open a door; a strand of mRNA doesn’t ‘inform’ a ribosome to assemble proteins any more than the air pressure ‘tells’ the door to open. The pressure simply reaches a certain strength that overcomes the door’s inertia; and likewise, the shape of the mRNA triggers changes in the state of the ribosome that lead to certain amino acids being preferentially attached (or something like that; high school biology was a long time ago, so I apologize for any errors).

    Sure, it is often advantageous to speak of ‘the genetic code’, and of mRNA as ‘carrying information’ about proteins. There’s nothing wrong with that, as long as one is clear on the fact that such talk is metaphorical. It’s not even strictly wrong to say that ‘this mRNA means that protein to the ribosome’. But meaning can’t be exhausted by this: it leads to infinite regress. This is the picture of mental symbols meaning something to an internal observer: in other words, the homunculus fallacy. Hence, one needs to work a little more in order to get a mind out.
