Information about the brain is not the same as information in the brain; yet in discussions of mind uploading, brain simulation, and mind reading, the two are quite often conflated. Equivocating between the two makes the task seem far easier than it really is. Scanners of various kinds exist, after all, and have greatly improved in recent years; technology usually goes on improving over time. If all we need is a really good scan of the brain in order to understand it, then surely it can only be a matter of time? Alas, information about the brain is an inadequate proxy for the information in the brain that we really need.

We’re often asked to imagine a scanner that can read off the structural details of a brain down to any required level of detail. Usually we’re to assume this can be done non-destructively, or even without disturbing the content and operation of the brain at all. These are of course unlikely feats, not just beyond existing technology but hard to imagine even on the most optimistic view of future developments. Sometimes the ready confidence that this miracle will one day be within our grasp is so poorly justified that I suspect the belief in such magic scans is buoyed up not by sanguine technological speculation but, unconsciously, by much older patterns of thinking: that the mind resides in breath, or airy spirits, or some other immaterial substance that can be sucked out of a body and replaced without physical consequences. On the other side, of course, it’s perfectly true that lots of things once considered impossible are now routinely achieved.

But even allowing ourselves the most miraculous knowledge of the brain’s structure, so what? We could have an exquisite plan of the structure of a disk or a book without knowing what story it contained. Indeed, it would take only a small degree of inaccuracy or neglect in copying to leave us with a duplicate that strongly resembled the original but reproduced none of the information-bearing elements: a disk with nothing on it, a book filled with random ink patterns.
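
A toy example makes the near-miss point concrete. This is a hypothetical sketch, not a claim about any real scanning technique: a "copy" that perfectly preserves some plausible-sounding fidelity measures (length, exact character frequencies) while destroying the ordering that actually carries the message.

```python
import random

def statistical_copy(text, seed=42):
    """Produce a 'copy' that preserves length and exact character
    frequencies -- but scrambles the ordering that carries the message."""
    chars = list(text)
    random.Random(seed).shuffle(chars)  # deterministic shuffle for the demo
    return "".join(chars)

original = "information about the brain is not information in the brain"
duplicate = statistical_copy(original)

# By these measures of 'fidelity' the duplicate is indistinguishable...
assert len(duplicate) == len(original)
assert sorted(duplicate) == sorted(original)
# ...yet the message itself is gone: random ink patterns.
print(duplicate)
```

The duplicate passes every check we thought to apply, which is exactly the trap: the checks were about the wrong properties.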

Yeah but, say the optimists: the challenge may be huge, the level of detail required orders of magnitude beyond anything previously attempted, but if we copy something with enough fidelity, the information necessarily comes along with the copy. A perfect copy just has to include a perfect copy of the information. Granted, in the case of a book it’s not much use if we have the information but don’t know how to read it. The great thing about simulating a brain, though, is that we don’t even need to understand it. We can just set it up and start it going. We may never know directly what the information in the brain was, but it’ll do its job; the mind will upload, the simulation will run.

In the case of mind reading the marvellous flexibility of the mind also offers us a chance to cheat by taking some measurable, controllable brain function and simply using it as a signalling device. It works, up to a point, but it isn’t clear why brain communication by such lashed-up indirect routes is any more telepathy than simply talking to someone; in both cases the actual information in the brain remains inaccessible except through a deliberate signalling procedure.

Now of course a book or a disk is in some important ways actually a far simpler challenge than a brain. The people who designed, made, and use the disk or the book take great care to ensure that a specific set of properties encodes the required information in a regular, readable form. These are artefacts designed to carry information, as is a computer. The brain is not artefactual and does not need to be legible. There’s no need for a clear demarcation between information-bearing elements and the rest, and no need for a standardised encoding or intelligible structures. There are, in fact, many complex elements that might have a role in holding information.

Suppose we recalibrated our strategy and set out to scan just the information in the brain; what would we target? The first candidate these days is the connectome: the overall pattern of neural connections within the brain. There’s no doubt this kind of research is currently very lively and interesting – see for example this recent study. Current research remains pretty broad-brush stuff, and it’s not really clear how much detail will ever be manageable; but what if we could map the connections perfectly? How could we read off the content? It’s actually highly unlikely that all the information in the brain is encoded as properties of a network. The functional state of a neuron depends on many things, in particular its receptors and transmitters; the known repertoire of these has grown greatly in recent years. We know that the brain does not operate simply through electrical transmission: chemical controls from the endocrine system and elsewhere play a large and subtle part. It’s not at all unlikely that astrocytes, non-neuronal cells in the brain, have a significant role in modulating and even controlling its activity. Nor is it unlikely that ephaptic coupling or other small electrical induction effects play a significant role too. And while I myself wouldn’t place any bets on exotic quantum physics being relevant, as some certainly believe, it would be very rash to assume that biology has no further tricks up its sleeve in the shape of important mechanisms we haven’t even noticed yet.
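
As a toy illustration (not a biophysical model) of why the wiring diagram alone underdetermines behaviour: the two networks of simple threshold units below share exactly the same connectome, yet flipping a single synapse from excitatory to inhibitory sends them to different stable states. All numbers here are invented for illustration.

```python
def step(state, weights, threshold=0.5):
    """One update of a toy network of binary threshold units.
    weights[i][j] is the signed strength of the connection j -> i."""
    return [
        1 if sum(w * s for w, s in zip(row, state)) > threshold else 0
        for row in weights
    ]

def run(weights, state, steps=5):
    """Iterate the network a few steps and return the final state."""
    for _ in range(steps):
        state = step(state, weights)
    return state

# Identical connectome (same nonzero pattern), different synaptic signs:
excitatory = [[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]]
inhibitory = [[0, 1, -1],   # one connection flipped to inhibitory
              [1, 0, 1],
              [1, 1, 0]]

start = [1, 1, 0]
print(run(excitatory, start))  # → [1, 1, 1]: all units stay active
print(run(inhibitory, start))  # → [0, 1, 1]: unit 0 is silenced
```

A map of which cells connect to which would record these two networks identically; the difference lives in chemistry the map does not capture.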

None of that can currently be ruled out of court as irrelevant. A computer has a specified way of working and if electrical interference starts changing the value of some bits in working memory you know it’s a bug, not a feature. In the brain, it could be either; the only way to judge is whether we like the results or not. There’s no reason why astrocyte states, say, can’t be key for one brain region or for one personality, and irrelevant for others, or even legitimate at one time and unhelpful interference at others. We just cannot know what to point our magic scanner at, and it may well be that the whole idea of information recorded in but distinct from a substrate just isn’t appropriate.

Yeah but again, total perfect copy? In principle, if we get everything, we get everything, don’t we? The problem is that we can’t have everything. Copying, simulating, and transmitting all involve transitions during which some features are unavoidably left out. Faith in the possibility of a good copy rests on the belief that we can identify a sufficient set of relevant features: so long as those are preserved, we’re good. We may be optimistic that one day we can capture the physical properties well enough; but that is still just information about the brain.
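
The worry can be sketched in miniature. The feature names below are invented for illustration: a "scan" that keeps only the features we judged relevant collapses two genuinely different originals onto identical copies, and no downstream processing can recover the difference.

```python
def scan(neuron, relevant_features):
    """A hypothetical 'scan' that records only the features
    we decided in advance were the relevant ones."""
    return {k: neuron[k] for k in relevant_features}

# Two neurons identical in wiring and firing rate, differing only in a
# property we assumed didn't matter (astrocyte coupling, say):
neuron_a = {"connections": (2, 5, 9), "firing_rate": 4.0, "astrocyte_coupling": 0.1}
neuron_b = {"connections": (2, 5, 9), "firing_rate": 4.0, "astrocyte_coupling": 0.9}

chosen = ["connections", "firing_rate"]  # our guess at the sufficient feature set
copy_a = scan(neuron_a, chosen)
copy_b = scan(neuron_b, chosen)

# The copies are identical: whatever the omitted feature encoded
# is unrecoverable from the copies alone.
assert copy_a == copy_b
```

If the omitted feature carried information, the loss is silent: both uploads would run, and nothing about them would reveal what went missing.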

19 Comments

  1. Scott Bakker says:

    But I also think it’s worth remembering the *industrial scale* of the research we’re talking about. Even the EU ‘Connectome’ initiative (boondoggle) is small potatoes in the scheme of things. The NIH earmarked some 5.5 billion dollars (!) for neuroscientific research in 2015. The Neurotechnology Council estimates that other Western nations spend on average 2 billion dollars per year on neuroscientific research. Meanwhile, private industry keeps plowing in more and more money (though the amount varies depending on the source you consult).

    If you accept that ‘knowing the brain’ ultimately amounts to knowing how to fix, manipulate, map, augment, and predict the brain with ever greater resolution and reliability, and you appreciate the degree to which the enormous resources dedicated turn on tangible research results, then it seems clear that with every passing year we will know more and more about the brain. It seems to me that it’s just a matter of time before this process renders questions regarding the ‘knowability of the brain’ moot.

    The temptation is to look at ‘knowledge’ in rarefied terms, as some kind of representational canon that somehow captures all the relevant functional details, when in fact it’s about instrumentally incorporating the brain into the morass of our social practices. On the former account the brain does seem ‘impossible to know,’ but on the latter account, it seems all but assured.

  2. Hunt says:

    Science has a very hard time with positive or negative prediction. We simply can’t say much about that which we don’t fully understand, let alone that which baffles us. If anything, were I to say that everything we can possibly imagine will come to pass, and then some, I would ultimately come closer to the future historical truth than with any negative prediction I might make. Whatever we can imagine, it’s going to happen.

    Imagine fourteenth century philosophers discussing almost any scientific or technological topic of today. Imagine them attempting to fathom, say, micro-electronics. If we allow that gap in ability to guide our view of our technological future, it’s probably the closest we’re going to get to predicting it.

  3. David Duffy says:

    It does depend on what you want from your brain model. I think the usual idea of an upload is a computationalist one: a good or poor copy of a running algorithm (which we infer from the state of the substrate). I suppose the stepwise replacement thought experiment implies we end up with an upload, but I think there is an implicit idea that the replaced bits allow continuation of a process. One particular operational criterion for success of an upload or simulation (e.g. Greg Egan’s Zendegi): whether the original person is satisfied with the answers received in a Turing-Test-like setup covering goals, emotional responses to events, etc.

  4. Richard J.R.Miles says:

    I do not think that human implants for communication and internet connection are that far in the future. This will assist our understanding of human function.

  5. Callan S. says:

    Yeah – really the word ‘copy’ is kludge thinking itself. It’s not a ‘copy’, not a duplicate – it’s a different set of atoms and molecules entirely. There’s not much point doing super fine high res measurement of a brain, then turning around and using low res kludge words like ‘copy’.

  6. Sci says:

    It seems to me that science validating the idea of mind uploads simultaneously proves eliminativism beyond a doubt?

    A rather hollow prize.

  7. Arnold Trehub says:

    Peter: “Alas, information about the brain is an inadequate proxy for the information in the brain that we really need.”

    In the SMTT experiments, a vivid hallucination is self-induced in a subject, and the same hallucination is induced in N observers looking over the subject’s shoulder. Does this provide the independent observers the information that is in the brain of the subject?

  8. Peter says:

    Well, it’s no surprise that people looking at the same thing have reportedly similar experiences. But are they recorded in the brain in ways that are transferable?

    You probably think this is close to being demonstrable with technology just a bit better than what we’ve got, which must be frustrating. For certain levels of vision you might be right, from my limited understanding; but I’m still sceptical so far as actual experience goes.

  9. Arnold Trehub says:

    Peter,

    What the subject and the observers are “looking at” is nothing like what they are consciously experiencing. The subject induces a self-hallucination that is shared by others. There is no *perception* of an object in space, only a shared hallucination of an object in space with subjective properties that are controlled by the subject.

  10. Tanju Cataltepe says:

    There is a recent paper from EPFL in Switzerland about a neuron-level simulation of a small section of a mouse brain (about 0.29 cubic millimeters, 31 thousand neurons). Abstract is [here](http://www.cell.com/cell/abstract/S0092-8674(15)01191-5). The Guardian report is [here](http://www.theguardian.com/science/2015/oct/08/complex-living-brain-simulation-replicates-sensory-rat-behaviour).

  11. Sergio Graziosi says:

    Peter,
    you raise very important questions/worries, and I couldn’t agree more: how can we know that we are looking at all the right things?
    One obvious short answer could be: when we can produce reliable predictions of observable phenomena. So one approach is to reduce the scale of our observations as much as we can, so as to make reliable predictions possible (with our current resources and knowledge). That’s a way to look at the Blue Brain results (mentioned by Tanju) and try to make sense of the whole project. They aren’t just trying to scale up, they are trying to scale up as little as possible!
    Maybe this approach is viable, but I’m one who thinks that we need to develop a good (stronger) theory to help us identify what’s relevant: there is just too much complexity to tame, and scaling up and up and up looks wasteful to me. Granted, I’m instinctively very lazy, so my biases are certainly at work here.

    A good (if a little hurried) commentary on the latest Blue Brain results is provided by Adam J Calhoun (@neuroecology on Twitter, worth a follow IMHO).
    (it’s good to be back!)

  12. Sci says:

    @Tanju: Interesting stuff – pressed for time, so I’ve read the Guardian article for now. Did I read it right that it would take 20 billion pounds to completely model the rat’s brain?

    Would be amusing if mind “uploading” ended up being out of the price range of all but the heads of major corporations. Imagine having those guys (or perhaps illusory copies of those guys?) around running companies till the Sun expands to consume the Earth? 🙂

  13. Charles Wolverton says:

    The temptation is to look at ‘knowledge’ in rarefied terms, as some kind of representational canon that somehow captures all the relevant functional details, when in fact it’s about instrumentally incorporating the brain into the morass of our social practices. (Scott, comment 1)

    Perhaps consistent with this quote, I view the relevant functionality of the brain as part of a comm system. In that model, the environment communicates with an organism in order to produce responsive actions. The communication medium is sensory stimulation the results of which undergo some processing in the organism to extract the “channel information” (raw data with no inherent meaning; AKA “Shannon information”). The channel information undergoes further processing in the brain which results in responsive action (possibly latent). I’ll call such action the “application information” received from the environment. (See note below.)

    In that view, part of the brain acts as an interpreter of the channel information contained in the “signal” received from the environment. I take part of Peter’s “information about the brain” to be essentially a design spec for such an interpreter. However, I don’t know what or where Peter’s concept of “information in the brain” is. It can’t be in the interpreter because then it would just be a subset of the application information. So, it must be in the complement of that part of the brain. It can’t correspond to channel information which essentially is in the sensory stimulation, not in the organism, and in any event is transient. So, what is it? (Rightly or wrongly, I interpret Scott’s Sellarsian quote above as suggesting that he has a similar problem with the concept.)

    The content of a book or disk is essentially channel information which, as Peter notes, has to be interpreted by a reader (human or computer) to produce action. Again, I don’t know what else in a book or disk corresponds to the concept “information in X”.

    Of course, all of Peter’s concerns about accuracy of transmission or reproduction apply to information about the brain, ie, the interpreter’s design spec. But those concerns apply to replication of any mechanism: an inadequately detailed design spec is likely to result in degraded functionality.

    Uploading is reminiscent of Davidson’s Swampman thought experiment in which a human organism is transferred molecule-by-molecule to form a new physically identical organism. Davidson claims that despite being behaviorally identical, there is a psychological difference between the source and target organisms. Could something like information in the brain be what Davidson was trying to capture with “psychological”? (Aside: I’ve never understood his argument – which seems to me clearly wrong – but I’ve found neither refutations nor convincing confirmations. I’d be interested in others’ opinions on his claim.)

    Note: Recall Paul Revere’s ride and “One lantern if by land, two if by sea”. The channel information was the single bit conveyed by that binary optical channel. A separate step of interpretation was required in order to convert that bit into application information, ie, which of two possible responsive actions to execute. (For anyone familiar with comm protocol stacks, those information types roughly correspond to the content conveyed by the data link and application layer protocols respectively.)

  14. Charles Wolverton says:

    Arnold:

    In the SMTT experiment, are the visible shapes short, oriented line segments or merely dots (or other symmetrical shape)? And in either case, does the subject know that the visible shapes are produced by moving a vertical slit in front of a geometric background figure rather than, say, merely shining movable focused light shapes on the back of a uniform translucent surface?

    Ie, how much prior knowledge does the subject have about the mechanics of the experimental setup that might be used to infer the triangular shape?

  15. Arnold Trehub says:

    Charles,

    In the SMTT experiment the tiny line segments show no orientation (essentially dots). Also, it is essential to understand that a slit does not move over a background figure. The slit is stationary while the hidden figure moves in horizontal oscillation. What the subject sees at < 2cps is a vertically oscillating dot above a stationary dot.

  16. Arnold Trehub says:

    I should add that the SMTT hallucination is induced with subjects who have absolutely no knowledge of the mechanics of the experimental setup or its purpose.

  17. Sean says:

    So I’m working on a technological Utopia; I’m working out its world. This question of an upload was the first one I was thinking about. I was wondering if we could isolate the patterns in a specific brain and match specific patterns to thoughts through words, and patterns to experiences.

    What I am suggesting is that theoretically there could be a machine that reads brainwaves that are tagged by a harmless neurotoxin following the synaptic firing. The computer has an algorithm associated with an individual. The algorithm is created through associated thoughts and experiences. Any thoughts on this… I could use some insight into its possibility.

  18. Zuum says:

    Essalam alykom, my brother Sid Ahmed. I have already read what you wrote; Allah ma3ak, and we of aildnbeiea are behind you forever. Your brother, Samir.

