A whole set of interesting articles from IEEE Spectrum explores the question of whether AI can, and should, copy the human brain more. Of course, so-called neural networks were originally inspired by the way the brain works, but they represent a drastic simplification of a partial understanding. In fact, they are so unlike real neurons that it’s rather remarkable they turn out to perform useful processes at all. Karlheinz Meier provides a useful review of the development of neuromorphic computing, up to contemporary chips with impressive performance.

Jeff Hawkins suggests the brain is better in three ways. First, it learns by rewiring: by growing new synapses. This confers three benefits: learning that is fast, incremental, and continuous. The brain does not need lengthy retraining to learn new things. Remarkably, he says a single neuron can do substantial pieces of pattern recognition and acquire ‘knowledge’ of several patterns without their interfering with each other.
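Hawkins’ actual models are far more elaborate, but the single-neuron claim can be caricatured in a few lines. Everything here (the class name, the threshold of 8, the 1,000-line input space) is an invented toy, not Hawkins’ algorithm: the point is only that storing each pattern on its own ‘dendritic segment’ makes learning one-shot, and learning a new pattern leaves the old ones untouched.

```python
import random

class ToyNeuron:
    """A toy neuron that stores each learned pattern on its own 'dendritic
    segment' (a set of synapses). It fires if any one segment overlaps the
    current input enough. Learning just grows a new segment, so it is
    one-shot and does not disturb previously stored patterns."""

    def __init__(self, threshold=8):
        self.segments = []          # each segment: a set of active input indices
        self.threshold = threshold  # overlap needed for a segment to fire

    def learn(self, pattern):
        self.segments.append(set(pattern))   # incremental: no retraining

    def recognises(self, inputs):
        active = set(inputs)
        return any(len(seg & active) >= self.threshold for seg in self.segments)

n = ToyNeuron()
a = set(random.sample(range(1000), 10))
b = set(random.sample(range(1000), 10))
n.learn(a)
n.learn(b)   # learning b does not interfere with a
assert n.recognises(a) and n.recognises(b)
```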

The second way in which the brain is better is that it uses sparse distributed representations: a particular idea such as ‘cat’ can be represented by a large number of neurons, with only a small percentage needing to be active at any one time. This makes the system robust against noise and damage, and because some of the ‘cat’ neurons may play roles in the representation of other animals and other entities, it also makes the brain quick and efficient at recognising similarities and dealing with vague ideas (an animal in the bush which may or may not be a cat).
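A toy sketch shows why such codes are robust (the sizes here, 2048 cells with 40 active, are arbitrary choices for illustration, not any particular published model): a damaged version of a pattern still overlaps it heavily, while two unrelated sparse patterns barely overlap at all.

```python
import random

N, ACTIVE = 2048, 40      # 2048 cells, roughly 2% active at once

def random_sdr():
    """A sparse distributed representation: a small set of active cells."""
    return set(random.sample(range(N), ACTIVE))

def overlap(x, y):
    return len(x & y)

cat = random_sdr()

# Noise/damage: knock out 10 of cat's cells and activate 10 stray ones.
survivors = set(random.sample(sorted(cat), ACTIVE - 10))
noisy_cat = survivors | set(random.sample(range(N), 10))

unrelated = random_sdr()   # an unrelated concept

# The noisy pattern still overlaps 'cat' far above chance (expected chance
# overlap of two random codes is under one cell), so recognition survives.
assert overlap(noisy_cat, cat) >= 25
assert overlap(unrelated, cat) < 15
```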

The third thing the brain does better, according to Hawkins, is sensorimotor integration. He makes the interesting claim that the brain effectively does this all over, as part of basic ordinary activity, not as a specialised central function. Instead of one big 3D model of the world, we have what amounts to little ones everywhere. This is interesting partly because it is, prima facie, so implausible. Doing your modelling a hundred or a million times over is going to use up a lot of energy and ‘processing power’, and it raises the obvious risk of inconsistency between models. But Hawkins says he has a detailed theory of how it works, and you’d have to be bold to dismiss his claim.

There are several other articles, all worth a look. There are, in fact, several different reasons we might want to imitate the brain. We might want computers that can interface with humans better because, in part, they work in similar ways. We might want to understand the brain better and be able to test our understanding; an ability that might have real benefits for treating brain disease and injury, and to some degree make up for the ethical limitations on the experiments we can perform on humans. The main focus here, though, is on learning how to do those things the brain does so well, but which cannot yet be done efficiently, or in some cases at all, by computers.

As a strategy, copying the brain has several drawbacks. First, we still don’t understand the brain well enough. Things have moved on greatly in recent years, but in some ways that just shows how limited our understanding was to begin with. There’s a significant danger that by imitating the brain without understanding, we end up reproducing features that are functionally irrelevant; features the brain has for chance evolutionary reasons. Do we need a brain divided into two halves, as those of vertebrates generally are, or is that unimportant? Second, one thing we do know is that the brain is extraordinarily complex and finely structured. We are never going to reproduce all that in full detail – but perhaps it doesn’t matter; we’ve never replicated the exquisite engineering of feather technology either, but that didn’t stop us achieving flight or understanding birds.

I think the challenge of understanding the brain is unique, but trying to copy it is probably an increasingly productive strategy.

25 Comments

  1. Dave Xanatos says:

    I am in agreement with this for the most part. From everything I’ve studied, down to the smallest details of neuron behavior, neurons seem to have their own “memory”, and an ability to respond independently of the “brain” on the larger scale. I think of the brain not as the controlling entity, but rather as being informed by the sum of the decisions and micro-memories of each node.

    For this reason, rather than creating neural networks in software, my approach has been to create each “neuron” as a small computer in itself, using a combination of microcontrollers and Raspberry Pis (NOT in a typical “cluster” configuration). Of course the proof will be in the pudding… if what I’m doing works as well on its own as it has so far in my development environment – maybe then I’ll have earned some bragging rights. For now… it’s fascinating research 🙂

    Dave

  2. john davey says:

    I can understand copying a brain physically – using tissue cultures and the like – but ‘copying’ via computational models is not, and never will be, copying. It is modelling, which is not the same thing at all.

    Physical elements have causal powers, including the causal power to do the interesting bits – creating conscious mental states for instance. That’s not going to happen with a computer, but one would think it must happen with a physical reproduction of a brain.

    There may be some value in computationally modelling the brain, but models are restricted by the framework of understanding in which the model sits. As our understanding of the brain is superficial, the models are likely to be poor and the value gained therefore minimal.

    Current models are fixated on higher-order structures such as synapses, and on the compulsion to believe that these are pathways for information signals, based upon little or no evidence. And as we struggle even to find out how nematode brains, with a few hundred cells, manage to work, it may be an idea to learn to crawl first: human brains are far more complex than nematode brains, so if little worms are a mystery then humans are impossible.

    JBD

  3. Disagreeable Me says:

    Hi Peter,

    > In fact, they are so unlike real neurons it’s really rather remarkable that they turn out to perform useful processes at all.

    Yes and no, I would say. What it suggests to me is that what little structure they share with actual neurons is probably what matters most for the brain’s information processing. Or alternatively that you can build a computer out of almost anything!

  4. Disagreeable Me says:

    Hi Dave

    > For this reason, rather than creating neural networks in software, my approach has been to create each “neuron” as a small computer in itself, using a combination of microcontrollers and Raspberry Pi

    Just to clarify something — not a criticism of your work.

    It seems to me there is no reason one could not give neurons memory etc. in software also. The advantage of your approach can only be performance or ease of maintenance or other secondary qualities. It cannot be to enable behaviour that would otherwise be impossible in software alone. Right?

  5. Disagreeable Me says:

    Hi John Davey,

    > I can understand copying a brain physically – using tissue cultures and the like – but ‘copying’ via computational models is not, and never will be, copying.

    That is certainly a tenable and popular view, but I hope you understand that it is controversial and not everyone will agree. It depends on whether you believe that consciousness is an artifact of a certain kind of information processing (as I believe) or whether it is an artifact of a certain inherently biological system (as you seem to believe).

    Copying via computational models does indeed duplicate and replicate the information processing happening in the brain. By this I mean that a digitally copied or simulated brain could in principle do everything a normal brain can do from a functional perspective, as long as the simulation is faithful enough. It may not be able to produce the pituitary’s hormones or the biological waste products of the brain, but it can take in input signals and produce output signals in the same manner as an actual brain, and should these signals be received from and fed into an appropriate robot body (or even a biological body), then that body would behave much as a human does.

    I’m guessing you would think this would be a case of “the lights are on but nobody’s home” — this body would be something like a philosophical zombie (albeit not being entirely physically the same as a human). But if I’m right that consciousness is an artifact of a certain kind of information processing, then consciousness and intentionality and qualia and the whole shebang will necessarily come along for the ride.

  6. John Davey says:

    hi Disagreeable


    “But if I’m right that consciousness is an artifact of a certain kind of information processing”

    Information can’t produce consciousness because information requires conscious information-processing agents in the first place. The tail can’t wag the dog. Information is of no causal significance whatsoever: as ethereal an idea as a painting, and there’s never been a painting that caused anything either, at least in the natural sense of the word.


    “Copying via computational models does indeed duplicate and replicate the information processing happening in the brain.”

    The correct answer is that it models the brain according to a certain very limited sense of comprehension. As neither you nor I have the slightest idea how brains work, the claim that it is ‘duplicating’ is just unfounded: a dogma, completely.

    “but it can take in input signals and produce output signals in the same manner as an actual brain”

    Is there any science to support the outrageous claim that all the brain does is process “signals”? Signals of what? How? Using what mechanism?

    JBD

  7. SelfAwarePatterns says:

    On Hawkins’ observations, I think it’s worth noting that every artificial neural network “rewires” itself. That’s inherently what a neural network does. Almost all of them up to now have been software, but as Disagreeable Me discussed, if consciousness is an information processing framework (a stance I agree with), that doesn’t matter. The advantage of the hardware will be enhanced performance (no small thing), but not additional capabilities.

    The point about sparse representation, I think, has more weight. The brain appears to boil things down to a remarkably sparse level, something I think computer science is still figuring out. It’s important to understand, though, that these representations exist in neural hierarchies: the neural layers close to sensory pathways are highly excited, and these in turn excite subsequent layers more selectively, until we get to the very sparse representations at the top of the hierarchy. A sparse pattern at the top layers only has representation by virtue of everything in the lower layers. Artificial neural networks follow this paradigm, but have a ways to go before they can do it anywhere near as well as organic brains, and the additional connectivity and pliability of biological neurons seem likely to be a factor.
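    That layer-by-layer sparsification can be caricatured in a few lines. The layer sizes and the k values here are arbitrary illustration, not a claim about any real network: each level applies a winner-take-all step that keeps fewer active units than the one below.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, k):
    """One level of a toy hierarchy: linear drive from the layer below,
    then keep only the k most active units, zeroing the rest."""
    drive = w @ x
    out = np.zeros_like(drive)
    top = np.argsort(drive)[-k:]   # indices of the k largest drives
    out[top] = drive[top]
    return out

x = rng.random(512)                 # sensory layer: densely active
w1 = rng.random((256, 512))
w2 = rng.random((128, 256))
h1 = layer(x, w1, k=40)             # mid layer: sparser
h2 = layer(h1, w2, k=8)             # top layer: very sparse

assert np.count_nonzero(h2) == 8    # only 8 of 128 units remain active
```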

    Peter, I like your last point. There’s enormous value in understanding how neurons work, but it’s worth remembering that evolution made do with what it had to work with, cells and cellular signalling. It might be that technology eventually accomplishes the same results with very different paradigms. We just don’t know what those might be yet.

  8. SelfAwarePatterns says:

    Hi John #6,

    “Information can’t produce consciousness because information requires conscious information-processing agents in the first place.”

    Do you see DNA as encoding information? If not, how would you say traits get passed on from parent to offspring? What’s being passed? If you do see it as information, how would you account for the fact that DNA predates conscious creatures on the Earth by billions of years?

    Or would you say it wasn’t information until conscious entities started interpreting it that way, but it nonetheless fulfilled its role before we interpreted it as information? If so, why couldn’t the information processing that is consciousness work similarly?

  9. Disagreeable Me says:

    Hi John Davey,

    > Information can’t produce consciousness because information requires conscious information-processing agents in the first place.

    What constitutes an information-processing agent? I’m guessing you’re going to say it has to be something like a human that already has intentionality and consciousness somehow, but to me a computer is an information processing agent, that could potentially get intentionality and consciousness by virtue of how it processes information.

    I’m not trying to assert anything or tell you you’re wrong, I’m just saying that the points you are stating as fact are controversial.

    > As neither you nor I have the slightest idea how brains work, the claim that it is ‘duplicating’ is just unfounded

    I don’t think it is controversial to say that a simulation of an information processing system duplicates its information processing (this is what I said was duplicated). I accept that it is controversial to say that it duplicates consciousness, but surely not the information processing. A simulated computer can process information as well as a physical computer.

    > Is there any science to support the outrageous claim that all the brain does is process “signals” ? Signals of what ? How ? using what mechanism ?

    Outrageous how? The brain accepts input nerve signals (chiefly from sensory organs) and produces output nerve signals (chiefly to muscles). That’s not all it does — it also accepts and produces hormones (if we take the pituitary as part of the brain), and it consumes oxygen and nutrients and it produces waste products including heat. I guess you could also say it plays the role of a mass weighing down the head somewhat, which could turn out to be important for balance or something. But accepting input signals and producing output signals certainly seems to me to be its main function. To me, it seems that consciousness is just part of its information processing. I accept that this is controversial, but then that’s my point.

  10. john davey says:

    SelfAware


    “Do you see DNA as encoding information?”

    No.


    “If not, how would you say traits get passed on from parent to offspring?”

    Biochemistry.


    “What’s being passed?”

    Nothing. Biochemical reactions take their course.


    “If you do see it as information, how would you account for the fact that DNA predates conscious creatures on the Earth by billions of years?”

    I don’t see it as information – however, information is always held from a conscious agent’s relative perspective, so it’s probably true to say that until there was a conscious agent who understood what DNA was, DNA was never capable of being treated as possessing information by a conscious agent. About 1950, I think.

    “Or would you say it wasn’t information until conscious entities started interpreting it that way”

    Yes


    “but it nonetheless fulfilled its role before we interpreted it as information”

    “information” is not causal and fulfils no role, so no.


    “If so, why couldn’t the information processing that is consciousness work similarly?”

    Consciousness has nothing to do with information. It is an irreducible natural phenomenon whose fundamental nature has no informative content.

    JBD

  11. john davey says:

    Disagreeable


    “but to me a computer is an information processing agent”

    I think it’s true to say a computer is an information processor, but so is a telephone. Both lack agency and are, in effect, value-added media.


    “that could potentially get intentionality and consciousness by virtue of how it processes information.”

    How? Unless the pathway is laid out, this is meaningless. It could potentially morph into a dog or invade North Korea too.


    “I don’t think it is controversial to say that a simulation of an information processing system duplicates its information processing”

    Fair enough, if that’s what it’s limited to.


    “The brain accepts input nerve signals (chiefly from sensory organs) and produces output nerve signals (chiefly to muscles).”

    Your chemical analysis is in fact the only definitively true objective statement about the brain you have made. The “signal processing” aspect is conjecture – there are precisely zero theories making a claim as to the content of these “signals”, which throws into doubt the value of that analogy. They are electrochemical activity.

    JBD

  12. David Duffy says:

    “The ‘signal processing’ aspect is conjecture” ??? So when we carry out optogenetic experiments that erase or create engrams, they’re not associated with memories?

  13. Tom Clark says:

    John Davey in 11:

    “Your [DM’s] chemical analysis is in fact the only definitively true objective statement about the brain you have made. The ‘signal processing’ aspect is conjecture – there are precisely zero theories making a claim as to the content of these ‘signals’…”

    There are theories, or at least well-evidenced hypotheses that go beyond conjecture, that support claims about the informational content of neural processing, for example the claim that part of the fusiform gyrus functions in the perception of faces. With damage to that area, you won’t see faces as such. I’m not sure why the brain’s higher-level functional organization that enables it to model the world outside the head, and to control behavior, isn’t as real, objective a feature of it as its chemical goings-on.

    https://en.wikipedia.org/wiki/Fusiform_face_area

  14. john davey says:

    David Duffy/Tom

    There is a specific claim that neurons work by transmitting ‘signals’, i.e. that the form of electrical activity found in brains possesses, in some kind of direct manner, informational content. It’s a low-level claim, speculative, and, with all due respect, not supported by either of your claims.

    To point out that electrical activity goes on in the brain (and may be localised) during certain activity doesn’t support the idea that neurons transmit ‘information’ between each other. That’s conjecture. It merely supports the idea that certain functions may be localised in the brain – and even then somewhat thinly, as all we are looking at, after all, is a certain spectrum of electrical activity which may or may not be important.

    To point out that a specific memory may be associated with a specific collection of neurons also amounts to much the same thing. It indicates that and no more; it does not support the conjecture that the electrical activity between neurons amounts to a ‘signal’ of determinable informational content.

    JBD

  15. David Duffy says:

    Dear John

    The evidence for information transfer within the nervous system is incontrovertible – the interest is not localisation but widespread correlation. Here is a discussion of the distribution of memory:

    http://www.mdpi.com/2076-3425/7/4/43/htm

    https://www.researchgate.net/profile/Melanie_Sekeres/publication/313542134_Mechanisms_of_Memory_Consolidation_and_Transformation/links/58b6ff8ba6fdcc2d14d6fc8c/Mechanisms-of-Memory-Consolidation-and-Transformation.pdf

  16. John Davey says:

    David

    I’m not disputing that the brain deals with information. I’m not even disputing that different locales within the brain store information and can even move it about.

    I am disputing the idea that the electrical activity corresponds to determinable information content – ‘signals’. Your examples are high level and simply don’t require it. The first work you provide, for example, talks about ‘some mechanism’ to transfer memories – it doesn’t specify that a ‘signal’ from another neuron does it.

    It’s all rather vague – as I’d expect. The idea that a single datum corresponds to a single burst of electricity would seem to me to be the overly simplistic consequence of using a computer analogy, no matter how thin.

    Jbd

  17. John Davey says:

    David

    Perhaps I might summarise my position a bit more clearly by saying that although I believe the brain is a machine with inherent information-processing capabilities, using information-flow imagery as a basis for examining the inherent mechanisms of the brain is likely to be flawed.

    One of the reasons for this is that one of the primary innovations of the brain – its biggest – is the ability to realise information in the first place. This mechanism is unlikely to be simple, but until it is even vaguely understood, all the physical metrics currently connected to neuroscience (EEG scans etc.) do not have an ‘information’ context in any way.

    Regs
    Jbd

  18. David Duffy says:

    Dear John. Unfortunately one has to read not just the article but all the references too. If the authors claim that the spatial coding is well understood, one checks the review cited which says “grid cells…fire when the animal is in any of multiple locations that form a triangular grid. Other cell types include conjunctive grid cells (these fire only when the animal is on a vertex of the grid and when the animal moves in a particular direction)” etc. They have a straightforward informational relationship with the world. This is quite aside from all the encoding of information from the peripheral nervous system and sensory organs, which we can follow using evoked potentials.

  19. john davey says:

    David

    The Sargolini article? “Straightforward”? Are you kidding?!

    That article indicates quite clearly how flawed neural-network models are: what it shows is that the neurons, the synapses, the biochemical and electrical activity cannot be neatly packaged into separate components; the system works as a whole. To be fair, the “straightforward” relationship you suggest is certainly not claimed by the authors of this report – at all.

    I’m sorry, it just doesn’t follow that a firing of a neuron corresponds to a definitive information flow. It just doesn’t, and nothing you’ve said requires it.


    “grid cells…fire when the animal is in any of multiple locations that form a triangular grid.”

    So what? Does that require the electrical activity to correspond to “information”? Of course not. It may well be that the neurons in question, through biochemical activity, are allowing information to be realised as part of a wider system, but it doesn’t mean that the firings themselves correspond to meaningful data. They don’t have to, and frankly it would be surprising if they did.

    JBD

  20. David Duffy says:

    Dear John.
    If you agree that the firing and location of grid cells precisely capture the movement of an organism in its environment by integrating within-body direction and movement information by dead reckoning (that is, from the train of signals fired into the peripheral nervous system that control muscles, which we understand pretty completely), then either: a) there are two informational systems that contain completely the same data – the one that the neuroscientists are reading and manipulating (electrodes, optogenetics, changing the external environment) and the other system you are suggesting exists; or b) there is one computational-type system that engineers working in robotics could emulate in silicon.
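    The dead-reckoning computation itself is elementary to state, whatever the neural substrate: integrate successive heading-and-distance steps into a position. A minimal sketch (headings in degrees; nothing here is meant to model grid cells’ actual mechanism):

```python
import math

def dead_reckon(start, steps):
    """Integrate a sequence of (heading_deg, distance) movement steps
    into a final (x, y) position: path integration by dead reckoning."""
    x, y = start
    for heading_deg, dist in steps:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    return x, y

# Walk a closed square: four unit sides, turning 90 degrees each time,
# brings the reckoned position back to the origin.
pos = dead_reckon((0.0, 0.0), [(0, 1), (90, 1), (180, 1), (270, 1)])
assert abs(pos[0]) < 1e-9 and abs(pos[1]) < 1e-9
```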

    This review paper might make things slightly clearer:

    https://www.researchgate.net/profile/David_Rowland3/publication/299491142_Ten_Years_of_Grid_Cells/links/57f3a1a608ae280dd0b70b2f.pdf

    It’s interesting that the place and grid cells are not egocentric coordinates; they are environment-anchored coordinate systems that can be remapped, rotated or sheared, and that contain the organism’s location in that system, even in the absence of “external” sensory inputs. Phenomenologically, I have often had the experience of one’s mental map rotating 180 degrees on realizing which way north actually is.

  21. john davey says:

    David


    “If you agree that the firing and location of grid cells precisely captures the movement of an organism”

    Yes – in a strictly restricted, minimal-scope sense. That doesn’t mean the place or grid cells “contain data”. It means they are firing (as I understand it) in response to spatial movement. It’s a bit like looking at the shadow of a sundial. The shadow of a sundial is just a lack of light. When viewed by a human wanting to know the time, it is information.

    The grid-cell firings are information from a scientific perspective (observer-relative), but it is important not to jump the gun and assume they are, or contain, physical structures which “represent” something as part of a memory structure. They must be assumed not to be information, but rather to be part of a system which realises information, which, I repeat, is the true achievement of the brain.

    I realise that you may think I’m being nitpicky (a habit after studying information theory), but in any situation ‘information’ has multiple scopes. I don’t deny that the brain realises information, or that when a neuroscientist gathers information about firings in grid cells, that is information, but I will always dispute anyone who claims that cell firings are ‘inherent’ information in some way, akin to computer memory or some kind of processing chip. I also don’t believe that the neat split of memory and brain activity into hardware/software analogies is even remotely likely to be correct, for the simple reason that a hardware/software computational analogy is an architecture incapable of realising information, although it is evidently capable of virtually processing information from an observer perspective.

    JBD

  22. David Duffy says:

    Dear John.

    I can’t think neuronal firing rates are inherently informational in that kind of sense, but I can believe that they convey information in the sense of representing the external world to an organism. Spike rates in a peripheral nerve represent the magnitude of a depression in the soft tissues of my leg due to a stick poking into it only because they are transmitted to the appropriate region of my brain from appropriate receptors. In this context, the “efficient coding theory” – that “neural circuits can encode and transmit as much useful information as possible given physical and physiological constraints” – is a model that has been vindicated in multiple settings. And it is a hypothesis that implicitly depends on evolution and the survival value of optimal information processing. Traditional electrical-engineering-cum-cybernetic explanations are sufficient for many behavioural sensory-motor loops, and of course I see no reason to stop there.

    I’ve previously commented that I think there is only one kind of information in the world – the recent efflorescence of papers on information thermodynamics fits in very well with the autopoiesis-enactivist view that “Living systems are cognitive systems, and living as a process is a process of cognition. This statement is valid for all organisms, with and without a nervous system” [Maturana cited by Evan Thompson 2004]. This is also very congruent with the Bayesian reasoner type models of Friston etc, where they explicitly move backwards and forwards between Fisher, Shannon and Boltzmann type entropies.

    PS Grid cells do seem to be involved in human imagination (visual imagery)
    https://elifesciences.org/articles/17089

  23. john davey says:

    David

    As I said previously, information has scope. When viewed from a scientific perspective – from the exterior – neural firings, spikes, have informative content. The attribute (neuron firing/voltage spike) is a physical datum that an external observer can compartmentalise, separate from everything else, and view as a measure which, by virtue of correlation, corresponds to a mental event. From our perspective, it’s like a computational event in the sense that we, as observers, link the physical datum to the mental event – by inference. We, as observers, make information from the observation.

    It does not hold – at all – that the internal workings of the brain go through the same cycle of a) physical datum isolation, b) relation of the physical datum to a corresponding event, c) doing so via a process of correlation or intrinsic rule, as in a computer. There would need to be a ‘head inside the head’ for it to be even remotely meaningful to say that a neural spike ‘represented’ anything. Where are the rules of association, as in any other computational system? Which part of the brain has a memory rule which relates that particular spike to the geographic location?

    My guess is there is no such rule – and hence no representation. What is perhaps more meaningful is to say that the neural firing generates an information-realisation event in the brain without reference to any need for representation at all. It just does it. From our perspective we can talk about “information” and encoding all we like, as long as we remember it’s from our perspective.

    JBD

  24. David Duffy says:

    Dear John, the talk of encoding is very precise, otherwise the “bionic ear” would not work. There are a few papers on the phenomenology of being attached to such a device – from individuals with one “good” ear, or those who used to have normal hearing. The qualia are still pretty crappy when listening to music.

    As to the “head inside the head”, we only need to have it that a given spike from a given cell represents X to the rest of the system by virtue of its location in that system. The Strong Church-Turing and Invariance Theses don’t restrict the form of the computational system; they only require that it be *completely* equivalent to a rule-based system. A self-replicating RNA (ligase) enzyme can be just 50 nucleotides, but this does not mean that the Von Neumann partitioning of self-replicating automata does not apply to it, even though code and replicator function are realized in the same molecule. The same is true of homeostasis or allostasis – the “internal model” can be implicit and mixed in with the effector arm. Possibly this does not differ too much from your way of thinking about perspective, but only some kinds of internal model will be flexibly modifiable – from a computer science perspective, modularisation etc. allows reuse.

  25. john davey says:

    David


    “The talk of encoding is very precise”

    The metrics may be precise, the measurement may be precise, but the viewpoint is definite – that of the scientific observer, not the brain.


    “As to the ‘head inside the head’, we only need to have it that a given spike from a given cell represents X to the rest of the system by virtue of its location in that system”

    OK. Let’s solve this now. Tell me what it represents – bearing in mind that in order to have a representation, you need an observer with a concept, and an object that the observer can comprehend as having a representative relationship with the concept.

    JBD
