Machine learning and neurology: the perfect match?

Of course there is a bit of a connection already in that modern machine learning draws on approaches which were distantly inspired by the way networks of neurons seemed to do their thing. Now though, it’s argued in this interesting piece that machine learning might help us cope with the vast complexity of brain organisation. This complexity puts brain processes beyond human comprehension, it’s suggested, but machine learning might step in and decode things for us.

It seems a neat idea, and a couple of noteworthy projects are mentioned: the ‘atlas’ which mapped words to particular areas of cortex, and an attempt to reconstruct seen faces from fMRI data alone (actually with rather mixed success, it seems). But there are surely a few problems too, as the piece acknowledges.

First, fMRI isn’t nearly good enough. Existing scanning techniques just don’t provide the neuron-by-neuron data that is probably required, and never will. It’s as though the only camera we had was permanently out of focus. Really good processing can do something with dodgy images, but if your lens was rubbish to start with, there are limits to what you can get. This really matters for neurology where it seems very likely that a lot of the important stuff really is in the detail. No matter how good machine learning is, it can’t do a proper job with impressionistic data.

We also don’t have large libraries of results from many different subjects. A lot of studies really just ‘decode’ activity in one context in one individual on one occasion. Now it can be argued that that’s the best we’ll ever be able to do, because brains do not get wired up in identical ways. One of the interesting results alluded to in the piece is that the word ‘poodle’ in the brain ‘lives’ near the word ‘dog’. But it’s hardly possible that there exists a fixed, definite location in the brain reserved for the word ‘poodle’. Some people never encounter that concept, and can hardly have pre-allocated space for it. Did Neanderthals have a designated space for thinking about poodles that presumably went unused throughout the history of the species? Some people might learn of ‘poodle’ first as a hairstyle, before knowing its canine origin; others, brought up to hard work in their parents’ busy grooming parlour from an early age, might have as many words for poodle as the Eskimos were supposed to have for snow. Isn’t that going to affect the brain location where the word ends up? Moreover, what does it mean to say that the word ‘lives’ in a given place? We see activity in that location when the word is encountered, but how do we tell whether that is a response to the word, the concept of the word, the concept of poodles, poodles themselves, a particular known poodle, or any other of the family of poodle-related mental entities? Maybe these different items crop up in multiple different places?

Still, we’ll never know what can be done if we don’t try. One piquant aspect of this is that we might end up with machines that can understand brains, but can never fully explain them to us, both because the complexity is beyond us and because machine learning often works in inscrutable ways anyway. Maybe we can have a second level of machine that explains the first level machines to us – or a pair of machines that each explain the brain and can also explain each other, but not themselves?

It all opens the way for a new and much more irritating kind of robot. This one follows you around and explains you to people. For some of us, some of the time, that would be quite helpful. But it would need some careful constraints, and the fact that it was basically always right about you could become very annoying. You don’t want a robot that says “nah, he doesn’t really want that, he’s just being polite”, or “actually, he’s just not that into you”, let alone “ignore him; he thinks he understands hermeneutics, but actually what he’s got in mind is a garbled memory of something else about Derrida he read once in a magazine”.

Happy New Year!

10 Comments

  1. vicp says:

    An alien civilization received a copy of an electronic document we call ‘Conscious Entities’ but does not have the foggiest notion what all of the symbols composed of straight lines, circles and curves, separated by spaces and ‘.’, mean. They are building scanners and computational programs to analyze the symbols further.

    Happy New Year 2014 Peter !

    Great reading as usual.

  2. Peter says:

    Cheers, vic! Thanks.

    Happy New Year… I can see why you’ve chosen to erase 2016, but going back to 2014 seems a bit drastic? 🙂

  3. Callan S. says:

    Doesn’t that robot sound creepily familiar?

    What if the talking, typing part of you is actually a naturalistic version of that robot, a part that is set to explain the brain it is attached to, to other brains?

  4. SelfAwarePatterns says:

    Happy New Year everyone!

    “But it’s hardly possible that there exists a fixed definite location in the brain reserved for the word ‘poodle’.”

    Actually, I think this is more plausible than it might appear at first glance.

    The poodle and dog concepts are almost certainly vast collections of constituent concepts, such as the wolf-like body plan, the snout-shaped face, the texture of hair, the associated barking sounds, and other components, each of which may be composed of its own components. Eventually the conceptual hierarchies are built on perceptual sensory primitives (shapes, colors, textures, sounds, etc.).

    It’s not hard to see that each of the primitive sensory concepts might be handled in “standard” locations, so that the poodle concept in each of us ultimately excites similar networks of neural patterns, patterns that would plausibly sit near, and likely substantially overlap with, the general dog ones, since they would share many of the same primitives and intermediate component patterns.

    In the case of Neanderthals, they would only need other uses for those sensory primitives and intermediate components to have similar networks, although in their brains those networks would never have lit up in the precise combination they do for us when we perceive a poodle. But seeing a wolf might trigger many of the same networks, and other animals and concepts might trigger the other primitives.

  5. vicp says:

    Peter,

    It took 3 light-years for the webpage to reach the hidden alien civilization.

  6. David Duffy says:

    As SAP implies, that Huth et al. paper (2012) found that clusters of related concepts mapped to similar regions across their 5 subjects. And similar concepts tend to be mapped adjacently within an individual’s cortex, so a conceptual hierarchy has a physical arrangement.

    http://www.cell.com/neuron/fulltext/S0896-6273(12)00934-8

    But it must be fluid, e.g. the L ventral occipito-temporal sulcus moving from faces to words.

    http://www.unicog.org/publications/Dehaene_Cohen_Morais_Kolinsky_IlliteratetoliterateChangesinducedbyreadingacquisitionNa%20ReviewsNeuroscience2015.pdf

  7. Howard says:

    I’m curious. Your article piqued my curiosity, that is. Part of the problem is that the brain is a black box. So naturally you’d wonder whether the machine learning modelling, no matter how problematic, might replicate the accomplishments of Skinnerian behaviorism. On the other hand, might the problem be that unlike, say, classical physics, which may be reduced to a core principle, studies of the brain, like the humanities, have no Archimedean point, so the problem is like mapping out a network of roads or highways in a city?

  8. VicP says:


    What happens in the neocortex may not stay in the neocortex, or the error of machine mapping may be that it gives more precise mappings and uncovers countless more patterns of association, to the point that we may only be recreating a zombie. More precisely, a live lion may be the symbol of a lion, but its meaning may go deeper than “oh, that’s a lion”. Will the machine learning pick up the deeper reactions in the limbic system if there are no skin sensors to detect the sweating and trembling?

    The above video contains quite a bit of psychology concerning emotions, if you are considering the theological vs. scientific approach. Atkins at 41:00 discusses the irrefutability of mathematics, as opposed to the theologians who encourage us to pray and reach higher. In a previous video he discusses how all numbers can be derived from null sets. I will “double down” and say that the symbols, whether on paper or perceived in the neocortex, are meaningless unless they instantiate emotions, meanings or, in the case of mathematics, a set order within us. It is all a question to me of whether we put our hands together and pray or look for the deeper patterns of what is always there within us.

  9. VicP says:

    https://iai.tv/video/uncovering-the-unknown

    Goes with my comment above.

  10. Lloyd Rice says:

    Surely the poodle question depends more on the individual’s educational background than on whether a Neanderthal would have had a similar experience. Nobody has mentioned that the brain is a growing, developing thing.
