Robot illusions

Neural networks really seem to be going places recently. Last time I mentioned their use in sophisticated translation software, but they’re also steaming ahead with new successes in the recognition of visual images. Recently there was a claim from MIT that the latest systems were at last catching up with primate brains. Also from MIT (also via MLU), though, comes an intriguing study into what we could call optical illusions for robots: images which cause the systems to make mistakes that are incomprehensible to us primates. The graphics in the grid on the right apparently look like a selection of digits between one and six to these recognition systems. Nobody really knows why, because of course neural networks are trained, not programmed, and develop their own inscrutable methods.

How, then, if we don’t understand them, could we ever create such illusions? Optical illusions for human beings exploit known methods of visual analysis used by the brain; if we don’t know what method a neural network is using, we seem to be stymied. What the research team did was to run one of their systems in reverse, getting it to create images instead of analysing them. These were then evaluated by a similar system and refined through several iterations until they were accepted with a very high level of confidence.
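The iterative generate-and-refine loop can be sketched in miniature. Everything below is hypothetical: the “classifier” is a toy scorer that rewards overlap with a stored random template (standing in for a real trained network), and the search is plain hill-climbing rather than the researchers’ actual evolutionary algorithm.

```python
import random

random.seed(0)

# Hypothetical stand-in for a trained recogniser: it scores how confidently
# a 40-pixel binary image is "digit-like" by its overlap with a fixed random
# template. A real system would be a deep network, not a template matcher.
TEMPLATE = [random.randint(0, 1) for _ in range(40)]

def confidence(image):
    matches = sum(1 for a, b in zip(image, TEMPLATE) if a == b)
    return matches / len(TEMPLATE)

def evolve_fooling_image(steps=2000):
    """Start from noise; keep any single-pixel flip that raises the
    classifier's confidence -- generate, evaluate, refine."""
    img = [random.randint(0, 1) for _ in range(40)]
    for _ in range(steps):
        candidate = img[:]
        candidate[random.randrange(40)] ^= 1
        if confidence(candidate) >= confidence(img):
            img = candidate
    return img

fooling = evolve_fooling_image()
print(round(confidence(fooling), 2))  # climbs toward 1.0: high confidence on a meaningless pattern
```

The loop never needs to know *how* the scorer works internally; it only needs to query it repeatedly, which is why this kind of search can fool even an inscrutable network.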

This seems quite peculiar, and the first impression is that it rather seriously undermines our faith in the reliability of neural network systems. However, there’s one important caveat to take into account: the networks in question are ‘used to’ dealing with images in which the crucial part to be identified is small in relation to the whole. They are happy ignoring almost all of the image. So to achieve a fair comparison with human recognition we should perhaps think of the question not as ‘do these look like numbers to you?’ but more like ‘can you find one of the digits from one to six hidden somewhere in this image?’. On that basis the results seem easier to understand.

There still seem to be some interesting implications, though. The first is that, as with language, AI systems are achieving success with methods that do not much resemble those used by the human brain. There’s an irony in this happening with neural networks, because in the old dispute between GOFAI and networks it was the network people who were trying to follow a biological design, at least in outline. The opposition wanted to treat cognition as a pure engineering problem: define what we need, identify the best way to deliver it, and don’t worry about copying the brain. This is the school of thought that likes to point out that we didn’t achieve flight by making machines with flapping, feathery wings. Early network theory, going right back to McCulloch and Pitts, held that we were better off designing something that looked at least broadly like the neurons in the brain. In fact, of course, the resemblance has never been that close, and the focus has generally been more on results than on replicating the structures and systems of biological brains; you could argue that modern neural networks are no more like the brain than fixed-wing aircraft are like birds (or bats). At any rate, the prospect of equalling human performance without doing it the human way raises the same nightmare scenario I was talking about last time: robots that are not people but get treated as if they were (and perhaps people being treated like machines as a consequence).

A second issue is whether the deception these systems fall into points to a general weakness. Could it be that these systems work very well when dealing with ‘ordinary’ images but go wildly off the rails when faced with certain kinds of unusual ones – even when being put to practical use? It’s perhaps not very likely that a system is going to run into the kind of truly bizarre image we seem to be dealing with here, but a more realistic concern might be the potential scope for sabotage or subversion on the part of some malefactor. One safeguard against this possibility is that the images in question were designed by, as it were, sister systems – ones that worked pretty much the same way and presumably shared the same quirks. Without owning one of these systems yourself it might be difficult to devise illusions that worked – unless perhaps there are general illusions that all network systems are more or less equally likely to be fooled by? That doesn’t seem very likely, but it might make an interesting research project. The other safeguard is that these systems are unlikely to be deployed without additional checks, perhaps even more contextual processing of broadly the kind that the human mind obviously brings to the task.

The third question is: what is it like to be an AI deceived by an illusion? There’s no reason to think that these machines have subjective experience – unless you’re one of those who are prepared to grant a dim glow of awareness to quite simple machines – but what if some cyborg with a human brain, or a future conscious robot, had systems like these as part of its processing apparatus, rather than the ones provided by the human brain? It’s not implausible that the immense plasticity of the human brain would allow the inputs to be translated into normal visual experience, or something like it. On the whole I think this is the most likely result, although there might be quirks or deficits (or hey, enhancements, why not) in the visual experience. The second possibility is that the experience would be completely weird and inexpressible: although the cyborg/robot would be able to negotiate the world just fine, its experience would be like nothing we’ve ever had, perhaps like nothing we can imagine.

The third possibility is that it would be like nothing. There would be no experience as such; the data and the knowledge about the surroundings would appear in the cyborg/robot’s brain, but there would be nothing it was like for that to happen. This is the answer qualophile sceptics would expect for a pure robot brain, but the cyborg is more worrying. Human beings are supposed to experience qualia, but when do they arise? Is it only after all the visual processing has been done – when the data arrive in the ‘Cartesian Theatre’ which Dennett has often told us does not exist? Is it, instead, in the visual processing modules or at the visual processing stage? If so, then perhaps we were wrong to assume that MIT’s systems are not having experiences. Perhaps the cyborg gets flawed or partial qualia – but what would that even mean…?



  1. Arnold Trehub says:

    “Human beings are supposed to experience qualia, but when do they arise? Is it only after all the visual processing has been done – when the data arrive in the ‘Cartesian Theatre’ which Dennett has often told us does not exist?”

    Of course it is after “the data arrive in the ‘Cartesian Theatre’”. The weight of available evidence says so.

  2. john davey says:

    I’ve worked on neural networks – the idea that they have some kind of relationship to real brains is comical.

    They are nothing more than an alternative method of memory storage. They aren’t ‘taught’ anything in that sense; they can be viewed as a large array of input/output arrays with no inherent meaning whatsoever.

  3. Charles Wolverton says:

    Arnold –

    I assume that you say qualia arise “after the data arrive in the Cartesian Theatre” because you view the RS combination of I! and the Z-planes of autaptic cells as analogous to a homunculus in the CT. And that you would agree that “after all the visual processing has been done” probably isn’t quite right because processing leading to phenomenal experience may be in parallel with other visual processing. Right?

  4. Arnold Trehub says:


    Right. During the same time that pre-conscious sensory patterns are projected into retinoid space to become conscious events, there is a vast amount of other non-conscious/pre-conscious processing going on in parallel.

  5. Sergio Graziosi says:

    I guess we share more or less the same sources, but for everyone’s benefit, here are two links to a slightly more technical (but not overwhelmingly so!) discussion. The one on the “digit” images is here (with link to the relevant paper):

    On the same site, there is also an older post on a related, and for me even more striking effect: images that look the same to humans get correctly identified in one case and not the other.

    To me, the first link (but perhaps not the second) intuitively points to a flaw in the training paradigms: could it be that the false positives would be avoided by using similar “noisy” and/or “nonsensical” images also in the training phase, and letting the system know that they amount to “nothing”?

    Overall, if that’s the case (I don’t know how the training is done), the false positives look less disturbing to me, while the false negatives (my second link here) are much more puzzling: they suggest that what we “see” as high-level features of a picture is completely ignored by deep neural network systems. If true, it kind of undermines what I understood as the “deep” part in the definition: these networks are layered, and are supposed to extract more and more high-level features as the information travels across the layers. At least, that’s my non-specialist understanding, but it is somewhat confirmed by the discussion there.

  6. Scott Bakker says:

    Fascinating article! In a sense, all that makes this particular topic interesting is the analogy to visual illusion. Is anyone surprised that machines become systematically decoupled from their environments? The possibility that artificial visual systems would systematically ‘mistake’ certain forms of visual information was all but assured.

    What it does is bring home in yet another way the mechanistic nature of the human: our brains become systematically decoupled from their environments like any other machine. It’s the question of how to fit ‘what it is like’ into all of this that has us all befuddled – where we find ourselves systematically decoupled from ourselves.

    It stands to reason that we’re systematically decoupled from ourselves for much the same reasons we’re systematically decoupled from our environments. In the case of visual illusions, the decoupling involves the cuing of heuristics out of school. Given environmental frequencies, we are visually coupled with our environment in myriad opportunistic ways. These ways can be hacked by unprecedented information.

    This is why I think that ‘appearance consciousness’ which so many take as the great explanandum is likely a cognitive version of a visual illusion. Why should we assume that we’ve evolved the capacity to cognize ourselves in anything other than radically specialized ways? No one I know of has answered this question. Short of an answer, there’s no reason to take appearance consciousness seriously, because we should expect that various illusions will confound us… as we are indeed confounded. We’re feeding unprecedented information to metacognitive systems adapted to solve specific problems. Of course we’re stumped.

  7. Sci says:

    @John Davey: Gotta agree with your assessment. I think a lot of people who don’t have training in computer science get overly wowed by this sort of thing.

    Lanier had a good article about the PR hyperbole that goes into claims about AI, will try to dig it up.

  8. Scott Bakker says:

    Here’s a way to put the question, Peter. If a machine can be systematically decoupled from its environments, such that it makes mistakes in a manner resembling those attending what we call ‘visual illusions,’ then it stands to reason that a machine can be systematically decoupled from its own operations such that it makes mistakes in a manner resembling what we call ‘philosophy of mind.’

    If a machine ran afoul of its own version of the mind-body problem, what then? Do we insist that our version is the only ‘true’ one? That our problem is real, involving some impossible-to-define ‘inherent meaning’, whereas the machine only suffers the simulacrum of such definitional incapacity?

  9. Scott Bakker says:

    Sci: That Lanier piece you linked last week was cool. Not sure how his argument isn’t a genuine zombie argument, tho. Saying that ‘experience’ can’t be removed from the explanation in no way vouchsafes what ‘experience’ consists in. Why does there have to be something in addition to nature in his account, outside his Platonic stipulation? He just stamps his foot, as far as I can tell. He needs to explain why experience has to be *what it appears to be in experience.* Otherwise, his radio dial metaphor is one that I’ve actually used myself (!), not to describe differentiated degrees of something apart from nature, but to explain why it is wholly continuous with it.

    I think his argument against computationalism is spot on: I just don’t see how it warrants anything other than eliminativism.

  10. Sci says:

    @Scott: Glad you also agree that computationalism is not synonymous with materialism.

    I feel that eliminativism is the only kind of materialism that is intellectually honest, though it’s hard to wrap one’s head around the kind of illusory nature of thought Rosenberg espouses in Atheists’ Guide to Reality.

    For now I’m happy to simply read and ponder…

  11. Scott Bakker says:

    Sci: Rosenberg doesn’t have much in the way of a positive account, which is why abduction cuts so easily against him. But this doesn’t have to be the case:

  12. Charles Wolverton says:

    From the description in the articles Sergio cites, I infer that a neural network is trained to assign arbitrary matrices of black-and-white patterns to one of the single-digit numerals, which it does reliably in the case of patterns designed to “look” (to human observers) like numerals. It was also found that, for each numeral, patterns that look like “white noise” (ie, essentially random) were found which were also reliably assigned to that numeral. But almost all patterns look random, and presumably every pattern must be assigned to some numeral, so there will inevitably be many (in fact mostly) such “illusions”.

    The decision process as described doesn’t appear to be engaged in object recognition, in which case from its “perspective” the car picture is essentially an arbitrary pattern. Near decision boundaries, there will inevitably be patterns that are “close” in some Hamming-like metric (and therefore visually indistinguishable to most human observers) but are assigned differently. In this paper Andy Clark suggests that in the dual stream processing model, the ventral stream may add cognitive functionality to the dorsal stream’s non-cognitive functionality. If that additional cognitive functionality is missing from the neural network processing, it isn’t really accurate to describe the failed decision as “car” vs “no car”. The neural network is just classifying patterns that from its perspective are content-free, notwithstanding that our cognitive ability allows us to recognize them as similar representations.

    This seems so straightforward that I have to assume I’m missing something.
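The structural point – every pattern must be assigned to some numeral – can be seen in a minimal sketch. Nothing below is trained: the weights are random and purely hypothetical; what matters is only that a softmax over ten scores has no “none of the above” output.

```python
import math
import random

random.seed(1)

# Ten hypothetical linear score functions over a 28x28 binary image, with a
# softmax on top -- untrained random weights, standing in for any classifier
# whose output layer must pick one of ten digits.
N = 28 * 28
WEIGHTS = [[random.gauss(0, 1) for _ in range(N)] for _ in range(10)]

def classify(image):
    scores = [sum(w * x for w, x in zip(row, image)) for row in WEIGHTS]
    peak = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return best, probs[best]

# Pure white noise still gets a digit label: there is no "this is nothing"
# region, so every point of the input space lands in one of the ten classes.
noise = [random.randint(0, 1) for _ in range(N)]
digit, conf = classify(noise)
print(digit, round(conf, 3))
```

The argmax of ten probabilities is always at least 0.1, and with high-dimensional inputs the score gaps are typically large, so the “confidence” on noise can be high even though the classifier has never seen anything like it.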

  13. Callan S. says:


    With something like the hollow-face illusion, it’s possible to describe the illusion in purely technical terms as well. It still remains an illusion, though.

    What’s the distinction you’re making? Something like saying it’s when a bug or microbe suffers an illusion, rather than a human scale illusion? Fair enough, in that case.

  14. Charles Wolverton says:

    Callan –

    As described in the articles, it appears that the neural network is being trained to divide the N**2-dimensional vector space, whose members are the NxN matrices that constitute the image space, into ten regions, each of whose members will be interpreted as an image of one of the single-digit integers. For each such region, many (most?) of the images that will be interpreted as representing a digit will look totally random to a human observer. Thus, it isn’t clear to me 1) why the fact that there are random-looking images that are reliably “recognized” as a digit image is surprising, or 2) why such images are called “illusions” in the post.

  15. Callan S. says:

    Charles –

    As described in the linked article, it appears neural networks are trained to perceive a convex structure, but will identify a concave structure as a positive just as readily. Just as the network in your article does not pay attention to all the details, the network in the article I linked will trigger on something totally random and alien.

    I know it’s old hat to mirror your post to make my point, but genuinely it also seems so straightforward to me. #1 isn’t surprising. But why would you question calling it an illusion without providing a replacement word of your choosing?

  16. Sergio Graziosi says:

    I’ve read the relevant parts of the paper (the one Peter discusses, on false positives), it’s a bit technical, but I think it’s possible to explain the whole thing in plain language (so I’ll try!).

    The first thing they did was to train a DNN (deep neural network) using a library of images containing the 10 different digits (Y. LeCun and C. Cortes, The MNIST database of handwritten digits, 1998); this contains 60K images for training and 10K images to check the results (MNIST). The result is indeed what we both suspected: the DNN only has 10 classes to choose from, and when asked to classify a new image it will output two things, the “recognised” class and a confidence level.
    The reason their results are noteworthy is that they were able to produce images that were classified with a very high confidence level (99.99%). As you have pointed out, the fact that the DNN assigns a class to a new (nonsensical) image isn’t noteworthy; having 10 classes, its job is to pick one. The somewhat surprising result is that it’s possible to produce nonsensical images that will be classified with a high degree of confidence. However, this is only possible by using sophisticated “evolving” algorithms that use multiple iterations to “learn” how to fool the DNN better and better with each new attempt. So far, no surprises, really.

    Still, the authors were careful enough to test what happens if one starts the training with 11 classes, where class 11 is made up of images that actually fool a previously trained DNN. This is where it gets interesting: what they found is

    The immunity of LeNet is not boosted by retraining it with fooling images as negative examples.
    Evolution still produces many unrecognizable images for DNN2 with confidence scores of 99.99%. Moreover, repeating the process for 15 iterations does not help

    (LeNet is the DNN they were using).

    So, after repeating the process 15 times, they still find that it’s possible to build new images that will fool the DNN.
    This somewhat contradicts my hunch that false positives could be avoided by training with similar “noisy” and/or “nonsensical” images, but the real twist comes when they repeated the same protocol using a different, larger, and much more diverse set of images: ImageNet (J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei, Imagenet large scale visual recognition competition 2012), which has 1.3 million natural images in 1000 classes. In this case, they can still “evolve” images that fool the DNN, but training a DNN with 1001 classes, where one class is made of the “fooling” images, does have a dramatic effect:

    for ImageNet models, evolution was less able to evolve high confidence images for DNN2 than DNN1. The median confidence score significantly decreased from 88.1% for DNN1 to 11.7% for DNN2 […]. We suspect that ImageNet DNNs were better inoculated against being fooled than MNIST DNNs when trained with negative examples because it is easier to learn to tell CPPN images apart from natural images than it is to tell CPPN images from MNIST digits.

    So in short(ish):
    1) Why is all this (somewhat) surprising? Because it’s possible to produce fooling images that are recognised with a high level of confidence.
    2) Why are such images called “illusions”? Because the DNN is effectively saying “this is a picture of X, I am really sure”, but the picture looks nothing like X.
    I’d also add:
    3) this isn’t really surprising, considering that they are actively looking for images that can fool the DNN, and that they are doing so using sophisticated algorithms that improve their own performance iteratively.
    4) the fact that DNNs can be “inoculated against being fooled” if they are trained to classify massive amounts of very different natural images, but not if they only look at sets of handwritten digits, is in itself interesting (it confirms and significantly extends my own hunch). This for me is the truly remarkable finding, and I’m surprised that the authors didn’t focus more on this side of their results.
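The inoculation experiment can be caricatured with a toy model. In this sketch (all data synthetic and hypothetical, nothing from the paper) a nearest-centroid classifier gets ten made-up “digit” prototypes; appending an eleventh “junk” prototype gives noise images somewhere to land, which is the effect seen with ImageNet-scale training (though, interestingly, not with MNIST):

```python
import random

random.seed(4)

N = 100
# Ten synthetic "digit" prototypes: structured images with sparse strokes.
# Purely hypothetical data, standing in for a trained model's classes.
digit_protos = [[1 if random.random() < 0.3 else 0 for _ in range(N)]
                for _ in range(10)]
# An 11th "fooling/noise" prototype: pixels on about half the time.
noise_proto = [0.5] * N

def classify(image, protos):
    # nearest-centroid decision: pick the prototype at least squared distance
    dists = [sum((x - p) ** 2 for x, p in zip(image, proto)) for proto in protos]
    return dists.index(min(dists))

noise = [random.randint(0, 1) for _ in range(N)]
# Without the junk class, noise is forced onto some digit...
print(classify(noise, digit_protos))
# ...with the junk class appended (index 10), noise lands there instead.
print(classify(noise, digit_protos + [noise_proto]))  # 10
```

The toy works because random noise is much closer to the “half-on” junk centroid than to any sparse digit centroid; whether a real DNN’s learned boundaries behave like this is exactly what the 11-class experiments were probing.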

  17. Charles Wolverton says:

    Sergio –

    Many thanks for that most helpful expanded comment. I assumed that MIT researchers weren’t going to miss anything I’d notice, so I’m not surprised that there was more to it than I gleaned from Peter’s post and Mike James’s articles.

    The training phase produces what amount to ten codewords (ie, digit images) that constitute a very powerful error-correcting code: k=log2(10) info bits in a code of length n=NxM, the size of the image matrix. (The potential error-correcting power of a code increases as the code “rate” (k/n) decreases.)

    Presumably, the confidence level of a classification decision for an input vector will rise with decreasing distance from a codeword. So, I would guess that the iterative process of raising the confidence level of the so-called “illusory” images amounts to (even if it isn’t in fact) a trial-and-error search for vectors that are close to code vectors but don’t represent images that look like anything in particular to humans. As I noted before, that the search is successful isn’t surprising because most possible input vectors (ie, images) aren’t recognizable by humans. Yet, a large fraction of possible input vectors will be relatively close to one or another of the ten codewords. So, finding vectors that are unrecognizable by humans as a digit image but nonetheless are interpreted with high confidence as being an image of a digit should be quite easy.

    Members of class 11 will likely be close to decision boundaries, so forcing reassignment of them by iterating presumably merely tweaks the boundaries, perhaps thereby exchanging misclassified patterns between two decision regions. And that seems unlikely to significantly change the overall performance. (Note: I’m using “confidence level” in an abstract sense. I assume that what are called “confidence levels” in the articles are empirically determined percentages of correct classification of the 10K “check” images.)

    I’m less clear on how to interpret the results of switching to the Imagenet training set. If the size of the image matrix remains the same, increasing the number of codewords (image categories) two orders of magnitude will significantly decrease the distance between codewords and therefore the number of candidates for input vectors that are both unrecognizable by humans and classified with high confidence. However, why iteration helps escapes me for the moment.

    In this discussion, “illusion” apparently is being defined as a mental image that “looks” to the viewer like an image of X but isn’t really. But with respect to the neural nets, there is no “viewer” and hence no mental image – just a vector. The vector doesn’t “look” like anything to the neural net. The neural net is trained to classify a subset of the possible image vectors reliably, and it does so. That it classifies as digits vectors that when displayed graphically don’t “look” right to humans seems irrelevant.
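The codeword analogy can be made concrete. In this sketch every value is invented: ten random 40-bit “codewords” stand for the trained digit prototypes, decoding is nearest-neighbour in Hamming distance, and “confidence” falls off linearly with that distance.

```python
import random

random.seed(2)

N = 40
# Ten random "codewords" standing in for the ten trained digit prototypes.
CODEWORDS = [[random.randint(0, 1) for _ in range(N)] for _ in range(10)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(image):
    dists = [hamming(image, cw) for cw in CODEWORDS]
    best = dists.index(min(dists))
    # crude "confidence": 1.0 at the codeword, falling off with distance
    return best, 1 - dists[best] / N

# Flip a few pixels of codeword 3: the result looks like nothing in
# particular to a human, yet decodes as "3" with high confidence, because
# it is still far closer to codeword 3 than to any of the other nine
# (two random 40-bit words differ in about 20 positions on average).
near = CODEWORDS[3][:]
for i in random.sample(range(N), 4):
    near[i] ^= 1
digit, conf = decode(near)
print(digit, conf)  # 3 0.9
```

This is only the trial-and-error picture described above, not the DNN’s actual decision rule, but it shows why vectors close to a codeword are easy to find and why most of them look random to us.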

  18. Sergio Graziosi says:

    I’m not familiar with the maths behind DNNs, but I suspect that it is almost legitimate to say that nobody really is. In this specific case I regret not knowing how the confidence score is calculated (I will look into it if/when I find the time). However, I do know that the whole idea of using DNNs is that they do not resemble classic multivariate classification (SVM and family): they are supposed to find sophisticated features of the sets they get trained with, so that they (supposedly) resemble what our brains do much better than the standard methods. The idea is that, to recognise a picture of a face, finding that it contains eyes, nose and mouth, more or less in the right (relative) positions, is going to be more reliable than simply looking at where the image falls in the hyperspace. Add more layers (of simulated neurons) and the DNN will be able to recognise that in this particular picture there is a face with a hand hiding the eyes.

    This is why Peter was only stretching the metaphor a little when asking “what is it like to be an AI deceived by an illusion”: SVMs (and similar) do not have anything that resembles understanding (as you point out), but to our wet brains it seems less outrageous to believe that DNNs develop something that is the beginning of proto-concepts. [Note that I’m watering down this intuition to almost homeopathic dilutions, and note also John’s agreeable comment #2]

    Anyway, the gist here is that the digits result seems to suggest that the DNN “promise” [what I summarily describe above] is not what is happening, because the DNN behaves (and makes mistakes) more or less like a good old SVM. On the other hand, the results from ImageNet point in the other direction: give enough diversity to a DNN and you’ll get what it says on the tin. Or, as a punchline: “Feed it with dull input and your network will become stupid”. [with apologies for over-stretching the intelligence metaphor myself, I couldn’t resist]

  19. Charles Wolverton says:

    Well, Sergio, I’m afraid at this point my intuition fails me. I still think the case of non-digit-looking (to us) images that are reliably classified as digits makes sense whether or not the DNN is doing something like feature extraction. As I argued above, it should be easy (in concept, not necessarily in practice) to force the DNN to behave like that, and the experimenters indeed make it so behave. But the case of misclassified images that look (to us) just like correctly classified images seems mysterious, even taking into account that what’s going on in the DNN may not be much like what’s going on in us.

    The fact that small deviations in correctly classified images cause errors presumably means that such images are near decision boundaries. But that leaves unanswered why training with readily distinguishable (by us) images would result in such decision boundaries, and I’m afraid I have no speculations on that.

    Anyway, thanks again for your replies which were quite helpful.

  20. Vicente says:

    If, as this study suggests (and as I have been inclined to think for a long time), synapses don’t play as significant a role in memory storage as thought, the whole idea of brain pattern-recognition models based on ANNs is going to be a nice flop.

    ANNs are just simple mathematical functions with parameters to be adjusted by fitting (call it training if you want), which can be graphically depicted as networks. I worked for a few years with ANNs, and these false-positive cases are not rare.

    The tuning of thresholds prevents ANNs from being bijections, by definition.

    I’ve never understood why people were so keen on ANNs’ prospects for neurological simulation.

  21. Arnold Trehub says:


    The problem is that the mechanism of synaptic modification in learning is not properly delineated in the cited paper. Recovery of learned responses after destruction of modified synapses is not surprising if you understand that the synaptic transfer patterns are now on the post-synaptic dendritic membrane — on the cell itself.

  22. Vicente says:


    Yes, you are quite right; to me the gist of the result is that it enhances the role of the individual neuron itself compared to the networking effect. For pattern recognition, that makes a big difference. Probably not as much in terms of overall memory management, where the network (the real tissue, probably functioning very differently from ANNs) will be crucial.

    I recall reading about some work that identified the activation of single neurons for the recall of specific people…

  23. Sergio Graziosi says:

    I fear I have fuelled confusion instead of reducing it. My comment #5 mentioned two papers: the first (Nguyen et al) is the one Peter discusses here and is about false positives; the second (Szegedy et al, published earlier) is about false negatives.

    My comments #16 and #18 then are about false positives (Nguyen et al) and do not discuss the false negatives at all. That is to say that we agree on your point:

    the case of misclassified images that look (to us) just like correctly classified images seems mysterious

    It is baffling and I am unable to produce a decent/plausible hypothesis on why it happens (and I understand that they can manufacture such “illusions” for pretty much any image, so one would have to presume that all images are close to decision boundaries!).

    Anyway, yesterday I went to this “public discussion”:
    The presence of Dr. Demis Hassabis (Vice President of Engineering at Google DeepMind) made me hope that I might learn something about the cutting edge of DNN research. I was marginally disappointed, but I’ve learned two things (relevant to our current discussion):
    1) They are actively developing tools to stop treating ANNs as black boxes. I.e. they are trying to gain some understanding of how they store the “learned” information. This confirmed my hunch that we currently don’t really know what features (if any) a single layer in a DNN is able to recognise.
    2) They are also actively trying to build more proactive recognition systems, inspired by “Bayesian brain” and “Predictive Coding” kind of frameworks. That is, systems that actively build a model of what is expected and then use external stimuli to check if the prediction was correct (this is very different from the DNNs we discussed so far). This is relevant because this kind of approach is supposed to “solve” the problem that in high dimensional spaces the decision boundaries are never very far.

    Vicente and Arnold:
    the work you refer to (Chen S, Cai D, Pearce K, Sun PY, Roberts AC, & Glanzman DL 2014. Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife, 3 PMID: 25402831) is fascinating, to say the least. However, the press release that Vicente linked is over-hyped to the point of being embarrassing. I haven’t read the full paper yet, but the discussion below, from the always trustworthy Neuroskeptic, includes numerous interventions by the PI of the original study (David Glanzman) and is quite informative and much more enjoyable than reading a dry academic paper:

    I think Vicente is right: the importance of single synapses has, in my humble opinion, been historically (and irritatingly) overestimated, just because we thought we understood something about them (LTP, LTD and the like). The Chen et al study does start chipping away at some unwarranted assumptions and got me pretty excited.

  24. Charles Wolverton says:

    Not to worry, Sergio, any confusion on my part is attributable to my own ignorance of neural nets (et al). In any event, I think I correctly matched your comments to the separate issues of false positives and negatives.

    A speculation about false negatives. Even in low dimension matrices, the ratio of possible vectors to target image vectors is, of course, quite large. Hence, most possible vectors aren’t “images” in the sense that we can recognize them as such. Add feature extraction, and images that the DNN classifies as being of an object X may very well not look very much like an X to us. Similarly, an image that does look like an X to us may not be classified as an X by the NN.

    To somewhat support this, using eXcel I generated random 5×8 matrices, few of which look anything like a digit. And when perturbing an image of an “8” – a relatively structured shape – it doesn’t take a large number of pixel inversions (AKA distance) to get an image that stops looking much like an “8”. So, we do indeed need more insight into what’s going on inside the DNN.
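    Charles’s spreadsheet exercise is easy to reproduce in a few lines. Below is a hedged sketch of the same idea in Python: the 5-wide by 8-tall “8” template is my own stand-in (his actual matrices aren’t shown in the thread), with “distance” taken as the Hamming distance, i.e. the number of pixel inversions.

```python
import random

# Sketch of the experiment described above, in Python rather than Excel.
# The 5x8 binary "8" template is a guess at a reasonable digit shape.

EIGHT = [
    "01110",
    "10001",
    "10001",
    "01110",
    "10001",
    "10001",
    "10001",
    "01110",
]

def flatten(rows):
    """Turn the row-strings into one flat list of 0/1 pixels."""
    return [int(c) for row in rows for c in row]

def hamming(a, b):
    """Distance in Charles's sense: number of pixels that differ."""
    return sum(x != y for x, y in zip(a, b))

def perturb(image, k, rng):
    """Flip k distinct, randomly chosen pixels of a flattened binary image."""
    out = list(image)
    for i in rng.sample(range(len(out)), k):
        out[i] ^= 1
    return out

rng = random.Random(0)
base = flatten(EIGHT)
for k in (2, 5, 10):
    noisy = perturb(base, k, rng)
    print(f"flipped {k:>2} of {len(base)} pixels -> Hamming distance {hamming(base, noisy)}")
```

Printing the perturbed grids row by row and eyeballing them reproduces the informal observation: random 40-pixel grids almost never read as digits, while even a modest number of flips degrades a structured “8”.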

    The Neuroskeptic article itself was great, especially the comments. Although I try to think about these matters a level or two down from the system level (I’m basically a comm system engineer), my neuroscience background is limited to what little I’ve gotten out of a few books at the relatively high level of Edelman’s book on neural Darwinism. So for me, the article and comments were effectively a crash course in the lower level of neuron biology. Although I had some general ideas about (biological) neuron networks, I had no inkling of how things worked at the cellular level. Fascinating – though a bit discouraging since there’s so much to learn. Sigh.

    Thanks yet again for the pointers.

  25. 25. Charles Wolverton says:

    Rats! I confused myself re the eXcel “experiment”. What I reported was the false positive result. The false negative goes the other way: the “8” remains more-or-less recognizable under non-trivial perturbations of the image, i.e., even for images that are at some non-negligible distance in image vector space. Extrapolated to high-dimensional images with feature extraction added, this suggests that two images which both look to us like some object may nonetheless be relatively far apart in image vector space.

    All hand waving of the worst sort, of course.

  26. 26. Sergio Graziosi says:

    Charles (#24)
    the pleasure is all mine, truly. Your input prompted me to look into the details, and I am not confident I would have clarified my thoughts without your nudges. So thank you!

    The best high-level explanation of the false negatives is indeed what you point to, and it is unsatisfying because it seems to indicate that we’ve found a dead end :-(.
    My background is almost perfectly the opposite of yours: trained in biology (neuro- and molecular-), I’ve approached computers as an amateur and found myself happy to become a professional software engineer. Life is full of surprises.

    I don’t find our lack of knowledge discouraging, but I am very disturbed by certain tendencies of group-think and fashion-following that I regularly see in modern professional neuroscience and science in general, a secondary effect of the way science (funding, careers, et al.) is organised. More on this (in the latest “constructive” form) is here: check out the previous work of Ioannidis for the destructive part.

    Finally, I’ve found that Micah Allen (AKA Neuroconscience) has produced a summary of the discussion at yesterday’s LSE event here:

  27. 27. Vicente says:

    Talking about false positives and differences between the brain and ANNs, consider the frequent cases of seeing a face in a water stain on the wall, or an animal in a cloud shape passing above us, or the Necker cube experience. That is the difference between perception and pattern recognition. Intentionality?

  28. 28. john davey says:

    if you write neural networks, your starting point is not biology – it is mathematics. You treat a neuron as a mathematical function with values (or a range of values) assigned to it by any old mathematical function you want (usually related to its “input”, also a flexible notion).

    there is no knowledge – nothing, none, nada, zero – that has been gathered from neuroscience that suggests a particular mathematical function is used in any relationship of neurons in a brain or can even be treated as being used by networks in a brain (like a planet can be ‘treated’ as being aware of the laws of gravitation, for example). This is before we even consider what correspondence there may be between the mathematical functions currently used in “neural” networks and any such function in nature.

    So to call it a ‘neural’ network is imposture of such an enormous magnitude it is difficult to describe. They have the same relationship to biological neurons as “The Simpsons” do to real human beings. They are cartoons. There is a vague relationship in the mind of the creator only. But to use them as a serious basis for analysis of mental phenomena is a fantastic waste of time and money.

  29. 29. Sergio Graziosi says:

    John #2 and #28
    In case anyone had doubts, Andrew Ng (formerly at Google Brain, now chief scientist at Baidu) makes your case, in a very uncompromising way. Just wanted to confirm we entirely agree on this one.

  30. 30. john davey says:

    sensible comment! It’s amazing how few commercial IT professionals – the kind of people who work with computers day in, day out (unlike philosophers of mind and cognitive scientists) – have any of these ludicrous delusions about the brain being analogous to a computer.

    The difference is that unlike cognitive scientists, computer people know that computers are arbitrary lumps of metal which can never amount to anything other than a very, very basic framework on which to build usable information machines (i.e. add a screen, a printer, a keyboard, etc.).

  31. 31. Arnold Trehub says:

    This was my response to this year’s Edge question:

  32. 32. Sci says:

    @Arnold – Good stuff.

    @John Davey – Yeah, it’s sad how people who don’t understand computers are easily seduced by the computationalist claims. Similar to how some physicists desperate to overreach about their understanding of reality have tried to pass off the Multiverse as anything but science-fantasy to the average person.

    Jaron Lanier is another Silicon Valley guy who has the intellectual integrity to not pretend computers can be minds (You Can’t Argue with a Zombie), as well as Yale compsci prof Gelernter (The Closing of the Scientific Mind)

  33. 33. INOC | NOC for Data Center says:

    Computers are just tools to get work done. No matter how “powerful” it is, a computer can never by itself compose better than Mozart or draw and paint better than Leonardo da Vinci.
