What do we even mean when we speak of consciousness? As we’ve noted before, there are many competing and overlapping definitions, and in addition it’s pretty clear that the phenomenon itself is complex and that the word refers, in different contexts, to a number of different things.

A few years back, Thomas Natsoulas had a determined go at clarifying the position in his paper “Concepts of Consciousness”.  For his framework he fell back on the old debating society standby of consulting the dictionary. You might ask whether this was necessarily the best way to go: lexicographers have their own priorities, after all. They typically aim to report the way a word is used; if it’s used in ways that are inconsistent or taxonomically incomplete, that isn’t a problem for them. On the other hand, the use of dictionary definitions does bring in an element of neutrality, and protects Natsoulas against any charge of skewing his definitions to support his own theoretical views: and the dictionary in question was no less a tome than the complete Oxford English Dictionary (OED), a mighty work of scholarship whose views on almost any subject are not to be lightly dismissed. Natsoulas himself says he sees merit in looking at ordinary, common-sense ideas, which can help remedy the potentially problematic lack of work by psychologists on the conceptual side.

The OED gives six senses of ‘consciousness’. The first, which we can call c1, strikes a modern reader as odd: it is knowing something together, joint or shared knowledge: con-scire, as the derivation of the word suggests. There is a definite suggestion of the shared knowledge being a guilty secret, perhaps even an echo of con-spire. Although c1 is the ancient sense of the Latin root and seems to have enjoyed a brief revival in the seventeenth century, it is no longer current and does not at first seem to offer us much enlightenment on the modern concept. Natsoulas, however, points out that it captures the idea of consciousness as a social, interpersonal thing. He quotes Barlow:

…consciousness is something to do with a relation between brains rather than a property of a single brain.

Barlow, it seems, went on to suggest that internal consciousness was a kind of ‘rehearsal for recounting’, which is interesting.

If we doubt that c1 is really important, I suppose we might ask ourselves what the state of mind would be of a human being who never at any stage since birth met another communicative entity. I don’t think I’d be ready to say that such a person could not be conscious, but their consciousness would surely be lacking in some important respects.

c2 follows on in a way: it is in effect knowledge shared with oneself, knowing that you know. This sounds like the HOT (Higher-Order Thought) and HOP (Higher-Order Perception) theories, which approximately say that a thought is conscious when accompanied by an awareness of that thought. It’s also reminiscent of those, like Dennett, who see the internalisation of talking to oneself as the origin of consciousness.

Natsoulas quotes Vygotsky:

 A function which initially was shared by two people and bore a character of communication between them gradually crystallised and became a means of organisation of the mental life of man himself

It almost begins to look as if the OED has a rather cogent theory of consciousness.

c3 is awareness, of or that, anything, whether objects in the world or one’s own thoughts. Natsoulas insists there must be an object for this form of consciousness: even the thought that ‘I am having no thoughts’ actually has a content, he points out. Being conscious without content is in his eyes properly reserved for c6. He notes that the OED seems to include with c3 a veridicality requirement: the claim is that if a man is aware of a bush, but thinks it is a rabbit, he is not really aware of the bush. Natsoulas, rightly I think, disagrees, insisting that even false awareness is still awareness. I think we must certainly preserve the possibility of being aware of something without having to have correct knowledge of its real nature.

I’m not sure whether the indirect awareness of memory falls into this category or the next, but there seems to be an overlap because c4 is, to adopt the OED’s Locke quotation:

…the perception of what passes in a man’s own mind…

which appears to be a subset of c3. Interestingly the OED seems to think that the primary use of c3 is awareness of one’s own mental contents, while using it to mean awareness of actual objects in the world is ‘poetic’. Natsoulas concludes that in this respect English has moved on a bit since the OED last looked, but you have to admire the magisterial coherence of the OED view, in which consciousness begins by being shared knowledge, becomes knowledge you share with yourself, in which form it is naturally about your own internal states, but by metaphorical extension can also mean your awareness of the external world.

I say c4 seems to be a subset of c3: perhaps as a result, Natsoulas says, experimenters are often bedevilled by confusion between the two, claiming that a subject’s inability to report a stimulus shows they were never aware of it (whereas the subject can be aware of the stimulus without being aware that they are aware).

There might seem to be some dangers in the self-reference of c4, but Natsoulas points out that there’s no problem in well-managed higher orders. If there were a ban on higher orders, he argues, introspection could never get properly started.

c5 is not a form of consciousness but rather the set of all the occurrent and previous mental states which putatively make up the individual’s existence. This is the sense in which science fiction stories speak of your consciousness being transferred to another body, or to a machine. The OED gives us a quote from Locke:

If the same consciousness can be transferr’d from one thinking substance to another, it will be possible that two thinking substances may make but one person…

Natsoulas is quite happy with the idea that consciousness is not to be identified with the substrate, Locke’s ‘thinking substance’, but he raises some difficulties. What set of states is adequate to constitute a specific consciousness? Must it be all of them? That seems too strong, because if I were to lose one of my memories, I should not thereby lose my identity – I forget things all the time. Perhaps it has to be everything I could recall – and yet some forgotten or unconscious things might still be shaping my mind. Worse, I can remember experiences yet fail to have the feeling that they are really my experiences.

Some would regard this defining set of mental states as the ego, or the ‘I’, which is a way of looking at it: but attempts to use a static central ‘I’ as the thing that pulls it all together are doomed in Natsoulas’ eyes. He thinks that with appropriate care we can use c5 and take account of the vivid and persuasive sense of an inner source without having to grant its ontological reality.

c6 is approximately equivalent to ‘awake’ and the opposite of unconscious. Searle, in typical commonsensical style, once used c6 as his definition of consciousness:

‘Consciousness’ refers to those states of sentience and awareness that typically begin when we awake from a dreamless sleep and continue until we go to sleep again, or fall into a coma or die or otherwise become ‘unconscious’.

Natsoulas takes this to be the kind of consciousness that has no requirement as to content: you could be conscious in this sense while thinking anything or nothing. In fact, although the medical salience of the concept is clear, it seems too open-ended to be of much analytical use. We should certainly be willing to speak of a dog being conscious (or unconscious) in this sense, and I think we’d be willing to push that usage much further – certainly to fish and quite possibly to an ant in certain circumstances. We’re not, then, speaking of anything narrowly defined: c6 means something like ‘the state when whatever mental activity normally goes on during periods of activity is going on’.

I think it is open to debate whether this OED-based six-way definition gives us sufficient tools to tackle the problem of consciousness. It does not seem to capture Ned Block’s distinction between a- and p-consciousness, more or less the distinction between the targets of the Hard and Easy problems: yet that is one of the most quoted and used of definitions. I think we might also look for sharper and more useful distinctions between internal and external awareness.

Still, it is a useful exercise, and Natsoulas proceeds to do something with the results, positioning the six senses along four different axes: intersubjectivity, objectivation, apprehension and introspection (this seems to cry out for a diagram, though I appreciate that rendering a four-dimensional space graphically intelligible is a non-trivial matter).

Natsoulas makes little of this concluding exercise, presenting it as a kind of run-through to help get things clear. But it seems obvious that what he’s offering is a potential reduction, abstracting away from the six OED definitions to define a consciousness space of four dimensions. Not the least interesting aspect of this is that it implies the conceptual possibility of unknown forms of consciousness which would be situated in unpopulated regions of the space. Suppose, for example, we had a form of consciousness with high intersubjectivity but low objectivation, apprehension, and introspection. My imagination begins to fail me, but I think that would be a kind of diffuse but powerful general empathy. I’m surprised this aspect has not been explored.


  1. Arnold Trehub says:

    Searle: ‘Consciousness’ refers to those states of sentience and awareness that typically begin when we awake from a dreamless sleep and continue until we go to sleep again, or fall into a coma or die or otherwise become ‘unconscious’.

    Peter: “Natsoulas takes this to be the kind of consciousness that has no requirement as to content: you could be conscious in this sense while thinking anything or nothing.”

    I don’t believe that this kind of consciousness could have no content. The minimal level of consciousness, what I call C1, is a primitive sense of centeredness within a surround. This would be a brief experience upon awakening from a deep dreamless sleep. See p. 327, section 8.2, here:

    In my view, if there is not at least this minimum mental content, one is not conscious.

  2. John says:

    Our collective consciousness causes me to blush when I self-consciously offer a stream of consciousness about a political consciousness that is really my own conscience. I am conscious that Vygotsky provided the analysis for social conscience to be merged with personal consciousness so that Stalin was able to purge those whom he considered lacked class consciousness. Was this an early example of Eliminativism? (Ho Ho).

    But seriously, Marx and Engels were persuaded by French philosophers such as La Mettrie that consciousness was a mechanism and this was fundamental to the creation of Marxism. There is a fascinating, almost post-modern ramble called “The Holy Family” http://www.marxists.org/archive/marx/works/1845/holy-family/index.htm which hints at these origins. It is the Marxist philosophers, their fellow travellers the behaviourists and now the post Marxists, who spread the word “consciousness” thinly over every aspect of society whilst refusing to accept its original meaning as the presence of the soul.

    Peter, why can’t we be apolitical and down to earth and just ask simple scientific questions? Here is a most basic question:

    “How do I see this screen?”

    Arnold has some good suggestions but none of your other correspondents are even close. Look through one eye, if you trace the light from the screen to your eye using pins (the nearest pin being >20cm from the eye) it seems as if your vision is like a pin-hole camera, it is like you are seeing from the centre of your pupil! The illusion of point vision occurs because if you focus on a nearby object then on a distant object the unblurred local object remains in your experience even though it may in reality be blurred if you attend to it. The physical truth is that we cannot have point vision in a materialist world, simple materialism cannot explain such a phenomenon. We cannot possibly be seeing from the centre of a pupil so what is the geometry of vision?

    (I bet most philosophers do not even realise that there is a deep problem with the science of vision. They do not realise that light falls all over the cornea to be diverted by the lens system of the eye and that there are different images in each eye and that these are suppressed during saccades – which is most of the time)

  3. Tom Clark says:

    Peter, as usual, thanks for the work you put into these posts.

    In defining consciousness, I’d point to its essential characteristics, which seem to me include being qualitative, unified, informational, private and subjective (QUIPS). Consciousness constitutes what could be called a “system reality” having these basic characteristics: it’s real only for the conscious system. These characteristics contrast with how we characterize the world in science: it’s quantitative, composite, public, and intersubjective.

    I’d say that qualitative states, available only to the conscious system (hence private), ordinarily bound into integrated objects, scenes and events (hence unified and informational), with the phenomenal self at the center (hence subjective), are all there is to consciousness. There is no non-qualitative contrast set for qualia within consciousness. If you subtract all qualitative experience, would there still be something it is like to think, believe, or be the subject of other non-sensory, non-perceptual states? Arguably not. Conscious episodes of thinking involve “phonemic imagery” (as Velmans puts it in the Journal of Consciousness Studies); conscious episodes of believing involve qualitative sensations (some vivid, some subtle) of conviction, of preparing to avow and assert, of vindication, etc. If non-qualitative conscious states do exist, then arguably they don’t pose as tough an explanatory problem for science, since science could show them as straightforward entailments of other non-qualitative phenomena, e.g., brain states.

    Pointing to the basic characteristics of consciousness sets up the hard problem, since it isn’t obvious how a system reality and its characteristics are entailed by the world and its characteristics as described by science. Yet consciousness is a natural phenomenon that exists within that world.

  4. Vicente says:

    I very much agree with Tom, and I would go further, consciousness “is reality”, it is the subject for every object. To have a “reality system” you need to establish the object-subject binomial.

    It seems to me that Peter’s post presents an epistemological approach to the definition of consciousness; I believe that an ontological one would be more adequate and satisfactory.

    Actually Tom, I think that science (as a body of knowledge) requires consciousness to exist, but consciousness needs no science.

    From an ontological point of view, you have to add to the hard problem the problem of the “multiple minds”, the fragmentation of consciousness.

    To be or not to be, that is the question.

  5. John says:

    Vicente: “..consciousness needs no science.”

    Science is simply knowledge. If there are relations in a phenomenon there is science.

  6. Tom Clark says:

    Just to say that if you’ve missed it there’s an exchange on qualia by Richard Brown and Keith Frankish at Philosophy TV that people might enjoy, a nice example of non-combative philosophy: http://www.philostv.com/richard-brown-and-keith-frankish/

  7. VicP says:

    “The presence of mental images and their use by an animal to regulate its behavior, provides a pragmatic working definition of consciousness”

    D.R. Griffin, ‘The Question of Animal Awareness’.

    I like this definition, except I would add: mental images with meaning.

    “Regulate its behavior” signifies that consciousness is part of the inner learning system for the organism.

    My own definition would be the inner system, along with the senses, by which an animal can objectify its environment.

  8. Arnold Trehub says:

    Peter, I was looking over the OED definitions of consciousness and I realized that what you call C1, “knowing something together”, is right on the mark because it reflects my primitive C1 in *Space, self, and the theater of consciousness* (2007), where what are sensed together are SELF (I!) and the SPACE around I!.
    Odd that I didn’t notice this before.

  9. Richard J R Miles says:

    Arnold, How much longer are you and others going to be able to ignore how and why consciousness evolved originally?
    Peter is right in his last sentence of the third from last paragraph when he writes ‘I think we might also look for sharper and more useful distinctions between internal and external awareness’, which incidentally is my hypothesis, which some are becoming aware of.

  10. Arnold Trehub says:

    Richard, I have suggested that consciousness evolved because it gave some lucky creatures the adaptive advantage of an internal perspectival representation of the world around them (subjectivity).

  11. Vicente says:

    Arnold, for that advantage (function) there is no need for phenomenal consciousness. Zombies could very well work on that basis, or not?

  12. Richard J R Miles says:

    Yes Arnold, I think that is correct. This lucky natural adaptive advantage of subjectivity was a result of the external somatic activity becoming conscious. This in turn resulted from a natural separation from the internal unconscious autonomic activity, which is how we have physical interactive dualism. This, combined with our dexterity, has made us so formidable.
    I have been recommended to write a short speech for a CFA: AISB workshop on the emergence of consciousness, re my hypothesis. I could email it to you.

  13. Arnold Trehub says:

    Richard, why don’t you put an abstract of your talk here so that we can all consider it?

  14. Arnold Trehub says:

    Vicente, the pattern of activation within the mechanism that gives a creature the function of subjectivity *is* phenomenal consciousness. Therefore, any creature that has such a functional mechanism (retinoid space?) cannot be a Zombie.
    As I have mentioned before, I know of no artifact that can represent a coherent volumetric space that includes a part of itself as the perspectival origin within its surround.

  15. Peter says:

    I’d be very interested to see your speech, a transcript or a link if that’s available, Richard – sounds like something we might discuss.

  16. Vicente says:

    Arnold, I don’t think so. The pattern of activity is just the pattern of activity, in any physiological parameters you would wish to depict it. I agree that, in pure bio-physical terms, it could be enough for a “lucky creature” to navigate better. If physicalism and dual aspect monism are right, and zombies are conceivable (I don’t see they violate the laws of physics), then phenomenal consciousness is (in the best case) just an epi-phenomenon, providing no competitive advantage.

    Please, have a look at this link, to clarify the concept:


    Additionally, we won’t have to bother ourselves with the free-will problem any more.

    Anyway, the point is that phenomenal consciousness is not that pattern, by concept, and by definition.

    As a proof of concept, take an extreme (ideal) case of blindsight. We have the retinoid space working, the evolutive advantage in place, but most of the related phenomenal experience is absent, and “ideally” it shouldn’t make a difference.

    Arnold, physicalism has a price: if you want to keep it down to dopamine levels, firing rates and neuronal architectures, then there is no feeling of space. But I have one, how odd.

    The first thing is to understand the workings of the brain, and then the correlations and mappings with phenomenal consciousness. Then we could try to see if we are working on a: simplex, semi-duplex or full duplex communications policy.

  17. Richard J R Miles says:

    Arnold/Peter, I would like it read as a complete speech, not an abstract, so will send it as Peter suggested. I have not quite finished trimming it below 500 words, (the speech requirement), will do so as soon as I have.

  18. Arnold Trehub says:

    Vicente: “If [a] physicalism and dual aspect monism are right, and zombies are conceivable (I don’t see they violate the laws of physics), then [b] phenomenal consciousness is (in the best case) just an epi-phenomenon, providing no competitive advantage.”

    I don’t see why [b] should follow as a logical consequence of [a]. I can conceive of a chain of duplicate Vicentes stretching from your present location on earth all the way to the moon. I can conceive of a zombie with godlike powers. So what? My conceptions, or your conceptions, or anybody else’s conceptions have no *necessary* validity.

    Vicente: “As a proof of concept, take an extreme (ideal) case of blindsight. We have the retinoid space working, the evolutive advantage in place, but most of the related phenomenal experience is absent, and ‘ideally’ it shouldn’t make a difference.”

    In blindsight, the full content of the phenomenal visual world is absent, yet the person is able to perform some rudimentary acts of proper visual-motor response. But why is this “proof of concept”? We know that there are subcortical visual-motor pathways that parallel the primary visual-perceptual mechanisms which provide the normal content of retinoid space. Presumably, it is these spared subcortical mechanisms that enable degraded visual performance in cases of blindsight.

    Vicente: “Arnold, physicalism has a price: if you want to keep it down to dopamine levels, firing rates and neuronal architectures, then there is no feeling of space. But I have one, how odd.”

    Your rejection of excitatory activity in a special kind of neuronal architecture (the retinoid system?) as *being* your feeling of space is no more than a very strong intuition on your part. Just as my belief that excitatory neuronal activity in retinoid space *is* my feeling of space is a very strong intuition on my part. But in science, intuition is trumped by evidence, so we have to look to the evidence. In a wide range of empirical tests, the operating characteristics of the retinoid model successfully predicted/explained previously inexplicable conscious phenomena/feelings, and also successfully predicted novel conscious phenomena. Among many examples are hemi-spatial neglect, seeing-more-than-is-there (SMTT), Julesz random-dot stereograms, the pendulum illusion, 3D experience from 2D perspective drawings, the moon illusion, the Pulfrich effect, etc.

  19. Vicente says:

    Arnold, then if the activity IS the feeling, why appeal to a dual aspect? There is only one aspect to consider.

    But this was not my point; we were talking about evolution favoring consciousness. What I am saying is that even accepting dual aspect monism, it is very different to claim an asymmetrical dual aspect, in which the physical aspect is the only causal agent and completely determines behaviour, than to propose a symmetrical dual aspect, in which both aspects are causal. The latter entails that qualia are causal agents.

    The fact that having a better brain to know where you are [I!] and locate prey and predators is an advantage is clear, but it requires no phenomenal consciousness at all. Evolution could perfectly well work with zombies. Unless we accept another agent acting on the brain.

    Evolution is another thing, e.g.: for the time being, in evolutive terms, tell me the advantage of the creature with the best retinoid system versus bacteria or viruses…. poor creature. Unless, again, we introduce some anthropic principle, or purpose, etc… absolutely out of the scope of standard evolution principles, that rely on a random exploratory mechanism to progress.

    In any case, I don’t reject the retinoid system, or similars, playing a crucial role in space management.

    Finally, “lucky” creatures…. why lucky? another strong intuition?

  20. Tom Clark says:


    “The fact that having a better brain to know where you are [I!] and locate prey and predators is an advantage is clear, but it requires no phenomenal consciousness at all.”

    Well, if phenomenal consciousness is identical to some set of processes in a (better) brain, then it plays a causal role in locating prey and predators. But of course in that case it doesn’t add any causal power above and beyond its neural constituents, which is what people traditionally want from consciousness: being more than mechanisms. If it *isn’t* identical, then the problem arises of how it could add its causal power to behavior control, and there’s no account on offer of how a categorically distinct mental substance or property could affect the brain and body (the intractable problem of dualist interactionism). But without such a mechanism, phenomenal experience wouldn’t have been an adaptive trait to possess and so wouldn’t have been selected for. So either way, it doesn’t look as if phenomenality per se as distinct from its neural basis can be considered adaptive and thus explained by standard evolutionary accounts.

  21. Arnold Trehub says:

    Vicente: “Evolution is another thing, e.g.: for the time being, in evolutive terms, tell me the advantage of the creature with the best retinoid system versus bacteria or viruses…. poor creature.”

    The “best retinoid system” (the human kind) can survive and advance the survival of its kind in the widest variety of ecological niches — in extreme heat and cold, on all parts of the earth, from the depths of the ocean to outer space. The only kind of bacteria and viruses that can claim the same ubiquity are those that are resident in humans. And when a bacterium or virus multiplies to the point of threatening its host, it is usually destroyed by a human weapon (antibiotic or some other kind of treatment) in a “struggle” for survival.

  22. Vicente says:

    Arnold, that’s not fair play. It is a very interesting issue, but if we include the results of human intelligence (tools), and we don’t limit the perimeter to improvements stemming from genetic mutations, then the scenario completely changes. The issue of the impact of human intelligence on the evolution of the species has to be treated separately. Besides, I’m not sure of the net profit (biology-wise); let’s wait a few centuries and see: And when a bacterium or virus [or human] multiplies to the point of threatening its host [or planet], it is usually destroyed [or self-destroyed] by a human weapon (antibiotic or some other kind of treatment or natural disaster) in a “struggle” for survival [or greedy power].

    Tom, thank you, in your words it sounds better.

  23. Arnold Trehub says:

    Vicente, we can’t separate human intelligence from human consciousness because our kind of intelligence could not exist if we did not have the content of our phenomenal world to learn about, to understand, and then to use our understanding for adaptive reconstruction of the world. This is the difference between conscious cognitive adaptation and nonconscious reflexive adaptation. Regarding the future prospects for humans vs. bacteria — I must agree with you that only time will tell.

  24. Vicente says:

    Arnold, then look what you did: the retinoid system enabled subjectivity (consciousness for you), creating the individual for the first time. Now the individual can become as selfish and egoistic as its genes (worse for Dawkins), putting an end to evolution as it was known. So you can say that the retinoid system sets a breakpoint in evolution, there is a before and an after.

    Only very conscious creatures can move against the strong forces of biology and instincts….

  25. Charles Wolverton says:

    Tom -

    I’m not so sure we need an evolutionary-advantage argument for PE. Presumably, it’s uncontroversial that in order to effect responses to different sensory stimuli the brain must be able to distinguish among different consequent patterns of neural activity (qualia?). And in order to use such patterns in producing responsive action, they must be conceptually “positioned” relative to the subject (perhaps ala Arnold’s RS re I!). And once the brain has done that, there is a “representation” of the environment as “sensed” by the subject, in particular a neural activity map of the environment – in the case of visual sensing, a virtual “picture”. In Cartesian Theater thinking, this “picture” is a mini-reproduction of what is actually “seen” by the eyes, but of course it isn’t. There’s no “picture” anywhere, just the PE, an artifact – so to speak – of those distinguishable patterns of neural activity.

    Since we don’t know the mechanism by which PE is produced (right?), we don’t know its attendant cost to the organism. So, couldn’t PE conceivably be a freebie with no benefits but also no – or at least negligible – cost?

  26. Tom Clark says:


    “Since we don’t know the mechanism by which PE is produced (right?), we don’t know its attendant cost to the organism. So, couldn’t PE conceivably be a freebie with no benefits but also no – or at least negligible – cost?”

    Yes, phenomenal experience (PE) likely just comes along for the ride. From a 3rd person explanatory standpoint it’s a non-functional accompaniment to the neurally instantiated cognitive processes that get the behavioral job done and that were naturally selected for. But I think it’s a mistake to suppose PE is in any sense produced by neural processes as an effect distinct from them, since in that case we’d see PE out there in the world like we do brains, and we don’t (and never will).

    Since it isn’t produced or caused by neural processes as a separate effect requiring the expenditure of glucose, PE doesn’t cost the organism anything, and so is indeed a freebie. So I agree: we don’t need an evolutionary-advantage explanation for PE, only for the neural processes which make possible the cognitive functions associated with it. To explain PE, I think we need an account of how it’s *non-causally* entailed by being a free-standing, mobile cognitive system with certain sorts of representational capabilities.

  27. Charles Wolverton says:

    Tom -

    On rereading your comment I see that I misinterpreted it. Great! It appears that we’re in accord.

    You’ve been touting this general idea for quite a while. I assume it’s out of the mainstream, but perhaps I’m wrong. What kind of response has it received in your experience?

  28. Tom Clark says:

    Yes, it’s outside the physicalist mainstream but very much in line with Metzinger’s work. Plus I’ve recently discovered that Evan Thompson has similar ideas – see his worthwhile exchange with Owen Flanagan at http://video.at.northwestern.edu/2013/03-04_CogSci/CogSci_03-04-13.P2G/NewStandardPlayer.html?plugin=Silverlight , abstracts at http://www.cogsci.northwestern.edu/speakers/2012-2013/dialogue.php . My particular proposals haven’t gotten much play, so not much response – gotta publish the damn things…

  29. Richard J R Miles says:

    Tom, thanks for your links.

  30. Charles Wolverton says:

    Ditto. I liked both talks, especially the meditation parts. Having read a bit of Zen and occasionally meditated, I’ve found quieting the inner dialogue (what I take to be a component of the Zen state called “no-mind”) helpful in sorting out some issues in perception. Whether that translates into utility in the lab is another matter. Not being a fan of the whole concept of “introspection”, I have doubts.

  31. Parag says:

    What is consciousness?
    How, when and why does consciousness emerge?
