Picture: Sleepwriting. Here’s an interesting piece by Neil Levy from a few months back, on Does Consciousness Matter? (I only came across it because of its  nomination for a 3QD prize). Actually the topic is a little narrower than the title suggests; it asks whether consciousness is essential for free will and moral responsibility (it being assumed that the two are different facets of the same thing – I’m comfortable with that, but some might challenge it).  Neil notes that people typically suppose Libet’s findings – which suggest our decisions are made before we are aware of them – make free will impossible.

Neil is not actually all that worried by Libet: the impulses from the event of intention formation ought to be registered later than the event itself in any case, he says; so Libet’s results are exactly what we should have expected. Again, I’m inclined to agree: making a conscious decision is one thing;  the second-order business of being conscious of that conscious decision naturally follows slightly later.  (Some, of course, would take a very different view here too; some indeed would say that the second-order awareness is actually what constitutes consciousness.)

Neil particularly addresses two arguments. One says that consciousness is important because only those objectives that are consciously entertained reflect the quality of my will; if I’m not aware that I’m hitting you, I can’t be morally responsible for the blows.  Neil feels this is a question-begging response which just assumes that conscious awareness is essential; I think he’s perhaps a bit over-critical, but of course if we can get a more fully-worked answer, so much the better.

He prefers a slightly different argument which says that factors we were not conscious of cannot influence our deliberations about some act, and hence we can only be held responsible for acts consciously chosen. George Sher, it seems, rejected this argument on the grounds that our actions are influenced by unconscious factors; but Neil rejects this, saying that although unconscious factors certainly influence our behaviour, we have no opportunity to consider them, which is the critical point.

Personally, I would say that agency is inherently a conscious matter because it requires intentionality. In order to form an intention we have to hold in mind (in some sense) an objective, and that requires intentionality. Original intentionality is unique to consciousness – in fact you could argue that it’s constitutive of consciousness if you believe all consciousness is consciousness of something – though I myself wouldn’t go quite so far.

But what about those unconscious factors? Subconscious factors would seem to possess intentionality as well as conscious ones, if Freud is to be believed: I don’t see how I could hold a subconscious desire to kill my father and marry my mother without those desires being about my parents. Neil would argue that this isn’t relevant because we can’t endorse, question or reject motives outside consciousness – but how does he know that? If there’s unconscious endorsement, questioning and so on going on, he wouldn’t know, would he? It could be that the unconscious plays Hyde to the Jekyll of conscious thought, with plans and projects that are in its own terms no less complex or rational than conscious ones.

I think Neil is right: but it doesn’t seem securely proved in the way he was hoping for. The unconscious part of our minds never has a chance to set down an explanation of its behaviour, after all: it could still in principle be the case that conscious rationalising is privileged in the literature and in ordinary discourse mainly because it’s the conscious part of the brain that does the talking and writes the blogs…


  1. Charles Wolverton says:

    Peter –

    Clarification requests:

    1. Most of your remarks about “intentionality” seem to be assuming the Dennett sense of that word. But in addressing the subconscious you say “I don’t see how I could hold a subconscious desire to kill my father and marry my mother without those desires being about my parents.” (Emphasis added.) Is that “about” supposed to suggest the Brentano sense of intentionality or is it just shorthand for “directed toward”?

    2. The paragraph about the relationship between intentionality and consciousness seems a little garbled:

    – “In order to form an intention we have to hold in mind (in some sense) an objective, and that requires intentionality.” seems circular.

    – “Original intentionality is unique to consciousness …” and (from the next paragraph) “Subconscious factors would seem to possess intentionality as well …” seem contradictory.

  2. Peter says:

    Charles –

    Sloppy drafting on my part.

    1. I don’t mean to invoke a specifically Dennettian sense here – by “intentionality” I mean more or less the Brentano sense as subsequently kicked around by various people. In my mind desires and beliefs are leading examples of intentionality, as in the charming couple Des and Bel who feature in many discussions of the topic.

    2. “Intention” meant in the everyday sense, “Intentionality” in the philosophical, so what I’m saying is approximately “In order to have plans or projects we have to hold in mind an objective, which requires the Brentano inexistence stuff”. I suppose on reflection there is an element of tautology, but that’s sort of what I’m saying: can’t have one without the other.

    “unique to consciousness” – here I’m using consciousness to include subconsciousness; but the messiness of the expression may indeed reflect a messiness in my views. Anyone who can expound a tidier version earns my gratitude.

    Hope that makes sense.

  3. Vicente says:

    To me the key issue for this discussion and others like it is to understand the decision mechanisms – conscious, unconscious or subconscious. HOW IS A DECISION MADE?

    Picture a man who is ready to commit a horrible action: he is fully conscious of the terrible results of his action, he hesitates, he has doubts, but at a certain point he makes the decision and goes ahead (or not). At that point there is something inside him, like a “spark”, that ignites the action.

    What is the mechanism that determines or dictates the choice? How does it work? That is the question.

    To be conscious or not of the action, probably means to have additional inputs to the decision mechanism, which might eventually bias the result.

    I believe we have all felt the “spark moment” at some point in our lives, when we had to make a choice, and… suddenly we come up with “OK let’s go”, not knowing very well why.

    Then, intentionality is the process that leads to the decision moment… it is the set of desires and boundary conditions that build the decision scenario. But how is the decision made? Why are cravings sometimes so strong? Do we have inner dice to play?

    My opinion is that the decision mechanism takes as its main input the brain reward system’s expectations… which is a disaster, because I have the impression that the reward system circuitry has become extremely obsolete for modern life conditions.

    So trying to answer Levy’s question: Consciousness matters as far as decision making is involved, but not for moral judgement, that is another problem…

  4. Charles Wolverton says:

    Thanks, Peter, that helps.

    Although I question the need for Brentano-style “intentionality” in the present discussion, for the moment I’ll make the distinction explicit by using “B-intentionality”, restricting “intention” to its “everyday sense”, as you are.

    The following may seem argumentative, but my “intent” is to learn. I’ve struggled with the concept of B-intentionality, finding it elusive. So, I’m as much playing devil’s advocate as arguing a position.

    In my (limited) experience, B-intentionality has typically reared its ugly head in two contexts:

    – when talking about inability to translate between the “mental vocabulary” and the “physical vocabulary”, eg, “the [B-]intentional idiom cannot be translated into the vocabulary of physics”

    – when talking about the reducibility of “the mental” to “the physical” (although I prefer Chalmers’s term “reductively explainable”, which seems to capture the essential question without necessarily injecting ontology into the discussion)

    I don’t see that the present context (assignment of MR) involves either translation of vocabulary or ontological issues. And if it doesn’t, why inject the troublesome concept of B-intentionality rather than just stick with “intention” (or “intent”, the word used in what I take to be the major application area of MR, criminal law)?

    In any event, now that I understand that you are being careful in distinguishing between “intention” and “B-intentionality”, I have additional questions about that key paragraph in your post.

    “I would say that agency is inherently a conscious matter because it requires [B-]intentionality.”

    “agency” seems to be another word that is ambiguous in philosophical usage. Given the context, I infer that there is an implicit “moral” preceding it in that sentence. But then it’s not obvious to me why the embedded premise “[moral] agency requires [B-]intentionality” is true – whereas if “intention” is substituted, it seems arguably correct. But even were I to accept the premise as written, I don’t see why the conclusion “B-intentionality implies consciousness” follows; don’t we have unconscious desires, etc?

    “In order to form an intention we have to hold in mind (in some sense) an objective, and that requires intentionality.”

    I agree with the first assertion, but I don’t see why “hold[ing] in mind … an objective” requires B-intentionality. I intend to get a beer from the fridge; what component of that statement is “inexistent”? (Assuming, of course, there’s a beer in the fridge, which is always the case in my house!) Yes, the intent is motivated by a desire, but is that enough to create B-intentionality? Seems like that would have the effect of making every action, even automatic ones, require it.

  5. Peter says:

    “Argumentative” is a positive quality here, Charles!

    You’re right that neither translation nor reduction into physical terms is directly relevant here. I’d say problems with those show that intentionality is difficult, without capturing its essence, which is aboutness – a particular, commonplace but nevertheless rather mysterious kind of directedness.

    I mean all kinds of agency, including the moral kind. To be an agent is to behave deliberately rather than merely transmit the chain of cause and effect; to act deliberately requires intentionality for the reason stated: you have to think about your objective, to have it in mind. We do, no doubt, in ordinary terms have unconscious desires, which is why I was sneakily trying to position them as “subconscious” and within the pale of a broad reading of “conscious”. A messy bit of papering over some cracks, I concede, and no doubt deserving of challenge.

    Where you leave me behind a bit is in not seeing why ‘holding in mind’ requires Brentanoish intentionality. Holding in mind is what Brentano was talking about, surely? He meant it’s as though when you think about the beer it has intentional inexistence in your mind, as though beer which is actually in the fridge (or for that matter on the other side of the world) can actually cause events (and desires) in your mind here. His point, at least as I read it, was that whatever kind of causation that is, it ain’t catered for in physics.

  6. Charles Wolverton says:

    “‘Argumentative’ is a positive quality here …!”

    But only if the person doing the arguing has some reasonably clear idea of what they are arguing about. I’m largely in a fog on “B-intentionality”.

    You are quite right to question my contention re B-intentionality and the beer in the fridge. I didn’t think my argument held together, but in the spirit of “devil’s advocate” thought I’d float it anyway – and you saw right through it.

    I have just reread the wiki entry on intentionality and must say it doesn’t help much – quite ambiguous and hence confusing. Which leads me to ask again: given the context of the MR discussion, what do you consider introducing this apparently ill-defined concept adds to the discussion? Being a fan of the “linguistic turn”, I tend to assume that if knowledgeable people can’t agree on the meaning of a term, one must question whether its inclusion in a vocabulary is just a historical fluke which causes more harm than good. It is certainly possible (maybe likely) that I am missing a subtlety, but so far I don’t see the good that comes of using “B-intentionality” and am quite clear on the harm. Why doesn’t “intention” – a word on whose meaning I assume we all mostly agree – adequately represent the position of the person whose actions we want to assess as to MR?

    I’m being somewhat obtuse about this because I ultimately want to argue against “intention” as well, but to do so I first need to dispense with B-intentionality. The key may be in your observation “as though beer which is actually [does it matter whether it is “actually” there?] in the fridge … can actually cause events (and desires) in your mind”. Perhaps the (inexistent?) “mental image” of a beer possibly in the fridge can’t (although I’m not sure why not), but what about your past personal history with respect to beer, presumably extant in neuron-based memory, a physical entity which presumably can have causal effects on your present behavior?

  7. Vicente says:

    as though beer which is actually in the fridge (or for that matter on the other side of the world) can actually cause events (and desires) in your mind here

    This is the point I can’t take, the cause of the desire is not the material beer in the fridge. The cause could be: thirst, culture, anxiety, a tv advert you just watched… anything but the beer can in the fridge.

  8. Vicente says:

    Charles, I don’t know if this will help, but my understanding of intentionality in its philosophical use is based on its opposition to the idea of a pure contemplative mind.

    Can you imagine a pure contemplative existence? just watching and observing, all desire or interest has ceased.

    So, intentionality is the complementary part to that kind of purely observing mind; it is what you need to add to achieve your normal state of mind, the directedness and aboutness (that Peter referred to) that you need to put on top of pure contemplation in order to get involved with the world.

    So your mind in this world would be: contemplation + intention.

    I have tried to convey a very personal and subjective understanding of the issue, so I doubt it can make any sense to others.

  9. Charles Wolverton says:

    OK, Peter, I’ve done a little more reading and thinking about what you were trying to do with that paragraph. I now understand your logic and see why and how you are using intentionality (and am even comfortable dropping the “B-“!). In an attempt to make the logic (as I understand it) a little easier to follow, I might rephrase the first couple of sentences as follows:

    I would say that agency in general requires us to hold in mind (in some sense) an objective, that is, an intentional “object”. In particular, to form an intention to act requires that ability.

    (I’m not too happy with that somewhat awkward use of “object”, but it’s kidnapped from a Brentano quote in the wiki entry on “intentionality” – pretty good parentage, I’d say.)

    “Original intentionality is unique to consciousness – in fact you could argue that it’s constitutive of consciousness if you believe all consciousness is consciousness of something – though I myself wouldn’t go quite so far.”

    Because I don’t consider “consciousness” to be well-defined enough to warrant being used when attempting to make definitive statements, in place of the first phrase of this sentence I would continue the second sentence in my suggested rewrite above something like:

    … requires that ability – an ability typically assumed when we talk of “consciousness”. Since we also typically speak of being “conscious of something”, one could argue that in that sense, intentionality is inherent in talk of consciousness.

    I don’t mean to be making assertions in this rewrite, only to suggest a style. I don’t know enough to argue the relationship between intentionality and “consciousness”, so any part of that suggested rewrite may be wrong.

    However, I will question the expansion of intentionality to the subconscious, given how you have introduced intentionality in the context of “holding an objective in mind (in some sense)”. The “in some sense” gives you some wiggle room, and your Oedipus example of subconscious intent does have the required intentionality. But I don’t see how to square what we usually mean by “holding something in mind” with that “holding” being subconscious. Of course, I may yet again be missing your point.

    If this exchange continues, I’ll at some point get to the real subject, unconscious intentions. But for the moment, I’ll switch topics (although I just noticed that there’s a hint of my position on MR buried in the following).

    “I’m inclined to agree: making a conscious decision is one thing; the second-order business of being conscious of that conscious decision naturally follows slightly later.”

    This is interesting to me because I have come to wonder if the phenomenal experience we associate with “consciousness” is a reporting function rather than part of a decision process. As I noted in a comment in another thread, it seems that the neural processing necessary to create the phenomenal “picture” of the content of our visual FOV should be sufficient for doing the things our ancestors needed to do – detect and identify objects in the FOV, detect the motion of objects in the FOV (especially hostile ones moving nearer or edible ones receding), etc. And if that’s correct, it raises the question of what additional capabilities are attendant to the emergence of that phenomenal “picture”.

  10. Peter says:

    OK, Charles. I have to admit I haven’t got a well-developed view about subconscious intentionality, so it might be more illuminating to see how you attack it than see me defend it!

    Vicente: but surely you go to the fridge because of the beer? No matter how many adverts or cultural influences are bearing on you, you wouldn’t go to the fridge if the beer wasn’t there?

  11. Vicente says:

    I disagree, Peter: imagine there is no beer in the fridge, but Charles thinks there is some (overconfident in his home supply chain); then he would go to the fridge to get one. But when he opens the fridge he finds there is no beer in it. So what, then, was the cause of the action? Not the can of beer, for sure.

    So the absence of the beer has caused the same effect as the presence of the beer. So, contradicting your statement, Charles went to the fridge with no beer in it.

    It is not the material can of beer – that can have no direct effect on you (before drinking it, I mean), irrespective of its presence – it is the belief about having a can of beer in the fridge, which is very different.

    This is a problem, too often we act based on what we believe and not on what we know, or on real facts.

  12. Charles Wolverton says:

    “it might be more illuminating to see how you attack [subconscious intentionality] than see me defend it”

    I just wanted to understand those two key paragraphs in your post, which I think I now do. So, even if we disagree on that issue, I’m ready to move on to (what I take to be) the actual subject of Levy’s piece and your response.

    I infer from Levy’s piece, including his discussion of the significance of Libet’s experiments, that a simplified model of the process he envisions leading up to an action A might comprise these steps:

    1. P postulates an action A.

    2. P deliberates the pros and cons of taking A (essentially a cost-benefit analysis).

    3. Based on the results of this deliberation, P decides whether to execute A.

    4. If the decision is to take A, P forms the intention to take A.

    5. P executes A.

    I think it is instructive to review Levy’s conclusion about Libet’s experiments:

    “[I]mpulses from the event of intention formation ought to be registered later than the event itself (given that it takes time for impulses to travel across the brain)”

    I interpret this as meaning:

    At time T, P forms the intention to take action A, but only at a later time becomes conscious of having formed the intention.

    But implicit in this seems to be that the intention formation step is executed “unconsciously”. For assume the contrary and suppose that “P consciously forms the intention to take action A” includes that P is conscious of having completed formation of the intention. Then Levy’s conclusion would become:

    At time T, P is conscious of forming the intention to take action A – in particular, is conscious that the intention has been formed, but only at a later time becomes conscious of having formed the intention.

    This is internally contradictory, so the premise must be false.

    Peter concurs with Levy, restating Levy’s conclusion as:

    [M]aking a conscious decision is one thing; the second-order business of being conscious of that conscious decision naturally follows slightly later

    which can be restated in the terms used above as:

    P is conscious of making a decision at time T, but only later becomes conscious of having made the decision.

    which leads to the same conclusion.

    Before proceeding, I’d like feedback on this much – too many opportunities for logic errors.

    Also, Peter, I’d appreciate any thoughts you have on the last paragraph of my comment 9. Thanks.

  13. Charles Wolverton says:

    On yet further reflection on Peter’s position, I now think that we may actually be converging on a common goal, but from different directions. Peter seems to be taking consciousness as a given but exploring how far apart the conscious and the unconscious really are. As a step in that direction, he argues that the intentionality we accept as a feature of being conscious may also apply to the unconscious, and I now see his point.

    I, OTOH, start from the position of taking the mental state we call “unconscious” as a given but am skeptical about whether that state and the state we call “conscious” really are complementary as suggested by those words. As a first step, I latch onto the possibility that consciousness performs an a posteriori reporting function rather than an a priori evaluation function, possibly part of a feedback loop in a decision process.

    So, it appears that we are both trying to “tear down that wall”, or at least make it less like the Great Wall and more like one of Frost’s “good fences”.

    The relevance of this to Levy’s piece is that it raises doubts (for me, anyway) about whether the question “Does Consciousness Matter?”, with its implication of contrast with unconsciousness, is really meaningful.

  14. Vicente says:


    consciousness performs an a posteriori reporting function rather than an a priori evaluation function, possibly part of a feedback loop in a decision process

    Why would you need consciousness to close the feedback loop?

    In addition, why would you need feedback to make a decision? It is not a control process.

    You will soon find that the logical scheme you are trying to build helps little.

  15. Charles Wolverton says:

    Just closing the open italic tag (sorry).


  16. Charles Wolverton says:

    “Why would you need consciousness to close the feedback loop?”

    I was confusing the possible role of “consciousness” as a reporting function in the decision process that is the subject of the present thread with its possible role as described in my comment 9 above, where it could be part of a tracking and prediction loop, ie, a “control process”. (Typical of the sloppiness of my comments on this thread – which I would like to think is somewhat uncharacteristic, since I actually try pretty hard to make them coherent.)

    “You will soon find that the logical scheme you are trying to build helps little.”

    I have no idea what this amusingly confident assertion means. This thread, while perhaps somewhat wasteful of Peter’s time, has actually helped me in several ways. Eg, it drove me back to some previous reading on intentionality including Chap 1 of Rorty’s PMN, which I found directly relevant to the present issue. I think I now understand both intentionality and the whole thrust of that chapter much better than before. So in that sense, trying to build the “logical scheme (??)” has helped (me) considerably.

  17. Vicente says:

    Charles, good if you feel your ideas are neater now.


    What does it mean that consciousness reports to…? What reports to what, or to whom? Is an experience reporting to a neuron-based algorithm? Or is it that qualia are used as input for the prediction system (whatever that is)? Even worse.

    When I look at the original paper from Libet I just find it impossible to understand what’s going on. He requests the subject to report when he is conscious of having made the decision to push a button. To me that request makes no sense, because the question is assuming that the decision was already (previously) and unconsciously made. In addition, I believe the experimental set-up is, from a psychological point of view, so artificial that the results can hardly represent anything at all.

    And then what and where is the [unconscious] stimulus or process needed to start the brain commanding the finger to push the button? 350 ms before or 2 hours, it doesn’t matter: what ordered the brain to do such a thing? Why did those neurons start firing at that precise but random moment?

    Maybe other possibilities could be explored, like: the decision was consciously made when it is detected by the apparatus (350 ms before) but the reporting is delayed by that time lag. Do you think this could be? Since we cannot directly observe the subject’s conscious experience, only listen to what he tells us, this cannot be checked. There is no way we can really confront the apparatus measurement with the subject’s real inner experience of making the decision.

    I am not denying the reality of unconscious decisions – just have a look at the behaviour of patients with dissociative conditions…

    As I said, until somebody explains to me how a decision is made (by the brain), I feel there is no basis to go on with this analysis.

    What I tried to say with my amusing assertion is that we are talking about consciousness; if you try to be so precise and logical in your approach (which I fully respect) you are going to lack the flexibility that I think this field requires.

  18. Richard Miles says:

    I think unconscious freewill describes how the unconscious works 24/7. It is the active unconscious that takes care of us from the inside, a silent partner in the brain and body, making us consciously aware in its many ways of its requirements. When asleep, the unconscious is free from the part-time interference and sometimes abuse of consciousness. Unconscious and conscious rely on each other in their different ways for the survival of the individual human. See my Philosophy of Psychology on my website perhapspeace.co.uk; comments 1 and 14 of Old Skool Consciousness also refer.
