Smonsciousness

Eric Thomson has some interesting thoughts about aliens, cats, and consciousness, contained in four posts on Brains.  In part 1 he proposes a race of aliens who are philosophical zombies – that is, they lack subjective experience; their brains work fine but there is, as it were, no-one home; no qualia. (I have to say that it’s not fully clear to me that cats actually have full consciousness in the human sense rather than being fairly single-minded robots seeking food, warmth, etc: but let’s not quibble.)

These aliens come to earth and decide, for their own good reasons, to set about quietly studying the brains of cats. They are pretty good at this; they are able to analyse the workings of the cat brain comprehensively and in doing so they discover that feline brains have a special mode of operation. The aliens name this ‘smonsciousness’; most of the cats’ brain activity is unsmonscious, but certain important functions take place smonsciously. The aliens are able to work out the operation of smonsciousness in some detail and get an understanding of it which is comprehensive except for the minor detail that unbeknownst to them, they don’t really get what it actually is. How could they? They have nothing similar themselves.

Part 2 points out that this is a bit of a problem for materialists. The aliens have put together what seems to be a complete materialist account. In one way this seems like a crowning achievement. But it leaves out what it is like for the cats to have experiences. Thomson acknowledges that materialists can claim that this is simply a matter of two different perspectives on the same phenomenon, rather in the way that temperature and mean kinetic energy turn out to be the same thing viewed in different ways. It’s a conceptual difference, not an ontological one. But it would be unprecedented to understand something from the lower level without being aware of its higher level version; and in any case, ex hypothesi the aliens know all there is to know about every level of operation of the cat brain. Isn’t this an embarrassment for monist materialism?

Part 3 proposes that all might be well if we could wall off phenomenal experience by arguing it needs a special, separate set of concepts which you can only acquire by having phenomenal experience. No amount of playing around with neural concepts will ever get you to phenomenal concepts (rather in the way that Colin McGinn suggests that consciousness is subject to cognitive closure). Neuroscience suffers from semantic poverty in respect of phenomenal experience.

Thomson rightly suggests that this isn’t a very comfortable place for the materialist case to rest, and that it would be better for materialists if the semantic poverty idea could be done away with.

So in Part 4 he suggests a cunning manoeuvre. He has actually set the bar for the aliens fairly low: they don’t need to have a full grasp of phenomenal experience, they merely need to become aware that in smonsciousness something extra is going on (it could be argued that the bar is too low here, but I think it’s OK: if we demand much more we come perilously close to demanding that the aliens actually have phenomenal experience, which is surely too much?).

Now again for their own good reasons the aliens build a simulation of their own consciousness with an additional module which adds smonsciousness when powered up; they call this entity Keanu. Keanu functions fine in alien mode, and when they switch on his smonsciousness he tells them something is going on which is totally inexpressible, other than by ‘Whoa…’  Now that may not seem satisfactory, but we can refine it further by supposing the aliens have powerful brains in which they can run the entire simulation. Keanu then is an internal thought experiment: an alien works it through mentally and ends up exclaiming “Dudes! We totally missed phenomenal experience!”  Hence the aliens become aware that something is missing and semantic poverty is vanquished.

What do we make of that? There are a lot of angles here; myself I’m suspicious of that interesting concept smonsciousness. It allows the aliens to understand consciousness perfectly in functional terms without knowing what it’s really about. Is this plausible? It means that when they consider the following:

O my Luve's like a red, red rose, 
That's newly sprung in June: 
O my Luve's like the melodie, 
That's sweetly play'd in tune.

…they know exactly and in full what caused Burns to use these words about red things and melodies as he did, but they have no idea at all what the essential significance of the words to Burns was. This is odd. If pressed sufficiently I think we are forced one of two ways. If we cling to the idea that smonsciousness gives a full explanation, we are obliged to say that the actual experience of consciousness contributes nothing, and is in fact an epiphenomenon. That’s completely tenable, but I would not want to go that way. Alternatively, we have to concede that the aliens don’t really have a full understanding, and that smonsciousness isn’t really viable. In short, we are more or less back with the dilemma as before.

What about Keanu? Let’s consider the internalised version. It’s important to remember that running Keanu in your brain does not confer phenomenal experience; it makes you aware that another person of a certain kind would claim phenomenal experience. So what the alien says is not “Dudes! We totally missed phenomenal experience!”, but “Dudes! These cats have a really wild kind of delusion that makes them totally convinced that they’re having some kind of special experience – but they can’t describe it or what’s special about it in any way! What a crazy illusion!”. Now Thomson has set the bar low, but surely becoming aware that creatures with smonsciousness claim to be conscious is not quite high enough?

62 thoughts on “Smonsciousness”

  1. Hi Peter,

    I am very much inclined to the idea that we only have two alternatives: epiphenomenalism (which is very unsatisfactory and imho plain crazy) or some form of causal efficacy/relevance of conscious experience to the organism (to which I am much more sympathetic).

    The delusion thing strikes me as interesting. Usually to say “oh, it’s just an illusion” is always connected to showing “how it really is”. Think of an optical illusion: you can claim that people seeing two lines with different lengths are deceived only if you, at the same time, can show that in fact both lines are of the same length and, in that case, what this illusion of yours really is. You somehow need to switch to another, more objective “vocabulary” (in this case geometry). But that you can only do if you can show that experience is epiphenomenal and the right vocabulary is only neuroscience (because otherwise the way “it really is” does involve the presence of qualia or similar). So, claiming that some experience is an illusion already presupposes that we can show how neuroscience gives us everything, but that’s exactly what we want to find out, isn’t it? I don’t really understand your conclusion on the illusion thing here, maybe you could explain it?

  2. Robert,

    I’m not meaning to say that phenomenal experience is an illusion, just that Thomson’s aliens might say that. He needs Keanu to show them that it exists, but my point is that they might just dismiss Keanu’s experience as deluded, on the view that their own materialist account is already complete and Keanu’s views are just an interesting side effect of the smonsciousness they already understand.

    Hope that makes sense!

  3. Thanks Peter for the discussion of my posts. I am about to run off for the day, but wanted to give a quick reply.

    Your point about how they would react to Keanu is fair, though I think I can make my main claim without getting too concerned about this. The semantic poverty thesis is this: if you restrict yourself solely to the theoretical resources of the natural sciences (esp neuroscience), then you are necessarily and forever screened off from coming to have a phenomenal conception of experience. The thought experiment in Part IV should show this is suspect.

    But you are right that I didn’t really make a strong case that they would react by believing in this new conception that they discovered. They might come to have this phenomenal conception, but then reject it as illusory (i.e., the aliens might be constituted to be like Dan Dennett, not like me or Chalmers). But they at least end up with this additional conceptual understanding that they can argue about.

    You are right to focus on this as a detail that I sort of helped myself to, but I think it doesn’t hurt my conceptual point. What strikes me is the quick and glib acceptance of the semantic poverty of neuroscience. Once that falls, a whole bunch of dualist arguments look a lot less interesting.

    I have a couple of minor points to make too….

    One, these aliens are not technically philosophical zombies, which are physical duplicates of us that lack qualia (that would undermine everything, no?). Rather, I envision them as having a radically different internal organization, at physical and functional levels. E.g., they might even be a big plastic look-up table, for all I care.

    Second, I purposely had them study cats rather than people to avoid the poem-like situation you brought up. That is, I wanted them to study a species that does not talk about or think about experiences. If they study people, then all bets are off. I mentioned this only briefly in Part III in note 1: “I am presently leaving out the possibility that the aliens could learn about phenomenal concepts if they talked to, or studied, humans, as it seems to be a cheat.”

    It would be a cheat because I built the scenario to be as hard as possible for the materialist, closing the usual loopholes you’d find with Mary and such. I didn’t want them to find out about consciousness by simply talking to people who already conceive of it, as such. That would likely make the concerns about semantic poverty moot. Hence, they study cats, and don’t get to sneak in phenomenal concepts from people that already have them.

    Also, as you point out, the scenario in Part IV implies they didn’t initially have a complete understanding. This is objection 9 in the objection list at the end of Part IV: “You made it seem as if the aliens had complete knowledge of the cats, but if they had more to learn (from additional simulations), then their knowledge wasn’t really complete.” That is a good objection, and I frankly didn’t plan to end up where I did when I was writing Parts I/II. If I were starting it anew, I would have to reframe things in the first couple of posts to qualify that ‘completeness’ claim.

  4. Maybe the materialist could argue that having concepts of any kind, including those required for studying the cat’s brain, implies having some kind of phenomenal experience? That would undermine the whole scenario.

    The scenario seems to assume an important part of its conclusion from the start: if (kinds of) zombies are possible, that is, if cognition/knowledge and phenomenal experience are uncorrelated, then knowledge can be incomplete. I think the best way out for the materialist is to argue that no kind of zombie is possible at all because any kind of knowledge is rooted in phenomenal experience.

    Of course this leaves room for dualism until we get a real account of the connection between knowledge and experience.
    But this assumption of a necessary connection between knowledge and experience seems rather intuitive to me: in some sense, it just is the basis of empiricism…

  5. It seems very much based on assuming the experience is fundamentally true.

    Awful mind experiment time! Let’s say you take a child and all through the years you keep them from seeing their body somehow (their head is through a hole in something that blocks that). Also you stop them from feeling their body except maybe by brushing their fingertips. And you attach some largish object to them (or even surgically implant some largish object under their skin) when they are babies.

    Okay so they can never really examine it and… well, now we have the child in later years have a conversation about bodies with someone who has never been in the horrible contraption we’re talking about. They talk about their bodies and… the kid in the trap starts to feel the other one doesn’t understand what it’s like to have a body, or at least one like his, because they never talk about this large part of them.

    The other person’s discussion leaves out what it’s like to have a body.

    Like the idea here ‘But it leaves out what it is like for the cats to have experiences.’

    If you’ve gotten this far, the point is: This doesn’t much engage that ‘what it’s like’ may be an entirely false conception. It’s asking to explain something which really is just a false perception.

  6. quen_tin wrote:
    Maybe the materialist could argue that having concepts of any kind, including those required for studying the cat’s brain, implies having some kind of phenomenal experience? That would undermine the whole scenario.

    Yes, true. As I mention in the first post (and comments to Part III), it could be that the entire situation is incoherent. Hence the thought experiment should never be allowed to get off the ground. I’m actually OK with that criticism, but my point is that even if you do let this (antimaterialist-friendly) scenario get off the ground, the materialist can *still* deal with it fairly effectively. This means life is good for the materialist.

    Callan: Good point: for people that accept the scenario from the start, there are two main options to avoid the antimaterialist argument. One, deny that experiences (as depicted in the scenario) are real. Two, undercut the semantic poverty thesis. The first option I don’t consider viable (dualism seems more reasonable, frankly) so I took the second.

  7. Ok, I understand. I posted the comment because the objections in the footnote of part 3 and those in Peter’s article here had convinced me that the situation was not so good for the materialist actually… It seems to me (intuitively) that the only way for the aliens to realise that something is missing in their first description would be its causal incompleteness, not any simulation outcome…

  8. quen_tin: models make unexpected predictions all the time. E.g., the paper ‘Null space in the Hodgkin-Huxley equations’ predicted that neurons could actually have action potentials extinguished by a depolarizing current pulse delivered at the right time (depolarizing inputs typically cause action potentials). For extremely complex nonlinear systems, it is extremely hard to see the implications of a model just by staring at the equations.

    And note the above is based on the model of a single neuron. For conscious brains, we are dealing with objects that are many orders of magnitude more complicated. I don’t expect anyone to have good intuitions about such matters, frankly, without a good deal of knowledge gained outside the library (i.e., in the laboratory).
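    To make the general point concrete, here is a minimal forward-Euler sketch of the standard 1952 Hodgkin-Huxley squid-axon equations (this is not the null-space protocol from the paper cited above, just an illustration: the spiking behaviour falls out of four coupled nonlinear equations that one could stare at for a long time without predicting it).

```python
import math

def simulate_hh(i_amp=10.0, t_on=5.0, t_off=45.0, t_max=50.0, dt=0.01):
    """Integrate the classic Hodgkin-Huxley equations (squid axon,
    1952 parameters, forward Euler) and return the voltage trace (mV)."""
    # Membrane capacitance (uF/cm^2), conductances (mS/cm^2), reversals (mV)
    C = 1.0
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387

    # Voltage-dependent gating rate functions (V in mV, rates in 1/ms)
    a_m = lambda v: 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    b_m = lambda v: 4.0 * math.exp(-(v + 65.0) / 18.0)
    a_h = lambda v: 0.07 * math.exp(-(v + 65.0) / 20.0)
    b_h = lambda v: 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    a_n = lambda v: 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    b_n = lambda v: 0.125 * math.exp(-(v + 65.0) / 80.0)

    v = -65.0  # resting potential
    # Gating variables start at their steady-state values at rest
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    n = a_n(v) / (a_n(v) + b_n(v))

    trace = []
    for step in range(int(t_max / dt)):
        t = step * dt
        i_ext = i_amp if t_on <= t < t_off else 0.0  # current step
        i_na = g_na * m**3 * h * (v - e_na)          # sodium current
        i_k = g_k * n**4 * (v - e_k)                 # potassium current
        i_l = g_l * (v - e_l)                        # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        trace.append(v)
    return trace

trace = simulate_hh()
# Count upward zero-crossings as spikes
spikes = sum(1 for a, b in zip(trace, trace[1:]) if a < 0.0 <= b)
print(f"peak voltage: {max(trace):.1f} mV, spikes: {spikes}")
```

    A sustained depolarizing current step elicits repetitive spiking here, which is the kind of qualitative behaviour the raw equations do not wear on their sleeve.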

    As I wrote in Part IV:
    Scientific practice produces conceptual novelty in more ways than you might expect from analytic philosophers’ rational reconstructions or conceptual analyses. Scientists tend to place much less stock in what they intuitively think is already “contained in” their conception of the world. We tend to hold onto our concepts with a loose grip, allowing data/evidence gained from experiments, and predictions gained from models/simulations, to expand our outlook in unexpected directions. In this way, our understanding of the implications of our theories is continually expanding and enriched.

    Especially in the biological study of complex systems such as brains, we do not see conceptual rumination as a particularly useful source of knowledge. While biology admittedly involves a great deal of prediction-building in ordinary language, significant stress is also placed on the importance of prediction based on quantitative models. Even something as simple as the human heart displays behavior that we would not predict from ordinary-language meditations about hearts. Why, for an object orders of magnitude more complex, would we expect to find a transparent conceptual route without the help of computers or simulations?

    I see this as a key point from all four of the posts.

  9. What disturbs me is not the idea that the aliens would learn something by running a simulation. What disturbs me is the idea that they would eventually “know that there is something which they cannot know”. I am a bit confused: is their knowledge complete or not at the end? (and could a simulation really show that the knowledge on which it is based (?) is incomplete? but that is a secondary question).

    Maybe I didn’t get all the subtleties of the argument, though, or maybe I misunderstood something…

  10. Well, they lack phenomenal experience, and they know this (just as Mary doesn’t see colors and she knows this). This seems no stranger than not photosynthesizing, even if you study photosynthesis.

  11. Callan: Great way of expressing the stakes. You’ve been soaking BBT up!

    Eric: Callan’s example is actually relevant to your scenario on the back-end as well, so to speak. You can use his story to undercut the possibility that there’s ‘any such thing’ as phenomenal experience, or you can use his story to undercut the empirical possibility of the aliens *knowing they LACK phenomenal experience.* The upshot is that there’s no real way to plausibly/naturalistically discuss/pose these questions short of some consideration of the informational and cognitive resources available. Why? Because the possibility of some kind of ‘Plato’s Cave Effect’ is very real (and, I would argue, inevitable).

    So for instance, when Keanu says, ‘Whoa!’ what *new information* is he noninferentially metacognizing that he wasn’t noninferentially metacognizing before? ‘Phenomenal’ information? How does that information differ from the information he noninferentially metacognized before?

    These questions actually cut to the root of Peter’s insight that your Mary-esque response is actually unmotivated. If it’s just more noninferential metacognitive information then it could just as likely be, “Meh, cats are zombies just like us…”

  12. Scott, we could definitely have a conversation about how the aliens would react to this new prediction of their models (this was the point of my ‘Objection #5’ in part iv of the series). But the main point is that the scenario undermines semantic poverty (see end of part iii). Even if the aliens became dualists because of Keanu, semantic poverty would still be undermined, so the argument would still flow.

    This isn’t to say there isn’t an interesting conversation to be had, just that I think it doesn’t help the dualist any.

  13. @Eric OK, then it is no stranger than studying smonsciousness without being smonscious, and Keanu does not bring anything new to the debate.

  14. Don’t you need a distinction between knowledge by acquaintance and knowledge by description to make sense of it? But doesn’t knowledge by acquaintance already presuppose some kind of phenomenal aspect (beside being a dubious notion)?

  15. quen_tin: Keanu shows that the semantic poverty thesis is suspect, he makes the aliens aware of subjectivity (or for Dennett types, it emerges as a previously unseen prediction that turns out to be rejected), so I wouldn’t say he brings nothing to the table. He undercuts one of the most pernicious assumptions in philosophy of mind, common to both dualists and materialists, that consciousness is forever sealed off from the neural.

    My point about photosynthesis is pretty vanilla. As I wrote in Part iii:
    We would not expect our aliens to experience red any more than we expect to photosynthesize upon acquiring expertise on the topic of photosynthesis. Materialists are committed to the view that experiences are brain states, not that studying consciousness magically induces such brain states.

    As for knowledge by acquaintance, I am agnostic. I don’t know enough to place a stake anywhere in such a specific philosophical view that is loaded in ways I probably don’t appreciate.

    For some (e.g., Sellars, in his critique of knowledge by acquaintance in Empiricism and the Philosophy of Mind) it is a sort of propositional knowledge. For others, to undergo an experience seems sufficient to have knowledge by acquaintance, and it requires no commitment to propositionally structured beliefs about the experience as such. So in that case even the cats have knowledge by acquaintance. But then it isn’t clear why we need a new term other than subjective experience. If it is synonymous with experience, then I guess I believe in it.

    At any rate, I’m agnostic.

  16. Let me try to formulate my criticism a bit more precisely (again, I am not very sure it is valid; maybe I totally missed the point).

    Here is your point as I understand it: in this story, the aliens come to know that there is something which they cannot know (the phenomenal aspect of experience). Hence the semantic poverty thesis is undercut: neural concepts are sufficient to know that something like phenomenal consciousness exists, even though they are not sufficient to become conscious or to describe conscious experience itself.

    The problem as I see it is the following.
    Maybe the aliens think that knowledge by acquaintance is possible? This cannot be, because the aliens do not have the concept of an experience. So the aliens probably understand knowledge as: knowledge by description, and what they know they cannot know can be expressed by a description, or a proposition.
    But it seems to me that the aliens know (by description) everything there is to know about the cat’s brain, or their own, since they are able to run a simulation of it: their neural description of it is complete. What else would you expect? What kind of knowledge do they think they miss exactly?

    It seems to me that the only way for them to discover that the neural description is incomplete is if one of their predictions does not match reality (simply running a simulation would not suffice). But if their predictions *cannot* match reality (because there is something they *cannot* know) then probably materialism is false: there is some extra ingredient in reality which makes any physical prediction fail.
    So the materialist might undercut the poverty thesis, but that would threaten materialism itself.

  17. Quentin, I wouldn’t put it in terms of them knowing there is something they cannot know; that is too loaded a way to express it for my tastes. They learn they lack subjective experiences because they do not instantiate the relevant brain states. Whether that counts as knowledge or not, I don’t want to touch that one 🙂

  18. But how would the alien express that ‘thing’ they lack if not in terms of knowledge, nor of direct experience (they can’t), and without making of it something as trivial as photosynthesis, nuclear fission, smonsciousness, or any physical process their brain lacks? I am a bit puzzled…
    Anyway thank you for your responses.

  19. quen_tin: They would say they have discovered subjectivity, which they lack, but it is not metaphysically any more special than photosynthesis. It is a prediction of their neuronal model that they didn’t initially see, but discovered when they extended their theory beyond its original target domain.

  20. Eric,
    Keanu functions fine in alien mode, and when they switch on his smonsciousness he tells them something is going on which is totally inexpressible, other than by ‘Whoa…’ Now that may not seem satisfactory, but we can refine it further by supposing the aliens have powerful brains in which they can run the entire simulation. Keanu then is an internal thought experiment: an alien works it through mentally and ends up exclaiming “Dudes! We totally missed phenomenal experience!”

    Why does the alien exclaim that? What does working it through involve that it gets to that point?

    What if they work it through and discover ‘distractional experience’ instead? That the Keanu thought construct, the various logic paths, the various triggers of electrical conduits, become rather distracted from raw firings in regard to contact with environment, and instead phantom environment readings sometimes occur by error, which are reacted to, which in turn creates more phantom readings, which are reacted to – a constant cycle of distraction. Synapses acting like a dog chasing its tail, chasing something just beyond their clear sight, so it appears something else. A distraction.

    I’m not sure how they work it through to find phenomenal experience?

  21. Hi Eric,
    Your diagnosis about where the problem for the materialist lies seems sound to me. The spot where I think you overstate your case is when you say “[the simulation] shows that the fact there is something it is like to be conscious could be a natural outgrowth of a neural theory.” If you said “suggests” instead of “shows” I would be on board. Your painting of the scenario seems equally as (un)convincing as the bare intuition of the “semantic poverty thesis.” We can say “imagine a zombie” and have lots of arguments about them, just as you can say “imagine aliens figured out consciousness exists from running a neural simulation.” So that part seems like a bare claim. Your actual arguments for your claim are circumstantial/plausibility arguments like “why would we expect to see the connection/prediction easily for a brain that is more complicated than a heart when we can’t predict everything about a heart….”

    I guess my objection is the one you called objection 1 in your footnote. But glad to hear life is good for my materialist friends…and I might be one myself…

  22. Hi Mike: I’m ok if you change ‘shows’ to ‘suggests’, or perhaps ‘strongly suggests.’ 🙂

    The root intuition at play here is that it seems not only plausible but likely that (say) a molecule-by-molecule simulation of a human would yield the simulation saying they are conscious and having experiences. This is basically what I have done with the aliens.

    For people who don’t find this assumption plausible (e.g., an interactionist dualist who thinks a new ingredient must literally be added to the inventory of the natural sciences to explain neuronal dynamics) my argument will not work (and frankly they will never swallow Post I).

  23. Wait, that is wrong.

    The whole argument is actually set up as a reductio (as I wrote in Part I: ‘In this attempted reductio, I assume physicalism is the case, and generate a putatively serious objection.’).

    The entire series of posts is not meant as an argument for materialism but as a rebuttal to a reductio against materialism. Rebutting this one argument isn’t to establish materialism per se, but only to escape this one particular objection.

    So that’s why the scenario in Part IV works (and I didn’t defend it too much there). Under materialism, we would expect a molecule-by-molecule duplicate to talk about being conscious, etc. That is, it seems materialism has plenty of resources to undercut semantic poverty, and this is how the legs are taken out from the reductio proposed in Part II.

    So I take back the final paragraph of my previous post.

  24. Hm, I must admit I do not understand your attempt to correct what you said in the previous post. You seem to be saying there are those who find it plausible that a materialist theory would predict consciousness and there are those who do not. I agree with that. I don’t see how a bare description/assertion of a scenario in which a materialist simulation predicts consciousness really has any force to push an undecided person one way or the other. The person who feels the force of arguments claiming that physical variables can never imply a consciousness property have no reason to give up that…intuition…just because you say: imagine aliens made a theory that does what you (think you) see to be impossible.

    Sorry if I’m being obtuse. I don’t know all the lingo but I think I know what “reductio” is. But in the lingo, isn’t your scenario “begging the question”—ie assuming what is to be proved? I know proved is too strong a word because you didn’t intend that your argument would be a proof…but I’m not sure it has any more force than saying “imagine a horse with wings” demonstrates the plausibility of a pegasus.

  25. I don’t see it as a stretch at all to say that materialists are committed to the claim that a molecule-by-molecule physical simulation of a human will say it is conscious, talk about how things feel, etc. This is not some weird outlier assertion about materialism, but seems almost tautological. For that reason, I don’t see the semantic poverty thesis as legit.

    That is not to beg the question, but to clarify the nature of the target of the reductio in what seem reasonable ways. Consider the following argument:

    E.g., If God exists then that implies evolution did not happen.
    Evolution did happen.
    Therefore God does not exist.

    Well that is a crappy argument, because the first premise is false. It is not to beg the question to show that evolution is consistent with nearly all typical conceptions of God. It’s simply to clarify the nature of the target (God) for purposes of showing the reductio is not very well thought out.

  26. Also, Mike, the bar is actually fairly low. All I have to do is argue that the scenario I point out is logically possible. Unless you are arguing that it is logically impossible that a simulation work out the way I’ve suggested, then any disagreement is likely inessential to the argument.

  27. I agree with Mike; it seems to me that the argument does not do justice to the dualist intuition if it treats consciousness as a bare physical process, however complex.
    As I attempted to say in my earlier comments, I think the problem has to do with the status of knowledge and semantics. The dualist would expect that the words expressed by Keanu are not simply the outcome of a physical process, but that they are *true*, which makes a difference. The materialist does not tell us where the difference lies, and for that reason the argument is not totally convincing.

  28. Quen_tin wrote the argument does not do justice to the dualist intuition if it is to consider consciousness as a bare physical process, however complex.

    The only dualistic intuition the argument against materialism relied upon was the semantic poverty thesis. And my argument gives a reason to think this intuition is wrong.

    As I said, my goal was only to block an argument against materialism, not to give an independent argument that materialism is true and dualism false. Just because this particular argument against materialism is weak doesn’t mean that materialism is true! It only means that this argument, relying on intuitions about two conceptual domains, is weak.

    However, perhaps there is a separate argument for dualism to be pulled from intuitions of distinctness of experiences and brain states, or some such. I know such intuitions are powerful, but that would be a different topic.

  29. Sorry to insist, but I do not think that the materialist does justice to the intuition which supports the semantic poverty thesis, unless she explains why the aliens have reasons to think that Keanu speaks the truth (or at least what he believes is the truth) rather than producing a sequence of words which happens by chance to receive a meaningful interpretation.
    Photosynthesis produces oxygen, not truth, and unless Keanu is qualitatively different, he doesn’t bring anything more than photosynthesis to the debate.

  30. quen_tin: as I argued above (my first point in this thread), it technically doesn’t matter whether what Keanu says is the truth, only that he introduces them to a new prediction that they didn’t have before: the prediction that there is subjective experience.

    How you predict the aliens will ultimately react to the prediction seems a kind of intellectual Rorschach test, but the very existence of the prediction is enough to undermine the semantic poverty thesis (and, hence, the specific antimaterialist argument under consideration).

  31. Eric: Ah, looking back through the comments, I think I might have mistaken you as making another kind of point. My error! Thanks for your replies! 🙂

  32. Eric,

    Correct me if I am wrong, but one of the conclusions of the story is that consciousness is the result of the brain’s architecture and information-processing dynamics, not of its biophysical components, which of course support the dynamics but are much the same at the molecular level as those of other organs. Or not: do neuron physiology and architecture play a central role (microtubules, etc.)? This is why, when Keanu is augmented by the simulation, he reaches a smonscious state. But the basic alien hardware remains the same, doesn’t it?

    After all, simulations just produce data in the same terms in which they were programmed. They provide values for aggregated parameters, or for each atom if required, but it is only data.

    Then you are assuming that the simulation is equivalent to the actual aliens. Why? Because they are not conscious in the first place, so it is pretty easy to equate aliens (machines) with other machines. So Keanu is not a simulation; he is a real alien “de facto”. This is why he can be augmented. You could, in the story, have connected the augmentation system directly to the aliens’ “brains”, with the required interface.

    So, taken to the extreme, your story acknowledges from the start that matter produces consciousness when it behaves in a certain manner. That is, when it processes information in a certain manner, irrespective of the physical substrate.

    Under this premise there will always be a possible software augmentation to enable conscious states in the aliens, or in any machine. The point is that the augmented Keanu will never be able to convey the experience to the other aliens; to begin with, the aliens won’t even have the word “experience”.

    The result of the story was already implicit in the boundary conditions. In this case the aliens are machines by nature, and materialists have to accept that machines can be conscious if built in the right way. Or is there anything special about brain matter?

    Try the opposite, or the complementary: what could you do to an ordinary cat to make it become a philosophical zombie cat?

  33. Vicente: well, I am not arguing directly that consciousness is a brain process, but blocking an argument that this view is untenable.

    In terms of how the simulation works, I purposely left the details fuzzy, but I envisioned that the simulation would actually include a simulated terrestrial brain connected to a simulated alien “brain” through some kind of crazy interface. I don’t envision it being simply more alien in basic architecture, but alien+cat.

    Hence, I’m not committed to the view that the physical substrate doesn’t matter. While I do lean toward that view, I don’t think it is a consequence of this scenario. One nice feature is that this scenario should actually be friendly to all materialists (identity theorists, including quantum theorists; functionalists; etc.).

    Regarding your final paragraph: I don’t think there could be a philosophical zombie cat in the technical sense. Since experiences are brain processes, you would never end up with identical brain processes with different conscious experiences (or with conscious experiences absent in one). (See comment 3.) You might end up with a cat-like creature that looks and acts like a cat but is not conscious, but that is different.

  34. Eric: Yes, but to block the argument you start with a system in which experiences are processes (of a certain kind).

    The real point is that materialism holds that experiences are just physical brain processes, and nothing else.

    In the zombie-cat case, if the processes are modified, the resulting creature won’t even act like a cat, strictly speaking.

    Then, from a formal perspective: if the experience stems from the processes, I doubt that the augmentation works; the new processes will have to be embedded in Keanu. And Keanu can’t be a simulation (different processes; you are assuming that the experience is transferred through the interface to the “core brain”, but the experience is the augmentation process and can’t be transferred). Will the simulation of a human brain produce consciousness?

    I don’t see how the argument is blocked more than it was.

  35. Vicente: you seem to be arguing that the augmentation procedure won’t yield experiences for Keanu. This is an interesting point. As I imagine it, the procedure is much more than a simple transmission of information between simulated minds. Keanu is augmented in a radical way, probably more cat than alien in some sense; it isn’t clear he is technically Keanu any more after the procedure. I imagine him more like a cat that can talk and think than an alien that has bare experiences of a cat.

    Vicente also asked: “Will the simulation of a human brain produce consciousness?”

    I doubt it, no more than I would expect a simulation of digestion to urinate.

    Also, I’m not sure why you are bringing up zombie cats again. A zombie is, by definition, a particle-by-particle physical duplicate that lacks experiences (and, for conscious content externalists, we’d say they also have the same histories of interaction with their environment). Such entities play no role in my scenario, and should not be possible under materialism. We could have an animatronic cat that behaved just like a cat, but that is different from a philosophical zombie.

  36. Eric: OK, that’s what I thought. Then why does the simulation of the cat’s brain processes, embedded in Keanu (who is at the same time a simulation himself), produce any experiences?!

    I guess that the aliens’ organisms are a sort of hardware, or better, firmware, in which there is no difference between simulations and real processes, so once the augmentation procedures are “uploaded” (or added?) there is no simulation, only ordinary physical processes with new dynamics.

    You are also assuming compatibility and interoperability between systems, for the aliens’ tech to run cat brain processes.

    It is too tricky: unless augmentation is not really augmentation, and simulation is not really simulation, it makes no sense to me.

    On the other hand, if the animatronic cat behaves just like a cat, it is reasonable to think that it is processing information just like a cat, so what would make the difference between it and a real biological cat, for one to be conscious and not the other?

    Sorry to be fussy; I’m probably particularly thick today, but I would like to understand your point, and I assume that this is not a science-fiction tale à la Asimov, but a real analysis.

  37. Vicente: It is all simulation, there is nothing about Keanu or augmented Keanu that is not simulation of a physical process. There are no experiences in the simulation. Keanu is not conscious, but a simulation of a conscious being. Similarly, a simulation or model of a conscious human is also not conscious, but could tell us interesting predictions about conscious experience, just as a simulation of digestion will tell us interesting things about digestion even though it doesn’t urinate.

    >> if the animatronic cat behaves just like a cat, it is reasonable to think that it is processing information just like a cat

    If you are a behaviorist, maybe. 🙂

  38. Eric, thank you, now we have come to the issue…

    “Similarly, a simulation or model of a conscious human is also not conscious, but could tell us interesting predictions about conscious experience, just as a simulation of digestion will tell us interesting things about digestion even though it doesn’t urinate.”

    Well, it all depends on what you consider interesting predictions about conscious experience. Once the NCCs are perfectly identified and characterised, it could be. Still, no information about the conscious experience itself will be produced.

    Simulations produce outputs and results in physical data terms, in the same terms in which they were programmed. From a brain simulation you will get data values for polarisation charge and current distributions, electrolyte concentrations, activation patterns, ion membrane channels’ molecular quantum states, voltage wave propagation dynamics along the axon… you name it. But no information about sheer experiences is produced at all. To begin with, there is no physical model for them. Actually, in theory that information should be available by direct inspection (measurement) of the brain, before any simulation is carried out, but unfortunately it is not.

    Now we could enter (I have no stamina for it) into the 1PPP and 3PPP debate…that leads nowhere.

    So, as I thought, the starting point is already materialistically biased.

    BTW, I am no behaviorist, just ignorant.

  39. Vicente: it sounds like you are endorsing the semantic poverty of neuroscience, which is precisely what I am contesting. The claim that, just because at the lower level we have individual neurons and such, there is no information about experiences per se seems wrong.

    Even ignoring the alien scenario, I would expect a molecule-by-molecule simulation of a human to talk about conscious experience. See my conversation with Mike Wiest above, where I articulated the argument structure in more detail.

  40. Eric, I don’t really endorse anything, but semantic poverty could also be supported by a lack of definition. Look at your expressions; they are a bit vague, hmmm.

    “but could tell us interesting predictions about conscious experience”

    “to talk about conscious experience”

    As I said, if we knew accurately which neural activity patterns and mechanisms support consciousness, then simulations could make predictions. I fully support the great value of simulations as research tools in neurology, or any other field. Maybe you could give a specific example of the concrete data about consciousness that the simulation is going to provide, to clarify the point.

    But we are tackling another problem: we are talking about the WOAAHHH…! when the augmentation is connected. This is, to me, the inconsistency in your story: you acknowledge that simulations don’t produce material results, but you are not so clear about brain simulations. Maybe because simulating digestive processes has nothing to do with simulating the brain processes underpinning consciousness. In the former, you have a physical model that covers the chain end-to-end, from inputs to products. In the latter, the physical model does not include the most important part of the products (even for epiphenomenalists).

  41. Vicente: a molecule-by-molecule simulation of me (interacting with a detailed simulation of an environment) would generate claims about visual phenomenology, hallucinations, what it is like to see a sunset, etc.

    I am not sure what to make of your claim that “you acknowledge that simulations don’t produce material results, but you are not so clear about brain simulations.”

    I have little to add to what I already wrote about that. You keep trying to say that I am making brain simulations, or simulations of people, special or different, but I am not. They are not qualitatively different from simulating weather patterns or respiration, and I have never intended to suggest otherwise.

  42. Eric, your statements are still very vague: “would generate claims about”…

    Whether the simulation is done at a molecule-by-molecule, electron-by-electron, or quark-by-quark level doesn’t make a difference. The point is that the simulation runs on a physical model.

    Your simulation can predict your body temperature, your blood urea nitrogen concentration, or the viscosity of myelin at 33ºC.

    I can accept that, if, as I said, we had thorough information about the NCCs, the simulation could predict that if you stimulate your ear with a C-note pitch, the corresponding activity pattern will be triggered, and you should be having the phenomenal experience of hearing a C note. But these are just pattern-recognition techniques…

    For these purposes, you can use all the variable values that describe the system state in your simulation, the positions and velocities of particles or the spin state of the n-th electron of the k-th molecule, but bear in mind that all of them relate to physical magnitudes, and in that sense brain simulations are ordinary simulations (albeit hellishly complex…).

    So far, nothing new. Now let’s say you want to take a step further, to the augmentation WOOAAAHHHH…. Could you please tell me what variables in your molecule-by-molecule simulation will account for the visual phenomenology, hallucinations, what it is like to see a sunset, etc.? I insist that the NCCs are not in question.

    Sorry, but you said that the augmentation will produce the WOOOAAAHHH, then that simulations don’t yield real stuff, and then that brain simulations are ordinary ones. Is it that the WOOAAHHH is not for real?

    I hope you get the point, and please don’t get me wrong: it is clear that you are the materialist and I am the one who’s not (no shame).

    Please, focus on the variables, they are the key to clarify the issue.

  43. I think the question is a bit cruel, in that no one has any currently working AI code (thank goodness… for the sake of AIs… oops, I’m off topic), and the question essentially asks to be taken through the AI code (and the code’s various interactions with its memory) to the point of ‘Whoa’.

    Phenomenological experience might be seen as rather like police taking evidence at a crime scene, before any conclusive determination about the crime can be made. Until that determination is made, just about any bit of evidence (or even things not taken as evidence!) might matter. However, once the determination is made, it turns out that some of the evidence taken was not relevant at all. Perhaps 90%+ of it. But you do not know that before the determination. It all seems to matter, or potentially matter.

    So phenomenological experience might be treated as being stuck in between, not knowing which inputs are irrelevant, so that it all seems to matter. I’m sure we’ve all been stumped by a riddle at some point, turning over the words time and again as each seems imbued with hidden meaning; then, when the solution to the riddle is given, it seems so obvious, and the words no longer seem so laced with hidden meaning or worthy of such close scrutiny. The capacity for words to be laced with hidden meaning at one point and not at another: whatever engages that quality is probably behind phenomenological experience.

  44. Vicente: I am not talking about NCCs (a way of talking I avoid because it already concedes too much to dualistic intuitions), but about simulating an entire brain/body in an environment. A simulation that can produce (simulated) expressions, etc. It’s all simulation, all the way up and down, sideways, across, inside, and out. I have never asserted otherwise, but you keep insisting I want the simulation to be more. Simulations make predictions that we don’t explicitly build in, and that’s what happens with the aliens.

    Understanding the basic reductio structure of the dualist argument, the argument that I am attempting to rebut, is pretty key here, as I clarified with Mike Wiest above (but I especially recommend reading the original four posts at philosophyofbrains more closely, as I think I was pretty clear there). It seems you want something different from, or something more than, that.

  45. Isn’t this kind of stuck in a loop: “It simulates it all, man!” “‘Whoa’ is part of it all; how does it simulate all that?” “It simulates it ALL!”? There’s still a black box left in the conversation, and saying the simulation just does all the stuff that’s in the black box, without any description of how, doesn’t satisfy anyone who says there’s something dualist in the black box. Seemingly, anyway.

  46. Eric, you seem to respond that it’s not a stretch for materialists to be committed to the claim. But that’s a given, and I don’t think it was what Mike asked; he asked how you prove these things to people who aren’t materialists. Providing a black box whose only detail is ‘it’s all done in there!’ isn’t convincing as proof, is it?

    I say this, as far as I understand it, being inclined towards materialism myself. We’re stuck at the point where we don’t have Turing-class AI code (thank goodness!), with which we could take someone pedantically through each step of that code and have them *light bulb / world quake* and say “I just spoke with that? It even gave some advice and made me realise some things about my life!” and realise that steps can become what we call ‘life’.

    Right now, at best we might have recourse to psychological maladies to help show the insides of the box. Stuff like Anton’s syndrome, where someone is quite blind but keeps confabulating the notion that they can see quite clearly. I.e., while the dualist-inclined might easily raise ideas against the black box, they might want to consider how readily inclined we as a species are to confabulation. Sure, it’s not proof against the dualist position in itself, but it is proof that one can be prone to confabulation, and a prompt for dualists to perhaps attempt to discover disproval methods for at least parts of their claims, just in case they can be disproved / were confabulations.

  47. Callan: your first paragraph is all I need: this isn’t an argument against dualism, but simply a blocking of an antimaterialist argument that thinks it has found a chink in materialism’s armor. But that argument requires the materialist to grant something that no materialist would grant.

    I agree that it would be nice to have more details, and you guys have convinced me that I need to do a better job of a) making the structure of the argument precise, and b) making it much more compelling that I am not pulling something out of thin air.

    Frankly I had no idea I would end up where I did. I thought I would end up with a standard phenomenal concepts strategy response in Part III, but ended up realizing I thought it was weak, so Part IV was born.

  48. Fair point to make: the dualist’s argument doesn’t suddenly evoke any dualist epiphany in the materialist. It doesn’t prove anything about how things are, but it does prove that the argument involved does not provoke the response in a materialist that it does in a dualist.


  53. With the arrival of “buy marijunana online” I see this site is finally getting serious about consciousness exploration. 🙂
