Sergio Resurgent

Sergio’s thoughts on computationalism evoked a big response. Here is his considered reaction, complete with bibliography…

+++++++++++++++++++

A few weeks ago, Peter was kind enough to publish my personal reflections on computational functionalism. I made it quite clear that the aim was to receive feedback and food for thought, particularly in the form of disagreements and challenges. I got plenty, more than I deserve, so a big “thank you” is in order, to all the contributors and to Peter for making it all possible. The discussions that are happening here go some way towards redeeming the bad rep that the bottom half of the internet usually gets. Over the last week or so, I’ve been busy trying to summarise what I’ve learned, what challenges I find unanswerable and what answers fail to convince. This is hard to do: even if we have all tried hard to remain on topic, our subject is slippery and comes with a lot of baggage, so apologies if I fail to address this or that stream – do correct me whenever you feel the need.

This post is going to be long, so I’ll provide a little guidance here. First, I will summarise where I am now; there may be nuanced differences from the substance of the original post, but if there are, I can’t spot them properly (a well-known cognitive limitation: after changing your mind, it’s very difficult to reconstruct your previous beliefs faithfully). There certainly is a shift of focus, which I hope will help clarify the overall conceptual architecture that I have in mind. After the summary, I will discuss the challenges that have been raised, the ones that I would expect to receive, and where disagreements remain unresolved. Finally, I’ll write some concluding reflections.

Computational Functionalism requires Embodiment.

The whole debate, for me revolves around the question: can computations generate intentionality? Critics of computational functionalism say “No” and take this as a final argument to conclude that computations can’t explain minds. I also answer “No” but reach a weaker conclusion: computations can never be the whole story, we need something else to get the story started (provide a source for intentionality), but computations are the relevant part of how the story unfolds.
The missing ingredient comes from what I call “structures”: some structures in particular (both naturally occurring and artificial) exist because they effectively measure some feature of the environment and translate it into a signal; further structures then use, manipulate and re-transmit this signal in organised ways, so that the overall system ends up showing behaviours that are causally connected to what was measured. Thus, I say that such systems (the overall structure) are to be understood as manipulating signals about certain features of the world.
This last sentence pretty much summarises all I have to say: it implies computations (collecting, transmitting, manipulating signals and then generating output). It also implies intentionality: the signals are about something. Finally, it shows that what counts are physical structures that produce, transmit and manipulate signals: without embodiment, nothing can happen at all. I conclude that such structures can use their intrinsic intentionality (as long as their input, transmission & integration structures work, they produce and manipulate signals about something), and build on it, so that some structures can eventually generate meanings and become conscious.

Once intentionality is available (and it is from the start), you get a fundamental ingredient that does something fundamental, which is over and beyond what can be achieved by computations alone.

Say that you find an electric cable with a variable current/voltage flowing in it. You suspect it carries a signal, but as Jochen would point out, without some pre-existing knowledge about the cable, you have no way of figuring out what is being signalled.
But what precedes brains already embodies the guiding knowledge: the action potentials that travel along axons from the sensory periphery to the central brain are already about something (as are the intracellular mechanisms that generate them at the receptor level). Touch, temperature, smells, colours, whatever. Their mapping isn’t arbitrary or optional; their intentionality is a quality of the structures that generate them.

Systems that are able to generate meanings and become conscious are, as far as we know (and/or for the time being), biological, so I will now move to animals and human animals in particular. Specifically, each one of us produces internal sensory signals that already are about something, and some of them (proprioception, pain, pleasure, hunger, and many more) are about ourselves.
Critics would say that, for any given functioning body, you can in theory interpret these signals as encoding the list of first division players of whichever sport you’d like; all you need is to design a post-hoc encoding to specify the mapping from signal to signified player – this is equivalent to our electrical cable above: without additional knowledge, there is no way to correctly interpret the signals from a third-party perspective. However, within the structure that contains these signals, they are not about sports; they really are about temperature, smell, vision and so forth: they reliably respond to certain features of the world, in a highly (if imperfectly) selective way.

Thus, you (all of us) can find meaningful correlations between signals coming from the world, between the ones that are about yourself, and all combinations thereof. This provides the first spark of meaning (this smells good, I like this panorama, etc). From there, going to consciousness isn’t really that hard, but it all revolves around two premises, which are what I’m trying to discuss here.

  • Premise one: intentionality comes with the sensory structure. I hope that much is clear. You change the structure, you change what the signal is about.
  • Premise two: once you have a signal about something, the interpretation isn’t arbitrary any more. If you are a third party, it may be difficult/impossible to establish exactly what a signal is signalling, but in principle there are ways to get empirically closer to the correct answer. However, these ways are entirely contingent: you need to account for the naturally occurring environment where the signal-bearing structure emerged.

If you accept these premises, it follows that:

a) Brains are in the business of integrating sensory signals to produce behavioural outputs. They can do so because the system they belong to includes the mapping: signals about touch are just that. Thus, from within, the mapping comes from hard constraints; in fact, you would expect these constraints to be enough to bootstrap cognition.
b) To do so, highly developed and plastic brains use sensory input to generate models of the world (and more). They compute in the sense that they manipulate, shuffle symbols around, and generate new ones.
c) Because the mapping is intrinsic, the system can generate knowledge about itself and the world around it. “I am hungry”, “I like pizza”, “I’ll have a pizza” become possible, and are subject-relative.
d) Thus, when I say “The only facts are epistemological” I actually mean two things (plus one addendum below): (1) they relate to self-knowledge, and (2) they are facts just the same (actually, I’d say they are the only genuine facts that you can find, because of (1)).

Thus, given a system, from a third-party perspective, in theory, we can:

i. Use the premises and conclusion a) to make sensible hypotheses on what the intrinsic mapping/intentionality/aboutness might be (if any).
ii. Use a mix of theories, including information (transmission) theory (Shannon’s) and the concept of abstract computations to describe how the signals are processed: we would be tapping the same source of disambiguation – the structure that produces signals about this but not that.
iii. This will need to closely mirror what the structures in the brain actually do: at each level of interpretational abstraction, we need to keep verifying that our interpretation keeps reflecting how the mechanisms behave (e.g. our abstractions need to have verifiable predictive powers). This can be done (and is normally done) via the standard tools of empirical neuroscience.
iv. If and only if we are able to build solid interpretations (i.e. ones that make reliable predictions), and able to cover all the distance between original signals, consciousness and then behaviour, we will have a second map: from the mechanisms as described in third-person terms, to “computations” (i.e. abstractions that describe the mechanisms in parsimonious ways).

Buried within the last passage there is some hope that, at the same time, we will learn something about the mental, because we have been trying to match, and remain in contact with, the initial aboutness of the input. We have hope that computations can describe what counts in the mechanisms we are studying because the mechanisms themselves rely on (exist because of) their intrinsic intentionality. This provides a third meaning to “the only facts are epistemological” (3): we would have a way to learn about otherwise subjective mental states. However, in this “third-party” epistemological route, we are using empirical tools, so, unlike the first-party view, where some knowledge is unquestionable (I think therefore I am, remember?), the results we would get will always be somewhat uncertain.

Thus: yes, there is a fact to be known about whether you are conscious (because it is an epistemological fact, otherwise it would not qualify as fact), but the route I’m proposing to find what this fact is is an empirical one, and can therefore only approximate the absolute truth. This requires grasping the idea that absolute truths depend on subjective ones. If I feel pain, I’m in pain; this is why a third party can claim there is an objective matter on the subject of my feeling pain. This “me” is possible because I am made of real physical stuff, which, among other things, collects signals about its own structure and the world outside.

At this point it’s important to note that this whole “interpretation” relies on noting that the initial “aboutness” (generated by sensory structures) is possible because the sensory structures generate a distinction. I see it as a first and usually quite reliable approximation of some property of the external environment: in the example of the simple bacterium, a signal about glucose is generated, and it becomes possible to consider it to be about glucose because, on average, and in the normal conditions where the bacterium is to be found, it will almost always react to glucose alone. Thus, intentionality is founded on a pretence, a useful heuristic, a reliable approximation. This generates the possibility of making further distinctions, and distinctions are a pre-requisite for cognition.
This is also why intentionality can sustain the existence of facts: once the first approximations are done, and note that in this context I could just as well call them “the first abstractions”, conceptual and absolute truths start to appear. 2+2 = 4 becomes possible, as well as “cogito ergo sum” (again).

This more or less concludes my positive case. The rest is about checking if it has any chance of being accepted by more than one person, which leads us to the challenges.

Against Computational Functionalism: the challenges.

To me, the weakest point in all this is premise one. However, to my surprise, my original post got comments like “I don’t know if I’m ready to buy it” and similar, but no one went as far as saying “No, the signal in your fictional bacterium is not about glucose, and in fact it’s not even a signal“. If you think you can construct such a case, please do, because I think it’s the one argument that could convince me that I’m wrong.

Moving on to the challenges I have received: Disagreeable Me, in comment #82, remarks:

If your argument depends on […] the only facts about consciousness or intentionality are epistemological facts, then rightly or wrongly that is where Searle and Bishop and Putnam and Chalmers and so on would say your position falls apart, and nearly everyone would agree with them. If you’re looking for feedback on where your argument fails to persuade, I’d say this is it.

I think he is right in identifying the key disagreement: it’s the reason I’ve re-stated my main argument above, unpacking what I mean by “epistemological fact” and why.
In short: I think the criticism is a category error. Yes, there are facts, but they subsist only because they are epistemological. If people search for ontological facts, they can’t find them, because there aren’t any: knowledge requires arbitrary distinctions and, at the first level, only allows for useful heuristics. However, once these arbitrary distinctions are made and taken for granted, you can find facts about knowledge. Thus: there are facts about consciousness, because consciousness requires making the initial arbitrary distinctions. Answering “but in your argument, somewhere, you are assuming some arbitrary distinctions” doesn’t count as criticism: it goes without saying.
This is a problem in practice, however: for people to accept my stance, they need to turn their personal epistemology on its head, so once more, saying “this is your position, but it won’t convince Searle [Putnam, Chalmers, etc…]” is not a criticism of my position. You need to attack my argument for that; otherwise you are implying “Searle is never going to see why your position makes sense”, i.e. you are criticising his position, not mine.

Similarly, the criticism that stems from Dancing with Pixies (DwP: the universal arbitrariness of computations) doesn’t really apply. This is what I think Richard Wein has been trying to demonstrate. If you take computations to be something so abstract that you can “interpret any mechanism to perform any computation”, you are saying “this idea of computation is meaningless: it applies to everything and nothing, it does not make any distinction”. Thus, I’ve got to ask: how can we use this definition of computation to make distinctions? In other words, I can’t see how the onus of refuting this argument falls on the computationalist camp (I’ve gone back to my contra-scepticism and I once more can’t understand how/why some people take DwP seriously). To me, there is something amiss in the starting definition of computation, as the way it is formulated (forgetting about cardinality for simplicity’s sake) allows drawing no conclusions about anything at all.
If any mechanism can be interpreted to implement any computation, you have to explain to me why I spend my time writing software/algorithms. Clearly, I could not be using your idea of computations, because I would not be able to discriminate between different programs (they are all equivalent). But I have a job, and I see the results of my work, so, in the DwP view, something else, not computations, explains what I do for a living. Fine: whatever that something is, it is what I call computations/algorithms. Very few people enjoy spending time in purely semantic arguments, so I’ll leave it to you to invent a way to describe what programming is all about while accepting the DwP argument. If you are able to produce a coherent description, we can use that in lieu of what I normally refer to as computation. The problem should be solved and everyone may save his/her own worldview. It’s also worth noting how all of this echoes the arguments that Chalmers himself uses to respond to the DwP challenge: if we start with perfectly abstract computations, we are shifting all the work onto the implementation side.

If you prefer a blunt rebuttal, this should suffice: in my day job I am not paid to design and write functionless abstractions (computations that can be seen everywhere and nowhere). I am OK with my very local description, where computations are what algorithms do: they transform inputs into outputs in precise and replicable ways. A given signal gets into this particular system and comes out transformed in this or that way. Whatever systems show the same behaviour are computationally equivalent. Nothing more to be said, please move on.

Furthermore, what Richard Wein has been trying to show is indeed very relevant: if we accept an idea of computations as arbitrary interpretations of mechanisms, we are saying “computations are exclusively epistemological tools” – they are, after all, interpretations. Thus, interpretations are explicitly something above and beyond their original subject. It follows that they are abstract and you can’t implement them. Therefore, a computer can’t implement computations: whatever it is that a computer does can be interpreted as computations, but that’s purely a mental exercise; it has no effect on what the computer does. I’m merely re-stating my argument here, but I think it’s worth repeating. We end up with no way to describe what our computers do. Richard is trying to say, “hang on, this state of affairs in a CPU reliably produces that state”, and a preferential way to describe this sort of transition in computational terms does exist: you will start describing “AND”, “OR”, “XOR” operators and so on. But doing so is not arbitrary.
If I wanted to play the devil’s advocate, I would say: OK, doing so is not arbitrary because we have already arbitrarily assigned some meaning to the input. Voltage arriving through this input is to be interpreted as a 1, no voltage is a 0, same for this other input. On the output side we find:
1,1 -> 1, while 0 is returned for all other cases. Thus this little piece of CPU computes an AND operation. Oh, but now what happens if we invert the map on both the inputs and assume that “no voltage equals 1”?
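
To make the devil’s advocate point concrete, here is a toy sketch (in Python, purely illustrative): the physics of the gate is fixed, only the labelling of voltages as bits changes, and the very same hardware reads as AND under one labelling and as NOR once we decide that “no voltage equals 1” on the inputs.

def physical_gate(v_a, v_b):
    # The physics is fixed: the output carries voltage only if both inputs carry voltage.
    return v_a and v_b

def interpret(gate, bit_to_voltage, voltage_to_bit):
    # Wrap the fixed hardware in a chosen labelling of bits as voltages (and back).
    return lambda a, b: voltage_to_bit(gate(bit_to_voltage(a), bit_to_voltage(b)))

high_means_1 = lambda bit: bit == 1      # standard labelling: 1 -> voltage present
high_means_0 = lambda bit: bit == 0      # inverted labelling: 1 -> no voltage
read_output  = lambda v: 1 if v else 0

and_gate = interpret(physical_gate, high_means_1, read_output)
nor_gate = interpret(physical_gate, high_means_0, read_output)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "NOR:", nor_gate(a, b))
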
This is where I find some interest: computers are useful, as John Davey remarked, precisely because we can change the meaning of what they do; that’s why they are versatile.

The main trouble is that computers can’t be held to represent anything. And that trouble is precisely the reason they were invented – numbers (usually 1s and 0s) are unlimited in the scope of things that they can represent […].

This is true, and important to accept, but does not threaten my position: if I’m right, intentional systems process signals that have fixed interpretations.

Our skin has certain types of receptors that, when activated, send a signal which is interpreted as “danger! too hot!” (this happens when something starts breaking up the skin cells). If you hold dry ice in your hand, after one or two seconds you will receive that signal (because ice crystals form in your skin and start breaking cells: dry ice is Very Cold!) and it will seem to you that the ice you’re holding has abruptly become very hot (while you still receive the “too cold” signal as well). It’s an odd experience (and a dangerous one – be very careful if you wish to try it), which comes in handy here: my brain receives the signal and interprets it in the usual way; the signal is about (supposedly) too-hot conditions, and it can misfire, but it still conveys the “too hot” message. This is what I was trying to say in the original post: within the system, certain interpretations are fixed, we can’t change them at will, they are not arbitrary. We do the same with computers, and find that we can work with them, write one program to play chess, another one for checkers. It’s the mapping on the I/O side that does the trick…

Moving on, Sci pointed to a delightful article from Fodor, which challenges Evolutionary Psychology directly, and marginally disputes the idea that Natural Selection can select for intentionality. I’m afraid that Fodor is fundamentally right in everything he says, but suffers from the same kind of inversion that generates DwP and the other criticisms. I’ll explain the details (but please do read the article: it’s a pleasure you don’t want to deny yourself): the central argument (for us here) is about intentionality and the possibility that it may be “selected for” via natural selection. Fodor says that natural selection selects, but it does not select “for” any specific trait. What traits emerge from selection depends entirely on contingent factors.
On this, I think he is almost entirely right.

However, at a higher level, an important pattern does reliably emerge from blind selection: because of contingency, and thus the unpredictability of what will be selected (still without the “for”), what ultimately tends to accumulate is adaptability per se. Thus, you can say that natural selection weakly selects for adaptability. Biologically, this is defensible: no organism relies on a fixed series of well-ordered and tightly constrained events happening at precise moments in order to survive and reproduce. All living things can survive a range of different conditions and still reproduce. The way they do this is by – surprise! – sampling the world and reacting to it. Therefore: natural selection selects for the seeds of intentionality, because intentionality is required to react to changing conditions.
Now, on the subject of intentionality, and to show that natural selection can’t select for intentions, Fodor uses the following:

Jack and Jill
Went up the hill
To fetch a pail of water
Jack fell down
And broke his crown
And thus decreased his fitness.

(I told you it’s delightful!)
His point is that selection can’t act on Jack’s intention of fetching water, but only on the contingent fact that Jack broke his crown. He is right, but miles from our mark. What is selected for is Jack’s ability to be thirsty: he was born with internal sensors that detect lack of water, and without them he would have been dead long before reaching the hill. Mechanisms to maintain homeostasis in an ever-changing world (within limits) are not optional; they exist because of contingency: their existence is necessary because the world out there changes all the time. Thus: natural selection very reliably selects for one trait: intentionality. What intentionality is about, and how it is instantiated in particular creatures, is certainly determined by contingent factors, but it remains that intentionality about something is a general requirement for living things (unless they live in a 100% stable environment, which is made impossible by their very presence).

However, Fodor’s argument works a treat when it’s used to reject some typical Evolutionary Psychology claims such as “Evolution selected for the raping-instinct in human males”: such claims might be pointing at something which isn’t entirely wrong, but they are nevertheless indefensible, because evolution directly selects for things that are far removed from complex behaviours. Once intentionality of some sort is there, natural selection keeps selecting, and Fodor is right in explaining why there are no general rules on what it selects for (at that level): when we are considering the details, contingency gets the upper hand.

What Fodor somehow manages to ignore is the big distance between raw (philosophical) intentionality (the kind I’m discussing here – AKA aboutness) and fully formed intentions (desires and the like). We all know the two are connected, but they are not the same: it’s very telling that Fodor’s central argument revolves around the second (Jack’s plan to go and fetch some water), but only mentions the first in very abstract terms. Once again: selection does select for the ability to detect the need for a given resource (when this need isn’t constant) and for the ability to detect the presence/availability of needed resources (again, if their levels aren’t constant). This kind of selection-for is (unsurprisingly) very abstract, but it does pinpoint a fundamental quality of selection, which is what explains the existence of sensory structures, and thus of intrinsic intentionality. What Fodor says hits the mark on more detailed accounts, but doesn’t even come close to the kind of intentionality I’ve been trying to pin down.

The challenges that I did not receive.

In all of the above, I think one serious criticism is missing: we can accept that a given system collects intentional signals, but how does the system “know” what these signals are about? So far, I’ve just assumed that some systems do. If we go back to our bacterium, we can safely assume that such a system knows exactly nothing: it just reacts to the presence of glucose in a very organised way. It follows that some systems use their intrinsic intentionality in different ways: I can modulate my reactions to most stimuli, while the bacterium does not. What’s different? Well, to get a glimpse, we can step up to a more complex organism, and pick one with a proper nervous system, but still simple enough. Aplysia: we know a lot about these slugs, and we know they can be conditioned. They can learn associations between neutral and noxious stimuli, so that after proper training they would react protectively to the neutral stimulus alone.

Computationally there is nothing mysterious (although biologically we still don’t really understand the relevant details): input channel A gets activated and carries inconsequential information; after this, channel B starts signalling something bad and an avoidance output is generated. Given enough repetitions the association is learned, and input from channel A short-cuts to produce the output associated with B. You can easily build a simulation that works on the hypothesis “certain inputs correlate” and reproduces the same functionality (see the sketch below). Right: but does our little slug (and our stylised simulation) know anything? In my view, yes and no: trained individuals learn a correlation, so they do know something, but I wouldn’t count it as fully formed knowledge, because it is still too tightly bound; it still boils down to automatic and immediate reactions. However, this example already shows how you can move from bare intentionality to learning something that almost counts as knowledge. It would be interesting to keep climbing the complexity scale and see if we can learn how proper knowledge emerges, but I’ll stop here, because the final criticism that I haven’t so far addressed can now be tackled: Searle’s Chinese room.
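
Before getting to the room, here is the kind of toy simulation I have in mind for the “certain inputs correlate” hypothesis (a deliberately minimal sketch in Python, nothing more than an illustration):

class ToySlug:
    def __init__(self, learning_rate=0.25, threshold=0.5):
        self.association = 0.0          # strength of the learned A -> avoidance link
        self.learning_rate = learning_rate
        self.threshold = threshold

    def step(self, channel_a, channel_b):
        # Channel B is innately noxious; channel A triggers avoidance only once the
        # learned association has grown strong enough.
        avoid = channel_b or self.association >= self.threshold
        if channel_a and channel_b:     # the two inputs correlate: strengthen the link
            self.association = min(1.0, self.association + self.learning_rate)
        return avoid

slug = ToySlug()
print(slug.step(channel_a=True, channel_b=False))   # False: the naive slug ignores the neutral stimulus
for _ in range(4):
    slug.step(channel_a=True, channel_b=True)        # pair the neutral and the noxious stimulus
print(slug.step(channel_a=True, channel_b=False))   # True: the neutral stimulus alone now triggers avoidance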

To me, the picture I’m trying to build says something about what’s going on with Searle in the room, and I find this something marginally more convincing than all the rebuttals of the Chinese room argument that I know of. The original thought experiment relies on one premise: that it is possible to describe how to process the Chinese input in algorithmic terms, so that a complete set of instructions can be provided. Fine: if this is so, we can build a glorified bacterium, or a computer (a series of mechanisms), to do Searle’s job. The point I can add is that even the abilities of Aplysiae exceed the requirements: Searle in the room doesn’t even need to learn simple associations. Thus, all the Chinese room shows us is that you don’t need a mind to follow a set of static rules. Not news, is it? Our little slugs can do more: they use rules to learn new stuff, associations that are not implicit in the algorithm that drives their behaviour; what is implicit is that correlations exist, and thus learning them can provide benefits.
Note that we can easily design an algorithm that does this, and note that what it does can count as a form of abstraction: instead of a rigid rule, we have a meta-rule. As a result, we have constructed a picture that shows how sensory structures plus computations can account for both intentionality and basic forms of learning; in this picture, the Chinese room task is already surpassed: it’s true that neither Searle nor the whole room truly knows Chinese, because the task doesn’t require knowing it (see below). What is missing is feedback and memories of the feedback: Searle chucks out answers, which count as output/behaviour, but they don’t produce consequences, and the rules of the game don’t require keeping track of either the consequences or the series of questions.

Follow me if you will: what happens if we add a different rule, and say that the person who feeds in the questions is allowed to feed questions that build on what was answered before? To fulfil this task Searle would need an additional set of rules: he would need instructions to keep a log of what was asked and answered. He would need to recall associations, just like the slugs. Once again, he would be following an algorithm, but with an added layer of abstraction. Would the room “know Chinese” then? No, not yet: feedback is still missing. Imagine that the person feeding the questions is there with the aim of conducting a literary examination (the questions are about a particular story) and that whenever the answers don’t conform to a particular critical framework, Searle gets a punishment, while good answers earn him a reward. Now: can Searle use the set of rules from the original scenario and learn how to “pass the exam”? Perhaps, but can he learn how to avoid the punishments without starting to understand the meanings of the scribbles he produces as output? (You guessed right: I’d answer “No” to the second question.)

The thing to note is that the new extended scenario is starting to resemble real life a little more: when human babies are born, we can say that they will need to learn the rules to function in the world, but also that such rules are not fixed/known a priori. Remember what I was saying about Fodor and the fact that adaptability is selected for? To get really good at the extended Chinese room game you need intentionality, meaning, memory and (self-)consciousness: so the question becomes, not knowing anything about literary theory, would it be possible to provide Searle with an algorithm that will allow him to learn how to avoid punishments? We don’t know, so it’s difficult to say whether, after understanding how such an algorithm may work, we would find it more or less intuitive that the whole room (or Searle) will learn Chinese in the process. I guess Peter would maintain that such an algorithm is impossible; in his view, it has to be somewhat anomic.
My take is that to write such an algorithm we would “just” need to continue along the path we have seen so far: we need to move up one more level of abstraction and include rules that imply (assume) the existence of a discrete system (a self), defined by the boundaries between input and output. We also need to include the log of the previous question-answer-feedback loops, plus, of course, the concepts of desirable and undesirable feedback (is anybody thinking “Metzinger” right now?). With all this in place (assuming it’s possible), I personally would find it much easier to accept that, once it got good at avoiding punishments, the room started understanding Chinese (and some literary theory). A toy sketch of what I mean follows below.
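
Purely as an illustration of the ingredients I’m gesturing at (the log of exchanges and the desirable/undesirable feedback; nothing this simple captures the “self” part), a toy sketch in Python, with made-up rules and a made-up judge, might look like this:

import random

def extended_room(rules, judge, questions, rounds=200):
    # rules: named ways of turning (question, log) into an answer string
    # judge: opaque to the room; returns +1 (reward) or -1 (punishment)
    scores = {name: 0.0 for name in rules}      # learned desirability of each rule
    log = []                                    # the required memory of question/answer/feedback
    for _ in range(rounds):
        question = random.choice(questions)
        name = max(scores, key=lambda n: (scores[n], random.random()))  # prefer rewarded rules
        answer = rules[name](question, log)
        feedback = judge(question, answer)
        scores[name] += feedback
        log.append((question, answer, feedback))
    return scores

# Toy instantiation: the symbols mean nothing to the room; only the feedback shapes its choices.
rules = {
    "reverse": lambda q, log: q[::-1],
    "babble":  lambda q, log: "".join(random.choice("甲乙丙丁") for _ in q),
}
judge = lambda q, a: 1 if a == q[::-1] else -1   # the examiner's hidden critical framework
print(extended_room(rules, judge, ["甲乙丙", "丁丙乙甲"]))   # "reverse" ends up with the higher score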

I am happy to admit that this whole answer is complicated and weak, but it does shed some light even if you are not prepared to follow me the whole way: at the start I argue that the original task is not equivalent to “understanding Chinese”, and I hope that what follows clarifies that understanding Chinese requires something more. This is why the intuition the original Chinese room argument produces is so compelling and misleading. Once you imagine something more life-like, the picture starts blurring in interesting ways.

That’s it. I don’t have more to say at this moment.

Short list of the points I’ve made:

  • Computations can be considered as 100% abstract, thus they can apply to everything and nothing. As a result, they can’t explain anything. True, but this means that our hypothesis (Computations are 100% abstract) needs revising.
  • What we can say is that real mechanisms are needed, because they ground intentionality.
  • Thus, once we have the key to guide our interpretation, describing mechanisms as computations can help to figure out what generates a mind.
  • To do so, we can start building a path that algorithmically/mechanistically produces more and more flexibility by relying on increasingly abstract assumptions (the possibility to discriminate first, the possibility to learn from correlations next, and then up to learning even more based on the discriminations between self/not-self and desirable/not desirable).
  • This helps address the Chinese room argument, as it shows why the original scenario isn’t depicting what it claims to depict (understanding Chinese). At the same time, this route allows us to propose some extensions that start making the idea of conscious mechanisms a bit less counter-intuitive.
  • In the process, we are also starting to figure out what knowledge is, which is always nice.

I hope you’ve enjoyed the journey as much as I did! Please do feel free to rattle my cage even more; I will think some more and try to answer as soon as I can.

Bibliography

Bishop, J. M. (2009). A cognitive computation fallacy? Cognition, computations and panpsychism. Cognitive Computation, 1(3), 221-233.

Chalmers, D. J. (1996). Does a rock implement every finite-state automaton? Synthese, 108(3), 309-333.

Chen, S., Cai, D., Pearce, K., Sun, P. Y., Roberts, A. C., & Glanzman, D. L. (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife, 3, e03896.

Fodor, J. (2008). Against Darwinism. Mind & Language, 23(1), 1-24.

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424.

72 thoughts on “Sergio Resurgent”

  1. Hi Sergio,

    I haven’t yet had the time to absorb the entirety of your post, but I’m a little confused—at first, you seem to agree with me that the interpretation of some physical process, such as electrons moving down a wire, as a signal—barring any information that could pin down how to interpret it—is arbitrary (as an extreme case, a single bit could equally well tell whether the English attack by land or sea, or whether I’ve had cereal or croissants for breakfast).

    But later on, you seem to disagree with the Dancing with Pixies/Putnam’s stone/Searle’s wall-argument. However, those aren’t different things: they’re just the same idea that you can map semantic content (whether the signal is about the English attacking or my breakfast) to logical states (bit one or zero) to physical states (high voltage/low voltage) in an arbitrary way. High voltage can mean 0 or 1, as can low; 0 or 1 can mean the English attacking by sea, or that my breakfast consisted of croissants—all combinations are possible, they’re arbitrary conventions set by the user, by intentional beings (it is our intentionality that ultimately pins down the reference: we can interpret some signal as being about something because it causes our mental content to be about something, provided we have the necessary faculties of understanding).

    This is, in principle, not any different from the fact that a ciphertext (say, a one-time pad generated one) can’t be read without having access to the key—any text of equal length can be generated by applying a suitable key. Perhaps to illustrate, take a text such as this one, convert it to numbers by some means, say ASCII-encoding, then add a random number of equal length, and you have the ciphertext—and knowing the key, you can recover the original message. But without this key, you can try every random number of the same length—and thus, ‘decrypt’ it into any plaintext of the same length. Thus, the mere ciphertext alone only tells you something about the length of the message.
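
    To make this concrete, here is a toy version in Python (using XOR instead of addition, which changes nothing essential): holding only the ciphertext, you can construct a key that ‘decrypts’ it into any plaintext of the same length.

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    message    = b"attack by sea"
    key        = os.urandom(len(message))        # the one-time pad
    ciphertext = xor_bytes(message, key)

    # With only the ciphertext in hand, pick whatever plaintext you fancy and derive a 'key' for it:
    desired  = b"had croissant"                  # any text of the same length
    fake_key = xor_bytes(ciphertext, desired)
    print(xor_bytes(ciphertext, fake_key))       # b'had croissant'
    print(xor_bytes(ciphertext, key))            # b'attack by sea', the genuine message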

    The same thing now works for a dynamical case, a ‘ciphertext spelled out in time’: this is all that, ultimately, a computation is. Using the right decryption key, the right interpretational mapping, you can generate the outcome of the computation from some physical system’s time evolution; but lacking this key, any computation of the same ‘length’ can be ‘decoded’ from the system. This corresponds to the sort of thing we experience going on in our brains in the same way as Shannon’s information corresponds to semantic meaning: it basically ignores it, focusing instead on the gross, quantitative aspects (the length of the message, the number of states traversed/complexity of the computation, etc.).

    All of this is really just an elaboration on Newman’s famous objection to Russell’s structural realism: all that structure—in a sense I will define in a second—tells us are merely questions of quantity, of cardinality. Here, ‘structure’ is meant in the sense of ‘relation’: a structure on some set (of states, of numbers, of words, of whatever) D is a set of relations on D, where a relation is itself a set of ordered tuples of elements of D. That is, if we have some set D={a,b,c}, then a relation on D may be R={(a,b),(b,c),(c,a)}.

    An interpretation of this might be given by considering a, b, and c to stand for Alice, Bob, and Charlie, and the relation R to be ‘is in love with’: Alice loves Bob, Bob loves Charlie, and Charlie loves Alice. Together, the tuple (R,D) is the structure of amorous intentions on the base set D. Now, note again that this interpretation is arbitrary: just knowing the structure (R,D) does not serve to fix it.

    But then, Newman showed that all knowing the structure (R,D) really tells us is the cardinality of D—i.e. how many elements D has. The reason is, basically, that for any set D, we can consider that it has any structure (R,D) at all, for R being any relation (or indeed set of relations). To see this, first note that every relation R is just a set of ordered tuples of elements of D; if it is a unary relation, those tuples will have one member (and be singletons consequently), for a binary relation, it will be 2-tuples, and generally, for a k-ary relation (a relation between k elements of D), it will be k-tuples.

    But now note that with R, we already assert the existence of D: without D, R has no extension, and hence, can’t come out true. But with D, there also exist sets of arbitrary tuples of elements of D. This means, however, that any relation R* definable on D is just as ‘real’ as our original relation R is—i.e. we can’t say with greater justification that the structure (R,D) is real than we can say that (R*,D) is real. But the existence of all the R* follows directly from the existence of a set with the cardinality of D: hence, if all we know is structure, all we know is the cardinality of D.
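
    For concreteness, a small toy illustration of the counting point (Python, with relations represented as sets of ordered pairs):

    from itertools import product, chain, combinations

    D = ("a", "b", "c")
    pairs = list(product(D, D))                      # the 9 possible ordered pairs

    # Every subset of these pairs is a binary relation on D, and all of them 'exist' equally:
    relations = list(chain.from_iterable(combinations(pairs, k) for k in range(len(pairs) + 1)))
    print(len(relations))                            # 512 = 2**9 candidate structures

    R      = {("a", "b"), ("b", "c"), ("c", "a")}    # 'Alice loves Bob, Bob loves Charlie, Charlie loves Alice'
    R_star = {("a", "a"), ("c", "b")}                # an arbitrary rival relation, just as definable
    print(R <= set(pairs), R_star <= set(pairs))     # True True: both are structures on the same D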

    This is what is behind the DwP-reductio: all we know is the cardinality of the state space of the physical system that the computation supervenes on; hence, all computations that can be implemented on a state space of that cardinality can equally well be considered to be performed by the system. It’s also behind the uncrackability of the one-time pad: the ciphertext gives us merely a bound on the length of the message, but is compatible with any message of that length.

    Coming back to your proposal, Sergio, it seemed to me in your last post that you recognized these difficulties, and hence, proposed that we need to go beyond structure (I’m not sure, but I don’t think we use the word the same way, by the way, but since I’ve given a definition above you are probably in a better position to figure out where agreements or disagreements lie). This is why I did not voice opposition to your premise one: I thought you wanted to ground intentionality in the physical reaction of the bacterium to a glucose gradient, as somehow ’embodying’ that gradient in itself, thus, ‘referring’ to it. While I’m not sure entirely how that’s supposed to work, I can go along with it, at least on a hypothetical basis. Now, however, I’m not so sure anymore: if you consider the whole thing to be merely due to ‘signals’ and computation, then I think the above poses a counterargument, as you wished for.

    Because even if the signals are caused by something in the environment, it does not follow that they are, in the relevant sense, about that something. Mere causality does not pin down reference—how would e.g. the bacterium, having access solely to the signal, only to the effect, and not its cause, figure that this signal is ‘about glucose’? The argument above applies: it could be about anything. But our mental states are definite things, they are about definite (albeit sometimes not fully characterized) things. Otherwise, the bacterium would have to have access to the causal connection between the glucose and the signal it causes in order to use said signal as a representation of glucose. But in order to have access to that causal relationship, it would need to have some representation thereof, and then, we fall right into homuncular thinking.

    (Plus, and this is more of a general criticism of any causal theory of reference, how could it be wrong? Because we are sometimes wrong in our intentions. But if the required signal is reliably caused by the same entity, then there would be no room for error. If, say, cats always cause a given signal in the sensory apparatus, then this signal can be about cats and cats only. If, however, sometimes the signal is triggered by a dog, then it simply ceases to be about cats: since what a symbol means is given by what causes it, it now means ‘cat-or-dog’. This is hence sometimes called the ‘disjunction problem’.)

    Sorry for the rambling, but I did want to clarify this upfront: do you think that intentions are ultimately down to signals? If so, how do they acquire their definite content? Or, do you think that there needs to be something beyond signals, something beyond computation?

  2. Sergio, thanks for the link to Chalmers’ paper, which I hadn’t seen before. While Chalmers goes into far more detail, and says things in a more technical way, I would say that my point is broadly the same as his.

  3. Some more thoughts:

    If any mechanism can be interpreted to implement any computation, you have to explain to me why I spend my time writing software/algorithms.

    All mechanisms can be interpreted to implement any computation, but not all these interpretations are very user-friendly; what you are doing when you write a program is writing it in such a way that the human user you implicitly assume is easily capable of comprehending its outputs. For instance, your programs will typically present their outputs in the form of text (or graphics), rather than, say, in terms of Gödel numbers or arrangements of sand grains or just raw voltage patterns in some memory. Text is something we seem to understand immediately, while patterns of sand grains aren’t; but of course, this is only due to our own interpretive faculties having been fine-honed for the understanding of text. An organism is imaginable such that it finds patterns of sand grains far more intelligible than strings of letters.

    But basing a notion of computation on this will not work, at least not without smuggling the answer to our problem into the foundations of the framework: human beings have some form of preferred mapping because we are intentional agents; so defining the notion of computation by appealing to that which you do in order to produce human-usable programs defines computation ultimately in terms of intentionality. Certain outputs are meaningful to us, others are not, but this meaningfulness is determined by the way in which we are intentional. It does not tell us anything about the outputs, or about the computation, but merely about us. This is the reason why the account of computation is abstracted away from that which humans find meaningful.

    And just because everything can be viewed as some form of computation doesn’t mean the notion of computation becomes meaningless, because there is a meaningful question associated with whether something is seen as a computation, or, say, as a physical process (or a biological, historical, social one, etc.). It just means that computation is use, that it is not an a-priori property of a physical system. It is more a kind of tool, which is not a category in which the world is naturally divided: things can be used as tools, but they are not tools absent that use.

    Thus, all the Chinese room shows us is that you don’t need a mind to follow a set of static rules.

    This is indeed not news, but Searle’s argument is a reaction to the idea that following a set of rules is sufficient to yield a mind—if you needed a mind to follow rules, then the whole argument would be self-defeating. So this just states the condition under which one could hope that the argument achieves something.

    What is missing is feedback and memories of the feedback: Searle chucks out answers, which count as output/behaviour, but they don’t produce consequences, and the rules of the game don’t require keeping track of either the consequences or the series of questions.

    I think this is basically a popular misunderstanding of the Chinese room: in order to function properly, you have to count every question and answer given as part of the ‘input’ for the next round. Otherwise, the system would trivially fail to fulfill the role of being a competent partner in a conversation; the simple question “What was your answer to the last question?” would trivially defeat it.

    Thus, in order to work as the original thought experiment requires it to, the next answer string must always be determined by not only the last question, but by all prior questions and answers; i.e. there must be an ever-growing ‘log’ of all received questions and produced answers, and this must be part of each input, together with the new question. This is, obviously, highly inefficient, but for finite times still always achievable with a finite amount of work.

    But if the CR actually works this way, then it also has the means to defeat your objection: any prior answers, any ‘punishments’ (which can, ex hypothesi, only consist in strings of Chinese characters, perhaps screaming invectives, which, however, Searle will be completely ignorant of), are part of what determines future outputs; but still, all Searle has to do is look the whole shebang up in his giant rule-book, and he will be able to produce a compelling answer, if such an answer can be found by any rule. He will even appear to learn, give answers on a subject that get better over time, and so on; but he will still remain blissfully ignorant on anything pertaining to the conversation playing out (save its length).
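
    In a toy rendering (placeholder strings instead of Chinese characters, and a two-entry ‘rule-book’ standing in for the astronomically large real one), the idea is simply that the next answer is a pure function of the whole transcript plus the new question:

    rulebook = {
        ((), "Q1"): "A1",
        ((("Q1", "A1"),), "what did you answer last?"): "A1",
    }

    transcript = ()
    for question in ("Q1", "what did you answer last?"):
        answer = rulebook[(transcript, question)]    # pure lookup, no understanding anywhere
        transcript += ((question, answer),)
        print(question, "->", answer)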

    What is missing is feedback and memories of the feedback: Searle chucks out answers, which count as output/behaviour, but they don’t produce consequences, and the rules of the game don’t require keeping track of either the consequences or the series of questions.

    Well, Searle is rather vague about the CR scenario. But I think we should interpret it as one in which data is remembered beyond the giving of the current answer. In other words, Searle jots down data on a (giant) notepad, or memorises it (in his vast memory), and keeps it for as long as necessary. That data could include full details about previous questions and answers.

    But this doesn’t save the CR argument. The CR argument proper is refuted by the Systems Reply. Searle has additional arguments like his argument about syntax and semantics (SAS), and those require further replies. Unfortunately, however, he tends to conflate different arguments, so when people make the Systems Reply he doesn’t address it as a refutation of the CR argument, but instead complains that they haven’t addressed the SAS argument.

  5. @Jochen #1 (ignoring the rest as I’m out of time!),

    it’s hard not to get confused, isn’t it? I would have no hope to even know how much I’m confused without you and the other contributors throwing questions back at me.
    Very hasty reply: I might need to come back later to clarify the clarifications :-/

    Abstract computations, strings of numbers, ciphered texts, etc., all apply to DwP and can be interpreted in infinite ways, we’ve agreed on this. What I disagree with is that “therefore only pre-existing (fully formed) intentionality can fix the interpretation” (let alone the correct interpretation). I don’t see where the “therefore” comes from, or better, I do have an idea, but think it’s a mistake: we are existing intentional beings, we can only consciously use our fully formed intentionality (as part of our fully formed cognitive system) to interpret abstract stuff (computations, numbers, etc), so we do, and because we do we think it must be the only way. But here we are trying to figure out how our fully formed cognition/consciousness/intentionality get to exist, so we are starting with the hypothesis that “there must be another way”.

    Structure: when I talk about structure I mean the shape of a given receptor, which is able to change when a glucose molecule sticks to it, or what makes rhodopsin react in a certain way to photons, or which cells a cone makes synaptic contact with, and where the optic nerve projects to, and so forth. I was (and still am) light-years from thinking about logical/mathematical definitions of structure.

    how would e.g. the bacterium, having access solely to the signal, only to the effect, and not its cause, figure that this signal is ‘about glucose’?

    It doesn’t. I’m explicitly saying that the bacterium knows nothing, remember? In the current post, I’m trying to show how from this causality you can start building knowledge. The bacterium knows nothing, Aplysiae almost know something. After that, to get to actually know something you need to do other stuff, much more other stuff, but the kind of route is already somewhat delineated; or at least, that’s what I claim and hope. I will expand on this when I have more time.

    Cat-or-dog & the disjunction problem: you are making the same mistake that I’m attributing to Fodor here. Visual signals are not about cats, dogs and similarly fully formed concepts. They are about photons absorbed by rods and cones. I am saying that causation fixes the first, lowest-level interpretation (it can still go wrong, as in the “too hot” signal firing when it’s actually too cold), and that from then on, it’s a matter that can be effectively reduced to (understood in terms of) information processing.

    Or:

    I thought you wanted to ground intentionality in the physical reaction of the bacterium to a glucose gradient, as somehow ’embodying’ that gradient in itself, thus, ‘referring’ to it. While I’m not sure entirely how that’s supposed to work, I can go along with it, at least on a hypothetical basis.

    I’m not sure either: that’s one reason why discussing here is so fantastically useful. However, your description captures exactly what I’m trying to do. This current post suggests something about “how that’s supposed to work”, and why it makes sense to me to look at it in terms of information processing (or functional computations, or signal transmission and integration, etc etc). We can only proceed on a hypothetical basis, for now, I’m afraid.

    Quick fire round:

    do you think that intentions are ultimately down to signals?

    No, these signals need to be reliably caused by some variant feature of the world via some transduction structure which is stable enough to consistently produce comparable outputs. The issue then becomes: how do you build content from such limited information?

    If so, how do they acquire their definite content?

    Ditto.

    Or, do you think that there needs to be something beyond signals, something beyond computation?

    I do. That’s our sensory structures: you take these out, or you start messing with them (radically changing what they react to or how they react to something, doing so at random (shortish) intervals and without warning), and suddenly we won’t be able to make any sense of the world. We can probably agree on this.
    In a sense, I’m suggesting something really trivial, but we are doing a really good job at making it sound complicated and philosophically deep. 😉

    I still think you’re tilting at the wrong windmill. You want to have it both ways, to be an interpretivist about intentionality, all the while insisting on the ontological nature of the ‘real patterns’ involved. From what I can see, this is what’s causing you all the dialectical grief. Eliminativists such as myself see a ‘failure of nerve,’ whereas intentionalists see a ‘cheat,’ and for the life of me, I can’t see how it could end any other way. Not only are the criteria for ‘perspectival facts’ too loose (after all, believing in God systematically relates individuals to their environments), they rather obviously require some understanding of ‘perspective,’ thus pitching you into the rotary blender of every intentionality debate since Noah. Sure, ‘intentional’ cognition tracks certain patterns ‘out there,’ patterns that are ‘nothing at all’ outside their cognitive uptake within certain kinds of systems. Everything involved at every juncture is as real as real can be… except, of course, ‘intentionality.’

    To say that intentionality is ‘intrinsic’ is to say that it is not merely epistemological. To say that a fact is ‘epistemological’ is to say that it is extrinsic, that it has the properties it has vis a vis some particular, extrinsic systematic relation. To me, it often reads like you’re arguing that intentionality is both intrinsic and extrinsic. You think the *systematicity* of the extrinsic relation warrants the claim that the properties at issue are ‘intrinsic.’ I just don’t see how it follows, Sergio.

    But more importantly, I’m not sure what’s to be gained from insisting as much, aside from appeasing a handful of exceptionalist intuitions that possess no credibility to begin with.

  7. @Jochen #3

    […]there is a meaningful question associated with whether something is seen as a computation, or, say, as a physical process (or a biological, historical, social one, etc.). It just means that computation is use, that it is not an a-priori property of a physical system. It is more a kind of tool, which is not a category in which the world is naturally divided: things can be used as tools, but they are not tools absent that use.

    Ok, so I take it we agree that computations are interpretations. As such, we can use them to interpret what interests us, and the question becomes: is this particular tool able to help us identify the features we are interested in?
    When it comes to brains, with all their neurons firing and exchanging signals, I’m inclined to answer “it does seem that analysing this particular system with this particular tool can work well”. Or am I projecting my beliefs onto your words?

    On how to interpret the Chinese room, I have two things to clarify (to Richard as well):
    First, if we accept that the rules provided to Searle include what’s necessary to answer correctly “What was your answer to the last question?” and all the trickier variations, then I guess that demonstrating that an appropriate set of rules can be produced becomes very central. I do not know if this was clarified at some point, but I’ve encountered no sign of this kind of discussion. This is very relevant to me, because it leads to the question of what sort of rule can be used to produce sensible answers to questions such as “you said so and so, but isn’t your answer a matter of opinion, because such and such?”. If you dig long enough you will find that the rules need to include a concept of self (“you said so and so”) and a capacity for meta-cognition, i.e. reflecting on what made you return certain symbols and not others. Once you have something like that, the “systemic” answer (it’s the full room that understands Chinese) becomes more palatable to me; otherwise it leaves me unconvinced. This does leave us open to the objection based on the scenario where Searle learns all the rules by heart; I think it can still be addressed, but it would take quite long.

    Second, I now think that I’m stretching the thought experiment beyond acceptable limits with the feedback requirement. I wasn’t thinking of remaining within the original hypothesis, and was actually proposing to punish the poor Searle, but this breaks down on the systemic answer, so it’s probably unhelpful – your suggestion that feedback is provided by more squiggles is accepted. However, to me the question remains: given that the rule-maker doesn’t know about literary theory (I require this because I need to make sure the correct behaviour can’t be produced via a giant pre-compiled look-up table), what is required to be in the rules so that Searle may generate the learning appearance? My take is that references to a self (to the system, or Searle in case he learnt the rules), the ability to question/explain/discuss certain parts of the rules and the concepts of desirable/undesirable feedback are all needed. But once you add that, to me it does look plausible that something is achieving some true understanding… So yes, it can be that Searle, or his conscious efforts to follow the rules, are just a cog in a bigger system that does understand Chinese. My criticism ends here: I think the original scenario was effectively hiding the relevant observations about what’s needed for the rules to work.
    Talking about snails made me focus on this objection…

  8. @Scott #6
    I interpreted your previous comments as “I can’t understand why you even bother”. This last comment makes me think that you are actually saying “I think you’re dead wrong”…
    Is it the former, the latter or something in between?

    I need to warn you: if the latter, I’ll come back asking “why?”. I’ll also be asking you to write your replies as if I were a 6yo child: I really struggle to decode your messages (but you did make me chuckle with “thus pitching you into the rotary blender of every intentionality debate since Noah”!).
    Sorry, you make me feel dumb… By the way, yes: I do suspect a certain gluttony for punishment is included in my list of motivations…

  9. Hi Sergio,

    Been reading your posts and comments with some interest. I do think Scott waxes philosophic, but with good intentions. Perhaps he doesn’t see you as a 6 yo but, like myself, as one who shares a 6 yo’s wonderment with toys and mechanisms. My background is also technical, but I learned a lot from Scott’s philosophical and creative-writing background.

    Your comment cuts to the heart: “The whole debate, for me revolves around the question: can computations generate intentionality? Critics of computational functionalism say “No” and take this as a final argument to conclude that computations can’t explain minds. I also answer “No” but reach a weaker conclusion: computations can never be the whole story, we need something else to get the story started (provide a source for intentionality), but computations are the relevant part of how the story unfolds.”

    To me, computations are no less and no more explanatory than breaking down objects into molecules, atoms, particles and patterns. The “computational wonderment” which even Richard Dawkins expresses, probably owing to his own technically deficient background, is there because computation is no more and no less than breaking reality down into instants of time, which the computer “kluges” with a program: the program takes the machine’s own artificial clock and builds patterns and responses in accordance with the external and internal environment of its own artificial stimulus-response system.

    Imagine flipping through a portfolio of photographs by famous photographers into which someone has slipped a set of photos that a computer generated by removing people and objects from real photos and composing new ones. You may see no difference between the real photos and the computer photos until you notice that certain objects are out of place, or you see President Obama shaking hands with President Lincoln. The photo analogy maps onto the CR argument because, following Wittgenstein, thought and language generate pictures in our brains: intentionality is a translational property which assembles biological pictures that conform to a biological time domain, which is where real time and artificial computer time tie together.

  10. Sergio: “I interpreted your previous comments as “I can’t understand why you even bother”. This last comment makes me think that you are actually saying “I think you’re dead wrong”…
    Is it the former, the latter or something in between?”

    I apologize for being abstruse (obtuse)? Writing fiction and philosophy is about as good a recipe for being cryptic as I can imagine!

    It all depends where *you* stand, Sergio: My earlier misgivings were based on the assumption that you were taking a primarily interpretativist tack, in which case the question of whether any system cobbling ‘Chinese Rooms’ together ‘understands x’ is simply an epistemological fact of the system. It does or does not to the extent that we cognize it as such. In this case, you can simply appropriate the critiques of computationalism as your own–so why bother facing them down? The real problem you face is the problem eliminativists face more generally: that of explaining various intentional idioms and phenomena. (This is the problem I think BBT overcomes).

    But here it’s very clear that ‘minimal *intrinsic* intentionality’ is your goal, that you want to isolate the conditions allowing ‘aboutness’ to arise as another natural feature of the natural world (and not as a way to troubleshoot certain kinds of problems absent certain kinds of information).

    Consider:

    1) What are the conditions of interpreting a system as intentional?
    2) What conditions make a system intentional?

    As we know, the conditions of (1) are incredibly loose. Very little is required to trigger our intentional cognitive systems. The conditions pertaining to (2), on the other hand, are incredibly elusive. Decades on, and we’re still stumped.

    Because the conditions of (1) are so loose, we can quite easily interpret bacterial signaling in intentional terms (I think this is the reason no one gave you any flak for suggesting bacteria can have ‘knowledge’ in some rudimentary form). But we can also interpret bacterial communication in *mechanical* terms as well, given the relative simplicity of the signalling systems involved. All we have to do is look at them as components in larger systems. Because we can do this, and because we have no way of squaring our mechanical versus our intentional intuitions, many are inclined to think that bacteria, at best, possess only some form of ‘derived’ intentionality.

    But the big point has to be that we know the latter, mechanical interpretation, is the theoretically reliable one. The question is one of what to make of the intentional interpretation.

    Precisely as is the case with humans.

    The difference, of course, is complexity. The machinery involved in human signalling is astronomically more complex–so much so that it outruns the possibility of mechanically cognizing the signalling systems altogether. Here, we have no choice but to troubleshoot via intentional cognition (which is automatically cued in any case). Intentional cognition allows us to solve for our fellow humans absent any information regarding the machinery of human cognition.

    Dennett, like you, is fond of providing ‘complexity tales,’ giving examples of how mere mechanisms can, by virtue of accumulating capacities, become ‘intentional.’ But for Dennett, the process is one of systems becoming more and more advantageously *interpretable* as intentional, not, as you would have it, more and more *intentional.*

    So here you have one clear-cut dialectical challenge: All your complexity tales evidence an interpretativist account as much as your computationalist account. The failure to meet this challenge is what makes Deacon’s recent book, for instance, a masterpiece of dialectical futility.

    Physically speaking, all a bacterium needs is the right kind of mechanical relationship with its environment. The same goes for humans. Systematic covariances are enough to mechanically explain the brute facts of cognition and behaviour. The problem, however, is that systematic covariances cannot explain *our understanding* of cognition and behaviour. And this is precisely the problem we should expect to have given the astronomical complexity of most organisms’ mechanical relationships to their environments. We have no choice but to cognize those relationships nonmechanically, to rely on simple heuristics.

    And although ‘abstraction’ is a species of heuristic, cognitive heuristics are not abstractions–they’re not even systems devoted to abstraction! They are systems that are sensitive to certain strategic features of their environments, cued to generate reliable conclusions despite the absence of data. ‘Aboutness’ does not ‘abstract from’ so much as it ‘selects for,’ providing a crude, but effective way of understanding organisms and their environmental relations in the absence of any information pertaining to the machinery involved. Just think of how much the concept neglects.

    If you agree, as I think you do, with the bulk of this, then why worry about intrinsic intentionality? What problems does this way of looking at things fail to put to bed?

  11. Hi Scott,

    As an engineer I know what characteristics the gods of technological nature “selected for” when they evolved computers, radios and TVs. Sentience was not one of them; in the case of computers especially, electron storage (aka current storage) was selected to represent logical ones and zeroes, which became binary data and instructions.

    As far as nature and sentience go, the jury is still out, and I’m not sure if you mean Deacon’s “Incomplete Nature”, because that is the exact idiom which captures neuron activity, sentience, qualia, etc. Most natural explanations capture neurons as nature’s version of logic gates, so a computational view is easy to attain; but chemistry and biochemistry fail miserably at explaining metabolic activity beneath the mechanistic level, or fail to take into account all of the physical forces, so we have no way to fathom how cell function can “escape cells” (or unify cells?) under a strictly mechanistic view.

    Richard Dawkins, the “king of cell mechanics”, adopts the mechanistic view of machine consciousness, if you listen to the later part of this debate (BTW, it is not in Spanish once you get past the host intros). The mechanistic view is adaptive because the natural explanations of chemistry and biochemistry are “Incomplete”.

    https://www.youtube.com/watch?v=f4c_CrQzUGw

    BTW, I’m sure you will be taking your daughter to the movies for a Daddy and Me activity. Check out some of the reviews online.

  12. The problem, however, is that systematic covariances cannot explain *our understanding* of cognition and behaviour.

    I dunno if it adds anything, but I’d rephrase this as being unable to explain a lack of feeling in a subject, to the subject. For example, if your arm were anaesthetized and behind a sheet, being probed in various precise locations with a pin, no amount of technical explanation is going to convey to you where you are being probed to the point where you start to feel it. Until the anaesthetic wears off, you will never feel it, regardless of the technical specifics you are told. And clearly, the things called ‘qualia’ are a matter of feeling. So if someone tells you a ‘limb’ of qualia is actually anaesthetized, no matter what they said or how technically they said it, you’d never actually feel that. And if you went off feeling alone, you’d always say they were wrong! You’d always say that, in regard to qualia, you feel everything that is involved.

    can computations generate intentionality?

    How about asking that the other way around?

    Can computations/a computer report itself as intentional?

    Of course, maybe we want to say the computer will always be ‘clear headed’ and never report itself as intentional.

    But if you take it as an error for the computer to report that, then I think a computer is just as capable of such an error – just think of a heat-seeking missile that races after a distraction flare instead of a hot jet blast. That’s an example of an error a computer can make.
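    To make that concrete, here is a toy sketch (Python; the names and numbers are invented, purely illustrative, and not any real seeker’s logic): a seeker that simply locks onto the hottest infrared source it can see will chase a flare the moment the flare outshines the engine. No confusion and no report involved, just a heuristic meeting an input it wasn’t built for.

      def pick_target(sources):
          """Toy seeker logic: lock onto whichever source currently looks hottest."""
          return max(sources, key=lambda s: s["ir_intensity"])

      # invented numbers, purely illustrative
      jet = {"name": "jet exhaust", "ir_intensity": 800.0}
      flare = {"name": "decoy flare", "ir_intensity": 1500.0}

      print("locked on:", pick_target([jet])["name"])          # no flare deployed -> tracks the jet
      print("locked on:", pick_target([jet, flare])["name"])   # flare deployed -> chases the flare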

    So if we take it as an error for the computer to report intentionality, how could a computer end up reporting its own intentionality? Mechanically, how could this come about?

    One would have to admit it’s possible for it to happen.

  13. @Scott: I’m curious about your brand of eliminativism about intentionality. Do you agree with Rosenberg that we have no thoughts, or do you have a less extreme claim in mind?:

    “What you absolutely cannot be wrong about is that your conscious thought was about something. Even having a wildly wrong thought about something requires that the thought be about something.

    It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all…When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.”

  14. sorry Scott, that’s a quote from his Atheist’s Guide to Reality. Wasn’t trying to attribute it to you.

  15. Hi Scott,

    I can’t help feeling that people may find some of your language confusing. Even I find it somewhat confusing, despite broadly agreeing with you on the substance (as far as I can tell).

    You call yourself an “eliminativist”, but it’s not clear what you’re eliminating. I assume you don’t want to eliminate ordinary intentional and mental language from our vocabularies, or say that the ordinary first order statements we make with such language cannot be true (e.g. “he believes the earth is round”, “he’s conscious”). You probably just reject ideas about ourselves that are explicitly or implicitly dualistic, as I do.

    You wrote:

    My earlier misgivings were based on the assumption that you were taking a primarily interpretativist tack, in which case the question of whether any system cobbling ‘Chinese Rooms’ together ‘understands x’ is simply an epistemological fact of the system.

    I don’t understand what you mean by “epistemological fact”. In my experience that term refers to facts about what people know or facts about how justified we are in believing things (http://plato.stanford.edu/entries/epistemology-naturalized/#4). On that interpretation, “it knows x” is an epistemological fact by definition (assuming it’s a fact at all), and I suppose we could say the same of “it understands x”. Since such facts are epistemological by definition, it makes no sense to suggest that they are epistemological only on an interpretivist view. So I guess that by “epistemological fact” you mean something else.

    You associate the idea of “epistemological facts” with “interpretivism”, suggesting that for you “epistemological facts” might mean interpretative facts, i.e. that statements of such facts are interpretations. But to me this seems like a misleading way of making the distinction that you apparently want to make. As far as I’m concerned, everything we say about the world is an interpretation. We model reality by imposing our abstractions (our interpretations) on it. Intentional terms are just another type of abstraction. Of course, it’s important to make a distinction between physical abstractions and intentional abstractions. As Dennett puts it, we take a “physical stance” and an “intentional stance”. It seems to me that the kind of thinking you and I reject arises in part from people conflating these two stances. They try to take a physical stance towards intentional properties, treating them as pseudo-physical properties. This takes quite an explicit form in Searle’s case, when he argues from an analogy between consciousness and wetness.

    I for one would avoid many of the terms you use, and instead concentrate on drawing a distinction between physical and intentional stances. (For “stance” you could substitute “language”, “model”, “abstraction”, etc.)

    1) What are the conditions of interpreting a system as intentional?
    2) What conditions make a system intentional?

    I think the distinction between (1) and (2) is unclear. They could best be combined into one question: under what conditions is it epistemically effective to adopt the intentional stance towards (use intentional language about) a system? I think the broad answer to that question would be: when we have a system that uses representations.

    Because the conditions of (1) are so loose, we can quite easily interpret bacterial signaling in intentional terms (I think this is the reason no one gave you any flak for suggesting bacteria can have ‘knowledge’ in some rudimentary form). But we can also interpret bacterial communication in *mechanical* terms as well, given the relative simplicity of the signalling systems involved. All we have to do is look at them as components in larger systems. Because we can do this, and because we have no way of squaring our mechanical versus our intentional intuitions, many are inclined to think that bacteria, at best, possess only some form of ‘derived’ intentionality.

    I’m not sure I would agree that “we have no way of squaring our mechanical versus our intentional intuitions”. We cannot give a formula for translating one into the other. But I would say that squaring those intuitions is just what we do by looking at simple systems like this. We take some process that we can follow in detail, describe it in both intentional and physical terms, and cannot help but see that both types of description are applicable. Our linguistic intuitions (our sense of the meanings of the words) force us to accept both descriptions. Even to refer to some state or process as a “signal” is to adopt the intentional stance to some extent. But such language comes quite naturally to us when we see one part of the system produce a state and another part act appropriately in accordance with that state. That’s broadly the same thing that happens when people signal to each other.

    And although ‘abstraction’ is a species of heuristic, cognitive heuristics are not abstractions–they’re not even systems devoted to abstraction! They are systems that are sensitive to certain strategic features of their environments, cued to generate reliable conclusions despite the absence of data. ‘Aboutness’ does not ‘abstract from’ so much as it ‘selects for,’ providing a crude, but effective way of understanding organisms and their environmental relations in the absence of any information pertaining to the machinery involved. Just think of how much the concept neglects.

    I would say that our cognitive faculties are very much in the business of abstraction when we use intentional language, as they are whenever we talk about the world. (Perhaps you and I understand the word “abstraction” differently.) But I would agree that such language works in the absence of any information pertaining to the machinery involved, and that’s an important point to make. People attributed beliefs and desires to each other long before they knew anything about brains. We assumed that there is something that causes people to behave the way they do. And so we attributed causative states to people in order to help us predict and explain their behaviour. We found that attributing beliefs and desires (etc) was effective for this purpose, so we make such attributions. But that is ultimately the same reason we attribute states to the world at all: so we can predict and explain things. We shouldn’t treat intentional language as something fundamentally different from physical language. It’s different, but we shouldn’t exaggerate the difference. And since people generally do exaggerate the difference, I want to stress the similarities.

  16. Sci: “It’s this last notion that introspection conveys that science has to deny. Thinking about things can’t happen at all…When consciousness convinces you that you, or your mind, or your brain has thoughts about things, it is wrong.”

    This is the quote, right?

    This highlights the huge problem faced by what I call ‘dogmatic’ eliminativism: the inability to account for things that seem laughably obvious. My view suggests we take the fractionate, heuristic nature of cognition seriously, and see ‘thoughts’ as tools belonging to specific problem ecologies. In our day to day life, ‘thoughts,’ like experiences, *go without saying*: they allow us to troubleshoot numerous practical problems. The reason they allow us to troubleshoot a specific set of practical problems is that they rely on specific information, neglecting the details of what is going on in any high dimensional (naturalistic) sense.

    To advert to one of my favourite examples: the Gaze heuristic. Rather than passively watching and performing extensive calculations on the sensory information provided by a batter hitting a ball, fielders catch balls by simply fixing them in their visual field and beginning to run–they essentially *let the ball guide them to where it will land.* Rather than solving the fly-ball system from the outside and then intervening, fielders make themselves part of the system in a way that generates solutions. They don’t derive solutions, they ‘lock into’ them. Asking how fielders track and represent all the factors determining where a ball will land is simply asking the wrong question.
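    A minimal numerical sketch of the signal the heuristic exploits (Python; idealised projectile with no air resistance and made-up launch numbers, purely illustrative): the tangent of the ball’s elevation angle rises at a constant rate only if the fielder already stands where the ball will land; it accelerates if she is too close and decelerates if she is too far. Running so as to cancel that acceleration therefore carries her to the catch without ever computing a landing point.

      G = 9.81
      VX, VY, EYE = 18.0, 14.0, 1.5        # assumed launch velocity components and eye/launch height

      def tan_elevation(fielder_x, t):
          """Tangent of the ball's elevation angle, seen by a stationary fielder at fielder_x."""
          ball_x = VX * t
          ball_y = EYE + VY * t - 0.5 * G * t * t
          return (ball_y - EYE) / abs(ball_x - fielder_x)

      flight = 2 * VY / G                  # time until the ball is back at eye height
      landing = VX * flight                # roughly 51 m from the batter

      for fielder_x, label in [(40.0, "too close"), (landing, "at landing spot"), (65.0, "too far")]:
          samples = [tan_elevation(fielder_x, f * flight) for f in (0.2, 0.4, 0.6, 0.8)]
          print(f"{label:>15}: " + "  ".join(f"{s:5.2f}" for s in samples))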

    Thoughts, all intentional idioms, are heuristic in this sense: they allow us to ‘lock into’ one another in advantageous ways. We are blind to our respective natures, to what is actually going on, so we make do with posits (that reflect the structure of this blindness) that generate solutions despite our ignorance. Blind as we are to our respective natures, we have no inkling as to how we do what we do, and therefore no inkling as to the radically heuristic nature of our intentional tools. We assume that our psychological posits possess the same kind of high dimensional reality as our natural ones.

    ‘Aboutness’ is heuristic, paradigmatically so, in fact, a way to track agent-environment relationships absent information regarding the high-dimensional realities involved. It is designed to help us work around our blindness, it just so happens that this blindness is so profound as to constitute neglect.

  17. Richard: “I don’t understand what you mean by “epistemological fact”. In my experience that term refers to facts about what people know or facts about how justified we are in believing things.”

    I was just using Sergio’s lingo, Sergio’s way, where he seems to take ‘epistemological facts’ along similar lines to Dennett’s ‘real patterns.’ I could have just as easily used ‘perspectival facts.’

    “I think the distinction between (1) and (2) is unclear. They could best be combined into one question: under what conditions is it epistemically effective to adopt the intentional stance towards (use intentional language about) a system? I think the broad answer to that question would be: when we have a system that uses representations.”

    I don’t think any system uses representations, nor do I think any system takes any ‘intentional stance.’ Systems quite often *recapitulate* environmental relationships, but only where advantageous. The problem with representationalism is that it presumes there’s one canonical way for systems to systematically interact with one another in ways we are prone to call ‘cognitive.’ But there are infinitely many ways, most of them, as my example of the Gaze heuristic above (#16) suggests, far more economical than those involving recapitulations of environmental information.

    But I think you’re mistaking the point (vis a vis Sergio’s argument) of the distinction between something being intentional as opposed to something being interpretable as intentional.

    “We found that attributing beliefs and desires (etc) was effective for this purpose, so we make such attributions. But that is ultimately the same reason we attribute states to the world at all: so we can predict and explain things. We shouldn’t treat intentional language as something fundamentally different from physical language. It’s different, but we shouldn’t exaggerate the difference. And since people generally do exaggerate the difference, I want to stress the similarities.”

    Well, here’s a list! https://rsbakker.wordpress.com/2014/06/16/discontinuity-thesis-a-birds-of-a-feather-argument-against-intentionalism/

    I agree with you that causal reasoning is heuristic as well (quantum mechanics proves as much!) but, as its role in the scientific revolution of society suggests, it belongs to a different species. The big difference is that intentional cognition is ‘fetishistic’: it posits ‘free floating constraints’ to predict and explain. This clearly follows from our ‘brain blindness,’ our abject inability to causally situate our behaviours.

    Another piece you might be interested in is: https://rsbakker.wordpress.com/2014/11/02/meaning-fetishism/

    I don’t think FP is a ‘theory,’ so I don’t see ‘elimination’ pertaining to our everyday use of intentional idioms. They’re just too damn powerful. I do think, however, that they will be gradually winnowed from our theoretical vocabularies, save those contexts (like Game Theory) where they have been exapted to produce fruitful subdisciplines.

  18. Hi Sergio –

    I’m another who doubts the utility of “intentionality”, if for no other reason than the degree to which it seems an amorphous concept. Its SEP entry seems consistent with my suspicion that the word’s sense as a feature of linguistics is much clearer than its sense as a sign of the “mental”. However, I’ll play your game anyway since I think that even accepting “intentionality” as a useful concept, there may be a fundamental problem with your argument as I understand it.

    Although you don’t lay it out like this, I infer that your logical flow goes something like this:

    1. “Intentionality” is an indicator of capabilities that are typically described as “mental”.

    2. The paradigm mental activity is the acquisition and use of knowledge.

    3. There is something you call “intrinsic intentionality” that doesn’t have to be acquired but instead is, in some sense, given to us by nature.

    4. Intrinsic intentionality is foundational in that once an organism has it, all other knowledge can be “derived” through experience, inference, guided learning, etc.

    If this is indeed your logical flow, it appears to be a version of the Myth of the Given that Sellars attacked in his celebrated (admittedly in narrow circles) essay “Empiricism and Philosophy of Mind”. Even if capable, I couldn’t begin to summarize his notoriously challenging argument here (it took deVries and Triplett 300 pages to do so in “Knowledge, Mind, and the Given”). But it’s my impression that the attack is deemed largely successful (again, in those circles). As I understand it, the part of his argument considered to be weakest is how the bootstrapping – necessary in the absence of a Given – gets off the ground. That leaves some hope for the Given, but as far as I know no candidates to date have taken hold.

    Which is not to suggest that you not try, just to warn that (assuming I have your logic right) you may be on ground well-trodden but on which no clear path has been impressed.

  19. Hi Scott,

    Thanks for replying. I don’t really want to add anything to what I wrote, except to clear up some points of possible miscommunication.

    I don’t think any system uses representations,

    Really? That sounds rather like what you called above “‘dogmatic’ eliminativism: the inability to account for things that seem laughably obvious.” I can’t help feeling that you are interpreting my word “representation” in some strong philosophical sense, when I just meant it in an ordinary sense.

    nor do I think any system takes any ‘intentional stance.’

    On a point of terminology, the “intentional stance” is a stance we take towards a system, not a property we attribute to the system.

    The problem with representationalism is that it presumes there’s one canonical way for systems to systematically interact with one another in ways we are prone to call ‘cognitive.’

    I’m not sure what you mean by that, but I doubt it’s a position I would endorse.

    But I think you’re mistaking the point (vis a vis Sergio’s argument) of the distinction between something being intentional as opposed to something being interpretable as intentional.

    To be clear, I’m saying that’s a false distinction.

  20. Although for some time I’ve suspected Scott’s position and mine on various issues were aligned, I haven’t been sure because we use almost non-overlapping vocabularies. For whatever reason, this comment thread has confirmed that suspicion. Because I think his points are important but gather that others also have some trouble with his vocabulary, I’m going to express some of the points in his exchange with Richard in a way that may be helpful for others.

    In “Rorty & His Critics”, Ramberg elaborates a Davidson theme, viz, that the irreducibility of what Ramberg calls the “vocabulary of agency” (which I take to be more or less synonymous with the “intentional idiom”, Scott’s “cognitive heuristics”, et al) is that the purposes served by that vocabulary are different from those served by other (in particular, scientific) vocabularies. The irreducibility doesn’t detract from the utility (indeed, arguably the practical necessity) of the vocabulary of agency, but it does limit its scope of application. In particular, it precludes its having any explanatory value when one is, so to speak, “looking under the hood”.

    Talk about a human organism’s behavior as a person, a social being, is done in the vocabulary of agency. Even if in principle that vocabulary were reducible to some scientific vocabulary, it would be counterproductive to do so since the latter vocabularies are tailored to different purposes. Similarly, when trying to dig into the physiological underpinnings of behavior, the vocabulary of agency can’t help – indeed may hinder – the process because it isn’t “designed” for that purpose.

    Scott’s fly-ball example – prominent in ecological psychology – illustrates the idea of vocabularies appropriate to different purposes. For some analytical purposes, the appropriate tack is to model the process of catching the ball as a dynamical system. And that involves a vocabulary tailored to such purposes. But it would obviously be inappropriate for someone teaching a kid to catch a fly ball. The heuristic “keep your eye on the ball and try to go where it wants to go” in the intentional idiom is clearly the way to go.

    I assume that something like this context-dependence of appropriate vocabularies motivates Scott’s distinction between reasonable and unreasonable – AKA “dogmatic” – eliminativism.

    From that perspective, Rosenberg’s quote seems to need some editing. I’d say that thoughts (or generally, the vocabulary of agency) rather than being “denied by science” should be merely eschewed in the realm of scientific investigation.

    This view may help with Richard’s objection to Scott’s seeing his two questions as distinct:

    1) What are the conditions of interpreting a system as intentional?
    2) What conditions make a system intentional?

    The first question presumably is asking for guidance as to when a system’s observable behavior is best described using the intentional idiom.

    From Scott’s suggestion that the second question is currently unanswerable, I infer that it’s essentially asking how the inner workings of a system, the observable behavior of which could be described in the intentional idiom, can be described instead in some scientific vocabulary. And this does seem a question distinguishable from the first one.


  21. “say that the signals are ‘about’ anything is clearly wrong”

    because “to be about anything” is a property of mental states and is a higher level function. What you are suggesting is to my mind almost a ‘category error’.

    there are a lot of problems I would suggest in assuming that a ‘stream of electrons’, or whatever physical underpinning a ‘signal’ has, is or has a mental state. If a “stream of electrons”, of which I have many, can each possess a mental state, I might have an awful lot of mental states on my hands…

    there is also the question of location. A stream of electrons is like a mountain or a lake – an observer-relative phenomenon that is not inherent to physical structure and whose boundaries are determined by the observer. A ‘stream of electrons’ couldn’t know it was a pain in my foot, for instance, because it wouldn’t know what my foot was or where it was. It’s just a stream of electrons doing what streams of electrons do. To suggest otherwise would be to suggest that streams of electrons in legs have different properties to streams of electrons in batteries, for instance. But a pain in my foot is most definitely geolocated as I experience it. That’s because my brain knows what a foot is and where they are, something a stream of particles could not be aware of.

    There is the ‘constant anticomputationalist objection’ (as I always point out in these discussions): there isn’t a shred of scientific evidence that a stream of electrons or a ‘signal’ can of itself possess intentional mental states. Scientifically speaking, that is the end of the discussion. It is up to the claimant to prove his case, not the refuter to prove it’s not true. That’s the way science works.

    I think that a ‘signal’ could certainly be part of a neurological physical infrastructure that could bring about ‘aboutness’. I assume that because, to my experience, it does. But not a ‘stream of electrons’ – this I think is really a category error, an ontological impossibility. The definition of ‘aboutness’ and the properties of electron streams do not to my mind permit it.


    “I don’t know for sure, but I do suspect that the “right” model might count as a partial reproduction, so I guess this is the conclusion you really want to refute.
    Do you think that a computer simulation of a ‘modelled’ action potential is the same thing as an actual action potential[?]”

    OK – let’s move on. Do you think that the right model can be implemented in any physical infrastructure?
    If you think that physical infrastructure is important, you are refuting computationalism. If you don’t think it matters, then you are defending computationalism.

    PS This question has a yes or no answer! It really is not relative.

    If you think that the right model has inherent properties, what are they? Otherwise the Universe (if that’s the observer running the program) won’t be able to run the program.

  22. @All:
    apologies for disappearing: I’m away these days and my hopes of finding the time to reply were clearly misplaced. Good to see the conversation developing; I’ve been reading you all with eagerness.

    @VicP #9
    Thanks. I like being stretched, but I certainly do need to make sure I understand the comments I receive! That applies to everyone, Scott and you included, of course. In detail, I think I follow your “To me computations…” paragraph, but you have lost me with your photo analogy: what are you trying to say (sorry!)? BTW, I’m a software developer but a neurobiologist by training; my trajectory has been: a short start in neurobiology research, a move into IT as the local neurobiology lab’s “IT specialist”, then a move out of neurobiology and much further into software development. I’m fond of proceeding in circles, apparently.

  23. @Scott #10
    [there is another message waiting to be fished out of the spam filter, with some apologies for my disappearance]
    I think the muddied philosophical lingo (not of our making) is really not helping us out: the confusion between being intentional as “having aboutness”, as opposed to “making decisions on the basis of some purpose”, is really hard to avoid. However, I do think you’ve hit the mark very well with:

    But here it’s very clear that ‘minimal *intrinsic* intentionality’ is your goal, that you want to isolate the conditions allowing ‘aboutness’ to arise as another natural feature of the natural world (and not as a way to troubleshoot certain kinds of problems absent certain kinds of information).

    Consider:

    1) What are the conditions of interpreting a system as intentional?
    2) What conditions make a system intentional?

    As we know, the conditions of (1) are incredibly loose. Very little is required to trigger our intentional cognitive systems. The conditions pertaining to (2), on the other hand, are incredibly elusive. Decades on, and we’re still stumped.

    Your comment sent me on an introspection trip, on the theme of: “why am I *really* doing this? what is my ultimate motivation?” Which is naturally a good thing!
    What I’ve found is murky, but I think I can try to write it down. It can be broadly divided into two separate strands.
    First strand: I have this strong and apparently unshakeable intuition that the organisation of our sensory systems is really the key, and requires an inversion of reasoning. Sensory systems preceded and kick-started the emergence of nervous systems, so when looked at in evolutionary terms, they are the starting point. To me this means that we shouldn’t ask “how does a brain figure out things about the world given that it only receives signals that could in principle mean anything?”; we should ask, “how did (co-evolving) sensory structures shape the gradual emergence of cognition (and therefore mind-generating brains)?”. This implies that we start by accepting that some information about what is out there (but also about what is happening in the body) is already available, it is already established as important for survival/reproduction (i.e. relevant to the perceiver or perceiver-to-be), and that it is this which shapes how we consciously perceive reality. You can surely see how my efforts here relate to this intuition: I’m trying to do the philosophical groundwork to make the inversion viable in scientific/empirical terms.

    The second strand plugs into the first, but is about the very unfortunate lack of constructive dialogue between neuroscience and neurophilosophy. From certain angles, you can even see the dialectic as an all-out war. The consequence is that neuroscience is impoverished, in a crippling way. This somewhat explains (post hoc) why I was happy to leave the field very early on. The same thing, now that I have a little more maturity, some time and some mental energy to spare, is driving me back to the subject, on my own terms.
    With remarkable timing, a very short conversation appeared on Scientific American, where Cook, Damasio, Carvalho, Hunt and Sacks sent an open letter to Koch (http://blogs.scientificamerican.com/guest-blog/exclusive-oliver-sacks-antonio-damasio-and-others-debate-christof-koch-on-the-nature-of-consciousness/). There you have a good proportion of the top-tier scientists working on consciousness openly trying to address exactly the same kind of topic we have been addressing here. The exchange is really short, so it has to be somewhat sketchy, but I guess you’ll agree that the signs of impoverishment are all there, in plain sight.
    This may or may not provide you with an answer to your question about what outstanding problem I’m trying to address, but I’m afraid it’s the best I could figure out in a few days of reflection.
    [I will discuss a bit more on the Scientific American exchange in another comment as soon as I can, which is probably not that soon.]

    There is another sub-current that shapes my trajectory, which is interestingly similar to what inspired my older post on AI and utilitarianism. The common theme is the trivialisation/over-simplification of a given “paradigm” (for lack of a better word). In the first case it was utilitarianism, here it’s behaviourism. Chalmers, in the 1996 paper I cite above, is explicitly concerned with the need to refute behaviourism, I guess because it is assumed that it leaves no space for the domain of mental phenomena. Your own comment suggests that we can build a full mechanistic/behavioural account (prohibitively complex, but that’s a practical detail) and maybe also leave the mental aside. My more broad aim is to show how and why a complete mechanistic/behavioural account of how humans (and other animals) work needs to include the mental. It’s not that we can’t/should not use mechanistic reasoning, it’s that we need to understand why there is a mental domain, which in evolutionary terms becomes a question of figuring out why it evolved, and how it provides some evolutionary advantage. You may be able to spot the traces of the same inversion I mention above…
    I’m afraid this is all I’ve got time for today. Will be back for more soonish, please do stay tuned!

  24. Hi Charles. Thanks for your comment (#20), and especially for clarifying the probable meaning of Scott’s (1)/(2) distinction. It seems that you, Scott and I have similar views on intentionality.

    To Sergio, and anyone else who wants to know more about this view (if I can broadly construe it as one view), I would strongly recommend reading about it from a good source. I recommend Dennett, who gives the clearest explanation I’ve yet seen. You could start by reading this PDF, which I believe is an extract from his book “The Intentional Stance”: (http://www-personal.umich.edu/~lormand/phil/teach/dennett/readings/Dennett%20-%20True%20Believers%20(highlights).pdf). It might be helpful to take that text as a starting point for further discussion.

    Charles wrote:

    I’m another who doubts the utility of “intentionality”, if for no other reason than the degree to which it seems an amorphous concept. Its SEP entry seems consistent with my suspicion that the word’s sense as a feature of linguistics is much clearer than its sense as a sign of the “mental”.

    I would agree that the term “intentionality” is problematic. Since this is a philosopher’s term, we are dependent on philosophers’ stipulations and usage to tell us what it means, and I would say there’s quite a lot of variation among philosophers in how it’s used and understood. One advantage of taking Dennett’s text as a starting point is that we can then adopt his sense of the term, for present purposes.

    An important feature of Dennett’s approach is that it separates the subjects of intentionality and consciousness. I think that’s very important, and that people will never make much progress with understanding either subject as long as they conflate the two.

    (If people take “intentionality” as referring to a property of conscious experience by definition, then I think they are creating problems for themselves. For a start, beliefs are not conscious experiences, so such a definition makes it hard to see why we should associate “intentionality” with beliefs, as is usually done. In any case, they’re free to read Dennett’s “intentionality” as “schmintentionality”, and proceed from there. A rose by any other name…)

  25. Charles: “In “Rorty & His Critics”, Ramberg elaborates a Davidson theme, viz, that the irreducibility of what Ramberg calls the “vocabulary of agency” (which I take to be more or less synonymous with the “intentional idiom”, Scott’s “cognitive heuristics”, et al) is that the purposes served by that vocabulary are different from those served by other (in particular, scientific) vocabularies. The irreducibility doesn’t detract from the utility (indeed, arguably the practical necessity) of the vocabulary of agency, but it does limit its scope of application. In particular, it precludes its having any explanatory value when one is, so to speak, “looking under the hood”.”

    Thanks for the clarification, Charles–but I fear you might be turning me into more of a pragmatist than I am. What heuristics and neglect add to this way of looking at things are, quite simply, natural explanations. They remove the problem from the traditional philosophical bailiwick, and deliver it–I think, anyway–to science. Rather than the opaque (but philosophically venerable) ‘purposes’ or ‘interests,’ they allow us to speak of ‘mechanisms’ and ‘ecologies.’ And most importantly, they allow us to clearly identify the *mistake* that has had humanity tied in such knots for so long. Rather than merely declaring the vocabulary of agency ‘irreducible,’ and then suggesting (a la Burge, most recently) that we simply get on with the work it allows us to do, *it lets us peer under their hood.*

    Thus the crucial disambiguation provided by BBT: Intentional idioms are reducible, they are just not *replaceable* for a large number of problem ecologies (purposes). This is because, as simple heuristics, they *depend* on neglecting certain kinds of information to function properly. Looking at the matter in terms of heuristics and neglect allows you to see why our neurobiological reductions of our intentional cognitive capacities cannot do the work those capacities do. It also explains why we have been so convinced (especially after Wittgenstein) that they have to.

    The pragmatic insight captures something of the contextual nature of intentional idioms, but it does so via intentional idioms, and so 1) runs afoul the very contextual limitations it attempts to pose, and 2) remains perpetually trapped with its own underdetermined claims.

    I appreciate how odd my approach must sound, but I’m telling you, once internalized, it provides a very robust and empirically responsible way of looking at things.

  26. Sergio: “This implies that we start by accepting that some information about what is out there (but also about what is happening in the body) is already available, it is already established as important for survival/reproduction (i.e. relevant to the perceiver or perceiver-to-be), and that it is this which shapes how we consciously perceive reality.”

    Well, if BBT turns out to be more or less correct, you’re going to encounter some pretty hefty challenges. I agree that the sensory picture is crucial, particularly because it allows you to see the brain as a device for solving the inverse problem for behaviour.
    But as soon as you fixate on information about, or semantic information, as the currency of the conscious realm, you’re trying to plug a deliverance of intentional cognition into the very picture intentional cognition evolved to ignore. Aboutness is a way to get around our abject ignorance of neurobiologies, to connect organisms and their environments in low-dimensional ways. Empirically understanding it involves isolating and decomposing the mechanisms responsible for intentional cognition, as well as surveying the kinds of environmental information structures prone to trigger it, and the kinds of problems it can reliably solve.

    “My more broad aim is to show how and why a complete mechanistic/behavioural account of how humans (and other animals) work needs to include the mental. It’s not that we can’t/should not use mechanistic reasoning, it’s that we need to understand why there is a mental domain, which in evolutionary terms becomes a question of figuring out why it evolved, and how it provides some evolutionary advantage. You may be able to spot the traces of the same inversion I mention above…”

    An interesting thing about looking at this problem as a cultural critic is that you have a good sense of the historical nature of our conceptions of the ‘mental.’ We’ve likely been talking ‘souls’ and ‘selves’ for quite some time, so if you’re asking the question from an evolutionary standpoint, these are the concepts you should be investigating. The assumption, of course, is that souls and/or selves are simply ‘minds glimpsed darkly,’ but this is actually quite a difficult claim to warrant. As far as I’m concerned, the ‘mental’ is simply a convenient fiction, a way for empirical psychology to essentialize (as we now know all humans are prone to do) what are fundamentally a number of low-dimensional heuristic hacks—aka, ‘mental functions’. Psychology is far too prone to take itself as evidence for certain ontological theses, when it evidences far more parsimonious ‘zombie’ accounts like my own just as thoroughly.

  27. @Charles #18
    Thanks, you’re adding to my intimidatingly long reading list! And you add things that seem intimidating in their own right…
    Re your summary, it’s laid down as a sketch, but I don’t have major corrections to push forward. Your point 1 made me think: “‘Intentionality’ is an indicator of capabilities that are typically described as ‘mental’.”
    That’s an interesting way to put it… I guess my underlying conjecture is that to have thoughts about something you need to rely on a reference system, and computations (shuffling symbols around) can’t provide one in and of themselves. If I can put it this way, referring to “thoughts about something” allows me to clearly state that I’m interested in the mental domain (thoughts) and also to avoid the terribly loaded “intentionality” word. I’m discovering that the term intentionality generates its own layer of misunderstandings…
    Re “3. There is something you call ‘intrinsic intentionality’ that doesn’t have to be acquired but instead is, in some sense, given to us by nature.”
    When summarised in this way, it makes me frown a little: I don’t think you are misrepresenting my point, but not mentioning what provides this ‘intrinsic intentionality’ makes it sound suspiciously abstract. You know my idea: ‘intrinsic intentionality’ is there when a system systematically collects information about the surrounding environment (and its own internal states).

    I am trying, and having a good time along the way… I don’t think there are paths that haven’t been trodden before, but then each one of us has his/her own stride, so something slightly different always comes out, doesn’t it?

  28. @john davey #21 and All
    [This is a continuation of the discussion on the previous thread; I’ve asked John to move it here to help me track the conversation.
    In this comment I will also discuss the Scientific American open letter that I’ve mentioned above (#23).]

    [why do you] say [that suggesting] that the signals are ‘about’ anything is clearly wrong[?]

    because “to be about anything” is a property of mental states and is a higher level function. What you are suggesting is to my mind almost a ‘category error’.

    there are a lot of problems I would suggest in assuming that a ‘stream of electrons’, or whatever physical underpinning a ‘signal’ has, is or has a mental state. If a “stream of electrons”, of which I have many, can each possess a mental state, I might have an awful lot of mental states on my hands…

    John, under your definitions, you are right, and that’s that. A stream of electrons, or ions across a membrane, or neurotransmitters across a synapse, or whatever physical form a signal takes, does not have and is not in itself a mental state. We completely agree.
    The really tricky part, however, is introduced by the word ‘signal’. If we agree that a physical phenomenon is to be understood as a signal (or plays the function of a signal, or other similar definitions), then I would not object to saying that one signal is about temperature while another is about smell (reports about chemicals present in the sampled air), etcetera. This seemingly innocuous ‘signal’ definition, together with the humble ‘about’ qualifier, is all we need to start thinking in computational terms while escaping the DwP (and similar) objections.
    Thus, for you the job would be to convince me that talk about signals is wrong, but your following statement “I think that a ‘signal’ could certainly be part of a neurological physical infrastructure that could bring about ‘aboutness’. I assume that because, to my experience, it does” does suggest we agree, after all. I would be surprised and pleased if we did!
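    Since we seem to agree on ‘signal’ talk, here is a toy sketch of the minimal sense of ‘about’ I have in mind (Python; every name and number is invented and purely illustrative): one structure measures the environment and emits an internal signal, and a downstream structure never touches the environment at all, only the signal, yet its behaviour stays appropriate to the temperature because the signal systematically covaries with it.

      import random

      def sensor(true_temperature):
          """Measuring structure: sample the environment (with a little noise) and emit a signal."""
          return true_temperature + random.gauss(0.0, 0.5)

      def downstream(signal, set_point=37.0):
          """Downstream structure: it never sees the environment, only the signal it receives."""
          if signal > set_point + 1.0:
              return "move to a cooler patch"
          if signal < set_point - 1.0:
              return "move to a warmer patch"
          return "stay put"

      for temperature in (32.0, 36.8, 41.0):
          s = sensor(temperature)
          print(f"environment {temperature:4.1f} C -> signal {s:4.1f} -> {downstream(s)}")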

    In this context, I think it’s useful to go back to the exchange on Scientific American (http://blogs.scientificamerican.com/guest-blog/exclusive-oliver-sacks-antonio-damasio-and-others-debate-christof-koch-on-the-nature-of-consciousness/), at least because it shows why our discussion is worth having.
    The original letter, signed by genuine neuroscience heavyweights, starts by suggesting something that is remarkably close to what I’m trying to unravel:

    the search for the properties underlying consciousness down to the level of the protozoan [a single-celled organism] in order to identify the fundamental cell-level mechanisms that, when scaled up in complex nervous systems, give rise to the properties that are typically referred to as “mind”. The unanswered question is: What characteristics of living cells lead ultimately to the various, higher-level psychological phenomena that are apparently unique to certain animal organisms? This question concerns essentially biological functions—and is distinct from “information processing” approaches that might be implemented in silicon systems.

    After that, they suggest that this mechanism is likely to be membrane excitability (rapid change of voltage potential across the cellular membrane) and:

    In order to produce the higher-level “awareness” of animal organisms, the activity of these numerous excitable cells to achieve a kind of sentience must be synchronized (in ways yet to be determined) for coherent organism-level behavior.

    The answer from Koch does somehow overlook the passage above (the “synchronized (in ways yet to be determined)” bit is crucial, in my own reading) and focusses instead on something that does seem to permeate the whole letter; interestingly for us, this is the same error that John is attributing to me:

    it can’t just be that cations flowing into the cell are what matters, but the overall causal effect the inflow of charged ions has on the system.

    What this calls for is a principled, analytical, prescriptive, empirically testable, and clinically useful account of how highly organized and excitable matter supports the central fact of our existence—subjective experience.

    Membrane excitability will play a role here, but as one among many other factors and considerations. Indeed, focusing on ionic channels to understand consciousness, a system-level property par excellence, is as useful as trying to comprehend the nature of the internet by focusing on how electrons flowing onto the gate of a transistor modulate the electric current flow between the other two terminal of the transistor.

    I think Koch is 100% right here, and that his reply is not manifestly uncharitable, but still, as you would have guessed, I do see that the original letter does try to address a very important point: the start of the story comes from sensory and signal-transmission mechanisms/structures; these need to be understood in pre-computational terms, and indeed they are what makes computational metaphors useful from that level upwards.
    For us here, after a lengthy discussion, a number of missteps, miscommunications and whatnot, a few things should at least be clear:
    1. this is a conversation that is worth having (and I can’t stop being grateful for being involved)
    2. this is also hard stuff, and heavily philosophical, so clearly an area where neuroscience has something valuable to gain by paying close attention (not to me! to this kind of discussion…).

    John: the above is also the reason why I don’t think it’s wise to hope that everything will be sorted out by work done in neuroscience labs. The lack of theoretical clarity on the strictly empirical side does worry me. What is needed is a good, fruitful dialogue between disciplines, not the typical turf wars that we have all witnessed.

    Back to John’s points:

    Do you think that the right model can be implemented in any physical infrastructure?
    If you think that physical infrastructure is important, you are refuting computationalism. If you don’t think it matters, then you are defending computationalism.

    PS This question has a yes or no answer! It really is not relative.

    OK, the answer is a clear “NO”. But you know me, I do need to add explanations… The right model (representing the correct causal relations) needs very specific physical implementations. A stone would not do. This, however, does not stop me from expecting that the “right model” will be described in terms of algorithms. Nothing we’ve discussed has come close to changing my mind on this. Does this mean I’m a computationalist or an anti-computationalist? I don’t know, and quite frankly, I don’t really care. 😉

    Finally:

    If you think that the right model has inherent properties, what are they? Otherwise the Universe (if that’s the observer running the program) won’t be able to run the program.

    Sorry John, you’ve lost me here. Can you clarify? No worries if not, I do hope I’ve managed to explain my position a little better for you…

  29. Sergio,

    Would you agree that the electron and ion flows are simply the triggers to neurons, which implement some still undiscovered metabolic function that causes the emergent qualia/subjective consciousness…!?

    I equate these inner neurons of subjectivity to motor neurons because muscles implement ‘real time’ emergent s l o w functions, just as qualia are internal s l o w functions. My hunch is that the cells are performing some metabolic function which is synchronized, as the scientists mentioned in the open letter to Koch.

  30. Scott –

    I’m afraid you read more into my comment than was intended. I don’t assume that you are a “pragmatist” or any other “-ist” because I don’t find such labels helpful. As with “intentionality”, the SEP entry on an “-ism” often seems to have so many branches and controversies that I find myself even more confused after reading it.

    I’m not suggesting that one accept the limitations of the intentional idiom and soldier on in a battleground where it’s applicable, but instead that once the battleground becomes science, that idiom is inapplicable.

    Presumably you are aware of Eric Schwitzgebel’s attempt to reify the concept of a belief by grouping behavioral dispositions into a family, each member of which responds similarly when triggered by a wide range of contexts (internal and external stimuli). Eg, a person “believes” (holds true) the proposition P = “Honesty is the best policy.” if – as evidenced by observed behavior – the person is disposed to tell the truth in a variety of situations. (I say “reify” because I envision behavioral dispositions as being implemented by a physiological process something like Edelman’s “theory of neuronal group selection”, which results from a learning process presumably along the lines of those envisioned by Quine and Sellars; Eric may or may not actually have reification as an objective.)

    Although I have found behavioral dispositions implemented in neural structures (shadows of Sergio!) to be a useful way (a heuristic?) to think about how our behavior might be implemented, I don’t see how retaining the term intentional “belief” helps in that endeavor, and actually suspect that it hinders. Ie, I also want to avoid the realm of behavioral psychology and instead address mechanisms. (Eg, another heuristic I use is thinking of a behavioral disposition as analogous to an electronic communication component called a “matched filter” which responds maximally to a specific input signal but also responds to other signals to an extent that is a decreasing function of their “distance” (in some sense) from the target signal. Analogously, a disposition may respond to contexts that are “close” (in some sense) to the one in which the disposition was learned. Perhaps Arnold’s RS can be understood as something like a matched filter on steroids?)
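    Since the matched-filter analogy does a fair amount of work here, a minimal Python sketch may help (the template and test signals are invented; the only point is that the filter responds maximally to its target and progressively less to more “distant” inputs):

    ```python
    import numpy as np

    # Minimal matched-filter sketch (illustrative only): the filter is effectively
    # the template itself, and its output peaks when the input best matches it.
    rng = np.random.default_rng(0)

    template = np.sin(np.linspace(0, 4 * np.pi, 50))   # the "learned" target signal
    close = template + 0.2 * rng.standard_normal(50)   # a context "close" to the target
    far = rng.standard_normal(50)                      # an unrelated context

    def matched_filter_response(signal, template):
        # Peak of the cross-correlation = how strongly the "disposition" fires.
        return np.max(np.correlate(signal, template, mode="full"))

    print(matched_filter_response(template, template))  # maximal response
    print(matched_filter_response(close, template))     # still strong
    print(matched_filter_response(far, template))       # typically much weaker
    ```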

    I don’t fully understand your paragraph re “purpose”. As I understand the first sentence, I agree. Re the second, I agree that the intentional idiom operates without detailed implementation info, but I’m not sure why you call it “neglect” when it appears to me simply a matter of unavailability. Re the third, I again agree, but I learned from Ramberg’s essay that the intentional idiom can’t be reduced to the vocabulary of neurobiology long before I ever heard of “heuristics” or “neglect” (in your sense of the words). So, I can’t ascribe to those concepts the importance you do – although I may well be missing something. And I’d be very interested in learning which Wittgensteinian ideas you consider play a role in all this.

    Finally, as I come to better understand your approach, I don’t find it “odd” at all. Or perhaps more accurately, I do find it odd in the sense of unusual but happen to share it nonetheless.

  31. Charles – Then we are cheek by jowl close! But my *guess* is that you suffer the standard abductive dilemma faced by all eliminativists: you have no way to explain what apparent intentional phenomena are.

    I’ve debated Eric on his dispositionalism a couple of times now. The problem I have is that it’s too phenomenalistic (in the skeptical sense, where we stick with observable behaviour and avoid positing entities to explain the behaviour), which is to say, too epistemically responsible to be scientifically useful. And I agree entirely with your worries vis-à-vis the theoretical utility of ‘belief.’

    “As I understand the first sentence, I agree. Re the second, I agree that the intentional idiom operates without detailed implementation info, but I’m not sure why you call it “neglect” when it appears to me simply a matter of unavailability.”

    Unavailability is ambiguous with reference to itself: that is, information can be unavailable, and yet still flagged as unavailable. Think of darkness or any of the other visual indicators of unavailable information in vision. Neglect is important because it provides a way to understand the kind of cognitive illusions that fall out of philosophical reflection on apparent mental phenomena. It explains why human cognition can consist of congeries of heuristic systems, yet strike deliberative metacognition as something unified and general. In other words, it provides a way to understand how the absence of information can generate the illusion of positive properties.

    Neglect, in other words, is the way out of the abductive dilemma. It allows us to see the philosophy of mind as an instance of the ‘availability heuristic’ run amok.

  32. Scott –

    But my *guess* is that you suffer the standard abductive dilemma faced by all eliminativists: you have no way to explain what apparent intentional phenomena are.

    Well, it may help allay such concerns if I inject two bits of information. First, I’m in no sense a philosopher but instead a retired comm systems engineer; I’m relatively uncorrupted by classical philosophical thought by virtue of being largely ignorant of it. Second, I’m pretty much a strict determinist (and apparently a hypocrite as well, having argued against “-isms” – but see below).

    In any event, I agree with Eric that apparent intentional behavior is just the triggering of behavioral dispositions. But whereas he tries to salvage propositional attitudes by grouping contexts (internal and external stimuli) that result in behavior consistent with an attitude (specifically, “belief that P”), I “explain” the behavior in a specific context by postulating that there is a neural structure that implements that specific behavior in that specific context – a “deterministic” process not in a philosophical sense but in a mechanical sense. The subject’s “attitude towards the truth value of a proposition” plays no role.

    Unavailability is ambiguous with reference to itself: that is, information can be unavailable, and yet still flagged as unavailable.

    I assume that last “un” shouldn’t be there and that you’re referring to introspection – which I reject. Davidson – one of my gurus – drives me nuts with his insistence on first person authority (by which, I should note, he doesn’t mean infallibility). I think we are “authoritative” (in an uninteresting sense) about our own behavior in that in a specific context a subject will reliably behave a specific way. However, I think we can do no more than to observe our own behavior just as does any 3pp observer except that we can observe more occurrences in more contexts thereby possibly forming a more reliable opinion.

  33. @Scott #26
    Re your first paragraph (“Well, if BBT turns out to be more or less correct […]”), I am hoping you are reading too much into my proposal and that the confusion/proximity between “aboutness” and Dennett’s “intentional stance” is interfering in our exchange.
    I hope this because I can’t see how BBT could be fundamentally wrong, not if we are looking for (and going to find) a complete and exclusively naturalistic/physicalist account of consciousness. Nevertheless, I still don’t see why my attempt on micro/proto-intentionality and computationalism would clash with BBT: I actually think that they fit together like lock and key. It’s always possible that I’m misreading BBT so that it fits my own taste, though! Specifically, here I am focusing on something that lies at a much lower level than semantic content and/or the conscious realm, therefore it’s even more removed from the cognitive activity of interpreting the behaviour of other systems: what I am talking about is what I think can bring us to understand how semantics and phenomenal experience are even possible in naturalistic/physicalist terms. Once we reach this level, BBT would kick in and explain that we, at the conscious/semantic level, are utterly ignorant of how these levels come into being, and why this ignorance leads us to all sorts of dead ends. I was hoping you did recognise this from the previous chapters: after all I’ve introduced the same micro/proto-intentionality already for humble bacteria and then hurried to specify that such organisms should be understood as knowing exactly nothing. Thus, I thought it was clear that my story is still many miles away from what’s required to experience a stream of thoughts, recalling memories, having cognizable goals, making plans and all the other “mental phenomena” that make Dennett’s Intentional Stance such a useful (and common) tool to navigate life. Your previous points 1) and 2) (#10) also seemed to suggest that you were still with me, and I do think they capture a distinction that is useful to grasp my message.

    This brings us back a few squares (assuming we progressed at all!), I’m afraid. The micro/proto-intentionality that comes with any sufficiently stable sensory system is not even close to cognition (it’s necessary, but nowhere near sufficient). The same applies to the basic form of learning shown by Aplysiae. However, both mechanisms are already useful to see how they enable their “owners” to act in seemingly teleological ways: even such simple systems seem to act according to their own internally defined purposes, even if we don’t need to expect/propose that these organisms have any kind of self-awareness. I’ve also mentioned above that even at this very basic level the story is already about neglect: these systems neglect the fallibility of their sensory structures and produce/transmit exactly zero information about how they work internally.
    For all these reasons I’m completely blind to what conflict there may be between my interpretation of basic biology and BBT. You do provide some clues, but to me they point to an explanation which indicates that you indeed read my proposal as if I wanted to cover far more ground than intended. It’s true that I’m trying to point in a desirable direction of travel, but I do so just after leaving square 0, and while I’m not even sure I am on square 1 (we are having this entire exchange because I am not sure!). I do expect this journey to involve hundreds/thousands of squares, to give a sense of perspective.

    I’m also puzzled by your second paragraph. Are you talking about memetic evolution? Otherwise, why would you mention cultural constructs such as (verbally expressible) concepts of self/soul and the like? Anyway, no, I don’t expect to live long enough to reach the point of exploring the evolution of concepts such as self and soul in memetic terms, and I don’t even know if it will ever be something more than a wishful attempt.
    Re “the ‘mental’ is nothing but a convenient fiction”: once again you seem to conflate two levels of explanation/description, at least in the reading that I can produce without further explanation. I have a quite developed idea of why our inner voice (and phenomenal experience) have evolved, and we surely (!?) agree that the way they both appear to us introspectively is deeply misleading due to blindness to how they come into existence. You may be inclined to declare both a mere illusion (I’m not), but I am convinced this is a cosmetic/strategic difference: I really don’t see why we would end up disagreeing on the substance of each other’s argument. So, “convenient fiction” is a definition we can agree upon, when talking about first-person perspective and the tricky nature of introspection (assuming fiction stands for “it’s not what it seems to be”). But then you talk about psychology, and I presume you are talking about the official scientific discipline, and thus move onto a third-person perspective: when you do so you leave me behind, because I cease to be confident I understand what you mean (again! and sorry!).
    To be very clear: I can mentally sing a song to myself, I can even compose riffs and melodies in my mind alone, and then implement them by singing or playing an instrument (or “think” a sentence and then type it down, as I’m doing right now), so even if I can easily accept the illusory/misleading sides of my inner life, I don’t see anything to gain from negating they exist in absolute (third party) terms. What I experience is most likely bound to be mysterious/inexplicable in introspective terms, but it sure as hell does exist. We go back to the good old cogito ergo sum which is pretty much the only first-order certainty about reality I can have…
    Perhaps I really need to write my (forever planned) post on why reductionism has to conclude that almost everything is an illusion!
    Or perhaps I’m failing to understand what you mean, again and to my own shame…

  34. @VicP #29

    Would you agree that the electrons and ion flows are simply the triggers to neurons; which implement some still undiscovered metabolic function that causes the emergent-qualia-subjective-consciousness….!?

    I rarely agree with something that doesn’t come with 2k words of explanations ;-).
    I find your observation on slow phenomena far more intriguing, but on your direct question, I’m afraid that my hunch goes in a very different direction: I think the qualities we all struggle with are likely to emerge as a result of the interplay across large populations of neurons, not primarily from some yet-undiscovered metabolic activity within single cells. I’m aligned with orthodox neuroscience here, and I am certainly happy to accept that I might be very wrong! The synchronisation (or better: orderly interplay) bit is crucial for both accounts, though. Also: I’m pretty sure we don’t have all the pieces of the puzzle, we certainly don’t know how to combine them, and I do believe we are still missing plenty of them.

    In my view, ion flows across membranes are indeed just a cog in a much bigger picture, so I might well call them triggers, but then any other mechanism involved may be triggering something else, starting with synaptic release, so I don’t think “trigger” is a particularly useful word/concept…

  35. Sergio: “what I am talking about is what I think can bring us to understand how semantics and phenomenal experience are even possible in naturalistic/physicalist terms.”

    The misreading could just as easily be mine: Lord knows I’ve done it before! BBT has two legs, the neglect leg, which I think you understand very well, and the *heuristic* leg, which I *think* is still tripping you up.

    “To be very clear: I can mentally sing a song to myself, I can even compose riffs and melodies in my mind alone, and then implement them by singing or playing an instrument (or “think” a sentence and then type it down, as I’m doing right now), so even if I can easily accept the illusory/misleading sides of my inner life, I don’t see anything to gain from negating they exist in absolute (third party) terms. What I experience is most likely bound to be mysterious/inexplicable in introspective terms, but it sure as hell does exist. We go back to the good old cogito ergo sum which is pretty much the only first-order certainty about reality I can have…”

    Experiences ‘exist’ the way money has value – which is to say, both obviously (relative to a restricted problem solving domain) and not at all (in any high dimensional or ‘third person’ sense). Consider how the category of the ‘mental’ (or even consciousness, for that matter) isn’t necessary to your example: you can *silently* sing a song to yourself, which you then manifest in musical behaviour. The tradition calls the ‘whatness’ of this activity ‘mental,’ and it seems as obvious as the nose on your face. But generalizing from instances like this to *theoretical categories* (like the ‘mental’) is precisely where the peril lies. What Descartes got right is the fact that thinking obviously means that he ‘exists,’ but only *given certain low-dimensional problem-ecologies.* Neglecting the heuristic nature of the intuitions involved, he assumed he could apply a *local* (because heuristic) verity *globally,* transforming what is a practical platitude into a theoretical foundation.

    The complicating factor with the ‘mental’ is that it provides a way to understand the kinds of interventions you find in empirical psychology, ‘mental functions,’ and thus fools the psychologist into thinking they must be talking about ‘objective’ (because operationalized) entities.

    Shit! I’m late picking up my wife!

  36. Sergio, From my engineering pov triggers are trigger pulses or short duration pulses that implement other circuit functions. In neuroscience the frequency of photon energy triggers activity in the retina and v1 areas which implement the slower function of image creation etc. My pov is that consciousness deals with the time domain of sound so that image flows in the brain occur in the same time domain.

    Nature has to create these slower processes which is what we call conscious experience.

  37. @Scott #35
    You are pushing at an open door when it comes to heuristics as local knowledge, but I think it’s worth exploring this topic, because there are some differences/nuances that I find intriguing.
    If I understand your position correctly, we reach very similar conclusions, but with one important difference, which stems from our different starting point. I start from science epistemology, while it seems to me that you restrict your conclusions to what can be directly derived from BBT, and/or what applies to “sciences of the mental” and philosophy of mind.
    If I’m right, our difference is that I push my conclusions pretty much to all knowledge/theories, while you don’t, or at least don’t insist on it as much as I do.

    So, where do I start? (will be sketchy, to keep it brief)
    1. As single individuals, we can only accumulate “knowledge” (I will qualify this better below) through direct sensory input. We sample the world and learn stuff. Therefore, everything we learn stems from induction, because nothing we directly experience can be assumed to be universal. Thus, when I study science from university textbooks, or read journal papers, I assume that most of what I read is “true” (with scare quotes) by virtue of induction: such sources are generally reliable, or supposed to be so.

    Journal articles are a nice example of why this sort of knowledge really does suffer the limits of induction: thanks to Ioannidis (see http://www.plosmedicine.org/article/related/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124, for example) and many others, we can’t cultivate the illusion that what is published is true anymore.

    2. The limits of induction are well known: if you’ve only encountered white swans, you “should” inductively conclude that all swans are white, and you’d be wrong. Therefore, all knowledge is provisional and inherently heuristic: it works until it doesn’t. This is standard science epistemology: theories are our “best explanations”, not our “true explanations”. We (the scientific community) expect theories to be refined and/or superseded, in an effort to make the knowledge they represent less and less local and heuristic.

    3. Within this general framework, two things should be done to make our knowledge more reliable (you can strive to make knowledge less local, less heuristic, or both). The first one is hinted at above: we can work to make our theories less localised and more widely applicable – this is sometimes done by merging theories into supertheories. The other approach stems from the first and ties to my obsession with biases and selective blindness: to make a theory less local, you need to understand its applicability boundaries. Once you know very well when and how it is appropriate to apply a given theory, the need to make it less local is weakened: it’s OK to have a “single purpose” tool, as long as you know when you should NOT use it. At the same time, this “defining the boundaries” activity makes your theory less heuristic, because you get better at predicting when it will fail. Thus, the second thing we can do is explore and define with ever-increasing precision the boundaries of applicability of a given theory. For example, Dennett’s intentional stance is a very local approach that is very well defined in its scope, therefore it can live as a standalone tool that is reliably useful in certain situations.

    I’m not saying anything new, but my take on the implications seems to be relatively rare.
    For instance: the legitimacy of well-bounded theoretical tools means that a plurality of approaches, each with different, but frequently overlapping, scopes of applicability becomes a resource, rather than a problem – even if the different theories are utterly irreconcilable. This does generate even more confusion in terms of the demarcation problem between science and pseudo-science, but I think it’s a price worth paying. For example, you could say that psychoanalysis can be very effective in certain circumstances and at the same time recognise that it doesn’t have what it takes to keep expanding its scope (i.e. it can’t explain the whole mental domain, even if we were happy to consider it a domain). It works in certain contexts even if most of us agree it should be relegated to the pseudoscience ghetto. At the same time, we can easily see why, when it was young, it could legitimately be seen as a scientific endeavour. Thus, saying “this theory is false” becomes moot, while explaining when and why a theory is or isn’t applicable becomes very interesting. At the same time, we rescue a whole lot of local and self-limited knowledge – claims about truth are sidelined, claims about usefulness highlighted.

    The second implication is that every theory is local, heuristic and bound to fail in one way or another. The exceptions are theories which, by virtue of their own abstraction, self-define their scope. Enter maths and other forms of secondary knowledge (knowledge about knowledge) where, by knowing the scope from the beginning, it sometimes becomes possible to reach definitive conclusions. What I’m writing right now falls in this category, as does BBT, I suppose, since its domain is clearly identified. However, we still have to assume that for one reason or another induction will not fail us within the agreed scope, and there is always the possibility that our assumption will prove to be wrong, so an element of chance is always present.

    The third implication is that ontological claims, unless they refer to other abstractions, need always to be understood as local, fragile and subject to revision (not really ontological, if you ask me). This is where things get interesting for me: when you look at science as it happens in the field, my points 1-3 are nominally accepted by all the scientists who have bothered to understand how science works. However, vast numbers fail to embrace this third implication and instead tend to treat the ontological claims at the foundation of their own discipline as strong and “true” (without qualifiers). You may have witnessed the same kind of mistake in philosophy, where errors usually creep in by failing to recognise where a given theoretical approach stops being reliable and why.

    In the context of your last comment and critique of psychology: my position is probably different from yours. I take it for granted that psychology can only consist of locally applicable theories, I also take it for granted that the ontological claims it produces are instrumental to the theories themselves, and should never be understood as universally applicable. Finally, I expect most specialists to be fooled by their own (perceived) cleverness and thus make the mistake of granting far too much ontological weight to their own theories. This last observation applies to the specialists of all disciplines, including philosophers, epistemologists, and, sadly, myself.

    In our specific case:

    The tradition calls the ‘whatness’ of this activity ‘mental,’ and it seems as obvious as the nose on your face. But generalizing from instances like this to *theoretical categories* (like the ‘mental’) is precisely where the peril lies.

    My take is that you need a good reason to propose the mental as an ontologically valid theoretical category, but there is nothing wrong in doing so, as long as you are well aware that what you are doing is building a theory which is necessarily local, fragile and subject to revision. People are usually unaware of this, and *that* is where the peril lies. Slight difference in perspective, but you surely will recognise how I’m trying to make the scope of the present theoretical effort as wide as possible 😉 !

    The other difference that I spot is that you (seem to) assume “science” as “third person perspective” is qualitatively better than other forms of knowledge. You are mostly right, in the sense that science as a whole aims to broaden its scope of applicability as far as it can, and, perhaps uniquely, does so explicitly and systematically (on paper); it also includes a multitude of different, overlapping and sometimes incompatible theoretical frameworks. Thus, it’s true that it does produce theories that are collectively less low-dimensional than what we can usually figure out on our own (cf. “folk psychology”). However, you are wrong if you assume/suggest/imply/hope that there is no bound on the dimensions encompassed by science: (I assume you don’t, but) ultimately, there is always a bound (even if we keep pushing it further). I personally don’t see much of a qualitative difference; if anything I’m annoyed by the arrogance of assuming that “scientific fact = true”. Scientific knowledge is still local and heuristic: when we are lucky, it is less local and heuristic than other forms of knowledge, but that’s all. It does mean that we agree on what Descartes got wrong, by the way!

    Back to BBT: I can see that much of the above is present within BBT and is used to explain the puzzling phenomena associated with consciousness. What I’m not sure is present is the meta-awareness that all of the above has to apply to BBT itself as well. That’s why a few weeks back I was asking something along the lines of “what is BBT blind about?”. The better one defines the scope of a given theory, the fewer ugly surprises one should encounter!

    And to conclude, back on topic! Having said all this, what I’m trying to do here is use your brains to help me see what’s wrong with, and/or the limits of applicability of, my own theoretical attempt (trying to explain how senses enable cognition and how to use the explanatory power of computational metaphors). In your case, you thought that BBT conflicts with what I’m trying to propose; I don’t think it does, so the question becomes: do you see why I think BBT and [sensory-proto-aboutness + computations] can easily fit together?

    I hope this makes some sense to you…

  38. Sergio:

    referring to “thoughts about something” allows me to clearly state that I’m interested [i]n the mental domain (thoughts) and also avoid the terribly loaded “intentionality” word

    Unfortunately, a “thought” – thinking that P – is another propositional attitude like believing, desiring, or wishing that P, and hence a word in the intentional idiom. So switching to thoughts doesn’t avoid that idiom. And the concept of “thought” also “generates its own layer of misunderstandings”.

    I try to employ the idea of behavioral dispositions implemented as neural structures to “explain” (ie, speculate on a possible implementation of) an activity like humming a song to oneself in terms of them. So, assume that such a structure has been formed and that it implements the observable activity of humming a song out loud in some context. Then humming the song to oneself would simply be exciting that structure in a different context, which results in the motor neuronal part of the structure being inhibited.

    The same idea applies to a “thought” as you describe except with the order of events reversed: there is an existing disposition to express (utter, write, type, etc) a thought (ie, a verbal string) and the silent thought is excitation of that disposition with expression inhibited. A “new” thought might result from some combination of existing dispositions being excited by a new context (input). Of course, I’m suggesting that one drop “thought” from the explanation/speculation and address behaviors as being caused by changing contexts rather than “motivated” by “mental states”.

    The point, of course, isn’t whether things actually work like that but instead whether it’s a plausible speculation – a so-called “proof of concept”. If it is, that suggests that there is no need for the intentional idiom (AKA “mental heuristic”?); a neurological one suffices.
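    In that spirit, here is a hypothetical toy sketch (the function and context names are invented) of the gating idea: the same disposition is excited in both cases, and context decides whether the motor stage is expressed or inhibited.

    ```python
    # Hypothetical gating sketch of the speculation above (all names invented):
    # the same "disposition" circuit runs in both cases; the context decides
    # whether the motor stage is expressed or inhibited.
    def disposition_hum_song(context):
        internal_activity = "rehearse melody pattern"   # excitation of the structure
        motor_output = "audible humming"                # the motor-neuronal stage
        if context.get("inhibit_motor"):                # e.g. 'sing it to yourself'
            return internal_activity, None              # silent rehearsal only
        return internal_activity, motor_output          # overt behaviour

    print(disposition_hum_song({"inhibit_motor": False}))  # hum out loud
    print(disposition_hum_song({"inhibit_motor": True}))   # "hum" to oneself
    ```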

    ‘intrinsic intentionality’ is there when a system systematically collects information about the surrounding environment

    And that raises the question: what does it mean to “systematically collect information about the surrounding environment”? Prior to any learning, training, etc, an organism has only sensory stimulation to work with. I assume that by “information” (yet another problematic word) you mean something that might be considered – according to some definition – primitive “knowledge”. But it’s a long way from an organism’s being able to respond (say, by taking defensive action) to certain sensory stimulation received from the environment to the organism’s acquiring from such stimulation some foundational “knowledge” about the environment. That’s the bootstrapping problem to which I alluded in comment 18.

  39. Sergio

    If you think that physical infrastructure is important to a “simulation” of a brain, then you simply aren’t a computationalist. In your model you are allowing matter to play a pivotal role in the working of your brain simulation. The difference is that matter can do whatever matter can do, which involves the creation of consciousness without providing obvious clues to humans as to how this happens.

    Computation on the other hand is a closed system. We know exactly what computation is and that it is incapable of having any physical or naturally causal effects.

    We know computation is a system of symbol manipulation that requires an observer to map physical attributes to symbolic states. Thus in a cpu of a modern computer, humans map small positive voltages in a semiconductor array to ‘0’ and small negative ones to ‘1’. But without that rule there is no computer. The computer exists only relative to an agent that knows the rules of this mapping.
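    To make the observer-relativity point concrete, here is a toy illustration (the voltages and both threshold conventions are invented, not any real hardware spec): the same physical states yield different bit strings depending on which mapping an observer chooses.

    ```python
    # Toy illustration: the "computation" lives in the mapping, not in the physics.
    voltages = [+0.7, -0.7, +0.7, +0.7, -0.7]

    # Convention A: positive voltage stands for '0', negative for '1' (as in the comment).
    bits_a = ['0' if v > 0 else '1' for v in voltages]

    # Convention B: the opposite assignment is just as physically legitimate.
    bits_b = ['1' if v > 0 else '0' for v in voltages]

    print(''.join(bits_a))  # '01001'
    print(''.join(bits_b))  # '10110' -- same voltages, different "symbols"
    ```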

    There is nothing inherently physically identifiable in a computational system: thus a conscious alien could find a laptop but would have no idea what it was until it was told or guessed the rule of representation. Unless he had verification in the form of a peripheral system – a keyboard or a screen – he might never know.

    Hence my point about inherent properties. If computationalism is correct (which you don’t believe, so maybe the point is not relevant to you) then there must be a physical attribute of a computational system that the universe can recognise in order to ‘run’ a brain program. As I don’t believe there can be, you must see my point that there can be no physical phenomena like consciousness emerging from non-physical systems like computation, which have no physical properties and thus no physically causal capabilities.

  40. Sergio

    I’ve never seen a convincing rebuttal of the Chinese room yet. I’m not sure you’ve altered much.

    I think Searle was not strong enough in his first Chinese room argument, which I think he later pointed out. The point is general: syntax is never enough for semantics, and never will be.

  41. I don’t know how the necessitation of mapping or representation got so popular. It seems to have left pragmatic survivalism behind entirely (or escaped into as much of an escapism).

    Perhaps consider some game world which is quite challenging for human players, with some AI players as good as the AI experts can make them. It might show, when the humans get picked off and the AI continue, that the AI in the sim didn’t need mapping or representation to continue where we ended.

  42. @All:
    Apologies for disappearing: I was travelling again!

    @VicP #36
    I hope it’s clear that I think we should always welcome approaches and interpretations that borrow concepts and explanatory power from different disciplines. In this context, the standard engineering idea of trigger makes sense, and it does resonate with what Charles mentions in #38.

    After that however, you stumble on well known obstacles:

    Nature has to create these slower processes which is what we call conscious experience.

    You touch on the problems of temporal integration and binding, but naturally the good old hard problem sneaks in as usual: slower processes are indeed likely to sustain the ability to bind together perceptions that happen at different time scales, and/or that need to be re-ordered to make sense of them (complex language syntax being the typical example; written numbers and maths syntax are my favourite). However, what makes the result conscious (in terms of phenomenal experience) remains a mystery: just expecting that the brain has ways of creating slower processes doesn’t really carry us across the explanatory bridge. If you do have an idea on how slowness brings us forward, please do share it!

  43. @Charles Wolverton #38
    Yes of course, “thought” already belongs to the intentional idiom, that was (ahem) intentional! My only aim was to avoid confusion with Dennett’s “intentional stance”, which is yet another level down in the hierarchy of the mental domain (aboutness -> meaning -> thoughts -> knowledge -> knowledge about other minds). Many people here (including me) find it hard to navigate the forest of interconnected words, so I try to keep it simple when I can.

    Re your dispositional account of thoughts, which in my reading looks like planned actions that never leave the planning stage, I find your account *extremely* plausible, from both physiological and introspective points of view. It also has the advantage of accommodating the phenomenology of dreams really well, as it is widely accepted that motor neuron output is actively suppressed during sleep. Trouble is, I don’t see anything in your account that explains why the mental, and even the “what is it like” domains even exist. You may be able to explain some parts of the mental domain, but it seems to me that there is still no role for the fact that formulating a thought, or mentally singing a song, is consciously experienced (there is something it is like to silently sing a song to yourself). Thus, I fear this sort of account shares the issues that have (almost completely, and all too late) sunk the prospects of behaviourism as an attempt to account for all of human behaviour. You go beyond classic behaviourism, and allow for thoughts and similar ‘mental’ phenomena to be included in your model, which is one step in the right direction, but two elements are still missing:
    1. What justifies the existence of these phenomena? This is easily addressed, though: planning as “simulating ahead” is a predictive endeavour and therefore certainly useful to select appropriate actions. Thus, such capacities can be evolutionarily advantageous.
    2. Why do such things happen with an associated feeling? We know from neuroscience, psychophysics and psychology that the things our brain does “in the dark” (i.e. without us being conscious of any of it) are very numerous and very complex.

    Explaining point two is our final endeavour, and here I’ve only tried to explore the possibility of getting started in one particular way. In the end, if this explanation proves possible, it will somehow necessarily rescue behaviourism: it will show how the “mental” domain, the fact that certain activities feel like something, and that a subset of these “feel like something” even if the brain produces no output (no external behaviour is observable) has its own role in producing/regulating behaviour. In other words, what an organism does is to be understood in terms of how it maximises the chances of survival, therefore explanations will necessarily boil down to “this mechanism helps produce appropriate behaviour”. Any explanation that finally accounts for the existence of the mental will necessarily be somehow behaviourist, in my view. My hope is that this time it won’t pretend that our inner world can be safely brushed aside, but will instead include it in the causal chain between perception and action.

    All this is to say that I don’t think we are that distant, but that I do believe you are somehow able to turn a blind eye to the part that truly is problematic: that’s understanding what our inner life is for and how brain mechanisms generate it. Failing to do so risks forcing us to take an Epiphenomenalist stance (see Peter’s account under the “Strange ideas” in the menus on the right), and I don’t see how it may help.

    So now we get to your final point, which is very helpful:

    Prior to any learning, training, etc, an organism has only sensory stimulation to work with. […] But it’s a long way from an organism’s being able to respond (say, by taking defensive action) to certain sensory stimulation received from the environment to the organism’s acquiring from such stimulation some foundational “knowledge” about the environment. That’s the bootstrapping problem to which I alluded in comment 18.

    I’m very much aware of the bootstrapping problem! That’s why I’ve written all of this… However, I do struggle to find a good way to translate my thoughts into understandable explanations; I think your way of framing the problem is useful, especially because it allows me to try explaining my point in your own terms.
    Yes, there is a long way between appropriately responding to an external stimulus and acquiring some foundational (and explicit!) “knowledge” about the environment. I do agree on this without reservations. However, I don’t think I’ve managed to explain the strange inversion that I am proposing.
    Observation 1: 99.99% of the behaviour of living things (because plants have behaviours too!) is the kind of behaviour you describe as “appropriate” while also implying that it requires no knowledge – my position starts from challenging this implication. Even humans, prior to any learning, come into existence with a lot of pre-established stimulus-driven behaviours (suckling, grasping, crying if injured, not to mention physiological responses and behaviours that facilitate learning). We humans learn so many things, and so many of them are consciously learned, that we can easily conclude (as many do) that we were born as blank slates, but it is a scientific fact that this isn’t the case, not even in the slightest.
    Now, what does this mean? Am I saying that we are born with some prior knowledge? Yes I am. But if I do, I then need to explain what this knowledge consists of, and how it came into existence, because otherwise I’m only kicking the can a little further (and/or just completing one cycle of the endless recursion that I’ve mentioned as the core of the problem in the first post). So what is this prior knowledge? It’s the range of “appropriate behaviours” that you mention as the example of “knowledge-free” reactions to stimuli. In the examples I’ve used, it corresponds to the ability of a bacterium to accelerate its metabolism when the presence of glucose is detected. For Aplysiae, it is the ability to retract when a noxious stimulus is received, as well as the ability to learn the association between the neutral and noxious stimuli (this ability embodies the “knowledge” that statistical regularities happen and can be exploited).

    Observation 2: If you are an organism which can
    A. React appropriately to a range of situations, enough to survive for some time (and parental care of mammalian style does help!).
    B. Record in some way if what you did was indeed appropriate (yes, there is a lot implied here, as it requires ways to evaluate if something is desirable or not, as well as a “concept” of self – there are problems here, which I will blissfully ignore).
    C. Reuse what was recorded in B to modify A.
    Then you will inevitably start accumulating something that is indistinguishable from knowledge as we normally understand it (a toy sketch of this A/B/C loop follows below).
    It will be knowledge that is relevant to you, as defined by the sort of things that facilitate or hinder your survival and reproduction in the sort of environment that you are likely to inhabit.
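    To make the A/B/C loop concrete, here is a deliberately crude Python sketch (the stimuli, actions and desirability scale are all made up for illustration, not a biological model); it only shows how reacting, recording and reusing can accumulate stored preferences that begin to look like know-how:

    ```python
    import random

    random.seed(1)

    STIMULI = ["glucose", "noxious"]
    ACTIONS = ["approach", "withdraw"]

    # A: innate response tendencies (the organism's built-in "prior knowledge"),
    # here started off flat so the loop has something to improve on.
    preferences = {(s, a): 0.5 for s in STIMULI for a in ACTIONS}

    def desirable(stimulus, action):
        # Built-in scale of desirable/undesirable outcomes.
        return (stimulus, action) in {("glucose", "approach"), ("noxious", "withdraw")}

    for _ in range(200):
        stimulus = random.choice(STIMULI)
        # A: react, mostly following current preferences, with a little exploration.
        if random.random() < 0.1:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: preferences[(stimulus, a)])
        # B: record whether what was done turned out to be desirable.
        outcome = 1.0 if desirable(stimulus, action) else -1.0
        # C: reuse the record to nudge future reactions.
        preferences[(stimulus, action)] += 0.1 * (outcome - preferences[(stimulus, action)])

    # After enough cycles the stored preferences look like "knowledge" of what to do.
    print({k: round(v, 2) for k, v in preferences.items()})
    ```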

    To look at this in practical terms: we don’t come equipped with sensors for gamma radiation, even though it can easily kill us, because life-threatening levels of gamma radiation are very rare in our normal environment. On the other hand, we come equipped with temperature sensors, and in-built avoidance reactions to temperatures that are dangerous to us (once again, we are not born as blank slates). Thus, what we (you and I, in a very practical sense) understand as knowledge is shaped by how our senses work: we understand objects as really solid because we bang on stuff and get hurt in the process (hurting being an undesirable outcome), not the other way round.
    We build what we call knowledge on the basis of the signals we receive from our senses, and how they rank in our built-in scale of desirable-undesirable states.

    This account does not solve the hard problem, but I do hope it helps on the bootstrapping side. I guess it’s fair to say that it explains away the latter, so I suppose some will criticise me on this basis, but I can offer nothing more: the evidence on how living organisms behave clearly points me in this direction (of course, that’s no guarantee that I’m right!). Most living things behave in a teleological way, whether they are conscious or not – for my account to work, we can be agnostic on whether all, some or no creatures are conscious. Therefore, they must necessarily embody the knowledge that is required to pick the “right” options – it may be “implicit”, unconscious and rigid, but it has to be there. The final conclusion is that the bootstrapping problem is malformed: it tries to explain something by starting from the wrong end. Searching for the origin of full, human-like knowledge is not going to work precisely because from this starting point we can’t break the endless-recursion circle; we simply don’t have the evidence that would allow us to stop chasing our tail. The way out is to start from the evidence and climb up. Thus, here I’m trying to show that reversing the question, and attempting an explanation of how embodied proto-knowledge can be used to build explicit knowledge, is a much more promising endeavour…
    Unsurprisingly, I find it hard to make my point clear, and naturally, even harder to convince anybody!

  44. I don’t see anything in your account that explains why the mental, and even the “what is it like” domains even exist.

    I’m speculating on how behavior might arise using a vocabulary which includes no word from the intentional idiom. So, for my purposes the “mental” indeed doesn’t exist.

    As for “what it’s like”, if one can explain all human behavior without assuming phenomenal experience (PE), it seems that PE becomes an interesting adjunct worth exploring for its own sake but not something that need consume everyone interested in how the brain works. I’m currently aware of nothing that positively negates the possibility that PE has no causal role in behavior.

    … behaviourism as an attempt to account for all of human behaviour.

    An example of why I try to avoid “-isms”. As you acknowledge, my speculation “goes inside the head”, hence isn’t classical “behaviorism”. OTOH, if every study of behavior is labeled “behaviorism”, then the word becomes redundant.

    it will show how the “mental” domain, the fact that certain activities feel like something, and that a subset of these “feel like something” even if the brain produces no output (no external behaviour is observable) has its own role in producing/regulating behaviour.

    This sentence as written in the post seems confusing. I read it as something like:

    While in response to members of some subset of PEs “the brain produces no external behaviour”, those PE nonetheless “have a role in producing behavior”.

    This seems to suggest some distinction between types of causal role in affecting behavior: perhaps direct vs indirect, or proximate vs distal. In any event, I think those of us who doubt the causal efficacy of PE mean that in some all encompassing sense. (At a minimum, I do – see footnote below).

    you are somehow able to turn a blind eye to … understanding what our inner life is for and how brain mechanisms generate it

    To repeat, I’m essentially hypothesizing that PE plays no role and seeing how far one can go before reaching an impasse. Those questioning that approach can’t just assume a priori that an impasse is inevitable, they must come up with one.

    Re. “knowledge”: IMO, for any discussion to get off the ground there has to be explicit agreement as to that word’s meaning. Paraphrasing Sellars’ definition (which I prefer): The state of knowing that a proposition P is true is the ability to assert P and to justify the assertion by providing reasons for holding it true (ie, asserting other propositions). This definition obviously implies linguistic ability and suggests the holism of “knowledge” so defined.

    The problem with starting with mere reactions to stimuli as primitive “knowledge” is that it doesn’t distinguish “knowing how” from “knowing that”. We attribute the former to all kinds of entities that react purposefully and reliably to stimuli. But we tend to think of the latter as a sign of “cognitive” capability, eg, inference, etc. The bootstrapping to which I refer is the transition from the one to the other, which apparently is considered by child psychologists to occur somewhere around age three or four, although exactly how is not well understood. Thus, it should be clear that the “bootstrapping problem” to which I refer has nothing to do with “searching for the origin of full, human-like knowledge”; rather it addresses the transition from possessing nothing meaningfully describable as “knowledge that” to at least a minimal repertoire of primitives of that sort.
    =============================

    Note: The assumption of PE causal inefficacy is motivated by the Data Processing Theorem of comm theory which states (roughly) that once the information content of data has been extracted, additional processing can’t produce additional information. (Intuitively obvious as stated, but not so much when stated formally – see wiki entry for “mutual information”) The information content of sensory input to the brain appears sufficient to determine appropriate behavior even in the absence of PE. PE may play some valuable role in an organism’s well-being, but it’s not at all clear that such a role involves controlling behavior.
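    For reference, the inequality being alluded to (the data processing inequality) can be stated as follows, with X the source, Y the data derived from it, and Z any further processing of Y:

    ```latex
    % Data processing inequality: if X -> Y -> Z form a Markov chain,
    % i.e. Z is computed from Y alone, then further processing of Y
    % cannot increase the information available about X:
    \[
    X \to Y \to Z \quad \Longrightarrow \quad I(X;Z) \le I(X;Y),
    \]
    % where I(\cdot\,;\cdot) denotes mutual information.
    ```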

  45. During the course of this comment thread I noticed this quote from a 3AM interview with Robert Brandom:

    The move from the practical intentionality of the beasts to our discursive sort of intentionality, which is the move from sentience to sapience, is what makes room for Hegelian Geist in the realm of Natur. Language (Sprache) is the Dasein of Geist, Hegel tells us.

    Now I have to admit that at best I’ve only the foggiest idea what most of that means, but it does appear to address our bootstrapping problem. If so, one might infer that fully grasping that problem requires understanding Hegel (and seemingly Heidegger as well), in which case the problem would be even more challenging than I’ve been assuming!

  46. Sergio, Put simply, what makes water molecules H2O is more than two hydrogen and one oxygen atom; each atom is a complex structure of subatomic particles. When water molecules bond into droplets etc, the phenomenality of liquid water is explained rather plainly by the behavior of simple molecules, while the subatomic functions which entail the forces of nature are largely ignored. As I said, the mechanical explanation suffices. Likewise for most organic structures: if we think of cells as molecules, because like molecules they have definite structure and are composed of repeatable units to form larger structures, then the mechanical explanation for most biological organs suffices. For neurons it is still a mystery how, or whether, the inner functions come into play in possible intercellular interactions to form qualia and other elements of conscious phenomenality, or whether the scientific explanation is simply incomplete.

  47. @John Davey #39 & #40
    That’s fair enough: if you take computations as being 100% abstract and entirely up to third-party interpretation, then there is very little need to discuss at all. You are right in saying they can do no work, because they only exist as interpretations of physical system dynamics.
    My view fits into this approach pretty well, I think: I say you can find meaningful ways to interpret the physical processes within a brain using knowledge about sensory systems, effectors, and ecology of the host organism. This knowledge can be (and normally is) used to reduce the otherwise infinite space of possible interpretations. You would then be able to make predictions and use prediction verification to progressively refine your resulting model.

    The bit that puzzles me is:

    In your model you are allowing matter to play a pivotal role in the working of your brain simulation. The difference is that matter can do whatever matter can do, which involves the creation of consciousness without providing obvious clues to humans as to how this happens.

    This suggests to me that you might be injecting some sort of magical ingredient into my mix via your “without providing obvious clues to humans as to how this happens”. I see this risk; I’m not saying you are certainly doing it. We do agree that we don’t have obvious clues, and that simply designing the “correct” algorithms as above (assuming it’s possible) while keeping them on paper (without instantiating them) does not have any chance of generating a new consciousness. You have got to physically instantiate the thing.

    What we may disagree about is that I think it might be possible to instantiate all that is needed in silico, as long as the required causal chains were correctly identified and duly recreated. Thus, you could even “simulate” a virtual world, with agents embedded in it, where the input/output of single agents recreates the required causal chains, along with the ones that are expected to happen within agents. I don’t see why not*, while I’m pretty sure you’ll scream “never!”.
    The reason why I think this could be possible is the subject of pretty much all of this discussion: the interpretation map that allows some particular structures to become “conscious” (by providing the first seed of aboutness) is the causal chain itself, or if you prefer, the structure that implements it. As long as a structure implements that particular causal chain, the result should not change, irrespective of the other physical properties of the structure.
    This could all be wishful thinking: in the absence of a candidate causal chain, we can only speculate. But still, for me, the result of this lengthy (and thrilling, as far as I’m concerned) conversation is that we can’t rule out this possibility a priori. If we agree on this (I assume we don’t), I’d have reasons to celebrate. [I think we do agree that such a causal chain is going to be mindbogglingly complicated, more than we can imagine]

    Re my sketchy take on the Chinese room: yes! My judgement is, paraphrasing you:
    I’ve never seen an entirely convincing rebuttal of the Chinese room yet. I’m pretty sure my own attempt is not conclusive and I’m not even sure it does carry some weight.
    I guess you’re being polite and you actually meant: “your attempt doesn’t change anything”. Once again, if you meant what you wrote literally, and implied that my take contributes a little bit, that would be another reason to celebrate.

    *I see many ethical reasons to avoid doing so, though.

  48. Sergio

    I’m not sure what you mean when you say computation is “100% abstract”. Computation is “100% abstract” in the sense that it’s a closed, man-made discipline within mathematics. In that sense it’s “100% abstract” by definition and I don’t feel at liberty to trump the work of Turing etc by assuming it’s something different.

    When you are on about “computation” I suspect you are on about something else. You cannot “physically instantiate” a computational system, by virtue of the fact that computational systems are not physical by definition. For instance, if I want to use the Sun as a basic computer, it becomes the computer the moment I decide to use it as a computer. I don’t “instantiate” the Sun physically – I make an observer-relative choice. This observer-relative existence is a prerequisite of all computers and there are no subsets where the rules are different – by definition. I think you seem to think that there is a special type of computer that isn’t really computational but which somehow emerges magically from neurobiology. That makes you a non-computationalist. Just admit it!

    Brains cause minds. They are remarkably similar across all animal species, chemically speaking. It’s the matter properties of brains that generate mental life. How, no one knows yet, but matter is not limited in what it can do by the laws of computational mathematics. Brain matter can cause pains, sexual excitement, sadness and happiness, and do so quite naturally outwith computational mathematics, whose ontological limitations we know – by definition – will not match the challenge of representing mental life.

  49. John, If we take computationalism to mean sequentialization, then the abstract quality which emerges from mind is actually time or timing. The cherished IF P THEN Q sets P before Q. On the most fundamental level, computers all run on a master internal clock, which is an artificial version of time, and all programmers really do is play the trick of resequencing the clock or artificial time to recreate reality, which is why people can be tricked by AI, CR’s, iRobots etc.

    The computational fallacy is no different from the airplane fallacy, or the belief that airplanes fly and cars go from one place to another. True, we can trick people by flying planes and navigating cars with computers, but these devices are just human extensions, or extensions of our advanced sensorimotor systems.

    Computationalism, along with science and philosophy, biology, geology, engineering, etc., is just an extension of our more advanced language capabilities. Once you’ve studied computers you see that computationalism is just another élan vital in a long line of dualisms that underscores the gaps in our thinking.

    The more fascinating question is why we possess the ability to perceive dualisms in the first place. My take is that human cognitive dualisms come about through naturally advanced brain structure: any animal with an advanced sensorimotor system has to perceive movement, or the IF THEN ELSE of advanced animal nature. Humans simply apply more advanced observation and communicative language to it.

    As you pointed out above, human brains are the same as all mammal brains because they start with more fundamental emotions, or a fundamental core to which advanced language and cognitive function is added. This trick of more advanced biological nature is something which they (Sergio and the folks on this blog excluded; I mean the “Scientific and Academic Community”) act or remain clueless about.

    Strikes me as somewhat pathetic.

  50. @Charles Wolverton #44

    I don’t see anything in your account that explains why the mental, and even the “what is it like” domains even exist.

    I’m speculating on how behavior might arise using a vocabulary which includes no word from the intentional idiom. So, for my purposes the “mental” indeed doesn’t exist.

    As for “what it’s like”, if one can explain all human behavior without assuming phenomenal experience (PE), it seems that PE becomes an interesting adjunct worth exploring for its own sake but not something that need consume everyone interested in how the brain works. I’m currently aware of nothing that positively negates the possibility that PE has no causal role in behavior.

    OK, so here is a clear fracture between our aims. No doubt, this requires us to adopt different assumptions, language, methodology, and more. It’s entirely OK, of course, but I can’t follow you too much, because I think PE is very likely to have a causal role, so starting by ignoring it looks like an unnecessary self-imposed limitation to me (but I do see why it may turn out to be useful nevertheless).
    The reasons I think this are so many, and so entrenched in the foundations of my worldview that I’m finding it very difficult to summarise them, but I’ll try. I’m sure I’ll produce more confusing sentences in the attempt, please forgive my sloppiness if you can.

    In short, I can’t make myself believe that PE has no causal role: I reach the same result from whichever starting point I’m able to adopt.

First starting point: introspection. I’m picking this as my first attempt so as to highlight my own biases: I am sure that introspection colours all of the attempts that follow.
An example may suffice: my better half and I frequently end up discussing what to have for dinner, a recurrent and delicate matter. Imagine that three options are mentioned (with clues about our nationality), ordered here by descending effort required: pasta, gnocchi and pizza. I know I will prepare dinner, so what happens “inside my head”? Quick and quasi-unconscious sensations flash by, produced by anticipating how it would feel to prepare and consume the three options. Each comes with a glimpse of how good it would feel: if the feel-good factor of one option is clearly bigger than the others’, I’ll hear myself proposing it out loud. This account is compatible with the straightforward interpretation of the famous Libet experiment, but it includes a causal role for “what it feels like”, and importantly, it relies on memory/learning: I can experience some “anticipating sensations” and evaluate the feel-good factor also (but not only!) because I’ve stored some memory of the activities of preparing and consuming food in general and these dishes in particular. Thus, PE appears to serve a role in picking choices, as well as being at the heart of what is needed to learn from experience.
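Just to lay bare the skeleton of that little decision (a caricature of my own, with invented numbers; it captures only the comparison, certainly not the feel):

```python
# A caricature of the dinner decision described above (all values invented):
# each option gets a quick "anticipated feel-good" score drawn from memory,
# discounted by the effort of preparing it, and the best option gets proposed.
remembered_pleasure = {"pasta": 0.7, "gnocchi": 0.8, "pizza": 0.9}
effort = {"pasta": 0.5, "gnocchi": 0.3, "pizza": 0.1}   # descending effort: pasta first

def anticipated_feel_good(option):
    """Memory-based glimpse of how good the option would feel, minus its effort."""
    return remembered_pleasure[option] - effort[option]

proposal = max(["pasta", "gnocchi", "pizza"], key=anticipated_feel_good)
print(proposal)   # -> pizza
```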

Second attempt: somewhat metaphysical. PE is what makes life what it is. Take it away, and I would not know good from bad, because you can’t distinguish between desirable and undesirable sensations if you don’t experience any sensation at all. I like being alive because of PE. If we ignore it, nothing of interest remains. If I ignore my own sensations, no motivation remains, so I can’t understand at a meta-level why someone would want to completely exclude it from her world-view. It’s OK to say “PE and the mental domains are far too intractable, I’ll try to see how much can be explained without them”; perhaps this is your position, but I’m not completely sure. On the other hand, Epiphenomenalism is a known philosophical can of worms: accepting it in a radical way generates so many puzzles that I don’t even know where to start (and I have no desire to re-visit that uncomfortable rabbit hole once again).

    Third attempt: evolutionary theory.
Whatever is biological needs to be explainable in terms of how it evolved. As far as I can tell, any trait can usually be fitted into one of three main categories:
    C1: an evolutionarily advantageous trait, or a contributing element of something advantageous,
    C2: a fairly (or completely) inconsequential side effect of some other trait/component,
    C3: a relic or a necessary trait that was shaped by long-past evolution.
The three classes are very blurry: the longer C2 and C3 elements remain around, the more likely it is that they will be re-used for something useful, so over time they tend to crawl back into C1 or disappear. Perhaps a few prototypes might help clarify. C1 should be obvious, so I’ll skip it. C2 is the trickiest (and the least stable in evolutionary terms), but one example could be our tendency to see agency where there is none. Other, more pragmatic C2 examples can be found by looking at our physical structures: for example, our hands can grasp in only one direction, so in some cases we need to rotate the hand in order to grasp an object. C3 would include things like silenced genes, or the fact that human embryos start developing gills before taking a different path. You may prefer to lump classes two and three together; little would change.

In our context, your hypothesis implies that PE should be in either class 2 or 3, certainly not C1. Thus, PE is either a necessary and inconsequential side effect of something else, or the shadow of something that was useful in our evolutionary past. The latter option seems very unlikely, so much so that I’m at a loss to explain why; would you agree? We are left with the “inconsequential side effect” explanation. Side effect of what? Nobody has even the slightest hypothesis. Inconsequential? If there is some consensus in this area, this would be my best candidate: the vast majority of human beings would agree with what I’ve expressed as my “second attempt”, and very few would happily accept that PE is inconsequential. Thus, from the evolutionary point of view, we find that two additional questions are raised by your hypothesis, while nothing is explained by accepting it. I am not sold. [For clarity, the two questions are: (Q1) Why is PE and/or the mental domain a necessary side effect of something else? What is this something else? (Q2) Why do both PE and the mental domain intuitively seem to have very strong causal effects when they actually don’t?]

    Fourth attempt: vanilla cognitive neuroscience.
The whole aim of the discipline is to find the causal links between brain (neuro-) mechanisms and the mental (cognition). If you take away PE (or all the intentional idiom), there is no mental domain, and the whole discipline collapses into neuroscience. This is likely to be your aim, but objections similar to those of attempt 3 apply: if PE and the mental domains are illusions, what creates the illusion? Why is the illusion necessary? Once more, nothing is explained and more questions are raised (or the same questions we started from, with “illusion” added), making the attempt very unattractive to my eyes.

    Fifth attempt: internal consistency.
You yourself admit that your attempt somewhat enters into the mental domain. But you also try to make the point that for [your] purposes the “mental” indeed doesn’t exist. How do you resolve the tension? It’s possible that we will one day be able to explain how behavior might arise using a vocabulary which includes no word from the intentional idiom, and this would be useful. But still: additional explanations would be welcome, because after all, everything we talk about is mediated by PE: without experience and symbolic language (which requires the mental domain), there is no talk at all. Thus, it seems to me that this sort of attempt is, as I’ve hinted before, wilfully turning a blind eye. There is nothing wrong in doing so, as it is really possible that explaining what we can tackle now will make it easier to explain what is currently out of reach. Therefore, I’m OK with this approach if and only if it is agreed that we do expect to find an obstacle at some point and “reach an impasse”: the bet would be that what we’ve explained in the meantime contains what is needed to break the impasse. Trouble is, before getting there, we can’t know whether that will be the case, and actively negating what we expect will become an obstacle makes it unlikely that the resulting theory will be well placed to explain what it wilfully ignores. So, no, for all its attractiveness, this kind of “strategic” blindness doesn’t look promising to me.

Overall: it seems to me that your speculation is destined to fail, a priori. One way or the other, it will necessarily require some additional work. If all behaviour can be explained without additional work (and we have no reason to believe so, while attempt 3 gives reasons to be sceptical), we would still have to explain why PE and minds (seem to) exist. The one hope would be that by building the direct explanations of behaviour we would somehow make what is currently out of reach less intractable. However, because of attempts 3 and 5, I don’t see why I should bet my money on this particular strategy: it seems to me that PE is present in at least a good number of mammals (because you can tell when your dog/cat feels pleasure or pain!); if it’s conserved, it’s very likely that it plays some causal role.

Your interpretation of my confusing sentence (apologies!) is correct: my own take is that PE plays a causal role in learning; sometimes it will not have any consequence (nothing is learned, or what is learned is never used and is eventually forgotten). At other times the consequences will be visible as behaviours, but only after the fact (when what was learned is used). When I hum a tune silently I am reinforcing my memory of the tune; in case my memory is patchy, I might also be filling in the gaps, so this activity might have a behavioural consequence if I ever have some use for this knowledge (or if someday I feel like humming the same, or a slightly modified, tune). Because I like torturing my guitar, there might be consequences for my improvisations: elements of the tune might sneak in even if I probably won’t be aware of the connection.

    Re. “knowledge”: IMO, for any discussion to get off the ground there has to be explicit agreement as to that word’s meaning.

    Agreed, having a shared definition is usually necessary.

    Paraphrasing Sellars’ definition (which I prefer): The state of knowing that a proposition P is true is the ability to assert P and to justify the assertion by providing reasons for holding it true (ie, asserting other propositions). This definition obviously implies linguistic ability and suggests the holism of “knowledge” so defined.

    And we depart again!
Adopting this definition actively hinders progress. It does imply linguistic abilities, and thus the intentional idiom and metacognition; the trouble is that in this way it sets our target far too high, with no intermediate steps to rely on. To produce a naturalistic explanation of knowledge defined in this way is to cover the whole distance from matter to the highest cognitive abilities in one single step. It’s not going to work; see also below.
Furthermore, it generates the need to produce new definitions for what comes before knowledge: a bee can learn where to find nectar-rich flowers and direct its feeding flights accordingly. However, what the bee has learned doesn’t even get close to your definition, so we would need to invent something to cover this (and many other) intermediary step(s). Once again: why should we make our route harder a priori?

The problem with starting with mere reactions to stimuli as primitive “knowledge” is that it doesn’t distinguish “knowing how” and “knowing that”. We attribute the former to all kinds of entities that react purposefully and reliably to stimuli. But we tend to think of the latter as a sign of “cognitive” capability, eg, inference, etc. The bootstrapping to which I refer is the transition from the one to the other, which apparently is considered by child psychologists to occur somewhere around age three or four, although exactly how is not well understood. Thus, it should be clear that the “bootstrapping problem” to which I refer has nothing to do with “searching for the origin of full, human-like knowledge”; rather it addresses the transition from possessing nothing meaningfully describable as “knowledge that” to at least a minimal repertoire of primitives of that sort.

And now we are talking! This is a far more useful way of looking at things. However, I’ll keep nitpicking, ’cause I can’t resist. First: you explicitly mention some of the intermediary steps between your definition of knowledge and inert matter (“knowing how” and “a minimal repertoire of primitives of that sort”); this reinforces my point above: if we don’t consider these kinds of things some sort of knowledge, how do we define them? Second: your previous point seemed to suggest we can call knowledge only the “full, human-like” kind, but then the bootstrapping applies to knowledge that we are not allowed to call knowledge; pardon me if I’m confused 😉
    Third: the bee probably only “knows how” (flying in that direction for this long leads to food) but needs to learn it: thus, it has inferred(!) the how from previous experience (this experience could consist of instructions delivered by another bee). The point I’m trying to make (equivalent to my Aplysia example) is that the moment any kind of learning enters the picture, some sort of knowledge is necessarily implied. The other observation is that the transition between “knowing how” and “knowing that” does not need to be all or nothing as our definitions would intuitively suggest. In fact, the transition is seamless and so gradual that deciding where to put the boundary is entirely arbitrary; I hope all of the above does clarify why I see it in this way.
[If you are unconvinced: I could say that “the bee knows that a certain body shaking from a companion provides instructions on where good flowers can be found”, and in this way “the bee learns how to reach the flowers”; but I could also say “the bee knows how to receive information about sources of food by looking at the dance moves of her fellow bees, but doesn’t know what ‘source of food’ means, it mindlessly reacts to the dance”; and of course “the bee knows how and when to do the meaningful dance herself”, therefore it necessarily “knows how to decide when dancing is appropriate, which is equivalent to knowing that the dance is appropriate only in certain situations”. Are any of the above sentences clearly false? You could (arbitrarily) define the rules for deciding when to use “how” and when “that”, and I’m pretty sure it would be possible to find more borderline cases that would be difficult to adjudicate. As a result, the whole effort looks like a discussion on “-isms”: pretty moot, and I think you’ll agree.]
Thus, framing the problem in terms of bootstrapping is somewhat misleading: you don’t add a single faculty and suddenly the route to “knowing that” becomes a walk in the park. The route from always responding in the same way to the same stimulus, to adding some circumstantial optionality, to increasing optionality on the basis of other learned things, to knowing “that”, and then being able to think about what it is that you know, and then being able to explain or formulate your thoughts in language, and then being able to ask yourself “what is knowledge?” and so forth, is mostly composed of gradual steps, with no or very few sharp transitions. It would be very interesting to explore the route I’ve sketched above with the aim of identifying transitions that are necessarily sharp; my guess would be that our only strong candidate is the appearance of syntactically complex language. Even then, however, I don’t think it’s reasonable to speculate that complex syntax appeared in an all-or-nothing fashion.
Overall, my overarching point is that “knowing how” is the fundamental prerequisite for “knowing that”, and that the route from one to the other is long and gradual. Thus, understanding how the “knowing how” step is realised is the necessary prerequisite for understanding how “knowing that” becomes possible. The more I think about it in these terms, the more I end up embracing “radical embodiment” approaches to psychology. If you are unfamiliar with this theoretical framework, a very good starting point, packed with links and references for further reading, is:
    http://psychsciencenotes.blogspot.co.uk/2011/11/embodied-cognition-is-not-what-you.html
    The points of contact with what I’ve expressed here are numerous, and importantly, I think they reinforce one another because the starting points are quite different: if different underlying assumptions and methods lead to compatible conclusions, I can’t avoid thinking that the link is likely to be embedded in the underlying reality (it could be a common bias as well!).

    Finally:

    The assumption of PE causal inefficacy is motivated by the Data Processing Theorem of comm theory which states (roughly) that once the information content of data has been extracted, additional processing can’t produce additional information.[…] The information content of sensory input to the brain appears sufficient to determine appropriate behavior even in the absence of PE.

In light of the above, I’d hope you could guess why I disagree. First of all, decisional algorithms can always be modelled/described as lossy compression algorithms. Thus, it comes as no surprise that information is discarded along the cognitive route. If you are in the business of extracting regularities from a given signal, you do so by progressively reducing the amount of information you retain: you keep the regularity and discard the noise. Biologically, this allows an organism to reduce the amount of information it needs to store in order to identify regularities across the time domain (with a huge hat tip to VicP!); this would allow an organism to find connections between stimuli that occurred at considerable temporal distances. If you model minds/brains in terms of decision algorithms, PE becomes the result of an initial (heuristic!) attempt to identify what information is worth storing for further reference. The mental domain and all of the observable features of consciousness then become the natural (necessary) consequence of the ability to recall and re-analyse the stored information in light of new sensory input (and somewhat recursively redefine what is worth storing). There, I’ve said it: this is the gist of the overarching theory of consciousness I’ve been developing over the last few years. It is also one reason why I’ve written my two posts and sought your feedback here: if the challenges to computational approaches demonstrate that no information can be extracted from sensory input (because there isn’t, and there can’t be, a point of reference), then the whole approach is undermined at its most basic foundation.
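To illustrate what I mean by “keep the regularity and discard the noise” (a minimal sketch of my own, not a model of anything in the brain; all names and numbers are invented):

```python
# A decision process treated as lossy compression: the raw samples are thrown
# away and only a running summary (the "regularity") is kept and used to decide.
from collections import deque

def summarise(samples, window=5):
    """Lossy step: keep only a moving average, discard the raw detail/noise."""
    buf = deque(maxlen=window)
    summaries = []
    for s in samples:
        buf.append(s)
        summaries.append(sum(buf) / len(buf))
    return summaries

def decide(summary, threshold=0.5):
    """Decision step: only the compressed signal reaches the 'behaviour'."""
    return "approach" if summary > threshold else "avoid"

noisy_input = [0.1, 0.9, 0.2, 0.8, 0.7, 0.75, 0.6]   # invented sensory samples
print(decide(summarise(noisy_input)[-1]))             # -> approach
```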

I hope you won’t receive this lengthy reply as overly confrontational: I am being direct because it aids clarity, and because you come across as someone who would appreciate directness in itself.
    It’s a pleasure to debate with you!

  51. @John Davey #48

    I think there indeed is some unfortunate ambiguity in my use of the “computation” term. I’ll try to clarify:

    I’m not sure what you mean when you say computation is “100% abstract”. Computation is “100% abstract” in the sense that it’s a closed, man-made discipline within mathematics. In that sense it’s “100% abstract” by definition and I don’t feel at liberty to trump the work of Turing etc by assuming it’s something different.

    I’m with you so far. We agree on this “100% abstract” sense, it applies to a closed, man-made discipline in mathematics.

    When you are on about “computation” I suspect you are on about something else. You cannot “physically instantiate” a computational system by virtue of the fact that computational systems are not physical by definition. For instance if I want to use the Sun as a basic computer, it becomes the computer the moment I decide to use it as a computer. I don’t “instantiate” the Sun physically – I make an observer-relative choice.

This is where language fails us. In my job, I would write down sketches of how a new feature of our software should work. I would then write it in the appropriate programming language and “execute” the code to see where I’ve got it wrong (because I always get it wrong, at first!). In my world, this is referred to as a first analysis step, where you abstractly design some (pseudo) algorithm and then instantiate it by writing the actual code and executing it (in Object-Oriented languages, “instantiation” even has a related internal meaning, to make things even more confusing!).
    Similarly, when I put together (or repair, or just switch on) a computer or server, I’m not interpreting some existing object/phenomenon as a computer, I’m actively changing the world to make it fit my purposes.
The trouble is, I have very few ways to avoid using terms that have a very precise meaning in Computer Science (intended here as a branch of Mathematics). Nothing new: in different domains the same words frequently have different meanings. However, our discussion frequently spans different domains; for example, the DwP argument is only possible because it exploits this cross-domain ambiguity.
    When I plan how an algorithm would work, then write it and run it, I’m doing something physical; if I’m not allowed to use the “instantiate” verb to describe the “write and run” activity, what else can I use? Suggestions are welcome!
    Similarly, the computers we use are actually lumps of matter, but how should I call them, if not computers? The same goes for computations, if the lumps of matter that we call computers are not objectively performing computations, how should I call what they actually do?
The point is: for our purposes, the 100% abstract sense is completely useless, if not harmful. On these grounds, I’m happy to “trump the work of Turing etc”, use the terms differently, and in a way that matches the everyday experience of most of us [because CS mathematicians are a tiny minority!]. The ambiguity remains, so apologies for that!

    I think you seem to think that there is a special type of computer that isn’t really computational but which somehow emerges magically from neurobiology. That makes you a non-computationalist. Just admit it !

Under your rules, the only allowed definitions of computer and computational mean that everything and nothing is a computer, and nothing physical is objectively computational (something can be computational only relative to an intentional observer). Thus, no-one can be a computationalist, whether they know it or not. Happy to admit the bleeding obvious! 😉
    Also: no, I don’t think “there is a special type of computer that isn’t really computational but which somehow emerges magically from neurobiology”, it’s just that under the CS mathematical definitions nothing physical is really computational (“This observer relative existence is a prerequisite of all computers”), not even the computers we build and use. In other words, the conceptual problem isn’t in my camp, the problem of ambiguity is!

    We still want to find a way to explain how “Brain Matter can cause pains, sexual excitement, sadness, happiness”, so maybe a compromise is required… Would using only language like “signal transmission” and “signal transformations” help you follow me? I’m pretty sure we can (theoretically) rephrase everything I’ve written in this terms, if it helps.

  52. Pingback: Sources of error: the illusory illusions of reductionism | Writing my own user manual

  53. Sergio

    I know what instantiate means inside and outside of computer programming languages – I’ve been working with them for 30 years !

In fact I recall a period before OO and even before 3GL languages. We used to write in the native instruction set of the chip in question, be it an Intel or a Motorola. The great advantage of new languages like ‘C’ and Pascal was that they saved you the effort – they wrote the instruction-set code for you, as compilers do and STILL do, even in the OO era.

Programmers today do exactly what they did 30 years ago: they write basic instruction-set primitives for CPUs. All that has changed are the toolkits used to produce them – compilers. Please don’t suggest that OO languages, or whatever other ‘new’ technology is being used (OO is actually pretty old now, it’s just far more common in production systems), have any impact whatsoever on the runtime ‘awareness’ of a CPU. Programming languages have no direct impact on the runtime state of a CPU whatsoever. Not saying you think that, but a lot of the AI bunch do, because they simply don’t know how computers actually work. The trouble with a lot of contemporary computer education is that they don’t actually teach how CPUs work any more, just programming languages.


    ” Similarly, when I put together (or repair, or just switch on) a computer or server, I’m not interpreting some existing object/phenomenon as a computer”

    You most certainly are ! Or try telling your employer otherwise !


    “I’m actively changing the world to make it fit my purposes.”

This is where computationalists seem to hit the brick wall, failing to distinguish the pattern from the product, the duck from the painting of the duck…

    “When I plan how an algorithm would work, then write it and run it, I’m doing something physical; if I’m not allowed to use the “instantiate” verb to describe the “write and run” activity, what else can I use? Suggestions are welcome!”

Of course you are doing something physical: you are mapping physical states to mathematical ones using fixed rules. But the program – and there is no fault in language here, as the definitions are clear – is the mathematical state the physical state represents. Just like a painting of Cleopatra contains an idea and a visual representation of an Egyptian queen as well as the actual paint.


    “The same goes for computations, if the lumps of matter that we call computers are not objectively performing computations, how should I call what they actually do?”

Computers are hardware that provide platforms for programs to run on. The programs are implemented as variations in the physical state of the hardware.
None of this is complicated or “harmful”! Nor does any of it suggest that programs can or should be treated as “physical”, because if they were, it wouldn’t actually be a computer any more!


    “Under your rules, the only allowed definitions of computer and computational mean that everything and nothing is a computer”

    Anything physical can represent a programmatic state in some way. Not MY rules – the rules of the engineering and mathematical discipline known as computing. I’m making no rules up.


    “nothing physical is objectively computational”

I’ve never seen an example of it yet. I think Daniel Dennett tried to make one up, didn’t he? But it’s difficult to take him seriously on this subject, if not others, likeable as he is. He made an awful argument that determinism and free will are compatible, based upon some kind of video-game analogy, showing that he’s an imaginative charlatan if nothing else…


    “Thus, no-one can be a computationalist”

They can be if they believe that computer programs cause mental states. They tend to think this if they have not considered – or understood – the true, observer-relative nature of computer programs. But fundamentally it’s an impossibility by definition – as long as one accepts that consciousness is a self-evident neurophysical phenomenon and not a computer program, which it must be, because mental qualities cannot be replicated or duplicated by mathematical objects.

Dennett is quite logical as a computationalist at this point – he simply denies the phenomenon. (Although in order to deny the phenomenon – consciousness – you have to know what it is first, so I think this lacks credibility on his part.)

“It’s just that under the CS mathematical definitions nothing physical is really computational (“This observer relative existence is a prerequisite of all computers”), not even the computers we build and use.”

I don’t know where you get this idea from. The demarcation between hardware and software is clear enough, isn’t it? Software is NEVER physical, even if realised in a physical architecture. Hence the name ‘software’.

    “We still want to find a way to explain how “Brain Matter can cause pains, sexual excitement, sadness, happiness”

Sure – let the neuroscientists and biologists do it. Computer and software people simply don’t have any reason to expect they can provide one iota of insight into the cause. Why they do is the real mystery of the whole debate, I think. It’s worth a research project in its own right – why do software scientists assume they know how the brain works, despite having not a scrap of scientific evidence to demonstrate it?

  54. Sergio –

    Independent of dietary preferences, I think names ending in “-io” suggest nationality, no? And like you, I comment assuming the possibility – in my case, likelihood – of being wrong, so I don’t interpret disagreement as confrontational. Not to worry.

    “PE and the mental domains are far too intractable, I’ll try to see how much can be explained without them”

Yes, that’s essentially my position, except that it isn’t primarily motivated by apparent intractability. A couple of times you say I’m “turning a blind eye”, but that’s a misrepresentation of my motivations. In the case of the intentional idiom, I consider that it simply has no positive – indeed, arguably has negative – value in discussions addressing how behavior might be effected by neural structures. In the case of PE, I question its role in effecting behavior. That’s not turning a blind eye, it’s strategically averting one’s gaze.

The assumption that PE has a causal role in behavior is implicit in our language: “I saw X and therefore did Y” (ie, “I experienced a visual mental image of X and in response took action Y”). But that kind of observation has no probative value in refuting my argument. We have to consider what might be going on inside the body.

In my simple model, visual sensory stimulation leads to neural activity in the brain, and that activity can be interpreted as an information-bearing (Shannon-sense) signal. The information (call it “I-in”) can be extracted and used as input to various processors. One possible processing path is to first use I-in to produce a visual PE which is then used to effect a behavior. However, the Data Processing Theorem says that I-in can be no less than the information content of the visual PE; hence, I-in must itself be sufficient for effecting the behavior. This raises the question – especially when quick reaction is critical – of why evolution would have injected an unquestionably delaying and arguably unnecessary processing step into the path from sensory input to responsive action. OTOH, we know that evolution has made mistakes!

    The theorem doesn’t say that there can’t be multiple processors, each extracting a subset of I-in for a specific purpose. It just says that no processor can create information that wasn’t already in I-in. And my argument clearly doesn’t show that the path from I-in to behavior doesn’t include producing PE or that PE isn’t produced for some other reason.
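For reference, the standard statement of the result I’m leaning on (the textbook data-processing inequality, not a quote from my post): if the processing forms a Markov chain

$$X \to Y \to Z \quad\Rightarrow\quad I(X;Z) \le I(X;Y),$$

ie, processing Y into Z cannot create information about X that Y did not already carry. Reading X as the distal stimulus, Y as I-in, and Z as the visual PE (or the behavior it feeds), this is what licenses the claim that I-in is at least as informative as anything derived from it.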

    I assume it’s the second sentence of the quote from my post re the Data Processing Theorem with which you disagree since the first sentence just paraphrases the theorem. Perhaps the above elaboration helps.

    Re your attempts three and four, I’m of course inclined towards C1, but not necessarily for purposes of directly effecting behavior. So, I’m unable to convince myself that it’s not C2.

“Side effect of what? Nobody has even the slightest hypothesis”

I actually do have a rough hypothesis. A visual PE can be thought of as a mosaic of distinguishable patterns of neural activity (“qualia”, if one insists) that have been “recognized” in the stream of neural activity consequent to visual sensory stimulation. And once one has language, those patterns can acquire names: eg, neural activity pattern 783 -> “blue” (B). Presumably, in principle the brain could locate those patterns relative to some virtual point in the organism’s environment (eg, the I! in Arnold’s Retinoid System), thereby making available everything needed to construct an image analogously to a paint-by-numbers picture: at position (x=13, y=27) in the FOV is pattern 783. That’s enough for the subject to “describe” a (virtual) image; eg, “I see a 3X3 matrix of colored squares; reading row-wise, it’s RBGWWGBRB”. Note that being able to “describe” such an “image” doesn’t require that the brain actually create a visual PE. So, why we experience one remains a mystery – although the explanation “if you can, why not?” comes to mind.
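A toy rendering of that paint-by-numbers idea (my own illustration; the pattern IDs and the 3x3 grid are invented):

```python
# Named neural-activity patterns located at positions in the field of view are
# enough to *describe* an image without ever rendering one.
PATTERN_NAMES = {783: "B", 512: "R", 640: "G", 901: "W"}   # invented pattern IDs

# A 3x3 "field of view": each cell holds the pattern recognised at that position.
fov = [[512, 783, 640],
       [901, 901, 640],
       [783, 512, 783]]

def describe(field):
    """Read the recognised patterns row-wise and report their names."""
    return "".join(PATTERN_NAMES[p] for row in field for p in row)

print(describe(fov))   # -> RBGWWGBRB : a description, not a picture
```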

    In your “fifth attempt” you say I admit that my discussion “somewhat enters into the mental domain”. That’s either a slip on my part or a misinterpretation on yours. I try to use words from that vocabulary only either to dismiss the associated concept or to identify a conceptual entity using common terms (eg, “mental image”). Since we are discussing what goes on in the so-called “mental domain”, that some of the relevant vocabulary is necessary shouldn’t be a surprise.

    BTW, I also read “the Two Scientific Psychologists” blog! Affordances anyone?

  55. Re “knowledge” ala Sellars:

    Often, my quoting a word or phrase is meant to indicate that I’m using the word hesitantly or as used in informal discussion. Eg, I don’t actually consider “knowledge-how” – eg, skills – to be “knowledge” in Sellars’ sense and was using “knowledge-that” to distinguish “knowledge” in a cognitive sense from skills. That was a mistake which unfortunately misled you. So, I have no response to your discussion using those terms since it’s based on that mistake. Mea culpa.

    what the bee has learned doesn’t even get close to your definition, so we would need to invent something to cover this (and many other) intermediary step(s).

    Agreed, and Brandom has taken a step in that direction by calling those intermediate abilities “reliable differential responsive dispositions” (RDRDs). The bee just needs to execute a learned (or perhaps even innate) disposition to behave a certain way in a certain context. Just like, say, a pro tennis player reacting to a cannonball serve. Something has to be added to get from such activities to something we would think of as “cognitive”, and Sellars argues that it’s language and the ability to use it to justify our assertions.

    To produce a naturalistic explanation of knowledge defined in this way is to cover the whole distance from matter to the highest cognitive abilities in one single step.

You seem to be reading more capability into Sellars’ definition than is actually required. Merely asserting P = “This ball is red” and justifying doing so by explaining “Those are balls too” and “I’ve learned my colors” might suffice to demonstrate that a child “knows P”. And I think you’re misinterpreting my use of “bootstrapping”, which is meant to be the move from having nothing that Sellars would call “knowledge” to having a set of “knowings” sufficient to allow inference to get off the ground. It’s the move from merely reliably producing “red” when exposed to a red object (which a colorimeter can do) to that child’s “knowing P”. Thus, as I’m using the word it has nothing to do with the process described late in your relevant paragraph.

    In any event, I’m not really trying to sell Sellars, just suggesting that one making an argument about “knowledge” needs to define as precisely as possible how to distinguish a state of “knowing P” from a state of “not knowing P”. That Sellars takes 100 pages of notoriously dense argument to do this suggests the difficulty of the task.

  56. You might read some of the work of Crutchfield on information, thermodynamics and computational mechanics, where one estimates the intrinsic information in physical systems. Looking at information content in the message but not the receiver is not the way to go. Before there is intentionality, there is already value to an organism, such as your bacteria, who have elaborate behaviours such as anticipatory encystment and sociality – see also biosemiotics. That is, value is the intermediate target (environmentally dependent) leading to maximization of fitness.

  57. @John Davey #53
We are not making much progress, are we? I would guess this is (also) because we are trying to grind different axes…
    I wasn’t suggesting weird and indefensible stuff about OO languages; if anything, I was vaguely pointing to the fuzziness and range of different meanings that the word “instantiate” can have.

    I’ll gloss over the details, as I don’t think pursuing each one of them will do us much good.
    The key point (KP) I think is:
When we “map physical states to mathematical ones using fixed rules” (our real-world description of what instantiating a program means), the map includes fixed state-transition rules: we have modified a physical system so as to make the causal relations between its possible states and its state-transitions match our expectations (assuming there are no bugs), or “roughly fit” our expectations in the more likely case where bugs are present but pretty rare.

    To me, a defensible “computational” (with scare quotes) stance is the following:

(Assumption) What makes a system conscious is a set of particular causal relations between the system’s internal states and its state-transitions (i.e. StateX produces a transition to StateY, but not StateZ, etc.); these relations apply to states within the system, but also between the internal states of the system and the outside world (to some extent, with “brain in a vat” kinds of exceptions).

Thus, identifying these causal relations is what we are trying to do; hopefully we can also produce an explanation of why some particular (and certainly vast and intricate) causal relations generate phenomenal experience and the like, while most others do not.
Because of KP, we can expect that these causal relations can be reproduced within computers. This expectation is warranted by:
– the standard CS assumption that we can “simulate” any physical system with the desired degree of precision,
    – the fact that “there is an infinite number of possible maps […]”,
    – and KP

    “This is why in the original discussion I’ve said the arbitrariness of the mapping is the best argument for a computational theory of the mind” (quoted text comes from my first post here: “Sergio’s Computational Functionalism”).

    What does the real work here is the assumption: I am starting from the hypothesis that what counts in generating consciousness (and PE, etc.) is to be found at the level of state transitions, that’s why the “Functionalism” qualifier is not optional. The implicit assumption being that consciousness is a process (or the necessary result of a process): you can’t be conscious of anything if your internal state isn’t changing at all…
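A minimal sketch of what I mean by fixed state-transition rules (entirely hypothetical, just to pin down the vocabulary; the states and inputs are invented):

```python
# Once a mapping from physical configurations to abstract states is fixed,
# the transition rules are fixed too: the physical system has been arranged
# so that its causal evolution tracks this little table.
TRANSITIONS = {                       # invented rules, standing in for a program
    ("idle", "signal"):   "alert",
    ("alert", "signal"):  "respond",
    ("alert", "silence"): "idle",
}

def step(state, sensed):
    """Which state the system moves to, given its current state and an input."""
    return TRANSITIONS.get((state, sensed), state)   # unknown pairs: stay put

state = "idle"
for sensed in ["signal", "signal", "silence"]:
    state = step(state, sensed)
print(state)   # -> respond
```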

    One thing I was trying to achieve with these two posts was to see if I could find convincing objections to the above, and so far I haven’t, or maybe I’m unconsciously (!) refusing to recognise the objections as convincing.
    For me, there are only two options: either consciousness, PE, qualia and so forth are part of (or at least the result of) certain causal chains, or they are a-causal. If the latter, all computational attempts will fail, but so will all strictly “third-party”, objective and scientific attempts.
    Otherwise, computational attempts may work, if we can find the correct causal relations and reproduce them with enough precision. If there is a third way, I’ve completely missed it.

    I am not trying to say silly things like “brain is the hardware, mind is the software”, of course we agree that’s nonsense. What I don’t know is if you are trying to propose that there is a third way, or that my two possibilities above are the result of faulty thinking, or that there are causal relations that can’t be reproduced in silico, or some other reason to find a fundamental disagreement. As far as I’m concerned there is no fundamental disagreement between us: of course we’d need the contribution of neuroscientists and biologists to identify the causal relations that we’re interested in.

“We are not making much progress, are we?”

That assumes that agreement is the way forward, which it is never going to be in matters of science. Somebody here is wrong and somebody is right.

“we have modified a physical system so as to make the causal relations between its possible states and its state-transitions match our expectations”

There are no causal relations between state-machine states. There are sequences of states entirely defined by the engineer. Expectation is not the same thing as design.

    “What makes a system conscious is a set of particular causal relations between the system’s internal states and its state-transitions”

    This statement I think is very, very vague. It probably suffers, like a lot of these ideas, from the confusion between the observer-relative and the inherent. A “set of particular causal relations between the system’s internal states and its state-transitions” sounds a bit vague but it does sound like something an external observer might see and a computer would/could be unaware of.

A computer in state ‘S1’ at time ‘T1’ will not, and could not, have any knowledge of a state ‘S2’ at time ‘T2’ without there being an inherent natural link between the two – which there isn’t, because the state is a computational state and is determined entirely by the external observer.

You are back to the same inescapable position: computing is observer-relative, and that’s the end of it. It’s not a matter of opinion, it’s a matter of definition.

    “What does the real work here is the assumption: I am starting from the hypothesis that what counts in generating consciousness (and PE, etc.) is to be found at the level of state transitions,”

“State transitions” only exist in your head. They are not real. They cannot create a natural phenomenon like consciousness. I look at this cup of coffee in front of me now: I decide it goes from State A to State B. Does it start thinking?

    “One thing I was trying to achieve with these two posts was to see if I could find convincing objections to the above, and so far I haven’t, or maybe I’m unconsciously (!) refusing to recognise the objections as convincing.”

    Something only you could know. I’m an external observer and can only guess what you’re thinking. But my guess is you’re kidding yourself.

    “I am not trying to say silly things like “brain is the hardware, mind is the software”, of course we agree that’s nonsense.”

It’s not nonsense if you’re a computationalist! That is the very definition of what it means to be a computationalist!

    “What I don’t know is if you are trying to propose that there is a third way”

    OK – firstly I’m making no specific claim as to how the brain works. I don’t have to and nor should I, as the science isn’t there.

    None of that affects the conclusion that the brain does not work like a computer because we know from our existing knowledge that it cannot be working like a computer. It’s impossible. We know that already.

    However, I’m not of the belief that consciousness is a ‘mystery’. Far from being a ‘mystery’ it’s evidently the most natural thing on earth for a human being. In fact it’s only a ‘mystery’ in western reductionist circles – most cultures regard the physical and mental as part of one, coherent universe.

I don’t think that physics – as currently constituted – has a way of explaining consciousness, because its techniques are based on mathematics. That’s fine, because we don’t use physics to study any other biological features either. Maths is of no use in studying the mating habits of penguins, for instance. There is a real problem, though, with the propaganda of physics and the use of questionable terms like the “laws” of physics, and snap-together phrases like “clockwork universes”.

These terms suggest that the mathematical models physics uses have a greater underlying reality to them than simply being great predictors. So if physics can’t predict consciousness – which it can’t – some people conclude that consciousness must be the problem, as they are dogmatized to the point of stupidity by physics propaganda. They just can’t believe that physics could have huge holes in it. They don’t think of physics as a science of humans; they view it as an extended sister of mathematics.

But guess what – physics is very human. Physics is for the most part ad hoc, designed-to-fit and syntactical. It works – some of the time – but when it does we don’t even understand why. Quantum physics gives us nothing that could be termed ‘understanding’. It just works.

I think that the brain causes consciousness, and it’s the material contents that matter. I ignore physics on this question, as physics is incapable of dealing with it: it does not possess the tools to answer the question. Computers are irrelevant, as discussed. Biologists and neuroscientists, on the other hand, do have the tools, and to my mind they are the people who will lead – and are leading – the way. If artificial consciousness is generated, it will be in a biology lab.

    J

  59. Pingback: Why Sergio’s computationalism is not enough | Observing Ideas

  60. Hi,

    I made a post on my blog that discusses Sergio’s ideas and even quotes some of the comments that I read from the two posts. Read it here if you like:

    https://observingideas.wordpress.com/2015/08/06/why-sergios-computationalism-is-not-enough/

And I invite you all to a discussion. I hope I’m not hijacking the site’s traffic 😉

    [Fine with me, especially since you linked back here! In fact if you haven’t gone and done the same on Sergio’s own blog, I recommend you do… Peter]

  61. @All
    I owe everyone an apology: I’ve disappeared for far too long. It’s not that I’ve lost interest, but in part it is true that we are gaining less and less as the discussion lengthens. Perhaps it is time to let it rest for good.
Below I will answer the last comments: it’s a matter of courtesy, but it will also give me the chance to finish off by mentioning my future plans (and one further reason why I started this conversation).
It goes without saying that my gratitude to Peter and all the contributors is hard to overestimate. It has been a privilege to debate with all of you.

  62. @Charles Wolverton #54 & 55
Apologies for the late reply: it’s fairly typical for me to remain silent when I don’t really know what to say… I’ve allowed myself a good amount of time to regain some distance, and perhaps I can now produce a not completely off-the-mark answer. I think I’ve spotted some common ground, but I might well be very wrong! [I also assume I’m wrong; what counts is how much!]

    I don’t disagree with your description and use of the Data Processing Theorem, and was going to point out that you seem to forget an important detail, but maybe you don’t:

[PE could be the] “Side effect of what? Nobody has even the slightest hypothesis”

    I actually do have a rough hypothesis. A visual PE can be thought of as a mosaic of distinguishable patterns of neural activity (“qualia”, if one insists) that have been “recognized” in the stream of neural activity consequent to visual sensory stimulation. And once one has language, those patterns can acquire names

If we’re lucky, the above captures the basic intuition of the hypothesis I’m trying to develop. You use the “recognize” expression, and then link recognition to language. Both activities – a sort of classification, and the verbalisation of the same – rely on information that is somehow already stored in the brain (the important detail) and used to transform I-in. My rough hypothesis is that PE happens in the classification stage, and that awareness of PE becomes inevitable when one recursively re-classifies the results of the first, automatic classification. In a nutshell, some visual signal gets processed and a part of it gets classified as “Tom, a friend of mine, is waving at me”. Tom lives in another country, so the sight of him is unexpected; this results in me taking a second look, confirming the first classification, also classifying my reaction as “surprise”, waving back, displaying a big smile and uttering “Tom! What are you doing here?”.

In the above (my description may be couched in the intentional idiom, but I don’t think it matters), there is nothing that violates the basic rules of information transmission and transformation: the original signal is impoverished along the processing route. That’s what you want in any decision-making context (of course, generating behaviour can be described as the process of deciding what to do next): you need to discard the details and extract only the ecologically meaningful information (that visual pattern is caused by a person) and then enrich the result with more ecologically meaningful information you’ve already stored (I know that person, I like that person, that person wasn’t expected to show up here, […] yes, it’s really him). Almost all of the original information is dropped, and some other information is added by the first “recognising” (wink to Peter – the source of the “added” information is information stored within the brain, AKA memory) and by subsequently assessing/recognising what consequences this new information/result has for me. This last assessment needs to allow for some recursion, as consequences can have consequences (I’m surprised, so I’d better check again) and so forth, and this recursion is what we experience as meta-cognition. We can re-recognise the consequences of recognition. Because recognition is based on what we have learned to classify in exclusively subjective terms (as all incoming sensory signals are relative to our own physical structure – the subject of my two posts here), the end result is that we become aware of subjective perception, while having no idea of how the whole thing works (wink to BBT). We would also have no clue that “blue” is actually pattern 783, because all we can recursively reclassify are the symbols/labels, not the mechanisms that generate them.
At the bare minimum, the process above is required to accumulate the information that one would use later to classify new patterns. A bee might come with no, or very little, ability to learn new classifications; in your language, all of its possible dispositions are pre-established at birth. Mammals (and others) in general are not like this: most or all of them learn by experience, and are able to get started based on the pre-established basic dispositions of “seek” and “avoid” (plus many more), or, in my language, the pre-existing foundational concepts of “desirable” and “undesirable”. A pain/damage signal is initially always classified as undesirable, or, if you prefer, at birth a pain signal invariably triggers an “avoidance” disposition.
    Within your scheme, based on standard information theory, all of this can fit effortlessly (as far as I can tell), but PE would emerge, and would emerge as the consequence of being able to learn, or (in your terms) as a consequence of the mechanisms that are necessary to acquire new dispositions in response to whatever detectable regularities the environment happens to possess.
If I think about this, I see no way in which a system that worked as above could fail to experience PE, and fail to be as puzzled by it as we are (assuming its classifying, memorising and recursive abilities were approximately as powerful as our own).
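If it helps, here is a deliberately crude sketch of the recursion I have in mind (all names and rules are invented; it illustrates only the shape of the process):

```python
# First pass: classify the input; second pass: classify the *result* of that
# classification (and its consequences) - the step I'm equating with meta-cognition.
def classify(signal, memory):
    """Map raw input onto a stored, ecologically meaningful label."""
    return memory.get(signal, "unknown")

def reclassify(label, expectations):
    """Re-classify the outcome of the first classification (e.g. as a surprise)."""
    return "expected" if label in expectations else "surprising"

memory = {"waving-figure": "Tom"}            # invented stored associations
expectations = {"neighbour", "postman"}      # Tom lives abroad, so not expected here

label = classify("waving-figure", memory)    # -> "Tom"
reaction = reclassify(label, expectations)   # -> "surprising"
print(label, reaction)                       # the re-classified result is what gets
                                             # reported ("Tom! What are you doing here?")
```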

Thus, your rough hypothesis looks correct to me, but it stops before producing a plausible explanation of why PE does fall in C1 (an evolutionarily advantageous trait – and the hypothesis with the higher prior probability). My equally rough hypothesis adds some more: it explains one more layer of behaviour (how new behaviours are learned) while accommodating the existence of PE and meta-cognition. All in two or three simple moves.

I’ve written the two posts here (also) because all of the above is algorithmic, and if we have to accept that all computational explanations are impossible by definition, then I would not even get a chance to start… So I had to check whether I had some hope of refuting the kind of objection represented by DwP, the computing stone et al.

For my exchange with you, I guess it all revolves around whether we should acknowledge or exclude PE a priori. The rest are details, really. My whole nitpicking about knowledge was intended as a practical demonstration of why trying to pinpoint higher-order concepts (such as knowledge) is not going to help: in the bottom-up direction we need to stay close to the basics and build up as we learn more; on the top-down side, we need to keep an open mind and our concepts/definitions as broad as possible. Thus, if pressed, I would start with a definition that is something like: knowledge is acquired by a system when some of its internal structures change in such a way that they may potentially influence future behaviours, while retaining the integrity of the system itself (otherwise the definition becomes too general!). We could then produce sub-classifications as we identify the different mechanisms that sustain different types of knowledge acquisition. From the bottom up, our classifications will follow the mechanisms and not necessarily our expectations: in our discussion, “knowing how” and “knowing that” distinguish between what we expect to be separate mechanisms, but I guess it’s fair to say that we don’t know if this expectation is correct. (In fact my working hypothesis is that “knowing how” is what makes “knowing that” possible.)
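A toy rendering of that broad definition (purely illustrative, with invented values): “knowledge” as any persistent internal change that can influence future behaviour while leaving the system intact.

```python
class TinyLearner:
    """A minimal system whose internal structure changes with experience."""
    def __init__(self):
        self.weights = {}                     # the "internal structures"

    def experience(self, stimulus, outcome):
        """Acquisition: an experience leaves a persistent internal change."""
        self.weights[stimulus] = self.weights.get(stimulus, 0.0) + outcome

    def act(self, stimulus):
        """The stored change may (or may not) influence later behaviour."""
        return "approach" if self.weights.get(stimulus, 0.0) > 0 else "avoid"

bee = TinyLearner()
bee.experience("blue-flower", +1.0)                     # nectar found: internal change
print(bee.act("blue-flower"), bee.act("red-flower"))    # -> approach avoid
```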
    Please see also my last comment below: I will be very glad to discuss the details with you in whichever form you may prefer.

  63. @John #58
My idea of “progress” doesn’t require us to reach agreement; it would be enough if I were sure we understood each other and could point to a well-defined disagreement. I know I’m not making it easy for you, as I’m trying to reconcile seemingly irreconcilable approaches and vocabularies.

    It doesn’t really matter, though: your position *is* clear, while my take-home message is a reminder that I need to be very careful and try to make sure I’m not relying on vagueness or obfuscation. Even if I do try to avoid unnecessary fuzziness, I can never be sure I’ve been successful without seeking some form of adversarial feedback, so thanks once more!

  64. @ihtio #60
    Wow. I’m flattered: I have no idea how you’ve managed to summarise all of this discussion and attempt to extract and highlight the weakest spots. Thank you!
    My comment will be very brief, though:
    I think most of your criticism is valid, but rests on overestimating my ambitions. I agree with you that “Identification of a best explanation of functioning of neural systems should be possible with or without the notion of computation”. However, what I am saying is that algorithmic interpretations are likely to be the ones that will be easier to find (and that I expect them to cover a lot of explanatory ground). The main purpose of all this was to see if algorithmic interpretations are by definition useless. I don’t know if I’ve been successful, but I do know that I didn’t encounter a rebuttal that I find convincing. I may of course be fooling myself ;-).

  65. @ihtio #60 (…continued)

    On defining “meaning”. It once again goes back to what I’m trying to do: (in my opinion) algorithmic explanations for (directly experienced) psychological phenomena are the Holy Grail of Cognitive Neuroscience (see http://www.consciousentities.com/?p=1971#comment-389035). However, this has consequences: one has to accept the intuitive and fuzzy categories that we use to describe our inner life and find a way to bridge them with basic neuroscience. To do so, philosophical work is useful to shorten the explanatory gap from the top-down (and has to be a trial and error affair), so I voluntarily keep some definitions fuzzy: not doing so would increase the risk of reaching too many dead-ends. Doing so does risk obfuscating, and I do fear I’ve been unwillingly guilty of that.
In short, (philosophical) intentionality/aboutness and meaning/semantics are the concepts used in philosophy of mind to describe the minimum requirements of minds themselves (as experienced directly by some philosophers, at the very least); this is why I use them! I keep them as fuzzy and undefined as I think I can get away with, while trying to avoid making my whole effort a useless mental game, and again, I can’t say if I’ve been successful.

  66. @All
    Thank you, I think this is more or less all we could hope to discuss in this form.
    I will certainly read all further comments, but will reply only if I’ll see some clear reason to think that there is plenty to learn from further discussions.

    I’ve already sketched my follow-up idea in my latest reply to Charles Wolverton (#62). As described there, I have been working on something that’s even more ambitious than what I’ve tried here: explaining why we face the Hard Problem from a (broadly deductive and certainly mechanistic) evolutionary perspective.
    So far the paper has been peer-reviewed twice. The first time, the response was that I needed more space to lay out my argument (which was certainly true!). The second time (at a different journal), the reviewers felt the paper was too philosophical for a Cognitive Neuroscience journal. However, my whole effort tries to shorten the distance between the empirical and philosophical camps, so I do plan to keep trying to get it published in a journal that is read by neuroscientists (one more time, at least); suggestions are always welcome.

    As you all have surely understood, I’m eager for feedback, so I’ve decided that I will upload the paper to bioRxiv (http://biorxiv.org/) as soon as I can. I would love to get some specific criticism and I do hope that some of you will be tempted to contribute (in whichever form you may prefer).

  67. @Sergio #64,

    algorithmic interpretations are likely to be the ones that will be easier to find (and that I expect them to cover a lot of explanatory ground).

    “algorithmic” interpretations don’t necessarily involve the notion of computationalism or computer metaphor of the brain/mind. As I wrote in my post: even [modernized] psychoanalysis, [modernized] behaviorism, psychology, and embodied cognition can be defined in mechanistic (or algorithmic) terms, but that has nothing to do with computationalism.

    The main purpose of all this was to see if algorithmic interpretations are by definition useless.

    I doubt that anyone (even Searle) thinks that psychoanalysis, behaviorism, various psychological theories, the grounded cognition framework, or computationalism are useless by definition, so I can’t see the motivation for such research.
    If that was your motivation, it must have been a slip of my unconscious mind, or I unwittingly and immediately repressed it.

    @Sergio #65,

    one has to accept the intuitive and fuzzy categories that we use to describe our inner life and find a way to bridge them with basic neuroscience. To do so, philosophical work is useful to shorten the explanatory gap from the top down (and has to be a trial-and-error affair), so I voluntarily keep some definitions fuzzy: not doing so would increase the risk of reaching too many dead-ends. Doing so does risk obfuscating, and I do fear I’ve been unwittingly guilty of that.

    Proper definitions and descriptions of how those concepts operate in the general scheme of one’s theory, framework, proposition are the very foundations of every philosophical proposal, argument, theory. You can’t build a house on mud. You can’t build a decent philosophical argument using fuzzy concepts (not-so-clear definitions and/or their functions and place). That’s why I wrote in my post a light criticism of this part of your ideas.

  68. @ihtio #67
    I’ll try to keep this reasonably brief, also because we’d risk going wildly off topic.

    “algorithmic” interpretations don’t necessarily involve the notion of computationalism or computer metaphor of the brain/mind.
    [… And …]
    I doubt that anyone (even Searle) thinks that psychoanalysis, behaviorism, various psychological theories, the grounded cognition framework, or computationalism are useless by definition, so I can’t see the motivation for such research.

    The key point is that I didn’t invent this challenge. The challenge has been around for a long time and, as I’ve written:

    From this point of view, it becomes impossible to say that computations within the brain generate the meanings that our minds deal with, because this view requires meanings to be a matter of interpretation. Once one accepts this point of view, meanings always pre-exist as an interpretation map held by an observer. Therefore, “just” computations can only trade pre-existing (and externally defined!) meanings

    This summarises, as best I can, the argument I’m challenging, but it’s not an argument I’ve defined myself: I am trying to explain how it can be surpassed while accepting its premises; I am challenging only its conclusions. If one accepts the argument’s conclusions, most of Cognitive Neuroscience, which is rule-based and thus algorithmic (and can therefore be reduced to something that describes computations), becomes almost useless: it might shed some light on specific details, but will never be able to explain how we end up producing some “understanding” of the world (which is among the final aims, at least for me).
    Thus: I have to accept the vocabulary used by the argument I’m challenging; I have to talk in terms of “meaning” and “intentionality”, because they are part of the argument in question. I also do think that these concepts can be useful when used properly (see below), at least because a convincing explanation that uses them should be recognisable as such by philosophers of mind.

    (Thanks for raising my confabulations to the rank of “research”, you couldn’t flatter me more!)

    This links to your second objection:

    Proper definitions and descriptions of how those concepts operate in the general scheme of one’s theory, framework, proposition are the very foundations of every philosophical proposal, argument, theory. You can’t build a house on mud. You can’t build a decent philosophical argument using fuzzy concepts (not-so-clear definitions and/or their functions and place). That’s why I wrote in my post a light criticism of this part of your ideas.

    I can agree without reservation. However, one way of conceptualising what I’m trying to do is as follows (while also refuting the conclusions mentioned above): we have (many) philosophical theories that posit the usefulness of concepts such as “meaning” and “intentionality”. Across these theories, these concepts are fuzzy and underspecified, to the point that we have plenty of alternative theories that reach very different conclusions because they start from slightly different (precise) definitions of the concepts above. Which one is right? We have no idea, because the gap between empirical science and philosophy is still too wide, and trying to use the former to pick the best options offered by the latter is still a very difficult and error-prone exercise.

    Thus, I start with deliberately vague definitions of meaning and intentionality (and more), so as to include as many potentially correct philosophical theories as I can. I then try to use the empirical side and apply plenty of different methods of inference to:
    a. retain, whenever possible, the concepts that philosophy (and thus introspection) finds broadly useful to describe “what it is like”, and whatever supposedly helps explain how a “what it is like” gets to exist (our end-game explananda). If successful, this would account for the top-down usefulness of philosophy: we would get part of the explanation for free, thanks to existing philosophical accounts.
    b. refine the concepts as we go along, removing fuzziness as we learn more. Thus, the “aboutness/intentionality” concepts (that you don’t really like) find some validation in my bacterium argument and, at the same time, retain their already established/expected “explanatory usefulness”. Their fuzziness also gets narrowed down (reducing the number of “potentially correct” philosophical theories). See the bacterium discussion, and the toy sketch just after this list: if it works, it should help us pin down how exactly the concept of “aboutness” ought to be understood.
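    For readers who didn’t follow the earlier bacterium exchange, this is the kind of toy picture I have in mind (a deliberately simplified, chemotaxis-like sketch in Python; the threshold and the two behaviours are my own hypothetical simplifications, not claims about real bacteria). The point is only that the internal signal counts as being “about” the external feature in the minimal sense used here: it exists because it is produced by measuring that feature, and it systematically drives behaviour that is causally connected to it.

```python
# Toy illustration of minimal "aboutness": an external feature is
# measured, the measurement becomes an internal signal, and the signal
# biases behaviour. All details here are hypothetical simplifications.

def measure(concentration_now, concentration_before):
    """The 'receptor': turns an external feature of the world
    (a change in nutrient concentration) into an internal signal."""
    return concentration_now - concentration_before

def behave(internal_signal):
    """Downstream structure: the internal signal biases behaviour."""
    return "keep_swimming" if internal_signal > 0 else "tumble"

# Rising concentration -> keep swimming; falling concentration -> tumble.
for now, before in [(0.2, 0.1), (0.1, 0.3)]:
    print(behave(measure(now, before)))  # keep_swimming, then tumble
```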

    This may fail for many different reasons; the whole attempt is fraught with danger at every turn, and I understand why it makes you raise an eyebrow. But that’s why it’s fun, and a good reason for me to try it. Precisely because it IS dangerous, if I were a young graduate trying to build a career, following this route would be close to madness: I would jeopardise my chances in my field of choice. Fortunately, I have a completely separate and reasonably solid career, so I can indulge in high-risk activities that may look too dangerous to professional philosophers and/or scientists.
    Does it make a little more sense now?

    @All
    Having said all this, I’m glad to report that professional philosophers (at least one!) are proving me wrong, and are actually happy to try something that looks very close to the risky strategy I’ve adopted.
    Please see:
    Is Computation Abstract or Concrete? and Does Computation Require Representation?, both connected to a brand new book: Physical Computation: A Mechanistic Account by Gualtiero Piccinini, definitely something I want to read as soon as possible (i.e. not soon 🙁 ).

  69. Pingback: Robert Epstein’s empty essay | Writing my own user manual

  70. Pingback: Partisan Review: “Surfing Uncertainty”, by Andy Clark. | Writing my own user manual
