The bots are back in town…

Botprize is a version of the Turing Test for in-game AIs: they don’t have to talk, just run around playing Unreal Tournament (a first-person shooter game) in a way that convinces other players that they are human. In the current version players use a gun to tag their opponents as bots or humans; the bots, of course, do the same.

The contest initially ran from 2008 to 2012; in the last year, two of the bots exceeded the 50% benchmark of humanness. The absence of a 2013 contest might have suggested that things had wrapped up for good: but now the 2014 contest is under way, and it’s not too late to enter if you can get your bot sorted by 12 May. This time there will be two methods of judging: one, called ‘first person’ (rather confusingly – that sounds as if participants will ask themselves: am I a bot?), is the usual in-game judging; the other (third person) will be a ‘crowd-sourced’ judgement based on people viewing selected videos after the event.

How does such a contest compare with the original Turing Test, a version of which is run every year as the Loebner Prize? The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. They cannot set traps for the bots by making quirky demands (‘please reverse the order of the letters in each word when you respond’) or looking for a sense of humour.

In practice a significant part of the challenge is simply making a bot that plays the game at an approximately human level. This means the bot must never get irretrievably stuck in a corner or attempt to walk through walls; but also, it must not be too good – not a perfect shot that never misses and is inhumanly quick on the draw, for example. This kind of thing is not really different from the challenges faced by every game designer, and indeed the original bots supplied with the game don’t perform all that badly as human imitators, though they’re not generally as convincing as the contestants.
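Game programmers typically fake this fallibility by adding noise and latency to the bot’s otherwise perfect aim. A minimal Python sketch of the idea (the function name, constants, and skill model are illustrative assumptions, not taken from any actual Botprize entry):

```python
import math
import random

def humanize_shot(true_angle, distance, skill=0.6, rng=random):
    """Perturb a bot's perfect aim so its shots look human.

    true_angle: the exact bearing (radians) to the target.
    distance:   range to the target in game units.
    skill:      0..1; lower values mean sloppier aim.
    Returns (aim_angle, reaction_delay_seconds).
    """
    # Aim error grows with distance and shrinks with skill.
    spread = (1.0 - skill) * 0.05 * math.log1p(distance)
    aim_angle = true_angle + rng.gauss(0.0, spread)

    # Humans need roughly 0.2-0.4 s to react; perfect bots react instantly.
    reaction_delay = rng.uniform(0.2, 0.4) / max(skill, 0.1)
    return aim_angle, reaction_delay
```

The point is not the particular formula but that the bot’s designer must deliberately engineer imperfection in.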

The way to win is apparently to build in typical or even exaggerated human traits. One example is that when a human player is shot at, they tend to go after the player that attacked them, even when a cool appraisal of the circumstances suggests that they’d do better to let it go. It’s interesting to reflect that if humans reliably seek revenge in this way, that tendency probably had survival value in the real world when the human brain was evolving; there must be important respects in which the game theory of the real world diverges from that of the game.
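In code, such a trait can be as crude as a multiplier on target selection. A hypothetical Python sketch (the grudge weighting and tactical scores are invented for illustration, not drawn from any winning entry):

```python
def pick_target(visible, last_attacker, grudge=3.0):
    """Weight target choice toward whoever shot us last - an
    exaggerated 'revenge' trait of the kind the article describes.

    visible: dict of player name -> tactical value (higher = better target)
    Returns the chosen player name.
    """
    def score(name):
        s = visible[name]
        if name == last_attacker:
            s *= grudge  # chase the attacker even when it's tactically unwise
        return s
    return max(visible, key=score)
```

With a high enough grudge multiplier, the bot abandons the cool appraisal and goes after its attacker, just as human players tend to.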

Because Botprize is in some respects less demanding than the original Turing Test, the conviction it delivers is less; the 2012 wins did not really make us believe that the relevant bots had human thinking ability, still less that they were conscious. In that respect a proper conversation carries more weight. The best chat-bots in the Loebner, however, are not at all convincing either, partly for a different reason – we know that no attempt has been made to endow them with real understanding or real thought; they are just machines designed to pass the test by faking thoughtful responses.

Ironically some of the less successful Botprize entrants have been more ambitious. In particular, Neurobot, created by Zafeiros Fountas as an MSc project, used a spiking neural network with a Global Workspace architecture; while not remotely on the scale of a human brain, this is in outline a plausible design for human-style cognition; indeed, one of the best we’ve got (which may not be saying all that much, of course). The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively. Although I have my reservations about that concept, it could at least reasonably be claimed that Neurobot’s functional states were somewhere on a spectrum which ultimately includes proper consciousness (interestingly, they would presumably be cognitive states of a kind which have never existed in nature, far simpler than those of most animals yet in some respects more like states of a human brain).

The 2012 winners, by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour. Alas, this seems in practice to be the Achilles heel of Turing-style tests; canned responses just work too well.
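The record-and-replay trick is itself simple to sketch. Assuming movement is logged as 2-D waypoints (a deliberate simplification – real entries matched on much richer game state), the bot just picks the human trace that best fits its current situation:

```python
def closest_trace(current_pos, traces):
    """Pick the recorded human trace whose starting point is nearest
    to the bot's current position, then replay it verbatim.

    traces: list of lists of (x, y) waypoints recorded from human players.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # Replaying the best-matching trace yields motion that is human
    # by construction - the 'canned response' described above.
    return min(traces, key=lambda t: dist2(current_pos, t[0]))
```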

40 thoughts on “The bots are back in town…”

  1. While a machine that could reliably pass the Turing Test should have amazing commercial applications, I’m a bit puzzled about applications for a machine that can play a shooter video game like a human. Even the video game industry itself might not be all that impressed, since “non-human solutions” might be even more entertaining to see and potentially learn from. I suppose that the real reason for these efforts is to somehow make better progress in AI somewhere. Who knows… perhaps it will help.

  2. As I’m writing in English, which is a sort of recording of previous ‘plays’ of communication, is that a canned response?

    Also do they ever run false positive tests – matches which are entirely human players, and see whether no one tags anyone as a bot, or whether real humans get tagged just as much?

    “Like tears in rain…”

  3. At least you’re artfully arranging canned responses with a mind to the entire message, which makes one wonder if this couldn’t be applied to the bots themselves. This is why my initial thought about canned responses, that they’re cheating, may not actually be correct. How much of our daily language consists of turns of phrase, etc.? Not all of it but some of it, I think we can agree.

    This kind of reminds me of the debate regarding chess playing machines, like Deep Blue, and others. Much of the skill involved is “merely” extrapolating on brute force recall of canned strategy and much of it is brute force look-ahead. If the intent is to create intelligence, I still tend to call much of that “cheating” but how much, and where do you draw the line?

  4. This reminds me how I once teased Scott Bakker here for his profuse use of the term “heuristic” — though I do now see the value of a term for behavior that is “rule of thumb” or “canned.” I suppose that we open our “bag of tricks” constantly, and generally without even knowing it.

    Given our great dependence upon these “heuristics,” however, yes perhaps such things should not indeed be considered “cheating.” Furthermore perhaps “intelligence” isn’t the best term to describe what we ultimately seek here, as I think we’ve recently decided that our own machines should be considered somewhat intelligent already (though we may indeed want them to be more so). So perhaps it’s “conscious” Bots that we seek, or at least a simulation of it. Thus a good model of the conscious mind may be useful.

    From my own such model the line is drawn at “sensations” — without a punishment/reward dynamic from which to impel it, there can be no conscious function. And given that “the hard problem” has not indeed been solved, we are left to only simulate consciousness. But then what might consciousness effectively be?

    In comment #17 of the last discussion I mentioned that I see “conscious processing” as “consciousness itself,” and therefore this “thought” is the essential thing that would need to be simulated. Furthermore I define thought as “the processing of inputs” and “the construction of scenarios,” which Bots can indeed do (except not to make themselves “happy,” as required for consciousness). So without existence “mattering” to these subjects, apparently the burden remains upon Niv and other engineers to foresee the instructions that will be needed for effective function, since these subjects have no incentive to naturally figure out that which does not matter to them. Thus autonomy may remain a problem for our “non self” creations, though we might indeed make them quite intelligent.

    Observe that a very intelligent traffic light might “interpret inputs” and “construct scenarios” to move the traffic as effectively as any human could direct. I do doubt that this machine would need “pain” or “pleasure” to help it do any better. But when survival is an issue in a complex world, apparently evolution came to rely upon a punishment/reward dynamic so that life would “pay the consequences” for not personally figuring things out. This makes me wonder if ants are just “Bots,” or if they do indeed experience some sensations.

  5. From my own such model the line is drawn at “sensations” — without a punishment/reward dynamic from which to impel it, there can be no conscious function

    Over a decade ago I was considering how to make an AI. I’d decided it’d need some kind of system for good feedback to reinforce behaviour.

    But how do you make ‘good’, I thought? The world reeled slightly when I realised you don’t – it’d just be a positive five volt charge on a wire. It’d simply be the various digital processing reactions to such an input feed – the significant weight (higher numbers) that would be a result of the input would be how the processing system reacted to it as ‘good’. It’d feel ‘good’ from the inside – but there is no inside of a wire. What you’d have at best is a number of value systems that say, in the case of pain, essentially go low number and ‘die’. I.e., a behaviour suddenly dies off and part of the processing system (when viewed as numbers) goes dark (ie, your behaviour of touching the hotplate suddenly ceases to be one of your behaviours because ouch!). If there was a system that reports such process changes, either in yelps of pain or more sophisticated messages, it could report something in regard to those now dead behaviours/synaptic weight changes. So there is a legitimate change going on. But its reportage is very, very simplified.
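    That ‘five volts on a wire’ picture amounts to a bare-bones reinforcement update, which can be sketched in a few lines of Python (purely illustrative – not a model of any real nervous system, and the learning rate is an invented constant):

```python
def update_weights(weights, action, reward, lr=0.5):
    """'Good' is just a number on a wire: a positive reward bumps up
    the weight of the behaviour that produced it, while a painful
    (negative) one drives that weight toward zero so the behaviour
    'dies off'.
    """
    weights = dict(weights)  # leave the caller's table untouched
    weights[action] = max(0.0, weights[action] + lr * reward)
    return weights
```

A sharp enough punishment zeroes the weight outright, which is the ‘behaviour goes dark’ effect described above.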

    Or at least I took these reactions as real enough to constitute the fundamental structure of feelings and to treat the reaction to the five volts as significant and the significance involved.

    I might also have been affected by previous fiction I’d watched before that where robotic characters referred to accessing their memory bank. It blew my mind as I suddenly imagined memory being in some sort of bank vault like tray – memory being something you could hold between thumb and forefinger.

    It’s a powerful image – imagine having a memory that you are recalling, but also, with the help of wires, holding in your fingers. Then imagine crushing it between your fingers and… you just can’t recall it anymore, in the same way you forget dreams. It’s tragic as well, of course.

  6. Okay Callan (and hopefully others), let’s talk about “Artificial Intelligence.” As above I don’t always quite understand your point, but let me tell you how I see things so that you can tell me if I am mistaken.

    As far as developing a “system for good feedback to reinforce behaviour,” I don’t see this as a problem for simple applications, such as playing Chess — here numbers can be used to judge success/failure. For example, in the traffic light situation that I’ve just mentioned, we can use a scale between -100, where no vehicles get through, to +100, where all vehicles get through without impediment. A high score here would not be enough however, since there is nothing particularly clever about giving a green light to the only car on the road. Success would instead be measured against the conflicting paths of the drivers reaching the junction (which is something that could be calculated after vehicles do indeed get through).

    Let’s say that this machine has cameras hooked up for miles around to recognize as many vehicles as possible, assess what they might do given their history and such, and thus operate the lights to keep the traffic flowing as well as it’s able given the perpetually evolving conditions. If designed well I’m sure that it could function far better than any human with the same data could, and therefore in this respect it would be far more intelligent than any human. This, however, is just the simple stuff that even us “idiots” are able to figure out. Now let’s talk about the difficult stuff which seems quite beyond human engineering.
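    The scoring idea in the previous paragraph can be made concrete. A hypothetical Python sketch (the way conflict discounts the score is an illustrative assumption, not something the comment specifies):

```python
def junction_score(through, total, conflict):
    """Map traffic-light performance onto a -100..+100 scale:
    -100 when no vehicles get through, +100 when all flow freely,
    discounted when there was little conflicting traffic to resolve.

    through:  vehicles that crossed without impediment
    total:    vehicles that arrived at the junction
    conflict: 0..1, fraction of arrivals on conflicting paths
    """
    if total == 0:
        return 0.0
    raw = 200.0 * through / total - 100.0  # -100 .. +100
    return raw * conflict                  # no credit for easy traffic
```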

    From my own definitions life began as entities which are “purely mechanical,” which is to say that “the mind” did not yet exist here. I suppose that modern examples of “mindless life” would be microorganisms, plants, and fungi. At some point a “non-conscious” mind must have evolved, however, where inputs are centrally processed for appropriate output — essentially like our computers. Furthermore I also suspect that the vast majority of the human mind is non-conscious. But apparently given the modern state of evolution, the non-conscious mind isn’t generally enough, since consciousness does seem to have proliferated. I would love to know if there are any examples of modern life which have minds that are perfectly non-conscious. Perhaps if a barnacle on a rock in the ocean has a mind, it also has no sensations, and thus is perfectly non-conscious. Lately I’m less sure that there are any insects without sensations, as some autonomy might indeed be helpful under such reasonably diverse environments, while that barnacle on the rock might be more like a traffic light. Expert opinion would indeed be appreciated in this regard, as well as comments in general.

  7. Hi Eric,

    One way to look at it is that our interpersonal management methods (theory of mind) have developed over the millennia – and our technical capacities, like making fire, digging for and smelting metals, etc. are a separate set of methods in the brain. The interpersonal management had a Darwinistic pressure on it, given how much we seemed to slaughter each other at the drop of a hat – diplomacy had reproductive dividends (as that Bakker guy likes to put it).

    That management, while obviously it’s probably more profitable to manage others, also had a pressure on it to manage the very organism the management process is in. To bite one’s own tongue on occasion.

    Okay, actually getting to the idea now: The idea being that the concept of consciousness might be more of a product of interpersonal management than anything else.

    I mean, it’s rather like a handshake we all engage in – who says anyone else isn’t conscious? So we all keep up the idea/keep up the handshakes on the matter. There might be things going on, but in terms of interpersonally managing each other (so as to stop wars and other conflicts), what consciousness is held to be is more like a lot of entangled diplomatic understandings between organisms. Certainly the pharaohs thought themselves gods – and if in their presence (with their many armed guards) would you be undiplomatic about their god claims? So surely when one can humour godhood claims, something a bit more humble, like consciousness (whatever that particular word brings to mind), is a lot easier to humour.

    Suppose you dropped a lot of babies on a desert island (and ignoring how they get reared to adulthood for now), would they speak of consciousness (if they spoke any sort of invented language at all)? If not, are they conscious? They wouldn’t know what you referred to – though if you were to get angry like a pharaoh, they might humour your claim simply to calm you?

    So there’s an idea to work around the C word – if it is a fake concept in part or in whole, going right at the word simply reinforces it. And if that is the case, there’s no harm in trying to work around it.

  8. Callan it looks to me as if we are now setting aside the “artificial intelligence” question, as I don’t believe your #8 addresses my #7. Instead it looks as if we are reverting to the last topic, where I was shocked that you could dispute your own existence. What I take from your discussion above (if it can indeed be stated simply) is that perhaps consciousness is not so much “real,” but rather a product of “diplomacy.” Is that it? Regardless, this is how I personally see consciousness.

    I grant you that all of the existence which I perceive, may indeed be an illusion — except for one: I think, therefore I am. This is true by definition (my own definition of course, but an effective one nonetheless). If I consciously process information, then I “think,” and thus there is indeed consciousness here. If for whatever the reason there is no conscious processing however, there might still be “a body,” there might still be “a mind,” but there will by definition be no consciousness.

    Another way of getting to this position is printed at the top of this very site (though I don’t know if it’s a standard saying or if Peter deserves the credit). It reads, “If the conscious self is an illusion — who is it that’s being fooled?”

    I do try not to blame anyone for making things extra complex and twisted. After all, we reside in a field that has been explored for many thousands of years, and yet has not achieved even one accepted understanding of reality to date. Callan, those babies on that island, if they think, then they are by definition conscious — their ability to grasp the concept of consciousness has absolutely nothing to do with this. But will I ultimately be able to elevate this ancient study of ours into the modern realm of science? The whole thing seems quite preposterous right now. Nevertheless it’s quite clear that science is in great need of such a “non-failure.” While it may not turn out to be me, such a person should emerge in the end regardless.

  9. “The removal of any need to talk seems to make the test easier. Judges cannot use questions to test the bots’ memory (at least not in any detail), general knowledge, or ability to carry the thread of a conversation and follow unpredictable linkages of the kind human beings are so good at. ”

    .. in other words the use of language, the main determining characteristic of the species ! So if they don’t use language in what sense can they be said to be human-like ??

  10. “The 2012 winners by contrast, like the most successful Loebner chat-bots, relied on replaying recorded sequences of real human behaviour.”

    .. and the difference between that and a computer program re-arranging pre-recorded words into new forms would be what exactly ? A slightly more advanced form of tape recorder ?

    “The Global Workspace idea, originated by Bernard Baars, situates consciousness as a general purpose space where inputs from different modules can be brought together and handled effectively.”

    But consciousness isn’t a functional entity. It’s a natural characteristic of functioning brains. You can think of it as a functioning entity, of course, but what value is that ? What value is there in saying “a duck is an eating and reproducing processor with extra capacity to process liquid movement data” ? It doesn’t shed one ounce of insight into the nature of a duck. All you are doing is looking at what a duck does and choosing – from YOUR perspective, not the duck’s – to characterise a duck in terms of YOUR assessment of the ‘informational’ content of a duck’s activities.

    It’s the same with just about every piece of nonsense talked about consciousness as being linked to “information”. Consciousness has no implicit relationship to information whatsoever. It’s a phenomenon of characteristic and irreducible form. Like time and space it’s something we know about but can’t define. It’s what separates us from the angels and makes us hardwired for the universe we are in. We are animals, and consciousness comes with the package and we’re stuck with it, and stuck with knowing what it is but not being able to define it. It’s not complicated. We’re beasts, not computers.

    If I choose to view my conscious phenomena as “information” then so be it. But it sheds not one ounce of light on them. Crucially “information” – being only able to exist in the head of an information-understanding being – can have no causal consequences, because it has no actual phenomenal existence, being observer-relative. And causes are what we are after, specifically those material causes in biological brain machines that cause the phenomena that we all know as mental states. Rules that represent how brains might work won’t reproduce how brains actually do work. A simulation of a storm at the Meteorological centre won’t make you wet.

  11. Eric

    “Even the video game industry itself might not be all that impressed, since “non-human solutions” might be even more entertaining to see and potentially learn from. I suppose that the real reason for these efforts is to somehow make better progress in AI somewhere. ”

    As somebody who has spent far too long in computing – and with the video games business as well – take it from me, “AI” doesn’t exist. “AI” is computer programming. It really is all the same, there isn’t an ounce of difference between an “AI” program and any other.

  12. Yes John I certainly do trust you about this so called “artificial intelligence” business. I recently angered the heck out of one computer guy, I think somewhat because he would not heed my warnings that we need to throw out the “mountain of crap” which has accumulated in philosophy. But even though you seem quite well read, I also get the sense that you aren’t actually invested in this system, and therefore your situation might be my best opportunity to gain allies at this early time. Though general philosophers might technically agree with me about how woeful our field is, I also suspect that few will condone my yelling about it. This work should be fought hardest, however, by “real scientists,” or those PhDs which currently reside in “mental/behavioral” fields. How dare I compare them to “physicists before the rise of Newton”?

    John if you can think of a good test, an important philosophical quandary for me to make sense of, and thereby demonstrate that my theory is worthy of your time, you need only present one.

  13. john: ” Like time and space it’s something we know about but can’t define.”

    We can define consciousness, but the question is whether our definition leads us to a model of consciousness that enables understanding and successful prediction of relevant subjective phenomena.

  14. Arnold

    “We can define consciousness”

    Ok – I’ll let you define it – as long as you promise not to use semantic equivalents like ‘awareness’, ‘feelings’, ‘inner sense’, and so on. I haven’t seen a single definition of ‘consciousness’ yet that didn’t amount to anything other than tautology. The other definitions seem to revolve around functions and ‘coherent spaces’, usually with a few ‘feedback loops’ (I’ve been in the computer industry 25 years and I’ve never seen a computer program in that time WITHOUT a feedback loop – why do people think that they are so special ? They are an intrinsic aspect of rule-based programming. What do you call a computer program with a feedback loop ? You call it a computer program )

  15. John since you’ve now mentioned how pessimistic you are about the various models of human consciousness that you have considered to date, I do wonder if you’ll find my own about the same. Regardless however, this does seem like the exact sort of challenge that I was hoping that you’d give me — namely that I might present a model of human consciousness which you find sensible, so that you might then explore my consciousness chapter yourself.

    First off, I see the vast majority of the human mind as “non-conscious,” like those computers that you’re so familiar with. As for the “conscious mind,” however, it functions similarly in the sense that it has “input,” “processing,” and “output” elements, though with one key difference — while existence has no personal relevance to the computer, the conscious mind functions under a “punishment/reward” dynamic EXCLUSIVELY.

    I call the first variety of conscious input “self,” and this is the punishment/reward dynamic known as “sensations” (or “qualia” if you’re so inclined). The second is “senses,” which may indeed invoke sensations though they are defined to be quite separate. The last of them is “memory,” or “past consciousness that remains.” Moving now to the conscious processor, this is termed “thought,” and comes in two varieties — “the interpretation of inputs,” such as recognizing something or feeling pain, and “the construction of scenarios,” where we figure things out in the quest to promote our happiness. As for the only (pure) “output” element of the conscious mind, this is termed “muscle operation.”

    So does this sound at all like your own consciousness? (I haven’t forgotten about you either Niv! What do you think?)

  16. (I’m now trying this again, in case three hyperlinks bring a “spam” tag.)

    Hello Arnold,

    Given that you’ve been quite involved with Peter’s site for some years, and that your theory does compete with my own, I did think that we would have had some discussions by now — and especially since I’ve put in about four months of radical commentary here. (Is it not strange how hesitant many have been to engage me?) But if my “waiting for others to inquire” strategy does not always bear fruit, then perhaps a bit of “offense” becomes appropriate.

    My position is this: “Mental/behavioral” sciences, such as Psychology, remain “primitive” today, and specifically given the failure of philosophy. I am not, however, suggesting that the field of Neurology is primitive, as associated with your own “Retinoid” model (http://www.consciousentities.com/?p=1016). While the brain is obviously a very complex organ, I do suspect that great strides are indeed being made in this field. Instead I wonder things such as how a Psychiatrist might expect to comprehend a patient’s mental problems effectively, without a functional model of the conscious mind? Is this not similar to the work of an engineer before the time of Newton?

    I do understand how tempting it must be for “real scientists” like yourself to try to solve philosophy’s problems. But I also think it’s a fallacy to assume that physiological realities also demonstrate philosophical realities to us. Once the field of philosophy is indeed able to develop effective answers for “self,” “consciousness,” and so on however, I’m sure that Neurologists will be asked to then tell us even more about these answers through associated physiological dynamics. Hopefully this day will indeed come soon.

  17. John: ” I haven’t seen a single definition of ‘consciousness’ yet that didn’t amount to anything other than tautology.”

    OK, here’s a definition that is not a tautology:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective*

    For more about this, see “Where Am I? Redux” and “A Foundation for the Scientific Study of Consciousness” on my Research Gate page, here:

    https://www.researchgate.net/profile/Arnold_Trehub

  18. Eric,

    I’m not sure what you said in #7 that I was to address, apart from your claim of “the mind”. You seemed to be outlining something, using “the mind” as a foundation for it, so I addressed the foundation.

    Instead it looks as if we are reverting to the last topic, where I was shocked that you could dispute your own existence. What I take from your discussion above (if it can indeed be stated simply) is that perhaps consciousness is not so much “real,” but rather a product of “diplomacy.” Is that it? Regardless, this is how I personally see consciousness.

    I’m not sure why the questioning of existence would seem shocking if ‘conciousness’ seems as insubstantial as diplomacy? I’d suspect you are taking diplomacy as being between individuals and so diplomacy affirms the idea of individuals existing (as much as elements on the periodic table exist)? Diplomacy might have been a clumsy word to use – a computer and a server run diplomatic protocols in order to exchange information (which is just electrical signals). But that’s not a traditional usage and I didn’t describe that so that’s some poor communication on my part. But that would be the better way to read my post.

    If I consciously process information, then I “think,”

    I’m trying to build a model of where you’re coming from, Eric: You’d say animals don’t think at all?

    “If the conscious self is an illusion — who is it that’s being fooled?”

    I find this a bit semantic – if we treat ‘who’ as simply being another name for ‘conscious self’, the question reads “If the conscious self is an illusion — which conscious self is being fooled?”

    I feel it’s not really asking a question. That’s my estimate, anyway – sample size of one and all that and stated in the spirit of critical thinking, not antagonism.

  19. Callan let me apologize for any antagonism, as my impatience does get me carried away sometimes. Apparently things are relatively simple and certain to me, right or wrong, while they are more complex and uncertain to you. Furthermore we do seem to find each other difficult to understand, but perhaps I can make at least a few clarifications.

    -I suppose that many question whether consciousness is “real,” though I personally have no use for such uncertainty.

    -I actually suspect that virtually all animals “think,” and this is because I suspect that virtually all of them are “conscious” (and perhaps “insects” as well).

    -As far as Peter’s query goes, I suppose that the term “illusion” is the operative element — an “illusion” cannot exist if something is not “fooled.” And can something be fooled that is not conscious? We might say that a computer has been fooled, but in this context I just don’t see it this way.

    Hopefully in the future we will be able to comprehend each other just a bit better.

  20. “Consciousness is a transparent brain representation of the world from a privileged egocentric perspective”

    We need a definition of what is a “transparent brain representation”.

    I’m also not convinced that “consciousness” need represent anything. I have a certain level of consciousness in deep sleep, but nothing is getting represented. Let’s say I synthesised a brain in a biological lab : I would expect it to function like a brain, to have consciousness, but I wouldn’t expect it to represent anything.

    A representation is also a functional definition, like a painting. That for me does not encompass the internalised and qualitative nature of consciousness. As I said before, people tend to use tautologies OR functional definitions – neither of which seem to hit the mark. It’s as futile and pointless as defining time, or space.

  21. “Okay, actually getting to the idea now: The idea being that the concept of consciousness might be more of a product of interpersonal management than anything else.”

    When you stand on a rusty nail Callan, is the feeling of pain you have a cultural concept ? Let’s say that you are born like Tarzan in the jungle and raised by apes, and have no language. If you stand on a rusty nail will it hurt you, or not because you don’t have the necessary cultural background ? Will you wake up in the morning ?

    It seems to me that there are two possibilities : that consciousness is beyond cognitive scope – in which case there can’t be any discussion about it – or it’s not, and so discussion about it proves its very existence.

    Let’s take a group of real zombies. They physically act like people but they have no mental life. The “conversation” that these zombies have could never include consciousness, as consciousness is out of their cognitive scope. It is literally beyond their comprehension, so they would never discuss it. Or let’s say that they have no comprehension of colour : they would never argue about colour for the same reason.

    The fact that there is a debate about consciousness indicates quite clearly that it is an unambiguous and clearly understood feature of human mental life. You have to know what type of thing consciousness is in order to deny it. I don’t see any alternative. I could say ‘there is no such thing as time, it’s just a social construct’ and quite rightly I would be poo-poohed. We all know I know what time is, but nobody could countermand me with an unassailable definition of ‘time’, because there isn’t an unassailable definition of ‘time’. But that doesn’t mean to say it doesn’t exist – I have to know what type of thing time is before I can claim to know that it doesn’t exist.

  22. john: “We need a definition of what a ‘transparent brain representation’ is.”

    A *transparent brain representation*, in biological terms, is the global pattern of autaptic-cell activation in retinoid space. The neuronal structure and dynamics of the retinoid-space model successfully predict novel auto-controlled hallucinations in normal subjects. For a description of the experimental paradigm, see pp. 324–325 in “Space, self, and the theater of consciousness” on my ResearchGate page.

    john: “I have a certain level of consciousness in deep sleep …”

    What is your evidence for this extraordinary claim? If you actually believe this then you must have your own notion of what constitutes consciousness.

    john: “A representation is also a functional definition, like a painting.”

    A painting is an opaque representation. If the painting were a transparent representation, its content would not be seen as a painting but as the real thing out there in the world. Activation of the brain’s retinoid space generates the phenomenal experience of being at the perspectival origin of a spatial surround. This is the minimal qualitative nature of consciousness that can be enriched with all kinds of perceptual and memorial content.

  23. Hi John,

    When you stand on a rusty nail, Callan, is the feeling of pain you have a cultural concept?

    The yelp of pain I’d give is a cultural construct, definitely.

    Certainly there are many cultures that seem to like to mutilate their bodies in various ways. Perhaps they do lack the social construct that says ‘that’s painful, don’t do that’. I’d heard of a doctor who found someone living in the hills with a tight-knit community, with a hole in his skull. He hadn’t gone to see a doctor or anything – culturally cut off, it was just not an issue to this person.

    Further, it depends on whether you’re going to make me step on more nails if I say it’s a social construct?

    Does that make the pain, and with it consciousness, this real object, or is it a problem-solving issue where the problem is: if I say it’s a social construct, I will fail the problem, as the problem will be inflicted upon me again? Or more obliquely, no one will render me assistance in avoiding the problem if I don’t support this ‘pain’ idea?*

    I’d call that a social construct – at gunpoint, but a social construct nonetheless.

    * Actually that reminds me of how I’ve seen my kids skin their knees when they thought I couldn’t see them – and they don’t bother trying to cry.

    It seems to me that there are two possibilities: that consciousness is beyond cognitive scope – in which case there can’t be any discussion about it – or it’s not, and so discussion about it proves its very existence.

    Depends on what ‘beyond cognitive scope’ is defined as – if it means ‘not actually the case as we originally thought (like geocentrism was how we originally thought things were, but not actually the case), so we can’t think about it in the sense that it’s false’, then I could agree with the first of those possibilities being listed. I’m not sure there’s necessarily just two possibilities, but at least I could agree with one of them being a distinct chance.

    Let’s take a group of real zombies. They physically act like people but they have no mental life. The “conversation” that these zombies have could never include consciousness, as consciousness is out of their cognitive scope. It is literally beyond their comprehension, so they would never discuss it. Or let’s say that they have no comprehension of colour: they would never argue about colour for the same reason.

    Well, in the theme of the main post, what if we raised human babies and zombie babies together, never telling them who was who or that there was any difference at all – and exposed them all to philosophy texts?

    Surely the zombie is just as capable of using the word ‘consciousness’ in text or verbal form? And engaging with the longer sentences of the philosophy texts is surely within the z’s capacity?

    Further, let’s say that to get their food and water (and tenure 😉 ) all of them, humans as well, must be able to keep up with various philosophy pop quizzes.

    So in a way it’s rather like the shooter game, but it’s philosophy that’s the thing rather than guns.

    Just for the moment let’s assume you are okay with ‘disposing’ of zombies, i.e. ceasing their life function.

    Would you be comfortable disposing of the zombies in the group based not on knowing who(/what) was a zombie but instead on the ones with the lowest grades at philosophical discussion of consciousness?

    I could say ‘there is no such thing as time, it’s just a social construct’ and quite rightly I would be poo-poohed. We all know I know what time is, but nobody could countermand me with an unassailable definition of ‘time’, because there isn’t an unassailable definition of ‘time’.

    I think science could provide a strongly evidence supported definition of time.

    But if were not talking about a scientific definition, then I think if anyone poo-pooed you then they aren’t being terribly open minded.

    We clearly don’t all know what time ‘is’ in one single way, otherwise someone would have the one single definition to countermand you with.

    There being multiple definitions of ‘time’ is not a support for there being anything but a state of confusion.

    If ten people have ten different definitions of time, I think they are not being terribly open minded if they can’t take an 11th definition (ie, it’s all a social construct).

  24. Callan, your last comment suggests that I was wrong about your uncertainty — apparently your flag has squarely been planted on the side that says “consciousness must be phony.” While I would indeed like to bring you over to my own side, I nevertheless suspect this to be unlikely for reasons that I will discuss in just a moment.

    You’ve recently suggested that consciousness might not be real, given observations of your children — one of them might skin a knee but not cry unless they thought that you were watching. While I do not doubt this to be the case, I also suspect that there’s a bit more to it. I personally remember being hurt on a number of occasions as a child, but then being brave since I knew that crying would only make me feel worse. I would then stoically go to a parent for aid, but upon seeing one of them I would inevitably burst into tears despite my efforts to remain calm. Why? Because my sensations were indeed real to me. And just because a child might do some extra “acting” from time to time in the attempt to gain some extra sympathy, this behavior should still have an explanation which is sensation-based. I am quite sure you do not believe that your children are just “robots,” and therefore you shouldn’t consider consciousness to indeed be “phony.”

    I can also say that “empathy” is something which you do indeed have for your children, or essentially that your understanding of their sensations imparts corresponding sensations for you to experience as well. If you try to consider this objectively, it really does seem strange. Why should the pain (or pleasure) of one conscious entity transfer to another conscious entity that perceives the other’s state? The answer, apparently, is because evolution found that the human survives better this way. I think it’s quite amazing that we don’t hear about people who are born without empathy — plenty seem to lose it through abuse and such, but in a genetic sense, how many of us simply have no empathy dynamic?

    I’ve mentioned that my argument may not persuade you, however, and coincidentally this is through the sister subject to my empathy chapter — the “and Theory of Mind Sensations” part of it. To put this simply, I enjoy being respected and I hate being disrespected, and therefore assume this to be true for people in general. Callan if your flag has indeed been planted on the “consciousness does not exist” side, then your desires for respect should naturally encourage you to oppose my arguments, simply through your current need for personal respect. This is one of many dynamics which I believe contribute to the failure of philosophy — such sensations inherently make us non-objective. Regardless, my chapter on “Empathy and Theory of Mind” may be found here:

    http://physicalethics.wordpress.com/home-page-with-a-cocktail-party/chapter-10-empathy-and-theory-of-mind-sensations/

    Hello Arnold,

    I was hoping to finally engage you a couple of days ago, but Akismet rightly stopped me given my prolific internet-spamming ways. Peter then foolhardily vouched for me, however, so my comment did go up at #17. If I have missed my opportunity for a discussion with you right now, however, then I do hope that we will still have one soon.

    Eric

  25. Eric,

    Callan if your flag has indeed been planted on the “consciousness does not exist” side, then your desires for respect should naturally encourage you to oppose my arguments, simply through your current need for personal respect. This is one of many dynamics which I believe contribute to the failure of philosophy — such sensations inherently make us non-objective.

    I guess it’s social contract time. I find this an asymmetrical assessment – a classic red herring that is there to draw attention away from the person making the evaluation, which indirectly reinforces their position since critical thinking has been directed away from the assessor and their own claims. What of your own motivations, Eric? You have none? Only I am driven by bestial passions, not yourself?

    Regardless, in some social contracts no one would see a problem with that line of approach (I can think of a number of forums where it would be treated as legitimate). To me, however, it’s making it about the other person, as well as doing so in a way that simply, in my evaluation, blocks intelligent discussion. If you can only make your point by raising the subject of the other person’s hopes for respect, you are probably on flimsy ground to begin with.

    If it’s considered legitimate at Conscious Entities, well, consider me owned.

  26. My apologies once again, Callan — my own passions are indeed no less than yours. This is a critical problem associated with all argumentation, so we must acknowledge and fight it as well as we are able. I do thank you for engaging me, since I would have a hard time describing my theory here without confident thinkers like yourself.

  27. “The yelp of pain I’d give is a cultural construct, definitely.”

    You honestly believe that, do you Callan? That a Tarzan raised by apes would not yelp like a human being when he stood on a rusty nail?

    “Surely the zombie is just as capable of using the word ‘consciousness’ in text or verbal form? And engaging with the longer sentences of the philosophy texts is surely within the z’s capacity?”

    Colour-blind people have words for brown and green – but they don’t know what brown and green are. They have never experienced them. Similarly dogs know pain, but don’t have a word for it. Do dogs learn to yelp in pain from other dogs or their owners? Is there a special communication medium between a puppy and its owner that teaches it to scream when you stand on its tail?

    “Certainly there are many cultures that seem to like to mutilate their bodies in various ways. Perhaps they do lack the social construct that says ‘that’s painful, don’t do that’. I’d heard of doctor who found someone living in the hills with a tight knit community, with a hole in his skull. Didn’t go see a doctor or anything – culturally cut off, it was just not an issue to this person.”

    Right – can you now list how many times you’ve fallen over, or banged your head on a wall? How many times your kids cried when they were ill? How many times you’ve felt hurt even when you didn’t want anybody to know, or didn’t want any help? Were you hurt or weren’t you? And don’t confuse anti-pain techniques developed in some cultures with this idea that pain is a cultural construct. Those proud traditions are about pain management – about managing the response to pain. They don’t deny it exists – they just ignore it. They are not in the fantasy world of the AI fraternity who think being sizzled with electricity in a torture chamber is a reflection of your social conditioning. Why don’t the US military use these great insights of yours? Why has pain conditioning never worked, ever? Why does the entire discipline of anaesthesia devote itself to making sure that the massive pain of invasive surgery never occurs? And will you promise me that on your next operation you will refuse anaesthesia?

    “Depends on what ‘beyond cognitive scope’ is defined as – if it means ‘not actually the case as we originally thought’ … I’m not sure there’s necessarily just two possibilities, but at least I could agree with one of them being a distinct chance.”

    By “beyond cognitive scope” I mean “beyond cognitive scope”. Mathematics is beyond the cognitive scope of a dog. Language is beyond the cognitive scope of a dog. Colour is beyond the cognitive scope of a cow. Cows (if they could) would never talk about colours and dogs would never talk about mathematics. You could teach them to say ‘mathematics’ … possibly … but they still wouldn’t know what it meant. You could teach a cow to say ‘colour’ but it wouldn’t know what it meant. They do not have the equipment to deal with colours or mathematics. They are cows and dogs.

    Zombies – never being capable of consciousness – would never talk about it. Humans – with consciousness-generating brains – talk about it. If you talk about it, you know what it is, even to deny it. In denying pain, you actually prove that you know what it is. There is no way round this conclusion. Otherwise you’d be neutral – you’d say “I don’t know what this pain is, cognitively speaking, so I can’t express an opinion on it”. Would a red-green colour-blind person argue with somebody about the difference between vermilion and scarlet? No, he’d say “this is beyond my cognitive scope”.

    “Just for the moment let’s assume you are okay with ‘disposing’ of zombies, i.e. ceasing their life function.
    Would you be comfortable disposing of the zombies in the group based not on knowing who(/what) was a zombie but instead on the ones with the lowest grades at philosophical discussion of consciousness?”

    I wouldn’t kill anything, conscious or not. But if I had to kill one, I’d kill the zombie. The zombie is the one made out of metal, by the way.

    “I think science could provide a strongly evidence supported definition of time.”

    It hasn’t yet, and nobody’s even tried as far as I’m aware. People have been reflecting on time for thousands of years, and thus far “definitions” have been nothing other than a string of tautologies. It’s likely to be that way for ever, as our relationship to time is most likely determined by our cognitive scope as animals (as Kant recognised two centuries ago). We should embrace our ambiguous relationship with time as one of our fundamental hallmarks as a species. We are not angels, we are not mathematics, but lumps of protein. That’s fine by me, but not by the AI fraternity, who have replaced the deity not with physics but with syntactical structures that have magic powers in them.

    “There being multiple definitions of ‘time’ is not a support for there being anything but a state of confusion.”

    There is no definition of time that is not a tautology – so there is no definition of time. If you think otherwise, then kindly furnish.

  28. “What is your evidence for this extraordinary claim?”

    “Consciousness” includes all the different states of consciousness – deep sleep, unconscious, half-asleep, hallucinating, awake, etc. It’s not that extraordinary; anaesthetists take the same approach.

    “The neuronal structure and dynamics of the retinoid-space model successfully predicts novel auto-controlled hallucinations”

    How? How do you get from a formal structure to a prediction of mental phenomena?

  29. The problem with these “chatterbot” Turing Tests is that they can force the developers into a local maximum.

    It’s easier to design the bot to fake answers (a database query of previous conversations) than to actually make it understand the conversation (having a model of the actual context of the conversation), simply because the perceived quality is higher during most of the test, given the limits of development resources.

    A first “really” conscious chatterbot will likely perform quite poorly compared to its database-driven competitors.
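    The database-query shortcut described here can be sketched in a few lines of Python. This is purely a toy illustration – the canned conversation log, the word-overlap matching rule and all the names are invented for the example, not taken from any actual contest bot:

```python
import re

# Invented log of "previous conversations": question -> canned human-sounding reply.
CANNED_LOG = {
    "what is your name": "Oh, names again? I get asked that a lot.",
    "do you like music": "Mostly old rock. Why do you ask?",
    "are you a robot": "Ha! Are YOU a robot?",
}

def tokens(text: str) -> set:
    """Lower-case word set with punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(utterance: str) -> str:
    """Fake an answer by returning the canned reply whose logged question
    shares the most words with the input; no model of context is involved."""
    best = max(CANNED_LOG, key=lambda q: len(tokens(q) & tokens(utterance)))
    if tokens(best) & tokens(utterance):
        return CANNED_LOG[best]
    return "Sorry, I drifted off. What were we talking about?"

print(reply("So, are you some kind of robot?"))
```

    For much of a short test this kind of lookup can sound more fluent than a genuine but immature model of the conversation – which is exactly the local maximum being described.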

  30. About the Shooter Bots:

    Game AI is a very good playground for implementing new techniques, approaches and optimizations from contemporary AI research.

    It’s an environment where the resulting agents have an actual use – creating an entertaining environment. It’s a good starting point for AI developers to get rewards for their ideas (and definitely cheaper than developing an industrial robot).

  31. john, If you believe that you are conscious when you are in a state of deep sleep, can you tell us what it is like for you to be conscious when you are deeply asleep? How do you know when you are conscious?

    Also, can you tell us when a normal person is *not* conscious?

  32. “john, If you believe that you are conscious when you are in a state of deep sleep, can you tell us what it is like for you …”

    My consciousness is in a state of unconsciousness when in deep sleep.

  33. john: “my consciousness is in a state of unconsciousness when in deep sleep.”

    I take this to mean that you agree that you are not conscious when you are deeply asleep.

  34. Arnold

    No.

    I would distinguish between “unconscious” and “not conscious”. A computer is “not conscious”, a cup of coffee is “not conscious”, but a conscious agent in a state of deep sleep has consciousness in a state of “unconsciousness”.

    For instance, I would not expect an entity that was “not conscious” to wake up after hearing a loud bang. But if it was a person in a dramatically reduced state of consciousness – such as deep sleep – I would expect them to wake up.

  35. john,

    OK, I accept your distinction. So you agree that you are unconscious when you are in a deep sleep. So the question is: what is it in your brain that makes you change from unconscious to conscious when you wake up?

  36. Eric, an apology really doesn’t matter unless ‘make it about the other person’ ceases to be a valid card to play.

    Until then I’ll just leave your claims uncontested and say that you win.

    John,
    You honestly believe that, do you Callan? That a Tarzan raised by apes would not yelp like a human being when he stood on a rusty nail?

    I believe the yell is a social construct in that it evolved to alert other members of the tribe so that fewer tribe members stand on the rusty nail, thus improving the survival rate of the tribe overall.

    Colour blind people have words for brown and green – but they don’t know what brown and green are. They have never experienced them.

    I think this would be a fair example if the emperic measure involved went both ways. We can emperically measure colour blindness and light spectrums. That’s why I accept your claim that colour blindness exists rather than dispute it – because it’s pretty much nailed down at an emperic level.

    Do you have an emperic measure of the thing you call conciousness, John?

    Why does the entire discipline of anaethesia devote itself to making sure that the massive pain of invasive surgery never occurs ?

    The answer to your questions is here – indeed, why is anaesthesia so dang effective at, at the very least, making someone who would convulse when you cut them instead simply lie still and dormant?

    Why can pain, if it’s this thing that exists as much as the desk in front of you, be made to disappear so easily? Like flicking off a light switch? Well, perhaps because it is like flicking off a light switch?

    You’re aware of the (kind of horrible, IMO) experiments with monkeys, where they attached electrodes to the monkeys’ pleasure centers of the brain?

    Okay, what if they had attached them to the pain centers of their brains – and you have a cable running out to a button.

    Say the button is depressed and so too is the monkey, as it feels pain.

    Now you go and cut the cable with some snippers. You’d at least agree the monkey’s pain would stop from this physical action, right? I’m not asking rhetorically – I’m trying to see if we’d both at least agree on something. Do we?

    Mathematics is beyond the cognitive scope of a dog.

    Advanced math is beyond a small child as well. But when that child grows up it may well grasp it. That could be called ‘beyond the cognitive scope’, but it’s treating all cognitive capacity as static when that is false.

    Zombies – never being capable of consciousness – would never talk about it. Humans – with consciousness-generating brains – talk about it. If you talk about it, you know what it is, even to deny it.

    I know what geocentrism is as well. If you’re talking about ‘knowing it’ in the sense of knowing geocentrism, sure. But knowing something doesn’t make it the case.

    And again it’s almost tied into how you argue – how would the zombies know not to talk about being conscious? If they don’t know, then could idle story-making cross over to roughly such a concept? And perhaps, if the concept is flattering, further storytelling ensue?

    That’s the very thing – why couldn’t a zombie exhibit superstitions, for example? Thinking that they have to do X or evil spirits or something might come?

    Are the zombies immaculate and incapable of superstitions in these examples? In such a case they are being made out to be better than us (take any of us as a baby back in time to the stone age and we’d adopt all sorts of superstitions – and perhaps we do today (hopefully not – knock on wood…))

    Otherwise you’d be neutral – you’d say “I don’t know what this pain is, cognitively speaking, so I can’t express an opinion on it”

    No, your ‘neutral’ is passively accepting rather than attempting to debunk something in case it’s not as it is taken to be.

    I don’t know what your mermaids are, cognitively speaking, so I can’t express an opinion on it. Continue your mermaid talk.

    Does that sound a legitimate way of thinking?

    Would a red-green colour-blind person argue with somebody about the difference between vermilion and scarlet?

    They can, because the other person might still be wrong (heck, the other person might be colour blind as well and just can’t accept that) regardless of immediate capacity to perceive.

    “Just for the moment let’s assume you are okay with ‘disposing’ of zombies, i.e. ceasing their life function.
    Would you be comfortable disposing of the zombies in the group based not on knowing who(/what) was a zombie but instead on the ones with the lowest grades at philosophical discussion of consciousness?”

    I wouldn’t kill anything, conscious or not. But if I had to kill one, I’d kill the zombie. The zombie is the one made out of metal, by the way.

    You’ve avoided the conditions of the question, simply responding as if you get to know which is human and which is zombie.

    “There being multiple definitions of ‘time’ is not a support for there being anything but a state of confusion.”

    There is no definition of time that is not a tautology – so there is no definition of time. If you think otherwise, then kindly furnish.

    You’re talking past me – I’ve raised there being a state of confusion, you’ve responded as if I wanted to define time.

  37. I believe the yell is a social construct in that it evolved to alert other members of the tribe so that fewer tribe members stand on the rusty nail, thus improving the survival rate of the tribe overall.

    Evolutionary arguments really don’t help you here. The idea that pain has a deterrent effect on individual action is plain enough. You, of course, live in a world where a more isolated man would die, as he never actually feels any pain (that’s what happens to people with pain disorders, by the way – they die earlier because they don’t realise they are being burnt to death, etc.).

    But denying pain requires that it be connected to communication, hence concluding that previous generations wandered round in bunches all the time, incapable of independent action.

    It’s a silly conclusion, unsupported by any historical evidence. Being homo sapiens today is no different to 50,000 years ago. We don’t wander round in groups all the time and there is no reason to assume that previous generations did.

    “I think this would be a fair example if the emperic measure involved went both ways.”

    You mean empirical? I don’t know what “emperic” is …

    Do you have an emperic measure of the thing you call conciousness, John?

    Of course. Like the medical profession and most of the human race. I see someone snoring and I know he’s asleep. I see somebody with a bullet in his head and I know he’s dead. You are falling into the predictable, time-honoured trap of confusing the subjective character of consciousness with the objective nature of the phenomenal existence of consciousness. I can talk about “your consciousness” and “my consciousness”. I can’t actually experience your consciousness, but it would be pretty stupid to say that means you aren’t conscious. Why should I be the only person in the world to be conscious?

    The processes that generate mental life are unknown, but the medical profession has a good idea. Lack of electrical activity in the brain is usually an empirical sign that the person is brain dead and has no consciousness. Likewise anaesthetists use a range of metrics to determine the conscious state of the patient.

    You, I suspect, are going to claim that this is not “direct” measurement. But there is no such thing as direct measurement of anything, ever. “Measurement” is a process based upon a theory that produces a number. Even measuring a length of rope is indirect. You place the ruler next to the ends of the rope. You take one number off the ruler at one end and subtract it from the number at the other. You then say “this is the length of the rope” – but you could be wrong. The process assumes a contiguity of space over the length of the rope and between the ruler and the rope. That assumption is an unjustified belief, and if it failed the “measurement” would be wrong.

    Once there is a more detailed theory of the objectively-measurable processes that cause subjectively-experienced conscious states, then there’ll be more meaningful measurement. But such measurement exists already and the entire human race uses it every day.

    You’re aware of the (kind of horrible, IMO) experiments with monkeys, where they attached electrodes to the monkeys’ pleasure centers of the brain?

    Caught you! You can’t use words like ‘horrible’. They are straight reflections of the feelings of subjective mental experiences. ‘Horrible’ has no meaning to psychopaths. You can’t use a word like ‘horrible’ if you don’t know what it means, and you do.

    Now you go and cut the cable with some snippers. You’d at least agree the monkey’s pain would stop from this physical action, right?

    Yes – eventually. The objective physical processes that cause the subjective mental processes would no longer be operative.

    Advanced math is beyond a small child as well. But when that child grows up it may well grasp it. That could be called ‘beyond the cognitive scope’, but it’s treating all cognitive capacity as static when that is false.

    So a dog could learn mathematics? When it got older, or with the right teacher?

    how would the zombies know not to talk about being conscious?

    For the same reason that cows wouldn’t talk about colours or dogs would never talk about mathematics. It’s not that complicated. Zombies can’t be conscious so they don’t know what the word even could mean. You on the other hand talk about it – knowing precisely what it is (proving that it exists) – and that’s because your brain is consciousness-generating.

    why couldn’t a zombie exhibit superstitions

    “superstitions” are beliefs, and beliefs are intrinsically mental and conscious. An external observer might say “that looks like a manifestation of superstitious belief” but it wouldn’t be an actual example of the same. A painting of a duck isn’t the same thing as a duck, after all.

    Thinking that they have to do X or evil spirits or something might come?

    They can’t think, remember? Just like us? It’s all external behaviour, isn’t it? You can’t pick and choose when mental life exists just to suit your argument.

    No, your ‘neutral’ is passively accepting rather than attempting to debunk something in case it’s not as it is taken to be.
    I don’t know what your mermaids are, cognitively speaking, so I can’t express an opinion on it. Continue your mermaid talk.
    Does that sound a legitimate way of thinking?

    I’d love to say I knew what these sentences meant, but I haven’t a clue.

    You’ve avoided the conditions of the question, simply responding as if you get to know which is human and which is zombie.

    The mental life of a being is not determined by its observer-relative characteristics, such as speech. That’s behaviourism, the Turing test, and that’s highly discredited, not to say idiotic. So to kill somebody you’d have to know if it was a human being or not. A ‘zombie’ could be a computer which may sound very convincing on a whole range of subjects. Computers always do, but they know nothing – they just deliver meaning from programmer to user, like a kind of value-added telephone. Does your telephone sound convincing on consciousness? Depends who’s speaking into it, I suppose. But yes, I’d “kill” a telephone over a person any day.

    You’re talking past me – I’ve raised there being a state of confusion, you’ve responded as if I wanted to define time.

    There is confusion about time? Really? Not that I’m aware of. Everybody knows what it is. It’s just not possible to define it. And I repeat – if you think there is a “state of confusion” above and beyond the problem of tautology (there isn’t, by the way) then tell me what the “confusion” is. It doesn’t exist.

  38. John,

    So your disproof is that as long as anyone ever got out of earshot of anyone else, that means the yell (which is specifically what we’re talking about) is proven to be for something other than social communication? There couldn’t just be a false-positive application of it – instead it means the yell is definitely for something else?

    confusing the subjective character of consciousness with the objective nature of the phenomenal existence of consciousness. I can’t actually experience your consciousness, but it would be pretty stupid to say that means you aren’t conscious. Why should I be the only person in the world to be conscious?

    John, you’re basically just claiming the phenomenal existence of consciousness and claiming that, whatever qualifiers your idea of ‘consciousness’ has, you have them.

    If you just want to claim that and don’t want to discuss it, that’s okay, I can leave it there. I came in thinking you’d toy with the idea of what you’d normally claim to be the case as possibly not being the case. But if you’re not interested in doing that and just want to say it is indeed the case, then we’re done for this post – you’ve said your claim and that’s it, because you’re not interested in engaging with whether it could be a false claim. And you don’t have to.

    You’ll have to tell me you’re interested in humouring the idea that it could be wrong – then it will be fair conversation when I start bringing up hypothetical ways in which it could be wrong.

    Right now what you’ve got is that I’m forcing a point you don’t want to engage in, thus I’m breaking social contract to a degree. So I will desist forcing the point.

    Caught you! You can’t use words like ‘horrible’. They are straight reflections of the feelings of subjective mental experiences. ‘Horrible’ has no meaning to psychopaths. You can’t use a word like ‘horrible’ if you don’t know what it means, and you do.

    On another topic, this is the problem with the human mind – by default we tend to treat our evaluation as the exact same one everyone else uses (and by default, anyone who uses another one we treat as wrong).

    You’re treating it as if your idea of what it ‘means’ is the EXACT same one I’d use.

    Think of the concept of ‘horrible’ as two circles, one for each of our own perceptions of it – they probably overlap a lot, but not perfectly.

    You’re treating it as if, if I know the meaning of horrible, then that validates the idea of ‘horrible’ as somehow intrinsic to the universe, because to you I’m referencing the one and only true definition/meaning of ‘horrible’. Thus validating it to you as the one and only true meaning, etc.

    You’re just failing to recognise I’m not talking about the exact same thing as you.

    And you can take the default and tell me that if I don’t match up perfectly with your concept – sorry, with the concept of ‘horrible’ – then I’m wrong.

    But I think it’s not as easy as that. Then again, I guess a ‘wrong’ person would think that, eh?

    They can’t think, remember?

    I’d presumed the examples involved them being able to do something like a sudoku, or drive a car, etc?

    So when they are completing a sudoku and you are completing a sudoku, what makes it that you are thinking but they are… whatever-ing?

    I’d love to say I knew what these sentences meant, but I haven’t a clue.

    I find this intellectually dishonest.

    So to kill somebody you’d have to know if it was a human being or not.

    This exhibits one of the very problems the brain has in dealing with itself. Making statements which rest upon absolute knowledge (it IS a ‘somebody’, which is to say ‘human’), then circular logic from there on (and of course that’d be a ‘human’, because we just said it’s somebody, and we can only kill somebody if they are human, which they are because we are killing somebody, etc., etc.).

    Making a statement where you just say you absolutely know it’s a ‘somebody’ is simply dodging the question even further. If you’re not interested in engaging, okay, just leave it there.

    Everybody knows what it is. No one can speak it
    (paraphrased)

    This isn’t confusion to you then, I now see.

  39. “John, you’re basically just claiming the phenomenal existence of consciousness and claiming that,
    whatever the qualifiers your idea of ‘consciousness’ has, you have them.”

    There you go again – talking ABOUT consciousness, knowing exactly what it is, and claiming it doesn’t exist. It’s not up to me to prove that consciousness exists, it’s for you to prove that it doesn’t.

    Prove to me that space exists and it’s not just a hallucination. When you do that I’ll “prove” that consciousness exists, despite the fact the conversation proves it does.

    “Right now what you’ve got is that I’m forcing a point you don’t want to engage in, thus I’m breaking social contract to a degree. So I will desist forcing the point.”

    What point? You’ve raised nothing I’m avoiding. You pretend that consciousness doesn’t exist, which is ludicrous, on the basis that all you can see is external behaviour. How unimaginatively simplistic an assessment of the universe’s capabilities is that? How truly, spectacularly dull?
    When Newton first drafted the Law of Gravity, although it worked he said it was “ludicrous” because it didn’t satisfy the standard of intelligibility for the time, which was that bodies only interact by contact. You are making the same mistake and assuming that thoughts, feelings and mental content aren’t physical because they’re not material and you can’t touch them.

    “You’re treating it as if your idea of what it ‘means’ is the EXACT same one I’d use.”

    No I’m not – but they are similar. But I am treating it as having meaning – which is mental content. Which is consciousness. There you go again, assuming that you think in order to display that you don’t think.

    But if you don’t think I know what “horrible” is, why communicate it if the results are so ambiguous? More inconsistency. Why use language at all if it’s so unintelligible?
    You know what horrible means. I know that you know what horrible means. You know I know that you know what horrible means. It’s a FEELING.

    “So when they are completing a sudoku and you are completing a sudoku, what makes it that you are thinking but they are…whatevering?”

    Hang on – that bit about language being ambiguous? It sure is with sentences like that…

    “I find this intellectually dishonest.”

    Actually it’s mild sarcasm. Your English has a tendency to vagueness, which for an advocate of the computational “precise” approach is ironic.
    Most computationalists are vague – Dennett is the worst. “Consciousness Unexplained” has to be one of the vaguest books in the language.

    “Making a statement where you just say you absolultely know it’s a ‘somebody’ is simply dodging the question even further. “

    I’m the only person in the world!! That’s where all computationalists end up – in the dead-end well of solipsism!!
    It was only a matter of time – well done, Callan.

    Bertrand Russell told a story of a solipsist who used to write to him. “The trouble is,” she said, “nobody else takes you seriously.”

    So Callan, your challenges are these –

    i) define time – and when you do, you can say that consciousness can be dispensed with on grounds of vagueness
    ii) tell me how it is possible to talk about consciousness without knowing what it is, or what type of thing it could be. Prove that a non-conscious agent could talk about consciousness – or that any creature could discuss anything beyond its cognitive scope.
    iii) tell me why thoughts are not physical (as opposed to material)
