Konsciousness

31 October 2004



Bitbucket OK, a modest proposal from me. One of the big problems in sorting out consciousness is simply defining what it is (as you can see from this selection of views). Ned Block is absolutely right in calling it a 'mongrel' concept - a mixed-up mess of different ideas. He makes a distinction between access or a-consciousness, which has to do with thinking about plans and actually doing stuff, and phenomenal or p-consciousness, which is to do with having experiences, 'raw feels' and all that wishy-washy qualia stuff. That is a valid and important distinction, but there are plenty of others.

For some people, for example, consciousness is basically awareness - you can't be conscious without being conscious of something. Others see nothing absurd about being conscious without having any sensory impressions at all - why can't you just sit there in the dark being conscious? Some would say that consciousness is always self-consciousness - it's only when you are aware of yourself, or perhaps of your own thoughts, that you are really conscious. But it doesn't seem to me that I spend every waking moment thinking about myself, and if every thought I had were somehow labelled 'my thought about x', I'd suspect I needed some kind of therapy for narcissism or paranoia.

Some people would say that only explicit thoughts are truly conscious - perhaps only ones expressed in language, but at any rate the sort of thought simple animals can't manage. But if you compare the state of mind of a fish which has just been landed with its state of mind shortly after being clubbed on the head, I reckon there's an awfully strong resemblance to consciousness and unconsciousness.


Blandula Of course. But that's the whole point, isn't it? These questions you're raising are the very ones we're concerned with, if we're concerned with consciousness at all.


Bitbucket No, that's my point. If you're a philosopher, maybe these issues are the point. But the rest of us don't really care all that much: what we mainly want to know is how the thing works. Neurologists want to know how the conscious functions of the brain operate, how they go wrong, and how (maybe) they can be put right. AI researchers want to know how to generate or simulate conscious processes, and what interesting results they can get out of it. If they have to get the definitions clear and make relevant distinctions in order to achieve their goals, then well and good - but it isn't the point of what they're doing.  


Blandula Alright, but they always do have to get the definitions clear, don't they? That has surely been the problem for the more gung-ho school of AI - the people who basically said 'never mind the definitions, pass the soldering iron'. These people thought the philosophical issues were just waffle which could be ignored, but gradually their projects ran into unforeseen problems.

Take the 'naive physics' project. The idea there was that you didn't need a deep theory of thought. What you needed to do was write up people's ordinary beliefs about the world in formal guise, throw in a bit of logic (maybe with a few innovations along the way), and you would gradually get the thing to work. You'd develop a system which drew conclusions about the physical world the same way human beings did, and along the way you would acquire whatever you needed by way of theory. But it didn't happen. The theory never emerged, and it became increasingly clear that the only way to get the programs to work was to 'cheat' by leaving out the difficult bits or building in some of the experimenter's own human common sense about a particular restricted domain.
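(By way of illustration: a toy version of the recipe Blandula describes - everyday beliefs written up as formal rules, with a bit of logic on top - might look like the following sketch. Every fact, rule and predicate here is invented for the example; no actual naive-physics system was remotely this simple.)

```python
# A miniature 'naive physics' knowledge base: ordinary beliefs about the
# world written as formal if-then rules, with naive forward-chaining
# inference on top. All facts, rules and predicate names are invented.

facts = {
    ("cup", "holds", "coffee"),
    ("cup", "is", "tipped-over"),
}

rules = [
    # If a container holding a liquid is tipped over, the liquid spills.
    ({("cup", "holds", "coffee"), ("cup", "is", "tipped-over")},
     ("coffee", "is", "spilled")),
    # Spilled liquid ends up on whatever the container stands on.
    ({("coffee", "is", "spilled"), ("cup", "stands-on", "table")},
     ("coffee", "is-on", "table")),
]

def forward_chain(facts, rules):
    """Apply every rule whose premises all hold, until nothing new follows."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The second rule never fires: nobody told the system that the cup stands
# on the table. Gaps like this are the missing 'common sense' that, on
# Blandula's account, had to be hand-built into each restricted domain.
```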


Bitbucket I don't think that's an adequate characterisation. I grant you the 'naive physics' thing never really flew in the end, and I'll even agree that the lack of a general theory was part of the problem. But not the lack of a philosophical theory.

On the contrary, it seems to me that the philosophical issues have constantly been a distraction from the real task, drawing researchers off their course and onto the rocks of sterile academic debate. Whenever anyone got a promising approach going, the philosophers would say: ah, yes, but that isn't really consciousness, because it doesn't include x variety or cover such-and-such a form of consciousness. The researchers would get drawn into trying to explain qualia or self-awareness or some other nebulous thing, and their theory would capsize.

So what are we going to do about it? Well, my suggestion is that we steal a technique from philosophical argument: the move where you simply give your opponent a word - or the whole field - rather than fight over it. So if you were arguing about free will, for example, and someone kept saying, yes, that's all very well, but it doesn't explain what I and most other people mean by free will, you just say: OK, I'll give you the term 'free will' - I'll just accept that if it's defined the way you want, it remains an insoluble mystery. But now let's talk about my concept of, say, 'ffree will', which I'll define the way I want. You can't object that it doesn't match the normal definition, but I'll be able to draw all the interesting conclusions I wanted to draw anyway - just for the price of an extra 'f'.


Blandula  So we're going to talk about fconsciousness, is that it? Or cconsciousness? 


Bitbucket No, I thought I'd go for 'konsciousness'. So what I propose is that the scientists say to the philosophers - OK, you have consciousness - do what you like with it. You can go on generating confusions and mysteries about it for as long as you like. Meanwhile, we're going to address our own concept of konsciousness. You can even have an interesting new philosophical debate: is konsciousness consciousness? But pardon us if we don't join you in discussing it.


Blandula  And konsciousness is...? Or are you going to leave that on one side while you get on with the programming?


Bitbucket No - I propose that konsciousness is the name of any process which allows a machine or an organism to produce novel but cogent responses to input stimuli, by using an internal model of the external environment.
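(Read as a bare specification, that definition can be sketched in a few lines: an agent that keeps an internal model of its environment and uses it to produce responses that are grounded in the model but not mere repeats. Everything below - the class, the toy model, the novelty test - is an invented illustration of the definition, not anyone's actual proposal.)

```python
# A minimal sketch of konsciousness as just defined: a process producing
# novel but cogent responses to input stimuli by consulting an internal
# model of the external environment. Entirely a toy.

class KonsciousAgent:
    def __init__(self):
        self.model = {}        # internal model: stimulus -> believed outcome
        self.history = set()   # responses already produced, to force novelty

    def perceive(self, stimulus, outcome):
        """Update the internal model from experience of the environment."""
        self.model[stimulus] = outcome

    def respond(self, stimulus):
        """Produce a response grounded in the model ('cogent', on this toy
        reading) that is not a verbatim repeat of an earlier one ('novel')."""
        predicted = self.model.get(stimulus, "something unknown")
        response = f"expecting {predicted}, acting accordingly"
        variant = 0
        while (stimulus, response) in self.history:
            variant += 1
            response = f"expecting {predicted}, trying variant {variant}"
        self.history.add((stimulus, response))
        return response

agent = KonsciousAgent()
agent.perceive("dark clouds", "rain")
print(agent.respond("dark clouds"))   # expecting rain, acting accordingly
print(agent.respond("dark clouds"))   # expecting rain, trying variant 1
```

Run as-is, the toy prints a model-grounded response and then a forced variant of it; what the sketch conspicuously does not say is what makes a response cogent rather than merely derived.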


Blandula You're helping yourself to rather a lot of stuff there, aren't you? How are you defining 'cogent', for a start?


Bitbucket There you go again. I don't have to define it. I can recognise it, and that's all I need for my purposes.


Blandula Hmm. Well, I would press you on that, but I see the word 'kogent' looming up, and I think the English language has suffered enough for one day...

