Petros Gelepithis has A Novel View of Consciousness in the International Journal of Machine Consciousness (alas, I can’t find a freely accessible version). Computers, as such, can’t be conscious, he thinks, but robots can; however, proper robot consciousness will necessarily be very unlike human consciousness in a way that implies some barriers to understanding.

Gelepithis draws on the theory of mind he developed in earlier papers, his theory of noèmona species. (I believe he uses the word noèmona mainly to avoid the varied and potentially confusing implications that attach to mind-related vocabulary in English.) It’s not really possible to do justice to the theory here, but it is briefly described in the following set of definitions, an edited version of the ones Gelepithis gives in the paper.

Definition 1. For a human H, a neural formation N is a structure of interacting sub-cellular components (synapses, glial structures, etc.) across nerve cells able to influence the survival or reproduction of H.

Definition 2. For a human, H, a neural formation is meaningful (symbol Nm), if and only if it is an N that influences the attention of that H.

Definition 3. The meaning of a novel stimulus in context (Sc), for the human H at time t, is whatever Nm is created by the interaction of Sc and H.

Definition 4. The meaning of a previously encountered Sc, for H, is the prevailing Nm among those already created by the interaction of Sc and H.

Definition 5. H is conscious of an external Sc if and only if there are Nm structures that correspond to Sc and these structures are activated by H’s attention at that time.

Definition 6. H is conscious of an internal Sc if and only if the Nm structures identified with the internal Sc are activated by H’s attention at that time.

Definition 7. H is reflectively conscious of an internal Sc if and only if the Nm structures identified with the internal Sc are activated by H’s attention and they have already been modified by H’s thinking processes activated by primary consciousness at least once.

For Gelepithis, consciousness is not an abstraction of the kind that can be handled satisfactorily by formal and computational systems. Instead it is rooted in biology, in a way that very broadly recalls Ruth Millikan’s views. It’s about attention and how it is directed, but meaning comes out of the experience and recollection of events related to evolutionary survival.

For him this implies a strong distinction between four different kinds of consciousness: animal consciousness, human consciousness, machine consciousness, and robot consciousness. For machines, running a formal system, the primitives and the meanings are simply inserted by the human designer; with robots it may be different. Through, as I take it, living a simple robot life, they may, if suitably endowed, gradually develop their own primitives and meanings and so attain their own form of consciousness. But there’s a snag…

Robots may be able to develop their own robot primitives and subsequently develop robot understanding. But no robot can ever understand human meanings; they can only interact successfully with humans on the basis of processing whatever human-based primitives and other notions were given…

Different robot experience gives rise to a different form of consciousness. Robots may also develop free will. Human beings act freely when their Acquired Belief and Knowledge (ABK) overrides environmental and inherited influences in determining their behaviour; robots can do the same if they acquire an Own Robot Cognitive Architecture, the relevant counterpart. However, again…

A future possible conscious robotic species will not be able to communicate, except on exclusively formal bases, with the then Homo species.

‘then Homo’ because Gelepithis thinks it’s possible that human predecessors to Homo sapiens would also have had distinct forms of consciousness (and presumably would have suffered similar communication issues).

Now we all have slightly different experiences and heritage, so Gelepithis’ views might imply that each of our consciousnesses is different. I suppose he believes that intra-species commonality is sufficient to make those differences relatively unimportant, but there should still be some small variation, which is an intriguing thought.

As an empirical matter, we actually manage to communicate rather well with some other species. Dogs don’t have our special language abilities and they don’t share our lineage or experiences to any great degree; yet very good practical understandings are often in place. Perhaps it would be worse with robots, who would not be products of evolution, would not eat or reproduce, and so on. Yet it seems strange to think that as a result their actual consciousness would be radically different?

Gelepithis’ system is based on attention, and robots would surely have a version of that; robot bodies would no doubt be very different from human ones, but surely the basics of proprioception, locomotion, manipulation and motivation would have to have some commonality?

I’m inclined to think we need to draw a further distinction here between the form and content of consciousness. It’s likely that robot consciousness would function differently from ours in certain ways: it might run faster, it might have access to superior memory, it might, who knows, be multi-threaded. Those would all be significant differences which might well impede communication. The robot’s basic drives might be very different from ours: uninterested in food, sex, and possibly even in survival, it might speak lyrically of the joys of electricity which must remain ever hidden from human beings. However, the basic contents of its mind would surely be of the same kind as the contents of our consciousness (hallo, yes, no, gimme, come here, go away) and expressible in the same languages?


  1. Scott Bakker says:

    Define ‘contents.’ I can’t crack the paywall, but from your description, Peter, it seems like he’s chasing out the implications of intentional realism absent functionalism, which I think would be a very interesting exercise, even if it ultimately runs aground all the same simple questions – in addition to those spooky functionalism purports to solve.

    The real solution is the ugly one (as it typically is in the sciences). There is no fact of the matter regarding understanding beyond interaction. If your robot can follow instructions, hold down a decent conversation, and so on, then it ‘understands.’ Functionalists want this interactive harmony to be a matter of functional synonymy, different mechanisms doing the same things in different heads. The robot can interact with us because it possesses a functionally similar ‘mentality.’ But this need not be the case for the synchronization of behaviours (as the movie Her illustrates so wonderfully).

    Gelepithis, I’m guessing, recognizes this, yet can’t shake the intuition that ‘meaning’ is nevertheless something substantial, efficacious in its own right, rather than a heuristic humans use to derive local explanations despite utter ignorance of the larger systems actually responsible. (My last post is on this very topic.) Absent functional synonymy, he’s forced to assert that there is no synonymy. Since synonymy is what understanding means, there will therefore be an insuperable communicative gulf between robots and humans.

    It’s paywalls all the way down! 😉

  2. Mark O'Brien says:

    I cannot get my head around the view that a robot could be conscious but not a computer.

    Take a robot’s electronic brain out of its head and have it communicate with the body by remote control.

    Is it no longer conscious?

    Now instead of having a real body, simulate its body and its environment. The inputs and outputs passing to the brain are indistinguishable from when it had a physical body. Is it no longer conscious?

    You can continue in this vein, reducing the complexity and fidelity of the simulation apparently without limit. Perhaps in the end the brain’s only means of interaction is via something like a console interface, giving a more traditional view of what a sentient computer is supposed to be (as in WarGames, perhaps).

    Perhaps at this extreme one might suspect it must be unconscious. I don’t really share that intuition, myself, but since whatever environment and stimulus a computer mind might need to be conscious can be provided by the very same machine, I don’t think this is a terribly important point.

    In any case, computers routinely have cameras, microphones, speakers and so on. Computers often exert some kind of physical control over the world, whether it be to eject a CD tray or to start a cooling fan. As such, I don’t see how one can draw any fundamental distinction between a robot and a computer.

  3. Peter says:

    Yes: the paper seems to assume that if it’s embodied it’s no longer a computer.

  4. Mark O'Brien says:

    > Yes: the paper seems to assume that if it’s embodied it’s no longer a computer.

    Hi Peter,

    It’s not clear if you see what I mean about this being a questionable assumption: that it’s very hard to draw a clean distinction between being embodied and not being embodied.

  5. Peter says:

    Sorry, I was really just responding to your last sentence, but your point about embodiment is a good one.

  6. Sci says:

    I’m assuming the paper is stating a Turing Machine isn’t conscious but a more faithful physical reproduction of the human mind might be?

  7. Scott Bakker says:

    Does he mean that consciousness cannot be engineered from above, that it can only arise out of actual environmental interactions? What are his criteria for a robot as opposed to a computer?

  8. Peter says:

    Here’s part of what he says…

    I would argue that one of the most serious inadequacies in artificial consciousness studies is the lack of (clear) distinctions among human, non-human animal, machine, and robot consciousness [e.g., Moreno et al., 2008; Starzyk and Prasad, 2011]. Naturally, this lack of distinction is generated by a deeper lack of distinctions concerning the notions of human and machine meaning and understanding. For definitions and discussion on these latter and other fundamental notions see Gelepithis [1991, 1997, 2009]. Support for such lack of distinction is due to the underlying functionalist viewpoint that eschews fundamental material considerations on the basis of unsupported abstraction, as it was briefly outlined in Sec. 2. As an illustration, I will concentrate on Starzyk and Prasad’s paper. They propose a computational model of consciousness in which “attention”, “thinking”, and “motivation” play a central role. This is commendable. The objection lies with the fact that their proposal for what constitutes consciousness is merely standard classical AI work [see, for instance, Newell, 1990]. Specifically, they write that the central executive provides “cognitive interpretation” of the winner of internal competition [Starzyk and Prasad, 2011, p. 266]. Nevertheless, no AI system provides cognitive interpretation of anything. All AI systems do formal processing and nothing more. The interpretation is embedded in the AI system by the humans and furthermore employs human-based primitives. Robots may be able to develop their own robot primitives and subsequently develop robot understanding [Gelepithis, 1991]. But no robot can ever understand human meanings; they can only interact successfully with humans on the basis of processing whatever human-based primitives and other notions were given [Gelepithis, 1984, 1991].

    I believe I can legitimately email copies of the paper for purposes of discussion – if anyone wants one, let me know.

  9. Tom Clark says:

    “All AI systems do formal processing and nothing more.”

    From a 3rd person perspective, the neuronal goings on of the brain constitute formal processing, in that there’s no intentional or phenomenal semantic content visible, just the transmission of various electro-chemical units and spike trains among neurons. Yet the content exists for the conscious, reasoning, and understanding subject. In which case there has to be a way in which formal processes can embody semantic content, which means there’s no in-principle reason why AIs can’t embody such content.

  10. Christophe Menant says:

    Using the word “consciousness” to address “attention” is a bit problematic. Robots can have attention as animals and humans do, but this is not true for “consciousness”, which covers phenomenal consciousness, self-consciousness, primitive self-consciousness, pre-reflexive self-consciousness, and so on.
    I agree with Mark O’Brien, who does not see how a robot could be conscious but not a computer. Both are artificial agents based on physics and electronic data processing, and robot embodiment with inert (non-organic) elements brings no change. But if the robot embodiment is made with organic elements, things are different (if that is the case, please let us know, Peter).
    I feel Gelepithis is right when writing that “Robots may be able to develop their own robot primitives and subsequently develop robot understanding. But no robot can ever understand human meanings”. This brings us close to the relations between understanding and meaning generation, which can be used as a tool to address robot understanding via the Turing Test (see the 2013 APA newsletter).
    But I find it difficult to consider that robot attention can bring robots to acquire free will. When addressing robots’ free will we should first look at the question of autonomy, which is an animal performance that cannot today be reproduced in robots, as we do not really understand its nature (probably related to the still unknown nature of life). Then perhaps, after having understood the nature of autonomy, we could look for a possibility of free will in robots.
    I don’t want to be pessimistic, but human consciousness exists only within living entities. I feel that strong AI will really take off once we have a clear enough understanding of the nature of life. And I’m confident we will reach that point.
    (Peter, would you email me a copy of the paper?)

  11. ihtio says:

    Ummm… so what exactly is new here, if we already have the theory (or a theoretical framework, if you will) of embodiment?

  12. john davey says:


    ” In which case there has to be a way in which formal processes can embody semantic content, which means there’s no in-principle reason why AIs can’t embody such content.”

    A classic case of confusing matter with mathematics.

  13. Callan S. says:

    That guy is never going to return that robot’s high five, is he?

  14. Peter says:

    Yeah, human beings, what can you do with them?
