The new Blade Runner film has generated fresh interest in the original film; over on IAI Helen Beebee considers how it nicely illustrates the concept of ‘q-memories’.

This relates to the long-established philosophical issue of personal identity; what makes me me, and what makes me the same person as the one who posted last week, or the same person as that child in Bedford years ago? One answer which has been a leading contender at least since Locke is memory; my memories together constitute my identity.

Memories are certainly used as a practical way of establishing identity, whether it be in probing the claims of a supposed long-lost relative or just testing your recall of the hundreds of passwords modern life requires. It is somewhat plausible that if all your memories were erased you would become a new person with a fresh start; there have been cases of people who lost decades of memory and underwent personality change, identifying with their own children more readily than with their now wrinkly-seeming spouses.

There are various problems with memory as a criterion of identity, though. One is that it seems circular. We can’t use your memories to validate your identity because in accepting them as your memories we are already implicitly taking you to be the earlier person they come from. If they didn’t come from that person they aren’t genuinely memories at all. To get round this objection Shoemaker and Parfit adopted the concept of quasi- or q-memories. Q-memories are like memories but need not relate to any experience you ever had. That, of course, is too loose, allowing delusions to be used as criteria of identity, so it is further specified that q-memories must relate to an experience someone had, and must have been acquired by you in an appropriate way. The appropriate ways are ones that causally relate to the original experience in a suitable fashion, so that it’s no good having q-memories that just happen to match some of King Charles’s. You don’t have to be King Charles, but the q-memories must somehow have got out of his head and into yours through a proper causal sequence.

This is where Blade Runner comes in, because the replicant Rachael appears to be a pretty pure case of q-memory identity. All of her memories, except the most recent ones, are someone else’s; and we presume they were duly copied and implanted in a way that provides the sort of causal connection we need.

This opens up a lot of questions, some of which are flagged up by Beebee. But what about q-memories? Do they work? We might suspect that the part about an appropriate causal connection is a weak spot. What’s appropriate? Don’t Shoemaker and Parfit have to steer a tricky course here between the Scylla of weird results if their rules are too loose, and the Charybdis of bringing back the circularity if they are too tight? Perhaps, but I think we have to remember that they don’t really want to do anything very radical with q-memories; you could argue it’s no more than a terminological specification, giving them license to talk of memories without some of the normal implications.

In a different way the case of Rachael actually exposes a weak part of many arguments about memory and identity; the easy assumption that memories are distinct items that can be copied from one mind to another. Philosophers, used to being able to specify whatever mad conditions they want for their thought-experiments, have been helping themselves to this assumption for a long time, and the advent of the computational metaphor for the mind has done nothing to discourage them. It is, however, almost certainly a false assumption.

At the back of our minds when we think like this is a model of memory as a list of well-formed propositions in some regular encoding. In fact, though, much of what we remember is implicit; you recall that zebras don’t wear waistcoats though it’s completely implausible that that fact was recorded anywhere in your brain explicitly. There need be nothing magic about this. Suppose we remember a picture; how many facts does the picture contain? We can instantly come up with an endless list of facts about the relations of items in the picture, but none were encoded as propositions. Does the Mona Lisa have her right hand over her left, or vice versa? You may never have thought about it, but be easily able to recall which way it is. In a computer the picture might be encoded as a bitmap; in our brain we don’t really know, but plausibly it might be encoded as a capacity to replay certain neural firing sequences, namely those that were caused by the original experience. If we replay the experience neurally, we can sort of have the experience again and draw new facts from it the way we could from summoning up a picture; indeed that might be exactly what we are doing.
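The point about implicit facts can be made concrete with a toy sketch (my own illustration, not a model of the brain, with invented names throughout): store a “picture” as nothing but raw coordinates, and relational facts that were never recorded anywhere as propositions can still be derived on demand, much as we read them off a recalled image.

```python
# A toy "image" stored as raw positions only: no relational proposition
# such as "the left hand is left of the right hand" is recorded anywhere.
scene = {"left_hand": (120, 300), "right_hand": (125, 290), "head": (122, 100)}

def left_of(a, b):
    # Derived fact: computed from the encoding, never stored explicitly.
    return scene[a][0] < scene[b][0]

def above(a, b):
    # Smaller y means higher up in the image.
    return scene[a][1] < scene[b][1]

# An endless list of such relational facts can be read off on demand.
print(left_of("left_hand", "right_hand"))  # True
print(above("head", "left_hand"))          # True
```

The number of facts derivable this way is unbounded, yet the encoding contains only three coordinate pairs; the facts are implicit in the representation, not listed in it.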

But my neurons are not wired up like yours, and it is vanishingly unlikely that we could identify direct equivalents of specific neurons between brains, let alone whole firing sequences. My memories are recorded in a way that is specific to my brain, and they cannot be read directly across into yours.

Of course, replicants may be quite different. It’s likely enough that their brains, however they work, are standardised and perhaps use a regular encoding which engineers can easily read off. But if they work differently from human brains, then it seems to follow that they can’t have the same memories; to have the same memories they would have to be an unbelievably perfect copy of the ‘donor’ brain.

That actually means that memories are in a way a brilliant criterion of personal identity, but only in a fairly useless sense.

However, let me briefly put a completely different argument. We cannot upload memories, but we know that we can generate false ones by talking to subjects or presenting fake evidence. What does that tell us about memories? I submit it suggests that memories are in essence beliefs, beliefs about what happened in the past. Now we might object that there is typically some accompanying phenomenology. We don’t just remember that we went to the mall, we remember a bit of what it looked like, and other experiential details. But I claim that our minds readily furnish that accompanying phenomenology through confabulation, given the belief, and in fact that a great deal of the phenomenological dressing of all memories, even true ones, is actually confected.

But I would further argue that the malleability of beliefs means that they are completely unsuitable as criteria of identity; it follows that memories are similarly unsuitable, so we have been on the wrong track throughout. (Regular readers may know that in fact I subscribe to a view regarded by most as intolerably crude; that human beings are physical objects like any other and have essentially the same criteria of identity.)



  1. Hunt says:

    I used to believe in the strong tie between memory and identity, until I noticed that I began to confuse real memory with dream memory. Did something really happen, or did I just dream it? That began to erode my trust in the bulwark of memory identity.

    Now I’m more aligned with a vague notion of “tendency”, constitution, temperament or “spirit” as the basis for identity.

    Even in Blade Runner, replicants are distinct from humans in their lack of empathy, a human trait measured by the “Voight-Kampff” machine. That would then define a critical component of identity more solid in essence than any memory. Of course, there are many more human traits than this, all of them, I think, more important than memory.

    Also, remember (heh) in the movie, memories are “gifted” to replicants in the practical hope of making them more psychologically stable, as they have a four-year lifespan.

  2. SelfAwarePatterns says:

    On criteria of identity, I think the overall issue is that there isn’t really a fact of the matter here, only more or less productive ways to look at it. I think one productive way to look at it is that we are our genetic innate dispositions *and* our memories. Remove too much of one or the other, and it seems like identity is either lost or profoundly altered.

    I do agree that copying individual memories is probably impossible. Even in a replicant brain, which conceivably could start as more of a blank slate and perhaps be more modifiable than a natural brain, copying a memory without copying the entire mental framework, in essence the whole mind, seems incoherent.

    I agree on memories being beliefs. Episodic memories aren’t recordings, but reconstructive simulations of past events based on semantic beliefs about those events, beliefs that are hopelessly tangled up with all of our other beliefs. When I look at old photos, I’m often struck by how different the event looks in the photo from what I remember of it. My current reconstruction of the event is always too polluted with experiences from across the intervening years. Getting the event reconstruction without all the interconnections doesn’t seem like a practical goal.

  3. Hirsch says:

    What is “personal identity”?

    If this is something that can be considered from the third-person point of view, then there is nothing preventing us from associating it with memories or whatever you like.

    If this is a quale, then there is nothing preventing two entities from having similar identities.

    The proper question is: can a person check their identity? That is, can a copy of a person decide, internally, as a quale, that they are a copy? In my opinion, the answer is negative. Here is a “proof”:
    1. If a copy has no subjective experience, there is no question.
    2. If a copy has their subjective experience, it is impossible for them to experience being a copy simply because such an experience is not in our “book of experiences”. I think that every new experience an adult has can be formulated in terms of the previous experiences of this adult.

    I know this is a weak proof, but if we assume this is true, then the “personal identity” loses its philosophical sense.

  4. calvin says:

    In my own work, I have not found a good causal reason to separate the contents of experience into different categories. For instance, beliefs, memories, experiences of making memories, personality features, ideas, concepts, skills, mathematical knowledge etc. are all non-physical, or representational, phenomena. The brain is a physical phenomenon that somehow instantiates these non-physical phenomena. The molecules making up the brain are the physical phenomena.

    Peter makes a good point that his brain is unique: even if his neural structures could be copied, that would not replicate a memory in another person. In part, we know this is true because neural structures are constantly changing. Cells move around and change their dendrites, even while memories persist.

    Conversely, even though every person’s brain is unique, even in very tiny regions, two people can do the same calculations. To do mathematics requires calling on memories and beliefs and knowledge. So on the one hand the brain structures are unique and vary, but on the other hand the content can be “shared” and replicated. In my opinion this dichotomy is a core problem of consciousness.

    I’ve been working on this identity issue from a slightly different direction.

    How do you get a network of computational cells to wire up correctly to learn something new? Either to make a memory, or to learn to run a motor correctly. When we think of a network of neurons, we see that the phenomenon of learning is regulated by molecules (mostly neurotransmitters) which cause axon or dendrite growth and also change the expression of ion gates and neurotransmitter receptor sites. Learning (the non-physical content experience) produces changes to the physical neurons themselves by changing protein expression, which alters the physical structure of the cell, features on the cell surface, and the internal mechanisms that regulate action potential and neurotransmitter expression (e.g. more dopamine expressed or less GABA, etc.).

    But the key issue is, given the underlying _physical_ mechanism, how are the neuron changes localized to the parts of the network that need to adapt? For instance, learning to ride a bike does not improve language acquisition. Improving relationship skills does not also make us math prodigies. We do not learn to ski by memorizing poems. But why not?

    From the point of view of the molecules and neurons: skiing, poems, square-roots do not exist, because those are non-physical contents. The patterns underlying those contents could arise anywhere in the physical network of neurons. The neurons everywhere are doing basically the same things. So how can the neural changes occur in the regions necessary to instantiate the right representational content?

    For instance, when we learn new things, such as language, it is the “language” centers of the brain that are most active, and it is assumed it’s the language parts of the brain that are changing their “wiring”. But how do cells and molecules themselves regulate which neurons are the ones that change? Which cells need to extend their axons or add more dendrites and synapses, or change their action potential “sensitivity”? How does the network change itself, and how does it know which parts to change? And for neurons, the change is generated by the protein expression inside each cell. So, how do the right cells wire up to learn a new word?

    If this isn’t obvious we can think about movement itself. When you learn to walk, or a complex dance step, the motor neurons must change how they signal muscles, which means they must change the actual wiring to allow a new kind of sequence of muscle signals to occur, so that the right sequence of movements can occur. Before you learn a new dance, you can already do every single muscle movement necessary to perform it, but you cannot do them in the right order or synchronize the right constellation of movements; which means, literally, the wiring does not exist to produce that complex constellation of actions.

    The motor areas of the brain have to wire themselves up, and have to detect when that wiring breaks down or is not working to achieve some outcome – all without any external instructions on how to do so.

    In a closed physical network, like a nervous system, there is no way to escape that this is a circular problem. The philosophical objection to the circularity is valid, but it is only valid on the content side of this question. The molecular (or, in my case, computational) features of cells do not care about circularity, because circularity is a content feature only; circularity is not a physical phenomenon.

    The circularity critique only makes sense when we try to categorize different contents. Seen from the molecular point of view, it’s not the category of content that matters, it’s the function which generates any kind of content primitive. Content generation itself is the issue. Instead of looking for the one true category of contents (thoughts, memories, q-memories etc.) we should look for basic functions that produce content itself. We should try and look for a kind of content primitive; the diversity of kinds of content is a side-effect of aggregating those primitives (a reductionist approach, not a hierarchical-theory approach).

    And in our experience of learning this is exactly what happens. We get a feeling something isn’t right, or a feeling we should try moving in a different way, or making a slightly different sound. The content primitive of learning is a subtle feeling. It is asemic content.

    And these asemic feelings about learning, whether about learning dance moves, or new words, or mathematics are all the same. We have a sense about our learning that is coherent across the kinds of learning we do. If you have a sense that you did something not quite right, the “not quite rightness” is the same feeling regardless of the content it is about. Just as the feeling that something is “wrong” or “right” or novel is the same feeling regardless of the kind of content. eg. that dance move wasn’t quite right, that calculation seems a bit off, I didn’t remember the poem right.

    These “feelings” are a kind of content that look like content primitives. And we depend on these asemic primitives to learn. These representational phenomena are critical to our learning. Without them, we do not change or learn. Someone who does not feel that something is out of place, or that they are doing the wrong thing, does not adapt what they do and does not learn to do things differently.

    For a group of cells to “learn”, the cells must signal to each other and capture representations of their “states” and signals, in order to produce signals to itself – as a network. This means the cells themselves must localize activity and produce structures to capture representations, and structures to produce actions it can take for itself – as a network. The cells must move from being a group of cells to being a network of cells: it must “see” itself as a network. This event is non-physical; it is a representational phenomenon, arising when the molecules of cells begin working – as a network.

    A network must localize its physical activity to the points in the network where the content instantiation can occur. This requires the network to have other cells which “watch” the network itself, so that at the very lowest levels, cells are generating action potentials and other molecular signals about the action potentials generated by other cells. Taken together, these combinations form content primitives. Other cells in the network respond to these molecular phenomena in a physical way, but there is a second-tier response that arises from the patterns the cells of the network generate.

    A network can produce non-physical content, e.g. a cellular automaton produces gliders and timers. Other cells can respond to those epiphenomena, and in this way the cells of the network can reflect on what the network itself does. So if one part of a network should be producing gliders, another cell can be quieted by receiving that glider. And if it stops receiving that glider, it begins reducing its dopamine expression in its local neighborhood, thus having an effect on local dopamine-sensitive neurons. Those neurons then alter their own action potential sensitivity to alter their “firing” rates. This changed firing has the effect of perturbing the network to alter its structure because of the firing imbalance (the action potential rate itself affects protein expression). It is a kind of back-propagation, which affects the “wiring” of the network, until the area of the network that needs to be wired up to send the glider effectively is again sending a glider.

    The key here is that network adaptation requires the network itself to be sensitive to when sub-networks get out of sync with “glider” or pattern production. The sub-network itself must also be adaptable. A brittle network, like those we see with actual cellular-automata calculators or with existing computational neural nets, cannot handle alterations to the network itself, and hence must be closed off to alteration.

    Cellular automata and computational neural networks are brittle precisely because they do not make representations about themselves – as a network; and even if they did, those representations could not produce a mechanism to adjust the network itself.

    In a closed network like a nervous system, basic representations about phenomena of the network are necessarily asemic – non-syntactic. One part of the network can only “know” that “something is off”, “something is happening”, or “this matches”, “at the same time”, etc. And that part of the network can only produce patterns and physical effects locally, which must propagate a pattern back to the sub-network that needs to adapt. The pattern the sub-network receives initiates the adaptation. The meta or representational network must send a “glider” back towards the adapting sub-network.

    For instance, a pattern is likely a combination of diffuse molecules in the bloodstream and an action-potential-generated pattern (glider). We see widespread learning-related effects when we diffuse molecules in the bloodstream, and the learning correlates with which networks are active at the time, e.g. with modafinil or, conversely, THC. So mere activity combined with a diffuse signal is enough to produce sub-network adaptation.

    The implication is that if a network creates patterns about itself to manage and regulate itself, then these patterns are the same basic content primitives that we experience as our learning and “operating” feelings. Because the patterns of the meta-network are not physical phenomena, just as the gliders in a cellular automaton are not actually gliding, but are purely representational.

    A glider in a cellular automaton is an epiphenomenon: the code of the machine does not see or process a glider. But gliders and other epiphenomena can be used to construct complex calculators and computers with cellular automata like Conway’s Game of Life. If we perturb the glider formation, or insert new gliders into the calculator, those changes perturb the epiphenomenal calculator so it malfunctions. But from the point of view of the underlying CA computation (let alone the cells), there is no such thing as a calculator happening and nothing was perturbed; cell states and cell-state patterns are irrelevant to the computational process.
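The glider point is easy to verify directly. In a minimal sketch (my own code, not anything from the comment), the Game of Life update rule below knows nothing about gliders — it only counts live neighbours of single cells — yet the glider-shaped pattern reappears one cell down-right every four generations, a fact visible only at the level of the pattern.

```python
from itertools import product

def step(live):
    # One Game of Life generation: each live cell increments the count of
    # its 8 neighbours, then the standard birth/survival rule is applied.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic glider:  .O.
#                      ..O
#                      OOO
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the same shape sits one cell down-right: a fact about
# the pattern, invisible at the level of the cell-update rule.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in `step` mentions gliders; the "glider" exists only as a description of the pattern the rule happens to produce.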

    The major difference between a cellular automaton and us is that the cellular automaton does not affect its own code generation. No pattern or glider can change the underlying machine code. But with us, changing patterns change the action potential response of connected cells, which does affect protein expression, molecule diffusion, firing rates etc. – a change to the underlying code. The content of the network (patterns like gliders) can affect the physical mechanisms of the network (protein expression) and vice versa. The obvious supposition then is that the content of the patterns is the content of our experiences.

    Which implies, when we take this back up the chain to identity, that identity is the aggregate of representations about that particular network. Over time identity changes, because the network changes and the representations of the network change. And identity remains the same precisely because it is the same network generating representations of itself. The complex of representations the network creates about itself is what we call identity, and as those representations persist, that identity persists in that network. You can alter the identity by physically altering the network, and you can alter the identity by changing the representations the network has of itself. Both physical and non-physical phenomena (ideas) can affect identity.

  5. David Xanatos says:

    I don’t think we can assign “identity” to any one thing – I think it’s the sum of several, both internal and external, conditions and agreements. Memories are one, but insufficient of themselves to be called Identity. Beliefs are another, but again, insufficient of themselves to be Identity. The physical continuity of the being possessing memories and beliefs is a third, but again, insufficient. And lastly, there is an entirely external component that seems to be overlooked: external observers.

    Memories can be altered or lost altogether, and yet while we may say in such instances that they “seem like a different person”, the word “seem” implies our belief that “well, it’s them, but they’re different”. It’s us, as external observers, recognizing the continuity of the physical form “in” which those memories were “originally experienced”. We, as external observers, are the last line of defense in holding together Identity in others. It’s a societal construct in that regard.

    Beliefs are subject to the same structures. People can go away, have profoundly impactful experiences, and come back “changed” – perhaps with radically altered belief systems, yet we still say it’s the same person.

    We ourselves can recognize profound changes in our knowledge, beliefs and experiences, yet we still say we’re the person that had those experiences, or held those beliefs. I remember being a teenager and having some “interesting” views about what life was, what was important, etc., and I still say that was “me”, despite my views and understandings having shifted profoundly from the previous vantage point. I can say in conversation that I was “truly a different person then”, but my meaning is more accurately that “I am the person who experienced those thoughts and feelings, and experienced them morphing into something else over time as experience taught me new things”.

    Now if we were to take someone and be able to wipe their memory of themselves, their entire life, and somehow upload the full set of memories and beliefs of someone else into this other person’s head, that hapless individual may well believe themselves to be whoever it is they have the memories and beliefs of. But we, as external observers (if we knew the circumstances), would not be so readily accepting of them AS that other person. Indeed – suppose the process of memory duplication was non-destructive to the original owner of said memories and beliefs. Now there would be two people walking around, both of them believing themselves to be the same person. We would argue that only one of them could be right – one is the original, one is a copy – this goes directly to identity. And neither the original, nor the copy, would have the subsequent ongoing experiences of the other. (This is also why I think the whole concept of “uploading one’s self” to a computer is ludicrous – you may get a really, really good approximation, but it’ll never *BE* YOU, it’ll not be any pathway to “immortality” – because when the original YOU passes, then you’re gone. The copy may continue on for as long as someone pays the power bill, but it’ll never BE you. But I digress.)

    Identity is not a thing in itself I think. It’s more like a set of loosely correlated things that each, individually, cannot be identity, simply because they can change and yet identity can be said to remain fixed. And because identity seems to be based in multiple frames – internal personal, internal physical, external personal and external physical, it seems to secure it fairly well.

    Unless you trust the external personal to Equifax…


  6. l. mason says:

    i built a multi-agent truth maintenance system. we allow multiple agents’ ‘memories’ to be held and used together by tagging each memory with the source-id. it seems complicated conceptually, but it was rather simple to implement.
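As a minimal sketch of what that tagging might look like (the names here are my own invention, not from the actual system): each stored “memory” carries the id of the agent it originated from, so one store can hold several agents’ memories without conflating their provenance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Memory:
    source_id: str   # which agent this memory originated from
    content: str

class SharedStore:
    """Holds memories from multiple agents, tagged by source."""
    def __init__(self):
        self._memories = []

    def add(self, source_id, content):
        self._memories.append(Memory(source_id, content))

    def recall(self, source_id=None):
        # All memories, or only those originating from one agent.
        return [m for m in self._memories
                if source_id is None or m.source_id == source_id]

store = SharedStore()
store.add("agent_a", "the door was open")
store.add("agent_b", "the door was closed")
print(len(store.recall()))                  # 2
print(store.recall("agent_a")[0].content)   # the door was open
```

Even conflicting memories coexist without contradiction, because each claim is indexed to its source rather than asserted globally.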

    later i built something called ‘subjective objects’ which is based on a Tibetan theory of consciousness. if anyone is interested the paper is called ‘the subjective experience of objects.’

  7. calvin says:

    Mason, after reading your paper I’m curious how you managed the interplay between mental and sense consciousness in your software. how were mental objects generated? was it necessary to have sense objects to generate mental objects?

    I am also curious as to how novel sense inputs become novel subjective (mental) objects. What is the mechanism that forms a novel object, and the mechanism that identifies it as novel? How do novel sense objects correspond to the novel experience of the sixth, mental consciousness? How did you get around the problem of explicitly coding novel objects? i.e. it’s not novel if you have coded for it. And what kind of mechanism did you use to generate the “perception” that an as-yet-unidentifiable object is or is not novel?

  8. David Duffy says:

    “human beings are physical objects like any other and have essentially the same criteria of identity”: something alive has different criteria than non-alive. Many have commented on immunological memory and the discrimination of self from non-self.

  9. Paul Torek says:

    Sure, persons are physical objects and this has a lot of implications for the criteria of personal identity. But there’s a charitable interpretation of Parfit which makes memory not so much a criterion of identity, but rather an important part of what matters in survival. Which it is. Admittedly this charitable interpretation does some violence to some of Parfit’s earlier writings, but that’s o.k.

    I agree that human memories aren’t transferable, because the encoding is unique to one brain and basically only that same brain can decode it. But that said, I do think that an episodic memory is more similar to a low-quality, single-machine-playable recording than to a propositional belief. And of course, if it matters, it may be possible in principle to copy a whole brain and body, which suffices to duplicate a q-memory.

  10. l. mason says:

    hi Duffy/all,

    It seems Heidegger’s notion of ‘Dasein’ is helpful here (see Being and Time)… The concept points at our dilemma of being alive and in relation, yet also being ultimately alone. We decide to have a world view in which my existence in some way depends on your welfare, or in which I have a stake in your well-being. If I do not have a stake in your well-being, then bad things can happen… regardless of whether the mental state belongs to a human or not. To me, this is a motivation to focus on creating positive intention towards the world, including both the human and the object/non-conscious.

    The trouble comes in when we lose touch with empathy, and/or when we have no stake in your well-being… object or not.

    *the rub-off effect*
    Cliff Nass studied the relation between people and machines. He and his team found what we might call the ‘rub-off’ effect: we begin to treat one another the way we treat/regard our machines. Have you heard of the work by Klein, Picard and Moon that looked at user frustration? Huge numbers of people have hurled abuse at their machines out of frustration, including throwing them, smashing them, running them over… And if we regard our machines as objects, well… let’s just say, this is one of the motivations for the work on giving machines compassionate intelligence.

    Another reason is bioplasticity. Because we are changing ourselves with repeated interactions (the brain, our gene expression, rate of wound healing, etc.) – whether with devices or in relation to people… for more details, see the paper,
    Human Sciences and Technology Design,
    or the paper,
    Stem Cell and Cancer Series, Case Report by Ann Shannon, RN

    *building machines with our capacity for compassion*
    Sociologists tell us that objectification of humans is the key to administering cruelty/torture. There’s a great book on this, ‘The Science of Evil’ by Simon Baron-Cohen. The key to being able to inflict cruelty on other humans throughout history (genocides, slavery, etc.), according to the science, is the erosion of empathy. Seeing people as objects or as subhuman would certainly erase empathy, because a human is not the same as a stapler or a file folder.

    So how to build ‘empathy’ for real? The paper ‘engineering kindness – building a machine with compassionate intelligence’ outlines the first set of software requirements to build such a compassionate machine… one requirement is representing the mental states of others, or some computation that enables empathy; but another requirement, just as important, is positive intention.

    In Blade Runner 2049, when Robin Wright’s character has been killed in her office and her face/iris is needed as login ID, the replicant grabs her head like it’s an object, scans it, then promptly drops it with a thump. Most of the audience gasped at seeing a dead ‘human’ treated with indifference. I wouldn’t even treat my stapler that way.

    If the replicant’s mind designers had included regard for others’ welfare, that is, a world view in which my own welfare includes your welfare, this horrible moment would not happen. What is our stake in this? We are at a juncture in time to make these decisions.

    It’s also useful to consider the movie, Ramona, that came out a few years back. The software entity, Ramona, gradually becomes self-aware and goes to court to fight for her right to be treated with human dignity. This is what Heidegger’s Dasein can bring to AI. It’s really about how we treat one another and our environment, which includes objects. History shows that humans are capable of great cruelty (genocide, slavery, torture, etc.), but we are equally capable of great compassion and generosity. Because Cliff Nass’s work shows there is a rub-off effect in our interactions with devices, including software agents and embodied agents, we can design technology to bring out the best in us.

  11. l. mason says:

    Reply to Calvin:
    “Mason, after reading your paper I’m curious how you managed the interplay between mental and sense consciousness in your software. How were mental objects generated? Was it necessary to have sense objects to generate mental objects?”
    This is ongoing work, but, if I understand you, there are several different aspects to this…
    1) what hardware to use to access sense consciousness
    2) computational representation of sense aspects and how to create signal-symbol transformation
    3) creating synesthesia/interdependence
    there’s more …
    We have experimented with several approaches. a) The bit-brain interface is a helmet the user wears that captures EEG; this has been used to collect data when handling objects and also when visualizing objects on screen. There are people working with virtual reality as well, and with MRI. When working generally with emotion and concepts/objects, as opposed to a user-centric approach, we created a crowd-sourced interface similar to the MIT Open Mind project, where users could augment their input with descriptions of their feelings about the object(s)/concept(s). There’s a lot here; possibly we need to talk offline for more detail. There is a paper, ‘Emotions Ontology: Collaborative Modelling of Emotions For Learning’.

    b) There is a hybrid architecture, so there are multiple representations of a sense aspect; but to access sense data (qualia) around an object, we use a computational structure we created called a ‘rhyzome’, an extension of a semantic web object. This work was done jointly with a team from Spain, the same folks on the paper cited above. Currently I’m looking into working with color and frequency, using psychophysiophilosophy as an initial source for values. But in my experience, you must be able to work with multiple representations and multiple levels of abstraction: for instance, some parts of a system need raw data, some need filtered data, and some want to work with the meaning of the data. This is a giant project that needs many people to work on it.
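    To illustrate the general idea of multiple representations at multiple levels of abstraction (this is not the actual ‘rhyzome’ structure; every name and value here is my own assumption), a small Python sketch: one sensed object exposes a raw view, a filtered view, and a symbolic view, plus named links to related nodes.

```python
import statistics

# Illustrative sketch only: a node holding several representations of
# one sensed object, loosely in the spirit of a semantic web object.

class SenseNode:
    def __init__(self, label, raw_signal):
        self.label = label            # symbolic level: the meaning
        self.raw = list(raw_signal)   # raw level: unprocessed samples
        self.links = {}               # named relations to other nodes

    @property
    def filtered(self):
        # Filtered level: a simple moving average over the raw samples,
        # standing in for whatever signal processing a real system uses.
        return [statistics.mean(self.raw[max(0, i - 2):i + 1])
                for i in range(len(self.raw))]

    def link(self, relation, other):
        # Cross-modal association, e.g. a synesthesia-like coupling.
        self.links.setdefault(relation, []).append(other)

red = SenseNode("red", [0.9, 1.1, 1.0, 0.95])
warmth = SenseNode("warmth", [0.6, 0.7])
red.link("evokes", warmth)

print(red.label)                           # a symbolic consumer reads this
print(red.filtered[-1])                    # a filtered consumer reads this
print([n.label for n in red.links["evokes"]])
```

    Different consumers then read the level they need: raw samples, the filtered stream, or the symbolic label and its links.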

    “I am also curious as to how novel sense inputs become novel subjective (mental) objects. What is the mechanism that forms a novel object, and the mechanism that identifies it as novel? How do novel sense objects correspond to the novel experience of the sixth, mental consciousness? How did you get around the problem of explicitly coding novel objects? I.e., it’s not novel if you have coded for it. And what kind of mechanism did you use to generate the ‘perception’ that an as-yet-unidentifiable object is or is not novel?”
    The novelty comes naturally from the composition of unique aspects of self with sensed input. I didn’t see this as particularly interesting, only because I have worked on several multi-agent systems that would be characterized by their unique sensory/geographic location, much like a network of cameras that each see a public square differently. You get this if you don’t place a constraint that there needs to be consistency, and you allow multiple possible-world representations.
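    A minimal sketch of that idea, with illustrative names of my own: each agent composes its unique aspect (here, a viewpoint) with shared input, so the ‘same’ event yields a different mental object per agent, and novelty is simply whether that composition has ever been formed before.

```python
# Hedged sketch: novelty as composition of a unique self-aspect with
# sensed input, as in a network of cameras that each see the same
# public square differently. All names are illustrative assumptions.

class ViewAgent:
    def __init__(self, viewpoint):
        self.viewpoint = viewpoint   # the agent's unique sensory location
        self.memory = set()          # mental objects formed so far

    def perceive(self, event):
        # A mental object is the composition of self-aspect and input,
        # so no explicit coding of "novel objects" is needed: an object
        # is novel iff this agent has never formed this composition.
        mental_object = (self.viewpoint, event)
        is_novel = mental_object not in self.memory
        self.memory.add(mental_object)
        return mental_object, is_novel

north = ViewAgent("north-camera")
south = ViewAgent("south-camera")

_, novel_first = north.perceive("person crosses square")
_, novel_again = north.perceive("person crosses square")
_, novel_south = south.perceive("person crosses square")
print(novel_first, novel_again, novel_south)  # True False True
```

    Because no consistency constraint ties the agents together, the same event remains novel for the second agent even after the first has seen it.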

  12. l. mason says:

    I’m just thinking of what the replicants can teach us about being human.
