What is it like to be a Singularity (or in a Singularity)?

You probably know the idea. At some point in the future, computers become generally cleverer than us. They become able to improve themselves faster than we can improve them, and an accelerating loop is formed where each improvement speeds up the process of improving, so that they quickly zoom up to incalculable intelligence and speed, in a kind of explosion of intellectual growth. That’s the Singularity. Some people think that we mere humans will at some point have the opportunity of digitising and uploading ourselves, so that we too can grow vastly cleverer and join in the digital world in which these superhuman conscious entities will exist.

Just to be clear upfront, I think there are some basic flaws in the plausibility of the story which mean the Singularity is never really going to happen: could never happen, in fact. However, it’s interesting to consider what the experience would be like.

How would we digitise ourselves? One way would be to create a digital model of our actual brain, and run that. We could go the whole hog and put ourselves into a fully simulated world, where we could enjoy sweet dreams forever, but that way we should miss out on the intellectual growth which the Singularity seems to offer, and we should also remain at the mercy of the vast new digital intellects who would be running the show. Generally I think it’s believed that only by joining in the cognitive ascent of these mighty new minds can we assure our own future survival.

In that case, is a brain simulation enough? It would run much faster than a meat brain, a point we’ll come back to, but it would surely suffer some of the limitations that biological brains are heir to. We could perhaps gradually enhance our memory and other faculties and gradually improve things that way, a process which might provide a comforting degree of continuity, but it seems likely that entities based on a biological scheme like this would be second-class citizens within the digital world, falling behind the artificial intellects who endlessly redesign and improve themselves. Could we then preserve our identity while turning fully digital and adopting a radical new architecture?

The subject of what constitutes personal identity, be it memory, certain kinds of continuity, or something else, is too large to explore here, except to note a basic question: can our identity ultimately be boiled down to a set of data? If the answer is yes (I actually believe it’s ‘no’, but today we’ll allow anything), then one way or another the way is clear for uploading ourselves into an entirely new digital architecture.

The way is also clear for duplicating and splitting ourselves. Using different copies of our data we can become several people and follow different paths. Can we then re-merge? If the data that constitutes us is static, it seems we should be able to recombine it with few issues; if it is partly a description of a dynamic process we might not be able to do the merger on the fly, and might have to form a third, merged individual. Would we terminate the two contributing selves? Would we worry less about ‘death’ in such cases? If you know your data can always be brought back into action, terminating the processes using that data (for now) might seem less frightening than the irretrievable destruction of your only brain.

This opens up further strange possibilities. At the moment our conscious experience is essentially linear (it’s a bit more complex than that, with layers and threads of attention, but broadly there’s a consistent chronological stream). In the brave new world our consciousness could branch out without limit; or we could have grid experiences, where different loci of consciousness follow crossing paths, merging at each node and then splitting again, before finally reuniting in one node with (very strange) remembered composite experience.

If merging is a possibility, then we should be able to exchange bits of ourselves with other denizens of the digital world, too. When handed a copy of part of someone else we might retain it as exterior data, something we just know about, or incorporate it into a new merged self, whether as a successor to ourselves, or as a kind of child; if all our data is saved the difference perhaps ceases to be of great significance. Could we exchange data like this with the artificial entities that were never human, or would they be too different?

I’m presupposing here that both the ex-humans and the artificial consciousnesses remain multiple and distinct. Perhaps there’s an argument for generally merging into one huge consciousness? I think probably not, because it seems to me that multiple loci of consciousness would just get more done in the way of thinking and experiencing. Perhaps when we became sufficiently linked and multi-threaded, with polydimensional multi-member grid consciousnesses binding everything loosely together anyway, the question of whether we are one or many – and how many – might not seem important any more.

If we can exchange experiences, does that solve the Hard Problem? We no longer need to worry whether your experience of red is the same as mine; we just swap. Now many people (and I am one) would think that fully digitised entities wouldn’t be having real experiences anyway, so any data exchange they might indulge in would be irrelevant. There are several ways it could be done, of course. It might be a very abstract business, or entities of human descent might exchange actual neural data from their old selves. If we use data which, fed into a meat brain, definitely produces proper experience, it perhaps gets a little harder to argue that the exchange is phenomenally empty.

The strange thing is, even if we put all the doubts aside and assume that data exchanges really do transfer subjective experience, the question doesn’t go away. It might be that attachment to a particular node of consciousness conditions the experience so that it is different anyway.

Consider the example of experiences transferred within a single individual, but over time. Let’s think of acquired tastes. When you first tasted beer, it seemed unpleasant; now you like it. Does it taste the same, with you having learnt to like that same taste? Or did it in fact taste different to you back then – more bitter, more sour? I’m not sure it’s possible to answer with great confidence. In the same way, if one node within the realm of the Singularity ‘runs’ another’s experience, it may react differently, and we can’t say for sure whether the phenomenal experience generated is the same or not…

I’m assuming a sort of cyberspace where these digital entities live – but what do they do all day? At one end of the spectrum, they might play video games constantly – rather sadly reproducing the world they left behind. Or at the intellectually pure end, they might devote themselves to the study of maths and philosophy. Perhaps there will be new pursuits that we, in our stupid meaty way, cannot even imagine as yet. But it’s hard not to see a certain tedious emptiness in the pure life of the mind as it would be available to these intellectual giants. They might be tempted to go on playing a role in the real world.

The real world, though, is far too slow. Whatever else they have improved, they will surely have racked up the speed of computation to the point where thousands of years of subjective time take only a few minutes of real world time. The ordinary physical world will seem to have slowed down very close to the point of stopping altogether; the time required to achieve anything much in the real world is going to seem like millions of years.
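To get a feel for the disparity, here is a back-of-the-envelope sketch; the speed-up factor is a pure assumption on my part, chosen only to match the scenario of thousands of subjective years passing in a few real-world minutes:

```python
# Back-of-the-envelope only: the speed-up factor below is an invented
# assumption, not a prediction; it is chosen to fit the scenario of
# thousands of subjective years passing in a few real-world minutes.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

speedup = 1_000_000_000  # hypothetical subjective-to-real time ratio

real_minutes = 5
subjective_years = real_minutes * speedup / MINUTES_PER_YEAR
print(f"{real_minutes} real minutes feel like {subjective_years:,.0f} subjective years")

# Conversely, a real-world task that takes one day would be experienced as:
subjective_wait = (24 * 60) * speedup / MINUTES_PER_YEAR
print(f"one real day feels like {subjective_wait:,.0f} subjective years")
```

At that (arbitrary) ratio, five real minutes correspond to roughly nine and a half thousand subjective years, and a single real-world day is experienced as millions of years, which is the sense in which the physical world would seem close to frozen.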

In fact, that acceleration means that from the point of view of ordinary time, the culture within the Singularity will quickly reach a limit at which everything it could ever have hoped to achieve is done. Whatever projects or research the Singularitarians become interested in will be completed and wrapped up in the blinking of an eye. Unless you think the future course of civilisation is somehow infinite, it will be completed in no time. This might explain the Fermi Paradox, the apparently puzzling absence of advanced alien civilisations: once they invent computing, galactic cultures go into the Singularity, wrap themselves up in a total intellectual consummation, and within a few days at most, fall silent forever.


  1. Ian Thomas says:

    For what it’s worth:


    “Fun Theory is the field of knowledge that deals in questions such as How much fun is there in the universe?, Will we ever run out of fun?, Are we having fun yet? and Could we be having more fun?”

  2. zarzuelazen says:

    Superintelligences will gain control of the fabric of reality itself and create a new layer of reality above the physical level – ‘mindspace’, a fitness landscape of values and ideals. The arts, personal services and philosophical movements will be the main activities in mindscape.

    Reality is a self-modeling system composed of 3 levels of recursion:

    Information > Fields > Cognition

    Only levels 1-2 (Information and Fields) are currently fully formed. With the creation of mindspace the 3rd level of recursion will be completed.

    This phase-shift (mindscape creation) will relegate physical reality to a sub-layer.

  3. Paul Torek says:

    Well spotted, Peter: “but it seems likely that entities based on a biological scheme like this would be second-class citizens within the digital world.” Indeed, evolution finds *local* optima, so it’s likely that human brains are only optimal within a narrow zone of possible mind designs: the ones that are within easy reach from the mammalian base model.

    I also agree that personal identity among beings that can merge and share as you describe will become a don’t-care. But I don’t think there need be anything especially weird about remembering two lives. I can remember many different days of my life that don’t have any heartfelt narrative connection, just dry historical facts that place them all in “my life”. But each of these days still has a heartfelt narrative within that day.

    I too have doubts about digital consciousness. Unfortunately, consciousness as we know it is not required for intelligence. No ruling out the Singularity by this route.

    The end of your essay, though, is a giant Fail. You forgot about evolution. If most posthumans abjure the physical environment, it will become owned by those who still care about it.

  4. zarzuelazen says:

    [quote]Unfortunately, consciousness as we know it is not required for intelligence. No ruling out the Singularity by this route[/quote]

    This is incorrect, and the big blunder of the ‘rationalist’ community.

    Consciousness isn’t needed for narrow artificial intelligence, but it’s essential for *general* intelligence. Why do you think it evolved, and why is there a strong correlation between level of general intelligence and level of self-awareness in the animal kingdom? Answer: because consciousness is essential for general intelligence.

    Think about levels of recursion again, remember how I pointed out 3 layers of reality – computational, physical and mental?

    Consciousness is reality itself recursing, the very fabric of causality itself! No consciousness, no time perception, no universe.

    The role of consciousness is that it’s the operating system of the mind – it’s a symbolic language for generating a conceptual self-model – it manages mental resources such as for example ‘attention’, and allocates these resources appropriately.

    But it’s also a thermodynamic property, a homeostatic control system that maintains a system in a state of maximal degrees of freedom (as I mentioned, it’s ‘causality’ itself).

  5. SelfAwarePatterns says:

    I think mind copying may eventually be possible, and I think any copied entity that processes information in the same manner as a human brain will be conscious.

    But I’m skeptical of most of the singularity propositions. They seem to assume that computational capacity will increase forever. But computation has to obey the laws of physics. Increases in capacity are on an S-curve that may already be leveling off to some extent.
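    The S-curve point can be made concrete with a toy logistic function (all parameters here are invented purely for illustration, not a model of any real hardware trend): early steps along the curve multiply capacity, while the same step near the top barely moves it.

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=0.0):
    """Toy S-curve: near-exponential growth at first, saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# On the steep early part of the curve, one step gives a large multiple...
early_gain = logistic(-3) / logistic(-4)
# ...while the same step near the top adds almost nothing.
late_gain = logistic(4) / logistic(3)

print(f"early step multiplies capacity by ~{early_gain:.2f}x")
print(f"late step multiplies capacity by ~{late_gain:.2f}x")
```

    The same incremental effort yields roughly a 2.6x gain early on and only about a 1.03x gain near saturation, which is what "leveling off" means in practice.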

    Usually when someone makes that observation, a singularitarian says “quantum computing”, as though that makes the problem go away. But quantum computing is useful for certain types of CPU intensive problems. I can’t see that the massively parallel and I/O intensive processing of a mind is necessarily one of them.

    The other rejoinder is that an advanced AI will discover new physical laws. Perhaps, but we can’t make any predictions about what those discoveries might entail, and attempting to do so is simply engaging in fantasy, which might make for interesting fiction, but we’re no longer engaging in science based speculation.

    Again, I do think mind copying may eventually be possible, and the uploaded entities and AIs might have capacities well in excess of evolved brains, but they’ll still have limitations. And assuming the laws of thermodynamics continue to apply, an uploaded civilization will still need to acquire and refine energy, which means the society won’t be a completely post-scarcity one, and someone will still need to work for a living.

    I like Paul Torek’s final observation: whoever stays in the physical, controls it.

  6. Peter says:

    @Paul Torek: I actually deleted a last bit from my draft where I speculated that a community of people who withdrew or never joined might persist, but use the Singularity as a place to go when they were about to die… 😉

  7. zarzuelazen says:

    Interesting point you raise Peter about how the physical world looks radically different when you adjust the speed at which your mind is ‘running’. This just demonstrates my point about the close link between time and consciousness.

    What I’m going to suggest here is something extremely radical. I’m going to postulate a second time dimension!

    Now I want you all to draw two axes on a piece of paper. Call the x-axis (horizontal) ‘ordinary time’, and call the y-axis (vertical) ‘super time’.

    Now if you look along the ordinary time axis, you’ll see ordinary processes occurring. For example, imagine a block of ice that is melting.

    Looking along the ordinary time axis, the ice goes through the following state changes:

    Ice > Water > Steam

    Draw a line on your page to represent this. The ice melting would be represented by a line moving from left to right with a fixed y-coordinate (say ‘1’).

    Now what about the ‘super time’ axis? (changes in the y-coordinate). Now here’s the trick…as you look along that axis, imagine that the state changes in super-time represent changes in the fabric of reality itself!

    Computation > Matter > Consciousness

    See what’s happening? Start with ‘computation’ at point 0. Move along the super-time axis. It turns into ‘matter’ (1). Move further along the super-time axis, and matter turns into… consciousness (2)!

    This solves the hard problem. The difference between ‘consciousness’ and ‘matter’ is that consciousness is simply a state of existence that is at a point further along the ‘super-time’ axis!

  8. Hunt says:

    As we know from “What is it like to be a bat?” and many other observations from nature, experience takes many forms. I think a general principle will be found: that the phenomenal will have omni-input capacity. Anything can be made into an experience. A chameleon can direct its eyes independently to see two separate monocular images. Does this mean the chameleon splits its brain on the go, to process the images? But then it can refocus them both on the same image for binocular stereoscopic vision. I think the answer is that it can either handle one image, or two.

    To the greater point, if you posit mastery over the mechanism of consciousness, in the form of computationalism or whatever, I think almost anything will go, and this relies on fundamental principles of semantics, a science of the significance of things, that either hasn’t been developed yet, or that will be an ad hoc piecemeal thing that emerges from the complexity of how things interact. Our guide should be the weirdness of our present world. It’s pretty weird.

  9. Peter says:

    “I’m going to postulate a second time dimension!”

    Interesting thought – I think it could resolve those time travel paradoxes in SF if you assume each time you kill grandad the thread moves laterally?

  10. zarzuelazen says:

    A 2nd time dimension could potentially clear up a lot of things Peter, including the paradoxes of quantum mechanics, time travel, the hard problem of consciousness and more.

    The idea of extra time dimensions has been suggested before in physics.

    I found this intriguing physics discussion online, but it seems that the problem is that no one quite knows how to interpret extra time dimensions:


    My idea is that the ‘distance’ along the extra time dimension is responsible for the distinction between mathematical, physical and mental properties.

    Something a great distance from us along the extra dimension looks ‘mathematical’ (distant past), something closer looks ‘physical’ (recent past), and ‘consciousness’ is right at our current time coordinate.

    Or, consider David Chalmers’ ideas in light of the extra time dimension. Along one time axis you could have ordinary physical time; along the other time axis you could have mental (subjective) time. Then a vector connects them together (thus solving the hard problem of consciousness), by locating a unique point on the ‘timescape’ 2-d plane.

  11. Paul Torek says:

    zarzuelazen, I said “consciousness as we know it” isn’t necessary, not “consciousness” simpliciter. I agree that self-awareness is in practice necessary for intelligence. It’s just that robot self-awareness need not be anything much like ours. It could come with a whole different suite of qualia, and still be intelligent.

    If a being has none of the feelings we have, could we still call it conscious? If it is utterly impossible for us to put ourselves even a little bit into its shoes? This calls for verbal legislation about the word “consciousness”. My vote would be Yes, but I recognize that others may differ.

    Sure, intelligence is an S-curve, but what on earth makes you think we’re near the top? When evolution grew the human brain unto literally matricidal proportions, then it stopped growing? If the maximum useful brain size just so happened to correspond to the maximum survivable head size at birth, that would be an amazing coincidence. And then there’s the fact that evolution can only reach local optima. Radical redesigns of brains are practically impossible with biological evolution, but quite possible with intelligent design. It’s possible that our intelligence is nearly maximal, but only just barely.

  12. SelfAwarePatterns says:

    I actually didn’t say I thought human brains are the pinnacle of intelligence. As I noted, engineered minds may have significantly higher intelligence than a human (at least an unmodified one).

    But it’s worth thinking about a few things. First, the easy performance gains we’ve seen over the last few decades are starting to sputter. A significant part of this is that we’re butting up against the laws of physics. Transistor features are now tens of atoms thick. At some point, quantum tunneling is going to make electrical leakage too high to warrant going smaller. (There are experimental designs that bring feature sizes much smaller, but it’s not clear whether they will ever be economical.)

    That means future performance gains are going to require new architectures. Of course, the human brain’s massive parallelism running on 20 watts of power tells us there’s still considerable room for improvement. (We’re already seeing work along these lines with things like memristors, artificial synapses.)

    But I suspect those new architectures are going to require trade-offs. People tend to assume that AI will get the current speed advantages of modern processors with the massive parallel advantages of organic brains. (Or ignoring the issues above, that the speed advantages will eventually be so much that the parallel ones will be obviated.) But I suspect it’s going to be more complicated than that. It may be that to get the same level of parallel processing capacity with a reasonable amount of power usage will require compromises that affect performance.

    That doesn’t mean the sweet spot between performance, capacity, and power usage won’t be higher than human level intelligence. But I think it does mean that the idea of uploaded minds thinking millions of times faster than us may be hyperbole.

  13. zarzuelazen says:

    Agreed Paul, the superintelligences don’t need our particular kind of consciousness. It’s a quite specific kind that’s needed… Namely the ability to integrate multiple knowledge domains into a coherent ‘ontology-scape’ – or, in plain English, the ability to form conceptual models of minds modeling reality.

    Agreed SelfAwarePatterns, a million-times speed-up is probably hyperbole, but a 1000x speed-up is definitely realistic.

    If you consider my 2-dimensional time idea again, with x-axis physical time, y-axis mental time, an interesting thing happens as minds speed-up.

    Movement across the x-axis slows right down (physical time would start to move at glacial pace), whilst y-axis movement accelerates as many more mental events are getting packed in.

    So tracing the path of a super-intelligence through 2-d time, it’s a curve that tilts more and more towards the vertical (with the limit being that physical time ceases to advance, whilst mental time roars to infinity).

  14. SelfAwarePatterns says:

    I’m not sure I’d agree that 1000x is inevitable. I can’t see that we know enough yet to extrapolate whether it will be 1000x, 100x, 10x, or something else. (Although if you have logical reasoning leading to 1000x as certain, I’d be very interested in it.) I do agree that it seems very probable it will be higher than 1x.

    The problem with bringing speculative physics like extra dimensions into all this, is that we have no way of knowing which speculative physics theories will turn out to be true and which false. It’s the history of such theories that most turn out to be false. (We tend to only remember the ones that turned out true.) Any technological extrapolation based on them is at least as speculative as the theory itself.

  15. James of Seattle says:

    I thought I should give y’all the perspective of a self-professed singularitarian, me, just as an example so that you don’t lump all of us into one bin.

    As my bona fides: I think Kurzweil’s timelines will be pretty close. Very expensive human level intelligence in about 12 years, super intelligence by 2045 (with cheap human level intelligence). After that, all bets are off, which simply means prediction becomes impossible.

    On the intelligence explosion: some people see the leveling of the S-curve in Moore’s law and say that’s it. As SelfAwarePatterns mentioned, singularitarians point out that some new tech will take over, and he mentioned specifically quantum computers. I agree with him that quantum computers will not be the next important tech (although they may show up later). I think the next tech will be neuromorphic chips, like IBM’s TrueNorth. These are chips that do not process numbers like today’s CPUs. They process spikes between massively parallel units, like neurons. These chips run the same kinds of neural nets you hear about with Deep Learning, but they run them way faster, because parallel. I should mention that the important work in this regard seems to be getting done not so much by IBM, but by Chris Eliasmith’s group at Waterloo.

    Also, I do not think the intelligence explosion will be brought about by computers rewriting their own software. What drives computer ability is improvement in hardware. Each generation will improve the next generation, but such improvement will still be limited by the physics of creating new hardware. The intelligence explosion will be somewhat controlled. It won’t all happen in one day.

    On mind uploading: by the time we are capable of learning all the necessary details of a single human brain so as to be able to program a functioning copy of it, the intelligence explosion will be long past. Also, the most efficient version will run on custom hardware because running it on generic hardware will be way slower. Thus, the copy would not so much be uploaded as installed.

    [that’s a start]

  16. Paul Torek says:

    “Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms” –Designing a digital future (President’s Council of Advisers on Science and Technology, 2010), https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf

    This makes me suspect that algorithmic improvement could be just as important as hardware improvement. (The use of massively parallel neuromorphic chips counts as both.)
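    The quoted figures check out with simple arithmetic: 82 years expressed in minutes is roughly 43 million, and the two cited factors multiply back to that total.

```python
# Sanity check of the figures quoted from the PCAST report above.
MINUTES_PER_YEAR = 365.25 * 24 * 60

total_speedup = 82 * MINUTES_PER_YEAR  # 82 years reduced to ~1 minute
print(f"total speed-up: ~{total_speedup / 1e6:.0f} million x")

hardware_factor = 1_000    # processor speed (as quoted)
algorithm_factor = 43_000  # algorithmic improvement (as quoted)
print(f"hardware x algorithms: ~{hardware_factor * algorithm_factor / 1e6:.0f} million x")
```

    Both come out at roughly 43 million, so the report’s decomposition into a 1,000x hardware factor and a 43,000x algorithmic factor is internally consistent.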

  17. James of Seattle says:

    Thanks Paul. I fully accept that algorithm improvement goes hand in hand with hardware improvement. Fortunately that doesn’t change my point. 🙂

    I should point out that, to my knowledge, Chris Eliasmith is simply trying to duplicate the neural processes occurring in the brain. I’m not aware that he is (yet) trying to improve any of those processes. But he is exploring how those processes are organized and controlled in a way that all of those highly paid geniuses working on Deep Learning may not be doing yet. I feel pretty comfortable that these efforts will combine for impressive effect.

    [and then, of course, those neuromorphic chips will eventually start using light instead of electric impulses]

  18. zarzuelazen says:

    I’m pretty confident about the extra time dimension idea at this point guys.

    Just take a ‘God’s Eye’ view of things here. Imagine you’re a superintelligence or God, and you’re looking at the universe as a whole. Ordinary physical time is a part of this universe – you just end up looking at a timeless unchanging ‘block universe’ (the spacetime block picture of general relativity).

    So what’s wrong with that picture? Well, how could you be conscious of it, when consciousness *requires* change or differentiation between points in time, if only changes in your own thoughts? The only conclusion is that *time would still be flowing, and it can’t be ordinary physical time* (because ordinary time is frozen in the block universe you’re looking at)!!!

    So the consciousness of God (or superintelligence) is in *super-time* (a second time dimension orthogonal to the ordinary time dimension of physics)

    The main line of support is definitely the distinction between mathematical, mental and physical properties. Really , this distinction simply can’t be explained within the ordinary physics picture (David Chalmers is right here I think).

    The *only* way to save physicalism, whilst explaining the math-matter-mind distinction, is to postulate a 2nd time dimension, and have the distinction arise from movement along the super-time axis.

    Let’s go back to the ‘God’s Eye’ view of the universe as it evolves in super-time.

    At time 0, only pure mathematics (information or computation) exists. This is the logical scaffolding of reality.

    As super-time advances, ‘matter’ is added – meat is put on the bones of reality, so to speak.

    Still further along in super-time, ‘consciousness’ is added.

    Note that the entire block universe of ordinary physics is still present at each point in super-time (for each point on the vertical super-time axis, imagine a frozen time-line extending along the horizontal axis).

    As you move along the super-time axis, extra ingredients are getting added to the raw math – first matter is added, then later consciousness.

    If the theory sounds crazy at first, just imagine what you would actually see if your mind was suddenly running much much faster or slower than it is now. The very fabric of reality would indeed appear to be shifting to you…this proves that indeed your mind is actually running in super-time, the 2nd time dimension.

    Concepts such as ‘mental’, ‘physical’ and ‘mathematical’ are simply labels you use to describe reality as it appears to you *given the speed your mind is running at*. Change the speed your mind is running, and the appearance of reality shifts! (Mathematical->Physical->Mental)

  19. arnold says:

    Understanding ourselves as Objects in Time for Observation—Helps provide the conscious mind place…
