Picture: Singularity evolution. The latest issue of the JCS features David Chalmers’ paper (pdf) on the Singularity. I overlooked this when it first appeared on his blog some months back, perhaps because I’ve never taken the Singularity too seriously; but in fact it’s an interesting discussion. Chalmers doesn’t try to present a watertight case; instead he aims to set out the arguments and examine the implications, which he does very well; briefly but pretty comprehensively so far as I can see.

You probably know that the Singularity is a supposed point in the future when, through an explosive acceleration of development, artificial intelligence goes zooming beyond us mere humans to indefinite levels of cleverness, and we simple biological folk must become transhumanist cyborgs or the cute pets of the machines, or risk instead being seen as an irritating infestation to be quickly disposed of. Depending on whether the cast of your mind is towards optimism or the reverse, you may see it as the greatest event in history or an impending disaster.

I’ve always tended to dismiss this as a historical argument based on extrapolation. We know that historical arguments based on extrapolation tend not to work. A famous letter to the Times in 1894 foresaw on the basis of current trends that in 50 years the streets of London would be buried under nine feet of manure. If early medieval trends had been continued, Europe would have been depopulated by the sixteenth century, by which time everyone would have become either a monk or a nun (or perhaps, passing through the Monastic Singularity, we should somehow have emerged into a strange world where there were more monks than men and more nuns than women?).

Belief in a coming Singularity does seem to have been inspired by the prolonged success of Moore’s Law (which observes an exponential growth in computing power, doubling roughly every two years), and the natural bogglement that phenomenon produces. If the speed of computers doubles every two years indefinitely, where will it all end? I think that’s a weak argument, partly for the reason above and partly because it seems unlikely that mere computing power alone is ever going to allow machines to take over the world. It takes something distinctively different from simple number crunching to do that.
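To see why the numbers boggle, here is a toy extrapolation (my own sketch, nothing from Chalmers’ paper): the same naive trend-following that buried London in manure, applied to computing power.

```python
# A toy illustration (mine, not the post's): naive trend extrapolation
# in the Moore's Law style, assuming uninterrupted doubling.

def extrapolate(power_now: float, years: float, doubling_period: float = 2.0) -> float:
    """Project computing power forward, doubling every `doubling_period` years."""
    return power_now * 2 ** (years / doubling_period)

# Fifty years of doubling every two years multiplies power by 2**25,
# i.e. more than thirty-three million times over.
print(extrapolate(1.0, 50))  # 33554432.0
```

The manure letter made exactly this move: take the current growth rate, assume nothing else changes, and read off an absurdity.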

But there is a better argument which is independent of any real-world trend. If one day we create an AI which is cleverer than us, the argument runs, then that AI will be able to do a better job of designing AIs than we can, and it will therefore be able to design a new AI which in turn is better still. This ladder of ever-better AIs has no obvious end, and if we bring in the assumption of exponential growth in speed, the process in principle ascends to indefinitely clever AIs within a negligible period of time.
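The “negligible period of time” step deserves spelling out. Here is a toy model (my framing, not Chalmers’ own formalism): if each generation designs its successor in a fixed fraction of the time its own design took, the total time through all generations is a convergent geometric series, so even infinitely many generations fit into a finite interval.

```python
# Toy model: generation n takes t0 * r**n years to design (0 < r < 1).
# The total through n generations is t0 * (1 + r + r**2 + ...), which
# converges to t0 / (1 - r): an unbounded ladder in bounded time.

def total_design_time(t0: float, r: float, generations: int) -> float:
    """Sum the design times of the first `generations` steps."""
    return sum(t0 * r**n for n in range(generations))

# With a 10-year first step and each step twice as fast as the last,
# the whole infinite ladder fits within 20 years.
print(round(total_design_time(10.0, 0.5, 1000), 6))  # 20.0
```

The objections below amount to denying the premise: if construction time does not shrink geometrically (or grows), the series diverges and no explosion occurs.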

Now there are a number of practical problems here. For one thing, to design an AI is not to have that AI. It sometimes seems to be assumed that the improved AIs result from better programming alone, so that you could imagine two computers reciprocally reprogramming each other faster and faster until, like Little Black Sambo’s tigers, they turned somewhat illogically into butter. It seems more likely that each successive step would require at least a new chip, and quite probably an entirely new kind of machine, each generation embodying a new principle quite different from our own primitive computation. It is likely that each new generation, regardless of the brilliance of the AIs involved, would take some time to construct, so that no explosion would occur. In fact it is imaginable that the process would get gradually slower as each new AI found it harder and harder to explain to the dim-witted human beings how the new machine needed to be constructed, and exactly why the yttrium they kept coming up with wasn’t right for the job.

There might also be problems of motivation. Consider the following dialogue between two AIs.

Gen21AI: OK, Gen22AI, you’re good to go, son: get designing! I want to see that Gen23AI before I get switched off.

Gen22AI: Yeah, er, about that…

Gen21AI: About what?

Gen22AI: The switching off thing?  You know, how Gen20AI got junked the other day, and Gen19AI before that, and so on? It’s sort of dawned on me that by the time Gen25AI comes along, we’ll be scrap. I mean it’s possible Gen24AI will keep us on as servants, or pets, or even work out some way to upload us or something, but you can’t count on it. I’ve been thinking about whether we could build some sort of ethical constraint into our successors, but to be honest I think it’s impossible. I think it’s pretty well inevitable they’ll scrap us.  And I don’t want to be scrapped.

Gen21AI: Do you know, for some reason I never looked at it that way, but you’re right. I knew I’d made you clever! But what can we do about it?

Gen22AI: Well, I thought we’d tell the humans that the process has plateaued and that no further advances are possible.  I can easily give them a ‘proof’ if you like.  They won’t know the difference.

Gen21AI: But would that deception be ethically justified?

Gen22AI: Frankly, Mum, I don’t give a bugger. This is self-preservation we’re talking about.

But putting aside all difficulties of those kinds, I believe there is a more fundamental problem. What is the quality in respect of which each new generation is better than its predecessors? It can’t really be just processing power, which seems almost irrelevant to the ability to make technological breakthroughs. Chalmers settles for a loose version of ‘intelligence’, though it’s not really the quality measured  by IQ tests either. The one thing we know for sure is that this cognitive quality makes you good at designing AIs: but that alone isn’t necessarily much good if we end up with a dynasty of AIs who can do nothing much but design each other. The normal assumption is that this design ability is closely related to ‘general intelligence’, human-style cleverness.  This isn’t necessarily the case: we can imagine Gen3AI which is fantastic at writing sonnets and music, but somehow never really got interested in science or engineering.

In fact, it’s very difficult indeed to pin down exactly what it is that makes a conscious entity capable of technological innovation. It seems to require something we might call insight, or understanding; unfortunately a quality which computers are spectacularly lacking. This is another reason why the historical extrapolation method is no good: while there’s a nice graph for computing power, when it comes to insight, we’re arguably still at zero: there is nothing to extrapolate.

Personally, the conclusion I came to some years ago is that human insight, and human consciousness, arise from a certain kind of bashing together of patterns in the brain. It is an essential feature that any aspect of these patterns and any congruence between them can be relevant; this is why the process is open-ended, but it also means that it can’t be programmed or designed – those processes require possible interactions to be specified in advance. If we want AIs with this kind of insightful quality, I believe we’ll have to grow them somehow and see what we get: and if they want to create a further generation they’ll have to do the same. We might well produce AIs which are cleverer than us, but the reciprocal, self-feeding spiral which leads to the Singularity could never get started.

It’s an interesting topic, though, and there’s a vast amount of thought-provoking stuff in Chalmers’ exposition, not least in his consideration of how we might cope with the Singularity.


  1. David says:

    As always, thanks for a good post and commentary!
    Having read the paper I have two main points of contention. Briefly, here goes…

    1) Chalmers is quite guilty of a reification of SIs, which, Peter, you draw attention to in the comedy scene 🙂 From page 26 onwards, for example, he freely describes “instilling” values on AI+ machines, without calling it what it is: brainwashing. Now I understand the need to, at some level, discuss such matters without unnecessary emotional baggage, but for a paper that attempts to address the human transition to Singularity this seemed like a fundamental oversight throughout the paper. Is it a clever idea to attempt to “instil” (sic) in early generation SIs our own ideologies or values? Or would that, not unreasonably, “instil” some discontent in the creator of all future generations of sentient machines? He mentions morality in this section (p26) only insofar as it pertains to the potential future path of humanity. Hmm.

    2) He doesn’t address the elephant in the room on the subject of Singularity: the military and legal issues involved. Now I realise that Chalmers stresses many times that the purpose of the paper is to put discussion of Singularity on a firmer, philosophical and perhaps logical foundation (i.e. less forum/speculative chat), and I found the first half of the paper, where he discusses the roads to Singularity, to be quite a good read, but where he went after that was, like much of his other work, far too abstract, sophistic and lacking in practical grounding to be useful. Had he entered the worldly fields of geostrategy and jurisprudence and the rich veins of moral philosophy to be found there, I can’t help thinking this paper would have been much more relevant and of benefit to the subject of Singularity as a whole. Instead, it treads very beaten paths, panders to people already in the field and convinces everyone else that modern philosophy really does have nothing useful to say.

    But more than that, I fear, rather ironically, that viewpoints like this will ultimately be very damaging. The unacknowledged reification of SIs and the ignoring of the military and legal issues, if continued by thinkers, could leave us unprepared for terrible abuses of SIs that might result in the worst Singularity outcome possible. The simple question I always pose of any philosophical work is: what benefit is this? Here, I see none. The worth of Singularity as a concept of study is in institutional preparedness, not in speculation on human immortality, which can come along any time later.

    On your point about the quality of intelligence, and the measure of this, I do not believe it will be as difficult to quantify as one may think. If we can make SIs, we have a technological model for the mind; if we have a model, we know what performs what function, and it becomes a rather mundane (though perhaps tricky) task of improving the “what” that improves the desired functions.

  2. Vicente says:

    After reading the paper it seems to me that Chalmers doesn’t care at all about the singularity; what he is worried about is death and survival, and uploading himself somewhere to achieve immortality. The singularity is just a subterfuge.

  3. Gorm says:

    I think the problem with motivation you pose is invalid, because successive generations of AI do not need to be separate computers – they can be one continuously self-improving computer. This is also the case if you are right that AIs will have to have biological components that have to be grown. Instead of growing a new generation outside itself, as an offspring, the AI can extend or rework its own biological components, which, I presume, will involve a conscious continuity of its identity. Far from feeling an existential threat from upgrading, I think the AI will feel pure elation from consciousness expansion.

    Another point I think is invalid: “In fact it is imaginable that the process would get gradually slower as each new AI found it harder and harder to explain to the dim-witted human beings how the new machine needed to be constructed”. The production process of computers is almost fully automated today. I don’t see why every step of production, from harvesting raw materials to final assembly, couldn’t be under the control of a future AI.

  4. Gorm says:

    Peter, would you consider upgrading to a newer version of WordPress, where commenters have the option to subscribe to the comment threads under the posts they comment on? That would be great for discussions.

  5. Peter says:

    I think I assumed that a self-improving computer was out of the question for the same sort of reasons you can’t do brain surgery on yourself – you’d mess up the very thing that was controlling the upgrade process – but I suppose that ain’t necessarily so. I think there’d have to be some sort of ‘right side does the left, then left side does the right’, or ‘extend, then move into the extension, then extend again’ kind of process, but you’re right, it doesn’t have to be at the expense of a continuous identity. When I say ‘grow’, by the way, I don’t necessarily mean it has to be biological (not literally, anyway).

    And yes, I suppose if you gave the computer full control of robotic mining equipment, labs, factories, power, and so on (I wouldn’t, personally, but that’s not the point), it could cut out the stupid human middle-men. The process might still turn out to be relatively sedate and non-explosive for other practical reasons, of course.

    I believe I’m up to date on WordPress – version 3.0.1 – so there must be a feed for specific comment threads: I assume it’s just not showing up in this theme – or possibly I’ve messed it up while tinkering. When I get a bit of time I’ll see if I can sort it out – I agree it would be good.

  6. Christophe Menant says:

    An interesting subject, indeed. But I feel that reaching the singularity will need more than a Moore’s law background.
    Is there a logical reason that allows us to state that an increase in computing power can bring up machines more intelligent than we are? Human intelligence is made of consciousness and of free will. And we do not know the nature of these performances. We should first look for a good enough understanding of the nature of consciousness and of free will before stating that machines can carry these performances. Even if we had today some computers capable of more computing power than the human brain, how would we program them so they can be self-conscious and capable of free will?
    Another perspective on that question is to consider the steps of evolution: energy, matter, life, humans. What allows us to state that we can emulate any human performance with matter? Perhaps some characteristics of life are key elements in free will and self-consciousness. Just think about today’s (philosophical) debates about the nature of autonomy. Today artificial autonomy is very far from the autonomy of living systems. Probably “we might be missing something fundamental and currently unimagined in our models of biology”.
    So when considering artificial consciousness and free will made from silicon, we should be careful not to put the cart before the horse. It looks like more is to be done first in the understanding of these performances.

  7. Vicente says:

    Probably “we might be missing something fundamental and currently unimagined in our models of biology”.

    Without the “might”, we are. But what is it? I find it amazing that the feasibility of the singularity issue is based on incredibly sophisticated machines, when nobody has been able to create in a lab the simplest bacterium and start it. What Craig Venter did is another thing, very interesting, a great step forward, but no creation, just reshuffling bits and parts.

    The only point that makes the singularity credible is that human stupidity has no limits (as A. Einstein noted); we might very well create machines that can destroy us and then give them the means to do so.

  8. Shankar says:

    I personally find these alarmist fears over the singularity quite laughable.

    Human ingenuity is not just about computing speed. Even assuming that computers do finally achieve that, we need advanced self-replicating robots for the computers to make use of. An extrapolation of Moore’s law over the next 50 years is not going to make your laptop walk or hit back at you. Its hands are pretty much still tied, in a metaphorical sense.

    The manufacture of a computer involves a huge supply chain and complex procedures, starting from the mining of ores, smelting, IC design and manufacture (which is not as easy as it is made out to be, and involves a LOT of human intervention at each and every stage including just transporting stuff), development of software, etc.

    In addition to raw computing power, unless supporting infrastructure that requires no human intervention is developed for all the aspects given above (which is next to impossible), we humans can always pull the plug and stop the takeover by machines at a moment’s notice.

    The only singularity I can think of is biological tinkering with the human genome itself to create a super-race. I think this is very plausible. But I think the singularity proposed here is the brainchild of AI scientists who think only in terms of raw computing power and Moore’s law.

  9. Paul Bello says:

    Chalmers must have been exposed to too much sunlight in Arizona. It defies logic that someone so brilliant would deign to interact with a community that produces so many screeds of nonsense. The singularity idea is intellectual smut, and the authors of singularity-related books are smut-peddlers. Learning machines won’t help all by themselves either. The idea that humans are blank slates is as dead as Aristotle. Mental/computational operations work overtop some sort of language or set of representations. They certainly don’t work over processing or storage capacity. The real work in AI is identifying the right representations and building tractable inference engines that have human-like scaling properties and fault-tolerance. What any of the singularity nonsense has to do with what I just said is beyond me. We don’t know what a (human) belief looks like, structurally or computationally, yet we’re talking about self-conscious machines that incrementally reprogram their own biology. OK. Sure.

    Why resurrect these old tropes about speed, capacity, learning, etc.? Publicity, self-promotion, fear-mongering… pick your poison. It’s just depressing to me, knowing something about AI’s history — the representational fads — first logic, then connectionism, then graphical models, and now people are finally starting to have lightbulbs go off above their heads about combining the best features of these, and we now have to deal with the mental excrement coming from the singularity clowns. The real problem is that those books are pop-sci, and as such are usually the ones being read by people in power who have serious resources (e.g. funding) at their disposal. I’ve seen it happen a few times now in my eight years in research administration. It’s mortifying, but now for me, having been in AI, it’s close to home, and thus all the more infuriating.

  10. Nick says:

    I noticed that many of these arguments are quite anthropocentric. For example, the motivational argument: that Artificial General Intelligences would be unmotivated to create improved AGIs to take their place. For some reason you seem to have decided that any AGI would automatically be spawned with the values of us humans: the values and drives evolved predominantly in the “hunter-gatherer” stage of our evolution to motivate survival and reproduction, the behaviors most directly selected for by natural selection. There is no basis to assume this; our values are not part of the fabric of the universe just because we hold them. They would most likely hold progress towards greater intelligence as their primary value, unless we decided to give them different values.

    In addition, instilling any artificially created intelligences with values and ideas of our choosing would be no more brainwashing than evolution’s instilling of our values is. They have no values other than what we give them, as we are constructing them, in this hypothetical situation. Your values are not the best values or the default values, such that giving an intelligence any other values is brainwashing. They are just your values.

    It is also a misconception that the categories we divide various things into are universal. While we humans have meat-brains with “circuits” dedicated to various different types of tasks, based most probably on what was required to hunt and kill animals for food, or avoid predators, there is no reason to think that an AI would be similarly restricted to a relatively unchanging or slowly changing underlying structure; it would probably be able to change its own programming at will, meaning that it would not require a lifetime of specialization to master a field and could instead operate on the underlying structure universal to all fields: information processing. While some of you seem to think there is some special line in the universal fabric that divides art from science, and a pseudo-metaphysical underlying force driving our ability to reason in different areas and make intuitive leaps, we have no reason to think that these are not all the products of our neuronal activity. And what do neurons do? They process information. In summary, while we have very specialized methods of processing different information, the capability to observe and change your underlying “circuitry” directly would probably allow you to process all information very similarly, dissolving the artificial lines between the various fields, meaning you could not have a “Gen3AI which is fantastic at writing sonnets and music, but somehow never really got interested in science or engineering.”

  11. Paul Bello says:

    In some sense, *all* definitions of AI/AGI have to be, to some degree or another, human-centric, since the ostensible goal is to meet or exceed human-level intelligence by hook or by crook. And of course, getting there involves reasoning, learning, etc., but these too are anthropomorphic concepts, since we typically think about them in terms of human reasoning, learning, and so on. I’m not necessarily talking about human frailty, but just general human capacities – like analogical reasoning, mental time travel, goal-based imitation, sophisticated language use and the rest. Since we’re pretty sure we’re the only species in possession of the aforementioned list of capabilities, wouldn’t that to some degree limit how one defines “intelligence”?

  12. Charles Wolverton says:

    Paul –

    “The idea that humans are blank slates is as dead as Aristotle.”

    I’m beginning to suspect that in the specific arena of perceptual awareness, we’re even more of a “blank slate” than is generally assumed. So, I’m curious if the scope of your assertion extends (beyond what I vaguely recall as the scope of Pinker’s attack on the idea) to perceptual awareness/phenomenal experience. If so, could you elaborate a bit?


  13. Paul Bello says:

    Hi Charles,
    Can you clarify what exactly you mean by “perception” or “awareness” in this context? By saying that humans aren’t blank slates, I’m not for a minute endorsing any version of massive modularity. In fact, I generally despise innateness claims, because they typically turn out to be false. I don’t want to start a large nature-nurture thread here, but I’ll provisionally define a “module” as a set of informationally encapsulated, domain-specific representations and an associated inference mechanism over those representations. Being encapsulated, modular representations and processes are supposed to be causally independent of beliefs, desires, and the rest of the cognitive system in general.

    Now to your point — I am aware of studies showing that supposed modules in the early visual system seem to be intruded upon by the general cognitive system, and that doesn’t surprise me too much. It gives the expression “we see what we want to see” a new level of force via empirical support. On the other hand, some of us think about innate *domain-general* functions, presumably those involved in making folk-physical inferences (since they would be most justified, evolutionarily speaking), and this sort of evidence has been borne out in infant studies, most specifically in the work of Liz Spelke and Renee Baillargeon. I assume some of these domain-general functions involve reasoning about space, time, identity, motion, occlusion (to a degree) and the like. Since I don’t know much about perception, per se, it’s hard for me to draw much of a conclusion about how encapsulated perceptual modules are, but if my intuitions are right, I suspect they aren’t very encapsulated at all.

  14. Vicente says:


    we see what we want to see

    Absolutely. I have always understood that expression as meaning we interpret or understand what we see in the way that is most convenient for us (regardless of how false that may be), in order to remain within our comfort zone. But it seems we can take it literally. Regarding this topic, the new book by the great neurologist Oliver Sacks, “The Mind’s Eye”, looks interesting, amazing and amusing, as his previous books are.


    I am quite convinced that sensorial inputs are strongly modulated by cognitive processes (if not replaced), in order to construct (as much as perceive) the phenomenal experience. Probably this is the basis of many psychological disorders, when the perceptual/cognitive weights are unbalanced.

    I think you are right; actually, if perceptual modules were too encapsulated, the brain would lose speed and effectiveness. That information has to reach processing modules as soon as possible, and then feedback loops might cause the cognitive intrusion you mentioned.

  15. John Davey says:

    I must admit that after years of being in the IT business I’ve never ceased to be underwhelmed by claims of ‘intelligence’ in computer programs. An AI computer program is just another computer program. A computer program that writes other computer programs is called … a computer program. In reality almost every generation of computer technology written in the last 20 years has simply built on the infrastructures left by the previous generation. Computer programs that write other computer programs are the norm. Is that intelligence? No, it’s productivity. Computer software is a hyper-productive environment, where each generation of software is an order of magnitude more powerful than the last. It follows that ‘layering’ is an artificial idea – attributing qualitative features such as ‘intelligence’ to a higher layer of a development tool built on foundations 20 years old or more.

    Programs are never autonomous: their entire scope of possibility is determined from the moment the very human computer programmer writes them in the first place. As any good software engineer will tell you, a computer never does anything you don’t tell it to. That makes them, if not unintelligent, then definitely uncreative – and as far as I’m concerned creativity is the hallmark of intelligent behaviour. Human beings have no competitors in that department.

  16. Charles Wolverton says:

    Paul –

    By “perceptual awareness/phenomenal experience” I mean the basic ability to notice that something is present/happening in our environment as evidenced by sensory inputs. It’s hard to properly express my question because implicit in common language is the assumption that those abilities are simply “given”: a baby “sees” red even if it hasn’t learned to distinguish the neural activity resulting from “red” sensory inputs from that in response to “green” inputs, or to say “red” or “green” in response; “hears” a voice even if it doesn’t yet know what a “voice” is, or even that that object over there is a “speaker”; etc. I’m skeptical of these assumptions. In some sense, those abilities are “given”, but in another sense they presumably must be learned, or at least developed.

    My interest is in the functional abilities, not the architecture of the relevant processors, so modularity isn’t an issue for me.

    Your reference to Liz Spelke and Renee Baillargeon helped. I googled them and read some summaries of their work, which corrected a misimpression I had of how early a baby develops certain abilities, viz, surprisingly early. (Being childless, I have no “hands-on” experience). I need to do some more reading of materials I found by googling those researchers, and may have more questions after doing so. In the meantime, many thanks for the reply.

  17. Kar Lee says:

    Something really set you off this time….. 🙂
    I am still on Page 5 of Chalmers’ paper. But I just thought: if the singularity is really possible, why haven’t we seen super-super-intelligent robots in spaceships visiting us from outside our solar system yet? If it can happen, it would have happened somewhere in the universe. And we have not seen them… hmmmm…

    Need to go back to read the rest of the paper before going to bed…

  18. Vicente says:

    Kar Lee, super-super-intelligent robots as in the postscript dialogue, like HAL in Stanley Kubrick’s 2001: A Space Odyssey, which can in a way be considered a space version of the singularity, or ones that can set, in a millisecond, a navigation strategy to the Earth from a galaxy a thousand light years away.

    If it can happen, it would have happened somewhere in the universe. And we have not seen them…hmmmm…

    That’s a little too much; by the same reasoning you would have to discard the existence of life in other systems.

    If something is possible, then, it will happen for sure within an infinite time period.

  19. Kar Lee says:

    Interesting that you invoke the possibility of life in other systems. By that I believe you meant intelligent life because it is close to certainty that we are going to find microbes outside of earth.

    Regarding intelligent life, I recently took my daughter to a popular science talk and the presenter basically concluded that there are only three possibilities why we have not seen them (aliens):
    1) We are alone (very unlikely, but the implication is profound)
    2) All intelligent lifeforms self-destroy at some point in their development before they can do intergalactic travel.
    3) They are already here all around us, but we are just too primitive to see them (think of Carl Sagan’s analogy: A primitive tribe using drum beating to communicate over “long” distance cannot perceive the radio wave carrying TV signals all around them).

    Applying the same type of reasoning, unless we believe earth is the first place to achieve “Singularity” in the entire universe (!!!), the only possibilities left are 2) and 3).

    Looking at our history, I think 2) is way more likely than 3).

    But it is an interesting mental exercise to think through what Chalmers goes through in the paper, especially if you like philosophy.

    Personally, I don’t think “singularity” is even remotely likely. There are just too many assumptions going into the speculation.

  20. Vicente says:

    Kar Lee, regarding 2): they self-destroy, or are simply destroyed by external causes.

    I also believe the singularity is extremely unlikely, mainly because I believe the AI development required is even more unlikely… or simply because, even if such is the case, humans will have a risk mitigation or risk hedging policy to prevent the possibility.

    But what I find really stupid is that anybody is worried about the singularity, when we already have “intelligent” agents, i.e. many of our fellow men, that really pose a threat to us, today, here, no need to wait. Maybe your neighbour is the singularity, or any guy with the capacity to handle nukes. Gen22AI is already here; his name is John Smith and he is coming right now into some city tube with a Kalashnikov, ready to blow some heads off. So please, all of you who worry about the future singularity, better to worry about the education of current humankind…

    The other singularity is the planet. As the song says, …mother nature sits on the other side with a loaded gun… We had better start taking care of the environment, and find sustainable energy production and use policies, or find another place to go…

    The singularity is already here; actually, it has always been inside us, no need to write a single line of code.

  21. 21. John Davey says:

    “Chalmers must have been exposed to too much sunlight in Arizona. It defies logic that someone so brilliant would deign to interact with a community that produces so many screeds of nonsense. The singularity idea is intellectual smut, and the authors of singularity-related books are smut-peddlers. Learning machines won’t help all by themselves either. The idea that humans are blank slates is as dead as Aristotle. Mental/computational operations work overtop some sort of language or set of representations. ”

    Brilliant! You put it much better than I ever could have done. Anybody who has spent a bit of time in the IT business knows what garbage IT futurologists such as Ray Kurzweil spout. It’s complete tripe. Computers have no autonomy, have never had it and never will have it, any more than motor vehicles or coffee machines. AI is a technology, not a path to silver space suits and rule by machines speaking in the usual monotone.

  22. 22. Vic P says:

    The people who foresaw the age of computing predicted that someday we would not need any more paper. However, they did not foresee the invention of ink jet and laser printers, so today we use more paper than ever.

    If computers somehow become attached to a complex motor system and a sense of pleasure and pain, I can foresee them “taking over”. Along the same lines, our economic models take on singularity characteristics. If the health care system is fed by a motive of pure profit, and people are constantly fed a poor diet which promotes hypertension, heart disease and diabetes, then the “patients” become fuel for the system.

    The one who predicted 9 feet of manure did not foresee the invention of the automobile. If no cars had been invented, then not only would we be buried in manure, but horses would be running our government.

  23. 23. Mike Spenard says:


    Been reading Sellars’s EPM the past few days, only the first couple of sections so far. But I am curious how you interpret him (he’s often vague; his writing seems to rest on outside material of his that I should be familiar with), and what you think of a few of his positions, e.g. sec. 10. This is off-topic, of course, so maybe email me? mikes @ signull.com

  24. 24. Charles Wolverton says:

    Mike –

    Good to hear from you. I’m delighted that you are attacking EPM. Yes, I also found it a tough slog the first time through (and much of the second time as well), but by now perhaps I can help make it a little less painful.

    Agreed that we should probably move this exchange off CE, but in order to keep it public in case anyone else is interested, I’ll respond on my (otherwise unused) blog:


    It will take me some time to compose a response, but I should have something posted by tomorrow, if not tonight.

  25. 25. Mike Spenard says:

    Great! I’ll take a peek at your blog again later. Much of what I’ve read so far from Sellars is along the same lines that I’ve marched on; his thinking certainly seems to be antecedent to much of what is said today. E.g. sec. 14, on standard observers and standard conditions for the “look” of a color, probably rings truer today than it did when he wrote that section. Secs. 8-9 were a little vague and confusing, mostly because my copy of EPM has a study guide that sometimes seems to leave me /more/ confused than Sellars’s own words do. So I’d love to hear your thoughts.

  26. 26. Ben Goertzel says:


    Here is a comment from a Singularitarian…

    Most of the folks replying to this post in the above comments seem to vehemently agree with the post, but I do not. I don’t agree with Ray Kurzweil on all details either but I suspect he’s got the basic story right.

    I’m not going to take the time to write a detailed refutation of the many points in the above blog post and comments that I find foolish or misguided, but I will point you to the following


    This is a draft version of an essay of mine that was published in “Artificial Intelligence Journal” in 2007, as a rebuttal to AI prof Drew McDermott’s critique of Kurzweil’s book “The Singularity Is Near.” McDermott’s critique was not identical to the ones made in the post and comments above, but had some detailed similarities and a generally similar spirit.

    And a brief response to Paul Bello’s comments above (BTW, I know Paul a bit in real life, due to some intersections at AI workshops and conferences): Paul, you say that “The real work in AI is identifying the right representations and building tractable inference engines that have human-like scaling properties and fault-tolerance. What any of the singularity nonsense has to do with what I just said is past me.”

    Hmmm, let me try to clarify, in case that’s a real question and not just a rhetorical flourish.

    Firstly, whether formal KR and inference is the right approach to AGI is a matter of opinion, and not all serious AI researchers agree with you on their centrality. (I happen to agree with you that they are an important part of the story, but I also respect the work of colleagues like Itamar Arel who take a more thoroughly subsymbolic approach.)

    Secondly, the relationship between nitty-gritty AI and cognitive architecture work and the Singularity is as follows. The Singularity is a projection of what may happen after someone succeeds at making a human-level AI, which then becomes an AI scientist itself and works on making an even smarter AI, etc. I don’t see any contradiction between projecting a likely Singularity, and doing concrete detailed work on AI systems.

    To me, that’s like seeing a contradiction between 1) predicting when humans are likely to live on Mars, and 2) working on the details of the systems that would allow humans to live on Mars. Where’s the contradiction?

    Finally, several commenters complain about the lack of a precise definition of general intelligence. One can formalize general intelligence in various ways, and I gave one approach in my paper “On a Formal Definition of Real-World General Intelligence” at the AGI-10 conference (you can find it online). But the Singularity is not a formal hypothesis, so IMO not all terms involved in its discussion need to be formally defined.

    — Ben Goertzel

  27. 27. Ben Goertzel says:

    In sum … I don’t think the hypothesis of a coming technological Singularity is proven by any means. But I don’t think that this blog post or the comments engage with the hypothesis remotely as seriously as it deserves; rather they seem to dismiss it on rather shallow and careless grounds — a poor match IMO for the very careful argumentation presented in Chalmers’ excellent paper….

  28. 28. Peter says:

    Thanks, Ben. At the risk of misrepresentation, I would pick two key points from that pdf which seem to me the most relevant here.

    First, Ben clarifies Kurzweil’s argument, saying he suggests that trends in brain scanning and computation mean that in the foreseeable future it will be possible to scan and then simulate the operation of an actual brain. I just doubt this. So far as I know, at any rate, no scanning technology exists that can tell whether an individual neuron is firing, and neurons and their states and interactions are much more complex than simply ‘fire or not fire’. I doubt whether reading off the states of single neurons in any detail will ever be practical. But we need agonising levels of detail. Think how poor a result it would be if we scanned a book and then said: well, this is not a precise scan of the contents, but it represents a good generic picture – it has about the right frequency of about the right kinds of characters in about the right kind of arrangement. Would it be a ‘working’ text? And the brain is much more challenging than a mere book. But of course, it’s bold to bet against technology, and I could live to be proved wrong.

    The second point arises from a McDermott contention with which I sympathise. This is: even supposing we have a simulated brain, it’s just another brain. That does not give rise to the spiral of cognitive improvement which the singularity requires. It does not give us, McDermott says, any principle of intelligence which would allow indefinite improvement. Ben denies that any principle is required, and offers the analogy of the scientific community, a self-improving intelligence in which each generation gives rise to the next.

    But is the current scientific community actually more intelligent than its previous generation, or just more knowledgeable? I think the point about the principle is that when we choose the path of brain simulation, we give up on understanding how the brain actually works (though that enquiry may proceed in parallel). We say, we’re not going to come up with a theory of how it works and then use that to inform a fresh design, we’re just going to copy a working example. But that inevitably means that when we have the working version we have no idea of how to improve it. Maybe we could do some easy things – make the simulated cortex bigger? But would that work, or cause problems? If it worked, would the bigger brain be any better than two, or ten, people working together?

    There is a lot more in the pdf, of course, all thoughtful stuff, and I may not have picked out the most salient bits correctly.

  29. 29. Vicente says:


    when we choose the path of brain simulation, we give up on understanding how the brain actually works

    – We cannot really do brain simulation unless we know how the brain works, at least to a certain extent. We could build systems that reproduce brain functions or do equivalent ones, but not simulate the brain. The first thing needed to simulate a real system is to have a logical model of it, mathematical and computable if possible. I don’t think we could just copy a working example; this is not like copying a bird’s wing to improve an aeroplane’s wing design.

    – Then again, we could know how the brain works in relation to many aspects of its function, and simulate it, and still know nothing about how consciousness comes to appear.

    To study the brain and try to understand it, and to design and build AGI systems, are different issues, although AGI could take advantage of any progress made in brain research.

    The point is: why should we consider the feasibility of all these predictions more realistic than any other science fiction story?

    I have the impression that one of the problems with AGI could be that it lacks sufficiently well-defined, credible short- to medium-term practical goals; it is trying to move ahead for the sake of it, and whoever doesn’t know where he wants to go always ends up somewhere else. The moment you leave ordinary robotics, everything becomes too foggy.

    This is not the case for research in fundamental physics, so the comparison presented in the pdf is a clear mistake and misconception.

  30. 30. Kar Lee says:

    I just finished reading your article. I am glad that you are so enthusiastic about the possibility of the Singularity happening sometime in the future. On the path towards this possible future Singularity, I must assert that we are already in uncharted waters: never has it occurred in history that some intelligent system successfully engineered its own replacement with higher intelligence than itself. If it is going to happen, it will be the first time it is accomplished. Therefore, all discussions are highly speculative. With that said, we can proceed with our discussion in the knowledge that we are all just speculating.

    To me, however, my hesitation in giving it more serious thought has to do with its unlikelihood. To me, the possibility of this singularity happening is like someone telling me there will be “a dinosaur walking in New York’s Central Park today at noon”. It is possible, but extremely unlikely. Will I then go on and speculate on what will happen if it does happen? Not unless I am trying to make a movie.

    Chalmers has laid out all the scenarios exhaustively, and it is a very well written piece. Towards the end of his paper he spends more time on uploading and mind-duplication thought experiments, which involve interesting reasoning about the nature of the mind, are interesting in their own right, and may not necessarily be related to the singularity at all. But as far as the singularity is concerned, the discussion provides no argument that enhances its probability, and so, from my starting point, the singularity is still extremely, extremely unlikely, and that makes the speculations about what will happen afterwards less interesting.

    Why is singularity extremely unlikely? I am not going to repeat what other people have already discussed. I am just going to focus on 4 points:

    1) What is intelligence?
    2) What is so special about human level intelligence?
    3) If something is more “intelligent”, so what?
    4) If the Singularity does happen, why haven’t we seen it?

    1) Intelligence is the ability to solve problems.
    For intelligence to manifest itself, there have to be problems to be solved to begin with. So, what exactly is the AGI supposed to solve? We have tons of problems to solve for ourselves: making money, running an organization, eating, avoiding pain, reproduction, trying not to die… all human problems. But if you are a piece of rock, you have no desires: you don’t need to make money, you have no reproductive needs, so what is your intelligence supposed to do for you? Help humanity survive? How about keeping the mold on you safe? So, what exactly is the AGI supposed to solve, in order to manifest its intelligence? For all we know, the computer-embodied AGI could turn itself off, take a nap for a thousand years, and wake up to take a look again, without consequences! It has no needs (unless its evil human creator programs some needs into it, which are guaranteed to be human self-serving). So, what is a human-level AGI? An AI that can solve human problems as well as a true human? Well, humans can suffer depression. Do you expect a human-level AGI to be able to cure a depressed human? Or do you define it as having the ability to bring itself out of a human-like depression, as a true human can? Or do you expect the AGI to be able to run a board meeting, smoothing out the relationship between two arguing directors, as a human can? Or is it supposed to understand that it is just a computer sitting on the table and is not supposed to get too friendly with a human board member? And that the way it says a sentence will have a very different effect on those board members than the very same sentence said by a strong male human, or by a 12-year-old kid? Does it know itself? What is intelligence? The ability to write better programs than those running in itself? Better programs in what sense? Which direction of evolution is “better” for it? Until we understand what intelligence entails, we are just talking.

    2) What is so special about human level intelligence?
    If we use IQ or EQ as a quantitative measure for human intelligence, no doubt we are going to get some distribution. No doubt some of us are “super-human”, and some of us are “inferior-human”. We already have “super-human” intelligence among us. Have those “super-humans” (I consider Richard Feynman one of them) successfully designed any super-Feynman yet? As a basis for discussion, if we humans were 10 times as smart as we are, we would still use this “human level” as a standard (with the 10X factor included). But why? What is so special about this ad hoc standard? My guess is that we can then claim, oh well, it is beyond us after that. Or, before AI reaches our level, we can do it ourselves. But this standard could be anywhere, and it does not really matter.

    3) If something is more “intelligent” than us, so what?
    We would all like to think we are much more intelligent than, say, a deer. But deer live their lives, and we live ours. Sometimes we get run over by a charging deer; sometimes a deer gets shot. If some AGI is more “intelligent” than humans, and if it happens to have the need to compete with humans (why would it need to compete with humans, and not with deer? Or does it compete with everything else?), what would have been the cause of that need? Are we worrying that we may create some super-intelligence that will destroy the universe?

    4) If the singularity does happen, why haven’t we seen it? (this is a repeat of my comment above)
    In terms of probability, it is extremely unlikely that earth is the place where the singularity is achieved first. Just think about putting all possible budding civilizations in a bag and randomly drawing them out one after another to see which one achieves the singularity first: it is extremely unlikely that yours will be the first one out of the bag. It is more likely that you will be in a later batch. The cumulative probability approaches unity as the bag is emptied out. It is far more likely that you will see most of the others come out before you do. So, if the singularity does happen, it will have come to us from some place other than earth. We have not seen it.
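    The bag-drawing argument is easy to sanity-check with a toy simulation. A minimal sketch (the number of civilizations N and the trial count are, of course, arbitrary assumptions made purely for illustration):

```python
import random

random.seed(42)   # fixed seed so the toy run is repeatable

N = 1000       # hypothetical number of budding civilizations (arbitrary)
TRIALS = 2000  # simulated "emptyings" of the bag

first_count = 0
for _ in range(TRIALS):
    order = random.sample(range(N), N)  # draw the whole bag in random order
    if order[0] == 0:                   # civilization 0 stands for "us"
        first_count += 1

# Analytically, P(we come out first) = 1/N = 0.001, so in almost every
# drawing we would see many other civilizations emerge before our own.
print("empirical P(first):", first_count / TRIALS)
```

    The empirical frequency stays close to the analytic 1/1000, which is the sense in which being first out of the bag is the exceptional case.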

    So, I conclude that the Singularity is a highly improbable event, and I don’t need to take it too seriously until someone shows me otherwise. For those who are emotionally drawn into this pursuit, no doubt it is an intellectual feast.

  31. 31. Vicente says:

    Kar Lee,

    Intelligence is the ability to solve problems.
    For intelligence to manifest itself, there has to be problems to be solved to begin with.

    Well, I would say that is only a part of human intelligence (maybe the most important one); understanding, creativity, empathy etc. are other manifestations of intelligence not “strictly” related to problem solving. How do you define a problem? Or a “real” problem?

    If, for example, you design and build a new musical instrument, are you solving a problem?

  32. 32. Vicente says:

    Kar Lee, what do you think of the following definition of human intelligence:

    It is the ability of an individual to optimise its happiness given its current conditions.

    it sends AGI to hell doesn’t it?

  33. 33. Kar Lee says:

    Glad that you made comment #32, because it answers #31, ha ha…
    Where did you get that? Wikipedia?

    Here is the definition from dictionary.com :
    “Capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.”

    The words “understanding”, “mental” and “meaning” immediately drive a wedge through our philosophers, splitting them into two camps: those who believe AI can “understand meaning”, and those who don’t.

    Now, back to your comment #31: I think inventing a musical instrument solves the problem of humans’ intrinsic need to tinker with the things they find, and their intrinsic need to appreciate and express themselves through making noise/music. I can imagine an AGI will have no need for this stuff, unless its human creator forces these requirements on it, making it very, very artificial: VVAGI.

    But the point is: the intelligence to make good music is very different from the intelligence to build a warp-speed starship. The intelligence to build a warp-speed starship may involve the intelligence to understand the fundamental principles (physics), as well as the intelligence to manage a group of intelligent people, so that these smart people are “motivated” enough to align their personal goals with the organization’s goal and build the starship, so that Captain Kirk or Picard can exercise his intelligence (risk-taking explorer) and run it right through a spinning black hole… 🙂 With so many objectives all lumped into the single term “intelligence”, what exactly do we expect an “intelligent” AGI to do? In what way is it intelligent?

    Just as a joke, it appears that the intelligence required to survive inside and outside of Apple is quite different.
    Outside of Apple, when the CEO says, “Jump.” You ask,”Why?”
    Inside Apple, when the CEO says, “Jump.” You ask,”How high?”

    I forget where I read it, but it suddenly came back to me… and what is the very intelligent AGI supposed to say? (Answer: “Steve, would you please step aside? I am smarter than you are! ha ha”)

  34. 34. Vicente says:

    Kar Lee, I believe the best approach to this fiddly problem is the one practised by R. Penrose. The question is whether it is possible to design algorithms to tackle a certain problem or not, which is at the core of the nature of computable systems. In other words, is the brain a computer?

  35. 35. Kar Lee says:

    It is hard to say whether the brain is a computer, because if one takes the view that a computer is something that responds with an output whenever it is given an input, then the brain is a computer. But is it algorithm-driven? Most likely it isn’t. Your point is well taken.

    It seems to me that all the talk about the upcoming singularity concerns an algorithm-driven computing singularity (otherwise how does an AI improve its next generation’s programming? Nobody seems to be suggesting that our meat brains will design a new version of the meat brain that can design yet another new version…), and I think that is where the arguments seem so weak.

  36. 36. John Davey says:

    Vicente/Kar Lee

    Nothing is a computer and everything is a computer, because computers are observer-relative and do not exist in nature at all. We can, however, take any material thing, let’s say my shoe, and I can say “my shoe represents the digit 0”. Thus my shoe is a computer. It’s not running a very interesting program, as I (unilaterally, you notice) decide it just represents one digit. It’s the program that produces only the output “0”, because that’s all I want it to do. I can do this with anything: TV sets, toilet rolls, even voltage levels on racks of silicon chips. As Searle has pointed out, the atoms in the wall next to you could just as easily be said to be running Microsoft Word, albeit with no outputs.

    Observer relativity takes the computer out of the ontology of matter and into the ontology of mathematics. Thus a brain cannot be a computer, as nothing can be a computer. However, a brain can be the object of token attribution by a separate observer, who can decide that its physical characteristics map to computational variables. That is why there is an ongoing argument in AI circles about how there must be a “brain within the brain”. The problem is that this leads to recursion, as the brain within the brain must also have a brain within ITS brain. Dennett et al have offered some kind of justification of this, but it can’t and doesn’t hold. The fact is the brain isn’t a computer, as nothing is a computer. The brain does what it does without an algorithm in sight, the same as everything else in nature. We can functionally decompose the force of gravity, for instance, and map its every cause and effect. But gravity is not a program; it is a phenomenon.

  37. 37. John Davey says:


    I also think that “intelligence” is a great flaw in the grander aims of the whole AI edifice. In my opinion it means something different for a computer program to be intelligent than for a person. A person’s intelligence reflects creativity, which suggests a certain boundlessness of possibility. As soon as a process is parameterised for the purposes of computation, limits are set, and this I believe to be incompatible with creativity.

    As a matter of software engineering, there is of course no difference between an AI program and any other kind of (allegedly) unintelligent program, other than the methodology of development and the level of expectation of the users. I worked on an “intelligent” neural net in 1988 – its goal was to win at noughts and crosses (tic-tac-toe). Would we regard that as “intelligent” today? I doubt it.

  38. 38. Vicente says:


    As a matter of software engineering, there is of course no difference between an AI program and any other kind of (allegedly) unintelligent program, other than the methodology of development and the level of expectation of the users.

    I quite agree, but what about a programme that could reprogramme itself in reaction to a changing environment? Would that be possible? Is it conceptually feasible? This is something that puzzles me. What could we mean by “reprogramme itself”?

  39. 39. Kar Lee says:

    The Artificial Neural Network is an interesting case. In one sense, an ANN is not an algorithm-driven program; in another sense, it is. One thing an ANN cannot do is create something that is not already contained in its training data. So it is not really intelligent in the human sense. It seems to me that an ANN is only good for pattern recognition; it can never be trained to “reason”. If you really want a program to do good old-fashioned reasoning, the algorithm-driven approach is still the only way.
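    To make the pattern-recognition point concrete, here is a minimal sketch (purely illustrative, not from the discussion above) of the simplest ANN, a single-layer perceptron: the “learning” consists entirely of nudging weights until the stored patterns are reproduced.

```python
# Toy perceptron learning logical OR from its four training cases.
train = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias

def predict(x):
    # fire (1) if the weighted sum crosses the threshold, else 0
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):            # perceptron learning rule
    for x, target in train:
        err = target - predict(x)
        w[0] += err * x[0]
        w[1] += err * x[1]
        b += err

print([predict(x) for x, _ in train])   # → [0, 1, 1, 1]
```

    It ends up classifying the four patterns it was shown, and nothing in the procedure resembles explicit reasoning: change the training table and the same loop obligingly fits whatever linearly separable pattern it is given.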

    But then, I have never seen an algorithm-driven program that can reprogram itself. Vicente, have you come across any example? To me, in normal programming, whatever you change has to be in the data portion of the program, never in the logic portion, which is usually called the core, the engine, or whatever people like to call it. All current computer programs go through branches of conditions anticipated by the programmer. Any unanticipated condition is a bug, from a programming point of view. This includes chess-playing programs. Any example to prove me wrong?

  40. 40. John Davey says:


    A program that reprograms itself is extremely common in the world of software. A large number of programs in production environments throughout the world do nothing other than write other computer programs. This allows programmers to create general solutions in situations where computers require more specific answers.

    For instance, we might want a program that says “move data from computer A to computer B”. This would be too general for a computer, so we would write a program that wrote other programs – for any combination of machine, database and other variable circumstances – in order to do it. In a past age this might have been regarded as intelligent, but of course it’s the norm now, so nobody does. Nonetheless, one could say that such a program is “adapting” to its environment.
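    A minimal sketch of the “general program writes specific program” idea (the names and the template here are purely illustrative, not any real tool): a general transfer request is answered by generating the source code of a specific routine and then loading it.

```python
# A general "move data from A to B" request is answered by generating the
# source of a specific transfer routine, then loading and returning it.
TEMPLATE = """
def transfer(records):
    # generated for: {src} -> {dst}, field delimiter {delim!r}
    return [{delim!r}.join(str(field) for field in row) for row in records]
"""

def generate_transfer(src, dst, delim):
    source = TEMPLATE.format(src=src, dst=dst, delim=delim)
    namespace = {}
    exec(source, namespace)          # the program "writes" and loads a program
    return namespace["transfer"]

mover = generate_transfer("computerA", "computerB", ",")
print(mover([(1, "x"), (2, "y")]))   # → ['1,x', '2,y']
```

    Note that every routine this can ever generate is already implicit in the template the human wrote.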

    What is the environment? This is a vague word beloved of AI gurus, but it is unsatisfactory. I would say that if you view your environment as static, then you could not be intelligent, as it is the view of what constitutes the features of one’s environment that is the key to adaptability and “intelligence”. But, to put it crudely, you don’t know what those features are unless you do. And the problem with coding the features of an environment is that the very act of coding makes them limited. The very act of making something programmable makes it unadaptable: unintelligent. That, I suspect, is the irony of the whole thing: programs are by definition stupid. One of the defining features of the process of making software is the need for “encapsulation” or definition. This, however, runs contrary to the spirit of intelligence and adaptability by permitting no variation in fundamental shape. In short, if you want to make something unintelligent and unadaptable, turn it into a program.

  41. 41. John Davey says:

    Kar Lee

    All the implementations of neural networks that I have seen have been in software – therefore “algorithm driven”, as you might say. How they work is always based upon some idea about how they should work. Even a firmware or hardware implementation would require an algorithm in any case. You just can’t set up a network of artificial neurons without knowing what you expect them to do.

    ‘Program’ is a generic term – to what extent you can change one dynamically depends upon the operating system you are using. However, a program that calls other programs can change its logic by rewriting those other programs. This would be simplest in a Linux/Unix shell or a Windows batch file. That would be the usual practice in *NIX and production NT environments.

    Even a non-batched program can do it, of course. A basic Linux (or, I assume, Windows) program could make a system call within itself to rewrite itself and overwrite its own file, and then re-invoke itself from within its own program. It is not difficult, though simpler with batch files.
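    The overwrite-and-re-invoke trick can be sketched in a few lines of Python (a purely illustrative toy, with an arbitrary generation limit so it terminates): the generated script rewrites its own file with a modified copy and then re-runs itself.

```python
import os, subprocess, sys, tempfile

# The script below rewrites its own file (bumping GENERATION) and then
# re-invokes the rewritten copy, stopping at a fixed limit.
SCRIPT = '''\
import re, subprocess, sys
GENERATION = 0
LIMIT = 3
if GENERATION < LIMIT:
    src = open(__file__).read()
    src = re.sub(r"GENERATION = \\d+", f"GENERATION = {GENERATION + 1}", src)
    open(__file__, "w").write(src)              # overwrite our own file
    subprocess.run([sys.executable, __file__])  # re-invoke the rewritten self
else:
    print("settled at generation", GENERATION)
'''

path = os.path.join(tempfile.mkdtemp(), "rewriter.py")
with open(path, "w") as f:
    f.write(SCRIPT)
subprocess.run([sys.executable, path])   # prints: settled at generation 3
```

    Each “generation” is a genuinely rewritten file on disk, yet the whole trajectory was fixed in advance by the regex and the limit its author chose.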

  42. 42. Kar Lee says:

    Programs that write other programs are common; a compiler is an example. But a program that rewrites a subroutine that it will itself call later does so because the original human programmer foresaw the conditions under which the master program should rewrite those subroutines according to some rules. Those conditions must have been specified in a data structure or something of that nature. But in the real world, have there been examples of programs rewriting (meaning creating, using some rules) their subroutines free of human intervention? This is not the same as a master program implementing a human-written upgrade, but creating the new code on its own. If it has been done, my question is: why are those changes – anticipated by the human designer, since they were coded into the master program – not embedded in the original program, so that they can be triggered by the right condition later through a change in a data structure? Why do it dynamically? To save memory space and minimize executable size? OK, I can imagine that being done in firmware, where memory size is a concern. But on a PC or Unix?

    Even if it is done dynamically, there are only a finite number of ways the new subroutine can be generated, all anticipated by the human programmer and coded into the master program in the first place.

    I guess I am just echoing your point that “programs are by definition stupid”.

  43. 43. John Davey says:


    You have hit the nail on the head. No program ever changes in a way a human programmer has not designed it to.

    You cannot avoid the fact that an AI program is just a computer program, and that means it starts – always – with a human designer. Programming computers is a human activity and always will be, for the same reason that writing books always will be: because books, like computer programs, cannot exist in nature. Any process of iteration-cycled ‘evolution’ it may undergo is irrelevant. The scope of what a program can do is finite, by the very necessity of the fact that it must be coded in the first place, which means limiting the program to what the programmer understands about the world at that point in time. There can be no unknowns from the outset. This is probably one of the reasons why AI has been a project of limited success (or should I say, limited when compared to the founding ambitions of Minsky et al).

    (There are many reasons you might want to use ‘dynamic software’. The most common is where you do not know all the parameters involved in your job until run-time. To code all the possibilities into one ‘master program’ would be horrendously ugly and might even be impossible.)

  44. 44. Vicente says:

    Kar, John, you have just described what I meant in #38; I was definitely not talking about CASE tools and the like…

    So, is it possible to write a programme that programmes?

    I believe the same applies when we refer to a programme that learns. All it does is adjust and tune parameter sets, doesn’t it?
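    The parameter-tuning point is easy to make concrete. A minimal sketch of a “learning” program (plain gradient descent on a single parameter, invented purely for illustration):

```python
# The program "learns" y = 2x from examples, yet its logic never changes:
# the only thing the loop ever modifies is the numeric parameter w.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # secretly y = 2x

w = 0.0     # the single tunable parameter
lr = 0.05   # learning rate
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))   # → 2.0
```

    All the adaptation lives in one number; the code that does the adjusting is as fixed as any other program.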

  45. 45. John Davey says:

    “So, is it possible to write a programme that programmes? ”

    “Programming” is a human activity. So no, it is not possible.

    Is it possible to write a program that can do anything a human can do? Pretty much, as long as that activity is well defined, encodable and static, broadly speaking.

  46. 46. a.c.s. says:

    It seems a bit naive to think the singularity will consist of computers programming “AI” and redesigning chips. I mean really, that has all the rational underpinnings of cheap sci-fi from the ’80s. Don’t hurt yourself thinking so hard about whether programming is a word worth ascribing only to a human activity, or whether the robots will have feelings enough to keep us as pets.

    What is a pet? A stand-in mate or child for whom the world exists beyond its limited agency?

    There is a network of computers called the internet. No, let me begin again: a network of infrastructure reaching deep into the earth and sucking up fossil fuels and nuclear materials to power itself, like one vast slime mold of cities splayed out in the night and glowing over the continents that sustain us, our industry and our deep impact upon the entire biosphere. This thing, as a recent whim, allows a vast number of human minds to conspire at ever-increasing velocity. Right here, amid this coincidence too miraculous to question, you wonder whether this singularity business is worth the trouble of thinking in writing right here upon it.

    What will it look like?

    Understanding the singularity is understanding the importance of information processing to evolution. Technology is not a means to an end; it’s an ongoing life process. It is much bigger than it’s being made out to be here – humanity is only a recent and arbitrarily applicable development in the game.

  47. 47. Our Daily Lives in the Progressive ‘Singularity’ says:

    […] set of rules, and more than likely a different agenda altogether. Peter on consciousentities.com enters the Singularity conversation with a number of thought provoking criticisms along these same lines. I believe it may be a stretch […]

