The Singularity

Read this first: it's all here for the 'layman': http://yudkowsky.net/tmol-faq/meaningoflife.html (and it's pretty darn important to your life).

Yudkowsky later called the essay linked above the greatest mistake he ever made. Unfortunately he doesn't seem to believe in taking such things down, but there's a nifty warning at the top. Oh, the singularity is still a Big Deal, but if you think AIs would *automatically* be nice to us, you're sadly (and fatally) deluded.

Also, http://www.aleph.se/Trans/Global/Singularity/ , http://www.singularitywatch.com/


Someone said: A millennialist religious idea expounded by some followers of TransHumanism.

I disagree: Millennialism is the belief that just because we have reached a round number in an arbitrary base-ten number system, something very significant is going to happen. Also, TransHumanism is pretty dismissive of religion. Read the Meaning of Life link above to see what I mean.

Millennialism is often applied to cases where the final number isn't round, and I think the gist of the comment was that transhumanism itself is a religion, at least in such extreme forms (in which case it's obviously going to reject other religions). Compare the guide linked above.


The basic gist:

At some time in the not-too-distant future (a lot of people and cultures believe [http://www.2013.com 2012]), technological change will accelerate so much, and humanity will be so linked together and so able to communicate with itself, that in a timeless moment of transcendence we will pop into a higher plane of existence in a puff of boundless optimism.

The idea has antecedents, especially TeilhardDeChardin's OmegaPoint?; but the modern version is generally credited to the ScienceFiction of VernorVinge.

Actually, the gist of VernorVinge's essay on TheSingularity http://www.ugcs.caltech.edu/~phoenix/vinge/vinge-sing.html, and of recent works of "speculative fiction" by RayKurzweil? ISBN 0140282025 , HansMoravec? ISBN 0195136306 and KenMacLeod ISBN 0765305038 , seems to be that we will simply be surpassed and then abandoned? ignored? tolerated? eliminated? by our superintelligent machine progeny. Here's an excerpt from Vinge's essay:

In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.

Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind's "tool" -- any more than humans are the tools of rabbits or robins or chimpanzees.
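
Good's argument is essentially a feedback loop: each machine designs a slightly better successor, and the improvement compounds. Here is a toy sketch of that loop in Python, just to make the shape of the argument concrete; the growth rule and the constant k are made-up assumptions for illustration, not anything from Good or Vinge.

 # Toy model of Good's "intelligence explosion" feedback loop.
 # Assumption: a designer of capability c produces a successor of capability
 # c * (1 + k * c), i.e. smarter designers make proportionally bigger improvements.
 def design_successor(capability, k=0.2):
     """Capability of the machine that a machine of this capability can design."""
     return capability * (1 + k * capability)
 capability = 1.0  # the first roughly human-level machine
 for generation in range(1, 11):
     capability = design_successor(capability)
     print(f"generation {generation:2d}: capability {capability:,.1f}")
 # Because the feedback is superlinear, the numbers run away within a few
 # generations -- Good's "last invention that man need ever make".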

Opinions differ:


As a big Vinge fan, I have to say that I believe his inclusion of The Singularity in 'Marooned in Realtime' was primarily so that the storyline was possible and not due to any particular belief in it or any specific concept of what it was. Characters in the book were explicit on the point that it wasn't intended to be anything specific, though many possibilities were raised. Of course, it is a legitimate concept; I just felt that this comment was worth making. -- DanielKnapp

He's been writing about the Singularity in one way or another for some time. In one of the explanatory essays in True Names and Other Dangers (a collection of his Singularity-related short stories), he talks about a story he was working on for John Campbell about some kind of super-intelligence. Campbell told him he couldn't write that story, because he simply couldn't grok that much intelligence. Since then he has focused on stories in which for some reason or another the characters are near but not over the line of the Singularity.


The books of IainBanks explore life after The Singularity, and how people interact with such ultraintelligent machines.

Singularity, The. The Techno-Rapture. A black hole in the Extropian worldview whose gravity is so intense that no light can be shed on what lies beyond it. From Godling's Glossary by David Victor de Transend.

The Singularity is a common matter of discussion in TransHumanism circles. There is no clear definition, but usually the Singularity is meant as a future time when societal, scientific and economic change is so fast we cannot even imagine what will happen from our present perspective, and when humanity will become posthumanity. Another definition is used in the Extropians FAQ, where it denotes the singular time when technological development will be at its fastest. Of course, there are some who think the whole idea is just technocalyptic dreaming. -- sg

Indeed, singularity is not just about technology. TeilhardDeChardin: "Someday, after we have mastered the winds, the waves, the tides and gravity, we shall harness for God the energies of love. Then, for the second time in the history of the world, [hu]mankind will have discovered fire." Further, to speak of "the" singularity is pretty us-centric. The transition from non-life to life (in de Chardin's terms, geosphere to biosphere) is a shift as profound as anything we are approaching. (Well it would be, except that as far as we can tell, life ought to be abundant anywhere there are planets of the appropriate size and temperature. Certainly organic molecules are abundant in interstellar space.)


I think that a more likely outcome than The Singularity is what one might call "The Dissolution": we will be more and more dependent upon technology and it will become more and more complex until, finally, it will all collapse. Some essential system will crash, and everything else will crash or malfunction as a consequence. It will be impossible to get the whole thing running again, because it will be so complicated, decentralized, and interdependent that there is no correct order for starting it all up again piece by piece.

So we'll just have to start over again with fire, the plow, and the wheel.
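
In dependency-graph terms, the claim that there is "no correct order for starting it all up again piece by piece" amounts to saying the dependencies among subsystems contain cycles, so no topological startup order exists. A small sketch with purely hypothetical services, using Python's standard graphlib module (3.9+):

 # Hypothetical post-collapse restart problem as a dependency graph.
 from graphlib import TopologicalSorter, CycleError
 # Each service maps to the set of services it needs before it can start.
 # The cycle power -> logistics -> network -> power makes a piecewise restart impossible.
 dependencies = {
     "power_grid":     {"fuel_logistics"},
     "fuel_logistics": {"network"},
     "network":        {"power_grid"},
     "factories":      {"power_grid", "network"},
 }
 try:
     print("restart order:", list(TopologicalSorter(dependencies).static_order()))
 except CycleError as err:
     print("no valid restart order; circular dependency:", err.args[1])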

The idea that superhuman intelligence will just inevitably "happen" as a natural consequence of technological progress seems ludicrous to me.

-- KrisJohnson

If you replace "technology" with "biology" does it sound so ludicrous? Is there a difference? -- AndyPierce

I'm not sure about the timeline for the development of the notion of TheSingularity, but TerenceMcKenna wrote quite a bit about it in his books. The view he held was neither technologically nor biologically based; it stemmed from the observation that human knowledge has been growing at an exponential rate, and posited that, based on that rate of increase, there will be some point (forecast for [http://www.2013.com 2012]) where the rate of knowledge increase will exceed the passage of time. This is the moment at which humankind could possibly enter a new phase of existence. It's a neat idea, and one that you can have lots of fun speculating about. It could be a point at which our notion of time is redefined to a more minute scale, it could be the point at which we begin to perceive more than four dimensions, it could be TheRapture?, etc.... Fun! Some information regarding this idea from TerenceMcKenna is at http://www.sculptors.com/~salsbury/Articles/singularity.html -- SeanMcNamara

Clearly, knowledge will not grow exponentially for very long.

Clearly, population will not grow exponentially for very long either. Knowledge and people are related: knowledge is assimilated by people, who by necessity must specialize given the rapidity of development. Thus knowledge cannot really grow any faster than an assimilating population (unless you are talking about knowledge as AutomatedIntelligence).

Certainly, a possible outcome is that we won't be able to assimilate knowledge, so the rate of new knowledge will slow. This seems quite plausible from a behaviorist/synthesist's point of view. If, however, you consider the possibility of a collective mind from which knowledge is revealed, then the possibility of AutomatedIntelligence exists. This is an interesting possibility, since if we accept time to be a dimension that we merely perceive as serial in nature, then everything that ever will be invented has already been invented. If this is the case, then wouldn't this be grounds for the collective consciousness whence all thought springs? I'm not saying any of this is the case, only that it is fruitful ground for speculation.
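
For what it's worth, the difference between "knowledge grows exponentially" and "the rate of knowledge increase will exceed the passage of time" is the difference between exponential and hyperbolic growth: only the latter reaches a mathematical singularity in finite time, and a limit on how fast a population can assimilate knowledge turns the curve logistic instead. A rough numerical sketch, where every constant is made up purely for illustration:

 # Compare three toy growth laws for "knowledge" K; constants are illustrative only.
 def simulate(rate, k0=1.0, dt=0.01, t_max=20.0, cap=1e12):
     """Euler-integrate dK/dt = rate(K) until time runs out or K explodes."""
     k, t = k0, 0.0
     while t < t_max and k < cap:
         k += rate(k) * dt
         t += dt
     return t, k
 growth_laws = {
     "exponential": lambda k: 0.5 * k,                  # steady compounding; big, never infinite
     "hyperbolic":  lambda k: 0.5 * k ** 2,             # blows up in finite time (a true singularity)
     "logistic":    lambda k: 0.5 * k * (1 - k / 100),  # capped by what people can assimilate
 }
 for name, rate in growth_laws.items():
     t, k = simulate(rate)
     print(f"{name:11s}: K = {k:10.3g} at t = {t:5.2f}")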


As may have been gathered by now there are many varieties of "the Singularity". The simplest and most brutally literal, harking back to Vinge's stories, is "the point at which science fiction authors stop being able to tell stories"; this is correlated with the rise of greater-than-human intelligence. This idea goes back to LarryNiven, actually, who had superhuman Pak protectors (but with extremely constrained values) and maybe some stuff in A World Out of Time.

All the stuff about rates of progress going to infinity or analogies to mathematical functions are, in my opinion, from people taking "Singularity" too literally and trying to do something mathematical with it. This is just wrong.

KenMacLeod's The Sky Road has a brief bit about being post-Singularity; this plus the Culture show a usage referring as much to the presence of nanotech and biological immortality as to superhuman intelligence. Basically the "whoa, weird" point.

There are questions about whether we'd notice the Singularity happening if we lived through it -- some people prefer the term "Horizon", something which recedes as you approach it; the term also echoes "event horizon" and RobertHeinlein's BeyondThisHorizon?. Others wonder whether a Singularity hasn't already happened, given the acceleration of scientific progress since 1800.

-- DamienSullivan? (author of the VernorVinge page)


SL4 Wiki is specifically intended for Singularity-related discussion: http://sl4.org/wiki


Could somebody tell me what makes singularitarians think that increasing intelligence will make it easier to invent a "correct" meaning of life?

The meaning of life is something people think about. Increasing "intelligence" helps people think better about everything. Therefore, increasing "intelligence" helps people think better about the meaning of life. Yes?

From http://yudkowsky.net/tmol-faq/meaningoflife.html :

The more intelligent you are, the better your chance of discovering the true meaning of life, the more power you have to achieve it, and the greater your probability of acting on it (16).

This assumes that life has meaning. My vast and superior intelligence has deduced that life is meaningless, aside from the meaning we assign to life in order to amuse ourselves.

You may well have just reached a local maximum, and can't get to the real answer until your search strikes far enough in an unlikely direction to find a steeper climb.

...

# 16: Some people disagree with that last part. They are, in fact, wrong (17). But even so, very few people think that being more intelligent makes you intrinsically less moral. So, when you run the model through the algebraic goal system, it's enough to create the differential of desirability that lets you make choices (see below).

# 17: Intelligence isn't just high-speed arithmetic, or a better memory, or winning at chess, or other stereotypical party tricks. Intelligence is self-awareness, and wisdom, and the ability to not be stupid, and other things that alter every aspect of the personality.
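
The "local maximum" remark above is the usual hill-climbing picture: a greedy search stops on the nearest peak unless it occasionally jumps somewhere unlikely. A minimal sketch -- the fitness landscape and restart scheme here are arbitrary illustrations, not anything from the FAQ:

 import random
 # A 1-D "fitness landscape" with a low hump near x = -1 and a higher one near x = 3.
 def fitness(x):
     return -((x + 1) ** 2) * ((x - 3) ** 2) + 5 * x
 def hill_climb(x, step=0.05, iters=2000):
     """Greedy local search: only ever accept small moves that improve fitness."""
     for _ in range(iters):
         candidate = x + random.uniform(-step, step)
         if fitness(candidate) > fitness(x):
             x = candidate
     return x
 random.seed(0)
 single = hill_climb(-2.0)  # starts near the low hump and stays stuck on it
 restarts = max((hill_climb(random.uniform(-5, 5)) for _ in range(10)), key=fitness)
 print(f"single climb:  x = {single:5.2f}, fitness = {fitness(single):6.2f}")
 print(f"with restarts: x = {restarts:5.2f}, fitness = {fitness(restarts):6.2f}")
 # Only the search that sometimes starts "far enough in an unlikely direction"
 # finds the steeper climb.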


Could somebody tell me what makes singularitarians think that utilitarianism is the ultimate value philosophy?

Why do you think we think that?


The webcomic A Miracle of Science (http://www.project-apollo.net/mos/) depicts two possible aspects of TheSingularity: the Martian 'collective intelligence' on the one hand, and the rise of MadScientists? (see ScienceRelatedMemeticDisorder) on the other. - JayOsako

Someone here who has read TheSingularityIsNear and wants to elaborate on this nearly empty page? -- fp

Sure: I don't think that there is (or will be) a singularity of that kind. My argument runs the same way as on SimulationArgument: I think that there are (and we will discover) inherent complexity limits, due to the energy required to sustain that complexity, which make the hypothetical "planet computers" etc. infeasible. Just have a look at our MostComplexSystemThatEverWorked?. I think, though, that AI is possible -- after all, BI is possible, so there are no theoretical complexity limits for that.
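
One concrete form of "complexity limits due to energy" is Landauer's bound: erasing a single bit of information at temperature T costs at least k_B * T * ln 2 of energy. The commenter above doesn't name Landauer, so treat this only as an illustration of the kind of physical limit being gestured at; the workload size below is invented.

 import math
 # Landauer's bound: minimum energy to erase one bit at temperature T.
 k_B = 1.380649e-23                      # Boltzmann constant, J/K
 T = 300.0                               # room temperature, K
 joules_per_bit = k_B * T * math.log(2)  # ~2.9e-21 J
 # Invented workload: 1e40 bit erasures, the sort of scale "planet computer" talk invokes.
 erasures = 1e40
 total_joules = erasures * joules_per_bit
 print(f"minimum energy per bit erasure:   {joules_per_bit:.2e} J")
 print(f"minimum energy for 1e40 erasures: {total_joules:.2e} J")
 # ~3e19 J -- very roughly a few percent of humanity's yearly energy use (~6e20 J),
 # and the bound scales linearly, so arbitrarily complex computation is not free.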


Robots are probably gonna kick our ass because they have one advantage over biological systems: they can copy the best minds instead of having to start over again with each new generation.

How could we fix this? Thoughts?


CategoryFuture

