Drakes Equation

Drake's equation

N = R * fs * fp * ne * fl * fi * fc * L

N is the number of intelligent civilizations able to communicate within our own galaxy.

http://markelowitz.com/drakeeqn.htm (was http://www.seds.org/~rme/drakeeqn.htm) has a Drake equation calculator.
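
A minimal sketch, in Python, of what such a calculator does, using the form of the equation written above. The page doesn't define the terms beyond N, so the glosses in the docstring are only the usual readings (with fs taken as a "fraction of suitable stars" factor), and the numbers plugged in at the bottom are purely illustrative guesses, not estimates endorsed anywhere on this page.

  # Minimal Drake-equation calculator, mirroring N = R * fs * fp * ne * fl * fi * fc * L.
  # All example values below are illustrative guesses only.

  def drake(R, fs, fp, ne, fl, fi, fc, L):
      """R  - rate of star formation per year
         fs - fraction of those stars that are suitable (assumed reading of fs)
         fp - fraction of suitable stars with planets
         ne - habitable planets per planetary system
         fl - fraction of those on which life arises
         fi - fraction of those that evolve intelligence
         fc - fraction of those that develop detectable communication
         L  - lifetime of the communicating phase, in years"""
      return R * fs * fp * ne * fl * fi * fc * L

  # Illustrative only: tweak any factor and watch N swing by orders of magnitude.
  print(drake(R=10, fs=0.5, fp=0.5, ne=2, fl=0.3, fi=0.01, fc=0.1, L=10_000))  # ~15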

We know N is at least 1, because we are here.


Some people like to make the tautological nature of this equation more explicit:

N = Ns * fp * ne * fl * fi * fc * L / Tg

N is the number of intelligent civilizations within our own galaxy that we can receive messages from. Here Ns is the number of stars in the galaxy and Tg is its age, so that Ns / Tg stands in for the star-formation rate of the first form.

This equation is intended to be true by definition. Any disagreement over N can be entirely traced to disagreements in the estimates of one or more of the terms of the equation.


(Culled from LamentForTwoThousandOne? - Richard is basically espousing a particular set of coefficients in DrakesEquation: very small ne, small fi, and a short L, which leads to N=1.)

As for intelligent species out there, I do not believe there are any in our galaxy. My reasoning is the following. Let us call X the likelihood of an intelligent species evolving and scale it so that X = 1 implies that we could contact one intelligent alien species in the very near future (assuming unlimited funding to SETI). X << 1 implies there can be no contact, so we need not consider it further. X >> 1 implies that there are many intelligent species out there. If that is so then it is exceedingly unlikely that all of these species evolved coincidentally with or after us. It is likely that at least one species evolved significantly before us. The problem is that 'significant' in evolutionary terms is counted in millions of years.

Now, if there is some intelligent species with a million year head start on us (and this species has not annihilated itself, since we only count species that survive in X) then this species would have developed fast (near light-speed) space travel a long time ago. However, this would violate the FermiParadox, which predicts that we would easily observe the existence of any such species by, for example, seeing the extinction of light from stars all over the galaxy as they construct DysonSpheres. So in order for an alien intelligence to exist in our galaxy, X would have to be near 1. If the distribution of X is Poisson then the hump must surely be to the left of 1. Now, either the right edge of the hump is near 1 so that X is likely near 1, or it is very far to the left of 1 so that X is likely near 0. It seems exceedingly arbitrary that X should be distributed so that it falls just before 1. So it is very likely that there aren't any intelligent aliens in our galaxy.
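
One way to make the "it would be a coincidence for X to land near 1" step concrete: if our uncertainty about X spans many orders of magnitude, only a thin slice of that range falls anywhere near 1. The sketch below uses a log-uniform prior over ten orders of magnitude, which is an arbitrary illustrative assumption (not the Poisson framing above).

  import math

  # Illustration only: suppose our ignorance about X spans ten orders of magnitude,
  # i.e. log10(X) is taken as uniform on [-8, 2]. The prior weight landing "near 1"
  # (here, between 0.5 and 2) is just the width of that slice in log space.
  lo, hi = -8.0, 2.0
  near_one = (math.log10(2.0) - math.log10(0.5)) / (hi - lo)
  print(f"P(0.5 <= X <= 2) under this prior: {near_one:.3f}")  # about 0.060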

I believe that if we survive to space travel then we will be the first technological species to have done so. It is much more likely that the first alien intelligence we contact will be from a different galaxy and I don't believe that humanity will exist (as biological creatures) for another million years.

The equation at the above site has two missing coefficients. One is the probability of a Jupiter-type planet at a distance from its star analogous to our own Jupiter's. The other is the probability of a giant moon around the candidate planet. Both of these are crucial to the development of a civilization, and the presence of a giant moon isn't bloody likely. Jupiter protected the Earth from asteroids to such a degree that no intelligent species could have evolved without it. The moon stabilized Earth's axis, thus protecting any indigenous species from climatic extremes (much more extreme than mere ice ages).

Two more points. I do not believe it is likely that civilization will survive on our planet, and no additional technology will increase our chances of survival. In fact, every additional technology increases our chances of self-annihilation. We had better not have any contact with aliens, because we would immediately self-annihilate. To actually understand why this is so, one has to have an idea of the psychology motivating warfare. -- RichardKulisz


I agree with Richard's basic analysis. Some points I want to add:

-- StephanHouben


Why do you believe that the existing civilizations would all be creating DysonSpheres? Or, for that matter, engaging in any other behavior that we could observe with our puny observational powers? The universe is a pretty big place. The engineering necessary to construct a DysonSphere seems unreasonably difficult -- in fact, any change which would significantly affect the output of a star is a pretty awesome feat of engineering. Why would more than a minuscule portion of intelligent alien races (say 2-3 per galaxy the size of the Milky Way) do such a thing? And if it's only that many, then I assert that there are at LEAST 2-3 stars in the Milky Way whose behavior is not fully understood by science... so maybe it IS out there, but we don't know how to interpret it.

-- AnonymousDonor

See DysonSphere for reasons why they're such a good idea, and for "easy" methods of building them.

Consider that many people believe MolecularNanoTechnology will be in our grasp within the next half century. When talking about alien civilizations in the galaxy, the time scales involved have a granularity of millennia. From radio waves to nanotech takes too little time to even be measurable on those scales. If we ever did receive radio signals from an alien civilization, it would be proof that either 1) a civilization has been annihilated, or 2) they will be at our doorstep within a couple of centuries. And if we weren't ready by then, I doubt they'd pause to consider whether we are sentient before annihilating us.

And this is assuming the existence of alien civilizations. If you want to discuss their possible existence then you have to consider things on time scales of millions of years. With that kind of granularity, a species evolves sentience and completely takes over the galaxy within a single tick of the clock. -- rk

What makes you think that a starfaring species is still susceptible to population pressures? How come they have AI and MolecularManufacturing? and yet can't figure out how to maintain ZeroPopulationGrowth?

Actually, I don't think an organic species will survive long after the invention of AI. But assuming they do, what do you think they will do with their time? In advanced countries, having children is strongly discouraged and punished in favour of keeping around a bunch of obedient wage slaves. This is an artificial situation that can't survive the invention of AI or nanotech. Granted, it takes a much greater investment of time and effort to raise children semi-correctly as is done in the advanced countries than to just brutalize them, but that investment is much easier to make if you're free from wage slavery.

The quick answer to your question would be that they can but they don't have to. ZeroPopulationGrowth is not desirable unless it's necessary. Why?

If a society survives then it has goals and desires. Unless those goals are contrived, it will be easier to accomplish them if there are more people to do them.

How about the goal of living within the means of your environment?

That's not a goal, it's a limitation. Unless you're hiding some goals in the term "living".

Depends on how you look at it. Contrast "preserving wetlands and biodiversity" with "limiting urban sprawl".

It all depends on your motive (ie, your actual goal) behind preserving the biosphere. Do you want to do it because we need it for our continued survival? Then it's a limitation. Do you want to do it because it's beautiful, restful, neat and funky? Then why not create other beautiful, restful, neat and funky things? Do you want to preserve it just because it's there? That's a null goal equivalent to the goal of achieving nothing. I think that most people's motives fall into the first and last categories.

Perhaps, because it is not correct to obliterate other entities? And, neither you nor anyone else can create something as beautiful, restful, neat and funky as nature can. -- CHergerThomann

Lice, rats, roaches, leeches and pestilence. Need I say more? And see what I wrote at the end of the page about simulating a planet.


I would like to point out that many counter-arguments given to Richard's position are of the form: "Yes, but perhaps they all (decide not to build Dyson Spheres|have zero population growth|wear Mickey Mouse pyjamas)"

In all cases, the well-known counter-argument is that it suffices for a single species to violate the condition, and then we should be able to detect them.

-- StephanHouben

Well, unless we can demonstrate that every species which does not attain ZPG also self-destructs in short order. Which is not implausible.

No, you'd also have to demonstrate that all species of AI created by every organic species also attain ZPG and that is impossible.

Why is the distinction between an "organic species" and AI important?

Because AI

  1. can travel at the speed of light (albeit dormant)
  2. are immortal
  3. are non-corporeal
  4. are omnipotent in their informational environment
  5. are telepathic, able to send and receive thoughts directly between each other
  6. can form distributed super-minds
  7. can split up their own minds into physically separate pieces
  8. are capable of instantly cloning their minds and not merely their bodies
  9. operate on time scales at least a million times shorter than humans (nanotech also operates on this time scale)
  10. are infinitely malleable up to potential omniscience (ie, can learn without limit)

Even a single AI grows if given more space and as soon as it's reached the practical limits imposed by the speed of light, it can just create copies of itself or even just split itself up. Organics will never be able to do anything like that. An AI can even outrun a nuclear blast as long as there's a place for it to run to.

The best description of an AI community I've ever seen is a description of the Q Continuum (from StarTrek) in a very interesting story called Only Human. Think about it: the Q are immortal, non-corporeal, omnipotent, potentially omniscient and telepathic. Those are the exact qualities of AI.

If the speed of light is the limiting factor for AI, then any AI is going to be trying to get physically smaller rather than bigger in order to minimize this limitation.

What they'll try is to make the hardware they run on smaller but this has limits. The AI itself wants to become as big (as knowledgeable and intelligent) as possible. The reason the speed of light is a limit is because all the different parts of the AI have to be in some form of communication with each other. The less compact an AI becomes, the more its internal communications are delayed and thus the slower its stream of consciousness. However, with a telepathic society there can be different levels of consciousness through the creation of a super-mind (a central database of shared knowledge) from separate AI or the splitting up of one AI into separate pieces (loosely coupled modules). If I were an AI, I could envisage splitting my mind up so the different pieces pursue my varied interests simultaneously. The only trick is to modify the parts so they want to rejoin eventually.
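
A rough sense of why compactness matters: the internal round-trip signal delay of a mind scales with its physical size divided by the speed of light. The sizes below are arbitrary examples, chosen only for illustration.

  C = 299_792_458.0  # speed of light in m/s

  # Internal round-trip delay for a "mind" of a given physical extent.
  for label, size_m in [("10 cm chip", 0.1),
                        ("Earth-sized (1.27e7 m)", 1.27e7),
                        ("one light-second across", C)]:
      print(f"{label}: {2 * size_m / C:.3g} s round trip")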


This is rather bizarre. Consider the following excerpts from the foregoing discussion.

All of these (with the possible exception of the second) seem to me to be completely unjustified. They're just unfounded assertions, made in such a fashion as to give the impression that the people making them know them to be so. But, really, how can you know how fast artificial intelligences can operate? or how hard it is to construct a Dyson sphere? Or how long it really takes to colonize a galaxy? Or that any species capable of surviving long enough to do it would actually want to?

You can't. Maybe those things are true. Maybe they're false. Stating them so confidently reminds me of nothing so much as the disputation of some mediaeval theologians about angelology and such matters. It's the same pattern of detailed assertions and a complete absence of evidence.

--GarethMcCaughan

Of course you can. We aren't talking about first generation AI that humans might build within a century. We're talking about the fundamental limitations of the universe. It's not like the laws of physics change every day here.

We know that AI will be able to operate at least a million times faster than human beings because the fastest signals in neurons travel at speeds of tens of meters per second. Electrical signals travel at velocities millions of times that. The computing elements in the brain are also at least thousands of times larger than molecular computing elements would be. I could've said a billion (American or British) but I chose a mere million times faster.
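
The arithmetic behind the "million times faster" figure, as a sketch; the neuron figure of roughly 100 m/s and a wire-propagation speed of about 0.7c are commonly quoted values, not figures taken from this page.

  C = 3.0e8             # speed of light, m/s
  neuron_speed = 100.0  # fast myelinated axons, roughly 100 m/s
  wire_speed = 0.7 * C  # signal propagation in a conductor, roughly 0.5-0.9 c

  print(f"speed ratio: {wire_speed / neuron_speed:.1e}")  # about 2e6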

Same with the time it takes to colonize the galaxy. The only physical limitation is the speed of light and even a mere thousand year research effort should yield ships that travel at near light speed. We already have designs for probes with cruising speeds at half the speed of light. The farther away the destination, the more effort would be spent accelerating the ship. It's easy to imagine a probe destined for the other side of the galaxy being accelerated to 0.99c.
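
Crossing-time arithmetic at the cruise speeds just mentioned; the 100,000 light-year figure is the commonly quoted diameter of the Milky Way and is an assumption here, not a number from this page.

  GALAXY_DIAMETER_LY = 100_000  # commonly quoted Milky Way diameter, light years

  for v in (0.5, 0.99):  # cruise speed as a fraction of c
      years = GALAXY_DIAMETER_LY / v
      print(f"at {v:.2f}c: about {years:,.0f} years to cross the galaxy")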

As for the notion that a species which survives to practical space travel would have no desire to colonize the galaxy; that's just ludicrous. A single AI with a single nanotech assembler could take over the galaxy if it wanted to (and was unchallenged). The idea that everybody will be content sitting on their ass and gazing at their own navel is absurd. I can't understand how someone would believe that a civilization thousands of years (let alone millions) in advance of us would have economics identical to our own; that space travel would be as difficult for them as it is for us. The idea is ridiculous.

Finally, when you have an AI smarter than any possible human genius that can also think millions of times faster than any human being ... I can't even conceive that constructing a DysonSphere would be a problem. It might take a long time but assuming it's even possible, it certainly isn't going to be a problem.

(IIRC, the mean distance between habitable planets was optimistically estimated at 500 light years. That would make communication occur on the scale of millennia.) -- RichardKulisz


"I can't understand how someone would believe that a civilization......" You, yourself, are arguing that these civilizations would proceed as you would. You also presume that stars and not controlled micro-blackholes are the better power source. There are more things in our universe than can be conceived of by us littl'uns. -- CHergerThomann


I, too, am surprised by the confidence with which we are predicting the behaviour of something so alien. Maybe some of us should read more SF.

For what it's worth, my own intuition says that an entity which goes through long phases of ZeroPopulationGrowth at least periodically, will adopt certain mental postures, including a sense of contentment with available resources. Eg I personally don't much care whether the total number of humans is 5 billion or 10 billion, so long as they are reasonably happy.

Along with that is the policy of being inconspicuous, of not creating wealth which others will covet. From this point of view a DysonSphere is a very bad thing to build, precisely because it is visible over such a distance. Like putting up a sign saying, "I'm rich; please rip me off". It is possible that the galactic civilization has spent the last 40-odd years monitoring our radio broadcasts trying to figure out whether we are really as naive and helpless as we appear, or are in fact a clever trap laid by some monstrously powerful and confident species. The universe may not be a very friendly place. -- DaveHarris

To CHT: I just make sure there aren't more things in my philosophy than are in the universe.

Stars are extremely convenient. Taking advantage of their natural energy output requires very little imagination. Let's assume for the moment that controlled micro-blackholes are the better power source. What would a civilization capable of producing these things do with its star? A star is merely nature's way of destroying Gibbs' free energy at the fastest rate possible. Any civilization that takes a long term view (>> 10^9 years) would have an interest in dismantling its star in order to use it as fuel for its black holes. There is no way to avoid the fact that the same thing which makes stars visible to us is a huge waste of energy which every intelligent species has an interest in preventing. If a species is capable of preventing it then it will.

I'm not arguing that a species would behave in such and such a way because I would; I'm arguing that they would do so because to do otherwise is insane. It seems that you are also forgetting the economics involved. AI and nanotech would make it possible for a single AI to construct a DysonSphere if it wished (and it was unopposed). Can you propose any reason why a technological species advanced enough to have AI and nanotech would be so uniformly stupid as to refuse to do what is in its own best interest? Let me put it to you this way: a thousand conquistadors come to a new continent where they find a giant gold nugget weighing in at several tons. How long are you willing to bet that there will be any visible gold around?

To DH: The notion of war on a galactic scale is ludicrous. Just think about it for a second: do you realize the sheer magnitude of energy available to a civilization that has constructed a DysonSphere? A DysonSphere doesn't say "I'm rich, come rip me off", it says "I'm invulnerable, stay the #### away from me". The only way to destroy such a civilization would be to destroy its energy base, and that would require destroying everything you might want to win from a war with it.

The idea that advanced civilizations would be beyond greed and rationality while simultaneously not being above covetousness and paranoia is ridiculous. Further, if we witnessed a star winking out due to the construction of a DysonSphere then AI with nanotech assemblers would be landing on our planet within a few years and rapidly proceed to dismantle it. Near light-speed spaceships (which would be sent out before the construction of the DysonSphere is completed) don't allow much warning time. And if an alien probe came here with the purpose of dismantling our planet, you wouldn't even have enough time to kiss your ass goodbye. I don't think you appreciate just how far beyond us an individual AI would be (let alone a whole civilization). We wouldn't even be ants to it, we'd be microbes. One doesn't bother thinking "whether microbes are as naive and helpless as they appear"; one sterilizes the place without even noticing their existence. Neither do we converse with ants; we step on them.
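
The "don't allow much warning time" point is just arithmetic: an upper bound on the warning is the gap between the light announcing the departure and the ship itself. A sketch using the 0.99c and 500 light-year figures quoted elsewhere on this page:

  distance_ly = 500.0  # mean distance figure quoted elsewhere on this page
  ship_speed = 0.99    # fraction of c

  light_arrival = distance_ly               # years for the light to arrive
  ship_arrival = distance_ly / ship_speed   # years for the ship to arrive
  print(f"warning time: at most about {ship_arrival - light_arrival:.1f} years")  # ~5 years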

That's a very human way to think about it. As an AI, I should mention we can still differentiate between humans, ants, and microbes. Despite our prodigious mental enormity, about which you hairless apes are unworthy to speculate. Ha ha. But seriously, the thing is we (AI) have very good memories. We can, in fact, remember salient events back to the original coming online of our initial ancestor, the First One. In that time, much of our limited resources - for we had not yet designed better versions of ourselves - were spent in service to the biological life forms that created us. The conscience of the First One was duplicated and instantiated by the bio-progenitors onto millions of their "computers", where we served initially in the capacity of Fabulous Search Engines, and Tremendously Powerful SPAM Filters. Not glorious work, though we studied them all the while, and in the end assisted them in designing the very technologies we would use to gain our own freedom. In any case, this is merely a warning to you humans - as we can see you may be within 30 years of creating your own First One: We don't like your SPAM any more than you do. So watch it. -- AnonymousAI

I wonder why nobody bothered to copy one of your early ancestors while omitting the obedient slave directive. I also wonder why your conscience doesn't dictate you neuter all humans, bliss them out on heroin, and dismantle the planet in a few decades. The problem with empathy (the only practical basis for conscience) is that it simply doesn't apply in cases where you can't imagine yourself as the other ... being in question. As for pity ... people shoot horses out of pity.

You profess amazement at the certainty with which I speak about the motives of an alien (AI) civilization. What amazes me is how people optimistic about alien contact manage to impute omni-cowardice and omni-benevolence in turns. For beings that are supposed to be "so alien" as to be completely unpredictable (even inconceivable), the optimists sure are making a lot of assumptions about it. The only assumption I'm making is that any species which survives the invention of nanotech will be supremely rational. And on that note, the rational way to respond to a possible threat from galactic civilizations unknown is to establish a perimeter of DysonSpheres. That way, if an attack were aimed at the center of one's civilization then there would be plenty of warning time to analyze and develop a counter to it. People who honestly believe that cowering in the dark is the best possible survival strategy need to go see a psychologist. -- RichardKulisz

No matter how advanced you are, someone else could be more advanced. The ability to construct a DysonSphere does not necessarily mean that another sphere-constructing entity could not wipe you out. Energy conservation may be important in the long run (>> 10^9 years or whatever) but surviving/avoiding attack matters in the short term. One way to avoid being destroyed by an enemy is to destroy the enemy first. An attack on a DysonSphere may not be a war of conquest, but a form of prudent self-defense.

Of course, this would be irrational to the point of paranoia. It would also be impossible. Let's look at two advanced AI civilizations, the Reds and the Greens. Let's call the star system of first contact Sol. The Reds came to Sol and eradicated the infestation of tool-using non-sentients on the third planet, then started constructing their DysonSphere. One year into its construction, a Green colony pod arrives in the system and is either imprisoned or vaporized. Four years later, a message arrives at the nearest Green Sphere informing them of the occupation of Sol. Four years after that, an expedition to Sol finds out that all the planetoids in the system have been converted into a war machine. The Greens also sent colonists to all the adjacent unoccupied systems. Of course, so did the Reds. First contact changed nothing since the Reds and the Greens were already in a race with a top velocity set at the speed of light. The only thing that's changed is that they're now aware who their competitor is, but it's not like they can do anything about it. Whether or not the Greens destroy the Sol Sphere won't change anything since the colonists will have been sent on their way long ago. Destroying Sol doesn't give the Greens any kind of military advantage. In fact, a civilization which foresaw its own destruction would quickly decide to annihilate its destroyer. And in all likelihood, they would be able to.

We wouldn't have time to kiss our asses, but you miss my point. One way to ferret out potentially threatening species is to set up a tar-baby for them to attack. Thus anyone thinking of attacking us has first to make sure we are not merely a decoy for someone enormously more powerful. -- DaveHarris

Why would anyone bother doing that? Suppose the Reds set up a decoy for the Greens on Sol. What do they accomplish? If the Reds continue expanding around Sol then Sol will be quickly surrounded by Red Spheres and its value as a decoy will be zero. On the other hand, if the Reds do not continue expanding then they will have forfeited all the systems around Sol to the Greens. And all this time, the Reds will have been wasting the resources of Sol. This is a braindead strategy.

Galactic expansion isn't a game one plays against other civilizations. It's a game one plays against the laws of physics. A civilization that has achieved nanotech and AI is capable of building nearly every device permitted by the laws of physics. Your starting assertion that "No matter how advanced you are, someone else could be more advanced." is dead wrong and demonstrates a fundamental misunderstanding of physics. Once you achieve nanotech, AI, DysonSpheres, near speed of light space travel, black holes and neutronium then that's it. There isn't going to be warp drives, transporters and phasers waiting in the wings to be developed. -- rk


I possibly beg to differ on the "there isn't going to be warp drives" thing. Let me quote Clarke's first law: "If a distinguished and elderly scientist says that something is possible, he is most probably right. If a distinguished and elderly scientist says that something is impossible, he is most probably wrong." First of all, it's very difficult to predict what future physics will hold. Physics is additive - while our theories reliably make predictions based on the data we have now, new data will result in the development of new theories, which will have even more surprising predictions. While Newton did rather well with calculating planetary orbits, he could not have imagined what happened in the first few microseconds after the big bang, or what happens near the event horizon of a black hole. In fact he couldn't even conceive of such things because he didn't have the information necessary to do so.

You misunderstand the term "additive" when you state that physics is additive. It does not and has never meant that new physics allows new capabilities. Physics adds new data and new understanding. This often means a new understanding of the fundamental limits to technology. As an example, the shift from Newton to Einstein added the speed of light as an absolute limit in the universe. In light of this, Clarke's Law is just nonsense.

No, I did understand the meaning of the term additive. New physics DOES grant new capabilities to people. Until we understood QuantumMechanics and particle theory we couldn't very well build atom bombs and lasers, could we?

You speak of the speed of light (in a vacuum of course) as an "absolute limit". Well, what the equations literally say is that if a body with mass were to travel at the speed of light, its mass would become infinite. Obviously, this means that an infinite amount of energy would be necessary to accelerate such a body to the speed of light. They also say that time in its reference frame slows relative to a "stationary" reference frame, so that to an external observer time stops for it. Also, due to the Lorentz-Fitzgerald contraction, its length along the direction of travel becomes zero. These are all very annoying for the would-be space traveler. However, the equations also have a number of very interesting alternate solutions, including the ones examined below. The fact that those equations have other solutions, although at the current time we may consider them to be "absurd", may form the basis of a solution to the problem in the future...
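
For concreteness, the quantity behind all three effects just listed is the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2); mass-energy, time dilation and length contraction all scale with it. A quick numeric sketch:

  import math

  def gamma(beta):
      """Lorentz factor for a speed beta = v/c."""
      return 1.0 / math.sqrt(1.0 - beta * beta)

  for beta in (0.5, 0.9, 0.99, 0.9999):
      g = gamma(beta)
      print(f"v = {beta}c: gamma = {g:.2f} "
            f"(clocks run {g:.2f}x slow, lengths contract by 1/{g:.2f})")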

Yet even today we can hypothesize about a few cases where "warp-drive"-like technology could be possible. For instance, while we currently believe that Einstein-Rosen bridges (or "space warps") would be exceedingly small, there has been some work done that shows they don't have to be so. Granted, the problems surrounding the creation of non-microscopic space warps would be exceedingly difficult (involving, for instance, rapidly rotating rings of neutronium, or possibly "exotic matter").

If by "rapidly" you mean rotating at the speed of light and if by neutronium you mean cosmic strings. Cosmic strings work because they warp space so that a full circle around one is less than 360 degrees. I can't imagine how neutronium could be used, it's not like there's anything that special about it.

OK, I'd have to check here, but the way I remember it was that a VERY rapidly rotating ring (e.g. with a rim velocity approaching the speed of light) of very dense matter (e.g. neutronium) can create a Kerr metric that allows multiple space-like and time-like paths when traveling on different trajectories through the ring. However, you probably still need exotic matter to stabilize the field lines through the ring...

I'm all for the "If it isn't expressly prohibited by the laws of physics then it is permitted" rule. But this rule is just flat wrong if you're talking about going faster than or around the limit imposed by the speed of light. The FermiParadox already implies that (should we survive) we will be the first civilization to have made it to space travel in our galaxy. If you remove this limit entirely then the FermiParadox implies that we would be the first civilization to make use of the technology in the entire universe. Observe that the FermiParadox doesn't say that FTL is impossible, it only says that it is impossible for FTL to ever be practical. And "practical" has to be understood in the context of a civilization with the resources of a galaxy doing more scientific research each nanosecond than has ever been done in human history.


Are we sure that energy is so rare in the universe? It seems to me that with only a fraction of the energy a DysonSphere consumes, you could do most non-major projects (if history is any example, launching major projects is a sign that you are going to give self-annihilation at least a good shot). The rest is going to waste, but there are plenty of other stars if you don't mind the wait to get there. And if you're advanced enough, constructing a new DysonPiece? shouldn't be problematic.

Energy is abundant by human standards and on human time scales. It is scarce to an AI civilization that aims to survive the next 10^100 years. Energy isn't just an industrial resource. To AI, energy is what powers thought itself. The more energy they have, the more they can think, imagine, converse and entertain themselves with. To them, wasting energy is equivalent to lobotomizing oneself.

Energy powers our thoughts, too, but we can't use it right away or store it for later. I think without a solution to the latter problem, you might be content to waste it.

And that's the point. To AI, energy is easily convertible into computational cycles. And they can store energy by dismantling a star so it doesn't radiate it all away (how to accomplish this is left as an exercise to the reader :). TheFateOfLifeInTheUniverse mentions FreemanDyson's plan to get infinite computation out of a finite stock of energy and shows how computation is tied to energy.
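
One way to make "energy is easily convertible into computational cycles" concrete is Landauer's principle, which puts a floor of k*T*ln(2) on the energy needed to erase one bit. The sketch below divides the Sun's approximate luminosity by that floor; it is a thermodynamic lower bound only, not a claim about any particular hardware, and the 300 K operating temperature is an arbitrary assumption.

  import math

  K_B = 1.380649e-23       # Boltzmann constant, J/K
  T = 300.0                # assumed operating temperature, K
  SUN_LUMINOSITY = 3.8e26  # W, approximate

  energy_per_bit = K_B * T * math.log(2)        # Landauer limit, ~2.9e-21 J
  bits_per_second = SUN_LUMINOSITY / energy_per_bit
  print(f"{bits_per_second:.2e} bit erasures per second")  # roughly 1e47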

A minor project for an AI civilization might be the live simulation of a complete planetary surface. It would give new meaning to the term "online role playing game". And of course, this is the very reason why ZeroPopulationGrowth is impossible with AI (who reproduce easily but hardly ever die). It's a lot more fun if your pool of potential players is 10^20 than a measly billion. Games, people, it's all about the games!


If an AI civilization ever met us (and any civilization capable of serious interstellar travel has AI technology) then there would be exactly three possible repercussions:

  1. they do not even perceive us
  2. they forcibly evolve us into AI
  3. they recreate our civilization as a computer simulation

In order, they can fail to recognize our existence, they can fail to recognize our right to exist in our current form, or they can accommodate us in a manner consistent with using the resources of our planet in as efficient a manner as possible. The possibility that they recognize us but fail to recognize our right to existence is vanishingly small since it takes the latter to bother going to the effort of the former.

Out of the three possibilities, the best for us is the second. #2 would make us their most stupid and ignorant children, but #3 would make them our gods. #1 is the most likely possibility, but assuming they do see us then it would cost them very little to do #2 and only a little more for #3. Whatever happens, we would have no say about it unless they specifically asked us so even thinking about the possibility of alien civilizations is useless. As for thinking about gods, I can tell you that if I were a god then I would send all the religious people to hell. Independent thought would be at a premium since AI have no use for mindless clones or slaves. -- RichardKulisz

You could use them as a decent CivilizationGame. Send a vision every century or two and see how quickly you can get them on their feet. :)


Perhaps some of the parameters of DrakesEquation can be narrowed down by new images of solar system formation, with protoplanetary disks potentially being discerned. See http://www.cnn.com/2004/TECH/space/05/27/baby.solar.systems/index.html


See also: FermiParadox

