Anthropic Principle

The principle that the only universe science can describe is one capable of supporting the scientists who describe it.

We can't describe a universe where every planet is a volcanic hell-hole (like Venus). Ice does not Ice-9 us here by accident. That kind of thing.

What are you, a Horta bigot? Actually, if one tweaked some of the cosmological constants even slightly, the universe wouldn't have planets at all.


The Anthropic Cosmological Principle by John D. Barrow and Frank J. Tipler, with a foreword by John A. Wheeler

ISBN 0-19-282147-4. From the book:

Weak Anthropic Principle (WAP): The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirement that the Universe be old enough for it to have already done so.

Strong Anthropic Principle (SAP): The Universe must have those properties which allow life to develop within it at some stage in its history.

Interpretations of the SAP:

Final Anthropic Principle (FAP): Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, it will never die out.

Very Very Strong Anthropic Principle: The universe came into existence so that I, personally, could argue cause-and-effect on this web page, specifically.


Another way of stating the WAP

Causality as accident: if we weren't here, and if the universe didn't work as it does, then we couldn't ask why we're here and why the universe works as it does.

Yeah, exactly. The more general formulation of the WAP that I prefer is that observed phenomena, including the values of physical and cosmological constants, are inherently limited to those compatible with the existence of an observer (not necessarily us). The original formulation is explicitly anthropocentric, or at least carbon-centric ;-). This formulation doesn't require humans at all; it merely states that since observability within a particular context requires an observer, it also requires whatever other (possibly unspecified) conditions are necessary for the presence of that observer.

The thing I find really elegant about the WAP (in almost any variation) is that it lets you hide a lot of the metaphysical messiness that might otherwise interfere with the question of what other strictly logical consequences (besides the presence of an observer) observation of a particular phenomenon might imply. The Carter-Leslie DoomsdayHypothesis? is a beautiful example, whether or not you agree with it.


I believe this is generally looked down on in science circles, in that it doesn't really explain why we're here. Other, less tautological explanations can often be found that are richer and have more predictive power. The AnthropicPrinciple is a last refuge of the scientist. -- DaveHarris

The principle is fine and indeed self-evident as far as it goes. Reductionist or inaccurate application is bad.


My own reading of the situation is that the WAP is a simple and straightforward antidote to the SAP's invocation of metaphysical forces and "oughts" regarding our universe's hows and whys.

The SAP is just another HumansAsCenterOfTheUniverse? "theory", whereas the WAP is its debunking.

So, in other words, I see WAP's usefulness as limited to rebuttal of SAP. Beyond that, it doesn't state anything more than the logically obvious.


Those interested in questions about any version of the AnthropicPrinciple might want to check out TheLifeOfTheCosmos by LeeSmolin. Aside from being quite a good read, it puts forth the author's own alternative to the AnthropicPrinciple and emphasizes the need for theories that make predictions leading to some kind of testability, as his does (though he does not stand by his theory as necessarily the best formulation in that direction).


The AnthropicPrinciple both says more and less than it seems to, and discussion of it is invariably complicated by the fact that much BS has been spouted about it by people with various axes to grind. The following (long) Usenet post of mine attempts to explain what can and cannot be reasonably concluded from it, starting with basic probability theory: http://groups.google.com/groups?selm=3uh84r%246v5%40dartvax.dartmouth.edu&output=gplain

-- BenTilly

Your argument is a clever one, but it misses some essential points. [It also omits the letter 'o' in the word 'before'.]

How likely is it, a priori, that there is a designer who is interested in creating intelligent life, who would actually do it by designing a universe like this, and who is capable of doing it?

Now you go on to say that this is extraordinarily unlikely, even more unlikely than our existence in the first place. If your assertion were correct, I'm sure a mere NobelPrize would not do you justice. To assert the probability of an event you ought to know (or at least reasonably estimate) the space of all events; in this case you ought to know (or estimate) the space of all Universes that would allow intelligence. Now think about the spirit of constructive mathematics: you can't derive propositions about objects whose existence is uncertain (otherwise you risk making a fool of yourself). Don't even try to start an argument about God's choices and the likelihood of this particular choice (this Universe) until you are able to construct (or point to) at least one alternative Universe that would (a) sustain life and (b) sustain intelligence. So the big problem with your argument is that you don't know *anything* about the space of all possible Universes, other than that it contains this particular one, and your assertion is based on thin air.

Like all philosophical arguments, I'm afraid, this one is bound to be meaningless.

Please read it again. I am not trying to convince anyone else there. I am merely stating my beliefs and showing what conclusions they lead me to.

In fact, the whole article is not intended to convert people either way. Instead, it tries to explain why the argument from design winds up confirming people's existing beliefs, whatever they are. And even that original goal is secondary to my reason for linking it here, which is only to give people background on what the AnthropicPrinciple reasonably says.

If it only serves to start a religious argument, then I suggest deleting the link.

-- BenTilly

I'm having difficulty inventing a universe that works, but it's easy to invent a designer (of the universe) provided I'm not required to explain how the designer did it! Tackling the why question whilst abandoning the corresponding how question seems pretty fatuous to me. The designer principle suffers from two flaws: it is supported by almost anything that happens or could happen, and it doesn't even attempt to account for the design of the designer.


Talk about complicating a simple concept! We're here because if we couldn't be, we wouldn't be.

As to "why", there are two kinds of why. If you mean "by what mechanism?" any scientist can explain it. If you mean "for what purpose?" you have a problem, because you have to define "purpose" without assuming the answer to the question. --MarcThibault


A given intelligent being is most likely to be alive during the most populous times of its species. My existence here and now thus suggests we are near the maximum and it's downhill from here, I'm afraid to say. (Type tags are doing us in, maybe.) Even if the population were stable for 10,000 years, the average person would be living about 5,000 years from now. Being on the leading end is fairly unlikely. Can mere existence be used as an approximate "probability telescope into the future"? This implies "rough" information can travel back in time. Hmmm. -t

The counterargument is that somebody has to be alive in the primordial origins before humanity populates the universe. It has to be somebody, and as statistically improbable as it may be, it could be us.

I agree, but that's the less likely scenario. The more likely scenario is that humanity will F itself. Applying Occam's Razor to the choice between us happening to be at a "privileged" position (the leading end, "A" below) and humanity F-ing itself up soon ("B" below), the second is the better explanation, probability-wise. The chance of being at the very head or tail of a population span is roughly 10% (depending on where you draw the line). Using the AnthropicPrinciple as a "time telescope" is only a probabilistic tool, not an absolute one: yes, we could coincidentally be at an end-spot, but that explanation requires presuming a coincidence. Also, I'm not necessarily comparing doomsday to spreading about the universe; it's also an issue if humans stay on Earth. If the population remains roughly the same for, say, a million years and we don't venture off Earth, then our position in that total span of a million years is still coincidentally at the head tip.
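
As a back-of-the-envelope check on that rough 10% figure (purely illustrative; the population size and the 5% cutoffs below are made-up assumptions, not anything from the discussion), a uniformly random birth rank lands in the first or last 5% of the total span about 10% of the time:

 # Illustrative sketch: if an observer's birth rank is drawn uniformly
 # over all N individuals who ever exist, how often does it land in the
 # first or last 5% of the span?
 import random

 N = 1_000_000            # total observers ever; purely illustrative
 trials = 200_000
 hits = 0
 for _ in range(trials):
     rank = random.randrange(N)
     if rank < 0.05 * N or rank >= 0.95 * N:
         hits += 1
 print(hits / trials)     # comes out near 0.10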

That's the so-called Doomsday Hypothesis; see http://en.wikipedia.org/wiki/Doomsday_argument for details. Alternatively, the view of TransHumanism is that rather than annihilating ourselves, we will transcend ourselves so profoundly that we will no longer consider ourselves human.

But we'd still be aware under that scenario and contemplating our position in space/time, so I don't think it changes the basic problem. If the future is mostly cyborgs, then the common situation would be a cyborg pondering, not a human (or other biological being), which still places us in a coincidental spot.

Perhaps. It's worth reading criticisms of the Doomsday Hypothesis for various alternatives.

A set of vertical histograms with the past toward the top and the future toward the bottom (width represents population):

Scenario A (coincidental position):

 *
 **
 ****
 *******
 *************
 **************  <----- We Are Here
 **************
 **************
 **************
 *************
 **************
 ************
 *************
 **************
 **************
 **************
 ***************
 ***************
 ****************
 ***************
 **************
 **************
 *************
 ************
 *************
 **************
 **************
 **************
 **************
 *************
 **************
 ************
 *************
 **************
 **************
 **************
 ***************
 ***************
 ****************
 ***************
 **************
 **************
 *************
 ************
 *************
 **************
 **********
 ******
 ***
 *

Scenario B (typical "boring" position):

 *
 **
 ****
 *******
 *************
 **************  <----- We Are Here
 **************
 *************
 **********
 ******
 ***
 *
--top

Scenario C [evolution of humanity?]

 D
 D
 D
 D
 EE
 EE  <----- We Are Here
 FFF
 EE
 EE
 D
 D
 D
 D
 EE
 EE
 FFF
 RRRRRRRRRRRRRRR>
What is "R"? -t

It's either TheSingularity or a big party at Rick's house. Not sure. It might be both.

I suspect it's some kind of evolution of humanity. But we are pondering it as biological beings, not as cyborgs. Thus, we should consider the probability of pondering as a biological being versus as a cyborg or the like (as described earlier). If 98% of our future is spent as cyborgs or GodGoo, then we are currently in a "coincidental state", creating a "probability problem". If the 98% situation were the case, then we should be cyborgs pondering, not humans (unless cyborgs don't ponder as individuals, per above). -t

That which does the pondering, assuming "pondering" is even the right word, might be so unlike us as to render meaningless any connection between us and it.

That might be. I see it in terms of the question, "What's the best estimate of the total quantity of other ponderers like me, past and future?" If historically there have been about 60 billion humans (which is roughly the current estimate), then the best estimate for the total quantity of future humans is also about 60 billion, since we don't have enough info about whether we are on the uptick or the decline. Without uptick/decline info, the "safest" model/assumption is that we are approximately in the middle. This suggests the total population of humans that will ever exist is in the ballpark of 120 billion.

To see why this makes sense, suppose for the sake of argument that there will eventually be a total of 100 trillion humans. That would mean the vast bulk of them live in the future and that I, the ponderer, am at the very leading end (start) of the human population (similar to scenario A above). But this is a coincidental position; it's unlikely I would be at the very leading end out of a random sample of 100 trillion humans. Occam's Razor is typically interpreted as selecting the least coincidental among alternatives. Sometimes this is also called the "Copernican principle", although that's arguably more specific.
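
A minimal sketch of that "middle of the span" estimate (illustrative only; it assumes my birth rank is a uniform fraction of everyone who will ever exist, which is exactly the assumption under debate):

 # Illustrative sketch: if my birth rank r is a uniform fraction f of the
 # eventual total N (so N = r / f), the median guess for N given
 # r = 60 billion works out to about 2 * r = 120 billion.
 import random

 r = 60e9                               # rough count of humans so far
 trials = 100_000
 estimates = sorted(r / (1.0 - random.random())   # f drawn from (0, 1]
                    for _ in range(trials))
 print(estimates[trials // 2] / 1e9)    # median, in billions: ~120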

Another way to look at this is to suppose you are in Vegas. A new powerful-but-expensive telescope that can see into the future and into the distance is under construction, but it's not finished yet. (A prototype could only sample galaxy-sized patterns in limited locations.) Vegas allows you to make bets on the future. Suppose your significant other wants you to make a big bet on the future of Species X, a species recently discovered via SETI radio signals.

Species X has a history similar to that of current-day humans: about 60 billion total have existed since the species formed about a quarter of a million years ago. (I'm not focusing on humans, to make sure there is no emotional connection between the bettor and the target species.) You can put a double-or-nothing bet on the choice between: A) Species X will eventually have a total population exceeding 240 billion (4x the total past and present), or B) they won't exceed 240 billion. Bet on A or B? (The fact that a second species somewhat resembles ours in terms of population history may provide additional info, but please ignore that for this thought experiment.)
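
For what it's worth, here is a minimal sketch of how the same uniform-rank assumption bears on that bet (the assumption itself is the contentious part, so this is illustrative, not a real estimate of the odds): the eventual total exceeds 4x the count so far exactly when the current fraction f = 60B/N is below 1/4, which happens about a quarter of the time, so choice B wins the double-or-nothing bet roughly three times out of four.

 # Illustrative: under the uniform birth-rank assumption, how often does
 # Species X's eventual total N exceed 4x its count so far (N > 4 * 60B)?
 # Equivalently, how often is the current fraction f = 60B / N below 1/4?
 import random

 trials = 200_000
 wins_for_A = sum(1 for _ in range(trials)
                  if (1.0 - random.random()) < 0.25)
 print(wins_for_A / trials)   # about 0.25, so bet B is favored 3-to-1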


One scenario to explain the conundrum is that in the future a point will be reached where the brains of the current population are scanned and digitized for emulation (ScannedBrainSimulation). The physical bodies then age and die normally, but biological reproduction is no longer permitted (or not feasible due to space travel, biological warfare, or whatever). Thus, the total number of individual existence-ponderers will be frozen at a fixed number from that time forward, and they typically won't "die". Biological humanity will end, or be confined to a few labs or zoo-like areas. Lower-level duties, like mining asteroids, may be performed by armies of drone-like bots with limited intelligence, and the human-based emulated brains, perhaps enhanced, are the guardians who oversee the drones. Society may not need more than 10 billion or so guardians, even if we expand into space. The worker drone-bots are kept just smart enough to do work but not smart enough to know or care about anything outside of the duties for which they were built. Such a society, when well-tuned, may need only one guardian for each solar system and still colonize most of the galaxy. -t

It is not in your future. Something like it happened, millions of years in your past. What you currently perceive as "reality" is a reconstruction of an even more distant history; the current Real would be inconceivable to you, given the limited cognitive capacity of the construct which hosts your consciousness.

I'm only looking at scenarios to explain the probabilities. You are adding extra layers here. Sure, it's all speculative to begin with, but let's focus on the simplest explanations first, per OccamsRazor. I'm not sure that scenario changes the basic premise anyhow, in terms of the total quantity of existence-ponderers.

If I were part of a "fake world", it would imply that most existence-pondering individuals were also in or going to be in a fake world.

In that "fake world", there is only you. You are alone.

Good. That means I can tell you to go blow maggots out of your zit-infested ass, and I am not offending anybody because there is no "anybody" to offend. It's like chewing out a stapler.

Yes, that's true.

Incidentally, you have the personality of a stapler. I hope the emulation admins improve on that code.

How does that make you feel?

Oh great, I ticked The Emulation Gods off, and they punished me by replacing you with Eliza. I'll sing them a hymn about how great and powerful they are and maybe they'll put the usual jerk-bot back. Kissing up works.

"Jerk-bot"? You seem unusually sensitive today.

Yes. Please ask Hal to send me a stress pill. Oh, and ask him to open the pod bay doors while you are at it. They seem stuck.


See also FermiParadox


CategoryWorldView CategoryPhilosophy CategoryScience CategoryBook CategoryFuture



