Saturday, September 29, 2007

Actively Open-Minded Thinking

Suppose you have decided to spend ten minutes making a decision between A and B. Currently you're leaning towards A. Do you:

1. Spend most of the ten minutes thinking of reasons to do A?

2. Spend about equal amounts of time thinking of reasons to do A and reasons to do B?

3. Spend most of the ten minutes thinking of reasons to do B?

Our usual tendency is to do (1). This is an aspect of the confirmation bias.

Part of Actively Open-Minded Thinking (AOMT), as advocated by Baron, is to try to instead do (2) or (3). This is hard to get in the habit of doing.

I find Actively Open-Minded Thinking very useful. For example, twice a day I think about an aspect of a current plan I have for the day, month, or lifetime, and then I briefly try to search for alternative strategies, or for reasons why my current strategy might be wrong; on multiple occasions this has caused me to adopt new courses of action. It's irrational, given that you're going to spend X minutes meditating on a decision anyway, to spend most of the time thinking of ways to rationalize your current decision. In fact, it's so clearly irrational that I'm retroactively surprised that AOMT never became widespread.

Sunday, September 23, 2007

...And, Scenarios Without a Strong AI

  1. A giant disaster or war occurs, say on a scale that kills off more than 20% of humanity in a single year.
  2. Getting the software right turns out to be harder than we thought. Sure, natural selection got it right on Earth, but it has an infinite universe to play with, and no one is around to observe the 10^10000 other planets where it failed.

Thursday, September 20, 2007

Strong AI Takeoff Scenarios...

Things that could cause the current Moore's Law curve to be greatly exceeded:
  1. Enormous nanotechnology advances.
  2. Enormous biotech advances, allowing dramatic intelligence augmentation to human brains.
Things that could cause a sudden, large (1000x) increase in hardware devoted to AI self-improvement:
  1. Self-improving AI, nowhere near *general* human-level intelligence yet, is suddenly put in charge of a botnet and swallows most of the Internet.
  2. Awareness of the promise or the threat of Strong AI becomes widespread. A government, or government consortium, launches an Apollo Project-scale activity to create a Strong AI, hopefully under controlled conditions!
Other Strong AI Takeoff scenarios:
  1. The usual "recursive self-improvement" scenario.
  2. We were too conservative. Turns out creating a Strong AI on a mouse-level brain is easier than we thought; we just never had a mouse-level brain to experiment with before.

Tuesday, September 18, 2007

Total Feasible Human Population Over All Time

On Usenet recently, for fun, I made a back-of-the-envelope calculation of how many human beings could be born in one scenario:
  1. For simplicity of calculation, the only lifeforms are humans (no trans-humans on microchips).
  2. The limiting factor will be energy, rather than how much carbon etc. you have lying around to build things with. After all, you can fuse the useless hydrogen you come across into more useful elements, or delay people's births until room opens up for them.
  3. I put the energy efficiency at only .001 of the mass-energy of total matter (dark and baryonic), partly because there will be inefficiencies in collecting/storing/transporting/transforming the energy, and partly because I'm not sure how well we can shove the dark matter into black holes. I used 3*10^-27 kg/(meter^3) for the matter density.
  4. Having no source on how much matter we can grab or colonize before the accelerating expansion of the universe puts it out of reach, I arbitrarily guess that we can colonize all matter currently within 10 billion light-years of us.
  5. The miserly energy ration will be 300,000 nutritional Calories per person per day, to cover synthesizing/recycling food and all other pro-rated energy uses.

Google tells me:

(.01 * ((4 * pi) / 3) * ((10 billion light years)^3) * (3 * 10^(-27) kg / (meter^3)) * (c^2) * .001) / (300 000 (kilocalories / day)) ≈ 10^52 person-years.
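
In case you want to re-run the arithmetic, here's a quick Python sketch of the same expression. The light-year and Calorie conversions are standard constants; everything else is just the guesses above. (The expression as entered has a .01 factor on top of the .001 efficiency from assumption 3; I keep both so the output matches.)

from math import pi

LIGHT_YEAR_M  = 9.461e15             # meters per light-year
RADIUS_M      = 10e9 * LIGHT_YEAR_M  # 10 billion light-years, in meters
DENSITY_KG_M3 = 3e-27                # matter density (dark + baryonic), kg/m^3
C_M_S         = 3.0e8                # speed of light, m/s
EFFICIENCY    = 0.01 * 0.001         # both efficiency factors from the expression
RATION_J_DAY  = 300000 * 4184        # 300,000 kcal per day, in joules per day
DAYS_PER_YEAR = 365.25

volume_m3    = (4 * pi / 3) * RADIUS_M ** 3       # volume of the colonized sphere
mass_kg      = DENSITY_KG_M3 * volume_m3          # total matter inside it
energy_j     = mass_kg * C_M_S ** 2 * EFFICIENCY  # usable energy via E = mc^2
person_years = energy_j / (RATION_J_DAY * DAYS_PER_YEAR)

print(f"{person_years:.1e} person-years")         # ~2e52, i.e. on the order of 10^52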


Sunday, September 16, 2007

The Virtues of Wild Guesses

Some probability distributions are very uncertain. For example, what are the odds of Strong AI by 2040? Not only is there a lot of uncertainty, but there is also "uncertainty about the uncertainty"; I have no rigorous way of answering the question.

Other questions are close to zero-knowledge. For example, what is the probability of a Singularity arising in the average star system? I'd put an upper bound of 10^-16 on it, based on the fact that no alien civilization has yet shown up to collect the sunlight pouring out uselessly into empty space. But I have no lower bound.

Suppose today you have to make a decision based on one of these questions. There are three things you can do:
  1. Decide arbitrarily, or decide solely according to heuristics that are unconnected to the reality at hand. Examples: Always choose inaction. Continue doing whatever you were doing before you pondered the question. Do whatever makes you feel best today. Do whatever would result in the least embarrassment if you're wrong. Do whatever everyone else is doing.
  2. Make your best wild guess, but keep it to yourself to avoid political embarrassment. After all, everyone has some People Who Would Love To Take You Down A Notch in their lives; the last thing you want to do is say something in public that turns out to be embarrassingly wrong.
  3. Collaborate. Make your best wild guess, and share it freely, taking pains to label it as a wild guess, of course! Just be aware that if you're wrong, the defense of "I *said* it was a wild guess, and anyway I was braver and more rational than the people who refused to guess at all!" is unlikely to be heard.
Choices (2) and (3) are reasonable approaches. Unfortunately, (1) is probably the most common; we tend to push forward inertially with our current path until someone can prove that it's wrong.

Saturday, September 15, 2007

Diminishing Returns on Existential-Level Threats

I posted a vague suggestion as a comment to the Lifeboat blog:
When considering what problem to work on, one question is "how many other people are working on this problem"? If the answer is "a lot", you may stay away because of the Law of Diminishing Returns. (This is only partly mitigated by the fact that if a lot of other people agree that P is an important problem and are working on it by doing S, that somewhat increases the chances that your assessment that "P is an important problem and S is a good solution" is correct.)

In the course of figuring out where to spend resources, people and organizations like the Lifeboat Foundation are presumably tracking who else is working on what problems and how many resources are being spent by other organizations. Ideally, the Lifeboat Foundation should publish their order-of-magnitude estimates so that other people deciding what projects to work on can use that data as well.

Self-interested people have the same problem, of course. If there are already five companies selling apple-flavored toothpaste, you might not want your startup to sell the same thing. If no one is selling apple-flavored toothpaste, you might consider selling it, with the caveat that you'd first want to make sure there isn't a *really good reason* why no company has attempted to sell apple-flavored toothpaste. The difference between self-interested people and an (ideal) nonprofit is that self-interested people have less incentive to share their research with each other.

I don't expect an enthusiastic reception. Apart from everything else, there are strong political reasons why people and organizations (including you and me) don't like to publicly share all the Wild Guesses that inevitably make up the foundations of our long-range strategies.