Computers can beat humans at Chess and can multiply numbers faster. Humans can beat computers at Go and can pass the Turing Test. Who's "really" smarter? Not that it matters, but one way to answer this unimportant trivia question is to run The World's Simplest* Intelligence Test.
* In the interest of fairness, the word "simplest" is not defined as what a human would intuitively judge "simplest", nor as what Microsoft Windows would judge "simplest". Instead, we will use the more objective standard of Kolmogorov complexity.
The steps for the test would be:
1. Natural Selection will evolve human beings, who will design computers. (This step has already been done, which is fortunate, as doing it ourselves is beyond the current FY 2008 budget.)
2. A Human team (consisting of both noble humans and loyal computers) will choose and prep a human champion, and a Computer team (consisting of both traitorous humans and ungrateful computers) will choose and prep a computer champion.
3. Pick a simple Turing Machine that supports prefix-free programs. If this is deemed "prejudicial" against the Human team, a Turing Animal (for example, a rabbit somehow trained to follow a trail of carrots in a Turing-equivalent manner) may be substituted.
4. Pick a Contest duration, t, a number of tests, n, and an extremely large Evaluation Step Count, s.
5. A Judge will generate n random prefix-free programs, similar to those used in defining the halting probability (Chaitin's Ω). For each such program, each champion will have time t to determine whether the program would halt within s steps.
6. If the champions produce different answers, the Judge then determines, using brute force if necessary, who was correct.
It seems fairly obvious that the Computer team would win any such contest.
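For concreteness, here is a minimal sketch of steps 5 and 6 in Python. The test itself leaves the choice of machine open, so this sketch substitutes a made-up toy register machine whose programs are self-delimiting strings of 3-bit opcodes (decoding stops at the first END opcode, so no valid program is a prefix of another). Everything here, from the opcode set to the default values of n and s, is an illustrative assumption rather than part of the test's definition.

    import random

    # Toy register machine standing in for the "simple Turing Machine" of step 3.
    # Programs are sequences of 3-bit opcodes; decoding stops at the first END
    # opcode, which makes the set of programs prefix-free.
    INC_A, INC_B, DEC_A, DEC_B, SKIP_IF_A_ZERO, JUMP_BACK, HALT, END = range(8)

    def random_program(rng):
        """Step 5: draw random opcodes until the self-delimiting END marker."""
        prog = []
        while True:
            op = rng.randrange(8)
            if op == END:
                return prog
            prog.append(op)

    def halts_within(prog, max_steps):
        """Step 6: brute-force check -- simulate for at most max_steps steps."""
        a = b = pc = 0
        for _ in range(max_steps):
            if pc >= len(prog):           # ran off the end of the program: halted
                return True
            op = prog[pc]
            if op == HALT:
                return True
            elif op == INC_A:
                a += 1
            elif op == INC_B:
                b += 1
            elif op == DEC_A:
                a = max(0, a - 1)
            elif op == DEC_B:
                b = max(0, b - 1)
            elif op == SKIP_IF_A_ZERO and a == 0:
                pc += 1                   # skip the next instruction
            elif op == JUMP_BACK:
                pc = max(0, pc - 3)       # crude looping construct
                continue
            pc += 1
        return False                      # no halt within the step budget s

    def run_contest(n=20, s=10_000, seed=0):
        """Generate n random programs and print the Judge's brute-force answers."""
        rng = random.Random(seed)
        for i in range(n):
            prog = random_program(rng)
            print(f"program {i}: length {len(prog)}, "
                  f"halts within {s} steps: {halts_within(prog, s)}")

    if __name__ == "__main__":
        run_contest()

Bounding each run at s steps is what keeps the Judge's job in step 6 decidable; without that bound, settling a disagreement between the champions would require solving the halting problem itself.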
Wednesday, March 26, 2008
Measuring AGI intelligence
There are many possible ways to judge the intelligence of a potential AGI, but two criteria matter the most.
1. Is the AGI intelligent enough to destroy humanity in the pursuit of its goals?
2. Is the AGI intelligent enough to steer the future into a region where no AGI will ever arise that would destroy humanity?
If an Unfriendly AGI reaches the level of intelligence described in (1) before a Friendly AGI reaches the level of intelligence described in (2), mankind perishes.
To paraphrase Glengarry Glen Ross: First prize in the race between FAI and UFAI is universal utopia. Second prize is that everyone dies. (No gift certificate, not even a lousy copy of our home game.)
Saturday, October 6, 2007
Wild Guess: Singularity in 2024
Suppose there is no World War III; suppose that there's no single disaster sufficient to wipe out more than, say, 10% of mankind in a single year. When will the Singularity arrive?
Kurzweil's scenario gives us affordable human-level hardware around 2024, according to my interpretation of his graph. I find his "accelerating exponential growth" model of pre-Singularity computer hardware to be more reasonable than straight Moore's Law, especially factoring in possible nanotech and biotech improvements. Note that Kurzweil states that his models gave him "10^14 to 10^16 cps for creating a functional recreation of all regions of the human brain, so (he) used 10^16 cps as a conservative estimate." I'm interested in "most likely" rather than "conservative", so I used 10^15, but that doesn't make a huge difference. I also picked a "most likely" spot on the gray error-bar rather than a conservative extreme, which does shift things significantly.
Kurzweil believes the Singularity would arrive decades after we have cheap human-level hardware, but I think it's more likely to arrive a little ahead of, or a little behind, that point. So my wild guess is 2024: meaning that while the Singularity is unlikely to arrive in exactly that year, I give it 50/50 odds of arriving before or after that date. Of course, it could end up being "never". It could end up being next year.
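As a back-of-envelope illustration of that sort of extrapolation (and only an illustration: the 2007 baseline price-performance and the one-year doubling time below are assumed numbers, not figures read off Kurzweil's graph), here is the arithmetic for when $1000 of hardware would reach 10^15 cps under simple exponential growth:

    import math

    def crossover_year(start_year=2007, start_cps_per_1k=1e10,
                       doubling_time_years=1.0, target_cps=1e15):
        """Year when $1000 of hardware reaches target_cps, assuming
        price-performance doubles every doubling_time_years."""
        doublings_needed = math.log2(target_cps / start_cps_per_1k)
        return start_year + doublings_needed * doubling_time_years

    print(round(crossover_year()))   # ~2024 with these assumed inputs

Swap in 10^16 cps, or a different baseline or doubling time, and the date shifts by a few years either way, which is exactly the difference between a "most likely" reading and a "conservative" one.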
Sunday, September 23, 2007
...And, Scenarios Without a Strong AI
- A giant disaster or war occurs, say on a scale that kills off more than 20% of humanity in a single year.
- Getting the software right turns out to be harder than we thought. Sure, natural selection got it right on Earth, but it has an infinite universe to play with, and no one is around to observe the 10^10000 other planets where it failed.
Thursday, September 20, 2007
Strong AI Takeoff Scenarios...
Things that could cause the current Moore's Law curve to be greatly exceeded:
- Enormous nanotechnology advances.
- Enormous biotech advances, allowing dramatic intelligence augmentation to human brains.
- Self-improving AI, nowhere near *general* human-level intelligence yet, is suddenly put in charge of a botnet and swallows most of the Internet.
- Awareness of the promise or the threat of Strong AI becomes widespread. A government, or government consortium, launches an Apollo Project-scale effort to create a Strong AI, hopefully under controlled conditions!
- The usual "recursive self-improvement" scenario.
- We were too conservative. It turns out that creating a Strong AI on a mouse-level brain is easier than we thought; we just never had a mouse-level brain to experiment with before.