Sunday, June 22, 2008

Reviews of books that changed the reviewer's mind

A nonfiction book reviewer will generally judge a book on two criteria:

1. How many interesting, new, useful, and accurate ideas are presented in the book?

2. Is the book's thesis accurate and righteous? In other words, does the book fit in well with the reviewer's pre-reading worldview?

My goal is generally not to replace my own prejudices and worldview with the prejudices and worldview of the reviewer, so I would like to seek out books that meet criterion #1. If I had a thousand concurrent additional lifetimes right now, I think Rolf #428 would write up a list of book reviews where the reviewer claimed that the book substantively changed his mind about a topic, to the extent that the reviewer reversed a previously-endorsed opinion. This would produce a list of books that are more likely to meet criterion #1: books recommended because their content was compelling, rather than because they affirmed the reviewer's pre-existing beliefs.

One caveat: an intensification of a previously-held belief would not count. For example, a reviewer saying "I believed before that Bush was a mediocre president, but now I realize he's the worst man who ever lived!" would not count as a reversal. In addition, a shift from an unpopular belief to a popular belief would not weigh as heavily as a shift from a popular belief to an unpopular one.

Sunday, June 8, 2008

The reluctance to give probabilities (II)

The last of the three theories I gave in the previous post ties into a notion I have about what we mean by "admissible" vs. "inadmissible" evidence when evaluating the "true" probability of an event. Nothing in the external, objective world corresponds directly to what we term "admissibility"; the question of admissibility seems to be part of our evolved social-judgment kit.

I think reluctance to commit is a (possibly evolved) mechanism targeted specifically at combating other people's Hindsight Bias. It fills a predictive gap left by the other theories: the "weight" behind your coin-toss estimate is, in a sense, 0, since once the coin is tossed you have no reason to doubt someone who tells you it came up heads. And yet people feel no hesitation about assigning a .5 probability, even after the coin has already been flipped, because they know nobody will doubt their competence for doing so.

Here are some different ways that a prediction about the boxer-wrestler match can be framed.

1. I have no idea who will win. (Extremely tentative)

2. I give a 50% chance the wrestler will win. (Less tentative)

Similarly, if the only thing you know is that wrestlers train on a wider variety of maneuvers than boxers, and you therefore give a slight edge to the wrestler:

1. It seems a little bit more likely the wrestler will win. (Extremely tentative)

2. I give a 60% chance the wrestler will win. (Less tentative)

3. I give a 60.020001% chance the wrestler will win. (Not at all tentative)

(2) is more tentative than (3), maybe because of norms of parsimony in conversation: quoting six decimal places signals that you will stand behind every one of them. It's less clear why (1) is so much more tentative than (2); perhaps there is a norm that you only give a numeric percentage when you're willing to stand behind your estimate.

Note that newspapers rarely give probabilities of events, aside from weather forecasts, which can be "justified" by pointing to a large ensemble of past data and uncontroversial analysis.

Sunday, May 4, 2008

The reluctance to give probabilities

Coin Scenario: I am about to flip a fair coin. What is the probability it will come up Heads?

Ignorant Fight Scenario: The World's Greatest Boxer fights the World's Greatest Wrestler. Will the wrestler win? (Assume you have no reasonable methodology, other than the Principle of Insufficient Reason, for deciding whom to bet on.)

Howard Raiffa argued that in both cases, you should assign a subjective probability of 0.5. However, most people are more comfortable assigning a probability in the Coin Scenario. In the Ignorant Fight Scenario, most people would rather leave it unspecified, or give a "probability of probabilities." Why? Here are some (overlapping) possible reasons.

1. Weight of Evidence: When you look at the larger model that probability assessments come from, you can sometimes attach different weights to various probability assessments. The significance of the weights is that if you find that two different probability estimates conflict, then the estimate with more weight should shift less than the estimate with less weight. Leaving a "blank space" in your probability estimate might be a reasonable way for an agent with limited memory storage to remember that the weight is close to zero. (This paper's explanation is probably isomorphic to this theory.)
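To make "weight" concrete, here is a minimal sketch (my own formalization, not anything from the post), treating an estimate as the mean of a Beta distribution whose pseudo-count serves as the weight; a low-weight estimate swings freely when it collides with new evidence, while a high-weight estimate barely budges:

```python
# Minimal sketch (my formalization): a probability estimate as a Beta
# distribution whose pseudo-count plays the role of the "weight".

def updated_mean(prior_mean, weight, successes, failures):
    """Posterior mean of a Beta(prior_mean * weight, (1 - prior_mean) * weight)
    prior after observing the given counts of successes and failures."""
    alpha = prior_mean * weight + successes
    beta = (1 - prior_mean) * weight + failures
    return alpha / (alpha + beta)

# The fair coin: a .5 estimate backed by enormous weight barely moves.
print(updated_mean(0.5, 1000, successes=8, failures=2))  # ~0.503

# The ignorant fight: a .5 estimate with near-zero weight moves freely.
print(updated_mean(0.5, 0.1, successes=8, failures=2))   # ~0.80
```

On this reading, the "blank space" is a weight near zero: the two estimates share the same mean of .5, but behave completely differently when evidence arrives.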

2. Ease of Gaining Additional Evidence: You leave a "blank space" for now if you don't see a use for an initial estimate, but believe it would be easy to gain a more refined estimate later, "if needed." The human equivalent of "lazy evaluation."
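In programming terms (a toy of my own, not the post's): a lazy value is a computation that is not run until something actually demands its result, which is then kept for reuse:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fight_estimate():
    # Hypothetical expensive analysis: scouting both champions, digging up
    # tapes of past cross-discipline bouts, and so on.
    return 0.6

# Nothing above executes until a bet is actually offered; until then the
# "probability slot" stays blank, at essentially zero storage cost.
offered_a_bet = False
if offered_a_bet:
    p = fight_estimate()  # only now would the expensive analysis run (once)
```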

3. Avoiding Looking Foolish: The more certain you sound about the "validity" of your estimate, the more foolish you will look when (not if) others, in hindsight, decide after the bout that there was evidence that you should have considered, but failed to, when making your probability assessment. This is the explanation that I lean toward.

Saturday, April 19, 2008

A skeptical view of the 40-hour work week

Many people hold the view that "crunch mode doesn't work"; one extreme version, endorsed by Extreme Programming, is that working more than forty hours per week does not increase total weekly productivity. I will call this the "Productivity Laffer Curve" viewpoint.

The "Productivity Laffer Curve" hypothesis is odd, because "work" is a vague term. It's like being told that eating green things reduces your risk of cancer. What sort of green things? Vegetables? Mold? Green M&M's? The analogy here is that, just as there's no reasonable causal mechanism how greenness can directly prevent cancer, there's no reasonable causal mechanism for how the fact that someone labeled a task as “work”, can directly reduce your ability to do that task. If people who work more than 40 hours lose effectiveness, then there must be some causal factor involved, such as physical fatigue, mental fatigue, or loss of motivation; and it may be better to determine what those factors are, and address those factors directly, rather than artificially curtail your work week.

One reason for skepticism about the "Productivity Laffer Curve" is that companies don't uniformly push 40-hour work weeks. If the hypothesis were true, a 40-hour work week would give a company a recruiting and retention advantage *without* sacrificing even a little productivity; any company adopting it would therefore have an enormous advantage over its competitors. We don't seem to see this in practice.

Sunday, April 6, 2008

The World's Simplest* Intelligence Test

Computers can beat humans at Chess and can multiply numbers faster. Humans can beat computers at Go and can pass the Turing Test. Who's "really" smarter? Not that it matters, but one way to answer this unimportant trivia question is to run The World's Simplest* Intelligence Test.

* In the interest of fairness, "simplest" is defined neither as what a human would intuitively judge simplest, nor as what Microsoft Windows would judge simplest. Instead, we will use the more objective standard of Kolmogorov Complexity.

The steps for the test would be:

1. Natural Selection will evolve human beings, who will design computers. (This step has already been done, which is fortunate as doing it ourselves is beyond the current budget for FY 2008.)

2. A Human team (consisting of both noble humans and loyal computers) will choose and prep a human champion, and a Computer team (consisting of both traitorous humans and ungrateful computers) will choose and prep a computer champion.

3. Pick a simple Turing Machine that supports prefix-free programs. If this is deemed "pejorative" against the Human team, a Turing Animal (for example, a rabbit somehow trained to follow a trail of carrots in a Turing-equivalent manner) may be substituted.

4. Pick a Contest duration, t, a number of tests, n, and an extremely large Evaluation Step Count, s.

5. A Judge will generate n random prefix-free programs, similar to those used in defining the halting probability (Chaitin's Ω). For each such program, each champion will have time t to determine whether the program would halt within s steps.

6. If the champions produce different answers, the Judge then determines, using brute force if necessary, who was correct.

It seems fairly obvious that the Computer team would win any such contest.
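For concreteness, here is a rough sketch of steps 5 and 6 (my own toy construction; the post leaves the machine unspecified, so a tiny Brainfuck-style interpreter stands in for the "simple Turing Machine," and a geometric stopping rule crudely mimics the prefix-free coin-flipping construction used for the halting probability):

```python
import random

def random_program(rng, stop_prob=0.05):
    """Draw symbols until a geometric "stop": a crude analogue of flipping
    coins until the machine stops reading, as in the prefix-free construction."""
    symbols = "+-<>[]"
    prog = []
    while rng.random() > stop_prob:
        prog.append(rng.choice(symbols))
    return "".join(prog)

def halts_within(prog, s):
    """Brute-force Judge (step 6): run prog for at most s steps.
    '[' ... ']' loops while the current cell is nonzero; '+', '-', '>', '<'
    edit an unbounded tape. Unbalanced brackets count as invalid, i.e. halting."""
    stack, match = [], {}
    for i, c in enumerate(prog):
        if c == "[":
            stack.append(i)
        elif c == "]":
            if not stack:
                return True
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        return True
    tape, cell, pc = {}, 0, 0
    for _ in range(s):
        if pc >= len(prog):
            return True  # ran off the end of the program: halted
        c = prog[pc]
        if c == "+":
            tape[cell] = tape.get(cell, 0) + 1
        elif c == "-":
            tape[cell] = tape.get(cell, 0) - 1
        elif c == ">":
            cell += 1
        elif c == "<":
            cell -= 1
        elif c == "[" and tape.get(cell, 0) == 0:
            pc = match[pc]
        elif c == "]" and tape.get(cell, 0) != 0:
            pc = match[pc]
        pc += 1
    return False  # still running after s steps

rng = random.Random(0)  # the Judge's randomness (step 5)
for _ in range(5):
    p = random_program(rng)
    print(repr(p), "halts within 10**4 steps:", halts_within(p, 10_000))
```

A computer champion can run exactly this sort of simulation natively, while the human champion must hand-trace each program against the clock, which is why the Computer team's victory seems so certain.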

Wednesday, March 26, 2008

Measuring AGI intelligence

There are many possible ways to judge the intelligence of a potential AGI. However, there are two criteria for intelligence that matter the most.

1. Is the AGI intelligent enough to destroy humanity in the pursuit of its goals?

2. Is the AGI intelligent enough to be able to steer the future into a region where no AGI will ever arise that would destroy humanity?

If an Unfriendly AGI reaches the level of intelligence described in (1) before a Friendly AGI reaches the level of intelligence described in (2), mankind perishes.

To paraphrase Glengarry Glen Ross: first prize in the race between FAI and UFAI is universal utopia. Second prize is that everyone dies. (No gift certificate, not even a lousy copy of our home game.)

Sunday, March 9, 2008

Making the Best of Incomparable Utility Functions

Suppose there are only two possible worlds: the small world w, which contains a small number of people, and the large world W, which contains a much larger number of people. You currently estimate there is a .5 chance that w is the real world, and a .5 chance that W is the real world. You don't have an intuitive way of "comparing utilities" between these worlds, so as a muddled compromise you currently spend half your resources on activities AW that maximize utility function UW if W is real, and the other half on activities Aw that maximize utility function Uw if w is real.

This is not a pure utilitarian strategy, so we know that our activities may be suboptimal, and that we can probably do better in terms of maximizing our total expected utility. Here is one utilitarian strategy, among many, that usually dominates our current muddled compromise.

Calculate expected utility EU' of a local state s as:

EU'(s) = P(W) U'(s|W) + P(w) U'(s|w)

with P being the probability, and U' being any new utility function that obeys these constraints:

1. U'(s|W), the utility of s if W is real, is an affine transformation of UW.

2. U'(s|w), the utility of s if w is real, is an affine transformation of Uw.

3. If the only activities you could do were AW and Aw, and P(W) and P(w) were both .5, then EU' would be maximized by spending half of your resources on AW and half on Aw.

What have we gained?

1. We now have a way to evaluate the utility of a single action that will help in both W and w.

2. We now have a way to evaluate the utility of a single action that helps in W but harms in w, or vice versa.

3. As future evidence comes in about whether W or w is real, you have a way to optimally shift resources between AW activities and Aw activities.
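As a toy numerical sketch of gains #1 through #3 (my own construction; the post names no concrete utility functions): suppose spending a fraction x of your resources on AW yields sqrt(x) utility if W is real, and spending the remainder on Aw yields sqrt(1-x) utility if w is real. Leaving both affine scales at 1 already satisfies constraint 3, since EU'(x) = P(W) sqrt(x) + P(w) sqrt(1-x) peaks at x = .5 when both probabilities are .5. Gain #3 then falls out of the same formula as evidence shifts P(W):

```python
import math

# Toy sketch (my construction): diminishing-returns sqrt() utilities, with
# affine scales chosen so that at P(W) = P(w) = .5 the optimal split is
# 50/50, satisfying constraint 3.

def expected_utility(x, p_W):
    """EU'(x) = P(W) U'(x|W) + P(w) U'(x|w) for a resource split x."""
    return p_W * math.sqrt(x) + (1 - p_W) * math.sqrt(1 - x)

def optimal_split(p_W):
    """Setting d(EU')/dx = 0 gives x / (1 - x) = (P(W) / P(w)) ** 2."""
    r = (p_W / (1 - p_W)) ** 2
    return r / (1 + r)

for p_W in (0.5, 0.6, 0.9):
    x = optimal_split(p_W)
    print(f"P(W)={p_W:.1f}: put {x:.0%} of resources into AW "
          f"(EU'={expected_utility(x, p_W):.3f})")
```

Note that with P(W) = .9 the optimal split is about 99% of resources into AW, not 90%: a pure expected-utility maximizer concentrates resources faster than proportional betting would suggest.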

Sunday, February 24, 2008

Disagreement Checklist

If you seem to disagree with someone about a factual statement, it's important to consider all four possible explanations:

1. The other person's conscious beliefs are incorrect.

2. Your conscious beliefs are incorrect.

3. The other person is deliberately deceiving you about his conscious beliefs.

4. You have the same conscious beliefs, but your understanding of what the other person is trying to say is incorrect, and you perceive a factual disagreement where none exists.

Obviously more than one of these can apply to a given situation.

Sunday, February 10, 2008

Majoritarianism and the Efficient Market

Posit four cases where your opinion differs from the majority:

1a. You are considering which checkout counter to go to at the supermarket; there are no lines. You are about to go to counter A, where the cashier looks like he'd be faster, when you are told that 3/4 of the people who shop here use counter B instead when there are no lines.

1b. The checkout counters have long lines; counter B's line is three times as long as counter A's, but appears to be moving only twice as fast (so the expected wait at B is roughly 1.5 times the wait at A). You feel inclined to use counter A.

2a. You are, bizarrely, forced to make a 50-50 bet on whether Sun Microsystems will make a profit this year. You are about to bet "yes", when you are informed that 3/4 of the people offered this bet said "no".

2b. You are forced to either sell short or buy long on Sun Microsystems' stock. You are about to buy long when you are informed that three times as many people have shorted the stock as have gone long on it.

Philosophical Majoritarianism suggests you consider changing your mind in cases 1a and 2a, but not in 1b and 2b. The reason is that, to the degree that Majoritarianism is valid, it doesn't matter which line you pick in 1b, or whether you buy or sell in 2b: the majority's decisions, if wise, have already washed out the difference, so Majoritarianism provides no direct guidance on what to do.
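A quick sketch of the wash-out in case 1b (illustrative numbers of my own):

```python
# Case 1b arithmetic (my own illustrative numbers): wait = people / rate.
wait_A = 10 / 1.0      # baseline line length, baseline checkout rate
wait_B = 30 / 2.0      # three times the line, only twice the rate
print(wait_A, wait_B)  # 10.0 vs 15.0: counter A looks better

# If the majority were already choosing wisely, shoppers would defect from
# B to A until the two waits equalized; at that point the choice of line
# no longer matters, which is exactly why Majoritarianism is silent on 1b.
```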

Sunday, January 27, 2008

Free will and future actions

Suppose I chose to make an altruistic decision today. That decision was my choice via "free will," by which I mean that I choose my own actions and take responsibility for them. However, it is also the case that the decision was determined by some process: either by deterministic processes, or by random quantum-mechanical fluctuations that could easily have turned out some other way. There is some set of mechanical preconditions that had to be satisfied before I could make that altruistic decision.

I intend to make reasonably altruistic decisions in the future, as well. However, if these mechanical preconditions fail to hold true for some reason in the future, I must acknowledge that, despite my current intentions, I will not make altruistic decisions in the future. I should take responsibility for the consequences of my future actions, in addition to my present actions.

Therefore, I should keep in mind the following:

* Where my present actions can influence my future actions from the mechanical point of view, there is some utility in deciding to take present actions that are likely to lead to positive future actions.

* Whenever contemplating taking on positions of power, I should avoid "moral overconfidence", that is, when considering whether power would corrupt me, I should try to objectively weigh all the data available to me, rather than believe that "free will" means that the Rolf who exists in the year 2007 can magically and infallibly determine the actions and goals of the Rolf who exists in the year 2017.

Obviously this does not mean that I can shirk responsibility for maintaining past promises; quite the opposite. In addition, the intent is not to go to an extreme and "declare war on my future self"; instead the goal is to take responsibility for all of the causal consequences of my current actions, including the consequences caused by the influence my current actions have on my future actions.

Saturday, January 5, 2008

Open letter on disenfranchisement in the Michigan Primary

To: the Democratic and Republican parties

Until now, I have considered myself an independent: for example, I have voted for both Democratic and Republican candidates for President, based on their individual merits. This may change soon, based on your actions.

You have until January 14, 2008 to resolve the threatened disenfranchisement of the Michigan primary voters. Satisfactory resolutions would include either of the following:

1. Your national party unambiguously withdraws the threat to punish the Michigan voters. Vague and unenforceable promises by the candidates to seat the Michigan delegates are not sufficient, since the candidates may break these promises if the convention vote ends up being close.

2. Your Michigan state party manages to move the primary back to February, thus making your threat moot.

On January 15th, if the Democrats have done a better job of resolving this crisis than the Republicans, I will vote a straight Democratic ticket for the next 20 years, at least in the usual case where there is no viable third-party candidate. If the Republicans have done a better job than the Democrats, I will vote a straight Republican ticket for the next 20 years, at least in the usual case where there is no viable third-party candidate.

Note that, if the current "status quo" continues, I will end up voting a straight Republican ticket, as the current Republican Party compromise of cutting the delegate count in half is marginally superior to the current Democratic Party threat of total disenfranchisement.

Sincerely,

Rolf Nelson