Saturday, December 29, 2007

Foresight Exchange

I opened a Foresight Exchange account today (rolf.h.d.nelson). The main purpose is to challenge myself to become a better thinker by forcing myself to think through both sides of why future events may or may not happen. My initial plan was to buy or sell one share in each of the "U.S. News" categories. I got through about six of the items before I gave in to the temptation to put all the rest of my money down on "No" for the mispriced USAGeo.

The presence of mispriced items in this play-money exchange didn't surprise me, especially for claims of the form "X will happen by the year Y." Presumably people put down "Buy if it goes down to price Z" orders, and as year Y comes closer and the price drops naturally well past Z, they have little incentive to log back in and rescind their now-insane orders (especially if they've abandoned their accounts).
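
A toy sketch of that mechanism (the prices, decay rate, and matching behavior here are invented for illustration; FX's actual order handling may differ):

    # Sketch: how a stale "buy at Z" order can keep a claim mispriced.
    fair_value = 60          # honest probability estimate today, in cents
    stale_bid = 30           # "buy if it drops to 30" order placed long ago
    decay_per_month = 5      # fair value falls as the deadline approaches

    for month in range(1, 13):
        fair_value = max(fair_value - decay_per_month, 0)
        if fair_value < stale_bid:
            # A seller of the claim only needs one willing counterparty: the
            # stale bid fills at 30 even though the claim is now worth much
            # less, so the printed price stays pinned above fair value.
            print(f"Month {month}: trade at {stale_bid}, fair value {fair_value}")
            break
        print(f"Month {month}: no trade; best bid {stale_bid}, fair value {fair_value}")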

What did surprise me was how *boring* the brief experience was. Most of the decisions revolved, not around profound questions of philosophy or ideology, but around peering at graphs and trying to extrapolate probabilities of non-controversial events. The poor user interface added to the boredom factor as well.

Sunday, December 9, 2007

Am I corruptible?

Lord Acton said that "power corrupts." Others say that corrupt people in power were already corruptible to begin with; we just don't notice that people are prone to corruption before they gain power, because they never had the opportunity to benefit from corruption.

As a thought experiment, let's use the following model: 80% of people are corruptible; that is, they will act corruptly if they become the King. There is no way of determining whether someone is corruptible before they become the King. Everyone publicly denies that they are corruptible. Two worlds exist, identical in every respect except:

In the "Self-Deceptive World", everyone has a self-image of themselves as incorruptible before they gain power.

In the "Self-Aware World", everyone is fully aware of whether they are corruptible; the corruptible people merely lie and claim that they are incorruptible.

These are the only two worlds that exist; to put it another way, the a priori probability that you live in one world rather than the other is 50%.

You have the self-image of yourself as someone who is incorruptible, but you have never been the King, and are unsure of which of the two worlds you live in. In this case, I would reason as follows:

Pick ten people at random: on average, five will be from the Self-Deceptive World and five from the Self-Aware World. All five people from the Self-Deceptive World will have an incorruptible self-image, but only one person from the Self-Aware World (the single genuinely incorruptible one) will. Therefore, the odds are 5:1 that you live in the Self-Deceptive World, in which case your self-image tells you nothing and you are 80% likely to be corruptible.
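
To make the arithmetic explicit, here is a minimal sketch of the same calculation in Python (the 50/50 prior over worlds and the 80% base rate are the assumptions of the thought experiment above):

    # The argument above as an explicit Bayes calculation.
    p_corruptible = 0.8            # base rate of corruptibility in both worlds
    p_sd_prior = 0.5               # a priori chance of the Self-Deceptive World

    # P(you hold an incorruptible self-image | which world you live in)
    p_image_given_sd = 1.0                  # everyone self-deceives
    p_image_given_sa = 1 - p_corruptible    # only the genuinely incorruptible 20%

    # Posterior odds of the Self-Deceptive World, given your self-image
    odds_sd = (p_sd_prior * p_image_given_sd) / ((1 - p_sd_prior) * p_image_given_sa)
    p_sd = odds_sd / (1 + odds_sd)

    # Chance you are in fact corruptible (zero if you are self-aware and
    # sincerely report an incorruptible self-image)
    p_you_corruptible = p_sd * p_corruptible
    print(f"Odds of Self-Deceptive World: {odds_sd:.0f}:1")     # 5:1
    print(f"P(you are corruptible) = {p_you_corruptible:.2f}")  # about 0.67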

Sunday, November 25, 2007

Efficient Market Hypothesis

Suppose that you have a logical argument, which seems compelling to you, that publicly available information has not been reflected in an asset's price. (One example might be here; otherwise, I'm sure you can pick out a different argument that has occurred to you at some point.) If you have funds to invest, should you concentrate your investments in that area? I would argue, generally, no, because of Copernican (majoritarian) considerations, including various forms of the Efficient Market Hypothesis.

If, instead, you have a partially Ptolemaic viewpoint and are logically consistent, you would probably conclude that any time you see everyone else make what seems to you like a logical mistake, you should spend significant effort determining how you can profit from that mistake.

For example, suppose you believe that, with probability p, you are now in a 'privileged epistemological position' that will increase your expected annual return from 1.06 to 1.10 if you actively (rather than passively) manage your portfolio. (But with probability 1-p, there is no such thing as a privileged epistemological position; if you actively manage anyway, your expected return drops to 1.05 because of transaction costs.) If your probability p is above 0.2, you would want to actively manage rather than passively manage.
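
A minimal sketch of that break-even arithmetic, using the return figures assumed above:

    # Break-even check for active vs. passive management, using the returns
    # assumed above: 1.10 if your position really is privileged, 1.05 if it
    # isn't (transaction costs), 1.06 for passive indexing.

    def expected_active(p):
        """Expected gross annual return from active management, given a
        probability p that you hold a privileged epistemological position."""
        return p * 1.10 + (1 - p) * 1.05

    passive = 1.06

    # Solve p*1.10 + (1-p)*1.05 = 1.06  =>  p = 0.01 / 0.05 = 0.2
    breakeven = (passive - 1.05) / (1.10 - 1.05)
    print(f"Break-even p = {breakeven:.2f}")   # 0.20

    for p in (0.1, 0.2, 0.3):
        verdict = "active" if expected_active(p) > passive else "passive"
        print(f"p = {p}: expected active return {expected_active(p):.3f} -> {verdict}")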

The problem with active management, of course, is that in the existing market, for every winner there must be a loser. So there's a "meta-level" where the above must be, on average, bad advice. It's not clear to me how to consistently avoid these types of traps without recourse to a Copernican Epistemology.

Saturday, November 10, 2007

Pure Copernican epistemologies

Suppose there are multiple people in the room, including you, who (even after discussing the objective facts) all hold different honest ("Experience") beliefs. You have to make a correct decision based on those beliefs. Consider four algorithms for making the decision.

1. Always base your decision on your own ("Experience") beliefs.

2. Always go with the beliefs of whoever you judge has the most "common sense" in the room (which, by staggering coincidence, happens to be you.)

3. Always go with the beliefs of whoever's Social Security Number is 987-65-4320 (which, by staggering coincidence, happens to be your own Social Security Number.)

4. Everyone takes some sort of Good Decision-Making Test that measures their general GDM (Good Decision-Making) ability; go with the beliefs of whoever scores highest.

The first two are clearly not "Copernican Epistemologies", as they posit as axioms that you have 'privileged access' to truth. If you wish to adopt a purely Copernican Epistemology, you would reject (1) and (2). Would you have a preference between (3) and (4)? Both (3) and (4) are, on their face, Copernican. But the decision of which algorithm to use to make your current decision is, itself, a decision! If you apply a Copernican process to that decision, and so on recursively, you would (in theory) eventually come back to some small set of consistent axioms, and would reject (3).

I personally believe that a normative "Theory of Everything" epistemology would have to be purely Copernican, rather than partially Ptolemaic. To elaborate, it would have to be an epistemology where:
  1. There are a relatively small set of axioms (for example, there is no room for axioms that directly reference Social Security Numbers)
  2. None of these axioms explicitly reference yourself as a privileged source of knowledge, with the exception that I would allow some privileged access to your own current consciousness, and your own current thoughts and beliefs. (You do not have privileged access to your past feelings, thoughts, and beliefs; you have to infer those from your current thoughts and beliefs, like everyone else.) To be clear, this privileged access would not be of the form "I have privileged knowledge that my belief about X is correct," but rather of the form "I have privileged knowledge that I believe 'X is correct.' In contrast, I don't know whether Joe believes that 'X is correct'; he says he does, but for all I know, he's deliberately lying."

Saturday, November 3, 2007

Some hypothetical answers for the Wire Disagreement Dilemma

Here is a sampling of possible answers for the Wire Disagreement Dilemma:

1. Always go with your own "Experience" beliefs (cut the red wire).

2. Always go with the beliefs of whoever's Social Security Number is 987-65-4320 (which, by staggering coincidence, happens to be your own Social Security Number.)

3. Always go with the beliefs of whoever you judge has the most "common sense" in the room (which, by staggering coincidence, happens to be you.)

4. Always go with the majority belief.

5. Always go with the belief of the person who had the highest IQ test scores on his most recent test.

6. Always go with the person with the most education (as measured in years of schooling).

7. Assign a score, based on a preconceived formula that weights one or more of the previous considerations. Then go with whoever has the highest score, unless you really dislike the outcome, in which case go with your "Experience" beliefs.

Ptolemaic vs. Copernican Epistemologies. One of the differences between these solutions is the degree to which they presuppose that you have privileged access to the truth. For lack of a better term, I would call systems Copernican Epistemologies if they posit that you have no privileged access to the truth, and Ptolemaic Epistemologies if they posit that you do have privileged access to the truth. This is a spectrum: "Always go with your own 'Experience' beliefs" is the exemplar of Ptolemaic belief; "I have no privileged 'Experience' beliefs" is the exemplar of Copernican belief; there are plenty of gradations in between.

Note that it is not possible for a human to actually implement a 100% pure Ptolemaic belief system, nor a 100% pure Copernican belief system. For example, your belief about "what I would have believed, apart from other people's opinions" will, in practice, be tainted by your knowledge of what other people believe.

Sunday, October 28, 2007

Wire Disagreement Dilemma

You are locked in a room with two other people and a time bomb. To disarm the bomb, you must choose correctly between cutting the red wire or the blue wire on the bomb; cutting the wrong wire, or failing to cut either of the wires in time, will trigger the bomb. Any one of the three of you can choose to lunge forward and cut one of the wires at any time.

Each of you puzzles over the circuit-wiring schematic. You find an airtight, 100% certain proof that the red wire is the wire that needs to be cut. But simultaneously, your two allies report that they have come up with airtight, 100% certain proofs that the blue wire needs to be cut! You cannot come to a consensus, either because you do not have time, or because you simply cannot understand each other's proofs.

Your choices are:

1. Lunge forward and cut the red wire.

2. Allow your allies to cut the blue wire.

How do you make your decision? Call this the Wire Disagreement Dilemma.

Notes:

1. According to the most straightforward application of classical logic, you should lunge forward and cut the red wire.

2. Philosophical Majoritarianism doesn't tell you exactly what to do. PM seems to be a heuristic that you use alongside other, sometimes conflicting, heuristics. As I've seen it outlined, it doesn't seem to tell you much about when the heuristic should be used and when it shouldn't.

3. There's a sense in which you never have an actual proof when you make a decision, you only have a memory that you had a proof.

4. Consider two people, Alice and Bob. Alice should not automatically give her own beliefs "magical precedence" over Bob's beliefs. However, there are many circumstances where Alice should give her own beliefs precedence over Bob's; there are also circumstances where Alice should defer to Bob.

5. This type of thinking is so rare that (to my knowledge) we don't even have a short word to describe the difference between "I believe X because I reasoned it out myself" and "I believe X because someone smarter or more experienced than me told me X, even though, on my own, I would have believed Y."

In normal conversation, you have to use cumbersome phrases and idioms: for example, "it seems to me like X" in the former case and "my understanding is that X" in the latter case.

Experience vs. Hearing: As technical terms, I'd propose that in the former case we say "I Experience X" or "my Experience is X." In the latter case we can say "I Hear that X" or "my Hearing is X."

6. One asymmetry, when Alice is evaluating reality, is that she generally knows her own beliefs but doesn't necessarily know Bob's beliefs. Bob may be unavailable; Bob may be unable to correctly articulate his beliefs; Alice may misunderstand Bob's beliefs; there may not be time to ask Bob his beliefs; or Bob may deliberately deceive Alice about his beliefs.

Saturday, October 20, 2007

Occam's Meta-Razor

Let me define the Occam's Meta-Razor Problem as follows: what is the smallest and simplest set of basic philosophical postulates that a rational agent needs in order to act in a way that is intuitively satisfactory? The goal is that the behavior should satisfice, even if it's not necessarily optimal.

Intuitively, I think we want three items:

1. A simple way to analyze probabilities. Something like Solomonoff Induction might satisfice, if the Pascal's Mugging problem were solved.

2. A utility function. A starting point might be, “Maximize the expected amount of X in the Universe,” where X is some weighted combination of happiness, freedom from pain, autonomy, etc. A satisfactory but simple description of X would be difficult to specify unambiguously, especially in the case where the agent wields super-human intelligence. Two of many possible pitfalls:

  • For almost all X, the current set of humans who are alive (and humanity in general) are going to be sub-optimal, from the point-of-view of the agent. However, we want the agent to decide against wiping out humanity and replacing it with species that are “more worthy” according to its utility function.
  • We would want some portion of X to include the concept of “autonomy” and preserve our abilities to make informed, uncoerced decisions. But, a sufficiently smart agent could peacefully convince (trick?) me into making any number of ludicrous decisions. It's not clear how to unambiguously define “coercion” in the case of a super-intelligent agent.

3. A simple decision theory, such as Evidential Decision Theory (which I believe subsumes Hofstadter superrationality), or Causal Decision Theory (which is the standard in mainstream Game Theory.) Either should satisfice, though I regard Evidential Decision Theory as much simpler.
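
As a toy illustration of where the two decision theories come apart, here is a minimal Newcomb-style calculation (a sketch only; the payoffs and the 99%-accurate predictor are illustrative assumptions, not something from the post):

    # Toy Newcomb-style problem where Evidential and Causal Decision Theory
    # disagree. Payoffs and predictor accuracy are invented for illustration.

    ACCURACY = 0.99      # how often the predictor anticipates your choice
    BOX_A = 1_000_000    # opaque box, filled only if "one-box" was predicted
    BOX_B = 1_000        # transparent box, always available

    # Evidential DT: treat your own choice as evidence about the prediction.
    edt_one_box = ACCURACY * BOX_A
    edt_two_box = (1 - ACCURACY) * BOX_A + BOX_B

    # Causal DT: the prediction is already fixed and your choice can't change
    # it; for any fixed probability q that box A is full, two-boxing adds BOX_B.
    q = 0.5
    cdt_one_box = q * BOX_A
    cdt_two_box = q * BOX_A + BOX_B

    print("EDT prefers:", "one-box" if edt_one_box > edt_two_box else "two-box")
    print("CDT prefers:", "one-box" if cdt_one_box > cdt_two_box else "two-box")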

Being philosophical principles, obviously these can't be directly used to create a real, resource-limited AGI; for example, Solomonoff Induction is too slow for practical use.

But, as a set of normative philosophical principles for a human being to use, these seem like a reasonable starting point.


[edit -- decided to call it "Occam's Meta-Razor" rather than "Meta-Occam's Razor"]