Speculations on Math, Philosophy, and Humanity

"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." -Feynman


Cognitive biases and programming languages: some speculative musings (2009-08-03)

True story: in college, our professor asked our 20-person class which language they thought was better: C++ or Scheme. To this vague question, 19 of us voted for C++; one indignant holdout student (and the professor) said Scheme.

The professor then asked us which of the two languages we had learned first. The same nineteen of us said C++; the one holdout had, oddly, learned Scheme years earlier, before ever touching an imperative language.

The holdout seized on an explanation for why the rest of the class had the wrong opinion, and declaimed without a hint of irony: "You all learned C++ first; that means *you're* all biased!"

Rational considerations

As a software developer, I often have to choose which programming language to use to implement a new software project. Many rational factors can be weighed; here is an incomplete list:

* What languages are current and potential project members most proficient in?
* Is the language a good match for the project's domain?
* How much does the given language enhance productivity?
* Is the language well-supported on the target platform?
* What libraries are available to integrate with in the various languages?

Irrational considerations

Unfortunately, it is all too easy to succumb to irrationality. While I have only anecdotal evidence, I believe that we tend to over-rate languages we are proficient in, and under-rate languages we are less proficient in. Furthermore, I believe that this holds true even when the pattern of languages we've learned is accidental: while we may learn a language because we over-rate it, it's also true that we over-rate a language because we've learned it. To further extend my unscientific speculation, here are some possible reasons why we might over-rate our own languages:

* Illusory superiority. If you're good at basketball but bad at soccer, you will find your brain generating reasons why basketball is more important than soccer, even though we rational people know the reverse to be true.

* Rationalization. If you choose to learn a programming language, you will want to rationalize afterwards that your choice was correct, even if in reality you had far too little information at the time to make an informed decision. If you decline to pick up proficiency in a different programming language, you will want to rationalize that you declined because the language is inferior, rather than that you were too lazy to explore it.
* Ingroup bias. The existence of Internet flamewars shows that we can root for programming languages the same way we root for sports teams.

In evolutionary-psychology terms, the first two are products of evolutionary pressures to project an aura of competence; the third is a product of evolutionary pressures to signal loyalty to your ingroup.


Reviews of books that changed the reviewer's mind (2008-06-22)

A nonfiction book reviewer will generally review a book on two aspects:

1. How many interesting, new, useful, and accurate ideas are presented in the book?

2. Is the book's thesis accurate and righteous? In other words, does the book fit in well with the reviewer's pre-reading worldview?

My goal is generally not to replace my own prejudices and worldview with those of the reviewer, so I would like to seek out books that meet criterion #1. If I had a thousand concurrent additional lifetimes right now, I think Rolf #428 would write up a list of book reviews where the reviewer claimed that the book substantively changed his mind about a topic, to the extent that the reviewer reversed a previously-endorsed opinion. This would produce a list of books that are more likely to meet criterion #1: the books would be more likely to have been recommended because their content was compelling, rather than because they affirmed the reviewer's pre-existing beliefs.

One caveat: an intensification of a previously-held belief would not count. For example, a reviewer saying "I believed before that Bush was a mediocre president, but now I realize he's the worst man who ever lived!" would not count as a reversal. In addition, a shift from an unpopular belief to a popular belief would not weigh as heavily as a shift from a popular belief to an unpopular one.


The reluctance to give probabilities (II) (2008-06-08)

The last of the three theories I gave in the last post (http://rolfnelson.blogspot.com/2008/05/reluctance-to-give-probabilities.html) ties into a notion I have of what we mean by "admissible" vs. "inadmissible" evidence when evaluating what the "true" probability of an event is. Nothing in the external objective world corresponds directly to what we term "admissibility"; the question of admissibility seems to be part of our evolved social-judgment kit.

I think reluctance to commit is a (possibly evolved) mechanism targeted specifically at combating other people's hindsight bias. It fills a predictive gap in the other theories: your "weight" on the coin toss is, in a sense, 0. Once the coin is tossed, you have no reason to doubt someone who tells you it came up heads! And yet people feel no hesitation about assigning it a .5 probability, even after the coin has already been flipped, because they know nobody will doubt their competence for doing so.

Here are some different ways that predictions can be framed. For the Ignorant Fight Scenario of the last post:

1. I have no idea who will win. (Extremely tentative)

2. I give a 50% chance the wrestler will win. (Less tentative)
Similarly, if the only thing you know is that wrestlers train on a wider variety of maneuvers than boxers, and you therefore give a slight edge to the wrestler:

1. It seems a little bit more likely the wrestler will win. (Extremely tentative)

2. I give a 60% chance the wrestler will win. (Less tentative)

3. I give a 60.020001% chance the wrestler will win. (Not at all tentative)

(2) is more tentative than (3), maybe because of norms of parsimony in conversation. It's less clear why (1) is so much more tentative than (2); perhaps there is a norm that you only give a numeric percentage when you're willing to stand behind your estimate.

Note that newspapers rarely give probabilities of events, aside from weather forecasts that can be "justified" by pointing to a large ensemble of past data and uncontroversial analysis.


The reluctance to give probabilities (2008-05-04)

Coin Scenario: I am about to flip a fair coin. What is the probability it will come up Heads?

Ignorant Fight Scenario: The World's Greatest Boxer fights the World's Greatest Wrestler. Will the wrestler win? (Assume you have no reasonable methodology, other than the Principle of Insufficient Reason, for deciding whom to bet on.)

Howard Raiffa (http://en.wikipedia.org/wiki/Howard_Raiffa) argued that in both cases you should assign a subjective probability of 0.5. However, most people are more *comfortable* assigning a probability in the Coin Scenario. In the Ignorant Fight Scenario, most people would rather leave it unspecified, or give a "probability of probabilities." Why? Here are some (overlapping) possible reasons; a short sketch after this list illustrates the first one.

1. Weight of Evidence: When you look at the larger model that probability assessments come from, you can sometimes attach different weights to various probability assessments. The significance of the weights is that *if* you find that two different probability estimates conflict, then the estimate with more weight should shift less than the estimate with less weight. Leaving a "blank space" in your probability estimate might be a reasonable way for an agent with limited memory storage to remember that the weight is close to zero. (This paper's explanation (http://www.stat.columbia.edu/%7Ecook/movabletype/archives/2005/09/the_boxer_the_w.html) is probably isomorphic to this theory.)

2. Ease of Gaining Additional Evidence: You leave a "blank space" for now if you don't see a use for an initial estimate, but believe it would be easy to gain a more refined estimate later, "if needed." The human equivalent of lazy evaluation (http://en.wikipedia.org/wiki/Lazy_evaluation).
3. Avoiding Looking Foolish: The more certain you sound about the "validity" of your estimate, the more foolish you will look when (not if) others, in hindsight (http://en.wikipedia.org/wiki/Hindsight_bias), decide after the bout that there was evidence that you *should have considered*, but failed to, when making your probability assessment. This is the explanation that I lean toward.
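A minimal sketch of the "weight" idea from reason 1, using Beta-distribution pseudo-counts (my own illustration, not from the original post): two estimates can share the same 0.5 point value while carrying very different weights, and the low-weight estimate shifts far more when the same new evidence arrives.

```python
# Illustration (not from the original post): "weight" as Beta pseudo-counts.
# Both estimates start at 0.5, but with very different evidential weight.

def beta_mean(a, b):
    """Point estimate (posterior mean) of a Beta(a, b) distribution."""
    return a / (a + b)

# Coin Scenario: 0.5 backed by heavy pseudo-counts (symmetry, lots of data).
coin = (1000, 1000)
# Ignorant Fight Scenario: 0.5 backed by almost nothing (Insufficient Reason).
fight = (1, 1)

for name, (a, b) in [("coin", coin), ("fight", fight)]:
    before = beta_mean(a, b)
    # The same conflicting evidence arrives: 4 "successes" in a row.
    after = beta_mean(a + 4, b)
    print(f"{name}: {before:.3f} -> {after:.3f}")

# coin:  0.500 -> 0.501   (high weight: barely moves)
# fight: 0.500 -> 0.833   (weight near zero: swings wildly)
```

Storing only "0.5" loses the distinction between these two cases; leaving the fight estimate blank is one cheap way to remember that its weight is near zero.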
A skeptical view of the 40-hour work week (2008-04-19)

Many people hold the view that "crunch mode doesn't work" (http://www.igda.org/articles/erobinson_crunch.php); one extreme viewpoint, endorsed by Extreme Programming (http://en.wikipedia.org/wiki/Extreme_Programming_Practices#Sustainable_pace), is that working more than forty hours per week doesn't increase total weekly productivity at all. I will call this the "Productivity Laffer Curve" (http://en.wikipedia.org/wiki/Laffer_curve) viewpoint.

The "Productivity Laffer Curve" hypothesis is odd, because "work" is a vague term. It's like being told that eating green things reduces your risk of cancer. What sort of green things? Vegetables? Mold? Green M&M's? The analogy here is that, just as there's no reasonable causal mechanism by which greenness can *directly* prevent cancer, there's no reasonable causal mechanism by which the fact that someone labeled a task as "work" can directly reduce your ability to do that task. If people who work more than 40 hours lose effectiveness, then there must be some causal factor involved, such as physical fatigue, mental fatigue, or loss of motivation; and it may be better to determine what those factors are, and address them directly, rather than artificially curtail your work week.

One reason for skepticism about the "Productivity Laffer Curve" is that companies don't always push 40-hour work weeks, *even though* a 40-hour work week creates a benefit for the company (it makes it easier to recruit and retain workers) *beyond* the benefits of greater productivity. If the hypothesis were true, any company with a 40-hour work week would have an enormous advantage over its competitors: a recruiting and retention advantage, *without* having to sacrifice even a little bit of productivity. However, I don't think we see this in practice.


The World's Simplest* Intelligence Test (2008-04-06)

Computers can beat humans at Chess and can multiply numbers faster. Humans can beat computers at Go and can pass the Turing Test. Who's "really" smarter? Not that it matters, but one way to answer this unimportant trivia question is to run The World's Simplest* Intelligence Test.

* In the interest of fairness, the word "simplest" is not defined as what a human would intuitively judge "simplest," nor what Microsoft Windows would judge "simplest". Instead, we will use the more objective standard of Kolmogorov Complexity (http://en.wikipedia.org/wiki/Kolmogorov_complexity).

The steps for the test would be:

1. Natural Selection will evolve human beings, who will design computers. (This step has already been done, which is fortunate, as doing it ourselves is beyond the current budget for FY 2008.)

2. A Human team (consisting of both noble humans and loyal computers) will choose and prep a human champion, and a Computer team (consisting of both traitorous humans and ungrateful computers) will choose and prep a computer champion.

3. Pick a simple Turing Machine that supports prefix-free programs. If this is deemed "pejorative" against the Human team, a Turing Animal (for example, a rabbit somehow trained to follow a trail of carrots in a Turing-equivalent manner) may be substituted.

4. Pick a Contest duration t, a number of tests n, and an extremely large Evaluation Step Count s.

5. A Judge will generate n random prefix-free programs, similar to those used in defining the halting probability (http://en.wikipedia.org/wiki/Chaitin%27s_constant). For each such program, each champion will have time t to determine whether the program would halt within s steps.

6. If the champions produce different answers, the Judge then determines, using brute force if necessary, who was correct.

It seems fairly obvious that the Computer team would win any such contest.
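A toy harness for steps 4-6, as a sketch (my construction, not from the post): it substitutes a randomly generated Turing machine for the "random prefix-free program" (ignoring the prefix-free encoding detail), brute-forces the ground truth by running s steps, and scores two stand-in champions.

```python
import random

def random_tm(n_states=4, n_symbols=2, seed=None):
    """A random Turing machine: (state, symbol) -> (state', symbol', move).
    State -1 is the halt state. A toy stand-in for a random prefix-free
    program; the real contest would use a prefix-free encoding."""
    rng = random.Random(seed)
    table = {}
    for state in range(n_states):
        for sym in range(n_symbols):
            new_state = rng.choice(list(range(n_states)) + [-1])  # -1 = halt
            table[(state, sym)] = (new_state, rng.randrange(n_symbols),
                                   rng.choice([-1, 1]))
    return table

def halts_within(table, s):
    """Brute-force ground truth: run on a blank tape for at most s steps."""
    tape, state, pos = {}, 0, 0
    for _ in range(s):
        if state == -1:
            return True
        state, tape[pos], move = table[(state, tape.get(pos, 0))]
        pos += move
    return state == -1

# Steps 5-6: the Judge generates n random "programs"; each champion is
# modeled as a prediction function.
n, s = 200, 10_000
programs = [random_tm(seed=i) for i in range(n)]
truth = [halts_within(p, s) for p in programs]

champion_human = lambda p: True                   # a guesser, out of time t
champion_computer = lambda p: halts_within(p, s)  # can afford the brute force

for name, champ in [("human", champion_human), ("computer", champion_computer)]:
    score = sum(champ(p) == t for p, t in zip(programs, truth))
    print(f"{name} champion: {score}/{n} correct")
```

In this toy version the brute-force champion is correct by construction; the real contest's interest lies in the time budget t, which the sketch ignores.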
Measuring AGI intelligence (2008-03-26)

There are many possible ways to judge the intelligence of a potential AGI. However, there are two criteria for intelligence that matter the most.

1. Is the AGI intelligent enough to destroy humanity in the pursuit of its goals?

2. Is the AGI intelligent enough to be able to steer the future into a region where no AGI will ever arise that would destroy humanity?

If an Unfriendly AGI reaches the level of intelligence described in (1) before a Friendly AGI reaches the level described in (2), mankind perishes.

To paraphrase Glengarry Glen Ross: first prize in the race between FAI and UFAI is universal utopia. Second prize is, everyone dies. (No gift certificate, not even a lousy copy of our home game (http://www.youtube.com/watch?v=K_JIg9NB47M).)


Making the Best of Incomparable Utility Functions (2008-03-09)

Suppose there are only two possible worlds: the small world w that contains a small number of people, and the large world W that contains a much larger number of people. You currently estimate there is a .5 chance that w is the real world, and a .5 chance that W is the real world. You don't have an intuitive way of "comparing utilities" between these worlds, so as a muddled compromise you currently spend half your resources on activities A_W that will maximize utility function U_W if W is real, and the other half on A_w to maximize utility U_w if w is real.

This is not a pure utilitarian strategy, so we know that our activities may be suboptimal; we can probably do better in terms of maximizing our total expected utility. Here is one utilitarian strategy, among many, that usually dominates our current muddled compromise.

Calculate the expected utility EU' of a local state s as:

EU'(s) = P(W) U'(s|W) + P(w) U'(s|w)

with P being the probability, and U' being any new utility function that obeys these constraints:

1. U'(s|W), the utility of s if W is real, is an affine transformation of U_W.

2. U'(s|w), the utility of s if w is real, is an affine transformation of U_w.

3. If the only activities you could do were A_W and A_w, and P(W) and P(w) are both .5, then EU' is maximized when you spend half of your resources on A_W and half on A_w.

What have we gained?

1. We now have a way to evaluate the utility of a single action that will help in both W and w.

2. We now have a way to evaluate the utility of a single action that helps in W but harms in w, or vice versa.

3. As future evidence comes in about whether W or w is real, you have a way to optimally shift resources between A_W activities and A_w activities. (A sketch of this appears below.)
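A minimal sketch of gain #3, under assumptions that are not in the original post: a diminishing-returns model in which spending a fraction x of resources on A_W yields utility k_W*sqrt(x) if W is real, and the remaining 1-x yields k_w*sqrt(1-x) if w is real. With k_W = k_w, constraint 3 holds (the optimum at P(W) = .5 is a 50/50 split), and the optimal split shifts smoothly as P(W) updates.

```python
import math

# Assumed for illustration (not in the post): diminishing returns, so that
# a fraction x spent on A_W yields k_W*sqrt(x) utility if W is real, and
# the remaining 1-x yields k_w*sqrt(1-x) utility if w is real.
k_W, k_w = 1.0, 1.0   # affine scale factors chosen to satisfy constraint 3

def expected_utility(x, p_W):
    """EU'(x) = P(W) U'(x|W) + P(w) U'(x|w) under the sqrt model."""
    return p_W * k_W * math.sqrt(x) + (1 - p_W) * k_w * math.sqrt(1 - x)

def optimal_split(p_W):
    """Closed-form maximizer: x/(1-x) = (P(W) k_W / (P(w) k_w))^2."""
    r = (p_W * k_W) / ((1 - p_W) * k_w)
    return r**2 / (1 + r**2)

# Gain #3: as evidence about W vs. w arrives, shift resources optimally.
for p_W in [0.5, 0.6, 0.8, 0.95]:
    x = optimal_split(p_W)
    print(f"P(W)={p_W:.2f}: put {x:.0%} of resources into A_W "
          f"(EU'={expected_utility(x, p_W):.3f})")
# P(W)=0.50 gives a 50/50 split, satisfying constraint 3.
```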
Disagreement Checklist (2008-02-24)

If you seem to disagree with someone about a factual statement, it's important to consider all four possible explanations:

1. The other person's conscious beliefs are incorrect.

2. Your conscious beliefs are incorrect.

3. The other person is deliberately deceiving you about his conscious beliefs.

4. You have the same conscious beliefs, but your understanding of what the other person is trying to say is incorrect, and you perceive a factual disagreement where none exists.

Obviously, more than one of these can apply to a given situation.


Majoritarianism and the Efficient Market (2008-02-10)

Posit four cases where your opinion differs from the majority's:

1a. You are considering which checkout counter to go to at the supermarket; there are no lines. You are about to go to counter A, where the cashier looks like he'd be faster, when you are told that 3/4 of the people who shop here use counter B instead when there are no lines.

1b. The checkout counters have long lines; counter B's line is three times as long as counter A's. The line at counter B looks to be moving only twice as fast (so the expected wait at B is 1.5 times that at A), and you feel inclined to use counter A.

2a. You are, bizarrely, forced to make a 50-50 bet on whether Sun Microsystems will make a profit this year. You are about to bet "yes" when you are informed that 3/4 of the people offered this bet said "no".

2b. You are forced to either sell short or buy long on Sun Microsystems' stock. You are about to buy long when you are informed that three times as many people have shorted the stock as have gone long on it.

Philosophical Majoritarianism (http://www.overcomingbias.com/2007/03/on_majoritarian.html) suggests you consider changing your mind in cases 1a and 2a, but not 1b and 2b. The reason is that, to the degree that Majoritarianism is valid, *it doesn't matter* which line you pick in 1b, or whether you buy or sell in 2b; the majority decision, if wise, has washed out the difference, so Majoritarianism doesn't provide direct guidance on what to do.


Free will and future actions (2008-01-27)

Suppose I chose to make an altruistic decision today. That decision was my choice via "free will", by which I mean that I choose my own actions, and that I take responsibility for my own actions. However, it is also the case that that decision was determined by some process, either deterministic or arising from random quantum-mechanical fluctuations that could easily have turned out some other way. There is some set of mechanical preconditions that had to be satisfied before I could make that altruistic decision.

I intend to make reasonably altruistic decisions in the future as well. However, if these mechanical preconditions fail to hold at some point, I must acknowledge that, despite my current intentions, I will not make altruistic decisions then. I should take responsibility for the consequences of my future actions, in addition to my present actions.

Therefore, I should keep in mind the following:

* Where my present actions can influence my future actions from the mechanical point of view, there is some utility in taking present actions that are likely to lead to positive future actions.

* Whenever contemplating taking on positions of power, I should avoid "moral overconfidence": when considering whether power would corrupt me, I should try to objectively weigh all the data available to me, rather than believe that "free will" means that the Rolf who exists in the year 2007 can magically and infallibly determine the actions and goals of the Rolf who exists in the year 2017.

Obviously this does not mean that I can shirk responsibility for keeping past promises; quite the opposite. In addition, the intent is not to go to an extreme and "declare war on my future self"; instead, the goal is to take responsibility for all of the causal consequences of my current actions, including the consequences caused by the influence my current actions have on my future actions.


Open letter on disenfranchisement in the Michigan Primary (2008-01-05)

To: the Democratic and Republican parties

In the past, I have considered myself independent. For example, I have voted for both Democratic and Republican candidates for President, based on their individual merits. This may change soon, based on your actions.

You have until January 14, 2008 to resolve the threatened disenfranchisement of the Michigan primary voters. Satisfactory resolutions would include either of the following:
1. Your national party unambiguously withdraws the threat to punish the Michigan voters. Vague and unenforceable promises by the candidates to seat the Michigan delegates are not sufficient, since the candidates may break these promises if the convention vote ends up being close.

2. Your Michigan state party manages to move the primary back to February, thus making your threat moot.

On January 15th, if the Democrats have done a better job of resolving this crisis than the Republicans, I will vote a straight Democratic ticket for the next 20 years, at least in the usual case where there is no viable third-party candidate. If the Republicans have done a better job than the Democrats, I will vote a straight Republican ticket for the next 20 years, under the same condition.

Note that if the current status quo continues, I will end up voting a straight Republican ticket, as the current Republican Party compromise of cutting the delegate count in half is marginally superior to the current Democratic Party threat of total disenfranchisement.

Sincerely,

Rolf Nelson


Foresight Exchange (2007-12-29)

I opened a Foresight Exchange account today (rolf.h.d.nelson). The main purpose is to challenge myself to become a better thinker by forcing myself to think through both sides of why future events may or may not happen. My initial plan was to buy or sell one share in each of the "U.S. News" categories. I got through about six of the items before I gave in to the temptation to put all the rest of my money down on "No" for the mispriced USAGeo.

The presence of mispriced items in this play-money exchange didn't surprise me, especially for claims of the form "X will happen by the year Y." Presumably people put down "buy if it goes down to price Z" orders, and as year Y comes closer and the price drops naturally well past Z, they have little incentive to log back in and rescind their now-insane orders (especially if they've abandoned their accounts).

What did surprise me was how *boring* the brief experience was. Most of the decisions revolved not around profound questions of philosophy or ideology, but around peering at graphs and trying to extrapolate probabilities of non-controversial events. The poor user interface added to the boredom as well.


Am I corruptible? (2007-12-09)

Lord Acton said that "power corrupts." Others say that corrupt people in power were already corruptible to begin with; we just don't notice that people are prone to corruption before they gain power, because they never had the opportunity to benefit from corruption.

As a thought experiment, let's use the following model. 80% of people are corruptible: that is, they will act corruptly if they become King; there is no way of determining whether someone is corruptible before they become King. Everyone publicly denies that they are corruptible.
Two worlds exist, identical in every respect except:

In the "Self-Deceptive World", everyone has a self-image of being incorruptible before they gain power.

In the "Self-Aware World", everyone is fully aware of whether they are corruptible; the corruptible people merely lie and claim that they are incorruptible.

These are the only two worlds that exist; to put it another way, the *a priori* odds that you live in one world rather than the other are 50/50.

You have a self-image of being incorruptible, but you have never been King, and are unsure which of the two worlds you live in. In this case, I would reason as follows:

Pick ten people at random: maybe five will be from the Self-Deceptive World, and five will be from the Self-Aware World. On average, there will be five self-deceptive people from the Self-Deceptive World with an incorruptible self-image, and one self-aware person from the Self-Aware World with an incorruptible self-image (the 20% who really are incorruptible). Therefore, the odds are 5:1 that you live in the Self-Deceptive World, and so you are quite possibly corruptible.
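The same arithmetic as an explicit Bayesian update, sketched below (my framing, not the original post's); it also makes explicit the resulting probability that you yourself are corruptible, a step the post leaves implicit.

```python
# The post's model, as an explicit Bayesian update.
p_sd = 0.5          # prior: Self-Deceptive World
p_sa = 0.5          # prior: Self-Aware World
p_corruptible = 0.8

# P(incorruptible self-image | world):
p_image_sd = 1.0                 # everyone self-deceives
p_image_sa = 1 - p_corruptible   # only the truly incorruptible 20%

# Posterior on the world, given your incorruptible self-image:
evidence = p_sd * p_image_sd + p_sa * p_image_sa
post_sd = p_sd * p_image_sd / evidence
print(f"P(Self-Deceptive World | self-image) = {post_sd:.3f}")  # 0.833, i.e. 5:1

# In the Self-Deceptive World your self-image is uninformative, so you are
# corruptible with probability 0.8 there; in the Self-Aware World, with 0.
p_you_corruptible = post_sd * p_corruptible
print(f"P(you are corruptible | self-image)  = {p_you_corruptible:.3f}")  # 0.667
```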
Efficient Market Hypothesis (2007-11-25)

Suppose that you have a logical argument, which seems compelling to you, that publicly available information has not been reflected in an asset's price. (One example might be here (http://www.overcomingbias.com/2007/11/interpreting-th.html); otherwise I'm sure you can pick a different argument that has occurred to you at some point.) If you have funds to invest, should you focus them in that area? I would argue generally no, because of Copernican (majoritarian) considerations (http://rolfnelson.blogspot.com/2007/11/some-hypothetical-answers-for-wire.html#Ptolemaic_vs._Copernican_Epistemologies), including various forms of the Efficient Market Hypothesis (http://en.wikipedia.org/wiki/Efficient_market_hypothesis).

If, instead, you have a partially Ptolemaic viewpoint, and are logically consistent, you would probably conclude that any time you see everyone else make what seems to you like a *logical* mistake, you should spend significant effort determining how you can profit from the mistake.

For example, suppose you believe that, with probability p, you are now in a "privileged epistemological position" that will increase your expected annual returns from 1.06 to 1.10 if you actively (rather than passively) manage your portfolio. (But with probability 1-p, there is no such thing as a privileged epistemological position. If you actively manage but there is no privileged position, your expected returns drop to 1.05 because of transaction costs.) If your probability p is above 0.2, you would want to actively manage rather than passively manage; the arithmetic is sketched below.

The problem with active management, of course, is that in the existing market, for every winner there must be a loser. So there's a "meta-level" at which the above must be, on average, bad advice. It's not clear to me how to consistently avoid these types of traps without recourse to a Copernican Epistemology.
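The 0.2 threshold, worked out explicitly (a short sketch; the break-even algebra is implicit in the post):

```python
# Expected annual return of each strategy, as a function of p =
# P(you really are in a privileged epistemological position).
PASSIVE = 1.06

def active_ev(p):
    # Privileged position with probability p (returns 1.10); otherwise
    # transaction costs drag active returns down to 1.05.
    return p * 1.10 + (1 - p) * 1.05

# Break-even: 1.05 + 0.05*p = 1.06  =>  p = 0.2.
for p in [0.1, 0.2, 0.3]:
    print(f"p={p:.1f}: active EV = {active_ev(p):.3f}, passive EV = {PASSIVE:.3f}")
# Active management only beats the passive 1.06 once p exceeds 0.2.
```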
Pure Copernican epistemologies (2007-11-10)

There are multiple people in the room, including you, who (even after discussing the objective facts) all have different honest ("Experience") beliefs (http://rolfnelson.blogspot.com/2007/10/wire-disagreement-dilemma.html#Experience_and_Hearing). You have to make a correct decision, based on those beliefs. Consider four algorithms for making the decision:

1. Always base your decision on your own ("Experience") beliefs.

2. Always go with the beliefs of whoever you judge has the most "common sense" in the room (which, by staggering coincidence, happens to be you).

3. Always go with the beliefs of whoever's Social Security Number is 987-65-4320 (which, by staggering coincidence, happens to be your own Social Security Number).

4. Everyone takes some sort of test that measures general GDM (Good Decision-Making) ability; go with the beliefs of whoever scores highest.

The first two are clearly not "Copernican Epistemologies" (http://rolfnelson.blogspot.com/2007/11/some-hypothetical-answers-for-wire.html#Ptolemaic_vs._Copernican_Epistemologies), as they posit as axioms that you have "privileged access" to truth. *If* you wish to adopt a purely Copernican Epistemology, you would reject (1) and (2). Would you have a preference between (3) and (4)? Both (3) and (4) are, on their face, Copernican. But the decision of which algorithm to use to make your current decision is, itself, a decision! If you apply a Copernican process to that decision, and so on recursively, you would (in theory) eventually come back to some small set of consistent axioms (http://rolfnelson.blogspot.com/2007/10/meta-occams-razor.html), and would reject (3).

I personally believe that a normative "Theory of Everything" epistemology would have to be purely Copernican, rather than partially Ptolemaic. To elaborate, it would have to be an epistemology where:

1. There is a relatively small set of axioms (for example, there is no room for axioms that directly reference Social Security Numbers).

2. None of these axioms explicitly references yourself as a privileged source of knowledge, with the exception that I would allow some privileged access to your own *current* consciousness, and your own *current* thoughts and beliefs. (You do not have privileged access to your *past* feelings, thoughts, and beliefs; you have to infer those from your current thoughts and beliefs, like everyone else.) To be clear, this privileged access would not be of the form "I have privileged knowledge that my belief about X is correct," but rather of the form "I have privileged knowledge that I know that 'I believe X is correct.' In contrast, I don't know whether Joe believes that 'X is correct'; he says he does, but for all I know, he's deliberately lying."


Some hypothetical answers for the Wire Disagreement Dilemma (2007-11-03)

Here is a sampling of possible answers for the Wire Disagreement Dilemma (http://rolfnelson.blogspot.com/2007/10/wire-disagreement-dilemma.html):

1. Always go with your own "Experience" beliefs (http://rolfnelson.blogspot.com/2007/10/wire-disagreement-dilemma.html#Experience_and_Hearing) (cut the red wire).

2. Always go with the beliefs of whoever's Social Security Number is 987-65-4320 (which, by staggering coincidence, happens to be your own Social Security Number).

3. Always go with the beliefs of whoever you judge has the most "common sense" in the room (which, by staggering coincidence, happens to be you).

4. Always go with the majority belief.

5. Always go with the belief of the person who had the highest IQ score on his most recent test.

6. Always go with the person with the most education (as measured in years of schooling).

7. Assign a score, based on a preconceived formula that weights one or more of the previous considerations. Then go with whoever has the highest score, unless you really dislike the outcome, in which case go with your "Experience" beliefs.

Ptolemaic vs. Copernican Epistemologies. One of the differences between these solutions is the degree to which they presuppose that you have privileged access to the truth. For lack of a better term, I would call systems Copernican Epistemologies if they posit that you have no privileged access to the truth, and Ptolemaic Epistemologies if they posit that you *do* have privileged access to the truth. This is a spectrum: "Always go with your own 'Experience' beliefs" is the exemplar of Ptolemaic belief; "I have no privileged 'Experience' beliefs" is the exemplar of Copernican belief; there are plenty of gradations in between.

Note that it is not possible for a human to actually implement a 100% pure Ptolemaic belief system, nor a 100% pure Copernican belief system. For example, your beliefs about "what I would have believed, apart from other people's opinions" will, in practice, be tainted by your knowledge of what other people believe.


Wire Disagreement Dilemma (2007-10-28)

You are locked in a room with two other people and a time bomb. To disarm the bomb, you must choose correctly between cutting the red wire or the blue wire on the bomb; cutting the wrong wire, or failing to cut either wire in time, will trigger the bomb. Any one of the three of you can choose to lunge forward and cut one of the wires at any time.
Each of you puzzles over the circuit-wiring schematic. You find an airtight, 100% certain proof that the red wire is the one that needs to be cut. But simultaneously, your two allies report that they have come up with airtight, 100% certain proofs that the *blue* wire needs to be cut! You cannot come to a consensus, either because you do not have time, or because you simply cannot understand each other's proofs.

Your choices are:

1. Lunge forward and cut the red wire.

2. Allow your allies to cut the blue wire.

How do you make your decision? Call this the *Wire Disagreement Dilemma*.

Notes:

1. According to the most straightforward application of classical logic, you should lunge forward and cut the red wire.

2. Philosophical Majoritarianism (http://www.overcomingbias.com/2007/03/on_majoritarian.html) doesn't tell you exactly what to do. PM seems to be a heuristic that you use alongside other, sometimes conflicting, heuristics. As I've seen it outlined, it doesn't say much about when the heuristic should be used and when it shouldn't.

3. There's a sense in which you never have an actual proof when you make a decision; you only have a *memory* that you had a proof.

4. Consider two people, Alice and Bob. Alice should not *automatically* give her own beliefs "magical precedence" over Bob's beliefs. However, there are many circumstances where Alice should give her own beliefs precedence over Bob's; there are also circumstances where Alice should defer to Bob.

5. This type of thinking is so rare that (to my knowledge) we don't even have a short word for the difference between "I believe X because I reasoned it out myself" and "I believe X because someone smarter or more experienced than me told me X, even though, on my own, I would have believed Y."

In normal conversation, you have to use cumbersome phrases and idioms: for example, "it seems to me like X" in the former case and "my understanding is that X" in the latter.

Experience vs. Hearing: As technical terms, I'd propose that in the former case we say "I Experience X" or "my Experience is X." In the latter case we can say "I Hear that X" or "my Hearing is X."

6. One asymmetry, when Alice is evaluating reality, is that she generally knows her own beliefs but doesn't necessarily know Bob's. Bob may be unavailable; Bob may be unable to correctly articulate his beliefs; Alice may misunderstand Bob's beliefs; there may not be time to ask Bob his beliefs; or Bob may deliberately deceive Alice about his beliefs.


Occam's Meta-Razor (2007-10-20)

Let me define the Occam's Meta-Razor Problem as follows: what is the smallest and simplest set of basic philosophical postulates that a rational agent needs in order to act in a way that is intuitively satisfactory? The goal is that the behavior should satisfice, even if it's not necessarily optimal.

Intuitively, I think we want three items:

1. A simple way to analyze probabilities. Something like Solomonoff Induction might satisfice, if the Pascal's Mugging problem (http://www.overcomingbias.com/2007/10/pascals-mugging.html) were solved.
2. A utility function. An initial start might be, "Maximize the expected amount of X in the Universe," where X is some weighted combination of happiness, freedom from pain, autonomy, etc. A satisfactory but simple description of X would be difficult to unambiguously specify, especially in the case where the agent wields super-human intelligence. Two of many possible pitfalls:

* For almost all X, the current set of humans who are alive (and humanity in general) will be sub-optimal from the point of view of the agent. However, we want the agent to decide against wiping out humanity and replacing it with species that are "more worthy" according to its utility function.

* We would want some portion of X to include the concept of "autonomy" and preserve our ability to make informed, uncoerced decisions. But a sufficiently smart agent could peacefully convince (trick?) me into making any number of ludicrous decisions. It's not clear how to unambiguously define "coercion" in the case of a super-intelligent agent.

3. A simple decision theory, such as Evidential Decision Theory (which I believe subsumes Hofstadter's superrationality), or Causal Decision Theory (which is the standard in mainstream game theory). Either should satisfice, though I regard Evidential Decision Theory as much simpler.

Being philosophical principles, these obviously can't be directly used to create a real, resource-limited AGI; for example, Solomonoff Induction is too slow for practical use.

But as a set of normative philosophical principles for a human being to use, these seem like a reasonable starting point.

[edit -- decided to call it "Occam's Meta-Razor" rather than "Meta-Occam's Razor"]


Superrationality and the placebo effect (2007-10-13)

Let me introduce a conjecture that I will call the Strong Biological Placebo Effect. The Strong Biological Placebo Effect states: if you believe a course of action can improve your health, then the mere belief invariably triggers biological changes that improve your health.

If the Strong Biological Placebo Effect is true, then it creates a situation where superrationality applies to human beings. You can consistently, and rationally, choose to believe that setting your alarm clock to prime numbers *will* increase your health; alternatively, you can consistently, and rationally, choose to believe that setting your alarm clock to prime numbers will *not* increase your health. If you are superrational, you will choose the former option, and you will be healthier because of the superrationality.

(Caveat: the Strong Biological Placebo Effect is probably not even remotely true, so don't whip out your magnetic bracelets quite yet.)


Wild Guess: Singularity in 2024 (2007-10-06)

Suppose there is no World War III; suppose that there's no single disaster sufficient to wipe out more than, say, 10% of mankind in a single year.
When will the Singularity (http://www.singinst.org/overview/whatisthesingularity) arrive?

Kurzweil's scenario gives us affordable human-level hardware around 2024, according to my interpretation of his graph (http://upload.wikimedia.org/wikipedia/en/d/df/PPTExponentialGrowthof_Computing.jpg). I find his "accelerating exponential growth" model of pre-Singularity computer hardware more reasonable than straight Moore's Law, especially factoring in possible nanotech and biotech improvements. Note that Kurzweil states (http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=17%23664) that his models gave him "10^14 to 10^16 cps for creating a functional recreation of all regions of the human brain, so (he) used 10^16 cps as a conservative estimate." I'm interested in "most likely" rather than "conservative," so I used 10^15, but that doesn't make a huge difference. I also picked a "most likely" spot on the gray error bar rather than a conservative extreme, which does shift things significantly.

Kurzweil believes that the Singularity will arrive decades after we have cheap human-level hardware, but I think it's more likely to arrive a little ahead of or a little behind that point. So my wild guess is 2024: meaning that while it's unlikely to arrive in exactly that year, I give it 50/50 odds of being before or after. Of course, it could end up being "never". It could end up being next year.


Actively Open-Minded Thinking (2007-09-29)

Suppose you have decided to spend ten minutes making a decision between A and B. Currently you're leaning towards A. Do you:

1. Spend most of the ten minutes thinking of reasons to do A?

2. Spend about equal amounts of time thinking of reasons to do A, and reasons to do B?

3. Spend most of the ten minutes thinking of reasons to do B?

Our usual decision is to do (1). This is an aspect of the confirmation bias (http://en.wikipedia.org/wiki/Confirmation_bias).

Part of Actively Open-Minded Thinking (AOMT) (http://www.upenn.edu/almanac/v42/n24/teach.html), as advocated by Baron, is to try to instead do (2) or (3). This is hard to get in the habit of doing.

I find Actively Open-Minded Thinking very useful. For example, twice a day I think about an aspect of a current plan I have for the day, month, or lifetime, and then I briefly try to search for alternative strategies, or for reasons why my current strategy might be wrong; on multiple occasions this has caused me to adopt new courses of action. It's irrational, given that you're going to spend X minutes meditating on a decision anyway, to spend most of the time thinking of ways to rationalize your current decision. In fact, it's so clearly irrational that I'm retroactively surprised (http://www.overcomingbias.com/2007/05/think_like_real.html) that AOMT never became widespread.
...And, Scenarios Without a Strong AI (2007-09-23)

1. A giant disaster or war occurs, say on a scale that kills off more than 20% of humanity in a single year.

2. Getting the software right turns out to be harder than we thought. Sure, natural selection got it right on Earth, but it has an infinite universe to play with, and no one is around to observe the 10^10000 other planets where it failed.


Strong AI Takeoff Scenarios... (2007-09-20)

Things that could cause the current Moore's Law curve to be greatly exceeded:

1. Enormous nanotechnology advances.

2. Enormous biotech advances, allowing dramatic intelligence augmentation of human brains.

Things that could cause a sudden, large (1000x) increase in hardware devoted to AI self-improvement:

1. A self-improving AI, nowhere near *general* human-level intelligence yet, is suddenly put in charge of a botnet and swallows most of the Internet.

2. Awareness of the promise or the threat of Strong AI becomes widespread. A government, or government consortium, launches an Apollo Project-scale effort to create a Strong AI, hopefully under controlled conditions!

Other Strong AI takeoff scenarios:

1. The usual "recursive self-improvement" scenario.

2. We were too conservative: it turns out creating a Strong AI on a mouse-level brain is easier than we thought; we just never had a mouse-level brain to experiment with before.


Total feasible human population over all time (2007-09-18)

On Usenet I recently, for fun, made a back-of-the-envelope calculation of how many human beings could be born in one scenario:

1. For simplicity of calculation, the only lifeforms are humans (no trans-humans on microchips).

2. The limiting factor will be energy, rather than how much carbon etc. you have lying around to build things with. After all, you can fuse the useless hydrogen you come across into more useful elements, or delay people's births until room opens up for them.

3. I put the energy efficiency at only .001 of total matter (dark and baryonic), partly because there will be inefficiencies in collecting/storing/transporting/transforming the energy, and partly because I'm not sure how well we can shove the dark matter into black holes. I used 3*10^-27 kg/m^3 for the matter density.

4. Having no source on how much matter we can grab or colonize before the accelerating expansion of the universe puts it out of reach, I arbitrarily guess that we can colonize all matter currently within 10 billion light-years of us.

5. The miserly energy ration will be 300,000 nutritional Calories per day, for use in synthesizing/recycling food, and for all other pro-rated energy uses.

Google tells me:

(.01 * ((4 * pi) / 3) * ((10 billion light years)^3) * ((3 * (10^(-27)) kg) / (meter^3)) * (c^2) * .001) / (300,000 kilocalories / day) = 10^52 person-years.
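The same arithmetic, reproduced as a sketch (the physical constants are standard values; the 0.01 and 0.001 factors are copied from the expression above as written):

```python
import math

# Constants (standard values).
LY_M = 9.4607e15           # meters per light-year
C = 2.9979e8               # speed of light, m/s
KCAL_J = 4184.0            # joules per nutritional Calorie (kcal)
DAYS_PER_YEAR = 365.25

# The post's inputs.
radius_m = 1e10 * LY_M                      # 10 billion light-years
volume_m3 = (4 * math.pi / 3) * radius_m**3
density = 3e-27                             # kg/m^3, dark + baryonic
energy_j = 0.01 * volume_m3 * density * C**2 * 0.001  # factors as in the post

ration_j_per_year = 300_000 * KCAL_J * DAYS_PER_YEAR  # 300,000 kcal/day

person_years = energy_j / ration_j_per_year
print(f"{person_years:.1e} person-years")   # ~2e52, i.e. on the order of 10^52
```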