Sunday, June 8, 2008

The reluctance to give probabilities (II)

The last of the three theories I gave in the last post ties into a notion I have of what we mean by "admissible" vs. "inadmissible" evidence when evaluating the "true" probability of an event. Nothing in the external objective world corresponds directly to what we term "admissibility"; the question of admissibility seems to be part of our evolved social-judgment kit.

I think reluctance to commit is a (possibly evolved) mechanism targeted specifically to combat other people's hindsight bias. It fills a predictive gap in the other theories, namely that your "weight" on the coin toss is, in a sense, 0: once the coin is tossed, you have no reason to doubt someone who tells you it came up heads! And yet people feel no hesitation about assigning it a .5 probability, even after the coin has already been flipped, because they know nobody will doubt their competence for doing so.

Here are some different ways that predictions can be framed.

1. I have no idea who will win. (Extremely tentative)

2. I give a 50% chance the wrestler will win. (Less tentative)

Similarly, suppose the only thing you know is that wrestlers train on a wider variety of maneuvers than boxers, and you therefore give a slight edge to the wrestler:

1. It seems a little bit more likely the wrestler will win. (Extremely tentative)

2. I give a 60% chance the wrestler will win. (Less tentative)

3. I give a 60.020001% chance the wrestler will win. (Not at all tentative)

(2) is more tentative than (3), maybe because of norms of parsimony in conversation. It's less clear why (2) is so much more tentative than (1); perhaps there is a norm that you only give a numeric percentage when you're willing to stand behind your estimate.

Note that newspapers rarely give probabilities of events, aside from weather forecasts that can be "justified" by pointing to a large ensemble of past data and uncontroversial analysis.

2 comments:

Jennifer Rodriguez-Mueller said...

I think it's helpful to search for a quantitative basis for things sometimes, as a way to focus the mind on critical aspects of the world. Call it the "How would I model this?" mode of thought (keeping in mind that any model will be a simplification designed with much more than accuracy in mind - there's also tractable applicability, for example).

"Admissibility" sounds very hard to model with a simple number, but "commitment" might be captured using something related to the amount of money you'd stake on a question (given your current finances and expectations for the future - inflation and job options and so on).

There are a lot of people focusing on tying belief to Bayesian statistics and thence to betting, but I don't see much attention to "bet budgeting". Perhaps a well-financed agent in a setting with a very thick betting market full of tiny bets can ignore budgetary considerations and focus strictly on accurately estimating odds, but in the real world there are usually some rather large bets going on that are especially critical relative to one's budget.
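One way to make "bet budgeting" concrete is the Kelly criterion, which sizes a stake as a fraction of your bankroll rather than in absolute dollars. (The Kelly framing is my illustration here, not something this comment names.)

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to stake on a bet you believe wins with
    probability p, paying b-to-1 on a win.

    A negative result means the bet has negative expected value at
    those odds and you should stake nothing."""
    q = 1.0 - p  # probability of losing, on your own estimate
    return (b * p - q) / b

# At even odds (b = 1), a 60% belief justifies staking 20% of bankroll...
print(round(kelly_fraction(0.60, 1.0), 3))  # 0.2
# ...while a 50% belief justifies staking nothing at all:
print(kelly_fraction(0.50, 1.0))  # 0.0
```

Note how the formula automatically couples "commitment" to both the odds offered and your own finances: the same probability estimate implies a larger absolute stake for a larger bankroll, and a zero stake whenever your edge vanishes.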

I understand that when you bring up examples like coins and wrestlers you probably don't mean every example to carry all of its ecological implications along with it, but maybe paying attention to some of the background is useful?

Example: Bets on sporting events are a mature market run by sophisticated economic agents at a profit. They have created exactly the "thick market of tiny bets" environment within which to operate and bring considerable analytical resources to bear in estimating odds and/or structuring the odds so as to ensure profit. My expectation there is that, in general, I will lose money in those markets and should bet nothing.

The statement "I have no idea who will win" does not sound tentative to me, it sounds very firm actually, possibly even defensive. It's a refusal to make a commitment.

The statement "I give a 50% chance the wrestler will win" uses the term "give" and sounds generous. It offers odds in the terminology of betting that someone else may "take" with a response like "I'll take $1000 of that action".

Consider a middle ground between the statements you offered: "I'd risk a dollar at even odds that the wrestler will win."

There's an extension lurking here to a general value-investing-based criticism of statistical reasoning about one-off events. Unless a system is regular and predictable, offering something like "actuarial predictability", then what you're engaged in might be called speculation rather than investment.

An alternative framing might be found in Richard Templar's Rule 23 of Money (Speculate to Accumulate). Speculate is meant here in every sense of the word: discuss, think deeply, then put some of your time, effort, and life into not-entirely-certain ventures. Though he specifically counsels against "speculating" by "blowing it all on red"... he's trying to encourage thoughtful risk taking.

At that point you're deep into fuzzy questions about risk tolerance in general which can connect to meaning-of-life issues related to life-arcs and highest-values and such. They call it investment philosophy for a reason :-P

Nick Tarleton said...

I like how Jennifer looks at it. If someone wants to take my bet, I have to consider their doing so as evidence about the outcome of the match; and if I've thoroughly researched the subject and still assign p=.5, I should shift my assessment much less in response to this than if I had no idea whatsoever; so I should be much more willing to take the bet. (Similar to your "weight of evidence" theory.)
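Nick's point can be sketched with a beta-distribution toy model (my framing; the counts and evidence strength below are made-up illustration parameters): a well-researched p = .5 corresponds to a sharp prior, and the same piece of evidence moves it far less than it moves an ignorant prior.

```python
# Model a belief about "the wrestler wins" as Beta(a, b), whose mean is
# a / (a + b). Treat "someone wants to take my bet" as evidence worth k
# pseudo-observations for the opposing outcome (k is hypothetical).

def posterior_mean(a, b, k):
    # Evidence adds k counts to the opposing side; the mean of
    # Beta(a, b + k) is a / (a + b + k).
    return a / (a + b + k)

ignorant = (1, 1)      # p = .5 with almost nothing behind it
researched = (50, 50)  # p = .5 backed by a lot of evidence

k = 2
print(posterior_mean(*ignorant, k))    # drops all the way to 0.25
print(posterior_mean(*researched, k))  # barely moves, staying near .49
```

The asymmetry is exactly the "weight of evidence" idea: both agents started at the same probability, but only the well-researched one should be willing to keep the bet after learning someone wants the other side.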