The last of the three theories I gave in the previous post ties into a notion I have of what we mean by "admissible" vs. "inadmissible" evidence when evaluating the "true" probability of an event. There is nothing in the external, objective world that corresponds directly to what we term "admissibility"; the question of admissibility seems to be part of our evolved social-judgment kit.
I think reluctance to commit is a (possibly evolved) mechanism targeted specifically at combating other people's Hindsight Bias. It fills in a predictive gap in the other theories: once the coin is tossed, you have no reason to doubt someone who tells you it came up heads, so your "weight" on the coin toss is, in a sense, 0. And yet people feel no hesitation about assigning it a .5 probability, even after the coin has already been flipped, because they know nobody will doubt their competence for doing so.
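One way to make this notion of "weight" concrete (a sketch of my own, not anything from the original discussion): two beliefs can share the same 0.5 point estimate while resting on very different amounts of evidence. Representing a belief about a coin's bias as a Beta(a, b) distribution, the mean is a/(a+b), while the variance shrinks as the evidence count a+b grows:

```python
# Illustrative sketch: the same 0.5 point estimate can carry very
# different evidential "weight". Model a belief about a coin's bias
# as Beta(a, b): mean = a/(a+b); variance shrinks as a+b grows.

def beta_mean(a: float, b: float) -> float:
    return a / (a + b)

def beta_var(a: float, b: float) -> float:
    return (a * b) / ((a + b) ** 2 * (a + b + 1))

# No evidence at all: the uniform prior Beta(1, 1).
print(beta_mean(1, 1), beta_var(1, 1))          # mean 0.5, variance ~0.083

# 1000 observed flips, 500 heads: Beta(501, 501).
print(beta_mean(501, 501), beta_var(501, 501))  # mean 0.5, variance ~0.00025
```

Both beliefs would be reported as "50%", but the second is backed by enough evidence that no later revelation is likely to move it, which is exactly what makes committing to it feel safe.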
Here are some different ways that predictions can be framed. Suppose, first, that you know nothing at all about an upcoming match between a boxer and a wrestler:
1. I have no idea who will win. (Extremely tentative)
2. I give a 50% chance the wrestler will win. (Less tentative)
Similarly, if the only thing you know is that wrestlers train on a wider variety of maneuvers than boxers, and you therefore give a slight edge to the wrestler:
1. It seems a little bit more likely the wrestler will win. (Extremely tentative)
2. I give a 60% chance the wrestler will win. (Less tentative)
3. I give a 60.020001% chance the wrestler will win. (Not at all tentative)
(2) is more tentative than (3), maybe because of norms of parsimony in conversation. It's less clear why (2) is so much less tentative than (1); one possibility is a norm that you only give a numeric percentage when you're willing to stand behind your estimate.
Note that newspapers rarely give probabilities of events, aside from weather forecasts, which can be "justified" by pointing to a large ensemble of past data and uncontroversial analysis.