I think this is Taleb's point.

In the standard gambler's-fallacy setup, it's taken as given that the coin is fair.

In real life, though, there's always some probability that that assumption is wrong.

One way to think about it is in terms of likelihoods, priors, and posteriors over models, in addition to the probability of an outcome conditional on a model.

The classical assumption is something like P(heads | Mf) = 0.5 for a "fair" model Mf, and you're asked "what's the probability of heads?". But there's also the possibility that the coin is biased, under a model Mb. So the actual probability of an observed sequence X is the mixture

P(X) = P(X | Mf) P(Mf) + P(X | Mb) P(Mb).

Usually we assume P(Mf) >> P(Mb), but as evidence accumulates (say, a long run of heads), the posterior P(Mb | X) grows, and at some point it becomes rational to start questioning the fairness assumption.

Implicitly there's a Bayesian posterior P(Mb | X) that could be computed, and some decision point where you conclude P(Mb | X) > P(Mf | X).
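To make that concrete, here's a minimal sketch of the model comparison. The specific choices are mine, not anything fixed by the argument: I assume a uniform Beta(1,1) prior on the coin's bias under Mb (so the marginal likelihood of a particular sequence with k heads in n flips is the Beta function B(k+1, n-k+1)), and an illustrative prior P(Mb) = 0.01. The function name `posterior_biased` is hypothetical.

```python
from math import lgamma, exp, log

def log_beta(a, b):
    # log of the Beta function, via log-gamma for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def posterior_biased(heads, flips, prior_biased=0.01):
    """P(Mb | X) for a particular sequence with `heads` heads in `flips` flips.

    Mf: fair coin, so P(X | Mf) = 0.5 ** flips.
    Mb: unknown bias p with a uniform Beta(1,1) prior, so
        P(X | Mb) = integral of p^k (1-p)^(n-k) dp = B(k+1, n-k+1).
    prior_biased is P(Mb); both it and the Beta(1,1) choice are
    illustrative assumptions, not part of the original argument.
    """
    log_lik_fair = flips * log(0.5)
    log_lik_biased = log_beta(heads + 1, flips - heads + 1)
    log_joint_fair = log(1 - prior_biased) + log_lik_fair
    log_joint_biased = log(prior_biased) + log_lik_biased
    # normalize in log space to avoid underflow on long sequences
    m = max(log_joint_fair, log_joint_biased)
    pf = exp(log_joint_fair - m)
    pb = exp(log_joint_biased - m)
    return pb / (pf + pb)

# A run of n heads in a row: watch the posterior flip toward Mb.
for n in (5, 10, 20, 30):
    print(n, round(posterior_biased(n, n), 4))
```

With these numbers the crossover lands around ten straight heads: P(Mb | X) is still small at five, roughly even at ten, and near certainty by twenty. That's the decision point in the last paragraph, and moving P(Mb) moves it, which is exactly the "how much do you trust your model?" question.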