4 Comments
May 10 · Liked by Johan Fourie

Good piece. One problem I notice with forecasting, particularly political forecasting, is that a correct prediction can serve to further embed the forecaster's prior convictions (whether those are biases or belief in their model), when in reality there were only limited possible outcomes. Brexit was a great example. Those who correctly predicted the UK would vote to leave the EU used the result to validate their worldview of rising populism, anger amongst a left-behind population, etc. But there were only ever two options on the ballot, and the result ended up being incredibly close. In that context, what conclusions can you really draw about what the result said about the state of the UK, let alone the world? If a few more under-30s had turned up and voted to remain, would those conclusions have become untrue?

For those of us whose job it is to take market positions on the back of these forecasts, asking 'what do I think is going to happen?' is less valuable than asking 'what does the market think is going to happen?' That's when opportunities begin to present themselves.

Author

This is a fantastic point, especially for predictions with binary outcomes, as you say. Sample size matters.
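The point about binary outcomes can be made concrete with a quick Bayes-factor sketch: how much does one correct call shift the odds that a forecaster is skilled rather than guessing? The hit rates below (0.6 for "skilled", 0.5 for "chance") are assumptions chosen purely for illustration, not anything from the discussion above.

```python
def bayes_factor(correct_calls: int, total_calls: int,
                 p_skill: float = 0.6, p_chance: float = 0.5) -> float:
    """Likelihood ratio of a forecasting record under a (hypothetical)
    'skilled' model versus a coin-flip model."""
    misses = total_calls - correct_calls
    like_skill = p_skill ** correct_calls * (1 - p_skill) ** misses
    like_chance = p_chance ** correct_calls * (1 - p_chance) ** misses
    return like_skill / like_chance

print(bayes_factor(1, 1))    # one correct call: 0.6/0.5 = 1.2, weak evidence
print(bayes_factor(10, 10))  # ten straight: ~6.19, still fairly modest
```

On these assumptions, a single correct Brexit-style call barely moves the needle; it takes a long record of binary calls before the evidence for a worldview (or a model) becomes substantial.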


Interesting. Thanks Johan


What do you make of the fact that Silver plainly did not avoid those pitfalls himself in his forecasting practice?

I think Philip Tetlock's book Expert Political Judgment does a fair job of describing a method for good political forecasting: be a methodologically heterodox generalist who consciously avoids using forecasting to win political or disciplinary arguments, or even to defend a particular methodological strategy. (Which is maybe where Silver went wrong: touting his own overly narrow approach.)
