Those who see the future
'Being right is often a function of avoiding the pitfalls that trip up lesser forecasters'
This is a free post from Our Long Walk, my blog about South Africa’s economic past, present, and future. If you enjoy it and want to support more of my writing, please consider a paid subscription to the twice-weekly posts, which include my columns, guest essays, interviews, and summaries of the latest relevant research.
Knowing the future is not easy, yet it has always fascinated us. From our ancient ancestors who studied their environment to secure food to modern financial analysts who use sophisticated econometric techniques to forecast stock prices, accurate forecasts have always been valuable. Yet accuracy could never be guaranteed. That is why the Oracle of Delphi, a popular prophetic figure in ancient Greece, often gave ambiguous predictions; her statements typically required interpretation and sound judgment.
But most of us today no longer rely on the visions of an oracle. We instead log into our weather apps daily because their forecasts are right more often than not, partly due to better models, but also because nature follows predictable laws. The challenge is harder when humans are involved. That is because humans act ‘irrationally’ or, phrased differently, because we are swayed by our (unconscious) biases. Take one famous South African study. As I report in Our Long Walk to Economic Freedom, between 1956 and 1962, psychologist Kurt Danziger asked 436 South African students to imagine how the twentieth century might unfold. Two-thirds of black South African students imagined that apartheid would end; only 4 per cent of white Afrikaners did. His point was clear: we are likelier to predict what we want to see happen.
Economists like to predict what they want to see, too. Consider how frequently private sector economists (or the Minister of Finance) overestimate GDP growth in the coming years. But we should be concerned not just about overoptimism. As Daan Steenkamp writes in Business Day, ‘central bankers like to present their economic forecasts as if they are weather reports’. Yet, unlike meteorologists, central banks can influence the things they predict. As a result, inaccurate predictions due to poor data, overlooked uncertainties, or behavioural biases can lead to policies that worsen economic conditions rather than improve them. To rephrase Lord Acton’s famous quip, predictions corrupt, and absolute predictions corrupt absolutely.
That is why some economists now say that predictions are basically worthless. Even the best macro models get it wrong more often than not, writes Andy Haldane, a former chief economist at the Bank of England, in a scathing Financial Times column. Forecasting, he says, ‘is likely to remain interpretative dance – always mysterious, occasionally enlightening, a show without much tell’.
If economists are bad at predicting, perhaps political scientists can do better. UJ political scientist Mcebisi Ndletyana recently wrote an excellent News24 piece on the history of polling, in the United States and in South Africa, concluding that polls have become more scientific, and therefore reliable, over time.
But polls are only one type of political forecast. Big Data techniques now allow for pre-election forecasting, an approach made famous by Nate Silver’s statistical model of the 2008 US presidential election. Silver aggregated and weighted numerous polls and used Monte Carlo simulations and econometric techniques to predict state-by-state outcomes. (Three weeks ago, he announced that he’ll be running another prediction model for this year’s US elections, now at his Substack.)
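The core of Silver’s approach can be sketched in a few lines. This is a minimal illustration, not his actual model: the polls, sample sizes, and assumed polling error below are all hypothetical, and real models weight polls by recency and pollster quality as well.

```python
import random

# Hypothetical polls for one candidate in one state: (vote share, sample size)
polls = [(0.52, 800), (0.49, 1000), (0.53, 600)]

def weighted_mean(polls):
    """Aggregate the polls, weighting each by its sample size."""
    total_n = sum(n for _, n in polls)
    return sum(share * n for share, n in polls) / total_n

def win_probability(polls, n_sims=10_000, polling_error=0.03, seed=42):
    """Monte Carlo step: perturb the aggregated share with a normally
    distributed polling error and count how often it clears 50 per cent."""
    rng = random.Random(seed)
    mean = weighted_mean(polls)
    wins = sum(1 for _ in range(n_sims)
               if rng.gauss(mean, polling_error) > 0.5)
    return wins / n_sims

print(f"Aggregated share: {weighted_mean(polls):.3f}")
print(f"Win probability:  {win_probability(polls):.2f}")
```

The point of the simulation step is that a 51 per cent aggregated share does not translate into a 51 per cent chance of winning: once polling error is accounted for, the candidate wins only in the majority of simulated draws, not all of them.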
I asked Dawie Scholtz, an independent elections analyst, whether South Africa has any political forecaster similar to Nate Silver.
‘Pre-election forecasting in South Africa is very immature to non-existent. I have in the past done some pre-election forecasts utilising by-election results, but this requires sufficient by-election ‘samples’ from sufficiently diverse sections of the electorate in the runup to the election, with all of the major parties contesting those to be able to use it reliably. Other than that, I have not seen any meaningful attempts at forecasting pre-election in South Africa. This year, I will not be making any forecasts since the by-election data is not sufficient. MK has emerged too late, and the by-elections sample from the last 6 months is not sufficiently diverse.’
With regard to polling, Scholtz says, South Africa is somewhat mature, though with fewer practitioners and smaller budgets.
We have 2-3 reliable polling practitioners in South Africa: Victory Research, IPSOS and MarkData. South Africa is challenging to poll for a number of reasons and so despite our relative maturity in polling in South Africa, I’d say the reliability of individual polls remains moderate at best.
By contrast, ‘South African election forecasters, for various reasons related to our demographic segregation, are better (and faster) at election projections on election night than what one sees in the US’.
In his book The Signal and the Noise, Silver notes that ‘being right is often a function of avoiding the pitfalls that trip up lesser forecasters.’ I ask Scholtz about the biggest mistakes South Africans make when forecasting election results: Are there any examples of political forecasters getting it spectacularly wrong?
I think there are two big mistakes that are continually made. First, blurring the lines between ‘hypotheses’ and ‘facts’. A great example in this election is the hypothesis that MK will have an outsized impact in KZN only or with Zulu speakers only. I agree that this is a reasonable hypothesis, but we don’t really know. What if MK has no impact, or what if Zuma has an impact across all provinces? We have hypotheses on these, but no factual answers.
Second, people often assume that the past is prologue or, put differently, that the rate of change will be constant. Past patterns are a great starting point, but they’re never guaranteed to repeat themselves.
Of course, predicting election outcomes is one thing. Predicting likely alliance partners is entirely different, partly because it depends on the personalities of the politicians more than on the electorate.
Says Scholtz:
I completely agree with you! Forecasting the actions and coalition choices of politicians after the election is basically impossible; I certainly don’t try to do so. If one absolutely had to make a prediction, I’d say, in theory, the best way is to construct a utility function for each party that is very heavily weighted towards maximizing support in the next election and assume that each party will attempt to maximize its own utility function in a non-collaborative manner.
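Scholtz’s suggestion can be made concrete with a toy sketch: give each party a utility function dominated by expected support in the next election, and let each party pick the partner that maximises its own utility, ignoring everyone else. The party names and payoff numbers below are entirely hypothetical illustrations, not estimates for any real party.

```python
# Hypothetical payoffs: expected change in a party's own vote share
# (in percentage points) if it enters a coalition with the named partner.
expected_support_change = {
    ("A", "B"): -2.0,  # A expects to lose 2 points by partnering with B
    ("A", "C"): 1.0,
    ("B", "A"): 3.0,
    ("B", "C"): -1.0,
    ("C", "A"): 2.0,
    ("C", "B"): 0.0,
}

def preferred_partner(party, payoffs):
    """Return the partner that maximises this party's own utility,
    non-collaboratively: other parties' preferences are ignored."""
    options = {partner: u for (p, partner), u in payoffs.items() if p == party}
    return max(options, key=options.get)

for party in ("A", "B", "C"):
    print(party, "->", preferred_partner(party, expected_support_change))
```

Even this toy version shows why the prediction is so fragile: here A prefers C, while both B and C prefer A, so the individually optimal choices need not add up to any coalition that can actually form.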
Predictions, whether those of meteorologists, economists or political forecasters, are increasingly sophisticated. Yet that sophistication – often a form of scientific showmanship using techniques that are poorly understood or without the requisite theoretical backing – may also be a weakness, as it can lead to overconfidence in the precision of the forecasts.
What counts, despite (or perhaps because of) the sophisticated methods in use nowadays, is sound judgment, the ability to interpret data critically, identify potential biases, recognize limitations, account for uncertainties, and balance risks. In short, deriving meaningful insights from predictions ultimately requires the wisdom of the Oracle.
An edited version of this article was published on News24. Support more such writing by signing up for a paid subscription. The image was created with Midjourney v6.
[1] K. Danziger, The psychological future of an oppressed group, Social Forces, 42 (1), 1963, 31–40.