The History of Predicting the Future

The future has a history. The good news is that it’s one from which we can learn; the bad news is that we very rarely do. That’s because the clearest lesson from the history of the future is that knowing the future isn’t necessarily very useful. But that has yet to stop humans from trying.


Take Peter Turchin’s famed prediction for 2020. In 2010 he developed a quantitative analysis of history, known as cliodynamics, that allowed him to predict that the West would experience political chaos a decade later. Unfortunately, no one was able to act on that prophecy in order to prevent damage to US democracy. And of course, if they had, Turchin’s prediction would have been relegated to the ranks of failed futures. This situation is not an aberration.

Rulers from Mesopotamia to Manhattan have sought knowledge of the future in order to obtain strategic advantages—but time and again, they have failed to interpret it correctly, or they have failed to grasp either the political motives or the speculative limitations of those who proffer it. More often than not, they have also chosen to ignore futures that force them to face uncomfortable truths. Even the technological innovations of the 21st century have failed to change these basic problems—the results of computer programs are, after all, only as accurate as their data input.

There is an assumption that the more scientific the approach to predictions, the more accurate forecasts will be. But this belief causes more problems than it solves, not least because it often either ignores or excludes the lived diversity of human experience. Despite the promise of more accurate and intelligent technology, there is little reason to think the increased deployment of AI in forecasting will make prognostication any more useful than it has been throughout human history.

Humans have long tried to find out more about the shape of things to come. These efforts, while aimed at the same goal, have differed across time and space in several significant ways, with the most obvious being methodology—that is, *how* predictions were made and interpreted. Since the earliest civilizations, the most important distinction in this practice has been between individuals who have an intrinsic gift or ability to predict the future, and systems that provide rules for calculating futures. The predictions of oracles, shamans, and prophets, for example, depended on the capacity of these individuals to access other planes of being and receive divine inspiration. Strategies of divination such as astrology, palmistry, numerology, and Tarot, however, depend on the practitioner’s mastery of a complex theoretical rule-based (and sometimes highly mathematical) system, and their ability to interpret and apply it to particular cases. Interpreting dreams or the practice of necromancy might lie somewhere between these two extremes, depending partly on innate ability, partly on acquired expertise. And there are plenty of examples, in the past and present, that involve both strategies for predicting the future. Any internet search on “dream interpretation” or “horoscope calculation” will throw up millions of results.



In the last century, technology legitimized the latter approach, as developments in IT (predicted, at least to some extent, by Moore’s law) provided more powerful tools and systems for forecasting. In the 1940s, the analog computer MONIAC had to use actual tanks and pipes of colored water to model the UK economy. By the 1970s, the Club of Rome could turn to the World3 computer simulation to model the flow of energy through human and natural systems via key variables such as industrialization, environmental loss, and population growth. Its report, *Limits to Growth,* became a best seller, despite the sustained criticism it received for the assumptions at the core of the model and the quality of the data that was fed into it.

At the same time, rather than depending on technological advances, other forecasters have turned to the strategy of crowdsourcing predictions of the future. Polling public and private opinions, for example, depends on something very simple—asking people what they intend to do or what they think will happen. It then requires careful interpretation, whether based on quantitative analysis (like polls of voter intention) or qualitative analysis (like the RAND Corporation’s Delphi technique). The latter strategy harnesses the wisdom of highly specific crowds. Assembling a panel of experts to discuss a given topic, the thinking goes, is likely to be more accurate than individual prognostication.

This approach resonates in many ways with yet another forecasting method—war-gaming. Beginning in the 20th century, military field exercises and maneuvers were increasingly supplemented, and sometimes replaced, by simulation. Undertaken both by human beings and by computer models such as the RAND Strategy Assessment Center, this strategy is no longer confined to the military, but is now used extensively in politics, commerce, and industry. The goal is to increase present resilience and efficiency as much as it is to plan for futures. Some simulations have been very accurate in predicting and planning for possible outcomes, particularly when undertaken close to the projected events—like the Sigma war game exercises conducted by the Pentagon in the context of the developing Vietnam War, for example, or the Desert Crossing 1999 games played by United States Central Command in relation to Saddam Hussein’s Iraq.


As these strategies have continued to evolve, two very different philosophies for predicting communal futures have emerged, particularly at the global, national, and corporate level. Each reflects different assumptions about the nature of the relationship between fate, fluidity, and human agency.


Understanding previous events as indicators of what’s to come has allowed some forecasters to treat human history as a series of patterns, where clear cycles, waves, or sequences can be identified in the past and can therefore be expected to recur in the future. This approach draws on the success of the natural sciences in crafting general laws from accumulated empirical evidence. Followers of this approach included scholars as diverse as Auguste Comte, Karl Marx, Oswald Spengler, Arnold Toynbee, Nikolai Kondratiev, and, of course, Turchin. But whether they were predicting the decline of the West, the emergence of a communist or scientific utopia, or the likely recurrence of global economic waves, their success has been limited.

More recently, research at MIT has focused on developing algorithms to predict the future based on the past, at least in the extremely short term. By teaching computers what has “usually” happened next in a given situation—will people hug or shake hands when they meet?—researchers are echoing this search for historical patterns. But, as is often the case with this approach to prediction, it leaves little room, at least at this stage of technological development, to expect the unexpected.

Another set of forecasters, meanwhile, argue that the pace and scope of techno-economic innovation are creating a future that will be qualitatively *different* from past and present. Followers of this approach search not for patterns, but for emergent variables from which futures can be extrapolated. So rather than predicting one definitive future, it becomes easier to model a set of *possibilities* that become more or less likely, depending on the choices that are made. Examples of this would include simulations like World3 and the war games mentioned earlier. Many science fiction writers and futurologists also use this strategy to map the future. In the 1930s, for instance, H. G. Wells took to the BBC to broadcast a call for “professors of forethought,” rather than of history. He argued that this was the way to prepare the country for unexpected changes, such as those brought by the automobile. Similarly, writers going back to Alvin and Heidi Toffler have extrapolated from developments in information technology, cloning, AI, genetic modification, and ecological science to explore a range of potential desirable, dangerous, or even post-human futures. 

But if predictions based on past experience have limited capacity to anticipate the unforeseen, extrapolations from techno-scientific innovations have a distressing capacity to be deterministic. Ultimately, neither approach is necessarily more useful than the other, because both share the same fatal flaw—the people doing the framing. Whatever the approach of the forecaster, and however sophisticated their tools, the trouble with predictions is their proximity to power. Throughout history, futures have tended to be made by white, well-connected, cis-male people. This homogeneity has limited the framing of the future and, as a result, the actions then taken to shape it. Further, predictions resulting in expensive or undesirable outcomes, like Turchin’s, tend to be ignored by those making the ultimate decisions. This was the case with the nearly two decades’ worth of pandemic war-gaming that preceded the emergence of Covid-19. Reports in both the US and the UK, for example, stressed the significance of public health systems in responding effectively to a global crisis, but they did not convince either country to bolster their systems. What’s more, no one predicted the extent to which political leaders would be unwilling to listen to scientific advice. Even when forecasts did take human error into account, the resulting predictions were still systematically disregarded where they conflicted with political strategies.


Which brings us to the crucial question of who and what predictions are for. Those who can influence what people *think* will be the future are often the same people able to command considerable resources in the present, which in turn help determine the future. But very rarely do we hear the voices of the populations governed by the decisionmakers. It’s often at the regional or municipal level that we see efforts by ordinary people to predict and shape their own communal and familial futures, often in response to the need to distribute scarce resources or to limit exposure to potential harms. Both issues are becoming ever more pressing amid the unfolding climate catastrophe.

The central message of the history of the future is that it’s not helpful to think about “*the Future*.” A much more productive strategy is to think about *futures*; rather than “prediction,” it pays to think probabilistically about a range of potential outcomes and to evaluate them against a range of different sources. Technology has a significant role to play here, but it’s critical to bear in mind the lessons from World3 and *Limits to Growth* about the impact that assumptions have on eventual outcomes. The danger is that modern predictions with an AI imprint are considered more scientific, and hence more likely to be accurate, than those produced by older systems of divination. But the assumptions underpinning the algorithms that forecast criminal activity, or identify potential customer disloyalty, often reflect the expectations of their coders in much the same way as earlier methods of prediction did.

Rather than depending purely on innovation to map the future, it’s more sensible to borrow from history and pair newer techniques with a slightly older model of forecasting—one that combines scientific expertise with artistic interpretation. It would perhaps be more helpful to think in terms of *diagnosis*, rather than prediction, when it comes to imagining—or improving—future human histories.

Source: Wired
