Fifty years ago, long-term weather forecasts were already a scientific impossibility, and Edward Lorenz proved it. In 1963, Lorenz published his definitive work on meteorology in Volume 20 of the Journal of the Atmospheric Sciences. No one knew it then, but the paper was perhaps the first published example of what would become chaos theory.
“All the money spent on long-range forecasting – about half a billion dollars in the last few decades – is money wasted,” Jeff Goldblum’s character in the movie Jurassic Park says. “It’s a fool’s errand. It’s as pointless as trying to turn lead into gold.”
Before Lorenz, meteorologists were strangely confident in their ability to one day not only predict but actually control the weather. By the 1980s, meteorologists were in fact producing fairly accurate short-term forecasts. Beyond two or three days, however, the science became speculation. Beyond five or six days? The forecasts were worthless.
Sound familiar? Thirty years later, long-term weather forecasting has not advanced at all. Even today, forecasters cannot predict beyond a couple of days with any accuracy. Think of the last hurricane track forecast you saw – the cone-shaped “possibilities” of the hurricane’s future position cover hundreds of miles after a day or two.
The logic behind Lorenz’s conclusion lies in the “butterfly effect” – known scientifically as ‘sensitive dependence on initial conditions.’
Using primitive computers, Lorenz discovered something incredible in his numbers – and by accident. By merely rounding off a few decimal places in certain input parameters – say, the starting humidity at a given point in the atmosphere – the resulting graphs showing the ups and downs of the computed ‘weather’ gradually diverged, when in theory they should have remained the same (the input parameters were, for practical purposes, identical).
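Lorenz’s accident is easy to re-create on any modern machine. The sketch below is only a toy stand-in for his experiment – it uses the chaotic logistic map, not his actual convection model, which is an assumption on our part – but it shows the same effect: one run starts from a “full precision” value, the other from that value rounded to three decimal places (the 0.506127 vs. 0.506 figures come from the well-known Lorenz anecdote), and the two runs drift apart.

```python
# Toy re-creation of Lorenz's rounding accident. The logistic map is a
# stand-in assumption here -- Lorenz used a small weather model, not
# this map -- but it exhibits the same sensitive dependence.
def logistic(x, r=3.9):
    """One step of the chaotic logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

full = 0.506127    # the "full precision" starting value
short = 0.506      # the same value rounded, as on Lorenz's printout

diffs = []         # gap between the two runs at each step
x, y = full, short
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    diffs.append(abs(x - y))
    if step % 10 == 0:
        print(f"step {step:2d}: runs differ by {abs(x - y):.6f}")
```

The two runs agree to three decimal places at the start, yet within a few dozen steps they bear no resemblance to each other – just as Lorenz’s rerun wandered away from his original output.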
The implications were immediately obvious. It’s impossible to measure the real atmosphere precisely enough to make accurate predictions. Even if weather sensors could theoretically measure every condition of the atmosphere at points inches apart and to the nth decimal place, the slight variations between those inches and the sheer number of variables that create weather – humidity, pressure, temperature, and so on – create the chaos. Those inconsistencies multiply in the dynamic system over time, creating more and bigger inconsistencies, and the model quickly breaks down after the first few days.
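Lorenz later distilled this breakdown into three equations for atmospheric convection, now known as the Lorenz ’63 system. The sketch below integrates those equations from two starting states that differ by one part in a million – a stand-in for a tiny measurement error. The parameters (sigma = 10, rho = 28, beta = 8/3) are Lorenz’s classic choices; the crude Euler integrator, step size, and run length are illustrative assumptions.

```python
# The Lorenz '63 convection equations, advanced with a simple Euler
# step (an illustrative assumption -- real models use better schemes).
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)          # one "atmosphere"
b = (1.0 + 1e-6, 1.0, 1.0)   # the same atmosphere, mismeasured by 1e-6

max_sep = 0.0
for _ in range(25000):       # 25 units of model time
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(f"largest separation reached: {max_sep:.2f}")
```

An error in the sixth decimal place eventually grows to the full size of the system: the two simulated ‘weathers’ end up completely unrelated, no matter how precise the starting measurement was.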
In practice, modern GRIB forecasts paint a wonderful picture of chaos theory in action. When GRIB files are downloaded, they offer a real-time picture of the weather over a large area. A simple test of a forecast’s accuracy – and a perfect visual analogy to what Lorenz first discovered – is to simply ‘animate’ the GRIB file seven days out, then download the real-time file seven days later. Do they match? Not likely.