Predicting the weather is a little bit like delivering the mail. If you get it right, no one notices. If you get it wrong, everyone does. Compounding matters, predicting the weather is a substantially more complex task than delivering a few billion pieces of mail.
In 1961 Dr. Edward Lorenz was running one of the early computer models of the weather. Having stopped the program partway through, he rounded off his results to the nearest thousandth, replacing .506127 with .506 when he restarted, assuming that such a small change could not affect the outcome to any appreciable degree. The program’s subsequent calculations followed a startlingly different trajectory from the originals. Dr. Lorenz’s “Aha” moment may have quickly become an “uh oh” moment, as the difficulty inherent in trying to forecast the weather became intuitively clear. Chaos and the study of nonlinear dynamical systems entered the scientific mainstream.
Chaotic systems, like the weather, are defined by their extreme sensitivity to initial conditions. A tiny change in input can produce a dramatic change in output. The popular notion of the “butterfly effect”, that a butterfly flapping its wings in California can change the weather in New York, is an effective, if slightly poetic, way of visualizing the mathematical and scientific challenges faced by meteorologists.
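To get a feel for what that sensitivity looks like, here is a small Python sketch. It is not a weather model, just the simplified convection equations Lorenz himself studied, run twice: once from a “full precision” starting value and once from a rounded one, echoing the truncation described above.

```python
# Toy demonstration of sensitivity to initial conditions, using the
# Lorenz (1963) system -- a drastically simplified model of convection,
# not an actual weather model. Two runs start almost identically; one
# starting value is rounded, mimicking .506127 -> .506.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one small step with Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (0.506127, 1.0, 1.0)   # "full precision" starting state
b = (0.506, 1.0, 1.0)      # rounded starting state, off by about 0.0001

for step in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step:5d}: difference in x = {abs(a[0] - b[0]):.6f}")

# The two runs agree closely at first, then drift apart until they are
# no more alike than two unrelated forecasts.
```

The exact numbers depend on the step size and starting point, but the pattern is always the same: the tiny rounding error is eventually amplified into a completely different trajectory.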
Computer models used to predict the weather apply the equations of fluid dynamics and thermodynamics to determine the state of the atmosphere at a given time in the future. The models divide the troposphere into a three-dimensional grid and attempt to determine, using those equations, how adjacent grid points will interact.
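The real equations are far too involved to reproduce here, but the grid idea itself is easy to sketch. The toy Python example below works in one dimension with a single variable and a made-up constant wind; it simply carries a warm patch of air along a row of grid points by letting each point interact with its upwind neighbor. Actual models do the three-dimensional version with the full physics.

```python
# A toy sketch of the grid idea: one quantity (call it temperature)
# carried along a 1-D row of grid points by a constant wind, updated
# with a first-order upwind finite-difference step. Purely illustrative.

N = 100          # number of grid points
dx = 1.0         # grid spacing (arbitrary units)
dt = 0.5         # time step, chosen so wind * dt / dx <= 1 for stability
wind = 1.0       # constant advection speed

# Initial state: a warm "bubble" in the middle of an otherwise uniform row.
temp = [30.0 if 40 <= i < 50 else 20.0 for i in range(N)]

for _ in range(60):
    new = temp[:]
    for i in range(1, N):
        # Each point is nudged toward the value of its upwind neighbor.
        new[i] = temp[i] - wind * dt / dx * (temp[i] - temp[i - 1])
    temp = new

print("warm bubble is now centered near grid point", temp.index(max(temp)))
```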
The finer the grid, the more accurate the forecast, but the more calculations are required to produce it. Since initial conditions matter so much, the initial values assigned to the grid points are critical; input data is taken from satellites and weather stations around the world. Because those observations are sparse, conditions vary between the data points, and the equations are nonlinear, small errors in the initial state grow exponentially with time. After five days, even with the most powerful supercomputers, a prediction is almost a wild guess.
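That trade-off between resolution and computing power adds up faster than it might seem. The back-of-the-envelope arithmetic below is illustrative only and not tied to any particular model, but it shows why halving the grid spacing does far more than double the work.

```python
# Rough cost of refining a forecast grid. Assumes (illustratively) that
# refining the horizontal spacing multiplies the cell count in both
# horizontal directions, and that an explicit scheme needs a
# proportionally smaller time step to stay stable.

def relative_cost(refinement):
    """Approximate cost multiplier for a grid `refinement` times finer."""
    horizontal_cells = refinement ** 2   # finer in both horizontal directions
    time_steps = refinement              # smaller dt, so more steps per forecast
    return horizontal_cells * time_steps

for r in (1, 2, 4, 8):
    print(f"{r}x finer grid -> roughly {relative_cost(r)}x the computation")
```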
To smooth out the seemingly random nature of the data, meteorologists combine and average several predictions made with different models and parameterizations. Ensemble forecasting, as it is known, uses statistical methods to combine results from many sources in order to arrive at the most accurate prediction possible. Forecasts are given as the statistical likelihood of a given weather event: a thirty percent chance of rain means that when atmospheric conditions match current conditions, it rains about thirty percent of the time.
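Here is a deliberately oversimplified Python sketch of the ensemble idea, with a made-up toy model standing in for a real one: jiggle the starting state within the measurement uncertainty, run every copy forward, and count how many copies end up “raining”. Every numeric value below is invented for the example.

```python
# A minimal sketch of ensemble forecasting. The "model" is a logistic
# map -- simple but chaotic -- standing in for a real forecast model,
# and all of the numbers here are invented for illustration.

import random

def toy_model(initial_moisture, steps=50):
    """Run the stand-in model forward from a given starting state."""
    m = initial_moisture
    for _ in range(steps):
        m = 3.7 * m * (1.0 - m)
    return m

observed = 0.42        # best estimate of the current state (hypothetical)
uncertainty = 0.01     # assumed observational error
members = 1000         # number of ensemble members

rainy = 0
for _ in range(members):
    start = observed + random.uniform(-uncertainty, uncertainty)
    if toy_model(start) > 0.7:   # arbitrary threshold standing in for "rain"
        rainy += 1

print(f"chance of rain: {100 * rainy / members:.0f}%")
```

The percentage you hear in a forecast comes from essentially this kind of counting, just with vastly more sophisticated models and perturbations.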
So, while forecasting the weather is not exactly predicting the unpredictable, it’s not too far from it. Somehow, though, scientists manage to cobble together a pretty decent approximation. So the next time it doesn’t rain on your parade, and all your mail shows up where it should, try to take notice. And when the forecast does miss, remember that there are just too many butterflies out there making a mess of things, and it may not be the scientist who is to blame.