Many years ago a grizzled old Gunnery Sergeant told our class that being a weather forecaster was a lot like being a baseball player: if you hit .300 you were doing pretty good. At the time it seemed humorous, and many of us young pups laughed it off, thinking this old guy was just behind the times and hadn’t fully caught up with technology. My first few years on the job I boasted that I was a .600 hitter, but in reality I was closer to .280 and good at framing the remaining .320 as successes. Only a third of the way into my career as an Air Force meteorologist, I found myself repeating the very same phrase to my new trainees. In hindsight, that ol’ gunny knew a thing or two.
Not to date myself too much, but over the years I had the opportunity to work with some of the earliest versions of Doppler radar, NEXRAD, MIDDAS (now antiquated and out of use), and a slew of different can’t-miss technological upgrades that were going to move weather forecasting light years ahead and, according to some of the model developers, replace humans altogether. We tried them all. Some were okay, and some were so bad we felt they had to be nothing more than really big paperweights. In the case of Doppler and NEXRAD, they got better. In the case of MIDDAS, it became known as the “gee whiz machine” because it made great tie-dye-looking pictures of storms and little more.
While it may sound as if I am down on the technology side of the house, that couldn’t be further from the truth. Many programs take time to debug and get operating properly, and any piece of accurate data is of great value when properly interpreted. It is the interpretation of data that needs to change. For a meteorologist to do an effective job of putting together a forecast, it takes a lot more than just looking at the radar. Several components go into making a truly comprehensive picture of the atmosphere at any given time.
The first is evaluating the synoptic data. Synoptic data consists of weather observations culled from observers at every weather station within the forecaster’s area of responsibility. These observers are located at airports and airfields large and small, at military bases, on board ships, at buoys strategically placed to auto-report conditions, and aboard military, commercial, and civilian air traffic. As you can see, there are a ton of sources to gather data from. Some of it is good and some is extraordinarily flawed. Sometimes machines don’t register conditions correctly, sometimes transmissions get garbled, and at other times there is the human factor of just dropping the ball. So step one to improving weather forecasting is improving the quality of the synoptic data that is transmitted. Bad data never, ever helps.
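For a rough idea of what that quality screening can look like, here is a minimal sketch in Python. The field names and plausibility limits are my own illustrative assumptions, not any operational standard:

```python
# Illustrative sketch: screen raw surface observations before they are plotted.
# Field names and plausibility limits are assumptions for demonstration only.

def screen_observation(obs: dict) -> list[str]:
    """Return a list of reasons an observation looks suspect (empty = passes)."""
    problems = []
    # Physically implausible values usually mean a sensor fault or a garbled transmission.
    if not -90 <= obs.get("temp_c", 0) <= 60:
        problems.append(f"temperature out of range: {obs.get('temp_c')} C")
    if not 870 <= obs.get("pressure_hpa", 1013) <= 1085:
        problems.append(f"pressure out of range: {obs.get('pressure_hpa')} hPa")
    if not 0 <= obs.get("wind_dir_deg", 0) <= 360:
        problems.append(f"wind direction out of range: {obs.get('wind_dir_deg')} deg")
    if obs.get("wind_speed_kt", 0) < 0:
        problems.append("negative wind speed")
    return problems

# A report where the pressure's decimal point slipped a place in transmission:
report = {"station": "KXYZ", "temp_c": 21.5, "pressure_hpa": 101.3,
          "wind_dir_deg": 240, "wind_speed_kt": 12}
for issue in screen_observation(report):
    print(f"{report['station']}: {issue}")
```

The point is to flag suspect reports for a human to review rather than silently “correct” them; bad data quietly patched over is still bad data.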
Synoptic data is plotted to charts by observers using a system of numerical and pictorial code that reflects whatever data the forecaster may require. In some cases it may be just wind speed, direction, and pressure; in others, temperature, cloud cover, and any number of things may be included. Once the chart or charts are plotted (sometimes both an 850 mb low-level chart and a 200 mb upper-air chart are drawn), the forecaster can then perform one or two types of analysis on them. Generally an isobaric analysis is all that is performed, but in some facilities a streamline analysis is preferred. It isn’t unusual to see a chart that has been streamline analyzed also contain an isobaric analysis in a small area or two.
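The computerized analog of a hand isobaric analysis is contouring a gridded pressure field. The sketch below uses a synthetic pressure field purely for illustration; a real analysis would work from the plotted observations described above:

```python
# Illustrative sketch: contour a gridded sea-level pressure field the way a
# hand isobaric analysis would. The pressure field here is synthetic.
import numpy as np
import matplotlib.pyplot as plt

lon, lat = np.meshgrid(np.linspace(-110, -70, 80), np.linspace(25, 50, 60))
# Fake sea-level pressure: a low centered near 95W, 40N on a 1016 hPa background.
pressure = 1016 - 12 * np.exp(-((lon + 95) ** 2 + (lat - 40) ** 2) / 60)

fig, ax = plt.subplots(figsize=(8, 5))
# Isobars are conventionally drawn every 4 hPa.
contours = ax.contour(lon, lat, pressure, levels=np.arange(980, 1040, 4), colors="black")
ax.clabel(contours, fmt="%d")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Isobaric analysis (synthetic data)")
plt.show()
```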
Step two is to make sure that facilities not only have the best radar available, whatever system they have contracted to use, but that it is properly calibrated at least monthly. That said, if the operator doesn’t know how to properly read it, it is completely useless. If radar readings come in even a little outside the acceptable range of error, an entire forecast can easily be blown, because it throws off the way a forecaster factors in terrain, currents, and a host of other nuances that play into correctly envisioning movement.
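As a toy illustration of that calibration point, the sketch below compares a radar’s measured return from a reference target against its expected value and flags drift beyond a tolerance. The 1 dB tolerance and the readings are assumptions for demonstration, not any system’s actual spec:

```python
# Illustrative sketch: flag radar calibration drift against a reference target.
# The reference value, tolerance, and readings are invented for demonstration.

REFERENCE_DBZ = 40.0   # assumed expected return from the calibration target
TOLERANCE_DB = 1.0     # assumed acceptable error before forecasts are at risk

def check_calibration(measured_dbz: float) -> str:
    bias = measured_dbz - REFERENCE_DBZ
    if abs(bias) <= TOLERANCE_DB:
        return f"OK (bias {bias:+.1f} dB)"
    return f"OUT OF TOLERANCE (bias {bias:+.1f} dB) - recalibrate before trusting returns"

# Monthly check readings, oldest first:
for reading in [40.3, 39.8, 41.6]:
    print(check_calibration(reading))
```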
The next important factor is understanding which forecasting models work best under each set of conditions. In most cases there are several computer-generated models that will interpret any combination of data input to them. Usually these are generated by a facility along the lines of Global Weather Central, and they provide a prediction of what will happen over intervals ranging from 6 to 96 hours. Not all models are created equal, and all are only as good as the information they are given. While one may excel at forecasting the movement of a fast-moving cold front, it may be very weak at predicting everything else. Knowing something like that in advance tells you when to value a given model and when to discount it.
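One simple way to act on that knowledge is to weight each model’s guidance by how it has verified in the current regime. The sketch below blends hypothetical temperature forecasts that way; the model names and skill weights are invented for illustration:

```python
# Illustrative sketch: blend model guidance using weights that reflect each
# model's known strengths in the current situation. Names and weights are hypothetical.

# Hypothetical 24-hour temperature forecasts (deg F) from three models:
forecasts = {"model_a": 52.0, "model_b": 48.5, "model_c": 50.0}

# Situational skill weights, e.g., how each model has verified behind fast-moving cold fronts:
skill = {"model_a": 0.8, "model_b": 0.3, "model_c": 0.5}

total = sum(skill.values())
blended = sum(forecasts[m] * skill[m] for m in forecasts) / total
print(f"Weighted consensus: {blended:.1f} F")  # leans toward the model trusted in this regime
```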
Once all this information is collected, plotted, analyzed, and digested, the forecaster is faced with the task of envisioning the atmosphere not only in planes but in micro- and macroclimates, and how they all interact with each other. At that point they follow their gut instinct about what the data and imagery are telling them has just happened and what is likely to happen next. That is why the motto of many weather centers is, and has been, “your guess is as good as mine.”
After getting an idea of what goes into building a forecast and everything, large and small, that can go wrong along the way, the best way to improve weather forecasting is better luck. That sounds like a huge cop-out, but it is as true as true can be. The fact is, no matter what we know or think we know, nature does as it pleases. We are always a step behind. We can improve technology, and we can do our best to improve the human element, but in the end it comes down to good instincts built over years of experience and some great luck. At best all we ever really do is guess with nature; we never out-guess it.