The Truth About Verification
The December 2011 precipitation forecast issued by the International Research Institute for Climate and Society called for a 75 percent chance of above-normal precipitation over parts of the Philippines between January and March. As the months played out, storms brought roughly eight inches more rain than usual for the period, about 85 percent above average.
Does this mean the forecast was right? What if the storms never materialized and the region received eight inches of rain less than normal? Would the forecast then have been wrong?
In both cases, the answer would be no. That’s because there’s no such thing as a right or wrong probabilistic forecast. A 75 percent chance of above-normal rain also implies a 25 percent chance of normal or below-normal rain.
However, a forecast’s merits (or lack thereof) can be judged using a handful of different metrics. IRI has been verifying its forecasts internally and in academic journals for years, but last year IRI scientists Tony Barnston and Simon Mason worked with IRI staff members to make those results public.
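The article doesn't name the specific metrics IRI uses, but one widely used verification metric for probabilistic forecasts is the Brier score: the mean squared difference between the forecast probability and what actually happened (1 if the event occurred, 0 if not). A minimal sketch, applied to the 75 percent forecast above:

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and
    observed outcomes (1 = event occurred, 0 = it did not).
    Lower is better; 0 is a perfect score, and a forecast can only
    be evaluated this way over many forecast-outcome pairs."""
    pairs = list(zip(forecast_probs, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# The 75% above-normal forecast, with above-normal rain observed:
print(brier_score([0.75], [1]))  # 0.0625
```

Note that a single score like this can't declare one forecast "right"; it only becomes meaningful when averaged over many forecasts and compared against a baseline such as climatology.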
The result is a host of scores across five categories displayed using interactive charts and maps. Taken individually or as a whole, the scores give users a better sense of where IRI’s forecasts are more skillful and where there’s room for improvement. That in turn can help decision makers make, well, better decisions about whether to use the forecast and how much trust to put in it.