Slide 1

Probabilistic Forecasts and Their Verification as Decision-Making Tools for Warnings against Near-Gale Force Winds
Pertti Nurmi, Juha Kilpinen, Sigbritt Näsman, Annakaisa Sarkanen (Finnish Meteorological Institute)
WSN05: WWRP Symposium on Nowcasting and Very Short Range Forecasting, Toulouse, 5-9 September 2005
Contact: pertti.nurmi@fmi.fi
Slide 2: Introduction

- Develop warning criteria and guidance methods to forecast the probability of near-gale force winds in the Baltic.
- A joint Scandinavian research undertaking: e.g. Finland and Sweden issue near-gale and storm-force wind warnings for the same areas using different criteria. Homogenize!
- 6 Finnish coastal stations; c. 15-20 stations from Sweden, Denmark and Norway.
- Probabilistic vs. deterministic approach.
- HIRLAM and ECMWF model input.
- Different calibration methods, e.g. Kalman filtering.
- Goal: a common Scandinavian operational warning practice.
Slide 3: Data

- HIRLAM (limited-area model): RCR ~22 km version and MBE ~9 km version. Data coverage: 9.11.2004 - 31.3.2005, ~140 cases.
- ECMWF: applied as a reference only. Data interpolated to a 0.5° x 0.5° grid, nearest grid point. Data coverage: 1.10.2004 - 30.4.2005, ~210 cases.
- Forecast lead time: +6 hrs (longer lead times in the ECAM paper).
- Forecasts: wind speed at 10 m. Observations: 10-minute mean wind speed.
Slide 4: Observations

Potential problems:
- with the height of the instrumentation?
- with the observing site surroundings and obstacles: the coast, nearby islands, barriers, installations?
- with low-level stability?

A "statistical correction" scheme is available at FMI.
Slide 5: Height of the instrumentation

[Map: anemometer heights at the stations, colour scale 5-55 m. Large filled dots: the 6 Finnish stations being used. Yellow dot: Station_981, whose results are presented here.]
Slide 6: Wind speed dependence on observation height

Logarithmic wind profile. [Figure: station 979 (Bogskär), anemometer at 32 m. The 14 m/s near-gale threshold defined at 10 m corresponds to roughly 15.5 m/s at 32 m under neutral conditions; profiles shown for unstable, neutral and stable stratification.]
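The 32 m vs. 10 m numbers on this slide follow from the neutral logarithmic wind profile. A minimal sketch, assuming a roughness length of z0 = 0.0002 m for open sea (the value actually used on the slide is not stated):

```python
import math

def neutral_log_profile(u_ref, z_ref, z_target, z0=0.0002):
    """Scale a wind speed between heights using the neutral logarithmic
    profile u(z) ~ ln(z / z0). z0 = 0.0002 m is a typical roughness
    length over open sea (an assumption, not from the slide)."""
    return u_ref * math.log(z_target / z0) / math.log(z_ref / z0)

# The 14 m/s near-gale threshold at 10 m, seen by a 32 m anemometer:
threshold_32m = neutral_log_profile(14.0, 10.0, 32.0)
print(round(threshold_32m, 1))  # ~15.5 m/s, as on the slide
```

Stable or unstable stratification bends the profile away from this neutral curve, which is why the figure shows three separate curves.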
Slide 7: Methods for producing probabilistic forecasts (1)

1. ECMWF EPS (51 members): P(wind speed > 14 m/s) from the ensemble.
2. Kalman filtering: various approaches; no details given here.
3. "Dressing" method: a deterministic forecast, "dressed" with an a posteriori description of its observed error distribution over a past, dependent sample (~140 cases); the error distribution is approximated with a Gaussian fit (μ, σ). P(wind speed > 14 m/s). A "simplistic reference"!
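Methods 1 and 3 reduce to simple calculations. A sketch of both, assuming forecast errors are defined as forecast minus observation (the slide does not state the sign convention):

```python
import math

def eps_probability(members, threshold=14.0):
    """Method 1: fraction of EPS members forecasting wind above threshold."""
    return sum(m > threshold for m in members) / len(members)

def dressed_probability(fcst, err_mean, err_std, threshold=14.0):
    """Method 3: a deterministic forecast 'dressed' with a Gaussian fit
    (mu, sigma) to its past error distribution (dependent sample).
    Errors are assumed to be forecast minus observation."""
    # The observed wind is then modelled as N(fcst - err_mean, err_std), so
    # P(obs > threshold) = 1 - Phi((threshold - (fcst - err_mean)) / err_std).
    z = (threshold - (fcst - err_mean)) / err_std
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

For an unbiased forecast exactly at the 14 m/s threshold, the dressed probability is 0.5 regardless of the error spread; the spread controls how quickly the probability rises as the forecast climbs above the threshold.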
Slide 8: Methods for producing probabilistic forecasts (2)

4. "Stability" method: a deterministic forecast adjusted with a Gaussian fit to the model-forecast stability (temperature forecasts from two adjacent model levels). P(wind speed > 14 m/s). Roughly the scheme used at SMHI (H. Hultberg).
5. "Uncertainty area" method (aka "neighborhood method" or "probabilistic upscaling"): spatially (see figure) and/or temporally neighboring grid points are treated as c. 50-500 "members". Open questions: the size of the uncertainty area and of the time window. RCR: ±3 points, ~150*150 km²; MBE: ±6 points, ~120*120 km².
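The "uncertainty area" probability of method 5 is simply the exceedance fraction over a window of neighboring grid points. A minimal spatial-only sketch (the 2-D field layout and the edge handling are assumptions):

```python
import numpy as np

def neighborhood_probability(wind, i, j, radius, threshold=14.0):
    """Method 5: fraction of grid points within +/- radius of (i, j) whose
    forecast wind speed exceeds the threshold ('probabilistic upscaling')."""
    # Clip the window at the domain edges rather than padding (assumption).
    i0, i1 = max(i - radius, 0), min(i + radius + 1, wind.shape[0])
    j0, j1 = max(j - radius, 0), min(j + radius + 1, wind.shape[1])
    window = wind[i0:i1, j0:j1]          # up to (2*radius + 1)**2 "members"
    return float(np.mean(window > threshold))

# An MBE-like +/- 6-point window yields 13*13 = 169 members;
# an RCR-like +/- 3-point window yields 7*7 = 49.
```

The choice of radius trades sharpness against reliability, which is exactly the "tricky issue" the slide flags for the uncertainty-area size.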
Slide 9: Probabilistic forecasts: ROC

Relative Operating Characteristic:
- Determines the ability of a forecasting system to discriminate situations when a signal is present (here, the occurrence of near-gale) from no-signal cases ("noise").
- Tests model performance (hit rate H vs. false alarm rate F) relative to a given probability threshold.
- Applicable to probability forecasts and also to categorical deterministic forecasts, which allows for their comparison.
- The "R" statistical package was used for the ROC computation and presentation.
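The slide notes that R was used for the actual computation; purely as an illustration, the H-vs-F construction and a trapezoidal ROC area can be sketched as:

```python
import numpy as np

def roc_points(probs, obs, thresholds=np.linspace(0.0, 1.0, 11)):
    """Hit rate H and false alarm rate F at each probability threshold.
    probs: forecast probabilities; obs: 1 where near-gale occurred, else 0."""
    probs, obs = np.asarray(probs), np.asarray(obs)
    F, H = [], []
    for t in thresholds:
        warn = probs >= t                     # warning issued at this threshold
        hits = np.sum(warn & (obs == 1))
        misses = np.sum(~warn & (obs == 1))
        false_alarms = np.sum(warn & (obs == 0))
        correct_negs = np.sum(~warn & (obs == 0))
        H.append(hits / (hits + misses))
        F.append(false_alarms / (false_alarms + correct_negs))
    return np.array(F), np.array(H)

def roc_area(F, H):
    """ROC area by the trapezoidal rule; ties in F are ordered by H."""
    order = np.lexsort((H, F))                # primary key F, secondary key H
    F, H = np.asarray(F)[order], np.asarray(H)[order]
    return float(np.sum(0.5 * (H[1:] + H[:-1]) * np.diff(F)))
```

A perfect probabilistic forecast gives an area of 1.0 and no-skill guessing about 0.5, which is the scale behind the ROC A values on the next slides.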
Slide 10: ROC curve/area; Station_981; +6 hrs; no. of events ~25/130

[Figure: ROC curves. "Simple reference" (dependent sample), HIR_MBE "dressing": ROC A = 0.93, fitted ROC A = 0.91. HIR_MBE "uncertainty area" (~120*120 km): ROC A = 0.82, fitted ROC A = 0.91.]
Slide 11: ROC curve/area; Station_981; +6 hrs; no. of events ~25/130

[Figure: ROC curves. "Simple reference" (dependent sample), HIR_MBE "dressing": ROC A = 0.93, fitted ROC A = 0.91. HIR_MBE "stability": ROC A = 0.84, fitted ROC A = 0.82.]
Slide 12: Comparison of methods; Station_981; +6 hrs

[Figure]
Slide 13: Conclusions / Future

- We've only scratched the (sea) surface: (much) more experimentation with the various methods is needed. Different methods for different time/space scales?
- Apply to the data of the other Scandinavian counterparts (here, only a single station).
- Scores depend on station properties (e.g. observation height; not dealt with here): (statistical) adjustment of the original observations is required! Finland has an operational scheme for this.
- "Dressing" of the dependent sample sets a quality level that is hard to reach.
- The "uncertainty area" size is a tricky issue.
- The higher-resolution HIRLAM version produces higher scores; not necessarily a trivial result!
- Reach the goal, i.e. a common Scandinavian operational warning practice!