Slide 1: Verification techniques for high-resolution NWP precipitation forecasts
Emiel van der Plas (plas@knmi.nl), Kees Kok, Maurice Schmeits
EMS 2013, Reading, UK
Slide 2: Introduction
- NWP has come a long way: it became Hirlam, now it is Harmonie, and it should eventually become GALES (or so).
- The output looks better. But how is it better? Does it perform better? That remains to be seen.
Slide 3: Representation: the "double penalty"
- High-resolution models forecast localised phenomena. A slightly displaced feature scores a miss where the event was observed and a false alarm where it was predicted: false alarm + miss = double penalty.
- This shows up when taking point-by-point errors (ME/RMSE) against station (gauge) data or radar data. (A small R illustration follows.)
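A minimal R sketch of the double-penalty effect, using made-up numbers (the five-point grid and the amounts are illustrative, not from the talk):

```r
# A rain cell forecast one grid point away from where it was observed
# is penalised twice by point-by-point scores, even though the
# forecast is "almost right".
obs <- c(0, 10, 0, 0, 0)   # rain observed at point 2
fcs <- c(0, 0, 10, 0, 0)   # the same cell forecast at point 3

rmse        <- sqrt(mean((fcs - obs)^2))  # punished at BOTH points
misses      <- sum(fcs == 0 & obs > 0)    # 1: miss at point 2
false_alarm <- sum(fcs > 0 & obs == 0)    # 1: false alarm at point 3
hits        <- sum(fcs > 0 & obs > 0)     # 0: no credit at all
```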
Slide 4: This talk
- HARP: Hirlam Aladin R-based verification Packages
  - tools for spatial and ensemble verification, based on R (FSS, SAL, ...)
  - relies on e.g. the SpatialVx package (NCAR)
- Generalized MOS approach
- Comparison of high vs low resolution:
  - Hirlam (11 km, hydrostatic)
  - Harmonie (2.5 km, non-hydrostatic, with & without Mode-S)
  - ECMWF (T1279, deterministic)
- Lead times: +003, +006, +009, +012
- Accumulated precipitation vs (Dutch) radar and synop observations
Slide 5: Neo-classical: neighbourhood methods, FSS
- Options: FSS, ISS, SAL, ...
- Fraction Skill Score (fuzzy verification; Roberts & Lean, 2008), definition below:
  - straightforward interpretation
  - 'resolves' the double penalty
  - but 'smoothes' away resolution that may contain information!
- [Figure: observation and forecast fields upscaled over increasing neighbourhood sizes; base rate and FSS]
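For reference, the Fraction Skill Score as defined by Roberts & Lean (2008), with O_i and M_i the observed and forecast fractions exceeding the threshold within neighbourhood i:

```latex
% Fractions Brier Score over N neighbourhoods, and the FSS.
\[
  \mathrm{FBS} = \frac{1}{N}\sum_{i=1}^{N} \left( M_i - O_i \right)^2,
  \qquad
  \mathrm{FSS} = 1 - \frac{\mathrm{FBS}}
    {\frac{1}{N}\left( \sum_{i=1}^{N} M_i^2 + \sum_{i=1}^{N} O_i^2 \right)}
\]
% FSS runs from 0 (no skill) to 1 (perfect) and increases with
% neighbourhood size as small-scale displacement errors smooth out.
```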
Slide 6: FSS results
- Differences between models are sometimes subtle. [Figure: FSS scores]
Slide 7: FSS results (continued)
- Differences are sometimes subtle. [Figure: FSS for 1x1 and 3x3 neighbourhoods]
Slide 8: FSS: more results
- Higher resolutions: higher thresholds?
- Note: these scores are for direct model output (DMO)!
Slide 9: Model Output Statistics
- How would a trained meteorologist look at direct model output?
- Learn for each model, location, ... separately!
Slide 10: Model Output Statistics
- Construct a set of predictors (per model, station, start time and lead time):
  - for now: use precipitation only
  - use various 'areas of influence': 25, 50, 75, 100 km
  - DMO, coverage, max(DMO) within the area, distance to forecasted precipitation, ...
- Apply logistic regression (forward stepwise selection, backward deletion) to obtain a probability of threshold exceedance. (A sketch of this step follows the list.)
- Verify the probabilities based on DMO and the coefficients of the selected predictors.
- Training data: days 1-20; 'independent' data: days 21-28/31.
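A minimal R sketch of the regression step, assuming a training data frame `train` with a binary predictand `exceed` (precip > threshold) and hypothetical predictor names (`dmo`, `coverage_50`, `max_dmo_100`, `dist`); base R's `glm()` and `step()` stand in for the talk's actual implementation:

```r
# Null model and a full model holding all candidate predictors.
null_fit <- glm(exceed ~ 1, data = train, family = binomial)
full_fit <- glm(exceed ~ sqrt(dmo) + coverage_50 + sqrt(max_dmo_100) + dist,
                data = train, family = binomial)

# Forward stepwise selection with backward deletion, driven by AIC.
mos_fit <- step(null_fit,
                scope = list(lower = null_fit, upper = full_fit),
                direction = "both")

# Probability of threshold exceedance on the independent days.
p_exceed <- predict(mos_fit, newdata = indep, type = "response")
```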
Slide 11: Model (predictor) selection
- Based on the AIC (Akaike Information Criterion).
- Take the predictor that most improves (i.e. lowers) the AIC on the training set (days 1-20). (See the comparison sketch after this list.)
- Test on the independent set (days 21-28/31).
- More predictors != more skill.
- [Figure: verification of selected predictors, e.g. sqrt(tot_100), sqrt(max_100), distext_100, exp2int_100]
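In R, one selection step amounts to comparing single-predictor fits by AIC; the predictor names follow the slide, while the data frame `train` and predictand `exceed` are assumed as before:

```r
# Fit each candidate on the training days and compare by AIC;
# the smallest AIC wins this selection step.
fits <- list(
  tot  = glm(exceed ~ sqrt(tot_100), data = train, family = binomial),
  max  = glm(exceed ~ sqrt(max_100), data = train, family = binomial),
  dist = glm(exceed ~ distext_100,   data = train, family = binomial)
)
sapply(fits, AIC)
```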
Slide 12: Model (predictor) selection (continued)
- Same procedure for another case: AIC-based selection on days 1-20, test on days 21-28/31.
- More predictors != more skill.
- [Figure: verification of selected predictors, e.g. sqrt(tot_75), max_50]
Slide 13: Model comparison (April - October 2012)
- Hirlam, Harmonie (based on Hirlam), ECMWF
- [Figure: scores at 12UTC+003, 12UTC+006, 12UTC+009]
Slide 14: Summer vs winter
- Harmonie is expected to perform better in convective situations: what happens during winter?
- [Figure: scores for summer (apr-nov) vs winter (nov-apr)]
Slide 15: Discussion, to do
- MOS method:
  - stratification per station, season, ...
  - more data necessary; reforecasting under way
  - representation error: take a (small) radar area
  - use ELR, conditional probabilities for higher thresholds
  - extend to wind, fog/visibility, MSG/cloud products, etc.
- FSS:
  - use OPERA data
Slide 16: Conclusion/Discussion
- Comparison between NWPs of different resolution is, well, fuzzy: realism != score.
- The Fraction Skill Score yields numbers, but it is sometimes hard to draw conclusions from them.
- MOS method:
  - resolution/model independent
  - takes into account what we know
  - (potentially) doubles as predictive guidance
- Thank you for your attention!
Slide 17 (bonus): Extended Logistic Regression (ELR)
- Binary predictand $y_i$ (here: precip $> q$).
- Logistic probability: $p_i = \Lambda(\mathbf{x}_i^{T}\boldsymbol{\beta}) = 1 / \left(1 + \exp(-\mathbf{x}_i^{T}\boldsymbol{\beta})\right)$.
- Joint likelihood: $L(\boldsymbol{\beta}) = \prod_i p_i^{y_i} (1 - p_i)^{1 - y_i}$.
- $L_2$ penalisation (using R: stepPlr by Mee Young Park and Trevor Hastie, 2008): minimise $-\log L(\boldsymbol{\beta}) + \tfrac{\lambda}{2} \lVert \boldsymbol{\beta} \rVert^2$.
- Use the threshold (sqrt(q)) as a predictor: yields a complete distribution function (Wilks, 2009). (Sketch below.)
- Few cases, many potential predictors: pool stations, use at most 5 terms.
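A minimal R sketch of the ELR idea (Wilks, 2009), assuming a data frame `train` with observed precipitation `obs` and a single predictor `dmo` (names are illustrative): each case is replicated once per threshold with sqrt(q) entering as an extra predictor, so a single fit yields a consistent, non-crossing set of exceedance probabilities.

```r
thresholds <- c(0.3, 1, 3, 10)   # mm; example values, not from the talk

# Stack one copy of the training data per threshold q.
stacked <- do.call(rbind, lapply(thresholds, function(q)
  transform(train, exceed = as.numeric(obs > q), sqrt_q = sqrt(q))))

elr_fit <- glm(exceed ~ sqrt(dmo) + sqrt_q,
               data = stacked, family = binomial)

# P(precip > 3 mm) for a new case with dmo = 5 mm:
predict(elr_fit,
        newdata = data.frame(dmo = 5, sqrt_q = sqrt(3)),
        type = "response")

# The talk additionally applies an L2 penalty via the stepPlr package;
# plain glm() is used here only to keep the sketch self-contained.
```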