Verification Overview


Verification Overview (based on CP activity)
Dimitra Boucouvala & WG5
This year I was assigned the task of the Common Verification, which included the comparison of model results over a common area.
COSMO GM, 2-8 Sept 2018, St. Petersburg

VERIFICATION OVER THE COMMON AREA
Models: COSMO-7 (7 km), COSMO-GR (4 km), COSMO-I5 (5 km), COSMO-ME (5 km), COSMO-PL (7 km), COSMO-RU7 (7 km), ICON-EU (6.5 km), IFS (9 km), ICON (13.5 km).
Annual reports and seasonal analyses are located on the website: http://cosmo-model.org/content/tasks/verification.priv/

Scores for the 00 UTC and 12 UTC runs.
Continuous parameters (3 h): temperature at 2 m, dew point temperature at 2 m, pressure reduced to mean sea level, wind speed at 10 m, total cloud cover.
Dichotomous parameters: total precipitation in 6 hours, total precipitation in 24 hours, total cloud cover (0-25, 25-75, 75-100%), wind gust (thresholds > 12.5, 15, 20).
Performance diagram: combines different dichotomous scores based on contingency tables; scores are better the closer they are to the top right.
Extremal Dependence Index (EDI) for 6 h and 24 h precipitation, thresholds: 1, 5, 10, 15, 20, 25, 30 mm.
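The quantities plotted on a performance diagram all derive from the same 2x2 contingency table. A minimal sketch of that reduction (the function name and the example counts are illustrative, not from the verification package used here):

```python
def performance_scores(hits, false_alarms, misses):
    """Scores shown on a performance diagram, from a 2x2 contingency table.

    POD (probability of detection) is the y-axis and SR (success ratio)
    the x-axis; FBI (frequency bias) and CSI (threat score) appear as
    straight rays and curved isolines. A perfect forecast sits at the
    top-right corner (POD = SR = 1).
    """
    pod = hits / (hits + misses)                    # probability of detection
    sr = hits / (hits + false_alarms)               # success ratio = 1 - FAR
    fbi = (hits + false_alarms) / (hits + misses)   # frequency bias
    csi = hits / (hits + false_alarms + misses)     # critical success index
    return pod, sr, fbi, csi

# illustrative counts only
pod, sr, fbi, csi = performance_scores(hits=40, false_alarms=10, misses=20)
```

FBI < 1 with this convention means the event is forecast less often than observed, which is how the underestimation statements later in the talk should be read.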

T2M, WS: the score difference between the 00 UTC and 12 UTC runs is not significant, except for some scores of 2 m dew point temperature.

Mean Sea Level Pressure (ME and RMSE plots): the RMSE shows a clear diurnal variability and increases with lead time in all seasons, most clearly in JJA.

Temperature 2 m (ME and RMSE plots): clear diurnal variability of the bias, with overestimation at night and underestimation during the day. In JJA the bias is high at night.
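The ME and RMSE curves behind these plots reduce to simple aggregations of forecast-minus-observation pairs, stratified by verification hour to expose the diurnal cycle. A hedged sketch (array names and the toy data are illustrative):

```python
import numpy as np

def me_rmse_by_hour(fcst, obs, hours):
    """Mean error (bias) and RMSE of matched forecast/observation pairs,
    grouped by verification hour so the diurnal cycle can be plotted."""
    fcst, obs, hours = map(np.asarray, (fcst, obs, hours))
    err = fcst - obs
    out = {}
    for h in np.unique(hours):
        e = err[hours == h]
        out[int(h)] = (e.mean(), np.sqrt((e ** 2).mean()))
    return out  # {hour: (ME, RMSE)}

# illustrative data: warm bias at 00 UTC, cold bias at 12 UTC
scores = me_rmse_by_hour([2.0, 1.0, -1.0], [1.0, 0.0, 0.5], [0, 0, 12])
```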

TOTAL CLOUD COVER (ME and RMSE), mean annual daily cycle, from 2017-06-01 to 2018-05-31:
COSMO models: ME > 0; ICON-EU/ICON: ME ~ 0; IFS: ME < 0.
ME and RMSE are maximum during nighttime.

Total cloud cover, dichotomous FBI (0-25, 25-75, 75-100%): for IFS, ICON and ICON-EU, FBI > 1 for the 0-25% class and FBI < 1 for the 75-100% class. The dichotomous cloud analysis shows that occurrences of low cloud cover are slightly overestimated and occurrences of high cloud cover underestimated.

WIND SPEED AT 10 M, mean annual daily cycle, from 2017-06-01 to 2018-05-31: ICON-EU/ICON have a negative bias with less diurnal bias variation and a daytime maximum, in contrast to the other models.

WIND GUST 10 m, dichotomous scores, 00 UTC run, DJF, thresholds > 12.5, > 15, > 20: occurrences are underestimated for the higher wind gust thresholds, and the ETS scores worsen (move closer to 0).
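The ETS values quoted here follow the usual equitable-threat-score definition, which discounts the hits expected by chance; a sketch (counts in the example are illustrative):

```python
def ets(hits, false_alarms, misses, correct_negatives):
    """Equitable Threat Score (Gilbert skill score) from a 2x2
    contingency table: 1 is a perfect forecast, 0 is no skill
    beyond random chance, negative values are worse than chance."""
    n = hits + false_alarms + misses + correct_negatives
    # hits expected from a random forecast with the same marginals
    hits_random = (hits + false_alarms) * (hits + misses) / n
    return (hits - hits_random) / (hits + false_alarms + misses - hits_random)

# illustrative counts only
skill = ets(hits=10, false_alarms=5, misses=5, correct_negatives=80)
```

For rare events such as strong gusts, both hits and the chance correction shrink, which is why the ETS drifts toward 0 at the higher thresholds.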

Annual RMSE trends (TCC, TEMP, TDEW, WSPD): drop of the ME and RMSE errors in DJF.

Precipitation 6 h > 0.2 mm (FBI, threat score): in DJF there is no diurnal variability and a general overestimation; in JJA the FBI has a diurnal cycle with maximum values at 12 UTC. Light precipitation is overestimated mainly by IFS and ICON.

Precipitation 6 h > 5 mm: for the higher thresholds the FBI diurnal cycle is weaker, with a tendency towards underestimation.

Extremal Dependence Index, 00 UTC run, precipitation 6 h > 5 mm and > 10 mm: better values for DJF and for the lower thresholds; a diurnal cycle appears for the higher thresholds; index values decrease with lead time.
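The Extremal Dependence Index is computed from the hit rate H and the false-alarm rate F of the dichotomous precipitation forecasts; unlike the ETS it does not degenerate to zero as the event becomes rare. A hedged sketch of the standard formula:

```python
import math

def edi(hits, false_alarms, misses, correct_negatives):
    """Extremal Dependence Index from a 2x2 contingency table:
    EDI = (ln F - ln H) / (ln F + ln H),
    with H the hit rate and F the false-alarm rate.
    Ranges from -1 to 1; 1 is perfect, 0 indicates no skill.
    Assumes 0 < H < 1 and 0 < F < 1 (otherwise ln is undefined)."""
    H = hits / (hits + misses)                             # hit rate
    F = false_alarms / (false_alarms + correct_negatives)  # false-alarm rate
    return (math.log(F) - math.log(H)) / (math.log(F) + math.log(H))
```

When H equals F the forecast is no better than chance and the EDI is exactly 0; H above F gives positive values.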

Standard Verification on Common Area 2 (high resolution): COSMO-DE2, COSMO-1, COSMO-IT and COSMO-PL2, together with ICON-EU, ICON and IFS, for the 00 UTC run.

2 m temperature and 2 m dew point temperature: comparison plots of the fine- and coarse-resolution models.

Precipitation 6 h > 0.2 mm: better results for the HR models, with higher ETS and FBI closer to 1.

Standard Verification on Common Area 3 (dry-area climate): COSMO-IL (2.8 km), COSMO-GR (4 km) and COSMO-ME (5 km) for the 00 UTC run.

TCC, wind speed and Tdew scores are generally worse than for CM1.

The “performance rose”: a novel diagram, introduced by Maria Stefania Tesini, in which the scores and the types of error of the wind forecast are summarized by direction; it was applied to the Common Area this year.

How it works: for each station, observed and forecast wind data are categorized into octants by wind direction and into classes by wind speed:
Light: ws < 10 knots
Light-Moderate: 10 ≤ ws < 20 knots
Moderate: 20 ≤ ws < 30 knots
Strong: ws ≥ 30 knots
A separate plot is produced for each class.
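The categorization step above can be sketched as follows, using the speed thresholds from the slide; the function name and the octant convention (octant 0 centered on north) are illustrative assumptions, not taken from the actual implementation:

```python
def classify_wind(speed_kt, direction_deg):
    """Assign one wind report to a direction octant and a speed class,
    as used to build the performance rose.

    Octants are assumed centered on the compass points:
    337.5-22.5 deg -> 0 (N), 22.5-67.5 deg -> 1 (NE), and so on.
    Speed thresholds (knots) are those given on the slide."""
    octant = int(((direction_deg + 22.5) % 360) // 45)
    if speed_kt < 10:
        speed_class = "Light"
    elif speed_kt < 20:
        speed_class = "Light-Moderate"
    elif speed_kt < 30:
        speed_class = "Moderate"
    else:
        speed_class = "Strong"
    return octant, speed_class
```

Counting observed and forecast reports per (octant, class) cell then gives the blue and red frequency lines of the rose directly.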

How it works: the blue line is the observed frequency of the specific speed class in each direction; the red line is the forecast frequency of the specific speed class in each direction. The number of events can be read on the radial scale (frequency axis), increasing outward from the center. The frequency bias can be easily deduced from the relative position of the blue and red lines:
Red outer → overestimation
Blue outer → underestimation

Colored sectors represent how the model predicts the reference speed class in each direction, given that the direction is correct:
the speed class is correct;
the speed is overestimated by one class;
the speed is underestimated by one class.

Selected Stations for Common Area

According to this implementation of the performance rose, some questions may be of interest to “Common Plots” users:
Do COSMO models overestimate mountain 10 m wind speed?
Are there significant differences between forecasts of Day 1 and Day 2? (Useful for warnings.)
Are there differences between a model and its driving model?

Station: 11683 SVRATOUCH, height 740 m. Period: one year of data, from June 2017 to May 2018. Model: COSMO-5M, 00 UTC run, forecast day 1, step 3 h. Result: underestimation of 10 m winds in the mountains.

SON 2017, station 12830, COSMO-GR: differences between Day 1 and Day 2.

Summary of results I
Temperature 2 m: clear diurnal cycle of the bias, with higher values during the night.
Total cloud cover: positive bias, especially at night; IFS, ICON and ICON-EU show similar behavior with weaker variability and small negative values. Dichotomous TCC scores showed that the larger-scale models overpredicted cases of low cloudiness.
Dew point temperature 2 m: weaker variability in SON and DJF.
Mean sea level pressure: all models (also IFS, ICON, ICON-EU) show a maximum of RMSE during summer in the late afternoon; RMSE increases with lead time.
Wind speed 10 m: lower bias amplitudes for IFS, ICON and ICON-EU.
Wind gust 10 m: scores are worse for the higher thresholds.

Summary of results II
Precipitation:
Summer: overestimation of occurrences of low precipitation amounts during the day, especially for 06-12 UTC; underestimation for 18-24 UTC (the FBI decreases for higher precipitation amounts).
Winter: overestimation of occurrences of low precipitation throughout the day. For higher precipitation amounts the frequency bias is slightly greater than 1, with worse quality compared to low precipitation amounts. Overestimation by ICON and IFS for low precipitation amounts.
Tendency of RMSE: on an annual basis most RMSE scores, apart from wind speed, improved. JJA scores were slightly worse than last year (too warm and dry a summer), while DJF scores improved.
Tests for Common Areas II and III: scores for the HR models over CM2 were slightly better than over CM1, but for the dry area CM3 some scores, such as TCC, TDEW and WS, were worse.
Wind rose approach: errors of wind speed and direction plotted in a novel wind-rose scheme can be a very useful tool for error classification.

Thank you for your attention

T DEW 2 m: diurnal cycle of ME and RMSE in JJA and MAM; ICON and ICON-EU bias > 0. What is different here is the diurnal cycle observed in JJA and MAM, with overestimation in the afternoon, and the tendency of ICON and ICON-EU to overestimate.

Temperature seasonal trends: increase in JJA. The annual RMSE decrease was due to DJF, since there was an increase in summer.

Wind speed seasonal trends: decrease in MAM. The annual RMSE increase was due to all seasons except MAM.

Conclusions I
We investigated the errors of the COSMO models over common datasets from JJA 2017 to MAM 2018. The results showed that the general behaviour of the models did not change significantly from previous years.
In summer (JJA 2017) most RMSE scores increased compared to last year, possibly due to the extremely warm and dry summer. On the other hand, winter (DJF 2017-18) RMSE scores were generally better. On an annual basis, the T2m, Tdew and TCC RMSE scores improved.
Cloud cover: the ME of the global models is negative and lower than that of COSMO, and the dichotomous analysis showed that this is mostly due to the overestimation of the low-threshold cases. The CSI and ETS scores are worse for the 25-75% class.

Conclusions II
The precipitation FBI scores show diurnal variability in JJA (as in previous years), with drizzle overestimation by the global models, especially IFS.
The Extremal Dependence Index was better for DJF and for the lower precipitation thresholds.
The HR model scores for CM2 are slightly better than for CM1, especially for JJA.
The CM3 dry-area scores are worse than CM1, especially for TCC, Tdew and wind speed (mainly in DJF, which is the season that differs from CM1). Can we conclude from this (together with the JJA CM1 results) that the models do not perform as well in drier regimes?