Advances in Rfdbk based Verification at DWD


Advances in Rfdbk based Verification at DWD
Felix Fundel
Deutscher Wetterdienst, FE 15 – Predictability & Verification
Tel.: +49 (69) 8062 2422
Email: Felix.Fundel@dwd.de

Content
I. Rfdbk package news
II. Verification news
  - Significance Tests
  - Conditional Verification I & II
  - Probabilistic Verification
  - Smaller Changes
III. Plans

Rfdbk Package
- Bug occurring when extracting only one observation is fixed.
- New function "bitcheck" allows filtering bit-encoded integers:
  - Some observation attributes in feedback files are given as bit strings converted to an integer (e.g. flags); this way any combination of attribute values can be assigned to an observation.
  - bitcheck allows filtering all observations for a set of bit values, e.g. bitcheck(flags, c(10,18)) tells whether the FG and redundancy flags are set (cf. Feedback File Definition 2014).
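The bit test itself can be sketched as follows (Python for illustration only; the actual Rfdbk function is R, and the bit positions 10 and 18 for the FG and redundancy flags follow the slide's example):

```python
def bitcheck(value, bits):
    """True if every bit position in `bits` is set in the
    bit-encoded integer `value` (sketch of the R bitcheck idea)."""
    return all(value & (1 << b) for b in bits)

# An integer with (only) bits 10 and 18 set:
flags = (1 << 10) | (1 << 18)
print(bitcheck(flags, [10, 18]))  # True
print(bitcheck(flags, [10, 11]))  # False: bit 11 is not set
```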

Rfdbk Verification
Optional namelist options (if not given, DWD standard settings are used):

NAME             VALUE                         DESCRIPTION
IdentList        '/path/to/your/identlist'     Use only station(s) given in list file (integer)
statidList       '/path/to/your/statidlist'    Use only station(s) given in list file
dateList         '/path/to/your/datelist'      Verify dates in list separately (YYYYMMDD)
lonlims/latlims  '0,30'                        Restrict verification domain (faster)
iniTimes         '0,12'                        Use only runs given in argument
inclEnsMean      'TRUE'                        Include EPS mean in det. verification
mimicVersus      'TRUE'                        Use VERSUS quality check only
sigTest          'TRUE'                        Perform significance test on differences in score means
conditionN       'R code defining condition'   Perform conditional verification
alignObs         'TRUE'|'FALSE'|'REDUCED'      Full/no/reduced observation alignment
insType          '1,2,3..'                     Select type of instrument

Essentially any observation/forecast characteristic contained in feedback files (~50) can be used to refine the verification via the namelist.
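A minimal namelist excerpt combining a few of these options might look as follows (paths and values are placeholders, not DWD defaults):

```
statidList '/path/to/your/statidlist'  # verify only the stations in this list
lonlims    '0,30'                      # restrict the longitude range
iniTimes   '0,12'                      # use only the 00 and 12 UTC runs
sigTest    'TRUE'                      # t-test on score differences
```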

Significance Test
- Included in TEMP, SYNOP, deterministic and ensemble verification.
- t-test for a significant difference from 0 of the score difference between 2 experiments.
- Implemented for area-mean scores (not station-based).
- 24 hours between score validity times are required (e.g. test each initial time separately).
- The t-test's requirements (normally distributed, iid measurements) are approximately valid for daily scores.
- Point colors indicate significance:
  - Only visible if runs are not aggregated (e.g. Ini Time 00 or 12).
  - Also visible when the score difference is plotted.
- Works also in hindcast mode.
- As of now, not possible together with conditional verification.
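The underlying statistic is the one-sample t test applied to daily score differences; a minimal sketch (Python for illustration; the package itself is R):

```python
import math

def t_statistic(diffs):
    """One-sample t statistic testing whether the mean score
    difference between two experiments differs from 0."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Consistently positive daily differences give a large t value:
print(round(t_statistic([1.0, 1.1, 0.9, 1.0]), 2))
```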

Conditional Verification I
- Implemented for SYNOP deterministic verification.
- Needs a list (ASCII file) of dates (YYYYMMDD) given in the namelist.
- The model name is extended by +/- (date in/not in list).
- Stratification by dates happens during score aggregation, i.e. already produced score files can be reused.
- Meant for conditional verification by e.g. weather types.
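The stratification amounts to splitting per-date scores into two groups, which could be sketched as (Python for illustration; names are hypothetical, not package code):

```python
def stratify_by_dates(scores, datelist):
    """Split per-date scores (dict date -> score) into the group whose
    dates are in the list and the rest, mirroring the +/- suffix
    appended to the model name."""
    in_list = {d: s for d, s in scores.items() if d in datelist}
    not_in = {d: s for d, s in scores.items() if d not in datelist}
    return in_list, not_in

scores = {"20140601": 0.8, "20140602": 1.2, "20140603": 0.9}
plus, minus = stratify_by_dates(scores, {"20140602"})
print(sorted(plus))   # ['20140602']
print(sorted(minus))  # ['20140601', '20140603']
```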

Conditional Verification II
- Implemented for SYNOP deterministic verification.
- Uses observation properties to define conditions; several properties can be combined.
- An arbitrary number of conditions is possible; conditions are set in the namelist.
- The model name is extended by the number of the class.
- Stations that do not report an observation used in a condition statement are excluded.
Example namelist:
condition1 "list(N='obs==0',N='abs(veri_data-obs)<1')"
condition2 "list(N='obs==0',N='abs(veri_data-obs)>=1')"
condition3 "list(N='obs%between%c(1,4)',N='abs(veri_data-obs)<1')"
condition4 "list(N='obs%between%c(1,4)',N='abs(veri_data-obs)>=1')"
condition5 "list(N='obs%between%c(5,7)',N='abs(veri_data-obs)<1')"
condition6 "list(N='obs%between%c(5,7)',N='abs(veri_data-obs)>=1')"
condition7 "list(N='obs==8',N='abs(veri_data-obs)<1')"
condition8 "list(N='obs==8',N='abs(veri_data-obs)>=1')"
With this implementation conditions need to relate to the observation (i.e. lon%between%c(0,20) is not possible).
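The class assignment can be mimicked as follows (a Python sketch; the namelist conditions above are R expressions evaluated by the package, here rewritten as predicates on observation and forecast):

```python
def classify(obs, fcst, conditions):
    """Return the 1-based index of the first condition the
    (observation, forecast) pair satisfies, or None if none matches.
    The index corresponds to the class number appended to the model name."""
    for i, cond in enumerate(conditions, start=1):
        if cond(obs, fcst):
            return i
    return None

# Analogues of condition1 and condition2 from the namelist
# (obs == 0 with small / large absolute forecast error):
conditions = [
    lambda o, f: o == 0 and abs(f - o) < 1,
    lambda o, f: o == 0 and abs(f - o) >= 1,
]
print(classify(0, 0.5, conditions))  # 1
print(classify(0, 3.0, conditions))  # 2
```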

Probabilistic Verification
- Based on "probability files" that need to be produced from feedback files.
- Probability files hold the probability of an EPS forecast exceeding a threshold.
- Arbitrary probability files can be aggregated to calculate e.g. Brier scores (and their decomposition), ROC area, ROC curves, and reliability diagrams.
- This approach is time-consuming and not very flexible; a verification script combining ensemble and probabilistic verification is in progress.
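As an example of a score computable from such files, the Brier score is the mean squared difference between the forecast exceedance probability and the binary outcome (a Python sketch, not the package code):

```python
def brier_score(probs, outcomes):
    """Brier score: mean squared difference between forecast
    exceedance probabilities (0..1) and observed outcomes (0 or 1)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

print(brier_score([1.0, 0.0, 1.0], [1, 0, 1]))  # 0.0 (perfect forecasts)
print(brier_score([0.5, 0.5], [1, 0]))          # 0.25
```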

Smaller Changes
Verification:
- Added scores for dew point, calculated from RH and T (TEMP).
- RH observations above 300 hPa are used only from Vaisala sondes (previously no observations were used) (TEMP).
- More concise formulation of score aggregation, now consistent across observation & forecast types (groupingsets function coming with newer data.table versions).
- Observation and forecast means included in scores (SYNOP & TEMP).
- Partial parallelization of station-based verification (less run time with the same memory requirements).
Visualization:
- Station ID shown with station-based scores.
- Speed improvements for the station-based scores app.
- Summary plots (score cards) allow modifying the shown variables and sub-domains.
- Summary plots allow adjusting the score range (to increase visibility of small or very large effects).
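The slides do not state which dew-point formula the package uses; a common Magnus-type approximation over water could be sketched as (constants and form are an assumption, not necessarily what is implemented):

```python
import math

def dewpoint(temp_c, rh_pct):
    """Dew point (deg C) from temperature (deg C) and relative humidity (%)
    via a Magnus-type approximation (a = 17.62, b = 243.12 deg C)."""
    a, b = 17.62, 243.12
    gamma = math.log(rh_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

print(round(dewpoint(20.0, 100.0), 2))  # 20.0 (saturated air: Td == T)
print(dewpoint(20.0, 50.0) < 20.0)      # True (drier air: Td below T)
```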

Plans
Rfdbk Verification:
- Conditional verification for TEMP.
- Unify probabilistic and ensemble verification.
- More flexibility concerning bias correction (allow using/discarding the correction).
- Significance test for categorical scores (no good idea yet).
- Allow single-member verification.
- Maybe separate the verification functionality from Rfdbk into an extra R package.
Spatial Verification: see other presentation.

Thank you!