Bias Adjusted Precipitation Scores. Fedor Mesinger, NOAA/Environmental Modeling Center and Earth System Science Interdisciplinary Center (ESSIC), Univ. Maryland, College Park, MD


Bias Adjusted Precipitation Scores. Fedor Mesinger, NOAA/Environmental Modeling Center and Earth System Science Interdisciplinary Center (ESSIC), Univ. Maryland, College Park, MD. VX-Intercompare Meeting, Boulder, 20 February 2007.

Most popular "traditional statistics": ETS, bias. Problem: what does the ETS tell us?

“The higher the value, the better the model skill is for the particular threshold” (a recent MWR paper)

Example: three models, ETS, bias, 12 months, "Western Nest". Is the green model losing to the red one because of a bias penalty?

What can one do?

BIAS NORMALIZED PRECIPITATION SCORES. Fedor Mesinger (1) and Keith Brill (2). (1) NCEP/EMC and UCAR, Camp Springs, MD; (2) NCEP/HPC, Camp Springs, MD. J th Prob. Stat. Atmos. Sci.; 20th WAF/16th NWP (Seattle, AMS, Jan. '04).

Two methods of adjustment for bias ("normalized" was not the best word choice):
1. dHdF method: assume the incremental change in hits per incremental change in forecast area (bias) is proportional to the "unhit" area, O − H. Objective: obtain the ETS adjusted to unit bias, to show the model's accuracy in placing precipitation. (The idea of adjusting to unit bias to arrive at placement accuracy: Shuman 1980, NOAA/NWS Office Note.)
2. Odds Ratio method: a different objective.

Forecast, Hits, and Observed (F, H, O): areas, or numbers of model grid boxes.
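In these terms the traditional scores can be written down directly. A minimal sketch (the total number of grid boxes T, needed for the chance-hits term of the ETS, is an added assumption here, as are the illustrative numbers):

```python
def bias_and_ets(F, H, O, T):
    """Frequency bias and equitable threat score from forecast area F,
    hits H, observed area O, and total number of grid boxes T."""
    bias = F / O
    chance_hits = F * O / T  # hits expected by chance
    ets = (H - chance_hits) / (F + O - H - chance_hits)
    return bias, ets

# Example: 150 forecast boxes, 100 observed, 60 hits, on a 10,000-box grid
b, e = bias_and_ets(150, 60, 100, 10_000)
print(round(b, 2), round(e, 3))  # bias 1.5, ETS about 0.31
```

With a bias of 1.5, part of this ETS may come simply from the overforecast area, which is the motivation for the adjustment that follows.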

dHdF method, assumption: dH/dF = b(O − H). This can be solved; a function H(F) is obtained that satisfies three requirements:

1. The number of hits H → 0 as F → 0;
2. H(F) satisfies the known value of H for the model's F, the pair denoted F_b, H_b;
3. H(F) → O as F increases.
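Under the assumption dH/dF = b(O − H) with H(0) = 0, the solution is H(F) = O(1 − e^(−bF)), and b follows from forcing the curve through the model's point (F_b, H_b). A sketch (the numbers are illustrative, not from the talk):

```python
import math

def dhdf_adjusted_hits(F_b, H_b, O, F):
    """dHdF method: solve dH/dF = b*(O - H) with H(0) = 0, giving
    H(F) = O*(1 - exp(-b*F)); b is fixed so that the curve passes
    through the model's actual point (F_b, H_b)."""
    b = -math.log(1.0 - H_b / O) / F_b
    return O * (1.0 - math.exp(-b * F))

# Adjusting to unit bias means evaluating H at F = O:
O, F_b, H_b = 100.0, 150.0, 60.0
H_unit = dhdf_adjusted_hits(F_b, H_b, O, O)
print(round(H_unit, 2))  # hits the model would be credited with at bias 1
```

The adjusted hit count H_unit, together with F = O, then goes into the usual ETS formula to give the bias-adjusted equitable threat score.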

[Figure: bias-adjusted equitable threat scores, Western Nest: Eta, GFS, NMM]

A downside: if H_b is close to F_b, or to O, it can happen that dH/dF > 1 for F → 0. Physically unrealistic! Hence a reasonableness requirement:

"dHdM" method: assume that as F is increased by dF, the ratio of the infinitesimal increase in hits, dH, to that in false alarms, dM = dF − dH, is proportional to the yet-unhit area: dH/dM = b(O − H).

One obtains H(F) = O − W(bO e^(b(O − F)))/b, where W (LambertW, or ProductLog in Mathematica) is the inverse function of w e^w, and b is fixed by the known point (F_b, H_b).

H(F) now satisfies the additional requirement: dH/dF never exceeds 1.
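The dHdM solution can be sketched the same way; since Lambert W is not in the Python standard library, a small Newton iteration stands in for it here (illustrative numbers again, not from the talk):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function (inverse of w*e^w)
    for x > 0, computed by Newton iteration."""
    w = math.log(1.0 + x)  # reasonable starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def dhdm_adjusted_hits(F_b, H_b, O, F):
    """dHdM method: solving dH/dM = b*(O - H), dM = dF - dH, with
    H(0) = 0 gives H(F) = O - W(b*O*exp(b*(O - F)))/b; b is fixed so
    that the curve passes through the model's point (F_b, H_b).
    Requires F_b > H_b (some false alarms exist)."""
    b = math.log(O / (O - H_b)) / (F_b - H_b)
    return O - lambert_w(b * O * math.exp(b * (O - F))) / b

# Adjust the hits to unit bias by evaluating H at F = O:
O, F_b, H_b = 100.0, 150.0, 60.0
print(round(dhdm_adjusted_hits(F_b, H_b, O, O), 2))
```

For the same illustrative point, the dHdM curve credits somewhat fewer hits at unit bias than dHdF does, reflecting the dH/dF ≤ 1 constraint.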

[Figure: H(F) for the dHdF method, with the asymptote H = O, the diagonal H = F, and the model's point (F_b, H_b)]

[Figure: H(F) for the dHdM method, with the asymptote H = O, the diagonal H = F, and the model's point (F_b, H_b)]

Results for the two "focus cases", dHdM method (Acknowledgements: John Halley Gotway, data; Dušan Jović, code and plots)

[Figure: 5/13 case, dHdM-adjusted scores: wrf2caps, wrf4ncar, wrf4ncep]

[Figure: 6/01 case, dHdM-adjusted scores: wrf2caps, wrf4ncar, wrf4ncep]

The impact, in relative terms, is small for the two cases, because the biases of the three models are so similar!

One more case, for good measure:

[Figure: 5/25 case, dHdM-adjusted scores: wrf2caps, wrf4ncar, wrf4ncep]

Comment: Scores would generally have been higher had the verification been done on grid squares larger than ~4 km. This would have amounted to a poor person's version of "fuzzy" methods!