Verifying Satellite Precipitation Estimates for Weather and Hydrological Applications Beth Ebert Bureau of Meteorology Research Centre Melbourne, Australia.


Verifying Satellite Precipitation Estimates for Weather and Hydrological Applications Beth Ebert Bureau of Meteorology Research Centre Melbourne, Australia 1st IPWG Workshop, September 2002, Madrid

val·i·date tr.v. 1. To declare or make legally valid. 2. To mark with an indication of official sanction. 3. To substantiate; verify.
ver·i·fy tr.v. 1. To prove the truth of by the presentation of evidence or testimony; substantiate. 2. To determine or test the truth or accuracy of, as by comparison, investigation, or reference: "Findings are not accepted by scientists unless they can be verified" (Norman L. Munn).
The American Heritage Dictionary of the English Language. William Morris, editor. Houghton Mifflin, Boston, 1969.

Satellite precipitation estimates: what do we especially want to get right?
Climatologists: mean bias
NWP data assimilation (physical initialization): rain location and type
Hydrologists: rain volume
Forecasters and emergency managers: rain location and maximum intensity
Everyone needs error estimates!

Short-term precipitation estimates High spatial and temporal resolution desirable Dynamic range required Motion may be important for nowcasts Can live with some bias in the estimates if it's not too great Verification data need not be quite as accurate as for climate verification Land-based rainfall generally of greater interest than ocean-based

Some truths about "truth" data No existing measurement system adequately captures the high spatial and temporal variability of rainfall. Errors in validation data artificially inflate errors in satellite precipitation estimates

Rain gauge observations
Advantages: true rain measurements.
Disadvantages: may be unrepresentative of the areal value; verification results are biased toward regions with high gauge density; most observations are made only once daily.

Radar data
Advantages: excellent spatial and temporal resolution (e.g., TRMM PR).
Disadvantages: beam filling, attenuation, overshoot, clutter, etc.; limited spatial extent.

Rain gauge analyses
Advantages: grid-scale quantities; overcomes the uneven distribution of rain gauges.
Disadvantages: smoothes the actual rainfall values.

Stream flow measurements
Advantages: integrates rainfall over a catchment; many accurate measurements available; hydrologists want it.
Disadvantages: depends on soil conditions and the hydrological model; time delay between rain and outflow; blurs the spatial distribution.
[Figure: hydrograph of estimated vs. observed discharge (m3/hr) over time]

Verification strategy for satellite precipitation estimates Use (gauge-corrected) radar data for local instantaneous or very short-term estimates Use gauge or radar-gauge analysis for larger spatial and/or temporal estimates

Focus on methods, not results What scores and methods can we use to verify precipitation estimates? What do they tell us about the quality of precipitation estimates? What are some of the advantages and disadvantages of these methods? Will focus on spatial verification

Does the satellite estimate look right? Is the rain in the correct place? Does it have the correct mean value? Does it have the correct maximum value? Does it have the correct size? Does it have the correct shape? Does it have the correct spatial variability?

Spatial verification methods
"Standard" methods: visual ("eyeball") verification, continuous statistics, categorical statistics, joint distributions.
"Scientific" or "diagnostic" methods: scale decomposition methods, entity-based methods.

Step 1: Visual ("eyeball") verification
Visually compare maps of satellite estimates and observations.
Advantage: "A picture tells a thousand words..."
Disadvantages: labor intensive, not quantitative, subjective.
Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability
(Rozumalski, 2000)

Continuous verification statistics Measure the correspondence between the values of the estimates and observations Examples: mean error (bias) mean absolute error root mean squared error skill score linear error in probability space (LEPS) correlation coefficient Advantages: Simple, familiar Disadvantage: Not very revealing as to what's going wrong in the forecast

Mean error (bias) Measures: Average difference between forecast and observed values
Mean absolute error Measures: Average magnitude of the forecast error
Root mean square error Measures: Error magnitude, with large errors having a greater impact than in the MAE
Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability
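As a sketch, the three scores above can be computed for paired estimate/observation fields as follows (`continuous_scores` is an illustrative name, not from the presentation):

```python
import numpy as np

def continuous_scores(fcst, obs):
    """Mean error (bias), mean absolute error and RMSE for paired fields."""
    fcst = np.asarray(fcst, dtype=float)
    obs = np.asarray(obs, dtype=float)
    err = fcst - obs
    return {
        "mean_error": err.mean(),           # average over/under-estimation
        "mae": np.abs(err).mean(),          # average error magnitude
        "rmse": np.sqrt((err ** 2).mean()), # penalises large errors more
    }

# Toy example: estimated vs. observed 24-h rainfall (mm) at four grid boxes
scores = continuous_scores([0, 5, 10, 20], [0, 4, 12, 16])
```

Note that the RMSE is always at least as large as the MAE, with the gap growing as the error distribution becomes more skewed by a few large misses.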

Time series of error statistics 24-hr rainfall from NRL Experimental Geostationary algorithm validated against Australian operational daily rain gauge analysis 0.25° grid boxes, tropics only

Linear error in probability space (LEPS) Measures: Probability error; does not penalise going out on a limb when it is justified. Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability [Figure: F_i and O_i mapped through the cumulative probability of observations (CDF_o), illustrating the value error]
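One common form of LEPS measures the mean absolute difference between forecast and observed positions within the observed climatological CDF; a sketch using an empirical CDF (definitions vary in the literature, and `leps` is an illustrative name):

```python
import numpy as np

def leps(fcst, obs):
    """Mean linear error in probability space, using the empirical
    CDF of the observations as the probability transform."""
    fcst = np.asarray(fcst, dtype=float)
    obs = np.asarray(obs, dtype=float)
    obs_sorted = np.sort(obs)

    def cdf(x):
        # Fraction of observations less than or equal to x
        return np.searchsorted(obs_sorted, x, side="right") / obs_sorted.size

    return float(np.mean([abs(cdf(f) - cdf(o)) for f, o in zip(fcst, obs)]))
```

Because errors are measured in probability space, a 5 mm miss in the dry tail of the climatology costs more than a 5 mm miss among common heavy rates, which is why "going out on a limb" is not penalised when it is justified.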

Correlation coefficient Measures: Correspondence between estimated spatial distribution and observed spatial distribution, independent of mean bias Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Danger...

Rozumalski, 2000 AutoEstimator validated against Stage III 8x8 km grid boxes

Skill score Measures: Improvement over a reference estimate, SS = (score - score_ref) / (score_perfect - score_ref). When MSE is the score used in this expression, the resulting statistic is called the reduction of variance. The reference estimate is usually (a) random chance, (b) climatology, or (c) persistence, but it could be another estimation algorithm. Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability
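With MSE as the accuracy measure the skill score reduces to 1 - MSE/MSE_ref (the perfect score being zero); a minimal sketch with illustrative names:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two paired fields."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return ((a - b) ** 2).mean()

def skill_score(fcst, ref, obs):
    """Fractional improvement of the estimate over a reference estimate.
    With MSE as the accuracy measure this is the reduction of variance."""
    return 1.0 - mse(fcst, obs) / mse(ref, obs)
```

A value of 1 means a perfect estimate, 0 means no improvement over the reference, and negative values mean the estimate is worse than the reference.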

Cross-validation: useful when observations are included in the estimates. Y_i* is the estimate at point i computed with O_i excluded from the analysis; the Y_i* are then verified against the withheld O_i. Measures: Expected accuracy at the scale of the observations. The score is usually bias, MAE, RMS, correlation, etc. Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability

Categorical statistics Measure the correspondence between estimated and observed occurrence of events Examples: bias score probability of detection false alarm ratio threat score equitable threat score odds ratio Hanssen and Kuipers score Heidke skill score Advantages: Simple, familiar Disadvantage: Not very revealing

Categorical statistics

                Estimated yes     Estimated no
Observed yes    hits              misses
Observed no     false alarms      correct negatives

Bias score Measures: Ratio of estimated area (frequency) to observed area (frequency) Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability

Probability of Detection Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability False Alarm Ratio Threat score (critical success index) Equitable threat score Odds ratio

Hanssen and Kuipers discriminant (true skill statistic) Measures: Ability of the estimation method to separate the "yes" cases from the "no" cases. Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Heidke skill score Measures: Fraction of correct yes/no detections after eliminating those which would be correct due purely to random chance
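All of the 2x2 scores listed above follow directly from the hits, misses, false alarms and correct negatives counts of the contingency table; a sketch (`categorical_scores` is an illustrative name):

```python
def categorical_scores(hits, misses, false_alarms, correct_negs):
    """Common 2x2 categorical scores from a contingency table."""
    h, m, f, c = (float(x) for x in (hits, misses, false_alarms, correct_negs))
    n = h + m + f + c
    hits_random = (h + m) * (h + f) / n           # chance-expected hits
    return {
        "bias_score": (h + f) / (h + m),          # estimated / observed frequency
        "pod": h / (h + m),                       # probability of detection
        "far": f / (h + f),                       # false alarm ratio
        "threat_score": h / (h + m + f),          # critical success index
        "ets": (h - hits_random) / (h + m + f - hits_random),
        "odds_ratio": (h * c) / (m * f),
        "hanssen_kuipers": h / (h + m) - f / (f + c),   # POD - POFD
        "heidke": 2 * (h * c - m * f)
                  / ((h + m) * (m + c) + (h + f) * (f + c)),
    }
```

For example, with 20 hits, 10 misses, 5 false alarms and 65 correct negatives, POD is 2/3 and FAR is 0.2, while the chance-corrected ETS drops below the raw threat score.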

Categorical verification of daily satellite precipitation estimates from GPCP 1DD algorithm during summer over Australia Rain threshold varies from light to heavy North (tropics) Southeast (mid-latitudes)

Real-time verification example 24-hr rainfall from NRL Experimental Geostationary algorithm

Real-time verification example 24-hr rainfall from NRL Experimental blended microwave algorithm

Distributions-oriented view Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Advantage: Much more complete picture of forecast performance Disadvantage: Lots of numbers

[Table: joint distribution of predicted vs. observed 24-hr rain rates (mm/d). 24-hr rainfall from the NRL Experimental Geostationary algorithm validated against the Australian operational daily rain gauge analysis on 21 Jan 2002]

Scatterplot Shows: Joint distribution of estimated and observed values NRL geo R=0.63

Probability distribution function Shows: Marginal distributions of estimated and observed values [Figure: PDFs of the NRL geo estimates and the gauge analysis]

Heidke skill score (K distinct categories) Measures: Skill of the estimation method in predicting the correct category, relative to that of random chance Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability

Scale decomposition methods Measure the correspondence between the estimates and observations at different spatial scales Examples: 2D Fourier decomposition wavelet decomposition upscaling Advantages: Scales on which largest errors occur can be isolated, can filter noisy data Disadvantages: Less intuitive, can be mathematically tricky

Discrete wavelet transforms Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Concept: Decompose fields into scales representing different detail levels. Test whether the forecast resembles the observations at each scale. Measures, for each scale: % of total MSE linear correlation RMSE categorical verification scores others...

Casati and Stephenson (2002) technique
Step 1: "Recalibrate" the forecast using histogram matching: error_total = error_bias + error_recalibrated
Step 2: Threshold the observations and recalibrated forecast to get binary images

Step 3: Subtract to get the error (difference) image
Step 4: Discrete wavelet decomposition of the error into scales of resolution 2^n grid lengths

Step 5: Compute verification statistics (e.g., the odds ratio) on the error field at discrete scales. Repeat for different rain thresholds.
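Steps 2-5 can be sketched as follows; this is a simplified illustration that omits the recalibration step, assumes a dyadic 2^n x 2^n grid, and uses a Haar (block-mean) decomposition whose per-scale mean squared values sum to the total MSE of the binary error image (`casati_error_components` is an illustrative name):

```python
import numpy as np

def casati_error_components(fcst, obs, threshold):
    """Threshold both fields to binary images, form the error image,
    and split it into Haar block-mean components by scale.  The mean
    squared value of each component is that scale's MSE contribution."""
    z = (np.asarray(fcst) >= threshold).astype(float) \
        - (np.asarray(obs) >= threshold).astype(float)   # error image: -1, 0, +1
    components, field = [], z
    while field.shape[0] > 1:
        n = field.shape[0] // 2
        parent = field.reshape(n, 2, n, 2).mean(axis=(1, 3))  # 2x2 block means
        # Detail at this scale = field minus its block means, upsampled back
        components.append(field - np.kron(parent, np.ones((2, 2))))
        field = parent
    components.append(field)                  # domain-mean (bias) component
    return [float((c ** 2).mean()) for c in components]   # MSE per scale
```

Because the components are orthogonal, the returned contributions sum to the total MSE of the binary error image, which is what lets the largest errors be attributed to a scale.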

Multiscale statistical organization Zepeda-Arce et al. (J. Geophys. Res., 2000) Concept: Observed precipitation patterns have multi-scale spatial and spatio-temporal organization. Test whether the satellite estimate reproduces this organization. Method: Start with the fine scale, average to coarser scales. Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Measures: TS vs. scale, depth vs. area, spatial scaling parameter, dynamic scaling exponent
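The upscaling step, averaging both fields to successively coarser grids and recomputing a categorical score at each scale, can be sketched as follows (a sketch assuming dyadic 2^n x 2^n fields; `threat_score_by_scale` is an illustrative name):

```python
import numpy as np

def threat_score_by_scale(fcst, obs, threshold, n_scales=3):
    """Threat score recomputed at successively coarser (2x) grid scales."""
    f, o = np.asarray(fcst, dtype=float), np.asarray(obs, dtype=float)
    scores = []
    for _ in range(n_scales):
        fy, oy = f >= threshold, o >= threshold
        hits = np.logical_and(fy, oy).sum()
        fa = np.logical_and(fy, ~oy).sum()
        miss = np.logical_and(~fy, oy).sum()
        denom = hits + miss + fa
        scores.append(float(hits / denom) if denom else np.nan)
        if f.shape[0] < 2:
            break
        n = f.shape[0] // 2
        f = f.reshape(n, 2, n, 2).mean(axis=(1, 3))  # average 2x2 blocks
        o = o.reshape(n, 2, n, 2).mean(axis=(1, 3))
    return scores
```

A small displacement error that kills the threat score at fine resolution typically disappears as the averaging scale grows past the displacement distance, which is the behaviour the scale-dependent TS curve is designed to reveal.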

[Figures: threat score vs. scale, depth vs. area, and standard deviation vs. scale, for observed and forecast fields]

Upscaling verification of IR power-law rain rate, 16 September 2002, Melbourne [Figures: IR estimate vs. radar, mm hr^-1]

GMSRA validated against rain gauge analyses at different spatial scales (Ba and Gruber, 2001)

Entity-based methods Use pattern matching to associate forecast and observed entities ("blobs"). Verify the properties of the entities. Examples: CRA (contiguous rain area) verification Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Advantages: Intuitive, quantifies "eyeball" verification Disadvantage: May fail if forecast does not sufficiently resemble observations

CRA (entity) verification Ebert and McBride (J. Hydrology, Dec 2000) Concept: Verify the properties of the forecast (estimated) entities against observed entities Method: Pattern matching to determine location error, error decomposition, event verification Verifies this attribute? Location Size Shape Mean value Maximum value Spatial variability Measures: location error size error error in mean, max values pattern error

Determine the location error using pattern matching: Horizontally translate the estimated blob until the total squared error between the estimate and the observations is minimized in the shaded region. Other possibilities: maximum correlation, maximum overlap The displacement is the vector difference between the original and final locations of the estimate. Observed Estimated
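The translation search described above can be sketched as a brute-force scan over candidate shifts (an illustrative sketch: `np.roll` wraps at the domain edges, whereas the actual CRA method restricts the comparison to the overlap region, and maximum correlation or maximum overlap could replace the squared-error criterion):

```python
import numpy as np

def best_shift(est, obs, max_shift=5):
    """Find the (row, col) translation of the estimate that minimises
    the total squared error against the observations."""
    est, obs = np.asarray(est, dtype=float), np.asarray(obs, dtype=float)
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(est, dy, axis=0), dx, axis=1)
            err = ((shifted - obs) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best
```

The returned (dy, dx) is the displacement vector: the difference between the original and best-matching locations of the estimated rain entity.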

CRA error decomposition
The total mean squared error (MSE) can be written as:
MSE_total = MSE_displacement + MSE_volume + MSE_pattern
The difference between the mean square error before and after translation is the contribution to total error due to displacement:
MSE_displacement = MSE_total - MSE_shifted
The error component due to volume represents the bias in mean intensity:
MSE_volume = (F_mean - X_mean)^2
where F_mean and X_mean are the CRA mean estimated and observed values after the shift.
The pattern error accounts for differences in the fine structure of the estimated and observed fields:
MSE_pattern = MSE_shifted - MSE_volume
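The decomposition above translates directly into code (a sketch with illustrative names; `shifted_est` is assumed to be the estimate after the best-match translation has been applied):

```python
import numpy as np

def cra_decomposition(est, obs, shifted_est):
    """Split total MSE into displacement, volume and pattern components."""
    est, obs, shifted = (np.asarray(a, dtype=float)
                         for a in (est, obs, shifted_est))
    mse_total = ((est - obs) ** 2).mean()
    mse_shifted = ((shifted - obs) ** 2).mean()
    mse_displacement = mse_total - mse_shifted       # error removed by the shift
    mse_volume = (shifted.mean() - obs.mean()) ** 2  # bias in mean intensity
    mse_pattern = mse_shifted - mse_volume           # fine-structure differences
    return {"total": mse_total, "displacement": mse_displacement,
            "volume": mse_volume, "pattern": mse_pattern}
```

By construction the three components sum to the total MSE, so each can be quoted as a fraction of the total when diagnosing whether location, amount or structure dominates the error.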

24-hr rainfall from NRL Experimental Geostationary algorithm validated against Australian operational daily rain gauge analysis

Diagnosis of systematic errors Displacement (km) NRL Experimental Geostationary algorithm 289 CRAs April March 2002

Diagnosis of systematic errors Estimate Analyzed NRL Experimental Geostationary algorithm 289 CRAs April March 2002

Tropical Rainfall Potential (TRaP) verification? TRaP 24 h rain from _16

Which methods verify which attributes?

Conclusions The most effective diagnostic verification method is still visual ("eyeball") verification. Categorical statistics based on yes/no discrimination are probably the least informative of all the verification methods, although they remain very useful for quantitative algorithm intercomparison. The newer diagnostic verification methods (scale decomposition, entity-based) give a more complete and informative diagnosis of algorithm performance. Methods are still needed to deal with observational uncertainty.
