Operational Use of ECMWF products at the Met Office: Current practice, Verification and Ideas for the future Tim Hewson 17th June 2005 © Crown copyright.

Contents
Describe use of ECMWF output, in the Operations Centre at the Met Office (which provides guidance to the outfield), in 3 forecast categories:
1. 'Short range' (up to 36 hours)
2. 'Medium range' (36 hours to 6 days)
3. 'Trend period' (6 to 10 days)
4. Some verification ideas (severe weather)
© Crown copyright

1. Short Range
- Relatively recent changes in the ECMWF operational suite (2 operational runs per day, much reduced delay in forecast arrival time, etc.) render ECMWF operational runs much more useful for short-range forecasting than hitherto
- The data time (DT) difference cf. Met Office operational runs now averages 9 hours (previously it was 21 hours)
- 9 hours is comparable to the short-period lead time gain of ECMWF over the Met Office (10 hours), implying high potential for adding value to deterministic forecasts using the consensus ("poor man's ensemble") approach (a minimal sketch follows this slide)
- Though not necessarily part of the 'ECMWF remit', if ECMWF model output can potentially improve our short-range forecasts we will use it
- Some use is also made of ensemble data (though much less than in the medium range, due to reduced reliability of severe weather probabilities)
© Crown copyright
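To illustrate the consensus ("poor man's ensemble") approach mentioned above, here is a minimal Python sketch that blends gridded MSLP forecasts from several deterministic models; the model names, grid and weights are hypothetical and not taken from the presentation.

```python
# Minimal sketch of a "poor man's ensemble": a weighted blend of deterministic
# MSLP forecasts from several centres on a common grid. Weights are illustrative
# placeholders, e.g. reflecting each model's recent verification performance.
import numpy as np

def consensus_mslp(forecasts, weights):
    """Blend gridded MSLP forecasts (hPa).

    forecasts: dict of model name -> 2-D MSLP field on a common grid
    weights:   dict of model name -> non-negative weight
    """
    total = sum(weights[name] for name in forecasts)
    blended = sum(weights[name] * field for name, field in forecasts.items())
    return blended / total

# Hypothetical usage with three models on the same 2-D grid:
rng = np.random.default_rng(0)
grid_shape = (181, 360)
fields = {name: 1012.0 + 8.0 * rng.standard_normal(grid_shape)
          for name in ("ECMWF_oper", "UKMO_global", "NCEP_GFS")}
wts = {"ECMWF_oper": 0.5, "UKMO_global": 0.3, "NCEP_GFS": 0.2}
print(consensus_mslp(fields, wts).shape)  # (181, 360)
```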

Operational Model Errors
[Figure: NH RMS MSLP error vs lead time (days 1 to 10); rank, best first: EC, UK, FR. Lead time gain of ECMWF over the Met Office = 10 hours. Short range / medium range / trend period bands marked.]
© Crown copyright

Short Range Forecast Example
[Slide: charts labelled RAW OBS and RAW MOD, showing cloud cover, ppn rate, ppn type and MSLP.]
© Crown copyright

Verification of Modifications
"What was it, specifically, about the good forecast that made it better?"
Helps highlight common model errors
© Crown copyright

Scope for Improvement
- Forecasters are usually able to improve upon raw (Met Office) model output, using different models, knowledge of systematic errors, and comparison with current trends
- The degree of improvement could potentially be increased by making more use of the high-quality ECMWF operational run, which at present is under-utilised
WISH LIST!
- 3-hourly data, T+0 to T+48
- Instantaneous total ppn rates, plus cloud cover and MSLP (same format?!)
- Separate plots showing dynamic/convective rain and snow components
- 10 m mean wind and likely gust strength
- Sub-areas – parts of Europe?
- Timely appearance on the ECMWF web site is crucial (probably the most expedient route for making this data available)
© Crown copyright

Severe Weather
- Table records 'Significant Errors', from the perspective of hazardous weather, in Raw (red) and Modified (blue) forecasts (2.5 years of data)
- A key forecasting target is a reduction in the number of blue boxes
- This verification has highlighted warm-air summer convection as one area where the forecaster is able to add little value, and where there can be major errors in the model
- Potentially, ECMWF oper runs could assist in this area via the wish-list data (convective characteristics?)
- Cold-air convection is also a problem area, though one that the forecaster often addresses reasonably successfully; FGEW verification has highlighted this as a significant weakness in ECMWF output (snow in NE'lies, Feb/Mar 2005)
- Low pressure centres and surface wind gusts in strong-gradient regions are another significant problem
© Crown copyright

2. Medium Range
- Use of ECMWF data (operational and ensemble runs) is more common in the medium range
- Timeliness is a key issue for practical applications; recently the ECMWF web site has been used more often, because of timely updating
- Issued forecast guidance has both deterministic and probabilistic components
© Crown copyright

Deterministic Component
Underlying strategy: first ascertain the key meteorological feature(s), then incorporate different model and ensemble solutions accordingly, weighting as appropriate (on screen or on paper) to arrive at a consensus solution
© Crown copyright

Operational Model Errors
[Figure repeated: NH RMS MSLP error vs lead time (days 1 to 10), with short range / medium range / trend period bands marked.]
© Crown copyright

Simple Example to Illustrate
The cold front is the main feature. Consensus would move the GM (Met Office Global Model) front south, possibly hinting at wave development, as the GM tends to under-do waves, and as NCEP shows one.
© Crown copyright

Modifying a Selected Model Run
- Use the Field Modification tool (devised by Eddy Carroll) AFTER a model run has finished
- Allows quick, interactive, dynamically consistent changes to be made to a set of 3-D fields from one model run, e.g.: move a low or front; deepen/fill a low; introduce a low or wave…
- Relies on modification vectors applied to the PV distribution, followed by PV inversion with changed boundary conditions
- Equivalent translation vectors are applied to ppn and RH; a simple boundary layer model is used for surface winds
- Temporal consistency is achieved via time-linking (with a 'decay' parameter) – a rough sketch of this idea follows this slide
- Precipitation rate & type, winds etc. can also be adjusted directly and time-linked
© Crown copyright
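The sketch below is only a loose illustration of the translation and time-linking ideas above: it shifts a 2-D field and blends the modification back towards the original as lead time grows. The real tool works via modification vectors on the PV distribution followed by PV inversion; the direct field shift, the exponential decay and the decay constant used here are simplifying assumptions.

```python
# Illustrative sketch: translate a 2-D field (e.g. to move a low) and let the
# modification decay with lead time, loosely mimicking "time-linking with a
# decay parameter". This is NOT the PV-based method described on the slide.
import numpy as np
from scipy.ndimage import shift

def time_linked_shift(field, dx, dy, lead_hours, decay_hours=24.0):
    """Shift 'field' by (dy, dx) grid points; blend the shifted field back
    towards the original as lead time increases."""
    moved = shift(field, (dy, dx), order=1, mode="nearest")
    weight = np.exp(-lead_hours / decay_hours)  # 1 at T+0, decaying thereafter
    return weight * moved + (1.0 - weight) * field

# Hypothetical usage: a synthetic low, moved 3 points east and 1 point north,
# with the modification evaluated at T+48h.
y, x = np.mgrid[0:100, 0:100]
mslp = 1012.0 - 20.0 * np.exp(-((x - 50) ** 2 + (y - 50) ** 2) / 200.0)
modified = time_linked_shift(mslp, dx=3, dy=-1, lead_hours=48)
print(modified.min(), mslp.min())
```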

Field Modification Example – moving a low with slight deepening © Crown copyright

Resulting fields (takes ~2 seconds) © Crown copyright

Initial Fields © Crown copyright

500mb ht and 1000-500mb Thickness - before © Crown copyright

500mb ht and 1000-500mb Thickness - after © Crown copyright

Objective Verification © Crown copyright

Subjective Verification
- The Met Office Global model, ECMWF oper and issued Modified forecasts are compared, primarily for MSLP, but also for fronts and thickness, over NW Europe, using 12Z data times, for T+48, 72, 96 and 120h
- The overall ranking is (best first): Modified >> ECMWF >> UKMO
- This is despite the objectively calculated lead time gain of ECMWF over UKMO (10-16 hours) being generally greater than that of Modified over UKMO (0-12 hours)
- Objective schemes can penalise good forecasts of cyclogenesis (especially RMS errors)
- A new, 'more discriminating' subjective verification scheme will be introduced this year
© Crown copyright

Forecaster Impact / Modification Time
[Slide: charts of forecaster impact and modification time.]
© Crown copyright

Probabilistic Component
- Consists of a headline summary of probabilities of severe weather, in a number of categories (introduced relatively recently)
- Minima and maxima for sites are issued with upper and lower bounds; probabilities are given for rainfall total exceedance
- The starting point is calibrated ECMWF ensemble output (a simple sketch follows this slide)
- 'Alternative scenarios' can be issued to highlight uncertainties
- Probabilistic components appear to be used far less by customers than the deterministic component. Somewhat disappointing – education is required, but may take a long time
© Crown copyright
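As a rough illustration of how a rainfall exceedance probability can be derived from ensemble output, the sketch below takes the fraction of members exceeding a threshold; the calibration step mentioned above is omitted, and the 51-member totals are synthetic.

```python
# Minimal sketch: probability of a 24 h rainfall total exceeding a threshold,
# taken as the fraction of ensemble members above it (no calibration applied).
import numpy as np

def exceedance_probability(member_totals, threshold_mm):
    """member_totals: 1-D array of 24 h rainfall totals (mm), one per member."""
    return float(np.mean(np.asarray(member_totals) > threshold_mm))

# Hypothetical 51-member ensemble of rainfall totals at a site:
rng = np.random.default_rng(1)
totals = rng.gamma(shape=2.0, scale=5.0, size=51)
print(f"P(total > 15 mm) = {exceedance_probability(totals, 15.0):.2f}")
```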

3. Trend Forecast
- Consists of an issued worded forecast, with expected temperature anomalies and considerable discussion of uncertainties, highlighting possible severe weather
- Based primarily on ECMWF data, but with some input from other models and ensembles, notably from NCEP
- The customer base for this forecast has been dwindling (perhaps they watch Swedish TV)
© Crown copyright

Trend Period – Subjective Verification
- Based primarily on MSLP, over NW Europe, ECMWF only
- The ensemble mean is rated a little more useful than the oper run, but not by much
- Only 1 in 5 forecasts for days 6 and 7 were considered useful, and 1 in 10 forecasts for days 8, 9 and 10
- Scores have not changed a great deal over the years
© Crown copyright

Extras for 'Wish List'
- Meteograms to include overlapping 24-hour rainfall totals (but still in 6-hour blocks)
- Total cloud cover – is this 'altitude weighted', or is 8 oktas of cirrus considered 'cloudy'? Weighting would be preferable
- Postage stamps showing estimated surface gusts, with colour shading for high values
- Cluster ensemble means for MSLP and thickness, annotated with percentages of members
© Crown copyright

4. Verification – some ideas
- Conceptually there should be a 'deterministic limit' for predicting a pre-defined meteorological event
- Simply defined, this could be the point in lead time beyond which forecasts concerning that event are more likely, on average, to be wrong than right
- Defined in this way, it provides some guidance on when to shift the emphasis, in forecasts for particular events, towards the probabilistic
- For rare events at least, correct null forecasts (i.e. the majority) can be ignored as not relevant
© Crown copyright

Verification ideas (contd)
The 'deterministic limit' for the event in question is then simply the lead time at which, over a suitably large forecast sample, hits equal the sum of misses and false alarms (or CSI = 0.5).
[Schematic: a 2x2 contingency table of forecast (Fc) vs observed (Ob) outcomes, with entries a (hits), b (false alarms), c (misses) and d (correct nulls), alongside a plot of the number of hits and the number of misses + false alarms against lead time; the 'deterministic limit' is the lead time where the curves cross, i.e. where a/(b+c) = 1.]
© Crown copyright
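A small worked sketch of this definition: given contingency counts at each lead time, the deterministic limit is taken as the longest lead time at which hits still equal or exceed misses plus false alarms (CSI >= 0.5). The counts below are invented purely for illustration.

```python
# Locate the 'deterministic limit': longest lead time with a / (b + c) >= 1,
# equivalently CSI = a / (a + b + c) >= 0.5 (a = hits, b = false alarms,
# c = misses). Counts are hypothetical.
def csi(a, b, c):
    return a / (a + b + c) if (a + b + c) else float("nan")

# lead time (hours) -> (hits, false alarms, misses)
counts = {6: (40, 5, 5), 12: (35, 10, 10), 18: (30, 15, 15), 24: (20, 20, 20)}

limit = max((t for t, (a, b, c) in counts.items() if csi(a, b, c) >= 0.5),
            default=None)
print(f"Deterministic limit ~ {limit} hours")  # -> 18 hours for these counts
```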

Event Examples (numbers are very crude estimates)
- Tornado within a 2 km radius (deterministic limit ~5 mins)
- Snow falling at a point (~5 hours)
- Rain falling at a point (~18 hours)
- Gale-force gusts at a point (~6 hours)
- Gale-force gusts within a UK county (~24 hours)
- Rainfall >15 mm in 3 hours somewhere in a UK county (~2 hours)
- Cyclonic surface pressure pattern at a point (~120 hours)
- Atmospheric front within 200 km of a point (~60 hours)
- Day with maximum above 30°C in London (~96 hours)
© Crown copyright

Benefits
- Potential to provide a meaningful measure of what to expect from, and therefore what to put into, a forecast; too many forecast elements are deterministic
- It is something that the public, other customers (and auditors!) could potentially relate to
- The equivalent, from an idealised, reliable ensemble prediction system, would be the lead time at which the average probability, for hindcasts of observed past events, fell to 50% (?) – a sketch of this follows this slide
- As always, extreme events would be more difficult to represent (though hindcasts from re-analyses are becoming increasingly tractable)
- There is also the facility to measure forecast improvements, compare systems and assess forecaster performance
© Crown copyright
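In the same spirit, the ensemble analogue suggested above could be estimated as the lead time at which the average probability assigned to observed past events drops below 50%; the probabilities in this sketch are hypothetical.

```python
# Probabilistic analogue of the deterministic limit: longest lead time at which
# the mean ensemble probability given to observed events stays at or above 0.5.
# The lead-time -> mean probability values are made up for illustration.
mean_event_prob = {24: 0.82, 48: 0.71, 72: 0.58, 96: 0.47, 120: 0.35}

limit = max((t for t, p in mean_event_prob.items() if p >= 0.5), default=None)
print(f"Probabilistic 'deterministic limit' ~ {limit} hours")  # -> 72 hours
```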

Summary
- The Met Office makes extensive use of ECMWF products for forecasts from T+48 to T+240, and is increasingly using the operational run as an input to short-range forecasts
- Verification indicates that the forecaster is adding value in many areas, in part using the poor man's ensemble approach, though some weaknesses remain
- Various enhancements to ECMWF web-based output have been suggested
- Disappointingly, the customer base for probabilistic forecasts is currently limited
- More guidance on what we can and cannot forecast deterministically is required
- A new measure of 'deterministic limit' has been tentatively proposed
© Crown copyright

References
- Carroll (1997), Meteorological Applications – field modification description
- Carroll and Hewson (2005), Weather and Forecasting (out shortly) – ops centre practice and verification
© Crown copyright

Accreditation
WAFC: World Area Forecast Centre
© Crown copyright