
1 Project Phoenix – An Update
EuMetCAL Workshop, Toulouse, August 2008
Jaymie Gadal

2 Phoenix – The Experiment
The original Project Phoenix was an experiment:
– To determine whether the human forecaster remained relevant in the forecast process
– To determine whether the forecaster retained a usable measure of skill without NWP
– To assess the forecaster’s areas of strength and weakness
– To get an idea of what the role of the forecaster should be

3 Phoenix Experiment – Methodology
– Forecasters were placed in a parallel ‘office’; access to NWP was denied
– Forecasts were generated without NWP, but using the same preparation tool, Scribe 3
– Outputs: verification results for the automatic, official, and Phoenix team forecasts were compared using a very intensive scoring system (a sketch of the comparison idea follows this list)
– Successes and failures were analyzed in real time and as trends over time
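
Below is a minimal sketch of the parallel-comparison idea: the same verification is applied to the automatic, official, and Phoenix forecast streams. The numbers and the plain mean-absolute-error stand-in are illustrative assumptions; the actual Phoenix scoring system (slide 11) is far more elaborate.

    # Hypothetical comparison of the three forecast streams against
    # verifying observations, using a simple MAE stand-in score.

    def mean_abs_error(forecasts, observations):
        """Average |forecast - observed| over matched pairs."""
        return sum(abs(f - o) for f, o in zip(forecasts, observations)) / len(observations)

    observed = [21.0, 18.5, 25.0]          # verifying observations (invented)
    streams = {
        "automatic": [19.0, 17.0, 27.0],   # raw guidance-driven output
        "official":  [20.5, 18.0, 26.0],   # forecaster-adjusted output
        "phoenix":   [21.5, 19.0, 24.0],   # no-NWP parallel-office output
    }
    for name, values in streams.items():
        print(name, round(mean_abs_error(values, observed), 2))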

4 Phoenix Experiment – Results
– The forecaster still had a significant ability to forecast better than the models in many circumstances – but which ones?
– Previous attempts to determine where the forecaster should concentrate were found to be simplistic
– The forecaster’s greatest strengths stem from analysis, diagnosis, prognosis, and situational awareness

5 A Few Particular & Interesting Results
– When forecasters were ‘right’, they tended to ‘hedge’
– There was a whole lot of tweaking going on, very little of it successful
– Junior forecasters often performed better than experienced ones – why?
– Some parameters/situations are best left alone
– Some model weaknesses were identified

6 Phoenix – Conclusions
– The forecaster needs to know, on a parameter-by-parameter basis, when to intervene
– Operational routines must stress:
  – ADP (analysis, diagnosis, prognosis) prior to consultation of the models
  – Constant situational awareness
– Real-time verification is more critical than previously thought
– The optimum human/machine mix remains unclear – there is no clear division of roles

7 Notes Regarding Severe Weather & Warning Situations
The role of the forecaster in times of extreme weather remains more traditional:
– NWP continues to fail to capture extreme events – forecaster intervention is often required
– Forecaster interpretation of NWP results is strongly required
– ADP is even more critical
– Situational awareness and rapid response remain essential

8 Role of the Forecaster
– Choosing the best model?
– Short range only?
– High-impact weather only?
– Interpretation of models?
– Consultation?
– Integration of ensemble prediction results?
– Tuning of automatic forecasting algorithms?

9 Phoenix – The Training Program
– The experiment was converted into a training simulator
– Combatting “meteorological cancer”: forecasters were employing an operational routine that effectively bypassed their strengths
– A return to ADP/SA results in improved forecaster performance

10 Steps Taken
– 2007/2008 – all MSC forecasters were given a week of Phoenix simulator training
– Phoenix simulator training is given to all new forecasters
– Offices continue to give staff Phoenix simulator training periodically
– The simulator was extended into real time by automating the scoring system

11 Phoenix Scoring System
– Error score: essentially forecast minus observed
– Error scores are normalized between parameters
– Scores for different time periods, parameters, and locations can be weighted according to relative importance
– Relative importance is determined from user surveys
– The score parallels the forecaster’s intention (it is not a ‘proper’ score)
– Scores are rolled up for summaries and can be drilled down to discover ‘root causes’
– Output is available in XML for further analysis (a sketch of the whole pipeline follows this list)
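
The bullets above suggest a pipeline: normalize, weight, roll up, drill down, emit XML. Here is a minimal Python sketch of that pipeline. Every constant (parameter names, normalization spans, importance weights, station IDs) is an invented assumption, and the XML layout is a guess at the kind of output the slide describes, not the actual MSC format.

    # Hypothetical Phoenix-style score: |forecast - observed|, normalized
    # per parameter, weighted by importance, rolled up per site, and
    # serialized as XML. All constants are invented for illustration.

    from collections import defaultdict
    import xml.etree.ElementTree as ET

    NORMALIZATION = {"temperature_c": 3.0, "wind_kt": 10.0}  # assumed typical error spans
    WEIGHTS = {"temperature_c": 1.0, "wind_kt": 0.8}         # assumed survey-derived importance

    def error_score(parameter, forecast, observed):
        """Normalized, weighted error for one parameter at one site."""
        return WEIGHTS[parameter] * abs(forecast - observed) / NORMALIZATION[parameter]

    def roll_up(records):
        """Summarize per site while keeping per-parameter detail for drill-down."""
        summary = defaultdict(lambda: defaultdict(float))
        for site, parameter, forecast, observed in records:
            summary[site][parameter] += error_score(parameter, forecast, observed)
        return summary

    def to_xml(summary):
        """Serialize rolled-up scores, echoing the slide's XML output bullet."""
        root = ET.Element("scores")
        for site, params in summary.items():
            site_el = ET.SubElement(root, "site", id=site, total=f"{sum(params.values()):.2f}")
            for parameter, score in params.items():
                ET.SubElement(site_el, "parameter", name=parameter, score=f"{score:.2f}")
        return ET.tostring(root, encoding="unicode")

    records = [
        ("CYUL", "temperature_c", 21.0, 24.0),
        ("CYUL", "wind_kt", 15.0, 10.0),
        ("CYYZ", "temperature_c", 18.0, 17.0),
    ]
    print(to_xml(roll_up(records)))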

12 Example Output Scores
– 23_SITES_AM_ISSUE_2008-08-24scores_score.xls
– Phoenix Monthly Report.xls

13 Automated Phoenix Scoring
– Output is generated in near-real time for two dozen stations daily
– A user interface allows scores to be generated for any given forecast
– Possibility of studies that are seasonal, situation-dependent, or individualized
– Option of configuring scores for different users (a sketch of this idea follows this list)
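
Configuring scores for different users amounts to swapping in different importance weights. The sketch below assumes the normalized errors from the previous sketch have already been computed; the profile names and weights are invented, not actual Phoenix configurations.

    # Hypothetical per-user weight profiles applied to pre-normalized errors.

    USER_PROFILES = {
        "public":   {"temperature_c": 1.2, "wind_kt": 0.6},  # invented weights
        "aviation": {"temperature_c": 0.5, "wind_kt": 1.5},  # invented weights
    }

    def score_for_user(profile_name, normalized_errors):
        """Weight pre-normalized errors by one user profile's importances."""
        weights = USER_PROFILES[profile_name]
        return sum(weights[p] * e for p, e in normalized_errors.items())

    normalized_errors = {"temperature_c": 1.0, "wind_kt": 0.5}
    for profile in USER_PROFILES:
        print(profile, round(score_for_user(profile, normalized_errors), 2))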

14 Other Uses of Phoenix
– Simulations have been run using the same methodology to investigate severe weather forecasting, aviation, extended range, and marine forecasting
– Results and conclusions were generally the same, with different levels of intervention required
– Other types of simulators
– Research
– Validation

15 What’s Next
– Completion of the automation for the full range of parameters; expansion to other forecasts
– Development of a Weather Event Simulator capability using Ninjo, with Phoenix for evaluation
– Evaluation of training
– Use in QA for ISO certification
– Extension to more parameters and forecast types in real time
– More comprehensive use in operations management, identification of needed research, and assessment of training needs

