
1 Joint Ensemble Forecast System (JEFS) NCAR Sep 2005

2 Overview: Motivation/Goal, Requirements, Resources, System Design, Roadmap, Products/Applications

3 JEFS' Goal: Prove the value, utility, and operational feasibility of ensemble forecasting to DoD operations.
Deterministic Forecasting: ignores forecast uncertainty, is potentially very misleading, and oversells forecast capability.
Ensemble Forecasting: reveals forecast uncertainty, yields probabilistic information, and enables optimal decision making, etc.

4 Ensemble Forecast Requirements: Air Force (and Army)

AFW Strategic Plan and Vision, FY2008-2032, Issue #3/4-3: Use of multi-scale (kilometer to meter resolution), ensemble, and consensus model forecasts, combined with automation of local techniques, to support planning and execution of military operations. “Ensembles have the potential to help quantify the certainty of a prediction, which is something that users have been interested in for years. The military applications of ensemble forecasting are only at their beginnings; there are years’ worth of research waiting to be done.”

Operational Requirements Document, USAF 003-94-I/II/III-D, Centralized Aerospace Weather Capability (CAWC ORD): ...will support ensemble forecasting with the following capabilities: 1) creation of sets of perturbed initial conditions of the fine-scale model initialized fields in selected regional windows; 2) assembly of ensemble forecasts either from model output sets derived from the multiple sets of perturbed initial conditions or from sets assembled from the output of different models; 3) evaluation of the forecasting skill of ensemble forecasts compared to single-model forecast outputs.

Air Force Weather Mission Area Plan (AFW MAP), FY 06-30, Deficiency: Mesoscale Ensemble Forecasting: “The key to successful ensemble forecasting is many different realizations of the same forecast events. Studies using different models - or the same model with different configurations - consistently yield better overall forecasts. This demonstrates a definite need for multiple model runs.”

R&D Portfolio MSA Shortfall D-08-07K: Insufficient ensemble forecasting capability for AFWA's theater-scale model.

5 Ensemble Forecast Requirements: Navy

There is no documented requirement or supporting Fleet request for ensemble prediction. Navy 'requirements' are written in terms of warfighting capabilities, and the current (draft) METOC ICD (old MNS) only specifies parameters required for support. However, ensembles present a solution for the following specified warfighter requirements:
- Long-range prediction for mission planning, optimum track ship routing, severe weather avoidance
- Tropical cyclone prediction for safety of operations, personnel safety
- Winds, turbulence, boundary layer structure for chem/bio/nuclear dispersion (WMD support)
- Cloud base, fog, aerosol for slant-range visibility (aerial recon, flight operations, targeting)
- Boundary layer structure/atmospheric refractivity (T, q) for EM propagation (detection, tracking, communications)
- Surface winds (ASW, mine drift, SAR, flight operations in enclosed/narrow waterways)
- Surf and sea heights (SOF, small boat ops, logistics)
- Turbulence, cloud base/tops (OPARS, safety of flight)

Whenever the uncertainty of the weather phenomena exceeds operational sensitivity, either a reliable probabilistic or a range-of-variability prediction is required.

6 JEFS Team & AFIT

7 FY04 HPCMP Distributed Center (DC) Award

Apr 03: FNMOC and AFWA proposed a split distributed center to the DoD High Performance Computing Modernization Program (HPCMP) as a DoD Joint Operational Test Bed for the Weather Research and Forecasting (WRF) modeling framework.
Apr 04: Installation began of $4.2M in IBM HPC hardware, split equally between FNMOC and AFWA (two 96-processor IBM Cluster 1600 p655+ systems).
Apr 08: Project completion.

The award fosters significant Navy/Air Force collaboration in NWP for:
1) Testing and optimizing WRF configurations to meet unique Navy and Air Force NWP requirements
2) Developing and testing mesoscale ensembles based on multiple WRF configurations to meet DoD needs
3) Testing Grid Computing concepts and tools for NWP

8 Joint Global Ensemble (JGE)

Description: Combination of current GFS and NOGAPS global, medium-range ensemble data. Possible expansion to include ensembles from CMC, UKMET, JMA, etc.
Initial Conditions: Breeding of Growing Modes [1]
Model Variations/Perturbations: Two unique models, but no model perturbations
Model Window: Global
Grid Spacing: 1.0° x 1.0° (~80 km)
Number of Members: 40 at 00Z, 30 at 12Z
Forecast Length/Interval: 10 days / 12 hours
Timing: Cycle times 00Z and 12Z; products by 07Z and 19Z

[1] Toth, Zoltan, and Eugenia Kalnay, 1997: Ensemble Forecasting at NCEP and the Breeding Method. Monthly Weather Review, Vol. 125, No. 12, pp. 3297–3319.
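
The breeding method cited above (Toth and Kalnay 1997) can be illustrated on a toy system. Below is a minimal sketch on the Lorenz-63 model, not the operational implementation: a perturbed forecast is run alongside the control, and the forecast difference is rescaled to a fixed amplitude each cycle so the perturbation aligns with the fastest-growing error directions. All sizes and parameter choices here are illustrative assumptions.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivative of the Lorenz-63 toy 'atmosphere'."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def forecast(s, dt=0.01, steps=100):
    """Fixed-step RK4 integration, standing in for a model forecast."""
    for _ in range(steps):
        k1 = lorenz63(s)
        k2 = lorenz63(s + 0.5 * dt * k1)
        k3 = lorenz63(s + 0.5 * dt * k2)
        k4 = lorenz63(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return s

def breed(analysis, cycles=20, amplitude=0.1, seed=0):
    """Breed one perturbation: add, forecast, rescale, repeat."""
    rng = np.random.default_rng(seed)
    pert = rng.normal(size=3)
    pert *= amplitude / np.linalg.norm(pert)
    for _ in range(cycles):
        control = forecast(analysis)
        perturbed = forecast(analysis + pert)
        diff = perturbed - control                        # grows along unstable directions
        pert = diff * (amplitude / np.linalg.norm(diff))  # rescale to the initial size
        analysis = control                                # stand-in for the next analysis
    return pert                                           # the bred vector

print(breed(np.array([1.0, 1.0, 20.0])))
```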

9 Joint Mesoscale Ensemble (JME)

Description: Multiple high-resolution, mesoscale model runs generated at FNMOC and AFWA
Initial Conditions: Ensemble Transform Filter [2] run on a short-range (6-h), mesoscale data assimilation cycle driven by GFS and NOGAPS ensemble members
Model Variations/Perturbations: Multimodel: WRF-ARW, COAMPS; Varied-model: various configurations of physics packages; Perturbed-model: randomly perturbed surface boundary conditions (e.g., SST)
Model Window: East Asia (COPC directive, Apr '04)
Grid Spacing: 15 km for baseline JME (summer '06); 5 km nest later in project
Number of Members: 30 (15 run at each DC site)
Forecast Length/Interval: 60 hours / 3 hours
Timing: Cycle times 06Z and 18Z; products by 14Z and 02Z (~7 h production per cycle)

[2] Wang, Xuguang, and Craig H. Bishop, 2003: A Comparison of Breeding and Ensemble Transform Kalman Filter Ensemble Forecast Schemes. Journal of the Atmospheric Sciences, Vol. 60, No. 9, pp. 1140–1158.
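
A sketch of how the three diversity sources listed above (multimodel, varied physics, perturbed surface boundary conditions) might combine into a 30-member roster. The physics scheme names and the 0.5 K SST perturbation scale are illustrative assumptions, not the documented JME configuration.

```python
import itertools
import random

MODELS = ["WRF-ARW", "COAMPS"]                       # multimodel
PBL = ["YSU", "MYJ", "MRF"]                          # varied-model physics (illustrative)
CUMULUS = ["Kain-Fritsch", "Betts-Miller", "Grell"]  # varied-model physics (illustrative)

def build_roster(n=30, sst_sd=0.5, seed=42):
    """Assemble n member configurations from the three diversity sources:
    model choice, physics configuration, and a random SST perturbation (K)."""
    rng = random.Random(seed)
    combos = itertools.cycle(itertools.product(MODELS, PBL, CUMULUS))
    roster = []
    for i in range(n):
        model, pbl, cu = next(combos)
        roster.append({
            "member": i + 1,
            "site": "AFWA" if i < n // 2 else "FNMOC",  # 15 run at each DC site
            "model": model,
            "pbl_scheme": pbl,
            "cumulus_scheme": cu,
            "sst_pert_K": round(rng.gauss(0.0, sst_sd), 2),
        })
    return roster

for member in build_roster()[:3]:
    print(member)
```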

10 Joint Ensemble Forecast System (system design)

NCEP Medium-Range Ensemble: 44 staggered GFS runs, T126, 15 days; analysis perturbations via Bred Modes; model perturbations in design. Principal fields stored.
FNMOC Medium-Range Ensemble: 18 members at 00Z and 8 at 12Z, NOGAPS, T119, 10 days; analysis perturbations via Bred Modes; no model perturbations. Principal fields stored.
Joint Global Ensemble (JGE) Products: postprocessing calibration applied; long-range products tailored to support warfighter planning.
The global ensembles supply lateral boundary conditions and multiple first guesses to the mesoscale system. A "warm start" data assimilation cycle (3DVAR/NAVDAS) ingests observations, and an Ensemble Transform step generates the initial condition perturbations.
Joint Mesoscale Ensemble (JME): 30 members, 15/5 km, 60 h, 2 runs/day; one "demonstration" theater; multimodel (WRF, COAMPS); perturbed model via varied physics and surface boundary conditions.
JME Products: postprocessing calibration applied; short-range products tailored to support warfighter operations.
Calibration of both product streams is updated against observations and analyses.
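
The slide does not specify the calibration algorithm, so the sketch below assumes a simple first-moment approach (remove the recent mean bias, rescale the spread toward the recent error standard deviation) purely to illustrate where "apply postprocessing calibration" sits in the pipeline.

```python
import numpy as np

def calibrate(ensemble, past_means, past_obs):
    """Shift the ensemble by the recent mean bias and rescale its spread
    toward the recent error standard deviation.

    ensemble:   (n_members,) current forecasts of one variable at one point
    past_means: (n_cases,) recent ensemble-mean forecasts
    past_obs:   (n_cases,) matching verifying observations
    """
    errors = past_means - past_obs
    bias = errors.mean()                   # systematic error to remove
    target_spread = errors.std()           # spread the recent errors call for
    mean, spread = ensemble.mean(), ensemble.std()
    if spread == 0.0:
        return ensemble - bias             # degenerate ensemble: just de-bias
    return (mean - bias) + (ensemble - mean) * (target_spread / spread)

raw = np.array([12.0, 14.5, 13.2, 15.1, 12.8])   # e.g. 10-m wind speed (kt)
print(calibrate(raw, np.array([10.0, 12.0, 9.5, 11.0]),
                     np.array([9.0, 11.5, 9.0, 10.0])))
```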

11 JEFS Production Schedule

Steps per cycle: GFS ensemble grids to AFWA and FNMOC; NOGAPS ensemble grids to AFWA; interpolate and calibrate JGE; make/distribute JGE products; obtain global analysis; update JGE calibration; data assimilation; run 6-h forecasts and do the Ensemble Transform; run JME models; exchange output; make/distribute JME products; update JME calibration.
Cycle data arrive at 00Z, 06Z, 12Z, and 18Z and feed two production cycles, at 06Z and 18Z, laid out across a 00-24Z timeline.

12 Notional Roadmap for JEFS and Beyond (FY04-FY11)

Contributing efforts:
1. AFWA/FNMOC awarded HPCMPO DC, Nov 03 (DC hardware)
2. AFWA awarded PET-CWO (Programming Environment and Training - Climate Weather Ocean) on-site
3. NRL awarded mesoscale ensemble research (Probabilistic Prediction of High-Impact Weather)
4. DTRA-AFWA ensemble investment and support
5. ARL SBIR Phase I & II with AFWA UFR
6. NCAR & UW contract, Phases I and II, funded by AFWA Wx Fcst 3600

Milestones: JEFS design; JGE RDT&E leading to JGE IOC; JME RDT&E; 1st, 2nd, and 3rd mesoscale EPS hardware procurements (funded via PEC 35111F Weather Forecasting, 3080M); Mesoscale EPS IOC; Mesoscale EPS FOC.

13 Product Strategy

Tailor products to customers' needs and weather sensitivities.
Forecaster Products/Applications: designed to help the transition from deterministic to stochastic thinking.
Warfighter Products/Applications: designed to aid critical decision making (Operational Risk Management).

14 Operational Testing & Evaluation

Pacific Air Forces. Forecasters: 20th Operational Weather Squadron, 17th Operational Weather Squadron, 607th Weather Squadron. Warfighters: PACAF, 5th Air Force.
Naval Pacific Meteorological and Oceanographic Center, Yokosuka Navy Base. Forecasters: Yokosuka Navy Base. Warfighters: 7th Fleet.

15 Forecaster Products/Applications

16 Consensus & Confidence Plot

Consensus (isopleths): shows the "best guess" forecast (ensemble mean or median).
Model Confidence (shaded): increased spread in the multiple forecasts means less predictability and therefore decreased confidence in the forecast. Shading legend: Maximum Potential Error (mb, +/-), from 6 down to <1.
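
A minimal sketch of how the two fields on this chart could be computed from a stack of member grids. Using the median for the consensus and the standard deviation for the confidence shading is an assumption consistent with the slide's description, not a documented JEFS choice.

```python
import numpy as np

def consensus_and_confidence(members):
    """members: (n_members, ny, nx) stack of one forecast field.
    Returns the consensus field (isopleths) and the spread field (shading):
    larger spread = less predictability = lower confidence."""
    consensus = np.median(members, axis=0)   # robust "best guess"
    spread = members.std(axis=0)             # basis for max-potential-error shading
    return consensus, spread

# Toy example: 30 members of an MSLP field (mb) on a 50 x 60 grid
mslp = 1013.0 + np.random.default_rng(1).normal(0.0, 2.0, size=(30, 50, 60))
best_guess, confidence = consensus_and_confidence(mslp)
```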

17 Probability Plot

Probability of occurrence of any weather phenomenon/threshold (e.g., sfc winds > 25 kt).
Clearly shows where uncertainty can be exploited in decision making.
Can be tailored to critical sensitivities, or made interactive (as in IGRADS on JAAWIN).
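
The probability field itself is simply the fraction of members exceeding the chosen threshold. A minimal sketch, with the 25-kt surface wind threshold taken from the slide and toy data standing in for real member grids:

```python
import numpy as np

def exceedance_probability(members, threshold):
    """Percent of members exceeding the threshold at each grid point."""
    return 100.0 * np.mean(members > threshold, axis=0)

# Toy example: P(sfc wind > 25 kt) from 30 members on a 50 x 60 grid
winds = np.abs(np.random.default_rng(2).normal(20.0, 6.0, size=(30, 50, 60)))
prob = exceedance_probability(winds, 25.0)
```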

18 Multimeteogram

Shows the range of possibilities for all meteogram-type variables, in contrast to the current deterministic meteogram's single trace.
A box-and-whisker or confidence-interval plot is more appropriate for large ensembles.
An excellent tool for point forecasting (deterministic or stochastic).
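
A sketch of the statistics a multimeteogram would draw per lead time, assuming a 30-member, 60-h/3-h JME-like ensemble at one station. The percentile choices are illustrative.

```python
import numpy as np

def multimeteogram(members):
    """members: (n_members, n_leads) traces of one variable at one station.
    Returns the per-lead-time statistics a box-and-whisker meteogram plots."""
    p = np.percentile(members, [5, 25, 50, 75, 95], axis=0)
    return {"p05": p[0], "p25": p[1], "median": p[2], "p75": p[3], "p95": p[4],
            "min": members.min(axis=0), "max": members.max(axis=0)}

# Toy example: 30 members, 60-h forecast at 3-h intervals (21 lead times)
temps = np.random.default_rng(3).normal(15.0, 3.0, size=(30, 21))
stats = multimeteogram(temps)
print(stats["median"][:4])
```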

19 Sample JME Products

Probability of Warning Criteria at Osan AB: What is the potential risk to the mission? When is a warning required?
Surface Wind Speed at Misawa AB: plotted against valid time with the ensemble mean, the 90% confidence interval, and the extreme min/max.
These products require a paradigm shift into "stochastic thinking."

20 Sample JGE Product (Forecaster)

Probability of Severe Turbulence at FL300, contoured in probability from 10% to 90%.

21 Sample JGE Product? (Warfighter)

Upper Level Turbulence (flight levels 280 to 350).

22 Sample JGE Product (Warfighter)

Chance of Upper Level Turbulence, Intensity: Severe. Legend: Low / Med / High chance, with negligible chance unshaded; regions annotated with Base/Top flight levels (e.g., 250/370, 280/370, 300/330).

23 Warfighter Products/Applications

24 Probabilistic IWEDA for Operational Risk Management (ORM)

The Integrated Weather Effects Decision Aid (IWEDA) compares a deterministic forecast against weapon system weather thresholds (per AFI 13-217), e.g. drop zone surface winds > 6 kt, binned as 0-9 kt, 10-13 kt, and > 13 kt.
Warfighters face binary decisions/actions: bombs on target, go / no go, AR route Clear & 7, crosswinds in/out of limits, T-storm within 5, flight hazards, IFR/VFR, GPS scintillation.
Bridging the gap: a stochastic forecast replaces the single answer with probabilities across the threshold bins (e.g., 70% / 20% / 10% for the 0-9 kt, 10-13 kt, and > 13 kt bins), shown for drop zone surface winds out to 18 kt.
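
A probabilistic IWEDA could color its stoplight from the ensemble exceedance probability instead of a single forecast value. The sketch below is a minimal illustration; the 30%/70% color cutoffs are assumptions, not taken from AFI 13-217 or the fielded IWEDA.

```python
import numpy as np

def iweda_color(members, threshold, yellow=0.30, red=0.70):
    """Stoplight from the ensemble probability of exceeding a weapon
    system threshold; the 30%/70% cutoffs are illustrative only."""
    p = float(np.mean(members > threshold))
    if p >= red:
        return "RED"     # unfavorable: limit very likely exceeded
    if p >= yellow:
        return "YELLOW"  # marginal: meaningful risk
    return "GREEN"       # favorable

# Toy example: drop zone surface winds (kt) vs. a 6-kt limit
winds = np.random.default_rng(4).normal(5.0, 2.0, size=30)
print(iweda_color(winds, threshold=6.0))
```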

25 Method #1: Decision Theory. Example (Hypothetical)

Event: damage to parked aircraft. Threshold: sfc wind > 50 kt. Cost (of protecting): $150K. Loss (if damaged): $1M.
Minimize operating cost (or maximize effectiveness) in the long run by taking action based on an optimal threshold of probability, rather than an event threshold. What is the cost of taking action? What is the loss if the event occurs without protection, or if an opportunity was missed because action was not taken?
Good for well defined, commonly occurring events. Optimal threshold = 15% (see the sketch below).
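
The decision rule behind this example is the classic cost/loss model: with protection cost C and potential loss L, long-run expense is minimized by protecting whenever the event probability exceeds C/L. A minimal sketch using the slide's numbers:

```python
def optimal_threshold(cost, loss):
    """Static cost/loss model: protecting always costs `cost`; failing to
    protect loses `loss` when the event occurs. Long-run expense is
    minimized by protecting whenever P(event) > cost / loss."""
    return cost / loss

def protect(p_event, cost, loss):
    return p_event > optimal_threshold(cost, loss)

# Slide's numbers: C = $150K to protect, L = $1M if damaged
print(optimal_threshold(150_000, 1_000_000))   # 0.15 -> the 15% on the slide
print(protect(0.20, 150_000, 1_000_000))       # True: act on a 20% forecast
```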

26 Method #2: Weather Risk Analysis and Portrayal (WRAP)

The Army Research Lab's stochastic decision aid, in development by Next Century Corporation.
Stoplight color is based on: 1) the ensemble forecast probability distribution, 2) the weapon system's operating thresholds, and 3) the warfighter-determined level of acceptable risk.
The greater the confidence required (i.e., the less acceptable the risk), the less certain we can be of the desired outcome. Example: cumulative probability of drop zone surface winds (kt) against the 9-kt and 13-kt thresholds at 70%, 80%, and 90% confidence.
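
A minimal sketch of the stoplight logic described above, under the assumption that the color compares the ensemble's probability of exceeding the operating threshold against the warfighter's acceptable risk; the fielded ARL/Next Century implementation is not specified on the slide, and the yellow band is an illustrative margin.

```python
import numpy as np

def wrap_color(members, threshold, acceptable_risk, margin=3.0):
    """Stoplight from (1) the ensemble distribution, (2) an operating
    threshold, and (3) the acceptable risk of exceeding it. The yellow
    band (up to margin x acceptable_risk) is an illustrative assumption."""
    p_exceed = float(np.mean(members > threshold))
    if p_exceed <= acceptable_risk:
        return "GREEN"   # within limits at the required confidence
    if p_exceed <= margin * acceptable_risk:
        return "YELLOW"
    return "RED"

# Toy example: drop zone winds (kt), 13-kt threshold, 10% acceptable risk
winds = np.random.default_rng(5).normal(8.0, 2.5, size=30)
print(wrap_color(winds, threshold=13.0, acceptable_risk=0.10))
```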

27 Method #2: Weather Risk Analysis and Portrayal (WRAP)

Drop zone surface wind example (9-kt and 13-kt thresholds, with a queried 18-kt threshold), with the decision input read at the percentile matching the acceptable risk (90th, 60th, or 30th percentile):
Drop Zone #1: Low 99%, Med 1%, High 0%
Drop Zone #2: Low 1%, Med 31%, High 68%
Drop Zone #3: Low 37%, Med 52%, High 11%

28

29 ENSEMBLES AHEAD

30 Backup Slides

31 The Atmosphere is a Chaotic, Dynamic System

Describable State: the system is specified by a set of variables that evolve in "phase space."
Deterministic: the system appears random, but the process is governed by rules.
Sensitive to Initial Conditions: nearby solutions diverge. Analogy: two adjacent drops in a waterfall end up very far apart.
Solution Attractor: a limited region in phase space where solutions occur.
Aperiodic: solutions never repeat exactly, but may appear similar.
Predictability is primarily limited by errors in the analysis. To account for this effect, we can make an ensemble of predictions (each forecast being a likely outcome) to encompass the truth.

32 Encompassing Forecast Uncertainty

A point in phase space completely describes an instantaneous state of the atmosphere (pressure, temperature, etc. at all points at one time). The true state of the atmosphere exists as a single point in phase space that we never know exactly.
An analysis produced to run a model is somewhere in a cloud of likely states, and any point in the cloud is equally likely to be the truth. Nonlinearities drive apart the forecast trajectory and the true trajectory (i.e., Chaos Theory), as the diverging 12-h, 24-h, 36-h, and 48-h forecast and verification points in the figure illustrate.

33 Encompassing Forecast Uncertainty

An ensemble of likely analyses (the analysis region in phase space) leads to an ensemble of likely forecasts (the 48-h forecast region). Ensemble forecasting encompasses the truth, reveals uncertainty, and yields probabilistic information.

34 The Wind Storm That Wasn't (Thanksgiving Day 2001)

Eta-MM5 forecast vs. verification: mean sea level pressure (mb) and shaded surface wind speed (m s^-1).

35 The Wind Storm That Wasn't (Thanksgiving Day 2001)

Panels: avn-MM5, ngps-MM5, cmcg-MM5, tcwb-MM5, ukmo-MM5, eta-MM5, and cent-MM5 forecasts alongside the verification.

36 Deterministic vs. Ensemble Forecasting

Deterministic Forecasting: a single solution; variable and unknown risk; attempts to minimize uncertainty. Utility reliant on: 1) accuracy of the analysis, 2) accuracy of the model, 3) flow of the day, 4) forecaster experience, 5) random chance. Cost / Return: Mod / Mod.
Ensemble Forecasting: multiple solutions; variable and known risk; attempts to define uncertainty. Utility reliant on: 1) accounting of analysis error, 2) accounting of model error, 3) flow of the day, 4) machine-to-machine processing, 5) random sampling (number of model runs). Cost / Return: High / High+.

37 The Deterministic Pitfall

Notion: The deterministic atmosphere should be modeled deterministically. Reality: The need for stochastic forecasting is a result of the sensitivity to initial conditions.
Notion: A high-resolution forecast is better. Reality: A better-looking simulation is not necessarily a better forecast (precision ≠ accuracy).
Notion: A single solution is easier for interpretation and forecasting. Reality: It gives a misleading and incomplete view of the future state of the atmosphere.
Notion: The customer needs a single forecast to make a decision. Reality: That is poor support to the customer since, in many cases, a reliable Y/N forecast is not possible.
Notion: A single solution is more affordable to process. Reality: A good argument in the past, but not anymore; how can you afford not to do ensembles?
Notion: NWP was designed deterministically. Reality: Yes and no; NWP's founders designed models for deterministic use, but knew the limitation.
Notion: There are many spectacular success stories of deterministic forecasting. Reality: Each is the result of a forecast situation with low uncertainty, or the dumb luck of random sampling.

38 Method #1: Decision Theory. Example (Hypothetical)

Event: satellite drag alters LEO orbits. Threshold: Ap > 100. Cost (of preparing): $4.5K. Loss (of reacting): $10K.
As before, minimize operating cost (or maximize effectiveness) in the long run by taking action based on an optimal threshold of probability rather than an event threshold; the method suits well defined, commonly occurring events. Optimal threshold = 45% (= $4.5K / $10K).

39 EF Vision 2020

United Global Mesoscale Ensemble: runs/cycle O(100), resolution O(10 km), length 10 days, assembled from coalition weather center contributions.
Per-center Global Mesoscale Ensembles (AFWA, FNMOC, JMA, ABM, MSC, etc.): runs/cycle O(10), resolution O(10 km), length 10 to 15 days.
Microscale Ensembles: runs/cycle O(10), resolution O(100 m), length 24 hours to 2 days.

