
Slide 1: Meso- and Storm-Scale NWP: Scientific and Operational Challenges for the Next Decade
Kelvin K. Droegemeier
School of Meteorology and Center for Analysis and Prediction of Storms, University of Oklahoma
COMET Faculty Course on NWP, 9 June 1999, Boulder, Colorado

Slide 2: What Are Operational Models Predicting?
• Global and synoptic flow patterns
• Precipitation, via crude parameterizations that are unable to resolve individual clouds
• Topographic forcing
• Coastal and lake influences
• Crude linkages between the land surface and the atmosphere

Slide 3: What Do Forecasters Use?
• Single forecasts
• Output frequency of 3 to 12 hours
• Accumulated precipitation and other traditional fields
• Graphical overlays of model, radar, and satellite data

Slide 4: What Do We Need to Predict?
• Individual thunderstorms and squall lines
• Lake-effect snow storms
• Downslope wind storms
• Convective initiation
• Sea-breeze convection
• Stratocumulus decks off the coast
• Cold-air damming
• Post-frontal rainbands

Slide 5: Why?
• Local high-impact weather causes economic losses in the US averaging $300 million per week
• Over 10% of the $7 trillion US economy is impacted each year
• Commercial aviation losses are $1-2 billion per year (one diverted flight costs $150K)
• Agricultural losses exceed $10 billion per year
• Other industries (power utilities, surface transportation) are affected as well
• About 50% of the loss is preventable! (Pielke Jr. 1997)

Slide 6: What is Needed?
• Models that
  – run at high spatial resolution (1-3 km)
  – utilize high-resolution observations (e.g., from the WSR-88D network)
  – handle terrain well
  – represent important physical processes, especially microphysics and land-surface interactions
• Physical/theoretical understanding
• Tools for integrating model output and observations

Slide 7: Role of the University Community
• Educating students about NWP -- a whole new ballgame!
  – Physical processes
  – Data sets & observing platforms
  – Numerical models & methods
  – Data assimilation & predictability
• Research in all facets of NWP
• Running models in real time
  – More than 25 universities do this today!
  – A major change from 20 years ago!
  – Academia is driving operational NWP
• Collecting data
  – GPS, WSR-88D, other

Slide 8: Trends in Large-Scale Forecast Skill

Slide 9: Predictability: Hitting the Wall
• For global models, predictability increases for all resolvable scales as spatial resolution increases (quasi-2-D dynamics)
  – The improvement is bounded
  – Refining resolution beyond a few tens of km gives little payoff
• The next quantum leap in NWP will come when we start resolving explicitly the most energetic weather features, e.g., individual convective storms (3-D)
(Figure: forecasts at 60 km, 30 km, 10 km, and 2 km resolution)

Slide 10: Center for Analysis and Prediction of Storms (CAPS)
• One of the first 11 NSF Science and Technology Centers, established in 1989
• STCs were designed to attack problems of fundamental research that eventually would yield important benefits to society
• Mission of CAPS: To demonstrate the practicability of numerically predicting local, high-impact storm-scale spring and winter weather, and to develop, test, and help implement a complete analysis and forecast system appropriate for operational, commercial, and research applications

Slide 11: The Key Scientific Questions
• Can value be added to present-day NWP and radar-based nowcasting by storm-resolving models?
• Which storm-scale events are most predictable, and will fine-scale details enhance or reduce predictability?
• What physics is required, and do we understand it well enough for practical application?
• What observations are most critical, and can data from the national NEXRAD Doppler radar network be used to initialize NWP models? Can this be done in real time?
• What networking and computational infrastructures are needed to support high-resolution NWP?
• How can useful decision-making information be generated from forecast model output?

Slide 12: Prediction Targets
• Somewhat problematic
• For 1-3 km resolution grids, location to within
  – 200 km 6 hours in advance
  – 100 km 4 hours in advance
  – 50 km 2 hours in advance
  – 10 km 1 hour in advance
• Initiation
• Movement
• Intensity
• Duration

Slide 13: Mesoscale NWP
• The prediction of the general characteristics associated with mesoscale weather phenomena
(Figure: 6-hour ARPS forecast at 9 km resolution vs. WSR-88D CREF, 02 UTC 30 Nov 1999)

Slide 14: Storm-Scale NWP
• The prediction of explicit updrafts/downdrafts and related features (e.g., gust fronts, mesocyclones)
(Figure: NEXRAD radar observations vs. ARPS 90-min forecast at 3 km)

Slide 15: (Figure: schematic relating model spatial resolution to breadth of application, economic impact, and the negative consequences of a bad forecast, from the 1970s through 2000-2010)

Slide 16: Present NWS Operations

Slide 17: NWS Forecast Offices

Slide 18: Small-Scale Weather is LOCAL!
(Figure: map of simultaneous local weather -- severe thunderstorms, fog, rain and snow, intense turbulence, snow and freezing rain)

Slide 19: The Future of Operational NWP
(Figure: nested forecast domains -- 20 km CONUS ensembles with 10 km, 3 km, and 1 km nests)

Slide 20: The Future of Operational NWP??

Slide 21: Principal Differences Between Large- and Small-Scale NWP
• Large-scale: Rawinsondes observe "everything" that is needed to initialize a model (T, RH, u, v)
• Small-scale: Doppler radar observes only the radial wind and reflectivity in precipitation regions; clear-air PBL data are available in some situations
• Large-scale: Well-known balances can be applied to reconcile the wind and mass fields (e.g., geostrophy, the balance equation; see the sketch below)
• Small-scale: Only simple balances are available (mass continuity); otherwise, it's the full equations!!
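
For instance, geostrophic balance, ug = -(g/f) dZ/dy and vg = (g/f) dZ/dx, diagnoses the large-scale wind directly from an analyzed height field. A minimal sketch of that diagnosis, using a synthetic height field and assuming numpy (not any operational analysis code):

```python
import numpy as np

g, f = 9.81, 1.0e-4        # gravity (m s^-2), midlatitude Coriolis parameter (s^-1)
dx = dy = 100.0e3          # grid spacing (m)

# Synthetic 500 mb geopotential height field (m): height falling toward the
# north, with a weak wave in the east-west direction.
y, x = np.mgrid[0:20, 0:20]
Z = 5600.0 - 5.0e-4 * (y * dy) + 20.0 * np.sin(2.0 * np.pi * x / 20.0)

# Geostrophic balance: the wind is diagnosed from the mass field alone,
# which is why rawinsonde height/temperature data go so far at large scales.
dZdy, dZdx = np.gradient(Z, dy, dx)
ug = -(g / f) * dZdy       # ~ +49 m/s westerly for this north-south gradient
vg =  (g / f) * dZdx
print(ug.mean(), vg.mean())
```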

Slide 22: Principal Differences (continued)
• Large-scale: Forecasts are of sufficient duration to be produced and disseminated in reasonable time frames
• Small-scale: Forecasts are of very short duration and thus are highly perishable
• Large-scale: The observing network is mature, and its errors and natural variability are understood
• Small-scale: The key observing system (WSR-88D) is new; only a few links exist for providing base data in real time

Slide 23: Principal Differences (continued)
• Large-scale: Dynamics and predictability limits are fairly well understood; model physics and numerics are reasonably mature
• Small-scale: Dynamics are fairly well understood, but predictability limits have not been established; model physics is still evolving; physical processes are complicated (the addition of detail is a double-edged sword)
• Large-scale: Conventional data assimilation techniques work well; large-scale features evolve slowly
• Small-scale: Conventional data assimilation techniques are not applicable; events are spatially intermittent and evolve rapidly; how do you remove an incorrect thunderstorm and insert the correct one???

Slide 24: Principal Differences (continued)
• Large-scale: Computing power is reasonably sufficient
• Small-scale: Need 100 to 1000 times more computing power than is now available commercially (see the arithmetic below)
• Large-scale: No lateral boundary conditions to worry about for global and hemispheric models
• Small-scale: Lateral boundaries in limited-area models exert a tremendous influence on the solution; a compromise must be struck between high spatial resolution and domain size
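
The 100-1000x figure follows from how explicit-model cost scales with grid spacing: for a fixed domain and vertical grid, halving the spacing quadruples the horizontal point count and, via the CFL condition, halves the allowable time step, so cost grows roughly as the cube of the refinement factor. A back-of-envelope sketch (the resolutions chosen are illustrative):

```python
def cost_ratio(dx_coarse_km: float, dx_fine_km: float) -> float:
    """Rough cost multiplier for refining horizontal grid spacing.

    Assumes a fixed domain and vertical grid: point count grows as the
    square of the refinement factor, and the CFL-limited time step
    shrinks linearly with grid spacing, giving a cubic scaling overall.
    """
    r = dx_coarse_km / dx_fine_km
    return r ** 3

print(cost_ratio(30.0, 3.0))   # 1000x: ~30 km mesoscale grid -> 3 km storm-resolving grid
print(cost_ratio(15.0, 3.0))   # 125x: near the lower end of the 100-1000x estimate
```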

Slide 25: Recipe for a Storm-Scale NWP System
• An advanced numerical model with appropriate physics parameterizations
• High-resolution observations (WSR-88D, profilers, satellites, MDCRS) and appropriate ways of using them
• Powerful computers and networks
• A way to retrieve quantities that cannot be observed directly
• Strategies for converting output to useful decision-making information

Slide 26: The CAPS Advanced Regional Prediction System (ARPS)

Slide 27: NEXRAD Doppler Radar Data

Slide 28: Single-Doppler Velocity Retrieval (SDVR)
• We observe...
  – one (radial) wind component
  – reflectivity
• We need...
  – 3 wind components
  – temperature
  – humidity
  – pressure
  – water substance (6-10 fields)
• SDVR solves the inverse problem (see the sketch below)
  – control theory (adjoint), simpler methods
  – computationally very intensive
(Figure: geometry of the real wind vector and its observed radial component)
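
The inverse problem starts from a simple forward operator: the radial velocity is the projection of the full wind onto the radar beam. Below is a minimal sketch of that operator plus a VAD-style least-squares fit for a locally uniform horizontal wind, assuming numpy; real SDVR schemes such as the adjoint methods named above retrieve spatially varying fields and are far more involved.

```python
import numpy as np

def radial_component(u, v, w, az_deg, el_deg):
    """Forward operator: project the wind (u east, v north, w up) onto the beam.

    Azimuth is measured clockwise from north; elevation is above the horizon.
    """
    az, el = np.radians(az_deg), np.radians(el_deg)
    return (u * np.sin(az) + v * np.cos(az)) * np.cos(el) + w * np.sin(el)

# Synthetic "observations": a uniform wind sampled around one low-level scan.
u_true, v_true = 12.0, -5.0
az = np.arange(0.0, 360.0, 10.0)
el = 0.5
vr_obs = radial_component(u_true, v_true, 0.0, az, el)

# Least-squares retrieval of (u, v) from the radial components alone.
azr, elr = np.radians(az), np.radians(el)
G = np.column_stack([np.sin(azr) * np.cos(elr), np.cos(azr) * np.cos(elr)])
(u_ret, v_ret), *_ = np.linalg.lstsq(G, vr_obs, rcond=None)
print(u_ret, v_ret)   # recovers ~(12.0, -5.0)
```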

Slide 29: Sample SDVR Result
(Figure: dual-Doppler analysis vs. SDVR-retrieved winds; Weygandt 1998)

Slide 30: Sample SDVR Result
(Figure: dual-Doppler analysis vs. SDVR-retrieved winds; Weygandt 1998)

Slide 31: Sample SDVR Result
(Figure: dual-Doppler analysis vs. SDVR-retrieved winds; Weygandt 1998)

Slide 32: 5 April 1999 - Impact of Radar Data
(Figure panels: initial 700 mb vertical velocity using NIDS; 12Z reflectivity; initial 700 mb vertical velocity using Level II data and SDVR)

Slide 33: 5 April 1999 - Impact of Radar Data
(Figure panels: 15Z reflectivity; 3-hr ARPS CREF forecast (9 km) using Level II data and SDVR, valid 15Z; 3-hr ARPS CREF forecast (9 km) using NIDS data, valid 15Z)

Slide 34: The Lahoma, OK Hailstorm
(Conway et al. 1996)

Slide 35: (Figure only)

Slide 36: Availability of Base Data
• CAPS has been using Level II (base) NEXRAD data in case-study predictions down to 1 km resolution, and Level III (NIDS) data in its daily operational forecasts
• Although NIDS data are available in real time from all radars, they are insufficient in many cases for storm-scale NWP
  – Precision is degraded via value quantization (illustrated in the sketch below)
  – Only the lowest 4 tilts are transmitted
• No national strategy yet exists for the real-time collection and distribution of Level II data
• An example of universities leading the way!!
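
To make the quantization point concrete: legacy NIDS reflectivity products were typically transmitted in 16 data levels (roughly 5 dBZ bins), whereas Level II reflectivity carries about 0.5 dBZ precision. A toy sketch of the information loss; the bin edges here are illustrative, not the exact NIDS product specification:

```python
import numpy as np

def quantize(dbz, bin_width):
    """Map reflectivity to the floor of its quantization bin."""
    return np.floor(np.asarray(dbz) / bin_width) * bin_width

z_true = np.array([18.5, 33.7, 47.2, 61.9])       # "true" reflectivities (dBZ)
print(quantize(z_true, 5.0))    # NIDS-like coarse bins:   [15. 30. 45. 60.]
print(quantize(z_true, 0.5))    # Level II-like precision: [18.5 33.5 47. 61.5]
```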

Slide 37: Real Time Test Bed for Acquiring WSR-88D Base Data (Project CRAFT)
(Figure: map of radars -- INX, DDC, AMA, LBB, FWS, TLX, KFSM, ICT -- marked as either online or approval pending)

Slide 38: CRAFT Phase I

Slide 39: Regional Collection Concept
• Must await the open RPG
• A great opportunity for universities!

Slide 40: The CAPS Vision

Slide 41: Real Time Testing
• Daily operation of experimental forecast models is critical for
  – involving operational forecasters in R&D
  – evaluating model performance under all conditions
  – testing new forecast strategies (e.g., rapid model updates, forecasts on demand, relocatable domains)
  – developing measures of skill and reliability based on a long-term database of model output
  – learning how to integrate new forecast information into operational decision making
• Over 25 groups around the US are running models in real time in collaboration with NWS Offices or NCEP Centers; few are assimilating observations

Slide 42: CAPS' Real Time Testing
• Daily operational forecasts with full physics at spatial resolutions down to 3 km
• Assimilation of high-resolution observations consistent with the model's high spatial resolution
  – WSR-88D Level II (base) data
  – WSR-88D Level III (NIDS) data
  – GOES satellite data for quantitative vapor/cloud/precipitation
  – MDCRS commercial aircraft temperature and wind (T and V)
  – Surface mesonets
• More than 2000 products produced each hour and posted on the web (http://hubcaps.ou.edu)
• Execution on the 256-node Origin 2000 at NCSA

Slide 43: ARPSView Decision Support System

Slide 44: 1999 Special Operational Period
(Figure: nested domains -- a 5-member, 30 km ensemble with 9 km and 3 km nests; WSR-88D sites marked as base data being ingested or pending)

Slide 45: ARPS 32 km Forecast - AR Tornadoes
(Figure: radar (tornadoes in Arkansas; proprietary radar) vs. ARPS 12-hour, 32 km resolution forecast CREF, valid at 00Z on 1/22/99)

Slide 46: ARPS 9 km Forecast - AR Tornadoes
(Figure: radar (tornadoes in Arkansas; proprietary radar) vs. ARPS 6-hour, 9 km forecast CREF, valid at 00Z on 1/22/99)

Slide 47: ARPS 3 km Forecast - AR Tornadoes
(Figure: Weather Channel radar at 2343Z vs. ARPS 6-hour, 3 km forecast CREF, valid at 00Z)

Slide 48: 6 January 1999
(Figure: GOES visible image, 1745Z 6 Jan 99, vs. ARPS 12-h forecast visibility (27 km), valid 18Z 6 Jan 99)

Slide 49: 9-10 May 1999
(Figure: composite radar valid 2347Z Sunday 9 May 1999 vs. NCEP Eta 12-hour forecast valid 00Z Monday 10 May 1999)

Slide 50: 9-10 May 1999
(Figure: composite radar valid 0344Z Monday 10 May 1999 vs. ARPS 4-hour, 3 km CREF forecast valid 04Z Monday 10 May 1999)

Slide 51: 1 June 1999
(Figure: KFWS CREF valid 00Z Tuesday 1 June 1999 vs. ARPS CREF initial condition valid 00Z; 3 km resolution with Level II data from KTLX and KFWS + NIDS)

Slide 52: 1 June 1999
(Figure: KFWS CREF valid 01Z Tuesday 1 June 1999 vs. ARPS CREF 1-hour forecast valid 01Z; 3 km resolution with Level II data from KTLX and KFWS + NIDS)

Slide 53: 1 June 1999
(Figure: KFWS CREF valid 02Z Tuesday 1 June 1999 vs. ARPS CREF 2-hour forecast valid 02Z; 3 km resolution with Level II data from KTLX and KFWS + NIDS)

Slide 54: 1 June 1999
(Figure: KFWS CREF valid 03Z Tuesday 1 June 1999 vs. ARPS CREF 3-hour forecast valid 03Z; 3 km resolution with Level II data from KTLX and KFWS + NIDS)

Slide 55: 1 June 1999
(Figure: KFWS CREF valid 04Z Tuesday 1 June 1999 vs. ARPS CREF 4-hour forecast valid 04Z; 3 km resolution with Level II data from KTLX and KFWS + NIDS)

Slide 56: 1 June 1999
(Figure: KFWS CREF valid 05Z Tuesday 1 June 1999 vs. ARPS CREF 5-hour forecast valid 05Z; 3 km resolution with Level II data from KTLX and KFWS + NIDS)

Slide 57: 3 June 1999
(Figure: KAMA CREF valid 00Z 3 June 1999 vs. ARPS 3-hour, 3 km forecast valid 00Z; without NEXRAD base data)

Slide 58: 3 June 1999
(Figure: KAMA CREF valid 03Z 3 June 1999 vs. ARPS 6-hour, 3 km forecast valid 03Z; without NEXRAD base data)

Slide 59: 3 June 1999
(Figure: KAMA CREF valid 04Z 3 June 1999 vs. ARPS 7-hour, 3 km forecast valid 04Z; without NEXRAD base data)

Slide 60: 3 June 1999
(Figure: KAMA CREF valid 05Z 3 June 1999 vs. ARPS 8-hour, 3 km forecast valid 05Z; without NEXRAD base data)

Slide 61: 3 June 1999
(Figure: KAMA CREF valid 06Z 3 June 1999 vs. ARPS 9-hour, 3 km forecast valid 06Z; without NEXRAD base data)

Slide 62: Numerical Forecasts of the May 3 Tornadic Storms
(Figure: NEXRAD radar observations at 5:30 pm vs. ARPS 1/2-hour forecast)

Slide 63: Numerical Forecasts of the May 3 Tornadic Storms
(Figure: NEXRAD radar observations at 6:00 pm vs. ARPS 1-hour forecast)

Slide 64: Numerical Forecasts of the May 3 Tornadic Storms
(Figure: NEXRAD radar observations at 6:30 pm vs. ARPS 1 1/2-hour forecast)

Slide 65: Numerical Forecasts of the May 3 Tornadic Storms
(Figure: NEXRAD radar observations at 7:00 pm vs. ARPS 2-hour forecast)

Slide 66: Numerical Forecasts of the May 3 Tornadic Storms
(Figure, 7:00 pm: 12-hour Eta forecast, ARPS 2-hour forecast, and NEXRAD radar observations, with the Moore, OK tornadic storm marked)

Slide 67: Numerical Forecasts of the May 3 Tornadic Storms -- ARPS With and Without NEXRAD Base Data
(Figure, 7:00 pm: NEXRAD radar observations; ARPS 2-hour forecast WITH base data; ARPS 3-hour forecast WITHOUT base data)

Slide 68: How Good Are the Forecasts?
(Figure panels: forecast vs. verification -- storm location within 40 km for the 3-hour forecast; D/FW Airport marked)

Slide 69: How Good Are the Forecasts?

Slide 70: The Issues
• Traditional skill measures (e.g., threat score or "overlap" agreement) are not appropriate for intermittent storm-scale phenomena
• The specific character of storms (intensity, motion, initiation, decay) is important to operational forecasters
• QPF is critical!
• Problem: We forecast more things than we can observe/verify (how do you verify 500 mb height fields that contain thunderstorms?)
• Point verification is rather meaningless

Slide 71: Approaches
• Qualitative (by-hand) verification
  – location, speed, timing, duration, intensity, orientation, mode
  – "With 4 hours of lead time, the location of storms was within 30 km of observed 80% of the time"
  – "The model predicted storms 10% of the time when none were observed"
• Phase-shifting verification (see the sketch below)
  – maximize spatial correlation
  – generates a shift vector
• Will eventually have to consider cost-benefit and reliability
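
A minimal sketch of the phase-shifting idea (illustrative, not the actual CAPS verification code): slide the forecast over a search window, take the shift that maximizes spatial correlation as the phase-error vector, then score the shifted field.

```python
import numpy as np

def threat_score(fcst, obs, thresh):
    """Standard threat score (CSI) for events defined by exceeding thresh."""
    f, o = fcst >= thresh, obs >= thresh
    hits = np.sum(f & o)
    denom = np.sum(f | o)
    return hits / denom if denom else np.nan

def phase_shifted_threat_score(fcst, obs, thresh, max_shift=10):
    """Shift fcst (in grid points) to maximize correlation with obs, then score.

    np.roll wraps at the edges -- a simplification that is acceptable when
    the search window is small relative to the domain.
    """
    best_r, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(fcst, (dy, dx), axis=(0, 1))
            r = np.corrcoef(shifted.ravel(), obs.ravel())[0, 1]
            if r > best_r:
                best_r, best_shift = r, (dy, dx)
    shifted = np.roll(fcst, best_shift, axis=(0, 1))
    return threat_score(shifted, obs, thresh), best_shift
```

The returned shift vector, multiplied by the grid spacing, is the kind of average phase-shift error (km) plotted on the Zhang (1999) slide below.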

Slide 72: Quantitative Forecast Evaluation for the May 3 Forecasts (3 km)
• Hourly analysis of echoes by county in Oklahoma (N=77)
• Statistics (see the sketch below)
  – Hit Rate: the fraction of correct forecasts (best=1, worst=0)
  – Critical Success Index (Threat Score): the hit rate after removing the correct forecasts of no echoes (best=1, worst=0)
  – False Alarm Ratio: the fraction of echo forecasts that did not verify (best=0, worst=1)
  – Probability of Detection: the fraction of observed echoes that were correctly forecast (best=1, worst=0)
  – Bias: a measure of the tendency to overforecast or underforecast (bias=1 is optimal)
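
All of these scores derive from a 2x2 contingency table of forecast vs. observed echoes: hits a, false alarms b, misses c, correct nulls d. A minimal sketch of the definitions (my notation, not the CAPS evaluation code; the example counts are hypothetical):

```python
def contingency_scores(a, b, c, d):
    """Verification scores from a 2x2 contingency table.

    a = hits, b = false alarms, c = misses, d = correct nulls.
    """
    n = a + b + c + d
    return {
        "HR":   (a + d) / n,        # hit rate: all correct forecasts, echo or not
        "CSI":  a / (a + b + c),    # threat score: correct "no echo" cases removed
        "FAR":  b / (a + b),        # fraction of echo forecasts that failed
        "POD":  a / (a + c),        # fraction of observed echoes that were forecast
        "BIAS": (a + b) / (a + c),  # >1 overforecast, <1 underforecast
    }

# Hypothetical county counts for one hour (77 Oklahoma counties in total):
print(contingency_scores(a=40, b=12, c=10, d=15))
```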

Slide 73: Averages
HR = 0.911, CSI = 0.621, FAR = 0.233, POD = 0.798, BIAS = 1.110

Slide 74: Averages
HR = 0.940, CSI = 0.511, FAR = 0.258, POD = 0.633, BIAS = 0.939

Slide 75: Averages
HR = 0.919, CSI = 0.398, FAR = 0.324, POD = 0.489, BIAS = 0.771

Slide 76: The Importance of Phase Errors
(Figure: truth vs. forecast at t = 1 hour and t = 2 hours; Zhang 1999)

Slide 77: Standard Threat Score
(Figure; Zhang 1999)

Slide 78: Phase-Shifted Threat Score
(Figure: phase-shifted threat score and average phase-shift error in km; Zhang 1999)

Slide 79: Lessons Learned
• Getting the larger-scale features correct is the easy part -- getting the reflectivity correct is tough!
  – But does it matter?
  – These models are not reflectivity generators!
• Solution sensitivity (surface characteristics, soil moisture)
• Initial conditions are the critical aspect -- much work is needed in data assimilation and parameter retrieval
• Model physics seems adequate (QPF needs work, though)
• How good is good enough?
• Fine resolution gives more detail but also greater uncertainty and sensitivity (e.g., caps, outflow boundaries)
• Forecasters are easily overwhelmed by zillions of new products
• More experience is needed with ensemble forecasting

Slide 80: (Figure only)

Slide 81: (Figure only)

Slide 82: Ensemble Forecasting
• Advantages
  – The ensemble mean is generally superior
  – Ensembles provide
    · a measure of expected skill or confidence
    · a quantitative basis for probabilistic forecasting (see the sketch below)
    · a rational framework for forecast verification
    · information for targeted observations
• Limitations/Challenges
  – Not clear how to optimally specify the initial conditions (singular vectors, breeding, perturbed observations)
  – Requires more computer resources
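
A minimal sketch of how ensemble output becomes probabilistic guidance; the member fields here are random stand-ins, not real forecasts:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for 5 members' 3-hour precipitation on a small grid (inches).
members = rng.gamma(shape=0.5, scale=0.2, size=(5, 40, 40))

ens_mean = members.mean(axis=0)        # the mean generally beats any single member
pop = (members >= 0.1).mean(axis=0)    # POP of > 0.1 inch, as on the SAMEX slide below
spread = members.std(axis=0)           # spread as a crude expected-skill/confidence measure

print(pop.max(), spread.mean())
```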

Slide 83: Storm and Mesoscale Ensemble Experiment (SAMEX)
• A collaborative effort among CAPS, NCAR, AFWA, NCEP, and NSSL
• Performed during May 1998
• Goal: Examine the value of coarse-resolution, multi-model ensemble forecasts versus single high-resolution deterministic forecasts
• Expose operational forecasters in real time to both types of output

Slide 84: SAMEX Domains

Slide 85: (Figure only)

Slide 86: (Figure only)

Slide 87: (Figure: 3-hour observed precipitation vs. 25-member ensemble POP > 0.1 inch/hour -- "Oops!!")

Slide 88: Explicit 9 km Prediction
(Figure: 3-hour accumulated precipitation and 9 km, 15-hour ARPS forecast reflectivity)

Slide 89: 500 mb Errors

Slide 90: (Figure only)

Slide 91: Summary
• Storm-scale NWP is a significant scientific and technological challenge
• Predictability appears plausible at storm scales
• More work is needed in
  – data assimilation, especially from satellite, GPS, and the WSR-88D
  – physics parameterizations (especially cloud microphysics, radiation, and land-atmosphere exchanges)
  – fundamental predictability and sensitivity
• Transition to operations will be a major challenge
  – centralized versus distributed?
  – verification techniques
  – creation of useful products
  – forecaster interpretation and utilization
• NWS Forecast Office involvement in R&D will be critical

Slide 92: Some Key Scientific Issues
• Predictability of storm-scale flows and application of ensemble strategies and forecast verification techniques at 1-3 km resolution
• Data impact/sensitivity, especially land-atmosphere interactions
• Advanced data assimilation techniques (3DVAR, 4DVAR): most everything boils down to the initial conditions!
• Feedback of cloud-scale NWP to global and regional climate
• Use of cloud-scale forecasts in hydrologic models
• Application of new remote sensing technologies (e.g., GPS, phased-array radars, polarization-diversity radars, MDCRS)
• Linkages between high-impact local weather and local ecosystems, biodiversity, and health
• Intelligent distributed computing and networking: learning how to create and deliver the information
• Economic and societal impacts and mitigation: learning how to use the information

