1
Precipitation Verification of CAPS Real-time Forecasts During IHOP 2002
Ming Xue (1,2) and Jinzhong Min (1)
Other contributors: Keith Brewster (1), Dan Weber (1), Kevin Thomas (1)
mxue@ou.edu, 3/26/2003
(1) Center for Analysis and Prediction of Storms (CAPS), (2) School of Meteorology, University of Oklahoma
2
IHOP-Related Research at CAPS
CAPS is supported through an NSF grant to:
- Contribute to the IHOP field experiment
- Perform research using the data collected
Emphases of our work include optimal assimilation, and quantitative assessment of the impact, of water vapor and other high-resolution observations on storm-scale QPF.
3
Goals of CAPS Real-time Forecasts During IHOP
- To provide additional high-resolution NWP support for the real-time operations of IHOP
- To obtain an initial assessment of numerical model performance for cases during this period
- To identify data sets and cases for extensive retrospective studies
4
CAPS Real Time Forecast Domains (map; grid dimensions 273×195, 183×163, and 213×131)
5
CAPS Real Time Forecast Timeline
6
ARPS Model Configuration
- Nonhydrostatic dynamics with a vertically stretched, terrain-following grid
- Domain 20 km deep with 53 levels
- Three-category ice-phase microphysics (Lin-Tao)
- New Kain-Fritsch cumulus parameterization on the 27 and 9 km grids
- NASA long- and short-wave radiative transfer scheme
- 1.5-order TKE-based subgrid-scale turbulence and PBL parameterization
- Two-layer soil and vegetation model
7
Data and Initial Conditions
- IC from ADAS analysis with cloud/diabatic initialization
- Eta BC for the CONUS grid and background for the IC analysis
- Rawinsonde and wind-profiler data used on the CONUS and 9 km grids
- MDCRS (aircraft), METAR (surface), and Oklahoma Mesonet data on all grids
- Satellite: IR cloud-top temperature used in the cloud analysis
- CRAFT Level-II and NIDS WSR-88D data: reflectivity used in the cloud analysis on the 9 and 3 km grids, and radial velocity used to adjust the wind fields (see the sketch below)
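The last bullet above notes that radial velocity was used to adjust the wind fields, without describing the scheme. Any such use of Doppler data rests on the standard geometric relation between the Cartesian wind and its component along the radar beam, sketched below in Python; the function name and arguments are illustrative, and hydrometeor fall speed is neglected.

```python
import numpy as np

def radial_velocity(u, v, w, azimuth_deg, elevation_deg):
    """Project a Cartesian wind (u, v, w) onto the radar beam direction.

    Standard forward relation between model winds and the radial velocity
    seen by a Doppler radar (hydrometeor fall speed neglected).
    """
    az = np.deg2rad(azimuth_deg)    # azimuth, clockwise from north
    el = np.deg2rad(elevation_deg)  # elevation above the horizon
    return (u * np.sin(az) * np.cos(el)
            + v * np.cos(az) * np.cos(el)
            + w * np.sin(el))

# A westerly 10 m/s wind viewed due east at low elevation appears as
# roughly +10 m/s (motion away from the radar).
print(radial_velocity(10.0, 0.0, 0.0, azimuth_deg=90.0, elevation_deg=0.5))
```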
8
Cloud Analysis in the Initial Conditions
- Level-II data from 12 radars (via CRAFT) and Level-III (NIDS) data from 12 others in the central Great Plains (CGP) were used
- The cloud analysis also used visible and infrared channel data from the GOES-8 satellite and surface observations of clouds
- The cloud analysis procedure analyzes qv, T, and the microphysical variables
9
Computational Issues
- Data ingest, preprocessing, analysis, and boundary-condition preparation, as well as post-processing, were done on local workstations
- The three morning forecasts were made on a PSC HP/Compaq Alpha-based cluster using 240 processors
- The 00 UTC SPstorm forecast was run on NCSA's Intel Itanium-based Linux cluster, also using 240 processors
- A Perl-based system, ARPScntl, was used to control the entire workflow
- Both the NCSA and PSC systems were very new at the time, and considerable system-wide tuning was still necessary to achieve good throughput; a factor-of-two overall speedup was achieved during the period
- Data I/O was the biggest bottleneck; local data processing was another
10
Dissemination of Forecast Products
- Graphical products, including field and sounding animations, were generated and posted on the web as the hourly model outputs became available
- A workstation dedicated to displaying forecast products was placed at the IHOP operations center
- A CAPS scientist was on duty daily to evaluate the forecasts and assist in the interpretation of the products
- A web-based evaluation form was used to build an archive of forecast evaluations and other related information
- The forecast products are available at http://ihop.caps.ou.edu and will be kept online to facilitate retrospective studies
11
CAPS IHOP Forecast Page: http://ihop.caps.ou.edu
12
Standard QPF Verification
- Precipitation forecast scores (ETS, bias; see the sketch below) calculated against hourly rain-gauge station data (grid to point) from NCDC (~3000 stations in the CONUS)
- Scores calculated for 3, 6, 12, and 24 h forecast lengths
- Scores calculated for the full grids and for common domains
- Scores also calculated against NCEP Stage IV data (grid to grid)
- Mean scores over the entire experiment period (~40 days) will be presented
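The ETS and bias shown on the following slides are standard contingency-table statistics. A minimal Python sketch of how both are computed at a single threshold follows; the function and variable names are illustrative assumptions, not the actual CAPS verification code.

```python
import numpy as np

def precip_scores(forecast, observed, threshold):
    """Equitable Threat Score (ETS) and frequency bias at one threshold.

    forecast, observed: accumulated-precipitation arrays on matched
    verification points (gauge sites or grid boxes). Degenerate cases
    (no events, or events everywhere) are not handled in this sketch.
    """
    f = forecast >= threshold
    o = observed >= threshold

    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)

    # Hits expected by chance, given the forecast and observed event counts.
    hits_random = (hits + false_alarms) * (hits + misses) / f.size

    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias
```

An ETS of 1 is a perfect forecast and values near 0 indicate no skill beyond chance; a bias above 1 means the model exceeds the threshold more often than the observations do.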
13
Questions We Can Ask
- How skillful is an NWP model at short-range precipitation forecasting?
- Does high resolution really help improve precipitation scores, and if so, by how much?
- How much did the diabatic initialization help?
- Do model-predicted precipitation systems/patterns have realistic propagation, and what are the modes of that propagation?
- Is parameterized precipitation well behaved?
14
ETS on CONUS grid
15
ETS on SPmeso (9km) grid
16
ETS on SPstorm (3km) grid
17
ETS on all three grids (27 km, 9 km, and 3 km)
18
Notes on ETS from the Three Grids
- On the CONUS grid, the 3-hourly ETS is much lower than on the two higher-resolution grids
- The 12- and 24-hour precipitation scores are higher on the CONUS grid (keep in mind the difference in domain coverage)
- Skill scores decrease as the verification interval decreases, but less so on the 9 km and 3 km grids
- Higher thresholds have lower skill
- The second conclusion changes when the comparison is made on a common grid
19
CONUS and 9km ETS in the COMMON 9km domain
20
9km (SPmeso) and 3km (SPstorm) ETS in the common 3km domain
21
Comments on ETS in Common Domains
- ETS scores are consistently better on the higher-resolution grids when verified over the same domain
- The differences are larger for shorter verification intervals
- Improvements at low thresholds are more significant
- The improvement from 27 to 9 km is more significant than that from 9 to 3 km (0.28/0.17 vs. 0.27/0.22)
- The forecasts have less skill in the 3 km domain (not grid), presumably due to more active convection there
- Keep in mind that the high-resolution forecasts are to some extent dependent on the coarser-grid BCs
The common-grid upscaling step is sketched below.
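Verifying different grids over a common domain generally also means bringing them to a common grid. One simple approach, sketched below under the assumption that the fine grid nests evenly inside the coarse one (e.g., a factor of 3 for 3 km inside 9 km), is block-mean upscaling; the function name is illustrative and the remapping actually used may differ.

```python
import numpy as np

def upscale_block_mean(fine, factor):
    """Average a fine-grid field onto a coarser grid via block means over
    factor x factor cells (e.g., factor=3 for 3 km to 9 km).

    Assumes both fine-grid dimensions are divisible by `factor`; trim the
    field beforehand if they are not.
    """
    ny, nx = fine.shape
    return fine.reshape(ny // factor, factor,
                        nx // factor, factor).mean(axis=(1, 3))
```

Both forecasts can then be scored against the same analysis on the common grid, so that resolution alone does not bias the comparison.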
22
Biases of CONUS and SPmeso Grids in COMMON SPmeso Domain
23
Biases of SPmeso and SPstorm Grids in COMMON SPstorm Domain
24
Comments on Bias Scores
- High biases are seen for high thresholds at all resolutions
- High biases are more severe at higher resolutions
- Low biases are observed only at low thresholds, on the CONUS grid
Possible causes:
- Cumulus parameterization? (The KF scheme is known to produce high biases at high thresholds, e.g., in the Eta-KF runs at NSSL)
- Too much initial moisture introduced by the cloud analysis?
- A microphysics problem?
- Too strong a dynamical feedback?
- Still insufficient resolution to properly resolve updrafts?
- Other causes?
25
CONUS ETS verified on the NCEP 236 grid (dx ~40 km), May 15 to June 25, 2002: 3-h accumulated precipitation ETS for different 3-hour forecast periods (figure; annotated value: 0.21)
26
Preliminary Comparison with the WRF, RUC, MM5, and Eta Runs During IHOP
- 3-h accumulated precipitation ETS and bias
- WRF, RUC, MM5, and Eta scores generated at the FSL RTVS page, http://www-ad.fsl.noaa.gov/fvb/rtvs/ihop/station/ (earlier presentation by Andy Loughe)
- Those scores were calculated by interpolating the forecasts to hourly gauge stations (grid to point; see the sketch below) and are for the first forecast period only (not the mean over the entire forecast range)
- The ARPS scores shown are against Stage IV gridded data
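The grid-to-point step used by RTVS amounts to interpolating each forecast field to the gauge locations before forming the contingency table. A minimal bilinear-interpolation sketch follows; the function and its arguments are illustrative assumptions, not the RTVS code.

```python
import numpy as np

def bilinear_to_stations(field, x0, y0, dx, dy, xs, ys):
    """Bilinearly interpolate a 2-D gridded field to station locations.

    field: 2-D array indexed [j, i]; (x0, y0) is the grid origin and
    (dx, dy) the spacing, in the same map coordinates as the station
    positions xs, ys. Stations outside the grid should be screened
    out beforehand.
    """
    fi = (np.asarray(xs) - x0) / dx  # fractional i (x) index
    fj = (np.asarray(ys) - y0) / dy  # fractional j (y) index
    i0 = np.clip(fi.astype(int), 0, field.shape[1] - 2)
    j0 = np.clip(fj.astype(int), 0, field.shape[0] - 2)
    wx, wy = fi - i0, fj - j0        # interpolation weights

    return ((1 - wy) * ((1 - wx) * field[j0, i0] + wx * field[j0, i0 + 1])
            + wy * ((1 - wx) * field[j0 + 1, i0] + wx * field[j0 + 1, i0 + 1]))
```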
27
Comparison with WRF and RUC for the same period: 3-h accumulated precipitation ETS and bias versus thresholds (0.01 to 3.0), WRF (22 km) and RUC (20 km) versus ARPS (27 km). WRF/RUC scores from http://www-ad.fsl.noaa.gov/fvb/rtvs/ihop/station/ (figure; annotated values: 0.16, 2.7, 0.2, 1.5)
28
Verified on the SPmeso domain: 6-h accumulated precipitation ETS and bias versus thresholds (0.01 to 3.0), WRF (22 km) and RUC (20 km) versus ARPS (27 km) (figure; annotated value: 0.3)
29
12-h accumulated precipitation ETS and bias versus thresholds (0.01 to 3.0), WRF (22 km) and RUC (20 km) versus ARPS (27 km) (figure; annotated values: 0.35, 0.38)
30
SPmeso grid verification, comparison with WRF, Eta, MM5, and RUC for the same period: 3-h accumulated precipitation ETS and bias versus thresholds (0.01 to 3.0), WRF (10 km), Eta (12 km), MM5 (12 km), and RUC (10 km) versus ARPS (9 km) (figure; annotated value: 0.23)
31
6-h accumulated precipitation ETS and bias versus thresholds (0.01 to 3.0), WRF (10 km), Eta (12 km), MM5 (12 km), and RUC (10 km) versus ARPS (9 km) (figure)
32
12-h accumulated precipitation ETS and bias versus thresholds (0.01 to 3.0), WRF (10 km), Eta (12 km), MM5 (12 km), and RUC (10 km) versus ARPS (9 km) (figure; annotated value: 0.35)
33
Hovmoller Diagrams of Hourly y-Mean (Latitudinal-Mean) Precipitation
Questions, inspired by Carbone et al. (2002):
- How does the propagation of precipitating systems compare at different resolutions?
- Does parameterized precipitation propagate at the right speed?
- Is explicit precipitation on the high-resolution grid forecast better?
- Implications for predictability
The construction of these diagrams is sketched below.
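For concreteness: a Hovmoller diagram of this kind is simply the hourly precipitation averaged over the north-south (y) dimension and plotted as longitude versus time. A minimal Python sketch, with illustrative names and assuming a (time, y, x) precipitation array, is given below.

```python
import numpy as np
import matplotlib.pyplot as plt

def hovmoller(precip, lons, hours):
    """Longitude-time Hovmoller diagram of y-mean hourly precipitation.

    precip: array of shape (ntime, ny, nx) of hourly rainfall;
    lons:   1-D array of grid longitudes (length nx);
    hours:  1-D array of forecast hours (length ntime).
    """
    ymean = precip.mean(axis=1)  # average over the latitudinal dimension
    plt.pcolormesh(lons, hours, ymean, shading="auto")
    plt.xlabel("Longitude")
    plt.ylabel("Forecast hour")
    plt.colorbar(label="y-mean hourly precipitation (mm)")
    plt.show()
```

Coherent diagonal streaks in such a diagram indicate propagating precipitation systems, as in Carbone et al. (2002).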
34
CAPS Real Time Forecast Domains (map; grid dimensions 273×195, 183×163, and 213×131)
35
Hovmoller diagrams of hourly forecast rainfall for 15 May to 5 June 2002
36
Hovmoller diagrams of hourly forecast rainfall for 6-25 June 2002
37
Hovmoller diagram of hourly forecast rainfall for 16-18 May 2002
38
Hovmoller diagram of hourly forecast rainfall for 23-26 May 2002
39
June 15, 2002, CONUS grid: NCEP hourly precipitation and 27 km forecast hourly precipitation rate; 14-hour forecast valid at 02 UTC (figure)
40
June 15, 2002, CONUS grid: NCEP hourly precipitation and 27 km forecast hourly precipitation rate; 24-hour forecast (figure)
41
June 15, 2002, CONUS grid: NCEP hourly precipitation and 27 km forecast hourly precipitation rate; 14-hour forecast valid at 02 UTC (figure)
42
June 15, 2002, 9 km grid: NCEP hourly precipitation and 9 km forecast hourly precipitation rate; 14-hour forecast valid at 02 UTC (figure)
43
June 15, 2002, 9 km grid: NCEP hourly precipitation and 9 km forecast hourly precipitation rate; 24-hour forecast (figure)
44
June 15, 2002, 9 km grid: NCEP hourly precipitation and 9 km forecast hourly precipitation rate; 14-hour forecast valid at 02 UTC (figure)
45
June 15, 2002, 3 km grid: NCEP hourly precipitation and 3 km forecast hourly precipitation rate; 11-hour forecast valid at 02 UTC (figure)
46
June 15, 2002, 3 km grid: NCEP hourly precipitation analysis and 3 km forecast hourly precipitation rate; 11-hour forecast (figure)
47
June 15, 2002, 3 km grid: NCEP hourly precipitation and 3 km forecast hourly precipitation rate; 11-hour forecast valid at 02 UTC (figure)
48
June 15, 2002: NCEP hourly precipitation and ARPS 3 km forecast composite reflectivity; 11-hour forecast valid at 02 UTC (figure)
49
Hovmoller diagram of hourly forecast rainfall for 15-18 June 2002 (figure; Oklahoma marked)
50
Comments on Hovmoller Diagrams
- Propagation of precipitation systems is found on all grids, including the CONUS and SPmeso grids, which used cumulus parameterization
- Propagation is not necessarily faster on the higher-resolution grids (one way to quantify propagation speed is sketched below)
- The short forecast lengths of the 3 km grid (15 and 12 h) complicate the interpretation
- More detailed process analyses are needed to understand the modes of propagation
- Diagrams of observed precipitation will be created for comparison
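Statements about propagation speed can be made quantitative by tracking how the precipitation pattern shifts in x from hour to hour in the Hovmoller array. Below is a minimal lag-correlation sketch; the function name, the circular shift, and the search window are illustrative assumptions, not the diagnostic used in this work.

```python
import numpy as np

def zonal_phase_speed(ymean, dx_km, dt_hours=1.0, max_shift=40):
    """Mean zonal propagation speed of Hovmoller streaks via lag correlation.

    ymean: (ntime, nx) array of y-mean hourly rainfall; dx_km: grid
    spacing in km. Returns speed in m/s (positive = eastward).
    """
    shifts = []
    for t in range(ymean.shape[0] - 1):
        a, b = ymean[t], ymean[t + 1]
        if a.std() == 0 or b.std() == 0:
            continue  # skip rain-free hours
        # Correlate the next hour against shifted copies of this hour;
        # the best shift is the pattern displacement over one hour.
        corr = [np.corrcoef(np.roll(a, s), b)[0, 1]
                for s in range(-max_shift, max_shift + 1)]
        shifts.append(np.argmax(corr) - max_shift)
    return np.mean(shifts) * dx_km * 1000.0 / (dt_hours * 3600.0)
```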
51
June 12-13, 2002 Case
52
00-12 UTC, June 13, 2002: Hourly Precipitation
54
Future Plans
- Refine the QPF verification
- Perform detailed studies of selected CI and QPF cases, with emphasis on model simulations
- Rerun selected cases and the entire period while assimilating more data, starting with the relatively easy data sets (e.g., surface networks, dropsondes, radiometric profiles)
- Study the sensitivity of the forecasts to these data
- Study the QPF sensitivity to initial conditions (via forward as well as adjoint models)
- Develop new capabilities to assimilate indirect observations, e.g., GPS slant water delay (we want to work with the instrument people here)
- Verify model predictions against special IHOP data sets (e.g., AB profiles)
- Make the assimilated data sets available to the community