Relevant publications
mPING data
- mPING: Crowd-sourcing weather reports for research. Elmore et al., BAMS.
- Verifying forecast precipitation type with mPING. Elmore et al., WAF.

Precipitation type evaluation
- Evaluation of cold-season precipitation forecasts generated by the hourly updating HRRR model. Ikeda et al., WAF.
- Explicit precipitation-type diagnosis from a model using a mixed-phase bulk cloud-precipitation microphysics parameterization. Benjamin et al., WAF.

Uncertainty in precipitation type
- Sources of uncertainty in precipitation-type forecasts. Reeves et al., WAF.
- Classification of precipitation types during transitional winter weather using the RUC model and polarimetric radar retrievals. Schuur et al., JAMC (winter surface hydrometeor classification algorithm, WSHCA).
Model datasets

Operational HRRR (3 km), hybrid level data:
- Mixing ratios (at each level): cloud water, cloud ice, rain, snow, graupel
- Categorical (sfc): rain, snow, ice pellets, freezing rain

Operational NAMnest (4 km), isobaric level data (Grid 227, 5 km LC):
- 42 isobaric levels
- Mixing ratios (at each level): cloud water, cloud ice, rain, snow
- Hybrid level data did not include the necessary microphysical species
NAM Ptype algorithm (Elmore et al. 2015 WAF)
The NAM explicit scheme starts with explicit mixing ratios of frozen and liquid particles. If less than 50% of all particles are frozen, the method produces RA when the skin temperature is above 0°C and FZRA when it is below 0°C. If more than 50% of all particles are frozen, the method produces PL when the riming factor is high and SN when it is low. Operationally, the NAM (and GFS) predominantly uses implicit classifiers, which rely solely on predicted temperature and humidity profiles. Implicit classifiers tend to yield substantially worse forecasts of PL and FZRA than of SN and RA, prompting the NWS to use an ensemble technique: five different classifiers are used in the NAM (four in the GFS) and the dominant category is declared the ptype.
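A minimal sketch of this logic in Python. The variable names, the riming threshold, and the majority-vote helper are illustrative assumptions, not the operational NAM code:

```python
from collections import Counter

def explicit_ptype(frozen_fraction, skin_temp_c, riming_factor,
                   riming_threshold=0.5):
    """Sketch of the explicit scheme described above.

    frozen_fraction  -- fraction of all particles that are frozen (0 to 1)
    skin_temp_c      -- skin temperature in deg C
    riming_threshold -- hypothetical cutoff separating "high" from "low" riming
    """
    if frozen_fraction < 0.5:
        # Mostly liquid: rain over a warm surface, freezing rain over a cold one
        return "RA" if skin_temp_c > 0.0 else "FZRA"
    # Mostly frozen: heavily rimed particles fall as ice pellets, otherwise snow
    return "PL" if riming_factor >= riming_threshold else "SN"

def dominant_ptype(member_types):
    """Ensemble step: declare the category chosen by the most classifiers
    (five in the NAM, four in the GFS)."""
    return Counter(member_types).most_common(1)[0][0]
```

For example, dominant_ptype(["SN", "SN", "PL", "SN", "FZRA"]) returns "SN".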
HRRR Ptype algorithm (Benjamin et al. 2016 WAF)
Updated in operations in late 2015. Each column is considered separately, and all precipitation rates are taken at the ground. If a minimal amount of precipitation occurred during the last hour, three decision trees are considered: rain vs. freezing rain/drizzle, snow vs. rain, and ice pellets vs. rain or snow.
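A control-flow sketch of this per-column diagnosis, assuming a hypothetical precipitation threshold and reducing each decision tree to a single illustrative test; the operational trees use full column information:

```python
def hrrr_ptype(precip_mm, sfc_temp_c, max_column_temp_c, min_refreeze_temp_c,
               min_precip_mm=0.01):
    """Per-column sketch: skip dry columns, then walk three simplified trees.

    precip_mm           -- precipitation at the ground over the last hour
    max_column_temp_c   -- warmest temperature anywhere in the column
    min_refreeze_temp_c -- coldest temperature below the warm layer
    min_precip_mm       -- hypothetical "minimal amount" threshold (mm)
    """
    if precip_mm < min_precip_mm:
        return "NONE"  # no meaningful precipitation during the last hour
    # Snow vs. rain: an entirely subfreezing column keeps particles frozen
    if max_column_temp_c <= 0.0:
        return "SN"
    # Ice pellets vs. rain or snow: melting aloft followed by a deep
    # refreezing layer below produces ice pellets
    if min_refreeze_temp_c < -5.0:
        return "PL"
    # Rain vs. freezing rain/drizzle: liquid freezes on a subfreezing surface
    return "FZRA" if sfc_temp_c <= 0.0 else "RA"
```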
Verification datasets
METARs
- Pulling from Greg's MySQL database
- Weather types: freezing rain/freezing drizzle, snow grains, graupel, ice pellets, snow, rain, sleet, freezing fog, fog
- From Elmore et al.: 852 ASOS sites: 73 A (full augmentation), 55 B (limited augmentation), 296 C and 428 D (no augmentation)
- Only A and B sites report PL; 85% of ASOS stations do not report PL
- ASOS A/B stations will be used to identify reliable observations of non-occurrence of precipitation in order to thoroughly verify precipitation type and assess any overforecasting

mPING
- Pulling daily (the research license has a 48 h delay in data availability)
- Data types: drizzle, freezing drizzle, rain, freezing rain, ice pellets/sleet, snow, mixed rain and snow, mixed rain and ice pellets, mixed ice pellets and snow, hail, none

PIREPs
Verification methods for mixing ratios
Leveraging code from Greg to pull model mixing ratios at a configurable horizontal distance around the observation: 4 points (2x2 box), 16 points (4x4 box), or 36 points (6x6 box). Count the number of those points diagnosed as snow, freezing rain, and so on, so that one point in a 6x6 box contributes 1/36 toward a hit. Verification is conditioned on having an observation of precipitation type.
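A sketch of the neighborhood count, assuming the model's diagnosed type is already on a 2-D grid of category strings; the array layout and function name are assumptions, not Greg's code:

```python
import numpy as np

def neighborhood_fractions(ptype_grid, i, j, box=6):
    """Fraction of points in a box x box window around grid point (i, j)
    diagnosed as each type, so one point in a 6x6 window counts 1/36."""
    half = box // 2
    window = ptype_grid[max(i - half, 0):i + half,
                        max(j - half, 0):j + half]
    flat = window.ravel()
    cats, counts = np.unique(flat, return_counts=True)
    return {cat: n / flat.size for cat, n in zip(cats, counts)}
```

Called at the grid point nearest each observation, and only where a precipitation-type observation exists, a 2x2 box with three snow points and one rain point would return {"RA": 0.25, "SN": 0.75}.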
Verification methods for Ptype
How do we use observations?
- Obs differ in space, frequency, and reliability.
- Summarize mPING reports into groups/bins (time and space) and take the mode of the observations to eliminate spurious or changing observations over small space/time scales (e.g., a rain->snow transition); see the sketch after this list.
- Use case studies for certain types of observations because of their infrequency.
- Problems with "null": the majority of obs report a precip type, but the majority of the area is precipitation-free at any given time. Null obs are needed to measure and avoid overforecasting.

How do we match them? Decision points are user defined:
- If the forecast is mixed and the ob is one of those types, is that a hit?
- If it is raining and the model says freezing rain, is that a hit?
- Categories are not mutually exclusive.
- Precip/no-precip is a natural divider and a good place to start; then look at type to see how close the forecasts are to each category.
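A sketch of the binning-and-mode step, assuming reports arrive as (hours, lat, lon, ptype) tuples; the bin sizes and the strict-majority rule are assumptions standing in for the user-defined decision points:

```python
from collections import Counter

def bin_mode(reports, dt_hours=1.0, dx_deg=0.5):
    """Group mPING reports into time/space bins and keep the modal type.

    A bin's mode is kept only if it is a strict majority, which discards
    bins dominated by spurious or transitional reports (e.g., a
    rain->snow changeover inside a single bin).
    """
    bins = {}
    for t, lat, lon, ptype in reports:
        key = (int(t // dt_hours), int(lat // dx_deg), int(lon // dx_deg))
        bins.setdefault(key, []).append(ptype)
    summary = {}
    for key, types in bins.items():
        ptype, count = Counter(types).most_common(1)[0]
        summary[key] = ptype if count > len(types) / 2 else None
    return summary
```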