AMS Annual Meeting 2007 – San Antonio, 11th IOAS-AOLS, 18 January 2007: IMPACT OF TAMDAR ON THE RUC MODEL: A LOOK INTO SOME OF THE STATISTICS WITH CASE STUDIES

Presentation transcript:

AMS Annual Meeting 2007 – San Antonio, 11th IOAS-AOLS, 18 January 2007
IMPACT OF TAMDAR ON THE RUC MODEL: A LOOK INTO SOME OF THE STATISTICS WITH CASE STUDIES
Ed Szoke,* Stan Benjamin, Randy Collander,* Brian Jamison,* Bill Moninger, Tom Schlatter, and Tracy Smith*
NOAA/ESRL Global Systems Division, Boulder, CO, USA
*Joint collaboration with the Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO

Overview
The main issue: objective evaluation (statistics) of relative humidity (RH) has occasionally shown poorer performance for RUC runs with TAMDAR.
Statistics are calculated by comparing RUC forecasts with and without TAMDAR to RAOBs at the standard pressure levels (850, 700, 500 mb).
Is this really worse performance with TAMDAR, or are there other reasons for the poorer scores?
Procedure: find days that stand out with poorer scores; examine individual RAOBs alongside forecast soundings to see where the errors occur; concentrate on the Great Lakes subset (13 RAOBs).
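To make the scoring concrete, here is a minimal sketch of the single-level RMS statistic described above, with illustrative numbers; the function name and data layout are assumptions for illustration, not the actual ESRL/GSD verification code:

```python
import numpy as np

def rms_rh_error(forecast_rh, raob_rh):
    """RMS of forecast-minus-RAOB RH (%) across stations at one pressure level."""
    diff = np.asarray(forecast_rh, dtype=float) - np.asarray(raob_rh, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative 700 mb RH values (%) at a few of the 13 Great Lakes sites
raob = [74, 62, 88]   # observed
dev  = [94, 60, 83]   # control run (no TAMDAR)
dev2 = [34, 65, 22]   # parallel run with TAMDAR
print(rms_rh_error(dev, raob), rms_rh_error(dev2, raob))
```

A single statistic of this kind per level and valid time, averaged over the verification days, is what the dev/dev2 time-series plots in the next slides summarize.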

Verification areas: for this study we used the inner (blue) box containing 13 RAOBs.

3-h RMS error statistics for RH, June–October 2006, at 700 mb for the Great Lakes area (13 RAOBs): RUC with TAMDAR ("dev2", blue line) versus RUC run without TAMDAR ("dev", red line). The bottom plot shows the difference, positive if dev2 is a better forecast than dev. Starred days highlight poorer scores for dev2. Legend: dev = control; dev2 = add TAMDAR.

RMS error statistics for RH for 6-h RUC forecasts valid at 0000 UTC at 700 mb for the Great Lakes area, June–October 2006. Starred days highlight poorer scores for dev2. Legend: dev = control; dev2 = add TAMDAR.

Case 1: 23 June 06, 0000 UTC, 3-h forecasts. The RMS score for dev2 is 7% worse than for dev for 3-h forecasts valid at 0000 UTC 23 June. RAOB comparison showed that 2 sites account for most of this error. The Peoria, Illinois (ILX) comparison is shown here. For all plots the RAOB is green, dev (RUC without TAMDAR) is blue, and dev2 (RUC with TAMDAR) is black. The shape (character) of the dev2 RH profile appears to be a better match to the RAOB, but it is displaced by ~50 mb and so scores poorly at 700 mb. RH at 700 mb: RAOB = 74%, dev = 94%, dev2 = 34% (a 60% error!).
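The ILX case is a concrete instance of a general problem: a forecast RH profile with the right shape but displaced ~50 mb in the vertical can verify terribly at a single mandatory level. A small illustration with assumed profile values (not the actual soundings):

```python
import numpy as np

def rh_at_level(p_mb, rh, target_mb):
    """Interpolate an RH profile to one pressure level in log-p coordinates."""
    logp = np.log(np.asarray(p_mb, dtype=float))
    order = np.argsort(logp)                 # np.interp needs increasing x
    return float(np.interp(np.log(target_mb),
                           logp[order], np.asarray(rh, dtype=float)[order]))

p       = [850, 800, 750, 700, 650, 600]
raob_rh = [40, 55, 65, 74, 35, 20]           # moist layer topping out near 700 mb
fcst_rh = [55, 65, 74, 35, 20, 15]           # same shape, displaced ~50 mb downward

err = rh_at_level(p, fcst_rh, 700) - rh_at_level(p, raob_rh, 700)
print(err)   # about -39: a large single-level error despite the matching shape
```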

Case 1: 23 June 06 (continued). Pittsburgh (PIT) was the other RAOB where the RH forecast is significantly worse for dev2 than for dev. In this case dev2 is 39% drier than the RAOB, while dev is only 12% too moist. While one could argue that the shape of the dev2 RH profile better matches the changes in the vertical shown by the RAOB, the excessive drying in dev2 is probably simply not as good a forecast in this case.

Case 2: 14 July 06, 3-h 700 mb forecasts. RMS scores for dev2 were again ~7% worse than for dev, for 3-h forecasts valid at 0000 UTC 14 July. RAOB comparison showed that 4 sites account for most of this error. The Buffalo, NY (BUF) comparison is shown here. In this case dev2 follows the RAOB RH profile nicely until there is a moist shift exactly at 700 mb, yielding what appears to be an unrepresentative error at 700 mb for dev2, while dev happens to get a perfect match. RH differences at 700 mb: dev2 ~20%; dev almost no error.

Case 2: 14 July 06 (continued). The Aberdeen, South Dakota (ABR) comparison is shown here. The dev2 RH forecast more closely matches the RAOB up to ~770 mb; then both forecasts dry out while the RAOB does not. Although both forecasts dry at about the same rate in the vertical, the dev forecast happens to cross the RAOB at 700 mb, but only because it is erroneously too moist below 750 mb! So the better score at 700 mb is not representative (dev2 is 23% drier than the RAOB there).

Case 2a: 14 July 06, 6-h 700 mb forecasts. RMS scores for dev2 were ~5% worse than for dev for 6-h forecasts valid at 0000 UTC 14 July, with smaller errors spread over about half the sites. The Buffalo, NY (BUF) 6-h comparison is shown because it illustrates the error that occurs when the RAOB has a sharp but vertically shallow moist layer right at 700 mb. Nothing in the other observations indicates whether this layer is real. Without this layer, the dev2 forecast follows the RAOB moisture more closely than dev does.

Case 3: 12 October 06, 6-h forecasts valid 0000 UTC. The Green Bay, Wisconsin (GRB) comparison is shown here. The RMS error at 700 mb for dev2 on 12 Oct was 7% worse than for dev, and almost all of this error comes from the GRB comparison. RH and differences at 700 mb: RAOB = 88%; dev = 83% (-5% difference); dev2 = 22% (-66% difference). This difference at 700 mb is the largest found during the period. It occurs as the dev2 forecast dries out a deep portion of the troposphere in northwest flow. Is the forecast as bad as it looks?

Case 3: 12 October, 0000 UTC 700 mb plot. The forecast from dev2 may not be as bad as it appeared. There is significant drying to the west and northwest of GRB behind the deep 700 mb upper-level low. (Dewpoint is the number below the temperature on the station plots.) So the main issue may be that the dev2 forecast is just off by a small amount in timing.

Case 3: 12 October, 0000 UTC RAOB and dev2 comparison illustrating the drying. Another way to show this drying is an overlay of the GRB RAOB, 2 upstream RAOBs (MPX and INL), and the dev2 6-h forecast. MPX, more to the west of GRB, is drier above 700 mb. INL, more to the northwest, shows the drier layer reaching all the way down past 700 mb. Note that the dev2 forecast compares rather well to the INL RAOB, verifying almost exactly at 700 mb.

Case 4: 20 October, 0000 UTC 3-h forecasts at 700 mb. The RMS score for dev2 is 4.5% worse than for dev for 3-h forecasts valid at 0000 UTC 20 October. RAOB comparison showed that 2 sites (INL and MPX) account for most of this error. The International Falls (INL) comparison is shown. The deep layer of drying in the RAOB is better captured by the dev2 RH forecast, while dev largely misses this dry layer but happens to verify better at 700 mb. RH and differences at 700 mb: RAOB = 12%; dev = 42% (+30% difference); dev2 = 70% (+58% difference).

Case 4: 20 October, 0000 UTC 500 mb plot with IR. Similar to the previous case, drying is occurring behind a trough axis passing INL, so one could argue that the character of the dev2 forecast is more representative of what is really happening than the dev forecast, even though it scores worse at 700 mb.

Case 5: 28 June, 0000 UTC 3-h forecasts at 700 mb: better verification for dev2 (RUC with TAMDAR). This time the RMS score for dev2 is 10% better than for dev for 3-h forecasts valid at 0000 UTC 28 June. RAOB comparisons found a lot of variability, but some big errors for dev. The Wilmington, Ohio (ILN) comparison is shown. Both forecasts begin the drying lower than observed, but because it does not start for dev2 until just above 700 mb, dev2 scores much better than dev. RH and differences at 700 mb: RAOB = 78%; dev = 13% (-55% difference); dev2 = 61% (-17% difference).

Case 6: 18 Oct, 0000 UTC 3-h forecasts at 700 mb: better verification for dev2 (RUC with TAMDAR). In this case dev2 is 4% better than dev for 3-h forecasts. The Minneapolis, Minnesota (MSP) comparison illustrates the effect of a very sharp dry layer in the RAOB (which may or may not be real). The shape of both RUC forecasts is similar, but the dev2 moisture profile is shifted ~30 mb lower and happens to closely match the RAOB at 700 mb, yielding an RH value 31% better than dev at 700 mb.

What this all means...
RH often varies strongly in the vertical, as RAOB profiles show.
Calculating error statistics only at the mandatory levels makes them more vulnerable to unrepresentativeness.
It can take only 1 or 2 bad RAOB comparisons (out of 13 in the Great Lakes area) to yield a large RMS error.
With only mandatory levels used, slight vertical shifts of the RH profile can be severely penalized.
The RAOBs often have very sharp RH variations in the vertical that may or may not be real and that can produce huge errors if they fall at a mandatory level. Moreover, it is unrealistic to expect the RUC model to resolve some of these fluctuations even if they are real.
Considering the above, we changed the verification to a layer method, with calculations made at 10-mb intervals; a sketch of the idea follows below.
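A minimal sketch of the layer method, assuming log-pressure interpolation of both forecast and RAOB to 10-mb levels from 900 to 650 mb; the function names and interface are assumptions for illustration, not the actual verification code:

```python
import numpy as np

def layer_rms_rh_error(p_fcst, rh_fcst, p_raob, rh_raob,
                       bottom_mb=900.0, top_mb=650.0, step_mb=10.0):
    """RMS RH difference over a layer, sampled every step_mb in pressure."""
    levels = np.arange(bottom_mb, top_mb - 1e-6, -step_mb)   # 900, 890, ..., 650
    target = np.log(levels)

    def interp(p, rh):
        logp = np.log(np.asarray(p, dtype=float))
        order = np.argsort(logp)             # np.interp needs increasing x
        return np.interp(target, logp[order], np.asarray(rh, dtype=float)[order])

    diff = interp(p_fcst, rh_fcst) - interp(p_raob, rh_raob)
    return float(np.sqrt(np.mean(diff ** 2)))

# Usage with illustrative profiles: a vertically shifted forecast is penalized
# far less by the layer score than by a single check at 700 mb.
p  = [900, 850, 800, 750, 700, 650]
ob = [60, 40, 55, 65, 74, 35]
fc = [40, 55, 65, 74, 35, 20]
print(layer_rms_rh_error(p, fc, p, ob))
```

Averaging over many levels in the layer dilutes the effect of any one sharp feature or small vertical displacement, which is exactly the behavior the cases above call for.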

Comparison of the new scoring method with the old for October 2006. Days of interest are highlighted. For 12 Oct and 20 Oct, dev still scores better, but the error is much reduced (on 12 Oct, ~2% RMS for the layer vs. 5% at 700 mb only; on 20 Oct, also ~2% vs. ~5%). For 18 Oct, when dev2 scored better at 700 mb, the difference is likewise reduced by more than half. These results appear to be consistent with the sounding comparisons shown earlier. New method: 900 to 650 mb averaging. Old method: 700 mb single level.

Summary
We began this as a forensic pathology study to try to better understand why the RMS RH scores were substantially worse on some days for the RUC runs with TAMDAR.
We used the Great Lakes area, with 13 RAOB/forecast comparisons.
We focused on 3-h and 6-h forecasts valid at 0000 UTC, since TAMDAR data are abundant for these initialization times.
We discovered issues with the mandatory-level-only scoring method; changing to a layer-average method has produced more representative results.
We found no characteristic problems with the TAMDAR data or with the RUC, and no design flaw in the RUC assimilation or model. Additional TAMDAR at upstream airports would decrease aliasing.