SPoRT MET Scripts Tutorial
Prepared by Bradley Zavodsky, NASA/MSFC Short-term Prediction Research and Transition (SPoRT) Center

Purpose of SPoRT MET Scripts
- SPoRT directly interacts with NWS forecasters, and it does not only transition data and products
- SPoRT values transitioning capabilities that enable NWS partners to perform evaluations that support forecaster-led conference presentations and journal articles
- Effectively evaluating model performance requires a combination of quantitative metrics and case studies

Purpose of SPoRT MET Scripts
- Model Evaluation Tools (MET) is a software package developed by NCAR that contains a number of executable programs that:
  - Reformat observations
  - Match the model grid to observations
  - Perform statistical evaluation
- MET has a steep learning curve due to missing pieces; the SPoRT MET scripts fill the gaps with:
  - Creation of ASCII files from MADIS
  - Dynamic scripts to easily run the software
  - Open-source plotting scripts to visualize statistics

Tarball Contents
- Once unzipped and untarred, a number of directories, Perl scripts (*.pl), Perl modules (*.pm), and a namelist.met file should appear
- The namelist is modified by users to configure which variables and statistics are generated for the current run
- The scripts run the MET workflow
- The modules contain subroutines and code used by multiple scripts
- The directories contain documentation and configuration templates used by the scripts, or are placeholders where the scripts write data
- Users should only need to modify the namelist.met file for configuration
- What follows is an overview of the various components that make up the SPoRT MET Scripts (a sketch of the layout is below)
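As an orientation aid, here is a hypothetical top-level layout after extraction; the script, module, and directory names are taken from later slides, so the actual tarball may contain more than is shown:

  runSPoRTMETScripts.pl      driver script
  madisFormatAscii2Nc.pl
  runPointStat.pl
  runPointStatAnalysis.pl
  runGridStat.pl
  runGridStatAnalysis.pl
  readNamelist.pm
  namelist.met
  pointData/                 NetCDF point observations
  pointStatOutput/           Point Stat *.stat files
  pointStatAnalysisOutput/   Stat Analysis text output
  gridData/                  gridded verification data (e.g. Stage IV)
  gridStatOutput/            Grid Stat *.stat files
  gridStatAnalysisOutput/    Stat Analysis text output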

Scripts: runSPoRTMETScripts.pl
- The orange dashed circle in the workflow diagram; wraps and drives the entire set of scripts and modules
- Contains a series of true/false settings read from namelist.met that determine which parts of MET will be run
- Uses readNamelist.pm to extract the information needed to run the scripts

Scripts: madisFormatAscii2Nc.pl
- The orange box, ASCII Point Obs under Input, and ASCII2NC under Reformat in the workflow diagram
- Automatically accesses the MADIS FTP server to obtain files for each case study date and hour (users need a MADIS account)
- Creates the temporary MET-formatted ASCII files from MADIS needed to run ASCII2NC
- Runs ASCII2NC to create the NetCDF files used by Point Stat (output in the pointData directory):
  - sfcobs_YYYYMMDD_HHHH.nc
  - upperobs_YYYYMMDD_HHHH.nc

Scripts: runPointStat.pl
- The green Point Stat circle under Statistics in the workflow diagram
- Runs Point Stat to match point observations (e.g. METARs and RAOBs) to the nearest grid point in the model output field, generating text files to be read into Stat Analysis (output in the pointStatOutput directory):
  - point_stat_EXP_LEV_FF0000L_YYYYMMDD_VV0000V_*.stat
    - EXP = 5-character experiment name
    - LEV = surface (sfc) or pressure level (PPPmb)
    - FF = forecast hour
    - YYYYMMDD = valid year, month, and day
    - VV = valid hour
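Reading the pattern that way, the 12 h SPORT surface forecast valid at 1200 UTC 27 April 2011 would land in a file named something like (illustrative only):

  point_stat_SPORT_sfc_120000L_20110427_120000V_mpr.stat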

Scripts: runPointStatAnalysis.pl
- The green Stat Analysis circle under Analysis in the workflow diagram
- Uses the *.stat files output from Point Stat to generate statistics comparing the model output to the observations (output in the pointStatAnalysisOutput directory):
  - LOC_YYYYMMDDFF_VAR_POLY_LEV_EXP_MPR.txt
    - LOC = surface land (lnd), surface water (wtr), or upper air (upa)
    - YYYYMMDDFF = year, month, day, and forecast hour
    - VAR = variable
    - POLY = verification subdomain
    - LEV = surface (sfc) or pressure level (PPPmb)
    - EXP = experiment name
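For example, land-surface temperature statistics for the 12 h SPORT forecast over a user-defined subdomain might be written to (illustrative only):

  lnd_2011042712_TMP_USER_sfc_SPORT_MPR.txt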

Scripts: runGridStat.pl
- The green GenPolyMask and PCP Combine circles under Reformat and the green Grid Stat circle under Statistics in the workflow diagram
- Maps gridded verification data to the model forecast grid and produces files of the differences (output in the gridStatOutput directory):
  - grid_stat_EXP_VAR_FF0000L_YYYYMMDD_VV0000V_*.stat
    - EXP = 5-character experiment name
    - VAR = variable name
    - FF = forecast hour
    - YYYYMMDD = valid year, month, and day
    - VV = valid hour
- Currently supports grid comparisons against NCEP Stage IV precipitation and the NAM-218 analysis (these must be downloaded manually)
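By analogy with the Point Stat naming, a 3 h accumulated-precipitation comparison for the SPORT run valid at 0300 UTC 27 April 2011 might be named (illustrative only):

  grid_stat_SPORT_APCP_030000L_20110427_030000V_mpr.stat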

Scripts: runGridStatAnalysis.pl
- The green Stat Analysis circle in the workflow diagram
- Uses the *.stat files output from Grid Stat to generate statistics comparing the model output to the gridded verification dataset (output in the gridStatAnalysisOutput directory):
  - YYYYMMDDFF_EXP_VAR_sfc_NBHOOD_POLY.txt
    - YYYYMMDDFF = year, month, day, and forecast hour
    - EXP = experiment name
    - VAR = variable
    - NBHOOD = number of surrounding grid points considered in matching
    - POLY = verification subdomain
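With the tutorial's neighborhood width of 5, a 3 h SPORT precipitation file for the user-defined subdomain might be (illustrative only):

  2011042703_SPORT_APCP_sfc_5_USER.txt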

namelist.met: Overview
- The namelist.met text file is designed for easy modification so that evaluations can be performed on only the metrics of interest
- Each script begins with a series of calls to the readNamelist.pm module, which scours namelist.met for only the variables needed by that script and defines them as designated by the user
- Some variables can be lists; list entries should be separated by commas (with no spaces)
- Blocks group like variables, as in the sketch below
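A hypothetical fragment illustrating the block structure and a comma-separated list; the block and variable names here are placeholders, not the real ones, and the exact delimiter syntax may differ from the distributed file:

  &SomeBlock
    SingleVariable = value
    ListVariable   = item1,item2,item3
  /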

Tutorial
- Objective: set up the data files and namelist.met to verify 0-24 h control and SPoRT forecasts, output every three hours, from 27 April 2011
- You should have already downloaded the tarballs containing the updated MET scripts and the tutorial data
- Untar the MET scripts file
- Create a directory called forecasts
- Move the forecast file to the newly created directory and untar it
- Move the Stage IV file to the gridData directory and untar it
- Open namelist.met in your favorite text editor (the shell commands below sketch these steps)
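A minimal sketch of the setup in shell, assuming hypothetical tarball and directory names (substitute the names of the files you actually downloaded):

  tar -xzf sport_met_scripts.tar.gz                 # MET scripts tarball (name assumed)
  cd sport_met_scripts                              # extracted top-level directory (name assumed)
  mkdir forecasts
  mv ../forecast_data.tar.gz forecasts/             # forecast tarball (name assumed)
  tar -xzf forecasts/forecast_data.tar.gz -C forecasts
  mv ../stage4_data.tar.gz gridData/                # Stage IV tarball (name assumed)
  tar -xzf gridData/stage4_data.tar.gz -C gridData
  vi namelist.met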

namelist.met: &DirectoryInfo
- This block acquaints the scripts with the user's directory structure and points them to the data to process
- RunDir: full pathname of the directory where the SPoRT MET scripts were untarred and will run
- NetCDFDir: full pathname of the NetCDF directory, needed to run ncdump when processing MADIS data
- EMSBinDir: full pathname of the directory in the WRF-EMS software package containing the wgrib, wgrib2, etc. utility binaries
- METDir: full pathname of the highest-level MET directory
- ForecastDir: full pathname of the location of all GRIB forecast files to be processed
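A hypothetical &DirectoryInfo block; the variable names come from this slide, but the paths and the exact namelist syntax are assumptions:

  &DirectoryInfo
    RunDir      = /home/user/sport_met_scripts
    NetCDFDir   = /usr/local/netcdf/bin
    EMSBinDir   = /usr1/wrfems/util/bin
    METDir      = /usr/local/met
    ForecastDir = /home/user/sport_met_scripts/forecasts
  /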

namelist.met: &Domain and &Time
- &Domain helps reduce processing time by extracting only the observations that fall within the model domain
  - Input the lower-left and upper-right latitudes and longitudes (enter the LL and UR coordinates of your local domain)
- &Time controls how many forecast days are going to be evaluated
  - Enter the first initialization date and time in the Start variables (for the tutorial: 2011, 04, 27, and 00)
  - Enter the final forecast date and time in the End variables (for the tutorial: 2011, 04, 28, and 00)
  - NumberVerifyDays: number of consecutive case study days (1)
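A hypothetical rendering of these blocks for the tutorial case; the coordinate values are placeholders for your local domain, and the individual variable names (e.g. StartYear, LowerLeftLat) are assumptions:

  &Domain
    LowerLeftLat  = 30.0
    LowerLeftLon  = -95.0
    UpperRightLat = 40.0
    UpperRightLon = -80.0
  /
  &Time
    StartYear = 2011, StartMonth = 04, StartDay = 27, StartHour = 00
    EndYear   = 2011, EndMonth   = 04, EndDay   = 28, EndHour   = 00
    NumberVerifyDays = 1
  /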

namelist.met: &ForecastInfo
- &ForecastInfo tells the scripts how many hours each forecast runs, how frequently a GRIB file of model output was generated, and the names of the two (or more) experiments being verified
- TotalForecastHours: total number of hours per forecast (24)
- ForecastVerifyInterval: how often the forecasts were output, or the frequency with which the user wants to verify (3)
- ExperimentNames: a list of names that match the five-character experiment tag appended to the beginning of the GRIBNAME variable in WRF-EMS (SPORT,CNTRL)
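For the tutorial, this block would look something like the following (variable names from this slide; exact syntax assumed):

  &ForecastInfo
    TotalForecastHours     = 24
    ForecastVerifyInterval = 3
    ExperimentNames        = SPORT,CNTRL
  /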

namelist.met: &MADISInfo
- &MADISInfo allows the user to configure which MADIS observations to verify against
- Currently, ACARS profiles, profiler, RAOB, METAR, mesonet, maritime, and SAO observations are available
- Set the ProcessMADIS variable to true to do point verification; set it to false if only doing grid verification (true)
- Set the UseVAR variables to true to use a dataset; set them to false to exclude it from verification (for the tutorial, set all UseVAR variables to true)
- The TimeRangeVAR variables tell MET to match observations that fall within ±n minutes of the forecast valid time (set RAOB to 30 and all others to 10)
  - Useful for stations that do not always report exactly at the top of the hour when the forecasts are valid
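A hypothetical excerpt with the tutorial settings; there is one UseVAR and one TimeRangeVAR entry per dataset, and the expanded names shown here (UseMETAR, TimeRangeRAOB, etc.) are assumptions:

  &MADISInfo
    ProcessMADIS     = true
    UseMETAR         = true
    UseRAOB          = true
    UseMesonet       = true
    TimeRangeRAOB    = 30
    TimeRangeMETAR   = 10
    TimeRangeMesonet = 10
  /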

namelist.met: &METInfo
- &METInfo configures which parts of MET will run and allows for a user-defined verification domain (if a sub-domain of the overall domain is desired)
- RunPROGRAM: set to true to run each component of the MET package; set to false to skip selected components (for the tutorial, all true)
- PressureLevels: upper-air pressure levels (in hPa) to be verified by either Point Stat or Grid Stat (850,500)
- VerificationRegions: NCEP verification region on which to verify; USER for a user-defined domain, GRID for the entire model grid (USER)
- UserVerifyCOORDINATES: lower-left and upper-right corners of the user-defined verification grid (enter the LL and UR coordinates of your local domain)
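A hypothetical excerpt; the expansion of RunPROGRAM into per-component switches, and of UserVerifyCOORDINATES into four values, is an assumption:

  &METInfo
    RunPointStat          = true
    RunPointStatAnalysis  = true
    RunGridStat           = true
    RunGridStatAnalysis   = true
    PressureLevels        = 850,500
    VerificationRegions   = USER
    UserVerifyCOORDINATES = 30.0,-95.0,40.0,-80.0
  /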

namelist.met: &PointStatInfo
- &PointStatInfo provides the information needed to run Point Stat
- UseVerifySurfacePoint/UpperPoint: set to true to verify against surface and/or upper-air observations, respectively (true)
- Surface/UpperPointVerificationVariables: GRIB table variable names for the variables on which to perform verification (Surface: TMP,DPT,PRMSL; Upper: TMP,DPT)
- VerticalObsWindow: vertical pressure range (hPa) over which upper-air observations will be accepted for forecast matching (10)
- StatsFilter: easiest to just set this to MPR for now (MPR)
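With the tutorial settings, a plausible rendering of this block (exact syntax assumed):

  &PointStatInfo
    UseVerifySurfacePoint             = true
    UseVerifyUpperPoint               = true
    SurfacePointVerificationVariables = TMP,DPT,PRMSL
    UpperPointVerificationVariables   = TMP,DPT
    VerticalObsWindow                 = 10
    StatsFilter                       = MPR
  /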

namelist.met: &GridStatInfo
- &GridStatInfo provides the information needed to run Grid Stat
- NeighborhoodBox: the width, in grid points, of the neighborhood over which verification is performed (5)
  - Must be an odd number
  - If set to 1, only grid-point-to-grid-point matching is done
- PercentCoverage: the fraction of a neighborhood that must contain the forecast value for a hit to register (0.25)
- UseVerifyVAR: set to true to verify against precipitation and/or a gridded analysis (Precipitation: true; Grids: false)
- GriddedPrecipitationVerificationAnalysis: set to STIV, as NCEP Stage IV is the only precipitation analysis currently supported (STIV)

namelist.met: &GridStatInfo (cont'd)
- AccumulatedPrecipitationHours: accumulated precipitation totals to be verified; each must be at least as long as ForecastVerifyInterval (03,12)
- PrecipitationThresholds: precipitation thresholds (in mm) to use for binning skill scores (1,5,10,25,50)
- GriddedVerificationModel: determines which large-scale NCEP analysis will be used for gridded verification of non-precipitation variables (none for the tutorial)
- Surface/UpperGridVerificationVariables: GRIB table variable names of the variables on which to verify (none for the tutorial)
- A combined sketch of both &GridStatInfo slides with the tutorial settings follows
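A hypothetical excerpt combining both slides with the tutorial settings; the expansion of UseVerifyVAR into two switches is an assumption:

  &GridStatInfo
    NeighborhoodBox                          = 5
    PercentCoverage                          = 0.25
    UseVerifyPrecipitation                   = true
    UseVerifyGrids                           = false
    GriddedPrecipitationVerificationAnalysis = STIV
    AccumulatedPrecipitationHours            = 03,12
    PrecipitationThresholds                  = 1,5,10,25,50
  /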

namelist.met: &PlottingInfo
- &PlottingInfo allows the user to automatically produce plots of the hour-by-hour forecast validation using the open-source GD::Graph Perl module
- MakePlots: set to true to generate plots, or set to false to make your own plots from the ASCII output
- Surface/UpperPlotVariables: GRIB table variable names for the surface/upper-air variables for which to generate plots
- PressurePlotLevels: pressure levels (hPa) at which to generate plots
- NCEPPlotRegions: NCEP verification regions for which to generate plots
- PlotStatistics: statistics to plot
- PlotColors: color of the line for each forecast (in the same order as the experiments were defined in the ExperimentNames variable under &ForecastInfo)
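A hypothetical closing example; the statistic names and colors shown are illustrative, not a list of supported values:

  &PlottingInfo
    MakePlots            = true
    SurfacePlotVariables = TMP,DPT
    UpperPlotVariables   = TMP
    PressurePlotLevels   = 850,500
    NCEPPlotRegions      = USER
    PlotStatistics       = RMSE,BIAS
    PlotColors           = red,blue
  /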