
1 Observations for Model Intercomparisons (obs4MIPs) in support of the Coupled Model Intercomparison Project, Phase 6 (CMIP6)
May 27, 2014, National Central University, Jhongli, Taiwan
Tsengdar Lee, Ph.D., with inputs from Duane Waliser, Karl Taylor, Veronika Eyring, and Peter Gleckler

2 Outline: Background, Motivation & Plans
1) obs4MIPs: Evolution, Status and Plans
2) obs4MIPs: Standards, Conventions and Infrastructure
3) CMIP Phase 6 Planning
4) Benchmarking Climate Model Performance
Ref: http://dx.doi.org/10.1175/BAMS-D-12-00204.1 and http://www.earthsystemcog.org/projects/obs4mips

3 Initial Sentiments Behind obs4MIPs: Under-Exploited Observations for Model Evaluation
How to bring as much observational scrutiny as possible to the CMIP/IPCC process?
How to best utilize the wealth of satellite observations for the CMIP/IPCC process?
Courtesy of Duane Waliser

4 Model and Observation Overlap: Where to Start? For what quantities are comparisons viable?
Example: NASA – current missions ~14; total missions flown ~60, many with multiple instruments, most with multiple products (e.g., 10–100s), and many cases with the same products. Over 1000 satellite-derived quantities: ~120 ocean, ~60 land, ~90 atmosphere, ~50 cryosphere.
Over 300 variables in the (monthly) CMIP database (Taylor et al. 2008).
Courtesy of Duane Waliser

5 obs4MIPs: The 4 Commandments
1) Use the CMIP5 simulation protocol (Taylor et al. 2009) as the guideline for selecting observations.
2) Observations to be formatted the same as CMIP model output (e.g., NetCDF files, CF convention; see the sketch after this slide).
3) Include a Technical Note for each variable describing the observation and its use for model evaluation (at graduate-student level).
4) Hosted side by side on the ESGF with CMIP model output.
[Diagram: the overlap between model output variables and satellite retrieval variables defines the target quantities; modelers, observation experts, and the analysis community form the initial target community.]
Courtesy of Duane Waliser
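To make "formatted the same as CMIP model output" concrete, here is a minimal sketch of writing an observational field as a CF-style netCDF file with the netCDF4 Python library. The file name, variable choice (precipitation, "pr"), grid, and attribute values are illustrative examples, not the official obs4MIPs specification.

```python
# Minimal sketch: writing a gridded observational field as a CF-style netCDF
# file, loosely following CMIP conventions (illustrative only; names, units
# and attribute values are hypothetical, not the obs4MIPs specification).
import numpy as np
from netCDF4 import Dataset

ds = Dataset("pr_obs_example_200001-200012.nc", "w", format="NETCDF4_CLASSIC")

# Dimensions and coordinate variables (location in space and time)
ds.createDimension("time", None)
ds.createDimension("lat", 180)
ds.createDimension("lon", 360)

time = ds.createVariable("time", "f8", ("time",))
time.units = "days since 2000-01-01 00:00:00"
time.calendar = "standard"

lat = ds.createVariable("lat", "f4", ("lat",))
lat.standard_name = "latitude"
lat.units = "degrees_north"
lat[:] = np.arange(-89.5, 90.0, 1.0)

lon = ds.createVariable("lon", "f4", ("lon",))
lon.standard_name = "longitude"
lon.units = "degrees_east"
lon[:] = np.arange(0.5, 360.0, 1.0)

# The data variable carries CF attributes (standard_name, units, ...)
pr = ds.createVariable("pr", "f4", ("time", "lat", "lon"))
pr.standard_name = "precipitation_flux"
pr.units = "kg m-2 s-1"

# Global attributes describe the dataset as a whole
ds.title = "Example satellite-derived precipitation (illustrative)"
ds.Conventions = "CF-1.6"
ds.close()
```

A file written this way can be read by the same tools (and with the same assumptions about names, units, and coordinates) that analysts already use for CMIP model output.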

6 obs4MIPs “Technical Note” Content
Intent of the Document
Data Field Description
Data Origin
Validation and Uncertainty Estimate
Considerations for Use in Model Evaluation
Instrument Overview
References
Revision History
POC
Courtesy of Duane Waliser

7 Evolution and Status of obs4MIPs
Pilot Phase – CMIP5: initial protocols (variables, sources & documentation); compatibility with CMIP file formats and ESGF; PCMDI science & GSFC IT workshops; identified and provided ~15 variables.
Expanding the Pilot: additional data types, sources, variables (e.g., CFMIP, CMUG, CEOS, reanalysis); reconsidering the initial guidelines/boundaries; some partial oversight & governance (NASA+ working group).
Planning Ahead – CMIP6: identify additional data sources / remaining variable gaps across the Earth system; judicious/expanded model output considerations, including satellite simulators; increasing community input on objectives and on implementation contributions; broadening awareness & governance (WDAC obs4MIPs Task Team).
Courtesy of Duane Waliser

8 obs4MIPs: Current Set of Observations
AIRS (≥ 300 hPa): temperature profile, specific humidity profile
MLS (< 300 hPa): temperature profile, specific humidity profile
QuikSCAT: ocean surface winds
TES: ozone profile
AMSR-E: SST
ATSR (ARC/CMUG): SST
TOPEX/JASON: SSH
CERES: TOA & surface radiation fluxes
TRMM: precipitation
GPCP: precipitation
MISR: aerosol optical depth
MODIS: cloud fraction, aerosol optical depth
NSIDC: sea ice (in progress)
CloudSat: clouds (CFMIP-OBS provided)
CALIPSO: clouds & aerosols (CFMIP-OBS provided)
ISCCP: clouds (CFMIP-OBS provided)
PARASOL: clouds & aerosols (CFMIP-OBS provided)
ECMWF: zonal winds, meridional winds (ana4MIPs-provided reanalysis); expected to expand to other fields and sources of reanalysis
ARMBE/DOE: clouds, radiation, meteorology, land surface, etc. (initial in-situ example); discussions on protocols and additions to the in-situ holdings are underway
Courtesy of Duane Waliser

9 ESGF: Side by Side with CMIP — the obs4MIPs Project
Courtesy of Duane Waliser

10 obs4MIPs: Access Statistics (NASA datasets only, as of Nov 2013)
119 unique* users from 16 countries; 641 dataset downloads in 2012-13.
* "Unique" counts unique user-ID downloads of a complete dataset, not individual files; repeat downloads of the same dataset by the same user were counted only once.
[Chart labels: MODIS/MISR, AMSR-E, TOPEX/JASON 1 & 2, AIRS/MLS, MODIS, GPCP/TRMM, CERES, TES, QuikSCAT.]
Courtesy R. Ferraro

11 Bridging Models and Observations: 2+ Decades of Directed Progress
[Diagram: parallel timelines for global modeling (AMIP I & II, CMIP 1 & 2, CMIP 3 & 5, CMIP6; systematic model experiments, systematic model development & improvement) and observations (xMIPs, obs4MIPs, Climate Metrics Panel, ESGF; systematic coordination of obs for MIPs, systematic application of metrics, systematic IT support & infrastructure), with the gap between the two closing over time.]
Courtesy of Duane Waliser

12 Evolution and Plans of obs4MIPs
Pilot Phase – CMIP5: initial protocols (variables, sources & documentation); compatibility with CMIP file formats and ESGF; PCMDI science & GSFC IT workshops; identified and provided ~15 variables.
Expanding the Pilot: additional data types, sources, variables (e.g., CFMIP, CMUG, CEOS, reanalysis); reconsidering the initial guidelines/boundaries; some partial oversight & governance (NASA+ working group).
Present – Planning Ahead for CMIP6: identify additional data sources / remaining variable gaps across the Earth system; judicious/expanded model output considerations, including satellite simulators; increasing community input on objectives and on implementation contributions; broadening awareness & governance (WDAC obs4MIPs Task Team).
Future: more synergistic feedbacks between model development/improvements and new mission/observation definitions and prioritization.
Courtesy of Duane Waliser

13 Outline: Background, Motivation & Plans
1) obs4MIPs: Evolution, Status and Plans
2) obs4MIPs: Standards, Conventions and Infrastructure
3) CMIP Phase 6 Planning
4) Benchmarking Climate Model Performance
Ref: http://dx.doi.org/10.1175/BAMS-D-12-00204.1 and http://www.earthsystemcog.org/projects/obs4mips

14 Why data standards?
Standards facilitate discovery and use of data. MIP standardization has increased steadily over more than two decades, and the user community has expanded from hundreds to 10,000.
Standardization requires conventions and controlled vocabularies, plus tools enforcing or facilitating conformance.
Standardization enables the ESG federated data archive, uniform methods of reading and interpreting data, and automated methods and "smart" software to process data efficiently.
Courtesy of Karl Taylor

15 What standards is obs4MIPs building on?
netCDF – an API for reading and writing certain types of HDF-formatted data (www.unidata.ucar.edu/software/netcdf/)
CF Conventions – providing for standardized description of data contained in a file (cf-convention.github.io)
Data Reference Syntax (DRS) – defining the vocabulary used in uniquely identifying MIP datasets and specifying file and directory names (cmip-pcmdi.llnl.gov/cmip5/output_req.html)
CMIP output requirements – specifying the data structure and metadata requirements for CMIP data (cmip-pcmdi.llnl.gov/cmip5/output_req.html)
CMIP "standard output" list (cmip-pcmdi.llnl.gov/cmip5/output_req.html)
Courtesy of Karl Taylor

16 CF conventions
Driven by the needs of climate and weather scientists; written by climate scientists and computer scientists (late 1990s).
Establishes relatively few requirements, but allows information deemed important by users to be recorded in a standardized way.
Enables self-describing datasets: global attributes; variable attributes (standard name, units, and dimensions); dimension attributes (location in space & time).
Courtesy of Karl Taylor

17 MIP Data Reference Syntax (DRS)
Specifies a "controlled vocabulary" for recording: experiment names ("historical", "rcp85", etc.); output sampling frequency ("mon", "da", etc.); model names; identification of ensemble members (the "rip" designator); the time-range format specification; etc.
This enables unique identification of datasets, unified organization of data, and sharing of data across the network. (A sketch of DRS-style naming follows below.)
Courtesy of Karl Taylor
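To illustrate how the controlled vocabulary turns into names on disk, here is a small sketch using the widely used CMIP5 filename pattern (variable_table_model_experiment_ensemble[_timerange].nc). The directory-path helper is a simplified stand-in inspired by the DRS, not the authoritative template in the DRS document.

```python
# Illustrative sketch of DRS-style dataset naming (CMIP5 filename pattern:
# <variable>_<MIP table>_<model>_<experiment>_<ensemble>[_<time range>].nc).
# The directory layout below is a simplified illustration, not the full DRS.
def drs_filename(variable, table, model, experiment, ensemble, time_range=None):
    parts = [variable, table, model, experiment, ensemble]
    if time_range:
        parts.append(time_range)
    return "_".join(parts) + ".nc"

def drs_path(activity, product, institute, model, experiment,
             frequency, realm, variable, ensemble):
    # Simplified directory layout inspired by the CMIP5 DRS
    return "/".join([activity, product, institute, model, experiment,
                     frequency, realm, variable, ensemble])

print(drs_filename("tas", "Amon", "HadCM3", "historical",
                   "r1i1p1", "185001-200512"))
# tas_Amon_HadCM3_historical_r1i1p1_185001-200512.nc
print(drs_path("cmip5", "output1", "MOHC", "HadCM3", "historical",
               "mon", "atmos", "tas", "r1i1p1"))
```

Because every component comes from a controlled vocabulary, any tool (or the ESGF itself) can parse such names and unambiguously identify the dataset they belong to.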

18 CMIP model output requirements
These requirements taxed modeling group resources but were essential to creating an easy-to-use archive of model output. Lots of details (16 pages + appendices), including: file format (netCDF "classic" data model); global attributes identifying model, experiment, ensemble member, and more; variable attributes indicating spatial and temporal sampling of data; full description of horizontal and vertical grids.
The "Climate Model Output Rewriter" (CMOR) was key to ensuring compliance with the standards. Much of the required metadata was supplied via "CMOR tables", and error checks ensured a much "cleaner" data archive. (A minimal usage sketch follows below.)
Courtesy of Karl Taylor
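As a rough sketch of how CMOR is typically driven from Python: dataset-level metadata is registered first, then a MIP table is loaded and axes and variables are defined against its controlled vocabulary before writing. The exact metadata-registration call differs between CMOR versions (cmor.dataset in CMOR 2, cmor.dataset_json in CMOR 3), and the table name, JSON file, and values below are illustrative only.

```python
# Rough sketch of rewriting output with CMOR (illustrative; the dataset-
# metadata step differs between CMOR 2 and CMOR 3, and the table, JSON file
# and argument values are examples, not a prescribed configuration).
import numpy as np
import cmor

cmor.setup(inpath="Tables", netcdf_file_action=cmor.CMOR_REPLACE)
cmor.dataset_json("cmor_input_example.json")   # experiment, institution, ... (CMOR 3 style)
cmor.load_table("CMIP5_Amon")                  # MIP table defining 'tas', axes, ...

# Coordinate values and bounds for a 1-degree grid (example data)
lats, lat_bnds = np.arange(-89.5, 90.0, 1.0), np.arange(-90.0, 90.5, 1.0)
lons, lon_bnds = np.arange(0.5, 360.0, 1.0), np.arange(0.0, 360.5, 1.0)

# Define the coordinate axes against the table's controlled vocabulary
ilat = cmor.axis(table_entry="latitude", units="degrees_north",
                 coord_vals=lats, cell_bounds=lat_bnds)
ilon = cmor.axis(table_entry="longitude", units="degrees_east",
                 coord_vals=lons, cell_bounds=lon_bnds)
itime = cmor.axis(table_entry="time", units="days since 2000-01-01",
                  coord_vals=np.array([15.5]), cell_bounds=np.array([0.0, 31.0]))

# Register the variable and write the data; CMOR checks names, units,
# required attributes, and the file/directory naming against the tables
var_id = cmor.variable(table_entry="tas", units="K",
                       axis_ids=[itime, ilat, ilon])
cmor.write(var_id, np.full((1, 180, 360), 288.0, dtype="f4"))
cmor.close()
```

The point of the workflow is that the data producer never hand-edits attributes or file names; the tables and CMOR's checks supply and enforce them.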

19 List of “Standard Output” variables
Has evolved (and expanded) over two decades of MIPs; PCMDI seeks advice and guidance from a wide range of users and modeling experts.
Domains include: atmosphere, aerosols, ocean, ocean biogeochemistry, land surface & carbon cycle, sea ice, land ice, and CFMIP output.
Temporal sampling includes: annual, monthly, daily, 6-hourly, and 3-hourly.
Courtesy of Karl Taylor

20 Standards governance and evolution
CF conventions: governance board, standards committee, active grass-roots input.
DRS: PCMDI and international data centers (BADC, DKRZ); extensions made by leaders of obs4MIPs, CORDEX, and SPECS.
CMIP output requirements: specified by PCMDI (with community input).
CMIP standard output: coordinated by PCMDI, with community input and CMIP Panel oversight.
Courtesy of Karl Taylor

21 The WCRP supports adoption and extension of the CMIP standards for all its projects/activities. The Working Group on Coupled Modelling (WGCM) mandates use of established infrastructure (standards and conventions, ESGF) for all MIPs.
The WGCM has established the WGCM Infrastructure Panel (WIP) to govern evolution of standards, including: CF metadata standards; specifications beyond CF guaranteeing fully self-describing and easy-to-use datasets (e.g., CMIP requirements for output); catalog and software interface standards ensuring remote access to data, independent of local format (e.g., OPeNDAP, THREDDS); node management and data publication protocols; defined dataset description schemes and controlled vocabularies (e.g., the DRS); standards governing model and experiment documentation (e.g., CIM).
Courtesy of Karl Taylor

22 What support is needed/available to maintain and advance the data standards and infrastructure?
Website support: obs4MIPs protocols and requirements; CF conventions and standard names; WIP website (???).
Committee and panel service: mainly climate scientists.
Development of tools capitalizing on the standards and facilitating conformance: netCDF libraries; Climate Model Output Rewriter (CMOR); ESGF; CF conformance checker.
Courtesy of Karl Taylor

23 Immediate needs of obs4MIPs
CMOR needs to be generalized to better handle observational data: CMOR was developed to meet modeling needs, and some of the attributes written by CMOR don't apply to observations (e.g., model name, experiment name).
Modifications and extensions are needed for the DRS (to be proposed by the WDAC obs4MIPs Task Team to the WIP).
A "recipe" for preparation of obs4MIPs datasets should be posted.
The CoG website needs to be modified to meet obs4MIPs requirements.
Courtesy of Karl Taylor

24 obs4MIPs imperatives
Align observational data with model output to facilitate analysis: conform to the data conventions followed in CMIP5; serve data via ESGF.
Modify the DRS and the dataset global attributes.
Generalize the CMOR software to provide an easy option for conforming data to obs4MIPs specifications; also provide user-friendly examples.
Configure the CoG wiki to serve as the obs4MIPs website and portal to the data.
Review the CMIP standard output list for completeness.
Courtesy of Karl Taylor

25 Some closing thoughts
Standards and infrastructure fundamentally rely on community consensus: they must serve the needs of multiple projects and meet the diverse needs of a broad spectrum of users. This requires substantial coordination and demands community input and oversight drawing on the expertise of a diverse group of scientists and data specialists.
The current system is fragile in that many of the components are funded by individual projects that could disappear, impairing the viability of the entire infrastructure. Every funder is essential to supporting this collaborative effort, where leadership and recognition are shared across multiple projects and integration of the "whole" imposes external requirements on the individual components.
Courtesy of Karl Taylor

26 Outline: Background, Motivation & Plans
1) obs4MIPs: Evolution, Status and Plans
2) obs4MIPs: Standards, Conventions and Infrastructure
3) CMIP Phase 6 Planning
4) Benchmarking Climate Model Performance
Ref: http://dx.doi.org/10.1175/BAMS-D-12-00204.1 and http://www.earthsystemcog.org/projects/obs4mips

27 CMIP: Toward understanding past, present and future climate (organized by the WCRP Working Group on Coupled Modelling, WGCM)
Since 1995, the Coupled Model Intercomparison Project (CMIP) has coordinated climate model experiments involving multiple international modeling teams. CMIP has led to a better understanding of past, present and future climate change and variability. CMIP has developed in phases, with the simulations of the fifth phase, CMIP5, now mostly completed. Though analyses of the CMIP5 data will continue for at least several more years, science gaps and outstanding science questions have prompted preparations to get underway for the sixth phase of the project (CMIP6). An initial proposal for the design of CMIP6 has been published to inform interested research communities and to encourage discussion and feedback for consideration in the evolving experiment design. (Meehl et al., EOS, 2014)
Courtesy of Veronika Eyring

28 Preparing for Phase 6 of the Coupled Model Intercomparison Project (CMIP6)
An initial proposal for the design of CMIP6 has been published to inform interested research communities and to encourage discussion and feedback for consideration in the evolving experiment design. See details in Meehl, G. A., R. Moss, K. E. Taylor, V. Eyring, R. J. Stouffer, S. Bony and B. Stevens, Climate Model Intercomparisons: Preparing for the Next Phase, Eos Trans. AGU, 95(9), 77, 2014.
Feedback on this initial CMIP6 proposal is being solicited over the year from modeling groups and model analysts. Please send comments to the CMIP Panel chair, Veronika Eyring (Veronika.Eyring@dlr.de), by September 2014. The WGCM and the CMIP Panel will then iterate on the proposed experiment design, with the intention to finalize it at its meeting in October 2014.
Based on the CMIP5 survey and the Aspen & WGCM/AIMES meetings. See updates on the CMIP Panel website: http://www.wcrp-climate.org/index.php/wgcm-cmip/about-cmip
Courtesy of Veronika Eyring

29 WCRP Coupled Model Intercomparison Project Phase 5 (CMIP5) Survey
Prepared by the CMIP Panel and Co-Chairs of WGCM (in June 2013): Veronika Eyring, Ron Stouffer, Sandrine Bony, Jerry Meehl, Bjorn Stevens, and Karl Taylor.
Background: As most of the simulations for CMIP5 have been completed, and their analysis is in full swing, it seems timely to ask, while experiences are still fresh, what went well, what didn't, and what gaps in the science are emerging. In particular, are gaps emerging that could be filled or bridged by a coordinated set of model experiments, and thus should be considered as a component of CMIP6?
Goal of the Survey: To learn from those most active in CMIP5 what went well and what didn't, and to provide input for future CMIP6 planning workshops.
Addressees (sent to representatives of the climate community at the end of June 2013):
1) Points of contact, CMIP5 model groups
2) Co-chairs, WCRP working groups
3) Co-chairs, WGCM-endorsed community coordinated projects
4) Co-chairs, model intercomparison projects
5) Points of contact, integrated assessment model groups
6) Co-chairs, related IGBP groups or activities
7) Contacts, ESG Federation
8) Members of the Climate Services Partnership Coordinating Group
9) Some additional people
Courtesy of Veronika Eyring

30 Initial CMIP6 Proposal: Scientific Focus (Meehl et al., EOS, 2014)
The specific experimental design would be focused on three broad scientific questions:
1) How does the Earth system respond to forcing?
2) What are the origins and consequences of systematic model biases?
3) How can we assess future climate changes given climate variability, predictability and uncertainties in scenarios?
It is proposed to use as the scientific backdrop for CMIP6 the six WCRP Grand Challenges, plus an additional theme encapsulating questions related to biospheric forcings and feedbacks:
1) Clouds, Circulation and Climate Sensitivity
2) Changes in Cryosphere
3) Climate Extremes
4) Regional Climate Information
5) Regional Sea-level Rise
6) Water Availability
7) AIMES theme for collaboration: biospheric forcings and feedbacks
All relate to obs4MIPs.
Courtesy of Veronika Eyring

31 CMIP5 Survey: Model evaluation, performance metrics
Detailed and systematic model evaluation during the development process should be facilitated by CMIP and the metrics panel.
The continued push for standard performance metrics, readily published and viewable on a central website (also for providing guidance for impact analyses).
A CMIP standardized diagnostic and performance metrics package that runs on the ESGF.
obs4MIPs seen as a very positive way to improve regular model-obs evaluation; should be grown.
Process-oriented evaluation to understand model biases and error compensation.
Code repository (e.g., at the WGNE/WGCM Climate Metrics Panel website).
[Figure (Anav et al., 2013): Northern Hemisphere net carbon flux from land to atmosphere (Net Biome Productivity, NBP). In the NH over land, almost all models systematically underestimate the sink, reflecting the absence of sinks due to nitrogen deposition or poor representation of forest regrowth.]
Courtesy of Veronika Eyring

32 Initial CMIP6 Proposal: A Distributed Organization under the Oversight of the CMIP Panel (Meehl et al., EOS, 2014)
CMIP would be comprised of two elements:
1) Ongoing CMIP Diagnostic, Evaluation and Characterization of Klima (DECK) experiments: a small set of standardized experiments that would be performed whenever a new model is developed. The DECK experiments are chosen to provide continuity across past and future phases of CMIP, to evolve only slowly with time, and to take advantage of what is already common practice in many modeling centers:
i) an AMIP simulation (~1979-2010);
ii) a multi-hundred-year pre-industrial control simulation;
iii) a 1%/yr CO2 increase simulation to quadrupling, to derive the transient climate response (see the timescale arithmetic below);
iv) an instantaneous 4xCO2 run to derive the equilibrium climate sensitivity;
v) a simulation starting in the 19th century and running through the 21st century using an existing scenario (RCP8.5).
2) Standardization, coordination, infrastructure, and documentation functions that make the simulations performed under CMIP, and their main characteristics, available to the broader community.
Courtesy of Veronika Eyring
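As a brief aside on the timescales implied by the idealized 1%/yr run (simple compounding arithmetic using the standard definitions, not stated on the slide itself):

\[
t_{2\times} = \frac{\ln 2}{\ln 1.01} \approx 70\ \text{yr},
\qquad
t_{4\times} = \frac{\ln 4}{\ln 1.01} \approx 140\ \text{yr},
\]

so the transient climate response is evaluated around the time of CO2 doubling (about year 70), and the run continues to roughly year 140 to reach quadrupling.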

33 Initial CMIP6 Proposal: A Distributed Organization under the Oversight of the CMIP Panel (Meehl et al., EOS, 2014)
CMIP Phase 6 (CMIP6): CMIP6-endorsed MIPs would propose additional experiments, and modeling groups could choose a subset of these to run according to their interests, computing and/or human resources, and funding constraints. The MIPs would also likely have additional experiments that would not be part of CMIP6 but would be of interest and relevant to their respective communities.
Participation: A scientist or group of scientists could send a 'Request for a CMIP6-Endorsed MIP' at any time to the CMIP Panel Chair (see template on the CMIP webpage). The main criteria for MIPs to be endorsed for CMIP6 are:
1) The MIP addresses at least one of the key science questions of CMIP6;
2) The MIP follows CMIP standards in terms of experimental design, data format and documentation;
3) A sufficient number of modeling groups have agreed to participate in the MIP;
4) The MIP builds on the shared CMIP DECK experiments;
5) A commitment to contribute to the creation of the CMIP6 data request.
Courtesy of Veronika Eyring

34 WCRP Grand Challenges: (1) Clouds, circulation and climate sensitivity, (2) Changes in cryosphere, (3) Climate extremes, (4) Regional climate information, (5) Regional sea-level rise, and (6) Water availability, plus an additional theme on "biospheric forcings and feedbacks". Meehl et al., EOS, 2014. Courtesy of Veronika Eyring

35 Model Evaluation: A CMIP benchmarking and evaluation software package (made available to everyone, for example through the WGNE/WGCM metrics panel wiki) would produce well-established analyses as soon as model results become available. The objective is to enable routine model evaluation and to aid the model development process by providing feedback concerning systematic model errors in the individual models.
Communication: The new distributed nature of CMIP6 requires WGCM and the CMIP Panel to play a strong role in facilitating communication between MIPs, and between the MIPs and the modeling groups.
Next Steps and Time Line: The overall data preparation will follow procedures developed in CMIP5. The historical emissions would be made available in spring 2015, and the emissions for the future climate scenarios by the end of 2015. Analyses of CMIP6 data would be ongoing, with the simulation phase of CMIP6 running for five years, from 2015 to 2020, followed by many more years of model analysis. The runs for the ScenarioMIP would probably occur near the end of the CMIP6 cycle, and thus likely begin in 2017 and continue into 2018. A possible IPCC AR6 that would likely assess CMIP6 simulations could take place from roughly 2017 to 2020, but when or even whether there will be an AR6 will not be known until 2015 at the earliest. Even without an AR6, CMIP6 will still operate, as previous phases of CMIP have, to provide a set of state-of-the-art global climate model simulations as a resource for the international climate science community.
Meehl et al., EOS, 2014

36 CMIP6 Timeline
[Timeline diagram, 2014–2020 (nominal simulation period of CMIP6): community input on CMIP6 design and finalization of the experiment design (WGCM); forcing data harmonization (emissions to concentrations); ongoing CMIP DECK experiments (Diagnostic, Evaluation and Characterization) as successive model versions appear, with standardized metrics & assessment; CMIP6-endorsed MIPs (MIP1, MIP2, ...) running throughout; scenario formulation for AOGCMs and ESMs, preliminary ESM/AOGCM runs with new scenarios, and ScenarioMIP future projection runs (scenario matrix, pattern scaling, scenario pairs); possible IPCC AR6 near the end of the period.]
Courtesy of Veronika Eyring

37 Outline: Background, Motivation & Plans
1) obs4MIPs: Evolution, Status and Plans
2) obs4MIPs: Standards, Conventions and Infrastructure
3) CMIP Phase 6 Planning
4) Benchmarking Climate Model Performance
Ref: http://dx.doi.org/10.1175/BAMS-D-12-00204.1 and http://www.earthsystemcog.org/projects/obs4mips

38 Documenting climate model performance
Hundreds of model versions, many thousands of simulations. Research papers focused on model evaluation are essential, but we need to do better at documenting and benchmarking the baseline performance of all climate models. Doing this properly requires keeping track of which observational datasets are used, and their product versions.
Courtesy of Peter Gleckler

39 Routine comparisons of model simulations with observations (IPCC AR5 WGI Figure 9.5)
Maps and plots such as these are routinely produced by many climate scientists and modeling centers. They provide an essential "first-look" comparison between models and observations, but are not well suited for summarizing diverse performance characteristics from many models.
Courtesy of Peter Gleckler

40 First step beyond a difference map: scalar performance metrics (IPCC AR5 WGI Figure 9.6, global annual-mean pattern correlations between observations and models)
Provides a way to quantify and synthesize basic characteristics of model behavior. Facilitates the monitoring of model performance (are models getting better?). Complements diagnostics (maps, etc.) but cannot replace them.
Courtesy of Peter Gleckler

41 Synthesizing the strengths and weaknesses of different models (IPCC AR5 WGI Figure 9.7: portrait plot of relative model performance in CMIP5, annual-cycle space-time global RMS error against obs4MIPs/ana4MIPs data)
Metrics facilitate assessing the strengths and weaknesses of different models and can be very useful to climate modelers as sanity checks during model development. They are sometimes controversial because information is misinterpreted. A fundamental research challenge is finding ways to disqualify models or "weight" some more than others; routine metrics have not resolved this, but are valuable for gauging overall model quality.
Courtesy of Peter Gleckler

42 Establishing a mechanism for systematic benchmarking of climate model performance
WGNE/WGCM Climate Model Metrics Panel: tasked to identify and promote a limited set of frequently used performance metrics in an attempt to establish community benchmarks for climate models.
Some desired characteristics of performance metrics for benchmarking: a useful quantification of model error (with respect to observations); well established in the literature and widely used; relatively easy to compute, reproduce and interpret; fairly robust results (hence an initial emphasis on mean climate); well suited for repeated use.
Courtesy of Peter Gleckler

43 Making CMIP more relevant to model development
The benefit to modeling groups contributing to CMIP has always been limited by the research phase: by the time results are presented and published, most groups have moved on to their next-generation model. Modeling groups would learn more if they could assess the relative strengths and weaknesses of their model during the development process. PCMDI has developed a framework that will enable modeling groups to do "quick-look" comparisons of their newer model versions with all available CMIP simulations.
Courtesy of Peter Gleckler

44 PCMDI's metrics package
Includes code, documentation, observations, and a database of results for all available CMIP simulations. Built on a light version of PCMDI's UV-CDAT, which is based on Python. ESMF regridding is built in so that data on ocean grids can be interpolated for direct comparison with observations (a generic interpolation sketch follows below). An alpha version is under testing at three modeling centers and is to be made available to all CMIP modeling groups this year. Designed to enable integration into existing analysis systems.
Courtesy of Peter Gleckler
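The package's regridding is done with ESMF; the sketch below is only a generic illustration of the underlying idea, interpolating a field from a hypothetical curvilinear ocean grid onto a regular latitude-longitude grid with SciPy. It is not the package's implementation, and the grids and field are made up for the example.

```python
# Generic illustration of regridding a field from a curvilinear (ocean) grid
# onto a regular lat-lon grid for comparison with gridded observations.
# The PCMDI package uses ESMF; this SciPy-based sketch only shows the idea.
import numpy as np
from scipy.interpolate import griddata

# Hypothetical curvilinear source grid (2-D lat/lon arrays) with a toy SST field
jj, ii = np.meshgrid(np.linspace(-80.0, 80.0, 100),
                     np.linspace(0.0, 359.0, 120), indexing="ij")
src_lat = jj + 2.0 * np.sin(np.radians(ii))   # mild curvilinear distortion
src_lon = ii
src_sst = 300.0 - 0.3 * np.abs(src_lat)       # toy temperature field in K

# Regular 1-degree target grid (where the gridded observations live)
tgt_lat, tgt_lon = np.meshgrid(np.arange(-89.5, 90.0, 1.0),
                               np.arange(0.5, 360.0, 1.0), indexing="ij")

points = np.column_stack([src_lat.ravel(), src_lon.ravel()])
sst_on_obs_grid = griddata(points, src_sst.ravel(),
                           (tgt_lat, tgt_lon), method="linear")

# The regridded field can now be differenced point-by-point against observations
print(np.nanmean(sst_on_obs_grid))
```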

45 Initial Set of Baseline Metrics (recommended by the WGNE/WGCM Metrics Panel)
Statistics: bias, pattern correlation, RMS, variance and mean absolute error (a minimal computation sketch follows below).
Domains: global, tropical, and extra-tropical seasonal climatologies.
Field examples: air temperature and winds (200 and 850 hPa), geopotential height (500 hPa), surface air temperature, winds and humidity, TOA radiative fluxes, precipitation, precipitable water, SST and SSH.
The package includes updated climatologies from multiple sources, mostly near-global satellite data (obs4MIPs) and reanalysis (ana4MIPs).
Planning to incrementally identify a more diverse set of performance tests. The Metrics Panel is collaborating with other activities and receiving recommendations for additional metrics: Global Monsoon Precipitation Index (CLIVAR AAMP), ENSO metrics (CLIVAR Pacific Panel), Working Group on Ocean Model Development, ILAMB (AIMES/IGBP), MJO task force, ...
Courtesy of Peter Gleckler
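A minimal sketch of how the listed statistics can be computed on a model-versus-observation climatology with cosine-latitude area weighting. The function and variable names are illustrative only and do not reflect the PCMDI metrics package API; the toy fields exist just to make the example runnable.

```python
# Minimal sketch of baseline statistics (bias, pattern correlation, RMS error,
# mean absolute error) with cosine-latitude area weighting. Names are
# illustrative, not the PCMDI metrics package API.
import numpy as np

def area_weights(lat_deg):
    """1-D weights proportional to the area of each latitude band."""
    w = np.cos(np.radians(lat_deg))
    return w / w.sum()

def baseline_stats(model, obs, lat_deg):
    """model, obs: 2-D (lat, lon) climatologies on the same grid."""
    w = area_weights(lat_deg)[:, None] / model.shape[1]   # normalize over lon too
    mmean, omean = np.sum(w * model), np.sum(w * obs)
    diff = model - obs
    bias = np.sum(w * diff)
    rmse = np.sqrt(np.sum(w * diff**2))
    mae = np.sum(w * np.abs(diff))
    # Centered (anomaly) pattern correlation
    ma, oa = model - mmean, obs - omean
    corr = np.sum(w * ma * oa) / np.sqrt(np.sum(w * ma**2) * np.sum(w * oa**2))
    return {"bias": bias, "rmse": rmse, "mae": mae, "pattern_corr": corr}

lat = np.arange(-89.5, 90.0, 1.0)
obs = 300.0 - 0.4 * np.abs(lat)[:, None] * np.ones((180, 360))   # toy "obs" field
model = obs + np.random.normal(0.0, 0.5, obs.shape)              # toy "model" field
print(baseline_stats(model, obs, lat))
```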

46 Earth System Models (ESMs): new simulation characteristics to evaluate
Leaf area index (LAI) is one of a variety of ESM variables considered for benchmarking (ILAMB). Many other quantities to consider, e.g., NPP, AOD, [O3].
[Figure: mean annual LAI as simulated by CMIP5 models over land subdomains; Anav et al., 2013, J. Climate.]
Courtesy of Peter Gleckler

47 The critical value of obs4MIPs for benchmarking model performance
A successful benchmarking capability must readily enable reproducibility:
Clarification of which datasets are used for performance metrics.
Dataset versions (updates) documented and clearly traceable.
Observations "technically aligned" with CMIP data and directly accessible via the same mechanism (ESGF).
Ideally, data experts are involved in the process (i.e., not a 3rd party).
(A sketch of a simple provenance record follows below.)
Courtesy of Peter Gleckler
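One simple way to keep the benchmarking trail reproducible is to write a small provenance record alongside each set of metric results, capturing which observational product and version were actually used. The field names and values below are hypothetical examples, not an obs4MIPs or CMIP requirement.

```python
# Illustrative provenance record accompanying a set of benchmark results,
# recording which observational dataset (and version) was used. All field
# names and values are hypothetical examples, not a prescribed schema.
import json
import datetime

provenance = {
    "metric_package": "example-metrics",        # hypothetical package name
    "metric_package_version": "0.1",
    "variable": "pr",
    "reference_dataset": "GPCP (obs4MIPs)",     # example dataset label
    "reference_dataset_version": "v2.2",        # product version actually used
    "reference_access": "ESGF",
    "model": "ExampleModel-1",                  # hypothetical model name
    "experiment": "amip",
    "date_computed": datetime.date.today().isoformat(),
}

with open("benchmark_provenance.json", "w") as f:
    json.dump(provenance, f, indent=2)
```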

48 Prospects for advancing model benchmarking
This year and next: all CMIP modeling groups able to use a common analysis framework for baseline in-house benchmarking against selected obs4MIPs/ana4MIPs datasets. During model development, modelers able to assess the relative strengths and weaknesses of their model by comparing it to other state-of-the-art CMIP models; this can help modelers make more informed decisions on priorities.
Looking ahead: an increasingly diverse and targeted set of benchmarks; an expectation develops that all new simulations are tested against baseline metrics, with results made widely available as soon as model data become public; a benchmarking trail is established, documenting characteristics of all newer models and the observations (and versions) used for benchmarking.
Courtesy of Peter Gleckler

49 Beyond baseline benchmarking: community-based model diagnosis
Much of what has been done to date has been ad hoc. The CMIP DECK experiments encourage more systematic evaluation. Diagnostic packages that target the CMIP DECK (e.g., the ESMValTool under development in the EU EMBRACE project) will facilitate a community-based approach to more in-depth model evaluation. Well-organized observations are essential for all of this!
Courtesy of Peter Gleckler

