1 Jim Boyle, Peter Gleckler, Stephen Klein, Mark Zelinka, Yuying Zhang (Program for Climate Model Diagnosis and Intercomparison / LLNL); Robert Pincus (University of Colorado and NOAA/Earth System Research Laboratory)
Are Climate Model Simulations of Clouds Improving? An Evaluation Using the ISCCP Simulator
LLNL-PRES-635486, April 24, 2013
ISCCP at 30: What Do We Know and What Do We Still Need to Know? City College of New York, New York, New York
Prepared by LLNL under Contract DE-AC52-07NA27344

2 The ISCCP Simulator for Model Evaluation
• From its beginning, ISCCP has been concerned with improving the modeling of clouds in climate models
• Widespread use by climate modelers of the attractive ISCCP data did not occur until “turn-key” software was available for interpreting model clouds in terms of ISCCP observables
• Software was needed to approximately and efficiently answer the question: “What would ISCCP have retrieved for cloud properties if the atmosphere had the clouds of a climate model?”
“[ISCCP’s] basic objective is to collect and analyze satellite radiance data to infer the global distribution of cloud radiative properties in order to improve the modeling of cloud effects on climate” (Schiffer and Rossow, BAMS 1983, Abstract, 2nd sentence)

3 The ISCCP Simulator for Model Evaluation
• The software accounts for issues of sub-grid scale cloud variability, detection thresholds, and retrieval characteristics to create a nominally identical joint histogram of cloud amount by τ and ctp (a sketch of this binning follows below)
• This software (the “ISCCP simulator”) was created around 1997-2001 by Mark Webb and me, expanding upon Yu et al. (1996)
• The potential for the simulator to be a community tool was soon realized
• Besides facilitating the comparison of models to the ISCCP observations, the software makes it more likely that model-observation differences can be attributed to model error rather than observational limitations
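To make the joint-histogram idea concrete, here is a minimal sketch, not the simulator's actual code, of binning per-subcolumn retrievals into an ISCCP-style histogram. The bin edges follow the standard ISCCP convention but should be checked against your simulator version, and the function name is illustrative:

```python
import numpy as np

# ISCCP-convention bin edges; an assumption here, so check them against
# your simulator version. 7 tau bins x 7 cloud-top-pressure bins.
TAU_EDGES = np.array([0.0, 0.3, 1.3, 3.6, 9.4, 23.0, 60.0, 380.0])
CTP_EDGES = np.array([0.0, 180.0, 310.0, 440.0, 560.0, 680.0, 800.0, 1100.0])

def isccp_joint_histogram(tau, ctp):
    """Cloud fraction in each (ctp, tau) bin from subcolumn retrievals.

    tau : column-integrated optical thickness, one value per subcolumn
    ctp : cloud-top pressure (hPa) of the highest cloud per subcolumn
    """
    cloudy = tau > 0.0  # clear subcolumns are left out of the histogram
    counts, _, _ = np.histogram2d(ctp[cloudy], tau[cloudy],
                                  bins=[CTP_EDGES, TAU_EDGES])
    return counts / tau.size  # sums to the grid box's total cloud fraction
```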

4 The ISCCP Simulator’s Success & Impacts
• Every major climate modeling group uses the ISCCP simulator to compare their simulated clouds to ISCCP data
Unexpected outcomes of the ISCCP simulator:
• Inter-model comparison of cloud properties has been placed on a firmer basis (Zhang et al. 2005 among others)
– ISCCP simulator diagnostics are requested output in climate model intercomparison projects (CMIP)
• It inspired the creation of other satellite simulators for CloudSat, Calipso, MODIS, and MISR, leading to COSP (Bodas-Salcedo et al. 2011)
• ISCCP simulator output has been used to understand cloud feedbacks to climate change (Webb et al. 2006, Williams et al. 2006, Ringer et al. 2006, Williams and Tselioudis 2007, Williams and Webb 2009, Zelinka et al. 2012a, 2012b, 2013, among others)

5 Are the simulations of clouds improving?
• We examine two recent climate model ensembles:
– CFMIP1 (~CMIP3/AR4): 9 models, ca. 2005
– CFMIP2 (CMIP5/AR5): 10 models, ca. 2012
• Using satellite cloud climatologies from ISCCP and MODIS, we examine model simulations of:
– Cloud amount
– Column-integrated cloud optical thickness τ
– Cloud-top pressure (ctp) of the highest cloud in a column
• We only use ISCCP simulator output from the models
• We only examine results for the region 60°N-60°S

6 How often does a cloud occur? (τ > 1.3)
[figure: maps of total cloud amount (τ > 1.3) from the models and from satellite observations]
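The τ > 1.3 cloud amount shown here can be read off the joint histogram by summing the bins above the threshold. A small hypothetical helper, reusing the NumPy histogram layout sketched earlier:

```python
def cloud_amount_above(hist, tau_edges, tau_min=1.3):
    """Total cloud fraction in histogram bins with tau at or above tau_min.

    hist      : joint histogram, shape (n_ctp_bins, n_tau_bins)
    tau_edges : tau bin edges, length n_tau_bins + 1
    """
    keep = tau_edges[:-1] >= tau_min  # select tau bins by their lower edge
    return hist[:, keep].sum()
```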

7 Multi-model means can hide improvements in individual models
[figure: Hadley Centre models vs. satellite observations]

8 Models show improved simulations of τ

9 Models show improved simulations of τ

10 Models show improved simulations of τ

11 How often do optically thick clouds occur? (τ > 23)
[figure: maps of optically thick cloud amount (τ > 23) from the models and from satellite observations]

12 Improvement in the amount of optically thick cloud is widespread
[figure: fractional area covered by clouds with τ > 23, by model family: CCCMA (AGCM4, CanAM4), MIROC (MIROC lo-sens, MIROC hi-sens, MIROC5), NSF-DOE-NCAR (CCSM3.0, CCSM4, CESM1 (CAM5)), MOHC (HadSM3, HadSM4, HadGSM1, HadGEM2-A), NOAA GFDL (GFDL MLM2.1, GFDL-CM3)]

13 Compensating errors are reduced
• Models tend to have cloud amounts lower than observed but τ greater than observed (the “too-few, too-bright” problem)
• Cloud radiative kernels (Zelinka et al. 2012a) permit quantification of the radiative impact of biases in ISCCP simulator clouds relative to observations (see the sketch below)
[figure: biases in reflected shortwave as a function of τ, in W m⁻² (absolute values; positive bias = solid line, negative bias = dashed line), for the CFMIP1 and CFMIP2 multi-model means, NOAA GFDL (MLM2.1, CM3), and NSF-DOE-NCAR (CCSM3.0, CESM1 (CAM5))]
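As a hedged sketch of the kernel calculation, assuming the kernel gives W m⁻² per unit cloud fraction in each (CTP, τ) bin as in Zelinka et al. (2012a), with illustrative names throughout:

```python
import numpy as np

def sw_bias_by_tau(kernel_sw, hist_model, hist_obs):
    """Reflected-shortwave bias contributed by each tau bin (W m-2).

    kernel_sw            : radiative sensitivity of each (ctp, tau) bin,
                           in W m-2 per unit cloud fraction
    hist_model, hist_obs : ISCCP-simulator joint histograms, (ctp, tau)
    """
    # Kernel-weighted cloud-fraction bias, summed over the ctp axis to
    # leave the bias as a function of tau, as plotted on this slide.
    return np.sum(kernel_sw * (hist_model - hist_obs), axis=0)
```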

14 Can we demonstrate quantitative improvement in cloud simulations?
• Scalar measures of the fidelity of model simulations (a.k.a. “metrics”) have been computed for the models
E_TCA is the normalized RMS error in the simulation of 60°N-60°S total cloud amount (τ > 1.3) as a function of location and month. Normalization is by the space-time variability in the observations (sketched below).
[figure: E_TCA for the CAM, CCCMA, GFDL, MIROC, and MOHC model families and the mean model]
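A minimal sketch of a metric like E_TCA as defined above, under the assumption that the normalization is the area-weighted space-time standard deviation of the observations:

```python
import numpy as np

def normalized_rmse(model, obs, weights):
    """Normalized space-time RMS error, in the spirit of E_TCA.

    model, obs : total cloud amount fields, dims (month, lat, lon)
    weights    : weights broadcastable to model.shape, e.g. cos(latitude)
    """
    w = np.broadcast_to(weights, model.shape).astype(float)
    w = w / w.sum()                                 # normalize the weights
    mse = np.sum(w * (model - obs) ** 2)            # weighted mean square error
    var = np.sum(w * (obs - np.sum(w * obs)) ** 2)  # weighted obs variance
    return np.sqrt(mse / var)
```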

15 Can we demonstrate quantitative improvement in cloud simulations?
• A measure of τ and cloud-top pressure (CTP) shows greater improvement
E_CTP-τ is the normalized RMS error in the simulation of 60°N-60°S amounts of clouds in the 6 optically intermediate and thick cloud bins of the joint CTP-τ histogram as a function of location and month. Normalization is by the space-time variability in the observations.
[figure: E_CTP-τ for the CAM, CCCMA, GFDL, MIROC, and MOHC model families and the mean model; figure credit Swati Gehlot]
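The same helper could serve for E_CTP-τ by treating the six histogram bins as extra grid dimensions. The slide fixes which bins enter the metric, but the stacking below is an assumption:

```python
# Hypothetical usage: hist_model and hist_obs carry dims
# (month, lat, lon, ctp_bin, tau_bin), restricted beforehand to the six
# optically intermediate and thick bins; weights has dims (month, lat, lon)
# and is broadcast over the two histogram axes.
e_ctp_tau = normalized_rmse(hist_model, hist_obs, weights[..., None, None])
```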

16 Is there a relationship between skill and the cloud response to climate change?
[figure: CFMIP1 models and CFMIP2 models]
• Within each model ensemble (but not when the two are combined!), both the global mean net and shortwave cloud feedbacks are higher in models with smaller cloud property errors
• However, more work would be needed to determine whether this relationship is truly significant and how it could be explained physically

17 Are the simulations of clouds improving?
• Yes, broadly speaking, particularly for simulations of τ, for which models simulate less optically thick cloud and more optically intermediate cloud
• There is no widespread improvement in simulations of cloud amount or cloud-top pressure, but individual models do differ
• Improved simulations of τ lead to smaller “too few, too bright” compensating errors

18 Why are τ simulations improving? It’s hard to be sure because τ diagnostics are not generally saved or analyzed. Still, here’s some informed speculation:
• Cloud microphysics? ✓
– Better mixed-phase cloud microphysics reduces the overestimate of super-cooled liquid
• Improved cloud radiative properties? ✓
– The Monte-Carlo Independent Column Approximation (Pincus et al. 2003) seems to have made a difference
• Increased horizontal resolution? ✗
• Increased vertical resolution? ✓
– Physically thinner clouds encourage optically thinner clouds
It’s likely that a combination of factors is responsible.

19 Thank you!

20 Why “satellite simulators” for clouds?
• Diagnosing cloud processes in climate models with observations is difficult and fraught with issues:
– Correspondence of model quantities to available observations
– Limitations of satellite observations
– Scale of model grids (~100 km) vs. satellite pixels (~1 km)
• Simulators reduce the effects of these issues so that comparisons between models and observations are more likely an evaluation of the model
• The simulator is a piece of diagnostic code that mimics the observational process by converting model variables into pseudo-satellite observations
– What would a satellite see if the atmosphere had the clouds of a climate model?

21 Simulator conceptual diagram
• A simulator addresses issues of (a toy illustration follows below):
– Cloud overlap (column-integrated τ and cloud-top pressure p_ct of the highest cloud in the column)
– Detection thresholds (τ ≥ 0.3, dBZ > -25, SR > 5)
– Retrieval characteristics (different ways to calculate p_ct)
[diagram: climate model fields pass through the satellite simulator (downscaler & forward model, then a retrieval algorithm) to produce simulated radiances and simulated fields, in parallel with the satellite instrument, whose observed radiances pass through a retrieval algorithm to produce retrieved fields; the simulated and retrieved fields are then compared (“Agree?”)]
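As a toy illustration of the first two bullets, the sketch below forms a subcolumn's (τ, p_ct) pair and applies the τ ≥ 0.3 detection threshold. The real simulator derives p_ct from simulated radiances (one of the “retrieval characteristics” above), so using the physical cloud top here is a deliberate simplification:

```python
import numpy as np

def subcolumn_tau_ctp(tau_layers, p_layers, tau_min=0.3):
    """Toy (tau, p_ct) retrieval for a single subcolumn.

    tau_layers : optical thickness of each model layer, ordered top-down
    p_layers   : layer pressures (hPa), ordered top-down
    Returns (column tau, cloud-top pressure), or None when the column
    falls below the detection threshold and counts as clear.
    """
    tau_col = tau_layers.sum()              # column-integrated tau
    if tau_col < tau_min:                   # ISCCP-like detection threshold
        return None
    top = int(np.argmax(tau_layers > 0.0))  # highest cloudy layer
    return tau_col, p_layers[top]           # physical cloud top stands in
                                            # for the radiance-based p_ct
```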

22 CFMIP Observation Simulator Package
• About 5 years ago, the CFMIP community came together to form a community software package of simulators
• COSP has simulators for 5 satellite cloud products: ISCCP, MISR, MODIS, CloudSat, and Calipso
• All major climate models use COSP
– Also used in global models with very high resolution (~10 km)
– Most have put the code in-line in their model
• A matching set of observations for each simulator has been specially prepared in ESG-compatible format
[figure: COSP flow chart, Bodas-Salcedo et al. (2011)]

