2013 DOE/EU Retrieval Workshop, Köln
“Common Algorithm Evaluation Approaches” Session
Shaocheng Xie, Kerstin Ebell, Dave Turner, and Ulrich Löhnert
EU/DOE Ground-based Cloud and Precipitation Retrieval Workshop, May 2013, Cologne, Germany
Goals:
– Identify common algorithm evaluation approaches for retrieval development and uncertainty quantification
– Identify group activities to address the challenging issues that arose from previous intercomparison studies

What Will Be Covered?
Common algorithm evaluation approaches:
– Observation System Simulation Experiments (OSSEs)
– In-situ comparisons
– Radiative closure
– Comparison with other retrieval datasets (satellite, other instruments)
– Intercomparison of retrievals
Talks (~40 minutes):
– Dave Turner: Some examples of using these approaches to evaluate cloud retrievals
– Shaocheng Xie: Questions for discussion
Discussion (~50 minutes)

Earlier Efforts Made by EU/DOE
Major cloud retrieval intercomparison studies:
EU: EG-CLIMET
– Löhnert, Donovan, Ebell, et al. (2013): Assessment of ground-based cloud liquid water profiling retrieval techniques (3 algorithms; Continental Stratus – CABAUW; Maritime Stratocumulus – ASTEX; liquid only; both OSSEs and real cases)
DOE:
– Comstock et al. (2007): High-level ice clouds (16 algorithms, SGP, March 2000 IOP)
– Turner et al. (2007): Optically thin liquid clouds (18 algorithms, SGP, March 2000 IOP)
– Shupe et al. (2008): Mixed-phase clouds (8 algorithms, NSA, M-PACE)

Key Findings from These Studies
– Limitations of the instruments, and uncertainties in the measurements and in the assumptions used in the algorithms, account for a significant portion of the differences
– The accuracy varies greatly with instrument, analysis method, and cloud type
– No single retrieval method works properly for all instruments and all cloud conditions

Evaluating Retrieval Algorithm Results
DOE/EU Ground-based Cloud and Precipitation Retrieval Workshop

Evaluating Different Retrievals: Which is Better? (1)
Observation System Simulation Experiments (OSSEs)
– Start with known atmospheric conditions
– Use a forward model to compute radiance/backscatter, then add realistic random noise to create the ‘observations’
– Retrievals applied to these observations can be compared against a known truth
– Do the simulations cover the entire range of possible conditions?
– Assumes that the forward model is ‘perfect’
– Biases are generally not evaluated here
– However, careful, well-constructed simulations can be quite illuminating, especially when comparing the sensitivities of different instruments and techniques
A minimal sketch of the OSSE workflow follows this list.
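To make the OSSE idea concrete, here is a minimal, self-contained Python sketch. The linear forward model, its coefficients, and the 0.5 K noise level are illustrative assumptions, not a real radiative-transfer code or any retrieval discussed in this session.

```python
"""Toy OSSE: truth -> forward model -> noisy 'observations' -> retrieval."""
import numpy as np

rng = np.random.default_rng(0)

# 1. Known truth: a population of liquid water paths (g m^-2)
lwp_true = rng.uniform(20.0, 150.0, size=1000)

# 2. Toy forward model: brightness temperature as a linear function
#    of LWP (the coefficients are invented for illustration)
def forward(lwp):
    return 15.0 + 0.26 * lwp

# 3. Create the 'observations' by adding realistic random noise
#    (0.5 K is a plausible MWR radiometric uncertainty)
tb_obs = forward(lwp_true) + rng.normal(0.0, 0.5, lwp_true.size)

# 4. Apply the retrieval; here the forward model is assumed 'perfect',
#    so the retrieval is its exact inverse
lwp_ret = (tb_obs - 15.0) / 0.26

# 5. Compare against the known truth
bias = np.mean(lwp_ret - lwp_true)
rmse = np.sqrt(np.mean((lwp_ret - lwp_true) ** 2))
print(f"bias = {bias:.2f} g/m^2, rmse = {rmse:.2f} g/m^2")
```

Because the same forward model is used to simulate and to invert, the residual error is purely noise-driven, which is exactly the slide's caveat: OSSEs characterize random sensitivity well but generally do not expose biases.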

Assessment of Ground-based Cloud Liquid Water Profiling Retrieval Techniques
Ulrich Löhnert (University of Cologne), Dave Donovan (KNMI), Kerstin Ebell (University of Cologne), Giovanni Martucci (Galway University), Simone Placidi (TU Delft), Christine Brandau (TU Delft), Ewan O’Connor (FMI/University of Reading), Herman Russchenberg (TU Delft)

European COST Action EG-CLIMET (ES, www.eg-climet.org)
European Ground-based Observations of Essential Variables for Climate and Operational Meteorology
16 countries, 13 national weather services
MISSION: To recommend an optimum European network of economical and unmanned ground-based profiling stations for observing winds, humidity, temperature, and clouds (together with associated errors), for use in evaluating climate and NWP models on a global and high-resolution (typically 1 km) scale, and ultimately for assimilation into NWP.

Objective of This Study
Evaluate current liquid cloud profiling techniques: identify errors, discuss assumptions, correct for errors → recommendations for an optimal retrieval method
Simulation case: “truth” known → direct evaluation; the measurements (radar, lidar, microwave radiometer) are simulated with ECSIM (the EarthCARE Simulator)
Real case: application to real measurements; evaluation via radiative closure (SW, LW fluxes)

Overview of Measurements and Parameters
Measurements used:
– Lidar/cloud radar: cloud base & top (Cloudnet TC)
– Z: cloud radar reflectivity factor (dBZ)
– MWR: brightness temperature TB (K)
– LWP: MWR liquid water path (g m⁻²)
Parameters to be retrieved:
– LWC: liquid water content (g m⁻³)
– Reff: cloud droplet effective radius (μm)
– N0: cloud droplet number concentration (cm⁻³)
Two unit relations recur throughout; see the helper sketch below.
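Two bookkeeping relations tie these quantities together: reflectivity is reported in dBZ but all moment arithmetic needs linear Z, and LWP is the vertical integral of LWC. A minimal helper sketch, assuming a regular height grid (function names are ours, for illustration):

```python
import numpy as np

def dbz_to_z(dbz):
    """dBZ -> linear reflectivity factor Z (mm^6 m^-3)."""
    return 10.0 ** (np.asarray(dbz) / 10.0)

def lwp_from_lwc(lwc, dz):
    """LWP (g m^-2) as the vertical integral of LWC (g m^-3)
    over range gates of spacing dz (m)."""
    return float(np.sum(lwc) * dz)
```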

Retrievals
BRANDAU (Brandau et al. / Frisch et al.) → retrieves LWC(z), Reff(z), and N0
– Input: MWR LWP and radar Z, cloud boundaries
– Uni-modal drop size distribution; relation between the 2nd and 3rd moments of the DSD (Brenguier et al. 2011)
Cloudnet (O’Connor et al.) → retrieves LWC(z)
– Input: MWR LWP, cloud boundaries & temperature
– Linearly scaled adiabatic LWC, non-drizzle
IPT (Löhnert et al. / Ebell et al.) → retrieves LWC(z), Reff(z)
– Input: MWR TB, radar Z and a priori LWC, cloud boundaries, Cloudnet TC
– Minimizes a cost function to fit TB, the a priori LWC profiles, and a radar Z–LWC relation; Reff according to Frisch et al. (2002)
All retrievals use common “measurements” and cloud boundaries. A sketch of the Frisch-style Z-weighting follows.
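As an illustration of the Z-weighted scaling behind the Frisch-style retrieval family, here is a hedged sketch: the MWR LWP is distributed over the cloud layer with weights √Z, which follows from a uni-modal, non-drizzling DSD with height-constant droplet number. The grid spacing and the example profile are invented; this is not the exact BRANDAU implementation.

```python
import numpy as np

def lwc_profile_frisch(dbz, lwp, dz):
    """Frisch-style LWC(z) in g m^-3: LWP distributed with sqrt(Z) weights.

    dbz : reflectivity factor profile inside the cloud (dBZ)
    lwp : liquid water path from the MWR (g m^-2)
    dz  : radar range-gate spacing (m)
    """
    z_lin = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> mm^6 m^-3
    w = np.sqrt(z_lin)                          # 6th -> 3rd moment scaling
    return lwp * w / (dz * w.sum())

# Example: a 10-gate stratus layer, 30-m gates, LWP = 80 g m^-2
dbz = np.linspace(-35.0, -20.0, 10)   # reflectivity increasing with height
lwc = lwc_profile_frisch(dbz, lwp=80.0, dz=30.0)
print(lwc, lwc.sum() * 30.0)          # profile integrates back to the LWP
```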

Simulated Case #1: Continental Stratus
One continental case (CABAUW) simulated with LES: bulk microphysics only (LWC), a reasonable uni-modal drop size distribution assumed, non-drizzling, liquid only

CABAUW Case: Cloudnet
Large random error due to the linear scaling: random error 50–60%, systematic error ~0%
[Figure: retrieved vs. true LWC (g m⁻³) profiles]
A sketch of the scaled-adiabatic shape assumption follows.
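For contrast with the Z-weighted approach above, here is a sketch of a linearly scaled adiabatic LWC in the Cloudnet spirit: the shape is fixed (linear increase from cloud base) and only scaled so its integral matches the MWR LWP, so radar Z plays no role. That is why in-cloud variability is not captured and the random error is large. The purely linear shape is our simplification; the true adiabatic increase depends on temperature and pressure.

```python
import numpy as np

def lwc_scaled_adiabatic(z, z_base, z_top, lwp):
    """LWC(z) with a fixed linear-in-height shape, scaled to the MWR LWP.

    z (m) : height grid; z_base, z_top (m) : cloud boundaries
    lwp   : MWR liquid water path (g m^-2)
    """
    shape = np.clip(z - z_base, 0.0, None)   # linear rise from cloud base
    shape[z > z_top] = 0.0                   # zero above cloud top
    dz = np.gradient(z)
    return shape * lwp / np.sum(shape * dz)  # normalize so integral = LWP
```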

CABAUW Case: Brandau LWC & Reff
LWC: random error <10%, systematic error <10%
Reff: random error <5%, systematic error ~10%
[Figure: retrieved (red) vs. “true” (black) profiles of LWC (g m⁻³) and Reff]

CABAUW Case: Brandau with LWP Error
LWP error: +25 g m⁻² (for 20 < LWP < 50 g m⁻²) → LWP accuracy is crucial!
LWC: random error <5%, systematic error ~15%
Reff: random error ~10%, systematic error ~50%
[Figure: retrieved vs. true profiles of LWC (g m⁻³) and Reff]

CABAUW Case: IPT LWC & Reff
LWC: random error <15%, systematic error 20–35%
Reff: random error ~5%, systematic error up to 70%
[Figure: retrieved (red) vs. “true” (black) profiles of LWC (g m⁻³) and Reff]

Simulated Case #2: Maritime Stratocumulus
One maritime case (ASTEX) simulated with LES: spectrally resolved microphysics, low LWP (<100 g m⁻²), partially drizzling, liquid only

ASTEX: Brandau
LWC: random error 20–50%, systematic error ~50%
Reff: random error >100%, systematic error ~50%
The drop size distribution is no longer uni-modal → a small number of drizzle droplets leads to Reff overestimation
[Figure: retrieved (red) vs. “true” (black) profiles of LWC (g m⁻³) and Reff]

ASTEX: IPT
LWC: random error 30–50%, systematic error <30%
Reff: random error ~50%, systematic error <60%
Fairly robust LWC profile in the drizzle-“contaminated” region
[Figure: retrieved (red) vs. “true” (black) profiles of LWC (g m⁻³) and Reff]

Evaluating Different Retrievals: Which is Better? (2)
Comparisons with other retrieved datasets
– Which is truth?
[Figure: different retrievals applied to single-layer warm liquid clouds]

Evaluating Different Retrievals: Which is Better? (3)
Comparisons against in-situ observations
– Sampling volumes can be quite different
– Temporal and spatial sampling issues
– Statistics are key
– Aircraft are expensive
– In-situ obs aren’t necessarily truth!
[Figure: radar sampling-volume schematic, ~45 m × 30 m, with advection in m/s]
A sketch of one way to handle the sampling mismatch follows.
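One common mitigation for the volume mismatch is to average the high-rate aircraft samples onto the retrieval's time-height grid before comparing. A hedged sketch; the ±30 s / ±15 m window is a hypothetical choice, not a community standard:

```python
import numpy as np

def match_insitu(t_ret, z_ret, t_ac, z_ac, lwc_ac, dt=30.0, dz=15.0):
    """Mean in-situ LWC matched to one retrieval point (time t_ret, height
    z_ret), averaging all aircraft samples within +/-dt seconds and
    +/-dz meters; NaN if no sample falls inside the window."""
    sel = (np.abs(t_ac - t_ret) <= dt) & (np.abs(z_ac - z_ret) <= dz)
    return lwc_ac[sel].mean() if sel.any() else np.nan
```

Averaging many 1-Hz samples into each window is one reason statistics, rather than point-by-point matches, carry the comparison.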

Evaluating Different Retrievals: Which is Better? (4)
Closure exercises can provide a robust test if the closure variable is independent of the retrieval
Broadband fluxes are critical for ARM, so agreement in radiative flux is a nice metric to use
Compute broadband fluxes to compare with the observed fluxes at the surface
– Use retrieved cloud properties as input (what we want to test)
– Ancillary observations are also needed (T/q profiles, surface albedo, aerosol properties, etc.)
– Cloud fraction is an important modulator of the observed flux, so cases must be selected that are ‘homogeneously’ overcast
– Generally evaluate the improvement in RMS, not in bias
This is a necessary, but not sufficient, closure exercise. A sketch of the closure metric follows.
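The closure metric itself is simple. A sketch, assuming per-case arrays of computed and observed surface SW fluxes plus a cloud-fraction screen; the 0.95 overcast threshold is an illustrative choice:

```python
import numpy as np

def closure_stats(f_calc, f_obs, cloud_frac, thresh=0.95):
    """Bias and RMS of (computed - observed) broadband flux (W m^-2),
    restricted to 'homogeneously overcast' cases."""
    sel = cloud_frac > thresh            # keep only overcast scenes
    d = f_calc[sel] - f_obs[sel]         # flux residuals
    return d.mean(), np.sqrt((d ** 2).mean())
```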

Radiative Closure Example
Using ARM Mobile Facility data from Pt. Reyes, California, in July–August 2005
– Overcast ~85% of the time
– Clouds were low altitude (~500 m) and warm (i.e., liquid only)
– Very few cases with overlapping higher-altitude clouds
Evaluated two different passive retrieval methods
– Combined MWR+AERI retrieval of LWP and Reff
– MWR-only retrieved LWP (using the AERI+MWR retrieval for Reff)
Comparison in SW radiative flux at the surface and TOA
– Should not use LW flux, since AERI measures LW radiance
– Compare both the BIAS and RMS of the flux differences
(Turner, JGR, 2007)

2-Day Example from Pt. Reyes
(Turner, JGR, 2007)

2-Day Example from Pt. Reyes
The combined AERI+MWR retrieval has a smaller bias in the SW surface flux
(Turner, JGR, 2007)

Surface SW Closure Exercise
– Similar exercise to the LW closure
– No aerosols included in the calculations
– MIXCRA shows a negative bias, but a small amount of aerosol would improve its results (and worsen the MWRRET results)
– Variance in the MIXCRA results is much lower than in the MWRRET results for LWP below 120 g m⁻²
(Turner, JGR, 2007)

TOA SW Closure Exercise
– Similar exercise to the surface SW closure
– No aerosols included in the calculations
– Both methods show a negative bias, but a small amount of aerosol would improve the results slightly
– Unable to get agreement at both the surface and the TOA by changing LWP
– Variance in the MIXCRA results is much lower than in the MWRRET results for LWP below 100 g m⁻²
(Turner, JGR, 2007)

Questions for Discussion
– What are the major issues in using these evaluation approaches for algorithm development and uncertainty quantification?
– What are the strategies for better using these algorithm evaluation approaches?
– What are the key areas where the EU/DOE retrieval community needs to work together to improve algorithm evaluation approaches?
– What are our future plans?

Some Thoughts on These Questions

Q#1: What are the major issues in using these evaluation approaches for algorithm development and uncertainty quantification?
No single approach is perfect:
– OSSEs: simulations may not cover the entire range of possible conditions, and the forward model is not perfect
– In-situ data: sampling issues, uncertainties, limited cases
– Radiative closure: uncertainties in the other input data and in the surface/TOA radiative fluxes; cannot evaluate the vertical structure of cloud properties
– Intercomparison of different retrievals: differences in retrieval basis, parameters, and underlying assumptions, as well as in input and constraint parameters
– Comparison with other retrievals (e.g., MFRSR, satellite): none of them is truth!

Q#2: What are the strategies for better using these algorithm evaluation approaches?
Need to identify what types of retrievals are of interest to the EU/DOE joint effort
– We may focus only on algorithms that retrieve cloud properties from radar, lidar, and radiometer, since these instruments are available at both ARM and European sites and provide long-term continuous retrieval data
Uncertainty is large in both the retrieved products and the in-situ observations
– Statistics could reduce the uncertainty
– Case studies vs. statistical evaluations
Can new instruments help?
What is critically needed for algorithm development and uncertainty quantification? What is critically needed by the modeling community?
– Error bars; statistics for various types of clouds

Q#2 (cont.): What are the strategies for better using these algorithm evaluation approaches?
Possible improvements:
– OSSEs: develop more OSSE cases covering various types of clouds
– In-situ data: statistics are key – need long-term continuous observations to build up statistics for different cloud types
– Radiative closure: facilitate the evaluation of retrievals using BBHRP
– Intercomparison of different retrievals: a common input dataset and a consistent set of assumptions
– Comparison with other retrievals (e.g., MFRSR): is there a consensus in the community on which instruments or retrievals are more reliable for a particular cloud parameter? Compare with retrievals from new instruments?

Q#2 (cont.): What are the strategies for better using these algorithm evaluation approaches?
Develop a cloud retrieval testbed
– Suitable for both case studies and statistical evaluation
– Combines the strengths of these common algorithm evaluation approaches
– Build up a cloud retrieval test-case library that includes OSSE cases as well as radar, lidar, and radiometer measurements co-located with reference in-situ microphysical parameters
– Build up a common input dataset and use consistent assumptions for key parameters in current retrieval algorithms
– Make use of BBHRP
– Quantify uncertainties in the validation datasets
– Develop statistics for each cloud type based on long-term observations

Q#3: What are the key areas where the EU/DOE retrieval community needs to work together to improve algorithm evaluation approaches?
Develop the cloud retrieval testbed
– Data sharing
– Algorithm sharing
Intercomparison studies on both retrieval algorithms and forward models

Q#4: What are our future plans?
Collaborate with other existing science focus groups (e.g., ASR QUICR, IcePro; EG-CLIMET)
Develop the cloud retrieval testbed
Intercomparison studies
– Retrievals
– Forward models
– Key assumptions
Future workshops (coordinated with ASR and AGU meetings)

Discussion on Common Algorithm Evaluation Approaches
Questions:
– What are the major issues in using these evaluation approaches for algorithm development and uncertainty quantification?
– What are the strategies for better using these algorithm evaluation approaches?
– What are the key areas where the EU/DOE retrieval community needs to work together to improve algorithm evaluation approaches?
– What are our future plans?
