
1. Characterization of Aura-TES (Tropospheric Emission Spectrometer) Nadir and Limb Retrievals
H. Worden, ASSFTS 11, October 8-10, 2003
Helen Worden, Reinhard Beer, Kevin Bowman, Annmarie Eldering, Mingzhao Luo, Gregory Osterman, Susan Sund Kulawik, John Worden (Jet Propulsion Laboratory, California Institute of Technology)
Shepard A. Clough, Mark Shephard (AER, Atmospheric and Environmental Research, Inc.)
Michael Lampel (Raytheon Systems Co., ITSS)
Clive Rodgers (Oxford University)

2. The Tropospheric Emission Spectrometer (TES) on the Aura Platform
Nadir View
- Advantages: lower probability of cloud interference; good horizontal spatial resolution.
- Disadvantages: limited vertical resolution.
- Primary species: T(atm), H2O, O3, CH4, CO
Limb View
- Advantages: good vertical resolution; enhanced sensitivity for trace constituents.
- Disadvantages: higher probability of cloud interference; poorer line-of-sight spatial resolution.
- Primary species: nadir species plus HNO3, NO, NO2

3. TES Data Processing Steps
- Level 1A: produces geolocated interferograms.
- Level 1B: produces radiometrically and frequency calibrated radiance spectra and estimated NESR (noise-equivalent spectral radiance).
- Level 2: produces species abundance and temperature profiles. For cloud-free nadir views, Level 2 also produces surface temperatures and emissivities.
- Level 3: produces global maps for each standard pressure level.

4. L1B Products
[Figure: example L1B radiance spectra at 0.07 cm-1 and 0.0175 cm-1 spectral resolution.]

5. Level 2 Algorithms
- Target Scene Selection: logic for selecting which scenes to process and the order of processing (e.g., nadir ocean first).
- Retrieval: algorithms for a single "target", either limb or nadir, where nadir is the average of 2 scans (same ground location) in the Global Survey. The following slides describe the retrieval strategy & suppliers, ELANOR (Earth Limb And Nadir Operational Retrieval), and error analysis.
- Product Generation: L2 standard products for each Global Survey; temporal & spatial averaging.

6. Level 2 Retrieval Strategy & Suppliers
Retrieval suppliers (inputs for a single target scene):
- Appropriate pressure grid, including the surface for nadir targets
- Atmospheric & surface Full State Vector (FSV) initial guess: T and H2O from meteorological data; other trace gases from climatology data; land surface emissivity derived from surface-type data
- Microwindows (small spectral frequency ranges used for retrieval)
- Mapping of the FSV to retrieval parameters (and back to the FSV)
- A priori vectors and covariance matrices, or other constraint types
- L1B spectral averaging, apodization & microwindow extraction
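The microwindow extraction step can be pictured as a simple slice of the calibrated spectrum onto a small frequency range. This is an illustrative numpy sketch, not the operational supplier code; the 0.07 cm-1 grid spacing is borrowed from the L1B slide, and the window bounds are hypothetical.

```python
import numpy as np

def extract_microwindow(freq, radiance, nu_min, nu_max):
    """Return the portion of a spectrum inside one microwindow [nu_min, nu_max] (cm-1)."""
    mask = (freq >= nu_min) & (freq <= nu_max)
    return freq[mask], radiance[mask]

# Toy spectrum on a 0.07 cm-1 grid (the nadir resolution quoted on the L1B slide)
freq = np.arange(980.0, 1080.0, 0.07)
rad = np.ones_like(freq)
mw_freq, mw_rad = extract_microwindow(freq, rad, 1040.0, 1045.0)
```

In practice each retrieval step would extract several such windows, chosen to isolate lines of the target species while minimizing interferents.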

7. Level 2 Retrieval Strategy & Suppliers
Retrieval strategy (determine the appropriate retrieval steps):
- FSV input for a cloud-free target
- Cloud determination step
- Revision of the input if a cloud is detected

8. Global Land Use Database
Operational Support Product (OSP) used by the surface emissivity supplier.
Reference: http://edcdaac.usgs.gov/glcc/glcc.html (Land Processes Distributed Active Archive Center)

9. Level 2 Nadir Cloud Detection
- Cloud detection will use tests of the radiances in the "window" region (11 µm).
- L1B will flag large variability of this radiance across detectors, so that nadir scenes with broken clouds or a variable surface are rejected for L2.
- The retrieval strategy must make an initial estimate of the clear-sky radiance in order to distinguish uniformly cloudy scenes (i.e., a cloud-filled footprint) from clear ones.
[Figure: cloudy-minus-clear brightness temperature differences vs. optical depth for various cloud altitudes.]
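A window-region test of this kind can be illustrated by converting radiances to brightness temperature and thresholding against the clear-sky estimate. In this sketch the 909 cm-1 channel, the 10 K threshold, and the scene temperatures are illustrative assumptions, not operational TES values.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(temp_k, nu_m):
    """Planck radiance at wavenumber nu_m (m^-1), in W m^-2 sr^-1 (m^-1)^-1."""
    return 2.0 * H * C**2 * nu_m**3 / np.expm1(H * C * nu_m / (KB * temp_k))

def brightness_temperature(radiance, nu_m):
    """Invert the Planck function to a brightness temperature (K)."""
    return (H * C * nu_m / KB) / np.log1p(2.0 * H * C**2 * nu_m**3 / radiance)

def is_uniform_cloud(measured_radiance, clear_sky_radiance, nu_m, threshold_k=10.0):
    """Flag a footprint as cloud-filled when the measured window-channel brightness
    temperature falls more than threshold_k below the clear-sky estimate.
    The 10 K default is an illustrative choice, not the operational value."""
    dt = (brightness_temperature(clear_sky_radiance, nu_m)
          - brightness_temperature(measured_radiance, nu_m))
    return dt > threshold_k

nu = 90900.0                        # ~11 um window channel (909 cm-1, in m^-1)
clear = planck_radiance(290.0, nu)  # clear-sky estimate from a warm surface
cloudy = planck_radiance(250.0, nu) # radiance from a cold cloud top
```

The uniform-scene check matters because a broken-cloud footprint gives an intermediate radiance that passes neither the clear nor the cloud-filled model; those scenes are rejected upstream by the L1B variability flag.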

10. Limb Cloud Scenarios
[Figure: three limb cloud scenarios: (1) cloud before or after the tangent point; (2) cloud filling half of the pixel; (3) cloud filling most of the pixel.]

11 H. Worden, ASSFTS 11 October 8 - 10, 2003 cloudy partly cloudy clear Detector 0 Detector 15 modeled rays L ray_0 = L det_0 (measured) L ray_1 = L det_1 (measured) L ray_2 = L det_2 (measured) Approach for Including Cloud Radiances in FOV Integration

12. Level 2 Algorithms: ELANOR (Earth Limb And Nadir Operational Retrieval)
ELANOR is the algorithm for a single retrieval step. A retrieval step is for a subset of the retrieved species, using spectral "microwindows" of the L1B data. It performs a non-linear least-squares fit with iterations of:
- Forward model (FM) spectrum & Jacobian generation for the current estimate of the atmospheric and surface FSV (full state vector): ray tracing; optical depth calculation (tables or line-by-line); radiative transfer; ILS (instrument line shape) convolution; limb only: FOV (field of view) integration
- Inversion (FM compared to L1B data) for the retrieval parameters and update of the FSV, using the Levenberg-Marquardt algorithm
Algorithm inheritance from LBLRTM (AER) and SEASCRAPE (JPL).
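The ILS convolution step can be sketched for an idealized, unapodized FTS, where boxcar truncation of the interferogram at maximum optical path difference L gives a sinc line shape, ILS(v) = 2L sinc(2Lv). The OPD, grid spacing, and truncation width below are illustrative; the operational ELANOR ILS handles apodization and instrument effects not shown here.

```python
import numpy as np

def sinc_ils(freq, opd_cm):
    """Idealized unapodized FTS line shape for maximum OPD opd_cm (cm):
    ILS(v) = 2L sinc(2 L v), with v in cm-1 and numpy's normalized sinc."""
    return 2.0 * opd_cm * np.sinc(2.0 * opd_cm * freq)

def convolve_ils(spectrum, dv, opd_cm, half_width_cm=2.0):
    """Convolve a monochromatic spectrum (grid spacing dv, cm-1) with the
    sinc ILS truncated at +/- half_width_cm and normalized to unit area."""
    n = int(round(half_width_cm / dv))
    kernel = sinc_ils(np.arange(-n, n + 1) * dv, opd_cm)
    kernel /= kernel.sum()          # unit area so flat spectra are preserved
    return np.convolve(spectrum, kernel, mode="same")

# Smooth a toy monochromatic spectrum with an assumed 8 cm OPD
mono = np.ones(2001)
smoothed = convolve_ils(mono, dv=0.01, opd_cm=8.0)
```

Normalizing the truncated kernel to unit area keeps a spectrally flat radiance unchanged away from the edges, which is a convenient sanity check on any ILS implementation.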

13. Level 2 Error Analysis & Post-retrieval Processing
For each retrieved parameter in a target scene, we estimate covariances for:
- Retrieval noise (using the measurement NESR)
- Smoothing error (accuracy loss due to vertical smoothing)
- Systematic errors from the instrument and from the model
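The first two covariances follow from the standard retrieval algebra used later in these slides (gain matrix G and averaging kernel A = GK): retrieval noise is G Se G^T and smoothing error is (A - I) Sa (A - I)^T. A minimal numpy sketch, with all matrix sizes hypothetical:

```python
import numpy as np

def retrieval_error_covariances(jacobian_k, noise_cov_se, prior_cov_sa):
    """Error budget for a MAP retrieval:
    gain G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1, averaging kernel A = G K,
    retrieval-noise covariance G Se G^T, smoothing covariance (A-I) Sa (A-I)^T."""
    se_inv = np.linalg.inv(noise_cov_se)
    sa_inv = np.linalg.inv(prior_cov_sa)
    gain = np.linalg.solve(jacobian_k.T @ se_inv @ jacobian_k + sa_inv,
                           jacobian_k.T @ se_inv)
    avk = gain @ jacobian_k
    s_noise = gain @ noise_cov_se @ gain.T
    ident = np.eye(avk.shape[0])
    s_smooth = (avk - ident) @ prior_cov_sa @ (avk - ident).T
    return s_noise, s_smooth, avk

rng = np.random.default_rng(0)
K = rng.normal(size=(40, 10))   # hypothetical Jacobian: 40 channels x 10 levels
Se = 0.01 * np.eye(40)          # measurement noise covariance (NESR squared)
Sa = np.eye(10)                 # a priori covariance
s_noise, s_smooth, avk = retrieval_error_covariances(K, Se, Sa)
```

Both covariances are symmetric and positive semidefinite by construction, which is worth asserting in any operational implementation.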

14. Level 3 Algorithms
Level 3 will mainly produce browse products with gridded, interpolated map images.

15. One Day Test (ODT) Objectives
- Test L2 algorithm accuracy, robustness & performance for a large number of retrievals: 1168 nadir and 3504 limb targets.
- Determine which retrieval diagnostics and visualization methods will be most useful for monitoring science data quality & instrument performance.
- Evaluate the statistical accuracy of the error estimates against the known errors (true minus retrieved); this cannot be done with real data.
- Determine whether target scene selection would significantly improve performance by reducing the number of iterations required.

16. One Orbit of the One Day Test
[Figure: MOZART model O3 at 422 mb, with the TES targets for a single orbit overplotted (X).]

17. [Figure-only slide; no transcript text.]

18. One Day Test Input and Assumptions
- Simulated TES spectra were created using profiles sampled from MOZART3 (NCAR) model data. MOZART3 resolution is 2.8° x 2.8° with 66 vertical levels (0-140 km). MOZART3 was driven by WACCM meteorological fields; the data are for the Oct. 2 day of an arbitrary year. (WACCM = Whole Atmosphere Community Climate Model, from NCAR.)
- No clouds or aerosols. Results from this first test will be a baseline for later tests.
- Surface T was retrieved, but not surface emissivity.
- The noise added to the simulations is from instrument characterization.

19. One Day Test Nadir Results
[Figure: ozone true state, retrieved, and retrieved-minus-true cross sections; color scale 0.01-1.0 ppmV ozone VMR.]

20. Nadir O3 Error Comparison for the One-Day Test
[Figure: comparison of the initial uncertainty, the mean 1-sigma actual errors (retrieved minus true), and the estimated errors for the One-Day Test ozone retrievals; fractional errors for O3 (log VMR) retrievals.]

21. TES Error Characterization
In order to compare with the "true" profile, we have:
x_retrieved - x_true = (A - I)(x_true - x_a) + G epsilon
where A = G K is the averaging kernel and G = (K^T Se^-1 K + R)^-1 K^T Se^-1 is the gain matrix, with R the constraint matrix.

22. Our smoothing error is somewhat underestimated:
- It does not include representation error (we need to use the full K).
- The "true" Sa has artificially small variance (it was computed from a single model day).
Still need to characterize:
- Forward model parameter error, i.e., errors from non-retrieved parameters: Sf = G Kb Sb Kb^T G^T
- Model error, i.e., errors from model approximations
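The quoted expression Sf = G Kb Sb Kb^T G^T can be evaluated directly once the gain matrix and the Jacobian with respect to the non-retrieved parameters are available. The shapes and covariance values in this sketch are hypothetical.

```python
import numpy as np

def forward_model_parameter_error(gain_g, jacobian_kb, cov_sb):
    """Retrieval error covariance propagated from non-retrieved forward-model
    parameters b with covariance Sb: Sf = G Kb Sb Kb^T G^T."""
    m = gain_g @ jacobian_kb          # sensitivity of the retrieval to b
    return m @ cov_sb @ m.T

rng = np.random.default_rng(1)
G = rng.normal(size=(10, 40))         # hypothetical gain matrix (levels x channels)
Kb = rng.normal(size=(40, 3))         # Jacobian w.r.t. 3 non-retrieved parameters
Sb = np.diag([0.5, 0.1, 0.2])         # their assumed covariance
Sf = forward_model_parameter_error(G, Kb, Sb)
```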

23. Estimated Errors from Microwindow Selection
[Figure-only slide; no transcript text.]

24. Current / Future Work
- Updating the ATBD (now at V1.2; V2.0 by 10/2003)
- Preparation for post-launch mayhem (a.k.a. commissioning)
- Aura intercomparisons
- Limb "tomographic" retrieval and 2-D characterization: uses 3 limb measurements to retrieve 4 profiles over 6° latitude
- Cloud-top pressure retrieval: include the lower-bound pressure in the retrieval vector
- Retrievals with transmissive clouds/aerosols using proxy gray-body layers

25. Back-up Slides

26. [Figure-only slide; no transcript text.]

27. 2B1 Magnitude Spectra, ST-2, part B

28. Test of 7 different latitude cases to estimate the effect of substituting measured detector radiances for modeled limb rays. Microwindow 6 is an optically thin case; the other, optically thick cases have smaller radiance errors.

29. Level 2 Absorption Coefficient Tables
- "ABSCO" tables allow better performance with only slightly worse accuracy (compared to a line-by-line calculation).
- [Figure: pressures and temperatures for the stored absorption coefficients.]
- The tables must be read efficiently, so storage becomes an architecture design issue.
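A table lookup of this kind typically reduces to interpolation over the stored (pressure, temperature) grid at each frequency. This sketch assumes bilinear interpolation in (log p, T), a common design for atmospheric tables but an assumption here, as are all the grid values.

```python
import numpy as np

def interp_absco(table, pressures, temps, p, t):
    """Bilinearly interpolate a stored absorption-coefficient table in (log p, T).
    `table` has shape (len(pressures), len(temps), n_freq); both grids ascending."""
    lp = np.log(pressures)
    i = int(np.clip(np.searchsorted(lp, np.log(p)) - 1, 0, len(lp) - 2))
    j = int(np.clip(np.searchsorted(temps, t) - 1, 0, len(temps) - 2))
    wp = (np.log(p) - lp[i]) / (lp[i + 1] - lp[i])
    wt = (t - temps[j]) / (temps[j + 1] - temps[j])
    return ((1 - wp) * (1 - wt) * table[i, j]
            + wp * (1 - wt) * table[i + 1, j]
            + (1 - wp) * wt * table[i, j + 1]
            + wp * wt * table[i + 1, j + 1])

# Toy table whose entries are linear in (log p, T), so bilinear interpolation is exact
pressures = np.array([100.0, 300.0, 500.0, 700.0, 1000.0])   # hPa (hypothetical grid)
temps = np.array([200.0, 250.0, 300.0])                      # K
table = 2.0 * np.log(pressures)[:, None, None] + 3.0 * temps[None, :, None]
k = interp_absco(table, pressures, temps, 400.0, 260.0)
```

The read-efficiency concern on the slide maps onto how `table` is chunked on disk; interpolation itself is cheap compared to a line-by-line calculation.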

30. Inversion
A simultaneous fit to multiple atmospheric and surface parameters minimizes the cost function:
[y - F(x)]^T Se^-1 [y - F(x)] + [x - xa]^T Sa^-1 [x - xa]
where y is the measured spectrum, Se is the measurement covariance, x is the state vector of atmospheric parameters, xa is the a priori state vector, and Sa is the a priori covariance.
Cost function: the algorithm uses either the first term only (maximum likelihood, ML) or both terms (maximum a posteriori, MAP).
Iteration step:
x_{i+1} = x_i + (K^T Se^-1 K + Sa^-1 + Lambda)^-1 (K^T Se^-1 [y - F(x_i)] - Sa^-1 [x_i - xa])
with either Gauss-Newton (Lambda = 0; small-residual case, i.e., a good initial guess) or Levenberg-Marquardt (Lambda > 0; non-linear regimes: slower but more robust).
Convergence: the first term of the cost function reaches the estimated measurement-noise variance.
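The iteration step above can be written directly in numpy. This is a toy sketch, not the ELANOR implementation: the forward model below is linear, and the Lambda term is implemented as a scaled identity, one common Levenberg-Marquardt choice. For a linear forward model a single Gauss-Newton step (lam = 0) already reaches the MAP solution.

```python
import numpy as np

def map_lm_step(x, x_a, y, forward, jacobian, se_inv, sa_inv, lam):
    """One MAP Levenberg-Marquardt update, matching the slide's iteration:
    x_{i+1} = x_i + (K^T Se^-1 K + Sa^-1 + lam*I)^-1
                    (K^T Se^-1 [y - F(x_i)] - Sa^-1 [x_i - x_a])
    Dropping the Sa^-1 terms (sa_inv = 0) gives the ML variant."""
    k = jacobian(x)
    lhs = k.T @ se_inv @ k + sa_inv + lam * np.eye(x.size)
    rhs = k.T @ se_inv @ (y - forward(x)) - sa_inv @ (x - x_a)
    return x + np.linalg.solve(lhs, rhs)

# Toy linear forward model F(x) = A x with a weak prior (large Sa)
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, 2.0])
y = A @ x_true                      # noise-free simulated measurement
se_inv = np.eye(3)
sa_inv = 1e-6 * np.eye(2)           # near-ML: almost no prior weight
x_a = np.zeros(2)
x1 = map_lm_step(x_a, x_a, y, lambda x: A @ x, lambda x: A, se_inv, sa_inv, lam=0.0)
```

In the non-linear case one would loop this step, increasing lam when the cost rises and shrinking it when the step succeeds, and stop when the residual term matches the noise variance as described on the slide.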

31. Analysis Flow Legend
[Figure: flowchart legend symbols: data, manual operation, stored data, procedure, preparation, decision, X = start point.]

32. TES L1B Post-launch Analysis Flow (Nadir)
[Figure: flowchart from L1A interferograms through calibration (with initial or updated parameters) to L1B radiances for the nadir filters 1A1, 1B2, 2A1, and 2B1. Steps include: identify a cloud-free ocean scene with reliable SST; compare radiances with AIRS, model radiance, and filter 1B2; frequency calibration with averaged data, line positions compared to a model; update OSPs (e.g., frequency calibration coefficients); offset dependence analysis (e.g., with latitude) to determine the calibration averaging strategy; check the on-orbit variable blackbody calibration against ground data; verify the spatial calibration (pixel positions) against ground data.]
