Vicarious Calibration of Sentinel-3: Toward the Blending of Methods


Vicarious Calibration of Sentinel-3: Toward the Blending of Methods
GSICS Annual Meeting, 20-24 March 2017, Madison, Wisconsin
Bertrand Fougnie, Camille Desjardins, with support from CNES-DNO/OT/PE

Summary
- Sentinel-3 Optical Sensors
- S3A nominal on-board calibration
- The Vicarious Calibration Tool Box
- Calibration results for OLCI and SLSTR: cross-calibration over desert sites, interband and absolute calibration, assessment of the temporal monitoring
- Feedback – Combination of Methods
- Questions and support for discussion

Vicarious Calibration of Sentinel-3 Optical Sensors

Sentinel-3
- ESA mission serving the operational needs of the European Copernicus Programme: oceanography and land monitoring
- S3A: launched in February 2016; S3B: to be launched in November 2017
- Altimetry + 2 optical sensors, OLCI and SLSTR (follow-on of MERIS and AATSR)
- OLCI = 21 bands from 410 to 1020 nm, 300 m resolution, 1270 km swath
- SLSTR = 11 bands from 550 to 12000 nm, 500 m/1 km resolution, 1400 km swath
- On-board diffusers + black body (nominal) + vicarious calibration methods
(Figure: Sentinel-3A with OLCI and SLSTR)

OLCI - Ocean and Land Colour Imager
- OLCI is a follow-on of MERIS
- 21 VIS-NIR 300 m bands derived from a spectrometer, 5 cameras
- 1270 km swath; reduced sunglint thanks to a 12.6° tilt
- Calibration wheel: 2 reflectance diffusers + 1 spectral diffuser

SLSTR – Sea and Land Surface Temperature Radiometer
- SLSTR is a follow-on of the ATSR-1/2 and AATSR instruments
- Scanning radiometer with 2 views: nadir (1400 km, overlapping OLCI) and 55° backward (740 km)
- 3 VIS-NIR + 3 SWIR 500 m bands; 3 TIR + 2 fire 1 km bands
- Blackbody + VISCAL calibration targets

Nominal Calibration
Nominal calibration = on-board calibration.
OLCI
- Spectral diffuser (monthly): spectral calibration using the Erbium-doped "pink" diffuser and Fraunhofer lines; checks the spectrometer response and adjusts the definition of the bands
- Primary diffuser (every 2 weeks): optical PTFE panel covering the field of view; field-of-view characterization + reflectance calibration
- Secondary diffuser (every 3 months): same optical PTFE; degradation monitoring
SLSTR
- VISCAL unit (every orbit at sunrise): same optical PTFE as OLCI; calibration and monitoring of the visible bands
- Black bodies (every scan): 2 highly stable references, hot (305 K) and cold (265 K); calibration of the TIR bands (see the sketch below)
(Figure: OLCI calibration wheel)
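The two-point blackbody calibration of the TIR bands can be illustrated with a minimal sketch, assuming a simple linear counts-to-radiance model at a single wavelength; the function names and coefficient handling are illustrative, not the actual SLSTR level-1 algorithm.

```python
import numpy as np

def planck_radiance(wavelength_um, temp_k):
    """Planck spectral radiance in W.m-2.sr-1.um-1 at one wavelength."""
    h, c, kb = 6.62607015e-34, 2.99792458e8, 1.380649e-23
    wl = wavelength_um * 1e-6
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kb * temp_k)) * 1e-6

def two_point_calibration(counts_hot, counts_cold, wavelength_um,
                          t_hot=305.0, t_cold=265.0):
    """Gain/offset derived from the hot and cold blackbody views (sketch only)."""
    l_hot = planck_radiance(wavelength_um, t_hot)
    l_cold = planck_radiance(wavelength_um, t_cold)
    gain = (l_hot - l_cold) / (counts_hot - counts_cold)
    offset = l_cold - gain * counts_cold
    return gain, offset

# Earth-view counts would then be converted as: radiance = gain * counts + offset
gain, offset = two_point_calibration(counts_hot=3200.0, counts_cold=1800.0,
                                     wavelength_um=10.85)
print(f"gain = {gain:.4e}, offset = {offset:.4e}")
```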

Validation of the radiometry
Additional validation will be performed to:
- secure the nominal calibration
- detect anomalies in the calibration parameters early
- detect possible improvements in the calibration or the radiometry
- validate the accuracy over a wide range of targets
Multi-method approach: although every method tries to evaluate the same instrumental calibration, each method is also sensitive to different aspects of the instrumental radiometric behaviour, e.g. variation within the field of view, variation with time, spectral consistency, linearity, straylight, spectral knowledge, saturation...
Good consistency at the end therefore contributes to validating the whole radiometric behaviour of the instrument, including of course the absolute calibration.

Operational Configuration for Sentinel-3: S3ETRAC / SADE / MUSCLE
Operational environment = S3ETRAC + SADE + MUSCLE.
1/ S3ETRAC = extraction and selection of measurements = PREPROCESSING: reading of S3 images, selection of relevant data, extraction of data.
2/ SADE = measurement and calibration data repository = DATABASE:
- 3 levels of records: measurements // elementary calibration // synthesis
- supports the various methods for the VIS-NIR-SWIR range
- easy data management and traceability: product identifier, calibration version, SADE identifier; acquisition conditions (dates, geometries, meteorological data); tool version, processing date and parameters...
3/ MUSCLE = multi-method calibration tools (graphical front-end) = CALIBRATION:
- 3 steps: insertion // calibration // synthesis
- common calibration tools for all sensors
A data-structure sketch of this flow is given below.
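A minimal sketch of what the traceability metadata and the three-stage flow could look like; all class, field and function names are hypothetical, since S3ETRAC, SADE and MUSCLE are operational CNES software whose internals are not described here.

```python
from dataclasses import dataclass, field

@dataclass
class SadeRecord:
    """One extracted measurement as it might be stored in a SADE-like database."""
    product_id: str
    sensor: str               # "OLCI" or "SLSTR"
    method: str               # "desert", "rayleigh", "dcc", "sunglint", ...
    date: str
    geometry: dict            # sun/view angles and other acquisition conditions
    meteo: dict               # ancillary meteorological data
    toa_reflectance: dict     # band -> measured TOA reflectance
    tool_version: str = "0.1"

@dataclass
class SadeStore:
    """Toy stand-in for the measurement/calibration repository."""
    records: list = field(default_factory=list)

    def insert(self, rec: SadeRecord):            # step 1: insertion
        self.records.append(rec)

    def select(self, sensor: str, method: str):   # input to step 2: calibration
        return [r for r in self.records
                if r.sensor == sensor and r.method == method]

# A MUSCLE-like step 3 (synthesis) would then aggregate the per-record
# calibration ratios produced by each method into one coefficient per band.
```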

Map of Calibration Approaches in SADE/MUSCLE
Pseudo-Invariant Calibration Sites (PICS):
- 20 desert sites in Africa/Arabia [Lachérade et al., 2013]; reference = one sensor (e.g. MODIS or MERIS) or one date
- geometrical matching (no simultaneity required) + spectral interpolation [Lachérade et al., 2013]
- cross-calibration/trending for all REFLECTIVE bands
- 4 additional snowy sites (incl. Dome C) for the VIS-NIR bands [Six et al., 2004]
Oceanic oligotrophic sites (very clear, non-turbid scenes):
- Rayleigh [Hagolle et al., 1999; Fougnie et al., 2010]: strict selection of very clear, non-turbid situations for atmosphere and surface; reference = Rayleigh scattering (~90% of the TOA signal after selection, and predictable); absolute calibration over a wide range of the field of view (excluding sunglint) for the VISIBLE range
- Sunglint [Hagolle et al., 2004; Fougnie, 2016]: reference = one spectral band (red band, ~620-660 nm); interband calibration of all REFLECTIVE bands with respect to the reference band
Cloudy sites (DCC, deep convective clouds):
- strict selection of DCC [Fougnie and Bach, 2009; Fougnie, 2016]; interband calibration of the VIS-NIR bands
All are statistical approaches; the result is a measured/simulated ratio (see the sketch below).
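All these approaches reduce to the same statistic: a measured-to-simulated TOA reflectance ratio per matchup, averaged per band. A minimal sketch with made-up numbers; in reality the simulated values come from radiative-transfer modelling (Rayleigh scattering, desert BRDF models, DCC reference reflectances, etc.).

```python
import numpy as np

def calibration_ratio(measured_toa, simulated_toa):
    """Elementary calibration result for one band: mean and standard error of
    the measured/simulated TOA reflectance ratio over all selected matchups."""
    measured = np.asarray(measured_toa, dtype=float)
    simulated = np.asarray(simulated_toa, dtype=float)
    ratios = measured / simulated
    return ratios.mean(), ratios.std(ddof=1) / np.sqrt(ratios.size)

# Made-up example: a ~2.5% high bias would show up as a mean ratio ~1.025
meas = [0.1031, 0.1024, 0.1029, 0.1027]
sim = [0.1005, 0.1001, 0.1003, 0.1002]
mean_ak, sem_ak = calibration_ratio(meas, sim)
print(f"Ak = {mean_ak:.4f} +/- {sem_ak:.4f}")
```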

Validation of OLCI and SLSTR
Validation of the OLCI and SLSTR calibration through SADE/MUSCLE. All methods had already been tested on many sensors, including MERIS.
- OLCI and SLSTR: deserts, sunglint, clouds (DCC)
- OLCI (additional): snow, Rayleigh

Chronology
- Launch in mid-February 2016; first data available for calibration at the beginning of March.
- Mid-Term Review (MTR) at the end of April 2016: only 1.5 months of data available for vicarious calibration, but a first evaluation was possible and these preliminary results were very relevant (absolute bias, interband consistency, temporal evolution); the conclusions were confirmed and consolidated at IOCR.
- In-Orbit Calibration Review (IOCR) on 1 July 2016: confirmation of the results on ~3 months of data (note: the best time series were available for the desert sites); swap of the level-1 processing from the prototype (IPFP, operated at ESTEC) to the operational level-1 processing chain (IPF, operated at EUMETSAT and ESRIN).
- Since IOCR: the time series are longer, with no significant changes in the vicarious results; the re-analysis of the early weeks of the mission to track ageing was successful.

Temporal consistency at IOCR
Stability as seen by cross-calibration over desert sites: very good stability after the adjustment made before the Mid-Term Review (dashed line in the figures).
(Figures: time series for OLCI Oa8-665 and Oa18-885, and for SLSTR S1-555, S3-865, S5A-1600 and S6A-2225, with the MTR and IOCR dates marked)

Temporal consistency
Stability of the radiometry after IOCR.
(Figure: desert time series for the S2-659 nm band against the MERIS and MODIS references)

Temporal consistency
Ageing of about 1% observed by the primary on-board diffuser.
- Can the diffuser be trusted, and the correction implemented as the baseline?
- There is noise on the absolute trending (diffuser BRDF); some bands are increasing, others decreasing.
- Can we use vicarious calibration to conclude?
(Figure: trending derived from the primary on-board diffuser, ~1%)

Temporal consistency
Is it possible to extract a very small instrumental drift from the desert time series?
- The usual time series of calibration results: the dispersion is too high!
- Interband approach with respect to a reference band (620 nm).
(Figure: OLCI interband time series relative to Oa7-620; Oa7-620 is the time reference, the series are arbitrarily shifted up/down for clarity, and the time unit is 4 days)

Temporal consistency
Is it possible to extract a very small instrumental drift from the desert time series?
- The usual time series of calibration results: the dispersion is too high!
- The interband approach with respect to a reference band (620 nm) has limitations.
- On-going analysis based on an "interband to closest band" approach, exploiting the spectral correlation between neighbouring bands (see the sketch below).
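A minimal sketch of the interband idea, assuming per-band time series of desert calibration ratios are already available: dividing each band by a reference band (or by its spectrally closest neighbour) cancels most of the common geophysical dispersion and leaves the relative drift. Band numbers, drift values and noise levels are purely illustrative.

```python
import numpy as np

def interband_drift(series, reference):
    """Ratio of a band's calibration time series to a reference band's series.
    Common (site/atmosphere) dispersion cancels; the residual slope is the
    relative instrumental drift between the two bands."""
    ratio = np.asarray(series, float) / np.asarray(reference, float)
    return ratio / ratio[0]                      # normalized to the first date

# Illustrative use: a band with a small drift against a stable reference band,
# both affected by the same geophysical dispersion (made-up numbers).
rng = np.random.default_rng(0)
t = np.arange(0, 360, 4)                         # time unit = 4 days, as in the slide
common = 0.02 * rng.standard_normal(t.size)      # shared desert/atmosphere scatter
ref_620 = 1.00 * (1 + common)                    # stable 620 nm reference band
band_865 = (1 - 2e-5 * t) * (1 + common)         # ~0.7%/year relative drift
drift = interband_drift(band_865, ref_620)
print(f"fitted drift: {np.polyfit(t, drift, 1)[0] * 365:+.3%} per year")
```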

Temporal consistency
How to improve the result? Combination! (that's not blending)
MTR brought some updates of the CCDB (parameters), so the time series are not homogeneous:
- Before MTR: desert gives a nice coverage of the time series, very suitable to assess the trending; DCC has only 2 dates, so the trending cannot be assessed. Use the desert sites.
- After MTR: desert has almost no data before October, so it gives poor information for the trending; DCC has not so many matchups, but at different dates along the time series, which is better for the trend estimation. Use the DCC.
A sketch of such a combined trend fit is given below.
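A minimal sketch of that combination, under the assumption that both the desert and DCC series are expressed as normalized calibration ratios with per-point uncertainties: a single weighted linear fit over the merged series exploits the dense pre-MTR desert coverage and the sparse but well-spread DCC matchups together. All numbers are made up.

```python
import numpy as np

def combined_trend(times, ratios, sigmas):
    """Weighted least-squares linear fit through merged calibration series.
    Returns (slope per day, intercept)."""
    t = np.concatenate(times)
    y = np.concatenate(ratios)
    w = 1.0 / np.concatenate(sigmas) ** 2
    slope, intercept = np.polyfit(t, y, 1, w=np.sqrt(w))
    return slope, intercept

# Desert: dense before MTR; DCC: few matchups but spread over the whole period.
rng = np.random.default_rng(1)
t_desert = np.arange(0, 80, 4.0)
t_dcc = np.array([10.0, 120.0, 200.0, 300.0])
y_desert = 1.0 - 1e-5 * t_desert + 0.003 * rng.standard_normal(t_desert.size)
y_dcc = 1.0 - 1e-5 * t_dcc + 0.004 * rng.standard_normal(t_dcc.size)
slope, _ = combined_trend([t_desert, t_dcc], [y_desert, y_dcc],
                          [np.full(t_desert.size, 0.003), np.full(t_dcc.size, 0.004)])
print(f"combined trend: {slope * 365:+.3%} per year")
```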

Temporal consistency
Results for camera #4 (figure). The same observation holds for cameras 3 and 5.

Consistency with other sensors
PICS desert sites, references:
- MERIS (best spectral interpolation for VIS-NIR) + MODIS (GSICS reference)
- S2A-MSI (Sentinel) + AATSR (consistency with Envisat)
Classical approach [Lachérade et al., 2013] + double difference when there are few matchups (see the sketch below).
- VIS-NIR: bias between OLCI/SLSTR/AATSR and MERIS/MODIS/S2A-MSI
- SWIR: bias between SLSTR and MODIS/S2A-MSI/AATSR
(Figures: per-band results for OLCI and SLSTR)
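A minimal sketch of the double-difference idea used when a sensor pair has few direct matchups: compare each sensor to a common reference over the same sites, then difference the two biases. The sensor names and ratio values below are placeholders, not results from the slides.

```python
def double_difference(bias_a_vs_ref, bias_b_vs_ref):
    """Relative bias A/B inferred through a common reference sensor.
    Biases are expressed as multiplicative calibration ratios (A/ref, B/ref)."""
    return bias_a_vs_ref / bias_b_vs_ref

# Hypothetical example over the same desert sites: sensor A appears 2.5% high
# versus the common reference, sensor B appears 1.0% high, so A/B ~ +1.5%.
print(f"A vs B: {double_difference(1.025, 1.010):.4f}")
```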

Spectral consistency
The various methods agree very well. The interband methods (clouds, sunglint) and the desert/Rayleigh results are all renormalized (see the sketch below).
- OLCI: near-perfect spectral consistency, <1% (except absorption bands, and perhaps 1020 nm)
- SLSTR: within 2% for VIS-NIR, but a very large bias on the SWIR bands
(Figures: OLCI normalized over the mean of bands 7-620, 8-665, 12-754 and 17-865; SLSTR normalized over S2-659)
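A minimal sketch of the renormalization that puts all methods on a common scale so that only the spectral shape is compared, assuming each method provides one ratio per band. The normalization band list follows the slide; the dictionary values are made up.

```python
import numpy as np

def renormalize(ratios_by_band, normalization_bands):
    """Divide a method's per-band ratios by their mean over the chosen
    normalization bands, keeping only the relative spectral shape."""
    norm = np.mean([ratios_by_band[b] for b in normalization_bands])
    return {b: r / norm for b, r in ratios_by_band.items()}

# Made-up interband results for a few OLCI bands, normalized over the mean of
# 7-620, 8-665, 12-754 and 17-865 as on the slide.
dcc = {"Oa4-490": 1.018, "Oa7-620": 1.024, "Oa8-665": 1.026,
       "Oa12-754": 1.025, "Oa17-865": 1.023}
print(renormalize(dcc, ["Oa7-620", "Oa8-665", "Oa12-754", "Oa17-865"]))
```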

Consistency between all methods
Multi-method comparison; the interband methods (clouds, sunglint) and the desert/Rayleigh results are all renormalized.
- OLCI: absolute residue ~2% (except absorption bands, and perhaps 1020 nm)
- SLSTR: residue of 3% for VIS-NIR, but a very large bias on the SWIR bands (10 to 40%)
(Figures: OLCI, interband normalization factor 1.025; SLSTR, interband normalization factor 1.042)

Consistency within the field of view
No large variation detected within the field of view (OLCI = pushbroom with 5 cameras; SLSTR = scanner).
- The interband behaviour over white targets is validated; some residues remain to be explained.
- Nevertheless, the variation of the ISR (instrument spectral response) with detector number for OLCI has to be considered to explain residues on Rayleigh scattering and DCC.
(Figures: field-of-view profiles for OLCI bands Oa2-412, Oa6-560, Oa8-865 and Oa16-779 from DCC, desert-MERIS, desert-MODIS and Rayleigh, and for SLSTR bands S1-555 and S3-865 from DCC and Rayleigh)

Consistency within the field of view
Comparison between methods would help identify errors (statistics are needed).
(Figures: field-of-view profiles for bands 8-665 and 4-490 from Rayleigh, DCC and desert)

Combining methods as seen by S3A
Results derived from the various methods were important at IOCR.
For OLCI and SLSTR:
- confirm a slight absolute bias in VIS-NIR (about 2.5%) [desert, Rayleigh]
For OLCI:
- provide confidence in the on-board diffuser [desert, DCC]
- check the temporal stability of the instrument [desert, DCC]
- provide high confidence in the spectral consistency [desert, DCC, sunglint]
- investigate the variation within the field of view [Rayleigh, desert, DCC]
For SLSTR:
- confirm a large absolute bias in the SWIR (about 10 to 30%) [desert, sunglint]
Additional results are foreseen:
- check the temporal stability [desert, sunglint]
- check the variation nadir/oblique [Rayleigh, desert]
But that's combination, still not merging!

Combination of Methods: toward Blending?

Observation
Several calibration methods may be available: cloud-DCC, Moon, PICS-desert, Rayleigh, sunglint, PICS-Antarctica, SNO, ray-matching...
Implementing several methods will provide various results which will in general differ: sometimes consistent, sometimes not at all.
What is the definition of « consistent » for GSICS needs? GSICS needs to define the target in terms of « self-consistency ».
The self-consistency of calibration results will differ because of:
- the theoretical performance of the method: bias, noise properties, representativity of the matchups
- contributions from the sensor that are known and taken into account by the calibration method (e.g. SBAF)
- contributions from the sensor that are known but not taken into account (e.g. homogeneity of the ISR)
- unknown contributions from the sensor (or from level-1)

Observation
A radiometric artifact that is not a calibration error is often called a "calibration error": straylight, spectral rejection, non-linearity, variation with scan, polarization...
- These are radiometric errors, but not calibration errors.
- They are radiometric biases, but they vary with every situation; they are not purely instrumental biases.
What do we mean by "calibration"? For some of us, it means "global adjustment of level-1". We may empirically correct the bias, but it may not be purely a calibration error.
- A good instrument: all radiometric artifacts are well controlled. This often means good engineering.
- A good calibration: consistency of the results with respect to the prime calibration. This often means a good instrument.
GSICS: this is to be considered when selecting a reference (metric and scoring).
Blending is not inevitable if a single method remains the best estimate. GSICS has to decide how to take this into account in order to provide the most relevant results to users; it is difficult to adopt a single standardized method (approach?).

GSICS Blending Strategy (diagram)
Calibration units from the individual methods (DCC, Moon, Rayleigh, ...) feed:
1/ Blending template: GSICS standard, blind blending
2/ Optimized blending: GPRC-modulated blending
3/ Investigation loop: evaluation and scoring; diagnosis of limitations or discrepancies as radiometric artifacts or method artifacts; full consistency vs. limitations

The Synergy Matrix (for the sensor to calibrate, to be evaluated for each band)
- Rows: candidate methods - DCC, Moon, PICS-desert, PICS-snow, SNO, Rayleigh, sunglint
- Uncertainty from the implemented method (depending on the data sampling)
- Uncertainty from the sensor: characterization to be addressed - spectral response knowledge, straylight, linearity, polarization, radiometric noise...
- Type of result provided: trending, absolute, interband, cross-calibration...

The Synergy Matrix (continued)
The same matrix, now used to produce the GSICS output: one calibration coefficient per band and per date, taken as the best method or as a weighted mean, together with a radiometric explanation (see the sketch below).
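A minimal sketch of one possible way to produce "one calibration coefficient per band per date" from the matrix, assuming each method delivers a coefficient and a total (method + sensor) uncertainty per band: an inverse-variance weighted mean, falling back to the single best method when only one is judged reliable. This is one illustrative implementation of "best method or weighted mean", not a GSICS standard.

```python
import numpy as np

def blend(coefficients, uncertainties, reliable=None):
    """One blended calibration coefficient for a given band and date.
    coefficients, uncertainties: per-method values; reliable: optional mask."""
    c = np.asarray(coefficients, float)
    u = np.asarray(uncertainties, float)
    if reliable is not None:
        mask = np.asarray(reliable)
        c, u = c[mask], u[mask]
    if c.size == 1:                        # single best (or only reliable) method
        return float(c[0]), float(u[0])
    w = 1.0 / u**2                         # inverse-variance weights
    blended = np.sum(w * c) / np.sum(w)
    blended_unc = 1.0 / np.sqrt(np.sum(w))
    return float(blended), float(blended_unc)

# Made-up example: desert, Rayleigh and DCC results for one band
coef, unc = blend([1.026, 1.023, 1.028], [0.010, 0.015, 0.020])
print(f"blended coefficient: {coef:.4f} +/- {unc:.4f}")
```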

Combination of Methods

Combination of Methods
PICS-desert, Moon, DCC, SNO, Rayleigh, PICS-snow, sunglint: DIY calibration activities.
« Calibration, that's hobby… » (Dave Doelling)

DIY is fine, but GSICS needs operationality: a blended method, the GSICS house-blend.

Discussion – Questions
Deriving results from various methods:
- How do we define the mean value, or a weighted value? How do we define the corresponding weights? Consider the theoretical accuracy of the methods and the behaviour of the sensor.
If the results are consistent between methods:
- Do we keep the best theoretical method, or do we derive a blend through a weighted mean in order to reduce the residual bias? This is the job of every agency or GPRC.
If the results are not consistent between methods:
- How will the discrepancies be interpreted? Ignored? Will GSICS try to understand what the radiometric artifacts could be?
- If not solved (or not solvable), do we provide all results and let users make their own selection? Ideally not recommended, but it may help various users.
- In that case, rename the product "GSICS adjustment" instead of "calibration" (because we would not be adjusting the calibration but something else).

Blending methods as seen by S3A
Which calibration result could we recommend, based on the analysis of the OLCI calibration during the first year in orbit?
- Inter-calibration over deserts (MERIS): if we want to optimize continuity with MERIS or with the historical archive
- Calibration over Rayleigh scattering: if we want to optimize performance for ocean-colour applications
- Calibration over DCC: if we want to optimize performance for cloud retrievals
- Blending of ~50% desert and ~50% DCC: if we want to optimize the level-1 calibration at the state of the art
Even for OLCI/S3A, which can be considered a « very good » sensor, the wrap-up is not obvious.

Wrap-up
- In general, deriving results from various (accurate) methods provides different information about the radiometry.
- In general, the results will differ, slightly or largely.
- Probably no universal blending method can be defined in advance, but an approach has been proposed; at least at the beginning, this will be a case-by-case analysis and conclusion.
- GSICS must define how the various calibration sets will be used: unclear information given to users may endanger the future of the calibration corrections proposed by GSICS.
« This is not Science, this is Art… » (again Dave Doelling)

That’s all Folks !

The GSICS paradigm (photos): Tim's car vs. GSICS's vehicle when the WG meets in the US