Maa-57.2040 Kaukokartoituksen yleiskurssi General Remote Sensing Image restoration Autumn 2007 Markus Törmä


1 Maa-57.2040 Kaukokartoituksen yleiskurssi General Remote Sensing Image restoration Autumn 2007 Markus Törmä Markus.Torma@tkk.fi

2 Digital image processing Image is manipulated using a computer: image → mathematical operation → new image Application areas: –Image restoration –Image enhancement –Image interpretation / classification

3 Image restoration Errors due to the imaging process are removed Geometric errors –position of an image pixel is not correct when compared to the ground Radiometric errors –measured radiation does not correspond to the radiation leaving the ground The aim is to form a faultless image of the scene

4 Image enhancement Image is made better suited for interpretation Different objects will be seen better → manipulation of image contrast and colors Different features (e.g. linear features) will be seen better → e.g. filtering methods Multispectral images: combination of image channels to compress and enhance information –ratio images –image transformations Necessary information is emphasized, unnecessary removed

5 Digital image processing Analog signal: –phenomenon is described or measured continuously according to time or space Digital signal: –analog signal is sampled with some interval 2-dimensional digital signal → digital image

6 Digital image processing Image function f(x,y): a function of the spatial coordinates x and y The value of the function in position (x,y) corresponds to the brightness of the image in that position

7 Digital image processing Digital image consists of individual picture elements, pixels, which form a discrete lattice in the spatial domain (x,y) Digital image can also be presented using SIN-waves with different frequencies and amplitudes –called the frequency domain (u,v) –Fourier transform is used to determine the frequencies Manipulation of digital image: –Image domain: pixel values are modified directly in the image using some algorithm –Frequency domain: frequencies and amplitudes of the SIN-waves are modified
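To make the two domains concrete, here is a minimal Python sketch (not part of the original slides; the test image is random data) showing an image as a 2-D array and its frequency-domain representation obtained with the Fourier transform:

```python
import numpy as np

# Hypothetical 8-bit image f(x, y): 256 x 256 pixels, values 0..255
f = np.random.randint(0, 256, size=(256, 256)).astype(float)

# Frequency domain F(u, v): amplitudes and phases of the sine/cosine waves
# that together reproduce the image
F = np.fft.fftshift(np.fft.fft2(f))   # zero frequency moved to the centre
amplitude = np.abs(F)                 # amplitude spectrum
phase = np.angle(F)                   # phase spectrum

# Modifying amplitudes/phases and transforming back gives a manipulated image
f_back = np.fft.ifft2(np.fft.ifftshift(F)).real
```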

8 Sources of error Movement of imaging platform –Changes in height or speed –Attitude of satellite –Image provider should correct these Instrument –Scanning or measurement principle –Failures of sensors –Production methods or accuracy of instrument –Calibration of instrument

9 Sources of error Atmosphere –Attenuation of radiation and decrease of contrast –Image is blurred –Difficult to correct Object –Roundness of Earth –Rotation of Earth –Topography –It is possible to correct these quite well

10 Geometric correction Position of an image pixel is not correct when compared to the ground Errors due to instrument, movement of imaging platform and object are removed Known errors in geometry: –Earth curvature and rotation –Topography –Imaging geometry Rectification to map projection –Geometric transformation –Interpolation of pixel digital numbers

11 Geometric correction Geometric correction is made –automatically using orbital parameters or –manually using ground control points Alternatives –orbital parameters –ground control points –orthocorrection Most accurate results by combining all

12 Raw image data It can be difficult to recognize ground features from raw image data, because they do not necessarily look the same as in nature

13 Raw image vs. corrected image

14 Geometric correction using orbital parameters Information about –position of satellite (XYZ) –attitude (rotation angles) Correction can be of low quality due to poor orbital information Knowledge about scanning geometry, movement of the object and topography (DEM) will increase accuracy

15 Geometric correction Accuracy of correction depends on the quality of the information used Some examples of accuracy using orbital parameters: –NOAA AVHRR: 5 km - 1.5 km –Spot 1-4: 350 m –Spot 5: 50 m Ground control points: accuracy should be better than 1 pixel –depends also on the mathematical model and topographic variations Orthocorrection is the most accurate

16 Geometric correction Manual correction: Ground control points (GCPs): –image coordinates are measured from image –map coordinates from map or georeferenced image

17 Geometric correction GCPs are known and well-distinguished ground features –crossroads, buildings, small lakes, small islands, features in the waterline The more GCPs, the better When the transformation between image coordinate system and map coordinate system is defined using polynomials, minimum number of points: –1st degree polynomial: 3 points –2nd degree polynomial: 6 points
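As an illustration of the polynomial transformation, the following sketch fits a 1st-degree (affine) polynomial from GCPs by least squares; the GCP coordinates are invented for the example, and a real rectification would use many more points:

```python
import numpy as np

# image coordinates (col, row) and corresponding map coordinates (E, N) - made up
img = np.array([[120.0,  40.0], [310.0,  55.0], [200.0, 250.0], [ 90.0, 300.0]])
mapc = np.array([[502300.0, 6701200.0], [507900.0, 6700800.0],
                 [504600.0, 6695100.0], [501400.0, 6693800.0]])

# Design matrix for x' = a0 + a1*x + a2*y (the same form is used for y')
A = np.column_stack([np.ones(len(img)), img[:, 0], img[:, 1]])

coef_E, *_ = np.linalg.lstsq(A, mapc[:, 0], rcond=None)   # easting coefficients
coef_N, *_ = np.linalg.lstsq(A, mapc[:, 1], rcond=None)   # northing coefficients

# Residuals at the GCPs indicate the quality of the transformation
res_E = mapc[:, 0] - A @ coef_E
res_N = mapc[:, 1] - A @ coef_N
```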

18 Example Erdas Imagine An old Landsat TM image is georeferenced to the same map coordinate system as a Landsat ETM image

19 Example 2nd degree polynomial 15 GCPs

20 Automatic correction Software searches for corresponding points in the image to be georeferenced and in an image already in the map coordinate system This can be based on –correlation between subimages –recognizable features (linear features like roads, or lakes) These points are used as GCPs Software produces many points (e.g. 200-300 points for a Landsat ETM image) –the user has to select which ones can be used Automation is needed when there are many images and/or the correction has to be made daily
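A minimal sketch of correlation-based point matching, one possible basis for the automatic search described above (a brute-force normalised cross-correlation; real software uses far more efficient and robust matching):

```python
import numpy as np

def best_match(chip, search):
    """Return (row, col) offset and score of the chip inside the search window
    using normalised cross-correlation, computed by brute force."""
    ch, cw = chip.shape
    c = (chip - chip.mean()) / (chip.std() + 1e-12)
    best, best_rc = -np.inf, (0, 0)
    for r in range(search.shape[0] - ch + 1):
        for col in range(search.shape[1] - cw + 1):
            w = search[r:r + ch, col:col + cw]
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = (c * w).mean()          # correlation between subimages
            if score > best:
                best, best_rc = score, (r, col)
    return best_rc, best
```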

21 Topographic error A property of imaging systems using central projection The image of an object is in an incorrect place due to height variations

22 Orthocorrection Topographic error is removed by changing the perspective of image from central projection to orthogonal projection DEM is needed

23 Interpolation of digital numbers Digital numbers for the pixels of the corrected image must be interpolated from the uncorrected image The number from the uncorrected image can seldom be used directly Methods –nearest neighbor interpolation –bilinear interpolation –cubic convolution interpolation

24 Nearest neighbor interpolation Take the value of the closest pixel in the uncorrected image Easy to compute Values do not change Result can be inaccurate [Figure: corrected image and original image]

25 Nearest neighbor interpolation Values of some pixels are chosen more than once, some not at all ”Blocky” image Linear features can disappear [Figure: corrected image and original image]

26 Bilinear interpolation The 4 closest pixels of the uncorrected image are used Average weighted by distance Changes digital numbers –corresponds to average filtering [Figure: corrected image and original image]

27 Cubic convolution The 16 (4x4) closest pixels of the uncorrected image are used Smaller interpolation error than NN or BL interpolation [Figure: corrected image and original image]
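The resampling options can be sketched with scipy as below (assumptions: a small test image and a made-up backward mapping from corrected-image pixels to uncorrected-image coordinates; scipy's order=3 is a cubic spline, close in spirit but not identical to cubic convolution):

```python
import numpy as np
from scipy import ndimage

src = np.arange(100, dtype=float).reshape(10, 10)      # "uncorrected" image

# Coordinates in the uncorrected image for each pixel of the corrected image;
# a small shift and rotation stand in for the real geometric transformation
rows, cols = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
src_rows = 0.98 * rows + 0.05 * cols + 0.3
src_cols = -0.05 * rows + 0.98 * cols + 0.6
coords = np.array([src_rows, src_cols])

nearest  = ndimage.map_coordinates(src, coords, order=0)  # nearest neighbor
bilinear = ndimage.map_coordinates(src, coords, order=1)  # bilinear
cubic    = ndimage.map_coordinates(src, coords, order=3)  # cubic spline
```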

28 [Figures: original image, bilinear, cubic, nearest neighbor]

29 Image formation Scene f(x) Acquired image g(x) Scene f(x) is corrupted by atmosphere and instrument in imaging process They act like filters h(x)

30 Image formation Image acquisition can be modelled with image degradation model: f(x) * h(x) + n(x) = g(x) g(x): acquired image h(x): filter corresponding to averaging effects due to atmosphere and instrument n(x): random errors due to instrument and data transmission f(x): scene

31 Inverse filtering An error-free image of the scene f(x) should be obtained by inverting the known degradation of the acquired image g(x) Image degradation model in the frequency domain: G(u) = F(u)H(u) + N(u) Ideal inverse filtering: F_e(u) = G(u)/H(u) - N(u)/H(u) In practice difficult to solve –zeros in H(u) –effect of N(u) increases Usually radiometric correction is divided into different phases which are corrected individually
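A minimal sketch of the degradation model and a naive inverse filter in the frequency domain (the blur kernel, noise level and regularisation threshold are arbitrary assumptions; a practical restoration would use e.g. Wiener filtering):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((128, 128))                      # "true" scene f(x)

# Degradation: blur h(x) (5x5 box filter) plus random noise n(x)
h = np.zeros_like(f)
h[:5, :5] = 1.0 / 25.0
F, H = np.fft.fft2(f), np.fft.fft2(h)
g = np.real(np.fft.ifft2(F * H)) + 0.01 * rng.standard_normal(f.shape)

# Naive inverse filter: divide by H(u,v); small |H| amplifies noise badly,
# so values of H near zero are clipped here as a crude regularisation
G = np.fft.fft2(g)
H_safe = np.where(np.abs(H) > 1e-2, H, 1e-2)
f_est = np.real(np.fft.ifft2(G / H_safe))
```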

32 Radiometric correction Needed so that measurements taken with –different instruments –on different dates are comparable Aim: radiance or reflectance Radiance (W/m²/sr): –physical quantity which describes the intensity of radiation leaving the ground in some direction Reflectance: –radiance / incoming irradiance

33 Instrument calibration Instruments are calibrated before satellite launch –measurements of known targets –instrument response is followed by measuring calibration targets in the instrument or stable targets on the ground Each channel has calibration coefficients GAIN and OFFSET –these can vary with time –response decreases, so the same target looks darker

34 Instrument gain Pixel digital number is multiplied by the gain: radiance = DN * Gain Gain = (Lmax - Lmin) / 255 –Lmax: largest radiance which can be measured by the instrument –Lmin: smallest radiance which can be measured by the instrument

35 Instrument offset Background noise detected by the instrument Measurement when the instrument does not receive any radiation = Lmin

36 Radiometric correction Equation: R = (Lmax-Lmin)/255*DN + Lmin OR R=Gain*DN + offset
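A minimal sketch of the conversion from digital numbers to at-sensor radiance following the gain/offset equations of slides 34-36; the Lmin and Lmax values below are placeholders, not real calibration constants:

```python
import numpy as np

dn = np.array([[12, 56, 130], [200, 255, 80]], dtype=float)   # 8-bit digital numbers

lmin, lmax = -1.5, 193.0          # smallest / largest measurable radiance (assumed)
gain = (lmax - lmin) / 255.0      # slide 34: Gain = (Lmax - Lmin) / 255
offset = lmin                     # slide 35: offset = Lmin

radiance = gain * dn + offset     # slide 36: R = Gain * DN + offset  [W/m^2/sr]
```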

37 Other corrections Sun zenith angle: DN' = DN / sin(sun elevation) = DN / cos(sz) The ground is illuminated differently when the Sun zenith angle changes –seasonal differences

38 Other corrections: Distance between Sun and Earth This distance varies according to the seasons The irradiance incoming to the ground, taking the Sun zenith angle into account, is (E / d²) cos(sz), where E is the Sun spectral irradiance outside the atmosphere and d is the Sun-Earth distance in astronomical units

39 Atmospheric correction Absorption and scattering Diffuse skylight –due to atmospheric scattering –largest at shorter wavelengths (blue), decreases as wavelength increases –dampens image contrast

40 Atmospheric correction REF = π (L_sat - L_haze) / [ TAU_v ( E_0 cos(sz) TAU_z + E_down ) ] REF: reflectance of pixel L_sat: radiance measured by instrument L_haze: radiance due to atmospheric scattering (diffuse skylight) TAU_v: atmospheric transmittance from ground to instrument E_0: Sun spectral irradiance outside the atmosphere, including the effect of the distance between Sun and Earth: E_0 = E / d², where E is the Sun spectral irradiance outside the atmosphere and d is the Sun-Earth distance in astronomical units sz: Sun zenith angle TAU_z: atmospheric transmittance from Sun to ground E_down: downwelling irradiance at the ground due to atmospheric scattering

41 Atmospheric correction Apparent reflectance model Removes the effects of changing Sun-Earth distance and Sun zenith angle, i.e. changing imaging geometry Does not remove atmospheric effects: absorption or scattering Parameters for the correction model: TAU_z: 1.0, TAU_v: 1.0, E_down: 0.0, L_haze: 0.0

42 Atmospheric correction DARK-OBJECT SUBTRACTION It is assumed that there are areas in the image that are in shadow, so that all radiation coming to the instrument from these areas is due to diffuse skylight Parameters for the correction model: TAU_z: 1.0 TAU_v: 1.0 E_down: 0.0 L_haze: measured radiance from a target which is in shadow (like the shadow of a cloud) or does not reflect radiation (water in the infrared)

43 Atmospheric correction CHAVEZ Modified DOS Atmospheric transmittances are approximated from the angles of the imaging geometry Parameters for the correction model: TAU_z: cos(sz) TAU_v: 1.0 or cos(incidence angle) E_down: 0.0 L_haze: measured radiance from a target which is in shadow (like the shadow of a cloud) or does not reflect radiation (water in the infrared)
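The reflectance computation with dark-object subtraction can be sketched as follows (variable names follow slides 40-43; all numeric values are illustrative assumptions, and the Chavez variant simply sets TAU_z = cos(sz)):

```python
import numpy as np

l_sat  = np.array([[45.0, 60.0], [80.0, 30.0]])  # measured radiance, W/m^2/sr
l_haze = 8.0            # radiance over a dark object (cloud shadow, water in IR)
e0     = 1850.0         # exoatmospheric solar irradiance E/d^2 for this band (assumed)
sz     = np.deg2rad(40.0)                        # Sun zenith angle

tau_v, tau_z, e_down = 1.0, np.cos(sz), 0.0      # Chavez / modified DOS parameters

ref = np.pi * (l_sat - l_haze) / (tau_v * (e0 * np.cos(sz) * tau_z + e_down))
```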

44 Atmospheric correction More theoretical methods try to model how radiation travels in the atmosphere The aim is to model –atmospheric transmittance and absorption –scattering due to gases –reflectances due to the atmosphere, not the ground Difficult, requires a lot of computing and precise knowledge about the state of the atmosphere

45 VTT atmospheric correction SMAC: Simplified Method of Atmospheric Correction Removes Rayleigh scattering, absorption due to atmospheric gases, effects of imaging geometry (Sun angles and distance) Digital number → reflectance Needed: calibration coefficients of the instrument, Sun zenith and azimuth angles, amount of water vapour, ozone, atmospheric optical depth

46 VTT atmospheric correction Original Landsat ETM images Atmospheric optical depth can be estimated from the image if there are suitable channels –the ETM7/ETM3 ratio should be about 0.4 for old coniferous forests

47 VTT atmospheric correction Corrected Landsat ETM mosaic, Eastern Finland, 7 images, RGB: 321

48 VTT atmospheric correction Landsat ETM mosaic of Northern Finland, consists of 9 images

49 Dehazing image Based on the Tasselled Cap transformation –the TC4 image is sensitive to atmospheric effects Landsat-5 TM image: TC4 = 0.8461*TM1 - 0.7031*TM2 - 0.4640*TM3 - 0.0032*TM4 - 0.0492*TM5 - 0.0119*TM7 + 0.7879 Each image channel is corrected by subtracting TC4 from it: TM_cx = TM_x - (TC4 - TC4_0)*A_x TM_cx: corrected digital number of channel x TM_x: original digital number of channel x TC4_0: value of a haze-free pixel in TC4 A_x: correction factor determined from the image
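A minimal sketch of this dehazing step; the TC4 coefficients come from the slide, while the band arrays, the value TC4_0 and the per-band factors A_x are assumptions that would be determined from the image:

```python
import numpy as np

def dehaze(tm, tc4_0, a):
    """tm: dict of Landsat-5 TM band arrays (keys 1, 2, 3, 4, 5, 7), as float.
    tc4_0: TC4 value of a haze-free pixel.  a: dict of per-band factors A_x."""
    tc4 = (0.8461 * tm[1] - 0.7031 * tm[2] - 0.4640 * tm[3]
           - 0.0032 * tm[4] - 0.0492 * tm[5] - 0.0119 * tm[7] + 0.7879)
    # subtract the haze signal from every channel: TM_cx = TM_x - (TC4 - TC4_0) * A_x
    return {band: tm[band] - (tc4 - tc4_0) * a[band] for band in tm}
```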

50 Dehazing image Original and corrected TM1

51 Dehazing image Original and corrected TM2

52 Dehazing image Original and corrected TM3

53 Dehazing image Original and corrected TM4

54 Dehazing image Original and corrected TM5

55 Dehazing image Original and corrected TM7

56 Clouds and their shadows It is difficult to automatically remove clouds at visible and infrared regions –thermal infrared helps –thick clouds easy, thin clouds and haze difficult Removal by masking –Cloud interpretation → mark their area as ”Nodata” or 0 Problem: shadows –mixed with water areas –difficult to automate Relatively simple way: –interpret clouds using thresholding or clustering –make a cloud mask (1: cloud, 0: other) –dilate the mask –move the mask so that it also covers the shadows
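A minimal sketch of this simple masking recipe; the threshold band, dilation size and shadow offset are assumptions to be tuned per image (the offset depends on Sun angle and cloud height, and np.roll wraps around at image edges):

```python
import numpy as np
from scipy import ndimage

def cloud_shadow_mask(blue, threshold=180, dilate_iter=3, shadow_shift=(15, -10)):
    cloud = blue > threshold                                   # crude cloud interpretation
    cloud = ndimage.binary_dilation(cloud, iterations=dilate_iter)
    # shift the dilated cloud mask towards the expected shadow direction
    shadow = np.roll(cloud, shift=shadow_shift, axis=(0, 1))
    return cloud | shadow                                      # 1: cloud or shadow, 0: other
```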

57 Topographic correction Imaging geometry (positions and angles between instrument, object and radiation source) changes locally E.g. a deciduous forest on the sunny side of a hill, or on the same side as the instrument, looks brighter than a similar forest on the shaded side Reflectance is highest when the slope is perpendicular to the incoming radiation

58 Topographic correction Correction of image pixel values is based on topographic variations; other characteristics are not taken into account Determine the angle i between incoming radiation and the local surface normal → illumination image cos(i)
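A minimal sketch of computing the illumination image cos(i) from a DEM; the slope/aspect derivation and angle conventions below are simplified assumptions (signs depend on the DEM orientation):

```python
import numpy as np

def illumination(dem, pixel_size, sz_deg, az_deg):
    """cos(i) for each DEM cell, given Sun zenith angle sz_deg and azimuth az_deg."""
    dz_drow, dz_dcol = np.gradient(dem, pixel_size)   # terrain gradients
    slope = np.arctan(np.hypot(dz_dcol, dz_drow))     # slope angle
    aspect = np.arctan2(dz_dcol, dz_drow)             # crude aspect angle
    sz, az = np.deg2rad(sz_deg), np.deg2rad(az_deg)
    return (np.cos(sz) * np.cos(slope)
            + np.sin(sz) * np.sin(slope) * np.cos(az - aspect))
```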

59 Topographic correction Landsat ETM (RGB: 743) and DEM from NLS

60 Topographic correction Landsat ETM (RGB: 743) and illumination image

61 Topographic correction Variables of equations: L O : Original reflectance L C : Corrected reflectance sz: Sun zenith angle i: Angle between incoming radiation and local surface normal k: Minnaert coefficient, estimated from image m: Slope of regression line between illumination image and image pixels, estimated from image b: Offset of regression line between illumination image and image pixels, estimated from image C: Correction factor of C-correction, C = b/m

62 Topographic correction Lambert cosine correction: it is assumed that the ground reflects radiation as a Lambertian surface, in other words the same amount in all directions L_C = L_O cos(sz) / cos(i)

63 Topographic correction Minnaert correction: the coefficient k is used to model the effect of different surfaces L_C = L_O [ cos(sz) / cos(i) ]^k

64 Topographic correction Ekstrand correction: the Minnaert coefficient k varies according to illumination L_C = L_O [ cos(sz) / cos(i) ]^(k cos(i))

65 Topographic correction Statistical-empirical correction: the correction removes the correlation between the illumination image and the image channel L_C = L_O - m cos(i)

66 Topographic correction C-correction: the coefficient C should model diffuse light L_C = L_O [ ( cos(sz) + C ) / ( cos(i) + C ) ]
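For illustration, the corrections above can be written as short functions (a sketch using the variable names of slide 61; k, m and b would be estimated from the image by regressing pixel values against cos(i)):

```python
import numpy as np

def lambert(L_o, cos_i, sz):
    return L_o * np.cos(sz) / cos_i

def minnaert(L_o, cos_i, sz, k):
    return L_o * (np.cos(sz) / cos_i) ** k

def ekstrand(L_o, cos_i, sz, k):
    return L_o * (np.cos(sz) / cos_i) ** (k * cos_i)

def statistical_empirical(L_o, cos_i, m):
    return L_o - m * cos_i

def c_correction(L_o, cos_i, sz, m, b):
    C = b / m                       # correction factor of the C-correction
    return L_o * (np.cos(sz) + C) / (cos_i + C)
```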

67 ATCOR 2/3 Developed by DLR (German Aerospace Center) – http://www.op.dlr.de/atcor/ ATCOR 2/3 tries to remove components 1 and 3: 1. path radiance: radiation scattered by the atmosphere 2. reflected radiation from the viewed pixel 3. radiation reflected by the neighborhood and scattered into the view direction (adjacency effect) –Only component 2 contains information from the viewed pixel.

68 ATCOR 2/3 Atmospheric Database –Atmospheres with different vertical profiles of pressure, air temperature, humidity, ozone content –Various aerosol types (rural, urban, maritime, desert) –Visibilities: 5 - 120 km (hazy to very clear) –Ground elevations 0 - 2.5 km, extrapolated for higher regions –Solar zenith angles 0 - 70 degrees –Tilt geometries with tilt angles up to 50 degrees are supported –A discrete set of relative azimuth angles (0 - 180 deg., increment 30 deg.) is provided for the atmospheric LUTs of tilt sensors; interpolation is applied if necessary –Capability for mixing of atmospheres (water vapor, aerosol) –The sensor-specific database is available for the standard multispectral sensors, i.e. Landsat TM, ETM+, SPOT, IRS-P6 etc.


70 ATCOR 2 Atmospheric correction for flat terrain SPECTRA module to determine the atmospheric parameters (aerosol type, visibility, water vapor) –compare retrieved scene reflectance spectra of various surface covers with library spectra –tune the correction so that the retrieved spectra resemble the library spectra Constant atmospheric conditions or spatially varying aerosol conditions Retrieval of the atmospheric water vapor column for sensors with water vapor bands (around 940/1130 nm); example sensors: MOS-B, Hyperion Statistical haze removal: a fully automatic algorithm that masks haze and cloud regions and removes haze over land areas

71 ATCOR 2 De-shadowing of cloud or building shadow areas. Automatic classification of spectral surface reflectance (program SPECL2) using 10 surface cover templates –not a land use classification, but a reflectance-shape classification Surface emissivity and surface (brightness) temperature maps for thermal band sensors. Value added products: –vegetation index SAVI –LAI, FPAR –wavelength-integrated albedo –absorbed solar radiation flux –surface energy fluxes for thermal band sensors: net radiation, ground heat flux, latent heat, sensible heat flux

72 ATCOR 2 examples of haze removal IRS-1C Liss-3 scene recorded 25 June 1998 (Craux, France) Ikonos scene of Dresden, 18 August 2002

73 ATCOR 2 Dresden, the SPOT-3 sensor (22 April 1995) Left: surface reflectance after atmospheric correction (RGB = SPOT bands 3/2/1, NIR/Red/Green) Right: the results of the automatic spectral reflectance classification Color coding of the classification map: –dark to bright green: different vegetation covers –blue: water –brown: bare soil –grey: asphalt, dark sand/soil –white: bright sand/soil –red: mixed vegetation/soil –yellow: sunflower, rape, while blooming

74 ATCOR 3 Correction for mountainous regions The scene has to be ortho-rectified Processing eliminates the atmospheric / topographic effects and generates surface data (reflectance, temperature) corresponding to a flat terrain Features as in ATCOR 2, and in addition: –Quick topographic correction (without atmospheric correction) –SKYVIEW: sky view factor calculation with a ray tracing program to determine the proportion of the sky hemisphere visible for each pixel of the terrain –SHADOW: cast shadow calculation depending on solar zenith and azimuth angle, employing a ray tracing program

75 ATCOR 3 Components: 1. path radiance: radiation scattered by the atmosphere (photons without ground contact) 2. reflected radiation from the viewed pixel 3. adjacency radiation: ground reflected from the neighborhood and scattered into the view direction 4. terrain radiation reflected to the pixel (from opposite hills, according to the terrain view factor) Only component 2 contains information from the viewed pixel.

76 ATCOR 3 Top left: DEM (800 - 1800 m asl) Top right: illumination image Bottom left: original TM data (bands 3/2/1 coded RGB) Bottom right: surface reflectance image after combined atmospheric and topographic correction Areas that appear dark in the original image due to a low solar illumination are raised to the proper reflectance level in the corrected image Walchensee lake and surrounding mountains in the Bavarian Alps TM scene acquired 28 July 1988

77 ATCOR 2/3 Output: Surface reflectance channels Surface (brightness) temperature, surface emissivity map Visibility index map (corresponds to total optical thickness at 550 nm) and aerosol optical thickness map Water vapor map (if required water vapor channels are available, e.g. at 940 nm) Surface cover map derived from template surface reflectance spectra (10 classes): a fast automatic spectral classification. Value added channels: SAVI, LAI, FPAR, albedo, radiation and heat fluxes

78 Errors in images Erroneous pixels or scanning lines are due to instrument or data transmission malfunctions Fill with neighboring data –bad idea Treat as clouds –holes in data

79 Errors in images Landsat TM 189/18, 27.7.1989

80 Errors in images: striping An instrument channel has many detectors; striping appears when they are poorly calibrated relative to each other

81 Errors in images: striping IRS WiFS 36/22, 13.6.1999, channel 1 (red)

82 Jun 29 1999 Something wrong with data transmission?

83 Interactive restoration Interactive restoration of an image using the Fourier transform
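A minimal sketch of this kind of frequency-domain restoration: periodic noise (such as striping) appears as bright peaks in the amplitude spectrum, and the user can interactively pick peaks to suppress. The notch positions and radius below are placeholders a user would select from the spectrum:

```python
import numpy as np

def notch_filter(image, notches, radius=3):
    """Zero out user-selected peaks (and their symmetric counterparts)
    in the shifted amplitude spectrum, then transform back."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = np.indices(image.shape)
    for (r0, c0) in notches:                       # user-selected spectrum peaks
        for r, c in [(r0, c0),
                     (image.shape[0] - r0, image.shape[1] - c0)]:  # symmetric pair
            F[(rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```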

