
Ch 5. Initial Display Alternatives and Scientific Visualization (www.Remote-Sensing.info)

Initial Display Alternatives and Scientific Visualization
Scientists interested in displaying and analyzing remotely sensed data actively participate in scientific visualization, defined as "visually exploring data and information in such a way as to gain understanding and insight into the data." The difference between scientific visualization and presentation graphics is that the latter is primarily concerned with communicating information and results that are already understood; during scientific visualization we are seeking to understand the data and gain insight.

Initial Display Alternatives and Scientific Visualization
Scientific visualization of remotely sensed data is still in its infancy. Its origin can be traced to the simple plotting of points and lines and to contour mapping. We have the ability to conceptualize and visualize remotely sensed images in 2D space in true color (2D to 2D). We can drape remotely sensed data over a digital elevation model (DEM) and display the synthetic 3D model on a 2D map or computer screen (i.e., 3D to 2D). If we turned this same 3D model into a physical model that we could touch, it would occupy the 3D-to-3D portion of scientific visualization mapping space. This discussion identifies the challenges and limitations associated with displaying remotely sensed data and makes suggestions about how to display and visualize the data using black-and-white and color output devices.

Scientific Visualization

Input and Output Relationships

Temporary Video Image Display: Bitmapped Graphics
The digital image processing industry refers to all raster images that have a pixel brightness value at each row and column in a matrix as bitmapped images. The tone or color of a pixel in the image is a function of the value of the bits or bytes associated with the pixel and the manipulation that takes place in a color look-up table. For example, the simplest bitmapped image is a binary image consisting of just ones (1) and zeros (0).

The brightness value of a picture element (pixel) is read from mass storage by the central processing unit (CPU). The digital value of the stored pixel is placed in its proper i,j location in the image processor's random access memory (RAM), often referred to as a video frame buffer. The brightness value is then passed through a black-and-white or color look-up table where modifications can be made. The output from the digital color look-up table is passed to a digital-to-analog converter (DAC). The output from the DAC determines the intensity of the signal for the three guns (red, green, and blue) in the back of the monitor that stimulate the phosphors on a computer cathode-ray tube (CRT) at a specific x,y location, or the transistors in a liquid crystal display (LCD) (Jensen, 2004).
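
A minimal sketch of the frame buffer to look-up table step described above, in Python with NumPy rather than any particular image processing package; the array names and sizes are illustrative:

```python
import numpy as np

# 'frame_buffer' stands in for the video RAM holding the stored 8-bit pixel values.
frame_buffer = np.random.randint(0, 256, size=(512, 512), dtype=np.uint8)

# Identity LUT: each 8-bit brightness value maps to itself on all three guns,
# producing 256 shades of gray. Contrast stretching or density slicing would
# simply repopulate this table rather than rewrite the frame buffer.
lut = np.stack([np.arange(256, dtype=np.uint8)] * 3, axis=1)  # shape (256, 3)

# Look-up is a single indexing operation: every pixel value selects its RGB triple,
# and the result is what gets handed to the DAC/display.
display_rgb = lut[frame_buffer]  # shape (512, 512, 3)
```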

Characteristics of a Binary Bitmapped Image

Bitmap Displays

RGB Color Coordinate System (Jensen, 2004)
Digital remote sensor data are usually displayed using a red-green-blue (RGB) color coordinate system based on additive color theory and the three primary colors of red, green, and blue. Additive color theory is based on what happens when light is mixed, rather than when pigments are mixed using subtractive color theory. For example, in additive color theory a pixel having RGB values of 255, 255, 255 produces a bright white pixel, whereas we would get a dark pigment if we mixed equally high proportions of blue, green, and red paint (subtractive color theory). Using three 8-bit images and additive color theory, we can display 2^24 = 16,777,216 color combinations. RGB brightness values of 255, 255, 0 yield a bright yellow pixel, and RGB brightness values of 255, 0, 0 produce a bright red pixel. RGB values of 0, 0, 0 yield a black pixel. Grays are produced along the gray line in the RGB color coordinate system when equal proportions of blue, green, and red are encountered (e.g., an RGB of 127, 127, 127 produces a medium-gray pixel on the screen or hard-copy device).
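
A quick worked check of the additive-color arithmetic above; the triples simply restate the examples in the text:

```python
import numpy as np

# The named examples from the text as (R, G, B) triples.
colors = {
    "white":       (255, 255, 255),
    "yellow":      (255, 255, 0),
    "red":         (255, 0, 0),
    "black":       (0, 0, 0),
    "medium gray": (127, 127, 127),
}

# Three 8-bit channels give 256**3 = 2**24 = 16,777,216 displayable combinations.
assert 256 ** 3 == 2 ** 24 == 16_777_216

# A 1 x 5 swatch image holding one pixel of each example color.
swatch = np.array([list(v) for v in colors.values()], dtype=np.uint8)[np.newaxis, :, :]
```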

RGB Color Coordinate System

Color Look-up Tables: 8-bit (Jensen, 2004)
How do we control the exact gray tone or color of a pixel on the computer screen after we have extracted a byte of remotely sensed data from the mass storage device? The gray tone or color of an individual pixel on a computer screen is controlled by the size and characteristics of a separate block of computer memory called a color look-up table, which contains the exact disposition of each combination of red, green, and blue values associated with each 8-bit pixel. Evaluating the nature of an 8-bit image processor and its associated color look-up table provides insight into the way the remote sensing brightness values and the color look-up table interact.

8-bit Digital Image Processing System

Color Density Slice of the Thematic Mapper Band 4 Charleston, SC Image

Color Density Slice of the Thermal Infrared Image of the Savannah River

Class Intervals and Color Look-up Table Values for Color Density Slicing the Pre-dawn Thermal Infrared Image of the Savannah River

Color class interval   Visual color   Look-up table values (R, G, B)   Apparent temperature (low–high)   Brightness value (low–high)
1. Land                Gray           127, …                           …                                 …
2. River ambient       Dark blue      0, 0, …                          …                                 …
3. …                   Light blue     0, 0, …                          … – 2.8 °C                        …
4. …                   Green          0, 255, …                        … – 5.0 °C                        …
5. …                   Yellow         255, 255, …                      … – 10.0 °C                       …
6. …                   Orange         255, 50, …                       … – 20 °C                         …
7. …                   Red            255, 0, …                        > 20 °C                           …
8. …                   White          255, 255, …                      …                                 …
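
A hedged sketch of how such a density slice could be applied in NumPy. The brightness-value breakpoints and the light-blue RGB triple below are hypothetical illustrations, not the values from the original table:

```python
import numpy as np

def density_slice(thermal_bv: np.ndarray) -> np.ndarray:
    """Map 8-bit thermal brightness values to density-slice colors."""
    # Hypothetical class breakpoints on the 0-255 brightness-value axis.
    bv_edges = [0, 74, 90, 110, 130, 150, 180, 256]
    palette = np.array([
        [127, 127, 127],   # land: gray
        [  0,   0, 255],   # river ambient: dark blue
        [  0, 100, 255],   # light blue (illustrative RGB)
        [  0, 255,   0],   # green
        [255, 255,   0],   # yellow
        [255,  50,   0],   # orange
        [255,   0,   0],   # red: hottest class
    ], dtype=np.uint8)
    # Assign each pixel to a class (0..6), then look up its display color.
    classes = np.digitize(thermal_bv, bv_edges[1:-1])
    return palette[classes]
```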

Color Density Slice of the Thermal Infrared Image of the Savannah River

24-bit Digital Image Processing System

Color Compositing

Optimum Index Factor
The Optimum Index Factor (OIF) ranks the 20 three-band combinations that can be made from six bands of Landsat TM data (not including the thermal-infrared band):

$$\mathrm{OIF} = \frac{\sum_{k=1}^{3} s_k}{\sum_{j=1}^{3} |r_j|}$$

where s_k is the standard deviation for band k, and r_j is the absolute value of the correlation coefficient between any two of the three bands being evaluated. The combination with the largest OIF will generally have the most information (as measured by variance) with the least amount of duplication (as measured by correlation). The method is applicable to any multispectral dataset. Candidate band combinations: 1,2,3; 1,2,4; 1,2,5; 1,2,6; 2,3,4; 2,3,5; 2,3,6; 3,4,5; 3,4,6; etc.
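
A sketch of how the OIF ranking could be computed with NumPy; the (bands, rows, cols) array layout is an assumption:

```python
import numpy as np
from itertools import combinations

def oif_ranking(image: np.ndarray):
    """Rank all three-band combinations of a (bands, rows, cols) image by OIF."""
    nbands = image.shape[0]
    flat = image.reshape(nbands, -1).astype(float)
    std = flat.std(axis=1)              # s_k for each band
    corr = np.corrcoef(flat)            # pairwise correlation matrix
    scores = []
    for i, j, k in combinations(range(nbands), 3):
        num = std[i] + std[j] + std[k]
        den = abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k])
        scores.append(((i + 1, j + 1, k + 1), num / den))
    # Largest OIF first: most variance, least duplication.
    return sorted(scores, key=lambda t: t[1], reverse=True)
```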

Optimum Index Factor
Example: of the 20 three-band combinations that can be made from six bands of Landsat TM data, band combination 3,4,5 ranks better than 1,2,3.

Sheffield Index
A statistical band-selection index based on the size of the hyperspace spanned by the three bands under investigation. Sheffield suggests that the bands with the largest hypervolumes be selected. The index is based on computing the determinant of each p × p sub-matrix generated from the original 6 × 6 covariance matrix (if six bands are under investigation). The Sheffield Index (SI) is:

$$\mathrm{SI} = \left| \boldsymbol{\Sigma}_p \right|$$

where $|\boldsymbol{\Sigma}_p|$ is the determinant of the covariance matrix of subset size p. In this case, p = 3 because we are trying to discover the optimum three-band combination for image display purposes. The SI is first computed from a 3 × 3 covariance matrix derived from just band 1, 2, and 3 data. It is then computed from a covariance matrix derived from just band 1, 2, and 4 data, and so on. This process continues for all 20 possible band combinations if six bands are under investigation, as in the previous example. The band combination that results in the largest determinant is selected for image display. All of the information necessary to compute the SI is present in the original 6 × 6 covariance matrix. The Sheffield Index can be extended to datasets containing n bands. Candidate band combinations: 1,2,3; 1,2,4; 1,2,5; 1,2,6; 2,3,4; 2,3,5; 2,3,6; 3,4,5; 3,4,6; etc.
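
A companion sketch for the Sheffield Index, under the same assumed (bands, pixels) layout; note that every sub-determinant is pulled from the single full covariance matrix:

```python
import numpy as np
from itertools import combinations

def sheffield_ranking(flat: np.ndarray, p: int = 3):
    """Rank p-band subsets of a (bands, pixels) array by covariance determinant."""
    cov = np.cov(flat)                        # full n x n covariance matrix
    scores = []
    for subset in combinations(range(cov.shape[0]), p):
        sub = cov[np.ix_(subset, subset)]     # p x p covariance sub-matrix
        scores.append((tuple(b + 1 for b in subset), np.linalg.det(sub)))
    # Largest hypervolume (determinant) first.
    return sorted(scores, key=lambda t: t[1], reverse=True)
```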

Merging Remotely Sensed Data (Jensen, 2004)
- Band Substitution
- Color Space Transformation and Substitution
  - RGB-to-IHS transformation and back again
  - Chromaticity color coordinate system and the Brovey transformation
- Principal Component Substitution
- Pixel-by-Pixel Addition of High-Frequency Information
- Smoothing Filter-based Intensity Modulation Image Fusion

Merging Different Types of Remotely Sensed Data for Effective Visual Display
All data sets to be merged must be accurately registered to one another and resampled to the same pixel size. Several alternatives exist for merging the data sets, including:
1. Simple band substitution methods.
2. Color space transformation and substitution methods using various color coordinate systems.
3. Substitution of the high spatial resolution data for principal component #1.

Merging Remotely Sensed Data Using the Band Substitution Method

Color Composite of Marco Island, Florida SPOT Imagery (October 11, 1988)
Created using the band substitution method:
R = SPOT band 3 (NIR), 20 m
G = SPOT band 4 (Pan), 10 m
B = SPOT band 1 (Green), 20 m
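
A minimal sketch of the band-substitution composite; the function and argument names are illustrative, and co-registration/resampling to a common grid is assumed to have been done already:

```python
import numpy as np

def band_substitution_composite(nir, pan, green):
    """Stack NIR, Pan, Green into the R, G, B display planes."""
    rgb = np.dstack([nir, pan, green]).astype(float)
    # Per-channel min-max stretch to 0-255 for display.
    rgb -= rgb.min(axis=(0, 1))
    rgb /= np.maximum(rgb.max(axis=(0, 1)), 1e-9)
    return (rgb * 255).astype(np.uint8)
```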

Merging Remotely Sensed Data (Jensen, 2004)
- Color Space Transformation and Substitution
  - RGB-to-IHS transformation and back again
  - Chromaticity color coordinate system and the Brovey transformation
- Principal Component Substitution
- Pixel-by-Pixel Addition of High-Frequency Information
- Smoothing Filter-based Intensity Modulation Image Fusion

Intensity, Hue, Saturation (IHS) Color Coordinate System

Merging Different Types of Remotely Sensed Data for Effective Visual Display
Intensity-Hue-Saturation (IHS) Substitution: IHS values can be derived from RGB values through a set of transformation equations. The intensity (I) component from the IHS transformation is then substituted for one of the bands, e.g., RGB = 4, I, 2.
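
Since the book's exact IHS equations are not reproduced here, the sketch below uses matplotlib's RGB/HSV conversion as a stand-in color-space transform to illustrate the substitution idea; array names and scaling conventions are assumptions:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def intensity_substitution(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Replace the intensity-like component of a composite with a pan band.

    rgb: float composite scaled to 0-1, shape (rows, cols, 3).
    pan: co-registered panchromatic band scaled to 0-1, same pixel size.
    """
    hsv = rgb_to_hsv(rgb)      # forward transform: RGB -> (H, S, V)
    hsv[..., 2] = pan          # substitute the high-resolution band for V
    return hsv_to_rgb(hsv)     # reverse transform back to RGB for display
```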

Relationship Between RGB and IHS Color Systems

Chromaticity Color Coordinate System (Jensen, 2004)
A chromaticity color coordinate system can be used to specify color. The coordinates in the chromaticity diagram represent the relative fractions of each of the primary colors (red, green, and blue) present in a given color. Since the sum of all three primaries must add to 1, we have the relationship:

$$x + y + z = 1, \quad \text{or} \quad z = 1 - (x + y).$$

Entry into the chromaticity diagram is made using the following relationships:

$$x = \frac{R}{R + G + B}, \quad y = \frac{G}{R + G + B}, \quad z = \frac{B}{R + G + B}$$

where R, G, and B represent the amounts of red, green, and blue needed to form any particular color, and x, y, and z represent the corresponding normalized color components, also known as trichromatic coefficients. Only x and y are required to specify the chromaticity coordinates of a color in the diagram since x + y + z = 1.
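
The trichromatic coefficients transcribe directly into code; a small sketch assuming NumPy arrays:

```python
import numpy as np

def chromaticity_coords(r, g, b):
    """Return the (x, y) chromaticity coordinates of co-registered R, G, B arrays."""
    total = r.astype(float) + g + b
    total[total == 0] = np.nan        # chromaticity is undefined for black pixels
    x, y = r / total, g / total
    return x, y                       # z = 1 - x - y is implied
```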

Chromaticity Diagram

Image Merging Using the Brovey Transform (Jensen, 2004)
The Brovey transform may be used to merge (fuse) images with different spatial and spectral characteristics. It is based on the chromaticity transform and is a much simpler technique than the RGB-to-IHS transformation. The Brovey transform can also be applied to individual bands if desired. It is based on the following intensity modulation:

$$R_{new} = \frac{R}{I} \times P, \quad G_{new} = \frac{G}{I} \times P, \quad B_{new} = \frac{B}{I} \times P, \qquad I = R + G + B$$

where R, G, and B are the spectral band images of interest (e.g., 30 × 30 m Landsat ETM+ bands 4, 3, and 2) to be placed in the red, green, and blue image processor memory planes, respectively; P is a co-registered band of higher spatial resolution data (e.g., 1 × 1 m IKONOS panchromatic data); and I = intensity.
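
A direct sketch of the intensity modulation above; inputs are assumed to be float arrays already resampled to the panchromatic pixel size:

```python
import numpy as np

def brovey_fuse(r, g, b, pan):
    """Brovey fusion: modulate each band's chromaticity fraction by the pan band."""
    intensity = r + g + b
    intensity = np.where(intensity == 0, 1e-9, intensity)  # avoid divide-by-zero
    return np.dstack([(r / intensity) * pan,
                      (g / intensity) * pan,
                      (b / intensity) * pan])
```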

Image Merging using Band Substitution, Principal Components Substitution, and the Brovey Transform

Image Merging Using Principal Component Substitution
Chavez et al. (1991) applied principal components analysis (PCA) to six Landsat TM bands. The SPOT panchromatic data were contrast stretched to have approximately the same variance and average as the first principal component image. The stretched panchromatic data were then substituted for the first principal component image, and the data were transformed back into RGB space. The stretched panchromatic image may be substituted for the first principal component image because the first principal component normally contains the information that is common to all the bands input to PCA, while spectral information unique to any of the input bands is mapped to the other n principal components.
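
A hedged sketch of the Chavez et al. (1991) procedure using an eigen-decomposition of the band covariance matrix; the array layouts and names are assumptions, not the authors' code:

```python
import numpy as np

def pc_substitution(bands: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Pan-sharpen by substituting a stretched pan image for PC1.

    bands: (n, rows, cols) multispectral stack.
    pan:   co-registered panchromatic image resampled to the same grid.
    """
    n, rows, cols = bands.shape
    flat = bands.reshape(n, -1).astype(float)
    mean = flat.mean(axis=1, keepdims=True)

    # Eigen-decomposition of the band covariance matrix gives the PC transform.
    eigvals, eigvecs = np.linalg.eigh(np.cov(flat))
    order = np.argsort(eigvals)[::-1]          # PC1 first
    eigvecs = eigvecs[:, order]
    pcs = eigvecs.T @ (flat - mean)            # forward transform

    # Stretch pan to match PC1's mean and variance, then substitute it for PC1.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    pcs[0] = p

    restored = eigvecs @ pcs + mean            # inverse transform back to band space
    return restored.reshape(n, rows, cols)
```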