Digital Image Processing
Chapter 2 Digital Image Fundamentals
Table of Contents
- Elements of Visual Perception
  - Structure of the Human Eye
  - Image Formation in the Eye
  - Brightness Adaptation and Discrimination
- Light and the Electromagnetic Spectrum
- Image Sensing and Acquisition
  - Image Acquisition Using a Single Sensor
  - Image Acquisition Using a Sensor Strip
  - Image Acquisition Using Sensor Arrays
  - A Simple Image Formation Model
- Image Sampling and Quantization
  - Basic Concepts in Sampling and Quantization
  - Representing Digital Images
  - Spatial and Gray-Level Resolution
  - Aliasing and Moiré Patterns
  - Zooming and Shrinking Digital Images
- Basic Relationships Between Pixels
  - Neighbors of a Pixel
  - Adjacency, Connectivity, Regions, and Boundaries
  - Distance Measures
  - Image Operations on a Pixel Basis
  - Linear and Nonlinear Operations
- Summary
Elements of Visual Perception
Structure of the Human Eye

Outer sheath:
- Sclera: white outer sheath that blocks stray light from entering.
- Choroid: membrane inside the sclera that contains the blood vessels.

Window:
- Cornea: transparent protective membrane.
- Iris: aperture that controls the amount of light entering the lens.
- Lens: fatty tissue containing water that focuses light onto the fovea.
- Ciliary body and muscle: deform the lens in order to control the focus.

Image sensing:
- Vitreous humor: fills the inner cavity of the eyeball.
- Retina: contains a network of discrete light receptors.
- Fovea: the focal point on the back of the eyeball.
- Blind spot: the area where the optic nerve exits the eyeball.
Elements of Visual Perception
Image Formation in the Eye

The lens of the human eye is flexible: the ciliary muscles vary the tension on the lens, controlling its thickness and therefore its focus. The focal length of the human eye varies between roughly 14 mm and 17 mm. (Figure: the eye viewing a palm tree; point C is the optical center of the lens.)
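A worked example of this geometry, in the style of the standard textbook illustration (the 15 m object height and 100 m viewing distance are assumed values; 17 mm is the distance from the lens center to the retina):

```latex
% Retinal image height h by similar triangles:
% object height / object distance = retinal image height / 17 mm
\[
  \frac{15}{100} = \frac{h}{17}
  \qquad\Longrightarrow\qquad
  h = 17 \times \frac{15}{100} = 2.55\ \text{mm}
\]
```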
Elements of Visual Perception
Distribution of Rods and Cones

Cones are most dense in the center of the retina. Rods increase in density from the center out to approximately 20° off axis, then decrease in density out to the extreme periphery of the retina.
Elements of Visual Perception
Cone Receptors

The cones in each eye number between 6 and 7 million. They are located primarily in the central portion of the retina, called the fovea, and are highly sensitive to color. Humans can resolve fine detail with the cones largely because each one is connected to its own nerve end. Muscles controlling the eye rotate the eyeball until the image of an object of interest falls on the fovea. Cone vision is called photopic or bright-light vision.

Rod Receptors

The rods number between 75 and 150 million and are distributed over the retinal surface. The larger area of distribution, and the fact that several rods are connected to a single nerve end, reduce the amount of detail discernible by these receptors. Rods serve to give a general, overall picture of the field of view. They are not involved in color vision and are sensitive to low levels of illumination. Rod vision is called scotopic or dim-light vision.
Elements of Visual Perception
Brightness Adaptation and Discrimination

The intensity perceived by the human eye is a logarithmic function of the light intensity incident on the eye. The range of photopic vision alone is about 10^6. The transition from scotopic to photopic vision is gradual, over the range from about 0.001 to 0.1 millilambert. The eye accomplishes this large overall range through brightness adaptation: the range of intensities it can distinguish simultaneously is very small compared to the total adaptation range.
Elements of Visual Perception
Brightness Adaptation and Discrimination

Range of Subjective Brightness. The range of light intensity levels to which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit. Experimental evidence indicates that subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of the light intensity incident on the eye.
Elements of Visual Perception
Brightness Adaptation and Discrimination

Light Intensity Experiment. A diffuse area, such as opaque glass, is illuminated from behind by a light source whose intensity I can be varied. An increment of illumination ΔI is added in the form of a short-duration flash that appears as a circle in the center of the uniformly illuminated field. When ΔI is strong enough, the subject responds “yes” all of the time.
Elements of Visual Perception
Brightness Adaptation and Discrimination

Experimental Results. The quantity ΔI_c / I, where ΔI_c is the increment of illumination discriminable 50% of the time against background illumination I, is called the Weber ratio. The curve shows that brightness discrimination is poor (the Weber ratio is large) at low levels of illumination and improves significantly (the Weber ratio decreases) as background illumination increases. The two branches of the curve reflect the fact that at low levels of illumination vision is carried out by the rods, whereas at high levels (showing better discrimination) vision is the function of the cones.
Elements of Visual Perception
Brightness Adaptation and Discrimination

Two phenomena demonstrate that perceived brightness is not a simple function of intensity. The first is called Mach bands, after Ernst Mach. The second is called simultaneous contrast. Many other optical illusions also illustrate such human perception phenomena.
Elements of Visual Perception
Brightness Adaptation and Discrimination

Visual Phenomena. Mach bands are an example showing that perceived brightness is not a simple function of intensity; the relative vertical positions of the two profiles have no special significance and were chosen for clarity. Simultaneous contrast is a second example: all the inner squares have the same intensity, but they appear progressively darker as the background becomes lighter.
Light and the Electromagnetic Spectrum
Some Simple Descriptors

In 1666 Sir Isaac Newton discovered that light passed through a prism emerges decomposed into its constituent colors. The wavelength and frequency of EM radiation are related by λ = c/ν, where λ is the wavelength, c is the speed of light in a vacuum (2.998 × 10^8 m/s), and ν is the frequency in hertz. The energy of EM radiation is E = hν, where h is Planck's constant (6.626 × 10^-34 J·s). Electromagnetic waves can be thought of as massless particles, each carrying a packet of energy with wave-like properties; these packets are called photons. The color of an object is determined by its reflectance properties: if an object reflects green light better than other wavelengths, the object appears greenish. Achromatic light is light that has no color; it encompasses the entire spectrum of visible light. Chromatic light is measured by three quantities: radiance, luminance, and brightness. Radiance is the total amount of energy put out by the light source.
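As a quick numeric sketch of these two relations (the 550 nm wavelength below is an assumed value, roughly the middle of the visible green band):

```python
# Frequency and photon energy for green light, from nu = c / lambda
# and E = h * nu. The wavelength is an assumed illustrative value.
C = 2.998e8        # speed of light in a vacuum, m/s
H = 6.626e-34      # Planck's constant, J*s

wavelength = 550e-9            # 550 nm, in meters
frequency = C / wavelength     # about 5.45e14 Hz
energy = H * frequency         # about 3.6e-19 J per photon

print(f"nu = {frequency:.3e} Hz, E = {energy:.3e} J")
```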
Light and the Electromagnetic Spectrum
Some Simple Descriptors

Luminance is the amount of light energy perceived by an observer. Brightness is a subjective quality that embodies the achromatic notion of intensity; it is key to the description of color sensation. If a sensor can be produced to detect a certain band of the EM spectrum, then in theory we can image in that band. In practice, the objects we wish to observe must be equal to or greater in size than the wavelength of the radiation used.
Light and the Electromagnetic Spectrum
Wavelength vs. Frequency

The electromagnetic spectrum can be expressed in terms of wavelength, frequency, or energy. Wavelength λ and frequency ν are related by λ = c/ν, where c is the speed of light (2.998 × 10^8 m/s).

Energy of Electromagnetic Radiation. The energy of the various components of the electromagnetic spectrum is given by E = hν, where h is Planck's constant. The units of wavelength are meters, with microns (denoted μm and equal to 10^-6 m) and nanometers (10^-9 m) used just as frequently. Frequency is measured in hertz (Hz).
Image Sensing and Acquisition
Image Acquisition Using a Single Sensor
- Mechanical motion is applied along two axes.
- Great control over resolution.
- Extremely slow.

Image Acquisition Using a Sensor Strip
- A geometry used much more frequently than the single sensor is an in-line arrangement of sensors in the form of a sensor strip. The strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction.
- Great control over resolution along one axis.
- Faster than single-sensor techniques.
Image Sensing and Acquisition
Image Acquisition Using Sensor Arrays
- No mechanical motion required.
- Resolution is set by the density of the sensor array.
- Numerous electromagnetic and some ultrasonic sensing devices are frequently arranged in an array format.
Image Sensing and Acquisition
A Simple Image Formation Model

The signal arriving at the sensor combines two components: illumination and reflectance. The input signal is f(x, y) = i(x, y) · r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1. The output of the sensing system is a continuous voltage signal, which must be sampled in discrete time and quantized.

Average illumination values (lm/m²):
- Clear day: 90,000
- Cloudy day: 10,000
- Commercial office: 1,000
- Clear night: 0.1

Average reflectance values:
- Black velvet: 0.01
- Stainless steel: 0.65
- Flat white wall paint: 0.80
- Silver-plated metal: 0.90
- Snow: 0.93
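A minimal sketch of this model in Python, with a made-up illumination pattern and reflectance map standing in for a real scene:

```python
import numpy as np

# f(x, y) = i(x, y) * r(x, y); the i and r arrays below are invented
# purely for illustration, then quantized to 8 bits as a sensor would.
M, N = 256, 256
y, x = np.mgrid[0:M, 0:N]

i = 500.0 * (1.0 + 0.5 * np.sin(2 * np.pi * x / N))  # illumination, 0 < i
r = np.clip(y / M, 0.01, 0.99)                       # reflectance, 0 < r < 1

f = i * r                                            # continuous-valued image

# Sample-and-quantize stage: map the voltage range onto 256 gray levels.
g = np.round(255 * (f - f.min()) / (f.max() - f.min())).astype(np.uint8)
```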
Image Sampling and Quantization
Sampling is the process of taking values of a signal at discrete points in time and space. Quantizing is the process of taking a continuous-amplitude input signal and producing a signal with a finite set of distinct amplitude levels.
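A small Python sketch of both steps, using an assumed 5 Hz sine wave, a 100 Hz sampling rate, and a 3-bit quantizer:

```python
import numpy as np

def sample_and_quantize(fs=100.0, k=3, duration=1.0):
    # Sampling: evaluate the signal only at discrete instants t = n / fs.
    t = np.arange(0.0, duration, 1.0 / fs)
    x = np.sin(2 * np.pi * 5.0 * t)              # 5 Hz test signal in [-1, 1]
    # Quantizing: map the continuous amplitudes onto 2**k discrete levels.
    levels = 2 ** k
    q = np.round((x + 1.0) / 2.0 * (levels - 1))
    return t, q / (levels - 1) * 2.0 - 1.0       # discrete-amplitude signal

t, xq = sample_and_quantize()
```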
Image Sampling and Quantization
Digital Image Representation

A digital image can be represented by a matrix of pixel values. The matrix dimensions are determined by the spatial resolution; the number of values each pixel can take (the gray-level or color depth, L = 2^k for k bits per pixel) determines the intensity resolution.
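The storage this implies, shown with assumed illustrative dimensions:

```latex
% Storage for an M x N image with k bits per pixel:
\[
  b = M \times N \times k \ \text{bits}
\]
% e.g. a 1024 x 1024 image at k = 8 needs
% 1024 * 1024 * 8 bits = 1{,}048{,}576 bytes (1 MB).
```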
Image Sampling and Quantization
Spatial Resolution

Increasing the spatial resolution of an image increases the amount of spatial detail it can represent; it also increases the size of the image. Increasing the gray-level resolution increases the intensity detail, and likewise increases the size of the image. The quality of a digital image is a function of both its spatial and gray-level resolutions.
Image Sampling and Quantization
Spatial Resolution

Assume an N × N (1024 × 1024 pixel) image with k = 8 bits per pixel (256 gray levels). Subsample it to half size by deleting every other row and every other column while keeping the number of gray levels the same. Each pass halves N, and as N shrinks a checkerboard pattern becomes visible in the result.
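In Python, one pass of this row-column deletion is a single slicing operation (the random test image below is a stand-in):

```python
import numpy as np

# Keep every other row and column, halving N per pass while leaving
# the number of gray levels unchanged.
img = np.random.randint(0, 256, (1024, 1024), dtype=np.uint8)

half = img[::2, ::2]        # 512 x 512
quarter = half[::2, ::2]    # 256 x 256
```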
Image Sampling and Quantization
Gray-Level Resolution

Varying the number of gray levels from 256 (8 bits) down to 2 (1 bit) while maintaining the spatial resolution gradually produces very fine, ridge-like structures in areas of smooth gray levels. This effect, caused by the use of an insufficient number of gray levels in smooth areas of a digital image, is called false contouring.
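A sketch of one simple way to produce the effect, by discarding low-order bits (the gradient test image is assumed):

```python
import numpy as np

def reduce_gray_levels(img: np.ndarray, k: int) -> np.ndarray:
    # Keep only the k most significant bits of an 8-bit image; small k
    # yields visible false contouring in smooth regions.
    shift = 8 - k
    return ((img >> shift) << shift).astype(np.uint8)

# Example: quantize a smooth horizontal gradient down to 2 levels.
gradient = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
coarse = reduce_gray_levels(gradient, k=1)
```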
Image Sampling and Quantization
Isopreference Curves

Only a few gray levels may be needed for images with a large amount of detail. A decrease in k tends to increase the apparent contrast of an image, a visual effect that humans often perceive as improved image quality.
Image Sampling and Quantization
Zooming and Shrinking Digital Images

Zooming and shrinking a digital image can be thought of as oversampling and undersampling, respectively; the difference is that they are applied to a digital image rather than to the original continuous scene.

Three methods of zooming:
- Nearest-neighbor interpolation (sketched below)
- Pixel replication
- Bilinear interpolation

Two methods of shrinking:
- Row-column deletion
- Reverse nearest-neighbor interpolation
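A minimal sketch of nearest-neighbor zooming (the function name and test setup are illustrative, not from the text):

```python
import numpy as np

def zoom_nearest(img: np.ndarray, factor: float) -> np.ndarray:
    # Lay a finer grid over the image and give each new pixel the value
    # of its nearest original pixel.
    M, N = img.shape
    rows = (np.arange(int(M * factor)) / factor).astype(int)
    cols = (np.arange(int(N * factor)) / factor).astype(int)
    return img[rows[:, None], cols[None, :]]

# Shrinking by row-column deletion is the slicing shown earlier:
# img[::2, ::2] halves both dimensions.
```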
Basic Relationships Between Pixels
Neighbors of a Pixel

A pixel p at coordinates (x, y) has:
- Four horizontal and vertical neighbors, N4(p) = {(x+1, y), (x-1, y), (x, y+1), (x, y-1)}.
- Four diagonal neighbors, ND(p) = {(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)}.
- The union of these two sets forms the 8-neighbors, N8(p).
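These sets translate directly into code; a sketch (image-border checks omitted):

```python
def n4(x, y):
    # Four horizontal and vertical neighbors of (x, y).
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    # Four diagonal neighbors of (x, y).
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    # The 8-neighbors are the union of the two sets.
    return n4(x, y) | nd(x, y)
```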
Basic Relationships Between Pixels
Adjacency, Connectivity, Regions, and Boundaries

For two pixels to be connected, they must be neighbors and must meet a specified similarity criterion (their values must belong to a set V).

Adjacency:
- 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
- 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
- m-adjacency: two pixels p and q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V (sketched in code below).

R is a region of an image if R is a connected set of pixels. A pixel is a boundary pixel if one or more of its neighbors is not in the region the pixel is in.

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n. Here n is the length of the path; if (x0, y0) = (xn, yn), the path is closed. We can define 4-, 8-, or m-paths depending on the type of adjacency specified.
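A sketch of the m-adjacency test, built on the n4/nd helpers above (the image array and the similarity set V are assumed inputs; bounds checking is omitted):

```python
def m_adjacent(img, p, q, V):
    # p and q are (x, y) tuples; img is indexed as img[x, y].
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(*p):                  # case (a): q is a 4-neighbor of p
        return True
    if q in nd(*p):                  # case (b): q is a diagonal neighbor and
        shared = n4(*p) & n4(*q)     # the common 4-neighbors of p and q
        return all(img[s] not in V for s in shared)  # hold no V-valued pixel
    return False
```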
Basic Relationships Between Pixels
Distance Measures

For pixels p = (x, y) and q = (s, t):
- Euclidean distance: D_e(p, q) = [(x - s)² + (y - t)²]^(1/2)
- City-block distance: D_4(p, q) = |x - s| + |y - t|
- Chessboard distance: D_8(p, q) = max(|x - s|, |y - t|)
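The three measures in code, with a small usage check:

```python
import math

# Distances between pixels p = (x, y) and q = (s, t).
def d_euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_city_block(p, q):       # D4
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_chessboard(p, q):       # D8
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(d_euclidean((0, 0), (3, 4)))    # 5.0
print(d_city_block((0, 0), (3, 4)))   # 7
print(d_chessboard((0, 0), (3, 4)))   # 4
```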
Basic Relationships Between Pixels
Image Operations on a Pixel Basis
- Division: one image is divided by another by dividing each pixel by the corresponding pixel in the other image.
- Addition: one image is added to another to form a composite of the two.
- Subtraction: one image is formed by subtracting one image from another, pixel by pixel.
- Logical NOT, OR, and AND operations invert an image or window (mask) an image.

Linear and Nonlinear Operations. H is a linear operator if, for any two images f and g and any two scalars a and b, H(af + bg) = aH(f) + bH(g). Linear operations are important because they provide well-understood, predictable results. Nonlinear operations sometimes provide better results, but they are often unpredictable.
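A quick numerical check of the linearity property, comparing the pixel sum (linear) with the pixel-wise maximum (nonlinear); the test images are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((4, 4))
g = rng.random((4, 4))
a, b = 2.0, -3.0

H_sum = lambda img: img.sum()   # satisfies H(af + bg) = aH(f) + bH(g)
print(np.isclose(H_sum(a * f + b * g), a * H_sum(f) + b * H_sum(g)))  # True

H_max = lambda img: img.max()   # does not satisfy the property
print(np.isclose(H_max(a * f + b * g), a * H_max(f) + b * H_max(g)))  # False here
```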
Summary: Structure of the Human Eye

The eye has an opaque outer covering shielding the receptors from stray light. The lens and ciliary muscles focus light coming through the iris onto the receptors. The receptors (rods and cones) turn light rays into electrical impulses, which are sent to the brain.

Image Formation in the Eye. Light enters the eye through the iris. The lens is deformed by the ciliary muscles to focus the light onto the fovea. The rods and cones detect the image pattern and transmit it along the optic nerve to the brain. Cones provide bright-light vision and detect color; rods provide low-light vision and cover a wider area of the interior of the eye.
Summary: Brightness Adaptation and Discrimination

The eye can cover a range of intensities from less than 1 lambert to around 10^6 lamberts. It obtains this range by adapting to the intensity of a region, and this adaptive sensing gives rise to the perceived-brightness phenomena described earlier.

Light and the Electromagnetic Spectrum. Wavelength and frequency are related by λ = c/ν; frequency and energy are related by E = hν.

Image Acquisition Using a Single Sensor. The spatial resolution is determined solely by the motion of the sensor. High degrees of accuracy can be obtained with these systems.
Summary: Image Acquisition Using a Sensor Strip

The spatial resolution is determined by the sensor density along one axis and by motion along the other axis. These sensors are fairly cheap while still obtaining good image quality, and they are used in most desktop scanners.

Image Acquisition Using a Sensor Array. The spatial resolution is determined solely by the sensor density. These sensors are capable of capturing images very quickly.

A Simple Image Formation Model. The image is projected onto the sensor array and captured by it; the captured image is then sampled and quantized.
Summary: Basic Concepts in Sampling and Quantization

Sampling is the process of taking signal values at discrete intervals of time and space. Quantizing is the process of mapping a continuous-valued function onto a set of discrete values.

Representing Digital Images. A digital image can be thought of as a matrix whose entries are the intensity values of the pixels.

Spatial and Gray-Level Resolution. Spatial and gray-level resolution together determine the size of an image. Undersampling an image (insufficient spatial resolution) causes a checkerboard pattern in the resulting image. Underquantizing an image (insufficient gray-level resolution) causes false contouring.
Summary: Aliasing and Moiré Patterns

Undersampling causes aliasing. It is impossible to eliminate aliasing entirely, because it is impossible to work with signals of infinite duration. Moiré patterns in images are caused by aliasing.

Zooming and Shrinking Digital Images. The three methods of zooming are nearest-neighbor interpolation, pixel replication, and bilinear interpolation. The two methods of shrinking are row-column deletion and reverse nearest-neighbor interpolation.

Neighbors of a Pixel. The neighbors of a pixel are given by the sets N4(p), ND(p), and N8(p); for many operations it is important to determine them.
Summary: Adjacency, Connectivity, Regions, and Boundaries

Two pixels p and q are 4-adjacent if both have values from a set V and q is in N4(p). They are 8-adjacent if both have values from V and q is in N8(p). They are m-adjacent if both have values from V and either q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V. A region is a connected set of pixels. A boundary pixel is a pixel that has at least one neighbor outside its region.
Summary: Distance Measures

Euclidean distance: D_e(p, q) = [(x - s)² + (y - t)²]^(1/2). City-block distance: D_4(p, q) = |x - s| + |y - t|. Chessboard distance: D_8(p, q) = max(|x - s|, |y - t|).

Image Operations on a Pixel Basis. Logical AND, OR, and NOT; image addition, subtraction, and division.

Linear and Nonlinear Operations. Linear operations take the form H(af + bg) = aH(f) + bH(g); nonlinear operations are those that do not satisfy this property.