1 Lecture Nine: Image processing
2 Learning outcomes
By the end of the lecture you will:
–Appreciate the importance of image processing in multimedia
–Be familiar with the range of application areas in which image processing is used
–Understand some of the basic principles and concepts
–Be familiar with image enhancement techniques
3 Introduction
–What is image processing / computer vision
–Application areas
–An image model
–Image bands
–Sampling and quantization
–Elements of an image processing system
–Image acquisition
–Image enhancement
4 What is image processing…
–Covers those techniques concerned with obtaining information from imagery
–Not limited to the visible band; includes, for example, infra-red images
–Encompasses studies in computer science, physics, electronics and mathematics
–The main concern of image processing is the manipulation and analysis of pictures by computer
5 What is image processing…
Main topics include:
–digitization, coding / compression
–enhancement, restoration
–segmentation, description
–recognition, analysis
–feature extraction, interpretation
6 What is image processing…
Graphics vs. image processing vs. computer vision:
–Graphics generates an image from a scene description (e.g. Circle(20,20), Triangle(50,50), Square(30,40))
–Computer vision recovers such a description from an image
–Image processing transforms one image into another
7 What is image processing…
The computer vision hierarchy (data decreases and knowledge increases as we move up the levels):
–High level: computer vision, pattern recognition, interpretation
–Medium level: image analysis, feature extraction
–Low level: image processing, image enhancement
8 Applications
Medical image processing
–enhancement of X-ray, CT, MRI data
–3D visualization of scans
–analysis of shape and size of structures
Factory automation
–PCB inspection
–quality control
–robotics
Satellite imagery
–classification of terrain
–detection of resources
–automatic map-making
Military
–target tracking
Other applications
–video conferencing
–security: finger/hand print analysis, eye retina comparison
–weather mapping
–document reading
–licence plate inspection
–tax disc inspection
–face recognition
9 Image model…
–An image is a picture, photograph, display or other form
–A digital image is a sampled and quantized representation of a scene
–Sampling is achieved by averaging the brightness of small patches in an image – pixels
–Quantization corresponds to assigning a (discrete) value to the brightness of a pixel (i.e. a greylevel)
10 Image model…
Typical values:
–Image size: 256x256 or 512x512 pixels
–Greyscale: 0–255 (8 bits per pixel; 16 bits for MRI)
–Memory implications? (a rough calculation is sketched below)
Axis convention:
–Image [row][col], with the image origin at (0,0)
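A minimal sketch of the memory calculation, assuming uncompressed storage (the function name is illustrative):

    # Uncompressed storage for a greyscale image: rows * cols * bytes per pixel
    def image_bytes(rows, cols, bits_per_pixel=8):
        return rows * cols * bits_per_pixel // 8

    print(image_bytes(256, 256))        # 65,536 bytes  (64 KB)
    print(image_bytes(512, 512))        # 262,144 bytes (256 KB)
    print(image_bytes(512, 512, 16))    # 524,288 bytes (512 KB), e.g. 16-bit MRI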
11 Image bands
–A scene can have several images associated with it
–Colour images are formed from three separate components (R, G, B)
–Multispectral scanners flown in aircraft or satellites typically gather between 3 and 11 images of a scene
–Each individual image is known as a band (a small sketch follows)
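A small illustrative sketch, assuming the colour image is held as a NumPy array of shape rows x cols x 3:

    import numpy as np

    # A hypothetical 4x4 RGB image: shape (rows, cols, 3), 8 bits per band
    rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

    # Each individual component image is a band
    red_band   = rgb[:, :, 0]
    green_band = rgb[:, :, 1]
    blue_band  = rgb[:, :, 2]

    print(red_band.shape)   # (4, 4) – one greyscale band per colour component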
12 Elements of an image processing system…
Camera → ADC → frame store → DAC → monitor, with the image processing software operating on the frame store
13 Elements of an image processing system…
Illumination
–The success of most industrial image processing systems is fundamentally based on adequate illumination
–Uncontrolled light is a particular challenge
–Objects positioned between camera and light produce a silhouette of the object
–The relative positions of object, light and camera are important
–For moving scenes, a flashing strobe light is used to “freeze” the image
14 Elements of an image processing system…
Processing
–The computer acquires and processes the image data and reads the sensors
–Usually requires special-purpose computers/hardware
–Typical software development environment: a library of standard procedures, tools for realising new algorithms (high-level language compiler, debugger, etc.) and an appropriate user interface
15 Image acquisition…
Real-time capture
–25 images / second for no “flicker”
–1 second of 512x512 8-bit mono images requires about 6.25 MB of storage (checked in the sketch below)
Range images
–Light intensity is not captured; instead the object's distance from the sensor is modelled
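A quick check of that storage figure, a sketch assuming uncompressed 8-bit frames:

    frames_per_second = 25
    rows, cols, bytes_per_pixel = 512, 512, 1

    bytes_per_second = frames_per_second * rows * cols * bytes_per_pixel
    print(bytes_per_second)                  # 6,553,600 bytes
    print(bytes_per_second / (1024 * 1024))  # 6.25 (MB)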
16 Image acquisition…
Scanners
–Generally capture a still, flat image with considerable accuracy
–Resolution: 100 dpi to 1200+ dpi
–Use a single row of CCDs (e.g. 2048) to collect data
–Main problems: only still images; mechanical operation may not be reliable (some are hand-held)
17 Image acquisition…
Satellite imagery
–Used in military, meteorological, geographical and agricultural applications
–Typically scans 6 horizontal lines at a time and produces images of very high quality (e.g. 2340x3380, 8 bits per pixel)
–Resolution varies depending on the height of the satellite (or aircraft), the camera/scanning technology and weather conditions
–Current technology may be able to read a car number plate from an altitude of 180 miles
18 Image acquisition…
Ranging devices
–Ultrasound radar: short-range (up to 40 m) image collection
–Laser radar: pulses of light are transmitted at points equivalent to pixels on the image; the transmitter is switched off and the receiver ‘sees’ the increase in light intensity, calculating distance from the time taken for the beam to return (see the sketch below)
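A minimal time-of-flight sketch; the formula (half the round-trip distance at the speed of light) is the standard calculation and not taken from the slides:

    SPEED_OF_LIGHT = 299_792_458.0  # metres per second

    def laser_range(round_trip_seconds):
        # The pulse travels to the object and back, so halve the path length
        return SPEED_OF_LIGHT * round_trip_seconds / 2.0

    print(laser_range(66.7e-9))  # ~10 m for a 66.7 ns round trip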
19 Image enhancement
Image enhancement refers to any technique that improves or modifies the image data
–either for purposes of subsequent visual evaluation or for further numerical processing
Image enhancement techniques include:
–grey-level and contrast manipulation
–noise reduction
–edge sharpening/detection
Image enhancements are carried out by point- and region-based operations
–Point operations modify each pixel of an image based only on the value of that pixel (a small contrast-stretching sketch follows)
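A minimal sketch of a grey-level point operation (contrast stretching); the function name and the linear mapping are illustrative assumptions:

    import numpy as np

    def contrast_stretch(image, out_min=0, out_max=255):
        """Linearly map the image's grey-level range onto [out_min, out_max]."""
        img = image.astype(np.float64)
        lo, hi = img.min(), img.max()
        if hi == lo:                      # flat image: nothing to stretch
            return np.full_like(image, out_min)
        stretched = (img - lo) / (hi - lo) * (out_max - out_min) + out_min
        return stretched.astype(np.uint8)

    dull = np.array([[100, 110], [120, 130]], dtype=np.uint8)
    print(contrast_stretch(dull))   # [[  0  85] [170 255]]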
20 Point operations…
Aim
–To emphasise or suppress grey-levels: grey-level smoothing, emphasising grey-level differences, sharpening grey-level steps
How?
–Pass an operator matrix over the image
–Assign a new value to each pixel
–The new value is determined by the surrounding pixels (a sketch of this pass is shown below)
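A minimal sketch of passing a 3x3 operator matrix over an image, written with plain loops for clarity; the function names and border handling are illustrative assumptions:

    import numpy as np

    def apply_operator(image, operator):
        """Slide a 3x3 operator over the image; borders are left unchanged."""
        rows, cols = image.shape
        out = image.astype(np.float64).copy()
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                neighbourhood = image[r - 1:r + 2, c - 1:c + 2].astype(np.float64)
                out[r, c] = np.sum(neighbourhood * operator)
        return out

    # A simple 3x3 mean (smoothing) operator as an example
    mean_operator = np.full((3, 3), 1.0 / 9.0)
    flat = np.full((5, 5), 10.0)
    print(apply_operator(flat, mean_operator)[2, 2])  # 10.0 – a flat patch is unchanged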
21 Greylevel smoothing
–Used to smooth edges and reduce noise
–Noise can be introduced in the image acquisition or transmission stages
–Operations: mean, min, max, median
–Can unfortunately remove fine detail, so it may be necessary to emphasise edges first (a median-filter sketch follows)
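A minimal median-smoothing sketch; taking the median over each 3x3 neighbourhood and leaving borders unchanged is an illustrative choice:

    import numpy as np

    def median_smooth(image):
        rows, cols = image.shape
        out = image.copy()
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                out[r, c] = np.median(image[r - 1:r + 2, c - 1:c + 2])
        return out

    noisy = np.array([[10, 10, 10],
                      [10, 255, 10],     # a single noisy spike
                      [10, 10, 10]], dtype=np.uint8)
    print(median_smooth(noisy)[1, 1])    # 10 – the spike is removed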
22 Edge detection
–Also uses operators to calculate new pixel values
–Utilises areas of sharp contrast within the image: it looks at gradients within the image
–Edges are characterised by large slopes in the image function f(x,y), e.g. a rise of 6 grey-levels over 2 pixels gives a gradient of 6/2 = 3
23 Emphasising greylevel differences
Prewitt operator (vertical and horizontal):
 1  1  1      -1  0  1
 0  0  0      -1  0  1
-1 -1 -1      -1  0  1
Sobel operator (vertical and horizontal):
 1  2  1      -1  0  1
 0  0  0      -2  0  2
-1 -2 -1      -1  0  1
24 How do edge operators work?
Sobel operator (vertical and horizontal):
 1  2  1      -1  0  1
 0  0  0      -2  0  2
-1 -2 -1      -1  0  1
Operation
–The image data is a series of grey-level values, for example:
123  65  78  95 123
 45  96 256  78  36
147  56  96  32  78
 65 125  86  35  69
 78 148 248  75  69
–The horizontal and vertical operators are passed over the image with the centre of the matrix on each pixel
–A new value for the pixel is calculated and stored accordingly
25 How do edge operators work?
Centred on the 96 in the second row we would compute:
X = 123*-1 + 65*0 + 78*1 + 45*-2 + 96*0 + 256*2 + 147*-1 + 56*0 + 96*1 = 326
Y = 123*1 + 65*2 + 78*1 + 45*0 + 96*0 + 256*0 + 147*-1 + 56*-2 + 96*-1 = -24
Output = √(X² + Y²) ≈ 327, clamped to 255, the maximum grey-level of an 8-bit image (reproduced in the sketch below)
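A minimal sketch that reproduces this calculation; clamping to 255 assumes an 8-bit output image:

    import numpy as np

    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    sobel_y = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]])

    # The 3x3 neighbourhood centred on the 96 in the second row of the example
    patch = np.array([[123,  65,  78],
                      [ 45,  96, 256],
                      [147,  56,  96]])

    gx = np.sum(patch * sobel_x)            # 326
    gy = np.sum(patch * sobel_y)            # -24
    magnitude = np.sqrt(gx**2 + gy**2)      # ~326.9, i.e. 327 when rounded
    print(min(int(round(magnitude)), 255))  # prints 255 (327 clamped to the 8-bit maximum)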
26 Example results of the Prewitt and Sobel operators
27 Pattern Recognition
28 Introduction
–The purpose of pattern recognition is to place objects in a given world into categories
–The interface between the world and the pattern recognition system is provided by sensors
29 Pattern Recognition Procedure The first step of the procedure extracts features from the input data which characterise the objects. Based on these features, the objects are identified and sorted into classes.
30 Labels
In order to sort the objects, the system needs information concerning the features of the objects, i.e. the system needs a label
31 Types of Pattern Recognition System
Unsupervised
–These systems generate their labels themselves, assigning them to objects with similar features which could belong to the same class
–They cannot recognise the objects they are analysing
32 Types of Pattern Recognition System
Supervised
–These systems are ‘taught’ such information as ‘this is a banana’
–The work for this system is divided into two stages:
Training step – requires a teacher who describes the properties of each class
Classification step – compares the features of an actual object with those values which have been taught; the object is assigned to the class which fits best
33 Problem with the second approach
–If someone inserts a foreign object into a fruit-recognising system (e.g. a calculator), then there is still a class into which the calculator ‘fits better’ than any other
–This can be overcome by introducing a rejection level which tests the limits of similarity (see the sketch below)
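A minimal supervised-classification sketch: a nearest-class-mean classifier with a rejection threshold. The feature values, class names and threshold are illustrative assumptions, not taken from the lecture:

    import numpy as np

    # Training step: a 'teacher' supplies labelled feature vectors
    # (here two hypothetical features: colour index and compactness)
    training = {
        "apple":  np.array([[0.60, 0.90], [0.65, 0.85]]),
        "banana": np.array([[0.90, 0.30], [0.85, 0.35]]),
    }
    class_means = {label: feats.mean(axis=0) for label, feats in training.items()}

    def classify(features, reject_distance=0.3):
        """Classification step: assign the class whose mean fits best,
        rejecting objects that are too far from every class."""
        label, dist = min(
            ((lbl, np.linalg.norm(features - mean)) for lbl, mean in class_means.items()),
            key=lambda pair: pair[1],
        )
        return label if dist <= reject_distance else "rejected"

    print(classify(np.array([0.62, 0.88])))  # apple
    print(classify(np.array([0.10, 0.10])))  # rejected (e.g. a calculator)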
34 Feature Space
The features extracted from the input data form a feature space, e.g. a two-dimensional space with axes colour (blue, green, yellow, red) and compactness (surface area : volume), in which apples, plums, bananas and oranges form separate clusters
35 Problems with the Feature Space
Too few or unsuitable features result in classes which are not separable
–If an appropriate choice of features is not possible, or too expensive, then the aim should be to use features leading to a minimum classification error
–To avoid classification errors, it may be necessary to ‘reduce’ the world, e.g. restrict the colour of apples to green. If this is not practical, then an additional feature must be introduced, e.g. surface texture
36 Review During this lecture we have looked at: –Image processing –The image model –Sampling and quantization –The image processing system –Image acquisition –Point operations –Image enhancement –Pattern recognition
37 Questions?