Digital Image Processing ECE 480 Technical Lecture Team 4 Bryan Blancke Mark Heller Jeremy Martin Daniel Kim


Background
What is digital image processing?
 Modification of digital data, with the aid of a computer, to improve image quality
 Improvement of pictorial information for human interpretation
 Processing maximises the clarity, sharpness and detail of the features of interest for information extraction and further analysis, e.g. edge detection, noise removal

Applications
 Image enhancement & restoration
 Medical visualisation
 Law enforcement
 Industrial inspection
 Artistic effects
 Human-computer interfaces

Segmentation
 Segmentation subdivides an image into its component regions or objects
 No single segmentation technique is perfect

Segmentation (continued)
 The two most common segmentation techniques are:
1) Thresholding
2) Edge detection
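A minimal sketch of the first technique, thresholding: every pixel brighter than a chosen cutoff is assigned to the object, the rest to the background (the image values and the threshold 128 below are illustrative, not from the lecture).

```python
import numpy as np

def threshold_segment(img, t):
    """Return a binary mask: True where a pixel is brighter than threshold t."""
    return img > t

# Synthetic 4x4 greyscale image: a bright 2x2 "object" on a dark background.
img = np.array([[10,  12,  11, 13],
                [12, 200, 210, 11],
                [10, 205, 198, 12],
                [13,  11,  12, 10]], dtype=np.uint8)

mask = threshold_segment(img, 128)
print(mask.sum())  # 4 pixels belong to the segmented object
```

A fixed threshold like this is the simplest variant; in practice the cutoff is often chosen automatically (e.g. from the image histogram), which is why no single technique is perfect.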

Edge Detection
Motivation for edge detection:
 Produce a line drawing of a scene from an image of that scene
 Important features can be extracted from the edges of an image (e.g. corners, lines, curves)
 These features are used by higher-level computer vision algorithms (e.g. recognition)

Edge Detection (continued)
Edge models, classified by their intensity profiles:
a) Step edge b) Ramp edge c) Roof edge

Edge Detection (continued)
There are four steps in edge detection:
1) Smoothing: remove as much noise as possible
2) Enhancement: apply a filter to improve the quality of the edges in the image (e.g. sharpening, contrast)
3) Detection: determine which edge pixels should be discarded as noise and which retained
4) Localization: identify the exact location of each edge
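The four steps can be sketched end to end with very simple stand-ins for each stage (a 3x3 box blur for smoothing, a gradient magnitude for enhancement/detection, a threshold for localization; the frame and threshold are illustrative, not the lecture's actual filters).

```python
import numpy as np

def box_blur(img):
    """Step 1, smoothing: average each interior pixel with its 3x3 neighbourhood."""
    out = img.astype(float).copy()
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r-1:r+2, c-1:c+2].mean()
    return out

def gradient_magnitude(img):
    """Steps 2-3, enhancement and detection: gradient strength at each pixel."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def localize(img, t):
    """Step 4, localization: keep only pixels whose gradient exceeds t."""
    return gradient_magnitude(box_blur(img)) > t

# Vertical step edge between a dark left half and a bright right half.
frame = np.zeros((5, 6))
frame[:, 3:] = 100.0
edges = localize(frame, 10.0)
# Pixels near the step fire; pixels deep inside the flat regions do not.
```

Real pipelines use a Gaussian for smoothing and Sobel/Prewitt kernels for the gradient, but the stage structure is the same.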

Detection Methods - Prewitt vs. Sobel
 Both are edge-detection filters
 The only difference between them is one kernel coefficient
 Sobel has better noise suppression
 Both fail when exposed to high levels of noise (the Laplacian operator offers a better solution)
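The coefficient difference the slide mentions is visible directly in the two horizontal-gradient kernels: Sobel weights the centre row double. A small sketch (the one-row step-edge image is illustrative):

```python
import numpy as np

prewitt_x = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
sobel_x   = np.array([[-1, 0, 1],
                      [-2, 0, 2],   # the only difference: centre row doubled
                      [-1, 0, 1]])

def convolve_at(img, kernel, r, c):
    """Correlate a 3x3 kernel with the 3x3 patch centred at (r, c)."""
    patch = img[r-1:r+2, c-1:c+2]
    return int((patch * kernel).sum())

# Vertical step edge: dark left, bright right.
img = np.array([[0, 0, 255, 255]] * 3)

print(convolve_at(img, prewitt_x, 1, 1))  # 765
print(convolve_at(img, sobel_x, 1, 1))    # 1020 - same edge, stronger response
```

The extra centre weight is what gives Sobel its slightly better noise suppression: it behaves like the Prewitt kernel combined with a mild vertical smoothing.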

Example

Example - Prewitt vs. Sobel

Anisotropic Diffusion
 Used to remove noise from images without removing critical parts of the image
 Noise enters images through outside signals such as radio waves and light exposure
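A common formulation of this idea is Perona-Malik diffusion: intensity flows between neighbouring pixels, but a conduction coefficient shuts the flow off across strong edges. Below is a minimal single-iteration sketch (the parameter values kappa and lam and the test frame are illustrative assumptions, not the lecture's):

```python
import numpy as np

def diffuse_step(img, kappa=30.0, lam=0.2):
    """One explicit Perona-Malik iteration: smooths flat regions while
    leaving strong edges (large neighbour differences) almost untouched."""
    img = img.astype(float)
    # Differences toward the four neighbours (borders padded with zeros).
    n = np.zeros_like(img); n[1:, :]  = img[:-1, :] - img[1:, :]
    s = np.zeros_like(img); s[:-1, :] = img[1:, :]  - img[:-1, :]
    e = np.zeros_like(img); e[:, :-1] = img[:, 1:]  - img[:, :-1]
    w = np.zeros_like(img); w[:, 1:]  = img[:, :-1] - img[:, 1:]
    # Conduction coefficient: near 1 in flat areas, near 0 across edges.
    g = lambda d: np.exp(-(d / kappa) ** 2)
    return img + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)

# Dark/bright halves with one slightly perturbed pixel in the dark half.
frame = np.array([[14., 10., 200., 200.],
                  [10., 10., 200., 200.],
                  [10., 10., 200., 200.]])
out = diffuse_step(frame)
# The 14 relaxes toward its neighbours, but the 10|200 edge stays sharp.
```

In practice the step is repeated for many iterations; this is what distinguishes anisotropic diffusion from an ordinary blur, which would smear the edge along with the noise.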

Diffused vs. Original Image

Pixelation and Antialiasing
 Pixelation occurs when a section of a high-resolution image is enlarged and the single-colored square elements become visible
 Antialiasing is used to solve this problem

Antialiasing
 Aliasing is an effect that causes distinct signals to become indistinguishable (the original cannot be reconstructed)
 Antialiasing is the minimization of that distortion: a small-scale reconstruction of part of the image

Pixel Interpolation
 A form of antialiasing: pixel interpolation occurs when zooming in on a specific piece of an image
 Pixel interpolation smoothly blends the color of one pixel into the next
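The blending described above can be sketched with bilinear interpolation, one common interpolation scheme: a sample taken between four pixels is a distance-weighted mix of their values (the 2x2 image is illustrative).

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at fractional coordinates (y, x) by blending the
    four surrounding pixels according to their distance."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    fy, fx = y - y0, x - x0
    top    = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bottom * fy

img = np.array([[  0., 100.],
                [100., 200.]])
print(bilinear(img, 0.5, 0.5))  # 100.0: the midpoint blends all four pixels
```

Sampling exactly on a pixel centre returns that pixel unchanged, so zooming with this scheme produces smooth gradients instead of visible squares.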

Pixelization
 Occasionally, pixelation can be beneficial; the act of intentional pixelation is called pixelization
 Pixelization essentially reverses interpolation, enlarging the pixels to create a jagged image
 Useful for censoring obscenities and preserving anonymity
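A minimal sketch of intentional pixelization: average each block of pixels down to one value, then blow each value back up into a solid square (nearest-neighbour repetition), producing the jagged look. The block size and image are illustrative.

```python
import numpy as np

def pixelize(img, block):
    """Replace each block x block tile with a solid square of its mean value.
    Assumes the image dimensions are divisible by block."""
    h, w = img.shape
    # Average each tile...
    small = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    # ...then enlarge every averaged value back into a solid square.
    return np.kron(small, np.ones((block, block)))

img = np.arange(16, dtype=float).reshape(4, 4)
out = pixelize(img, 2)
print(out[0, 0], out[0, 1])  # both share the top-left block's average, 2.5
```

This is the opposite of interpolation: instead of blending values smoothly between pixels, detail is deliberately discarded, which is what makes the result useful for censoring.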

Computer Vision
 Vision systems and image processing in manufacturing facilities
▫ Increase the speed of production
▫ Automate the inspection of products

Methods of Vision Systems
 Image Acquisition
 Image Pre-processing
 Feature Extraction
 Detection/Segmentation
 High-Level Processing
 Decision Making


Object Identification
 Differentiate between classes of objects
 Sort objects

Fault Detection
 Image data is scanned for pre-determined conditions

Object Tracking
 Determine the relative position of objects

Optical Flow
 Assumes a stationary camera system
 Estimates the motion and acceleration of objects
 Motion is displayed as vectors
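The motion vectors come from the brightness-constancy equation Ix*u + Iy*v + It = 0. As a simplified sketch, the equation can be solved by least squares over a whole frame for a single global vector (real optical flow estimates a vector per pixel or per window; the Gaussian-blob frames below are synthetic, illustrative data):

```python
import numpy as np

def global_flow(f0, f1):
    """Estimate one global motion vector (u, v) between two frames by
    least squares on the brightness-constancy equation Ix*u + Iy*v = -It."""
    Iy, Ix = np.gradient(f0)      # spatial intensity gradients
    It = f1 - f0                  # temporal intensity change
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    uv, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return uv

# A Gaussian blob that moves one pixel to the right between the two frames.
y, x = np.mgrid[0:15, 0:15]
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 7) ** 2) / 18.0)
u, v = global_flow(blob(7), blob(8))
# u comes out close to 1 (rightward motion), v close to 0.
```

The stationary-camera assumption matters: if the camera itself moves, every pixel acquires flow, which is the egomotion problem on the next slide.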

Egomotion
 Motion estimation of the camera system itself
 Determines the camera's position relative to its surroundings
 Creates a 3D computer model of the observed space

Digital Image Processing Tasks
Digital image processing is the only practical technology for:
 Statistical Classification
 Feature Extraction
 Graphical Projection
 Pattern Recognition

Statistical Classification
 Identifying which sub-category a new observation belongs to, based on sets of data whose category membership is known
 Explanatory variables: individual observations are analyzed into quantifiable properties (categorical, ordinal, integer-valued, or real-valued)
 Example: digital camera color array filtering
 A Bayer filter is placed over the camera's photosensor array
 Interpolation: the processor guesses the color of each pixel based on nearby information
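The per-pixel color guessing can be sketched as neighbour averaging: at a site where the Bayer mosaic recorded no green sample, estimate green from the four sampled neighbours. This is a toy single-channel illustration of the interpolation idea, not a full demosaicing algorithm, and all names are made up.

```python
import numpy as np

def interpolate_green(green, sampled):
    """Fill interior pixels where sampled is False with the mean of the
    up/down/left/right neighbours (borders are left as-is in this toy)."""
    out = green.astype(float).copy()
    h, w = green.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if not sampled[r, c]:
                out[r, c] = (green[r-1, c] + green[r+1, c] +
                             green[r, c-1] + green[r, c+1]) / 4.0
    return out

# Checkerboard of real green samples; 0 marks missing measurements.
green = np.array([[0.,  8.,  0.],
                  [6.,  0., 14.],
                  [0., 12.,  0.]])
sampled = green > 0
filled = interpolate_green(green, sampled)
print(filled[1, 1])  # (8 + 12 + 6 + 14) / 4 = 10.0
```

Real demosaicing uses more careful, edge-aware interpolation, but the principle is the same: every missing color value is a statistical guess from nearby measurements.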

Feature Extraction
 Simplifies the amount of resources required to describe a large set of data accurately
 Reduces processing by eliminating redundant and unnecessary data
 A good feature set extracts the relevant information from the input data for the desired task
Uses in image processing:
 Edge detection: points where image brightness changes sharply
 Curvature: edge direction
 Cross-correlation: matching image regions across a time lag
 Motion detection: change of position relative to surroundings
 Thresholding: turns greyscale images into binary images
 Hough transform: detects lines, e.g. to estimate text

Graphical Projection
 The process of projecting a three-dimensional object onto a planar surface through mathematical computation
 Converts complex 3D objects into a 2D equivalent
 Computations involve Fourier transforms and Hermite functions
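The simplest instance of 3D-to-2D projection is the pinhole perspective model, where a point is mapped onto the image plane by dividing by its depth. A minimal sketch (the focal length f and the example points are illustrative; this is one common projection, not necessarily the lecture's):

```python
import numpy as np

def project(point3d, f=1.0):
    """Project (X, Y, Z) onto an image plane at distance f: x = f*X/Z, y = f*Y/Z."""
    X, Y, Z = point3d
    return np.array([f * X / Z, f * Y / Z])

# Two cube corners at different depths land at different 2D positions.
print(project((1.0, 1.0, 2.0)))  # [0.5 0.5]
print(project((1.0, 1.0, 4.0)))  # [0.25 0.25] - farther, so nearer the centre
```

The divide-by-depth step is what makes distant objects appear smaller; orthographic projection, by contrast, simply drops the Z coordinate.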

Pattern Recognition
 Which features distinguish objects from one another?
 Algorithms classify objects or clusters in an image
 Applications:
▫ Face recognition
▫ Fingerprint identification
▫ Document image analysis
▫ 3D object recognition
▫ Robot navigation

Questions?