Eye & Mouth Detection By Doğaç Başaran & Erdem Yörük

Introduction Face image processing is now widely used for personal identification, facial expression detection, drowsiness detection, and so on. The important face parts, such as the eyebrows, eyes, nose, and mouth, are used to express facial features. Several methods are used to detect face parts: active contour models, deformable template models, local smoothness of image intensity, color information of the image, and knowledge about the shape and relative location of face parts.

Our method to detect the eyes and mouth When the application of face image recognition is limited to a specific purpose, in which a single face is always clearly captured, the algorithm for extracting face parts becomes extremely simple by exploiting SYMMETRY. We focus on bilateral symmetries between and within face parts and compute a symmetry measure over the face using gradient directions. We determine the width and height of the face-part extraction window from the horizontal and vertical projection histograms of the symmetry measure. Then we use template matching to detect the eyes and mouth within these reduced search areas.

Gradient directions We define eight gradient directions corresponding to horizontal, vertical, and oblique edges, as shown in the figure. The gradient direction at a given pixel is the direction in which the maximum increase of the image function occurs when traveling to one of its eight neighbours.
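As a rough illustration of this quantization, the sketch below (Python/NumPy, not the authors' code) assigns each pixel the index of the 8-neighbour offset towards which the intensity increases the most; the particular direction ordering and tie-breaking are assumptions.

```python
import numpy as np

def gradient_directions(image):
    """Quantize the gradient at each pixel into one of 8 directions.

    A pixel's direction is the index (0..7) of the 8-neighbour offset
    towards which the image intensity increases the most.
    """
    img = image.astype(np.float64)
    h, w = img.shape
    # neighbour offsets (row, col): E, NE, N, NW, W, SW, S, SE
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    padded = np.pad(img, 1, mode='edge')
    diffs = np.empty((8, h, w))
    for k, (di, dj) in enumerate(offsets):
        # intensity difference towards neighbour k at every pixel
        diffs[k] = padded[1 + di:1 + di + h, 1 + dj:1 + dj + w] - img
    return np.argmax(diffs, axis=0)
```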

Symmetry detection We examine the gradient directions at equal horizontal distances on either side of the point of interest. If the two gradient directions at the same distance from the point of interest are bilaterally symmetric to each other, the symmetry measure at that point is incremented.

Parameters of the algorithm
bm(i,j) += 1   if g(x(i,j-d)) is bilaterally symmetric with g(x(i,j+d))
SM(j) = Σ_{i=1..m} bm(i,j)
SM(i) = Σ_{j=1..n} bm(i,j)
where
d = horizontal distance from the point of interest
g(x(i,j)) = gradient function at pixel x(i,j), giving the gradient direction at that pixel
bm(i,j) = symmetry matrix, of the same size as the image, giving the bilateral symmetry measure at pixel x(i,j)
SM(j) = projection of the symmetry matrix bm onto the horizontal j-axis, i.e. the symmetry accumulated over each column
SM(i) = projection of the symmetry matrix bm onto the vertical i-axis, i.e. the symmetry accumulated over each row
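A minimal NumPy sketch of these definitions follows. The 8-direction encoding from the previous sketch, the mirroring rule mirror(k) = (4 - k) mod 8, and the search radius max_d are assumptions; the slides do not state these details.

```python
import numpy as np

def symmetry_measure(directions, max_d=30):
    """Accumulate the bilateral symmetry measure bm(i, j).

    bm(i, j) counts, over horizontal distances d = 1..max_d, how often the
    gradient direction at (i, j - d) is the mirror (about the vertical axis
    through column j) of the gradient direction at (i, j + d).
    """
    h, w = directions.shape
    mirror = (4 - directions) % 8          # reflect each direction about a vertical axis
    bm = np.zeros((h, w), dtype=np.int32)
    for d in range(1, max_d + 1):
        if 2 * d >= w:
            break
        left = directions[:, :w - 2 * d]   # g(x(i, j - d)) for all valid centres j
        right = mirror[:, 2 * d:]          # mirrored g(x(i, j + d))
        bm[:, d:w - d] += (left == right)
    return bm

def projections(bm):
    """SM(j) and SM(i): column-wise and row-wise sums of bm."""
    sm_j = bm.sum(axis=0)                  # symmetry accumulated in each column
    sm_i = bm.sum(axis=1)                  # symmetry accumulated in each row
    return sm_j, sm_i
```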

Original image

X projection

Y projection

Face parts extraction window The next task is to create a window using the maximum points of these two histograms, whose intersection point roughly marks the center of the face. The peak of the latter histogram, together with the width of the lobe in which it occurs, gives the height and vertical position of this extraction window. The width of the window is then determined from the positions of the secondary maxima in the first histogram, located symmetrically around the midline, which correspond to the columns of the image containing the eyes and eyebrows. This window is the reduced search area in which we apply the template matching method.
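One possible way to derive such a window from the two projections is sketched below; the lobe threshold and the margin used to skip the central peak are assumed heuristics, not values given in the presentation.

```python
import numpy as np

def extraction_window(sm_j, sm_i, lobe_frac=0.5, margin=5):
    """Derive a rough face-parts search window from the two projections.

    The global maxima of sm_j and sm_i estimate the face midline and centre
    row.  The window's vertical extent is the lobe around the sm_i peak
    (where sm_i stays above lobe_frac of the peak value); its horizontal
    extent runs between the strongest secondary sm_j peaks on either side
    of the midline.
    """
    j0 = int(np.argmax(sm_j))              # face midline (column of main peak)
    i0 = int(np.argmax(sm_i))              # face centre (row of main peak)

    # vertical extent: walk outwards from the row peak while above threshold
    thresh = lobe_frac * sm_i[i0]
    top, bottom = i0, i0
    while top > 0 and sm_i[top - 1] >= thresh:
        top -= 1
    while bottom < len(sm_i) - 1 and sm_i[bottom + 1] >= thresh:
        bottom += 1

    # horizontal extent: strongest secondary peaks left/right of the midline
    left_end = max(j0 - margin, 1)
    right_start = min(j0 + margin, len(sm_j) - 1)
    left_col = int(np.argmax(sm_j[:left_end]))
    right_col = right_start + int(np.argmax(sm_j[right_start:]))
    return top, bottom, left_col, right_col
```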

Template matching The measure of similarity is the normalized cross-correlation between the template and the original image, where g(i,j) is the original image and t(i,j) is the template.
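For reference, a straightforward (unoptimized) implementation of zero-mean normalized cross-correlation could look like the following; the slide only names the measure, so the exact normalization the authors used is assumed.

```python
import numpy as np

def normalized_cross_correlation(g, t):
    """Zero-mean normalized cross-correlation of template t against image g.

    Returns a score map whose (i, j) entry is the NCC of t placed with its
    top-left corner at (i, j); scores lie in [-1, 1].
    """
    g = g.astype(np.float64)
    t = t.astype(np.float64)
    th, tw = t.shape
    gh, gw = g.shape
    t_zero = t - t.mean()
    t_norm = np.sqrt((t_zero ** 2).sum())
    score = np.zeros((gh - th + 1, gw - tw + 1))
    for i in range(score.shape[0]):
        for j in range(score.shape[1]):
            patch = g[i:i + th, j:j + tw]
            p_zero = patch - patch.mean()
            denom = np.sqrt((p_zero ** 2).sum()) * t_norm
            score[i, j] = (p_zero * t_zero).sum() / denom if denom > 0 else 0.0
    return score
```

Running this over the reduced search window, np.unravel_index(np.argmax(score), score.shape) then gives the best-matching top-left position for each eye template.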

Left eye template

Right eye template

Mouth detection We assume that the distance between the right and left eyes is approximately equal to the distance between the mouth and the midpoint of the eyes, within an error of 10 pixels. Since the y projection has a peak around the mouth, we use this information to find the mouth peak in the y projection. After finding the eyes and the mouth peak in the y projection, we draw an imaginary line perpendicular to the line joining the eyes, through the midpoint of the eyes, intersect this line with the mouth peak row, and place a cross marker there.
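A sketch of this geometric step, assuming (row, col) eye centres and the row projection sm_i from earlier, might look as follows; the function and parameter names are illustrative, and the eyes are assumed not to be exactly vertically aligned.

```python
import numpy as np

def locate_mouth(left_eye, right_eye, sm_i, tolerance=10):
    """Estimate the mouth position from the detected eye centres.

    left_eye and right_eye are (row, col) eye centres; sm_i is the row
    projection of the symmetry matrix.  The mouth is expected roughly one
    inter-eye distance below the eye midpoint (within `tolerance` rows), on
    the line through the midpoint perpendicular to the eye-to-eye line.
    """
    le = np.asarray(left_eye, dtype=np.float64)
    re = np.asarray(right_eye, dtype=np.float64)
    mid = (le + re) / 2.0
    eye_vec = re - le
    eye_dist = float(np.linalg.norm(eye_vec))

    # unit vector perpendicular to the eye-to-eye line, pointing down the face
    perp = np.array([eye_vec[1], -eye_vec[0]]) / eye_dist
    if perp[0] < 0:
        perp = -perp

    # pick the strongest sm_i peak in a band around the expected mouth row
    expected_row = mid[0] + eye_dist * perp[0]
    lo = max(int(expected_row - tolerance), 0)
    hi = min(int(expected_row + tolerance) + 1, len(sm_i))
    mouth_row = lo + int(np.argmax(sm_i[lo:hi]))

    # intersect the perpendicular through the eye midpoint with that row
    s = (mouth_row - mid[0]) / perp[0]
    mouth_col = mid[1] + s * perp[1]
    return mouth_row, int(round(mouth_col))
```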

Output data