Introduction to Real-Time Image Processing


Introduction to Real-Time Image Processing – Parya Jandaghi, Prof. Arabnia, Spring 2016

Outline
- Key Parameters in Image Processing
- Differences between Real-Time and Non-Real-Time Image Processing
- Examples of Real-Time Image Processing: Face Recognition, Emotion Recognition, QR Code Detection, Post-Processing in Video Games, Speed Detection

Image Processing: Input -> Processor -> Output
- Input: one-time or continuous
- Processing: extract data, modify, or add to the image
- Output: a single image, a sequence of images, or an array of data

Real Time vs. Not Real Time
- Real time: output is produced simultaneously with the (continuous) input; the output has no value when it is delivered too late.
- Not real time: input is not continuous; processing time is not the priority.

Real-Time Image Processing – Multi-Resolution Encoding

Face Recognition – find a person in videos

Emotion Recognition

QR Code Detection

QR Code Decoding

QR Code Decoding

QR Code Detection

Post-Processing in Video Games: Bloom Effect, Anti-Aliasing Effect

Bloom Effect in the real world

Bloom Effect in video games: Frame Buffer -> Binary Version -> Applied Gaussian Filter (a sketch of this pipeline follows below)
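
A minimal sketch of that pipeline, assuming a bright-pass threshold stands in for the slide's "binary version"; the function name and the parameter values (threshold, sigma, strength) are illustrative and not taken from the slides.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(frame, threshold=0.8, sigma=5.0, strength=0.6):
    """frame: float image in [0, 1], shape (H, W) or (H, W, 3)."""
    # Bright-pass: keep only pixels brighter than the threshold (the "binary version").
    bright = np.where(frame > threshold, frame, 0.0)
    # Blur the bright regions so they bleed into their surroundings.
    axis_sigma = (sigma, sigma, 0) if frame.ndim == 3 else sigma
    glow = gaussian_filter(bright, sigma=axis_sigma)
    # Add the glow back onto the original frame and clamp to [0, 1].
    return np.clip(frame + strength * glow, 0.0, 1.0)
```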

Bloom Effect in video games

Anti-Aliasing Effect: What is aliasing? Solution?

Anti-Aliasing Effect

Anti-Aliasing Effect: Solution?

Anti-Aliasing Effect
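
The slides leave the solution to the figures; as an assumption, here is a minimal sketch of one common approach, supersampling: render at k times the target resolution and average each k x k block of samples into one output pixel. The function name is illustrative.

```python
import numpy as np

def downsample_ssaa(hi_res, k=2):
    """hi_res: float image rendered at k times the target resolution."""
    h, w = hi_res.shape[0] // k, hi_res.shape[1] // k
    trimmed = hi_res[:h * k, :w * k]
    if hi_res.ndim == 3:
        blocks = trimmed.reshape(h, k, w, k, hi_res.shape[2])
    else:
        blocks = trimmed.reshape(h, k, w, k)
    # Average each k x k block of samples into one output pixel.
    return blocks.mean(axis=(1, 3))
```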

Speed Detection Camera System using Image Processing
Usage: measuring the speeds of vehicles on highways, in sports, competitions, etc.
Stages:
- Object Detection Phase
- Object Tracking Phase (Segmentation, Labelling, Center Extraction)
- Speed Calculation Phase

Speed Controller
The video recorder captures 30 frames per second, so the 30 frames from frame T to frame T+30 span 1 second. Over those frames the vehicle covers a distance of about 18 meters.
v = dx/dt -> Speed = Distance / Time = 18 m / 1 s = 18 m/s ≈ 40.26 mph
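
A minimal sketch of that calculation; the function name and the m/s-to-mph conversion constant are the only additions to what the slide states.

```python
def speed_mph(distance_m, frames, fps=30.0):
    seconds = frames / fps            # time spanned by the frames
    mps = distance_m / seconds        # metres per second
    return mps * 3600.0 / 1609.344    # convert m/s to miles per hour

print(speed_mph(18.0, 30))  # ~40.26 mph, matching the slide's figures
```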

Extracting Motion (Frame n-1, Frame n, Difference)
I(n, x, y) = color of pixel (x, y) in the nth frame
D(n, n-1, x, y) = 0 if |I(n, x, y) - I(n-1, x, y)| < epsilon (~0), 1 otherwise

Extracting Motion (Frame n, Frame n+1, Difference)
D(n+1, n, x, y) = 0 if |I(n+1, x, y) - I(n, x, y)| < epsilon (~0), 1 otherwise

Extracting Motion (Common of Difference n-1&n and Difference n&n+1)
Common(n-1, n, n+1, x, y) = 1 if both D(n, n-1, x, y) and D(n+1, n, x, y) are 1 (both pixels are white), 0 otherwise.
Equivalently: Common = D(n, n-1, x, y) * D(n+1, n, x, y). A sketch of this three-frame differencing follows below.
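
A minimal sketch of the differencing above, assuming 8-bit grayscale frames; the array names and the epsilon value are illustrative.

```python
import numpy as np

def diff_mask(a, b, epsilon=10):
    """D(a, b): 1 where the pixel changed between two grayscale frames, else 0."""
    return (np.abs(a.astype(np.int16) - b.astype(np.int16)) >= epsilon).astype(np.uint8)

def common_mask(prev_f, cur_f, next_f, epsilon=10):
    """Common: 1 only where both consecutive differences mark motion."""
    d_prev = diff_mask(cur_f, prev_f, epsilon)   # D(n, n-1)
    d_next = diff_mask(next_f, cur_f, epsilon)   # D(n+1, n)
    return d_prev * d_next                       # product acts as a logical AND
```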

Object Tracking – Object Segmentation: scan the foreground image horizontally, then scan it vertically (first iteration).

Object Tracking – Object Segmentation: scan the foreground image horizontally, then scan it vertically (second iteration). A sketch of one scan pass follows below.
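
A minimal sketch of one scan pass, under the assumption that scanning means projecting the binary foreground onto each axis and keeping the span of non-empty rows and columns as a bounding box; the function name is illustrative.

```python
import numpy as np

def bounding_box(mask):
    """mask: 2-D binary foreground image (1 = moving pixel)."""
    rows = np.where(mask.any(axis=1))[0]   # vertical scan: rows containing foreground
    cols = np.where(mask.any(axis=0))[0]   # horizontal scan: columns containing foreground
    if rows.size == 0 or cols.size == 0:
        return None                        # nothing moving in this frame
    return rows[0], rows[-1], cols[0], cols[-1]   # top, bottom, left, right
```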

Object Labelling
To keep track of the moving objects, labelling is essential: each object must be represented by a unique label, and it must preserve that label without any change from the moment it enters the scene (at frame F0) until it leaves the scene (at frame Fn).
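
A minimal per-frame labelling sketch using connected components; keeping a label stable from F0 to Fn would additionally require matching labels across frames (for example by nearest centroid), which is an assumption noted here and not a method taken from the slides.

```python
from scipy.ndimage import label

def label_objects(mask):
    """mask: binary foreground image; returns (labelled image, number of objects)."""
    labelled, count = label(mask)   # blob i is marked with label i, background stays 0
    return labelled, count
```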

Center Extraction
The object is now ready for the tracking phase. However, for efficiency there is no need to track the whole object pixel by pixel; a single descriptive point representing the object (its center) is enough.
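
A minimal sketch of extracting that descriptive point as the centroid of each labelled object; using scipy's center_of_mass here is an assumption for brevity, not the specific method named on the slides.

```python
import numpy as np
from scipy.ndimage import label, center_of_mass

def object_centers(mask):
    """mask: binary foreground image; returns one (row, col) centroid per object."""
    labelled, count = label(mask)
    if count == 0:
        return []
    return center_of_mass(mask, labelled, np.arange(1, count + 1))
```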

Speed Calculation

Challenges and Advantages
- Dealing with noise
- Object dismissal
- Advantages compared to Doppler devices

Thank you 