Introduction to Real-Time Image Processing
Parya Jandaghi
Prof. Arabnia
Spring 2016
Outline
- Key Parameters in Image Processing
- Differences between Real-Time and Non-Real-Time Image Processing
- Examples of Real-Time Image Processing
  - Face Recognition
  - Emotion Recognition
  - QR Code Detection
  - Post-Processing in Video Games
  - Speed Detection
Image Processing: Input -> Processor -> Output
- Input: one-time or continuous
- Processor: extract data, modify, add
- Output: one image, a sequence of images, or an array of data
Real-Time vs. Non-Real-Time
- Real-Time: output is produced simultaneously with the (continuous) input; the output has no value when delivered too late
- Non-Real-Time: input is not continuous; processing time is not the priority
Real-Time Image Processing – Multi-Resolution Encoding
Face Recognition
- Find a person in videos
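A minimal sketch of the detection step for this example, assuming OpenCV, its bundled Haar cascade, and a webcam or video file as input; recognizing a specific person would need an extra matching step (e.g. comparing face descriptors), which is not shown here.

```python
# Minimal sketch (assumptions: OpenCV installed, camera index 0 available).
import cv2

# Haar cascade bundled with opencv-python
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)              # webcam; a video file path also works
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:         # draw a box around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```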
Emotion Recognition
QR Code Detection
QR Code Decoding
QR Code Decoding
QR Code Detection
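A minimal sketch of detecting and decoding a QR code in one frame, assuming OpenCV's built-in QRCodeDetector (available in newer releases; libraries such as ZBar are a common alternative). The input file name is hypothetical.

```python
import cv2

img = cv2.imread("frame.png")               # hypothetical input frame
detector = cv2.QRCodeDetector()
text, points, _ = detector.detectAndDecode(img)
if points is not None and text:
    print("Decoded payload:", text)         # the string encoded in the QR code
else:
    print("No QR code found in this frame")
```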
Post-Processing in Video Games
- Bloom Effect
- Anti-Aliasing Effect
Bloom Effect in the Real World
Bloom Effect in Video Games
- Frame buffer
- Binary version
- Applied Gaussian filter
Bloom Effect in Video Games
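A minimal sketch of the bloom pipeline from the previous slide, assuming an 8-bit frame buffer loaded from disk: threshold the bright pixels (the binary version), blur them with a Gaussian filter, and add the resulting glow back onto the original frame. The threshold and kernel size are illustrative values.

```python
import cv2

frame = cv2.imread("framebuffer.png")            # hypothetical frame buffer

# Binary version: keep only pixels brighter than a threshold
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
_, bright_mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
bright = cv2.bitwise_and(frame, frame, mask=bright_mask)

# Gaussian filter spreads the bright regions into a soft glow
glow = cv2.GaussianBlur(bright, (31, 31), 0)

# Add the glow back onto the original frame
bloom = cv2.addWeighted(frame, 1.0, glow, 0.8, 0)
cv2.imwrite("bloom.png", bloom)
```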
Anti-Aliasing Effect
- What is aliasing?
- Solution?
Anti-Aliasing Effect
Anti-Aliasing Effect
- Solution?
Anti-Aliasing Effect
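One common solution is supersampling: rasterize at several times the target resolution and average each block of samples down to one output pixel. The sketch below assumes a simple filled-circle scene and a 4x sample count purely for illustration; it is not necessarily the exact technique shown on the slides.

```python
import numpy as np

N = 4                          # samples per pixel along each axis
W, H = 128, 128                # output resolution

# Rasterize a filled circle at the supersampled resolution (hard, aliased edges)
ys, xs = np.mgrid[0:H * N, 0:W * N]
cx, cy, r = (W * N) / 2, (H * N) / 2, (W * N) / 3
hi_res = ((xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2).astype(np.float32)

# Downsample: average each NxN block -> smooth, anti-aliased edges
aa = hi_res.reshape(H, N, W, N).mean(axis=(1, 3))
print(aa.shape)                # (128, 128), edge pixels take fractional values
```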
Speed Detection Camera System Using Image Processing
- Usage: speeds of vehicles on highways, sports, competitions, etc.
- Stages:
  - Object Detection Phase
  - Object Tracking Phase (Segmentation, Labelling, Center Extraction)
  - Speed Calculation Phase
Speed Controller
- Video recorder at 30 frames per second -> time spanned by 30 frames = 1 second
- Distance travelled between frame T and frame T+30 ≈ 18 meters
- v = dx/dt -> Speed = Distance / Time = 18 meters / 1 second = 18 m/s ≈ 40.26 mph
Extracting Motion (Frame n-1, Frame n -> Difference)
- I(n, x, y) = color of pixel (x, y) in the nth frame
- D(n, n-1, x, y) = 0 if |I(n, x, y) - I(n-1, x, y)| < epsilon (~0), 1 otherwise
Extracting Motion (Frame n, Frame n+1 -> Difference)
- I(n, x, y) = color of pixel (x, y) in the nth frame
- D(n+1, n, x, y) = 0 if |I(n+1, x, y) - I(n, x, y)| < epsilon (~0), 1 otherwise
Extracting Motion (Difference n-1&n, Difference n&n+1 -> Common)
- Common(n-1, n, n+1, x, y) = 1 if D(n, n-1, x, y) = 1 and D(n+1, n, x, y) = 1, 0 otherwise (a pixel is common only when it is white in both difference images)
- Equivalently: Common = D(n, n-1, x, y) * D(n+1, n, x, y)
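A minimal sketch of the double-difference mask described above, assuming three consecutive grayscale frames as NumPy arrays; the epsilon threshold is an illustrative value.

```python
import cv2
import numpy as np

def difference(curr, prev, epsilon=15):
    """D = 1 where the pixel changed between the two frames, 0 elsewhere."""
    return (cv2.absdiff(curr, prev) >= epsilon).astype(np.uint8)

def common_mask(prev, curr, nxt, epsilon=15):
    """Common = D(n, n-1) * D(n+1, n): changed in both consecutive differences."""
    return difference(curr, prev, epsilon) * difference(nxt, curr, epsilon)

# Usage with three consecutive grayscale frames from a video:
# cap = cv2.VideoCapture("traffic.mp4")
# frames = [cv2.cvtColor(cap.read()[1], cv2.COLOR_BGR2GRAY) for _ in range(3)]
# mask = common_mask(*frames)      # 1 = moving pixel, 0 = background
```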
Object Tracking: Object Segmentation
- Scan the foreground image horizontally
- Scan the foreground image vertically
- First iteration
Object Tracking: Object Segmentation
- Scan the foreground image horizontally
- Scan the foreground image vertically
- Second iteration
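A minimal sketch of the horizontal/vertical scanning step, assuming a binary foreground mask from the motion-extraction stage: project the mask onto rows and columns and take the extent of the non-empty ones as a bounding box (a single object is assumed; real systems repeat this per region).

```python
import numpy as np

def bounding_box(mask):
    rows = np.any(mask, axis=1)    # horizontal scan: rows containing foreground
    cols = np.any(mask, axis=0)    # vertical scan: columns containing foreground
    if not rows.any():
        return None                # no foreground at all
    y0, y1 = np.where(rows)[0][[0, -1]]
    x0, x1 = np.where(cols)[0][[0, -1]]
    return x0, y0, x1, y1

mask = np.zeros((10, 10), dtype=np.uint8)
mask[3:6, 4:8] = 1                 # one small blob
print(bounding_box(mask))          # (4, 3, 7, 5)
```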
Object Labelling
Labelling is essential for keeping track of the moving objects: each object must be assigned a unique label, and it must keep that same label, unchanged, from the moment it enters the scene (at frame F0) until it leaves the scene (at frame Fn).
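A minimal sketch of labelling within a single frame, assuming OpenCV's connected-components routine on the binary foreground mask; keeping a label stable from F0 to Fn additionally requires matching blobs between frames (e.g. by nearest centroid), which is not shown here.

```python
import cv2
import numpy as np

mask = np.zeros((10, 10), dtype=np.uint8)
mask[1:4, 1:4] = 1                          # object A
mask[6:9, 5:9] = 1                          # object B

# Each blob gets a distinct integer label; label 0 is the background
num_labels, labels = cv2.connectedComponents(mask)
print(num_labels - 1, "objects")            # 2 objects
```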
Center Extraction
The object is now ready for the tracking phase. For efficiency, however, there is no need to track the whole object pixel by pixel; a single descriptive point representing the object is enough.
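A minimal sketch of extracting that descriptive point, assuming the label image from the previous step: the centroid of an object's pixels serves as its representative point.

```python
import numpy as np

def centroid(labels, label):
    """Centroid of all pixels carrying the given object label."""
    ys, xs = np.nonzero(labels == label)
    return xs.mean(), ys.mean()

labels = np.zeros((10, 10), dtype=np.int32)
labels[1:4, 1:4] = 1
print(centroid(labels, 1))                  # (2.0, 2.0)
```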
Speed Calculation
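A minimal sketch of the speed calculation, assuming tracked object centers from the previous phases, the 30 fps frame rate from the earlier example, and a hypothetical pixels-to-meters calibration factor.

```python
import math

FPS = 30.0                       # frames per second of the video recorder
METERS_PER_PIXEL = 0.05          # hypothetical camera calibration factor

def speed_mps(center_a, center_b, frames_apart):
    """Average speed (m/s) of a tracked center between two frames."""
    dx = center_b[0] - center_a[0]
    dy = center_b[1] - center_a[1]
    distance_m = math.hypot(dx, dy) * METERS_PER_PIXEL
    return distance_m / (frames_apart / FPS)

# A center that moved 360 pixels in 30 frames (1 second) -> 18 m/s
v = speed_mps((100, 200), (460, 200), 30)
print(v, "m/s =", v * 2.23694, "mph")       # 18.0 m/s ≈ 40.26 mph
```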
Challenges
- Dealing with noise
- Object dismissal
- Advantages compared to Doppler devices
Thank you