
Quality Assessment for Recognition Tasks (QART) – Recent Results
Mikołaj Leszczuk, Lucjan Janowski, Łukasz Dudek, Sergio Garcia (AGH – University of Science and Technology)
Joel Dumke (ITS – Institute for Telecommunication Sciences)

Presentation Plan
– Reminder about QART
– Quantifying video sequences
– Measuring video quality

REMINDER ABOUT QART

Quality Assessment for Recognition Tasks (QART)
Mission – “To study effects of resolution, compression and network effects on quality of video used for recognition tasks”
Goals
– To perform a series of tests to study the effects and interactions of compression and scene characteristics
– To test existing, or develop new, objective measurements that will predict the results of subjective tests of visual intelligibility

And where do we go from here?
Research topics within the area of quality assessment for recognition tasks:
– Quantifying video sequences
– Measuring video quality

QUANTIFYING VIDEO SEQUENCES

Quantifying Video Sequences (Automatically)
– Target size: 70% accuracy
– Lighting level: 93% accuracy
– Motion level: planned
Generalized Use Class (GUC): choose target size, motion, lighting level, discrimination level and time frame
Reference application for practitioners, quantifying the GUC (pending) – screenshot of user interface

Target Size
– Two classes, large/small, representing the sizes of appearing objects relative to the frame dimensions
– The larger side of an object’s bounding box is compared to the corresponding dimension of the frame; the threshold for the “large” class is 0.4
– Any object that is large in the majority of the frames it appears in is classified as large; otherwise it is small
– The overall result for the scene is the class that applies to the majority of objects
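The size rule above can be sketched as follows. This is a minimal illustration of the described majority-vote logic, not the original QART implementation; the function names and the input format (per-frame bounding boxes as (width, height) pairs) are assumptions.

```python
LARGE_THRESHOLD = 0.4  # relative size above which a box counts as "large"

def box_is_large(box, frame_size):
    """A box is large if its larger side exceeds 0.4 of the matching frame dimension."""
    w, h = box
    fw, fh = frame_size
    # Compare the larger side of the box to the corresponding frame dimension.
    if w >= h:
        return w / fw > LARGE_THRESHOLD
    return h / fh > LARGE_THRESHOLD

def object_class(boxes, frame_size):
    """An object is 'large' if it is large in the majority of frames it appears in."""
    large_frames = sum(box_is_large(b, frame_size) for b in boxes)
    return "large" if large_frames > len(boxes) / 2 else "small"

def scene_class(objects, frame_size):
    """The scene takes the class that applies to the majority of its objects."""
    classes = [object_class(boxes, frame_size) for boxes in objects]
    return "large" if classes.count("large") > len(classes) / 2 else "small"
```

The tie-breaking behaviour (a tie falls back to “small”) is not specified on the slide and is an arbitrary choice here.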

Lighting Level
– Binary classification, high/low; the gray-level threshold is 55 on the [0, 255] scale
– Calculated for every object as the average luminance in every frame, as well as for whole frames
– An object is bright if it is bright in more frames than not
– Final result: the class that the majority of objects represent
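The same majority-vote pattern applies to lighting; a minimal sketch, assuming each object is given as a sequence of per-frame grayscale crops (the function names are hypothetical, not from the QART code):

```python
import numpy as np

LIGHT_THRESHOLD = 55  # gray level on the [0, 255] scale

def frame_is_bright(gray):
    """A frame (or object crop) counts as 'high' lighting if its mean luminance exceeds 55."""
    return np.mean(gray) > LIGHT_THRESHOLD

def object_lighting(crops):
    """An object is bright if it is bright in more frames than not."""
    bright = sum(frame_is_bright(c) for c in crops)
    return "high" if bright > len(crops) / 2 else "low"

def scene_lighting(objects):
    """The scene takes the lighting class of the majority of its objects."""
    classes = [object_lighting(crops) for crops in objects]
    return "high" if classes.count("high") > len(classes) / 2 else "low"
```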

Motion Level
– Also two values, high and low
– Not yet properly defined, and therefore not implemented; results are reported as undefined
– When necessary, a dummy measure is used: the average magnitude of gradients in the temporal direction
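The dummy measure mentioned above (average magnitude of the temporal gradient) amounts to the mean absolute pixel difference between consecutive frames; a minimal NumPy sketch, assuming frames arrive as grayscale arrays:

```python
import numpy as np

def motion_measure(frames):
    """Dummy motion measure: average magnitude of the temporal gradient,
    i.e. the mean absolute pixel difference between consecutive frames."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # temporal gradient between frames
    return diffs.mean()
```

A static clip scores 0; how the resulting number would be thresholded into high/low is not defined on the slide.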

Example
The only moving object in this clip (a person) would be classified as large and bright; consequently, this is the class that would be assigned to the scene.

MEASURING VIDEO QUALITY

QART Model

Test Sequence Database
– Goal: create a database of object-recognition results for a collection of video sequences of differing quality
– Store objective quality measures for each video
– Check how well people discern specified visual information from video sequences
– 126 original video sequences without artifacts (SRC), 130 processing chains (HRC), 1860 derived clips (PVS)

Many Clips from Each Source
Every original scene (SRC) is down-sampled, cropped or distorted in one of several ways to see how quality loss affects the accuracy of target recognition.

What Are We Testing?
– For each clip, human subjects were asked about specified information on the target object: Is it a gun? A phone? A radio? What is the license plate number?
– How many answers are correct, and how many are wrong?

Target Tracking
– The target object whose “recognizability” we are examining must be located at every moment throughout the scene
– It is marked by a human user once in the source sequence; the selection is stored as a rectangle position and the number of the frame in which it was indicated
– Bounding rectangles for the other frames are then obtained by an object tracker running in both time directions from that point
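The bidirectional propagation described above can be sketched independently of any particular tracker. Here `step_tracker` is a hypothetical placeholder for whatever single-step tracker is actually used; the slide does not name one.

```python
def track_both_directions(frames, annotated_index, initial_box, step_tracker):
    """Propagate a manually selected bounding box to every frame of the scene.

    frames          -- sequence of frames (any representation the tracker accepts)
    annotated_index -- index of the frame in which the user marked the target
    initial_box     -- the user-selected rectangle in that frame
    step_tracker    -- function (prev_frame, prev_box, next_frame) -> next_box

    Returns a list with one bounding box per frame.
    """
    boxes = [None] * len(frames)
    boxes[annotated_index] = initial_box
    # Forward in time from the annotated frame...
    for i in range(annotated_index + 1, len(frames)):
        boxes[i] = step_tracker(frames[i - 1], boxes[i - 1], frames[i])
    # ...and backward, so frames before the annotation are covered too.
    for i in range(annotated_index - 1, -1, -1):
        boxes[i] = step_tracker(frames[i + 1], boxes[i + 1], frames[i])
    return boxes
```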

Parameters Calculated
– Quality measures such as blur, noise, blockiness, interlacing, etc.
– The list should include enough parameters to enable finding conclusive statistical results
– Each parameter is computed for every frame, and stored separately for the whole frame and for the current target location
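As an illustration of computing one such parameter for both the whole frame and the target region, here is a common blur indicator (variance of the discrete Laplacian). The slide does not specify which blur measure QART uses, so this particular choice is an assumption.

```python
import numpy as np

def laplacian_variance(gray):
    """Simple sharpness/blur indicator: variance of the discrete Laplacian.
    Low values suggest a blurry region."""
    g = np.asarray(gray, dtype=float)
    # 4-neighbour Laplacian on the interior pixels.
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4 * g[1:-1, 1:-1])
    return lap.var()

def frame_and_target_measure(gray_frame, box, measure=laplacian_variance):
    """Compute a quality parameter for the whole frame and for the target region.
    box = (x, y, w, h) in pixel coordinates."""
    x, y, w, h = box
    return measure(gray_frame), measure(gray_frame[y:y + h, x:x + w])
```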

Figure: the object is selected in one frame and followed in the other frames; quality parameters are calculated for the whole frame and for the object being followed.

When Work Is Done…
Measures (noise, blur, …): m1, m2, m3, …, mn
Probability of recognition: p(Correct) = f(m1, m2, …, mn)
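The slide leaves the mapping f unspecified; one common choice for predicting a probability from a vector of measures is logistic regression. The sketch below, with plain NumPy gradient descent, is an assumption about a plausible f, not the QART model.

```python
import numpy as np

def fit_logistic(M, y, lr=0.5, steps=3000):
    """Fit p(Correct) = sigmoid(w . m + b) from a matrix of per-clip measures M
    (rows: clips, columns: m1..mn) and observed recognition outcomes y in {0, 1}."""
    M = np.asarray(M, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(M.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(M @ w + b)))
        grad = p - y                       # derivative of the log-loss w.r.t. logits
        w -= lr * (M.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(M, w, b):
    """Predicted probability of a correct recognition for each row of M."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(M, dtype=float) @ w + b)))
```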

THANK YOU – QUESTIONS AND DISCUSSION
Any comments?