Date of download: 7/8/2016 Copyright © 2016 SPIE. All rights reserved. A scalable platform for learning and evaluating a real-time vehicle detection system.

Presentation transcript:

A scalable platform for learning and evaluating a real-time vehicle detection system. A large network of connected cameras and real-time roadside processors captures and streams traffic video. A distributed machine learning system continually samples ground truth data and supervised training examples. It learns better classification parameters and evaluates real-time vehicle detection on a large testing set. Improvements are incorporated by sending incremental parameter updates to the roadside processors. Figure Legend: From: Large-scale machine learning and evaluation platform for real-time traffic surveillance. J. Electron. Imaging. 2016;25(5). doi: /1.JEI

(a) A roadway where Harrison's implementation of a Reichardt motion model 32,33 creates (b) strong responses for the moving vehicles (green and blue) but fails to create any response for the stationary vehicle (red). (c) A different roadway location with minor wind conditions, where (d) the motion response is strong for moving background scenery, making it difficult to determine which motion responses correspond to background and which correspond to vehicles.
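The figure's failure cases follow directly from the correlator's structure. As an illustrative sketch (not Harrison's actual implementation), a minimal 1-D Reichardt detector correlates each pixel at one time step with a spatially shifted neighbour at the next, in both directions, and subtracts; a stationary pattern cancels exactly, so it produces no response:

```python
import numpy as np

def reichardt_response(prev, curr, shift=1):
    """Minimal 1-D Reichardt correlator (illustrative sketch): correlate
    each pixel at time t-1 with its shifted neighbour at time t, in both
    directions, and subtract. Stationary patterns cancel; moving ones do not."""
    a = prev[:-shift] * curr[shift:]   # rightward correlation
    b = curr[:-shift] * prev[shift:]   # leftward correlation
    return a - b

# A bright "vehicle" moving right by one pixel between frames
prev = np.array([0., 0., 1., 0., 0.])
curr = np.array([0., 0., 0., 1., 0.])
moving = reichardt_response(prev, curr)

# The same vehicle, stationary between frames
static = reichardt_response(prev, prev)

print(moving)  # nonzero response at the vehicle position
print(static)  # all zeros: no response for the stationary vehicle
```

This cancellation is exactly why the stationary vehicle (red) in (b) is invisible to the model, while any moving background scenery in (d) responds as strongly as a vehicle.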

SURF keypoints 37 detected using constant sensitivity parameters. Note that the car in (a) has keypoints, which may be matched for detection, while the car in (b) has no SURF keypoints. Furthermore, a background object (a bridge) in (c) has keypoints, which could result in a false-positive (FP) detection.
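The sensitivity problem in (b) can be reproduced with the blob measure that SURF approximates, the determinant of the Hessian: under a constant threshold, a low-contrast vehicle yields no keypoints at all. A minimal numpy sketch using plain finite differences (not the actual box-filter SURF implementation):

```python
import numpy as np

def hessian_det(img):
    """Determinant of the Hessian at every interior pixel, the blob
    measure SURF approximates with box filters (finite differences
    here; illustrative, not the SURF implementation)."""
    dxx = img[1:-1, 2:] - 2 * img[1:-1, 1:-1] + img[1:-1, :-2]
    dyy = img[2:, 1:-1] - 2 * img[1:-1, 1:-1] + img[:-2, 1:-1]
    dxy = (img[2:, 2:] - img[2:, :-2] - img[:-2, 2:] + img[:-2, :-2]) / 4.0
    return dxx * dyy - dxy ** 2

# Two blobs on a flat background: one high-contrast, one low-contrast
img = np.zeros((9, 9))
img[2, 2] = 1.0    # high-contrast "vehicle"
img[6, 6] = 0.05   # low-contrast "vehicle"

det = hessian_det(img)
threshold = 0.5    # constant sensitivity parameter

keypoints = np.argwhere(det > threshold)
print(keypoints)   # only the high-contrast blob survives the threshold
```

Lowering the threshold recovers the faint vehicle but also admits background structure such as the bridge in (c), which is the trade-off the figure illustrates.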

Overview of the large-scale machine learning system: a network of cameras, each with a roadside processor, is deployed at signalized intersections. (a and b) The processor records traffic video segments, which can be uploaded to the video database. (c) Random still frames are sampled from the database and assigned to human image observers, who annotate each frame to produce ground truth vehicle observations. (d) All annotated observations are stored in the training samples database. (e) These samples are then used by the training algorithm to generate (f) classifier parameters. From (g) the video segments and (h) the learned parameters, (i) the detection algorithm generates vehicle observations. The same video segments are also annotated by a human video observer to produce ground truth vehicle observations. The evaluation step compares the (i) detected and (j) annotated observations to measure the effectiveness of the newly trained detection algorithm.
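The evaluation step at the end of this pipeline reduces to comparing two sets of observations. A minimal sketch, assuming each observation can be keyed as a hypothetical (zone_id, frame_index) pair (the paper's actual matching criterion is not given here):

```python
# Each observation is keyed as (zone_id, frame_index) -- a hypothetical
# representation for illustration, not the paper's data model.
def evaluate(detected, annotated):
    """Compare detector output against human-annotated ground truth."""
    detected, annotated = set(detected), set(annotated)
    tp = len(detected & annotated)          # correct detections
    fp = len(detected - annotated)          # spurious detections
    fn = len(annotated - detected)          # missed vehicles
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return tp, fp, fn, precision, recall

detected  = [("lane1", 10), ("lane1", 42), ("lane2", 7)]
annotated = [("lane1", 10), ("lane2", 7), ("lane2", 99)]
print(evaluate(detected, annotated))  # 2 TPs, 1 FP, 1 FN
```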

Continuous learning is a simple extension to the system whereby the roadside processor executes the current detection algorithm and randomly samples the video stream, indicating the presence or absence of vehicles. These samples are uploaded and assigned to a (a) human image observer, who independently annotates the ground truth and adds it to the (b) training sample database. Updated samples are used during (c) training to produce new (d) classifier parameters, which generate updated vehicle observations. Falsely classified observations are fed back to a (e) human image observer, who adds them to the training sample database. Should the new classifier parameters show improvements, they can be downloaded to the (f) roadside processor in the field. Although much more costly, it is still possible to upload new video segments from the (g) roadside processor to continuously grow the testing set for (h) evaluation.
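The accept-only-if-improved logic of this loop can be sketched with toy stand-ins: here a "frame" is a single brightness value and the "trainer" learns a scalar threshold. None of this is the paper's actual detector; only the control flow is the point:

```python
import random
random.seed(0)

# Illustrative stand-ins: "vehicle present" means brightness > 0.5.
def annotate(frame):            # human image observer
    return frame > 0.5

def train(db):                  # learns a decision threshold
    pos = [x for x, y in db if y]
    return min(pos) if pos else 1.0

def score(thr, test_set):       # evaluation on the fixed testing set
    return sum((x > thr) == y for x, y in test_set) / len(test_set)

stream = [random.random() for _ in range(200)]
test_set = [(x, x > 0.5) for x in (random.random() for _ in range(100))]
training_db, params = [], 1.0   # start with a useless threshold

for _ in range(50):             # continuous-learning iterations
    frame = random.choice(stream)            # sample the video stream
    training_db.append((frame, annotate(frame)))
    candidate = train(training_db)
    if score(candidate, test_set) > score(params, test_set):
        params = candidate      # "download" update to roadside processor

print(round(params, 3))         # threshold has moved toward 0.5
```

The key design point survives the simplification: new parameters are only pushed to the field after they beat the current ones on the held-out testing set.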

An example Haar kernel on an image patch (gray). A feature value is associated with (a) all translations and (b) all scales for each translation.
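Evaluating Haar features over all translations and scales stays tractable because of the integral image: any rectangle sum costs four lookups regardless of scale. A minimal sketch of a two-rectangle feature (the exact kernel shapes used by the authors are an assumption here):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangle sum in four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar feature: left half minus right half of an
    h x w patch, evaluated at one translation and scale."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.zeros((6, 6))
img[:, :3] = 1.0                      # bright left half, dark right half
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 6, 6))  # 18.0: strong vertical-edge response
```

Because the cost per feature is constant, sweeping (a) all translations and (b) all scales only multiplies the number of lookups, not their individual cost.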

In this distributed learning system, the Haar-like features are first extracted and sent to an iterative AdaBoost learning system, where multiple processes compute weak learners, each corresponding to a single feature. A master machine then combines all the learners and selects the best one. Finally, the best learner's parameters are used to update the sample weights for the next iteration. Figure Legend: From: Large-scale machine learning and evaluation platform for real-time traffic surveillance. J. Electron. Imaging. 2016;25(5). doi: /1.JEI
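One iteration of this scheme can be sketched with decision stumps over precomputed feature values; in the distributed version each process would evaluate a subset of the feature rows and the master would keep the lowest weighted error (simplified here to a single process and one polarity):

```python
import numpy as np

def best_stump(features, labels, weights):
    """One AdaBoost iteration over precomputed feature values
    (features: n_features x n_samples). Each worker process would scan
    a subset of rows; the master keeps the lowest weighted error."""
    best = (None, None, np.inf)                  # (feature, threshold, error)
    for f, vals in enumerate(features):
        for thr in np.unique(vals):
            pred = np.where(vals >= thr, 1, -1)
            err = weights[pred != labels].sum()
            if err < best[2]:
                best = (f, thr, err)
    return best

# Toy data: feature 1 separates the classes perfectly
features = np.array([[0.2, 0.9, 0.4, 0.7],
                     [0.1, 0.2, 0.8, 0.9]])
labels = np.array([-1, -1, 1, 1])
weights = np.full(4, 0.25)

f, thr, err = best_stump(features, labels, weights)
alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))   # learner weight
pred = np.where(features[f] >= thr, 1, -1)
weights = weights * np.exp(-alpha * labels * pred)   # reweight samples
weights /= weights.sum()
print(f, err)   # feature 1 is chosen with zero weighted error
```

The final two lines are the weight update the figure describes: misclassified samples gain weight, so the next iteration's best learner is forced to focus on them.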

(a) Two southbound vehicle presence detection zones are annotated as yellow polygons. Vehicles enter each zone at the green line segment and exit at the red line. (b) Three vehicle regions define the lane entry, mid-point, and exit. A truck has just reached the exit region and is labeled as a large truck with trailer. (c) Boundaries are drawn around objects of interest, which are also classified, e.g., road, pick-up truck, or signal head.
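Relating a detected vehicle to one of these annotated polygon zones requires a point-in-polygon test; a standard ray-casting sketch (the paper's actual matching procedure may differ):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: does point (x, y) fall inside the polygon
    given as a list of (x, y) vertices? A sketch of how a detected
    vehicle centroid could be matched to a presence-detection zone."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge crosses the scan line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                      # crossing is to the right
                inside = not inside
    return inside

zone = [(0, 0), (4, 0), (4, 2), (0, 2)]   # a rectangular lane zone
print(point_in_polygon(2, 1, zone))       # True: centroid inside zone
print(point_in_polygon(5, 1, zone))       # False: outside
```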

These figures represent several of the many conditions, resolutions, camera perspectives, and locations contained in the proposed ITS dataset. (a) Glare, (b) partially wet, (c) snow, (d) reflections, (e) glare, (f) rain, (g) reflections, (h) night, (i) shadows, (j) partially wet, (k) snow, and (l) snow.

Cumulative distributions of four annotation categories. (a) By time of day, the trained video detector performs better at dawn than at dusk (27.3 h of video). The detector finds vehicles by their headlights at night and performs better at dawn because drivers were observed to have their headlights on more often at dawn than at dusk. From (b) camera properties and (c) precipitation types, the detector performs worse in light rain and when reflections from vehicles are present. From (c) precipitation types and (d) road conditions, the detector performs very well in snow and snow-covered conditions, which may be due to the higher contrast between vehicles and their surroundings.

ROC curve illustrating that the detection classifier achieves maximal true positives (TPs) with minimal false positives (FPs).
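An ROC curve like this one is traced by sweeping the decision threshold over the classifier scores and recording a (FPR, TPR) pair at each setting; a sketch with made-up scores (not data from the paper):

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs as the decision threshold sweeps over the
    classifier scores, highest threshold first. Labels are 1/0."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(s >= thr and y for s, y in zip(scores, labels))
        fp = sum(s >= thr and not y for s, y in zip(scores, labels))
        points.append((fp / neg, tp / pos))   # (FPR, TPR)
    return points

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]   # made-up detector confidences
labels = [1,   1,   0,   1,   0,   0]     # ground truth: vehicle or not
print(roc_points(scores, labels))         # curve from (0, 1/3) up to (1, 1)
```

A classifier that reaches the top-left corner of this plot, maximal TPR at minimal FPR, is exactly the behavior the figure claims for the trained detector.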