Toward Real-Time Extraction of Pedestrian Contexts with Stereo Camera — Kei Suzuki, Kazunori Takashio, Hideyuki Tokuda, Masaki Wada, Yusuke Matsuki, Kazunori Umeda

Presentation transcript:

Toward Real-Time Extraction of Pedestrian Contexts with Stereo Camera
Kei Suzuki, Kazunori Takashio, Hideyuki Tokuda, Masaki Wada, Yusuke Matsuki, Kazunori Umeda
Graduate School of Media and Governance, Keio University / Department of Precision Mechanics, Faculty of Science and Engineering, Chuo University
Fifth International Conference on Networked Sensing Systems, June 2008, Kanazawa, Japan

Introduction
Vision-based monitoring systems are becoming important. Much research on recognizing pedestrians from vision data has been conducted:
–Pedestrian detection, tracking, and activities
(Security camera)

Motivation
Current work has difficulty extracting pedestrians' high-level contexts.
Pedestrian detection, tracking, activity → pedestrian high-level contexts: How suspicious is this situation? (Snatch, skirmish)

Goals
Extract the high-level contexts of pedestrians from vision data in real time
–We aim to extract suspicious individuals, groups, and an uneasy atmosphere based on the position, velocity, width, and height of those individuals.
–We developed a prototype system that can detect people tumbling or walking in a group.
(Figure: system image at a street corner)

Our Approach
Use a stereo camera system that focuses on moving regions
–We can make real-time 3D measurements of multiple pedestrians' regions: center-of-gravity coordinates, height, width, label number
–Easy to install in a new place
Infer the pedestrian contexts by using Bayesian Networks
–We model pedestrian movement as time-series data
–We build Bayesian models and train one for each context

The target contexts

                    Individual                 Group                Atmosphere of the place
Prototype system    Tumble                     Walking in a group   Not yet
Project target      Walking, Running, Tumble   Snatch, Skirmish     Crowded, Quiet

System Overview: Hardware architecture
At a street corner: stereo camera → network → stereo camera system (analyzes vision data) → pedestrian context inference system (infers the context and displays the result).

System Overview: The flow of inferring contexts
Stereo camera system data feeds an event-detection thread; a sliding window over that data supplies input variables to the Bayesian model, and an inference thread runs the Bayesian Networks.
e.g., A: velocity vector similarity, B: the average distance, C: the average vector angle
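The three example variables above might be computed along these lines. This is a minimal sketch with assumed function names and 2D position/velocity tuples, not the authors' implementation:

```python
import math

def velocity_similarity(vels):
    """A: mean pairwise cosine similarity of velocity vectors."""
    sims = []
    for i in range(len(vels)):
        for j in range(i + 1, len(vels)):
            (ax, ay), (bx, by) = vels[i], vels[j]
            na, nb = math.hypot(ax, ay), math.hypot(bx, by)
            if na and nb:
                sims.append((ax * bx + ay * by) / (na * nb))
    return sum(sims) / len(sims) if sims else 0.0

def average_distance(positions):
    """B: mean pairwise Euclidean distance between pedestrians."""
    dists = [math.dist(p, q)
             for i, p in enumerate(positions)
             for q in positions[i + 1:]]
    return sum(dists) / len(dists) if dists else 0.0

def average_vector_angle(vels):
    """C: mean heading angle (radians) of the velocity vectors."""
    angles = [math.atan2(vy, vx) for vx, vy in vels if vx or vy]
    return sum(angles) / len(angles) if angles else 0.0
```

In a sliding-window scheme, these would be evaluated once per window and the results discretized before being fed into the Bayesian model.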

Experiment: Extract contexts with real data
Extract two pedestrian contexts
–Tumble as an individual context
–Walking with friends as a group context
Result
–Shows the effectiveness of extracting two or more contexts in real time.
–The time required to extract the contexts was 58 msec, but the system worked in real time thanks to the event-trigger model.

Conclusion / Future Work
We extracted pedestrian contexts with a stereo camera system.
We developed a prototype system and confirmed its effectiveness through experiments.
Future work
–Further evaluation of accuracy, and comparison of its performance with other methods.

Thank you!

Experiment 1: Stereo camera output test
A pedestrian walks 6.5 m from the side of the camera, then turns right.
Stereo camera frame rate: 14 frames/sec
(Figure: distance from camera (m) plotted against time (sec), with the right turn marked.)

Bayesian Network Model
A sample of the tumble model
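Since the tumble model appears only as a figure on the slide, here is a hypothetical, heavily simplified sketch of how such a model could score a tumble from two discretized observations. The prior and the conditional probabilities below are illustrative placeholders, not the learned parameters:

```python
P_TUMBLE = 0.05  # illustrative prior probability of a tumble event

# P(observation | tumble), P(observation | not tumble) — placeholder values
CPT = {
    "height_drop": (0.9, 0.1),  # sudden drop in region height
    "low_cog":     (0.8, 0.2),  # low center of gravity
}

def posterior_tumble(observations):
    """Return P(tumble | observations), assuming the observations are
    conditionally independent given the tumble state (naive structure)."""
    p_t, p_n = P_TUMBLE, 1.0 - P_TUMBLE
    for obs in observations:
        likelihood_t, likelihood_n = CPT[obs]
        p_t *= likelihood_t
        p_n *= likelihood_n
    return p_t / (p_t + p_n)
```

A real Bayesian Network over time-series data would have richer structure and learned tables; this only shows the direction of the inference step.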

Pedestrian Recognition at a Street Corner
Many projects on pedestrian recognition from video data have been conducted:
–Pedestrian detection
–Pedestrian tracking
–Pedestrian activities
(Security camera)

Characteristics of Pedestrians
Pedestrians have three characteristics based on their movement.
The activities of individuals
–Walking, running, tumbling
The activities of groups and mobs
–Harmless group: companions with the same intention
–Contingent group: moving in the same direction, a temporary crowd of people
–Suspicious group: snatchers, fighting, entering a no-admittance area
Atmospheres of a place
–Crowded, quiet

Goals
Extract suspicious individuals and uneasy atmospheres by using video cameras
–We aim to extract high-level pedestrian contexts in real time
–The system infers contexts based on pedestrians' moving-region data

Stereo Camera System Focusing on Moving Regions
Calculates moving-region features:
–Center-of-gravity coordinates
–Distance from camera
–Height and width
–Timestamp
–Label number
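As a rough illustration, the feature list above could be carried in a per-region record like the following. The field names, types, and units are assumptions for this sketch, not the system's actual data format:

```python
from dataclasses import dataclass

@dataclass
class MovingRegion:
    cog: tuple[float, float, float]  # center-of-gravity coordinates (m)
    distance: float                  # distance from camera (m)
    height: float                    # region height (m)
    width: float                     # region width (m)
    timestamp: float                 # capture time (s)
    label: int                       # tracking label number
```

The inference side would then consume a stream of such records, grouped by label, to build the per-pedestrian time series.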

Project abstract (1/2)
Extract suspicious individuals and uneasy atmospheres by using stereo cameras
–We aim to extract high-level pedestrian contexts in real time
–The system infers contexts based on pedestrians' moving-region data from the stereo camera
(At a street corner)

Project abstract (2/2)
Recognizing groups and mobs
–Harmless crowds: companion groups
–Accidental mobs: spectators
–Suspicious mobs (target): snatchers, fighting mobs

Bayesian Network
Extracting high-level pedestrian contexts using Bayesian Networks.

System Overview
At the street: stereo camera → network → stereo camera system (analyzes video data) → pedestrian context inference system (infers contexts and displays the result).

Hardware architecture
Pedestrians at a street corner: stereo camera → network → stereo camera system (processes video data) → inferring pedestrian contexts in real time → display of the inferred contexts.

Prototype System's Target Contexts

                             Individuals                       Group                  Atmosphere of a place
Prototype system's targets   Tumble                            Walking with friends   Not yet
Project target contexts      Normal walking, Running, Tumble   Snatchers, Fighting    Crowding, Quiet

Experiment 2: Load test of the context inference component
Maximum load during context inference