Project on Visual Monitoring of Human Behavior and Recognition of Human Behavior in Metro Stations with Video Understanding
M. Thonnat, Projet ORION, INRIA.

Presentation transcript:

Project on Visual Monitoring of Human Behavior and Recognition of Human Behavior in Metro Stations with Video Understanding
M. Thonnat, Projet ORION, INRIA Sophia Antipolis, France
18/03/2003, Inria/NSC

Outline
- Project on Visual Monitoring of Human Behavior
- Recognition of Human Behavior in Metro Stations with Video Understanding
- Video Understanding
- Results for Metro Stations
- Conclusion

Cooperation Project
Title: Visual Monitoring of Human Behavior
Involved teams:
1. Yi-Ping Hung & Chu-Song Chen, Academia Sinica, Taipei, Taiwan
2. Pau-Choo Chung, National Cheng Kung Univ., Tainan, Taiwan
3. Monique Thonnat, INRIA Sophia Antipolis, France
Objectives:
- Sub-goal 1: Human Detection, Tracking and Recognition using Video Cameras (mainly teams 1 and 3)
- Sub-goal 2: Behavior Recognition for Medical Purposes (mainly teams 2 and 3)

Recognition of Human Behavior in Metro Stations with Video Understanding
M. Thonnat, F. Cupillard, F. Bremond and A. Avanzi, Projet ORION, INRIA Sophia Antipolis, France

Video Understanding
Objective: to automate the recognition of specific human behaviors from video sequences, i.e. interpretation of the videos from pixels to alarms.
Processing chain: Cameras -> People detection and tracking -> Behavior recognition -> Interface for alarm management.
Example alarms: "Blocking an exit", "Fighting", "Fraud", "Overcrowding".

Video Understanding
Context: the European project ADVISOR (Annotated Digital Video for Intelligent Surveillance and Optimised Retrieval), an intelligent video surveillance system for metros.
Problem: 1000 cameras but few human operators. The system therefore provides:
- automatic selection, in real time, of the cameras viewing abnormal behaviors
- automatic annotation of recognized behaviors in a video database, using XML
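To make the annotation idea concrete, here is a minimal sketch in Python (using the standard xml.etree.ElementTree module) of what one XML record for a recognized behavior might look like; the element and attribute names are invented for illustration, since the slides do not show ADVISOR's actual schema.

# Hypothetical sketch of an XML annotation for a recognized behavior.
# Element and attribute names are invented; ADVISOR's actual schema is not shown in the slides.
import xml.etree.ElementTree as ET

def annotate_behavior(name, camera, start_frame, end_frame, actors):
    record = ET.Element("behavior", attrib={"name": name, "camera": camera})
    ET.SubElement(record, "interval",
                  attrib={"start_frame": str(start_frame), "end_frame": str(end_frame)})
    actors_el = ET.SubElement(record, "actors")
    for actor in actors:
        ET.SubElement(actors_el, "actor", attrib={"id": str(actor)})
    return ET.tostring(record, encoding="unicode")

if __name__ == "__main__":
    print(annotate_behavior("Blocking", "C10", start_frame=1200, end_frame=1950, actors=[3, 7]))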

Video Understanding Platform
- Per camera: a Motion Detector and a frame-to-frame (F2F) Tracker produce the mobile objects
- Individual Tracking, Group Tracking and Crowd Tracking
- Multi-camera Combination
- Behavior Recognition: States, Events, Scenarios
- A priori knowledge: 3D Scene Models (scene objects, zones, calibration matrices) and Scenario Models
- Outputs: Alarms and Annotation
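As a hedged sketch of how the modules in the diagram might be chained, assuming hypothetical function names for each stage (the slides do not show the actual interfaces):

# Minimal sketch of the processing chain implied by the platform diagram.
# All names are hypothetical; each stage is a placeholder callable.

def motion_detector(frame):
    """Return a list of moving regions detected in one camera frame (stubbed)."""
    return []

def f2f_tracker(detections):
    """Associate detections frame to frame into mobile objects (stubbed)."""
    return detections

def multi_camera_combination(per_camera_objects):
    """Fuse the mobile objects seen by all cameras into one 3D-referenced list (stubbed)."""
    return [obj for objects in per_camera_objects for obj in objects]

def behavior_recognition(mobile_objects, scenario_models):
    """Return the list of recognized scenarios, i.e. the alarms (stubbed)."""
    return []

def process_frames(frames_per_camera, scenario_models):
    per_camera = [f2f_tracker(motion_detector(frame)) for frame in frames_per_camera]
    mobile_objects = multi_camera_combination(per_camera)
    return behavior_recognition(mobile_objects, scenario_models)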

Video Understanding: 3D Scene Model
Definition: a priori knowledge of the observed empty scene.
- Cameras: 3D position of the sensor, calibration matrix, field of view, ...
- 3D geometry of the physical objects (bench, trash, door, walls) and interesting zones (entrance zone), with position, shape and volume
- Semantic information: type (object, zone), characteristics (yellow, fragile) and function (seat)
Role:
- to keep the interpretation independent from the sensors and the sites: many sensors, one 3D referential
- to provide additional knowledge for behavior recognition
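Such a scene model could be represented, very roughly, by data structures like the ones below; the field names are illustrative assumptions, not the ORION platform's actual types.

# Illustrative data structures for an a priori 3D scene model; all names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Camera:
    name: str
    position_3d: tuple          # (x, y, z) of the sensor in the scene referential
    calibration_matrix: list    # projection matrix linking the image and the 3D referential

@dataclass
class SceneObject:
    name: str                   # e.g. "bench", "trash", "door", "wall", "entrance zone"
    semantic_type: str          # "object" or "zone"
    characteristics: list = field(default_factory=list)   # e.g. ["yellow", "fragile"]
    function: str = ""          # e.g. "seat"
    vertices_3d: list = field(default_factory=list)        # position, shape and volume

@dataclass
class SceneModel:
    cameras: list               # list of Camera
    objects: list               # physical objects and interesting zones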

Video Understanding: Scene Model
Example: Barcelona metro, Sagrada Família station, mezzanine (cameras C10, C11 and C12).

Video Understanding
States, Events and Scenarios:
- State: a spatio-temporal property involving one or several actors over a time interval. Examples: "close", "walking", "seated".
- Event: a significant change of states. Examples: "enters", "stands up", "leaves".
- Scenario: a long-term, symbolic, application-dependent activity. Examples: "fighting", "vandalism".
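These three notions could be captured by simple data types along the following lines (a sketch only; the attribute names are assumptions):

# Sketch of the state/event/scenario notions; attribute names are illustrative.
from dataclasses import dataclass

@dataclass
class State:
    name: str          # e.g. "close", "walking", "seated"
    actors: tuple      # one or several actors (people, objects, zones)
    start: float       # time interval over which the property holds
    end: float

@dataclass
class Event:
    name: str          # e.g. "enters", "stands_up", "leaves"
    actors: tuple
    time: float        # instant of the significant change of states

@dataclass
class Scenario:
    name: str          # e.g. "fighting", "vandalism"
    components: list   # states, events and sub-scenarios that were recognized
    start: float
    end: float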

Video Understanding
Several types of States:
- posture ∈ {lying, crouching, standing}
- direction ∈ {towards the right, towards the left, leaving, arriving}
- speed ∈ {stopped, walking, running}
- distance/object ∈ {close, far}
- distance/person ∈ {close, far}
- posture/object ∈ {seated, any}
Several types of Events:
- 1 person: falls down, crouches down, stands up, goes right side, goes left side, goes away, arrives, stops, starts running
- 1 person & 1 zone: leaves, enters
- 1 person & 1 equipment: moves close to, sits on, moves away from
- 2 persons: moves close to, moves away from
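As an illustration of how tracker measurements could be mapped onto such symbolic states, the sketch below uses purely hypothetical thresholds (the actual values are not given in the slides):

# Hypothetical mapping from tracker measurements to symbolic states.
# The thresholds are invented for illustration only.

def speed_state(speed_m_per_s: float) -> str:
    if speed_m_per_s < 0.2:
        return "stopped"
    if speed_m_per_s < 2.0:
        return "walking"
    return "running"

def distance_state(distance_m: float, close_threshold_m: float = 1.5) -> str:
    return "close" if distance_m <= close_threshold_m else "far"

if __name__ == "__main__":
    print(speed_state(0.05), speed_state(1.1), speed_state(3.4))   # stopped walking running
    print(distance_state(0.8), distance_state(4.0))                # close far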

Scenario Recognition
We use several formalisms to recognize states, events and scenarios:
- specific routines
- classification
- finite state automaton
- propagation of temporal constraints

Scenario Recognition: Automaton
The operator of the scenario "A group of people blocks an exit" is based on a finite state automaton fed by mobile object detection and group tracking. Its states are: INIT -> group x is tracked -> group x is inside a zone of interest (ZOI), here the exit zone -> group x is stopped in the ZOI for more than 30 seconds -> the behavior "Blocking" is recognized. Transitions are triggered by events such as Enter_ZOI, Exit_ZOI, Stops, Start_walking and Start_running.
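A minimal sketch of such an automaton is given below, using the event names from the slide (Enter_ZOI, Exit_ZOI, Stops, Start_walking, Start_running) plus an assumed Group_tracked event and the 30-second threshold; the implementation details are invented for illustration.

# Sketch of a finite state automaton for "a group of people blocks an exit".
# Event names follow the slide; the implementation details are hypothetical.

BLOCKING_DELAY_S = 30.0

class BlockingAutomaton:
    def __init__(self):
        self.state = "INIT"       # INIT -> TRACKED -> INSIDE_ZOI -> STOPPED_IN_ZOI -> BLOCKING
        self.stopped_since = None

    def on_event(self, event: str, timestamp: float) -> str:
        if self.state == "INIT" and event == "Group_tracked":
            self.state = "TRACKED"
        elif self.state == "TRACKED" and event == "Enter_ZOI":
            self.state = "INSIDE_ZOI"
        elif self.state == "INSIDE_ZOI" and event == "Stops":
            self.state = "STOPPED_IN_ZOI"
            self.stopped_since = timestamp
        elif self.state == "STOPPED_IN_ZOI" and event in ("Start_walking", "Start_running"):
            self.state = "INSIDE_ZOI"
            self.stopped_since = None
        elif event == "Exit_ZOI":
            self.state = "TRACKED"
            self.stopped_since = None
        return self.state

    def check_blocking(self, timestamp: float) -> bool:
        # The group has been stopped in the zone of interest for more than 30 s.
        if self.state == "STOPPED_IN_ZOI" and timestamp - self.stopped_since > BLOCKING_DELAY_S:
            self.state = "BLOCKING"   # scenario recognized: raise the "Blocking" alarm
        return self.state == "BLOCKING"

if __name__ == "__main__":
    fsa = BlockingAutomaton()
    for event, t in [("Group_tracked", 0.0), ("Enter_ZOI", 2.0), ("Stops", 5.0)]:
        fsa.on_event(event, t)
    print(fsa.check_blocking(20.0))   # False: stopped for only 15 s
    print(fsa.check_blocking(40.0))   # True: stopped for more than 30 s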

Scenario Recognition: Temporal Constraints
A temporal scenario consists of three parts:
- Characters: the people, physical objects and interesting zones relevant to the scenario.
- Constraints: a set of constraints on the characteristics of the actors and on the states, events and sub-scenarios involving the actors.
- Production: the generation of a scenario instance, which can itself be part of more complex scenarios.
The constraints may be symbolic, logical, spatial and temporal, including Allen's interval algebra operators.
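For the temporal constraints, the "before" operator of Allen's interval algebra, used in the vandalism scenario on the next slide, only requires that one interval end before the other starts; a minimal sketch, with an interval representation assumed for illustration:

# Sketch of one Allen interval algebra operator ("before") on time intervals.
from collections import namedtuple

Interval = namedtuple("Interval", ["start", "end"])

def before(a: Interval, b: Interval) -> bool:
    """Allen's 'before': interval a ends strictly before interval b starts."""
    return a.end < b.start

if __name__ == "__main__":
    s1 = Interval(10.0, 12.0)   # e.g. "p moves close to the ticket machine"
    s2 = Interval(12.5, 20.0)   # e.g. "p stays at the ticket machine"
    print(before(s1, s2))       # True: s1 before s2
    print(before(s2, s1))       # False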

Scenario Recognition: Temporal Constraints
Vandalism scenario description:

Scenario(vandalism_against_ticket_machine,
  Characters((p : Person),
             (eq : Equipment, Name = "Ticket_Machine"))
  Constraints((exist ((event s1: p move_close_to eq)
                      (state s2: p stay_at eq)
                      (event s3: p move_away_from eq)
                      (event s4: p move_close_to eq)
                      (state s5: p stay_at eq))
               ((s1 != s4) (s2 != s5)
                (s1 before s2) (s2 before s3)
                (s3 before s4) (s4 before s5))))
  Production((sc : Scenario)
             ((Name of sc := "vandalism_against_ticket_machine")
              (StartTime of sc := StartTime of s1)
              (EndTime of sc := EndTime of s5))))

Results
Vandalism in the metro (Nuremberg).

Results
Examples from the Brussels and Barcelona metros: "Jumping over barrier" (individual behavior), "Blocking" an exit zone (group behavior), "Fighting" (group behavior) and "Overcrowding" (crowd behavior).

Results
Recognition of five behaviors: "Blocking", "Fighting", "Jumping over barrier", "Vandalism" and "Overcrowding".
Tested on 50 metro sequences (10 hours).
True positive rate per sequence: from 70% ("Fighting") to 95% ("Blocking").
False positive rate per sequence: from 5% ("Fighting", "Jumping over barrier") down to 0% (the others).

Conclusion
Hypotheses:
- fixed cameras
- a 3D model of the empty scene
- predefined behavior models
Results:
- behavior understanding for individuals, groups of people or crowds
- an operational language for video understanding (more than 20 states and events)
- a real-time platform (5 to 25 frames/s)

Conclusion
Future work:
- starting live evaluation at the Barcelona metro
- learning techniques to compute optimal sets of operator parameters and to dynamically configure the platform
- the Visual Monitoring of Human Behavior project