1 Scene Understanding: perception, multi-sensor fusion, spatio-temporal reasoning and activity recognition.
Francois BREMOND, PULSAR project-team, INRIA Sophia Antipolis, FRANCE
Key words: artificial intelligence, knowledge-based systems, cognitive vision, human behavior representation, scenario recognition

2 Event recognition: State of the art
Finite state automata [ICNSC04], Bayesian networks [ICVS03b], CSP / temporal constraints [AVSS05b], [IJCAI03], [ICVS03a], [PhDTV04], [ICDP06].
Autonomous systems: performance evaluation [VSPETS05], [PETS05], [IDSS04], [ICVIIP03], [WMVC07], [AVSS07]; program supervision [ICVS06c], [ICVIIP04], [MVA06a]; learning parameters [PhDBG06]; knowledge discovery [ICDP06], [VIE07]; learning scenario models [ICVS06a], [ICDP06b].
Results and demonstrations: metro, bank, train, airport and trichogramma monitoring, homecare [ICVS06b], [AJCAI06], [ICVW06], [ITSC05], [BR06], [MVA06b], [SETIT07].

3 Event Recognition
Several formalisms can be used.
Event representation: n-ary tree, frame, aggregate (structure); finite state automaton, sequence (evolution); graph, set of constraints.
Event recognition: feature-based routines; classification (Bayesian networks, neural networks, SVM, clustering); DBN, HMM; Petri nets, grammars; constraint resolution, propagation and verification of temporal constraints.

4 Event Recognition
Outline:
Event representation
Temporal scenario recognition: scenario representation, recognition process
Results: recognition of several scenarios
CARETAKER: management of large multimedia collections; trajectory clustering; activity clustering; frequent composite event discovery in videos (learning scenario models)

5 Event Representation
Several entities are involved in the scene understanding process:
Moving region: any intensity change between images.
Context object: a predefined static object of the scene environment (entrance zone, wall, equipment, door, etc.).
Physical object: any moving region which has been tracked and classified (person, group of persons, vehicle, etc.).
Physical object of interest: a meaningful object, depending on the application (person, door, parked vehicle, etc.).
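
A minimal sketch (with assumed class and field names, not taken from the original slides) of how these entities might be represented as data structures:

from dataclasses import dataclass, field

# Hypothetical data structures mirroring the entities listed above.
@dataclass
class MovingRegion:          # any intensity change between images
    bounding_box: tuple      # (x, y, width, height) in image coordinates
    frame: int

@dataclass
class ContextObject:         # predefined static object of the scene (zone, wall, door, ...)
    name: str
    polygon: list            # vertices describing the object in the scene

@dataclass
class PhysicalObject:        # a moving region that has been tracked and classified
    track_id: int
    category: str            # "person", "group", "vehicle", ...
    trajectory: list = field(default_factory=list)   # one MovingRegion per frame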

6 Event Representation
Events and scenarios show a large variety:
more or less composed of sub-events (running / fighting);
involving few or many actors (football game);
general (standing) or sensor-, application- and view-dependent (sit down, stop);
spatial granularity: from the view observed by one camera to the whole site;
temporal granularity: from instantaneous to long term, with complex relationships (synchronize).
Three levels of complexity, depending on the complexity of the temporal relations and on the number of physical objects:
non-temporal constraints relative to one physical object (sitting): intuitive association of probabilities to gain precision;
a temporal sequence of sub-scenarios relative to one physical object (open the door, go toward the chair, then sit down);
complex temporal constraints relative to several physical objects (A meets B at the coffee machine, then C gets up and leaves): needs logic expressiveness, but is sensitive to vision errors.

7 Event Representation
Video events: real-world notions ranging from short actions up to activities.
Primitive State: a spatio-temporal property linked to vision routines, involving one or several actors, valid at a given time point or stable over a time interval. Ex: "close", "walking", "sitting".
Composite State: a combination of primitive states.
Primitive Event: a significant change of states. Ex: "enters", "stands up", "leaves".
Composite Event: a combination of states and events; corresponds to a long-term (symbolic, application-dependent) activity. Ex: "fighting", "vandalism".

8 Event Representation
A video event is mainly constituted of five parts:
Physical objects: all real-world objects present in the scene observed by the cameras (mobile objects, contextual objects, zones of interest).
Components: the list of states and sub-events involved in the event.
Forbidden components: the list of states and sub-events that must not be detected during the event.
Constraints: symbolic, logical and spatio-temporal relations between components or physical objects.
Action: a set of tasks to be performed when the event is recognized.
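
A minimal sketch, assuming hypothetical class and field names, of how such a five-part event model could be held in memory:

from dataclasses import dataclass, field

# Hypothetical container mirroring the five parts of a video event model.
@dataclass
class EventModel:
    name: str
    physical_objects: dict = field(default_factory=dict)      # variable name -> expected type, e.g. {"p": "Person", "z": "Zone"}
    components: list = field(default_factory=list)            # states / sub-events involved in the event
    forbidden_components: list = field(default_factory=list)  # states / sub-events that must NOT be detected
    constraints: list = field(default_factory=list)           # symbolic, logical and spatio-temporal relations
    action: str = ""                                           # task(s) triggered when the event is recognized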

9 Event Representation
Example: "inside_zone" and "changes_zone" scenario models

primitive-state(inside_zone,
  physical-objects((p : Person), (z : Zone))
  constraints((p in z)))

primitive-event(changes_zone,
  physical-objects((p : Person), (z1 : Zone), (z2 : Zone))
  components(
    (e1 : primitive-state inside_zone (p, z1))
    (e2 : primitive-state inside_zone (p, z2)))
  constraints((e1 before e2)))

10 Event Representation
A representation language to describe temporal events of interest. Example: a "Bank_attack" scenario model.

composite-event(Bank_attack,
  physical-objects((employee : Person), (robber : Person))
  components(
    (e1 : primitive-state inside_zone (employee, "Back"))
    (e2 : primitive-event changes_zone (robber, "Entrance", "Infront"))
    (e3 : primitive-state inside_zone (employee, "Safe"))
    (e4 : primitive-state inside_zone (robber, "Safe")))
  constraints((e2 during e1) (e2 before e3) (e1 before e3) (e2 before e4) (e4 during e3))
  action("Bank attack!!!"))

11 Scenario Representation
A "Bank attack" scenario instance:
(1) The employee is at his position behind the counter.
(2) A person enters the bank.
(3) The second person moves to the front of the counter.
(4) Both of them arrive at the safe door.

12 Event Representation: Event Knowledge Base
Definition: an event knowledge base is a set of events pre-defined by experts.
Example: part of the scenario knowledge base used in the bank monitoring application (diagram showing the decomposition of composed scenario models into sub-scenario models).
Elementary scenario models: stopped, walking, inside zone, standing, seated, holding gun, close to, stays at, far from.
Composed scenario models: changes zone, stays inside zone, goes away, moves close to, bank attack (one commercial & one robber), following, queuing, tailgating, opens door, customer at cash machine, customer waiting, customer in agency.

13 Event Representation: Video Event Ontology
Video event ontology: a set of concepts and relations used as a reference between all the actors of the domain to describe knowledge. It serves to:
Enable experts to describe video events of interest (e.g. composite events) and to structure the knowledge: ontology of the application domain.
Share knowledge between developers: ontology of visual concepts (e.g. a stopped mobile object).
Ease communication between developers and end users and enable performance evaluation: ontology of the video understanding process (what should be detected: mobile object (a parked car), object of interest (a door), visible object (an occluded person)).
Architecture interoperability: separation between specification and knowledge description.

14 Event Representation: Video Event Ontology
(Diagram of the video event ontology.) The ontology groups:
Physical objects: mobile (person, car, ...) and contextual (zone, equipment, ...).
Video events: primitive state, composite state, primitive event, composite event (e.g. one person + equipment).
Relations: spatial relations (distance: close, far; topological: inside, outside), temporal relations (quantitative; qualitative/symbolic, e.g. Allen's temporal relations) and spatio-temporal relations.

15 Event Representation: Video Event Ontology

16 Scenario Recognition

17 Scenario Recognition: Specific Routines Advisor project: F. Cupillard, A. Avanzi,…

18 Scenario Recognition: Specific Routines
Results in a metro station: recognition of the "Running" scenario. The person's state evolves from stopped to walking to running; when the "running" state is reached, the Running scenario is recognized and an alarm is raised.

19 Scenario Recognition: Specific Routines
Results in a metro station.
Several types of states:
posture ∈ {lying, crouching, standing}
direction ∈ {towards the right, towards the left, leaving, arriving}
speed ∈ {stopped, walking, running}
distance/object ∈ {close, far}
distance/person ∈ {close, far}
posture/object ∈ {sitting, any}
Several types of events:
1 person: falls down, crouches down, stands up, goes right side, goes left side, goes away, arrives, stops, starts running
1 person & 1 zone: leaves, enters
1 person & 1 equipment: moves close to, sits on, moves away from
2 persons: moves close to, moves away from
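
A minimal sketch, with assumed names and thresholds, of how such state categories could be encoded and computed:

from enum import Enum

# Hypothetical enumerations mirroring the state categories listed above.
class Posture(Enum):
    LYING = 1; CROUCHING = 2; STANDING = 3

class Speed(Enum):
    STOPPED = 1; WALKING = 2; RUNNING = 3

class Distance(Enum):
    CLOSE = 1; FAR = 2

def speed_state(v_mps, walk_threshold=0.3, run_threshold=2.0):
    """Classify a person's speed (m/s) into a state; the thresholds are assumed values."""
    if v_mps < walk_threshold:
        return Speed.STOPPED
    return Speed.WALKING if v_mps < run_threshold else Speed.RUNNING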

20 Scenario Recognition: automaton
Recognition of the behaviour "a group of people blocks an exit". The processing chain is: mobile object detection, group tracking, then behaviour recognition on the exit zone.
The scenario "A group of people blocks an exit" is based on a finite state automaton: from INIT, a group x is tracked; on Enter_ZOI it is inside a zone of interest (ZOI); when it stops (Stops) and remains stopped in the ZOI for more than 30 seconds, the "Blocking" state is reached; Exit_ZOI, Start_walking or Start_running leave the blocking path. (A minimal code sketch of such an automaton follows.)
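
A minimal sketch, with assumed attribute names and the 30-second delay taken from the diagram above, of how this automaton could be coded:

# Hypothetical finite state automaton for "a group of people blocks an exit".
# States: INIT -> TRACKED -> INSIDE_ZOI -> STOPPED_IN_ZOI -> BLOCKING
BLOCKING_DELAY = 30.0   # seconds a stopped group must stay in the zone of interest

class BlockingAutomaton:
    def __init__(self):
        self.state = "INIT"
        self.stop_time = None

    def update(self, group_tracked, inside_zoi, stopped, t):
        """Advance the automaton with the group's attributes at time t (seconds)."""
        if not group_tracked:
            self.state, self.stop_time = "INIT", None
        elif not inside_zoi:                     # Exit_ZOI resets the zone-related states
            self.state, self.stop_time = "TRACKED", None
        elif not stopped:                        # Start_walking / Start_running
            self.state, self.stop_time = "INSIDE_ZOI", None
        else:
            if self.stop_time is None:
                self.stop_time = t               # Stops: remember when the group stopped
            if t - self.stop_time > BLOCKING_DELAY:
                self.state = "BLOCKING"          # raise the "Blocking" alarm
            else:
                self.state = "STOPPED_IN_ZOI"
        return self.state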

21 Scenario Recognition: automaton
Examples from the Brussels and Barcelona metros: blocking an exit zone (group behaviour), jumping over the barrier (individual behaviour), fighting (group behaviour) and overcrowding (crowd behaviour).

22 Scenario Recognition: automaton
Recognition of five behaviours: "Blocking", "Fighting", "Jumping over barrier", "Vandalism" and "Overcrowding".
Tested on 50 metro sequences (10 hours) and one week of live recognition.
True positive rate per sequence: 70% ("Fighting") to 95% ("Blocking").
False positive rate per sequence: 5% ("Fighting", "Jumping over barrier") to 0% (others).

23 Scenario Recognition: Temporal Constraints Work done in collaboration with T. Vu

24 Scenario Recognition: Temporal Constraints
Overview of the recognition process:
Recognition of elementary scenarios
Scenario compilation
Recognition of composed scenarios
Prediction and uncertainty
Example of the recognition of a "Bank attack" scenario, and more…

25 Scenario Recognition: issues
Many event representations are not easy to use and are not re-usable (they need learning, e.g. incremental or supervised); scenarios defined at one time point or interval do not let the experts describe their scenarios in a natural way (context awareness, user feedback).
Event recognition approaches allow an efficient recognition of events, but:
some temporal constraints cannot be processed (e.g. static scenes, synchronization);
they require that the events are bounded in time (temporal complexity);
they deal only partially with inaccuracy and uncertainty (e.g. lost tracks) and are not linked to the signal.

26 Scenario Recognition: Temporal Constraints (T. Vu)
Scenario (algorithmic notion): any type of video event. Two types of scenarios: elementary (primitive states) and composed (composite states and events).
The algorithm works in two steps, taking as input the tracked mobile objects and a priori knowledge (the scenario knowledge base, plus 3D geometric & semantic information about the observed environment), and producing recognized scenarios:
Step 1: recognize all elementary scenario models, then trigger the recognition of selected composed scenario models.
Step 2: recognize all triggered composed scenario models, then trigger the recognition of other composed scenario models.
(A minimal code sketch of this two-step loop follows.)
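
A minimal sketch of the two-step recognition loop, assuming hypothetical helper functions (recognize_elementary, recognize_composed) and a "termination" attribute on composed models:

# Hypothetical two-step recognition loop, run at every time instant.
def recognize_at_instant(elementary_models, composed_models, mobiles, t, recognized):
    triggered = []

    # Step 1: recognize all elementary scenario models on the tracked mobile objects,
    # and trigger the composed models whose terminating component they are.
    for model in elementary_models:
        for instance in recognize_elementary(model, mobiles, t):
            recognized.append(instance)
            triggered.extend(m for m in composed_models if m.termination == model)

    # Step 2: recognize the triggered composed models; each newly recognized composed
    # scenario may in turn trigger other composed models that terminate with it.
    while triggered:
        model = triggered.pop()
        for instance in recognize_composed(model, recognized, t):
            recognized.append(instance)
            triggered.extend(m for m in composed_models if m.termination == model)
    return recognized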

27 Scenario Recognition: Elementary Scenario
The recognition of a compiled elementary scenario model m_e consists of a loop:
1. choosing a physical object for each physical-object variable;
2. verifying all the constraints linked to this variable.
m_e is recognized if all the physical-object variables are assigned a value and all the linked constraints are satisfied.
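
A minimal sketch (with assumed helper names such as expected_type and c.holds) of this loop as a backtracking search over the physical-object variables:

# Hypothetical recursive assignment of physical objects to the variables of an
# elementary scenario model; constraints are checked as soon as a variable is bound.
def recognize_elementary(model, mobiles, t, assignment=None):
    assignment = assignment or {}
    unbound = [v for v in model.variables if v not in assignment]
    if not unbound:
        return [dict(assignment)]          # every variable bound, all constraints held
    var = unbound[0]
    instances = []
    for obj in mobiles:
        if obj.category != model.expected_type(var):
            continue
        assignment[var] = obj
        # keep only the constraints whose variables are all bound at this point
        checkable = [c for c in model.constraints if c.variables <= assignment.keys()]
        if all(c.holds(assignment, t) for c in checkable):
            instances += recognize_elementary(model, mobiles, t, assignment)
        del assignment[var]
    return instances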

28 Scenario Recognition: Elementary Scenario
(Diagram: a tree indexing recognized scenarios by actor, model and zone; each leaf stores a list of time intervals of recognized scenarios.) The resolution of temporal constraints is improved by structuring the search domain of already recognized states, events and scenarios. For example, the path (Person 1 → Inside_zone → Zone 1) leads to the list of time intervals during which Person 1 is inside Zone 1.
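
A minimal sketch, with assumed names, of such an index of recognized instances keyed by actor, model and zone:

from collections import defaultdict

# Hypothetical index: (actor, model, zone) -> list of (start, end) time intervals
# during which the corresponding state/event has been recognized.
recognized_index = defaultdict(list)

def store(actor, model, zone, start, end):
    recognized_index[(actor, model, zone)].append((start, end))

def intervals(actor, model, zone):
    """Return the time intervals of already recognized instances along this path."""
    return recognized_index[(actor, model, zone)]

# e.g. the path (Person 1 -> Inside_zone -> Zone 1):
store("Person_1", "inside_zone", "Zone_1", 10.0, 42.5)
print(intervals("Person_1", "inside_zone", "Zone_1"))   # [(10.0, 42.5)]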

29 Scenario Recognition: Composed Scenario
Problem: given a scenario model m_c = (m_1 before m_2 before m_3), if a scenario instance i_3 of m_3 has been recognized then the main scenario model m_c may be recognized. However, classical algorithms will try all combinations of scenario instances of m_1 and of m_2 with i_3 → a combinatorial explosion.
Solution: decompose the composed scenario models into simpler scenario models in an initial (compilation) stage, such that each composed scenario model is composed of two components: m_c = (m_4 before m_3), where m_4 = (m_1 before m_2) → a linear search.
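
A minimal sketch, with assumed names, of this compilation step, which rewrites an n-component sequence into nested two-component models:

# Hypothetical compilation: (m1 before m2 before m3) becomes ((m1 before m2) before m3),
# so that recognition only ever has to combine a "start" with a "termination" component.
class ComposedModel:
    def __init__(self, name, start, termination):
        self.name = name                  # e.g. "Bank_attack_3"
        self.start = start                # sub-model that must end first
        self.termination = termination    # sub-model whose recognition triggers m_c

def compile_sequence(name, components):
    """Decompose a sequence of components into binary composed models."""
    left = components[0]
    generated = []
    for i, right in enumerate(components[1:], start=1):
        left = ComposedModel(f"{name}_{i}", start=left, termination=right)
        generated.append(left)
    return generated                      # the last generated model is the full scenario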

30 Scenario Recognition: Composed Scenario
Example: original "Bank_attack" scenario model

composite-event(Bank_attack,
  physical-objects((employee : Person), (robber : Person))
  components(
    (1) (e1 : primitive-state inside_zone(employee, "Back"))
    (2) (e2 : primitive-event changes_zone(robber, "Entrance", "Infront"))
    (3) (e3 : primitive-state inside_zone(employee, "Safe"))
    (4) (e4 : primitive-state inside_zone(robber, "Safe")))
  constraints((e2 during e1) (e2 before e3) (e1 before e3) (e2 before e4) (e4 during e3))
  alert("Bank attack!!!"))

31 Scenario Recognition: Composed Scenario
Compilation: the original scenario model is decomposed into 3 new scenarios

composite-event(Bank_attack_1,
  physical-objects((employee : Person), (robber : Person))
  components(
    (1) (e1 : primitive-state inside_zone (employee, "Back"))
    (2) (e2 : primitive-event changes_zone (robber, "Entrance", "Infront")))
  constraints((e1 during e2)))

composite-event(Bank_attack_2,
  physical-objects((employee : Person), (robber : Person))
  components(
    (3) (e3 : primitive-state inside_zone (employee, "Safe"))
    (4) (e4 : primitive-state inside_zone (robber, "Safe")))
  constraints((e3 during e4)))

composite-event(Bank_attack_3,
  physical-objects((employee : Person), (robber : Person))
  components(
    (att_1 : composite-event Bank_attack_1 (employee, robber))
    (att_2 : composite-event Bank_attack_2 (employee, robber)))
  constraints(((termination of att_1) before (start of att_2)))
  alert("Bank attack!!!"))

32 Scenario Recognition: Composed Scenario
A compiled scenario model m_c is composed of two components: start and termination. To start the recognition of m_c, its termination needs to already be instantiated. The recognition of a compiled scenario model m_c consists of a loop:
1. choosing a scenario instance for the start of m_c;
2. verifying the temporal constraints of m_c;
3. instantiating the physical objects of m_c with the physical objects of the start and of the termination of m_c;
4. verifying the non-temporal constraints of m_c;
5. verifying the forbidden constraints.
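
A minimal sketch of this loop, triggered by a newly recognized termination instance; the helpers bind_objects, make_instance and instances_of are assumed names, not from the original slides:

# Hypothetical recognition of a compiled composed model m_c, triggered when an
# instance of its termination component has just been recognized.
def recognize_from_termination(model, termination_instance, recognized, t):
    instances = []
    for start_instance in recognized.instances_of(model.start):        # 1. choose a start instance
        if not start_instance.end <= termination_instance.start:       # 2. temporal constraints
            continue
        objects = bind_objects(model, start_instance, termination_instance)   # 3. shared actors
        if objects is None:
            continue
        if not all(c.holds(objects, t) for c in model.atemporal_constraints): # 4. non-temporal constraints
            continue
        if any(f.occurs(objects, start_instance.start, termination_instance.end)
               for f in model.forbidden_components):                   # 5. forbidden components
            continue
        instances.append(make_instance(model, objects, start_instance, termination_instance))
    return instances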

33 Scenario Recognition: Temporal Constraints, Results
A generic formalism helps experts model states, events and scenarios intuitively.
The recognition algorithm processes temporal operators efficiently.
The recognition of complex scenarios (with a large number of actors) becomes real time.
However, prediction and uncertainty still need to be taken care of.

34 Scenario recognition: capacity of prediction
Issue: in the bank monitoring application, an alert "Bank attack!!!" is triggered when the scenario "Bank_attack" is recognized; however, it can be too late for security agents to cope with the situation.
Requirement: is the temporal scenario recognition method able to predict scenarios that may occur in the future?
Answer: yes, the recognition algorithm can predict scenarios that may occur by automatically adding alerts (during compilation) to some of the generated scenario models. This task can be specified in the scenario models.

35 Scenario recognition: uncertainty
Temporal precision. Issue: several scenario models are defined with too precise temporal constraints → they cannot be recognized on real videos. Solution: we define a temporal tolerance Δt as an integer; all temporal comparisons are then evaluated up to this tolerance Δt.
Incorrect mobile object tracking. Issue: the vision algorithms may lose the track of several detected mobile objects → the system cannot correctly recognize scenario occurrences in several videos. Solution 1: experts describe different scenario models representing the various situations corresponding to several combinations of physical objects.
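
A minimal sketch of tolerant temporal operators, assuming the tolerance Δt simply relaxes each comparison (the exact semantics is not spelled out in the transcript):

DELTA_T = 2   # hypothetical temporal tolerance (in frames or seconds)

def before(t1_end, t2_start, tol=DELTA_T):
    """t1 'before' t2, allowing the two instants to overlap by up to tol."""
    return t1_end <= t2_start + tol

def during(inner, outer, tol=DELTA_T):
    """Interval inner 'during' interval outer, up to the tolerance tol."""
    return inner[0] >= outer[0] - tol and inner[1] <= outer[1] + tol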

36 Uncertainty Representation
Solution 2: management of the vision uncertainty (likelihood):
Within predefined event models (off-line), coefficients (on mobile objects and components) are provided by default.
The likelihood is propagated (on-line) through the event instances:
1. mobile objects: likelihood computed by the vision algorithms;
2. primitive states (elementary): a coefficient on each physical object representing the likelihood relation between the state and each involved mobile object;
3. events and composite states (composed): a coefficient on each component representing the likelihood relation between the event and each component.
A threshold is defined in each state/event model to specify at which likelihood level the given state/event should be recognized.

37 Uncertainty Representation

PrimitiveState(Person_Close_To_Vehicle,
  PhysicalObjects((p : Person, 0.7), (v : Vehicle, 0.3))
  Constraints((p distance v ≤ close_distance)
              (recognized if likelihood > 0.8)))

CompositeEvent(Crowd_Splits,
  PhysicalObjects((c1 : Crowd, 0.5), (c2 : Crowd, 0.5), (z1 : Zone))
  Components((s1 : CompositeState Move_toward (c1, z1), 0.3)
             (e2 : CompositeEvent Move_away (c2, c1), 0.7))
  Constraints((e2 during s1)
              (c2's Size > Threshold)
              (recognized if likelihood > 0.8)))

38 Using the Uncertainty
The likelihood of an event is calculated by a formula of the form likelihood(E) = f(coef_1 · like_1, ..., coef_n · like_n), where like is the likelihood and coef is the coefficient. Two functions f are defined: one for a primitive state (elementary) e, over its physical objects, and one for an event or composite state (composed) Ec, over its components.
An event E is recognized if and only if likelihood(E) ≥ threshold(E).
Issue: event uncertainty is directly linked to tracking uncertainty (difficult for real-world scenes).
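
A minimal sketch, assuming the combination function f is a coefficient-weighted sum (an assumption, since the exact formula is not given in the transcript):

# Assumed combination: likelihood as a coefficient-weighted sum of sub-likelihoods.
def likelihood_primitive(state):
    # elementary: weight the likelihood of each involved mobile object
    return sum(coef * obj.likelihood for obj, coef in state.physical_objects)

def likelihood_composed(event):
    # composed: weight the likelihood of each component (state or sub-event)
    return sum(coef * likelihood(comp) for comp, coef in event.components)

def likelihood(e):
    return likelihood_primitive(e) if e.is_primitive else likelihood_composed(e)

def is_recognized(e):
    # an event is recognized iff its likelihood reaches the model's threshold
    return likelihood(e) >= e.threshold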

39 Scenario recognition: Results
The experts of six video interpretation projects have performed three types of tests:
on recorded videos: to verify whether the recognition algorithm recognizes enough scenario occurrences;
on live videos: to verify whether the recognition algorithm can run over a long time interval;
on recorded/simulated videos: to estimate the processing time of the recognition algorithm.

40 Scenario recognition: Results Bank agency monitoring in Paris (M. Maziere)

41 Scenario recognition: Results
Vandalism scenario example (temporal constraints):

Scenario(vandalism_against_ticket_machine,
  Physical_objects((p : Person), (eq : Equipment, Name = "Ticket_Machine"))
  Components((event s1: p moves_close_to eq)
             (state s2: p stays_at eq)
             (event s3: p moves_away_from eq)
             (event s4: p moves_close_to eq)
             (state s5: p stays_at eq))
  Constraints((s1 != s4) (s2 != s5)
              (s1 before s2) (s2 before s3) (s3 before s4) (s4 before s5)))

42 Scenario Recognition: Results Vandalism in metro in Nuremberg

43 Scenario recognition: Results Example: a “Vandalism against a ticket machine” scenario

44 Scenario recognition: Results Example: an "abandoned goods" scenario

45 Scenario recognition: Results
Example: "Unloading Front Operation" event (global)

CompositeEvent(UnLoading_Front_Global_Operation,
  PhysicalObjects((v1 : Vehicle), (v2 : Vehicle), (z1 : Zone), (z2 : Zone), (z3 : Zone))
  Components((c1 : CompositeEvent Loader_Arrival(v1, z1, z2))
             (c2 : CompositeEvent Transporter_Arrival(v2, z1, z3)))
  Constraints((v1->SubType = LOADER)
              (v2->SubType = TRANSPORTER)
              (z1->Name = ERA)
              (z2->Name = RF_DoorC_Access)
              (z3->Name = LOADER_BackZone)
              (c1 before c2)))

46 Scenario recognition: Results Example: "Unloading Global Operation" event

47 Scenario recognition: Results
Example: "Unloading Front Operation" event (detailed)

CompositeEvent(UnLoading_Front_Detailed_Operation,
  PhysicalObjects((p1 : Person), (v1 : Vehicle), (v2 : Vehicle), (v3 : Vehicle),
                  (z1 : Zone), (z2 : Zone), (z3 : Zone), (z4 : Zone))
  Components((c1 : CompositeEvent Loader_Arrival(v1, z1, z2))
             (c2 : CompositeEvent Transporter_Arrival(v2, z1, z3))
             (c3 : CompositeState Worker_Manipulating_Container(p1, v3, v2, z3, z4)))
  Constraints((v1->SubType = LOADER)
              (v2->SubType = TRANSPORTER)
              (z1->Name = ERA)
              (z2->Name = RF_DoorC_Access)
              (z3->Name = LOADER_BackZone)
              (z4->Name = Behind_RF_DoorC_Access)
              (c1 before c2) (c2 before c3)))

48 Scenario recognition: Results
Parked aircraft monitoring in Toulouse (F. Fusier): "Unloading Front Operation"

49 Scenario recognition: Results
Example: "Aircraft Arrival Preparation" event (involving the GPU)

50 Scenario recognition: Results
Example: "Tow Tractor Arrival" event

51 Scenario recognition: Results
Example: "vandalism_against_window" event

CompositeEvent(vandalism_against_window,
  PhysicalObjects((vandal : Person), (w : Equipment))
  Components((vandalism_against_window_VIDEO : CompositeEvent vandal_close_to_window(vandal, w))
             (vandalism_against_window_AUDIO : CompositeEvent tag_detected_close_to_person(vandal)))
  Constraints((vandalism_against_window_VIDEO during vandalism_against_window_AUDIO))
  Alarm(AText("Vandalism against window") AType("URGENT")))

52 Scenario recognition: Results Example: “Scratch & theft in a train” scenarios

53 Scenario recognition: Results Example: a “Disturbing people in a train” scenario

54 Scenario recognition: Results Video Understanding for Trichogramma Monitoring

55 Recognition of the "Prepare meal" event
Visualization of a recognized event in the Gerhome laboratory: the person is recognized with the posture "standing with one arm up", "located in the kitchen" and "using the microwave".

56 Recognition of the "Resting in living-room" event
Visualization of a recognized event in the Gerhome laboratory: the person is recognized with the posture "sitting in the armchair" and "located in the living-room".