Detecting and Tracking Hostile Plans in the Hats World


Detection and Tracking of Plans

Goals define plans; plans are then put into action, which results in observations. Given a stream of observations and some behavioral model of the agent that generates them:
– What are the most likely intentions of the agent?
– What kind of plan is the agent pursuing?

Plan recognition: the problem of inferring an agent's hidden plans or intentions from its observable actions.

Intentions (Goals) → Plans → Actions → Observations

What is the Hats Simulator?

A testbed for plan recognition in a virtual domain: a simulation of a "society in a box" with about 100,000 agents (hats) that engage in individual and collective activities. Most of the agents are benign, a few are known adversaries (terrorists), and some are covert adversaries. Hats have capabilities, sets of elementary attributes that can be traded with other hats at meetings. Beacons are special locations or landmarks characterized by their vulnerabilities: when a group of adversarial agents (a task force) has the capabilities that match a beacon's vulnerabilities, the beacon is destroyed.

What is the Hats Simulator?

Each hat belongs to one or more organizations, which are not known and must be inferred from meetings.
– Each adversarial hat belongs to at least one adversarial organization.
– Each benign hat belongs to at least one benign organization and to no adversarial organizations.
When a meeting is planned, the list of participants is drawn from the set of hats belonging to the same organization.
– A meeting planned by an adversarial organization consists only of adversarial agents.
– A meeting planned by a benign organization may consist of both benign and (covert) adversarial agents.

Hierarchical Model for Hats

The generative planner chooses an organization to carry out the attack and the beacon to target. A task force is then chosen from that organization, and the planner sets up an elaborate sequence of meetings through which each task force member acquires a needed capability.

The Goals of Hats

To find and neutralize the adversarial task force behind a beacon attack, given the record of meetings between hats.

Bayesian Framework

The state of the i-th agent at time t is characterized by:
– a terrorist indicator,
– the intention to acquire a capability,
– the actual capability carried.
The joint state of the system is the collection of these individual agent states.
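The per-agent state just described can be sketched as a small data structure. This is a hypothetical illustration: the field names and types are my own, not notation from the original slides.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

# Hypothetical sketch of the per-agent hidden state described above.
@dataclass
class HatState:
    is_terrorist: bool                  # terrorist indicator
    intended_capability: Optional[int]  # capability the hat intends to acquire, if any
    capabilities: Set[int] = field(default_factory=set)  # capabilities actually carried

# The joint state of the system is the collection of all per-agent states.
joint_state = [
    HatState(is_terrorist=False, intended_capability=None),
    HatState(is_terrorist=True, intended_capability=3, capabilities={1, 2}),
]
```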

Hidden Markov Model

Transition matrix:
– The probability that the system will be in a given state at time t, given the state it was in and the observation it produced at time t-1.
Emission model:
– The probability of seeing a particular observation in a particular state.
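As a concrete toy illustration of these two ingredients, the following sketch builds a 2-state HMM. The states, observations, and all probability values are illustrative assumptions, not values from the Hats model.

```python
import numpy as np

# Illustrative 2-state HMM for a single hat:
# state 0 = "idle", state 1 = "acquiring a capability" (names are made up).
transition = np.array([[0.9, 0.1],   # row i: P(next state | current state i)
                       [0.2, 0.8]])
# Emission model: observation 0 = "no meeting", 1 = "attends a meeting".
emission = np.array([[0.7, 0.3],     # row j: P(observation | state j)
                     [0.1, 0.9]])

# Every row of a stochastic matrix must sum to 1.
assert np.allclose(transition.sum(axis=1), 1.0)
assert np.allclose(emission.sum(axis=1), 1.0)

# P(meeting observed | currently acquiring), read straight off the emission model:
p = emission[1, 1]
```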

Bayesian Filtering

Filtering distribution: the joint distribution over the hidden state at time t and the observations up to time t, α_t(x_t) = P(x_t, y_1:t).
Recursive update formula:
α_t(x_t) = P(y_t | x_t) Σ_{x_{t-1}} P(x_t | x_{t-1}, y_{t-1}) α_{t-1}(x_{t-1})
– Once a transition matrix and an emission model are specified, this equation can be used for state estimation.
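The recursive update can be sketched as a standard forward (predict-then-correct) filter. For simplicity this sketch assumes transitions depend only on the previous state, dropping the observation-dependence mentioned above, and all numbers are illustrative rather than taken from the Hats model.

```python
import numpy as np

# Toy 2-state model (state 0 = benign, 1 = hostile); values are made up.
T = np.array([[0.95, 0.05],
              [0.10, 0.90]])        # T[i, j] = P(x_t = j | x_{t-1} = i)
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])          # E[j, o] = P(observation o | x_t = j)

def filter_step(belief, obs):
    """One recursive Bayesian filter update: predict, then correct."""
    predicted = belief @ T           # sum_i belief[i] * T[i, j]
    updated = predicted * E[:, obs]  # weight by observation likelihood
    return updated / updated.sum()   # normalize to a distribution

belief = np.array([0.9, 0.1])        # prior over the hidden state
for obs in [1, 1, 0, 1]:             # a short stream of observations
    belief = filter_step(belief, obs)
```

After the stream above, which is dominated by observation 1, the posterior mass shifts toward the hostile state.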

Bayesian Guilt by Association Model

Estimates the group membership of agents based on observed meetings; each meeting contains exactly two agents.
– State of the system: the hidden group label of each agent.
– Two parameters: the probability of a meeting between two agents of the same type, and the probability of a meeting between agents of different types.
– Kronecker δ-function: δ(s_i, s_j) = 1 if s_i = s_j, and 0 otherwise.
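A minimal sketch of the meeting likelihood built from the Kronecker δ, and a single Bayesian update for one agent's label. The parameter names (`p_same`, `p_diff`) and their values are hypothetical, not taken from the slides.

```python
import numpy as np

# Hypothetical parameter values (not from the original model):
p_same = 0.8   # probability of a meeting between two agents of the same type
p_diff = 0.2   # probability of a meeting between agents of different types

def delta(si, sj):
    """Kronecker delta: 1 if both agents share the same group label, else 0."""
    return 1 if si == sj else 0

def meeting_likelihood(si, sj):
    """Emission probability of a meeting between agents labeled si and sj."""
    d = delta(si, sj)
    return p_same ** d * p_diff ** (1 - d)

# Posterior over agent j's label after one observed meeting with a known
# terrorist (label 1), starting from a uniform prior.
prior = np.array([0.5, 0.5])   # P(s_j = 0), P(s_j = 1)
lik = np.array([meeting_likelihood(s, 1) for s in (0, 1)])
post = prior * lik / (prior * lik).sum()
```

Meeting a known adversary raises the posterior probability that the other participant is adversarial, which is the "guilt by association" effect.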

Bayesian Guilt by Association Model

The model is completed by specifying:
– an emission model,
– a transition matrix,
– a recursive update formula.