Change Detection in Dynamic Environments Mark Steyvers Scott Brown UC Irvine This work is supported by a grant from the US Air Force Office of Scientific Research (AFOSR grant number FA )

Overview
Experiments with dynamically changing environments
Task: given a sequence of random numbers, predict the next one
Questions:
–How do observers detect changes?
–What are the individual differences? ("Jumpiness")
Approach: Bayesian models + simple process models

Two-dimensional prediction task
11 x 11 button grid
Touch screen monitor
1500 trials
Self-paced
Same sequence for all subjects
(Figure legend: observed data vs. prediction markers.)

Sequence Generation
(x, y) locations are drawn from two binomial distributions of size 10 with parameters θ
At every time step, probability 0.1 of changing θ to a new random value in [0, 1]
Example sequence over time: θ = .12, .95, .46, .42, .92, .36
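The generative process above can be sketched in a few lines of Python. One detail the slide leaves open is whether x and y share a single θ or have separate θs; this sketch resamples a separate θ for each dimension at every changepoint, and all function and parameter names are illustrative.

```python
import random

def generate_sequence(n_trials=1500, change_prob=0.1, size=10, seed=0):
    """Sketch of the stimulus generator: binomial(size, theta) draws for x and y,
    with theta resampled uniformly on [0, 1] with probability change_prob per step."""
    rng = random.Random(seed)
    theta_x, theta_y = rng.random(), rng.random()
    seq = []
    for _ in range(n_trials):
        if rng.random() < change_prob:                 # changepoint: new thetas
            theta_x, theta_y = rng.random(), rng.random()
        # sum of 10 Bernoulli draws = Binomial(10, theta)
        x = sum(rng.random() < theta_x for _ in range(size))
        y = sum(rng.random() < theta_y for _ in range(size))
        seq.append((x, y))
    return seq
```

Each (x, y) pair then indexes one cell of the 11 x 11 button grid (values 0–10 per dimension).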

Example Sequence

Bayesian Solution
(Figure: observed sequence vs. model prediction; sequence from block 5.)
Subject 4 – change detection too slow
Subject 12 – change detection too fast

Tradeoffs Detecting the change too slowly will result in lower accuracy and less variability in predictions than an optimal observer. Detecting the change too quickly will result in false detections, leading to lower accuracy and higher variability in predictions.

Average Error vs. Movement (each point = one subject)

"Ideal" observer: inferring the HMM that generated the data
(Diagram: time steps 1, 2, …, t, t+1; latent states and changepoints generate the measurements; transitions governed by the change probability.)

Gibbs sampling
Sample from the distribution over changepoints; the prediction is the average measurement after the last inferred changepoint.
(Figure: model prediction and inferred changepoint over the sequence.)
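The slides infer changepoints with Gibbs sampling. As an illustration of the same ideal-observer computation, here is a minimal exact alternative for one dimension: a run-length recursion in the style of Adams and MacKay's Bayesian online changepoint detection, under the stated generative assumptions (binomial size 10, uniform prior on θ, change probability 0.1). This is a sketch of the idea, not the authors' model.

```python
import math

def betabinom_logpmf(x, n, a, b):
    """log P(x | n, theta ~ Beta(a, b)): the beta-binomial predictive."""
    return (math.lgamma(n + 1) - math.lgamma(x + 1) - math.lgamma(n - x + 1)
            + math.lgamma(a + x) + math.lgamma(b + n - x) - math.lgamma(a + b + n)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def runlength_filter(xs, hazard=0.1, size=10):
    """Exact run-length filter for binomial(size, theta) observations with a
    uniform prior on theta. Returns the posterior-mean estimate of theta
    after each observation, averaged over run-length hypotheses."""
    state = [(1.0, 0, 0)]  # (weight, n obs in current run, sum of obs in run)
    estimates = []
    for x in xs:
        # changepoint: x starts a fresh run; predictive under Beta(1,1) is uniform
        prior_pred = 1.0 / (size + 1)
        new = [(hazard * prior_pred, 1, x)]
        # growth: each existing run absorbs x with prob (1 - hazard)
        for w, n, s in state:
            a, b = 1 + s, 1 + size * n - s          # conjugate Beta posterior
            like = math.exp(betabinom_logpmf(x, size, a, b))
            new.append((w * like * (1 - hazard), n + 1, s + x))
        z = sum(w for w, _, _ in new)
        state = [(w / z, n, s) for w, n, s in new]   # normalize
        estimates.append(sum(w * (1 + s) / (2 + size * n) for w, n, s in state))
    return estimates
```

Like the Gibbs scheme on the slide, the estimate is driven by the observations since the (inferred) last changepoint, so it jumps quickly after a detected change.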

Average Error vs. Movement (each point = one subject)

A simple process model
1. Make the new prediction some fraction α of the way between the most recent outcome and the old prediction (α = change proportion)
2. The fraction α is a linear function of the error made on the last trial
3. Two free parameters, A and B:
–A < B: bigger jumps with higher error
–A = B: constant smoothing
(Figure: α as a function of error, rising linearly from A to B between 0 and 1.)
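The three steps above amount to a delta rule with an error-dependent learning rate. A minimal one-dimensional sketch (the error scale that maps error onto the A-to-B range is an assumption, as is initializing the prediction at the first outcome):

```python
def run_process_model(outcomes, A, B, error_scale=10.0):
    """Prediction moves a fraction alpha toward the latest outcome;
    alpha rises linearly from A (zero error) to B (error >= error_scale)."""
    pred = outcomes[0]          # initialize at the first outcome (assumption)
    preds = []
    for y in outcomes:
        error = abs(y - pred)                    # error on the current trial
        frac = min(error / error_scale, 1.0)
        alpha = A + (B - A) * frac               # linear in error, within [A, B]
        pred += alpha * (y - pred)               # partial update toward outcome
        preds.append(pred)                       # prediction for the next trial
    return preds
```

With A = B the model reduces to constant exponential smoothing; with A < B a large error triggers a near-complete jump, mimicking fast change detection.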

Average Error vs. Movement (points = subjects; model fit shown for comparison)

One-dimensional Prediction Task
Where will the next blue square arrive on the right side? (Display shows the possible locations.)

Average Error vs. Movement (points = subjects; model fit shown for comparison)

New Experiments
Prediction judgments might not be the best measure for assessing psychological change detection
New experiments use an inference judgment: what is the current state of the system?

Inference Task (aka filtering)
What is the cause of y_{t+1}?
(Diagram: time steps 1, 2, …, t, t+1.)

Tomato Cans Experiment
Cans roll out of pipes A, B, C, or D
A machine perturbs the position of each can (normal noise)
On every trial, with probability 0.1, the pipe changes to a new one (uniformly chosen)
A curtain obscures the sequence of pipes
(The real experiment has response buttons and is subject-paced.)
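The cans generative process is easy to simulate. The pipe positions and the noise standard deviation below are illustrative assumptions — the slides specify only normal noise and a 0.1 change probability:

```python
import random

PIPE_POSITIONS = {"A": 1.0, "B": 2.0, "C": 3.0, "D": 4.0}  # assumed spacing

def generate_cans(n_trials=50, change_prob=0.1, noise_sd=0.5, seed=0):
    """A hidden pipe emits cans; each can's position is the pipe position
    plus normal noise; the pipe is redrawn uniformly with probability 0.1."""
    rng = random.Random(seed)
    pipe = rng.choice("ABCD")
    trials = []
    for _ in range(n_trials):
        if rng.random() < change_prob:
            pipe = rng.choice("ABCD")        # uniformly chosen new pipe
        position = rng.gauss(PIPE_POSITIONS[pipe], noise_sd)
        trials.append((pipe, position))
    return trials
```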

Tasks
Inference: which pipe (A, B, C, or D) produced the last can?
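For this inference question, an ideal observer is a four-state Bayes filter over pipes. A minimal sketch under assumed pipe positions and noise SD (names and numbers are illustrative; "new pipe uniformly chosen" is approximated here by a uniform jump that may revisit the current pipe):

```python
import math

def pipe_posterior(positions, pipe_means, change_prob=0.1, noise_sd=0.5):
    """Posterior probability that each pipe produced the LAST can, given the
    observed can positions (exact discrete-state Bayes filter)."""
    k = len(pipe_means)
    post = [1.0 / k] * k
    for y in positions:
        # transition: stay with prob 1 - h, otherwise jump to a uniform pipe
        prior = [(1 - change_prob) * p + change_prob / k for p in post]
        # Gaussian likelihood of the can position under each pipe
        like = [math.exp(-0.5 * ((y - m) / noise_sd) ** 2) for m in pipe_means]
        post = [pr * li for pr, li in zip(prior, like)]
        z = sum(post)
        post = [p / z for p in post]
    return post

# Example: three cans landing near the fourth pipe
post = pipe_posterior([3.9, 4.1, 4.0], pipe_means=[1.0, 2.0, 3.0, 4.0])
```

The same recursion, run trial by trial, is what the ideal-observer curves in the later figures are based on.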

Cans Experiment
136 subjects
16 blocks of 50 trials
Change probability varied across blocks:
–0.08
–0.16
–0.32
Question: are subjects sensitive to the number of changes?

(Figure panels: change probability = .08, .16, .32; ideal observer marked.)

Plinko / Quincunx Experiment: physical version and web version

Conclusion
Adaptation in non-stationary decision environments
Individual differences:
–Over-reaction: perceiving too much change
–Under-reaction: perceiving too little change

Do the experiments yourself:

Number of Perceived Changes per Subject
(Figure: Subject #1; low, medium, and high change probability; red line shows the ideal number of changes.)

Number of Perceived Changes per Subject
55% of subjects show an increasing pattern; 45% show a non-increasing pattern
(Figures: low, medium, and high change probability; red line shows the ideal number of changes.)

Tasks
Inference: which pipe (A, B, C, or D) produced the last can?
Prediction: in which region (1, 2, 3, or 4) will the next can arrive?

Cans Experiment 2
63 subjects
12 blocks:
–6 blocks of 50 trials for the inference task
–6 blocks of 50 trials for the prediction task
–Identical trials for inference and prediction

Inference and Prediction Results
(Figure, built up over several slides: inference and prediction panels with pipes A–D plotted by trial — first the example sequence, then the ideal observer's responses, then individual subjects' responses.)

(Figure: inference vs. prediction performance; each point = one subject; ideal observer marked.)

(Figure: inference vs. prediction performance; each point = one subject; ideal observer and process-model fit marked.)