Recording a Game of Go: Hidden Markov Model Improves Weak Classifier
Steven Scher

Input: Video Sequence
Figure: video of the board (straightened)

Weak Classifier
- Simple: acts independently at each possible stone location
- Looks for circles (Hough transform)
- Looks for extremely bright or dark patches (sketch below)
- Very noisy
Pipeline figure: video (straightened) -> weak frame-by-frame stone classifier
Legend: black stone, white stone, wait until next slide
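As a rough illustration of the kind of per-intersection test described above, a minimal brightness-based classifier might look like the following. The function names, patch radius, and thresholds are assumptions for illustration, not the author's code; the original also used a Hough circle detector, omitted here.

```python
import numpy as np

def classify_intersection(gray_frame, cx, cy, radius=10,
                          dark_thresh=60, bright_thresh=200):
    """Classify one board intersection as 'black', 'white', or 'empty'
    from the mean intensity of a small patch around it (illustrative thresholds)."""
    patch = gray_frame[cy - radius:cy + radius, cx - radius:cx + radius]
    mean = patch.mean()
    if mean < dark_thresh:
        return "black"
    if mean > bright_thresh:
        return "white"
    return "empty"

def classify_board(gray_frame, intersections):
    """Apply the weak classifier independently at each (x, y) intersection."""
    return [classify_intersection(gray_frame, x, y) for (x, y) in intersections]
```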

Improve the Weak Classifier: Ignore Motion
- Ignore spurious detections caused by hands (sketch below)
- May delete good detections
- Overall, seems to improve the classifier
Pipeline figure: video (straightened) -> motion classifier -> weak frame-by-frame stone classifier
Legend: black stone, white stone, ignored (motion)
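A minimal sketch of the motion-masking idea, assuming simple frame differencing; the threshold, patch radius, and helper names are illustrative assumptions, not from the original.

```python
import numpy as np

def apply_motion_mask(prev_gray, curr_gray, labels, intersections,
                      radius=10, diff_thresh=25):
    """Replace a detection with 'unknown' wherever the frame differs strongly
    from the previous one (a hand is probably covering that part of the board)."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    masked = []
    for label, (x, y) in zip(labels, intersections):
        patch = diff[y - radius:y + radius, x - radius:x + radius]
        masked.append("unknown" if patch.mean() > diff_thresh else label)
    return masked
```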

Estimating the Sequence of Moves: A Simple Algorithm (sketch below)
- Discard movie frames where too many (more than S) new stones appear since the last valid frame
- Discard movie frames with an inappropriate mix of classified black and white stones (counts should be equal, +/- 1)
Results
- Relies on occasional clean, noiseless frames
- For S > 5, many artifacts are introduced
- For S < 5, the algorithm cannot handle a long sequence (if there is no valid frame during the time that S stones are actually played)
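A sketch of this simple baseline, assuming each frame's board labeling is stored as a dictionary from location to label; the data layout, the default for S, and the function name are assumptions for illustration.

```python
def filter_frames(board_labels_per_frame, S=5):
    """Keep only 'valid' frames: not too many new stones, plausible color mix."""
    valid_frames = []
    last_valid = {}  # location -> 'black' / 'white' from the last valid frame
    for labels in board_labels_per_frame:
        stones = {loc: c for loc, c in labels.items() if c in ("black", "white")}
        new_stones = {loc: c for loc, c in stones.items() if loc not in last_valid}
        n_black = sum(1 for c in stones.values() if c == "black")
        n_white = sum(1 for c in stones.values() if c == "white")
        # Discard frames with too many new stones or an implausible color mix.
        if len(new_stones) > S or abs(n_black - n_white) > 1:
            continue
        valid_frames.append(stones)
        last_valid = stones
    return valid_frames
```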

A Better Model: Hidden Markov Model
- Consider each possible stone location independently
- Consider each transition to depend only on the current state
States: empty, black stone, white stone

HMM Over Time Is a Tree
- Hidden state = {empty, black, white}
- Observed state = {empty, black, white, unknown/motion}
Figure: hidden and observed states unrolled over time (t-1, t, t+1)

Transition Probabilities (matrix sketch below)
Figure: state diagram over {empty, black stone, white stone}
- empty -> black: P_new
- empty -> white: P_new
- empty -> empty: P_stay = 1 - 2*P_new
- black -> black: 1
- white -> white: 1
Estimate:
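The transition structure above translates directly into a 3x3 matrix. The numeric value of P_new below is an illustrative assumption; the slide's estimate was not recovered.

```python
import numpy as np

# Hidden-state encoding used throughout these sketches.
EMPTY, BLACK, WHITE = 0, 1, 2

p_new = 1e-3  # assumed value: probability a stone appears at an empty point per frame

# Rows are the current state, columns the next state.
A = np.array([
    [1 - 2 * p_new, p_new, p_new],   # empty -> empty / black / white
    [0.0,           1.0,   0.0],     # a black stone never changes
    [0.0,           0.0,   1.0],     # a white stone never changes
])
```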

Probabilities of Observations If There Is a Stone
Figure: table of P(observation | hidden = stone) over observations {empty, black stone, white stone, hand moving (no information)}
(Black stones' probabilities are similar to white stones')
Estimate:

Probabilities of Observations If There Is No Stone (matrix sketch below)
Figure: table of P(observation | hidden = empty) over observations {empty, black stone, white stone, hand moving (no information)}; only the values .7 and .1 survive from the slide
Estimate:
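Since the slides' numeric tables did not survive, here is a placeholder emission matrix that only preserves the qualitative shape described: the weak classifier usually reports the true state, occasionally errs, and a moving hand carries no information. All values are assumptions, not the author's estimates.

```python
import numpy as np

# Observation encoding: what the weak classifier reports at one intersection.
OBS_EMPTY, OBS_BLACK, OBS_WHITE, OBS_UNKNOWN = 0, 1, 2, 3

# B[hidden, observed] = P(observation | hidden state); placeholder values.
B = np.array([
    # obs:  empty  black  white  unknown/motion
    [0.70, 0.10, 0.10, 0.10],   # hidden = empty
    [0.10, 0.70, 0.10, 0.10],   # hidden = black stone
    [0.10, 0.10, 0.70, 0.10],   # hidden = white stone
])
```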

Data-Derived Estimate from the Baum-Welch Algorithm
Figure: emission-probability table re-estimated from data, over observations {empty, black stone, white stone, hand moving (no information)}; only the values .7, .1, .5, .3 survive from the slide
Baum-Welch:

Baum-Welch & Viterbi Algorithms
Baum-Welch algorithm
- Expectation-Maximization
Viterbi algorithm (sketch below)
- Recursive (dynamic programming)
- Quick: linear in the length of the sequence, quadratic in the number of hidden states
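A minimal log-domain Viterbi decoder for one board location, consistent with the complexity stated above (linear in the number of frames, quadratic in the number of hidden states). This is a generic sketch using the A and B matrices from the earlier snippets, not the author's implementation.

```python
import numpy as np

def viterbi(observations, A, B, pi):
    """Most likely hidden-state path for one board location.
    observations: sequence of observation indices.
    A: (K, K) transition matrix, B: (K, M) emission matrix,
    pi: (K,) initial state distribution."""
    K = A.shape[0]
    T = len(observations)
    log_A = np.log(A + 1e-12)
    log_B = np.log(B + 1e-12)
    delta = np.zeros((T, K))            # best log-probability ending in each state
    backptr = np.zeros((T, K), dtype=int)
    delta[0] = np.log(pi + 1e-12) + log_B[:, observations[0]]
    for t in range(1, T):
        # scores[i, j] = best path ending in state i at t-1, then moving to j.
        scores = delta[t - 1][:, None] + log_A
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, observations[t]]
    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = backptr[t + 1, path[t + 1]]
    return path
```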

Likely Sequence of Hidden States: Viterbi Algorithm
Pipeline figure: video (straightened) -> motion classifier -> weak frame-by-frame stone classifier -> likely sequence of hidden states

The algorithm provides a reasonable solution, but it is not robust enough to be fully autonomous.
Pipeline figure: video (straightened) -> motion classifier -> weak frame-by-frame stone classifier -> likely sequence of hidden states, with mistakes marked

Minimizing User Interaction
- Assign the first frame to be empty; add, e.g., 10 copies of it to the beginning of the sequence (sketch below)
- Correct errors in the last frame by hand
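A tiny sketch of the first-frame trick, assuming observations are encoded as integers with 0 meaning "empty"; the helper name and encoding are assumptions. Prepending the empty copies anchors the decoder to an empty board at the start of the game.

```python
def prepend_empty_observations(observations, obs_empty=0, n_copies=10):
    """Prepend n_copies of an all-empty observation so decoding starts
    from an empty board (n_copies=10 matches the slide's example)."""
    return [obs_empty] * n_copies + list(observations)
```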

User Interface
- Not fully automated
- Useful tool, given a good user interface

Future Work
- Improve the graphical model
  - Allow only one stone to appear per image frame? Per t seconds of real time?
- Improve the stone/empty classifier
  - More robustness against lighting changes is desired
  - Train on labeled examples (Viola & Jones?)
- Improve the motion detector
  - Upgrade to a hand-tracking algorithm; this could also be done with a graphical model
  - Incorporate into one model?
- GUI
  - Allow a variety of user labeling; update as new labels are made?

Questions?
- The weak classifier is noisy
- Estimating the sequence of states from the observations removes much of the noise