CS 188: Artificial Intelligence Fall 2007 Lecture 22: Viterbi 11/13/2007 Dan Klein – UC Berkeley

Announcements  Review session tonight:  ???

Hidden Markov Models  An HMM is defined by:  Initial distribution: P(X_1)  Transitions: P(X_t | X_{t-1})  Emissions: P(E_t | X_t)  (Figure: a chain of hidden states X_1 … X_5, each with an observed emission E_1 … E_5)
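For reference (the slide's formulas did not survive the transcript), these three ingredients define the standard HMM joint distribution:

P(X_{1:T}, E_{1:T}) = P(X_1) \prod_{t=2}^{T} P(X_t \mid X_{t-1}) \prod_{t=1}^{T} P(E_t \mid X_t)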

Most Likely Explanation  Remember: the weather Markov chain  Tracking: the distribution over the current state  Viterbi: the single most likely state sequence  (Figure: sun/rain trellis over four days)

Most Likely Explanation  Question: what is the most likely sequence ending in state x at time t?  E.g. if it is sun on day 4, what’s the most likely sequence?  Intuitively: probably sun all four days  Slow answer: enumerate and score all sequences …

Mini-Viterbi Algorithm  Better answer: cached incremental updates  Define m_t(x), the score of the best sequence ending in state x at time t, and a_t(x), the state that sequence came from  Read the best sequence off of the m and a vectors  (Figure: sun/rain trellis)
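The slide's equations are missing from the transcript; the standard mini-Viterbi quantities for the plain weather chain (no evidence yet) are:

m_t(x) = \max_{x_{1:t-1}} P(x_{1:t-1}, x), \qquad m_1(x) = P(X_1 = x)
m_t(x) = \max_{x_{t-1}} P(x \mid x_{t-1}) \, m_{t-1}(x_{t-1}), \qquad a_t(x) = \arg\max_{x_{t-1}} P(x \mid x_{t-1}) \, m_{t-1}(x_{t-1})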

Mini-Viterbi  (Figure: worked example on the four-day sun/rain trellis)

Viterbi Algorithm  Question: what is the most likely state sequence x_{1:t} given the observations e_{1:t}?  Slow answer: enumerate all possibilities  Better answer: incremental updates, as in mini-Viterbi but with the emissions folded in
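A minimal runnable sketch of this incremental update in Python, with emission probabilities included; the dictionary-based tables and variable names are illustrative, not from the lecture:

def viterbi(states, init, trans, emit, observations):
    """Most likely state sequence for a discrete HMM.
    init[x] = P(X_1 = x); trans[xp][x] = P(X_t = x | X_{t-1} = xp);
    emit[x][e] = P(E_t = e | X_t = x)."""
    # m[x]: score of the best sequence ending in x; back[t][x]: its predecessor
    m = {x: init[x] * emit[x][observations[0]] for x in states}
    back = []
    for e in observations[1:]:
        prev_m, m, pointers = m, {}, {}
        for x in states:
            best_prev = max(states, key=lambda xp: prev_m[xp] * trans[xp][x])
            m[x] = prev_m[best_prev] * trans[best_prev][x] * emit[x][e]
            pointers[x] = best_prev
        back.append(pointers)
    # Read the best sequence off by following the back-pointers from the best final state
    x = max(states, key=lambda s: m[s])
    seq = [x]
    for pointers in reversed(back):
        x = pointers[x]
        seq.append(x)
    return list(reversed(seq))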

Example

Digitizing Speech

Speech in an Hour  Speech input is an acoustic waveform  (Figure: waveform of “speech lab”, segmented into s / p / ee / ch / l / a / b, with a close-up of the “l” to “a” transition)  Graphs from Simon Arnfield’s web tutorial on speech, Sheffield

Spectral Analysis  Frequency gives pitch; amplitude gives volume  Sampling at ~8 kHz for phone, ~16 kHz for mic (kHz = 1000 cycles/sec)  The Fourier transform of the wave is displayed as a spectrogram  Darkness indicates energy at each frequency  (Figure: waveform and spectrogram of “speech lab”, frequency vs. time)

Adding 100 Hz + 1000 Hz Waves

Spectrum  Frequency components (100 and 1000 Hz) on the x-axis  (Figure: amplitude vs. frequency in Hz)

Part of [ae] from “lab”  Note the complex wave repeating nine times in the figure  Plus smaller waves which repeat 4 times for every large pattern  The large wave has a frequency of 250 Hz (9 repetitions in .036 seconds)  The small wave is roughly 4 times this, or roughly 1000 Hz  Two little tiny waves sit on top of the peaks of the 1000 Hz waves

Back to Spectra  The spectrum represents these frequency components  Computed by the Fourier transform, an algorithm that separates out each frequency component of a wave  The x-axis shows frequency, the y-axis shows magnitude (in decibels, a log measure of amplitude)  Peaks at 930 Hz, 1860 Hz, and 3020 Hz
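As a concrete aside, a spectrum like this can be computed with a standard FFT; a minimal Python/numpy sketch on a synthetic stand-in signal (the 250 Hz + 1000 Hz mix is an assumption chosen to mimic the [ae] example, not real speech data):

import numpy as np

sr = 16000                                   # sampling rate in Hz
t = np.arange(0, 0.036, 1.0 / sr)            # 36 ms of signal, as in the example
# Synthetic stand-in for the vowel: a 250 Hz fundamental plus a weaker 1000 Hz component
signal = 1.0 * np.sin(2 * np.pi * 250 * t) + 0.4 * np.sin(2 * np.pi * 1000 * t)

spectrum = np.fft.rfft(signal)                           # Fourier transform of a real signal
freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)           # frequency of each bin (x-axis)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)   # magnitude in decibels (y-axis)

peak_hz = freqs[np.argmax(magnitude_db)]                 # lands near 250 Hz for this signal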

Resonances of the vocal tract  The human vocal tract modeled as an open tube  Air in a tube of a given length will tend to vibrate at the resonance frequencies of the tube  Constraint: the pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end  (Figure: tube closed at one end and open at the other, length 17.5 cm; from W. Barry’s Speech Science slides)
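A quick worked check using the standard quarter-wavelength formula for a tube closed at one end (the speed of sound c ≈ 350 m/s is an assumed round number):

f_n = \frac{(2n - 1)\, c}{4L}, \quad L = 0.175\ \text{m} \implies f_1 \approx 500\ \text{Hz}, \; f_2 \approx 1500\ \text{Hz}, \; f_3 \approx 2500\ \text{Hz}

These are roughly the formant frequencies of a neutral vowel, which is why the 17.5 cm tube is a useful model of the vocal tract.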

From Mark Liberman’s website

Acoustic Feature Sequence  Time slices are translated into acoustic feature vectors (~39 real numbers per slice)  These are the observations; now we need the hidden states X  (Figure: spectrogram with a sequence of observation vectors e_12, e_13, e_14, e_15, e_16, …)
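For concreteness, one common recipe for ~39-dimensional vectors is 13 mel-frequency cepstral coefficients (MFCCs) plus their first and second differences; a sketch using the librosa library (not part of the lecture; the filename is a placeholder):

import numpy as np
import librosa

y, sr = librosa.load("speech_lab.wav", sr=16000)     # placeholder audio file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # 13 cepstral coefficients per time slice
delta = librosa.feature.delta(mfcc)                  # first differences
delta2 = librosa.feature.delta(mfcc, order=2)        # second differences
features = np.vstack([mfcc, delta, delta2]).T        # one ~39-dimensional vector per slice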

State Space  P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)  P(X|X’) encodes how sounds can be strung together  We will have one state for each sound in each word  From some state x, can only:  Stay in the same state (e.g. speaking slowly)  Move to the next position in the word  At the end of the word, move to the start of the next word  We build a little state graph for each word and chain them together to form our state space X
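A minimal sketch of what one word's little state graph might look like in code; the self-loop-or-advance structure follows the slide, but the probabilities and names are illustrative assumptions:

def word_state_graph(word, phones, self_loop=0.5):
    """Build the transition table for one word's chain of phone states.
    Each state either stays put (speaking slowly) or advances to the next position;
    the last phone hands off to the start of the next word (marked here as 'WORD_END')."""
    states = [(word, i, phone) for i, phone in enumerate(phones)]
    trans = {}
    for i, s in enumerate(states):
        nxt = states[i + 1] if i + 1 < len(states) else "WORD_END"
        trans[s] = {s: self_loop, nxt: 1.0 - self_loop}
    return states, trans

# Example: the word "lab" as three phone states l -> ae -> b
states, trans = word_state_graph("lab", ["l", "ae", "b"])

Chaining these per-word graphs together (replacing 'WORD_END' with the first state of each possible next word) gives the full state space X.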

HMMs for Speech

Markov Process with Bigrams  Figure from Huang et al., page 618

Decoding  While there are some practical issues, finding the words given the acoustics is an HMM inference problem  We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}:  From the sequence x, we can simply read off the words
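The formula the transcript drops is the standard MAP sequence query; in this notation:

x^*_{1:T} = \arg\max_{x_{1:T}} P(x_{1:T} \mid e_{1:T}) = \arg\max_{x_{1:T}} P(x_1) P(e_1 \mid x_1) \prod_{t=2}^{T} P(x_t \mid x_{t-1})\, P(e_t \mid x_t)

which is exactly what the Viterbi algorithm computes.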

POMDPs  Up until now:  MDPs: decision making when the world is fully observable (even if the actions are non-deterministic  Probabilistic reasoning: computing beliefs in a static world  What about acting under uncertainty?  In general, the formalization of the problem is the partially observable Markov decision process (POMDP)  A simple case: value of information

POMDPs  MDPs have:  States S  Actions A  Transition fn P(s’|s,a) (or T(s,a,s’))  Rewards R(s,a,s’)  POMDPs add:  Observations O  Observation function P(o|s,a) (or O(s,a,o))  POMDPs are MDPs over belief states b (distributions over S) a s s, a s,a,s’ s’ a b b, a o b’

Example: Battleship  In (static) battleship:  Belief state determined by evidence to date {e}  Tree really over evidence sets  Probabilistic reasoning needed to predict new evidence given past evidence  Solving POMDPs  One way: use truncated expectimax to compute approximate value of actions  What if you only considered bombing or one sense followed by one bomb?  You get the VPI agent from project 4!  (Figures: expectimax trees over evidence sets {e}, with bomb and sense actions and terminal utilities U(a_bomb, {e}) and U(a_bomb, {e, e’}))

More Generally  General solutions map belief functions to actions  Can divide regions of belief space (set of belief functions) into policy regions (gets complex quickly)  Can build approximate policies using discretization methods  Can factor belief functions in various ways  Overall, POMDPs are very (actually PSPACE-) hard  We’ll talk more about POMDPs at the end of the course!

Machine Learning  Up till now: how to reason or make decisions using a model  Machine learning: how to select a model on the basis of data / experience  Learning parameters (e.g. probabilities)  Learning structure (e.g. BN graphs)  Learning hidden concepts (e.g. clustering)

Classification  In classification, we learn to predict labels (classes) for inputs  Examples:  Spam detection (input: document, classes: spam / ham)  OCR (input: images, classes: characters)  Medical diagnosis (input: symptoms, classes: diseases)  Automatic essay grader (input: document, classes: grades)  Fraud detection (input: account activity, classes: fraud / no fraud)  Customer service routing  … many more  Classification is an important commercial technology!

Classification  Data:  Inputs x, class labels y  We imagine that x is something that has a lot of structure, like an image or document  In the basic case, y is a simple N-way choice  Basic Setup:  Training data: D = collection of pairs  Feature extractors: functions f i which provide attributes of an example x  Test data: more x’s, we must predict y’s  During development, we actually know the y’s, so we can check how well we’re doing, but when we deploy the system, we don’t

Bayes Nets for Classification  One method of classification:  Features are values for observed variables  Y is a query variable  Use probabilistic inference to compute most likely Y  You already know how to do this inference

Simple Classification  Simple example: two binary features S and F with class M  This is a naïve Bayes model  (Figure: network with M as the parent of S and F; equations compare the direct estimate of P(M|S,F), the Bayes estimate with no assumptions, and the version using conditional independence)
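The equations themselves did not survive the transcript; the standard versions they refer to, for class M and features S, F, are:

Direct estimate: \hat{P}(m \mid s, f) estimated from counts of (s, f, m)
Bayes estimate (no assumptions): P(m \mid s, f) = \frac{P(s, f \mid m)\, P(m)}{P(s, f)}
Conditional independence: P(s, f \mid m) = P(s \mid m)\, P(f \mid m), so P(m \mid s, f) \propto P(m)\, P(s \mid m)\, P(f \mid m)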

General Naïve Bayes  A general naive Bayes model:  We only specify how each feature depends on the class  Total number of parameters is linear in n  (Figure: class node C with children E_1, E_2, …, E_n)  Parameter counts: |C| for the prior P(C), and n × |E| × |C| for the conditionals P(E_i|C), versus |C| × |E|^n for an unstructured joint
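The factorization these parameter counts refer to is:

P(C, E_1, \ldots, E_n) = P(C) \prod_{i=1}^{n} P(E_i \mid C)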

Inference for Naïve Bayes  Goal: compute posterior over causes  Step 1: get joint probability of causes and evidence  Step 2: get probability of evidence  Step 3: renormalize
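A minimal Python sketch of these three steps (the table layout and names are illustrative):

def naive_bayes_posterior(prior, cond, evidence):
    """Posterior over classes given the evidence.
    prior[c] = P(C = c); cond[i][c][e] = P(E_i = e | C = c); evidence = [e_1, ..., e_n]."""
    # Step 1: joint probability of each class with the evidence, P(c, e) = P(c) * prod_i P(e_i | c)
    joint = {}
    for c in prior:
        p = prior[c]
        for i, e in enumerate(evidence):
            p *= cond[i][c][e]
        joint[c] = p
    # Step 2: probability of the evidence, P(e) = sum_c P(c, e)
    p_e = sum(joint.values())
    # Step 3: renormalize to get P(C | e)
    return {c: p / p_e for c, p in joint.items()}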

General Naïve Bayes  What do we need in order to use naïve Bayes?  Some code to do the inference (you know this part)  For fixed evidence, build P(C,e)  Sum out C to get P(e)  Divide to get P(C|e)  Estimates of local conditional probability tables  P(C), the prior over causes  P(E|C) for each evidence variable  These probabilities are collectively called the parameters of the model and denoted by θ  These typically come from observed data: we’ll look at this now