
1 Learning, Uncertainty, and Information Big Ideas November 8, 2004

2 Roadmap
- Turing, Intelligence, and Learning
- Noisy-channel model
  – Uncertainty, Bayes' Rule, and Applications
- Hidden Markov Models
  – The Model
  – Decoding the best sequence
  – Training the model (EM)
- N-gram models: Modeling sequences
  – Shannon, Information Theory, and Perplexity
- Conclusion

3 Turing & Intelligence
- Turing (1950): "Computing Machinery and Intelligence"
  – The "Imitation Game" (aka the Turing test): a functional definition of intelligence as behavior indistinguishable from a human's
- Key question raised: learning
  – Can a system be intelligent if it knows only its program?
  – Learning is necessary for intelligence:
    1) programmed knowledge
    2) a learning mechanism
- Knowledge, reasoning, learning, communication

4 Noisy-Channel Model
- The original message is not directly observable
  – It passes through some channel between sender and receiver, plus noise
  – Examples: telephone signals (Shannon), word sequences vs. acoustics (Jelinek), genome sequences vs. observed bases (A, C, G, T), objects vs. images
- Goal: derive the most likely original input from what was observed

5 Bayesian Inference
- P(W|O) is difficult to compute directly (W = input, O = observations)
- Bayes' rule: P(W|O) = P(O|W) P(W) / P(O)
  – P(O) is constant for a given observation, so choose W* = argmax_W P(O|W) P(W)
  – This decomposes the problem into a generative (channel) model P(O|W) and a sequence (prior) model P(W)
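
A minimal sketch of noisy-channel decoding via Bayes' rule in Python; the candidate words and their probabilities are made-up numbers for illustration:

  # Hypothetical prior P(W) and channel likelihood P(O|W) for two candidates.
  prior = {"cat": 0.6, "cap": 0.4}
  likelihood = {"cat": 0.2, "cap": 0.5}

  # Bayes' rule: P(W|O) is proportional to P(O|W) * P(W); P(O) cancels in the argmax.
  best = max(prior, key=lambda w: likelihood[w] * prior[w])
  print(best)  # "cap": 0.5 * 0.4 = 0.20 beats 0.2 * 0.6 = 0.12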

6 Applications
- AI: speech recognition!, POS tagging, sense tagging, dialogue, image understanding, information retrieval
- Non-AI:
  – Bioinformatics: gene sequencing
  – Security: intrusion detection
  – Cryptography

7 Hidden Markov Models

8 Probabilistic Reasoning over Time
- Issue: discrete models
  – Many processes change continuously: how do we make observations? define states?
- Solution: discretize
  – "Time slices" make time discrete
  – Observations and states are indexed by time: O_t, Q_t
- Observations can be discrete or continuous
  – Here we focus on discrete observations for clarity

9 Modelling Processes over Time
- Goal: infer the underlying state sequence from the observed sequence
- Issue: each new state depends on the preceding states (we are analyzing sequences)
- Problem 1: a possibly unbounded number of probability tables (observation x state x time)
  – Solution: assume a stationary process: the rules governing the process are the same at all times
- Problem 2: a possibly unbounded number of parents
  – Solution: the Markov assumption: condition only on a finite history, commonly the last 1 or 2 states

10 Hidden Markov Models (HMMs)
An HMM consists of:
- 1) A set of states Q = q_1, ..., q_N
- 2) A set of transition probabilities A, where a_ij is the probability of transition q_i -> q_j
- 3) Observation probabilities B, where b_i(o_t) is the probability of observing o_t in state i
- 4) An initial probability distribution π over states, where π_i is the probability of starting in state i
- 5) A set of accepting (final) states
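
A minimal container for these five components (a sketch; the class and field names are ours, not from the slides):

  class HMM:
      def __init__(self, states, A, B, pi):
          self.states = states  # 1) state names
          self.A = A            # 2) A[i][j] = P(next state j | current state i)
          self.B = B            # 3) B[i][o] = P(observing o in state i)
          self.pi = pi          # 4) pi[i] = P(starting in state i)
          # 5) accepting states are left implicit here: any state may end a sequence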

11 Three Problems for HMMs
- Find the probability of an observation sequence given a model
  – Forward algorithm
- Find the most likely path through a model given an observed sequence
  – Viterbi algorithm (decoding)
- Find the most likely model parameters given an observed sequence
  – Baum-Welch (EM) algorithm

12 Bins and Balls Example
Assume there are two bins filled with red and blue balls. Behind a curtain, someone selects a bin and then draws a ball from it (and replaces it). They then select either the same bin or the other one and draw another ball, and so on. (Example due to J. Martin.)

13 Bins and Balls Example
[State diagram: Bin 1 -> Bin 1 with probability 0.6, Bin 1 -> Bin 2 with 0.4; Bin 2 -> Bin 2 with 0.7, Bin 2 -> Bin 1 with 0.3]

14 Bins and Balls
Initial probabilities (π): Bin 1: 0.9; Bin 2: 0.1

Transition probabilities (A):   to Bin 1   to Bin 2
  from Bin 1                     0.6        0.4
  from Bin 2                     0.3        0.7

Observation probabilities (B):   Bin 1     Bin 2
  Red                             0.7       0.4
  Blue                            0.3       0.6
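
The same parameters, plugged into the HMM container sketched above:

  hmm = HMM(
      states=["Bin1", "Bin2"],
      A={"Bin1": {"Bin1": 0.6, "Bin2": 0.4},
         "Bin2": {"Bin1": 0.3, "Bin2": 0.7}},
      B={"Bin1": {"Red": 0.7, "Blue": 0.3},
         "Bin2": {"Red": 0.4, "Blue": 0.6}},
      pi={"Bin1": 0.9, "Bin2": 0.1},
  )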

15 Bins and Balls
Assume the observation sequence Blue Blue Red (BBR).
- Both bins contain red and blue balls, so any state sequence could produce these observations
- However, the sequences are NOT equally likely:
  – Big difference in start probabilities
  – Each observation depends on the state
  – Each state depends on the prior state

16 Bins and Balls
Observations: Blue Blue Red. Each factor pairs a start or transition probability with an observation probability:
  1 1 1: (0.9*0.3) * (0.6*0.3) * (0.6*0.7) = 0.0204
  1 1 2: (0.9*0.3) * (0.6*0.3) * (0.4*0.4) = 0.0078
  1 2 1: (0.9*0.3) * (0.4*0.6) * (0.3*0.7) = 0.0136
  1 2 2: (0.9*0.3) * (0.4*0.6) * (0.7*0.4) = 0.0181
  2 1 1: (0.1*0.6) * (0.3*0.3) * (0.6*0.7) = 0.0023
  2 1 2: (0.1*0.6) * (0.3*0.3) * (0.4*0.4) = 0.0009
  2 2 1: (0.1*0.6) * (0.7*0.6) * (0.3*0.7) = 0.0053
  2 2 2: (0.1*0.6) * (0.7*0.6) * (0.7*0.4) = 0.0071
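
A brute-force check of this table, enumerating all N^T = 2^3 = 8 state sequences with the hmm object defined earlier (fine for 3 observations, exponential in general):

  from itertools import product

  obs = ["Blue", "Blue", "Red"]
  total = 0.0
  for path in product(hmm.states, repeat=len(obs)):
      p = hmm.pi[path[0]] * hmm.B[path[0]][obs[0]]  # start, then transitions
      for t in range(1, len(obs)):
          p *= hmm.A[path[t-1]][path[t]] * hmm.B[path[t]][obs[t]]
      print(path, round(p, 4))
      total += p
  print("P(Blue Blue Red) =", round(total, 4))  # ~0.0754

Summing the rows answers Problem 1; the largest row (1 1 1) answers Problem 2.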

17 Answers and Issues
- Here, to compute the probability of the observed sequence: just add up all the state-sequence probabilities
- To find the most likely state sequence: just pick the sequence with the highest value
- Problem: computing all paths is expensive: about 2T * N^T operations
- Solution: dynamic programming
  – Sweep across all states at each time step, summing (Problem 1) or maximizing (Problem 2)

18 Forward Probability
α_1(j) = π_j * b_j(o_1)
α_t(j) = [ Σ_{i=1..N} α_{t-1}(i) * a_ij ] * b_j(o_t)
P(O) = Σ_{j=1..N} α_T(j)
where α is the forward probability, t is the time in the utterance, i and j are states in the HMM, a_ij is the transition probability, b_j(o_t) is the probability of observing o_t in state j, N is the number of states, and T is the final time step.
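
Worked on the bins-and-balls numbers for Blue Blue Red:

  α_1(Bin1) = 0.9 * 0.3 = 0.27;  α_1(Bin2) = 0.1 * 0.6 = 0.06
  α_2(Bin1) = (0.27 * 0.6 + 0.06 * 0.3) * 0.3 = 0.054
  α_2(Bin2) = (0.27 * 0.4 + 0.06 * 0.7) * 0.6 = 0.09
  α_3(Bin1) = (0.054 * 0.6 + 0.09 * 0.3) * 0.7 = 0.0416
  α_3(Bin2) = (0.054 * 0.4 + 0.09 * 0.7) * 0.4 = 0.0338
  P(O) = 0.0416 + 0.0338 = 0.0754, matching the brute-force sum above.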

19 Pronunciation Example
[Figure: pronunciation network; observations are 0/1]

20 Sequence Pronunciation Model

21 Acoustic Model
- 3-state phone model for [m] (Onset, Mid, End, plus a Final exit state), using a Hidden Markov Model (HMM)
- Probability of a sequence: sum of the probabilities of its paths
Transition probabilities: Onset->Onset 0.7, Onset->Mid 0.3; Mid->Mid 0.9, Mid->End 0.1; End->End 0.4, End->Final 0.6
Observation probabilities: Onset: C1 0.5, C2 0.2, C3 0.3; Mid: C3 0.2, C4 0.7, C5 0.1; End: C4 0.1, C6 0.5, C6 0.4

22 Forward Algorithm
- Idea: a matrix where each cell forward[t, j] represents the probability of being in state j after seeing the first t observations:
  forward[t, j] = P(o_1, o_2, ..., o_t, q_t = j | w)
  where q_t = j means "the t-th state in the state sequence is state j" and w is the model
- Compute each cell's probability by summing over the extensions of all paths leading to it
- An extension of a path from state i at time t-1 to state j at time t is computed by multiplying together:
  i. the previous path probability, from the previous cell forward[t-1, i]
  ii. the transition probability a_ij from previous state i to current state j
  iii. the observation likelihood b_j(o_t) that current state j matches observation symbol o_t

23 Forward Algorithm
Function Forward(observations of length T, state-graph) returns observation-probability
  num-states <- num-of-states(state-graph)
  Create a path probability matrix forward[num-states+2, T+2]
  forward[0, 0] <- 1.0
  for each time step t from 0 to T do
    for each state s from 0 to num-states do
      for each transition s' from s in state-graph do
        new-score <- forward[s, t] * a[s, s'] * b_s'(o_t)
        forward[s', t+1] <- forward[s', t+1] + new-score
  return the sum over the final column of forward[]
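
A runnable version of the forward algorithm for the dict-based hmm above (a sketch: it drops the slide's padded matrix and dummy start state, and indexes directly by state name):

  def forward(hmm, obs):
      # alpha[t][j] = P(o_1..o_t, q_t = j | model)
      alpha = [{j: hmm.pi[j] * hmm.B[j][obs[0]] for j in hmm.states}]
      for t in range(1, len(obs)):
          alpha.append({j: sum(alpha[t-1][i] * hmm.A[i][j] for i in hmm.states)
                           * hmm.B[j][obs[t]]
                        for j in hmm.states})
      return sum(alpha[-1][j] for j in hmm.states)  # Problem 1: P(O | model)

  print(forward(hmm, ["Blue", "Blue", "Red"]))  # ~0.0754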

24 Viterbi Code
Function Viterbi(observations of length T, state-graph) returns best-path
  num-states <- num-of-states(state-graph)
  Create a path probability matrix viterbi[num-states+2, T+2]
  viterbi[0, 0] <- 1.0
  for each time step t from 0 to T do
    for each state s from 0 to num-states do
      for each transition s' from s in state-graph do
        new-score <- viterbi[s, t] * a[s, s'] * b_s'(o_t)
        if ((viterbi[s', t+1] == 0) || (viterbi[s', t+1] < new-score)) then
          viterbi[s', t+1] <- new-score
          back-pointer[s', t+1] <- s
  Backtrace from the highest-probability state in the final column of viterbi[] and return the path
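
The corresponding runnable Viterbi sketch, with explicit back-pointers for the backtrace:

  def viterbi(hmm, obs):
      # v[t][j] = probability of the best path ending in state j at time t
      v = [{j: hmm.pi[j] * hmm.B[j][obs[0]] for j in hmm.states}]
      back = [{}]
      for t in range(1, len(obs)):
          v.append({})
          back.append({})
          for j in hmm.states:
              best_i = max(hmm.states, key=lambda i: v[t-1][i] * hmm.A[i][j])
              v[t][j] = v[t-1][best_i] * hmm.A[best_i][j] * hmm.B[j][obs[t]]
              back[t][j] = best_i
      # Backtrace from the highest-probability state in the final column.
      last = max(hmm.states, key=lambda j: v[-1][j])
      path = [last]
      for t in range(len(obs) - 1, 0, -1):
          path.append(back[t][path[-1]])
      return list(reversed(path)), v[-1][last]

  print(viterbi(hmm, ["Blue", "Blue", "Red"]))  # (['Bin1', 'Bin1', 'Bin1'], ~0.0204)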

25 Modeling Sequences, Redux
- Discrete observation values
  – Simple, but inadequate: many observations are highly variable
- Gaussian pdfs over continuous values
  – Assume normally distributed observations
- Typically sum over multiple shared Gaussians
  – "Gaussian mixture models", trained with the HMM model
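
A minimal sketch of a Gaussian-mixture observation probability that could replace the discrete B table; the weights, means, and variances here are made up for illustration:

  import math

  def gaussian_pdf(x, mean, var):
      return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

  def gmm_likelihood(x, weights, means, variances):
      # b_j(o_t) as a weighted sum over shared Gaussian components
      return sum(w * gaussian_pdf(x, m, v)
                 for w, m, v in zip(weights, means, variances))

  # Hypothetical 2-component mixture for one state, evaluated at x = 1.2
  print(gmm_likelihood(1.2, weights=[0.4, 0.6], means=[0.0, 1.5], variances=[1.0, 0.5]))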

26 Learning HMMs
- Issue: where do the probabilities come from?
- Solution: learn them from data
  – Train the transition (a_ij) and emission (b_j) probabilities
  – Typically assume the model structure is given
- Baum-Welch, aka the forward-backward algorithm
  – Iteratively estimate expected counts of transitions and emissions
  – Get estimated probabilities by forward computation, dividing probability mass over contributing paths
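
A minimal sketch of one Baum-Welch re-estimation step for the transition probabilities of the dict-based hmm above (single observation sequence of at least two observations, no convergence loop; a full trainer would also re-estimate B and pi and iterate to convergence):

  def backward(hmm, obs):
      # beta[t][i] = P(o_{t+1}..o_T | q_t = i, model)
      beta = [{i: 1.0 for i in hmm.states} for _ in obs]
      for t in range(len(obs) - 2, -1, -1):
          for i in hmm.states:
              beta[t][i] = sum(hmm.A[i][j] * hmm.B[j][obs[t+1]] * beta[t+1][j]
                               for j in hmm.states)
      return beta

  def reestimate_transitions(hmm, obs):
      # Forward pass, keeping alpha for every time step (as in forward() above).
      alpha = [{j: hmm.pi[j] * hmm.B[j][obs[0]] for j in hmm.states}]
      for t in range(1, len(obs)):
          alpha.append({j: sum(alpha[t-1][i] * hmm.A[i][j] for i in hmm.states)
                           * hmm.B[j][obs[t]]
                        for j in hmm.states})
      beta = backward(hmm, obs)
      p_obs = sum(alpha[-1][j] for j in hmm.states)
      new_A = {}
      for i in hmm.states:
          # Expected count of transitions i -> j, summed over time, then renormalized.
          num = {j: sum(alpha[t][i] * hmm.A[i][j] * hmm.B[j][obs[t+1]] * beta[t+1][j]
                        for t in range(len(obs) - 1)) / p_obs
                 for j in hmm.states}
          total = sum(num.values())
          new_A[i] = {j: num[j] / total for j in hmm.states}
      return new_A

  print(reestimate_transitions(hmm, ["Blue", "Blue", "Red"]))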

