Speech, Perception, & AI
Artificial Intelligence, CMSC 25000
March 5, 2002

Agenda
Speech
– AI & Perception
– Speech applications
Speech Recognition
– Coping with uncertainty
Conclusions
– Exam admin
– Applied AI talk
– Evaluations

AI & Perception
Classic AI
– Disembodied reasoning: think of an action
  Expert systems
  Theorem provers
  Full-knowledge planners
Contemporary AI
– Situated reasoning: thinking & acting
  Spoken language systems
  Image registration
  Behavior-based robots

Speech Applications
Speech-to-speech translation
Dictation systems
Spoken language systems
Adaptive & augmentative technologies
– E.g., talking books
Speaker identification & verification
Ubiquitous computing

Speech Recognition
Goal:
– Given an acoustic signal, identify the sequence of words that produced it
– Speech understanding goal: given an acoustic signal, identify the meaning intended by the speaker
Issues:
– Ambiguity: many possible pronunciations
– Uncertainty: what signal, what word/sense produced this sound sequence

Decomposing Speech Recognition
Q1: What speech sounds were uttered?
– Human languages: phones
  Basic sound units: b, m, k, ax, ey, … (ARPAbet)
  Distinctions are categorical to speakers
  – though acoustically continuous
  Part of a speaker's knowledge of language
  – Build a per-language inventory
  – Could we learn these?

Decomposing Speech Recognition
Q2: What words produced these sounds?
– Look up sound sequences in a dictionary
– Problem 1: Homophones
  Two words, same sounds: too, two
– Problem 2: Segmentation
  No “space” between words in continuous speech
  “I scream” / “ice cream”, “Wreck a nice beach” / “Recognize speech”
Q3: What meaning produced these words?
– NLP (but that’s not all!)

Signal Processing
Goal: convert impulses from the microphone into a representation that
– is compact
– encodes the features relevant for speech recognition
Compactness, step 1:
– Sampling rate: how often we sample the signal
  8 kHz, 16 kHz (44.1 kHz = CD quality)
– Quantization factor: how much precision per sample
  8-bit, 16-bit (encoding: μ-law, linear, …)
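As a minimal sketch of these two knobs, the snippet below reads the sampling rate and quantization width of a recording with Python's standard-library wave module; the file name utterance.wav is hypothetical.

```python
import wave

# Inspect the sampling rate and quantization of a recording.
with wave.open("utterance.wav", "rb") as w:
    rate = w.getframerate()    # samples per second, e.g. 8000 or 16000
    width = w.getsampwidth()   # bytes per sample: 1 = 8-bit, 2 = 16-bit
    n = w.getnframes()
    print(f"{rate} Hz, {8 * width}-bit, {n / rate:.2f} s of audio")
    print(f"data rate: {rate * width} bytes/sec")
```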

(A Little More) Signal Processing
Compactness & feature identification
– Capture mid-length speech phenomena
– Typically “frames” of 10 ms (80 samples at 8 kHz)
  Overlapping
– Vector of features: e.g., energy at some frequency
– Vector quantization: n feature values define an n-dimensional space
  Divide the space into m regions (e.g., m = 256)
  All vectors in a region get the same label, e.g. C256
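A minimal sketch of framing and nearest-centroid vector quantization, assuming a 1-D NumPy signal, a single energy feature per frame, and a hypothetical pre-trained codebook (a real system would learn the codebook, e.g. with k-means, over richer feature vectors):

```python
import numpy as np

def frame_signal(signal, rate=8000, frame_ms=10):
    """Split a 1-D signal into non-overlapping 10 ms frames."""
    frame_len = int(rate * frame_ms / 1000)          # 80 samples at 8 kHz
    n_frames = len(signal) // frame_len
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)

def quantize(feature_vectors, codebook):
    """Assign each feature vector the label of its nearest codebook centroid."""
    # dists has shape (n_vectors, m_regions)
    dists = np.linalg.norm(feature_vectors[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)                      # e.g. label 255 -> "C256"

# Toy usage: random signal, one energy feature per frame, m = 256 regions.
signal = np.random.randn(8000)                       # one second at 8 kHz
frames = frame_signal(signal)
energy = (frames ** 2).sum(axis=1, keepdims=True)    # one feature per frame
codebook = np.random.rand(256, 1) * energy.max()     # hypothetical codebook
labels = quantize(energy, codebook)
```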

Speech Recognition Model
Question: given the signal, what words?
Problem: uncertainty
– In the capture of sound by the microphone, in how phones produce sounds, in which words produce which phones, etc.
Solution: probabilistic model
– P(words|signal) = P(signal|words) P(words) / P(signal)
– Idea: maximize P(signal|words) * P(words)
  P(signal|words): acoustic model; P(words): language model
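In code, the maximization is a one-line argmax; the sketch below works in log space so tiny probabilities don't underflow, and drops P(signal) since it is the same for every hypothesis. acoustic_logp and lm_logp are hypothetical scoring functions standing in for the two models.

```python
def best_hypothesis(signal, candidates, acoustic_logp, lm_logp):
    """Pick the word sequence maximizing P(signal|words) * P(words).

    In log space, the product becomes a sum; P(signal) is constant
    across hypotheses, so it drops out of the argmax.
    """
    return max(candidates,
               key=lambda words: acoustic_logp(signal, words) + lm_logp(words))
```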

Language Model
Idea: some utterances are more probable than others
Possible solution: PCFGs???
– + probabilities, − little context effect
  “Dog bites man” as probable as “Man bites dog”
Standard solution: “n-gram” model
– Typically trigram: P(wi | wi-1, wi-2)
  Collect training data
  Smooth with bi- & unigrams to handle sparseness
  Take the product over the words in the utterance
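A minimal interpolated trigram model along these lines; the interpolation weights 0.6/0.3/0.1 are illustrative, not from the slides.

```python
from collections import Counter

class TrigramLM:
    """Trigram model smoothed by interpolation with bi- and unigrams."""

    def __init__(self, sentences, l1=0.6, l2=0.3, l3=0.1):
        self.l1, self.l2, self.l3 = l1, l2, l3
        self.uni, self.bi, self.tri = Counter(), Counter(), Counter()
        for s in sentences:
            words = ["<s>", "<s>"] + s + ["</s>"]
            self.uni.update(words)
            self.bi.update(zip(words, words[1:]))
            self.tri.update(zip(words, words[1:], words[2:]))
        self.total = sum(self.uni.values())

    def prob(self, w, u, v):
        """P(w | u, v): interpolate tri-, bi-, and unigram estimates."""
        p3 = self.tri[(u, v, w)] / self.bi[(u, v)] if self.bi[(u, v)] else 0.0
        p2 = self.bi[(v, w)] / self.uni[v] if self.uni[v] else 0.0
        p1 = self.uni[w] / self.total
        return self.l1 * p3 + self.l2 * p2 + self.l3 * p1

    def sentence_prob(self, s):
        """Product of conditional probabilities over the utterance."""
        words = ["<s>", "<s>"] + s + ["</s>"]
        p = 1.0
        for u, v, w in zip(words, words[1:], words[2:]):
            p *= self.prob(w, u, v)
        return p

lm = TrigramLM([["dog", "bites", "man"], ["man", "bites", "dog"]])
print(lm.sentence_prob(["dog", "bites", "man"]))
```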

Acoustic Model
P(signal|words)
– words -> phones, then phones -> vector-quantized labels
Words -> phones
– Pronunciation dictionary lookup
– Multiple pronunciations?
  Probability distribution over pronunciations
  – Dialect variation: tomato
  – + Coarticulation
– Probability of a pronunciation: product along the path
  E.g. “tomato”: t [ow 0.5 | ax 0.5] m [ey 0.5 | aa 0.5] t ow
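The product-along-the-path computation, with the tomato network above as data. The phone symbols are ARPAbet; the 0.5 branch probabilities are the ones that appear on the slide, and the shape of the network is a reconstruction of its diagram.

```python
# Each position in the word is a list of (phone, probability) choices.
pron_network = [
    [("t", 1.0)],
    [("ow", 0.5), ("ax", 0.5)],   # dialect variation in the first vowel
    [("m", 1.0)],
    [("ey", 0.5), ("aa", 0.5)],   # "tom-AY-to" vs. "tom-AH-to"
    [("t", 1.0)],
    [("ow", 1.0)],
]

def path_prob(path, network):
    """Multiply branch probabilities along one pronunciation path."""
    p = 1.0
    for phone, choices in zip(path, network):
        p *= dict(choices)[phone]
    return p

print(path_prob(["t", "ow", "m", "ey", "t", "ow"], pron_network))  # 0.25
```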

Acoustic Model
P(signal|phones)
– Use a Hidden Markov Model (HMM)
– Transition probabilities, output probabilities
– Probability of a sequence: sum of the probabilities of its paths
Combine into one big HMM
Phone HMM with states Onset, Mid, End, Final; output probabilities:
– Onset: C1 0.5, C2 0.2, C3 0.3
– Mid: C3 0.2, C4 0.7, C5 0.1
– End: C4 0.1, C6 0.5, C7 0.4
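The same phone HMM as plain Python tables. The emission probabilities are the ones listed above; the transition probabilities are assumed values for illustration, since the slide's diagram did not carry them over.

```python
states = ["Onset", "Mid", "End", "Final"]

trans = {                       # P(next state | state) -- assumed values
    "Onset": {"Onset": 0.3, "Mid": 0.7},
    "Mid":   {"Mid": 0.9, "End": 0.1},
    "End":   {"End": 0.4, "Final": 0.6},
    "Final": {"Final": 1.0},
}

emit = {                        # P(VQ label | state), from the slide
    "Onset": {"C1": 0.5, "C2": 0.2, "C3": 0.3},
    "Mid":   {"C3": 0.2, "C4": 0.7, "C5": 0.1},
    "End":   {"C4": 0.1, "C6": 0.5, "C7": 0.4},
}
```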

Viterbi Algorithm
Find the BEST word sequence given the signal
– Best P(words|signal)
– Take the HMM & VQ sequence => word sequence (with its probability)
Dynamic programming solution
– Record the most probable path ending at each state i
– Then the most probable path from i to the end
– O(bMn)
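A minimal Viterbi implementation over the trans/emit tables from the previous sketch; the start distribution and the observation sequence are illustrative.

```python
def viterbi(observations, states, start, trans, emit):
    """Most probable state path for a sequence of VQ labels.

    observations: list of VQ labels, e.g. ["C1", "C4", "C6"]
    start: dict of initial state probabilities
    """
    # best[s] = (probability, path) of the best path so far ending in s
    best = {s: (start.get(s, 0.0) * emit.get(s, {}).get(observations[0], 0.0), [s])
            for s in states}
    for obs in observations[1:]:
        new_best = {}
        for s in states:
            e = emit.get(s, {}).get(obs, 0.0)
            # choose the predecessor that maximizes the path probability
            p, path = max(
                ((best[r][0] * trans.get(r, {}).get(s, 0.0) * e, best[r][1] + [s])
                 for r in states),
                key=lambda t: t[0])
            new_best[s] = (p, path)
        best = new_best
    return max(best.values(), key=lambda t: t[0])

prob, path = viterbi(["C1", "C4", "C6"], states, {"Onset": 1.0}, trans, emit)
print(path, prob)   # ['Onset', 'Mid', 'End'] with its probability
```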

Learning HMMs
Issue: where do the probabilities come from?
Solution: learn them from data
– Signal + word pairs
– The Baum-Welch algorithm learns them efficiently
– See CMSC 35100
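For the curious, one Baum-Welch (EM) re-estimation step for a discrete HMM fits in a few lines of NumPy. This is an illustrative sketch only: it omits the log-space scaling that practical implementations need to avoid underflow on long observation sequences.

```python
import numpy as np

def baum_welch_step(obs, A, B, pi):
    """One EM re-estimation step for a discrete HMM.

    obs: observation indices; A: (n,n) transitions; B: (n,k) emissions;
    pi: (n,) initial distribution. Returns re-estimated (A, B, pi).
    """
    n, T = A.shape[0], len(obs)
    obs = np.asarray(obs)
    alpha = np.zeros((T, n)); beta = np.zeros((T, n))
    alpha[0] = pi * B[:, obs[0]]                       # forward pass
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0                                     # backward pass
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    likelihood = alpha[-1].sum()
    gamma = alpha * beta / likelihood                  # P(state at t | obs)
    # xi[t,i,j] = P(state i at t, state j at t+1 | obs)
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_A, new_B, gamma[0]

# Toy usage: 2 states, 2 output symbols.
A0 = np.array([[0.7, 0.3], [0.4, 0.6]])
B0 = np.array([[0.9, 0.1], [0.2, 0.8]])
pi0 = np.array([0.5, 0.5])
A1, B1, pi1 = baum_welch_step([0, 1, 0, 0, 1], A0, B0, pi0)
```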

Does it work?
Yes:
– 99% on isolated single digits
– 95% on restricted short utterances (air travel)
– 80+% on professional news broadcasts
No:
– 55% on conversational English
– 35% on conversational Mandarin
– ?? on noisy cocktail parties

Speech Recognition as Modern AI
Draws on a wide range of AI techniques:
– Knowledge representation & manipulation
  Optimal search: Viterbi decoding
– Machine learning
  Baum-Welch for HMMs
  Nearest-neighbor & k-means clustering for signal identification
– Probabilistic reasoning / Bayes rule
  Manages the uncertainty in the signal, phone, and word mapping
Enables real-world applications

Final Exam
Thursday, March 14, 10:30–12:30
– One “cheat sheet”
Cumulative
– Higher proportion on the 2nd half of the course
Style: like the midterm; apply concepts
No coding necessary
Review session: ???
Review materials: will be available

Seminar
Tomorrow: Wednesday, March 6, 2:30pm
Location: Ry 251
Title:
– Computational Complexity of Air Travel Planning
Speaker: Carl de Marcken, ITA Software
Very abstract abstract:
– Application of AI techniques to real-world problems
– a.k.a. how to make money doing AI