Lecture 29: Word Sense Disambiguation (Revision); Dialogues
October 19, 2005

Word-Sense Disambiguation
Word sense disambiguation (WSD) is the task of selecting the right sense for a word from among the senses that the word is known to have.
Semantic selectional restrictions can be used to disambiguate:
- Ambiguous arguments to unambiguous predicates
- Ambiguous predicates with unambiguous arguments
- Ambiguity all around

Word-Sense Disambiguation
We can use selectional restrictions for disambiguation:
- He cooked simple dishes. / He broke the dishes.
But sometimes selectional restrictions are not enough to disambiguate:
- What kind of dishes do you recommend? -- we cannot tell which sense is used.
- There can be two (or more) lexemes, each with multiple senses: They serve vegetarian dishes.
Selectional restrictions may also block any meaning from being found:
- If you want to kill Turkey, eat its banks.
Such situations leave the system with no possible meanings, and they can indicate a metaphor.

WSD and Selection Restrictions
- Ambiguous arguments: Prepare a dish / Wash a dish
- Ambiguous predicates: Serve Denver / Serve breakfast
- Both: Serves vegetarian dishes

WSD and Selection Restrictions
This approach is complementary to the compositional analysis approach. You need a parse tree and some form of predicate-argument analysis derived from:
- The tree and its attachments
- All the word senses coming up from the lexemes at the leaves of the tree
Ill-formed analyses are eliminated by noting any selection restriction violations.

Problems
Selection restrictions are violated all the time. This doesn't mean that the sentences are ill-formed or preferred less than others.
This approach needs some way of categorizing and dealing with the various ways that restrictions can be violated.

WSD Tags
What's a tag? A dictionary sense?
For example, in WordNet an instance of "bass" in a text has 8 possible tags or labels (bass1 through bass8).

WordNet "Bass"
The noun "bass" has 8 senses in WordNet:
1. bass - (the lowest part of the musical range)
2. bass, bass part - (the lowest part in polyphonic music)
3. bass, basso - (an adult male singer with the lowest voice)
4. sea bass, bass - (flesh of lean-fleshed saltwater fish of the family Serranidae)
5. freshwater bass, bass - (any of various North American lean-fleshed freshwater fishes especially of the genus Micropterus)
6. bass, bass voice, basso - (the lowest adult male singing voice)
7. bass - (the member with the lowest range of a family of musical instruments)
8. bass - (nontechnical name for any of numerous edible marine and freshwater spiny-finned fishes)
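
These senses can be inspected programmatically; here is a minimal sketch using NLTK's WordNet interface (assuming the nltk package and its wordnet corpus are installed):

```python
# Sketch: listing the WordNet noun senses of "bass" with NLTK.
# Assumes: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

for i, synset in enumerate(wn.synsets('bass', pos=wn.NOUN), start=1):
    lemmas = ', '.join(lemma.name() for lemma in synset.lemmas())
    print(f'{i}. {lemmas} - ({synset.definition()})')
```

Note that the sense inventory and its numbering depend on the WordNet version, so the output may not match the eight senses above exactly.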

Representations
Most supervised ML approaches require a very simple representation for the input training data: vectors of sets of feature/value pairs, i.e. files of comma-separated values.
So our first task is to extract training data from a corpus with respect to a particular instance of a target word. This typically consists of a characterization of the window of text surrounding the target.

Representations
This is where ML and NLP intersect:
- If you stick to trivial surface features that are easy to extract from a text, then most of the work is in the ML system.
- If you decide to use features that require more analysis (say, parse trees), then the ML part may be doing less work (relatively), if these features are truly informative.

Surface Representations
Collocational and co-occurrence information.
Collocational:
- Encode features about the words that appear in specific positions to the right and left of the target word
- Often limited to the words themselves as well as their part of speech
Co-occurrence:
- Features characterizing the words that occur anywhere in the window, regardless of position
- Typically limited to frequency counts

Collocational
Position-specific information about the words in the window. For the target "bass" in:
  guitar and bass player stand
the feature vector is:
  [guitar, NN, and, CJC, player, NN, stand, VVB]
In other words, a vector consisting of [position n word, position n part-of-speech, ...].
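
A minimal sketch of extracting such a vector, assuming the sentence is already tokenized and POS-tagged (the tag names below simply mirror the slide's example):

```python
# Sketch: collocational features for a target word, assuming a
# pre-tagged sentence given as (word, pos) pairs.
def collocational_features(tagged, target_index, window=2):
    """Return [w-2, pos-2, w-1, pos-1, w+1, pos+1, w+2, pos+2]."""
    features = []
    for offset in range(-window, window + 1):
        if offset == 0:
            continue  # skip the target word itself
        i = target_index + offset
        if 0 <= i < len(tagged):
            word, pos = tagged[i]
        else:
            word, pos = '<pad>', '<pad>'  # off the edge of the sentence
        features.extend([word, pos])
    return features

sentence = [('guitar', 'NN'), ('and', 'CJC'), ('bass', 'NN'),
            ('player', 'NN'), ('stand', 'VVB')]
print(collocational_features(sentence, target_index=2))
# -> ['guitar', 'NN', 'and', 'CJC', 'player', 'NN', 'stand', 'VVB']
```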

Co-occurrence
Information about the words that occur within the window:
- First derive a set of terms to place in the vector.
- Then note how often each of those terms occurs in a given window.
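
A corresponding sketch for co-occurrence features, assuming an indicator-term vocabulary has been chosen in advance (the terms below are invented for illustration):

```python
# Sketch: co-occurrence (bag-of-words) features over a window,
# assuming a fixed list of indicator terms chosen beforehand.
def cooccurrence_features(tokens, target_index, terms, window=10):
    lo = max(0, target_index - window)
    hi = min(len(tokens), target_index + window + 1)
    context = [t.lower() for i, t in enumerate(tokens[lo:hi], start=lo)
               if i != target_index]
    return [context.count(term) for term in terms]

terms = ['fishing', 'big', 'sound', 'player', 'fly', 'guitar']
tokens = 'guitar and bass player stand'.split()
print(cooccurrence_features(tokens, target_index=2, terms=terms))
# -> [0, 0, 0, 1, 0, 1]
```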

Classifiers
Once we cast the WSD problem as a classification problem, all sorts of techniques are possible:
- Naïve Bayes (the right thing to try first)
- Decision lists
- Decision trees
- Neural nets
- Support vector machines
- Nearest neighbor methods, ...
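
To make this concrete, here is a sketch of a Naïve Bayes sense classifier using scikit-learn on a toy hand-labeled corpus (real systems would train on a sense-tagged corpus such as SemCor):

```python
# Sketch: Naive Bayes WSD with scikit-learn on a toy labeled corpus.
# The training sentences and sense labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

contexts = [
    'he plays bass in a jazz band',
    'the bass line drives the song',
    'we caught a huge bass in the lake',
    'grilled bass is on the menu tonight',
]
senses = ['music', 'music', 'fish', 'fish']

# Co-occurrence (bag-of-words) features feeding multinomial Naive Bayes.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(contexts, senses)

print(model.predict(['he plays the bass']))            # expect 'music'
print(model.predict(['we caught a bass in the lake'])) # expect 'fish'
```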

Classifiers
The choice of technique, in part, depends on the set of features that have been used:
- Some techniques work better/worse with features with numerical values.
- Some techniques work better/worse with features that have large numbers of possible values. For example, the feature "the word to the left" has a fairly large number of possible values.

Statistical Word-Sense Disambiguation
Choose the sense $\hat{s}$ that is most probable given the vector representation $V$ of the input, where $s$ ranges over the senses of the target word:

$\hat{s} = \operatorname{argmax}_{s} P(s \mid V)$

By Bayes' rule (the denominator $P(V)$ is constant across senses and can be dropped):

$\hat{s} = \operatorname{argmax}_{s} P(V \mid s)\, P(s)$

By making an independence assumption among the features, the result is the product of the probabilities of the individual features $v_j$ given the sense:

$\hat{s} = \operatorname{argmax}_{s} P(s) \prod_{j} P(v_j \mid s)$
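
The same argmax computation in bare Python, with made-up probability tables for two senses of "bass" (log probabilities avoid underflow in the long product):

```python
# Sketch: the Naive Bayes argmax above in pure Python.
# The prior and likelihood values are invented for illustration.
import math

priors = {'music': 0.6, 'fish': 0.4}                       # P(s)
likelihoods = {                                            # P(v_j | s)
    'music': {'play': 0.10, 'guitar': 0.08, 'lake': 0.001},
    'fish':  {'play': 0.01, 'guitar': 0.001, 'lake': 0.05},
}

def disambiguate(features):
    def log_score(sense):
        return math.log(priors[sense]) + sum(
            math.log(likelihoods[sense].get(v, 1e-6)) for v in features)
    return max(priors, key=log_score)

print(disambiguate(['play', 'guitar']))  # -> 'music'
print(disambiguate(['lake']))            # -> 'fish'
```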

Problems
Given these general ML approaches, how many classifiers do I need to perform WSD robustly? One for each ambiguous word in the language.
How do you decide what set of tags/labels/senses to use for a given word? That depends on the application.

LING 138/238 SYMBSYS 138: Intro to Computer Speech and Language Processing
Lecture 3: October 5, 2004, Dan Jurafsky

Week 2: Dialogue and Conversational Agents
- Examples of spoken language systems
- Components of a dialogue system, with focus on these 3: ASR, NLU, dialogue management
- VoiceXML
- Grounding and confirmation

Conversational Agents
AKA: spoken language systems, dialogue systems, speech dialogue systems.
Applications:
- Travel arrangements (Amtrak, United Airlines)
- Telephone call routing
- Tutoring
- Communicating with robots
- Anything with limited screen/keyboard

A travel dialog: Communicator (figure)

Call routing: ATT HMIHY (figure)

A tutorial dialogue: ITSPOKE (figure)

Dialogue System Architecture
Simplest possible architecture: ELIZA, a read-search/replace-print loop.
We'll need something with more sophisticated dialogue control, and speech.

Dialogue System Architecture (figure)

ASR engine
ASR = Automatic Speech Recognition.
The job of the ASR system is to go from speech (telephone or microphone) to words.
We will be studying this in a few weeks.

ASR Overview (pic from Yook 2003)

ASR in Dialogue Systems
ASR systems work better if they can constrain what words the speaker is likely to say. A dialogue system often has these constraints:
System: What city are you departing from?
Can expect sentences of the form:
- I want to (leave|depart) from [CITYNAME]
- From [CITYNAME]
- [CITYNAME]
- etc.

ASR in Dialogue Systems
Also, ASR can adapt to the speaker. But ASR is errorful, so unlike ELIZA we can't count on the words being correct.
As we will see, this fact about error plays a huge role in dialogue system design.

Natural Language Understanding
Also called NLU; we will discuss this later in the quarter.
There are many ways to represent the meaning of sentences. For speech dialogue systems, perhaps the most common is a simple one called "frame and slot semantics" (semantics = meaning).

An example of a frame
Show me morning flights from Boston to SF on Tuesday.

SHOW:
  FLIGHTS:
    ORIGIN:
      CITY: Boston
      DATE: Tuesday
      TIME: morning
    DEST:
      CITY: San Francisco

How to generate this semantics?
Many methods; the simplest is semantic grammars:

LIST        -> show me | i want | can i see | ...
DEPARTTIME  -> (after|around|before) HOUR | morning | afternoon | evening
HOUR        -> one | two | three | ... | twelve (am|pm)
FLIGHTS     -> (a) flight | flights
ORIGIN      -> from CITY
DESTINATION -> to CITY
CITY        -> Boston | San Francisco | Denver | Washington

Semantics for a sentence
[LIST Show me] [FLIGHTS flights] [ORIGIN from Boston] [DESTINATION to San Francisco] [DEPARTDATE on Tuesday] [DEPARTTIME morning]

Frame-filling
We use a parser to take these rules and apply them to the sentence, resulting in a semantics for the sentence.
We can then write some simple code that takes the semantically labeled sentence and fills in the frame (a sketch follows).
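
A toy sketch of this pipeline, with regular expressions standing in for a real parser over the semantic grammar (the patterns and city list are illustrative only):

```python
# Sketch: filling a frame from an utterance using regex patterns
# that stand in for semantic grammar rules (illustrative only).
import re

CITY = r'(Boston|San Francisco|Denver|Washington)'
RULES = {
    'ORIGIN':      re.compile(r'from ' + CITY),
    'DESTINATION': re.compile(r'to ' + CITY),
    'DEPARTDATE':  re.compile(r'on (Monday|Tuesday|Wednesday|Thursday|Friday)'),
    'DEPARTTIME':  re.compile(r'(morning|afternoon|evening)'),
}

def fill_frame(utterance):
    frame = {}
    for slot, pattern in RULES.items():
        match = pattern.search(utterance)
        if match:
            frame[slot] = match.group(1)
    return frame

print(fill_frame('Show me morning flights from Boston to San Francisco on Tuesday'))
# -> {'ORIGIN': 'Boston', 'DESTINATION': 'San Francisco',
#     'DEPARTDATE': 'Tuesday', 'DEPARTTIME': 'morning'}
```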

Other NLU Approaches
- Cascade of finite-state transducers: instead of a parser, we could use FSTs, which are very fast, to create the semantics.
- Or we could use "syntactic rules with semantic attachments". This latter is what is done in VoiceXML, so we will see that today.

Generation and TTS
Generation: two main approaches
- Simple templates (prescripted sentences)
- Unification: use similar grammar rules as for parsing, but run them backwards!

Generation and TTS
Generation: the generation component of a conversational agent
- chooses the concepts to express to the user,
- plans how to express the concepts in words,
- assigns necessary prosody to the words.
TTS: takes these words and their prosodic annotations and synthesizes a waveform.

Generation
- Content planner: what to say
- Language generation: how to say it
Template-based generation:
- What time do you want to leave CITY_ORIG?
- Will you return to CITY_ORIG from CITY_DEST?
Natural language generation:
- Sentence planner
- Surface realizer
- Prosody assigner

Dialogue Manager
Controls the architecture and the structure of the dialogue:
- Takes input from the ASR/NLU components
- Maintains some sort of state
- Interfaces with the task manager
- Passes output to the NLG/TTS modules

Dialogue Manager
ELIZA was the simplest dialogue manager: a read - search/replace - print loop. No state was kept; the system did the same thing on every sentence.
A real dialogue manager needs to keep state, with the ability to model structures of dialogue above the level of a single response.

Architectures for dialogue management
- Finite state
- Frame-based
- Information-state based (including probabilistic)
- Planning agents

Finite State Dialogue Manager (figure)

Finite-state dialogue managers
The system completely controls the conversation with the user:
- It asks the user a series of questions,
- ignoring (or misinterpreting) anything the user says that is not a direct answer to the system's questions.
A minimal sketch appears below.
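
Here is that sketch; the states, prompts, and slot names are invented for illustration:

```python
# Sketch: a finite-state dialogue manager as a fixed sequence of states.
# States, prompts, and slot names are invented for illustration.
STATES = [
    ('origin',      'What city are you leaving from?'),
    ('destination', 'Where are you going?'),
    ('depart_date', 'What day would you like to leave?'),
]

def run_dialogue(get_user_input=input):
    frame = {}
    for slot, prompt in STATES:       # the system controls the order entirely
        answer = get_user_input(prompt + ' ')
        frame[slot] = answer          # whatever the user says is taken as the answer
    print('Querying database with:', frame)

if __name__ == '__main__':
    run_dialogue()
```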

Dialogue Initiative
"Initiative" means who has control of the conversation at any point.
- Single initiative: system, or user
- Mixed initiative

System Initiative
Systems which completely control the conversation at all times are called system initiative.
Advantages:
- Simple to build
- User always knows what they can say next
- System always knows what the user can say next
  - Known words: better performance from ASR
  - Known topic: better performance from NLU
Disadvantage: too limited

User Initiative
The user directs the system. Generally, the user asks a single question and the system answers.
The system can't ask questions back, or engage in clarification or confirmation dialogue.
Used for simple database queries: the user asks a question, the system gives an answer. Web search is user-initiative dialogue.

Problems with System Initiative
Real dialogue involves give and take! In travel planning, users might want to say something that is not the direct answer to the question. For example, answering more than one question in a sentence:
- Hi, I'd like to fly from Seattle Tuesday morning
- I want a flight from Milwaukee to Orlando one way leaving after 5 p.m. on Wednesday.

Single initiative + universals
We can give users a little more flexibility by adding universal commands.
Universals: commands you can say anywhere, as if we augmented every state of the FSA with them:
- Help
- Correct
This describes many implemented systems, but still doesn't deal with mixed initiative.

Mixed Initiative
Conversational initiative can shift between system and user.
Simplest kind of mixed initiative: use the structure of the frame itself to guide the dialogue.

Slot        Question
ORIGIN      What city are you leaving from?
DEST        Where are you going?
DEPT DATE   What day would you like to leave?
DEPT TIME   What time would you like to leave?
AIRLINE     What is your preferred airline?

Frames are mixed-initiative
- The user can answer multiple questions at once.
- The system asks questions of the user, filling any slots that the user specifies. When the frame is filled, do a database query.
- If the user answers 3 questions at once, the system has to fill the slots and not ask those questions again!
This way we avoid the strict constraints on order of the finite-state architecture (see the sketch below).
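
A sketch of frame-based control: the system only prompts for slots that are still empty, but accepts answers to any slot (slot names, prompts, and the toy extractor are illustrative):

```python
# Sketch: frame-based dialogue management; only ask about unfilled slots.
# Slot names, prompts, and the toy regex extractor are illustrative.
import re

PROMPTS = {
    'ORIGIN':    'What city are you leaving from?',
    'DEST':      'Where are you going?',
    'DEPT_DATE': 'What day would you like to leave?',
}
CITY = r'(Boston|Denver|Seattle)'
PATTERNS = {
    'ORIGIN':    re.compile(r'from ' + CITY),
    'DEST':      re.compile(r'to ' + CITY),
    'DEPT_DATE': re.compile(r'on (\w+day)'),
}

def extract_slots(utterance):
    """Toy NLU: fill whichever slots the utterance mentions."""
    return {slot: m.group(1) for slot, p in PATTERNS.items()
            if (m := p.search(utterance))}

def frame_dialogue(get_user_input=input):
    frame = {}
    while len(frame) < len(PROMPTS):
        # Ask about the first still-empty slot...
        slot = next(s for s in PROMPTS if s not in frame)
        answer = get_user_input(PROMPTS[slot] + ' ')
        # ...but accept answers to *any* slot, so one utterance like
        # "from Boston to Denver on Tuesday" can fill several at once.
        frame.update(extract_slots(answer))
    return frame
```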

Multiple frames
Frames for flights, hotels, rental cars, and more:
- Flight legs: each flight can have multiple legs, which might need to be discussed separately.
- Presenting the flights (if there are multiple flights meeting the user's constraints): a frame with slots like 1ST_FLIGHT or 2ND_FLIGHT, so the user can ask "how much is the second one?"
- General route information: Which airlines fly from Boston to San Francisco?
- Airfare practices: Do I have to stay over Saturday to get a decent airfare?

Multiple Frames
Need to be able to switch from frame to frame based on what the user says:
disambiguate which slot of which frame an input is supposed to fill, then switch dialogue control to that frame.

VoiceXML
Voice eXtensible Markup Language:
- An XML-based dialogue design language
- Makes use of ASR and TTS
- Deals well with simple, frame-based mixed-initiative dialogue
- Most common in the commercial world (too limited for research systems), but useful for getting a handle on the concepts

VoiceXML
Each dialogue is a <form>. (Form is the VoiceXML word for frame.)
Each <form> generally consists of a sequence of <field>s, along with other commands.

Sample VXML doc

<form>
  <field name="transporttype">
    <prompt>
      Please choose airline, hotel, or rental car.
    </prompt>
    <grammar type="application/x-nuance-gsl">
      [airline hotel "rental car"]
    </grammar>
  </field>
  <block>
    <prompt>
      You have chosen <value expr="transporttype">.
    </prompt>
  </block>
</form>

VoiceXML interpreter
- Walks through a VXML form in document order, iteratively selecting each item
- If there are multiple fields, visits each one in order
- Special commands for events

Another VXML doc (1)

<noinput>
  I'm sorry, I didn't hear you. <reprompt/>
</noinput>
<nomatch>
  I'm sorry, I didn't understand that. <reprompt/>
</nomatch>

Another VXML doc (2)

<form>
  <block> Welcome to the air travel consultant. </block>
  <field name="origin">
    <prompt> Which city do you want to leave from? </prompt>
    <grammar type="application/x-nuance-gsl">
      [(san francisco) denver (new york) barcelona]
    </grammar>
    <filled>
      <prompt> OK, from <value expr="origin"> </prompt>
    </filled>
  </field>

Another VXML doc (3)

  <field name="destination">
    <prompt> And which city do you want to go to? </prompt>
    <grammar type="application/x-nuance-gsl">
      [(san francisco) denver (new york) barcelona]
    </grammar>
    <filled>
      <prompt> OK, to <value expr="destination"> </prompt>
    </filled>
  </field>
  <field name="departdate" type="date">
    <prompt> And what date do you want to leave? </prompt>
    <filled>
      <prompt> OK, on <value expr="departdate"> </prompt>
    </filled>
  </field>

Another VXML doc (4)

  <block>
    <prompt>
      OK, I have you departing from <value expr="origin">
      to <value expr="destination"> on <value expr="departdate">.
    </prompt>
    send the info to book a flight...
  </block>
</form>

A mixed-initiative VXML doc
Mixed initiative: the user might answer a different question, so the VoiceXML interpreter can't just evaluate each field of the form in order. The user might answer field2 when the system asked field1.
So we need a grammar which can handle all sorts of input:
- Field1
- Field2
- Field1 and Field2
- etc.

VXML Nuance-style grammars
Rewrite rules, e.g. Wantsentence -> I want to (fly|go).
The Nuance VXML format uses () for concatenation and [] for disjunction, and each rule has a name:
  Wantsentence (i want to [fly go])
  Airports [(san francisco) denver]

Mixed-init VXML example (3)

<noinput>
  I'm sorry, I didn't hear you. <reprompt/>
</noinput>
<nomatch>
  I'm sorry, I didn't understand that. <reprompt/>
</nomatch>
<form>
  <grammar type="application/x-nuance-gsl">
    <![CDATA[

Grammar

Flight ( ?[
           (i [wanna (want to)] [fly go])
           (i'd like to [fly go])
           ([(i wanna) (i'd like a)] flight)
         ]
         [
           ([from leaving departing] City:x) {<origin $x>}
           ([(?going to) (arriving in)] City:x) {<dest $x>}
           ([from leaving departing] City:x
            [(?going to) (arriving in)] City:y) {<origin $x> <dest $y>}
         ]
         ?please
       )

Grammar

City [
  [(san francisco) (s f o)] {return("san francisco, california")}
  [(denver) (d e n)]        {return("denver, colorado")}
  [(seattle) (s t x)]       {return("seattle, washington")}
]
]]>
</grammar>

Grammar

<initial name="init">
  <prompt> Welcome to the air travel consultant.
    What are your travel plans? </prompt>
</initial>
<field name="origin">
  <prompt> Which city do you want to leave from? </prompt>
  <filled>
    <prompt> OK, from <value expr="origin"> </prompt>
  </filled>
</field>

Grammar

<field name="dest">
  <prompt> And which city do you want to go to? </prompt>
  <filled>
    <prompt> OK, to <value expr="dest"> </prompt>
  </filled>
</field>
<block>
  <prompt>
    OK, I have you departing from <value expr="origin">
    to <value expr="dest">.
  </prompt>
  send the info to book a flight...
</block>
</form>

Grounding and Confirmation
Dialogue is a collective act performed by speaker and hearer.
Common ground: the set of things mutually believed by both speaker and hearer.
We need to achieve common ground, so the hearer must ground or acknowledge the speaker's utterance.
Clark (1996), principle of closure: agents performing an action require evidence, sufficient for current purposes, that they have succeeded in performing it.

Clark and Schaefer: Grounding
- Continued attention: B continues attending to A
- Relevant next contribution: B starts in on the next relevant contribution
- Acknowledgement: B nods or says a continuer like uh-huh, yeah, or an assessment (great!)
- Demonstration: B demonstrates understanding of A by paraphrasing or reformulating A's contribution, or by collaboratively completing A's utterance
- Display: B displays verbatim all or part of A's presentation


Grounding examples
Display:
  C: I need to travel in May
  A: And, what day in May did you want to travel?
Acknowledgement:
  C: He wants to fly from Boston
  A: mm-hmm
  C: to Baltimore Washington International

Grounding Examples (2)
Acknowledgement + next relevant contribution:
- And, what day in May did you want to travel?
- And you're flying into what city?
- And what time would you like to leave?

Grounding and Dialogue Systems
Grounding is not just a tidbit about humans; it is key to the design of conversational agents. Why?
HCI researchers find that users of speech-based interfaces are confused when the system doesn't give them an explicit acknowledgement signal. Experiment with this.

Confirmation
Another reason for grounding: speech is a pretty errorful channel, and the hearer could misinterpret the speaker.
This is important in conversational agents, since we are using ASR, which is still really buggy. So we need to do lots of grounding and confirmation.

Explicit confirmation
S: Which city do you want to leave from?
U: Baltimore
S: Do you want to leave from Baltimore?
U: Yes

Explicit confirmation
U: I'd like to fly from Denver Colorado to New York City on September 21st in the morning on United Airlines
S: Let's see then. I have you going from Denver Colorado to New York on September 21st. Is that correct?
U: Yes

Implicit confirmation: display
U: I'd like to travel to Berlin
S: When do you want to travel to Berlin?

U: Hi I'd like to fly to Seattle Tuesday morning
S: Traveling to Seattle on Tuesday, August eleventh in the morning. Your name?

Implicit vs. Explicit
Complementary strengths:
- Explicit: easier for users to correct the system's mistakes (they can just say "no"), but cumbersome and long.
- Implicit: much more natural, quicker, simpler (if the system guesses right).

Implicit and Explicit
Early systems were all-implicit or all-explicit; modern systems are adaptive.
How to decide? The ASR system can give a confidence metric, expressing how convinced the system is of its transcription of the speech:
- If high confidence, use implicit confirmation.
- If low confidence, use explicit confirmation.
A sketch of this policy follows.
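
A minimal sketch of the adaptive policy; the threshold value is invented for illustration and would be tuned on held-out dialogue data in practice:

```python
# Sketch: choosing a confirmation strategy from ASR confidence.
# The 0.8 threshold is invented; real systems tune it empirically.
def confirmation_strategy(asr_confidence, threshold=0.8):
    if asr_confidence >= threshold:
        return 'implicit'   # e.g. "Traveling to Seattle on Tuesday. Your name?"
    return 'explicit'       # e.g. "Do you want to leave from Baltimore?"

for conf in (0.95, 0.35):
    print(conf, '->', confirmation_strategy(conf))
```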

Next Lecture
- Dialogue acts
- More on VXML
- More on design of dialogue agents
- Evaluation of dialogue agents
Don't forget to look at the homework early!