Mapping Utterances onto Dialogue Acts with LSA and Naïve Bayes
Thomas K Harris
Dialogs on Dialogs: May 6, 2005

Today's Talk
A quick super-high-level overview of the problem
A mini-problem
– SGPUC
– Data collected
– Six classes of dialog acts
A naïve Bayes classification approach and results
Latent Semantic Analysis (LSA)
– What is LSA (algorithmically)?
– What is LSA (theoretically)?
– What are the major LSA-language findings?
– How I used it to condition the data
Applicability
Research
Please give me feedback

A Super High-Level View of the SDS Input Pass
Words -> Speech Acts and Concepts is usually a knowledge-engineered "white-box" function.
Problematic because the input (words) is
– Huge. (How large is natural language? I don't think anyone knows.)
– Noisy and probabilistic. (That's what ASR gives us.)
– Dynamic and situational.
Problematic because the output (concepts) is difficult to share or generalize from one domain/system to another.

What do we do?
A lot of design iterations!
Restrict the domain
Share components
Control the speaker through
– Training and entrainment
– Domain-related expectations
– Influencing or outright directing the dialog

Use the Data, Luke.
Words -> Speech Acts and Concepts can also be a data-driven "black-box" function, or a hybrid.
This has its own set of problems:
– Labeling data is costly.
– The catch-22: data collection requires a working system. Iterate starting with seed data, which can be nothing, designer-hypothesized data, WoZ data, data from a similar or previous-version SDS, or data from some human-human analog.
– The performance often seems good at first, but then quickly reaches an asymptote.
I'm only going to address the labeling-cost issue here.

A Mini-Problem
Let's look at a small part of the words -> speech acts and concepts problem in a real system, the Speech Graffiti Personal Universal Controller (SGPUC).
Hopefully this small, concrete system and its mini-problem will make experimenting with approaches manageable.
But first, a little about the system itself.

Speech Graffiti Personal Universal Controller
Protocol-based appliance communication architecture
SDS built automatically from an appliance description language
Speech Graffiti-style user interface

Appliance Communication Architecture
[Diagram: appliances connect through HAVi and X10 adapters to the Speech Graffiti Personal Universal Controller.]

James
[Diagram: the control tree for the "James" stereo — (mode) tuner with (radio band) AM/FM, frequency #, station (WXXX), seek forward/backward; auxiliary; CD with (status) play/stop/pause, repeat (off / single track / single disc / all discs), disc #, track #, next track, last track, random on/off; x-bass on/off; volume, volume up, volume down; off — plus digital camera control (play mode: play, stop, fast fwd, rewind, record, pause, step forward/backward, info; device mode: camera/VCR; media type: digital video, unknown, VHS, none) and other devices…]

Speech Graffiti Dialog
Artificial subset language
Tree-structured functions
Universal conversational primitives
User-directed
Great for recognition
Entirely declarative (and automatic)

Six Dialog Acts in SGPUC
Session Management
Help
Execution
Query
Response Related
Other

Session Management
User: blah blah blah...
System: (ignoring user)
User: hello james
System: stereo, digital camera
User: stereo
System: stereo here
User: goodbye
System: goodbye
User: blah blah blah...
System: (ignoring user)

Help/Exploration/Orientation
User: alarm clock options
System: alarm, clock, radio, sleep, and more…
User: help
System: to hear what options you have, say options; to repeat an utterance, say repeat…
User: stereo options
System: while turning stereo on: off, am, fm, auxiliary, cd, and more…
User: where am I
System: stereo options

Execution/Specification
User: stereo auxiliary
System: while turning the stereo on and switching to auxiliary: auxiliary
User: cd
System: while switching to cd mode: cd
User: play
System: while playing a cd: playing cd
User: switch disc to four
System: while switching discs: switching cd disc to four

Query
User: what is the am frequency
System: the am frequency is five hundred thirty
User: random status
System: random is off
User: status
System: tuner am, station wabc, volume low, and more…

Response Related
User: radio band am options
System: am options are frequency, kabc, k001, k002, and more…
User: more
System: k003, k004
User: more
System: band options are fm
User: repeat
System: band options are fm

Back to the Mini-Problem
The language is explicit and regular in marking dialog acts, so a grammar will accurately classify the dialog acts of in-grammar utterances.
But users are taught the SG language, learn it incompletely, and have faulty memories; utterances have false starts, spurious repetitions, etc.; and ASR is error-prone.
As a result, 37.5% of utterances' dialog acts were misclassified.

Data
Listening to the actual speech, I labeled 2010 utterances (from 10 participants), each with one of the six dialog acts.
Note that this labeling is much faster than transcription or most other kinds of labeling: the 2010 utterances were labeled in 2½ hours, close to real time.
Each utterance is represented by a boolean vector, where each element records whether a given vocabulary word appears in the utterance (i.e., word order is ignored!).
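
As an illustration, here is a minimal sketch of that boolean bag-of-words encoding; the utterances below are invented examples, not the labeled corpus.

```python
def build_vocabulary(utterances):
    """Collect the sorted set of distinct words across all utterances."""
    return sorted({word for utt in utterances for word in utt.split()})

def encode(utterance, vocabulary):
    """Map an utterance to a boolean vector over the vocabulary (order ignored)."""
    words = set(utterance.split())
    return [word in words for word in vocabulary]

utterances = ["stereo options", "hello james", "stereo volume up"]
vocab = build_vocabulary(utterances)
vectors = [encode(u, vocab) for u in utterances]
print(vocab)       # ['hello', 'james', 'options', 'stereo', 'up', 'volume']
print(vectors[0])  # [False, False, True, True, False, False]
```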

A Naïve Bayes Classifier
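
As a reference point (the slide's own formulation isn't preserved here), a Bernoulli naïve Bayes classifier over the boolean word vectors might look like this sketch — consistent with the surrounding slides, but not necessarily the exact formulation used; numpy is assumed.

```python
import numpy as np

def train_nb(X, y, n_classes, alpha=0.0):
    """Estimate class priors and per-class word-presence probabilities.

    X: (n_utterances, n_words) boolean matrix; y: labels in 0..n_classes-1.
    alpha=0 gives the plain maximum-likelihood estimate, which exhibits the
    zero-probability problem discussed on the next slide; a small Laplace
    alpha is the usual fix.
    """
    priors = np.zeros(n_classes)
    word_probs = np.zeros((n_classes, X.shape[1]))
    for c in range(n_classes):
        Xc = X[y == c]
        priors[c] = len(Xc) / len(X)
        word_probs[c] = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
    return priors, word_probs

def classify_nb(x, priors, word_probs):
    """Return argmax_c of log P(c) + sum_j log P(x_j | c) for boolean vector x."""
    with np.errstate(divide="ignore"):  # log(0) -> -inf is the intended behavior
        log_like = np.where(x, np.log(word_probs),
                            np.log(1.0 - word_probs)).sum(axis=1)
        return int(np.argmax(np.log(priors) + log_like))
```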

Classifier Results

Problems with Naïve Bayes
Independence assumption
– Word presence in an utterance contributes a fixed amount to class distinction regardless of context.
– i.e., "bank" contributes the same thing to the classifier in the context of "world bank" and "river bank".
Estimates a high-dimensional model
– The model estimates 5 parameters (#classes − 1) for each word, so words that occur infrequently will be severely over-fitted.
Problems with singleton words
– If an utterance contains a word that never occurred in the training data for a particular class, the probability assigned to that class is zero.

Latent Semantic Analysis to the Rescue
Independence assumption
– LSA models both synonymy and polysemy.
– Polysemy: words that occur in different contexts (e.g. "bank" in "world bank" vs. "river bank") tend to become distinguished.
– Synonymy: words that occur in similar contexts (e.g. the "white" and "black" of "white sheep" and "black sheep") tend to become undistinguished.
Estimates a high-dimensional model
– The effective dimensionality is fixed at an arbitrary (small) value.
Problems with singleton words
– The dimensionality reduction serves as a smoothing function.

How Does LSA Work?
C1: Human machine interface for ABC computer applications.
C2: A survey of user opinion of computer system response time.
C3: The EPS user interface management system.
C4: System and human system engineering testing of EPS.
C…: …

{X} =
            C1  C2  C3  C4  …
Human        1   0   0   1  …
Interface    1   0   1   0  …
Computer     1   1   0   0  …
User         0   1   1   0  …
System       0   1   1   2  …
Response     0   1   0   0  …
Time         0   1   0   0  …
EPS          0   0   1   1  …
Survey       0   1   0   0  …
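
As a sketch, the count matrix can be built directly from the passages; here the vocabulary is restricted to the slide's nine index terms.

```python
# Build the term-passage count matrix {X} for the four example passages.
terms = ["human", "interface", "computer", "user", "system",
         "response", "time", "eps", "survey"]
passages = [
    "human machine interface for abc computer applications",
    "a survey of user opinion of computer system response time",
    "the eps user interface management system",
    "system and human system engineering testing of eps",
]
X = [[p.split().count(t) for p in passages] for t in terms]
for term, row in zip(terms, X):
    print(f"{term:>9}: {row}")   # e.g. system: [0, 1, 1, 2]
```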

Singular Value Decomposition
Any m×n matrix X with m > n can be decomposed into the product of three matrices, U D Vᵀ, where:
– U is an m×n matrix and V is an n×n matrix, both with orthogonal columns.
– D is an n×n diagonal matrix.
D is a sort-of basis in n dimensions for X.
In Matlab: [U, D, V] = svd(X, 0);   (the economy-size decomposition, so that U is m×n)
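
The numpy equivalent, as a quick check that the decomposition recomposes X (numpy returns the singular values as a vector rather than a diagonal matrix):

```python
import numpy as np

# Economy-size SVD of the 9x4 term-passage matrix {X} from the previous slide.
X = np.array([[1, 0, 0, 1], [1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0],
              [0, 1, 1, 2], [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1],
              [0, 1, 0, 0]], dtype=float)
U, d, Vt = np.linalg.svd(X, full_matrices=False)
D = np.diag(d)                       # singular values come back as a vector
assert np.allclose(X, U @ D @ Vt)    # X = U D V^T, up to rounding
```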

LSA Algorithm in 4 Easy Steps
1. Build your feature-passage matrix X. (Here I chose word-utterance.)
2. [U, D, V] = svd(X)
3. Zero out all but the highest g values of D to form a new, reduced D.
4. Recompose a reduced X as U D Vᵀ.
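
The four steps together as a numpy sketch. Note that the slide's example tables carry extra "…" rows and columns, so running this on the truncated 9×4 matrix will not reproduce the next slide's numbers exactly.

```python
import numpy as np

def lsa_reduce(X, g):
    """Rank-g LSA reconstruction: SVD, keep the g largest singular values, recompose."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)   # step 2
    d[g:] = 0.0         # step 3: singular values are sorted, so keep the top g
    return U @ np.diag(d) @ Vt                         # step 4

X = np.array([[1, 0, 0, 1], [1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0],
              [0, 1, 1, 2], [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1],
              [0, 1, 0, 0]], dtype=float)              # step 1
X_hat = lsa_reduce(X, g=2)
print(np.round(X_hat, 2))
```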

The Recomposed Matrix
(passages C1–C4 as on the previous slide; original counts in parentheses)

{X} =
            C1         C2         C3         C4       …
Human      (1) 0.16   (0) 0.40   (0) 0.38   (1) 0.47  …
Interface  (1) 0.14   (0) 0.37   (1) 0.33   (0) 0.40  …
Computer   (1) 0.15   (1) 0.51   (0) 0.36   (0) 0.41  …
User       (0) 0.26   (1) 0.84   (1) 0.61   (0) 0.70  …
System     (0) 0.45   (1) 1.23   (1) 1.05   (2) 1.27  …
Response   (0) 0.16   (1) 0.58   (0) 0.38   (0) 0.42  …
Time       (0) 0.16   (1) 0.58   (0) 0.38   (0) 0.42  …
EPS        (0) 0.22   (0) 0.55   (1) 0.51   (1) 0.63  …
Survey     (0) 0.10   (1) 0.53   (0) 0.23   (0) 0.21  …

And This Means?
Cosine distances between word rows show patterns of similarity, as do cosine distances between passage columns.
Clustering with these distances yields clusters that feel "semantic", and LSA mimics human choices on standardized tests, word sorting, and lexical priming so well that people have suggested it may be an actual psycholinguistic mechanism.
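
For instance, a minimal cosine-similarity check on the example matrix (reusing the lsa_reduce sketch above): "user" and "human" never co-occur in the raw counts, so their raw cosine is 0, but their reconstructed rows become similar.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

terms = ["human", "interface", "computer", "user", "system",
         "response", "time", "eps", "survey"]
X = np.array([[1, 0, 0, 1], [1, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 0],
              [0, 1, 1, 2], [0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 1],
              [0, 1, 0, 0]], dtype=float)
X_hat = lsa_reduce(X, g=2)          # rank-2 reconstruction, defined above

u, h = terms.index("user"), terms.index("human")
print(cosine(X[u], X[h]))           # 0.0 in the raw counts
print(cosine(X_hat[u], X_hat[h]))   # nonzero after the reduction
```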

LSA-Discounted NB Estimators
Why don't we try using an LSA-reconstructed matrix to train the NB classifier?
I used various amounts of labeled data, discounted by various amounts of unlabeled LSA data.
Unlabeled decoder output boosts classification!
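
The slide does not spell out the discounting scheme, so the following is only one plausible reading, not the talk's method: run LSA over the labeled and unlabeled utterance vectors together, then mix the raw labeled counts with their LSA-smoothed counterparts before estimating the NB parameters. The weight knob and the clipping are my assumptions.

```python
import numpy as np

def lsa_discounted_counts(X_lab, y, X_unlab, n_classes, g, weight=0.5):
    """Hypothetical LSA-discounted counts for the NB estimators.

    X_lab: (n_labeled, n_words) boolean matrix; X_unlab: unlabeled decoder
    output in the same encoding. The rank-g LSA is built over ALL utterances,
    so the unlabeled data influences the smoothed word statistics.
    """
    X_all = np.vstack([X_lab, X_unlab]).astype(float)
    X_smooth = np.clip(lsa_reduce(X_all, g)[: len(X_lab)], 0.0, 1.0)
    counts = np.zeros((n_classes, X_lab.shape[1]))
    for c in range(n_classes):
        mask = (y == c)
        counts[c] = ((1 - weight) * X_lab[mask].sum(axis=0)
                     + weight * X_smooth[mask].sum(axis=0))
    return counts  # drop-in replacement for raw counts when estimating P(word|class)
```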

Results

Applications for Coarse Classification
"How may I help you?" systems
More directed error correction, e.g. "You said you wanted to go where?" instead of "Can you repeat that?"
Perhaps even self-correction: a coarse classifier could re-weight the language model or re-order hypotheses to elicit a corrected best hypothesis.

Extensions and Further Research
How to integrate this into a system?
Would this work for non-SG systems?
Does it scale further, especially w.r.t. unlabeled data?
Are there better features than boolean bags-of-words?
Can we go to finer-grained classification, perhaps even classifying concepts as well as speech acts?