
Spoken Dialogue for the Why2 Intelligent Tutoring System
Diane J. Litman
Learning Research and Development Center & Computer Science Department
University of Pittsburgh, Pittsburgh, PA USA

Why Dialogue Tutoring?
CIRCLE (Center for Interdisciplinary Research on Constructive Learning Environments) is an NSF-funded research center located at Pitt and CMU. CIRCLE's mission is to determine why highly effective forms of instruction, such as human one-on-one tutoring, work so well, and to develop computer-based constructive learning environments that foster equally impressive learning. In the intelligent tutoring system community, research suggests that dialogues that encourage students to do as much of the work as possible lead to increased learning gains.

Why Spoken Dialogue Tutoring?
Motivation
– Promote learning gains by enhancing communication richness
Benefits for Intelligent Tutoring Systems
– Naturalness and ease of use
– New sources of evidence regarding dialogue and pedagogy
– Complement to current talking heads
– Hands-free aspect can support multimodal interaction
Benefits for Spoken Dialogue Systems
– Tutoring is a challenging and largely unexplored application

Why Now?
Spoken dialogue technology has come of age
– Commercially available products
– Freely available toolkits
A flurry of recent activity
– Project LISTEN (CMU, reading tutor)
– Computer-Assisted Language Learning (CMU, pronunciation)
– Just-in-time training (NASA)
– COVE (BBN, shiphandling training)
– DC-TRAIN (Stanford, post-training critiquing)
– Arithmetic tutoring (UMass/Rochester, planned)

Research Questions
What are the advantages – and disadvantages – of using speech over text?
Can prosody be used to infer pedagogically significant information?
Can the tutoring system make use of such inferences?
Testbed – Why2: an existing University of Pittsburgh (deep NLP) text-based dialogue tutoring system

Why2-Atlas screen shot
4. A rock is thrown straight upward with a velocity v. What is its acceleration at the top of its path? What is its velocity when it comes back to the starting point? Why?
At the very top of the path, where the velocity is equal to 0, the acceleration will be -9.8 m/(s*s), the acceleration of gravity. It will no longer have any upward acceleration. When it comes back to its starting point (assuming that it is the ground) its velocity will be equal to 0 because its downward progress will be stopped by the ground.
Tutor: What is the velocity just before it hits the ground?
Student: That will depend on the time that the ball is in the air/distance ball traveled.
Tutor: How will it compare with the velocity with which it was thrown up?
They will be equal. The height the ball reaches will depend on the initial velocity.
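For reference, the physics the tutor is probing here follows from constant-acceleration kinematics: taking upward as positive with launch speed v0, the acceleration is -g for the whole flight, and the velocity on returning to the launch point equals the launch velocity in magnitude with the sign reversed. A short derivation:

```latex
v(t) = v_0 - g\,t, \qquad a(t) = -g \approx -9.8~\mathrm{m/s^2} \ \text{throughout the flight}

v(t_{\mathrm{top}}) = 0 \;\Rightarrow\; t_{\mathrm{top}} = \frac{v_0}{g}, \qquad
v(2\,t_{\mathrm{top}}) = v_0 - g\cdot\frac{2\,v_0}{g} = -\,v_0
```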

Detecting and Recovering from Problems in Spoken Dialogue
What can we learn from human-human and wizard system dialogues?
How do systems detect when a dialogue is 'going wrong'? How do they define such conditions?
What factors are the key indicators of 'bad' dialogues?
Can we devise adaptive strategies to identify patterns of error and respond accordingly?
What sorts of behavior do users exhibit when faced with system errors?
What measures (better prompts, anticipation of likely errors, better help information) can be taken to minimize possible errors?

My Previous Work (handouts)
Prediction of dialogue- and utterance-level problems
– Automatically computable features (prosody, ASR, history, efficiency, quality, lexical) can accurately predict annotated turn classes (misrecognitions - 93%; aware sites - 89%; corrections - 84%); see the sketch after this slide
– Prosodic features include pitch, energy, duration, pause, tempo, and silence
Adaptation
– Task success improved from 23% to 65% by detecting and reacting to problematic subdialogues
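To make the feature-based prediction concrete, here is a minimal sketch of how per-turn features could be used to train a misrecognition detector. It is not the original system: the file turn_features.csv, its column names, and the logistic-regression learner are all illustrative assumptions; the slide does not specify which learner was used.

```python
# Minimal sketch (not the original system): train a per-turn misrecognition
# detector from automatically computable features. The CSV file name and its
# columns (pitch_mean, energy_mean, duration, pause_ratio, asr_confidence,
# turn_number, is_misrecognition) are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

turns = pd.read_csv("turn_features.csv")          # one row per user turn
feature_cols = ["pitch_mean", "energy_mean", "duration",
                "pause_ratio", "asr_confidence", "turn_number"]
X = turns[feature_cols].values
y = turns["is_misrecognition"].values             # 1 = ASR got the turn wrong

# Scale the features and fit a simple linear classifier;
# report cross-validated accuracy over the annotated turns.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print(f"10-fold cross-validation accuracy: {scores.mean():.2%}")
```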

Example with Annotations
S: Hi, this is AT&T Amtrak schedule system. This is TOOT. How may I help you?
U: I'd like to find a train from Ch- to Chicago from Baltimore. [MISRECOGNITION]
S: Sorry, I can't understand you. Please repeat your utterance.
U: I'd like to find a train from Baltimore to Chicago. [MISRECOGNITION, AWARE, CORRECTION]
S: I heard you say "leave today around 5 thirty in the morning." Which city do you want to go to?
U: I would like to leave Saturday at eight in the morning. [MISRECOGNITION, AWARE, CORRECTION]
S: You can say the arrival city name, such as "New York City."
U: Baltimore to Chicago. [AWARE, CORRECTION]
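One way such annotated turns could be represented for later analysis is a simple per-turn record with a set of labels; this is only an illustrative data structure, not the annotation tooling actually used for the corpus above.

```python
# Illustrative representation of annotated user turns (not the original tooling).
from dataclasses import dataclass, field
from enum import Enum, auto

class Label(Enum):
    MISRECOGNITION = auto()   # ASR hypothesis did not match what the user said
    AWARE = auto()            # turn where the user shows awareness of a system error
    CORRECTION = auto()       # user is correcting a prior system error

@dataclass
class UserTurn:
    text: str                                   # transcribed user utterance
    labels: set[Label] = field(default_factory=set)

dialogue = [
    UserTurn("I'd like to find a train from Ch- to Chicago from Baltimore.",
             {Label.MISRECOGNITION}),
    UserTurn("I'd like to find a train from Baltimore to Chicago.",
             {Label.MISRECOGNITION, Label.AWARE, Label.CORRECTION}),
    UserTurn("Baltimore to Chicago.",
             {Label.AWARE, Label.CORRECTION}),
]

corrections = [t.text for t in dialogue if Label.CORRECTION in t.labels]
print(f"{len(corrections)} of {len(dialogue)} annotated turns are corrections")
```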

Implications for Tutorial Dialogue
Hypothesis
– Similar features will prove useful for predicting and adapting to problematic pedagogical situations
Challenge
– What pedagogical situations should (and can) be annotated?
Starting Points
– "Tutorial-level misrecognitions" (e.g., a correct student answer is understood to be incorrect by the system)
– Problematic affective states (e.g., confusion, boredom, anger, frustration [Evens 2002])
– Subjective language (Wiebe)
– Off-talk
– Initiative, dialogue acts / discourse structure

Emotion and Prosody (Shriberg et al. 2001)
Neutral: "July 30", "Yes"
Disappointed/tired: "No"
Amused/surprised: "No"
Annoyed: "Yes", "Late morning" (HYP)
Frustrated: "Yes", "No", "No, I am …" (HYP), "There is no Manila"
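For concreteness, the kind of prosodic evidence behind such emotion labels can be pulled from a recorded turn with standard tools. The sketch below uses librosa and is only illustrative: the audio file name and the specific feature set are assumptions for this sketch, not the features of Shriberg et al.

```python
# Illustrative prosodic feature extraction for one recorded user turn (librosa).
import librosa
import numpy as np

y, sr = librosa.load("user_turn.wav", sr=None)

# Pitch (F0) contour via probabilistic YIN; unvoiced frames come back as NaN.
f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                  fmax=librosa.note_to_hz("C7"), sr=sr)

features = {
    "duration_sec": librosa.get_duration(y=y, sr=sr),
    "f0_mean": float(np.nanmean(f0)),
    "f0_max": float(np.nanmax(f0)),
    "energy_mean": float(librosa.feature.rms(y=y).mean()),
    # Rough pause proxy: fraction of frames judged unvoiced.
    "unvoiced_ratio": float(1.0 - voiced_flag.mean()),
}
print(features)
```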

Status
Currently collecting a corpus of human-human spoken tutoring dialogues
Implementation of the human-computer system in progress
Lots of opportunities for manual and automated dialogue annotation, analysis via machine learning, and incorporation of insights back into the system

Summary
Adding spoken dialogue to tutoring systems provides both opportunities and challenges
Expected Contributions
– Empirical comparisons with text-based tutoring dialogue systems
– Annotation schemes for dialogue states of potential pedagogical interest
– Use of prosodic and other features to predict such states
– Exploitation of such predictions by the tutoring system

Demo
ITSpoke (or your idea for a name here)
Architecture: Sphinx ASR + Festival TTS + Why2
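At a high level, the demo architecture chains speech recognition, the Why2 back end, and speech synthesis in a turn-taking loop. The sketch below is only a schematic of that loop: recognize_speech, why2_respond, and speak are hypothetical placeholder wrappers standing in for Sphinx, Why2, and Festival, not real APIs from those systems.

```python
# Schematic turn loop for the ITSpoke-style pipeline (ASR -> tutor -> TTS).
# recognize_speech(), why2_respond(), and speak() are hypothetical placeholders
# for Sphinx ASR, the Why2 dialogue back end, and Festival TTS.

def recognize_speech() -> str:
    """Capture one student turn and return the ASR hypothesis (placeholder)."""
    return input("Student (typed stand-in for speech): ")

def why2_respond(student_turn: str) -> str:
    """Send the student turn to the tutor back end and get its reply (placeholder)."""
    return f"Tell me more about: {student_turn!r}"

def speak(tutor_turn: str) -> None:
    """Hand the tutor reply to the speech synthesizer (placeholder: print it)."""
    print("Tutor:", tutor_turn)

def run_session() -> None:
    speak("Hello. Please state your answer to the physics question.")
    while True:
        student_turn = recognize_speech()
        if student_turn.strip().lower() in {"quit", "goodbye"}:
            break
        speak(why2_respond(student_turn))

if __name__ == "__main__":
    run_session()
```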