1
Wolfgang Wahlster
Language Technologies for the Mobile Internet Era
German Research Center for Artificial Intelligence, DFKI GmbH, Stuhlsatzenhausweg 3, 66123 Saarbruecken, Germany
phone: (+49 681) 302-5252/4162, fax: (+49 681) 302-5341, e-mail: wahlster@dfki.de, WWW: http://www.dfki.de/~wahlster
2
© W. Wahlster
Multimodal Interfaces to 3G Mobile Services
Market studies (May 2002) predict for multimodal UMTS systems:
- cumulative revenues of almost 1 trillion from launch until 2010
- non-voice service revenues will dominate voice revenues by year 3 and will comprise 66% of 3G service revenues by 2010
- 322 billion in revenues in 2010
- in 2010 the average 3G subscriber will spend about 30 per month on 3G data services
3
Multimodal UMTS Systems: Intelligent Interaction with Mobile Internet Services
- access to web content and web services anywhere and anytime
- access to corporate networks and virtual private networks from any device
- access to edutainment and infotainment services
- access to all messages (voice, email, multimedia, MMS) from any single device
- personalization and localization
4
Mobile Messaging Services Evolution: From SMS to MMS
- Infrastructure: SS7, SMSC (SMS); SMSC (EMS); MMS relay and servers, UMTS, IP/MPLS protocols (MMS)
- Terminals: standard phones; EMS phones; MMS phones with integrated image capture, smart phones
- Applications: text; enhanced text, pictures, audio; multimedia, video
- Customer expectation: ubiquity, youth focus; limited enhancement; personalized services, location-based services, emotional experience, enhanced message creation
Language technologies for MMS: speech synthesis (with affect), multimodal authoring interface, speech-based retrieval of media objects
5
From Spoken Dialogue to Multimodal Dialogue
- Verbmobil: today's cell phone, speech only
- SmartKom: third-generation UMTS phone; speech, graphics and gesture
6
Merging Various User Interface Paradigms
Spoken dialogue, graphical user interfaces, gestural interaction, facial expressions, and haptic input merge into multimodal interaction.
7
Using All Human Senses for Intuitive Interaction: Code, Media and Modalities
- MEDIA (physical information carriers): system input channels, output channels, storage (HD drive, DVD)
- MODALITIES (human senses): visual, tactile, auditory, haptic
- CODE (systems of symbols): language, graphics, gesture, facial expressions
8
Symbolic and Subsymbolic Fusion of Multiple Modes
Input analyzers: speech recognition, gesture recognition, prosody recognition, facial expression recognition, lip reading
- Subsymbolic fusion: neural networks, hidden Markov models
- Symbolic fusion: graph unification, Bayesian networks
Result: reference resolution and disambiguation in a semantic representation
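The symbolic fusion via graph unification named on this slide can be sketched in a few lines. This is a minimal illustration, not SmartKom code: the feature structures, attribute names, and the movie entity are invented for the example.

```python
def unify(fs1, fs2):
    """Unify two feature structures (nested dicts); return None on a clash."""
    result = dict(fs1)
    for key, val in fs2.items():
        if key not in result:
            result[key] = val
        elif isinstance(result[key], dict) and isinstance(val, dict):
            sub = unify(result[key], val)
            if sub is None:
                return None
            result[key] = sub
        elif result[key] != val:
            return None  # feature clash: the hypotheses are incompatible
    return result

# speech: "I'd like to see this movie" -> referent left underspecified
speech = {"act": "request", "object": {"type": "movie"}}
# gesture: pointing at a screen region, resolved to a concrete movie entity
gesture = {"object": {"type": "movie", "id": "m42",
                      "title": "A Little Christmas Story"}}

print(unify(speech, gesture))
```

When unification fails (a feature clash), that pairing of hypotheses is discarded, which is how incompatible speech and gesture readings filter each other out.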
9
Mutual Disambiguation of Multiple Input Modes
The combination of speech and vision analysis increases the robustness and understanding capabilities of multimodal user interfaces.
- Speech recognition + lip reading: increases robustness in noisy environments
- Speech recognition + gesture recognition (XTRA, SmartKom): referential disambiguation and focus control
- Speech recognition + facial expression recognition (SmartKom): recognition of irony and sarcasm, scope disambiguation
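The mutual-disambiguation idea can be illustrated with toy score-level fusion: each modality ranks the same candidates, and the jointly best candidate wins even if no single modality is sure. The n-best lists and scores below are invented for the example, not SmartKom data.

```python
# Hypothetical n-best lists: each modality scores candidate words.
speech_nbest = {"seat": 0.42, "sheet": 0.38, "heat": 0.20}
lips_nbest   = {"seat": 0.35, "heat": 0.33, "sheet": 0.32}

def fuse(*modality_scores):
    """Rank candidates by the product of their per-modality scores."""
    candidates = set.intersection(*(set(m) for m in modality_scores))
    joint = {c: 1.0 for c in candidates}
    for scores in modality_scores:
        for c in candidates:
            joint[c] *= scores[c]
    return max(joint, key=joint.get)

print(fuse(speech_nbest, lips_nbest))  # -> seat
```

In a noisy environment the speech scores alone would be unreliable; multiplying in the lip-reading scores is the simplest form of the robustness gain the slide describes.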
10
SmartKom: A Transportable Interface Agent
- SmartKom-Public: a multimodal communication kiosk
- SmartKom-Mobile: a handheld communication assistant
- SmartKom-Home/Office: a multimodal portal to information services
Media analysis kernel of the SmartKom interface agent: interaction management, application management, media design
11
SmartKom's SDDP Interaction Metaphor (SDDP = Situated Delegation-oriented Dialogue Paradigm)
The user specifies a goal and delegates the task to a personalized interaction agent; user and agent cooperate on problems, the user asks questions, and the agent presents results drawn from web services (Service 1, Service 2, Service 3).
See: Wahlster et al. 2001, Eurospeech
12
Multimodal Input and Output in the SmartKom System
Smartakus: Where would you like to sit?
13
Multimodal Interaction with a Life-like Character
User input (speech and gesture): I'd like to reserve tickets for this movie.
Smartakus output (speech, gesture and facial expressions): Where would you like to sit?
User input (speech and gesture): I'd like these two seats.
14
Using Facial Expression Recognition for Affective Personalization
Processing ironic or sarcastic comments:
(1) Smartakus: Here you see the CNN program for tonight.
(2) User: That's great.
(3) Smartakus: I'll show you the program of another channel for tonight. (the facial expression signaled irony)
(2) User: That's great.
(3) Smartakus: Which of these features do you want to see? (the utterance was taken literally)
15
SmartKom: Intuitive Multimodal Interaction
The SmartKom Consortium — main contractor: DFKI Saarbrücken; partners include MediaInterface, European Media Lab, Univ. of Munich, Univ. of Stuttgart, Univ. of Erlangen; sites in Saarbrücken, Aachen, Dresden, Berkeley, Stuttgart, Munich, Heidelberg, Ulm
Project budget: 25.5 million; project duration: 4 years (September 1999 – September 2003)
16
The SmartKom Demonstrator System
Camera for gestural input, microphone, multimodal control of a TV set and of a VCR/DVD player
17
A Demonstration of SmartKom's Multimodal Interface for the German President Dr. Rau
18
Salient Characteristics of SmartKom
- Seamless integration and mutual disambiguation of multimodal input and output on semantic and pragmatic levels
- Situated understanding of possibly imprecise, ambiguous, or incomplete multimodal input
- Context-sensitive interpretation of dialog interaction on the basis of dynamic discourse and context models
- Adaptive generation of coordinated, cohesive and coherent multimodal presentations
- Semi- or fully automatic completion of user-delegated tasks through the integration of information services
- Intuitive personification of the system through a presentation agent
19
Multimodal Input and Output in SmartKom: Fusion and Fission of Multiple Modalities
Input by the user: speech + gesture + facial expressions
Output by the presentation agent: speech + gesture + facial expressions
20
Combination of Speech and Gesture in SmartKom
User: Which feature films are shown tonight on TV?
Smartakus: I show you a survey of tonight's TV films.
User: I can't find anything interesting. Then I'll go to the movies.
Smartakus: Here you see a programme listing of the movies shown in Heidelberg today.
User: This one I would like to see. Where is it shown?
Smartakus: On this map all movie theatres that are showing "A Little Christmas Story" are highlighted.
21
Multimodal Input and Output in SmartKom
User: There I would like to get a reservation.
Smartakus: In this movie theatre a reservation is not possible.
User: Then let's check another theatre. What about this one?
Smartakus: This overview lists all show times for the movie "A Little Christmas Story" in the movie theatre "Castle".
User: Here I would like to get a reservation.
Smartakus: Please show me where you would like to be seated.
User: I would like to get two seats here.
Smartakus: Is this okay?
User: Sure.
Smartakus: I have reserved the seats. Your confirmation number is 635. You can pick up the tickets till half an hour before the show at the ticket box.
User: Okay. Thank you. Good bye.
Smartakus: Good bye.
22
Personalized Interaction with WebTVs via SmartKom (DFKI with Sony, Philips, Siemens)
Example: Multimodal Access to Electronic Program Guides for TV
User: Switch on the TV.
Smartakus: Okay, the TV is on.
User: Which channels are presenting the latest news right now?
Smartakus: CNN and NTV are presenting news.
User: Please record this news channel on a videotape.
Smartakus: Okay, the VCR is now recording the selected program.
23
The Need for Personalization: Adaptive Interaction with Mobile Devices
Display capabilities range from e.g. 60 x 90 pixel b/w on a mobile device to e.g. 1024 x 768 pixel 24-bit color on a desktop.
24
PEACH: Beaming a Life-Like Character from a Large Public Display to a Mobile Personal Device
PEACH: Personalized Edutainment in Museums (IRST – DFKI)
25
A Web of Meaning has more Personalization Potential than a Web of Links
Three layers of webpage annotations:
Layer | Language | Personalization potential
Content | OWL, DAML+OIL | high
Structure | XML | medium
Layout | HTML | low
cf.: Dieter Fensel, James Hendler, Henry Lieberman, Wolfgang Wahlster (eds.): Spinning the Semantic Web, MIT Press, November 2002
26
Personalization: Mapping Web Content Onto a Variety of Structures and Layouts
From the one-size-fits-all approach of static webpages to the perfect-personal-fit approach of adaptive webpages: one content representation (OWL) is mapped onto several structures (XML 1 ... XML n), each of which is rendered in several layouts (HTML 11 ... HTML 3p).
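The content-to-layout mapping can be sketched as a tiny renderer: one device-independent content record is rendered through different structures and layouts per device profile. The content record, profile names, and HTML fragments below are invented for illustration.

```python
# One content record (the device-independent layer of the slide's diagram).
CONTENT = {"title": "Tonight on TV", "items": ["News at 8", "Feature film at 9"]}

def render(content, profile):
    """Map the same content onto a profile-specific structure and layout."""
    if profile == "kiosk":      # rich layout: full list as HTML
        body = "".join(f"<li>{item}</li>" for item in content["items"])
        return f"<h1>{content['title']}</h1><ul>{body}</ul>"
    if profile == "handheld":   # minimal layout: headline plus first item
        return f"{content['title']}: {content['items'][0]} ..."
    raise ValueError(f"unknown profile: {profile}")

print(render(CONTENT, "handheld"))
print(render(CONTENT, "kiosk"))
```

The point of the slide is exactly this separation: adapting the `render` step per user and device leaves the content untouched, which is what makes the adaptation cheap.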
27
SmartKom: Towards Multimodal and Mobile Dialogue Systems for Indoor and Outdoor Navigation
- Seamless integration of various positioning technologies: GSM/UMTS cells, GPS, infrared, WaveLAN, Bluetooth
- Using the same device for driving and walking directions
- Speech and gesture input; graphics and speech output
28
Spoken Dialogues with the Car Navigation System: SENECA
Product announcement for the E-Class Mercedes: end of 2002
29
Getting Driving and Walking Directions via SmartKom
SmartKom can be used for multimodal navigation dialogues in a car.
User: I want to drive to Heidelberg.
Smartakus: Do you want to take the fastest or the shortest route?
User: The fastest.
Smartakus: Here you see a map with your route from Saarbrücken to Heidelberg.
30
Getting Driving and Walking Directions via SmartKom
Smartakus: You are now in Heidelberg. Here is a sightseeing map of Heidelberg.
User: I would like to know more about this church!
Smartakus: Here is some information about St. Peter's Church.
User: Could you please give me walking directions to this church?
Smartakus: On this map, I have highlighted your walking route.
31
SmartKom: Multimodal Dialogues with a Hybrid Navigation System
32
Spoken Navigation Dialogues with SmartKom
User: SmartKom, please look for the nearest parking lot.
Smartakus: The parking garage at the main station provides 300 slots. Opening hours are from 6 am to 11 pm. Do you want to get there?
User: No, please tell me about the next parking option.
Smartakus: The Market parking lot provides 150 slots. It is open 24 hours a day. Do you want to get there?
User: Yes, please.
Smartakus: I'll bring you to the Market parking lot.
33
The High-Level Control Flow of SmartKom (an animation sequence repeated across slides 33–46)
47
A Spectrum of Client/Server Architectures for Mobile Multimodal Systems: From Thin to Fat Clients
- Remote speech understanding: Java-based voice streaming to a server-side speech understanding system
- Distributed speech understanding: Aurora speech features extracted on the client, speech understanding system with a feature interface on the server
- Embedded speech understanding: on the device, with content access (e.g. map updates) via web services
48
M3I: A Mobile, Multimodal, and Modular Interface by DFKI
Platform: iPAQ and Jornada; IBM Embedded ViaVoice; C++ and Embedded Java; Java-based voice streaming; SmartKom's multimodal dialogue engine
1. Hybrid speech understanding = embedded (small vocabulary) + remote/distributed (large vocabulary, topic detection) speech understanding
2. Resource-adaptive speech processing: the availability of a server improves coverage and quality
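The resource-adaptive idea in point 2 can be sketched as a simple fallback strategy: use a large-vocabulary server recognizer when one is reachable, otherwise degrade to the small embedded vocabulary. The vocabulary and the server stub are assumptions for illustration, not the actual M3I API.

```python
# Hypothetical embedded small-vocabulary recognizer.
EMBEDDED_VOCAB = {"navigate", "stop", "louder"}

def embedded_recognize(utterance):
    """Recognize only the first word, and only if it is in the small vocabulary."""
    word = utterance.split()[0].lower()
    return word if word in EMBEDDED_VOCAB else None

def recognize(utterance, server=None):
    """Prefer the server recognizer; fall back to the embedded one."""
    if server is not None:
        try:
            return server(utterance)   # better coverage and quality
        except ConnectionError:
            pass                       # degrade gracefully to embedded
    return embedded_recognize(utterance)

print(recognize("navigate to Heidelberg"))                   # embedded path
print(recognize("navigate to Heidelberg", server=lambda u: u))  # server path
```

The same pattern covers both architectures on the previous slide: the `server` callable could hide either voice streaming or an Aurora-style feature interface.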
49
Example of an Embedded Multimodal Dialogue System: M3I for Pedestrian Navigation (DFKI)
Spoken and gestural input combined with graphics and speech output on an iPAQ
50
Java-Based Voice Streaming for Hybrid Speech Understanding in M3I (DFKI)
51
SmartKom's Added-Value Mobile Service: ActiveList
"Please let me know when I pass a shop selling batteries."
SmartKom sends a note to the user or activates an alarm as soon as the user approaches an exhibit that matches the specification of an item on the ActiveList.
ActiveList's spatial alarm can be combined with:
- route planning and navigation
- temporal and spatial optimization of a visit
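The ActiveList spatial alarm amounts to matching the user's position against nearby points of interest whose offerings match a wanted item. A minimal geofencing sketch; the shops, coordinates, and threshold are invented for the example.

```python
import math

# Hypothetical points of interest and what they offer.
SHOPS = {"electronics shop": (49.234, 6.994), "bakery": (49.240, 7.001)}
SELLS = {"electronics shop": {"batteries", "phones"}, "bakery": {"bread"}}
ACTIVE_LIST = ["batteries"]  # "please let me know when I pass a shop selling batteries"

def near(p, q, threshold_deg=0.002):
    """Crude proximity check in degrees of latitude/longitude."""
    return math.dist(p, q) < threshold_deg

def spatial_alarms(position):
    """Return the nearby shops that match an item on the ActiveList."""
    return [shop for shop, loc in SHOPS.items()
            if near(position, loc)
            and any(item in SELLS[shop] for item in ACTIVE_LIST)]

print(spatial_alarms((49.2345, 6.9945)))  # -> ['electronics shop']
```

A real service would use proper geodesic distance and the positioning technologies listed on slide 27 (GPS, GSM/UMTS cells, infrared, WaveLAN, Bluetooth); the matching logic stays the same.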
52
SmartKom's Added-Value Mobile Service: SpotInspector
"What's going on at the castle right now?"
SmartKom allows the user to have remote visual access to various interesting spots via a selection of webcams, showing current waiting queues, special events and activities.
SpotInspector can be combined with:
- multimedia presentations of the expected program for these spots
- route planning and navigation to these spots
53
SmartKom's Added-Value Mobile Service: PartnerRadar
"Where are Lisa and Tom? What are they looking at?"
SmartKom helps to locate and bring together members of the same party.
Involved technologies:
- navigation and tour instructions
- monitoring of group activity
- additional information on exhibits that are interesting for the whole party
54
Ultimate Simplicity: One-Button Mobile Devices
Components: reflectors, photo detector, speaker, command button, microphone, fingerprint recognizer
8hertz technologies (Germany); CARC Cyber Assist Research Center (Japan)
55
UMTS-Doit: The First Test and Evaluation Center for UMTS-based Multimodal Speech Services in Germany
Components: mobile network, Internet, content provider, Gigastream, UMTS switch, E1/ATM link, RNC (Munich), Node B at DFKI Saarbrücken, PSTN/telephone system, UMTS-Doit server, navigation services
Cooperation between ... and ...
56
UMTS Applications in a Mercedes: Webcam Providing a Look-Ahead of the Traffic Situation
57
Embassi: Multimodal Music Selection in a Car
58
UMTS Application in a Mercedes: Language-based Music Download
DFKI spin-off: natural-language music search
59
Personalized Car Entertainment (DFKI for Bosch)
MP3 music files from the Web; Rist & Herzog for Blaupunkt
60
Research Roadmap of Multimodality 2002–2005 (from the Dagstuhl Seminar "Fusion and Coordination in Multimodal Interaction", 2 Nov. 2001, edited by W. Wahlster)
Tracks (2002 → 2005): empirical and data-driven models of multimodality; computational models of multimodality; advanced methods for multimodal communication; adequate corpora for MM research; a multimodal interface toolkit — towards mobile, human-centered, and intelligent multimodal interfaces.
Milestones include: XML-encoded MM human-human and human-machine corpora; mobile multimodal interaction tools; standards for the annotation of MM training corpora; examples of the added value of multimodality; multimodal barge-in; markup languages for multimodal dialogue semantics; models for effective and trustworthy MM HCI; a collection of the hardest and most frequent/relevant phenomena; task-, situation- and user-aware multimodal interaction; a plug-and-play infrastructure; toolkits for multimodal systems; situated and task-specific MM corpora; a common representation of multimodal content; decision-theoretic, symbolic and hybrid modules for MM input fusion; reusable components for multimodal analysis and generation; corpora with multimodal artefacts and new multimodal input devices; models of MM mutual disambiguation; multiparty MM interaction; a multimodal toolkit for universal access.
61
Research Roadmap of Multimodality 2006–2010 (from the Dagstuhl Seminar "Fusion and Coordination in Multimodal Interaction", 2 Nov. 2001, edited by W. Wahlster)
Tracks (2006 → 2010): empirical and data-driven models of multimodality; advanced methods for multimodal communication; toolkits for multimodal systems — towards ecological multimodal interfaces.
Milestones include: usability evaluation methods for MM systems; multimodal feedback and grounding; tailored and adaptive MM interaction; incremental feedback between modalities during generation; models of MM collaboration; a parametrized model of multimodal behaviour; demonstration of performance advances through multimodal interaction; real-time localization and motion/eye-tracking technology; multimodality in VR and AR environments; resource-bounded multimodal interaction; users' theories of systems' multimodal capabilities; multicultural adaptation of multimodal presentations; affective MM communication; test suites and benchmarks for multimodal interaction; multimodal models of engagement and floor management; non-monotonic MM input interpretation; computational models of the acquisition of MM communication skills; non-intrusive and invisible MM input sensors; biologically-inspired intersensory coordination models.
62
Burning Issues in Multimodal Interaction
- Multimodality: from alternate modes of interaction towards mutual disambiguation and synergistic combinations
- Discourse models: from information-seeking dialogs towards argumentative dialogs and negotiations
- Domain models: from closed-world assumptions towards the open world of web services
- Dialog behaviour: from automata models towards a combination of probabilistic and plan-based models
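The last point can be illustrated by contrasting a deterministic dialog automaton with a probabilistic policy layered on top of it. The states, moves, and probability below are illustrative assumptions, not a model from the talk.

```python
import random

# A fixed automaton: (state, user move) -> next state.
AUTOMATON = {("greet", "request"): "clarify",
             ("clarify", "answer"): "confirm",
             ("confirm", "yes"): "done"}

def automaton_step(state, user_move):
    """Deterministic dialog control: follow the transition table."""
    return AUTOMATON.get((state, user_move), state)

def stochastic_step(state, user_move, rng=random.Random(0)):
    """Probabilistic policy: usually follow the automaton, but
    occasionally re-clarify to model uncertainty about the user's move."""
    nxt = automaton_step(state, user_move)
    return nxt if rng.random() < 0.9 else "clarify"

state = "greet"
for move in ["request", "answer", "yes"]:
    state = automaton_step(state, move)
print(state)  # -> done
```

Plan-based models would replace the transition table with goals and operators; the probabilistic layer shown here is the simplest way to combine the two styles the slide mentions.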
63
Conclusions
SmartKom is a multimodal dialog system that combines speech, gesture, and facial expressions for input and output. Spontaneous speech understanding is combined with the video-based recognition of natural gestures. One of the major scientific goals of SmartKom is to design new computational methods for the seamless integration and mutual disambiguation of multimodal input and output on a semantic and pragmatic level. SmartKom is based on the situated delegation-oriented dialog paradigm, in which the user delegates a task to a virtual communication assistant, visualized as a life-like character on a graphical display.
64
http://smartkom.dfki.de/ URL of this Presentation: http://www.dfki.de/~wahlster/LangTech-2002
65
Thank you very much for your attention
© 2002 DFKI, design by R.O.