eNTERFACE’08 Project #1 “Multiparty Communication with a Tour Guide ECA” Final Presentation, August 29th, 2008


Outline
- Project Overview: Objectives, Issues & Work Done
- System Overview: Configuration and Design
- Conclusion

Project Objectives
Main objective: develop an ECA tour guide system which can interact with one or two users.
Research features:
- A multiparty dialogue model and scenario between two humans and an ECA
- Handling and combining input data: users' presence and behaviors (speech, tracking)
- Gaze behavior control and a nonverbal model of the ECA

Work Done: Component Functionality Overview
We implemented components which support a scenario based on narration and interruptions:
- The ECA is the narrator; users can ask context-related questions (“where”, “how”, “when”)
- Speaker, addressee, and listener identification; an ECA gaze model
- The ECA can ask users simple “yes/no” questions to keep their attention
- The system can detect users' appearance and dynamically initiate/end a session (a minimal sketch follows below)
- The system can detect and handle situations when users are paying less attention
- The system can recover from failures (e.g., when speech recognition does not recognize a user's speech)
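The presence-driven session logic lends itself to a small state machine. The sketch below is a minimal illustration; the class and plan names (`SessionManager`, `greet_and_start_tour`, `end_session`) are assumptions, not the project's actual identifiers.

```python
from enum import Enum, auto
from typing import Optional

class SessionState(Enum):
    IDLE = auto()    # no users in front of the system
    ACTIVE = auto()  # a tour session is running

class SessionManager:
    """Starts a tour when users appear and ends it when they leave."""

    def __init__(self):
        self.state = SessionState.IDLE

    def on_presence_update(self, num_users: int) -> Optional[str]:
        """Return a plan to trigger, if the presence change demands one."""
        if self.state is SessionState.IDLE and num_users > 0:
            self.state = SessionState.ACTIVE
            return "greet_and_start_tour"
        if self.state is SessionState.ACTIVE and num_users == 0:
            self.state = SessionState.IDLE
            return "end_session"
        return None  # no session change
```

For example, `SessionManager().on_presence_update(2)` would return `"greet_and_start_tour"`, while a later update with zero users would end the session.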

Work Done... About to Be Done...
- Components are implemented
- The system is being integrated; debugging and full testing are needed
Not supported:
- Detection of situations when users start a conversation among themselves
- Detection of speech collisions between users
- Smart scheduling and control of the ECA's behaviors

System Configuration

Speech Recognition

Functionality:
- Detects users' requests (“Where”, “How”, “When”, “Who”)
- Detects users' willingness to leave the system
- Detects answers to simple questionnaires (“yes”/“no”)
- Detects unknown words
Implementation:
- Keyword detection with a confidence score and speech duration, implemented using the Loquendo API (see the sketch below)
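The Loquendo API is proprietary, so the sketch below does not reproduce its calls; it only illustrates the post-processing step the slide describes: accepting a keyword hypothesis when its confidence is high enough, and routing everything else to the unknown-word recovery path. All names and the threshold value are assumptions.

```python
from dataclasses import dataclass

KEYWORDS = {"where", "how", "when", "who", "yes", "no", "goodbye"}
CONFIDENCE_THRESHOLD = 0.6  # assumed value; tuned per recognizer

@dataclass
class RecognitionResult:
    word: str          # best hypothesis returned by the recognizer
    confidence: float  # recognizer confidence score in [0, 1]
    duration_ms: int   # speech duration, usable to filter spurious noise

def classify(result: RecognitionResult) -> str:
    """Map a raw recognition result to a dialogue event."""
    if result.confidence < CONFIDENCE_THRESHOLD:
        return "unknown_word"  # triggers the failure-recovery path
    if result.word.lower() in KEYWORDS:
        return f"keyword:{result.word.lower()}"
    return "unknown_word"
```

For instance, `classify(RecognitionResult("Where", 0.82, 640))` yields `"keyword:where"`, while a low-confidence hypothesis maps to `"unknown_word"`.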

Nonverbal Inputs and Understanding

Nonverbal inputs: users' appearance and face orientation
Functionality of components:
- Detect motion and users' appearance/disappearance
- Detect the number of users present
- Detect each (left, right) user's face orientation and increased/decreased attention
Implementation:
- OpenCV (motion) & Okao Vision (face orientation, gazing); a motion-detection sketch follows below
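Okao Vision is proprietary, but the OpenCV motion part can be illustrated with plain frame differencing. A minimal sketch using only standard OpenCV calls; the two threshold constants are assumed values, not the project's:

```python
import cv2

MOTION_PIXEL_THRESHOLD = 25   # per-pixel intensity change (assumed)
MOTION_AREA_THRESHOLD = 5000  # changed pixels needed to report motion (assumed)

def detect_motion(prev_gray, curr_gray) -> bool:
    """Report motion when enough pixels changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, MOTION_PIXEL_THRESHOLD, 255, cv2.THRESH_BINARY)
    return cv2.countNonZero(mask) > MOTION_AREA_THRESHOLD

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if detect_motion(prev, curr):
        print("motion: possible user appearance")
    prev = curr
```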

Decision Making Component

Decision Making Component - Functionalities
Makes decisions on “when and what to do to whom”:
- Handles multimodal input events (number of users, attention, speech channels)
- Handles user interruptions while the ECA is speaking
- Handles failures from the speech recognition component
- Generates multimodal output and controls the ECA's gazing
Simple rule: “first one will be served”, with the “yes”/“no” questionnaire as the exception (see the sketch below)
No domain knowledge or behavior scheduling
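A toy illustration of the “first one will be served” rule and its questionnaire exception, with all structures hypothetical: events are served in arrival order, except that during a yes/no questionnaire the answers are collected from each user before acting.

```python
from collections import deque

class InputArbiter:
    """Serve multimodal input events first-come, first-served."""

    def __init__(self):
        self.queue = deque()           # (user_id, event) in arrival order
        self.collecting_votes = False  # True while a yes/no questionnaire runs
        self.votes = {}                # user_id -> "yes" / "no"

    def on_event(self, user_id: str, event: str) -> None:
        if self.collecting_votes and event in ("yes", "no"):
            self.votes[user_id] = event  # exception: gather every answer
        else:
            self.queue.append((user_id, event))

    def next_action(self):
        """Return the next event to serve, honoring arrival order."""
        if self.queue:
            return self.queue.popleft()  # first one will be served
        return None
```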

Decision Making Component - Implementation
The component uses ideas from information state theory [Larsson’00] and AIML:
- The progress of the dialogue is represented by a set of variables (see the toy sketch below)
- The most appropriate plans are selected and scheduled by simple inference
- Time control is used to obtain messages from both speech channels in the case of (“yes”/“no”) questions
The component is being developed using the MIDIKI toolkit as a reference
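In the spirit of the information-state approach [Larsson’00], the dialogue state can be held in a flat set of variables and advanced by simple precondition/effect rules. A toy sketch; the variable and plan names are invented for illustration:

```python
# Dialogue state as a flat set of variables (information-state style).
state = {
    "num_users": 2,
    "eca_speaking": True,
    "pending_question": None,  # e.g. a "where" request from a user
    "attention_low": False,
}

# Each rule: (precondition over the state, plan to schedule, state effects).
RULES = [
    (lambda s: s["pending_question"] and s["eca_speaking"],
     "pause_narration_and_answer",
     {"eca_speaking": False, "pending_question": None}),
    (lambda s: s["attention_low"],
     "ask_yes_no_question",
     {"attention_low": False}),
]

def select_plan(state: dict) -> str:
    """Pick the first rule whose precondition holds and apply its effects."""
    for precondition, plan, effects in RULES:
        if precondition(state):
            state.update(effects)
            return plan
    return "continue_narration"
```

With `state["pending_question"] = "where"`, `select_plan(state)` returns `"pause_narration_and_answer"` and updates the state; otherwise the narration simply continues.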

Animation Player

Functionality:
- The animation player uses scripted behaviors (the GSML language) to generate speech and animation
- A model of gaze in multiparty communication is supported:
  - Gazing control is applied at the utterance level
  - The gaze pattern follows conversational rules (who is the addressee, who is a listener); a sketch follows below
Implementation:
- Visage SDK (based on the MPEG-4 standard)
- 3ds Max
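GSML scripts themselves are project-specific, so the sketch below only illustrates the utterance-level gaze rule the slide describes: the ECA gazes mainly at the addressee, with brief glances at the listener. The timing constants and names are assumptions, not the project's model.

```python
import random

def gaze_schedule(addressee: str, listeners: list, utterance_s: float):
    """Return (start_time_s, gaze_target) pairs for one utterance.

    Assumed rule of thumb: hold gaze on the addressee most of the time,
    with short glances at each listener to keep them engaged.
    """
    t = 0.0
    events = [(t, addressee)]
    while t < utterance_s:
        t += random.uniform(1.5, 3.0)      # hold gaze on the addressee
        if listeners and t < utterance_s:
            events.append((t, random.choice(listeners)))
            t += random.uniform(0.5, 1.0)  # brief glance at a listener
            events.append((t, addressee))
    return events

# Example: a 6-second utterance addressed to the left user
for start, target in gaze_schedule("user_left", ["user_right"], 6.0):
    print(f"{start:4.1f}s -> gaze at {target}")
```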

Conclusion
- Components to support context-based two-party human-ECA communication are implemented
- The system is being integrated, but not fully tested
Component issues:
- Missing face tracking and domain knowledge about users' behaviors
- Simple dialogue management and control (no smart scheduling or smart gaze control)
Future directions: system debugging and testing, implementing tracking, improving gazing control, a study of users' behaviors and gazing, system evaluation