Enabling enactive interaction in virtualized experiences
Stefano Tubaro and Augusto Sarti, DEI – Politecnico di Milano, Italy

How do we enable virtualized enaction with incomplete sensory stimulation?

Gaining enactive knowledge from a mediated experience
- Enactive knowledge: information gained through perception-action interaction
  - usually refers to an environment or a process (task)
  - inherently multimodal, as it requires the coordination of the various senses
  - unlike knowledge acquired in mediated form, which is usually symbolic or iconic, enactive knowledge is based on the active use of our senses
  - direct and intuitive (based on experience)
  - often acquired without being aware of it
- Research on enaction is mostly focused on developing enactive multimodal interfaces.

Realistic or plausible?
- Aside from videogames, where plausibility plays a dominant role, enactive knowledge acquisition is usually aimed at learning about reality. This is why mediated enaction usually relies on realism and sensory transparency, which makes its applicability rather limited.
- More promising (for mass use) is to accept that virtualized enaction is based on reduced sensory perception, and to develop:
  - methodologies that allow the user to build reliable mappings between real and virtual enactive knowledge
  - applications that can assess, in a closed-loop fashion, whether such mappings have been correctly established

An example: the virtual dressing room
- The e-market for articles of clothing is slumping because it offers no virtual try-on at home.
- The impact of applications such as virtual mirrors has been modest because of high cost, modest realism, and lack of multimodal interaction.
- Research is needed to develop virtual try-on applications based on physically and perceptually plausible interactional rendering, with reduced sensory perception (audio-visual) and limited realism, as long as we know how to:
  - render what is perceptually relevant for a virtual experience that the user can instinctively remap onto real life
  - correctly map haptic interaction onto audio-visual interaction (sensory substitution through cross-modal processing; see the sketch below)
  - correctly map real interaction onto virtual interaction by exploiting the user's experiential knowledge
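Not part of the original slides: a minimal Python sketch of the sensory-substitution idea, in which a haptic contact event estimated from a cloth simulation is remapped onto audio-visual rendering parameters. All names (HapticEvent, map_to_audio_visual) and the specific mapping formulas are hypothetical illustrations, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class HapticEvent:
    """Hypothetical contact event from a cloth/hand simulation."""
    pressure: float          # normal force, normalized to [0, 1]
    slip_speed: float        # tangential sliding speed in m/s
    fabric_roughness: float  # material parameter in [0, 1]

def map_to_audio_visual(ev: HapticEvent) -> dict:
    """Sensory substitution sketch: render the missing haptic channel
    through audio (friction-like noise) and visual (cloth deformation) cues."""
    # Louder, brighter friction sound for faster sliding on rougher fabric
    audio_gain = min(1.0, ev.slip_speed * (0.5 + 0.5 * ev.fabric_roughness))
    audio_cutoff_hz = 200.0 + 4000.0 * ev.fabric_roughness
    # Stronger visual wrinkling/indentation for higher contact pressure
    visual_deformation = ev.pressure ** 0.7
    return {
        "audio_gain": audio_gain,
        "audio_cutoff_hz": audio_cutoff_hz,
        "visual_deformation": visual_deformation,
    }

if __name__ == "__main__":
    print(map_to_audio_visual(HapticEvent(pressure=0.6, slip_speed=0.3, fabric_roughness=0.8)))
```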

An example: the virtual dressing room
- The mappings between experiential memory and virtual experience need to be established in a closed-loop fashion by an intelligent HMI, which is able to read metacommunication cues carried by posture, gestures, gait, facial expression, voice prosody, etc.
- The HMI must become enactive and learn on the fly whether the virtual experience is being effectively remapped onto the real world.
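As an illustration of the closed-loop idea (again not from the original slides), the sketch below estimates, from hypothetical normalized metacommunication-cue features, whether the real-to-virtual mapping has been established, and adapts the rendering accordingly. The cue names, weights, and thresholds are invented for the example.

```python
from typing import Dict

def mapping_confidence(cues: Dict[str, float]) -> float:
    """Crude estimate of whether the user has remapped the virtual experience
    onto real-world experience, from metacommunication cues normalized to [0, 1].
    Cue names and weights are illustrative only."""
    weights = {
        "gaze_on_task": 0.4,     # attention directed at the garment
        "gesture_fluency": 0.3,  # smooth, confident manipulation gestures
        "posture_ease": 0.2,     # relaxed posture
        "hesitation": -0.5,      # pauses / aborted gestures lower confidence
    }
    score = sum(weights[k] * cues.get(k, 0.0) for k in weights)
    return max(0.0, min(1.0, 0.5 + score))

def adapt_rendering(confidence: float) -> str:
    """Close the loop: emphasize redundant cross-modal cues while confidence is low."""
    if confidence < 0.4:
        return "boost audio-visual substitution cues, slow down the interaction"
    if confidence < 0.7:
        return "keep current rendering, keep monitoring"
    return "reduce redundant cues, allow faster interaction"

cues = {"gaze_on_task": 0.9, "gesture_fluency": 0.4, "posture_ease": 0.6, "hesitation": 0.5}
print(adapt_rendering(mapping_confidence(cues)))
```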

How do we enable enactive virtualized interpersonal interaction with incomplete sensory perception?

Next step: enactive interpersonal interaction in virtual spaces
- Interpersonal interaction is based on a closed-loop exchange of symbolic or iconic information:
  - semantic information, exchanged through speech, images, intentional gestures, the display of clothes or specific objects, ...
  - meta-communication cues: body language (facial expression, gestures, body posture, gait, etc.), voice prosody and expression, tactile cues, ...
- Interactional meta-communication is:
  - purely enactive
  - inherently multimodal
  - instinctive (reduced awareness)
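To make the two channels of this exchange concrete, here is a small illustrative data structure (hypothetical names, not from the original slides) that keeps the intentional semantic payload separate from the enactive metacommunication cues accompanying it.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class InteractionMessage:
    """Illustrative split of one exchange in mediated interpersonal interaction."""
    # Semantic channel: symbolic/iconic content exchanged intentionally
    semantic: Dict[str, Any] = field(default_factory=dict)
    # Metacommunication channel: enactive, largely unconscious cues (normalized)
    meta_cues: Dict[str, float] = field(default_factory=dict)

msg = InteractionMessage(
    semantic={"speech": "Does this fit me?", "shown_object": "jacket"},
    meta_cues={"gaze_aversion": 0.2, "prosody_arousal": 0.7, "posture_openness": 0.8},
)
print(msg.semantic["speech"], msg.meta_cues["prosody_arousal"])
```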

Issues in virtualized interactional experiences
Two options for enactive interaction (see the sketch below):
- Interaction conveyed through signals
  - meta-information to be captured, transferred and rendered as transparently and completely as possible (e.g. 3D video and audio)
  - problem: lack of effective and inexpensive haptic sensors/displays
- Interaction mediated by models
  - e.g. 3D body models (avatars), interactional sounds, etc.
  - needs: compensating for incomplete and imperfect sensing and display technology with intelligent sensors, affective and cross-modal processing, perceptual multimodal displays, etc.
Again, the HMI must become enactive and be aware of the impact of the virtual experience on the users.
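The sketch below (an illustration added here, with hypothetical class and field names) casts the two options as interchangeable strategies behind a common interface: one forwards captured signals as directly as possible, the other drives a model (avatar) from higher-level estimates.

```python
from abc import ABC, abstractmethod

class InteractionChannel(ABC):
    """Illustrative common interface for the two options on the slide."""
    @abstractmethod
    def transmit(self, captured: dict) -> dict:
        """Turn captured sensor data into what the remote party perceives."""

class SignalBasedChannel(InteractionChannel):
    """Option 1: convey the interaction through signals (e.g. 3D video + audio),
    as transparently and completely as the capture/display chain allows."""
    def transmit(self, captured: dict) -> dict:
        return {"video_3d": captured.get("video_3d"), "audio": captured.get("audio")}

class ModelBasedChannel(InteractionChannel):
    """Option 2: mediate the interaction through models (avatars, interactional
    sounds), compensating for incomplete sensing with cross-modal processing."""
    def transmit(self, captured: dict) -> dict:
        pose = captured.get("skeleton_pose", [])        # from intelligent sensors
        affect = captured.get("estimated_affect", 0.0)  # from affective processing
        return {"avatar_pose": pose, "avatar_expressiveness": affect}
```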

In perspective
- Natural human-machine interaction
  - multimodal interaction at multiple levels of abstraction: from physical to emotional
- Enactive machines
  - machines that learn from the interaction with humans
  - machines that learn from multimodal observations of human-to-human interaction
- A point of convergence between:
  - research on intelligent sensing: computer vision, audio and acoustic analysis and processing, pattern recognition, multimodal fusion, ... (a fusion sketch follows below)
  - research on multimodal sensors and displays
  - research on behavioural psychology (social learning theory) and sociology
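As a closing illustration of the multimodal fusion ingredient mentioned above (not from the original slides), here is a minimal weighted late-fusion sketch that combines per-modality class scores, e.g. from a vision-based and an audio-based analyzer. Modality names, class labels, and weights are assumptions made for the example.

```python
import numpy as np

def late_fusion(scores_per_modality: dict, weights: dict) -> np.ndarray:
    """Weighted late fusion of per-modality class scores (vision, audio, ...)."""
    fused = None
    for name, scores in scores_per_modality.items():
        contribution = weights.get(name, 1.0) * np.asarray(scores, dtype=float)
        fused = contribution if fused is None else fused + contribution
    return fused / fused.sum()  # renormalize to a probability-like vector

# Example: class scores for {neutral, engaged, hesitant} from two analyzers
scores = {"vision": [0.2, 0.6, 0.2], "audio": [0.1, 0.3, 0.6]}
print(late_fusion(scores, weights={"vision": 0.6, "audio": 0.4}))
```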