Expressive Tangible Acoustic Interfaces
Antonio Camurri, Corrado Canepa, and Gualtiero Volpe
InfoMus Lab, DIST-University of Genova, Viale Causa 13, 16145 Genova, Italy

The EyesWeb Framework
- Framework for multimodal analysis and processing.
- Visual environment for the design of applications.
- Input support for frame grabbers, wireless on-body sensors, audio and MIDI input, serial and network connections, and standard input devices.
- Math and filters (e.g., operations with scalars and matrices).
- Imaging libraries (analysis, processing, and conversion of images and video).
- Sound and MIDI libraries.
- Support for industry standards: DirectX, ASIO, VST, FEAPI, OSC, FreeFrame.
- Connection with Pure Data, Max/MSP, Kyma (see the OSC sketch at the end of this transcript).
- SDK for custom extensions.
- Available for free at

Expressive TAIs in EyesWeb
- Libraries for gathering and processing data from TAI sensors.
- Libraries for localization of touching gestures.
- Expressive Gesture Processing Libraries for the analysis of expressive qualities of gestures.

Figure: The EyesWeb framework, running an application for the analysis of expressive gesture.

Applications
InfoMus Lab is exploring the possible exploitation of expressive TAIs in:
- interactive music, theatre, and arts;
- therapy and rehabilitation;
- interactive edutainment;
- applications for museums, exhibits, and science centers (authoring);
- interactive entertainment;
- ambient intelligence;
- tools for teaching by playing and experiencing in simulated environments;
- tools for enhancing communication about new products or ideas at conventions and "information ateliers".

Example: the music theatre opera "Un avatar del diavolo" (composer Roberto Doati), performed at La Biennale, Venezia, in September. The opera includes a sensorized chair: expressive gestures of an actor on the chair (e.g., caresses or tapping-like hand movements) control sound generation and processing in real time.

Tangible Acoustic Interfaces (TAIs)
Objective: transforming physical objects (including everyday objects such as tables or chairs), augmented surfaces, and spaces into tangible-acoustic embodiments of natural, seamless, unrestricted interfaces. Physical objects and space act as media that bridge the gap between the virtual and physical worlds and make information accessible through large touchable objects and ambient media.
TAIs exploit the propagation of sound in physical objects to obtain information about where, when, and how an object is touched (the localization sketch at the end of this transcript illustrates the principle). TAIs are integrated, in a multimodal perspective, with other sensors (e.g., video cameras).

Expressive Gesture Processing
Analysis and processing of the user's movements and gestures to obtain information related to the user's affective/emotional state. Such information may include the way in which the user approaches a TAI: for example, touching it in a soft and light way or in a hard and heavy way.
Analysis is carried out with a multi-layered architecture (see the pipeline sketch at the end of this transcript):
- A first layer extracts information from video and audio signals and from sensor data.
- Algorithms then extract expressive features characterizing the gesture (e.g., energy, fluency, impulsiveness).
- Finally, the extracted feature values are fed to machine learning modules that associate an expressive characterization with the gesture.

Figure: Two examples of TAIs: a table running the Google Earth application and a sensorized chair.

Figure: The music theatre piece "Un avatar del diavolo" (composer R. Doati), La Biennale, Venezia, September. The piece exploits TAI technologies (a sensorized chair).

With the partial support of the EU-IST Project TAI-CHI (Tangible Acoustic Interfaces for Computer-Human Interaction).
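The EyesWeb framework's OSC support, listed above, lets a patch stream extracted values to environments such as Pure Data or Max/MSP. The following is a minimal sketch of that idea using the third-party python-osc package; the address /eyesweb/energy, the port 9000, and the feature value are illustrative assumptions, not EyesWeb's actual output.

```python
# Minimal OSC-sender sketch (illustrative; not the EyesWeb SDK itself).
# Requires the third-party python-osc package: pip install python-osc
from pythonosc import udp_client

# Hypothetical host/port: a Pure Data or Max/MSP patch would listen on
# the matching UDP port for incoming OSC messages.
client = udp_client.SimpleUDPClient("127.0.0.1", 9000)

# Stream a (hypothetical) expressive-feature value under an OSC address.
client.send_message("/eyesweb/energy", 0.73)
```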
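As described in the TAI section, touch information is recovered from sound propagating inside the object itself. A common way to obtain the "where" is the time difference of arrival (TDOA) between contact sensors. The sketch below shows the 1-D case for a bar with a sensor at each end; it illustrates the general principle only, and the function names, sensor layout, and calibrated wave speed are assumptions, not the TAI-CHI project's actual algorithm.

```python
# Sketch: locating a tap on a 1-D surface (e.g., the edge of a table)
# from the time difference of arrival (TDOA) at two contact sensors.
import numpy as np

def tdoa_seconds(sig_a, sig_b, sample_rate):
    """Arrival delay of sig_a relative to sig_b, via cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)   # samples by which sig_a lags
    return lag / sample_rate

def tap_position(sig_a, sig_b, sample_rate, sensor_distance_m, wave_speed_mps):
    """Distance of the tap from sensor A, for sensors at the two ends.

    wave_speed_mps is the structural wave speed in the material,
    assumed known from a calibration step.
    """
    dt = tdoa_seconds(sig_a, sig_b, sample_rate)
    # Positive dt means the wave reached sensor B first, i.e. the tap
    # is closer to B; the path-length difference is dt * wave_speed_mps.
    return (sensor_distance_m + dt * wave_speed_mps) / 2.0
```

On a 2-D surface such as a tabletop, the same pairwise delays from three or more sensors define hyperbolae whose intersection gives the touch point.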
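The multi-layered architecture of the Expressive Gesture Processing section can be pictured as a small pipeline. The sketch below is a toy rendition under stated assumptions: the formulas for energy, fluency, and impulsiveness are plausible stand-ins rather than the actual definitions in the EyesWeb Expressive Gesture Processing Library, and a deliberately simple nearest-centroid classifier stands in for the machine learning modules.

```python
# Toy three-layer pipeline: sensor data -> expressive features -> label.
import numpy as np

# Layer 1: low-level extraction from sensor data (here, positions sampled
# along one axis are turned into a velocity signal).
def velocity_from_positions(positions, sample_rate):
    return np.diff(positions) * sample_rate

# Layer 2: expressive features characterizing the gesture.
def expressive_features(velocity, sample_rate):
    energy = float(np.mean(velocity ** 2))                # overall intensity
    jerk = np.diff(velocity, n=2) * sample_rate ** 2      # 2nd deriv. of velocity
    fluency = 1.0 / (1.0 + float(np.mean(np.abs(jerk))))  # smoother -> higher
    impulsiveness = float(np.max(np.abs(velocity)) /
                          (np.mean(np.abs(velocity)) + 1e-9))
    return np.array([energy, fluency, impulsiveness])

# Layer 3: a machine-learning module mapping feature vectors to an
# expressive label (nearest-centroid keeps the sketch dependency-free).
class NearestCentroid:
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = {
            label: np.mean([x for x, t in zip(X, y) if t == label], axis=0)
            for label in self.labels_
        }
        return self

    def predict_one(self, x):
        return min(self.labels_,
                   key=lambda label: np.linalg.norm(x - self.centroids_[label]))
```

Training such a classifier on gestures labelled, say, "soft/light" versus "hard/heavy" mirrors the poster's example of characterizing how a user touches a TAI.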