Sensor-based Situated, Individualized, and Personalized Interaction in Smart Environments Simone Hämmerle, Matthias Wimmer, Bernd Radig, Michael Beetz.

Sensor-based Situated, Individualized, and Personalized Interaction in Smart Environments
Simone Hämmerle, Matthias Wimmer, Bernd Radig, Michael Beetz
Technische Universität München – Informatik IX

SIP via sensors
- Situation detection: information about persons (name, location, focus of attention, posture, motion, …)
- Individualized settings: desktop, avatar, input settings (gestures, voice commands, …)
- Personalized settings: user's role, rights management, …
- SIP detection using sensors yields more comprehensive SIP information and more intuitive HCI
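The three kinds of context above can be pictured as one bundled record per user. The following is a minimal sketch under assumed names; the field names are illustrative and not taken from the original system.

```python
from dataclasses import dataclass, field

@dataclass
class SituationInfo:
    """Situation context for one detected person (hypothetical fields)."""
    name: str
    location: tuple          # (x, y, z) position in room coordinates
    focus_of_attention: str  # e.g. "video-wall", "speaker"
    posture: str             # e.g. "sitting", "standing"

@dataclass
class SIPContext:
    """Bundles situation, individualized, and personalized context."""
    situation: SituationInfo
    individual_settings: dict = field(default_factory=dict)  # desktop, avatar, input settings
    personal_settings: dict = field(default_factory=dict)    # role, rights management

ctx = SIPContext(
    situation=SituationInfo("Alice", (1.0, 2.0, 0.0), "video-wall", "standing"),
    individual_settings={"input": "gestures"},
    personal_settings={"role": "presenter"},
)
```

A service could then key its behaviour on `ctx.personal_settings["role"]`, e.g. to decide which user may control the video wall.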

Our Test Bed
- Sensors: cameras, microphones, laser-range sensors
- Actuators: monitor, speaker, video wall
- Scenarios: person localization, automatic login, meeting reminder, individualized gesture interaction

Video

Techniques (Computer Vision)
- Person detection: OpenCV (Haar face detector)
- Person recognition: OpenCV (hidden Markov models)
- Person tracking: developed at TUM (laser-scanner based, multiple-hypothesis tracking, …)
- Gesture recognition: developed at TUM (motion templates, multiple classifiers, …)
- Facial-expression ("mimic") recognition: developed at TUM (point distribution model, optical flow, …)
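The tracking step above relies on associating new sensor detections with existing tracks. The real system uses multiple-hypothesis tracking; the sketch below shows only a greatly simplified, greedy nearest-neighbour association step, with made-up track names and coordinates.

```python
import math

def associate(tracks, detections, gate=1.0):
    """Greedily map each track id to the closest unclaimed detection
    within `gate` metres (a toy stand-in for full MHT data association)."""
    assignment = {}
    free = list(detections)
    for tid, pos in tracks.items():
        if not free:
            break
        nearest = min(free, key=lambda d: math.dist(pos, d))
        if math.dist(pos, nearest) <= gate:
            assignment[tid] = nearest
            free.remove(nearest)
    return assignment

tracks = {"person1": (0.0, 0.0), "person2": (3.0, 0.0)}
detections = [(0.2, 0.1), (3.1, -0.1)]
result = associate(tracks, detections)
```

Multiple-hypothesis tracking improves on this by keeping several competing assignments alive until later evidence disambiguates them; the greedy version commits immediately.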

Techniques (Others)
- Natural-language input: Sphinx-4 (Java; originally CMU, now open source)
  - phonemes are already trained
  - we defined the words (= concatenations of phonemes)
  - we defined the grammar (= the allowed sentences)
- Natural-language output: FreeTTS 1.2 (SourceForge)
  - provides the user with audio information
  - the user can be mobile
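The two definitions above (words as phoneme concatenations, grammar as allowed sentences) can be illustrated with a toy recognizer check. Sphinx-4 would express this with a pronunciation dictionary and a JSGF grammar; the words, phonemes, and sentences below are invented for the example.

```python
# Hypothetical lexicon: each word is a concatenation of (pre-trained) phonemes.
lexicon = {
    "login":  ["L", "AO", "G", "IH", "N"],
    "logout": ["L", "AO", "G", "AW", "T"],
    "please": ["P", "L", "IY", "Z"],
}

# Hypothetical grammar: the set of allowed word sequences (sentences).
grammar = {("login",), ("logout",), ("login", "please")}

def accepted(sentence):
    """A sentence is valid only if every word exists in the lexicon
    and the word sequence is one of the allowed sentences."""
    words = tuple(sentence.lower().split())
    return all(w in lexicon for w in words) and words in grammar

accepted("login please")   # allowed sentence -> True
accepted("please login")   # known words, but not an allowed sentence -> False
```

Restricting recognition to a small grammar like this is what makes command-style speech input robust without retraining the acoustic (phoneme) models.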

Software architecture
- Dispatcher-based multi-agent framework

Conclusion
- Advantages of using sensors: additional and more exact context knowledge; unobtrusive system
- Multi-agent framework: distributed and scalable system; easily extensible to further scenarios
- Overall semantics: semantic agent communication; central aggregation of semantic context knowledge
- This leads to more comprehensive SIP information, seamless integration of SIP information, and intuitive HCI

Thank you!

Setup & Benefit
- Sensors for detecting the SIP context: cameras, microphones, laser-range sensors, pressure sensors, …
- Sensors provide knowledge about the SIP context: situation-dependent services; intuitive HCI (human-computer interaction)
- Application scenarios: support in meetings and presentations; intelligent house; external robot control

Our Test Bed
- Sensors: cameras, microphones, laser-range sensors
- Actuators: monitor, speaker, video wall
- Scenarios: automatic login, meeting reminder, individualized gesture interaction, intuitive robot control, person localization

Sensors
- Person recognition [image]
- Gesture recognition [image]

Knowledge base
- Web Ontology Language (OWL, W3C)
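The slides name OWL (W3C) as the knowledge-base format but give no detail. A full OWL store is out of scope here; the sketch below only illustrates the underlying idea of centrally aggregating semantic context knowledge as subject-predicate-object triples, using invented names.

```python
triples = set()

def add(subject, predicate, obj):
    """Assert one subject-predicate-object fact."""
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return {t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)}

add("Alice", "locatedIn", "MeetingRoom")
add("Alice", "hasRole", "presenter")
found = query(subject="Alice", predicate="hasRole")
```

An OWL knowledge base adds formal class hierarchies and reasoning on top of such triples, which is what enables the semantic agent communication mentioned in the conclusion.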