Excellence Development Continuation Project, Subproject 3: User-Centric Interactive Media, 93.4.1 ~ 96.3.31. Principal Investigator: 傅立成 (Li-Chen Fu). Co-Principal Investigators: 李琳山 (Lin-Shan Lee), 歐陽明 (Ming Ouhyoung), 洪一平 (Yi-Ping Hung), 陳祝嵩 (Chu-Song Chen). Workshop at 水美溫泉會館, 93.12.11.



Media Interaction and Feedback Loop (diagram)
The loop runs: Observation → Interaction Interpretation → Reasoning / Processing for Reaction → Display / Actuator → Context Change, informed throughout by Context Knowledge.
Observation (user location, user attention, posture input, acoustic input): visual sensing, audio sensing, contact sensing (mouse, joystick, pressure, …), others (IR, RFID, EM, …).
Interpretation (events → features → users' intention): gesture/posture recognition, speech recognition, behavior/activity recognition, emotion discovery.
Reaction (content/command → device input): intelligent agent, reactive control, interactive media content presentation.
Display / Actuator: steerable projector, multi-resolution display, motion platform control, haptics, speech synthesis, VR/AR immersive environment.
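The loop above can be sketched in a few lines of Python. This is a minimal, illustrative skeleton of the sense → interpret → react cycle; all function names and the gesture/command mappings are assumptions for illustration, not part of the actual system.

```python
def interpret(observation):
    """Map raw sensor events to a symbolic user intention (assumed rules)."""
    if observation.get("gesture") == "wave":
        return "greet"
    if observation.get("speech") == "stop":
        return "halt"
    return "idle"

def react(intention):
    """Choose a display/actuator command for the inferred intention."""
    commands = {"greet": "show_welcome", "halt": "stop_motion", "idle": "no_op"}
    return commands[intention]

def feedback_loop(observations):
    """One pass of the interaction loop over a stream of observations."""
    return [react(interpret(obs)) for obs in observations]
```

In the real system the interpretation stage would be a recognizer (gesture, speech, activity, emotion) rather than a rule table, but the control flow is the same.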

Stewart Platform
Consists of a fixed base and a moving platform connected by six extensible actuators.
Provides the sensation of motion by tracking a desired trajectory.
Incorporates a wash-out filter that improves capability and performance.
Current Achievement
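The core idea of a wash-out filter is to pass transient acceleration cues to the platform while "washing out" sustained ones, so the platform drifts back to neutral within its limited workspace. A minimal sketch as a first-order discrete high-pass filter, with assumed, illustrative values for the sample period `dt` and time constant `tau`:

```python
def highpass_washout(accel, dt=0.01, tau=1.0):
    """First-order high-pass (wash-out) filter.

    Transient acceleration cues pass through; a sustained (constant)
    acceleration decays toward zero, letting the platform return to
    its neutral pose. dt and tau are assumed tuning parameters.
    """
    alpha = tau / (tau + dt)
    out = []
    prev_in, prev_out = 0.0, 0.0
    for a in accel:
        prev_out = alpha * (prev_out + a - prev_in)
        prev_in = a
        out.append(prev_out)
    return out
```

Practical wash-out schemes are higher-order and split translational and rotational channels (e.g. tilt coordination), but they build on this same high-pass behavior.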

System Diagram of Motion Platform Current Achievement

Stewart Platform Demo Current Achievement

Human Figure Recognition (1)
1. Extract the foreground object from the background image.
2. Determine whether the foreground object is a human figure.
Current Achievement
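Step 1 can be illustrated with simple background subtraction: threshold the absolute difference between the current frame and a reference background image. The sketch below is an assumption-laden toy version (plain nested lists of grayscale values, an arbitrary threshold, and a crude size check standing in for the actual human-figure classifier of step 2):

```python
def foreground_mask(frame, background, threshold=30):
    """Step 1: mark pixels whose difference from the background
    exceeds an assumed threshold. Images are nested lists of
    grayscale intensities."""
    return [
        [abs(p - b) > threshold for p, b in zip(frow, brow)]
        for frow, brow in zip(frame, background)
    ]

def is_human_candidate(mask, min_pixels=4):
    """Crude stand-in for step 2: accept the blob only if it is
    large enough. A real classifier would examine shape, aspect
    ratio, or learned features."""
    return sum(sum(row) for row in mask) >= min_pixels
```

In practice one would use an adaptive background model and morphological clean-up before classification.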

Human Figure Recognition(2) Current Achievement

Human Limb Tracking (1)
Use a calibrated camera system to track the configuration of the upper-body limbs.
Future Work

Human Limb Tracking (2)
Detect the head position first.
Use a 3D kinematic model to help recover the posture of the upper body.
Kinematic model (diagram): head, torso, left and right upper arms and forearms, linked at the neck, shoulders, and elbows.
Future Work
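The kinematic model constrains where the elbow and wrist can be once the joint angles are known. A minimal sketch of forward kinematics for one arm, reduced to 2D for illustration (link lengths and angles are assumed inputs; the actual system works with a full 3D model):

```python
import math

def arm_positions(shoulder, upper_len, fore_len, shoulder_ang, elbow_ang):
    """Forward kinematics for one arm of the kinematic model (2D slice).

    Given the shoulder position, link lengths, and joint angles,
    recover the elbow and wrist positions. Angles are in radians,
    measured from the x-axis; the elbow angle is relative to the
    upper arm.
    """
    sx, sy = shoulder
    ex = sx + upper_len * math.cos(shoulder_ang)
    ey = sy + upper_len * math.sin(shoulder_ang)
    wx = ex + fore_len * math.cos(shoulder_ang + elbow_ang)
    wy = ey + fore_len * math.sin(shoulder_ang + elbow_ang)
    return (ex, ey), (wx, wy)
```

Posture recovery then amounts to searching for joint angles whose predicted limb positions best match the image observations.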

Human Limb Tracking (3)
The recovered limb configuration lets the user intuitively manipulate objects displayed on a large screen or in an immersive environment.
Future Work

Context-Aware Interactive Media
Humans are good at conveying ideas to each other because they use contextual information to increase the conversational bandwidth.
Context-aware interactive applications can make HCI more natural.
Context-awareness is an essential feature of next-generation interactive media.
Future Work

Intelligent Agents for Interactive Media
Traditional media: direct manipulation. The system only does something if we explicitly tell it to.
Interactive media: agent-based interaction. We work with the computer as peers, each carrying out part of the work.
Agents can perform tasks within their expertise, or negotiate with other agents on behalf of the user.
Agents can autonomously issue appropriate commands to the actuator/display according to the user's intention.
Agents can ask other agents for help (via Web Services) when a command is beyond the local agent's ability (e.g., hotel reservation, emergency situations, …).
Future Work
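The delegation pattern described above can be sketched as follows. This is a toy illustration: the agent names, skills, and the `handle` protocol are all assumptions, and a real deployment would negotiate over Web Services rather than direct method calls.

```python
class Agent:
    """Minimal sketch of an agent that acts within its expertise
    and delegates other tasks to peer agents."""

    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)
        self.peers = []  # foreign agents reachable from this one

    def handle(self, task):
        """Act autonomously if able; otherwise ask peers for help."""
        if task in self.skills:
            return f"{self.name}:{task}"
        for peer in self.peers:
            if task in peer.skills:
                return peer.handle(task)
        return "unhandled"
```

For example, a local media agent lacking a "reserve" skill would forward a hotel-reservation task to a peer that advertises it, mirroring the Web Services delegation mentioned above.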

Intelligent Agent for Interactive Media (diagram)
The local agent centers on a reasoning engine drawing on a user intention model, action rules, cognitive rules, environmental information, and a memory of the user interaction history. Interaction interpretation feeds the reasoning engine, which drives the display/actuator; external interfaces connect the local agent to foreign agents A and B.
Future Work

Context Aware (3)
Commercial Cut and Video Summarization (2)
Auto Translation (1)
Face and Pose Recognition (3)
Region of Interest Variable Bit Rate Codec / Multi-Resolution Display (3)
Motion Platform (3)
Cross Language Query Translation (1)
Embedded System (4)
Ontology (1)