Building an Aware Home Irfan Essa Aware Home Research Initiative GVU Center / College of Computing Georgia Institute of Technology

Research Goal
How can your house help you, if it is aware of your whereabouts, activities, needs, and intentions?

Important Goals: Ubiquity
- Sensing and output technology that is transparent to everyday activities.
- Passive, anywhere, anytime input/output.
- Provide an ability to sense, interact, display information, and communicate without increasing the burden/load on users.
- Aware of residents: sense them! Who, what, where, why? (W4)
- Noninvasive, unobtrusive, perceptual, ubiquitous, natural interfaces.

Sense, Measure, Monitor?
- Location: where are people?
- Identity: where are which people? What about new people?
- Local action: "sitting/getting up", "climbing stairs", "washing dishes", "reading a book", etc.
- Extended action: "eating a meal", "preparing a meal".
- Really extended action: "change of mobility", "eating well".

Good hard perceptual problems
From a perception standpoint, sensing in the Aware Home demands solutions to several classes of fundamental problems:
- Sensing user state.
- Understanding user activity.
- Noticing variation over longer time scales: "trending", "routines".
... a really good set of problems for computer vision researchers.
... but vision may not be (is not) enough.
... a sensor-fusion, sensor-interpretation problem.

So what form of sensing?
- The typical, doable scenario: "Grandma fell down and didn't get up."
- Why not stop there? Because if that is all you want to do, there are better, cheaper, more reliable ways (though the failure modes need to be designed well).
- Tracking is STILL HARD!
- Many other sensors can be pervasive, but...
- If you have a vision infrastructure and basic primitive capabilities, every new task is not a re-engineering job.
- It can help focus on important events (context).

Practical Indoor Sensing
- RFID instrumentation: floor mats, below-knee tags.
- Room-level positioning.
- Can other sensing build on top of this? (A sketch of the idea follows.)
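
As a minimal sketch of how room-level position might be derived from floor-mat RFID reads, consider the Python fragment below. It is illustrative only: the reader names, tag IDs, and mat-to-room mapping are assumptions, not the Aware Home's actual instrumentation.

```python
# Hypothetical sketch: room-level positioning from floor-mat RFID reads.
# Reader names, tag IDs, and the mat-to-room map are illustrative assumptions.
from collections import defaultdict

MAT_TO_ROOM = {"mat-hall-1": "hallway", "mat-kitchen-1": "kitchen",
               "mat-living-1": "living room"}

last_room = defaultdict(lambda: "unknown")   # tag id -> last known room

def on_rfid_read(reader_id: str, tag_id: str) -> None:
    """Update a resident's room whenever their below-knee tag crosses a mat."""
    room = MAT_TO_ROOM.get(reader_id)
    if room is not None:
        last_room[tag_id] = room

# Example: a tagged resident steps on the kitchen mat.
on_rfid_read("mat-kitchen-1", "tag-42")
print(last_room["tag-42"])   # -> "kitchen"
```

Other sensing (e.g., vision) can then refine within the room, which is the layering the slide asks about.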

Vision infrastructure
- 20+ fixed cameras (analog and digital, IEEE 1394).
- 16+ PIII PCs (2 cameras per PC).
- 8 pan-tilt-zoom cameras.
- Stereo and other special-purpose cameras.

Vision-based tracking methods
- Background segmentation / modeling.
- Color histograms / segmentation.
- Template / appearance modeling and matching.
- Motion integration.
- Calibration, perspective modeling.
- Sensor fusion (between cameras and other sensors).
- Learning methods.
- Client-server architecture for distributed processing.
A background-subtraction sketch appears below.
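
The first item, background segmentation, is the workhorse of fixed-camera tracking. The following is a minimal OpenCV sketch of the general technique, not the Aware Home's code; the camera index and blob-size threshold are arbitrary assumptions.

```python
# Minimal background-subtraction blob tracker (generic sketch, not the
# Aware Home's implementation). Camera index and thresholds are assumptions.
import cv2

cap = cv2.VideoCapture(0)   # any fixed camera
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                      # per-pixel foreground mask
    mask = cv2.medianBlur(mask, 5)              # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:            # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("blobs", frame)
    if cv2.waitKey(1) == 27:                    # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```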

Tracking from ceiling sensors
A person is tracked and his activities are reported on the map.

Tracking from Above

Room mapping
- 2D descriptions.
- Overlapping cameras (see the homography sketch below).
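
One common way to build such a 2D description is a plane-to-plane homography from each camera's image onto the floor plan. This OpenCV sketch is a hedged illustration; the four point correspondences are made-up calibration values, not measurements from the house.

```python
# Sketch: mapping image pixels to floor-plan coordinates via a homography.
# The four correspondences below are made-up calibration points.
import numpy as np
import cv2

img_pts  = np.float32([[100, 80], [520, 90], [540, 400], [90, 410]])
plan_pts = np.float32([[0, 0], [4.0, 0], [4.0, 3.0], [0, 3.0]])  # meters

H = cv2.getPerspectiveTransform(img_pts, plan_pts)

def to_plan(x: float, y: float) -> tuple:
    """Project an image point (e.g., a tracked blob's foot) onto the plan."""
    p = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)
    return float(p[0, 0, 0]), float(p[0, 0, 1])

print(to_plan(300, 250))   # approximate floor-plan position in meters
```

With overlapping cameras, each camera gets its own H, and blobs from all views land in one shared floor-plan coordinate frame.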

The Gesture Pendant
- Wear sensors looking outwards (1st- vs. 2nd- vs. 3rd-person perspective).
- Simplified home control.
- Biometrics, biomedical, etc.
- Starner et al.

Eye/Pupil Tracking

Audio Sensors
- Speech recognition.
- Augmenting interaction.
- Tracking / identification.
- Affect determination (anger, stress, sadness, happiness).
- Noise cancellation.
- Acoustic modeling.

Auditory Localization I
- Phased-array microphone system.
- Localize a speaker and move a pan-tilt-zoom camera to their face.
- Vision can help with the face tracking: sensor fusion.

Auditory Localization II
- Adaptive array processing.
- Determine the Time Delay of Arrival (TDoA) to localize the source (a sketch follows).
- 59-microphone array.
- Interaction with NIST (Vince Stanford).
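
The core of TDoA estimation is cross-correlating the signals at two microphones. A minimal SciPy sketch, with invented signal names and no claim to match the NIST array's processing:

```python
# Hedged sketch: TDoA between two microphones by cross-correlation.
# Signal names and sample rate are illustrative assumptions.
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_delay(mic1, mic2, fs):
    """Seconds by which mic2's signal lags mic1's (positive => arrives later)."""
    corr = correlate(mic2, mic1, mode="full")
    lags = correlation_lags(len(mic2), len(mic1), mode="full")
    return lags[np.argmax(corr)] / fs

# With delay t, microphone spacing d, and speed of sound c ~ 343 m/s,
# the bearing to the source follows from theta = arcsin(c * t / d).
```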

Video-based Tracking Cameras

System Architecture
(Dataflow diagram.) Two fixed cameras (1 and 2) each feed color tracking and motion tracking; two pan-tilt-zoom cameras (3 and 4) feed color tracking, face tracking, and face recognition on calibrated video; a beam former feeds auditory localization; these modules, along with further sensors, report video and location estimates to a central Room Manager.
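
To make the dataflow concrete, here is a hedged sketch of how per-sensor trackers might report location estimates to a central room manager. The message fields and class names are invented for illustration; the slide does not specify the actual client-server protocol.

```python
# Hypothetical sketch of the tracker -> Room Manager dataflow.
# Message fields and class names are invented for illustration.
from dataclasses import dataclass

@dataclass
class LocationReport:
    sensor_id: str      # e.g. "camera-1/color" or "mic-array/beamformer"
    x: float            # floor-plan coordinates (meters)
    y: float
    confidence: float   # 0..1
    timestamp: float    # seconds

class RoomManager:
    """Collects reports from all trackers and keeps the freshest estimate."""
    def __init__(self):
        self.latest: dict[str, LocationReport] = {}

    def submit(self, report: LocationReport) -> None:
        prev = self.latest.get(report.sensor_id)
        if prev is None or report.timestamp >= prev.timestamp:
            self.latest[report.sensor_id] = report

    def fused_position(self) -> tuple:
        """Confidence-weighted average over each sensor's latest report."""
        w = sum(r.confidence for r in self.latest.values()) or 1.0
        x = sum(r.x * r.confidence for r in self.latest.values()) / w
        y = sum(r.y * r.confidence for r in self.latest.values()) / w
        return x, y
```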

Occupancy Grid
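
As a hedged illustration of the occupancy-grid technique in general (not this system's implementation), cells under each tracked person can be incremented every frame while the whole grid decays, leaving a heat map of where residents spend time:

```python
# Generic occupancy-grid sketch; resolution and decay rate are assumptions.
import numpy as np

CELL = 0.25                       # grid resolution in meters
grid = np.zeros((40, 24))         # a 10 m x 6 m room at 0.25 m cells

def update(grid, detections, decay=0.95, hit=1.0):
    """Decay all cells, then bump the cells under each detected person."""
    grid *= decay
    for x, y in detections:                      # floor-plan meters
        i, j = int(x / CELL), int(y / CELL)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] += hit
    return grid

update(grid, [(2.3, 1.1), (7.8, 4.0)])   # two residents this frame
```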

Combining Sensors
(Figures: a map of the room showing sensors and two residents; visual tracking of a resident; visual identification of a resident.)
The paper appears in PUI 2001.

Multi-modal tracking

What Was I Cooking?
Mynatt, Abowd, et al.

Video

Behavior Analysis
Detection accuracy, after ~10 trials per person:

  Behavior    Accuracy
  ---------   --------
  Low-Risk    92%
  High-Risk   76%
  Novice      100%
  Expert      90%

Routine Activities
- Share a set of component tasks.
- Identify a subset of tasks and measure the demand for the performance of such tasks.
- Model and predict successful and independent performance of an activity.
(Clark, Czaja, & Weber, 1990; Connell & Sanford, 1997; Sanford, Story, & Ringholz, 1998)

Routine Household Activities
- Activities of Daily Living (ADLs): dressing, bathing, etc.
- Instrumental Activities of Daily Living (IADLs): house cleaning, laundry, cooking.
- Enhanced Activities of Daily Living (EADLs).
- ADLs, IADLs, and EADLs can all potentially be aided by Aware Environments.

Face of the House!
P.S. We did some facial expression recognition earlier; ask Sandy.

Finally, the Context. We need it.
- How do we represent what we sense? That is really the heart of the question: what is context?
- Context helps define the target vocabulary of sensing and perception, and the input information for decision making.
- We NEED experts.
- Software engineering: inflow, synchronization, storage, access, delivery (e.g., the Context Toolkit, Abowd et al.). A sketch of the widget idea follows.
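
The Context Toolkit itself is documented elsewhere; purely as a hedged sketch of the "context widget" idea it popularized (a sensor wrapped behind a reusable component that applications subscribe to), one might write something like the following. This is NOT the Context Toolkit's API; every name here is invented.

```python
# Hedged sketch of a context widget: a sensor wrapped in a reusable
# component that applications subscribe to. NOT the Context Toolkit's API.
from typing import Callable

class PresenceWidget:
    """Publishes (room, person) context events to subscribers."""
    def __init__(self):
        self._subscribers: list[Callable[[str, str], None]] = []

    def subscribe(self, callback: Callable[[str, str], None]) -> None:
        self._subscribers.append(callback)

    def sensor_update(self, room: str, person: str) -> None:
        """Called by the underlying sensor driver (RFID, vision, ...)."""
        for cb in self._subscribers:
            cb(room, person)

widget = PresenceWidget()
widget.subscribe(lambda room, person: print(f"{person} entered the {room}"))
widget.sensor_update("kitchen", "resident-1")
```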

Ethical Issues
- These visions concern some people (as they should!).
- For example, with automated capture: who controls and distributes what is captured? What about the silent and intimidated minority?
- Educate and confront.
- Policy.

More!
- We are interested in building useful (important) "Living Laboratories" (and in learning how to build them, too).
- We will build, test, evaluate, and rebuild.
- "This Aware House."
- Others: Gregory Abowd, Beth Mynatt, Wendy Rogers, Aaron Bobick, Thad Starner, and many undergraduate, MS, and PhD students and Research Scientists.

"Blob" management between clients

All the same person?

Location
- Awareness of a resident's location is crucial!
- Claims of reliable location sensing are somewhat exaggerated.
- Vision can help (so can audio), but we need something reliable (24/7).
- Room-level accuracy is a major requirement.

Natural tasks for vision
- Location refinement.
- Non-location-determined activity: "couch potatoes".
- Basic activity.
- Contextual triggers: "preparing to leave the house".
- Lots of potential features.
- Statistical characterization: "slower going up the stairs this week than last" (see the sketch below).
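
As a hedged illustration of that last point, the weekly trend statistic could be as simple as comparing mean stair-climb durations across weeks; the durations below are invented sample data, not measurements.

```python
# Illustrative only: comparing this week's stair-climb times to last week's.
# The durations (seconds per climb) are invented sample data.
import statistics

last_week = [9.8, 10.1, 9.5, 10.4, 9.9]
this_week = [11.2, 11.8, 10.9, 11.5]

slowdown = statistics.mean(this_week) - statistics.mean(last_week)
if slowdown > 0.5:   # threshold is an arbitrary assumption
    print(f"Resident is {slowdown:.1f}s slower on the stairs this week.")
```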

A Video Study
1. A video-based naturalistic protocol was developed to record and study routine activities (Sanford et al. 1997, 2000): video of 28 participants (aged 50-80) in their homes.
2. Analyze the existing video and code specific task-related problems; determine which codings can be represented and modeled computationally.

Keeping track of blobs
- Overhead cameras: these are not plan-view cameras, but require mapping (calibrate if desired).
- A messy house is not a lab: much less control.
- Integrate according to 2.1D location (really, foot location in plan view) by simple learning.
- The (dreaded) N-to-M assignment problem: temporal integration on appearance, probabilistic assignment, finite look-ahead and look-behind. A sketch of one standard solution follows.
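
One standard way to attack the N-detections-to-M-tracks assignment is the Hungarian algorithm over a cost matrix; this SciPy sketch uses plain plan-view distance as the cost and an invented gating threshold, not the system's actual matching logic (which also integrates appearance over time).

```python
# Generic N-to-M blob/track assignment via the Hungarian algorithm.
# Distance-only cost and the gating threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match(tracks, detections, max_cost=2.0):
    """tracks, detections: arrays of (x, y) plan-view foot locations."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Reject pairings that are too far apart; those blobs start new tracks.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]

tracks = np.array([[1.0, 2.0], [4.0, 4.5]])
dets   = np.array([[1.1, 2.2], [6.0, 1.0], [4.2, 4.4]])
print(match(tracks, dets))   # -> [(0, 0), (1, 2)]
```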