Understanding Human Behavior from Sensor Data

Presentation transcript:

Understanding Human Behavior from Sensor Data Henry Kautz University of Washington

A Dream of AI
Systems that can understand ordinary human experience
Work in KR, NLP, vision, IUI, planning…
Plan recognition, behavior recognition, activity tracking
Goals: intelligent user interfaces; a step toward true AI

Plan Recognition, circa 1985

Behavior Recognition, circa 2005

Punch Line
Resurgence of work in behavior understanding, fueled by:
Advances in probabilistic inference: graphical models, scalable inference, KR ∪ Bayes
Ubiquitous sensing devices (RFID, GPS, motes, …) that ground recognition in sensor data
Healthcare applications: aging boomers are the fastest growing demographic

This Talk
Activity tracking from RFID tag data (ADL Monitoring)
Learning patterns of transportation use from GPS data (Activity Compass)
Learning to label activities and places (Life Capture)
Kodak Intelligent Systems Research Center: projects & plans

Object-Based Activity Recognition
Activities of daily living involve the manipulation of many physical objects
Kitchen: stove, pans, dishes, …
Bathroom: toothbrush, shampoo, towel, …
Bedroom: linen, dresser, clock, clothing, …
We can recognize activities from a time-sequence of object touches

Application
ADL (Activity of Daily Living) monitoring for the disabled and/or elderly
Changes in routine often a precursor to illness, accidents
Human monitoring intrusive & inaccurate
[Image courtesy Intel Research]

Sensing Object Manipulation
RFID: radio-frequency identification tags
Small, no batteries, durable, cheap
Easy to tag objects
Near future: use products' own tags

Wearable RFID Readers
13.56 MHz reader, radio, power supply, antenna
12 inch range, 12-150 hour lifetime
Objects tagged on grasping surfaces

Experiment: ADL Form Filling
Tagged a real home with 108 tags
14 subjects each performed 12 of 65 activities (14 ADLs) in arbitrary order
Used a glove-based reader
Given the trace, recreate the activities

Results: Detecting ADLs (Prior Work / SHARP)
Personal Appearance: 92 / 92
Oral Hygiene: 70 / 78
Toileting: 73 / 73
Washing up: 100 / 33
Appliance Use: 100 / 75
Use of Heating: 84 / 78
Care of clothes and linen: 100 / 73
Making a snack: 100 / 78
Making a drink: 75 / 60
Use of phone: 64 / 64
Leisure Activity: 100 / 79
Infant Care: 100 / 58
Medication Taking: 100 / 93
Housework: 100 / 82
Legend: Prior Work = point solution; SHARP = general solution
Inferring ADLs from Interactions with Objects. Philipose, Fishkin, Perkowitz, Patterson, Hähnel, Fox, and Kautz. IEEE Pervasive Computing, 3(4), 2004.

Experiment: Morning Activities
10 days of data from the morning routine in an experimenter's home
61 tagged objects
11 activities, often interleaved and interrupted, with many shared objects:
Use bathroom, make coffee, set table, make oatmeal, make tea, eat breakfast, make eggs, use telephone, clear table, prepare OJ, take out trash

Baseline: Individual Hidden Markov Models 68% accuracy 11.8 errors per episode

Baseline: Single Hidden Markov Model 83% accuracy 9.4 errors per episode
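
To make the single-HMM baseline concrete, here is a minimal sketch in Python: hidden states are activities, observations are touched object types, and Viterbi decoding recovers the most likely activity sequence. The activity names, object types, and probabilities are invented for illustration and are not the values used in the experiments.

```python
# Minimal sketch of the single-HMM baseline: hidden states are activities,
# observations are (types of) touched objects. All names and probabilities
# below are illustrative placeholders.
import numpy as np

activities = ["make coffee", "make oatmeal", "eat breakfast"]
objects = ["mug", "kettle", "spoon", "bowl", "plate"]

start = np.log(np.array([0.4, 0.4, 0.2]))           # P(first activity)
trans = np.log(np.array([[0.8, 0.1, 0.1],           # P(next activity | activity)
                         [0.1, 0.8, 0.1],
                         [0.05, 0.05, 0.9]]))
emit = np.log(np.array([[0.5, 0.4, 0.05, 0.025, 0.025],   # P(object | activity)
                        [0.05, 0.1, 0.4, 0.4, 0.05],
                        [0.1, 0.02, 0.38, 0.2, 0.3]]))

def viterbi(obs_idx):
    """Most likely activity sequence given a sequence of object indices."""
    T, S = len(obs_idx), len(activities)
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = start + emit[:, obs_idx[0]]
    for t in range(1, T):
        for s in range(S):
            prev = score[t - 1] + trans[:, s]
            back[t, s] = np.argmax(prev)
            score[t, s] = prev[back[t, s]] + emit[s, obs_idx[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [activities[s] for s in reversed(path)]

touches = ["kettle", "mug", "spoon", "bowl", "spoon", "plate"]
print(viterbi([objects.index(o) for o in touches]))
```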

Cause of Errors
Observations were types of objects: spoon, plate, fork, …
Typical errors: confusion between activities that use one object repeatedly and activities that use different objects of the same type
This is a critical distinction in many ADLs:
Eating versus setting the table
Dressing versus putting away laundry

Aggregate Features
HMM with individual object observations fails: no generalization!
Solution: add aggregation features
Number of objects of each type used
Requires history of the current activity performance
DBN encoding avoids explosion of the HMM
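
A minimal sketch of what such an aggregate feature could look like, assuming the observation at each step is augmented with the number of distinct objects of each type touched since the current activity began (the object names and type map are illustrative):

```python
# Sketch of an aggregate feature: the number of distinct objects of each type
# used so far within the current activity. Object/type names are placeholders.
from collections import defaultdict

OBJECT_TYPE = {"blue_mug": "mug", "red_mug": "mug",
               "soup_spoon": "spoon", "tea_spoon": "spoon"}

def aggregate_counts(touches, new_activity_flags):
    """Yield, per time step, a dict: object type -> number of distinct
    objects of that type touched since the current activity started."""
    seen = defaultdict(set)
    for obj, new_activity in zip(touches, new_activity_flags):
        if new_activity:               # reset history at an activity boundary
            seen = defaultdict(set)
        seen[OBJECT_TYPE[obj]].add(obj)
        yield {t: len(objs) for t, objs in seen.items()}

touches = ["blue_mug", "blue_mug", "red_mug", "soup_spoon"]
flags   = [True, False, False, False]
for step, feats in enumerate(aggregate_counts(touches, flags)):
    print(step, feats)
```

A count that stays at 1 suggests one object used repeatedly (e.g., eating), while a growing count suggests several objects of the same type (e.g., setting the table).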

Dynamic Bayes Net with Aggregate Features 88% accuracy 6.5 errors per episode

Improving Robustness DBN fails if novel objects are used Solution: smooth parameters over abstraction hierarchy of object types

Abstraction Smoothing
Methodology:
Train on 10 days of data
Test where one activity substitutes one object
Change in error rate:
Without smoothing: 26% increase
With smoothing: 1% increase
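
One simple way to realize the smoothing idea, sketched here under the assumption of a single-level object-type hierarchy and a fixed mixing weight (both placeholders, not the exact scheme in the paper):

```python
# A minimal sketch of abstraction smoothing: mix each object's observation
# probability with the pooled probability of its object type, so an object
# never seen with an activity (e.g., a new mug) does not get near-zero mass.
# Hierarchy, counts, and the mixing weight are illustrative.
from collections import Counter

TYPE_OF = {"blue_mug": "mug", "red_mug": "mug", "travel_mug": "mug",
           "soup_spoon": "spoon", "tea_spoon": "spoon"}

def smoothed_emission(obj, obj_counts, alpha=0.7):
    """P(object | activity) smoothed over the object-type abstraction."""
    total = sum(obj_counts.values()) or 1
    p_obj = obj_counts[obj] / total                    # raw per-object estimate
    siblings = [o for o, t in TYPE_OF.items() if t == TYPE_OF[obj]]
    type_count = sum(obj_counts[o] for o in siblings)
    p_type = (type_count / total) / len(siblings)      # mass shared by the type
    return alpha * p_obj + (1 - alpha) * p_type

# "make coffee" was trained only with the blue mug; the travel mug is novel.
counts = Counter({"blue_mug": 9, "soup_spoon": 1})
print(smoothed_emission("blue_mug", counts))    # high
print(smoothed_emission("travel_mug", counts))  # small but non-zero
```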

Summary
Activities of daily living can be robustly tracked using RFID data
Simple, direct sensors can often replace (or augment) general machine vision
Accurate probabilistic inference requires sequencing, aggregation, and abstraction
Works for essentially all ADLs defined in the healthcare literature
D. Patterson, H. Kautz, & D. Fox, ISWC 2005 (best paper award)

This Talk
Activity tracking from RFID tag data (ADL Monitoring)
Learning patterns of transportation use from GPS data (Activity Compass)
Learning to label activities and places (Life Capture)
Kodak Intelligent Systems Research Center: projects & plans

Motivation: Community Access for the Cognitively Disabled

The Problem
Using public transit is cognitively challenging:
Learning bus routes and numbers
Transfers
Recovering from mistakes
Point-to-point shuttle service is impractical: slow and expensive
Current GPS units are hard to use:
Require extensive user input
Error-prone near buildings and inside buses
No help with transfers or timing

Solution: Activity Compass
User carries a smart cell phone
System infers transportation mode from GPS (position, velocity) and mass transit information
Over time the system learns a user model: important places, common transportation plans
Mismatches = possible mistakes
System provides proactive help

Technical Challenge
Given a data stream from a GPS unit...
Infer the user's mode of transportation (foot, car, bus, bike, …) and the places where the mode changes (bus stops, parking lots, entering buildings, …)
Learn the user's daily pattern of movement
Predict the user's future actions
Detect user errors

Technical Approach
Map: directed graph
Location: (edge, distance); geographic features, e.g. bus stops
Movement: (mode, velocity); probability of turning at each intersection; probability of changing mode at each location
Tracking (filtering): given some prior estimate, update position & mode according to the motion model, then correct according to the next GPS reading
Speaker notes: So far I have described the general picture of our system and some applications. In the rest of the presentation I will focus on the technical approach of probabilistic reasoning. There are three key things in our system. For the …, we use …. Because of time constraints I cannot discuss those techniques in detail, so I will only give a rough picture of each and show the results of our experiments.
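
The predict/correct loop can be sketched as a particle filter over (edge, distance, mode). The edge object's length/successors/point_at interface, the speed constants, and the noise parameters below are assumed placeholders, not the system's actual implementation:

```python
# Rough sketch of the tracking (filtering) step: particles live on a street
# graph as (edge, distance along edge, transportation mode). The map interface
# (edge.length, edge.successors, edge.point_at) and all constants are assumed.
import random, math

MODES = {"foot": 1.4, "car": 10.0, "bus": 8.0}   # rough mean speeds (m/s)

def predict(particle, dt, turn_prob, mode_switch_prob):
    """Advance one particle according to the motion model."""
    edge, dist, mode = particle
    if random.random() < mode_switch_prob(edge, dist, mode):
        mode = random.choice(list(MODES))
    dist += random.gauss(MODES[mode], 2.0) * dt
    while dist > edge.length:                    # crossed an intersection
        dist -= edge.length
        edge = random.choices(edge.successors,
                              weights=[turn_prob(edge, e) for e in edge.successors])[0]
    return (edge, dist, mode)

def weight(particle, gps_xy, sigma=20.0):
    """Likelihood of the GPS reading given the particle's position."""
    px, py = particle[0].point_at(particle[1])
    d2 = (px - gps_xy[0]) ** 2 + (py - gps_xy[1]) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def filter_step(particles, dt, gps_xy, turn_prob, mode_switch_prob):
    moved = [predict(p, dt, turn_prob, mode_switch_prob) for p in particles]
    w = [weight(p, gps_xy) for p in moved]       # correct against GPS reading
    return random.choices(moved, weights=w, k=len(particles))   # resample
```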

Dynamic Bayesian Network I
[Diagram: two time slices, k-1 and k, with nodes m_k (transportation mode), x_k (edge, velocity, position), q_k (data/edge association), and z_k (GPS reading)]

Mode & Location Tracking
[Map figure: GPS measurements and their projections onto the street graph, with particles colored by mode — bus (green), car (red), foot (blue)]
Speaker notes: Explain why the estimate switches from foot to bus rather than car: because of the bus stops. If asked why there appear to be no car particles at all: there are car particles, but their weights are too small to be displayed.

Learning
Prior knowledge – general constraints on transportation use: vehicle speed ranges, bus stops
Learning – specialize the model to a particular user:
30 days of GPS readings, unlabeled data
Learn edge transition parameters using expectation-maximization (EM)
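
Schematically, the EM procedure alternates between tracking the unlabeled traces under the current parameters to get expected edge-to-edge transition counts (E-step) and renormalizing those counts (M-step). The e_step function and the smoothing constant below are placeholders, not the actual learning code:

```python
# Schematic EM loop for learning edge-transition parameters from unlabeled
# GPS traces. e_step is assumed to run the tracker over one trace and return
# expected transition counts; it is a placeholder here.
from collections import defaultdict

def m_step(expected_counts, smoothing=1.0):
    """expected_counts[(edge, next_edge)] -> expected number of transitions."""
    totals = defaultdict(float)
    for (e, _), c in expected_counts.items():
        totals[e] += c + smoothing
    return {(e, n): (c + smoothing) / totals[e]
            for (e, n), c in expected_counts.items()}

def learn_transitions(traces, init_params, e_step, iterations=10):
    params = init_params
    for _ in range(iterations):
        counts = defaultdict(float)
        for trace in traces:
            # E-step: track the trace under the current parameters and
            # accumulate expected edge-to-edge transition counts.
            for (e, n), c in e_step(trace, params).items():
                counts[(e, n)] += c
        params = m_step(counts)          # M-step: normalize per source edge
    return params
```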

Predictive Accuracy
[Plot: probability of correctly predicting the future (y-axis) versus distance in city blocks (x-axis)]
How can we improve predictive power?

Transportation Routines
[Map figure: route from Home via bus stops A and B to the Workplace]
Goal: intended destination (workplace, home, friends, restaurants, …)
Trip segments: <start, end, mode>
Home to bus stop A on foot
Bus stop A to bus stop B by bus
Bus stop B to workplace on foot
Speaker notes: Now the system can infer location and transportation modes, but we still want to know more. We first show an example. In order to get to the goal, the user may switch transportation modes; thus we introduce the concept of trip segments.

Dynamic Bayesian Net II
[Diagram: two time slices, k-1 and k, adding g_k (goal) and t_k (trip segment) above m_k (transportation mode), x_k (edge, velocity, position), q_k (data/edge association), and z_k (GPS reading)]

Predict Goal and Path
[Map figure showing the predicted goal and the predicted path]

Improvement in Predictive Accuracy

Detecting User Errors
The learned model represents typical correct behavior
The model is a poor fit to user errors
We can use this fact to detect errors!
Cognitive mode:
Normal: model functions as before
Error: switch to prior (untrained) parameters for mode and edge transitions
Speaker notes: After we learn the model, we can detect potential errors.
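
A toy sketch of the two-mode idea: transitions follow the learned model in the normal mode and fall back to flat, untrained priors in the error mode, and a simple Bayesian filter tracks the probability of being in the error mode. All numbers are illustrative:

```python
# Toy sketch of the "cognitive mode" idea: a persistent drop in how well the
# learned (normal-mode) model explains the observations raises the posterior
# probability of the error mode. All parameters are made up for illustration.

def transition_prob(cog_mode, edge, next_edge, learned, uniform):
    """P(next_edge | edge) under the current cognitive mode."""
    if cog_mode == "normal":
        return learned.get((edge, next_edge), 1e-6)
    return uniform[edge]            # error mode: flat prior over successors

def update_error_belief(p_error, obs_lik_normal, obs_lik_error,
                        stay=0.99, switch=0.01):
    """One Bayesian filtering step for P(cognitive mode = error)."""
    prior_error = p_error * stay + (1 - p_error) * switch
    prior_normal = 1 - prior_error
    post_error = prior_error * obs_lik_error
    post_normal = prior_normal * obs_lik_normal
    return post_error / (post_error + post_normal)

# Observation likelihoods under (normal, error) models over four time steps;
# the last two steps fit the learned model poorly, so the error belief rises.
p_err = 0.01
for lik_n, lik_e in [(0.9, 0.1), (0.8, 0.2), (0.05, 0.3), (0.02, 0.3)]:
    p_err = update_error_belief(p_err, lik_n, lik_e)
    print(round(p_err, 3))
```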

Dynamic Bayesian Net III
[Diagram: two time slices, k-1 and k, adding c_k (cognitive mode: {normal, error}) above g_k (goal), t_k (trip segment), m_k (transportation mode), x_k (edge, velocity, position), q_k (data/edge association), and z_k (GPS reading)]

Detecting User Errors
Speaker notes: In this episode, the person takes the bus home but misses the usual bus stop. The black disk is the goal, and the cross mark is where the system thinks the person should get off the bus. The probabilities of normal and abnormal activity are represented by a bar: the black portion is the probability of being normal and the red portion is the probability of being abnormal. At the point where the person misses the bus stop, the probability of abnormal activity jumps quickly.

Status
Major funding by NIDRR (National Institute on Disability and Rehabilitation Research)
Partnership with the UW Dept. of Rehabilitation Medicine & Center for Technology and Disability Studies
Extension to indoor navigation: hospitals, nursing homes, assisted care communities; Wi-Fi localization
Multi-modal interface studies: speech, graphics, text
Adaptive guidance strategies

Papers
Liu et al., ASSETS 2006. Indoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments.
Liao, Fox, & Kautz, AAAI 2004 (Best Paper). Learning and Inferring Transportation Routines.
Patterson et al., UBICOMP 2004. Opportunity Knocks: A System to Provide Cognitive Assistance with Transportation Services.

This Talk
Activity tracking from RFID tag data (ADL Monitoring)
Learning patterns of transportation use from GPS data (Activity Compass)
Learning to label activities and places (Life Capture)
Kodak Intelligent Systems Research Center: projects & plans

Task
Learn to label a person's:
Daily activities: working, visiting friends, traveling, …
Significant places: workplace, friend's house, usual bus stop, …
Given:
A training set of labeled examples
A wearable sensor data stream: GPS, acceleration, ambient noise level, …

Learning to Label Places & Activities
[Map figure with inferred place labels: home, office, parking lot, restaurant, friend's home, store]
Learned general rules for labeling "meaningful places" based on time of day, type of neighborhood, speed of movement, places visited before and after, etc.

Relational Markov Network
First-order version of a conditional random field
Features defined by feature templates; all instances of a template have the same weight
Examples:
Time of day an activity occurs
Place an activity occurs
Number of places labeled "Home"
Distance between adjacent user locations
Distance between GPS reading & nearest street
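
To illustrate the template idea, the sketch below defines two toy feature templates whose groundings all share a single weight and scores a joint labeling by a weighted sum of template counts; the templates, data layout, and weights are invented for illustration and are not the model from the papers:

```python
# Toy sketch of weight-tied feature templates in a relational model: each
# template fires on many groundings, and all groundings share one weight.

def f_activity_time(labeling, data):
    """Fires once per activity labeled 'working' during business hours."""
    return sum(1 for a in labeling["activities"]
               if a["label"] == "working" and 9 <= data["hour"][a["id"]] <= 17)

def f_num_homes(labeling, data):
    """Number of significant places labeled 'home' (a global feature)."""
    return sum(1 for p in labeling["places"] if p["label"] == "home")

TEMPLATES = [f_activity_time, f_num_homes]
WEIGHTS = [1.2, -0.8]          # one shared weight per template

def unnormalized_score(labeling, data):
    """Log of the unnormalized probability of a joint labeling."""
    return sum(w * f(labeling, data) for w, f in zip(WEIGHTS, TEMPLATES))

labeling = {"activities": [{"id": 0, "label": "working"}],
            "places": [{"id": "p1", "label": "home"}]}
data = {"hour": {0: 10}}
print(unnormalized_score(labeling, data))
```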

RMN Model
[Diagram: a hierarchy over time — GPS/location readings g1…g9 at the bottom, activities a1…a5 above them, significant places p1, p2 above the activities, and a global soft constraint node s1 at the top]

Significant Places
Previous work decoupled identifying significant places from the rest of inference:
A simple temporal threshold misses places with brief activities
The RMN model integrates:
Identifying significant places
Labeling significant places
Labeling activities

Results: Finding Significant Places

Results: Efficiency

Summary
We can learn to label a user's activities and meaningful locations using sensor data & state-of-the-art relational statistical models (Liao, Fox, & Kautz, IJCAI 2005, NIPS 2006)
Many avenues to explore:
Transfer learning
Finer-grained activities
Structured activities
Social groups

This Talk
Activity tracking from RFID tag data (ADL Monitoring)
Learning patterns of transportation use from GPS data (Activity Compass)
Learning to label activities and places (Life Capture)
Kodak Intelligent Systems Research Center: projects & plans

Intelligent Systems Research Henry Kautz Kodak Intelligent Systems Research Center

Intelligent Systems Research Center
New AI / CS center at Kodak Research Labs
Directed by Henry Kautz, 2006-2007
Initial group of 4 researchers, with expertise in machine vision
Hiring 6 new PhD-level researchers in 2007
Mission: providing KRL with access to expertise in:
Machine learning
Automated reasoning & planning
Computer vision & computational photography
Pervasive computing

University Partnerships
Goal: access to external expertise & technology; > $3M per year across KRL
Unrestricted grants: $20 - $50K
Sponsored research grants: $75 - $150K; CEIS (NYSTAR) eligible
KEA Fellowships (3 year)
Summer internships
Individual consulting agreements
Master IP Agreement
Partners: University of Rochester, University of Massachusetts at Amherst, Cornell
Many other partners: UIUC, Columbia, U Washington, MIT, …

New ISRC Project
Extract meaning from the combination of image content and sensor data
First step: GPS location data
10 hand-tuned visual concept detectors: people and faces; environments, e.g. forest, water, …
100-way classifier built by machine learning: high-level activities, e.g. swimming; specific objects, e.g. boats
Learned probabilistic model integrates location and image features
E.g.: locations near water raise the probability of "image contains boats"
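
As a toy illustration of how a location-derived prior can modulate an image detector, the sketch below converts a detector score into a likelihood ratio and combines it with a "near water" prior on boats; the function name, priors, and scores are made up, not the project's actual model:

```python
# Toy sketch of fusing an image-content score with a GPS-derived location
# prior ("locations near water raise the probability of boats").

def posterior_boat(detector_score, near_water,
                   p_boat_near_water=0.15, p_boat_elsewhere=0.01):
    """P(image contains a boat | detector score, location context)."""
    prior = p_boat_near_water if near_water else p_boat_elsewhere
    # Treat the detector score as a likelihood ratio P(score|boat)/P(score|no boat).
    lr = detector_score / max(1e-6, 1 - detector_score)
    odds = lr * prior / (1 - prior)
    return odds / (1 + odds)

print(posterior_boat(0.6, near_water=True))    # boosted by the location prior
print(posterior_boat(0.6, near_water=False))   # same image score, lower belief
```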

Multimedia Gallery/Blog (with automatic annotation)
[Screenshot: photos automatically annotated with inferred activities — flying, sailing, driving, walking, swimming in the Caymans]
Mode of transportation -> places (ports, airport, hiking spot, etc.)

Enhancing Video
Synthesizing 3-D video from ordinary video: surprisingly simple and fast techniques use motion information to create a compelling 3-D experience
Practical resolution enhancement (new project for 2007):
Goal: convert sub-VGA quality video (cellphone video) to high-quality 640x480 resolution video
Idea: a sequence of adjacent frames contains implicit information about the "missing" pixels

Workflow Planning Research Opportunities
Problem: assigning workflow(s) to particular jobs requires extensive expertise
Solution: system learns general job-planning preferences from expert users
Problem: sets of jobs often not scheduled optimally, wasting time & raising costs
Solution: improved scheduling algorithms that consider interactions between jobs (e.g. ganging)
Problem: users find it hard to define new workflows correctly (even with good tools)
Solution: create and verify new workflows by automated planning

Conclusion: Why Now?
An early goal of AI was to create programs that could understand ordinary human experience
This goal proved elusive:
Missing probabilistic tools
Systems not grounded in the real world
Lacked compelling purpose
Today we have the mathematical tools, the sensors, and the motivation

Credits
Graduate students: Don Patterson, Lin Liao, Ashish Sabharwal, Yongshao Ruan, Tian Sang, Harlan Hile, Alan Liu, Bill Pentney, Brian Ferris
Colleagues: Dieter Fox, Gaetano Borriello, Dan Weld, Matthai Philipose
Funders: NIDRR, Intel, NSF, DARPA