Understanding Human Behavior from Sensor Data

Presentation on theme: "Understanding Human Behavior from Sensor Data"— Presentation transcript:

1 Understanding Human Behavior from Sensor Data
Henry Kautz University of Washington

2 A Dream of AI
Systems that can understand ordinary human experience
Work in KR, NLP, vision, IUI, planning…
- Plan recognition
- Behavior recognition
- Activity tracking
Goals:
- Intelligent user interfaces
- A step toward true AI

3 Plan Recognition, circa 1985

4 Behavior Recognition, circa 2005

5 Punch Line
Resurgence of work in behavior understanding, fueled by:
- Advances in probabilistic inference
  - Graphical models
  - Scalable inference
  - KR ∪ Bayes
- Ubiquitous sensing devices
  - RFID, GPS, motes, …
  - Ground recognition in sensor data
- Healthcare applications
  - Aging boomers: the fastest-growing demographic

6 This Talk
- Activity tracking from RFID tag data: ADL Monitoring
- Learning patterns of transportation use from GPS data: Activity Compass
- Learning to label activities and places: Life Capture
- Kodak Intelligent Systems Research Center: Projects & Plans

7 This Talk
- Activity tracking from RFID tag data: ADL Monitoring
- Learning patterns of transportation use from GPS data: Activity Compass
- Learning to label activities and places: Life Capture
- Kodak Intelligent Systems Research Center: Projects & Plans

8 Object-Based Activity Recognition
Activities of daily living involve the manipulation of many physical objects:
- Kitchen: stove, pans, dishes, …
- Bathroom: toothbrush, shampoo, towel, …
- Bedroom: linen, dresser, clock, clothing, …
We can recognize activities from a time-sequence of object touches.

9 Application
ADL (Activity of Daily Living) monitoring for the disabled and/or elderly
- Changes in routine are often a precursor to illness or accidents
- Human monitoring is intrusive and inaccurate
Image courtesy Intel Research

10 Sensing Object Manipulation
RFID: radio-frequency identification tags
- Small
- No batteries
- Durable
- Cheap
- Easy to tag objects
- Near future: use products' own tags

11 Wearable RFID Readers
- 13.56 MHz reader, radio, power supply, antenna
- 12-inch range, hour lifetime
- Objects tagged on grasping surfaces

12 Experiment: ADL Form Filling
- Tagged a real home with 108 tags
- 14 subjects each performed 12 of 65 activities (14 ADLs) in arbitrary order
- Used a glove-based reader
- Task: given the trace, recreate the activities

13 Results: Detecting ADLs

Activity                     Prior Work / SHARP
Personal Appearance          92 / 92
Oral Hygiene                 70 / 78
Toileting                    73 / 73
Washing up                   100 / 33
Appliance Use                100 / 75
Use of Heating               84 / 78
Care of clothes and linen    100 / 73
Making a snack               100 / 78
Making a drink               75 / 60
Use of phone                 64 / 64
Leisure Activity             100 / 79
Infant Care                  100 / 58
Medication Taking            100 / 93
Housework                    100 / 82

Legend: Prior Work = point solution; SHARP = general solution

Inferring ADLs from Interactions with Objects. Philipose, Fishkin, Perkowitz, Patterson, Hähnel, Fox, and Kautz. IEEE Pervasive Computing, 4(3), 2004.

14 Experiment: Morning Activities
- 10 days of data from the morning routine in an experimenter's home
- 61 tagged objects
- 11 activities, often interleaved and interrupted, with many shared objects:
  Use bathroom, Make coffee, Set table, Make oatmeal, Make tea, Eat breakfast, Make eggs, Use telephone, Clear table, Prepare OJ, Take out trash

15 Baseline: Individual Hidden Markov Models
68% accuracy; 11.8 errors per episode

16 Baseline: Single Hidden Markov Model
83% accuracy; 9.4 errors per episode

17 Cause of Errors
Observations were types of objects: spoon, plate, fork, …
Typical errors: confusion between activities that use one object repeatedly and activities that use different objects of the same type
This is a critical distinction in many ADLs:
- Eating versus setting the table
- Dressing versus putting away laundry

18 Aggregate Features
An HMM with individual object observations fails: no generalization!
Solution: add aggregation features
- Number of objects of each type used
- Requires a history of the current activity's performance
- A DBN encoding avoids the state explosion of an HMM
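The aggregation idea can be sketched in a few lines: instead of feeding the raw object stream to the model, track how many distinct objects of each type the current activity has touched so far. The helper name and data below are illustrative, not from the paper's implementation.

```python
# Sketch of aggregate features: count distinct objects per type since the
# current activity began. Object IDs and types here are invented examples.
from collections import defaultdict

def aggregate_features(touches):
    """touches: list of (object_id, object_type) since the activity began.
    Returns {object_type: number of distinct objects of that type used}."""
    seen = defaultdict(set)
    for obj_id, obj_type in touches:
        seen[obj_type].add(obj_id)
    return {t: len(ids) for t, ids in seen.items()}

# Eating touches one plate repeatedly; setting the table touches several
# distinct plates. The raw streams look similar, but the counts differ.
eating = aggregate_features([("plate1", "plate"), ("plate1", "plate"), ("fork1", "fork")])
setting = aggregate_features([("plate1", "plate"), ("plate2", "plate"), ("plate3", "plate")])
print(eating, setting)
```

These counts are exactly the kind of observation a DBN can condition on without enumerating the full touch history as HMM state.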

19 Dynamic Bayes Net with Aggregate Features
88% accuracy; 6.5 errors per episode

20 Improving Robustness
The DBN fails if novel objects are used.
Solution: smooth parameters over an abstraction hierarchy of object types.

21

22 Abstraction Smoothing
Methodology:
- Train on 10 days of data
- Test where one activity substitutes one object
Change in error rate:
- Without smoothing: 26% increase
- With smoothing: 1% increase
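One way to realize abstraction smoothing is to interpolate an object's learned probability with its ancestors' in the type hierarchy, so a novel object inherits mass from its abstract class. This is a sketch under that assumption; the hierarchy, mixing weight, and probabilities are all illustrative.

```python
# Sketch of abstraction smoothing: mix an object's learned emission
# probability with its parents' up the type hierarchy, so a never-seen
# object still gets probability mass. All values are invented examples.
hierarchy = {"silver_spoon": "spoon", "soup_spoon": "spoon", "spoon": "utensil"}

def smoothed_prob(obj, learned, alpha=0.7):
    """Recursively interpolate P(obj | activity) with its parent's estimate."""
    p = learned.get(obj, 0.0)
    parent = hierarchy.get(obj)
    if parent is None:
        return p
    return alpha * p + (1 - alpha) * smoothed_prob(parent, learned, alpha)

# Training only ever saw the silver spoon used for this activity, but the
# novel soup spoon still receives mass through the shared "spoon" class.
learned = {"silver_spoon": 0.6, "spoon": 0.5}
print(smoothed_prob("soup_spoon", learned))
```

Without the hierarchy the novel object's probability would be zero and the model would reject the activity outright, which matches the 26% error increase seen without smoothing.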

23 Summary
- Activities of daily living can be robustly tracked using RFID data
- Simple, direct sensors can often replace (or augment) general machine vision
- Accurate probabilistic inference requires sequencing, aggregation, and abstraction
- Works for essentially all ADLs defined in the healthcare literature
D. Patterson, H. Kautz, & D. Fox, ISWC 2005 best paper award

24 This Talk
- Activity tracking from RFID tag data: ADL Monitoring
- Learning patterns of transportation use from GPS data: Activity Compass
- Learning to label activities and places: Life Capture
- Kodak Intelligent Systems Research Center: Projects & Plans

25 Motivation: Community Access for the Cognitively Disabled

26 The Problem
Using public transit is cognitively challenging:
- Learning bus routes and numbers
- Transfers
- Recovering from mistakes
Point-to-point shuttle service is impractical: slow and expensive.
Current GPS units are hard to use:
- Require extensive user input
- Error-prone near buildings and inside buses
- No help with transfers or timing

27 Solution: Activity Compass
- User carries a smart cell phone
- System infers transportation mode from GPS (position, velocity) and mass transit information
- Over time the system learns a user model: important places, common transportation plans
- Mismatches = possible mistakes
- System provides proactive help

28 Technical Challenge
Given a data stream from a GPS unit:
- Infer the user's mode of transportation (foot, car, bus, bike, …) and the places where the mode changes (bus stops, parking lots, building entrances, …)
- Learn the user's daily pattern of movement
- Predict the user's future actions
- Detect user errors

29 Technical Approach
- Map: directed graph
- Location: (edge, distance); geographic features, e.g. bus stops
- Movement: (mode, velocity)
- Probability of turning at each intersection
- Probability of changing mode at each location
- Tracking (filtering): given some prior estimate, update position & mode according to the motion model, then correct according to the next GPS reading
[Speaker notes: So far I have described the general picture of our system and some applications. The rest of the presentation focuses on the technical approach of probabilistic reasoning. There are three key techniques in our system; for lack of time I will only give a rough picture of each and show the results of our experiments.]
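The predict/correct tracking loop described above is typically implemented as a particle filter over (edge, distance, mode) states. Here is a deliberately stripped-down sketch on a single edge; the speeds, noise levels, and edge name are illustrative assumptions, and mode switching is omitted.

```python
# Minimal particle-filter sketch of tracking on a road graph. Particles
# carry (edge, distance-along-edge, mode); motion and noise values are
# illustrative, and this sketch never switches a particle's mode.
import math
import random

random.seed(0)
MODE_SPEED = {"foot": 1.4, "car": 10.0}  # assumed typical speeds, m/s

def step(particles, gps_dist, dt=1.0, sigma=5.0):
    # Predict: advance each particle along its edge at its mode's speed.
    moved = [(e, d + MODE_SPEED[m] * dt + random.gauss(0, 1.0), m)
             for e, d, m in particles]
    # Correct: weight each particle by how well it matches the GPS projection.
    w = [math.exp(-((d - gps_dist) ** 2) / (2 * sigma ** 2)) for _, d, _ in moved]
    total = sum(w)
    # Resample proportionally to weight.
    return random.choices(moved, weights=[x / total for x in w], k=len(moved))

parts = [("main_st", 0.0, "foot")] * 50 + [("main_st", 0.0, "car")] * 50
for t in range(1, 6):  # GPS readings consistent with walking speed
    parts = step(parts, gps_dist=1.4 * t)
modes = [m for _, _, m in parts]
print(modes.count("foot"), modes.count("car"))  # foot should dominate
```

Car-mode particles predict far too much motion to explain walking-speed GPS readings, so resampling quickly concentrates the population on the foot hypothesis; the full system adds turn and mode-change probabilities at intersections.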

30 Dynamic Bayesian Network I
Variables at times k-1 and k:
- m: transportation mode
- x: edge, velocity, position
- q: data (edge) association
- z: GPS reading

31 Mode & Location Tracking
Figure: measurements and projections, with particles colored by mode (green = bus, red = car, blue = foot).
[Speaker notes: The tracker switches from foot to bus, not car, because of the bus stops. There are car particles, but their weights are too small to be displayed.]

32 Learning
Prior knowledge: general constraints on transportation use
- Vehicle speed ranges
- Bus stops
Learning: specialize the model to a particular user
- 30 days of GPS readings
- Unlabeled data
- Learn edge transition parameters using expectation-maximization (EM)
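In EM for this model, the E-step runs the tracker over the unlabeled GPS traces to get expected counts of which outgoing edge the user took at each intersection, and the M-step normalizes those soft counts into turn probabilities. Here is a sketch of the M-step; the intersection name and counts are invented examples.

```python
# Sketch of the EM M-step for edge-transition parameters: normalize the
# expected (soft) transition counts produced by the E-step into per-node
# turn probabilities. The counts below are invented for illustration.
def m_step(expected_counts):
    """expected_counts: {intersection: {outgoing_edge: expected count}}."""
    params = {}
    for node, counts in expected_counts.items():
        total = sum(counts.values())
        params[node] = {edge: c / total for edge, c in counts.items()}
    return params

# Unlabeled traces suggest the user turns left here far more often than right.
counts = {"5th_and_main": {"left": 7.5, "straight": 2.0, "right": 0.5}}
print(m_step(counts))
```

Iterating the two steps specializes the generic map prior into this user's habitual routes without any labeled data.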

33 How to Improve Predictive Power?
Figure: predictive accuracy (probability of correctly predicting the future) versus distance in city blocks.

34 Transportation Routines
Figure: route from Home via bus stops A and B to the Workplace.
Goal: intended destination
- Workplace, home, friends, restaurants, …
Trip segments: <start, end, mode>
- Home to Bus stop A on Foot
- Bus stop A to Bus stop B on Bus
- Bus stop B to Workplace on Foot
[Speaker notes: The system can now infer location and transportation mode, but we want to know more. To reach a goal the user may switch transportation modes, so we introduce the concept of trip segments.]

35 Dynamic Bayesian Net II
Variables at times k-1 and k:
- g: goal
- t: trip segment
- m: transportation mode
- x: edge, velocity, position
- q: data (edge) association
- z: GPS reading

36 Predict Goal and Path
Figure: predicted goal and predicted path.

37 Improvement in Predictive Accuracy

38 Detecting User Errors
- The learned model represents typical correct behavior
- The model is a poor fit to user errors
- We can use this fact to detect errors!
Cognitive mode:
- Normal: the model functions as before
- Error: switch in prior (untrained) parameters for mode and edge transitions

39 Dynamic Bayesian Net III
Variables at times k-1 and k:
- c: cognitive mode {normal, error}
- g: goal
- t: trip segment
- m: transportation mode
- x: edge, velocity, position
- q: data (edge) association
- z: GPS reading
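The cognitive-mode variable amounts to running two models in parallel, the learned habit model and a flat untrained one, and updating a binary posterior from how well each explains the readings. A minimal Bayes-filter sketch of that idea follows; the likelihoods and the stay probability are illustrative, not the system's parameters.

```python
# Sketch of the cognitive-mode idea: a two-state Bayes filter over
# {normal, error}, corrected by how well the learned model vs. a flat
# untrained model predicts each GPS reading. All numbers are invented.
def update(posterior, p_obs_normal, p_obs_error, stay=0.95):
    """One predict/correct step for the binary cognitive-mode variable."""
    # Predict: the mode mostly persists between time steps.
    prior_n = stay * posterior["normal"] + (1 - stay) * posterior["error"]
    prior_e = stay * posterior["error"] + (1 - stay) * posterior["normal"]
    # Correct: weight each hypothesis by its model's likelihood for the data.
    n, e = prior_n * p_obs_normal, prior_e * p_obs_error
    z = n + e
    return {"normal": n / z, "error": e / z}

post = {"normal": 0.99, "error": 0.01}
# The user stays on the bus past the usual stop: the learned model assigns
# the readings low likelihood, while the flat model is indifferent.
for _ in range(5):
    post = update(post, p_obs_normal=0.01, p_obs_error=0.2)
print(round(post["error"], 3))
```

A few readings that the habit model cannot explain are enough to flip the posterior toward the error mode, which is when the Activity Compass would offer help.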

40 Detecting User Errors
In this episode the person takes the bus home but misses the usual bus stop. The black disk is the goal, and the cross marks where the system thinks the person should get off the bus. The probabilities of normal and abnormal activity are shown as a bar: the black portion is the probability of normal behavior, the red portion abnormal. When the person misses the bus stop, the probability of abnormal activity jumps quickly.

41 Status
- Major funding by NIDRR (National Institute on Disability and Rehabilitation Research)
- Partnership with the UW Dept. of Rehabilitation Medicine & Center for Technology and Disability Studies
- Extension to indoor navigation: hospitals, nursing homes, assisted-care communities; Wi-Fi localization
- Multi-modal interface studies: speech, graphics, text
- Adaptive guidance strategies

42 Papers
- Liu et al., ASSETS 2006. Indoor Wayfinding: Developing a Functional Interface for Individuals with Cognitive Impairments.
- Liao, Fox, & Kautz, AAAI 2004 (Best Paper). Learning and Inferring Transportation Routines.
- Patterson et al., UBICOMP 2004. Opportunity Knocks: A System to Provide Cognitive Assistance with Transportation Services.

43 This Talk
- Activity tracking from RFID tag data: ADL Monitoring
- Learning patterns of transportation use from GPS data: Activity Compass
- Learning to label activities and places: Life Capture
- Kodak Intelligent Systems Research Center: Projects & Plans

44 Task
Learn to label a person's:
- Daily activities: working, visiting friends, traveling, …
- Significant places: workplace, friend's house, usual bus stop, …
Given:
- A training set of labeled examples
- A wearable sensor data stream: GPS, acceleration, ambient noise level, …

45 Learning to Label Places & Activities
Figure: a map labeled HOME, OFFICE, PARKING LOT, RESTAURANT, FRIEND'S HOME, STORE.
Learned general rules for labeling "meaningful places" based on time of day, type of neighborhood, speed of movement, places visited before and after, etc.

46 Relational Markov Network
- First-order version of a conditional random field
- Features defined by feature templates; all instances of a template share the same weight
Example templates:
- Time of day an activity occurs
- Place an activity occurs
- Number of places labeled "Home"
- Distance between adjacent user locations
- Distance between a GPS reading and the nearest street
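The weight-sharing idea behind feature templates can be shown in miniature: one weight scores every grounding of a template in the log-linear model. The template, weight, and data below are invented for illustration, and only a single template is shown rather than a full RMN.

```python
# Sketch of RMN feature templates: one shared weight scores every
# grounding of a template. The template and weight are invented examples.
def template_morning_at_home(activities):
    """Fires once for each activity labeled 'home' that occurs before 9am."""
    return sum(1 for a in activities if a["label"] == "home" and a["hour"] < 9)

weights = {"morning_at_home": 1.5}

def score(activities):
    # A full RMN sums w_t * f_t over many templates (and normalizes);
    # this sketch evaluates just the one template above.
    return weights["morning_at_home"] * template_morning_at_home(activities)

acts = [{"label": "home", "hour": 7}, {"label": "work", "hour": 10},
        {"label": "home", "hour": 8}]
print(score(acts))
```

Because the two morning "home" activities share a single weight, training data about one grounding generalizes to every other instance of the pattern, which is the point of the first-order formulation.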

47 RMN Model
Figure: s1 (global soft constraints) over p1, p2 (significant places), over a1–a5 (activities), over g1–g9 ({GPS, location} sequence through time).

48 Significant Places
Previous work decoupled identifying significant places from the rest of inference:
- A simple temporal threshold
- Misses places with brief activities
The RMN model integrates:
- Identifying significant places
- Labeling significant places
- Labeling activities

49 Results: Finding Significant Places

50 Results: Efficiency

51 Summary
We can learn to label a user's activities and meaningful locations using sensor data and state-of-the-art relational statistical models (Liao, Fox, & Kautz, IJCAI 2005; NIPS 2006).
Many avenues to explore:
- Transfer learning
- Finer-grained activities
- Structured activities
- Social groups

52 This Talk
- Activity tracking from RFID tag data: ADL Monitoring
- Learning patterns of transportation use from GPS data: Activity Compass
- Learning to label activities and places: Life Capture
- Kodak Intelligent Systems Research Center: Projects & Plans

53 Intelligent Systems Research
Henry Kautz Kodak Intelligent Systems Research Center

54 Intelligent Systems Research Center
- New AI / CS center at Kodak Research Labs
- Directed by Henry Kautz; initial group of 4 researchers with expertise in machine vision
- Hiring 6 new PhD-level researchers in 2007
- Mission: provide KRL with access to expertise in machine learning; automated reasoning & planning; computer vision & computational photography; pervasive computing
14 March 2007

55 University Partnerships
Goal: access to external expertise & technology across KRL
- Unrestricted grants: $20–50K
- Sponsored research grants: $75–150K; CEIS (NYSTAR) eligible
- KEA Fellowships (3 years)
- Summer internships
- Individual consulting agreements
- Master IP Agreement
Partners: University of Rochester, University of Massachusetts at Amherst, Cornell
Many other partners: UIUC, Columbia, U Washington, MIT, …

56 New ISRC Project
Extract meaning from the combination of image content and sensor data. First step: GPS location data.
- 10 hand-tuned visual concept detectors: people and faces; environments, e.g. forest, water, …
- 100-way classifier built by machine learning: high-level activities, e.g. swimming; specific objects, e.g. boats
- Learned probabilistic model integrates location and image features, e.g. locations near water raise the probability of "image contains boats"
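The location/image integration can be illustrated with a one-line Bayes update: the GPS context sets the prior on a concept, and the visual detector supplies the likelihood. The priors and the likelihood-ratio treatment of the detector score below are illustrative assumptions, not Kodak's actual model.

```python
# Sketch of fusing a GPS-derived prior with an image detector via Bayes'
# rule. Priors and the detector's likelihood-ratio scale are invented.
def posterior_boat(detector_score, near_water):
    # Prior on "image contains a boat" depends on the GPS context.
    prior = 0.10 if near_water else 0.01
    # Treat the detector score as a likelihood ratio P(score|boat)/P(score|no boat).
    odds = (prior / (1 - prior)) * detector_score
    return odds / (1 + odds)

# The same ambiguous detector output, interpreted at two locations:
print(round(posterior_boat(3.0, near_water=True), 3),
      round(posterior_boat(3.0, near_water=False), 3))
```

An ambiguous detection becomes far more credible near water, which is exactly the "locations near water raise the probability of boats" behavior described above.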

57 Multimedia Gallery/Blog (with automatic annotation)
Figure: photos automatically annotated Flying, Sailing, Driving, Walking, and "Swimming in the Caymans".
Mode of transportation → places (ports, airports, hiking spots, etc.)

58 Enhancing Video
Synthesizing 3-D video from ordinary video:
- Surprisingly simple and fast techniques use motion information to create a compelling 3-D experience
Practical resolution enhancement (new project for 2007):
- Goal: convert sub-VGA quality (cellphone) video to high-quality 640x480 video
- Idea: a sequence of adjacent frames contains implicit information about the "missing" pixels

59 Workflow Planning Research Opportunities
- Problem: assigning workflows to particular jobs requires extensive expertise.
  Solution: the system learns general job-planning preferences from expert users.
- Problem: sets of jobs are often not scheduled optimally, wasting time and raising costs.
  Solution: improved scheduling algorithms that consider interactions between jobs (e.g. ganging).
- Problem: users find it hard to define new workflows correctly (even with good tools).
  Solution: create and verify new workflows by automated planning.

60 Conclusion: Why Now?
An early goal of AI was to create programs that could understand ordinary human experience. This goal proved elusive:
- Missing probabilistic tools
- Systems not grounded in the real world
- No compelling purpose
Today we have the mathematical tools, the sensors, and the motivation.

61 Credits
Graduate students: Don Patterson, Lin Liao, Ashish Sabharwal, Yongshao Ruan, Tian Sang, Harlan Hile, Alan Liu, Bill Pentney, Brian Ferris
Colleagues: Dieter Fox, Gaetano Borriello, Dan Weld, Matthai Philipose
Funders: NIDRR, Intel, NSF, DARPA

