
1 Monitoring Human Behavior in an Office Environment Douglas Ayers and Mubarak Shah * Research Conducted at the University of Central Florida *Now at Harris Corporation

2 Goals of the System
Recognize human actions in a room for which prior knowledge is available
Handle multiple people
Provide a textual description of each action
Extract “key frames” for each action
Recognize all actions simultaneously for a given field of view

3 Possible Actions
Enter
Leave
Sitting or Standing
Picking Up Object
Put Down Object
Remove Object
Open or Close
Use Terminal

4 Prior Knowledge
Spatial layout of the scene:
–Location of entrances and exits
–Location of objects and some information about how they are used
Context can then be used to improve recognition and save computation

5 Layout of Scene 1

6 Layout of Scene 2

7 Layout of Scene 3

8 Layout of Scene 4

9 Major Components
Skin Detection
Tracking
Scene Change Detection
Action Recognition

10 Flow of the System
Skin Detection →
Track People and Objects for this Frame →
Scene Change Detection →
Determine Possible Interactions Between People and Objects →
Update States, Output Text, Output Key Frames

11 Skin Detection
Uses a modified version of the Kjeldsen and Kender method
Uses the Y'CbCr color space instead of HSV
3D histogram-based approach
Training phase and detection phase
Tracking is initialized with a window containing the largest connected region
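The histogram approach on this slide can be sketched as follows. This is a minimal illustration, not the authors' code: the bin count, score normalization, and detection threshold are assumptions, and the arrays are taken to already be in the Y'CbCr color space.

```python
import numpy as np

def train_skin_histogram(skin_pixels, bins=32):
    """Build a 3D color histogram from training skin pixels (N x 3, values 0-255).
    Bin count and normalization are illustrative choices, not from the paper."""
    hist, _ = np.histogramdd(skin_pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist / hist.max()  # scale so the most frequent skin color scores 1.0

def detect_skin(image, hist, bins=32, threshold=0.05):
    """Return a boolean mask of pixels whose histogram score exceeds the threshold."""
    idx = (image // (256 // bins)).astype(int)          # H x W x 3 bin indices
    scores = hist[idx[..., 0], idx[..., 1], idx[..., 2]]
    return scores > threshold
```

The largest connected region of the resulting mask would then seed the tracking window, as the slide describes.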

12 Tracking
Tracking is done on heads and movable objects
Code provided by Fieguth and Terzopoulos
Uses the Y'CbCr color space
Tracks multiple people and objects
Simple non-linear estimator with a goodness-of-fit measure
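The goodness-of-fit measure can be sketched as a per-channel ratio between the target's mean color and a candidate window's mean color. This is a hedged reconstruction in the spirit of the Fieguth and Terzopoulos tracker, not their exact equation; it assumes nonzero channel means.

```python
import numpy as np

def goodness_of_fit(target_mean, candidate_mean):
    """Ratio-based color fit: 1.0 is a perfect match, larger is worse.
    Sketch only; the exact measure is defined in Fieguth and Terzopoulos."""
    t = np.asarray(target_mean, dtype=float)
    m = np.asarray(candidate_mean, dtype=float)
    return float(np.mean(np.maximum(t / m, m / t)))

def best_candidate(target_mean, candidates):
    """Pick the candidate window whose mean color best fits the target."""
    fits = [goodness_of_fit(target_mean, c) for c in candidates]
    return int(np.argmin(fits))
```

In a tracker, `best_candidate` would be evaluated over windows around the previous position each frame, which is what makes the estimator non-linear yet cheap.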

13 Scene Change Detection
Similar to the method used by Sandy Pentland’s group in Pfinder
Uses brightness-normalized chrominance
Takes a 0.5 sec clip to build the model
Determines change based on Mahalanobis distance
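The per-pixel model on this slide can be sketched as below: brightness-normalized chromaticity is computed for each pixel, a short clip of the empty scene gives a per-pixel mean and covariance, and later frames are tested with the Mahalanobis distance. The regularization constant and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normalized_chrominance(rgb):
    """Brightness-normalized chromaticity (r, g), as used in Pfinder-style models."""
    s = rgb.sum(axis=-1, keepdims=True).astype(float)
    s[s == 0] = 1.0                                     # avoid divide-by-zero
    return rgb[..., :2] / s

def build_background(clip):
    """clip: T x H x W x 3 frames of the empty scene (~0.5 s of video).
    Returns the per-pixel mean and inverse covariance of chromaticity."""
    chroma = np.stack([normalized_chrominance(f) for f in clip])  # T x H x W x 2
    mean = chroma.mean(axis=0)
    diff = chroma - mean
    cov = np.einsum('thwi,thwj->hwij', diff, diff) / len(clip)
    cov += 1e-6 * np.eye(2)                             # regularize (assumed)
    return mean, np.linalg.inv(cov)

def changed_pixels(frame, mean, inv_cov, thresh=9.0):
    """Flag pixels whose squared Mahalanobis distance exceeds the threshold."""
    d = normalized_chrominance(frame) - mean
    dist2 = np.einsum('hwi,hwij,hwj->hw', d, inv_cov, d)
    return dist2 > thresh
```

Normalizing by brightness makes the test robust to shadows and lighting changes, which is why the chromaticity plane, not raw RGB, is modeled.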

14 Possible Actions
Enter: a significant skin region is detected for several frames
Leave: the tracking box is close to the edge of the entrance area
Sitting or Standing: significant increase or decrease in the tracking box’s y-component
Picking Up Object: after the person has moved away from the object’s initial position, a scene change is detected

15 Possible Actions
Put Down Object: the person moves away from the object
Remove Object: the object is close to the edge of the entrance area
Open or Close: increase or decrease in the amount of scene change
Use Terminal: the person is sitting and a scene change is detected at the mouse, but not behind it

16 State Model for Action Recognition (state diagram)
States include Start, End, Standing, Sitting, Near Cabinet, Near Terminal, Using Terminal, Near Phone, Talking on Phone, and Opening/Closing Cabinet
Transitions include Enter, Leave, Sit, Stand, Use Terminal, Pick Up Phone, Put Down Phone (Hanging Up Phone), and Open/Close Cabinet
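A state model like the one on this slide can be sketched as a transition table keyed by (state, event). The event and state names follow the slide, but this particular subset of transitions and the dictionary layout are our own illustration, not the authors' full model.

```python
# Hypothetical transition table sketching part of the slide's state model.
TRANSITIONS = {
    ('Start', 'Enter'): 'Standing',
    ('Standing', 'Sit'): 'Sitting',
    ('Sitting', 'Stand'): 'Standing',
    ('Standing', 'Leave'): 'End',
    ('Sitting', 'Use Terminal'): 'Using Terminal',
    ('Sitting', 'Pick Up Phone'): 'Talking on Phone',
    ('Talking on Phone', 'Put Down Phone'): 'Sitting',
    ('Standing', 'Open/Close Cabinet'): 'Standing',
}

def run(events, state='Start'):
    """Advance through the model; unrecognized events leave the state unchanged."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state
```

Driving such a table with the detected low-level events is what lets the system report each person's current action as text.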

17 Key Frames
Why get key frames?
–Key frames take less space to store
–Key frames take less time to transmit
–Key frames can be viewed more quickly
We use heuristics to determine when key frames are taken
–Some are taken before the action occurs
–Some are taken after the action occurs

18 Key Frames
“Enter” key frames: as the person leaves the entrance/exit area
“Leave” key frames: as the person enters the entrance/exit area
“Standing”/“Sitting” key frames: after the tracking box has stopped moving up or down, respectively
“Open”/“Close” key frames: when the percentage of changed pixels stabilizes
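The "Open"/"Close" trigger above can be sketched as a simple stabilization test over the recent percentage-of-changed-pixels measurements. The window size and tolerance here are assumptions for illustration, not values from the paper.

```python
def stabilized(percent_changed, window=5, tol=1.0):
    """True once the last `window` measurements of % changed pixels
    vary by less than `tol` (window and tol are assumed, not from the paper)."""
    if len(percent_changed) < window:
        return False
    recent = percent_changed[-window:]
    return max(recent) - min(recent) < tol
```

A key frame would be grabbed the first time this returns True after an Open or Close begins, i.e. once the cabinet has stopped moving.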

19 Demo Sequences
Two people: opening cabinet, closing cabinet, sitting and standing
One person: picking up phone and putting down phone
One person: moving a file folder
One person: sitting and using a terminal

20 Sequence 1

21 Key Frames Sequence 1 (350 frames), Part 1

22 Key Frames Sequence 1 (350 frames), Part 2

23 Sequence 2

24 Key Frames Sequence 2 (200 frames)

25 Sequence 3

26 Key Frames Sequence 3 (200 frames)

27 Sequence 4

28 Key Frames Sequence 4 (399 frames), Part 1

29 Key Frames Sequence 4 (399 frames), Part 2

30 Sequence 5

31 Sequence 6

32 Summary
Successfully recognizes a set of human actions
Uses prior knowledge about scene layout
Handles multiple people simultaneously
Generates key frames and textual descriptions

33 Limitations
Limited by:
–Skin detection
–Tracking
–Scene change detection
Some actions are difficult to model
Limited by prior knowledge
Limited by field of view

34 Future Work
Increase the action vocabulary
Determine people’s identity
Improve the field of view
–Wide-angle lens
–Multiple cameras
Implement in real time





