Slide 1: Active Capture and Folk Computing
Ana Ramírez and Marc Davis
ICME 2004 – Taipei, Taiwan, 29 June 2004
UC Berkeley – Garage Cinema Research – Group for User Interface Research
Slide 2: Smart Multimedia Acquisition Systems
- First two papers: automatic camera calibration (image, audio)
- Third paper: understand the structure of what is being captured in order to edit in real time
- Active Capture: smart cameras that interactively guide and capture human action
Slide 3: Outline
- Sample applications
- Active Capture
- Designing Active Capture algorithms
- Future work
Slide 4: Sample Applications – Automatic Movie Trailers
[Video: the capture process]
Slide 5: Sample Applications – Automatic Movie Trailers
[Video: the automatically created movie trailer]
Slide 6: Sample Applications – Sports Instruction
Slide 7: Sample Applications – Telemedicine
[Diagram: rural town and large city; example condition: leishmaniasis]
Slide 8: Sample Applications – Automated Health Screening
[Diagram: rural town and large city; example condition: leishmaniasis]
Slides 9-13: Active Capture
[Diagram: Active Capture sits at the intersection of Capture, Interaction, and Processing, drawing on Direction/Cinematography, Human-Computer Interaction, and Computer Vision/Audition]
Slide 14: Active Capture
- Traditionally, signal processing algorithms avoid interacting with the user
- Signal processing + interaction => more sophisticated recognizers
- Question: how to design hybrid algorithms that involve capture, interaction, and processing (a minimal sketch follows below)
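The slide poses the design question rather than giving an algorithm; purely as a rough illustration, the sketch below shows one way such a hybrid loop could be organized, with processing deciding when it is confident and interaction stepping in when it is not. Every name here (capture_frame, recognize, prompt_user, hybrid_capture) is a hypothetical stand-in, not part of the Active Capture system.

```python
# Hypothetical sketch of a hybrid capture/interaction/processing loop.
# All functions are stand-ins; a real system would talk to a camera,
# run actual recognizers, and speak or display its prompts.

import random
import time


def capture_frame():
    """Stand-in for grabbing a frame from the camera."""
    return object()


def recognize(frame):
    """Stand-in recognizer: returns a (label, confidence) pair."""
    return random.choice(["motion", "no_motion"]), random.random()


def prompt_user(message):
    """Stand-in for the interaction channel (audio prompt, on-screen text, ...)."""
    print("DIRECTOR:", message)


def hybrid_capture(goal_label, min_confidence=0.8, timeout_s=5.0):
    """Capture until the recognizer confidently sees the goal, prompting
    the user whenever processing alone is not confident enough."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        label, confidence = recognize(capture_frame())
        if label == goal_label and confidence >= min_confidence:
            return True          # processing succeeded without further help
        prompt_user(f"I still need to see '{goal_label}' - please try again.")
        time.sleep(0.5)          # give the user a moment to respond
    return False                 # a mediation strategy would take over here


if __name__ == "__main__":
    hybrid_capture("motion")
```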
Slide 15: Components of Active Capture Algorithms
- Simple computer vision and audition recognizers/sensors: motion, eyes, sound
- The desired action, expressed in terms of those recognizers
- An interaction script
(A sketch of the recognizer components follows below.)
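As a minimal sketch only, the three simple recognizers named on the slide (motion, eyes, sound) could share a common observation interface like the one below. The detectors here are toy stand-ins operating on plain Python data, not the project's actual vision and audition code; every name is hypothetical.

```python
# Hypothetical stand-ins for the slide's three simple recognizers.
# Real versions would use frame differencing, face/eye detection, and an
# audio level threshold; here everything is reduced to plain Python data.

from dataclasses import dataclass


@dataclass
class Observation:
    motion: bool   # did enough pixels change between frames?
    eyes: bool     # are the subject's eyes visible (face toward the camera)?
    sound: bool    # is the audio level above the noise floor?


def detect_motion(prev_pixels, pixels, threshold=0.02):
    """Toy motion detector: fraction of changed 'pixels' above a threshold."""
    changed = sum(1 for a, b in zip(prev_pixels, pixels) if a != b)
    return changed / max(len(pixels), 1) > threshold


def detect_eyes(frame):
    """Placeholder eye detector: the sketch's frames carry a precomputed flag;
    a real recognizer would run face/eye detection on the image."""
    return bool(frame.get("eyes_visible", False))


def detect_sound(samples, noise_floor=0.1):
    """Toy audio detector: mean absolute amplitude above the noise floor."""
    return sum(abs(s) for s in samples) / max(len(samples), 1) > noise_floor


def observe(prev_frame, frame, samples):
    """Bundle the three recognizer outputs into one Observation."""
    return Observation(
        motion=detect_motion(prev_frame["pixels"], frame["pixels"]),
        eyes=detect_eyes(frame),
        sound=detect_sound(samples),
    )
```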
Slide 16: Design Process
Input: desired action = head turn; recognizers = motion, eyes
[Timeline diagram: motion and eye tracks over time]
Slide 17: Design Process
Input: desired action = head turn; recognizers = motion, eyes
Step 1: Express the desired action in terms of the recognizers
[Timeline diagram: no motion / no eyes, then motion, then eyes / no motion]
(A sketch of this step follows below.)
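On one plausible reading of the slide's timeline, the head turn is an ordered sequence of recognizer states: no motion and no eyes, then motion, then no motion with eyes visible. The sketch below is a hypothetical encoding of that reading and a simple in-order check against an observation stream; it is illustrative, not the paper's representation.

```python
# Hypothetical encoding of "head turn" as an ordered list of recognizer
# states, checked greedily against a stream of (motion, eyes) observations.

HEAD_TURN = [
    {"motion": False, "eyes": False},   # facing away, holding still
    {"motion": True},                   # turning (eye state unconstrained)
    {"motion": False, "eyes": True},    # facing the camera, holding still
]


def matches(observation, expected):
    """True if the observation agrees with every constrained recognizer."""
    return all(observation.get(key) == value for key, value in expected.items())


def action_completed(observations, description=HEAD_TURN):
    """Did the expected states occur, in order, somewhere in the stream?"""
    stage = 0
    for obs in observations:
        if stage < len(description) and matches(obs, description[stage]):
            stage += 1
    return stage == len(description)


# Example stream: still and facing away, then turning, then still facing camera.
stream = [
    {"motion": False, "eyes": False},
    {"motion": True, "eyes": False},
    {"motion": True, "eyes": True},
    {"motion": False, "eyes": True},
]
print(action_completed(stream))   # True
```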
Slide 18: Design Process
Input: desired action = head turn; recognizers = motion, eyes
Step 1: Express the desired action in terms of the recognizers
Step 2: Design the interaction script (a sketch follows below)
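Step 2 pairs that action description with an interaction script: something to say to start the action and something to say when a stage stalls. The structure and prompts below are a hypothetical sketch, not the project's actual script.

```python
# Hypothetical interaction script for the head-turn action: one opening
# direction plus a fallback prompt per stage, used when that stage stalls.

HEAD_TURN_SCRIPT = {
    "intro": "Look away from the camera and hold still.",
    "stage_prompts": [
        "Hold still for a moment, facing away from the camera.",
        "Now turn your head toward the camera.",
        "Great, keep looking at the camera and hold still.",
    ],
}


def direct(script, current_stage, stalled):
    """Return the next thing the system should say, or None."""
    if current_stage == 0 and not stalled:
        return script["intro"]
    if stalled:
        return script["stage_prompts"][current_stage]
    return None


print(direct(HEAD_TURN_SCRIPT, 0, stalled=False))   # the opening direction
print(direct(HEAD_TURN_SCRIPT, 1, stalled=True))    # re-prompt for the turn
```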
Slides 19-23: Design Process – Step II
[Video clips demonstrating the Step II interaction script]
Slide 24: Design Challenges
- Step I (description of the action): approximate timing; strict and non-strict ordering
- Step II (interaction script): what to do if something goes wrong (mediation)
(A sketch of how these challenges surface in an action description follows below.)
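To make the challenges concrete, the hypothetical step representation below attaches an approximate timing window, an ordering flag, and a named mediation action to each step of an action description. The notation is illustrative only and is not the visual language presented in the talk.

```python
# Hypothetical per-step representation showing where the slide's challenges
# land: approximate timing (min/max duration), strict vs. non-strict
# ordering, and a mediation action for when the step fails.

from dataclasses import dataclass


@dataclass
class Step:
    expected: dict                      # recognizer states, e.g. {"motion": True}
    min_s: float = 0.0                  # approximate timing: shortest acceptable duration
    max_s: float = float("inf")         # approximate timing: longest acceptable duration
    strict_after_previous: bool = True  # strict ordering: must follow the previous step
    on_failure: str = "re-prompt"       # mediation: "re-prompt", "demonstrate", "skip", ...


HEAD_TURN_STEPS = [
    Step({"motion": False, "eyes": False}, min_s=0.5),
    Step({"motion": True}, max_s=3.0, on_failure="demonstrate"),
    Step({"motion": False, "eyes": True}, min_s=1.0),
]
```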
Slide 25: Step I – Action Description
Slides 26-32: Step I – Action Description Visual Language
The visual language combines (a rough textual stand-in follows below):
- Observations
- Commands
- Capture
- Time constraints
- Strict ordering
- Non-strict ordering
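The actual language is graphical; purely as a rough textual stand-in, an action description combining those six elements might look like the list below, where list order implies strict ordering and an explicit group would mark the non-strict case. All of this is illustrative rather than the paper's notation.

```python
# Rough textual stand-in for the graphical action-description language.
# Each entry is (element_type, payload). List order implies strict ordering;
# an ("any_order", [...]) group would mark elements with no ordering constraint.

HEAD_TURN_DESCRIPTION = [
    ("command", "Look away from the camera."),
    ("observe", {"motion": False, "eyes": False}),   # wait for stillness
    ("capture", "start"),                            # begin recording here
    ("command", "Turn your head toward the camera."),
    ("observe", {"motion": True}),
    ("observe", {"motion": False, "eyes": True}),
    ("time_constraint", {"max_seconds": 5}),         # the whole turn within ~5 s
    ("capture", "stop"),                             # stop recording here
]

# A simple interpreter could walk this list, issuing commands, waiting for
# observations, and toggling capture, in the spirit of the earlier sketches.
```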
Slide 33: Step II – Interaction Script
Slide 34: Step II – Interaction Script: Contextual Inquiries
- Golf instructor
- Aikido instructor
- 911 emergency phone operator
- Triage nurse
- Children's portrait photographer
- Film and theatre directors
[Jeffrey Heer, Nathaniel S. Good, Ana Ramirez, Marc Davis, and Jennifer Mankoff. "Presiding Over Accidents: System Direction of Human Action." In: Proceedings of the Conference on Human Factors in Computing Systems (CHI 2004), Vienna, Austria. ACM Press, 463-470, 2004.]
Slide 35: Step II – Interaction Script: Direction and Feedback Strategies (External Aids)
[Video clip]
Slide 36: Step II – Interaction Script: Direction and Feedback Strategies (Decomposition and "Show")
[Video clip]
Slide 37: Step II – Interaction Script: Direction and Feedback Strategies (Method Shift from "Show" to "Tell")
[Video clip]
Slide 38: Step II – Interaction Script: Direction and Feedback Strategies (summary of the strategies above; a sketch of shifting between them follows below)
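Reading the clips together, one simple way to operationalize these strategies is as a method shift: stay with one strategy until it has failed a few times, then move to another (for example from "show" to "tell", or to an external aid). The sketch below is hypothetical and the ordering is illustrative, not a claim about which shift the directors actually preferred.

```python
# Hypothetical method shift over the strategies named on the slides:
# if the current strategy keeps failing, move on to a different one.

STRATEGIES = ["show", "tell", "external_aid"]   # illustrative ordering only


def shift_method(current, failed_attempts, patience=2):
    """Keep the current strategy until it has failed `patience` times,
    then shift to the next strategy in the (illustrative) list."""
    if failed_attempts < patience:
        return current
    next_index = (STRATEGIES.index(current) + 1) % len(STRATEGIES)
    return STRATEGIES[next_index]


print(shift_method("show", failed_attempts=1))   # -> "show"
print(shift_method("show", failed_attempts=2))   # -> "tell"
```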
Slide 39: Summary
- Active Capture: smart cameras that interactively guide and capture human action
- Sample applications: automated health screening, automated movie clips, sports trainer
- Design challenges: description of the action; interaction script
Slide 40: Future Work
- Support the design and implementation of Active Capture applications
- Evaluate the relative contributions of signal analysis and user interaction in these hybrid algorithms
Slide 41: Questions
Ana Ramírez, anar@cs.berkeley.edu, www.cs.berkeley.edu/~anar
Garage Cinema Research: http://garage.sims.berkeley.edu
Group for User Interface Research: http://guir.berkeley.edu