Active Capture and Folk Computing
Ana Ramírez and Marc Davis
ICME 2004 – Taipei, Taiwan, 29 June 2004
UC Berkeley – Garage Cinema Research – Group for User Interface Research
Smart Multimedia Acquisition Systems
- First two papers in the session: automatic camera calibration (image, audio)
- Third paper: understand the structure of what is being captured in order to edit in real time
- Active Capture: smart cameras that interactively guide and capture human action
Outline
- Sample applications
- Active Capture
- Designing Active Capture algorithms
- Future work
Sample Applications: Automatic Movie Trailers
- Video of the capture process [video]
- Video of the automatically created movie trailer [video]
Sample Applications: Sports Instruction
Sample Applications: Telemedicine and Automated Health Screening
- Rural town → large city (example: leishmaniasis)
Active Capture
Active Capture sits at the intersection of capture, interaction, and processing, drawing on Direction/Cinematography, Human-Computer Interaction, and Computer Vision/Audition.
Active Capture
- Traditionally, signal processing algorithms avoid interacting with the user
- Signal processing + interaction => more sophisticated recognizers
- How do we design hybrid algorithms that combine capture, interaction, and processing?
Components of Active Capture Algorithms
- Simple computer vision and audition recognizers/sensors: motion, eyes, sound
- Desired action expressed in terms of recognizers
- Interaction script
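The "simple recognizers" above could be sketched, for example, as frame differencing for the motion sensor. This is a minimal illustrative sketch, not the paper's implementation; the function name and threshold are assumptions:

```python
import numpy as np

def motion_detected(prev_frame, frame, threshold=10.0):
    """Hypothetical motion recognizer via frame differencing:
    report motion when the mean absolute pixel difference
    between consecutive frames exceeds a threshold."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return bool(diff.mean() > threshold)
```

An eye or sound recognizer would plug into the same boolean-per-frame interface, which is what lets the desired action be described purely in terms of recognizer outputs.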
Design Process
Input: desired action = head turn; recognizers = motion, eyes
- Step 1: Express the desired action in terms of recognizers — over time: no motion/no eyes → motion → eyes → no motion
- Step 2: Design the interaction script
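Step 1 above can be sketched as a sequence matcher over per-frame recognizer outputs. The encoding below (boolean pairs per frame, one predicate per phase) is a hypothetical illustration of the slide's timeline, not the paper's notation:

```python
def is_head_turn(frames):
    """frames: iterable of (motion, eyes) recognizer outputs per frame.
    Checks the ordered pattern from the slide:
      no motion/no eyes -> motion -> eyes -> no motion.
    Each phase may last any number of frames."""
    steps = [
        lambda m, e: not m and not e,  # subject still, eyes not visible
        lambda m, e: m,                # head starts turning
        lambda m, e: e,                # eyes come into view
        lambda m, e: not m,            # turn completes, subject still
    ]
    i = 0  # index of the next phase we are waiting for
    for m, e in frames:
        # A single frame may satisfy several consecutive phases
        # (e.g. motion and eyes detected in the same frame).
        while i < len(steps) and steps[i](m, e):
            i += 1
    return i == len(steps)
```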
Design Process – Step II
Video examples of the interaction script in action [videos]
Design Challenges
- Step I – Description of action: approximate timing; strict and non-strict ordering
- Step II – Interaction script: what to do if something goes wrong (mediation)
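The mediation challenge above — what the system does when the expected action is not recognized — can be sketched as a retry loop that escalates its direction to the user (echoing the "show" and "tell" strategies later in the talk). The prompt wording and retry policy are illustrative assumptions, not from the paper:

```python
def run_with_mediation(recognize, prompt, max_attempts=3):
    """Ask for an action, check it with a recognizer, and escalate
    the direction given to the user on each failed attempt."""
    prompts = [
        "Please turn your head toward the camera.",                 # plain command
        "Try again: turn slowly, like in this demonstration.",      # decompose / "show"
        "Start facing away, then rotate until you see the lens.",   # explicit "tell"
    ]
    for attempt in range(max_attempts):
        prompt(prompts[min(attempt, len(prompts) - 1)])
        if recognize():
            return True   # action captured successfully
    return False          # mediation failed; fall back or give up
```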
Step I – Action Description
Step I – Action Description: Visual Language
Elements: observations, commands, capture, time constraints, strict ordering, non-strict ordering
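The two ordering constraints in the visual language can be illustrated with a small checker: strict ordering requires the named events to occur consecutively, while non-strict ordering only requires them to occur in order, possibly with other events interleaved. The list-of-strings encoding is a hypothetical sketch, not the paper's notation:

```python
def in_order(events, required, strict=False):
    """Check whether `required` events occur in `events` in order.
    strict=True  -> they must appear as one consecutive run;
    strict=False -> other events may be interleaved between them."""
    if strict:
        n = len(required)
        return any(events[i:i + n] == required
                   for i in range(len(events) - n + 1))
    it = iter(events)
    return all(e in it for e in required)  # ordered-subsequence test
```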
Step II – Interaction Script
Step II – Interaction Script: Contextual Inquiries
- Golf instructor
- Aikido instructor
- 911 emergency phone operator
- Triage nurse
- Children's portrait photographer
- Film and theatre directors
[Jeffrey Heer, Nathaniel S. Good, Ana Ramirez, Marc Davis, and Jennifer Mankoff. "Presiding Over Accidents: System Direction of Human Action." In: Proceedings of the Conference on Human Factors in Computing Systems (CHI 2004), Vienna, Austria. ACM Press, 2004.]
Step II – Interaction Script: Direction and Feedback Strategies
- External aids [video]
- Decomposition and "show" [video]
- Method shift from "show" to "tell" [video]
Summary
- Active Capture: smart cameras that interactively guide and capture human action
- Sample applications: automated health screening, automated movie clips, sports trainer
- Design challenges: description of action; interaction script
Future Work
- Support the design and implementation of Active Capture applications
- Evaluate the relative contribution of signal analysis and user interaction in these hybrid algorithms
Questions
Ana Ramírez, Garage Cinema Research / Group for User Interface Research