All that video! Analysis Across Time, Place, & Activities
The Problems
- Too much video
- Too little time
- Too few people
- Too many hypotheses
- Hard to search through video
Slow Coding
Why So Much?
- Timescales of phenomena of interest: weeks, months, years of video
- Following people across sites & activities
- Comparative cases multiply video footage
- Video is becoming cheap and easy
- Storage and processing are feasible
New Hope for the Dread
- Winnow footage to identify useful segments
- Speed-view through footage
- Simultaneous multiple video displays
- Time sampling
- Multiple-timescale viewing (temporal zoom)
- Place synthesis (multiple viewpoints)
- Computational search and classification
Time Sampling
- Randomly sample with a fixed-length time window, or sample every n seconds for a fixed time
- Concatenate the samples and run them as a meta-video
- Cf. time-lapse and stroboscopic photography
- What makes a sample “representative”? Criteria for helpful sampling?
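The periodic variant of this sampling scheme can be sketched in a few lines. This is a minimal illustration, not a tool from the talk; the function name and parameters are invented for the example, and the resulting (start, end) times would be handed to a video editor or a tool like ffmpeg to cut and concatenate the clips into the meta-video.

```python
def sample_windows(duration_s, every_s, window_s):
    """Return (start, end) times, in seconds, of fixed-length sample
    windows taken every `every_s` seconds across a recording.
    Clips cut at these times are concatenated into a meta-video."""
    windows = []
    t = 0.0
    while t + window_s <= duration_s:
        windows.append((t, t + window_s))
        t += every_s
    return windows

# One hour of footage, a 5-second window every 5 minutes:
clips = sample_windows(3600, 300, 5)
# 12 clips -> a 60-second meta-video standing in for an hour of footage
```

The same skeleton supports random sampling by drawing the start times from a random distribution instead of stepping at a fixed interval, which is one way to probe what makes a sample “representative.”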
SpeedViewing
- What do we see if we watch a lot of video at accelerated speed? With or without time-sampling
- Manovich group NBC News meta-video: time-sampled to just the opening scenes, collected over 20 years, run as a single video
- Pattern recognition? Aided by comparison?
Overlay-viewing
Side by Side
Multiple Comparisons
- We are somewhat used to watching two related videos side by side
- What if there were a display matrix of 4? Or 9, in a 3-by-3 array?
- All running simultaneously, in sync
- What would we notice? How would we learn to watch such displays?
Time Zoom
- Not side-by-side in space, but nested in time
- Along a timeline of the long scale of one video, or several chained together, expand an inner timeline of an embedded episode
- And within that one, another, perhaps down to individual frames
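The nesting described above amounts to a tree of episodes, each covering a span of the outer timeline. A minimal sketch, with invented labels and times standing in for real coded episodes, shows how a moment in the long-scale video resolves into a chain of ever-finer embedded episodes:

```python
# Nested episodes: each node covers (start, end) seconds of the outer video.
# Labels and times here are hypothetical examples, not real data.
timeline = {
    "label": "full recording", "start": 0, "end": 7200,
    "children": [
        {"label": "group work", "start": 1800, "end": 3600,
         "children": [
             {"label": "dispute", "start": 2400, "end": 2520, "children": []},
         ]},
    ],
}

def zoom_path(node, t):
    """Chain of nested episode labels containing time t, from the
    long-scale timeline down to the innermost embedded episode."""
    if not (node["start"] <= t < node["end"]):
        return []
    path = [node["label"]]
    for child in node["children"]:
        path += zoom_path(child, t)
    return path

# zoom_path(timeline, 2450) -> ["full recording", "group work", "dispute"]
```

A temporal-zoom interface would walk this path outward-in, expanding each inner timeline in turn, down to individual frames at the innermost level.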
Scrolling timescales
Place Across Time
- Microsoft PhotoSynth assembles composite spaces from large sets of photographs of the same or adjacent scenes taken from different viewpoints: http://photosynth.net/
- If the images were video stills from one or more traversals through a neighborhood, a composite image could index the videos spatially
- This would allow us to search a video corpus spatially, with or without GPS markup
Computational Screening
- Recent advances in computer science support scene recognition in video (TRECVID)
- Image and video classification: find more like these; identify similarity/difference clusters
- Even with many false positives, this aids manual segment selection for further analysis
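The “find more like these” step can be illustrated with a toy sketch: represent each segment as a feature vector (e.g. a color histogram or classifier embedding) and rank the corpus by cosine similarity to a query segment. The segment names, vectors, and function are hypothetical examples, not part of any real system:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def more_like_these(query, corpus, k=3):
    """Rank corpus segments by feature similarity to a query segment.
    `corpus` maps segment ids to feature vectors."""
    ranked = sorted(corpus, key=lambda sid: cosine(query, corpus[sid]),
                    reverse=True)
    return ranked[:k]

# Hypothetical 3-dimensional features for three coded segments:
segments = {
    "classroom_a": [0.9, 0.1, 0.0],
    "classroom_b": [0.8, 0.2, 0.1],
    "playground":  [0.1, 0.9, 0.2],
}
print(more_like_these([0.85, 0.15, 0.05], segments, k=2))
```

Even when such a ranking returns many false positives, it narrows thousands of candidate segments to a short list a human analyst can screen by hand, which is the point of the slide.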
Image classification
Clustering Faces
Context Browsing
Multi-thread Browsing
Reductive Comparison
The Meaning Problem
- Maintaining a focus on meaning makes the big picture hard to see and the analysis of large databases of rich media intractable
- Postponing a focus on meaning lets us exploit the rich redundancy of media features, so that computation can aid selection
- Meaning enters when we select criterial features of interest to guide computation, and when we interpret its results afterwards
A New Paradigm?
- Mixed and complementary methods to combine qualitative & quantitative paradigms
- Abandon the logic of experimental research on complex socio-natural systems (neither control nor generalizability is achievable)
- Keep quantitative methods for data-mining qualitatively rich media databases
- Keep and extend qualitative paradigms to ground the logic of research