1
Video Synopsis. Yael Pritch, Alex Rav-Acha, Shmuel Peleg, The Hebrew University of Jerusalem
2
Detective Series: “Elementary”
3
The Video Surveillance Problem: It took weeks to find these events in video archives, and the cost of lost information or of a delay can be very high. (Terrorist attack on the London tube, 7-7-05; Cologne train bombs, 31-7-06.)
4
Challenges in Video Surveillance: Millions of surveillance cameras are installed, capturing data 24/365, and both the number of cameras and their resolution increase rapidly. There are not enough people to watch the captured data, and human attention is lost after about 20 minutes. The result: recorded video is lost video; less than 1% of surveillance video is ever examined.
5
Handling Surveillance Video: object detection and tracking (background subtraction), object recognition (individual people), and activity recognition (left luggage, fights). A lot of progress has been made; more work remains.
6
Handling Surveillance Video with Video Synopsis: object detection and tracking via background subtraction (assumed done); object recognition of individual people (not used); activity recognition such as left luggage or fights (not used). A lot of progress has been made and more work remains, so let people do the recognition.
8
Video Synopsis: a fast way to browse and index video archives. Summarize a full day of video in a few minutes; events from different times appear simultaneously; the synopsis is then inspected by a human.
9
Synopsis of Surveillance Videos: Human Inspection of Search Results. Serve queries regarding each camera: generate a 3-minute video showing most of the activities in the last 24 hours, or generate the shortest video showing all activities in the last 24 hours. Each presented activity points back to its original time in the original video. This is orthogonal to video analytics.
10
Non-Chronological Time: Dynamic Mosaicing and Video Synopsis. [Slide art: Salvador Dali] The Hebrew University of Jerusalem
11
Dynamic Mosaics: Non-Chronological Time
12
Handheld Stereo Mosaic
13
[Figure: a mosaic image assembled from strips of the original frames, shown in the u-t space-time diagram]
14
[Figure: a space-time slice through the u-t volume; strips u_a to u_b taken from frames t_k to t_l define the visibility region composited into the mosaic image]
15
Creating Dynamic Panoramic Movies. [Figure: u-t space-time volume with slices; the first slice gives the first mosaic (appearance), the last slice gives the last mosaic (disappearance)]
16
Dynamic Panorama: Iguazu Falls
17
From Video In to Video Out: constructing an aligned space-time volume. [Figure: aligned space-time volume with axes u, v, t] Alignment must handle parallax, dynamic scenes, etc.
18
Aligned Space-Time Volume, View from Top. [Figure: u-t slices for frames k and k+1, stationary camera vs. panning camera]
19
Generate Output Video: sweep a "time front" surface through the aligned volume; time is no longer chronological, and new frames are produced by interpolation.
21
Evolving Time Front. [Figure: successive time-front surfaces in the u-t volume] Each time front is mapped to a new output frame using spatio-temporal interpolation, as sketched below.
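As an illustration only (the array layout, the function name, and the plain linear interpolation are assumptions, not the authors' code), a minimal NumPy sketch of turning one time-front surface into an output frame by interpolating the aligned volume along the time axis:

```python
import numpy as np

def render_time_front(volume, front):
    """Map one time-front surface to an output frame.

    volume: aligned space-time volume, shape (T, H, W) or (T, H, W, C)
    front:  per-pixel (possibly fractional) time values, shape (H, W)
    Returns the frame obtained by linear interpolation along t at each
    pixel's front value.
    """
    T = volume.shape[0]
    t0 = np.clip(np.floor(front).astype(int), 0, T - 1)
    t1 = np.clip(t0 + 1, 0, T - 1)
    a = front - t0
    if volume.ndim == 4:
        a = a[..., None]                      # broadcast over color channels
    H, W = front.shape
    ys, xs = np.mgrid[0:H, 0:W]               # per-pixel spatial indices
    return (1 - a) * volume[t0, ys, xs] + a * volume[t1, ys, xs]
```

Sweeping a sequence of such fronts (for example a tilted plane that advances each output frame) yields a non-chronological output video.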
22
Example: Demolition
23
[Figure: u-t space-time view of the demolition sequence]
24
Example: Racing
25
[Figure: v-t space-time view of the racing sequence]
26
Dynamic Panorama: Thessaloniki
27
Creating the Panorama: a 4D min-cut over the aligned space-time volume. [Figure: x-t view of the aligned space-time volume]
28
Mosaic Stitching Examples
30
Video Synopsis and Indexing: Making a Long Video Short. 11 million cameras in 2008, with 30 million expected in 2013, recording 24 hours a day, every day.
31
[Chart: explosive growth in cameras, from about 11 million in 2009 to about 24 million in 2014]
32
Handling the Video Overflow: There are not enough people to watch the captured data. Guards watch about 1% of the video, and automatic video analytics covers less than 5%, and only when events can be accurately defined and detected. Most video is never watched or examined.
33
A Recent Example
34
Related Work (Video Summary)
Key frames: C. Kim and J. Hwang. An integrated scheme for object-based video abstraction. In ACM Multimedia, pages 303–311, New York, 2000.
Collections of short video sequences: A. M. Smith and T. Kanade. Video skimming and characterization through the combination of image and language understanding. In CAIVD, pages 61–70, 1998.
Adaptive fast forward: N. Petrovic, N. Jojic, and T. Huang. Adaptive video fast forward. Multimedia Tools and Applications, 26(3):327–344, August 2005.
In these methods, entire frames are used as the fundamental building blocks.
Mosaic images together with some meta-data for video indexing: M. Irani, P. Anandan, J. Bergen, R. Kumar, and S. Hsu. Efficient representations of video sequences and their applications. Signal Processing: Image Communication, 8(4):327–351, 1996.
Space-time video montage: H. Kang, Y. Matsushita, X. Tang, and X. Chen. Space-time video montage. In CVPR'06, pages 1331–1338, New York, June 2006.
35
Object-Based Video Summary: We proposed an object/event-based summary, as opposed to a frame-based summary. This enables shortening a very long video into a very short one without fast-forwarding objects (their dynamics are preserved); causality is not necessarily kept.
36
Video Synopsis: browse hours in minutes and index back to the original video. Original video: 24 hours; video synopsis: 1 minute.
37
Video Synopsis: Shift Objects in Time. [Figure: objects from the input video I(x,y,t) are shifted along the time axis to form the synopsis video S(x,y,t)]
38
How does Video Synopsis work? Objects are extracted to a database (for example, objects observed at 10:00, 09:03, 11:08, 14:38, 18:45, 21:50). Original: 9 hours; video synopsis: 30 seconds.
39
How does Video Synopsis work? Original: 9 hours; video synopsis: 30 seconds.
40
Surveillance Cameras: 24 hours in 20 seconds; 9 hours in 10 seconds.
41
Steps in Video Synopsis: detect and track objects and store them in a database; select relevant objects from the database; display the selected objects in a very short video synopsis, in which objects from different times can appear simultaneously; index from the selected objects back into the original video; cluster similar objects.
42
Object "Packing": compute object trajectories, pack the objects into a shorter time span while minimizing overlap, and overlay the packed objects on a time-lapse background (a minimal sketch follows). [Figure: input video and synopsis video shown in the x-t plane]
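As a rough illustration only (the tube representation, the names, and the greedy first-fit strategy are assumptions, not the authors' implementation), a packer that assigns each tube the earliest synopsis start time at which its bounding boxes do not collide with tubes already placed:

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test for boxes given as (x0, y0, x1, y1)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def pack_tubes(tubes, synopsis_len):
    """Greedily map each tube to a start frame in the synopsis.

    tubes: list of tubes, each a list of per-frame bounding boxes.
    Returns {tube_index: start_frame}; collisions are avoided when possible.
    """
    placed = {}                                   # tube index -> start frame
    for i, tube in enumerate(tubes):
        for start in range(max(1, synopsis_len - len(tube) + 1)):
            collision = False
            for j, s_j in placed.items():
                other = tubes[j]
                # frames where the two shifted tubes coexist in the synopsis
                lo = max(start, s_j)
                hi = min(start + len(tube), s_j + len(other))
                for t in range(lo, hi):
                    if boxes_overlap(tube[t - start], other[t - s_j]):
                        collision = True
                        break
                if collision:
                    break
            if not collision:
                break                             # first collision-free slot
        placed[i] = start
    return placed
```

The energy-minimization formulation later in the talk replaces this greedy choice with a global cost over all tubes.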
43
Example: Monitoring a Coffee Station
44
[Figure: x-t space-time view of the coffee-station sequence]
45
Original Movie Stroboscopic Movie
46
Panoramic Synopsis: a panoramic synopsis is possible when the camera is rotating (original video vs. panoramic video synopsis).
47
Endless Video: Challenges. Endless video but finite storage (events must eventually be "forgotten"); the background changes over long time periods; objects must be stitched onto a background from a different time; user queries require a fast response.
48
Two-Phase Approach. Online monitoring (real time): compute the background (background model), find activity tubes and insert them into the database, and manage a queue of objects. Query service: collect tubes with the desired properties (time, etc.), generate a time-lapse background, pack the tubes into the desired synopsis length, and stitch the objects onto the background.
50
Extract Tubes: Object Detection and Tracking. We used a simplification of Background Cut*, combining background subtraction with a min-cut; connected space-time components form the tubes, followed by morphological operations. (A rough stand-in sketch follows.) * J. Sun, W. Zhang, X. Tang, and H. Shum. Background cut. In ECCV, pages 628–641, 2006.
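This is not the Background Cut simplification used by the authors; as an illustration, a plain OpenCV background-subtraction pipeline (all parameter values are assumptions) that produces per-frame foreground masks; linking the per-frame blobs over time into space-time tubes is omitted:

```python
import cv2

def extract_masks(video_path):
    """Per-frame foreground masks from background subtraction + morphology."""
    cap = cv2.VideoCapture(video_path)
    bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                            detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = bg.apply(frame)
        fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow labels
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)       # remove specks
        fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)      # fill small holes
        n, labels = cv2.connectedComponents(fg)                 # per-frame blobs
        masks.append((fg, labels))
    cap.release()
    return masks
```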
51
Extract Tubes
52
The Object Queue: With limited storage and endless video, objects may need to be discarded. We estimate each object's usefulness for future queries using its "importance" (application dependent), its collision potential, and its age (older objects are discarded first), and we take mistakes into account. (A toy eviction sketch follows.)
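A toy sketch of such a queue; the scoring weights and item fields are purely illustrative assumptions, not the authors' policy:

```python
import time

class ObjectQueue:
    """Keeps at most `capacity` activity tubes, evicting the least useful."""

    def __init__(self, capacity, w_importance=1.0, w_collision=0.5, w_age=0.1):
        self.capacity = capacity
        self.w_imp, self.w_col, self.w_age = w_importance, w_collision, w_age
        self.items = []   # dicts: {"tube", "importance", "collision", "t0"}

    def usefulness(self, item):
        age_hours = (time.time() - item["t0"]) / 3600.0
        return (self.w_imp * item["importance"]
                - self.w_col * item["collision"]   # crowded tubes cost more
                - self.w_age * age_hours)          # older tubes score lower

    def insert(self, item):
        self.items.append(item)
        if len(self.items) > self.capacity:
            worst = min(self.items, key=self.usefulness)
            self.items.remove(worst)               # evict least useful tube
```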
53
Two-Phase Approach: Query Service. The online monitoring phase (real time) also includes pre-processing to remove stationary frames, and the background model is a temporal median; activity tubes are found, inserted into the database, and held in the object queue. The query service then collects tubes with the desired properties (time, etc.), generates a time-lapse background, packs the tubes into the desired synopsis length, and stitches the objects onto the background.
54
Time-Lapse Background
55
Time-lapse background goals: represent background changes over time, and represent the background of the selected activity tubes. For example, the activity distribution over 24 hours in a parking lot leads to about 20% night frames in the time-lapse background. (A small sampling sketch follows.)
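A minimal sketch, assuming per-frame activity counts are available; the half-uniform/half-activity split is an assumption used only to illustrate balancing the two goals above:

```python
import numpy as np

def sample_background_frames(activity_per_frame, n_frames):
    """Pick frame indices for the time-lapse background.

    Half are spread uniformly over time (to show day/night changes),
    half are drawn according to activity density (so busy periods
    contribute the backgrounds their tubes will be stitched onto).
    """
    total = len(activity_per_frame)
    n_uniform = n_frames // 2
    uniform = np.linspace(0, total - 1, n_uniform).astype(int)

    weights = np.asarray(activity_per_frame, dtype=float) + 1e-9
    weights /= weights.sum()
    by_activity = np.random.choice(total, size=n_frames - n_uniform,
                                   replace=False, p=weights)
    return np.sort(np.concatenate([uniform, by_activity]))
```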
56
Tube Selection. Guidelines for the tube arrangement: maximize the "activity" shown in the synopsis, minimize collisions between objects, and preserve causality (temporal consistency). This defines an energy-minimization process over a time mapping between the input tubes and their appearance times in the output synopsis.
57
The Energy Minimization Problem combines an activity cost (favoring a synopsis video with maximal activity), a collision cost (favoring minimal collision between tubes), and a temporal-consistency cost (favoring a synopsis that preserves the original order of events).
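In the spirit of the published video-synopsis formulation (the notation and weights below are a hedged reconstruction, not quoted from the slides), the objective over a temporal mapping M that sends each tube b to its time-shifted copy b̂ can be written as:

```latex
E(M) \;=\; \sum_{b \in B} E_a(\hat{b})
\;+\; \sum_{b,\, b' \in B} \Big( \alpha\, E_t(\hat{b}, \hat{b}') \;+\; \beta\, E_c(\hat{b}, \hat{b}') \Big)
```

Here E_a penalizes activity that is left out of the synopsis, E_c penalizes space-time overlap between shifted tubes, E_t penalizes violations of the original temporal order, and α, β weight the terms.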
58
Tube Selection as Energy Minimization: each state is a temporal mapping of tubes into the synopsis; neighboring states are states in which a single activity tube changes its mapping; the initial state shifts all tubes to the beginning of the synopsis video. (A simulated-annealing-style sketch follows.)
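A compact sketch of optimizing this state space with simulated annealing; the energy callback, the move set, and the cooling schedule are illustrative assumptions, since the slides only specify the states, the neighbors, and the initial state:

```python
import math
import random

def optimize_mapping(tubes, synopsis_len, energy, iters=20000, t0=1.0, t1=0.01):
    """Search over temporal mappings {tube index -> start frame}.

    `energy(mapping)` should combine activity, collision, and
    temporal-consistency costs; smaller is better.
    """
    mapping = {i: 0 for i in range(len(tubes))}   # all tubes start at frame 0
    best, best_e = dict(mapping), energy(mapping)
    cur_e = best_e
    for k in range(iters):
        temp = t0 * (t1 / t0) ** (k / iters)      # geometric cooling
        cand = dict(mapping)
        i = random.randrange(len(tubes))          # change one tube's mapping
        cand[i] = random.randrange(max(1, synopsis_len - len(tubes[i]) + 1))
        e = energy(cand)
        if e < cur_e or random.random() < math.exp((cur_e - e) / temp):
            mapping, cur_e = cand, e              # accept move
            if e < best_e:
                best, best_e = dict(cand), e
    return best
```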
59
Stitching the Synopsis. Challenge: the lighting of the objects and of the background may differ. Assumption: the extracted tubes are surrounded by background pixels. Our stitching method is a modification of Poisson editing that adds a weight pulling the object toward its original colors.
60
Stitching the Synopsis. Challenge: objects are stitched onto a time-lapse background with possibly different lighting conditions (for example, day vs. night). Assumption: there is no accurate segmentation; tubes are extracted together with surrounding background pixels. Our stitching method is a modification of Poisson editing, with an added weight that keeps the object close to its original color (a sketch follows).
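Not the authors' exact formulation; as an illustration only (the weight w, the Jacobi iteration, and the wrap-around border handling via np.roll are simplifying assumptions), a screened-Poisson blend that matches the object's gradients while a weight pulls the result back toward the object's original colors:

```python
import numpy as np

def stitch_screened_poisson(bg, obj, mask, w=0.05, iters=500):
    """Blend `obj` into `bg` over boolean `mask` (same H x W [x C] shapes).

    Jacobi iterations of  (4 + w) f_p - sum_q f_q = lap(g)_p + w g_p,
    i.e. Poisson editing plus a screening term that keeps object colors.
    Pixels outside the mask stay equal to the background and act as the
    Dirichlet boundary.
    """
    f = bg.astype(np.float64).copy()
    g = obj.astype(np.float64)
    m = mask.astype(bool)
    lap_g = (4 * g
             - np.roll(g, 1, 0) - np.roll(g, -1, 0)
             - np.roll(g, 1, 1) - np.roll(g, -1, 1))   # guidance Laplacian
    for _ in range(iters):
        nb_sum = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                  + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f_new = (nb_sum + lap_g + w * g) / (4.0 + w)
        f[m] = f_new[m]              # update only pixels inside the tube mask
    return np.clip(f, 0, 255).astype(np.uint8)
```

With w = 0 this reduces to plain Poisson editing; larger w keeps more of the object's original appearance, which is the behavior the slide describes.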
61
Stitching the Synopsis
63
Webcam Synopsis Example: a webcam in a billiard hall. Typical webcam stream: 13 hours. Webcam synopsis: 13 hours in 10 seconds, or 13 hours in 2:30 minutes when keeping all objects.
64
Webcam in a Parking Lot. Typical webcam stream: 24 hours; webcam synopsis: 20 seconds.
65
Video Indexing. Webcam synopsis: 20 seconds. Each object in the synopsis links back to its original video context, so the synopsis can be used for video indexing.
67
Video Indexing: Hotspots on Tracked Objects. Clicking a hotspot links from the synopsis back to the original video context.
69
Unexpected Applications: Who soiled my lawn? (2 hours condensed to 20 seconds.)
70
Examples
72
Video Synopsis Should be More Organized
73
Clustered Synopsis: faster and more accurate browsing. Example: cluster the objects into two clusters based on shape (cars vs. people), then continue examining the 'Cars' cluster.
74
Clustering the 'Cars' class by motion (enter, exit, up hill, right) makes the synopsis useful even in crowded scenes.
75
Features of Activity Tubes (moving objects). Appearance feature: 200 randomly selected SIFT features inside the tube. Motion feature: the smoothed trajectory of the tube center.
76
Appearance (Shape) Distance Between Objects: the symmetric average nearest-neighbor distance between the SIFT descriptors of the two tubes (descriptor k of tube i is matched to its closest descriptor in tube j, and vice versa), following O. Boiman, E. Shechtman and M. Irani, In Defense of Nearest-Neighbor Based Image Classification, CVPR, June 2008. (A sketch follows.)
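A short sketch of this symmetric average nearest-neighbor distance; the use of a k-d tree is an implementation convenience, not something stated on the slide:

```python
import numpy as np
from scipy.spatial import cKDTree

def appearance_distance(desc_i, desc_j):
    """Symmetric average nearest-neighbor distance between two tubes'
    SIFT descriptor sets (arrays of shape (N_i, 128) and (N_j, 128))."""
    d_ij, _ = cKDTree(desc_j).query(desc_i)   # each descriptor of i -> nearest in j
    d_ji, _ = cKDTree(desc_i).query(desc_j)   # and vice versa
    return 0.5 * (d_ij.mean() + d_ji.mean())
```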
77
Spectral Clustering by Appearance: example clusters 1, 2, 3 and 4.
78
Spectral Clustering by Appearance: with more classes, false-alarm classes (for example, the gate and the trees) are easy to remove.
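An illustrative sketch (the Gaussian affinity, the bandwidth heuristic, and the cluster count are assumptions) of turning pairwise appearance distances into an affinity matrix and running spectral clustering:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_tubes(distance_matrix, n_clusters=4, sigma=None):
    """Spectral clustering of tubes from a pairwise distance matrix."""
    D = np.asarray(distance_matrix, dtype=float)
    if sigma is None:
        sigma = np.median(D[D > 0])            # simple bandwidth heuristic
    affinity = np.exp(-(D ** 2) / (2 * sigma ** 2))
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              assign_labels="kmeans").fit_predict(affinity)
```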
79
Object Distance by Motion-Trajectory Similarity: compute the minimum area between the two trajectories over all temporal shifts (efficiently, using nearest neighbors and k-d trees), with a weight encouraging a long temporal overlap between the tubes. (A sketch follows.) [Figure: two space-time trajectories and their common time span]
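A simplified sketch (the exhaustive shift loop, the overlap weighting, and the minimum-overlap threshold are assumptions) that approximates the area between two center trajectories at every temporal shift and keeps the best overlap-weighted score:

```python
import numpy as np

def trajectory_distance(traj_a, traj_b, min_overlap=10):
    """Motion distance between two tubes given their (x, y) center trajectories.

    For each temporal shift, the mean point-to-point distance over the
    overlapping frames approximates the area between the trajectories;
    dividing by the overlap length favors long temporal overlaps.
    """
    a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    best = np.inf
    for shift in range(-(len(b) - min_overlap), len(a) - min_overlap + 1):
        lo, hi = max(0, shift), min(len(a), shift + len(b))
        if hi - lo < min_overlap:
            continue
        seg_a = a[lo:hi]
        seg_b = b[lo - shift:hi - shift]
        mean_dist = np.linalg.norm(seg_a - seg_b, axis=1).mean()
        best = min(best, mean_dist / (hi - lo))   # long overlaps score better
    return best
```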
80
Spectral Clustering of the 'Cars' Class by Motion: exit, enter, up hill, right.
81
Creating the Synopsis Video. Goals: a video synopsis of the shortest duration with minimal collision between objects. A playing time is assigned to each object by clustering the objects based on a packing cost, assigning a play time to each object within its cluster, and then assigning a play time to each cluster.
82
Creating the Video Synopsis. Goals: the shortest duration with minimal collision between objects. Approach: display clustered objects together, packing objects in space-time like sardines.
83
Packing Cost: how efficiently the activities are packed together (to create short summaries). It uses the motion distance, adds a collision cost between tubes, and takes the minimum over all temporal shifts. (A sketch of the collision term follows.)
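An illustrative sketch (the box representation, the weight lam, and the way the terms are combined are assumptions) of a collision cost between two shifted tubes and a resulting packing cost; it reuses boxes_overlap from the packing sketch and trajectory_distance from the motion-distance sketch above:

```python
def collision_cost(tube_a, tube_b, shift):
    """Count frame-wise bounding-box collisions when tube_b is shifted."""
    cost = 0
    for t in range(len(tube_a)):
        tb = t - shift
        if 0 <= tb < len(tube_b) and boxes_overlap(tube_a[t], tube_b[tb]):
            cost += 1
    return cost

def packing_cost(tube_a, traj_a, tube_b, traj_b, lam=1.0):
    """Motion distance plus the collision cost minimized over all shifts."""
    best = float("inf")
    for shift in range(-len(tube_b) + 1, len(tube_a)):
        best = min(best, lam * collision_cost(tube_a, tube_b, shift))
    return trajectory_distance(traj_a, traj_b) + best
```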
84
Packing Cost Example: packing cars on the top road. [Figure: affinity matrix after clustering; arranged cluster 1 and arranged cluster 2]
85
Combining Different Packed Clusters: similar to combining single objects, but the clustered objects are moved together. For quick computation, k-d trees estimate the distance between each tube of a cluster, at each shift, and its nearest neighbor (in location) among the already inserted tubes.
86
Combining Two Clusters: low collision cost between classes vs. high collision cost between classes.
87
Training and Testing a Supervised Classifier. Supervised classifiers require a large number of tagged samples, so the clustered summary is used to build the training set: unsupervised clustering provides the initial tagged clusters (using nearest-neighbor samples), the training-set errors are cleaned interactively, the result is fed to a classifier (for example, an SVM), and the classification results are viewed instantly. (A sketch follows.)
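A minimal sketch (the feature vectors and SVM parameters are assumptions) of turning cleaned cluster labels into a training set and fitting an SVM:

```python
import numpy as np
from sklearn.svm import SVC

def train_from_clusters(tube_features, cluster_labels, keep_mask):
    """Fit an SVM on cluster-derived labels.

    tube_features:  (N, D) feature vectors, one per tube
    cluster_labels: (N,) labels from unsupervised clustering
    keep_mask:      (N,) booleans after interactive cleaning of errors
    """
    X = np.asarray(tube_features)[keep_mask]
    y = np.asarray(cluster_labels)[keep_mask]
    return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)

# New tubes can then be classified instantly:
#   labels = clf.predict(new_tube_features)
```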
88
An Important Application: Displaying the Results of Video Analytics. Display the hundreds of detected "blue cars", or the thousands of people going left. Good both for verifying the algorithm and for deployment.
89
Two Clusters: Cars and People (camera in St. Petersburg). Detect specific events and discover activity patterns.
90
Two Clusters: Cars and People (camera in China).
91
Automatically Generated Clusters Using Only Shape & Motion: people left, people right, cars left, cars right, cars parking, people miscellaneous.