Authors: Yael Pritch, Alex Rav-Acha, Shmuel Peleg. Presented by Yossi Maimon.

Paper: Nonchronological Video Synopsis and Indexing. Yael Pritch, Alex Rav-Acha, and Shmuel Peleg. IEEE TPAMI, 2008.

 Today, the amount of captured video is growing dramatically.  Public and private places are surrounded by surveillance cameras.  Public places include airports, museums, government institutions, and so on.

 Each location requires anywhere from several cameras to a few hundred in order to cover the whole site.  Surveillance cameras in public places capture video 24/7; London alone has more than a million surveillance cameras.  As a result, searching for activities from the last few hours or days can itself take hours or days.  This makes the recorded video practically irrelevant.

 Fast forwarding.  Key frames: arbitrary (selecting every X-th frame) or dynamic (the algorithm selects more frames where there is activity).  All of these solutions treat whole frames as the building blocks.

 The idea is to create a video synopsis according to a user query.  The synopsis contains the important data and activities from the raw video.  Different activities from different times are presented simultaneously.  Each activity keeps a pointer to its original time and place in the raw video.

The article describes two approaches to producing a synopsis from the raw video: 1. Low level: a pixel-based approach. 2. High level: an object-based approach.

 The video synopsis should be substantially shorter than the raw video.  Maximum activity/interest from the raw video should appear in the synopsis.  The dynamics of the objects should be preserved in the synopsis.  Visible seams and fragmented objects should be avoided.  Objects are shifted only in time, never in space.

Assume N frames are chosen, 1 ≤ t ≤ N, with (x, y) the pixel's spatial coordinates. M is the temporal mapping, I(x, y, t) is a pixel in the raw video, and S(x, y, t) is a pixel in the synopsis video. Since only time is changed, not space: S(x, y, t) = I(x, y, M(x, y, t)).
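As a minimal sketch of this mapping (NumPy; the toy video, the uniform 2-frame shift, and all variable names are illustrative assumptions), applying S(x, y, t) = I(x, y, M(x, y, t)) is a gather along the time axis:

```python
import numpy as np

# Toy raw video of shape (T, H, W); each pixel's value encodes its frame index.
T, H, W = 6, 2, 2
raw = np.arange(T)[:, None, None] * np.ones((T, H, W), dtype=int)

# M(x, y, t): for each synopsis pixel, which raw frame it comes from.
# Here a uniform shift of 2 frames, clipped to the valid range.
N = 3  # synopsis length
mapping = np.clip(np.arange(N)[:, None, None] + 2, 0, T - 1)
mapping = np.broadcast_to(mapping, (N, H, W))

# S(x, y, t) = I(x, y, M(x, y, t)): gather along the time axis.
synopsis = np.take_along_axis(raw, mapping, axis=0)
```

A real mapping lets each pixel come from a different time; the gather stays the same, only `mapping` changes.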

The time shift M is obtained by minimizing the cost function E(M) = Ea(M) + αEd(M), where Ea measures the loss of activity (pixels that are active in the raw video I but missing from the synopsis S) and Ed measures the discontinuity across seams.

An active pixel is one that differs from the background: χ(x, y, t) = ‖I(x, y, t) − B(x, y, t)‖, where B is the background. The costs Ea(M) and Ed(M) are then reformulated in terms of this activity measure χ.
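A hedged sketch of the activity measure (NumPy; the toy frame, the background, and the threshold value are illustrative assumptions):

```python
import numpy as np

# Toy grayscale frame and static background, values in [0, 255].
frame = np.array([[10.0, 10.0], [10.0, 200.0]])
background = np.full((2, 2), 10.0)

# Activity measure chi: per-pixel deviation from the background.
chi = np.abs(frame - background)

# A pixel counts as "active" when it differs noticeably from the background.
threshold = 25.0
active = chi > threshold

num_active = int(active.sum())
```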

The minimization can be represented as a graph problem:  Each pixel becomes a node, whose weight is derived from the activity cost.  Each pair of neighbors becomes an edge, whose weight is derived from the discontinuity cost. Since each pixel in the synopsis can come from any time in the raw video, the problem has very high complexity.

 Moving to the high-level implementation: objects (tubes) instead of pixels.  The goal is to detect and track objects in the raw video and carry them into the synopsis.  Objects are ranked according to their importance.  Goals: maximum activity, minimum overlap, maximum continuity.

Background:  In short videos the background hardly changes; in surveillance video it does (lighting, objects that become static).  Therefore, in long videos the background should be recalculated every few minutes.  Background subtraction and min-cut are used to segment the foreground objects.

Activity cost: favors a synopsis with maximum activity. It penalizes objects that are not mapped to a valid time in the synopsis. If only some pixels of a tube are mapped, the cost counts only the unmapped pixels.

 Collision cost: for every pair of shifted tubes, their spatio-temporal overlap is computed. The expression gives a low penalty to pixels whose color is similar to the background.

Temporal consistency cost:  Preserves the chronological order of events (e.g., two people talking, or two events with a causal relation).  It is computed from the spatio-temporal distance between tubes.  C is the penalty for pairs of objects whose temporal order is not preserved.

 This energy is minimized to obtain maximum activity while avoiding conflicts and overlap between objects. α and β are user-set weights: reducing β allows objects to overlap, while increasing it yields a sparser video.
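As a much-simplified sketch of the tube energy (an assumption for illustration: tubes are reduced to 1-D time intervals after shifting, the costs below are toy stand-ins for the paper's pixel-level terms, and all names are invented):

```python
# Tubes are (start, end) frame intervals after temporal shifting.

def activity_cost(tubes, synopsis_len):
    """Penalize activity pushed outside the synopsis window [0, synopsis_len)."""
    cost = 0
    for start, end in tubes:
        visible = max(0, min(end, synopsis_len) - max(start, 0))
        cost += (end - start) - visible          # lost (unmapped) frames
    return cost

def collision_cost(tubes):
    """Penalize pairwise temporal overlap between shifted tubes."""
    cost = 0
    for i in range(len(tubes)):
        for j in range(i + 1, len(tubes)):
            (s1, e1), (s2, e2) = tubes[i], tubes[j]
            cost += max(0, min(e1, e2) - max(s1, s2))
    return cost

def total_energy(tubes, synopsis_len, beta=1.0):
    # beta plays the role of the user collision weight from the slide.
    return activity_cost(tubes, synopsis_len) + beta * collision_cost(tubes)

# Two tubes in a 10-frame synopsis: no activity loss, 2 frames of overlap.
E = total_energy([(0, 5), (3, 8)], synopsis_len=10, beta=1.0)
```

The optimizer's job is to choose the shifts that minimize this energy; raising `beta` spreads the tubes apart in time, matching the slide's observation about sparser video.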

 The synopsis length is bounded from below by the longest activity.  Very long activities cannot be shortened by temporal rearrangement alone.  Two options to deal with this: ◦ Display only part of the activity. ◦ Cut the activity into several shorter pieces and present them simultaneously (a stroboscopic effect).

The algorithm lets the user watch a synopsis alongside the raw video (e.g., from surveillance cameras). It is divided into two phases: 1. Online phase: collecting and analyzing the raw video. 2. Response phase: building a synopsis in response to a user query.

 Creating a background video by temporal median.  Object (tube) detection and segmentation.  Inserting detected objects into the object queue.  Removing objects from the object queue when a space limit is reached.
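The first step above, a per-pixel temporal median, can be sketched as follows (NumPy; the toy frames are illustrative):

```python
import numpy as np

# Background estimate as the per-pixel temporal median of sampled frames;
# the median is robust to short-lived foreground objects.
frames = np.stack([
    np.array([[10, 10], [10, 10]]),
    np.array([[10, 99], [10, 10]]),   # brief foreground blob at (0, 1)
    np.array([[10, 10], [10, 10]]),
])
background = np.median(frames, axis=0)
```

Because the blob at (0, 1) appears in only one of the three frames, the median ignores it and recovers the clean background.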

 Constructing a time-lapse video of the changing background.  Selecting tubes for the synopsis video and computing their optimal temporal arrangement.  Stitching the tubes and the background into a coherent video.

 Generating a background video.  Computing the consistency cost for each object and each possible time in the synopsis.  Determining which tubes appear in the synopsis and at what time.  Combining the selected tubes with the background time-lapse to get the final synopsis.

 Removing stationary frames: surveillance cameras have long periods with no activity. Such frames can be filtered out during the online phase by recording only when activity is noticed.  Short activities: activity lasting less than a second is unimportant, so only every 10th frame is examined.
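A sketch of the stationary-frame filter (NumPy; the function name, thresholds, and toy frames are illustrative assumptions): a frame is kept only if enough pixels deviate from the background.

```python
import numpy as np

def is_active_frame(frame, background, pixel_thresh=25.0, min_active=2):
    """Keep a frame only if at least min_active pixels differ noticeably
    from the background; drops long stationary stretches."""
    return int((np.abs(frame - background) > pixel_thresh).sum()) >= min_active

background = np.zeros((2, 2))
frames = [np.zeros((2, 2)),             # stationary: dropped
          np.full((2, 2), 100.0),       # active: kept
          np.zeros((2, 2))]             # stationary: dropped
kept = [f for f in frames if is_active_frame(f, background)]
```

The "every 10th frame" sub-sampling from the slide would simply be `frames[::10]` applied before this filter.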

 For an endless video, not all objects can be queued due to space limits.  The common method is to throw away the oldest object, but that restricts the possible user queries.  Our approach is to discard objects with low importance, combining activity, collision potential, and age.  Thresholds are user-defined (uniform or dynamic), based on object properties such as activity and time.
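A minimal sketch of such a bounded queue (the class, its capacity, and the single scalar "importance" score are illustrative assumptions; the paper combines activity, collision potential, and age into that score):

```python
import heapq

class ObjectQueue:
    """Bounded queue that evicts the least important tube when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []          # min-heap of (importance, insertion order, tube)
        self.counter = 0        # tie-breaker so tubes are never compared

    def push(self, tube, importance):
        heapq.heappush(self.heap, (importance, self.counter, tube))
        self.counter += 1
        if len(self.heap) > self.capacity:
            heapq.heappop(self.heap)    # drop the least important tube

    def tubes(self):
        return {tube for _, _, tube in self.heap}

q = ObjectQueue(capacity=2)
q.push("walker", importance=5.0)
q.push("car", importance=9.0)
q.push("leaf", importance=0.5)   # least important: evicted immediately
```

Unlike a plain FIFO, this keeps high-importance activity available for later queries regardless of its age.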

What should the time-lapse background do?  It should represent the background changes over time (e.g., the day-night transition).  It should represent the background at the times of the activity tubes. Background frames are sampled according to two temporal histograms: Ht, a uniform histogram, and Ha, an activity histogram.

 Assumption: the pixels on an object's border are similar to the background.  We define the cost of stitching an object to the background accordingly.
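One way to sketch such a cost (an assumption for illustration, not the paper's exact formula): sum the color differences between the object's border pixels and the background at the same locations, where the border is a mask pixel with a non-mask 4-neighbour.

```python
import numpy as np

def stitch_cost(obj_patch, background_patch, mask):
    """Sum of |object - background| over the foreground mask's border pixels.
    If the border already matches the background, the seam is invisible."""
    border = np.zeros_like(mask)
    padded = np.pad(mask, 1)                       # pads with False
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        neigh = padded[1 + dy:1 + dy + mask.shape[0],
                       1 + dx:1 + dx + mask.shape[1]]
        border |= mask & ~neigh                    # mask pixel next to non-mask
    return float(np.abs(obj_patch - background_patch)[border].sum())

# Toy 2x2 patch: the right column is foreground, 10 gray levels off background.
mask = np.array([[False, True], [False, True]])
obj = np.array([[0.0, 50.0], [0.0, 50.0]])
bg = np.array([[0.0, 40.0], [0.0, 40.0]])
cost = stitch_cost(obj, bg, mask)
```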

 Stitching all tubes together at once would blend their colors into each other.  However, the boundary of each tube is consistent with the background.  Suggested approach: since the background is the same (up to lighting), each object is stitched to it independently.

 A moving object becomes stationary.  A stationary object starts moving. Problems:  Background objects appear and disappear for no apparent reason.  Moving objects disappear when they stop moving, instead of becoming part of the background.

 The original video frames of active periods are stored together with the object-based queue.  Each selected object carries a time stamp.  Clicking an object directs the user to the corresponding time in the raw video.
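The index itself can be as simple as a mapping from synopsis objects back to raw-video frame ranges (all identifiers and frame numbers below are illustrative assumptions):

```python
# Each synopsis object keeps a pointer back to its time in the raw video.
index = {
    "person_3": {"raw_start_frame": 4210, "raw_end_frame": 4385},
    "car_7": {"raw_start_frame": 9000, "raw_end_frame": 9120},
}

def jump_to_source(object_id):
    """Return the raw-video frame to seek to when the object is clicked."""
    return index[object_id]["raw_start_frame"]

target = jump_to_source("person_3")
```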