Content-based Video Indexing and Retrieval


1 Content-based Video Indexing and Retrieval

2 Motivation There has been tremendous growth in the amount of digital video data in recent years, but a lack of tools to classify and retrieve video content. A gap exists between low-level features and high-level semantic content. Enabling machines to understand video is important and challenging.

3 Motivation Necessity of a Video Database Management System
Increase in the amount of video data captured. Need for an efficient way to handle multimedia data. Traditional databases vs. video databases: a traditional database has the tuple as its basic unit of data, while a video database has the shot as its basic unit of data.

4 Video Management Video consists of text, audio, and images, all of which change over time.

5 Video Data Management Metadata-based method Text-based method
Audio-based method Content-based method Integrated approach

6 Metadata-based Method
Video is indexed and retrieved based on structured metadata information using a traditional DBMS. Examples of metadata are the title, author, producer, director, date, and type of video.

7 Text-based Method Video is indexed and retrieved based on associated subtitles (text) using traditional IR techniques for text documents. Transcripts and subtitles already exist in many types of video, such as news and movies, eliminating the need for manual annotation.

8 Text-based Method The basic method is to use human annotation
Annotation can be done automatically where subtitles / transcriptions exist (e.g., the BBC's target of subtitling 100% of its output by 2008). Speech recognition can be applied to archive material.

9 Text-based Method Keyword search based on subtitles. Content-based search.
Live demo:

10 Text-based Method

11 Audio-based Method Video is indexed and retrieved based on associated soundtracks using the methods for audio indexing and retrieval. Speech recognition is applied if necessary.

12 Content-based Method There are two approaches for content-based video retrieval: treat the video as a collection of images, or divide video sequences into groups of similar frames.

13 Integrated Approach Two or more of the above techniques are used in combination to provide more flexibility in video retrieval.

14 Video Data Management
Video Parsing: manipulation of the whole video to break it down into key frames. Video Indexing: retrieving information about the frames for indexing in a database. Video Retrieval and Browsing: users access the database through queries or through interactions.

15 Video Parsing Scene: a single dramatic event taken by a small number of related cameras. Shot: a sequence taken by a single camera. Frame: a still image.

16 Video Parsing Detection and identification of meaningful segments of video:
Video → (obvious cuts) → Scenes → (shot boundary analysis) → Shots → (key frame analysis) → Frames

17 Video Parsing

18 System overview

19 Video Shot Definition A shot is a contiguous recording of one or more video frames depicting a contiguous action in time and space. During a shot, the camera may remain fixed, or may exhibit such motions as panning, tilting, zooming, tracking, etc.

20 Video Shot Detection Segmentation is the process of dividing a video sequence into shots. Consecutive frames on either side of a camera break generally display a significant quantitative change in content. We need a suitable quantitative measure that captures the difference between two frames.

21 Video Shot Detection Use of pixel differences: tend to be very sensitive to camera motion and minor illumination changes. Global histogram comparisons: produce relatively accurate results compared to others. Local histogram comparisons: produce the most accurate results compared to others. Use of motion vectors: produce more false positives than histogram-based methods. Use of the DCT coefficients from MPEG files: produce more false positives than histogram-based methods.

22 Shot Boundary Detection
Frame dissimilarity: the normalized color histogram difference is adopted as the measure of dissimilarity, or distance, between two frames:
D(f_i, f_j) = ∑_b |h_i(b) − h_j(b)| / N
where h_i(b) is bin b of the histogram of frame i and N is the total number of pixels in each frame.
Shot dissimilarity: the minimum dissimilarity between any two frames of the two shots:
D(S_i, S_j) = min_{k,l} D(f_i^k, f_j^l)
where f_i^k is frame k of shot i.
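
A minimal sketch of these two measures, assuming frames are decoded as NumPy arrays and using OpenCV for the histograms; the function names and bin count are illustrative, not from the original system:

```python
import cv2
import numpy as np

def frame_distance(frame_i, frame_j, bins=64):
    """Normalized color histogram difference D(f_i, f_j)."""
    n_pixels = frame_i.shape[0] * frame_i.shape[1]
    dist = 0.0
    for ch in range(3):  # one histogram per color channel
        h_i = cv2.calcHist([frame_i], [ch], None, [bins], [0, 256]).ravel()
        h_j = cv2.calcHist([frame_j], [ch], None, [bins], [0, 256]).ravel()
        dist += np.abs(h_i - h_j).sum()
    return dist / n_pixels  # normalize by frame size

def shot_distance(shot_i, shot_j):
    """D(S_i, S_j): minimum frame distance over all frame pairs."""
    return min(frame_distance(f_i, f_j) for f_i in shot_i for f_j in shot_j)
```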

23 Shot boundary detection
Goal: split video into meaningful segments. Traditional methods look at inter-frame differences. Common problems: gradual changes and rapid motion. Our solution, inspired by Pye et al. and Zhang et al., uses a moving average over a greater range of frames.

24 Shot boundary detection
At each frame, compute four distance measures (d2, d4, d8, d16) across ranges of 2, 4, 8, and 16 frames respectively. Coincident peaks indicate shot boundaries. The d4 difference is used to find transition start/end times.
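
The multi-range idea might be realized as follows; the threshold value and the peak test (all four measures exceeding it simultaneously) are simplifying assumptions for illustration:

```python
import numpy as np

def multi_range_boundaries(frames, dist, threshold=0.5):
    """frames: list of decoded frames; dist: a frame distance function."""
    n = len(frames)
    spans = (2, 4, 8, 16)
    d = {s: np.zeros(n) for s in spans}
    for t in range(n):
        for s in spans:
            if t >= s:  # distance across a span of s frames
                d[s][t] = dist(frames[t - s], frames[t])
    # declare a boundary where all four measures peak together;
    # start at 16 so every span is defined
    return [t for t in range(16, n)
            if all(d[s][t] > threshold for s in spans)]
```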

25 SBD examples [Figures: a cut and a gradual transition]

26 Video Indexing and Retrieval
Based on representative frames Based on motion information Based on objects

27 Representative Frames
The most common way of creating a shot index is to use a representative frame to represent each shot. Features of this frame are extracted and indexed based on color, shape, and texture (as in image retrieval).

28 Representative Frames
If shots are quite static, any frame within the shot can be used as a representative. Otherwise, more effective methods should be used to select the representative frame.

29 Representative Frames
Two issues: how many frames to select from each shot, and how to select these frames.

30 Representative Frames
How many frames per shot? Three methods:
1. One frame per shot: does not consider shot length or content changes.
2. A number of representatives that depends on the length of the shot: content is still not handled properly.
3. Divide shots into subshots and select one representative frame from each subshot: both length and content are taken into account.

31 Representative Frames
Now we know the number of representative frames per shot. The next step is to determine HOW to select these frames.

32 Representative Frames
Definition: a SEGMENT is a shot, a subshot, or one second of video.

33 Representative Frames
Method I The first frame of the segment is selected. This is based on the observation that a segment is usually described by its first few frames.

34 Representative Frames
Method II An average frame is defined so that each pixel in this frame is the average of pixel values at the same grid point in all the frames of the segment. Then the frame within the segment that is most similar to the average frame is selected as the representative.
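
A short sketch of Method II, assuming frames of equal size stored as NumPy arrays:

```python
import numpy as np

def representative_by_average_frame(frames):
    """Pick the frame closest (in pixel space) to the per-pixel average."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    avg = stack.mean(axis=0)                       # the average frame
    errors = [np.abs(f - avg).sum() for f in stack]
    return frames[int(np.argmin(errors))]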

35 Representative Frames
Method III The histograms of all the frames in the segment are averaged. The frame whose histogram is closest to this average histogram is selected as the representative frame of the segment.
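
Method III, sketched with grayscale histograms (the choice of grayscale and the bin count are assumptions):

```python
import cv2
import numpy as np

def representative_by_average_histogram(frames, bins=64):
    """Pick the frame whose histogram is closest to the average histogram."""
    hists = []
    for f in frames:
        gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
        hists.append(cv2.calcHist([gray], [0], None, [bins], [0, 256]).ravel())
    avg_hist = np.mean(hists, axis=0)
    errors = [np.abs(h - avg_hist).sum() for h in hists]
    return frames[int(np.argmin(errors))]
```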

36 Representative Frames
Method IV Each frame is divided into background and foreground objects. A large background is then constructed from the background of all frames, and then the main foreground objects of all frames are superimposed onto the constructed background.

37 Foreground and Background Variance Method
Overview: Videos are divided into categories along with their shots. We calculate the Foreground Variance, Background Variance, and Average Color of each shot and store them in the database. Shots are retrieved by comparing these Foreground Variance, Background Variance, and Average Color values. What are background and foreground? The background is the area outside the primary object; the foreground is the area where the primary object can be found.

38 Foreground and Background Variance Method
Choosing foreground and background: W = C × (1/10). [Diagram: a frame of dimension C with border strips of width W marking the foreground/background split.]
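
One plausible reading of the W = C/10 rule, with the border strip as background and the central region as foreground; this split is an assumption based on the diagram, not a quoted specification:

```python
import numpy as np

def split_foreground_background(frame):
    """Split a frame into foreground (center) and background (border) pixels."""
    h, w = frame.shape[:2]
    bw, bh = w // 10, h // 10                 # border width, one tenth per side
    mask = np.zeros((h, w), dtype=bool)
    mask[bh:h - bh, bw:w - bw] = True         # central region = foreground
    foreground = frame[mask]                  # pixels of the primary-object area
    background = frame[~mask]                 # pixels of the border area
    return foreground, background
```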

39 Foreground and Background Variance Method
Actual Method: steps for calculating Foreground Variance values. Take each pixel of the Foreground area and access its individual Red, Green, and Blue values. Calculate Average Red, Average Green, and Average Blue color values for the Foreground. Repeat the above process for all the frames of the shot. Using these Foreground averages from all the frames of a shot, we calculate the Variance of Red, Green, and Blue.

40 Foreground and Background Variance Method
Actual Method: steps for calculating Foreground Variance values. The formula for calculating the Variance of Red for the Foreground is VFgRed = ∑ (Xᵢ − Mean)² / (N − 1), where Xᵢ are the average Red values of the Foreground in each frame and N is the total number of frames. The same process is repeated for Green and Blue to find VFgGreen and VFgBlue. Along the same lines, we find the Background Variance values VBgRed, VBgGreen, and VBgBlue.
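
The variance computation might look like this in Python; `ddof=1` gives the N − 1 denominator from the formula above:

```python
import numpy as np

def foreground_variance(shot_foregrounds):
    """shot_foregrounds: per-frame arrays of foreground RGB pixels."""
    # X_i: average red/green/blue of the foreground in frame i
    means = np.array([fg.reshape(-1, 3).mean(axis=0)
                      for fg in shot_foregrounds])
    # sample variance with N-1 in the denominator, one value per channel,
    # i.e. (VFgRed, VFgGreen, VFgBlue) in the channel order of the frames
    return means.var(axis=0, ddof=1)
```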

41 Foreground and Background Variance Method
Actual Method: steps for calculating Average Color values. Access each pixel of each frame and read its individual color values. Sum all the individual Red, Green, and Blue values of the pixels separately. To calculate the Average Red color for one frame, divide the sum of all the red pixel values by the total number of pixels in the frame. To calculate the Average Red value for the entire shot, divide the sum of the per-frame Red values by the total number of frames. This gives AvgRed.

42 Foreground and Background Variance Method
Actual Method Steps for calculating Average Color values. Similarly we calculate the AvgGreen, and AvgBlue values for the entire shot. We have a total of nine different variables and we store these values in the Database. For retrieving similar shots, we compare the above nine values in the Database to the corresponding values of the query shot.
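
Putting the nine values together as a shot descriptor (a sketch; the storage layout is assumed, and `foreground_variance` is the function from the sketch above):

```python
import numpy as np

def shot_descriptor(foregrounds, backgrounds, frames):
    """Nine-value descriptor: fg variances, bg variances, average color."""
    fg_var = foreground_variance(foregrounds)   # (VFgRed, VFgGreen, VFgBlue)
    bg_var = foreground_variance(backgrounds)   # same formula on bg pixels
    # average color: per-frame mean over all pixels, then mean over frames
    frame_means = np.array([f.reshape(-1, 3).mean(axis=0) for f in frames])
    avg_color = frame_means.mean(axis=0)        # (AvgRed, AvgGreen, AvgBlue)
    return np.concatenate([fg_var, bg_var, avg_color])  # nine values to store
```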

43 Foreground and Background Variance Method
Actual Method: We compare the Foreground, Background, and Average Color values using the formula Ri = √((Δ1 − δ1)² + (Δ2 − δ2)² + (Δ3 − δ3)²), where Δ1, Δ2, and Δ3 are database values and δ1, δ2, and δ3 are query shot values. We add up the Ri values from comparing the Foreground, Background, and Average Color values. If that sum is less than 100, the shot is retrieved and displayed. The shots are displayed in increasing order of their distance from the query shot (closest first).
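
The matching rule, sketched with the fixed threshold of 100 from the slide:

```python
import numpy as np

def match_score(db_desc, query_desc):
    """Both descriptors are nine-value vectors (fg var, bg var, avg color)."""
    total = 0.0
    for start in (0, 3, 6):                    # one R_i per triple of values
        delta = db_desc[start:start + 3] - query_desc[start:start + 3]
        total += np.sqrt((delta ** 2).sum())   # Euclidean distance per triple
    return total

def retrieve(database, query_desc, threshold=100.0):
    """database: iterable of (shot_id, descriptor) pairs."""
    scored = [(match_score(desc, query_desc), shot_id)
              for shot_id, desc in database]
    # keep shots whose summed distance is under the threshold, closest first
    return sorted(pair for pair in scored if pair[0] < threshold)
```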

44 Motion Information Motivation
Indexing and retrieval based on representative frames ignores the motion information contained in a video segment.

45 Motion Information The following parameters are used: Motion content
Motion uniformity Motion panning Motion tilting

46 Motion Information (content)
This is a measure of the total amount of motion within a given video; it measures the action content of the video. For example, a video of a talking person has very low motion content, while a violent car explosion typically has high motion content.

47 Motion Information (uniformity)
This is a measure of the smoothness of the motion within a video as a function of time.

48 Motion Information (panning)
This measure captures the panning motion (left to right, right to left motion of a camera).

49 Motion Information (tilting)
This is a measure of the vertical component of the motion within a video. Panning shots have a lower value than videos with a large amount of vertical motion.
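
One way to realize these four measures from dense optical flow (OpenCV's Farneback estimator); the exact definitions vary from system to system, so treat these formulas as illustrative:

```python
import cv2
import numpy as np

def motion_measures(gray_frames):
    """Compute content, uniformity, panning, and tilting for a clip."""
    mags, pans, tilts = [], [], []
    for prev, cur in zip(gray_frames, gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())  # amount of motion
        pans.append(np.abs(flow[..., 0]).mean())          # horizontal component
        tilts.append(np.abs(flow[..., 1]).mean())         # vertical component
    return {
        "content": float(np.mean(mags)),     # total amount of motion
        "uniformity": float(np.std(mags)),   # low value = smooth motion in time
        "panning": float(np.mean(pans)),
        "tilting": float(np.mean(tilts)),
    }
```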

50 Motion Information The above measures are associated either with the entire video or with each shot of the video.

51 Object-based Retrieval
Motivation The major drawback of shot-based video indexing is that while the shot is the smallest unit in the video sequence, it does not lend itself directly to content-based representation.

52 Object-based Retrieval
Any given scene is a complex collection of parts or objects. The location and physical qualities of each object, as well as its interactions with the others, define the content of the scene. Object-based techniques try to identify objects and the relationships among them.

53 Object-based Retrieval
In a still image, object segmentation and identification are normally difficult tasks. In a video sequence, however, an object moves as a whole, so we can group pixels that move together into an object. Object segmentation based on this idea is quite accurate.
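
A hedged sketch of "group pixels that move together": cluster dense optical-flow vectors and treat each cluster as a candidate object region. The use of k-means here, and the number of clusters, are assumptions for illustration:

```python
import cv2
import numpy as np

def motion_segments(prev_gray, cur_gray, k=2):
    """Label each pixel by the motion cluster it belongs to."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vectors = flow.reshape(-1, 2).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 0.5)
    # cluster the per-pixel motion vectors into k groups
    _, labels, _ = cv2.kmeans(vectors, k, None, criteria, 5,
                              cv2.KMEANS_RANDOM_CENTERS)
    return labels.reshape(flow.shape[:2])  # per-pixel cluster id
```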

54 Object-based Retrieval
Object-based video indexing and retrieval can be performed easily when video is compressed using the MPEG-4 object-based coding standard. An MPEG-4 video is composed of one or more video objects (VOs). A VO consists of one or more video object layers (VOLs).

55 An Architecture for Video Database System
[Architecture diagram, bottom to top: a Raw Video Database of indexed frame sequences with image features; a Physical Object Database built through object identification and tracking, with spatial and temporal abstraction of frames; spatial semantics of objects (human, building, ...) and semantic associations (President, Capitol, ...); object definitions (events/concepts) with inter-object movement analysis and intra/inter-frame motion analysis; at the top, spatio-temporal semantics: formal specification of events/activities/episodes for content-based retrieval.]

56 Conclusion Video indexing and retrieval is very important in multimedia database management. Video contains more information than other media types (text, audio, images). Methods: representative frames, motion information, object-based retrieval.

