Taking Advances in Multimedia Content Analysis to Product: Challenges and Solutions Ajay Divakaran Vision and Multi-Sensor Systems December 1, 2009.

Acknowledgements Mitsubishi Electric – Research Labs USA, Japan Huifang Sun Isao Otsuka, Tokumichi Murakami Regunathan Radhakrishnan, Ziyou Xiong, Lexing Xie, Kadir Peker, Kevin Wilson Many others in the Multimedia Community – Especially all attendees today Harpreet Singh Sawhney 2009 Sarnoff Corp. Copyrighted

Outline Introduction and Motivation Video Browsing Enabled DVD Recorder – A Case Study Future Possibilities

Consumer Video Browsing Summarization Content Indexing

Web-based vs Personal View (Diagram: Personal Content shown within Worldwide Content)

Consumer Video Browsing What is the business model? – Rapid Browsing as yet another feature (Smart FF, REW, etc.)? – Advertisement-based – Is that a contradiction?

Existing Products Browsing enabled DVD and Blu-ray recorders – Mitsubishi – Sports and Music – Sony – Sports and Music – Hitachi – Sports Content-based Editing – Microsoft Web-based “Skins” for Media Players Video Editing Tools for Broadcasters – Not the focus of this talk Mostly limited to locally stored content

Highlights Playback Enabled DVD Recorder: A Case Study (Video: Highlights Playback Enabled DVD Recorder)

A Video-Browsing Enabled Personal Video Recorder Ajay Divakaran

OUTLINE Background and Motivation Basic Concept – Audio Classification Framework – Sports Highlight Extraction – Data Structure of Meta-data Realization on Target Platform Demonstration Further Topics (if time permits) Conclusion

Background: Video Summarization Compact viewable representation of content Past work relies heavily on intuitively gathered domain knowledge Unscripted Content – Sports: based on detection of specific events – Based on video analysis [Ekin, Pan] – Based on audio analysis [Z. Xiong] – Based on closed-caption text analysis [Babaguchi] – Unsupervised structure discovery (play/break) [L. Xie] Unscripted Content – Surveillance – Object segmentation & tracking followed by unusual event detection [E. Chang] Scripted Content – News, Drama, Films, etc. – Focus on generation of a TOC (Table of Contents) – Audio-visual feature extraction for semantic boundary extraction (MCA community) – Table of Contents extraction (e.g. Rui et al., Dimitrova et al., S.-F. Chang et al.)

Scripted and Unscripted Content (Chart: news, drama, sports and surveillance content arranged along two axes, existence of structure and ease of discovery, ranging from scripted to unscripted)

Summarizing “Scripted” and “Unscripted” Content Scripted content (news, drama, movies, etc.): extract semantic units (e.g. news story, scene), then abstract them (e.g. skims, key frames); the summary is a flexible traversal through semantic units using abstractions. Unscripted content (sports and surveillance): detect events (e.g. goals, home runs), then rank them from interesting to uninteresting; the summary is a flexible traversal through ranked events.

Representation for “Scripted” content

Representation for “Unscripted” content : Key audio-visual markers Cheering! Applause!

Background and Motivation The requirement for long-duration recorded content: current PVRs can store up to 200 hours of high-quality video content, and storage capacity (HDD, Blu-ray, etc.) is expected to grow further. It is therefore necessary to develop a technology that can automatically detect highlight scenes and help browse the content: (1) to browse the expected scenes (Highlight Search), and (2) to grasp the entire content quickly (Summarized Playback). We propose to use only audio analysis.

How to detect highlight scenes from contents Interesting events lead to a human reaction consisting of a mixture of cheering and the commentator’s excited speech (“Cheering!”, excited comment). The EXCITED SPEECH part of the content therefore becomes a powerful indicator of highlight scenes. This method works across a wide range of sports.

Basic Concept (Diagram: MPEG-2/AC-3 multimedia data plus meta-data on the storage medium; audio classification of reactions such as “Cheering!”, excited comment (“SHOT!”) and “Applause!” yields an importance level.)

System Block Diagram Current DVD recorder: the video signal passes through the MPEG-2 video encoder and the audio signal through the Dolby AC-3 audio encoder; the packetizer multiplexes the video and audio PES into a program stream, which goes through the write buffer and the ATA/ATAPI interface to the HDD/DVD disk, under control of the MCU. The DVD codec LSI is entirely hard-wired; the audio DSP is programmable. For highlight extraction, the audio DSP additionally performs MDCT feature extraction, audio classification (GMM) and importance-level calculation; the importance level plus time-stamp info (SCR) is stored as meta-data alongside the broadcast video. MDCT: Modified Discrete Cosine Transform. GMM: Gaussian Mixture Model.

Audio Classification Framework Classify the input audio (AC-3) into five audio classes using GMMs: applause, cheering, music, speech, and excited speech (applause or cheering mixed with excited speech from the commentator). The GMM classifiers are trained on MDCT coefficients from sample audio covering a variety of content, and classification is done by comparing the likelihoods of the class models. The MDCT coefficients are frequency-domain data already available inside the AC-3 encoder, so no separate time-domain front end is needed.
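
The framework above trains GMM classifiers on MDCT coefficients and classifies by comparing likelihoods. As a simplified sketch of that idea (a single diagonal Gaussian per class instead of a full mixture, and hypothetical function names, not the product code), classification picks the class whose model best explains a feature frame:

```python
import math

def train_gaussian(frames):
    """Fit one diagonal Gaussian per class: per-dimension mean and variance."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[i] for f in frames) / n for i in range(d)]
    var = [max(sum((f[i] - mean[i]) ** 2 for f in frames) / n, 1e-6)
           for i in range(d)]
    return mean, var

def log_likelihood(frame, model):
    """Log-likelihood of one feature frame under a diagonal Gaussian."""
    mean, var = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(frame, mean, var))

def classify(frame, models):
    """Compare likelihoods: pick the class whose model scores highest."""
    return max(models, key=lambda c: log_likelihood(frame, models[c]))
```

A real GMM replaces each single Gaussian with a weighted mixture, but the decision rule, comparing per-class likelihoods, is the same.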

Sports Highlight Extraction Framework A sliding time window W is moved over the data stream of audio classification results (plotted against playback time). Importance level = (percentage of the significant audio class within the window) × (audio energy). Excited speech is effective as the significant class for sports content.
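
The importance level defined above (fraction of the significant audio class in a sliding window, scaled by audio energy) can be sketched as follows; the function name and the frame-level representation are assumptions for illustration, not the product code:

```python
def importance_levels(labels, energies, significant, window):
    """labels[i]: audio class of frame i; energies[i]: audio energy of frame i.
    Importance of frame t = fraction of `significant` labels in the sliding
    window centered on t, scaled by the frame's audio energy."""
    levels = []
    half = window // 2
    for t in range(len(labels)):
        lo, hi = max(0, t - half), min(len(labels), t + half + 1)
        frac = sum(1 for c in labels[lo:hi] if c == significant) / (hi - lo)
        levels.append(frac * energies[t])
    return levels
```

For sports, `significant` would be the excited-speech class, so sustained commentator excitement with crowd energy produces high importance.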

Realization on Target Platform The importance level is plotted against playback time; a slice level set according to the user’s preference selects the highlight scenes. Highlight Search function: skip automatically to the start point of each highlight scene (instead of skipping there manually). Summarized Playback function: play only the segments whose importance level lies above the slice level.
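
Both playback functions reduce to thresholding the importance curve at the user's slice level. A minimal sketch (hypothetical names) that turns an importance curve into playable highlight segments:

```python
def highlight_segments(levels, slice_level):
    """Return (start, end) frame-index pairs where the importance level
    stays at or above the user-adjustable slice level (end exclusive)."""
    segments, start = [], None
    for i, v in enumerate(levels):
        if v >= slice_level and start is None:
            start = i                      # highlight begins
        elif v < slice_level and start is not None:
            segments.append((start, i))    # highlight ends
            start = None
    if start is not None:                  # curve ends inside a highlight
        segments.append((start, len(levels)))
    return segments
```

Raising the slice level shortens the summary; lowering it plays more of the program, which is exactly the "Importance Level Adjust" key on the prototype remote.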

Proposed New Features (1) Extract highlight scenes automatically. (2) Browse the expected scenes (Highlight Search). (3) Grasp the entire content quickly (Summarized Playback). New remote-control keys for the features: a) Summarized Playback, b) Highlight Scene Skip, c) Highlight Scene Menu, d) Importance Level Adjust. Prototype model developed with an on-screen importance level plot.

Importance Level Plot Result Baseball (RBIs, home run) and horse racing (commentator’s talk, finish; 30 min. clip). We obtained satisfactory accuracy with over 30 games from Japanese and U.S. broadcast sports content.

Mitsubishi ‘Raku-Reco’ [Highlight Playback] (Screenshot: importance level plot with current position, slice level and highlight scenes.) The product, the DVR-HE50W, was released in September.

MELCO DVD Recorder 2006

MELCO Highlights Enabled Cellphone 2007

Highlight Playback for MELCO Cellphone 2006

Highlight Playback for Cellphones

Highlight Playback Blu-Ray Recorders 2008

Performance Evaluation What is a Good Summary? Are there Objective Measures? – Yes, they exist and No, they are not sufficient User Studies will be key – Key issues are Functionalities and their effectiveness – Controlled tests of demonstration setup with groups of non-expert users – Several Iterations will be required – Hope to gauge the usefulness and efficacy of proposed functionalities – Evidence from our past studies shows that such studies are insightful

Experimental Set-Up Interface displayed on Big-Screen TV All subjects were sports aficionados One of us sat with the subject The rest of us observed from a remote hook-up Wide variety of sports tried – Soccer – American Football – Ice Hockey – Baseball – Tennis

Status of MERL Video Summarization MERL techniques well accepted in community – Book published, over 20 papers, 6 invited book chapters, over 30 patents filed, 10 issued – Best Paper awards at ICIP, ICASSP and ICCE – 4 Ph.D.s stemming from research – MERL researchers are members of the program committees of prestigious conferences such as ACM Multimedia and IEEE ICME MERL advantages – Simplicity stemming from compressed-domain computation – Accuracy comparable to best – Strong in both audio and video analysis

General Response Missed Highlights bigger concern than False Highlights Quick recovery from false highlights essential Overall system enjoyable despite its mistakes Users liked: – The Action Map because of the flexibility it affords – Auto-skip and Highlight skip features Suitability to different sports – Best Suited to: – Soccer, Baseball, Hockey

Conclusion Users enjoyed system despite flaws The general consistency in the results indicates validity Our hypothesis that fair accuracy combined with a good interface would work well, was confirmed.

Product Impact Highlight Playback Feature of Mitsubishi DVD Recorder DVR-HE50W for Japanese Market – World’s first DVD Recorder and mainstream PVR to have sports highlights extraction capability – Recommended as best buy in its category by HiVi Magazine – Uniformly positive reviews in Japanese press Tokusengai magazine rates Mitsubishi sports highlights playback as the best

Current work: Genre-independent Smart Skipping Smart Skipping – Adaptive content segmentation at multiple resolutions – Genre-independent “one size fits all” technique – Audio-based to exploit current platform Current approach – Use hand-labeled data to train a support vector machine (SVM) scene-change classifier. – Hand-labeled data comes from several different genres, so the learned classifier works across genres. – Feature representation is compatible with the sports highlights work.

System overview Audio features – MFCCs, semantic classes Video features – shot change locations Labeled ground truth data – 7 hours of content from sitcoms, news programs, dramas, music videos, etc.

Blind Summarization – Content-adaptive summarization Content Specificity enables high summary quality Content-specificity leads to profusion of techniques Difficult to support multiple non-overlapping techniques in practical systems Content-independent techniques are known to compromise quality of summarization Thus the trade-off is between summary quality and content-specificity Our approach is to combine specific and general techniques – Have a large common core of processing – Postpone content-specific processing to as late a stage as possible

Problem formulation Motivated by the observation that “interesting” events in unscripted multimedia happen sparsely in a background of “uninteresting” events. – A burst of audience reaction in the vicinity of “interesting” events in sports audio. – A burst of screaming sound in the vicinity of “suspicious” events in surveillance audio content. Given a time series of observations from a usual background process (P1) and an unusual foreground process (P2), detect the sparse foreground events.

System Summary Developed a system that extracts highlight scenes automatically by using audio features and plays back desired highlight scenes. Presented a reasonably accurate highlight extraction algorithm for sports content by using audio detection of audience reaction. This technology is easy to incorporate into DVD, HDD, and Blu-ray Disc recorders. Improving the audio classification accuracy and extending our framework beyond sports video are avenues for further work.

Challenges A unified, non-heuristic way of summarizing content from diverse genres based on: Enhancement of current audio-only approach Adding Visual Object Detection (goalposts, faces etc.) Fusing Audio-Visual Cues to improve accuracy Extending current framework beyond sports to variety show, dramas, news etc.

Proposed Hierarchical Video Representation Video with audio track → Play / Break (feature extraction & segmentation) → Audio-Visual Markers (key audio-visual object detection) → Highlights (audio-visual marker association) → Highlight Groups (grouping).

Proposed Analysis Framework Audio marker detection (applause, cheers, excited speech) and visual marker detection (baseball catcher, soccer goalpost, golfer bending to hit) run in parallel; the detected markers answer “Which sport is it?”. A-V marker association yields highlight candidates, which are refined into finer-resolution highlights vs. non-highlights (e.g. golf swings and putts; strikes, balls and ball-hits; corner kicks and penalty kicks).

What is a good visual marker? That has strong association with “interesting” events That has low variability

Visual Marker 1: Squatting Baseball Catcher Catchers → pitches → pitches that are followed by hits → hits that lead to runs/home-runs.

Faces as a Visual Feature for Video Analysis Visual features carry much less semantics than audio, BUT faces are: Arguably the most semantically meaningful visual class – humans are the most popular subjects of consumer video Arguably the best-studied visual class for detection – powerful computer vision techniques exist for detection Arguably, MERL has the most effective face detector – the Viola-Jones detector, fast and accurate

MERL Face Detector Viola-Jones face detector – One of the fastest and most accurate – Detectors for ‘frontal face’, ‘right face’, ‘left face’, ‘slanted faces’, … – Face Recognition (MERL technology) can be incorporated in the future – FaceAPI – easy integration and development (we developed a plug-in for a popular video editing software) Same object detection framework for faces, baseball catcher, etc. Can increase speed by changing parameters About 15 fps, can be increased with an accuracy trade-off Only I-frames can be sufficient for content analysis
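
Much of the Viola-Jones detector's speed comes from the integral image, which makes any rectangular (Haar-like) feature sum a four-lookup operation regardless of rectangle size. A minimal sketch of that building block (an illustration of the technique, not MERL's implementation):

```python
def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0, y) x [0, x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of pixels in [y0, y1) x [x0, x1) via four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]
```

A Haar feature is then just a difference of two or three such `rect_sum` calls, which is what lets a cascade of weak classifiers run at video rate.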

Applications – News Video Segmentation First, classify using face count (same as commercial detection); news is mostly 1-face segments, though. Cluster 1-face segments using face location and size – this effectively classifies different scene types: anchor shots appear in one cluster, the weather report in another, outside correspondents in another, etc. Detected 11 story introductions, missed 2. Styles vary; will test across many news video sources. Temporal smoothing and error filtering to generate smooth summaries. In general, applicable to static-scene programs: promising results with talk-show programs (e.g. monologues vs. guest segments), interviews, documentaries, etc.
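
Clustering 1-face segments by face location and size can be sketched with plain k-means over (x-location, size) points; the function and toy data below are illustrative assumptions, not the actual system:

```python
def kmeans(points, k, iters=10):
    """Cluster 1-face segments by (face x-location, face size).
    Deterministic toy k-means: initial centers are the first k points."""
    centers = [list(p) for p in points[:k]]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each segment to its nearest cluster center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # recompute each center as the mean of its group
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers, groups
```

With two clusters, anchor shots (large, centered faces) and, say, weather-report shots (smaller, offset faces) separate cleanly, which is the scene-type classification described above.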

Scene composition clusters. (x: face x-location. y: face size)

Audio-Visual Marker Association Why? – To establish the beginning and end of the event of interest – To reduce false alarms – To use audience reaction to gauge significance Two cases – A video marker overlaps with an audio marker: associate them – A video marker followed by several audio markers: choose the nearest one that is not too far away

Case 1: more than 50% overlap between an audio marker and a visual marker – associate the pair. Case 2: gap t between the visual marker and a following audio marker candidate with t < threshold, where the threshold is obtained from the statistics of play durations of a game. Segments with no association are discarded.
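
The two association cases can be sketched directly from the rules above, with markers represented as (start, end) time intervals; the names and interval representation are assumptions for illustration:

```python
def overlap_fraction(a, b):
    """Fraction of the shorter interval covered by the overlap of a and b."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    shorter = min(a[1] - a[0], b[1] - b[0])
    return max(0.0, hi - lo) / shorter

def associate(visual, audio_markers, gap_threshold):
    """Case 1: keep an audio marker overlapping the visual marker by >50%.
    Case 2: otherwise take the nearest following audio marker whose gap is
    below a threshold (learned from the game's play-duration statistics).
    Returns None when no association is possible (segment discarded)."""
    for a in audio_markers:
        if overlap_fraction(visual, a) > 0.5:
            return a
    following = [a for a in audio_markers
                 if a[0] >= visual[1] and a[0] - visual[1] < gap_threshold]
    return min(following, key=lambda a: a[0] - visual[1], default=None)
```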

What type of user goals? Skim (highlights, news) – automatic summary, quick overview Content selection – decide what to watch – Movie trailer, sitcom jokes, instructional video abstract Browse segments – find out what elements included – Stages of how-to, questions in game show, songs in music, guests in morning show, … Content segment selection – locate and watch only desired part – Songs in variety show, segments of a talk show, verdict in court TV Remember previously viewed – quickly remember which … – Series episodes – sitcoms, children’s, cartoons… – Home video browsing, searching Many times a mixture of above, or switching between

Problem formulation We want to find scene boundaries (boundaries between semantically different segments of a show). – We will use scene boundaries for smart browsing. For different genres, different features are useful. – Short music segments indicate scene changes in some sitcoms. – Speaker changes indicate scene changes in news programs. To create a single system that works for all genres, we hand-label a training set of diverse video content and use a support vector machine to learn an optimal classifier.

System overview Audio features – MFCCs, semantic classes Video features – shot change locations Labeled ground truth data – 7 hours of content from sitcoms, news programs, dramas, music videos, etc.

Audio Texture Analysis based temporal segmentation Audio features – semantic classes

Results Training data + SVM allows us to test many combinations of features on many genres. A combination of features (semantic histograms plus Bhattacharyya distance) was found to perform much better than any single audio feature. Video shot detection further improves the results.
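
The Bhattacharyya distance used here compares two normalized semantic-class histograms (e.g. the class distributions on either side of a candidate scene boundary); a minimal sketch:

```python
import math

def bhattacharyya_distance(p, q):
    """Distance between two normalized histograms p and q.
    BC = sum_i sqrt(p_i * q_i); distance = -ln(BC), so identical
    histograms give 0 and disjoint ones give a large distance."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return -math.log(max(bc, 1e-12))  # floor avoids log(0) for disjoint p, q
```

A large distance between the left and right window histograms is evidence for a scene change, which is one of the features fed to the SVM.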

Performance Evaluation What is a Good Summary? Are there Objective Measures? – Yes, they exist and No, they are not sufficient User Studies will be key – Key issues are Functionalities and their effectiveness – Controlled tests of demonstration setup with groups of non-expert users – Several Iterations will be required – Hope to gauge the usefulness and efficacy of proposed functionalities – Evidence from Time Tunnel studies shows that such studies are insightful

Experimental Set-Up Interface displayed on Big-Screen TV All subjects were sports aficionados One of us sat with the subject The rest of us observed from a remote hook-up Wide variety of sports tried – Soccer – American Football – Ice Hockey – Baseball – Tennis

Starting the session Demonstration of remote-control based highlights playback to subject Remote control has highlight skip button in addition to usual FF, REW etc.

Three Simple Tasks You have ten minutes to watch the content before you leave for work. You have thirty minutes to watch the content You have a full afternoon to watch the content The user was allowed to choose the content

Outline of Questions for Subjects Not a rigorous scientific experiment but intended to get a general idea of user response Salient Questions: How much of sports video do you record every week/month? What was your overall impression of the interface? Do you think it captured all the highlights?

What is a Highlight? Recorded both detailed comments and yes-no answers P1 thought that some highlights seemed “random.” P3 said that crowds react to things that he does not always care about. P4 “didn’t like” false and missing highlights. He said, “show me home runs and touchdowns.” Unanimous in liking the highlights when found “correctly”

General Response Missed Highlights bigger concern than False Highlights Quick recovery from false highlights essential Overall system enjoyable despite its mistakes Users liked: – The Action Map because of the flexibility it affords – Auto-skip and Highlight skip features Suitability to different sports – Best Suited to: – Soccer, Baseball, Hockey

Conclusion Users enjoyed system despite flaws The general consistency in the results indicates validity Our hypothesis that fair accuracy combined with a good interface would work well, was confirmed.

Blind Summarization – Content-adaptive summarization Content Specificity enables high summary quality Content-specificity leads to profusion of techniques Difficult to support multiple non-overlapping techniques in practical systems Content-independent techniques are known to compromise quality of summarization Thus the trade-off is between summary quality and content-specificity Our approach is to combine specific and general techniques – Have a large common core of processing – Postpone content-specific processing to as late a stage as possible

Problem formulation Motivated by the observation that “interesting” events in unscripted multimedia happen sparsely in a background of “uninteresting” events. – A burst of audience reaction in the vicinity of “interesting” events in sports audio. – A burst of screaming sound in the vicinity of “suspicious” events in surveillance audio content. Given a time series of observations from a usual background process (P1) and an unusual foreground process (P2), detect the sparse foreground events.

Proposed Analysis & Representation Framework

Outlier subsequence detection in time series The input time series is divided into windows of length W_L (sliding with step W_S); from each window a context model M_1, M_2, …, M_i, …, M_N is estimated, giving N context models. An N×N affinity matrix is computed between the models, and clustering of the affinity matrix yields the detected transitions and outlier subsequences.
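
A toy version of this pipeline: window models as semantic-class histograms, affinity as the Bhattacharyya coefficient between histograms, and the window least similar on average to all others flagged as the outlier subsequence. All names are illustrative; the actual system uses richer context models and clustering:

```python
import math
from collections import Counter

def window_histogram(labels, classes):
    """Model a window of class labels as a normalized histogram."""
    c = Counter(labels)
    return [c[k] / len(labels) for k in classes]

def outlier_window(series, win, classes):
    """Split the label series into length-`win` blocks, build the affinity
    matrix from Bhattacharyya coefficients between block histograms, and
    return the index of the block least similar on average to the rest."""
    blocks = [series[i:i + win] for i in range(0, len(series) - win + 1, win)]
    hists = [window_histogram(b, classes) for b in blocks]
    def affinity(p, q):
        return sum(math.sqrt(x * y) for x, y in zip(p, q))
    scores = [sum(affinity(h, g) for g in hists) / len(hists) for h in hists]
    return scores.index(min(scores))
```

In a sports clip this would flag the rare audience-reaction window against the speech background; in surveillance, the rare screaming window.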

Examples of time series from audio Low-level features (MFCC, MDCT coefficients) and mid-level semantic labels at different time resolutions (applause, cheering, music, speech, speech & music). The input audio, drawn from a variety of content, is assigned an audio class by comparing the likelihoods of models trained from examples.

Cluster indicator vector for a soccer clip

Mining surveillance video

Using confidence metric to rank outliers I – set of inliers; M_i – inlier context model; M_j – outlier to be ranked. When the distribution P_d,i has only a finite support, use the model distance d_1(M_i, M_j), compared against the inlier-to-inlier distances d_2(M_i, M_k), to rank outliers.

Systematic acquisition of domain knowledge Input content → feature extraction → mining framework (discovered patterns P1, P2, P3) → train supervised models.

Acquired audio classes from surveillance data

Scene segmentation for situation comedy content: Music detection

Conclusions and Future work Developed a content-adaptive analysis framework based on time-series analysis Postpones content-specific processing to a late stage Works for sports, surveillance, sitcoms, news and other genres where there are one or two key audio-visual object classes High computational complexity compared to content-specific techniques Currently more useful as a test-bed than as a practical system Future challenge will be to maintain flexibility while making computational complexity comparable to content-specific techniques Could incorporate visual techniques

TimeTunnel: Motivation However, many portions of the interface for accessing digital video have not changed. Some exceptions include: – Chapter browsing on DVDs (random access) – 30-second skip function for PVRs In summary, the basic method for fast-forwarding and rewinding through digital video is the same as fast-forwarding and rewinding through analogue video.

Augmenting Fast-forward and Rewind Display images from multiple positions in the video at the same time A combination of temporal and spatial layout – Uses the conventional fast-forwarding view in the background (slideshow) – Adds context by laying out neighboring frames in a trail (Illustration: temporal keyhole vs. spatial array vs. one variation of our interface)

Augmenting Fast-forward and Rewind

User Evaluation Published the results of an early evaluation in UIST 2003 – Subjects were more accurate at finding a specific location in a video with our technique than with traditional DVD fast-forward – Reduced navigation error by about 25%

User Evaluation Recently, subjects used DC images from standard definition video to fast-forward and rewind through video Preferences differed about the specifics of the layout, but some commonality emerged

Experiment SAC has a very large design space This initial experiment used a vanilla version of SAC to convince ourselves that there was merit in this realm Hypothesis – SAC will allow users to more accurately reach a desired position in a recorded video than a traditional fast-forward interface will. – SAC will allow users to more quickly reach a desired position in a recorded video than a traditional fast-forward interface will. 15 subjects fast-forwarding through 3-4 minute video Displayed on TV Remote control as input “Please fast-forward to the start of the fireworks show” and “Please watch this program and fast-forward through the commercials” were typical instructions

Results: Accuracy average 6.87 vs. 9.18, t(13) = , p = 0.023

Web-based vs Personal View (Diagram: Personal Content shown within Worldwide Content)

Limitations of Current Technology Content Extraction: multimedia content extraction, indexing and retrieval at the Web scale is non-existent – Semantic content extraction from image/video data at a large scale is in its infancy at best – Joint exploitation with text, audio and other data only in limited domains: news videos Indexing: sub-linear, near real-time indexing for high-dimensional data is feasible only for modest-sized repositories (~1M images) – Video indexing not even tried More computation is not the answer. Projections (state of the art, with six 24/7 MPEG-1 streams for 1 year): duration of content – hours of video, 24 TB of MPEG-1; database size – 1420 GB; indexing time – 80 hours (1 processor).

Limitations of Current Technology Learning: unsupervised, incremental extraction and assimilation of new content at large scale is not available – Google-like auto-indexing of large-scale multi-modal content is needed Triggers: real-time triggers and alerts for events of interest are needed Architecture: distributed algorithms and architectures for meta-data extraction and real-time retrieval need to be discovered

Feasibility: Technology Nuggets Real-time feature extraction and matching Multi-modal association of imagery, video, blogs across the Internet – Geo-location based association Incremental meta-data extraction techniques Distributed architecture for very large databases In-time or real-time triggering for user-specified events Robustness w.r.t. viewpoint and illumination changes through embedding-based representations – locality-sensitive hashing trees, randomized trees Logic leading to link graphs Hierarchical event representation and recognition Scalable and modular framework Real-time extraction & matching of learned multi-modal features
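
Locality-sensitive hashing is one of the nuggets that makes sub-linear indexing plausible: nearby feature vectors tend to land in the same bucket, so a query probes one bucket instead of scanning the repository. A minimal random-hyperplane sketch (illustrative, not the deployed system):

```python
import random

def make_hasher(dim, bits, seed=0):
    """Random-hyperplane LSH: each bit of the key is the sign of one random
    projection, so vectors with small angular distance usually share keys."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]
    def hasher(v):
        return "".join(
            "1" if sum(p * x for p, x in zip(plane, v)) >= 0 else "0"
            for plane in planes)
    return hasher

# An index is then just {key: [item ids]}; a query hashes itself and
# inspects only its own bucket (plus, in practice, a few nearby ones).
```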

Notional Metrics Duration of content: years? months? Database size: ? Indexing time: ? Indexing time per query feature: linear? sub-linear?

Notional Metrics (Continued) Robustness: to imprecision of meta-data, imprecision of retrieval, etc. User feedback: minimize the number of required user interactions Retrieval: accuracy vs. speed trade-off, graceful degradation

Future Directions Scale will increase – Personally created and captured content – Personally recorded broadcast content – Web content Personal vs Web content – No real distinction Variety of devices – Conventional TVs – PCs – Handhelds – Projectors?

Technical Challenges Accuracy Computational Complexity Hardware vs. Software – MCA at edge? – MCA at server? User Interfaces

Questions 90 minutes of soccer – 2-minute digest 9000 minutes of soccer – 200-minute digest? Boundary line between content creation and analysis? Interactivity in multiple modes? Multimedia Content Analysis should – Convert multimedia into a set of searchable text words? – Provide multimedia responses to user needs?