K. Selçuk Candan, Maria Luisa Sapino, Xiaolan Wang, Rosaria Rossini

Presentation transcript:

sDTW: Computing DTW Distances using Locally Relevant Constraints based on Salient Feature Alignments. K. Selçuk Candan, Maria Luisa Sapino, Xiaolan Wang, Rosaria Rossini. This work is supported by NSF Grant 1043583, MiNC: NSDL Middleware for Network- and Context-aware Recommendations.

Dynamic Time Warping Dynamic time warping (DTW) finds the mapping with the minimum distance between two sequences. The method is called "time warping" because the sequences must be compressed or expanded in time to find the best alignment.
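To make the dynamic-programming formulation concrete, here is a minimal DTW sketch in Python; this is illustrative only, and the function and variable names are ours rather than the authors' code.

```python
import numpy as np

def dtw_distance(x, y):
    """Plain O(N*M) DTW between two 1D sequences x and y (illustrative sketch)."""
    n, m = len(x), len(y)
    # cost[i, j] = minimum cumulative cost of aligning x[:i+1] with y[:j+1]
    cost = np.full((n, m), np.inf)
    cost[0, 0] = abs(x[0] - y[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            d = abs(x[i] - y[j])
            prev = min(cost[i - 1, j] if i > 0 else np.inf,                      # step in x only
                       cost[i, j - 1] if j > 0 else np.inf,                      # step in y only
                       cost[i - 1, j - 1] if i > 0 and j > 0 else np.inf)        # step in both
            cost[i, j] = d + prev
    return cost[n - 1, m - 1]
```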

DTW Grid and Warp Path Given two time series, X and Y, of lengths N and M, alternative warping strategies can be compactly represented as an NxM grid. A warp path between X and Y can then be visualized as a path from the lower-left corner of the NxM grid to its upper-right corner.

Reducing the Cost of DTW The major cost of the DTW algorithm is filling the NxM grid. To reduce this cost, the Sakoe-Chiba band and the Itakura parallelogram constraints define a narrow band around the diagonal through which warp paths are allowed to pass.
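The sketch below shows how a Sakoe-Chiba style band of half-width w restricts which grid cells are filled; it is again only an illustration, and the half-width parameter and the rescaling for unequal lengths are our assumptions.

```python
import numpy as np

def dtw_band(x, y, w):
    """DTW restricted to a Sakoe-Chiba style band of half-width w around the diagonal."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        # only cells within w of the (rescaled) diagonal are filled
        j_center = int(round(i * m / n))
        for j in range(max(1, j_center - w), min(m, j_center + w) + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```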

sDTW Idea Our key observation is that time series often carry structural features that can be used for identifying locally relevant constraints to eliminate redundant work.

Proposed Adaptive Constraints fixed core & adaptive width (fc,aw): the width is adapted to local features; adaptive core & fixed width (ac,fw): the core is not necessarily on the diagonal; adaptive core & adaptive width (ac,aw): both the core and the width are adapted.

sDTW Process sDTW leverages salient alignment evidence to improve the effectiveness of the pruning constraints: (1) search for salient features on the input time series; (2) find salient alignments of a given pair of time series by matching the descriptors of the salient features; (3) use these salient alignments to compute locally relevant constraints that prune the warp path search.

First Question How can we locate robust local features of time series? To search for robust local features, we adapt the scale-invariant feature transform (SIFT).

Scale-Invariant Feature Transform The scale-invariant feature transform (SIFT) is a computer vision algorithm used to detect and describe local features in 2D images. The algorithm identifies salient points of a given image (and their descriptors) that are invariant to: image scaling; translation; rotation; different illuminations and noise. We adapted the 2D SIFT algorithm in a way that captures the characteristics of 1D time series.

1D Scale-Invariant Feature Transform Steps The salient feature extraction algorithm relies on a three-step process to identify salient features and their robust local feature descriptors in time series: scale-space extrema detection; feature filtering and localization; feature descriptor creation. Each step is designed to capture characteristics of time series.
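As a rough illustration of the first step, the sketch below detects extrema in a 1D difference-of-Gaussians scale space using SciPy; the scales, the neighborhood size, and the function names are our assumptions for illustration, not the paper's exact detector.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_space_extrema(series, sigmas=(1, 2, 4, 8)):
    """Detect candidate salient points as extrema of a 1D difference-of-Gaussians stack.

    Illustrative sketch of scale-space extrema detection; the actual sDTW
    feature detector may use different scales, thresholds, and filtering.
    """
    smoothed = np.stack([gaussian_filter1d(series, s) for s in sigmas])
    dog = smoothed[1:] - smoothed[:-1]          # difference of Gaussians, one row per scale pair
    keypoints = []
    for s in range(dog.shape[0]):
        for t in range(1, dog.shape[1] - 1):
            window = dog[max(0, s - 1): s + 2, t - 1: t + 2]   # scale-time neighborhood
            v = dog[s, t]
            if v == window.max() or v == window.min():
                keypoints.append((t, s))        # (time index, scale index)
    return keypoints
```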

Salient Temporal Features

Feature Descriptor Creation Eight gradients computed for a time series describe the magnitude of the changes at different distances from the salient point. The same descriptor size covers different temporal ranges at different time scales.
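A hedged sketch of such a descriptor is shown below: eight windows whose size grows with the detection scale, each summarizing the magnitude of local changes. The exact binning and normalization here are our assumptions, not the paper's formula.

```python
import numpy as np

def descriptor(series, t, scale_sigma, n_bins=8):
    """Illustrative 8-value descriptor around salient point t.

    Each bin holds the mean absolute change in a window placed at increasing
    distance from t; the window size grows with the detection scale, so the
    same descriptor spans a wider temporal range at coarser scales.
    """
    grad = np.abs(np.diff(series))
    step = max(1, int(scale_sigma))                 # window size tied to the scale
    desc = np.zeros(n_bins)
    for b in range(n_bins):
        lo = t + b * step
        hi = lo + step
        seg = grad[lo:hi]
        desc[b] = seg.mean() if len(seg) else 0.0
    return desc / (np.linalg.norm(desc) + 1e-12)    # normalize for comparability
```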

2nd Step: Feature Matching & Inconsistency Pruning The feature matching step compares: the Euclidean distance between the feature descriptors of each pair of salient features; the sizes of the scopes of the matching features. A good match has a small descriptor distance and similar scope sizes for the two features.
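A minimal sketch of this matching step, assuming each feature is represented as a (position, scope size, descriptor) tuple; the thresholds are illustrative choices, not values from the paper.

```python
import numpy as np

def match_features(feats_x, feats_y, max_desc_dist=0.5, max_scope_ratio=2.0):
    """Match salient features of two series (illustrative sketch).

    feats_x, feats_y: lists of (position, scope_size, descriptor) tuples.
    A pair is kept when its descriptors are close in Euclidean distance and
    the scope sizes are comparable.
    """
    matches = []
    for i, (px, sx, dx) in enumerate(feats_x):
        for j, (py, sy, dy) in enumerate(feats_y):
            desc_dist = np.linalg.norm(dx - dy)
            scope_ratio = max(sx, sy) / max(min(sx, sy), 1e-12)
            if desc_dist <= max_desc_dist and scope_ratio <= max_scope_ratio:
                matches.append((i, j, desc_dist))
    return matches
```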

Inconsistency Pruning For each pair of matching features we consider: alignment: feature pairs with large temporal scopes that are close to each other in time; similarity: pairs of features that have both similar descriptors and similar average amplitudes; combined score: a combination of both alignment and similarity.

Inconsistency Pruning Given two time series, consider the scope boundaries of the pairs of matching salient features in descending order of their μcomb scores and prune those that imply inconsistent ordering of start and end points.
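A hedged sketch of this greedy pruning, assuming each match carries its scope boundaries on both series and a combined (μ_comb-like) score; the representation and the crossing test are our assumptions for illustration.

```python
def prune_inconsistent(matches):
    """Greedily drop matches whose scope boundaries cross already-kept matches.

    matches: list of (x_start, x_end, y_start, y_end, score) tuples, where the
    start/end values are the scope boundaries of a matching feature pair and
    score is its combined score.  Matches are considered in descending score
    order; a match is kept only if its ordering along X agrees with its
    ordering along Y relative to every match kept so far.  Illustrative only.
    """
    kept = []
    for xs, xe, ys, ye, score in sorted(matches, key=lambda m: -m[-1]):
        consistent = True
        for kxs, kxe, kys, kye, _ in kept:
            if (xs - kxs) * (ys - kys) < 0 or (xe - kxe) * (ye - kye) < 0:
                consistent = False
                break
        if consistent:
            kept.append((xs, xe, ys, ye, score))
    return kept
```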

Inconsistency Pruning (figure): pairs of matching salient points before inconsistency removal; pairs of matches remaining after inconsistency pruning; scope boundaries of the matching salient features after inconsistency elimination.

3rd Step of sDTW Process Locally Relevant DTW Constraints: WIDTH. After inconsistency pruning, we have a set of consistent matches of salient features. Our intuition is that each corresponding interval pair in the two time series corresponds to similar regions and thus must have similar characteristics.

Adaptive Width Constraints Adaptive width constraints use the widths of the resulting intervals to choose a different locally relevant width for each point (details in the paper).
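A rough sketch of this idea (not the paper's formula): derive a per-point half-width from how much the lengths of corresponding intervals disagree, assuming the aligned scope boundaries are given as (i, j) anchor pairs sorted along series X.

```python
import numpy as np

def adaptive_widths(n, anchors, base_w=5):
    """Choose a locally relevant band half-width for every point of series X.

    anchors: list of (i, j) pairs (aligned scope boundaries on X and Y) sorted by i.
    Between two consecutive anchors the half-width is widened when the lengths of
    the corresponding intervals disagree; base_w is a floor.  Illustrative only.
    """
    widths = np.full(n, base_w, dtype=int)
    for (i0, j0), (i1, j1) in zip(anchors, anchors[1:]):
        len_x, len_y = max(i1 - i0, 1), max(j1 - j0, 1)
        local_w = base_w + abs(len_x - len_y)     # wider band where the intervals differ more
        widths[i0:i1] = local_w
    return widths
```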

3rd Step of sDTW Process Locally Relevant DTW Constraints: CORE. (Figure: sample point alignments within an interval.) We can use the starting and ending points of the corresponding intervals in the two time series to associate with each point in one series a candidate point in the other time series.

Adaptive Core Constraints The core of the band is no longer aligned with the diagonal; instead, the core follows a path that reflects the alignments implied by the scopes of the salient features.
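A minimal sketch of an adaptive core, assuming the aligned scope boundaries are given as (i, j) anchor pairs that include the two corners of the grid; the core is obtained by piecewise-linear interpolation between them, which is our simplification of the idea.

```python
import numpy as np

def adaptive_core(n, m, anchors):
    """For every index i of series X, compute the core column j on series Y.

    Instead of the diagonal j = i * m / n, the core interpolates linearly
    between the aligned scope boundaries (anchors) of the matched salient
    features.  anchors are (i, j) pairs sorted by i and assumed to include
    (0, 0) and (n - 1, m - 1) as endpoints.  Illustrative sketch.
    """
    xs = np.array([a[0] for a in anchors])
    ys = np.array([a[1] for a in anchors])
    core = np.interp(np.arange(n), xs, ys)        # piecewise-linear core path
    return np.clip(np.round(core).astype(int), 0, m - 1)
```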

Adaptive Core & Adaptive Width Adaptive width constraints can easily be combined with adaptive core constraints, as in the sketch below.
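A hedged sketch of the combined (ac,aw) scheme, reusing the core and widths arrays from the sketches above to restrict which grid cells are filled; it mirrors the idea rather than the paper's exact code.

```python
import numpy as np

def dtw_adaptive_band(x, y, core, widths):
    """DTW restricted to an adaptive band.

    For each row i, only the columns within widths[i-1] of core[i-1] are filled;
    core and widths can come from adaptive_core and adaptive_widths above.
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        c, w = core[i - 1], widths[i - 1]
        for j in range(max(1, c + 1 - w), min(m, c + 1 + w) + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]
```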

Evaluation Data Sets To estimate the efficiency and effectiveness of sDTW, it is tested on the following time series data sets: Gun (length 150, 50 series, 2 classes); Trace (length 275, 100 series, 4 classes); 50Words (length 270, 450 series).

Experiment Setup Intel Core 2 Quad CPU 3GHz machine with 8GB RAM; Ubuntu 9.10 (64-bit); Matlab 7.8.0. For the baseline fc&fw schemes we used the Sakoe-Chiba DTW code.

Evaluation Criteria In order to assess the effectiveness of the various DTW algorithms, we use the following measures: retrieval accuracy; distance accuracy; classification accuracy.

Execution Time Analysis The overhead of the matching step for the adaptive algorithms is negligible.

Top-k Retrieval Accuracy fixed core & fixed width (fc,fw) ⇒ the larger the value of w, the more accurate the results; adaptive core & fixed width (ac,fw) ⇒ significant gains in accuracy are obtained when the core is adapted; adaptive core & adaptive width (ac,aw) ⇒ accuracy is further boosted when both the width and the core are adapted.

Classification Accuracy The figure re-confirms the results by focusing on classification accuracy. As with the top-k retrieval and DTW distance accuracy results, the adaptive core and adaptive width algorithms improve the classification accuracy.

Distance Accuracy The figure confirms the results by focusing on the errors in the DTW estimates provided by the various algorithms for the data set.

Conclusions We recognize that the time series being compared often carry structural evidence that can help identify locally relevant constraints to prune unnecessary work during dynamic time warping (DTW) distance computation. Experimental results have shown that the proposed locally relevant constraints based on salient features help improve the accuracy of DTW distance estimations. We have proposed three different constraint types based on different assumptions about the structural variations in the data set.

Thank you.