Object Matching Using a Locally Affine Invariant and Linear Programming Techniques - H. Li, X. Huang, L. He. Presented by Ilchae Jung.

Presentation transcript:

Object Matching Using a Locally Affine Invariant and Linear Programming Techniques - H. Li, X. Huang, L. He. Presenter: Ilchae Jung

Object matching
Slide from "Linear Solution to Scale and Rotation Invariant Object Matching", Hao Jiang and Stella X. Yu, CVPR 2009

Problem formulation
Find the matching function m(·) minimizing the objective

m(·) = argmin_{m(·)} Σ_{i=1}^{n_t} [ c(p_i, m(p_i)) + λ · g(p_i, N_{p_i}; m(p_i), N_{m(p_i)}) ]

n_t = the number of template points
n_s = the number of scene points
p_i = (x_i, y_i)ᵀ ∈ R², the i-th template point
q_j ∈ R², the j-th scene point
S ∈ R^{n_s×2} = the set of scene points
T ∈ R^{n_t×2} = the set of template points
N_{p_i} = the set of ordered points in the neighborhood of p_i
m(p_i) = the matching function

Problem formulation (cont.)
The first term c(p_i, m(p_i)) is the photometric similarity: the feature (appearance) cost of matching the template point p_i to the scene point m(p_i). Notation as on the previous slide.

Problem formulation (cont.)
The second term g(p_i, N_{p_i}; m(p_i), N_{m(p_i)}) is the geometric similarity: it measures how well the neighborhood N_{p_i} of p_i is preserved by the neighborhood N_{m(p_i)} of its match. The objective is evaluated as shown in the sketch below.
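To make the objective concrete, here is a minimal sketch (not from the paper): it evaluates the matching cost for a candidate assignment, assuming hypothetical callables `photometric_cost` for c and `geometric_cost` for g.

```python
import numpy as np

def matching_objective(m, neighbors, photometric_cost, geometric_cost, lam=1.0):
    """Evaluate sum_i [ c(p_i, m(p_i)) + lam * g(p_i, N_{p_i}; m(p_i), N_{m(p_i)}) ].

    m                : m[i] = index of the scene point matched to template point i
    neighbors        : neighbors[i] = indices of the template points in N_{p_i}
    photometric_cost : callable c(i, j)      -> appearance cost of matching i to j
    geometric_cost   : callable g(i, N_i, m) -> neighborhood-distortion cost under m
    """
    total = 0.0
    for i in range(len(m)):
        total += photometric_cost(i, m[i]) + lam * geometric_cost(i, neighbors[i], m)
    return total
```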

Supplement: Difference from graph matching
Slide from "Learning Graphs to Match", Minsu Cho, Karteek Alahari, and Jean Ponce, ICCV 2013

Supplement: Difference from graph matching
Find the assignment y maximizing the score S (an integer quadratic program):

y* = argmax_y S(G, G′, y) = argmax_y yᵀ W y

where W_{ia;jb} = S_V(a_i, a′_a) if i = j and a = b, and W_{ia;jb} = S_E(a_ij, a′_ab) otherwise.

Slide from "Learning Graphs to Match", Minsu Cho, Karteek Alahari, and Jean Ponce, ICCV 2013
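For comparison, a toy sketch (illustrative only) of evaluating the graph-matching score yᵀ W y for a binary assignment vector over candidate correspondences:

```python
import numpy as np

def iqp_score(W, y):
    """Graph-matching score S(G, G', y) = y^T W y, where y is a binary
    indicator vector over candidate correspondences (i, a)."""
    y = np.asarray(y, dtype=float)
    return float(y @ W @ y)

# toy usage: two candidate correspondences, unit node affinities, 0.3 edge affinity
W = np.array([[1.0, 0.3],
              [0.3, 1.0]])
print(iqp_score(W, [1, 1]))   # 2.6
```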

Contributions
- A locally affine-invariant geometric constraint
- An efficient linearization to solve the original objective
- Successive refinement for accurate matching

Problem formulation (recap)

m(·) = argmin_{m(·)} Σ_{i=1}^{n_t} [ c(p_i, m(p_i)) + λ · g(p_i, N_{p_i}; m(p_i), N_{m(p_i)}) ]

The photometric term c and the geometric term g are modeled in turn on the following slides.

Modeling the feature matching function (photometric term)

Σ_{i=1}^{n_t} c(p_i, m(p_i)) = Σ_{i=1}^{n_t} Σ_{j=1}^{n_s} C_ij X_ij = tr(Cᵀ X)

C_ij = min_{s,θ} distance( feature(p_i), feature(T(q_j; s, θ)) )

X ∈ {0,1}^{n_t×n_s} = matrix representation of m(·), subject to X 1_{n_s} = 1_{n_t}
C ∈ R^{n_t×n_s} = the costs of matching
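A minimal sketch of this term, assuming precomputed feature descriptors per point; the plain Euclidean distance below stands in for the min over scale s and rotation θ in the slide, and the names are illustrative rather than the authors' code:

```python
import numpy as np

def photometric_cost_matrix(template_feats, scene_feats):
    """C[i, j] = distance between the descriptor of template point i and scene point j.
    Euclidean distance is used here as a stand-in for the min over scale/rotation."""
    diff = template_feats[:, None, :] - scene_feats[None, :, :]
    return np.linalg.norm(diff, axis=2)            # shape (n_t, n_s)

def photometric_term(C, X):
    """sum_ij C_ij X_ij = tr(C^T X) for an assignment matrix X."""
    return float(np.trace(C.T @ X))
```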

A locally affine-invariant constraint
- Every template point has at least three neighbors
- The neighbors must not be collinear

p_i = Σ_{p_j ∈ N_{p_i}} W_ij p_j

W_ij = w_l (the l-th reconstruction weight) if p_j is the l-th neighbor of p_i, and W_ij = 0 if p_j is not a neighbor of p_i

With Q = [p_1 p_2 p_3; 1 1 1] (the three neighbors in homogeneous coordinates),

Q W_iᵀ = [p_i; 1]  ⇒  W_iᵀ = Q⁻¹ [p_i; 1]
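A small sketch (assumed implementation, not the authors' code) of solving Q W_iᵀ = [p_i; 1] for the reconstruction weights of one template point from three non-collinear neighbors:

```python
import numpy as np

def local_affine_weights(p_i, nbrs):
    """Weights w such that p_i = sum_l w[l] * nbrs[l] and sum(w) = 1.

    p_i  : (2,) template point
    nbrs : (3, 2) its three non-collinear neighbors
    """
    # Q stacks the neighbors as columns in homogeneous coordinates (last row of ones).
    Q = np.vstack([nbrs.T, np.ones(3)])          # shape (3, 3)
    rhs = np.append(p_i, 1.0)                    # [x_i, y_i, 1]
    return np.linalg.solve(Q, rhs)               # W_i^T = Q^{-1} [p_i; 1]
```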

A locally affine-invariant constraint (cont.)
- Every template point has at least three neighbors
- The neighbors must not be collinear
- All the points in a neighborhood undergo the same local affine transformation

p_i = Σ_{p_j ∈ N_{p_i}} W_ij p_j  ⇒  0 = p_i − Σ_j W_ij p_j  ⇒  0 = (I − W) T

X* = argmin_X g(p_i, N_{p_i}; m(p_i), N_{m(p_i)}) = argmin_X ‖(I − W) X S‖₁

The objective is affine-invariant.

I = identity matrix, T ∈ R^{n_t×2}, S ∈ R^{n_s×2}
X S = the matched scene points' coordinates, in the same order as the template points
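A quick numerical check of the invariance claim, reusing the `local_affine_weights` helper sketched above with made-up coordinates: the reconstruction residual stays zero after an arbitrary affine map A p + b is applied to the point and its neighbors.

```python
import numpy as np

# hypothetical data: one template point and three non-collinear neighbors
p_i  = np.array([1.0, 2.0])
nbrs = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
w = local_affine_weights(p_i, nbrs)              # from the previous sketch

A = np.array([[2.0, 0.5], [-0.3, 1.5]])          # arbitrary affine transform
b = np.array([5.0, -1.0])

res_before = p_i - w @ nbrs                                   # row of (I - W) T, ~0
res_after  = (A @ p_i + b) - w @ (nbrs @ A.T + b)             # still ~0, since sum(w) = 1
print(res_before, res_after)
```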

Overall objective function

minimize_X  tr(Cᵀ X) + λ ‖(I − W) X S‖₁
subject to  X ∈ {0,1}^{n_t×n_s},  X 1_{n_s} = 1_{n_t},  Xᵀ 1_{n_t} ≤ w 1_{n_s}

This is the matrix form of m(·) = argmin_{m(·)} Σ_{i=1}^{n_t} [ c(p_i, m(p_i)) + λ · g(p_i, N_{p_i}; m(p_i), N_{m(p_i)}) ].

Difficult to solve directly: the integer program is NP-hard.
Solution: convert the objective into a linear program.

Linearization and relaxation
Linear programming: a linear objective with linear constraints, which also reduces the search space.

Standard form:
minimize  cᵀ x   subject to  A x ≤ b,  x ≥ 0

Our problem:
minimize_X  tr(Cᵀ X) + λ ‖(I − W) X S‖₁
subject to  X ∈ {0,1}^{n_t×n_s},  X 1_{n_s} = 1_{n_t},  Xᵀ 1_{n_t} ≤ w 1_{n_s}

Relaxation: X ∈ [0,1]^{n_t×n_s}.

Linearization and relaxation
Linear approximation of the L1 norm with auxiliary variables:

minimize Σ_{i=1}^{N} |x_i|   ⇔   minimize_{x_i, u_i} Σ_{i=1}^{N} u_i
subject to  x_i ≤ u_i,  x_i ≥ −u_i,  u_i ≥ 0  for all i = 1, …, N

Applied to the geometric term:

minimize_X  λ ‖(I − W) X S‖₁   ⇔   minimize_{X,U}  λ 1_{n_t}ᵀ U 1_2
subject to  X ≥ 0,  U ≥ 0,  (I − W) X S ≤ U,  (I − W) X S ≥ −U
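A generic sketch of the auxiliary-variable trick using SciPy's `linprog` (not the paper's solver): minimizing an L1 norm under a linear equality constraint by introducing u with x ≤ u and −x ≤ u.

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_with_equality(A_eq, b_eq):
    """Solve  min ||x||_1  s.t.  A_eq x = b_eq  via the auxiliary-variable LP
    min sum(u)  s.t.  x <= u, -x <= u, u >= 0, A_eq x = b_eq."""
    m, n = A_eq.shape
    # decision vector z = [x (n entries), u (n entries)]
    c = np.concatenate([np.zeros(n), np.ones(n)])
    # x - u <= 0  and  -x - u <= 0
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_full = np.hstack([A_eq, np.zeros((m, n))])
    bounds = [(None, None)] * n + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_full, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

# tiny usage: minimum-L1 solution of an underdetermined system
A = np.array([[1.0, 1.0, 0.0]])
b = np.array([2.0])
print(min_l1_with_equality(A, b))
```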

Linearization and relaxation
Final objective:

minimize_{X,U}  f(X) = tr(Cᵀ X) + λ 1_{n_t}ᵀ U 1_2
subject to  X ≥ 0,  U ≥ 0
            X 1_{n_s} = 1_{n_t}
            (I − W) X S ≤ U
            (I − W) X S ≥ −U
            Xᵀ 1_{n_t} ≤ w 1_{n_s}

Remaining problem: the relaxed X is continuous.
Solution: successive linear programming with search-space reduction.
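Putting the pieces together, here is a sketch of the relaxed LP assembled for `scipy.optimize.linprog`; the variable layout and the cap `w_cap` on how many template points a scene point may absorb are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def match_lp(C, S, W, lam=1.0, w_cap=2.0):
    """Relaxed matching LP from the slide:
        min  tr(C^T X) + lam * 1^T U 1
        s.t. X 1 = 1,  (I - W) X S <= U,  (I - W) X S >= -U,
             X^T 1 <= w_cap,  0 <= X <= 1,  U >= 0.
    C : (n_t, n_s) matching costs, S : (n_s, 2) scene points,
    W : (n_t, n_t) locally affine reconstruction weights.
    Decision vector z = [vec(X) row-major, vec(U) row-major]."""
    n_t, n_s = C.shape
    nX, nU = n_t * n_s, n_t * 2
    M = np.eye(n_t) - W
    G = np.kron(M, S.T)                     # row-major: vec(M X S) = G @ vec(X)

    c = np.concatenate([C.ravel(), lam * np.ones(nU)])

    # row sums: X 1_{n_s} = 1_{n_t}
    A_eq = np.hstack([np.kron(np.eye(n_t), np.ones((1, n_s))), np.zeros((n_t, nU))])
    b_eq = np.ones(n_t)

    I_U = np.eye(nU)
    A_ub = np.vstack([
        np.hstack([G,  -I_U]),                                                   #  (I-W)XS - U <= 0
        np.hstack([-G, -I_U]),                                                   # -(I-W)XS - U <= 0
        np.hstack([np.kron(np.ones((1, n_t)), np.eye(n_s)), np.zeros((n_s, nU))]),  # column sums <= w_cap
    ])
    b_ub = np.concatenate([np.zeros(2 * nU), w_cap * np.ones(n_s)])

    bounds = [(0, 1)] * nX + [(0, None)] * nU
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:nX].reshape(n_t, n_s)
```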

Removing unnecessary variables
- Trust region shrinkage with neighbor search
- Lower convex hull search
The search space is reduced for each template point.

Trust region shrinkage with neighbor search
At each iteration:
- shrink the trust-region diameter r
- exclude the variables outside the trust region
Finishing condition:
- after a sufficient number of iterations, fix all other rows and, for each row of X, find the discrete solution for that row that minimizes the objective function
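An illustrative control-flow sketch of the successive refinement (not the authors' algorithm); the restricted LP solver is assumed to be supplied as a callable, and the final discrete rounding is simplified to nearest-candidate instead of the per-row minimization described above.

```python
import numpy as np

def successive_lp_matching(scene_pts, C, solve_restricted_lp,
                           r0=None, shrink=0.5, n_iters=5):
    """Successive-LP refinement sketch.

    solve_restricted_lp(C, candidates) is assumed to solve the relaxed LP using only the
    scene candidates allowed for each template point and return the current continuous
    estimate of the matched coordinates X S, shape (n_t, 2)."""
    n_t = C.shape[0]
    candidates = [np.arange(len(scene_pts)) for _ in range(n_t)]   # start with all scene points
    r = r0 if r0 is not None else np.ptp(scene_pts, axis=0).max()  # initial trust-region diameter

    for _ in range(n_iters):
        est = solve_restricted_lp(C, candidates)        # (n_t, 2) continuous estimates
        r *= shrink                                     # shrink the trust region
        new_candidates = []
        for i in range(n_t):
            d = np.linalg.norm(scene_pts - est[i], axis=1)
            idx = np.where(d <= r)[0]
            new_candidates.append(idx if idx.size else np.array([np.argmin(d)]))  # never empty
        candidates = new_candidates

    # simplified discrete rounding: nearest remaining candidate per template point
    return np.array([cand[np.argmin(np.linalg.norm(scene_pts[cand] - est[i], axis=1))]
                     for i, cand in enumerate(candidates)])
```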

The lower convex hull property: cost surfaces can be replaced by their lower convex hulls without changing the LP solution.
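A generic sketch of the idea for a 1-D cost profile: the lower convex hull of the points (position, cost), computed with the standard monotone-chain construction (not the paper's implementation).

```python
import numpy as np

def lower_hull(points):
    """Lower convex hull (Andrew's monotone chain) of 2D points (position, cost),
    illustrating how a cost profile can be replaced by its lower convex hull."""
    pts = sorted(map(tuple, points))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last point if the turn hull[-2] -> hull[-1] -> p is not counter-clockwise,
            # i.e. hull[-1] lies on or above the segment and is not on the lower boundary
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return np.array(hull)

# usage: the middle point lies above the segment and is dropped
print(lower_hull([(0, 0), (1, 1), (2, 0)]))
```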

Experiments Time complexity

Experiments Matching correctness

Experiments Rotation invariance

Experiments: results in videos, http://www.youtube.com/playlist?list=PL5315098DD6D1F04C

Discussion
- Assumption of scale and rotation
- Distinctive feature points
- Occlusion handling
- Appropriate weights
- Time complexity