ECE 417 Lecture 2: Metric (=Norm) Learning
Mark Hasegawa-Johnson 8/31/2017
Today’s Lecture
- Similarity and Dissimilarity of vectors: all you need is a norm
- Example: the Minkowski Norm (Lp norm)
- Cosine Similarity: you need a dot product
- Example: Diagonal Mahalanobis Distance
- What is Similarity?
- Metric Learning
Norm (or Metric, or Length) of a vector
A norm is:
1. Non-negative: $\|x\| \ge 0$
2. Positive definite: $\|x\| = 0$ iff $x = 0$
3. Absolutely homogeneous: $\|ax\| = |a|\,\|x\|$
4. Satisfies the triangle inequality: $\|x+y\| \le \|x\| + \|y\|$
Notice that, from 3 and 4 together, we get $\|x-y\| \le \|x\| + \|y\|$.
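As a quick sanity check (my own sketch, not part of the slides), the four properties above can be spot-checked numerically for the Euclidean norm using numpy:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)
y = rng.normal(size=5)
a = -3.0

norm = np.linalg.norm        # Euclidean (L2) norm

assert norm(x) >= 0                               # 1. non-negative
assert norm(np.zeros(5)) == 0                     # 2. the zero vector has zero norm
assert np.isclose(norm(a * x), abs(a) * norm(x))  # 3. absolute homogeneity
assert norm(x + y) <= norm(x) + norm(y)           # 4. triangle inequality

Spot checks on random vectors illustrate the axioms, of course, rather than prove them.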
Distance between two vectors
The distance between two vectors is just the norm of their difference. Notice that, because of non-negativity, homogeneity, and the triangle inequality, we can write $0 \le \|x-y\| \le \|x\| + \|y\|$. And because of positive definiteness, we also know that $\|x-y\| = 0$ only if $x = y$. The maximum value, $\|x-y\| = \|x\| + \|y\|$, is achieved only if $y$ is proportional to $-x$.
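For example (an illustrative sketch of these bounds, not from the slides):

import numpy as np

dist = lambda u, v: np.linalg.norm(u - v)   # distance = norm of the difference

x = np.array([3.0, 4.0])                    # ||x|| = 5
print(dist(x, x))                           # 0.0: distance from x to itself
print(dist(x, -2.0 * x))                    # 15.0 = ||x|| + ||-2x||: y proportional to -x
print(dist(x, np.array([0.0, 1.0])))        # about 4.24, below the bound ||x|| + ||y|| = 6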
Today’s Lecture
- Similarity and Dissimilarity of vectors: all you need is a norm
- Example: the Minkowski Norm (Lp norm)
- Cosine Similarity: you need a dot product
- Example: Diagonal Mahalanobis Distance
- What is Similarity?
- Metric Learning
Example: Euclidean (L2) Distance
The Euclidean (L2) distance between two vectors is defined as
$$\|x-y\|_2 = \sqrt{|x_1-y_1|^2 + \dots + |x_D-y_D|^2}$$
- Non-negative: well, obviously
- Positive definite: also obvious
- Absolutely homogeneous: easy to show
- Triangle inequality: easy to show (square both sides of $\|x-y\| \le \|x\| + \|y\|$)
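A minimal sketch of this definition (mine, checked against numpy's built-in norm):

import numpy as np

def euclidean(x, y):
    # square root of the sum of squared coordinate differences
    return np.sqrt(np.sum((x - y) ** 2))

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 6.0, 3.0])
print(euclidean(x, y))                                      # 5.0
print(np.isclose(euclidean(x, y), np.linalg.norm(x - y)))   # True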
Example: Euclidean (L2) Distance
Here are the vectors $x$, in 2-dimensional space, that have $\|x\|_2 = 1$:
[Figure: the L2 unit circle. Attribution: Gustavb]
Example: Minkowski (Lp) Norm
The Minkowski (Lp) distance between two vectors is defined as
$$\|x-y\|_p = \left(|x_1-y_1|^p + \dots + |x_D-y_D|^p\right)^{1/p}$$
- Non-negative: well, obviously
- Positive definite: also obvious
- Absolutely homogeneous: easy to show
- Triangle inequality: easy to show for any particular positive integer value of p (just raise both sides of $\|x-y\| \le \|x\| + \|y\|$ to the power of p).
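A short sketch of the Lp norm as defined above (my own; for vectors, numpy.linalg.norm accepts a float ord as a cross-check):

import numpy as np

def lp_norm(v, p):
    # p-th root of the sum of |v_d| ** p
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

v = np.array([3.0, -4.0])
print(lp_norm(v, 2.0))                                           # 5.0 (Euclidean)
print(np.isclose(lp_norm(v, 1.5), np.linalg.norm(v, ord=1.5)))   # True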
Example: Minkowski (Lp) Distance
Here are the vectors $x$, in 2-dimensional space, that have $\|x\|_{3/2} = 1$:
[Figure: the L3/2 unit ball. Attribution: Krishnavedala]
Example: Minkowski (Lp) Distance
Here are the vectors $x$, in 2-dimensional space, that have $\|x\|_{2/3} = 1$:
[Figure: the L2/3 unit "ball." Attribution: Joelholdsworth]
(Note that for $p < 1$ the triangle inequality fails, so $\|\cdot\|_{2/3}$ is not a true norm; that is why this unit "ball" is not convex.)
Manhattan Distance and L-infinity Distance
The Manhattan (L1) distance is
$$\|x-y\|_1 = |x_1-y_1| + \dots + |x_D-y_D|$$
The L-infinity distance is
$$\|x-y\|_\infty = \lim_{p\to\infty}\left(|x_1-y_1|^p + \dots + |x_D-y_D|^p\right)^{1/p} = \max_{1\le d\le D} |x_d-y_d|$$
[Figure. Attribution: Esmil]
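A quick numerical illustration (not from the slides) that the Lp norm converges to the max-coordinate norm as p grows:

import numpy as np

v = np.array([1.0, -2.0, 3.0])
for p in [1, 2, 8, 32, 128]:
    print(p, np.sum(np.abs(v) ** p) ** (1.0 / p))   # Lp norm from the definition
# the printed values decrease toward max(|v_d|) = 3.0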
Today’s Lecture
- Similarity and Dissimilarity of vectors: all you need is a norm
- Example: the Minkowski Norm (Lp norm)
- Cosine Similarity: you need a dot product
- Example: Diagonal Mahalanobis Distance
- What is Similarity?
- Metric Learning
Dot product defines a norm
The dot product between two real-valued vectors is symmetric and linear, so:
$$(x-y)^T(x-y) = x^Tx - 2y^Tx + y^Ty$$
(For complex-valued vectors, things are a bit more complicated, but not too much.)
The dot product is always positive definite:
- $(x-y)^T(x-y) \ge 0$
- $(x-y)^T(x-y) = 0$ only if $x = y$
So a dot product defines a norm:
$$\|x-y\|^2 = (x-y)^T(x-y) = \|x\|^2 - 2y^Tx + \|y\|^2$$
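A minimal numerical check (my sketch) of the expansion $\|x-y\|^2 = \|x\|^2 - 2y^Tx + \|y\|^2$:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)
y = rng.normal(size=4)

lhs = np.dot(x - y, x - y)                            # ||x - y||^2 via the dot product
rhs = np.dot(x, x) - 2 * np.dot(y, x) + np.dot(y, y)  # expanded form
print(np.isclose(lhs, rhs))                           # True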
Cosine
The cosine of the angle between two vectors is
$$\cos(x, y) = \frac{y^Tx}{\|x\|\,\|y\|}$$
[Figure. Attribution: CSTAR]
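A short sketch of this definition (mine, not the lecture's code):

import numpy as np

def cosine(x, y):
    # y^T x / (||x|| ||y||)
    return np.dot(y, x) / (np.linalg.norm(x) * np.linalg.norm(y))

print(cosine(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 0.0: orthogonal vectors
print(cosine(np.array([1.0, 1.0]), np.array([2.0, 2.0])))   # ~1.0: same direction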
Today’s Lecture
- Similarity and Dissimilarity of vectors: all you need is a norm
- Example: the Minkowski Norm (Lp norm)
- Cosine Similarity: you need a dot product
- Example: Diagonal Mahalanobis Distance
- What is Similarity?
- Metric Learning
Example: Euclidean distance
The Euclidean dot product is:
$$y^Tx = x_1y_1 + \dots + x_Dy_D$$
The Euclidean distance is:
$$\|x-y\|_2^2 = (x_1-y_1)^2 + \dots + (x_D-y_D)^2$$
$$= x_1^2 - 2x_1y_1 + y_1^2 + \dots + x_D^2 - 2x_Dy_D + y_D^2$$
$$= \|x\|^2 + \|y\|^2 - 2(x_1y_1 + \dots + x_Dy_D)$$
Example: Mahalanobis Distance
Suppose that $\Sigma$ is a diagonal matrix,
$$\Sigma = \mathrm{diag}\!\left(\sigma_1^2, \dots, \sigma_D^2\right), \qquad \Sigma^{-1} = \mathrm{diag}\!\left(\frac{1}{\sigma_1^2}, \dots, \frac{1}{\sigma_D^2}\right)$$
The Mahalanobis dot product is then defined as:
$$y^T\Sigma^{-1}x = \frac{x_1y_1}{\sigma_1^2} + \dots + \frac{x_Dy_D}{\sigma_D^2}$$
The squared Mahalanobis distance is:
$$d_M^2(x, y) = (x-y)^T\Sigma^{-1}(x-y) = \frac{(x_1-y_1)^2}{\sigma_1^2} + \dots + \frac{(x_D-y_D)^2}{\sigma_D^2}$$
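A minimal sketch of the diagonal case (my own, assuming numpy):

import numpy as np

def mahalanobis_sq(x, y, sigma_sq):
    # sum over dimensions of (x_d - y_d)^2 / sigma_d^2
    return np.sum((x - y) ** 2 / sigma_sq)

x = np.array([1.0, 2.0])
y = np.array([3.0, 2.0])
sigma_sq = np.array([4.0, 1.0])           # per-dimension variances
print(mahalanobis_sq(x, y, sigma_sq))     # 1.0: the large-variance dimension is down-weighted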
Example: Mahalanobis Distance
[Figure: Mahalanobis distance illustration. Attribution: Piotrg]
Today’s Lecture
- Similarity and Dissimilarity of vectors: all you need is a norm
- Example: the Minkowski Norm (Lp norm)
- Cosine Similarity: you need a dot product
- Example: Diagonal Mahalanobis Distance
- What is Similarity?
- Metric Learning
What is similarity?
What is similarity?
[Figure: "Peach", "Typical Ocean", and "Ocean at Sunset" plotted on axes of Redness vs. Roundness.]
Today’s Lecture
- Similarity and Dissimilarity of vectors: all you need is a norm
- Example: the Minkowski Norm (Lp norm)
- Cosine Similarity: you need a dot product
- Example: Diagonal Mahalanobis Distance
- What is Similarity?
- Metric Learning
Metric Learning
The goal: learn a function $f(x, y)$ such that, if the user says $y_1$ is more like $x$ and $y_2$ is less like $x$, then $f(x, y_1) < f(x, y_2)$.
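As a concrete (invented) illustration of this goal, the sketch below counts how often a candidate distance f violates a set of user judgments; the triplet format (x, y1 more similar, y2 less similar) is my own assumption:

import numpy as np

def violations(f, triplets):
    # count judgments (x, y1, y2) where f fails to rank y1 closer to x than y2
    return sum(1 for x, y1, y2 in triplets if not f(x, y1) < f(x, y2))

triplets = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 5.0])),
    (np.array([1.0, 1.0]), np.array([1.0, 2.0]), np.array([9.0, 1.0])),
]
euclid = lambda x, y: np.linalg.norm(x - y)
print(violations(euclid, triplets))   # 0: Euclidean distance satisfies both judgments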
Mahalanobis Distance Learning
The goal is just to learn the parameters $\Sigma$ so that
$$(x-y)^T\Sigma^{-1}(x-y) = \frac{(x_1-y_1)^2}{\sigma_1^2} + \dots + \frac{(x_D-y_D)^2}{\sigma_D^2}$$
accurately describes the perceived distance between $x$ and $y$.
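One simple way to fit such parameters (a sketch under my own assumptions, not necessarily the lecture's method): the squared distance is linear in the weights $1/\sigma_d^2$, so if users supply perceived squared distances for sample pairs, the weights can be estimated by least squares. The data here are invented:

import numpy as np

# (x, y, perceived squared distance) triples, invented for illustration
pairs = [
    (np.array([0.0, 0.0]), np.array([1.0, 0.0]), 2.0),
    (np.array([0.0, 0.0]), np.array([0.0, 1.0]), 0.5),
    (np.array([1.0, 1.0]), np.array([2.0, 2.0]), 2.5),
]

# d^2(x, y) = sum_d w_d (x_d - y_d)^2 with w_d = 1 / sigma_d^2, linear in w
A = np.array([(x - y) ** 2 for x, y, _ in pairs])   # one row of squared differences per pair
b = np.array([d2 for _, _, d2 in pairs])
w, *_ = np.linalg.lstsq(A, b, rcond=None)

print(w)          # learned weights w_d = 1 / sigma_d^2
print(1.0 / w)    # equivalently, the learned variances sigma_d^2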
Sample problem
1. Suppose your experiments show that people completely ignore dimension $i$. What should be the learned parameter $\sigma_i^2$?
2. Suppose that dimension $j$ is more important than dimension $k$. Should you have $\sigma_j^2 < \sigma_k^2$, or $\sigma_j^2 > \sigma_k^2$?
3. Suppose that, instead of the normal Mahalanobis distance definition, you read a paper that does distance learning with
$$(x-y)^TW(x-y) = w_1(x_1-y_1)^2 + \dots + w_D(x_D-y_D)^2$$
What's the relationship between the parameters $w_j$ and $\sigma_j^2$?