Information Bottleneck
Presented by Boris Epshtein & Lena Gorelick
Advanced Topics in Computer and Human Vision, Spring 2004
Agenda
– Motivation
– Information Theory: basic definitions
– Rate Distortion Theory: the Blahut-Arimoto algorithm
– Information Bottleneck Principle
– IB algorithms: iIB, dIB, aIB
– Application
Motivation: the Clustering Problem
"Hard" clustering: a partitioning of the input data into several exhaustive and mutually exclusive clusters. Each cluster is represented by a centroid.
A "good" clustering should group similar data points together and dissimilar points apart. The quality of a partition is the average distortion between the data points and their corresponding representatives (cluster centroids).
"Soft" clustering: each data point is assigned to all clusters with some normalized probability. Goal: minimize the expected distortion between the data points and the cluster centroids.
Complexity-Precision Trade-off: too simple a model gives poor precision, and higher precision requires a more complex model; but too complex a model leads to overfitting.
A model that is too complex can lead to overfitting and poor generalization, and is hard to learn; a model that is too simple cannot capture the real structure of the data. Examples of approaches to this trade-off: SRM (Structural Risk Minimization), MDL (Minimum Description Length), and Rate Distortion Theory.
Entropy: the measure of uncertainty about a random variable X, H(X) = -Σ_x p(x) log p(x).
Entropy example: a fair coin has maximal entropy (1 bit); an unfair coin has lower entropy.
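A worked version of the coin example (the slide's original numbers are lost; the bias p = 0.9 below is an assumed illustration):

```latex
H_{\mathrm{fair}} = -\tfrac{1}{2}\log_2\tfrac{1}{2} - \tfrac{1}{2}\log_2\tfrac{1}{2} = 1\ \mathrm{bit}
H_{\mathrm{unfair}} = -0.9\log_2 0.9 - 0.1\log_2 0.1 \approx 0.469\ \mathrm{bits}
```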
Entropy illustration: for a binary variable, entropy is highest for the uniform distribution (p = 0.5) and lowest for a deterministic one (p = 0 or 1).
Conditional Entropy: the measure of uncertainty about the random variable X given the value of the variable Y, H(X|Y) = -Σ_{x,y} p(x,y) log p(x|y).
Mutual Information: the reduction in uncertainty of X due to the knowledge of Y, I(X;Y) = H(X) - H(X|Y). It is nonnegative, symmetric, and convex w.r.t. p(y|x) for a fixed p(x).
18
A distance between distributions –Nonnegative –Asymmetric Kullback Leibler Distance Definitions… Over the same alphabet
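A minimal numerical check of the asymmetry (the two distributions below are illustrative, not from the slides):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler distance D(p || q) between discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]
print(kl(p, q), kl(q, p))  # the two values differ: D(p||q) != D(q||p)
```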
Rate Distortion Theory: Introduction. Goal: obtain a compact clustering of the data with minimal expected distortion. The distortion measure is part of the problem setup, and the clustering and its quality depend on the choice of the distortion measure.
Obtain a compact clustering of the data with minimal expected distortion, given a fixed set of representatives (Cover & Thomas).
Rate Distortion Theory, intuition: at one extreme, zero distortion but no compactness (every point is its own cluster); at the other, high distortion but maximal compactness (a single cluster).
Rate Distortion Theory, cont.: the quality of a clustering is determined by two quantities: complexity, measured by the mutual information I(T;X) (a.k.a. the rate), and distortion, measured by the expected distortion E[d(X,T)].
The rate-distortion plane plots the rate I(T;X) against the expected distortion E[d(X,T)]; maximal compression and minimal distortion lie at opposite corners, and D marks the distortion constraint.
Rate Distortion Function: let D be an upper-bound constraint on the expected distortion. Given the distortion constraint E[d(X,T)] ≤ D, find the most compact model (with the smallest complexity I(T;X)). Higher values of D mean a more relaxed distortion constraint, so stronger compression levels are attainable.
Rate Distortion Function. Given: a set of points X with prior p(x), a set of representatives T, and a distortion measure d(x,t). Find: the most compact soft clustering p(t|x) of the points of X that satisfies the distortion constraint: R(D) = min_{p(t|x): E[d(X,T)] ≤ D} I(T;X).
The constrained problem is handled with a Lagrange multiplier β: minimize F[p(t|x)] = I(T;X) + β·E[d(X,T)], where I(T;X) is the complexity term and E[d(X,T)] is the distortion term.
Rate Distortion Curve: R(D) traces the optimal frontier in the rate-distortion plane, from maximal compression to minimal distortion.
Minimize F[p(t|x)] = I(T;X) + β·E[d(X,T)] subject to the normalization constraint Σ_t p(t|x) = 1 for every x. The minimum is attained when p(t|x) = p(t)·exp(-β·d(x,t)) / Z(x,β), where Z(x,β) = Σ_t p(t)·exp(-β·d(x,t)).
Solution analysis: the solution is implicit, because p(t) on the right-hand side is itself determined by the unknown p(t|x) through p(t) = Σ_x p(x)·p(t|x); only p(x) and d(x,t) are known.
For a fixed t: when x is similar to t, the distortion d(x,t) is small, so closer points are attached to t with higher probability.
For a fixed x: as β → 0, the exponent washes out the influence of the distortion and p(t|x) no longer depends on x; this plus maximal compression gives a single cluster. As β → ∞, most of the conditional probability goes to the t with the smallest distortion, i.e., hard clustering.
Varying β between these extremes yields intermediate soft clusterings of intermediate complexity.
Blahut-Arimoto Algorithm. Input: the prior p(x), the representatives T, the distortion d(x,t), and β. Randomly initialize p(t), then alternate the two update equations. Each step optimizes a convex function over a convex set, so the minimum reached is global.
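A minimal numpy sketch of these alternating updates; the squared-distance distortion, the toy data, and all sizes are illustrative assumptions, not from the slides:

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """Rate-distortion Blahut-Arimoto: alternate the p(t|x) and p(t) updates.

    p_x  : (n,) prior over data points x
    d    : (n, m) distortion matrix d(x, t) for a fixed set of representatives
    beta : Lagrange multiplier trading compression against distortion
    """
    m = d.shape[1]
    p_t = np.full(m, 1.0 / m)                      # uniform init of p(t)
    for _ in range(n_iter):
        # p(t|x) = p(t) exp(-beta d(x,t)) / Z(x, beta)
        p_t_given_x = p_t * np.exp(-beta * d)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
        # p(t) = sum_x p(x) p(t|x)
        p_t = p_x @ p_t_given_x
    return p_t_given_x, p_t

# Toy usage: five points on a line, two fixed representatives at 0 and 1
x = np.linspace(0.0, 1.0, 5)
t = np.array([0.0, 1.0])
d = (x[:, None] - t[None, :]) ** 2
q, p_t = blahut_arimoto(np.full(5, 0.2), d, beta=20.0)
print(np.round(q, 3))   # near-hard assignments at this large beta
```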
Advantages: obtains a compact clustering of the data with minimal expected distortion; the clustering is optimal for the given fixed set of representatives.
Drawbacks: the distortion measure is part of the problem setup, which is hard to obtain for some problems and is equivalent to determining the relevant features; the set of representatives is fixed; convergence is slow.
Rate Distortion Theory, additional insights: the complementary problem is to find the optimal representatives given the clustering; joint optimization of clustering and representatives does not have a unique solution (as in EM or K-means).
Information Bottleneck (Tishby, Pereira & Bialek, 1999) copes with the drawbacks of the rate distortion approach: compress the data while preserving the "important" (relevant) information. It is often easier to define what information is important than to define a distortion measure, so the distortion upper-bound constraint is replaced by a lower-bound constraint on the relevant information.
Information Bottleneck example. Given: documents (a word variable), topics, and their joint prior p(word, topic).
Obtain: a partitioning of the words into clusters. The relevant quantities are I(Word;Cluster) (compression), I(Cluster;Topic) (preserved relevant information), and I(Word;Topic) (the total relevant information available).
Extreme case 1: I(Word;Cluster) = 0 and I(Cluster;Topic) = 0, i.e., all words lumped together: very compact, but not informative.
Extreme case 2: both I(Word;Cluster) and I(Cluster;Topic) are maximal, i.e., every word its own cluster: very informative, but not compact. The goal is therefore to minimize I(Word;Cluster) while maximizing I(Cluster;Topic).
The information bottleneck: words are squeezed through a compact cluster variable (compactness) that still preserves information about topics (relevant information).
Relevance-Compression Curve: the plane of compression I(T;X) versus relevant information I(T;Y), spanning maximal compression to maximal relevant information; D marks the relevance constraint.
Relevance-Compression Function: let D be the minimal allowed value of the relevant information I(T;Y). Given the relevant-information constraint I(T;Y) ≥ D, find the most compact model (with the smallest I(T;X)). A smaller D means a more relaxed relevant-information constraint, so stronger compression levels are attainable.
With a Lagrange multiplier β, minimize the functional L[p(t|x)] = I(T;X) - β·I(T;Y), where I(T;X) is the compression term and I(T;Y) is the relevance term.
Minimize L[p(t|x)] = I(T;X) - β·I(T;Y) subject to the normalization constraint Σ_t p(t|x) = 1 for every x. The minimum is attained when p(t|x) = p(t)·exp(-β·D_KL[p(y|x) || p(y|t)]) / Z(x,β).
Solution analysis: the solution is implicit. Here p(x) and p(y|x) are known, while p(t|x), p(t), and p(y|t) must jointly satisfy the self-consistent equations.
The KL distance emerges as the effective distortion measure from the IB principle, and the optimization is also over the cluster representatives p(y|t). For a fixed t: when p(y|x) is similar to p(y|t), the KL term is small, so such points x are attached to t with higher probability.
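For reference, the full set of self-consistent equations of Tishby, Pereira & Bialek (1999), written in standard notation:

```latex
p(t|x) = \frac{p(t)}{Z(x,\beta)} \exp\!\big(-\beta\, D_{KL}\!\left[\,p(y|x)\,\|\,p(y|t)\,\right]\big)
p(t)   = \sum_x p(x)\, p(t|x)
p(y|t) = \frac{1}{p(t)} \sum_x p(y|x)\, p(t|x)\, p(x)
```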
For a fixed x: as β → 0, the exponent washes out the influence of the KL term and p(t|x) no longer depends on x; this plus maximal compression gives a single cluster. As β → ∞, most of the conditional probability goes to the t with the smallest KL distance (a hard mapping).
On the relevance-compression curve, β → 0 corresponds to maximal compression, β → ∞ to maximal relevant information, and hard mappings sit at the high-β end.
Iterative Optimization Algorithm (iIB) (Pereira, Tishby & Lee, 1993; Tishby, Pereira & Bialek, 2001). Input: the joint prior p(x,y) and β. Randomly initialize p(t|x).
iIB then cycles through the three self-consistent updates in turn: p(cluster|word), p(cluster), and p(topic|cluster).
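A compact numpy sketch of one way to implement these alternating updates (the array layout, the epsilon smoothing, and the optional init argument are implementation choices, not from the slides; the init hook is reused by the dIB sketch further below):

```python
import numpy as np

def iib(p_xy, n_clusters, beta, n_iter=100, init=None, eps=1e-12):
    """Iterative IB: alternate the p(t|x), p(t), p(y|t) self-consistent updates.

    p_xy: (n_x, n_y) joint distribution p(x, y).
    """
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / (p_x[:, None] + eps)
    if init is None:
        rng = np.random.default_rng(0)
        p_t_given_x = rng.dirichlet(np.ones(n_clusters), size=p_xy.shape[0])
    else:
        p_t_given_x = init.copy()
    for _ in range(n_iter):
        p_t = p_x @ p_t_given_x                                    # p(t)
        p_y_given_t = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_t /= (p_t[:, None] + eps)                        # p(y|t)
        # KL[p(y|x) || p(y|t)] for every (x, t) pair
        log_ratio = np.log(p_y_given_x[:, None, :] + eps) \
                  - np.log(p_y_given_t[None, :, :] + eps)
        kl = np.sum(p_y_given_x[:, None, :] * log_ratio, axis=2)
        # p(t|x) proportional to p(t) exp(-beta * KL), normalized over t
        p_t_given_x = p_t[None, :] * np.exp(-beta * kl)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x, p_y_given_t
```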
iIB simulation. Given: 300 instances of X with prior p(x), a binary relevant variable Y, and the joint prior p(x,y). Obtain: the optimal clustering (with minimal I(T;X)).
The X points and their priors are shown on a 2-D map.
Given x, the conditional p(y|x) is given by the color of the point on the map.
Single Cluster – Maximal Compression
Hard Clustering – Maximal Relevant Information
iIB is analogous to K-means or EM: it optimizes a non-convex functional over three convex sets, so the minimum reached is only local.
An illustration shows a "semantic change" in the clustering solution.
iIB advantages: defining a relevant variable is often easier and more intuitive than defining a distortion measure; the algorithm finds a local minimum.
iIB drawbacks: it finds only a local minimum (suboptimal solutions); the parameters (the number of clusters and β) must be specified; convergence is slow; a large data sample is required.
Deterministic Annealing-like algorithm (dIB) (Slonim, Friedman & Tishby, 2002): iteratively increase the parameter β, each time adapting the solution from the previous value of β to the new one. This tracks the changes in the solution as the system shifts its preference from compression to relevance, and tries to reconstruct the relevance-compression curve.
Start from the solution obtained at the previous value of β and duplicate each of its clusters.
Apply a small perturbation to the duplicated clusters.
Apply iIB using the duplicated cluster set as initialization.
If the two copies of a cluster converge to different solutions, keep the split; otherwise keep the old (unsplit) cluster.
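A schematic sketch of this outer annealing loop, reusing the iib routine sketched earlier; the perturbation size, the β schedule, and the divergence test are stand-in choices, not the authors' settings:

```python
import numpy as np

def dib(p_xy, betas, eps_perturb=1e-2, split_threshold=1e-2):
    """dIB-style loop: duplicate clusters, perturb, re-run iIB, keep real splits."""
    rng = np.random.default_rng(0)
    p_t_given_x = np.ones((p_xy.shape[0], 1))       # start from a single cluster
    for beta in betas:                               # slowly increasing schedule
        # duplicate every cluster and perturb the copies slightly
        dup = np.repeat(p_t_given_x, 2, axis=1) / 2.0
        dup += eps_perturb * rng.random(dup.shape)
        dup /= dup.sum(axis=1, keepdims=True)
        refined, _ = iib(p_xy, dup.shape[1], beta, init=dup)
        # keep a split only if the two copies actually diverged
        keep = []
        for j in range(0, refined.shape[1], 2):
            if np.abs(refined[:, j] - refined[:, j + 1]).max() > split_threshold:
                keep += [j, j + 1]
            else:
                keep.append(j)
        p_t_given_x = refined[:, keep]
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x
```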
An illustration shows which clusters split at which values of β.
dIB advantages: finds a local minimum; speeds up convergence by adapting the previous solution.
dIB drawbacks: several parameters must be specified and tuned: the perturbation size, the step in β (splits might be "skipped"), and the similarity threshold for splitting; the parameters may also need to vary during the process. It still finds only a local minimum (suboptimal solutions), and a large data sample is required.
Agglomerative Algorithm (aIB) (Slonim & Tishby, 1999): finds a hierarchical clustering tree in a greedy bottom-up fashion. It results in a different tree for each β; each tree is a range of clustering solutions at different resolutions (same β, different resolutions).
Fix β and start with every point in its own cluster (T = X).
For each pair of clusters, compute the merged cluster and the cost of the merge; then merge the pair that produces the smallest cost.
Continue merging until a single cluster is left, as sketched below.
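A greedy sketch of this bottom-up loop in the hard-clustering (large β) limit, where the merge cost becomes a weighted Jensen-Shannon divergence; the cost formula follows Slonim & Tishby (1999), while the array layout and helper names are mine:

```python
import numpy as np

def js_merge_cost(p_i, p_j, pyi, pyj, eps=1e-12):
    """Cost of merging clusters i, j: (p_i + p_j) * JS(p(y|i), p(y|j))."""
    pm = p_i + p_j
    w_i, w_j = p_i / pm, p_j / pm
    p_bar = w_i * pyi + w_j * pyj
    kl = lambda a, b: np.sum(a * (np.log(a + eps) - np.log(b + eps)))
    return pm * (w_i * kl(pyi, p_bar) + w_j * kl(pyj, p_bar))

def aib(p_xy):
    """Greedy bottom-up aIB: start with singletons, always merge cheapest pair."""
    p_t = p_xy.sum(axis=1).tolist()                       # cluster weights p(t)
    p_y_t = list(p_xy / p_xy.sum(axis=1, keepdims=True))  # p(y|t) per cluster
    merges = []
    while len(p_t) > 1:
        # find the cheapest pair to merge
        i, j = min(((a, b) for a in range(len(p_t)) for b in range(a + 1, len(p_t))),
                   key=lambda ab: js_merge_cost(p_t[ab[0]], p_t[ab[1]],
                                                p_y_t[ab[0]], p_y_t[ab[1]]))
        pm = p_t[i] + p_t[j]
        p_y_t[i] = (p_t[i] * p_y_t[i] + p_t[j] * p_y_t[j]) / pm
        p_t[i] = pm
        del p_t[j]; del p_y_t[j]
        merges.append((i, j))
    return merges   # the merge sequence defines the hierarchy
```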
aIB advantages: non-parametric; gives a full hierarchy of clusters for each β; simple.
aIB drawbacks: it is greedy, so it is not guaranteed to extract even locally optimal solutions along the tree; a large data sample is required.
Application: Unsupervised Clustering of Images (Shiri Gordon et al., 2003). Modeling assumption: for a fixed image, the colors and their spatial distribution are generated by a mixture of Gaussians in a 5-dimensional (color plus position) space.
Apply the EM procedure to estimate the parameters of each image's mixture-of-Gaussians model.
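A sketch of this per-image modeling step using scikit-learn; the Lab-color feature construction and the number of components are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def image_gmm(image_lab, n_components=5):
    """Fit a mixture of Gaussians to 5-dim (color, position) pixel features."""
    h, w, _ = image_lab.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # each pixel -> (L, a, b, x, y) feature vector
    feats = np.column_stack([image_lab.reshape(-1, 3), xs.ravel(), ys.ravel()])
    return GaussianMixture(n_components=n_components,
                           covariance_type='full').fit(feats)
```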
Assume a uniform prior over images, calculate the conditional p(y|image) from the fitted mixtures, and apply the aIB algorithm.
Example clustering results (figures from Shiri Gordon et al., 2003).
Summary
– Rate Distortion Theory: the Blahut-Arimoto algorithm
– Information Bottleneck Principle
– IB algorithms: iIB, dIB, aIB
– Application
Thank you
Blahut-Arimoto convergence (Csiszar & Tusnady, 1984): when does alternating minimization of a distance between two convex sets of distributions, A and B, converge to the global minimum? When A and B are convex, plus some requirements on the distance measure.
The Blahut-Arimoto iteration can be reformulated as such an alternating minimization of a distance between the sets A and B.
Information Bottleneck, cont'd. Assume the Markov relation T ↔ X ↔ Y: T is a compressed representation of X, and thus independent of Y given X. The information processing inequality then gives I(T;Y) ≤ I(X;Y).
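In symbols, the Markov assumption and the bound it implies:

```latex
p(t \mid x, y) = p(t \mid x) \quad (T \leftrightarrow X \leftrightarrow Y)
\quad\Rightarrow\quad I(T;Y) \le I(X;Y)
```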