Probabilistic Relaxation Labelling by Fokker-Planck Diffusion on a Graph
Hongfang Wang and Edwin R. Hancock Department of Computer Science University of York
Outline
Introduction
Spectral graph theory
Probabilistic relaxation labelling
Diffusion processes
Probabilistic relaxation by diffusion
Experiments
Discussion
Overview The aim is to exploit the similarities between diffusion processes and relaxation labelling to develop a new iterative process for consistent labelling. A compositional graph structure is used to represent the compatibilities between object-label assignments. Evidence combination and label probability update are controlled by the Fokker-Planck equation. We evaluate our new development on the problem of data classification.
Probabilistic Relaxation Labelling
A classical object-labelling technique, first introduced by Rosenfeld, Hummel and Zucker in the early 1970s.
Relaxation Labelling It aims at assigning consistent and unambiguous labels to a given set of objects. Relies on contextual information provided by the topology of the object arrangement and sets of compatibility relations on object configurations. Involves evidence combination to update label probabilities and propagate label-consistency constraints. Requires an initial label probability assignment.
Relaxation Labelling Support functions for evidence combination:
Hummel & Zucker: Kittler & Hancock: Probability update formula:
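The support-function and update equations on this slide did not survive transcription. A standard formulation, in the style of Hummel & Zucker and the classic Rosenfeld-Hummel-Zucker scheme (the exact forms used in the talk may differ), is:

```latex
% Support for label \omega at object i, combining compatibilities
% R_{ij} with the neighbours' current label probabilities:
q_i(\omega) = \sum_{j} \sum_{\omega'} R_{ij}(\omega, \omega')\, p_j(\omega')

% Nonlinear probability update (Rosenfeld--Hummel--Zucker):
p_i^{(n+1)}(\omega) =
  \frac{p_i^{(n)}(\omega)\,\bigl(1 + q_i^{(n)}(\omega)\bigr)}
       {\sum_{\omega'} p_i^{(n)}(\omega')\,\bigl(1 + q_i^{(n)}(\omega')\bigr)}
```

The denominator renormalises so that each object's label probabilities again sum to one.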
Graph theoretical setting for relaxation labelling.
Formulate relaxation labelling as a random walk on a support graph.
Graph spectral relaxation labeling
Set up a support graph where nodes represent possible object-label assignments and edges represent label compatibility. Set up a random walk on the graph. The probability of visiting a node is the probability of the corresponding object-label assignment (the state-vector). Evolve the state-vector of the random walk with time to update probabilities and combine evidence.
Support graph Node-set: the Cartesian product of the object-set X and the label-set
Adjacency matrix encodes object proximity and label compatibility: where: W(xi, xj): the edge weight in the object graph; ωi, ωj: object labels; R(ωi, ωj): compatibility value between the two labels. Compatibilities are assigned using prior knowledge of the label domain. The proximity weight is set using an inter-point distance measure (usually Gaussian).
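The adjacency formula referred to above was lost in transcription. A plausible reconstruction, consistent with the definitions of W and R on this slide (the paper's exact form may differ, and the Gaussian scale σ is an assumption), is:

```latex
A\bigl((x_i,\omega_i),(x_j,\omega_j)\bigr)
  = W(x_i, x_j)\, R(\omega_i, \omega_j),
\qquad
W(x_i, x_j) = \exp\!\left(-\frac{\lVert x_i - x_j\rVert^2}{2\sigma^2}\right)
```

That is, the weight of an edge between two object-label assignments is the product of object proximity and label compatibility.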
Example
Fokker-Planck Diffusion
Model random walk on graph using Fokker-Planck equation.
Diffusion Processes
Diffusion processes are Markov random variables that are correlated and indexed by time.
The Markov property states that ``for a collection of random variables indexed by time t, given the current value of the variable, the future is independent of the variable's past''.
Examples of using the Markov property: image restoration [e.g., Geman & Geman 1984], texture [Zhu, Wu & Mumford 1998], and object tracking [Isard & Blake 1996; Hua & Wu 2004; Yang, Duraiswami & Davis 2005].
Here we use a diffusion process to model the random walk on the support graph: local evidence collection and propagation; … short-time behaviour [Zhu, Wu & Mumford 1998].
[Zhu, Wu & Mumford 1998]: FRAME: Filters, Random fields And Maximum Entropy — Towards a Unified Theory for Texture Modeling. IJCV 27(2), pp. 1-20.
[Isard & Blake 1996]: Contour tracking by stochastic propagation of conditional density. ECCV 1996.
[Hua & Wu 2004]: Multi-scale Visual Tracking by Sequential Belief Propagation. CVPR 2004.
[Yang, Duraiswami & Davis 2005]: Fast Multiple Object Tracking via a Hierarchical Particle Filter. ICCV 2005.
Probabilistic Relaxation Labelling by Diffusion
Diffusion: global propagation and local evidence combination.
We derive the diffusion operator from discrete random walks on a graph [Feller Vol. II, 1970].
State space of the process: represented by the support graph.
Justification for using diffusion processes on graphs: given sufficient data points, the graph is capable of representing the manifold underlying the given data [Belkin & Niyogi 2002; Singer 2006; Sochen 2001].
where F is the Fokker-Planck operator [Feller 1970].
[Feller 1970]: W. Feller, An Introduction to Probability Theory and Its Applications, Vol. II.
[Belkin & Niyogi 2002]: Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering. NIPS 2001.
[Singer 2006]: From graph to manifold Laplacian: The convergence rate. Applied and Computational Harmonic Analysis, Vol. 21, 2006.
[Sochen 2001]: Stochastic Processes in Vision: From Langevin to Beltrami. ICCV 2001.
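The Fokker-Planck equation itself was lost from this slide. In the graph setting it takes the standard form below (a hedged reconstruction, with p_t the vector of state probabilities over the support-graph nodes):

```latex
\frac{\partial p_t}{\partial t} = -F\, p_t,
\qquad\text{with solution}\qquad
p_t = e^{-tF}\, p_0
```

Evolving p_0 under this equation simultaneously combines local evidence and propagates it globally across the support graph.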
Spectral Graph Theory Solve the Fokker-Planck equation using the spectrum of the operator.
Spectral Graph Theory Adjacency matrix A:
Laplacian matrix L (weighted): Transition matrix P: where E is the graph's edge set, Wij is the weight between nodes vi and vj, and degi is the degree of node vi.
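The three matrix definitions on this slide were images that did not survive transcription. The standard weighted-graph forms matching the stated terms (a reconstruction; the talk's conventions may differ slightly) are:

```latex
A_{ij} =
  \begin{cases}
    W_{ij} & (v_i, v_j) \in E \\
    0      & \text{otherwise}
  \end{cases}
\qquad
L_{ij} =
  \begin{cases}
    \deg_i  & i = j \\
    -W_{ij} & (v_i, v_j) \in E \\
    0       & \text{otherwise}
  \end{cases}
\qquad
P = D^{-1} A
```

where D = diag(deg_1, …, deg_n), so each row of P sums to one.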
Graph-spectral solution of FP Eqn.
Transition matrix P and the Fokker-Planck operator F: Label probability update formula: where F has the eigen-decomposition: An iteration is carried out by setting the newly obtained label probabilities as the current initial label probabilities p0.
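As a concrete illustration, here is a minimal NumPy sketch of the update scheme described above, assuming F = I - P with P = D^-1 A and computing e^{-tF} through the eigen-decomposition of F. The operator choice, the renormalisation step, and all names are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def support_graph(W, R):
    """Adjacency of the support graph: Kronecker product of the
    object-proximity matrix W and the label-compatibility matrix R
    (a hypothetical construction for illustration)."""
    return np.kron(W, R)

def fokker_planck_update(A, p0, t=1.0):
    """Evolve label probabilities p0 for time t under F = I - D^{-1} A."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]             # transition matrix D^{-1} A
    F = np.eye(len(A)) - P           # assumed Fokker-Planck operator
    lam, Phi = np.linalg.eig(F)      # eigen-decomposition of F
    # e^{-tF} = Phi diag(e^{-t lam}) Phi^{-1}
    expF = (Phi * np.exp(-t * lam)) @ np.linalg.inv(Phi)
    p = np.clip(np.real(expF @ p0), 0.0, None)
    return p / p.sum()               # renormalise to a probability vector

# Two objects, two labels: object proximities and label compatibilities.
W = np.array([[0.1, 1.0], [1.0, 0.1]])
R = np.array([[1.0, 0.1], [0.1, 1.0]])
A = support_graph(W, R)

p0 = np.array([0.7, 0.1, 0.1, 0.1])  # non-uniform initial probabilities
p1 = fokker_planck_update(A, p0, t=2.0)
print(p1)
```

Iterating, as the slide describes, means feeding p1 back in as the new p0 for the next step.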
Experiments The development is applied to data classification tasks; Five synthetic and two real-world data-sets have been used; Small time steps are used to exploit the short-time behaviour of the diffusion process; Compatibility values between labels: Short-time behaviour; t is set to a range between 1 and 10.
Experiments Five synthetic data-sets:
I. Two rings (R2), two class labels
II. Ring-Gaussian (RG), three class labels
III. Three Gaussians (G3), a mixture of three Gaussians, three class labels
IV. Four Gaussians (G4_1), a mixture of points from four Gaussian distributions
V. Four Gaussians (G4_2), a mixture of points from four Gaussian distributions
(Figures omitted; shown top to bottom, left to right: R2, RG, G3, G4_1, G4_2.)
Experimental Results Results for the synthetic data-sets:
I. Three Gaussian data (G3)
II. Ring-Gaussian data (RG)
III. Four Gaussian data (G4_1)
Some data-sets from the UCI Repository of machine learning databases are also used: Iris and Wine.
Experimental Results Real-world data (from the UCI machine learning data repository): III. Results of the Iris data-set IV. Results of the Wine data-set
Discussion A new development of probabilistic relaxation labelling in a graph setting; The diffusion process is used on the graph to collect evidence locally and propagate it globally; The process can also be viewed from a kernel-methods perspective; Experiments on data classification tasks are successful; Can be applied to other tasks: Image segmentation; Speaker recognition; Other applications. Relating these concepts together may open up new possibilities.
Thank you very much!