Graph Spectral Image Smoothing
GBR 2007
Fan Zhang, Edwin R. Hancock
Computer Vision and Pattern Recognition Group, Department of Computer Science, University of York, UK
Overview
- Literature and motivation
- Method
  - Graph representation of an image
  - Diffusion on a graph
  - Numerical implementation
  - Relationship to other methods (e.g. anisotropic diffusion, low-pass filtering, normalised cut)
- Experiments
- Conclusion
Literature
Diffusion-based partial differential equations (PDEs) have become one of the main techniques for image smoothing and multi-scale image analysis. The basic idea is to evolve the image under a PDE, and most PDE methods can be written in the divergence formulation ∂I/∂t = div(D ∇I):
- If D = 1, this is isotropic linear diffusion, equivalent to Gaussian filtering.
- If D is a scalar function, it is inhomogeneous diffusion; the best-known example is Perona-Malik diffusion, where D decreases with the gradient magnitude.
- If D is a diffusion tensor, it is anisotropic diffusion, as first proposed by Weickert.
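For orientation, a minimal sketch of scalar (Perona-Malik style) diffusion is given below; the explicit update, the step size and the exponential edge-stopping function are standard textbook choices, not anything taken from these slides, and periodic boundaries via np.roll are used only to keep the sketch short.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    """Explicit scalar (Perona-Malik style) diffusion on a 2D grey-scale image."""
    I = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours (periodic boundaries via roll)
        dN = np.roll(I, -1, axis=0) - I
        dS = np.roll(I,  1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I,  1, axis=1) - I
        # per-direction conductivities g(s) = exp(-(s / kappa)^2), small across strong edges
        cN, cS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
        cE, cW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        # explicit step of the divergence of the conductivity-weighted gradient
        I += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I
```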
Motivation: Why Graph Diffusion?
Assumption behind continuous PDEs: an image is a continuous function on R², and discretisation is introduced only for numerical implementation.
Weaknesses:
- Noisy images may not be sufficiently smooth to give reliable derivatives, since the intensity function jumps frequently because of the noise.
- Fast, accurate and stable implementation is difficult to achieve, and remains a substantial topic in numerical mathematics.
Solution: diffusion on graphs.
- A digital image is discrete by nature, so it is more natural to represent it by a weighted graph and treat the image as a function on the graph.
- Diffusion on the graph then uses purely combinatorial operators, and no discretisation is needed.
Aim in this paper
Explore whether graph spectral methods can be used to solve the diffusion equation commencing from a discrete setting. Most former methods for multi-scale descriptions and image smoothing assume that the image is a continuous two-dimensional function and consider discretisation only for numerical implementation. We accept the discrete nature of images from the outset and use graphs to represent them. We then use heat diffusion on graphs, governed by the heat kernel, to generate a graph scale-space.
Steps
- Set up the diffusion process as a problem on a weighted graph, where anisotropic smoothing is modelled by an edge-weight matrix.
- Diffusion is heat flow on the associated graph structure: nodes are pixels, and grey-scale information diffuses along the edges over time.
- The solution is given by exponentiating the spectrum of the associated Laplacian matrix.
Graph Representation of Images
An image is represented by a weighted graph: the nodes V are the pixels, an edge is formed between two nodes v_i and v_j, and the weight of the edge (v_i, v_j) is denoted w(i, j). For example, an 8-connectivity graph representation links each pixel to its eight immediate neighbours.
Graph Edge Weight
Characterise each pixel by a window of neighbours rather than by a single pixel value alone. The similarity between nodes v_i and v_j is measured by the Gaussian-weighted Euclidean distance between their windows, and the edge weight w(i, j) is then computed as a Gaussian (exponentially decreasing) function of this distance. Edge weights play an important role in the algorithm, and this window-based measure is expected to be more robust to noise than the traditional intensity difference between two pixels.
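A rough sketch of this weighting follows; the window size and the two kernel widths (win, sigma_g, sigma_w) are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def window_weights(img, win=2, sigma_g=1.5, sigma_w=0.1):
    """Edge weight from the Gaussian-weighted Euclidean distance between pixel windows.

    Returns a function weight(r1, c1, r2, c2) giving w(i, j) for two pixel positions.
    Window size and sigmas are illustrative only.
    """
    pad = np.pad(img.astype(float), win, mode='reflect')
    k = 2 * win + 1
    # Gaussian mask that down-weights the outer part of each window
    y, x = np.mgrid[-win:win + 1, -win:win + 1]
    g = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_g ** 2))

    def patch(r, c):
        # (2*win+1) x (2*win+1) window around pixel (r, c) of the original image
        return pad[r:r + k, c:c + k]

    def weight(r1, c1, r2, c2):
        d2 = np.sum(g * (patch(r1, c1) - patch(r2, c2)) ** 2)  # weighted distance^2
        return np.exp(-d2 / (2 * sigma_w ** 2))                 # Gaussian edge weight

    return weight

# usage sketch: w = window_weights(img); w_ij = w(10, 10, 10, 11)
```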
Laplacian of a graph
A weighted graph G = (V, E, W) has node-set V, edge-set E, adjacency weight matrix W and diagonal degree matrix D, with D(i, i) = Σ_j W(i, j).
Laplacian: L = D − W.
Normalised Laplacian: L̂ = D^(−1/2) L D^(−1/2).
Throughout this work the normalised Laplacian is used.
Laplacian spectrum
Spectral decomposition of the normalised Laplacian: L̂ = Φ Λ Φ^T, where Λ = diag(λ_1, λ_2, ..., λ_n) is the diagonal matrix of ordered eigenvalues (0 = λ_1 ≤ λ_2 ≤ ... ≤ λ_n) and Φ = (φ_1 | φ_2 | ... | φ_n) is the matrix of corresponding eigenvectors. The second eigenvector, φ_2, is the Fiedler vector.
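A compact sketch of these definitions for a small graph is given below (dense NumPy arrays for clarity; a real image graph would be sparse, and the example adjacency matrix is arbitrary).

```python
import numpy as np

def laplacians(W):
    """Combinatorial and normalised Laplacians of a weighted adjacency matrix W."""
    d = W.sum(axis=1)                      # weighted degrees
    D = np.diag(d)
    L = D - W                              # combinatorial Laplacian L = D - W
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_hat = D_inv_sqrt @ L @ D_inv_sqrt    # normalised Laplacian
    return L, L_hat

# spectral decomposition L_hat = Phi diag(lam) Phi^T on a toy 4-node graph
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L, L_hat = laplacians(W)
lam, Phi = np.linalg.eigh(L_hat)           # eigenvalues ascending; Phi[:, 1] is the Fiedler vector
```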
Graph Heat Kernel
Heat equation on the graph: ∂H_t/∂t = −L̂ H_t.
The fundamental solution is the heat kernel H_t = exp(−t L̂) = Φ exp(−t Λ) Φ^T, an n × n symmetric matrix describing the flow of heat across all the edges of the graph.
- When t tends to 0, H_t ≈ I − t L̂: the kernel is governed by the normalised Laplacian and so depends on the local topology of the graph.
- When t is large, the dominant non-constant term is exp(−t λ_2) φ_2 φ_2^T: the kernel is governed by the Fiedler vector and so depends on the global structure of the graph.
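A direct way to compute this kernel for a small graph, either via SciPy's matrix exponential or via the spectrum (the two agree up to numerical error), is sketched below; this is only practical for small n.

```python
import numpy as np
from scipy.linalg import expm

def heat_kernel(L_hat, t):
    """Graph heat kernel H_t = exp(-t * L_hat) (dense; fine for small graphs)."""
    return expm(-t * L_hat)

def heat_kernel_spectral(L_hat, t):
    """Same kernel via the spectrum: H_t = Phi diag(exp(-t * lam)) Phi^T."""
    lam, Phi = np.linalg.eigh(L_hat)
    return (Phi * np.exp(-t * lam)) @ Phi.T
```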
Lazy random walk on graph
At each step the walk moves to an adjacent node with a probability determined by the edge weights, or remains static with the complementary probability; this defines the transition probability matrix T. Taking, for example, T = I − (t/N) L̂, after N steps of the walk T^N → exp(−t L̂) as N → ∞. The heat kernel therefore has a close relationship with lazy random walks: it is the limiting N-step transition matrix of such a walk.
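A quick numerical check of this limit on the toy graph from earlier (the graph, t and N are arbitrary):

```python
import numpy as np
from numpy.linalg import matrix_power
from scipy.linalg import expm

# small example graph and its normalised Laplacian
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = W.sum(axis=1)
L_hat = np.eye(4) - np.diag(d ** -0.5) @ W @ np.diag(d ** -0.5)

t, N = 1.0, 10000
lazy_limit = matrix_power(np.eye(4) - (t / N) * L_hat, N)    # N lazy steps of size t/N
print(np.allclose(lazy_limit, expm(-t * L_hat), atol=1e-3))  # -> True
```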
Continuous time random walk
Let p_t be the state probability vector at time t; the probability of visiting the ith node after time t is p_t(i). The state vector is the solution of the differential equation dp_t/dt = −L̂ p_t, and the solution is given by the heat kernel: p_t = H_t p_0 = exp(−t L̂) p_0. The heat kernel thus also describes a continuous-time random walk on the graph.
…we are going to use the random walk to model image smoothing via anisotropic diffusion.
Transition probability is small when there is strong evidence of edge structure, and large in uniform regions. The heat kernel is used to smooth the pixel values, which means the pixel values are related to the state probabilities of the random walk.
Anisotropic diffusion as heat flow on a graph
Vertices: pixel values. Edge weights: play the role of thermal conductivity. Diffusion equation for the pixel values: dI_t/dt = −L̂ I_t, with I_0 the original image.
We generate a family of coarser-resolution images from I_0 using heat flow on the graph G. At each vertex we inject an amount of heat energy equal to the intensity of the associated pixel; as time t progresses, this heat diffuses through the graph edges, with the edge weight playing the role of thermal conductivity. If two pixels belong to the same region, the associated edge weight is large, so heat flows easily between them. If two pixels belong to different regions, the edge weight is very small, and it is difficult for heat to flow from one region to another; in this way the influence of one region on another is minimised. This heat-evolution model is the same as the graph heat kernel, except that the initial heat residing at each vertex is determined by the pixel intensity; the diffusion is still controlled by the graph Laplacian. The pixel values are stored as a vector of stacked image columns, and the connectivity structure is encoded by the Laplacian.
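A small end-to-end sketch of this heat flow for a tiny image is shown below. It assumes pixel values in [0, 1], uses simple intensity-difference weights on a 4-connected grid instead of the window-based weights above, and relies on dense linear algebra, so it is only viable for very small images (the Krylov approach later removes that restriction).

```python
import numpy as np
from scipy.linalg import expm

def smooth_tiny_image(img, t=2.0, sigma=0.1):
    """Heat-kernel smoothing of a tiny grey-scale image on a 4-connected graph."""
    img = img.astype(float)
    H, Wd = img.shape
    n = H * Wd
    idx = lambda r, c: r * Wd + c            # stacked-pixel index
    W = np.zeros((n, n))
    for r in range(H):
        for c in range(Wd):
            for dr, dc in ((0, 1), (1, 0)):   # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < H and cc < Wd:
                    w = np.exp(-((img[r, c] - img[rr, cc]) / sigma) ** 2)
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = w
    d = W.sum(axis=1)
    L_hat = np.eye(n) - np.diag(d ** -0.5) @ W @ np.diag(d ** -0.5)
    I_t = expm(-t * L_hat) @ img.reshape(-1)  # I_t = exp(-t L_hat) I_0
    return I_t.reshape(H, Wd)
```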
Graph spectral image smoothing
Solution of the diffusion equation: I_t = exp(−t L̂) I_0 = Φ exp(−t Λ) Φ^T I_0, so the heat kernel weights the averaging of the grey-scale values.
Small-t behaviour: I_t ≈ (I − t L̂) I_0. The effect of the original pixel value decreases with time, and each smoothed value becomes a weighted average of the neighbouring values.
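Written out per pixel, the small-t expansion makes the weighted-average reading explicit (a short derivation, assuming the graph has no self-loops so that the diagonal of L̂ is 1):

```latex
\begin{aligned}
I_t &\approx (I - t\hat{L})\, I_0, \qquad \hat{L} = I - D^{-1/2} W D^{-1/2},\\
I_t(i) &\approx (1 - t)\, I_0(i) \;+\; t \sum_{j \sim i} \frac{W_{ij}}{\sqrt{d_i d_j}}\, I_0(j).
\end{aligned}
```

The original pixel value is down-weighted by a factor (1 − t), while the neighbours contribute through the normalised edge weights.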
Meaning
What does this mean?
- The heat kernel weights the initial pixel values and smooths the image.
- Pixel values are strongly influenced by neighbours when they are connected by edges of large weight (small difference in grey-scale value).
- Time plays the role of scale: the longer the process is run, the greater the smoothing.
Numerical Implementation
The Laplacian L is very large: for a 256 × 256 image its order is more than sixty thousand, so it is not tractable to compute the heat kernel by a full matrix exponential (or by the complete eigen-spectrum of the Laplacian).
Solution: the Krylov subspace projection technique.
Idea: approximate the action of the matrix exponential on a vector, exp(−t L̂) I_0, by an element of the Krylov space K_m = span{I_0, L̂ I_0, ..., L̂^(m−1) I_0}.
Approximation scheme: exp(−t L̂) I_0 ≈ β V_m exp(−t H_m) e_1, where
- e_1 is the first column of the identity matrix,
- V_m is an orthonormal basis of the Krylov space (and β = ||I_0||),
- H_m is the Hessenberg matrix resulting from the Lanczos process (tridiagonal here, since L̂ is symmetric).
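In practice one need not hand-roll the Lanczos recursion: SciPy's sparse expm_multiply evaluates the action of a matrix exponential on a vector without forming the dense exponential. The sketch below is an illustrative alternative of that kind, not the paper's implementation.

```python
from scipy.sparse.linalg import expm_multiply

def graph_smooth(L_hat_sparse, I0, t):
    """Evaluate I_t = exp(-t * L_hat) I_0 without forming the dense heat kernel.

    L_hat_sparse : sparse normalised Laplacian of the image graph (n x n)
    I0           : stacked pixel intensities (length n)
    """
    return expm_multiply(-t * L_hat_sparse, I0)

# usage sketch: build L_hat as a scipy.sparse matrix from the window weights,
# then I_t = graph_smooth(L_hat, img.reshape(-1), t=5.0)
```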
Relation to Anisotropic Diffusion
An image can be viewed as a 2D manifold M embedded in R³. Our graph diffusion can also be related to the continuous PDEs: if the edge weight is set from the Euclidean distance between pixels, the graph representation of the image is a mesh of the manifold M, the graph Laplacian converges to the Laplace-Beltrami operator, and our heat diffusion converges to the heat equation on the manifold. With the window-based weights used here, however, the method is highly non-linear and cannot be formulated as a continuous PDE.
Relation to Signal Processing
Our method can also be understood in signal-processing terms, as an extension of Fourier analysis to images defined on graphs: the image is decomposed into a linear combination of the Laplacian eigenvectors, and the heat kernel attenuates the terms associated with high eigenvalues. The graph heat kernel can therefore be regarded as a low-pass filter kernel.
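The same smoothing, written as the low-pass filter it describes (reusing the eigendecomposition Φ, λ from the earlier sketch; exp(−t λ) plays the role of the filter's frequency response):

```python
import numpy as np

def spectral_lowpass(Phi, lam, I0, t):
    """Graph-Fourier view of the smoothing: transform, attenuate, transform back."""
    coeffs = Phi.T @ I0            # graph Fourier coefficients of the image
    coeffs *= np.exp(-t * lam)     # attenuate terms with large eigenvalues ("high frequencies")
    return Phi @ coeffs            # back to the pixel domain
```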
Relation to Spectral Clustering
Large-t behaviour of the heat kernel is governed by the second eigenvector of the graph Laplacian [normalised cut (Shi and Malik)]. The algorithm projects the noisy image onto the space spanned by the first few eigenvectors [Laplacian eigenmap (Belkin and Niyogi)]. The second, or first few, eigenvectors of the Laplacian encode the segment structure of the image, so the long-time diffusion of the image is determined by them; if the image is diffused with a large t, its segmentation appears automatically.
Results
It takes 3-6 seconds to process a 256 × 256 image.
(a) Noisy Lenna (b) Zoomed portion (c) Our graph smoothing (d) Regularised Perona-Malik diffusion (RPM) (e) Nonlinear complex ramp-preserving diffusion (NCRD) (f) Coherence-enhancing diffusion (CED) (g) Total-variation denoising (TV) (h) Wavelet filtering (WAVELET)
Root-Mean-Square Error comparison
Conclusion
- Graph representation of images is a natural, discrete and effective basis for image processing.
- Diffusion on graphs can be used efficiently for image smoothing; the diffusion is determined by the spectrum of the graph.
- Graph smoothing can be solved readily using the Krylov subspace technique.
- Graph-spectral smoothing has close relationships with continuous anisotropic diffusion, low-pass filtering and spectral clustering.