2
Giansalvo EXIN Cirrincione unit #6
3
Radial basis functions: exact interpolation. Problem: given a mapping from x in R^d to t and a TS of N points, find a function h(x) such that h(x^n) = t^n, n = 1, …, N. The exact interpolation problem requires every input vector to be mapped exactly onto the corresponding target vector. This is achieved with N basis functions, one per data point, h(x) = Σ_n' w_n' φ(‖x − x^n'‖), usually with the Euclidean norm (a generalized linear discriminant function). In matrix form the weights satisfy Φ w = t, with (Φ)_{nn'} = φ(‖x^n − x^n'‖), t = (t^n) and w = (w_n).
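To make the construction concrete, here is a minimal Python sketch of exact interpolation with Gaussian kernels (the function names and the width value sigma = 0.067 are illustrative assumptions, not taken from the slides): it builds the N x N matrix Φ and solves Φ w = t.

```python
import numpy as np

def gaussian_kernel(r, sigma=0.067):
    # phi(r) = exp(-r^2 / (2 sigma^2)), a localized radial basis function
    return np.exp(-r**2 / (2.0 * sigma**2))

def exact_interpolation_weights(X, t, sigma=0.067):
    """Solve Phi w = t with one basis function centred on each data point."""
    # Pairwise Euclidean distances between the N training inputs
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = gaussian_kernel(dists, sigma)          # N x N design matrix
    return np.linalg.solve(Phi, t)               # exact fit: h(x^n) = t^n

def interpolant(x_new, X, w, sigma=0.067):
    """Evaluate h(x) = sum_n w_n phi(||x - x^n||) at new points."""
    dists = np.linalg.norm(x_new[:, None, :] - X[None, :, :], axis=-1)
    return gaussian_kernel(dists, sigma) @ w
```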
4
Radial basis functions: exact interpolation. Many properties of the interpolating function are relatively insensitive to the precise form of the non-linear kernel. Common choices include the Gaussian φ(r) = exp(−r²/2σ²), which is localized; the thin-plate spline function φ(r) = r² ln r; and the multi-quadric function, φ(r) = (r² + σ²)^β with β = 1/2. In every case the resulting h(x) is non-linear in the components of x.
5
Radial basis functions: exact interpolation example. 30 points with additive Gaussian noise (zero mean, σ = 0.05); Gaussian basis functions with width σ = 0.067 (roughly twice the spacing of the data points). The exactly interpolating curve is highly oscillatory.
6
Radial basis functions: exact interpolation extends directly to more than one output variable, with each output sharing the same basis functions (and hence the same matrix Φ) but having its own set of weights.
7
RBF networks provide a smooth interpolating function in which the number of basis functions is determined by the complexity of the mapping to be represented rather than by the size of the data set. The number M of basis functions is less than N. The centres of the basis functions are determined by the training process. Each basis function has its own width σ_j, which is adaptive. Bias parameters are included in the linear sum and compensate for the difference between the average value of the basis function activations over the TS and the corresponding average value of the targets.
8
RBF network mapping: y_k(x) = Σ_{j=1..M} w_kj φ_j(x) + w_k0, with Gaussian kernels φ_j(x) = exp(−‖x − μ_j‖²/2σ_j²). The biases w_k0 can be absorbed into the sum by adding an extra basis function φ_0 whose activation is fixed at 1, giving a network with d inputs, M basis functions and c outputs. With sufficiently many kernels, an RBF network is a universal approximator and also possesses the best-approximation property.
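A minimal sketch of this network mapping, assuming M < N spherical Gaussian kernels and a bias absorbed as φ_0 = 1; the array shapes and names are illustrative:

```python
import numpy as np

def rbf_forward(X, centres, widths, W):
    """y_k(x) = sum_j w_kj phi_j(x) + w_k0, with phi_0(x) = 1 absorbing the bias.

    X:        (N, d) inputs
    centres:  (M, d) kernel centres mu_j
    widths:   (M,)   kernel widths sigma_j
    W:        (c, M+1) second-layer weights, column 0 multiplying phi_0 = 1
    """
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)   # squared distances
    Phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))            # (N, M) activations
    Phi = np.hstack([np.ones((X.shape[0], 1)), Phi])            # prepend phi_0 = 1
    return Phi @ W.T                                            # (N, c) outputs
```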
9
RBF: there is a trade-off between using a smaller number of basis functions, each with many adjustable parameters, and a larger number of less flexible functions.
10
RBF homework. Suppose that all of the Gaussian basis functions in the network share a common covariance matrix Σ. Show that the mapping represented by such a network is equivalent to that of a network of spherical Gaussian basis functions with common variance σ² = 1, provided the input vector x is first transformed by an appropriate linear transformation. Find expressions relating the transformed input vector and kernel centres to the corresponding original vectors.
11
RBF training: the training decouples into two stages, one for the first-layer parameters (kernel centres and widths) and one for the second-layer weights. With the first-layer parameters held fixed, the network is a generalized linear discriminant, so the second-layer weights can be found by fast (linear) learning.
12
RBF training: with the basis functions fixed, minimizing a sum-of-squares error over the second-layer weights leads to the linear normal equations, Φ^T Φ W^T = Φ^T T, solved for example via the pseudo-inverse, W^T = Φ† T.
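A minimal sketch of this second training stage, assuming the kernel centres and widths are already fixed; the second-layer weights come from the pseudo-inverse solution of the normal equations (names are illustrative):

```python
import numpy as np

def design_matrix(X, centres, widths):
    # Gaussian activations plus a bias column phi_0 = 1
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))
    return np.hstack([np.ones((X.shape[0], 1)), Phi])

def train_second_layer(X, T, centres, widths):
    """Least-squares solution of the normal equations: W^T = pinv(Phi) @ T."""
    Phi = design_matrix(X, centres, widths)       # (N, M+1)
    W_T = np.linalg.pinv(Phi) @ T                 # (M+1, c)
    return W_T.T                                  # (c, M+1), row k holds weights of output k

# usage: Y = design_matrix(X_new, centres, widths) @ train_second_layer(X, T, centres, widths).T
```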
13
RBF training example (figure): M = 5 basis functions, centres a random subset of the TS, shown for widths σ = 0.4, σ = 0.08 and σ = 10.0.
14
Regularization theory (RBF). Add to the sum-of-squares error a regularization term of the form (ν/2) ∫ ‖P y‖² dx, where P is a differential operator. Setting the functional derivative of the total error w.r.t. y(x) to zero gives the Euler-Lagrange equations, Σ_n (y(x^n) − t^n) δ(x − x^n) + ν P̂ P y(x) = 0, where P̂ is the adjoint differential operator to P. The solution can be written down in terms of the Green's functions G(x, x') of the operator P̂P, which satisfy P̂P G(x, x') = δ(x − x'): y(x) = Σ_n w_n G(x, x^n). If P is translationally and rotationally invariant, G depends only on the distance between x and x' (radial), and the solution is an RBF expansion.
15
Regularization theory
16
Integrating the Euler-Lagrange equation over a small region around each x^n, and using the solution in terms of the Green's functions, gives a set of linear equations for the weights, (Φ + νI) w = t with Φ_{nn'} = G(x^n, x^n'). The Green's function is a Gaussian with width parameter σ if the operator P is chosen as a particular (infinite) series of differential operators. ν = 0 implies exact RBF interpolation.
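A hedged sketch of the regularized solution, assuming the weights satisfy (G + νI) w = t with a Gaussian Green's function; the values of sigma and nu are illustrative:

```python
import numpy as np

def regularized_rbf_weights(X, t, sigma=0.067, nu=40.0):
    """Weights of the regularized solution y(x) = sum_n w_n G(x, x^n),
    assuming the linear system (G + nu*I) w = t with a Gaussian Green's function."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma**2))             # N x N Green's-function matrix
    w = np.linalg.solve(G + nu * np.eye(len(X)), t)
    return w                                       # nu = 0 recovers exact interpolation
```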
17
Regularization theory example (figure): basis functions centred on each data point, σ = 0.067, regularization coefficient ν = 40. ν = 0 implies exact RBF interpolation.
18
homework
19
Consider the regularization functional whose functional derivative w.r.t. y(x) is given on the slide. By using successive integration by parts, and making use of the identities given on the slide, show that the operator P̂P takes the form stated there; it should be assumed that boundary terms arising from the integration by parts can be neglected. Now find the radial Green's function of this operator as follows. First introduce the multidimensional Fourier transform of G in the form given on the slide.
20
By using the last two formulae, together with the formula for the Fourier transform of the Dirac δ function given on the slide, where d is the dimensionality of x and s, show that the Fourier transform of the Green's function takes the form given on the slide. Now substitute this result into the inverse Fourier transform of G and show that the Green's function is the Gaussian given on the slide.
21
Regularization theory. Regularization can also be applied to general RBF networks, and regularization terms can be considered for which the kernels are not necessarily the Green's functions. For example, a regularizer based on the second derivatives of the network outputs (given on the slide) penalizes mappings which have large curvature. This regularizer leads to second-layer weights which are found by the solution of a modified set of linear equations (given on the slide).
22
Relation to kernel regression: a technique for estimating regression functions from noisy data, based on the methods of kernel density estimation. Consider a mapping from x to t and a corresponding TS; a complete description of the statistical properties of the generator of the data is given by the probability density p(x, t) in the joint input-target space. This density can be modelled with a Parzen Gaussian kernel estimator, p̃(x, t) = (1/N) Σ_n (2πh²)^{-(d+c)/2} exp(−(‖x − x^n‖² + ‖t − t^n‖²)/2h²).
23
Relation to kernel regression: the optimal mapping is given by forming the regression, or conditional average ⟨t|x⟩, of the target data, conditioned on the input variables: y(x) = ⟨t|x⟩ = ∫ t p(t|x) dt = ∫ t p(x, t) dt / ∫ p(x, t) dt.
24
Relation to kernel regression: substituting the Parzen kernel estimator for p(x, t) into this conditional average gives the Nadaraya-Watson estimator, y(x) = Σ_n t^n exp(−‖x − x^n‖²/2h²) / Σ_n' exp(−‖x − x^n'‖²/2h²).
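A minimal sketch of the Nadaraya-Watson estimator with Gaussian kernels; the bandwidth h is an illustrative assumption:

```python
import numpy as np

def nadaraya_watson(x_new, X, T, h=0.1):
    """y(x) = sum_n t^n K(x, x^n) / sum_n' K(x, x^n'), Gaussian kernel of width h.

    x_new: (Q, d) query points, X: (N, d) training inputs, T: (N, c) training targets.
    """
    d2 = ((x_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # (Q, N) squared distances
    K = np.exp(-d2 / (2.0 * h**2))                            # kernel responses
    return (K @ T) / K.sum(axis=1, keepdims=True)             # normalized Gaussian weights
```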
25
Relation to kernel regression: the Nadaraya-Watson estimator has the form of an RBF network with normalized Gaussian basis functions, in which the second-layer weights are given by the target values t^n. Extension: replace the kernel density estimator with an adaptive mixture model.
26
RBF's for classification: model the class distributions by local kernel functions. (Figure: comparison with the decision boundary formed by a multilayer perceptron.)
27
RBF's for classification: the outputs represent approximations to the posterior probabilities of class membership. Modelling the class-conditional densities with kernel functions and applying Bayes' theorem gives a network with the structure of an RBF, with the class priors entering through the hidden-to-output weights.
28
RBF's for classification: use a common pool of M basis functions, labelled by an index j, to represent all of the class-conditional densities, p(x|C_k) = Σ_{j=1..M} p(x|j) P(j|C_k). Applying Bayes' theorem, the network outputs take the form y_k(x) = Σ_j w_kj φ_j(x), where the φ_j(x) = P(j|x) are normalized basis functions and the weights are w_kj = P(C_k|j).
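A hedged sketch of this construction for a single input vector, assuming spherical Gaussian kernels for p(x|j) and given mixing coefficients and class priors; all names and parameter values are illustrative:

```python
import numpy as np

def class_posteriors(x, centres, sigma, P_j_given_Ck, P_Ck):
    """Posterior P(C_k|x) from a shared pool of Gaussian kernels.

    x:             (d,)   input vector
    centres:       (M, d) kernel centres
    sigma:         common kernel width (assumption: spherical Gaussians)
    P_j_given_Ck:  (K, M) mixing coefficients P(j|C_k) per class
    P_Ck:          (K,)   class priors
    """
    d2 = ((x[None, :] - centres) ** 2).sum(-1)
    p_x_given_j = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2) ** (len(x) / 2)
    # class-conditional densities p(x|C_k) = sum_j p(x|j) P(j|C_k)
    p_x_given_Ck = P_j_given_Ck @ p_x_given_j
    joint = p_x_given_Ck * P_Ck                    # p(x, C_k)
    return joint / joint.sum()                     # Bayes: P(C_k|x)
```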
29
RBF’s for classification The activations of the basis functions can be interpreted as the posterior probabilities of the presence of corresponding features in the input space, and the weights can similarly be interpreted as the posterior probabilities of class membership, given the presence of the features. The outputs represent the posterior probabilities of class membership.
30
Comparison with the multilayer perceptron. The hidden unit activation in an MLP is constant on parallel (d−1)-dimensional hyperplanes; the hidden unit (RB) activation in an RBF is constant on concentric (d−1)-dimensional hyperspheres (more generally, hyperellipsoids). Homework: in an MLP a hidden unit has constant activation for input vectors which lie on a hyperplanar surface in input space given by w^T x + w_0 = const, while for a spherical RBF a hidden unit has constant activation on a hyperspherical surface defined by ‖x − μ‖² = const. Show that, for suitable choices of the parameters, these surfaces coincide if the input vectors are normalized to unit length. Illustrate this equivalence geometrically for vectors in a three-dimensional input space.
31
Comparison with the multilayer perceptron. The hidden unit activation in an MLP is constant on parallel (d−1)-dimensional hyperplanes; the hidden unit (RB) activation in an RBF is constant on concentric (d−1)-dimensional hyperspheres (more generally, hyperellipsoids). The MLP forms a distributed representation in the space of activation values for the hidden units (problems: local minima, flat regions). The RBF forms a representation in the space of activation values for the hidden units which is local w.r.t. the input space. All of the parameters in an MLP are usually determined at the same time as part of a single global supervised training strategy; an RBF is trained in two steps (kernels determined first by unsupervised methods, weights then found by fast linear supervised methods).
32
Basis function optimization: the first training stage ignores any target information. The basis function parameters should be chosen to form a representation of the pdf of the input data, with the centres μ_j acting as prototypes of the inputs. Problem: if the basis function centres are used to fill out a compact d-dimensional region of the input space, then the number of kernel centres grows exponentially with d. Problem: input variables which have significant variance but play little role in determining the appropriate output variables (irrelevant inputs).
33
Basis function optimization. Problem: input variables which have significant variance but play little role in determining the appropriate output variables (irrelevant inputs). Example: y independent of x_2.
34
Basis function optimization. Problem (continued): in the example where y is independent of x_2, kernels that follow the input density spend resources on the irrelevant direction, whereas an MLP can learn to ignore irrelevant inputs and obtain accurate results with a small number of hidden units.
35
Basis function optimization: the optimal choice of basis function parameters for input density estimation need not be optimal for representing the mapping to the output variables. (Figure labels: input, input pdf, target pdf.)
36
Basis function optimization using subsets of data points: very fast, but can be significantly sub-optimal.
Basis function centres: set them equal to a random subset of the TS data (or use these as initial conditions); alternatively, start with all data points as kernel centres and then selectively remove centres in such a way as to cause minimum disruption to the system performance.
Basis function widths: all equal and set to some multiple of the average distance between the kernel centres (so that the kernels overlap and give a smooth representation); alternatively, determined from the average distance of each kernel to its L nearest neighbours, where L is typically small.
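A minimal sketch of two of these heuristics (random-subset centres, and widths from the average distance to the L nearest neighbouring centres); the multiplier and L are illustrative assumptions:

```python
import numpy as np

def random_subset_centres(X, M, rng=np.random.default_rng(0)):
    """Pick M kernel centres as a random subset of the training inputs."""
    idx = rng.choice(len(X), size=M, replace=False)
    return X[idx]

def widths_from_neighbours(centres, L=2, multiplier=1.0):
    """Set each sigma_j from the average distance to its L nearest neighbouring centres."""
    d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
    d_sorted = np.sort(d, axis=1)[:, 1:L + 1]     # skip the zero self-distance
    return multiplier * d_sorted.mean(axis=1)
```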
37
Basis function optimization: the K-means (basic ISODATA) clustering algorithm, with K decided in advance. Suppose there are N data points x^n in total; the algorithm tries to find a set of K representative vectors μ_j, j = 1, …, K. It seeks to partition the data points into K disjoint subsets S_j, containing N_j data points each, in such a way as to minimize the sum-of-squares clustering function J = Σ_{j=1..K} Σ_{n∈S_j} ‖x^n − μ_j‖², where μ_j is the mean of the data points in set S_j, given by μ_j = (1/N_j) Σ_{n∈S_j} x^n.
38
Basis function optimization: K-means (basic ISODATA) clustering, batch version (Lloyd, 1982). It begins by assigning the points at random to K sets and then computing the mean vectors of the points in each set. Next, each point is re-assigned to a new set according to which is the nearest mean vector, and the means of the sets are then recomputed. This procedure is repeated until there is no further change in the grouping of the data points. At each such iteration the value of J does not increase. A sketch of this procedure is given below.
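A minimal NumPy sketch of the batch procedure just described; the initialization from a random subset of the data and the iteration cap are illustrative choices:

```python
import numpy as np

def kmeans_batch(X, K, rng=np.random.default_rng(0), max_iter=100):
    """Batch (Lloyd) K-means: alternate nearest-mean assignment and mean update."""
    means = X[rng.choice(len(X), size=K, replace=False)].astype(float)  # initial means
    assign = np.full(len(X), -1)
    for _ in range(max_iter):
        d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        new_assign = d2.argmin(axis=1)                        # nearest mean vector
        if np.array_equal(new_assign, assign):                # no change in grouping: stop
            break
        assign = new_assign
        for j in range(K):                                    # recompute the means
            if np.any(assign == j):
                means[j] = X[assign == j].mean(axis=0)
    return means, assign
```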
39
Basis function optimization: K-means (basic ISODATA) clustering, batch version (Lloyd, 1982). (Figure: example with K = 2.)
40
Basis function optimization: K-means clustering, sequential version (MacQueen, 1967). The initial centres are randomly chosen from the data points, and as each data point x^n is presented, the nearest μ_j is updated using Δμ_j = η (x^n − μ_j), where η is the learning rate. This is the Robbins-Monro procedure for finding the root of the regression function given by the derivative of J w.r.t. μ_j (a sketch follows below). Once the centres of the kernels have been found, the covariance matrices of the kernels can be set to the covariances of the points assigned to the corresponding clusters.
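A minimal sketch of the sequential version; the decaying learning-rate schedule eta0/step is an illustrative assumption (a Robbins-Monro-style decreasing rate):

```python
import numpy as np

def kmeans_sequential(X, K, eta0=0.5, rng=np.random.default_rng(0), n_epochs=5):
    """Sequential (MacQueen) K-means: update the nearest centre after each point."""
    means = X[rng.choice(len(X), size=K, replace=False)].astype(float)
    step = 0
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            step += 1
            eta = eta0 / step                        # decaying learning rate (assumption)
            j = ((x - means) ** 2).sum(-1).argmin()  # nearest centre
            means[j] += eta * (x - means[j])         # delta mu_j = eta (x^n - mu_j)
    return means
```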
41
homework
42
Write a numerical implementation of the K-means clustering algorithm using both the batch and on-line versions. Illustrate the operation of the algorithm by generating data sets in two dimensions from a mixture of Gaussian distributions, and plotting the data points together with the trajectories of the estimated means during the course of the algorithm. Investigate how the results depend on the value of K in relation to the number of Gaussian distributions, and how they depend on the variances of the distributions in relation to their separation. Study the performance of the on-line version of the algorithm for different values of the learning rate parameter and compare the algorithm with the batch version.
43
Gaussian mixture models: model the input data pdf as a mixture whose components are the basis functions of the RBF, p(x) = Σ_{j=1..M} P(j) φ_j(x), and optimize its parameters (the mixing coefficients P(j) and the kernel parameters) by maximum likelihood. Once the mixture model has been optimized, the mixing coefficients P(j) can be discarded, and the basis functions are then used in the RBF, in which the second-layer weights are found by supervised training.
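A hedged sketch of maximum-likelihood fitting of a spherical Gaussian mixture by EM, yielding the mixing coefficients P(j), kernel centres and widths; the initialization and iteration count are illustrative assumptions:

```python
import numpy as np

def gmm_em_spherical(X, M, n_iter=50, rng=np.random.default_rng(0)):
    """EM for a mixture of M spherical Gaussians; returns P(j), centres and widths."""
    N, d = X.shape
    mu = X[rng.choice(N, size=M, replace=False)].astype(float)   # initial centres
    sigma2 = np.full(M, X.var())                                  # initial variances
    Pj = np.full(M, 1.0 / M)                                      # mixing coefficients
    for _ in range(n_iter):
        # E-step: responsibilities P(j|x^n)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)      # (N, M)
        dens = np.exp(-d2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (d / 2)
        resp = dens * Pj
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing coefficients, centres and variances
        Nj = resp.sum(axis=0)
        Pj = Nj / N
        mu = (resp.T @ X) / Nj[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        sigma2 = (resp * d2).sum(axis=0) / (d * Nj)
    return Pj, mu, np.sqrt(sigma2)
```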
44
K-means can be obtained as a particular limit of the EM optimization of a Gaussian mixture model: as the component widths σ tend to zero, the responsibilities P(j|x^n) tend to hard 0/1 assignments of each point to its nearest centre, and the EM updates reduce to the batch K-means procedure.
45
Basis function optimization (supervised training). The basis function parameters for regression can be found by treating the kernel centres and widths, along with the second-layer weights, as adaptive parameters to be determined by minimization of an error function. For a sum-of-squares error and spherical Gaussian basis functions, the required derivatives w.r.t. the centres and widths can be written in closed form; the result is a non-linear, computationally intensive optimization problem (a sketch of the gradients is given below).
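A hedged sketch of the error gradients for spherical Gaussian kernels under a sum-of-squares error, obtained by the chain rule; the bias term is omitted and all names are illustrative:

```python
import numpy as np

def rbf_gradients(X, T, centres, widths, W):
    """Gradients of the sum-of-squares error w.r.t. Gaussian centres and widths.

    W: (c, M) second-layer weights (bias omitted here for brevity).
    Returns dE/d_centres (M, d) and dE/d_widths (M,).
    """
    diff = X[:, None, :] - centres[None, :, :]                 # (N, M, d)
    d2 = (diff ** 2).sum(-1)                                   # (N, M)
    Phi = np.exp(-d2 / (2 * widths**2))                        # kernel activations
    err = Phi @ W.T - T                                        # (N, c) output errors
    # s_nj = sum_k (y_k - t_k) w_kj, the error signal reaching kernel j
    s = err @ W                                                # (N, M)
    g_centres = ((s * Phi)[:, :, None] * diff / widths[None, :, None]**2).sum(0)
    g_widths = ((s * Phi) * d2 / widths**3).sum(0)
    return g_centres, g_widths
```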
46
Basis function optimization (supervised training), RBF training: example with well-localized kernels (figure: kernel activations plotted over the input).
47
basis function optimization (supervised training) RBF training
48
Basis function optimization (supervised training), RBF training: because each input produces significant activation in only a small fraction of the kernels, training procedures can be sped up significantly by identifying the relevant kernels and thereby avoiding unnecessary computation.
49
Basis function optimization (supervised training), RBF training: proceed from coarse (unsupervised initialization of centres and widths) to fine (supervised tuning). Caveat: there is no guarantee that the basis functions will remain localized!