Pattern Recognition and Machine Learning Chapter 1: Introduction
Example: Handwritten Digit Recognition
Polynomial Curve Fitting
Sum-of-Squares Error Function
0th Order Polynomial
1st Order Polynomial
3rd Order Polynomial
9th Order Polynomial
Over-fitting. Root-Mean-Square (RMS) Error: E_RMS = sqrt(2 E(w*) / N)
Polynomial Coefficients
Data Set Size: 9th Order Polynomial
Data Set Size: 9th Order Polynomial
Regularization: penalize large coefficient values by adding a term (λ/2)||w||^2 to the error function
Regularization: fits for different values of the regularization coefficient λ
Regularization: E_RMS vs. ln λ
Polynomial Coefficients
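A minimal sketch of the chapter's running example: regularized polynomial curve fitting on noisy sinusoidal data. The sample size, polynomial order, noise level, and λ values below are illustrative choices, not taken from the slides.

```python
# Regularized (ridge) polynomial curve fitting on a sinusoidal toy problem.
import numpy as np

rng = np.random.default_rng(0)
N, M = 10, 9                      # training points, polynomial order (illustrative)
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=N)

def design_matrix(x, M):
    """Polynomial features phi_j(x) = x**j for j = 0..M."""
    return np.vander(x, M + 1, increasing=True)

def fit(x, t, M, lam=0.0):
    """Minimize the regularized sum-of-squares error (closed-form ridge solution)."""
    Phi = design_matrix(x, M)
    A = lam * np.eye(M + 1) + Phi.T @ Phi
    return np.linalg.solve(A, Phi.T @ t)

for lam in (0.0, np.exp(-18), 1.0):
    w = fit(x, t, M, lam)
    rms = np.sqrt(np.mean((design_matrix(x, M) @ w - t) ** 2))
    print(f"lambda={lam:.2e}  max|w|={np.abs(w).max():.1f}  train RMS={rms:.3f}")
```

With λ = 0 the 9th-order fit interpolates the training points and the coefficients blow up; increasing λ shrinks the coefficients and trades training error for smoothness.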
Probability Theory Apples and Oranges
Probability Theory Marginal Probability Conditional Probability Joint Probability
Probability Theory Sum Rule Product Rule
The Rules of Probability Sum Rule Product Rule
Bayes’ Theorem: posterior ∝ likelihood × prior
Probability Densities
Transformed Densities
Expectations Conditional Expectation (discrete) Approximate Expectation (discrete and continuous)
Variances and Covariances
The Gaussian Distribution
Gaussian Mean and Variance
The Multivariate Gaussian
Gaussian Parameter Estimation Likelihood function
Maximum (Log) Likelihood
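A small sketch of maximum-likelihood estimation for a univariate Gaussian: the sample mean and the (biased) ML variance, compared with the unbiased estimator. The data are synthetic and illustrative.

```python
# ML estimates of Gaussian mean and variance vs. the unbiased variance estimate.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=20)

mu_ml = x.mean()                          # mu_ML = (1/N) sum_n x_n
var_ml = np.mean((x - mu_ml) ** 2)        # sigma^2_ML, biased by a factor (N-1)/N
var_unbiased = x.var(ddof=1)              # N/(N-1) correction

print(mu_ml, var_ml, var_unbiased)
```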
Properties of μ_ML and σ²_ML
Curve Fitting Re-visited
Maximum Likelihood: determine w_ML by minimizing the sum-of-squares error, E(w).
Predictive Distribution
MAP: A Step towards Bayes. Determine w_MAP by minimizing the regularized sum-of-squares error.
Bayesian Curve Fitting
Bayesian Predictive Distribution
Model Selection Cross-Validation
Curse of Dimensionality
Curse of Dimensionality Polynomial curve fitting, M = 3 Gaussian Densities in higher dimensions
Decision Theory. Inference step: determine either p(x, t) or p(t|x). Decision step: for given x, determine the optimal t.
Minimum Misclassification Rate
Minimum Expected Loss. Example: classify medical images as ‘cancer’ or ‘normal’; the loss matrix is indexed by decision and truth.
Minimum Expected Loss Regions are chosen to minimize
Reject Option
Why Separate Inference and Decision? Minimizing risk (loss matrix may change over time) Reject option Unbalanced class priors Combining models
Decision Theory for Regression. Inference step: determine p(t|x). Decision step: for given x, make the optimal prediction, y(x), for t. Loss function: L(t, y(x)).
The Squared Loss Function
Generative vs Discriminative. Generative approach: model p(x|C_k) and p(C_k), then use Bayes’ theorem to obtain p(C_k|x). Discriminative approach: model p(C_k|x) directly.
Entropy. Important quantity in coding theory, statistical physics, and machine learning.
Entropy. Coding theory: x discrete with 8 possible states; how many bits to transmit the state of x? All states equally likely: 3 bits needed on average.
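A quick numerical check of the coding-theory example: a uniform distribution over 8 states has entropy 3 bits, while a non-uniform distribution (here the book's 8-state example) has lower entropy and admits shorter codes on average.

```python
# Entropy of discrete distributions in bits.
import numpy as np

p = np.full(8, 1 / 8)
print(-np.sum(p * np.log2(p)))   # 3.0 bits for 8 equally likely states

q = np.array([1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64])
print(-np.sum(q * np.log2(q)))   # 2.0 bits for this non-uniform distribution
```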
Entropy
Entropy. In how many ways can N identical objects be allocated to M bins? Entropy is maximized when all p(x_i) = 1/M.
Entropy
Differential Entropy. Put bins of width Δ along the real line. Differential entropy is maximized (for fixed variance σ²) when p(x) is Gaussian, in which case H[x] = (1/2){1 + ln(2πσ²)}.
Conditional Entropy
The Kullback-Leibler Divergence
Mutual Information
Pattern Recognition and Machine Learning Chapter 2: Probability distributions
Parametric Distributions. Basic building blocks: need to determine the parameters given the data. Representation: a point estimate or a full posterior distribution? Recall curve fitting.
Binary Variables (1) Coin flipping: heads=1, tails=0 Bernoulli Distribution
Binary Variables (2) N coin flips: Binomial Distribution
Binomial Distribution
Parameter Estimation (1) ML for Bernoulli Given:
Parameter Estimation (2). Example: a data set of tosses that all land heads gives μ_ML = 1. Prediction: all future tosses will land heads up. Overfitting to D.
Beta Distribution. Distribution over μ ∈ [0, 1].
Bayesian Bernoulli The Beta distribution provides the conjugate prior for the Bernoulli distribution.
Beta Distribution
Prior ∙ Likelihood = Posterior
Properties of the Posterior. As the size of the data set, N, increases, the posterior becomes more sharply peaked.
Prediction under the Posterior What is the probability that the next coin toss will land heads up?
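A sketch contrasting the ML estimate with the Bayesian posterior-predictive probability for the coin example. The prior hyperparameters and the data (three heads) are illustrative choices.

```python
# Beta prior + Bernoulli likelihood -> Beta posterior and posterior-predictive p(heads).
a0, b0 = 2.0, 2.0            # Beta(a0, b0) prior over mu (illustrative)
data = [1, 1, 1]             # three observed heads (illustrative)

m = sum(data)                # number of heads
l = len(data) - m            # number of tails
a_n, b_n = a0 + m, b0 + l    # Beta posterior parameters

mu_ml = m / len(data)        # ML estimate: 1.0 -> predicts heads forever
p_heads = a_n / (a_n + b_n)  # E[mu | D]: posterior-predictive probability, here 5/7
print(mu_ml, p_heads)
```

The posterior predictive moderates the extreme ML estimate, which is exactly the overfitting issue raised on the earlier slide.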
Multinomial Variables 1-of-K coding scheme:
ML Parameter Estimation. Given the data, maximize the likelihood; to ensure Σ_k μ_k = 1, use a Lagrange multiplier, λ.
The Multinomial Distribution
The Dirichlet Distribution Conjugate prior for the multinomial distribution.
Bayesian Multinomial (1)
Bayesian Multinomial (2)
The Gaussian Distribution
Central Limit Theorem The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows. Example: N uniform [0,1] random variables.
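A small simulation of the slide's example: the mean of N uniform [0,1] variables looks increasingly Gaussian as N grows. The sample sizes and the moment-based check are illustrative.

```python
# Central limit theorem: skewness and excess kurtosis of the mean of N uniforms
# shrink toward the Gaussian values (0, 0) as N grows.
import numpy as np

rng = np.random.default_rng(2)
for N in (1, 2, 10):
    means = rng.uniform(size=(100_000, N)).mean(axis=1)
    z = (means - means.mean()) / means.std()
    print(N, np.mean(z ** 3), np.mean(z ** 4) - 3)
```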
Geometry of the Multivariate Gaussian
Moments of the Multivariate Gaussian (1) thanks to anti-symmetry of z
Moments of the Multivariate Gaussian (2)
Partitioned Gaussian Distributions
Partitioned Conditionals and Marginals
Partitioned Conditionals and Marginals
Bayes’ Theorem for Gaussian Variables Given we have where
Maximum Likelihood for the Gaussian (1). Given i.i.d. data, the log likelihood function depends on the data only through the sufficient statistics Σ_n x_n and Σ_n x_n x_n^T.
Maximum Likelihood for the Gaussian (2) Set the derivative of the log likelihood function to zero, and solve to obtain Similarly
Maximum Likelihood for the Gaussian (3). Under the true distribution, the ML covariance estimate is biased by a factor (N−1)/N; hence define the corrected (unbiased) estimator.
Sequential Estimation. Contribution of the Nth data point, x_N: μ_ML^(N) = μ_ML^(N−1) + (1/N)(x_N − μ_ML^(N−1)), i.e. old estimate + correction weight × correction given x_N.
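A minimal sketch of this sequential update: the online estimate reproduces the batch sample mean without storing the data. Data are synthetic.

```python
# Sequential (online) ML estimation of a Gaussian mean:
# mu_N = mu_{N-1} + (1/N) * (x_N - mu_{N-1})
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=5.0, size=1000)

mu = 0.0
for n, x_n in enumerate(x, start=1):
    mu += (x_n - mu) / n          # correction weight 1/N times the error

print(mu, x.mean())               # identical up to floating-point error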
The Robbins-Monro Algorithm (1). Consider θ and z governed by p(z, θ) and define the regression function f(θ) = E[z|θ]. Seek θ* such that f(θ*) = 0.
The Robbins-Monro Algorithm (2). Assume we are given samples from p(z, θ), one at a time.
The Robbins-Monro Algorithm (3). Successive estimates of θ* are then given by θ^(N) = θ^(N−1) + a_{N−1} z(θ^(N−1)). Conditions on a_N for convergence: a_N → 0, Σ_N a_N = ∞, Σ_N a_N² < ∞.
Robbins-Monro for Maximum Likelihood (1). Regarding the expected derivative of the log likelihood as a regression function, finding its root is equivalent to finding the maximum likelihood solution θ_ML.
Robbins-Monro for Maximum Likelihood (2). Example: estimate the mean of a Gaussian. The distribution of z is Gaussian with mean μ − μ_ML. For the Robbins-Monro update equation, a_N = σ²/N.
Bayesian Inference for the Gaussian (1). Assume σ² is known. Given i.i.d. data, the likelihood function for μ has a Gaussian shape as a function of μ (but it is not a distribution over μ).
Bayesian Inference for the Gaussian (2). Combined with a Gaussian prior over μ, this gives the posterior. Completing the square over μ, we see that …
Bayesian Inference for the Gaussian (3) … where Note:
Bayesian Inference for the Gaussian (4) Example: for N = 0, 1, 2 and 10.
Bayesian Inference for the Gaussian (5). Sequential Estimation: the posterior obtained after observing N − 1 data points becomes the prior when we observe the Nth data point.
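A sketch of this sequential Bayesian update for the mean of a Gaussian with known variance: a Gaussian prior is updated one point at a time, the posterior after each point serving as the prior for the next. The prior parameters and data are illustrative.

```python
# Sequential Bayesian update of a Gaussian prior over the mean (known noise variance):
# precisions add, and the mean is a precision-weighted combination.
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 1.0                      # known noise variance (illustrative)
mu_n, s_n2 = 0.0, 10.0            # prior mean and variance (illustrative)
data = rng.normal(loc=0.8, scale=np.sqrt(sigma2), size=10)

for x in data:
    prec = 1.0 / s_n2 + 1.0 / sigma2
    mu_n = (mu_n / s_n2 + x / sigma2) / prec
    s_n2 = 1.0 / prec

print(mu_n, s_n2)                 # the posterior sharpens as N grows
```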
Bayesian Inference for the Gaussian (6). Now assume μ is known. The likelihood function for the precision λ = 1/σ² has a Gamma shape as a function of λ.
Bayesian Inference for the Gaussian (7) The Gamma distribution
Bayesian Inference for the Gaussian (8). Now we combine a Gamma prior with the likelihood function for λ to obtain a posterior which we recognize as another Gamma distribution, with updated parameters.
Bayesian Inference for the Gaussian (9). If both μ and λ are unknown, the joint likelihood function requires a prior with the same functional dependence on μ and λ.
Bayesian Inference for the Gaussian (10). The Gaussian-gamma distribution: quadratic in μ, linear in λ; Gamma distribution over λ, independent of μ.
Bayesian Inference for the Gaussian (11) The Gaussian-gamma distribution
Bayesian Inference for the Gaussian (12). Multivariate conjugate priors: μ unknown, Λ known: p(μ) Gaussian. Λ unknown, μ known: p(Λ) Wishart. Λ and μ unknown: p(μ, Λ) Gaussian-Wishart.
Student’s t-Distribution where Infinite mixture of Gaussians.
Student’s t-Distribution
Student’s t-Distribution Robustness to outliers: Gaussian vs t-distribution.
Student’s t-Distribution The D-variate case: where . Properties:
Periodic variables Examples: calendar time, direction, … We require
von Mises Distribution (1). This requirement is satisfied by the von Mises distribution, where I_0(m) is the 0th-order modified Bessel function of the first kind.
von Mises Distribution (4)
Maximum Likelihood for von Mises. Given a data set, the log likelihood function follows; maximizing with respect to θ_0 we directly obtain θ_0^ML. Similarly, maximizing with respect to m we get an equation that can be solved numerically for m_ML.
Mixtures of Gaussians (1) Old Faithful data set Single Gaussian Mixture of two Gaussians
Mixtures of Gaussians (2) Combine simple models into a complex model: K=3 Component Mixing coefficient
Mixtures of Gaussians (3)
Mixtures of Gaussians (4). Determining the parameters μ, Σ, and π by maximum likelihood: the log likelihood involves the log of a sum, so there is no closed-form maximum. Solution: use standard iterative numerical optimization methods or the expectation-maximization algorithm (Chapter 9).
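A compact EM sketch for a one-dimensional mixture of two Gaussians, illustrating the iterative alternative to a (non-existent) closed-form maximum. EM itself is covered in Chapter 9; the data and initialization here are illustrative.

```python
# EM for a 1-D mixture of two Gaussians (illustrative data and initialization).
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(3, 1.2, 200)])

K = 2
pi = np.full(K, 1 / K)                 # mixing coefficients
mu = np.array([-1.0, 1.0])             # component means
var = np.array([1.0, 1.0])             # component variances

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(100):
    # E step: responsibilities gamma(z_nk)
    resp = pi * gauss(x[:, None], mu, var)
    resp /= resp.sum(axis=1, keepdims=True)
    # M step: re-estimate parameters from responsibility-weighted data
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(x)

print(pi, mu, var)
```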
The Exponential Family (1), where η is the natural parameter and g(η) can be interpreted as a normalization coefficient.
The Exponential Family (2.1) The Bernoulli Distribution Comparing with the general form we see that and so Logistic sigmoid
The Exponential Family (2.2) The Bernoulli distribution can hence be written as where
The Exponential Family (3.1). The Multinomial Distribution. NOTE: the η_k parameters are not independent, since the corresponding μ_k must satisfy Σ_k μ_k = 1.
The Exponential Family (3.2). Expressing μ_K in terms of the remaining parameters removes the constraint; here the η_k parameters are independent. Note the softmax function.
The Exponential Family (3.3) The Multinomial distribution can then be written as where
The Exponential Family (4) The Gaussian Distribution where
ML for the Exponential Family (1). From the definition of g(η) we get −∇ ln g(η) = E[u(x)].
ML for the Exponential Family (2). Given a data set, the likelihood function yields the ML condition −∇ ln g(η_ML) = (1/N) Σ_n u(x_n); Σ_n u(x_n) is the sufficient statistic.
Conjugate Priors. For any member of the exponential family, there exists a conjugate prior; combining it with the likelihood function gives a posterior of the same form. The prior corresponds to ν pseudo-observations with value χ.
Noninformative Priors (1). With little or no information available a priori, we might choose a noninformative prior. λ discrete, K-nomial; λ ∈ [a, b] real and bounded; λ real and unbounded: improper! A constant prior may no longer be constant after a change of variable; consider p(λ) constant and λ = η².
Noninformative Priors (2). Translation-invariant priors. Consider a density of the form p(x|μ) = f(x − μ). For a corresponding prior over μ, we require the same probability mass in [A, B] as in [A − c, B − c] for any A and B. Thus p(μ) = p(μ − c) and p(μ) must be constant.
Noninformative Priors (3). Example: the mean of a Gaussian, μ; the conjugate prior is also a Gaussian. As the prior variance goes to infinity, this becomes constant over μ.
Noninformative Priors (4). Scale-invariant priors. Consider a density of the form p(x|σ) = (1/σ) f(x/σ) and make the change of variable to a scaled σ. For a corresponding prior over σ, the same requirement for any A and B gives p(σ) ∝ 1/σ, so this prior is improper too. Note that this corresponds to p(ln σ) being constant.
Noninformative Priors (5). Example: for the variance of a Gaussian, σ². If λ = 1/σ² and p(σ) ∝ 1/σ, then p(λ) ∝ 1/λ. We know that the conjugate distribution for λ is the Gamma distribution; a noninformative prior is obtained for a_0 = 0 and b_0 = 0.
Nonparametric Methods (1) Parametric distribution models are restricted to specific forms, which may not always be suitable; for example, consider modelling a multimodal distribution with a single, unimodal model. Nonparametric approaches make few assumptions about the overall shape of the distribution being modelled.
Nonparametric Methods (2). Histogram methods partition the data space into distinct bins with widths Δ_i and count the number of observations, n_i, in each bin. Often the same width is used for all bins, Δ_i = Δ. Δ acts as a smoothing parameter. In a D-dimensional space, using M bins in each dimension will require M^D bins!
Nonparametric Methods (3). Assume observations drawn from a density p(x) and consider a small region R containing x with probability mass P. The probability that K out of N observations lie inside R is Bin(K|N, P), and if N is large, K ≃ NP. If the volume of R, V, is sufficiently small, p(x) is approximately constant over R and P ≃ p(x)V, so p(x) ≃ K/(NV). This requires V small yet K > 0, and therefore N large.
Nonparametric Methods (4) Kernel Density Estimation: fix V, estimate K from the data. Let R be a hypercube centred on x and define the kernel function (Parzen window) It follows that and hence
Nonparametric Methods (5). To avoid discontinuities in p(x), use a smooth kernel, e.g. a Gaussian. Any kernel that is non-negative and integrates to one will work. h acts as a smoothing parameter.
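A minimal sketch of Gaussian-kernel density estimation (a smooth Parzen window): p(x) = (1/N) Σ_n N(x | x_n, h²). The data and the bandwidth h are illustrative.

```python
# Kernel density estimation with a Gaussian kernel.
import numpy as np

rng = np.random.default_rng(6)
data = np.concatenate([rng.normal(0, 1, 150), rng.normal(5, 0.5, 50)])

def kde(x, data, h):
    """Evaluate the Gaussian-kernel density estimate at the points x."""
    diffs = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(-4, 8, 5)
print(kde(grid, data, h=0.3))
```

Small h gives a spiky estimate, large h over-smooths, mirroring the role of Δ in the histogram method.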
Nonparametric Methods (6). Nearest-Neighbour Density Estimation: fix K, estimate V from the data. Consider a hypersphere centred on x and let it grow to a volume, V*, that includes K of the given N data points. Then p(x) ≃ K/(N V*). K acts as a smoothing parameter.
Nonparametric Methods (7). Nonparametric models (not histograms) require storing and computing with the entire data set. Parametric models, once fitted, are much more efficient in terms of storage and computation.
K-Nearest-Neighbours for Classification (1). Given a data set with N_k data points from class C_k and Σ_k N_k = N, draw a sphere around x containing K points, K_k of which come from class C_k. Then p(x|C_k) ≃ K_k/(N_k V) and correspondingly p(x) ≃ K/(NV). Since p(C_k) = N_k/N, Bayes’ theorem gives p(C_k|x) ≃ K_k/K.
K-Nearest-Neighbours for Classification (2)
K-Nearest-Neighbours for Classification (3). K acts as a smoothing parameter. As N → ∞, the error rate of the 1-nearest-neighbour classifier is never more than twice the optimal error rate (obtained from the true conditional class distributions).
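A minimal sketch of K-nearest-neighbour classification: classify a query point by a majority vote among its K closest training points. The two-class Gaussian data and K = 5 are illustrative.

```python
# K-nearest-neighbour classification by majority vote.
import numpy as np

rng = np.random.default_rng(7)
X0 = rng.normal([-1, -1], 0.8, size=(50, 2))   # class 0
X1 = rng.normal([+1, +1], 0.8, size=(50, 2))   # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def knn_predict(x_query, X, y, K=5):
    d = np.linalg.norm(X - x_query, axis=1)     # Euclidean distances
    nearest = np.argsort(d)[:K]                 # indices of the K closest points
    return np.bincount(y[nearest]).argmax()     # majority vote

print(knn_predict(np.array([0.2, 0.5]), X, y, K=5))
```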
Pattern Recognition and Machine Learning Chapter 3: Linear models for regression
Linear Basis Function Models (1) Example: Polynomial Curve Fitting
Linear Basis Function Models (2). Generally, y(x, w) = Σ_j w_j φ_j(x), where the φ_j(x) are known as basis functions. Typically φ_0(x) = 1, so that w_0 acts as a bias. In the simplest case, we use linear basis functions: φ_d(x) = x_d.
Linear Basis Function Models (3). Polynomial basis functions: these are global; a small change in x affects all basis functions.
Linear Basis Function Models (4). Gaussian basis functions: these are local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (width).
Linear Basis Function Models (5). Sigmoidal basis functions: these are also local; a small change in x only affects nearby basis functions. μ_j and s control location and scale (slope).
Maximum Likelihood and Least Squares (1). Assume observations arise from a deterministic function with added Gaussian noise, t = y(x, w) + ε with ε ~ N(0, β⁻¹), which is the same as saying p(t|x, w, β) = N(t | y(x, w), β⁻¹). Given observed inputs X = {x_1, …, x_N} and targets t = (t_1, …, t_N)^T, we obtain the likelihood function as a product over the data points.
Maximum Likelihood and Least Squares (2) Taking the logarithm, we get where is the sum-of-squares error.
Maximum Likelihood and Least Squares (3). Computing the gradient and setting it to zero yields the normal equations; solving for w gives w_ML = Φ† t, where Φ† = (Φ^T Φ)⁻¹ Φ^T is the Moore-Penrose pseudo-inverse.
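A minimal sketch of this least-squares solution using NumPy's pseudo-inverse; the Gaussian basis functions, their centres and width, and the data are illustrative choices.

```python
# Maximum-likelihood (least-squares) weights via the Moore-Penrose pseudo-inverse.
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0, 1, 25)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)

centres = np.linspace(0, 1, 9)
Phi = np.exp(-0.5 * ((x[:, None] - centres) / 0.1) ** 2)   # Gaussian basis functions
Phi = np.hstack([np.ones((x.size, 1)), Phi])               # bias column phi_0(x) = 1

w_ml = np.linalg.pinv(Phi) @ t      # w_ML = pinv(Phi) t
print(w_ml.shape, np.mean((Phi @ w_ml - t) ** 2))
```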
Geometry of Least Squares. Consider an N-dimensional space in which t is a vector and S is the M-dimensional subspace spanned by the basis-function vectors φ_j. w_ML minimizes the distance between t and its orthogonal projection onto S, i.e. y.
Sequential Learning. Data items considered one at a time (a.k.a. online learning); use stochastic (sequential) gradient descent: w ← w + η (t_n − w^T φ_n) φ_n. This is known as the least-mean-squares (LMS) algorithm. Issue: how to choose η?
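A short sketch of the LMS update applied to the same Gaussian-basis design as above; the learning rate η and the number of passes are illustrative choices.

```python
# Least-mean-squares (LMS) sequential update: w <- w + eta * (t_n - w.phi_n) * phi_n.
import numpy as np

rng = np.random.default_rng(9)
x = np.linspace(0, 1, 200)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)
centres = np.linspace(0, 1, 9)
Phi = np.hstack([np.ones((x.size, 1)),
                 np.exp(-0.5 * ((x[:, None] - centres) / 0.1) ** 2)])

w = np.zeros(Phi.shape[1])
eta = 0.1                                 # illustrative learning rate
for _ in range(50):                       # several passes over the data
    for phi_n, t_n in zip(Phi, t):
        w += eta * (t_n - w @ phi_n) * phi_n

print(np.mean((Phi @ w - t) ** 2))
```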
Regularized Least Squares (1). Consider the error function: data term + regularization term, where λ is called the regularization coefficient. With the sum-of-squares error function and a quadratic regularizer, the error is minimized by w = (λI + Φ^T Φ)⁻¹ Φ^T t.
Regularized Least Squares (2) With a more general regularizer, we have Lasso Quadratic
Regularized Least Squares (3) Lasso tends to generate sparser solutions than a quadratic regularizer.
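A sketch of the closed-form quadratic-regularizer (ridge) solution, showing how increasing λ shrinks the coefficients; the lasso, by contrast, has no closed form and needs an iterative solver, but tends to drive coefficients exactly to zero. The data and λ grid are illustrative.

```python
# Closed-form regularized least squares: w = (lambda I + Phi^T Phi)^{-1} Phi^T t.
import numpy as np

rng = np.random.default_rng(10)
x = np.linspace(0, 1, 10)
t = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
Phi = np.vander(x, 10, increasing=True)       # 9th-order polynomial basis

for lam in (0.0, 1e-6, 1.0):
    w = np.linalg.solve(lam * np.eye(Phi.shape[1]) + Phi.T @ Phi, Phi.T @ t)
    print(f"lambda={lam:.0e}  max|w|={np.abs(w).max():.2f}")
```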
Multiple Outputs (1) Analogously to the single output case we have: Given observed inputs, , and targets, , we obtain the log likelihood function
Multiple Outputs (2) Maximizing with respect to W, we obtain If we consider a single target variable, tk, we see that where , which is identical with the single output case.
The Bias-Variance Decomposition (1) Recall the expected squared loss, where The second term of E[L] corresponds to the noise inherent in the random variable t. What about the first term?
The Bias-Variance Decomposition (2) Suppose we were given multiple data sets, each of size N. Any particular data set, D, will give a particular function y(x;D). We then have
The Bias-Variance Decomposition (3) Taking the expectation over D yields
The Bias-Variance Decomposition (4) Thus we can write where
The Bias-Variance Decomposition (5) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
The Bias-Variance Decomposition (6) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
The Bias-Variance Decomposition (7) Example: 25 data sets from the sinusoidal, varying the degree of regularization, λ.
The Bias-Variance Trade-off. From these plots, we note that an over-regularized model (large λ) will have a high bias, while an under-regularized model (small λ) will have a high variance.
Bayesian Linear Regression (1) Define a conjugate prior over w Combining this with the likelihood function and using results for marginal and conditional Gaussian distributions, gives the posterior where
Bayesian Linear Regression (2) A common choice for the prior is for which Next we consider an example …
Bayesian Linear Regression (3) 0 data points observed Prior Data Space
Bayesian Linear Regression (4) 1 data point observed Likelihood Posterior Data Space
Bayesian Linear Regression (5) 2 data points observed Likelihood Posterior Data Space
Bayesian Linear Regression (6) 20 data points observed Likelihood Posterior Data Space
Predictive Distribution (1) Predict t for new values of x by integrating over w: where
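A sketch of the Bayesian linear-regression predictive distribution under a zero-mean isotropic Gaussian prior: m_N = β S_N Φ^T t, S_N⁻¹ = αI + β Φ^T Φ, and predictive variance σ²(x) = 1/β + φ(x)^T S_N φ(x). The values of α, β, the data, and the basis functions are illustrative.

```python
# Posterior and predictive distribution for Bayesian linear regression.
import numpy as np

rng = np.random.default_rng(11)
alpha, beta = 2.0, 25.0                        # prior precision, noise precision (illustrative)
x = rng.uniform(size=8)
t = np.sin(2 * np.pi * x) + rng.normal(scale=1 / np.sqrt(beta), size=x.size)

centres = np.linspace(0, 1, 9)
def phi(x):
    x = np.atleast_1d(x)
    return np.hstack([np.ones((x.size, 1)),
                      np.exp(-0.5 * ((x[:, None] - centres) / 0.1) ** 2)])

Phi = phi(x)
S_N = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ t

x_star = np.linspace(0, 1, 5)
P = phi(x_star)
mean = P @ m_N                                           # predictive mean
var = 1 / beta + np.einsum('ij,jk,ik->i', P, S_N, P)     # predictive variance
print(mean)
print(np.sqrt(var))      # the predictive std dev shrinks near the observed data
```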
Predictive Distribution (2) Example: Sinusoidal data, 9 Gaussian basis functions, 1 data point
Predictive Distribution (3) Example: Sinusoidal data, 9 Gaussian basis functions, 2 data points
Predictive Distribution (4) Example: Sinusoidal data, 9 Gaussian basis functions, 4 data points
Predictive Distribution (5) Example: Sinusoidal data, 9 Gaussian basis functions, 25 data points
Equivalent Kernel (1) The predictive mean can be written This is a weighted sum of the training data target values, tn. Equivalent kernel or smoother matrix.
Equivalent Kernel (2) Weight of tn depends on distance between x and xn; nearby xn carry more weight.
Equivalent Kernel (3) Non-local basis functions have local equivalent kernels: Polynomial Sigmoidal
Equivalent Kernel (4) The kernel as a covariance function: consider We can avoid the use of basis functions and define the kernel function directly, leading to Gaussian Processes (Chapter 6).
Equivalent Kernel (5). The equivalent kernel sums to one, Σ_n k(x, x_n) = 1, for all values of x; however, it may be negative for some values of x. Like all kernel functions, the equivalent kernel can be expressed as an inner product: k(x, z) = ψ(x)^T ψ(z).
Bayesian Model Comparison (1). How do we choose the ‘right’ model? Assume we want to compare models M_i, i = 1, …, L, using data D; this requires computing the posterior p(M_i|D) ∝ p(M_i) p(D|M_i), i.e. prior × model evidence (marginal likelihood). The Bayes factor is the ratio of evidences for two models.
Bayesian Model Comparison (2). Having computed p(M_i|D), we can compute the predictive (mixture) distribution. A simpler approximation, known as model selection, is to use the model with the highest evidence.
Bayesian Model Comparison (3) For a model with parameters w, we get the model evidence by marginalizing over w Note that
Bayesian Model Comparison (4). For a given model with a single parameter, w, consider the approximation where the posterior is assumed to be sharply peaked.
Bayesian Model Comparison (5). Taking logarithms, we obtain ln p(D) ≃ ln p(D|w_MAP) + ln(Δw_posterior/Δw_prior). With M parameters, all assumed to have the same ratio Δw_posterior/Δw_prior, the complexity penalty becomes M ln(Δw_posterior/Δw_prior), which is negative and linear in M.
Bayesian Model Comparison (6) Matching data and model complexity
The Evidence Approximation (1). The fully Bayesian predictive distribution is given by an integral over w, α, and β, but this integral is intractable. Approximate it by fixing α and β at the mode of p(α, β|t), which is assumed to be sharply peaked; a.k.a. empirical Bayes, type II maximum likelihood, generalized maximum likelihood, or the evidence approximation.
The Evidence Approximation (2). From Bayes’ theorem we have p(α, β|t) ∝ p(t|α, β) p(α, β), and if we assume p(α, β) to be flat we see that maximizing the posterior amounts to maximizing the evidence. General results for Gaussian integrals give the evidence function.
The Evidence Approximation (3) Example: sinusoidal data, M th degree polynomial,
Maximizing the Evidence Function (1). To maximize the evidence with respect to α and β, we define the eigenvector equation (β Φ^T Φ) u_i = λ_i u_i. Thus A = αI + β Φ^T Φ has eigenvalues λ_i + α.
Maximizing the Evidence Function (2). We can now differentiate with respect to α and β and set the results to zero, to get α = γ / (m_N^T m_N) and 1/β = (1/(N − γ)) Σ_n {t_n − m_N^T φ(x_n)}², where γ = Σ_i λ_i/(λ_i + α). N.B. γ depends on both α and β.
Effective Number of Parameters (1). w_1 is not well determined by the likelihood; w_2 is well determined by the likelihood. γ is the number of well-determined parameters. (Likelihood vs. prior.)
Effective Number of Parameters (2). Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.
Effective Number of Parameters (3). Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1. Test set error.
Effective Number of Parameters (4). Example: sinusoidal data, 9 Gaussian basis functions, β = 11.1.
Effective Number of Parameters (5). In the limit N ≫ M, γ = M, and we can consider using the easy-to-compute approximations α = M/(2 E_W(m_N)) and β = N/(2 E_D(m_N)).
Limitations of Fixed Basis Functions. M basis functions along each dimension of a D-dimensional input space require M^D basis functions: the curse of dimensionality. In later chapters, we shall see how we can get away with fewer basis functions by choosing them using the training data.
Pattern Recognition and Machine Learning Chapter 8: Graphical models
Bayesian Networks Directed Acyclic Graph (DAG)
Bayesian Networks General Factorization
Bayesian Curve Fitting (1) Polynomial
Bayesian Curve Fitting (2) Plate
Bayesian Curve Fitting (3) Input variables and explicit hyperparameters
Bayesian Curve Fitting —Learning Condition on data
Bayesian Curve Fitting —Prediction Predictive distribution: where
Generative Models Causal process for generating images
Discrete Variables (1) General joint distribution: K² − 1 parameters. Independent joint distribution: 2(K − 1) parameters.
Discrete Variables (2) General joint distribution over M variables: K^M − 1 parameters. M-node Markov chain: K − 1 + (M − 1)K(K − 1) parameters.
Discrete Variables: Bayesian Parameters (1)
Discrete Variables: Bayesian Parameters (2) Shared prior
Parameterized Conditional Distributions. If the parents are discrete, K-state variables, then in general the conditional distribution has O(K^M) parameters. A parameterized form (e.g. a logistic sigmoid acting on a linear combination of the parents) requires only M + 1 parameters.
Linear-Gaussian Models. Directed graph over vector-valued Gaussian nodes: each node is Gaussian, with a mean that is a linear function of its parents.
Conditional Independence a is independent of b given c Equivalently Notation
Conditional Independence: Example 1
Conditional Independence: Example 1
Conditional Independence: Example 2
Conditional Independence: Example 2
Conditional Independence: Example 3 Note: this is the opposite of Example 1, with c unobserved.
Conditional Independence: Example 3 Note: this is the opposite of Example 1, with c observed.
“Am I out of fuel?” B = Battery (0=flat, 1=fully charged), F = Fuel Tank (0=empty, 1=full), G = Fuel Gauge Reading (0=empty, 1=full)
“Am I out of fuel?” Probability of an empty tank increased by observing G = 0.
“Am I out of fuel?” Probability of an empty tank reduced by observing B = 0. This is referred to as “explaining away”.
D-separation. A, B, and C are non-intersecting subsets of nodes in a directed graph. A path from A to B is blocked if it contains a node such that either (a) the arrows on the path meet head-to-tail or tail-to-tail at the node and the node is in the set C, or (b) the arrows meet head-to-head at the node and neither the node nor any of its descendants is in the set C. If all paths from A to B are blocked, A is said to be d-separated from B by C. If A is d-separated from B by C, the joint distribution over all variables in the graph satisfies the corresponding conditional independence property.
D-separation: Example
D-separation: I.I.D. Data
Directed Graphs as Distribution Filters
The Markov Blanket Factors independent of xi cancel between numerator and denominator.
Cliques and Maximal Cliques
Joint Distribution: p(x) = (1/Z) ∏_C ψ_C(x_C), where ψ_C(x_C) is the potential over clique C and Z is the normalization coefficient; note: with M K-state variables, Z contains K^M terms. Energies and the Boltzmann distribution: ψ_C(x_C) = exp{−E(x_C)}.
Illustration: Image De-Noising (1) Original Image Noisy Image
Illustration: Image De-Noising (2)
Illustration: Image De-Noising (3) Noisy Image Restored Image (ICM)
Illustration: Image De-Noising (4) Restored Image (ICM) Restored Image (Graph cuts)
Converting Directed to Undirected Graphs (1)
Converting Directed to Undirected Graphs (2) Additional links
Directed vs. Undirected Graphs (1)
Directed vs. Undirected Graphs (2)
Inference in Graphical Models
Inference on a Chain
Inference on a Chain
Inference on a Chain
Inference on a Chain
Inference on a Chain. To compute local marginals: compute and store all forward messages, μ_α(x_n); compute and store all backward messages, μ_β(x_n); compute Z at any node x_m; compute p(x_n) ∝ μ_α(x_n) μ_β(x_n) for all variables required.
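A minimal sketch of this forward-backward message-passing scheme on a chain with pairwise potentials ψ(x_{n-1}, x_n); the chain length, number of states, and potentials are illustrative.

```python
# Exact marginals on a chain: forward/backward messages and normalization Z.
import numpy as np

K, N = 3, 5                                   # states per node, chain length
rng = np.random.default_rng(12)
psi = [rng.uniform(0.5, 1.5, size=(K, K)) for _ in range(N - 1)]   # pairwise potentials

# Forward pass: mu_alpha(x_n) = sum_{x_{n-1}} psi(x_{n-1}, x_n) mu_alpha(x_{n-1})
mu_alpha = [np.ones(K)]
for n in range(N - 1):
    mu_alpha.append(psi[n].T @ mu_alpha[-1])

# Backward pass: mu_beta(x_n) = sum_{x_{n+1}} psi(x_n, x_{n+1}) mu_beta(x_{n+1})
mu_beta = [np.ones(K) for _ in range(N)]
for n in range(N - 2, -1, -1):
    mu_beta[n] = psi[n] @ mu_beta[n + 1]

Z = float(mu_alpha[-1] @ mu_beta[-1])         # the same value at any node
marginals = [mu_alpha[n] * mu_beta[n] / Z for n in range(N)]
print(np.round(marginals, 3))                 # each row sums to 1
```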
Trees Undirected Tree Directed Tree Polytree
Factor Graphs
Factor Graphs from Directed Graphs
Factor Graphs from Undirected Graphs
The Sum-Product Algorithm (1) Objective: to obtain an efficient, exact inference algorithm for finding marginals; in situations where several marginals are required, to allow computations to be shared efficiently. Key idea: Distributive Law
The Sum-Product Algorithm (2)
The Sum-Product Algorithm (3)
The Sum-Product Algorithm (4)
The Sum-Product Algorithm (5)
The Sum-Product Algorithm (6)
The Sum-Product Algorithm (7) Initialization
The Sum-Product Algorithm (8) To compute local marginals: Pick an arbitrary node as root Compute and propagate messages from the leaf nodes to the root, storing received messages at every node. Compute and propagate messages from the root to the leaf nodes, storing received messages at every node. Compute the product of received messages at each node for which the marginal is required, and normalize if necessary.
Sum-Product: Example (1)
Sum-Product: Example (2)
Sum-Product: Example (3)
Sum-Product: Example (4)
The Max-Sum Algorithm (1). Objective: an efficient algorithm for finding the value x^max that maximizes p(x), and the value of p(x^max). In general, the maximum of the marginals ≠ the joint maximum.
The Max-Sum Algorithm (2) Maximizing over a chain (max-product)
The Max-Sum Algorithm (3). Generalizes to tree-structured factor graphs by maximizing as close to the leaf nodes as possible.
The Max-Sum Algorithm (4). Max-Product → Max-Sum: for numerical reasons, work with logarithms. Again, use the distributive law: max(a + b, a + c) = a + max(b, c).
The Max-Sum Algorithm (5) Initialization (leaf nodes) Recursion
The Max-Sum Algorithm (6) Termination (root node) Back-track, for all nodes i with l factor nodes to the root (l=0)
The Max-Sum Algorithm (7) Example: Markov chain
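A sketch of max-sum on a discrete Markov chain, i.e. the Viterbi-style forward pass with back-pointers followed by back-tracking to recover the jointly most probable configuration. The chain length, number of states, and transition probabilities are illustrative.

```python
# Max-sum (Viterbi) on a Markov chain: most probable joint configuration.
import numpy as np

K, N = 3, 6
rng = np.random.default_rng(13)
log_p1 = np.log(np.full(K, 1 / K))                         # initial-state log-probs
A = rng.uniform(size=(K, K)); A /= A.sum(axis=1, keepdims=True)
log_A = np.log(A)                                          # transition log-probs

omega = log_p1.copy()                 # max log-prob of the best path ending in each state
back = np.zeros((N, K), dtype=int)    # back-pointers
for n in range(1, N):
    scores = omega[:, None] + log_A   # scores[i, j]: best path to i, then step i -> j
    back[n] = scores.argmax(axis=0)
    omega = scores.max(axis=0)

# Termination at the root, then back-track to recover the configuration.
x = np.zeros(N, dtype=int)
x[-1] = omega.argmax()
for n in range(N - 1, 0, -1):
    x[n - 1] = back[n, x[n]]
print(x, omega.max())                 # most probable path and its log-probability
```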
The Junction Tree Algorithm Exact inference on general graphs. Works by turning the initial graph into a junction tree and then running a sum-product-like algorithm. Intractable on graphs with large cliques.
Loopy Belief Propagation. Sum-Product on general graphs. Initial unit messages are passed across all links, after which messages are passed around until convergence (not guaranteed!). Approximate but tractable for large graphs. Sometimes works well, sometimes not at all.