Lecture 16: Graphs and Matrices in Practice. Eigenvalue and Eigenvector. Shang-Hua Teng

Where Do Matrices Come From?

Computer Science Graphs: G = (V,E)

Internet Graph

View Internet Graph on Spheres

Graphs in Scientific Computing

Resource Allocation Graph

Road Map

Matrices: representation of graphs. Adjacency matrix:

Adjacency Matrix:

Matrix of Graphs. Adjacency Matrix: A(i, j) = 1 if edge (i, j) exists; else A(i, j) = 0
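A minimal sketch of this definition, assuming a small hypothetical 4-vertex undirected graph (the slide fixes no particular example):

```python
import numpy as np

# Hypothetical undirected graph on 4 vertices, given as an edge list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
n = 4

# Adjacency matrix: A[i, j] = 1 if edge (i, j) exists, else 0.
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # undirected graph: A is symmetric

print(A)
```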

Laplacian of Graphs
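The slide gives only the title; a common definition is L = D - A, where D is the diagonal matrix of vertex degrees. A sketch under that assumption, on a hypothetical 4-vertex graph:

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-vertex graph (not from the slide).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])

# Combinatorial Laplacian: L = D - A, with D = diag(degrees).
D = np.diag(A.sum(axis=1))
L = D - A

# Every row of L sums to 0, so the all-ones vector is an
# eigenvector of L with eigenvalue 0.
print(L)
```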

Matrix of Weighted Graphs. Weighted Matrix: A(i, j) = w(i, j) if edge (i, j) exists; else A(i, j) = ∞
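A sketch of this convention on a hypothetical 3-vertex weighted graph; `np.inf` marks a missing edge, and the 0 diagonal is the usual choice for shortest-path computations:

```python
import numpy as np

# Hypothetical weighted graph; inf = "no edge", per the slide's convention.
inf = np.inf
W = np.array([[0.0, 2.0, inf],
              [2.0, 0.0, 5.0],
              [inf, 5.0, 0.0]])

print(W[0, 1])            # weight of edge (0, 1): 2.0
print(np.isinf(W[0, 2]))  # True: no edge between 0 and 2
```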

Random walks How long does it take to get completely lost?

Random walks Transition Matrix

Markov Matrix: every entry is non-negative; every column sums to 1. A Markov matrix defines a Markov chain
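One way to obtain such a column-stochastic matrix is the transition matrix of the random walk on a graph: divide each column of the adjacency matrix by that vertex's degree. A sketch on the same hypothetical 4-vertex graph, including power iteration toward the stationary distribution (which for this walk is proportional to degree):

```python
import numpy as np

# Hypothetical 4-vertex graph; the walker moves to a uniformly
# random neighbor at each step.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=0)
P = A / deg              # column j divided by deg(j): columns sum to 1

# Power iteration: p_{t+1} = P p_t. The graph contains a triangle
# (not bipartite), so the walk converges to the stationary
# distribution deg / deg.sum().
p = np.array([1.0, 0.0, 0.0, 0.0])   # start at vertex 0
for _ in range(200):
    p = P @ p

print(p)   # approaches deg / deg.sum() = [0.25, 0.25, 0.375, 0.125]
```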

Other Matrices Projections Rotations Permutations Reflections

Term-Document Matrix: index each document (by human or by computer); f_ij holds counts, frequencies, weights, etc. Each document can be regarded as a point in m dimensions

Document-Term Matrix: index each document (by human or by computer); f_ij holds counts, frequencies, weights, etc. Each document can be regarded as a point in n dimensions
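A sketch of building such a matrix from a toy corpus (the documents below are hypothetical; the slide specifies no data), using raw occurrence counts as the f_ij:

```python
import numpy as np

# Toy corpus: three hypothetical one-line documents.
docs = ["graph matrix eigenvalue",
        "matrix matrix inverse",
        "random walk graph"]

# Term-document matrix F: rows = terms, columns = documents,
# F[i, j] = number of times term i occurs in document j.
terms = sorted({w for d in docs for w in d.split()})
F = np.array([[d.split().count(t) for d in docs] for t in terms])

# Each column is a document as a point in len(terms) dimensions;
# the transpose F.T is the corresponding document-term matrix.
print(terms)
print(F)
```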

Term Occurrence Matrix

[Term-occurrence table: documents c1–c5 and m1–m4; terms: human, interface, computer, user, system, response, time, EPS, survey, trees, graph, minors]

Matrix in Image Processing
