Presentation transcript:

A “function” has: a source, a target, and a rule to go from the source to the target.

The source and target can be 2- or 3-dimensional. H can be a topographic map, for example: for each point on the map we assign a number (its height).

The other way around: an orbit describes the movement of a planet in 3-D as a function of time.

Let's consider functions from 2-D to 2-D. If we can write f(x, y) = (ax + by, cx + dy), then f is called linear.

Two important functions from 2-D to 2-D: the function that takes every point to (0, 0) is the zero function, and the function that doesn't change anything is the identity function.

In matrix notation: the zero function is the zero matrix, with all entries 0; the identity function is the identity matrix, with 1s on the diagonal and 0s elsewhere.

Every linear function from 2-D to 2-D can be written as a 2x2 matrix, and every 2x2 matrix represents a linear function from 2-D to 2-D. Another example: a rotation matrix. More examples: reflection, compression, stretching…

In a rotation by an angle φ, the vector's length remains the same.

In matrix notation, a rotation by an angle φ is written as [[cos φ, −sin φ], [sin φ, cos φ]].
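The rotation matrix above can be checked numerically; this short sketch rotates a vector by 90 degrees and confirms that its length is unchanged:

```python
import numpy as np

def rotation(phi):
    """2x2 rotation matrix for an angle phi, in radians."""
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

v = np.array([3.0, 4.0])           # a vector of length 5
w = rotation(np.pi / 2) @ v        # rotate by 90 degrees
print(w)                           # ≈ [-4, 3]
print(np.linalg.norm(w))           # still 5.0: rotation preserves length
```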

Matrix math: only square matrices can be inverted, and not even all of them. Does the zero matrix have an inverse? No: everything is mapped to (0, 0), so there is no way back. Does the identity matrix have an inverse? Yes: it is its own inverse.

A vector that is only scaled by a specific matrix is called an eigenvector of that matrix; the scaling factor is called an eigenvalue: Av = λv.

Anyway, one thing remains: the invertibility of a matrix depends on its eigenvalues. A matrix is invertible exactly when it has no zero eigenvalue (λ ≠ 0 for every eigenvalue). What is the physical meaning of the eigenvectors and eigenvalues? For every use of matrices there is a different meaning. We will see an example.
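The two points above — an eigenvector is only scaled, and invertibility fails exactly when some eigenvalue is zero — can be illustrated with a small sketch (the matrices here are made-up examples, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])          # scales x by 2 and y by 3

eigvals, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]
# an eigenvector is only scaled: A v equals lambda * v
assert np.allclose(A @ v, eigvals[0] * v)

# no zero eigenvalue, so A is invertible
assert np.all(eigvals != 0)
A_inv = np.linalg.inv(A)

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # second row is twice the first
# B collapses the plane onto a line, so one eigenvalue is 0
# and B has no inverse
print(np.linalg.eigvals(B))         # one of them is 0
```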

A major task of engineering: make the data easy on the eyes [1]. Biology example: cell signaling. Many signals and many observations mean a big matrix and a big mess: “The paradox for systems biology is that these large data sets by themselves often bring more confusion than understanding” [1]. We transform this matrix into something we can look at by choosing the best x and y axes. [1] Kevin A. Janes and Michael B. Yaffe, Data-driven modeling of signal-transduction networks, Nature Reviews Molecular Cell Biology 7 (November 2006).

The idea: arrange the rows and columns of the matrix in a way that reveals biological meaning. The example: measure the covariance (how two cell signals change “together”) to create a matrix. This matrix represents a linear function that works on a vector of cell signals; for its eigenvectors, the matrix just changes the vector's size (multiplies it by the eigenvalue).

The biggest eigenvalues of C correspond to the most informative collections of signals, the ones that behave “together”. Choose, for example, only the biggest two, and use their eigenvectors as the X and Y axes. How do we change the existing data vectors to the new axes? We project! By the way, this method is called Principal Component Analysis (PCA).
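The whole PCA recipe — build the covariance matrix, take the eigenvectors with the biggest eigenvalues, project onto them — fits in a few lines. The “cell signal” data below is synthetic, generated so that most variation lies along two hidden directions:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: 100 observations of 5 signals driven by 2 hidden factors
hidden = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = hidden @ mixing + 0.05 * rng.normal(size=(100, 5))

Xc = X - X.mean(axis=0)                  # center each signal
C = np.cov(Xc, rowvar=False)             # 5x5 covariance matrix

eigvals, eigvecs = np.linalg.eigh(C)     # eigh: C is symmetric
order = np.argsort(eigvals)[::-1]        # biggest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# keep the two most informative directions and project onto them
scores = Xc @ eigvecs[:, :2]             # the new (x, y) coordinates
print(scores.shape)                      # (100, 2)
print(eigvals[:2].sum() / eigvals.sum()) # fraction of variance kept
```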

Another use of matrices: advancing a system in time. An example follows.

Use of matrices: a propagator function that advances the system in time. What are the eigenvalues of the eigenvectors? The two eigenvectors can be thought of as two modes of movement in the space: one (at 45°) is motionless, the other “jumps” 180 degrees each step. And what if we build a new vector as a combination of the two eigenvectors?

Combining an eigenvector with a non-eigenvector: what will happen then?

Summary: when the matrix is a propagator, the eigenvectors with eigenvalue 1 are the stable states (alongside 0). When an eigenvalue is less than one, that mode decays to 0; when an eigenvalue is greater than one, that mode grows and grows… What if we want to check the system's state after many time steps?
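The slide's figure is not reproduced here, but a swap matrix is one matrix with exactly the eigenstructure described above: the 45° direction is motionless (λ = 1) and the perpendicular direction flips 180 degrees each step (λ = −1). A minimal sketch, assuming that example:

```python
import numpy as np

M = np.array([[0.0, 1.0],
              [1.0, 0.0]])               # swaps the two coordinates

stable = np.array([1.0, 1.0])            # the 45-degree eigenvector
flipper = np.array([1.0, -1.0])          # the "jumping" eigenvector
assert np.allclose(M @ stable, stable)   # eigenvalue 1: motionless
assert np.allclose(M @ flipper, -flipper)  # eigenvalue -1: 180-degree flip

# a combination of the two oscillates around its stable part
v = stable + 0.5 * flipper
print(M @ v)         # stable - 0.5 * flipper
print(M @ (M @ v))   # back to v after two steps
```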

How do we calculate the matrix power? Using the eigenvectors, we can write the matrix as a product of three matrices, A = P D P⁻¹, where the columns of P are the eigenvectors and D is a diagonal matrix of the eigenvalues. Then Aⁿ = P Dⁿ P⁻¹, and only the diagonal entries of D need to be raised to the n-th power.
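The decomposition can be verified directly; this sketch (with an arbitrary example matrix) checks that P Dⁿ P⁻¹ matches the matrix power computed step by step:

```python
import numpy as np

A = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# diagonalize: A = P D P^{-1}, with the eigenvalues on D's diagonal
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# A^n = P D^n P^{-1}: only the diagonal needs to be powered
n = 10
A_n = P @ np.diag(eigvals ** n) @ np.linalg.inv(P)
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
```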

What can such a matrix mean? For example, the ligand/receptor binding state and the next-state probabilities [2]: a capture state and a free state. [2] A. Hassibi, S. Zahedi, R. Navid, R. W. Dutton, and T. H. Lee, Biological shot-noise and quantum-limited SNR in affinity-based biosensors, Journal of Applied Physics 97-1 (2005).

For example, assume all ligands are free at time zero. As only the eigenvector with eigenvalue 1 survives (the 0.4 mode decays to zero), we are left with a uniform probability of (½, ½): half of the ligand molecules are captured and half are free at steady state.
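The slide states the eigenvalue 0.4 and the steady state (½, ½) but not the transition matrix itself; the symmetric matrix below is an assumed example that reproduces both (capture and release each with probability 0.3 per step):

```python
import numpy as np

# assumed transition matrix, not taken from the slide:
# column = current state (free, captured), row = next state
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
print(np.linalg.eigvals(T))       # eigenvalues 1.0 and 0.4

p = np.array([1.0, 0.0])          # all ligands free at time zero
for _ in range(50):               # advance 50 time steps
    p = T @ p
print(p)                          # -> [0.5, 0.5]: half free, half captured
```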

Another example: evolutionary biology and genetics. “Evolutionary biology rests firmly on a foundation of linear algebra” [3]. Observations are made on the covariance matrix of traits, denoted G. A genetic constraint is a factor that affects the direction of evolution or prevents adaptation. A genetic correlation that shows no variance in a direction of selection will constrain evolution in that direction. How can we see it in the matrix? As a zero eigenvalue. [3] M. W. Blows, A tale of two matrices: multivariate approaches in evolutionary biology, Journal of Evolutionary Biology 20(1), 1-8 (January 2007).
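A zero eigenvalue of G appears whenever two traits carry no independent variation; the made-up trait values below (two perfectly correlated traits) show this:

```python
import numpy as np

# two traits that are perfectly correlated: G is singular
trait1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
trait2 = 2.0 * trait1                    # no independent variation
G = np.cov(np.vstack([trait1, trait2]))  # 2x2 trait covariance matrix

eigvals, eigvecs = np.linalg.eigh(G)
print(eigvals)   # one eigenvalue is 0: no variance in that direction,
                 # so selection along it cannot produce a response
```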