Determinants of Distance Matrices of Trees Valerie Tillia

Outline: a look at graph theory; a short biography of Ron Graham; a look at determinants; exploring condensation and the man who created it; an overview of the proof by Yan and Yeh; conclusion; remaining questions.

What is graph theory? Graph theory is the mathematical study of networks. These networks are represented by nodes (vertices) and connections (edges). (Pictured: the Petersen graph.) Who studies graph theory, and why?

Short biography of Ron Graham: In the 1970s, Ronald L. Graham, the man responsible for Graham's Number (we'll look at that later), published a paper using graph theory to study transmitting messages and routing calls efficiently at Bell System Tech. (Photo: Ron Graham, mathcircle.berkeley.edu)

Bell System Tech: graph theory in use! (Images: The U.S. National Archives' photo stream; businessweek.com)

Ron Graham: Man of Many Hats! Graham the Mathematician: he opened new areas of mathematical study, such as worst-case analysis in scheduling theory, and introduced Graham's Number! Graham's Number was, at the time, the largest number ever used in a serious mathematical argument; writing it out would take more ink than there are atoms in the (known) universe!

More on Graham's number… The arrows are read "up". 3 up n means 3(3)(3)…(3) with n threes, i.e. 3 to the power n. 3 up up n means 3 up 3 up 3 up … up 3, with n threes. What does that mean?? This is like function composition: it is read 3 up (3 up (3 up (… up 3)…)), associating from the right. Depending on how many arrows you have, it gets a lot worse.
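To make the notation concrete, here is a small worked example I've added (not from the original slides):

```latex
3\uparrow 3 = 3^{3} = 27, \qquad
3\uparrow\uparrow 3 = 3\uparrow(3\uparrow 3) = 3^{27} = 7{,}625{,}597{,}484{,}987, \qquad
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3).
```

That last value is a power tower of 7,625,597,484,987 threes; Graham's number is built from vastly longer arrow strings still.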

Ron Graham: Hat #2. Graham the Juggler: he was at one point the president of the International Jugglers Association. (Images: daviddarling.info/images/Graham_Ronald; math.ucsd.edu)

Ron Graham: Hat #3. The Circus Performer: part of the Bouncing Baers. He has circus equipment installed in his home for practice. (Image: math.ucsd.edu)

Ron Graham: Hat #4. The Scientist: Chief Scientist at the California Institute for Telecommunications and Information Technology (more graph theory?!). (Image: physics.uci.edu)

Graham's Formula: While working on the problem of transmitting messages and routing calls efficiently, he found and proved a formula. This formula is so simple, yet so intriguing, that it has inspired mathematicians from all sorts of backgrounds to create new and different proofs of it. The formula gives the determinant of the distance matrix of a tree. Not that kind of tree… Time for definitions!

Definitions and concepts we need: trees; distance and the distance matrix; determinants.

Trees: A tree is a graph that has no cycles. (Figures: a tree, and a graph that is not a tree because it contains a cycle. Not the kind of tree with bark!)

Trees: A tree on n vertices always has n - 1 edges. A "terminating" vertex (one with only a single edge attached) is also called a leaf. Don't ask why there's a leaf at the base of the trunk…

Distance: The distance from one vertex to another is the length of the shortest path between them: you count the number of edges, or "steps", from one to the other. In the figure, the distance from x to y is 3.

Distance Matrix: The distance matrix is a symmetric matrix that lists the distance between every possible pair of vertices in the graph. Label the vertices in the tree and set up the matrix as in the slide's example (a sketch of the construction follows below). As you can see, this matrix will always be symmetric about the diagonal.
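Since the slide's example matrix doesn't reproduce here, this is a minimal sketch of the construction (the helper name `distance_matrix` is mine, not from the talk): run a breadth-first search from every vertex and record the path lengths.

```python
from collections import deque

def distance_matrix(n, edges):
    """Distance matrix of a tree on vertices 0..n-1, given as an edge list.

    Runs a breadth-first search from every vertex; entry (i, j) is the
    number of edges on the unique path between i and j.
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    D = [[0] * n for _ in range(n)]
    for start in range(n):
        dist = [-1] * n
        dist[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if dist[w] == -1:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        D[start] = dist
    return D

# Path on 4 vertices, 0 - 1 - 2 - 3:
print(distance_matrix(4, [(0, 1), (1, 2), (2, 3)]))
# [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
```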

Determinant: In linear algebra, the determinant of a matrix turns out to be the product of the eigenvalues of the matrix (but we won't get into that). To find the determinant of a 2x2 matrix with rows (a, b) and (c, d), you take ad - bc.

How to find harder determinants: To find the determinant of any matrix larger than this, a new idea is needed. One way is called co-factor expansion. It requires taking determinants of smaller matrices sitting inside the big one; the slide highlights a couple of smaller 3x3's sitting inside our 4x4.

How to do co-factor expansion: First, pick a row or column with the most zeros. This will make things easier for us. Might as well pick the top row here.

How to do co-factor expansion: Next, you pick the first number in the row and cross out the row and column it is sitting in. Take the number you picked and multiply it by the determinant of the small left-over matrix. This is the first summand in the expression for the determinant.

How to do co-factor expansion: Do this repeatedly for every entry in the row or column that you picked (see why zeros are nice?). When you add the terms, alternate signs in a +, -, +, … pattern.
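Here is a minimal sketch of the procedure just described (my own code, not from the talk; the name `det_cofactor` is made up). It expands along the first row and, as the slide suggests, skips zero entries:

```python
def det_cofactor(M):
    """Determinant by co-factor (Laplace) expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    if n == 2:
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    total = 0
    for j, entry in enumerate(M[0]):
        if entry == 0:       # zero entries contribute nothing -- skip them
            continue
        minor = [row[:j] + row[j + 1:] for row in M[1:]]   # cross out row 0 and column j
        total += (-1) ** j * entry * det_cofactor(minor)   # alternate +, -, +, ...
    return total

# Example: this happens to be a distance matrix of a path on 4 vertices,
# which we will meet again later in the talk.
D = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(det_cofactor(D))   # -12
```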

Even larger matrices… Although this isn't so bad for 3x3's or maybe even 4x4's, trying to do this for a matrix of even 5x5 gets pretty hairy. Unless you have a lot of zeros, an nxn matrix costs roughly n! = n(n-1)(n-2)…(2)(1) steps: there are n entries in the first row, which means n smaller determinants; each of those is an (n-1)x(n-1) matrix, each (n-1)x(n-1) needs n-1 determinants of (n-2)x(n-2) matrices, and so on… yikes.

More yikes: For a 16 by 16 matrix, that's 16! ≈ 2.09 × 10^13 steps!! Outrageous. 16! = 20,922,789,888,000, i.e. roughly twenty-one trillion terms. Is there another way besides technology?
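To double-check that count (a trivial sketch I've added, not from the slides):

```python
import math
print(math.factorial(16))   # 20922789888000, about 2.09e13
```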

Introducing: Charles Dodgson. Charles Dodgson, born in 1832, was also a man of many hats: an author, a reverend, a photographer, and a mathematician. He came up with another way to find the determinant and published it in the Proceedings of the Royal Society of London. He also is known by an entirely different name…

Lewis Carroll!! Is he writing Alice, or doing math!? (Image: lewiscarroll.org)

Lewis Carroll: Yes, Lewis Carroll is the man who wrote Alice in Wonderland AND came up with a new way to find the determinant! His method is based on an identity called the Desnanot-Jacobi identity. Desnanot and Jacobi lived around the same time, but the identity became more commonly known as Dodgson's Determinant Evaluation Rule.

Condensation: Dodgson called his method condensation, because it "condenses" our large matrix into progressively smaller matrices until we are finally just left with the determinant. Let's see how this works. (Image: condensed soup.)

Condensation on a 4x4 matrix: Below is a nice 4x4 matrix to do condensation on. There are a lot of zeros, though. Although zeros are good in co-factor expansion, they are bad in condensation: zeros in the interior are not allowed. However, we can use elementary row operations to make the "interior" zeros go away, and that won't affect the determinant.

Condensation on a 4x4 matrix: Our next step is to take the determinants of all the little 2x2's nested inside it (every block of four adjacent entries). These are quick calculations you can usually do in your head.

Condensation on a 4x4 matrix: Now you take all those determinants and put them into a new, "condensed" matrix.

Condensation on a 4x4 matrix: Do the same thing for the second step, but this time, go back and find the corresponding "interior" entries from two steps back, and divide each entry in your newly condensed matrix by them (hence the no-zeros rule).

Condensation on a 4x4 matrix: Continue this if you have a larger matrix (including going back two steps to get interior entries to divide by) until you arrive at a 2x2 matrix. We're there! The determinant of this 2x2, divided by the center entry of the 3x3 from the previous step, is the determinant of the original matrix.
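The whole procedure is compact enough to sketch in a few lines (my own code, not from the talk; `det_condensation` is a made-up name). The sketch assumes no zeros ever appear in the interior of an intermediate matrix; as the slides describe, row operations are used first to clear them. The example is one possible distance matrix of the path on 4 vertices, with row operations applied to remove the interior zeros:

```python
from fractions import Fraction

def det_condensation(M):
    """Determinant by Dodgson condensation.

    Assumes no interior entry of any intermediate matrix is zero
    (clear such zeros with row operations beforehand).
    """
    A = [[Fraction(x) for x in row] for row in M]   # plays the role of "two steps back"
    n = len(A)
    # First condensation step: every adjacent 2x2 determinant.
    B = [[A[i][j] * A[i + 1][j + 1] - A[i][j + 1] * A[i + 1][j]
          for j in range(n - 1)] for i in range(n - 1)]
    while len(B) > 1:
        m = len(B)
        C = [[(B[i][j] * B[i + 1][j + 1] - B[i][j + 1] * B[i + 1][j])
              / A[i + 1][j + 1]            # divide by the interior entry two steps back
              for j in range(m - 1)] for i in range(m - 1)]
        A, B = B, C
    return B[0][0]

D = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
# The diagonal zeros sit in the interior, so clear them first by adding
# row 4 to rows 2 and 3 (this does not change the determinant):
E = [D[0],
     [a + b for a, b in zip(D[1], D[3])],
     [a + b for a, b in zip(D[2], D[3])],
     D[3]]
print(det_condensation(E))   # -12
```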

Condensation vs. Co-factor Expansion: Condensation wasn't exactly easy, and complications can occur if zeros spontaneously appear in the interiors of successive matrices (in which case you basically have to start over). However, for a 16x16 matrix, we'd be facing around 15 condensing steps, rather than 16! ≈ 2.09 × 10^13 co-factor terms. I'm glad we have technology now!

Graham's Formula: Now you are ready to appreciate Graham's formula. The determinant of the distance matrix D of a tree on n vertices is given by |D| = (-1)^(n-1) (n-1) 2^(n-2). Notice that the only parameter is n. The determinant does not care about the structure of the tree!!
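Evaluating the formula for the first few values of n (my own quick check; the n = 4 value matches the -12 we found by condensation):

```latex
\det D = (-1)^{n-1}(n-1)\,2^{n-2}:\qquad
n=3:\ 4,\qquad n=4:\ -12,\qquad n=5:\ 32,\qquad n=6:\ -80.
```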

Regardless of the structure… A little hard to believe, right? Let's look at some examples and see if we can convince ourselves. Here are two trees on 4 vertices that clearly have different distance matrices: Tree 1's has 3's in it and Tree 2's doesn't. And they have the same determinant? (Figures: Tree 1 and Tree 2.)

Regardless of the structure… Remember that matrix we did condensation on? That was actually the distance matrix for Tree 1, and we found its determinant was -12. Let's use co-factor expansion on Tree 2's matrix. It comes out to -12 as well. Wow!
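As an extra sanity check (my own sketch, not part of the talk), here are both computations done by brute force with the Leibniz permutation formula, using one possible labelling of each tree (Tree 1 as a path, Tree 2 as a star):

```python
from itertools import permutations

def det(M):
    """Determinant via the Leibniz permutation formula (fine for a 4x4)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):                 # sign = (-1)^(number of inversions)
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

tree1 = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]   # path 0-1-2-3
tree2 = [[0, 1, 1, 1], [1, 0, 2, 2], [1, 2, 0, 2], [1, 2, 2, 0]]   # star centered at 0
n = 4
print(det(tree1), det(tree2), (-1) ** (n - 1) * (n - 1) * 2 ** (n - 2))   # -12 -12 -12
```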

Overview of the proof by Yan and Yeh: Their proof uses a clever combination of the determinant evaluation rule, co-factor expansion, and a system of equations! You now have all the knowledge you need to understand this proof, so I will give you a quick look at how it goes.

Overview of the proof by Yan and Yeh: The proof is by induction, so as a base case it is shown that the formula does indeed work for a tree on 3 vertices and its distance matrix (worked out below).
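The only tree on 3 vertices is a path; with the middle vertex labelled 2 (my labelling choice, on which the determinant does not depend), the base case checks out:

```latex
D=\begin{pmatrix}0&1&2\\ 1&0&1\\ 2&1&0\end{pmatrix},\qquad
\det D = 0(0-1)-1(0-2)+2(1-0) = 4 = (-1)^{2}\cdot 2\cdot 2^{1}.
```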

Overview of the proof by Yan and Yeh: We now assume the formula holds for trees on fewer than n vertices. Next, by using some slick elementary column operations on an arbitrary distance matrix, we get a matrix with a lot of zeros in one column, and it has the same determinant as before we changed it. Zeros are good!!!

Overview of the proof by Yan and Yeh: Now we will use co-factor expansion along the first column (rather than the first row) of D. See all the zeros?

Overview of the proof by Yan and Yeh: But this is an induction proof, so if we can get our expression for the determinant in terms of determinants of distance matrices of trees on fewer than n vertices, we can assume the formula works for them!

Overview of the proof by Yan and Yeh: Now is where Dodgson's Rule comes in. Condensation is based on the Desnanot-Jacobi identity, which holds for any matrix (stated below). If we think of each of the determinants in that identity as the determinant of a distance matrix, then for the ones coming from trees on fewer than n vertices we can apply the formula.
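For reference, here is one standard statement of the identity (my notation: M(i; j) is M with row i and column j deleted, and M(1, n; 1, n) is M with rows 1 and n and columns 1 and n deleted):

```latex
\det(M)\,\det\bigl(M(1,n;\,1,n)\bigr)
  \;=\; \det\bigl(M(1;1)\bigr)\,\det\bigl(M(n;n)\bigr)
  \;-\; \det\bigl(M(1;n)\bigr)\,\det\bigl(M(n;1)\bigr).
```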

Overview of the proof by Yan and Yeh: What we have done is effectively obtain two different expressions for the determinant of our matrix D.

Overview of the proof by Yan and Yeh: Let's use some basic linear algebra skills. Substituting to eliminate the variable |D| leaves just a quadratic that factors quite nicely, giving the value of the remaining unknown.

Overview of the proof by Yan and Yeh: Going back to the first equation in the system and plugging in this value gives |D| = (-1)^(n-1) (n-1) 2^(n-2), which is the formula discovered by Graham.

Conclusion: We looked at a quick biography of Ron Graham, the man who discovered this formula while working on an application problem. We learned about trees and determinants, and learned co-factor expansion for finding determinants. Charles Dodgson (better known as Lewis Carroll) came up with another way to evaluate determinants, and we explored that method as well. Once we knew enough math, we looked at Yan and Yeh's proof of the formula.

Remaining Questions: The main question that does not seem to be answered yet: how does the determinant relate to the tree geometrically (if at all)? Looking at the formula through a combinatorialist's eyes, one might think it is counting the number of ways to pick one of the n - 1 edges and then decide whether to include or exclude each of the remaining n - 2 edges, which would account for the (n - 1)2^(n-2) up to sign. Is this the case? Why doesn't the structure matter?

Thank you!