Markov Chains & Population Movements


Stochastic Matrices

Certain matrices, called stochastic matrices, are important in the study of random phenomena in which the exact outcome is not known but probabilities can be determined. One example is the analysis of population movement between cities and suburbs in the United States. First, recall some basic ideas of probability. If an outcome is certain to occur, we say its probability is 1; if it cannot occur, its probability is 0. Other probabilities are represented by fractions between 0 and 1, and the larger the fraction, the more likely the outcome. Thus a probability p always satisfies 0 ≤ p ≤ 1.

If any one of n equally likely, mutually exclusive outcomes can occur, and m of these outcomes are of interest to us, then the probability p that one of them occurs is defined to be the fraction m/n. For example, consider drawing a single card from a deck of 52 playing cards. What is the probability that the card is an ace or a king? There are 52 equally likely outcomes. The deck contains 4 aces and 4 kings, so there are 8 outcomes of interest. Thus the probability of drawing an ace or a king is 8/52 = 2/13.
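This counting rule is easy to check directly. A quick sketch in Python (not part of the original slides) counts the favourable outcomes and forms the fraction m/n:

```python
# Probability of drawing an ace or a king from a standard 52-card deck:
# p = (outcomes of interest) / (equally likely outcomes) = m / n.
from fractions import Fraction

n = 52       # equally likely outcomes (cards in the deck)
m = 4 + 4    # outcomes of interest: 4 aces + 4 kings

p = Fraction(m, n)
print(p)  # 2/13
```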

Stochastic Matrix

Definition: A stochastic matrix is a square matrix whose entries are probabilities and whose columns each add up to 1.

Examples of Stochastic Matrices

Examples that aren’t Stochastic Matrices

In one matrix, the sum of the entries in the first column is not 1. In another, the 2 in the first row is not a probability, since it is greater than 1.
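The defining conditions can be checked mechanically. The helper below is a hypothetical sketch, not from the slides: it tests that a matrix is square, that every entry lies in [0, 1], and that every column sums to 1.

```python
def is_stochastic(matrix, tol=1e-9):
    """Return True if `matrix` is a stochastic matrix (columns sum to 1)."""
    n = len(matrix)
    if any(len(row) != n for row in matrix):
        return False  # not square
    if any(not 0.0 <= entry <= 1.0 for row in matrix for entry in row):
        return False  # some entry is not a probability
    # every column must sum to 1 (within floating-point tolerance)
    return all(abs(sum(matrix[i][j] for i in range(n)) - 1.0) <= tol
               for j in range(n))

print(is_stochastic([[0.96, 0.01], [0.04, 0.99]]))  # True
print(is_stochastic([[2.0, 0.01], [-1.0, 0.99]]))   # False: 2 is not a probability
```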

A general 2 x 2 stochastic matrix can be written:

    [ x        y     ]
    [ 1 − x    1 − y ]

where 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1.

Theorem: If A and B are stochastic matrices of the same size, then AB is a stochastic matrix. In particular, if A is stochastic, then A^2, A^3, A^4, … are all stochastic.
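The theorem can be verified numerically. A small sketch (the matrices A and B here are made-up examples, not from the slides) multiplies two column-stochastic matrices and checks that the columns of the product still sum to 1:

```python
def mat_mul(A, B):
    """Multiply two n x n matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.5, 0.2], [0.5, 0.8]]   # columns sum to 1
B = [[0.9, 0.3], [0.1, 0.7]]   # columns sum to 1
AB = mat_mul(A, B)

# Each column of AB sums to 1, so AB is again stochastic.
for j in range(2):
    print(sum(AB[i][j] for i in range(2)))  # 1.0 (up to rounding)
```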

Example

It is estimated that the number of people living in cities in the United States during 2012 was 80 million, and the number living in the surrounding suburbs was 175 million. Let us represent this information by the column vector

    X0 = [ 80  ]
         [ 175 ]

(in millions, city first). Consider the population flow from cities to suburbs. During 2012 the probability of a person staying in the city was 0.96, so the probability of moving to the suburbs was 0.04 (assuming all those who moved went to the suburbs). Now consider the reverse flow, from suburbia to the city. The probability of a person moving to the city was 0.01; the probability of remaining in suburbia was 0.99.

Stochastic Matrix P

These probabilities can be written as the elements of a stochastic matrix P:

    P = [ 0.96    0.01 ]
        [ 0.04    0.99 ]

The probability of moving from location A to location B is given by the element in column A and row B. In this context, the stochastic matrix is called the matrix of transition probabilities.

Now consider the population distribution in 2013, one year later.

City population in 2013
= people who remained from 2012 + people who moved in from the suburbs
= (0.96 × 80) + (0.01 × 175)
= 78.55 million

Suburban population in 2013
= people who moved in from the city + people who stayed from 2012
= (0.04 × 80) + (0.99 × 175)
= 176.45 million

Note that we can arrive at these numbers using matrix multiplication. Using 2012 as the base year, with population vector X0, let X1 be the population distribution in 2013, one year later. Then we can write X1 = PX0. Assume the population flow represented by the matrix P is unchanged over the years. The population distribution X2 after two years is then given by X2 = PX1.
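The 2013 figures computed above can be reproduced with the update X1 = PX0. A minimal sketch (variable names are ours, not from the slides):

```python
P = [[0.96, 0.01],   # row 1: to city    (from city, from suburbs)
     [0.04, 0.99]]   # row 2: to suburbs
x0 = [80.0, 175.0]   # 2012 populations in millions: (city, suburbs)

def step(P, x):
    """One year of population flow: x_next = P x."""
    return [P[0][0] * x[0] + P[0][1] * x[1],
            P[1][0] * x[0] + P[1][1] * x[1]]

x1 = step(P, x0)
print([round(v, 2) for v in x1])  # [78.55, 176.45]
```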

After three years the population distribution is given by X3 = PX2, and after n years, Xn = PXn−1. The predictions of this model (to four decimal places) are X1 = [78.5500, 176.4500], X2 = [77.1725, 177.8275], X3 = [75.8639, 179.1361], and so on.

Observe how the city population decreases annually while that of the suburbs increases. We will see later that the sequence X0, X1, X2, … approaches

    [ 51  ]
    [ 204 ]

If conditions do not change, the city population will gradually approach 51 million, while the population of suburbia will approach 204 million.
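This limit can be checked two ways: the fixed point x = Px with city + suburbs = 255 gives 0.04 × city = 0.01 × suburbs, so city = 51 and suburbs = 204; and simply iterating the yearly update long enough converges to the same vector. A quick sketch of the iteration (not from the slides):

```python
P = [[0.96, 0.01], [0.04, 0.99]]
x = [80.0, 175.0]   # 2012 distribution in millions: (city, suburbs)

# Iterate x <- P x; the distribution settles at the steady state.
for _ in range(1000):
    x = [P[0][0] * x[0] + P[0][1] * x[1],
         P[1][0] * x[0] + P[1][1] * x[1]]

print([round(v, 4) for v in x])  # [51.0, 204.0]
```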

Further, note that the sequence X1, X2, X3, …, Xn can be computed directly from X0, as follows:

X1 = PX0, X2 = P^2X0, X3 = P^3X0, …, Xn = P^nX0

The matrix P^n is a stochastic matrix that takes X0 into Xn in n steps. This result can be generalized: P^n can be used in this manner to predict the distribution n stages later, from any given distribution:

Xi+n = P^nXi

P^n is called the n-step transition matrix. The (i, j)th element of P^n gives the probability of going from state j to state i in n steps. For example, it can be shown (writing to two decimal places) that

    P^4 = [ 0.85    0.04 ]
          [ 0.15    0.96 ]

Thus, for instance, the probability of living in the city in 2012 and being in the suburbs four years later is 0.15.
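The 0.15 figure can be recovered by raising P to the fourth power and reading the entry in the suburbs row and city column. A sketch (helper name is ours):

```python
def mat_mul(A, B):
    """Multiply two n x n matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.96, 0.01], [0.04, 0.99]]

# Compute the 4-step transition matrix P^4 = P * P * P * P.
P4 = P
for _ in range(3):
    P4 = mat_mul(P4, P)

# Row "suburbs", column "city": in the city in 2012, suburbs 4 years later.
print(round(P4[1][0], 2))  # 0.15
```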

Final Notes

The probabilities in this model depend only on the current state of a person – whether the person is living in the city or in suburbia. A model in which the probability of moving from one state to another depends only on the current state, rather than on a fuller history, is called a Markov chain. A modification that allows for possible annual population growth or decline would give improved estimates of future population distributions. These concepts extend to Markov processes involving more than two states.