Section 10.1 Basic Properties of Markov Chains

Chapter 10 Markov Chains, Section 10.1 Basic Properties of Markov Chains

Some Background Information Mathematical models that evolve over time in a probabilistic manner are called stochastic processes. A special kind of stochastic process is a Markov Chain, where the outcome of an experiment depends only on the outcome of the previous experiment.

Why Study Markov Chains? Markov chains are used to analyze trends and predict the future (weather, the stock market, genetics, product success, etc.).

A Sociology Example Sociologists classify people by income as lower-class, middle-class, and upper-class. They have found that the strongest determinant of the income class of an individual is the income class of that person’s parents.

Transition Diagrams

Transition Matrices

Characteristics of a Transition Matrix It is a square matrix. All entries are between 0 and 1 (because all entries are probabilities). The sum of the entries in any row must be 1. It is denoted by P.
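
As a quick check of these three properties, here is a minimal sketch (a hypothetical helper, not from the slides) that tests whether a candidate matrix qualifies as a transition matrix:

```python
import numpy as np

def is_transition_matrix(P, tol=1e-9):
    """Check the three properties listed above: square shape,
    entries between 0 and 1, and each row summing to 1."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        return False                                   # must be square
    if not np.all((P >= 0.0) & (P <= 1.0)):
        return False                                   # entries are probabilities
    return np.allclose(P.sum(axis=1), 1.0, atol=tol)   # rows sum to 1

# The bookstore matrix from Example 2 below passes the check:
print(is_transition_matrix([[0.75, 0.15, 0.10],
                            [0.05, 0.90, 0.05],
                            [0.05, 0.10, 0.85]]))   # True
```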

Key Features of Markov Chains A sequence of trials of an experiment is a Markov chain if 1.) the outcome of each experiment is one of a set of discrete states; 2.) the outcome of an experiment depends only on the present state, and not on any past states; 3.) the transition probabilities remain constant from one transition to the next.
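
The second and third features can be made concrete with a small simulation sketch (illustrative code with assumed names and numbers, not from the slides): each step draws the next state using only the current state's row of the transition matrix P, and P itself never changes.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_chain(P, start, steps):
    """Simulate a Markov chain path. The next state is drawn using
    only the current state's row of P (feature 2), and P stays
    constant from one transition to the next (feature 3)."""
    P = np.asarray(P, dtype=float)
    path = [start]
    state = start
    for _ in range(steps):
        state = int(rng.choice(len(P), p=P[state]))
        path.append(state)
    return path

# Two assumed states (0 = rain, 1 = no rain) with assumed probabilities:
print(simulate_chain([[0.4, 0.6], [0.2, 0.8]], start=0, steps=10))
```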

Transition Probabilities from One State to Another The transition matrix shows the probabilities of moving from state to state from the current generation to the next. P^k gives the probabilities of a transition from one state to another in k repetitions of an experiment, provided the transition probabilities remain constant from one repetition to the next.

Example 1 Use the transition matrix from the Sociology example to find the probabilities of change for the grandchildren of the current generation.
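
The sociology example's transition matrix did not survive in this transcript, so the sketch below uses assumed, purely illustrative probabilities; the point is the computation itself: grandchildren are two generations (two transitions) away, so their probabilities are the entries of P^2.

```python
import numpy as np

# Assumed illustrative values (rows/columns: lower, middle, upper);
# the slide's actual matrix is not reproduced here.
P = np.array([[0.65, 0.28, 0.07],
              [0.15, 0.67, 0.18],
              [0.12, 0.36, 0.52]])

# Two generations = two transitions, so square the matrix.
P2 = np.linalg.matrix_power(P, 2)
print(P2.round(4))
# Entry (i, j): probability that a grandchild is in class j,
# given that the grandparents were in class i.
```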

Example 2 Write a transition matrix for the following situation. Then find the probabilities associated with a third and fourth purchase from the bookstores. In a study of the market share of the three bookstores in a university town, it was found that 75% of those who had bought from University Bookstore would buy from it again, 15% from Campus Bookstore and 10% from Bookmart. Of those who bought from Campus Bookstore, 90% would buy from it again and 5% each would buy from University Bookstore and Bookmart. 85% of those who bought from Bookmart would buy from it again, 5% from University Bookstore and 10% from Campus Bookstore.
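
All the transition probabilities are stated in the problem, so the matrix can be written down directly. A minimal sketch, assuming the first purchase fixes the starting state, so that two transitions give the third purchase and three give the fourth:

```python
import numpy as np

# Rows/columns ordered: University, Campus, Bookmart.
P = np.array([[0.75, 0.15, 0.10],
              [0.05, 0.90, 0.05],
              [0.05, 0.10, 0.85]])

# The starting state is the first purchase; the third purchase is
# two transitions later and the fourth purchase is three.
print("Third purchase:")
print(np.linalg.matrix_power(P, 2).round(4))
print("Fourth purchase:")
print(np.linalg.matrix_power(P, 3).round(4))
```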

Probability Vector (Matrix) When a study is first begun, the probabilities of the states are called the initial distribution. This distribution is written as a matrix with only one row. It is denoted by X0.

Characteristics of a Probability Vector It is a row matrix. Each entry must be between 0 and 1 inclusive. The sum of the entries of the row must be 1.

Making Predictions about the Population Proportion with Markov Chains 1.) Create a probability vector, X0. The entries are the initial probabilities of the states. 2.) Create the transition matrix, P. The entries are the probabilities of passing from the current states (rows) to the first following states (columns). 3.) Calculate X0 P^n. This is the probability vector after n repetitions of the experiment; in other words, it is a prediction for the proportion of the population in each state after n repetitions. Note: The columns and rows of the probability vector and the transition matrix must be labeled the same. (A code sketch of this procedure follows below.)
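
The three steps translate directly into a few lines of code. A generic sketch (the function name is illustrative), assuming the states in X0 and the rows/columns of P are listed in the same order:

```python
import numpy as np

def predict(x0, P, n):
    """Compute X0 P^n: the predicted distribution over the states
    after n repetitions of the experiment."""
    x0 = np.asarray(x0, dtype=float)
    P = np.asarray(P, dtype=float)
    return x0 @ np.linalg.matrix_power(P, n)
```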

Example 3 A marketing analysis shows that KickKola currently commands 14% of the cola market. Further analysis indicates that 12% of the consumers who do not currently drink KickKola will purchase KickKola the next time they buy a cola (in response to a new advertising campaign) and that 63% of the consumers who currently drink KickKola will purchase it the next time they buy a cola. Predict KickKola’s market share at: a.) the next following purchase b.) the second following purchase
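
Here X0 = [0.14, 0.86] (KickKola, everyone else), and both rows of the transition matrix can be completed from the stated percentages because each row must sum to 1. A sketch of the computation (the state order is an assumption):

```python
import numpy as np

x0 = np.array([0.14, 0.86])   # current shares: KickKola, other colas

# Rows sum to 1: KickKola drinkers stay with probability 0.63,
# non-drinkers switch to KickKola with probability 0.12.
P = np.array([[0.63, 0.37],
              [0.12, 0.88]])

print((x0 @ P).round(4))                             # a) next purchase
print((x0 @ np.linalg.matrix_power(P, 2)).round(4))  # b) second following purchase
# a) gives a KickKola share of 0.14(0.63) + 0.86(0.12) = 0.1914.
```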

Example 4 It has been noted that George the golfer tends to repeat himself. On any shot, his ball lands in the fairway (F), in the rough (R), or out of bounds (B). On his first shot he hits in the fairway 60% of the time, in the rough 30% of the time, and out of bounds 10% of the time. However, on subsequent shots, if he hit in the fairway the first time, he will hit in the fairway next time 90% of the time and in the rough 10% of the time. If he hit in the rough the first time, he will hit in the rough next time 50% of the time and out of bounds 5% of the time. If he hit out of bounds the first time, he will hit out of bounds next time 70% of the time and in the fairway 15% of the time. Find the probability that his next following shot lands: a.) in the fairway b.) in the rough c.) out of bounds
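
Each row of the transition matrix must sum to 1, so the probabilities the problem leaves unstated (fairway after rough, rough after out of bounds, out of bounds after fairway) follow by subtraction. A sketch of the computation:

```python
import numpy as np

# State order: fairway (F), rough (R), out of bounds (B).
x0 = np.array([0.60, 0.30, 0.10])   # first-shot distribution

# Unstated entries are forced by each row summing to 1:
#   F row: 0.90 F, 0.10 R -> 0.00 B
#   R row: 0.50 R, 0.05 B -> 0.45 F
#   B row: 0.70 B, 0.15 F -> 0.15 R
P = np.array([[0.90, 0.10, 0.00],
              [0.45, 0.50, 0.05],
              [0.15, 0.15, 0.70]])

print((x0 @ P).round(4))   # probabilities of F, R, B on the next shot
# = [0.69, 0.225, 0.085]
```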