Stochastic Processes (2)


Stochastic Processes (2) Dr. Adil Yousif Lecture 6

Cat and Mouse Five boxes [1, 2, 3, 4, 5]. The cat starts in box 1 and the mouse starts in box 5. Each turn, each animal moves one box left or right at random. If they ever occupy the same box, the game is over (for the mouse, anyway).

5-box cat and mouse game States (the stochastic matrix is built over these):
State 1: cat in the first box, mouse in the third box: (1, 3)
State 2: cat in the first box, mouse in the fifth box: (1, 5)
State 3: cat in the second box, mouse in the fourth box: (2, 4)
State 4: cat in the third box, mouse in the fifth box: (3, 5)
State 5: the cat ate the mouse and the game ended: F
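
From the movement rules above (an animal in an end box must move inward; in an interior box it moves left or right with probability 1/2 each), the stochastic matrix can be written out explicitly. A minimal NumPy sketch, using the state numbering on this slide:

```python
import numpy as np

# States: 1=(1,3), 2=(1,5), 3=(2,4), 4=(3,5), 5=F (mouse caught).
P = np.array([
    [0.00, 0.00, 0.50, 0.00, 0.50],  # from (1,3): cat -> 2; mouse -> 2 (caught) or 4
    [0.00, 0.00, 1.00, 0.00, 0.00],  # from (1,5): cat -> 2, mouse -> 4
    [0.25, 0.25, 0.00, 0.25, 0.25],  # from (2,4): four equally likely combinations
    [0.00, 0.00, 0.50, 0.00, 0.50],  # from (3,5): cat -> 2 or 4 (caught); mouse -> 4
    [0.00, 0.00, 0.00, 0.00, 1.00],  # F is absorbing
])
assert np.allclose(P.sum(axis=1), 1.0)  # every row of a stochastic matrix sums to 1
```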

State Diagram of cat and mouse State two, in green, is the initial state. State five, in blood red, is an absorbing (accepting) state. The chain can bounce around for a while, but eventually the mouse will die.
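
Because F is absorbing and reachable from every state, absorption is certain, and the expected number of turns until the mouse dies can be computed with the standard fundamental-matrix formula. A sketch, using the matrix P from the previous slide:

```python
import numpy as np

# P from the previous sketch; transient states are 1-4, state 5 (F) is absorbing.
P = np.array([[0, 0, .5, 0, .5], [0, 0, 1, 0, 0], [.25, .25, 0, .25, .25],
              [0, 0, .5, 0, .5], [0, 0, 0, 0, 1.]])
Q = P[:4, :4]                      # transient-to-transient block
N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix N = (I - Q)^(-1)
t = N @ np.ones(4)                 # expected turns to absorption from each state
print(t[1])                        # from the initial state (1,5): 4.5 turns on average
```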

Cat, mouse and cheese example In this example, a mouse moves randomly from room to room. The cat and the cheese do not move. But if the mouse enters the cat's room, he never comes out; if he reaches the cheese, he likewise never comes out.

The mouse moves according to the transition probabilities p(i, j) = P(the mouse goes to room j when he is in room i). For example, whenever the mouse is in room 3, he next goes to room 2, 4, or 5 with equal probability 1/3.

The transition matrix: every row adds up to 1, because at each step the mouse has to go somewhere (or stay where he is).

(1) There are 5 states (the five rooms), numbered 1, 2, 3, 4, 5.
(2) The mouse moves in discrete time, say every minute.
(3) The mouse does not remember which room he was in before: every minute he picks an adjacent room at random, possibly going back to the room he was just in.
(4) The transition probabilities do not change with time.
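
The room layout itself appears only in the figure, so the matrix below uses a hypothetical layout chosen to be consistent with this slide (room 3 adjacent to rooms 2, 4, and 5; cat in room 1 and cheese in room 5, both absorbing). A sketch, not the lecture's actual matrix:

```python
import numpy as np

# Hypothetical adjacency (an assumption): 1-2, 2-3, 3-4, 3-5, 4-5,
# with the cat in room 1 and the cheese in room 5, both absorbing.
P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],   # cat's room: the mouse never leaves
    [0.5, 0.0, 0.5, 0.0, 0.0],   # room 2 -> room 1 or 3
    [0.0, 1/3, 0.0, 1/3, 1/3],   # room 3 -> room 2, 4 or 5 with equal probability
    [0.0, 0.0, 0.5, 0.0, 0.5],   # room 4 -> room 3 or 5
    [0.0, 0.0, 0.0, 0.0, 1.0],   # cheese room: the mouse stays put
])
assert np.allclose(P.sum(axis=1), 1.0)  # "every row adds up to 1"
```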

Frog Cell Cycle [Figure 1 of Sible and Tyson, Methods 41, 2007]

Frog Cell Cycle The concentration or copy number of each molecular species is a state. Each reaction serves as a transition from state to state. Whether or not a reaction occurs is stochastic. Each state contains a complete picture of every species' quantity.

Markov? Andrey (Andrei) Andreyevich Markov, Russian mathematician, June 14, 1856 – July 20, 1922.

Markov Chain The future is independent of the past, given the present. Want to know tomorrow's weather? Don't look at yesterday; look out the window. Requires perfect knowledge of the current state. Very simple, very powerful. P(Future | Present)

Markov Chain Makes predictions about future events using probabilities based only on the current state. Probability of the future, given the present. Transitions from state to state.

First-Order Markov Chain We make the Markov assumption that the value of the current state depends only on a fixed number of previous states. In our case we look back only one previous state: X_t depends only on X_{t-1}.
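
As an illustration (the states and numbers here are invented, not from the lecture), sampling a first-order chain conditions only on the previous state:

```python
import numpy as np

rng = np.random.default_rng(0)
states = ["rain", "sun"]              # illustrative two-state weather chain
P = np.array([[0.6, 0.4],             # transition probabilities from "rain"
              [0.2, 0.8]])            # transition probabilities from "sun"

x = 0                                 # X_0 = "rain"
path = [states[x]]
for _ in range(10):
    x = rng.choice(2, p=P[x])         # X_t drawn from P(. | X_{t-1}) alone
    path.append(states[x])
print(path)
```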

Second-Order Markov Chain The value of the current state depends on the two previous states: P(X_t | X_{t-1}, X_{t-2}). The math starts getting very complicated. This extends to third-, fourth-, ... order Markov chains.
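
A standard trick worth noting (not on the slide): a second-order chain over a state set S is equivalent to a first-order chain over pairs (X_{t-1}, X_t), at the cost of squaring the state space. A sketch:

```python
from itertools import product

S = ["a", "b", "c"]                    # illustrative state set
pairs = list(product(S, S))            # augmented states (x_{t-1}, x_t)
print(len(S), "states become", len(pairs), "pair-states")  # 3 -> 9
# A second-order transition P(x_t | x_{t-1}, x_{t-2}) then becomes an ordinary
# first-order transition (x_{t-2}, x_{t-1}) -> (x_{t-1}, x_t) on pair-states.
```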

Random Walk Model A drunk man leaves a bar late Saturday night. He doesn't know where home is, and he steadies himself on the light posts down Green Street. He can only move from one light post to the next. Unfortunately, when he gets to a new light post, he forgets where he came from. On average, where does this man wake up Sunday morning?

Features of a Random Walk
Memory loss: history reveals no information about the future.
Expected change in value is zero: over any length of time, the best predictor of the future value is the current value. This feature is termed a martingale.
Variance increases with time: as more time passes, there is potential for being farther from the initial value.
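
A quick empirical check of the last two features, assuming a symmetric ±1 walk (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
steps = rng.choice([-1, 1], size=(100_000, 400))  # 100,000 walks of 400 steps
X = steps.cumsum(axis=1)                          # positions X_1, ..., X_400

for n in (100, 400):
    # sample mean stays near 0; sample variance grows like n
    print(n, X[:, n - 1].mean(), X[:, n - 1].var())
```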

Why Random Walks? A random walk (RW) is a useful model for understanding stochastic processes across a variety of scientific disciplines. Random walk theory supplies the basic probability theory behind BLAST (the most widely used sequence-alignment tool).

Definitions (cont.) A random walk is called a restricted walk if it is limited to the interval [a, b]. The endpoints a and b are called absorbing barriers if, once the walk reaches one, it stays there forever, or reflecting barriers if the walk reaches the endpoint and bounces back.

Example (cont.): simple RW, ladder points Ladder point (LP): a point in the walk lower than any previously reached point. Excursion: the part of the walk from an LP up to the highest point attained before the next LP. Excursions in the figure: 1, 1, 4, 0, 0, 0, 3. BLAST theory focuses on the maximum heights achieved by these excursions.
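
As a sketch of these definitions (the step sequence below is invented; the walk in the figure is not reproduced here), ladder points and excursion heights can be computed directly:

```python
def excursion_heights(steps):
    """Heights of the excursions between successive ladder points of a walk."""
    x, low = 0, 0           # current position and lowest level reached so far
    peak, heights = 0, []   # peak = highest point since the last ladder point
    for s in steps:
        x += s
        if x < low:         # new ladder point: the previous excursion is finished
            heights.append(peak - low)
            low, peak = x, x
        else:
            peak = max(peak, x)
    heights.append(peak - low)  # excursion still in progress at the end of the walk
    return heights

print(excursion_heights([-1, 1, 1, -1, -1, -1, 1, -1]))  # [0, 2, 1]
```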

The one-dimensional simple random walk The process starts in state X_0 at time t = 0. Independently, at each time step the process takes a jump Z_n: Prob{Z_n = -1} = q, Prob{Z_n = +1} = p, and Prob{Z_n = 0} = 1 - p - q. The state of the process at time n is X_n = X_0 + Z_1 + Z_2 + ... + Z_n.
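
A direct simulation of this definition (the parameter values are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 0.3, 0.5                        # illustrative values; Prob{Z_n = 0} = 0.2
Z = rng.choice([1, -1, 0], size=1000, p=[p, q, 1 - p - q])
X = 0 + Z.cumsum()                     # X_n = X_0 + Z_1 + ... + Z_n with X_0 = 0
print(X[-1])                           # position after 1000 steps
```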

The one-dimensional simple random walk In the analysis below assume:
• Probability of a right-step (heads) is p
• Probability of a left-step (tails) is q, where p + q = 1
The following table shows the probabilities associated with the different possible values of k = X_n - X_0 for n = 1, 2, 3, 4:

The one-dimensional simple random walk: the table of probabilities for n = 1, 2, 3, 4.
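
For reference: with p + q = 1, reaching k = X_n - X_0 requires (n+k)/2 right-steps and (n-k)/2 left-steps, so P(X_n - X_0 = k) = C(n, (n+k)/2) p^((n+k)/2) q^((n-k)/2). The table is presumably the binomial probabilities:

n = 1: P(k = -1) = q, P(k = +1) = p
n = 2: P(k = -2) = q^2, P(k = 0) = 2pq, P(k = +2) = p^2
n = 3: P(k = -3) = q^3, P(k = -1) = 3pq^2, P(k = +1) = 3p^2q, P(k = +3) = p^3
n = 4: P(k = -4) = q^4, P(k = -2) = 4pq^3, P(k = 0) = 6p^2q^2, P(k = +2) = 4p^3q, P(k = +4) = p^4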

The gambling banker Consider two urns A and B in a casino game. Initially A contains two white balls, and B contains three black balls. The balls are then 'shuffled' repeatedly at discrete time steps according to the following rule: pick at random one ball from each urn, and swap them.

The gambling banker The three possible states of the system during this (discrete-time, discrete-state-space) stochastic process are given by the number of white balls in urn A: two white, one white and one black, or two black (urn B always holds the remaining three balls).

The gambling banker A banker decides to gamble on the above process. He enters into the following bet: at each step the bank wins £9M if there are two white balls in urn A, but has to pay £1M if not. What will happen to the bank?
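
A numerical check of this bet (a sketch: the transition probabilities are derived from the swap rule, and the long-run answer falls out of the stationary distribution):

```python
import numpy as np

# State w = number of white balls in urn A (0, 1 or 2). Urn A always holds 2
# balls and urn B 3, so B contains (2 - w) white and (1 + w) black balls.
P = np.zeros((3, 3))
for w in range(3):
    pa = w / 2                # P(ball drawn from A is white)
    pb = (2 - w) / 3          # P(ball drawn from B is white)
    for a in (0, 1):          # a = 1 if A's ball is white
        for b in (0, 1):      # b = 1 if B's ball is white
            prob = (pa if a else 1 - pa) * (pb if b else 1 - pb)
            if prob > 0:
                P[w, w - a + b] += prob   # the swap changes w by b - a

# Stationary distribution: left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)                                 # [0.3, 0.6, 0.1]

gain = 9 * pi[2] - 1 * (pi[0] + pi[1])    # win £9M in state w=2, pay £1M otherwise
print(gain)                               # 0.0: in the long run the bet breaks even
```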

Questions