Stochastic Processes
Nancy Griffeth
January 10, 2014
Funding for this workshop was provided by the program “Computational Modeling and Analysis of Complex Systems,” an NSF Expedition in Computing (Award Number ).

Motivation
- Continuous solutions (e.g., ODEs) depend on large numbers of molecules.
- Stochastic solutions are better for small numbers.
- Strange and surprising things happen!

Game
Two players each have a pile of pennies. Taking turns, a player flips a penny:
- Heads: he gets a penny from the other player.
- Tails: he gives a penny to the other player.
The game continues until one player runs out of pennies.
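A minimal Python sketch of one play of the game (the function name, starting piles, and seed are illustrative assumptions, not from the slides):

```python
import random

def play_game(a=3, b=3, seed=1):
    """Flip a fair penny until one player is broke; return the (a, b) trajectory."""
    rng = random.Random(seed)
    trajectory = [(a, b)]
    while a > 0 and b > 0:
        if rng.random() < 0.5:   # heads: A takes a penny from B
            a, b = a + 1, b - 1
        else:                    # tails: A gives a penny to B
            a, b = a - 1, b + 1
        trajectory.append((a, b))
    return trajectory

print(play_game())
```

Although the players alternate flipping, each flip simply transfers one penny in one direction or the other with probability 1/2, so tracking player A's pile is enough.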

Questions
- What is each player's chance of winning?
- What is the expected value of the game to a player?
- How long will the game go on before a player runs out of pennies?
- What is the probability after m moves that player A has a pennies and player B has b pennies? That is, what is the probability distribution over (a, b)?

Chance of winning
If A and B have the same number of coins, they are equally likely to win. More generally, with m coins to player A and n coins to player B:
- A wins with probability m/(m+n)
- B wins with probability n/(m+n)
The player with more coins is more likely to win.
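These formulas are easy to check by simulation. A minimal Monte Carlo sketch (the helper name, trial count, and seed are illustrative assumptions, not from the slides):

```python
import random

def win_probability(m, n, trials=100_000, seed=0):
    """Estimate the probability that A (starting with m pennies) beats B (n pennies)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a = m
        while 0 < a < m + n:                     # play until someone is broke
            a += 1 if rng.random() < 0.5 else -1
        wins += (a == m + n)                     # A won the whole pot
    return wins / trials

for m, n in [(3, 3), (2, 4), (1, 5)]:
    print(f"m={m}, n={n}: simulated {win_probability(m, n):.3f}, exact {m/(m+n):.3f}")
```

Since the winner takes all m+n pennies, A's average payout converges to (m/(m+n))*(m+n) = m, which previews the expected-value calculation on the next slide.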

Expected Value
Definition: Exp(winnings) = Prob(winning)*value of winning + Prob(losing)*value of losing.
The value of winning is m+n (the winner takes the whole pot); the value of losing is 0. So, if A starts with m coins and B with n:
E(A's winnings) = m/(m+n)*(m+n) + n/(m+n)*0 = m
E(B's winnings) = n/(m+n)*(m+n) + m/(m+n)*0 = n
For example, with m = 2 and n = 4: E(A's winnings) = (2/6)*6 = 2. Each player expects to end with exactly his starting pile, so the game is fair.

Markov Chain
The states of the chain are the possible splits of the six pennies: (0,6), (1,5), (2,4), (3,3), (4,2), (5,1), (6,0). Each flip moves the chain to a neighboring state with probability ½, except that (0,6) -> (0,6) and (6,0) -> (6,0) with probability 1 (the two absorbing states).
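Written out, the chain is a 7x7 one-step transition matrix. A NumPy sketch (the state ordering is chosen to match the tables on the following slides):

```python
import numpy as np

# States in order: (0,6), (1,5), (2,4), (3,3), (4,2), (5,1), (6,0)
P = np.zeros((7, 7))
P[0, 0] = 1.0          # (0,6) is absorbing
P[6, 6] = 1.0          # (6,0) is absorbing
for i in range(1, 6):  # each transient state moves to a neighbor w.p. 1/2
    P[i, i - 1] = 0.5
    P[i, i + 1] = 0.5
print(P)
```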

How long?
This is relatively hard (see the UCLA tutorial). The game ends at move n exactly when the chain is at (5,1) or (1,5) at move n-1 and the next flip absorbs it, so
E[length] = Sum over n of n * 0.5 * [p((5,1); n-1) + p((1,5); n-1)] = Sum over n of n * p((5,1); n-1),
where the last step uses the symmetry p((5,1); k) = p((1,5); k) when starting from (3,3). Starting from (3,3), the expected length is 3*3 = 9 moves; in general it is m*n moves when the players start with m and n pennies. A short computation confirming this follows.
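One standard way to compute the expected length (a sketch; this method is not shown in the slides) is to solve the first-step equations t_i = 1 + 0.5*t_{i-1} + 0.5*t_{i+1} with boundary values t_0 = t_6 = 0, where t_i is the expected number of moves remaining when A holds i pennies:

```python
import numpy as np

# Rows correspond to transient states i = 1..5 (i = A's pennies).
# t_i - 0.5*t_{i-1} - 0.5*t_{i+1} = 1, with t_0 = t_6 = 0 dropped.
A = np.zeros((5, 5))
for row, i in enumerate(range(1, 6)):
    A[row, row] = 1.0
    if i - 1 >= 1:
        A[row, row - 1] = -0.5
    if i + 1 <= 5:
        A[row, row + 1] = -0.5
t = np.linalg.solve(A, np.ones(5))
print(dict(zip(range(1, 6), t)))  # t_3 = 9.0; in general t_i = i*(6 - i)
```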

Probability Matrix
One step of the chain, P. Rows are the current state, columns the next state:

       0,6   1,5   2,4   3,3   4,2   5,1   6,0
0,6    1     0     0     0     0     0     0
1,5    0.5   0     0.5   0     0     0     0
2,4    0     0.5   0     0.5   0     0     0
3,3    0     0     0.5   0     0.5   0     0
4,2    0     0     0     0.5   0     0.5   0
5,1    0     0     0     0     0.5   0     0.5
6,0    0     0     0     0     0     0     1

Two Steps
P^2, the two-step transition probabilities:

       0,6    1,5    2,4    3,3    4,2    5,1    6,0
0,6    1      0      0      0      0      0      0
1,5    0.5    0.25   0      0.25   0      0      0
2,4    0.25   0      0.5    0      0.25   0      0
3,3    0      0.25   0      0.5    0      0.25   0
4,2    0      0      0.25   0      0.5    0      0.25
5,1    0      0      0      0.25   0      0.25   0.5
6,0    0      0      0      0      0      0      1

Three Steps
P^3, the three-step transition probabilities:

       0,6    1,5    2,4    3,3    4,2    5,1    6,0
0,6    1      0      0      0      0      0      0
1,5    0.625  0      0.25   0      0.125  0      0
2,4    0.25   0.25   0      0.375  0      0.125  0
3,3    0.125  0      0.375  0      0.375  0      0.125
4,2    0      0.125  0      0.375  0      0.25   0.25
5,1    0      0      0.125  0      0.25   0      0.625
6,0    0      0      0      0      0      0      1

64 Steps
By 64 steps the chain is essentially absorbed: each transient entry of P^64 is on the order of 10^-4 or smaller, and each row has converged to the absorption probabilities:

       0,6     1,5  2,4  3,3  4,2  5,1  6,0
0,6    1       0    0    0    0    0    0
1,5    ~0.833  ~0   ~0   ~0   ~0   ~0   ~0.167
2,4    ~0.667  ~0   ~0   ~0   ~0   ~0   ~0.333
3,3    ~0.5    ~0   ~0   ~0   ~0   ~0   ~0.5
4,2    ~0.333  ~0   ~0   ~0   ~0   ~0   ~0.667
5,1    ~0.167  ~0   ~0   ~0   ~0   ~0   ~0.833
6,0    0       0    0    0    0    0    1
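All of the tables above can be regenerated by raising the one-step matrix to the corresponding power; a NumPy sketch:

```python
import numpy as np

# Same state ordering as the tables: (0,6), (1,5), ..., (6,0)
P = np.zeros((7, 7))
P[0, 0] = P[6, 6] = 1.0            # absorbing states
for i in range(1, 6):
    P[i, i - 1] = P[i, i + 1] = 0.5

for k in (2, 3, 64):
    print(f"P^{k}:")
    print(np.linalg.matrix_power(P, k).round(4))
```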

Steady States
- Sooner or later, somebody wins (with probability 1).
- Either player can win.
- The system has two stable steady states: it is bistable.

The "Game" with Molecules
[State diagram: each state is a vector of molecule counts for the species L, R, L.R, and L.R.R.L; the states shown are (L=2, R=10, L.R=0, L.R.R.L=0), (L=1, R=9, L.R=1, L.R.R.L=0), (L=0, R=8, L.R=2, L.R.R.L=0), and (L=0, R=8, L.R=0, L.R.R.L=1).]

Chemical Master Equation
A set of first-order differential equations describing how the probability of each state changes with time t. When the reaction rates are time-independent, the process is Markovian.

The Chemical Master Equation
We are interested in p(x; t), the probability that the chemical system will be in state x at time t. The time evolution of p(x; t) is described by the Chemical Master Equation:
dp(x; t)/dt = Σ_μ [ a_μ(x − s_μ) p(x − s_μ; t) − a_μ(x) p(x; t) ],
where s_μ is the stoichiometric vector for reaction μ, giving the changes in each type of molecule as a result of μ, and a_μ is the propensity for reaction μ.
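The master equation is rarely solved in closed form; instead, trajectories of the underlying Markov process are commonly sampled with Gillespie's stochastic simulation algorithm. A minimal sketch for the single binding reaction L + R -> L.R (the rate constant, initial counts, and time horizon are illustrative assumptions, not from the slides):

```python
import random

def gillespie_binding(L=2, R=10, LR=0, k=1.0, t_end=5.0, seed=0):
    """Sample one trajectory of L + R -> L.R with mass-action propensity a(x) = k*L*R."""
    rng = random.Random(seed)
    t = 0.0
    history = [(t, L, R, LR)]
    while True:
        a = k * L * R                    # propensity of the single reaction
        if a == 0:                       # no L or no R left: nothing can fire
            break
        t += rng.expovariate(a)          # exponential waiting time to the next reaction
        if t > t_end:
            break
        L, R, LR = L - 1, R - 1, LR + 1  # apply stoichiometric vector s = (-1, -1, +1)
        history.append((t, L, R, LR))
    return history

for state in gillespie_binding():
    print(state)
```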

Bistability
- A steady state is a state that a system tends to stay in once it reaches it.
- A steady state can be stable or unstable.
- A system that has two stable steady states is called bistable.