1 Parrondo's Paradox

2 Two losing games can be combined to make a winning game. Game A: repeatedly flip a biased coin (coin a) that comes up heads with probability p_a < 1/2 and tails with probability 1 - p_a. You win a dollar each time it comes up heads.

3 Game B: repeatedly flip coins, but the coin that is flipped depends on the previous outcomes. Let w be the number of your wins so far and l the number of your losses. Each round you bet one dollar, so w - l represents your winnings: if it is negative, you have lost money.

4 Here we use two biased coins (coin b and coin c). If 3 | (w - l), then flip coin b, which comes up heads with probability p_b. Otherwise flip coin c, which comes up heads with probability p_c. Again, you win one dollar if it comes up heads.
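Games A and B are easy to simulate. The sketch below is illustrative, not from the slides; in particular p_a = 0.49 is an assumed example value (the slides only require p_a < 1/2). Over many rounds the final winnings are typically negative for both games.

```python
import random

def play_game_a(p_a, rounds, rng):
    # Game A: each round, win $1 with probability p_a, else lose $1.
    winnings = 0
    for _ in range(rounds):
        winnings += 1 if rng.random() < p_a else -1
    return winnings

def play_game_b(p_b, p_c, rounds, rng):
    # Game B: use coin b when 3 divides the winnings, otherwise coin c.
    winnings = 0
    for _ in range(rounds):
        # Python's % is non-negative for negative winnings, so this
        # correctly tests divisibility by 3.
        p = p_b if winnings % 3 == 0 else p_c
        winnings += 1 if rng.random() < p else -1
    return winnings

rng = random.Random(1)
print("Game A:", play_game_a(0.49, 100_000, rng))
print("Game B:", play_game_b(0.09, 0.74, rounds=100_000, rng=rng))
```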


6 Suppose p_b = 0.09 and p_c = 0.74. If coin b were used exactly 1/3 of the time (whenever the winnings are a multiple of 3) and coin c the other 2/3 of the time, the probability of winning a round would be (1/3) p_b + (2/3) p_c = (1/3)(0.09) + (2/3)(0.74) ≈ 0.523 > 1/2.

7 But coin b may not be used 1/3 of the time! Intuitively, when your winnings are 0, you use coin b and most likely lose, after which you use coin c and most likely win. Thus you may spend a lot of time going back and forth between having lost one dollar and breaking even before either winning one dollar or losing two dollars.

8 Suppose we start playing Game B when the winnings are 0, continuing until we either lose three dollars or win three dollars. Note that if we are more likely to lose than win in this case, then by symmetry we are more likely to lose 3 dollars than win 3 dollars whenever 3 | (w - l). Consider the Markov chain on the state space consisting of the integers {-3, -2, -1, 0, 1, 2, 3}. We show that it is more likely to reach -3 than 3.

9 Let z_i be the probability that the game reaches -3 before reaching 3 when starting with winnings i. We want to calculate z_i for i = -3, ..., 3, especially z_0. If z_0 > 1/2, then starting from 0 it is more likely to lose three dollars than to win three dollars.

10 We have the following equations (with boundary conditions z_{-3} = 1 and z_3 = 0):
z_{-2} = (1 - p_c) z_{-3} + p_c z_{-1}
z_{-1} = (1 - p_c) z_{-2} + p_c z_0
z_0 = (1 - p_b) z_{-1} + p_b z_1
z_1 = (1 - p_c) z_0 + p_c z_2
z_2 = (1 - p_c) z_1 + p_c z_3
Solving this system of equations gives z_0 = 15379/27700 ≈ 0.555 > 1/2.
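As a sanity check, the system can be solved with exact rational arithmetic. The Gauss-Jordan helper below is a generic sketch added here for verification, not something from the slides.

```python
from fractions import Fraction

def solve(A, b):
    # Gauss-Jordan elimination over exact Fractions: returns x with A x = b.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)  # partial pivot
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i and M[r][i] != 0:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

p_b, p_c = Fraction(9, 100), Fraction(74, 100)
q_b, q_c = 1 - p_b, 1 - p_c
zero, one = Fraction(0), Fraction(1)
# Unknowns z_{-2}, z_{-1}, z_0, z_1, z_2, with the boundary values
# z_{-3} = 1 and z_3 = 0 substituted into the right-hand side.
A = [
    [one, -p_c, zero, zero, zero],
    [-q_c, one, -p_c, zero, zero],
    [zero, -q_b, one, -p_b, zero],
    [zero, zero, -q_c, one, -p_c],
    [zero, zero, zero, -q_c, one],
]
b = [q_c, zero, zero, zero, zero]
z0 = solve(A, b)[2]
print(z0, float(z0))  # 15379/27700, about 0.555 > 1/2: Game B is losing
```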

11 Instead of solving the equations directly, there is a simpler way to determine the relative probability of reaching -3 or 3 first. Consider any sequence of moves that starts at 0 and ends at 3 before reaching -3, e.g. s = 0, 1, 0, 1, 2, 3.

12 Create a one-to-one, onto mapping between such sequences and the sequences that start at 0 and end at -3 before 3 by negating every number starting from the last 0 in the sequence. E.g. f(s) = 0, 1, 0, -1, -2, -3.

13 Lemma: For any sequence s of moves that starts at 0 and ends at 3 before -3, we have
P(s occurs) / P(f(s) occurs) = (p_b p_c^2) / ((1 - p_b)(1 - p_c)^2).
Pf: Let t_1 be the number of transitions from 0 to 1 in s; t_2 the number of transitions from 0 to -1 in s; t_3 the sum of the numbers of transitions from -2 to -1, -1 to 0, 1 to 2, and 2 to 3; and t_4 the sum of the numbers of transitions from 2 to 1, 1 to 0, -1 to -2, and -2 to -3.

14 Then the probability that the sequence s occurs is
P(s) = p_b^(t_1) (1 - p_b)^(t_2) p_c^(t_3) (1 - p_c)^(t_4).
Transform s to f(s). The transition (0 -> 1) at the last visit to 0 is changed to (0 -> -1). After this point, in s the total number of transitions that move up 1 is 2 more than the number of transitions that move down 1, since s ends at 3. In f(s), the total number of transitions that move down 1 is two more than the number that move up 1.

15 It follows that the probability f(s) occurs is
P(f(s)) = p_b^(t_1 - 1) (1 - p_b)^(t_2 + 1) p_c^(t_3 - 2) (1 - p_c)^(t_4 + 2).
Thus,
P(s) / P(f(s)) = (p_b p_c^2) / ((1 - p_b)(1 - p_c)^2).

16 Let S be the set of all sequences of moves that start at 0 and end at 3 before -3. Summing over S,
P(reach 3 before -3) / P(reach -3 before 3) = (sum over s in S of P(s)) / (sum over s in S of P(f(s))) = (p_b p_c^2) / ((1 - p_b)(1 - p_c)^2),
which is 12321/15379 < 1. I.e., it is more likely to lose than win.
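The arithmetic is easy to verify exactly (a quick check, not part of the slides):

```python
from fractions import Fraction

p_b, p_c = Fraction(9, 100), Fraction(74, 100)
# The lemma's ratio of winning paths to losing paths.
ratio = (p_b * p_c**2) / ((1 - p_b) * (1 - p_c)**2)
print(ratio)  # 12321/15379, which is < 1
```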

17 Another way of showing Game B is losing: consider the Markov chain on states {0, 1, 2}, which track (w - l) mod 3. Let π_i be the stationary probability of state i. The probability that we win a dollar in the stationary distribution is
p_b π_0 + p_c π_1 + p_c π_2 = p_b π_0 + p_c (1 - π_0) = p_c - (p_c - p_b) π_0.
We want to know whether this is > 1/2 or < 1/2.

18 The equations for the stationary distribution are:
π_0 + π_1 + π_2 = 1
π_1 = p_b π_0 + (1 - p_c) π_2
π_2 = p_c π_1 + (1 - p_b) π_0
π_0 = p_c π_2 + (1 - p_c) π_1
Plugging in p_b and p_c gives π_0 = 673/1759 ≈ 0.383, so p_c - (p_c - p_b) π_0 = 86421/175900 ≈ 0.491 < 1/2. Again, a losing game.
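The stationary distribution can be computed exactly by expressing π_1 and π_2 as multiples of π_0 and then normalizing; this is a verification sketch of the slide's equations, not code from the slides.

```python
from fractions import Fraction

p_b, p_c = Fraction(9, 100), Fraction(74, 100)
q_b, q_c = 1 - p_b, 1 - p_c

# Substituting pi_1 = p_b*pi_0 + q_c*pi_2 into pi_2 = p_c*pi_1 + q_b*pi_0
# gives pi_2*(1 - p_c*q_c) = (p_c*p_b + q_b)*pi_0.
r2 = (p_c * p_b + q_b) / (1 - p_c * q_c)  # pi_2 / pi_0
r1 = p_b + q_c * r2                       # pi_1 / pi_0
pi_0 = 1 / (1 + r1 + r2)                  # normalization: pi_0+pi_1+pi_2 = 1
win = p_c - (p_c - p_b) * pi_0            # P(win a dollar) in steady state
print(pi_0, float(pi_0))  # 673/1759, about 0.383
print(win, float(win))    # 86421/175900, about 0.491 < 1/2
```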

19 Combining Games A and B. Game C: repeatedly perform the following: flip a fair coin d; if d comes up heads, play a round of Game A; if tails, play a round of Game B. Since A and B are both losing games, it seems that Game C should be a losing game too.

20 Analysis: If 3 | (w - l), then you win a round of Game C with probability (1/2) p_a + (1/2) p_b; otherwise you win with probability (1/2) p_a + (1/2) p_c. So Game C behaves like Game B with effective coin probabilities p_b* = (p_a + p_b)/2 and p_c* = (p_a + p_c)/2. By the previous lemma, Game C is winning when
(p_b* (p_c*)^2) / ((1 - p_b*)(1 - p_c*)^2) > 1,
which holds for p_a close enough to 1/2 (e.g. p_a = 0.49 gives a ratio of about 1.04). I.e., Game C appears to be a winning game!
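This can be checked with the lemma's ratio. Here p_a = 0.49 is an assumed illustrative value (the slides only require p_a < 1/2), and `lemma_ratio` is a hypothetical helper name:

```python
from fractions import Fraction

def lemma_ratio(pb, pc):
    # A mod-3 game with coin probabilities (pb, pc) is winning
    # iff this ratio exceeds 1.
    return (pb * pc**2) / ((1 - pb) * (1 - pc)**2)

p_a = Fraction(49, 100)  # assumed example; any p_a slightly below 1/2 works
p_b, p_c = Fraction(9, 100), Fraction(74, 100)

pb_star = (p_a + p_b) / 2  # Game C's effective coin when 3 | (w - l)
pc_star = (p_a + p_c) / 2  # Game C's effective coin otherwise
print(float(lemma_ratio(p_b, p_c)))          # < 1: Game B is losing
print(float(lemma_ratio(pb_star, pc_star)))  # > 1: Game C is winning
```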

21 Analysis: This seems odd, so we check it against the Markov chain approach by plugging in the values. If the winnings from a round of Game A, B, and C are X_A, X_B, and X_C respectively, then it seems that
E[X_C] = (1/2) E[X_A] + (1/2) E[X_B].
But E[X_A] < 0 and E[X_B] < 0, so how can E[X_C] > 0?

22 Reason: the above equation is wrong! Why? Consider the above-mentioned Markov chain on {0, 1, 2} for Games B and C. Letting s represent the current state, we have
E[X_C | s] = (1/2) E[X_A | s] + (1/2) E[X_B | s].

23 Linearity holds for any given step, but we must condition on the current state. Combining the games changes how much time the chain spends in each state, allowing two losing games to become a winning game!
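To make that concrete, the stationary equations from slide 18 let us compare how much time the two chains spend in state 0 (again with the assumed illustrative value p_a = 0.49):

```python
from fractions import Fraction

def pi_0(pb, pc):
    # Stationary probability that (w - l) mod 3 == 0 for the mod-3
    # chain driven by coin probabilities (pb, pc).
    qb, qc = 1 - pb, 1 - pc
    r2 = (pc * pb + qb) / (1 - pc * qc)  # pi_2 / pi_0
    r1 = pb + qc * r2                    # pi_1 / pi_0
    return 1 / (1 + r1 + r2)

p_a = Fraction(49, 100)  # assumed example value with p_a < 1/2
p_b, p_c = Fraction(9, 100), Fraction(74, 100)
pb_star, pc_star = (p_a + p_b) / 2, (p_a + p_c) / 2

b0 = pi_0(p_b, p_c)          # Game B's chain: about 0.383
c0 = pi_0(pb_star, pc_star)  # Game C's chain: about 0.345
print(float(b0), float(c0))
# Game C spends less time in the unfavorable state 0, which is
# enough to push its per-round win probability above 1/2:
print(float(pc_star - (pc_star - pb_star) * c0))  # about 0.503
```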