For the 1 Dimensional Random Walk Problem


Probability Distributions for Large N
Continuous Distributions: Gaussian & Others

For the 1-Dimensional Random Walk Problem, the probability distribution is binomial:
W_N(n₁) = [N!/(n₁! n₂!)] p^n₁ q^n₂,  where q = 1 – p and n₂ = N – n₁
Mean number of steps to the right: μ = ⟨n₁⟩ = Np
Dispersion in n₁: σ² = ⟨(Δn₁)²⟩ = Npq
Relative width: (σ/μ) = q^½(pN)^(-½)
As N increases, the mean value increases ∝ N, and the relative width decreases ∝ N^(-½). [Figure: W_N(n₁) for N = 20, p = q = ½.]
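As a quick numerical check of these scalings, here is a minimal Python sketch (the function and variable names are ours, not from the slides). It evaluates μ, σ, and σ/μ for growing N and shows the mean growing ∝ N while the relative width shrinks ∝ N^(-½):

```python
from math import sqrt

def random_walk_stats(N, p):
    """Mean, width, and relative width of the binomial W_N(n1)."""
    q = 1.0 - p
    mean = N * p                 # mu = <n1> = Np
    sigma = sqrt(N * p * q)      # sigma^2 = <(dn1)^2> = Npq
    return mean, sigma, sigma / mean

for N in (20, 200, 2000):
    mu, sigma, rel = random_walk_stats(N, 0.5)
    print(f"N={N:5d}: mean={mu:7.1f}, width={sigma:6.2f}, relative width={rel:.4f}")
# Each factor of 10 in N multiplies the mean by 10 but divides
# the relative width by sqrt(10).
```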

In the following, as before, the results will only be summarized; see other sources for derivation details. Imagine N getting larger & larger. Based on what we just said, the relative width of W_N(n₁) gets smaller & smaller and the mean value ⟨n₁⟩ gets larger & larger. If N is VERY, VERY large, we can treat W(n₁) as a continuous function of a continuous variable n₁. For large N, it's convenient to look at the natural log ln[W(n₁)] rather than the function itself, and to do a Taylor series expansion of ln[W(n₁)] about the value of n₁ where W(n₁) has its maximum.

N is VERY, VERY Large!! Do a Taylor series expansion of ln[W(n₁)] about the n₁ for which W(n₁) is a maximum. The math (see references) shows that W(n₁) has its maximum at n₁ = μ = ⟨n₁⟩ = Np, and that its width is σ² = ⟨(Δn₁)²⟩ = Npq. To evaluate ln[W(n₁)], use Stirling's approximation for the logs of large factorials.

Stirling's Approximation: If N is a large integer, the natural log of its factorial is approximately:
ln[N!] ≈ N[ln(N) – 1] + (½)ln(2πN)   (1)
But N is VERY, VERY large, N >> ln(N), so neglect the last term in (1). So, in our case, we will use
ln[N!] ≈ N[ln(N) – 1]
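A short numerical check (a Python sketch; the names are ours) shows how accurate the two forms of Stirling's approximation are, and why the (½)ln(2πN) correction can be dropped when N is very large:

```python
import math

def ln_factorial_exact(N):
    # math.lgamma(N + 1) equals ln(N!) without computing the huge factorial.
    return math.lgamma(N + 1)

def stirling(N, keep_correction=True):
    s = N * (math.log(N) - 1.0)                  # N[ln(N) - 1]
    if keep_correction:
        s += 0.5 * math.log(2.0 * math.pi * N)   # (1/2) ln(2 pi N)
    return s

for N in (10, 100, 10_000, 1_000_000):
    exact = ln_factorial_exact(N)
    err_full = exact - stirling(N, keep_correction=True)
    err_crude = exact - stirling(N, keep_correction=False)
    print(f"N={N:>9}: ln(N!)={exact:.4e}, full-form error={err_full:.2e}, "
          f"crude-form error={err_crude:.2e}")
# The crude form's error ~ (1/2)ln(2 pi N) grows only logarithmically,
# so compared with ln(N!) ~ N ln(N) it becomes negligible as N grows.
```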

In this large-N, large-n₁ limit, the binomial distribution W(n₁) becomes:
W(n₁) = Ŵ exp[-(n₁ - ⟨n₁⟩)²/(2⟨(Δn₁)²⟩)],  where Ŵ = [2π⟨(Δn₁)²⟩]^(-½)
This is the Gaussian Distribution or Normal Distribution. Again note that μ = ⟨n₁⟩ = Np and σ² = ⟨(Δn₁)²⟩ = Npq.
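To see this limit emerge numerically, here is a sketch (Python; the choices N = 100, p = ½ are ours) comparing the exact binomial W_N(n₁) to the Gaussian with the same μ = Np and σ² = Npq:

```python
from math import comb, exp, pi, sqrt

def binomial_W(N, p, n1):
    """Exact W_N(n1) = N!/(n1! n2!) p^n1 q^n2."""
    return comb(N, n1) * p**n1 * (1.0 - p)**(N - n1)

def gaussian_W(N, p, n1):
    """Large-N Gaussian approximation with the same mean and width."""
    mu, var = N * p, N * p * (1.0 - p)
    return exp(-(n1 - mu)**2 / (2.0 * var)) / sqrt(2.0 * pi * var)

N, p = 100, 0.5
for n1 in range(40, 61, 5):   # sample points near the peak at n1 = Np = 50
    b, g = binomial_W(N, p, n1), gaussian_W(N, p, n1)
    print(f"n1={n1}: binomial={b:.5f}, gaussian={g:.5f}")
# Already at N = 100 the two distributions agree to a few parts in a thousand
# near the peak.
```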

The Central Limit Theorem
The reasoning which led to this result for the large-N, continuous-n₁ limit started with the binomial distribution, but it is a very general result: starting with any of MANY discrete probability distributions and taking the limit of LARGE N results in the Gaussian or Normal Distribution. This is called The Central Limit Theorem; the closely related Law of Large Numbers is discussed below.

One of the most important results of probability theory is The Central Limit Theorem:
The distribution of any random phenomenon tends to be Gaussian or Normal if we average it over a large number of independent repetitions. This theorem allows us to analyze and predict the results of chance phenomena when we average over many observations.
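A quick illustration (a Python sketch; the uniform distribution and the sample sizes are our arbitrary choices): averaging many draws from a decidedly non-Gaussian distribution produces an approximately Gaussian distribution of sample means.

```python
import random
import statistics

random.seed(1)

def sample_mean(n):
    # Average n independent draws from a (very non-Gaussian) uniform [0, 1).
    return sum(random.random() for _ in range(n)) / n

means = [sample_mean(50) for _ in range(20_000)]

# For uniform [0,1): mu = 1/2, sigma^2 = 1/12, so the mean of 50 draws should
# be approximately Gaussian with mu = 0.5 and sigma = sqrt(1/(12*50)) ~ 0.0408.
print("mean of sample means:", statistics.fmean(means))
print("std of sample means :", statistics.stdev(means))

# Gaussian check: ~68.3% of the sample means should fall within 1 sigma of mu.
sigma = statistics.stdev(means)
inside = sum(abs(m - 0.5) <= sigma for m in means) / len(means)
print("fraction within 1 sigma:", inside)
```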

Related to the Central Limit Theorem is The Law of Large Numbers: If a random phenomenon is repeated a large number of times,
1. The proportion of trials on which each outcome occurs gets closer and closer to the probability of that outcome, and
2. The mean of the observed values gets closer and closer to the mean μ of a Gaussian distribution which describes the data.

The Law of Large Numbers
This is the observation that, as the number of repetitions of a probability experiment increases, two (related) phenomena occur:
1. The proportion with which a certain outcome is observed gets closer to the probability of the outcome.
2. The mean of the observed values gets closer and closer to the value of μ predicted by probability theory.
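Both phenomena show up in a few lines of simulation (a sketch; the coin-flip setup and p = 0.3 are our choices): as the number of trials grows, the observed proportion of successes drifts toward p.

```python
import random

random.seed(2)
p = 0.3                 # probability of a "success" on each trial
successes, total = 0, 0

for n_trials in (100, 10_000, 1_000_000):
    while total < n_trials:
        successes += random.random() < p   # True counts as 1
        total += 1
    # The observed proportion (= mean of the 0/1 outcomes) approaches p.
    print(f"after {total:>9} trials: observed proportion = "
          f"{successes/total:.5f} (true p = {p})")
```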

The Gaussian Probability Distribution
In the limit of a large number of steps in the random walk, N >> 1, the binomial distribution becomes a Gaussian distribution:
W(n₁) = [2π⟨(Δn₁)²⟩]^(-½) exp[-(n₁ - ⟨n₁⟩)²/(2⟨(Δn₁)²⟩)],  μ = ⟨n₁⟩ = Np,  σ² = ⟨(Δn₁)²⟩ = Npq
We can also convert to the probability distribution for the displacement m = n₁ – n₂, in the large-N limit (after algebra):
P(m) = [2πσ²]^(-½) exp[-(m - μ)²/(2σ²)],  μ = ⟨m⟩ = N(p – q),  σ² = ⟨(Δm)²⟩ = 4Npq
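The quoted moments of the displacement can be checked by direct simulation (a Python sketch; N, p, and the number of simulated walks are our arbitrary choices): for m = n₁ – n₂, ⟨m⟩ should approach N(p – q) and ⟨(Δm)²⟩ should approach 4Npq.

```python
import random
import statistics

random.seed(3)
N, p = 400, 0.6
q = 1.0 - p

def displacement():
    # m = n1 - n2 = (steps right) - (steps left) after N steps.
    n1 = sum(random.random() < p for _ in range(N))
    return n1 - (N - n1)

walks = [displacement() for _ in range(20_000)]
print("simulated <m>      :", statistics.fmean(walks))
print("predicted N(p - q) :", N * (p - q))
print("simulated <(dm)^2> :", statistics.pvariance(walks))
print("predicted 4Npq     :", 4 * N * p * q)
```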

Question: The Gaussian Distribution is also called the "Normal Distribution". So, shouldn't logic dictate that there should also be an "Abnormal Distribution"?

The Gaussian Distribution is also called the "Normal Distribution". Less well known is that there is also a "Paranormal Distribution"!

P(m) = [2π2]-½exp[-(m -)2/(22)] We can express this in terms of x = mℓ. As N  >> 1, x can be treated as continuous. In this case, |P(m+2) – P(m)| << P(m) & discrete values of P(m) get closer & closer together. Now, ask: What is the probability that, after N steps, the particle is in the range x to x + dx? Let the probability distribution for this ≡ P(x). Then, we have: P(x)dx = (½)P(m)(dx/ℓ). The range dx contains (½)(dx/ℓ) possible values of m, since the smallest possible dx is dx = 2ℓ.

After some math, we obtain the standard form of the Gaussian (Normal) Distribution:
P(x)dx = (2π)^(-½) σ^(-1) exp[-(x – μ)²/(2σ²)] dx
μ ≡ N(p – q)ℓ ≡ mean value of x
σ ≡ 2ℓ(Npq)^½ ≡ width of the distribution
NOTE: The generality of the arguments we've used is such that a Gaussian Distribution occurs in the limit of large numbers for MANY discrete distributions!

Note: To deal with Gaussian distributions, we need to do integrals with them! Many are tabulated!!
Is P(x) properly normalized? That is, does ∫P(x)dx = 1 (limits -∞ < x < ∞)?
∫P(x)dx = (2π)^(-½) σ^(-1) ∫exp[-(x – μ)²/(2σ²)]dx
  = (2π)^(-½) σ^(-1) ∫exp[-y²/(2σ²)]dy   (y = x – μ)
  = (2π)^(-½) σ^(-1) [(2π)^½ σ]   (from a table)
So ∫P(x)dx = 1.

Compute the mean value of x: ⟨x⟩ = ∫xP(x)dx (limits -∞ < x < ∞). Detailed math and using a table gives:
⟨x⟩ = μ ≡ N(p – q)ℓ
Similarly, for the dispersion in x, ⟨(Δx)²⟩:
⟨(Δx)²⟩ = σ² = 4Npqℓ²
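Rather than looking these up in a table, all three integrals can be checked numerically. A minimal sketch, assuming SciPy is available (the μ and σ values are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

mu, sigma = 2.0, 0.5   # arbitrary illustrative values

def P(x):
    # Standard Gaussian form: (2 pi sigma^2)^(-1/2) exp[-(x - mu)^2/(2 sigma^2)]
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

norm, _ = quad(P, -np.inf, np.inf)                          # should be 1
mean, _ = quad(lambda x: x * P(x), -np.inf, np.inf)         # should be mu
var, _ = quad(lambda x: (x - mean)**2 * P(x), -np.inf, np.inf)  # should be sigma^2

print(f"normalization = {norm:.6f} (expect 1)")
print(f"<x>           = {mean:.6f} (expect {mu})")
print(f"<(dx)^2>      = {var:.6f} (expect {sigma**2})")
```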

Comparison of Binomial & Gaussian Distributions
[Figure: dots = binomial, curve = Gaussian, with the same mean μ and the same width σ.]

Comparison of Binomial & Gaussian Distributions
Similar information as on the previous slide. [Figure: blue histogram = binomial, curve = Gaussian, with the same mean μ and the same width σ.]

Some Well-known & Potentially Useful Properties of Gaussians
[Figure: a Gaussian P(x), with its full width 2σ indicated.]

Areas Under Portions of a Gaussian Distribution Two Graphs with the Same Information in Different Forms

Areas Under Portions of a Gaussian Distribution Again, Two Forms of the Same Information
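Those tabulated areas can be reproduced with the error function (a minimal sketch using only the Python standard library): the fraction of a Gaussian lying within ±kσ of the mean is erf(k/√2).

```python
from math import erf, sqrt

# Area under a Gaussian within mu +/- k*sigma:
# P(|x - mu| <= k*sigma) = erf(k / sqrt(2))
for k in (1, 2, 3):
    area = erf(k / sqrt(2.0))
    print(f"within +/-{k} sigma: {area:.4%}")
# Prints the familiar ~68.27%, ~95.45%, ~99.73% values.
```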

Functions of Random Variables
An important, often-occurring problem: Consider a random variable u, and suppose φ(u) ≡ any continuous function of u.
Question: If P(u)du ≡ the probability of finding u in the range u to u + du, what is the probability W(φ)dφ of finding φ in the range φ to φ + dφ?

Answer: Use essentially the "Chain Rule" of differentiation, but take the absolute value to make sure that the probability satisfies W ≥ 0:
W(φ)dφ ≡ P(u)|du/dφ|dφ
Caution!! φ(u) may not be a single-valued function of u; in that case, sum the contributions from each branch.
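A numerical sanity check of W(φ) = P(u)|du/dφ| (a sketch; the choice φ(u) = exp(u) on a uniform u is ours, picked so that φ is single-valued): sampling u, transforming, and histogramming should reproduce the predicted density.

```python
import math
import random

random.seed(4)

# Take u uniform on (0, 1), so P(u) = 1, and let phi(u) = exp(u).
# Then |du/dphi| = 1/phi, so W(phi) = P(u)|du/dphi| = 1/phi on (1, e).
samples = [math.exp(random.random()) for _ in range(1_000_000)]

lo, hi, bins = 1.0, math.e, 8
width = (hi - lo) / bins
counts = [0] * bins
for phi in samples:
    counts[min(int((phi - lo) / width), bins - 1)] += 1

for i, c in enumerate(counts):
    center = lo + (i + 0.5) * width
    empirical = c / len(samples) / width   # histogram density estimate
    predicted = 1.0 / center               # W(phi) = 1/phi
    print(f"phi~{center:.2f}: empirical={empirical:.3f}, predicted={predicted:.3f}")
```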

Example (Reif's book, page 31): A vector B of constant length and random direction θ, with all θ equally likely (equally probable). The probability of finding θ between θ and θ + dθ is: P(θ)dθ ≡ dθ/(2π).
Question: What is the probability W(Bx)dBx that the x component of B lies between Bx and Bx + dBx?
Clearly, we must have –B ≤ Bx ≤ B. Also, each value of dBx corresponds to 2 possible values of dθ, and dBx = |B sinθ|dθ.

So, we have:
W(Bx)dBx = 2P(θ)|dθ/dBx|dBx = (π)^(-1) dBx/|B sinθ|
Note also that |sinθ| = [1 – cos²θ]^½ = [1 – (Bx)²/B²]^½, so finally:
W(Bx)dBx = (πB)^(-1)[1 – (Bx)²/B²]^(-½) dBx,  –B ≤ Bx ≤ B
  = 0, otherwise
W not only has a maximum at Bx = ±B, it diverges there! It has a minimum at Bx = 0. So, although W diverges at Bx = ±B, it can be shown that its integral is finite, so that W(Bx) is a properly normalized probability: ∫W(Bx)dBx = 1 (limits: –B ≤ Bx ≤ B).
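A Monte Carlo check of this result (a Python sketch; B = 1, the seed, and the binning are our choices). Note the multi-valuedness in action: both θ and –θ give the same Bx, which is where the factor of 2 came from.

```python
import math
import random

random.seed(5)
B = 1.0

# theta is uniform on [0, 2*pi); each Bx = B*cos(theta) arises from two thetas.
samples = [B * math.cos(random.uniform(0.0, 2.0 * math.pi))
           for _ in range(1_000_000)]

bins = 10
width = 2.0 * B / bins
counts = [0] * bins
for bx in samples:
    counts[min(int((bx + B) / width), bins - 1)] += 1

for i, c in enumerate(counts):
    center = -B + (i + 0.5) * width
    empirical = c / len(samples) / width
    # W(Bx) = (pi*B)^(-1) [1 - Bx^2/B^2]^(-1/2), divergent at Bx = +/-B.
    predicted = 1.0 / (math.pi * B * math.sqrt(1.0 - (center / B)**2))
    print(f"Bx~{center:+.2f}: empirical={empirical:.3f}, predicted={predicted:.3f}")
# The two edge bins sit above the center-point prediction because the histogram
# averages over the (integrable) divergence at Bx = +/-B; interior bins agree well.
```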