Section 1.3: A General Discussion of Mean Values


1 Section 1.3: A General Discussion of Mean Values

2 The Binomial Distribution is only one example of a discrete probability distribution. We now give a brief discussion of a general distribution; most of the following is valid for ANY discrete probability distribution. Let u ≡ a variable which can take on any of M discrete values u1, u2, u3, …, uM-1, uM with probabilities P(u1), P(u2), P(u3), …, P(uM-1), P(uM).

3 Let u ≡ a variable which can take on any of M discrete values u1, u2, u3, …, uM-1, uM with probabilities P(u1), P(u2), P(u3), …, P(uM-1), P(uM). The Mean (average) value of u is defined as the ratio of two sums:
ū ≡ <u> ≡ (S2/S1)
Here, S1 ≡ ∑i P(ui), or S1 ≡ P(u1) + P(u2) + P(u3) + … + P(uM-1) + P(uM). For a properly normalized distribution, we must have S1 = ∑i P(ui) = 1. We’ll assume this from now on.

4 ū ≡ <u> ≡ (S2/S1). As just stated, S1 = 1, so ū ≡ <u> ≡ S2. Here, S2 ≡ ∑i ui P(ui), or S2 ≡ u1P(u1) + u2P(u2) + u3P(u3) + … + uM-1P(uM-1) + uMP(uM). Note: μ ≡ standard notation for the mean value, so μ ≡ ū ≡ <u>.
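As a quick numerical illustration of the two sums S1 and S2 (a minimal Python sketch; the three-value distribution used here is made up for illustration, not from the slides):

```python
# Mean of a discrete distribution as the ratio S2/S1.
# The values u_i and probabilities P(u_i) below are made up for illustration.
u = [1.0, 2.0, 3.0]
P = [0.2, 0.5, 0.3]

S1 = sum(P)                                   # normalization sum, should be 1
S2 = sum(ui * Pi for ui, Pi in zip(u, P))     # sum_i u_i P(u_i)
print(S1, S2 / S1)                            # 1.0  2.1  (the mean value u-bar)
```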

5 Sometimes ū is called the 1st moment of P(u). If F(u) is any function of u, the mean value of F(u) is: <F> ≡ ∑i F(ui) P(ui). Some simple mean values are useful for describing any probability distribution P(u):
1. The Mean Value, μ ≡ ū ≡ <u>
This is a measure of the central value of u about which the various values ui are distributed. Consider the quantity Δu ≡ u - ū (the deviation from the mean). Its mean is: <Δu> = <u - ū> = <u> - ū = ū - ū = 0. The mean value of the deviation from the mean is always zero!
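A short sketch of the general rule <F> = ∑i F(ui) P(ui), reusing the made-up distribution from above, and a check that the mean deviation from the mean vanishes:

```python
# <F> = sum_i F(u_i) P(u_i) for an arbitrary function F of u.
u = [1.0, 2.0, 3.0]
P = [0.2, 0.5, 0.3]

def mean(F):
    return sum(F(ui) * Pi for ui, Pi in zip(u, P))

u_bar = mean(lambda x: x)             # the mean value u-bar = 2.1
print(mean(lambda x: x - u_bar))      # <u - u_bar> = 0 (up to rounding)
```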

6 Now look at (Δu)² = (u - <u>)², the square of the deviation from the mean. Its mean value is:
σ² ≡ <(Δu)²> = <(u - <u>)²> = <u² - 2uū + (ū)²>, so σ² = <u²> - 2<u><u> + (<u>)² = <u²> - (<u>)²
This is called the “Mean Square Deviation” (from the mean). It also goes by several other (equivalent!) names: the Dispersion, the Variance, or the 2nd Moment of P(u) about the mean. σ² ≡ <(Δu)²> is a measure of the spread of the u values about the mean ū. Note that <(Δu)²> = 0 if & only if ui = ū for all i. It can easily be shown that <(Δu)²> ≥ 0, or <u²> ≥ (<u>)². Note: σ² ≡ standard notation for the variance.
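The two equivalent forms of the variance can be checked side by side on the same made-up distribution (a minimal sketch, not from the book):

```python
# Variance two ways: <(u - <u>)^2> directly, and <u^2> - <u>^2.
u = [1.0, 2.0, 3.0]
P = [0.2, 0.5, 0.3]

u_bar  = sum(ui * Pi for ui, Pi in zip(u, P))
u2_bar = sum(ui**2 * Pi for ui, Pi in zip(u, P))
var_direct   = sum((ui - u_bar)**2 * Pi for ui, Pi in zip(u, P))
var_shortcut = u2_bar - u_bar**2
print(var_direct, var_shortcut)   # both 0.49
```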

7 We could also define the nth moment of P(u) about the mean: <(Δu)^n> ≡ <(u - <u>)^n>. In Physics this is rarely used beyond n = 2 & almost never beyond n = 3 or 4. NOTE, from math: knowledge of the probability distribution function P(u) gives complete information about the distribution of the values of u. But knowledge of only a few moments, like knowing just ū & <(Δu)²>, gives only partial (though useful) knowledge of the distribution. Knowledge of only some moments is not enough to uniquely determine P(u). Math Theorem: In order to uniquely determine a distribution P(u), we need to know ALL of its moments, that is, the moments for all n = 0, 1, 2, 3, ….
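For completeness, a tiny helper (illustrative only, same made-up distribution) that evaluates the nth moment about the mean; n = 0 recovers the normalization, n = 1 gives zero, and n = 2 gives the variance:

```python
# nth moment of P(u) about the mean: <(u - <u>)^n>.
u = [1.0, 2.0, 3.0]
P = [0.2, 0.5, 0.3]

def central_moment(n):
    u_bar = sum(ui * Pi for ui, Pi in zip(u, P))
    return sum((ui - u_bar)**n * Pi for ui, Pi in zip(u, P))

print([central_moment(n) for n in range(4)])   # ≈ [1.0, 0.0, 0.49, -0.048]
```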

8 Section 1.4 Mean Values for the Random Walk Problem
We’ll also discuss a few math “tricks” for doing discrete sums! We’ve found that for N steps, the probability of making n1 steps to the right & n2 = N - n1 steps to the left is the Binomial Distribution:
WN(n1) = [N!/(n1! n2!)] p^n1 q^n2
where p = the probability of a step to the right and q = 1 - p = the probability of a step to the left.

9 ∑(n1 = 0N) WN(n1) = 1? ∑(n1 = 0N) WN(n1) = 1.
Binomial Distribution: WN(n1) = [N!/(n1!n2!)]pn1qn2 p = the probability of a step to the right, q = 1 – p = the probability of a step to the left. First, lets verify normalization: ∑(n1 = 0N) WN(n1) = 1? Recall the binomial expansion: (p + q)N = ∑(n1 = 0N) [N!/(n1!n2!)]pn1qn2 = ∑(n1 = 0N) WN(n1) But, (p + q) = 1, so (p + q)N = 1, so ∑(n1 = 0N) WN(n1) = 1.
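The normalization can also be verified numerically for any particular N and p (a minimal sketch; the values N = 10, p = 0.3 are made up):

```python
from math import comb

# Check that the sum over n1 of W_N(n1) = C(N, n1) p^n1 q^(N - n1) equals 1.
N, p = 10, 0.3
q = 1 - p
W = [comb(N, n1) * p**n1 * q**(N - n1) for n1 in range(N + 1)]
print(sum(W))   # 1.0, up to floating-point rounding
```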

10 = ∑(n1 = 0N) n1[N!/(n1!(N-n1)!] pn1qN-n1 (1)
Question 1: What is the mean number of steps to the right? <n1> ≡ ∑(n1 = 0N) n1WN(n1) = ∑(n1 = 0N) n1[N!/(n1!(N-n1)!] pn1qN-n1 (1) Do this sum by looking it up in a table OR we can use a “trick” as follows. The following is a general procedure which usually works, even if it doesn’t always have mathematical “rigor”. Temporarily, treat p & q as “arbitrary”, continuous variables, ignoring that p + q =1. NOTE, if p is a continuous variable, then clearly: n1pn1 ≡ p[(pn1)/p]

11 <n1> ≡ ∑(n1 = 0 to N) n1 WN(n1) = ∑(n1 = 0 to N) n1 [N!/(n1!(N-n1)!)] p^n1 q^(N-n1)    (1)
Temporarily treat p & q as “arbitrary” continuous variables. If p is a continuous variable, then n1 p^n1 ≡ p ∂(p^n1)/∂p. Use this in (1), interchanging the sum & the derivative:
<n1> = ∑(n1 = 0 to N) [N!/(n1!(N-n1)!)] n1 p^n1 q^(N-n1)
     = ∑(n1 = 0 to N) [N!/(n1!(N-n1)!)] [p ∂(p^n1)/∂p] q^(N-n1)
     = p(∂/∂p) ∑(n1 = 0 to N) [N!/(n1!(N-n1)!)] p^n1 q^(N-n1)
     = p(∂/∂p)(p + q)^N = pN(p + q)^(N-1)
In our special case (p + q) = 1, so (p + q)^(N-1) = 1, and thus μ = <n1> = Np.

12 Summary: For the Binomial Distribution, the mean number of steps to the right is <n1> = Np. We might have guessed this! Similarly, we can easily show that the mean number of steps to the left is <n2> = Nq. Of course, <n1> + <n2> = N(p + q) = N, as it should be!

13 Question 2: What is the mean displacement, <x> = <m>ℓ? Clearly, m = n1 - n2, so <m> = <n1> - <n2> = N(p - q) & <x> = <m>ℓ = N(p - q)ℓ. If p = q = ½, <m> = 0, so <x> = <m>ℓ = 0.
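A one-line numerical illustration of <x> = N(p - q)ℓ (the values of N, p, and the step length ℓ are made up):

```python
# Mean net displacement <x> = N (p - q) * l, with step length l.
N, p, l = 100, 0.6, 1.0
q = 1 - p
print(N * (p - q) * l)   # 20.0 step lengths to the right, on average
```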

14 Question 3: What is the dispersion (variance) σ² = <(Δn1)²> = <(n1 - <n1>)²> in the number of steps to the right? That is, what is the spread in n1 values about <n1>? Our general discussion has shown:
σ² = <(Δn1)²> = <(n1)²> - (<n1>)²

15 We’ve just seen that the mean is μ = <n1> = Np. So, we first need to calculate <(n1)²>:
<(n1)²> = ∑(n1 = 0 to N) (n1)² WN(n1) = ∑(n1 = 0 to N) (n1)² [N!/(n1!(N-n1)!)] p^n1 q^(N-n1)    (2)
Use a similar “trick” as before & note that (n1)² p^n1 ≡ [p(∂/∂p)]² p^n1.
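The operator identity (n1)² p^n1 = [p(∂/∂p)]² p^n1 behind this trick can be confirmed symbolically, for example with sympy (a sketch, assuming sympy is available):

```python
import sympy as sp

# Check that applying (p * d/dp) twice to p**n1 multiplies it by n1**2.
p, n1 = sp.symbols('p n1', positive=True)
once  = p * sp.diff(p**n1, p)               # = n1 * p**n1
twice = p * sp.diff(once, p)                # = n1**2 * p**n1
print(sp.simplify(twice - n1**2 * p**n1))   # 0
```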

16 After some math (in the book) & using p + q = 1, we find:
<(n1)²> = (Np)² + Npq = (<n1>)² + Npq
So, finally, using <(Δn1)²> = <(n1)²> - (<n1>)², we get
σ² = <(Δn1)²> = Npq
σ² is the dispersion or variance of the binomial distribution. The root mean square (rms) deviation from the mean is defined in general as σ ≡ [<(Δn1)²>]^½. For the binomial distribution, this is σ = [Npq]^½. Note that σ is a measure of the width of the distribution.
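The direct sum (2) and the closed form (Np)² + Npq can be compared numerically (a minimal sketch; N and p are again made-up values):

```python
from math import comb

# Evaluate <n1^2> = sum n1^2 W_N(n1) and compare with (Np)^2 + Npq.
N, p = 10, 0.3
q = 1 - p
mean_n1_sq = sum(n1**2 * comb(N, n1) * p**n1 * q**(N - n1) for n1 in range(N + 1))
print(mean_n1_sq, (N * p)**2 + N * p * q)   # both 11.1
```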

17 Summary: For the Binomial Distribution
Dispersion or variance in the number of steps to the right: σ² = <(Δn1)²> = Npq. Root mean square (rms) deviation from the mean: σ ≡ [<(Δn1)²>]^½ = [Npq]^½, and σ measures the width of the distribution. Again note that <n1> = Np, so the relative width of the distribution is:
(σ/μ) = [Npq]^½/(Np) = q^½/(pN)^½
If p = q, this is (σ/μ) = 1/N^½ = N^(-½). So, as N increases, the mean value increases ∝ N, but the relative width decreases ∝ N^(-½).
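The N^(-½) shrinkage of the relative width is easy to see numerically (a minimal sketch with p = q = ½ and made-up values of N):

```python
from math import sqrt

# Relative width sigma/mu = sqrt(Npq)/(Np) for p = q = 1/2.
p = 0.5
q = 1 - p
for N in (100, 10_000, 1_000_000):
    print(N, sqrt(N * p * q) / (N * p))   # 0.1, 0.01, 0.001: falls off like 1/sqrt(N)
```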

18 Question 4: What is the dispersion <x²> = <(Δm)²>ℓ² = <(m - <m>)²>ℓ² in the net displacement? Or, what is the spread in m values about <m>? We had m = n1 - n2 = 2n1 - N, so <m> = 2<n1> - N. Then
Δm = m - <m> = (2n1 - N) - (2<n1> - N) = 2(n1 - <n1>) = 2(Δn1)
so (Δm)² = 4(Δn1)² and <(Δm)²> = 4<(Δn1)²>. Using <(Δn1)²> = Npq, this becomes:
<(Δm)²> = 4Npq
If p = q = ½, <(Δm)²> = N.
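A small Monte Carlo check of <(Δm)²> = 4Npq (a sketch; the walk length, p, and sample size are made up, and the simulated value is only statistically close to the exact one):

```python
import random

# Simulate many N-step walks and compare the sample variance of m = n1 - n2
# with the exact result <(Δm)^2> = 4 N p q.
N, p, trials = 50, 0.5, 20_000
ms = []
for _ in range(trials):
    n1 = sum(random.random() < p for _ in range(N))   # steps to the right
    ms.append(2 * n1 - N)                             # m = n1 - n2 = 2 n1 - N
mean_m = sum(ms) / trials
var_m = sum((m - mean_m)**2 for m in ms) / trials
print(var_m, 4 * N * p * (1 - p))   # sample variance ≈ 50 vs exact 50
```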

19 Summary: 1-Dimensional Random Walk Problem
The Probability Distribution is Binomial: WN(n1) = [N!/(n1! n2!)] p^n1 q^n2
Mean number of steps to the right: μ = <n1> = Np
Dispersion in n1: <(Δn1)²> = Npq
Relative width: (σ/μ) = q^½/(pN)^½
As N increases, the mean value increases ∝ N & the relative width decreases ∝ N^(-½).

