Chapter 9 Mathematical Preliminaries
Stirling's Approximation (9.2)

Start from log n! = log 1 + log 2 + ⋯ + log n. Approximating the sum by the trapezoid rule (Fig. 9.2-1),

    log n! ≈ ∫₁ⁿ log x dx + ½ log n = n log n − n + 1 + ½ log n,

and taking antilogs gives n! ≈ C √n (n/e)ⁿ for some constant C. The midpoint formula (Fig. 9.2-2) gives the same form with a different constant; taking antilogs again, the two estimates bracket the true value.
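As a quick numerical sketch (not part of the slides; the helper name `stirling` is mine), the standard library can compare n! with the approximation once C = √(2π) is known:

```python
import math

# Compare n! with Stirling's approximation sqrt(2*pi*n) * (n/e)**n.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 5, 10, 20):
    ratio = stirling(n) / math.factorial(n)
    print(n, ratio)  # the ratio approaches 1 from below as n grows
```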
To calculate C exactly, use a clever trick. For k ≥ 0, let

    I_k = ∫₀^{π/2} sinᵏ x dx.

Integration by parts gives the reduction formula I_k = ((k − 1)/k) I_{k−2}, with I₀ = π/2 and I₁ = 1. (9.2)
Iterating the reduction formula expresses I_{2k} and I_{2k+1} as ratios of factorials times π/2 and 1 respectively. Since sin x ≤ 1 on the interval, I_{2k+1} ≤ I_{2k} ≤ I_{2k−1}, so the ratio I_{2k}/I_{2k+1} → 1, which is Wallis's product for π/2. Substitute n! ≈ C √n (n/e)ⁿ into those factorials and let k → ∞: all the powers cancel, leaving C²/4 = π/2, so C = √(2π) and

    n! ~ √(2πn) (n/e)ⁿ.  (9.2)
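A minimal numerical sketch of the limit used in this step (the helper name `wallis` is mine, not from the slides):

```python
import math

# Wallis's product: prod_{k=1..m} (2k)**2 / ((2k-1)*(2k+1)) -> pi/2,
# the limit that pins down the constant C = sqrt(2*pi).
def wallis(m):
    p = 1.0
    for k in range(1, m + 1):
        p *= (2 * k) ** 2 / ((2 * k - 1) * (2 * k + 1))
    return p

print(wallis(100000), math.pi / 2)  # nearly equal for large m
```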
Binomial Bounds (9.3)

Show the volume of a sphere of radius λn in the n-dimensional unit hypercube is:

    Σ_{k=0}^{λn} C(n, k).

Assuming 0 < λ < ½ (since the terms are reflected about n/2), the terms grow monotonically, and bounding the last term by Stirling gives:

    Σ_{k=0}^{λn} C(n, k) ≤ 2^{n H(λ)},  where H(λ) = −λ log₂ λ − (1 − λ) log₂ (1 − λ)

is the binary entropy function. (9.3)
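A hedged check of the entropy bound (the helper names `H` and `sphere_volume` are illustrative, not from the slides):

```python
import math

# Check: sum_{k <= lam*n} C(n, k) <= 2**(n * H(lam)) for 0 < lam < 1/2.
def H(lam):
    return -lam * math.log2(lam) - (1 - lam) * math.log2(1 - lam)

def sphere_volume(n, lam):
    return sum(math.comb(n, k) for k in range(int(lam * n) + 1))

n, lam = 100, 0.3
print(sphere_volume(n, lam) <= 2 ** (n * H(lam)))  # True
```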
The Gamma Function (9.4)

Idea: extend the factorial to non-integral arguments. Define

    Γ(n) = ∫₀^∞ x^{n−1} e^{−x} dx,  with Γ(1) = 1 (so 0! = 1 by convention).

For n > 1, integrate by parts with f = x^{n−1} and dg = e^{−x} dx:

    Γ(n) = (n − 1) Γ(n − 1),

so for positive integers Γ(n) = (n − 1)!. (9.4)
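A quick confirmation with the standard library (not part of the slides):

```python
import math

# Gamma interpolates the factorial: Gamma(n) = (n - 1)! at the integers.
for n in range(1, 8):
    print(n, math.gamma(n), math.factorial(n - 1))
# Gamma also assigns a value to non-integral arguments, e.g. Gamma(3.5).
```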
To evaluate Γ(½), substitute x = t² to get Γ(½) = 2 ∫₀^∞ e^{−t²} dt. Square this as a double integral over x and y, with area element dx dy, then convert to polar coordinates, with area element r dr dθ:

    Γ(½)² = 4 ∫₀^∞ ∫₀^∞ e^{−(x² + y²)} dx dy = 4 ∫₀^{π/2} ∫₀^∞ e^{−r²} r dr dθ = π,

so Γ(½) = √π. (9.4)
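A one-line numerical confirmation (not from the slides):

```python
import math

# The polar-coordinate trick gives Gamma(1/2) = sqrt(pi).
print(math.gamma(0.5), math.sqrt(math.pi))
print(math.isclose(math.gamma(0.5), math.sqrt(math.pi)))  # True
```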
N-Dimensional Euclidean Space (9.5)

Use the Pythagorean distance to define spheres: the sphere of radius r about the origin consists of all points with

    x₁² + x₂² + ⋯ + x_n² ≤ r².

Consequently their volume depends proportionally on rⁿ: V_n(r) = C_n rⁿ, as is seen by converting the volume integral to polar coordinates. (9.5)
To find C_n, just verify by the substitution r² = t that the resulting gamma-function integrals give

    C_n = π^{n/2} / Γ(n/2 + 1).

From the table on page 177 of the book:

    n     C_n          value
    1     2            2.00000
    2     π            3.14159
    3     4π/3         4.18879
    4     π²/2         4.93480
    5     8π²/15       5.26379
    6     π³/6         5.16771
    7     16π³/105     4.72477
    8     π⁴/24        4.05871
    2k    πᵏ/k!        → 0

(9.5)
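The table can be reproduced numerically (a sketch; the helper name `C` is mine):

```python
import math

# C_n = pi**(n/2) / Gamma(n/2 + 1); reproduces the table and tends to 0.
def C(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in range(1, 9):
    print(n, round(C(n), 5))
print(C(100))  # vanishingly small in high dimension
```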
Interesting Facts about N-dimensional Euclidean Space

C_n → 0 as n → ∞.
V_n(r) → 0 as n → ∞ for a fixed r: the volume approaches 0 as the dimension increases!
Almost all the volume is near the surface (as n → ∞): since V_n(r) ∝ rⁿ, the fraction of the volume within distance εr of the surface is 1 − (1 − ε)ⁿ → 1 for any fixed ε > 0. (end of 9.5)
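The shell fact is easy to see numerically (a sketch, not from the slides):

```python
# Fraction of V_n(r) within distance eps*r of the surface: 1 - (1 - eps)**n,
# since volume scales as r**n.
eps = 0.01
for n in (3, 100, 1000):
    print(n, 1 - (1 - eps) ** n)
# Even a 1% shell holds nearly all the volume once n is large.
```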
Angle between the vector (1, 1, …, 1) and each coordinate axis: by definition,

    cos θ = (length of projection along the axis) / (length of the entire vector) = 1/√n.

As n → ∞: cos θ → 0, so θ → π/2. For large n, the diagonal line is almost perpendicular to each axis!

What about the angle between random vectors, x and y, of the form (±1, ±1, …, ±1)? Here cos θ = (x ∙ y)/n, an average of n independent ±1 terms, which tends to 0 by the law of large numbers. Hence, for large n, there are almost 2ⁿ random diagonal lines which are almost perpendicular to each other! (end of 9.8)
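A small simulation of the near-perpendicularity claim (a sketch, not from the slides; seed and names are mine):

```python
import random

# cos(theta) between two random (+/-1)-vectors is (x . y)/n, which is
# small for large n: the vectors are nearly perpendicular.
random.seed(1)
n = 10000
x = [random.choice((-1, 1)) for _ in range(n)]
y = [random.choice((-1, 1)) for _ in range(n)]
cos_theta = sum(a * b for a, b in zip(x, y)) / n
print(abs(cos_theta) < 0.1)  # True: typically on the order of 1/sqrt(n)
```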
Chebyshev's Inequality (9.7)

Let X be a discrete or continuous random variable with p(x_i) = the probability that X = x_i. Since every term x_i² p(x_i) ≥ 0, the mean square satisfies

    E{X²} = Σ_i x_i² p(x_i) ≥ Σ_{|x_i| ≥ ε} x_i² p(x_i) ≥ ε² Σ_{|x_i| ≥ ε} p(x_i) = ε² P{|X| ≥ ε}.

This gives Chebyshev's inequality:

    P{|X| ≥ ε} ≤ E{X²} / ε².  (9.7)
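An empirical sketch of the inequality (not from the slides; the uniform distribution and seed are my choices):

```python
import random

# Empirical check of P{|X| >= eps} <= E{X**2} / eps**2,
# here for X uniform on (-1, 1), so E{X**2} = 1/3.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100000)]
mean_square = sum(x * x for x in xs) / len(xs)
eps = 0.9
prob = sum(1 for x in xs if abs(x) >= eps) / len(xs)
print(prob, mean_square / eps ** 2)  # the bound holds (and is not tight)
```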
Variance (9.7)

The variance of X is the mean square about the mean value of X:

    V{X} = E{(X − E{X})²}.

So the variance, expanded via linearity of expectation, is:

    V{X} = E{X²} − (E{X})².

Note: V{1} = 0, hence V{c} = 0 for any constant c, and V{cX} = c² V{X}.
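The scaling property can be checked on any small dataset (a sketch, not from the slides):

```python
import math
import statistics

# V{cX} = c**2 * V{X}: scaling a dataset by c scales its variance by c**2.
data = [1.0, 2.0, 4.0, 8.0]
c = 3.0
v = statistics.pvariance(data)
v_scaled = statistics.pvariance([c * x for x in data])
print(math.isclose(v_scaled, c ** 2 * v))  # True
```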
The Law of Large Numbers (9.8)

Suppose X and Y are independent random variables, with E{X} = a, E{Y} = b, V{X} = σ², V{Y} = τ². Then, because of independence,

    E{(X − a) ∙ (Y − b)} = E{X − a} ∙ E{Y − b} = 0 ∙ 0 = 0,

and

    V{X + Y} = E{(X − a + Y − b)²}
             = E{(X − a)²} + 2E{(X − a)(Y − b)} + E{(Y − b)²}
             = V{X} + V{Y} = σ² + τ².

Now consider n independent trials for X, called X₁, …, X_n, and let A = (X₁ + ⋯ + X_n)/n be their average. The expectation of their average is (as expected!) E{A} = a. (9.8)
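The additivity of variance under independence can be seen empirically (a sketch; distribution, seed, and names are my choices):

```python
import random
import statistics

# Empirical sketch: V{X + Y} = V{X} + V{Y} for independent X and Y.
random.seed(3)
n = 100000
xs = [random.uniform(0, 1) for _ in range(n)]
ys = [random.uniform(0, 1) for _ in range(n)]
v_sum = statistics.pvariance([x + y for x, y in zip(xs, ys)])
v_parts = statistics.pvariance(xs) + statistics.pvariance(ys)
print(abs(v_sum - v_parts) < 0.01)  # True: both are close to 1/12 + 1/12
```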
The variance of their average is (remember independence):

    V{A} = V{X₁ + ⋯ + X_n}/n² = n σ²/n² = σ²/n.

So, what is the probability that their average A is not close to the mean E{X} = a? Use Chebyshev's inequality:

    P{|A − a| ≥ ε} ≤ V{A}/ε² = σ²/(n ε²).

Let n → ∞: the bound goes to 0 for any fixed ε.

Weak Law of Large Numbers: the average of a large enough number of independent trials comes arbitrarily close to the mean with arbitrarily high probability. (9.8)
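A final simulation of the weak law (a sketch, not from the slides; the uniform distribution, seed, and `average` helper are mine):

```python
import random

# The average of n uniform(0,1) trials concentrates around the mean 0.5,
# since V{A} = sigma**2 / n shrinks as n grows.
random.seed(2)

def average(n):
    return sum(random.uniform(0, 1) for _ in range(n)) / n

for n in (10, 1000, 100000):
    print(n, abs(average(n) - 0.5))
```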