1. Numerical quadrature for high-dimensional integrals
László Szirmay-Kalos
2. Brick rule
∫_0^1 f(z) dz ≈ Σ_m f(z_m) Δz = (1/M) Σ_m f(z_m), where Δz = 1/M
Equally spaced abscissas: uniform grid
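As a minimal sketch of the rule (the helper `brick` and the test integrand z² are illustrative choices, not part of the slides), assuming left-endpoint abscissas z_m = m/M:

```cpp
#include <cmath>

// Brick rule on [0,1]: average the integrand at M equally spaced abscissas.
double brick(double (*f)(double), int M) {
    double sum = 0.0;
    for (int m = 0; m < M; ++m)
        sum += f((double)m / M);   // left endpoint of the m-th brick, dz = 1/M
    return sum / M;
}

double square(double z) { return z * z; }  // test integrand, exact integral 1/3
```

With M = 1000 the estimate is within about 5e-4 of 1/3, and the error shrinks as O(1/M), as derived on the next slide.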
3. Error analysis of the brick rule
∫_0^1 f(z) dz ≈ (1/M) Σ_m f(z_m)
Error ≤ M · (Δf/(2M)) · (1/M) = Δf/(2M) = O(1/M), where Δf is the variation of f.
4. Trapezoidal rule
∫_0^1 f(z) dz ≈ Σ_m (f(z_m) + f(z_{m+1}))/2 · Δz = (1/M) Σ_m f(z_m) w(z_m), where Δz = 1/M
w(z_m) = 1 if 1 < m < M,  w(z_m) = 1/2 if m = 1 or m = M
Error = O(1/M)
5. Brick rule in higher dimensions
On [0,1]² with n points per axis (M = n² samples, z_m = (x_i, y_j)):
∫∫ f(x,y) dx dy ≈ (1/n) Σ_j ∫ f(x, y_j) dx ≈ (1/n²) Σ_i Σ_j f(x_i, y_j) = (1/M) Σ_m f(z_m)
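A sketch of the same rule on [0,1]² (the product integrand x·y, with exact integral 1/4, is an illustrative choice; cell midpoints are used as abscissas):

```cpp
#include <cmath>

// 2D brick rule: n samples per axis, M = n*n samples z_m = (x_i, y_j).
double brick2d(double (*f)(double, double), int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            sum += f((i + 0.5) / n, (j + 0.5) / n);  // midpoint of cell (i, j)
    return sum / ((double)n * n);                    // divide by M = n^2
}

double prod_xy(double x, double y) { return x * y; } // exact integral: 1/4
```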
6. Error analysis in higher dimensions
With n² samples:
∫∫ f(x,y) dx dy = (1/n) Σ_j ∫ f(x, y_j) dx ± Δf_y/(2n)
  = (1/n) Σ_j [ (1/n) Σ_i f(x_i, y_j) ± Δf_x/(2n) ] ± Δf_y/(2n)
  = (1/M) Σ_m f(z_m) ± (Δf_x + Δf_y)/2 · M^(−1/2)
7. Classical rules in D dimensions
- Error: (Δf/2) · M^(−1/D) = O(M^(−1/D))
- Required samples for the same accuracy: O((Δf/error)^D)
- Exponential curse of dimensionality: big gaps between rows and columns
8. Monte-Carlo integration
Trace back the integration to an expected-value problem. With the uniform pdf p(z) = 1 on [0,1]^D:
∫ f(z) dz = ∫ f(z) · 1 dz = ∫ f(z) p(z) dz = E[f(z)]
Variance: D²[f(z)]  (figure: pdf of f(z) on the real line)
9. Expected value estimation by averages
Estimator: f* = (1/M) Σ_m f(z_m)
E[f*] = E[(1/M) Σ_m f(z_m)] = (1/M) Σ_m E[f(z_m)] = E[f(z)]
D²[f*] = D²[(1/M) Σ_m f(z_m)] = (1/M²) D²[Σ_m f(z_m)]
If the samples are independent:
D²[f*] = (1/M²) · M · D²[f(z)] = D²[f(z)]/M
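A minimal sketch of the average-based estimator (std::rand and the z² integrand are illustrative stand-ins; any uniform generator works):

```cpp
#include <cstdlib>
#include <cmath>

// f* = (1/M) * sum f(z_m) with uniform pseudo-random samples z_m in [0,1].
double mc_average(double (*f)(double), int M, unsigned seed) {
    std::srand(seed);
    double sum = 0.0;
    for (int m = 0; m < M; ++m) {
        double z = std::rand() / (double)RAND_MAX;  // uniform sample
        sum += f(z);
    }
    return sum / M;   // variance of the average is D^2[f] / M
}

double square_mc(double z) { return z * z; }        // exact integral: 1/3
```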
10. Distribution of the average
Central limit theorem: the pdf of f* = (1/M) Σ_m f(z_m) tends to a normal distribution (figure: M = 10, 40, 160).
Three 9s law: Pr{ |f* − E[f*]| < 3·D[f*] } ≈ 0.999
Probabilistic error bound (0.999 confidence):
| (1/M) Σ_m f(z_m) − ∫ f(z) dz | < 3·D[f]/√M
11. Classical versus Monte-Carlo quadratures
- Classical (brick rule): (Δf/2) · M^(−1/D)
  - Δf: variation of the integrand
  - D: dimension of the domain
- Monte-Carlo: 3·D[f]/√M
  - D[f]: standard deviation (square root of the variance) of the integrand
  - Independent of the dimension of the domain of the integrand!
12. Importance sampling
Select samples with a non-uniform density p:
∫ f(z) dz = ∫ (f(z)/p(z)) · p(z) dz = E[f(z)/p(z)] ≈ (1/M) Σ_m f(z_m)/p(z_m)
Estimator: f* = (1/M) Σ_m f(z_m)/p(z_m)
13. Optimal probability density
- The variance of f(z)/p(z) should be small.
- Optimal case: f(z)/p(z) is constant, the variance is zero:
  p(z) ∝ f(z) and ∫ p(z) dz = 1  ⇒  p(z) = f(z) / ∫ f(z) dz
- Optimal selection is impossible, since it needs the integral itself.
- Practice: where f is large, p should be large.
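A hedged sketch of this idea: to integrate f(z) = z², the density p(z) = 2z is large where f is large; its distribution P(z) = z² is inverted to produce the samples. These choices are illustrative, not from the slides.

```cpp
#include <cstdlib>
#include <cmath>

// Average f(z)/p(z) with z drawn from p(z) = 2z via inversion: z = sqrt(u).
double importance(int M, unsigned seed) {
    std::srand(seed);
    double sum = 0.0;
    for (int m = 0; m < M; ++m) {
        double u = (std::rand() + 0.5) / ((double)RAND_MAX + 1.0); // u in (0,1)
        double z = std::sqrt(u);          // z = P^{-1}(u) for P(z) = z^2
        sum += (z * z) / (2.0 * z);       // f(z)/p(z) = z/2
    }
    return sum / M;                       // estimates the integral 1/3
}
```

The ratio f/p = z/2 varies much less than f itself, so the variance drops compared to uniform sampling.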
14. Numerical integration
∫ f(z) dz ≈ (1/M) Σ_i f(z_i)
- What are good sample points z_1, z_2, ..., z_M?
- Uniform (equidistribution) sequences: asymptotically correct result for any Riemann-integrable function.
15. Uniform sequence: necessary requirement
Let f be a "brick": the indicator of a box A with height 1. Then ∫ f(z) dz = V(A), the volume of A, while
(1/M) Σ_i f(z_i) = m(A)/M, where m(A) is the number of samples falling into A.
Requirement: lim m(A)/M = V(A).
16. Discrepancy
Difference between the relative number of points and the relative size of the area:
D*(z_1, z_2, ..., z_M) = max_A | m(A)/M − V(A) |
where A runs over axis-aligned boxes with one corner at the origin, m(A) of the M points fall into A, and V(A) is its volume.
17. Uniform sequences
lim D*(z_1, ..., z_M) = 0
- Necessary requirement: to integrate a step function, the discrepancy should converge to 0 (a step function is a sum of bricks).
- It is also sufficient.
18. Another definition of uniformness
- Scalar series z_1, z_2, ..., z_n, ... in [0,1]
- 1-uniform: P(u < z_n < v) = v − u
19. 1-uniform sequences
- Regular grid: D* = 1/(2M)
- Random series: D* ≈ √(log log M / (2M))
- Multiples of irrational numbers modulo 1, e.g. {i·√2}
- Halton (van der Corput) sequence in base b:
  - D* ≈ b²/(4(b+1) log b) · log M/M if b is even
  - D* ≈ (b−1)/(4 log b) · log M/M if b is odd
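The regular-grid value D* = 1/(2M) can be verified numerically; this sketch uses Niederreiter's closed form for the one-dimensional star discrepancy of sorted points (the formula itself is an outside assumption, not on the slides):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// 1D star discrepancy: D* = 1/(2M) + max_i | z_(i) - (2i-1)/(2M) |
double star_discrepancy(std::vector<double> z) {
    std::sort(z.begin(), z.end());       // z_(i): the samples in increasing order
    const int M = (int)z.size();
    double worst = 0.0;
    for (int i = 1; i <= M; ++i)
        worst = std::max(worst, std::fabs(z[i - 1] - (2.0 * i - 1.0) / (2.0 * M)));
    return 1.0 / (2.0 * M) + worst;
}
```

For the centered regular grid z_i = (2i−1)/(2M) the maximum term vanishes and D* = 1/(2M) exactly.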
20. Discrepancy of a random series
Law of the iterated logarithm: if ξ_1, ξ_2, ..., ξ_M are independent random variables with mean E and variance σ²:
Pr( limsup | Σ_i ξ_i/M − E | = σ·√(2 log log M / M) ) = 1
For uniformly distributed x let ξ_i(x) = 1 if z_i < A and 0 otherwise:
Σ_i ξ_i/M = m(A)/M,  E = A,  σ² = A − A² ≤ 1/4
Pr( limsup | m(A)/M − A | = √(log log M / (2M)) ) = 1
21. Halton (van der Corput) sequence: H_i is the radical inverse of i
i   binary form of i   radical inverse   H_i
0   0                  0.0               0
1   1                  0.1               0.5
2   10                 0.01              0.25
3   11                 0.11              0.75
4   100                0.001             0.125
5   101                0.101             0.625
6   110                0.011             0.375
22. Uniformness of the Halton sequence
i   binary form of i   radical inverse   H_i
0   0                  0.000             0
1   1                  0.100             0.5
2   10                 0.010             0.25
3   11                 0.110             0.75
4   100                0.001             0.125
5   101                0.101             0.625
6   110                0.011             0.375
For all fine enough interval decompositions: each interval receives a sample before any interval receives a second one.
23. Discrepancy of the Halton sequence
Decompose the interval A into base-b intervals A_1, ..., A_{k+1}:
| m(A)/M − V(A) | = | m(A_1)/M − V(A_1) + ... + m(A_{k+1})/M − V(A_{k+1}) |
  ≤ | m(A_1)/M − V(A_1) | + ... + | m(A_{k+1})/M − V(A_{k+1}) | ≤ (k+1)/M,
where k+1 = 1 + log_b M, since b^k ≤ M.
D* ≤ (1 + log M/log b)/M = O(log M/M)
Faure sequence: Halton with digit permutation.
24. Program: generation of the Halton sequence
class Halton {
    double value, inv_base;
    void Number( long i, int base ) {   // compute the i-th sequence element
        double f = inv_base = 1.0 / base;
        value = 0.0;
        while ( i > 0 ) {               // radical inverse of i in the given base
            value += f * (double)(i % base);
            i /= base;
            f *= inv_base;
        }
    }
    // class continued on the next slide
25. Incremental generation of the Halton sequence
    void Next( ) {                      // step to the next element directly
        double r = 1.0 - value - 0.0000000001;
        if (inv_base < r) value += inv_base;
        else {
            double h = inv_base, hh;
            do { hh = h; h *= inv_base; } while ( h >= r );
            value += hh + h - 1.0;
        }
    }
};
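The radical inverse computed by Number above can be cross-checked against the base-2 table on the earlier slide; this standalone sketch repeats the same digit-mirroring loop as a free function:

```cpp
#include <cmath>

// Radical inverse: mirror the base-b digits of i around the radix point.
double radical_inverse(long i, int base) {
    double inv = 1.0 / base, f = inv, value = 0.0;
    while (i > 0) {
        value += f * (double)(i % base); // append the least significant digit
        i /= base;
        f *= inv;
    }
    return value;                        // H_i in the given base
}
```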
26. 2-, 3-, ...-uniform sequences
2-uniform: P(u_1 < z_n < v_1, u_2 < z_{n+1} < v_2) = (v_1 − u_1)(v_2 − u_2),
i.e. the consecutive pairs (z_n, z_{n+1}) are uniform in the unit square.
27. ∞-uniform sequences
- A random series of independent samples is ∞-uniform:
  P(u_1 < z_n < v_1, u_2 < z_{n+1} < v_2) = P(u_1 < z_n < v_1) · P(u_2 < z_{n+1} < v_2)
- Franklin theorem: with probability 1 the fractional parts {θ^n} form an ∞-uniform sequence if θ is a transcendental number (e.g. π).
28. Sample points for integral quadrature
- 1D integral: 1-uniform sequence
- 2D integral:
  - 2-uniform sequence, or
  - 2 independent 1-uniform sequences
- D-dimensional integral:
  - D-uniform sequence, or
  - D independent 1-uniform sequences
29. Independence of 1-uniform sequences
If p_1 and p_2 are relative primes:
- p_1^n columns: samples are uniform with period p_1^n
- p_2^m rows: samples are uniform with period p_2^m
- p_1^n · p_2^m cells: samples are uniform with period lcm(p_1^n, p_2^m)
(lcm = least common multiple)
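The combined period can be sketched with the usual gcd/lcm helpers (hypothetical helper names; e.g. column period 2³ = 8 and row period 3² = 9 combine to cell period 72):

```cpp
// Least common multiple of two periods via the Euclidean algorithm.
long gcd_l(long a, long b) { return b ? gcd_l(b, a % b) : a; }
long lcm_l(long a, long b) { return a / gcd_l(a, b) * b; }
```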
30. Multidimensional sequences
- Regular grid
- Halton with prime base numbers: (H_2(i), H_3(i), H_5(i), H_7(i), H_11(i), ...)
- Weyl sequence, where P_k is the k-th prime: ({i·√P_1}, {i·√P_2}, {i·√P_3}, ...)
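One coordinate of a Weyl point can be sketched directly (taking the square roots of primes, the classical choice for this construction):

```cpp
#include <cmath>

// k-th coordinate of the i-th Weyl sample: fractional part of i * sqrt(prime).
double weyl_coord(long i, long prime) {
    double x = (double)i * std::sqrt((double)prime);
    return x - std::floor(x);            // keep only the fractional part
}
```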
31. Low-discrepancy sequences
- Definition: discrepancy O(log^D M / M) = O(M^(−(1−ε)))
- Examples:
  - M is not known in advance: multidimensional Halton sequence, O(log^D M / M)
  - M is known in advance: Hammersley sequence, O(log^(D−1) M / M)
- Optimal? O(1/M) is impossible in D > 1 dimensions.
32. O(log^D M / M) = O(M^(−(1−ε)))?
- O(log^D M / M) is dominated by c · log^D M / M
  - different low-discrepancy sequences have significantly different c
- If M is large enough, then log^D M < M^ε:
  - log^D M / M < M^ε / M = M^(−(1−ε))
- Cheat! For D = 10 and M = 10^100:
  - log^D M = 100^10 = 10^20, while M^ε = 10^10 (with ε = 0.1),
  so the asymptotic bound has not yet kicked in even at this astronomical sample count.
33. Error of the integrand
The error depends on two factors:
- How uniformly are the sample points distributed?
- How intensively does the integrand change?
34. Variation of a function: Vitali variation
V_v = limsup Σ_i | f(x_{i+1}) − f(x_i) | = ∫_0^1 | df(u)/du | du
35. Vitali variation in higher dimensions
V_v = limsup Σ_{i,j} | f(x_{i+1}, y_{j+1}) − f(x_{i+1}, y_j) − f(x_i, y_{j+1}) + f(x_i, y_j) | = ∫∫ | ∂²f(u,v)/∂u∂v | du dv
It is zero if f is constant along one axis.
36. Hardy-Krause variation
V_HK f = V_V f(x,y) + V_V f(x,1) + V_V f(1,y)
       = ∫∫ | ∂²f(u,v)/∂u∂v | du dv + ∫_0^1 | df(u,1)/du | du + ∫_0^1 | df(1,v)/dv | dv
37. Hardy-Krause variation of discontinuous functions
If the discontinuity of f is not parallel to a coordinate axis, the terms
f(x_{i+1}, y_{j+1}) − f(x_{i+1}, y_j) − f(x_i, y_{j+1}) + f(x_i, y_j)
do not vanish as the grid is refined: the variation is infinite.
38. Koksma-Hlawka inequality
error(f) ≤ V_HK · D*(z_1, z_2, ..., z_M)
Step 1: express f(z) from its derivative using the step function e(t) (e(t) = 1 if t ≥ 0, else 0):
f(1) − f(z) = ∫_z^1 f′(u) du
f(z) = f(1) − ∫_z^1 f′(u) du = f(1) − ∫_0^1 f′(u) · e(u − z) du
39. Express (1/M) Σ_i f(z_i)
(1/M) Σ_i f(z_i) = f(1) − ∫_0^1 f′(u) · (1/M) Σ_i e(u − z_i) du
                = f(1) − ∫_0^1 f′(u) · m(u)/M du,
where m(u) is the number of samples below u.
40. Express ∫_0^1 f(z) dz using partial integration (∫ ab′ = ab − ∫ a′b)
∫_0^1 f(u) · 1 du = [f(u) · u]_0^1 − ∫_0^1 f′(u) · u du = f(1) − ∫_0^1 f′(u) · u du
41. Express | (1/M) Σ_i f(z_i) − ∫_0^1 f(z) dz |
| (1/M) Σ_i f(z_i) − ∫_0^1 f(z) dz | = | ∫_0^1 f′(u) · (m(u)/M − u) du |
  ≤ ∫_0^1 | f′(u) · (m(u)/M − u) | du
  ≤ ∫_0^1 | f′(u) | du · max_u | m(u)/M − u |
  = V_HK · D*(z_1, z_2, ..., z_M)   (upper bound)
42. Importance sampling in quasi-Monte-Carlo integration
Integration by variable transformation z = T(y):
∫ f(z) dz = ∫ f(T(y)) · | dT(y)/dy | dy
The sample density is p(z) = | dT⁻¹(z)/dz |, i.e. | dT(y)/dy | = 1/p(z).
43. Optimal selection
The variation of the transformed integrand is 0 when f(T(y)) · | dT(y)/dy | = const:
f(z) · | 1/(dT⁻¹(z)/dz) | = const  ⇒  T⁻¹(z) = ∫^z f(u) du / const
Since y is in [0,1]: T⁻¹(z_max) = ∫ f(u) du / const = 1  ⇒  const = ∫ f(u) du
z = T(y) = the inverse of ∫^z f(u) du / ∫ f(u) du, evaluated at y
44. Comparison to MC importance sampling
∫ f(z) dz = E[f(z)/p(z)], with p(z) ∝ f(z):
1. Normalization: p(z) = f(z) / ∫ f(u) du
2. Probability distribution: P(z) = ∫^z p(u) du
3. Generate a uniform random variable r.
4. Find the sample z by transforming r with the inverse probability distribution: z = P⁻¹(r)
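The steps above can be sketched deterministically. Assuming the illustrative choices f(z) = z³ and p(z) = 3z² (so P(z) = z³ and P⁻¹(y) = y^(1/3)), and a regular 1-uniform stand-in for the sample sequence:

```cpp
#include <cmath>

// Quasi-Monte-Carlo importance sampling via the inverse distribution.
double qmc_importance(int M) {
    double sum = 0.0;
    for (int i = 0; i < M; ++i) {
        double y = (i + 0.5) / M;        // stand-in for a 1-uniform sequence
        double z = std::cbrt(y);         // z = P^{-1}(y), density p(z) = 3z^2
        double f = z * z * z;            // integrand f(z) = z^3
        sum += f / (3.0 * z * z);        // f(z)/p(z) = z/3
    }
    return sum / M;                      // estimates the integral of z^3 = 1/4
}
```

Because f/p = z/3 has low variation, the transformed quadrature converges much faster than sampling f directly.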
45. MC versus QMC?
- What can we expect from quasi-Monte Carlo quadrature if the integrand is of infinite variation?
- Initial behavior of quasi-Monte Carlo:
  - 100 or 1000 samples per pixel in computer graphics
  - the sequences are only asymptotically uniform
46. QMC for integrands of unbounded variation
With N samples on a grid of cell size 1/√N, a discontinuity line of length l is covered by a domain of area |Dom| ≈ l/√N, which contains M ≈ l·√N samples.
47. Decomposition of the integrand
f = s + d
- s: finite-variation part of the integrand
- d: discontinuity part
48. Error of the quadrature
error(f) ≤ error(s) + error(d)
- error(s): QMC bound, V_HK · D*(z_1, z_2, ..., z_M)
- error(d): MC bound on the discontinuity domain, 3·Δf·√|Dom| / √N = 3·Δf·√l · N^(−3/4)
In d dimensions: 3·Δf·√l · N^(−(d+1)/(2d))
49. Applicability of QMC
QMC is better than MC in lower dimensions, even for
- integrands of infinite variation,
- provided the initial behaviour is acceptable: the sample count should exceed the base (the D-th prime number).