Online Convex Optimization in the Bandit Setting: Gradient Descent without a Gradient
Aviv Rosenberg, 10/01/18, Seminar on Experts and Bandits
Online Convex Optimization Problem
We are given a convex set $S$. In every iteration we choose $x_t \in S$ and then observe a convex cost function $c_t : S \to [-C, C]$ for some $C > 0$. We want to minimize the regret
$$\sum_{t=1}^{T} c_t(x_t) - \min_{x \in S} \sum_{t=1}^{T} c_t(x).$$
Bandit Setting
The gradient descent approach: $x_{t+1} = x_t - \eta \nabla c_t(x_t)$. Last week we saw an $O(\sqrt{T})$ regret bound. But now, instead of seeing $c_t$, we only observe the value $c_t(x_t)$, so we cannot compute $\nabla c_t(x_t)$. We still want to use gradient descent. Solution: estimate the gradient using a single point. We will show an $O(T^{3/4})$ regret bound.
Notation and Assumptions
Unit ball $\mathbb{B} = \{x \in \mathbb{R}^n : \|x\| \le 1\}$; unit sphere $\mathbb{S} = \{x \in \mathbb{R}^n : \|x\| = 1\}$.
Expected regret: $\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \min_{x \in S} \sum_{t=1}^{T} c_t(x)$.
Projection of a point $x$ onto a convex set $S$: $P_S(x) = \arg\min_{z \in S} \|x - z\|$.
We assume $S$ is a convex set such that $r\mathbb{B} \subseteq S \subseteq R\mathbb{B}$.
The shrunken set $(1-\alpha)S = \{(1-\alpha)x : x \in S\}$ is also convex, and $0 \in (1-\alpha)S \subseteq S$: if $y \in (1-\alpha)S$ then $y = (1-\alpha)x = \alpha \cdot 0 + (1-\alpha)x \in S$, since $0 \in r\mathbb{B} \subseteq S$ and $S$ is convex.
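The projection $P_S$ appears in every update below. A minimal sketch, under the simplifying assumption (not fixed by the talk) that $S$ is a Euclidean ball of radius $R$, the one case where the projection has a simple closed form:

```python
import numpy as np

def project_ball(x, radius):
    """Projection onto the ball {z : ||z|| <= radius}: rescale if outside."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

R, alpha = 2.0, 0.1
x = np.array([3.0, 4.0])              # ||x|| = 5, outside R*B
y = project_ball(x, (1 - alpha) * R)  # project onto the shrunken set (1-alpha)*S

# y lands in (1-alpha)*S, which is contained in S, as noted above
assert np.linalg.norm(y) <= (1 - alpha) * R + 1e-12
assert np.linalg.norm(y) <= R
```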
Part 1: Gradient Estimation

Gradient Estimation
For a function $c_t$ and $\delta > 0$ define the smoothed function
$$\hat{c}_t(y) = \mathbb{E}_{v \in \mathbb{B}}\left[c_t(y + \delta v)\right].$$
Lemma: $\nabla \hat{c}_t(y) = \frac{n}{\delta}\, \mathbb{E}_{u \in \mathbb{S}}\left[c_t(y + \delta u)\, u\right]$.
To get an unbiased estimator of $\nabla \hat{c}_t(y)$ we can sample a unit vector $u$ uniformly and compute $\frac{n}{\delta} c_t(y + \delta u)\, u$.
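A Monte Carlo sanity check of the lemma (a sketch, not from the talk): for a linear cost $c(x) = a^\top x$, $\mathbb{E}[u] = 0$ and $\mathbb{E}[uu^\top] = I/n$ for $u$ uniform on the unit sphere, so the one-point estimator has mean exactly $a$. We check at $y = 0$ to keep the Monte Carlo variance small:

```python
import numpy as np

# For c(x) = a.x:  (n/delta) E[c(y + delta*u) u] = (n/delta) * delta * (I/n) a = a.
rng = np.random.default_rng(0)
n, delta = 5, 0.1
a = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = np.zeros(n)

samples = rng.standard_normal((200_000, n))
us = samples / np.linalg.norm(samples, axis=1, keepdims=True)  # uniform on the sphere
c_vals = us @ a * delta + a @ y                                # c(y + delta*u)
est = (n / delta) * (c_vals[:, None] * us).mean(axis=0)        # one-point estimator, averaged

assert np.linalg.norm(est - a) < 0.1  # Monte Carlo mean is close to the true gradient a
```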
Proof
Recall $\hat{c}_t(y) = \mathbb{E}_{v \in \mathbb{B}}[c_t(y + \delta v)]$. For $n = 1$:
$$\hat{c}_t(y) = \mathbb{E}_{v \in [-1,1]}\left[c_t(y + \delta v)\right] = \mathbb{E}_{v \in [-\delta,\delta]}\left[c_t(y + v)\right] = \frac{1}{2\delta} \int_{-\delta}^{\delta} c_t(y + v)\, dv.$$
Differentiating using the fundamental theorem of calculus:
$$\nabla \hat{c}_t(y) = \hat{c}_t'(y) = \frac{c_t(y + \delta) - c_t(y - \delta)}{2\delta} = \frac{1}{\delta}\, \mathbb{E}_{u \in \{-1,1\}}\left[c_t(y + \delta u)\, u\right].$$
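The $n = 1$ identity is just a central difference, which can be checked exactly on a quadratic (an illustration, not from the talk): for $c(x) = x^2$ we have $\hat{c}(y) = y^2 + \delta^2/3$, so $\hat{c}'(y) = 2y$, and the central difference of $x^2$ is exactly $2y$ for any $\delta$:

```python
# The n = 1 estimator averaged over u in {-1, +1} equals the central difference
# (c(y + delta) - c(y - delta)) / (2*delta), which matches c_hat'(y) = 2y exactly.
delta, y = 0.25, 1.7

def c(x):
    return x * x

estimate = sum(c(y + delta * u) * u for u in (-1.0, 1.0)) / (2 * delta)

assert abs(estimate - 2 * y) < 1e-9
```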
Proof Cont.
For $n > 1$, Stokes' theorem gives
$$\nabla \int_{\delta\mathbb{B}} c_t(y + v)\, dv = \int_{\delta\mathbb{S}} c_t(y + u)\, \frac{u}{\|u\|}\, du.$$
Dividing both sides by $\mathrm{Vol}_n(\delta\mathbb{B})$:
$$\frac{\nabla \int_{\delta\mathbb{B}} c_t(y+v)\, dv}{\mathrm{Vol}_n(\delta\mathbb{B})} = \frac{\mathrm{Vol}_{n-1}(\delta\mathbb{S})}{\mathrm{Vol}_n(\delta\mathbb{B})} \cdot \frac{\int_{\delta\mathbb{S}} c_t(y+u)\, \frac{u}{\|u\|}\, du}{\mathrm{Vol}_{n-1}(\delta\mathbb{S})}$$
$$\nabla\, \mathbb{E}_{v \in \delta\mathbb{B}}\left[c_t(y+v)\right] = \frac{\mathrm{Vol}_{n-1}(\delta\mathbb{S})}{\mathrm{Vol}_n(\delta\mathbb{B})}\, \mathbb{E}_{u \in \delta\mathbb{S}}\left[c_t(y+u)\, \frac{u}{\|u\|}\right]$$
$$\nabla\, \mathbb{E}_{v \in \mathbb{B}}\left[c_t(y+\delta v)\right] = \frac{\mathrm{Vol}_{n-1}(\delta\mathbb{S})}{\mathrm{Vol}_n(\delta\mathbb{B})}\, \mathbb{E}_{u \in \mathbb{S}}\left[c_t(y+\delta u)\, u\right]$$
Proof Cont.
Combining the previous identity with the definition $\hat{c}_t(y) = \mathbb{E}_{v \in \mathbb{B}}[c_t(y + \delta v)]$:
$$\nabla \hat{c}_t(y) = \frac{\mathrm{Vol}_{n-1}(\delta\mathbb{S})}{\mathrm{Vol}_n(\delta\mathbb{B})}\, \mathbb{E}_{u \in \mathbb{S}}\left[c_t(y + \delta u)\, u\right].$$
The following fact concludes the proof:
$$\frac{\mathrm{Vol}_{n-1}(\delta\mathbb{S})}{\mathrm{Vol}_n(\delta\mathbb{B})} = \frac{n}{\delta}.$$
For example, in $\mathbb{R}^2$: $\frac{2\pi\delta}{\pi\delta^2} = \frac{2}{\delta}$.
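The volume-ratio fact can be checked against the standard closed-form formulas for ball volume and sphere surface area (standard facts, not from the talk):

```python
import math

# Vol_n(delta*B)     = pi^(n/2) * delta^n / Gamma(n/2 + 1)
# Vol_{n-1}(delta*S) = n * pi^(n/2) * delta^(n-1) / Gamma(n/2 + 1)
# so the surface/volume ratio is n/delta for every n and delta.
def ball_volume(n, delta):
    return math.pi ** (n / 2) * delta ** n / math.gamma(n / 2 + 1)

def sphere_area(n, delta):
    return n * math.pi ** (n / 2) * delta ** (n - 1) / math.gamma(n / 2 + 1)

for n in range(1, 8):
    for delta in (0.1, 0.5, 2.0):
        ratio = sphere_area(n, delta) / ball_volume(n, delta)
        assert abs(ratio - n / delta) < 1e-9
```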
Part 2: Regret Bound for Estimated Gradients
Zinkevich's Theorem
Let $h_1, \ldots, h_T : (1-\alpha)S \to \mathbb{R}$ be convex, differentiable functions. Let $y_1, \ldots, y_T \in (1-\alpha)S$ be defined by $y_1 = 0$ and $y_{t+1} = P_{(1-\alpha)S}(y_t - \eta \nabla h_t(y_t))$. Let $G = \max_t \|\nabla h_t(y_t)\|$. Then for $\eta = \frac{R}{G\sqrt{T}}$ and for every $y \in (1-\alpha)S$:
$$\sum_{t=1}^{T} h_t(y_t) - \sum_{t=1}^{T} h_t(y) \le RG\sqrt{T}.$$
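A quick numerical illustration of the bound (a sketch under assumptions not in the talk: the domain is the unit ball and the losses are linear, so the gradients are constant and the best fixed point in hindsight has a closed form):

```python
import numpy as np

# Projected gradient descent on linear losses h_t(x) = a_t . x over the ball R*B,
# with G = max ||a_t|| = 1; Zinkevich's bound says regret <= R*G*sqrt(T).
rng = np.random.default_rng(1)
n, R, T, G = 3, 1.0, 500, 1.0
A = rng.standard_normal((T, n))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # loss gradients of norm exactly G = 1

eta = R / (G * np.sqrt(T))                     # the step size from the theorem

def project(x):
    nrm = np.linalg.norm(x)
    return x if nrm <= R else (R / nrm) * x

y = np.zeros(n)
alg_cost = 0.0
for a in A:
    alg_cost += float(a @ y)
    y = project(y - eta * a)                   # gradient of a linear loss is a itself

best_cost = -R * float(np.linalg.norm(A.sum(axis=0)))  # best fixed point in hindsight
regret = alg_cost - best_cost

assert regret <= R * G * np.sqrt(T) + 1e-9     # the theorem's bound holds
```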
Expected Zinkevich's Theorem
Let $\hat{c}_1, \ldots, \hat{c}_T : (1-\alpha)S \to \mathbb{R}$ be convex, differentiable functions. Let $g_1, \ldots, g_T$ be random vectors such that $\mathbb{E}[g_t \mid y_t] = \nabla \hat{c}_t(y_t)$ and $\|g_t\| \le G$ (which also implies $\|\nabla \hat{c}_t(y_t)\| \le G$). Let $y_1, \ldots, y_T \in (1-\alpha)S$ be defined by $y_1 = 0$ and $y_{t+1} = P_{(1-\alpha)S}(y_t - \eta g_t)$. Then for $\eta = \frac{R}{G\sqrt{T}}$ and for every $y \in (1-\alpha)S$:
$$\mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y_t)\right] - \sum_{t=1}^{T} \hat{c}_t(y) \le RG\sqrt{T}.$$
Proof
Recall $y_{t+1} = P_{(1-\alpha)S}(y_t - \eta g_t)$ and $y_1 = 0$. Define $h_t : (1-\alpha)S \to \mathbb{R}$ by $h_t(y) = \hat{c}_t(y) + y^\top \xi_t$, where $\xi_t = g_t - \nabla \hat{c}_t(y_t)$. Notice that $\nabla h_t(y_t) = \nabla \hat{c}_t(y_t) + \xi_t = g_t$, so our updates are exactly regular gradient descent on the $h_t$. From Zinkevich's Theorem:
$$\sum_{t=1}^{T} h_t(y_t) - \sum_{t=1}^{T} h_t(y) \le RG\sqrt{T}. \tag{1}$$
Proof Cont.
Notice that
$$\mathbb{E}[\xi_t \mid y_t] = \mathbb{E}\left[g_t - \nabla \hat{c}_t(y_t) \mid y_t\right] = \mathbb{E}[g_t \mid y_t] - \nabla \hat{c}_t(y_t) = 0.$$
Therefore
$$\mathbb{E}\left[y_t^\top \xi_t\right] = \mathbb{E}\left[\mathbb{E}\left[y_t^\top \xi_t \mid y_t\right]\right] = \mathbb{E}\left[y_t^\top\, \mathbb{E}[\xi_t \mid y_t]\right] = 0,$$
$$\mathbb{E}\left[y^\top \xi_t\right] = y^\top\, \mathbb{E}[\xi_t] = y^\top\, \mathbb{E}\left[\mathbb{E}[\xi_t \mid y_t]\right] = 0.$$
We get the following connections:
$$\mathbb{E}[h_t(y)] = \mathbb{E}[\hat{c}_t(y)] + \mathbb{E}\left[y^\top \xi_t\right] = \hat{c}_t(y), \tag{2}$$
$$\mathbb{E}[h_t(y_t)] = \mathbb{E}[\hat{c}_t(y_t)] + \mathbb{E}\left[y_t^\top \xi_t\right] = \mathbb{E}[\hat{c}_t(y_t)]. \tag{3}$$
Proof Cont.
Combining the pieces:
$$\mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y_t)\right] - \sum_{t=1}^{T} \hat{c}_t(y) \stackrel{(3)}{=} \sum_{t=1}^{T} \mathbb{E}[h_t(y_t)] - \sum_{t=1}^{T} \hat{c}_t(y) \stackrel{(2)}{=} \mathbb{E}\left[\sum_{t=1}^{T} h_t(y_t)\right] - \sum_{t=1}^{T} \mathbb{E}[h_t(y)] = \mathbb{E}\left[\sum_{t=1}^{T} h_t(y_t) - \sum_{t=1}^{T} h_t(y)\right] \stackrel{(1)}{\le} RG\sqrt{T}.$$
Part 3: BGD Algorithm

Ideal World Algorithm
$y_1 \leftarrow 0$
For $t \in \{1, \ldots, T\}$:
  Select a unit vector $u_t$ uniformly at random
  Play $y_t$ and observe the cost $c_t(y_t)$
  Compute $g_t = \frac{n}{\delta} c_t(y_t + \delta u_t)\, u_t$, so that $\mathbb{E}[g_t \mid y_t] = \nabla \hat{c}_t(y_t)$
  $y_{t+1} \leftarrow P_S(y_t - \eta g_t)$
To compute $g_t$ we need $c_t(y_t + \delta u_t)$, so we need to play $x_t = y_t + \delta u_t$ instead. But now we have problems: is $x_t \in S$? And the regret is measured on $c_t(x_t)$, although we are doing Estimated Gradient Descent on $\hat{c}_t(y_t)$.
Bandit Gradient Descent Algorithm (BGD)
Parameters: $\eta > 0$, $\delta > 0$, $0 < \alpha < 1$
$y_1 \leftarrow 0$
For $t \in \{1, \ldots, T\}$:
  Select a unit vector $u_t$ uniformly at random
  $x_t \leftarrow y_t + \delta u_t$
  Play $x_t$ and observe the cost $c_t(x_t) = c_t(y_t + \delta u_t)$
  $g_t \leftarrow \frac{n}{\delta} c_t(x_t)\, u_t = \frac{n}{\delta} c_t(y_t + \delta u_t)\, u_t$
  $y_{t+1} \leftarrow P_{(1-\alpha)S}(y_t - \eta g_t)$
Two questions remain: is $x_t \in S$? And we have low regret for $\hat{c}_t(y_t)$ in $(1-\alpha)S$; we need to convert it to low regret for $c_t(x_t)$ in $S$.
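A runnable sketch of BGD (illustration only; it assumes $S$ is the unit ball, so $r = R = 1$ and the projection has a closed form, and uses a fixed quadratic cost, none of which the talk fixes). The assertions check the feasibility claims: $y_t \in (1-\alpha)S$ and $x_t \in S$, which hold whenever $\delta \le \alpha r$:

```python
import numpy as np

rng = np.random.default_rng(42)
n, R, r, C, T = 2, 1.0, 1.0, 4.0, 1000
delta, alpha = 0.05, 0.1                 # chosen so that delta <= alpha * r
eta = delta * R / (n * C * np.sqrt(T))   # step size eta = delta*R/(n*C*sqrt(T))

def project(x, radius):
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

x_star = np.array([0.5, 0.0])
def cost(x):                             # convex, bounded by C on S + delta*B
    return float(np.sum((x - x_star) ** 2))

y = np.zeros(n)
for t in range(T):
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)               # uniformly random unit vector
    x = y + delta * u                    # the point we actually play
    g = (n / delta) * cost(x) * u        # one-point gradient estimate
    y = project(y - eta * g, (1 - alpha) * R)
    assert np.linalg.norm(y) <= (1 - alpha) * R + 1e-9  # y_{t+1} in (1-alpha)S
    assert np.linalg.norm(x) <= R + 1e-9                # x_t in S since delta <= alpha*r

final_y = y
```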
Observation 1
For any $x \in S$:
$$\sum_{t=1}^{T} c_t\left((1-\alpha)x\right) \le \sum_{t=1}^{T} c_t(x) + 2\alpha C T.$$
Proof. From convexity,
$$c_t\left((1-\alpha)x\right) = c_t\left(\alpha \cdot 0 + (1-\alpha)x\right) \le \alpha c_t(0) + (1-\alpha) c_t(x) = c_t(x) + \alpha\left(c_t(0) - c_t(x)\right) \le c_t(x) + 2\alpha C.$$
Summing over $t$ gives the claim.
Observation 2
For any $y \in (1-\alpha)S$ and any $x \in S$:
$$c_t(x) - c_t(y) \le \frac{2C}{\alpha r}\, \|y - x\|.$$
Proof. Denote $\Delta = x - y$. If $\|\Delta\| \ge \alpha r$, the observation follows from $c_t(x) - c_t(y) \le 2C \le \frac{2C}{\alpha r}\|\Delta\|$. Otherwise $\|\Delta\| < \alpha r$; let $z = y + \alpha r \frac{\Delta}{\|\Delta\|}$, and $z \in S$ since $y \in (1-\alpha)S$ and
$$\alpha r \frac{\Delta}{\|\Delta\|} \in \alpha r \mathbb{B} \subseteq \alpha S \;\Longrightarrow\; z \in (1-\alpha)S + \alpha S \subseteq S.$$
Proof Cont.
Notice that $x = \frac{\|\Delta\|}{\alpha r} z + \left(1 - \frac{\|\Delta\|}{\alpha r}\right) y$, where $z = y + \alpha r \frac{\Delta}{\|\Delta\|}$ and $\Delta = x - y$. So from convexity,
$$c_t(x) = c_t\!\left(\frac{\|\Delta\|}{\alpha r} z + \left(1 - \frac{\|\Delta\|}{\alpha r}\right) y\right) \le \frac{\|\Delta\|}{\alpha r} c_t(z) + \left(1 - \frac{\|\Delta\|}{\alpha r}\right) c_t(y) = c_t(y) + \frac{c_t(z) - c_t(y)}{\alpha r}\|\Delta\| \le c_t(y) + \frac{2C}{\alpha r}\|\Delta\|.$$
The other direction, $c_t(y) - c_t(x) \le \frac{2C}{\alpha r}\|\Delta\|$, is also true and is proved similarly.
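A numeric spot-check of Observation 2 (an illustration, not a proof; it assumes $S$ is the unit ball, so $r = R = 1$, and picks a particular convex cost bounded by $C = 2$ on it):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, r, C = 0.2, 1.0, 2.0

def c(x):                                  # convex, |c| <= 2 on the unit ball
    return float(np.sum(x ** 2) + x[0])

def random_in_ball(n):
    v = rng.standard_normal(n)
    return v / np.linalg.norm(v) * rng.uniform() ** (1.0 / n)

violations = 0
for _ in range(1000):
    x = random_in_ball(3)                  # x in S
    y = (1 - alpha) * random_in_ball(3)    # y in (1-alpha)S
    lhs = c(x) - c(y)
    rhs = (2 * C / (alpha * r)) * np.linalg.norm(y - x)
    if lhs > rhs + 1e-12:
        violations += 1

assert violations == 0                     # the observation holds on every sample
```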
BGD Regret Theorem
For any $T \ge \left(\frac{3Rn}{2r}\right)^2$ and with the parameters
$$\eta = \frac{\delta R}{nC\sqrt{T}}, \qquad \delta = \sqrt[3]{\frac{r R^2 n^2}{12T}}, \qquad \alpha = \sqrt[3]{\frac{3Rn}{2r\sqrt{T}}},$$
for every $x \in S$, BGD achieves regret
$$\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(x) \le 3C\, T^{5/6} \sqrt[3]{\frac{nR}{r}}.$$
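Plugging concrete constants into the theorem's parameters (the values of $n, C, R, r, T$ below are made up for illustration):

```python
import math

n, C, R, r = 4, 1.0, 2.0, 0.5
T = 10 ** 6
assert T >= (3 * R * n / (2 * r)) ** 2        # the theorem's condition on T

delta = (r * R ** 2 * n ** 2 / (12 * T)) ** (1 / 3)
alpha = (3 * R * n / (2 * r * math.sqrt(T))) ** (1 / 3)
eta = delta * R / (n * C * math.sqrt(T))
regret_bound = 3 * C * T ** (5 / 6) * (n * R / r) ** (1 / 3)

# the feasibility step of the proof needs delta <= alpha * r
assert delta <= alpha * r
# the bound is nontrivial: smaller than the trivial 2*C*T
assert regret_bound < 2 * C * T
```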
Proof
Recall $g_t = \frac{n}{\delta} c_t(x_t)\, u_t$, $x_t = y_t + \delta u_t$, and $y_{t+1} = P_{(1-\alpha)S}(y_t - \eta g_t)$.
First we need to show that $x_t \in S$. Notice that $(1-\alpha)S + \alpha r \mathbb{B} \subseteq (1-\alpha)S + \alpha S \subseteq S$. Since $y_t \in (1-\alpha)S$, we just need to show that $\delta \le \alpha r$:
$$\delta = \sqrt[3]{\frac{r R^2 n^2}{12T}}, \qquad \alpha r = r\sqrt[3]{\frac{3Rn}{2r\sqrt{T}}} = \sqrt[3]{\frac{3Rn r^2}{2\sqrt{T}}}.$$
This is true because $T \ge \left(\frac{3Rn}{2r}\right)^2$.
Proof Cont.
Now we want to bound the regret. Recall $\hat{c}_t(y_t) = \mathbb{E}_{v \in \mathbb{B}}[c_t(y_t + \delta v)]$, so $\mathbb{E}[g_t \mid y_t] = \nabla \hat{c}_t(y_t)$, and
$$\|g_t\| = \frac{n}{\delta}\, |c_t(x_t)|\, \|u_t\| \le \frac{nC}{\delta} =: G.$$
The Expected Zinkevich Theorem, with $\eta = \frac{R}{G\sqrt{T}} = \frac{\delta R}{nC\sqrt{T}}$, says that for $y \in (1-\alpha)S$:
$$\mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y_t)\right] - \sum_{t=1}^{T} \hat{c}_t(y) \le RG\sqrt{T} = \frac{RnC\sqrt{T}}{\delta}. \tag{1}$$
Proof Cont.
From Observation 2 (and its other direction), since $\hat{c}_t$ averages $c_t$ over points within distance $\delta$, and $\|x_t - y_t\| = \delta$:
$$\left|\hat{c}_t(y_t) - c_t(x_t)\right| \le \left|\hat{c}_t(y_t) - c_t(y_t)\right| + \left|c_t(y_t) - c_t(x_t)\right| \le 2 \cdot \frac{2C}{\alpha r}\,\delta,$$
and for $y \in (1-\alpha)S$:
$$\left|\hat{c}_t(y) - c_t(y)\right| \le \frac{2C}{\alpha r}\,\delta.$$
These bounds let us replace $\hat{c}_t$ by $c_t$ in (1) at a total cost of $3T \cdot \frac{2C\delta}{\alpha r}$.
Proof Cont.
For $y \in (1-\alpha)S$:
$$\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(y) \le \mathbb{E}\left[\sum_{t=1}^{T} \left(\hat{c}_t(y_t) + \frac{4C\delta}{\alpha r}\right)\right] - \sum_{t=1}^{T} \left(\hat{c}_t(y) - \frac{2C\delta}{\alpha r}\right)$$
$$= \mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y_t)\right] - \sum_{t=1}^{T} \hat{c}_t(y) + 3T \cdot \frac{2C\delta}{\alpha r} \stackrel{(1)}{\le} \frac{RnC\sqrt{T}}{\delta} + \frac{6CT\delta}{\alpha r}. \tag{2}$$
Proof Cont.
Since $y = (1-\alpha)x$ for some $x \in S$, we can use Observation 1:
$$\sum_{t=1}^{T} c_t\left((1-\alpha)x\right) \le \sum_{t=1}^{T} c_t(x) + 2\alpha C T.$$
Therefore
$$\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(x) \le \mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t\left((1-\alpha)x\right) + 2\alpha C T$$
$$= \mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(y) + 2\alpha C T \stackrel{(2)}{\le} \frac{RnC\sqrt{T}}{\delta} + \frac{6CT\delta}{\alpha r} + 2\alpha C T.$$
Substituting the parameters $\delta = \sqrt[3]{\frac{rR^2n^2}{12T}}$ and $\alpha = \sqrt[3]{\frac{3Rn}{2r\sqrt{T}}}$ finishes the proof.
BGD with Lipschitz Regret Theorem
If all $c_t$ are $L$-Lipschitz, then for $T$ sufficiently large and the parameters
$$\eta = \frac{\delta R}{nC\sqrt{T}}, \qquad \delta = T^{-1/4}\sqrt{\frac{RnCr}{3(Lr + C)}}, \qquad \alpha = \frac{\delta}{r},$$
BGD achieves regret
$$\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(x) \le 2\, T^{3/4}\sqrt{3RnC\left(L + \frac{C}{r}\right)}.$$
Part 4: Reshaping

Removing the Dependence on $1/r$
There are algorithms that, for a convex set $r\mathbb{B} \subseteq S \subseteq R\mathbb{B}$, find an affine transformation $A$ that puts $S$ in near-isotropic position, and run in time $O\!\left(n^4\, \mathrm{polylog}\!\left(n, \frac{R}{r}\right)\right)$. A set $S' \subseteq \mathbb{R}^n$ is in isotropic position if the covariance matrix of a random sample from $S'$ is the identity matrix. This gives us $\mathbb{B} \subseteq A(S) \subseteq n\mathbb{B}$, so we have new $R = n$ and $r = 1$. Also, if $c_t$ is $L$-Lipschitz, then $c_t \circ A^{-1}$ is $LR$-Lipschitz.
Removing the Dependence on $1/r$ (cont.)
So if we first put $S$ in near-isotropic position, we can substitute $R = n$, $r = 1$, and the new Lipschitz constant $LR$ into the previous bounds. In the Lipschitz case this gives
$$\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(x) \le 2n\, T^{3/4}\sqrt{3C(LR + C)},$$
and without the Lipschitz condition
$$\mathbb{E}\left[\sum_{t=1}^{T} c_t(x_t)\right] - \sum_{t=1}^{T} c_t(x) \le 3C\, n^{2/3}\, T^{5/6}.$$
Part 5: Adaptive Adversary

Expected Adaptive Zinkevich's Theorem
Let $\hat{c}_1, \ldots, \hat{c}_T : (1-\alpha)S \to \mathbb{R}$ be convex, differentiable functions, where $\hat{c}_t$ may depend on $y_1, \ldots, y_{t-1}$. Let $g_1, \ldots, g_T$ be random vectors such that $\mathbb{E}\left[g_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right] = \nabla \hat{c}_t(y_t)$ and $\|g_t\| \le G$ (which also implies $\|\nabla \hat{c}_t(y_t)\| \le G$). Let $y_1, \ldots, y_T \in (1-\alpha)S$ be defined by $y_1 = 0$ and $y_{t+1} = P_{(1-\alpha)S}(y_t - \eta g_t)$. Then for $\eta = \frac{R}{G\sqrt{T}}$ and for every $y \in (1-\alpha)S$:
$$\mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y_t) - \sum_{t=1}^{T} \hat{c}_t(y)\right] \le 3RG\sqrt{T}.$$
Proof
As before, recall $y_{t+1} = P_{(1-\alpha)S}(y_t - \eta g_t)$ and $y_1 = 0$, and define $h_t : (1-\alpha)S \to \mathbb{R}$ by $h_t(y) = \hat{c}_t(y) + y^\top \xi_t$, where $\xi_t = g_t - \nabla \hat{c}_t(y_t)$. Then $\nabla h_t(y_t) = \nabla \hat{c}_t(y_t) + \xi_t = g_t$, so our updates are exactly regular gradient descent on the $h_t$, and Zinkevich's Theorem gives
$$\sum_{t=1}^{T} h_t(y_t) - \sum_{t=1}^{T} h_t(y) \le RG\sqrt{T}. \tag{1}$$
Proof Cont.
Notice that
$$\mathbb{E}\left[\xi_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right] = \mathbb{E}\left[g_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right] - \nabla \hat{c}_t(y_t) = 0,$$
so
$$\mathbb{E}\left[y_t^\top \xi_t\right] = \mathbb{E}\left[\mathbb{E}\left[y_t^\top \xi_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right]\right] = \mathbb{E}\left[y_t^\top\, \mathbb{E}\left[\xi_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right]\right] = 0.$$
We get the following connection between $\mathbb{E}[h_t(y_t)]$ and $\mathbb{E}[\hat{c}_t(y_t)]$:
$$\mathbb{E}[h_t(y_t)] = \mathbb{E}[\hat{c}_t(y_t)] + \mathbb{E}\left[y_t^\top \xi_t\right] = \mathbb{E}[\hat{c}_t(y_t)]. \tag{3}$$
Proof Cont.
We have
$$\|\xi_t\| \le \|g_t\| + \|\nabla \hat{c}_t(y_t)\| \le 2G.$$
For every $1 \le s < t \le T$ we have $\mathbb{E}\left[\xi_s^\top \xi_t\right] = \mathbb{E}\left[\mathbb{E}\left[\xi_s^\top \xi_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right]\right]$. Given $y_1, g_1, \ldots, y_t, \hat{c}_t$ we know $g_s$ (and therefore also $\xi_s$), so
$$\mathbb{E}\left[\xi_s^\top \xi_t\right] = \mathbb{E}\left[\xi_s^\top\, \mathbb{E}\left[\xi_t \mid y_1, g_1, \ldots, y_t, \hat{c}_t\right]\right] = 0.$$
We use this to get
$$\mathbb{E}\left[\left\|\sum_{t=1}^{T} \xi_t\right\|\right]^2 \le \mathbb{E}\left[\left\|\sum_{t=1}^{T} \xi_t\right\|^2\right] = \sum_{t=1}^{T} \mathbb{E}\left[\|\xi_t\|^2\right] + 2\sum_{1 \le s < t \le T} \mathbb{E}\left[\xi_s^\top \xi_t\right] = \sum_{t=1}^{T} \mathbb{E}\left[\|\xi_t\|^2\right] \le \sum_{t=1}^{T} 4G^2 = 4TG^2,$$
hence $\mathbb{E}\left[\left\|\sum_{t=1}^{T} \xi_t\right\|\right] \le 2G\sqrt{T}$.
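A small simulation of this step (illustration only; the $\xi_t$ here are iid zero-mean vectors with $\|\xi_t\| \le 2G$, which satisfies the two properties the proof uses, so the empirical mean of $\|\sum_t \xi_t\|$ should sit below $2G\sqrt{T}$):

```python
import numpy as np

rng = np.random.default_rng(3)
G, T, trials = 1.0, 400, 300

norms = []
for _ in range(trials):
    dirs = rng.standard_normal((T, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # random unit directions
    radii = rng.uniform(0.0, 2 * G, size=(T, 1))         # ||xi_t|| <= 2G
    xi = radii * dirs                                    # zero-mean, uncorrelated across t
    norms.append(float(np.linalg.norm(xi.sum(axis=0))))

empirical = float(np.mean(norms))
bound = 2 * G * np.sqrt(T)

assert empirical <= bound   # E[||sum xi_t||] <= 2G*sqrt(T)
```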
Proof Cont.
Now we connect $\mathbb{E}\left[\sum_t h_t(y)\right]$ and $\mathbb{E}\left[\sum_t \hat{c}_t(y)\right]$. Since $y \in (1-\alpha)S \subseteq R\mathbb{B}$, we have $\|y\| \le R$, so
$$\left|\mathbb{E}\left[\sum_{t=1}^{T} h_t(y)\right] - \mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y)\right]\right| \le \mathbb{E}\left[\left|\sum_{t=1}^{T} \left(h_t(y) - \hat{c}_t(y)\right)\right|\right] = \mathbb{E}\left[\left|y^\top \sum_{t=1}^{T} \xi_t\right|\right] \le \mathbb{E}\left[\|y\|\left\|\sum_{t=1}^{T} \xi_t\right\|\right] \le R\, \mathbb{E}\left[\left\|\sum_{t=1}^{T} \xi_t\right\|\right] \le 2RG\sqrt{T}. \tag{2}$$
Proof Cont.
Putting everything together:
$$\mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y_t) - \sum_{t=1}^{T} \hat{c}_t(y)\right] = \sum_{t=1}^{T} \mathbb{E}[\hat{c}_t(y_t)] - \mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y)\right] \stackrel{(3)}{=} \sum_{t=1}^{T} \mathbb{E}[h_t(y_t)] - \mathbb{E}\left[\sum_{t=1}^{T} \hat{c}_t(y)\right]$$
$$\stackrel{(2)}{\le} \mathbb{E}\left[\sum_{t=1}^{T} h_t(y_t)\right] - \mathbb{E}\left[\sum_{t=1}^{T} h_t(y)\right] + 2RG\sqrt{T} = \mathbb{E}\left[\sum_{t=1}^{T} h_t(y_t) - \sum_{t=1}^{T} h_t(y)\right] + 2RG\sqrt{T} \stackrel{(1)}{\le} 3RG\sqrt{T}.$$