Stochastic Approximation
Neta Shoham
References
This presentation is based entirely on the book Introduction to Stochastic Search and Optimization (2003) by James C. Spall. The slides here are heavily based on slides used as a teaching aid for the book (ISSO), which can be found at: http://www.jhuapl.edu/ISSO/Pages/powerpoint.html
Agenda
1. STOCHASTIC APPROXIMATION FOR ROOT FINDING
2. STOCHASTIC GRADIENT
STOCHASTIC APPROXIMATION FOR ROOT FINDING IN NONLINEAR MODELS
Stochastic Root-Finding Problem
Focus is on finding θ (i.e., θ*) such that g(θ*) = 0
g(θ) is typically a nonlinear function of θ
Assume only noisy measurements of g(θ) are available:
Y_k(θ) = g(θ) + e_k(θ), k = 0, 1, 2, …
The above problem arises frequently in practice:
Optimization with noisy measurements (g(θ) represents the gradient of a loss function; see Chapter 5 of ISSO)
Equation solving in physics-based models
Machine learning (see Chapter 11 of ISSO)
Some remarks on the noisy measurements
In many applications the measurements include an input vector x_k
Writing e_k(θ) for the difference between the measurement and g(θ), this is not far from the form Y_k(θ) = g(θ) + e_k(θ)
If the inputs x_k are independent and identically distributed, the standard root-finding setting applies
If the distribution of the inputs varies with k, we have a time-varying problem
Core Algorithm for Stochastic Root-Finding
Basic algorithm published in Robbins and Monro (1951)
Algorithm is a stochastic analogue to steepest descent when used for optimization
Noisy measurement Y_k(θ) replaces the exact gradient g(θ)
Generally wasteful to average measurements at a given value of θ across iterations; better to average across iterations (with changing θ)
Core Robbins-Monro algorithm for unconstrained root-finding is
θ̂_{k+1} = θ̂_k - a_k Y_k(θ̂_k)
A constrained version of the algorithm also exists
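The core recursion above can be sketched in a few lines of Python; the specific function g(θ) = θ - 2, the noise level, and the gain a_k = a/(k+1) below are illustrative assumptions, not from the slides.

```python
import random

def robbins_monro(measure, theta0, a=1.0, n_iters=5000, seed=0):
    """Core Robbins-Monro recursion for root finding:
    theta_{k+1} = theta_k - a_k * Y_k(theta_k), with decaying gain a_k = a/(k+1)."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(n_iters):
        a_k = a / (k + 1)
        theta = theta - a_k * measure(theta, rng)
    return theta

# Noisy measurement Y(theta) of g(theta) = theta - 2, whose root is theta* = 2.
noisy_g = lambda theta, rng: (theta - 2.0) + rng.gauss(0.0, 0.5)

est = robbins_monro(noisy_g, theta0=0.0)
```

Note how the iterate averages out the noise across iterations (with changing θ) rather than averaging repeated measurements at a fixed θ.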
Sample Mean as an SA Algorithm
X_i are independent measurements with E(X_i) = μ; the goal is to find μ
The sample mean obeys the recursion θ̂_{k+1} = θ̂_k - a_k(θ̂_k - X_{k+1})
Letting a_k = 1/(k+1) puts this recursion in the framework of SA
We can also let g(θ) = θ - μ and e_k = μ - X_{k+1}, so that Y_k(θ̂_k) = θ̂_k - X_{k+1}
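This equivalence is easy to check numerically; the recursion below reproduces the ordinary sample mean exactly:

```python
def recursive_mean(xs):
    # Sample mean written as an SA recursion:
    # theta_{k+1} = theta_k - a_k * (theta_k - x_{k+1}), with gain a_k = 1/(k+1).
    theta = 0.0
    for k, x in enumerate(xs):
        theta = theta - (theta - x) / (k + 1)
    return theta

xs = [3.0, 5.0, 7.0, 1.0]
```

For these four points the recursion returns the ordinary average, 4.0.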
Circular Error Probable (CEP): Example of Root-Finding (Example 4.3 in ISSO)
Interested in estimating the radius θ of a circle about a target such that half of the impacts lie within the circle (θ is a scalar radius)
Define a success variable indicating whether a given impact falls within the circle of radius θ
The root-finding algorithm adjusts θ̂_k according to the difference between the observed success indicator and the target probability 1/2
Figure on next slide illustrates results for one study
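A hypothetical simulation of such a recursion (the 2-D standard-normal impact model, the gain a_k = a/(k+1), and the projection keeping θ ≥ 0 are assumptions for illustration; Example 4.3 in ISSO uses its own data):

```python
import random

def estimate_cep(n_iters=20000, a=5.0, seed=1):
    """Root-finding recursion for the median radius: push theta down when an
    impact lands inside the circle, up when it lands outside, so that the
    long-run hit fraction approaches 1/2 (a simple constrained version,
    projecting theta back to theta >= 0)."""
    rng = random.Random(seed)
    theta = 1.0  # initial radius guess
    for k in range(n_iters):
        x, y = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)  # simulated impact point
        hit = 1.0 if (x * x + y * y) ** 0.5 <= theta else 0.0
        theta = max(theta - (a / (k + 1)) * (hit - 0.5), 0.0)
    return theta

est = estimate_cep()
```

For a standard 2-D Gaussian impact distribution the true CEP is sqrt(2 ln 2) ≈ 1.177, which the recursion should approach.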
True and estimated CEP: 1000 impact points with impact mean differing from target point (Example 4.3 in ISSO)
Convergence Conditions
A central aspect of root-finding SA is the set of conditions for formal convergence of the iterate θ̂_k to a root θ*
Provides a rigorous basis for many popular algorithms (LMS, backpropagation, simulated annealing, etc.)
Section 4.3 of ISSO contains two sets of conditions:
"Statistics" conditions based on classical assumptions about g(θ), the noise, and the gains a_k
"ODE" conditions based on a connection to a deterministic ordinary differential equation (ODE)
Neither set of conditions is a special case of the other
"Statistics" Conditions
A1 (Gain sequence): a_k > 0, a_k → 0, Σ a_k = ∞, and Σ a_k^2 < ∞
A2 (Search direction): g(θ) points toward θ*, e.g. (θ - θ*)^T g(θ) > 0 for θ ≠ θ*, with local behavior governed by some positive definite matrix B
A3 (Mean-zero noise): E[e_k(θ)] = 0 for all θ and k
A4 (Growth and variance bounds): ||g(θ)||^2 + E[||e_k(θ)||^2] ≤ c(1 + ||θ||^2) for all θ, k and some c > 0
"ODE" Conditions
B1 (Gain sequence): a_k > 0, a_k → 0, and Σ a_k = ∞
B2 (Relationship to ODE): g(θ) is continuous and the ODE dθ(t)/dt = -g(θ(t)) has a stable equilibrium point at θ*
B3 (Iterate boundedness): the iterates return infinitely often to A, a compact subset of the domain of attraction (in B2)
B4 (Bounded variance property of the measurement error)
B5 (Disappearing bias): any bias in the measurements of g(θ) vanishes asymptotically
Connection to ODEs
To motivate the connection to an ODE, note that in the deterministic case the recursion is θ_{k+1} = θ_k - a_k g(θ_k)
Define a continuous-time interpolation Z(t) of the iterates; then the recursion can be written as a discretization of dZ(t)/dt = -g(Z(t))
Suppose a_k → 0; then the ODE can be regarded as a limiting form of the difference equation
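A quick numerical check of this limiting-ODE view (the choice g(θ) = θ is an illustrative assumption): with step size a over n steps covering total time t = a·n, the deterministic recursion approaches the ODE solution θ(t) = θ0·exp(-t).

```python
import math

def deterministic_recursion(theta0, a, n):
    # Euler-style recursion theta_{k+1} = theta_k - a * g(theta_k), with g(theta) = theta.
    theta = theta0
    for _ in range(n):
        theta = theta - a * theta
    return theta

approx = deterministic_recursion(1.0, 0.001, 1000)  # total time t = 1
exact = math.exp(-1.0)                              # ODE solution at t = 1
```

Shrinking the step (while holding t = a·n fixed) drives the recursion toward the ODE path, which is exactly the sense in which the ODE is a limiting form.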
ODE Convergence Paths for Nonlinear Problem in Example 4.6 in ISSO: Satisfies ODE Conditions Due to Asymptotic Stability and Global Domain of Attraction
Gain Selection
Choice of the gain sequence a_k is critical to the performance of SA
Famous conditions for convergence are Σ a_k = ∞ and Σ a_k^2 < ∞
A common practical choice of gain sequence is a_k = a/(k+1+A)^α, where 1/2 < α ≤ 1, a > 0, and A ≥ 0
A strictly positive A ("stability constant") allows for a larger a (possibly faster convergence) without risking unstable behavior in the early iterations
α and A can usually be pre-specified; the critical coefficient a is usually chosen by "trial-and-error"
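The standard gain form is one line of code; the numeric defaults below (a, A, and α = 0.602) are illustrative choices, not prescriptions from the slides:

```python
def gain(k, a=0.1, A=50.0, alpha=0.602):
    """SA gain sequence a_k = a / (k + 1 + A)**alpha with stability constant A.
    alpha must satisfy 1/2 < alpha <= 1 for the convergence conditions."""
    return a / (k + 1 + A) ** alpha

# A > 0 damps the early gains: compare the first gain with and without it.
early_with_A = gain(0)            # a / 51**0.602, noticeably smaller
early_without_A = gain(0, A=0.0)  # a / 1**0.602 = a
```

The damped early gains are what let a larger a be used safely, per the "stability constant" remark above.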
Some Asymptotic Details and More About Gain Selection
Fabian (1968) shows that, under appropriate regularity conditions for a_k = a/(k+1)^α, the normalized error k^{α/2}(θ̂_k - θ*) is asymptotically normal with covariance Σ, where Σ depends on {a_k} and the Jacobian matrix of g(θ)
Under general conditions the rate of convergence is O(1/k^{α/2}) in an appropriate stochastic sense (note that in deterministic gradient descent the rate of convergence is O(c^k), 0 < c < 1)
The asymptotically optimal gain is a_k = H(θ*)^{-1}/(k+1) (a_k is now a matrix), where H is the Jacobian matrix of g(θ), analogous to Newton-Raphson search
In practice we do not know θ* or H
Constant Step Size
In many applications a constant step size is used to avoid too small a step size for large k
Typical applications involve adaptive tracking or control, where θ* is changing in time
A constant step size also appears in neural network training, although there is no variation in the underlying θ*
Algorithms with constant step size are generally not formally convergent (there is a theory for weak convergence)
The restart trick: periodically restart the algorithm, using the current iterate as the new initial condition and resetting the gain sequence
Extensions to Basic Root-Finding SA (Section 4.5 of ISSO)
Joint parameter and state evolution: there exists a state vector x_k related to the system being optimized, e.g., a state-space model governing the evolution of x_k, where the model depends on the values of θ
Adaptive estimation and higher-order algorithms: adaptively estimating the gain a_k; SA analogues of fast Newton-Raphson search
Iterate averaging: see slides to follow
Time-varying functions: see slides to follow
Adaptive Estimation and Higher-Order Algorithms
The aim is to enhance the convergence rate
SA analogues of Newton-Raphson search try to adaptively estimate the Jacobian in order to achieve the asymptotically optimal gain a_k = H(θ*)^{-1}/(k+1)
One of the first adaptive gains (Kesten 1958) is based on the sign of successive iterate differences; frequent sign changes are an indication that we are close to θ*, and the gain is decreased
While the above methods are effective ways to speed convergence, they are restricted in their range of application
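A sketch of Kesten's idea for a scalar iterate (the specific gain form a/(m+1), where m counts sign changes, is an illustrative assumption):

```python
def kesten_gains(diffs, a=1.0):
    """Adaptive gain in the spirit of Kesten (1958): decrease the gain only when
    successive iterate differences change sign, i.e. when the iterate starts to
    oscillate around the root."""
    m = 0            # number of sign changes seen so far
    prev = None
    gains = []
    for d in diffs:
        if prev is not None and d * prev < 0:
            m += 1   # oscillation detected: likely near theta*, shrink the gain
        gains.append(a / (m + 1))
        prev = d
    return gains

# Monotone progress keeps the gain large; oscillation shrinks it.
```

The design choice is that monotone movement suggests the iterate is still far from θ*, so a large gain remains safe.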
Iterate Averaging
Iterate averaging is an important and relatively recent development in SA asymptotics
Provides a means for achieving optimal asymptotic performance without using optimal gains a_k (optimal among the sequences satisfying a_{k+1}/a_k = 1 + o(a_k))
Basic iterate averaging uses the following sample mean as the final estimate: θ̄_k = (1/(k+1)) Σ_{i=0}^{k} θ̂_i
Results in finite-sample practice are mixed
Success relies on a large proportion of the individual iterates hovering in some balanced way around θ*
Many practical problems have iterates approaching θ* in a roughly monotonic manner
Monotonicity is not consistent with good performance of iterate averaging; see the plot on the following slide
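A minimal sketch of the idea (the slowly decaying gain a_k = a/(k+1)^0.7 and the toy g(θ) = θ - 2 are illustrative assumptions): run the basic recursion with a non-optimal gain and report the sample mean of the iterates as the final estimate.

```python
import random

def sa_with_averaging(measure, theta0, n_iters, a=1.0, alpha=0.7, seed=0):
    """Robbins-Monro with iterate averaging: the final estimate is the sample
    mean of all iterates rather than the last iterate."""
    rng = random.Random(seed)
    theta = theta0
    total = 0.0
    for k in range(n_iters):
        theta = theta - (a / (k + 1) ** alpha) * measure(theta, rng)
        total += theta          # accumulate for the iterate average
    return total / n_iters

noisy_g = lambda theta, rng: (theta - 2.0) + rng.gauss(0.0, 0.5)
est = sa_with_averaging(noisy_g, theta0=0.0, n_iters=20000)
```

Here the larger gain keeps the iterates hovering around the root, and the averaging then suppresses the noise.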
Contrasting Search Paths for Typical p = 2 Problem: Ineffective and Effective Uses of Iterate Averaging
Time-Varying Functions
In some problems, the root-finding function varies with the iteration: g_k(θ) (rather than g(θ))
Adaptive control with a time-varying target vector
Experimental design with user-specified input values
Let θ_k* denote the root of g_k(θ) = 0
Suppose that θ_k* → θ* for some fixed value θ* (equivalent to the fixed root in conventional root-finding)
In such cases, much standard theory continues to apply
Plot on the following slide shows a case where g_k(θ) represents a gradient function with scalar θ
Time-Varying g_k(θ) = ∂L_k(θ)/∂θ for Loss Functions with Limiting Minimum
STOCHASTIC GRADIENT FORM OF STOCHASTIC APPROXIMATION
Stochastic Gradient Formulation
For differentiable L(θ), recall the familiar set of p equations and p unknowns for use in finding a minimum θ*: g(θ) = ∂L/∂θ = 0
The above is a special case of the root-finding problem
Suppose we cannot observe L(θ) and g(θ) except in the presence of noise:
Adaptive control (target tracking)
Simulation-based optimization
Etc.
Seek an unbiased measurement of ∂L/∂θ for optimization
Stochastic Gradient Formulation (Cont'd)
Suppose L(θ) = E[Q(θ, V)]
V represents all random effects
Q(θ, V) represents the "observed" cost (noisy measurement of L(θ))
Seek a representation where ∂Q/∂θ is an unbiased measurement of ∂L/∂θ
This is not true when the distribution function for V depends on θ
The above implies that the desired representation is not one in which the density function p_V(·) for V depends on θ
Stochastic Gradient Measurement and Algorithm
When the density p_V(·) is independent of θ, ∂Q(θ, V)/∂θ is an unbiased measurement of ∂L/∂θ (in general, ∂Q(θ, V)/∂θ is not equal to ∂L/∂θ)
The above requires the derivative-integral interchange ∂L/∂θ = ∂E[Q(θ, V)]/∂θ = E[∂Q(θ, V)/∂θ] to be valid
Can use the root-finding (Robbins-Monro) SA algorithm to attempt to find θ*: θ̂_{k+1} = θ̂_k - a_k ∂Q(θ, V_k)/∂θ, evaluated at θ = θ̂_k
The unbiased measurement satisfies key convergence conditions of SA (Section 4.3 in ISSO)
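A sketch of the resulting algorithm on a toy problem (the quadratic Q and the gain are illustrative assumptions): take L(θ) = E[(θ - V)^2/2] with V ~ N(3, 1), so ∂Q/∂θ = θ - V is an unbiased measurement of ∂L/∂θ = θ - 3 and the minimum is at θ* = 3.

```python
import random

def stochastic_gradient(dq, theta0, n_iters, a=1.0, seed=0):
    """Robbins-Monro applied to an unbiased gradient measurement:
    theta_{k+1} = theta_k - a_k * dQ(theta_k, V_k)/dtheta, with a_k = a/(k+1)."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(n_iters):
        theta = theta - (a / (k + 1)) * dq(theta, rng)
    return theta

# dQ/dtheta for Q(theta, V) = (theta - V)^2 / 2 with V ~ N(3, 1).
dq = lambda theta, rng: theta - rng.gauss(3.0, 1.0)
est = stochastic_gradient(dq, theta0=0.0, n_iters=10000)
```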
Example of Conversion to Preferred Stochastic Gradient Form
Suppose that Q(θ, V) = f(θ, Z) + W', where W' ~ N(0, θ^2 σ^2) and Z is independent of θ
Then V = (Z, W') is dependent on θ
We can write W' = θW, where W ~ N(0, σ^2); now Q(θ, V') = f(θ, Z) + θW
V' = (Z, W) is independent of θ
Stochastic Gradient Tendency to Move Iterate in Correct Direction
Implementation of a General Nonlinear Regression Problem
Instantaneous-input form: update the estimate from one input-output measurement at a time
Batch form: minimize the total error over a fixed data set
The batch minimum is tied to the fixed n (and z_1, …, z_n)
To implement the batch form we need the full set of data at the beginning
Stochastic Gradient and LMS Connections
The basic linear model is z_k = h_k^T θ + v_k
Consider the standard MSE loss L(θ) = E[(z_k - h_k^T θ)^2]/2
Implies Q = (z_k - h_k^T θ)^2/2
Then the basic LMS algorithm is θ̂_{k+1} = θ̂_k + a_k h_k(z_k - h_k^T θ̂_k)
Hence LMS is a direct application of stochastic gradient SA
Proposition 5.1 in ISSO shows how SA convergence theory applies to LMS
Implies convergence of LMS to θ*
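A minimal LMS sketch on synthetic data (the constant step size a = 0.05, the 2-parameter model, and the noise level are illustrative assumptions):

```python
import random

def lms(data, p, a=0.05):
    """Basic LMS: theta <- theta + a * h_k * (z_k - h_k' theta), i.e. the
    stochastic gradient recursion for the linear model z_k = h_k' theta + v_k."""
    theta = [0.0] * p
    for h, z in data:
        err = z - sum(hi * ti for hi, ti in zip(h, theta))  # prediction error
        theta = [ti + a * hi * err for ti, hi in zip(theta, h)]
    return theta

# Synthetic data from theta_true = (2, -1) with small measurement noise.
rng = random.Random(0)
data = []
for _ in range(5000):
    h = (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
    z = 2.0 * h[0] - 1.0 * h[1] + rng.gauss(0.0, 0.1)
    data.append((h, z))
est = lms(data, p=2)
```

With a constant step size the estimate hovers near θ* rather than converging formally, consistent with the constant-step-size slide above.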
Neural Networks
Neural networks (NNs) are general function approximators
Actual output z_k is represented by a NN according to the standard model z_k = h(θ, x_k) + v_k
h(θ, x_k) represents the NN output for input x_k and weight values θ; v_k represents noise
Diagram of a simple feedforward NN on the next slide
The most popular training method is backpropagation (mean-squared-type loss function)
Backpropagation is a stochastic gradient recursion of the form θ̂_{k+1} = θ̂_k - a_k ∂Q/∂θ, with Q the squared error for the k-th input-output pair
Simple Feedforward Neural Network with p = 25 Weight Parameters
Image Restoration
Aim is to recover the true image s subject to having a recorded image corrupted by noise
Common to construct a least-squares type problem, where Hs represents a convolution of the measurement process (H) and the true pixel-by-pixel image (s)
Can be solved by either batch linear regression methods or the LMS/RLS methods
Nonlinear measurements need the full power of the stochastic gradient method
Measurements modeled as Z = F(s, x, V)
Multiple-Pass Implementation
The recursive and batch forms represent two extremes in the implementation of the stochastic gradient algorithm
A hybrid form is to use the instantaneous gradient, as in the recursive form, yet make multiple passes through the full data
The user may choose to restart the gain value at a_0 on each pass
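A sketch of the hybrid scheme (the decaying gain a/(k+1) and the toy quadratic Q are illustrative assumptions; whether to restart the gain on each pass is left to the user, as the slide notes):

```python
def multipass_sg(data, dq, theta0, passes=2, a=1.0, restart_gain=True):
    """Instantaneous-gradient updates, as in the recursive form, but cycling
    through the full data set several times."""
    theta = theta0
    k = 0
    for _ in range(passes):
        if restart_gain:
            k = 0  # restart the gain value at a_0 for each pass
        for z in data:
            theta = theta - (a / (k + 1)) * dq(theta, z)
            k += 1
    return theta

# Q(theta, z) = (theta - z)^2 / 2, so dQ/dtheta = theta - z.
dq = lambda theta, z: theta - z
est = multipass_sg([1.0, 2.0, 3.0], dq, theta0=0.0)
```

For this toy quadratic the iterate settles at the batch minimum (the mean of the data, 2.0) even though only instantaneous gradients are used.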