A Stochastic Quasi-Newton Method for Large-Scale Learning
Jorge Nocedal, Northwestern University
With S. Hansen, R. Byrd and Y. Singer
IPAM, UCLA, Feb 2014
Goal
Propose a robust quasi-Newton method that operates in the stochastic approximation regime:
- a purely stochastic method (not batch), designed to compete with the stochastic gradient (SG) method
- a full, non-diagonal Hessian approximation
- scalable to millions of parameters
Outline
Are iterations of the following form (a stochastic quasi-Newton iteration) viable?
- theoretical considerations; iteration costs
- differencing noisy gradients?
Key ideas: compute curvature information pointwise at regular intervals, and build on the strength of BFGS updating, recalling that it is an overwriting (not an averaging) process.
- results on text and speech problems
- examine both training and testing errors
Problem
Applications:
- simulation optimization
- machine learning
The algorithm is not (yet) applicable to simulation-based optimization.
Stochastic gradient method
For the loss function, the Robbins-Monro or stochastic gradient method takes steps along a stochastic gradient (estimator) computed on a mini-batch.
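The formulas on this slide are missing from the transcript; a standard way to write the setup (an assumption, using the usual notation with mini-batch S_k of size b) is:

```latex
F(w) \;=\; \frac{1}{N}\sum_{i=1}^{N} f(w;\, x_i, z_i),
\qquad
w_{k+1} \;=\; w_k - \alpha_k\, \widehat{\nabla} F(w_k),
\qquad
\widehat{\nabla} F(w_k) \;=\; \frac{1}{b}\sum_{i \in S_k} \nabla f(w_k;\, x_i, z_i)
```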
Why it won't work…
1. Is there any reason to think that including a Hessian approximation will improve upon the stochastic gradient method?
2. Iteration costs are so high that, even if the method is faster than SG in terms of training costs, it will be a weaker learner.
Theoretical Considerations
Number of iterations needed to compute an epsilon-accurate solution:
- depends on the Hessian at the true solution and on the gradient covariance matrix
- for the stochastic gradient method, it depends on the condition number of the Hessian at the true solution
- a Newton-like scaling completely removes the dependency on the condition number (Murata 98); cf. Bottou-Bousquet
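The quantities referred to here are not shown in the transcript. As a rough sketch of the type of result being cited (in the spirit of Murata 98 and the Bottou-Bousquet analysis; the exact constants depend on the assumptions), with kappa the condition number of the Hessian at the true solution and nu a constant involving the gradient covariance there:

```latex
\text{SG:}\quad k_{\epsilon} \;=\; O\!\left(\frac{\nu\,\kappa^{2}}{\epsilon}\right),
\qquad\qquad
\text{SG scaled by } \nabla^{2}F(w_{*})^{-1}:\quad k_{\epsilon} \;=\; O\!\left(\frac{\nu}{\epsilon}\right)
```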
Computational cost
Assuming we obtain the efficiencies of classical quasi-Newton methods in limited-memory form, each iteration requires 4Md operations beyond the cost of a stochastic gradient step:
- M = memory in the limited-memory implementation; typically M = 5
- d = dimension of the optimization problem
Mini-batching
Assuming a mini-batch of size b = 50, the cost of a stochastic gradient step is 50d.
Use of small mini-batches will be a game-changer: b = 10, 50, 100.
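A quick worked comparison (my arithmetic, using the numbers above): with M = 5, the limited-memory overhead 4Md is modest relative to a mini-batch gradient:

```latex
\text{SG step: } bd = 50d,
\qquad
\text{L-BFGS overhead: } 4Md = 20d,
\qquad
\frac{bd + 4Md}{bd} = \frac{70d}{50d} = 1.4
```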
Game changer? Not quite…
Mini-batching makes operation counts favorable but does not resolve challenges related to noise.
1. Avoid differencing noise
- curvature estimates cannot suffer from sporadic spikes in noise (Schraudolph et al. (99), Ribeiro et al. (2013))
- quasi-Newton updating is an overwriting process, not an averaging process
- control the quality of curvature information
2. Cost of curvature computation
- use of small mini-batches will be a game-changer: b = 10, 50, 100
Design of a Stochastic Quasi-Newton Method
Propose a method based on the famous BFGS formula:
- all components seem to fit together well
- numerical performance appears to be strong
Propose a new quasi-Newton updating formula:
- specifically designed to deal with noisy gradients
- work in progress
Review of the deterministic BFGS method
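The update formulas on this slide are not in the transcript; the standard (inverse-Hessian) BFGS iteration it presumably reviews is:

```latex
w_{k+1} = w_k - \alpha_k H_k \nabla F(w_k),
\qquad
s_k = w_{k+1} - w_k, \quad y_k = \nabla F(w_{k+1}) - \nabla F(w_k),
```

```latex
H_{k+1} = \left(I - \rho_k s_k y_k^{\top}\right) H_k \left(I - \rho_k y_k s_k^{\top}\right) + \rho_k s_k s_k^{\top},
\qquad
\rho_k = \frac{1}{y_k^{\top} s_k}
```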
The remarkable properties of the BFGS method (convex case)
- superlinear convergence; global convergence for strongly convex problems; self-correction properties
- only need to approximate the Hessian in a subspace
(Powell 76, Byrd-N 89)
Adaptation to the stochastic setting
We cannot mimic the classical approach and update after each iteration: since the batch size b is small, this would yield highly noisy curvature estimates.
Instead: use a collection of iterates to define the correction pairs.
Stochastic BFGS: Approach 1
- define two collections of size L
- define the average iterate and average gradient
- form a new curvature pair from the differences of these averages
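The formulas are not in the transcript; a plausible reconstruction of Approach 1 (an assumption, following the averaging described above, with J_t indexing the most recent L iterates) is:

```latex
\bar{w}_t = \frac{1}{L}\sum_{k \in \mathcal{J}_t} w_k,
\qquad
\bar{g}_t = \frac{1}{L}\sum_{k \in \mathcal{J}_t} \widehat{\nabla} F(w_k),
\qquad
s_t = \bar{w}_t - \bar{w}_{t-1}, \quad y_t = \bar{g}_t - \bar{g}_{t-1}
```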
Stochastic L-BFGS: First Approach
Stochastic BFGS: Approach 1
We could not make this work in a robust manner!
1. Two sources of error:
- sample variance
- lack of sample uniformity
2. Initial reaction:
- control the quality of the average gradients
- use of sample variance … dynamic sampling
Proposed solution: control the quality of the curvature estimate y directly.
Key idea: avoid differencing
The standard definition of y arises from differencing gradients, but Hessian-vector products are often available.
Define the curvature vector for L-BFGS via a Hessian-vector product, and perform this computation only every L iterations.
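The defining formula is missing from the transcript; a reconstruction consistent with the description (an assumption on my part, with S_H a Hessian mini-batch of size b_H) is:

```latex
y_t \;=\; \widehat{\nabla}^{2} F_{S_H}(\bar{w}_t)\, s_t,
\qquad
s_t \;=\; \bar{w}_t - \bar{w}_{t-1},
```

formed only once every L iterations.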
Structure of the Hessian-vector product
Built from the same ingredients as the mini-batch stochastic gradient.
1. Code the Hessian-vector product directly (see the sketch below)
2. Achieve sample uniformity automatically (cf. Schraudolph)
3. Avoid numerical problems when ||s|| is small
4. Control the cost of the y computation
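As an illustration of item 1, a minimal sketch of a directly coded Hessian-vector product for L2-regularized binary logistic regression (the function name, the regularizer `lam`, and the NumPy data layout are my own assumptions, not taken from the slides):

```python
import numpy as np

def logistic_hessian_vector_product(w, v, X, lam=0.0):
    """Estimate (nabla^2 F(w)) v for L2-regularized binary logistic
    regression using only matrix-vector products with the Hessian
    mini-batch X (shape: b_H x d). Labels drop out of the Hessian."""
    z = X @ w                        # margins on the Hessian mini-batch
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid(z)
    d = p * (1.0 - p)                # per-example curvature weights
    Xv = X @ v                       # first matrix-vector product
    return X.T @ (d * Xv) / X.shape[0] + lam * v
```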
The Proposed Algorithm
Algorithmic Parameters
- b: stochastic gradient batch size
- b_H: Hessian-vector batch size
- L: controls the frequency of quasi-Newton updating
- M: memory parameter in L-BFGS updating; M = 5 (use the limited-memory form)
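A minimal sketch of the loop these parameters control, not the authors' implementation: the function names (`sqn`, `stoch_grad`, `hess_vec`, `two_loop_recursion`), the fixed steplength `alpha`, and the curvature-pair skip threshold are my own; the real method uses a decreasing steplength and the subsampled Hessian-vector product described above.

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns H_t * grad, where H_t is
    the implicit inverse-Hessian approximation built from the (s, y) pairs."""
    q = grad.copy()
    alphas = []
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    s, y = s_list[-1], y_list[-1]
    q *= (s @ y) / (y @ y)            # initial scaling H0 = gamma * I
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ q)
        q += (a - beta) * s
    return q

def sqn(w, stoch_grad, hess_vec, n_iters, alpha=0.01, L=20, M=5):
    """Sketch of the stochastic quasi-Newton loop described in the talk:
    plain SG steps until curvature pairs exist, L-BFGS-scaled steps after,
    with a new (s, y) pair formed every L iterations from a Hessian-vector
    product at the average iterate (not by differencing noisy gradients)."""
    w = np.asarray(w, dtype=float).copy()
    s_list, y_list = [], []
    w_bar, w_bar_prev = np.zeros_like(w), None
    for k in range(n_iters):
        g = stoch_grad(w)                 # gradient on a batch of size b
        if s_list:
            w = w - alpha * two_loop_recursion(g, s_list, y_list)
        else:
            w = w - alpha * g             # fall back to SG initially
        w_bar += w / L
        if (k + 1) % L == 0:              # curvature update every L steps
            if w_bar_prev is not None:
                s = w_bar - w_bar_prev
                y = hess_vec(w_bar, s)    # subsampled Hessian-vector product
                                          # on a batch of size b_H
                if y @ s > 1e-10:         # keep the update well defined
                    s_list.append(s); y_list.append(y)
                    if len(s_list) > M:   # limited memory: keep M pairs
                        s_list.pop(0); y_list.pop(0)
            w_bar_prev, w_bar = w_bar, np.zeros_like(w)
    return w
```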
Need the Hessian to implement a quasi-Newton method? Are you out of your mind?
We don't need the Hessian-vector product, but it has many advantages: complete freedom in sampling and accuracy.
Numerical Tests
- stochastic gradient method (SGD)
- stochastic quasi-Newton method (SQN)
It is well known that SGD is highly sensitive to the choice of steplength, and so is the SQN method (though perhaps less so).
RCV1 Problem (n = 112919, N = 688329)
[Figure: SGD vs. SQN against accessed data points (including Hessian-vector products); b = 50, 300, 1000; M = 5, L = 20; b_H = 1000.]
Speech Problem (n = 30315, N = 191607)
[Figure: SGD vs. SQN; b = 100, 500; M = 5, L = 20; b_H = 1000.]
Varying the Hessian batch size b_H: RCV1, b = 300
Varying the memory size M in limited-memory BFGS: RCV1
Varying the L-BFGS memory size: synthetic problem
Generalization Error: RCV1 Problem
[Figure: test error for SGD vs. SQN.]
Test Problems
Synthetically generated logistic regression (Singer et al.):
- n = 50, N = 7000
RCV1 dataset:
- n = 112919, N = 688329
SPEECH dataset:
- NF = 235, |C| = 129; n = NF x |C| = 30315, N = 191607
Iteration Costs
- SGD: mini-batch stochastic gradient
- SQN: mini-batch stochastic gradient, a matrix-vector product (the L-BFGS step), and a Hessian-vector product every L iterations
Iteration Costs
Typical parameter values: b = 50-1000, b_H = 100-1000, L = 10-20, M = 3-20.
Example: b = 300, b_H = 1000, L = 20, M = 5
- SGD cost per iteration: 300n
- SQN cost per iteration: 370n
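A quick check of the 370n figure (my arithmetic, using the per-iteration work expression bn + b_H n / L + 4Mn given later in the talk):

```latex
300n + \frac{1000\,n}{20} + 4\cdot 5\,n \;=\; 300n + 50n + 20n \;=\; 370n
```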
Hasn't this been done before?
Hessian-free Newton method: Martens (2010), Byrd et al. (2011)
- claim: stochastic Newton is not competitive with stochastic BFGS
Prior work: Schraudolph et al.
- similar, but cannot ensure the quality of y
- changes the BFGS formula into a one-sided form
Supporting theory?
Work in progress: Figen Oztoprak, Byrd, Solntsev
- combine the classical analysis (Murata, Nemirovsky et al.) with asymptotic quasi-Newton theory
- effect on constants (condition number)
- invoke the self-correction properties of BFGS
Practical implementation: limited-memory BFGS
- loses the superlinear convergence property
- enjoys self-correction mechanisms
Small batches: RCV1 Problem
- SGD: b adp/iter; bn work/iter
- SQN: b + b_H/L adp/iter; bn + b_H n/L + 4Mn work/iter
(adp = accessed data points; here b_H = 1000, M = 5, L = 200.)
The parameters L, M and b_H provide freedom in adapting the SQN method to a specific application.
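For illustration (my arithmetic, with an assumed small gradient batch b = 10 and the values above, showing why a large b_H is affordable when L = 200):

```latex
\text{SQN adp/iter} = b + \frac{b_H}{L} = 10 + \frac{1000}{200} = 15,
\qquad
\text{SQN work/iter} = bn + \frac{b_H n}{L} + 4Mn = 10n + 5n + 20n = 35n
```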
Alternative quasi-Newton framework
The BFGS method was not derived with noisy gradients in mind - how do we know it is an appropriate framework?
Start from scratch: derive quasi-Newton updating formulas that are tolerant to noise.
Foundations
Define a quadratic model around a reference point z.
Using a collection indexed by I, it is natural to require that the residuals are zero in expectation.
This is not enough information to determine the whole model.
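The model and the residual condition are not in the transcript; one plausible reconstruction (my assumption, using g and H for the model's gradient and Hessian and defining residuals through the model's gradient) is:

```latex
q(w) \;=\; f(z) + g^{\top}(w - z) + \tfrac{1}{2}(w - z)^{\top} H (w - z),
\qquad
r_i \;=\; \nabla q(w_i) - \widehat{\nabla} f(w_i) \;=\; g + H(w_i - z) - \widehat{\nabla} f(w_i),
```

with the requirement E[r_i] = 0 for i in I, which by itself does not determine both g and H.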
Mean square error
Given a collection I, choose the model q to minimize the mean square error of the residuals.
Differentiating with respect to g recovers the residual condition - an encouraging sign.
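Continuing the reconstruction above (again an assumption about the missing formulas): with gradient residuals r_i, the least-squares problem and its stationarity condition in g are

```latex
\min_{g,\,H}\;\; \frac{1}{|I|}\sum_{i \in I}\big\| g + H(w_i - z) - \widehat{\nabla} f(w_i) \big\|^{2},
\qquad
\nabla_g = 0:\;\; \frac{1}{|I|}\sum_{i \in I}\big( g + H(w_i - z) - \widehat{\nabla} f(w_i) \big) = 0,
```

i.e., the residuals have zero sample mean, matching the condition postulated on the previous slide.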
The End