Steepest Descent Optimization
Outline
- Newton Method -> Steepest Descent
- Convergence Rates
- Exact and Numerical Line Search
- Regularization
Newton -> Steepest Descent Method

Given: f(x) ~ f(x0) + dx^T g + (1/2) dx^T H dx                          (1)
Find:  the stationary point x* such that ∇f(x*) = 0
Soln:  Newton's Method,   dx = -H^{-1} g
       Steepest Descent,  dx = -a g                                      (2)
where a is a step length that should be calculated so we never go uphill.
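The algebra linking the model (1) to the two updates is only implicit above; here is a brief worked version (my addition, using only the quantities already defined on this slide):

    % Differentiate the quadratic model (1) with respect to dx and set it to zero:
    \begin{aligned}
      \nabla_{dx} f = g + H\,dx = 0 \quad &\Rightarrow\quad dx_{\text{Newton}} = -H^{-1} g \\
      H^{-1} \approx a\,I \quad &\Rightarrow\quad dx_{\text{SD}} = -a\,g \qquad \text{(this is eqn (2))}
    \end{aligned}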
Assume H is diagonally dominant.

Well-conditioned:  f(x) ~ dx^2 + dy^2,          H = [1 0; 0 1]
Ill-conditioned:   f(x) ~ dx^2 + 0.001 dy^2,    H = [1 0; 0 0.001]

For such an H, (H^{-1})_ij ~ δ_ij / H_ii.

Therefore (2) becomes the Preconditioned Steepest Descent method if g is premultiplied by this approximation to the inverse of H.
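A minimal MATLAB sketch of the resulting update, assuming the gradient g and the Hessian H are available at the current iterate x (the variable names are mine, not from the slides):

    % Preconditioned steepest-descent step: premultiply g by the diagonal
    % approximation to inv(H), i.e. (H^{-1})_ij ~ delta_ij / H_ii.
    d  = diag(H);           % diagonal of H, assumed positive
    dx = -a * (g ./ d);     % dx_i = -a * g_i / H_ii
    x  = x + dx;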
Poorly Conditioned Eqns

Steepest Descent:

    for k = 1:niter
        g = grad(x);      % grad is a placeholder for the user's gradient function
        x = x - a*g;      % steepest-descent update (2)
    end

[Figure: steepest-descent iterates on ill-conditioned vs. well-conditioned contours]

Poorly conditioned equations converge slowly.
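To make "converge slowly" concrete, here is a small MATLAB sketch (my own illustration, not from the slides) counting fixed-step steepest-descent iterations on the two quadratics above:

    % Steepest descent with a fixed step on f(x) = 0.5*x'*H*x, gradient g = H*x.
    % The well-conditioned H finishes in one step; the ill-conditioned H
    % needs thousands of iterations to reach the same gradient tolerance.
    for Hcell = {diag([1 1]), diag([1 0.001])}
        H = Hcell{1};
        x = [1; 1];
        a = 1;                            % fixed step length
        k = 0;
        while norm(H*x) > 1e-6 && k < 1e5
            x = x - a*(H*x);              % steepest-descent update
            k = k + 1;
        end
        fprintf('cond(H) = %8.1f   iterations = %d\n', cond(H), k);
    end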
Exact Line Search
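The slide's own line-search equation did not survive extraction; below is a standard reconstruction (an assumption on my part) of the exact line-search condition for the quadratic model (1): choose the step length a that minimizes f along a search direction p, with p = -g for steepest descent.

    % Minimize phi(a) = f(x0 + a*p) under the quadratic model (1):
    \begin{aligned}
      \frac{d}{da} f(x_0 + a\,p) = g^T p + a\, p^T H p &= 0 \\
      a = -\frac{g^T p}{p^T H p}, \qquad
      a_{\text{SD}} = \frac{g^T g}{g^T H g} \quad &(p = -g)
    \end{aligned}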
Numerical Line Search

Problem 1: Solve the above equation for the step length a.
Problem 2: Write Matlab code for Newton's method on the Rosenbrock function, but now compute the step length (one possible numerical line search is sketched below).
Problem 3: Write Matlab code for the Steepest Descent method on the Rosenbrock function, but now compute the step length and apply preconditioning.
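As a starting point for Problems 2 and 3, here is a sketch of one common way to compute the step length numerically: a backtracking (Armijo) line search on the Rosenbrock function. The constants and the halving rule are illustrative assumptions, not the course's prescribed method.

    % A few steepest-descent steps on Rosenbrock, with a backtracking line
    % search that shrinks the trial step until we are sure we never go uphill.
    rosen = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    grad  = @(x) [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1));
                   200*(x(2) - x(1)^2)];

    x = [-1.2; 1];                        % common starting point
    for k = 1:1000
        g = grad(x);
        p = -g;                           % steepest-descent direction
        a = 1;                            % trial step length
        while rosen(x + a*p) > rosen(x) + 1e-4 * a * (g.'*p)
            a = a/2;                      % backtrack: halve the step
        end
        x = x + a*p;
        if norm(g) < 1e-6, break; end
    end
    fprintf('x = (%.4f, %.4f), f = %.3e after %d steps\n', x(1), x(2), rosen(x), k);

The same line search can be reused for Problem 2 by replacing the direction p = -g with the Newton direction p = -H\g.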