Steepest Descent Method: Contours are shown below.

Presentation transcript:

Steepest Descent Method: Contours are shown below.

Steepest Descent: The search direction is the negative of the gradient at the current point. If we choose $x_1^1 = 3.22$, $x_2^1 = 1.39$ as the starting point, represented by the black dot on the figure, the black line shown in the figure gives the direction for a line search. The contours range from $f = 2$ (red) to $f = 20$ (blue).
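As an illustration, the short sketch below computes the steepest-descent direction at the slide's starting point. The objective function behind the figure is not given on the slide, so a hypothetical quadratic (the matrix A and every resulting number) is an assumption used only to make the example runnable.

```python
import numpy as np

# Illustrative sketch only: the slide's objective is not given, so a
# hypothetical quadratic f(x) = 0.5 * x^T A x stands in for it.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

x0 = np.array([3.22, 1.39])   # starting point from the slide (black dot)
d0 = -grad_f(x0)              # steepest-descent direction: negative gradient
print("f(x0) =", f(x0))
print("search direction at x0:", d0)
```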

Steepest Descent: Now the question is how big the step along the gradient direction should be. We want to find the minimum along the line before taking the next step. At that line minimum the directional derivative of $f$ along the search direction is zero, so the gradient at the new point is orthogonal to the current direction; the next steepest-descent direction is therefore perpendicular to the current one. The new point is $(x_1^2, x_2^2) = (2.47, -0.23)$, shown in blue.
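A minimal sketch of such an exact line search is given below, again using the assumed quadratic from the previous sketch (since the slide's actual function is not given, the computed point will not match the slide's (2.47, -0.23)). The step length is found by a one-dimensional minimisation, and the gradient at the new point is checked to be orthogonal to the search direction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Exact line search sketch on an assumed quadratic stand-in for the slide's
# (unspecified) objective; the numbers are illustrative only.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

x = np.array([3.22, 1.39])    # current point (slide's starting point)
d = -grad_f(x)                # steepest-descent direction

# Minimise phi(alpha) = f(x + alpha * d) along the line.
alpha = minimize_scalar(lambda a: f(x + a * d)).x
x_new = x + alpha * d

print("step length alpha:", alpha)
print("new point:", x_new)
# At the line minimum the new gradient is orthogonal to the search direction.
print("grad(x_new) . d (should be ~0):", grad_f(x_new) @ d)
```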

Steepest Descent: By the third iteration we can see that the new search vector from the point $(x_1^2, x_2^2)$ again misses the minimum, and here it seems that we could do better because we are close. Steepest descent is usually used as the first technique in a minimization procedure; however, a robust strategy that improves the choice of the new direction will greatly enhance the efficiency of the search for the minimum.
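One well-known example of such a strategy (not taken from the slides) is a nonlinear conjugate-gradient update, which mixes the previous direction into the new one instead of restarting from the raw gradient. The sketch below contrasts it with plain steepest descent on the same assumed quadratic used above; the function and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Plain steepest descent vs. a Fletcher-Reeves conjugate-gradient direction
# update, both with exact line searches, on an assumed quadratic objective.
A = np.array([[3.0, 2.0],
              [2.0, 6.0]])

def f(x):
    return 0.5 * x @ A @ x

def grad_f(x):
    return A @ x

def descend(x0, use_cg=False, tol=1e-6, max_iter=200):
    x = np.array(x0, dtype=float)
    g = grad_f(x)
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k                                    # converged in k steps
        alpha = minimize_scalar(lambda a: f(x + a * d)).x  # exact line search
        x = x + alpha * d
        g_new = grad_f(x)
        if use_cg:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d              # keep some of the old direction
        else:
            d = -g_new                         # plain steepest descent
        g = g_new
    return x, max_iter

x0 = [3.22, 1.39]
print("steepest descent  :", descend(x0))               # zig-zags, many iterations
print("conjugate gradient:", descend(x0, use_cg=True))  # far fewer iterations
```

For a quadratic in n variables, the conjugate-gradient update reaches the minimum in at most n exact line searches, whereas plain steepest descent can zig-zag between the contour walls for many more iterations.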