Math 175: Numerical Analysis II


Math 175: Numerical Analysis II
Lecturer: Jomar Fajardo Rabajante
IMSP, UPLB
2nd Semester, AY 2012-2013

SOLVING AND SOLVING AGAIN…  Solutions of Linear Equations [high-school/ Math 17] Solutions of Nonlinear Equations (finding roots of nonlinear equations) [Math 17/Math 175] Solutions of Linear Systems (Systems of Linear Equations) [high school/Math 17/Math 120/ then later in Math 175] Solutions of Nonlinear Systems (Systems of Nonlinear Equations) [Math 17/Math175]  Equations – one dimension; Systems of equations – n-dimension

SOLVING AND SOLVING AGAIN… We will DELAY the discussion of Numerical Solutions to LINEAR SYSTEMS. Your LABORATORY instructors will discuss Numerical Solutions to NONLINEAR SYSTEMS, specifically the Multivariate Newton's Method (similar to our Newton-Raphson method, but using the Jacobian matrix in place of the derivative).
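For reference, here is a minimal sketch of the Multivariate Newton's Method in Python with NumPy; the sample system F, its Jacobian J, and the starting point are illustrative assumptions, not taken from the course materials:

    import numpy as np

    def newton_system(F, J, x0, tol=1e-10, max_iter=50):
        """Multivariate Newton's method: solve F(x) = 0 using the Jacobian J."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            # Solve J(x) * delta = -F(x) instead of inverting the Jacobian.
            delta = np.linalg.solve(J(x), -F(x))
            x = x + delta
            if np.linalg.norm(delta) < tol:
                break
        return x

    # Illustrative system: x^2 + y^2 = 1 together with y = x^3
    F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[1] - v[0]**3])
    J = lambda v: np.array([[2*v[0], 2*v[1]], [-3*v[0]**2, 1.0]])
    print(newton_system(F, J, x0=[1.0, 1.0]))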

NOW, we will discuss an optional topic: NUMERICAL OPTIMIZATION (unconstrained). We will consider two methods:
- Golden Section Search, univariate (lecture class)
- Newton's Method (laboratory class), in both its univariate (Math 36) and multivariate (Math 38) forms
Of course, Exhaustive Search is still applicable but not encouraged. Another famous method is Steepest Descent.

GOLDEN SECTION SEARCH We will only consider MINIMIZATION. Why? (Maximizing f is the same as minimizing -f.) The method is similar to the bisection method: it uses a bracket, and it is linearly but globally convergent. The function should be UNIMODAL on the interval under consideration (i.e., there is only ONE dip).

GOLDEN SECTION SEARCH It uses the golden ratio, or divine proportion, φ = (1 + √5)/2 ≈ 1.618; the search constant is g = 1/φ = (√5 - 1)/2 ≈ 0.618.

Theorem: After k steps of Golden Section Search with starting interval [a, b], the midpoint of the final interval is within 0.5*(g^k)*(b - a) of the minimum.

In words: each step shrinks the bracket to g ≈ 61.8% of its previous length (cutting off about 38.2%), so after k steps the bracket has length g^k*(b - a); we then take its midpoint as the approximate minimizer.
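For concreteness, this small Python check turns the error bound into a step count; the bracket [0, 1] and the tolerance 1e-5 are illustrative assumptions:

    import math

    g = (math.sqrt(5) - 1) / 2          # 1/phi, approximately 0.618
    a, b, tol = 0.0, 1.0, 1e-5          # illustrative bracket and tolerance

    # Smallest k with 0.5 * g**k * (b - a) <= tol
    k = math.ceil(math.log(2 * tol / (b - a)) / math.log(g))
    print(k, 0.5 * g**k * (b - a))      # about 23 steps for this bracket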

GOLDEN SECTION SEARCH Algorithm: Given f unimodal with a minimum in [a, b], g = (√5 - 1)/2, and tolerance tol:

    while 0.5*(b - a) > tol        (equivalently, 0.5*(g^k)*(b0 - a0) > tol after k steps)
        if f(a + (1 - g)*(b - a)) < f(a + g*(b - a))
            b = a + g*(b - a)
        else
            a = a + (1 - g)*(b - a)
        end
    end
    approxmin = (a + b)/2
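A runnable version of the same algorithm, as a minimal sketch in Python; the test function f(x) = (x - 2)^2 and the bracket [0, 5] are illustrative assumptions:

    import math

    def golden_section_min(f, a, b, tol=1e-8):
        """Golden Section Search for the minimum of a unimodal f on [a, b]."""
        g = (math.sqrt(5) - 1) / 2      # 1/phi, approximately 0.618
        while 0.5 * (b - a) > tol:
            x1 = a + (1 - g) * (b - a)  # left interior point
            x2 = a + g * (b - a)        # right interior point
            if f(x1) < f(x2):
                b = x2                  # minimum lies in [a, x2]
            else:
                a = x1                  # minimum lies in [x1, b]
        return (a + b) / 2              # midpoint of the final bracket

    # Illustrative test: minimum of (x - 2)^2 on [0, 5] is at x = 2
    print(golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0))

Like the slide's pseudocode, this sketch re-evaluates f at both interior points on every step; a more careful implementation reuses one of the two evaluations, since one interior point carries over to the next bracket.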