2.7.6 Conjugate Gradient Method for a Sparse System (Shi & Bo)

Presentation transcript:

2.7.6 Conjugate Gradient Method for a Sparse System (Shi & Bo)

What is a sparse system? A linear system Ax = b is sparse when most entries of the matrix A are zero, so only the nonzero elements need to be stored and multiplied.
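The slide poses the question without spelling out the answer, so as an illustration of why sparsity pays off, here is a minimal sketch of a compressed-sparse-row (CSR) matrix-vector product in Python; the storage layout and names are illustrative assumptions, not taken from the slides.

```python
import numpy as np

# CSR storage keeps only the nonzero values (data), their column indices
# (indices), and pointers marking where each row begins and ends (indptr).
def csr_matvec(data, indices, indptr, x):
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

# Example: the 3x3 matrix [[4, 0, 1], [0, 3, 0], [1, 0, 2]]
data    = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
indices = np.array([0, 2, 1, 0, 2])
indptr  = np.array([0, 2, 3, 5])
print(csr_matvec(data, indices, indptr, np.ones(3)))  # -> [5. 3. 3.]
```

The cost is proportional to the number of nonzeros rather than to n^2, which is what makes iterative methods such as conjugate gradients attractive for large sparse A.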

Conjugate Gradient Method: solve Ax = b. The ordinary conjugate gradient algorithm is easiest to understand by starting from the steepest descent method.
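Since the slide only names the algorithm, here is a minimal sketch of the textbook CG iteration for a symmetric positive definite A in Python; the function name, tolerance, and iteration cap are my own choices, not taken from the slides or Numerical Recipes.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A."""
    n = len(b)
    if max_iter is None:
        max_iter = 10 * n
    x = np.zeros_like(b, dtype=float) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x            # residual = negative gradient of 1/2 x^T A x - b^T x
    p = r.copy()             # first search direction: steepest descent direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # exact line search along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs_old) * p    # make new direction A-conjugate to old ones
        rs_old = rs_new
    return x

# Small SPD example
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))   # approx [0.0909, 0.6364]
```

Each of the later slides (direction, step size, stopping test) corresponds to one line of this loop.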

Steepest descent method: minimize the quadratic f(x) = (1/2) x^T A x - b^T x by repeatedly stepping along the negative gradient, which for this quadratic is exactly the residual r = b - Ax.
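In the notation standard for this derivation (my own choice of symbols, kept consistent with the formulas below), one steepest descent step is:

```latex
% Steepest descent on f(x) = 1/2 x^T A x - b^T x
\begin{aligned}
r_k &= b - A x_k && \text{(residual = negative gradient)} \\
\alpha_k &= \frac{r_k^{\mathsf T} r_k}{r_k^{\mathsf T} A\, r_k} && \text{(exact minimizer along } r_k\text{)} \\
x_{k+1} &= x_k + \alpha_k r_k
\end{aligned}
```

Successive steps are orthogonal to each other, which is why steepest descent can zigzag slowly on ill-conditioned systems and why CG improves on it.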

Choose direction: rather than following the raw residual as steepest descent does, CG takes each new search direction to be the current residual made A-conjugate to the previous directions.
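Written out (Fletcher-Reeves form of the update; an assumption, since the slide does not show the formula):

```latex
p_0 = r_0, \qquad
p_{k+1} = r_{k+1} + \beta_k\, p_k, \qquad
\beta_k = \frac{r_{k+1}^{\mathsf T} r_{k+1}}{r_k^{\mathsf T} r_k},
\qquad \text{so that } p_i^{\mathsf T} A\, p_j = 0 \text{ for } i \neq j .
```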

Choose step size: the step length along the current direction is chosen by exact line search, i.e. it minimizes the quadratic f along that direction.

When to stop: iterate until the residual is small relative to the right-hand side; in exact arithmetic CG terminates in at most n steps for an n-by-n system, but in practice it is run until a tolerance is met.
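A common stopping test (one reasonable choice; the slide does not specify the criterion):

```latex
\frac{\lVert r_k \rVert}{\lVert b \rVert} < \varepsilon
```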

Symmetric but non-positive-definite A: plain CG assumes A is SPD; when A is symmetric but indefinite, the denominator p^T A p in the step size can vanish or change sign and the iteration can break down, so a modified method is required.
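One standard workaround (an illustrative choice on my part, not necessarily the one in the slides) is to apply CG to the normal equations, which are symmetric positive definite whenever A is nonsingular, at the price of squaring the condition number:

```latex
A^{\mathsf T} A\, x = A^{\mathsf T} b
```

Alternatives designed for this case include minimum-residual and biconjugate-gradient type methods.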

Thanks!

Reference
▫ Numerical Recipes
▫ Steepest Decent and Conjugate Gradients, w3.pppl.gov/m3d/reference/SteepestDecentandCG.ppt
