MODELING MATTER AT NANOSCALES 3. Empirical classical PES and typical procedures of optimization 3.04. Geometries from energy derivatives.

General algorithm: the line search strategy
An unavoidable step in any kind of iterative minimization, i.e. building a sequence {R_k}_k such that ∀k ∈ ℕ: E(Z′,R_{k+1}) ≤ E(Z′,R_k), is finding optimal step sizes along a given direction on the way toward a local or global minimum of the hypersurface. This is performed by a search strategy known as line search, meaning:
– Computing E(Z′,R_k) and g(R_k), given R_k, at a certain point k of the search.
– Choosing a search direction F(R_k).
– Obtaining a step size α_k > 0 that minimizes E[Z′, R_k + α_k F(R_k)], i.e. the hypersurface at a better following location, by appropriate mathematical procedures.
– Selecting the new geometry at the following step k + 1, i.e. R_{k+1} = R_k + α_k F(R_k), and repeating the cycle from k + 1 until the convergence measure falls below ε, a given local convergence threshold, at the corresponding final optimization step.
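As a minimal illustration of this loop (not part of the original slides), the following Python/NumPy sketch uses hypothetical energy, gradient, and direction callables and a crude backtracking rule to pick α_k:

```python
import numpy as np

def line_search_minimize(energy, gradient, direction, R0, eps=1e-6, max_steps=200):
    """Generic line-search loop: R_{k+1} = R_k + alpha_k * F(R_k).

    energy(R)    -> float    : E(Z', R), the PES value (hypothetical callable)
    gradient(R)  -> ndarray  : g(R_k)
    direction(g) -> ndarray  : search direction F(R_k), e.g. -g
    """
    R = np.asarray(R0, dtype=float)
    for k in range(max_steps):
        g = gradient(R)
        if np.linalg.norm(g) < eps:      # local convergence threshold
            break
        F = direction(g)                 # choose a search direction
        alpha, E0 = 1.0, energy(R)       # crude backtracking search for alpha_k > 0
        while energy(R + alpha * F) > E0 and alpha > 1e-12:
            alpha *= 0.5
        R = R + alpha * F                # new geometry R_{k+1}
    return R
```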

First order treatments

Simple gradient methods (first order): steepest descent
The method of steepest descent makes use only of gradient information on the potential energy surface to determine the search direction F, namely F(R_k) = −g(R_k). Then, each optimization step k + 1 depends on the previous one through R_{k+1} = R_k + α_k F(R_k).
The inconvenience is that the gradient at a given point is always orthogonal to the gradient at the following point of the minimization. This means a "zig-zag" approach to the minimum that wastes much computing time.
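A self-contained sketch of this scheme (illustrative only, not from the slides); on a strongly anisotropic quadratic surface the mutually orthogonal successive gradients produce the zig-zag behaviour mentioned above:

```python
import numpy as np

def steepest_descent(energy, gradient, R0, eps=1e-6, max_steps=500):
    """Steepest descent: the search direction is always F(R_k) = -g(R_k)."""
    R = np.asarray(R0, dtype=float)
    for _ in range(max_steps):
        g = gradient(R)
        if np.linalg.norm(g) < eps:
            break
        F = -g                               # negative gradient direction
        alpha, E0 = 1.0, energy(R)           # crude backtracking line search
        while energy(R + alpha * F) > E0 and alpha > 1e-12:
            alpha *= 0.5
        R = R + alpha * F                    # R_{k+1} = R_k + alpha_k F(R_k)
    return R

# Toy anisotropic quadratic "PES" (hypothetical), which exhibits the zig-zag path:
A = np.diag([1.0, 50.0])
E = lambda R: 0.5 * R @ A @ R
dE = lambda R: A @ R
R_min = steepest_descent(E, dE, [2.0, 1.0])
```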

Simple gradient methods (first order): non-linear conjugate gradient
The non-linear conjugate gradient method can be considered a correction to the method of steepest descent. It still makes use only of gradient information on the potential energy surface: the first direction is the negative gradient, but subsequent steps behave as F(R_{k+1}) = −g(R_{k+1}) + β_{k+1} F(R_k), where β_{k+1} is a matrix of real numbers.
As in the case of steepest descent, each optimization step k + 1 depends on the previous one through R_{k+1} = R_k + α_k F(R_k).

Simple gradient methods (first order): non-linear conjugate gradient
To calculate the terms of the β_{k+1} diagonal matrix, several formulas that depend on the gradients themselves are used, such as the Fletcher-Reeves or the Polak-Ribière expressions, among others (both are given in the sketch below).
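The slide's expressions were shown as images; the standard Fletcher-Reeves and Polak-Ribière formulas they refer to can be written as follows (Python/NumPy sketch, with a scalar β per step):

```python
import numpy as np

def beta_fletcher_reeves(g_new, g_old):
    """Fletcher-Reeves: beta_{k+1} = (g_{k+1} . g_{k+1}) / (g_k . g_k)."""
    return float(g_new @ g_new) / float(g_old @ g_old)

def beta_polak_ribiere(g_new, g_old):
    """Polak-Ribiere: beta_{k+1} = g_{k+1} . (g_{k+1} - g_k) / (g_k . g_k)."""
    return float(g_new @ (g_new - g_old)) / float(g_old @ g_old)
```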

Simple gradient methods (first order): non-linear conjugate gradient
The big advantage of the non-linear conjugate gradient method is that, whereas successive gradients are orthogonal in the steepest descent method, here the successive search directions become non-orthogonal.
This means a more straightforward pathway to the minimum, avoiding zig-zags.

Simple gradient methods (first order): non-linear conjugate gradient
It must be observed that non-linear conjugate gradient methods use the history of the gradient development in a certain way, which means that curvatures (Hessians) are implicitly accounted for. Advantages are:
– rather simple formulas for updating the direction vector;
– although slightly more complicated than steepest descent, it converges faster;
– it is more reliable than simpler methods.
A good optimization strategy is considered to be performing a few steepest descent steps initially, to reach a conformation near the energy minimum, and then finishing with conjugate gradient steps until convergence (a sketch of this combined strategy follows).
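A sketch of that combined strategy, under the same illustrative assumptions as the previous snippets (hypothetical energy and gradient callables; the β update shown is Polak-Ribière):

```python
import numpy as np

def sd_then_cg(energy, gradient, R0, n_sd=5, eps=1e-6, max_steps=500):
    """A few steepest-descent steps, then non-linear conjugate gradient."""
    R = np.asarray(R0, dtype=float)
    g = gradient(R)
    F = -g                                   # first direction: steepest descent
    for k in range(max_steps):
        if np.linalg.norm(g) < eps:
            break
        alpha, E0 = 1.0, energy(R)           # crude backtracking line search
        while energy(R + alpha * F) > E0 and alpha > 1e-12:
            alpha *= 0.5
        R = R + alpha * F
        g_new = gradient(R)
        if k < n_sd:
            F = -g_new                       # initial steepest-descent phase
        else:
            beta = float(g_new @ (g_new - g)) / float(g @ g)   # Polak-Ribiere
            F = -g_new + beta * F            # conjugate direction
        g = g_new
    return R
```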

Second order treatments

Gradient and Hessian path to optimize geometries
The Taylor series can be truncated if R_f is any stationary point of the system, or even a geometry in its nearest surroundings. Moreover, gradients must vanish at equilibrium or at any other stationary geometry. Then:

Gradient and Hessian path to optimize geometries
As the derivative of the energy with respect to the position of any given center at geometry R_f is the gradient of the energy at that coordinate, it can also be expanded as a Taylor series. Therefore, the full gradient matrix at the R_f geometry, in matrix notation, is:

Gradient and Hessian path to optimize geometries
When R_f is the equilibrium geometry, g(R_f) = g(R_eq) = 0 by definition, and therefore gradients and curvatures become related through the displacements q.
The Hessian matrix is non-singular when HH⁻¹ = H⁻¹H = I_n, where I_n is the identity matrix of order n. Therefore, if the Hessian is non-singular, the displacement matrix can be expressed as the product of the inverse Hessian and the gradient matrix:

Gradient and Hessian path to optimize geometries
Therefore, since q = R_eq − R, the displacements follow as q = −H⁻¹(R) g(R).
Recalling the steps of the line-search procedure, it follows that a good choice for the step, when the Hessian is non-singular, is given by the inverse Hessian (a code sketch follows).
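In code, the displacement discussed above amounts to one linear solve (an illustrative sketch; solving H q = −g is cheaper and more stable than forming H⁻¹ explicitly):

```python
import numpy as np

def newton_displacement(g, H):
    """q = R_eq - R = -H^{-1} g, valid only when the Hessian H is non-singular."""
    return np.linalg.solve(H, -g)   # solve H q = -g instead of inverting H
```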

Gradient and Hessian path to optimize geometries
This provides a way to find the equilibrium geometry of any system: an arbitrary geometry is progressively moved downhill on the hypersurface, by means of the curvature provided by the Hessian, until g(R) becomes 0.
This is a general principle for finding optimized geometries, guided by the gradient path and the curvature of the hypersurface at each point.

Gradient and Hessian methods (second order): Newton-Raphson
The Newton-Raphson scheme is based on the principle of using first derivatives of a function to find extrema and second derivatives to establish whether they are maxima or minima. Therefore, the general recipe for the iterative term giving the next step is R_{k+1} = R_k − H⁻¹(R_k) g(R_k).
Observe that, like the steepest descent method, Newton-Raphson searches along a direction based on the negative gradient.
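A minimal Newton-Raphson iteration, assuming hypothetical gradient and hessian callables:

```python
import numpy as np

def newton_raphson(gradient, hessian, R0, eps=1e-6, max_steps=100):
    """Iterate R_{k+1} = R_k - H(R_k)^{-1} g(R_k) until the gradient vanishes."""
    R = np.asarray(R0, dtype=float)
    for _ in range(max_steps):
        g = gradient(R)
        if np.linalg.norm(g) < eps:
            break
        R = R + np.linalg.solve(hessian(R), -g)   # step -H^{-1} g
    return R
```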

Gradient and Hessian methods (second order): Newton-Raphson
Newton-Raphson methods provide good output properties and fast convergence if started near the equilibrium structure (either local or global). However, they need modifications if started far away from the solution.
Another inconvenience is that the inverse Hessian is expensive to calculate.

Gradient and Hessian methods (second order): pseudo-Newton-Raphson
As it is not always possible (or as fast as desired) to compute H(R_k) at each iteration step, there are pseudo-Newton-Raphson methods that use approximate Hessian matrices B_k in place of H(R_k), the step becoming R_{k+1} = R_k − B_k⁻¹ g(R_k), while making sure that B_k is always positive definite.

Gradient and Hessian methods (second order): pseudo-Newton-Raphson
One pseudo-Newton-Raphson procedure is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme, used by default in important program packages (see the sketch below).
The evaluation of the initial B(R_1) is performed by particular formulas or by another method, such as those previously described.
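The slide's update formula was shown as an image; the standard BFGS update of the approximate Hessian B_k that it refers to reads, as a sketch:

```python
import numpy as np

def bfgs_update(B, s, y):
    """BFGS update of the approximate Hessian B_k.

    s = R_{k+1} - R_k  (displacement),  y = g_{k+1} - g_k  (gradient change);
    B stays positive definite as long as y . s > 0.
    """
    Bs = B @ s
    return (B
            + np.outer(y, y) / float(y @ s)
            - np.outer(Bs, Bs) / float(s @ Bs))
```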

Gradient and Hessian methods (second order): pseudo-Newton-Raphson
Pseudo-Newton-Raphson methods usually speed up convergence without requiring the high computational cost of building the Hessian matrix at each optimization step.
In certain cases, such as large systems with too many degrees of freedom, it can be better to perform steepest descent or conjugate gradient procedures.

Convergences
There are several convergence criteria, which are not mutually exclusive. If ε is a convenient threshold:
→ Mathematical: g(R_k) = 0 and H_k is positive definite.
→ Gradient only: ‖g(R_k)‖ ≤ ε, although the minimum is not guaranteed.
→ Both gradient and displacements: ‖g(R_k)‖ ≤ ε and ‖R_{k+1} − R_k‖ ≤ ε.
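These criteria can be combined in a small test routine (an illustrative sketch; the threshold ε and which tests to apply are choices of the particular program):

```python
import numpy as np

def converged(g, dR, H=None, eps=1e-6):
    """Convergence tests (not mutually exclusive) for a threshold eps."""
    grad_ok = np.linalg.norm(g) <= eps                              # gradient criterion
    disp_ok = np.linalg.norm(dR) <= eps                             # displacement criterion
    hess_ok = H is None or bool(np.all(np.linalg.eigvalsh(H) > 0))  # H positive definite
    return grad_ok and disp_ok and hess_ok
```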

Coordinates
In practice, all matrices are treated in the 3N-dimensional space of the x, y and z (or x_1, x_2, x_3) coordinates of each nucleus or reference center.

Coordinates
It means that the Taylor series for the energy can be expressed in terms of the X matrix in place of the R matrix:

Coordinates
It must be observed that a molecule with 2 atoms has 3N = 6 Cartesian coordinates, although only one dimension (the internuclear distance, x_6) is significant for its geometry:
→ 3 molecular dimensions (x_1, x_2, x_3) refer to the translation of the system with respect to any external point.
→ 2 molecular dimensions (x_4, x_5) refer to its rotational motion.
→ Molecules with more than 2 atoms (non-cylindrical symmetry) require an additional coordinate to deal with rotations.

Coordinates
For the expression in terms of Cartesian coordinates to be valid, the Hessian must be non-singular, i.e. HH⁻¹ = H⁻¹H = I.

Coordinates
The problem is that the 3N Hessian matrix in terms of Cartesian coordinates is singular, because translation (involving x_1, x_2, x_3) and rotation (involving x_4, x_5, x_6) of the complete system add a constant, overall kinetic-energy contribution to all terms of the Hessian; it concerns the nanoscopic system as a whole.

Coordinates
The system's geometry is governed by the potential energy among its components, which depends on the remaining (internal) coordinates taken individually.

Coordinates
Therefore, there is no potential energy gradient, nor Hessian, associated with the translational and rotational energy components of the whole system.

Coordinates
The solution is to transform the full Cartesian coordinate matrix X into an internal, or reduced, Cartesian coordinate matrix X* containing only the geometrically significant 3N – 6 terms (or 3N – 5 in the case of cylindrically symmetric systems):

Coordinates
The H(X*) Hessian matrix is non-singular, and then all previous considerations hold for geometry optimizations.
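As an illustration of why the reduced space has 3N – 6 (or 3N – 5) dimensions, one can diagonalize a Cartesian Hessian taken at a stationary point and discard its (near-)zero translational and rotational eigenvalues. This is only a sketch, not the coordinate transformation actually used by the programs:

```python
import numpy as np

def internal_modes(H, tol=1e-6):
    """Keep only the geometrically significant modes of a 3N x 3N Cartesian Hessian.

    At a stationary point, 6 eigenvalues (5 for linear molecules) are ~0 and
    correspond to overall translation and rotation; the remaining 3N - 6
    (or 3N - 5) eigenpairs span the reduced space X*.
    """
    w, V = np.linalg.eigh(H)        # eigenvalues and eigenvectors of the Hessian
    keep = np.abs(w) > tol          # discard near-zero modes
    return w[keep], V[:, keep]
```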

Finding gradients and Hessians
There are two main ways to compute gradients and Hessians:
– Analytically, meaning the evaluation of formulas for the total energy derivatives obtained from those used for the hypersurface function.
– Numerically, using series to evaluate points and their corresponding inter- and extrapolations (a finite-difference sketch follows).
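A sketch of the numerical route, using central finite differences of the energy (the step h and the energy callable are illustrative assumptions):

```python
import numpy as np

def numerical_gradient(energy, R, h=1e-4):
    """Central finite-difference estimate of g(R) from energies only."""
    R = np.asarray(R, dtype=float)
    g = np.zeros_like(R)
    for i in range(R.size):
        e = np.zeros_like(R)
        e[i] = h
        g[i] = (energy(R + e) - energy(R - e)) / (2.0 * h)   # (E(R+h) - E(R-h)) / 2h
    return g
```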

Finding gradients and Hessians: numerical approaches
A good algorithm to use when only the energy is known and analytical gradients cannot be obtained is that of Fletcher and Powell (FP). It builds up an internal list of gradients by keeping track of the energy changes from one step to the next. The Fletcher-Powell algorithm is usually the method of choice when energy gradients cannot be computed.
1. Fletcher, R., A new approach to variable metric algorithms. The Computer Journal 1970, 13 (3).
2. Fletcher, R.; Powell, M. J. D., A Rapidly Convergent Descent Method for Minimization. The Computer Journal 1963, 6 (2).

Finding gradients and Hessians: numerical approaches
Some of the most efficient algorithms are the so-called quasi-Newton algorithms, which assume a quadratic potential surface near the minima.
The Berny algorithm internally builds up a second-derivative Hessian matrix.

Finding gradients and Hessians: numeric and analytic approaches
Another good algorithm is the geometric direct inversion of the iterative subspace (GDIIS) algorithm.
Molecular mechanics programs often use:

Finding gradients and Hessians: numeric and analytic approaches
The general DIIS procedure used in GAMESS, which includes both numerical and analytical derivatives, is described in the series:
1. Hamilton, T. P.; Pulay, P., Direct inversion in the iterative subspace (DIIS) optimization of open-shell, excited-state, and small multiconfiguration SCF wave functions. J. Chem. Phys. 1986, 84 (10).
2. Pulay, P., Improved SCF convergence acceleration. J. Comput. Chem. 1982, 3 (4).
3. Pulay, P., Convergence acceleration of iterative sequences. The case of SCF iteration. Chem. Phys. Lett. 1980, 73 (2).

Finding gradients and Hessians: refinements near minima and transition states
The Newton-Raphson approach can be expressed, as before, in terms of an internal or reduced Cartesian coordinate matrix at a stationary point. For the sake of simplicity it can be written as q = −H⁻¹ g.

Finding gradients and Hessians: refinements near minima and transition states
It is useful to perform a linear transformation of the variables that diagonalizes the Hessian, H = U h Uᵀ with h diagonal. The corresponding displacements then become q′ = Uᵀ q, and the gradients g′ = Uᵀ g.

Finding gradients and Hessians: refinements near minima and transition states
The h_i, q′_i and g′_i variables are called the local principal modes, or axes, and the Newton-Raphson step along each principal axis is then q′_i = −g′_i / h_i.
Then, for stationary points:
– h_i > 0 for all i at minima,
– h_i < 0 for all i at maxima,
– h_i < 0 for exactly n of the modes at an n-th order saddle point.

Finding gradients and Hessians: refinements near minima and transition states
This leads to stepping procedures of the form q′_i = −g′_i / (h_i − λ), where λ is an appropriately chosen shift parameter.
Depending upon the value of λ, the sign of each (h_i − λ) will be positive or negative, and hence the direction of the step q′_i will be opposite to, or toward, the direction of the gradient (a code sketch follows).
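A sketch of such a shifted step along the principal modes (it assumes a symmetric reduced Hessian and a shift λ chosen away from any eigenvalue h_i):

```python
import numpy as np

def eigenvector_following_step(H, g, lam=0.0):
    """Step along the local principal modes: q'_i = -g'_i / (h_i - lam)."""
    h, U = np.linalg.eigh(H)          # local principal modes h_i and axes U
    g_prime = U.T @ g                 # gradient in the principal-axis basis
    q_prime = -g_prime / (h - lam)    # shifted Newton-Raphson step per mode
    return U @ q_prime                # displacement back in the original coordinates
```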

Finding gradients and Hessians: refinements near minima and transition states
All these expressions facilitate the use of algebra to determine the best λ. These approaches are named eigenvector following procedures.
They turn out to be excellent for finding accurate minima and saddle points.
1. Banerjee, A.; Adams, N.; Simons, J.; Shepard, R., Search for stationary points on surfaces. J. Phys. Chem. 1985, 89 (1).
2. Simons, J.; Joergensen, P.; Taylor, H.; Ozment, J., Walking on potential energy surfaces. J. Phys. Chem. 1983, 87 (15).
3. Cerjan, C. J.; Miller, W. H., On finding transition states. J. Chem. Phys. 1981, 75 (6).

Summary
An interesting paper describing how all of this works in recent programs is:

Comments
The QM/MM challenge is also being addressed:

Comments
This is still an active field of research:

References
A comprehensive illustration: Cramer, C. J., Essentials of Computational Chemistry: Theories and Models, 2nd ed.; John Wiley & Sons: Chichester, 2004; p 596.
An appendix in: Szabo, A.; Ostlund, N. S., Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory, 1st ed., revised; McGraw-Hill: New York, 1989; p 466.