Introduction to Numerical Methods
Andreas Gürtler, Hans Maassen (tel.: 52991)


Plan for today
– Introduction
– Errors
– Representation of real numbers in a computer
– When can we solve a problem?
– A first numerical method: the bisection method for root finding (solving f(x) = 0)
– Practical part: implementation of the bisection method

What are numerical methods for?
– Many problems can be formulated in simple mathematical equations. This does not mean that they are solved easily!
– For any application you need numbers ⇒ numerical methods! They are needed for the most basic things: exp(3), sqrt(7), sin(42°), log(5), ...
– Often, modeling and numerical calculations can help in design, construction, and safety.
– Note: many everyday problems are so complicated that they cannot be solved yet ⇒ efficiency is crucial.

Numerics is about numbers
– Numerical methods: numerical approximation of solutions to understood problems.
– The numerical representation of real numbers has far-reaching consequences.
– Two main objectives:
  – quantify errors: an approximation without an error estimate is useless
  – increase efficiency: solutions that take years, or need more resources than you have, are useless
– Nowadays, many fields depend on numerics.

Errors

Numerical errors
– Roundoff error: finite-precision numerical calculations are almost always approximations.
– Truncation error: a calculation has to stop. Examples:
  – approximation (e.g. a finite Taylor series)
  – discretization
– Modeling error.
It is crucial to know when to stop (i.e. when a calculation has converged!). To check this, change the parameters (e.g. step size, number of basis states) and check the result.

Truncation errors
– Truncation errors are problem specific.
– Often, every step involves an approximation, e.g. a finite Taylor series.
– The truncation errors accumulate.
– Often, truncation errors can be calculated.
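A small sketch of this idea (Python rather than Matlab, purely for illustration): truncating the Taylor series of exp(x) after a finite number of terms leaves a truncation error that shrinks as more terms are kept.

```python
import math

def exp_taylor(x, n_terms):
    # Truncated Taylor series for exp(x): sum of x**k / k! for k = 0 .. n_terms-1.
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
for n in (2, 4, 8, 16):
    # The truncation error decreases as more terms are included.
    print(n, abs(exp_taylor(x, n) - math.exp(x)))
```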

Roundoff errors
Example: the logistic map x_{i+1} = r·x_i·(1−x_i), with x_0 = 0.7 and r = 4, in single and double precision.
The precision of the representation of numbers is finite, and the errors accumulate.
A real number x can be represented as the floating point number fl(x) = x·(1+ε):
– |fl(x) − x| = δx: absolute error (often also Δx)
– |fl(x) − x| / |x| = ε: relative error
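The single-versus-double experiment from this slide can be reproduced in a Python sketch (not the original Matlab demo), emulating single precision by rounding through a 32-bit float after every step. Since the map is chaotic for r = 4, the tiny initial rounding difference grows until the two trajectories are completely different.

```python
import struct

def to_single(x):
    # Round a double to the nearest IEEE 754 single-precision value.
    return struct.unpack('f', struct.pack('f', x))[0]

r = 4.0
x_double = 0.7
x_single = to_single(0.7)
for i in range(40):
    # Logistic map x_{i+1} = r * x_i * (1 - x_i), chaotic for r = 4.
    x_double = r * x_double * (1.0 - x_double)
    x_single = to_single(r * x_single * (1.0 - x_single))

# After 40 steps the two precisions no longer agree at all.
print(abs(x_double - x_single))
```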

Computer representation of numbers

back to Matlab: data types
– Real numbers: floating point representation
  – double (64 bit): the standard!
  – single (32 bit): less precision, half the memory
– Integers: signed and unsigned (8, 16, 32 bit), e.g. a=uint8(255), b=int16(32767), c=int32( ). Integers use less space, and calculations are precise!
– Complex: w=2+3i; v=complex(x,y); x and y can be matrices!
– Boolean: true (=1) or false (=0)
– Strings: s='Hello World'
– Special values: +inf, -inf, NaN (Infinity, Not-a-Number); check with isinf(x), isnan(x)

floating point standard IEEE 754
– Normalized: N = 1.f · 2^p
– Denormalized: N = 0.f · 2^p
Bit use (double):
  sign          bit 63
  exponent p    bits 62 – 52
  significand f bits 51 – 0
Limits of double precision (64 bit) FP numbers:
– realmax: largest real number
– realmin: smallest real number
– eps: accuracy
Strictly speaking, FP numbers are not associative and not distributive, but they are to a very good approximation if used reasonably. Mathematically, floating point numbers are not a field! Many fundamental theorems of analysis and algebra have to be used with care.
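A quick sketch of these limits in Python (sys.float_info plays the role of Matlab's realmax, realmin and eps), including the non-associativity of floating point addition:

```python
import sys

print(sys.float_info.max)      # analogue of Matlab's realmax, about 1.8e308
print(sys.float_info.min)      # analogue of realmin (smallest normalized), about 2.2e-308
print(sys.float_info.epsilon)  # analogue of eps: 2**-52, about 2.2e-16

# Floating point addition is not associative:
left = (1.0 + 1e20) - 1e20     # the 1.0 is lost when rounding 1.0 + 1e20
right = 1.0 + (1e20 - 1e20)    # here 1e20 - 1e20 is exactly 0
print(left, right)
```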

Choose your unit system
– Keep numbers in a similar order of magnitude (choose natural units), e.g. atomic units.
– Hydrogen energy levels: E_n = −1/(2n²) rather than −2.18E-18/n² J.
– Note: the calculation of the Bohr radius a_0 = 4πε_0·ħ²/(m_e·e²) ≈ 5.3E-11 m in SI units exceeds the range of single-precision FP (intermediate results underflow).

                 a.u.   SI
  distance a_0   1      5.3E-11 m
  velocity c     1/α    3E8 m/s
  energy E_h     1      4.36E-18 J
  mass m_e       1      9.1E-31 kg
  charge e       1      1.6E-19 C

floating point caveats
– Not all real numbers can be represented as FP numbers.
– Floating point calculations are not precise! sin(pi) is not 0; 1E20+1-1E20 is not 1.
– Never compare two floats directly (a==b). Try 100*(1/3)/100 == 1/3: FALSE!!! Use abs(a-b)<eps instead (eps=2.2E-16).
– Be careful with mixing integer and FP: with i=int32(1), i/2 produces 1, as i/2 stays integer! i/3 produces 0! This is much more dangerous in C and FORTRAN etc. Solution: explicit type conversion; double(i)/2 produces 0.5.
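The same caveat holds in any language. A Python sketch of comparing floats with a tolerance instead of == (the helper nearly_equal is a made-up illustration; math.isclose is the standard-library way):

```python
import math

x = 0.1 + 0.2
print(x == 0.3)   # False: both sides carry representation error

def nearly_equal(a, b, tol=1e-12):
    # Compare with a tolerance scaled to the magnitudes, never with ==.
    return abs(a - b) <= tol * max(1.0, abs(a), abs(b))

print(nearly_equal(x, 0.3))
print(math.isclose(x, 0.3))
```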

What can we solve numerically?

What can we solve?
Suppose we want to evaluate f(x) with a perfect algorithm, but we only have the FP number x+δx with error δx. Then (if f is differentiable):
  δf(x) = f(x+δx) − f(x) ≈ f'(x)·δx
Relative error:
  |δf/f| ≈ |x·f'(x)/f(x)| · |δx/x|
Definition: the condition number κ = |x·f'(x)/f(x)|
– κ >> 1: the problem is ill-conditioned
– κ small: the problem is well-conditioned
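The condition number κ = |x·f'(x)/f(x)| can be estimated numerically. A Python sketch (the test function x − 1 is a made-up illustration, chosen because it is ill-conditioned near its root):

```python
def condition_number(f, x, h=1e-7):
    # kappa = |x * f'(x) / f(x)|, with f' estimated by a central difference.
    fprime = (f(x + h) - f(x - h)) / (2.0 * h)
    return abs(x * fprime / f(x))

f = lambda x: x - 1.0
print(condition_number(f, 2.0))       # modest: well-conditioned here
print(condition_number(f, 1.000001))  # huge: cancellation, ill-conditioned
```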

well- and ill-conditioned methods
Let's try a simple calculation: 99 − 70·√2 (≈ 0.00505). Suppose we have 1.4 as an approximation for √2. We have two mathematically equivalent methods:
  f_1: 99 − 70·√2, giving f_1(1.4) = 1
  f_2: 1/(99 + 70·√2), giving f_2(1.4) ≈ 0.00508
Condition numbers:
  f_1(x) = 99 − 70x: κ(√2) ≈ 2·10^4
  f_2(x) = 1/(99 + 70x): κ(√2) ≈ 0.5
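This experiment is easy to reproduce; a Python sketch:

```python
import math

true_value = 99.0 - 70.0 * math.sqrt(2.0)   # about 0.005051

approx = 1.4                        # crude approximation of sqrt(2)
f1 = 99.0 - 70.0 * approx           # ill-conditioned form: catastrophic cancellation
f2 = 1.0 / (99.0 + 70.0 * approx)   # mathematically equivalent, well-conditioned form

# f1 is off by a factor of roughly 200; f2 is within about half a percent.
print(true_value, f1, f2)
```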

what happened?
  f_1: 99 − 70·√2    f_2: 1/(99 + 70·√2)
Condition numbers of subtraction and addition:
  f(x) = x − a: κ = |x/(x−a)|, ill-conditioned for x−a ≈ 0
  f(x) = x + a: κ = |x/(x+a)|, ill-conditioned for x+a ≈ 0
Condition numbers of multiplication and division:
  f(x) = ax: κ = |x·a/(ax)| = 1
  f(x) = 1/x: κ = |x·x^(−2)/x^(−1)| = 1
both well-conditioned.

numerical disasters
– A Patriot system was hit by a SCUD missile:
  – the position is predicted from time and velocity
  – the system uptime in tenths of a second was converted to seconds using 24-bit precision (by multiplying with 1/10)
  – 1/10 has a non-terminating binary expansion
  – after 100 h, the error had accumulated to 0.34 s
  – the SCUD travels 1600 m/s, so it travels >500 m in this time
– Ariane 5:
  – a 64-bit FP number containing the horizontal velocity was converted to a 16-bit signed integer
  – a range overflow followed

Root finding: a standard numerical problem
A common task is to solve an equation f(x) = 0, i.e. to find a zero-crossing (root) of f(x). Pitfalls: f(x) might have
– no roots
– extrema
– many roots
– singularities
– roots which are not FP numbers
– roots close to 0, or at very large x
There are different methods, with different advantages and disadvantages.

Root finding: the bisection method
The bisection method finds a root in [a,b] if f ∈ C[a,b] and f(a)·f(b) < 0.
Wanted: x_0 with f(x_0) = 0. (There is a root because of the intermediate value theorem.)
Bisection is slow, but always works!

Where does bisection converge to?
– If the function changes sign around a root, bisection converges to the root.
– If the function does not change sign, bisection cannot find the root!
– If the function changes sign around a singularity, bisection converges to the singularity!
Important: the stop criterion. It is problematic for singularities, small f'(x), large a and b, and roots p ≈ 0.

convergence
Bisection always converges, but it may be slow. If f ∈ C[a,b] and f(a)·f(b) < 0, then {p_n} converges to a root p of f, and |p_n − p| ≤ (b−a)/2^n.
Proof: for all n ≥ 1, b_n − a_n = (b−a)/2^(n−1) and p ∈ (a_n, b_n).
The convergence is linear.
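The bound |p_n − p| ≤ (b−a)/2^n can be checked numerically. A Python sketch using the exercise function x³ − ln(1+x), comparing an early iterate against a deeply converged reference:

```python
import math

def bisect(f, a, b, n_steps):
    # Plain bisection: keep the half-interval on which f changes sign.
    assert f(a) * f(b) < 0
    for _ in range(n_steps):
        p = (a + b) / 2.0
        if f(a) * f(p) < 0:
            b = p
        else:
            a = p
    return (a + b) / 2.0

f = lambda x: x**3 - math.log(1.0 + x)
a0, b0 = 0.1, 2.0
p20 = bisect(f, a0, b0, 20)
p50 = bisect(f, a0, b0, 50)   # effectively converged reference value

# The n = 20 error respects the bound (b0 - a0) / 2**20.
print(p50, abs(p20 - p50))
```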

order of convergence
Let {p_n} converge to p. If positive constants λ and α exist with
  lim_{n→∞} |p_{n+1} − p| / |p_n − p|^α = λ,
then p_n converges with order α.
– α = 1: linear convergence
– α = 2: quadratic convergence
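The order α can be estimated from three successive errors: if e_{n+1} ≈ λ·e_n^α, then α ≈ log(e_2/e_1) / log(e_1/e_0). A Python sketch with synthetic error sequences (the sequences are made-up illustrations):

```python
import math

def order_estimate(e0, e1, e2):
    # alpha ~ log(e2/e1) / log(e1/e0) for successive errors e_n = |p_n - p|.
    return math.log(e2 / e1) / math.log(e1 / e0)

# Linear convergence, e_{n+1} = e_n / 2 (bisection-like):
print(order_estimate(2.0**-10, 2.0**-11, 2.0**-12))
# Quadratic convergence, e_{n+1} = e_n**2 (Newton-like):
print(order_estimate(1e-2, 1e-4, 1e-8))
```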

Implementation of the bisection method
f(a)·f(b) < 0 ⇒ there is a p ∈ (a,b) with f(p) = 0.
Bisection in words:
1. choose an interval (a,b) around the root
2. calculate the center point p = (a+b)/2
3. if p approximates the root well enough, STOP
4. if the root is in (a,p), set b = p
5. if the root is in (p,b), set a = p
6. go to step 2
In pseudocode:
  initialize a, b
  p = (a+b)/2
  while abs(b-a) > ε do
    if f(p)*f(a) < 0: b = p
    if f(p)*f(b) < 0: a = p
    p = (a+b)/2
  repeat

Exercise
Implement the bisection method in Matlab and use it to solve x^3 = ln(1+x) (x > 0).

solve x^3 = ln(1+x) (x > 0) using bisection

% bisection.m
a = 0.1; b = 2;
p = (a+b)/2;
epsilon = 1E-10;
n = 1;
while abs(b-a) > epsilon
    if (f(p)*f(a) < 0)
        b = p;
    end
    if (f(p)*f(b) < 0)
        a = p;
    end
    p = (a+b)/2;
    n = n+1;
end
fprintf('Root: %f, Steps: %d.', p, n);

function ret = f(x)
% function of which the root should be found
ret = x^3 - log(1+x);
end