CIS 541 – Numerical Methods


1 CIS 541 – Numerical Methods
Mathematical Preliminaries

2 Derivatives Recall the limit definition of the first derivative.
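For reference, the standard limit definition the slide refers to is:
f'(x) = lim_{h->0} ( f(x+h) - f(x) ) / h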

3 Partial Derivatives Same as derivatives, but hold every other variable constant.
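For a function of two variables, the same limit is taken with the other variable held fixed; for example:
fx(x,y) = lim_{h->0} ( f(x+h, y) - f(x, y) ) / h   (and similarly fy, holding x fixed)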

4 Tangents and Gradients
Recall that the slope of a curve (defined as a 1D function) at any point x is the first derivative of the function at that point. That is, the linear approximation to the curve in the neighborhood of t is l(x) = b + f’(t)x. [Figure: the tangent line to f(x) at t, with direction vector (1, f’(t)).]

5 Tangents and Gradients
We also want this linear approximation to intersect the curve at the point t: l(t) = f(t) = b + f’(t)t, or b = f(t) - f’(t)t. Substituting b back in gives l(x) = f(t) + f’(t)(x - t). We say that the line l(x) interpolates the curve f(x) at the point t.

6 Functions as curves We can think of the curve shown in the previous slide as the set of all points (x, f(x)). The tangent vector at any point along the curve is then (1, f’(x)).

7 Side note on Curves There are other ways to represent curves besides explicitly. Functions are a subset of curves: (x, y(x)). Parametric equations represent the curve by the distance walked along it: (x(t), y(t)); for example, a circle is (cos t, sin t). Implicit representations define a contour or level-set of a function: f(x,y) = c.

8 Tangent Planes and Gradients
In higher dimensions we have the same thing. A surface is a 2D function embedded in 3D: Surface = (x, y, f(x,y)). A volume or hyper-surface is a 3D function embedded in 4D: Volume = (x, y, z, f(x,y,z)).

9 Tangent Planes and Gradients
The linear approximation to the higher-dimensional function at a point (s,t) has the form ax+by+cz+d=0, or z(x,y) = … What is this plane?

10 Construction of Tangent Planes
Images courtesy of TJ Murphy:

11 Construction of Tangent Planes

12 Construction of Tangent Planes

13 Tangent Planes and Gradients
The formula for the plane is rather simple. At the point itself, z(s,t) = f(s,t), so the plane interpolates the surface. Moving a small step dx in x, z(s+dx, t) = f(s,t) + fx(s,t)dx = b + a dx, which is linear in dx. Of course, the plane does not stay close to the surface as you move away from the point (s,t).
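Combining the x and y steps gives the full tangent-plane formula at (s,t) (a standard result; the slide text only shows the x direction):
z(x,y) = f(s,t) + fx(s,t)(x - s) + fy(s,t)(y - t)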

14 Tangent Planes and Gradients
The normal to the plane is thus: The 2D vector: is called the gradient of the function. It represents the direction of maximal change.
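Written out for the surface z = f(x,y), the standard forms are: a normal to the tangent plane is (up to sign and scale)
n = ( fx(s,t), fy(s,t), -1 )
and the 2D vector is the gradient,
grad f = ( fx, fy ) = ( df/dx, df/dy ).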

15 Gradients The negative gradient thus indicates the direction to walk to get down the hill the fastest (the gradient itself points uphill). The gradient is also used in graphics to determine illumination.
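A minimal sketch of how the gradient feeds illumination, assuming a height field z = f(x,y), a unit light direction (lx, ly, lz), and simple Lambertian (diffuse) shading; the function name and argument layout are illustrative, not from the slides:

#include <math.h>

/* Diffuse shade of the surface z = f(x,y) at a point where the partial
   derivatives are fx and fy.  The (unnormalized) upward normal is (-fx, -fy, 1). */
float diffuse_shade(float fx, float fy, float lx, float ly, float lz)
{
    float len = sqrtf(fx*fx + fy*fy + 1.0f);
    float nx = -fx / len, ny = -fy / len, nz = 1.0f / len;
    float d = nx*lx + ny*ly + nz*lz;   /* Lambert's cosine term */
    return (d > 0.0f) ? d : 0.0f;      /* clamp back-facing points to zero */
}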

16 Review of Functions Extrema of a differentiable function occur where f’(x)=0.
The second derivative determines whether the point is a minimum or maximum. The second derivative also gives us an indication of the curvature of the curve. That is, how fast it is oscillating or turning.

17 The Class of Polynomials
Specific functions of the form p(x) = a0 + a1x + a2x^2 + … + anx^n.

18 The Class of Polynomials
For many polynomials, most of the coefficients are zero. For example: p(x) = 3 + x^2 + 5x^3

19 Taylor’s Series For a function, f(x), about a point c.
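Written out, the series expanded about c is:
f(x) = f(c) + f’(c)(x-c) + f’’(c)(x-c)^2/2! + f’’’(c)(x-c)^3/3! + …
     = sum over k >= 0 of f^(k)(c) (x-c)^k / k!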
I.e., an (infinite) polynomial in (x - c).

20 Taylor’s Theorem Taylor’s Theorem allows us to truncate this infinite series:
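The standard statement: keep the first n+1 terms and collect everything else into an error term involving the (n+1)-st derivative at some point xi between c and x:
f(x) = sum_{k=0}^{n} f^(k)(c) (x-c)^k / k!  +  E_n(x),
where E_n(x) = f^(n+1)(xi) (x-c)^(n+1) / (n+1)!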

21 Taylor’s Theorem Some things to note:
(x-c)^(n+1) quickly approaches zero if |x-c| << 1. (x-c)^(n+1) increases quickly if |x-c| >> 1. Higher-order derivatives may get smaller (for smooth functions).

22 Higher Derivatives What is the 100th derivative of sin(x)?
Compare 3^100 to 100! What is the 100th derivative of sin(1000x)? (The derivatives of sin(x) cycle among ±sin and ±cos and stay bounded by 1, while each derivative of sin(1000x) pulls out a factor of 1000, so its nth derivative grows like 1000^n.)

23 Taylor’s Theorem Hence, for points near c we can just drop the error term and we have a good polynomial approximation to the function (again, for points near c). Consider the case where (x-c) = 0.5. For n = 4, this leads to an error term around 2.6*10^-4 times f^(5)(xi). Do this for other values of n. Do this for the case (x-c) = 0.1.
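The arithmetic behind that estimate, using the error term from Taylor’s Theorem:
|E_4| = |f^(5)(xi)| * (0.5)^5 / 5! = |f^(5)(xi)| * 0.03125 / 120 ≈ 2.6*10^-4 * |f^(5)(xi)|
For (x-c) = 0.1 the same bound is (0.1)^5 / 5! ≈ 8.3*10^-8 times |f^(5)(xi)|.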

24 Some Common Derivatives

25 Some Resulting Series About c=0
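The standard series about c = 0 for the functions used in these slides are listed below (the sin series is the one my_sin() truncates later); exactly which of them appeared on this particular slide is an assumption:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! + …
cos(x) = 1 - x^2/2! + x^4/4! - x^6/6! + …
e^x    = 1 + x + x^2/2! + x^3/3! + …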

26 Some Resulting Series About c=0

27 Book’s Introduction Example
Eight terms of the first series do not yield even a single significant digit; only four terms of the second series give four significant digits.

28 Mean-Value Theorem Special case of Taylor’s Theorem, where n=0, x=b.
Assumes f(x) is continuous on [a,b] and its first derivative exists everywhere within (a,b).
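Written out, the theorem says that for some point xi in (a,b):
f(b) = f(a) + f’(xi)(b - a),   i.e.   f’(xi) = ( f(b) - f(a) ) / ( b - a )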

29 Mean-Value Theorem So what!?!?! What does this mean?
The function cannot move away from its current value faster than its derivative allows. [Figure: f(x) on (a,b) with the secant line from a to b.]

30 Rolle’s Theorem If a and b are roots (f(a)=f(b)=0) of a continuous, differentiable function f(x), which is not everywhere equal to zero, then f’(t)=0 for some point t in (a,b). I.e., what goes up must come down. [Figure: f(x) rising and falling between a and b, with a horizontal tangent f’(t)=0 at t.]

31 Caveat For Taylor’s Series and Taylor’s Theorem to hold, the function and its derivatives must exist over the interval where you apply them. That is, the function must not go to infinity or have a discontinuity (which would imply f’(x) does not exist), …

32 Implementing a Fast sin()
const int Max_Iters = 100000000;    // 100,000,000 iterations
float x = -0.1f;
float delta = 0.2f / Max_Iters;
float Riemann_sum = 0.0f;
for (int i = 0; i < Max_Iters; i++) {
    Riemann_sum += sinf(x);         // needs <math.h> for sinf
    x += delta;
}
printf("Integral of sin(x) from -0.1 to 0.1 equals: %f\n",   // needs <stdio.h>
       Riemann_sum * delta);

33 Implementing a Fast sin()
const int Max_Iters = 100000000;
float x = -0.1f;
float delta = 0.2f / Max_Iters;
float Riemann_sum = 0.0f;
for (int i = 0; i < Max_Iters; i++) {
    Riemann_sum += my_sin(x);       // my own sine func
    x += delta;
}
printf("Integral of sin(x) from -0.1 to 0.1 equals: %f\n",
       Riemann_sum * delta);

34 Version 1.0
float my_sin( const float x )
{
    // Degree-5 Taylor polynomial for sin about 0: x - x^3/3! + x^5/5!
    float x2 = x*x;
    float x3 = x*x2;
    return ( x - x3/6.0 + x2*x3/120.0 );
}

35 Version 2.0 – Horner’s Rule
static const float fac3inv = 1.0f / 6.0f;     // 1/3!
static const float fac5inv = 1.0f / 120.0f;   // 1/5!

float my_sin( const float x )
{
    // Same polynomial as Version 1.0, evaluated in Horner (nested) form
    float x2 = x*x;
    return x*(1.0f - x2*(fac3inv - x2*fac5inv));
}
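Horner’s rule in general: a degree-n polynomial needs only n multiplies and n adds when evaluated from the inside out. A small sketch (the function name and coefficient layout are illustrative, not from the slides):

// Evaluate p(x) = c[0] + c[1]*x + ... + c[n]*x^n in Horner form.
float horner(const float c[], int n, float x)
{
    float result = c[n];
    for (int i = n - 1; i >= 0; i--)
        result = result*x + c[i];
    return result;
}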

36 Version 3.0 – Inline code
const int Max_Iters = 100000000;
float x = -0.1f;
float delta = 0.2f / Max_Iters;
float Riemann_sum = 0.0f;
for (int i = 0; i < Max_Iters; i++) {
    float x2 = x*x;
    // polynomial inlined into the loop body, no function call
    Riemann_sum += x*(1.0f - x2*(fac3inv - x2*fac5inv));
    x += delta;
}
printf("Integral of sin(x) from -0.1 to 0.1 equals: %f\n",
       Riemann_sum * delta);

37 Max( |sin(x)-my_sin(x)| )
Timings on a Pentium III, 600 MHz machine:

                Time (seconds)    Max( |sin(x) - my_sin(x)| )
  Using sinf          27                    -
  Version 1.0         20                *10^-11
  Version 2.0         13             8.0495*10^-12
  Version 3.0          2                    -

38 Observations Is the result correct?
Why did we gain some accuracy with Version 2.0? Is (-0.1, 0.1) a fair range to consider? Is the original sinf() function optimized? How did we achieve our speed-ups? We will re-examine this after Lab 1. Ask these questions for Lab 1!

39 Homework Read Chapters 1 and 2 for next class.
Start working on Lab 1 and Homework 1.

