Zeroing in on the Implicit Function Theorem Real Analysis II Spring, 2007.

Recall

In Summary

Solving a system of m equations in n unknowns is equivalent to finding the "zeros" of a vector-valued function F: ℝ^n → ℝ^m. When n > m, such a system will "typically" have infinitely many solutions. In "nice" cases, the solution set will be the graph of a function from ℝ^(n-m) to ℝ^m. The Implicit Function Theorem will tell us what is meant by the word "typical."

More reminders

In general, the 0-level curves are not the graph of a function, but portions of them may be. Though we cannot hope to solve a system "globally," we can often find a solution function in the neighborhood of a single known solution. "Find" is perhaps an overstatement: the Implicit Function Theorem is an existence theorem, and we aren't actually "finding" anything.

Consider the contour line f(x, y) = 0 in the xy-plane. Idea: at least in small regions, this curve might be described by a function y = g(x). Our goal: find such a function!

Start with a point (a, b) on the contour line, where the contour is not vertical. In a small box around (a, b), we can hope to find y = g(x). Make sure all of the y-partials in this box are close to D = f_y(a, b).

How to construct g(x)

For each x near a, define the map φ_x(y) = y − f(x, y) / f_y(a, b). Iterate φ_x(y) to find its fixed point, which is exactly where f(x, y) = 0. Let the fixed point be g(x).

Non-trivial issues:
Make sure the iterated map converges. (Quasi-Newton's methods!)
Make sure you get the right fixed point. (Don't leave the box!)
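The construction on this slide can be sketched numerically. A minimal Python sketch of the scalar case, using f(x, y) = x^2 + y^2 − 1 near the known solution (a, b) = (0, 1) as an illustrative example (the function, point, and names are not from the slides):

```python
# Illustrative sketch: find g(x) near a known solution (a, b) of f(x, y) = 0
# by iterating phi_x(y) = y - f(x, y) / f_y(a, b), a quasi-Newton step with
# the y-partial frozen at the known solution.

def solve_implicit(f, f_y_ab, x, y0, tol=1e-12, max_iter=200):
    """Iterate phi_x starting from y0; the fixed point satisfies f(x, y) = 0."""
    y = y0
    for _ in range(max_iter):
        y_next = y - f(x, y) / f_y_ab
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("iteration did not converge (left the box?)")

# Example: f(x, y) = x^2 + y^2 - 1 near (a, b) = (0, 1), where f_y(a, b) = 2.
# The fixed point should agree with g(x) = sqrt(1 - x^2).
f = lambda x, y: x**2 + y**2 - 1
g_of_x = solve_implicit(f, f_y_ab=2.0, x=0.1, y0=1.0)
```

Freezing the derivative at (a, b) is what makes the "y-partials close to D" condition on the box matter: it is exactly what keeps the iteration a contraction.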

A bit of notation

To simplify things, we will write our vector-valued function F: ℝ^(n+m) → ℝ^n. We will write our "input" variables as concatenations of n-vectors and m-vectors, e.g. (y, x) = (y_1, y_2, ..., y_n, x_1, x_2, ..., x_m). So when we solve F(y, x) = 0, we will be solving for the y-variables in terms of the x-variables.

Implicit Function Theorem (in brief)

Suppose F: ℝ^(n+m) → ℝ^n has continuous partials. Suppose b ∈ ℝ^n and a ∈ ℝ^m with F(b, a) = 0, and that the n x n matrix of y-partials of F at (b, a) (call it D) is invertible. Then "near" a there exists a unique function g(x) with g(a) = b such that F(g(x), x) = 0; moreover, g is continuous.
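As a hedged numeric illustration of the theorem's setup (the particular F below, with n = 2 and m = 1, is invented for the example), one can check that D is invertible and then recover g near a by iterating the quasi-Newton-style map from the construction slide, now in vector form:

```python
import numpy as np

# Made-up F : R^(2+1) -> R^2 for illustration:
#   F1(y, x) = y1^2 + y2 + x  - 3
#   F2(y, x) = y1 + y2^2 + x^2 - 3
# A known solution is (b, a) = ((1, 1), 1), where the matrix of y-partials is
#   D = [[2, 1], [1, 2]],  det D = 3 != 0  (the theorem's hypothesis holds).
def F(y, x):
    y1, y2 = y
    return np.array([y1**2 + y2 + x - 3.0,
                     y1 + y2**2 + x**2 - 3.0])

D = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # invertible matrix of y-partials at (b, a)

def g(x, y0=np.array([1.0, 1.0]), tol=1e-12, max_iter=200):
    """Iterate y <- y - D^{-1} F(y, x); the fixed point satisfies F(y, x) = 0."""
    y = y0
    for _ in range(max_iter):
        y_next = y - np.linalg.solve(D, F(y, x))
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("iteration did not converge")

y_near = g(1.1)  # solve F(y, x) = 0 for an x slightly away from a = 1
```

This only illustrates the conclusion near one solution; the theorem is what guarantees such a g exists and is unique and continuous.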

What on Earth...?! Mysterious hypotheses

Differentiable functions (as we know...?)

Our ability to find a solution to a particular system of equations depends on the geometry of the associated vector-valued function. A vector-valued function F of several real variables is differentiable at a vector v if, in some small neighborhood of v, the graph of F "looks a lot like" an affine function. That is, there is a linear transformation A so that for all z "close" to v, F(z) ≈ F(v) + A(z − v), where A is the Jacobian matrix made up of all the partial derivatives of F at v. Suppose that F(v) = 0. When can we solve F(z) = 0 in some neighborhood of v?
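The "looks a lot like an affine function" claim can be checked numerically. A small sketch, with a made-up F and its hand-computed Jacobian (none of these names come from the slides):

```python
import numpy as np

# Numerically illustrate F(z) ≈ F(v) + A (z - v), where A is the Jacobian.
# Hypothetical F : R^2 -> R^2, F(x, y) = (x^2 - y, x*y).
def F(z):
    x, y = z
    return np.array([x**2 - y, x * y])

def jacobian(z):
    x, y = z
    return np.array([[2 * x, -1.0],
                     [y,      x ]])

v = np.array([1.0, 2.0])
A = jacobian(v)
h = np.array([1e-4, -2e-4])            # a small displacement from v
affine = F(v) + A @ h                  # the affine model of F at v
error = np.linalg.norm(F(v + h) - affine)  # should be O(|h|^2), tiny
```

Shrinking h by 10 shrinks the error by roughly 100, which is the quantitative content of differentiability.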

Geometry and Solutions

So suppose we have F(b, a) = 0 and F is differentiable at (b, a), so there exists A so that for all (y, x) "close" to (b, a), F(y, x) ≈ F(b, a) + A((y, x) − (b, a)). But F(b, a) = 0! So F(y, x) ≈ A((y, x) − (b, a)).

Geometry and Solutions

So suppose we have F(b, a) = 0 and F is differentiable at (b, a), so there exists A so that for all (y, x) "close" to (b, a), F(y, x) ≈ A((y, x) − (b, a)). Since the geometry of F near (b, a) is "just like" the geometry of A near 0, we should be able to solve F(y, x) = 0 for y in terms of x "near" the known solution (b, a), provided that we can solve A(y, x) = 0 for y in terms of x.

When can we do it?

Since A is linear, A(y, x) = 0 is a system of n linear equations in the variables y_1, ..., y_n, x_1, ..., x_m. With n = 3 and m = 2, it looks like this:

a_11 y_1 + a_12 y_2 + a_13 y_3 + a_14 x_1 + a_15 x_2 = 0
a_21 y_1 + a_22 y_2 + a_23 y_3 + a_24 x_1 + a_25 x_2 = 0
a_31 y_1 + a_32 y_2 + a_33 y_3 + a_34 x_1 + a_35 x_2 = 0

We know that we will be able to solve for the variables y_1, y_2, and y_3 in terms of x_1 and x_2 if and only if the 3 x 3 sub-matrix of coefficients of the y-variables is invertible.
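This criterion can be sketched with NumPy: partition A into its y-columns and x-columns, and solve for y whenever the y sub-matrix is invertible (the particular matrix below is made up for illustration):

```python
import numpy as np

# Split A = [A_y | A_x]; then A(y, x) = 0 reads A_y y + A_x x = 0,
# so y = -A_y^{-1} A_x x whenever the square sub-matrix A_y is invertible.
A = np.array([[1., 2., 0., 1., 0.],
              [0., 1., 1., 0., 2.],
              [2., 0., 1., 1., 1.]])   # 3 equations; y in R^3, x in R^2
A_y, A_x = A[:, :3], A[:, 3:]

assert abs(np.linalg.det(A_y)) > 1e-12  # the y sub-matrix must be invertible

x = np.array([1.0, -2.0])               # free choice of the x-variables
y = -np.linalg.solve(A_y, A_x @ x)      # the forced values of the y-variables

residual = A @ np.concatenate([y, x])   # should be the zero vector
```

If A_y were singular, some choices of x would leave the system unsolvable (or underdetermined) in y, which is exactly why the invertibility of D appears as a hypothesis.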

Hypotheses no longer mysterious

When can we solve F(y, x) = 0 for y in terms of x "near" (b, a)? When the domain of the function has more dimensions than the range, and when the "correct" square sub-matrix of A = F′(y, x) is invertible: that is, the square matrix made up of the partial derivatives of F with respect to the y-variables. This sub-matrix is the matrix D in the Implicit Function Theorem.

[Diagram: a ball of radius r about (b, a) in ℝ^(n+m), with a ∈ ℝ^m and b ∈ ℝ^n; for x within t of a, the point (f(x), x) stays in the ball. We want f continuous & unique with F(f(x), x) = 0, and the partials of F in the ball are all close to the partials at (b, a).]
