Response surfaces

We have a dependent variable $y$ and independent variables $x_1, x_2, \ldots, x_p$. The general form of the model is $y = f(x_1, x_2, \ldots, x_p) + \varepsilon$. [Figures: surface graph and contour map of $f$.]

The linear model: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_p x_p + e$. [Figures: surface graph and contour map of the fitted plane.]
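Fitting this plane is ordinary least squares. Below is a minimal sketch, not from the slides: the data are synthetic and every name in it is invented for illustration.

```python
import numpy as np

# Synthetic data standing in for observed (x1, x2, y) values.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(20, 2))              # columns: x1, x2
y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0.0, 0.1, size=20)

# Design matrix with an intercept column, then ordinary least squares.
A = np.column_stack([np.ones(len(y)), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(beta_hat)  # estimates of (beta0, beta1, beta2)
```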

The quadratic response model: $y = \beta_0 + \sum_i \beta_i x_i + \sum_i \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon$, combining linear terms $\beta_i x_i$ with quadratic terms $\beta_{ii} x_i^2$ and $\beta_{ij} x_i x_j$. [Figures: surface graph and contour map.]

The quadratic response model (3 variables): $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{33} x_3^2 + \beta_{12} x_1 x_2 + \beta_{13} x_1 x_3 + \beta_{23} x_2 x_3 + \varepsilon$, with both linear and quadratic terms. To fit this model we would be given data on $y, x_1, x_2, x_3$. From those data we would compute the derived regressors $u_4 = x_1^2$, $u_5 = x_2^2$, $u_6 = x_3^2$, $u_7 = x_1 x_2$, $u_8 = x_1 x_3$, $u_9 = x_2 x_3$. We then regress $y$ on $x_1, x_2, x_3, u_4, u_5, u_6, u_7, u_8$ and $u_9$.
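As a sketch of this fitting recipe (with made-up data, since the slides' dataset is not reproduced here), the derived regressors $u_4, \ldots, u_9$ can be computed and passed to an ordinary least-squares fit:

```python
import numpy as np

# Made-up data on y, x1, x2, x3; any observed dataset would be used instead.
rng = np.random.default_rng(1)
x1, x2, x3 = rng.uniform(-1.0, 1.0, size=(3, 30))
y = 1.0 + x1 + 2.0 * x2 - x3 - x1**2 + 0.5 * x1 * x2 + rng.normal(0.0, 0.05, 30)

# Derived regressors, following the slide's numbering:
u4, u5, u6 = x1**2, x2**2, x3**2          # pure quadratic terms
u7, u8, u9 = x1 * x2, x1 * x3, x2 * x3    # cross-product terms

# Regress y on x1, x2, x3, u4, ..., u9 (with an intercept).
A = np.column_stack([np.ones_like(y), x1, x2, x3, u4, u5, u6, u7, u8, u9])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # beta0, beta1..beta3, beta11, beta22, beta33, beta12, beta13, beta23
```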

Exploration of a response surface: the method of steepest ascent

Situation: we have a dependent variable $y$ and independent variables $x_1, x_2, \ldots, x_p$, with model $y = f(x_1, x_2, \ldots, x_p) + \varepsilon$. We want to find the values of $x_1, x_2, \ldots, x_p$ that maximize (or minimize) $y$. We assume that the form of $f(x_1, x_2, \ldots, x_p)$ is unknown. If it were known (e.g. a quadratic response model), we could estimate the parameters and determine the optimum values of $x_1, x_2, \ldots, x_p$ using calculus.

The method of steepest ascent (a minimal sketch in Python follows the list):
1. Choose a region in the domain of $f(x_1, x_2, \ldots, x_p)$.
2. Collect data in that region.
3. Fit a linear model (plane) to those data.
4. Determine from that plane the direction of its steepest ascent, $(\hat\beta_1, \hat\beta_2, \ldots, \hat\beta_p)$.
5. Move off in the direction of steepest ascent, collecting data on $y$.
6. Continue moving in that direction as long as $y$ is increasing, and stop when $y$ stops increasing.
7. Choose a region surrounding that point and return to step 2.
8. Continue until the plane fitted to the data is horizontal.
9. Consider fitting a quadratic response model in this final region and determining where it is optimal.
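Here is a minimal sketch of one cycle of this procedure. Everything in it is an assumption for illustration: `measure` stands in for running an experiment at given settings, the local design is a $2^2$ factorial, and the response is synthetic.

```python
import numpy as np

def fit_plane(X, y):
    """Least-squares plane through the data; returns (intercept, gradient)."""
    A = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b[0], b[1:]

def steepest_ascent_cycle(measure, x0, half_width=1.0, step=0.5, max_moves=50):
    """One cycle of steepest ascent around x0 for a two-variable response.

    `measure(x)` is assumed to return one observed response at settings x.
    """
    # Steps 1-3: collect data on a 2^2 factorial around x0 and fit a plane.
    corners = x0 + half_width * np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
    y = np.array([measure(c) for c in corners])
    _, grad = fit_plane(corners, y)

    # Steps 4-6: move along the fitted gradient while the response improves.
    direction = grad / np.linalg.norm(grad)
    x, best = x0, measure(x0)
    for _ in range(max_moves):
        x_next = x + step * direction
        y_next = measure(x_next)
        if y_next <= best:            # stop when y stops increasing
            break
        x, best = x_next, y_next
    return x, best                     # center of the next region (step 7)

# Illustration with a synthetic response whose true optimum is at (3, 5).
rng = np.random.default_rng(2)
f = lambda x: 10.0 - (x[0] - 3.0)**2 - (x[1] - 5.0)**2 + rng.normal(0.0, 0.01)
print(steepest_ascent_cycle(f, x0=np.array([0.0, 0.0])))
```

In practice the cycle is repeated from the returned point with a fresh local design until the fitted plane is essentially flat, at which point a quadratic model takes over.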

The method of steepest ascent. [Figure: the domain of $f(x_1, x_2, \ldots, x_p)$, showing the initial region, the direction of steepest ascent, a 2nd region, the final region, and the optimal $(x_1, x_2)$.]

Example. In this example we are interested in how the life ($y$) of a lathe cutting tool depends on lathe velocity ($V$) and cutting depth ($D$). In particular, we are interested in what settings of $V$ and $D$ will result in the maximum life ($y$) of the tool. The variables $V$ and $D$ have been recoded into $x_1$ and $x_2$ so that when $V = 100$ then $x_1 = 0$ and when $V = 700$, $x_1 = \ldots$; 100 to 700 are the feasible values of $V$. Also when $D = \ldots$ then $x_2 = 0$ and when $D = 0.100$, $x_2 = \ldots$; the feasible values of $D$ run up to 0.100.
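The recoding itself is just a linear rescaling. The sketch below assumes the coding maps each variable's lower feasible value to 0 and its upper feasible value to 1; the function name and sample values are invented, and the depth limit missing from the slide is deliberately left unfilled.

```python
def code(value, low, high):
    """Linearly recode `value` so that low -> 0 and high -> 1 (assumed coding)."""
    return (value - low) / (high - low)

# Velocity: V = 100 maps to x1 = 0; assuming V = 700 maps to x1 = 1.
x1 = code(650.0, low=100.0, high=700.0)   # V = 650 -> x1 ~ 0.917
print(x1)

# Depth would be coded the same way against its feasible limits, e.g.
# x2 = code(D, low=D_low, high=0.100); D_low is left unspecified because
# the original slide does not give it.
```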

[Figure: the domain for $(x_1, x_2)$, with $x_1$ on the horizontal axis and $x_2$ on the vertical axis.]

Initial region ($2^k$ design). [Figure: the initial design region in the $(x_1, x_2)$ plane.]

Analysis. Direction of steepest ascent: $(\hat\beta_1, \hat\beta_2) = (1.114, \ldots)$.

Moving in the direction of steepest ascent. Direction of steepest ascent: $(\hat\beta_1, \hat\beta_2) = (1.114, \ldots)$. Optimum $(x_1, x_2) = (41.72, 58.24)$.

2nd region ($2^k$ design)

Analysis. Direction of steepest ascent: $(\hat\beta_1, \hat\beta_2) = (-0.080, \ldots)$.

Moving in the direction of steepest ascent. Direction of steepest ascent: $(\hat\beta_1, \hat\beta_2) = (-0.080, \ldots)$.

To determine the precise optimum we fit a quadratic response surface, $\hat y = \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 x_2 + \hat\beta_{11} x_1^2 + \hat\beta_{22} x_2^2 + \hat\beta_{12} x_1 x_2$. The optimum then satisfies $\partial \hat y / \partial x_1 = \hat\beta_1 + 2\hat\beta_{11} x_1 + \hat\beta_{12} x_2 = 0$ and $\partial \hat y / \partial x_2 = \hat\beta_2 + \hat\beta_{12} x_1 + 2\hat\beta_{22} x_2 = 0$, a pair of linear equations in $(x_1, x_2)$ with solution $\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = -\begin{pmatrix} 2\hat\beta_{11} & \hat\beta_{12} \\ \hat\beta_{12} & 2\hat\beta_{22} \end{pmatrix}^{-1} \begin{pmatrix} \hat\beta_1 \\ \hat\beta_2 \end{pmatrix}$.
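A sketch of that solve in Python; the coefficient values here are invented, since the fitted numbers from the slides are not reproduced in this transcript.

```python
import numpy as np

# Invented fitted coefficients of
#   y-hat = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
b1, b2, b11, b22, b12 = 1.2, 0.8, -0.5, -0.3, 0.1

# Stationary point: solve  [2*b11   b12 ] [x1]   [-b1]
#                          [ b12  2*b22 ] [x2] = [-b2]
B = np.array([[2.0 * b11, b12], [b12, 2.0 * b22]])
x_opt = np.linalg.solve(B, -np.array([b1, b2]))
print(x_opt)  # a maximum when B is negative definite, as it is here
```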

The data. [Table of experimental observations, not reproduced in the transcript.]

Location of the data points. [Figure: the design points in the $(x_1, x_2)$ plane, not reproduced in the transcript.]

Fitting a quadratic response surface to these data gives estimated coefficients $\hat\beta_0, \hat\beta_1, \hat\beta_2, \hat\beta_{11}, \hat\beta_{22}, \hat\beta_{12}$; the optimum $(x_1, x_2)$ is then the solution of the stationary equations above.