1
MAE 552 – Heuristic Optimization
Lecture 2 January 25, 2002
2
The optimization problem is then:
Find values of the variables that minimize or maximize the objective function while satisfying the constraints. The standard form of the constrained optimization problem can be written as:
Minimize: F(x) (objective function)
Subject to: gj(x) ≤ 0, j = 1, ..., m (inequality constraints)
hk(x) = 0, k = 1, ..., l (equality constraints)
xi_lower ≤ xi ≤ xi_upper, i = 1, ..., n (side constraints)
where x = (x1, x2, x3, ..., xn) are the design variables
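The standard form above can be sketched as a feasibility check in code. The objective F, the constraints g and h, and the bounds below are made-up placeholders for illustration only, not from the lecture:

```python
# Minimal sketch of the standard constrained form, with a made-up
# objective and constraints purely for illustration.
def F(x):                       # objective function
    return x[0]**2 + x[1]**2

def g(x):                       # inequality constraints, g_j(x) <= 0
    return [1.0 - x[0] - x[1]]

def h(x):                       # equality constraints, h_k(x) = 0
    return [x[0] - x[1]]

LOWER = [0.0, 0.0]              # side constraints
UPPER = [10.0, 10.0]

def is_feasible(x, tol=1e-8):
    return (all(gj <= tol for gj in g(x)) and
            all(abs(hk) <= tol for hk in h(x)) and
            all(lo - tol <= xi <= hi + tol
                for xi, lo, hi in zip(x, LOWER, UPPER)))

print(is_feasible([0.5, 0.5]))   # lies on both constraints -> True
print(is_feasible([2.0, 0.0]))   # violates the equality constraint -> False
```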
3
Conditions for Optimality
Unconstrained Problems
∇F(x*) = 0: the gradient of F(x) must vanish at the optimum.
The Hessian matrix must be positive definite (i.e. all eigenvalues positive) at the optimum point.
4
Conditions for Optimality
Unconstrained Problems
A positive definite Hessian at the minimum ensures only that a local minimum has been found. The minimum is the global minimum only if it can be shown that the Hessian is positive definite for all possible values of x, which would imply a convex design space. Very hard to prove in practice!
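As a rough numerical check of these two conditions, one can finite-difference the gradient and Hessian of a function and test positive definiteness (here via Sylvester's criterion for a 2x2 matrix). The quadratic F below is a toy example chosen for illustration:

```python
# Sketch: finite-difference check of the unconstrained optimality
# conditions for the toy quadratic F(x) = (x1 - 1)^2 + 2*(x2 + 3)^2,
# whose minimum is at x* = (1, -3).
def F(x):
    return (x[0] - 1.0)**2 + 2.0 * (x[1] + 3.0)**2

def grad(F, x, eps=1e-6):
    # Central-difference gradient.
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((F(xp) - F(xm)) / (2 * eps))
    return g

def hessian(F, x, eps=1e-4):
    # Central-difference Hessian.
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += eps; xpp[j] += eps
            xpm = list(x); xpm[i] += eps; xpm[j] -= eps
            xmp = list(x); xmp[i] -= eps; xmp[j] += eps
            xmm = list(x); xmm[i] -= eps; xmm[j] -= eps
            H[i][j] = (F(xpp) - F(xpm) - F(xmp) + F(xmm)) / (4 * eps**2)
    return H

x_star = [1.0, -3.0]
grad_star = grad(F, x_star)
hess_star = H = hessian(F, x_star)
# Sylvester's criterion: a 2x2 symmetric matrix is positive definite
# iff H[0][0] > 0 and det(H) > 0.
print(max(abs(gi) for gi in grad_star) < 1e-4)               # gradient vanishes
print(H[0][0] > 0 and H[0][0]*H[1][1] - H[0][1]*H[1][0] > 0) # PD Hessian
```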
6
Conditions for Optimality
Constrained Problems: Kuhn-Tucker Conditions
1. x* is feasible
2. λj gj(x*) = 0, j = 1, ..., m (complementary slackness)
3. ∇F(x*) + Σ(j=1..m) λj ∇gj(x*) + Σ(k=1..l) λ(m+k) ∇hk(x*) = 0, with λj ≥ 0 and λ(m+k) unrestricted in sign
These conditions only guarantee that x* is a local optimum.
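The conditions can be checked by hand on a one-variable toy problem (not from the lecture): minimize F(x) = x^2 subject to g(x) = 1 - x ≤ 0, whose optimum is x* = 1 with multiplier λ = 2:

```python
# Sketch: checking the Kuhn-Tucker conditions for
# minimize F(x) = x^2  subject to  g(x) = 1 - x <= 0.
# Candidate optimum x* = 1 with multiplier lam = 2.
x_star, lam = 1.0, 2.0
dF = 2 * x_star           # gradient of F at x*
dg = -1.0                 # gradient of g

print(1 - x_star <= 0)                  # 1. x* is feasible
print(lam * (1 - x_star) == 0)          # 2. complementary slackness
print(dF + lam * dg == 0 and lam >= 0)  # 3. stationarity with lam >= 0
```

All three print True, confirming that x* = 1 satisfies the conditions; as the slide notes, this only certifies a local optimum.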
7
Conditions for Optimality
Constrained Problems
In addition to the Kuhn-Tucker conditions, two other conditions must be satisfied to guarantee a global optimum:
1. The Hessian must be positive definite for all x.
2. The constraints must be convex. A constraint is convex if a line connecting any two points in the feasible space always lies in the feasible region of the design space.
8
Determining the Complexity of Problems
Why are some problems difficult to solve?
1. The number of possible solutions in the search space is so large as to forbid an exhaustive search for the best answer.
2. The problem is so complex that, just to obtain any answer at all, we must simplify the model to the point that any result is essentially useless.
3. The evaluation function (objective function) that describes the quality of a proposed solution is noisy or varies with time, thereby requiring a series of solutions to be found.
4. The person solving the problem is inadequately prepared or imagines some psychological barrier that prevents them from discovering a solution.
9
1. The Size of the Search Space
Example 1: Traveling Salesman Problem (TSP)
A salesman must visit every city in a territory exactly once and then return home, covering the shortest distance. Given the cost of traveling between each pair of cities, how should the salesman plan his trip to minimize the distance traveled? (Example cities: Seattle, NY, Orlando, LA, Dallas.)
10
1. The Size of the Search Space
What is the size of the search space for the TSP? Each tour can be described as a permutation of the cities: Seattle-NY-Orlando-Dallas-LA, LA-NY-Seattle-Dallas-Orlando, etc. Certain tours are identical: Seattle-NY-Orlando-Dallas-LA, NY-Orlando-Dallas-LA-Seattle, Orlando-Dallas-LA-Seattle-NY. Every tour can be represented in 2n different ways (n choices of starting city, times 2 directions of travel), where n is the number of cities.
11
1. The Size of the Search Space
Number of permutations of n cities = n!, with 2n ways to represent each tour.
Total number of unique tours S = n!/(2n) = (n-1)!/2
So for our example n = 5 and S = 12, which could easily be solved by hand. S grows factorially, however:
n = 10: S = 181,440 possible solutions
n = 20: S ≈ 6 × 10^16 possible solutions
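The count S = (n-1)!/2 is easy to tabulate for the slide's values of n:

```python
import math

# Number of distinct TSP tours, S = (n-1)!/2, as a function of n.
def tours(n):
    return math.factorial(n - 1) // 2

for n in (5, 10, 20, 50):
    print(n, tours(n))
# n=5 gives 12; n=50 already gives a number with over 60 digits.
```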
12
1. The Size of the Search Space
n = 50: S = 49!/2 ≈ 3 × 10^62 possible solutions
13
1. The Size of the Search Space
Example 2: Nonlinear Programming Problem
14
1. The Size of the Search Space
What is the size of the search space for the NLP? If we assume a machine precision of 6 decimal places, then each variable can take on 10,000,000 possible values. The total number of possible solutions is 10^(7n).
For n = 2 there are 10^14 (100,000,000,000,000) possible solutions.
For n = 50 there are 10^350 possible solutions!
Impossible to enumerate all these solutions even with the most powerful computers.
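The 10^(7n) figure follows directly from counting grid points, as a two-line sketch shows (the 10^7 values-per-variable assumption is the slide's):

```python
# With 10^7 representable values per variable, brute-force enumeration
# of an n-variable NLP faces 10^(7n) candidate solutions.
def nlp_grid_size(n, values_per_var=10**7):
    return values_per_var ** n

print(nlp_grid_size(2))              # 10^14
print(len(str(nlp_grid_size(50))))   # 351 digits, i.e. 10^350
```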
15
Problem => Model => Solution
2. Modeling the Problem
Whenever we solve a problem we are actually only finding a solution to a MODEL of the problem. Examples: finite element model, CFD model, etc.
Two steps to problem solving:
1. Creating a model of the problem
2. Using that model to generate a solution
Problem => Model => Solution
The 'solution' is only a 'solution' in terms of the model used.
16
2. Modeling the Problem
Example: methods of performing nonlinear aerodynamic drag predictions:

Solution Method                        Time for 1 Solution
Linear Theory Solution                 2 seconds
Wing/Fuselage Euler Solution           16 minutes
Wing/Fuselage Navier-Stokes Solution   2 hours

There is a tradeoff between the fidelity of the model and the ability to solve it.
17
2. Modeling the Problem
When faced with a complex problem we have two choices:
1. Simplify the model and try for a more exact solution with a traditional optimizer.
2. Keep the model as is and accept a possibly approximate solution, or use a nontraditional optimization method to try to find an exact solution.
This can be written:
1. Problem => Model_a => Solution_p(Model_a)
2. Problem => Model_p => Solution_a(Model_p)
(where subscript a denotes approximate and p denotes precise)
It is generally better to use strategy 2, because with strategy 1 there is no guarantee that the solution to an approximate model will be useful.
18
3. System Changes over Time
Real-world systems change over time. Example: in the TSP, the time to travel between cities could change as a result of many factors: road conditions, traffic patterns, accidents. Suppose there are two equally likely possibilities for a trip from New York to Buffalo:
1. Everything goes fine and it takes 6 hours.
2. You get delayed by one of the factors above and it takes 7 hours.
19
3. System Changes over Time
How can this information be put into the model? The trip takes 6 hours 50% of the time and 7 hours 50% of the time. (Figure: travel time from NY to Buffalo = ?)
20
3. System Changes over Time
How can this information be put into the model?
Simple approach: use 6.5 hours for the trip. Problem: it never takes exactly 6.5 hours to make the trip, so the solution found will be for the WRONG problem.
Alternative: repeatedly simulate the system, with each case occurring 50% of the time. Problem: it can be expensive to run repeated simulations.
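The simulation alternative can be sketched as a small Monte Carlo experiment. The 50/50 split is the slide's; the sample count and seed are arbitrary choices:

```python
import random

# Sketch: simulating the uncertain NY -> Buffalo trip instead of
# collapsing it to a single 6.5-hour average.
def trip_time(rng):
    return 6.0 if rng.random() < 0.5 else 7.0   # each outcome 50% likely

rng = random.Random(42)          # fixed seed for repeatability
times = [trip_time(rng) for _ in range(100_000)]
avg = sum(times) / len(times)
print(round(avg, 2))   # close to 6.5, yet no single trip ever takes 6.5
```

This illustrates the slide's point: the average is 6.5 hours, but every simulated trip takes either 6 or 7, so a plan tuned to "6.5 hours" is tuned to a trip that never occurs.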
21
4. Constraints Real world problems do not allow you to choose from the entire search space.
22
4. Constraints Effect of Constraints
1. Good Effect: removes part of the search space from consideration. Inequality constraints cut the design space into feasible and infeasible regions, e.g. g: x1 + x2 ≤ 3. (Figure: the line x1 + x2 = 3 splits the (x1, x2) plane into a feasible and an infeasible region.)
23
4. Constraints Effect of Constraints
1. Good Effect: removes part of the search space from consideration. Equality constraints reduce the design space to a line or a plane, e.g. h: x1 + x2 = 3. (Figure: only points on the line x1 + x2 = 3 are feasible; both sides of it are infeasible.)
24
4. Constraints Effect of Constraints
Good Effect: Remove part of the search space from consideration. If an algorithm can be designed so that it only considers the feasible part of the design space, the number of potential solutions can be reduced.
25
4. Constraints
2. Bad Effect: we need an algorithm that finds new solutions that improve on previous solutions AND maintain feasibility. Often the optimal solution lies directly along one or more constraints, and it is difficult to move along a constraint without corrupting the solution. One option is to design an algorithm that locates a feasible design and then never corrupts it while searching for a design that better satisfies the objective function. Very difficult to do!
26
Complexity Theory
The complexity of decision and optimization problems is classified according to the relationship between solution time and input size. The simplest way to measure the running time of a program is to count the overall number of instructions executed by the algorithm before halting. For a problem with input size n, determine the cost (time) of applying the algorithm to the worst-case instance of the problem. This provides an upper bound on the execution time.
27
Complexity Theory The goal is to express the execution time for the algorithm in terms of the input variables. Time=F (Num of Inputs) The standard O notation is used to describe the execution cost of the algorithm with input size n: Execution time grows no more than O(n) as n increases asymptotically
28
Asymptotic Analysis
Let t(x) be the running time of algorithm A on input x. The worst-case running time of A is given by t(n) = max{ t(x) : x such that |x| ≤ n }.
Upper bound: A has complexity O(f(n)) if t(n) is O(f(n)) (that is, we ignore constants).
Lower bound: A has complexity Ω(f(n)) if t(n) is Ω(f(n)).
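Instruction counting can be made concrete by instrumenting two toy algorithms (illustrative only): a single loop and a nested pair of loops over an input of size n:

```python
# Sketch: counting "basic operations" to see asymptotic growth.
def count_ops_linear(n):
    ops = 0
    for _ in range(n):       # single loop -> O(n) operations
        ops += 1
    return ops

def count_ops_quadratic(n):
    ops = 0
    for _ in range(n):
        for _ in range(n):   # nested loops -> O(n^2) operations
            ops += 1
    return ops

print(count_ops_linear(100))      # 100 operations
print(count_ops_quadratic(100))   # 10000 operations
```

Doubling n doubles the first count but quadruples the second, which is exactly what the O(n) and O(n^2) bounds predict.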
29
Complexity Theory
Examples:

Execution Time             Bounded By
bn                         O(n)
bn^2                       O(n^2)
bn^2 + log(n) + n          O(n^2)
2^n + bn^2 + log(n) + n    O(2^n)
30
Input Size
Size of input: the number of bits needed to represent the specific input. We assume the existence of an encoding scheme used to describe any problem instance. For any pair of natural (reasonable) encoding schemes and for any instance x, the lengths of the resulting strings are polynomially related.
31
Complexity Classes
For any function f(n), TIME(f(n)) is the set of decision problems which can be solved with time complexity O(f(n)).
P = the union of TIME(n^k) over all k
EXPTIME = the union of TIME(2^(n^k)) over all k
P is contained in EXPTIME. It is possible to prove (by diagonalization) that EXPTIME is not contained in P.
32
Examples
SATISFYING TRUTH ASSIGNMENT: given a Boolean formula F and a truth assignment f, does f satisfy F? SATISFYING TRUTH ASSIGNMENT is in P.
SATISFIABILITY (simply, SAT): given a Boolean formula F, is there any truth assignment that satisfies F? SAT is in EXPTIME. Open problem: is SAT in P?
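The contrast can be sketched in code. The CNF formula below is an arbitrary example; checking one assignment takes one pass over the clauses (polynomial time), while brute-force SAT tries all 2^n assignments (exponential time):

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable index, polarity).
formula = [[(0, True), (1, False)],    # (x0 OR NOT x1)
           [(1, True), (2, True)],     # (x1 OR x2)
           [(0, False), (2, False)]]   # (NOT x0 OR NOT x2)

def satisfies(formula, assignment):
    # SATISFYING TRUTH ASSIGNMENT: one pass over clauses -> polynomial time.
    return all(any(assignment[v] == pol for v, pol in clause)
               for clause in formula)

def sat(formula, n):
    # SAT by brute force: try all 2^n assignments -> exponential time.
    return any(satisfies(formula, a)
               for a in product([False, True], repeat=n))

print(satisfies(formula, (True, True, False)))   # verify one assignment
print(sat(formula, 3))                           # search all 2^3 assignments
```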
33
Complexity Classes: Class NPO
Optimization problems such that:
1. An instance of the problem is recognizable in polynomial time
2. Feasible solutions are recognizable in polynomial time
3. The objective function is computable in polynomial time
34
Class PO
NPO problems solvable in polynomial time. Examples: Linear Programming, convex Quadratic Programming.
35
Class NP-hard problems
An optimization problem is NP-hard if it is at least as hard to solve as any problem in NP. Such problems are (so far) solvable only in exponential time. Examples: TRAVELING SALESMAN, MAXIMUM QUADRATIC PROGRAMMING, MAXIMUM KNAPSACK, and many more.
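MAXIMUM KNAPSACK illustrates the exponential cost well: the brute-force solver below (with made-up item values and weights) simply tries all 2^n subsets, which is exact but infeasible for large n:

```python
from itertools import combinations

# Sketch of MAXIMUM KNAPSACK by exhaustive search: simple to state,
# but the 2^n subsets make it exponential-time in general.
values   = [60, 100, 120]   # illustrative data
weights  = [10, 20, 30]
capacity = 50

best = 0
items = range(len(values))
for r in range(len(values) + 1):
    for subset in combinations(items, r):          # all 2^n subsets
        if sum(weights[i] for i in subset) <= capacity:
            best = max(best, sum(values[i] for i in subset))

print(best)   # 220: items with weights 20 and 30 fill the knapsack
```

With only 3 items the loop visits 8 subsets; at n = 50 it would visit 2^50 (about 10^15), echoing the search-space explosion discussed earlier in the lecture.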