Dr. Arslan Ornek IMPROVING SEARCH


1 Dr. Arslan Ornek IMPROVING SEARCH
CHAPTER THREE: IMPROVING SEARCH
IND606 Fundamentals of Optimization / IND517 Dynamic and Stochastic Programming (11/17/2018)

2 NUMERICAL SEARCH
NUMERICAL SEARCH is the process of repeatedly trying different values of the decision variables in a systematic way until a satisfactory one emerges. Some optimization models admit closed-form solutions; most of the rest can be thought of as variations on a single search theme: IMPROVING SEARCH. IMPROVING SEARCH tries to better a current solution by checking others nearby. If any proves superior, the search advances to such a solution, and the process repeats. Otherwise, we stop with the current solution. (Also called LOCAL IMPROVEMENT, HILLCLIMBING, LOCAL SEARCH, or NEIGHBORHOOD SEARCH.)

3 3.1 IMPROVING SEARCH, LOCAL AND GLOBAL OPTIMA
3.1 A SOLUTION is a choice of values for all decision variables. If an optimization model has n decision variables, solutions are n-dimensional.
3.2 For a model with decision vector x, the first solution visited by a search is denoted x(0), the next x(1), and so on.

4 Ex. 3.1 DCLUB LOCATION
Pop center 1: population 60, located at (-1, 3)
Pop center 2: population 20, located at (1, 3)
Pop center 3: population 30, located at (0, -4)
The chosen location may not be within 1/2 mile of any pop center.

5 Ex. 3.1 DCLUB MODEL
Assume an objective function in which each population center contributes in proportion to its population and in inverse proportion to 1 + the square of its distance from the chosen location (x1, x2):

max p(x1, x2) = 60/[1 + (x1+1)^2 + (x2-3)^2] + 20/[1 + (x1-1)^2 + (x2-3)^2] + 30/[1 + x1^2 + (x2+4)^2]

s.t. (x1+1)^2 + (x2-3)^2 >= (1/2)^2
     (x1-1)^2 + (x2-3)^2 >= (1/2)^2
     x1^2 + (x2+4)^2 >= (1/2)^2

6 (figure)

7 (figure)

8 Ex. 3.1: AN IMPROVING SEARCH PATH
The point x(4) in Fig. 3.3 is (approximately) optimal because it is the feasible point falling on the highest contour (see 2.13). Fig. 3.3 also traces an improving search leading to optimal solution x(4):
x(0) = (-5, 0) with p(x(0)) = 3.5
x(1) = (-3, 4) with p(x(1)) = 11.5
x(2) = (-1, 4.5) with p(x(2)) = 21.6
x(3) = (0, 3.5) with p(x(3)) = 36.1
x(4) = (-0.5, 3) with p(x(4)) = 54.8 (optimum)
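The path values above can be reproduced numerically. A minimal sketch, assuming each center contributes population/(1 + squared distance to the chosen location), as Ex. 3.1 describes:

```python
# Assumed DClub objective: each population center contributes
# pop / (1 + squared distance to the chosen location (x1, x2)).
CENTERS = [((-1.0, 3.0), 60.0), ((1.0, 3.0), 20.0), ((0.0, -4.0), 30.0)]

def p(x1, x2):
    return sum(pop / (1.0 + (x1 - a) ** 2 + (x2 - b) ** 2)
               for (a, b), pop in CENTERS)

# Search path from Fig. 3.3; each step improves the objective.
path = [(-5.0, 0.0), (-3.0, 4.0), (-1.0, 4.5), (0.0, 3.5), (-0.5, 3.0)]
for pt in path:
    print(pt, round(p(*pt), 1))  # 3.5, 11.5, 21.6, 36.1, 54.8
```

The rounded values match those reported for x(0) through x(4), which supports the assumed functional form.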

9 LOCAL OPTIMA
3.3 IMPROVING SEARCHES are numerical algorithms that begin at a feasible solution to a given optimization model and advance along a search path of feasible points with ever-improving objective function value.
3.4 The NEIGHBORHOOD of a current solution x(t) consists of all nearby points, that is, all points within a small positive distance of x(t).
3.5 A solution is a LOCAL OPTIMUM (local max or local min) if it is feasible and if sufficiently small neighborhoods surrounding it contain no points that are both feasible and superior in objective value.

10 (figure)

11 GLOBAL OPTIMA
3.6 Improving searches stop if they encounter a local optimum.
3.7 A solution is a GLOBAL OPTIMUM (global max or global min) if it is feasible and no other feasible solution has superior objective value.
Notice that global optima cannot be improved in any neighborhood: global optima are always local optima, but local optima may not be global optima.

12 (figure)

13 DEALING WITH LOCAL OPTIMA
3.10 The most tractable optimization models for improving search are those with mathematical forms assuring that every local optimum is a global optimum.
When models have local optima that are not global, the most satisfactory available analysis is often to run several independent improving searches and accept the best local optimum discovered as a heuristic or approximate optimum.
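The run-several-independent-searches heuristic can be sketched as a multistart search. The objective below reuses the assumed DClub-style form of Ex. 3.1, and the coordinate hill-climb is one simple stand-in for an improving search:

```python
import random

def p(x1, x2):
    # Assumed DClub-style objective: a global peak near (-0.5, 3) and a
    # lesser local peak near the isolated center at (0, -4).
    centers = [((-1.0, 3.0), 60.0), ((1.0, 3.0), 20.0), ((0.0, -4.0), 30.0)]
    return sum(w / (1.0 + (x1 - a) ** 2 + (x2 - b) ** 2) for (a, b), w in centers)

def hill_climb(x, step=1.0, tol=1e-5):
    """Coordinate-wise improving search: accept a +/- step move while one improves."""
    while step > tol:
        nbrs = [(x[0] + dx, x[1] + dy)
                for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step))]
        best = max(nbrs, key=lambda y: p(*y))
        if p(*best) > p(*x):
            x = best
        else:
            step /= 2  # no neighbor improves: shrink the neighborhood and retry
    return x

random.seed(1)
starts = [(random.uniform(-6, 6), random.uniform(-6, 6)) for _ in range(8)]
finals = [hill_climb(s) for s in starts]
best = max(finals, key=lambda y: p(*y))
print("best local optimum found:", best)
```

Different starts may stop at different local optima; the heuristic answer is simply the best one discovered.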

14 3.2 SEARCH WITH IMPROVING AND FEASIBLE DIRECTIONS
Just how do we efficiently construct search paths satisfying the always-feasible, constantly-improving requirements of definition 3.3? We pursue a sequence of steps along straight-line move directions. Each begins at one of the numbered solutions x(t). There, a move direction is chosen along with a step size specifying how far the direction should be pursued. Together they determine the new point x(t+1), and the search continues.

15 MOVE DIRECTIONS AND STEP SIZES
3.12 Improving searches advance from current solution x(t) to new solution x(t+1) as
x(t+1) = x(t) + λ Δx,
where vector Δx defines a MOVE DIRECTION of solution change at x(t), and STEP SIZE multiplier λ > 0 determines how far to pursue the direction.
Ex. DClub: x(0) = (-5, 0), x(1) = (0.5, -2.75).
Δx = x(1) - x(0) = (0.5, -2.75) - (-5, 0) = (5.5, -2.75); if λ = 1, then x(1) = x(0) + λ Δx = (0.5, -2.75).

16 IMPROVING DIRECTIONS
However, if Δx' = (2, -1) defines the same direction of movement, then λ' = 2.75 produces the identical move.
3.13 Vector Δx is an IMPROVING DIRECTION at current solution x(t) if the objective function value at x(t) + λ Δx is superior to that of x(t) for all λ > 0 sufficiently small; for a max objective, f(x(t) + λ Δx) > f(x(t)).
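Both claims are easy to check numerically: the two (direction, step) pairs produce the identical move, and the improving-direction test holds for small λ. The objective f below is an illustrative concave function, an assumption rather than the text's model:

```python
# The move x(t+1) = x(t) + lam*dx, and a check that two (direction, step)
# pairs along the same ray give the identical move.
def add(x, lam, dx):
    return tuple(xi + lam * di for xi, di in zip(x, dx))

x0 = (-5.0, 0.0)
print(add(x0, 1.0, (5.5, -2.75)))   # (0.5, -2.75)
print(add(x0, 2.75, (2.0, -1.0)))   # (0.5, -2.75): same point

# Improving-direction test 3.13 on an illustrative concave objective
# (an assumption, not the DClub model): f(x0 + lam*dx) > f(x0) for small lam.
def f(x):
    return -(x[0] ** 2) - (x[1] - 3.0) ** 2

lam = 1e-4  # "sufficiently small" step
print(f(add(x0, lam, (5.5, -2.75))) > f(x0))  # True
```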

17 (figure)

18 (figure)

19 FEASIBLE DIRECTIONS
3.14 Vector Δx is a FEASIBLE DIRECTION at current solution x(t) if point x(t) + λ Δx violates no model constraint for all λ > 0 sufficiently small.
At x(2) = (4, 4), every direction is feasible.
At x(1) = (7, 0), Δx = (0, 1) is feasible.
At x(3) = (0.75, 6), Δx = (0, 1) is not feasible.

20 STEP SIZES
Once an improving feasible direction has been discovered at the current solution, what step size λ should be applied? Improving searches normally apply the maximum step λ for which the selected move direction continues to retain feasibility and improve the objective function.

21 STEP SIZE EXAMPLE
Let w(19) = (4, 5) and determine the step size λ for Δw = (-3, -8). Assuming the binding constraints are the nonnegativity bounds w1 >= 0 and w2 >= 0:
w1: 4 - 3λ >= 0 requires λ <= 4/3;
w2: 5 - 8λ >= 0 requires λ <= 5/8.
If λ > 5/8 or λ > 4/3, constraints are violated. So λ = min{4/3, 5/8} = 5/8.
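This minimum-ratio computation generalizes directly. A sketch, assuming (as above) that the only binding constraints are nonnegativity bounds w >= 0:

```python
from fractions import Fraction

def max_step(w, dw):
    """Largest lam keeping w + lam*dw >= 0 componentwise
    (assumes the only binding constraints are nonnegativity bounds)."""
    ratios = [wi / -di for wi, di in zip(w, dw) if di < 0]
    return min(ratios) if ratios else float("inf")  # inf: direction never blocked

lam = max_step((Fraction(4), Fraction(5)), (Fraction(-3), Fraction(-8)))
print(lam)  # 5/8, the tighter of 4/3 and 5/8
```

Returning infinity when no component decreases mirrors the unbounded case of Step 3 in Algorithm 3A.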

22 ALGORITHM 3A: CONTINUOUS IMPROVING SEARCH
STEP 0: INITIALIZATION. Choose any feasible starting solution x(0), and set solution index t ← 0.
STEP 1: LOCAL OPTIMUM. If no improving feasible direction Δx exists at current solution x(t), stop. Under mild assumptions about the form of the model, point x(t) is a local optimum.
STEP 2: MOVE DIRECTION. Construct an improving feasible direction at x(t) as Δx(t+1).
STEP 3: STEP SIZE. If there is a limit on step sizes for which direction Δx(t+1) continues to both improve the objective function and retain feasibility, choose the largest such step size as λ(t+1). If not, stop; the model is unbounded.
STEP 4: ADVANCE. Update x(t+1) ← x(t) + λ(t+1) Δx(t+1). Then increment t ← t+1, and return to STEP 1.
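The steps above can be sketched for the unconstrained case. The finite-difference gradient as move direction (anticipating Section 3.3) and a halving line search for the step size are illustrative choices, and the concave test objective with maximum at (-0.5, 3) is an assumption:

```python
# Unconstrained sketch of Algorithm 3A.
def grad(f, x, h=1e-6):
    # central finite-difference gradient
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def improving_search(f, x, tol=1e-6, max_iter=500):
    for _ in range(max_iter):
        dx = grad(f, x)                               # Step 2: move direction
        if sum(d * d for d in dx) ** 0.5 < tol:       # Step 1: no direction left
            return x                                  # local optimum (mild assumptions)
        lam = 1.0                                     # Step 3: shrink until improving
        while lam > tol and f([xi + lam * di for xi, di in zip(x, dx)]) <= f(x):
            lam /= 2
        x = [xi + lam * di for xi, di in zip(x, dx)]  # Step 4: advance
    return x

f = lambda x: -((x[0] + 0.5) ** 2) - 2.0 * (x[1] - 3.0) ** 2
print([round(v, 3) for v in improving_search(f, [-5.0, 0.0])])  # [-0.5, 3.0]
```

A constrained version would additionally restrict dx to feasible directions and cap lam by the maximum feasible step.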

23 LOCAL OPTIMA AND UNBOUNDEDNESS
3.16 No optimization model solution at which an improving feasible direction is available can be a local optimum. When a continuous improving search terminates at a solution admitting no improving feasible direction, and mild assumptions hold, the point is a local optimum. The "mild assumptions" caveat covers cases where no improving feasible direction exists at the current solution, yet the point is still not a local optimum.
An optimization model is UNBOUNDED if it admits feasible solutions with arbitrarily good objective value (2.20). If an improving search discovers an improving feasible direction for a model that can be pursued forever without ceasing to improve or losing feasibility, the model is unbounded.

24 3.3 GRADIENTS
3.19 The GRADIENT of f(x) = f(x1, …, xn), denoted ∇f(x), is the vector of partial derivatives (∂f/∂x1, …, ∂f/∂xn) evaluated at x.
3.20 Gradients show graphically as vectors perpendicular to contours of the objective function, pointing in the direction of most rapid objective value increase.

25 (figure)

26 GRADIENTS AND IMPROVING DIRECTIONS
First-order Taylor approximation: f(x + λ Δx) ≈ f(x) + λ ∇f(x) · Δx.
Direction Δx is improving for a max objective function f at point x if ∇f(x) · Δx > 0; on the other hand, if ∇f(x) · Δx < 0, Δx does not improve at x.
Direction Δx is improving for a min objective function f at point x if ∇f(x) · Δx < 0; on the other hand, if ∇f(x) · Δx > 0, Δx does not improve at x.

27 GRADIENTS AS MOVE DIRECTIONS
Since ∇f(x) · ∇f(x) = ||∇f(x)||² > 0 whenever ∇f(x) ≠ 0, we need only choose Δx = ±∇f(x): when nonzero, the objective function gradient ∇f(x) is an improving direction for a max objective f, and -∇f(x) is an improving direction for minimizing f.
Whether a direction is feasible at a solution x depends on whether it would lead to immediate violation of any ACTIVE constraint at x, i.e., any constraint satisfied as an equality at x.
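These dot-product tests are easy to verify with a finite-difference gradient. The objective f below is an illustrative max objective, not one from the text:

```python
# Numerical check: dx improves a max objective at x exactly when
# grad(f, x) . dx > 0, so the gradient itself always improves when nonzero.
def grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f = lambda x: -(x[0] ** 2) - 2.0 * x[1] ** 2   # illustrative max objective
x = [1.0, 1.0]
g = grad(f, x)                       # close to the exact gradient (-2, -4)
lam = 1e-4

print(dot(g, g) > 0)                 # True: the gradient improves
print(f([xi + lam * gi for xi, gi in zip(x, g)]) > f(x))   # True
dx = (1.0, 1.0)                      # dot(g, dx) = -6 < 0: not improving
print(f([xi + lam * di for xi, di in zip(x, dx)]) > f(x))  # False
```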

28 LINEAR CONSTRAINTS
We may denote general forms of LINEAR CONSTRAINTS by
a · x = a1 x1 + … + an xn (>=, <=, or =) b,
where n denotes the number of decision variables, aj is the constraint's coefficient for decision variable xj, a is the vector of coefficients, and b is the constraint's RHS.

29 FEASIBLE DIRECTIONS FOR LINEAR CONSTRAINTS
3.25 Direction Δx is feasible for a linearly constrained optimization model at solution x = (x1, …, xn) if and only if a · Δx >= 0 for all active greater-than-or-equal-to constraints a · x >= b, a · Δx <= 0 for all active less-than-or-equal-to constraints a · x <= b, and a · Δx = 0 for all equality constraints a · x = b.
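Condition 3.25 translates directly into code. A sketch, with a hypothetical two-constraint model:

```python
# Feasibility test 3.25 for linear constraints a.x (>=, <=, =) b:
# only constraints active at x (satisfied as equalities) restrict dx.
def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def feasible_direction(dx, x, constraints, tol=1e-9):
    """constraints: list of (a, sense, b) with sense one of '>=', '<=', '='."""
    for a, sense, b in constraints:
        if sense != "=" and abs(dot(a, x) - b) > tol:
            continue                        # inactive inequality: no restriction
        s = dot(a, dx)
        if sense == ">=" and s < -tol:
            return False
        if sense == "<=" and s > tol:
            return False
        if sense == "=" and abs(s) > tol:
            return False
    return True

# Hypothetical model: x1 >= 0 is active at x = (0, 2); x2 <= 3 is not.
cons = [((1, 0), ">=", 0), ((0, 1), "<=", 3)]
print(feasible_direction((1, 1), (0, 2), cons))    # True: moves into the interior
print(feasible_direction((-1, 0), (0, 2), cons))   # False: violates x1 >= 0
```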

30 3.4 UNIMODAL AND CONVEX MODEL FORMS
3.26 An objective function f(x) is UNIMODAL if the straight-line direction from every point in its domain to every better point is an improving direction. That is, for every x(1) and every x(2) with a better objective function value, direction Δx = (x(2) - x(1)) should be improving at x(1). (Figure: one curve f(x) unimodal for maximization, one not unimodal.)

31 (figure)

32 UNIMODALITY AND UNCONSTRAINED OPTIMA
3.27 Linear objective functions are unimodal in both max and min optimization models.
UNCONSTRAINED LOCAL OPTIMA are solutions for which no point in some surrounding neighborhood has a better objective function value. An UNCONSTRAINED GLOBAL OPTIMUM is a solution yielding a better objective value than any other in the domain of the objective function.
If the objective function of an optimization model is unimodal, every unconstrained local optimum is an unconstrained global optimum.
The feasible set of an optimization problem is CONVEX if the line segment between every pair of feasible points falls entirely within the feasible region.

33 (figure)

34 CONVEX FEASIBLE SETS
3.30 Discrete feasible sets are never convex.
The line segment between vector solutions x(1) and x(2) consists of all points of the form λ x(2) + (1-λ) x(1) with 0 <= λ <= 1.
If all constraints of an optimization model are linear, its feasible space is convex.
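The line-segment definition and the linear-constraint convexity property can be illustrated by checking segment points against a hypothetical linear constraint:

```python
# The line segment between x1 and x2: { lam*x2 + (1-lam)*x1 : 0 <= lam <= 1 }.
# For linear constraints, feasibility of the endpoints carries over to
# every point of the segment (convexity of the feasible set).
def point_on_segment(x1, x2, lam):
    return tuple(lam * b + (1.0 - lam) * a for a, b in zip(x1, x2))

def satisfies(x, a, b):  # one linear <= constraint: a . x <= b
    return sum(ai * xi for ai, xi in zip(a, x)) <= b + 1e-9

a, b = (1.0, 2.0), 10.0              # hypothetical constraint x1 + 2 x2 <= 10
x1, x2 = (0.0, 0.0), (2.0, 4.0)      # both endpoints feasible
ok = all(satisfies(point_on_segment(x1, x2, k / 10.0), a, b) for k in range(11))
print(ok)  # True: the whole segment stays feasible
```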

35 CONVEXITY AND GLOBAL OPTIMA
3.33 If the feasible set of an optimization model is convex, there is a feasible direction leading from any feasible solution to any other.
3.34 If the objective function of an optimization model is unimodal and the constraints produce a convex feasible set, every local optimum of the model is a global optimum.

36 3.5 STARTING FEASIBLE SOLUTIONS
A first task is determining whether any feasible solution exists at all, and finding one from which to start the search. We introduce the TWO-PHASE and BIG-M strategies.

37 ALGORITHM 3B: TWO-PHASE IMPROVING SEARCH
STEP 0: ARTIFICIAL MODEL. Choose any convenient solution for the true model, and construct a corresponding Phase I model by adding (or subtracting) nonnegative artificial variables in each violated constraint.
STEP 1: PHASE I. Assign values to artificial variables to complete a starting feasible solution for the artificial model. Then begin at that solution and perform an improving search to minimize the sum of the artificial variables.
STEP 2: INFEASIBILITY. If the Phase I search terminated with artificial sum = 0, proceed to Step 3; the original model is feasible. If Phase I terminated with a global minimum having artificial sum > 0, stop; the original model is infeasible. Otherwise, repeat Step 1 from a different starting solution.
STEP 3: PHASE II. Construct a starting feasible solution for the original model by deleting artificial components of the Phase I optimum. Then begin at that solution and perform an improving search to optimize the original objective function subject to the original constraints.

38 PHASE I MODELS
3.35 Phase I constraints are derived from those of the original model by considering each in relation to the starting solution chosen. Satisfied constraints simply become part of the Phase I model. Violated ones are augmented with a nonnegative ARTIFICIAL VARIABLE to permit artificial feasibility.
The PHASE I OBJECTIVE FUNCTION minimizes the sum of the artificial variables.
After fixing original variables at their arbitrarily chosen values, each artificial variable is initialized at the smallest value still needed to achieve feasibility in the corresponding constraint.
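The Phase I construction amounts to a simple computation. A sketch with hypothetical constraints:

```python
# Phase I sketch: starting from an arbitrary x, each violated constraint
# gets a nonnegative artificial variable initialized at the smallest value
# restoring feasibility; Phase I then minimizes their sum.
def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def phase_one_artificials(x, constraints):
    """constraints: list of (a, sense, b); returns one artificial value each."""
    arts = []
    for a, sense, b in constraints:
        v = dot(a, x)
        if sense == ">=":
            arts.append(max(0.0, b - v))    # artificial added to the LHS
        elif sense == "<=":
            arts.append(max(0.0, v - b))    # artificial subtracted from the LHS
        else:
            arts.append(abs(v - b))
    return arts

# Hypothetical start x = (0, 0): x1 + x2 >= 4 is violated, x1 <= 3 is not.
arts = phase_one_artificials((0.0, 0.0), [((1, 1), ">=", 4.0), ((1, 0), "<=", 3.0)])
print(arts, "Phase I objective =", sum(arts))  # [4.0, 0.0] Phase I objective = 4.0
```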

39 PHASE I OUTCOMES
3.38 If Phase I terminates with a solution having Phase I objective function value = 0, the components of the Phase I solution corresponding to original variables provide a feasible solution for the original model.
3.39 If Phase I terminates with a global minimum having Phase I objective function value > 0, the original model is infeasible.
3.40 If Phase I terminates with a local minimum that may not be global but has Phase I objective function value > 0, we can conclude nothing; the Phase I search should be repeated from a new starting solution.

40 BIG-M METHOD
3.41 The BIG-M METHOD uses a large positive multiplier M to combine feasibility and optimality in a single objective function of the form
max (original objective) - M (artificial variable sum) for an originally max problem, or
min (original objective) + M (artificial variable sum) for a min problem.
3.42 If a Big-M search terminates with a locally optimal solution having all artificial variables = 0, the components of the solution corresponding to original variables form a locally optimal solution for the original model.
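The composite objective of 3.41 can be sketched directly; the numeric values below are hypothetical:

```python
# Big-M sketch: one objective trading off the original objective against
# a large penalty M on the artificial-variable sum.
def big_m_objective(original_obj, artificials, M=1e6, sense="max"):
    penalty = M * sum(artificials)
    return original_obj - penalty if sense == "max" else original_obj + penalty

# A feasible point (artificials all 0) keeps its original objective value;
# any remaining infeasibility is swamped by the penalty.
print(big_m_objective(12.0, [0.0]))   # 12.0
print(big_m_objective(12.0, [0.5]))   # -499988.0
```

Because a single search handles both feasibility and optimality, the method hinges on M being large enough, which motivates principles 3.43 and 3.44 on the next slide.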

41 BIG-M OUTCOMES
3.43 If M is sufficiently large and a Big-M search terminates with a global optimum having some artificial variables > 0, the original model is infeasible.
3.44 If a Big-M search terminates with a local optimum (possibly not global) having some artificial variables > 0, or if the multiplier M may not be large enough, we can conclude nothing; the search should be repeated with a larger M and/or a new starting solution.

