1 15.053, Thursday, May 16: Review of 15.053. Handouts: Lecture Notes.


2 Overview of Problem Types: Nonlinear Programming (split into “Easy” Nonlinear Programming and “Hard” Nonlinear Programming), Linear Programming, Integer Programming, Network Flows, Dynamic Programming.


4 Why the focus on linear programming? Linear programming illustrates much of what is important about modeling. Linear programming is a very useful tool in optimization! We can solve linear programs very efficiently. State-of-the-art integer programming techniques rely on linear programming. Linear programming is the best way of teaching about performance guarantees and duality. Linear programming is very helpful for understanding other optimization approaches.

5 Topics through midterm 2
Linear programming
–Formulations
–Geometry
–The simplex algorithm
–Sensitivity Analysis
–Duality Theory
Network Optimization
Integer programming
–Formulations
–Branch and Bound (B&B)
–Cutting planes

6 Topics covered in the Final Exam Linear Programming Formulations Integer Programming Formulations Nonlinear Programming Dynamic Programming Heuristics

7 Rest of this lecture A very brief overview of the topics covered since the 2nd midterm. Slides are drawn from earlier lectures. If you have questions about the topics covered, ask them as I go along. I need to reserve time at the end for Sloan course evaluations.

8 What is a non-linear program? maximize f(x) subject to g_i(x) ≤ b_i for i = 1, …, m. A non-linear program is permitted to have non-linear constraints or objectives. A linear program is a special case of non-linear programming!

9 Portfolio Selection Example When trying to design a financial portfolio investors seek to simultaneously minimize risk and maximize return. Risk is often measured as the variance of the total return, a nonlinear function. FACT:

10 Portfolio Selection (cont’d) Two methods are commonly used: –Min Risk s.t. Expected Return ≥ Bound –Max Expected Return - θ(Risk), where θ reflects the tradeoff between return and risk.
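
The second method above can be sketched in a few lines of Python. All of the numbers below (two assets, their expected returns, variances, and covariance) are invented for illustration; the point is only that higher θ pushes the optimizer toward the lower-variance asset.

```python
# Hypothetical two-asset example (all numbers invented for illustration).
MU = (0.08, 0.12)          # expected returns of assets A and B
VAR = (0.01, 0.04)         # return variances
COV_AB = 0.002             # covariance between the two returns

def expected_return(wa):
    wb = 1 - wa
    return wa * MU[0] + wb * MU[1]

def risk(wa):
    """Risk measured as the variance of total return (nonlinear in wa)."""
    wb = 1 - wa
    return wa**2 * VAR[0] + wb**2 * VAR[1] + 2 * wa * wb * COV_AB

def best_weight(theta, steps=1000):
    """Method 2 from the slide: maximize Expected Return - theta * Risk
    by a simple grid search over the weight wa placed on asset A."""
    return max((i / steps for i in range(steps + 1)),
               key=lambda wa: expected_return(wa) - theta * risk(wa))

wa_tolerant = best_weight(theta=1.0)    # risk-tolerant investor
wa_averse = best_weight(theta=20.0)     # risk-averse investor
```

A risk-averse investor (large θ) ends up holding more of the low-variance asset A than a risk-tolerant one.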

11 Regression, and estimating β Return on Stock A vs. Market Return. The value β is the slope of the regression line. Here it is around 0.6 (lower expected gain than the market, and lower risk).

12 Local vs. Global Optima Def’n: Let x be a feasible solution. Then
–x is a global max if f(x) ≥ f(y) for every feasible y.
–x is a local max if f(x) ≥ f(y) for every feasible y sufficiently close to x (i.e., xj - ε ≤ yj ≤ xj + ε for all j and some small ε).
There may be several locally optimal solutions.

13 Convex Functions Convex Functions: f(λy + (1-λ)z) ≤ λf(y) + (1-λ)f(z) for every y and z and for 0 ≤ λ ≤ 1. e.g., f((y+z)/2) ≤ f(y)/2 + f(z)/2. We say “strict” convexity if the sign is “<” for 0 < λ < 1. The line joining any two points lies above the curve.
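
The defining inequality can be spot-checked numerically. This is only a sanity check over sample points (not a proof of convexity); the functions and sample grid are chosen for illustration.

```python
def is_convex_on(f, xs, samples=11):
    """Numerically spot-check f(l*y + (1-l)*z) <= l*f(y) + (1-l)*f(z)
    for every pair of points in xs and several lambdas in [0, 1]."""
    lambdas = [k / (samples - 1) for k in range(samples)]
    for y in xs:
        for z in xs:
            for lam in lambdas:
                lhs = f(lam * y + (1 - lam) * z)
                rhs = lam * f(y) + (1 - lam) * f(z)
                if lhs > rhs + 1e-9:        # small tolerance for float noise
                    return False
    return True

pts = [x / 2 for x in range(-10, 11)]                 # grid on [-5, 5]
square_is_convex = is_convex_on(lambda x: x * x, pts)  # x^2 is convex
cube_is_convex = is_convex_on(lambda x: x ** 3, pts)   # x^3 is not on [-5, 5]
```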

14 Concave Functions Concave Functions: f(λy + (1-λ)z) ≥ λf(y) + (1-λ)f(z) for every y and z and for 0 ≤ λ ≤ 1. e.g., f((y+z)/2) ≥ f(y)/2 + f(z)/2. We say “strict” concavity if the sign is “>” for 0 < λ < 1.

15 Convexity and Extreme Points We say that a set S is convex if, for every two points x and y in S and for every real number λ in [0,1], λx + (1-λ)y ∈ S. The feasible region of a linear program is convex. We say that an element w ∈ S is an extreme point (vertex, corner point) if w is not the midpoint of any line segment contained in S.

16 Local Maximum (Minimum) Property A local max of a concave function on a convex feasible region is also a global max. A local min of a convex function on a convex feasible region is also a global min. Strict convexity or concavity implies that the global optimum is unique. Given this, we can efficiently solve: –Maximization problems with a concave objective function and linear constraints –Minimization problems with a convex objective function and linear constraints

17 Where is the optimal solution? Note: the optimal solution is not at a corner point. It is where the isocontour first hits the feasible region.

18 Another example: Minimize (x-8)^2 + (y-8)^2. Then the global unconstrained minimum is also feasible. The optimal solution is not on the boundary of the feasible region.

19 Finding a local maximum using Fibonacci Search. [Figure: the interval where the maximum may lie, and the length of the search interval, shrink at each iteration.]

20 The search finds a local maximum, but not necessarily a global maximum.
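
Golden-section search, the limiting case of Fibonacci search (the interval-shrinking ratio 0.618… is the limit of consecutive Fibonacci ratios), is a compact way to sketch the idea. The test function below is invented; on a unimodal function the search finds the maximum, while on a multimodal one it only guarantees a local maximum.

```python
GR = (5 ** 0.5 - 1) / 2  # ~0.618, the limit of fib(n)/fib(n+1)

def golden_section_max(f, a, b, tol=1e-7):
    """Shrink the interval that may contain the maximum, reusing one of
    the two interior evaluation points at every step."""
    x1 = b - GR * (b - a)
    x2 = a + GR * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                 # the max cannot lie in [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + GR * (b - a)
            f2 = f(x2)
        else:                       # the max cannot lie in [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = b - GR * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Unimodal example (invented): maximum at x = 3.
x_star = golden_section_max(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```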

21 Approximating a non-linear function of 1 variable: the λ-method. Choose different values of x along the x-axis. Approximate f using piecewise linear segments.

22 More on the λ-method. Suppose that for -3 ≤ x ≤ -1 we have f(-3) = -20 and f(-1) = -7 1/3. Then we approximate f(x) as λ1(-20) + λ2(-7 1/3), and we represent x as λ1(-3) + λ2(-1), where λ1 + λ2 = 1 and λ1, λ2 ≥ 0.
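
The λ-method's piecewise-linear approximation can be sketched directly. The helper below is a generic interpolation routine written for this note (not from the slides); the segment data comes from the example above.

```python
def lam_method(breakpoints, fvals, x):
    """Piecewise-linear approximation of f at x via the lambda-method:
    find the segment containing x, write x = l1*b1 + l2*b2 with
    l1 + l2 = 1 and l1, l2 >= 0, then return l1*f(b1) + l2*f(b2)."""
    for (b1, f1), (b2, f2) in zip(zip(breakpoints, fvals),
                                  zip(breakpoints[1:], fvals[1:])):
        if b1 <= x <= b2:
            l2 = (x - b1) / (b2 - b1)
            l1 = 1 - l2
            return l1 * f1 + l2 * f2
    raise ValueError("x outside the breakpoint range")

# The slide's segment: f(-3) = -20 and f(-1) = -7 1/3.
approx = lam_method([-3, -1], [-20, -22 / 3], -2)   # midpoint: l1 = l2 = 1/2
```

At x = -2 the approximation is (-20 - 7 1/3)/2 = -13 2/3.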

23 Approximating a non-linear objective function for a minimization NLP. Original problem: minimize f(y) subject to the constraints. Approximate f(y) by a piecewise linear function, and minimize the approximation. –Note: when given a choice of representing y in alternative ways, the LP will choose one that leads to the least objective value for the approximation.

24 For minimizing a convex function, the λ-method automatically satisfies the additional adjacency property. [Formulation: the λ-method constraints, plus the adjacency condition, plus the other constraints.]

25 Dynamic programming Suppose that there are 50 matches on a table, and the person who picks up the last match wins. At each alternating turn, my opponent or I can pick up 1, 2 or 6 matches. Assuming that I go first, how can I be sure of winning the game?

26 Determining the strategy using DP
n = number of matches left (n is the state/stage)
g(n) = 1 if you can force a win at n matches; g(n) = 0 otherwise. g(n) is the optimal value function.
At each state/stage you can make one of three decisions: take 1, 2 or 6 matches.
g(1) = g(2) = g(6) = 1 (boundary conditions)
g(3) = 0; g(4) = g(5) = 1. (why?)
The recursion: g(n) = 1 if g(n-1) = 0 or g(n-2) = 0 or g(n-6) = 0; g(n) = 0 otherwise.
Equivalently, g(n) = 1 - min (g(n-1), g(n-2), g(n-6)).
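
The recursion above translates almost line for line into code. The boundary condition g(0) = 0 reflects that with no matches left, the previous player took the last match and won.

```python
def match_game(n, moves=(1, 2, 6)):
    """g[k] = 1 if the player to move can force a win with k matches left.
    Boundary: g[0] = 0 (the previous player took the last match and won)."""
    g = [0] * (n + 1)
    for k in range(1, n + 1):
        # The slide's recursion: g(k) = 1 - min over legal moves of g(k - m).
        g[k] = 1 - min(g[k - m] for m in moves if m <= k)
    return g

g = match_game(50)
# A first move m wins exactly when it leaves the opponent at a losing state.
winning_first_moves = [m for m in (1, 2, 6) if g[50 - m] == 0]
```

Running this shows g(50) = 1: going first, you can force a win, and the winning opening move is to take 1 match (leaving 49).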

27 Dynamic Programming in General Break up a complex decision problem into a sequence of smaller decision subproblems. Stages: one solves decision problems one “stage” at a time. Stages can often be thought of as “time”. –Not every DP has stages –The previous shortest path problem has 6 stages –The match problem does not have stages.

28 Dynamic Programming in General States: The smaller decision subproblems are often expressed in a very compact manner. The description of the smaller subproblems is often referred to as the state. –match problem: “state” is the number of matches left At each state-stage, there are one or more decisions. The DP recursion determines the best decision. –match problem: how many matches to remove –shortest path example: go right and up or else go down and right

29 Optimal Capacity Expansion: What is the least cost way of building plants? Cost of $15 million in any year in which a plant is built. At most 3 plants a year can be built. [Table: cumulative demand by year, and cost per plant in $ millions.]

30 Finding a topological order Find a node with no incoming arc. Label it node 1. For i = 2 to n, find a node with no incoming arc from an unlabeled node. Label it node i.
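
The labeling procedure above is a topological sort. Here is a sketch using in-degree counting; the 3-node example graph at the bottom is invented for illustration.

```python
def topological_order(n, arcs):
    """Label the nodes 1..n so that every arc goes from an earlier-labeled
    node to a later-labeled one: repeatedly pick (and label) a node with
    no incoming arc from a still-unlabeled node."""
    indeg = {v: 0 for v in range(1, n + 1)}
    out = {v: [] for v in range(1, n + 1)}
    for i, j in arcs:
        indeg[j] += 1
        out[i].append(j)
    order = []
    ready = [v for v in range(1, n + 1) if indeg[v] == 0]
    while ready:
        v = ready.pop()
        order.append(v)
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:       # all predecessors of w are now labeled
                ready.append(w)
    return order  # shorter than n if the graph has a cycle

# Hypothetical 3-node example: the arcs force the order 2, 1, 3.
order = topological_order(3, [(2, 1), (2, 3), (1, 3)])
```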

31 Find d(j) using a recursion. d(j) is the shortest length of a path from node 1 to node j. Let cij = length of arc (i,j). What is d(j) computed in terms of d(1), …, d(j-1)? Compute d(2), …, d(8). Example: d(4) = min { 3 + d(2), 2 + d(3) }
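
With the nodes already numbered in topological order, the recursion d(j) = min over incoming arcs (i, j) of d(i) + cij can be computed in one pass. The tiny DAG below is invented, except that arcs (2,4) of length 3 and (3,4) of length 2 echo the slide's example d(4) = min{3 + d(2), 2 + d(3)}.

```python
def shortest_paths_dag(n, arcs):
    """d[j] = length of a shortest path from node 1 to node j, computed by
    d[j] = min over incoming arcs (i, j) of d[i] + c_ij.
    Nodes are assumed to be numbered 1..n in topological order already."""
    INF = float("inf")
    d = [INF] * (n + 1)
    d[1] = 0
    incoming = {j: [] for j in range(1, n + 1)}
    for i, j, c in arcs:
        incoming[j].append((i, c))
    for j in range(2, n + 1):
        if incoming[j]:
            d[j] = min(d[i] + c for i, c in incoming[j])
    return d

# Hypothetical DAG; lengths on arcs into node 4 match the slide's example.
arcs = [(1, 2, 1), (1, 3, 4), (2, 4, 3), (3, 4, 2)]
d = shortest_paths_dag(4, arcs)
```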

32 Finding optimal paragraph layouts Tex optimally decomposes paragraphs by selecting the breakpoints for each line optimally. It has a subroutine that computes the ugliness F(i,j) of a line that begins at word i and ends at word j-1. How can we use F(i,j) as part of a dynamic program whose solution will solve the paragraph problem?
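
One answer, sketched below: let best(j) be the least total ugliness of laying out the first j words; then best(j) = min over i < j of best(i) + F(i, j). The F used here is a stand-in (squared leftover space, no penalty on the last line); real TeX uses a more elaborate badness/demerits formula.

```python
def layout(words, width):
    """DP over breakpoints: best[j] = least total ugliness of the first
    j words, best[j] = min over i < j of best[i] + F(i, j)."""
    n = len(words)

    def F(i, j):
        """Stand-in ugliness of a line holding words i .. j-1."""
        length = sum(len(w) for w in words[i:j]) + (j - i - 1)  # words + spaces
        if length > width:
            return float("inf")          # the line does not fit
        if j == n:
            return 0                     # last line: leftover space is free
        return (width - length) ** 2

    best = [0.0] + [float("inf")] * n
    breakpt = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            if best[i] + F(i, j) < best[j]:
                best[j], breakpt[j] = best[i] + F(i, j), i
    lines, j = [], n                     # recover the lines from breakpoints
    while j > 0:
        i = breakpt[j]
        lines.append(" ".join(words[i:j]))
        j = i
    return lines[::-1], best[n]

lines, ugliness = layout("Tex optimally decomposes paragraphs".split(), 16)
```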

33 Capital Budgeting, again Investment budget = $14,000. [Table: cash required and NPV added for each investment, in $1000s.]

34 Capital Budgeting: stage 3 Consider stock 3: cost $4, NPV: $12

35 The recursion f(0,0) = 0; f(0,k) is undefined for k > 0. f(k, v) = max ( f(k-1, v), f(k-1, v-ak) + ck ): either item k is included, or it is not. The optimum solution to the original problem is max { f(n, v) : 0 ≤ v ≤ b }. Note: we solve the capital budgeting problem for all right hand sides less than b.
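
The recursion is a knapsack DP. In the data below, only "stock 3 costs $4 and adds $12 of NPV" and the $14 budget come from the slides; the other investments are invented to make the example runnable.

```python
def capital_budgeting(costs, npvs, budget):
    """f[k][v] = best NPV using the first k investments and exactly budget v.
    f[0][0] = 0 and f[0][v] is undefined (-inf here) for v > 0."""
    NEG = float("-inf")
    n = len(costs)
    f = [[NEG] * (budget + 1) for _ in range(n + 1)]
    f[0][0] = 0
    for k in range(1, n + 1):
        for v in range(budget + 1):
            f[k][v] = f[k - 1][v]                       # item k not included
            if v >= costs[k - 1]:                       # item k included
                f[k][v] = max(f[k][v],
                              f[k - 1][v - costs[k - 1]] + npvs[k - 1])
    # As on the slide: the answer is the best f(n, v) over all v <= budget.
    return max(f[n][v] for v in range(budget + 1))

# Data in $1000s; only item 3 (cost 4, NPV 12) is taken from the slides.
costs = [5, 7, 4, 3]
npvs = [16, 22, 12, 8]
best_npv = capital_budgeting(costs, npvs, budget=14)
```

With these invented numbers the best choice is investments 2, 3 and 4 (cost 14, NPV 42).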

36 Heuristics: a way of dealing with hard combinatorial problems Construction heuristics: construct a solution. Example: Nearest neighbor heuristic
begin
choose an initial city for the tour;
while there are any unvisited cities, make the next city on the tour the nearest unvisited city;
end
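
The pseudocode above can be made runnable. The four city coordinates are invented; they are chosen so that no two candidate cities are ever at a tied distance.

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Construction heuristic: repeatedly visit the nearest unvisited city,
    starting from an initial city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(last, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four invented city coordinates.
cities = [(0, 0), (0, 1), (1, 1), (2, 0)]
tour = nearest_neighbor_tour(cities)
```

Note the heuristic constructs a feasible tour quickly but makes no optimality guarantee.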

37 Improvement Methods These techniques start with a solution, and seek out simple methods for improving the solution. Example: Let T be a tour. Seek an improved tour T’ so that |T - T’| = 2 (the tours differ in exactly two edges).

38 Illustration of 2-opt heuristic

39 Take two edges out. Add 2 edges in.


41 Local Optimality A solution y is said to be locally optimum (with respect to a given neighborhood) if there is no neighbor of y whose objective value is better than that of y. Example: 2-opt finds a locally optimum solution.
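
A sketch of 2-opt, the "take two edges out, add two edges in" improvement method from the earlier slides. The square of cities and the deliberately crossing starting tour are invented; 2-opt uncrosses it and stops at a tour that is locally optimal for the 2-interchange neighborhood.

```python
import math

def tour_length(tour, cities):
    """Cyclic tour length, including the edge from the last city back home."""
    return sum(math.dist(cities[tour[i - 1]], cities[tour[i]])
               for i in range(len(tour)))

def two_opt(tour, cities):
    """Remove two edges and reconnect (reversing the segment in between)
    whenever that shortens the tour; stop at a 2-opt local optimum."""
    tour = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(len(tour) - 1):
            for j in range(i + 2, len(tour)):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(new, cities) < tour_length(tour, cities) - 1e-12:
                    tour, improved = new, True
    return tour

# Unit square; the starting tour [0, 2, 1, 3] crosses itself.
cities = [(0, 0), (1, 0), (1, 1), (0, 1)]
better = two_opt([0, 2, 1, 3], cities)
```

Here the local optimum happens to be globally optimal (the square's perimeter, length 4); in general 2-opt guarantees only local optimality.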

42 Improvement methods typically find locally optimum solutions. A solution y is said to be globally optimum if no other solution has a better objective value. Remark. Local optimality depends on what a neighborhood is, i.e., what modifications in the solution are permissible. –e.g. 2-interchanges –e.g., 3-interchanges

43 What is a neighborhood for the fire station problem?

44 Insertion heuristic with randomization Choose three cities randomly and obtain a tour T on the cities. For k = 4 to n, choose a city that is not on T and insert it optimally into T. –Note: we can run this 1,000 times, and get many different answers. This increases the likelihood of getting a good solution. –Remark: simulated annealing will not be on the final exam.
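
A sketch of the randomized insertion heuristic with restarts. The instance (eight points on a circle), the seed, and the use of 200 restarts instead of 1,000 are choices made for this illustration; "insert optimally" is read as inserting at the position that increases the tour length least.

```python
import math, random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i - 1]], cities[tour[i]])
               for i in range(len(tour)))

def insertion_tour(cities, rng):
    """Start from three random cities, then insert each remaining city
    (in random order) where it increases the tour length least."""
    n = len(cities)
    tour = rng.sample(range(n), 3)
    rest = [v for v in range(n) if v not in tour]
    for c in rng.sample(rest, len(rest)):
        best_pos = min(range(len(tour)), key=lambda i:
                       math.dist(cities[tour[i - 1]], cities[c])
                       + math.dist(cities[c], cities[tour[i]])
                       - math.dist(cities[tour[i - 1]], cities[tour[i]]))
        tour.insert(best_pos, c)
    return tour

rng = random.Random(53)
# Eight points on a circle: the optimal tour visits them in ring order.
cities = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
          for k in range(8)]
best = min((insertion_tour(cities, rng) for _ in range(200)),
           key=lambda t: tour_length(t, cities))
```

Each restart can produce a different tour; keeping the best over many restarts is what raises the chance of a good solution.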

45 GA terms chromosome (solution), gene (variable), 1 or 0 (values) alleles, selection, crossover, mutation, population. Objective: maximize the fitness function (objective function).

46 A Simple Example: Maximize the number of 1’s. [Table: initial population of bit strings with their fitness values; average fitness 3.] Usually populations are much bigger, say around 50 to 100, or more.

47 Crossover Operation: takes two solutions and creates a child (or more) whose genes are a mixture of the genes of the parents. Select two parents from the population. This is the selection step. There will be more on this later.

48 Crossover Operation: takes two solutions and creates a child (or more) whose genes are a mixture of the genes of the parents. 1-point crossover: Divide each parent into two parts at the same location k (chosen randomly). Child 1 consists of genes 1 to k-1 from parent 1 and genes k to n from parent 2. Child 2 is the “reverse”.

49 Selection Operator Think of crossover as mating Selection biases mating so that fitter parents are more likely to mate. For example, let the probability of selecting member j be fitness(j)/total fitness Prob(1) = 4/12 = 1/3 Prob(3) = 2/12 = 1/6
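
The pieces above (fitness-proportional selection, 1-point crossover, mutation) fit together into a small GA for the maximize-the-1's example. The population size, generation count, mutation rate, seed, and the use of elitism (carrying the single best member forward unchanged) are choices made for this sketch, not taken from the slides.

```python
import random

def fitness(chrom):
    """Fitness = number of 1s in the chromosome (the slides' toy objective)."""
    return sum(chrom)

def select(pop, rng):
    """Roulette-wheel selection: pick a parent with probability
    fitness(j) / total fitness."""
    total = sum(fitness(c) for c in pop)
    if total == 0:
        return rng.choice(pop)
    return rng.choices(pop, weights=[fitness(c) for c in pop], k=1)[0]

def crossover(p1, p2, rng):
    """1-point crossover at a random location k."""
    k = rng.randrange(1, len(p1))
    return p1[:k] + p2[k:]

def ga_max_ones(n_genes=20, pop_size=30, generations=40, mut_rate=0.01, seed=53):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        children = [crossover(select(pop, rng), select(pop, rng), rng)
                    for _ in range(pop_size)]
        for child in children:                  # mutation: flip ~1% of alleles
            for g in range(n_genes):
                if rng.random() < mut_rate:
                    child[g] = 1 - child[g]
        # Elitism: keep the single best member, replace the rest by children.
        pop = [max(pop, key=fitness)] + children[:pop_size - 1]
    return max(fitness(c) for c in pop)

best = ga_max_ones()
```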

50 Example with Selection and Crossover Only [Population snapshots: original, after 5 generations, after 10 generations.]

51 Mutation Previous difficulty: important genetic variability was lost from the population Idea: introduce genetic variability into the population through mutation simple mutation operation: randomly flip q% of the alleles (bits) in the population.

52 Previous Example with a 1% mutation rate [Population snapshots: original, after 5 generations, after 10 generations.]

53 Generation based GAs Then replace the original population by the children

54 Generation based GAs This creates the next generation. Then iterate

55 For genetic algorithms, the final exam will cover basic terminology. We will not cover steady state or random keys. We will cover terms mentioned on the previous slides.

56 Any questions before we solicit feedback on 15.053?