Major Design Strategies
The Greedy Method
Philosophy: optimize (maximize or minimize) short-term gain and hope for the best, without regard for long-term consequences.
Greedy algorithms are (positives): simple, easy to code, efficient.
Greedy algorithms are (negatives): often lead to less-than-optimal results.
E.g.: find the shortest path between two vertices in a weighted graph; determine a minimum spanning tree of a weighted graph.
Examples
Minimum spanning tree: Kruskal's algorithm, Prim's algorithm
Shortest paths: Dijkstra's algorithm, Bellman-Ford algorithm
Connected components
Knapsack problem
Huffman codes
(A sketch of Kruskal's algorithm follows below.)
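As a concrete greedy sketch (my own illustration; the `find`/`union` helpers via a simple union-find are implementation choices, not from the slides), Kruskal's algorithm repeatedly accepts the cheapest edge that does not create a cycle:

```python
# Minimal sketch of Kruskal's algorithm: greedily take the cheapest edge
# that does not create a cycle, tracked with a union-find structure.
def kruskal(n, edges):
    """n: number of vertices; edges: list of (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):                           # root of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # greedy choice: cheapest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                       # endpoints in different components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Example: 4 vertices; the MST uses the edges of weight 1, 2, 3.
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))
```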
Parallel
Building up a solution sequentially through a sequence of greedy choices leads to a Θ(n) (linear) worst-case complexity. To do better, we must look for ways to build up multiple partial solutions in parallel.
Divide and Conquer
One of the most powerful strategies.
The problem input is divided, according to some criterion, into a set of smaller inputs to the same problem. The problem is then solved for each of the smaller inputs, either recursively (by further division into smaller inputs) or by invoking a prior solution. The solution for the original input is then obtained by expressing it as a combination of the solutions for these smaller inputs.
Examples
Selecting the k-th smallest element in a list
Finding the maximum and minimum elements in a list
Symbolic algebraic operations on polynomials
Multiplication of polynomials
Multiplication of large integers
Matrix multiplication
Discrete Fourier transform
Fast Fourier transform (on a PRAM, on a butterfly network)
Inverse Fourier transform
Inverting triangular matrices
Inverting general matrices
(A sketch of the max/min example follows below.)
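As a minimal divide-and-conquer sketch (my own example, not from the slides), here is the max/min problem: split the list in half, solve each half recursively, and combine with two comparisons:

```python
# Divide and conquer: find both the maximum and minimum of a list.
# Halving and combining with 2 comparisons gives about 3n/2 comparisons
# overall, versus 2n for the naive scan.
def max_min(a, lo, hi):
    """Return (maximum, minimum) of a[lo..hi] inclusive."""
    if lo == hi:                      # base case: one element
        return a[lo], a[lo]
    if hi == lo + 1:                  # base case: two elements
        return (a[hi], a[lo]) if a[lo] < a[hi] else (a[lo], a[hi])
    mid = (lo + hi) // 2
    max1, min1 = max_min(a, lo, mid)         # solve left half
    max2, min2 = max_min(a, mid + 1, hi)     # solve right half
    return max(max1, max2), min(min1, min2)  # combine

data = [7, 2, 9, 4, 1, 8]
print(max_min(data, 0, len(data) - 1))       # (9, 1)
```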
Dynamic Programming
Involves constructing a solution S to a given problem by building it up dynamically from solutions S1, S2, ..., Sn to smaller (or simpler) problems of the same type. The solution to any given smaller problem Si is itself built up from the solutions to even smaller (simpler) subproblems, and so on. We start with the known solutions to the smallest (simplest) problem instances and build from there in a bottom-up fashion. To be able to reconstruct S from S1, S2, ..., Sn, some additional information is usually required.
Combine: a function that combines S1, S2, ..., Sn, using the additional information, to obtain S:
S = Combine(S1, S2, ..., Sn)
Dynamic programming is similar to divide and conquer:
dynamic programming uses a bottom-up approach;
divide and conquer uses a top-down approach;
both use recursive division of the problem into smaller subproblems.
(A sketch contrasting the two follows below.)
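To make the contrast concrete, here is a minimal sketch (my own example, not from the slides) computing Fibonacci numbers top-down with memoization and bottom-up with a table:

```python
from functools import lru_cache

# Top-down (divide and conquer with memoization): recurse from the goal.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up (dynamic programming): start from the smallest instances
# and build toward the goal, storing each solution in a table.
def fib_bottom_up(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_top_down(20), fib_bottom_up(20))   # 6765 6765
```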
Never consider a given subproblem more than once
In general, dynamic programming avoids generating suboptimal subproblems when the Principle of Optimality holds => increased efficiency.
Examples:
Optimal chained matrix product, e.g. M1(M2 M3), M1(M2(M3 M4)), ...
Optimal binary search trees
All-pairs shortest paths:
  Dijkstra considers a single source (applied from all the sources);
  Bellman-Ford considers a single source but allows negative edge weights.
Traveling Salesman Problem
(A sketch of the chained matrix product follows below.)
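A minimal bottom-up sketch of the optimal chained matrix product (my own illustration; the `dims` encoding of matrix shapes is a standard convention, not from the slides):

```python
# Optimal chained matrix product via bottom-up dynamic programming.
# dims[i-1] x dims[i] is the shape of matrix M_i, so a chain of n
# matrices needs a dims list of length n + 1.
def matrix_chain_cost(dims):
    n = len(dims) - 1
    # cost[i][j] = min scalar multiplications for M_i ... M_j (1-based)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # chain length, smallest first
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)          # split: (M_i..M_k)(M_{k+1}..M_j)
            )
    return cost[1][n]

# M1: 10x30, M2: 30x5, M3: 5x60 -> best is (M1 M2) M3 = 4500 multiplications
print(matrix_chain_cost([10, 30, 5, 60]))
```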
Optimization Problems and Principle of Optimality
Dynamic programming is most effective when the Principle of Optimality holds.
Given:
the optimization problem;
the set of ALL FEASIBLE SOLUTIONS that satisfy the constraints of the problem;
an optimal solution S is a solution that optimizes (minimizes or maximizes) the objective function.
We need to optimize over all subproblems S1, S2, ..., Sn such that S = Combine(S1, S2, ..., Sn). This might be intractable (there might be exponentially many subproblems) => we can reduce the number of subproblems to consider if the Principle of Optimality holds.
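In symbols (a paraphrase of the slide's definitions; the notation F for the feasible set and f for the objective is my own choice):

```latex
% Optimal solution: the feasible solution that optimizes the objective.
\[
  S^{*} \;=\; \operatorname*{arg\,min}_{S \in F} f(S)
  \qquad \text{(or } \operatorname*{arg\,max} \text{ for maximization)}
\]
% Principle of Optimality: optimal solutions decompose into optimal parts.
\[
  S^{*} = \mathrm{Combine}(S_{1}, \dots, S_{n})
  \;\Longrightarrow\;
  \text{each } S_{i} \text{ is optimal for its associated subproblem.}
\]
```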
The Principle of Optimality
Given an optimization problem and an associated function Combine, the Principle of Optimality holds if the following is always true: if S = Combine(S1, S2, ..., Sn) and S is an optimal solution to the problem, then S1, S2, ..., Sn are optimal solutions to their associated subproblems.
E.g., find the shortest path in a graph (or digraph) G from vertex a to vertex b:
assume P is a path from a to b;
assume v is a vertex of G on the path P.
P1 is a path from a to v; P2 is a path from v to b.
P = Combine(P1, P2) is the union of the two paths P1 and P2.
If P is a shortest path from a to b, then P1 is a shortest path from a to v and P2 is a shortest path from v to b.
[Figure: a path from a to b through the intermediate vertex v]
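This optimal-substructure property is exactly what the classic all-pairs dynamic program exploits. As a minimal sketch (my own illustration; the slides discuss all-pairs shortest paths but do not name this particular algorithm), here is Floyd-Warshall:

```python
from math import inf

# Floyd-Warshall: all-pairs shortest paths by dynamic programming.
# Relies on the Principle of Optimality: a shortest i->j path through v
# is a shortest i->v path followed by a shortest v->j path.
def floyd_warshall(dist):
    """dist: n x n matrix; dist[i][j] = edge weight or inf, dist[i][i] = 0."""
    n = len(dist)
    d = [row[:] for row in dist]          # don't mutate the input
    for v in range(n):                    # allow v as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][v] + d[v][j] < d[i][j]:
                    d[i][j] = d[i][v] + d[v][j]
    return d

g = [[0, 3, inf],
     [inf, 0, 2],
     [1, inf, 0]]
print(floyd_warshall(g))                  # shortest 0->2 becomes 5, via vertex 1
```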
The Specialized Principle of Optimality holds if, given any sequence of decisions D1, ..., Dn yielding an optimal solution S to the given problem, the subsequence of decisions D2, ..., Dn yields an optimal solution S' to the single derived problem resulting from having made the first decision D1. This is the special case of the Principle with a single subproblem, S = Combine(S'). The additional information required to reconstruct S from S' is the information associated with decision D1.
E.g., the problem of finding the shortest path in G from a to b can be viewed as making a sequence of decisions D1, D2, ..., Dp = b for the vertices on the path, where D1 is the choice of a vertex v adjacent to a. The path S' determined by the remaining decisions D2, ..., Dp must itself be a shortest path (from v to b), so the Specialized Principle of Optimality holds.
Dynamic Programming in Parallel
The recurrence relation is evaluated level by level; concurrency can be exploited within each level.
All-pairs shortest paths: there is a linear number of levels => level-by-level parallelization takes Ω(n) time.
Can the problem be solved on a PRAM in polylogarithmic time, using a polynomial number of processors?
With n² processors on an EREW PRAM => T(n) = O(n) and cost = p·T(n) = n³.
By using the Principle of Optimality => goal: polylogarithmic T(n) using a polynomial number of processors.
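One standard route toward polylogarithmic depth (my own sketch; the slides pose the question without naming a method) is to recast all-pairs shortest paths as repeated min-plus matrix squaring: only ⌈log₂ n⌉ levels are needed, and every entry within a level is independent, which is what a PRAM would compute in parallel:

```python
from math import inf

# Min-plus "squaring" for all-pairs shortest paths: after k squarings,
# d[i][j] is the shortest path using at most 2^k edges, so only
# ceil(log2 n) levels are needed. Each entry within a level is
# independent of the others, hence parallelizable.
def min_plus_square(d):
    n = len(d)
    return [[min(d[i][k] + d[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp(dist):
    n, d, hops = len(dist), [row[:] for row in dist], 1
    while hops < n - 1:                   # O(log n) squaring levels
        d = min_plus_square(d)
        hops *= 2
    return d

g = [[0, 3, inf],
     [inf, 0, 2],
     [1, inf, 0]]
print(apsp(g))                            # entry [0][2] becomes 5, via vertex 1
```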
BACKTRACKING AND BRANCH AND BOUND
Design strategies applicable to problems whose solutions can be expressed as sequences of decisions. The sequence of decisions can be modeled differently for a given problem => different state space trees.
The state space tree is:
implicit in Backtracking;
explicitly implemented in Branch & Bound.
Both Backtracking and Branch & Bound utilize objective functions to limit the number of nodes in the state space tree that need to be examined.
Backtracking: depth-first search of the state space tree. The node currently being expanded is the E-node; when it is accessed, its first not-yet-visited child immediately becomes the new E-node.
Branch & Bound: searches of the state space tree that generate all the children of the E-node when a node is accessed.
Backtracking: a node can be the E-node many times.
Branch & Bound: a node can be the E-node only one time.
There are variations with respect to which node is expanded next. (A backtracking sketch follows below.)
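As a minimal backtracking sketch (my own example, not from the slides), here is the classic n-queens search: each level of the state space tree decides the column of one queen, and a feasibility test prunes subtrees:

```python
# Backtracking sketch: count the placements of n non-attacking queens.
# Level r of the state space tree decides the column of the queen in row r.
def n_queens(n):
    count = 0
    cols = []                              # cols[r] = column of queen in row r

    def safe(c):                           # new queen goes in row len(cols)
        return all(c != pc and abs(c - pc) != len(cols) - r
                   for r, pc in enumerate(cols))

    def expand():                          # current E-node = partial placement
        nonlocal count
        if len(cols) == n:                 # all rows decided: one solution
            count += 1
            return
        for c in range(n):                 # children of the E-node
            if safe(c):                    # feasibility test prunes the subtree
                cols.append(c)
                expand()                   # child becomes the new E-node
                cols.pop()                 # backtrack

    expand()
    return count

print(n_queens(8))                         # 92
```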
The set of live nodes can be maintained as a FIFO queue, a LIFO stack, or a priority queue (least-cost Branch & Bound).
Backtracking and Branch & Bound are inefficient in the worst case. The choice of the objective function (or bounding function) is essential in making these two strategies more efficient; utilizing heuristics can lower the cost of searching the state space tree.
The least-cost Branch & Bound strategy utilizes a heuristic cost function associated with the nodes of the state space tree, where the set of live nodes is maintained as a priority queue with respect to this cost function. => The next node to become the E-node is the one that is most promising to lead quickly to a goal. (A skeleton follows below.)
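A minimal least-cost Branch & Bound skeleton (my own sketch; `cost`, `children`, and `is_goal` are illustrative problem-specific callables, not from the slides):

```python
import heapq

# Skeleton of least-cost Branch & Bound: live nodes kept in a priority
# queue ordered by a heuristic cost; the cheapest node becomes the E-node.
def least_cost_bb(root, cost, children, is_goal):
    live = [(cost(root), root)]            # priority queue of live nodes
    while live:
        c, node = heapq.heappop(live)      # most promising node = new E-node
        if is_goal(node):
            return node
        for child in children(node):       # generate ALL children at once
            heapq.heappush(live, (cost(child), child))
    return None                            # search space exhausted

# Toy usage: find a sequence of steps 1/2/3 whose sum reaches a target.
target = 7
result = least_cost_bb(
    (),                                              # root: empty decision sequence
    cost=lambda s: abs(target - sum(s)),             # heuristic: distance to target
    children=lambda s: [s + (d,) for d in (1, 2, 3)] if sum(s) < target else [],
    is_goal=lambda s: sum(s) == target,
)
print(result)                              # e.g. (3, 3, 1)
```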
Least-cost Branch & Bound is related to the general heuristic search strategy called A*-search.
A*-search can be applied to state space digraphs, not only to state space trees. A* is used in Artificial Intelligence, as are strategies for playing two-person games, e.g., the ALPHA-BETA heuristic:
look ahead a fixed number of moves;
assign a heuristic value to the positions reached there;
an estimate for the best move is obtained by working back to the current position using the minimax strategy.
(A sketch follows below.)
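A minimal sketch of minimax with alpha-beta pruning (my own example; the nested-list game tree and the `children`/`heuristic` callables are illustrative placeholders, not from the slides):

```python
# Minimax with alpha-beta pruning over a fixed-depth lookahead.
def alphabeta(pos, depth, alpha, beta, maximizing, children, heuristic):
    kids = children(pos)
    if depth == 0 or not kids:             # lookahead limit or leaf position
        return heuristic(pos)              # heuristic value assigned at the frontier
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, children, heuristic))
            alpha = max(alpha, best)
            if beta <= alpha:              # prune: opponent won't allow this line
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                   True, children, heuristic))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Toy game tree encoded as nested lists; leaves are heuristic values.
tree = [[3, 5], [2, 9], [0, 7]]
value = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                  children=lambda p: p if isinstance(p, list) else [],
                  heuristic=lambda p: p)
print(value)                               # 3 = max(min(3,5), min(2,9), min(0,7))
```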