Fundamental Techniques CS 5050 Chapter 5 Goal: review complexity analysis. Discuss the categories used to describe algorithms.

Algorithmic Frameworks
Greedy
Divide-and-conquer
Discard-and-conquer
Top-down Refinement
Dynamic Programming
Backtracking/Brute Force
Branch-and-bound
Local Search (Heuristic)

Greedy Algorithms Short-view, tactical approach to minimizing (or maximizing) the value of an objective function. Like eating whatever you want, with no regard for health consequences or for leaving some for others. Solutions are often not globally optimal, but they are when the problem satisfies the “greedy-choice” property. Example – making change with US money to minimize the number of coins; not true of other coin systems. Example – the knapsack problem: given items with weights and values, find the best total value within a total weight limit. This cannot be done greedily, as it doesn’t have the greedy-choice property.
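A minimal sketch of the change-making greedy, assuming US denominations (function and array names are illustrative). Greedy works here because US coins satisfy the greedy-choice property; for arbitrary denominations it can return a non-minimal count.

#include <vector>

std::vector<int> makeChange(int cents) {
    const int coins[] = {25, 10, 5, 1};   // quarters, dimes, nickels, pennies
    std::vector<int> used;
    for (int c : coins)
        while (cents >= c) {              // always take the largest coin that fits
            cents -= c;
            used.push_back(c);
        }
    return used;
}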

Fractional Knapsack Problem Many items, each with a positive weight and a positive benefit, and a maximum total weight. We may take any fraction of an item’s quantity; maximize benefit while staying within the weight limit. Algorithm: –Order items by benefit density (value/weight) in a priority queue (or simply sort) – O(n log n) –Withdraw the highest-density item and take all of it, or as much as fits under the remaining weight limit –Has the greedy-choice property – proof by contradiction
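A minimal sketch of the fractional-knapsack greedy, assuming an illustrative Item type: sort by value/weight, then take as much of each item as still fits.

#include <algorithm>
#include <vector>

struct Item { double weight, value; };

double fractionalKnapsack(std::vector<Item> items, double capacity) {
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;   // best density first
    });
    double total = 0;
    for (const Item& it : items) {
        double take = std::min(it.weight, capacity);      // all of it, or what fits
        total += take * (it.value / it.weight);
        capacity -= take;
        if (capacity <= 0) break;
    }
    return total;
}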

Task Scheduling Problem Set of tasks with fixed start and end times (no flexibility; all must be scheduled exactly as given); try to minimize the number of machines. Sort by start time – O(n log n) – which makes it simple to check for conflicts. For each task, put it on the first available machine, or use an additional machine if none is free. Optimal – proof by contradiction. Let k be the last machine added, and let i be the first task scheduled on it. Show that i conflicts with a task on every other machine at the same time.
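A minimal sketch of this greedy, assuming an illustrative Task type: sort by start time and keep a min-heap of machine finish times; the heap's final size is the machine count.

#include <algorithm>
#include <functional>
#include <queue>
#include <vector>

struct Task { int start, end; };

int machinesNeeded(std::vector<Task> tasks) {
    std::sort(tasks.begin(), tasks.end(), [](const Task& a, const Task& b) {
        return a.start < b.start;
    });
    std::priority_queue<int, std::vector<int>, std::greater<int>> finish;
    for (const Task& t : tasks) {
        if (!finish.empty() && finish.top() <= t.start)
            finish.pop();              // reuse the machine that freed up earliest
        finish.push(t.end);            // schedule t on the reused or a new machine
    }
    return (int)finish.size();         // one heap entry per machine in use
}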

Divide and Conquer Algorithms Divide the problem into smaller subproblems – ideally of equal size. Eventually reach a base case. Examples – mergesort or quicksort. Generally recursive. Usually efficient, but not always – for example, the naive recursions for Fibonacci numbers or Pascal’s triangle. Desirability depends on the work done in splitting/combining.

Consider merge sort. Sketch out the solution to merge sort (see the sketch below). What is its complexity? What skills do you have to help you?
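A minimal merge-sort sketch: split in half, sort each half recursively, then merge the two sorted halves. It sorts the half-open range [lo, hi).

#include <vector>

void mergeSort(std::vector<int>& a, int lo, int hi) {
    if (hi - lo < 2) return;                  // base case: 0 or 1 element
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);                    // divide
    mergeSort(a, mid, hi);
    std::vector<int> merged;
    merged.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi)                 // conquer: merge the halves
        merged.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < mid) merged.push_back(a[i++]);
    while (j < hi)  merged.push_back(a[j++]);
    for (int k = 0; k < (int)merged.size(); ++k) a[lo + k] = merged[k];
}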

Recurrence Relations Often of the form (e.g., merge sort):
T(n) = b if n < 2
T(n) = 2T(n/2) + bn if n >= 2
Recursion tree – bn effort at each of log n levels.
Plug and chug – telescoping. Start with a specific value of n to see the pattern better. Math intensive.
Iterative solution – T(n) = 2^i·T(n/2^i) + i·bn becomes T(n) = b·n + b·n·log n.
Guess and test – eyeball, then attempt an inductive proof.
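For example, unrolling the merge-sort recurrence (plug and chug):
T(n) = 2T(n/2) + bn
     = 4T(n/4) + 2bn
     = 8T(n/8) + 3bn
     = ...
     = 2^i·T(n/2^i) + i·bn
Stopping at i = log n (so n/2^i = 1): T(n) = n·b + bn·log n, which is O(n log n).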

Recursion tree – bn effort at each of log n levels. Show pictures (see posted handout). Guess and test – eyeball, then attempt an inductive proof. If you can’t prove it, try something bigger/smaller. This is not a good method for beginning students.

Look at several algorithms. Use the picture (recursion-tree) method to find complexity. Use the master method to find complexity.

Master method (different from the text’s; the method of the text can give tighter bounds) Of the form:
T(n) = c if n < d
T(n) = aT(n/b) + O(n^k) if n >= d
–If a > b^k then T(n) is O(n^(log_b a))
–If a = b^k then T(n) is O(n^k log n)
–If a < b^k then T(n) is O(n^k)
You can work this out on your own using telescoping and math skills.
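For example, merge sort has a = 2, b = 2, k = 1, so a = b^k and T(n) is O(n log n). Binary search has a = 1, b = 2, k = 0, so a = b^k and T(n) is O(log n).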

Large Integer Multiplication The normal way is O(n^2). Break each n-bit number into two halves: I = I_h·2^(n/2) + I_l and J = J_h·2^(n/2) + J_l. A shift is cheap – O(n). I·J = (I_h·2^(n/2) + I_l)(J_h·2^(n/2) + J_l) = I_h·J_h·2^n + I_h·J_l·2^(n/2) + I_l·J_h·2^(n/2) + I_l·J_l. This alone doesn’t help, as the work is the same (4·(n/2)^2). However, complicated patterns of sums/differences/products reduce the number of subproblems from 4 to 3 (or from 8 to 7 for matrices):
–Big-integer multiplication is O(n^(log₂ 3)) ≈ O(n^1.585)
–Matrix multiplication is O(n^(log₂ 7)) ≈ O(n^2.81)

Large Integer Multiplication Idea – how to think of it? You want a way of reducing the total number of pieces you need to compute. Observe that (I_h − I_l)(J_l − J_h) = I_h·J_l − I_l·J_l − I_h·J_h + I_l·J_h. The key: from one multiplication we get the two cross terms we need, once we add in two terms we already have. So, instead of computing the four pieces shown earlier, we do this one multiplication to get two of the pieces we need!

Large Integer Multiplication I·J = I_h·J_h·2^n + [(I_h − I_l)(J_l − J_h) + I_h·J_h + I_l·J_l]·2^(n/2) + I_l·J_l. Tada – three multiplications instead of four. By the master formula this is O(n^(log₂ 3)) ≈ O(n^1.585):
–a = 3
–b = 2
–k = 1
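A minimal sketch of the recursion, assuming 64-bit inputs small enough not to overflow and splitting by decimal digits (real big-integer code works on digit arrays). It uses the additive variant (I_h + I_l)(J_h + J_l) to avoid negative intermediates; three recursive multiplications replace the naive four.

#include <algorithm>
#include <cstdint>

uint64_t karatsuba(uint64_t x, uint64_t y) {
    if (x < 10 || y < 10) return x * y;              // base case: single digit
    int digits = 0;
    for (uint64_t t = std::max(x, y); t > 0; t /= 10) ++digits;
    uint64_t half = 1;
    for (int i = 0; i < digits / 2; ++i) half *= 10; // split point 10^(n/2)
    uint64_t xh = x / half, xl = x % half;           // x = xh*half + xl
    uint64_t yh = y / half, yl = y % half;
    uint64_t hh = karatsuba(xh, yh);                 // I_h * J_h
    uint64_t ll = karatsuba(xl, yl);                 // I_l * J_l
    // (xh+xl)(yh+yl) - hh - ll = xh*yl + xl*yh: both cross terms at once
    uint64_t cross = karatsuba(xh + xl, yh + yl) - hh - ll;
    return hh * half * half + cross * half + ll;     // karatsuba(75, 53) == 3975
}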

Try at seats! For example, multiply 75·53. Split: I_h = 7, I_l = 5, J_h = 5, J_l = 3. Products: I_h·J_h = 35, I_l·J_l = 15, (I_h − I_l)(J_l − J_h) = (2)(−2) = −4. So 75·53 = 35·100 + (−4 + 35 + 15)·10 + 15 = 3500 + 460 + 15 = 3975.

Matrix multiplication Same idea. Breaking the matrices up into four parts doesn’t help by itself. Strassen’s algorithm: complicated patterns of sums/differences/products reduce the number of subproblems from 8 to 7. We won’t go through the details, as not much is learned from the struggle. T(n) = 7T(n/2) + bn^2:
–Matrix multiplication is O(n^(log₂ 7)) ≈ O(n^2.81)

Discard and Conquer; Top-Down Refinement Both are similar to divide and conquer. Discard and conquer requires only that we solve one of several subproblems:
–Corresponds to proof by cases
–Binary search is an example
–Finding the kth smallest element is an example (quickselect; see the sketch below)
Top-down refinement assembles a solution from the solutions to several subproblems:
–Subproblems are not self-similar or balanced
–This is the standard “problem-solving” method
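A minimal quickselect sketch (discard-and-conquer): partition, then recurse into only the side that can contain the kth smallest element (0-indexed).

#include <algorithm>
#include <vector>

// Lomuto partition around the last element; returns the pivot's final index.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    return i;
}

int quickselect(std::vector<int>& a, int lo, int hi, int k) {
    while (lo < hi) {
        int p = partition(a, lo, hi);
        if (p == k) return a[p];
        if (k < p) hi = p - 1;     // answer lies left of the pivot: discard right
        else       lo = p + 1;     // answer lies right of the pivot: discard left
    }
    return a[lo];
}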

Dynamic Programming Algorithms The reverse of divide and conquer: we build solutions up from the base cases. Avoids the duplicate calls a recursive solution can make. Implementation is usually iterative. Examples – the Fibonacci series, Pascal’s triangle, making change with coins of different relatively prime denominations.
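A minimal sketch of bottom-up dynamic programming: Fibonacci built from the base cases, avoiding the duplicate calls of the naive recursion.

#include <cstdint>
#include <vector>

uint64_t fib(int n) {
    if (n < 2) return n;
    std::vector<uint64_t> f(n + 1);
    f[0] = 0; f[1] = 1;                  // base cases
    for (int i = 2; i <= n; ++i)
        f[i] = f[i - 1] + f[i - 2];      // each subproblem computed exactly once
    return f[n];
}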

Good Dynamic Programming Algorithm Attributes Simple subproblems. Subproblem optimality: an optimal solution is built from optimal solutions to subproblems. Can you think of a real-world example without subproblem optimality? Round-trip discounts: the cheapest round trip need not combine the two cheapest one-way fares. Subproblem overlap (sharing).

The 0-1 Knapsack Problem 0-1 means take an item or leave it (no fractions). Now we are given whole units which we can take or leave. The obvious solution of enumerating all subsets is Θ(2^n). The difficulty is in characterizing the subproblems:
–Find the best solution for the first k items – no good, as the optimal solution doesn’t build on these earlier solutions
–Find the best solution for the first k items within each weight limit
–Either keep the previous best at this limit, or take item k plus the previous best at the reduced limit – O(nW)
Pseudo-polynomial – the running time depends on the numeric value of the parameter W, not just on the size of the input.

You could try them exhaustively, deciding about the last item first:

#include <algorithm>

const int MAX = 100;   // array capacity (assumed)
int value[MAX];        // value of each item
int weight[MAX];       // weight of each item

// You can use item "item" or items with a lower number.
// The maximum weight you can have is maxWeight.
int bestValue(int item, int maxWeight) {
    if (item < 0) return 0;
    if (maxWeight < weight[item])          // current item can't be used, skip it
        return bestValue(item - 1, maxWeight);
    int useIt = bestValue(item - 1, maxWeight - weight[item]) + value[item];
    int dontUseIt = bestValue(item - 1, maxWeight);
    return std::max(useIt, dontUseIt);
}

Price per Pound The constant “price-per-pound” knapsack problem is often called the subset sum problem: given a set of numbers, we seek a subset that adds up to a specific target, i.e., the capacity of our knapsack. If we have a capacity of 10, consider a tree in which each level corresponds to deciding about one item (in order). Notice that, even in this simple example, about a third of the calls are duplicates.

We would need to store whether a specific weight can be achieved using only items 1..k: possible[item][max] = using the current item (or earlier items in the list), can you achieve exactly weight max?

Consider the weights 2, 2, 6, 5, 4 with a limit of 10. We could compute such a table in an iterative fashion (each row adds one item; columns are weights 0–10):

weight:        0    1    2    3    4    5    6    7    8    9    10
{2}:           yes  no   yes  no   no   no   no   no   no   no   no
{2,2}:         yes  no   yes  no   yes  no   no   no   no   no   no
{2,2,6}:       yes  no   yes  no   yes  no   yes  no   yes  no   yes
{2,2,6,5}:     yes  no   yes  no   yes  yes  yes  yes  yes  yes  yes
{2,2,6,5,4}:   yes  no   yes  no   yes  yes  yes  yes  yes  yes  yes
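A minimal sketch of the iterative table fill: possible[k][w] is true when weight w is achievable using only the first k items.

#include <vector>

std::vector<std::vector<bool>> fillTable(const std::vector<int>& wt, int limit) {
    int n = (int)wt.size();
    std::vector<std::vector<bool>> possible(n + 1,
        std::vector<bool>(limit + 1, false));
    possible[0][0] = true;                         // empty set achieves weight 0
    for (int k = 1; k <= n; ++k)
        for (int w = 0; w <= limit; ++w) {
            possible[k][w] = possible[k - 1][w];   // don't use item k
            if (w >= wt[k - 1] && possible[k - 1][w - wt[k - 1]])
                possible[k][w] = true;             // use item k
        }
    return possible;
}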

From the table Can you tell HOW to fill the knapsack?

Let's compare the two strategies: Normal/forgetful: wait until you are asked for a value before computing it. You may have to do some things twice, but you will never do anything you don't need. Compulsive/elephant: you do everything before you are asked to do it. You may compute things you never need, but you will never compute anything twice (as you never forget). Which is better? At first it seems better to wait until you need something, but in large recursions, almost everything is needed somewhere and many things are computed LOTS of times.

Consider the complexity for a max capacity of M and N different items. Normal: for each item, try it two ways (useIt or dontUseIt) – O(2^N). Compulsive: fill in the array – O(M·N). Which is better depends on the values of M and N. Notice that the two complexities depend on different variables.

Clever Observation Since only the previous row is ever accessed, we don’t really need to store all the rows. However, we then couldn’t easily read back the optimal choices.

The Matrix Chain Problem Consider B (2×10), C (10×50), D (50×20). Matrix multiplication is associative, so the product is the same in any order – but does the order matter for cost?
–BC costs 2·10·50 = 1000 multiplications
–(BC)D = 2·10·50 + 2·50·20 = 1000 + 2000 = 3000 (best)
–B(CD) = 10·50·20 + 2·10·20 = 10000 + 400 = 10400
–Reduce the number of multiplications by proper association
–The naive algorithm to find the best association is exponential in the number of matrices
–Let N(i,j) denote the minimum number of multiplications to compute A_i·A_(i+1)...A_j

At seats – what is the algorithm? Each cell N(i,j) represents the minimum cost of multiplying matrices i through j:
–Look at each possible division point k
–Pick the best of the possibilities
–N(i,k) + N(k+1,j) gives the cost of each piece; multiplying the two pieces, of sizes d_i × d_(k+1) and d_(k+1) × d_(j+1), costs d_i·d_(k+1)·d_(j+1)
–Subscripting – remember d_i is the row size of the ith matrix
So N(i,j) = min over i <= k < j of N(i,k) + N(k+1,j) + d_i·d_(k+1)·d_(j+1).
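A minimal sketch of the bottom-up fill, assuming d holds the n+1 dimensions so matrix i is d[i] × d[i+1]. For the B, C, D example above, matrixChain({2, 10, 50, 20}) returns 3000.

#include <algorithm>
#include <climits>
#include <vector>

long long matrixChain(const std::vector<long long>& d) {
    int n = (int)d.size() - 1;                 // number of matrices
    std::vector<std::vector<long long>> N(n, std::vector<long long>(n, 0));
    for (int len = 2; len <= n; ++len)         // chain length
        for (int i = 0; i + len - 1 < n; ++i) {
            int j = i + len - 1;
            N[i][j] = LLONG_MAX;
            for (int k = i; k < j; ++k)        // try each division point
                N[i][j] = std::min(N[i][j],
                    N[i][k] + N[k + 1][j] + d[i] * d[k + 1] * d[j + 1]);
        }
    return N[0][n - 1];
}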

Backtracking Algorithms / Brute Force Algorithms Exhaustive (brute force) search – depth-first through the solution space, if it is structured. Bad if we happen to pick a first path that is very deep. Backtrack when we can’t go forward. Brute force – try everything. Examples: tree traversal looking for a key, or finding a solution to the Eight Queens problem. Heuristics help a lot – for instance, knowledge about the structure of a binary search tree. Usually implemented with a stack.
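A minimal backtracking sketch for N-Queens (helper names are illustrative): place one queen per row, and back up whenever a column or diagonal conflict blocks the branch.

#include <cstdlib>
#include <vector>

// Is it safe to put a queen at (row, c), given queens in rows 0..row-1?
bool safe(const std::vector<int>& col, int row, int c) {
    for (int r = 0; r < row; ++r)
        if (col[r] == c || std::abs(col[r] - c) == row - r)
            return false;                 // same column or same diagonal
    return true;
}

// Call with col sized n and row = 0; on success col[r] is the queen's column.
bool placeQueens(std::vector<int>& col, int row, int n) {
    if (row == n) return true;            // all rows filled: a solution
    for (int c = 0; c < n; ++c)
        if (safe(col, row, c)) {
            col[row] = c;
            if (placeQueens(col, row + 1, n)) return true;
            // otherwise undo implicitly and try the next column (backtrack)
        }
    return false;
}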

Observe that divide and conquer may itself be brute force. We can go depth-first – better for storage. We can go breadth-first – may be better for optimality, but we must use a queue to store the subproblems yet to be explored. What about a “best first” solution?

PRUNING However, sometimes we can determine that a given node in the solution space does not lead to the optimal solution--either because the given solution and all its successors are infeasible or because we have already found a solution that is guaranteed to be better than any successor of the given solution. In such cases, the given node and its successors need not be considered. In effect, we can prune the solution tree, thereby reducing the number of solutions to be considered.

Branch and Bound Algorithms Definition: an algorithmic technique to find the optimal solution by keeping the best solution found so far. If a partial solution cannot improve on the best, it is abandoned. Uses a scoring mechanism to always choose the most promising node – a best-first search.

Branch and Bound Algorithms For each subtree, the best possible solution is computed; if it is worse than the best so far, prune. Rather than demanding the absolute best solution, we may settle for “at least as good as bound”; if we then find no solution, we can relax the bound and start over. Backtracking as soon as we realize a branch is going bad is called pruning.

Branch and Bound Algorithms Need an evaluation function of some kind. Examples – games, integer linear programming. Besides pruning, the evaluation function may give us hints on which branch to follow, so we never really throw out a case – we just give it lower priority. Greedy algorithms are extreme examples of branch and bound, since they ignore all other branches.

Example: consider the scales-balancing problem – dividing a set of weights into two approximately equal-weight pans. What problem is this most like? Consider a partial solution in which we have placed k weights onto the pans (0 < k < n), so n−k weights remain to be placed. Compute the difference between the weights of the left and right pans, and the sum of the weights still to be placed. For any given subproblem, if the sum of the weights still to be placed is less than the current difference between the pans, you have a bound on how close you can come to the desired goal. Prune when your best possible outcome is worse than the best found so far (see the sketch below).
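A minimal branch-and-bound sketch under these assumptions (helper name is illustrative; call place(w, 0, 0, totalWeight, best) with best initialized to totalWeight):

#include <algorithm>
#include <cstdlib>
#include <vector>

// diff = left pan minus right pan so far; remaining = sum of unplaced weights.
void place(const std::vector<int>& w, int k, int diff, int remaining, int& best) {
    if (std::abs(diff) >= best + remaining) return;   // prune: can't beat best
    if (k == (int)w.size()) {
        best = std::min(best, std::abs(diff));        // complete placement
        return;
    }
    place(w, k + 1, diff + w[k], remaining - w[k], best);  // weight k on left
    place(w, k + 1, diff - w[k], remaining - w[k], best);  // weight k on right
}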

Local Search Algorithms Start at a random spot, then follow a gradient of improving solutions – termed hill climbing. Locally optimal, but possibly globally suboptimal. Differs from greedy in that we move through a sequence of complete solutions. Example – travelling salesman: come up with an initial route that visits all cities but may not be optimal, then iteratively try to improve it by breaking edges and reconnecting the tour (see the sketch below).
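A minimal local-search sketch in this break-and-reconnect style (commonly called 2-opt), assuming a symmetric, hypothetical distance matrix; it keeps applying improving moves and stops at a local optimum.

#include <algorithm>
#include <vector>

void twoOpt(std::vector<int>& tour, const std::vector<std::vector<double>>& dist) {
    int n = (int)tour.size();
    bool improved = true;
    while (improved) {
        improved = false;
        for (int i = 0; i + 1 < n; ++i)
            for (int j = i + 2; j < n; ++j) {
                int a = tour[i], b = tour[i + 1];
                int c = tour[j], d = tour[(j + 1) % n];
                // break edges (a,b) and (c,d); reconnect as (a,c) and (b,d)
                if (dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]) {
                    std::reverse(tour.begin() + i + 1, tour.begin() + j + 1);
                    improved = true;
                }
            }
    }
}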

Relatives of Big-Oh f(n) is Ω(g(n)) (pronounced “big Omega”) if g(n) is O(f(n)). In other words: there exist c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0. f(n) is Θ(g(n)) (pronounced “big Theta”) if f(n) is O(g(n)) and f(n) is Ω(g(n)). In other words:
–there exist c′ > 0, c″ > 0, and n0 ≥ 1 such that
–c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n0