Advanced Design and Analysis Techniques


Advanced Design and Analysis Techniques, Part 1 (Sections 15.1 and 15.2)

Techniques - 1 This part covers three important techniques for the design and analysis of efficient algorithms: dynamic programming (Chapter 15), greedy algorithms (Chapter 16), and amortized analysis (Chapter 17).

Techniques - 2 Earlier parts have presented other widely applicable techniques, such as divide-and-conquer, randomization, and the solution of recurrences.

Dynamic programming Dynamic programming typically applies to optimization problems in which a set of choices must be made in order to arrive at an optimal solution. Dynamic programming is effective when a given subproblem may arise from more than one partial set of choices; the key technique is to store the solution to each such subproblem in case it should reappear.

Greedy algorithms Like dynamic-programming algorithms, greedy algorithms typically apply to optimization problems in which a set of choices must be made in order to arrive at an optimal solution. The idea of a greedy algorithm is to make each choice in a locally optimal manner.

Dynamic programming - 1 Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. Divide-and-conquer algorithms partition the problem into independent subproblems, solve the subproblems recursively, and then combine their solutions to solve the original problem.

Dynamic programming - 2 Dynamic programming is applicable when the subproblems are not independent, that is, when subproblems share subsubproblems. A dynamic-programming algorithm solves every subsubproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subsubproblem is encountered.
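A minimal Python illustration of this save-and-reuse idea (Fibonacci stands in here for any recurrence with shared subsubproblems; it is not an example from the slides):

```python
from functools import lru_cache

@lru_cache(maxsize=None)   # the "table": each argument's answer is stored once
def fib(n):
    if n < 2:
        return n
    # fib(n-1) and fib(n-2) share subsubproblems; each is solved only once
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, using O(n) additions instead of exponentially many
```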

Dynamic programming - 3 Dynamic programming is typically applied to optimization problems. In such problems there can be many possible solutions. Each solution has a value, and we wish to find a solution with the optimal (minimum or maximum) value. We call such a solution an optimal solution to the problem, as opposed to the optimal solution, since there may be several solutions that achieve the optimal value.

The development of a dynamic-programming algorithm The development of a dynamic-programming algorithm can be broken into a sequence of four steps:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Assembly-line scheduling

Step 1: The structure of the fastest way through the factory

Step 2: A recursive solution
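The slide's figure is not reproduced in this transcript. For reference, the recurrence from CLRS Section 15.1: let $f_i[j]$ be the fastest time to get a chassis through station $j$ on line $i$, with assembly times $a_{i,j}$, transfer times $t_{i,j}$, and entry times $e_i$. Then

$$
f_1[j] = \begin{cases} e_1 + a_{1,1} & \text{if } j = 1,\\ \min\bigl(f_1[j-1] + a_{1,j},\; f_2[j-1] + t_{2,j-1} + a_{1,j}\bigr) & \text{if } j \ge 2, \end{cases}
$$

and symmetrically for $f_2[j]$. The overall fastest time is $f^* = \min(f_1[n] + x_1,\, f_2[n] + x_2)$, where $x_1, x_2$ are the exit times.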

Step 3: Computing the fastest times
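A sketch of Step 3 in Python, following CLRS's FASTEST-WAY pseudocode; the 0-based indexing and the return convention are my adaptation:

```python
def fastest_way(a, t, e, x, n):
    """Assembly-line scheduling, CLRS Section 15.1 style (0-based indices).

    a[i][j]: assembly time at station j on line i (i in {0, 1})
    t[i][j]: transfer time leaving station j on line i, for j = 0..n-2
    e[i], x[i]: entry and exit times for line i
    Returns (fastest total time, line used at the last station, l table).
    """
    f = [[0] * n for _ in range(2)]  # f[i][j]: fastest time through station j on line i
    l = [[0] * n for _ in range(2)]  # l[i][j]: line used at station j-1 on that fastest way
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in (0, 1):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]                      # remain on line i
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]  # transfer over
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 0, l
    return f[1][n - 1] + x[1], 1, l
```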

Step 4: Constructing the fastest way through the factory
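Step 4 only needs the l table computed above; a sketch of CLRS's PRINT-STATIONS, here run on the Figure 15.1 instance from the book:

```python
def print_stations(l, last_line, n):
    """Step 4: walk the l table backwards to recover the chosen line
    at each station (printed from station n down to 1, as in CLRS)."""
    i = last_line
    print(f"line {i + 1}, station {n}")
    for j in range(n - 1, 0, -1):
        i = l[i][j]
        print(f"line {i + 1}, station {j}")

# The CLRS Figure 15.1 instance:
a = [[7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7]]
t = [[2, 3, 1, 3, 4], [2, 1, 2, 2, 1]]
best, last, l = fastest_way(a, t, e=(2, 4), x=(3, 2), n=6)
print(best)               # 38, matching the book's f*
print_stations(l, last, 6)
```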

Matrix-chain multiplication We can multiply two matrices A and B only if they are compatible: the number of columns of A must equal the number of rows of B. If A is a p × q matrix and B is a q × r matrix, the resulting matrix C is a p × r matrix.
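The standard algorithm computes C with p·q·r scalar multiplications, and that product is exactly the cost the matrix-chain problem minimizes. A minimal sketch (illustrative; not from the slides):

```python
def matrix_multiply(A, B):
    """Standard multiplication of a p x q matrix A by a q x r matrix B.
    The triple loop performs exactly p*q*r scalar multiplications."""
    p, q, r = len(A), len(B), len(B[0])
    if len(A[0]) != q:
        raise ValueError("incompatible: columns of A must equal rows of B")
    C = [[0] * r for _ in range(p)]
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
    return C
```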

Counting the number of parenthesizations
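The slide's derivation is not in the transcript; the count from CLRS Section 15.2 is

$$
P(n) = \begin{cases} 1 & \text{if } n = 1,\\ \displaystyle\sum_{k=1}^{n-1} P(k)\,P(n-k) & \text{if } n \ge 2, \end{cases}
$$

whose solution grows as the Catalan numbers, $\Omega(4^n / n^{3/2})$, so exhaustively checking every parenthesization is infeasible.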

Step 1: The structure of an optimal parenthesization

Step 2: A recursive solution
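For Step 2: writing $A_i$ as a $p_{i-1} \times p_i$ matrix, the minimum number $m[i,j]$ of scalar multiplications needed to compute $A_i A_{i+1} \cdots A_j$ satisfies (CLRS Section 15.2)

$$
m[i,j] = \begin{cases} 0 & \text{if } i = j,\\ \displaystyle\min_{i \le k < j} \bigl( m[i,k] + m[k+1,j] + p_{i-1}\,p_k\,p_j \bigr) & \text{if } i < j. \end{cases}
$$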

Step 3: Computing the optimal costs
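A Python sketch of Step 3, after CLRS's MATRIX-CHAIN-ORDER; keeping the tables 1-based to match the book is a stylistic choice:

```python
import math

def matrix_chain_order(p):
    """Step 3, bottom-up (after MATRIX-CHAIN-ORDER in CLRS Section 15.2).

    p: dimension list; matrix A_i is p[i-1] x p[i] for i = 1..n.
    Returns (m, s): m[i][j] = minimal scalar multiplications for A_i..A_j,
    s[i][j] = the split index k achieving it. Index 0 is unused.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):               # try every split A_i..A_k | A_k+1..A_j
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j], s[i][j] = cost, k
    return m, s

# CLRS example: dimensions <30, 35, 15, 5, 10, 20, 25>
m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6])   # 15125, the book's optimal cost
```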

Step 4: Constructing an optimal solution
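And a sketch of Step 4 (CLRS's PRINT-OPTIMAL-PARENS, adapted here to return a string rather than print):

```python
def print_optimal_parens(s, i, j):
    """Step 4: recover the parenthesization recorded in the split
    table s for the subchain A_i..A_j."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return f"({print_optimal_parens(s, i, k)}{print_optimal_parens(s, k + 1, j)})"

print(print_optimal_parens(s, 1, 6))   # ((A1(A2A3))((A4A5)A6)), as in CLRS
```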