Advanced Algorithm Design and Analysis (Lecture 6) SW5 fall 2004 Simonas Šaltenis E1-215b

AALG, lecture 6, © Simonas Šaltenis

Dynamic Programming
Goals of the lecture:
- to understand the principles of dynamic programming;
- to use the examples of edit distance, approximate pattern matching, optimal binary search trees, and coin changing to see how the principles work;
- to be able to apply the dynamic-programming algorithm design technique.

Coin changing
- Problem: change amount A into as few coins as possible, when we have n coin denominations: denom[1] > denom[2] > … > denom[n] = 1
- For example: A = 12, denom = [10, 5, 1]
- The greedy algorithm works fine (for this example)
  - Prove the greedy-choice property
- What if A = 12, denom = [10, 6, 1]?
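The greedy strategy from the slide above can be sketched in a few lines of Python (the function name is mine, not from the lecture). Running it on both denomination sets shows why the second one breaks greedy:

```python
def greedy_change(amount, denom):
    """Greedy coin changing: repeatedly take the largest coin that fits.

    denom must be sorted in decreasing order and end with 1,
    so a solution always exists.
    """
    coins = []
    for d in denom:
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

# Greedy happens to be optimal here:
print(greedy_change(12, [10, 5, 1]))   # [10, 1, 1] -- 3 coins
# ...but not here: greedy returns [10, 1, 1], yet 6 + 6 uses only 2 coins.
print(greedy_change(12, [10, 6, 1]))
```

The failure on denom = [10, 6, 1] is exactly why the lecture returns to this problem with dynamic programming in the last slide.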

Dynamic programming
- Dynamic programming: a powerful technique to solve optimization problems
- Structure:
  - To arrive at an optimal solution, a number of choices are made
  - Each choice generates a number of sub-problems
  - Which choice to make is decided by looking at all possible choices and the solutions to the sub-problems that each choice generates
  - Compare this with a greedy choice
- The solution to a specific sub-problem is used many times in the algorithm

Questions to think about
- Construction:
  - What are the sub-problems? Which parameters define each sub-problem?
  - Which choices have to be considered in each step of the algorithm?
  - In which order do we have to solve sub-problems?
  - How are the trivial sub-problems solved?
- Analysis:
  - How many different sub-problems are there in total?
  - How many choices have to be considered in each step of the algorithm?

Edit Distance
- Problem definition:
  - Two strings: s[0..m-1] and t[0..n-1]
  - Find the edit distance dist(s, t): the smallest number of edit operations that turns s into t
- Edit operations:
  - Replace one letter with another
  - Delete one letter
  - Insert one letter
- Example: ghost → (delete g) → host → (insert u) → houst → (replace t by e) → house

Sub-problems
- What are the sub-problems?
  - Goal 1: to have as few sub-problems as possible
  - Goal 2: solving a sub-problem should be possible by combining solutions to smaller sub-problems
- Sub-problem: d[i,j] = dist(s[0..i], t[0..j])
- Then dist(s, t) = d[m-1, n-1]

Making a choice
- How can we solve a sub-problem by looking at solutions of smaller sub-problems to make a choice?
- Let's look at the last symbols, s[i] and t[j], and do whatever is cheapest:
  - If s[i] = t[j], then turn s[0..i-1] into t[0..j-1]; else replace s[i] by t[j] and turn s[0..i-1] into t[0..j-1]
  - Delete s[i] and turn s[0..i-1] into t[0..j]
  - Insert t[j] at the end of s[0..i] and turn s[0..i] into t[0..j-1]

Recurrence
- Combining the three choices:
  d[i,j] = min( d[i-1,j-1] + (0 if s[i] = t[j], else 1), d[i-1,j] + 1, d[i,j-1] + 1 )
- In which order do we have to solve sub-problems?
- How do we solve trivial sub-problems?
  - To turn the empty string into t[0..j], do j+1 inserts
  - To turn s[0..i] into the empty string, do i+1 deletes

Algorithm

EditDistance(s[0..m-1], t[0..n-1])
  for i ← -1 to m-1 do dist[i,-1] ← i+1
  for j ← 0 to n-1 do dist[-1,j] ← j+1
  for i ← 0 to m-1 do
    for j ← 0 to n-1 do
      if s[i] = t[j] then
        dist[i,j] ← min(dist[i-1,j-1], dist[i-1,j]+1, dist[i,j-1]+1)
      else
        dist[i,j] ← 1 + min(dist[i-1,j-1], dist[i-1,j], dist[i,j-1])
  return dist[m-1,n-1]

- What is the running time of this algorithm?
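The pseudocode translates almost directly to Python; the sketch below shifts all indices by one so that row/column 0 of the table plays the role of the -1 row/column:

```python
def edit_distance(s, t):
    """Bottom-up edit distance between strings s and t.

    dist[i][j] holds the edit distance between the prefixes
    s[:i] and t[:j].
    """
    m, n = len(s), len(t)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    # Trivial sub-problems: to/from the empty string.
    for i in range(m + 1):
        dist[i][0] = i          # i deletes
    for j in range(n + 1):
        dist[0][j] = j          # j inserts
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                dist[i][j] = min(dist[i - 1][j - 1],
                                 dist[i - 1][j] + 1,
                                 dist[i][j - 1] + 1)
            else:
                dist[i][j] = 1 + min(dist[i - 1][j - 1],
                                     dist[i - 1][j],
                                     dist[i][j - 1])
    return dist[m][n]

print(edit_distance("ghost", "house"))  # 3, matching the slide's example
```

Both loops run over the full table, so the running time the slide asks about is Theta(m*n).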

Approximate Text Searching
- Given p[0..m-1], find a sub-string w = t[l..j] of t such that dist(p, w) is minimal
- Brute force: compute the edit distance between p and all possible sub-strings of t. Running time?
- What are the sub-problems?
  - ad[i,j] = min{ dist(p[0..i], t[l..j]) | 0 ≤ l ≤ j+1 }
  - The same recurrence as for d[i,j]!
- The edit distance from p to the best match is then the minimum of ad[m-1,0], ad[m-1,1], …, ad[m-1,n-1]
- The trivial problems are solved differently: think how.
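One way to answer the slide's "think how": a match may start at any position of t, so the trivial row ad[0,j] is all zeros instead of j+1, and the answer is the minimum of the last row. A sketch under that assumption (function name mine):

```python
def approximate_match(p, t):
    """Minimum edit distance between pattern p and any sub-string of t.

    Same recurrence as plain edit distance; only the trivial
    sub-problems differ: the empty pattern matches at any position
    for free (row 0 is all zeros), and the match may end anywhere
    (take the minimum over the last row).
    """
    m, n = len(p), len(t)
    ad = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        ad[i][0] = i             # against the empty text: i deletes
    # ad[0][j] stays 0 for all j: a match can start at position j.
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if p[i - 1] == t[j - 1]:
                ad[i][j] = min(ad[i-1][j-1], ad[i-1][j] + 1, ad[i][j-1] + 1)
            else:
                ad[i][j] = 1 + min(ad[i-1][j-1], ad[i-1][j], ad[i][j-1])
    return min(ad[m])

print(approximate_match("abc", "xxabcxx"))  # 0: exact occurrence
print(approximate_match("abd", "xxabcxx"))  # 1: one edit away
```

This keeps the Theta(m*n) cost of the edit-distance table, versus the brute-force approach of running edit distance against every sub-string separately.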

Optimal BST
- Static database ⇒ the goal is to optimize searches
- Let's assume all searches are successful
[Not reproduced here: a table listing, for each node k_i (A–F), its depth, probability p_i, and contribution to the expected search cost, with a total, alongside a picture of the tree.]

Sub-problems
- Input: keys k_1, k_2, …, k_n
- Sub-problem options:
  - prefixes k_1, k_2, …, k_j
  - suffixes k_i, k_i+1, …, k_n
- Natural choice: pick as the root some k_r (1 ≤ r ≤ n)
  - Generates sub-problems of the general form k_i, k_i+1, …, k_j
- Let's denote the expected search cost of the sub-problem by e[i,j]. If k_r is the root, then e[i,j] = e[i,r-1] + e[r+1,j] + w(i,j), where w(i,j) = p_i + p_i+1 + … + p_j

Solving sub-problems
- How do we solve the trivial sub-problem?
- In which order do we have to solve the sub-problems?

Finishing up
- We can compute w(i,j) from w(i,j-1): w(i,j) = w(i,j-1) + p_j
- An array w[i,j] is filled in parallel with the e[i,j] array
- We need one more array to record which root k_r gave the best solution to the (i,j)-sub-problem
- What is the running time?
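Putting the three slides together, here is a minimal Python sketch of the optimal-BST computation under the lecture's assumption that all searches are successful (so the cost of a tree is the sum of p_i times depth-plus-one); the function and array names are mine:

```python
def optimal_bst(p):
    """Expected search cost of an optimal BST over keys 1..n.

    p[1..n] are the search probabilities (p[0] is unused).
    Returns (cost, root), where root[i][j] records which key
    k_r gave the best solution to the (i,j)-sub-problem.
    """
    n = len(p) - 1
    # e[i][j]: optimal cost for keys i..j; e[i][i-1] = 0 is the
    # trivial empty sub-problem. One extra row/column keeps
    # e[r+1][j] and w[i][j-1] in bounds.
    e = [[0.0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]
    root = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):            # smaller sub-problems first
        for i in range(1, n - length + 2):
            j = i + length - 1
            w[i][j] = w[i][j - 1] + p[j]      # w(i,j) = w(i,j-1) + p_j
            e[i][j] = float("inf")
            for r in range(i, j + 1):         # consider every root choice
                cost = e[i][r - 1] + e[r + 1][j] + w[i][j]
                if cost < e[i][j]:
                    e[i][j] = cost
                    root[i][j] = r            # remember the best root
    return e[1][n], root

# Probabilities 0.6, 0.3, 0.1 for keys 1, 2, 3: the optimal tree is the
# chain 1 -> 2 -> 3, with cost 0.6*1 + 0.3*2 + 0.1*3 = 1.5.
cost, root = optimal_bst([0, 0.6, 0.3, 0.1])
print(cost, root[1][3])
```

The three nested loops answer the running-time question: Theta(n^3) sub-problem/choice pairs in total.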

Elements of Dynamic Programming
- Dynamic programming is used for optimization problems
  - A number of choices have to be made to arrive at an optimal solution
  - At each step, consider all possible choices and the solutions to the sub-problems induced by these choices (compare to greedy algorithms)
- The order of solving the sub-problems is important: from smaller to larger
- Usually a table of sub-problem solutions is used

Elements of Dynamic Programming
- To be sure that the algorithm finds an optimal solution, the optimal sub-structure property has to hold
  - The simple "cut-and-paste" argument usually works, but not always!
  - Longest-simple-path example: no optimal sub-structure!

Coin Changing: Sub-problems
- A = 12, denom = [10, 6, 1]?
- What could be the sub-problems? Described by which parameters?
- How do we solve sub-problems?
- How do we solve the trivial sub-problems?
- In which order do we have to solve sub-problems?
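One natural set of answers to these questions, sketched in Python (names mine): the sub-problem is defined by the remaining amount alone, the choice in each step is which coin to use last, the trivial case is amount 0, and sub-problems are solved from smaller amounts to larger ones.

```python
def min_coins(amount, denom):
    """DP coin changing: opt[a] = fewest coins summing to a.

    Assumes denom contains 1, so every amount is reachable.
    A second array records the chosen coin, so the actual
    solution can be rebuilt, mirroring the root array of the
    optimal-BST example.
    """
    INF = float("inf")
    opt = [0] + [INF] * amount
    last = [0] * (amount + 1)
    for a in range(1, amount + 1):       # smaller sub-problems first
        for d in denom:                  # consider every choice of coin
            if d <= a and opt[a - d] + 1 < opt[a]:
                opt[a] = opt[a - d] + 1
                last[a] = d
    coins = []
    a = amount
    while a > 0:                         # rebuild the solution
        coins.append(last[a])
        a -= last[a]
    return coins

print(min_coins(12, [10, 6, 1]))  # [6, 6]: two coins, beating greedy's three
```

There are A+1 sub-problems and n choices per sub-problem, so the running time is Theta(A*n).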