Problem A subsequence is a sequence derived from another sequence by deleting some elements without changing the order of the remaining elements. Using code or pseudocode, write an algorithm that determines the longest common subsequence of two strings.

Introduction to Algorithms: Dynamic Programming, an introduction. Simon Ellis, 27th March, 2014.

Fibonacci numbers: iterative solution
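A minimal sketch of such an iterative solution in C (the function name and types are illustrative assumptions, not from the original slides):

/* Iterative Fibonacci: O(n) time, O(1) space.
   Keeps only the last two values instead of recursing. */
unsigned long long fib_iter(int n)
{
    unsigned long long prev = 0, curr = 1;    /* F(0) and F(1) */
    if (n == 0)
        return 0;
    for (int i = 2; i <= n; i++) {
        unsigned long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}

For example, fib_iter(10) returns 55.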

Fibonacci numbers (an aside)
- A closed-form solution exists: Binet’s formula can be used to calculate any Fibonacci number F(n) directly:
  F(n) = (φ^n − ψ^n) / √5, where φ = (1 + √5)/2 and ψ = (1 − √5)/2

Pascal’s Triangle: iterative solution
- Begin with 1 on row 0
- Each new row is offset to the left by the width of one number
- Construct the elements of each row as follows:
  - For each new value k, sum the value above-and-left with the value above-and-right to find k
  - If there is no number on one side, substitute zero in its place
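A minimal sketch of this construction in C (the function name and the row-size cap are illustrative assumptions):

#include <stdio.h>

/* Print the first n rows of Pascal's Triangle.
   Each entry is the sum of the value above-left and above-right,
   with missing neighbours treated as zero, as described above. */
void pascal(int n)
{
    int row[64] = {0};                 /* assumes n <= 64 for this sketch */
    for (int i = 0; i < n; i++) {
        for (int j = i; j > 0; j--)    /* update right-to-left so each    */
            row[j] += row[j - 1];      /* read still sees the previous row */
        row[0] = 1;                    /* every row begins with 1 */
        for (int j = 0; j <= i; j++)
            printf("%d ", row[j]);
        printf("\n");
    }
}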

Pascal’s Triangle

Pascal’s Triangle (an aside)
- A closed-form solution exists: the values in row n are the binomial coefficients C(n, k) = n! / (k! (n − k)!)
- Use symmetry (each row reads the same forwards and backwards) to derive the right-hand side of the row (or calculate it)

Pascal’s Triangle (another aside)
1 1 2 3 5 8 13 21 34
(The sums of the triangle’s shallow diagonals yield the Fibonacci numbers.)

Calculating prime numbers: iterative solution
- Let P be the set of prime numbers; at the start, P = ∅
- For each value k = 2…n:
  - For each value j = 2…k − 1:
    - If k/j is not an integer for all j, then P = P ∪ {k}
- Output P
- A naïve, slow solution (see the sketch below)
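A direct C transcription of this naive trial-division scheme might look as follows (the function name is an assumption, and an early exit is taken once a divisor is found):

#include <stdio.h>
#include <stdbool.h>

/* Naive prime generation by trial division, as described above:
   test every j in 2..k-1 for each candidate k. Roughly O(n^2). */
void primes_upto(int n)
{
    for (int k = 2; k <= n; k++) {
        bool is_prime = true;
        for (int j = 2; j < k; j++) {
            if (k % j == 0) {          /* k/j is an integer: k is composite */
                is_prime = false;
                break;
            }
        }
        if (is_prime)
            printf("%d ", k);          /* k joins the set P */
    }
    printf("\n");
}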

Dynamic programming
May refer to two things:
- A mathematical optimisation method
- An approach to computer programming and algorithm design
A method for solving complex problems using recursion:
- Problems may be solved by solving smaller parts
- Keep addressing smaller parts until each subproblem becomes tractable
The problem must possess both of the following traits:
- Optimal substructure
- Overlapping subproblems

Optimal substructure
- A problem has optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems
- It may be solved using a greedy algorithm, iff it can be proved by induction that the greedy choice is optimal at every step
- Dynamic programming may be used if overlapping subproblems exist
- Otherwise, a brute-force search is required

Optimal substructure: examples
Has optimal substructure: finding the shortest path between two cities by car
- e.g. if the shortest route from Seattle to Los Angeles passes through Portland and Sacramento, then the shortest route from Portland to Los Angeles passes through Sacramento
Does not have optimal substructure: buying the cheapest ticket from Sydney to New York
- Even if the cheapest such ticket has stops in Dubai and London, we cannot conclude that the cheapest ticket between the intermediate cities follows the same route, because the price at which an airline sells a multi-flight ticket is not the sum of the prices for which it would sell the individual flights on the same trip

Overlapping subproblems
A problem has overlapping subproblems if either of the following is true:
- The problem can be broken down into subproblems which are reused several times
- A recursive algorithm solves the same subproblem over and over instead of generating new subproblems
Examples: Fibonacci numbers, Pascal’s Triangle, …

Examples of dynamic programming
- Fibonacci numbers
- Pascal’s Triangle
- Prime number generation
- Dijkstra’s algorithm / the A* algorithm
- The Towers of Hanoi game
- Matrix chain multiplication
- Beat tracking in music information retrieval software
- Neural networks
- Word-wrapping in word processors (especially LaTeX)
- Transposition and refutation tables in computer chess games

Recursion
- Recursion is a powerful tool for solving dynamic programming problems
- Not always an optimal solution on its own: naïve and wasteful
- e.g. to calculate F(100) naïvely, a great deal of work will be wasted recomputing the same values
- Two standard approaches to improving performance: “top-down” and “bottom-up”
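To make the waste concrete, here is the naive recursive version (a sketch, not from the slides); the two recursive calls independently recompute the same subproblems, which is exactly what the next two approaches avoid:

/* Naive recursive Fibonacci: correct, but fib_naive(n-1) and
   fib_naive(n-2) each recompute the same smaller subproblems,
   giving exponentially many calls in n. */
unsigned long long fib_naive(int n)
{
    if (n < 2)
        return n;                 /* base cases: F(0) = 0, F(1) = 1 */
    return fib_naive(n - 1) + fib_naive(n - 2);
}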

“Top-down” approach
In software design: developing a program by starting with a large concept and adding increasing layers of specialisation
In dynamic programming: a solution which derives directly from the recursive solution
- Performance is improved by memoisation
- Solutions to subproblems are stored in a data structure
- If a subproblem has already been solved, we read the result from the table; otherwise we calculate it and add it
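For instance, a minimal sketch of the top-down approach applied to Fibonacci (the names, table size, and bookkeeping are illustrative assumptions):

#define MAXN 94   /* F(93) is the largest Fibonacci number fitting in 64 bits */

static unsigned long long fib_cache[MAXN];
static int known[MAXN];            /* 1 once fib_cache[n] holds a valid result */

unsigned long long fib_memo(int n)
{
    if (n < 2)
        return n;                  /* base cases: F(0) = 0, F(1) = 1 */
    if (!known[n]) {               /* subproblem not yet solved: compute it */
        fib_cache[n] = fib_memo(n - 1) + fib_memo(n - 2);
        known[n] = 1;
    }
    return fib_cache[n];           /* otherwise, read from the table */
}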

“Bottom-up” approach
In software design: developing a program by starting with many small objects and functions and building on them to create more functionality
In dynamic programming: solving the subproblems first and aggregating them
- Performance is improved using stored data; memoisation may be used
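Applied to Fibonacci, a bottom-up sketch simply fills the table from the base cases upward (again illustrative, with an assumed bound on n):

/* Bottom-up Fibonacci: solve the smallest subproblems first and build
   upward, so no recursion is needed at all.
   Assumes 0 <= n <= 93 so the result fits in 64 bits. */
unsigned long long fib_bottom_up(int n)
{
    unsigned long long table[94];
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}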

Memoisation
- An optimisation technique
- Reduces the need to repeat function calls, which may be expensive
- A “memoised” function stores the results of previous calls with a specific set of inputs
- The function must be “referentially transparent”, i.e. calling the function must have exactly the same outcome as returning the value that would be produced by calling it

Memoisation
- Not the same as using a lookup table: lookup tables are precalculated before use, whereas memoised tables are filled in transparently, as required
- Memoisation optimises for time over space
- There is a time/space trade-off in all algorithms (“computational complexity”):
  - Complexity in time: how much time is required
  - Complexity in space: how much memory is required
  - Typically, one cannot be reduced without increasing the other

Computational complexity
- Roughly, the quantity of resources required to solve a problem computationally
- Resources are whatever is appropriate to the situation: time, memory, logic gates on a circuit board, the number of processors in a supercomputer, …
- e.g. “Big-O” notation is commonly used as a measure of complexity in time

Computational complexity
- We cannot reduce all complexity values simultaneously; all software design requires a trade-off, primarily between time (runtime) and space (memory requirements)
- Memoisation optimises for time over space: calculations are performed more quickly, but memory is required to store the precalculated data

Longest common subsequence
Assume two strings, S and T. Solution?
- Generate all possible subsequences of both sequences, the set Q
- Return the longest subsequence in Q

int lcs(char *S, char *T, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (S[m-1] == T[n-1])
        return 1 + lcs(S, T, m-1, n-1);
    else
        return max(lcs(S, T, m, n-1), lcs(S, T, m-1, n));
}

Longest common subsequence
Time: O(2^n). Why? Is this an efficient algorithm for this problem?

int lcs(char *S, char *T, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (S[m-1] == T[n-1])
        return 1 + lcs(S, T, m-1, n-1);
    else
        return max(lcs(S, T, m, n-1), lcs(S, T, m-1, n));
}

Longest common subsequence
- Can we use dynamic programming to improve performance?
- Does the problem meet the requirements?
  - Does it have optimal substructure?
  - Does it have overlapping subproblems?

Optimal substructure of LCS
Assume input sequences S[0..m-1] and T[0..n-1], with lengths m and n, and let L(S[0..m-1], T[0..n-1]) be the length of the LCS of S and T. L is defined recursively as follows:
- If the last characters of both sequences match:
  L(S[0..m-1], T[0..n-1]) = 1 + L(S[0..m-2], T[0..n-2])
- If the last characters of both sequences do not match:
  L(S[0..m-1], T[0..n-1]) = max( L(S[0..m-2], T[0..n-1]), L(S[0..m-1], T[0..n-2]) )

Optimal substructure of LCS: examples
- Last characters match: consider the input strings “AGGTAB” and “GXTXAYB”. The length of the LCS can be written as
  L(“AGGTAB”, “GXTXAYB”) = 1 + L(“AGGTA”, “GXTXAY”)
- Last characters do not match: consider the input strings “ABCDGH” and “AEDFHR”. Then
  L(“ABCDGH”, “AEDFHR”) = max( L(“ABCDG”, “AEDFHR”), L(“ABCDGH”, “AEDFH”) )

Overlapping subproblems in LCS
Consider the original code: many of the same subproblems are solved repeatedly, so the subproblems overlap and performance can be improved by memoisation (see the sketch below). A partial recursion tree, in which lcs("AXY", "AYZ") is computed twice:

                         lcs("AXYT", "AYZX")
                        /                   \
       lcs("AXY", "AYZX")                    lcs("AXYT", "AYZ")
       /              \                      /               \
lcs("AX", "AYZX")  lcs("AXY", "AYZ")  lcs("AXY", "AYZ")  lcs("AXYT", "AY")
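A hedged sketch of how memoisation could be added to the recursive lcs above; the table, its size limit, and the helper names are illustrative assumptions, not from the original slides:

#include <string.h>

#define MAXLEN 100                        /* illustrative size limit */

static int memo[MAXLEN + 1][MAXLEN + 1];  /* -1 marks "not yet computed" */

static int max2(int a, int b) { return a > b ? a : b; }

int lcs_memo(const char *S, const char *T, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (memo[m][n] != -1)                 /* subproblem already solved:     */
        return memo[m][n];                /* read the answer from the table */
    if (S[m - 1] == T[n - 1])
        memo[m][n] = 1 + lcs_memo(S, T, m - 1, n - 1);
    else
        memo[m][n] = max2(lcs_memo(S, T, m, n - 1),
                          lcs_memo(S, T, m - 1, n));
    return memo[m][n];
}

/* Before the first call, mark every entry as unknown:
   memset(memo, -1, sizeof memo); */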

Example of LCS of two strings
[A sequence of slides builds the LCS table cell by cell for two short strings, with columns headed ∅, A, G, C, T and intermediate entries such as (G), (A), (GA), (AC), and (GC).]

Example of LCS of two strings
- New runtime: O(n ⋅ m), much less than the previous O(2^n)
- But an increased space requirement to store the working data (a bottom-up sketch follows below)
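A bottom-up (tabulated) version of LCS might look like the following sketch, which assumes both strings have length at most 100 (the function name and size bound are illustrative):

/* Bottom-up LCS: fill an (m+1) x (n+1) table of subproblem results,
   smallest prefixes first, in O(m*n) time and space. */
int lcs_table(const char *S, const char *T, int m, int n)
{
    static int L[101][101];
    for (int i = 0; i <= m; i++) {
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;                     /* an empty prefix has LCS 0 */
            else if (S[i - 1] == T[j - 1])
                L[i][j] = 1 + L[i - 1][j - 1];   /* last characters match */
            else
                L[i][j] = L[i - 1][j] > L[i][j - 1]   /* take the better of  */
                        ? L[i - 1][j] : L[i][j - 1];  /* the two subproblems */
        }
    }
    return L[m][n];
}

Using the earlier example strings, lcs_table("AGGTAB", "GXTXAYB", 6, 7) returns 4 (the LCS is "GTAB").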

ANY QUESTIONS?

References
http://faculty.ycp.edu/~dbabcock/cs360/lectures/lecture13.html
http://www.algorithmist.com/index.php/Longest_Common_Subsequence