Week 6 - Wednesday CS322

Last time
What did we talk about last time?
- Exam 1 post-mortem
- Optimal prefix codes for data compression

Questions?

Logical warmup
On the planet Og, there are green people and red people; likewise, there are northerners and southerners.
- Green northerners tell the truth
- Red northerners lie
- Green southerners lie
- Red southerners tell the truth
Consider the following statements by two natives named Ork and Bork:
- Ork: Bork is from the north
- Bork: Ork is from the south
- Ork: Bork is red
- Bork: Ork is green
What are the colors and origins of Ork and Bork?

Optimal prefix code example
Create an optimal prefix code tree for the following letters (and their frequencies):
a 0.4
b 0.3
c 0.2
d 0.1
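To make the exercise concrete, here is a minimal Java sketch of the standard greedy construction for an optimal prefix code tree (Huffman's algorithm), using the frequencies above; the class and method names are my own, not from the course materials.

import java.util.PriorityQueue;

public class PrefixCodeDemo {
    // Hypothetical helper class for tree nodes
    static class Node {
        double freq;
        char symbol;      // only meaningful for leaves
        Node left, right;
        Node(double freq, char symbol) { this.freq = freq; this.symbol = symbol; }
        Node(double freq, Node left, Node right) {
            this.freq = freq; this.left = left; this.right = right;
        }
    }

    public static void main(String[] args) {
        // Letters and frequencies from the slide
        char[] symbols = {'a', 'b', 'c', 'd'};
        double[] freqs = {0.4, 0.3, 0.2, 0.1};

        // Greedy construction: repeatedly merge the two least frequent subtrees
        PriorityQueue<Node> queue = new PriorityQueue<>((x, y) -> Double.compare(x.freq, y.freq));
        for (int i = 0; i < symbols.length; i++) {
            queue.add(new Node(freqs[i], symbols[i]));
        }
        while (queue.size() > 1) {
            Node first = queue.poll();
            Node second = queue.poll();
            queue.add(new Node(first.freq + second.freq, first, second));
        }

        printCodes(queue.poll(), "");
    }

    // Walk the tree: left edges append 0, right edges append 1
    static void printCodes(Node node, String code) {
        if (node.left == null && node.right == null) {
            System.out.println(node.symbol + ": " + code);
            return;
        }
        printCodes(node.left, code + "0");
        printCodes(node.right, code + "1");
    }
}

Running it merges the two smallest frequencies first (c and d), so the most frequent letter a ends up with the shortest codeword.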

Divide and Conquer

Divide and conquer
Divide and conquer algorithms are ones in which we divide a problem into parts and recursively solve each part.
Then, we do some work to combine the solutions to each part into a final solution.
Divide and conquer algorithms are often simple.
However, their running time can be challenging to compute because recursion is involved.

Mergesort algorithm
If there are two elements in the array or fewer then
    Make sure they're in order
Else
    Divide the list into two halves
    Recursively merge sort the two halves
    Merge the two sorted halves together into the final list
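As a sketch only (the variable and method names are mine), here is one way the slide's outline might look in Java:

import java.util.Arrays;

public class MergeSortDemo {
    // Recursive merge sort following the slide's outline
    static int[] mergeSort(int[] a) {
        if (a.length <= 2) {                       // base case: at most two elements
            if (a.length == 2 && a[0] > a[1]) {    // make sure they're in order
                int tmp = a[0]; a[0] = a[1]; a[1] = tmp;
            }
            return a;
        }
        int mid = a.length / 2;                    // divide the list into two halves
        int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);                 // merge the sorted halves
    }

    // Merge two sorted arrays into one sorted array
    static int[] merge(int[] left, int[] right) {
        int[] result = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        while (i < left.length) result[k++] = left[i++];
        while (j < right.length) result[k++] = right[j++];
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(mergeSort(new int[]{5, 2, 9, 4, 7, 1})));
    }
}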

Time for mergesort
The algorithm is simple, but recursive.
We'll use T(n) to describe the total running time recursively:
T(n) ≤ c, for n ≤ 2
T(n) ≤ 2T(n/2) + cn, for n > 2
Is it really the same constant c for both? No, but it's an inequality, so we just take the bigger one.

Recursive running time
If we can, we want to turn the recursive version of T(n) into an explicit (non-recursive) Big Oh bound.
Before we do, note that we could similarly have written:
T(n) ≤ 2T(n/2) + O(n)
Also, we can't guarantee that n is even.
A more accurate statement would be
T(n) ≤ T(⌈n/2⌉) + T(⌊n/2⌋) + cn
Usually, we ignore that issue and assume that n is a power of 2, evenly divisible forever.

Intuition about mergesort recursion
Each time, the recursion cuts the work in half while doubling the number of problems.
The total work at each level is thus always cn.
To go from n to 2, we have to cut the size in half (log2 n) – 1 times.
[Recursion tree: the root does cn work, its two children do cn/2 each, the four grandchildren do cn/4 each, so each level totals cn.]

Checking a solution
We know that there's cn work at each level and approximately log2 n levels.
If we think that the running time is O(n log n), we can guess that T(n) ≤ cn log2 n and substitute that in for T(n/2):
T(n) ≤ 2T(n/2) + cn
     ≤ 2c(n/2)log2(n/2) + cn
     = cn(log2 n – 1) + cn
     = cn log2 n – cn + cn
     = cn log2 n

Recursively defined sequences
Defining a sequence recursively, as with mergesort, is called a recurrence relation.
The initial conditions give the starting point.
Example:
Initial conditions: c0 = 1, c1 = 2
Recurrence relation: ck = ck-1 + k·ck-2 + 1, for all integers k ≥ 2
Find c2, c3, and c4
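Working these out directly from the definition (a worked example added here, not part of the original slide):
c2 = c1 + 2·c0 + 1 = 2 + 2·1 + 1 = 5
c3 = c2 + 3·c1 + 1 = 5 + 3·2 + 1 = 12
c4 = c3 + 4·c2 + 1 = 12 + 4·5 + 1 = 33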

Writing recurrence relations in multiple ways
Consider the following recurrence relation:
sk = 3sk-1 – 1, for all integers k ≥ 1
Now consider this one:
sk+1 = 3sk – 1, for all integers k ≥ 0
Both recurrence relations have the same meaning.

Differences in initial conditions
Even if the recurrence relations are equivalent, different initial conditions can cause a different sequence.
Example:
ak = 3ak-1, for all integers k ≥ 2, with a1 = 2
bk = 3bk-1, for all integers k ≥ 2, with b1 = 1
Find a1, a2, and a3
Find b1, b2, and b3
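Worked out from the definitions (added here for reference): a1 = 2, a2 = 3·2 = 6, a3 = 3·6 = 18, while b1 = 1, b2 = 3·1 = 3, b3 = 3·3 = 9. Same recurrence, different sequences.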

Practice
Using just your wits, you should be able to figure out recursive definitions for many sequences.
- Give a recurrence relation for the positive even integers: 2, 4, 6, 8, …
- Give a recurrence relation for the triangular numbers: 1, 3, 6, 10, 15, …
- Give a recurrence relation for the perfect squares: 1, 4, 9, 16, 25, …
- Give a recurrence relation for factorial: 1, 2, 6, 24, 120, 720, …
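One possible set of answers (added here; other equivalent definitions exist):
- Positive even integers: a1 = 2; ak = ak-1 + 2, for k ≥ 2
- Triangular numbers: t1 = 1; tk = tk-1 + k, for k ≥ 2
- Perfect squares: s1 = 1; sk = sk-1 + 2k – 1, for k ≥ 2
- Factorial: f1 = 1; fk = k·fk-1, for k ≥ 2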

The proof is on fire
Perhaps you believe that you have the correct recurrence relation for perfect squares. Can you prove it?
Hint: Use mathematical induction.
Recursion and induction are two sides of the same coin: the right cross and the left jab, if you will, of the computer scientist's arsenal.

Fibonacci
We all know and love Fibonacci by now.
Recall that it is supposed to model rabbit populations.
For the first month of their lives, they cannot reproduce.
After that, they reproduce every single month.

Fibonacci rules
From this information about rabbit physiology (which is a simplification, of course), we can think about pairs of rabbits.
At time k, rabbits born at time k – 1 will not reproduce.
Any rabbits born at k – 2 or earlier will, however.
So, we assume that all the rabbits from time k – 2 have doubled between time k – 1 and k.
Thus, our recurrence relation is: Fk = Fk-1 + Fk-2, for k ≥ 2
Assuming one starting pair of rabbits, our initial conditions are: F0 = 1, F1 = 1
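As an illustration (not part of the slides), a short Java sketch that computes the sequence iteratively, using the slide's initial conditions F0 = F1 = 1; the method name is my own:

public class FibonacciDemo {
    // Iterative computation of the slide's Fibonacci sequence (F0 = F1 = 1)
    static long fibonacci(int n) {
        if (n < 2) return 1;                 // initial conditions
        long previous = 1, current = 1;
        for (int k = 2; k <= n; k++) {       // Fk = Fk-1 + Fk-2
            long next = previous + current;
            previous = current;
            current = next;
        }
        return current;
    }

    public static void main(String[] args) {
        for (int k = 0; k <= 10; k++) {
            System.out.println("F" + k + " = " + fibonacci(k));
        }
    }
}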

Compound interest
Interest is compounded based on some period of time.
We can define the value recursively.
Let i be the annual percentage rate (APR) of interest.
Let m be the number of times per year the interest is compounded.
Thus, the total value of the investment at the kth period is
Pk = Pk-1 + Pk-1·(i/m), for k ≥ 1
P0 = initial principal
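A small Java sketch of the recurrence (the specific numbers for P0, i, and m are made-up example values, not from the slides):

public class CompoundInterestDemo {
    public static void main(String[] args) {
        double initialPrincipal = 1000.0;  // P0 (example value)
        double i = 0.06;                   // annual percentage rate
        int m = 12;                        // compounding periods per year

        // Apply the recurrence Pk = Pk-1 + Pk-1 * (i / m), one period at a time
        double value = initialPrincipal;
        for (int k = 1; k <= m; k++) {     // one year of monthly compounding
            value = value + value * (i / m);
            System.out.printf("P%d = %.2f%n", k, value);
        }
    }
}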

Solving Recurrence Relations

Recursion … is confusing
We don't naturally think recursively (but perhaps you can raise your children to think that way?).
With an interest rate of i, a principal of P0, and m periods per year, the investment will yield P0(1 + i/m)^k after k periods.
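A quick sanity check (added here as an illustration, using the same made-up example values as before) that the closed form agrees with the recurrence:

public class CompoundInterestClosedForm {
    public static void main(String[] args) {
        double initialPrincipal = 1000.0;  // example values only
        double i = 0.06;
        int m = 12;
        int k = 12;

        // Closed form from the slide: Pk = P0 * (1 + i/m)^k
        double closedForm = initialPrincipal * Math.pow(1 + i / m, k);

        // Recurrence for comparison: Pk = Pk-1 + Pk-1 * (i/m)
        double recursive = initialPrincipal;
        for (int period = 1; period <= k; period++) {
            recursive = recursive + recursive * (i / m);
        }

        System.out.printf("closed form: %.2f, recurrence: %.2f%n", closedForm, recursive);
    }
}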

Finding explicit formulas by iteration
We want to be able to turn recurrence relations into explicit formulas whenever possible.
Often, the simplest way is to find these formulas by iteration.
The technique of iteration relies on writing out many expansions of the recursive sequence and looking for patterns.
That's it.

Iteration example
Find a pattern for the following recurrence relation:
ak = ak-1 + 2, with a0 = 1
Start at the first term.
Write the next below.
Do not combine like terms!
Leave everything in expanded form until patterns emerge.
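One way the expansion might look (a worked illustration added here, not from the original slide):
a1 = a0 + 2
a2 = (a0 + 2) + 2
a3 = ((a0 + 2) + 2) + 2
…
an = a0 + n·2 = 1 + 2n
With a0 = 1, this suggests the odd integers 1, 3, 5, 7, …, though it remains a guess until it is proved.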

Arithmetic sequence
In principle, we should use mathematical induction to prove that the explicit formula we guess actually holds.
The previous example (odd integers) is a simple example of an arithmetic sequence.
These are recurrences of the form:
ak = ak-1 + d, for integers k ≥ 1
Note that these recurrences are always equivalent to
an = a0 + d·n, for all integers n ≥ 0

Geometric sequence
Find a pattern for the following recurrence relation:
ak = r·ak-1, for k ≥ 1, with a0 = a
Again, start at the first term.
Write the next below.
Do not combine like terms!
Leave everything in expanded form until patterns emerge.

Geometric sequence
It appears that any geometric sequence with the following form
ak = r·ak-1, for k ≥ 1
is equivalent to
an = a0·r^n, for all integers n ≥ 0
This result applies directly to compound interest calculation.

Employing outside formulas
Intelligent pattern matching gets you a long way.
However, it is sometimes necessary to substitute in some known formula to simplify a series of terms.
Recall:
Geometric series: 1 + r + r^2 + … + r^n = (r^(n+1) – 1)/(r – 1)
Arithmetic series: 1 + 2 + 3 + … + n = n(n + 1)/2

How many edges are in a complete graph?
In a complete graph, every node is connected to every other node.
If we want to make a complete graph with k nodes, we can take a complete graph with k – 1 nodes, add a new node, and add k – 1 edges (so that all the old nodes are connected to the new node).
Recursively, this means that the number of edges in a complete graph is
sk = sk-1 + (k – 1), for k ≥ 2
s1 = 0 (no edges in a graph with a single node)
Use iteration to solve this recurrence relation.
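One possible worked solution by iteration (added here), using the arithmetic series formula from the earlier slide:
sk = sk-1 + (k – 1)
   = sk-2 + (k – 2) + (k – 1)
   = …
   = s1 + 1 + 2 + … + (k – 1)
   = 0 + (k – 1)k/2
so sk = k(k – 1)/2, the number of ways to choose 2 endpoints from k nodes.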

Further recurrence relations
We have seen that recurrence relations of the form
T(n) ≤ 2T(n/2) + cn
are bounded by O(n log n).
What about
T(n) ≤ qT(n/2) + cn
where q is bigger than 2 (more than two sub-problems)?
There will still be log2 n levels of recursion.
However, there will not be a consistent cn amount of work at each level.

Consider q = 3
[Recursion tree: the root does cn work, its three children do cn/2 each, the nine grandchildren do cn/4 each; the level totals are cn, (3/2)cn, (9/4)cn, and so on, growing by a factor of 3/2 per level.]

Converting to summation
For q = 3, it's
T(n) ≤ Σ_{j = 0}^{(log2 n) – 1} (3/2)^j · cn
In general, it's
T(n) ≤ Σ_{j = 0}^{(log2 n) – 1} (q/2)^j · cn = cn · Σ_{j = 0}^{(log2 n) – 1} (q/2)^j
This is a geometric series, where r = q/2:
T(n) ≤ cn · (r^(log2 n) – 1)/(r – 1) ≤ cn · r^(log2 n)/(r – 1)

Final bound
T(n) ≤ cn · (r^(log2 n) – 1)/(r – 1) ≤ cn · r^(log2 n)/(r – 1)
Since r – 1 is a constant, we can pull it out:
T(n) ≤ (c/(r – 1)) · n · r^(log2 n)
For a > 1 and b > 1, a^(log b) = b^(log a), thus
r^(log2 n) = n^(log2 r) = n^(log2 (q/2)) = n^((log2 q) – 1)
T(n) ≤ (c/(r – 1)) · n · n^((log2 q) – 1) ≤ (c/(r – 1)) · n^(log2 q)
which is O(n^(log2 q))
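A rough empirical check (added here as an illustration; the constant c is taken to be 1 and the names are my own): the recursive work for q = 3, divided by n^(log2 3), should level off as n grows.

public class RecursionWorkDemo {
    static final int Q = 3;  // number of sub-problems at each level

    // Total work for T(n) = Q * T(n/2) + n, with a constant amount at the base
    static double work(int n) {
        if (n <= 2) return 1;
        return Q * work(n / 2) + n;
    }

    public static void main(String[] args) {
        double exponent = Math.log(Q) / Math.log(2);   // log2 Q, about 1.585 for Q = 3
        for (int n = 16; n <= 4096; n *= 4) {
            double total = work(n);
            double ratio = total / Math.pow(n, exponent);
            // The ratio should level off, consistent with T(n) = O(n^(log2 Q))
            System.out.printf("n = %4d   work = %9.0f   ratio to n^%.3f: %.3f%n",
                              n, total, exponent, ratio);
        }
    }
}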

What about a single sub-problem?
We will still have log2 n – 1 levels.
However, we'll cut our work in half each time:
T(n) ≤ T(n/2) + cn ≤ Σ_{j = 0}^{(log2 n) – 1} (1/2)^j · cn = cn · Σ_{j = 0}^{(log2 n) – 1} (1/2)^j
Summing all the way to infinity, 1 + 1/2 + 1/4 + … = 2
Thus, T(n) ≤ 2cn, which is O(n)

What might that look like in code?
Here's a non-recursive version in Java.
We've just shown that this is O(n), in spite of the two for loops: the inner loop body runs 1 + 2 + 4 + … times for the successive values of i, which totals less than 2n.

int counter = 0;
for (int i = 1; i <= n; i *= 2)       // i doubles: 1, 2, 4, ..., up to n
    for (int j = 1; j <= i; j++)      // inner loop does i units of work
        counter++;

Quiz

Upcoming

Next time…
Convolutions and the Fast Fourier Transform
Guest lecture by Dr. McDevitt!

Reminders
Homework 4 is due on Monday.
Read section 5.6.