
Chapter 2 Analysis Framework 2.1 – 2.2

Homework
Remember questions due Wed.
Read sections 2.1 and 2.2, pages 52-59

Agenda: Analysis of Algorithms
Blackboard
Issues:
- Correctness
- Time efficiency
- Space efficiency
- Optimality
Approaches:
- Theoretical analysis
- Empirical analysis

Time Efficiency Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size

Theoretical analysis of time efficiency
Basic operation: the operation that contributes most towards the running time of the algorithm.
T(n) ≈ c_op · C(n), where T(n) is the running time, c_op is the execution time of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.

Input size and basic operation examples

Problem | Input size measure | Basic operation
Search for key in a list of n items | number of items in the list, n | key comparison
Multiply two matrices of floating-point numbers | dimensions of the matrices | floating-point multiplication
Compute a^n | n | floating-point multiplication
Graph problem | number of vertices and/or edges | visiting a vertex or traversing an edge

Empirical analysis of time efficiency
Select a specific (typical) sample of inputs
Use a physical unit of time (e.g., milliseconds) OR count the actual number of basic operations
Analyze the empirical data

Best-case, average-case, worst-case
For some algorithms, efficiency depends on the type of input:
Worst case: W(n) - maximum over inputs of size n
Best case: B(n) - minimum over inputs of size n
Average case: A(n) - "average" over inputs of size n
- Number of times the basic operation will be executed on typical input
- NOT the average of the worst and best cases
- Expected number of basic-operation repetitions, treated as a random variable under some assumption about the probability distribution of all possible inputs of size n

Example: Sequential search
Problem: Given a list of n elements and a search key K, find an element equal to K, if any.
Algorithm: Scan the list and compare its successive elements with K until either a matching element is found (successful search) or the list is exhausted (unsuccessful search).
Worst case?
Best case?
Average case?
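The three cases become concrete once you count the basic operation (the key comparison). A minimal Python sketch; the function name and sample list are mine, not from the slides:

```python
def sequential_search(a, key):
    """Linear scan; returns (index or -1, number of key comparisons)."""
    comparisons = 0
    for i, item in enumerate(a):
        comparisons += 1              # the basic operation: one key comparison
        if item == key:
            return i, comparisons     # successful search
    return -1, comparisons            # unsuccessful search

lst = list(range(10))                  # n = 10
print(sequential_search(lst, 0))       # best case: (0, 1)
print(sequential_search(lst, 9))       # worst successful case: (9, 10)
print(sequential_search(lst, 99))      # unsuccessful: (-1, 10), also n comparisons
```

So C(n) is 1 in the best case and n in the worst case, whether or not the key is present.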

Types of formulas for basic operation count
Exact formula, e.g., C(n) = n(n-1)/2
Formula indicating order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5n^2
Formula indicating order of growth with an unknown multiplicative constant, e.g., C(n) ≈ cn^2

Order of growth
Most important: order of growth within a constant multiple as n → ∞
Examples:
- How much faster will the algorithm run on a computer that is twice as fast?
- How much longer does it take to solve a problem of double the input size?
See Table 2.1

Table 2.1
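Table 2.1's message, how each common growth function responds when the input size doubles, can be reproduced in a few lines. A Python sketch; the selection of functions is mine:

```python
import math

# growth factor of each common efficiency class when n doubles
funcs = {
    "log2 n":   lambda n: math.log2(n),
    "n":        lambda n: n,
    "n log2 n": lambda n: n * math.log2(n),
    "n^2":      lambda n: n ** 2,
    "n^3":      lambda n: n ** 3,
    "2^n":      lambda n: 2 ** n,
}

n = 16
for name, f in funcs.items():
    print(f"{name:8s} grows by a factor of {f(2 * n) / f(n):g}")
```

Doubling n barely moves log n, doubles n, quadruples n^2, and squares the count for 2^n (a factor of 2^n itself).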

Day 4: Agenda
O, Θ, Ω
Limits
Definitions
Examples
Code → O(?)

Homework
Due on Monday 11:59 PM
Electronic submission: see website.
Try to log into Blackboard
Finish reading 2.1 and 2.2
Pages 60-61, questions 2, 3, 4, 5, & 9

Asymptotic growth rate
A way of comparing functions that ignores constant factors and small input sizes
O(g(n)): class of functions t(n) that grow no faster than g(n)
Θ(g(n)): class of functions t(n) that grow at the same rate as g(n)
Ω(g(n)): class of functions t(n) that grow at least as fast as g(n)
See figures 2.1, 2.2, 2.3

Big-oh

Big-omega

Big-theta

Using Limits
Compute lim (n→∞) t(n)/g(n):
= 0 if t(n) grows slower than g(n)
= ∞ if t(n) grows faster than g(n)
= c > 0 if t(n) grows at the same rate as g(n)

L'Hopital's Rule
If lim (n→∞) f(n) = lim (n→∞) g(n) = ∞ and the derivatives f', g' exist, then
lim (n→∞) f(n)/g(n) = lim (n→∞) f'(n)/g'(n)

Examples: 10n vs. 2n^2; n(n+1)/2 vs. n^2; log_b n vs. log_c n

Definition
f(n) = O(g(n)) if there exist
a positive constant c and
a non-negative integer n_0
such that f(n) ≤ c·g(n) for every n > n_0
Examples:
10n is O(2n^2)
5n+20 is O(10n)
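The definition is just a claim about witnesses c and n_0, so it can be spot-checked numerically. A hedged Python sketch of the two examples; the witness values are my choices, and checking a finite range is evidence, not a proof:

```python
def f(n): return 5 * n + 20     # the function being bounded
def g(n): return 10 * n         # the bounding function

# witnesses: f(n) <= c * g(n) for every n > n0
# 5n + 20 <= 10n  <=>  20 <= 5n  <=>  n >= 4, so c = 1, n0 = 4 works
c, n0 = 1, 4
assert all(f(n) <= c * g(n) for n in range(n0 + 1, 10_000))

# 10n is O(2n^2): 10n <= 1 * 2n^2 once n >= 5
assert all(10 * n <= 2 * n ** 2 for n in range(5, 10_000))
print("witnesses check out on the sampled range")
```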

Basic Asymptotic Efficiency classes
1: constant
log n: logarithmic
n: linear
n log n: "n log n"
n^2: quadratic
n^3: cubic
2^n: exponential
n!: factorial

Non-recursive algorithm analysis
Analysis steps:
Decide on a parameter n indicating input size
Identify the algorithm's basic operation
Determine the worst, average, and best cases for input of size n
Set up a summation for C(n) reflecting the algorithm's loop structure
Simplify the summation using standard formulas

Example
for (x = 0; x < n; x++)
    a[x] = max(b[x], c[x]);

Example
for (x = 0; x < n; x++)
    for (y = x; y < n; y++)
        a[x][y] = max(b[x], c[y]);

Example
for (x = 0; x < n; x++)
    for (y = 0; y < n/2; y++)
        for (z = 0; z < n/3; z++)
            a[z] = max(a[x], c[y]);
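Counting the basic operation in the triple loop confirms the product n · (n/2) · (n/3) ≈ n^3/6. A Python translation of the loop bounds, using integer division as the C-style bounds would:

```python
def count_ops(n):
    """Count basic-operation executions for the n * n/2 * n/3 triple loop."""
    count = 0
    for x in range(n):
        for y in range(n // 2):
            for z in range(n // 3):
                count += 1       # one basic operation per innermost iteration
    return count

n = 12
print(count_ops(n), n * (n // 2) * (n // 3))   # both 288: about n^3/6, i.e. Theta(n^3)
```

The 1/6 constant disappears in the order of growth: the algorithm is cubic.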

Example
y = n;
while (y > 0)
    if (a[y--] == b[y--]) break;

Day 5: Agenda
Go over the answer to hw2
Try electronic submission
Do some problems on the board
Time permitting: recursive algorithm analysis

Homework
Remember to electronically submit hw3 before Tues. morning
Read section 2.3 thoroughly! Pages 61-67

Day 6: Agenda
First, what have we learned so far about non-recursive algorithm analysis
Second, log_b n and b^n: the enigma is solved.
Third, what is the deal with Abercrombie & Fitch
Fourth, the recursive analysis tool kit.

Homework 4 and Exam 1
Last homework before Exam 1
Due on Friday 2/6 (electronic?)
Will be returned on Monday 2/9
All solutions will be posted next Monday 2/9
Exam 1: Wed. 2/11

Homework 4
Page 68, questions 4, 5 and 6
Pages 77 and 78, questions 8 and 9
We will do similar example questions all day on Wed 2/4

What have we learned?
Non-recursive algorithm analysis
First, identify the problem size. Typically:
- a loop counter
- an array size
- a set size
- the size of a value

Basic operations
Second, identify the basic operation
Usually a small block of code, or even a single statement, that is executed over and over.
Sometimes the basic operation is a comparison that is hidden inside of the loop
Example:
while (target != a[x]) x++;

Single loops
One loop from 1 to n: O(n)
Be aware this is the same as:
- 2 independent loops from 1 to n
- c independent loops from 1 to n
- a loop from 5 to n-1
- a loop from 0 to n/2
- a loop from 0 to n/c

Nested loops
for (x = 0; x < n; x++)
    for (y = 0; y < n; y++)        → O(n^2)

for (x = 0; x < n; x++)
    for (y = 0; y < n; y++)
        for (z = 0; z < n; z++)    → O(n^3)

But remember: we can have c independent nested loops, or the loops can terminate early at n/c.

Most non-recursive algorithms reduce to one of these efficiency classes:
1: constant
log n: logarithmic
n: linear
n log n
n^2: quadratic
n^3: cubic
2^n: exponential

What else?
Best cases often arise when loops terminate early for specific inputs.
For worst cases, consider the following: is it possible that a loop will NOT terminate early?
Average cases are not the midpoint or average of the best and worst cases
- Average cases require us to consider a set of typical inputs
- We won't really worry too much about average cases until we hit more difficult problems.

Anything else?
Important questions you should be asking:
How does log_2 n arise in non-recursive algorithms?
How does 2^n arise?
Dr. B, please show me the actual loops that cause this!

log_2 n
for (x = 1; x < n; x++) {
    Basic operation;
    x = x * 2;
}

for every item in the list do
    Basic operation;
    eliminate or discard half the items;

Note: these types of log_2 n loops can be nested inside of O(n), O(n^2), or O(n^k) loops, which leads to
n log n
n^2 log n
n^k log n
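That a doubling loop runs about log_2 n times is easy to verify by counting iterations. A small Python sketch of the same idea, starting the counter at 1 so the doubling is clean:

```python
import math

def doubling_loop(n):
    """Count iterations of a loop whose counter doubles each pass."""
    count, x = 0, 1
    while x < n:          # x takes the values 1, 2, 4, 8, ...
        count += 1        # the basic operation runs once per pass
        x *= 2
    return count

for n in (8, 1024, 10 ** 6):
    print(n, doubling_loop(n), math.ceil(math.log2(n)))   # counts match ceil(log2 n)
```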

log_2 n
Which, by the way, is pretty much equivalent to log_b n, ln n, or the magical lg(n)
But don't be mistaken:
O(n log n) is different than O(n), even though log n grows so slowly.

Last thing: 2^n
How does 2^n arise in real algorithms?
Let's consider n^n: how can you program n nested loops?
It's impossible, right?
So, how the heck does it happen in practice?

Think
Write an algorithm that prints "*" 2^n times.
Is it hard?
It depends on how you think.

Recursive way
fun(int x) {
    if (x > 0) {
        print "*";
        fun(x-1);
        fun(x-1);
    }
}
Then call the function: fun(n);

Non-recursive way
for (x = 0; x < pow(2,n); x++)
    print "*";
WTF? Be very wary of your input size. If your input size is truly n, then this is truly O(2^n) with just one loop.
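Both versions can be mirrored in Python and their star counts compared. One honest wrinkle, noted in the comments: the recursive version emits 2^n - 1 stars rather than exactly 2^n, which is still Theta(2^n):

```python
def stars_recursive(x):
    """Mirror of the recursive fun(): one star plus two recursive calls."""
    if x <= 0:
        return 0
    return 1 + 2 * stars_recursive(x - 1)     # S(n) = 1 + 2*S(n-1) = 2^n - 1

def stars_iterative(n):
    """Mirror of the single loop running pow(2, n) times."""
    return sum(1 for _ in range(2 ** n))      # exactly 2^n stars

for n in range(6):
    print(n, stars_recursive(n), stars_iterative(n))   # 2^n - 1 vs. 2^n
```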

That's it!
That's really it.
That's everything I want you to know about non-recursive algorithm analysis.
Well, for now.

Enigma
We know that log_2 n = Θ(log_3 n)
But what about 2^n = Θ(3^n)?
Here is how I will disprove it…
This gives you an idea of how I like to see questions answered.

Algorithm Analysis Recursive Algorithm Toolkit

Example: Recursive evaluation of n!
Definition: n! = 1 · 2 · … · (n-1) · n
Recursive definition of n!:
if n = 0 then F(n) := 1
else F(n) := F(n-1) * n
return F(n)
Recurrence for the number of multiplications to compute n!:
M(n) = M(n-1) + 1 for n > 0, M(0) = 0
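The recurrence for the number of multiplications can be checked by instrumenting the recursive definition. A Python sketch that returns the count alongside the value, showing one multiplication per recursive step (so M(n) = n):

```python
def fact(n):
    """Return (n!, number of multiplications performed)."""
    if n == 0:
        return 1, 0               # base case: no multiplications
    f, m = fact(n - 1)
    return f * n, m + 1           # one multiplication per recursive step

value, mults = fact(5)
print(value, mults)               # 120 computed with 5 multiplications
```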

Important recurrence types:
One (constant) operation reduces the problem size by one.
T(n) = T(n-1) + c, T(1) = d
Solution: T(n) = (n-1)c + d (linear)

Important recurrence types:
A pass through the input reduces the problem size by one.
T(n) = T(n-1) + cn, T(1) = d
Solution: T(n) = [n(n+1)/2 - 1]c + d (quadratic)

Important recurrence types:
One (constant) operation reduces the problem size by half.
T(n) = T(n/2) + c, T(1) = d
Solution: T(n) = c lg n + d (logarithmic)

Important recurrence types:
A pass through the input reduces the problem size by half.
T(n) = 2T(n/2) + cn, T(1) = d
Solution: T(n) = cn lg n + dn (n log n)
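All four closed-form solutions can be verified numerically for powers of two. A Python sketch with arbitrary constants c and d (my choices); each T evaluates the recurrence directly and is compared against its closed form:

```python
import math

c, d = 3, 7   # arbitrary constants for the recurrences

def T1(n): return d if n == 1 else T1(n - 1) + c           # -> (n-1)c + d
def T2(n): return d if n == 1 else T2(n - 1) + c * n       # -> [n(n+1)/2 - 1]c + d
def T3(n): return d if n == 1 else T3(n // 2) + c          # -> c lg n + d
def T4(n): return d if n == 1 else 2 * T4(n // 2) + c * n  # -> cn lg n + dn

n = 64                      # a power of two, so n//2 halving is exact
lg = int(math.log2(n))
assert T1(n) == (n - 1) * c + d
assert T2(n) == (n * (n + 1) // 2 - 1) * c + d
assert T3(n) == c * lg + d
assert T4(n) == c * n * lg + d * n
print("all four closed forms verified for n =", n)
```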

A general divide-and-conquer recurrence
T(n) = aT(n/b) + f(n), where f(n) = Θ(n^k):
1. a < b^k: T(n) = Θ(n^k)
2. a = b^k: T(n) = Θ(n^k lg n)
3. a > b^k: T(n) = Θ(n^(log_b a))
Note: the same results hold with O instead of Θ.

Recursive Algorithm Analysis
Input: an array of floats a[0…n-1] and an integer counter x
Fun(int x, float a[]) {
    if (x == 0) return a[0];
    if (a[0] > a[x]) swap(a[0], a[x]);
    Fun(x-1, a);
}

Example
Fun(int x, float a[]) {
    if (x == 0) return a[0];
    if (a[0] > a[x]) swap(a[0], a[x]);
    Fun(x-1, a);
}
Fun(5, a)
  Fun(4, a)
    Fun(3, a)
      Fun(2, a)
        Fun(1, a)
          Fun(0, a)
            return a[0]

Analysis
Fun(int x, float a[]) {
    if (x == 0) return a[0];
    if (a[0] > a[x]) swap(a[0], a[x]);
    Fun(x-1, a);
}
First, identify the input size.
The running time seems to depend on the value of x.

Analysis
Second, identify what terminates the recursion.
Think of the running time as a function Fun(x).
Fun(0) is the base case.

Analysis
Third, identify the basic operation.
The basic operation could be a constant operation.
Or it could be embedded in a loop that depends on the input size.

Analysis
Fourth, identify the recursive call and how the input size is changed.
Warning: the input size reduction may not be part of the recursive call.

Analysis
Finally, put all the pieces together.
Base case: Fun(0)
Recursive structure: Fun(x) = Fun(x-1)
Interior complexity: Fun(x) = Fun(x-1) + O(1)
Recursive algorithms usually fit one of the 4 basic models.

Recursive Algorithm Analysis
Input: a vector of integers v
Fun(v[1…n]) {
    if size of v is 1
        return 1
    else
        q = Fun(v[1..n-1]);
        if (v[q] > v[n]) swap(v[q], v[n]);
        temp = 0;
        for x = 1 to n
            if (v[x] > temp)
                temp = v[x];
                p = x;
        return p;
}

Example
Fun(3 1 4 2 5): temp = Fun(3 1 4 2)
Fun(3 1 4 2): temp = Fun(3 1 4)
Fun(3 1 4): temp = Fun(3 1)
Fun(3 1): temp = Fun(3)
Fun(3): return 1
Fun(3 1): v[q] = 3; v[n] = 1; swap; v: 1 3; return 2
Fun(3 1 4): v[q] = 3; v[n] = 4; v: 1 3 4; return 3
Fun(3 1 4 2): v[q] = 4; v[n] = 2; swap; v: 1 3 2 4; return 4
Fun(3 1 4 2 5): v[q] = 4; v[n] = 5; v: 1 3 2 4 5; return 5

Example
First, the input size is the vector size.

Example
Second, when the size of the vector is 1, the recursion terminates.

Example
Third, the basic operations are not simple.

Example
There is an O(1) compare and swap.

Example
There is an O(n) loop with a constant number of operations inside.

Example
Fourth, the recursive call decreases the vector size (n) by one.

Example
Finally, we can describe the running time as
T(n) = T(n-1) + O(n)
which is the quadratic basic form.
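The T(n) = T(n-1) + O(n) conclusion can be confirmed by instrumenting the algorithm. A 0-indexed Python sketch of Fun that counts comparisons; it keeps the slide's temp = 0 sentinel, so it assumes the vector holds positive integers:

```python
def fun(v, n=None):
    """0-indexed sketch of Fun on v[0..n-1]; returns (p, comparison count)."""
    if n is None:
        n = len(v)
    if n == 1:
        return 0, 0
    q, count = fun(v, n - 1)                 # recurse on the n-1 prefix
    count += 1                               # the O(1) compare (and maybe swap)
    if v[q] > v[n - 1]:
        v[q], v[n - 1] = v[n - 1], v[q]
    temp, p = 0, 0
    for x in range(n):                       # the O(n) max-finding loop
        count += 1
        if v[x] > temp:
            temp, p = v[x], x
    return p, count

v = [3, 1, 4, 2, 5]
print(fun(v), v)     # (4, 18): index of the max, with C(n) = n(n+1)/2 + n - 2
```

Unwinding C(n) = C(n-1) + (n+1) with C(1) = 0 gives C(n) = n(n+1)/2 + n - 2, i.e. Theta(n^2), matching the quadratic basic form.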

Summary of simple recursive algorithms
How does the input size (i.e., the terminating variable) change for each recursive function call?
In practice there are two basic options:
1. n → n - 1
2. n → n/2

n → n-1
This means the recursion is simply an O(n) loop. This leads to two common cases:
Case #1:
fun(n) { if (n==1) quit; O(1); fun(n-1); }  → O(n)
Case #2:
fun(n) { if (n==1) quit; O(n); fun(n-1); }  → O(n^2)

n → n-1
Generic case:
fun(n) { if (n==1) quit; O(n^k); fun(n-1); }  → O(n^(k+1))

n → n/2
This means the recursion is simply an O(log n) loop. This leads to three common cases:
Case #3:
fun(n) { if (n==1) quit; O(1); fun(n/2); }  → O(log n)
Case #4:
fun(n) { if (n==1) quit; O(n); fun(n/2); fun(n/2); }  → O(n log n)
Case #5:
fun(n) { if (n==1) quit; O(n); fun(n/2); }  → O(n)

Generic Case: T(n) = aT(n/b) + f(n), where f(n) = Θ(n^k)
1. a < b^k: T(n) = Θ(n^k)
2. a = b^k: T(n) = Θ(n^k lg n)
3. a > b^k: T(n) = Θ(n^(log_b a))
1/b is the reduction factor, a is the number of recursive calls, and f(n) is the interior complexity of the recursive function.

Multiple Recursive Calls
O(2^n) arises from multiple recursive function calls where the input size is not reduced by a factor of 1/2:
fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-1); }  → O(2^n)

Multiple Recursive Calls
fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-1); }  → O(2^n)
Here the recursion acts as an O(n) loop, but at each level the number of recursive calls is doubled.
Number of operations = 1 + 2 + 4 + … + 2^n = (2^n - 1) + 2^n = O(2^n)
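The geometric blow-up can be confirmed by counting invocations directly. A Python sketch in which calls(n) satisfies the same doubling recurrence as the pseudocode:

```python
def calls(n):
    """Count invocations of fun(n): one for this call, plus two subtrees."""
    if n == 1:
        return 1
    return 1 + 2 * calls(n - 1)      # C(n) = 1 + 2*C(n-1) = 2^n - 1

for n in (1, 2, 3, 10):
    print(n, calls(n), 2 ** n - 1)   # the two columns agree: O(2^n) calls
```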

Multiple Recursive Calls
fun(n) { if (n==1) quit; O(n); fun(n-1); fun(n-1); }  → O(???)

Multiple Recursive Calls
fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-1); fun(n-1); }  → O(???)

Day 10 Agenda (can you believe it's day 10?)
The last algorithm analysis enigma.
Summary of enigmas
Can things like 1.5^n or n arise?
Homework 4 solutions
Exam review

Last Enigma
fun1(n) { if (n==1) quit; O(1); fun1(n-1); fun1(n-1); }  → O(2^n)
fun2(n) { if (n==1) quit; O(n); fun2(n-1); fun2(n-1); }  → O(2^n)
fun3(n,m) { if (n==1) quit; O(m); fun3(n-1,m); fun3(n-1,m); }  → O(n·2^n) if we call fun3(n,n)

Last Enigma 44 88 … 2n2n 22 1 = … + 2 n =(2 n - 1) + 2 n =2(2 n ) – 1 =O(2 n ) fun1(n) { if (n==1) quit O(1) fun1(n-1) fun1(n-1) }  O(2 n )

Last Enigma
fun2(n) { if (n==1) quit; O(n); fun2(n-1); fun2(n-1); }  → O(2^n)
Work per level: 1·(n), 2·(n-1), 4·(n-2), 8·(n-3), …, 2^n·(1)
Total = ??? = O(2^n)
You have to see my program to believe it!

Last Enigma
fun3(n,m) { if (n==1) quit; O(m); fun3(n-1,m); fun3(n-1,m); }  → O(n·2^n) if we call fun3(n,n)
Work per level: 1·n, 2·n, 4·n, 8·n, …, 2^n·n
Total = (1 + 2 + 4 + … + 2^n)·n = ((2^n - 1) + 2^n)·n = (2(2^n) - 1)·n = O(n·2^n)
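The fun3 total can be tallied exactly with a Python sketch of the interior work; I assume the base case does no O(m) work, which only changes the count by a constant factor:

```python
def ops_fun3(n, m):
    """Total O(m)-work done by fun3: m per non-base call, two calls on n-1."""
    if n == 1:
        return 0
    return m + 2 * ops_fun3(n - 1, m)    # W(n) = m + 2*W(n-1) = m*(2^(n-1) - 1)

n = 10
print(ops_fun3(n, n))          # 10 * (2^9 - 1) = 5110 when called as fun3(n, n)
print(n * 2 ** n)              # 10240: within a constant factor, i.e. O(n * 2^n)
```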

Summary of Enigmas
log grows so slowly that the base makes no difference.
log_3 n = Θ(log_2 n)
log_100 n = Θ(log_2 n)
log_e n = Θ(log_2 n)

Summary of Enigmas
Exponentials grow so fast that the base makes a huge difference.
2^n ≠ Θ(3^n)
(2+0.01)^n ≠ Θ(2^n)
However,
2^n = O(3^n)
3^n = Ω(2^n)
2^(n+1) = 2·2^n = Θ(2^n)

Summary of Enigmas
Cutting the input size in half recursively creates a log n loop that does NOT increase the efficiency class (except for Case #3, with O(1) internal complexity):
Case #3: fun(n) { if (n==1) quit; O(1); fun(n/2); }  → O(log n)
Case #5: fun(n) { if (n==1) quit; O(n); fun(n/2); }  → O(n)
Generic case: fun(n) { if (n==1) quit; O(n^k); fun(n/2); }  → O(n^k)

Summary of Enigmas
Here we cut the input size in half (a log n loop), but we spawn two recursive calls at each level. This adds a factor of log n to the internal complexity.
However, it adds a factor of n if the internal complexity is O(1):
Exception: fun(n) { if (n==1) quit; O(1); fun(n/2); fun(n/2); }  → O(n)
Case #4: fun(n) { if (n==1) quit; O(n); fun(n/2); fun(n/2); }  → O(n log n)
Generic case (k ≥ 2): fun(n) { if (n==1) quit; O(n^k); fun(n/2); fun(n/2); }  → O(n^k)

Fibonacci numbers (here is where things get difficult to analyze)
The Fibonacci sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, …
Fibonacci recurrence:
F(n) = F(n-1) + F(n-2), F(0) = 0, F(1) = 1
Another example:
A(n) = 3A(n-1) - 2A(n-2), A(0) = 1, A(1) = 3
Both are 2nd-order linear homogeneous recurrence relations with constant coefficients.

How do we handle things like this?
fun(n) { if (n==1) quit; O(1); fun(n-1); fun(n-2); }  → O(???)
The simple way is to draw a picture.
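For this Fibonacci-style recursion, the "picture" can be summarized by counting calls. A Python sketch; the ratio of successive counts settles near the golden ratio (about 1.618), squarely between 1.5^n and 2^n growth:

```python
def fib_calls(n):
    """Number of function calls made by the naive recursive Fibonacci."""
    if n <= 1:
        return 1                      # a leaf call does no further recursion
    return 1 + fib_calls(n - 1) + fib_calls(n - 2)

prev = fib_calls(10)
for n in range(11, 16):
    cur = fib_calls(n)
    print(n, cur, round(cur / prev, 3))   # ratio approaches phi = 1.618...
    prev = cur
```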

Exam 1
Chapters 1 and 2 only
Review hw solutions
Go through the PowerPoint slides and make your cheat sheet.

Homework 4
BTW, hw serves three purposes:
1. It helps me gauge if I'm going too fast.
2. It helps improve the grades of those who put forth effort but may have test anxiety.
3. It helps prepare you for the exams.
hw1, hw2, hw3 & hw4 = 9 points
exam1 = 10 points