Data Structures Mohamed Mustaq Ahmed Chapter 2- Algorithms

2- 2 Algorithms Overview Principles. Efficiency. Complexity. O-notation. Recursion.

2- 3 Principles (1) An algorithm is a step-by-step procedure for solving a stated problem. The algorithm will be performed by a processor. The processor may be human, mechanical, or electronic.

2- 4 Principles (2) Characteristics: The algorithm must be expressed in steps that the processor is capable of performing. The algorithm must eventually terminate. The algorithm must be expressed in some language that the processor “understands”. The stated problem must be solvable, i.e., capable of solution by a step-by-step procedure.

2- 5 Efficiency Given several algorithms to solve the same problem, which algorithm is “best”? Given an algorithm, is it feasible to use it at all? In other words, is it efficient enough to be usable in practice? How much time does the algorithm require? How much space (memory) does the algorithm require? In general, both time and space requirements depend on the algorithm’s input (typically the “size” of the input).

2- 6 Example: efficiency Hypothetical profile of two sorting algorithms: Algorithm B’s time grows more slowly than A’s. [Graph: time (ms) against number of items to be sorted, n, for Algorithm A and Algorithm B.]

2- 7 Efficiency: measuring time Measure time in seconds? Useful in practice, but depends on language, compiler, and processor. Count algorithm steps? Does not depend on compiler or processor, but depends on the difficulty of the steps. Count characteristic operations? (e.g., arithmetic operations in mathematical algorithms, comparisons in searching algorithms.) Depends only on the algorithm itself, and measures the algorithm’s intrinsic efficiency.
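To make “count characteristic operations” concrete, here is a minimal sketch (my own illustration, not from the slides): a linear search whose characteristic operation is the comparison between the target and an array element, instrumented to report how many comparisons it performs.

#include <iostream>

// Hypothetical helper: linear search that also counts its comparisons.
int linearSearch (const int a[], int n, int target, int& comparisons) {
    comparisons = 0;
    for (int i = 0; i < n; i++) {
        comparisons++;                  // one characteristic operation
        if (a[i] == target) return i;   // found: return its position
    }
    return -1;                          // not found
}

int main() {
    int a[] = {7, 3, 9, 4, 1, 8};
    int comparisons;
    linearSearch(a, 6, 1, comparisons);
    std::cout << "Comparisons: " << comparisons << std::endl;   // prints 5
    return 0;
}

Whatever the compiler or processor, the comparison count depends only on the algorithm and its input.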

2- 8 Powers Consider a number b and a non-negative integer n. Then b to the power of n (written b^n) is the multiplication of n copies of b: b^n = b × … × b. E.g.: b^3 = b × b × b; b^2 = b × b; b^1 = b; b^0 = 1.

2- 9 Logarithms Consider a positive number y. Then the logarithm of y to the base 2 (written log2 y) is the number of copies of 2 that must be multiplied together to equal y. If y is a power of 2, log2 y is an integer: e.g., log2 1 = 0 since 2^0 = 1; log2 2 = 1 since 2^1 = 2; log2 4 = 2 since 2^2 = 4; log2 8 = 3 since 2^3 = 8. If y is not a power of 2, log2 y is fractional: e.g., log2 5 ≈ 2.32, log2 7 ≈ 2.81.

2- 10 Logarithm laws log2 (2^n) = n. log2 (x × y) = log2 x + log2 y, since the number of 2s multiplied to make x × y is the sum of the number of 2s multiplied to make x and the number of 2s multiplied to make y. log2 (x / y) = log2 x – log2 y.

2- 11 Logarithms example (1) How many times must we halve the value of n (discarding any remainders) to reach 1? Suppose that n is a power of 2: e.g., 8 → 4 → 2 → 1 (8 must be halved 3 times); 16 → 8 → 4 → 2 → 1 (16 must be halved 4 times). If n = 2^m, n must be halved m times. Suppose that n is not a power of 2: e.g., 9 → 4 → 2 → 1 (9 must be halved 3 times); 15 → 7 → 3 → 1 (15 must be halved 3 times). If 2^m ≤ n < 2^(m+1), n must be halved m times.

2- 12 Logarithms example (2) In general, n must be halved m times if: 2^m ≤ n < 2^(m+1), i.e., log2 (2^m) ≤ log2 n < log2 (2^(m+1)), i.e., m ≤ log2 n < m+1, i.e., m = floor(log2 n). The floor of x (written floor(x) or ⌊x⌋) is the largest integer not greater than x. Conclusion: n must be halved floor(log2 n) times to reach 1. Also: n must be halved floor(log2 n) + 1 times to reach 0.
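As a small check of this conclusion, the following sketch (my own, not from the slides) counts the halvings directly and compares the count with floor(log2 n) computed via the standard library.

#include <cmath>
#include <iostream>

// Count how many times n (>= 1) must be halved, discarding remainders, to reach 1.
int halvingsToReachOne (int n) {
    int count = 0;
    while (n > 1) {
        n /= 2;        // integer division discards the remainder
        count++;
    }
    return count;
}

int main() {
    for (int n = 1; n <= 16; n++)
        std::cout << n << ": halved " << halvingsToReachOne(n)
                  << " times; floor(log2 n) = "
                  << static_cast<int>(std::floor(std::log2(n))) << std::endl;
    return 0;
}

For every n in the range the two values agree, e.g., 9 and 15 are both halved 3 times and floor(log2 9) = floor(log2 15) = 3.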

2- 13 Example: power algorithms (1) Simple power algorithm, to compute b^n:
1. Set p to 1.
2. For i = 1, …, n, repeat:
   2.1. Multiply p by b.
3. Terminate with answer p.

2- 14 Example: power algorithms (2) Analysis (counting multiplications): Step 2.1 performs a multiplication. This step is repeated n times. No. of multiplications = n

2- 15 Example: power algorithms (3) Implementation in C++ (Power.cpp):

int power (int b, int n) {
    // Return b^n (where n is a non-negative integer).
    int p = 1;
    for (int i = 1; i <= n; i++)
        p *= b;
    return p;
}

2- 16 Example: power algorithms (4) Idea: b^1000 = b^500 × b^500. If we know b^500, we can compute b^1000 with only 1 more multiplication! Smart power algorithm, to compute b^n:
1. Set p to 1, set q to b, and set m to n.
2. While m > 0, repeat:
   2.1. If m is odd, multiply p by q.
   2.2. Halve m (discarding any remainder).
   2.3. Multiply q by itself.
3. Terminate with answer p.

2- 17 Example: power algorithms (5) Analysis (counting multiplications): Steps 2.1–2.3 together perform at most 2 multiplications. They are repeated as often as we must halve the value of n (discarding any remainder) until it reaches 0, i.e., floor(log2 n) + 1 times. Max. no. of multiplications = 2(floor(log2 n) + 1) = 2 floor(log2 n) + 2.

2- 18 Example: power algorithms (6) Implementation in C++ (Power2.cpp):

int power2 (int b, int n) {
    // Return b^n (where n is a non-negative integer).
    int p = 1, q = b, m = n;
    while (m > 0) {
        if (m % 2 != 0) p *= q;   // m is odd: multiply p by q
        m /= 2;                   // halve m, discarding any remainder
        q *= q;                   // square q
    }
    return p;
}
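A small driver (my own, not part of the slides; it assumes power and power2 above are compiled together) confirms that both algorithms agree, and contrasts the multiplication counts derived above.

#include <cmath>
#include <iostream>

int power (int b, int n);    // simple algorithm: n multiplications
int power2 (int b, int n);   // smart algorithm: at most 2 floor(log2 n) + 2

int main() {
    // Check that the two algorithms agree on a range of exponents.
    for (int n = 0; n <= 30; n++)
        if (power(2, n) != power2(2, n))
            std::cout << "Mismatch at n = " << n << std::endl;

    // Compare the multiplication counts for n = 1000.
    int n = 1000;
    std::cout << "Simple power: " << n << " multiplications" << std::endl;
    std::cout << "Smart power:  at most "
              << 2 * static_cast<int>(std::floor(std::log2(n))) + 2
              << " multiplications" << std::endl;   // 2 * 9 + 2 = 20
    return 0;
}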

2- 19 Example: power algorithms (7) Comparison: [Graph: number of multiplications against n, for the simple power algorithm and the smart power algorithm.]

2- 20 Complexity For many interesting algorithms, the exact number of operations is too difficult to analyze mathematically. To simplify the analysis, we identify the fastest-growing term, neglect slower-growing terms, and neglect the constant factor in the fastest-growing term. The resulting formula is the algorithm’s time complexity. It focuses on the growth rate of the algorithm’s time requirement. Similarly for space complexity.
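For instance, here is a hedged illustration (my own example, not from the slides): counting comparisons, the function below performs exactly n(n–1)/2 = n^2/2 – n/2 comparisons; neglecting the slower-growing term –n/2 and the constant factor 1/2 leaves n^2, so its time complexity is O(n^2).

// Count how many pairs of elements in a[0..n-1] are equal.
int countEqualPairs (const int a[], int n) {
    int pairs = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)   // n(n-1)/2 iterations in total
            if (a[i] == a[j]) pairs++;    // one comparison per iteration
    return pairs;
}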

2- 21 Example: analysis of power algorithms (1) Analysis of the simple power algorithm (counting multiplications): No. of multiplications = n. Time taken is approximately proportional to n. Time complexity is of order n. This is written O(n).

2- 22 Example: analysis of power algorithms (2) Analysis of the smart power algorithm (counting multiplications): Max. no. of multiplications = 2 floor(log2 n) + 2. Neglect the slow-growing term, +2: simplify to 2 floor(log2 n). Neglect the constant factor, 2: simplify to floor(log2 n). Neglect floor(), which on average subtracts only 0.5, a constant term: simplify to log2 n. Time complexity is of order log n. This is written O(log n).

2- 23 Example: analysis of power algorithms (3) Comparison: [Graph: n and log n plotted against n.]

2- 24 O-notation (1) We have seen that an O(log n) algorithm is inherently better than an O(n) algorithm for large values of n. O(log n) signifies a slower growth rate than O(n). Complexity O(X) means “of order X”, i.e., growth is proportional to X. Here X signifies the growth rate, neglecting slower-growing terms and constant factors.
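For reference, the informal phrase “growth is proportional to X” corresponds to the standard formal definition of O-notation (not stated on the slides):

f(n) \in O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \ \text{such that} \ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0

In other words, beyond some point n_0, f never exceeds a fixed constant multiple of g.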

2- 25 O-notation (2) Common time complexities:
O(1)        constant time      (feasible)
O(log n)    logarithmic time   (feasible)
O(n)        linear time        (feasible)
O(n log n)  log linear time    (feasible)
O(n^2)      quadratic time     (sometimes feasible)
O(n^3)      cubic time         (sometimes feasible)
O(2^n)      exponential time   (rarely feasible)

2- 26 Growth rates (1) Comparison of growth rates:
           n = 10    n = 20       n = 30        n = 40
log n      3.3       4.3          4.9           5.3
n          10        20           30            40
n log n    33        86           147           213
n^2        100       400          900           1,600
n^3        1,000     8,000        27,000        64,000
2^n        1,024     1.0 million  1.1 billion   1.1 trillion

2- 27 Growth rates (2) Graphically: [Graph: log n, n, n log n, n^2, and 2^n plotted against n.]

2- 28 Example: growth rates (1) Consider a problem that requires n data items to be processed. Consider several competing algorithms to solve this problem. Suppose that their time requirements on a particular processor are as follows:
Algorithm Log:     0.3 log2 n seconds
Algorithm Lin:     0.1 n seconds
Algorithm LogLin:  0.03 n log2 n seconds
Algorithm Quad:    0.01 n^2 seconds
Algorithm Cub:     0.001 n^3 seconds
Algorithm Exp:     … 2^n seconds
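As a sketch (my own driver, not from the slides), evaluating these formulas for one sample problem size shows how far apart the algorithms already are; the exponential algorithm is left out because its constant factor is not given above.

#include <cmath>
#include <iostream>

int main() {
    double n = 1000000.0;   // an arbitrary sample size: one million items
    std::cout << "Log:    " << 0.3   * std::log2(n)     << " s" << std::endl;  // about 6 s
    std::cout << "Lin:    " << 0.1   * n                << " s" << std::endl;  // about 28 hours
    std::cout << "LogLin: " << 0.03  * n * std::log2(n) << " s" << std::endl;  // about a week
    std::cout << "Quad:   " << 0.01  * n * n            << " s" << std::endl;  // about 317 years
    std::cout << "Cub:    " << 0.001 * n * n * n        << " s" << std::endl;  // tens of millions of years
    return 0;
}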

2- 29 Example: growth rates (2) Compare how many data items (n) each algorithm can process in 1, 2, …, 10 seconds: [Graph: n against time from 0:01 to 0:10 for algorithms Log, Lin, LogLin, Quad, Cub, and Exp.]

2- 30 Recursion A recursive algorithm is one expressed in terms of itself. In other words, at least one step of a recursive algorithm is a “call” to itself. In C++, a recursive function is one that calls itself.

2- 31 When should recursion be used? Sometimes an algorithm can be expressed using either iteration or recursion. The recursive version tends to be more elegant and easier to understand, but less efficient (the extra calls consume time and space). Some algorithms are far more naturally expressed using recursion.

2- 32 When does recursion work? Given a recursive algorithm, how can we be sure that it terminates? The algorithm must have one or more “easy” cases and one or more “hard” cases. In an “easy” case, the algorithm must give a direct answer without calling itself (termination). In a “hard” case, the algorithm may call itself, but only on a problem that is closer to an “easy” case.
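As a tiny sketch of this structure (my own example, reusing the halving question from the logarithms slides; this function is not on the slides):

// floor(log2 n) for n >= 1, written recursively.
int floorLog2 (int n) {
    if (n == 1) return 0;              // easy case: answered directly
    return 1 + floorLog2(n / 2);       // hard case: n/2 is closer to the easy case
}

The recursion terminates because every hard case passes a strictly smaller (but still positive) argument, so the easy case n = 1 is always reached.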

2- 33 Example: recursive power algorithms (1) Recursive definition of b^n:
b^n = 1, if n = 0
b^n = b × b^(n–1), if n > 0
(expanding once more: b^n = b × b × b^(n–2), if n > 1).
Simple recursive power algorithm, to compute b^n:
1. If n = 0:
   1.1. Terminate with answer 1 (easy case: solved directly).
2. If n > 0:
   2.1. Terminate with answer b × b^(n–1) (hard case: solved by recursively computing b^(n–1), which is easier since n–1 is closer than n to 0).

2- 34 Example: recursive power algorithms (2) Implementation in C++ (Power3.cpp):

int power3 (int b, int n) {
    // Return b^n (where n is non-negative).
    if (n == 0) return 1;
    else return b * power3(b, n-1);
}

2- 35 Example: recursive power algorithms (3) Idea: b^1000 = b^500 × b^500, and b^1001 = b × b^500 × b^500. Alternative recursive definition of b^n:
b^n = 1, if n = 0
b^n = b^(n/2) × b^(n/2), if n > 0 and n is even
b^n = b × b^(n/2) × b^(n/2), if n > 0 and n is odd.
(Recall: n/2 discards the remainder if n is odd.)

2- 36 Example: recursive power algorithms (4) Smart recursive power algorithm, to compute b^n:
1. If n = 0:
   1.1. Terminate with answer 1 (easy case: solved directly).
2. If n > 0:
   2.1. Let p be b^(n/2) (hard case: solved by recursively computing b^(n/2), which is easier since n/2 is closer than n to 0).
   2.2. If n is even, terminate with answer p × p.
   2.3. If n is odd, terminate with answer b × p × p.

2- 37 Example: recursive power algorithms (5) Implementation in C++ (Power4.cpp):

int power4 (int b, int n) {
    // Return b^n (where n is non-negative).
    if (n == 0) return 1;
    else {
        int p = power4(b, n/2);       // p = b^(n/2), computed recursively
        if (n % 2 == 0) return p * p;
        else return b * p * p;
    }
}

2- 38 Example: recursive power algorithms (6) Analysis (counting multiplications, i.e., time complexities): Each recursive power algorithm performs the same number of multiplications as the corresponding non-recursive algorithm, so their time complexities are the same:
                         non-recursive   recursive
Simple power algorithm   O(n)            O(n)
Smart power algorithm    O(log n)        O(log n)

2- 39 Example: recursive power algorithms (7) Analysis (space complexities): The non-recursive power algorithms use constant space, i.e., O(1). A recursive algorithm uses extra space for each recursive call. The simple recursive power algorithm calls itself n times before returning, whereas the smart recursive power algorithm calls itself floor(log2 n) times.
                         non-recursive   recursive
Simple power algorithm   O(1)            O(n)
Smart power algorithm    O(1)            O(log n)
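To see these space complexities concretely, here is a sketch (instrumented helpers I added, not from the slides) that returns the recursion depth, i.e., the number of nested calls, of each recursive algorithm:

// Recursion depth of the simple recursive power algorithm: n.
int simpleDepth (int n) {
    if (n == 0) return 0;
    return 1 + simpleDepth(n - 1);     // one extra stack frame per call
}

// Recursion depth of the smart recursive power algorithm: floor(log2 n) + 1 for n > 0.
int smartDepth (int n) {
    if (n == 0) return 0;
    return 1 + smartDepth(n / 2);
}

For example, simpleDepth(1000) returns 1000 while smartDepth(1000) returns 10, matching O(n) versus O(log n) extra space.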

2- 40 Example: Towers of Hanoi (1) Three vertical poles (1, 2, 3) are mounted on a platform. A number of differently-sized disks are threaded on to pole 1, forming a tower with the largest disk at the bottom and the smallest disk at the top. We may move only one disk at a time, from any pole to any other pole, but we must never place a larger disk on top of a smaller disk. Problem: move the tower of disks from pole 1 to pole 2.

2- 41 Example: Towers of Hanoi (2) Animation (with 2 disks):

2- 42 Example: Towers of Hanoi (3) Towers of Hanoi algorithm, to move a tower of n disks from pole source to pole dest:
1. If n = 1:
   1.1. Move a single disk from source to dest.
2. If n > 1:
   2.1. Let spare be the remaining pole, other than source and dest.
   2.2. Move a tower of (n–1) disks from source to spare.
   2.3. Move a single disk from source to dest.
   2.4. Move a tower of (n–1) disks from spare to dest.
3. Terminate.

2- 43 Example: Towers of Hanoi (4) Animation (with 6 disks): [animation frames, each repeating the algorithm from the previous slide and showing the disks on poles source, dest, and spare.]

2- 44 Example: Towers of Hanoi (5) Implementation in C++ (Hanoi.cpp):

#include <iostream>
using std::cout;
using std::endl;

class Hanoi {
public:
    void shift (int n, int source, int dest) {
        if (n == 1)
            move(source, dest);
        else {
            int spare = 6 - source - dest;   // poles are numbered 1, 2, 3
            shift(n-1, source, spare);
            move(source, dest);
            shift(n-1, spare, dest);
        }
    }

    void move (int source, int dest) {
        cout << "Move disk from " << source << " to " << dest << "." << endl;
    }
};
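A minimal driver (assumed, not on the slide) exercises the class; moving a tower of 3 disks from pole 1 to pole 2 prints 2^3 – 1 = 7 moves:

int main() {
    Hanoi hanoi;
    hanoi.shift(3, 1, 2);   // prints the 7 moves for a 3-disk tower
    return 0;
}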

2- 45 Example: Towers of Hanoi (6) Analysis (counting moves): Let the total no. of moves required to move a tower of n disks be moves(n). Then:
moves(n) = 1, if n = 1
moves(n) = 2 moves(n–1) + 1, if n > 1
(one move for the largest disk, plus moves(n–1) for each of the two smaller towers). Solution: moves(n) = 2^n – 1. Time complexity is O(2^n).
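A quick check (my own, not from the slides) that the recurrence really gives 2^n – 1:

// Evaluate the recurrence moves(n) = 2 moves(n-1) + 1, with moves(1) = 1.
long long countMoves (int n) {
    if (n == 1) return 1;                   // easy case: one move
    return 2 * countMoves(n - 1) + 1;       // two smaller towers plus one move
}
// countMoves(10) == 1023 == 2^10 - 1, and countMoves(20) == 1048575 == 2^20 - 1.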