ICS 353: Design and Analysis of Algorithms
King Fahd University of Petroleum & Minerals
Information & Computer Science Department
Spring Semester (092)
Section 01: Dr. Wasfi G. Al-Khatib, SMW 9:00-9:50am, 22:130
Section 02: Dr. Muhammad H. Alsuwaiyel, SM 5:00-6:15pm, 24:110
Basic Concepts in Algorithmic Analysis

Outline
- Introduction
- Time Complexity
- Space Complexity
- Optimal Algorithms
- How to estimate the running time of an algorithm
- Worst Case Analysis and Average Case Analysis
- Amortized Analysis
- Input Size and Problem Instance
Reading Assignment
M. Alsuwaiyel, Introduction to Algorithms: Design Techniques and Analysis, World Scientific Publishing Co., Inc., Chapter 1. In particular, Sections 1-3, 6, and 8-14 will be discussed in class.
What is an algorithm?
An algorithm is defined as a finite set of steps, each of which may require one or more operations, and which, if carried out on a set of inputs, will produce one or more outputs after a finite amount of time.
- Examples of algorithms
- Examples of computations that are not algorithms
Properties of Algorithms
- Definiteness: It must be clear what should be done.
- Effectiveness: Each step must be such that it can, at least in principle, be carried out by a person using pencil and paper in a finite amount of time, e.g., integer arithmetic.
- Input/Output: An algorithm produces one or more outputs and may have zero or more externally supplied inputs.
- Finiteness: Algorithms should terminate after a finite number of operations.
Our Objective
Find the most efficient algorithm for solving a particular problem. In order to achieve this objective, we need to determine:
- How can we find such an algorithm?
- What does it mean for an algorithm to be efficient?
- How can one tell that one algorithm is more efficient than another?
In the First Chapter
We will answer the following two questions, based on some easy-to-understand searching and sorting algorithms that we may have seen earlier:
- What does an efficient algorithm mean?
- How can one tell that an algorithm is more efficient than other algorithms?
Searching Problem
Assume A is an array with n elements A[1], A[2], …, A[n]. For a given element x, we must determine whether there is an index j, 1 ≤ j ≤ n, such that x = A[j].
Two algorithms, among others, address this problem:
- Linear Search
- Binary Search
Linear Search Algorithm
Algorithm: LINEARSEARCH
Input: An array A[1..n] of n elements and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.
1. j ← 1
2. while (j < n) and (x ≠ A[j])
3.   j ← j + 1
4. end while
5. if x = A[j] then return j else return 0
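For concreteness, here is a direct Python translation of the pseudocode (my own sketch, not part of the slides; Python lists are 0-indexed, so the returned index is the 1-based position):

def linear_search(A, x):
    """Return the 1-based index j with A[j-1] == x, or 0 if x is not in A."""
    j = 1
    n = len(A)
    while j < n and x != A[j - 1]:
        j += 1
    return j if A and x == A[j - 1] else 0

# e.g., linear_search([5, 2, 9, 8, 4], 8) returns 4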
Analyzing Linear Search
- One way to measure efficiency is to count how many statements get executed before the algorithm terminates.
- One should keep an eye, though, on statements that are executed "repeatedly".
- What will be the number of "element" comparisons if x:
  - first appears in the first element of A?
  - first appears in the middle element of A?
  - first appears in the last element of A?
  - does not appear in A?
Binary Search
- We can do "better" than linear search if we know that the elements of A are sorted, say in nondecreasing order.
- The idea is to compare x to the middle element of A, say A[middle].
- If x < A[middle], then x cannot be any of A[middle+1], A[middle+2], …, A[n]. Why?
- If x > A[middle], then x cannot be any of A[1], A[2], …, A[middle-1]. Why?
Binary Search Algorithm
Algorithm: BINARYSEARCH
Input: An array A[1..n] of n elements sorted in nondecreasing order and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.
1. low ← 1; high ← n; j ← 0
2. while (low ≤ high) and (j = 0)
3.   mid ← ⌊(low + high)/2⌋
4.   if x = A[mid] then j ← mid
5.   else if x < A[mid] then high ← mid - 1
6.   else low ← mid + 1
7. end while
8. return j
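Again as a sketch (not from the slides), a 0-indexed Python translation:

def binary_search(A, x):
    """Return the 1-based index j with A[j-1] == x in sorted A, or 0 otherwise."""
    low, high, j = 1, len(A), 0
    while low <= high and j == 0:
        mid = (low + high) // 2    # integer division plays the role of the floor
        if x == A[mid - 1]:
            j = mid
        elif x < A[mid - 1]:
            high = mid - 1
        else:
            low = mid + 1
    return j

# e.g., binary_search([2, 4, 5, 8, 9], 8) returns 4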
Worst Case Analysis of Binary Search
- What to do: find the maximum number of element comparisons.
- How to do it: the number of "element" comparisons is equal to the number of iterations of the while loop in steps 2-7. HOW?
- How many elements of the input do we have in the first iteration? The second iteration? The third iteration? … The ith iteration?
- The last iteration occurs when the size of the input we have = ………
Theorem
The number of comparisons performed by Algorithm BINARYSEARCH on a sorted array of size n is at most ⌊log n⌋ + 1.
Insertion Sort
Algorithm: INSERTIONSORT
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.
1. for i ← 2 to n
2.   x ← A[i]
3.   j ← i - 1
4.   while (j > 0) and (A[j] > x)
5.     A[j + 1] ← A[j]
6.     j ← j - 1
7.   end while
8.   A[j + 1] ← x
9. end for
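A 0-indexed Python translation of the pseudocode (my own sketch, not part of the slides):

def insertion_sort(A):
    """Sort list A in place into nondecreasing order."""
    for i in range(1, len(A)):
        x = A[i]              # element to insert into the sorted prefix A[0..i-1]
        j = i - 1
        while j >= 0 and A[j] > x:
            A[j + 1] = A[j]   # shift larger elements one slot to the right
            j -= 1
        A[j + 1] = x          # drop x into the gap
    return A

# insertion_sort([5, 2, 9, 8, 4]) returns [2, 4, 5, 8, 9], matching the example that follows.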
Insertion Sort Example
Sorting the array 5 2 9 8 4 step by step; x is the element being inserted next:
5 2 9 8 4   (insert x=2)
2 5 9 8 4   (insert x=9)
2 5 9 8 4   (insert x=8)
2 5 8 9 4   (insert x=4)
2 4 5 8 9   (sorted)
Analyzing Insertion Sort
- The minimum number of element comparisons is n − 1, which occurs when the array is already sorted in nondecreasing order.
- The maximum number of element comparisons is 1 + 2 + … + (n − 1) = n(n − 1)/2, which occurs when the array is sorted in decreasing order.
- The number of element assignments is 2(n − 1) plus the number of element shifts, i.e., between 2(n − 1) and n(n − 1)/2 + 2(n − 1).
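These two extremes are easy to confirm with an instrumented version of the sort (my own sketch; the counter tallies evaluations of the comparison A[j] > x):

def insertion_sort_comparisons(A):
    """Sort A and return the number of element comparisons performed."""
    count = 0
    for i in range(1, len(A)):
        x, j = A[i], i - 1
        while j >= 0:
            count += 1            # one evaluation of "A[j] > x"
            if A[j] <= x:
                break
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = x
    return count

n = 10
assert insertion_sort_comparisons(list(range(n))) == n - 1                     # sorted input
assert insertion_sort_comparisons(list(range(n, 0, -1))) == n * (n - 1) // 2   # reversed input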
Time Complexity
One way of measuring the performance of an algorithm is how fast it executes. The question is: how do we measure this "time"? Is having a digital stopwatch suitable?
Order of Growth
- As measuring time is subject to many factors, we look for a more "objective" measure, i.e., the number of operations.
- Since counting the exact number of operations is cumbersome, sometimes impossible, we can always focus our attention on asymptotic analysis, where constants and lower-order terms are ignored. E.g., n³, 1000n³, and 10n³ + n² + 5n − 1 are all "the same".
- The reason we can do this is that we are always interested in comparing different algorithms for arbitrarily large input sizes.
Example
(Figure: growth rates of some functions.)
(Figure: growth rates of the same functions, showing larger input sizes.)
Running Times for Different Sizes of Inputs of Different Functions (table)
Asymptotic Analysis: Big-oh (O())
- Definition: For T(n) a non-negatively valued function, T(n) is in the set O(f(n)) if there exist two positive constants c and n0 such that T(n) ≤ cf(n) for all n > n0.
- Usage: The algorithm is in O(n²) in [best, average, worst] case.
- Meaning: For all data sets big enough (i.e., n > n0), the algorithm always executes in less than or equal to cf(n) steps in [best, average, worst] case.
Big O()
- O() notation indicates an upper bound.
- Usually, we look for the tightest upper bound: while T(n) = 3n² is in O(n³), we prefer O(n²).
Big O() Examples
- Example 1: Find c and n0 to show that T(n) = (n+2)/2 is in O(n).
- Example 2: Find c and n0 to show that T(n) = c1n² + c2n is in O(n²).
- Example 3: T(n) = c. We say this is in O(1).
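One possible choice of constants (an illustration, not the only answer): for Example 1, (n+2)/2 ≤ n whenever n ≥ 2, so c = 1 and n0 = 2 work. For Example 2, assuming c1, c2 > 0, we have c1n² + c2n ≤ (c1 + c2)n² for all n ≥ 1, so c = c1 + c2 and n0 = 1 work. For Example 3, T(n) = c ≤ c·1 for all n, so the constant c itself and n0 = 1 work.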
Asymptotic Analysis: Big-Omega (Ω())
- Definition: For T(n) a non-negatively valued function, T(n) is in the set Ω(g(n)) if there exist two positive constants c and n0 such that T(n) ≥ cg(n) for all n > n0.
- Meaning: For all data sets big enough (i.e., n > n0), the algorithm always executes in more than or equal to cg(n) steps.
- Ω() notation indicates a lower bound.
Ω() Example
Find c and n0 to show that T(n) = c1n² + c2n is in Ω(n²).
Asymptotic Analysis: Big-Theta (Θ())
When O() and Ω() meet, we indicate this by using Θ() (big-Theta) notation.
Definition: An algorithm is said to be Θ(h(n)) if it is in O(h(n)) and it is in Ω(h(n)).
Example
Show that log(n!) is in Θ(n log n).
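Sketch of the standard argument, for reference: since n! ≤ nⁿ, log(n!) ≤ n log n, so log(n!) is in O(n log n). Since the largest n/2 factors of n! are each at least n/2, n! ≥ (n/2)^(n/2), so log(n!) ≥ (n/2) log(n/2), which is Ω(n log n). Hence log(n!) = Θ(n log n).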
Complexity Classes and small-oh (o())
- Using Θ() notation, one can divide the functions into different equivalence classes, where f(n) and g(n) belong to the same equivalence class if f(n) = Θ(g(n)).
- To show that two functions belong to different equivalence classes, the small-oh notation has been introduced.
- Definition: Let f(n) and g(n) be two functions from the set of natural numbers to the set of non-negative real numbers. f(n) is said to be in o(g(n)) if for every constant c > 0 there is a positive integer n0 such that f(n) < cg(n) for all n ≥ n0.
Simplifying Rules
- If f(n) is in O(g(n)) and g(n) is in O(h(n)), then f(n) is in O(h(n)).
- If f(n) is in O(kg(n)) for any constant k > 0, then f(n) is in O(g(n)).
- If f1(n) is in O(g1(n)) and f2(n) is in O(g2(n)), then (f1 + f2)(n) is in O(max(g1(n), g2(n))).
- If f1(n) is in O(g1(n)) and f2(n) is in O(g2(n)), then f1(n)f2(n) is in O(g1(n)g2(n)).
- You can safely "globally" replace O with Ω or Θ in the above, and the rules will still hold.
Very Useful Simplifying Rule
Let f(n) and g(n) be two functions from the set of natural numbers to the set of non-negative real numbers such that L = lim (n→∞) f(n)/g(n) exists. Then:
- if L < ∞ then f(n) is in O(g(n))
- if L > 0 then f(n) is in Ω(g(n))
- if 0 < L < ∞ then f(n) is in Θ(g(n))
- if L = 0 then f(n) is in o(g(n))
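As an illustration (not on the slide): with f(n) = 3n² + 5n and g(n) = n², we get L = 3; since 0 < 3 < ∞, f(n) is in Θ(n²). With f(n) = n log n and g(n) = n², we get L = 0, so n log n is in o(n²).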
Space Complexity
Space complexity refers to the number of memory cells needed to carry out the computational steps required in an algorithm, excluding memory cells needed to hold the input.
Compare the additional space needed to carry out SELECTIONSORT to that of BOTTOMUPSORT if we have an array with 2 million elements!
Examples
What is the space complexity of:
- Linear search
- Binary search
- Selection sort
- Insertion sort
- Merge (that merges two sorted lists)
- Bottom-up merge sort
Optimal Algorithms
If one can show that there is no algorithm that solves a certain problem in asymptotically less time than a certain algorithm A, we call A an optimal algorithm.
Optimal Algorithms: Example
A decision tree for sorting three elements (Figure 12.1, page 338 of the textbook).
Optimal Algorithms: Example
- Consider the problem of sorting n distinct elements using element comparison-based sorting.
- Using the decision tree model, the number of possible solutions (leaf nodes) in the binary tree is equal to n!.
- You have learnt earlier that a binary tree with n nodes has a height of at least ⌊log n⌋ (Observation 3.3, page 111 of the textbook).
- Hence, the length of the longest path in a decision tree for sorting n distinct elements is at least ⌊log(n!)⌋, which is Θ(n log n).
- Therefore: Insertion sort is not optimal. Selection sort is not optimal. Merge sort is optimal. Quick sort is not optimal (its worst case is Θ(n²)).
Estimating the Running Time of an Algorithm
As mentioned earlier, we need to focus on counting those operations that represent, in general, the behavior of the algorithm. This is achieved by:
- Counting the frequency of basic operations. A basic operation is an operation with the highest frequency, to within a constant factor, among all elementary operations.
- Recurrence relations.
Counting the Frequency of Basic Operations
- Sometimes it is easier to compute the frequency of an operation that is a good representative of the overall time complexity of the algorithm. For example, Algorithm MERGE.
- Counting the number of iterations: the number of iterations in a while loop and/or a for loop is a good indication of the total number of operations.
Example 1
sum = 0;
for (j=1; j<=n; j++)
  for (i=1; i<=j; i++)
    sum++;
for (k=0; k<n; k++)
  A[k] = k;
Example 2
for j := 1 to … do
  sum[j] := 0;
  for i := 1 to j² do
    sum[j] := sum[j] + i;
end for;
return sum[1..…];
Example 3
count := 0;
for i := 1 to n do
  m := ⌊n/i⌋;
  for j := 1 to m do
    count := count + 1;
  end for;
end for;
return count;
Examples 4 & 5
Example 4:
sum1 = 0;
for (k=1; k<=n; k*=2)
  for (j=1; j<=n; j++)
    sum1++;
Example 5:
sum2 = 0;
for (k=1; k<=n; k*=2)
  for (j=1; j<=k; j++)
    sum2++;
Example 6
count := 0;
while n >= 1 do
  for j := 1 to n do
    execute_algorithm_x;
    count := count + 1;
  end for;
  n := ⌊n/2⌋;
end while;
return count;
Example 7
count := 0;
for i := 1 to n do
  j := 2;
  while j <= n do
    j := j²;
    count := count + 1;
  end while;
end for;
return count;
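To make such counts concrete, Example 6 can be transcribed and run directly (my own sketch; the function name is arbitrary, and count += 1 stands in for execute_algorithm_x):

def example6(n):
    """Count inner-loop iterations of Example 6."""
    count = 0
    while n >= 1:
        for j in range(1, n + 1):
            count += 1        # stands in for execute_algorithm_x
        n //= 2
    return count

print([example6(n) for n in (10, 100, 1000)])   # grows roughly like 2n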
Recurrence Relations
- The number of operations can be represented as a recurrence relation.
- There are well-known techniques, other than expanding the recurrence relation, which we will study in order to solve such recurrences.
Example
MergeSort(A,p,r)   // Recursive Merge Sort
if p < r then
  q := ⌊(p+r)/2⌋;
  MergeSort(A,p,q);
  MergeSort(A,q+1,r);
  Merge(A,p,q,r);
end if;
- What is the call to sort an array with n elements?
- Let us assume that the overall cost of sorting n elements is T(n), assuming that n is a power of two.
- If n = 1, do we know T(n)?
- What is the cost of MergeSort(A,p,q)?
- What is the cost of MergeSort(A,q+1,r)?
- What is the cost of Merge(A,p,q,r)?
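For reference, the standard answers (not spelled out on the slide): the initial call is MergeSort(A,1,n); T(1) is a constant; each recursive call costs T(n/2); and Merge(A,p,q,r) costs Θ(n). This yields the recurrence T(n) = 2T(n/2) + Θ(n) with T(1) = Θ(1), whose solution is T(n) = Θ(n log n).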
Worst Case Analysis
In worst case analysis of time complexity, we select the maximum cost among all possible inputs of size n. One can do that for the Θ() notation as well as the O() notation. However, it is better to use it with the Θ() notation. Why?
Average Case Analysis
- The probabilities of all inputs are an important piece of prior knowledge needed to compute the number of operations on average.
- Usually, average case analysis is lengthy and complicated, even with simplifying assumptions.
Computing the Average Running Time
The running time in this case is taken to be the average time over all inputs of size n. Assume we have k inputs, where input i costs Ci operations and occurs with probability Pi, 1 ≤ i ≤ k. Then the average running time is given by
P1C1 + P2C2 + … + PkCk = ∑ (i=1..k) PiCi.
Average Case Analysis of Linear Search
- Assume that the probability that key x first appears in any position of the array (1, 2, …, n), or does not appear in the array at all, is equally likely.
- This means that we have a total of n + 1 different inputs, each with probability 1/(n+1).
- What is the number of comparisons for each input? (j comparisons if x first appears at position j; n comparisons if x does not appear.)
- Therefore, the average running time of linear search = (1/(n+1)) (∑ (j=1..n) j + n) = n/2 + n/(n+1) ≈ n/2, i.e., Θ(n).
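This average is easy to sanity-check numerically (my own sketch, under the slide's assumptions that the input with x at position j costs j comparisons and the unsuccessful search costs n):

def average_comparisons(n):
    """Average element comparisons of linear search over the n+1 equally
    likely inputs: x at position 1..n, or x absent."""
    costs = [j for j in range(1, n + 1)]   # x first appears at position j
    costs.append(n)                        # x absent
    return sum(costs) / (n + 1)

print(average_comparisons(1000))   # close to n/2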
Average Case Analysis of Insertion Sort
- Assume that array A contains the numbers 1..n (i.e., the elements are distinct).
- Assume that all n! permutations of the input are equally likely.
- What is the number of comparisons for inserting A[i] into its proper position in A[1..i]? Between 1 and i − 1; on average, roughly i/2.
- Therefore, the total number of comparisons on average is roughly ∑ (i=2..n) i/2 ≈ n²/4, i.e., Θ(n²).
Amortized Analysis
- Consider an algorithm in which an operation is executed repeatedly, with the property that its running time fluctuates throughout the execution of the algorithm.
- If, for example, this operation occasionally takes a large amount of time and runs much faster most of the time, then this is an indication that amortized analysis may yield a tighter bound than a per-operation worst case analysis.
- Assuming that an exact bound is too hard, if not impossible, to find, amortized analysis may be the way to analyze the problem.
Amortized vs. Average Analysis
- In amortized analysis, the time taken by the operation throughout the execution of the algorithm is averaged.
- Amortized analysis guarantees the average cost of the operation, and thus of the algorithm, in the worst case.
- This is to be contrasted with average case analysis, in which the average is taken over all instances of the same size.
Amortized Analysis Techniques
- Aggregate analysis: If one shows that, for all n, a sequence of n operations takes T(n) time in the worst case, then the amortized cost per operation is T(n)/n.
- Accounting method
- Potential method
Example 1
Consider a doubly linked list that initially consists of one node which contains the integer 0. We have as input an array A[1..n] of n positive integers that are to be processed as follows: if the current integer x is odd, append x to the list; otherwise, append x and then remove all odd elements before x in the list.
for j ← 1 to n
  x ← A[j]
  append x to the list
  if x is even then
    while pred(x) is odd
      delete pred(x)
    end while
  end if
end for
- What is the time complexity of the algorithm without amortized analysis?
- What is the time complexity of the algorithm with amortized analysis?
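A quick way to see the amortized behavior (my own sketch; a plain Python list stands in for the doubly linked list, and ops counts appends plus deletions): every element is appended exactly once and deleted at most once, so a sequence of n inputs costs at most 2n list operations in aggregate.

import random

def process(A):
    """Run the algorithm above, returning the number of appends + deletions."""
    lst = [0]                 # the list initially holds the single integer 0
    ops = 0
    for x in A:
        lst.append(x)
        ops += 1
        if x % 2 == 0:
            # delete all odd elements immediately preceding x
            while len(lst) >= 2 and lst[-2] % 2 == 1:
                del lst[-2]
                ops += 1
    return ops

n = 1000
A = [random.randint(1, 100) for _ in range(n)]
assert process(A) <= 2 * n    # aggregate analysis: amortized O(1) per input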
Example 2
Suppose we want to allocate storage for an unknown number of elements x1, x2, … in a stream of input as follows:
- Allocate an array A0 of reasonable size, say m.
- When this array becomes full, upon the arrival of the (m+1)st element, a new array A1 of size 2m is allocated and all the elements stored in A0 are moved from A0 to A1. Next, the (m+1)st element is stored in A1[m+1].
- This process continues until all elements have been stored.
- What is the time complexity of the algorithm without amortized analysis?
- What is the time complexity of the algorithm with amortized analysis?
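A simulation of the doubling scheme (my own sketch; doubling_cost and the starting size m = 4 are illustrative choices): counting one unit per element stored plus one per element moved shows the total stays within 3n, i.e., O(1) amortized per insertion.

def doubling_cost(n, m=4):
    """Count element stores + moves when inserting n elements with doubling."""
    capacity, size, cost = m, 0, 0
    for _ in range(n):
        if size == capacity:
            cost += size      # move all current elements to the new array
            capacity *= 2
        size += 1
        cost += 1             # store the new element
    return cost

for n in (10, 1000, 10**6):
    assert doubling_cost(n) <= 3 * n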
Input Size and Problem Instance
- When discussing a problem, as opposed to an algorithm, we usually talk of a problem instance. A problem instance translates to "input" in the context of an algorithm that solves that problem.
- For example, an array A of n integers is called an instance of the problem of sorting numbers. At the same time, in the context of discussing Algorithm INSERTIONSORT, we refer to this array as an input to the algorithm.
Input Size
- The input size, as a quantity, is not a precise measure of the input, and its interpretation is subject to the problem for which the algorithm is, or is to be, designed.
- In sorting and searching problems, we use the number of elements in the array.
- In graph algorithms, the input size usually refers to the number of vertices and/or edges.
- In matrix operations, the input size is commonly taken to be the dimensions of the input matrices.
- These "heterogeneous" measures have brought about some inconsistencies when comparing the amount of time or space required by two algorithms. For example, an algorithm for adding two n × n matrices which performs n² additions sounds quadratic, while it is in fact linear in the input size (the matrices contain 2n² entries).