
Algorithm Design Techniques, Greedy Method – Knapsack Problem, Job Sequencing, Divide and Conquer Method – Quick Sort, Finding Maximum and Minimum, Dynamic Programming- Matrix chain multiplication

INTRODUCTION What is an algorithm? An algorithm is a set of steps to complete a task. For example, Task: to make a cup of tea. Algorithm: add water and milk to the kettle, boil it, add tea powder, add sugar, and then serve it in a cup.

What is a computer algorithm? ''A set of steps to accomplish or complete a task, described precisely enough that a computer can run it.'' Characteristics of an algorithm: Must take an input. Must give some output (yes/no, a value, etc.). Definiteness – each instruction is clear and unambiguous. Finiteness – the algorithm terminates after a finite number of steps. Effectiveness – every instruction must be basic, i.e. a simple instruction. E.g.: an algorithm to sort a set of values.

Given two algorithms for a task, how do we find out which one is better? One naive way is to implement both algorithms, run the two programs on your computer for different inputs, and see which one takes less time. There are problems with this approach to the analysis of algorithms: 1) It might be that for some inputs the first algorithm performs better than the second, and for other inputs the second performs better. 2) It might also be that the first algorithm performs better on one machine for some inputs, while the second works better on another machine for other inputs.

Time taken by an algorithm? Performance measurement, or a posteriori analysis: implementing the algorithm on a machine and then measuring the time taken by the system to execute the program successfully. Performance evaluation, or a priori analysis: done before implementing the algorithm on a system, as follows. 1. How long the algorithm takes: represented as a function of the size of the input, f(n) → how long it takes if 'n' is the size of the input. 2. How fast the function that characterizes the running time grows with the input size: the "rate of growth of the running time". The algorithm with the lower rate of growth of running time is considered better.

Efficiency of Algorithm The complexity of an algorithm is a function describing the efficiency of the algorithm in terms of the amount of data the algorithm must process. There are two main complexity measures of the efficiency of an algorithm: Time complexity is a function describing the amount of time an algorithm takes in terms of the amount of input to the algorithm. "Time" can mean the number of memory accesses performed, the number of comparisons between integers, the number of times some inner loop is executed, or some other natural unit related to the amount of real time the algorithm will take. Space complexity is a function describing the amount of memory (space) an algorithm takes in terms of the amount of input to the algorithm. For example, we might say "this algorithm takes n² time," where n is the number of items in the input. Or we might say "this algorithm takes constant extra space," because the amount of extra memory needed doesn't vary with the number of items processed. For both time and space, we are interested in the asymptotic complexity of the algorithm: as n (the number of items of input) goes to infinity, what happens to the performance of the algorithm?

Asymptotic Analysis In asymptotic analysis, we evaluate the performance of an algorithm in terms of input size (we don't measure the actual running time); we calculate how the time (or space) taken by an algorithm increases with the input size. Example: consider the search problem (searching for a given item) in a sorted array. One way to search is Linear Search (order of growth is linear) and another is Binary Search (order of growth is logarithmic). To see how asymptotic analysis addresses the problems mentioned above, note that for the same input set the linear search algorithm will take more time than binary search: the order of growth of Binary Search with respect to input size is logarithmic, while the order of growth of Linear Search is linear.

How to calculate time complexity? A function to implement bubble sort:

void bubbleSort(int arr[], int n)
{
    int i, j;
    for (i = 0; i < n-1; i++)         // Last i elements are already in place
        for (j = 0; j < n-i-1; j++)
            if (arr[j] > arr[j+1])
                swap(&arr[j], &arr[j+1]);
}

Case 1: array already sorted – on the order of n comparisons (this requires an early-exit check that stops when a pass makes no swaps; the plain loop above always performs about n² comparisons). Case 2: array not sorted – on the order of n² comparisons. Case 3: array sorted in reverse order – on the order of n² comparisons.

Analysis of Algorithm We can have three cases when analyzing an algorithm: 1) Worst Case 2) Average Case 3) Best Case. Worst Case Analysis (usually done): we calculate an upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. Average Case Analysis (sometimes done): we take all possible inputs, calculate the computing time for each, sum all the calculated values, and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. Best Case Analysis (bogus): we calculate a lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. Worst case, e.g.: for Linear Search, the worst case happens when the element to be searched (x in the code) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one; therefore, the worst case time complexity of linear search is Θ(n). Average case, e.g.: for the linear search problem, assume that all cases are uniformly distributed (including the case of x not being present in the array). So we sum over all the cases and divide the sum by (n+1); the result is Θ(n). Best case, e.g.: in the linear search problem, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the time complexity in the best case is Θ(1). Most of the time, we do worst case analysis: it guarantees an upper bound on the running time of an algorithm, which is useful information. Average case analysis is not easy to do in most practical cases and is rarely done, since it requires knowing (or predicting) the mathematical distribution of all possible inputs.
Best case analysis is bogus: guaranteeing a lower bound on an algorithm doesn't provide useful information, since in the worst case the algorithm may still take years to run.

Asymptotic notations Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. The following three asymptotic notations are most commonly used to represent the time complexity of algorithms: Θ Notation, Big O Notation, Ω Notation. Assignment: Upload notes on asymptotic notations with examples (http://ignou.ac.in/userfiles/Unit2finalversion_Asymptotic%20Bouneds.pdf)

Asymptotic notations O(expression) is the set of functions that grow slower than or at the same rate as the expression. Ω(expression) is the set of functions that grow faster than or at the same rate as the expression. Θ(expression) consists of all the functions that lie in both O(expression) and Ω(expression). Suppose you've calculated that an algorithm takes f(n) operations, where f(n) = 3*n^2 + 2*n + 4 (n^2 means the square of n). Since this polynomial grows at the same rate as n^2, you can say that the function f lies in the set Θ(n^2). (It also lies in the sets O(n^2) and Ω(n^2) for the same reason.) The simplest explanation is that Θ denotes the same growth rate as the expression. Hence, as f(n) grows on the order of n^2, its time complexity is best represented as Θ(n^2).
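The claim that f(n) = 3n² + 2n + 4 lies in Θ(n²) can be checked directly against the definition; the constants below are one valid choice, not the only one:

```latex
f(n) = 3n^2 + 2n + 4.
\text{For } n \ge 1:\qquad 3n^2 \;\le\; f(n) \;\le\; 3n^2 + 2n^2 + 4n^2 \;=\; 9n^2.
\text{So with } c_1 = 3,\; c_2 = 9,\; n_0 = 1:\qquad c_1 n^2 \le f(n) \le c_2 n^2 \quad \text{for all } n \ge n_0,
\text{hence } f(n) \in \Theta(n^2), \text{ and therefore also } f(n) \in O(n^2) \text{ and } f(n) \in \Omega(n^2).
```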

Amortized Analysis (video: Amortized Cost Analysis Time Analysis Demo – YouTube, 10:08, https://www.youtube.com/watch?v=uso8zq5ZtzQ) is used for algorithms where an occasional operation is very slow, but most of the other operations are faster. In amortized analysis, we analyze a sequence of operations and guarantee a worst case average time that is lower than the worst case time of a particularly expensive operation. Example data structures whose operations are analyzed using amortized analysis are Hash Tables, Disjoint Sets, and Splay Trees. In an amortized analysis, the time required to perform a sequence of data structure operations is averaged over all the operations performed.

Amortized Analysis Let us consider the example of simple hash table insertions. How do we decide the table size? There is a trade-off between space and time: if we make the hash table big, search time becomes fast, but the space required becomes high. The solution to this trade-off is to use a Dynamic Table (or array): increase the size of the table whenever it becomes full. The steps to follow when the table becomes full are: 1) Allocate memory for a larger table, typically twice the size of the old table. 2) Copy the contents of the old table to the new table. 3) Free the old table. If the table has space available, we simply insert the new item in the available space.

Amortized Analysis Amortized cost: given a sequence of n operations, amortized cost = Cost(n operations) / n

Amortized Analysis What is the time complexity of n insertions using the above scheme? With a simple analysis, the worst case cost of an insertion is O(n); therefore the worst case cost of n inserts is n * O(n), which is O(n²). This analysis gives an upper bound, but not a tight upper bound for n insertions, since not all insertions take Θ(n) time. Using amortized analysis, we can prove that the Dynamic Table scheme has O(1) amortized insertion time, which is a great result used in hashing. The concept of the dynamic table is also used in vectors in C++ and ArrayList in Java. A few important notes: 1) The amortized cost of a sequence of operations can be seen as the expenses of a salaried person. The average monthly expense of the person is less than or equal to the salary, but the person can spend more money in a particular month by buying a car or something; in other months, he or she saves money for the expensive month. 2) The amortized analysis done above for the Dynamic Table example is called the Aggregate Method. There are two more powerful ways to do amortized analysis, called the Accounting Method and the Potential Method; we will discuss those two methods in separate posts. 3) Amortized analysis doesn't involve probability. There is a different notion of average case running time, where algorithms use randomization to make them faster and the expected running time is faster than the worst case running time; such algorithms are analyzed using randomized analysis. Examples are Randomized Quick Sort, Quick Select, and Hashing. We will cover randomized analysis in a different post.

Amortized Analysis Three main techniques used for amortized analysis: The aggregate method, where the total running time for a sequence of operations is analyzed. The accounting (or banker's) method, where we impose an extra charge on inexpensive operations and use it to pay for expensive operations later on. The potential (or physicist's) method, in which we derive a potential function characterizing the amount of extra work we can do in each step. This potential either increases or decreases with each successive operation, but cannot be negative.

Amortized Analysis - aggregate method The aggregate method determines an upper bound T(n) on the total cost of a sequence of n operations; the amortized cost per operation is then T(n)/n. The accounting method overcharges some operations early in the sequence, storing the overcharge as "prepaid credit" on specific objects in the data structure. The credit is used later in the sequence to pay for operations that are charged less than they actually cost. The potential method determines the amortized cost of each operation (like the accounting method) and may overcharge operations early on to compensate for undercharges later. This method maintains the credit as the potential energy of the data structure, instead of associating the credit with individual objects within the data structure.

Implement a QUEUE with two STACKs having constant amortized cost for each QUEUE operation. Name the two stacks Stack1 and Stack2; we can implement the QUEUE as follows:
•ENQUEUE(x): PUSH x into Stack1.
•DEQUEUE(): If Stack2 is not empty, simply POP from Stack2 and return the element. If Stack2 is empty, POP all the elements of Stack1, PUSH them into Stack2, then POP from Stack2 and return the result.
B) Choose any two of the three methods to prove that the amortized cost of each QUEUE operation is O(1).
•Aggregate method: Consider a sequence of n operations. The sequence involves at most n elements, and the cost associated with each element is at most 4 (pushed into Stack1, popped from Stack1, pushed into Stack2, and popped from Stack2). Hence, the actual cost of n operations is upper-bounded by T(n) = 4n, and the amortized cost of each operation is T(n)/n = 4n/n = 4 = O(1).
•Accounting method: We guess that the amortized costs for ENQUEUE and DEQUEUE are 3 and 1, and show that the credit never goes negative: we have P(0) = 0 and P(n) - P(0) >= 0 for all n. Each element is pushed at most twice and popped at most twice, so the charge of 3 paid at ENQUEUE covers everything except the final pop, and the cost of the last pop is paid for by the DEQUEUE operation. Note: alternatively, we can set the costs for ENQUEUE and DEQUEUE as 4 and 0 respectively.
•Potential method: We guess the potential function P(n) = 2 * #Elements in Stack1, with P(0) = 0 and P(n) - P(0) >= 0 for all n.
ENQUEUE: The actual cost of PUSH is 1. The number of elements in Stack1 increases by 1, so ΔP = 2. Amortized cost = actual cost + ΔP = 1 + 2 = 3.
DEQUEUE: If Stack2 is not empty, the actual cost of DEQUEUE is 1 and the number of elements in Stack1 stays the same, i.e. ΔP = 0. Amortized cost = actual cost + ΔP = 1 + 0 = 1. If Stack2 is empty, let x = #Elements in Stack1. The actual cost is 2x + 1 (x pops from Stack1, x pushes into Stack2, and one final pop), and ΔP = 0 - 2x = -2x. Amortized cost = actual cost + ΔP = (2x + 1) + (-2x) = 1.
Therefore, the amortized costs for ENQUEUE and DEQUEUE are 3 and 1 respectively.