Presentation on theme: "Complexity analysis."— Presentation transcript:

1 Complexity analysis

2 Average and Worst Case Analysis
Worst-case complexity: the maximum time required for program execution (the algorithm runs slowest among all inputs). In worst-case analysis we calculate an upper bound on the running time of an algorithm.
Average-case complexity: the average time required for program execution. It gives information about the algorithm's behavior on random input.
Best-case complexity: the minimum time required for program execution (the algorithm runs fastest among all inputs). It gives a lower bound on the running time over all instances of input.

3 Why Worst Case Analysis?
The worst-case running time of an algorithm gives an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm’s worst case will often occur when the information is not present in the database. The “average case” is often roughly as bad as the worst case.

4 Asymptotic Notations

5 Asymptotic Notations Asymptotic notation is useful for describing the running time of an algorithm. Asymptotic notations express time complexity as "fastest possible", "slowest possible", or "average" time. Asymptotic notation is useful because it allows us to concentrate on the main factor determining a function's growth.

6 Asymptotic Notations The following asymptotic notations are commonly used when calculating the running-time complexity of an algorithm.
1. Big O notation: f(n) = O(g(n)) (read: f of n is big oh of g of n) if there exist a positive integer n0 and a positive constant c such that |f(n)| ≤ c|g(n)| for all n ≥ n0.
2. Omega (Ω) notation: f(n) = Ω(g(n)) (read: f of n is omega of g of n) if there exist a positive integer n0 and a positive constant c such that |f(n)| ≥ c|g(n)| for all n ≥ n0.
3. Theta (Θ) notation: f(n) = Θ(g(n)) (read: f of n is theta of g of n) if there exist a positive integer n0 and positive constants c1 and c2 such that c1|g(n)| ≤ |f(n)| ≤ c2|g(n)| for all n ≥ n0.

7 Big O notation (upper bound – worst case)

8 1. Big O notation (upper bound – worst case)
O notation is the formal way to express the upper bound of an algorithm's running time: it indicates the maximum time required by an algorithm over all input values. That is, Big-Oh notation describes the worst case of an algorithm's time complexity.
Definition: O(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c g(n) for all n ≥ n0}.

9 Big O notation (contd.) Consider the following f(n) and g(n): f(n) = 3n + 2, g(n) = n. To represent f(n) as O(g(n)), there must exist constants C > 0 and n0 ≥ 1 such that f(n) ≤ C·g(n) for all n ≥ n0:
f(n) ≤ C·g(n) ⇒ 3n + 2 ≤ C·n
The condition holds for C = 4 and all n ≥ 2. Using Big-Oh notation we can therefore write 3n + 2 = O(n).
More examples:
f(n) = 16n^3 + 12n^2 + 12n, g(n) = n^3 ⇒ f(n) = O(n^3)
f(n) = 34n − 90, g(n) = n ⇒ f(n) = O(n)
f(n) = 56, g(n) = 1 ⇒ f(n) = O(1)
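The inequality above can be checked mechanically over a finite range. The sketch below (Python; the helper name `is_big_o_witness` is ours) is a finite-range sanity check of the constants C = 4, n0 = 2, not a proof:

```python
def f(n):
    return 3 * n + 2

def is_big_o_witness(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c * g(n) for every n in [n0, n_max].
    A finite-range sanity check of Big-O constants, not a proof."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

print(is_big_o_witness(f, lambda n: n, c=4, n0=2))  # True: 3n + 2 <= 4n for n >= 2
print(is_big_o_witness(f, lambda n: n, c=2, n0=1))  # False: 3n + 2 > 2n for all n >= 1
```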

10 Big O notation (contd.): Properties of Big-O Notation
Property 1: If f(n) = C·g(n), then f(n) is O(g(n)). Example: if f(n) = 20·g(n), then f(n) is O(g(n)).
Property 2: If f1(n) is O(g(n)) and f2(n) is O(g(n)), then f1(n) + f2(n) is O(g(n)).
Property 3: If f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f1(n) · f2(n) is O(g1(n) · g2(n)).
Property 4: The function a·n^k is O(n^k).

11 Omega (Ω) notation (lower bound – best case)

12 Omega (Ω) notation (lower bound – best case)
Ω-notation provides an asymptotic lower bound on a function. It measures the best-case time complexity, i.e. the least amount of time an algorithm can possibly take to complete.
Definition: Ω(g(n)) = {f(n) : there exist positive constants c and n0 such that 0 ≤ c g(n) ≤ f(n) for all n ≥ n0}.

13 Omega (Ω) notation (contd.)
Consider the following f(n) and g(n): f(n) = 3n + 2, g(n) = n. To represent f(n) as Ω(g(n)), there must exist constants C > 0 and n0 ≥ 1 such that f(n) ≥ C·g(n) for all n ≥ n0:
f(n) ≥ C·g(n) ⇒ 3n + 2 ≥ C·n
The condition holds for C = 1 and all n ≥ 1. Using Big-Omega notation we can therefore write 3n + 2 = Ω(n).
More examples:
f(n) = 16n^3 + 12n^2 + 12n, g(n) = n^3 ⇒ f(n) = Ω(n^3)
f(n) = 34n − 90, g(n) = n ⇒ f(n) = Ω(n)
f(n) = 56, g(n) = 1 ⇒ f(n) = Ω(1)

14 Theta (Θ) notation (upper bound as well as lower bound – average case)

15 2. Theta (Θ) notation (upper bound as well as lower bound – average case)
Θ notation is the formal way to express both a lower bound and an upper bound on an algorithm's running time, i.e. a tight bound; it is often associated with average-case complexity.
Definition: Θ(g(n)) = {f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0}.

16 Θ-notation (contd.) Consider the following f(n) and g(n): f(n) = 3n + 2, g(n) = n. To represent f(n) as Θ(g(n)), there must exist constants C1, C2 > 0 and n0 ≥ 1 such that C1·g(n) ≤ f(n) ≤ C2·g(n) for all n ≥ n0:
C1·n ≤ 3n + 2 ≤ C2·n
The condition holds for C1 = 1, C2 = 4 and all n ≥ 2. Using Big-Theta notation we can therefore write 3n + 2 = Θ(n).
More examples:
f(n) = 16n^3 + 12n^2 + 12n, g(n) = n^3 ⇒ f(n) = Θ(n^3)
f(n) = 34n − 90, g(n) = n ⇒ f(n) = Θ(n)
f(n) = 56, g(n) = 1 ⇒ f(n) = Θ(1)
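The two-sided bound can be checked the same way as the Big-O bound. A minimal sketch (the helper name `theta_witness` is ours):

```python
def theta_witness(f, g, c1, c2, n0, n_max=10_000):
    """Check c1*g(n) <= f(n) <= c2*g(n) on [n0, n_max] (sanity check, not a proof)."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, n_max + 1))

# 1*n <= 3n + 2 <= 4*n holds from n = 2 onward...
print(theta_witness(lambda n: 3 * n + 2, lambda n: n, c1=1, c2=4, n0=2))  # True
# ...but not at n = 1, where 3n + 2 = 5 > 4 = 4n.
print(theta_witness(lambda n: 3 * n + 2, lambda n: n, c1=1, c2=4, n0=1))  # False
```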

17 Relations Between Θ, O, Ω

18 Common Growth Rates
Function    Name
c           Constant
log N       Logarithmic
log^2 N     Log-squared
N           Linear
N log N
N^2         Quadratic
N^3         Cubic
2^N         Exponential
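To see how differently these functions grow, one can evaluate each at a couple of input sizes. A small illustrative script (the formatting choices are ours):

```python
import math

# Growth-rate functions from the table above, in increasing order of growth.
growth_rates = [
    ("c",       lambda n: 1),
    ("log N",   lambda n: math.log2(n)),
    ("log^2 N", lambda n: math.log2(n) ** 2),
    ("N",       lambda n: n),
    ("N log N", lambda n: n * math.log2(n)),
    ("N^2",     lambda n: n ** 2),
    ("N^3",     lambda n: n ** 3),
    ("2^N",     lambda n: 2 ** n),
]

for name, f in growth_rates:
    print(f"{name:8}  N=10: {f(10):>10.1f}   N=20: {f(20):>12.1f}")
```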

19 P, NP & NP-completeness

20 P, NP & NP-completeness
Class P: P is the complexity class of all decision problems that can be solved in polynomial time. That is, given an instance of the problem, the yes/no answer can be decided in polynomial time. "P" stands for "polynomial time."
Class NP: "NP" stands for "nondeterministic polynomial." NP is the complexity class of all decision problems for which the instances where the answer is "yes" have proofs (certificates) that can be verified in polynomial time, e.g. Sudoku.
Class NP-Complete: NP-Complete is the complexity class of all problems X in NP for which it is possible to reduce any other NP problem Y to X in polynomial time.
Class NP-hard: NP-hard refers to any problem that is at least as hard as every problem in NP. Thus, the NP-complete problems are precisely the intersection of the class of NP-hard problems with the class NP.
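The NP definition is about easy verification: checking a proposed Sudoku solution is polynomial-time even though finding one may be hard. A minimal verifier sketch for a completed standard 9x9 grid (the helper name and the pattern-generated sample grid are ours):

```python
def is_valid_sudoku(grid):
    """Polynomial-time verifier: accept a completed 9x9 grid iff every
    row, column, and 3x3 box contains the digits 1..9 exactly once."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [{grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
             for br in (0, 3, 6) for bc in (0, 3, 6)]
    return all(unit == digits for unit in rows + cols + boxes)

# A known-valid grid built from a standard shifting pattern.
grid = [[(3 * (r % 3) + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
print(is_valid_sudoku(grid))  # True
```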

21 Amortized Complexity

22 Amortized Complexity An amortized analysis is any strategy for analyzing a sequence of operations to show that the average cost per operation is small, even though a single operation within the sequence might be expensive. In an amortized analysis, the time required to perform a sequence of data-structure operations is averaged over all the operations performed. Even though we are taking averages, amortized analysis differs from average-case analysis in that probability is not involved: an amortized analysis guarantees the average performance of each operation in the worst case.
A. The aggregate method
B. The accounting method
C. The potential method
D. Dynamic tables (a classic application of these methods)

23 Short description
Aggregate method: we show that a sequence of n operations takes worst-case total time T(n).
Accounting method: we overcharge some operations early and use the overcharge as prepaid credit to pay for later operations.
Potential method: we maintain the credit as "potential energy" associated with the structure as a whole.

24 A) Aggregate Method: We show that a sequence of n operations takes worst-case total time T(n), i.e. we determine an upper bound T(n) on the total cost of the sequence. The amortized cost of each operation is then T(n)/n. Note that this amortized cost applies to every operation, even when there are several types of operations in the sequence.

25 Imagine that you run a business, and you need to buy a car. The car costs €10,000, so buying it means a €10,000 payment this year. However, you plan to use the car for the next ten years, and running the car costs another €1,000 per year. There are two possible ways of looking at this situation. The first is the way we used above: there is one year with a lot of expenses, and another nine years with a smaller amount of expenses. The second is to sum it all up: the total expense of buying the car and using it for ten years is €20,000. Hence, you can say that the car costs €2,000 per year. This is amortization.
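The arithmetic of the example, spelled out (amounts taken from the text above):

```python
purchase = 10_000        # one-time payment in the first year (EUR)
running_cost = 1_000     # per-year running cost (EUR)
years = 10

total = purchase + running_cost * years   # total expense over ten years
amortized_per_year = total / years        # the "amortized" yearly cost

print(total, amortized_per_year)  # 20000 2000.0
```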

26 Aggregate Method (contd.)
Take, for example, a stack with a new operation MULTIPOP(S, k), which pops up to k elements. If we consider PUSH and POP to be the elementary operations, then a single MULTIPOP takes O(n) in the worst case, yet any sequence of n PUSH, POP, and MULTIPOP operations takes O(n) total, so the amortized cost per operation is O(1).
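A small simulation of the aggregate argument (the class name and random workload are ours): each element is pushed once and popped at most once, so any sequence of operations performs at most two elementary steps per push.

```python
import random

class CountingStack:
    """Stack with MULTIPOP that counts elementary PUSH/POP steps."""
    def __init__(self):
        self.items = []
        self.steps = 0          # total elementary operations performed

    def push(self, x):
        self.items.append(x)
        self.steps += 1

    def pop(self):
        self.steps += 1
        return self.items.pop()

    def multipop(self, k):
        # Pops min(k, |S|) elements; a single call is O(n) worst case.
        while self.items and k > 0:
            self.pop()
            k -= 1

random.seed(0)
s = CountingStack()
pushes = 0
for _ in range(1_000):
    if random.random() < 0.7:
        s.push(0)
        pushes += 1
    else:
        s.multipop(random.randrange(1, 5))

# Aggregate bound: every pop is paid for by an earlier push.
assert s.steps <= 2 * pushes
print(s.steps, pushes)
```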

27 B) Accounting Method In the accounting method of amortized analysis, we assign differing charges to different operations, with some operations charged more or less than they actually cost. The amount we charge an operation is called its amortized cost. When an operation's amortized cost exceeds its actual cost, the difference is assigned to specific objects in the data structure as credit. Credit can be used later to help pay for operations whose amortized cost is less than their actual cost. Thus, one can view the amortized cost of an operation as split between its actual cost and credit that is either deposited or used up. This method is very different from aggregate analysis, in which all operations have the same amortized cost: here we overcharge some operations early and use the overcharge as prepaid credit later.

28 Accounting Method (contd.)
One must choose the amortized costs of operations carefully. If we want the analysis with amortized costs to show that in the worst case the average cost per operation is small, the total amortized cost of a sequence of operations must be an upper bound on the total actual cost of the sequence. Moreover, as in aggregate analysis, this relationship must hold for all sequences of operations. If we denote the actual cost of the ith operation by ci and the amortized cost of the ith operation by ĉi, we require
Σ_{i=1..n} ĉi ≥ Σ_{i=1..n} ci.

29 Accounting Method (contd.)
Think about it this way: when we push a plate onto the stack, we use $1 to pay the actual cost of the push and leave $1 on the plate as credit. At any point, every plate on the stack has a dollar on top of it. When we execute a POP, we charge it nothing and pay its actual cost with the dollar on top of the plate.
Operation        Amortized (assigned) cost    Actual cost
PUSH(S, x)       2                            1
POP(S)           0                            1
MULTIPOP(S, k)   0                            min(|S|, k)
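The dollar-on-the-plate scheme can be simulated directly (the class name is ours): PUSH is charged an amortized $2, POP/MULTIPOP $0, and the stored credit never goes negative, so the total amortized cost bounds the total actual cost.

```python
class AccountedStack:
    """Accounting method for the multipop stack: amortized costs 2/0/0."""
    def __init__(self):
        self.items = []
        self.credit = 0       # dollars sitting on plates
        self.amortized = 0    # total amount charged
        self.actual = 0       # total real work done

    def push(self, x):
        self.amortized += 2   # $1 pays the push, $1 stays on the plate
        self.actual += 1
        self.credit += 1
        self.items.append(x)

    def multipop(self, k):
        while self.items and k > 0:   # charged $0; paid from stored credit
            self.items.pop()
            self.actual += 1
            self.credit -= 1
            assert self.credit >= 0, "credit never goes negative"
            k -= 1

s = AccountedStack()
for i in range(100):
    s.push(i)
s.multipop(40)
s.multipop(1_000)   # pops only the 60 remaining elements
print(s.amortized >= s.actual, s.credit)  # True 0
```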

30 C) Potential Method Instead of representing prepaid work as credit stored with specific objects in the data structure, the potential method of amortized analysis represents the prepaid work as "potential energy," or just "potential," which can be released to pay for future operations. The potential is associated with the data structure as a whole rather than with specific objects within it. The potential method works as follows. We start with an initial data structure D0 on which n operations are performed. For each i = 1, 2, …, n, we let ci be the actual cost of the ith operation and Di be the data structure that results after applying the ith operation to Di−1.

31 Potential Method (contd.)
A potential function Φ maps each data structure Di to a real number Φ(Di), the potential associated with Di. The amortized cost ĉi of the ith operation with respect to Φ is defined by
ĉi = ci + Φ(Di) − Φ(Di−1),
so the total amortized cost of n operations telescopes:
Σ_{i=1..n} ĉi = Σ_{i=1..n} (ci + Φ(Di) − Φ(Di−1)) = Σ_{i=1..n} ci + Φ(Dn) − Φ(D0).
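With Φ(D) = number of elements on the stack, each PUSH has amortized cost 2 and each MULTIPOP amortized cost 0, and the telescoping sum can be checked numerically (the operation list below is an arbitrary example of ours):

```python
# Potential method for the multipop stack with Phi(D) = |S|.
ops = [("push", None)] * 5 + [("multipop", 3), ("push", None), ("multipop", 10)]

stack = []
actual_costs, amortized_costs = [], []
phi_prev = len(stack)                      # Phi(D0) = 0

for op, k in ops:
    if op == "push":
        stack.append(0)
        actual = 1
    else:                                  # multipop: pops min(k, |S|) elements
        m = min(len(stack), k)
        del stack[len(stack) - m:]
        actual = m
    phi = len(stack)
    amortized_costs.append(actual + phi - phi_prev)  # c_i + Phi(D_i) - Phi(D_i-1)
    actual_costs.append(actual)
    phi_prev = phi

# Telescoping: sum of amortized = sum of actual + Phi(D_n) - Phi(D_0)
assert sum(amortized_costs) == sum(actual_costs) + len(stack)
print(amortized_costs)  # [2, 2, 2, 2, 2, 0, 2, 0]
```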

32 Applications of amortized analysis
Vectors/ tables Disjoint sets Priority queues Heaps, Binomial heaps, Fibonacci heaps Hashing


