1
Algorithms
2
Algorithms What is an algorithm?
An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
3
Algorithms Properties of algorithms:
Input from a specified set
Output from a specified set (the solution)
Definiteness of every step in the computation
Correctness of the output for every possible input
Finiteness of the number of calculation steps
Effectiveness of each calculation step
4
Algorithm Examples
Example: an algorithm that finds the maximum element in a finite sequence.

procedure max(a1, a2, …, an: integers)
    max := a1
    for i := 2 to n
        if max < ai then max := ai
{max is the largest element}
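A minimal C rendering of this procedure might look as follows (the array-based interface and function name are our own, since the slide's pseudocode works on a sequence a1…an):

/* Return the maximum element of an array of n integers (requires n >= 1). */
int max_element(const int a[], int n) {
    int max = a[0];                  /* corresponds to max := a1 */
    for (int i = 1; i < n; i++)      /* corresponds to i := 2 to n (0-based) */
        if (max < a[i])
            max = a[i];
    return max;                      /* max is the largest element */
}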
5
Algorithm Examples If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search. The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located.
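As a sketch (interface and names are our own), an iterative binary search over a sorted C array:

/* Return the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint of the current interval */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;               /* restrict search to the right half */
        else
            hi = mid - 1;               /* restrict search to the left half */
    }
    return -1;                          /* interval is empty: key not present */
}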
6
Algorithm Design and Analysis
Design an algorithm.
Prove the algorithm is correct: loop invariant, recursive function, formal (mathematical) proof.
Analyze the algorithm:
Time: worst case, best case, average case. For some algorithms the worst case occurs often, and the average case is often roughly as bad as the worst case. So generally we analyze worst-case running time.
Space.
7
2 Use of Loop
Initial condition
Invariant condition
Termination
8
3 Efficiency of Algorithm
Removing redundant computations from loops
Referencing of array elements
Inefficiency due to late termination
Early detection of the desired output condition
9
Removing redundant computations from loops
x = 0;
for i = 1 to N do
begin
    x = x + 0.01;
    y = (a*a*a + c) * x * x + b * b * x
end
10
The expressions a*a*a + c and b*b do not change inside the loop, so they can be computed once beforehand:

w = (a*a*a + c);
z = b * b;
x = 0;
for i = 1 to N do
begin
    x = x + 0.01;
    y = w * x * x + z * x
end
11
Referencing of array element
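The example for this slide did not survive extraction. A typical illustration of the technique, sketched here in C with names of our own choosing, fetches a repeatedly indexed element into a local variable once per iteration:

/* Before: the same element is indexed three times per iteration. */
int sum_cubes_slow(const int a[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i] * a[i] * a[i];
    return sum;
}

/* After: the element is fetched once into a local variable. */
int sum_cubes_fast(const int a[], int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        int t = a[i];          /* one array reference instead of three */
        sum += t * t * t;
    }
    return sum;
}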
12
Inefficiency due to late termination (Bubble sort)
Compare each element (except the last one) with its neighbor to the right; if they are out of order, swap them. This puts the largest element at the very end, so the last element is now in its correct and final place.
Compare each element (except the last two) with its neighbor to the right. This puts the second-largest element next to last; the last two elements are now in their correct and final places.
Compare each element (except the last three) with its neighbor to the right.
Continue as above until no unsorted elements remain on the left.
13
Bubble sort algorithm:

I = N - 1
for i = 1 to N-1 do
    for j = 1 to I do
        if current key > next key then exchange the items
    end
    if no exchange was made during the above loop then return
    else I = I - 1
end
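A runnable C version of this algorithm, with the early-termination test, might look like this (names are ours):

/* Bubble sort with early termination: stop as soon as a pass makes no swaps. */
void bubble_sort(int a[], int n) {
    for (int limit = n - 1; limit > 0; limit--) {
        int swapped = 0;
        for (int j = 0; j < limit; j++) {
            if (a[j] > a[j + 1]) {      /* current key > next key */
                int tmp = a[j];         /* exchange the items */
                a[j] = a[j + 1];
                a[j + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped)                   /* no exchange during this pass: */
            return;                     /* the array is already sorted */
    }
}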
14
Example of bubble sort on 7 2 8 5 4:

Pass 1: 7 2 8 5 4 → 2 7 8 5 4 → 2 7 5 8 4 → 2 7 5 4 8
Pass 2: 2 7 5 4 8 → 2 5 7 4 8 → 2 5 4 7 8
Pass 3: 2 5 4 7 8 → 2 4 5 7 8
Pass 4: 2 4 5 7 8 (no exchanges — done)
15
Early Detection of Desired output condition
Conclusions: Algorithms for solving the same problem can differ dramatically in their efficiency, and these differences can be much more significant than differences due to hardware and software. In the bubble sort algorithm above, the test "if no exchange was made during the above loop then return" is an example of early detection of the desired output condition: the sort stops as soon as a pass confirms the array is already sorted.
16
4 Estimating & Specifying Execution Times
Justification for the use of problem size as a measure
Computational cost as a function of problem size for a range of computational complexities
17
Justification for the use of Problem size as a Measure
Power1(x, n)    [brute-force method]
local integer i
result ← x                      [t1, executed 1 time]
for i ← 1 to n-1 do             [t2, executed n-1 times]
    result ← result * x         [t3, executed n-1 times]
end
return result                   [t4, executed 1 time]
18
Thus the total running time is:

T(n) = t1 + (n-1)·t2 + (n-1)·t3 + t4 = (t1 + t4) + (n-1)·(t2 + t3)

The first term is constant and does not depend on n; as n increases, the second term grows in proportion to n.
19
Computational Cost as a Function of Problem Size for a Range of Computational Complexities
[Figure: computational cost versus problem size for a range of complexities, from logarithmic behavior at the low end to exponential behavior at the high end.]
20
Complexity In general, we are not so much interested in the time and space complexity for small inputs. For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for n = 2^30.
21
Complexity For example, let us assume two algorithms A and B that solve the same class of problems. The time complexity of A is 5,000n, and that of B is 1.1^n, for an input with n elements. For n = 10, A requires 50,000 steps, but B only about 3, so B seems to be superior to A. For n = 1000, however, A requires 5,000,000 steps, while B requires about 2.5·10^41 steps.
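To see the crossover concretely, a few lines of C (illustrative only; link with -lm) evaluate both step counts:

#include <math.h>
#include <stdio.h>

/* Print the step counts of A (5000n) and B (1.1^n) for a few input sizes. */
int main(void) {
    int sizes[] = {10, 100, 500, 1000};
    for (int i = 0; i < 4; i++) {
        int n = sizes[i];
        printf("n = %4d:  A = %.0f  B = %.3g\n",
               n, 5000.0 * n, pow(1.1, n));
    }
    return 0;
}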
22
Complexity This means that algorithm B cannot be used for large inputs, while algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms.
23
5 Order Notation
Order notation is the language we use for describing how long an algorithm takes to run; it is how we compare the efficiency of different approaches to a problem.
Big-oh (O)
Big-Theta (Θ)
Big-Omega (Ω)
Little-oh (o)
Little-omega (ω)
24
Three Common Sets
f(n) = O(g(n)) means c·g(n) is an upper bound on f(n).
f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n).
f(n) = Θ(g(n)) means c1·g(n) is an upper bound on f(n) and c2·g(n) is a lower bound on f(n).
These bounds hold for all inputs beyond some threshold n0. This is asymptotic, or big-O, notation; the three symbols are O, Omega (Ω), and Theta (Θ).
25
Asymptotic Notation Θ, O, Ω, o, ω
Defined for functions over the natural numbers. Example: f(n) = Θ(n^2) describes how f(n) grows in comparison to n^2. Each notation defines a set of functions; in practice it is used to compare the sizes of two functions. The notations describe different rate-of-growth relations between the defining function and the defined set of functions.
26
big-O notation The growth of functions is usually described using the big-O notation. Definition: Let f and g be functions from the integers or the real numbers to the real numbers.
27
big-O notation We say that f(n) is O(g(n)) if there are constants C and k such that |f(n)| ≤ C·|g(n)| whenever n > k.
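For example, f(n) = 3n^2 + 5n is O(n^2): for n > 1 we have 5n < 5n^2, so |3n^2 + 5n| ≤ 8·|n^2|, and the definition is satisfied with C = 8 and k = 1.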
28
Order Notation
Big-oh can also be defined by a limit:

f(n) = O(g(n)) if lim (n → ∞) f(n)/g(n) = c, where c is a finite constant.

The idea behind the big-O notation is to establish an upper boundary for the growth of a function f(n) for large n. This boundary is specified by a function g(n) that is usually much simpler than f(n).
29
f(n) = O(g(n)): there exist a positive constant c and a positive constant n0 such that f(n) ≤ c·g(n) for all n ≥ n0. [Figure: graph of c·g(n) lying above f(n) for all n ≥ n0.]
30
O-notation
For a function g(n), we define O(g(n)) as the set:
O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }.
Intuitively: the set of all functions whose rate of growth is the same as or lower than that of g(n). g(n) is an asymptotic upper bound for f(n).
31
1. Θ is stronger than O: f(n) = Θ(g(n)) implies f(n) = O(g(n)), i.e., Θ(g(n)) ⊆ O(g(n)).
2. We write f(n) = O(g(n)) for f(n) ∈ O(g(n)).
32
Properties of O -notation
1) Transitivity (A to B, B to C implies A to C): f(n) = O(g(n)) and g(n) = O(h(n)) implies f(n) = O(h(n)).
2) Reflexivity: f(n) = O(f(n)).
3) Transpose symmetry: f(n) = O(g(n)) implies g(n) = Ω(f(n)).
33
Ω-notation
For a function g(n), we define Ω(g(n)), big-Omega of n, as the set:
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that for all n ≥ n0 we have 0 ≤ c·g(n) ≤ f(n) }.
Intuitively: the set of all functions whose rate of growth is the same as or higher than that of g(n). g(n) is an asymptotic lower bound for f(n).
34
1. Θ is stronger than Ω: f(n) = Θ(g(n)) implies f(n) = Ω(g(n)), i.e., Θ(g(n)) ⊆ Ω(g(n)).
2. We write f(n) = Ω(g(n)) for f(n) ∈ Ω(g(n)).
35
Properties of Ω-notation
1) Transitivity (A to B, B to C implies A to C): f(n) = Ω(g(n)) and g(n) = Ω(h(n)) implies f(n) = Ω(h(n)).
2) Reflexivity: f(n) = Ω(f(n)).
3) Transpose symmetry: f(n) = Ω(g(n)) implies g(n) = O(f(n)).
36
Θ-notation
For a function g(n), we define Θ(g(n)), big-Theta of n, as the set:
Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that for all n ≥ n0 we have c1·g(n) ≤ f(n) ≤ c2·g(n) }.
Intuitively: the set of all functions that have the same rate of growth as g(n). g(n) is an asymptotically tight bound for f(n).
37
Properties of Θ-notation
1) Transitivity (A to B, B to C implies A to C): f(n) = Θ(g(n)) and g(n) = Θ(h(n)) implies f(n) = Θ(h(n)).
2) Reflexivity: f(n) = Θ(f(n)).
3) Symmetry: f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
4) Maximum: max(f(n), g(n)) = Θ(f(n) + g(n)).
38
Relations Between Θ, O, Ω
f(n) = O(g(n)) means c·g(n) is an upper bound on f(n).
f(n) = Ω(g(n)) means c·g(n) is a lower bound on f(n).
f(n) = Θ(g(n)) means c1·g(n) is an upper bound on f(n) and c2·g(n) is a lower bound on f(n).
These bounds hold for all inputs beyond some threshold n0.
39
Notes on o-notation O-notation may or may not be asymptotically tight as an upper bound; o-notation is used to denote an upper bound that is not tight. For example, 2n = o(n^2), but 2n^2 ≠ o(n^2). The difference: the bound holds for some positive constant c in O-notation, but for all positive constants c in o-notation.
40
o-notation
For a given function g(n),
o(g(n)) = { f(n) : for any positive constant c, there exists a positive constant n0 such that 0 ≤ f(n) < c·g(n) for all n ≥ n0 }.
We write f(n) ∈ o(g(n)), or simply f(n) = o(g(n)).
41
ω-notation
For a given function g(n),
ω(g(n)) = { f(n) : for any positive constant c, there exists a positive constant n0 such that 0 ≤ c·g(n) < f(n) for all n ≥ n0 }.
We write f(n) ∈ ω(g(n)), or simply f(n) = ω(g(n)). ω-notation, analogous to o-notation, denotes a lower bound that is not asymptotically tight.
42
Measuring the execution time
While working with big-O notation, we discard multiplicative constants. The functions f(n) = 1000·n^2 and g(n) = 0.0001·n^2 are treated as identical, even though f(n) is ten million times larger than g(n) for every n. This means the comparison captures only how execution time grows with n, not the constant factors.
43
Measuring the execution time
From this we can draw the following conclusions:
1. Algorithms of all complexity classes take roughly the same amount of time for n ≤ 10.
2. An algorithm whose running time is n! becomes useless well before n = 20.
3. An algorithm whose running time is 2^n has a greater operating range than one with running time n!, but it becomes impractical for n > 40.
44
Measuring the execution time
A) Iteration: Power1(x, n)    [brute-force method]
local integer i
result ← x                      [t1, executed 1 time]
for i ← 1 to n-1 do             [t2, executed n-1 times]
    result ← result * x         [t3, executed n-1 times]
end
return result                   [t4, executed 1 time]

For example, consider x = 2, n = 5.
45
Measuring the execution time
Iteration, for example x = 2, n = 5: result starts at 2 and is multiplied by 2 four times (2·2·2·2·2 = 32). Multiplications required = 4.
46
Measuring the execution time
Recursive method: Power2(x, n)
local real y
if n = 1 then
    return x
end
y ← Power2(x, ⌊n/2⌋)
if odd(n) then
    return y * y * x
else
    return y * y
end
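A direct C transcription of Power2 might look like this (the types and function name are assumptions):

/* Recursive fast exponentiation: computes x^n (requires n >= 1) with
   O(log n) multiplications by squaring the half-size subproblem. */
double power2(double x, int n) {
    if (n == 1)
        return x;
    double y = power2(x, n / 2);   /* y = x^floor(n/2) */
    if (n % 2 == 1)
        return y * y * x;          /* odd n:  x^n = y * y * x */
    else
        return y * y;              /* even n: x^n = y * y */
}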
47
Measuring the execution time
Recursive, for example x = 2, n = 5:
Power2(2,5): y ← Power2(2,2); n is odd, so return y·y·2
    Power2(2,2): y ← Power2(2,1); n is even, so return y·y = 2·2 = 4
        Power2(2,1): return 2
Result: 4·4·2 = 32. Multiplications required = 3.
48
Measuring the execution time
Recursive method: Power2(x, n)
local real y
if n = 1 then return x end
y ← Power2(x, ⌊n/2⌋)
if odd(n) then return y * y * x
else return y * y
end

Iterative method: Power1(x, n)    [brute-force method]
local integer i
result ← x                        [t1, executed 1 time]
for i ← 1 to n-1 do               [t2, executed n-1 times]
    result ← result * x           [t3, executed n-1 times]
end
return result                     [t4, executed 1 time]
49
Measuring the execution time
Iteration (x = 2, n = 5): multiplications required = 4, and in general n-1. Recursion (x = 2, n = 5): multiplications required = 3, and in general O(log n).
50
6 Algorithm Strategies
Brute force
Divide & conquer
Dynamic programming
Greedy algorithm
Genetic algorithm
51
1) Brute force (solving the problem directly, without any special method or technique):
In computer science, brute-force search is a very general problem-solving technique that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement. While a brute-force search is simple to implement and will always find a solution if one exists, its cost is proportional to the number of candidate solutions, which in many practical problems tends to grow very quickly as the size of the problem increases. Therefore, brute-force search is typically used when the problem size is limited, or when the simplicity of implementation is more important than speed.
52
2) Divide & conquer: Divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. This technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g., quicksort, merge sort)
53
3) Dynamic Programming:
Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure. To solve a given problem using a dynamic programming approach, we solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm examines the previously solved subproblems and combines their solutions to give the best solution for the given problem.
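A minimal sketch of the dynamic-programming idea, using the Fibonacci numbers that appear later in this deck (the table size and names are our own): each overlapping subproblem is solved once and its stored solution reused.

/* Memoized Fibonacci: valid for 0 <= n <= 92 (fits in long long). */
long long fib_table[100];   /* zero-initialized; 0 means "not yet computed" */

long long fib(int n) {
    if (n <= 1)
        return n;                                /* Fib(0)=0, Fib(1)=1 */
    if (fib_table[n] == 0)                       /* solve the subproblem once */
        fib_table[n] = fib(n - 1) + fib(n - 2);
    return fib_table[n];                         /* reuse the stored solution */
}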
54
4) Greedy Algorithm: A greedy algorithm picks the locally optimal choice at each branch in the road. The locally optimal choice may be a poor choice for the overall solution, so while a greedy algorithm does not guarantee an optimal solution, it is often faster to compute. Fortunately, some greedy algorithms (such as those for minimum spanning trees) are proven to lead to the optimal solution.
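As an illustration not taken from the slides, a classic greedy sketch in C: making change with the largest coins first. The locally optimal choice happens to be globally optimal for canonical coin systems like 25/10/5/1, but can fail for others (e.g., coins {4, 3, 1} and amount 6: greedy uses 4+1+1 = 3 coins, while 3+3 = 2 coins is optimal).

/* Greedy coin change: repeatedly take the largest coin that still fits.
   coins[] must be sorted in decreasing order and end with a 1-unit coin. */
int greedy_coins(int amount, const int coins[], int ncoins) {
    int used = 0;
    for (int i = 0; i < ncoins; i++) {
        used += amount / coins[i];   /* locally optimal choice */
        amount %= coins[i];          /* remainder still to be paid */
    }
    return used;                     /* number of coins handed out */
}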
55
5) Genetic Algorithm: In the field of artificial intelligence, a genetic algorithm (GA) is a search heuristic that mimics the process of natural selection. This heuristic (also sometimes called a metaheuristic) is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of evolutionary algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover.
56
7 Design using Recursion
Computer efficiency
Human efficiency
57
a*b = a              if b = 1
a*b = a*(b-1) + a    if b > 1

Draw the recursive implementation for the above definition.
58
a*b = a              if b = 1
a*b = a*(b-1) + a    if b > 1

/* recursive */
int rmulti(int a, int b) {
    if (b == 1)
        return a;
    else
        return a + rmulti(a, b - 1);
}
59
a*b = a              if b = 1
a*b = a*(b-1) + a    if b > 1

/* iterative */
int imulti(int a, int b) {
    int result = a;
    while (b > 1) {
        result = a + result;
        b--;
    }
    return result;
}
60
Conversion of recursion to iterative
61
Conversion of recursion to iterative

a*b = a              if b = 1
a*b = a*(b-1) + a    if b > 1

/* recursive */
int rmulti(int a, int b) {
    if (b == 1)
        return a;
    else
        return a + rmulti(a, b - 1);
}

/* iterative */
int imulti(int a, int b) {
    int result = a;
    while (b > 1) {
        result = a + result;
        b--;
    }
    return result;
}
62
Conversion of iterative to recursive function
Convert the following iterative implementation of x^n to a recursive one.
63
Conversion of iterative to recursive function

Power1(x, n)    [brute-force method]
local integer i
result ← x
for i ← 1 to n-1 do
    result ← result * x
end
return result
64
Conversion of iterative to recursive function

Recursive method: Power2(x, n)
local real y
if n = 1 then return x end
y ← Power2(x, ⌊n/2⌋)
if odd(n) then return y * y * x
else return y * y
end

Iterative method: Power1(x, n)    [brute-force method]
local integer i
result ← x
for i ← 1 to n-1 do
    result ← result * x
end
return result
65
Conversion of recursion to iterative function
Convert the following recursive Fibonacci series function to an iterative one.

Fib(n) = n                        if n = 0 or n = 1
Fib(n) = Fib(n-2) + Fib(n-1)      if n > 1
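One possible iterative version, as a sketch in C (the function name ifib is ours):

/* Iterative Fibonacci: build the sequence bottom-up. */
int ifib(int n) {
    if (n <= 1)
        return n;                /* Fib(0) = 0, Fib(1) = 1 */
    int prev = 0, cur = 1;
    for (int i = 2; i <= n; i++) {
        int next = prev + cur;   /* Fib(i) = Fib(i-2) + Fib(i-1) */
        prev = cur;
        cur = next;
    }
    return cur;
}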