How difficult is it to solve a problem?
Computability and Feasibility
Can a specific problem be solved by computation? If so, can a solution be designed that is efficient enough to be feasible? Computability is concerned with whether a problem can be solved at all; feasibility is concerned with how efficiently it can be solved. We shall look at feasibility first, then consider computability.
Computability and Feasibility
There are two standard measures of the efficiency of an algorithm:
o Time: how long the algorithm takes to execute.
o Space: how much memory the algorithm uses for the storage of data during execution.
When more than one algorithm exists which can solve a problem there is often a trade-off between time and space efficiency, i.e. one solution may be quicker but require more memory than another.
Computability and Feasibility For a particular algorithm, the amount of space and time required usually varies depending upon the number of inputs to the algorithm – e.g. adding up a thousand numbers takes a lot longer than adding up ten numbers. Therefore, efficiency is normally expressed as a function of the number of inputs to an algorithm (commonly referred to as ‘n’).
Computability and Feasibility
Space complexity can be measured easily; we will look briefly at it later but shall focus on time complexity for now. Measuring time complexity is more difficult, because the same algorithm run on two different computers could take very different amounts of time to execute as a result of factors such as:
o Processor clock speed.
o Efficiency of the hardware implementation of the operations used, e.g. add.
o Other processes executing concurrently.
Computability and Feasibility We need a measure of time complexity that focuses on the algorithm rather than the hardware that it is being executed on. We use Big O notation to do this.
Computability and Feasibility
The time efficiency of an algorithm is established in the following way:
Identify the basic operation that is performed by the algorithm.
Establish how many times this operation is carried out as a function of the number of inputs (n).
Consider how quickly this value grows and identify the dominant part (e.g. in 4n² + 3 the dominant part is n²). This is the order of complexity of the algorithm. It is written as O(g) where g is the growth rate.
Computability and Feasibility
Algorithm for summing a list of numbers in a file:

Total ← 0
WHILE NOT at end of file
  READ Number from file
  Total ← Total + Number
ENDWHILE
PRINT Total

What is the basic operation? Total ← Total + Number.
How many times is it carried out? It is in a loop, executed once for each number in the file. So, if a file contained n items the basic operation would be executed n times. The algorithm is said to have order of complexity O(n).
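The file-summing algorithm above can be sketched in Python. To keep the example self-contained it takes the file's lines as a list rather than reading a real file:

```python
def sum_numbers(lines):
    """Sum a list of numbers supplied one per line of text.

    The basic operation (the addition) runs once per line,
    so the time complexity is O(n) for n numbers.
    """
    total = 0
    for line in lines:               # executed n times
        total = total + float(line)  # basic operation
    return total
```

For example, sum_numbers(["3", "4", "5"]) returns 12.0.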
Computability and Feasibility
Algorithm to sum a list of numbers entered at the keyboard, the user terminating the list by entering 0:

Total ← 0
REPEAT
  INPUT Number
  Total ← Total + Number
UNTIL Number = 0
PRINT Total

What is the basic operation? Total ← Total + Number.
How many times is it carried out? It is in a loop, executed once for each number entered plus once more for the 0 terminator. So, if n items were entered the operation would be executed n+1 times. The growth rate is n, as 1 is a constant: n is the dominant term in n+1, so the algorithm is said to have order of complexity O(n).
Computability and Feasibility
A very naive sorting algorithm (based on bubble sort) to sort n items in an array:

FOR i ← 1 TO n-1 DO
  FOR j ← 1 TO n-1 DO
    IF arr[j] > arr[j + 1] THEN
      SWAP(arr[j], arr[j + 1])
    ENDIF
  ENDFOR
ENDFOR

The basic operation is the IF command that compares two numbers to check whether they are out of order. It is inside two loops: the outer one executes n-1 times and the inner one executes n-1 times, so the IF command is executed (n-1)×(n-1) = n² - 2n + 1 times. n² is the dominant term in this expression, so the algorithm is of order of complexity O(n²).
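A Python sketch of the naive sort, instrumented to count executions of the basic operation (the comparison). For n = 5 items it makes (5-1)² = 16 comparisons regardless of the input values:

```python
def naive_bubble_sort(arr):
    """Sort arr in place; return the number of comparisons made.

    Both loops run n-1 times, so the comparison executes
    (n-1)*(n-1) = n^2 - 2n + 1 times: O(n^2).
    """
    n = len(arr)
    comparisons = 0
    for i in range(n - 1):           # outer loop: n-1 passes
        for j in range(n - 1):       # inner loop: n-1 comparisons
            comparisons += 1
            if arr[j] > arr[j + 1]:  # basic operation
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return comparisons
```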
Computability and Feasibility
In addition to the number of inputs, the efficiency of some algorithms depends upon the values of the inputs made to them. For example:
o The naive implementation of the bubble sort algorithm on the previous slide takes the same amount of time to execute for a list of length n regardless of what the input values are.
o A more sophisticated implementation will terminate as soon as the list is sorted, so will be more efficient if the input list is already partially sorted than if it is not.
Computability and Feasibility
In this improved version of the bubble sort algorithm on the earlier slide, the outer FOR loop that repeated n-1 times is replaced by a REPEAT loop that terminates when the inner loop passes through the entire list without having to make any swaps, i.e. the list is sorted.

i ← 1
REPEAT
  swapmade ← FALSE
  FOR j ← 1 TO n-1 DO
    IF arr[j] > arr[j + 1] THEN
      SWAP(arr[j], arr[j + 1])
      swapmade ← TRUE
    ENDIF
  ENDFOR
  i ← i + 1
UNTIL swapmade = FALSE
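The improved version in Python, again counting comparisons. With an already-sorted list it stops after a single pass of n-1 comparisons:

```python
def bubble_sort_until_no_swaps(arr):
    """Sort arr in place; return the number of comparisons made.

    Mirrors the REPEAT ... UNTIL swapmade = FALSE structure:
    the loop exits after the first pass that makes no swap.
    """
    n = len(arr)
    comparisons = 0
    swapmade = True
    while swapmade:
        swapmade = False
        for j in range(n - 1):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapmade = True
    return comparisons
```

Called on the sorted list [10, 20, 30, 40, 50] it makes only 4 comparisons.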
Computability and Feasibility
Worst case trace (list 30, 50, 20, 10, 40):

     Initial  i=1  i=2  i=3  i=4
[1]  30       30   20   10   10
[2]  50       20   10   20   20
[3]  20       10   30   30   30
[4]  10       40   40   40   40
[5]  40       50   50   50   50

Best case trace (list already ordered):

     Initial  i=1
[1]  10       10
[2]  20       20
[3]  30       30
[4]  40       40
[5]  50       50

In the best case the initial list is already ordered. The algorithm terminates after one iteration of the outer loop because no swaps are made. n-1 (4) comparisons have been made, so the best case order of complexity is O(n). In the worst case the initial list is far from being ordered. The outer loop iterates the maximum n-1 (4) times and the inner loop iterates n-1 (4) times, so the basic operation executes (n-1)×(n-1) = n² - 2n + 1 times. The worst case order of complexity is therefore O(n²), the same as the more naive algorithm.
Computability and Feasibility
For the improved sort algorithm, as for many algorithms, there is a significant difference between best case performance, O(n), and worst case performance, O(n²).
Computability and Feasibility
Three standard measures of efficiency are therefore made for an algorithm:
o Worst case complexity is the amount of resources that an algorithm uses to solve a problem with the least favourable set of inputs. It is the upper bound on the amount of resources consumed by the algorithm.
o Best case complexity is the amount of resources that an algorithm uses to solve a problem with the most favourable set of inputs. It is the lower bound on the amount of resources consumed by the algorithm.
o Average case complexity is the amount of resources that an algorithm uses to solve a problem, averaged over all possible sets of inputs.
Computability and Feasibility
For many problems, more than one algorithm can provide a solution. For example, an ordered list could be searched using either a linear search or a binary search algorithm. Computer scientists therefore often consider the complexity of a problem (such as searching) rather than the complexity of a particular algorithm (such as binary search). The complexity of a problem is the worst case complexity of the most efficient algorithm that solves the problem.
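To illustrate, here are sketches of both searches in Python. Linear search examines up to n items; binary search halves the remaining range at each step, so searching an ordered list is at worst an O(log₂ n) problem:

```python
def linear_search(items, target):
    """O(n): check each item in turn; return its index or -1."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):
    """O(log2 n): halve the sorted search range each step."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            low = mid + 1    # target is in the upper half
        else:
            high = mid - 1   # target is in the lower half
    return -1
```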
Computability and Feasibility
Problems are grouped into classes based upon their complexity, from logarithmic and linear classes (easy to solve) through polynomial to exponential classes (almost impossible to solve). [Table omitted: execution time of an algorithm against number of inputs for each of these classes.]
Computability and Feasibility
The time taken to solve exponential time complexity problems grows rapidly as the number of inputs grows, e.g. for just 50 inputs an algorithm of time complexity O(2ⁿ) would execute the basic operation 2⁵⁰ times, which is more than 1,125,000,000,000,000 times. Problems that have a polynomial (or better) time complexity are known as tractable. Problems that can be solved but which have a worse than polynomial time complexity are known as intractable. Intractable problems are effectively unsolvable in the general case (except for very small numbers of inputs), as a solution would take such an unreasonably long time to produce that it is unlikely to be useful. Intractable problems may sometimes be tackled using heuristics to produce a close to optimal solution.
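A quick check of these growth rates in Python: the basic-operation count of a polynomial O(n²) algorithm stays modest while the O(2ⁿ) count explodes:

```python
def operation_counts(n):
    """Return (n^2, 2^n): basic-operation counts for a polynomial
    and an exponential algorithm on n inputs."""
    return n ** 2, 2 ** n

for n in (10, 30, 50):
    polynomial, exponential = operation_counts(n)
    print(n, polynomial, exponential)
```

At n = 50 the polynomial algorithm needs 2,500 operations; the exponential one needs 2⁵⁰ = 1,125,899,906,842,624.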
Computability and Feasibility
Travelling salesman: given a set of n cities and the distances between them, find the shortest tour that allows the salesman to visit every city before returning home.
Bin packing: there are n objects with sizes in the range 0 to 1 which must be fitted into bins, each of which has capacity 1. What is the minimum number of bins required?
Knapsack problem: a knapsack has a fixed capacity. Given a set of objects, each of which has a size and a value, what is the most valuable set of objects that can be put into the knapsack?
Other types of problem which are often intractable include scheduling and optimisation problems.
Computability and Feasibility
Why are these types of problem intractable? Because the only way to produce an optimal solution is to try out every possible solution (a brute force approach), and, as the number of inputs to the problem increases, the number of possible solutions increases very rapidly.
Computability and Feasibility
e.g. for the travelling salesman problem:
2 cities (A and B, distance 20): 1 route (A B A), length 40.
3 cities: 2 routes, (A B C A) and (A C B A), both length 46, as one is the reverse of the other.
4 cities: 6 routes:
A B C D A length 54    A D C B A length 54
A B D C A length 73    A C D B A length 73
A C B D A length 55    A D B C A length 55
The optimum route is 54 long.
For n cities there are (n-1)×(n-2)×...×1 = (n-1)! routes. For just 20 cities, that is 121,645,100,408,832,000 routes.
Conclusion: intractable! [Diagrams of the city networks with edge distances omitted.]
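A brute-force solver in Python. The distance matrix below is a hypothetical reconstruction (AB=20, AC=15, AD=10, BC=8, BD=22, CD=16) chosen so that the six route lengths match the slide; the slide's own diagram did not survive extraction:

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force travelling salesman: try all (n-1)! tours
    starting and ending at city 0, return the shortest length."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):
        route = (0,) + perm + (0,)
        length = sum(dist[route[k]][route[k + 1]] for k in range(n))
        if best is None or length < best:
            best = length
    return best

# Hypothetical distances for cities A, B, C, D, giving the six
# route lengths on the slide (54, 73, 55 and their reverses).
dist = [[0, 20, 15, 10],
        [20, 0, 8, 22],
        [15, 8, 0, 16],
        [10, 22, 16, 0]]
```

For these distances shortest_tour(dist) returns 54, the optimum on the slide.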
Computability and Feasibility We have focussed on time efficiency as speed is the aspect of complexity that is usually of most concern. However space efficiency can also be an important factor when choosing between different algorithms that have the same purpose. Sometimes there can be a trade-off between space and time efficiency. A good example of this is when sorting data.
Computability and Feasibility
Simple sorting algorithms such as bubble sort and insertion sort have very poor worst case time complexity of O(n²). However, the data is sorted in situ within the existing list, so to sort n items only n memory locations are required: no extra memory locations are needed during the sort process. Merge sort is a much more time-efficient sort algorithm, with worst case time complexity O(n log₂ n). However, merge sort is less space efficient. This is because it is a recursive algorithm, and copies of subsections of the list are usually generated each time the algorithm recurses. The next two slides show the merge sort algorithm and why this extra storage space is required.
Computability and Feasibility
The merge sort procedure:
If list length <= 1 then exit, as the list is sorted.
If list length > 1 then:
o Split the unsorted list into two sublists of about half the size.
o Recursively call the merge sort procedure to sort the first of the two sublists.
o Recursively call the merge sort procedure to sort the second of the two sublists.
o Merge the two sorted sublists back into one sorted list.
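The procedure above translates directly into Python. Note the slices items[:mid] and items[mid:], which create the sublist copies responsible for merge sort's extra space usage:

```python
def merge_sort(items):
    """Return a new sorted list using merge sort: O(n log2 n) time."""
    if len(items) <= 1:               # a list of 0 or 1 items is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # copy of the first half
    right = merge_sort(items[mid:])   # copy of the second half
    # Merge the two sorted sublists back into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # append any leftover items
    merged.extend(right[j:])
    return merged
```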
Computability and Feasibility
An example execution of merge sort on the seven-item list 29, 49, 11, 18, 80, 37, 22: the list is recursively split into ever smaller sublists (recursing), then the sublists are sorted and merged back together (unwinding the recursion) to produce the sorted list 11, 18, 22, 29, 37, 49, 80. The copies of the sublists made at each level require additional storage space for temporary storage of lists. [Diagram of the recursion tree omitted.]
Computability and Feasibility We have seen that some problems admit no effective solution because they take too long to compute (intractable problems). But are there any problems that simply cannot be solved, however much time and space we could give them? Yes – several problems have been proven to be non-computable. The most famous non-computable problem is the halting problem.
Computability and Feasibility
The halting problem: is it possible to write a program that can tell, given a program and its inputs, whether that program will halt (without executing the program being tested)? The answer is no: it has been proven that in the general case this problem cannot be solved. We shall look briefly at the proof, although A Level candidates are only required to know what the problem is and that it is not computable. The proof uses a standard technique called proof by contradiction, in which we:
o Make a proposition.
o Show that if the proposition were true, a contradiction must occur.
o Conclude that the proposition must therefore be untrue.
Computability and Feasibility
Assume that the halting function does exist. It is defined as a function h with two inputs, p and i:
o p is the program that the halting function is testing.
o i is the input that will be made to the program p.
The function h should output 0 if p halts on the input i, and 1 if p does not halt:

h(p, i) = 0, if p halts on input i
          1, if p does not halt
Computability and Feasibility
Now construct a program r(n) which calls the function h using n as both parameters i and p:

r(n) = halts, if h(n, n) = 1
       loops forever, otherwise
Computability and Feasibility
Now use the halting function h to test whether the program r terminates with input n:

h(r, n) = 0, if r halts on input n
          1, if r does not halt

Recall the definition of r:

r(n) = halts, if h(n, n) = 1
       loops forever, otherwise

Substituting the definition of r into h gives:

h(r, n) = 0, if h(n, n) = 1
          1, if h(n, n) = 0

Now, if r = n we would have:

h(n, n) = 0, if h(n, n) = 1
          1, if h(n, n) = 0

A contradiction!
Computability and Feasibility
We have arrived at a contradiction, i.e. that h(n, n) = 0 if h(n, n) = 1, and vice versa. Since the steps we have taken in arriving at this contradiction are sound, it follows that the original proposition, i.e. that the halting function h exists, must be untrue. Therefore the halting problem is not computable. Other problems which have been proven to be non-computable are the Entscheidungsproblem, Hilbert's tenth problem and the Post correspondence problem.
Computability and Feasibility End of Presentation
Computability and Feasibility
Q1) Dominant terms and order of complexity:

Expression      Dominant Term   Order of Complexity
n + 5           n               O(n)
6n + n² + 12    n²              O(n²)
2ⁿ + n³ + 12    2ⁿ              O(2ⁿ)
Computability and Feasibility
Q2) Orders of complexity in order, most efficient first:
O(1), O(log₂ n), O(n), O(n²), O(n⁴), O(3ⁿ)
Computability and Feasibility
Q3)

INPUT SearchValue
Found ← False
FOR Index ← 1 TO ListLength DO
  IF ArrayValue[Index] = SearchValue THEN
    Found ← True
  ENDIF
ENDFOR
OUTPUT Found

Basic operation: comparison of an item in the list with the search value.
Order of time complexity: O(n).
Explanation: the loop is repeated once for each item in the array, so n times for an array containing n items.
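The Q3 algorithm in Python, counting comparisons. Because the FOR loop always runs to completion, the count is always n:

```python
def search_whole_list(items, search_value):
    """Return (found, comparisons): always makes len(items) comparisons."""
    found = False
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == search_value:   # basic operation
            found = True
    return found, comparisons
```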
Computability and Feasibility
Q4)

INPUT SearchValue
Found ← False
Index ← 0
REPEAT
  Index ← Index + 1
  IF ArrayValue[Index] = SearchValue THEN
    Found ← True
  ENDIF
UNTIL Index = ListLength OR Found = True
OUTPUT Found

Why different complexities: the loop terminates as soon as the item is found, if it is in the list. So the number of times the basic operation is executed depends upon where the item is in the list.
Computability and Feasibility
Q4) (continued)

INPUT SearchValue
Found ← False
Index ← 0
REPEAT
  Index ← Index + 1
  IF ArrayValue[Index] = SearchValue THEN
    Found ← True
  ENDIF
UNTIL Index = ListLength OR Found = True
OUTPUT Found

Best case: the item is at the start of the list, in which case only one item needs to be examined, so O(1).
Worst case: the item is at the end of the list (or absent), so all n items must be checked, so O(n).
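The Q4 algorithm in Python. The early exit gives a best case of one comparison (item first) and a worst case of n comparisons (item last or absent):

```python
def search_with_early_exit(items, search_value):
    """Return (found, comparisons): stops as soon as the item is found."""
    comparisons = 0
    for value in items:
        comparisons += 1
        if value == search_value:   # basic operation
            return True, comparisons
    return False, comparisons
```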
Computability and Feasibility
Q5) Which algorithms are tractable? (though not necessarily quick to solve!)

Order of Complexity   Tractable?
O(3ⁿ)                 N
O(n¹⁰)                Y
O(log₂ n)             Y
O(2ⁿ)                 N
Computability and Feasibility
Q6) The significance of the halting problem is:
o That there are some problems that we can prove cannot be solved by a computer.
o That we know it is not possible, in the general case, to write a program that will determine if another program will halt on a given set of inputs.