1
It is all in the algorithm
1.2.2 Complexity, 2nd Edition
Ian F. C. Smith, EPFL, Switzerland
2
Module Information
Intended audience: Intermediate
Key words: Complexity, "Big Oh" notation, Optimality
Author: Ian Smith, EPFL, Switzerland
Reviewer (1st Edition): William Rasdorf, NCSU
3
What there is to learn
At the end of this module, there will be answers to the following questions:
Are there certain tasks that computers cannot do and, if so, can faster computers help in these cases?
What are the cases when computational requirements are nearly independent of task size?
Can small changes have a big effect?
Can one classify tasks in order to know whether or not a program will perform well for full-scale tasks?
Why is engineering experience so valuable?
4
Outline
"Big Oh" Notation (continued from 1.2.1)
Optimality
Current Status
Approaches for Exponential Tasks
5
Calculations with "Big Oh" Notation
Calculations with "Big Oh" notation involve three elements:
The pair (n, f(n))
A constant c
The form of g(n)
Knowledge of any two allows determination of the third from the upper-bound expression f(n) ≤ c · g(n). This leads to upper-bound predictions of f(n) for any n, as in the sketch below.
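A minimal numerical sketch of this calculation (not part of the original module): one measured pair (n, f(n)) plus an assumed form g(n) = n^2 gives the constant c, which then yields an upper-bound prediction for a larger task. The function name and the measured values are illustrative only.

```python
def predict_upper_bound(n_measured, f_measured, g, n_target):
    """Estimate c from one measurement using f(n) <= c * g(n),
    then predict an upper bound on execution time for n_target."""
    c = f_measured / g(n_measured)   # smallest c consistent with the measurement
    return c * g(n_target)           # predicted bound: f(n_target) <= c * g(n_target)

# Illustrative numbers: a task measured at 2.0 s for n = 1000, assumed to be O(n^2)
g = lambda n: n ** 2
print(predict_upper_bound(1000, 2.0, g, 10_000))   # 200.0 -> at most about 200 s for n = 10,000
```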
6
Tightness
A problem that is O(n^2) can be solved by an algorithm that is O(n^3); such an algorithm is not optimal. An algorithm is optimal when the complexity of the algorithm equals that of the problem. It is always preferable to find an optimal algorithm. The closer the complexity of the algorithm is to the complexity of the problem, the tighter the bound.
7
Usefulness of Optimal Algorithms
Complexity classification shows that some optimal algorithms have execution times that are nearly independent of task size!
Example: binary search. Given an ordered list of n elements (EL1 < EL2 < EL3 < … < ELn), place a new element, NE, in the correct position. We compare a naïve algorithm with binary search.
8
Why is computing so successful? Compare two algorithms
Naïve strategy (NE is the new element; EL1 is the first element of the ordered list):
Compare NE with EL1. If NE < EL1, insert NE before EL1 and update the list; otherwise compare NE with EL2, then EL3, and so on, until NE is smaller than the element being compared.
In the worst case this requires about n comparisons.
9
Binary search:
Compare NE with the middle element EL(n/2). If NE < EL(n/2), continue with the middle of the lower half, EL(n/4); otherwise continue with the middle of the upper half, EL(3n/4). Keep halving the remaining interval in this way, and update the list when the previous two comparisons are only one place apart.
10
Binary search (cont’d.)
When there are 256 elements in a list, only eight comparisons (log2(256) = 8) are required to place NE in the correct position. This is much faster than the naïve strategy when n is large! A comparison of the two strategies is sketched below.
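A small illustrative sketch (not from the slides) contrasting the two strategies on a list of 256 ordered elements. The function names are hypothetical; each returns the insertion position of NE together with the number of comparisons made.

```python
def naive_position(lst, ne):
    """Scan from the left until NE is smaller than an element; about n comparisons."""
    comparisons = 0
    for i, el in enumerate(lst):
        comparisons += 1
        if ne < el:
            return i, comparisons
    return len(lst), comparisons

def binary_position(lst, ne):
    """Repeatedly halve the search interval; about log2(n) comparisons."""
    lo, hi, comparisons = 0, len(lst), 0
    while lo < hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if ne < lst[mid]:
            hi = mid
        else:
            lo = mid + 1
    return lo, comparisons

ordered = list(range(0, 512, 2))        # 256 ordered elements: 0, 2, ..., 510
print(naive_position(ordered, 301))     # (151, 152) -> roughly n comparisons
print(binary_position(ordered, 301))    # (151, 8)   -> log2(256) = 8 comparisons
```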
11
Logarithms
log2(n) is the number of times n must be divided by 2 to reach 1. Equivalently, if n = 2^k then log2(n) = k.
There are at most k iterations in "divide and conquer" and binary search algorithms, so in this case f(n) is O(log n).
12
Logarithms (cont'd.)
Note that log(n) time complexity is independent of the base of the logarithm:
log_a(n) = log_a(b) · log_b(n)
Since log_a(b) is a constant, and constant factors are dropped in "Big Oh" notation, O(log n) covers expressions for f(n) in any base.
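A quick numerical check (illustrative only, not from the slides) that the ratio between logarithms in two different bases is a constant, so the choice of base cannot affect the "Big Oh" classification:

```python
import math

for n in (10, 1_000, 1_000_000):
    ratio = math.log2(n) / math.log10(n)   # log2(n) divided by log10(n)
    print(n, round(ratio, 6))              # always ~3.321928, i.e. log2(10)
```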
13
Implications
[Figure: execution time plotted against the number of list elements n, with little difference between 10^6 and 10^8 elements.]
When the complexity is log(n), there is not much difference in execution time between 10^6 (1 million) list elements and 10^8 (100 million) list elements! A quick check of the arithmetic follows.
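An illustrative check of this claim: the number of binary-search comparisons grows only from about 20 to about 27 when the list grows from one million to one hundred million elements.

```python
import math

print(math.log2(1e6))   # ~19.9 comparisons for 10^6 elements
print(math.log2(1e8))   # ~26.6 comparisons for 10^8 elements
# 100 times more data requires only about one third more comparisons.
```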
14
Practical Examples of Complexity
Matrix inversion is "polynomial 3", i.e. O(n^3)
Sorting n things using binary search is O(n log n)
Information retrieval (finding one thing among n ordered things) is O(log n)
Listing all possible sequences of n events is O(n!)
15
Growth Rate of Common Functions
[Figure: growth rates of common functions, execution time (ms) plotted against n.]
Empirical tests using small values of n can be misleading. For large values of n, exponential time is always worse than polynomial time, as the comparison below illustrates.
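An illustrative comparison (not from the slides): for small n, an O(n^3) algorithm can look slower than an O(2^n) one, but the exponential curve always overtakes the polynomial one.

```python
for n in (2, 5, 10, 20, 30, 50):
    poly = n ** 3        # polynomial growth, O(n^3)
    expo = 2 ** n        # exponential growth, O(2^n)
    print(f"n={n:3d}   n^3 = {poly:>10d}   2^n = {expo:>18d}")
# For n = 2 and n = 5 the polynomial value is larger; from n = 10 onward
# the exponential term dominates and the gap grows without bound.
```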
16
Review Quiz I
Define "Big Oh" notation, in words and in mathematical terms.
What does this notation describe in practical terms?
Name six categories within "Big Oh" notation.
17
Answers to Review Quiz I
Define "Big Oh" notation.
In words: a classification of the relationship between input size and the trend in execution time.
In mathematical terms: if
n = task size (an integer, n > 0)
f(n) = execution time (> 0)
g(n) = relative execution time
then there exists a positive constant c such that f(n) ≤ c · g(n).
18
Answers to Review Quiz - I
What does this notation describe in practical terms?
"Big Oh" notation provides a measure of how sensitive execution time is to the size of the task. It can indicate whether or not a computer program will be able to accommodate full-scale engineering tasks, and it also identifies situations in which programs remain practical even for very large tasks.
19
Answers to Review Quiz - I
Name six categories within "Big Oh" notation.
Logarithmic: O(log n), f(n) ≤ c · log(n)
Linear: O(n), f(n) ≤ c · n
Polynomial: O(n^q), f(n) ≤ c · n^q
Exponential: O(a^n), f(n) ≤ c · a^n
Factorial: O(n!), f(n) ≤ c · n!
Double exponential: O(n^n), f(n) ≤ c · n^n
20
Example 2 (Example 1 is given in module 1.2.1)
Tasks can move from linear complexity to exponential complexity upon small changes in requirements. The next example illustrates this.
Goal: construct a brick wall with three workers.
Tasks:
1. Bring bricks to the site
2. Prepare the foundation
3. Prepare cement
4. Bring bricks to the foundation
5. Build the wall
6. Finishing
21
Example 2, Part A: Requirement Set 1
A worker can only perform one task at a time.
A task is performed by only one worker.
All workers can perform all tasks.
Tasks should start and finish at the times shown in Figure A1.
22
Example 2: Part A (cont’d.)
[Figure A1: required start and finish times for each task (task number against time in hours).]
23
Example 2: Possible Solution
A possible solution to Example 2, Part A is:
Worker 1: tasks 1, 4
Worker 2: tasks 2, 5
Worker 3: tasks 3, 6
The algorithm used to find this solution is linear in the number of tasks.
24
Example 2: Algorithm A
M = number of workers, N = number of tasks
[Figure A2: flowchart of the algorithm for Requirement Set 1; the diagram is not reproduced in this transcript.]
A sketch of one possible linear strategy follows.
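Since the flowchart itself is not available, the following is a hedged reconstruction: a simple round-robin rule over the M workers that reproduces the solution on the previous slide. The rule and the function name are assumptions, not the original figure.

```python
def assign_round_robin(n_tasks, m_workers):
    """Hand task t to worker ((t - 1) mod M) + 1: one pass over the tasks, so O(N)."""
    schedule = {w: [] for w in range(1, m_workers + 1)}
    for task in range(1, n_tasks + 1):
        worker = (task - 1) % m_workers + 1
        schedule[worker].append(task)
    return schedule

print(assign_round_robin(6, 3))   # {1: [1, 4], 2: [2, 5], 3: [3, 6]}
```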
25
Example 2, Part B: Requirement Set 2
This is Requirement Set 1 with the following change: workers can only perform the tasks specified below.
Worker 1 (Foreman): tasks 1, 2, 4, 5, 6
Worker 2 (Cement specialist, mason): tasks 1, 3, 6
Worker 3 (Non-specialist): tasks 1, 4, 6
26
Example 2: Algorithm B
M = number of workers, N = number of tasks
[Figure B1: flowchart of the algorithm for Requirement Set 2; the diagram is not reproduced in this transcript.]
27
Example 2: Backtracking increases complexity
With Requirement Set 2, a worker is assigned a task according to availability (FREE(W)) and capability (CAP(W)), and the partial assignment is then checked to see whether other workers are available to perform the remaining tasks.
Trial 1 (T1 is task 1; W1 is worker 1):
T1 → W1 √
T2 → W1 (busy), T2 → W2 (not capable), T2 → W3 (not capable)
Backtrack to the beginning, since no worker can do Task 2.
A backtracking sketch that reproduces these trials is given below.
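A minimal backtracking sketch that reproduces the three trials shown on this and the following slides. The capability table comes from the previous slide; the assumption that tasks 1-3 overlap in time and tasks 4-6 overlap (so FREE(W) means "no task from the same group yet") is inferred from Figure A1 and the solutions, not stated explicitly in the slides. Names such as CAP, GROUP and assign are illustrative.

```python
CAP = {1: {1, 2, 4, 5, 6},   # worker 1: foreman
       2: {1, 3, 6},         # worker 2: cement specialist (mason)
       3: {1, 4, 6}}         # worker 3: non-specialist

# Assumed time structure: tasks 1-3 overlap and tasks 4-6 overlap, so a worker
# can take at most one task from each group (inferred, not given in the slides).
GROUP = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}

def assign(task, schedule):
    """Try workers 1..3 for each task in turn; undo and retry on failure (backtracking)."""
    if task > 6:
        return schedule                                   # all tasks placed
    for w in CAP:
        free = all(GROUP[t] != GROUP[task] for t in schedule.get(w, []))
        if task in CAP[w] and free:                       # CAP(W) and FREE(W)
            schedule.setdefault(w, []).append(task)
            result = assign(task + 1, schedule)
            if result is not None:
                return result
            schedule[w].pop()                             # backtrack
    return None                                           # no worker fits: forces backtracking
# In the worst case the search explores on the order of M^N partial assignments.

print(assign(1, {}))   # {1: [2, 5], 2: [3, 6], 3: [1, 4]} -> the solution found in Trial 3
```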
28
Example 2: Backtracking (cont’d.)
Trial 2:
T1 → W2 √
T2 → W1 √
T3 → W1 (busy), T3 → W2 (busy), T3 → W3 (not capable); backtrack
T2 → W2 (not capable), T2 → W3 (not capable)
Backtrack to the beginning, since tasks 2 and 3 cannot be assigned.
29
Example 2: Backtracking (cont’d.)
Trial 3:
T1 → W3 √
T2 → W1 √
T3 → W1 (not capable), T3 → W2 √
T4 → W1 √
T5 → W1 (busy), T5 → W2 (not capable), T5 → W3 (not capable); backtrack
T4 → W2 (not capable), T4 → W3 √
T5 → W1 √
T6 → W1 (busy), T6 → W2 √
30
Example 2: New Solution The new solution is:
Worker 1: tasks 2, 5
Worker 2: tasks 3, 6
Worker 3: tasks 1, 4
The modified algorithm is exponential in the number of tasks. If it takes 0.1 seconds to find a solution for six tasks, it will take 2176 centuries to solve for 35 tasks!
31
Example 2: Execution Times
Assume that 6 tasks take 0.1 s.
Number of tasks | Time | Time on a computer 10^6 times faster
6 | 0.1 s | 10^-7 s
10 | 8.1 s | 10^-6 s
15 | 32.8 min | 0.002 s
20 | 5.5 days | 0.5 s
25 | 3.7 years | 1.9 min
30 | 9 centuries | 7.9 hours
35 | 2176 centuries | 79.4 hours
50 | 31,200 centuries |
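The figures in this table are consistent with the execution time roughly tripling for each additional task; the factor of three is inferred from the table itself, not stated in the slides. A quick check under that assumption:

```python
SECONDS_PER_YEAR = 3600 * 24 * 365

for n in (6, 10, 15, 20, 25, 30, 35):
    t = 0.1 * 3 ** (n - 6)        # assumed model: time triples with every extra task
    fast = t / 1e6                # the same task on a computer 10^6 times faster
    print(f"{n:2d} tasks: {t:12.3g} s  (~{t / SECONDS_PER_YEAR:10.3g} years)   faster machine: {fast:.3g} s")
# 35 tasks -> ~6.9e12 s, i.e. about 2.2e5 years or 2176 centuries, matching the table.
```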
32
Outline
"Big Oh" Notation (continued from 1.2.1)
Optimality
Current Status
Approaches for Exponential Tasks
33
Introduction to Optimality
An algorithm is optimal when O(task) = O(algorithm).
As was observed in the first example (Part A), it is possible to choose an algorithm that is not optimal.
Once an algorithm is found to be optimal, substantial improvements are not possible.
34
Optimality (cont'd.)
Programmers often wish to "optimize" code. This effort is justified only if it changes the order (O) of the implementation.
Proving optimality is an active area of research. Especially interesting are the cases where the best known algorithms are exponential.
35
Optimality (cont'd.)
A small change in the definition of a task can change the order from linear to exponential (and vice versa). This presents both opportunities and risks.
36
Tasks involving exponential algorithms
Graph theory
Design
Modeling
Sequencing and scheduling
Mathematical programming
Algebra and number theory
All of these areas have potential applications in civil engineering.
37
Tasks involving exponential algorithms (cont’d.)
Games and puzzles
Non-monotonic logic
Natural language processing
Image recognition
Optimization
All of these areas have potential applications in civil engineering … even games and puzzles!
38
Civil Engineering Examples of Exponential Algorithms
Design
Data interpretation
Configuration
Construction scheduling
Infrastructure management
39
Outline
"Big Oh" Notation (continued from 1.2.1)
Optimality
Current Status
Approaches for Exponential Tasks
40
Current status
For many tasks, the best known algorithm has a computational complexity that is exponential in the task size. It has not been proved that these algorithms are optimal. Many of these tasks are "equivalent" in the sense that each can be transformed into any of the others in polynomial time.
41
Current status (cont’d.)
In the module on simplification, it was shown that a polynomial term in an expression for f(n) that also contains an exponential term has no influence on the complexity. Therefore, since the transformations themselves are polynomial, if one of these equivalent tasks could be solved by a polynomial algorithm without loss of accuracy and reliability, many other tasks could also be solved in polynomial time!
42
Review Quiz - II What is the optimality condition?
What is the aim of optimizing code?
43
Answers to Review Quiz - II
What is the optimality condition?
O(task) = O(algorithm). An optimal algorithm cannot be substantially improved further.
What is the aim of optimizing code?
To reduce execution time by lowering the complexity.
44
Outline
"Big Oh" Notation (continued from 1.2.1)
Optimality
Current Status
Approaches for Exponential Tasks
45
Approaches To solve exponential tasks in reasonable time:
Use heuristics. These are special rules and strategies that are often based on experience in the application area. They are, however, not reliable in every situation.
Lower the solution requirements. For example, it is more practical to use a polynomial algorithm to generate a set of good solutions that can be sorted than to use an exponential algorithm that does not finish in a reasonable time.
A sketch of a simple heuristic for the brick-wall example is given below.
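As a sketch of the first point (not from the original slides): a greedy rule such as "give each task to the first capable, free worker" runs in polynomial time, but on the Part B brick-wall example it gets stuck, which is exactly why heuristics are not reliable in every situation. The capability table and the overlapping-task assumption are the same illustrative ones used in the backtracking sketch above.

```python
CAP = {1: {1, 2, 4, 5, 6}, 2: {1, 3, 6}, 3: {1, 4, 6}}   # capability table from Part B
GROUP = {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1}             # assumed overlapping task groups

schedule = {w: [] for w in CAP}
for task in range(1, 7):
    # Greedy choice with no backtracking: overall O(N * M) work.
    worker = next((w for w in CAP
                   if task in CAP[w]
                   and all(GROUP[t] != GROUP[task] for t in schedule[w])),
                  None)
    if worker is None:
        print(f"Greedy rule gets stuck at task {task}")   # happens already at task 2
        break
    schedule[worker].append(task)
```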
46
Approaches (cont'd.)
Carefully choose the objects to be searched. Strategies for this are given in the later part of the course, on optimization.
Implement well-designed user interfaces. Engineers then participate in completing the task while considering information that is not modeled.
Use parallel, distributed and cloud computing. While massively parallel implementations may have a significant effect on polynomial-time algorithms, there is little to be gained when algorithms are exponential (see Module 6).
47
Approaches (cont'd.)
The study of strategies for solving exponential tasks is part of artificial intelligence (AI) and operations research (OR).
Quantum computing: quantum computers have the potential to convert exponential complexity into linear complexity. These computers are currently of most interest in the field of cryptography. There are limits, however; for example, factorial complexity cannot be changed to linear complexity.
48
Questions
Are there certain tasks that computers cannot do and, if so, can faster computers help in these cases? Yes, and sometimes faster computers cannot help!
What are the cases when computational requirements are nearly independent of task size? Cases of O(log n) complexity.
Can small changes have a big effect? Yes!
Can one classify tasks? Yes, with "Big Oh" notation.
Why is engineering experience so valuable? Engineering experience provides the heuristics to do the things that computers cannot do.
49
Further reading
Computers and Intractability: A Guide to the Theory of NP-Completeness, M. Garey and D. Johnson, W. H. Freeman and Company, New York, 1979.
Algorithmics: The Spirit of Computing, 3rd ed., D. Harel, Addison-Wesley, 2004.
Engineering Informatics: Fundamentals of Computer-Aided Engineering, B. Raphael and I. F. C. Smith, Wiley, 2013.