1
CSC 380: Design and Analysis of Algorithms
Dr. Curry Guinn
2
Quick Info
Dr. Curry Guinn, CIS 2015, guinnc@uncw.edu
Office Hours: MTF 10:00am-11:00am and by appointment
3
Today
Class canceled Friday
Analysis of Algorithms
  Problem types
  Data structures
  Some math
  The model
  Relative rates of growth
  Big-O and its kin
Canvas Quiz this Thursday (night) and the following Tuesday (night); Homework due Sunday, Jan 27
4
Sieve of Eratosthenes
Input: integer n ≥ 2
Output: list of primes less than or equal to n

for p ← 2 to n do
    A[p] ← p
for p ← 2 to n do
    if A[p] ≠ 0    // p hasn't been previously eliminated from the list
        j ← p * p
        while j ≤ n do
            A[j] ← 0    // mark element as eliminated
            j ← j + p

Example from Levitin, "Introduction to the Design & Analysis of Algorithms," 3rd ed., Ch. 1. ©2012 Pearson Education, Inc., Upper Saddle River, NJ. All rights reserved.
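The pseudocode above translates almost line for line into Python. A sketch (the function name and the final list-building step are mine):

```python
def sieve(n):
    """Sieve of Eratosthenes: return a list of the primes <= n (n >= 2)."""
    A = list(range(n + 1))       # A[p] = p for p = 2..n
    for p in range(2, n + 1):
        if A[p] != 0:            # p hasn't been eliminated from the list
            j = p * p
            while j <= n:
                A[j] = 0         # mark element as eliminated
                j += p
    return [p for p in range(2, n + 1) if A[p] != 0]

print(sieve(25))  # [2, 3, 5, 7, 11, 13, 17, 19, 23]
```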
5
Analysis of algorithms
How good is the algorithm?
  time efficiency
  space efficiency
Does there exist a better algorithm?
  lower bounds
  optimality
6
Important problem types
sorting
searching
string processing
graph problems
combinatorial problems
geometric problems
numerical problems
7
Fundamental data structures
graph
tree
set and dictionary
list (array, linked list)
string
stack
queue
priority queue
8
Is This Algorithm Fast?
Problem: given a program, how fast does it solve a given problem?
"My program finds all the primes between 2 and 1,000,000,000 in 1.37 seconds." How good is this solution?
We could try to measure the time it takes, but that measurement is subject to many sources of error:
  the multitasking operating system
  the speed of the computer
  the language the solution is written in
9
Math background: exponents
X^Y, or "X to the Yth power": X multiplied by itself Y times
Some useful identities:
  X^A · X^B = X^(A+B)
  X^A / X^B = X^(A-B)
  (X^A)^B = X^(AB)
  X^N + X^N = 2X^N
  2^N + 2^N = 2^(N+1)
Exponents grow really fast: consider the doubling-salary example, a $1000 initial salary with a $1000 yearly raise, vs. a $1 initial salary that doubles every year. Which is better after 20 years?
Logs grow really slowly; logs are the inverses of exponents.
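The salary example can be checked directly. A quick sketch (I assume both the raise and the doubling are applied once per year for 20 years):

```python
# Linear growth: $1000 initial salary with a $1000 raise each year,
# vs. exponential growth: $1 initial salary that doubles each year.
linear = 1000
doubling = 1
for year in range(20):
    linear += 1000
    doubling *= 2

print(linear)    # 21000   -- after 20 years of $1000 raises
print(doubling)  # 1048576 -- 2**20: doubling wins by a huge margin
```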
10
Math background: logarithms
Definition: X^A = B if and only if log_X B = A
Intuition: log_X B means "the power X must be raised to, to get B"
In this course, a logarithm with no base implies base 2: log B means log_2 B
Examples:
  log_2 16 = 4 (because 2^4 = 16)
  log_10 1000 = 3 (because 10^3 = 1000)
11
Logarithm identities
Identities for logs with addition, multiplication, and powers:
  log (AB) = log A + log B
  log (A/B) = log A - log B
  log (A^B) = B log A
Proof of the first: let X = log A, Y = log B, and Z = log AB. Then 2^X = A, 2^Y = B, and 2^Z = AB. So 2^X · 2^Y = AB = 2^Z. Therefore X + Y = Z.
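These identities are easy to spot-check numerically with Python's math module (base 2, matching the course convention; the test values are arbitrary):

```python
import math

A, B = 12.0, 5.0     # arbitrary positive test values
log = math.log2      # "log" with no base means log base 2 here

assert math.isclose(log(A * B), log(A) + log(B))  # log(AB)  = log A + log B
assert math.isclose(log(A / B), log(A) - log(B))  # log(A/B) = log A - log B
assert math.isclose(log(A ** B), B * log(A))      # log(A^B) = B log A
print("all identities hold")
```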
12
Some helpful mathematics
N + N + N + … + N (a total of N times) = N·N = N^2, which is O(N^2)
1 + 2 + 3 + … + N = N(N+1)/2 = N^2/2 + N/2, which is O(N^2)
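Both sums are easy to verify for a concrete N:

```python
n = 1000

# N + N + ... + N (N times) = N * N
assert sum([n] * n) == n * n

# 1 + 2 + ... + N = N(N+1)/2
assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("formulas check out for n =", n)
```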
13
Analysis of Algorithms
What do we mean by an “efficient” algorithm? We mean an algorithm that uses few resources. By far the most important resource is time. Thus, when we say an algorithm is efficient (assuming we do not qualify this further), we mean that it can be executed quickly.
14
Is there some way to measure efficiency that does not depend on the state of current technology?
Yes! The idea:
  Determine the number of "steps" an algorithm requires when given some input. (We need to define "step" in some reasonable way.)
  Write this as a formula, based on the input.
15
Generally, when we determine the efficiency of an algorithm, we are interested in:
Time Used by the Algorithm
  Expressed in terms of the number of steps. (People also talk about "space efficiency", etc.)
How the Size of the Input Affects Running Time
  Think about giving an algorithm a list of items to operate on. The size of the problem is the length of the list.
Worst-Case Behavior
  What is the slowest the algorithm ever runs for a given input size? Occasionally we also analyze average-case behavior.
16
Input size and basic operation examples
Problem / input size measure / basic operation:
  searching for a key in a list of n items / number of the list's items, i.e. n / key comparison
  multiplication of two matrices / matrix dimensions or total number of elements / multiplication of two numbers
  checking primality of a given integer n / n's size = number of digits (in binary representation) / division
  typical graph problem / #vertices and/or edges / visiting a vertex or traversing an edge
A. Levitin, "Introduction to the Design & Analysis of Algorithms," 3rd ed., Ch. 2. ©2012 Pearson Education, Inc., Upper Saddle River, NJ. All rights reserved.
17
RAM model
Typically we use a simple model for basic operation costs: the RAM (Random Access Machine) model.
The RAM model has:
  all the basic operations: +, -, *, /, =, comparisons
  fixed-size integers (e.g., 32-bit)
  infinite memory
All basic operations take exactly one time unit (one CPU instruction) to execute.
Analysis = determining run-time efficiency. The model is an estimate; it is not meant to represent everything in the real world.
18
Critique of the model
Strengths:
  simple
  easier to prove things about the model than about a real machine
  can estimate algorithm behavior on any hardware/software
Weaknesses:
  not all operations take the same amount of time on a real machine
  does not account for page faults, disk accesses, limited memory, floating-point math, etc.
The model is an approximation of the real world, yet it can still predict an algorithm's run time on a real machine.
19
Relative rates of growth
Most algorithms' run time can be expressed as a function of the input size N.
Rate of growth: a measure of how quickly the graph of a function rises.
Goal: distinguish between fast- and slow-growing functions.
We only care about very large input sizes; for small sizes, almost any algorithm is fast enough.
Motivation: we usually care about algorithm performance only when there is a large number of inputs, and we usually don't care about small changes in run-time performance (the inaccuracy of our estimates makes small changes less relevant). We consider algorithms with slow growth rates better than those with fast growth rates.
20
Growth rate example Consider these graphs of functions.
Perhaps each one represents an algorithm:
  n^3 + 2n^2
  100n
Which grows faster?
21
Growth rate example How about now?
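The crossover these two graphs illustrate can also be seen numerically; a sketch using the two functions from the previous slide:

```python
def f(n):
    return n**3 + 2 * n**2   # the "fast-growing" function

def g(n):
    return 100 * n           # the "slow-growing" function

for n in [5, 10, 100]:
    print(n, f(n), g(n))
# n = 5:   f = 175 < g = 500         -- 100n is larger at first
# n = 10:  f = 1200 > g = 1000       -- crossover
# n = 100: f = 1020000 >> g = 10000  -- the cubic term dominates
```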
22
“The fundamental law of computer science: As machines become more powerful, the efficiency of algorithms grows more important, not less.” — Nick Trefethen An algorithm (or function or technique …) that works well when used with large problems & large systems is said to be scalable. Or “it scales well”.
23
Big O
Definition: T(N) = O(f(N)) if there exist positive constants c, n0 such that T(N) ≤ c · f(N) for all N ≥ n0.
Idea: we are concerned with how the function grows when N is large. We are not picky about constant factors: we draw only coarse distinctions among functions.
Lingo: "T(N) grows no faster than f(N)."
24
Big O: there exist c, n0 > 0 such that f(N) ≤ c · g(N) when N ≥ n0
f(N) grows no faster than g(N) for “large” N
25
Preferred big-O usage
Pick the tightest bound. If f(N) = 5N, then:
  f(N) = O(N^5)
  f(N) = O(N^3)
  f(N) = O(N log N)
  f(N) = O(N)    ← preferred
Ignore constant factors and low-order terms:
  T(N) = O(N), not T(N) = O(5N)
  T(N) = O(N^3), not T(N) = O(N^3 + N^2 + N log N)
Remove non-base-2 logarithms:
  f(N) = O(N log_6 N)
  f(N) = O(N log N)    ← preferred
26
Big-O of selected functions
27
Big Omega, Theta
Definition: T(N) = Ω(g(N)) if there are positive constants c and n0 such that T(N) ≥ c · g(N) for all N ≥ n0.
Lingo: "T(N) grows no slower than g(N)."
Definition: T(N) = Θ(g(N)) if and only if T(N) = O(g(N)) and T(N) = Ω(g(N)).
Big-O, Omega, and Theta establish a relative ordering among all functions of N.
28
Intuition about the notations
O (Big-O): ≤
Ω (Big-Omega): ≥
Θ (Big-Theta): =
o (little-o): <
29
Big-Omega
There exist c, n0 > 0 such that f(N) ≥ c · g(N) when N ≥ n0:
f(N) grows no slower than g(N) for "large" N
30
Big Theta: f(N) = Θ(g(N))
the growth rate of f(N) is the same as the growth rate of g(N)
31
An O(1) algorithm is constant time.
The running time of such an algorithm is essentially independent of the input. Such algorithms are rare, since they cannot even read all of their input.
An O(log_b n) [for some b] algorithm is logarithmic time. We do not care what b is.
An O(n) algorithm is linear time. Such algorithms are not rare. This is as fast as an algorithm can be and still read all of its input.
An O(n log_b n) [for some b] algorithm is log-linear time. This is about as slow as an algorithm can be and still be truly useful (scalable).
An O(n^2) algorithm is quadratic time. These are usually too slow.
An O(b^n) [for some b] algorithm is exponential time. These algorithms are much too slow to be useful.
32
Hammerin' the terminology
T(N) = O(f(N)):
  f(N) is an upper bound on T(N)
  T(N) grows no faster than f(N)
T(N) = Ω(g(N)):
  g(N) is a lower bound on T(N)
  T(N) grows at least as fast as g(N)
T(N) = o(h(N)) (little-o):
  T(N) grows strictly slower than h(N)
33
Notations
Asymptotically less than or equal to: O (Big-O)
Asymptotically greater than or equal to: Ω (Big-Omega)
Asymptotically equal to: Θ (Big-Theta)
Asymptotically strictly less than: o (little-o)
34
Facts about big-O
If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).
Example: 17n^3 + 2n^2 + 4n + 1 = Θ(n^3)
35
Hierarchy of Big-O
Functions, ranked in increasing order of growth:
1, log n, n, n log n, n^2, n^2 log n, n^3, ..., 2^n, n!, n^n
36
Various growth rates
37
Techniques for Determining Which Grows Faster
Evaluate the limit of f(N)/g(N) as N → ∞:
  limit is 0: f(N) = o(g(N))
  limit is a constant c ≠ 0: f(N) = Θ(g(N))
  limit is ∞: g(N) = o(f(N))
  no limit: no relation
38
Techniques, cont'd
L'Hôpital's rule: if f(N) → ∞ and g(N) → ∞ as N → ∞, then lim f(N)/g(N) = lim f'(N)/g'(N).
Example: f(N) = N, g(N) = log N. Using L'Hôpital's rule: f'(N) = 1, g'(N) = 1/N, so lim f'(N)/g'(N) = lim N = ∞. Therefore g(N) = o(f(N)).
39
Program loop runtimes

for (int i = 0; i < n; i += c)   // O(n)
    statement(s);

Adding a constant to the loop counter means the loop's running time grows linearly with its maximum value n. The loop executes its body about n / c times.

for j in range(0, n, c):   # O(n)
    statement(s)

Or, if myList contains n elements:

for item in myList:   # O(n)
    statement(s)
40
for (int i = 1; i < n; i *= c)   // O(log n)
    statement(s);

(Note: the counter must start at 1, not 0; multiplying 0 by c would leave it stuck at 0 forever.) Multiplying the loop counter means that the maximum value n must grow exponentially to linearly increase the loop's running time; therefore, the loop is logarithmic. It executes its body about log_c n times.

j = 1
while j < n:   # O(log n)
    statement(s)
    j *= c
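The claim that the body runs about log_c n times can be verified by counting iterations; a sketch (the helper name is mine):

```python
def count_iterations(n, c=2):
    """Count body executions of:  j = 1; while j < n: j *= c."""
    j, count = 1, 0
    while j < n:
        j *= c
        count += 1
    return count

print(count_iterations(1024))       # 10 -- matches log2(1024)
print(count_iterations(10**6, 10))  # 6  -- matches log10(10**6)
```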
41
The loop maximum is n^2, so the runtime is quadratic.

for (int i = 0; i < n * n; i += c)   // O(n^2)
    statement(s);

The loop executes its body about n^2 / c times.

for j in range(0, n*n, c):   # O(n^2)
    statement(s)
42
More loop runtimes
Nesting loops multiplies their runtimes.

for (int i = 0; i < n; i += c) {   // O(n^2)
    for (int j = 0; j < n; j += c) {
        statement;
    }
}

for j in range(0, n, c):   # O(n^2)
    for k in range(0, n, c):
        statement(s)

Or, if myList contains n elements:

for item in myList:   # O(n^2)
    for item2 in myList:
        statement(s)
43
Loops in sequence add together their runtimes, which means the loop set with the larger runtime dominates.

for (int i = 0; i < n; i += c) {   // O(n)
    statement;
}

for (int i = 0; i < n; i += c) {   // O(n log n)
    for (int j = 1; j < n; j *= c) {
        statement;
    }
}
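A sketch that counts the statements executed by two loop sets in sequence (an O(n) loop, then an O(n log n) nest, with c = 1 for the additive loops and a doubling inner loop), showing the larger set dominating the total:

```python
def steps_sequential(n):
    """Count body executions for an O(n) loop followed by an O(n log n) nest."""
    steps = 0
    for i in range(n):        # O(n) loop
        steps += 1
    for i in range(n):        # O(n log n): n iterations of an O(log n) loop
        j = 1
        while j < n:
            j *= 2
            steps += 1
    return steps

print(steps_sequential(8))     # 32 = 8 + 8 * log2(8)
print(steps_sequential(1024))  # 11264 = 1024 + 1024 * 10
```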
44
Types of runtime analysis
Express the running time as f(N), where N is the size of the input.
  worst case: your enemy gets to pick the input
  average case: need to assume a probability distribution on the inputs
Even for a fixed input size N, the cost of an algorithm can vary across different inputs.
45
Example: Sequential search
Worst case
Best case
Average case
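A sequential-search sketch that also counts key comparisons makes the three cases concrete (returning the comparison count is my addition for illustration):

```python
def sequential_search(lst, key):
    """Return (index, comparisons); index is -1 if key is absent."""
    comparisons = 0
    for i, item in enumerate(lst):
        comparisons += 1
        if item == key:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
print(sequential_search(data, 7))  # (0, 1)   best case: key is first
print(sequential_search(data, 5))  # (4, 5)   worst case: key is last
print(sequential_search(data, 8))  # (-1, 5)  key absent: all n comparisons
```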
46
Some rules When considering the growth rate of a function using Big-O
Ignore the lower-order terms and the coefficient of the highest-order term.
There is no need to specify the base of a logarithm: changing the base from one constant to another changes the value of the logarithm by only a constant factor.
If T1(N) = O(f(N)) and T2(N) = O(g(N)), then:
  T1(N) + T2(N) = max(O(f(N)), O(g(N)))
  T1(N) * T2(N) = O(f(N) * g(N))
47
For Next Class, Friday
Canvas Quiz 1 is due by tomorrow night, 11:59pm, Jan 17. No late Canvas quizzes; no make-up quizzes.
Canvas Quiz 2 is due by Tuesday night, 11:59pm, Jan 23.
Homework 1 is due Sunday, Jan 27, 11:59pm.