TCSS 342, Winter 2006 Lecture Notes Course Overview, Review of Math Concepts, Algorithm Analysis and Big-Oh Notation

Course objectives (broad) prepare you to be a good software engineer; (specific) learn basic data structures and algorithms. Data structures – how data is organized. Algorithms – an unambiguous sequence of steps to compute something. Motivational speech: knowing data structures and algorithms is part of being able to contribute to software development and to get a good job.

Software design goals What are some goals one should have for good software? Get students to suggest. Ones from Lewis & Chase book include: Correctness Efficiency (run-time) Reliability, robustness, and usability Maintainability and reusability Ask, how do data structures and algorithms apply to above goals? (modular, small-sized program pieces = easier to understand, debug, figure out correctness/efficiency).

Course content data structures algorithms data structures + algorithms = programs algorithm analysis – determining how long an algorithm will take to solve a problem Who cares? Aren't computers fast enough and getting faster?

An example Given an array of 1,000,000 integers, find the maximum integer in the array. Now suppose we are asked to find the kth largest element (the Selection Problem).

Candidate solutions candidate solution 1: sort the entire array (from small to large) using Java's Arrays.sort(), then pick out the (1,000,000 – k)th element. candidate solution 2: sort the first k elements; for each of the remaining 1,000,000 – k elements, keep the k largest in an array; pick out the smallest of the k survivors.
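
A minimal Java sketch of candidate solution 1 (the class and method names are our own; the slide only describes the approach):

    import java.util.Arrays;

    public class Selection {
        // Candidate solution 1: sort everything, then index from the end.
        // After an ascending sort of n elements, the kth largest element
        // is at index n - k (k = 1 gives the maximum).
        static int kthLargest(int[] a, int k) {
            int[] copy = a.clone();       // avoid mutating the caller's array
            Arrays.sort(copy);            // O(n log n)
            return copy[copy.length - k];
        }

        public static void main(String[] args) {
            int[] data = {5, 1, 9, 3, 7};
            System.out.println(kthLargest(data, 2));  // prints 7
        }
    }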

Is either solution good? Is there a better solution? What makes one solution "better" than another? Is it entirely based on runtime? How would you go about determining which solution is better? We could code them and test them, or we could make predictions about and analyses of each solution without coding.

Why algorithm analysis? As computers get faster and problem sizes get bigger, analysis will become more important: the difference between good and bad algorithms will get bigger. Being able to analyze algorithms will help us identify good ones without having to program and test them first.

Why data structures? when programming, you are an engineer engineers have a bag of tools and tricks – and the knowledge of which tool is the right one for a given problem Examples: arrays, lists, stacks, queues, trees, hash tables, heaps, graphs

Development practices modular (flexible) code; appropriate commenting of code (each method needs a comment explaining its parameters and its behavior); writing code to match a rigid specification; being able to choose the right data structures to solve a variety of programming problems; using an integrated development environment (IDE); incremental development. Syllabus: students are expected to know Java (Lewis & Chase, Appendix A has a review). Emphasize the comments expected in programs turned in for this class. Incremental development: calculator example. UML: a useful way to organize classes, especially in large programs. Two-minute overview: a class diagram shows the class name, the class data (attributes, including variables & constants), and the class methods (operations); you will use UML in a future software engineering class.

Math background: exponents X^Y = "X to the Yth power"; X multiplied by itself Y times. Some useful identities: X^A · X^B = X^(A+B); X^A / X^B = X^(A−B); (X^A)^B = X^(AB); X^N + X^N = 2X^N; 2^N + 2^N = 2^(N+1). Exponents grow really fast: use the doubling salary example – a $1000 initial salary with a $1000 yearly increase, vs. a $1 initial salary that doubles every year. Which is better after 20 years? Logs grow really slowly; logs are inverses of exponents.
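
A worked version of the salary example (assuming each raise or doubling is applied once per year):

    year 20, linear plan:   $1000 + 20 × $1000 = $21,000
    year 20, doubling plan: $1 × 2^20 = $1,048,576

The doubling plan looks terrible early on ($16 vs. $5000 in year 4) but wins enormously by year 20 – that is how fast exponents grow.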

Logarithms definition: X^A = B if and only if log_X B = A. Intuition: log_X B means "the power X must be raised to, to get B". A logarithm with no base implies base 2: log B means log_2 B. Examples: log_2 16 = 4 (because 2^4 = 16); log_10 1000 = 3 (because 10^3 = 1000).

Logarithms, continued log AB = log A + log B; log A/B = log A − log B; log (A^B) = B log A. Change of base: log_A B = (log_2 B) / (log_2 A); example: log_4 32 = (log_2 32) / (log_2 4) = 5 / 2. Proof of log AB = log A + log B: let X = log A, Y = log B, and Z = log AB. Then 2^X = A, 2^Y = B, and 2^Z = AB. So 2^X · 2^Y = AB = 2^Z. Therefore X + Y = Z. Proof of log A/B = log A − log B: (let's write it together!)

Arithmetic series Σ_{i=j}^{k} Expr, for some expression Expr (possibly containing i), means the sum of all values of Expr with each value of i between j and k inclusive. Example: Σ_{i=0}^{4} (2i + 1) = (2(0) + 1) + (2(1) + 1) + (2(2) + 1) + (2(3) + 1) + (2(4) + 1) = 1 + 3 + 5 + 7 + 9 = 25.

Series identities Sum from 1 through N inclusive: Σ_{i=1}^{N} i = N(N+1)/2. Is there an intuition for this identity? It is the sum of all numbers from 1 to N: 1 + 2 + 3 + ... + (N−2) + (N−1) + N. How many terms are in this sum? Can we rearrange them? Intuition: pair up numbers from each end: (1 + N) + (2 + N−1) + (3 + N−2) + ... The sum of each pair is N+1, and there are N/2 pairs (because we had N elements to start with and we're pairing them), so the sum of all pairs is (N+1) · N/2. This is the identity!

Series Sum of powers of 2: Σ_{i=0}^{N} 2^i = 2^(N+1) − 1. Example: 1 + 2 + 4 + 8 + 16 + 32 = 63 = 64 − 1. Think about the binary representation of numbers... Sum of powers of any number a ("geometric progression", for a > 0, a ≠ 1): Σ_{i=0}^{N} a^i = (a^(N+1) − 1) / (a − 1).

Algorithm performance How do we determine how much time an algorithm A uses to solve problem X? It depends on the input; use the input size N as the parameter. Determine a function f(N) representing the cost. Empirical analysis: code it and use a timer running on many inputs. Algorithm analysis: analyze the steps of the algorithm, estimating the amount of work each step takes. Coding is often impractical at early design stages. Algorithm analysis: determining how long an algorithm will take to solve a problem USING A MODEL.
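
A minimal sketch of the empirical approach (our own illustration; findMax and the input sizes are not from the slides):

    import java.util.Random;

    public class TimingDemo {
        // Linear scan for the maximum element.
        static int findMax(int[] a) {
            int max = a[0];
            for (int x : a) if (x > max) max = x;
            return max;
        }

        public static void main(String[] args) {
            Random rand = new Random(42);
            for (int n = 100000; n <= 1600000; n *= 2) {
                int[] a = new int[n];
                for (int i = 0; i < n; i++) a[i] = rand.nextInt();
                long start = System.nanoTime();
                findMax(a);
                long elapsed = System.nanoTime() - start;
                System.out.printf("n = %d, time = %.2f ms%n", n, elapsed / 1e6);
            }
        }
    }

Doubling n should roughly double the reported time for this linear algorithm, though JIT warmup and other noise make single timings unreliable – one reason analysis with a model is attractive.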

RAM model Typically we use a simple model for basic operation costs: the RAM (Random Access Machine) model. The RAM model has all the basic operations (+, −, *, /, =, comparisons), fixed-size integers (e.g., 32-bit), and infinite memory. All basic operations take exactly one time unit (one CPU instruction) to execute. Analysis = determining run-time efficiency. Model = an estimate, not meant to represent everything in the real world.

Critique of the model Strengths: simple; easier to prove things about the model than about a real machine; can estimate algorithm behavior on any hardware/software. Weaknesses: not all operations take the same amount of time in a real machine; does not account for page faults, disk accesses, limited memory, floating point math, etc. Model = approximation of the real world; it can predict the run-time of an algorithm on a machine.

Why use models? (figure: model ↔ real world) Idea: useful statements using the model translate into useful statements about real computers.

Relative rates of growth Most algorithms' runtime can be expressed as a function of the input size N. Rate of growth: a measure of how quickly the graph of a function rises. Goal: distinguish between fast- and slow-growing functions. We only care about very large input sizes (for small sizes, almost any algorithm is fast enough). This helps us discover which algorithms will run more quickly or slowly for large input sizes. Motivation: we usually care about algorithm performance only when there is a large number of inputs, and we usually don't care about small changes in run-time performance (the inaccuracy of estimates makes small changes less relevant). Consider algorithms with slow growth rates better than those with fast growth rates.

Growth rate example Consider these graphs of functions. Perhaps each one represents an algorithm: n^3 + 2n^2 versus 100n^2 + 1000. Which grows faster?

Growth rate example How about now?

Big-Oh notation Defn: T(N) = O(f(N)) if there exist positive constants c, n0 such that T(N) ≤ c · f(N) for all N ≥ n0. Idea: we are concerned with how the function grows when N is large; we are not picky about constant factors, making only coarse distinctions among functions. Lingo: "T(N) grows no faster than f(N)." Important (not in the Lewis & Chase book). Write on board (for next slide).
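
A small worked check of the definition (our own example): show that 2N + 3 = O(N). Pick c = 3 and n0 = 3. For all N ≥ 3 we have 3 ≤ N, so 2N + 3 ≤ 2N + N = 3N = c · N. The definition is satisfied, so 2N + 3 = O(N).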

Examples n = O(2n)? 2n = O(n)? n = O(n^2)? n^2 = O(n)? n = O(1)? 10 log n = O(n)? 214n + 34 = O(2n^2 + 8n)?

Preferred big-Oh usage Pick the tightest bound. If f(N) = 5N, then: f(N) = O(N^5); f(N) = O(N^3); f(N) = O(N log N); f(N) = O(N) ← preferred. Ignore constant factors and low-order terms: T(N) = O(N), not T(N) = O(5N); T(N) = O(N^3), not T(N) = O(N^3 + N^2 + N log N). Bad style: f(N) ≤ O(g(N)). Wrong: f(N) ≥ O(g(N)).

Big-Oh of selected functions (slowest to fastest growth): O(1) constant, O(log N) logarithmic, O(N) linear, O(N log N), O(N^2) quadratic, O(N^3) cubic, O(2^N) exponential.

Ten-fold processor speedup (figure: how the largest solvable problem size grows on a machine 10 times faster – by 10× for O(N), about 3.16× for O(N^2), about 2.15× for O(N^3), but only about 3.3 additional elements for O(2^N)). The book has errata, including on this figure. 2^N = exponential complexity.

Big omega, theta Defn: T(N) = Ω(g(N)) if there are positive constants c and n0 such that T(N) ≥ c · g(N) for all N ≥ n0. Lingo: "T(N) grows no slower than g(N)." Defn: T(N) = Θ(h(N)) if and only if T(N) = O(h(N)) and T(N) = Ω(h(N)). Big-Oh, Omega, and Theta establish a relative order among all functions of N.

Intuition, little-Oh Defn: T(N) = o(p(N)) if T(N) = O(p(N)) and T(N) ≠ Θ(p(N)). Notation vs. intuition: O (Big-Oh) ~ ≤; Ω (Big-Omega) ~ ≥; Θ (Theta) ~ =; o (little-Oh) ~ <.

More about asymptotics Fact: if f(N) = O(g(N)), then g(N) = Ω(f(N)). Proof: suppose f(N) = O(g(N)). Then there exist constants c and n0 such that f(N) ≤ c · g(N) for all N ≥ n0. Then g(N) ≥ (1/c) f(N) for all N ≥ n0, and so g(N) = Ω(f(N)).

More terminology T(N) = O(f(N)): f(N) is an upper bound on T(N); T(N) grows no faster than f(N). T(N) = Ω(g(N)): g(N) is a lower bound on T(N); T(N) grows at least as fast as g(N). T(N) = o(h(N)): T(N) grows strictly slower than h(N).

Facts about big-Oh If T1(N) = O(f(N)) and T2(N) = O(g(N)), then T1(N) + T2(N) = O(f(N) + g(N)) and T1(N) · T2(N) = O(f(N) · g(N)). If T(N) is a polynomial of degree k, then T(N) = Θ(N^k); example: 17n^3 + 2n^2 + 4n + 1 = Θ(n^3). log^k N = O(N), for any constant k.

Techniques Algebra: e.g., f(N) = N / log N, g(N) = log N; comparing them is the same as asking which grows faster, N or log^2 N. Evaluate lim_{N→∞} f(N) / g(N); the limit gives the Big-Oh relation: limit 0 → f(N) = o(g(N)); limit c ≠ 0 → f(N) = Θ(g(N)); limit ∞ → g(N) = o(f(N)); no limit → no relation.

Techniques, cont'd L'Hôpital's rule: if lim_{N→∞} f(N) = ∞ and lim_{N→∞} g(N) = ∞, then lim_{N→∞} f(N)/g(N) = lim_{N→∞} f′(N)/g′(N). Example: f(N) = N, g(N) = log N. Using L'Hôpital's rule: f′(N) = 1, g′(N) = 1/N, so f′(N)/g′(N) = N → ∞; therefore g(N) = o(f(N)).

Program loop runtimes

    for (int i = 0; i < n; i += c)       // O(n)
        statement(s);

Adding a constant to the loop counter means that the loop runtime grows linearly when compared to its maximum value n.

    for (int i = 1; i < n; i *= c)       // O(log n)
        statement(s);

Multiplying the loop counter means that the maximum value n must grow exponentially to linearly increase the loop runtime; therefore, the loop is logarithmic. (Note: the counter must start above 0, or multiplying would leave it stuck at 0.)

    for (int i = 0; i < n * n; i += c)   // O(n^2)
        statement(s);

The loop maximum is n^2, so the runtime is quadratic.
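
A small self-contained sketch (our own, not from the slides) that counts iterations of the first two loop shapes, making the linear vs. logarithmic growth visible:

    public class LoopCounts {
        public static void main(String[] args) {
            int c = 2;
            for (int n = 1000; n <= 1000000; n *= 10) {
                long add = 0, mul = 0;
                for (int i = 0; i < n; i += c) add++;   // O(n) loop
                for (int i = 1; i < n; i *= c) mul++;   // O(log n) loop
                System.out.println("n=" + n + "  additive=" + add
                                   + "  multiplicative=" + mul);
            }
        }
    }

Each ten-fold increase in n multiplies the additive count by 10 but adds only a constant (about log2 10 ≈ 3.3) to the multiplicative count.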

More loop runtimes Nesting loops multiplies their runtimes.

    for (int i = 0; i < n; i += c)       // O(n^2)
        for (int j = 0; j < n; j += c)
            statement;

    for (int i = 0; i < n; i += c)       // O(n log n)
        for (int j = 1; j < n; j *= c)
            statement;

Loops in sequence add together their runtimes, which means the loop with the larger runtime dominates.

Maximum subsequence sum The maximum contiguous subsequence sum problem: given a sequence of integers A_0, A_1, ..., A_{n−1}, find the maximum value of Σ_{k=i}^{j} A_k over all integers 0 ≤ i ≤ j < n. (This sum is taken to be zero if all numbers in the sequence are negative.)

First algorithm (brute force) Try all possible combinations of subsequences.

    // implement together
    function maxSubsequence(array[]):
        max sum = 0
        for each starting index i,
            for each ending index j,
                add up the sum from A_i to A_j
                if this sum is bigger than max, max sum = this sum
        return max sum

What is the runtime (Big-Oh) of this algorithm? How could it be improved?
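
One possible Java rendering of the brute-force pseudocode (a sketch; the three nested loops make it O(n^3)):

    public class MaxSubsequence {
        // Brute force: try every (i, j) pair and re-add the sum each time.
        static int maxSubsequence(int[] a) {
            int maxSum = 0;                        // empty subsequence counts as 0
            for (int i = 0; i < a.length; i++)
                for (int j = i; j < a.length; j++) {
                    int sum = 0;
                    for (int k = i; k <= j; k++)   // re-add A[i..j] from scratch
                        sum += a[k];
                    if (sum > maxSum) maxSum = sum;
                }
            return maxSum;
        }

        public static void main(String[] args) {
            int[] a = {-2, 11, -4, 13, -5, 2};
            System.out.println(maxSubsequence(a)); // 20, from the slice 11, -4, 13
        }
    }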

Second algorithm (improved) Still try all possible combinations, but don't redundantly add the sums. Key observation: Σ_{k=i}^{j} A_k = A_j + Σ_{k=i}^{j−1} A_k; in other words, we don't need to throw away partial sums. Can we use this information to remove one of the loops from our algorithm? // implement together function maxSubsequence2(array[]): What is the runtime (Big-Oh) of this new algorithm? Can it still be improved further?
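
A sketch of what maxSubsequence2 might look like (it fits in the MaxSubsequence class above); keeping the running sum makes each (i, j) pair O(1), for O(n^2) overall:

    // Improved: reuse the partial sum of A[i..j-1] when extending to A[i..j].
    static int maxSubsequence2(int[] a) {
        int maxSum = 0;
        for (int i = 0; i < a.length; i++) {
            int sum = 0;                  // running sum of A[i..j]
            for (int j = i; j < a.length; j++) {
                sum += a[j];              // extend the previous partial sum
                if (sum > maxSum) maxSum = sum;
            }
        }
        return maxSum;
    }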

Third algorithm (improved!) We must avoid trying all possible combinations; to do this, we must find a way to broadly eliminate many potential combinations from consideration. For convenience, let A_{i,j} denote the subsequence from index i to j, and S_{i,j} denote the sum of subsequence A_{i,j}.

Third algorithm, continued Claim #1: A subsequence with a negative sum cannot be the start of the maximum-sum subsequence.

Third algorithm, continued Claim #1, more formally: if A_{i,j} is a subsequence such that S_{i,j} < 0, then there is no q > j such that A_{i,q} is the maximum-sum subsequence. Proof: (do it together in class) Can this help us produce a better algorithm?

Third algorithm, continued Claim #2: when examining subsequences left to right, for some starting index i, searching over ending index j, suppose A_{i,j} is the first subsequence with a negative sum (so S_{i,j} < 0, and this is not true for smaller j). Then the maximum-sum subsequence cannot start at any index p satisfying i < p ≤ j. Proof: (do it together in class)

Third algorithm, continued (figures: the possible contents of A_{i,j})

Third algorithm, continued Can we eliminate another loop from our algorithm? // implement together function maxSubsequence3(array[]): What is its runtime (Big-Oh)? Is there an even better algorithm than this third algorithm? Can you make a strong argument about why or why not? It's hard to get much better: this algorithm runs in linear time, and you must examine every element, so that's about as good as you can do. Asymptotically, that is THE best you can do.
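
A sketch of the linear algorithm the claims lead to (again one possible Java rendering, in the same class as before): whenever the running sum goes negative, Claims #1 and #2 say the maximum-sum subsequence cannot start at the current start or anywhere up through the current index, so reset and start fresh at the next index:

    // Linear scan, O(n): abandon any candidate whose sum has gone negative.
    static int maxSubsequence3(int[] a) {
        int maxSum = 0;
        int sum = 0;                  // running sum of the current candidate
        for (int j = 0; j < a.length; j++) {
            sum += a[j];
            if (sum > maxSum) {
                maxSum = sum;
            } else if (sum < 0) {
                sum = 0;              // next candidate starts at index j + 1
            }
        }
        return maxSum;
    }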

Kinds of runtime analysis Express the running time as f(N), where N is the size of the input. Worst case: your enemy gets to pick the input. Average case: need to assume a probability distribution on the inputs. Amortized: your enemy gets to pick the inputs/operations, but you only have to guarantee speed over a large number of operations. Note: even for a fixed input size N, the cost of an algorithm can vary across different inputs.

References Lewis & Chase book, Chapter 1 (mostly sections 1.2, 1.6) Rosen’s Discrete Math book, mostly sections 2.1-2.3. (Also various other sections for math review)