
Ajinkya Nene, Lynbrook CS Club. 04/21/2014

Definition
Big-O analysis lets you establish a worst-case bound on the performance of an algorithm (most useful for USACO).
 o In USACO, analyzing an algorithm usually just means counting the number of calls or nested for loops.
The average running time sometimes differs from the worst-case running time when an algorithm uses randomization.
 o For that case you use randomized analysis.
You can also derive a best-case running time (you really only see these in computer science papers).

Worst Case Analysis
Big-O notation represents the worst-case running time of an algorithm.
 o O(n^2) – the algorithm grows in quadratic time as n increases
 o O(n) – the algorithm grows in linear time
 o O(log n) – the algorithm's worst case grows logarithmically (usually base 2)
 o O(n log n) – grows a bit faster than linear; usually good in practice
If you have O(n^2 + n), you can simplify it to O(n^2), since n^2 grows much faster than n.

For Loop Examples

for (int i = 0; i < n; i++) {
    for (int ii = 0; ii < m; ii++) {
        // do something
    }
}

For each iteration of the outer loop, the inner loop runs m times. The outer loop runs n times, so this is O(n*m).

for (int i = 0; i < n - 1; i++) {
    for (int ii = i + 1; ii < n; ii++) {
        // do something
    }
}

Assuming "do something" is O(1), note the sequence of costs: (n-1) + (n-2) + ... + 1 = (n-1)n/2 = (n^2 - n)/2. O((n^2 - n)/2) is O(n^2).

Binary Search Example/Master Theorem

int binary_search(int A[], int key, int imin, int imax) {
    if (imax < imin) {
        return KEY_NOT_FOUND;
    } else {
        int imid = midpoint(imin, imax);
        if (A[imid] > key)
            return binary_search(A, key, imin, imid - 1);
        else if (A[imid] < key)
            return binary_search(A, key, imid + 1, imax);
        else
            return imid;
    }
}

Proof that Binary Search is O(log n)
The recurrence relation for binary search (worst case) is:
 o T(n) = T(n/2) + O(1)
Master Theorem: T(n) = a*T(n/b) + f(n), where
 o n is the size of the problem
 o a is the number of recursive subproblems
 o n/b is the size of each subproblem
 o f(n) is the cost of the work done outside the recursive calls
In this case a = 1, b = 2, and f(n) = O(1) = O(n^(log_b a)), since log_2 1 = 0. The master theorem also states that:
 o If f(n) = O(n^(log_b a) * log^k n) (in this case k = 0), then T(n) = O(n^(log_b a) * log^(k+1) n)
 o So, in this case T(n) = O(log^(0+1) n) = O(log n)

METHODS FOR AMORTIZED ANALYSIS

First Method: Averaging Method
Determine an upper bound T(n) on the total cost of a sequence of n operations, then calculate the amortized cost as T(n) / n.
Example: consider an empty stack and a sequence of n PUSH, POP, and MULTIPOP operations.

MULTIPOP(S, k)
1. while not Stack-Empty(S) and k > 0
2.     Pop(S)
3.     k ← k - 1

Worst-case analysis: a single MULTIPOP can cost O(n), so the naive bound for n operations is O(n^2).
Averaging-method analysis: the total cost of the sequence is in fact O(n), because each element is pushed at most once and can therefore be popped (including by MULTIPOP) at most once, and there are at most n pushes. The amortized cost is O(n)/n = O(1).

Second Method: Accounting Method
We assign artificial charges (amortized costs) to the different operations. Any overcharge for an operation on an item is stored (in a "bank account") reserved for that item. Later, a different operation on that item can pay its cost with the credit stored for that item. The balance in the (bank) account is never allowed to become negative. The sum of the amortized costs for any sequence of operations is then an upper bound on the actual total cost of those operations. The amortized cost of each operation must be chosen wisely so that every cost is paid at or before the time it is incurred.

Example: Accounting Method
Consider an empty stack and a sequence of n PUSH, POP, and MULTIPOP operations.

MULTIPOP(S, k)
1. while not Stack-Empty(S) and k > 0
2.     Pop(S)
3.     k ← k - 1

Actual costs:              Amortized costs:
PUSH:     1                PUSH:     2
POP:      1                POP:      0
MULTIPOP: min(k, s)        MULTIPOP: 0

Suppose $1 represents one unit of cost. When pushing a plate, use one dollar to pay the actual cost of the PUSH and leave one dollar on the plate as credit. Whenever POPping a plate, the dollar on the plate pays the actual cost of the POP (the same goes for MULTIPOP). By charging PUSH a little more, we do not have to charge POP or MULTIPOP at all. The total amortized cost is thus O(n), since it is O(1) per operation, and the required invariant holds: the balance never goes negative.

Case Study: Union Find
Find: determine which subset a particular element is in.
Union: join two subsets together into one.
To represent this structure, we usually fix a representative element for each set. Sets are stored as trees; the root is the representative (start off with n separate singleton sets). Union merges two trees. Each element stores its parent (roots are their own parents). Use path compression and union by rank for a faster worst-case time.

Ackermann Function
A_0(x) = x + 1
A_{k+1}(x) = A_k^{x+1}(x)
where A_k^i is the i-fold composition of A_k; that is, A_k^i = A_k ∘ A_k ∘ ... ∘ A_k (i times).

Approach
Define the rank of a node: rank(u) = 2 + height(T_m(u)), where T_i(u) is the subtree rooted at u at time i in the execution of the m union and find instructions.
Define a new function δ(u): the greatest value k for which rank(parent(u)) >= A_k(rank(u)).
For n >= 5, the maximum value δ(u) can take is α(n) - 1, where α is the inverse Ackermann function. If δ(u) = k:
 o n > floor(log n) + 2 >= rank(parent(u)) >= A_k(rank(u)) >= A_k(2)
 o Thus, α(n) > k

Continued…
Outline: all union operations take constant time, giving O(m). Next, tally separately the number of time units charged to the vertices and to the find instructions, and show that in each case the total is O((m + n) α(n)).
To get the O(m α(n)) part: note that at most α(n) units are charged to each find instruction. This is because at most one unit is charged per find for each of the α(n) possible values of δ: for each k, only the last vertex x on the find path with δ(x) = k charges its time unit to the find. There are at most m find instructions, giving O(m α(n)).
To get the O(n α(n)) part: count the number of charges made to each vertex x over the entire computation.