Chapter 20 Computational complexity

This chapter discusses:
- Algorithmic efficiency
- A commonly used measure: computational complexity
- The effects of algorithm choice on execution time

Measuring program efficiency

There are many factors that affect execution time:
- Hardware
- Operating system
- System environment
- Programming language and compiler
- Run-time system or interpreter
- Algorithm(s)
- Data on which the program is run

Time complexity

- We limit our attention to the algorithm(s) and data.
- Our measure of program efficiency is called computational complexity or time complexity.
- Each instance of a problem has some inherent size.
- Execution time often depends on size.
- For the most part, we want to know the worst-case behavior of an algorithm.
- We measure time cost by the number of primitive steps the algorithm performs in solving the problem.

Time complexity (cont.)

- The time complexity of a method M is a function t_M from the natural numbers N to the positive reals R+,
      t_M: N -> R+
  such that t_M(n) is the maximum number of steps for method M to solve a problem of size n.

Comparing different algorithms

- Let f and g be functions from the natural numbers to the positive reals, f, g: N -> R+.
- We say f is O-dominated by g, and write f ≼ g, provided:
  - there is a positive integer n0, and
  - there is a positive real c, such that
  - for all n >= n0, f(n) <= c * g(n).
- For example, f(n) = 3n + 5 is O-dominated by g(n) = n: take c = 4 and n0 = 5; then 3n + 5 <= 4n for all n >= 5.

Time complexity

Relation observations

- f, g, h: N -> R+
- The relation ≼ is reflexive: for any function f, f ≼ f.
- The relation ≼ is transitive: f ≼ g and g ≼ h imply f ≼ h.
- The relation ≼ is not antisymmetric: there are functions f and g such that f ≼ g and g ≼ f, but f ≠ g.
- The relation ≼ is not total: there are functions f and g such that neither f ≼ g nor g ≼ f.

Relation observations (cont.)

- If f ≼ g and g ≼ f, f and g are said to have the same magnitude; we write f ≈ g.
- If f ≼ g but not g ≼ f, then f ≺ g.
- [f + g](n) = f(n) + g(n).
- [f * g](n) = f(n) * g(n).
- [max(f, g)](n) = max(f(n), g(n)).

Computational rules

- If c1 and c2 are positive real constants, then f ≈ c1*f + c2.
- If f is a polynomial function, f(n) = ck*n^k + c(k-1)*n^(k-1) + … + c0, where ck > 0, then f ≈ n^k.
- If f ≼ g and f' ≼ g', then f*f' ≼ g*g'.
- If f ≼ g and f' ≼ g', then f + f' ≼ max(g, g'). Also f + g ≈ max(f, g).
- If a and b are > 1, log_a n ≈ log_b n.
- 1 ≺ log_a n ≺ n ≺ n*log_a n ≺ n^2 ≺ n^3 ≺ n^n.

Complexity classes

- The set of functions that are O-dominated by f:
      O(f) = { g | g ≼ f }   (big oh of f)
- The set of functions that O-dominate f:
      Ω(f) = { g | f ≼ g }   (omega of f)
- The set of functions that have the same magnitude as f:
      Θ(f) = { g | g ≈ f }   (theta of f)

Complexity classes (cont.)

- If t_M ≈ c, then M is constant (Θ(1)).
- If t_M ≈ n, then M is linear (Θ(n)).
- If t_M ≈ n^2, then M is quadratic (Θ(n^2)).
- If t_M is not O-dominated by n^k for any k, then M is exponential.

Problem complexity

- Some problems have a known complexity; e.g., sorting can be done in n log n time.
- Some problems are known to be exponential. They are called intractable.
- Some problems are unsolvable.
- Determining the complexity of a problem is generally difficult because it involves proving something about all possible methods for solving the problem.

Cost of a method

- To determine the time-cost of a method, we must count the steps performed in the worst case.
- A method with constant time complexity needs to use only a fixed amount of its data.
- Any method that, in the worst case, must examine all its data is at least linear.
- Methods that are better than linear typically require the data to have some organization or structure.

Cost of a statement

- A simple statement requires a constant number of steps.
- The number of steps of a method invocation is the number of steps in the method the statement invokes.
- The worst-case number of steps required by a conditional is the maximum of the number of steps required by each alternative.

Cost of a statement (cont.)

- A sequence of statements requires the sum of the steps of the individual statements; asymptotically this is the maximum of their step counts, since f + g ≈ max(f, g).
- The time-cost of a loop is the number of steps required by the loop body multiplied by the number of times the loop body is executed.
- The time-cost of recursion is determined by the depth of the recursion and the cost of each recursive invocation.

Example

public double average (StudentList students) {
    int i, sum, count;
    count = students.size();
    sum = 0;
    i = 0;
    while (i < count) {
        sum = sum + students.get(i).finalExam();
        i = i + 1;
    }
    return (double)sum / (double)count;
}

- Assuming size() and get(i) each take a constant number of steps, this method is Θ(n).

Example (cont.)

Suppose get(i) is also linear (requires i*c1 + c0 steps).

value of i:   0     1        2         …   n-1
steps:        c0    c1+c0    2c1+c0    …   (n-1)*c1+c0

- The method is then quadratic (Θ(n^2)).

Example

boolean hasDuplicates (List list) {
    int i;
    int j;
    int n;
    boolean found;
    n = list.size();
    found = false;
    for (i = 0; i < n-1 && !found; i = i+1)
        for (j = i+1; j < n && !found; j = j+1)
            found = list.get(i).equals(list.get(j));
    return found;
}

Example (cont.)

- The outer loop is performed n-1 times. The number of steps in each iteration depends on the inner loop.

value of i:              0      1      2      …   n-3   n-2
inner loop iterations:   n-1    n-2    n-3    …   2     1

- Summing these altogether, we get (n-1) + (n-2) + … + 1 = ½(n^2 - n).
- The method is quadratic (Θ(n^2)).

Analyzing different algorithms

- Given a list of integers, find the maximum sum of the elements of a sublist of zero or more contiguous elements.

list                  sublist       max sum
(-2,4,-3,5,3,-5,1)    (4,-3,5,3)    9
(2,4,5)               (2,4,5)       11
(-1,-2,3)             (3)           3
(-1,-2,-3)            ()            0

maxSublistSum

int maxSublistSum (IntegerList list)
    The maximum sum of a sublist of the given list:
    if list.getInt(i) > 0 for some i, 0 <= i < list.size(), then
        max { list.getInt(i) + … + list.getInt(j) | 0 <= i <= j < list.size() }
    else 0

maxSublistSum (cont.)

- This brute-force approach computes the sum of every possible sublist from scratch, which makes it Θ(n^3). It can be improved to Θ(n^2) with a little adjustment: keep a running sum for each starting index instead of recomputing each sublist's sum.
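The brute-force code itself did not survive in this transcript; the following is a sketch of the Θ(n^3) approach described above, with a plain int[] standing in for the book's IntegerList (an assumption of this sketch):

```java
public class MaxSublistCubic {
    // Brute force: for every pair (i, j), sum the elements of the sublist
    // list[i..j]. Three nested loops over n elements give Theta(n^3) steps.
    // int[] stands in for the textbook's IntegerList.
    static int maxSublistSum(int[] list) {
        int max = 0;  // the empty sublist has sum 0
        for (int i = 0; i < list.length; i = i + 1) {
            for (int j = i; j < list.length; j = j + 1) {
                int sum = 0;  // recomputed from scratch for each pair (i, j)
                for (int k = i; k <= j; k = k + 1) {
                    sum = sum + list[k];
                }
                if (sum > max) {
                    max = sum;
                }
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // the examples from the table above
        System.out.println(maxSublistSum(new int[]{-2, 4, -3, 5, 3, -5, 1})); // 9
        System.out.println(maxSublistSum(new int[]{2, 4, 5}));                // 11
        System.out.println(maxSublistSum(new int[]{-1, -2, -3}));             // 0
    }
}
```

Moving sum out of the j loop and adding list[j] to it on each iteration removes the innermost loop, which is the Θ(n^2) adjustment the slide mentions.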

Recursive method

- Base cases are lists of length 0 and 1.
- If we have a longer list, we divide it in half and consider the possible cases (divide and conquer).
- There are 3 possible cases:
  - The sublist with the maximum sum is in the left half of the list (x_0 … x_mid).
  - The sublist with the maximum sum is in the right half of the list (x_(mid+1) … x_(n-1)).
  - The sublist with the maximum sum overlaps the middle of the list (includes x_mid and x_(mid+1)).

maxSublistSum

private int maxSublistSum (IntegerList list, int first, int last)
    The maximum sum of a sublist of the given list. That is, the maximum sum
    of a sublist of (list.get(first), …, list.get(last)).
    require: 0 <= first <= last < list.size()

- The principal method goes something like this:

int maxSublistSum (IntegerList list) {
    if (list.size() == 0)
        return 0;
    else
        return maxSublistSum(list, 0, list.size()-1);
}

- The method is Θ(n log n).
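The body of the recursive helper does not appear in the transcript; here is a sketch of the three-case divide-and-conquer scheme described above, again with int[] standing in for IntegerList (an assumption of this sketch):

```java
public class MaxSublistDivide {
    // Solve list[first..last] recursively: the best sublist lies wholly in
    // the left half, wholly in the right half, or spans the middle.
    // T(n) = 2T(n/2) + Theta(n), which gives Theta(n log n).
    static int maxSublistSum(int[] list, int first, int last) {
        if (first == last) {
            return Math.max(0, list[first]);  // length-1 base case; an empty sublist scores 0
        }
        int mid = (first + last) / 2;
        int leftBest = maxSublistSum(list, first, mid);
        int rightBest = maxSublistSum(list, mid + 1, last);
        // Best sublist spanning the middle: the best sum of a run ending at
        // mid plus the best sum of a run starting at mid+1.
        int leftSpan = 0;
        int sum = 0;
        for (int i = mid; i >= first; i = i - 1) {
            sum = sum + list[i];
            leftSpan = Math.max(leftSpan, sum);
        }
        int rightSpan = 0;
        sum = 0;
        for (int j = mid + 1; j <= last; j = j + 1) {
            sum = sum + list[j];
            rightSpan = Math.max(rightSpan, sum);
        }
        return Math.max(leftSpan + rightSpan, Math.max(leftBest, rightBest));
    }

    // Principal method, as on the slide: handle the empty list, then recurse.
    static int maxSublistSum(int[] list) {
        if (list.length == 0) {
            return 0;
        }
        return maxSublistSum(list, 0, list.length - 1);
    }

    public static void main(String[] args) {
        System.out.println(maxSublistSum(new int[]{-2, 4, -3, 5, 3, -5, 1})); // 9
    }
}
```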

Iterative method

- Having found the maximum sum in the first i elements of the list, we look at element i+1.

Iterative method

- There are 2 possibilities for the maximum sum sublist of the first i+1 elements:
  - It doesn't include x_i, in which case it is the same as the maximum sum sublist of the first i elements.
  - It does include x_i.
- There are 2 possibilities if the maximum sum sublist ends with x_i:
  - It consists only of x_i.
  - It includes x_(i-1), in which case it is x_i appended to the maximum sum sublist ending with x_(i-1).

- The method is Θ(n).
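The two observations above can be sketched as a single Θ(n) pass that tracks the best sublist ending at the current element; int[] again stands in for the book's IntegerList (an assumption of this sketch):

```java
public class MaxSublistIterative {
    // One pass over the list. maxEndingHere is the maximum sum of a sublist
    // ending with element i; maxSoFar is the maximum over the first i+1
    // elements (0 for the empty sublist). Theta(n).
    static int maxSublistSum(int[] list) {
        int maxSoFar = 0;
        int maxEndingHere = 0;
        for (int i = 0; i < list.length; i = i + 1) {
            // Either the best sublist ending at i is x_i alone, or it is
            // x_i appended to the best sublist ending at i-1.
            maxEndingHere = Math.max(list[i], maxEndingHere + list[i]);
            // Either the best sublist of the first i+1 elements ends with
            // x_i (maxEndingHere), or it does not (the previous maxSoFar).
            maxSoFar = Math.max(maxSoFar, maxEndingHere);
        }
        return maxSoFar;
    }

    public static void main(String[] args) {
        System.out.println(maxSublistSum(new int[]{-2, 4, -3, 5, 3, -5, 1})); // 9
        System.out.println(maxSublistSum(new int[]{-1, -2, 3}));              // 3
        System.out.println(maxSublistSum(new int[]{-1, -2, -3}));             // 0
    }
}
```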

We've covered

- Time complexity.
- Comparing time-cost functions.
- Computing time-cost for simple examples.

Glossary