CS 2210 Discrete Structures Algorithms and Complexity Fall 2019 Sukumar Ghosh

What is an algorithm?
A finite set (or sequence) of precise instructions for performing a computation.
Example: maxima finding
procedure max(a1, a2, …, an: integers)
  max := a1
  for i := 2 to n
    if max < ai then max := ai
  return max {the largest element}
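The pseudocode above can be sketched in Python; the function name `find_max` is illustrative, not from the slides:

```python
def find_max(a):
    """Return the largest element of a non-empty list, mirroring the slide's pseudocode."""
    maximum = a[0]                # max := a1
    for i in range(1, len(a)):    # for i := 2 to n
        if maximum < a[i]:        # if max < ai
            maximum = a[i]        # then max := ai
    return maximum                # the largest element

print(find_max([3, 2, 4, 1, 5]))  # 5
```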

Flowchart for maxima finding
Given n elements, can you count the total number of operations?
  start
  max := a1; i := 2
  if max < ai then max := ai
  i := i + 1
  if i > n then end, else repeat the comparison

Time complexity of algorithms
Counts the largest number of basic operations required to execute an algorithm.
Example: maxima finding
procedure max(a1, a2, …, an: integers)
  max := a1 {1 operation}
  for i := 2 to n {1 operation for i := 2}
    if max < ai then max := ai {executed n-1 times; each pass: 2 ops + 1 op to check if i > n + 1 op to increment i}
  return max {the largest element}
The total number of operations is 4(n-1) + 2 = 4n - 2.

Time complexity of algorithms
Example: linear search (search for x in a list a1 a2 a3 … an)
  k := 1 {1 op}
  while k ≤ n do {n ops for the tests k ≤ n}
    if x = ak then found else k := k + 1 {2n ops + 1 op}
  search failed
The maximum number of operations is 3n + 2. If we are lucky, the search can end even in the first iteration.
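A Python sketch of the linear search above, returning the 1-based position k as in the pseudocode (the name `linear_search` is illustrative):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or None if the search fails."""
    k = 1                      # k := 1
    while k <= len(a):         # while k <= n do
        if x == a[k - 1]:      # if x = ak then found
            return k
        k = k + 1              # else k := k + 1
    return None                # search failed

print(linear_search(4, [3, 2, 4, 1, 5]))  # 3
```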

Time complexity of algorithms
Binary search (search for x in a sorted list a1 < a2 < a3 < … < an)
Compare x with the middle element; if they differ, discard the half of the list that cannot contain x, and repeat until the element is found {or the search fails}.
How many operations? Roughly log n. Why? Each comparison halves the interval being searched, and a list of n elements can be halved only about log2 n times.
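A minimal Python sketch of binary search on a sorted list (0-indexed, which is idiomatic in Python; the function name is illustrative):

```python
def binary_search(x, a):
    """Search for x in sorted list a; return a 0-based index, or None on failure.
    Each iteration halves the interval, so at most about log2(n) iterations run."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == x:
            return mid
        elif a[mid] < x:
            lo = mid + 1       # x can only be in the upper half
        else:
            hi = mid - 1       # x can only be in the lower half
    return None                # search failed

print(binary_search(7, [1, 3, 5, 7, 9]))  # 3
```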

Bubble Sort
procedure bubblesort(A : list of items)
  n := length(A)
  repeat
    for i := 1 to n-1 do
      if A[i-1] > A[i] then swap(A[i-1], A[i])
      end if
    end for
    n := n - 1
  until n = 0
end procedure
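The same procedure in Python, sorting in place (the list is 0-indexed, matching the pseudocode's A[i-1], A[i] comparisons):

```python
def bubble_sort(a):
    """Sort list a in place by repeatedly bubbling the largest remaining element up."""
    n = len(a)
    while n > 0:                                   # repeat ... until n = 0
        for i in range(1, n):                      # for i := 1 to n-1
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]    # swap(A[i-1], A[i])
        n = n - 1
    return a

print(bubble_sort([3, 2, 4, 1, 5]))  # [1, 2, 3, 4, 5]
```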


Bubble Sort
3 2 4 1 5 [n = 5 items]
2 3 1 4 5 (first pass) n-1 operations
2 1 3 4 5 (second pass) n-2 operations
1 2 3 4 5 (third pass) n-3 operations
1 2 3 4 5 (fourth pass) 1 operation
The worst-case time complexity is (n-1) + (n-2) + (n-3) + … + 1 = n(n-1)/2

The Big-O notation
It is a measure of the growth of functions, often used to measure the complexity of algorithms.
DEF. Let f and g be functions from the set of integers (or real numbers) to the set of real numbers. Then f is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| for all x > k.
Intuitively, f(x) grows no faster than some multiple of g(x) as x grows without bound. Thus O(g(x)) defines an upper bound on the growth of f(x).

The Big-O notation
Example. x² + 2x + 1 = O(x²), since 4x² > x² + 2x + 1 whenever x > 1. Thus 4x² defines an upper bound on the growth of x² + 2x + 1.
(Figure: the graph of y = 4x² eventually stays above the graph of y = x² + 2x + 1.)
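A numeric spot check of the big-O witnesses C = 4, k = 1 from the example. This only samples points, so it can refute a claim but never prove one; the helper name `big_o_holds` is illustrative:

```python
def big_o_holds(f, g, C, k, xs):
    """Spot-check |f(x)| <= C*|g(x)| at sample points x > k (evidence, not a proof)."""
    return all(abs(f(x)) <= C * abs(g(x)) for x in xs if x > k)

f = lambda x: x**2 + 2*x + 1
g = lambda x: x**2

# witnesses C = 4, k = 1 from the slide
print(big_o_holds(f, g, 4, 1, range(2, 1000)))                  # True
# x^3 is NOT O(x^2): no sample range keeps the bound for these witnesses
print(big_o_holds(lambda x: x**3, g, 4, 1, range(2, 1000)))     # False
```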

The Big-Ω (omega) notation
DEF. Let f and g be functions from the set of integers (or real numbers) to the set of real numbers. Then f is Ω(g(x)) if there are constants C and k such that |f(x)| ≥ C|g(x)| for all x > k.
Example. 7x² + 9x + 4 is Ω(x²), since 7x² + 9x + 4 ≥ 1·x² for all x > 0. Thus Ω defines a lower bound on the growth of a function.
Question. Is 7x² + 9x + 4 Ω(x)?

The Big-Theta (Θ) notation
DEF. Let f and g be functions from the set of integers (or real numbers) to the set of real numbers. Then f is Θ(g(x)) if there are constants C1, C2 and a positive real number k such that C1·|g(x)| ≤ |f(x)| ≤ C2·|g(x)| for all x > k.
Example. 7x² + 9x + 4 is Θ(x²), since 1·x² ≤ 7x² + 9x + 4 ≤ 8·x² for all x > 10.

Average case performance
EXAMPLE. Compute the average-case complexity of the linear search algorithm.
a1 a2 a3 a4 a5 … an (search for x in this list)
If x is the 1st element, the search takes 5 steps.
If x is the 2nd element, it takes 8 steps.
If x is the ith element, it takes (3i + 2) steps.
So the average number of steps = 1/n [5 + 8 + … + (3n + 2)] = ?
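The average left as "?" above can be checked exactly in Python. Summing (3i + 2) for i = 1..n and dividing by n simplifies to (3n + 7)/2; the code below verifies that closed form against the direct sum (the function name is illustrative):

```python
from fractions import Fraction

def average_steps(n):
    """Exact average of (3i + 2) over i = 1..n, assuming x is equally likely
    to be at each position."""
    return Fraction(sum(3 * i + 2 for i in range(1, n + 1)), n)

# direct sum vs the simplified closed form (3n + 7) / 2
print(average_steps(10))           # 37/2
print(Fraction(3 * 10 + 7, 2))     # 37/2
```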

Classification of complexity
Complexity        Terminology
Θ(1)              constant complexity
Θ(log n)          logarithmic complexity
Θ((log n)^c)      poly-logarithmic complexity
Θ(n)              linear complexity
Θ(n^c)            polynomial complexity
Θ(b^n), b > 1     exponential complexity
Θ(n!)             factorial complexity
We also use these terms when Θ is replaced by O (big-O).

Exercise
Is the complexity of n^5 O(2^n)? True or false?
Is the complexity of 2^n O(n^5)? True or false?
Is the complexity of log(n!) Θ(n log n)? True or false?
Is the complexity of 1² + 2² + 3² + … + n² Ω(n³)? True or false?
Let S = {0, 1, 2, …, n}. Think of an algorithm that generates all the subsets of three elements from S, and compute its complexity in big-O notation.
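One possible approach to the last exercise, sketched in Python: S has n + 1 elements, so there are C(n+1, 3) three-element subsets, and enumerating them is O(n³):

```python
from itertools import combinations

def three_subsets(n):
    """All 3-element subsets of S = {0, 1, ..., n}.
    There are C(n+1, 3) of them, so the complexity is O(n^3)."""
    return list(combinations(range(n + 1), 3))

print(len(three_subsets(4)))  # C(5, 3) = 10
```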

Greedy Algorithms
In optimization problems, algorithms that make the best available choice at each step are called greedy algorithms.
Example. Devise an algorithm for making change for n cents using quarters, dimes, nickels, and pennies that uses the least total number of coins.

Greedy Change-making Algorithm
Let c1, c2, …, cr be the denominations of the coins (with c1 > c2 > … > cr).
for i := 1 to r
  while n ≥ ci
  begin
    add a coin of value ci to the change
    n := n - ci
  end
Question. Is this optimal? Does it use the least number of coins?
Let the coins be 1, 5, 10, 25 cents. For making 38 cents, you will use 1 quarter, 1 dime, and 3 pennies. The total count is 5, and it is optimal.
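The greedy change-making procedure above, sketched in Python (denominations passed largest first, as the algorithm assumes):

```python
def greedy_change(n, denominations=(25, 10, 5, 1)):
    """Make change for n cents greedily; denominations must be in decreasing order."""
    change = []
    for c in denominations:       # for i := 1 to r
        while n >= c:             # while n >= ci
            change.append(c)      # add a coin of value ci to the change
            n = n - c             # n := n - ci
    return change

print(greedy_change(38))            # [25, 10, 1, 1, 1] -- 5 coins, optimal
print(greedy_change(30, (25, 10, 1)))  # no nickel: 6 coins, but 3 dimes would do
```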

Greedy Change-making Algorithm
But if you don't use a nickel and you make change for 30 cents using the same algorithm, then you will use 1 quarter and 5 pennies (6 coins in total). But the optimum is 3 coins (use 3 dimes!).
So, greedy algorithms produce results, but the results may be sub-optimal.

Greedy Routing Algorithm
(Figure: a network of nodes, including A, B, and C.)
If you need to reach point B from point A in the fewest number of hops, which route will you take? If your knowledge is only local, you are tempted to use a greedy algorithm and reach B in 5 hops, although it is possible to reach B in only two hops.

Other classification of problems
Problems that have polynomial worst-case complexity are called tractable; otherwise they are called intractable.
Problems for which no algorithmic solution exists are known as unsolvable problems (like the halting problem); otherwise they are called solvable.
Many solvable problems are believed to have no polynomial-time solution, although a proposed solution, once known, can be checked in polynomial time. These belong to the class NP (as opposed to the class of tractable problems, which belong to class P).

Estimation of complexity
Source: D. Harel, Algorithmics: The Spirit of Computing, Addison-Wesley, 2nd edition, 1992.

The Halting Problem
The Halting problem asks: given a program and an input to the program, determine whether the program will eventually stop when it is given that input.
Run the program with that input. If the program stops, then we know it stops. But if the program doesn't stop in a reasonable amount of time, we cannot conclude that it won't stop. Maybe we didn't wait long enough!
The question is not decidable in general.

The Traveling Salesman Problem
Starting from a node, you have to visit every other node and return to your starting point. Find the shortest route. This problem is NP-complete.
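A brute-force sketch in Python that illustrates why exact TSP is expensive: it tries all orderings of the remaining nodes, which is factorial time. The distance matrix below is made up for illustration:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Return (length, tour) of a shortest round trip, trying all (n-1)! orderings.
    dist is an n x n symmetric distance matrix."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):      # fix node 0 as the starting point
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(tsp_brute_force(dist))  # (18, (0, 1, 3, 2, 0))
```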

3-Satisfiability Problem
Consider an expression like this:
(x1∨x2∨x3) ∧ (x4∨x5∨x6) ∧ (¬x2∨x4∨¬x7) ∧ (¬x4∨x8∨x3) ∧ (¬x2∨¬x4∨¬x9)
Does there exist an assignment of values to x1, x2, …, x9 so that this formula is true? This is an NP-complete problem!
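A brute-force satisfiability check in Python, trying all 2^n assignments (exponential time, which is why this does not settle P vs NP). Clauses are encoded as signed integers: literal k means xk and -k means ¬xk, a common convention (e.g. in DIMACS files), not something from the slides:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Return True if some True/False assignment to x1..xn satisfies every clause."""
    for bits in product([False, True], repeat=n_vars):
        value = {i + 1: b for i, b in enumerate(bits)}
        # a clause holds if at least one of its literals is true
        if all(any(value[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

# the formula from the slide, over x1..x9
formula = [[1, 2, 3], [4, 5, 6], [-2, 4, -7], [-4, 8, 3], [-2, -4, -9]]
print(satisfiable(formula, 9))  # True
```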