CS 263.  Classification of algorithm against a model pattern ◦ Each model demonstrate the performance scalability of an algorithm  Sorting algorithms.

 Classification of algorithms against a model pattern
◦ Each model demonstrates the performance scalability of an algorithm
 Sorting algorithms might have different model patterns
 Depending on the number of records sorted, one model might work better than another
 However, at some point an increase in records will incur high processing overhead
◦ A search of records assumes the record sought is the last to be found (or non-existent): a worst-case scenario
 Noted as O(N)
 A 25-record search takes five times as long as a 5-record search
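The worst-case search above can be sketched as a simple linear scan. This is a minimal sketch; the subroutine name and the sample records are illustrative, not from the slides:

```perl
# Linear search: in the worst case (target absent or in the last slot),
# every one of the N elements must be examined -- O(N).
sub linear_search {
    my ($target, @records) = @_;
    for my $i (0 .. $#records) {
        return $i if $records[$i] eq $target;   # hit: return its index
    }
    return -1;                                  # miss: N comparisons were made
}

my @records = qw(alpha beta gamma delta epsilon);
print linear_search('delta', @records), "\n";   # 3
print linear_search('omega', @records), "\n";   # -1 (worst case)
```

Searching 25 records this way does five times the comparisons of searching 5, which is exactly the linear growth O(N) describes.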

 The presumption is that we are talking about speed/performance
 We could also be concerned with other resources
◦ Memory utilization
◦ Disk thrashing
 Depending on the resource we are targeting, the algorithm might change
◦ Slower algorithm (time) vs. lower memory consumption
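The time-versus-memory trade-off can be illustrated with memoization, spending memory on a cache to avoid repeated work. This is a sketch under my own choice of example (a cached recursive Fibonacci); it is not from the slides:

```perl
# Trading memory for time: cache previously computed results.
# Without the cache this recursion repeats work exponentially;
# with it, each value is computed once at the cost of stored entries.
my %cache;    # the memory we spend to save time

sub fib {
    my ($n) = @_;
    return $n if $n < 2;                          # base cases
    $cache{$n} //= fib($n - 1) + fib($n - 2);     # compute only if not cached
    return $cache{$n};
}

print fib(30), "\n";   # 832040
```

Which side of the trade-off to take depends on whether time or memory is the scarce resource in your circumstances.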

 Constants and variables
◦ Algorithm 1: compare each array element against every other element

    for my $i (0 .. $#array) {
        for my $j (0 .. $#array) {
            next if $j == $i;
            # Compare $i, $j
        }
    }

 O(N^2)
◦ Algorithm 2: optimized to cut the run time in half by comparing each pair only once

    for my $i (0 .. $#array - 1) {
        for my $j ($i + 1 .. $#array) {
            # Compare $i, $j
        }
    }

 The notation is NOT O(N^2/2), but still O(N^2)
◦ The "divide by 2" is a constant: it stays the same regardless of input size, so it is dropped
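The "cuts run time in half" claim can be checked by counting comparisons. This sketch (the counter variables are mine, not the slides') shows that for N elements the first loop performs N(N-1) comparisons and the second N(N-1)/2, yet both grow as N^2:

```perl
my @array = (1 .. 10);              # N = 10
my ($count1, $count2) = (0, 0);

# Algorithm 1: every element against every other element.
for my $i (0 .. $#array) {
    for my $j (0 .. $#array) {
        next if $j == $i;
        $count1++;                  # one comparison
    }
}

# Algorithm 2: each unordered pair compared exactly once.
for my $i (0 .. $#array - 1) {
    for my $j ($i + 1 .. $#array) {
        $count2++;
    }
}

print "$count1 vs $count2\n";       # 90 vs 45: half the work, same O(N^2) growth
```

Doubling N roughly quadruples both counts, which is why the constant factor of 1/2 does not change the classification.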

 Big O will not care if you purchase more RAM
◦ It's THEORY
 So why bother?
◦ It serves as an indicator of which algorithm to use once you consider your circumstances
 Big O describes the "limiting behavior" of a function
◦ An upper bound on performance
◦ Big O is also referred to as
 Landau notation
 Bachmann-Landau notation
 Asymptotic notation

 O(1): no growth curve
◦ Performance is independent of the size of the data set
 O(N)
◦ Performance is directly proportional to the size of the data set
 O(N+M)
◦ Two data sets combined determine the performance
 O(N^2)
◦ Each element of a set must be processed against all the others; bubble sorts are in this category
 O(N*M)
◦ Each element of one data set is processed against each element of another data set
 Example: a set of regular expressions processed against a text file
 O(N^3)
◦ Nested looping going on here…
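The O(N*M) case above can be sketched directly from the regular-expression example. The patterns and input lines here are illustrative stand-ins:

```perl
# O(N*M): N lines of input, M patterns -- every pattern is run
# against every line, so the work is the product of the two sizes.
my @lines    = ("error: disk full", "ok", "warn: low memory", "error: timeout");
my @patterns = (qr/error/, qr/warn/);

my $matches = 0;
for my $line (@lines) {             # N iterations
    for my $re (@patterns) {        # M iterations per line
        $matches++ if $line =~ $re;
    }
}
print "$matches\n";                 # 3 matching (line, pattern) pairs
```

Doubling either the number of lines or the number of patterns doubles the total work.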

 O(log N) and O(N log N)
◦ The data set is "iteratively partitioned" (example: a balanced binary tree)
 Unbalanced trees are O(N^2) to build and O(N) to search
◦ The O(log N) refers to the number of times you can partition a set in half iteratively
 log2(N) grows slowly (doubling N has a small effect) and the curve flattens out
◦ Building the tree is the more expensive part
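The "number of times you can partition a set in half" is exactly what O(log N) counts, and it can be sketched in a few lines (the subroutine is illustrative):

```perl
# How many times can N items be cut in half before one remains?
# That count is log2(N) -- the depth of a balanced binary tree.
sub halvings {
    my ($n) = @_;
    my $steps = 0;
    while ($n > 1) {
        $n = int($n / 2);   # discard half the remaining set
        $steps++;
    }
    return $steps;
}

print halvings(8), " ", halvings(16), " ", halvings(1024), "\n";   # 3 4 10
```

Note how doubling N (8 to 16) adds only a single step, while growing N by a factor of 64 (16 to 1024) adds just six: this is the flattening curve the slide describes.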

 Scaling order
◦ O(1)
◦ O(log N)
◦ O(N)
◦ O(N log N)
◦ O(N^2)
◦ O(2^N)
 Efficiency is NOT the same as scalability
◦ A well-coded O(N^2) algorithm might outperform a poorly coded O(N log N) one, but at some point their performance curves will cross
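The crossover between a well-coded O(N^2) routine and a poorly coded O(N log N) one can be modelled by attaching constant factors to each. The constants 1 and 100 below are my own illustrative assumptions, not measurements:

```perl
# Model: a tight quadratic routine costs 1 * N^2 steps,
# while a sloppy N log N routine costs 100 * N * log2(N) steps.
sub quadratic { my $n = shift; return $n ** 2 }
sub nlogn     { my $n = shift; return 100 * $n * log($n) / log(2) }

for my $n (10, 100, 10_000) {
    my $winner = quadratic($n) < nlogn($n) ? "N^2" : "N log N";
    printf "N=%-6d N^2=%.0f  NlogN=%.0f  winner: %s\n",
           $n, quadratic($n), nlogn($n), $winner;
}
```

With these constants the quadratic routine wins for small N, but somewhere near N = 1000 the curves cross and O(N log N) wins from then on, no matter how the constants are chosen.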

 Measure the performance of an algorithm as the number of steps performed approaches infinity
◦ Beyond some input size, the algorithm with the greater asymptotic running time will always take longer to execute than the algorithm with the shorter running time
 Example:
◦ Bubble sort uses nested loops
 Running time is O(N^2)
◦ Merge sort divides the array into halves, sorts each half, and then merges the halves
 Running time is O(N log2 N)
◦ While merge sort has the shorter asymptotic running time, for smaller arrays bubble sort can be more efficient
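The merge sort described above can be sketched recursively. This is a minimal teaching sketch; in practice Perl's built-in sort (itself O(N log N)) would be used:

```perl
# Merge sort: divide the array in half, sort each half recursively,
# then merge the two sorted halves -- O(N log2 N) overall.
sub merge_sort {
    my @a = @_;
    return @a if @a <= 1;                        # base case: already sorted
    my $mid   = int(@a / 2);
    my @left  = merge_sort(@a[0 .. $mid - 1]);   # sort left half
    my @right = merge_sort(@a[$mid .. $#a]);     # sort right half
    my @merged;
    while (@left && @right) {                    # merge: take smaller head
        push @merged, ($left[0] <= $right[0]) ? shift @left : shift @right;
    }
    return (@merged, @left, @right);             # one side may have leftovers
}

print join(" ", merge_sort(5, 2, 9, 1, 7)), "\n";   # 1 2 5 7 9
```

The halving gives the log2 N levels of recursion, and the merge does O(N) work per level, which is where O(N log2 N) comes from.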

 Misconception: an algorithm that works on a small project will scale up when the data increases
 If an algorithm of the O(N^2) type works fine today, the coding complication of switching the routine to an O(N log N) algorithm might not seem worth it until the data grows

# O(n^2): bubble sort, full inner pass every time
for my $i (0 .. $#a - 1) {
    for (0 .. $#a - 1) {
        ($a[$_], $a[$_+1]) = ($a[$_+1], $a[$_]) if ($a[$_+1] < $a[$_]);
    }
}

# Optimized: looping over already-sorted items is not needed.
# This halves the comparisons, but the sort is still O(n^2);
# reaching O(n log(n)) requires a different algorithm, such as merge sort.
for my $i (0 .. $#a - 1) {
    for (0 .. $#a - 1 - $i) {
        ($a[$_], $a[$_+1]) = ($a[$_+1], $a[$_]) if ($a[$_+1] < $a[$_]);
    }
}

 Searching via a loop
◦ O(N)
◦ Breaking out of the loop once the "hit" is found helps the average case, but the worst case is still O(N)
 Reduce the time to O(log N)
◦ Use a binary search on sorted data, halving the search range at each step
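The O(log N) reduction comes from binary search on sorted data, which discards half of the remaining range at every probe instead of scanning. A minimal sketch (subroutine name and sample data are illustrative):

```perl
# Binary search on a SORTED array: halve the search space each iteration,
# so at most ~log2(N) probes are needed.
sub binary_search {
    my ($target, @sorted) = @_;
    my ($lo, $hi) = (0, $#sorted);
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        return $mid if $sorted[$mid] == $target;          # hit
        if ($sorted[$mid] < $target) { $lo = $mid + 1 }   # discard left half
        else                         { $hi = $mid - 1 }   # discard right half
    }
    return -1;   # not found
}

my @sorted = (3, 8, 15, 23, 42, 77, 99);
print binary_search(42, @sorted), "\n";   # 4
print binary_search(50, @sorted), "\n";   # -1
```

For the 7-element array above, at most 3 probes are ever needed, versus up to 7 for the loop-based scan; the gap widens dramatically as N grows.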