1
CS 263
2
Classification of an algorithm against a model pattern
◦ Each model demonstrates the performance scalability of an algorithm
Sorting algorithms might fit different model patterns
◦ Depending on the number of records sorted, one model might work better than another
◦ However... past some point, an increase in records invokes high processing overhead
A search of records assumes that the record is the last to be found (or non-existent), the worst-case scenario
◦ Noted as O(N)
◦ A 25-record search would take five times as long as a 5-record search (see the sketch below)
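A minimal sketch of that worst-case linear search, assuming an array of records and a target key (the names and sample data are illustrative, not from the slides); when the record is missing, every element is examined, so the work grows in direct proportion to the number of records:

    use strict;
    use warnings;

    # Linear search: the worst case examines every record, so it is O(N).
    sub linear_search {
        my ($target, @records) = @_;
        for my $i (0 .. $#records) {
            return $i if $records[$i] eq $target;   # found it
        }
        return -1;                                  # not found: the true worst case
    }

    my @records = qw(alpha bravo charlie delta echo);
    printf "found at index %d\n", linear_search('echo', @records);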
3
The presumption is that we're talking about speed/performance
We could also be concerned with other resources
◦ Memory utilization
◦ Disk thrashing
Depending on the resource we're targeting, the algorithm might change
◦ Slower algorithm (time) vs. memory consumption (see the caching sketch below)
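One common way the time-vs-memory trade-off shows up is caching: a sketch (the sub name and workload are made up for illustration) that spends memory on a hash of previously computed results so the CPU does not repeat the work:

    use strict;
    use warnings;

    my %cache;   # extra memory spent to save CPU time

    # Hypothetical expensive computation; the cache trades RAM for speed.
    sub expensive {
        my ($n) = @_;
        return $cache{$n} //= do {
            my $total = 0;
            $total += $_ for 1 .. $n;   # stand-in for real work
            $total;
        };
    }

    print expensive(1_000_000), "\n";   # computed once
    print expensive(1_000_000), "\n";   # answered from the cache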
4
Constants and variables
◦ Algorithm 1: compare each array element against every other element

    for my $i (0 .. $#array) {
        for my $j (0 .. $#array) {
            next if $j == $i;
            # Compare $i, $j
        }
    }

  O(N^2)
◦ Algorithm 2: optimized algorithm, cuts the run time in half

    for my $i (0 .. $#array - 1) {
        for my $j ($i + 1 .. $#array) {
            # Compare $i, $j
        }
    }

  The notation is NOT O(N^2/2), but still O(N^2)
◦ The "divide by 2" remains constant regardless of input size
5
Big O will not care if you purchase more RAM
◦ It's THEORY
So why bother?
◦ It serves as an indicator of which algorithm to use when you consider your circumstances
Big O describes the "limiting behavior" of a function
◦ An upper bound of performance
◦ Big O is also referred to as:
    Landau notation
    Bachmann-Landau notation
    Asymptotic notation
6
O(1): no growth curve
◦ Performance is independent of the size of the data set
O(N)
◦ Performance is directly proportional to the size of the data set
O(N+M)
◦ Two data sets are combined, and their combined size determines performance
O(N^2)
◦ Each element of a set needs to be processed against all the others; bubble sorts are in this category
O(N*M)
◦ Each element of one data set is processed against each element of another data set
◦ Example: a set of regular expressions needs to be processed against a text file (see the sketch below)
O(N^3)
◦ Even deeper nested looping going on here...
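A sketch of the O(N*M) case: every one of M patterns is tried against every one of N lines. The file name and patterns here are invented for illustration:

    use strict;
    use warnings;

    # N lines x M patterns => O(N*M) match attempts in total.
    my @patterns = (qr/error/i, qr/warn/i, qr/timeout/i);    # M patterns (illustrative)

    open my $fh, '<', 'server.log' or die "server.log: $!";  # hypothetical input file
    while (my $line = <$fh>) {                               # N lines
        for my $re (@patterns) {                             # M patterns per line
            print $line and last if $line =~ $re;            # report on first match
        }
    }
    close $fh;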
7
O(log N) and O(N log N)
◦ The data set is "iteratively partitioned" (example: a balanced binary tree)
    Unbalanced trees are O(N^2) to build and O(N) to search
◦ The O(log N) refers to the number of times you can partition a set in half iteratively (see the sketch below)
    log2(N) grows slowly (doubling N has a small effect) and the curve flattens out
◦ Building the tree is more expensive than searching it
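A tiny sketch of why that partitioning count is log2(N): repeatedly halving the set until one element remains takes only about 20 steps for a million elements:

    use strict;
    use warnings;

    # How many times can a set of $n items be cut in half? About log2($n).
    my $n      = 1_000_000;
    my $halves = 0;
    while ($n > 1) {
        $n = int($n / 2);
        $halves++;
    }
    print "halvings: $halves\n";   # prints 19 for 1,000,000 (log2 is about 19.93)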
8
Scaling order (best to worst)
◦ O(1)
◦ O(log N)
◦ O(N)
◦ O(N log N)
◦ O(N^2)
◦ O(2^N)
Efficiency is NOT the same as scalability
◦ A well-coded O(N^2) algorithm might outperform a poorly coded O(N log N) one, but at some point their performance curves will cross (see the benchmark sketch below)
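Because efficiency and scalability are different things, the only way to find the crossover point for a given input size is to measure. A sketch using Perl's core Benchmark module; the two routines are stand-ins for whatever implementations are actually being compared:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @data = map { int rand 10_000 } 1 .. 2_000;

    # Simple O(N^2) stand-in; swap in the real routine being measured.
    sub insertion_sort {
        my @a = @_;
        for my $i (1 .. $#a) {
            my $j = $i;
            while ($j > 0 && $a[$j - 1] > $a[$j]) {
                @a[$j - 1, $j] = @a[$j, $j - 1];
                $j--;
            }
        }
        return @a;
    }

    # Run each for about 2 CPU seconds and print a comparison table.
    cmpthese(-2, {
        quadratic => sub { my @s = insertion_sort(@data) },
        n_log_n   => sub { my @s = sort { $a <=> $b } @data },
    });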
9
Measure the performance of an algorithm as the number of steps performed approaches infinity
◦ Once that number of steps is exceeded, the algorithm with the greater running time will always take longer to execute than the algorithm with the shorter running time
Example:
◦ Bubble sort uses nested loops
    Running time is O(N^2)
◦ Merge sort divides the array into halves, sorts each half, and then merges the halves (see the sketch below)
    Running time is O(N log2 N)
◦ While the merge sort has the shorter asymptotic running time, for smaller arrays the bubble sort will be more efficient
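A compact sketch of the merge sort just described (divide, sort each half, merge), written in the same style as the other Perl examples in this deck rather than taken from them:

    use strict;
    use warnings;

    # Merge sort: split the array in half, sort each half, merge: O(N log N).
    sub merge_sort {
        my @a = @_;
        return @a if @a <= 1;
        my $mid   = int(@a / 2);
        my @left  = merge_sort(@a[0 .. $mid - 1]);
        my @right = merge_sort(@a[$mid .. $#a]);

        my @merged;
        while (@left && @right) {
            push @merged, ($left[0] <= $right[0]) ? shift @left : shift @right;
        }
        return (@merged, @left, @right);   # one of @left/@right is already empty
    }

    my @sorted = merge_sort(5, 3, 8, 1, 9, 2);
    print "@sorted\n";   # 1 2 3 5 8 9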
10
Misconception: an algorithm that works on a small project will scale up when the data increases
◦ If an O(N^2) routine works fine today, the coding complication of switching it to an O(N log N) algorithm may not seem worth the effort
11
    # O(n^2): bubble sort, comparing every adjacent pair on every pass
    for my $i (0 .. $#a - 1) {
        for (0 .. $#a - 1) {
            ($a[$_], $a[$_ + 1]) = ($a[$_ + 1], $a[$_]) if $a[$_ + 1] < $a[$_];
        }
    }

    # Still O(n^2), not O(n log n), but roughly half the comparisons:
    # looping over already-sorted items is not needed
    for my $i (0 .. $#a - 1) {
        for (0 .. $#a - 1 - $i) {
            ($a[$_], $a[$_ + 1]) = ($a[$_ + 1], $a[$_]) if $a[$_ + 1] < $a[$_];
        }
    }
12
Searching via a loop
◦ O(N)
◦ Breaking out of the loop once the "hit" is found helps the average case, but the worst case is still O(N)
Reduce the time to O(log N)
◦ Partition the data instead: a binary search on sorted data cuts the remaining set in half at each step (see the sketch below)
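A sketch contrasting the two approaches; the sub names and sample data are illustrative. The loop exits early on a hit but is still O(N) in the worst case, while the binary search on a sorted array does the O(log N) halving described on the earlier slide:

    use strict;
    use warnings;

    # O(N): scan the array, but break out as soon as the "hit" is found.
    sub find_linear {
        my ($target, @sorted) = @_;
        for my $i (0 .. $#sorted) {
            return $i if $sorted[$i] == $target;   # early exit on the hit
        }
        return -1;
    }

    # O(log N): binary search repeatedly halves a *sorted* array.
    sub find_binary {
        my ($target, @sorted) = @_;
        my ($lo, $hi) = (0, $#sorted);
        while ($lo <= $hi) {
            my $mid = int(($lo + $hi) / 2);
            return $mid if $sorted[$mid] == $target;
            $sorted[$mid] < $target ? ($lo = $mid + 1) : ($hi = $mid - 1);
        }
        return -1;
    }

    my @sorted = (2, 5, 9, 14, 21, 30, 44);
    print find_linear(21, @sorted), " ", find_binary(21, @sorted), "\n";   # 4 4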