Lecture 9 CS203
Math Ahead! The rest of this lecture uses a few math principles that you learned in high school but may have forgotten. Do not worry (too much) if your math background is shaky: we introduce the mathematical material at a gentle pace. When I started working on my MS here, I had not taken a math class in 25 years, and I managed to learn this material. You can too. On the other hand, if you want to study this material in more detail, you will not be disappointed. You just have to wait until you take CS312.
Summations Summation is the operation of adding a sequence of numbers; the result is their sum or total. A summation is designated with the Greek letter sigma (∑). For example, ∑_{i=1}^{100} i means "the sum of all integers between 1 and 100 inclusive": start from i = 1, iterate through successive values of i, and stop at 100.
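A summation maps directly onto a loop. Here is a minimal Java sketch of the summation above (the class and method names are mine):

```java
public class SummationDemo {
    // Computes the sum of all integers from lo to hi inclusive,
    // i.e. the summation with i running from lo to hi.
    public static int sum(int lo, int hi) {
        int total = 0;
        for (int i = lo; i <= hi; i++) { // iterate through values of i
            total += i;                  // accumulate the running sum
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(1, 100)); // prints 5050
    }
}
```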
Useful Mathematical Summations Two summations come up constantly in algorithm analysis:
∑_{i=1}^{n} i = 1 + 2 + ... + (n-1) + n = n(n+1)/2
∑_{i=0}^{n} a^i = a^0 + a^1 + ... + a^n = (a^(n+1) - 1)/(a - 1)
The first summation on the previous slide would usually be expressed this way: ∑_{i=1}^{n} i = n(n+1)/2. For n = 100, the value of the summation is 5050, since 100(101)/2 = 10100/2 = 5050.
Logarithms The logarithm of a number is the exponent to which another number, the base, must be raised to yield that number. Here is the notation: log_b(y) = x, where b is the base. The parentheses are usually left out in practice. Examples: log_2 8 = 3, log_10 10,000 = 4. The word "logarithm" was coined (by an early-modern mathematician) from Greek roots and means roughly "number reasoning." Interestingly (to me, anyway), it is completely unrelated to its anagram "algorithm," which is derived from a Latin version of an Arabic version of the name of the medieval Persian mathematician al-Khwarizmi.
Logarithms In other fields, log without a stated base is understood to refer to log_10 or log_e. In CS, log without further qualification is understood to refer to log_2, pronounced "log base 2" or "binary logarithm." The base is not important in comparing algorithms, but, as you will see, the base is almost always 2 when we are calculating the complexity of algorithms.
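Java's Math class provides the natural logarithm (Math.log) and Math.log10, but no built-in binary logarithm, so log base 2 is usually computed with the change-of-base formula. A small sketch (the log2 helper is mine):

```java
public class LogDemo {
    // Binary logarithm via the change-of-base formula: log2(y) = ln(y) / ln(2)
    public static double log2(double y) {
        return Math.log(y) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.println(log2(8));           // approx. 3.0, since 2^3 = 8
        System.out.println(Math.log10(10000)); // 4.0, since 10^4 = 10,000
    }
}
```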
Recurrence Relations A recurrence relation is a rule by which a sequence is generated. E.g., the sequence 5, 8, 11, 14, 17, 20, ... is described by the recurrence relation a_0 = 5; a_n = a_(n-1) + 3. Divide-and-conquer algorithms are often described in terms of recurrence relations.
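A recurrence can be turned into code by computing each term from the previous one. A sketch (names are mine):

```java
public class RecurrenceDemo {
    // Generates the first n terms of the sequence defined by
    // a_0 = 5 and a_n = a_(n-1) + 3.
    public static int[] terms(int n) {
        int[] a = new int[n];
        a[0] = 5;                // base case
        for (int i = 1; i < n; i++) {
            a[i] = a[i - 1] + 3; // recurrence step
        }
        return a;
    }

    public static void main(String[] args) {
        // prints [5, 8, 11, 14, 17, 20]
        System.out.println(java.util.Arrays.toString(terms(6)));
    }
}
```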
Analyzing Binary Search Binary search searches an array or list that is *sorted*. In each step, the algorithm compares the search key with the key of the middle element of the array. If the keys match, then a matching element has been found and its index, or position, is returned. Otherwise, if the search key is less than the middle element's key, the algorithm repeats its action on the sub-array to the left of the middle element or, if the search key is greater, on the sub-array to the right. If at any step the remaining sub-array to be searched is empty, the key cannot be found in the array and a special "not found" indication is returned.
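The steps above can be sketched in Java as follows (an iterative version; class and method names are mine):

```java
public class BinarySearchDemo {
    // Returns the index of key in the *sorted* array a, or -1 if not found.
    public static int binarySearch(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low <= high) {          // while the remaining range is not empty
            int mid = (low + high) / 2;       // middle element of the range
            if (a[mid] == key) return mid;    // keys match: return the index
            else if (key < a[mid]) high = mid - 1; // search the left sub-array
            else low = mid + 1;                    // search the right sub-array
        }
        return -1; // remaining range is empty: "not found"
    }

    public static void main(String[] args) {
        int[] a = {2, 4, 7, 10, 11, 45, 50, 59, 60, 66, 69, 70, 79};
        System.out.println(binarySearch(a, 11)); // prints 4
        System.out.println(binarySearch(a, 12)); // prints -1
    }
}
```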
Logarithm: Analyzing Binary Search Each iteration of binary search performs a fixed number of operations, denoted by c. Let T(n) denote the time complexity of a binary search on a list of n elements. Since we are studying the rate of growth of execution time, we define T(1) to equal 1. Assume n is a power of 2; this makes the math simpler, and if it is not true, the difference is trivial. Let k = log n; in other words, n = 2^k. Since binary search eliminates half of the input after at most two comparisons, we get the CS-style recurrence relation:
T(n) = T(n/2) + c, with T(1) = 1
Expanding it: T(n) = T(n/4) + 2c = ... = T(1) + kc = 1 + c log n.
Logarithmic Time Ignoring constants and smaller terms, the complexity of the binary search algorithm is O(log n). An algorithm with O(log n) time complexity is called a logarithmic algorithm. The base of the log is 2, but the base does not affect a logarithmic growth rate, so it can be omitted. The time to execute a logarithmic algorithm grows slowly as the problem size increases. If you square the input size, the time taken only doubles, since log(n^2) = 2 log n.
Sorting Sorting is a classic subject in computer science. There are three reasons for studying sorting algorithms. First, sorting algorithms illustrate many creative approaches to problem solving that can be applied to other problems. Second, sorting algorithms are good for practicing fundamental programming techniques using selection statements, loops, methods, and arrays. Third, sorting algorithms are excellent examples to demonstrate algorithm performance.
Sorting These sorting algorithms apply to sorting any type of object, as long as we can find a way to order them. For simplicity, though, we will first sort numeric values, then more complex objects. When we sort more complex objects, we will sort them by some key. For example, if class Student has instance variables representing CIN, GPA, and name, we will sort according to one of these or some combination of them. We have already done this with the priority queue and other examples.
Sorting Arrays and lists are reference types; that is, the variable we pass between methods does not contain the array or list, but a reference to it. Therefore, you can sort an array or list in Java with a void method that takes the reference variable and sorts the elements without returning anything. All other references to the data structure can still be used to access the sorted structure. On the other hand, you can copy all the elements, construct a new sorted list, and return it. This practice may become more common in the near future for reasons you will learn about in a few weeks.
Bubble Sort Recall that bubble sort repeatedly iterates through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. This iteration is repeated until no swaps are needed, which indicates that the list is sorted. The algorithm gets its name from the way smaller elements "bubble" to the top of the list. (Text adapted from Wikipedia.)
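As a concrete sketch of the description above (class and method names are mine):

```java
public class BubbleSortDemo {
    // Repeatedly sweeps the array, swapping adjacent out-of-order pairs,
    // until a full pass makes no swaps (i.e., the array is sorted).
    public static void bubbleSort(int[] a) {
        boolean swapped = true;
        int lastUnsorted = a.length - 1; // the largest keys settle at the right end
        while (swapped) {
            swapped = false;
            for (int i = 0; i < lastUnsorted; i++) {
                if (a[i] > a[i + 1]) { // adjacent pair in the wrong order: swap
                    int tmp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = tmp;
                    swapped = true;
                }
            }
            lastUnsorted--; // the largest remaining key has bubbled into place
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 4, 10, 11, 9, 8, 1};
        bubbleSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 4, 5, 8, 9, 10, 11]
    }
}
```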
Bubble Sort The number of comparisons is always at least as large as the number of swaps, so in studying the time complexity we count the comparisons. The largest key always floats into its correct (rightmost) position in the first pass, the next largest rises into the next position in the next pass, and so on.
Bubble Sort In the best case, the data is already sorted, so we only need one pass, making bubble sort O(n) in this case. More importantly, though, in the worst case the number of comparisons is:
(n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2 = n^2/2 - n/2
Recall that we are estimating the effect of growth in n, not the exact number of CPU cycles we need. We do not care about the division by 2, since this does not affect the rate of growth. As n increases, the lower-order term n/2 is dominated by the n^2 term, so we also disregard it. Bubble sort time: O(n^2).
Selection Sort and Insertion Sort These two sorts grow a sorted sublist one element at a time. Selection sort finds the lowest value and moves it to the bottom (or finds the largest element and moves it to the top), then repeats the process for the rest of the values until the list is sorted. Insertion sort takes one value at a time and places it in the correct spot in the sorted sublist, in the same way most people would sort a hand of cards.
Analyzing Selection Sort The number of comparisons in selection sort is n-1 for the first iteration, n-2 for the second iteration, and so on. Let T(n) denote the complexity of selection sort and c denote the total number of other operations in each iteration. So,
T(n) = (n-1) + c + (n-2) + c + ... + 1 + c = n(n-1)/2 + c(n-1)
Ignoring constants and smaller terms, the complexity of selection sort is O(n^2).
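A minimal selection sort sketch matching this analysis (names are mine); the inner loop performs the n-1, n-2, ... comparisons counted above:

```java
public class SelectionSortDemo {
    // On the i-th pass, finds the smallest element of a[i..n-1]
    // and swaps it into position i, growing a sorted prefix.
    public static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) { // n-1-i comparisons on this pass
                if (a[j] < a[min]) min = j;
            }
            int tmp = a[i];   // swap the smallest remaining element into place
            a[i] = a[min];
            a[min] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 4, 10, 11, 9, 8, 1};
        selectionSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 4, 5, 8, 9, 10, 11]
    }
}
```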
Analyzing Insertion Sort Where selection sort always inserts an element at the end of the sorted sublist, insertion sort may insert an element anywhere in it. At the kth iteration, inserting an element into a sorted sublist of size k may take k comparisons to find the insertion position and, if the structure we are sorting is an array, k moves to insert the element. Let T(n) denote the complexity of insertion sort and c denote the total number of other operations, such as assignments, in each iteration. So,
T(n) = (2(1) + c) + (2(2) + c) + ... + (2(n-1) + c) = n(n-1) + c(n-1)
The first two terms are just twice the values for selection sort. Ignoring constants and smaller terms, the complexity of the insertion sort algorithm is O(n^2).
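A minimal insertion sort sketch (names are mine); the while loop performs the comparisons and moves counted above:

```java
public class InsertionSortDemo {
    // Takes each element in turn and shifts it left into its correct
    // spot in the sorted prefix, like sorting a hand of cards.
    public static void insertionSort(int[] a) {
        for (int k = 1; k < a.length; k++) {
            int current = a[k];
            int j = k - 1;
            while (j >= 0 && a[j] > current) { // up to k comparisons...
                a[j + 1] = a[j];               // ...and up to k moves
                j--;
            }
            a[j + 1] = current; // drop the element into its position
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 4, 10, 11, 9, 8, 1};
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 4, 5, 8, 9, 10, 11]
    }
}
```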
Quadratic Time An algorithm with O(n^2) time complexity is called a quadratic algorithm. Algorithms with nested loops are often quadratic. A quadratic algorithm's expense grows quickly as the problem size increases. If you double the input size, the time for the algorithm quadruples.
Sorting We often teach bubble sort, selection sort, and insertion sort first because they are easy to understand. Other sort methods are more efficient in the average and worst cases. In particular, there are sort algorithms with O(n log n) complexity; in other words, the consumption of CPU cycles grows proportionally to n times the log of n. These algorithms usually involve performing an operation that is O(log n) n times. Since log n grows much more slowly than n, this is a dramatic improvement over O(n^2):
If n = 2, n^2 = 4 and n log n = 2
If n = 100, n^2 = 10,000 and n log n ≈ 664
If n = 10,000, n^2 = 100,000,000 and n log n ≈ 132,877
Merge Sort Merge sort is a divide-and-conquer algorithm, like the Towers of Hanoi algorithm.
mergeSort(list):
    if list has fewer than two elements, it is already sorted: return
    split list into firstHalf and secondHalf
    firstHalf = mergeSort(firstHalf)
    secondHalf = mergeSort(secondHalf)
    list = merge(firstHalf, secondHalf)
merge(firstHalf, secondHalf):
    repeatedly add the lesser of firstHalf[0] and secondHalf[0] to the new, larger list
    until one of the (already sorted) sublists is exhausted
    then add the rest of the remaining sublist to the larger list
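The pseudocode above can be fleshed out in Java roughly as follows (a sketch, names mine; this version copies subarrays rather than sorting in place, trading memory for simplicity):

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Divide and conquer: sort each half recursively, then merge.
    public static int[] mergeSort(int[] list) {
        if (list.length <= 1) return list; // a 0- or 1-element list is sorted
        int mid = list.length / 2;
        int[] firstHalf = mergeSort(Arrays.copyOfRange(list, 0, mid));
        int[] secondHalf = mergeSort(Arrays.copyOfRange(list, mid, list.length));
        return merge(firstHalf, secondHalf);
    }

    // Repeatedly take the lesser front element of the two sorted sublists;
    // when one is exhausted, copy the rest of the other.
    private static int[] merge(int[] first, int[] second) {
        int[] out = new int[first.length + second.length];
        int i = 0, j = 0, k = 0;
        while (i < first.length && j < second.length) {
            out[k++] = (first[i] <= second[j]) ? first[i++] : second[j++];
        }
        while (i < first.length) out[k++] = first[i++];
        while (j < second.length) out[k++] = second[j++];
        return out;
    }

    public static void main(String[] args) {
        int[] sorted = mergeSort(new int[]{5, 4, 10, 11, 9, 8, 1});
        System.out.println(Arrays.toString(sorted)); // [1, 4, 5, 8, 9, 10, 11]
    }
}
```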
Merge Sort (diagram)
Merge Two Sorted Lists (diagram)
Merge Sort Time Assume n is a power of 2; this assumption makes the math simpler, and if n is not a power of 2, the difference is trivial. Merge sort splits the list into two sublists, sorts the sublists using the same algorithm recursively, and then merges them. Each recursive call merge sorts half the list, so the depth of the recursion is the number of times you need to halve n to get lists of size 1; this is log n. The single-item lists are, obviously, sorted. Merge sort reassembles the list in log n steps, just as it broke the list down. Across all the sublists at one level of recursion, merging takes at most n-1 comparisons to compare the elements from the two subarrays and n moves to move elements into the new array; the total size of all the sublists at one level is n, the original size of the unsorted list. The total merge time is 2n-1, which is O(n), and this happens log n times. Thus, merge sort is O(n log n).
Merge Sort Time Here it is again, but with more math. Let T(n) denote the time required for sorting an array of n elements using merge sort. Without loss of generality, assume n is a power of 2. The merge sort algorithm splits the array into two subarrays, sorts the subarrays using the same algorithm recursively, and then merges the subarrays. So,
T(n) = T(n/2) + T(n/2) + (merge time)
The first T(n/2) is the time for sorting the first half of the array and the second T(n/2) is the time for sorting the second half.
Merge Sort Time To merge two subarrays takes at most n-1 comparisons to compare the elements from the two subarrays and n moves to move elements to the temporary array. So the merge time is 2n-1, and
T(n) = 2T(n/2) + 2n - 1
Expanding, with k = log n (so 2^(log n) = n) and T(1) = 1:
T(n) = 2^k T(1) + ∑_{i=0}^{k-1} (2n - 2^i) = n + 2n log n - (2^k - 1)
The geometric sum has a = 2, so a - 1 = 1 and ∑_{i=0}^{k-1} 2^i = 2^k - 1 = n - 1. It is a subtractive term, so the -1 from the summation becomes +1:
T(n) = n + 2n log n - n + 1 = 2n log n + 1 = O(n log n)
Quick Sort Quick sort, developed by C. A. R. Hoare (1962), works as follows: Select an element, called the pivot, in the array. Divide the array into two parts such that all the elements in the first part are less than or equal to the pivot and all the elements in the second part are greater than the pivot. Recursively apply the quick sort algorithm to the first part and then the second part.
Quick Sort
function quicksort(a)
    // an array of zero or one elements is already sorted
    if length(a) ≤ 1 return a
    select and remove a pivot element pivot from a
    create empty lists less and greater
    for each x in a
        if x ≤ pivot then append x to less
        else append x to greater
    // two recursive calls
    return concatenate(quicksort(less), list(pivot), quicksort(greater))
The earliest version of quicksort used the first index as the pivot, and demos of quicksort often still do this for simplicity. However, on an already-sorted array this causes the worst-case O(n^2) behavior. The middle index is safer, although more complex solutions to this problem (such as choosing the pivot randomly) also exist.
Quick Sort How to accomplish the partition in a single array: search from the beginning of the list for the first element greater than the pivot, and from the end for the last element not greater than the pivot. When both are found, swap them. Continue until the list is partitioned. Finally, swap the last element in the left partition with the pivot.
package demos;

public class Demo {
    // http://www.mycstutorials.com/articles/sorting/quicksort
    public void quickSort(int array[]) {
        for (int x : array) System.out.print(x + " ");
        System.out.println();
        quickSort(array, 0, array.length - 1);
    }

    public void quickSort(int array[], int start, int end) {
        int i = start; // index of left-to-right scan
        int k = end;   // index of right-to-left scan

        if (end - start >= 1) // check that there are at least two elements
        {
            int pivot = array[start]; // set the first element as pivot
            System.out.println("pivot " + pivot);
            System.out.println("i: " + i + " k: " + k);

            while (k > i) // while the scan indices have not met
            {
                while (array[i] <= pivot && i <= end && k > i) {
                    i++; // from the left, look for the first element greater than the pivot
                    System.out.println("i: " + i);
                }
                while (array[k] > pivot && k >= start && k >= i) {
                    k--; // from the right, look for the first element not greater than the pivot
                    System.out.println("k: " + k);
                }
                if (k > i)               // if the left scan index is still smaller than the
                    swap(array, i, k);   // right index, swap the corresponding elements
            }
            swap(array, start, k); // after the indices cross, swap the last element
                                   // in the left partition with the pivot

            quickSort(array, start, k - 1); // quicksort the left partition
            quickSort(array, k + 1, end);   // quicksort the right partition
        }
        else // if there is only one element in the partition, do not do any sorting
        {
            return; // the array is sorted, so exit
        }
    }

    public void swap(int array[], int index1, int index2) {
        int temp = array[index1];      // store the first value in a temp
        array[index1] = array[index2]; // copy the value of the second into the first
        array[index2] = temp;          // copy the value of the temp into the second
        for (int x : array) System.out.print(x + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        Demo q = new Demo();
        int[] myArray = { 5, 4, 10, 11, 9, 8, 1 };
        q.quickSort(myArray);
    }
}
Quick Sort Partition Time To partition an array of n elements takes n-1 comparisons and n moves in the worst case. So, the time required for partition is O(n).
Worst-Case Time In the worst case, the pivot divides the array each time into one big subarray, with the other empty. The size of the big subarray is one less than that of the array just divided, so the O(n) partitioning occurs n-1 times. Worst-case time:
(n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2 = O(n^2)
Best-Case Time In the best case, the pivot divides the array each time into two parts of about the same size, so we partition log n times. Since the O(n) partitioning occurs log n times, quicksort is O(n log n) in this case.
Average-Case Time On average, the pivot will divide the array into two parts that are neither the same size nor one empty. Statistically, the sizes of the two parts are very close, so the average time is O(n log n). The exact performance depends on the data.
Bucket Sort All sort algorithms discussed so far are general sorting algorithms that work for any type of key (e.g., integers, strings, and any comparable objects). These algorithms sort the elements by comparing their keys. The lower bound for general sorting algorithms is O(n log n), so no sorting algorithm based on comparisons can perform better than O(n log n). However, if the keys are small integers, you can use bucket sort without having to compare the keys.
Bucket Sort The bucket sort algorithm works as follows. Assume the keys are in the range from 0 to N-1. We need N buckets labeled 0, 1, ..., and N-1. If an element's key is i, the element is put into bucket i. Each bucket holds the elements with the same key value. You can use an ArrayList to implement a bucket. Bucket sort takes O(n + N) time, which is effectively O(n) when N is small relative to n.
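A bucket sort sketch under the assumptions above, with keys in the range 0 to N-1 (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class BucketSortDemo {
    // Sorts keys known to lie in 0..maxKey without comparing them:
    // drop each key into its bucket, then read the buckets out in order.
    public static int[] bucketSort(int[] keys, int maxKey) {
        List<List<Integer>> buckets = new ArrayList<>();
        for (int b = 0; b <= maxKey; b++) buckets.add(new ArrayList<>());
        for (int key : keys) buckets.get(key).add(key); // bucket i holds keys equal to i
        int[] out = new int[keys.length];
        int k = 0;
        for (List<Integer> bucket : buckets)
            for (int key : bucket) out[k++] = key;
        return out;
    }

    public static void main(String[] args) {
        int[] sorted = bucketSort(new int[]{3, 1, 4, 1, 5, 9, 2, 6}, 9);
        System.out.println(java.util.Arrays.toString(sorted)); // [1, 1, 2, 3, 4, 5, 6, 9]
    }
}
```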
Common Recurrence Relations We have now seen several recurrences and the complexities they lead to:
T(n) = T(n/2) + O(1) gives O(log n) (binary search)
T(n) = T(n-1) + O(n) gives O(n^2) (selection sort)
T(n) = 2T(n/2) + O(n) gives O(n log n) (merge sort; best and average cases of quicksort)
Comparing Common Growth Functions
O(1): constant time
O(log n): logarithmic time
O(n): linear time
O(n log n): log-linear time
O(n^2): quadratic time
O(n^3): cubic time
O(2^n): exponential time
.jar files .jar files are used for distributing Java applications and libraries. The file format is .zip, but the extension .jar identifies them as Java archives. They contain bytecode (.class files) and any other files from the application or library (like images or audio files), and can also contain source code. The JDK contains command-line tools for making jar files, but this is easier to do with Eclipse. Jar files may be executable, meaning that they are configured to launch the main() of some class contained in the jar.
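For reference, the JDK command-line workflow might look like this (a sketch; app.jar, demos.Demo, and the bin/ directory are hypothetical names for this example):

```shell
# Package compiled .class files from bin/ into an executable jar whose
# entry point is the class demos.Demo:
#   c = create, f = output file name, e = entry point (main class)
jar cfe app.jar demos.Demo -C bin .

java -jar app.jar   # run the executable jar
jar tf app.jar      # list the archive's contents
```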