
Lecture 9 CS203

Math Ahead!

- The rest of this lecture uses a few math principles that you learned in high school but may have forgotten. Do not worry (too much) if your math background is shaky. We introduce mathematical material at a gentle pace. When I started working on my MS here, I had not taken a math class in 25 years, and I managed to learn this material. You can too.
- On the other hand, if you want to study this material in more detail, you will not be disappointed. You just have to wait until you take CS312.

Summations

Summation is the operation of adding a sequence of numbers; the result is their sum or total. Summation is designated with the Greek letter sigma (∑). For example,

    ∑ i   (start from i = 1; iterate through values of i; stop at 100)

This summation means "the sum of all integers between 1 and 100 inclusive."

Useful Mathematical Summations

    1 + 2 + 3 + … + (n−1) + n = n(n+1)/2

    a⁰ + a¹ + a² + … + aⁿ = (a^(n+1) − 1) / (a − 1),  for a ≠ 1

    2⁰ + 2¹ + 2² + … + 2ⁿ = 2^(n+1) − 1

The first summation on the previous slide would usually be expressed this way:

    ∑ i  (i = 1 to n)  =  n(n+1)/2

For n = 100, the value of the summation is 5050, since 100(101)/2 = 10100/2 = 5050.
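To convince yourself of the closed form, here is a minimal Java sketch (class and variable names are mine, not from the lecture) comparing a direct loop with n(n+1)/2:

public class SummationDemo {
    public static void main(String[] args) {
        int n = 100;
        int total = 0;
        for (int i = 1; i <= n; i++) { // sum 1 + 2 + ... + n directly
            total += i;
        }
        int closedForm = n * (n + 1) / 2; // n(n+1)/2
        System.out.println(total + " == " + closedForm); // prints 5050 == 5050
    }
}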

Logarithms

- The logarithm of a number is the exponent to which another number, the base, must be raised to yield the number.
- Here is the notation: log_b(y) = x, where b is the base. The parentheses are usually left out in practice.
- Examples:
  - log₂ 8 = 3
  - log₁₀ 10,000 = 4
- The word "logarithm" was coined (by an early-modern mathematician) from Greek roots and means roughly "number reasoning." Interestingly (to me, anyway), it is completely unrelated to its anagram "algorithm," which is derived from a Latin version of an Arabic version of the name of the medieval Persian mathematician al-Khwarizmi.

Logarithms

- In other fields, log without a stated base is understood to refer to log₁₀ or logₑ.
- In CS, log without further qualification is understood to refer to log₂, pronounced "log base 2" or "binary logarithm." The base is not important in comparing algorithms but, as you will see, the base is almost always 2 when we are calculating the complexity of programming algorithms.
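Java's standard library has no log₂ method, but the change-of-base identity log₂ x = ln x / ln 2 makes it a one-liner. A minimal sketch (the class and method names are mine):

public class LogDemo {
    // Binary logarithm via the change-of-base identity: log2(x) = ln(x) / ln(2)
    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        System.out.println(log2(8));    // 3.0
        System.out.println(log2(1024)); // 10.0
    }
}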

Recurrence Relations

- A recurrence relation is a rule by which a sequence is generated.
- E.g., the sequence 5, 8, 11, 14, 17, 20, … is described by the recurrence relation

    a₀ = 5
    aₙ = aₙ₋₁ + 3

- Divide-and-conquer algorithms are often described in terms of recurrence relations.
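As a quick illustration, here is a minimal Java sketch (names are mine) that generates the first few terms of that sequence directly from the recurrence:

public class RecurrenceDemo {
    public static void main(String[] args) {
        int a = 5; // a0 = 5
        for (int n = 0; n < 6; n++) {
            System.out.print(a + " "); // prints 5 8 11 14 17 20
            a = a + 3; // a(n) = a(n-1) + 3
        }
    }
}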

Analyzing Binary Search

- Binary search searches an array or list that is sorted.
- In each step, the algorithm compares the search key with the key of the middle element of the array. If the keys match, a matching element has been found and its index, or position, is returned.
- Otherwise, if the search key is less than the middle element's key, the algorithm repeats its action on the subarray to the left of the middle element or, if the search key is greater, on the subarray to the right.
- If the subarray remaining to be searched is empty at any step, the key cannot be found in the array and a special "not found" indication is returned.
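A minimal iterative Java sketch of the algorithm just described (names are mine; −1 serves as the "not found" indication):

public class BinarySearch {
    // Returns the index of key in the sorted array, or -1 if it is absent.
    public static int binarySearch(int[] list, int key) {
        int low = 0;
        int high = list.length - 1;
        while (low <= high) { // the subarray list[low..high] is still non-empty
            int mid = (low + high) / 2; // for very large arrays, use low + (high - low) / 2 to avoid overflow
            if (key == list[mid]) {
                return mid;        // keys match: found
            } else if (key < list[mid]) {
                high = mid - 1;    // continue in the left half
            } else {
                low = mid + 1;     // continue in the right half
            }
        }
        return -1; // subarray is empty: not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 4, 8, 9, 10, 11};
        System.out.println(binarySearch(sorted, 9)); // 3
        System.out.println(binarySearch(sorted, 7)); // -1
    }
}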

Logarithm: Analyzing Binary Search

Each iteration of binary search contains a fixed number of operations, denoted by c. Let T(n) denote the time complexity for a binary search on a list of n elements. Since we are studying the rate of growth of execution time, we define T(1) to equal 1. Assume n is a power of 2; this makes the math simpler and, if it is not true, the difference is trivial. Let k = log n; in other words, n = 2^k.

Since binary search eliminates half of the remaining input in each iteration, the CS-style recurrence relation is:

    T(n) = T(n/2) + c
         = T(n/2²) + 2c
         = …
         = T(n/2^k) + kc
         = T(1) + c log n
         = 1 + c log n

Logarithmic Time

- Ignoring constants and smaller terms, the complexity of the binary search algorithm is O(log n). An algorithm with O(log n) time complexity is called a logarithmic algorithm.
- The base of the log is 2, but the base does not affect a logarithmic growth rate, so it can be omitted.
- The time to execute a logarithmic algorithm grows slowly as the problem size increases. If you square the input size, the time taken merely doubles, since log(n²) = 2 log n.

Sorting

Sorting is a classic subject in computer science. There are three reasons for studying sorting algorithms.
- First, sorting algorithms illustrate many creative approaches to problem solving that can be applied to other problems.
- Second, sorting algorithms are good for practicing fundamental programming techniques using selection statements, loops, methods, and arrays.
- Third, sorting algorithms are excellent examples to demonstrate algorithm performance.

Sorting

These sorting algorithms apply to sorting any type of object, as long as we can find a way to order the objects. For simplicity, though, we will first sort numeric values, then more complex objects. When we sort more complex objects, we will sort them by some key. For example, if class Student has instance variables representing CIN, GPA, and name, we will sort according to one of these or some combination of them. We have already done this with the priority queue and other examples.

Sorting

Arrays and lists are reference types; that is, the variable we pass between methods does not contain the array or list, but a reference to it. Therefore, you can sort an array or list in Java with a void method that takes the reference variable and sorts the elements without returning anything. All other references to the data structure can still be used to access the sorted structure. On the other hand, you can copy all the elements, construct a new sorted list, and return it. This practice may become more common in the near future for reasons you will learn about in a few weeks.

Bubble Sort

- Recall that bubble sort repeatedly iterates through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order.
- This iteration is repeated until no swaps are needed, which indicates that the list is sorted.
- The algorithm gets its name from the way smaller elements "bubble" to the top of the list.
- Text adapted from Wikipedia
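A minimal Java sketch of the version just described, including the early exit when a pass makes no swaps (names are mine):

public class BubbleSort {
    public static void bubbleSort(int[] list) {
        boolean needNextPass = true;
        for (int k = 1; k < list.length && needNextPass; k++) {
            needNextPass = false; // assume sorted until a swap proves otherwise
            // after pass k, the k largest elements are in place at the end
            for (int i = 0; i < list.length - k; i++) {
                if (list[i] > list[i + 1]) {
                    int temp = list[i]; // swap the out-of-order adjacent pair
                    list[i] = list[i + 1];
                    list[i + 1] = temp;
                    needNextPass = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 4, 10, 11, 9, 8, 1};
        bubbleSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [1, 4, 5, 8, 9, 10, 11]
    }
}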

Bubble Sort

The number of comparisons is always at least as large as the number of swaps. Therefore, in studying the time complexity, we count the comparisons. The largest key always floats to its correct position at the right end in the first pass, the next largest rises to the next position in the next pass, and so on.

Bubble Sort

Recall that we are estimating the effect of growth in n, not the exact number of CPU cycles we need. In the worst case, the number of comparisons is:

    (n−1) + (n−2) + … + 2 + 1 = n(n−1)/2 = n²/2 − n/2

We do not care about the division by 2, since this does not affect the rate of growth. As n increases, the lower-order term n/2 is dominated by the n² term, so we also disregard the lower-order term.

Bubble sort time: O(n²)

In the best case, the data is already sorted, so we only need one pass, making bubble sort O(n) in that case.

Selection Sort and Insertion Sort

- These two sorts grow sorted sublists one element at a time.
- Selection sort finds the lowest value and moves it to the bottom (or finds the largest element and moves it to the top), then repeats the process for the rest of the values, over and over, until the list is sorted.
- Insertion sort takes one value at a time and places it in the correct spot in the sorted sublist, the same way most people would sort a hand of cards.

Analyzing Selection Sort

The number of comparisons in selection sort is n−1 for the first iteration, n−2 for the second iteration, and so on. Let T(n) denote the complexity of selection sort and c denote the total number of other operations in each iteration. So,

    T(n) = (n−1) + c + (n−2) + c + … + 2 + c + 1 + c
         = n(n−1)/2 + c(n−1)
         = n²/2 − n/2 + cn − c

Ignoring constants and smaller terms, the complexity of selection sort is O(n²).
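A minimal Java sketch of selection sort (this version moves the smallest remaining value to the front of the unsorted part; names are mine):

public class SelectionSort {
    public static void selectionSort(int[] list) {
        for (int i = 0; i < list.length - 1; i++) {
            int minIndex = i; // find the smallest element in list[i..n-1]
            for (int j = i + 1; j < list.length; j++) {
                if (list[j] < list[minIndex]) {
                    minIndex = j;
                }
            }
            if (minIndex != i) { // swap it into position i
                int temp = list[i];
                list[i] = list[minIndex];
                list[minIndex] = temp;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 4, 10, 11, 9, 8, 1};
        selectionSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [1, 4, 5, 8, 9, 10, 11]
    }
}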

Analyzing Insertion Sort

- Where selection sort always inserts an element at the end of the sorted sublist, insertion sort requires inserting some elements in arbitrary places.
- At the kth iteration, inserting an element into a sorted sublist of size k may take k comparisons to find the insertion position and, if the structure we are sorting is an array, k moves to insert the element. Let T(n) denote the complexity of insertion sort and c denote the total number of other operations, such as assignments, in each iteration. So,

    T(n) = (2 + c) + (2·2 + c) + … + (2(n−1) + c)
         = 2(1 + 2 + … + (n−1)) + c(n−1)
         = n² − n + cn − c

The comparison-and-move terms are just twice the corresponding values for selection sort. Ignoring constants and smaller terms, the complexity of the insertion sort algorithm is O(n²).
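And a minimal Java sketch of insertion sort, shifting larger elements right to open up the insertion position (names are mine):

public class InsertionSort {
    public static void insertionSort(int[] list) {
        for (int i = 1; i < list.length; i++) {
            int currentElement = list[i]; // next value to insert into the sorted sublist list[0..i-1]
            int k;
            // shift elements greater than currentElement one slot to the right
            for (k = i - 1; k >= 0 && list[k] > currentElement; k--) {
                list[k + 1] = list[k];
            }
            list[k + 1] = currentElement; // drop it into the open position
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 4, 10, 11, 9, 8, 1};
        insertionSort(data);
        System.out.println(java.util.Arrays.toString(data)); // [1, 4, 5, 8, 9, 10, 11]
    }
}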

Quadratic Time

- An algorithm with O(n²) time complexity is called a quadratic algorithm.
- Algorithms with nested loops are often quadratic.
- A quadratic algorithm's expense grows quickly as the problem size increases. If you double the input size, the time for the algorithm is quadrupled.

Sorting

- We often teach bubble sort, selection sort, and insertion sort first because they are easy to understand. Other sort methods are more efficient in average and worst cases.
- In particular, there are sort algorithms with O(n log n) complexity; in other words, the consumption of CPU cycles grows proportionally to n times the log of n. These algorithms usually involve performing an operation that is O(log n) n times.
- Since log n grows much more slowly than n, this is a dramatic improvement over O(n²):
  - If n = 2, n² = 4 and n log n = 2
  - If n = 100, n² = 10,000 and n log n = 664
  - If n = 10,000, n² = 100,000,000 and n log n = 132,877

Merge Sort

Merge sort is a divide-and-conquer algorithm, like the Towers of Hanoi algorithm:

mergeSort(list):
    if list has fewer than two elements, it is already sorted: return it
    split list into firstHalf and secondHalf
    firstHalf = mergeSort(firstHalf)
    secondHalf = mergeSort(secondHalf)
    list = merge(firstHalf, secondHalf)

merge(firstHalf, secondHalf):
    add the lesser of firstHalf[0] and secondHalf[0] to the new, larger list
    repeat until one of the (already sorted) sublists is exhausted
    add the rest of the remaining sublist to the larger list

Merge Sort

Merge Two Sorted Lists
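A minimal Java sketch of the merge step and the full recursive sort built on it (names are mine):

import java.util.Arrays;

public class MergeSort {
    public static void mergeSort(int[] list) {
        if (list.length > 1) { // base case: a 0- or 1-element list is already sorted
            int[] firstHalf = Arrays.copyOfRange(list, 0, list.length / 2);
            int[] secondHalf = Arrays.copyOfRange(list, list.length / 2, list.length);
            mergeSort(firstHalf);
            mergeSort(secondHalf);
            merge(firstHalf, secondHalf, list);
        }
    }

    // Merge two sorted arrays into temp.
    public static void merge(int[] list1, int[] list2, int[] temp) {
        int i = 0, j = 0, k = 0;
        while (i < list1.length && j < list2.length) {
            // repeatedly take the lesser of the two front elements
            temp[k++] = (list1[i] <= list2[j]) ? list1[i++] : list2[j++];
        }
        while (i < list1.length) temp[k++] = list1[i++]; // copy any leftovers
        while (j < list2.length) temp[k++] = list2[j++];
    }

    public static void main(String[] args) {
        int[] data = {5, 4, 10, 11, 9, 8, 1};
        mergeSort(data);
        System.out.println(Arrays.toString(data)); // [1, 4, 5, 8, 9, 10, 11]
    }
}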

Merge Sort Time

Assume n is a power of 2; this assumption makes the math simpler, and if n is not a power of 2, the difference is trivial.

Merge sort splits the list into two sublists, sorts the sublists using the same algorithm recursively, and then merges the sublists. Each recursive call merge sorts half the list, so the depth of the recursion is the number of times you need to split n to get lists of size 1, which is log n. The single-item lists are, obviously, sorted. Merge sort then reassembles the list in log n levels of merging, just as it broke the list down.

Across all the sublists at one level of recursion, merging takes at most n−1 comparisons to compare the elements from the two subarrays and n moves to move elements to the new array; the total size of all the sublists at a level is n, the original size of the unsorted list. So the total merge time per level is 2n−1, which is O(n). This happens log n times. Thus, merge sort is O(n log n).

Merge Sort Time

Here it is again, but with more math. Let T(n) denote the time required for sorting an array of n elements using merge sort. Without loss of generality, assume n is a power of 2. The merge sort algorithm splits the array into two subarrays, sorts the subarrays using the same algorithm recursively, and then merges the subarrays. So,

    T(n) = T(n/2) + T(n/2) + mergetime

The first T(n/2) is the time for sorting the first half of the array and the second T(n/2) is the time for sorting the second half.

Merge Sort Time

To merge two subarrays takes at most n−1 comparisons to compare the elements from the two subarrays and n moves to move elements to the temporary array. So the merge time is 2n−1, and:

    T(n) = 2T(n/2) + 2n − 1
         = 2(2T(n/4) + 2(n/2) − 1) + 2n − 1
         = 2²T(n/2²) + 2(2n) − (2 + 1)
         = …
         = 2^k T(n/2^k) + 2kn − (2^(k−1) + … + 2 + 1)
         = 2^k T(n/2^k) + 2kn − (2^k − 1)

With k = log n, we have 2^k = 2^(log n) = n and T(1) = 1, so:

    T(n) = n + 2n log n − n + 1 = 2n log n + 1 = O(n log n)

Notes: the geometric sum uses (a^k − 1)/(a − 1) with a = 2, so a − 1 = 1; and it is a subtractive term, so the −1 from the summation becomes +1 in the final expression.

Quick Sort

Quick sort, developed by C. A. R. Hoare (1962), works as follows:
- Select an element, called the pivot, in the array.
- Divide the array into two parts such that all the elements in the first part are less than or equal to the pivot and all the elements in the second part are greater than the pivot.
- Recursively apply the quick sort algorithm to the first part and then the second part.

Quick Sort

function quicksort(a)
    // an array of zero or one elements is already sorted
    if length(a) ≤ 1
        return a
    select and remove a pivot element pivot from a
    create empty lists less and greater
    for each x in a
        if x ≤ pivot then append x to less
        else append x to greater
    // two recursive calls
    return concatenate(quicksort(less), list(pivot), quicksort(greater))

The earliest version of quicksort used the first index as the pivot, and demos of quicksort often still do this for simplicity. However, on an already-sorted array, this causes the worst-case O(n²) behavior. The middle index is safer, although more complex solutions to this problem also exist.

Quick Sort

Quick Sort

How to accomplish the partition in a single array: search from the beginning of the list for the first element greater than the pivot, and from the end for the last element not greater than the pivot. When both are found, swap them. Continue until the list is partitioned. Finally, swap the last element in the left partition with the pivot.

package demos;

public class Demo {

    public void quickSort(int array[]) {
        for (int x : array) System.out.print(x + " ");
        System.out.println();
        quickSort(array, 0, array.length - 1);
    }

    public void quickSort(int array[], int start, int end) {
        int i = start; // index of left-to-right scan
        int k = end;   // index of right-to-left scan
        if (end - start >= 1) { // check that there are at least two elements
            int pivot = array[start]; // set the first element as pivot
            System.out.println("pivot " + pivot);
            System.out.println("i: " + i + " k: " + k);
            while (k > i) { // while the scan indices have not met
                while (array[i] <= pivot && i <= end && k > i) { // from the left, look for the first
                    i++;                                         // element greater than the pivot
                    System.out.println("i: " + i);
                }
                while (array[k] > pivot && k >= start && k >= i) { // from the right, look for the first
                    k--;                                           // element not greater than the pivot
                    System.out.println("k: " + k);
                }
                if (k > i)             // if the left scan index is still smaller than
                    swap(array, i, k); // the right index, swap the corresponding elements
            }
            swap(array, start, k); // after the indices cross, swap the last element
                                   // in the left partition with the pivot
            quickSort(array, start, k - 1); // quicksort the left partition
            quickSort(array, k + 1, end);   // quicksort the right partition
        } else { // if there is only one element in the partition, do not do any sorting
            return; // the array is sorted, so exit
        }
    }

    public void swap(int array[], int index1, int index2) {
        int temp = array[index1];      // store the first value in a temp
        array[index1] = array[index2]; // copy the value of the second into the first
        array[index2] = temp;          // copy the value of the temp into the second
        for (int x : array) System.out.print(x + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        Demo q = new Demo();
        int[] myArray = { 5, 4, 10, 11, 9, 8, 1 };
        q.quickSort(myArray);
    }
}


Quick Sort Partition Time

To partition an array of n elements takes n−1 comparisons and n moves in the worst case. So, the time required for partition is O(n).

Worst-Case Time

- In the worst case, each pivot divides the array into one big subarray, with the other empty.
- The size of the big subarray is one less than the size of the array just divided, so the O(n) partitioning occurs n−1 times.
- Worst-case time:

    (n−1) + (n−2) + … + 2 + 1 = n(n−1)/2 = O(n²)

Best-Case Time

- In the best case, each pivot divides the array into two parts of about the same size, so we partition log n times.
- Since the O(n) partitioning occurs log n times, quicksort is O(n log n) in this case.

Average-Case Time

- On average, the pivot will not divide the array into two parts of the same size, nor leave one part empty.
- Statistically, the sizes of the two parts are very close, so the average time is O(n log n). The exact performance depends on the data.

Bucket Sort

- All sort algorithms discussed so far are general sorting algorithms that work for any type of key (e.g., integers, strings, and any comparable objects). These algorithms sort the elements by comparing their keys.
- The lower bound for comparison-based sorting algorithms is O(n log n), so no sorting algorithm based on comparisons can perform better than O(n log n).
- However, if the keys are small integers, you can use bucket sort without having to compare the keys.

Bucket Sort

The bucket sort algorithm works as follows. Assume the keys are in the range from 0 to N−1. We need N buckets labeled 0, 1, …, and N−1. If an element's key is i, the element is put into bucket i. Each bucket holds the elements with the same key value. You can use an ArrayList to implement a bucket.

Bucket sort takes O(n + N) time, which is O(n) when N is small relative to n.
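A minimal Java sketch under the slide's assumption that every key is an int in the range 0 to N−1 (here the keys are the elements themselves; names are mine):

import java.util.ArrayList;

public class BucketSortDemo {
    // Sorts an array whose elements are all in the range 0..N-1.
    public static void bucketSort(int[] list, int N) {
        ArrayList<ArrayList<Integer>> buckets = new ArrayList<>();
        for (int i = 0; i < N; i++) {
            buckets.add(new ArrayList<>()); // one bucket per possible key
        }
        for (int e : list) {
            buckets.get(e).add(e); // an element with key i goes into bucket i
        }
        int k = 0;
        for (ArrayList<Integer> bucket : buckets) { // read the buckets back in order
            for (int e : bucket) {
                list[k++] = e;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 4, 10, 11, 9, 8, 1};
        bucketSort(data, 12); // keys are in 0..11
        System.out.println(java.util.Arrays.toString(data)); // [1, 4, 5, 8, 9, 10, 11]
    }
}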

Common Recurrence Relations

    T(n) = T(n/2) + O(1)            →  T(n) = O(log n)    (binary search)
    T(n) = T(n−1) + O(1)            →  T(n) = O(n)        (linear search)
    T(n) = 2T(n/2) + O(1)           →  T(n) = O(n)
    T(n) = 2T(n/2) + O(n)           →  T(n) = O(n log n)  (merge sort)
    T(n) = T(n−1) + O(n)            →  T(n) = O(n²)       (selection sort, insertion sort)
    T(n) = 2T(n−1) + O(1)           →  T(n) = O(2ⁿ)       (Towers of Hanoi)
    T(n) = T(n−1) + T(n−2) + O(1)   →  T(n) = O(2ⁿ)       (recursive Fibonacci)

Comparing Common Growth Functions

    O(1)        Constant time
    O(log n)    Logarithmic time
    O(n)        Linear time
    O(n log n)  Log-linear time
    O(n²)       Quadratic time
    O(n³)       Cubic time
    O(2ⁿ)       Exponential time

    O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ)

Comparing Common Growth Functions (chart of the growth curves)

.jar files

- .jar files are used for distributing Java applications and libraries.
- The file format is .zip, but the extension .jar identifies them as Java archives.
- They contain bytecode (.class files) and any other files from the application or library (like images or audio files), and can also contain source code.
- The JDK contains command-line tools for making jar files, but this is easier to do with Eclipse.
- Jar files may be executable, meaning that they are configured to launch the main() of some class contained in the jar.
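For example, a minimal sketch using the JDK's jar tool (the jar name, entry-point class, and bin directory are placeholders for your own project layout):

# create app.jar with demos.Demo as the entry point, from classes under bin/
jar cfe app.jar demos.Demo -C bin .

# run the executable jar
java -jar app.jar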
